Minimizing memory usage from scripts

OpenBOR itself will tell you just by turning on Debug. You can also look up the process in the Windows Task Manager (Ctrl + Shift + Esc).

Note, however, that this isn't entirely accurate across the board. Different ports may use a bit more or less memory to run. Also note the engine does its best to manage memory depending on how a module is built, so memory usage may vary wildly from one part of your game to the next.

DC
 
Of course, I thought there was another way inside OpenBOR, thanks.
I just checked and I see that it's using 165 MB.
I'm trying to implement this method to decrease memory usage, but I still find it a bit confusing what I should actually do.
 
I have the "escript.c" as animationscript in all my enemies. And in this script I have a #import. The procedure is the same as with the .h file of the example of Plombo?
 
Scripts in OpenBOR use a ton of memory, far more than they should. This is one of the engine's major shortcomings. But there are ways to mitigate this issue.

It's common to use the same animationscript for several models in a game. There's nothing inherently wrong with this practice; code reuse is good! But every time you load a model using that animationscript, OpenBOR recompiles it and gives it its own storage. In other words, if you have 20 characters using the same animationscript, the contents of that animationscript will be stored in memory 20 times!

There's an easy way around this, though. I'll demonstrate using magggas' fantastic, recently released Double Dragon Reloaded. 113 models in the game all use the animationscript data/scripts/ani0020.h. So let's rename ani0020.h to ani0020_actual.h and create a new file named ani0020.h with this one line inside of it:
Code:
#import "data/scripts/ani0020_actual.h"

The #import directive ensures that the script is only compiled and stored in memory once, instead of 113 times. And that's all! No further changes are needed. Just this one change brings the memory usage of Double Dragon Reloaded down from 118.6 MB to 52.5 MB, according to the Windows system monitor.
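
Note that the model headers themselves don't need to change at all; every model keeps pointing at the same path, e.g.
Code:
animationscript data/scripts/ani0020.h
The engine still compiles the little wrapper file for each model, but the wrapper is just one #import line, so the actual script body behind it only gets compiled and stored once.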


You can also do this with other kinds of scripts used by several models, with the caveat that script types other than animationscript have a main() function, which the engine won't recognize if it's imported. The workaround is to rename main() to actual_main() in scriptname_actual.c and then have this in scriptname.c:
Code:
#import "data/scripts/scriptname_actual.c"

void main()
{
    actual_main();
}

Or, if your main() function is already small, you can just move all of the other functions to the new file and leave main() where it is.
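For example, here's a rough sketch of that second option (the file names, checkHealth(), and the health check are just made-up placeholders, and it assumes the script type sets the "self" local variable the way entity scripts do):
Code:
// data/scripts/scriptname_actual.c - all the shared helper functions move here
void checkHealth(void self)
{
    // example helper: remove the entity once its health runs out
    if(getentityproperty(self, "health") <= 0)
    {
        killentity(self);
    }
}

// data/scripts/scriptname.c - the file the models keep pointing at
#import "data/scripts/scriptname_actual.c"

void main()
{
    checkHealth(getlocalvar("self"));
}
Since scriptname.c only holds main(), each model's copy stays tiny while the helpers are compiled once.
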
Sorry, I'm still a newbie, so I don't fully understand everything you wrote...
I noticed that every existing player character has these lines in its character file:
animationscript data/scripts/drago.c
ondrawscript data/scripts/shadowon.c

and drago.c has a lot of script in it.

If I understood your post correctly,

I basically just need to rename drago.c to drago_actual.c,
then create a new .c file and name it drago.c,
and add this line to the new drago.c file:
#import "data/scripts/drago_actual.c"

This way the engine only loads this new file, which has no real script in it, instead of the original file for each character?
Is that correct?

Thank you so much for sharing your knowledge!
 
Sorry, I'm still a newbie, so I don't fully understand everything you wrote...
I noticed that every existing player character has these lines in its character file:
animationscript data/scripts/drago.c
ondrawscript data/scripts/shadowon.c

and drago.c has a lot of script in it.

If I understood your post correctly,

I basically just need to rename drago.c to drago_actual.c,
then create a new .c file and name it drago.c,
and add this line to the new drago.c file:
#import "data/scripts/drago_actual.c"

This way the engine only loads this new file, which has no real script in it, instead of the original file for each character?
Is that correct?

Thank you so much for sharing your knowledge!

That's right. You understood correctly :)
 
IMO this is a bug and should be handled by the engine itself: index that script just once and assign it to every other entity that uses it, without actually making a clone of the same thing over and over. I assumed the engine already did this, but it doesn't? The fact that this workaround even works means the code is not optimized for stuff like this. It should check against the same filename and location to prevent stacking up clones of the same file over and over and filling memory. It's actually weird that it doesn't do that by default, because the obvious result is memory usage creep.
So instead of loading characterstuff.c many times, it would just load it once, and for all the other characters it would simply assign the same index as the first instance of that file.
 
IMO this is a bug and should be handled by the engine itself

It is not a memory leak or a bug. The #include directive works the same way it does anywhere else. It's a straight up copy+paste. Making it do otherwise would be a bug, and go against established behaviors familiar to anyone with a coding background.

The #import directive was added later. It's a non-standard directive that gives creators an alternate means to re-use functions. Think of it as a trade. For a couple of extra restrictions and a bit more complexity you get much better memory efficiency. This thread is just showing you a quick and dirty way to use it.
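
To make the difference concrete, here's a rough sketch (the common.c file and the heal() helper are made up for illustration):
Code:
// data/scripts/common.c - hypothetical shared helper
void heal(void target, int amount)
{
    int hp = getentityproperty(target, "health");
    changeentityproperty(target, "health", hp + amount);
}

// Script A: #include "data/scripts/common.c"
// copy+paste - the text of common.c is compiled again as part of Script A

// Script B: #import "data/scripts/common.c"
// shared - common.c is compiled once and heal() is simply callable from Script B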

index that script just once and assign it to every other entity that uses it, without actually making a clone of the same thing over and over. I assumed the engine already did this, but it doesn't?

It DOES index the scripts. Scripts are loaded separately for each model, but not for each entity. When entities spawn they do get their own script instance for variables, but the code only exists at model level.

because the obvious result is memory usage creep.

That's not true either. Scripts are just another part of the model, and they're deleted when the model is. A script needs to be unique to the model by default or it wouldn't work properly. Loading a new model is always going to use memory, so why is the script portion a memory leak?

As far as the amount of memory a given script uses, that's another discussion. There's a whole crapton of technical reasons behind it. Most of them come down to making OpenBOR Script a simpler language to use, plus backward compatibility. That's why I get so frustrated when people howl over and over for simple and easy... they have NO perspective. You can't even imagine how much easier OpenBOR Script is than C and similar languages, but that does come at a resource cost.

In any case, I'm not a big fan of this thread because it kind of gives off a bad vibe. I'm all about squeezing every drop of memory, but at the same time, there should be some perspective. OpenBOR is a very light application overall, one of the best in the business for what you get out of it. I don't know of a single functional OpenBOR game that consumes more than 300MB. Most never get over 100MB. It's like we're arguing over taking five grains of sand from the beach instead of two.

DC
 
I'm all about squeezing every drop of memory, but at the same time, there should be some perspective. OpenBOR is a very light application overall, one of the best in the business for what you get out of it. I don't know of a single functional OpenBOR game that consumes more than 300MB. Most never get over 100MB. It's like we're arguing over taking five grains of sand from the beach instead of two.

DC
Yeah, it would make sense to try to squeeze out every bit of memory usage if the memory available on most devices had stayed the same as it was some 10-20 years ago. But with the amount of memory available now, it seems pointless.
 
I personally think it's a weird practice to create mostly empty files just to put a link to other files in them, purely to save memory.
Maybe it's because I did encounter slowdowns in the engine even with pretty decent CPUs. Some things definitely need improvement, and that doesn't mean I just came back to crap on the devs, but some things can really cripple a game if you want to go beyond low-resolution games.
But I'm one of maybe three guys who got affected by this, so in the end I think it's a low priority for the devs to bother changing it.
It's when I see ways like this to "save memory" that I'm surprised it wasn't made default engine behaviour under the hood, so that we wouldn't have to make these fake files with paths to the actual files.
Scripts are encouraged, great, but nobody tells you that your game suddenly runs at 20fps because of one extra script.
While some people view those tricks as something great, I think it's sad it's not automated under the hood.
I had a long break from gaming in general, made some short games for my 3-year-old daughter, and I've been building assets for future games for a few years now. I'm just torn between OpenBOR and Godot as an engine choice. Why? Because I ran into a bug that made me abandon a feature I wanted to have in a game, and I'm not experienced or willing enough to learn C to fix it on my own. I mostly like animating and creating visuals, which is pretty obvious in my mods.

So... "directive ensures that the script is only compiled and stored in memory once, instead of 113 times" - this is pretty strange. Why would an engine even allow loading the same file 113 times, same location, same name? That is what I don't understand; it should do it just once.

In the end, despite my blabbing about fake files, I will use this method in the future; there's no alternative.
 
Maybe it's because I did encounter slowdowns in the engine even with pretty decent CPUs. Some things definitely need improvement, and that doesn't mean I just came back to crap on the devs, but some things can really cripple a game if you want to go beyond low-resolution games.

We've talked about this before. There are all kinds of games that have layers and layers, and layers after that with no problem. Plenty of HD games too. There's that Mass Effect one and it's over 10 years old. Spams the screen with HD blended effects and doesn't slow down a bit.

There must be something specific in your games that's causing it. I don't mean that as a knock on you at all. Everybody here knows you're one of the best creators around. I'm just saying that the nature of computers means things we think are tough on them rarely are, and things that should be simple can choke them like a Polish sausage.

You can have 50 entities on screen, dozens of layers, lots of blending, all in HD, and a ten-year-old econobox will laugh at it because that's just Additive Growth. The machine either has the resources to run them or it doesn't, and your approach to that limit is a constant vector. IOW, more stuff is just more stuff.

It's when you accidentally create Multiplicative Growth that the machine dies. Look at this population chart. We can imagine the population size as general resource consumption, where 35 is the point it can no longer maintain a 60+ framerate.

[Image: population growth chart]


The trick is finding where those multiplicative loads are. Note that some things are unavoidable multipliers, like resolution increases. Those are known up front - you either have the resources or you don't. I'm talking about runaway multipliers that grind the machine down. Most of the time they're very small and difficult to find because we as humans don't see them as difficult. The best example I can think of is the Tower of Hanoi. This is a super simple algorithm used in coding classes. It's just moving disks. You or I can solve a small one by hand in 10 seconds. But every time you add just one disk, the machine load increases exponentially. It doesn't take many disks at all before the computer needs years to complete it.

Code:
#include <stdio.h>

// Move a stack of disks (disk plus everything above it) from source to dest,
// using spare as scratch space. A coding-class classic.
void MoveTower(int disk, char source, char dest, char spare)
{
    if (disk == 0)
        printf("move disk 0 from %c to %c\n", source, dest);
    else
    {
        MoveTower(disk - 1, source, spare, dest);                    // clear the smaller disks out of the way
        printf("move disk %d from %c to %c\n", disk, source, dest);  // move this disk
        MoveTower(disk - 1, spare, dest, source);                    // stack the smaller disks back on top
    }
}
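For the record, moving a stack of n disks takes 2^n - 1 moves, so every single disk you add doubles the work. That's exactly the kind of multiplicative growth I mean.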

[Animations: Tower of Hanoi being solved]


Somewhere in your projects, something created one of those multiplicative loads. Maybe it's an engine bug that only gets triggered by conditions unique to your game. Maybe (my guess, and the most likely one) there's a bottleneck in one of the scripts. There's also a chance it could be huge areas of transparency in the layers. What it's almost certainly not is a general slowdown just from having lots of scripts or from putting stuff on the screen. FWIW, the engine already has all kinds of safeguards to prevent crashes and avoid runaway resource use, but we as devs can only do so much without limiting your creativity.

I hope that makes more sense.

DC
 
So... "directive ensures that the script is only compiled and stored in memory once, instead of 113 times" - this is pretty strange, why would an engine even allow to load the same file 113 times ,same location same name... that is what i dont understand, it should do it just once.
It's probably because the loading is tied to space/environment creation. By default there is a separate space/environment for each script, like when you launch two instances of the same program. You don't want space sharing to be the default, because you want the two instances of the program to be independent. Space sharing would create a lot of interference. For example, if two entities shared the exact same space, they would be fully synchronized and could not have a "life" of their own. So even if it is not optimized, you want separated spaces as the default.

You only want partial space sharing between instances of the same program, and by default OpenBOR can't know what should be shared and what should not (hence why, by default, nothing is shared). The "trick" mentioned in this topic basically lets you create "roads" between spaces. Two entities will still have their own separate and independent space, but it will be very small, containing only what makes each entity different from the others. And both of these small, specific entity spaces will be connected to a bigger, shared, abstract entity space.

So really, it's not a trick at all, it is the way it is presented that makes it look like a trick.

So instead of scriptXXX.h that imports scriptXXX_actual.h

See it as

script_specificXXX.h which imports scriptXXX_shared.h

The two files don't have the same purpose. If they had the same purpose, then yeah, that would be a trick.
 
@bWWd I redid the tests in your Bouli mod today and, same as before, I see that the slowdowns are caused by the high-resolution fglayers (1600x720) with alpha and bgspeed/vbgspeed enabled, since this module has most of its effects concentrated on fglayers, like the HUD and the particle system (the falling snow).

The point in this case is that even if we have many different ways to achieve the same result, some ways perform better than others, and the easiest way is not always the best way. For example, for HUDs I recommend using drawsprite, and for particle systems I sometimes had better results by spawning every particle individually as an entity and killing it at a certain point (like the rain system in SORX), as in the sketch below.
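As a rough sketch of the entity approach (the "snow" model name is a placeholder, and it assumes the particle model's own animation handles the falling and its removal):
Code:
// Spawn one snow particle as a regular entity at the given coordinates.
void spawnSnow(float fX, float fZ, float fA)
{
    void vParticle;

    clearspawnentry();               // start from a clean spawn entry
    setspawnentry("name", "snow");   // placeholder model name loaded in models.txt
    vParticle = spawn();             // create the entity

    if(vParticle)
    {
        changeentityproperty(vParticle, "position", fX, fZ, fA);
    }
}
A level or entity script can then call spawnSnow() periodically and killentity() each flake once it leaves the screen, instead of blending a huge fglayer every frame.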

Both resolution and particles affect performance in most games; it's not exclusive to OpenBOR. For example, I can't run The Witcher 3 on my PC on "Ultra" at 1920x1080 with all the particle systems at max; it's simply unplayable for me at around 15fps.
So, the heavier the game gets, the more optimization it needs.

Specifically about your Bouli game, there's a trick you can use to improve the performance of the fglayers. The performance drops while the snow sprite is being repeated across the screen, and bigger sprites repeat fewer times than small ones. So if you resize all the snow sprites to 4x the screen size (while keeping the snowflakes at their current size), the engine has fewer repetition cycles to run, saving CPU.
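Just to illustrate with made-up numbers: tiling a 1600x720 fglayer with a 400x360 snow sprite means roughly 4 x 2 = 8 alpha-blended blits per layer every frame, while a single sprite that already covers the whole layer needs only one, so the per-frame work drops accordingly.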

In the video below you can see how I gained around 40fps simply by resizing three snow sprites. Note that most fglayer effects are active and it still runs at around 100fps (well above the 60fps that is common for most games).


Below I made some tests going from video mode 6 (960x544) up to 4k resolutions, and as we can see the slowdowns only occur when the transparent fglayers are active and moving. With no fglayer active, OpenBOR runs fine even at 4k.


As for the #import directive, this is normal practice in programming languages anyway. Sometimes improving the native engine functions with a focus on easier game development can have a great unseen impact under the hood.
I totally understand your point, but in some cases a few script lines are less costly than changing the engine source code.
 
Yeah, I kinda get why there must be a copy of the scripts for each entity; I thought it was created on demand and not when loading assets...
Kratus, I really appreciate you taking the time to tinker with my stuff. I suspect it's the fglayer alpha code, because when you turn off alpha it runs much faster with the same number of fglayers.
I am not smart enough to tinker with the alpha implementation in the OpenBOR source code, but I think that's where the issue lies. It's just a 2D layer, yet it's pretty heavy, while GTA IV runs perfectly fine on the same CPU/GPU... go figure.
Two alpha fglayers is the absolute max in my experience.
The best solution is of course not to use them (it won't negatively affect gameplay at all because it's just decoration), and yeah, it's another instance of giving up on stuff because it makes the game unplayable on some systems (like phones or slower laptops).
If I can optimise it to run at a stable 60fps at least, then that's good.
Damn, I might try to do a demo at native 4k, I can do animations at that scale.
 
@bWWd

I think the issue is that you have a sprite that has large transparent areas AND alpha blending.

This is something you must avoid at all costs, and the engine you use won't matter. It's easy to assume the problem is with this or that engine because some AAA game has the effect, but that’s because you can't see all the workarounds and person hours they took to get it working.

I can guarantee you GTA's effects don't work by putting a big sheet over the screen and trying to process it with alpha blending.

I haven't looked at your module, but the snow effect is probably better off as a series of entities. You don't have to give up anything, you just have to approach it from a different angle.

This is something that's always been and always will be true because technology can't outpace mathematics.

DC
 
Kratus, I really appreciate you taking the time to tinker with my stuff
No problem buddy, I'm glad to help. Your sprites are amazing, I really want to see the Bouli game finished.

Damn, I might try to do a demo at native 4k, I can do animations at that scale.
I hadn't tested 4k in OpenBOR before, it's good to know that it works fine haha. I would like to see your sprites in 4k resolution too.
 
Are there any specific specs for running 4k sprites?
I have experience running OpenBOR on everything from an old dual-core PC without a GPU up to an i7 + GTX; so far the biggest difference between them is in video recording and loading times.
I don't have the knowledge, but switching between OpenGL and SDL has an effect depending on the specs.

That doesn't mean I want to do it, just asking for knowledge.
 
Are there any specific specs for running 4k sprites?

There's really no such thing as a sprite in OpenBOR or any other modern engine. That's just a term we've appropriated over time, and it's grossly misused. OpenBOR does everything in software so there's no technical limit at all. It's more of a soft ceiling you'll hit where the frame-rate starts to drop too much, and other than rough guesses I can't tell you where that is. Load is a matter of content and platform architecture, which outside of whatever hard-coded limits it might impose, an engine doesn't control. The best we can really say is "more".

At one extreme, if you're just pasting 3rd or 4th gen sprites on a larger canvas (ex. Big Blue), it won't cost much more than running any other res. At the other end, if you want to fill the screen with ultra-high-res sprites and detail, you'll have a limited audience. Again, there are just too many factors at play to give you hard specs. Do you really think game studios know "oh, if I use X number of ray casts and polygons, that means it will use this much memory and GPU"? Heck no. They have a rough idea obviously, but from there the designers and coders basically just experiment, play-test, and then fight each other until they land on a compromise of quality vs. frame-rate for their platform.

We just don't have any way at all to know what the load is because we're not the creators - you are.

DC
 