Memory: Physical, usable, available, virtual, swap and more
Memory (RAM) is the lifeblood of a computer. It not only holds the data you are processing (e.g. while you read this, the HTML that makes up this page), it also holds the active parts of any program you use (e.g. the browser you display this page in) and the parts of the operating system (e.g. Android, which is based on Linux) that allow you to start a program in the first place. And however much RAM you have, it is always in short supply - quite possibly the most sought-after resource of your computer (maybe in a close race with storage, such as a hard disk or an SD card). Even the fastest processor can't get much work done if there is insufficient memory to hold the programs and data it is supposed to work on.
Over the decades of progress in building software, a fairly universal way of dealing with memory has evolved that applies to your poor little Vega tablet just as much as it applies to a turbo-boosted multicore high-end PC workstation. It characterizes memory in a couple of categories:
- Physical: How much memory is built into your computer. For a Vega, this is 512MB.
- Usable: How much of the physical memory can actually be used by the operating system, as some might (and will) be lost to peripherals in your computer. On a Vega running VegaBean Beta 3.1, this is around 347MB, with most of the rest being gobbled up by the Nvidia graphics chip. On a modern laptop with, say, 6GB of RAM, a gig of which is in use for onboard graphics, plus maybe a fancy hard drive controller, running 32-bit Windows XP you could easily end up with only ca. 2.8GB usable.
- Available: How much of the usable memory can actually be made available to programs by the operating system on short notice. Again, this will typically be much less.
- Virtual: Now here comes a neat trick: All modern operating systems, assisted by hardware in the CPU, lay out a "map" of memory that is much bigger than usable memory. Applications can ask the operating system for a lot of memory (and by "a lot" I mean several times as much as is physically present), and the OS will pick free space in this map and assign it to the application.
As long as the app doesn't really use it all, it will simply not be backed by physical RAM - see it as white space on an old chart: we know there is something there, but we don't know or care what. That's why we call it virtual.
Only if the app starts using a chunk of this virtual memory will it be backed with physical RAM, and the moment the block (or "page" in the lingo) is no longer used, the real memory will be detached from the map and maybe used somewhere else.
- Swap: This is the last piece in the puzzle - think of the virtual memory map from above no longer (or only just barely) being able to assign real memory to all the pages that are actually used by apps: RAM is running out.
This is where swap space comes in: all modern OSes have algorithms to detect such "claimed but not used" memory pages and write their content to some other storage medium, thus freeing the physical RAM for reuse elsewhere. The moment the application wants to read or write this page of memory again, it is stopped cold, the data is loaded back from the storage medium into a free page of physical RAM (which in turn might need to be freed by storing its content on the storage medium), the page is attached to the correct spot in the memory map, and then the app is allowed to move on, finding its data exactly where the map says it should be and crunching on happily ever after.
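If you want to see these categories on your own device, a quick look at /proc/meminfo (or the output of free) shows most of them. Here is a minimal sketch, assuming a shell with busybox or a similar toolset on any Linux-based system such as Android - note that the MemAvailable line only exists on newer kernels, older ones just report MemFree and Cached:

```
# Usable RAM as seen by the kernel, free RAM and the swap totals
grep -E 'MemTotal|MemFree|MemAvailable|SwapTotal|SwapFree' /proc/meminfo

# busybox "free" gives a condensed summary of the same numbers
free

# Virtual vs. physical footprint of a single process (here: the shell itself):
# VmSize = how much of the virtual "map" the app has claimed,
# VmRSS  = how much of that is actually backed by physical RAM right now
grep -E 'VmSize|VmRSS' /proc/self/status
```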
So what is this zRAM thingy supposed to do?
The nifty thing is that it tries to combine the speed and durability of RAM with (part of) the memory size extension of swap. It works this way:
- First, a part of the physical RAM is taken away from the system and set aside. Yup. You start by sacrificing something like half of your memory.
- Second, this memory is used to simulate a storage medium like a hard disk or SD card in memory - this is called a VBD, short for virtual block device (disks, SD cards, etc. have in common that they store data in fixed-size blocks). The important part is that the blocks in this VBD are compressed with the LZO algorithm (that's where the "z" in "zRAM" comes from), so, depending on what data is stored (not everything can be compressed equally well), you can store more than 1MB of data in 1MB of RAM.
Of course this compression (and later decompression, when the data is loaded back) uses some CPU power, but many computers, including the Vega tablet, have more CPU oomph available overall than can be put to use with RAM being the limiting factor.
- Third, this VBD is used as a storage medium for swap. Again: yup. We now have a virtual block device (implemented in virtual memory) storing the carryovers of a virtual memory map. Wrap your head around that one!
Things start to make sense now: we use compressed RAM as swap for uncompressed RAM, so we are potentially able to store more data in RAM than would be possible without it - in fact we trade CPU power (of which we mostly have enough) against RAM (of which we don't have enough). Looks like a good deal to me.
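For the curious, this is roughly what such a setup looks like at the shell level. Treat it as a sketch only: the device node is /dev/block/zram0 on Android but /dev/zram0 on desktop Linux, the module path shown is hypothetical and device-specific, and on VegaBean the control script (/data/local/bin/zram.sh) does the equivalent for you.

```
# Load the zram module (it may already be built into the kernel);
# the .ko path below is hypothetical - adjust it for your device
insmod /system/lib/modules/zram.ko

# Tell the virtual block device how big it should appear: 100MB, in bytes
echo $((100 * 1024 * 1024)) > /sys/block/zram0/disksize

# Format the virtual block device as swap space and switch it on
mkswap /dev/block/zram0
swapon /dev/block/zram0

# Verify: the kernel now lists zram0 as an active swap device
cat /proc/swaps
```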
Practical use shows that the LZO compression consistently scores a compression ratio of ca. 300% with typical Vega workloads, so simple maths tells us that a 100MB zRAM swap has the potential to turn the ca. 350MB of usable RAM in the Vega into 350 - 100 + (300% of 100) = ca. 550MB. That's quite nice, even if the theoretical potential will never be fully realized in everyday use. Again, practical use shows that a balance of +100MB can quite easily be achieved. That's a boost of ca. 30% to the tablet's RAM, with the slowdown caused by (de)compression being manifestly overcompensated by the speedup that more available RAM brings.
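You can check what ratio your own workload achieves by reading the zram statistics from sysfs. A sketch, assuming a kernel of the Vega era that still exposes the individual orig_data_size/compr_data_size counters (newer kernels bundle them into a single mm_stat file):

```
# Bytes of data handed to the zram device (uncompressed view)
orig=$(cat /sys/block/zram0/orig_data_size)
# Bytes that data actually occupies after LZO compression
compr=$(cat /sys/block/zram0/compr_data_size)

# Ratio in percent: ca. 300 means 3:1, i.e. the "300%" quoted above.
# (Run this only after the swap has seen some use, otherwise both
# counters are still zero and the division makes no sense.)
echo "compression ratio: $((orig * 100 / compr))%"
```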
Parameters, settings and tuning
zRAM has a few knobs to fiddle with:
- The total size of RAM committed to compressed swap
- The number of VBDs used
- The so-called swappiness of the operating system kernel
Let's tackle them one after the other: While this is counterintuitive, the total size is not the biggest factor. This stems from the fact that zRAM doesn't take all the configured RAM away from the OS the moment you create the VBD, but only bit by bit, as it is actually needed. So being overgenerous with zRAM and not really filling it does not carry much of a performance penalty. Reasonable values for the Vega are 100-200MB; less is conservative.
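You can watch this lazy allocation yourself: the configured disksize is only an upper bound, while the RAM the device really holds on to is reported separately. Again a sketch, assuming the older per-attribute sysfs counters (mem_used_total was later folded into mm_stat):

```
# The configured upper bound, in bytes (what you asked for)
cat /sys/block/zram0/disksize

# The physical RAM the zram device actually occupies right now -
# directly after setup this is close to zero, however big disksize is
cat /sys/block/zram0/mem_used_total
```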
The number of zRAM devices makes a huge difference: every VBD is tied to a thread in the OS kernel, a structure that can only run on one CPU core. Now, the Vega has a dual-core CPU, and using a single zRAM device will lead to situations where one CPU core wants access to zRAM and has to wait for the other to provide it. 150MB of zRAM in a single device performs much worse than 2 devices with 75MB each.
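A two-device setup looks roughly like this - a sketch, assuming the zram module accepts the standard num_devices parameter and that your swapon supports the -p priority flag (busybox usually does). Giving both devices the same priority makes the kernel stripe swapped pages across them, so both cores can compress in parallel:

```
# Ask the module for two block devices instead of the default one
modprobe zram num_devices=2     # or: insmod zram.ko num_devices=2

# 75MB each instead of one 150MB device
echo $((75 * 1024 * 1024)) > /sys/block/zram0/disksize
echo $((75 * 1024 * 1024)) > /sys/block/zram1/disksize

mkswap /dev/block/zram0
mkswap /dev/block/zram1

# Equal priority -> the kernel distributes pages over both devices
swapon -p 10 /dev/block/zram0
swapon -p 10 /dev/block/zram1
```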
Strictly speaking, swappiness is not a parameter of zRAM but of the OS kernel - but given that on most Vegas zRAM will be the only swap in use, it becomes a zRAM parameter too. It controls how aggressively the OS will try to free up RAM pages in the memory map by putting their content into swap. So basically this tells the OS how much compress-and-swap work it should do. Since swapping to zRAM involves much less overhead than swapping to a hard disk, unusually high values of 80 and up (on a 0-100 scale) give the best results.
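Swappiness lives in the usual sysctl spot; a quick sketch for checking and (as root) raising it - note that a value set this way is not persistent, a boot script has to reapply it after every restart:

```
# Current value (the upstream Linux default is 60)
cat /proc/sys/vm/swappiness

# Raise it so the kernel swaps to zRAM more eagerly (root required)
echo 80 > /proc/sys/vm/swappiness

# ...or, where the sysctl tool is available:
sysctl -w vm.swappiness=80
```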
The zRAMconfig app allows you to change these values via the usual preferences menu. If you want to experiment, be advised that the default values are a bit on the conservative side: 40% for the size and 80 for the swappiness. Try 50% for the size and 90 for the swappiness if you feel like it - much depends on your workload. The "zRAM swap info" button on the top right of the app (or "/data/local/bin/zram.sh env" from a SU terminal) will give you a good picture of what is going on in your zRAM swap.
Remember though that zRAM parameters can't be changed while it is running: if you change values in the preferences, they will only be applied on the next zRAM startup (meaning you have to switch zRAM off and back on again to apply the new values).
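Under the hood, that off/on cycle corresponds to something like the following sketch (the zram.sh control script does the equivalent for you; it is shown here just to illustrate why a live resize isn't possible):

```
# Take the device out of use - the kernel moves all swapped pages
# back into regular RAM (or onto another swap device)
swapoff /dev/block/zram0

# Wipe the device so it will accept a new disksize
echo 1 > /sys/block/zram0/reset

# Re-create it with the new size and put it back into service
echo $((150 * 1024 * 1024)) > /sys/block/zram0/disksize
mkswap /dev/block/zram0
swapon /dev/block/zram0
```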
This has been a long post again - I hope you enjoyed it nevertheless. My third (and last) zRAM post will address developers: I will be talking about a few obstacles that had to be overcome for the VegaBean implementation of the control script and GUI app - this might be of interest to others exploring the glue zone between the Android UI and the underlying Linux kernel features.