Why does zram-config presume that no other service may have already created a zram disk? I have a few comments about zram-config, which I feel is extremely poor in implementation and presumes it will be the first service to create a zram device.
If the trouble can be taken to handle different kernel module parameter names:
# load dependency modules
NRDEVICES=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
if modinfo zram | grep -q ' zram_num_devices:' 2>/dev/null; then
  MODPROBE_ARGS="zram_num_devices=${NRDEVICES}"
elif modinfo zram | grep -q ' num_devices:' 2>/dev/null; then
  MODPROBE_ARGS="num_devices=${NRDEVICES}"
else
  exit 1
fi
modprobe zram $MODPROBE_ARGS
Then surely it could also do a simple check of whether num_devices is already non-zero, or just start from whatever devices currently exist.
I am also slightly confused, as the num_devices parameter is optional and tells zram how many devices should be pre-created (default: 1).
So the above isn't needed: a plain modprobe zram is enough.
So really the first check should be something like:-
# Check zram class created
ZRAM_SYS_DIR="/sys/class/zram-control"
if [ ! -d "${ZRAM_SYS_DIR}" ]; then
  modprobe zram
  # creates /dev/zram0,
  # as /sys/class/zram-control did not yet exist
fi
# Next zram device = /dev/zram${hot_add}, where hot_add=$(cat "${ZRAM_SYS_DIR}/hot_add")
Another bugbear is the inflexibility of mem=$(((totalmem / 2 / ${NRDEVICES}) * 1024)), with no way to change the 50% level; 75% is probably a better level, and using an integer factor divided by 100 escapes the need for floats. But why is it hard coded at all?
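To sketch what I mean, a configurable percentage keeps everything in integer arithmetic. MEM_FACTOR and the zram_mem helper are names I have invented here, not anything zram-config provides:

```shell
# Sketch: replace the hard-coded "/ 2" with an integer percentage.
# MEM_FACTOR and zram_mem are hypothetical, invented for illustration.
zram_mem() {
  # $1 = total memory in KiB, $2 = percentage, $3 = number of devices
  # prints bytes per device, integer arithmetic only
  echo $(( $1 * $2 / 100 / $3 * 1024 ))
}

MEM_FACTOR=75                        # was effectively 50, hard coded
totalmem=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
NRDEVICES=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
mem=$(zram_mem "$totalmem" "$MEM_FACTOR" "$NRDEVICES")
```

With MEM_FACTOR=50 and the same inputs it reproduces the original sizing exactly.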
The same goes for the compression algorithm: the candidates are listed in /proc/crypto, and again log2ram would likely gain at little expense from even LZF (if available) or deflate, but the choice should be available to the user.
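Making the choice configurable could be as simple as matching a preference list against what the kernel reports in /sys/block/zramN/comp_algorithm, which lists the supported algorithms with the active one in brackets, e.g. "lzo [lz4] deflate". The pick_alg helper and the preference list are my own illustration:

```shell
# Sketch: pick the first preferred algorithm the kernel supports.
# pick_alg is a hypothetical helper; the "lzo [lz4] deflate" line format
# (brackets marking the active choice) is the real comp_algorithm format.
pick_alg() {
  avail=$(printf '%s' "$1" | tr -d '[]')   # strip the [active] markers
  for alg in $2; do
    case " $avail " in
      *" $alg "*) echo "$alg"; return 0 ;;
    esac
  done
  return 1
}

# Usage on a live system (as root), e.g. preferring lz4 over lzo:
#   alg=$(pick_alg "$(cat /sys/block/zram0/comp_algorithm)" "lz4 lzo deflate")
#   echo "$alg" > /sys/block/zram0/comp_algorithm
```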
Also, with big/little cores and the plethora of CPU core arrangements, especially on Arm, which is the more likely architecture to be running zram, the number of devices (something like zram_drives=) should be configurable rather than tied to the core count.
zram-config is just a simple script, and I am not sure why it is quite so simplistic and flawed.
As a suggestion, maybe employ an /etc/ztab or /etc/zram/ztab to allow easy control with far more scope.
It would be so easy to provide fine-grained control of devices/blocks of a given size, file system and compression, and it is probably the only way to keep approximately to the $mem_factor across all drives. Or at least use the zram stats.
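For illustration, such a ztab could simply be whitespace-separated fields per device. The column layout and the parse_ztab helper below are entirely my own invention, not an existing format:

```shell
# Sketch of a hypothetical /etc/ztab, one line per zram device:
#   type  alg  mem_limit  disk_size  target    fs
#   swap  lz4  250M       750M       -         -
#   log   lzo  50M        150M       /var/log  ext4
# parse_ztab is a made-up helper that reads such lines from stdin.
parse_ztab() {
  while read -r type alg mem_limit disk_size target fs; do
    case "$type" in ''|\#*) continue ;; esac   # skip blanks and comments
    printf 'create: type=%s alg=%s mem_limit=%s disk_size=%s\n' \
      "$type" "$alg" "$mem_limit" "$disk_size"
  done
}
```

A real implementation would create and configure a device per line; here it just reports what it would do.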
For flash-based devices zram is still effective, and apps like log2ram also lower flash writes, but they need zram devices to be assigned earlier in startup.
If not the above 'ztab', then surely zram-config should at least check the num_devices parameter and what already exists before assigning, rather than overwriting an existing disk?
Its coding shows an obvious bias toward Intel architecture, where zram usage is generally minimal, and it is probably one of the worst implementations of a kernel technology currently in existence.
It shouldn't be called zram-config; maybe zram_make_
There is also one thing, and I cannot make up my mind whether I am reading this correctly.
https:/
If a device can have multiple concurrent streams, then why are we creating a device per CPU? In fact a device is not limited to a single stream, so unless there is only one CPU, why are we dividing into smaller partitioned blocks, one per core?
But to be honest I am bamboozled by the relevance of that to what it means, so it is a question, not a statement.
But if you read it one way then isn't a single larger device with multiple concurrent streams more efficient?
Or is max_streams a method to force serialisation of device(s) at one time?
But honestly, does a swap device need to be created per core, on any architecture? Does any swap device not support multiple streams?
Say with some dual-big/quad-little part, six devices may exist, but is max_streams on each device then set to 2?
Seems a bit pointless without core preference.
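For comparison, the single-larger-device alternative only takes a few sysfs writes: max_comp_streams sets the number of concurrent compression streams and must be written before disksize. The sizes below are arbitrary examples of mine, and the privileged writes are shown as comments:

```shell
# Sketch: one device with one compression stream per core, instead of one
# device per core. max_comp_streams and disksize are real zram sysfs
# attributes; the writes need root, so they are commented out.
STREAMS=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
# As root:
#   modprobe zram num_devices=1
#   echo "$STREAMS" > /sys/block/zram0/max_comp_streams   # before disksize
#   echo 1G > /sys/block/zram0/disksize
#   mkswap /dev/zram0 && swapon -p 100 /dev/zram0
echo "one device, $STREAMS compression streams"
```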
Again, that is not implemented, and nor is writeback, even if I am struggling to see how a daily quota could be used.
mem_limit is the most important factor, and it is ignored by zram-config: it caps the actual memory the drive may occupy, and it is that figure that should be the 50%. mem_limit conversely makes drive_size a slightly odd question, but it can be something like Mem_Limit*
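Both knobs do exist in sysfs, so sizing disksize from mem_limit is straightforward. In this sketch the multiplier of 3 is just an assumed compression ratio of mine, not anything zram-config defines:

```shell
# Sketch: cap real memory use with mem_limit and derive disksize from it.
# mem_limit and disksize are real zram sysfs attributes; the factor of 3
# is an assumption (an optimistic compression ratio). Writes need root.
MEM_LIMIT_MB=256
DISKSIZE_MB=$(( MEM_LIMIT_MB * 3 ))
# As root:
#   echo "${MEM_LIMIT_MB}M" > /sys/block/zram0/mem_limit
#   echo "${DISKSIZE_MB}M" > /sys/block/zram0/disksize
```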
Should echo "0" > /proc/sys/
Also vm.swappiness needs changing: with zram, the default of 60 presumes hard-drive media, which isn't there, and a value of around 80 seems to provide a balance.
The problem is that zram uses crypto compression and so incurs load, and it really should work by dynamically changing vm.swappiness in response to the load average.
If you have middle to low load, vm.swappiness=80-100 can reduce overall load by up to 20% on limited systems, by releasing memory via paging.
The thing is, when you have high load, to the extreme of boot, the increased load of zram compression is not required or wanted.
That is the problem with static parameters like vm.swappiness: zram-config should have methods to react to this, so that it is constantly tuning for the best level of operation.
You cannot really set vm.swappiness to a level other than the default mid-range, because with zram the right value depends on load.
A simple script could, though, and that is all that is required.
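Something along these lines, where the choose_swappiness helper and the 30/80 thresholds are my own illustrative choices, not established tuning values:

```shell
# Sketch: pick vm.swappiness from the 1-minute load average relative to
# the core count. choose_swappiness and its thresholds are invented for
# illustration; the sysctl write needs root, so it is commented out.
choose_swappiness() {
  # $1 = load average scaled by 100, $2 = number of cores
  if [ "$1" -gt $(( $2 * 100 )) ]; then
    echo 30     # high load: avoid adding compression work
  else
    echo 80     # middle/low load: page out more aggressively to zram
  fi
}

cores=$(grep -c ^processor /proc/cpuinfo)
load100=$(awk '{printf "%d", $1 * 100}' /proc/loadavg)
swappiness=$(choose_swappiness "$load100" "$cores")
# sysctl -w vm.swappiness="$swappiness"   # as root, e.g. run from a timer
```

Run periodically (cron or a systemd timer), this is the kind of constant tuning argued for above.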
For an application to use and benefit from a zram disk, zram-config cannot be installed alongside it, as it overwrites previous devices with zero checks that any other service has already called modprobe zram.
That has forced the implementation of zram swap inside log2ram itself, for those log2ram users that require it, as zram-config cannot be used.
The simple check that /sys/class/
Code is at https:/
Also the deb pre-install script is broken: it references the upstart file by the filename zram.conf, whilst it is actually /etc/init/
Anyone ever asked https:/
Question information
- Language: English
- Status: Expired
- Assignee: No assignee