Add Zram_Swap_Config 0.1 so that Zram_config_0.5 can be dropped
This is not a question; I am just going to show how easy it would be to fix this package, although its name and scope need to change.
Basically it should be called Zram_Swap_Config.
Zram works as a hot-plug system, and any service or program can check whether zram_control exists in /sys.
Services started before and after should be able to create their own drives, because it is a hot-plug system with the next drive
indicated by /sys/class/
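As a minimal sketch of that hot-plug flow (the function name and the overridable path argument are mine, purely so this can be tried on a machine without the zram module loaded; the real ABI lives under /sys/class/zram-control):

```shell
# Sketch: ask the kernel's zram-control class for the next free device.
# On a live system, reading hot_add creates /dev/zramN and returns N.
zram_next_dev() {
    ctrl=${1:-/sys/class/zram-control}
    [ -e "${ctrl}/hot_add" ] || return 1   # zram module not loaded
    cat "${ctrl}/hot_add"
}
```

Any service, before or after, can call this to claim its own drive and then configure it independently.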
Please stop talking about the various packaging errors around stopping services or purge needs, because firstly the concept and implementation are completely broken.
For sys-admins the critical control parameter, mem_limit, is missing from zram_config; it uses a guesstimate of maximum uncompressed size for control and does not take into account that, if compression is low, vastly more memory could be taken.
It is absolutely impossible to control zram by setting a virtual uncompressed disk_size while omitting the actual compressed-size cap, mem_limit.
Also, each device supports multiple compression streams, so firstly there is no reason to sub-divide, and secondly dividing into partitions reduces the maximum size of available paging RAM.
Take these two examples running a stress test, both with the exact same mem_limit of 50% of total memory on a Pi-0.
Allocated swap memory divided by cores: bad idea!
pi@raspberrypi:~ $ zramctl
NAME       ALGORITHM DISKSIZE DATA COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram0           160M     2.3M 573.6K 1000K 1       /var/log
/dev/zram1 lz4       54.2M    4K   63B    4K    1       [SWAP]
/dev/zram2 lz4       54.2M    4K   63B    4K    1       [SWAP]
/dev/zram3 lz4       54.2M    4K   63B    4K    1       [SWAP]
/dev/zram4 lz4       54.2M    4K   63B    4K    1       [SWAP]
pi@raspberrypi:~ $ stress --vm 2 --vm-bytes 512M --timeout 60s
stress: info: [906] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: FAIL: [906] (415) <-- worker 908 got signal 9
stress: WARN: [906] (417) now reaping child worker processes
stress: FAIL: [906] (451) failed run completed in 28s
pi@raspberrypi:~ $
Same allocated memory, just a single multi-stream device: good idea!
pi@raspberrypi:~ $ zramctl
NAME       ALGORITHM DISKSIZE DATA COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram0           160M     2.5M 614.6K 1M    1       /var/log
/dev/zram1 lz4       650.2M   4K   64B    4K    1       [SWAP]
pi@raspberrypi:~ $ stress --vm 2 --vm-bytes 512M --timeout 60s
stress: info: [837] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
stress: info: [837] successful run completed in 61s
You will also notice that the disk size, which is a guesstimate, is not the control; mem_limit is, and in both cases above it is set to 50% of total memory. drive_size is an estimate of the expected compression ratio * mem_limit, so in the 2nd example I went with an optimistic 300%.
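The sizing arithmetic above can be sketched as follows (function and variable names are mine, for illustration; the point is that mem_limit is the hard cap on real compressed RAM, and disk_size is just that cap scaled by a guessed compression ratio):

```shell
# Sketch of the sizing math: mem_limit is the real (compressed) RAM cap,
# disk_size the virtual uncompressed size = assumed ratio * mem_limit.
zram_sizes() {
    totalmem_kib=$1   # MemTotal in KiB
    mem_factor=$2     # percent of total RAM zram may consume
    drive_factor=$3   # assumed compression ratio, percent (300 = 3:1 optimism)
    mem_limit=$(( totalmem_kib * mem_factor / 100 * 1024 ))  # bytes
    disk_size=$(( mem_limit * drive_factor / 100 ))          # bytes
    echo "${mem_limit} ${disk_size}"
}

# e.g. 512 MiB of RAM, MEM_FACTOR=50, DRIVE_FACTOR=300
zram_sizes 524288 50 300   # prints: 268435456 805306368
```

If compression turns out worse than the guessed drive_factor, the device simply fills up at mem_limit instead of silently eating more RAM, which is exactly the control zram_config is missing.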
Zram-Swap-
#!/usr/bin/env sh
# NOTE: several lines of the original post were truncated; the sysfs paths
# below are the standard kernel zram ABI, and the config file name is assumed.
. /etc/zram-swap-config

createZramSwaps () {
    totalmem=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)          # KiB
    mem=$((( totalmem * MEM_FACTOR / 100 / SWAP_DEVICES ) * 1024 ))  # bytes per device
    drive_size=$((( mem * DRIVE_FACTOR ) / 100 ))

    if [ "${SWAP_OFF}" = "true" ]; then
        swapoff -a
    fi

    # Check whether the zram class has already been created
    ZRAM_SYS_DIR="/sys/class/zram-control"
    if [ ! -d "${ZRAM_SYS_DIR}" ]; then
        modprobe zram
        RAM_DEV=0                                  # modprobe creates /dev/zram0
    else
        RAM_DEV=$(cat "${ZRAM_SYS_DIR}/hot_add")   # hot_add creates the next device
    fi
    echo "${COMP_ALG}" > "/sys/block/zram${RAM_DEV}/comp_algorithm"
    echo "${drive_size}" > "/sys/block/zram${RAM_DEV}/disksize"
    echo "${mem}" > "/sys/block/zram${RAM_DEV}/mem_limit"
    mkswap "/dev/zram${RAM_DEV}"
    swapon -p "${SWAP_PRI}" "/dev/zram${RAM_DEV}"

    if [ "${SWAP_DEVICES}" -gt 1 ]; then
        for i in $(seq $(( SWAP_DEVICES - 1 ))); do
            RAM_DEV=$(cat "${ZRAM_SYS_DIR}/hot_add")
            echo "${COMP_ALG}" > "/sys/block/zram${RAM_DEV}/comp_algorithm"
            echo "${drive_size}" > "/sys/block/zram${RAM_DEV}/disksize"
            echo "${mem}" > "/sys/block/zram${RAM_DEV}/mem_limit"
            mkswap "/dev/zram${RAM_DEV}"
            swapon -p "${SWAP_PRI}" "/dev/zram${RAM_DEV}"
        done
    fi

    echo "${PAGE_CLUSTER}" > /proc/sys/vm/page-cluster
    sysctl vm.swappiness="${SWAPPINESS}"
}
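For completeness, a matching teardown would use the same hot-plug ABI in reverse. This is my own sketch, not from the package; the control path is an argument only so it can be exercised without zram loaded:

```shell
# Sketch: swap off and hot-remove one zram device.
# Writing a device number to zram-control/hot_remove destroys /dev/zramN.
zram_remove_dev() {
    devnum=$1
    ctrl=${2:-/sys/class/zram-control}
    if command -v swapoff >/dev/null; then
        swapoff "/dev/zram${devnum}" 2>/dev/null   # ignore if not swapped on
    fi
    echo "${devnum}" > "${ctrl}/hot_remove"
}
```

Because every device is created and destroyed through zram-control, there is no need for the stop/purge gymnastics the current package attempts.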
There is a strong need for user parameters in /etc/zram-
$SWAP_DEVICES default=1 but /etc/zram-
$MEM_FACTOR default=50% of system memory, which sets the memory pool, with LZO/LZ4-like compression ratios, but /etc/zram-
Employing zstd 1.3.4 -1 instead of lz4 1.8.1 could use 35% for the exact same resulting memory size.
$COMP_ALG default lz4 as LZ4 ratio/compressi
It has been noticed with some binaries that Arm Neon optimisation seems to be better with LZO: at a 2.108 ratio, 650 MB/s compress, 830 MB/s decompress it incurs much less CPU load, but zstd 1.3.4 -1 at 2.877, 470 MB/s, 1380 MB/s could be equally valid depending on application.
$DRIVE_FACTOR default of 200% to match LZ4, but it is complete guesswork depending on the resultant compression ratio and on whether you are optimistic or frugal (zram uses about 0.1% of the size of the disk when not in use, so a huge zram is wasteful).
$PAGE_CLUSTER default=3, but the option to tune it for zram memory access, as opposed to the default presumption of HDD swap, adds much performance gain.
$SWAPPINESS default=60, as per the system default, but tweaking it higher increases zram performance gain.
It is massively important to use the hot_plug system and check that ZRAM_SYS_
$PAGE_CLUSTER / $SWAPPINESS default = current system defaults
[EDIT]
$SWAP_OFF default=false gives the option to turn off existing swap, for zram-only swap
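Pulling the parameters above together, the config file might look like this (the file name and the SWAP_PRI value are assumptions of mine; everything else uses the defaults listed above):

```shell
# Hypothetical /etc/zram-swap-config (the real file name is truncated in the post)
SWAP_DEVICES=1   # single multi-stream device; no reason to sub-divide
MEM_FACTOR=50    # mem_limit = 50% of total RAM
DRIVE_FACTOR=200 # assumed LZ4 compression ratio, in percent
COMP_ALG=lz4
SWAP_PRI=75      # assumed value; anything above disk swap priority
PAGE_CLUSTER=3   # current system default
SWAPPINESS=60    # current system default
SWAP_OFF=false   # set true to turn off existing swap first
```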
Also added demo @ https:/
Question information
- Language: English
- Status: Expired
- Assignee: No assignee