Writing something other than 0 to memory?

Asked by Erwan Velu

I'm running sysbench on a cloud infrastructure (VMs) and I'm getting some strange results that look much higher than expected.

I'm running the memory benchmark in write mode with a 128M block size. While reading the code, I see that sysbench only writes 0 (zeros) to memory. Isn't that an issue?

Couldn't KSM or the kernel skew the results by merging or deduplicating those writes?

Would it be worth writing something other than 0 to memory? Maybe a random number?

Thanks for your insight,
Erwan

Question information

Language: English
Status: Answered
For: sysbench
Assignee: No assignee

Revision history for this message
Alexey Kopytov (akopytov) said:
#1

I am not aware of any technologies that would optimize zero block writes to memory. But yes, comparing performance with random numbers would be interesting.

Erwan Velu (erwan-t) said:
#2
Alexey Kopytov (akopytov) said:
#3

My understanding from that description is that KSM does not optimize memory bandwidth (on the contrary, it can make memory access less efficient), but it does optimize memory footprint by sharing pages between VMs/processes.

That is, whenever a process writes to a page, a real write occurs, regardless of its contents. However, a page merge can be performed _after_ the write by some background scan in the kernel.

Erwan Velu (erwan-t) said:
#4

OK, makes sense.

While reading this part of the code, don't you think we should move the "LOG_EVENT_START(msg, thread_id);" to just after the "rand = sb_rnd();", to avoid counting potential CPU time inside the memory measurement?

I agree it's a tiny amount, but if sb_rnd() ever slows down due to a lack of entropy or a code change (who knows), it could lower the reported performance even though memory itself isn't slower.

Keeping LOG_EVENT_START as close as possible to the measured code looks better to me.

Cheers,

Alexey Kopytov (akopytov) said:
#5

I agree. The current placement of LOG_EVENT_START/STOP makes the code a bit easier to read, but it probably introduces a minor skew in the stats.

On a general note, I don't see many people using cpu/memory/mutex/threads tests. I know they should likely be updated for modern architectures to be representative. Database and file I/O tests are used quite extensively though.

On the other hand, I'm happy to merge a patch to fix the LOG_EVENT_START() issue.
