The I/O workload consists of creating a small file on the mounted Gluster volumes, i.e. on both the FUSE and NFS mounts. The file is then variously written to, read from, renamed, and ultimately deleted. Several directories are created, and the file is moved, linked, symlinked, statted, and so on. Other missing ops, e.g. {set,get,del}xattr, will be added to the workload soon.
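For illustration only, here is a minimal Python sketch of that kind of small-file, metadata-heavy workload. The mount-point paths are assumptions (substitute your own FUSE and NFS mounts); the actual test harness is not shown here.

    #!/usr/bin/env python3
    # Rough sketch of the small-file workload described above.
    # Mount paths below are hypothetical, not the real test configuration.
    import os
    import shutil

    MOUNTS = ["/mnt/glusterfs-fuse", "/mnt/glusterfs-nfs"]  # assumed paths

    def run_workload(mount):
        path = os.path.join(mount, "workload-file")
        # create and write a small file
        with open(path, "w") as f:
            f.write("hello gluster\n")
        # read it back
        with open(path) as f:
            f.read()
        # create a few directories, then move, link, symlink and stat the file
        for d in ("dir1", "dir2", "dir3"):
            os.makedirs(os.path.join(mount, d), exist_ok=True)
        moved = os.path.join(mount, "dir1", "workload-file")
        shutil.move(path, moved)
        os.link(moved, os.path.join(mount, "dir2", "hardlink"))
        os.symlink(moved, os.path.join(mount, "dir3", "symlink"))
        os.stat(moved)
        # rename, then clean everything up
        renamed = os.path.join(mount, "dir1", "renamed-file")
        os.rename(moved, renamed)
        os.unlink(renamed)
        os.unlink(os.path.join(mount, "dir2", "hardlink"))
        os.unlink(os.path.join(mount, "dir3", "symlink"))

    if __name__ == "__main__":
        for m in MOUNTS:
            run_workload(m)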
Obviously this test is very basic. That is primarily because Gluster under Valgrind is rather finicky, and the glusterfs (FUSE, NFS) processes are prone to locking up. The finickiness also makes it hard to run as a cron job: if one run locks up, it prevents the following jobs from executing. Thus these Valgrind reports will only appear occasionally.
Obviously it would be nice to get more coverage: glusterd, large-file writes, directories with lots of files in them to exercise readdirp, and so on. The self-heal daemon (selfheald) glusterfs process is not exercised at all. There's lots of room for improvement. Watch this space.
If you find a leak, open a bug report at Red Hat Bugzilla.
Results by git commit, master branch
Results by git commit, release-3.6 branch
Results by git commit, release-3.5 branch
Longevity is a CPU load and memory footprint snapshot taken from each of one client and eight servers running GlusterFS in a 4×2 distribute-plus-replica configuration with sharding. An I/O workload runs continuously, and CPU%, resident (RSS), and virtual (VSZ) memory footprint are sampled hourly.
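As a rough illustration of that hourly sampling, the Python sketch below records CPU% and RSS/VSZ for the gluster processes via ps. The log path and the set of process names are assumptions; the actual collection script may differ.

    #!/usr/bin/env python3
    # Sketch of an hourly sampler: log CPU% and RSS/VSZ for gluster processes.
    # Log location and process list are assumptions, not the real harness.
    import subprocess
    import time

    LOGFILE = "/var/tmp/gluster-longevity.log"  # hypothetical path

    def sample():
        # ps columns: command name, CPU%, resident set size (KiB), virtual size (KiB)
        out = subprocess.run(
            ["ps", "-C", "glusterfs,glusterfsd,glusterd",
             "-o", "comm,pcpu,rss,vsz", "--no-headers"],
            capture_output=True, text=True,
        ).stdout
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        with open(LOGFILE, "a") as log:
            for line in out.splitlines():
                log.write(f"{stamp} {line}\n")

    if __name__ == "__main__":
        while True:
            sample()
            time.sleep(3600)  # one sample per hour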
Longevity, GlusterFS 4.1.1 4×2 distribute-plus-replica + shard
Longevity, GlusterFS 3.12.1 4×2 distribute-plus-replica + shard
Longevity, GlusterFS 3.11.0 4×2 distribute-plus-replica + shard
Longevity, GlusterFS 3.10.1 4×2 distribute-plus-replica + shard
Longevity, GlusterFS 3.8.5 4×2 distribute-plus-replica + shard
Longevity, GlusterFS 3.8.1 4×2 distribute-plus-replica
Longevity, GlusterFS 3.7.8 4×2 distribute-plus-replica
Longevity, GlusterFS 3.6.0 4×2 distribute-plus-replica