Replies: 1 comment
At first sight, the block size matters. Try experimenting with various block sizes inside the microVM (I would recommend starting with 4M, 16M, 32M, 64M, and 128M; anything beyond that I don't think adds much value), then run the exact same command on the host and compare the results. Also, run a read test first, then a write test, then a read test again (reading back what was written). If possible, use random data for writing (although in that case the CPU can become the bottleneck, especially in a throttled VM), because some virtualization systems, for example QEMU with QCow2 if so configured, detect runs of zeroes and treat them as a trim operation. And if possible, do all of this directly on the raw device.

Also, you didn't mention what you passed as a disk to the microVM. Was it exactly the same raw device, or a file on top of a file system on top of that raw device? If a file-on-a-file-system is used, then you are also benchmarking the host file system, so for a comparative test run the host test against that same file.

And finally, don't forget that cloud providers (especially AWS, GCP, etc.) throttle I/O, most often with bursts, so take that into account either by planning your tests around the burst schedule, or by repeating the same tests multiple times in round-robin fashion, to make sure the bursting / throttling hits each and every test.

Lastly (though I'm not a Firecracker developer), keep in mind that Firecracker's main focus is on security, and thus perhaps a bit less on raw performance. So perhaps try to replicate your experiment inside QEMU running on exactly the same host with its microVM profile, or even Cloud Hypervisor with its microVM profile. One of these alternatives may have better raw performance, if that is what matters to you.
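A minimal sketch of such a sweep, using dd as in your original test (the guest device path /dev/vdb, the host path /dev/vg0/lv0, and the counts are placeholders; adjust them to your setup, and note that writing to the raw device destroys its contents):

# WARNING: this writes directly to the device and destroys any data on it.
DEV=/dev/vdb   # inside the guest; on the host point this at the same raw LV, e.g. /dev/vg0/lv0
for bs in 4M 16M 32M 64M 128M; do
  echo "== block size $bs =="
  # read first, then write random data with direct I/O, then read back what was written
  dd if=$DEV of=/dev/null bs=$bs count=64 iflag=direct 2>&1 | tail -n 1
  dd if=/dev/urandom of=$DEV bs=$bs count=64 oflag=direct conv=fsync 2>&1 | tail -n 1
  dd if=$DEV of=/dev/null bs=$bs count=64 iflag=direct 2>&1 | tail -n 1
done
# repeat the whole loop several times (round-robin) so burst credits affect every case equally

Running the identical loop inside the guest and on the host gives you a like-for-like comparison at each block size, which is what isolates the virtualization overhead from the underlying disk's behaviour.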
I’m running a Firecracker microVM (or other environment) on a device that reports itself as an NVMe drive (/dev/nvme0n2). However, sustained write performance is only ~120 MB/s—even after switching to a raw LVM volume. I’m trying to determine whether this is expected behavior or if there’s a configuration/host environment issue causing slower-than-expected speeds.
Environment Details
Host OS: Ubuntu 24.04
Firecracker Version: v1.10
Kernel Version: 5.15.0-*
Underlying Hardware/Cloud Provider: GCP
Device: /dev/nvme0n2; a host-side performance test gives about 400 MB/s
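For context, a host-side baseline like that ~400 MB/s figure could be reproduced with something along these lines (this fio invocation is only an illustration, not the exact test that was run, and writing to the raw device is destructive):

# sequential direct-I/O write baseline on the raw device (destroys data on /dev/nvme0n2)
fio --name=seqwrite --filename=/dev/nvme0n2 --rw=write --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=16 --size=8G \
    --runtime=60 --time_based --group_reporting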
I did the following: on /dev/nvme0n2 I created a raw LVM volume with pvcreate, vgcreate, and lvcreate, then ran inside the microVM:

dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct
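For reference, this is roughly how a raw LV (rather than a file sitting on a file system) would be handed to Firecracker before boot; the socket path, drive_id, and the vg0/lv0 names below are placeholders for whatever your setup actually uses:

# attach the raw logical volume as a virtio-block device via the Firecracker API socket
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/drives/scratch' \
    -H 'Content-Type: application/json' \
    -d '{"drive_id": "scratch", "path_on_host": "/dev/vg0/lv0", "is_root_device": false, "is_read_only": false}'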
Any advice on diagnosing or improving sustained write performance for this kind of device?