A little over a month ago, Amazon released a new storage type called Provisioned IOPS (PIOPS). A PIOPS volume is an Elastic Block Store volume that delivers a guaranteed number of I/O operations per second at least 99% of the time. You can also launch your EC2 instance with dedicated network throughput to your PIOPS volumes, instead of running disk traffic over the same network interface as your production query traffic.
PIOPS is very exciting to us here at Parse. We run a number of high-throughput, I/O-intensive database clusters behind the scenes, and we were thrilled to be able to move to proper database-class hardware. We were already looking for ways to improve our MongoDB cluster's scalability and speed, so PIOPS was a natural fit. We decided to upgrade from RAID-10 arrays of standard EBS volumes to striped 1000-IOPS PIOPS volumes.
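A striped PIOPS setup along these lines can be sketched with the AWS CLI and mdadm. This is a minimal illustration, not our actual configuration: the volume count, sizes, availability zone, and device names below are all placeholder assumptions.

```shell
# Create four 1000-IOPS Provisioned IOPS (io1) volumes.
# Size, count, and availability zone are placeholders.
for i in 1 2 3 4; do
  aws ec2 create-volume --availability-zone us-east-1a \
      --size 200 --volume-type io1 --iops 1000
done

# After attaching the volumes to the instance (here assumed to appear
# as /dev/xvdf through /dev/xvdi), stripe them together as RAID-0
# and put a filesystem on the array.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
mkfs.ext4 /dev/md0
```

Striping (RAID-0) multiplies the aggregate IOPS of the individual volumes, while the per-volume IOPS guarantee removes the need for RAID-10's redundancy-driven layout when replication is already handled at the database layer.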
We’ve been running on PIOPS for nearly a week now. A few of the metrics and changes we’ve observed are:
- Average end-to-end latency, measured from the time a request hits the Elastic Load Balancer, has been cut in half, to under 100 milliseconds.
- Latency across our stack is almost completely flat. There are no more periodic latency spikes from MongoDB write locks or EBS events. On the old volumes we would occasionally see latency and disk I/O spike due to resource contention on one or more of our EBS volumes.
- Memory warmup time has been cut by over 80%, and the added latency during warmup is minuscule. We have scripts that warm up our databases by reading the most active collections into memory, but we don't really need them with the PIOPS volumes. Switching to a "cold" PIOPS secondary adds only about 100 ms of latency for a few minutes.
[Graph: end-to-end latency before PIOPS (y-axis from 0.0 to 2.5 seconds)]

[Graph: end-to-end latency after PIOPS (y-axis from 0.0 to 0.6 seconds)]
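The warmup scripts mentioned above boil down to scanning the hottest collections so their data gets paged into memory before a secondary takes traffic. Here is a minimal sketch of that idea, assuming a pymongo-style database handle; the function name and collection names are hypothetical, not our actual tooling:

```python
def warm_collections(db, collection_names):
    """Warm a database by scanning its most active collections.

    A full scan of each collection forces its documents into the OS
    page cache (and the database's resident set), so the first real
    queries don't pay the cost of cold reads from disk.
    """
    touched = {}
    for name in collection_names:
        count = 0
        # Iterating every document is what pages the data into RAM.
        for _ in db[name].find():
            count += 1
        touched[name] = count
    return touched


# Usage (assuming a pymongo connection; names are placeholders):
#   from pymongo import MongoClient
#   db = MongoClient("mongodb://secondary:27017")["appdata"]
#   warm_collections(db, ["users", "sessions"])
```

With PIOPS the random reads a cold secondary issues are fast enough that this pre-scan is no longer necessary in practice.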
We’re thrilled with the performance of our databases on Amazon PIOPS volumes. Thanks, Amazon!