
PernixData FVP accelerates crawling All-Flash-Array

May 18, 2016 by FrankBrix

Disclaimer: I am a PernixData Systems Engineer; this is a true story based on actual events.

One cold evening in October, I was sitting late at night putting the final touches on a PowerPoint presentation for a VMUG meeting when an email arrived. It was brief and to the point:

“Frank, we have an application that is not performing as it should. We want to try FVP to see if it can make the performance problems go away.”

This is not an unusual request; I have been involved in several hundred POCs with similar characteristics. We agreed on a POC at their offices the following week. In many POCs we don’t even have to help the customer, since the software is so easy to install, configure, and manage. I did not know anything about the customer’s environment; I just knew they had an application that needed better storage performance.

The following week I met the client at their offices. He had already prepared a fresh Windows server for the PernixData management application, as requested. The virtual server we had to help was running on a dedicated two-host ESXi cluster. We spent the first 15 minutes installing the PernixData host extension, one host at a time, and another 15 minutes installing the management server. Now we were ready to roll and see what was happening with the application.

Luckily, PernixData had just released its newest product, Architect, in public beta, so we could use Architect to understand the IO behavior and then use FVP to accelerate the VM. No SSD or PCI-E flash was available, but the hosts had plenty of spare memory. We spent a few minutes creating an FVP cluster using 200GB of memory from each host and put the virtual server in Write-Through mode (read acceleration only).

Within an hour, everything was ready to go: software installed and acceleration policy implemented. The only thing missing was the application owner, who had yet to arrive and give some feedback (or criticism).

The application owner told us they had a test script that, according to their software vendor, should complete in less than 40 minutes. Measured with a stopwatch, it took them 50 minutes. We held our breath as we ran the same test script, now using local RAM for acceleration. The application owner was stunned; he was surprised that everything was already accelerated, with no reboots or downtime. Yes, FVP does not require any of those things 🙂

The first run of the script with the VM in Write-Through mode cut the run time from 50 minutes all the way down to 35 minutes. The application owner looked shocked, exclaiming, “I am not sure what you did, but this is FAST.”
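For readers who like to quantify such runs, here is a minimal sketch of how a stopwatch-style measurement could be automated and turned into a percentage improvement. The test command is purely hypothetical (the vendor's script was run by the application owner); only the 50-minute and 35-minute figures come from this POC.

```python
import subprocess
import time

# Hypothetical placeholder for the vendor's test script; the real script
# is not part of this post.
TEST_COMMAND = ["run_vendor_test.bat"]

def timed_run(command):
    """Run the test command once and return its wall-clock duration in minutes."""
    start = time.monotonic()
    subprocess.run(command, check=True)
    return (time.monotonic() - start) / 60

# Figures from the POC: ~50 minutes on the array alone,
# ~35 minutes with FVP in Write-Through mode.
baseline_minutes = 50
accelerated_minutes = 35
improvement = (baseline_minutes - accelerated_minutes) / baseline_minutes
print(f"Run time reduced by {improvement:.0%}")  # -> 30%
```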

Next we expanded the test to Write-Back mode as well, both with write acknowledgements coming from local RAM (0.05 ms) and with one replication peer (0.05 ms plus vMotion network latency), with similar results. The application relied heavily on extremely low read latency.
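As a back-of-the-envelope sketch of where the write acknowledgement comes from in those two Write-Back configurations: the 0.05 ms RAM figure is from the POC above, while the vMotion round trip and the array write latency below are assumed values for illustration only, not measurements.

```python
# Write-acknowledgement latency for the two Write-Back configurations.
ram_latency_ms = 0.05              # from the POC: acknowledgement from local RAM
assumed_vmotion_rtt_ms = 0.20      # assumption: low-latency 10 GbE vMotion network
assumed_array_write_ms = 1.5       # assumption: typical networked-array write latency

local_only_ack = ram_latency_ms
one_peer_ack = ram_latency_ms + assumed_vmotion_rtt_ms

print(f"Write-Back, no peer:           {local_only_ack:.2f} ms")
print(f"Write-Back, 1 replication peer: {one_peer_ack:.2f} ms")
print(f"Array acknowledgement (assumed): {assumed_array_write_ms:.2f} ms")
```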

All of this was accomplished in as little as two hours, and it was still too early for lunch, so I suggested taking a look at what had really happened with the VM’s IO behavior and why it was such a success. We spent 30 minutes in Architect, and with its block-size visibility it was clear what had happened. The virtual server running the application was issuing a lot of large-block IO in the 64KB to 256KB range. This drove significant throughput, and the external storage array did not provide good latency for these IOs. It was all visible in an easy-to-use graph; no need for ESXTOP or vscsiStats.
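To see why block size matters so much, here is a small illustration of how throughput scales with block size at a fixed IO rate: the same number of IOs at 256KB moves 64 times the data of 4KB IOs, which is exactly the kind of load that pushes a networked array's latency up. The IOPS figure is an arbitrary example, not a number from this POC.

```python
# Throughput = IOPS x block size. Large blocks drive far more data
# per second than small blocks at the same IO rate.
example_iops = 5000  # assumption, for illustration only

for block_kb in (4, 8, 64, 128, 256):
    throughput_mb_s = example_iops * block_kb / 1024
    print(f"{block_kb:>4} KB blocks at {example_iops} IOPS "
          f"-> {throughput_mb_s:,.0f} MB/s")
```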


Everyone was happy. The VMware administrator had solved the problem in a simple way, with no changes whatsoever to the storage infrastructure. Even the application owner was satisfied: he had not only solved his performance challenges but also gained great insight into why they were occurring.

I was curious about what storage infrastructure the application was running on, and I got the response:

“The VM is running on a very popular All-Flash Array. We are deploying AFA so we never have to worry about performance issues anymore.”

For this customer it was a big eye-opener. AFAs are great for reducing latency and providing capacity, but that is not where performance stops. Using local RAM or even PCI-E NVMe flash in the host results in significantly lower storage latencies, latencies that simply cannot be achieved with flash behind a storage network.

I am all in favor of faster storage, whether hybrid or AFA, and it can deliver decent performance. But for hyper performance you have to think differently, and that is exactly what PernixData is doing with the FVP acceleration software. Combined with Architect, storage teams, application owners, and virtualization administrators can work better together, be more effective, and have the right tools to properly architect the data center of the future.

If you want to learn more about this, sign up for the webinar that goes into further detail on Wednesday, May 25th at 9:00 PDT / 18:00 CET, right here: https://get.pernixdata.com/analytics-driven-design-afa
