
vFrank

Essence of virtualization


Cohesity – Revolutionizing VMware backup

August 24, 2016 by FrankBrix

Having a background as a System Administrator working in the pre-virtualized world puts a lot of things into perspective. I remember how VMware virtualization changed everything and made my job easier and more fun. No more worrying about hundreds of physical servers, different driver models, and complicated installation processes.

While we were virtualizing, we ran both virtual and physical servers. Virtual servers had higher availability than the physical ones because of vMotion and HA. We still needed to back up the data, and that was easy because we could leverage our existing in-guest backup agent on the virtual machines, just as we had been doing on the physical servers for many years.

VMware released VMware Consolidated Backup (VCB) to help offload the backup. What a beast it was (and not in a good way). Luckily, some interesting companies like Vizioncore (vRanger) and Veeam appeared with backup solutions built for full-VM backups. These solutions enabled us to lower our RTO dramatically, since we could do a complete VM restore in a few hours. We still had many challenges sizing the VM backup environment, since the hardware and software were not coupled together. That gave us flexibility, but also many headaches when the hardware was too slow for backup or restore purposes, extending our RTO by several hours.

These solutions changed our perspective on backups and were innovative, but they have since slowed down because they are built on a design that is more than ten years old. That is the cycle of any software solution.

What I love most about working in the IT space is that new technologies emerge and bring innovation back to spaces where it has stopped! That innovation is happening right now in the backup and copy data management space.

To move to the next level you need a backup solution that tightly ties next-level backup software and hardware together. Once you couple the software and hardware, you get a solution that just works better! No more bad RTO because of undersized hardware, no more "What am I going to buy?" challenges, and no more multi-day workshops with the backup vendor to size the hardware storing your mission-critical data. No more workshops to work out how many proxy servers, media servers, and transport servers you need, and how much CPU and memory to configure them with. This legacy design worked well for many years, but it is complicated, with LOTS of moving parts. We loved it because it was the best there was.

Cohesity is now innovating this space and the way they do it is beautiful and brilliant. Let me help you understand how they do this.

  1. Cohesity built a fully distributed file system (OASIS) and runs it on physical servers (nodes). Each node has plenty of CPU power, memory, SSD, and disk capacity.
  2. The file system uses the patented SnapTree technology, which can store infinite snapshots of data with no performance impact. (Yes, infinite!)
  3. The file system supports deduplication and compression. The dedupe is variable-length, not fixed-size as in most other solutions (see the sketch after this list).
  4. The system scales from a minimum of 3 nodes to a maximum of infinite nodes. True scale-out architecture.
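
To see why variable-length matters: with fixed-size chunks, inserting a single byte early in a file shifts every later chunk boundary and breaks deduplication, while content-defined chunking re-synchronizes on the data itself. Here is a minimal Python sketch of the idea; the rolling hash and all parameters are illustrative and have nothing to do with Cohesity's (non-public) implementation:

    import hashlib

    def chunks(data, mask=0x3FF, min_size=512):
        # Content-defined chunking: cut wherever a rolling hash of the
        # most recent bytes matches a bit pattern, so boundaries follow
        # the content and an insertion only disturbs nearby chunks.
        start, h = 0, 0
        for i, b in enumerate(data):
            h = ((h << 1) + b) & 0xFFFFFFFF  # old bytes shift out after ~32 steps
            if i - start + 1 >= min_size and (h & mask) == 0:
                yield data[start:i + 1]
                start, h = i + 1, 0
        if start < len(data):
            yield data[start:]

    def dedupe(streams):
        # Store each unique chunk once, keyed by its SHA-256 fingerprint,
        # and report logical bytes vs. bytes actually stored.
        store, logical = {}, 0
        for data in streams:
            logical += len(data)
            for c in chunks(data):
                store.setdefault(hashlib.sha256(c).digest(), c)
        return logical, sum(len(c) for c in store.values())

Feed in two backup streams that differ by one inserted byte and most chunks still deduplicate; with fixed 512-byte blocks, almost nothing after the insertion point would match.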

This will provide you with the perfect place to store your data!

On top of this they built a distributed data protection engine that runs on the hyper-converged platform. Yes, you read that right: the system is not running as virtual machines, and it is not running on just one node in the cluster. Any node can take on the roles and responsibilities. You get fully redundant, highly available backup software running natively on the hardware that stores your data.

This eliminates the need to buy backup software and hardware independently. Getting them together brings you next-level services.

Deploying the solution is as simple as adding the nodes to your data center. You get 4 nodes in a 2U chassis with 96TB of raw capacity. Need more capacity? Add more nodes.

Then you connect it to vCenter, and you define policies for how you want to protect your VMs. No more backup jobs! It is all policy based (a sketch of what such a policy could look like follows below).
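
Cohesity's real policy schema is not something I will reproduce here; purely to illustrate the jobs-versus-policies difference, here is a hypothetical policy object in Python. Every field name is invented for this sketch:

    # Hypothetical and illustrative only: a policy declares the outcome
    # you want (RPO, retention, replication) and the platform derives
    # the work. None of these field names come from Cohesity.
    gold_policy = {
        "name": "gold-tier-vms",
        "rpo_minutes": 60,           # a recovery point every hour
        "retention_days": 30,        # keep local recovery points a month
        "replicate_to": "dr-site",   # copy to a second cluster
        "app_aware": True,           # quiesce VSS-aware applications
    }

    def protected_vms(vcenter_inventory, tag="gold"):
        # Attach the policy by tag instead of listing VMs in a job
        # definition; a newly tagged VM is picked up automatically.
        return [vm for vm in vcenter_inventory if tag in vm.get("tags", [])]

The point is the inversion: you never size or schedule a job; you state the SLA and the platform schedules itself.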

The solution does not just do application-aware backup of full VMs. It also gives you:

  • Replication of data to another site or to a public cloud provider for site redundancy.
  • Instantaneous recovery times: all backups are fully hydrated snapshots, and you can clone them and run them for test/dev purposes.
  • Instant file-level search: all data is indexed, and Google-like search helps you find the data you need to restore.
  • Physical server support for Windows and Linux.
  • MS SQL, MS SharePoint, and MS Exchange application awareness.
  • Granular VM, file, and object-level recovery.
  • And much more.

If you are reviewing your backup and data protection strategy, you would be crazy not to take the new innovators into account. The future is here, and the next leap in virtual backup and copy data management is happening now.

Filed Under: Cohesity

Cohesity – What is a secondary storage platform?

August 16, 2016 by FrankBrix

Cohesity is a secondary storage platform built on the principles of hyper-convergence. But what does this mean? What is a secondary storage system, and what is the benefit of being built on hyper-convergence?

Let's define secondary storage: secondary storage is everything that does not need strict SLAs.

Primary storage is typically the storage used for running virtual machines in the data center: you need strict SLAs and high performance, and you are probably looking into putting them on an all-flash array or a modern hybrid array. Examples of primary storage systems are XtremIO, Compellent, 3PAR, Nimble, Tintri, Pure, NetApp, and the like. Your VMware ESXi servers consume the storage as a block device or an NFS share. Once it is deployed and set up as a datastore in VMware, you start to put your VMs on the storage. Even VMware VSAN is a primary storage system. It is simply built for running virtual machines.

From a storage capacity perspective, 20% of your data should belong on the primary storage platform and the remaining 80% on your secondary storage platform. You may argue that 100% of your data is on the primary storage platform, and that can be the case. But if it is, you are missing out on a big opportunity to move your data center to the next level.

Data that should not be on the primary storage platform includes:

  • Backup data of your virtual machines and applications
  • Test / Dev virtual machines
  • Archive data
  • File shares

Cohesity has built a hyper-converged storage platform that handles all of these workloads. Built on the principles of hyper-convergence, it is designed to scale. The minimum configuration is 3 nodes, and the maximum is infinite! There is no ceiling on how many nodes you can add to the platform. Each node is a compute system with 8x Intel Xeon processors, 64GB of memory, 24TB of hard disk capacity, and 1.6TB of PCIe flash. Full node information here. You get 4 nodes in a 2U chassis, which provides 96TB of raw capacity.

Some of the benefits of the Cohesity scale-out hyper-converged storage platform:

  • If a node crashes, the other nodes continue carrying the load. You can then choose to fix the node or leave it down.
  • With virtual IPs (VIPs), any node can perform the work of another node.
  • Non-disruptive upgrades: you can upgrade the software while the system is running. No need for service windows!
  • All nodes have lots of CPU resources, which enables new services and visibility into data.
  • Linear scalability.
  • Scales to infinite capacity.
  • The OASIS file system with SnapTree, global variable-length dedupe, and compression: technologies that give you much more effective capacity than the raw terabytes alone.

The secondary storage platform needs data to be of any use. How do you get data onto it?

  • Use it as a dedupe target appliance for any existing backup software.
  • Use the built-in VADP data protection engine to replace the software you currently use to back up VMware.
  • Use it as a filer (SMB or NFS shares).

Once you have the data on the system, it allows you to do new things:

  • All data is always indexed and can be accessed with Google-like search (a toy sketch of the idea follows after this list).
  • Run analytics on your indexed data (search for credit card numbers, social security numbers, or anything else you can think of).
  • Understand dark data: where did all of my capacity go? Where is my hot and cold data?
  • Replicate the data to a Cohesity cluster in another site, or replicate it to a cloud provider.
  • Spin up virtual machines from the backup in seconds; all snapshots are fully hydrated and accessible at any time. This is awesome for test/dev use cases, or for testing the impact of patching a system without touching production.
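
As a rough mental model of "always indexed" (Cohesity's actual indexing pipeline is not public, so treat this purely as a concept demo), Google-like search boils down to an inverted index built at ingest time:

    from collections import defaultdict

    class InvertedIndex:
        # Toy index-at-ingest: tokenize each document (file path, file
        # content, VM metadata) once when it lands, so a later search is
        # a dictionary lookup rather than a scan of every backup.
        def __init__(self):
            self.postings = defaultdict(set)

        def ingest(self, doc_id, text):
            for token in text.lower().split():
                self.postings[token].add(doc_id)

        def search(self, query):
            # AND-match all query terms, "Google-like" in the basic sense.
            hits = [self.postings.get(t, set()) for t in query.lower().split()]
            return set.intersection(*hits) if hits else set()

    idx = InvertedIndex()
    idx.ingest("vm42:/finance/q3.xlsx", "quarterly revenue report q3")
    idx.ingest("vm17:/hr/notes.txt", "interview notes q3 hiring")
    print(idx.search("q3 revenue"))  # -> {'vm42:/finance/q3.xlsx'}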

These are just some of the benefits of secondary storage.

 

Filed Under: Cohesity

Cohesity – what is the big deal?

August 10, 2016 by FrankBrix

On September 1st I will join Cohesity as a Systems Engineer covering the Nordic region in Europe. When I was looking into Cohesity as a potential future employer, several things got me super excited about their technology. Let me share my perspective on why Cohesity is doing something that is a big deal.

Today in the data center space there is a lot of talk about traditional storage (SAN/NAS) vs. hyper-converged (VSAN, Nutanix, SimpliVity). It is important to have a place to 1. run your VMs and 2. store your VMs' data (primary storage). But what we do not talk about nearly as much is how you manage and store all of the data that is not your VMs running in production. Things like:

  • Backup data
  • File shares
  • Test/Dev workloads (Copy data management, spin up backed up VMs in seconds)
  • Archive data
  • Analytics
  • Cloud

I have never seen a data center where the secondary data was not fragmented or stored inefficiently. Who has not had problems with sizing and scaling all of these silos? Today the only way to cope with this is to have a myriad of different solutions, which is 1. expensive and 2. highly inefficient.

Cohesity has built a single solution that takes care of all of these use cases in the data center: one platform that replaces multiple point solutions. The platform is a secondary storage platform built on the foundation of hyper-convergence. To put it very briefly: "It is all about the file system."

The Cohesity Data Platform is built on the Open Architecture for Scalable Intelligent Storage (OASIS), the only file system that combines infinite scalability with the ability to consolidate multiple different business workloads on a single platform. The platform includes things like:

  • In-line and post-process dedupe (variable-length, not fixed-size)
  • Compression
  • SnapTree, a distributed redirect-on-write snapshot mechanism (infinite snapshots)
  • Indexing for Google-like search across all data
  • Integrated data protection engine (back up VMware with VADP, or physical servers, with no external backup software needed!)
  • Analytics
  • and much more

The SnapTree technology is nothing short of spectacular. Imagine being able to take infinite snapshots of VMs or data without any performance degradation. (Unlimited!) With SnapTree, every block is always reachable within 3 hops, no matter how many snapshots you have.
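
To make redirect-on-write concrete, here is a toy Python model. It has nothing to do with SnapTree's proprietary on-disk structures, but it shows the behavior: taking a snapshot copies references rather than data, and a write redirects only the touched block, so old snapshots stay intact and reads never get slower as snapshots pile up:

    class Snapshot:
        # Toy redirect-on-write view. A real system shares tree nodes
        # instead of copying a flat map (which is what keeps snapshot
        # creation cheap); the read/write behavior is the same.
        def __init__(self, parent=None):
            self.blocks = dict(parent.blocks) if parent else {}

        def write(self, block_no, data):
            self.blocks[block_no] = data  # redirect: old version untouched

        def read(self, block_no):
            return self.blocks.get(block_no)  # bounded cost, any snap count

    base = Snapshot()
    base.write(0, b"v1")
    snap = Snapshot(parent=base)  # shares block 0 with base
    snap.write(0, b"v2")          # redirect-on-write in the new view
    assert base.read(0) == b"v1" and snap.read(0) == b"v2"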

If you are wondering how you would use these services in your data center, here are just a few use cases off the top of my head:

  1. Expose an NFS or SMB share and use the scalable Cohesity platform as a dedupe target appliance for any backup software on the market.
  2. Use the built-in VADP data protection engine to take policy-based backups of your virtual environment. You will get next-level backup, RTO, and RPO times because of the underlying architecture and file system. This can totally eliminate the need for whatever software you currently use to back up VMware.
  3. Use it as a file server.
  4. Use it for test/development workloads. Once VMs are backed up by the built-in Cohesity backup solution, you can spin them up in seconds on an NFS share presented back to your ESXi hosts.
  5. Use the analytics engine to understand your data and find anything you need.

You get all of this in a 2U appliance. If you need more capacity or power, you simply add more nodes. This is the power of a true hyper-converged secondary storage product. This is what got me excited!

Filed Under: Cohesity

#Brixit PernixData

August 8, 2016 by FrankBrix

The last three years have been a fantastic journey. With PernixData I have personally been part of helping more than 100 customers get low-latency storage performance with FVP and never-before-seen insight into their data centers with Architect. I have made a lot of new friends, and I will miss all of the great colleagues at PernixData; never before have I been part of such a talented team. I did not expect our ways to part so soon, but due to rumours about a company acquiring PernixData, I have chosen to get back to what I love the most: evangelizing and helping a young startup get its message out in the Nordics.

PernixData was my first startup; I was hired as employee number 75, and number 3 in EMEA. Working for a startup is nothing like any other job I have had. The energy, the passion, and the dedication all around are unbelievable. Everybody wants to succeed and prove themselves. Every individual contribution makes a difference. There is no way to hide, and great work is appreciated and applauded.

Shortly after I started, I was sent to new-hire training at the HQ in San Jose, where I was bombarded with information about the product, the processes, Salesforce, marketing, etc. Being in that early has its benefits, though. One thing I particularly remember was having a private lunch with the CEO, Poojan Kumar, at a nearby restaurant. He welcomed me and told me how much he appreciated that I had chosen to join the company. I was in awe; my feeling was "thanks for letting me be the first SE hire in all of EMEA," working with a great product with big potential. I now understand that when you join a young startup with only a few months' history of selling, you are taking a big risk: the adventure could last many years, or just a few months if things don't work out.

Thinking back on the last 33 months, it has been a thrilling experience. We celebrated many victories but also had our downs: one quarter selling like crazy, the next with PSODs (purple screens of death) that killed some major deals that were on the table. That is how it is; the product is still maturing and not everything is perfect (yet).

My title at PernixData was "Systems Engineer," but in the startup world your job far exceeds your title. You do marketing, lead generation, partner enablement, and actual SE work like POCs and presentations at VMUGs. You pretty much do whatever it takes, and that is what is expected.

What I liked the most was how close you get to everyone at the company. You can talk directly to the CEO, and you can give the product manager a call and have a big influence on the development of the product. Many things experienced in the field were brought back to PM and later added to or changed in the product.

Now that I am about to open a new chapter, it has to be with another startup with disruptive technology; joining one of the big established players is not something I would even consider. Luckily the timing is great, and when my current employment ends I will start working for Cohesity as a Systems Engineer in the Nordic region. I am certain that we will meet at VMworld or at many VMUGs to come. I am getting ready for another wild ride, with long working hours (or, as my girlfriend says, "Frank, you do not work, you do what you love to do").

Filed Under: Cohesity, PernixData

PernixData Management Appliance is here!

June 13, 2016 by FrankBrix

The appliance that a lot of you have been asking for is finally here, and it is just what you requested. No more struggles with MS SQL connectivity. From here on, installing the management server is dead simple; all you need to prepare is:

  • IP address for the Management Appliance
  • Username and password for an account with vCenter admin privileges.

The appliance is packaged as an OVA file. Download the OVA and start the "Deploy OVF Template" wizard in vCenter. During deployment you need to provide some information (a scripted alternative is sketched after this list):

  • Configuration size (tiny, small, medium, large) – this choice determines how much memory, CPU, and disk space the appliance is configured with.
  • Networking properties: IP address, subnet mask, default gateway, and DNS server.
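
For lab automation, OVA deployments like this can usually also be scripted with VMware's ovftool instead of the vSphere wizard. This is a sketch under assumptions: the --prop: keys below are invented for illustration, so list the appliance's real OVF properties first by running ovftool against the OVA:

    import subprocess

    # Assumed property names (ip0, netmask0, ...) purely for illustration;
    # running `ovftool pernixdata-management.ova` prints the real ones.
    cmd = [
        "ovftool",
        "--acceptAllEulas",
        "--name=pernixdata-mgmt",
        "--datastore=datastore1",
        "--deploymentOption=small",      # tiny / small / medium / large
        "--prop:ip0=192.168.1.50",
        "--prop:netmask0=255.255.255.0",
        "--prop:gateway0=192.168.1.1",
        "--prop:dns0=192.168.1.10",
        "pernixdata-management.ova",
        # vi:// locator: vCenter user (@ encoded as %40), vCenter host,
        # then the inventory path to the target cluster
        "vi://administrator%40vsphere.local@vcenter.lab.local/DC/host/Cluster/",
    ]
    subprocess.run(cmd, check=True)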

[Screenshots: PernixData Appliance OVA deployment wizard]

The appliance is then imported into vCenter, and you can power it on. After the first power-up you need to access its config page via your browser: go to https://ip-address-of-mgmt-appliance. When you get to the login page, log in with the default credentials:

Username: pernixdata
Password: pernixdataappliance

Then you follow the brief setup wizard and enter your vCenter credentials. Once that is done, the appliance is ready and you can start using it.

Filed Under: PernixData

PernixData FVP accelerates crawling All-Flash-Array

May 18, 2016 by FrankBrix

Disclaimer: I am a PernixData Systems Engineer, this is a true story based on actual events.

One cold evening in October, I was sitting late at night putting the final touches on a PowerPoint presentation for a VMUG meeting when an email ticked in. The email was brief and to the point:

“Frank, we have an application that is not performing as it should, we want to try FVP to see if it can make the performance problems go away”.

This is not an unusual request to receive, since I have been involved in several hundred POCs with similar characteristics. We agreed on a POC the following week at their offices. In many POCs we don't even have to help the customer, since the software is so easy to install, configure, and manage. I did not know anything about the customer's environment; I just knew that they had an application that needed better storage performance.

The following week I met the client at their offices; he had already prepared a fresh Windows server for the PernixData management application, as requested. The virtual server we had to help was running on a dedicated two-host ESXi cluster. We spent the first 15 minutes installing the PernixData host extension one host at a time, and another 15 minutes installing the management server. Now we were ready to roll and see what was happening with the application.

Luckily, PernixData had just released its newest product, Architect, in public beta, so we could leverage Architect to understand the IO behavior and then use FVP to accelerate the VM. No SSD or PCIe flash was available, but the hosts had plenty of spare memory. We spent a few minutes creating an FVP cluster using 200GB of memory from each host and put the virtual server in Write-Through mode (read acceleration only).

Within one hour everything was ready to go: software installed and acceleration policy implemented. The only thing missing was the actual application owner, who had yet to arrive and give some feedback (or criticism).

The application owner let us know that they had a test script that should complete in less than 40 minutes according to their software vendor, but stopwatch measurements showed it took 50 minutes. We held our breath as we ran the same test script, now using local RAM for acceleration. The application owner was stunned; he was surprised that everything was already accelerated, with no reboots and no downtime. Yes, FVP does not require any of those things 🙂

The first run of the script with the VM in Write-Through mode cut the run time from 50 minutes all the way down to 35 minutes. The application owner looked shocked, exclaiming, "I am not sure what you did, but this is FAST."

Next we extended the test to Write-Back mode as well, both with write acknowledgements coming from local RAM (0.05ms) and with 1 replication peer (0.05ms + vMotion network latency), with similar results. The application relied heavily on the extremely low latency for reads.
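
As a back-of-the-envelope model of those acknowledgement paths (the RAM figure is from this post; the array and network numbers are my own assumptions, not measurements):

    # Effective write-acknowledgement latency per FVP policy.
    RAM_MS = 0.05      # local RAM ack, from the post
    NET_RTT_MS = 0.25  # assumed vMotion-network round trip (10GbE-ish)
    ARRAY_MS = 2.0     # assumed AFA latency under large-block load

    policies = {
        "write-through (array acks every write)": ARRAY_MS,
        "write-back, no replication peer": RAM_MS,
        "write-back, 1 peer (ack after replica)": RAM_MS + NET_RTT_MS,
    }
    for name, ms in policies.items():
        print(f"{name:40s} ~{ms:.2f} ms")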

This was all accomplished in as little as two hours, and we were still too early for lunch. So I suggested taking a look at what had really happened with the VM's IO behavior and why it was such a success. We spent 30 minutes in Architect, and with its block-size visibility it was clear what had happened. The virtual server running the application was issuing a lot of large-block IO in the 64KB to 256KB range. This drove significant throughput, and the external storage array did not provide good latency for these IOs. It was all visible in an easy-to-use graph; no need for ESXTOP or vscsiStats.
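
The kind of picture Architect drew can be approximated from any IO trace by bucketing IOs by block size and averaging latency per bucket; a minimal sketch with an invented trace format:

    from collections import defaultdict

    # Hypothetical (size_bytes, latency_ms) samples; in practice this
    # would come from a vscsiStats-style capture.
    trace = [(4096, 0.4), (65536, 3.1), (262144, 7.8), (131072, 4.9),
             (65536, 2.7), (4096, 0.5), (262144, 8.4)]

    buckets = defaultdict(list)
    for size, ms in trace:
        label = "<64K" if size < 64 * 1024 else \
                "64K-256K" if size <= 256 * 1024 else ">256K"
        buckets[label].append(ms)

    for label, samples in buckets.items():
        avg = sum(samples) / len(samples)
        print(f"{label:9s} {len(samples):3d} IOs, avg latency {avg:.1f} ms")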

 

Everyone was happy. The VMware administrator had solved the problem in a simple way, with no changes whatsoever to the storage infrastructure. Even the application owner was satisfied: he had solved his performance challenges and also gained great insight into why they were occurring.

I was curious about what storage infrastructure the application was using, and then I got the response:

“The VM is running on a very popular All-Flash-Array, we are deploying AFA so we never have to worry about Performance issues anymore”

For this customer it was a big eye-opener. AFAs are great for reducing latency and providing capacity, but that is not where performance stops. Using local RAM, or even PCIe NVMe flash in the host, results in significantly lower storage latencies; latencies that are simply not achievable with flash behind a storage network.

I am all in favor of faster storage, whether hybrid or AFA, and both can deliver decent performance. But for hyper performance you have to think differently, and that is exactly what PernixData is doing with the FVP acceleration software. Combined with Architect, storage teams, application owners, and virtualization administrators can work better together, be more effective, and have the right tools to properly architect the data center of the future.

If you want to learn more about this, sign up for the webinar that goes into further detail on Wednesday, May 25th at 9:00 PDT / 18:00 CET, right here: https://get.pernixdata.com/analytics-driven-design-afa

Filed Under: Uncategorized

PernixData Architect 1.0 is now GA

November 4, 2015 by FrankBrix

I am thrilled to announce that PernixData Architect is now GA. At the same time, FVP has been updated to 3.1, and both can be accessed from our download portal by existing customers and partners.

Architect has been out in public beta for the past 2-3 months, and the feedback has been fantastic: for the first time ever, people can design and architect storage from facts rather than rules of thumb and gut feelings.

The Architect software is bundled in the same Management server as FVP, and it also uses the same host extension.

If you want to try out Architect and see what it is all about, sign up here: http://www.pernixdata.com/pernixdata-architect-software#trial-form

Filed Under: Uncategorized

PernixData Nordic Tour 2015 with Frank Denneman

September 10, 2015 by FrankBrix

Frank Denneman (frankdenneman.nl) is back in the Nordic capitals for the PernixData Tour 2015. On the agenda are PernixData FVP and our newest product, Architect, which will revolutionize the data center. Some highlights of what to expect:

  • Combine best-in-class user experience with robust real-time analytics
  • Deliver unprecedented visibility and control of virtualized applications and the underlying infrastructure
  • Maximize application performance while minimizing troubleshooting costs

Monday September 21st: Helsinki

Tuesday September 22nd: Stockholm

Wednesday September 23rd: Oslo

Thursday September 24th: Copenhagen

It is all free, and you can sign up for the event here: Sign up here

Filed Under: PernixData, Uncategorized

PernixData Announcements coming at VFD5 June 24th

June 22, 2015 by FrankBrix

I just wanted to let you know that Satyam Vaghani will have some big announcements at VFD5 on Wednesday, June 24th.

The announcement will be live streamed, so tune in on Wednesday at 21:30 CET.

Link: http://techfieldday.com/event/vfd5/

Filed Under: Uncategorized

Running the Intel NUC headless with VMware ESXi

April 28, 2015 by FrankBrix

I recently acquired two Intel NUC5i5MYHE units for my home lab. This specific model features Intel AMT/vPro technology, which means you can connect to the NUC remotely via IP, independent of the operating system installed on it. This has several benefits:

– Power on, power off, and reset the NUC
– Hardware information
– Remote console access

My plan was to run the Intel NUCs headless: I don't have any monitor or keyboard connected to them, which is why I needed the AMT/vPro technology. When I installed the NUCs the first time, they were both connected to my living room TV through HDMI. The NUC only has Mini DisplayPort interfaces, so I had to use a Mini DisplayPort to HDMI adapter. This worked perfectly, and I also tested the AMT technology and got the remote console to work.

Once I put the NUCs in the closet and powered them on, the remote console was black; I did not get any picture when connecting. I had just seen this work when connected to my living room TV… Maybe that was the issue, I thought: maybe it doesn't send anything out of the DisplayPort when it does not see a monitor connected. After some googling I found something used to run Mac Minis headless. It is called Fit-Headless: a small HDMI dongle you plug into an HDMI port, after which the Mac Mini / NUC thinks a monitor is connected, and the remote console works.

[Image: Fit-Headless HDMI dongle]

Of course I could not just connect this to the NUC, since it does not have an HDMI port. So I also bought an adapter like this:

[Image: Mini DisplayPort to HDMI adapter]

The result was perfect. Once I rebooted my Intel NUCs with the Mini DisplayPort to HDMI adapter and Fit-Headless connected, I got the console back, and it was no longer black. I use VNC Viewer Plus, which can connect directly to the IP address.

[Screenshot: VNC Viewer Plus remote console]

So if you want to run the NUC headless and use Intel AMT/vPro, you need either a monitor connected or a dongle that "fakes" one.

Filed Under: Uncategorized
