
vFrank

Essence of virtualization



vCenter is down, what do you do?

November 3, 2017 by FrankBrix Leave a Comment

What would you do if your vCenter became unavailable tomorrow, and what impact would that have on your business? The answers typically fall into two categories:

  • My vCenter is not critical to my production. If it goes down, I simply install a new one and connect it to my ESXi servers.
  • CRISIS! If my vCenter is down, there is no self-service, monitoring, or management of the virtual resources. I am in deep water!

This is an old, well-known challenge in every data center: how do you protect your VMware management stack and bring it back online quickly, with a low RTO? VMware keeps expanding its management products. Where a single vCenter server used to be enough, most environments now consist of several servers handling daily operations and routines. Today the stack has grown to include:

  • vCenter (with an internal or external database)
  • vRealize Operations Manager
  • vRealize Automation Center
  • vRealize Log Insight
  • NSX Manager
  • PSC (Platform Services Controllers)
  • SDDC (VMware Cloud Foundation)
  • Management AD and DNS

Beyond these VMware services, there are also several administrative servers the IT department depends on, which can be added as critical components to a disaster plan. With the Software-Defined Data Center, vCenter is no longer the "nice-to-have" it once was; it has become a critical function that must always be online. If vCenter is down, it causes problems for things like:

  • Self-provisioning of new virtual machines
  • Monitoring
  • Third-party products that communicate with vCenter

In the event of an outage or data loss on the management stack, you are in a bad spot. How do you bring the stack online when vCenter, and possibly the management AD and DNS, are down? Is your data protection platform 100% independent of them? With considerable complexity it is possible to build a system with traditional software that can handle this. But how do you test it? How do you ensure that all Windows servers use local service accounts rather than AD accounts? And how does that compromise security? What about the risk of ransomware when traditional software runs on Windows and, in the worst case, the backup data itself is encrypted and unavailable?

To solve this, the problem needs to be looked at with fresh eyes. What is needed is a solution that meets the following requirements:

  • Is not based on Windows, and keeps backup data immutable
  • Has no dependencies on AD and DNS
  • Can be used even when vCenter is unavailable
  • Can perform an Instant-Mass-Restore and bring ALL administrative servers online immediately, as a group
  • Is simple, with everything included in one system (not 4-5 different products and vendors)
  • Has no single point of failure

At Cohesity we solve this elegantly: your management stack is protected and can be recovered in seconds. The unique Cohesity capabilities:

  • Policy-based protection
  • All-in-one system (dedupe storage, backup software, databases, always online, full HA for all components, software and hardware)
  • Instant-Mass-Restore: recovery of 5 or 50 machines in seconds
  • SnapTree: every backup point is fully hydrated and instantly recoverable, with no background I/O operations to create synthetic fulls
  • Test/Dev: the ability to test a recovery at any time and validate that it works

If you would like a demo of this in your own data center, get in touch.

If you want to read more about how Cohesity protects the full VMware management stack, including Cloud Foundation, read more here:

http://www.cohesity.com/vmware-cloud-foundation-vcf-cohesity-white-paper/

Video Demonstration:

https://www.youtube.com/watch?v=jtAoCi4HcX4

Filed Under: certification, Cohesity, Network, PernixData, SSO, Uncategorized, vCloud, vcops, View, vMotion, vSphere

DataProtection 2.0 – Policy Driven with Cohesity

October 25, 2016 by FrankBrix 1 Comment

After a hectic #VMworld week it is good to be back in Copenhagen. At Cohesity we had a very busy week, doing live demos three days straight without any interruptions. The demos are interactive, and we get asked a lot of questions, such as:

  • Do you support replication to another Cohesity cluster on a DR site?  YES
  • Do you support Physical Windows and Linux? YES
  • Do you support object level restore for MS SQL, SharePoint and Exchange? YES
  • Do you support native archival to the cloud such as Azure, Amazon, Google? YES
  • Do you support tape? YES

All of the basic requirements to replace current backup solutions are already in place. We support the applications you have today. So why are people considering moving away from what they have to something new? Let me sum it up:

  • Simplicity! – The backup software runs natively in a hyper-converged system platform that can scale to infinity
  • Policy Driven! – All protection is policy based (more on this later)
  • Test / Dev environment! – Do more with the backup data. Every backup is a fully hydrated snapshot on the Cohesity platform. Example: you can spin up 20 virtual machines in as little as 10 seconds; no data needs to be copied, thanks to the patented SnapTree technology.
  • Analytics! – Let the backup data be more than insurance. Use it with built-in apps like “pattern search” and “password detector” to run analytics on copies of production data.

During a demo of Cohesity, all of these things shine. PowerPoint presentations do not do the product justice. One of the features we demo is our policy-driven approach to data protection.

Let's say we have a customer with the following data retention policy for their tier 1 data:

  • Daily backup kept for 30 days
  • Monthly backup kept for 180 days
  • Yearly backup kept for 3650 days
  • Replicate daily backup to DR site and keep it for 1480 days
  • Archive every monthly backup to the cloud and keep it for 3650 days (Azure, Amazon, Google, Openstack, Custom S3, Tape)

It only takes a few minutes to put the numbers into a policy. Below I have created the "Gold Protection Policy". Once we have the policy, we simply attach it to a VM / Folder / Cluster / DataCenter / TAG in vCenter, and the VM is protected by the policy.

[Image: the Gold Protection Policy as configured in the Cohesity UI]
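The retention schedule above is easy to express as data. Here is a minimal sketch of that idea in Python; the field names and structure are my own illustration, not Cohesity's actual policy format or API:

```python
from datetime import date, timedelta

# Hypothetical encoding of the "Gold Protection Policy" described above.
# Keys and structure are illustrative, not Cohesity's actual schema.
GOLD_POLICY = {
    "daily":                    {"keep_days": 30},
    "monthly":                  {"keep_days": 180},
    "yearly":                   {"keep_days": 3650},
    "replicate_daily_to_dr":    {"keep_days": 1480},
    "archive_monthly_to_cloud": {"keep_days": 3650},
}

def expires_on(taken: date, rule: str, policy=GOLD_POLICY) -> date:
    """Date on which a backup governed by `rule` ages out."""
    return taken + timedelta(days=policy[rule]["keep_days"])

# A daily backup taken on Jan 1 is kept through Jan 31.
print(expires_on(date(2016, 1, 1), "daily"))
```

The appeal of the policy-driven model is that changing the retention of a tier becomes a one-line edit to the policy rather than a change to dozens of individual backup jobs.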

Filed Under: Cohesity

Project Graveyard – Cohesity to the rescue

September 22, 2016 by FrankBrix Leave a Comment

Three weeks into my new role as a Cohesity Sales Engineer, it is time to summarize the experience so far. It has been a busy three weeks with a lot of customer and partner meetings, while at the same time ramping up on all things technical on the platform.

The conversations with customers and partners have produced phenomenal feedback. There is definitely a gap between what they have today and how Cohesity can help them. No one else offers a true web-scale secondary storage platform that helps the customer with multiple use-cases. There are so many entry points into a customer with Cohesity. I have written other blogs about the Cohesity use-cases, and I suggest you start here if you need to learn more.


One meeting with a large Scandinavian company was especially interesting. They have a graveyard project where they need to archive virtual servers for up to 10 years and be able to restore them and retrieve data. File-level restore is not enough; they need to be able to quickly spin up systems that consist of 10 virtual servers or more, access the complete system, and get the data out. Once the data is retrieved, they shut down the servers and put them back in the archive, of course with no changes to the data.

They are in the phase of figuring out how to handle this and they invited Cohesity to a meeting to present how we could solve this complex use-case.

We quickly realized that this would be a perfect fit for Cohesity. What we presented was the following:

  1. Start with a 2U Cohesity block consisting of 4 nodes with 96TB of disk capacity. Once they run out of capacity they simply add more nodes, one by one; there is no requirement to add 4 nodes at a time. The platform scales infinitely.
  2. Once configured and installed register the vCenter server to the Cohesity platform.
  3. Create a policy that defines how long to save the data on the platform
  4. Run a one-time protection job for the servers that need to go to the graveyard.

The great thing about this is that it is a platform without complexity. Storage and backup software are all built into the box. The customer only needs to supply the platform with IP addresses and connect it to their 10Gb network infrastructure.

They could at the other end of the spectrum choose to use tape or just some JBOD disks. But a key requirement is the ability to easily access data. And this is where the magic happens.

Cohesity will help the customer achieve the following:

  1. All data backed up is fully indexed. Google-like search capabilities to find whatever they need
  2. File level restore available
  3. Full VM instant restore available
  4. Use the built-in Test / Dev use-case to pick as many VMs as they need and instantly spin up a clone of the machines backed up as part of the graveyard project. The customer will be able to spin up an environment of 10 or more virtual machines in as little as 10 seconds! This gives them the ability to run a complex system at any time, and also to verify that the systems they have put in the graveyard work as intended.

With these capabilities the customer saw that they were actually getting more than they asked for, and they loved it. No longer was it project graveyard; now it was project walking dead.


Filed Under: Cohesity

Cohesity – Revolutionizing VMware backup

August 24, 2016 by FrankBrix Leave a Comment

Having a background as a System Administrator working in the pre-virtualized world puts a lot of things into perspective. I remember how VMware virtualization changed everything and made my job easier and more fun. No more worrying about hundreds of physical servers, different driver models, and complicated installation processes.

When we were virtualizing, we were running both virtual and physical servers. Virtual servers had higher availability than the physical ones because of vMotion and HA. We still needed to back up the data, and that was easy because we could leverage our existing in-guest backup agent on virtual machines, as we had been doing on physical servers for many years.

VMware released VMware Consolidated Backup (VCB), which helped offload backups. What a beast it was! (not meant in a good way). Luckily, interesting companies like Vizioncore (vRanger) and Veeam appeared with backup solutions built for full-VM backups. These solutions let us lower our RTO dramatically, since we could do a complete VM restore in a few hours. We had many challenges sizing the VM backup environment, since the hardware and software were not coupled together. That gave us flexibility, but also many headaches when the hardware was too slow for backup or restore purposes, extending our RTO by several hours.

These solutions changed our perspective on backups and were innovative, but innovation has since slowed, because they are built on a design that is more than ten years old. It is the cycle of any software solution.

What I love most about working in the IT space is that new technologies will emerge and innovate a space where innovation has stopped! That innovation is happening right now in the backup and copy data management space.

To move to the next level you need a backup solution that tightly ties next-level backup software and hardware together. Once you couple the software and hardware, you get a solution that just works better! No more bad RTO because of undersized hardware, no more agonizing over "What am I gonna buy", and no more multi-day workshops with the backup vendor to size the hardware storing your mission-critical data. No more workshops to work out how many proxy servers, media servers, and transport servers you need, and how much CPU and memory to configure them with. This legacy design worked well for many years, but it is complicated, with LOTS of moving parts. We loved it because it was the best there was.

Cohesity is now innovating this space and the way they do it is beautiful and brilliant. Let me help you understand how they do this.

  1. You build a fully distributed file system (OASIS) and run it on physical servers (nodes). Each node has plenty of CPU power, memory, SSD, and disk capacity available.
  2. The file system uses the patented SnapTree technology, which can store infinite snapshots of data with no performance impact. (Yes, infinite!)
  3. The file system supports dedupe and compression. The dedupe is variable-length, not fixed-size as in most other solutions.
  4. The system scales out from a minimum of 3 nodes with no upper limit. True scale-out architecture.

This will provide you with the perfect place to store your data!
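The variable-length dedupe mentioned in point 3 deserves a closer look. The usual approach is content-defined chunking: chunk boundaries are chosen by the content itself (real systems use a rolling hash such as a Rabin fingerprint), so a small insert early in a file disturbs only nearby chunks, while fixed-size chunking shifts every boundary after the edit. A toy sketch of the effect, with a simple delimiter standing in for the rolling hash; this illustrates the general technique, not Cohesity's actual algorithm:

```python
import hashlib

def chunks_fixed(data: bytes, size: int = 64) -> list[bytes]:
    """Fixed-size chunking: cut every `size` bytes, regardless of content."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def chunks_cdc(data: bytes) -> list[bytes]:
    """Content-defined chunking, toy version: cut where the content matches
    a boundary condition. A '.' delimiter stands in for a rolling hash."""
    out, start = [], 0
    for i, byte in enumerate(data):
        if byte == ord("."):
            out.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        out.append(data[start:])
    return out

def digests(chunks: list[bytes]) -> set[str]:
    return {hashlib.sha256(c).hexdigest() for c in chunks}

original = b"".join(b"Record %04d ends here. " % i for i in range(50))
edited = b"One line inserted up front. " + original  # small insert at the top

# Fixed-size: the insert shifts every boundary, so dedupe against the
# previous version is lost. Content-defined: boundaries realign right
# after the insert, so almost every chunk is shared.
fixed_shared = digests(chunks_fixed(original)) & digests(chunks_fixed(edited))
cdc_shared = digests(chunks_cdc(original)) & digests(chunks_cdc(edited))
print(len(fixed_shared), len(cdc_shared))
```

The same idea is what lets variable-length dedupe keep finding duplicate data even when files drift apart by small edits.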

On top of this sits a Distributed Data Protection engine that runs on the hyper-converged platform. Yes, you read that right: the system is not running as virtual machines, and it is not running on just one node in the cluster. Any node can handle any of the roles and responsibilities. You have fully redundant, highly available backup software running natively on the hardware used to store your data.

This eliminates the need to buy backup software and hardware independently. Using them together brings you next-level services.

To deploy the solution? Simply add the nodes to your data center. You get 4 nodes in a 2U cabinet with 96TB raw capacity. Need more capacity? Add more nodes.

Then you connect it to vCenter and define policies for how you want to protect your VMs. No more backup jobs! It is all policy based.

The solution will not only back up full VMs with application awareness; it will also let you:

  • Replicate data to another site or to a public cloud provider for site redundancy
  • Recover instantaneously: all backups are fully hydrated snapshots, and you can clone them and run them for test/dev purposes
  • Search instantly at the file level: all data is indexed, with Google-like search to help you find the data you need to restore
  • Protect physical Windows and Linux servers
  • Get application awareness for MS SQL, MS SharePoint, and MS Exchange
  • Perform granular VM, file, and object-level recovery
  • And much more

If you are reviewing your backup and data protection strategy, you would be crazy not to take the new innovators into account. The future is here, and the next leap in virtual backup and copy data management is happening now.

Filed Under: Cohesity

Cohesity – What is a secondary storage platform

August 16, 2016 by FrankBrix 1 Comment

Cohesity is a secondary storage platform built on the principles of hyper-convergence. But what does this mean? What is a secondary storage system, and what is the benefit of being built on hyper-convergence?

Let's define secondary storage: secondary storage is everything that does not need strict SLAs.

Primary storage is typically the storage used for running virtual machines in the data center: you need strict SLAs and high performance, and you are probably looking into putting it on an all-flash array or a modern hybrid array. Examples of primary storage systems are XtremIO, Compellent, 3Par, Nimble, Tintri, Pure, NetApp, and the like. Your VMware ESXi servers consume the storage as a block device or NFS share. Once deployed and set up as a datastore in VMware, you start to put your VMs on the storage. Even VMware VSAN is a primary storage system; it is simply built for running virtual machines.

From a storage capacity perspective, roughly 20% of your data should belong on the primary storage platform and the remaining 80% on your secondary storage platform. You may argue that 100% of your data is on the primary storage platform, and that can be the case. But if it is, you are missing out on a big opportunity to move your data center to the next level.

Data that should not be on the primary storage platform includes:

  • Backup data of your virtual machines and applications
  • Test / Dev virtual machines
  • Archive data
  • File shares

Cohesity has built a hyper-converged storage platform that handles all of these workloads. Built on the principles of hyper-convergence, it is designed to scale: the minimum configuration is 3 nodes, and there is no ceiling on how many nodes you can add to the platform. Each node is a compute system with 8x Intel Xeon processors, 64GB of memory, 24TB of hard disk capacity, and 1.6TB of PCIe flash. Full node information here. You get 4 nodes in a 2U chassis, providing 96TB of raw capacity.


Some of the benefits of the Cohesity scale-out hyper-converged storage platform:

  • If a node crashes, the other nodes continue carrying the load. You can then choose to fix the node or leave it down.
  • With VIPs (virtual IPs), any node can perform the work of another node.
  • Non-disruptive upgrades: you can upgrade the software while the system is running. No need for service windows!
  • All nodes have plenty of CPU resources, enabling new services and visibility into data.
  • Linear scalability
  • Scales to infinite capacity
  • OASIS file system with SnapTree, global variable-length dedupe, and compression: all technologies that give you much higher effective capacity than the raw terabytes alone.

A secondary storage platform needs data to be of any use. How do you get data onto it?

  • Use it as a de-dupe target appliance for any existing backup software
  • Use the built-in VADP data protection engine to replace the software you currently use to back up VMware.
  • Use it as a filer (SMB or NFS share)

Once you have the data on the system, it allows you to do new things:

  • All data is always indexed and can be accessed with Google-like search
  • Run analytics on your indexed data (search for credit card numbers, social security numbers, or anything you can think of)
  • Understand dark data: where did all of my capacity go? Where is my hot and cold data?
  • Replicate the data to a Cohesity cluster in another site, or replicate it to a cloud provider
  • Spin up virtual machines from the backup in seconds; all snapshots are fully hydrated and accessible at any time. This is awesome for test/dev use-cases, or for testing the impact of patching a system without impacting production
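The "Google-like search" bullets rest on a standard building block: an inverted index mapping each term to the objects that contain it. A minimal, purely illustrative sketch of the idea (nothing to do with Cohesity's internals; names and sample data are made up):

```python
from collections import defaultdict

def build_index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Map each term to the set of document names containing it."""
    index = defaultdict(set)
    for name, text in docs.items():
        for term in text.lower().split():
            index[term].add(name)
    return index

def search(index, query: str) -> set[str]:
    """Return documents containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

# Hypothetical files captured by backups of three VMs.
backups = {
    "vm-web01/etc/hosts": "10.0.0.1 web01 production",
    "vm-db01/notes.txt": "production database restore runbook",
    "vm-test/readme.md": "test environment only",
}
index = build_index(backups)
print(search(index, "production"))           # both production files
print(search(index, "production restore"))   # only the runbook
```

Because the index is built as data arrives, queries touch only the index, not the backup data itself, which is what makes search over a large estate feel instant.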

These are just some of the benefits of secondary storage.

 

Filed Under: Cohesity

Cohesity – what is the big deal

August 10, 2016 by FrankBrix Leave a Comment

On September 1st I will join Cohesity as a Systems Engineer covering the Nordic region in Europe. When I was looking into Cohesity as a potential future employer, several things got me super excited about their technology. Let me share my perspective and why Cohesity is doing something that is a big deal.

Today in the data center space there is a lot of talk about traditional storage (SAN/NAS) vs. hyper-converged (VSAN, Nutanix, SimpliVity). It is important to have a place to 1. run your VMs and 2. store your VMs' data (primary storage). But what we are not talking about nearly as much is how to manage and store all of the data that is not your VMs running in production. Things like:

  • Backup data
  • File shares
  • Test/Dev workloads (Copy data management, spin up backed up VMs in seconds)
  • Archive data
  • Analytics
  • Cloud

I have never seen a data center where the secondary data was not fragmented or stored inefficiently. Who has not had problems with sizing and scaling all of these silos? Today the only way to cope is to have a myriad of different solutions. This is 1. expensive and 2. highly inefficient.

Cohesity has built a single solution that takes care of all of these use-cases in the data center: one platform that replaces multiple point solutions. The platform is a secondary storage platform built on the foundation of hyper-convergence. To put it very briefly: “It is all about the file system.”

The Cohesity Data Platform is built on the Open Architecture for Scalable Intelligent Storage (OASIS), the only file system that combines infinite scalability with the ability to consolidate multiple different business workloads on a single platform. The platform includes things like:

  • In-line and post-process dedupe (variable-length, not fixed-size)
  • Compression
  • SnapTree distributed redirect-on-write snapshot mechanism (infinite snapshots)
  • Indexing for Google-like search of all data
  • Integrated Data Protection Engine (back up VMware via VADP, or physical servers, with no external backup software needed!)
  • Analytics
  • and much more

The SnapTree technology is nothing short of spectacular. Imagine being able to take infinite snapshots of VMs or data without any performance degradation. (Unlimited!) With SnapTree, every block is always reachable within 3 hops, no matter how many snapshots you have.

If you are wondering how you would use these services in your data center, here are just a few use-cases off the top of my head:

  1. Expose an NFS or SMB share and use the scalable Cohesity platform as a dedupe target appliance for any backup software on the market.
  2. Use the built-in VADP Data Protection Engine to take policy-based backups of your virtual environment. You get next-level RTO and RPO times because of the underlying architecture and file system. This totally eliminates the need for whatever software you currently use to back up VMware.
  3. Use it as a file server.
  4. Use it for test/development workloads. Once VMs are backed up by the built-in Cohesity backup solution, you can spin them up in seconds on an NFS share presented back to your ESXi hosts.
  5. Use the analytics engine to understand your data and find anything you need.

You get all of this in a 2U appliance. If you need more capacity or power you simply add more nodes. This is the power of a true hyper-converged secondary storage product. This is what got me excited!


Filed Under: Cohesity

#Brixit PernixData

August 8, 2016 by FrankBrix 13 Comments

The last three years have been a fantastic journey. With PernixData I have personally been part of helping more than 100 customers get low-latency storage performance with FVP and never-before-seen insight into their data centers with Architect. I have made a lot of new friends, and I will miss all of the great colleagues at PernixData; never before have I been part of such a talented team. I did not expect our ways to part this soon, but due to rumours about a company acquiring PernixData I have chosen to go back to what I love the most: evangelizing and helping a new young startup get its message out in the Nordics.

PernixData was my first startup; I was hired as employee number 75, and number 3 in EMEA. Working for a startup is nothing like any other job I have had. The energy, the passion, and the dedication all around are unbelievable. Everybody wants to succeed and prove themselves. All individual contributions make a difference. There is no way to hide, and great work is appreciated and applauded.

Shortly after I started working I was sent to new-hire training at the HQ in San Jose. I was bombarded with information about the product, the processes, Salesforce, marketing, etc. Being in that early has its benefits, though. One thing I particularly remember was having a private lunch with the CEO, Poojan Kumar, at a nearby restaurant. He welcomed me and told me how much he appreciated that I had chosen to join the company. I was in awe; my feeling was “thanks for letting me be the first SE hire in all of EMEA”, working with a great product with big potential. I now understand that when you join a young startup with only a few months' history of selling, you are taking a big risk: the adventure could last many years, or just a few months if things don't work out.

Thinking back on the last 33 months, it has been a thrilling experience. We celebrated many victories but also had our downs: one quarter selling like crazy, the next with PSODs (purple screens of death) that killed some major deals on the table. That is how it is; the product is still maturing, and not everything is perfect (yet).

My title at PernixData was “Systems Engineer”, but in the startup world your job far exceeds your title. You are doing marketing, lead generation, partner enablement, and actual SE work like POCs and presentations at VMUGs. You pretty much do whatever it takes, and that is what is expected.

What I liked the most was how close you get to everyone at the company. You can talk directly to the CEO, or give the Product Manager a call and have a big influence on the development of the product. Many things experienced in the field were brought back to PM and later added or changed in the product.

Now that I am about to open a new chapter, it has to be with another startup with disruptive technology. Joining one of the big established players is not something I even consider. Luckily the timing is great: when my current employment ends I will start working for Cohesity as a Systems Engineer in the Nordic region. I am certain that we will meet at VMworld or at many VMUGs to come. I am getting ready for another wild ride with long working hours (or, as my girlfriend says, “Frank, you do not work, you do what you love to do”).


Filed Under: Cohesity, PernixData

Copyright © 2023 · News Pro on Genesis Framework · WordPress