
vFrank

Essense of virtualization


vCenter Is Down, What Do You Do?

November 3, 2017 by FrankBrix

What would you do if your vCenter became unavailable tomorrow, and what impact would it have on your business? The answers typically fall into two categories:

  • My vCenter is not critical to my production, and if it is down I simply install a new one and connect it to my ESXi servers.
  • CRISIS! If my vCenter is down there is no self-service, no monitoring, and no management of the virtual resources. I am in deep water!

This is an old and well-known challenge in every data center: how do you protect your VMware management stack and bring it back online quickly, with a low RTO? VMware keeps expanding its management products. Where a single vCenter server used to be enough, most environments today consist of several servers handling the daily operations and routines. The stack has grown to include:

  • vCenter (with an internal or external database)
  • vRealize Operations Manager
  • vRealize Automation Center
  • vRealize Log Insight
  • NSX Manager
  • PSC (Platform Services Controllers)
  • SDDC (VMware Cloud Foundation)
  • Management AD and DNS

Beyond these VMware services, there are also several administrative servers that the IT department depends on, which can be added as critical components to a disaster plan. With the Software-Defined Data Center it is not like before, when vCenter was merely "nice to have": it has become a critical function that must always be online. If vCenter is down, it causes problems for things like:

  • Self-provisioning of new virtual machines
  • Monitoring
  • Third-party products that communicate with vCenter

In the event of an outage or data loss on the management stack, you are in a bad spot. How do you bring the stack online when vCenter, and possibly the management AD and DNS, are down? Is your data protection platform 100% independent of them? With great complexity it is possible to build a system with traditional software that can handle this. But how do you test it? How do you ensure that all Windows servers use local service accounts and not AD accounts? In what way does this compromise security? And what about the risk of ransomware, when traditional software runs on Windows and, in the worst case, the backup data ends up encrypted and unavailable?

To solve this, the problem must be looked at with fresh eyes. What is needed is a solution that meets the following:

  • Is not based on Windows, and keeps backup data immutable
  • Has no dependencies on AD and DNS
  • Can be used even when vCenter is unavailable
  • Can perform an Instant Mass Restore and bring ALL administrative servers online immediately, as a group
  • Is simple, with everything included in one system (not 4-5 different products and vendors)
  • Has no single point of failure

At Cohesity we solve this elegantly: your management stack is protected and can be recovered in a few seconds. The unique Cohesity capabilities:

  • Policy-based protection
  • All-in-one system (dedupe storage, backup software, databases, always online, full HA for all components, software and hardware)
  • Instant Mass Restore: recovery of 5 or 50 machines in a few seconds
  • SnapTree: every backup point is fully hydrated and instantly recoverable; no background I/O operations to create synthetic fulls
  • Test/Dev: the ability to test recovery at any time and validate that it works

If you would like a demo of this in your own data center, get in touch.

If you want to read more about how Cohesity protects the full VMware management stack, including Cloud Foundation, read more here:

http://www.cohesity.com/vmware-cloud-foundation-vcf-cohesity-white-paper/

Video Demonstration:

https://www.youtube.com/watch?v=jtAoCi4HcX4

Filed Under: certification, Cohesity, Network, PernixData, SSO, Uncategorized, vCloud, vcops, View, vMotion, vSphere

vC Ops: Building a Dashboard Based on Super Metrics

October 7, 2013 by FrankBrix

If you have been looking into vCenter Operations Manager you have probably heard about Super Metrics, Dashboards, and the Custom UI. In this article I will help you build your own dashboard that uses a super metric. The high-level approach is:

  1. Create super metric packages and add the metric packages to your inventory
  2. Build a super metric and add the super metric to a super metric package
  3. Build a dashboard that uses the super metric

1. Create super metric packages and add the metric packages to your inventory

Many of the guides I have seen start by building the super metrics. I think it makes more sense to start by building empty super metric packages and applying them to your inventory. My suggestion is to build super metric packages based on the different levels:

  • Host Super Metric Package
  • Cluster Super Metric Package
  • Virtual Machine Super Metric Package
  • Datastore Super Metric Package
  • etc.

Then, when you build your super metrics, you add them to the package they belong to, and after a few minutes data should be available in the inventory. To build your package groups, start by logging in to the vC Ops Custom UI. In my environment the URL is: https://vcopsuivm.vclass.local/vcops-custom

Then go to Environment – Super Metrics – Package editor 

After creating your package groups you should see a screen similar to this:

[Screenshot sm1: the created super metric packages]

Now we need to add the packages to our inventory. This takes two steps: one is to add them to existing objects, and the other is to add them to future objects. For instance, when you add a new host to a cluster, you want the super metric package to be applied to that host automatically.

To add the host super metric package go to Environment – Environment Overview – Resource Kinds – Host Systems 

Then select all hosts (if you want the super metric on all hosts), click "Edit", and add the Host Super Metric Package to the objects. This means that whenever you create a new super metric for a host, you just add it to the correct super metric package and it starts working right away!

[Screenshot sm2: applying the Host Super Metric Package to all hosts]

Do the same for your Cluster, Datastore, and Virtual Machine super metric packages.

The second step is to make sure the super metric packages will also apply to new objects, for instance new hosts or newly created virtual machines.

Go to Environment – Configuration – Resource Kinds Defaults and configure it as shown in the screenshot:

[Screenshot sm3: Resource Kinds Defaults configuration]

Now we are ready to start building our Super Metrics.

2. Build a super metric and add the super metric to a super metric package

I have borrowed an example of a Super Metric from the following pdf. The super metric we want to build finds the virtual machine with the highest amount of memory ballooning in KB. To do this we need to use the metric "Memory Balloon (KB)" and the maxN function in vC Ops.
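
To make the intent of the formula concrete, here is a small Python sketch of what this super metric computes, assuming maxN with n=1 simply returns the highest value of the metric across the VMs (the VM names and balloon figures are invented for illustration):

```python
# Illustrative sketch of what the maxN super metric computes:
# given one "Memory Balloon (KB)" reading per VM, the cluster-level
# super metric tracks the worst (most ballooned) VM.

def max_n(values, n=1):
    """Average of the n highest values; with n=1 this is simply
    the maximum, which is how the super metric above is used."""
    top = sorted(values, reverse=True)[:n]
    return sum(top) / len(top)

# Hypothetical balloon samples (KB) for one collection interval.
balloon_kb = {"vm-app01": 0, "vm-db01": 524288, "vm-web01": 131072}

worst = max_n(balloon_kb.values())
print(worst)  # 524288.0 - the most ballooned VM in the cluster
```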

Go to Environment – Super Metrics and create a new Super Metric. Look at the screenshot:

[Screenshot sm4: creating the maxN super metric]

After creating the super metric, go to the super metric packages and add it to the "Cluster Super Metric Package".

Now you just need to wait at least 5 minutes for the metric to start appearing. The easiest way to verify it is working is to go to the normal vC Ops user interface, select your cluster, go to Operations – All Metrics, and look under the super metric category.

[Screenshot sm5: the super metric appearing in the metric chart]

When you see data in the Metric Chart you are ready for the next step, which is to add it to a dashboard.

3. Build a dashboard that uses the super metric

There are many guides out there showing how to build your own dashboards. In this configuration I built a dashboard with two "Generic Scoreboard" widgets and two "Metric Sparkline" widgets.

[Screenshot sm6: dashboard widget configuration]

And the END result

[Screenshot sm7: the finished dashboard]

Filed Under: vcops

vCenter Performance Graph Versus vCenter Operations Manager Metric Chart

March 11, 2013 by FrankBrix

Most people who work with vSphere already know the vCenter Performance Graphs. What most people don't know is the "Metric Chart" in vCenter Operations Manager. In this article I want to discuss the pros and cons of vCenter Performance Graphs versus vCenter Operations Manager Metric Charts.

The Performance Graphs in vCenter can show information about the following: CPU, Datastore, Disk, Memory, Network, Power, System, and Virtual Disk. You can then select whether you want the graph to show Real-time, Past day, Past week, Past month, Past year, or Custom.

The following sums up what you are seeing when looking at these graphs:

  • Real-Time: a data point every 20 seconds
  • Past Day: a data point every 5 minutes
  • Past Week: a data point every 30 minutes
  • Past Month and further back: a data point every 2 hours

So when you look at the Real-Time graph, you are actually looking at a graph that is updated every 20 seconds: 180 data points in just one hour. But when you look at Past Day, the graph is updated every 5 minutes. This is where rollup occurs; without rollup your vCenter database would grow a lot in size. Whatever you look at in the Past Day view, the graph is updated at 5-minute intervals. But what happens if you look at the graph more than 24 hours back? If it is within the past week, the graph is updated every 30 minutes, and further back than that it is updated every 2 hours! This basically tells us that the vCenter Performance Charts are BEST in Real-Time and acceptable for Past Day, but further back than 24 hours the data is rolled up and we lose valuable insight. Look at the following screenshots to see what happens:

[Screenshot realtime_cpu_ready: CPU ready in the Real-Time view]

[Screenshot real_time_cpu_day: the same counter in the Past Day view]

[Screenshot real_time_yesterday: the same counter more than 24 hours back]

 

You should have noticed how we lose information when we try to investigate performance back in time. The further back we go, the more we lose. Not only do the data points get rolled up; counters disappear too. Try going back to the first graph: there we had both the "Ready" and "Used" counters, but on the other two we only have "Ready". No, I did not remove it. This is another behavior of vCenter: not only does it roll data up, it also removes counters as soon as you are looking at data from more than one hour ago.

[Screenshot cpu_realcpu_pastday: counters available in Real-Time vs. Past Day]

 

Now you should know some of the limitations of vCenter. So what is the solution if you want to look at data from 1) more than one hour ago, or 2) more than one day ago? The answer is of course vCenter Operations Manager's Metric Charts. vCenter Operations Manager collects every counter, does not remove any of them when looking back, and has data points every 5 minutes. So yes, you can argue that the past-hour graph in vCenter is better than vCenter Operations Manager! That is true. The only time you actually should use the vCenter Performance Graphs is when looking at the "Real-Time" graph.
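
The rollup arithmetic described above is easy to reproduce. The sketch below (plain Python, with invented CPU ready values) averages 20-second samples into one 5-minute bucket, the way the Past Day view does, and shows how a short spike all but disappears:

```python
# 15 samples at 20-second intervals make up one 5-minute bucket
# in the Past Day view; rollup averages them into a single point.

def roll_up(samples, bucket_size):
    """Average consecutive samples into fixed-size buckets."""
    return [
        sum(samples[i:i + bucket_size]) / bucket_size
        for i in range(0, len(samples), bucket_size)
    ]

# One 5-minute window of invented CPU ready values (ms):
# mostly idle, with a single large spike.
real_time = [20] * 14 + [3000]

past_day = roll_up(real_time, 15)
print(max(real_time))  # 3000 - the spike is obvious in Real-Time
print(past_day[0])     # ~218.7 - after rollup the spike is gone
```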

Let's take a look in vCenter Operations Manager:

[Screenshot cpu_last_hour_vcops: the Last Hour metric chart in vC Ops]

 

The graph is for the "Last Hour", but I can dive into any period and see all counters, updated every 5 minutes. Maybe it is Monday morning and a performance problem was reported for Saturday between 2 am and 3 am. Just change the graph to that interval: the data is there, and you have EVERY counter.

Filed Under: vcops, vSphere

Understanding VC OPS oversized virtual machines

January 31, 2013 by FrankBrix

After teaching vCenter Operations Analyze & Predict and facilitating private workshops with customers, there is one question that always pops up: what are the criteria for vC Ops to say a VM is oversized or undersized?

Understanding how it is calculated is just one thing; what is more important is what you do with the sizing recommendation. Even though Operations may find a machine to be oversized, that does not necessarily make it true.

The first thing you need to do is go into "Configuration" on the vC Ops website. Here you can configure the thresholds for oversized and undersized VMs. The settings in the screenshot are at their defaults. I have made some marks in the screenshot: "1. Oversized threshold: 1%", "2. CPU demand less than: 30%", "3. Memory demand less than: 30%".

What this means is that if a virtual machine is using less than 30% CPU or 30% memory for 1% of its running time, it is considered oversized. Actually, it is not based on time but on "area". Look at the next screenshot: the threshold of 30% is the "U", and the threshold of 1% relates to the blue area. So if the blue area is more than 1% of the graph, the VM will be considered oversized.
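
Written out as code, the area rule looks roughly like this. This is my reading of the mechanism, not VMware's actual implementation, and the sample demand values are invented:

```python
# Area-based oversizing check: a VM is flagged as oversized when
# the area between its demand curve and the threshold "U" (where
# demand sits below U) exceeds a given percentage of the whole
# graph area (100% x number of samples).

def is_oversized(demand_pct, threshold_pct=30.0, area_pct=1.0):
    blue_area = sum(max(0.0, threshold_pct - d) for d in demand_pct)
    graph_area = 100.0 * len(demand_pct)  # graph spans 0-100%
    return 100.0 * blue_area / graph_area > area_pct

# Invented CPU demand samples (%):
idle_vm = [5, 4, 6, 5, 80, 90, 5, 4]       # idle with short peaks
busy_vm = [60, 70, 65, 80, 75, 90, 85, 70]

print(is_oversized(idle_vm))  # True - most samples sit far below U
print(is_oversized(busy_vm))  # False - demand never drops below U
```

Passing threshold_pct=15 and area_pct=10 to the same function reproduces the tighter settings recommended later in this post.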

With the default settings of 30% CPU/memory and an oversized threshold of 1%, I bet you are going to have A LOT of virtual machines that are considered oversized. For instance, if you have machines that are idling most of the time but need LOTS of CPU at their peaks, they will definitely be considered oversized, even though from the administrator's perspective they are not.

I would change the threshold settings; by default you get way too many oversized virtual machines. I recommend the following:

  • Oversized Threshold: 10%
  • CPU Demand less than: 15%
  • Memory Demand less than: 15%

With these settings, a virtual machine is oversized if the area on the graph below the threshold "U" (15% CPU/memory) is 10% or more of the graph. This is just one recommendation; you can tweak the settings to your own liking.

 


 

Filed Under: vcops

vCenter Operations Manager and Heat Maps Cool Feature

January 16, 2013 by FrankBrix

I have been playing around with vCenter Operations Manager a lot lately. The more I use the product, the more useful I find it. At first glance you just see 3 major badges and 8 minor ones, with colors that tell you whether an object is performing as it should. When you go deeper into the product you find a very useful feature called Heat Maps, located under the "Analysis" tab. Here is an example:

This is a custom-created Heat Map that shows every VM, grouped by ESXi host and colored by CPU ready time in ms. This is the configuration for the heat map:

What you should notice in the configuration is that the VMs are colored by "cpu usage" but the size is fixed, meaning every VM box will be the same size. If we wanted to, we could also map cpu usage | ready (ms) to the size. The heat map would then change to the following:

And the configuration now looks like this:

In this case we used the same counter for both color and size. To make it more interesting, we could create a heat map with two different counters. Let's say we wanted to figure out which machines in our environment use the most IOPS and have the highest latency; then we could create a configuration like this:

And the result would be the following heat map, where the size of a box is driven by "commands per second" (IOPS) and the color is based on read latency (you could change it to write latency or use overall latency).
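
The mapping the heat map applies can be sketched in a few lines of Python: one metric drives the relative box size, the other drives the color. The VM names, numbers, and latency bands below are all invented for illustration:

```python
# Heat map mapping sketch: box size from "commands per second"
# (IOPS), color from read latency bucketed into green/yellow/red.

def heat_map_cells(vms, latency_bands=(10, 25)):
    """Return (name, relative_size, color) per VM. Sizes are
    normalized against the busiest VM; the latency bands (ms)
    behind the colors are arbitrary."""
    max_iops = max(iops for _, iops, _ in vms)
    cells = []
    for name, iops, latency_ms in vms:
        size = iops / max_iops
        if latency_ms < latency_bands[0]:
            color = "green"
        elif latency_ms < latency_bands[1]:
            color = "yellow"
        else:
            color = "red"
        cells.append((name, size, color))
    return cells

# Hypothetical (name, IOPS, read latency in ms) samples.
vms = [("vm-db01", 4000, 30), ("vm-web01", 400, 5), ("vm-app01", 2000, 15)]
for name, size, color in heat_map_cells(vms):
    print(name, size, color)  # vm-db01 gets the biggest, reddest box
```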

 

The custom heat maps are a great way to use the data your vCenter Operations Manager has collected. I suggest you look into this feature if you are lucky enough to have the product installed.

Filed Under: vcops
