Module 10: Scalability
Back on the Summary tab, you can now see the memory used, reserved, ballooned and vmkernel swapped all in one location. pg511
More of the same as the previous version: explanations on shares, reservations etc. The ability to schedule resource changes on a resource pool based upon time might be useful for applications that need resources at set periods, for example overnight or during end-of-month processing. Not something I have seriously looked at in the past, due to the environment I currently work in.
vCenter Linked mode
Now we are getting somewhere; linking multiple VCs together will be great from a support point of view. Another bonus of linked mode is that I no longer have to create the same roles with the same privileges in every VC that I set up. In linked mode I can set the roles up once, and they will automatically be replicated to each VC that I link in. Then I can use those roles on any object inside those VCs.
Another interesting item is vCenter Service Status, especially in linked mode, to see the status of VC and its components. I do think this might be a little redundant, since if the VC services are not running correctly, then you will not be able to connect via the VI Client to get to the status in the first place. However, in linked mode you would get visibility of the other VCs.
Yet more privileges added from 2.5 to 4, for more fine-grained control. Looking forward to the networking and storage privileges especially.
VMotion: back to basics, same as before.
DRS: as above. Items of note: you need to disable DRS for VMs using RDMs and DirectPath. Also, if you have virtualised VC, then exclude that VM as well, so in a disaster you do not have to guess which ESX host in your cluster your VC server is on. There is some cool DRS cluster information now on the Summary tab, plus a distribution chart showing where the resources in the cluster are being used. This looks good.
Power Management. DPM is now fully supported in vSphere. DPM basically gives the cluster the ability to assess the workload within the cluster and place excess compute resources (ESX servers) into standby mode when not needed, saving on power usage during quiet times. What those times are will depend entirely on your organisation. In quiet times, say during the night, your cluster usage might drop due to no users; DPM will migrate all VMs off one or more hosts and place them into standby (basically off), and then in the morning, when users start logging on and resource usage goes up, DPM will power hosts back on to cope with the demand. Great little DPM video demonstration.
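Just to get my head around the idea, here is a toy sketch of that consolidation decision: given per-host load, work out the fewest hosts needed to carry the cluster and flag the rest as standby candidates. This is purely illustrative (the host names, capacities and threshold are made up) and is not VMware's actual DPM algorithm, which weighs far more than raw load.

```python
# Toy DPM-style consolidation check. Illustrative only: not VMware's
# real algorithm; hosts, capacities and thresholds are invented.

def hosts_to_standby(host_loads, capacity_per_host=100, target_util=0.6):
    """Return hosts that could go to standby if the cluster load were
    consolidated, keeping each remaining host at or below target_util."""
    total_load = sum(host_loads.values())
    # Fewest hosts so each stays under the target utilisation (ceiling division).
    needed = max(1, -(-total_load // int(capacity_per_host * target_util)))
    # Keep the busiest hosts running; the rest are standby candidates.
    ranked = sorted(host_loads, key=host_loads.get, reverse=True)
    return ranked[needed:]

# Overnight: three lightly loaded hosts -> two could be powered down.
overnight = {"esx01": 20, "esx02": 15, "esx03": 10}
print(hosts_to_standby(overnight))  # ['esx02', 'esx03']

# Daytime: everything busy -> no standby candidates.
daytime = {"esx01": 80, "esx02": 70, "esx03": 60}
print(hosts_to_standby(daytime))  # []
```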
Requires one of the following protocols: WOL, IPMI or iLO. pg608 (who knows, could end up being an exam question)
DVFS (dynamic voltage and frequency scaling) enabled hardware is now fully supported by ESX4. This includes Intel SpeedStep and AMD PowerNow! compatible CPUs. This feature allows ESX to reduce the amount of power the CPUs in the host draw when the load is reduced, thereby reducing power usage, heat output and cooling requirements. Used in conjunction with DPM, this could make some very good savings. I guess your mileage may vary. Sounds really good though; definitely going to be playing with these when I am back at work.
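The reason DVFS saves more than you would expect from the frequency drop alone is that dynamic CMOS power scales roughly as P = C·V²·f, so lowering voltage along with frequency compounds the saving. A quick back-of-the-envelope check (the capacitance, voltage and frequency figures below are invented, not from any real CPU):

```python
# Rough illustration of the standard CMOS approximation P = C * V^2 * f.
# The figures are made up for the example, not real CPU specs.

def dynamic_power(c, volts, freq_hz):
    """Approximate dynamic power draw in watts."""
    return c * volts ** 2 * freq_hz

full = dynamic_power(1e-9, 1.2, 3.0e9)  # full-speed state: 4.32 W
slow = dynamic_power(1e-9, 1.0, 2.0e9)  # stepped-down state: 2.0 W
print(f"saving: {(1 - slow / full) * 100:.0f}%")  # saving: 54%
```

So a one-third frequency cut, once the voltage drops with it, saves over half the dynamic power in this example.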
So overall, some old stuff and some new stuff.
