By David Klee (@kleegeek)
Introducing a virtualization solution into an existing environment is not complete once the final server is migrated or the last host is powered on. Simply virtualizing a system stack and walking away is not enough.
Virtualization adds a new layer to the infrastructure, and all system owners (DBAs in particular) need a solid understanding of this layer to properly manage virtualized database servers. DBAs require access into this layer to capture performance metrics, giving them a more complete view of the ecosystem, and they need training to properly interpret this new layer of data.
In some environments, this virtualization layer is inaccessible, making it a black box to anyone managing the layers above it. In other environments, the entire system stack underneath the application is already a black box; organizational silos can exist, and those limit the administrator’s scope. Resource utilization and performance statistics provide insight into the virtualization host and storage, and database administrators need to be able to gather and interpret these statistics in order to better manage their systems.
As database administrators, we need to know what is going on in the entire system underneath our database servers. DBAs have been using various tools for years to get some of these details – Perfmon, DMVs, and third-party tools. Other reports are not so obvious or readily available – SAN performance, network traffic, and application performance spring to mind.
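Even a stockpile of raw Perfmon output becomes useful with a few lines of scripting. As a minimal sketch, the snippet below summarizes a Perfmon CSV export (the kind produced by `relog -f csv`); the server name, counter names, and values here are hypothetical, for illustration only.

```python
import csv
import io
import statistics

# Sample rows in the layout of a Perfmon CSV export; the counter names
# and values are hypothetical, for illustration only.
PERFMON_CSV = r'''"(PDH-CSV 4.0)","\\SQL01\Processor(_Total)\% Processor Time","\\SQL01\PhysicalDisk(_Total)\Avg. Disk sec/Read"
"04/01/2013 09:00:00","35.2","0.006"
"04/01/2013 09:00:15","41.7","0.012"
"04/01/2013 09:00:30","88.9","0.031"
'''

def summarize(csv_text):
    """Return {counter_name: average_value} for each counter column."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    columns = {name: [] for name in header[1:]}   # skip the timestamp column
    for row in reader:
        for name, value in zip(header[1:], row[1:]):
            columns[name].append(float(value))
    return {name: statistics.mean(values) for name, values in columns.items()}

for counter, avg in summarize(PERFMON_CSV).items():
    print(f"{counter}: {avg:.3f}")
```

The same pattern extends to any counter set the DBA is granted – swap in SAN latency or network counters and the script needs no changes.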
Greater insight into the full system stack must be granted to the system stack owners (be it DBAs, system administrators, etc.). This is accomplished by opening up statistics access, but that’s not always enough. Additional training for the administrator is required so these statistics can be properly interpreted.
For example, would a traditional DBA know anything about network Layer-3 flow control, or about the storage RAID configuration for multiple disk groups and the impact of other systems’ backups on the same set of disks? It’s doubtful. Should they? I’d like to think they should have at least a basic understanding of both. Having metrics from these scenarios to overlay with their own can reveal the underlying cause of known but previously unidentified issues. Many of these issues are buried in configurations that are functionally transparent to the database system stack, but materialize as performance problems.
Virtualization has the same sort of challenges. Would a traditional DBA understand a host-level CPU Ready metric without a thorough explanation first? What about memory ballooning? Think about how detrimental either of these can be to performance. Both are immediately accessible through either Perfmon or the vCenter statistics screen. In just a few seconds, an administrator can check these items for their performance levels and respond if they are out of the ordinary.
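vCenter reports CPU Ready as a “summation” – total milliseconds the VM’s vCPUs spent ready to run but waiting for a physical CPU during the sampling interval – so interpreting it requires converting to a percentage. A minimal sketch of that conversion, assuming the standard 20-second real-time sampling interval:

```python
def cpu_ready_percent(ready_summation_ms, interval_seconds, vcpus=1):
    """Convert a vCenter CPU Ready summation (ms) to a percentage.

    vCenter reports CPU Ready as total milliseconds the VM's vCPUs spent
    ready-but-waiting during the sampling interval; dividing by the
    interval length (and by vCPU count, for a per-vCPU figure) yields
    a percentage.
    """
    return (ready_summation_ms / (interval_seconds * 1000 * vcpus)) * 100

# Real-time charts sample every 20 seconds; 1000 ms of ready time in one
# 20 s interval on a single vCPU works out to 5% CPU Ready.
print(cpu_ready_percent(1000, 20))   # → 5.0
```

A sustained per-vCPU CPU Ready figure of a few percent or more is commonly treated as a warning sign that the host is oversubscribed, which is exactly the kind of out-of-the-ordinary reading the DBA should respond to.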
Any organization that utilizes performance data analysis for capacity planning already has a good handle on which performance metrics it collects. There is also likely a stockpile of performance data that has never been utilized; the DBA will benefit from access to this collection of statistics, not least for the greater insight it provides into the system’s interactions and how they affect performance.
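That stockpile of history is what turns monitoring into capacity planning. As a minimal sketch (the daily CPU averages below are hypothetical), fitting a least-squares trend line to collected readings projects when utilization will cross a threshold:

```python
def linear_fit(xs, ys):
    """Return slope and intercept of the least-squares line y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

# Hypothetical daily average CPU % over six days of collected history.
days = [0, 1, 2, 3, 4, 5]
cpu_pct = [40.0, 42.0, 44.0, 46.0, 48.0, 50.0]

m, b = linear_fit(days, cpu_pct)
threshold = 80.0
days_until = (threshold - b) / m
print(f"Growth: {m:.1f}%/day; ~{days_until:.0f} days until {threshold}% CPU")
```

The projection is only as good as the data behind it, but even a crude trend line answers the question the business actually asks: how long until we run out of headroom?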
Grant your DBAs (at minimum) read-only access to the vCenter performance screens, which opens up the black boxes on the infrastructure underneath their systems. Provide training on how to interpret the statistics from these systems. Help DBAs better understand their environment and the performance statistics available to them, and they will help your business by becoming more efficient and effective.