Server virtualization can bring companies many benefits, but there are also cost and operational risks that IT managers should know about. An overview.
Server virtualization is regarded as a silver bullet against sprawling IT costs and complex server landscapes. The downsides of the technology and its use, however, are often overlooked. Virtualized infrastructures bring difficulties of their own, from security problems through high demands on high availability and disaster recovery to tricky licensing questions. The potential benefits such as cost savings and flexible IT deployment can only be achieved by those who know the risks and avoid the typical traps.
Case 1: Old wine in new bottles
The starting point: many of the rules that applied to physical machines and servers, each of which was responsible for a single application, no longer hold in virtualized environments. Many of the tools, organizational structures, and pre-virtualization habits of thought are no longer a good fit for modern environments. The technology alone, that is, the hypervisor, cannot revolutionize the data center or cut costs as if in passing. Put briefly: virtual is not physical.
A hypervisor does not make a modern IT environment
While decoupling servers from the hardware brings considerable simplification and more flexibility, the demands on intelligent management and automation of the new environment increase:
VM sprawl: The ease with which servers are turned into virtual machines (VMs) often results in more administrative work, because suddenly there are more (virtual) servers than before and the sheer number of systems has to be managed. Related problems: lack of overview – in some cases virtual machines can only be identified by their file name or a little metadata –, excessive storage consumption, and update and maintenance effort. Organizational measures and policies should take effect from the outset to prevent this. In addition, organizations should evaluate suitable tools for managing virtual environments as technical support; a simple inventory sketch follows after this list.
New tools and processes: New options such as flexible provisioning, image management, or migrating VMs from one machine to another require appropriate tools and new policies to keep control over the overall system. Without standardized processes and environments, nothing works.
Failure risk: Dependence on a few servers makes the environment vulnerable. Virtualization consolidates many servers onto little hardware, which means that many servers fail at once when a single machine fails. High demands are therefore placed on the equipment and mechanisms for high availability. As a basis, IT organizations need shared storage in the form of a SAN (storage area network), usually based on Fibre Channel or iSCSI. The storage network itself must also be designed for high availability.
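The following is a minimal sketch of the kind of inventory check such a management tool performs, flagging VMs that can no longer be clearly identified or that have gone stale. The data structure and the staleness threshold are hypothetical and stand in for what a real tool would read from the hypervisor:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical inventory record; real tools pull this from the hypervisor API.
@dataclass
class VM:
    name: str
    owner: str            # empty if nobody claimed responsibility
    purpose: str          # empty if undocumented
    last_powered_on: date
    disk_gb: int

def sprawl_report(vms, stale_after_days=90):
    """Flag VMs that are undocumented, orphaned or long unused."""
    today = date.today()
    findings = []
    for vm in vms:
        reasons = []
        if not vm.owner:
            reasons.append("no owner recorded")
        if not vm.purpose:
            reasons.append("purpose unknown (only file name/metadata)")
        if today - vm.last_powered_on > timedelta(days=stale_after_days):
            reasons.append(f"not powered on for >{stale_after_days} days "
                           f"({vm.disk_gb} GB storage still allocated)")
        if reasons:
            findings.append((vm.name, reasons))
    return findings

# Example data (invented for illustration)
inventory = [
    VM("erp-test-03", "", "", date(2010, 1, 15), 120),
    VM("web-prod-01", "team-web", "customer portal", date.today(), 60),
]

for name, reasons in sprawl_report(inventory):
    print(name, "->", "; ".join(reasons))
```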
Keep storage in mind, virtualize storage
Server virtualization cannot succeed without a well-thought-out storage system. Local disk space is not enough for demanding systems that have to meet requirements for high availability, dynamic load distribution and a high degree of automation. Shared storage is an indispensable prerequisite. The requirements for such storage are reliability and fault tolerance, but also flexibility and cost control. In virtualized environments in particular, storage demand can explode.
Therefore, concepts for using storage efficiently and for transparently integrating different systems from different manufacturers into one overall system are required. These requirements can only be met with storage virtualization, which systematically abstracts from the underlying hardware through an additional software layer. Accessing systems and applications are decoupled from the hardware. This has several important effects: setting up logical storage areas (LUNs), expanding volumes, or migrating data no longer requires physical intervention, and many management processes run without interruption. As with server virtualization, storage virtualization also improves availability.
Vulnerable security: While overall operational reliability can be ensured, and even improved compared with purely physical servers, by implementing HA mechanisms, the additional hypervisor software layer potentially offers a new surface for attacks on system and data security. What used to be sheet metal is now a pure data object: the former server is reduced to a file that can be copied, moved and deleted, which opens up new security problems. Beyond purely technical security gaps, management problems often tear further holes in security. Security and compliance specialists are frequently not brought on board for the design and implementation of virtualization setups. This continues into network management, where different teams are responsible for setting up and managing the new virtual networks alongside the existing physical network structures, and coordination problems inevitably come into play. The administrators of the VMs are not necessarily the administrators of the server infrastructure; in introduction projects, however, these roles are often mixed carelessly, so that parts of the security architecture implemented to date are undermined.
Case 2: False and exaggerated expectations
The virtualization euphoria tempts companies to approach both the capabilities of the technology and the potential benefits with mistaken and partly exaggerated expectations. When planning and implementing virtualized servers, the consolidation capabilities of the new infrastructure are therefore often overestimated, which is partly encouraged by optimistic vendor claims. In test setups and under laboratory conditions, up to 50 different VMs can run on one machine quickly and reliably enough. In realistic production scenarios, significantly lower consolidation ratios are achieved; a limit of 6 to 8 VMs per machine is quite common for operating resource-hungry, business-critical applications. If higher consolidation levels are assumed in the planning, there is a risk of a cost explosion, because more machines have to be deployed, which in turn require more storage, network capacity, licenses and administration.
Particularly at the beginning of virtualization projects, smaller and less critical servers are consolidated onto virtual systems, which usually works well. Problems arise when, in later project phases, these ratios are carried over to heavily loaded, mission-critical systems, possibly with high demands on network bandwidth. This inevitably leads to blatant bottlenecks.
As a rule of thumb, virtualization hosts should be sized so that at most 60 percent of the physical resources are used. This still achieves high consolidation ratios while leaving sufficient reserves; the correspondingly higher costs must be planned for.
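A minimal sketch of this sizing arithmetic, assuming purely illustrative per-VM and per-host figures; only the 60 percent ceiling is taken from the rule of thumb above:

```python
import math

def hosts_needed(vm_count, vcpu_per_vm, ram_gb_per_vm,
                 host_cores, host_ram_gb, max_load=0.60):
    """Number of hosts so that no host exceeds max_load of its resources."""
    usable_cores = host_cores * max_load
    usable_ram = host_ram_gb * max_load
    vms_per_host = min(usable_cores // vcpu_per_vm,
                       usable_ram // ram_gb_per_vm)
    return math.ceil(vm_count / vms_per_host)

# 40 business-critical VMs, 2 vCPUs / 8 GB each, on 16-core / 128 GB hosts
print(hosts_needed(40, 2, 8, 16, 128))        # 60 % ceiling -> 10 hosts
print(hosts_needed(40, 2, 8, 16, 128, 1.0))   # naive 100 % plan -> 5 hosts
```

The gap between the two results is exactly the cost surprise described above: the reserves demanded in production roughly double the hardware, storage and license footprint compared with an optimistic plan.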
Project killers: server and network performance
Because of the additional software layer that is introduced, virtualization costs some computing performance and network throughput. In well-designed infrastructures, in which the technical components are optimally matched to each other, this is hardly noticeable, if at all. However, if just one of the many components is off the mark, it can drag down the performance of the entire environment to the point where it is unusable under production conditions. Problems arise primarily with the network connection and with storage I/O. Above all, the storage system must be performance-optimized for the virtualized environment. A central factor is the access pattern of the virtualized environment to the storage, which is usually random I/O. For this, the transfer bandwidth is not the critical figure, but the number of possible input/output operations (I/O) per second.
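A rough back-of-the-envelope sketch of why the number of I/O operations per second, not the bandwidth, usually becomes the limiting factor; all figures (VM count, IOPS per VM, block size, per-disk IOPS) are assumptions chosen for illustration:

```python
# Assumed workload: 30 VMs, each averaging 150 random I/Os per second at 8 KB.
vms, iops_per_vm, block_kb = 30, 150, 8

total_iops = vms * iops_per_vm                  # 4,500 IOPS
bandwidth_mb_s = total_iops * block_kb / 1024   # ~35 MB/s

# Assumed back end: one 15k-rpm disk handles ~180 random IOPS (rule-of-thumb value).
disks_needed = -(-total_iops // 180)            # ceiling division -> 25 spindles

print(f"IOPS demand: {total_iops}, bandwidth only ~{bandwidth_mb_s:.0f} MB/s")
print(f"Spindles needed (ignoring RAID write penalty): {disks_needed}")
```

Even a single Gigabit link could carry the 35 MB/s of traffic, yet the array still needs a substantial number of spindles (or caching/SSD) to deliver the random IOPS.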
In addition, not all workloads and servers are suitable for virtualization. Project managers should select virtualization candidates according to their technical characteristics, especially with regard to performance. If a physical server already runs at more than 50 percent CPU load and needs more than 6 or 8 GB of RAM, it is hardly a suitable candidate, because the virtual environment can never be faster than the underlying physical hardware; moreover, the consolidation goal cannot be met. The same applies to high I/O utilization.
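A small sketch of such a screening step, using the thresholds mentioned above (50 percent CPU load, about 8 GB RAM) plus an assumed I/O limit; the measured values would come from monitoring data and are invented here:

```python
def is_candidate(cpu_load_pct, ram_gb, io_mb_s,
                 max_cpu=50, max_ram_gb=8, max_io_mb_s=100):
    """Return True if a physical server looks suitable for virtualization."""
    # The I/O threshold is an assumed example value, not taken from the article.
    return cpu_load_pct <= max_cpu and ram_gb <= max_ram_gb and io_mb_s <= max_io_mb_s

servers = {
    "file-01": (20, 4, 30),    # lightly loaded -> good candidate
    "db-01":   (75, 32, 250),  # CPU-, RAM- and I/O-heavy -> keep physical
}

for name, (cpu, ram, io) in servers.items():
    verdict = "virtualize" if is_candidate(cpu, ram, io) else "keep physical (for now)"
    print(f"{name}: {verdict}")
```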
Virtualization: will the vendor help or not?
Virtualization newcomers in particular may be surprised that many software vendors still handle support for their products in virtual environments very restrictively. Some players, such as SAP, IBM, or Oracle, officially support the market-leading hypervisors from Microsoft, VMware, and Citrix (Xen) for running their products, while other virtualization providers are left out. In a support case, some vendors demand that an "unknown" problem first be reproduced on a physical server; support is then provided only if the problem has no connection with the hypervisor. Oracle's stance is particularly piquant: the database vendor exempts only its own hypervisor, "Oracle VM", from this constraint.
The support question comes in other variants as well: with some systems, including some common Linux distributions, technical support is not in doubt, but it entails additional costs. Additional instances of a virtual server can be created free of charge, but the customer then pays the manufacturer for support for each additional server, whether physical or virtual.
Case 3: Lack of technical know-how
At the beginning of virtualization projects, the complexity of administration and of the associated technical environment is often underestimated. Where detailed knowledge is lacking, problems and incalculable risks are pre-programmed. The technology holds plenty of pitfalls for anyone who underestimates the details. A good part of virtualization's appeal, for example, comes from the ability to move VMs back and forth between machines. The prerequisites for this, however, are numerous: the participating machines must run the same hypervisor, be linked into a pool, and be connected to shared network storage. In addition, the CPUs of the participating machines must be "identical", that is, at least belong to the same family. The later expansion of a pool with additional machines can fail because of this: if the processors differ even slightly within the same machine model, the hypervisor may refuse to migrate the VM.
The trouble with backup
As a study by Kroll Ontrack shows, data is frequently lost in virtualized environments. In most cases the cause was human error, such as the accidental deletion of VMs. Hardware defects account for a quarter of the incidents, which shows that backup and DR (disaster recovery) procedures are not yet well established in virtual environments. Due to a lack of knowledge, the requirements for backing up VMs and restoring them are often underestimated. They differ in part from those in purely physical setups, so the processes for backup, recovery and disaster recovery have to be redefined accordingly. It is important to know, for example, that backup runs can lead to resource bottlenecks because the virtualized servers and the relevant network paths are more heavily utilized.
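One simple way to defuse such bottlenecks is to stagger backup jobs so that only a limited number of VMs per host are backed up at the same time. The following sketch assumes hypothetical VM-to-host assignments and a fixed slot length; real backup tools offer comparable scheduling options:

```python
from collections import defaultdict

def stagger_backups(vm_hosts, max_parallel_per_host=2, slot_minutes=30):
    """Assign each VM a backup start offset so that no host runs more than
    max_parallel_per_host backup jobs in the same time slot."""
    slots_in_use = defaultdict(int)       # (host, slot) -> running jobs
    schedule = {}
    for vm, host in vm_hosts.items():
        slot = 0
        while slots_in_use[(host, slot)] >= max_parallel_per_host:
            slot += 1
        slots_in_use[(host, slot)] += 1
        schedule[vm] = slot * slot_minutes  # start offset in minutes
    return schedule

vm_hosts = {"vm1": "esx1", "vm2": "esx1", "vm3": "esx1", "vm4": "esx2"}
print(stagger_backups(vm_hosts))   # vm3 is pushed to the next 30-minute slot
```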
Case 4: Software Licensing and Virtualization
Migrating existing servers and applications into a virtual environment often leads to additional licensing costs, or at least to changed licensing conditions, for both operating systems and application software. Inexperienced virtualization newcomers can easily stumble into a cost trap here, or at least into a compliance trap.
Two licensing models are widespread in the software world: pricing by the number of processors, and tying the license to a specific machine. Both approaches often break down on virtual systems. On the one hand, several processors or cores are usually at work in a virtualization host. On the other hand, it is precisely a core benefit of virtualization that VMs can be moved back and forth between individual servers, be it to distribute load or to carry out maintenance tasks without interruption.
Conversely, several virtual machines may share a single processor while the user still has to pay the full fee per processor. Or a VM is only active at certain times and does not run at all the rest of the time. In many of these cases the user pays "too much". No uniform, virtualization-friendly and user-friendly model has yet established itself on the market; the software industry is still experimenting with different approaches, such as pay per use.
It is important for IT executives to keep an eye on the scaling effect: if the number of virtual machines grows over time, costs rise linearly if a full license has to be paid for every VM. It is worth looking for a license model in which a larger or even unlimited number of instances of a system may run on one server. For example, it can pay off to buy the Enterprise edition of Windows Server 2008 only once, since it may be used in up to four virtual machines at the same time. The Datacenter edition allows an unlimited number of instances of the operating system in virtual environments.
The Microsoft Windows Server 2008 R2 Datacenter Edition shows very clearly that the licensing devil is in the details: the lifting of the instance limit applies only with correct licensing. This requires buying one license per CPU socket, and a minimum number of CPU licenses per server applies. A server with one six-core processor requires two licenses of Windows Server 2008 R2 Datacenter Edition and, beware, a second processor of the same type, since installation is only permitted on a machine with at least two sockets. If such Windows Server 2008 VMs are moved to a different host, the target host must also carry the appropriate Datacenter licensing.
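Expressed as a small calculation based on the rules described above (one license per occupied CPU socket, at least two licenses per server); the price is a placeholder, not a real list price:

```python
def datacenter_licenses(occupied_sockets, min_per_server=2):
    """Windows Server 2008 R2 Datacenter: per-socket licensing,
    at least two licenses per server (rule as described in the text)."""
    return max(occupied_sockets, min_per_server)

price_per_license = 2500   # placeholder figure, not a real list price
for sockets in (1, 2, 4):
    n = datacenter_licenses(sockets)
    print(f"{sockets} socket(s): {n} licenses, ~{n * price_per_license} (placeholder currency)")
# A host with a single six-core CPU therefore still needs two licenses.
```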
A further effect stems from the extended technical possibilities of the virtualization environment. If a customer has been running a hardware-based server with a 32-bit architecture and wants to exploit the advantages of 64-bit architectures as part of the move to the virtual world, he usually has to pay again for a new operating system license. The same applies to Windows servers if more than 4 GB of RAM is to be used: this requires buying one of the Enterprise variants of the operating system, which can drastically increase project costs.
The licensing models of applications and infrastructure components such as databases are often still oriented towards the world of physical machines and have not yet been adapted to the requirements of dynamic virtualized environments. For users of server virtualization, this results in an unexpectedly high cost factor, on top of complex licensing conditions and a lack of transparency. Problems arise mainly from the tight binding of licenses to the underlying hardware, in particular to CPU capacity. This can lead to difficulties if only part of the physical capacity is to be used and licensed.
Example 1: If an application runs in six virtual machines on a server with a quad-core CPU, then depending on the license conditions either four licenses (one per core) or six licenses (one per VM) may be required for the software.
Example 2: For disaster recovery or backup purposes, clones of virtual machines are often created and stored offline. Depending on the license model, using a backup software product can require a license for each of these VMs, which can become a real cost factor. However, software vendors are gradually adapting their models to the new circumstances. Acronis, for example, offers a license for its backup solution that covers the backup and recovery of up to 99 VMs at a fixed price. Prerequisite: the VMs must reside on the same physical machine.
A further problem can arise because capping the capacity of a VM is often not possible, or not possible to a sufficient degree. If an application runs in a VM that is sized far too generously, this can make maintenance and support contracts considerably more expensive.
Licensing of virtualized databases
The conflict between vendors and their customers is particularly evident in the licensing of databases on virtual servers. Unless hard partitioning is used (this requires segmenting the server with one of a few certified hypervisors), Oracle applies its soft-partitioning rules. This means that all physical processors or cores present in the server must be licensed, regardless of how many CPUs the VM in which the database runs actually addresses and uses.
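A small illustration of what the soft-partitioning rule means in numbers. The host configuration is invented, and the core factor of 0.5 is an assumption based on Oracle's published core factor table for common x86 processors:

```python
def oracle_processor_licenses(physical_cores, core_factor=0.5):
    """Soft partitioning: every physical core of the host counts,
    no matter how few vCPUs the database VM actually uses."""
    return physical_cores * core_factor

# Host with 2 sockets x 8 cores = 16 physical cores; the DB VM uses only 2 vCPUs.
print(oracle_processor_licenses(16))   # 8.0 processor licenses required anyway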
IBM uses a similar model; with soft partitioning, however, the IBM License Metric Tool is used to determine the maximum processor capacity actually used by the database, and only that capacity has to be licensed.
Microsoft distinguishes between Server+CAL licensing and processor licensing. In the first variant, the user licenses the users or devices via CALs (Client Access Licenses) plus the necessary number of server licenses; here the VMs are counted. SQL Server Standard Edition requires one server license per virtual environment. With one license, up to four virtual environments can be run within a physical server environment.
In the processor model, licensing is based either on the physical CPU cores or on the virtual CPUs (vCPUs) used by the VMs. The Enterprise Edition requires a minimum of four core licenses per processor. If all cores of a machine are licensed, an unlimited number of instances on that host is automatically covered. Alternatively, customers can license the virtual CPUs of a SQL Server VM, but again at least four of them. Here users run the risk of paying for licenses they do not need, because many SQL Server installations get by with one or two cores. In addition, a Software Assurance (SA) contract is required to allow VM mobility (vMotion/Live Migration) more often than once every 90 days.
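A sketch of this arithmetic, comparing licensing of all physical cores with per-vCPU licensing. The four-core minimums are taken from the text; the host and VM sizes are assumed for illustration:

```python
def core_licenses_physical(sockets, cores_per_socket, min_per_proc=4):
    """License all physical cores, at least four per processor."""
    return sockets * max(cores_per_socket, min_per_proc)

def core_licenses_per_vm(vcpus_per_vm, vm_count, min_per_vm=4):
    """License the vCPUs of each SQL Server VM, at least four per VM."""
    return vm_count * max(vcpus_per_vm, min_per_vm)

# Host: 2 sockets x 8 cores; five small SQL Server VMs with 2 vCPUs each.
print(core_licenses_physical(2, 8))   # 16 core licenses, unlimited VMs on this host
print(core_licenses_per_vm(2, 5))     # 20 core licenses despite only 10 vCPUs in use
```

Which variant is cheaper depends on the VM count and sizes; small, numerous VMs quickly hit the per-VM minimum, which is exactly the "paying for licenses they do not need" effect described above.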
Companies can only reap the benefits of virtualization at all levels if it is deployed optimally. In many cases, IT processes are not yet designed for the specific requirements of virtual infrastructures. Beyond the technical infrastructure, IT managers should therefore foster new ways of thinking, the corresponding know-how and the necessary awareness. Organizational structures must also be adapted to the new requirements. (wh)
Checklist: license costs and virtualization
- Do not underestimate the complexity of this topic and include the indirect licensing costs of the project in the planning.
- Check whether additional or extended licenses are required. Consider all system levels: operating system, infrastructure, applications.
- Be prepared for greater restrictions or costs with older operating system versions and applications.
- Keep in mind that the additional capabilities of the new platform (RAM, CPUs, distribution) are partly accompanied by additional costs.
- Note that with pay-per-use models, costs are difficult to predict and control.
- Request licensing without mobility restrictions for the VMs.
- Where possible, choose licensing based on named users rather than on the processors used.
- Establish central license management. Use a software asset management tool to monitor and optimize the licensing of all machines.