What Is Computer Virtualization?
Virtualization is a broad term; in computing it generally means that computing elements run on a virtual rather than a physical basis. Virtualization technology can expand the effective capacity of hardware and simplify software reconfiguration. CPU virtualization makes a single CPU behave like multiple CPUs running in parallel, allowing one platform to run several operating systems at the same time, with applications confined to independent spaces that do not affect one another, which significantly improves the efficiency of the computer.
- Fidelity: an application program executed in the virtual machine behaves the same as it would on physical hardware, apart from timing (it runs somewhat slower). [1]
- In short, AMD Virtualization is a hardware-based technology that supports running multiple operating systems on a single machine.
- At the recent IDF Spring Forum, Intel detailed its own hardware virtualization technology, VT (codenamed Vanderpool), discussed in depth below.
- Publishing enterprise applications through Hexin Application Virtualization gives users a better experience: deployment is fast and flexible, with no need to change the existing network structure or modify any application. Built-in security and load-balancing mechanisms keep business running without interruption and greatly reduce the system administrator's management and maintenance workload.
- What are the benefits of using Hexin Application Virtualization?
- Using Hexin Application Virtualization, you can:
  - manage software applications centrally;
  - provide secure remote access and control access to applications;
  - accelerate applications;
  - migrate enterprise application platforms from the LAN to the Internet.
- Hexin Application Virtualization client environment requirements:
- Operating system: Windows XP, 2003, Vista, and Windows 7, in 32-bit and 64-bit editions
- The Hexin Application Virtualization client has no special requirements: any machine on which the operating systems above install successfully can run Hexin Application Virtualization without obstacles.
Computer Virtualization Technology: What Is Virtualization
- Virtualization lets a user run multiple operating systems on one server at the same time, which resembles multitasking. But multitasking only lets a user run multiple programs within a single operating system on a single machine, whereas virtualization lets a user run multiple operating systems on that same machine. This allows computer resources to be allocated more flexibly and efficiently, and it also helps improve security.
- Imagine an operating system that boots in almost no time; even if it crashes, you can simply discard it and immediately load a new one. If you run several operating systems at once and want to load a new image onto one of them, you can shut it down immediately and shift the work it was processing to the other systems. Suppose you have five copies of Red Hat running the Apache server and one of them stops responding under full load: no problem, simply forward its requests to the other four and restart the one that stopped.
- If you have stored a snapshot of the operating system you are running, you can restore it whenever something unpleasant happens, such as being hacked or infected by a virus: load the image from a secure partition and carry on. Virtualization also lets you reinstall an operating system in no time, without chasing device drivers the way tools like Ghost used to require. You can load, unload, and store an operating system much as you would an ordinary program.
- Similarly, virtualization lets you use several different operating systems on the same machine. If you are a programmer who needs code to run on Windows 95/98/Me/2000/XP, you can put five computers on your desk, or run five virtual machines, one per operating system. Likewise, you may need to verify your code on every version of every browser; Microsoft will not let you install an older version of IE once a newer one is present, but you can install the old operating systems one by one, or take the better route and run them all at the same time.
Computer Virtualization Technology: Existing Virtualization Techniques and Their Defects
- Is everything simple and perfect, then? Not quite: not everything in the virtual world is perfect. The most obvious problem is that running many copies of an operating system at once (five in the example above, and for a web hosting company 20 or 50 are entirely possible) on one computer demands a great deal of resources and drives server costs up. Data transfer also gets harder across the board, because the more that is loaded, the more memory is required.
- And yes, the real killer is system overhead. Several virtualization techniques are in use today, but all of them come with problems that reduce system performance to varying degrees. For the CPU alone, the overhead can run from 10% to over 40%.
- Obviously, new technology is needed to solve these problems. The idea behind Intel's VT technology is to reduce the system overhead of virtualization. Before diving into how it works, we need to understand how virtualization technology lets different operating systems share the same CPU.
- There are currently three kinds of virtualization technology: paravirtualization, binary translation, and emulation. The most familiar is probably the emulator: you can run a Super Nintendo emulator in one Windows XP window and a PlayStation emulator in another. These can be considered the most basic form of virtualization. Emulators carry enormous CPU overhead: if you had to simulate every bit of the hardware you would spend endless time and effort, so practical emulators skip what they can, and that works well enough. The toy interpreter below illustrates where the cost comes from.
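To make the overhead concrete, here is a minimal sketch of the fetch-decode-dispatch loop at the heart of any emulator. The three-opcode machine is entirely hypothetical; the point is that every guest instruction costs a whole round trip of host instructions.

```c
#include <stdint.h>
#include <stdio.h>

// Toy interpreter for a hypothetical 3-opcode machine:
//   0 = HALT, 1 = ADDI imm8 (add immediate to accumulator), 2 = PRINT.
// Every guest instruction costs a fetch, a decode, and a dispatch on the
// host, which is why pure emulation is so much slower than native code.
int run(const uint8_t *code) {
    int acc = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {                // fetch + decode
        case 0:  return acc;                 // HALT
        case 1:  acc += code[pc++]; break;   // ADDI imm8
        case 2:  printf("%d\n", acc); break; // PRINT
        default: return -1;                  // undefined opcode
        }
    }
}

int main(void) {
    const uint8_t prog[] = {1, 40, 1, 2, 2, 0}; // acc = 40 + 2; print; halt
    return run(prog) == 42 ? 0 : 1;
}
```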
- At the other end of the field is the currently popular, industry-recognized paravirtualization (PV). PV lets the guest operating systems know that they are working in a virtualized environment, and modifies them so that they cooperate with it better. The operating system therefore has to be modified and tuned for this approach, which means going back and forth between the operating system's authors and the people writing the virtualization software. From this perspective it is not full virtualization, precisely because of this cooperative relationship.
- PV works well with open-source operating systems: Linux and the BSDs are all candidates for a good PV platform, because you can freely make whatever adjustments are needed to make PV work better. Windows cannot be adjusted this way, which probably explains why so many IT giants have recently touted the open-source virtualization project Xen.
- And "binary code translation" (hereinafter referred to as BT) technology can be said to be a more eclectic method. What it does depends on what the operating system will do and change it unknowingly. If the operating system attempts to execute instruction A, but this instruction A will cause some problems to the virtualization engine, then BT will convert it into some more suitable instructions and forge the results that instruction A should return. This is a deceptive job and consumes a lot of CPU resources. In addition, replacing one code with many codes will not make things run faster.
- Once you understand this, the headaches become clear. None of these flaws is purely fatal, but none has a simple fix either. These virtualization techniques remain in use, and people keep trying to work at a lower level of defect. So what causes the defects?
Computer Virtualization Technology: What Problems Are Troubling Us
- For CPUs of the x86 architecture, at least in the 32-bit world, there are plenty of headaches, but as a general rule they all involve ring transitions and the instructions connected with them. Conceptually, a ring is a way of dividing system privilege levels (hence rings are also called privilege rings). It lets the operating system run at a privileged level where user programs cannot alter it; that way, even if one of your programs goes wrong it cannot crash the whole system, and the operating system can take control and close the faulty program. The rings keep the different parts of the system under control.
- Intel's x86 CPUs (including the 80386, 80486, Pentium, Pentium Pro, Pentium II, Pentium III, and today's Pentium 4) provide four privilege levels: R0, R1, R2, and R3, where a larger number means less privilege. Roughly, a program running at a given level cannot tamper with programs running at a lower-numbered level, while a program at a lower-numbered level can interfere with, or even control, programs running at higher-numbered levels.
- In practice only R0 and R3, the most and least privileged levels, are commonly used: the operating system runs in R0 and user programs run in R3. When the x86 architecture was extended to 64 bits, one of the changes was to remove the middle privilege levels, R1 and R2. Few people noticed that they were gone, apart from the particular people who work on virtualization. The little program below shows that the current ring is visible to software.
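The low two bits of the CS segment register hold the current privilege level (CPL), so code can ask which ring it is in. A minimal example for GCC on x86/x86-64 Linux; run as a normal user program, it prints 3. This same check is what goes wrong later, when a demoted operating system reads back 1 instead of the 0 it expects.

```c
#include <stdio.h>

int main(void) {
    unsigned short cs;
    // The low two bits of the CS selector are the current privilege level.
    __asm__ volatile("mov %%cs, %0" : "=r"(cs));
    printf("running in ring %d\n", cs & 3); // prints 3 for a user program
    return 0;
}
```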
- VM software such as VMware (virtual machine, hereinafter VM) obviously must run at R0, but to maintain full control it must keep the operating system out of that level. The obvious solution is to force the guest operating system into a less privileged ring such as R1, so code originally laid out across R0 and R3 now runs across R1 and R3. In a PV environment you can modify the operating system so that it copes with this well, but for a satisfactory solution you have to make the operating system genuinely work at R1.
- But then the problems start. Some instructions work only when issued from, or to, R0; run from the wrong ring, they behave strangely, with very bad consequences if you try it. Still, keeping code in the correct ring does prevent the guest operating system from damaging the VM, and likewise prevents software running on that operating system from damaging the operating system itself. This is the so-called 0/1/3 mode.
- The other mode is called 0/3 mode. It puts the VM at R0 and runs both the operating system and the user programs at R3, while in essence handling everything else the way 0/1/3 mode does. With the privileged operating system alongside the user programs at R3, they can reach each other more easily; with no ring barrier to cross it also runs faster, but system stability is worse.
- A variant of 0/3 mode has the CPU keep two page tables for what runs there: one for the operating system (the code that would normally live at R0) and one for the ordinary programs at R3. This provides a complete set of memory protections that isolate user programs from the operating system's address space. Of course, this also costs performance, just in a different way.
- In summary: 0/1/3 mode is more secure, but performance suffers on every transition from R3 to R1, R3 to R0, or R1 to R0, and back again. In 0/3 mode there are only transitions between R0 and R3, so it can potentially run nearly as fast as an unvirtualized operating system; but when something does go wrong, 0/3 mode is more prone to blue screens than 0/1/3 mode. Even so, 0/3 mode will be widely used in the future, mainly because, as noted above, R1 and R2 were removed in the 64-bit extension, so you are forced into 0/3 mode. For computers this is called progress; in the same spirit, a devastating crash gets described as "weird" behavior.
- In theory, then, if you can tolerate a little instability, or sacrifice a little speed in 0/1/3 mode, everything should be fine. In practice there are defects, which come down to four points. The first two are instructions that check which ring they are running in, and instructions that, in the wrong ring, fail to properly protect the CPU state. The last two are mirror images of each other: instructions that should raise an error but do not, and instructions that should not raise an error but raise many. All of this makes life difficult for the talented programmers who write VMs.
- The first point is obvious. If you give the operating system R1 privileges, then when it checks which ring it is running in, it gets back 1 instead of 0. If a program running on that system expects to be in ring 0 and finds R1 instead, errors follow: blue screens, memory corruption, or other undesirable consequences. Binary translation can catch this kind of check and disguise the return value as 0, but that takes dozens or hundreds of instructions, so speed clearly suffers.
- Protecting the CPU state is a potentially worse problem. Some things in the CPU are not easily saved across a context switch; the hidden segment-register state is a good example. Once the segment registers are loaded, parts of that hidden state cannot be read back and saved, so the copy resident in memory can diverge from the actual values in the CPU, causing unexpected faults. Workspaces can be set up to cope with this, but making them smart enough is extremely complex and costly.
- Instructions that should cause a fault but do not are another hard problem. If your design expects an instruction to trap into the interrupt handler you prepared, and it never does, the absence of the error will not make you happy. The opposite case is just as common: write to CR0 or CR4 from the wrong ring and a fault is raised that can crash the system. Both kinds of error can be patched up without the user noticing, but the patching costs a great deal of performance. The classic example of the first kind is sketched below.
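SIDT is the textbook case of a sensitive instruction that does not fault: it reads the location of the interrupt descriptor table, yet classic x86 permits it from ring 3. The sketch below (GCC, x86-64 Linux) reads the real IDT base from user mode with no error raised; a relocated guest issuing it would see the VMM's table rather than its own and never know. On recent CPUs the UMIP feature makes this fault, a later fix for exactly this hole.

```c
#include <stdio.h>
#include <stdint.h>

// Layout of the IDTR as stored by SIDT: 16-bit limit, then 64-bit base.
struct __attribute__((packed)) idtr {
    uint16_t limit;
    uint64_t base;
};

int main(void) {
    struct idtr idt;
    // SIDT is not privileged on classic x86: it succeeds even in ring 3
    // and quietly leaks the real IDT base address instead of faulting.
    __asm__ volatile("sidt %0" : "=m"(idt));
    printf("IDT base = %#llx, limit = %u\n",
           (unsigned long long)idt.base, idt.limit);
    return 0;
}
```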
- All of the virtualization technology involved so far amounts to putting the operating system somewhere it should not be, and then running around trying to fix every problem that follows. There are many problems, and they keep on coming, so the performance loss is no surprise.
Computer Virtualization Technology: The Solution
- Intel's VT technology is meant to solve these problems. The goal of VT is to plug as many of these virtualization holes as possible while minimizing programmer pain. Within this solution, VT-x targets x86 and VT-i targets Itanium, each introducing a new mode for its CPU. Here we focus on VT-x; in fact VT-i shares many features with it.
- The new mode is called VMX, and it introduces a virtual machine monitor (VMM) that runs within it. The VMM sits at R0, but you can think of it as living at "R-1", or as running beside the rings. The guest operating system and all of its programs run in VMX non-root mode, while the VMM runs in VMX root mode.
- An operating system running in VMX non-root mode has all the functions and features of a normal operating system on a non-VT system: it sits at R0, is entitled to handle everything as usual, and has no idea of what is running beside it. When conditions warrant, the CPU enters VMX root mode, and the VMM can switch to another operating system running in another VMX instance. These switches are called VM entries and VM exits.
- The remarkable thing VT demonstrates is that it makes the entry and exit transitions between VMX non-root mode and VMX root mode easy to perform. Whenever a guest operating system is switched away, it must be sealed into its own world: the complete state of that virtual world must be saved, and reloaded when it returns. There is a great deal to handle, but VT implements it as a single designed-in task, so objectively it is a simple, low-effort process.
- Because every operating system instance now runs in the right place, the four problems described earlier are gone. The associated workarounds are no longer needed, and the system overhead that went with them disappears, which effectively increases speed. None of this is free; it just costs far less.
- To start a new guest operating system, you set aside 4 KB of memory and hand it to a VMPTRLD instruction. This region stores the complete state and important bits of the instance whenever it is not active. It remains valid for as long as the instance exists, until a VMCLEAR instruction is run on it. This is how a virtual machine instance is set up.
- To give control to the virtual machine, you enter VMX non-root mode using one of the two VM-entry instructions, VMLAUNCH and VMRESUME; there is not much difference between them. VMRESUME simply loads the CPU state from the 4 KB region initialized earlier and passes control to the guest operating system. VMLAUNCH does the same job but also initializes the virtual machine control structure (VMCS), the behind-the-scenes record of how the VM is set up; because that takes time, VMLAUNCH is avoided on subsequent entries. The skeleton below shows the sequence.
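The lifecycle can be sketched as ring-0 C with GCC inline assembly. This is a bare skeleton under heavy assumptions, not a working hypervisor: it presumes the CPU is already in VMX operation (VMXON done), that `vmcs_pa` holds the physical address of a 4 KB region whose first word carries the VMCS revision identifier, and it omits error handling and all guest-state setup. VMX instructions report failure through the carry and zero flags, which `setna` captures.

```c
#include <stdint.h>

// Ring-0 sketch of the VMCS lifecycle. Assumes VMXON has already been
// executed and vmcs_pa points at a 4 KiB, revision-initialized VMCS region.
// VMX instructions signal failure via CF/ZF, so "setna" yields 1 on failure.

static inline int vmclear(uint64_t vmcs_pa) {
    uint8_t fail;
    __asm__ volatile("vmclear %1; setna %0"
                     : "=q"(fail) : "m"(vmcs_pa) : "cc", "memory");
    return fail;
}

static inline int vmptrld(uint64_t vmcs_pa) {
    uint8_t fail;
    __asm__ volatile("vmptrld %1; setna %0"
                     : "=q"(fail) : "m"(vmcs_pa) : "cc");
    return fail;
}

static inline int vmlaunch(void) {  // first entry into a guest
    uint8_t fail;
    __asm__ volatile("vmlaunch; setna %0" : "=q"(fail) : : "cc", "memory");
    return fail;  // only reached if the entry itself failed
}

static inline int vmresume(void) {  // every subsequent entry
    uint8_t fail;
    __asm__ volatile("vmresume; setna %0" : "=q"(fail) : : "cc", "memory");
    return fail;
}

// Typical sequence for one guest instance:
//   vmclear(pa);  // initialize the 4 KB region and mark it inactive
//   vmptrld(pa);  // make it the current VMCS on this CPU
//   ...           // vmwrite() guest/host state and control fields
//   vmlaunch();   // first VM entry; afterwards, vmresume()
```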
- From this point the guest operating system begins its pleasant journey, running as fast as it can, unaware of anything running beside it. As planned, it exists in its own world at or near full speed. The only question is how to break into this happy picture and set the guest aside so that the other operating systems on the machine actually get to run. This is the intricate part of VT: a set of special bitmaps in the VMCS.
- These bitmaps are 32-bit fields in which each bit stands for an event. If an event fires and its bit is set, the CPU performs a VM exit and returns control to the VMM running in VMX root mode. The VMM can then do whatever it wants before issuing VMRESUME, either to the next operating system or back to the one that just left. The resumed operating system runs along happily until another VM exit is triggered, and this repeats thousands of times per second. The loop below shows the shape of it.
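Putting entries and exits together, the heart of a VMM is a small loop. The sketch below makes the same assumptions as the previous one; `VM_EXIT_REASON` is the VMCS field encoding 0x4402 from Intel's documentation, while `handle_exit` is a hypothetical dispatcher the VMM author would supply. One simplification to note: a real VM exit resumes the host at the address programmed into the VMCS host-state area, not at the instruction after VMRESUME; the sketch pretends it falls through for readability.

```c
#include <stdint.h>

#define VM_EXIT_REASON 0x4402u  // VMCS field encoding (Intel SDM)

// Read a VMCS field (ring-0 only; failure reporting omitted for brevity).
static inline uint64_t vmread(uint64_t field) {
    uint64_t value;
    __asm__ volatile("vmread %1, %0" : "=rm"(value) : "r"(field) : "cc");
    return value;
}

int vmresume(void);               // from the previous sketch
int handle_exit(uint32_t reason); // hypothetical dispatcher, VMM author's code

// Skeleton of the VMM's main loop: resume the guest; every time an armed
// event forces a VM exit, read why it happened, handle it, resume again.
void run_guest(void) {
    for (;;) {
        vmresume();  // guest runs until the next VM exit
        uint32_t reason = (uint32_t)vmread(VM_EXIT_REASON) & 0xffff;
        if (handle_exit(reason) != 0)
            break;   // the dispatcher asked us to stop this guest
    }
}
```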
- What can trigger these exits? Platform events: pin signals, CPU events, exceptions, and page faults can all trigger a VM exit. The perfect thing about VT is its adaptability; much as with breakpoints in a debugger, you can arm an exit on any event or on none. It is up to you.
- A pin event triggers an exit when an external interrupt or a non-maskable interrupt arrives. A CPU event triggers an exit when a bit you set in a control field matches the corresponding CPU condition. Most of these must be armed explicitly, though a few instructions cause VM exits unconditionally. The result is control over the VM at a very fine grain, entering and exiting exactly where you need to; arming the triggers looks like the sketch below.
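Arming these triggers means setting bits in the VMCS control fields with VMWRITE. The field encodings (pin-based controls at 0x4000, primary processor-based controls at 0x4002) and the bit positions shown are from Intel's documentation; the chosen policy is only an example. A real VMM must first fold in the settings that the IA32_VMX_* capability MSRs force to 0 or 1 on the particular CPU, a step this sketch skips.

```c
#include <stdint.h>

#define PIN_BASED_VM_EXEC_CONTROL 0x4000u  // VMCS field encodings (Intel SDM)
#define CPU_BASED_VM_EXEC_CONTROL 0x4002u

#define PIN_EXT_INTR_EXITING (1u << 0)  // exit on external interrupts
#define PIN_NMI_EXITING      (1u << 3)  // exit on non-maskable interrupts
#define CPU_HLT_EXITING      (1u << 7)  // exit when the guest executes HLT

// Write a VMCS field (ring-0 only; failure reporting omitted for brevity).
static inline void vmwrite(uint64_t field, uint64_t value) {
    __asm__ volatile("vmwrite %1, %0" : : "r"(field), "rm"(value) : "cc");
}

// Example policy: exit on every external interrupt, every NMI, and on HLT.
// Real code must first consult the IA32_VMX_* capability MSRs for the bits
// this CPU requires to be fixed at 0 or 1; the sketch omits that step.
void arm_basic_exits(void) {
    vmwrite(PIN_BASED_VM_EXEC_CONTROL, PIN_EXT_INTR_EXITING | PIN_NMI_EXITING);
    vmwrite(CPU_BASED_VM_EXEC_CONTROL, CPU_HLT_EXITING);
}
```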
- The exception bitmap is likewise a 32-bit field, with one bit for each of the 32 exception vectors. If an exception is raised and its bit is set, a VM exit occurs; if the bit is clear, or no exception occurs, the guest operating system continues its happy journey as usual. This is a very low-overhead way of leaving VMX non-root mode for VMX root mode.
- Finally there are page-fault exits, which behave much like exception exits except that they are governed by two further 32-bit fields. The bits in those fields mask and match the page-fault error code, so you can choose precisely which page faults cause an exit. This too operates at a very fine grain with very low overhead, as the example below shows.
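The exception bitmap and the page-fault filter are set the same way as the other controls. The field encodings (0x4004, 0x4006, 0x4008) and the vector numbers (#BP = 3, #GP = 13, #PF = 14) are from Intel's documentation; the policy here, exiting on breakpoints, protection faults, and write page faults while letting the guest service read faults itself, is purely illustrative.

```c
#include <stdint.h>

#define EXCEPTION_BITMAP            0x4004u  // VMCS field encodings (Intel SDM)
#define PAGE_FAULT_ERROR_CODE_MASK  0x4006u
#define PAGE_FAULT_ERROR_CODE_MATCH 0x4008u

#define VEC_BP 3   // #BP breakpoint
#define VEC_GP 13  // #GP general protection fault
#define VEC_PF 14  // #PF page fault

#define PF_EC_WRITE (1u << 1)  // page-fault error code bit: access was a write

void vmwrite(uint64_t field, uint64_t value);  // as defined in the earlier sketch

// Exit to the VMM on breakpoints and protection faults and, among page
// faults, only on writes. With the #PF bit set in the bitmap, a page fault
// exits when (error_code & mask) == match.
void arm_exception_exits(void) {
    uint32_t bitmap = (1u << VEC_BP) | (1u << VEC_GP) | (1u << VEC_PF);
    vmwrite(EXCEPTION_BITMAP, bitmap);
    vmwrite(PAGE_FAULT_ERROR_CODE_MASK,  PF_EC_WRITE);
    vmwrite(PAGE_FAULT_ERROR_CODE_MATCH, PF_EC_WRITE);
}
```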
- Inside the computer, VT works at a level more privileged than the traditional R0 ring. Any guest operating system can run unchanged under the same old architecture, unaware that a control program is supervising it. When one of the triggers configured by the user fires, control transfers to the VMM running in the higher-privilege VMX root mode. Because this is a passive trigger mechanism rather than active monitoring, the system overhead is minimal.
- VT makes these operating environments easy to install and remove, and more stable than earlier VM schemes. If you need to run a virtualized system, there is little reason not to do it on a Vanderpool-capable CPU, and purely software-based virtualization is gradually being passed over.
- This may all be good, but remember that it comes at a price: every VM entry means loading state from a 4 KB region, and every exit means writing state back into it. That sounds burdensome, and yet compared with the older methods it is surprisingly fast.
Computer Virtualization Technology: Summary
- Intel's VT technology is impressive: it lets us accomplish computer virtualization at the hardware level. The time is now ripe. Intel is introducing the technology into its desktop CPUs first, and the latest Pentium 4 6xx-series CPUs support VT, which brings many more users into contact with the new technology, greatly improves how efficiently they can use the CPU, and makes writing a VMM far less difficult.
- At the same time, we must recognize that existing virtualization techniques will not disappear immediately; on the contrary, they will become more common, and their overhead problems are improving. Large server providers will not switch technologies overnight; after all, what they have still works. In addition, AMD has announced its Pacifica virtualization technology for its 64-bit CPUs, so it will take time for VT to replace existing computer virtualization technology or to win general acceptance. Still, we believe hardware-level virtualization is without doubt the direction in which computing is headed, and its future is bright.