Running a guest OS in VMware Workstation. Guest OS and applications

There are a great many virtualization products on the modern software market, but today's review is dedicated to VMware Workstation. This is a program designed for installing and using several operating systems side by side on a single computer.

How to install VMware Workstation

VMware Workstation installs just like any other program. When the application window opens, the left-hand pane lists all available virtual machines, while the right-hand pane is the working area from which everything is managed.

Particular attention should be paid to creating a VM. VMware Workstation offers two methods: typical and custom. After choosing the configuration type, you select the hardware compatibility level and the operating system itself, and the wizard then walks you through a list of settings.


Importantly, you can choose not only the type of operating system but also the number of CPU cores assigned to it. Experienced users advise against giving the guest OS too many resources: over-allocating will only slow down the host machine without making the guest noticeably faster. During setup you are also asked to choose the amount of RAM; as a rule, the value suggested automatically is enough, but it can also be set manually.
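For reference, the values chosen in the wizard end up as plain key/value lines in the virtual machine's .vmx configuration file. A minimal illustrative fragment (the key names below are the ones Workstation normally uses; the values are just an example):

.encoding = "UTF-8"
displayName = "Test VM"
numvcpus = "2"
memsize = "4096"

Here numvcpus is the number of virtual CPU cores and memsize is the guest RAM in megabytes, which is why over-allocating them takes resources straight from the host.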


Setup steps

After this you can power the machine on. While it is running, the virtual machine can be paused, and all of its settings and parameters will be preserved. To turn it off, simply click the corresponding button in the working window. Keep in mind that sooner or later the virtual machine will need to be shut down and cleaned up, because unneeded files accumulate inside it and gradually slow it down.

You can keep using a VM even after reinstalling the host OS. To do this, just open its files again from the working window.

Overall, VMware Workstation can safely be called useful software. The program is highly functional, has a simple interface, and lets you work with different operating systems without any negative consequences for the computer.

VMware Workstation for Windows

The video recording of the demonstration is in good quality (720p), so everything is clearly visible. For those who prefer not to watch the video, a transcript of the demonstration with screenshots follows below.

Disclaimers:

1. Transcript with minor editing.
2. Please remember the difference between written and spoken language.
3. Screenshots are in large size, otherwise you won’t be able to make out anything.

Colleagues, good afternoon. We are starting a demo of VMware NSX. So, what would we like to show today? Our demo infrastructure is deployed at our distributor, MUK. The infrastructure is fairly small: three servers running ESXi 6.0 Update 1, with vCenter updated as well, that is, the latest versions we can offer.

In this infrastructure, before NSX, we received several VLANs from MUK's network engineers: one for the server segment, one for virtual desktops, a segment with Internet access so that a virtual desktop could reach the Internet, and access to transit networks so that colleagues from MUK could connect from their internal network. Any change beyond that - for example, if I want to create a few more port groups and networks for some demo or pilot with a more complex configuration - means contacting colleagues at MUK and asking them to reconfigure the equipment.
If I need the segments to talk to each other, I either negotiate with the network engineers again, or do it the simple way we are all used to: a small virtual machine with a router inside and two interfaces passing traffic back and forth.

Accordingly, using NSX here serves two goals: on the one hand, to show how it works not just on demo virtual machines but on real hardware, that it really works and is convenient; on the other hand, to genuinely simplify certain everyday tasks. So what do we see at the moment? We see the virtual machines that have appeared in this infrastructure since NSX was implemented. Some of them are mandatory and cannot be avoided; others appeared as a result of the way this particular infrastructure was configured.

Accordingly, the NSX Manager is mandatory: it is the main server through which we interact with NSX. It provides the graphical interface inside the vCenter web client, and it exposes a REST API so that actions can be automated, either with scripts or, for example, from various cloud portals such as VMware vRealize Automation or portals from other vendors. To operate, NSX also needs a cluster of NSX Controllers; these servers perform a service role. They know and store information about which ESXi host everything is on, which IP and MAC addresses are currently present, which new hosts have been added and which have dropped off, and they distribute this information to the ESXi hosts. In other words, when an L2 segment created with NSX must be reachable on several ESXi hosts and a virtual machine tries to send an IP packet to another virtual machine in the same L2 segment, the NSX Controllers know exactly which host the packet actually needs to be delivered to, and they regularly push this information to the hosts. Each host keeps a table of which IP and MAC addresses live on which hosts and where the actual physical packets need to be sent.
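Since the REST API is mentioned, here is a minimal Python sketch of what a read-only call to NSX Manager could look like. The manager address and credentials are made up for illustration, and the endpoint path is given as I recall it from the NSX-v API guide, so verify it against the documentation for your NSX version before relying on it.

import requests

NSX_MANAGER = "https://nsx-manager.example.local"   # hypothetical address
AUTH = ("admin", "secret")                          # NSX Manager credentials (example)

# List transport zones ("scopes"); a GET request, so it changes nothing.
resp = requests.get(NSX_MANAGER + "/api/2.0/vdn/scopes",
                    auth=AUTH, verify=False)        # verify=False only for a lab with a self-signed certificate
resp.raise_for_status()
print(resp.text)  # NSX-v responds with XML describing the transport zones

This is the same API that products like vRealize Automation or third-party cloud portals use under the hood.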

If we open the configuration of a host, we can see the very interface over which packets will be transmitted across the network. That is, we have the parameters of the VMkernel adapter; the host will use this IP address to send VXLAN packets onto the network. Our monitor is a little small, so we have to move around the screen a bit.

And we see that a separate TCP/IP stack is used to transmit these packets, so there can be a separate default gateway different from the one on the usual VMkernel interface. From the physical point of view, all I needed to deploy this demo was one VLAN into which these interfaces could be placed. Preferably - and I checked this - that VLAN should not be limited to these three servers; it should extend a little further, so that if a fourth or fifth server appears, for example not in the same chassis as these blades but somewhere separate, perhaps not even in this VLAN but in one routed to it, I can still add it. That fourth server may sit on a different network, yet they will be able to communicate with each other, and I can stretch the L2 networks created with NSX from these three servers onto that new fourth one as well.

Well, now we move on to what NSX itself looks like - what we can do with it using just a mouse and keyboard. We are not talking about the REST API right now, which is what we would use to automate things.

The Installation section (we will switch there now, I hope) lets us see who our manager is and how many controllers we have; the controllers themselves are deployed from here. That is, we download the manager from the VMware website as a template, deploy it, and register it in vCenter. Next, we configure the manager, giving it a login and password to connect to the vCenter server so that it can register its plugin there. After that, we connect to this interface and say: "We need to deploy controllers."

At a minimum - if we are talking, for example, about some kind of test bench - there can even be a single controller and the system will work, but this is not recommended. The recommended configuration is three controllers, preferably on different servers and different storage, so that the failure of any one component cannot take out all three controllers at once.

We can also determine which hosts and which clusters managed by this vCenter server will interact with NSX. Not necessarily the entire infrastructure under one vCenter server has to participate; it can be a selection of clusters. Having selected the clusters, we must install additional modules on them that allow the ESXi hosts to encapsulate and decapsulate VXLAN packets; this is done, again, from this interface. You do not need to SSH anywhere or copy any modules manually - you press a button here and track the success/failure status.

Next, we must choose how the configuration of the VMkernel interface will be done. We select a distributed switch and its uplinks - that is, where this will happen - and we select the load-balancing parameters for when there are several links. Depending on this, there is usually one IP address of this type per host, but sometimes there can be several; right now we use them in one-at-a-time mode.

Next we must select the VXLAN ID range. VXLAN is a technology reminiscent of VLANs: additional tags that make it possible to isolate different kinds of traffic within one physical segment. Where there are about 4,000 VLAN identifiers, there are 16 million VXLAN identifiers. Here we select a range of VXLAN segment numbers that will be assigned automatically to logical switches as they are created.

How do you choose them? Essentially however you like, as long as, in a large infrastructure with several NSX installations, the ranges do not overlap - exactly the same consideration as with VLANs. I use the range from 18001 to 18999.

Next, we can create a so-called transport zone. What is a transport zone? If the infrastructure is large enough - imagine you have about a hundred ESXi hosts, roughly 10 per cluster, so 10 clusters - we may not use all of them with NSX. We can treat the ones we do use as one infrastructure, or divide them into several groups: say that the first three clusters are one island, and clusters four through ten are another. By creating transport zones, we indicate how far a given VXLAN segment can spread. I only have three servers here, so there is not much room to play with; everything is simple, and all my hosts just fall into this one zone.

And one more important point. When setting up a zone, we control how information about IP and MAC addresses will be exchanged and looked up. It can be unicast, L2 multicast, or unicast routed through L3 - again, depending on the network topology.

These are preliminary steps, so let me repeat: all that was required from the network infrastructure was, first, that all the hosts on which NSX should run can communicate with each other over IP, using ordinary routing if necessary. The second point is that the MTU in the segment where they communicate should be 1600 bytes rather than the usual 1500, because VXLAN encapsulation adds roughly 50 bytes of outer headers to every frame. If I cannot get 1600 bytes, then I simply have to set the MTU explicitly in all the constructs I create in NSX - for example to 1400 - so that the encapsulated packet still fits into the 1500 bytes of the physical transport.

Further: using NSX, I can create a logical switch. Compared to the traditional approach of carving out a VLAN on the network, this is the easiest thing in the world. The point is that I do not even need to know where my physical servers are connected - here they happen to share the same switches, but in theory the network could be more complex: some servers connected to one switch, some to another, L2 in one place, L3 in another. In effect, by creating a logical switch we "cut" that VLAN at once on all the switches through which the traffic will flow. Why? Because we are actually creating a VXLAN, and the physical traffic the switches see is UDP traffic from an IP address on one hypervisor to an IP address on another, with the original frames carried inside the VXLAN payload.

Cutting networks this way is very easy. We simply say "create a new segment", select the unicast transport type, and select the transport zone - in effect, the ESXi clusters on which this segment will be available. We wait a little, and the segment appears. So what actually happens when we create these segments, and how do we connect a virtual machine to one? There are two options.

Option number one.

We say right here: connect a virtual machine to this network. Some of our customers said: "Oh, this is exactly what our network engineers want." You go into the network settings, and it is as if you take the cable from the machine and plug it into a port of the logical switch. We select the machine here and, accordingly, the interface here.

Or the second option. When a logical switch is created, the NSX Manager contacts the vCenter server and creates a port group on the distributed switch. So we can simply go to the properties of the virtual machine, select the desired port group, and connect the virtual machine there. Since the name is generated programmatically, it includes the name of the logical switch and the VXLAN segment number, so even from a regular vCenter client it is quite clear that you are connecting the virtual machine to that logical segment.
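For completeness: the same logical switch can also be created without the GUI, through the NSX Manager REST API mentioned earlier. The Python sketch below is an assumption based on the NSX-v 6.x API as I remember it (the endpoint path, XML element names, and the transport zone ID are illustrative), so check the API guide for your version before using it.

import requests

NSX_MANAGER = "https://nsx-manager.example.local"   # hypothetical address
AUTH = ("admin", "secret")

# Create a logical switch (virtual wire) in transport zone "vdnscope-1" (example ID).
payload = """<virtualWireCreateSpec>
  <name>demo-segment</name>
  <tenantId>default</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>"""

resp = requests.post(NSX_MANAGER + "/api/2.0/vdn/scopes/vdnscope-1/virtualwires",
                     data=payload,
                     headers={"Content-Type": "application/xml"},
                     auth=AUTH, verify=False)        # lab setup with a self-signed certificate
resp.raise_for_status()
print(resp.text)  # the manager returns the ID of the newly created virtual wire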

Further: a few more machines were visible at the very beginning, so where did they come from? NSX implements some functions directly at the level of modules in the ESXi kernel - for example routing between these segments, or the firewall applied when traffic moves between or within segments - while other functions are implemented using additional virtual machines: the so-called Edge gateways, plus extra services when they are needed. What do we see here? There are three of them in my infrastructure. One is called NSX-dlr; DLR stands for distributed logical router. It is a service virtual machine that lets the distributed router in NSX operate. No network traffic goes through it - it is not in the data plane - but if our distributed router participates, for example, in route exchange using the dynamic routing protocols BGP or OSPF, those routes have to come from somewhere: other routers need someone to peer with and exchange this information, and someone must be responsible for the state "is the distributed router working or not". So it is effectively the management module of the distributed router. When setting it up, we can specify that it should be highly available, in which case two virtual machines are deployed as an HA pair; if one becomes unavailable for any reason, the second takes over.

The other two edges, of the NSX Edge type, are virtual machines through which traffic is routed or sent to external networks not managed by NSX. In my scenario they serve two tasks. The first NSX Edge is simply connected to the internal networks of the MUK data center: my vCenter, for example, lives on a regular standard port group and works there, so for a virtual machine on an NSX logical switch to reach the vCenter, something has to connect them - and this is the virtual machine that does it. It has one interface connected to a logical switch and the other connected to a regular port group on a standard ESXi switch. The second one is called NSX Internet edge; guess how it differs? It is about the same, except that the port group it connects to is attached to the DMZ network, where real public Internet addresses live. It has a public IP address configured on one of its interfaces, so you can connect to this demo environment through NSX networking. Accordingly, additional services such as distributed routing are configured under the firewall item, but if we want to do firewalling, NAT, load balancing or, say, VPN for external connections, we open the properties of the Internet edge.

We're running out of time a little, so I won't show everything I wanted to show.
Accordingly, in the edge properties we can manage the firewall - the firewall that is applied when traffic passes through this virtual machine, i.e. effectively our perimeter firewall. Further, it can run a DHCP server, or forward requests as an IP helper / DHCP relay. It can do NAT, which is what I need on the edge that faces the public Internet on one side and the internal networks on the other. It has a load balancer. It can act as an endpoint for a site-to-site VPN tunnel or as a terminator for client connections; there are two tabs for this, VPN and SSL VPN-Plus. VPN is site-to-site: between one edge and another edge, between an edge and our cloud, or between an edge and the cloud of a provider that uses our vCNS or NSX technologies. SSL VPN-Plus provides a client for various operating systems that users can install on their laptops so that they can connect into the infrastructure over VPN.

Well, literally the last few points. The distributed firewall: firewalling applied on each host. As rules we can specify IP addresses, port numbers, and virtual machine names, including masks - for example, a rule that allows traffic from all machines whose names begin with "up" to all machines whose names begin with "db". We can allow traffic from machines in one vCenter folder to machines in another, or connect to Active Directory and say that if our machine belongs to a certain AD group, allow the traffic, otherwise deny it. And various other options.

Plus, again, what to do with the traffic? There are three actions: allow, block, and reject. What is the difference between blocking and rejecting? Both stop the traffic, but one does it silently, while the other sends back a message saying the traffic was dropped, which makes later diagnostics much easier.

And one important addition at the end: the component called Service Composer. Here we can integrate NSX with additional modules - for example an external load balancer, an antivirus, or some kind of IDS/IPS system. That is, we can register them, see what configurations they provide, and describe those same security groups. For example, the demoUsers group is a group that automatically includes the machines on which a user from demoUsers is logged in. What do we see now? Right now one virtual machine falls into this group. Where is it? Here.

It is a virtual desktop, and the user is connected to it. I can make a firewall rule that lets users in one group reach certain file servers, and users in another group reach other file servers. Even if these are two users who log into the same VDI desktop, just at different times, the firewall will dynamically apply different policies to the different users. This lets you build a much more flexible infrastructure: there is no need to allocate separate network segments or separate machines for different types of users, because network policies are reconfigured dynamically depending on who is using the network at the moment.

Distribution of VMware solutions in

A new version of the popular product became available in November; Windows and Linux are supported as host operating systems. VMware Workstation 7 is positioned as a personal virtualization platform, one reason being that it runs virtual machines on a personal computer. The product includes an extensive set of additional tools for configuring networking in various ways, recording and replaying virtual machine sessions, debugging applications, and much more. VMware Workstation 7 can be used together with a development environment, which makes it especially popular among developers, educators, and technical support specialists.

The release of VMware Workstation 7 brings official support for Windows 7 as both a guest and a host operating system. The product supports Aero Peek and Flip 3D, which makes it possible to watch a virtual machine running by hovering over its button on the taskbar or over the corresponding tab on the host desktop.

The new version can run on any edition of Windows 7, and any edition of Windows can be run inside its virtual machines. In addition, virtual machines in VMware Workstation 7 fully support the Windows Display Driver Model (WDDM), which allows the Windows Aero interface to be used in guest machines.

3D support in VMware Workstation 7 has been greatly enhanced: OpenGL 2.1 and Shader Model 3.0 are now supported, as well as the XPDM (SVGAII) driver for Windows XP, Windows Vista, and Windows 7. Support for Windows XP Mode has been added, allowing you to import a Windows XP Mode virtual machine into VMware Workstation 7 and run it with advanced features such as multiple processors, high-quality graphics, and other VMware capabilities.

Support for Microsoft's new operating system is not the only innovation in VMware Workstation 7. New features include support for up to four processors/cores and 32 GB of RAM per VM, the ability to resize virtual disks on the fly, and the AutoProtect function, which creates snapshots at specified intervals. VMware Workstation 7 also makes it possible to pause a running virtual machine in order to free up system resources quickly.

If you upgrade from VMware Workstation 6.5 to VMware Workstation 7, all virtual machine and application settings are preserved. The only thing you need to do is install the new VMware Tools in your virtual machines to be able to use some of the new features. These include the ability to print to a host or network printer without additional drivers, using the universal ThinPrint driver. The VMware Tools package itself is now updated dynamically.

Among the other features of VMware Workstation 7, we can highlight the ability to encrypt virtual machines, as well as the ability to run the VMware ESX 4.0 hypervisor inside a virtual machine without additional configuration. This allows developers and other specialists to work with the hypervisor without extra hardware. Developers also get enhanced debugging capabilities and integration with the SpringSource Tool Suite for debugging Java applications.

Unlike some of its competitors, VMware Workstation 7 is not free, but the capabilities it offers are impressive, and competing products do not provide the same variety of features.

It is safe to say that users of earlier versions of VMware Workstation will want to upgrade right away, and for those looking for a personal virtualization platform the choice is obvious: VMware Workstation 7.

VMware Workstation is a virtual machine program for running additional operating systems on a computer. It emulates computer hardware and lets you create virtual machines and run one or more operating systems in parallel with the Windows installation on the computer.

The VMware Workstation Pro program emulates computer hardware and allows you to run software on your computer in an isolated environment. You can install operating systems on a virtual machine (for example, Linux on Windows, or vice versa) to work in a virtual environment without affecting the real system.

You can check unfamiliar or suspicious software, test a new antivirus without installing it on the real system, try working in a different operating system, and so on. The real operating system will not be affected even if something dangerous is done inside the virtual machine.

The actual operating system installed on the computer is called the host, and the operating system installed on the virtual machine is called the guest operating system.

The American company VMware is the largest maker of virtualization software and produces two programs for personal computers: the paid VMware Workstation Pro and the free VMware Player with reduced capabilities.

VMware Workstation Pro (reviewed in this article) supports installing several different (or identical) operating systems: various versions of Windows, Linux distributions, BSD, and so on.

Keep in mind that a guest operating system consumes the computer's resources. While a virtual machine is running, you should therefore avoid running resource-intensive applications on the real computer or opening several virtual machines at once. The more powerful the computer, the more comfortable it is to work in a virtual machine: a powerful computer can run several virtual machines at the same time without problems, while a weak one can handle only one.

Install VMware Workstation Pro on your computer. By default, the program works in English; there is a good Russification on the Internet from Loginvovchyk, which must be installed after installing the program. After this, the VMware Workstation Pro virtual machine will work in Russian.

After launch, the main VMware Workstation window will open. At the top of the window there is a menu for managing the program. On the left is the “Library”, which will display the virtual machines installed in VMware. The “Home” tab contains buttons for performing the most frequently required actions: “Create a new virtual machine”, “Open a virtual machine”, “Connect to a remote server”, “Connect to Vmware vCloud Air”.

Creating a new virtual machine

To create a virtual machine (VM), click on the “Create a new virtual machine” button, or go to the “File” menu, select “New virtual machine...”.

The New Virtual Machine Wizard will open. In the first window, select the configuration type “Typical (recommended)”, and then click on the “Next” button.

The next window prompts you to select how the guest OS will be installed; three options are available:

  • installation from an installation DVD inserted into the computer's drive;
  • installation from an ISO image file of the system stored on the computer;
  • installing the operating system later.

If you select the first two options, after selecting the settings, the installation of the operating system on the virtual machine will begin. In the third case, the installation of the guest OS can be started at any other convenient time, after completing the setup of the virtual machine.

If you chose to install the system later, select the guest operating system; if it is not in the list, select "Other". Then select the OS version. A large selection of versions is offered for each system (more than 200 operating systems are supported), including "Other" variants of different bit depths (32-bit and 64-bit).

If you are installing a guest system while creating a virtual machine, then a window will open with information about quick installation. You do not need to enter a Windows product key or password; you only need to select the Windows version.

If your computer has more than one logical drive, I recommend changing the location where virtual machine files are stored from the user profile (the default) to another drive.

Why? If the Windows installed on your computer fails, you will have to reinstall the system, and a VMware virtual machine stored in the user profile on the system drive will be lost. If the virtual machine is not on the system drive, reinstalling Windows will not affect it.

To use it again, you will only need to install VMware Workstation and then add the virtual machine; you will not have to install and configure everything from scratch.

Therefore, on drive “E” (in your case, it will most likely be drive “D”) of my computer, I created a folder “Virtual Machines”, in which folders with files of virtual machines installed on my computer are saved.

For a new virtual machine, create a folder with the name of this VM in order to separate its files from other VMs.

Next, you need to choose the maximum size of the disk used by the virtual machine (60 GB by default; the size can be changed) and how the virtual disk is stored: as a single file or split into several files. Up to this amount of space on your computer's hard drive can be used for the virtual machine's needs.

When the virtual disk is stored as a single file, the VM performs somewhat better than when the disk is split into several files.

In the final window, click on the “Finish” button. After this, the installation of the guest operating system will begin.


If you selected the option to install the operating system later, this window will not contain the option "Enable this virtual machine after it is created", so the installation of the guest system will not start.

Setting up a VMware virtual machine

By default, the virtual machine is configured optimally for most cases. If necessary, you can change some settings and also add shared folders.

In the settings, on the "Hardware" tab, you can change the amount of memory allocated to this virtual machine, the number of processor cores, and the size of the virtual hard disk. In the "CD/DVD (SATA)" section, you can point the virtual drive at a physical disc or at an operating system image file for installation (if you chose to install later), and make other adjustments.
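These hardware settings are also just entries in the VM's .vmx file, so the CD/DVD-to-ISO mapping, for example, can be checked or edited there as well. An illustrative fragment (the device prefix depends on the virtual controller - it may be ide1:0, sata0:1, and so on - and the path is made up):

ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "D:\ISO\windows10.iso"
ide1:0.startConnected = "TRUE"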

In the “Settings” tab, in the “Shared folders” section, select the “Always on” setting, activate the “Map as network drive in Windows guests” option.

Next, click on the “Add…” button in the Add Shared Folder Wizard window, create a shared folder for exchanging data with the real system and other guest systems. It is advisable to create a shared folder not on the system drive for the reasons described above.

I already have such a folder on my computer (Data Sharing). I chose this folder for the new virtual machine. Next, enable this resource.

With the default settings, dragging, inserting and copying files from the real to the virtual system, and in the opposite direction, is allowed.

Opening a virtual machine

After reinstalling Windows (my case), you can open previously created virtual machines saved on your computer. In the main window of VMware Workstation, click on the “Open virtual machine” button, or in the “File” menu, select “Open...”.

Select the file (on my computer, virtual machines are in the “Virtual Machines” folder) of the virtual machine, and then click on the “Open” button.

On my computer, I opened previously saved virtual operating systems: Windows 10 x64, Windows 10, Windows 8.1, Windows 7, Mac OS X.

Running a Guest OS in VMware Workstation

To launch a guest operating system, in the VMware Workstation Pro program window, select the tab with the desired OS (if several guest OSes are installed), and then click on the “Enable virtual machine” button. You can turn on the system from the menu “Virtual machine”, “Power”, “Start virtual machine”.
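Besides the GUI, Workstation installs the vmrun command-line utility, which can power machines on and off as well; this is convenient for scripts. A quick sketch (the path to the .vmx file is illustrative):

vmrun -T ws start "E:\Virtual Machines\Windows 10 x64\Windows 10 x64.vmx"
vmrun -T ws start "E:\Virtual Machines\Windows 10 x64\Windows 10 x64.vmx" nogui

The -T ws switch tells vmrun it is talking to Workstation, and nogui starts the VM without opening its console window.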

To release the mouse cursor from the virtual machine, press the “Ctrl” + “Alt” keys, and to switch the mouse cursor to the virtual machine, press “Ctrl” + “G” (or click in the virtual machine window).

Installing VMware Tools

VMware Tools is a package of drivers and services that improve the operation of a virtual machine and its interaction with peripheral devices. Immediately after installing the operating system on the virtual machine, you need to install VMware Tools. A reminder about this will appear in the program window.

In the “Virtual Machine” menu, select “Install VMware Tools package...”. Next, open Explorer and run the installation of VMware Tools from the CD-ROM drive. Once the package installation is complete, reboot the guest operating system.
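In a Windows guest, the VMware Tools installer starts from the mounted virtual CD (or can be launched via setup.exe on it). In a Linux guest, the classic manual sequence looks roughly like the following; the paths are illustrative, and on most modern distributions you can simply install the open-vm-tools package from the repositories instead:

sudo mount /dev/cdrom /mnt
tar -xzf /mnt/VMwareTools-*.tar.gz -C /tmp
sudo /tmp/vmware-tools-distrib/vmware-install.pl
sudo reboot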

Guest OS Snapshots

In VMware Workstation, you can create a snapshot of the guest OS. After creating a snapshot of the system state, in case of failures in the guest OS, you can return to the previous operating state of the system.

In the "Virtual Machine" menu, click on the "Create Snapshot" item. Next, give the snapshot a name and add a description if necessary.

To restore the guest OS to the state it was in when the snapshot was taken, select "Return to snapshot: Snapshot N" from the context menu and confirm. The current state of the OS will be lost.

The created snapshots can be managed through the Snapshot Manager: create, clone, delete snapshots. The menu bar has three buttons for managing system snapshots.
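Snapshots can also be handled from the command line with the same vmrun utility, which is handy when you want to snapshot a machine automatically before an experiment. A sketch (the path and snapshot name are illustrative):

vmrun -T ws snapshot "E:\Virtual Machines\Windows 10 x64\Windows 10 x64.vmx" "Before update"
vmrun -T ws listSnapshots "E:\Virtual Machines\Windows 10 x64\Windows 10 x64.vmx"
vmrun -T ws revertToSnapshot "E:\Virtual Machines\Windows 10 x64\Windows 10 x64.vmx" "Before update"
vmrun -T ws deleteSnapshot "E:\Virtual Machines\Windows 10 x64\Windows 10 x64.vmx" "Before update"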

Disabling a virtual machine

To exit the virtual machine, in the “Virtual Machine” menu, click on the “Power” context menu item, and then select “Shut down guest OS”. The operating system will shut down as if you were shutting down your computer normally.

When you select the “Suspend guest OS” option, the system will pause its operation without disabling services and applications.

How to enter the BIOS of a VMware virtual machine

While the virtual machine is starting, it is not possible to enter the BIOS due to the fact that the BIOS screen loads almost instantly.

In order for the user to be able to enter the BIOS of a virtual machine when the system boots, it is necessary to open the configuration file (file extension .vmx) of this virtual machine in Notepad. The configuration file is located in the virtual machine folder, in the location chosen when creating the virtual machine.

Enter the following line at the very end of the configuration file:

bios.bootDelay = "15000"

This setting configures the BIOS screen delay in milliseconds, in this case, 15000 = 15 seconds. You can choose a different time interval.

Now the user can press the desired key on the BIOS screen that opens.
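A related option, if you want to go straight into the setup screen rather than just slow the boot down, is bios.forceSetupOnce. To the best of my knowledge it is honored by VMware virtual machines and resets itself after one boot, but treat it as something to verify rather than a guarantee:

bios.forceSetupOnce = "TRUE"

With this line in the .vmx file, the next power-on drops the virtual machine directly into the BIOS setup, with no key press needed.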

Removing a virtual machine

To remove a virtual machine, open the tab for that virtual machine in VMware Workstation Pro. In the “Virtual Machine” menu, select the “Manage” context menu item, and then select “Remove from disk”. In the warning window, agree to deletion (this is an irreversible action).

After this, all files of the guest virtual machine will be deleted from the computer.

Conclusions of the article

VMware Workstation Pro is a powerful application for creating and running guest virtual operating systems alongside the real OS on a computer. The guest operating system is isolated from the Windows installed on the computer.

Today I would like to talk about products that VMware used to make but that, for one reason or another, were discontinued and are no longer developed. The list is far from complete and largely reflects my own opinion of the products, based on my experience with them.

VMware ESX Server

I'll start with perhaps the most significant product, thanks to which VMware has become a leader in the server virtualization market.

VMware ESX Server is the first Type 1 hypervisor for Intel x86 processors. ESX wasn't the first server hypervisor, or even VMware's first product. However, it was the first to implement such features as live VM migration (vMotion), high availability of VMs (High Availability), automatic balancing (Distributed Resource Scheduler), power management (Distributed Power Management) and much more.

By the way, have you ever wondered what the abbreviation ESX means? So, ESX is Elastic Sky X. Which once again proves that back in 2002, VMware developed its products with cloud computing in mind...

ESX was built on a monolithic architecture: all drivers, networking, and the I/O subsystem ran at the hypervisor level. However, to manage the hypervisor, a small service VM - the Service Console, based on a modified Red Hat Linux distribution - was installed on each host. On the one hand, this imposed a number of restrictions: the service VM consumed part of the host's computing resources, its disks, like those of any other VM, had to be placed on VMFS storage, and each host needed at least two IP addresses, one for the VMkernel interface and one for the Service Console. On the other hand, the Service Console made it possible to install third-party software (agents, plugins), which expanded the options for monitoring and managing the hypervisor. The presence of the Service Console also gave rise to the common misconception that the ESX hypervisor is just a modified Linux.

It is worth mentioning that the first versions of ESX were installed and managed individually; however, starting with ESX 2.0, VMware VirtualCenter (now well known as vCenter Server) appeared for centralized management of multiple hosts. That is when Virtual Infrastructure emerged - a virtualization bundle consisting of the ESX hypervisor and the VirtualCenter management software. By version 4.0, Virtual Infrastructure had been renamed vSphere.

In 2008, an alternative hypervisor appeared - ESXi - which did not need the Service Console and was much smaller, but did not support much of what ESX could do (ESXi initially had no web interface, no built-in firewall, no boot from SAN, no Active Directory integration, etc.). With each new version, VMware gradually expanded ESXi's functionality. VMware vSphere 4.1 was the last release to include the ESX hypervisor; starting with 5.0, only ESXi remained.

VMware GSX Server/Server

For many years, VMware GSX Server was released in parallel with VMware ESX. Ground Storm X (as the GSX abbreviation stands for) was a Type 2 hypervisor installed on top of Microsoft Windows, Red Hat, or SUSE Linux server operating systems. Being a Type 2 hypervisor had its advantages. First, GSX supported a much wider range of hardware than the "capricious" ESX and could even run on desktop hardware. Second, VMware GSX was extremely easy to install and configure; anyone who had worked with VMware Workstation could handle GSX. Third, GSX had a built-in NAT and DHCP server, which made it easy to set up networking for the VMs.

Like its older brother, GSX supported centralized management via VirtualCenter.

Later, GSX was renamed VMware Server, and gained the ability to run 64-bit VMs, as well as allocate several virtual processors to VMs. Released at the end of 2008, VMware Server 2.0 became free, acquired a full-fledged web interface and the ability to forward USB devices inside a VM, but lost support for VMware VirtualCenter.

By that time, the ESX and ESXi hypervisors held most of the server virtualization market, and the release of the free VMware ESXi and Microsoft Hyper-V Server became the final nail in the coffin of VMware Server. Both VMware and Microsoft abandoned their hypervisors that ran on top of server operating systems.

VMware vCenter Server Heartbeat

The product, designed to ensure high availability of vCenter services and related services (DBMS, SSO, Update Manager), was developed not by VMware itself, but by a third-party company - Neverfail Group.

The protection mechanism was based on the idea of ​​organizing a two-node cluster operating in active-passive mode. The passive node monitored the state of the main node, and if it was unavailable, it launched clustered services. The cluster did not require shared storage because changes made on the active node were periodically replicated to the passive node. vCenter Heartbeat provided protection for both physical and virtual, and even mixed vCenter configurations where one node was physical and the other was virtual.

Although for some time vCenter Heartbeat was the only way to protect vCenter not only from hardware but also from software failures, the implementation was frankly lame: the complicated installation and maintenance procedure and numerous bugs took their toll. As a result, starting with vSphere 5.5 U3 / vSphere 6.0, VMware abandoned vCenter Heartbeat and went back to the more familiar clustering approach based on Microsoft Failover Cluster.

VMware vCenter Protect

Those of you who have worked with vSphere since at least version 4 may remember that back then vCenter Update Manager could install updates not only for ESX/ESXi hypervisors but also for guest operating systems and various software. Starting with version 5.0, however, this functionality was removed from Update Manager; instead, VMware began to offer a separate product, VMware vCenter Protect, which came with the acquisition of Shavlik.


In addition to updating guest OSes, vCenter Protect made it possible to perform an inventory of software and hardware, run various scripts on a schedule, and scan for vulnerabilities.

But sales apparently did not go very well, and in addition VMware's portfolio already included vRealize Configuration Manager, acquired from EMC in 2010, which handled patch management, inventory, and much more. So in 2013 vCenter Protect was sold to LANDesk.

VMware Virtual Storage Appliance

Virtual Storage Appliance is VMware's first attempt to play in the software-defined storage market. VSA was intended for SMB and made it possible to create a common fault-tolerant storage system based on local disks installed in the server.


A dedicated VSA appliance was deployed on each ESXi host. The VSA's virtual disks were placed on VMFS datastores created on volumes of the local RAID controller. Half of the disk space was used to mirror data from the VSA appliance on a neighboring host (a kind of network analogue of RAID 1), and half was left for useful data. Each appliance then presented its mirrored storage back to all virtualization hosts over the NFS protocol. One installation supported two or three virtualization hosts; with two hosts, vCenter Server acted as the arbiter and had to be deployed on a separate physical server or on an ESXi host that was not part of the VSA.

The functionality of VSA was very limited. For example, the first version supported placement only on VMFS volumes built on RAID 1 or 10, which led to a high storage overhead (in practice, usable space was less than a quarter of the raw capacity of the local disks); there was no VAAI support and no support for caching or sharing.

All this, combined with a not too low price and low performance, did not allow VSA to displace conventional storage systems from the SMB segment. Therefore, shortly after the release of the first version of Virtual SAN in 2014, the product was discontinued from sale.

VMware Virsto

Another victim of Virtual SAN: a product of the company of the same name, which VMware acquired in 2013. As far as I know, after the purchase Virsto never appeared in the price lists and was effectively written off almost immediately.

A promising development in software-defined storage, Virsto was a virtual appliance acting as a storage virtualizer: storage resources were presented to the appliance, and the appliance in turn served disk space to the hosts over the NFS protocol. The heart of Virsto was VirstoFS, a specialized file system that optimized write and read operations using mechanisms similar to those found in NetApp FAS storage systems. Virsto could accumulate random write operations in a dedicated log and then write the data out to the storage sequentially, which had a positive effect on IOPS and latency. In addition, Virsto supported multi-level data storage (tiering) and optimized snapshot handling by keeping in RAM the metadata about which data block belongs to which snapshot.


Although the product itself was never released under the VMware brand, the developers' efforts were not wasted: in Virtual SAN 6.0, VMFS-L was replaced by a new on-disk format based on VirstoFS, along with support for "advanced" snapshots.

VMware Lab Manager

A product for automating the deployment and lifecycle management of VMs in test environments.

Essentially, Lab Manager was a "manager of managers": it was deployed on top of an existing VMware ESX/ESXi and vCenter installation and made it possible to organize multi-tenant access to a shared virtual infrastructure, allocate a defined set of computing resources to users, automatically issue IP addresses to VMs from pools, create isolated networks for VMs, and set lease periods for VMs.


With the growing popularity of the topic of cloud computing, VMware switched to another product - vCloud Director, gradually transferring all the developed features from Lab Manager and closing it.

VMware ACE

I want to finish the review with a rather rare beast - VMware ACE. Even before the advent of VDI in its classic form and the widespread adoption of BYOD, VMware offered clients software for centralized management of virtual workstations that could run on users’ personal computers - VMware ACE.


ACE worked in conjunction with the VMware Workstation and Player client hypervisors and made it possible to manage VMs according to specified policies. Using policies, administrators could limit VM functionality (for example, disable USB device passthrough or control network access), force encryption of virtual disks, allow access to the VM only for authorized users, and set a VM lifetime after which the VM would no longer start, and so on. The VM, together with its policies and the VMware Player hypervisor, could be exported as a ready-made Pocket ACE package and handed to the user in any convenient way (on a CD, a flash drive, or over the network). If necessary, the administrator could deploy an ACE Management Server on the network, which the client hypervisors would contact to request the latest policy settings for the VM.

Despite the interesting functionality, the product was not widely used and, according to VMware, did not meet all the requirements of the few customers who did use it, so it was discontinued in 2011. A few years later, ACE was effectively replaced by VMware Horizon FLEX, which has its own mechanism for delivering VMs to user computers and also supports the VMware Fusion Pro hypervisor for Apple Mac OS X.