Monday 16 November 2015

10 Virtualization Tips Every Administrator Should Consider
Virtualization has become a necessity for all organizations irrespective of their size. Virtualization reduces costs and enables organizations to get more out of their technology investments. Because virtualization is a vast area, proper knowledge of how to use it in the best possible way is the key to success.

Keeping this in mind, let's take a look at what should be considered while implementing virtualization.

10 Tips for Implementing Virtualization
Quick recap: Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server or network resources. It allows a user to run multiple operating systems on one computer simultaneously. For many companies, virtualization is part of a broader trend toward IT environments that can manage themselves based on perceived activity, and toward utility computing. The most important goal of virtualization is to reduce administrative overhead while improving scalability and workload utilization. In a nutshell, virtualization abstracts the computing functionality of a device from its physical hardware. Now that we have that out of the way, here are 10 virtualization tips that you should keep in mind if you are planning or running a virtual environment.

1. Consider the Hardware
Hardware virtualization has become a dominant trend for IT departments. Planning begins with a server's memory and computing resources. When you are sizing hardware for virtual capacity, consider purchasing more capacity than you would normally need, because each virtual machine consumes memory, CPU and storage on the host server.

The pitfalls of hardware virtualization include the risk of deleting or overwriting data stored on a consolidated system, as well as the upfront costs.

2. Track the Virtual Machine Life Cycle
You need to keep track of your virtual machine from its starting point to its end point. The life cycle of a virtual machine provides the ability to use physical resources in an efficient and productive manner. The life cycle includes two parts:


  • Configuration: This is performed in a development environment, which allows for creation, testing and modification of the virtual machine.
  • Deployment: This is performed in the production environment.
As an administrator, you should handle both the configuration and deployment.

3. Avoid Virtualizing Everything

Virtualization provides cost savings, lower resource use and easier administration, but the server still must process internal traffic from a large number of users, so make an appropriate plan for everything you virtualize. Some things are simply not suitable for a virtual environment. These include:


  • Anything that requires physical hardware
  • Anything that requires extreme performance
  • Applications or operating systems that don’t allow for virtualization due to license agreements
  • Applications or any resources that have not been tested
  • Resources the virtual environment itself depends on, such as host systems, images, authentication, networking and storage
  • Workloads that can't tolerate the two points of failure a VM carries: failure of the VM itself and of its host
  • Systems that may contain secure information that should not be accessible to others
In some cases, such as when old desktops need to be converted to a virtualized desktop infrastructure, it is easier and less time consuming to simply replace them with thin clients.

4. Monitor Virtual and Non-Virtual Traffic
You need to monitor all traffic, both virtual and non-virtual. Don't assume that virtual hosts are inherently safer and can be left out of your security planning. Monitoring both virtual and non-virtual machines is essential for tracking the internal and external traffic of each virtual machine. Over time, you may find that specific machines need more resources while other virtual machines can continue to stand alone.
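Hyper-V's built-in resource metering offers one way to keep an eye on what individual VMs are consuming, including network traffic. A minimal sketch, assuming a Windows Server 2012 or later host and a VM named "web01" (both names are placeholders):

```powershell
# Turn on resource metering for one VM (metering data persists across reboots)
Enable-VMResourceMetering -VMName "web01"

# Later, report average CPU and memory usage plus metered network traffic
# collected since metering was enabled
Measure-VM -VMName "web01"
```

You would still pair this with conventional network monitoring for non-virtual traffic; resource metering only covers the VMs themselves.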

5. Don't Expect Virtual Resources to Be Free
Virtual machines usually take less space on the server, but that doesn't mean these resources come without a cost. Virtualization clients need to understand that every virtual machine carries a price, on top of the cost of the server being virtualized. Sometimes virtualization costs run so high that a single budget can't cover the bill.

6. Virtual Machines Can Be a Temporary Service
Sometimes you need a service only temporarily, and virtual machines can provide it better than any other kind of machine. With virtual machines, there is no need to stand up dedicated hardware for a temporary FTP server, print server or Web server. Because virtual machines are free of dedicated hardware costs, spinning one up is quite easy. Virtual machines therefore allow you to create purpose-built machines for disposable tasks, and you can use them whenever you need them.
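The full lifecycle of a disposable VM can be scripted in a few lines. A sketch, assuming a Hyper-V host with local storage; the VM name, path and sizes are all example values:

```powershell
# Create a throwaway VM for a short-lived task
New-VM -Name "temp-web" -MemoryStartupBytes 1GB `
       -NewVHDPath "D:\VMs\temp-web.vhdx" -NewVHDSizeBytes 40GB
Start-VM -Name "temp-web"

# ...run the temporary service, then dispose of the VM when done
Stop-VM -Name "temp-web" -Force
Remove-VM -Name "temp-web" -Force
```

Note that Remove-VM deletes the VM configuration but not the virtual hard disk file, which you would remove separately.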

7. Virtual Machine Templates Make Deployment Easier
Virtual machine templates make it easy to deploy machines based on specific configurations or needs. A set of virtual machine templates lets deployment proceed as simply as possible, saving time and effort when rolling out machines that provide a specific service, such as web servers. Once you create the template for a virtual machine, you can reuse it as often as necessary; there is no need to perform the configuration work repeatedly, so both time and money are saved.
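On a plain Hyper-V host (without a template library such as SCVMM's), you can approximate templates by exporting a configured "golden" VM once and importing copies of it. A sketch; all names and paths are hypothetical, and `<config-file>` stands in for the export's generated configuration file name:

```powershell
# Export a fully configured golden VM once
Export-VM -Name "web-template" -Path "D:\Exports"

# Deploy a new copy from the export; -Copy duplicates the files and
# -GenerateNewId gives the clone its own VM identity
Import-VM -Path "D:\Exports\web-template\Virtual Machines\<config-file>.xml" `
          -Copy -GenerateNewId `
          -VirtualMachinePath "D:\VMs\web01" -VhdDestinationPath "D:\VMs\web01"
```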

8. Allocate Thick Provisioning for Virtual Machines
In many organizations, administrators allocate dynamically expanding disks for their virtual machines. To get as much performance as possible, use thick provisioning instead: set an actual, fixed size for the disk in the virtual machine configuration. Before doing so, make sure the host machine has enough space to hold the fully allocated disks. Once thick provisioning is in place, the virtual machine's disk performance will be better, because space never has to be allocated on demand.
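In Hyper-V terms, thick provisioning means a fixed-size VHDX. A sketch, assuming a host with a `D:\VMs` folder and an existing VM named "app01" (both placeholders):

```powershell
# Create a thick-provisioned (fixed-size) disk; all 100 GB is allocated up front
New-VHD -Path "D:\VMs\app01-data.vhdx" -SizeBytes 100GB -Fixed

# Attach the new disk to an existing VM
Add-VMHardDiskDrive -VMName "app01" -Path "D:\VMs\app01-data.vhdx"
```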

9. Use Guest Add-Ons and Virtualization Tools to Improve Performance
If you want to improve the experience and performance of your virtualized environment, you need to install:


  • Guest add-ons
  • Virtualization tools
These are provided by virtualization platforms such as VMware and VirtualBox.

This improves communication between the guest and the host machines. Many administrators neglect this step and assume the add-ons and virtualization tools are unnecessary. By installing them, along with additional tools such as display drivers, mouse integration and guest-to-host time synchronization, you can improve both the life and the performance of the virtual machine.

10. Make Sure the Host Machine's Patches Are Always Up to Date
The host OS plays a big role in the operation of every virtual machine it runs. If a server hosting numerous virtual machines isn't properly patched and protected, there is the potential for a huge amount of data loss. Therefore, always keep your host machine fully patched and secure.

Virtualization has become a necessity for nearly all organizations in the IT world, so IT administrators should have a clear idea of the processes and options involved. By following the tips described above, virtualization enables organizations to get more mileage out of their tech investments.


Get your VMs up and running with the Hyper-V Automatic Start Action

Hyper-V gives administrators several options for automating the startup and shutdown of VMs on a host.

In a Hyper-V environment, administrators can choose to start a virtual machine automatically when a Hyper-V server is booted, as well as control the behavior of a VM during the hypervisor shutdown. This article examines some of the best practices for automatic start and stop actions.
Automatic Start Action options
The Hyper-V Automatic Start Action is configured on a per-VM basis. You can access the Hyper-V Automatic Start Action options by opening the Hyper-V Manager, right-clicking on a VM and selecting the Settings command from the shortcut menu -- the automatic start settings are also exposed through System Center Virtual Machine Manager. Once the settings window for the VM is displayed, simply select the Automatic Start Action option from the console tree. You can see the Automatic Start Action settings below in Figure A.
There are three available Hyper-V Automatic Start Action options.
As you can see in the figure above, Hyper-V gives you three options for automatically starting VMs. The first option is "Nothing." This option essentially tells Hyper-V not to do anything with the VM when the Hyper-V server is booted. In other words, setting the Automatic Start Action to "Nothing" prevents the VM from automatically being started when the Hyper-V server is booted.

The second option is to automatically start the VM if it was running when the service stopped. This is the default option, and it tells Hyper-V to check what the status of the VM was at the time that the server was shut down. If the VM was running, then Hyper-V will automatically start the VM as a part of the host server's boot process.
The third option is to always start the VM automatically. Users are given an option to select an automatic start delay. Using this option allows you to choose to delay the start of a VM for a number of seconds.
Although the default behavior is to automatically start the VM if it was running at the time that the server was stopped, I recommend using this automatic start option only for critical infrastructure servers, such as DNS servers, DHCP servers and domain controllers. Application servers and less critical infrastructure servers should be configured to use a startup delay. The reason for this is that some application servers will fail to come online if the required infrastructure is not already present. Using a startup delay ensures that the infrastructure has time to come online before application servers begin to boot.
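The staggered-startup approach above can also be set from PowerShell with Set-VM, which avoids clicking through each VM's settings. A sketch; the VM names and the 120-second delay are example values:

```powershell
# Critical infrastructure VM (e.g. a domain controller): always start, no delay
Set-VM -Name "dc01" -AutomaticStartAction Start -AutomaticStartDelay 0

# Application server: start only if it was running, after a two-minute delay
# so the infrastructure VMs have time to come online first
Set-VM -Name "app01" -AutomaticStartAction StartIfRunning -AutomaticStartDelay 120
```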
Automatic Stop Actions
Like the automatic start options, the automatic stop options are also set on a per-VM basis and can be configured using either the Hyper-V Manager or System Center Virtual Machine Manager. To access these settings within the Hyper-V Manager, right-click on the VM that you want to configure and select the Settings command from the shortcut menu. When the Settings window appears, select the Automatic Stop Action option from the console tree. You can see the available settings in Figure B.
There are three available Hyper-V Automatic Stop Action options.
Hyper-V gives you three options for handling a VM in response to a physical-host-server shutdown. The first option is to save the VM state. In many cases, this is a good option, but it can require a significant amount of disk space. Hyper-V uses disk space equal to the amount of memory that has been allocated to the VM. This allows the memory contents to be written to the disk so that the VM can later be brought back online in the same state in which it previously existed.
The second option is to turn off the VM. This option is similar to yanking the power cord out of a physical server. It isn't a graceful shutdown, but this is the option that you will probably have to use if you have a VM without Hyper-V Integration Services.
The third option is to shut down the guest operating system. This option uses the Hyper-V Integration Services to perform a graceful shutdown of the VM.
Many resources suggest that saving the VM state is the preferred Hyper-V Automatic Stop Action. However, there are some situations in which saving the VM state may not be the best course of action. For example, if you have a VM that has been allocated a very large amount of memory, but your storage resources are limited, then saving the VM state might not be a good idea because it will consume storage space. This is also a poor choice if you need the VMs to be taken offline as quickly as possible.
Likewise, if you have a distributed application that spans multiple VMs, then using the saved state option usually isn't going to be a good idea. The reason for this is that the VMs might be brought back online in a random order, thereby confusing the distributed application. For these types of VMs, it may be better to shut down the guest operating system.
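The stop action can likewise be set per VM from PowerShell. A sketch with hypothetical VM names, illustrating the trade-off described above:

```powershell
# Graceful guest shutdown (requires Integration Services in the guest) --
# a good fit for VMs in a distributed application
Set-VM -Name "app01" -AutomaticStopAction ShutDown

# Save state instead; note this consumes disk space roughly equal to
# the memory allocated to the VM
Set-VM -Name "sql01" -AutomaticStopAction Save
```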
As you can see, Hyper-V allows you to control the automatic start and Automatic Stop Actions for your VMs. It is worth noting that the settings behave a little bit differently if the VM has been made highly available. If a host server is taken offline, highly available VMs will fail over to another cluster node rather than initiating an Automatic Stop Action.
Another thing to keep in mind is that Automatic Stop Actions do not work in the event of a host server crash. A crash happens abruptly, and Automatic Stop Actions are only initiated in the event of a graceful host shutdown.

Thursday 12 November 2015

Microsoft's Nano Server is shaking up containers

Nano Server is more than a slimmed down operating system, and it could have a big impact on how applications are deployed.
Container technology continues to grow as developers embrace it, and it is starting to find a place in the data center. Containers are not micro-operating systems but pieces of the OS needed to run an application. They are thinner than Java and come in a lot fewer versions.

Previously, Java and Docker focused on other platforms, mostly because Microsoft controlled the Windows and .Net platform.
With Windows Server 2008, Microsoft released Windows Server Core, which was a full operating system without the GUI. While this was not a container, it did show that you don't need the entire OS to run core services or applications. Windows Server Core is included in Windows Server 2012 and the upcoming Windows Server 2016; however, it is Microsoft's new Nano Server that may be a game changer.
Nano Server is not simply a container platform, nor is it a lightweight version of Windows Server Core. While based on the Microsoft server platform, Nano Server has much of the interface, application stack and traditional .Net Framework removed. It becomes a lightweight host for Hyper-V VMs or for applications designed to run on the .Net Core framework. The ideal target for Nano Server is infrastructure for native cloud-based applications. Its small footprint in disk space and code helps make Nano Server a platform that should require little patching or maintenance -- making it ideal for cloud-based environments.
Nano Server isn't Microsoft starting over -- but it is pretty close. Without the traditional .Net Framework, remote management is required; even many of the traditional hooks that allow servers with graphical user interfaces to be managed remotely are missing. This OS is designed for remote management with scripting and automation through code rather than the traditional OS management tools. Nano Server is Microsoft's entry into the microservices world. Similar to what microsegmentation is doing for software-defined networking, microservices have the ability to shake up how we work with applications today.
Today, Windows Server is a Swiss army knife with the ability to run millions of different applications, and therein lies the problem. The base OS continues to grow in size and complexity, and the overhead of a traditional Windows Server OS providing a single core service is staggering. Simple features, such as DNS or DHCP, came with a 20-plus GB GUI server installation. Windows Server Core helped address this issue, and now Nano Server is the next step in the evolution.
It is very unlikely that Nano Server will replace the traditional server OS overnight. Microsoft is still working on tools for administrators to support the new Nano Server. Windows Server Core 2008 suffered slow deployment due to the lack of remote tools for administrators, a problem that was addressed in Windows Server 2012. The other challenge will be developing applications for Nano Server. Since these containers do not run a full installation of the .Net Framework, developers will need to redesign at least part of their applications to take advantage of the .Net Core framework. While this may seem troubling, streamlining the server to focus only on exactly what it needs to do is ideal in today's world, where a system administrator's time is so heavily consumed by duties such as patching and security hardening.
The other important functions for Nano Server are the Hyper-V and scale-out file server roles. Both of these roles fit very well within the Azure and cloud-based strategy that Microsoft is moving forward with. The Hyper-V role should be of particular interest to the many administrators looking to adopt Hyper-V as an alternative to VMware. While Nano Server is still not as streamlined as VMware's ESXi, it is a great step in the right direction and an improvement over Windows Server Core. However, the unique thing about Nano Server is that it can run on bare metal, as a virtual machine or even as a container -- something VMware's ESXi cannot do -- giving developers and administrators the ultimate flexibility.
Microsoft's Nano Server is a unique departure for Microsoft and, according to the company, the future of the Windows Server platform. Linux has a head start on the microservices journey, but Microsoft has shown an uncanny ability to turn on a dime when needed. If Microsoft can find the balance point -- an agile, quick, streamlined container platform that is still versatile enough to support the gigantic Windows developer community, all while allowing balanced administration -- Nano Server could be a game changer. That sounds like a lot to balance (and it is), but let's not forget the improvements Microsoft made to Server Core from Windows Server 2008 to Windows Server 2012. Those changes put Windows Server Core 2012 into the enterprise with the proper balance of performance, versatility and management features. Nano Server looks to be that evolutionary and revolutionary step for Windows Server.

How does the Hyper-V parent partition architecture work?

Microsoft's hypervisor relies on several modules and services to deploy and manage virtual machines. Do you know what they are and how they work?
There are several modules that operate together to form Microsoft's Hyper-V hypervisor. Hyper-V implements a main partition, called the parent partition, which runs Hyper-V's main service: the Virtual Machine Management Service (VMMS). VMMS is the module designed to control all aspects of Hyper-V server virtualization, and it relies on several sub-modules, explained below.

WMI Provider: This module acts as an interface between developers and VMs running in the child partitions. The Windows Management Instrumentation (WMI) Provider component implements the necessary WMI classes for developers to execute an action on the VMs running on a Hyper-V host. Microsoft implements root\virtualization as the core WMI Provider that contains networking, VM BIOS, storage and video classes to help you interact with Hyper-V VMs.
Hyper-V VSS Writer: Backups of Hyper-V VMs are handled by the Volume Shadow Copy Service (VSS) Writer component. The Hyper-V VSS Writer backs up VMs without any downtime. The Hyper-V VSS Writer, together with the Hyper-V Volume Shadow Copy Requestor service running in a VM as part of Integration Services, enables online backup functionality. Any requests for VM backups are handled by the Hyper-V VSS Writer and then sent to the Hyper-V Volume Shadow Copy Requestor service.
Virtual Machine, Worker Process and Snapshot Managers: The Virtual Machine Manager component is responsible for managing VM states. When you open the Hyper-V Manager, VMMS.exe calls the Virtual Machine Manager component to refresh VM statuses. Worker Process Manager launches a VM worker process for each VM and keeps track of all worker processes running in the parent partition. Worker Process Manager also processes snapshots or checkpoints for running VMs. On the other hand, Snapshot Manager – as the name suggests – handles snapshots or checkpoints for VMs that are offline.
Single Port Listener for RDP: Remote Desktop Protocol (RDP) is used by the Virtual Machine Connection Manager tool to connect to a VM over network port 2179. VMMS.exe listens on network port 2179 for incoming RDP requests from the VMConnect.exe tool. When VMMS.exe receives an RDP request, it redirects the request to the Single Port Listener for RDP component, which, in turn, enables the RDP session to the VM.
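You can see this path in action from the command line; the host and VM names below are placeholders:

```powershell
# VMConnect opens a console session to the VM -- the request travels to
# VMMS.exe on the Hyper-V host over port 2179, not to the guest directly
vmconnect.exe HV01 "web01"
```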
Cluster Resource Control: With the help of the Cluster Resource Control component, VMMS.exe enables high availability for VMs running in a Hyper-V cluster. Cluster Resource Control uses HVCLUSRES.DLL to interact with VM resources.

The power of the SMB protocol in Hyper-V


The SMB protocol allows admins to achieve high availability, reduce operational costs and migrate virtual machines faster.

If you look at the history of the Server Message Block (SMB) protocol, you will see that it has been widely used to provide file services to SMB clients. An SMB client makes requests to access, read and write file shares residing on an SMB server; what the client can do depends on what the SMB server component has to offer. SMB server components running on Windows Server 2012 or later have a lot to offer SMB clients. Microsoft took the development of the SMB protocol a step further in Windows Server 2012 and introduced enterprise-grade features that have not only helped reduce costs for small and midsize businesses, but have also become a strong selling point for Hyper-V server virtualization.

Initially, SMB was known just for file sharing, but starting with Windows Server 2012, SMB 3.0 brought significant changes, including SMB Direct, SMB Multichannel (multiple connections per SMB session), SMB Transparent Failover, SMB Scale-Out, SMB Directory Leasing and many more. Windows Server 2012 R2 introduced a new version, SMB 3.02, which included further significant changes: improved performance of SMB Direct, improved SMB bandwidth management, and support for Hyper-V live migration of VMs and VM storage without failover clustering, to name a few.
Achieving high availability of Hyper-V VMs without block-based storage: One benefit that boosts the return on investment is the use of SMB file shares as shared storage for Hyper-V hosts. This is sometimes referred to as "Hyper-V over SMB." In earlier versions of Hyper-V, you had to store Hyper-V VMs on block-based storage to achieve high availability of the VMs. In Windows Server 2012 and later OSes, you can configure SMB file shares to host Hyper-V VM files -- such as VM configuration, virtual hard disks and snapshot files -- and expect the same reliability, availability and high performance that you get from block-based storage. When implemented as shared storage on Windows Server 2012 or later, SMB helps deploy a Hyper-V server virtualization infrastructure without spending precious IT dollars on expensive SAN devices. There are several other Hyper-V scenarios where SMB can be useful. You can create a file server cluster running on Windows Server 2012 or later, create SMB shares, configure the available properties for the SMB shares, and then make the SMB shares available to Hyper-V hosts for hosting the VM files.
If you deploy VM files over an SMB share created on Windows Server 2012 or later file servers, the SMB Transparent Failover feature can help you provide continuous availability of VMs in case of any maintenance activity on one of the clustered nodes in a file server cluster. This is achievable by implementing an SMB Scale-Out file server cluster. A scale-out file server hosts SMB shares that are simultaneously online on all the nodes in a file server cluster.
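The basic setup can be sketched in two steps: create the share on the file server, then point the Hyper-V host's default paths at it. All server, share and domain names below are hypothetical, and a real deployment also needs matching NTFS permissions for the Hyper-V hosts' computer accounts:

```powershell
# On the file server: create a continuously available share for VM files;
# the Hyper-V host's computer account (HV01$) needs full access
New-SmbShare -Name "VMStore" -Path "E:\Shares\VMStore" `
             -FullAccess "CONTOSO\HV01$", "CONTOSO\Domain Admins" `
             -ContinuouslyAvailable $true

# On the Hyper-V host: use the share as the default location for new VMs
Set-VMHost -VirtualMachinePath "\\FS01\VMStore" `
           -VirtualHardDiskPath "\\FS01\VMStore"
```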
Reducing operational expenditure: You can manage file shares instead of requiring someone with expertise to manage storage fabric and LUNs which, in turn, helps you reduce the operating expenditures associated with managing a SAN environment. A virtual administrator who knows how to manage a file share can easily manage SMB shares rather than requiring another administrator to manage the complex SAN environment.
SMB as shared storage for VHDX file sharing: The VHDX file sharing feature of Hyper-V helps you implement guest clustering without exposing SAN storage to Hyper-V guests. The VHDX file that will be shared among multiple VMs must be kept on shared storage. Since an SMB share created on Windows Server 2012 or later can act as shared storage, you can implement guest clustering without requiring storage from a SAN. Note that to use the VHDX file sharing feature, you need to host the VMs on a Windows Server 2012 R2 Hyper-V host.
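A sketch of attaching one shared VHDX to two guest-cluster nodes, assuming the SMB share from earlier; the file server, path, size and VM names are all example values:

```powershell
# Create the VHDX on the SMB share, then attach it to each guest-cluster node
# with persistent reservations enabled -- this is what makes it shareable
New-VHD -Path "\\FS01\VMStore\cluster-disk.vhdx" -SizeBytes 10GB -Fixed

Add-VMHardDiskDrive -VMName "node1" -Path "\\FS01\VMStore\cluster-disk.vhdx" `
                    -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "node2" -Path "\\FS01\VMStore\cluster-disk.vhdx" `
                    -SupportPersistentReservations
```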
Some Hyper-V features use SMB: It is worth mentioning that SMB has a role to play for some of the notable Hyper-V features such as Storage Live Migration and Shared Nothing Live Migration introduced in Windows Server 2012. Neither of those features require Windows failover clustering to be implemented to utilize the SMB 3.0 capabilities to move VM and its storage while the VM is running. In case you have noticed a Hyper-V VM or a Storage Live Migration transfer without a Hyper-V failover cluster, it is the SMB 3.0 running on Windows Server 2012 R2 that does the job in the background.
Faster Live Migration of VMs: Most of Hyper-V virtual administrators who are familiar with Hyper-V Live Migration and SMB Direct or "SMB over RDMA" will select SMB as the live migration protocol for transferring VMs to other Hyper-V hosts. It is because live migration over SMB takes advantage of RDMA network acceleration, which in turn gives you faster live migration.
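Selecting SMB as the live migration transport is a per-host setting. A minimal sketch, to be run on each Hyper-V host that should use SMB (and SMB Direct, where RDMA NICs are present) for migrations:

```powershell
# Enable incoming/outgoing live migrations on this host, then choose SMB
# as the transport so RDMA-capable NICs can accelerate the transfer
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```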
A protocol for future virtualization: It is imperative to understand that Microsoft has been working continuously on the development of the SMB protocol for the upcoming Windows Server 2016. A few new features in Windows Server 2016 help you run a secure SMB environment, such as pre-authentication integrity and various encryption options for encrypted SMB connections. The SMB version that will ship in Windows Server 2016 is SMB 3.1.1.
The minimum infrastructure required to deploy Hyper-V over SMB is that you obtain a copy of Windows Server 2012 or later, install File and Storage Services role, configure SMB share, and then have the SMB shares available to a standalone Hyper-V host or a Hyper-V cluster.


What did Microsoft add to Hyper-V Replica in 2012 R2?

Microsoft updated its Hyper-V Replica feature for Windows Server 2012 R2, but do you know how these changes can affect your RTO and recovery options?

Hyper-V Replica provides replication services for virtualized workloads running on Windows Server 2012 (and later) Hyper-V hosts. In its first release with Windows Server 2012, Hyper-V Replica provided a number of great backup and recovery features. Microsoft's main focus has been to provide replication features that help reduce the overall recovery time objective (RTO) for virtualized workloads and help manage workloads at both primary and replica sites.

Reduction in overall RTO: When using Hyper-V Replica, the primary objective is to reduce the time it takes to restore business services. Restoring business continuity depends on two things: how fast a replica VM can come online and how much data you need to restore to bring the replica VM up to date. Microsoft provides two new features in Hyper-V 2012 R2 that help you reduce the overall RTO:
  • Generation 2 VMs and Hyper-V Replica: Microsoft introduced Generation 2 VMs in Hyper-V 2012 R2. Because Generation 2 VMs boot faster, they help you bring up the replica VM as quickly as possible. The replica VM remains turned off at the replica site and is brought online only if a disaster strikes the primary site or VM. It takes less time to bring a Generation 2 replica VM online than it takes to boot up a Generation 1 VM.
  • Replication frequency: In Hyper-V running on Windows Server 2012, the replication interval was five minutes and was not configurable. In Hyper-V 2012 R2, you can select between 30 seconds, five minutes and 15 minutes as the replication interval. If you run an application inside the primary VM that performs write operations every few seconds, such as SQL Server, setting a replication interval of 30 seconds ensures that changes made at the primary VM are replicated to the replica VM as quickly as possible, which in turn reduces the time it takes to recover the VM. If changes are replicated every 30 seconds, you won't have to spend much time bringing the replica VM up to date; you can survive 30 seconds of data loss, whereas recovering five minutes of data might take some time. You might not want a 30-second replication frequency for everything, but the replication interval can be set per VM.
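Because the interval is per VM, it can be enabled where it matters. A sketch for a write-heavy VM, assuming Kerberos authentication between the hosts; the VM and replica server names are placeholders:

```powershell
# Enable replication for one VM with the shortest (30-second) interval
Enable-VMReplication -VMName "sql01" -ReplicaServerName "hv-replica01" `
                     -ReplicaServerPort 80 -AuthenticationType Kerberos `
                     -ReplicationFrequencySec 30
```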
Recovery copies: Recovery copies are generated every hour. In Windows Server 2012 you could keep a maximum of 15 recovery copies; Hyper-V 2012 R2 increased this to 24. This means you can recover a VM from a recovery point that was created up to 24 hours prior.
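The number of retained recovery points is also a per-VM replication setting. A sketch, with a hypothetical VM name:

```powershell
# Keep 24 hourly recovery points for this VM instead of only the latest one
Set-VMReplication -VMName "sql01" -RecoveryHistory 24
```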
Extra layer of protection: Microsoft introduced the extended replication feature in Hyper-V running on Windows Server 2012 R2 hosts. With the extended replication feature, you can extend replication of a VM to a third site which allows you to recover a VM in case both the primary and replica servers are impacted by a disaster. All changes that occur at the primary server are copied to the replica server and the replica server, in turn, copies these changes to the extended replica server.

It is worth mentioning that Hyper-V 2012 R2 supports replication for newer Linux distributions and provides VHD-level consistent snapshots that you can use to recover a Replica VM running a Linux operating system.




Wednesday 11 November 2015

Requirements of integrating a PXE server with SCVMM 2012 R2

Integrating a PXE server with SCVMM helps perform bare-metal Hyper-V host deployments, but there are requirements to do so.

If you want to perform bare-metal Hyper-V host deployments, integrating a Preboot Execution Environment (PXE) server with SCVMM is an option to explore. SCVMM has supported integrating PXE servers since SCVMM 2012 SP1, but there are a few requirements you need to keep in mind before integrating a PXE server with SCVMM.

The first thing to understand is that SCVMM only supports integrating a PXE server that is provided with Windows Deployment Services (WDS). Although you might have success integrating third-party PXE servers with SCVMM, Microsoft won't offer support for integration or operating system deployment issues. If you are running SCVMM 2012 R2, you must deploy WDS on a computer running Windows Server 2008 R2 or later.
It is imperative to understand that SCVMM cannot perform bare-metal installations for physical hosts residing in different IP subnets. Make sure the PXE server is running on the same subnet as the physical hosts, which must be configured for PXE boot. To verify that the PXE server has been integrated with SCVMM successfully, run the "wdsutil /get-server /show:config" command on the WDS server. The command output should list VMMOSDProvider as the PXE provider supplied by SCVMM.
Many SCVMM administrators are confused by the settings on the PXE Response tab on the WDS server. Although it seems that the "Respond to all client computers (known and unknown)" setting must be configured before SCVMM deploys operating systems onto physical hosts, that's not the case. Since SCVMM uses its own PXE provider called VMMOSDProvider, SCVMM will ignore any settings you have configured on the PXE Response property tab of the WDS server.

Once the PXE server is integrated with SCVMM, the bare-metal deployment of physical hosts is handled by SCVMM. But before you can start deploying operating systems onto physical hosts, ensure that you have taken the necessary steps in SCVMM.

Make sure the physical computer profiles in SCVMM have been configured. Admins also need to ensure that a sysprepped VHD/VHDX file has been stored in the SCVMM library and that any required files, such as hardware drivers, answer files and scripts, have been copied to the SCVMM library.
If Hyper-V hosts need to be given a static IP address during the bare-metal deployment, ensure that you have created a VM network, along with a network site and a static IP address pool, in SCVMM.
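The networking side of that preparation can be sketched with the VMM PowerShell module; treat this as an illustrative outline only — all names, the subnet, and the address range are hypothetical, and it assumes a logical network and host group already exist:

```powershell
# Hypothetical names throughout; run from a machine with the VMM PowerShell module
$logicalNet = Get-SCLogicalNetwork -Name 'Datacenter-LN'
$hostGroup  = Get-SCVMHostGroup -Name 'All Hosts'

# Network site (logical network definition) scoped to the host group
$site = New-SCLogicalNetworkDefinition -Name 'Datacenter-Site' `
    -LogicalNetwork $logicalNet -VMHostGroup $hostGroup `
    -SubnetVLan (New-SCSubnetVLan -Subnet '192.168.10.0/24')

# Static IP address pool used during bare-metal deployment
New-SCStaticIPAddressPool -Name 'BareMetal-Pool' -LogicalNetworkDefinition $site `
    -Subnet '192.168.10.0/24' -IPAddressRangeStart '192.168.10.50' `
    -IPAddressRangeEnd '192.168.10.99'

# VM network on top of the logical network
New-SCVMNetwork -Name 'Datacenter-VMNet' -LogicalNetwork $logicalNet
```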





Pick and choose when to use Hyper-V PowerShell cmdlets


The amount of time you can save by using PowerShell cmdlets is enticing but in certain situations you should explore other methods.

Microsoft has put a lot of effort into designing a scripting framework that helps IT administrators get information from roles and features installed on Windows Server OSes. Almost every role or feature ships with a set of PowerShell cmdlets that you can use to reduce the time it takes to perform a task. The Hyper-V role also ships with PowerShell cmdlets that you can use to interact with Hyper-V hosts and the virtual machines running on them. For example, you can use the Get-VM cmdlet to list VMs running on a particular Hyper-V host, or the Start-VM and Stop-VM cmdlets to start or stop one or more VMs.

Not all Hyper-V PowerShell cmdlets are useful 

Although it is certainly true that performing a task with a command takes less time than doing the same task from a GUI, a command is not the best choice for every task. For example, in the case of Hyper-V, before you can add an external virtual switch using the New-VMSwitch PowerShell cmdlet, you need to know which physical network adapter the new external virtual switch will be mapped to. From the PowerShell window, you can use Get-NetAdapter to list all physical network adapters. Next, you map the switch to that adapter with the -NetAdapterName parameter, set any required options such as -AllowManagementOS, and then run the command to create a new external virtual switch on the Hyper-V host. Similarly, if you need to connect one or more VM virtual network adapters to a Hyper-V virtual switch, you can use Connect-VMNetworkAdapter.
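To make that sequence concrete, here is a minimal sketch; the adapter, switch and VM names are examples rather than anything your environment is assumed to contain:

```powershell
# Find a connected physical adapter to bind the external switch to
Get-NetAdapter | Where-Object Status -eq 'Up'

# Create the external virtual switch mapped to that adapter,
# keeping a management vNIC for the host OS
New-VMSwitch -Name 'ExternalSwitch' -NetAdapterName 'Ethernet' -AllowManagementOS $true

# Connect the virtual network adapters of one or more VMs to the new switch
Connect-VMNetworkAdapter -VMName 'VM1','VM2' -SwitchName 'ExternalSwitch'
```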
Although PowerShell cmdlets are available to create an external virtual switch and to connect virtual network adapters to a Hyper-V virtual switch, how many times a day do you perform these tasks? Creating external virtual switches on Hyper-V hosts is part of a planning and design process, and you will find it easier to create a Hyper-V virtual switch using Hyper-V Manager rather than a PowerShell cmdlet. Generally, you don't remove and create virtual switches every day. Connect-VMNetworkAdapter can be useful if you want to connect the virtual network adapters of multiple VMs to a Hyper-V virtual switch: you can specify the names of the VMs separated by commas and the Hyper-V virtual switch name in the -SwitchName parameter. Still, in a production environment you don't connect and disconnect VMs to a Hyper-V virtual switch every day.
Why you should use Hyper-V PowerShell cmdlets
Although they might not fit every scenario, there are several reasons to consider using Hyper-V PowerShell cmdlets. If you are designing a PowerShell script that performs repeated tasks, you can use Hyper-V cmdlets in that script. Most of the Hyper-V PowerShell cmdlets ship with a -WhatIf parameter. When specified, this parameter shows what would happen if the cmdlet ran, without actually running it. Remember that this option is not available in Hyper-V Manager: if you use Hyper-V Manager to perform a task, the task is executed immediately. Although the Hyper-V Manager GUI will seek your confirmation, the results are known only after the task has been executed.
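For example, a dry run of a potentially disruptive cmdlet looks like this (the VM name is a placeholder):

```powershell
# Preview what Stop-VM would do, without actually stopping anything
Stop-VM -Name 'SQLVM' -WhatIf

# Once satisfied with the preview, run it for real
Stop-VM -Name 'SQLVM'
```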

Most of the Hyper-V PowerShell cmdlets accept multiple inputs, such as specifying multiple VMs or Hyper-V hosts in a single cmdlet, which is very useful if you want to execute an urgent task on more than one VM or more than one Hyper-V host without spending much time in Hyper-V Manager. For example, if there is a maintenance activity on one of the Hyper-V hosts and you need to save all VMs or live migrate all VMs to another Hyper-V host, the Suspend-VM and Move-VM cmdlets play a vital role. To suspend all VMs running on the local Hyper-V host, you can use the Get-VM | Where-Object State -eq "Running" | Suspend-VM command. If you need to suspend all VMs on a remote Hyper-V host, you can wrap the same pipeline in Invoke-Command, as in Invoke-Command -ScriptBlock {Get-VM | Where-Object State -eq "Running" | Suspend-VM} -ComputerName HyperVHost1.
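Put together, the maintenance scenario above might look like the following sketch; the host names are placeholders, and it assumes PowerShell remoting is enabled between the hosts:

```powershell
# Suspend every running VM on the local host
Get-VM | Where-Object State -eq 'Running' | Suspend-VM

# Do the same on a remote host via PowerShell remoting
Invoke-Command -ComputerName 'HyperVHost1' -ScriptBlock {
    Get-VM | Where-Object State -eq 'Running' | Suspend-VM
}

# Or live migrate all VMs off the host being serviced
Get-VM -ComputerName 'HyperVHost1' | Move-VM -DestinationHost 'HyperVHost2'
```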
Finally, most of the new features of Hyper-V 2012 R2 can only be configured using Hyper-V PowerShell cmdlets, including enabling resource metering for VMs, enabling port mirroring, setting some replication options such as the -BypassProxyServer setting or enabling resynchronization of VMs, and granting or revoking one or more users' access to connect to VMs running on Windows Server 2012 R2 and later Hyper-V hosts. You have no choice except to use PowerShell cmdlets for these tasks. For example, Windows Server 2012 R2 and later Hyper-V hosts let you control which users can connect to VMs, but you can only control this access by using the Grant-VMConnectAccess and Revoke-VMConnectAccess cmdlets; Hyper-V Manager does not provide any option to grant or revoke this access. As an example, to allow User1 to connect to a VM named SQLVM running on a 2012 R2 Hyper-V host, you would use Grant-VMConnectAccess -VMName SQLVM -UserName Domain.com\User1, and to revoke access, you would use the Revoke-VMConnectAccess -VMName SQLVM -UserName Domain.com\User1 command.
PowerShell has been an integral part of Microsoft's server and client computing. It is certainly true that Hyper-V PowerShell cmdlets increase productivity and help automate repeated tasks, but not all Hyper-V cmdlets are as helpful as the ones explained above. The Hyper-V Manager GUI can be used to quickly execute a task, but not all Hyper-V tasks can be performed using Hyper-V Manager.



How Microsoft Nano Servers will change VM management


While Nano Servers will improve hardware consolidation, they will also challenge traditional server management practices.

One of the Windows Server 2016 features that has received a lot of attention is Nano Servers -- micro Windows Server deployments that include an extremely bare bones code set. In fact, Nano Servers are far smaller than server core deployments and have a storage footprint of less than 1 GB.
In some ways, Nano Servers are going to be great for virtualized environments. A Microsoft Nano Server's tiny size means it will make efficient use of the physical hardware resources that are available for use by virtual servers. This will no doubt go a long way toward increasing a host server's potential VM density and may also end up improving VM performance. At the same time, however, virtualization admins may have to rethink the way they manage VMs.
The adoption of Nano Servers probably won't force large, enterprise-class organizations to drastically change the way they manage VMs. Smaller organizations that begin using Nano Servers, however, will have to transition to a management model that closely resembles the one used by the largest enterprises. There are two main reasons for this:
The first reason is that an organization that makes full use of Nano Servers is probably going to have more Nano Servers than the total number of VMs it was previously running. Second, Nano Servers do not have a server console.
Nano Servers aren't like a regular Windows server deployment. According to Microsoft, each Nano Server should be configured to perform one very specific task. Consequently, you will probably never have a multirole or a multifunction Nano Server. A Nano Server is meant to perform one task.
Of course this raises the question of how to configure application servers, since some applications have numerous dependencies. SharePoint, for example, depends on SQL Server, IIS and the .NET Framework.
Nano Servers are not intended to replace every VM in your organization. They are primarily intended for use as infrastructure servers (DNS, DHCP and so on). Applications cannot run on Nano Servers (at least not yet).
The point is that an organization can conserve a significant amount of hardware resources, thereby driving down hardware costs by transitioning to Nano Servers wherever possible. The end result will be a mixture of Nano Servers and VMs running more traditional Windows Server deployments. However, this model is almost certain to increase the number of VMs that must be managed.
Enterprise-class organizations commonly manage thousands of VMs and have become very adept at doing so. Although it is unlikely that smaller organizations will suddenly find themselves managing thousands of VMs, they can learn a lot by looking at the VM management techniques used by larger organizations.
Many of these management techniques used in larger organizations will become more universally important, because Nano Servers do not have a server console.
At first, the idea of a server not having a console may not seem all that different from the way things are today. After all, when Microsoft introduced the concept of server core deployments, it was often said that core servers did not have a console. However, there is a big difference. Server core deployments have a pseudo console. These servers might, for instance, provide a command-line interface. Nano Servers do not have a command-line interface. If you open a Microsoft Nano Server VM's console, you will see a black screen with a flashing cursor, and that's all. At least that is how it looks in the preview release.
In smaller organizations it is easy to get into the habit of managing VMs by opening VM Manager or VMware vCenter, opening a server console and then performing whatever task needs to be done on that server. Enterprise-class organizations don't do this. There are simply too many VMs to be able to manage them all manually. Large organizations rely heavily on bulk management techniques.
The adoption of Nano Servers will force even smaller organizations to adopt bulk VM management techniques similar to those used in enterprise environments. Even if these smaller organizations don't see a huge increase in the number of VMs, the lack of a server console will make manual management nearly impossible. Microsoft Nano Server administrators will have no choice but to adopt bulk management techniques.
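As a simple illustration of what bulk management looks like in practice, PowerShell remoting can fan the same task out to many console-less servers at once; the file name and service used here are hypothetical, and the sketch assumes remoting is already configured:

```powershell
# Read the list of Nano Servers from a text file (one name per line)
$servers = Get-Content -Path '.\nano-servers.txt'

# Run the same task on every server in one shot, with no console involved
Invoke-Command -ComputerName $servers -ScriptBlock {
    Restart-Service -Name 'DNS'
}
```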
Nano Servers have the potential to reduce the cost of server virtualization by decreasing VM hardware use. However, Nano Servers are significantly different from traditional Windows Server deployments and will therefore force smaller shops to adopt bulk VM management techniques.