ESXi Installer Fatal Error 33


The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit. ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, 'DESIGNS') IN THIS MANUAL ARE PRESENTED 'AS IS,' WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.


IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS.

USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

To keep pace with the market, you need systems that support rapid, agile development processes.

Cisco HyperFlex™ Systems let you unlock the full potential of hyper-convergence and adapt IT to the needs of your workloads. The systems use an end-to-end software-defined infrastructure approach, combining software-defined computing in the form of Cisco HyperFlex HX-Series Nodes, software-defined storage with the powerful Cisco HyperFlex HX Data Platform, and software-defined networking with the Cisco UCS fabric that integrates smoothly with Cisco® Application Centric Infrastructure (Cisco ACI™). Together with a single point of connectivity and management, these technologies deliver a pre-integrated and adaptable cluster with a unified pool of resources that you can quickly deploy, adapt, scale, and manage to efficiently power your applications and your business.

This document provides an architecture reference and design guide for up to a 1200-seat workload on an 8-node Cisco HyperFlex system with Citrix XenDesktop 7.11. We provide deployment guidance and performance data for Windows Server 2012 R2 XenApp server-based sessions and for pooled and persistent Windows 10 with Office 2016 virtual desktops on vSphere 6. We demonstrate performance using Citrix Provisioning Server for pooled desktops and XenApp virtual machines, and Citrix Machine Creation Services for pooled and persistent XenDesktop virtual desktops. The solution is a predesigned, best-practice data center architecture built on the Cisco Unified Computing System (Cisco UCS), the Cisco Nexus® 9000 family of switches, and Cisco HyperFlex Data Platform version 1.8.1b. The solution payload is 100 percent virtualized on Cisco HX220c-M4S rack servers booting via on-board FlexFlash controller-configured SD cards and running the VMware vSphere 6.0 U2 patch 3 hypervisor.

Feb 12, 2014. Hi guys (and girls), just wanted to let you know that if you get the error message 'Error loading /tools.t00 Fatal error: 10 (Out of resources)' when trying to install VMware's ESXi 5.5 hypervisor on a Cisco UCS C240 M3 SFF rack server with some GPUs in it, just follow the next steps to fix it.

The virtual desktops are configured with Citrix XenDesktop and XenApp 7.11 and provide unparalleled scale and management simplicity for Windows 10 pooled or persistent desktops (1000) and hosted server-based sessions (1200) on an eight-node Cisco HyperFlex cluster. Where applicable, the document provides best practice recommendations and sizing guidelines for customer deployments of this solution. The solution provides an outstanding virtual desktop end-user experience as measured by the Login VSI 4.1 Knowledge Worker workload running in benchmark mode, with average index response times of 1 second or less. The current industry trend in data center design is towards shared infrastructures. By using virtualization along with pre-validated IT platforms, enterprise customers have embarked on the journey to the cloud by moving away from application silos and toward shared infrastructure that can be quickly deployed, thereby increasing agility and reducing costs. Cisco HyperFlex uses best-of-breed storage, server, and network components to serve as the foundation for desktop virtualization workloads, enabling efficient architectural designs that can be quickly and confidently deployed. The audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineers, and customers who want to take advantage of an infrastructure built to deliver IT efficiency and enable IT innovation.

This document provides a step-by-step design, configuration, and implementation guide for the Cisco Validated Design for a Cisco HyperFlex system running a Citrix XenDesktop 7.11 mixed workload solution with Cisco UCS 6248UP Fabric Interconnects and Cisco Nexus 9300 series switches. This is the first Cisco Validated Design with the Cisco HyperFlex system running Virtual Desktop Infrastructure. It incorporates the following features:

- Validation of Cisco Nexus 9000 with Cisco HyperFlex
- Support for the Cisco UCS 3.1(2b) release and Cisco HyperFlex Data Platform version 1.8(1b)
- VMware vSphere 6.0 U2 hypervisor
- Citrix XenDesktop 7.11 pooled desktops, persistent desktops, and XenApp shared server sessions
- Citrix Provisioning Services (PVS) and Citrix Machine Creation Services virtual machine deployment

The data center market segment is shifting toward heavily virtualized private, hybrid, and public cloud computing models running on industry-standard systems. These environments require uniform design points that can be repeated for ease of management and scalability. These factors have led to the need for predesigned computing, networking, and storage building blocks optimized to lower the initial design cost, simplify management, and enable horizontal scalability and high levels of utilization. The use cases include:

- Enterprise Data Center (small failure domains)
- Service Provider Data Center (small failure domains)
- Commercial Data Center
- Remote Office/Branch Office
- SMB Standalone Deployments


This Cisco Validated Design prescribes a defined set of hardware and software that serves as an integrated foundation for both XenDesktop Microsoft Windows 10 virtual desktops and XenApp RDS server desktop sessions based on Microsoft Server 2012 R2. The mixed workload solution includes Cisco HyperFlex, Cisco Nexus®, the Cisco Unified Computing System (Cisco UCS®), Citrix XenDesktop and XenApp and VMware vSphere hypervisor in a single package. The design is efficient enough that the networking, computing, and storage can fit in a 12 rack unit footprint in a single rack. Port density on the Cisco Nexus switches and Cisco UCS Fabric Interconnects enables the networking components to accommodate multiple HyperFlex clusters in a single Cisco UCS domain. A key benefit of the Cisco Validated Design architecture is the ability to customize the environment to suit a customer's requirements.

A Cisco Validated Design can easily be scaled as requirements and demand change. The unit can be scaled both up (adding resources to a Cisco Validated Design unit) and out (adding more Cisco Validated Design units). The reference architecture detailed in this document highlights the resiliency, cost benefit, and ease of deployment of a hyper-converged desktop virtualization solution. A solution capable of consuming multiple protocols across a single interface allows for customer choice and investment protection because it truly is a wire-once architecture.

The combination of technologies from Cisco Systems, Inc. and Citrix produced a highly efficient, robust, and affordable desktop virtualization solution for a hosted virtual desktop and hosted shared desktop or mixed deployment supporting different use cases. Key components of the solution include the following:

More power, same size. The Cisco HX-Series rack server with dual 14-core 2.6 GHz Intel Xeon (E5-2690 v4) processors and 512GB of memory for Citrix XenDesktop supports more virtual desktop workloads than the previously released generation of processors on the same hardware.

The Intel Xeon E5-2690 v4 14-core processors used in this study provided a balance between increased per-server capacity and cost. Fault-tolerance with high availability built into the design. The 1000 user Windows 10 virtual desktops and 1200 hosted shared server desktop designs are based on multiple Cisco HX-Series rack servers for virtualized desktop and infrastructure workloads.

The design provides N+1 server fault tolerance for hosted virtual desktops, hosted shared desktops, and infrastructure services. Stress-tested to the limits during an aggressive boot scenario.

The 1000-user Windows 10 virtual desktop and 1200-user hosted shared server desktop environment booted and registered with XenDesktop Studio in under 15 minutes, providing our customers with an extremely fast, reliable cold-start desktop virtualization system. Stress-tested to the limits during simulated login storms. All 1000 or 1200 simulated users logged in and started running workloads up to steady state in 48 minutes without overwhelming the processors, exhausting memory, or exhausting the storage subsystems, providing customers with a desktop virtualization system that can easily handle the most demanding login storms. Ultra-condensed computing for the datacenter.

The rack space required to support the system is 12 rack units, including Cisco Nexus Switching and Cisco Fabric interconnects. Incremental Cisco HyperFlex clusters can be added in 8 rack unit groups to add additional 1000 user capability, conserving valuable data center floor space. 100% Virtualized: This CVD presents a validated design that is 100 percent virtualized on VMware ESXi 6.0.

All of the virtual desktops, user data, profiles, and supporting infrastructure components, including Active Directory, Provisioning Servers, SQL Servers, XenDesktop and XenApp infrastructure components, XenDesktop Windows 10 desktops, and XenApp RDS servers, were hosted as virtual machines. This provides customers with complete flexibility for maintenance and capacity additions because the entire system runs on the Cisco HyperFlex hyper-converged infrastructure with stateless Cisco UCS HX-Series servers. (Infrastructure VMs were hosted on two Cisco UCS C220 Rack Servers outside of the HX cluster to deliver the highest capacity and best economics for the solution.) Cisco Datacenter Management: Cisco maintains industry leadership with the new Cisco UCS Manager 3.1(2) software that simplifies scaling, guarantees consistency, and eases maintenance. Cisco’s ongoing development efforts with Cisco UCS Manager, Cisco UCS Central, and Cisco UCS Director ensure that customer environments are consistent locally, across Cisco UCS domains, and across the globe. Our software suite offers increasingly simplified operational and deployment management, and it continues to widen the span of control for customer organizations’ subject matter experts in compute, storage, and network. Cisco 10G Fabric: Our 10G unified fabric story gets additional validation on 6200 Series Fabric Interconnects as Cisco runs more challenging workload testing, while maintaining unsurpassed user response times. Cisco HyperFlex Storage Performance: Cisco HyperFlex provides industry-leading storage solutions that efficiently handle the most demanding I/O bursts (for example, login storms), profile management, and user data management, deliver simple and flexible business continuance, and help reduce storage cost per desktop.

Cisco HyperFlex Simplicity: Cisco HyperFlex provides a simple to understand storage architecture for hosting all user data components (VMs, profiles, user data) on the same hyper-converged storage system. Cisco HyperFlex Agility: Cisco HyperFlex System enables users to seamlessly add, upgrade or remove storage from the infrastructure to meet the needs of the virtual desktops.

Cisco HyperFlex vCenter Integration: The Cisco HyperFlex plugin for the VMware vSphere hypervisor has deep integrations with vSphere, providing easy-button automation for key storage tasks such as storage provisioning and storage resize, cluster health status, and performance monitoring directly from the vCenter web client in a single pane of glass. Experienced vCenter administrators have a near-zero learning curve when HyperFlex is introduced into the environment.

Citrix XenDesktop and XenApp Advantage: Citrix XenDesktop and XenApp application and desktop virtualization delivers high-performance, scalability, and business agility that supports mobile users without sacrificing security and compliance. These services can be accessed from any device, at any time, from any location, to provide superior user experience and bring-your-own-device flexibility to users while delivering centralized application and OS management and security to IT managers. Optimized for Performance and Scale: Optimized to achieve the best possible performance and scale. For hosted shared desktop sessions, the best performance was achieved when the number of vCPUs assigned to the XenApp 7.11 virtual machines did not exceed the number of hyper-threaded (logical) cores available on the server.
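To make this sizing rule concrete, the short sketch below checks a proposed XenApp virtual machine layout against the host's logical core count. It assumes the dual 14-core E5-2690 v4 processors with Hyper-Threading used in this design; the VM count and vCPU size are purely illustrative.

```python
# Quick sizing check: do the XenApp VMs on one host overcommit its logical cores?
# Host assumptions come from this design (2 x 14-core E5-2690 v4, Hyper-Threading on);
# the VM layout below is purely illustrative.

SOCKETS = 2
CORES_PER_SOCKET = 14
THREADS_PER_CORE = 2          # Hyper-Threading
logical_cores = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE   # 56

# Hypothetical XenApp RDS VM layout on a single host
xenapp_vms = 7
vcpus_per_vm = 8
total_vcpus = xenapp_vms * vcpus_per_vm

if total_vcpus <= logical_cores:
    print(f"{total_vcpus} vCPUs on {logical_cores} logical cores: no CPU overcommit")
else:
    print(f"{total_vcpus} vCPUs exceed {logical_cores} logical cores: CPU overcommitted")
```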

In other words, maximum performance is obtained when not overcommitting the CPU resources for the virtual machines running virtualized RDS systems. Provisioning Choices Explored: Citrix provides two core provisioning methods for XenDesktop and XenApp virtual machines: Citrix Provisioning Services for pooled virtual desktops and XenApp virtual servers, and Citrix Machine Creation Services for pooled or persistent virtual desktops. This paper provides guidance on how to use each method and documents the performance of each technology.

Today’s IT departments are facing a rapidly evolving workplace environment. The workforce is becoming increasingly diverse and geographically dispersed, including offshore contractors, distributed call center operations, knowledge and task workers, partners, consultants, and executives connecting from locations around the world at all times. This workforce is also increasingly mobile, conducting business in traditional offices, conference rooms across the enterprise campus, home offices, on the road, in hotels, and at the local coffee shop. This workforce wants to use a growing array of client computing and mobile devices that they can choose based on personal preference.

These trends are increasing pressure on IT to ensure protection of corporate data and prevent data leakage or loss through any combination of user, endpoint device, and desktop access scenarios (Figure 1). These challenges are compounded by desktop refresh cycles to accommodate aging PCs with bounded local storage, and by migration to new operating systems, specifically Microsoft Windows 10, and productivity tools, specifically Microsoft Office 2016. Some of the key drivers for desktop virtualization are increased data security and reduced TCO through increased control and reduced management costs. Cisco focuses on three key elements to deliver the best desktop virtualization data center infrastructure: simplification, security, and scalability. The software, combined with platform modularity, provides a simplified, secure, and scalable desktop virtualization platform.

Simplified

Cisco UCS provides a radical new approach to industry-standard computing and provides the core of the data center infrastructure for desktop virtualization. Among the many features and benefits of Cisco UCS are the drastic reduction in the number of servers needed and in the number of cables used per server, and the capability to rapidly deploy or re-provision servers through Cisco UCS service profiles. With fewer servers and cables to manage and with streamlined server and virtual desktop provisioning, operations are significantly simplified. Thousands of desktops can be provisioned in minutes with Cisco UCS Manager service profiles and Cisco storage partners’ storage-based cloning. This approach accelerates the time to productivity for end users, improves business agility, and allows IT resources to be allocated to other tasks.

Cisco UCS Manager automates many mundane, error-prone data center operations such as configuration and provisioning of server, network, and storage access infrastructure. In addition, Cisco UCS B-Series Blade Servers, C-Series and HX-Series Rack Servers with large memory footprints enable high desktop density that helps reduce server infrastructure requirements. Simplification also leads to more successful desktop virtualization implementation. Cisco and its technology partners like VMware Technologies have developed integrated, validated architectures, including predefined hyper-converged architecture infrastructure packages such as HyperFlex. Cisco Desktop Virtualization Solutions have been tested with VMware vSphere.

Secure

Although virtual desktops are inherently more secure than their physical predecessors, they introduce new security challenges.

Mission-critical web and application servers using a common infrastructure such as virtual desktops are now at a higher risk for security threats. Inter–virtual machine traffic now poses an important security consideration that IT managers need to address, especially in dynamic environments in which virtual machines, using VMware vMotion, move across the server infrastructure. Desktop virtualization, therefore, significantly increases the need for virtual machine–level awareness of policy and security, especially given the dynamic and fluid nature of virtual machine mobility across an extended computing infrastructure.

The ease with which new virtual desktops can proliferate magnifies the importance of a virtualization-aware network and security infrastructure. Cisco data center infrastructure (Cisco UCS and Cisco Nexus Family solutions) for desktop virtualization provides strong data center, network, and desktop security, with comprehensive security from the desktop to the hypervisor.

Security is enhanced with segmentation of virtual desktops, virtual machine–aware policies and administration, and network security across the LAN and WAN infrastructure.

Scalable

Growth of a desktop virtualization solution is all but inevitable, so a solution must be able to scale, and scale predictably, with that growth. The Cisco Desktop Virtualization Solutions support high virtual-desktop density (desktops per server), and additional servers scale with near-linear performance. Cisco data center infrastructure provides a flexible platform for growth and improves business agility. Cisco UCS Manager service profiles allow on-demand desktop provisioning and make it just as easy to deploy dozens of desktops as it is to deploy thousands of desktops. Cisco UCS servers provide near-linear performance and scale. Cisco UCS implements the patented Cisco Extended Memory Technology to offer large memory footprints with fewer sockets (with scalability to up to 1 terabyte (TB) of memory with 2- and 4-socket servers).

Using unified fabric technology as a building block, Cisco UCS server aggregate bandwidth can scale to up to 80 Gbps per server, and the northbound Cisco UCS Fabric Interconnect can output 2 terabits per second (Tbps) at line rate, helping prevent desktop virtualization I/O and memory bottlenecks. Cisco UCS, with its high-performance, low-latency unified fabric-based networking architecture, supports high volumes of virtual desktop traffic, including high-resolution video and communications traffic. In addition, Cisco HyperFlex helps maintain data availability and optimal performance during boot and login storms as part of the Cisco Desktop Virtualization Solutions.

Recent Cisco Validated Designs based on Citrix XenDesktop and Cisco HyperFlex solutions have demonstrated scalability and performance, with up to 1000 hosted virtual desktops or 1200 hosted shared desktops up and ready in less than 15 minutes or 5 minutes, respectively. Cisco UCS and Cisco Nexus data center infrastructure provides an excellent platform for growth, with transparent scaling of server, network, and storage resources to support desktop virtualization, data center applications, and cloud computing.

Savings and Success

The simplified, secure, scalable Cisco data center infrastructure for desktop virtualization solutions saves time and money compared to alternative approaches. Cisco UCS enables faster payback and ongoing savings (better ROI and lower TCO) and provides the industry’s greatest virtual desktop density per server, reducing both capital expenditures (CapEx) and operating expenses (OpEx). The Cisco UCS architecture and Cisco Unified Fabric also enable much lower network infrastructure costs, with fewer cables per server and fewer ports required.

In addition, storage tiering and deduplication technologies decrease storage costs, reducing desktop storage needs by up to 50 percent. The simplified deployment of Cisco UCS for desktop virtualization accelerates the time to productivity and enhances business agility. IT staff and end users are more productive more quickly, and the business can respond to new opportunities quickly by deploying virtual desktops whenever and wherever they are needed. The high-performance Cisco systems and network deliver a near-native end-user experience, allowing users to be productive anytime and anywhere.

The ultimate measure of desktop virtualization for any organization is its efficiency and effectiveness in both the near term and the long term. The Cisco Desktop Virtualization Solutions are very efficient, allowing rapid deployment, requiring fewer devices and cables, and reducing costs. The solutions are also very effective, providing the services that end users need on their devices of choice while improving IT operations, control, and data security. Success is bolstered through Cisco’s best-in-class partnerships with leaders in virtualization and storage, and through tested and validated designs and services to help customers throughout the solution lifecycle. Long-term success is enabled through the use of Cisco’s scalable, flexible, and secure architecture as the platform for desktop virtualization.

All of the Windows 10 virtual desktops were provisioned with 2GB of memory for this study. Typically, persistent desktop users may desire more memory.

If 3GB or more of memory is needed, the third memory channel on the Cisco HX220c M4 servers should be populated. The data provided here will allow customers to run Hosted Shared Desktops (HSDs) and Hosted Virtual Desktops (HVDs) to suit their environment. For example, additional Cisco HyperFlex clusters can be deployed to increase capacity. This document guides you through the low-level steps for deploying the base architecture, as shown in Figure 7 above.
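As a rough illustration of the memory sizing above, the sketch below spreads the 1000 Windows 10 desktops at 2GB each across the eight 512GB HX nodes in this design; the per-node reserve for the controller VM and hypervisor is an assumed figure for illustration only.

```python
# Rough memory sizing sketch for the Windows 10 desktop pool in this design.
# Known from the document: 1000 desktops at 2 GB each, 8 HX nodes, 512 GB RAM per node.
# The controller-VM/hypervisor reserve below is an illustrative assumption, not a Cisco figure.

desktops = 1000
gb_per_desktop = 2            # bump to 3+ and the third memory channel should be populated
nodes = 8
ram_per_node_gb = 512
reserved_per_node_gb = 72     # assumed headroom for the HX controller VM and ESXi

demand_per_node_gb = desktops * gb_per_desktop / nodes          # 250 GB
available_per_node_gb = ram_per_node_gb - reserved_per_node_gb  # 440 GB

print(f"Per-node desktop memory demand: {demand_per_node_gb:.0f} GB "
      f"of {available_per_node_gb} GB usable")
```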

These procedures cover everything from physical cabling to network, compute, and storage configurations. This document provides details for configuring a fully redundant, highly available configuration for a Cisco Validated Design for various types of virtual desktop workloads with Cisco HyperFlex. Configuration guidelines are provided that indicate which redundant component is being configured with each step. For example, Cisco Nexus A or Cisco Nexus B identifies the member of the pair of Cisco Nexus switches that is being configured, and the Cisco UCS 6248UP Fabric Interconnects are identified similarly. Additionally, this document details the steps for provisioning multiple Cisco UCS hosts, and these are identified sequentially: VM-Host-Infra-01, VM-Host-Infra-02, and so on. Finally, when you should include information pertinent to your environment in a given step, a placeholder appears as part of the command structure. This section describes the infrastructure components used in the solution outlined in this study.

Cisco UCS Manager provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System™ (Cisco UCS) and the Cisco HyperFlex™ hyperconverged platform through an intuitive GUI, a command-line interface (CLI), and an XML API. The manager provides a unified management domain with centralized management capabilities and can control multiple chassis and thousands of virtual machines. Cisco UCS is a next-generation data center platform that unites computing, networking, and storage access.

The platform, optimized for virtual environments, is designed using open industry-standard technologies and aims to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. It is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain.

The main components of Cisco UCS are:

Computing: The system is based on an entirely new class of computing system that incorporates blade servers based on Intel® Xeon® processor E5-2600/4600 v4 and E7-2800 v4 family CPUs.

Network: The system is integrated on a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing (HPC) networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables needed, and by decreasing the power and cooling requirements.

Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.

Storage access: The system provides consolidated access to local storage, SAN storage, and network-attached storage (NAS) over the unified fabric. With storage access unified, Cisco UCS can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Small Computer System Interface over IP (iSCSI) protocols. This capability provides customers with choice for storage access and investment protection. In addition, server administrators can pre-assign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management and helping increase productivity.

Management: Cisco UCS uniquely integrates all system components, enabling the entire solution to be managed as a single entity by Cisco UCS Manager. The manager has an intuitive GUI, a CLI, and a robust API for managing all system configuration processes and operations.

Figure 8 Cisco Data Center Overview

Cisco UCS is designed to deliver:

Reduced TCO and increased business agility.
Increased IT staff productivity through just-in-time provisioning and mobility support.
A cohesive, integrated system that unifies the technology in the data center; the system is managed, serviced, and tested as a whole.
Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand.
Industry standards supported by a partner ecosystem of industry leaders.

Cisco UCS Manager provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System across multiple chassis, rack servers, and thousands of virtual machines. Cisco UCS Manager manages Cisco UCS as a single entity through an intuitive GUI, a command-line interface (CLI), or an XML API for comprehensive access to all Cisco UCS Manager Functions.

The Cisco HyperFlex system provides a fully contained virtual server platform, with compute and memory resources, integrated networking connectivity, a distributed high-performance log-based filesystem for VM storage, and the hypervisor software for running the virtualized servers, all within a single Cisco UCS management domain.

Figure 9 HyperFlex System Overview

The Cisco UCS 6200 Series Fabric Interconnects are a core part of Cisco UCS, providing both network connectivity and management capabilities for the system. The Cisco UCS 6200 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet, FCoE, and Fibre Channel functions. The Fabric Interconnects provide the management and communication backbone for the Cisco UCS B-Series Blade Servers, Cisco UCS C-Series and HX-Series rack servers, and Cisco UCS 5100 Series Blade Server Chassis.

All servers attached to the Fabric Interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6200 Series provides both LAN and SAN connectivity for all blades in the domain. For networking, the Cisco UCS 6200 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 Gigabit Ethernet on all ports, 1-terabit (Tb) switching capacity, and 160 Gbps of bandwidth per chassis, independent of packet size and enabled services. The product series supports Cisco low-latency, lossless, 10 Gigabit Ethernet unified network fabric capabilities, increasing the reliability, efficiency, and scalability of Ethernet networks. The Fabric Interconnects support multiple traffic classes over a lossless Ethernet fabric, from the blade server through the interconnect.

Significant TCO savings come from an FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.

Figure 10 Cisco UCS 6200 Series Fabric Interconnect

The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis. The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors.

Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS Fabric Extenders. A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot from each Fabric Extender. The chassis is capable of supporting 40 Gigabit Ethernet standards.

Figure 11 Cisco UCS 5108 Blade Chassis Front and Rear Views

Cisco UCS 2204XP Fabric Extender

The Cisco UCS 2200 Series Fabric Extenders multiplex and forward all traffic from blade servers in a chassis to a parent Cisco UCS Fabric Interconnect over 10-Gbps unified fabric links. All traffic, even traffic between blades on the same chassis or virtual machines on the same blade, is forwarded to the parent interconnect, where network profiles are managed efficiently and effectively by the Fabric Interconnect.

At the core of the Cisco UCS fabric extender are application-specific integrated circuit (ASIC) processors developed by Cisco that multiplex all traffic. The Cisco UCS 2204XP Fabric Extender has four 10 Gigabit Ethernet, FCoE-capable, SFP+ ports that connect the blade chassis to the Fabric Interconnect.

Each Cisco UCS 2204XP has sixteen 10 Gigabit Ethernet ports connected through the midplane to each half-width slot in the chassis. Typically configured in pairs for redundancy, two fabric extenders provide up to 80 Gbps of I/O to the chassis.

Figure 12 Cisco UCS 2204XP Fabric Extender

Cisco UCS B200-M4 Nodes

For workloads that require additional computing and memory resources, but not additional storage capacity, a compute-intensive hybrid cluster configuration is allowed. This configuration contains a minimum of three HX240c-M4SX Nodes with up to four Cisco UCS B200-M4 Blade Servers for additional computing capacity. The HX240c-M4SX Nodes are configured as described previously, and the Cisco UCS B200-M4 servers are equipped with boot drives. Use of the Cisco UCS B200-M4 compute nodes also requires the Cisco UCS 5108 blade server chassis, and a pair of Cisco UCS 2204XP Fabric Extenders. Cisco UCS B200 M4 blade servers support one NVIDIA Tesla M-6 MLOM form factor GPU for virtual desktops that require graphics processor support.

Cisco UCS B200 M4 blade servers can join Cisco HyperFlex hyperconverged clusters running Cisco HyperFlex Data Platform software version 1.8 or later as compute-only nodes.

Cisco UCS C220 M4 Rack Mount Server Nodes

For workloads that require additional computing and memory resources with optional limited onboard storage options, a compute-intensive hybrid cluster configuration is allowed. This configuration contains a minimum of three HX240c-M4SX Nodes with up to a matching number of Cisco UCS C220-M4 Rack Mount Servers for additional computing capacity. The HX240c-M4SX Nodes are configured as described later in this guide. The Cisco UCS C220-M4 servers are equipped with boot drives. Cisco UCS C220 M4 rack mount servers can join Cisco HyperFlex hyperconverged clusters running Cisco HyperFlex Data Platform software version 1.8 or later as compute-only nodes.

Figure 14 Cisco UCS C220 M4 Rack Mount Server

Cisco UCS C240 M4 Rack Mount Servers

For workloads that require additional computing and memory resources with optional expandable onboard storage options, a compute-intensive hybrid cluster configuration is allowed.

This configuration contains a minimum of three HX240c-M4SX Nodes with up to a matching number of Cisco UCS C240-M4 Rack Mount Servers for additional computing capacity. The HX240c-M4SX Nodes are configured as described later in this guide. The Cisco UCS C240-M4 servers are equipped with boot drives. Cisco UCS C240 M4 rack mount servers support one or two NVIDIA Tesla M-60 PCIe form factor GPU cards for virtual desktops that require graphics processor support.

Cisco UCS C240 M4 rack mount servers can join Cisco HyperFlex hyperconverged clusters running Cisco HyperFlex Data Platform software version 1.8 or later as compute-only nodes.

Figure 15 Cisco UCS C240 M4 Rack Mount Server

A HyperFlex cluster requires a minimum of three HX-Series nodes (with disk storage).

Data is replicated across at least two of these nodes, and a third node is required for continuous operation in the event of a single-node failure. Each node that has disk storage is equipped with at least one high-performance SSD drive for data caching and rapid acknowledgment of write requests. Each node also is equipped with up to the platform’s physical capacity of spinning disks for maximum data capacity. At first release, we offer three tested cluster configurations.

The Cisco UCS Virtual Interface Card (VIC) 1227 is a dual-port Enhanced Small Form-Factor Pluggable (SFP+) 10-Gbps Ethernet and Fibre Channel over Ethernet (FCoE)-capable PCI Express (PCIe) modular LAN-on-motherboard (mLOM) adapter installed in the Cisco UCS HX-Series Rack Servers (Figure 6). The mLOM slot can be used to install a Cisco VIC without consuming a PCIe slot, which provides greater I/O expandability.

It incorporates next-generation converged network adapter (CNA) technology from Cisco, providing investment protection for future feature releases. The card enables a policy-based, stateless, agile server infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). The personality of the card is determined dynamically at boot time using the service profile associated with the server. The number, type (NIC or HBA), identity (MAC address and World Wide Name [WWN]), failover policy, bandwidth, and quality-of-service (QoS) policies of the PCIe interfaces are all determined using the service profile. For workloads that require additional computing and memory resources, but not additional storage capacity, a compute-intensive hybrid cluster configuration is allowed.
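The following sketch is a simplified, hypothetical data model of the idea described above: the service profile, not the hardware, defines how many virtual interfaces the VIC presents and what identity, fabric, failover, bandwidth, and QoS settings each carries. The class and field names are illustrative and do not reflect the Cisco UCS Manager object model or XML API.

```python
# Illustrative data model of how a service profile might describe the VIC 1227
# "personality" (number, type, MAC/WWN, failover, bandwidth, and QoS all come from
# the service profile). Names and values are hypothetical, not the UCS Manager schema.

from dataclasses import dataclass
from typing import List

@dataclass
class VirtualInterface:
    name: str
    kind: str          # "vNIC" or "vHBA"
    identity: str      # MAC address for a vNIC, WWPN for a vHBA
    fabric: str        # "A" or "B"
    failover: bool
    bandwidth_gbps: int
    qos_policy: str

@dataclass
class ServiceProfile:
    name: str
    interfaces: List[VirtualInterface]

profile = ServiceProfile(
    name="hx-node-01",
    interfaces=[
        VirtualInterface("hv-mgmt-a", "vNIC", "00:25:B5:00:00:01", "A", True, 10, "silver"),
        VirtualInterface("storage-data-a", "vNIC", "00:25:B5:00:00:02", "A", False, 10, "platinum"),
        VirtualInterface("vm-network-b", "vNIC", "00:25:B5:00:00:03", "B", True, 10, "gold"),
    ],
)

for vif in profile.interfaces:
    print(f"{profile.name}: {vif.kind} {vif.name} on fabric {vif.fabric} ({vif.identity})")
```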

This configuration contains a minimum of three HX240c-M4SX Nodes with up to four Cisco UCS B200-M4 Blade Servers for additional computing capacity. The HX240c-M4SX Nodes are configured as described previously, and the Cisco UCS B200-M4 servers are equipped with boot drives. Use of the B200-M4 compute nodes also requires the Cisco UCS 5108 blade server chassis, and a pair of Cisco UCS 2204XP Fabric Extenders. The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis. The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors.

Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS Fabric Extenders. A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot from each Fabric Extender. The chassis is capable of supporting 40 Gigabit Ethernet standards. The Cisco UCS 2200 Series Fabric Extenders multiplex and forward all traffic from blade servers in a chassis to a parent Cisco UCS Fabric Interconnect over 10-Gbps unified fabric links. All traffic, even traffic between blades on the same chassis or virtual machines on the same blade, is forwarded to the parent interconnect, where network profiles are managed efficiently and effectively by the Fabric Interconnect.

At the core of the Cisco UCS fabric extender are application-specific integrated circuit (ASIC) processors developed by Cisco that multiplex all traffic. The Cisco UCS 2204XP Fabric Extender has four 10 Gigabit Ethernet, FCoE-capable, SFP+ ports that connect the blade chassis to the Fabric Interconnect. Each Cisco UCS 2204XP has sixteen 10 Gigabit Ethernet ports connected through the midplane to each half-width slot in the chassis. Typically configured in pairs for redundancy, two fabric extenders provide up to 80 Gbps of I/O to the chassis.

The Cisco HyperFlex HX Data Platform is a purpose-built, high-performance, distributed file system with a wide array of enterprise-class data management services. The data platform’s innovations redefine distributed storage technology, exceeding the boundaries of first-generation hyperconverged infrastructures.

The data platform has all the features that you would expect of an enterprise shared storage system, eliminating the need to configure and maintain complex Fibre Channel storage networks and devices. The platform simplifies operations and helps ensure data availability. Enterprise-class storage features include the following:

Replication replicates data across the cluster so that data availability is not affected if single or multiple components fail (depending on the replication factor configured).

Deduplication is always on, helping reduce storage requirements in virtualization clusters in which multiple operating system instances in client virtual machines result in large amounts of replicated data.

Compression further reduces storage requirements, reducing costs, and the log-structured file system is designed to store variable-sized blocks, reducing internal fragmentation.

Thin provisioning allows large volumes to be created without requiring storage to support them until the need arises, simplifying data volume growth and making storage a “pay as you grow” proposition.

Fast, space-efficient clones rapidly replicate storage volumes so that virtual machines can be replicated simply through metadata operations, with actual data copied only for write operations.

Snapshots help facilitate backup and remote-replication operations, which are needed in enterprises that require always-on data availability.

The Cisco HyperFlex HX Data Platform is administered through a VMware vSphere web client plug-in.

Through this centralized point of control for the cluster, administrators can create volumes, monitor the data platform health, and manage resource use. Administrators can also use this data to predict when the cluster will need to be scaled. For customers that prefer a lightweight web interface, there is a tech preview URL management interface available by opening a browser to the IP address of the HX cluster interface. Additionally, there is an interface to assist in running CLI commands via a web browser. For the Tech Preview Web UI, connect to the HX controller cluster IP: controller cluster ip/ui

Figure 21 HyperFlex

To run CLI commands via HTTP, connect to the HX controller cluster IP.

Figure 22 Web CLI

A Cisco HyperFlex HX Data Platform controller resides on each node and implements the distributed file system. The controller runs in user space within a virtual machine and intercepts and handles all I/O from guest virtual machines. The platform controller VM uses the VMDirectPath I/O feature to provide PCI pass-through control of the physical server’s SAS disk controller. This method gives the controller VM full control of the physical disk resources, utilizing the SSD drives as a read/write caching layer, and the HDDs as a capacity layer for distributed storage.

The controller integrates the data platform into VMware software through the use of two preinstalled VMware ESXi vSphere Installation Bundles (VIBs):

IO Visor: This VIB provides a network file system (NFS) mount point so that the ESXi hypervisor can access the virtual disks that are attached to individual virtual machines. From the hypervisor’s perspective, it is simply attached to a network file system.

VMware API for Array Integration (VAAI): This storage offload API allows vSphere to request advanced file system operations such as snapshots and cloning. The controller implements these operations through manipulation of metadata rather than actual data copying, providing rapid response, and thus rapid deployment of new environments.
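As a quick way to confirm the two VIBs are present, the sketch below runs the standard esxcli VIB listing from an ESXi host's shell and filters the output. The name fragments used for matching are assumptions for illustration; check the VIB names reported on your own HX nodes.

```python
# Minimal check, run on an ESXi host shell, for the two HyperFlex VIBs the document
# describes (the IO Visor NFS client and the VAAI offload plug-in). The name patterns
# below are assumptions for illustration; the esxcli command itself is standard ESXi tooling.

import subprocess

ASSUMED_PATTERNS = ("scvmclient", "stfsnasplugin")   # hypothetical VIB name fragments

result = subprocess.run(
    ["esxcli", "software", "vib", "list"],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.splitlines():
    if any(pattern in line.lower() for pattern in ASSUMED_PATTERNS):
        print("Found HyperFlex VIB:", line.split()[0])
```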

The Cisco HyperFlex HX Data Platform controllers handle all read and write operation requests from the guest VMs to their virtual disks (VMDK) stored in the distributed datastores in the cluster. The data platform distributes the data across multiple nodes of the cluster, and also across multiple capacity disks of each node, according to the replication level policy selected during the cluster setup. This method avoids storage hotspots on specific nodes, and on specific disks of the nodes, and thereby also avoids networking hotspots or congestion from accessing more data on some nodes versus others.

The policy for the number of duplicate copies of each storage block is chosen during cluster setup, and is referred to as the replication factor (RF). The default setting for the Cisco HyperFlex HX Data Platform is replication factor 3 (RF=3). Replication Factor 3: For every I/O write committed to the storage layer, 2 additional copies of the blocks written will be created and stored in separate locations, for a total of 3 copies of the blocks.

Blocks are distributed in such a way as to ensure multiple copies of the blocks are not stored on the same disks, nor on the same nodes of the cluster. This setting can tolerate simultaneous failures of 2 entire nodes without losing data and resorting to restore from backup or other recovery processes. Replication Factor 2: For every I/O write committed to the storage layer, 1 additional copy of the blocks written will be created and stored in separate locations, for a total of 2 copies of the blocks.

Blocks are distributed in such a way as to ensure multiple copies of the blocks are not stored on the same disks, nor on the same nodes of the cluster. This setting can tolerate the failure of 1 entire node without losing data and resorting to restore from backup or other recovery processes. For each write operation, data is written to the SSD of the node designated as its primary, and replica copies of that write are written to the caching SSD of the remote nodes in the cluster, according to the replication factor setting. For example, at RF=3 a write will be written locally where the VM originated the write, and two additional writes will be committed in parallel on two other nodes. The write operation will not be acknowledged until all three copies are written to the caching layer SSDs. Written data is also cached in a write log area resident in memory in the controller VM, along with the write log on the caching SSDs (Figure 23).
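The toy model below illustrates the write path just described: each write is committed to the caching tier of RF distinct nodes, one local and the rest remote, and is acknowledged only after every copy has landed. This is a conceptual sketch of the replication-factor idea, not the HX Data Platform implementation.

```python
# Toy replication-factor write path: commit a block to RF distinct nodes and
# acknowledge only once all copies are in the caching tier. Conceptual sketch only.

import random

class Node:
    def __init__(self, name):
        self.name = name
        self.write_cache = []          # stands in for the caching SSD write log

    def commit(self, block):
        self.write_cache.append(block)
        return True

def replicated_write(block, local_node, cluster, rf=3):
    # Pick RF-1 remote nodes so no two copies share a node.
    remotes = random.sample([n for n in cluster if n is not local_node], rf - 1)
    targets = [local_node] + remotes
    acks = [node.commit(block) for node in targets]
    if all(acks):                      # acknowledge only after all RF copies are durable
        return [n.name for n in targets]
    raise IOError("write not acknowledged")

cluster = [Node(f"hx-node-{i}") for i in range(1, 9)]     # 8-node cluster as in this design
print("block-42 committed on:", replicated_write("block-42", cluster[0], cluster, rf=3))
```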

This process speeds up read requests when reads are requested of data that has recently been written. The Cisco HyperFlex HX Data Platform constructs multiple write caching segments on the caching SSDs of each node in the distributed cluster. As write cache segments become full, and based on policies accounting for I/O load and access patterns, those write cache segments are locked and new writes roll over to a new write cache segment. The data in the now locked cache segment is destaged to the HDD capacity layer of the node.

During the destaging process, data is deduplicated and compressed before being written to the HDD capacity layer. The resulting data after deduplication and compression can now be written in a single sequential operation to the HDDs of the server, avoiding disk head seek thrashing and accomplishing the task in the minimal amount of time (Figure 23). Since the data is already deduplicated and compressed before being written, the platform avoids additional I/O overhead often seen on competing systems, which must later do a read/dedupe/compress/write cycle. Deduplication, compression and destaging take place with no delays or I/O penalties to the guest VMs making requests to read or write data.
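The following sketch illustrates the destage step conceptually: when a write-cache segment is locked, its blocks are deduplicated (here by content hash) and compressed before being appended sequentially to the capacity tier. It is purely illustrative and does not reflect the platform's actual data structures.

```python
# Conceptual destage sketch: dedupe by content hash, compress, then append the
# surviving blocks sequentially to the capacity tier in one batch.

import hashlib
import zlib

def destage(segment, capacity_tier, fingerprints):
    """Dedupe and compress a locked write-cache segment, then append it sequentially."""
    sequential_batch = []
    for block in segment:
        digest = hashlib.sha256(block).hexdigest()
        if digest in fingerprints:          # duplicate block: skip (real systems keep a reference)
            continue
        fingerprints.add(digest)
        sequential_batch.append(zlib.compress(block))
    capacity_tier.extend(sequential_batch)  # one sequential append, no random seeks
    return len(sequential_batch)

capacity_tier, fingerprints = [], set()
segment = [b"A" * 4096, b"B" * 4096, b"A" * 4096]          # one duplicate block
written = destage(segment, capacity_tier, fingerprints)
print(f"{written} unique compressed blocks destaged out of {len(segment)}")
```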

For data read operations, data may be read from multiple locations. For data that was very recently written, the data is likely to still exist in the write log of the local platform controller memory, or the write log of the local caching SSD. If the local write logs do not contain the data, the distributed filesystem metadata will be queried to see if the data is cached elsewhere, either in the write logs of remote nodes, or in the dedicated read cache area of the local and remote SSDs. Finally, if the data has not been accessed in a significant amount of time, the filesystem will retrieve the data requested from the HDD capacity layer. As requests for reads are made to the distributed filesystem and the data is retrieved from the HDD capacity layer, the caching SSDs populate their dedicated read cache area to speed up subsequent requests for the same data. This multi-tiered distributed system, with several layers of caching techniques, ensures that data is served at the highest possible speed, leveraging the caching SSDs of the nodes fully and equally.
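A simplified model of that read path is sketched below: check the in-memory write log, then the local caching SSD, then remote caches, and finally the HDD capacity tier, warming the local read cache on the way back. Again, this is conceptual only.

```python
# Conceptual tiered read path: write log -> local SSD cache -> remote caches -> HDDs,
# populating the local read cache when cold data is fetched from the capacity tier.

def read_block(key, write_log, local_ssd_cache, remote_caches, capacity_tier):
    if key in write_log:                      # recently written, still in memory
        return write_log[key], "write log"
    if key in local_ssd_cache:                # cached on the local SSD
        return local_ssd_cache[key], "local SSD cache"
    for node, cache in remote_caches.items(): # cached on another node's SSD
        if key in cache:
            return cache[key], f"remote cache on {node}"
    data = capacity_tier[key]                 # cold data: read from HDDs...
    local_ssd_cache[key] = data               # ...and populate the read cache
    return data, "HDD capacity tier (now cached)"

write_log, local_ssd, remotes = {}, {}, {"hx-node-2": {}}
capacity = {"blk-7": b"payload"}
print(read_block("blk-7", write_log, local_ssd, remotes, capacity)[1])
print(read_block("blk-7", write_log, local_ssd, remotes, capacity)[1])
```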

The Cisco Nexus 9372PX/9372PX-E Switches have 48 1/10-Gbps Small Form-factor Pluggable Plus (SFP+) ports and 6 Quad SFP+ (QSFP+) uplink ports. All the ports are line rate, delivering 1.44 Tbps of throughput in a 1-rack-unit (1RU) form factor.

Cisco Nexus 9372PX benefits are listed below.

Some XenDesktop editions include the features available in XenApp. Citrix XenDesktop and XenApp application and desktop virtualization delivers high performance, scalability, and business agility that supports mobile users without sacrificing security and compliance. These services can be accessed from any device, at any time, from any location, to provide superior user experience and bring-your-own-device flexibility to users while delivering centralized application and OS management and security to IT managers, increasing workplace flexibility, business continuity, user mobility, and productivity. Virtual GPU sharing capabilities bring high-end graphics and HD video processing to every device, giving even mobile devices and tablets the equivalent power of workstations costing thousands of dollars.

Framehawk technology provides a superior user experience over any network. Even HD video appears sharp and uninterrupted despite high latency or spotty connections.

Industry-leading single-disk management tools allow administrators to manage thousands of user desktops as well as multiple applications with a single disk image per OS or app, greatly simplifying installations and upgrades while enabling full customization capabilities for end users. Streaming provisioning services allow for instant OS provisioning and upgrades with minimal interruption for end users. Best-in-class security keeps applications and sensitive data safe with no intellectual property leaving the confines of the data center. Deployments that span widely dispersed locations connected by a WAN can face challenges due to network latency and reliability. Configuring zones can help users in remote regions connect to local resources without forcing connections to traverse large segments of the WAN. Using zones allows effective Site management from a single Citrix Studio console, Citrix Director, and the Site database.

This saves the costs of deploying, staffing, licensing, and maintaining additional Sites containing separate databases in remote locations. Zones can be helpful in deployments of all sizes. You can use zones to keep applications and desktops closer to end users, which improves performance. For more information, see the article. When you configure the databases during Site creation, you can now specify separate locations for the Site, Logging, and Monitoring databases. Later, you can specify different locations for all three databases.

In previous releases, all three databases were created at the same address, and you could not specify a different address for the Site database later. You can now add more Delivery Controllers when you create a Site, as well as later. In previous releases, you could add more Controllers only after you created the Site. For more information, see the and articles. Configure application limits to help manage application use. For example, you can use application limits to manage the number of users accessing an application simultaneously. Similarly, application limits can be used to manage the number of simultaneous instances of resource-intensive applications; this can help maintain server performance and prevent deterioration in service.

For more information, see the article. You can now choose to repeat a notification message that is sent to affected machines before the following types of actions begin:

Updating machines in a Machine Catalog using a new master image

Restarting machines in a Delivery Group according to a configured schedule

If you indicate that the first message should be sent to each affected machine 15 minutes before the update or restart begins, you can also specify that the message be repeated every five minutes until the update/restart begins. For more information, see the and articles. By default, sessions roam between client devices with the user. When the user launches a session and then moves to another device, the same session is used and applications are available on both devices. The applications follow, regardless of the device or whether current sessions exist.
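The notification schedule described above (first message 15 minutes ahead, repeated every five minutes until the restart) can be visualized with the small sketch below; the maintenance window time is hypothetical.

```python
# Sketch of the notification schedule: first message 15 minutes before the restart,
# then every 5 minutes until the restart begins. Times and wording are illustrative.

from datetime import datetime, timedelta

def notification_times(restart_at, lead_minutes=15, repeat_minutes=5):
    t = restart_at - timedelta(minutes=lead_minutes)
    while t < restart_at:
        yield t
        t += timedelta(minutes=repeat_minutes)

restart_at = datetime(2016, 11, 1, 2, 0)       # hypothetical maintenance window
for when in notification_times(restart_at):
    print(f"{when:%H:%M} - 'This machine will restart at {restart_at:%H:%M}.'")
```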

Similarly, printers and other resources assigned to the application follow. You can now use the PowerShell SDK to tailor session roaming. This was an experimental feature in the previous release. For more information, see the article. When using the PowerShell SDK to create or update a Machine Catalog, you can now select a template from other hypervisor connections. This is in addition to the currently-available choices of VM images and snapshots. See the article for full support information. Information about support for third-party product versions is updated periodically.

By default, SQL Server 2012 Express SP2 is installed when you install the Delivery Controller. SP1 is no longer installed. The component installers now automatically deploy newer Microsoft Visual C++ runtime versions: 32-bit and 64-bit Microsoft Visual C++ 2013, 2010 SP1, and 2008 SP1. Visual C++ 2005 is no longer deployed. You can install Studio or VDAs for Windows Desktop OS on machines running Windows 10. You can create connections to Microsoft Azure virtualization resources.

Figure 29 Logical Architecture of Citrix XenDesktop

Most enterprises struggle to keep up with the proliferation and management of computers in their environments. Each computer, whether it is a desktop PC, a server in a data center, or a kiosk-type device, must be managed as an individual entity. The benefits of distributed processing come at the cost of distributed management. It costs time and money to set up, update, support, and ultimately decommission each computer. The initial cost of the machine is often dwarfed by operating costs. Citrix PVS takes a very different approach from traditional imaging solutions by fundamentally changing the relationship between hardware and the software that runs on it. By streaming a single shared disk image (vDisk) rather than copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage, even as the number of machines continues to grow, simultaneously providing the efficiency of centralized management and the benefits of distributed processing.

In addition, because machines are streaming disk data dynamically and in real time from a single shared image, machine image consistency is essentially ensured. At the same time, the configuration, applications, and even the OS of large pools of machines can be completely changed in the time it takes the machines to reboot. Using PVS, any vDisk can be configured in standard-image mode. A vDisk in standard-image mode allows many computers to boot from it simultaneously, greatly reducing the number of images that must be maintained and the amount of storage that is required. The vDisk is in read-only format, and the image cannot be changed by target devices.
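The sketch below is a toy copy-on-write model of a standard-image vDisk: every target device reads the same read-only golden image, while its own writes land only in a per-device cache (the local drive can hold this runtime cache, as noted later in this document). It illustrates the concept only and is not Citrix PVS internals.

```python
# Toy copy-on-write model of a standard-image vDisk: one shared read-only image,
# per-device write caches for runtime changes. Conceptual sketch only.

SHARED_VDISK = {"boot.cfg": "golden", "app.dll": "v1"}     # read-only golden image

class TargetDevice:
    def __init__(self, name):
        self.name = name
        self.write_cache = {}                              # per-device runtime writes

    def read(self, path):
        return self.write_cache.get(path, SHARED_VDISK.get(path))

    def write(self, path, data):
        self.write_cache[path] = data                      # golden image is never modified

desk01, desk02 = TargetDevice("desk01"), TargetDevice("desk02")
desk01.write("app.dll", "patched-locally")
print(desk01.read("app.dll"), "|", desk02.read("app.dll"), "|", SHARED_VDISK["app.dll"])
# -> patched-locally | v1 | v1   (a reboot simply discards the write cache)
```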

Benefits for Citrix XenApp and Other Server Farm Administrators

If you manage a pool of servers that work as a farm, such as Citrix XenApp servers or web servers, maintaining a uniform patch level on your servers can be difficult and time consuming. With traditional imaging solutions, you start with a clean golden master image, but as soon as a server is built with the master image, you must patch that individual server along with all the other individual servers. Rolling out patches to individual servers in your farm is not only inefficient, but the results can also be unreliable. Patches often fail on an individual server, and you may not realize you have a problem until users start complaining or the server has an outage. After that happens, getting the server resynchronized with the rest of the farm can be challenging, and sometimes a full reimaging of the machine is required.

With Citrix PVS, patch management for server farms is simple and reliable. You start by managing your golden image, and you continue to manage that single golden image. All patching is performed in one place and then streamed to your servers when they boot. Server build consistency is assured because all your servers use a single shared copy of the disk image.

If a server becomes corrupted, simply reboot it, and it is instantly back to the known good state of your master image. Upgrades are extremely fast to implement. After you have your updated image ready for production, you simply assign the new image version to the servers and reboot them. You can deploy the new image to any number of servers in the time it takes them to reboot. Just as important, rollback can be performed in the same way, so problems with new images do not need to take your servers or your users out of commission for an extended period of time.

Benefits for Desktop Administrators

Because Citrix PVS is part of Citrix XenDesktop, desktop administrators can use PVS’s streaming technology to simplify, consolidate, and reduce the costs of both physical and virtual desktop delivery.

Many organizations are beginning to explore desktop virtualization. Although virtualization addresses many of IT’s needs for consolidation and simplified management, deploying it also requires deployment of supporting infrastructure. Without PVS, storage costs can make desktop virtualization too costly for the IT budget.

However, with PVS, IT can reduce the amount of storage required for VDI by as much as 90 percent. And with a single image to manage instead of hundreds or thousands of desktops, PVS significantly reduces the cost, effort, and complexity for desktop administration. Different types of workers across the enterprise need different types of desktops. Some require simplicity and standardization, and others require high performance and personalization. XenDesktop can meet these requirements in a single solution using Citrix FlexCast delivery technology.

With FlexCast, IT can deliver every type of virtual desktop, each specifically tailored to meet the performance, security, and flexibility requirements of each individual user. Not all desktop applications can be supported by virtual desktops. For these scenarios, IT can still reap the benefits of consolidation and single-image management. Desktop images are stored and managed centrally in the data center and streamed to physical desktops on demand. This model works particularly well for standardized desktops such as those in lab and training environments, call centers, and thin-client devices used to access virtual desktops.

Citrix Provisioning Services Solution

Citrix PVS streaming technology allows computers to be provisioned and re-provisioned in real time from a single shared disk image. With this approach, administrators can completely eliminate the need to manage and patch individual systems.

Instead, all image management is performed on the master image. The local hard drive of each system can be used for runtime data caching or, in some scenarios, removed from the system entirely, which reduces power use, system failure rate, and security risk. The PVS solution’s infrastructure is based on software-streaming technology. After PVS components are installed and configured, a vDisk is created from a device’s hard drive by taking a snapshot of the OS and application image and then storing that image as a vDisk file on the network. A device used for this process is referred to as a master target device.

The devices that use the vDisks are called target devices. vDisks can exist on a Provisioning Server, on a file share, or, in larger deployments, on a storage system with which PVS can communicate (iSCSI, SAN, network-attached storage [NAS], and Common Internet File System [CIFS]). vDisks can be assigned to a single target device in private-image mode, or to multiple target devices in standard-image mode.

Citrix Provisioning Services Infrastructure

The Citrix PVS infrastructure design directly relates to administrative roles within a PVS farm. The PVS administrator role determines which components that administrator can manage or view in the console. A PVS farm contains several components. Figure 30 illustrates a high-level view of a basic PVS infrastructure and shows how PVS components might appear within that implementation.

For the purposes of the validation represented in this document, both XenDesktop Virtual Desktops and XenApp Hosted Shared Desktop server sessions were validated. Each of the sections provides some fundamental design decisions for this environment. When the desktop user groups and sub-groups have been identified, the next task is to catalog group application and data requirements. This can be one of the most time-consuming processes in the VDI planning exercise, but is essential for the VDI project’s success. If the applications and data are not identified and co-located, performance will be negatively affected. The process of analyzing the variety of application and data pairs for an organization will likely be complicated by the inclusion of cloud applications, such as SalesForce.com.

This application and data analysis is beyond the scope of this Cisco Validated Design, but should not be omitted from the planning process. There are a variety of third party tools available to assist organizations with this crucial exercise.

Now that user groups, their applications, and their data requirements are understood, some key project and solution sizing questions may be considered. General project questions should be addressed at the outset, including:
Has a VDI pilot plan been created based on the business analysis of the desktop groups, applications, and data?
Is there infrastructure and budget in place to run the pilot program?
Are the required skill sets to execute the VDI project available? Can we hire or contract for them?

Do we have end user experience performance metrics identified for each desktop sub-group?
How will we measure success or failure? What is the future implication of success or failure?
Below is a short, non-exhaustive list of sizing questions that should be addressed for each user sub-group:
What is the desktop OS planned?

Windows 7, Windows 8, or Windows 10? 32-bit or 64-bit desktop OS?
How many virtual desktops will be deployed in the pilot? In production? All Windows 7/8/10?

How much memory per desktop in each target desktop group?
Are there any rich media, Flash, or graphics-intensive workloads?
What is the end point graphics processing capability?
Will Citrix XenApp for Server Hosted Shared sessions be used?
What is the hypervisor for the solution?
What is the storage configuration in the existing environment?

Are there sufficient IOPS available for the write-intensive VDI workload?
Will there be storage dedicated and tuned for VDI service?
Is there a voice component to the desktop?

Is anti-virus a part of the image?
Is user profile management (e.g., non-roaming profile based) part of the solution?
What is the fault tolerance, failover, disaster recovery plan?
Are there additional desktop sub-group specific questions?

Citrix XenDesktop 7.11 integrates Hosted Shared and VDI desktop virtualization technologies into a unified architecture that enables a scalable, simple, efficient, and manageable solution for delivering Windows applications and desktops as a service. Users can select applications from an easy-to-use “store” that is accessible from tablets, smartphones, PCs, Macs, and thin clients. XenDesktop delivers a native touch-optimized experience with HDX high-definition performance, even over mobile networks.

Collections of identical Virtual Machines (VMs) or physical computers are managed as a single entity called a Machine Catalog. In this CVD, VM provisioning relies on Citrix Provisioning Services to make sure that the machines in the catalog are consistent. In this CVD, machines in the Machine Catalog are configured to run either a Windows Server OS (for RDS hosted shared desktops) or a Windows Desktop OS (for hosted pooled VDI desktops). To deliver desktops and applications to users, you create a Machine Catalog and then allocate machines from the catalog to users by creating Delivery Groups. Delivery Groups provide desktops, applications, or a combination of desktops and applications to users. Creating a Delivery Group is a flexible way of allocating machines and applications to users.

In a Delivery Group, you can:
Use machines from multiple catalogs
Allocate a user to multiple machines
Allocate multiple users to one machine

As part of the creation process, you specify the following Delivery Group properties:
Users, groups, and applications allocated to Delivery Groups
Desktop settings to match users' needs
Desktop power management options

Figure 32 illustrates how users access desktops and applications through machine catalogs and delivery groups. Server OS and Desktop OS machines were configured in this CVD to support hosted shared desktops and hosted virtual desktops (both non-persistent and persistent). Citrix XenDesktop 7.11 can be deployed with or without Citrix Provisioning Services (PVS). The advantage of using Citrix PVS is that it allows virtual machines to be provisioned and re-provisioned in real time from a single shared-disk image. In this way administrators can completely eliminate the need to manage and patch individual systems and reduce the number of disk images that they manage, even as the number of machines continues to grow, simultaneously providing the efficiencies of centralized management with the benefits of distributed processing. The Provisioning Services solution’s infrastructure is based on software-streaming technology.

After installing and configuring Provisioning Services components, a single shared disk image (vDisk) is created from a device’s hard drive by taking a snapshot of the OS and application image, and then storing that image as a vDisk file on the network. A device that is used during the vDisk creation process is the Master target device.

Devices or virtual machines that use the created vDisks are called target devices. When a target device is turned on, it is set to boot from the network and to communicate with a Provisioning Server. Unlike thin-client technology, processing takes place on the target device (Step 1).

Figure 33 Citrix Provisioning Services Functionality

The target device downloads the boot file from a Provisioning Server (Step 2) and boots. Based on the boot configuration settings, the appropriate vDisk is mounted on the Provisioning Server (Step 3). The vDisk software is then streamed to the target device as needed, appearing as a regular hard drive to the system.

Instead of immediately pulling all the vDisk contents down to the target device (as with traditional imaging solutions), the data is brought across the network in real-time as needed. This approach allows a target device to get a completely new operating system and set of software in the time it takes to reboot.

This approach dramatically decreases the amount of network bandwidth required, making it possible to support a larger number of target devices on a network without impacting performance.

Citrix PVS can create desktops as Pooled or Private:
Pooled Desktop: A pooled virtual desktop uses Citrix PVS to stream a standard desktop image to multiple desktop instances upon boot.
Private Desktop: A private desktop is a single desktop assigned to one distinct user.

The alternative to Citrix Provisioning Services for pooled desktop deployments is Citrix Machine Creation Services (MCS), which is integrated with the XenDesktop Studio console.

Locating the PVS Write Cache

When considering a PVS deployment, there are some design decisions that need to be made regarding the write cache for the target devices that leverage Provisioning Services. The write cache is a cache of all data that the target device has written.

If data is written to the PVS vDisk in a caching mode, the data is not written back to the base vDisk. Instead, it is written to a write cache file in one of the following locations:

Cache on device hard drive. Write cache exists as a file in NTFS format, located on the target device’s hard drive. This option frees up the Provisioning Server since it does not have to process write requests and does not have the finite limitation of RAM.

Cache on device hard drive persisted (Experimental Phase). This is the same as “Cache on device hard drive”, except that the cache persists. At this time, this method is an experimental feature only, and is supported only for NT 6.1 or later (Windows 10 and Windows 2008 R2 and later). This method also requires a different bootstrap.

Cache in device RAM. Write cache can exist as a temporary file in the target device’s RAM.

This provides the fastest method of disk access, since memory access is always faster than disk access.

Cache in device RAM with overflow on hard disk. This method uses the VHDX differencing format and is only available for Windows 10 and Server 2008 R2 and later.

When RAM is zero, the target device write cache is only written to the local disk. When RAM is not zero, the target device write cache is written to RAM first. When RAM is full, the least recently used block of data is written to the local differencing disk to accommodate newer data on RAM. The amount of RAM specified is the non-paged kernel memory that the target device will consume.

Cache on a server. Write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk I/O and network traffic.

For additional security, the Provisioning Server can be configured to encrypt write cache files. Since the write cache file persists on the hard drive between reboots, encrypted data provides data protection in the event a hard drive is stolen.

Cache on server persisted. This cache option allows saved changes to persist between reboots. Using this option, a rebooted target device is able to retrieve changes made from previous sessions that differ from the read-only vDisk image. If a vDisk is set to this method of caching, each target device that accesses the vDisk automatically has a device-specific, writable disk file created. Any changes made to the vDisk image are written to that file, which is not automatically deleted upon shutdown.

In this CVD, Provisioning Server 7.11 was used to manage Pooled/Non-Persistent VDI machines and XenApp RDS machines with “Cache in device RAM with overflow on hard disk” for each virtual machine. This design enables good scalability to many thousands of desktops. Provisioning Server 7.11 was used for Active Directory machine account creation and management as well as for streaming the shared disk to the hypervisor hosts. Two examples of typical XenDesktop deployments are the following: a distributed components configuration and a multiple site configuration. Since XenApp and XenDesktop 7.11 are based on a unified architecture, combined they can deliver a combination of Hosted Shared Desktops (HSDs, using a Server OS machine) and Hosted Virtual Desktops (HVDs, using a Desktop OS).

Distributed Components Configuration

You can distribute the components of your deployment among a greater number of servers, or provide greater scalability and failover by increasing the number of controllers in your site. You can install management consoles on separate computers to manage the deployment remotely.

A distributed deployment is necessary for an infrastructure based on remote access through NetScaler Gateway (formerly called Access Gateway). Figure 34 shows an example of a distributed components configuration. A simplified version of this configuration is often deployed for an initial proof-of-concept (POC) deployment. The CVD described in this document deploys Citrix XenDesktop in a configuration that resembles the distributed components configuration shown.

Multiple Site Configuration

If you have multiple regional sites, you can use Citrix NetScaler to direct user connections to the most appropriate site and StoreFront to deliver desktops and applications to users. In Figure 35, depicting multiple sites, a site was created in two data centers. Having two sites globally, rather than just one, minimizes the amount of unnecessary WAN traffic.

Two Cisco blade servers host the required infrastructure services (AD, DNS, DHCP, Profile, SQL, Citrix XenDesktop management, and web servers). You can use StoreFront to aggregate resources from multiple sites to provide users with a single point of access with NetScaler. A separate Studio console is required to manage each site; sites cannot be managed as a single entity. You can use Director to support users across sites. Citrix NetScaler accelerates application performance, load balances servers, increases security, and optimizes the user experience. In this example, two NetScalers are used to provide a high availability configuration. The NetScalers are configured for Global Server Load Balancing and positioned in the DMZ to provide a multi-site, fault-tolerant solution.

With Citrix XenDesktop 7.11, the method you choose to provide applications or desktops to users depends on the types of applications and desktops you are hosting and available system resources, as well as the types of users and user experience you want to provide.

Server OS machines

You want: Inexpensive server-based delivery to minimize the cost of delivering applications to a large number of users, while providing a secure, high-definition user experience.

Your users: Perform well-defined tasks and do not require personalization or offline access to applications. Users may include task workers such as call center operators and retail workers, or users that share workstations.

Application types: Any application.

Desktop OS machines

You want: A client-based application delivery solution that is secure, provides centralized management, and supports a large number of users per host server (or hypervisor), while providing users with applications that display seamlessly in high definition.

Your users: Are internal, external contractors, third-party collaborators, and other provisional team members. Users do not require off-line access to hosted applications.

Application types: Applications that might not work well with other applications or might interact with the operating system, such as the .NET Framework. These types of applications are ideal for hosting on virtual machines. Applications running on older operating systems, such as Windows XP or Windows Vista, and older architectures, such as 32-bit or 16-bit. By isolating each application on its own virtual machine, if one machine fails, it does not impact other users.

Remote PC Access

You want: Employees with secure remote access to a physical computer without using a VPN. For example, the user may be accessing their physical desktop PC from home or through a public WiFi hotspot. Depending upon the location, you may want to restrict the ability to print or copy and paste outside of the desktop.

This method enables BYO device support without migrating desktop images into the data center.

Your users: Employees or contractors that have the option to work from home, but need access to specific software or data on their corporate desktops to perform their jobs remotely.

Host: The same as Desktop OS machines.

Application types: Applications that are delivered from an office computer and display seamlessly in high definition on the remote user's device.

For the Cisco Validated Design described in this document, a mix of Hosted Shared Desktops (HSDs) using RDS-based Server OS machines and Hosted Virtual Desktops (HVDs) using VDI-based Desktop OS machines was configured and tested. The mix consisted of a combination of both use cases. The following sections discuss design decisions relative to the Citrix XenDesktop deployment, including the CVD test environment.

The architecture deployed is highly modular. While each customer’s environment might vary in its exact configuration, the reference architecture contained in this document, once built, can easily be scaled as requirements and demands change. This includes scaling both up (adding additional resources within an existing Cisco HyperFlex system) and out (adding additional Cisco UCS HX-Series nodes). The solution includes Cisco networking, Cisco UCS, and Cisco HyperFlex hyper-converged storage, which efficiently fits into a single data center rack, including the access layer network switches. A dedicated network or subnet for physical device management is often used in data centers.

In this scenario, the mgmt0 interfaces of the two Fabric Interconnects would be connected to that dedicated network or subnet. This is a valid configuration for HyperFlex installations with the following caveat: wherever the HyperFlex installer is deployed, it must have IP connectivity to the subnet of the mgmt0 interfaces of the Fabric Interconnects, and also have IP connectivity to the subnets used by the hx-inband-mgmt VLANs listed above. All HyperFlex storage traffic traversing the hx-storage-data VLAN and subnet is configured to use jumbo frames; to be precise, all communication is configured to send IP packets with a Maximum Transmission Unit (MTU) size of 9000 bytes. Using a larger MTU value means that each IP packet sent carries a larger payload, therefore transmitting more data per packet, and consequently sending and receiving data faster. This requirement also means that the Cisco UCS uplinks must be configured to pass jumbo frames. Failure to configure the Cisco UCS uplink switches to allow jumbo frames can lead to service interruptions during some failure scenarios, particularly when cable or port failures would cause storage traffic to traverse the northbound Cisco UCS uplink switches. Three VMware clusters were configured in one vCenter datacenter instance to support the solution and testing environment:

Infrastructure Cluster: Infra VMs (vCenter, Active Directory, DNS, DHCP, SQL Server, VMware Connection Servers, VMware Replica Servers, View Composer Server, Nexus 1000v Virtual Supervisor Module & VSMs, etc.)

HyperFlex Cluster: Citrix XenDesktop HSD VMs (Windows Server 2012 R2) or Persistent/Non-Persistent HVD VM pools (Windows 10 64-bit).

HyperFlex release v1.8.1 supports 16 nodes in a single HyperFlex cluster. The maximum supported configuration is eight hyperconverged nodes with eight compute-only nodes. The compute-only node count in a HyperFlex cluster cannot exceed the hyperconverged node count.

Login VSI Launchers Cluster: Login VSI cluster (the Login VSI launcher infrastructure was connected using the same set of switches and vCenter instance, but was hosted on separate local storage and servers).

Figure 38 VMware vSphere Clusters on vSphere Web GUI

The following sections detail the design of the elements within the VMware ESXi hypervisors, system requirements, virtual networking, and the configuration of ESXi for the Cisco HyperFlex HX Distributed Data Platform. The Cisco HyperFlex system has a pre-defined virtual network design at the ESXi hypervisor level. Four different virtual switches are created by the HyperFlex installer, each using two uplinks, which are each serviced by a vNIC defined in the Cisco UCS service profile.

The vSwitches created are:

vswitch-hx-inband-mgmt: This is the default vSwitch0, which is renamed by the ESXi kickstart file as part of the automated installation. The default vmkernel port, vmk0, is configured in the standard Management Network port group. The switch has two uplinks, active on fabric A and standby on fabric B, without jumbo frames. A second port group is created for the Storage Platform Controller VMs to connect to with their individual management interfaces.

The VLAN is not a Native VLAN as assigned to the vNIC template, and is therefore assigned in ESXi/vSphere.

vswitch-hx-storage-data: This vSwitch is created as part of the automated installation. A vmkernel port, vmk1, is configured in the Storage Hypervisor Data Network port group, which is the interface used for connectivity to the HX Datastores via NFS.

The switch has two uplinks, active on fabric B and standby on fabric A, with jumbo frames required. A second port group is created for the Storage Platform Controller VMs to connect to with their individual storage interfaces. The VLAN is not a Native VLAN as assigned to the vNIC template, and is therefore assigned in ESXi/vSphere.

vswitch-hx-vm-network: This vSwitch is created as part of the automated installation. The switch has two uplinks, active on both fabrics A and B, without jumbo frames.

The VLAN is not a Native VLAN as assigned to the vNIC template, and is therefore assigned in ESXi/vSphere.

vmotion: This vSwitch is created as part of the automated installation. The switch has two uplinks, active on fabric A and standby on fabric B, with jumbo frames required. The VLAN is not a Native VLAN as assigned to the vNIC template, and is therefore assigned in ESXi/vSphere.

The following table and figures give more detail on the ESXi virtual networking design as built by the HyperFlex installer.

Table 3 ESXi Host Virtual Switch Configuration

For Cisco UCS B200 M4, C220 M4, and/or C240 M4 compute-only nodes, the HyperFlex installer places a lightweight storage controller VM on a 3.5 GB VMFS datastore, provisioned from the SD cards.

Figure 41 HX240c Controller VM Placement

Since the storage controller VMs provide critical functionality of the Cisco HX Distributed Data Platform, the HyperFlex installer will configure CPU resource reservations for the controller VMs. This reservation guarantees that the controller VMs will have CPU resources at a minimum level, in situations where the physical CPU resources of the ESXi hypervisor host are being heavily consumed by the guest VMs.

Table 4 details the CPU resource reservation of the storage controller VMs:

Number of vCPU: 8; Shares: Low; Reservation: 10800 MHz; Limit: unlimited

Since the storage controller VMs provide critical functionality of the Cisco HX Distributed Data Platform, the HyperFlex installer will also configure memory resource reservations for the controller VMs. This reservation guarantees that the controller VMs will have memory resources at a minimum level, in situations where the physical memory resources of the ESXi hypervisor host are being heavily consumed by the guest VMs. Table 5 details the memory resource reservation of the storage controller VMs. Because Cisco UCS B200 M4, C220 M4, and/or C240 M4 compute-only nodes host only a lightweight storage controller VM, that VM is configured with only 1 vCPU and a 512 MB memory reservation.
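The reservations applied by the installer can be spot-checked from VMware PowerCLI. The following is only an illustrative sketch: the vCenter address is a placeholder, the "stCtlVM*" name filter is an assumption about the controller VM naming in this environment, and property names may vary slightly by PowerCLI release.

# Hedged PowerCLI sketch: list CPU/memory reservations for the storage controller VMs
Connect-VIServer -Server vcenter.example.local
Get-VM -Name "stCtlVM*" | Get-VMResourceConfiguration |
    Select-Object VM, CpuReservationMhz, MemReservationGB, CpuSharesLevel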

The new HyperFlex cluster has no default datastores configured for virtual machine storage; therefore, the datastores must be created using the vCenter Web Client plugin. A minimum of two datastores is recommended to satisfy vSphere High Availability datastore heartbeat requirements, although one of the two datastores can be very small. It is important to recognize that all HyperFlex datastores are thinly provisioned, meaning that their configured size can far exceed the actual space available in the HyperFlex cluster. Alerts will be raised by the HyperFlex system in the vCenter plugin when actual space consumption results in low amounts of free space, and alerts will also be sent via Auto-Support email. Overall space consumption in the HyperFlex clustered filesystem is optimized by the default deduplication and compression features.
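Datastores are normally created with the vCenter Web Client plugin as described above; they can also be created from a storage controller VM command line with stcli. The sketch below is illustrative only: the datastore names and sizes are examples, and the flag syntax should be confirmed with "stcli datastore create -h" for the HX Data Platform release in use.

# Hedged stcli sketch, run from a HyperFlex storage controller VM shell
stcli datastore create --name HX-VDI-DS1 --size 20 --unit tb
stcli datastore create --name HX-Heartbeat --size 20 --unit gb
stcli datastore list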

The Cisco HyperFlex solution requires an existing VMware vCenter 6.0 server or appliance that is not installed on a HyperFlex node. Figure 43 provides sample configuration topology options for HyperFlex. For this study, we used an eight node HX220c M4S cluster.

The following subsections detail the physical connectivity configuration of the Citrix XenDesktop 7.11 environment. The information in this section is provided as a reference for cabling the physical equipment in this Cisco Validated Design environment. To simplify cabling requirements, the tables include both local and remote device and port locations. The tables in this section contain the details for the prescribed and supported configuration. This document assumes that out-of-band management ports are plugged into an existing management infrastructure at the deployment site.

These interfaces will be used in various configuration steps. It is recommended to let the HX Installer handle upgrading the server firmware automatically as designed. This will occur once the service profiles are applied to the HX nodes during the automated deployment process. Optional: if you are familiar with Cisco UCS Manager, or you wish to break the install into smaller pieces, you can use the server auto firmware download to pre-stage the correct firmware on the nodes. This will speed up the association time in the HyperFlex installer at the cost of running two separate reboot operations. This method is not required or recommended if doing the install in one sitting. Power on all of the Cisco HyperFlex nodes that will become part of your Cisco HyperFlex cluster, and verify that all of the Cisco HyperFlex HX220c M4S servers appear on the Equipment tab, under the Servers node.

This section details the Cisco UCS configuration that was done as part of the infrastructure build out by the Cisco HyperFlex installer.

Many of the configuration elements are fixed in nature, while the HyperFlex installer does allow some items, such as VLAN names and IDs, IP pools, and more, to be specified at the time of creation. Where the elements can be manually set during the installation, those items are noted in brackets. The complete details about racking, power, and installing the chassis are described in the installation guide (see www.cisco.com/c/en/us/support/servers-unified-computing/ucs-manager/products-installation-guides-list.html) and are beyond the scope of this document. For more information about each step, refer to the following documents: Cisco UCS Manager Configuration Guides – GUI and Command Line Interface (CLI).

During the HyperFlex installation, a Cisco UCS sub-organization is created named “hx-cluster”.

The sub-organization is created underneath the root level of the Cisco UCS hierarchy, and is used to contain all policies, pools, templates, and service profiles used by HyperFlex. This arrangement allows for organizational control using Role-Based Access Control (RBAC) and administrative locales at a later time if desired. In this way, control can be granted to administrators of only the HyperFlex-specific elements of the Cisco UCS domain, separate from control of root-level elements or elements in other sub-organizations.

Figure 45 Cisco UCS Manager Configuration: HyperFlex Sub-organization

The Cisco HyperFlex Data Platform installer version 1.8(1c) needs to be downloaded prior to beginning. Download the latest installer OVA from Cisco.com (Software Download Link). 2. Deploy the OVA to an existing host in the environment.

Use either the Thick Client (C#) or the vSphere Web Client to deploy the OVA on an ESXi host. We will use the Web Client for all activities in this solution.
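As an alternative to the Web Client wizard used in the steps that follow, the installer OVA can also be deployed from an administration workstation with the VMware OVF Tool. This is only a sketch: the OVA file name, inventory path, datastore, and port group are placeholders for this environment, and ovftool will prompt for vCenter credentials.

# Hedged ovftool sketch (all names and paths are placeholders)
ovftool --acceptAllEulas --powerOn \
  --name=HX-Installer \
  --datastore=Infra-DS1 \
  --network="VM Network" \
  ./Cisco-HX-Data-Platform-Installer.ova \
  "vi://vcenter.example.local/Datacenter/host/Infra-Cluster/"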

For a vCenter Web Client-based deployment, log into the vCenter Web Client by browsing to the vCenter management IP address. Under Hosts and Clusters, select the ESXi host on which the HyperFlex Data Platform Installer VM will be deployed.

Right-click the ESXi host and select Deploy OVF Template. Follow the deployment steps to configure the HyperFlex Data Platform Installer VM deployment. Select the OVA file to deploy, then click Next. Review and verify the details for the OVF template to deploy, then click Next. Enter a name for the OVF template deployment, and select the datacenter and folder location.

Select the virtual disk format, leave the VM storage policy set to the datastore default, and select the datastore for the OVF deployment. Select the network adapter destination port group.

Fill out the parameters requested for hostname, gateway, DNS, IP address, and netmask. Alternatively, leave all fields blank for a DHCP-assigned address. If required, an additional network adapter can be added to the HyperFlex Platform Installer VM after the OVF deployment has completed successfully, attached to a matching port group on the ESXi host; for example, when separate in-band and out-of-band management networks are used. Review the settings selected for the OVF deployment and check the box for Power on after deployment.

Click Finish.

The following table summarizes the configuration of the MCS-provisioned virtual machines used in this CVD:

Configuration: MCS Provisioned Pooled Virtual Machines / MCS Provisioned Persistent Virtual Machines
Operating system: Microsoft Windows 10 64-bit / Microsoft Windows 10 64-bit
Virtual CPU amount: 2 / 2
Memory amount: 2.0 GB (reserved) / 2.0 GB (reserved)
Network: VMXNET3 (vm-network) / VMXNET3 (vm-network)
Citrix MCS vDisk size and location: 24 GB (thick) on the MCS-vDisk volume / 40 GB (thick) on the MCS-vDisk volume
Additional software used for testing: Microsoft Office 2016, Login VSI 4.1.5 (Knowledge Worker Workload) / Microsoft Office 2016, Login VSI 4.1.5 (Knowledge Worker Workload)

When preparing a Machine Creation Services (MCS) master image, Cisco recommends using the SEsparse disk format for optimal performance, because MCS does not leverage the VMware CBRC or VAAI integrations. The following section details how to do that. To create a new VM with an SEsparse disk, complete the following steps: 1. In the vSphere Web Client, create a diskless virtual machine with all other necessary parameters, such as OS, number of vCPUs, amount of memory, and specific networking, configured for your environment. This will become the master virtual machine. 2. Create the SEsparse formatted disk: a. Log into the ESXi shell of one of the HX cluster hosts and browse to the VM folder on the HX datastore: cd /vmfs/volumes/<datastore>/<VM folder>/, for example:

cd /vmfs/volumes/ae706752-328c3938/XDBase4MCS/ b. Create the disk using the vmkfstools utility: vmkfstools -c <size> -d SEsparse <VM_name>.vmdk, for example: vmkfstools -c 20g -d SEsparse XDBase4MCS.vmdk 3. Attach the newly created disk to the master virtual machine: a. Using the VMware Web Client, browse to the master virtual machine. b. Right-click the virtual machine and select Edit Settings. c. From the New Device drop-down list, select Existing Hard Disk and click Add.

Browse the datastore to the master VM folder and select the previously created SEsparse disk. Verify that the disk has the proper format (Flex-SE).
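For convenience, the ESXi shell commands from steps 2a and 2b above are consolidated below. The datastore UUID, folder, and disk name follow the example in the text, and the 20 GB size is illustrative.

# Run from the ESXi shell of an HX cluster host
cd /vmfs/volumes/ae706752-328c3938/XDBase4MCS/
vmkfstools -c 20g -d SEsparse XDBase4MCS.vmdk
ls -lh XDBase4MCS*.vmdk    # confirm the descriptor and SEsparse data files exist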

The virtual machine is now ready for OS deployment and further configuration.

This section details the installation of the core components of the XenDesktop/XenApp 7.11 system.

This CVD installs two XenDesktop Delivery Controllers to support hosted shared desktops (HSD), non-persistent virtual desktops (VDI), and persistent virtual desktops (VDI). Citrix recommends that you use Secure HTTP (HTTPS) and a digital certificate to protect vSphere communications. Citrix recommends that you use a digital certificate issued by a certificate authority (CA) according to your organization's security policy. Otherwise, if security policy allows, use the VMware-installed self-signed certificate. To install vCenter Server self-signed Certificate, complete the following steps: 1.

Add the FQDN of the computer running vCenter Server to the hosts file on that server, located at %SystemRoot%\System32\drivers\etc\. This step is required only if the FQDN of the computer running vCenter Server is not already present in DNS. Open Internet Explorer and enter the address of the computer running vCenter Server as the URL. Accept the security warnings. Click the Certificate Error in the Security Status bar and select View certificates. Click Install certificate, select Local Machine, and then click Next.

Select Place all certificates in the following store and then click Browse. Select Show physical stores. Select Trusted People.

Click Next and then click Finish. Perform the above steps on all Delivery Controllers and Provisioning Servers.

The process of installing the XenDesktop Delivery Controller also installs other key XenDesktop software components, including Studio, which is used to create and manage infrastructure components, and Director, which is used to monitor performance and troubleshoot problems.

To install the Citrix License Server, complete the following steps: 1. To begin the installation, connect to the first Citrix License Server and launch the installer from the Citrix XenDesktop 7.11 ISO. Click “Extend Deployment – Citrix License Server.” 3.

Read the Citrix License Agreement. If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.

Click Next 7. Select the default ports and automatically configured firewall rules. Click Install. Click Finish to complete the installation.

To install the Citrix licenses, complete the following steps: 1. Copy the license files to the default location (C:\Program Files (x86)\Citrix\Licensing\MyFiles) on the license server. Restart the server or the Citrix licensing services so that the licenses are activated. Run the Citrix License Administration Console application. Confirm that the license files have been read and enabled correctly.

To install XenDesktop, complete the following steps: 1. To begin the installation, connect to the first XenDesktop server and launch the installer from the Citrix XenDesktop 7.11 ISO.

The installation wizard presents a menu with three subsections. Click “Get Started - Delivery Controller.” 4. Read the Citrix License Agreement. If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.

Select the components to be installed on the first Delivery Controller Server: a. Delivery Controller b. Dedicated StoreFront servers should be implemented for large scale deployments. Since a SQL Server will be used to Store the Database, leave “Install Microsoft SQL Server 2012 SP1 Express” unchecked. Select the default ports and automatically configured firewall rules. Click Install to begin the installation. (Optional) Click the Call Home participation.

Click Finish to complete the installation. (Optional) Check Launch Studio to launch Citrix Studio Console. Citrix Studio is a management console that allows you to create and manage infrastructure and resources to deliver desktops and applications. Replacing Desktop Studio from earlier releases, it provides wizards to set up your environment, create workloads to host applications and desktops, and assign applications and desktops to users.

Citrix Studio launches automatically after the XenDesktop Delivery Controller installation, or if necessary, it can be launched manually. Studio is used to create a Site, which is the core XenDesktop 7.11 environment consisting of the Delivery Controller and the Database. To configure XenDesktop, complete the following steps: 1.

From Citrix Studio, click Deliver applications and desktops to your users. Select “An empty, unconfigured Site.” Enter a site name. Provide the Database Server locations for each data type and click Next. Provide the FQDN of the license server. Click Connect to validate and retrieve any licenses from the server.

High availability will be available for the databases once added to the SQL AlwaysOn Availability Group. To configure the XenDesktop Site administrators, complete the following steps: 1.

Connect to the XenDesktop server and open Citrix Studio Management console. From the Configuration menu, right-click Administrator and select Create Administrator from the drop-down list. Select/Create appropriate scope and click Next. Choose an appropriate Role. Review the Summary and click Finish.

After the first controller is configured and the Site is operational, you can add additional controllers. In this CVD, we created two Delivery Controllers. To configure additional XenDesktop controllers, complete the following steps: 1. To begin the installation of the second Delivery Controller, connect to the second XenDesktop server and launch the installer from the Citrix XenDesktop 7.11 ISO. Click Delivery Controller. Select the components to be installed: 5.

a. Delivery Controller b. StoreFront 6. Repeat the same steps used to install the first Delivery Controller, including the step of importing an SSL certificate for HTTPS between the controller and vSphere. Review the Summary configuration. Click Install.

Confirm all selected components were successfully installed. Verify the Launch Studio checkbox is checked. Click Finish. To add the second Delivery Controller to the XenDesktop Site, complete the following steps: 1. Click Connect this Delivery Controller to an existing Site. Enter the FQDN of the first delivery controller.

Click Yes to allow the database to be updated with this controller’s information automatically. When complete, test the site configuration and verify the Delivery Controller has been added to the list of Controllers. Citrix Studio provides wizards to guide you through the process of setting up an environment and creating desktops. To set up a host connection for a cluster of VMs for the HSD and VDI desktops, complete the following steps. The instructions below outline the procedure to add a host connection and resources for HSD and VDI desktops. Connect to the XenDesktop server and launch Citrix Studio. From the Configuration menu, right-click Hosting and select Add Connection and Resources.

Select the Connection Type, VMware vSphere®. Enter the FQDN of the vCenter server. Enter the username (in domain username format) for the vSphere account.

Provide the password for the vSphere account. Provide a connection name.

Select the Other tools radio button. Select the Trust certificate and click OK. Review the Summary. Click Finish. From the Configuration menu, right-click Hosting and select Add Connection and Resources. Select the Use existing Connection radio button and select your connection from the drop-down list. Select the Use storage shared by hypervisors and Browse to the HyperFlex cluster.

Select the Storage Selection to be used by this connection. Select the Network selection to be used by this connection. Review the Summary page and click Finish.
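Once the connection and resources have been created, the hosting connection can be spot-checked from the Citrix Broker PowerShell SDK on a Delivery Controller. The property list below is a hedged sketch; confirm the exact property names with Get-Member.

# Hedged sketch, run on a Delivery Controller
Add-PSSnapin Citrix*
Get-BrokerHypervisorConnection | Select-Object Name, State, IsReady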

Citrix StoreFront stores aggregate desktops and applications from XenDesktop sites, making resources readily available to users. In this CVD, StoreFront is installed on the dedicated virtual machines as part of the initial Delivery Controller installation. To install and configure Citrix StoreFront, complete the following steps: 1. To begin the installation of the second Delivery Controller, connect to the second XenDesktop server and launch the installer from the Citrix XenDesktop 7.11 ISO.

Click Extend Deployment > Citrix StoreFront. If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button. Select the default ports and automatically configured firewall rules. Click Install. For a multiple-server deployment, use the load balancing environment in the Base URL box.

Specify a name for your store and click Next. Add the required Delivery Controllers to the store and click Next. Specify how connecting users can access the resources (in this environment, only local users on the internal network are able to access the store) and click Next. On the Authentication Methods page, select the methods your users will use to authenticate to the store and click Next. You can select from the following methods: a. Username and password: Users enter their credentials and are authenticated when they access their stores.

b. Domain passthrough: Users authenticate to their domain-joined Windows computers, and their credentials are used to log them on automatically when they access their stores. Configure the XenApp Service URL for users who use PNAgent to access the applications and desktops, and click Create. After creating the store, click Finish.

To configure the second StoreFront server, if used, complete the following steps: 1. From the StoreFront Console on the second server, select “Join existing server group”. In the Join Server Group dialog, enter the name of the first StoreFront server.

Before the additional StoreFront server can join the server group, you must connect to the first Storefront server, add the second server, and obtain the required authorization information. Connect to the first StoreFront server. Using the StoreFront menu on the left, you can scroll through the StoreFront management options. Select Server Group from the menu.

To add the second server and generate the authorization information that allows the additional StoreFront server to join the server group, select Add Server. Copy the Authorization code from the Add Server dialog. Connect to the second Storefront server and paste the Authorization code into the Join Server Group dialog.

A message appears when the second server has joined successfully. The Server Group now lists both StoreFront servers in the group. In most implementations, there is a single vDisk providing the standard image for multiple target devices. Thousands of target devices can use a single vDisk shared across multiple Provisioning Services (PVS) servers in the same farm, simplifying virtual desktop management. This section describes the installation and configuration tasks required to create a PVS implementation.

The PVS server can have many stored vDisks, and each vDisk can be several gigabytes in size. Your streaming performance and manageability can be improved using a RAID array, SAN, or NAS. PVS software and hardware requirements are available at:

Prerequisites

To fulfill the prerequisites, complete the following steps: 1. Set the following Scope Options on the DHCP server hosting the PVS target machines (for example, VDI, RDS). As a Citrix best practice, apply the following registry setting to both the PVS servers and the target machines: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters, Key: 'DisableTaskOffload' (dword), Value: '1'. Only one MS SQL database is associated with a farm. You can choose to install the Provisioning Services database software on an existing SQL database, if that machine can communicate with all Provisioning Servers within the farm, or on a new SQL Express database machine, created using the SQL Express software that is free from Microsoft.

The following MS SQL 2008, MS SQL 2008 R2, MS SQL 2012, MS SQL 2012 R2 and MS SQL 2014 Server (32 or 64-bit editions) databases can be used for the Provisioning Services database: SQL Server Express Edition, SQL Server Workgroup Edition, SQL Server Standard Edition, SQL Server Enterprise Edition. Microsoft SQL 2012 R2 was installed separately for this CVD.
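If preferred, the DHCP scope options 66 and 67 and the DisableTaskOffload registry prerequisite described above can be applied from an elevated PowerShell prompt. The scope ID, TFTP server address, and bootstrap file name below are placeholders for this environment.

# Hedged sketch: run the DHCP commands on the DHCP server (values are placeholders)
Set-DhcpServerv4OptionValue -ScopeId 10.10.50.0 -OptionId 66 -Value "10.10.50.21"
Set-DhcpServerv4OptionValue -ScopeId 10.10.50.0 -OptionId 67 -Value "ARDBP32.BIN"
# Run on each PVS server and on the master target image
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters" -Name DisableTaskOffload -PropertyType DWord -Value 1 -Force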

To install and configure Citrix Provisioning Service 7.11, complete the following steps: 1. Insert the Citrix Provisioning Services 7.11 ISO and let AutoRun launch the installer. Click Console Installation. Click Install to install the required prerequisites. Read the Citrix License Agreement. If acceptable, select the radio button labeled “I accept the terms in the license agreement.” 7.

Optionally provide User Name and Organization. Accept the default path. Click Install to start the console installation. From the main installation screen, select Server Installation. The installation wizard will check to resolve dependencies and then begin the PVS server installation process. Click Install on the prerequisites dialog.

Click Yes when prompted to install the SQL Native Client. Click Next when the Installation wizard starts. Review the license agreement terms. If acceptable, select the radio button labeled “I accept the terms in the license agreement.” 20. Provide User Name, and Organization information.

Select who will see the application. Accept the default installation location. Click Install to begin the installation. Click Finish when the install is complete. The PVS Configuration Wizard starts automatically. Since the PVS server is not the DHCP server for the environment, select the radio button labeled, “The service that runs on another computer.” 30. Since DHCP boot options 66 and 67 are used for TFTP services, select the radio button labeled, “The service that runs on another computer.” 32.

Since this is the first server in the farm, select the radio button labeled, “Create farm.' Enter the FQDN of the SQL server. Provide the Database, Farm, Site, and Collection names.

Provide a vDisk Store name and the storage path to the Pure Storage vDisk share. This will vary per environment.

For the Days between password updates setting, “7 days” was appropriate for the testing purposes of this configuration. Keep the defaults for the network cards. Select the Use the Provisioning Services TFTP service checkbox. Make sure that the IP addresses for all PVS servers are listed in the Stream Servers Boot List. 54. If desired, fill in the Problem Report Configuration and click Next. Click Finish. When the installation is completed, click Done.

Complete the installation steps on the additional PVS servers up to the configuration step where it asks to Create or Join a Farm. In this CVD, we repeated the procedure to add a total of three PVS servers. To install additional PVS servers, complete the following steps: 1. On the Farm Configuration dialog, select “Join existing farm.” 2.

Provide the FQDN of the SQL Server. Accept the Farm Name.

Accept the Existing Site. Accept the existing vDisk store. Provide the PVS service account information. Set the Days between password updates to 7.

Accept the network card settings. Select Use the Provisioning Services TFTP service checkbox.

Make sure that the IP Addresses for all PVS servers are listed in the Stream Servers Boot List 20. Click Finish to start the installation process.

Click Done when the installation finishes. After completing the steps to install the second PVS server, launch the Provisioning Services Console to verify that the PVS Servers and Stores are configured and that DHCP boot options are defined.

Launch Provisioning Services Console and select Connect to Farm. Enter localhost for the PVS1 server. Click Connect. Select Store Properties from the drop-down menu. In the Store Properties dialog, add the Default store path to the list of Default write cache paths.

Click Validate. If the validation is successful, click OK to continue. Virtual Delivery Agents (VDAs) are installed on the server and workstation operating systems, and enable connections for desktops and apps. The following procedure was used to install VDAs for both HVD and HSD environments.

By default, when you install the Virtual Delivery Agent, Citrix User Profile Management is installed silently on master images. (Using profile management as a profile solution is optional but was used for this CVD, and is described in a later section.) To install XenDesktop Virtual Desktop Agents, complete the following steps: 1.

Launch the XenDesktop installer from the XenDesktop 7.11 ISO. Click Start on the Welcome Screen.

To install the VDA for the Hosted Virtual Desktops (VDI), select Virtual Delivery Agent for Windows Desktop OS. After the VDA is installed for Hosted Virtual Desktops, repeat the procedure to install the VDA for Hosted Shared Desktops (RDS). In this case, select Virtual Delivery Agent for Windows Server OS and follow the same basic steps. Select “Create a Master Image.” 5.

For the VDI vDisk, select “No, install the standard VDA.” 7. Optional: Select Citrix Receiver. Select “Do it manually” and specify the FQDN of the Delivery Controllers. Accept the default features. Allow the firewall rules to be configured automatically. Verify the Summary and click Install. (Optional) Select Call Home participation.

Check “Restart Machine.” 19. Click Finish and the machine will reboot automatically. Repeat the procedure so the VDAs are installed for both HVD (using the Windows 10 OS image) and the HSD desktops (using the Windows Server 2012 R2 image).
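The same VDA installation can also be scripted on the master images with the command-line installer on the XenDesktop 7.11 ISO. The sketch below is hedged: the installer path, flag set, and controller FQDNs are assumptions based on the documented VDA command-line options and should be verified against the 7.11 media before use.

# Hedged sketch, run from an elevated PowerShell prompt on the master image (ISO mounted as D:)
& "D:\x64\XenDesktop Setup\XenDesktopVdaSetup.exe" /quiet /components vda,plugins `
    /controllers "xd-ctrl1.example.local xd-ctrl2.example.local" `
    /masterimage /enable_hdx_ports /optimize /noreboot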

Select the appropriate workflow for the HSD desktop. The Master Target Device refers to the target device from which a hard disk image is built and stored on a vDisk.

Provisioning Services then streams the contents of the vDisk created to other target devices. This procedure installs the PVS Target Device software that is used to build the RDS and VDI golden images. To install the Citrix Provisioning Server Target Device software, complete the following steps. The instructions below describe the process of creating a vDisk for VDI desktops. When you have completed these steps, repeat the procedure to build a vDisk for HSD. The PVS Imaging Wizard's Welcome page appears.

The Connect to Farm page appears. Enter the name or IP address of a Provisioning Server within the farm to connect to and the port to use to make that connection.

Use the Windows credentials (default) or enter different credentials. Select Create new vDisk. The Add Target Device page appears.

Select the Target Device Name, the MAC address associated with one of the NICs that was selected when the target device software was installed on the master target device, and the Collection to which you are adding the device. The New vDisk dialog displays. Enter the name of the vDisk.

Select the Store where the vDisk will reside. Select the vDisk type, either Fixed or Dynamic, from the drop-down menu. (This CVD used Dynamic rather than Fixed vDisks.) 13.

On the Microsoft Volume Licensing page, select the volume license option to use for target devices. For this CVD, volume licensing is not used, so the None button is selected. Select Image entire boot disk on the Configure Image Volumes page. Select Optimize the hard disk again for Provisioning Services before imaging on the Optimize Hard Disk for Provisioning Services page.

Select Create on the Summary page. Review the configuration and click Continue.

When prompted, click No to shut down the machine. Edit the VM settings and select Force BIOS Setup under Boot Options. Restart Virtual Machine. Configure the BIOS/VM settings for PXE/network boot, putting Network boot from VMware VMXNET3 at the top of the boot device list.

Select Exit Saving Changes. After restarting the VM, log into the VDI or RDS master target.

The PVS imaging process begins, copying the contents of the C: drive to the PVS vDisk located on the server. If prompted to Restart select Restart Later. A message is displayed when the conversion is complete, click Done. Shutdown the VM used as the HVD or HSD master target. Connect to the PVS server and validate that the vDisk image is available in the Store.

Right-click the newly created vDisk and select Properties. On the vDisk Properties dialog, change Access mode to “Standard Image (multi-device, read-only access)”. Set the Cache Type to “Cache on device hard disk.” 34. Repeat this procedure to create vDisks for both the Hosted VDI Desktops (using the Windows 10 OS image) and the Hosted Shared Desktops (using the Windows Server 2012 R2 image). Non-Persistent PVS streamed desktops To create HVD and HSD machines, complete the following steps: 1. Select the Master Target Device VM from the vSphere Client.

Right-click the VM and select Clone. Name the cloned VM Desktop-Template.

Select the cluster and datastore where the first phase of provisioning will occur. Remove Hard disk 1 from the Template VM. Hard disk 1 is not required to provision desktop machines as the XenDesktop Setup Wizard dynamically creates the write cache disk.

Convert the Desktop-Template VM to a template. Start the XenDesktop Setup Wizard from the Provisioning Services Console. Right-click the Site. Choose XenDesktop Setup Wizard from the context menu. Enter the XenDesktop Controller address that will be used for the wizard operations.

Select the Host Resources on which the virtual machines will be created. Provide the Host Resources Credentials (Username and Password) to the XenDesktop controller when prompted. Select the Template created earlier. Select the vDisk that will be used to stream virtual machines. Select “Create a new catalog”. The catalog name is also used as the collection name in the PVS site. On the Operating System dialog, specify the operating system for the catalog.

Specify Windows Desktop Operating System for VDI and Windows Server Operating System for RDS. If you specified a Windows Desktop OS for VDIs, a User Experience dialog appears. Specify that the user will connect to “A fresh new (random) desktop each time.” 26. On the Virtual machines dialog, specify: a. The number of VMs to create.

(Note that it is recommended to create 200 or less per provisioning run. Create a single VM at first to verify the procedure.) b. Number of vCPUs for the VM (2 for VDI, 6 for RDS) c. The amount of memory for the VM (1.7GB for VDI, 24GB for RDS) d. The write-cache disk size (10GB for VDI, 30GB for RDS) e. PXE boot as the Boot Mode 28.

Select the Create new accounts radio button. Specify the Active Directory Accounts and Location. This is where the wizard should create the computer accounts. Provide the Account naming scheme.

An example name is shown in the text box below the name scheme selection location. Click Finish to begin the virtual machine creation. When the wizard is done provisioning the virtual machines, click Done. Provisioning process takes ~10 seconds per machine. Verify the desktop machines were successfully created in the following locations: a.

PVS1 > Provisioning Services Console > Farm > Site > Device Collections > VDI-NP > CTX-VDI-001 b. CTX-XD1 > Citrix Studio > Machine Catalogs > VDI-NP c. AD-DC1 > Active Directory Users and Computers > dvpod2.local > ComputerOU > CTX-VDI-001 37. Log on to a newly provisioned desktop machine and, using the Virtual Disk Status tool, verify that the image mode is set to Read Only and the cache type is Device RAM with overflow on local hard drive.

Connect to a XenDesktop server and launch Citrix Studio. Choose Create Machine Catalog from the drop-down menu. Select Desktop OS and click Next.

Select the appropriate machine management and click Next. Select Random for the desktop experience. Specify the number of desktops to create and the machine configuration. Specify the AD account naming scheme and the OU where accounts will be created.

On the Summary page, specify the Catalog name and click Finish to start deployment.

Persistent Static Provisioned with MCS

Connect to a XenDesktop server and launch Citrix Studio.

Choose Create Machine Catalog from the drop-down menu. Select Desktop OS and click Next. Select the appropriate machine management and click Next. Select Static, Dedicated for the Desktop Experience and click Next. Select a virtual machine to be used as the catalog master image and click Next. Specify the number of desktops to create and the machine configuration.

Use Full Copy for the machine copy mode. Specify the AD account naming scheme and the OU where accounts will be created. On the Summary page, specify the Catalog name and click Finish to start deployment.

Delivery Groups are collections of machines that control access to desktops and applications. With Delivery Groups, you can specify which users and groups can access which desktops and applications. To create Delivery Groups, complete the following steps.

The instructions below outline the procedure to create a Delivery Group for HSD desktops. When you have completed these steps, repeat the procedure to create a Delivery Group for HVD desktops. Connect to a XenDesktop server and launch Citrix Studio.

Choose Create Delivery Group from the drop-down menu. Select the Desktops the catalog will deliver. To make the Delivery Group accessible, you must add users; click Add. In the Select Users or Groups dialog, add users or groups.

When users have been added, click Next. Specify the Applications the catalog will deliver. On the Summary dialog, review the configuration.

Enter a Delivery Group name and a Display name (for example, HVD or HSD). Select Finish. Citrix Studio lists the created Delivery Groups and the type, number of machines created, sessions, and applications for each group in the Delivery Groups tab. Select the Delivery Group and, in the Action List, select "Turn on Maintenance Mode."

Policies and profiles allow the Citrix XenDesktop environment to be easily and efficiently customized.

Citrix XenDesktop policies control user access and session environments, and are the most efficient method of controlling connection, security, and bandwidth settings. You can create policies for specific groups of users, devices, or connection types. Policies can contain multiple settings and are typically defined through Citrix Studio. (The Windows Group Policy Management Console can also be used if the network environment includes Microsoft Active Directory and permissions are set for managing Group Policy Objects.) Figure 46 shows the policies used for Login VSI testing in this CVD.

Profile management provides an easy, reliable, and high-performance way to manage user personalization settings in virtualized or physical Windows environments.

It requires minimal infrastructure and administration, and provides users with fast logons and logoffs. A Windows user profile is a collection of folders, files, registry settings, and configuration settings that define the environment for a user who logs on with a particular user account. These settings may be customizable by the user, depending on the administrative configuration. Examples of settings that can be customized are:
- Desktop settings such as wallpaper and screen saver
- Shortcuts and Start menu settings
- Internet Explorer Favorites and Home Page
- Microsoft Outlook signature
- Printers
Some user settings and data can be redirected by means of folder redirection.

However, if folder redirection is not used, these settings are stored within the user profile. The first stage in planning a profile management deployment is to decide on a set of policy settings that together form a suitable configuration for your environment and users. The automatic configuration feature simplifies some of this decision-making for XenDesktop deployments. Screenshots of the User Profile Management interfaces that establish policies for this CVD's RDS and VDI users (for testing purposes) are shown below. Basic profile management policy settings are documented here:

Figure 47 VDI User Profile Manager Policy

In this project, we tested a single Cisco HyperFlex cluster running eight Cisco UCS HX220c-M4S Rack Servers in a single Cisco UCS domain. This solution was tested to illustrate linear scalability for each workload studied.

Hardware components:
- 2 x Cisco UCS 6248UP Fabric Interconnects
- 2 x Cisco Nexus 9372PX Access Switches
- 8 x Cisco UCS HX220c-M4S Rack Servers (2 Intel Xeon processor E5-2690 v4 CPUs at 2.6 GHz, with 512 GB of memory per server [32 GB x 16 DIMMs at 2400 MHz]), each with:
  - Cisco VIC 1227 mLOM
  - 120 GB 2.5" 6G SATA SSD drive
  - 480 GB 2.5" 6G SATA SSD drive
  - 6 x 1.2 TB 2.5" 12G 10K RPM SAS drives
  - 2 x 64 GB SD cards

Software components:
- Cisco UCS firmware 3.1(2b)
- Cisco HyperFlex Data Platform 1.8.1b
- VMware vSphere 6.0
- Citrix XenDesktop 7.11 Hosted Virtual Desktops
- Citrix XenApp 7.11 Hosted Shared Desktops
- Citrix Provisioning Services 7.11
- File Server for User Profiles
- Microsoft SQL Server 2012
- Microsoft Windows 10
- Microsoft Windows Server 2012 R2
- Microsoft Office 2016
- Login VSI 4.1.5.115

All validation testing was conducted on-site within the Cisco labs in San Jose, California. The testing results focused on the entire process of the virtual desktop lifecycle by capturing metrics during desktop boot-up, user logon and virtual desktop acquisition (also referred to as ramp-up), user workload execution (also referred to as steady state), and user logoff for the Citrix XenDesktop and XenApp sessions under test. Test metrics were gathered from the virtual desktop, storage, and load generation software to assess the overall success of an individual test cycle. A test cycle was not considered passing unless all of the planned test users completed the ramp-up and steady state phases (described below) and all metrics were within the permissible thresholds noted as success criteria. Three successfully completed test cycles were conducted for each hardware configuration, and results were found to be relatively consistent from one test to the next.

You can obtain additional information and a free test license from. The following protocol was used for each test cycle in this study to ensure consistent results. All machines were shut down using the Citrix XenDesktop 7.11 Administrator Console. All Launchers for the test were shut down.

They were then restarted in groups of 10 each minute until the required number of launchers was running with the Login VSI Agent at a “waiting for test to start” state. To simulate severe, real-world environments, Cisco requires the log-on and start-work sequence, known as Ramp Up, to complete in 48 minutes.

Additionally, we require all sessions started, whether 60 single-server users or 1000 full-scale test users, to become active within two minutes after the last session is launched. In addition, Cisco requires that the Login VSI Benchmark method is used for all single-server and scale testing. This assures that our tests represent real-world scenarios. For each of the three consecutive runs on single-server tests, the same process was followed. To perform the test run protocol, complete the following steps:

1. Time 0:00:00 - Start PerfMon logging on the following systems:
   - Infrastructure and VDI host blades used in the test run
   - All infrastructure VMs used in the test run (AD, SQL, View Connection brokers, image mgmt., etc.)
2. Time 0:00:10 - Start storage partner performance logging on the storage system.
3. Time 0:05 - Boot RDS machines using the Citrix XenDesktop 7.11 Administrator Console.
4. Time 0:06 - First machines boot.
5. Time 0:35 - Single-server or scale target number of RDS servers registered on XD.

   No more than 60 minutes of rest time is allowed after the last desktop is registered and available on the Citrix XenDesktop 7.11 Administrator Console dashboard. Typically, a 20-30 minute rest period is sufficient for Windows 10 desktops and 10 minutes for RDS VMs.

6. Time 1:35 - Start the Login VSI 4.1.4 Office Worker Benchmark Mode test, setting the auto-logoff time at 900 seconds, with the single-server or scale target number of desktop VMs, utilizing a sufficient number of Launchers (at 20-25 sessions per Launcher; see the sketch following this list).
7. Time 2:23 - Single-server or scale target number of desktop VMs launched (48-minute benchmark launch rate).
8. Time 2:25 - All launched sessions must become active.

   All sessions launched and active must be logged off for a valid test run. The Citrix XenDesktop 7.11 Administrator Dashboard must show that all desktops have been returned to the registered/available state as evidence of this condition being met.

9. Time 2:57 - All logging terminated; test complete.
10. Time 3:15 - Copy all log files off to archive; set virtual desktops to maintenance mode through the broker; shut down all Windows machines.
11. Time 3:30 - Reboot all hypervisors.
12. Time 3:45 - Ready for new test sequence.
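The 48-minute ramp window and the 20-25 sessions-per-Launcher guideline above reduce to simple arithmetic when sizing a test run. The following Python sketch is illustrative only; the function name and defaults are ours, not part of the Login VSI or CVD tooling:

import math

def ramp_plan(total_sessions, ramp_minutes=48, sessions_per_launcher=(20, 25)):
    # Average launch interval needed to start all sessions within the ramp
    # window, and the Launcher count range implied by 20-25 sessions each.
    interval_seconds = (ramp_minutes * 60) / total_sessions
    low, high = sessions_per_launcher
    launchers_max = math.ceil(total_sessions / low)    # at 20 sessions/Launcher
    launchers_min = math.ceil(total_sessions / high)   # at 25 sessions/Launcher
    return interval_seconds, (launchers_min, launchers_max)

# Example: the 1000-session full-scale test used in this study.
interval, (lo, hi) = ramp_plan(1000)
print(f"Launch one session every {interval:.1f} s using roughly {lo}-{hi} Launchers.")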

Our "pass" criteria for this testing are as follows: Cisco will run tests at a session count level that effectively utilizes the blade capacity, measured by CPU utilization, memory utilization, storage utilization, and network utilization. We will use Login VSI version 4.1.5 to launch Knowledge Worker workloads. The number of launched sessions must equal the number of active sessions within two minutes of the last session launched in a test, as observed on the VSI Management console. The Citrix XenDesktop Studio Desktop Group dashboard will be monitored throughout the steady state to make sure of the following:
- All running sessions report In Use throughout the steady state.
- No sessions move to the unregistered, unavailable, or available state at any time during steady state.
Within 20 minutes of the end of the test, all sessions on all launchers must have logged out automatically and the Login VSI Agent must have shut down. Stuck sessions define a test failure condition. Cisco requires three consecutive runs with results within +/-1% variability to pass the Cisco Validated Design performance criteria. For white papers written by partners, two consecutive runs within +/-1% variability are accepted.

(All test data from partner-run testing must be supplied along with the proposed white paper.) We will publish Cisco Validated Designs with our recommended workload following the process above and will note that we did not reach a VSImax dynamic in our testing. The purpose of this testing is to provide the data needed to validate Citrix XenDesktop 7.11 Hosted Shared Desktop using Microsoft Windows Server 2012 R2 sessions on Cisco UCS HX220c-M4S. The information contained in this section provides data points that a customer may reference in designing their own implementations. These validation results are an example of what is possible under the specific environment conditions outlined here, and do not represent the full characterization of VMware products. Four test sequences, each containing three consecutive test runs generating the same result, were performed to establish single-blade performance and multi-blade, linear scalability.
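The three-consecutive-runs, +/-1% variability requirement can be expressed as a simple check on a headline metric from each run. The snippet below is an illustrative sketch; the function name and sample values are placeholders, not Cisco or Login VSI tooling:

def within_variability(run_results, tolerance=0.01):
    # True if every run's headline metric (e.g., average response time in ms)
    # is within +/- tolerance of the mean across the runs.
    mean = sum(run_results) / len(run_results)
    return all(abs(r - mean) / mean <= tolerance for r in run_results)

# Placeholder values for three consecutive runs (ms):
print(within_variability([1012.0, 1005.0, 1009.0]))   # True  -> passes +/-1%
print(within_variability([1012.0, 1005.0, 1060.0]))   # False -> fails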

VSImax 4.1.x Description

The philosophy behind Login VSI is different from that of conventional benchmarks. In general, most system benchmarks are steady-state benchmarks. These benchmarks execute one or multiple processes, and the measured execution time is the outcome of the test. Simply put, the faster the execution time or the bigger the throughput, the faster the system is according to the benchmark. Login VSI is different in approach. Login VSI is not primarily designed to be a steady-state benchmark (however, if needed, Login VSI can act like one). Login VSI was designed to perform benchmarks for SBC or VDI workloads through system saturation.

Login VSI loads the system with simulated user workloads using well-known desktop applications like Microsoft Office, Internet Explorer, and Adobe PDF Reader. By gradually increasing the number of simulated users, the system will eventually be saturated.

Once the system is saturated, the response time of the applications will increase significantly. This latency in application response times is a clear indication of whether the system is (close to being) overloaded. As a result, by nearly overloading a system it is possible to find out what its true maximum user capacity is. After a test is performed, the response times can be analyzed to calculate the maximum active session/desktop capacity. Within Login VSI this is calculated as VSImax. As the system comes closer to its saturation point, response times will rise. When reviewing the average response time, it will be clear that the response times escalate at the saturation point.

This VSImax is the "Virtual Session Index (VSI)". With Virtual Desktop Infrastructure (VDI) and Terminal Services (RDS) workloads this is valid and useful information. This index simplifies comparisons and makes it possible to understand the true impact of configuration changes at the hypervisor host or guest level.

Server-Side Response Time Measurements

It is important to understand why specific Login VSI design choices have been made. An important design choice is to execute the workload directly on the target system within the session instead of using remote sessions. The scripts simulating the workloads are performed by an engine that executes workload scripts on every target system, and are initiated at logon within the simulated user's desktop session context.

An alternative to the Login VSI method would be to generate user actions client side through the remoting protocol. These methods are always product-specific and vendor dependent. More importantly, some protocols simply do not have a method to script user actions client side.

For Login VSI the choice has been made to execute the scripts completely server side. This is the only practical and platform-independent solution for a benchmark like Login VSI.

Calculating VSImax v4.1.x

The simulated desktop workload is scripted in a 48-minute loop in which a simulated Login VSI user is logged on, performing generic Office worker activities. After the loop is finished it will restart automatically. Within each loop, the response times of sixteen specific operations are measured at a regular interval: sixteen times within each loop. The response times of five of these operations are used to determine VSImax. The five operations from which the response times are measured are:
- Notepad File Open (NFO): Loading and initiating VSINotepad.exe and opening the open-file dialog. This operation is handled by the OS and by VSINotepad.exe itself through execution. This operation seems almost instant from an end user's point of view.
- Notepad Start Load (NSLD): Loading and initiating VSINotepad.exe and opening a file. This operation is also handled by the OS and by VSINotepad.exe itself through execution. This operation seems almost instant from an end user's point of view.
- Zip High Compression (ZHC): This action copies a random file and compresses it (with 7zip) with high compression enabled. The compression will very briefly spike CPU and disk I/O.
- Zip Low Compression (ZLC): This action copies a random file and compresses it (with 7zip) with low compression enabled. The compression will very briefly spike disk I/O and create some load on the CPU.
- CPU: Calculates a large array of random data and spikes the CPU for a short period of time.
These measured operations within Login VSI hit considerably different subsystems such as CPU (user and kernel), memory, disk, the OS in general, the application itself, print, GDI, and so on. These operations are specifically short by nature. When such operations become consistently long, the system is saturated because of excessive queuing on some kind of resource.

As a result, the average response times will then escalate. This effect is clearly visible to end users. If such operations consistently consume multiple seconds, the user will regard the system as slow and unresponsive.

Figure 48 Sample of a VSImax Response Time Graph, Representing a Normal Test
Figure 49 Sample of a VSI Test Response Time Graph with a Clear Performance Issue

When the test is finished, VSImax can be calculated.

When the system is not saturated, and it could complete the full test without exceeding the average response time latency threshold, VSImax is not reached and the number of sessions ran successfully. The response times are very different per measurement type; for instance, Zip with compression can be around 2800 ms, while the Zip action without compression can take only 75 ms. The response times of these actions are weighted before they are added to the total. This ensures that each activity has an equal impact on the total response time.

In comparison to previous VSImax models, this weighting much better represents system performance. All actions have very similar weight in the VSImax total. The following weighting of the response times is applied.

The following actions are part of the VSImax v4.1 calculation and are weighted as follows (US notation):
- Notepad File Open (NFO): 0.75
- Notepad Start Load (NSLD): 0.2
- Zip High Compression (ZHC): 0.125
- Zip Low Compression (ZLC): 0.2
- CPU: 0.75
This weighting is applied to both the baseline and the normal Login VSI response times. With the introduction of Login VSI 4.1, we also created a new method to calculate the base phase of an environment.

With the new workloads (Taskworker, Powerworker, etc.), enabling a 'base phase' for a more reliable baseline has become obsolete. The calculation is explained below. In total, the 15 lowest VSI response time samples are taken from the entire test, the lowest 2 samples are removed, and the 13 remaining samples are averaged.

The result is the baseline. The calculation is as follows:
- Take the lowest 15 samples of the complete test.
- From those 15 samples, remove the lowest 2.
- Average the 13 results that are left; this is the baseline.
The VSImax average response time in Login VSI 4.1.x is calculated over the number of active users that are logged on to the system. Always 5 Login VSI response time samples are averaged, plus 40% of the number of "active" sessions. For example, if the number of active sessions is 60, then the latest 5 + 24 (= 40% of 60) = 31 response time measurements are used for the average calculation.

To remove noise (accidental spikes) from the calculation, the top 5% and bottom 5% of the VSI response time samples are removed from the average calculation, with a minimum of 1 top and 1 bottom sample. As a result, with 60 active users, the last 31 VSI response time samples are taken. From those 31 samples, the top 2 samples and the lowest 2 samples are removed (5% of 31 = 1.55, rounded to 2). At 60 users the average is then calculated over the 27 remaining results. VSImax v4.1.x is reached when the VSIbase + 1000 ms latency threshold is exceeded by the average VSI response time result.
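To make the weighting, baseline, and windowed-average rules above concrete, here is a short Python sketch of the arithmetic as described in this section. It is an illustrative reconstruction rather than Login VSI's actual implementation; the function names and rounding details are assumptions:

# Per-operation weights from the VSImax v4.1 description above.
WEIGHTS = {"NFO": 0.75, "NSLD": 0.2, "ZHC": 0.125, "ZLC": 0.2, "CPU": 0.75}

def weighted_sample(op_times_ms):
    # Weight one set of per-operation response times (ms) and sum them.
    return sum(WEIGHTS[op] * ms for op, ms in op_times_ms.items())

def vsi_baseline(weighted_samples):
    # Baseline: take the lowest 15 weighted samples of the whole test,
    # drop the lowest 2, and average the remaining 13.
    lowest = sorted(weighted_samples)[:15]
    return sum(lowest[2:]) / len(lowest[2:])

def vsi_average(weighted_samples, active_sessions):
    # Average the latest (5 + 40% of active sessions) samples, trimming the
    # top and bottom 5% with a minimum of 1 sample each (the rounding here
    # approximates the description above; the text's worked example counts
    # the window slightly differently).
    window = 5 + int(active_sessions * 0.4)
    recent = sorted(weighted_samples[-window:])
    trim = max(1, round(window * 0.05))
    trimmed = recent[trim:-trim]
    return sum(trimmed) / len(trimmed)

def vsimax_reached(weighted_samples, active_sessions, threshold_ms=1000):
    # VSImax v4.1.x is reached when the trimmed average exceeds the
    # baseline plus the fixed 1000 ms latency threshold.
    return (vsi_average(weighted_samples, active_sessions)
            > vsi_baseline(weighted_samples) + threshold_ms)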

Depending on the tested system, VSImax response time can grow to 2-3x the baseline average. In end-user computing, a 3x increase in response time in comparison to the baseline is typically regarded as the maximum performance degradation to be considered acceptable. In VSImax v4.1.x this latency threshold is fixed at 1000 ms; this allows better and fairer comparisons between two different systems, especially when they have different baseline results. Ultimately, in VSImax v4.1.x, the performance of the system is not decided by the total average response time, but by the latency it has under load.

For all systems, this is now 1000 ms (weighted). The threshold for the total response time is: average weighted baseline response time + 1000 ms. When the system has a weighted baseline response time average of 1500 ms, the maximum average response time may not be greater than 2500 ms (1500 + 1000). If the average baseline is 3000 ms, the maximum average response time may not be greater than 4000 ms (3000 + 1000).

When the threshold is not exceeded by the average VSI response time during the test, VSImax is not hit and the number of sessions ran successfully. This approach is fundamentally different in comparison to previous VSImax methods, as it was always required to saturate the system beyond the VSImax threshold. Lastly, VSImax v4.1.x is now always reported with the average baseline VSI response time result. For example: "The VSImax v4.1 was 125 with a baseline of 1526 ms".

This helps considerably in the comparison of systems and gives a more complete understanding of the system. The baseline performance helps to understand the best performance the system can give to an individual user.

VSImax indicates what the total user capacity is for the system. These two are not automatically connected and related: when a server with a very fast dual-core CPU running at 3.6 GHz is compared to a 10-core CPU running at 2.26 GHz, the dual-core machine will give an individual user better performance than the 10-core machine. This is indicated by the baseline VSI response time. The lower this score is, the better performance an individual user can expect. However, the server with the slower 10-core CPU will easily have a larger capacity than the faster dual-core system. This is indicated by VSImax v4.1.x, and the higher VSImax is, the larger the overall user capacity that can be expected. With Login VSI 4.1.x a new VSImax method is introduced: VSImax v4.1.

This methodology gives much better insight into system performance and scales to extremely large systems.

For the eight-node HX220c-M4S Rack Server HyperFlex cluster and Citrix XenDesktop 7.11 Hosted Shared Desktop use cases, the recommended maximum workload was determined based on both Login VSI Knowledge Worker workload end-user experience measures and the blade server operating parameters. This recommended maximum workload approach allows you to determine the server N+1 fault tolerance load the blade can successfully support in the event of a server outage for maintenance or upgrade.

Our recommendation is that the Login VSI Average Response and VSI Index Average should not exceed the baseline plus 2000 milliseconds to ensure that the end-user experience is outstanding. Additionally, during steady state, the processor utilization should average no more than 90-95%. This test studied a random-assignment, non-persistent desktop pool with 1000 PVS-provisioned Windows 10 VMs hosting 1000 user sessions on 8 x HX220c-M4S servers running the Login VSI Knowledge Worker workload in benchmark mode. Test result highlights include:
- 0.7 second baseline response time
- 1 second average response time with 1000 desktops running
- second maximum response time with 1000 desktops running
- Average CPU utilization of 80 percent during steady state
- Average of 300 GB of RAM used out of 512 GB available
- 400 MBps peak network utilization per host
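As a quick illustration, the highlights above can be checked against the two recommended-maximum-workload criteria (baseline plus 2000 ms, and steady-state CPU utilization below roughly 90-95%). The following sketch is illustrative only; the function and the 95% ceiling chosen here are ours, not Cisco or Login VSI tooling:

def meets_recommended_max(baseline_ms, avg_response_ms, steady_state_cpu_pct,
                          response_margin_ms=2000, cpu_ceiling_pct=95):
    # Average response within baseline + 2000 ms, and steady-state CPU
    # utilization no higher than the chosen ceiling.
    return (avg_response_ms <= baseline_ms + response_margin_ms
            and steady_state_cpu_pct <= cpu_ceiling_pct)

# Highlights from the 1000-session test above: 0.7 s baseline,
# 1 s average response, ~80% average CPU during steady state.
print(meets_recommended_max(700, 1000, 80))   # True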

This Cisco HyperFlex solution addresses the urgent needs of IT by delivering a platform that is cost effective and simple to deploy and manage. The architecture and approach used provide for a flexible and high-performance system with a familiar and consistent management model from Cisco. In addition, the solution offers numerous enterprise-class data management features to deliver the next-generation hyperconverged system. Delivering responsive, resilient, high-performance Citrix XenDesktop 7.11 provisioned Windows 10 virtual machines and Citrix XenApp 7.11 hosted shared server desktop sessions has many advantages for desktop virtualization administrators. Citrix administrators have guidance on deployment scenarios utilizing both Machine Creation Services and Provisioning Services. Virtual desktop end-user experience, as measured by the Login VSI tool in benchmark mode, is outstanding with Intel Broadwell E5-2600 v4 processors and Cisco 2400 MHz memory. In fact, we have set the industry standard in performance for desktop virtualization on a hyperconverged platform.

Vadim is a subject matter expert on Cisco HyperFlex, Cisco Unified Computing System, Cisco Nexus Switching, VMware vSphere, Citrix XenDesktop, Citrix XenApp, and Citrix PVS desktop and application virtualization.

Vadim is a member of Cisco's Computer Systems Product Group team.

For their support and contribution to the design, validation, and creation of this Cisco Validated Design, we would like to acknowledge the significant contribution and expertise of the following individuals:
- Mike Brennan, Product Manager, Desktop Virtualization and Graphics Solutions, Cisco Systems, Inc.
- Shyam Palakodety, Technical Marketing Engineer, Springpath, Inc.

The following charts delineate performance parameters for the 8-node cluster during a Login VSI 4.1 1000-user XenDesktop 7.11 MCS-provisioned pooled desktop benchmark test. The CPU and memory charts show all 8 nodes' performance on a single chart. Individual charts for each node are included for network throughput for clarity.

The performance charts indicate that the HyperFlex hybrid nodes running Data Platform version 1.8(1b) were operating consistently from node to node and well within normal operating parameters for hardware in this class. The data also supports the even distribution of the workload across all 8 servers.

I am facing an issue that seems to be a rare one. No one has asked this question in any of the blogs except for. One of my ESXi 5.1 hosts (not an actual server, but just a physical workstation acting as a server) was working until last week. I don't know what happened; suddenly it started showing the error shown below. I can't try installing a fresh ESXi on top of this, as there is so much data residing on it. Has anyone faced this kind of issue?

Any idea on how to solve this, besides the way mentioned? Basically, my ESXi installation became corrupted. I fixed it with the third option.

Options:
- Fix as per this article.
- Recovery Mode: press Shift+R at ESXi boot. If you have ever upgraded ESXi, you will be able to roll back to the previous version, and then upgrade again.
- Re-install ESXi. When re-installing, the installer will discover the older installation and ask whether to upgrade or re-install. If you perform a re-install, it will discover the existing VMFS partition and ask whether you want to preserve it; select Preserve, and this will keep all your VMs. Later, at power on, just re-register the VMs with the inventory: browse the datastore, browse the folder, select the VMX file, right-click, and select Register VM.
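If many VMs need to be re-registered after a re-install, the per-VM right-click can be scripted against the host API. The sketch below uses pyVmomi and is only an illustration of the RegisterVM_Task call; the host name, credentials, and datastore path are placeholders, and it assumes a standalone ESXi host (single datacenter and compute resource):

import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder connection details for the standalone ESXi host.
ctx = ssl._create_unverified_context()          # lab host with a self-signed certificate
si = SmartConnect(host="esxi-host.example.local",
                  user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    datacenter = content.rootFolder.childEntity[0]            # "ha-datacenter" on ESXi
    compute = datacenter.hostFolder.childEntity[0]            # the single compute resource
    host = compute.host[0]
    pool = compute.resourcePool

    # Datastore path to the VMX you would otherwise browse to and right-click.
    vmx_path = "[datastore1] MyVM/MyVM.vmx"                   # placeholder
    task = datacenter.vmFolder.RegisterVM_Task(
        path=vmx_path, asTemplate=False, pool=pool, host=host)
    print("Register task started:", task.info.key)
finally:
    Disconnect(si)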