Nutanix .NEXT Announcement – Acropolis and KVM

I’m happy to see that Nutanix has officially announced their upcoming strategic direction at the .NEXT conference. Using Nutanix Acropolis, KVM, and Prism – data center administrators now have the ability to truly make infrastructure invisible.

What Is It?

To read more about the specific details, take a look at Andre Leibovici’s post here, then come back. It has great pictures and lists of features; you’ll like it.

Key advantages for me as a UC administrator:

  1. Linux KVM as a fully featured and consumer-friendly hypervisor
  2. Nutanix Prism and Acropolis presenting a seamless management interface for VMs regardless of the underlying hypervisor
  3. Management interfaces designed with Nutanix web-scale principles such as distributed-everything, shared-nothing architecture in mind
  4. Simple migration of existing VMs into a Nutanix XCP (Xtreme Computing Platform) environment

I see an exciting future for enterprises that want to virtualize but don’t want to get locked into a particular hypervisor. Real choice is now available to put workloads on the hypervisor that makes the most sense.

Combined with the ability to scale compute and storage effortlessly, administrators can stop worrying about infrastructure and start planning for what truly matters: Unified Communications applications 😉 I might be a little biased there, but it’s applications that drive business productivity, not compute and storage infrastructure.

Your compute, storage, and now even your hypervisor can be seen as a commodity that’s just available to applications.

What Does It Mean For My UC?

Test and development environments can be virtualized and managed without paying for hypervisor licenses.

Production environments that support Linux KVM can be migrated with a few clicks.

Your VM management infrastructure becomes more resilient and reliable, with one-click upgrades possible for BIOS, firmware, hypervisor, and storage software across the entire infrastructure stack.

Less time spent managing infrastructure and more time spent working on UC.

But I Can’t Use KVM or Don’t Want To

Nutanix still supports VMware vSphere and Microsoft Hyper-V, and the same flexible storage and compute layer is still available. Infrastructure is still invisible, but in these cases VM management is performed through the corresponding VMware or Microsoft tools.

Some UC vendors such as Microsoft already support multiple hypervisors. MS Lync (Skype for Business) is supported on any hypervisor listed in the SVVP program, for example. In the past, Avaya supported the Aura “System Platform”, which was built on XenServer.

I expect the UC marketplace to open up and support alternative hypervisors in the future. Customer demand can drive vendor behavior, like it did with Cisco’s support for specs-based virtualization of UC.

What NEXT?

Give Nutanix a try for your environment with the free Nutanix Community Edition. See if you can save on test or development VM environments at first. Think about what happens if you can truly separate your applications from the infrastructure stack. Where is the best place for those apps to run? If you already have a Nutanix Environment, then investigate standing up a cluster with Acropolis and KVM.

If you’re at .NEXT, stop by the Avaya booth and talk with Steven Given about the work already done to verify interoperability between Avaya’s Software Defined Datacenter and Nutanix Software Defined Storage.

Vienna Avaya Technology Forum

Part of my role on the Nutanix Performance and Solutions team is to “evangelize” the technology and tell the world about all the great work we’re doing writing documents, testing products and solutions, and assisting with customer engagements. The physical manifestation of that is me sitting in an airport typing up this blog post, on my way to the Avaya Technology Forum in Vienna, Austria.


Nutanix will have a booth and I’ll be doing demos of the product interface and reaching out to Avaya communications and networking customers. I’ll be joined by members of the local Nutanix team to help share the duties. I’m looking forward to meeting more of the international Nutanix team!

The Nutanix Virtual Computing Platform is a great fit for Avaya customers looking to virtualize their communications infrastructure running Avaya Aura, or IP Office. Nutanix also simplifies the compute and storage side of the data center for those leveraging Avaya Fabric Connect to simplify the network stack.

Imagine being able to scale your compute and storage seamlessly with auto discovery. Imagine one-click upgrades of the entire compute and storage ecosystem (INCLUDING THE HYPERVISOR!). More importantly, imagine all the time you’ll have to work on the applications that really matter.

IP Office Reference Architecture

Avaya Aura Reference Architecture

Stop by the Nutanix booth in the Solutions Zone at the Hilton Vienna on May 5th – 8th if you’re in the area!

Nutanix Avaya Aura Reference Architecture

I’m happy to announce that the Reference Architecture for Avaya Aura on Nutanix has been completed!

Aura is a Unified Communications platform with a lot of different components. All of these pieces can now be deployed in VMware vSphere thanks to the Avaya Aura Virtualized Environment and Customer Experience Virtualized Environment initiatives at Avaya. These projects bring together different Aura apps and produce virtualization guides and OVA templates for each product.

The Nutanix Reference Architecture above goes through the most common Virtualized Environment components and breaks down the rules, requirements, and best practices for running on Nutanix.

I’m happy that this document serves as an excellent reference for the administrator in charge of virtualizing Aura. Right now the information in these Avaya docs is spread all over the place. Having a unifying reference source is pretty helpful to any Nutanix administrator sitting there thinking “How do I virtualize this again?” and even helpful to Avaya admins thinking “Where is that doc?”

Aura Components

The core components I address are as follows:

Component                  Purpose
Call Control               Aura Session Manager and Communication Manager
Voice Mail and Messaging   Aura Messaging
Presence                   Aura Presence Services
Configuration Management   System Manager
3rd Party Integration      Application Enablement Services

There are many additional components not covered directly in the guide, but I’ve included links to these where appropriate.

Planning and Design

Much like for other applications on Nutanix, Aura designers and architects need to answer these questions about each Aura VM:

  • How many vCPUs does this VM use and reserve (core count / MHz)?
  • How much RAM does this VM use and reserve (GB)?
  • How much storage space does this VM use (GB)?
  • What sort of IOPS are generated / required during peak hours?
  • Are there any other special requirements?

The Nutanix Avaya Aura Reference Architecture doc attempts to address all of these questions.

Here’s an example of the information for Avaya Aura Communication Manager Duplex:


Put this individual machine information together with a sample layout. Your layout may vary based on the Aura design. Work closely with the Avaya Aura design team to figure out what components are required and what size those components need to be.


Once we know how many VMs and what their specs are, we can figure out the resource utilization of the end system:


With all this information together, the right Nutanix virtualization platform can be chosen: a system with the right CPU core count, the right amount of RAM, and the storage capacity and performance to provide an exceptional end-user experience.
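The tally above is simple enough to sketch in a few lines of Python. The per-VM figures below are illustrative placeholders, not official Avaya sizing numbers; plug in the values from the reference architecture instead:

```python
# Hedged sketch of the planning worksheet: collect the answers to the
# questions above for each VM, then sum them to see what the cluster
# must provide. VM specs here are placeholders, not Avaya figures.
aura_vms = [
    # (name, vcpus, ram_gb, disk_gb, peak_iops)
    ("Session Manager",       4,  8,  80, 100),
    ("Communication Manager", 4,  8,  60, 100),
    ("System Manager",        4, 12, 120, 150),
]

totals = {
    "vCPUs":   sum(v[1] for v in aura_vms),
    "RAM GB":  sum(v[2] for v in aura_vms),
    "Disk GB": sum(v[3] for v in aura_vms),
    "IOPS":    sum(v[4] for v in aura_vms),
}
print(totals)   # compare against the candidate node's available resources
```

The same worksheet scales to any Aura design: add a row per VM and recompute.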

Your Aura design will certainly differ from the one listed above, but the processes laid out in the guide can help plan for a system of any size with any number of components.

If you have questions feel free to leave a comment, or head over to the forums and visit the Workloads & Applications > Unified Communications section.


Survivable UC – Avaya Aura and Nutanix Data Protection

I wanted to share a bit of cool “value add” today, as my sales and marketing guys would call it. This is just one of the things a Nutanix deployment can bring to the table for Avaya Aura, and for UC in general.

Nutanix offers Protection Domains and Metro Availability, which have been covered in great detail by other Nutanix bloggers. Check out detailed articles here by Andre Leibovici, and here by Magnus Andersson for in-depth info and configuration of Metro Availability.

Non-redundant Applications

In an Avaya Aura environment, most machines are protected from failure at the application level: a hot standby VM runs and takes over operation if the primary machine fails, as with Session Manager and Communication Manager. In the following example, however, we see that System Manager, AES, and a number of other services don’t have a hot standby. This might be because a standby is too expensive in resources or licensing, or because the application’s demands don’t call for it.


If multiple Nutanix clusters are in place, we actually have two ways to protect these VMs at the Nutanix level.

Nutanix Protection Domains

First, let’s look at Protection Domains. With a Protection Domain, we configure an NDFS (Nutanix Distributed Filesystem) level snapshot that happens at a configurable interval. This snapshot is intelligently (with deduplication) replicated to another Nutanix cluster. It differs from a vSphere snapshot in that the virtual machine has no knowledge a snapshot took place and no VMDK delta chains are created. None of the standard warnings and drawbacks of running with snapshots apply here. This is a Nutanix metadata operation that can happen almost instantly.

We pick individual VMs to be part of the Protection Domain and replicate these to one or more sites.

In the event of a failure of a site or cluster, the VM can be restored at another site, because all of the files that make up the Virtual Machine (excluding memory) are preserved on the second Nutanix cluster.



Nutanix Metro Availability

But I hear you saying, “Jason that’s great, but a snapshot taken at intervals is too slow. I can’t possibly miss any transactions. My UC servers are the most important thing in my Data Center. I need my replication interval to be ZERO.” This is where Metro Availability comes in.

Metro Availability is a synchronous write operation that happens between two Nutanix clusters. The requirements are:

  1. A new Nutanix container must be created for the Metro Availability protected machines.
  2. RTT latency between clusters must be less than 5 milliseconds (about 400 kilometers)

Since this write is synchronous, all disk write activity on a Metro Availability protected VM must be completed on both the local and the remote cluster before it’s acknowledged. This means all data writes are guaranteed to be protected in real time. The real-world limitation here is that every bit of distance between clusters adds latency to writes. If your application isn’t write-heavy you may be able to hit the max RTT limit without noticing any issues. If your application does nothing but write constantly to disk, 400km may need to be re-evaluated. Most UC machines are generally not disk intensive though. Lucky you!
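A quick back-of-the-envelope check on where the ~400 km figure comes from. Light in fiber covers roughly 200,000 km/s, so a 5 ms round trip allows about 500 km of one-way propagation; switching gear and non-straight fiber routes consume the rest of the budget. This is my arithmetic, not official Nutanix guidance:

```python
# Light propagates through fiber at roughly 200,000 km/s (about 2/3 c).
C_FIBER_KM_PER_S = 200_000
RTT_BUDGET_S = 0.005              # Metro Availability's 5 ms RTT limit

one_way_s = RTT_BUDGET_S / 2      # the budget covers there AND back
max_fiber_km = C_FIBER_KM_PER_S * one_way_s
print(max_fiber_km)               # 500.0 km of pure propagation delay;
                                  # equipment latency and indirect fiber
                                  # paths shrink the practical figure to ~400
```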


In the previous image we have two Nutanix clusters separated by a metro ethernet link. The standalone applications like System Manager, Utility Services, Web License Manager, and Virtual Application Manager are being protected with Metro Availability.

In the event of a Data Center 1 failure, all of the redundant applications will already be running in Data Center 2. The administrator can then start the non-redundant VMs, either manually or through a detection script, using the synchronous copies residing in Data Center 2.


Avaya Aura Applications are highly resilient and often provide the ability for multiple copies of each app to run simultaneously in different locations, but not all Aura apps work this way. With Nutanix and virtualization, administrators have even more flexibility to protect the non-redundant Aura apps using Protection Domains and Metro Availability.

These features present a consumer-friendly GUI for ease of operation, and also expose APIs so the whole process can be automated into an orchestration suite. These Nutanix features can provide peace of mind and real operational survivability on what would otherwise be very bad days for UC admins. Nutanix allows you to spend more time delivering service and less time scrambling to recover.



Virtualized Avaya Aura on Nutanix – In Progress

Explaining the Nutanix Distributed Filesystem

The Avaya Technology Forum in Orlando was a great success! Thanks to everyone who attended and showed interest in Nutanix by stopping at the booth. I met a lot of interested potential customers and partners and was also able to learn more about what people are virtualizing these days. There is nothing quite like asking people directly “What virtualization projects do you have coming up?”


After talking about Nutanix and what I do on the Solutions team, some key themes I heard repeated by attendees were:

“Wow, that’s really cool technology!”


“When will you have a document for Avaya Aura?”

The response to the first one is easy. Yeah, I think it’s really cool technology too. Nutanix will allow you to compress a traditional three tier architecture into just a few rack units. It gives you the benefits of locally attached fast flash storage AND the benefits of a shared storage pool. Customers can use this to save money, improve performance, and focus on their applications instead of their infrastructure. After you compress you also have the ability to scale up the number of nodes in the Nutanix cluster with no hard limit in place. Performance grows directly with cluster growth.

The second question is actually why I’m writing this blog today. When will the reference architecture for Avaya Aura on Nutanix be completed?

I’m in the research phase now because Avaya Aura is a monster of an application. It’s actually a set of dozens of different systems that all work together. Each system will have its own requirements for virtualization. Part of getting a reference architecture or best practices guide right is figuring out what each individual component requires to succeed.

Let’s give an example by looking at the Avaya Aura Virtual Environment overview doc. This is the list of OVAs that are available:

Avaya Aura® applications for VMware
• Avaya Aura® Communication Manager
• Avaya Aura® Session Manager
• Avaya Aura® System Manager
• Avaya Aura® Presence Services
• Avaya Aura® Application Enablement Services
• Avaya Aura® Agile Communication Environment (ACE)
• Avaya Aura® Messaging
• Communication Manager Messaging
• Avaya Virtual Application Manager
• Avaya Aura® Utility Services
• WebLM
• Secure Access Link
• Session Border Controller for Enterprise
• Avaya Aura Conferencing

Avaya Call Center on VMware (OVA files)
• Avaya Aura® Call Center Elite
• Elite Multichannel Feature Pack
• Avaya Aura® Experience Portal
• Call Management System

Each of the applications listed above is a separate OVA file available from Avaya. Each application has its own sizing, configuration, and redundancy guides. To deploy an Aura solution you can use some, or all of these components.

An Aura document on Nutanix is in the works, but it’s going to be a lot of WORK. I plan on focusing on just the core components at first and a few sample deployments to cover the majority of cases.

I’ve read every single Avaya Virtual Environment document and now just need to compile this information into an easy-to-digest, Nutanix-centric format. In the meantime, if you have questions about Avaya Aura on Nutanix feel free to reach out to me @bbbburns

The great thing so far is that I don’t see any potential roadblocks to deploying Aura on Nutanix. In fact, at the ATF we performed a demo Aura deployment on a single Nutanix 3460 block (4 nodes). We demonstrated Nutanix node failure and Aura survivability of the active calls and video conferences.

Part of the challenge of deploying any virtual application, especially real time applications, is that low-latency is KING. This was repeated over and over by all the Avaya Aura experts at the conference. Aura doesn’t use storage very heavily, but since it’s a real-time app the performance better be there when the app asks for it. All the war stories around virtualizing Aura dealt with oversubscribed hosts, oversubscribed storage, or contention for resources.

Deploying Aura on Nutanix is going to eliminate these concerns! Aura apps will ALWAYS have fast storage access, and the architecture is designed to prevent resource contention. I’m excited to work on projects like this because I know customers are going to save HUGE amounts of money while also gaining performance and reliability.

We really will change your approach to the data center.

Nutanix and The 2015 Avaya Technology Forum

I’m at the 2015 Avaya Technology Forum with Nutanix to talk about Avaya Unified Communications on the Nutanix platform. Stop by the Nutanix and CRI booth to see the Nutanix gear in action. Nutanix 3460 and 1450 nodes will be powering all the demos you see for Avaya Aura and other applications!

I’ve been testing with the helpful engineers at Avaya to do two important things:

  1. Ensure Avaya Unified Communications applications run flawlessly on Nutanix.
  2. Test the Nutanix Distributed File System (NDFS) performance and operation on top of Avaya Fabric Connect.

The result of all this work is being presented here at the Avaya Technology Forum in sunny Orlando. The Avaya colleagues I’ve been working with are from the Boston area (and Canada), so I imagine coming down here to find 81 degrees and sunshine is a welcome change!

The first item I want to bring to your attention is the Nutanix Avaya Unified Communications Solution Brief. This is a high-level piece to show the overall benefits of combining Nutanix and Avaya Unified Communications. Nutanix makes the data center admin’s life easier by eliminating silos between UC and other data center apps, bringing scalable compute and storage to the masses, cutting down on management time, providing blindingly fast I/O performance, and tying it all together with high availability baked in.


Whether you’re running Avaya IP Office, a full blown contact center with Avaya Aura, or something in between, the Nutanix platform brings web-scale technologies to these virtual applications. To top it off – Avaya Fabric Connect technologies allow the data center admin to provision highly resilient, low-latency, high-throughput network backbones without the drawbacks of traditional spanning tree architectures.

Nutanix performs hyper-convergence at the storage and compute layer using a software defined Controller Virtual Machine. Find out more here at the Nutanix Bible to see how Nutanix ties together the disks of many nodes to form a resilient, distributed, high-performance compute and storage cluster.

Avaya brings Software Defined Networking and Virtualization with Avaya Fabric Connect.

These two technologies together save time and money in the datacenter, while also providing blazing performance.


Check back for updates during the conference. I’ll be sharing a Reference Architecture for Avaya IP Office Server Edition running on Nutanix. In the future you’ll also see a Reference Architecture for Avaya Aura on Nutanix.

Find me at the conference by tweeting @bbbburns or stopping by the Nutanix and CRI booth.

Nutanix and UC – Part 4: VM Placement and System Sizing

In the last blog post I talked about sizing individual VMs. Today we’ll look at placing UC VMs onto a Nutanix node (an ESXi host) and coming up with overall system sizing.

First I’d like to announce the publication of my document for Virtualizing Cisco UC on Nutanix. Readers of the blog will recognize the content and the diagrams 😉 I’ve combined all of this information for publication and delivery to customers and partners planning to deploy Cisco Unified Communications.

Next, let’s look at placing Cisco UC VMs to size a Nutanix system. Once you have a count of all the VMs needed and their individual sizes you can spread them around on paper to see how much hardware rack and stack is in your future. With Nutanix you’ll have a lot less work ahead of you than with any other solution! Use all the methods documented in the previous posts to size the individual VMs.

There are a few options for VM placement. I used Omnigraffle on my Mac to create diagrams like the one you see here, but Visio or MS Excel will work just as well. The “Hypervisor CPU Cores” represent the space available on a single Nutanix node. I didn’t specify ESXi, Hyper-V, or KVM directly because Nutanix can support all three hypervisors.

In a Nutanix block you can have up to 4 nodes in a 2 RU device. Below we see a single 16-core node. New Nutanix models will be released in the future with different core counts, roughly keeping pace with Intel’s new hardware releases. Size your core count based on what’s available on the Nutanix hardware platform page.

*EDIT on 2015-10-23* Nutanix switched to a “Configure To Order” model and now many more processor core options are available, from 2×8 core all the way up to 2×18 core. This provides a lot of flexibility for sizing UC solutions.
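The “spread them around on paper” exercise can also be sketched as a simple first-fit pass over vCPU counts. This is a toy sketch with illustrative VM sizes: it ignores RAM and IOPS, skips the anti-affinity you’d want between primary and backup servers, and counts the CVM at its full provisioned size, as discussed below:

```python
NODE_CORES = 16     # cores per node in this example
CVM_VCPUS = 8       # reserve the CVM's full provisioned footprint

def place(vms, node_cores=NODE_CORES, cvm=CVM_VCPUS):
    """First-fit placement: return a list of nodes, each a list of
    (name, vcpus) tuples. Toy model -- vCPU count only."""
    nodes = []
    for name, vcpus in vms:
        for node in nodes:
            used = cvm + sum(v for _, v in node)
            if used + vcpus <= node_cores:
                node.append((name, vcpus))
                break
        else:
            nodes.append([(name, vcpus)])   # start a new node
    return nodes

# Illustrative VM list, not official Cisco OVA sizes
layout = place([("CUCM-pub", 2), ("CUCM-sub", 2), ("CUC-1", 2), ("IMP-1", 1)])
print(len(layout), "node(s) needed")
```

A real layout tool would also pin primary/backup pairs to different nodes before filling in the rest.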

Cisco UC VM Layout

Take some space and reserve it for the Nutanix Controller Virtual Machine. Exactly how much to reserve depends on the IO load expected. The CVM will reserve four vCPUs at a minimum. Looking at the CVM properties in vSphere, you can see it actually has eight vCPUs provisioned, which is why the shaded area exists. The four vCPUs that exist in a limbo state (provisioned but not reserved) can be used by any application that doesn’t mind CPU oversubscription.

Unfortunately Cisco UC and most other UC applications don’t allow oversubscription so we have to just chop off eight vCPUs right at the start to abide by Cisco’s requirements. Don’t worry though, four of these vCPUs are not lost entirely. Make good use of them by putting a DHCP server there, or DNS, or a Domain Controller. Put a Linux SFTP backup server there if you like for handling incoming application backups from Cisco UC. Mine bitcoins. These cores are yours, you have options!

If you know that a Nutanix node is going to push SERIOUS IO traffic because you’ve read the IOPS requirements and see that you’ll need many many thousands of IOPS for the VMs, bump up the number of vCPUs that you leave for the Nutanix CVM. Under heavy load, the multi-threaded process will use all available vCPUs to handle IO requests. Under normal load the four reserved vCPUs will be plenty.

If you’re unsure of the IO load a machine will generate, fear not! The Nutanix Prism interface shows detailed stats per virtual machine. You can get an idea of what a VM is doing just by watching the Prism page for that VM. Below we see a VM that exhibits a spike in IOPS over a period of time.

Prism VM Stats


Along with Excel, Omnigraffle, and Visio, tools exist on the Cisco website to do VM placement. I like to use the UC Placement Tool just because it’s simple. A custom CVM image can be created that uses eight vCPUs (or four) and then the Cisco UC images can be selected from existing templates.

Cisco VM Placement Tool

This tool is extremely helpful because the sizes of the various Cisco UC components are embedded in the templates, as shown above. The templates don’t yet include IOPS figures, though; it’s left as an exercise for the reader to fill in the expected IOPS of each virtual machine. This info can be cobbled together from the various Cisco wiki pages or from information gathered via the Nutanix Prism page.

Nutanix also makes a sizing tool that can be used to size a Nutanix cluster once the specs of the virtual workload are known.  Check out this video to get an idea of how the Nutanix Sizer works:

When sizing UC servers, we’ll use the “Server Virtualization” workload type. This means for each VM type (CUCM, CUC, CER) we’ll specify the number of vCPUs, amount of RAM, size of disk, and expected IOPS. Once this information is entered, a Nutanix system (along with a number of nodes) will be chosen. This can be checked against the sizing calculations above to ensure the right size system is selected. Here we size 11 CUCM virtual machines. Each VM has 2 vCPUs, 6 GB RAM, 110 GB storage, and an average of 40 IOPS (taken from the Cisco DocWiki).

CUCM Custom Workload
Creating a custom workload for CUCM
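The cluster-level numbers the Sizer derives from these inputs are straightforward multiplication, which is easy to double-check by hand:

```python
# 11 identical CUCM VMs, per-VM specs from the example above
VM_COUNT, VCPUS, RAM_GB, DISK_GB, IOPS = 11, 2, 6, 110, 40

totals = {
    "vCPUs":   VM_COUNT * VCPUS,     # 22
    "RAM GB":  VM_COUNT * RAM_GB,    # 66
    "Disk GB": VM_COUNT * DISK_GB,   # 1210
    "IOPS":    VM_COUNT * IOPS,      # 440
}
print(totals)
```

If the Sizer’s recommendation disagrees wildly with these totals, recheck the inputs.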

Cisco UC is a unique case because the processors in the Nutanix 1000 series do not currently meet the hardware processor requirements that Cisco specifies. This means the 1000 series nodes aren’t appropriate for Cisco UC, but all other node types are. We’re going to maximize for CPU cores because of Cisco’s 1:1 core:vCPU mapping. With most Cisco UC virtual machines we won’t run into any storage size or storage performance limitations on the Nutanix system. The primary driver of sizing will be the number of free cores!

Maximizing for available storage space or other factors due to other workloads (like MS SQL or Exchange for instance) may lead to selection of a different node type. Nodes in a cluster can be many different types and can be mixed together in the same cluster. A cluster will often contain several storage heavy nodes for VMs with large storage requirements.

Summary and Next Steps

We’ve covered an overview of Cisco UC and Nutanix, and how to size individual UC VMs and place them on a Nutanix system. With this information it’s possible to design a complete Cisco UC solution powered by the Nutanix platform.

Assets from both Cisco and Nutanix can be leveraged to build a completely supported UC solution that takes up less rack space, power, and cooling. It’ll be simpler to set up because there are fewer components. It’ll be simpler to manage for the same reason AND because of the slick web front end to the combined compute and storage components. It’ll be more secure because federal STIG requirements are built into the product as easy-to-manage config settings (running a security script). One-click upgrades for the entire compute and storage infrastructure mean admins will be spending more time on the slopes or drinking beer and less time in weekend change windows. That’s something I can get behind!

To learn more about Nutanix I recommend reading through the Nutanix Bible by Steve Poitras. It’s a wealth of great information on how the technology under the hood works. The nu.shool YouTube channel also has some excellent white board videos that I highly recommend.

Feel free to reach out to me on Twitter @bbbburns for follow up, or comment here on the blog.



Nutanix and UC – Part 3: Cisco UC on Nutanix

In the previous posts we covered an Introduction to Cisco UC and Nutanix as well as Cisco’s requirements for UC virtualization. To quickly summarize… Nutanix is a virtualization platform that provides compute and storage in a way that is fault tolerant and scalable. Cisco UC provides a VMware centric virtualized VoIP collaboration suite that allows clients on many devices to communicate. Cisco has many requirements before their UC suite can be deployed in a virtual environment and the Nutanix platform is a great way to satisfy these requirements.

In this post I’m going to cover the actual sizing and implementation details needed to design and deploy a real world Cisco UC system. This should help tie all the previous information together.

Cisco UC VM Sizing

Cisco UC VMs are deployed in a two part process. The first part is a downloaded OVA template and the second part is an installation ISO. The OVA determines the properties of the VM such as number of vCPUs, amount of RAM, and number and size of disks and creates an empty VM. The installation ISO then copies the relevant UC software into the newly created blank VM.

There are two ways to size Cisco UC VMs:

  1. Wing it from experience
  2. Use the Cisco Collaboration Sizing Tool

I really like “Option 1 – Wing it from experience” since the sizing calculator is pretty complicated and typically provides output that I could have predicted based on experience. “Option 2 – Collaboration Sizing Tool” is a requirement whenever you’re worried about load and need to be sure a design can meet customer requirements. Unfortunately, the Sizing Tool can only be used by registered Cisco partners, so for this blog post we’re just going to treat it as a black box.

Determine the following in your environment:

  • Number of Phones
  • Number of Lines Per Phone
  • Number of Busy Hour calls per line
  • Number of voicemail boxes
  • Number of Jabber IM clients
  • Number of Voice Gateways (SIP, MGCP, or H.323)
  • Redundancy Strategy (where is your failover, what does it look like?)

Put this information into the Collaboration Sizing Tool and BEHOLD the magic.

Let’s take an example where we have 1,000 users and we want 1:1 call processing redundancy. This means we need capacity for 1,000 phones on one CUCM call processor, and 1,000 phones on the failover system. We would also assume each user has 1 voicemail box, and one Jabber client.

This increases our total to 2,000 devices (1 phone and 1 Jabber per user) and 1,000 voicemail boxes.
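The device math from this example, spelled out. It’s trivial arithmetic, but it’s exactly where sizing mistakes usually creep in:

```python
users = 1_000
devices_per_user = 2        # 1 desk phone + 1 Jabber client each
voicemail_per_user = 1

total_devices = users * devices_per_user        # 2,000 registered devices
total_vm_boxes = users * voicemail_per_user     # 1,000 voicemail boxes

# 1:1 redundancy: the failover call processor must also be sized
# for the full device count, not half of it.
failover_capacity_needed = total_devices
print(total_devices, total_vm_boxes, failover_capacity_needed)
```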

Let’s assume that experience, the Cisco Sizing Tool, or our highly paid and trusted consultant tells us we need a certain number of VMs of a certain size to deploy this environment. The details are all Cisco UC specific and not really Nutanix specific so I’ll gloss over how we get to them.

We need a table with “just the facts” about our new VM environment:

Product   VM Count   vCPUs   RAM    HDD     OVA
CUCM      2          1       4GB    80GB    2500 user
IM&P      2          1       2GB    80GB    1000 user
CUC       2          2       4GB    160GB   1000 user
CER       2          1       4GB    80GB    20000 user
PLM       1          1       4GB    50GB    NA

The first column tells us the Cisco UC application. The second column tells us how many VMs of that application are needed. The rest of the columns are the details for each individual instance of a VM.

The DocWiki page referenced in the last article has details of all OVAs for all UC products. In the above example we are using a 2,500 user CUCM OVA. If you wanted to do a 10,000 user OVA file for each CUCM VM the stats can easily be found:



Visit the DocWiki link above for all stats on all products.

Reserving Space for Nutanix CVM

The Nutanix CVM runs on every hypervisor host in the cluster so it can present a virtual storage layer directly to the hypervisor using local and remote disks. By default it will use the following resources:

  • 8 vCPU (only 4 reserved)
    • Number of vCPUs actually used depends on system load
  • 16GB RAM
    • Increases if compression or deduplication are in use
  • Disk

In a node where we have 16 cores available this means we’d have 12 cores (16 – reserved 4) for all guest VMs such as Cisco UC. A cautious reading of Cisco’s requirements though would instruct us to be more careful with the math.

The Cisco docwiki page says “No CPU oversubscription for UC VMs” which means in theory we could be in an oversubscribed state if we provision the following in a 16 core node:

CVM x 4 vCPUs, UC VMs x 12 vCPUs = 16 total

It’s safer to provision:

CVM x 8 vCPUs, UC VMs x 8 vCPUs = 16 total

Even though it’s unlikely the CVM will ever use all 8 vCPUs.
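The vCPU budget above is easy to sanity-check in code, following the conservative reading that counts the CVM at its full provisioned eight vCPUs:

```python
NODE_CORES = 16          # cores on the example node
CVM_PROVISIONED = 8      # count the CVM's full footprint, not just the
                         # 4 vCPUs it reserves

uc_vcpus_available = NODE_CORES - CVM_PROVISIONED   # 8 left for UC VMs

def is_oversubscribed(uc_vcpus, node_cores=NODE_CORES, cvm=CVM_PROVISIONED):
    """True if a layout could violate 'no CPU oversubscription for UC VMs'
    when the CVM ramps up to its provisioned size."""
    return uc_vcpus + cvm > node_cores

print(is_oversubscribed(8))    # False -- the safer layout from the text
print(is_oversubscribed(12))   # True  -- risky if the CVM gets busy
```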

Placing Cisco UC VMs

That’s a lot of text. Let’s look at a picture of how that placement works on a single node.

I’ve taken a single Nutanix node and reserved vCPU slots (on paper) for the VMs I want to run. Repeat this process for additional Nutanix nodes until all of your UC VMs have a place to live. Depending on the Nutanix system used you may have a different number of cores available. Consult the Nutanix hardware page for details about all of the available platforms. As new processors are released this page is sure to be updated.

*EDIT on 2015-10-23* Nutanix switched to a “Configure To Order” model and now many more processor core options are available, from 2×8 core all the way up to 2×18 core. This provides a lot of flexibility for sizing UC solutions.

The shaded section of the provisioned, but not reserved, CVM vCPU allocation is critical to sizing and VM placement: these are 4 vCPUs that will go unused unless the system is running at peak load. UC VMs are typically not IOPS intensive, so the CVM is unlikely to hit that peak; I would recommend running some other non-Cisco workload in this free space. This allows you to get full efficiency from the Nutanix node while still following Cisco guidance.

Follow best practices by spreading important functions across multiple separate nodes in the cluster. This applies to ALL virtualization of UC. If one piece of hardware runs our primary server for 1,000 users, it’s probably a good idea that the backup run on a DIFFERENT piece of hardware. In this case, another Nutanix node is how we’d accomplish that.

Remember that at least 3 Nutanix nodes must be used to form a cluster. In the diagram above I’ve shown just a single node, but we’ll have at least two more nodes to place any other VMs we like following all the same rules. In a large Nutanix environment a cluster could contain MANY more nodes.

Installation Considerations

After the UC VM OVAs are deployed the next step is to actually perform the application installation. Without installation the VM is just an empty shell waiting for data to be written to the disk.

I’ll use an example CUCM install because it’s a good proxy for other UC applications.


The first Nutanix node has two CUCM servers and the second Nutanix node also has two CUCM servers. The installation ISO has to be read somehow by the virtual machine as it’s booted. In VMware we have a number of options available.

  • Read from a drive on the machine where vSphere Client is running
  • Read from a drive inserted into the ESXi Host
  • Read from an ISO located on a Datastore


When we select Datastore we can leverage a speedup feature of the Nutanix NDFS. If we put the CUCM ISO in the same NDFS container where the VM disk resides we can use Shadow Clones to make sure that the ISO is only ever read over the network once per Nutanix node.

In our previous example with two CUCM servers, the first CUCM server on the second node would be installed from Datastore. When the second CUCM installation was started on that same second node, it would read the ISO file from the local NDFS shadow clone copy.

Rinse and Repeat

For all of the UC VMs and all Nutanix nodes the same process would be followed:

  1. Figure out how many and what size UC VMs are needed.
  2. Plan the placement of UC VMs on Nutanix nodes by counting cores and staggering important machines.
  3. Deploy the OVA templates according to your plan.
  4. Install the VMs from ISO making sure to use the Datastore option in vSphere.
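
Steps 1 and 2 above can be sketched as a tiny placement planner. This is purely illustrative (not an official Nutanix or Cisco tool); the VM names, vCPU counts, and the greedy first-fit strategy are my own assumptions, with an “affinity group” standing in for the primary/backup staggering rule.

```python
# Illustrative placement sketch: fit UC VMs onto nodes against a per-node
# vCPU budget, keeping VMs in the same affinity group (e.g. a primary and
# its backup) on DIFFERENT nodes.
def place_vms(vms, node_budget, node_count):
    """vms: list of (name, vcpus, group). Returns {node_index: [names]}."""
    nodes = {i: {"free": node_budget, "names": [], "groups": set()}
             for i in range(node_count)}
    for name, vcpus, group in vms:
        for i, node in nodes.items():
            # First node with enough free vCPUs and no group conflict wins.
            if node["free"] >= vcpus and group not in node["groups"]:
                node["free"] -= vcpus
                node["names"].append(name)
                node["groups"].add(group)
                break
        else:
            raise ValueError(f"no node can host {name}")
    return {i: n["names"] for i, n in nodes.items()}

# Hypothetical example: a CUCM pair and a CUC pair on a 3-node cluster
# with 8 vCPUs free per node after the CVM budget.
plan = place_vms(
    [("cucm-pub", 2, "cucm-a"), ("cucm-sub1", 2, "cucm-a"),
     ("cuc-1", 4, "cuc"), ("cuc-2", 4, "cuc")],
    node_budget=8, node_count=3)
print(plan)
```

Each redundant pair lands on two different nodes, which is exactly the staggering described above.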

In our next blog post we’ll look at tools that can make VM placement a bit easier and help size Nutanix for different workloads.

Thanks for following along! Your comments are always welcome.

Nutanix and UC – Part 2: Cisco Virtualization Requirements

In the last post I covered an Introduction to Cisco UC and Nutanix. In this post I’ll cover UC performance and virtualization requirements.

A scary part of virtualizing Cisco Unified Communications is worrying about being fully supported by Cisco TAC if a non-standard deployment path is chosen. This is due to a long history of strict hardware requirements around UC. When Cisco UC was first released in its current Linux-based incarnation around 2006 as version 5.0, it could only be installed on certain HP and IBM server models. Cisco was VERY strict about hardware revisions of these servers, and a software-to-hardware matrix was made available.

This led to the creation of a “Specifications” table, listing exact processors, disks, and RAM for each supported server model. When you hear “Specifications Based” or “Spec Based”, it all started here.

Customers were welcome to purchase a server directly from HP or IBM that used all of the same hardware components, but the Cisco MCS server (which was just a rebranded HP or IBM server) was recommended. If it was discovered that a customer had deviated from the hardware specs listed in the matrix, they could be in an unsupported configuration. If that unsupported configuration was found to be causing a particular problem, the customer might have had to change out the server hardware before further support could be obtained. Calls to technical support were often stressful and harrowing when it turned out the hardware purchase hadn’t followed the Spec-based matrix exactly.

From a support perspective this makes sense. UC is a critical real-time application and non-standard hardware with less than excellent performance characteristics could cause all sorts of hard to diagnose and hard to troubleshoot problems. Working in support I saw my fair share of these cases where failing disks or problem hardware caused periodic interruptions that only revealed themselves through odd and intermittent symptoms.

UC Performance Needs

Let’s take a break from history to look at why performance is so critical to UC.

Figure 1: Signal vs Media


Figure 1 shows where the CUCM Virtual Machine fits into the call path. Each IP Phone will have a TCP session open at all times for call control, sometimes called signaling. Typically in a Cisco environment this is the SCCP protocol, but things are moving to the SIP protocol as an open standard. All the examples below assume SCCP is in use.

The SCCP call control link is used when one phone wants to initiate a call to another phone. Once a call is initiated, a temporary media link carrying Real-time Transport Protocol (RTP) audio/video traffic is established directly between the phones. The following process is used to make a phone call.

Basic Phone Call Process

  1. User goes off hook by lifting handset, pressing speaker, or using headset
  2. User receives dial-tone
  3. User dials the digits of the desired destination and the phone prepares a media channel
  4. CUCM performs destination lookup as each digit is received
  5. CUCM sends back confirmation to calling user that the lookup is proceeding
  6. CUCM sends “New Call” to destination IP Phone
  7. Destination phone responds to CUCM that it is ringing
  8. CUCM sends back confirmation to calling phone that the destination is ringing
  9. Destination phone is answered
  10. CUCM asks destination phone for media information (IP, port, audio codec)
  11. CUCM asks originating phone for media information (IP, port, audio codec)
  12. CUCM relays answer indication and media information to the originating phone
  13. CUCM relays media information to the destination phone
  14. Two way audio is established directly between the IP phones
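
The call flow above can be condensed into data to underline the key point: nearly every step is a message through the CUCM server. The message names below are illustrative shorthand, not actual SCCP message types.

```python
# Condensed sketch of the signaling flow (names are illustrative).
CALL_FLOW = [
    ("off-hook",        "phone-a", "cucm"),
    ("dial-tone",       "cucm",    "phone-a"),
    ("digits",          "phone-a", "cucm"),
    ("call-proceeding", "cucm",    "phone-a"),
    ("new-call",        "cucm",    "phone-b"),
    ("ringing",         "phone-b", "cucm"),
    ("ringback",        "cucm",    "phone-a"),
    ("answer",          "phone-b", "cucm"),
    ("media-info-req",  "cucm",    "phone-b"),
    ("media-info-req",  "cucm",    "phone-a"),
    ("connect",         "cucm",    "phone-a"),
    ("connect",         "cucm",    "phone-b"),
    ("rtp",             "phone-a", "phone-b"),  # media flows phone to phone
]

# Every step except the final RTP stream touches CUCM, so any server
# latency surfaces somewhere in the user experience.
cucm_hops = sum(1 for _, src, dst in CALL_FLOW if "cucm" in (src, dst))
print(cucm_hops)  # 12 of 13 steps involve the CUCM server
```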

At every step in the above process one or more messages have to be exchanged between the CUCM server and one of the IP phones. There are three places delay is commonly noticed by users:

  1. Off-hook to dial-tone
    1. User goes off hook, but CUCM delays the acknowledgement. This leads to perceived “dead air”
  2. Post dial delay
    1. User dials all digits, but doesn’t receive lookup indication (ringback). This can cause users to hang up. This is EXTREMELY important to avoid because during a 911 call users will typically only wait a second or two to hear some indication that the call is in progress before hanging up. Consider the psychological impact and stress of even momentary dead air during an emergency situation.
  3. Post-answer media cut-through delay
    1. Destination phone answers, but audio setup is delayed at the CUCM server. This leads to a user picking up a phone saying “Hello, this is Jason”, and the calling user hearing “o, this is Jason”.

Also consider that each of the above messages for a single phone call had to be logged to disk. Huge advances have been made in compression and RAM-disk usage, but log writing is still a critical component of a phone call. Call logs and call records are crucial to an enterprise phone system.

Let’s look at this at scale.

Figure 2: Cluster Scale

With a cluster of fully meshed call control servers and tens of thousands of IP phones, the situation is slightly more complex. Any single phone can still call any other phone, but now an extra lookup is needed. Where the destination phone registers for call control traffic is now important. Users in the Durham office may be located on a different Call Control server than users in the San Jose office. This means all of the above steps must now be negotiated between two different CUCM servers as well as the two phone endpoints.

CUCM uses Intra-Cluster Communication Signaling (ICCS) for lookups and call control traffic between servers. A problem on any one server could now spell disaster for thousands of users who need to place calls and have immediate response. Any server response time latency will be noticed.

Now that we have some background on why performance is so crucial to a real time communication system, let’s get back to the history.

Enter virtualization!

Cisco was slow to the virtualization game with Unified Communications. All the same fears about poor hardware performance were amplified with the hypervisor adding another possibly hard to troubleshoot abstraction layer. Virtualization support was first added only for certain hardware platforms (Cisco MCS) and only with certain Cisco UC versions. All the same specifications based rules applied to IBM servers (by this point HP was out of favor with Cisco).

What everyone knew was that virtualization was actually amazing for Cisco UC – in the lab. Every aspiring CCIE Voice candidate had snapshots of Cisco UC servers for easy lab recreates. Customers ran lab and demo environments as proofs of concept or for testing. Cisco used virtualization extensively internally for testing and support.

A Cisco UC customer wanting to virtualize had two options at this point for building a virtual Cisco UC cluster on VMware.

  1. Buy Cisco MCS servers (rebranded IBM)
  2. Buy IBM servers

The Cisco DocWiki page was created, listing the server requirements, IBM part numbers, and a few notes about VMware configuration.

To any virtualization admin it should be immediately clear that neither of the above options is truly desirable. Virtualization was supposed to give customers choice and flexibility, and so far there was none. Large customers were clamoring for support for Hardware Vendor X, where X is whatever their server virtualization shop was running. Sometimes Cisco UC customers were direct competitors to IBM, so imagine the conversation:

“Hello IBM competitor. I know you want Cisco UC, but you’ll have to rack these IBM servers in your data center.”

Exceptions were made and the DocWiki was slowly updated with more specifications based hardware.

Cisco UCS as Virtualization Door Opener

Cisco Unified Computing System (UCS) is what really drove the development of the Cisco DocWiki site to include considerations for Network Attached Storage and Storage Area Networks. Now Cisco had hardware that could utilize these storage platforms, and best practices needed to be documented for customer success. It also started loosening the tight coupling between UC and very specific server models for support. Now a whole class of servers based on specifications could be supported. This is largely the result of years of caution and strict requirements that allowed UC and virtualization to mature together. Customers had success with virtualization and demanded more.

UC Virtualization Requirements Today

Today everything about Cisco UC Virtualization can be found on the Cisco DocWiki site. A good introductory page is the UC Virtualization Environment Overview, which serves to link to all of the other sub pages.

In these pages you’ll find a number of requirements that cover CPU, RAM, Storage, and VMware. Let’s hit the highlights and show how Nutanix meets the relevant requirements.


This isn’t anything Nutanix specific, but it’s important nonetheless. No oversubscription of ANY resource is allowed. CPUs must be mapped 1 vCPU to 1 physical core (ignore the Hyper-Threading logical core count). RAM must be reserved for the VM. Thick Provisioning is recommended for storage, but Thin Provisioning is allowed.

The big one here is 1:1 vCPU to core mapping. This will be a primary driver of sizing and is evidenced in all of the Cisco documentation. If you know how many physical cores are available, and you know how many vCPUs a VM takes, most of the sizing is done already!

CPU Architecture

Specific CPU architectures and speeds are listed in order to be classified as a “Full Performance CPU”. The Nutanix home page provides a list of all processors used in all node types. All Nutanix nodes except the NX-1000 series are classified as Full Performance CPUs at the time of this writing. That means the NX-1000 is not a good choice for Cisco UC, but all other platforms such as the very popular NX-3000 are a great fit.


Nutanix presents an NFS interface to the VMware Hypervisor. The Nutanix Distributed Filesystem backend is seen by VMware as a simple NFS datastore. The DocWiki page lists support for NFS under the Storage System Design Requirements section. There is also a listing under the storage hardware section. Most of the storage requirements listed apply to legacy SAN or NAS environments so aren’t directly applicable to Nutanix.

The key requirements that must be met are latency and IOPS. This is another area where the calculation differs from a traditional NAS. In a legacy NAS environment the storage system’s performance was divided among all hosts accessing it. In the Nutanix environment each host accesses local storage, so no additional calculations are required as the system scales! Each node has access to the full performance of the NDFS system.

Each UC application has some rudimentary IOPS information that can be found on the DocWiki storage site. These aren’t exact numbers and are missing some details about the type of testing performed to achieve them, but they get you in the ballpark. None of the UC applications listed are disk intensive; average utilization is less than 100 IOPS for most of them. This shows again that CPU will be the primary driver of sizing.

VMware HCL

Cisco requires that any system for UC Virtualization must be on the VMware HCL and Storage HCL. Nutanix works very hard to ensure that this requirement is met, and has a dedicated page listing Nutanix on the VMware HCL.


With the above requirements met we can confidently select the Nutanix platform for UC virtualization and know it will be supported by Cisco TAC. The DocWiki is an incredibly useful tool for verifying that all requirements are met. Check the Cisco DocWiki frequently as it’s updated often!

Cisco UC OVA Files

Before we conclude let’s take a look at one more unique feature of Cisco UC and the DocWiki page.

Each Cisco UC application is installed using the combination of an OVA file and an install ISO. The OVA is required to ensure that exact CPU, RAM, and Disk sizes and reservations are followed. All Cisco OVA files can be found here on the DocWiki. Be sure to use these OVA files for each UC application and use the vCPU and RAM sizes from each OVA template to size appropriately on Nutanix. The ISO file for installation is a separate download or DVD delivery that happens on purchase.

In the next post, we’ll cover the exact sizing of Cisco UC Virtual Machines and how to fit them onto an example Nutanix block.

Nutanix and UC – Part 1: Introduction and Overview

I’ll be publishing a series of blog posts outlining Cisco Unified Communications on Nutanix. At the end of this series I hope to have addressed any potential concerns about running Cisco UC on Nutanix and to have provided all the tools for a successful deployment. Your comments are welcome and encouraged. Let’s start at the beginning, a very good place to start.

Cisco UC Overview

Let’s start with an overview of Cisco Unified Communications just to make sure we’re all on the same page about the basics of the solution. UC is just a term used to describe all of the communications technologies that an enterprise might use to collaborate. This is really a series of different client and server technologies that might provide Voice, Video, Instant Messaging, and Presence.

Clients use these server components to communicate with each other. They also use Gateway components to talk to the outside world. The gateway in the image below shows how we link to a phone service provider such as AT&T or Verizon to make calls to the rest of the world.

Cisco UC Overview
Cisco Unified Communications Overview


Each of the above components in the Cisco UC Virtual Machines provides a critical function to the clients along the bottom. In the past there may have been racks full of physical servers to accomplish these functions, but now this can be virtualized. Redundancy is still one of our NUMBER 1 concerns in a UC deployment, but scale is also important. When the phone system goes down and the CEO or CIO can’t dial into the quarterly earnings call there is huge potential for IT staff changes. Even more importantly, everyone relies on this system for Emergency 911 calls. The phone system MUST be up 100% of the time (or close to it).

Virtualization actually helps both in terms of scale AND redundancy on this front. Let’s look at each component of the UC system and see what it does for us as well as how it fits into a virtual environment.

Cisco Unified Communications Manager

Cisco Unified Communications Manager (CUCM) is the core building block of all Cisco UC environments. CUCM provides call control. All phones will register to the CUCM and all phone calls will go through the CUCM for call routing. Because the CUCM call control is such a critical function it is almost always deployed in a redundant full-mesh cluster of servers. A single cluster can support up to 40,000 users with just 11 VMs. Additional clusters can be added to scale beyond 40,000 users.

Once the size of the Cisco CUCM cluster is determined the next step is to deploy the VMs required. Each VM is deployed from an OVA which has a number of fixed values that cannot be changed. The number of vCPUs, the amount of RAM, and the size of the disks is completely determined by the Cisco OVA.

The Cisco DocWiki site lists various OVAs available to deploy a CUCM server. The size of the CUCM server OVA used depends on the number of endpoints the cluster will support.

Cisco Unity Connection

Cisco Unity Connection (CUC) provides Voice Message services, acting as the voice mailbox server for all incoming voice messages. CUC can also be used as an Interactive Voice Response server, playing a series of messages from a tree structure and branching based on user input. For redundancy, each CUC cluster is deployed as an Active/Active pair that can support up to 20,000 voice mailboxes. Scaling beyond 20,000 users is just a matter of adding clusters.

The OVA for CUC can be found on the Cisco DocWiki site. Notice that these OVAs for CUC have much larger disk sizes.

Cisco Instant Messaging & Presence

Cisco IM&P is the primary UC component providing Presence and Instant Messaging services to Cisco Jabber endpoints. Jabber clients register to the IM&P server for all contact list and IM functions. The Jabber clients ALSO connect to the CUCM server for call control and to the CUC server for Voice Messaging.

IM&P servers are deployed in pairs called subclusters. Up to 3 subclusters (6 IM&P servers total) can be paired with a single CUCM cluster supporting up to 45,000 Jabber clients. The OVA templates for IM&P can be found on the DocWiki site. Each IM&P cluster is tied to a CUCM cluster. Adding more IM&P clusters will also mean adding more CUCM clusters.
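
The scaling limits above lend themselves to a quick worked example. The 60,000-user figure is hypothetical; the per-cluster limits are the ones stated in this series (40,000 users per CUCM cluster, 20,000 mailboxes per CUC pair, 45,000 Jabber clients per IM&P deployment).

```python
import math

# Ceiling division: how many clusters are needed to cover a user count.
def clusters_needed(users, per_cluster):
    return math.ceil(users / per_cluster)

users = 60_000
print(clusters_needed(users, 40_000))  # CUCM clusters needed: 2
print(clusters_needed(users, 20_000))  # CUC clusters needed: 3
# IM&P is tied to its CUCM cluster, so with 2 CUCM clusters the
# 45,000-client IM&P limit is not the constraint in this example.
```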

Cisco Emergency Responder

911 emergency calls using a VoIP service often fall under special state laws requiring the exact location of the emergency call to be sent to the Public Safety Answering Point (PSAP). The 911 operator needs this location to dispatch appropriate emergency services. VoIP makes this more complex because the concept of a phone now encompasses laptops and phones with wireless roaming capabilities, which are often changing locations.

Cisco Emergency Responder (CER) is deployed in pairs of VMs (primary and secondary) to provide Emergency Location to the PSAP when a 911 call is placed. CER will use either SNMP discovery of switch ports, IP subnet based discovery, or user provided location to provide a location to the PSAP.

OVAs for CER can be found on the Cisco DocWiki.

Additional voice server components can be found on the DocWiki page. They follow a similar convention of describing the number of vCPUs, RAM, and Disk requirements for a specific platform size.

We’ll talk more about these individual components in the next part of this series, but for now it’s enough to just understand that each of these services will be provisioned from an OVA as a VM on top of VMware ESXi.

Nutanix Overview

Nutanix has been covered in great detail by Steven Poitras over at the Nutanix Bible. I won’t repeat all of the work Steve did because I’m sure I wouldn’t do it justice. I will however steal a few images and give a brief summary. For more info please head over to Steve’s page.

The first image is the most important for understanding what makes Nutanix so powerful. Below we see that the Nutanix Controller Virtual Machine (CVM) has direct control of the attached disks (SSD and HDD). The Hypervisor talks directly to the Nutanix CVM processes for all disk IO using NFS in the case of VMware ESXi. This allows Nutanix to abstract the storage layer and do some pretty cool things with it.

The Hypervisor could be VMware ESXi, Microsoft Hyper-V, or Linux KVM. We’ll focus on ESXi here because Cisco UC requires VMware ESXi for virtualization.

Nutanix Node Detail
Nutanix Node Detail

The great thing is that to User Virtual Machines such as Cisco Unified Communications this looks exactly like ANY OTHER virtual environment with network storage. There is no special work required to get a VM running on Nutanix. The same familiar hypervisor you know and love presents storage to the VMs.

Now we have the game changer up next. Because the CVM has control of the Direct Attached Storage, and because the CVM runs on every single ESXi host, we can easily scale out our storage layer by just adding nodes.

Nutanix CVM NDFS Scale
Nutanix CVM NDFS Scale

Each Hypervisor knows NOTHING about the physical disks, and believes that the entire storage pool is available for use. The CVM optimizes all data access and stores data locally in flash and memory for fast access. Data that is less frequently accessed can be moved to cold tier storage on spinning disks. Once local disks are exhausted the CVM has the ability to write to any other node in the Nutanix cluster. All writes are written once locally, and once on a remote node for redundancy.

Because reads are served locally and every write keeps a local copy, we can scale out while preserving performance.
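
The write path described above can be sketched as follows. This is a toy model, not the actual NDFS placement logic; the replication factor of 2 (one local copy plus one remote copy) is the behavior described in this post, and the random remote choice is my own simplification.

```python
import random

# Toy sketch of a redundant write: one copy lands on the local node,
# one copy on some other node in the cluster (replication factor 2).
def place_write(local_node, all_nodes, rf=2):
    """Return the list of nodes that receive a copy of one write."""
    remotes = [n for n in all_nodes if n != local_node]
    return [local_node] + random.sample(remotes, rf - 1)

nodes = ["node-a", "node-b", "node-c", "node-d"]
replicas = place_write("node-a", nodes)
print(replicas)  # e.g. ['node-a', 'node-c']: local copy first, then remote
```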

Nutanix Distributed Filesystem requires at least 3 nodes to form a cluster. Lucky for us the most common “block” comes with space for 4 “nodes”. Here’s an inside view of the 4 nodes that make up the most common Nutanix block. The only shared components between the 4 nodes are the redundant power supplies (2). Each node has access to its own disks and 10GbE network ports.

Back of Nutanix Block
Back of Nutanix Block

Additional nodes can be easily added to the cluster 1 – 4 at a time using an auto discovery process.

Up Next – Cisco Requirements for Virtualization

Now that I’ve been at Nutanix for a few months I’ve had a chance to really wrap my head around the technology. I’ve been working on lab testing, customer sizing exercises, and documentation of UC Best Practices on Nutanix. One of the most amazing things is how well UC runs on Nutanix and how frictionless the setup is.

I had to do a lot of work to document all of the individual Cisco UC requirements for virtualization, but with that exercise completed the actual technology portion runs extremely well.

In the next blog post I’ll cover all of the special requirements that Cisco enforces on a non-Cisco hardware platform such as Nutanix. I’ll cover exactly how Nutanix meets these requirements.