Nutanix and UC – Part 2: Cisco Virtualization Requirements

In the last post I covered an Introduction to Cisco UC and Nutanix. In this post I’ll cover UC performance and virtualization requirements.

A scary part of virtualizing Cisco Unified Communications is worrying about being fully supported by Cisco TAC if a non-standard deployment path is chosen. This is due to a long history of strict hardware requirements around UC. When Cisco UC was first released in its current Linux-based incarnation around 2006 as version 5.0, it could only be installed on certain HP and IBM server models. Cisco was VERY strict about hardware revisions of these servers, and a software-to-hardware matrix was made available.

This led to the creation of a “Specifications” table, listing exact processors, disks, and RAM for each supported server model. When you hear “Specifications Based” or “Spec Based”, it all started here.

Customers were welcome to purchase a server directly from HP or IBM that used all of the same hardware components, but the Cisco MCS server (which was just a rebranded HP or IBM server) was recommended. If it was discovered that a customer had deviated from the hardware specs listed in the matrix, they could be in an unsupported configuration. If this unsupported configuration was found to be causing a particular problem, the customer might have had to change out the server hardware before further support could be obtained. These calls to technical support were often stressful and harrowing if it turned out the hardware purchase hadn’t followed the Spec Based matrix exactly.

From a support perspective this makes sense. UC is a critical real-time application and non-standard hardware with less than excellent performance characteristics could cause all sorts of hard to diagnose and hard to troubleshoot problems. Working in support I saw my fair share of these cases where failing disks or problem hardware caused periodic interruptions that only revealed themselves through odd and intermittent symptoms.

UC Performance Needs

Let’s take a break from history to look at why performance is so critical to UC.

Figure 1: Signal vs Media

 

Figure 1 shows where the CUCM Virtual Machine fits into the call path. Each IP Phone will have a TCP session open at all times for call control, sometimes called signaling. Typically in a Cisco environment this is the SCCP protocol, but things are moving to the SIP protocol as an open standard. All the examples below assume SCCP is in use.

The SCCP call control link is used when one phone wants to initiate a call to another phone. Once a call is initiated, a temporary media stream carrying Real-time Transport Protocol (RTP) audio/video traffic is established directly between the phones. The following process is used to make a phone call (a rough sketch of the message flow, in code, follows the list).

Basic Phone Call Process

  1. User goes off hook by lifting handset, pressing speaker, or using headset
  2. User receives dial-tone
  3. User dials digits of desired destination and prepares a media channel
  4. CUCM performs destination lookup as each digit is received
  5. CUCM sends back confirmation to calling user that the lookup is proceeding
  6. CUCM sends “New Call” to destination IP Phone
  7. Destination phone responds to CUCM that it is ringing
  8. CUCM sends back confirmation to calling phone that the destination is ringing
  9. Destination phone is answered
  10. CUCM asks destination phone for media information (IP, port, audio codec)
  11. CUCM asks originating phone for media information (IP, port, audio codec)
  12. CUCM relays answer indication and media information to the originating phone
  13. CUCM relays media information to the destination phone
  14. Two way audio is established directly between the IP phones
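
To make the back-and-forth concrete, here's a rough sketch of that message flow in Python. The message names are simplified stand-ins loosely based on the steps above, not actual SCCP message definitions, and the output is just a log of who talks to whom.

# Rough sketch of the signaling exchange in a basic call.
# Message names are simplified stand-ins, not real SCCP message types.
import time

def send(src, dst, message):
    # In a real deployment this is a TCP message to or from CUCM; here we just log it.
    print(f"{src:>10} -> {dst:<10} {message}")

def basic_call(calling, called, cucm="CUCM"):
    start = time.monotonic()
    send(calling, cucm, "OffHook")
    send(cucm, calling, "DialTone")                # delay here = perceived dead air
    send(calling, cucm, "Digits 3001")
    send(cucm, calling, "CallProceeding")          # delay here = post-dial delay
    send(cucm, called, "NewCall")
    send(called, cucm, "Ringing")
    send(cucm, calling, "Ringback")
    send(called, cucm, "OffHook (answer)")
    send(cucm, called, "Request media info")       # IP, port, audio codec
    send(cucm, calling, "Request media info")
    send(cucm, calling, "Answer + media info of called phone")  # delay here = media cut-through
    send(cucm, called, "Media info of calling phone")
    elapsed = time.monotonic() - start
    print(f"Signaling done in {elapsed:.3f}s; RTP now flows directly between the phones.")

basic_call("Phone-A", "Phone-B")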

At every step in the above process one or more messages have to be exchanged between the CUCM server and one of the IP phones. There are three places delay is commonly noticed by users:

  1. Off-hook to dial-tone
    1. User goes off hook, but CUCM delays the acknowledgement. This leads to perceived “dead air”
  2. Post dial delay
    1. User dials all digits, but doesn’t receive lookup indication (ringback). This can cause users to hang up. This is EXTREMELY important to avoid because during a 911 call users will typically only wait a second or two to hear some indication that the call is in progress before hanging up. Consider the psychological impact and stress of even momentary dead air during an emergency situation.
  3. Post answer, media cut-through delay
    1. Destination phone answers, but audio setup is delayed at the CUCM server. This leads to a user picking up a phone saying “Hello, this is Jason”, and the calling user hearing “o, this is Jason”.

Also consider that each of the above messages for a single phone call has to be logged to disk. Huge advances have been made in compression and RAM-disk usage, but log writing is still a critical component of a phone call. Call logs and call records are crucial to an enterprise phone system.

Let’s look at this at scale.

Figure 2: Cluster Scale

With a cluster of fully meshed call control servers and tens of thousands of IP phones, the situation is slightly more complex. Any single phone can still call any other phone, but now an extra lookup is needed. Where the destination phone registers for call control traffic is now important. Users in the Durham office may be located on a different Call Control server than users in the San Jose office. This means all of the above steps must now be negotiated between two different CUCM servers as well as the two phone endpoints.

CUCM uses Intra-Cluster Communication Signaling (ICCS) to do lookups and carry call control traffic between servers. A problem now on any one server could spell disaster for thousands of users who need to place calls and get an immediate response. Any latency in server response time will be noticed.

Now that we have some background on why performance is so crucial to a real time communication system, let’s get back to the history.

Enter virtualization!

Cisco was slow to the virtualization game with Unified Communications. All the same fears about poor hardware performance were amplified, with the hypervisor adding another potentially hard-to-troubleshoot abstraction layer. Virtualization support was first added only for certain hardware platforms (Cisco MCS) and only with certain Cisco UC versions. All the same specifications-based rules applied to IBM servers (by this point HP was out of favor with Cisco).

What everyone knew is that virtualization was actually amazing for Cisco UC – in the lab. Every aspiring CCIE Voice candidate had snapshots of Cisco UC servers for easy lab recreates. Customers ran lab or demo environments as proofs of concept or test beds. Cisco used virtualization extensively internally for testing and support.

A Cisco UC customer wanting to virtualize had two options at this point for building a virtual Cisco UC cluster on VMware.

  1. Buy Cisco MCS servers (rebranded IBM)
  2. Buy IBM servers

The Cisco DocWiki page was created, listing the server requirements, IBM part numbers, and a few notes about VMware configuration.

To any virtualization admin it should be immediately clear that neither of the above options is truly desirable. Virtualization was supposed to give customers choice and flexibility, and so far there was none. Large customers were clamoring for support for Hardware Vendor X, where X is whatever their server virtualization shop was running. Sometimes Cisco UC customers were direct competitors to IBM, so imagine the conversation:

“Hello IBM competitor. I know you want Cisco UC, but you’ll have to rack these IBM servers in your data center.”

Exceptions were made and the DocWiki was slowly updated with more specifications-based hardware.

Cisco UCS as Virtualization Door Opener

Cisco Unified Computing System (UCS) is what really drove the development of the Cisco DocWiki site to include considerations for Network Attached Storage and Storage Area Networks. Now Cisco had hardware that could utilize these storage platforms, and best practices needed to be documented for customer success. It also started the process of loosening the tight coupling between UC and very specific server models for support. Now a whole class of servers based on specifications could be supported. This is largely the result of years of caution and strict requirements that allowed UC and virtualization to mature together. Customers had success with virtualization and demanded more.

UC Virtualization Requirements Today

Today everything about Cisco UC Virtualization can be found on the Cisco DocWiki site. A good introductory page is the UC Virtualization Environment Overview, which serves to link to all of the other sub pages.

In these pages you’ll find a number of requirements that cover CPU, RAM, Storage, and VMware. Let’s hit the highlights and show how Nutanix meets the relevant requirements.

Oversubscription

This isn’t anything Nutanix specific, but it’s important nonetheless. No oversubscription of ANY resource is allowed. CPUs must be mapped 1 vCPU to 1 physical core (ignore Hyper-Threading logical core counts). RAM must be reserved for the VM. Thick Provisioning is recommended for storage, but Thin Provisioning is allowed.

The big one here is 1:1 vCPU to core mapping. This will be a primary driver of sizing and is evidenced in all of the Cisco documentation. If you know how many physical cores are available, and you know how many vCPUs a VM takes, most of the sizing is done already!
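
Here's a minimal sketch of that arithmetic in Python. The core count and the per-VM vCPU numbers below are made-up placeholders; plug in your actual node specs and the vCPU counts from the Cisco OVAs.

# Rough sizing sketch: 1 vCPU must map to 1 physical core (Hyper-Threading ignored).
# All numbers below are placeholders, not published Cisco or Nutanix figures.
physical_cores_per_node = 16    # usable cores per host, minus anything reserved for the Nutanix CVM
uc_vms = {                      # vCPU count taken from each application's OVA
    "CUCM-Publisher": 4,
    "CUCM-Subscriber": 4,
    "CUC-Primary": 4,
    "IM&P-1": 4,
}

vcpus_needed = sum(uc_vms.values())
nodes_needed = -(-vcpus_needed // physical_cores_per_node)   # ceiling division

print(f"Total vCPUs (and therefore physical cores) needed: {vcpus_needed}")
print(f"Minimum nodes at {physical_cores_per_node} usable cores each: {nodes_needed}")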

CPU Architecture

Specific CPU architectures and speeds are listed in order to be classified as a “Full Performance CPU”. The Nutanix home page provides a list of the processors used in all node types. All Nutanix nodes except the NX-1000 series use Full Performance CPUs at the time of this writing. That means the NX-1000 is not a good choice for Cisco UC, but all other platforms, such as the very popular NX-3000, are a great fit.

Storage

Nutanix presents an NFS interface to the VMware Hypervisor. The Nutanix Distributed Filesystem backend is seen by VMware as a simple NFS datastore. The DocWiki page lists support for NFS under the Storage System Design Requirements section. There is also a listing under the storage hardware section. Most of the storage requirements listed apply to legacy SAN or NAS environments, so they aren’t directly applicable to Nutanix.

The key requirements that must be met are latency and IOPS. This is another area where the calculation differs from a traditional NAS. In a legacy NAS environment the storage system performance was divided among all hosts accessing the storage. In the Nutanix environment each host accesses local storage, so no additional calculations are required as the system scales! Each node has access to the full performance of the NDFS system.

Each UC application has some rudimentary IOPS information that can be found here on the DocWiki storage site. These aren’t exact numbers, and they’re missing some information about the type of testing performed to achieve them, but they get you in the ballpark. None of the UC applications listed are disk intensive; average utilization is under 100 IOPS for most of them. This shows that, again, the CPU will be the primary driver of sizing.
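
As a back-of-the-envelope check, here's a small sketch comparing hypothetical per-VM IOPS against what a node can deliver. Every number below is a placeholder for illustration; use the DocWiki figures and your own platform testing for real sizing.

# Back-of-the-envelope storage check. All IOPS figures below are placeholders.
per_vm_avg_iops = {
    "CUCM-Subscriber": 50,
    "CUC-Primary": 75,
    "IM&P-1": 40,
}
node_sustained_iops = 15000   # placeholder for what one node can sustain

total_uc_iops = sum(per_vm_avg_iops.values())
headroom = node_sustained_iops - total_uc_iops
print(f"UC VMs on this node need roughly {total_uc_iops} IOPS on average")
print(f"That leaves {headroom} IOPS of headroom "
      f"({100 * total_uc_iops / node_sustained_iops:.1f}% of node capacity used)")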

VMware HCL

Cisco requires that any system for UC Virtualization must be on the VMware HCL and Storage HCL. Nutanix works very hard to ensure that this requirement is met, and has a dedicated page listing Nutanix on the VMware HCL.

 

With the above requirements met we can now confidently select the Nutanix platform for UC virtualization and know it will be supported by Cisco TAC. The DocWiki is an incredibly useful tool for verifying that all requirements are met. Check the Cisco DocWiki frequently as it’s updated often!

Cisco UC OVA Files

Before we conclude let’s take a look at one more unique feature of Cisco UC and the DocWiki page.

Each Cisco UC application is installed using the combination of an OVA file and an install ISO. The OVA is required to ensure that exact CPU, RAM, and Disk sizes and reservations are followed. All Cisco OVA files can be found here on the DocWiki. Be sure to use these OVA files for each UC application and use the vCPU and RAM sizes from each OVA template to size appropriately on Nutanix. The ISO file for installation is a separate download or DVD delivery that happens on purchase.
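
One easy way to keep a deployment honest is to check every deployed VM against the fixed values its OVA defines. The sketch below uses made-up OVA numbers purely for illustration; the authoritative values come from the Cisco OVA templates themselves.

# Sanity-check deployed VM settings against the fixed values from the Cisco OVA.
# The "OVA" values below are placeholders, not actual Cisco template numbers.
ova_specs = {
    "CUCM": {"vcpu": 4, "ram_gb": 8, "disks_gb": [110]},
    "CUC":  {"vcpu": 4, "ram_gb": 8, "disks_gb": [200]},
}

def check_vm(app, deployed):
    for key, expected in ova_specs[app].items():
        if deployed.get(key) != expected:
            print(f"{app}: {key} is {deployed.get(key)}, but the OVA requires {expected}")
            return False
    print(f"{app}: matches its OVA template")
    return True

check_vm("CUCM", {"vcpu": 4, "ram_gb": 8, "disks_gb": [110]})
check_vm("CUC",  {"vcpu": 2, "ram_gb": 6, "disks_gb": [200]})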

In the next post, we’ll cover the exact sizing of Cisco UC Virtual Machines and how to fit them onto an example Nutanix block.

Nutanix and UC – Part 1: Introduction and Overview

I’ll be publishing a series of blog posts outlining Cisco Unified Communications on Nutanix. At the end of this series I hope to have addressed any potential concerns about running Cisco UC on Nutanix and to have provided all the tools for a successful deployment. Your comments are welcome and encouraged. Let’s start at the beginning, a very good place to start.

Cisco UC Overview

Let’s start with an overview of Cisco Unified Communications just to make sure we’re all on the same page about the basics of the solution. UC is just a term used to describe all of the communications technologies that an enterprise might use to collaborate. This is really a series of different client and server technologies that might provide Voice, Video, Instant Messaging, and Presence.

Clients use these server components to communicate with each other. They also use Gateway components to talk to the outside world. The gateway in the image below shows how we link into a phone service provider such as AT&T or Verizon to make calls to the rest of the world.

Cisco Unified Communications Overview

 

Each of the above components in the Cisco UC Virtual Machines provides a critical function to the clients along the bottom. In the past there may have been racks full of physical servers to accomplish these functions, but now this can be virtualized. Redundancy is still one of our NUMBER 1 concerns in a UC deployment, but scale is also important. When the phone system goes down and the CEO or CIO can’t dial into the quarterly earnings call there is huge potential for IT staff changes. Even more importantly, everyone relies on this system for Emergency 911 calls. The phone system MUST be up 100% of the time (or close to it).

Virtualization actually helps both in terms of scale AND redundancy on this front. Let’s look at each component of the UC system and see what it does for us as well as how it fits into a virtual environment.

Cisco Unified Communications Manager

Cisco Unified Communications Manager (CUCM) is the core building block of all Cisco UC environments. CUCM provides call control. All phones will register to the CUCM and all phone calls will go through the CUCM for call routing. Because the CUCM call control is such a critical function it is almost always deployed in a redundant full-mesh cluster of servers. A single cluster can support up to 40,000 users with just 11 VMs. Additional clusters can be added to scale beyond 40,000 users.

Once the size of the Cisco CUCM cluster is determined, the next step is to deploy the required VMs. Each VM is deployed from an OVA which has a number of fixed values that cannot be changed. The number of vCPUs, the amount of RAM, and the size of the disks are completely determined by the Cisco OVA.

The Cisco DocWiki site lists various OVAs available to deploy a CUCM server. The size of the CUCM server OVA used depends on the number of endpoints the cluster will support.
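
As a sketch of that decision, the snippet below picks an OVA tier from an endpoint count. The tier names and thresholds are illustrative placeholders, not the official DocWiki values.

# Pick a CUCM OVA size from the number of endpoints a server must support.
# Tier thresholds below are illustrative placeholders only.
ova_tiers = [
    (1000,  "small CUCM OVA"),
    (7500,  "medium CUCM OVA"),
    (10000, "large CUCM OVA"),
]

def pick_ova(endpoints_per_server):
    for max_endpoints, name in ova_tiers:
        if endpoints_per_server <= max_endpoints:
            return name
    raise ValueError("Exceeds the largest OVA; spread endpoints across more subscribers")

print(pick_ova(600))     # -> small CUCM OVA
print(pick_ova(9000))    # -> large CUCM OVA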

Cisco Unity Connection

Cisco Unity Connection (CUC) provides Voice Message services, acting as the voice mailbox server for all incoming voice messages. CUC can also be used as an Interactive Voice Response server, playing a series of messages from a tree structure and branching based on user input. For redundancy, each CUC cluster is deployed in an Active/Active pair that can support up to 20,000 voice mailboxes. Scaling beyond 20,000 users is just a matter of adding clusters.

The OVA for CUC can be found on the Cisco DocWiki site. Notice that these OVAs for CUC have much larger disk sizes.

Cisco Instant Messaging & Presence

Cisco IM&P is the UC component that provides Presence and Instant Messaging services to Cisco Jabber endpoints. Jabber clients register to the IM&P server for all contact list and IM functions. The Jabber clients ALSO connect to the CUCM server for call control and to the CUC server for Voice Messaging.

IM&P servers are deployed in pairs called subclusters. Up to 3 subclusters (6 IM&P servers total) can be paired with a single CUCM cluster supporting up to 45,000 Jabber clients. The OVA templates for IM&P can be found on the DocWiki site. Each IM&P cluster is tied to a CUCM cluster. Adding more IM&P clusters will also mean adding more CUCM clusters.
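
Using the per-cluster limits quoted above (40,000 users per CUCM cluster, 20,000 mailboxes per CUC cluster, and 45,000 Jabber clients per CUCM cluster with a full set of IM&P subclusters), scaling works out to simple ceiling arithmetic, as in this sketch:

# Cluster-count arithmetic using the capacity limits quoted in the text above.
import math

USERS_PER_CUCM_CLUSTER    = 40000
MAILBOXES_PER_CUC_CLUSTER = 20000
JABBER_PER_CUCM_CLUSTER   = 45000   # with 3 IM&P subclusters (6 servers)

def clusters_needed(total, per_cluster):
    return math.ceil(total / per_cluster)

users = 60000
print("CUCM clusters needed:", clusters_needed(users, USERS_PER_CUCM_CLUSTER))      # 2
print("CUC clusters needed: ", clusters_needed(users, MAILBOXES_PER_CUC_CLUSTER))   # 3
print("CUCM+IM&P clusters for Jabber:", clusters_needed(users, JABBER_PER_CUCM_CLUSTER))  # 2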

Cisco Emergency Responder

911 emergency calls using a VoIP service often fall under special state laws requiring the exact location of the emergency call to be sent to the Public Safety Answering Point (PSAP). The 911 operator needs this location to dispatch appropriate emergency services. VoIP makes this more complex because the concept of a phone now encompasses laptops and phones with wireless roaming capabilities, which are often changing locations.

Cisco Emergency Responder (CER) is deployed in pairs of VMs (primary and secondary) to provide the emergency location to the PSAP when a 911 call is placed. CER will use SNMP discovery of switch ports, IP-subnet-based discovery, or user-provided location information to determine the location it hands to the PSAP.
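
To make the IP-subnet option concrete, here's a minimal sketch of a subnet-to-location lookup, which is conceptually what CER does before handing a location to the PSAP. The subnets and location strings are made up for illustration.

# Conceptual sketch of IP-subnet based location lookup (illustrative data only).
import ipaddress

subnet_to_location = {
    ipaddress.ip_network("10.10.1.0/24"): "Building A, Floor 1, Durham NC",
    ipaddress.ip_network("10.10.2.0/24"): "Building A, Floor 2, Durham NC",
}

def location_for(phone_ip):
    ip = ipaddress.ip_address(phone_ip)
    for subnet, location in subnet_to_location.items():
        if ip in subnet:
            return location
    return "Default location (manual lookup required)"

print(location_for("10.10.2.57"))   # -> Building A, Floor 2, Durham NC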

OVAs for CER can be found on the Cisco DocWiki.

Additional voice server components can be found on the DocWiki page. They follow a similar convention of describing the number of vCPUs, RAM, and Disk requirements for a specific platform size.

We’ll talk more about these individual components in the next part of this series, but for now it’s enough to just understand that each of these services will be provisioned from an OVA as a VM on top of VMware ESXi.

Nutanix Overview

Nutanix has been covered in great detail by Steven Poitras over at the Nutanix Bible. I won’t repeat all of the work Steve did because I’m sure I wouldn’t do it justice. I will however steal a few images and give a brief summary. For more info please head over to Steve’s page.

The first image is the most important for understanding what makes Nutanix so powerful. Below we see that the Nutanix Controller Virtual Machine (CVM) has direct control of the attached disks (SSD and HDD). The Hypervisor talks directly to the Nutanix CVM processes for all disk IO using NFS in the case of VMware ESXi. This allows Nutanix to abstract the storage layer and do some pretty cool things with it.

The Hypervisor could be VMware ESXi, Microsoft Hyper-V, or Linux KVM. We’ll focus on ESXi here because Cisco UC requires VMware ESXi for virtualization.

Nutanix Node Detail

The great thing is that to User Virtual Machines such as Cisco Unified Communications this looks exactly like ANY OTHER virtual environment with network storage. There is no special work required to get a VM running on Nutanix. The same familiar hypervisor you know and love presents storage to the VMs.

Now we have the game changer up next. Because the CVM has control of the Direct Attached Storage, and because the CVM runs on every single ESXi host, we can easily scale out our storage layer by just adding nodes.

Nutanix CVM NDFS Scale

Each Hypervisor knows NOTHING about the physical disks, and believes that the entire storage pool is available for use. The CVM optimizes all data access and stores data locally in flash and memory for fast access. Data that is less frequently accessed can be moved to cold tier storage on spinning disks. Once local disks are exhausted the CVM has the ability to write to any other node in the Nutanix cluster. All writes are written once locally, and once on a remote node for redundancy.

Because reads and writes happen locally whenever possible, we can scale out while preserving performance.
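
As a conceptual sketch (not the actual NDFS implementation), the "one copy local, one copy on a peer node" idea looks like this:

# Conceptual sketch of replication-factor-2 write placement; not real NDFS code.
import random

nodes = ["node-1", "node-2", "node-3", "node-4"]

def place_write(local_node, data_block):
    # One copy stays on the node running the VM, one copy lands on a different node.
    remote = random.choice([n for n in nodes if n != local_node])
    return {"block": data_block, "copies": [local_node, remote]}

# A VM running on node-2 writes a block.
print(place_write("node-2", "vm-disk-extent-0042"))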

Nutanix Distributed Filesystem requires at least 3 nodes to form a cluster. Lucky for us the most common “block” comes with space for 4 “nodes”. Here’s an inside view of the 4 nodes that make up the most common Nutanix block. The only shared components between the 4 nodes are the redundant power supplies (2). Each node has access to its own disks and 10GbE network ports.

Back of Nutanix Block

Additional nodes can be easily added to the cluster 1 – 4 at a time using an auto discovery process.

Up Next – Cisco Requirements for Virtualization

Now that I’ve been at Nutanix for a few months I’ve had a chance to really wrap my head around the technology. I’ve been working on lab testing, customer sizing exercises, and documentation of UC Best Practices on Nutanix. One of the most amazing things is how well UC runs on Nutanix and how frictionless the setup is.

I had to do a lot of work to document all of the individual Cisco UC requirements for virtualization, but with that exercise completed the actual technology portion runs extremely well.

In the next blog post I’ll cover all of the special requirements that Cisco enforces on a non-Cisco hardware platform such as Nutanix. I’ll cover exactly how Nutanix meets these requirements.

Nutanix and Unified Communications

The past week has been a whirlwind of studying, research, and introductions now that I’ve started at Nutanix! I’m happy to be on the team working on Reference Architectures for Unified Communications.


I’m planning to investigate the major Unified Communications platforms (VoIP, Voice Messaging, IM & Presence, E911) from the top vendors and come up with Best Practices for deployment on Nutanix. This is a hot opportunity because customers are excited about Nutanix and have real need for Unified Communications.

The savings and consolidation that Nutanix can bring to other areas in the data center can also be applied to Unified Communications. Imagine ditching all of your SAN or NAS storage and deploying on a hyper-converged solution that utilizes the on-box storage of every node in the cluster to its full potential. Imagine scaling up the size of your cluster by simply adding new nodes and not worrying about the storage.

With my past Cisco CCIE experience I’ll be tackling these technologies first, but I’m also planning on working on Microsoft Lync and Avaya Aura. To me this seems like the key area of opportunity at Nutanix, proving that any workload can run successfully on our systems.

Cisco has a great resource in the DocWiki pages that identify how to design and deploy Cisco UC in a virtual environment. I’m getting started there and hope to have a Best Practices guide (including sample cluster builds) put together by the end of November. After reading through all of the requirements and restrictions on the Cisco DocWiki site I’m confident Nutanix and Cisco UC will be successful!

What Unified Communications platform is YOUR company using? Is it virtualized? How much of your cost is in the SAN?

 

I’m looking forward to your comments. Keep an eye on this space and the Nutanix website for releases of our Best Practice documents in the future.

OpenID Connect

At work I’ve been doing a ton of Single Sign On, SAML, and certificate based authentication. I wanted to try that out for my own personal use. It turned out to be much easier than I expected.

I’ve already blogged here about updating my site certificates using StartCom SSL. Another free service they offer is an OpenID Connect certificate.

The process is actually pretty straightforward.

To get started with StartCom in the first place you have to download a client side certificate into your web browser. This is a file that you must keep on your computer and must protect. When your web browser connects to StartCom services it presents this certificate and says “Here I am”. Since you should be the only person with that certificate StartCom can say “OK, come on in”. 

This is nice because you don’t have to remember a username and password. The downside is that you have to keep this certificate handy to load it onto each machine you connect from. An encrypted USB key can be handy for this. You have to figure out how to make your Operating System and Browser combination present this certificate as an identity cert. Often this is an advanced setting in the browser that allows you to import an Identity Certificate.
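
Outside the browser, the same "present a client certificate" idea shows up when scripting against a TLS-protected service. Here's a minimal Python sketch; the URL and file names are placeholders, and the request only succeeds if the server actually accepts that certificate.

# Presenting a client certificate (identity cert + private key) to a TLS service.
# The URL and file names below are placeholders.
import requests

response = requests.get(
    "https://example.com/protected/",
    cert=("my-client-cert.pem", "my-client-key.pem"),
)
print(response.status_code)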

OpenID Connect takes this to the next step.

StartCom knows who I am and knows my certificate. As a provider of web services (bbbburns.com WordPress) I can make the decision to allow in certain users that an OpenID provider has authenticated. I downloaded the WordPress OpenID plugin, and tied my ID of bbbburns.startssl.com to my WordPress account.

When I log in to my own WordPress site now I can just type in “bbbburns.startssl.com” as the user and hit Login. The site redirects me to startssl.com for authentication using my client certificate. If successful, I get redirected back to bbbburns.com with an authentication assertion. Since bbbburns.startssl.com is tied to my WordPress account on the server, I’m automatically logged in as that user.

The setup for all of this took just a few minutes!

URL protocol specifier

Woah – why didn’t anyone tell me I could replace https:// or http:// with JUST // to preserve the protocol on the current page?

I didn’t know about it until I saw YouTube embed links using it.
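
Python's standard library shows how these protocol-relative references resolve against the page that contains them (the embed path here is just a placeholder):

# A "//" reference inherits the scheme of the page it appears on.
from urllib.parse import urljoin

print(urljoin("https://bbbburns.com/blog/some-post/", "//www.youtube.com/embed/abc123"))
# -> https://www.youtube.com/embed/abc123

print(urljoin("http://bbbburns.com/blog/some-post/", "//www.youtube.com/embed/abc123"))
# -> http://www.youtube.com/embed/abc123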

This is amazing news. I’ll have to go through and update my blog posts to see if this breaks anything.

BTC Trading Insanity

I’ve been flailing around trying to figure out how trading BitCoins works. I haven’t really DONE any trading – but check out these crazy graphs over at BitCoinWisdom.com. There is so much going on. Click on all the things. You’ll see.

Tech Weekend – HTTPS Everywhere

To improve privacy and security HTTPS should be used everywhere. It SHOULD be the default option. Unfortunately this isn’t always the case. Even worse, you have no idea what the browser is doing behind your back. For instance – this site you’re reading now is going off and contacting Google Analytics and downloading images from other locations. If you trust me (the author) you can assume I’ve typed in those URLs as HTTPS instead of HTTP, but why trust me when you don’t have to?

DISCLAIMER: I know my site cert is self-signed. I’ll get to a CA signed cert eventually. Deal with it ;) (edit 2013-12-ish: I took care of this with a cert from StartCom.)

HTTPS Everywhere is a browser plugin that can be used to solve this exact problem. Load the plugin into Firefox or Chrome and off you go. The most common sites are automatically converted from http:// in your address bar to https://.

But WAIT – it’s of course not that simple. Let’s take Google as an example. Here is the default Google search URL

http://google.com

So what is the secure Google search URL?

https://encrypted.google.com/

Thank you security for ALWAYS making my life more complex than it needs to be. So if we want a plugin that can convert the most popular sites from http to https we need a long list of rules that are site specific. The plugin comes with these by default.

Now let’s say I’m a WordPress Admin and I want to make sure I ALWAYS log into the following instead of the http site.

https://bbbburns.com/blog/wp-admin/

This is where I type my password into the cloud-based server, so it had better be over a secure connection. Unfortunately the EFF / plugin does not know who I am, so there is no bundled rule for my site. I have to write my own.

Now we get to the whole point of this post. All of the instructions for writing your own custom HTTPS Everywhere rules are for Firefox. No instructions exist for Chrome… UNTIL NOW. Also – the instructions are extremely detailed on rule syntax, but leave me wanting more when they describe which files to change and where.

Writing HTTPS Everywhere Rules for Google Chrome Browser

Take THAT search engine. No .. really.. take it.. I hope someone finds this useful. The EFF instructions are missing the following pieces.

Search for your rules

Search on your computer for the default.rulesets file. This will get you in the right directory. I found mine here: (I converted all backslashes to forward slashes because of a funky problem I was having with the post)

C:/Users/user/AppData/Local/Google/Chrome/User Data/Default/Extensions/<random string>/2013.10.16_0/rules/default.rulesets

This file was pretty long and I didn’t want to edit the thing directly. I wanted to just take my OWN custom rulesets file and load that. Luckily it looks like the following file can do exactly that. I added the second entry below and created a new file named custom.rulesets.

C:/Users/user/AppData/Local/Google/Chrome/User Data/Default/Extensions/<random string>/2013.10.16_0/rule_list.js

var rule_list = [
"rules/default.rulesets",
"rules/custom.rulesets",
];

That allows us to have our rules for converting http://bbbburns.com to https://bbbburns.com. Here’s what I entered into the custom.rulesets file.

C:/Users/user/AppData/Local/Google/Chrome/User Data/Default/Extensions/<random string>/2013.10.16_0/rules/custom.rulesets

<ruleset name="BBBBBurns">
  <target host="www.bbbburns.com" />
  <target host="bbbburns.com" />

  <rule from="^http://(www\.)?bbbburns\.com/" to="https://bbbburns.com/"/>
</ruleset>

Save those files and restart Chrome and you’re on your way to a more secure browsing experience. If there are sites you visit that HTTPS Everywhere doesn’t encrypt by default you can add these rules.

I recommend saving your custom file and the rule list in your “Development” directory or “scripts” or “hacks” or whatever you call it because surely this is all going to be blown away when Chrome auto updates. That’s just my assumption looking at the folder names above which seem version specific.

I think this concludes the Tech Weekend for me this weekend. Stay tuned for posts about password managers, two factor authentication, and PGP encryption and signing for email and other things.

Tech Weekend – TrueCrypt

Since I have a BitCoin Wallet now I figure I should probably have an encrypted offline storage mechanism for all the keys and the wallet file itself. If best practices have been followed then your Wallet is password protected already, but let’s go ONE MORE step and encrypt them on a USB drive.

So far TrueCrypt rocks. The user interface is extremely unhelpful to n00bs up front but once you straighten things out it should prove easy to use.

Here we see that I’ve mounted the 100MB file F:/SwissMemory-100.tc as drive S. Drive F is where the USB key actually is and Drive S is the new virtual encrypted drive.

  1. Download True Crypt
  2. Download True Crypt Key and Signature (another blog post for PGP)
  3. Verify signature with PGP tool of choice
  4. Install
  5. Plug in your favorite USB drive
  6. Choose the Create Volume button and a Wizard launches
  7. Select encrypted volume and make a .tc file of the desired size on the USB key. If you just want to store some small files you can make a pretty small volume. It’ll be a small opaque file on the USB drive. There are other options to encrypt the entire volume but honestly my resume and other assorted files on there don’t need encrypting. This means the USB drive can still be used in other computers without TrueCrypt installed. You’ll only need TrueCrypt to get access to what you keep in the encrypted part.
  8. Create a long ass random password for the volume. (LastPass is what I’m using for this. Another blog post is required for comparing password managers).
  9. Change the TrueCrypt preferences to auto open Explorer window for mounted volume.
  10. In TrueCrypt select your newly created .tc file to mount as a drive letter. You’ll have to enter that super long password again. This should cause the folder to open automatically on your desktop.
  11. Copy all of your important files into this folder. It’ll show up as a new drive that you can manage natively from your PC.

There you go – now you’ve securely stored your key files and wallet files. If you drop the USB drive somewhere it’s safe from prying eyes and your BitCoins won’t be stolen.

This DOES NOT protect you from someone either extorting you to provide the password or an agent of the law / court ordering you to provide the password. In this case a Hidden Volume would be required to have plausible deniability that any encrypted volume EVEN EXISTS.

Check out this pretty cool TrueCrypt article on what they call a hidden volume. It’s neat how they do it.

Tech Weekend – Barley

I realized I’ve been seriously neglecting my blog. Mike Rundle (@flyosity) posted a quick link to Barley showing they had WordPress support. I watched the video and was pretty impressed. WYSIWYG editing for a blog. I just GO to the site and can start typing into the web page. AWESOME!

 

I’m having some difficulty so I’m not sure I’m going to keep using it, but if they fix some bugs I could be on board. I could probably try a different browser and see if that helps. Using Chrome now.

  1. Sometimes when I highlight text I don’t get the popup to take an action.
  2. When you DO highlight text and get the edit box to pop up for a link you’re still not guaranteed smooth sailing. Moving the cursor outside the popup causes it to disappear. I do this often because the popup has http:// auto filled. I’m going to copy my link straight from the URL of my other open tab so I need to select the pre-filled http:// and delete the damn thing. If I click and drag outside the tiny popup it goes away. CTRL-A has become my answer for this. Instead just don’t prefill http:// dudes :(  
  3. The Publish option isn’t even available! I’m writing this in the standard editor now to finish it up.
  4. When you insert a YouTube video the video preview doesn’t show right away so you just have a big white space where you HOPE the video will go.

Here’s what I mean about the size of the popup. I usually start by clicking on the right and dragging back to the left. I’ve only got about 8 pixels or whatever of run-off on the left before fiery popup death. Very frustrating.

If all these get fixed though that’ll make me more likely to blog. After going through and reading this list again I’m initially disappointed. I’ll give it more time before I throw in the towel.

Tech Weekend – BitCoin

Project #1 Today: BitCoin

Today’s first item on the agenda was to figure out how BitCoin works and then invest some money into the thing. I had thought that anonymity was a cool part of BitCoin but it looks like if you ALSO want the convenience of sitting in your underwear at home you’ll need to give some of that up.

Step 1.  Research Research Research

Lots and lots of reading. I read pessimistic articles, positive articles, neutral articles, articles by fanatics (the true anarchist BTC believers online are a bit nuts), articles that focused on tech, articles that focused on the economics of it, all sorts of reading and videos.

Conclusion: Seems potentially legit but risky. Possibility for extreme shady activity but ALSO possible for regular transactions. Subject to wild ass price fluctuations. Be prepared to have your financial or legal ass handed to you if you’re not careful and try to do something stupid. Don’t do anything illegal.

I learned that you need to have a secure wallet with a secure backup, or use a trusted online wallet. Coinbase.com is a generally respected online wallet and Multibit was a trusted lightweight personal wallet. Backup your shit onto a DVD, USB, paper, whatever if you’re going the personal route. Learn how PGP signatures work. Learn how secure mail and messaging works (this part is just for fun – not really required).

At this point you should now have a wallet with an address.

Step 2. Get some BitCoins (BTC)

This part is where you’re making a trade off between anonymity and ease of use. You could go to a face to face person local to your area willing to trade cash for BTC. This preserves your anonymity but you’ll have to put on pants. Also – this person is essentially a money changer so I believe they have certain things they have to follow or else they’re in a shady financial / legal area.

I think we’re at a self discovery point. I value my anonymity less than I value the convenience of remaining pantsless! Other semi-anonymous cash transfers are things like the CVS money exchange.

So let’s trade in my anonymity for convenience and a more ethically firm approach. Coinbase.com comes up again as a site where a Bank Account can be linked by ACH transfer. Once you verify your identity you have some trade restrictions lifted and get the ability to perform immediate transfers from USD to BTC and back.

So we’ve done an ACH transfer between Coinbase and our bank account. You’ve gotta have some serious trust here at this point because you’re disappearing real world money for cryptographic virtual assets. Gird your loins – you just typed your bank account info. It might be preferable to set up a small real world bank account with just a bit of cash in it for this exact purpose rather than linking your life savings to this site.

At the end of this step we should have some fraction of a BTC based on how much money we foolishly threw to the wind. You can keep this money in the online Coinbase wallet, you can spend it at a merchant, or you can transfer it to another wallet offline.

Step 3. Transfer to another wallet (just for fun)

I purchased 0.1 BTC today just to see what would happen. Turns out that was roughly $100 USD.  Enough that I felt I have some stake in things and so little that I wouldn’t mind if “terrible things” happened. As I write this it’s now worth $108 USD. ANYWAY – we have the money sitting there in an account. Now you can send some portion to another wallet via a BitCoin Address which might look something like

<redacted because this wallet doesn't exist anymore>

Feel free to send BTC to that address by the way if you found this article useful ;)

Let’s look at that address. BitCoin uses something called a shared ledger. You can think of this as a big checkbook for EVERY transaction. Everyone has a copy of the checkbook and to make updates into the checkbook you need to be cryptographically verified. This means EVERYONE can see EVERY transaction. So that account above – let’s see what it’s been up to:

That account has a .001 transfer into it as the only history. Here is where the BitCoin Address comes into play. If I spent from that address you could track my actions. Coinbase has a way (that I don’t understand yet) to have the transfer out come from a different address than the transfer in. That’s an anonymity best practice I’m still trying to wrap my head around.

Step 4. ???

Here I don’t really know. I guess buy some things with your BitCoins. Hold onto them for a while. Cash them out immediately. Whatever you want to do, so long as you don’t forget to report any profits on your taxes! I’m interested to see if Coinbase will provide any documentation to me at the end of the year to facilitate claiming any loss or profit (like my stock accounts do for me now).

Step 5. Bask in your new-found knowledge

Now you know a thing or two about BitCoin. Annoy your friends (like I’m doing RIGHT NOW). You can hang out in the SubReddit for BitCoin or watch the SUPER boring FinCEN / Senate hearing which had some interesting parts.