Obsidian for Tasks and Lowering Blood Pressure

I started creating tasks in Obsidian using the Tasks plugin and at first it was amazing. It really helped me lower my stress levels because I knew that I could just write something down in my notes and Obsidian would remember it for me. I could quickly scan through my task list and make sure I wasn’t forgetting something important.

This went fine for a few weeks, until my pile of incomplete tasks started to stack up. Then I’d wake up on a Saturday morning to look at my list of outstanding obligations. My failures as a husband, friend, and homeowner were standing in clear definition right in front of me.

How could I possibly sit down and relax if I had all this stuff to do?

Let’s look at my problematic solution. I was using a simple query to easily pull together all of the incomplete tasks from every note.

```tasks
not done
```

I’d see something like the following when that query was executed.

Stuff I Haven’t Done Because I Suck

Then my amazing sister had a great idea. What if on the weekend I cut myself some slack? What if I just looked at the high priority stuff on the weekends and let everything else go for some other time?

This has been a treat and has really lowered my blood pressure.

Now my query is:

```tasks
not done
priority is high
starts before tomorrow
short mode
group by tags
```

And my list looks something like this:

I can do this!

The key is really filtering for the high-priority tasks; that’s the `priority is high` line. Hiding anything that hasn’t started yet with `starts before tomorrow` is great too. Anything I’d have to do next week, or next month, doesn’t need to wreck my Saturday.

The `short mode` and `group by tags` lines are just there for a little more structure.
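For context, here’s roughly what those tasks look like in my notes. This is just a sketch using the Tasks plugin’s emoji conventions for priority and start date (the task text is made up):

```markdown
- [ ] Call the plumber about the leaky faucet ⏫ 🛫 2020-05-02
- [ ] Re-stain the back deck 🔽 🛫 2020-06-15
```

With the weekend query above, the first task shows up once its start date arrives. The second stays hidden both because it’s low priority and because it doesn’t start until June, so it stays out of sight until I actually need it.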

Thanks technology for helping me prioritize. Also thanks to my sister!

Obsidian for Notes

I’ve been using Obsidian to take notes and track all the action items in my life. I’ve really enjoyed it and even signed up for the paid sync service.

I blame my sister Jen for this, because she’s the one who got me started with bullet journals. I journaled on paper for a few years, but then got a bit behind. I found myself either too busy, too tired, or too distracted to pick up the pen and the journal. I ALSO found myself really struggling with how to even make sense of two years’ worth of bullet journals and the pile of unresolved bullets.

Obsidian helps out here because I can take notes from my phone, iPad, or computer. I’m also a huge nerd about queries, so I’m using Tasks and Dataview to keep track of and sort the outstanding items.

Like most things I pick up – it INEVITABLY gets put back down when everything else gets too busy – but I’ve found it easy to pick back up.

I also created a “Shortcut” so I can create a note with my voice through Siri. Maybe that’s a post for another day.

Home Improvement Landscaping

The previous owner of our property believed in letting nature run its course entirely without human intervention. As a result, our front yard had been a long-outstanding project that we’d been afraid to tackle. In addition, we had some serious drainage problems that led to standing water in the driveway and sump pumps that just dumped out onto the surface of the front yard.

This year we decided to change all that and had For Garden’s Sake do some landscaping for us. In addition to dealing with the drainage and overgrowth, we wanted them to fix the front yard that AT&T had dug up numerous times, as well as connect our awesome new front porch to the driveway in a nice-looking way. I would say mission accomplished – but you can judge for yourself by looking at the following before and after shots:

Before:

New paint, new porch, old landscaping.

After:

New landscaping and stonework leading up to the front porch.

It’s a huge improvement, and the galleries above should give you an idea of the overall project and progress for the entire front yard. We’re happy with the work and looking forward to watching all of these new native plants grow.

Best of all, there is no more standing water to worry about!

18 Years of Blogging

This blog has been alive in some form since 2002. As you can imagine, a lot has changed in 18 years. The blog has moved physically from servers under desks, to servers in closets, and finally to virtual machines in the cloud. Even those VMs have changed AWS instance sizes and Ubuntu LTS versions over time. The blogging platform has also changed. I started this thing with a custom set of PHP pages that I wrote myself to pull entries from a MySQL database. I migrated to something called Serendipity for a while to get more features, and then finally to WordPress. All the while the MySQL backend has stayed pretty much the same.

My photo sharing strategy has also changed. I moved from a directory of images, to Menalto Gallery, to Gallery3, and recently to Piwigo. Gallery3 has been unsupported for a VERY long time, but the nail in the coffin was that it required me to keep an older PHP version on the server. The switch to Piwigo allows me to ditch my Ubuntu PPAs for old PHP versions, and lets me use a photo sharing platform that receives updates, has themes, and works well on mobile.

The problem I face now is that in 18 years I’ve linked to a LOT of photos directly from the blog posts. In some places I wrote the HTML directly, in other places I inserted a link to an image using a WordPress block. In almost all of those cases I’m linking to Gallery3 album pages that no longer function. ALL of the older JPG images are still in the Gallery3 folders, taking up lots of space since they also exist in Piwigo.

I’m left with a few options:

  1. Go back and edit every single blog post that linked to images, swapping in the new Piwigo links.
    1. This may be easier if I create a LOT of permalinks inside Piwigo’s “category” structure for albums.
  2. Create some complicated mod_rewrite rule that intercepts the old links and sends them to the new ones (a rough sketch of what that might look like follows this list).
    1. This will REQUIRE that I create a lot of permalinks in Piwigo.
  3. Put some boilerplate text at the top of every old post that points to the new high level photo gallery.
    1. Not great, because if I’ve learned anything – this won’t be the last platform migration I ever do.
  4. Do nothing. Leave the old links and images broken. Think only of now.
    1. This is tempting, but there is a lot of nostalgia in those old posts.
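For option 2, here’s roughly the shape that mod_rewrite rule would take. This is only a sketch with hypothetical URL patterns and a hypothetical lookup file – the real work is building the map of old Gallery3 album names to new Piwigo category IDs, which is exactly why that option requires all those permalinks:

```apache
# Hypothetical sketch: redirect old Gallery3 album URLs to Piwigo categories.
# RewriteMap has to live in the server or VirtualHost config, not .htaccess.
RewriteEngine On
RewriteMap gallery3piwigo txt:/etc/apache2/gallery3-to-piwigo.map

# /gallery3/index.php/some-album  ->  /piwigo/index.php?/category/<mapped id>
# Falls back to category 1 (a placeholder) if an album isn't in the map.
RewriteRule ^/gallery3/index\.php/(.*)$ /piwigo/index.php?/category/${gallery3piwigo:$1|1} [R=301,L]
```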

Wish me luck on this project. I still have some work to do, and next week I should have some time to focus on it.

Update 2020-04-29:

Crazy twist. I swapped a BUNCH of PHP packages from a custom PPA back to the default Ubuntu repo packages, and now my old Gallery3 is working again. I suppose this gives me an option to keep old images on Gallery3 and new images in Piwigo. Still more thought needed here.

Home Improvement Porch Project

Kat bought our house around the same time that I bought my condo in 2014, before we even met! It’s an amazing property on a large wooded lot with nice privacy and set back a bit from the street in a quiet neighborhood.

The house had a great back deck with room for an outdoor dining table. Being in the woods and in NC means we had quite a few usable months for the porch – but our nemesis, the Aedes aegypti mosquito, made the outdoors pretty much a no-go zone whenever it had been above freezing for a few weeks.

Old Back Deck – Tree Removal In Progress

For years we were spraying for mosquitoes trying to keep the situation under control. We weren’t happy about having a service spray chemicals, but the mosquitoes were so thick in the air as to create a visible blood-sucking cloud.

We stopped the spray service when our next-door neighbor started raising bees and at that point we ceded control back to the mosquitoes. We needed to take more drastic (and expensive) measures to maintain an outdoor dining experience.

Last year, we decided we were going to replace the rotting front and rear decks with new, long-lasting Trex decking and build a covered, screened-in section in the back. Along the way we cut down a number of trees, replaced the roof, and are nearing the final point where the back porch gets a screen added. Check out the album below to see the house as it changed over time.

Photos of our house project

We’re excited to use the new covered section as the temperatures warm up. We even plan on spending a few of our work from home days out there. We made sure there was PLENTY of power and that the WiFi signal covers the area!

There are still a few finishing touches we’re waiting on in addition to the screens. We need gutters and downspouts, the paint finished on the outside siding, a few more lights and switches wired in, and some interior trim.

The next part of this crazy project is landscaping – but that may have to wait a while. What do you think of the new porch and the new look?

Ditching No-IP for DuckDNS

I’ve been using dynamic IP address services from no-ip.com for a long time to give a DNS name to my dynamic home IP. I remember having a no-ip.com address in college around the early 2000s for my dorm (and later apartment) computer. I went through a hiatus after college where I managed my domain-name-to-IP mappings manually for a few years (with my own BIND setup), until it seemed like the ISPs started giving me a new address monthly and the burden was too much. I also sold the hardware the DNS server ran on at some point during my moves.

I was happy to see that after over 10 years away, no-ip was still there, and I signed up right away. I have an Ubuntu Linux system as my home desktop (and server, really) and was glad to find a client that seemed to run on it. It was a little finicky to get going and was a black-box sort of client to install. That wasn’t a good sign to start.

Then came the emails. Oh my god, the emails. Every 30 days they want to make sure you’re still there and have you verify your domain name. They give you a 1-week notice too, so this means every 21 days you’re getting an email to verify that you’re still there. If you don’t verify, they say they’ll delete your account. When you click through the email to the site, it’s VERY clear they’d rather have you sign up for a paid account. Every 21 days for a few years I got nagged until I couldn’t take it anymore! They were clearly telling me to take my business somewhere else if I didn’t want to pay.

This sort of thing just bugs me.

At the open source conference All Things Open, someone mentioned DuckDNS and I decided to give it a try. It’s been a night-and-day difference from no-ip so far, for the better.

Here’s what I like:

  1. There is no black box app or script to install on my Linux box. They just use cron and wget.
  2. The instructions are clear and easy to copy and paste.
  3. No one has emailed me anything yet, and since they have ZERO marketing team, I’m assuming no one will.

If you want to get started I highly recommend them. If you’re running a Linux server it’s dead simple.

The whole idea is that you sign up on duckdns.org and get a unique key for your desired domain name. You take this unique key and their copy-and-paste script and run it every 5 minutes as a cron job.

Every 5 minutes, your server calls wget (via cron) with the DuckDNS URL and your key, and their server figures out your external IP and updates the record accordingly. It was ridiculously easy to get started and it uses systems I already know and love (bash and cron).
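For the curious, the whole setup is basically one tiny script and one crontab line. The subdomain and token below are placeholders (you get your own token when you sign up), but this follows the shape of their copy-and-paste instructions:

```bash
#!/bin/bash
# ~/duckdns/duck.sh -- update DuckDNS with this machine's current external IP.
# "exampledomain" and "your-token-here" are placeholders; substitute your own.
# Leaving ip= empty lets the DuckDNS server detect the external address itself.
wget -qO ~/duckdns/duck.log \
  "https://www.duckdns.org/update?domains=exampledomain&token=your-token-here&ip="

# Crontab entry (crontab -e) to run the update every five minutes:
#   */5 * * * * ~/duckdns/duck.sh >/dev/null 2>&1
```

The log file just ends up containing “OK” or “KO” depending on whether the update worked.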

Let’s Encrypt – How do I Cron?

Let’s Encrypt was really easy to set up, but cron was less so. I kept getting emails that the Let’s Encrypt renewal was failing:

```
2017-03-09 02:51:02,285:WARNING:letsencrypt.cli:Attempting to renew cert from /etc/letsencrypt/renewal/bbbburns.com.conf produced an unexpected error: The apache plugin is not working; there may be problems with your existing configuration.
The error was: NoInstallationError(). Skipping.
1 renew failure(s), 0 parse failure(s)
```

I had a cron job set up with the absolute bare minimum:

```
crontab -e
56 02 * * * /usr/bin/letsencrypt renew >> /var/log/le-renew.log
```

When I ran `/usr/bin/letsencrypt renew` at the command line, everything worked just fine. I was like, “Oh – this must be some stupid cron thing that I used to know, but never remember.”

Turns out the problem was the cron environment’s PATH variable. Cron didn’t have /usr/sbin in its PATH, and apparently certbot needed that to find the apache2 binary. The fix was to change the cron entry to the following:

```
56 02 * * * /root/le-renew.sh
```

Then create a script that runs the renewal after the PATH variable is set correctly:

```bash
cat /root/le-renew.sh
#!/bin/bash
# Automate the LE renewal process

# Need /usr/sbin for apache2
# https://github.com/certbot/certbot/issues/1833
export PATH=$PATH:/usr/sbin

# Renew the certs and log the results
/usr/bin/letsencrypt renew >> /var/log/le-renew.log
```
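If you don’t want to wait for 02:56 to roll around to find out whether the fix worked, you can approximate cron’s stripped-down environment by hand. This is just a sanity check I’d suggest, not part of the original setup:

```bash
# Run the renewal script with an empty environment and a minimal PATH,
# roughly mimicking what cron hands to the job.
sudo env -i PATH=/usr/bin:/bin /bin/bash /root/le-renew.sh

# Then check the tail of the log for the renewal results.
sudo tail /var/log/le-renew.log
```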

It was a good thing I put the link to the problem right in the script, or I never would have been able to find it again to write this blog.

NOW my renewal works absolutely fine. Problem solved. Thanks Cron.

Let’s Encrypt – Easy – Free – Awesome

I recently saw a news article about StartCom being on Mozilla and Google’s naughty list. Things looked bad, and my StartCom certs were up for renewal on the blog.

I had seen articles flying around about Let’s Encrypt for a while. The idea seemed awesome, but the website seemed so light on technical instructions that I didn’t know if it would actually work. I wanted to know EXACTLY what lines it would propose to hack into my carefully manicured Apache configuration. And by carefully manicured, I mean “strung together with stuff I copied and pasted from Stack Overflow”.

I couldn’t find the information I really wanted – so I just JUMPED in and started installing things and running commands. 30 seconds later, I had a fully functioning cert on my site. I was blown away. It copied my existing non-SSL vhost config and created a new vhost with SSL enabled. All I had to do was enter my email address, select the vhost to enable SSL for, and hit GO.
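For reference, the whole dance was roughly the following. Package names are from memory for Ubuntu at the time (newer releases ship this as certbot instead), and the domain is just my own vhost:

```bash
# Install the client and its Apache plugin, then let it do the vhost surgery.
# It prompts for an email address and which vhost to enable SSL for.
sudo apt-get install letsencrypt python-letsencrypt-apache
sudo letsencrypt --apache -d bbbburns.com
```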

I had to put in a crontab entry myself to get the auto-renewal to work but that wasn’t so bad. I would hope they improve that in the future – but cron is no big deal.

I’m interested to see if everything works when my web certs expire 90 days from now! Crazy times. I used to do this and dread it once per year because the process was so manual. Now that it’s automated – I’ll get new certs while I’m sleeping. Woohoo.

Nutanix AHV Best Practices Guide

In my last blog post I talked about networking with Open vSwitch in the Nutanix hypervisor, AHV. Today I’m happy to announce the continuation of that initial post – the Nutanix AHV Best Practices Guide.

Nutanix introduced the concept of AHV, based on the open source Linux KVM hypervisor. A new Nutanix node comes installed with AHV by default, with no additional licensing required. It’s a full-featured virtualization solution that is ready to run VMs right out of the box. ESXi and Hyper-V are still great on Nutanix, but AHV should be seriously considered because it has a lot to offer, with all of KVM’s rough edges rounded off.

Part of introducing a new hypervisor is describing all of the features, and then recommending some best practices for those features. In this blog post I wanted to give you a taste of the doc with some choice snippets to show you what this Best Practice Guide and AHV are all about.

Take a look at Magnus Andersson’s excellent blog post on terminology for some more detailed background on terms.

Acropolis Overview

Acropolis (one word) is the name of the overall project encompassing multiple hypervisors, the distributed storage fabric, and the app mobility fabric. The goal of the Acropolis project is to provide seamless, invisible infrastructure whether your VMs run in AWS, Hyper-V, ESXi, or AHV. The sister project, Prism, provides the user interface to manage it all via GUI, CLI, or REST API.

AHV Overview

AHV is based on the open source KVM hypervisor, but is enhanced by all the other components of the Acropolis project. Conceptually, AHV has access to the Distributed Storage Fabric for storage, and the App Mobility Fabric powers the management plane for VM operations like scheduling, high availability, and live migration.

 

The same familiar Nutanix architecture exists, with a network of Controller Virtual Machines providing storage access to VMs. The CVM takes direct control of the underlying disks (SSD and HDD) with PCI passthrough and exposes these disks to AHV via iSCSI (the blue dotted VM I/O line in the diagram). The management layer is spread across all Nutanix nodes in the CVMs using the same web-scale principles as the storage layer. This means that, by default, a highly available VM management layer exists. No single point of failure anymore! No additional work to set up VM management redundancy – it just works that way.

AHV Networking Overview

Networking in AHV is provided by an Open vSwitch (OVS) instance running on each AHV host. The BPG doc has a comprehensive overview of the different components inside OVS and how they’re used. I’ll share a teaser diagram of the default network config after installation on a single AHV node.

AHV Networking Best Practices

Bridges, Bonds, and Ports – oh my. What you really want to know is, “How do I plug this thing into my switches, set up my VLANs, and get the best possible load balancing?” You’re in luck, because the Best Practice Guide covers the most common scenarios for creating different virtual switches and configuring load balancing.

Here’s a closer look at one possible networking configuration, where the 10-gigabit and 1-gigabit adapters have been connected to separate OVS bridges. With this design, User VM2 can connect to multiple physically separate networks, to allow things like virtual firewalls.

After separating network traffic, the next thing is load balancing. Here’s a look at one possible load balancing method, balance-slb. Not only does the BPG provide the configuration for this, but also the rationale. Maybe fault tolerance is important to you. Maybe an active-active configuration with LACP is important. The BPG covers the config and the best way to achieve your goals.
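To give a flavor of what’s under the hood, here’s what a bond like that looks like in plain Open vSwitch terms. This is a generic OVS sketch with placeholder interface names, not the exact supported Nutanix workflow – the BPG documents the real commands to run (and where to run them) on a cluster:

```bash
# Generic Open vSwitch example: a bridge with two uplinks bonded together.
# br0 / br0-up / eth2 / eth3 are placeholders.
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 br0-up eth2 eth3

# Source-MAC-hash load balancing across both links; no LACP needed upstream.
ovs-vsctl set port br0-up bond_mode=balance-slb

# Or, if the upstream switches support LACP, go active-active instead.
ovs-vsctl set port br0-up lacp=active bond_mode=balance-tcp
```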

For information on VLAN configuration, check out the Best Practices Guide.

Other AHV Best Practices

This BPG isn’t just about networking. The standard features you expect from a hypervisor are all covered.

  • VM Deployment
    • Leverage the fantastic aCLI, GUI, or REST API to deploy or clone VMs.
  • VM Data Protection
    • Back up VMs with local or remote snapshots.
  • VM High Availability
    • During physical host failure, ensure that VMs are started elsewhere in the cluster.
  • Live Migration
    • Move running VMs around in the cluster.
  • CPU, Memory, and Disk Configuration
    • Add the right resources to machines as needed.
  • Resource Oversubscription
    • Rules for fitting the most VMs onto a running cluster for max efficiency.

Take a look at the AHV Best Practice Guide for information on all of these features and more. With this BPG in hand you can be up and running with AHV in your datacenter and get the most out of all the new features Nutanix has added.

Survivable UC – Avaya Aura and Nutanix Data Protection

I wanted to share a bit of cool “value add” today, as my sales and marketing guys would call it. This is just one of the things for Avaya Aura and UC in general that a Nutanix deployment can bring to the table.

Nutanix has the concepts of Protection Domains and Metro Availability, which have been covered in pretty great detail by some other Nutanix bloggers. Check out detailed articles here by Andre Leibovici, and here by Magnus Andersson for in-depth info and configuration on Metro Availability.

Non-redundant Applications

In an Avaya Aura environment, most machines will be protected from failure at the application level. A hot standby VM will be running to take over operation in the event of primary machine failure, as with Session Manager and Communication Manager. In the following example we see that System Manager, AES, and a number of other services don’t have a hot standby. This might be because it’s too expensive resource-wise or licensing-wise, or because the application demands don’t call for it.

1000-user topology

If multiple Nutanix clusters are in place, we actually have two ways to protect these VMs at the Nutanix level.

Nutanix Protection Domains

First, let’s look at Protection Domains. With a Protection Domain, we configure an NDFS-level (Nutanix Distributed Filesystem) snapshot that happens at a configurable interval. This snapshot is intelligently replicated (with deduplication) to another Nutanix cluster. It’s different from a vSphere snapshot because the virtual machine has no knowledge that a snapshot took place and no VMDK fragmentation is involved. None of the standard warnings and drawbacks of running with snapshots apply here. This is a Nutanix metadata operation that can happen almost instantly.

We pick individual VMs to be part of the Protection Domain and replicate these to one or more sites.

In the event of a failure of a site or cluster, the VM can be restored at another site, because all of the files that make up the Virtual Machine (excluding memory) are preserved on the second Nutanix cluster.

Protection Domain

 

Nutanix Metro Availability

But I hear you saying, “Jason that’s great, but a snapshot taken at intervals is too slow. I can’t possibly miss any transactions. My UC servers are the most important thing in my Data Center. I need my replication interval to be ZERO.” This is where Metro Availability comes in.

Metro Availability is a synchronous write operation that happens between two Nutanix clusters. The requirements are:

  1. A new Nutanix container must be created for the Metro Availability protected machines.
  2. RTT latency between clusters must be less than 5 milliseconds (about 400 kilometers – light in fiber covers roughly 100 kilometers of separation per millisecond of round trip, and switching gear eats the rest of the budget).

Since this write is synchronous, all disk write activity on a Metro Availability protected VM must be completed on both the local and the remote cluster before it’s acknowledged. This means all data writes are guaranteed to be protected in real time. The real-world limitation here is that every bit of distance between clusters adds latency to writes. If your application isn’t write-heavy you may be able to hit the max RTT limit without noticing any issues. If your application does nothing but write constantly to disk, 400km may need to be re-evaluated. Most UC machines are generally not disk intensive though. Lucky you!

Metro Availability

In the previous image we have two Nutanix clusters separated by a metro ethernet link. The standalone applications like System Manager, Utility Services, Web License Manager, and Virtual Application Manager are being protected with Metro Availability.

In the event of a Data Center 1 failure, all of the redundant applications will already be running in Data Center 2. The administrator can then start the non-redundant VMs, either manually or through a detection script, using the synchronous copies residing in Data Center 2.

Summary

Avaya Aura Applications are highly resilient and often provide the ability for multiple copies of each app to run simultaneously in different locations, but not all Aura apps work this way. With Nutanix and virtualization, administrators have even more flexibility to protect the non-redundant Aura apps using Protection Domains and Metro Availability.

These features present a consumer-friendly GUI for ease of operation, and also expose APIs so the whole process can be automated into an orchestration suite. These Nutanix features can provide peace of mind and real operational survivability on what would otherwise be very bad days for UC admins. Nutanix allows you to spend more time delivering service and less time scrambling to recover.