Create dedicated Cisco management interface

It is a best practice to have your production network devices connected both to their respective production networks and to an out-of-band network for strict management purposes. Most Cisco switches, such as the Cisco Nexus 5K or the Cisco Catalyst 2960-S, have a built-in physical management interface. This interface is often different from the other interfaces on the switch in that it has its own default gateway, keeping out-of-band routing self-contained in the out-of-band network. On the Cisco Nexus, the interface is assigned to a default management VRF.

Not all switches have this dedicated management interface implemented in hardware, but we can create one that mimics the intended design of a physical management interface. To do this, we have to do a few things (this method is intended for use on L3 Catalyst switches):

  1. Dedicate an interface on the switch for management purposes, to be connected to your out-of-band and/or management network.
  2. Create a separate "management only" routing/forwarding table, aka VRF
  3. Assign your dedicated interface to use this management VRF, via SVI

On a Cisco Catalyst licensed for VRF usage, this can be achieved. I am using a 3560-CX running IOS 15.2, which does not have a dedicated management interface.

To start, you need to prepare your physical interface by assigning it to your respective management/OOB VLAN; my example uses VLAN 10.

sw1.lab1#conf t
sw1.lab1(config)#int gi0/9
sw1.lab1(config-if)#description "Management VRF intf"
sw1.lab1(config-if)#switchport mode access
sw1.lab1(config-if)#switchport access vlan 10
sw1.lab1(config-if)#exit

Next, we need to prepare the VRF before associating our new interface with it. Note that I create the VRF definition and then enter the IPv4 address family; entering the address family is mandatory, and your VRF will not route/forward IPv4 traffic without it. If you intend to use IPv6 for management on this switch, add its address family as well.

sw1.lab1(config)#vrf definition management
sw1.lab1(config-vrf)#address-family ipv4
sw1.lab1(config-vrf-af)#exit-address-family
sw1.lab1(config-vrf)#exit

Now that the definition is created, we can add a route to our new VRF! Since this is strictly for management purposes and we don't need to do inter-VLAN routing, we only need a default route, which mimics the default-gateway command used on a Catalyst switch with a dedicated physical management interface.

sw1.lab1(config)#ip route vrf management 0.0.0.0 0.0.0.0 10.10.0.1
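To sanity-check the route, you can inspect the VRF's routing table; output below is illustrative and trimmed:

```
sw1.lab1#show ip route vrf management static

S*   0.0.0.0/0 [1/0] via 10.10.0.1
```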

With this method we are utilizing an SVI, so one needs to be created for the respective VLAN. If it has not been created already, create it and assign it to your new VRF with the 'vrf forwarding VRFNAME' command.

sw1.lab1(config)#int vlan10
sw1.lab1(config-if)#vrf forwarding management
sw1.lab1(config-if)#ip address 10.10.0.100 255.255.255.0
sw1.lab1(config-if)#end
sw1.lab1#wr mem

This completes the configuration, and you should now be able to reach your switch via the new management VRF. Note that if the SVI existed before you added it to the VRF, IP-related information such as the IP address may have been deleted. Also, this traffic will be routed using the new default route you added to the management VRF, so make sure the next-hop device you set is able to route back to the intended management workstation.
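A quick way to confirm reachability from the switch side is to source a ping from within the VRF, using the next-hop address from the example above:

```
sw1.lab1#ping vrf management 10.10.0.1
```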

If you have any questions or comments, please leave them below!

Moved blog to jekyll on nginx on docker

After my yearly lease came up for my VPS service, I decided to move providers and ended up going with AWS. I also did not want to use WordPress anymore and liked the idea of static site generators; after some thought, I chose Jekyll. Since I was not using GitHub Pages, I wanted an alternative way to have my site regenerated every time I have a successful git push. This led me to my next few choices:

  1. Run my blog on Docker

Respawning the site after every change made Docker seem like a perfect choice, and I had some experience with it already. After I had created my site in Jekyll, I gathered all the dependencies needed to run it in Docker.

  2. Use nginx for control

Using just Jekyll leaves you with a working static www root but no control over rewrites/redirects or custom magic. No offense to Jekyll, it does a great job with static site generation, but you should serve it with a real web server. Jekyll does support dumping the site content into a folder, which makes it very compatible with nginx or Apache.
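A minimal sketch of what serving that Jekyll output with nginx can look like; the hostname and paths here are placeholders, not the actual site config:

```nginx
# Serve the static files that `jekyll build` dumped into the www root.
server {
    listen 80;
    server_name blog.example.com;            # placeholder hostname

    root /var/www/blog.example.com;          # Jekyll build destination (_site contents)
    index index.html;

    location / {
        # Serve files directly; fall back to Jekyll's generated 404 page
        try_files $uri $uri/ /404.html;
    }
}
```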

the configuration

I worked with Jekyll locally and created a template I admired, which started from martin308's left-stripe theme with a handful of modifications to make it work for me. These modifications included category support, social share buttons, a complete color theme change, and a few others.

After my testing, I had come up with a solid config, my Dockerfile below:

############################################################
# Dockerfile to build Nginx Jekyll blog
# ~ on Ubuntu ~
############################################################

###################################
# Set the base image to Ubuntu
###################################
FROM ubuntu:latest

###################################
# File Author / Maintainer
###################################
MAINTAINER Nick Reese [email protected]

###################################
# Update the repository
###################################
RUN apt-get update

##################################
# Download and Install utilities
###################################
RUN apt-get install -y curl bc apt-transport-https

###################################
# Download and install dev
###################################
RUN apt-get install -y rubygems ruby-dev gcc make
RUN gem install jekyll rouge kramdown git zip

###################################
# Download and Install Services
###################################
RUN apt-get install -y nginx nginx-extras 

###################################
# Copy site repository files
###################################
RUN mkdir /srv/site-repo/
ADD . /srv/site-repo/

####################################
# nginx
####################################
RUN mkdir /etc/nginx/ssl

ADD etc/nginx/nginx.conf        /etc/nginx/
ADD etc/nginx/ssl/self-signed.crt /etc/nginx/ssl/
ADD etc/nginx/ssl/self-signed.key /etc/nginx/ssl/
ADD etc/nginx/conf.d/common.conf  /etc/nginx/conf.d/

ADD etc/nginx/sites-available/rsty.me                   /etc/nginx/sites-available/
ADD etc/nginx/sites-available/blog.rsty.me                   /etc/nginx/sites-available/

RUN ln -s /etc/nginx/sites-available/rsty.me         /etc/nginx/sites-enabled/
RUN ln -s /etc/nginx/sites-available/blog.rsty.me         /etc/nginx/sites-enabled/

RUN mkdir /var/www/blog.rsty.me
RUN jekyll build -s /srv/site-repo/ -d /var/www/blog.rsty.me/

RUN rm /etc/nginx/sites-enabled/default

###################################
# Remove default nginx content
###################################
RUN rm -rf /usr/share/nginx/html

###################################
# Expose ports
###################################
EXPOSE 80
EXPOSE 443

###################################
# Add build scripts
###################################
ADD run/services /run/
RUN chmod +x /run/services

###################################
# START SERVICES
###################################
CMD /run/services

The above configuration does a few things:

  1. Sets the base image to the latest Ubuntu image
  2. Updates the image and installs my dependencies
  3. Copies my complete repository to the container
  4. Copies my nginx configs to the container
  5. Creates the www root and runs Jekyll, using my copied repository files as the source and the www root as the destination
  6. Exposes ports 80 and 443
  7. Starts my services script, which basically just starts nginx

So this gets my blog up and running, but I would need to manually build and run my Docker images. I want to push changes to my private BitBucket repo and have my container automatically restarted with the new content. To do this, I need BitBucket to notify DockerHub, and DockerHub to relay a successful build notification to my server. This is the fun part :)

the automagical devopsy stuff

After having my docker/nginx/jekyll work completed, I needed to create a workflow for committing changes and triggering automatic builds. Having all my BitBucket repo pushes trigger a DockerHub build was pretty simple to set up. Next I needed an endpoint for DockerHub to send to after a successful build. After some searching, I found captainhook, a nice project by bketelsen, which is a web listener that can run scripts based on the URL called, aka a "webhook."

However, captainhook does not run over SSL, so I recommend having nginx forward requests to captainhook, which is how I set mine up (SSL reverse proxy). Also, run captainhook from cron so that it is restarted if it ever dies, or alternatively use supervisord.
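A rough sketch of that SSL reverse proxy; it assumes captainhook is listening on localhost port 8080 (adjust to whatever listen address you start it with), and the hostname is a placeholder:

```nginx
# Terminate SSL in nginx and hand webhook requests to captainhook.
server {
    listen 443 ssl;
    server_name hooks.example.com;            # placeholder hostname

    ssl_certificate     /etc/nginx/ssl/self-signed.crt;
    ssl_certificate_key /etc/nginx/ssl/self-signed.key;

    location / {
        proxy_pass http://127.0.0.1:8080;     # assumed captainhook listen address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```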

* * * * * pidof captainhook || /home/ec2-user/go/bin/captainhook -configdir /home/ec2-user/caphook &

This ensures that if captainhook has a hiccup and dies, it will be started again under a different PID. My configuration for restarting my Docker containers looks like this:

update-docker.json

{
    "scripts": [
        {
            "command": "/home/ec2-user/caphook/update-docker.sh",
            "args": [
                "NULL"
            ]
        }
    ]
}

update-docker.sh

#!/bin/bash

DOCKER_CURRENT=$(docker ps -a | egrep -i 'myrepo\/blog:latest' | awk '{ print $1}')
OLD_IMAGES=$(docker images | egrep -vi '(myrepo\/blog|REPO)' | awk '{ print $3}')

docker pull myrepo/blog
docker kill $DOCKER_CURRENT
docker rm $DOCKER_CURRENT
docker run -d -p 80:80 -p 443:443 myrepo/blog:latest

# cleanup
docker rmi $OLD_IMAGES
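The OLD_IMAGES extraction in the script can be sanity-checked without Docker by piping sample `docker images` output through the same filter; the sample data below is made up for illustration:

```shell
# Simulated `docker images` output; the real script pipes actual docker output.
sample='REPOSITORY          TAG     IMAGE ID      CREATED       SIZE
myrepo/blog         latest  aaa111bbb222  2 hours ago   250MB
ubuntu              latest  ccc333ddd444  3 weeks ago   120MB'

# Same filter as OLD_IMAGES in update-docker.sh: drop the header line and the
# current blog image, and print the image IDs of everything else (the
# candidates for removal).
echo "$sample" | egrep -vi '(myrepo\/blog|REPO)' | awk '{ print $3 }'
```

Only the ubuntu image ID survives the filter, which is exactly the set of images the script then removes with docker rmi.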

My ultimate workflow

  1. Test my content changes locally with Jekyll using jekyll serve
  2. If satisfied, delete the temp Jekyll _site destination directory, then git commit and git push

That sums it up! Let me know what you think!