I try to automate everything I can, so even when configuring one automation system (Puppet), I use another one (Fabric) to do it. I use this fabfile whenever I need to install another Puppet agent or a Puppet server. Tested only on Ubuntu.
I wrote about Fabric’s error handling in this post, where task execution errors were handled like this:
env.warn_only = True

def cmd(cmd):
    if run(cmd).failed:
        sudo(cmd)
What if the task is a bit more complex and has multiple parts that can go wrong? You might want to abort the execution and do some kind of rollback action.
The task could be aborted just by using env.warn_only=False, but then the task would be aborted before we could do any rollback actions.
Without a wrapper it could be done like this:
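A sketch of how this could look, using Fabric 1.x's settings context manager and abort. The task, file paths and service name here are all hypothetical, just to illustrate the rollback-then-abort pattern:

```python
from fabric.api import run, sudo, abort, settings

def update_config():
    # Hypothetical multi-step task: back up a config file, replace it, reload.
    with settings(warn_only=True):
        run("cp /etc/myapp.conf /tmp/myapp.conf.bak")
        result = sudo("cp /tmp/new-myapp.conf /etc/myapp.conf")
        if result.failed:
            # Rollback action: restore the backup before aborting the run.
            sudo("cp /tmp/myapp.conf.bak /etc/myapp.conf")
            abort("Config update failed; restored the old file.")
        sudo("service myapp reload")
```

With warn_only on, a failed command doesn't kill the task immediately, so there is room to restore the backup; abort() then stops the whole run explicitly.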
Ubuntu 12.04 LTS has been released and it’s time to upgrade! do-release-upgrade is the recommended way, and don’t worry, I will use it. The problem is this: do-release-upgrade asks a lot of questions during the upgrade, but I have to upgrade multiple machines and I don’t feel like answering the same questions on each one. So how to automate this?
There’s a way to answer those questions in advance and in a single command. You can pipe some of the answers with echo and then use DEBIAN_FRONTEND=noninteractive for the rest of them. No preseeding needed at all. With this command you can automate a release upgrade completely.
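One possible shape of such a command. This is a hedged sketch: which answers you need to pipe (and how many) depends on your setup, so the piped "y" answers here are only illustrative:

```shell
# Pipe the confirmation answers; debconf defaults the rest noninteractively.
echo "y" | sudo DEBIAN_FRONTEND=noninteractive do-release-upgrade
```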
Remember to take backups in case something goes wrong.
In this tutorial I’m going to concentrate on the various settings you can use in your fabfile.
These settings can be added to the env variables, as decorators or inside the tasks. With roles
we can define, for example, which machines are servers and which are workstations.
I hope you’ve also read previous tutorials 1 & 2:
Your default settings should be set in the env variables right at the beginning of your fabfile.
These are the settings used in all of your tasks unless you define otherwise, and these
are the settings that we’ll modify in our tasks later. We have used them in the previous tutorials.
env.hosts = ["firstname.lastname@example.org", "webserver.local"]
env.user = "hng"
env.password = "password"
env.parallel = True
env.skip_bad_hosts = True
env.timeout = 1
env.warn_only = True
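Since roles were mentioned above, here is a sketch of how they might be defined and used. The role names and the task are made up for illustration; the hosts are the same ones as in the defaults:

```python
from fabric.api import env, roles, sudo

# Hypothetical role definitions mapping role names to host lists.
env.roledefs = {
    "servers": ["webserver.local"],
    "workstations": ["firstname.lastname@example.org"],
}

@roles("servers")
def upgrade_servers():
    # Runs only on the hosts in the "servers" role.
    sudo("apt-get update && apt-get -y upgrade")
```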
Last week I was given an opportunity to give a lecture on our Centralized Management Linux course taught by Tero Karvinen. I’m actually a student on this course, so I was quite surprised when Tero asked me. This was the first time ever I’ve been asked to give a proper lecture to a class of more than 20 students, so I was very honored and excited. Of course I’ve done many presentations in front of a class before, but giving a full four-hour lecture was something new.
The topic was a Python tool called Fabric and I was given a free hand in how to do it. I do have some knowledge of Fabric, as I was responsible for the Fabric configuration in our AwaseConfigurations project, and I’m also using Fabric in my thesis, which I’m working on right now. I had about a week to prepare for the lecture, which was plenty of time to get ready. I was a bit worried though, as I’ve never even thought of lecturing like this, and who knows if I’m any good.
And you know what? It couldn’t have gone any better. The students were paying attention and asking questions, listening to what I had to say. I had time to go through all of my material and I like to believe many learned the basics of Fabric quite well.
It was overall a very nice experience and I’m happy I did it. The planning and the actual lecture also taught me to structure my thoughts in a way that made it easy to share the information and teach others. Big thanks to Tero for trusting me and giving me this opportunity!
I’ve gathered all the discussed material and a bit more to these two Fabric tutorials:
In the first tutorial we learned to run commands on remote hosts with Fabric. Now we move on to
transferring files. Transferring new configuration files is usually quite an important part of system administration.
Also, retrieving log files from the remote machines might be useful.
Let’s assume we’ve made a new ssh_config file with important changes and we want to send it to our
remote hosts. Here’s a task for sending files.
def file_send(localpath, remotepath):
    put(localpath, remotepath, use_sudo=True)
Run it with:
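With Fabric's task:arg1,arg2 syntax, the invocation would look something like this (the local path here is hypothetical):

```shell
fab file_send:/home/hng/ssh_config,/etc/ssh/ssh_config
```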
or if the modified ssh_config is in the directory where you’re running Fabric:
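For example, assuming the usual destination of /etc/ssh/ssh_config:

```shell
fab file_send:ssh_config,/etc/ssh/ssh_config
```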
If we’re sending the file to a location that doesn’t need sudo, e.g. /tmp/, we don’t need use_sudo=True.
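The retrieval direction mentioned earlier works with Fabric's get(). A minimal sketch (the log path is hypothetical; the %(host)s placeholder is Fabric 1.x's way to keep files from different hosts apart):

```python
from fabric.api import get

def file_get(remotepath, localdir):
    # Download e.g. /var/log/syslog from each host into localdir,
    # prefixing each file with the host it came from.
    get(remotepath, localdir + "/%(host)s-%(basename)s")
```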
This is a guide for installing and using Fabric on Ubuntu.
Fabric is a Python tool and a library for combining Python with SSH.
The tool can be used to execute Python functions from the command line. It’s also
possible to execute shell commands over SSH with Fabric, and by combining the Python functions
with SSH it’s possible to automate system administration tasks. (fabfile.org)
You can use Fabric as a tool for centralized configuration management. You can run administrative tasks
on all of your machines simultaneously. Fabric is fast and easy to install and start using since there’s
no configuration needed, just install and start adding tasks to your fabfile.
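To give an idea of how little is needed: on Ubuntu, Fabric installs with sudo apt-get install fabric (or pip install fabric), and after that a fabfile can be as small as this sketch (the task is just an illustration):

```python
# fabfile.py: a minimal sketch with one task that checks uptime on each host.
from fabric.api import run

def uptime():
    run("uptime")
```

Run it with fab -H webserver.local uptime (host name illustrative).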
Fabric doesn’t require anything else on the remote systems but a working SSH server. So if you
can control a device with SSH, you can also control it with Fabric. This is why Fabric is such
an easy and nice tool to add to your sysadmin tools. If you prefer Ruby over Python, take a look at a
similar tool called Capistrano.
In these tutorials I will go through the installation and all the basics you need to start using
Fabric efficiently.