How to set up OTRS on Ubuntu

First, a little info about OTRS, taken from Wikipedia:
OTRS, an initialism for Open-source Ticket Request System, is a free and open-source trouble ticket system software package that a company, organization, or other entity can use to assign tickets to incoming queries and track further communications about them. It is a means of managing incoming inquiries, complaints, support requests, defect reports, and other communications. http://en.wikipedia.org/wiki/OTRS

So my interest in OTRS started a couple of months ago. We had been using a very outdated ticketing system at my workplace for years and were thinking of changing to a better one. As I was working during the summer holidays, I had time to look for an alternative. I found OTRS, set it up on my test server, tested it for a few weeks and then migrated it to our production environment. Now we’ve been using OTRS for a month and it works like a charm. Usually when something is not working right in OTRS, it’s only a misconfiguration and easily fixed through the web GUI. We were also able to replace an old Windows server with an Ubuntu server in the process. Slowly migrating from Windows to Linux… 😉
Read more of this post

Puppet with Windows clients

I was asked if I had configured Windows clients for a Puppet server running on Linux. I have, and I write everything down whenever I make new configurations. So here are my notes on configuring Windows Puppet clients for an Ubuntu Puppet server. Don’t forget to check the official guides as well.
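To give a rough idea of where the client side ends up (the path and server name here are typical placeholders, not necessarily what my notes use), the Windows agent’s puppet.conf, usually found under C:\ProgramData\PuppetLabs\puppet\etc\, simply points the agent at the master:

[main]
    server = puppetmaster.example.local
    pluginsync = true

[agent]
    report = true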

Read more of this post

Fabfile for Puppet installations

I try to automate everything I can, so even when configuring one automation system (Puppet), I’m using another one (Fabric) to do it. I use this fabfile whenever I need to install another Puppet agent or a Puppet server. Tested only with Ubuntu; a minimal sketch of the agent part follows the version list below.

Test environment:
Ubuntu 12.10
Puppet 2.7.18
Fabric 1.4.2
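The full fabfile is behind the link; purely as a minimal sketch of the agent part (package and file names as on Ubuntu, the master hostname is a placeholder), the idea boils down to something like this:

# fabfile.py - minimal sketch, not the full fabfile from the post
from fabric.api import sudo

def install_agent(master="puppetmaster.example.local"):
    # install the agent from Ubuntu's repositories
    sudo("apt-get -y install puppet")
    # point the agent at the master (server= under [main] in puppet.conf)
    sudo("sed -i '/\\[main\\]/a server=%s' /etc/puppet/puppet.conf" % master)
    # allow the agent to start on boot and (re)start it now
    sudo("sed -i 's/START=no/START=yes/' /etc/default/puppet")
    sudo("service puppet restart")

Run it against a new machine with something like fab -H user@newhost install_agent.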

Read more of this post

Nagios email notifications & Puppet

I’ve been writing a bit about my Nagios setup lately, but there was still at least one important part missing: email notifications. With notifications configured, there will be an email alert every time one of my Nagios hosts or its services reaches a warning or critical state.

Postfix will be used to send emails and I will include a Puppet module for the configuration as well.
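The module itself is in the full post; at its core it only needs to make sure Postfix is installed and running, roughly along these lines (a simplified sketch, not the actual module from the post):

class postfix {
    package { 'postfix':
        ensure => installed,
    }

    service { 'postfix':
        ensure  => running,
        enable  => true,
        require => Package['postfix'],
    }
}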

Read more of this post

Nagios – NRPE and Windows Hosts

I was using check_nt in my Windows monitoring setup, but check_nt is actually quite outdated and limited, as Mr Medin, the creator of NSClient++, told me. He helpfully advised that I could use check_nrpe instead. I’ve done some tests now and got NRPE working, and I’ve also updated my nagios3 and nscp Puppet modules to include the NRPE configuration.

Here’s an example of a service that monitors physical memory usage on a Windows host:

define service {
  use                  generic-service
  check_command        check_nrpe!CheckMEM!MaxWarn=80% MaxCrit=90% ShowAll type=physical
  service_description  Memory usage
  host_name            remote-windows-host
}
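The check_command above relies on a check_nrpe command definition on the Nagios side; the commonly used one (my modules’ exact version is in the full post) passes the NSClient++ check name as the first argument and the rest as -a arguments:

define command {
  command_name  check_nrpe
  command_line  $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$
}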

Read more of this post

Puppet: Nagios3 module

In my previous post I told how I got Windows monitoring working with Nagios. The post included a Puppet module for NSClient++, which Nagios uses to communicate with Windows. The most important module in that setup is obviously the actual Nagios3 module, which manages the Nagios server and all the hosts. I’ve been working on it for a couple of days now and although it’s not complete, it works and is already available on our GitHub.

You can find the module here:
https://github.com/awaseroot/awaseroot/blob/master/puppet/modules/nagios3/manifests/init.pp

Read more of this post

Monitoring Windows with Nagios

I’m working in a Windows environment at my current job, so I will be posting a bit about Windows-related topics in the future, but the main focus will of course still stay on Linux. Setting up Nagios on a Linux server to monitor Windows machines felt like a good way to introduce some Linux functionality to our Windows network.

Windows monitoring was fairly simple to set up, but I did run into some small issues. All the guides and tutorials that I found were so outdated that they weren’t much help. This guide is for the latest Nagios and NSClient++ versions (at least for now). There’s a Puppet module for NSClient++ at the end of the post.
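The full walkthrough is behind the link, but on the Nagios side it starts with a plain host definition for the Windows machine, something like this (the host name and address are placeholders; windows-server is the template from Nagios’ sample configuration):

define host {
  use        windows-server
  host_name  remote-windows-host
  alias      Windows test server
  address    192.168.1.50
}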

Read more of this post

Improved Puppet LAMP module

I have made slight improvements to my old LAMP module. The new one can be found on our GitHub here, and the blog post about the old one is here.

The module has been tested on Ubuntu 12.04 and CentOS 6.2. It might work on Red Hat and Debian as well, but I haven’t tested those yet. It installs and configures Apache, PHP and MySQL. In its present state it works quite well, but I might still keep improving it.
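The module itself is on our GitHub (link above); stripped down to the basic idea, the Ubuntu side is mostly packages plus services, along these lines (a simplified sketch, not the module’s real code):

class lamp {
    package { ['apache2', 'php5', 'libapache2-mod-php5', 'mysql-server']:
        ensure => installed,
    }

    service { 'apache2':
        ensure  => running,
        enable  => true,
        require => Package['apache2'],
    }

    service { 'mysql':
        ensure  => running,
        enable  => true,
        require => Package['mysql-server'],
    }
}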
Read more of this post

New Script – Install Puppet on CentOS

As I’m now using Puppet with CentOS as well, I’d like to share the script I use to install Puppet on my CentOS VMs. The script installs Ruby too, since you need it to run Puppet.
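Stripped to the essentials, such a script doesn’t need much. This is only a rough sketch, not the script from the post, and it assumes a repository that ships Puppet (Puppet Labs’ EL6 repository or EPEL) is already enabled on the machine:

#!/bin/bash
# rough sketch - install Ruby and the Puppet agent on CentOS
yum -y install ruby
yum -y install puppet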

So far I’ve been using Puppet on CentOS without a puppetmaster, so it’s properly tested only as serverless Puppet, but there shouldn’t be any problems even if you’re using a puppetmaster. I’m actually combining Fabric and serverless Puppet to do some quick tests on multiple virtual machines, but maybe I’ll write a bit about that in another post some day. Anyway, here’s the script:
Read more of this post

Subsys lock problem with CentOS 6.2 and Apache

While I was improving my LAMP module I ran into a problem with Apache 2.2.15 on CentOS 6.2.
Apache wasn’t working with my new module, so I decided to install it normally via yum and see what was going on.
Got it installed just fine with:
yum install httpd

Ran sudo service httpd restart.
All went fine.

But when I checked the status:

service httpd status
httpd dead but subsys locked
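What actually fixed it is in the full post; a workaround you’ll often see suggested for this particular symptom (shown here only as the commonly cited variant, not necessarily the post’s solution) is clearing the stale lock file and starting Apache again:

sudo rm -f /var/lock/subsys/httpd
sudo service httpd start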

Read more of this post

Fabric – Wrappers and more error handling

I wrote about Fabric’s error handling in this post, where task execution errors were handled like this:

from fabric.api import env, run, sudo

env.warn_only = True

def cmd(cmd):
    # if running the command as the normal user fails, retry it with sudo
    if run(cmd).failed:
        sudo(cmd)

What if the task is a bit more complex and has multiple parts that can go wrong? You might want to abort the execution and do some kind of rollback action.

The task could be aborted by just using env.warn_only=False, but then the task would be aborted before we could do any rollback actions.
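As a rough illustration of the pattern (my sketch, not the example from the full post): keep warn_only on, check each step, and run the rollback yourself before aborting.

from fabric.api import sudo, settings, abort

def update_ssh_config():
    # sketch only: two steps, with a rollback if the second one fails
    with settings(warn_only=True):
        if sudo("cp /etc/ssh/ssh_config /tmp/ssh_config.bak").failed:
            abort("Could not back up ssh_config, stopping here.")
        if sudo("cp /tmp/ssh_config.new /etc/ssh/ssh_config").failed:
            # roll back to the backup before giving up
            sudo("cp /tmp/ssh_config.bak /etc/ssh/ssh_config")
            abort("New ssh_config failed, backup restored.")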

Without a wrapper it could be done like this:

Read more of this post

Script for adding multiple Vagrant boxes

I’ve been using Vagrant for a while now to test new configurations with simple virtual machines. Usually it’s just a single VM that I’m using but now I wanted to do some tests on more than one machine at the same time.

I was looking for a way to easily create multiple identical virtual machines with Vagrant, but couldn’t find a good solution, so I made this small script to do it for me. If you know a better way, please leave a comment and then we can both laugh at this script. I have also included a guide for controlling these new VMs with Fabric.

The script copies a packaged Vagrant box as many times as you define, adds the copies to Vagrant and then adds the new boxes to the Vagrantfile config. These new Vagrant machines use host-only networking with static IPs, but you can change it to bridged in the script if you want IPs from DHCP. Host-only means the machines can only be accessed from the host machine that’s running Vagrant.

If you want to use multiple Vagrant boxes without the script, here are the steps (a rough command-line sketch follows the list):
Create a single Vagrant box. Guide
Package it. Guide
Copy the package.box file. You need a copy for each virtual box. Guide 😉
Add the new box copies to vagrant. Guide
Add new config to Vagrantfile for each new box. Guide
Start the boxes with ‘vagrant up’
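Condensed into commands, the manual route looks roughly like this (box and file names are placeholders):

vagrant package                     # package the current VM into package.box
cp package.box box2.box             # one copy of the file per extra machine
vagrant box add testbox2 box2.box   # register the copy with Vagrant
# add a config block for the new box to the Vagrantfile, then:
vagrant up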

And here’s the guide for using my script:
Read more of this post

Setting up a Github project

So I felt like setting up a GitHub repository for awaseroot. Git is a version control system, and with GitHub you can share your Git project/repository online. We use GitHub as a place to share all our awaseroot material. There are many good tutorials available (for example: http://help.github.com/win-set-up-git/), but here are the steps I took one by one. Replace awaseroot with whatever you want.

Ubuntu 12.04
Git version 1.7.9.5
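Condensed, the command-line side boils down to something like this (the full step-by-step follows in the post; the username and repository name are placeholders):

cd awaseroot
git init
git add .
git commit -m "Initial commit"
# create the empty awaseroot repository on github.com first, then:
git remote add origin git@github.com:username/awaseroot.git
git push -u origin master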

The goal is to publish this:

Like this: Read more of this post

Puppet module for LAMP installation

We are using Puppet in the course I spoke of before, and we’re creating our own Puppet modules. Here’s what I came up with: a module that installs a LAMP server (LAMP = Linux, Apache, MySQL, PHP) in two different ways on two different nodes. The lamp module has slightly more features than easylamp; for example, it changes the MySQL password, which the easylamp module doesn’t do. The modules might not be very sophisticated, as these are some of the first Puppet modules I’ve ever made. Have to start somewhere!

UPDATE 6.10.2012: I have made improvements to the LAMP module and there’s a new blog post about it here:
https://awaseroot.wordpress.com/2012/10/06/improved-puppet-lamp-module/

Puppetmaster = Ubuntu 12.04
Puppet agents = Ubuntu 11.10
Puppet version 2.7.11

Modules and manifests

manifests/site.pp

node default {}

node 'bubuntu.elisa' {
    include lamp
}

node 'hubuntu.elisa' {
    include easylamp
}
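One difference between the two is that the lamp class sets the MySQL password; as an illustration of the kind of resource involved (not the module’s actual code), that can be done with an exec that only runs while the root password is still empty:

exec { 'set-mysql-root-password':
    command => '/usr/bin/mysqladmin -u root password "secret"',
    unless  => '/usr/bin/mysqladmin -u root -p"secret" status',
}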

Read more of this post

Ubuntu Release Upgrade – Fully automatic non-interactive upgrade

Ubuntu 12.04 LTS has been released and it’s time to upgrade! do-release-upgrade is the recommended way, and don’t worry, I will use it. The problem is this: do-release-upgrade asks a lot of questions during the upgrade, but I have to upgrade multiple machines and I don’t feel like answering the same questions on each one. So how do I automate this?
EASY!

There’s a way to answer those questions in advance, in a single command. You can pipe some of the answers with echo and use DEBIAN_FRONTEND=noninteractive for the rest of them. No preseeding needed at all. With this command you can automate a release upgrade completely.
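The exact one-liner is in the full post; just to show the shape of the idea described above (details may differ from the post’s command):

echo "y" | sudo DEBIAN_FRONTEND=noninteractive do-release-upgrade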

Remember to take backups in case something goes wrong. Read more of this post

Fabric tutorial 3 – Settings and roles

In this tutorial I’m going to concentrate on the various settings you can use in your fabfile. These settings can be added to your environmental variables, as decorators, or inside the tasks. With roles we can define, for example, which machines are servers and which are workstations.

I hope you’ve also read previous tutorials 1 & 2:
https://awaseroot.wordpress.com/2012/04/23/fabric-tutorial-1-take-command-of-your-network/
https://awaseroot.wordpress.com/2012/04/25/fabric-tutorial-2-file-transfer-error-handling/

Environmental variables

Your default settings should be in the environmental variables right at the beginning of your fabfile. These are the settings that are used in all of your tasks unless you define otherwise, and they are the settings we’ll modify in our tasks later. We have used these in the previous tutorials.

fabfile.py

env.hosts=["simo@10.10.10.10","webserver.local"]
env.user="hng"
env.password="password"
env.parallel=True
env.skip_bad_hosts=True
env.timeout=1
env.warn_only=True
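Roles are defined the same way, as an env variable, and a task can then be limited to a role with the @roles decorator. A quick sketch (the host names are placeholders):

from fabric.api import env, roles, run

env.roledefs = {
    "servers":      ["webserver.local", "simo@10.10.10.10"],
    "workstations": ["ws1.local", "ws2.local"],
}

@roles("servers")
def server_uptime():
    run("uptime")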

Read more of this post

Lecture on Fabric

Last week I was given an opportunity to give a lecture on our Centralized Management Linux course taught by Tero Karvinen. I’m actually a student on this course, so I was quite surprised when Tero asked me. This was the first time I’ve ever been asked to give a proper lecture to a class of more than 20 students, so I was very honored and excited. Of course I’ve done many presentations in front of a class before, but giving a full four-hour lecture was something new.

The topic was a Python tool called Fabric, and I was given a free hand in how to do it. I do have some knowledge about Fabric, as I was responsible for the Fabric configuration in our AwaseConfigurations project and I’m also using Fabric in the thesis I’m working on right now. I had about a week to prepare for the lecture, which was plenty of time to get ready. I was a bit worried though, as I had never even thought of lecturing like this, and who knows if I’m any good.

And you know what? It couldn’t have gone any better. The students were paying attention and asking questions, listening to what I had to say. I had time to go through all of my material and I like to believe many learned the basics of Fabric quite well.

It was overall a very nice experience and I’m happy I did it. The planning and the actual lecture also taught me to structure my thoughts in a way that made it easy to share the information and teach others. Big thanks to Tero for trusting me and giving me this opportunity!

I’ve gathered all the discussed material and a bit more to these two Fabric tutorials:
https://awaseroot.wordpress.com/2012/04/23/fabric-tutorial-1-take-command-of-your-network/
https://awaseroot.wordpress.com/2012/04/25/fabric-tutorial-2-file-transfer-error-handling/

Fabric tutorial 2 – File transfer & error handling

In the first tutorial we learned to run commands on remote hosts with Fabric. Now we move on to transferring files. Transferring new configuration files is usually quite an important part of system administration, and retrieving log files from the remote machines can be useful as well.

Sending files

Let’s assume we’ve made a new ssh_config file with important changes and we want to send it to our
remote hosts. Here’s a task for sending files.

from fabric.api import put

def file_send(localpath, remotepath):
    # upload the file, using sudo on the remote end if the target needs it
    put(localpath, remotepath, use_sudo=True)

Run it with:
fab file_send:path/to/edited/ssh_config,/etc/ssh/ssh_config
or if the modified ssh_config is in the directory where you’re running Fabric:
fab file_send:ssh_config,/etc/ssh/ssh_config

If we’re sending the file to a location that doesn’t need sudo, e.g. /tmp/, we don’t need use_sudo=True.
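Retrieving files works the same way in the other direction with get(), for example to pull a log file (one your login user can read) from the remote machine:

from fabric.api import get

def file_get(remotepath, localpath):
    get(remotepath, localpath)

Run it with, for example: fab file_get:/var/log/dpkg.log,/tmp/dpkg.log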

Another example: Read more of this post

Fabric tutorial 1 – Take command of your network

This is a guide for installing and using Fabric on Ubuntu.
Tested versions:
Ubuntu 11.10
Fabric 1.4.1

Fabric is a Python tool and a library for combining Python with SSH. The tool can be used to execute Python functions from the command line. It’s also possible to execute shell commands over SSH with Fabric, and by combining the Python functions with SSH, it’s possible to automate system administration tasks. (fabfile.org)

Why Fabric?
You can use Fabric as a tool for centralized configuration management and run administrative tasks on all of your machines simultaneously. Fabric is fast and easy to install and start using, since there’s no configuration needed: just install it and start adding tasks to your fabfile.
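To give an idea of how little setup there is: on Ubuntu, sudo apt-get install fabric installs it, and a first fabfile can be as small as this:

# fabfile.py
from fabric.api import run

def host_info():
    # a first task: show which machine we're talking to
    run("uname -a")

Save it as fabfile.py and run it with fab -H user@remote-host host_info; Fabric executes the command over SSH on that host.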

Fabric doesn’t require anything on the remote systems but a working SSH server. So if you can control a device over SSH, you can also control it with Fabric. This is why Fabric is such an easy and nice tool to add to your sysadmin toolbox. If you prefer Ruby over Python, take a look at a similar tool called Capistrano.

In these tutorials I will go through the installation and all the basics you need to start using Fabric efficiently.
Read more of this post