Zabbix: “TLS handshake fail” workaround: Enable Jabber (XMPP) notifications via script (CentOS)


Once you’ve been around for a while dealing with a medium-sized company, you start to crave a system to keep an eye on your machines, so you don’t get to work one morning to find a crashed server or some other, worse catastrophe.

Bad things happen, computers crash, disks fail, power supplies burn out (believe me, I’ve suffered a couple of them zapping and then leaving that characteristic burnt dust smell) – there’s nothing we can do about it.

There are a lot of tools to monitor your hardware: the traditional Nagios, PandoraFMS… Some are free, others offer paid support.

Some years ago, I had a Nagios setup, and what I missed the most was a web UI to configure my devices: you had (and still have) to go through text files configuring each device. Once it’s done, it’s done and you don’t have to touch it anymore, but the learning curve can be very steep.

This is why some days ago I went ahead and tried Zabbix: it’s simple, looks great and gets things done, and, guess what: it’s got a UI for adding hosts and templates. You’ll still have to use the console, but not as much.


Anyway, let’s get to the point of this post. I was configuring the alerts: the e-mail ones are pretty straightforward, but I wasn’t able to get the Jabber notifications to work:

tls handshake error

There was no way to get around the TLS handshake error. I tried on CentOS, Ubuntu and openSUSE: all of them had the same issue.

At least on CentOS, I figured out (well, some guy over at the Zabbix forums did) that the culprit was a library called iksemel, whose GitHub page has not been updated for the past 6 years. The latest version, 1.5, seems to support the newest (and still-supported) ciphers, but 1.4, the version included in CentOS, does not.

I went on and tried to manually update it, found an RPM and started trying, but got entangled in dependency hell. So, I had to find another way.

Note: these instructions are for CentOS. If you are using Ubuntu or another distro, you can do the same, slightly changing the most distro-specific steps, such as repository addition, etc… Also bear in mind that some directory locations may (and WILL) differ from this walkthrough.

Let’s get to it:

First, activate Software Collections (SCL)

sudo yum install centos-release-SCL

Once enabled, you’ll be able to install this piece of software that will let you send Jabber messages from the command line: sendxmpp

sudo yum install sendxmpp

This will also pull a bunch of dependencies.

Now, let’s get a Jabber account ready if you don’t have one: you’ll need an account to send the messages from, and another account for yourself to receive them.

I chose Dismail. There’s also Jabjab, and many others: Feel free to browse this feature matrix and choose whichever you like most (the greener, the better, I guess..)

Now, go to your zabbix server to the alertscripts folder:

cd /usr/lib/zabbix/alertscripts

And create the script:

#!/bin/sh
echo "$3" | /usr/bin/sendxmpp -u <username> -j <domain> -p <password> -s "$2" "$1" -t

For <username>, type in your Jabber user name (without the @xxxx part).

For <domain>, type in the domain (what’s after the @).

For <password>, well, your password.

You can put all this sensitive data in a text file somewhere secure and reference it (sendxmpp can read credentials from a configuration file; see its man page), avoiding plain-text passwords in the script.

Once the script is created, save it, and give it execution permissions:

chmod +x <script_name>
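Put together, the two steps above might look like this (a sketch: the script name jabber.sh and the credentials are made-up examples, and the file is created in the current directory rather than the alertscripts folder):

```shell
# Create the wrapper script Zabbix will call with three arguments:
# $1 = recipient, $2 = subject, $3 = message body
cat > jabber.sh <<'EOF'
#!/bin/sh
# Example credentials -- replace with your own Jabber account
echo "$3" | /usr/bin/sendxmpp -u myuser -j example.org -p mypassword -s "$2" "$1" -t
EOF

# Make it executable so Zabbix can run it
chmod +x jabber.sh
```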

You can go now to your web UI.

Under Administration -> Media types, create a new Media type


Name it as you wish. It’s important that you set the type to “Script” and give the script its parameters in this particular order: the send-to address first, then the subject, then the message body, matching “$1”, “$2” and “$3” in the script:




And enable it.


As the final step, you will have to go over to Administration -> Users, edit your user and go to the “Media” tab.

There, add a new media entry for the Jabber script you created.


Select as “Type” the name you gave the media type on the previous screen, and set the destination address. Save, and you’re done! No more TLS errors!


I’ll have to point out that even the Ubuntu appliance you can download from the Zabbix download page has the same TLS handshake issue.

I hope they implement a different approach some day, so we can use the integrated notification system that comes with the suite.

Thanks to the Zabbix team and the people over at their forums and subreddit for helping.


Setting up HTML signatures on OSX

Lately, I had to deal with the OSX Mail app while trying to install some HTML signatures. It’s surprising to see that in 2017 there are still e-mail clients that don’t accept HTML signatures. Thunderbird lets you pick an HTML file and you’re done, but with OSX’s Mail app, things can get a bit difficult.


First of all, you’ll have to create your HTML signature: make it the way you like, insert images, etc… Once you’re done, head to your Mac and fire up the Mail app; here’s where things get tedious:

Create a blank signature: this will be a placeholder; we just want the app to create a certain hidden file that we will later edit to include the HTML.

Once the signature is created, save it and completely close the Mail app. I can’t stress this enough: the Mail app must stay closed until the whole process is finished.

Next, fire up the terminal and navigate to


and find the latest created .mailsignature file.

Open this file:

open -a TextEdit CC820FFC-0F10-4D9E-8637-3D823E865F43.mailsignature

Then, replace everything inside the <body> </body> tags with the content inside the same tags from the signature you want to use, then save and close.
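The swap can be simulated with example files like this (file names and signature content are made up; the real .mailsignature has a UUID name and lives inside Mail’s hidden data folder):

```shell
# A stand-in for the placeholder .mailsignature the Mail app created
cat > example.mailsignature <<'EOF'
Content-Transfer-Encoding: 7bit
Content-Type: text/html;

<body>placeholder</body>
EOF

# The new signature body, kept on one line so sed can substitute it
NEWBODY='<body><b>John Doe</b> - ACME Corp</body>'

# Replace the old body with the new one
# (GNU sed shown; on macOS itself the in-place flag is: sed -i '')
sed -i "s|<body>.*</body>|$NEWBODY|" example.mailsignature
```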

Finally, there’s a catch: once you open the Mail app, it will restore your previous signature, so you will have to lock the file to prevent any further changes:

Do this with:

chflags uchg CC820FFC-0F10-4D9E-8637-3D823E865F43.mailsignature

And you’re done!!!! – or are you?

Well, for newer versions of the system (I think 10.7 onwards) this is the method, but on older versions you would have to do a couple of things differently:

First, you’d have to go to


Note this time it’s V2 instead of V3.

Open your signature on Mac’s Safari and save it as a .webarchive file

Inside the Signatures folder you navigated to before, there will be a newly created .webarchive signature. Yes, you guessed it: you’ll have to replace this .webarchive file with the one you saved from Safari, keeping the filename, and finally lock it again so the Mail app doesn’t revert the changes:

chflags uchg CC820FFC-0F10-4D9E-8637-3D823E865F43.webarchive


After days banging my head trying to understand why any company would force its users to jump through these kinds of hoops to get a simple HTML signature, I can only come up with one thought: why, oh, Apple… why?!

Solving “ERROR: One or more PGP signatures could not be verified!” (Arch LINUX)

Arch Linux adding PGP verification some years ago was a really good thing: someone realized that just downloading from any repository without any kind of verification was, perhaps, a bad idea.

The process for signing and managing keys for the official repos is pretty straightforward and automated; however, with the AUR, things are quite different.

Sometimes, you can run into signature errors such as the following:

==> Validating source files with md5sums…
cower-16.tar.gz … Passed
cower-16.tar.gz.sig … Skipped
==> Verifying source file signatures with gpg…
cower-16.tar.gz … FAILED (error during signature verification)
==> ERROR: One or more PGP signatures could not be verified!
==> ERROR: Makepkg was unable to build cower.
==> Restart building cower ? [y/N]

This happens because your keyring lacks a certain key needed to verify a package’s authenticity.

If you edit the PKGBUILD, you might see (if the author followed the conventions) the needed key and its owner.

For this example package (cower), the PKGBUILD had a line telling us the needed key corresponded to a maintainer called “Dave Reisner”.
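For illustration, the relevant line of a PKGBUILD looks roughly like this (a made-up minimal fragment; real PKGBUILDs list the key’s full 40-character fingerprint in the validpgpkeys array):

```shell
# Write a minimal PKGBUILD fragment, then pull the expected key out of it
cat > PKGBUILD <<'EOF'
# Maintainer: Dave Reisner
validpgpkeys=('F56C0C53')
EOF

grep validpgpkeys PKGBUILD
```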

After googling a bit, you can find a reference to this person’s pgp key here

On this page you can find the public key ID, which is “F56C0C53”.

All you have to do is add this public key to your keys repository, and you’ll be good to go. No more PGP errors for packages maintained by this particular maintainer:

gpg --recv-keys F56C0C53

You can learn more about package signing on Arch’s magnificent wiki.

Running Owncloud on Docker

Managing multiple appliances on the same server has always had its issues: dependencies and updates are a risk you have to deal with in order to keep an up-to-date and secure service.

Docker is a technology that has been getting a lot of attention lately.

Imagine a world where you don’t have to worry about dependencies or configurations gone wrong: Docker is here for this purpose.

It’s not virtualization as we know it from VirtualBox, VMware and the like: this is a technology involving completely isolated containers with different purposes, completely abstracted from the underlying OS, so if one appliance becomes unstable, the other ones running on the same server stay unharmed.

With this in mind, you can have a Docker daemon running on an Ubuntu Server or CentOS machine, for example, and run multiple sub-systems (called containers) at the same time.

They offer a public repository of appliances on their web page, contributed to by users and organizations from all around the world: anyone can pack a new appliance and submit it to Docker Hub so anyone else can pull it onto their machine and run it in minutes.

This is how you can get the simplest owncloud installation you can imagine.

First, you should have docker daemon running on your machine.

On Arch LINUX, follow their wiki:

For Ubuntu, they have a dedicated section on how to configure it:

Once it’s working, you only have to pull the image:

docker pull owncloud

Optionally, create a persistent storage volume, so the data stays out of the docker image, and on your real OS’s filesystem:

mkdir /owncloudData

And finally, run the image on a container:

docker run --restart=always -d -p 80:80 -v /owncloudData:/var/www/html owncloud

Then, just point your browser to the machine where the docker daemon is running and you’ll find the initial screen where you can create your user and, optionally, select the database backend you want to use.

By default, it uses SQLite, which is a good choice for portability and for a simple installation with few users, but you can change this.

It couldn’t be simpler.

Why secure strong passwords matter

“I don’t have a secure password because I have nothing to hide” is, sadly, something I hear often. But why should everyone use a secure password? Why is this so important? The question is a no-brainer. Over the last few years, social networking has become part of our lifestyle: we share moments with others, and we have a public social image that we like to, somehow, control and manage the way we want.

But hackers know this. Hackers are on the hunt for new vulnerabilities and ways to make a profit using social engineering.

In the Linux world, it’s often said that the best antivirus is common sense. With online scams, it’s more or less the same. But before identifying those threats, we should start by securing what is ours, making things overly difficult for hackers so they can’t just flip our doorknob and have access to all our social content, our systems, our backups or, even worse, our financial information. Let’s make this clear: hackers are after money, and money is what they will look for.

I wanted to write this post to raise awareness about password security. There was a time when you had a couple of accounts and that was it, but over the years this got more complicated: forums, multiple e-mail accounts, social networks, banking… All of those services require login credentials, and it would be a tremendous mistake to use the very same ones on all of them. We can strengthen security using two-step authentication (more about this in a future post), but not all platforms offer this option.

“yeah, but there’s no way to remember hundreds of different strong passwords by heart” – That’s totally true.

Hackers (god, I hate using this term for the bad guys) use brute-force attacks to log into your accounts. The method is simple: they get a dictionary, which is just a plain text document full of words, and try them one by one until there’s a match with your password. These methods are slow and usually have countermeasures ready against them: that’s why, after 10 failed tries, an iPhone locks itself and can even be set to erase its memory.

There are some simple systems to make passwords both unique and easy to remember, hence lowering a brute-force attack’s chance of success. For instance, you can build different passwords from a base: say, your name backwards, plus your birthdate, with a symbol in between. “noelnomanoj+20010612x” is a pretty secure password, and I’m sure I’d be able to remember it. Add to this password the service or page you are going to use it with, like “noelnomanoj+20010612x-gmail”, and you’ve got a unique password, different from the other ones you might use somewhere else.
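The base-plus-service scheme can be sketched as a tiny helper function (the function name is made up, and obviously you shouldn’t reuse the example base from this post):

```shell
# Derive a per-site password from a memorable base and the site name
mkpass() {
    base="$1"   # e.g. your name backwards plus your birthdate and a symbol
    site="$2"   # the service this password is for
    printf '%s-%s\n' "$base" "$site"
}

mkpass 'noelnomanoj+20010612x' gmail   # prints noelnomanoj+20010612x-gmail
```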

Or you can go with random password generators:
On Linux, you have console tools like pwgen, makepasswd, etc.

… But they will be impossible to remember by heart.

There’s a web page where you can test your password’s toughness, to get an idea of how to tailor your passwords and how long it would take a computer to crack them using brute-force attacks and word combinations:

See the lock on top of the bar? It means your password is safe when you input it on this web page, at least while travelling to their server. As you can see, the password I tailored before is quite secure and would take 573. years to crack: good luck.

I consider a “strong” password one that has a combination of:

  • Lowercase
  • Uppercase
  • Numbers
  • Special symbols

…and is 12 or more characters long.
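To see why those requirements matter, here is a quick back-of-the-envelope calculation: drawing 12 characters from all four groups (roughly the 94 printable ASCII characters) gives a brute-force attacker an enormous search space, and every extra character multiplies it by about 94:

```shell
# Number of candidate passwords of length 12 over a 94-character alphabet
awk 'BEGIN { printf "94^12 = %.3e combinations\n", 94^12 }'
# prints: 94^12 = 4.759e+23 combinations
```

At even a billion guesses per second, walking through ~4.8e23 combinations would still take millions of years.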

But… how do you remember multiple, unique, strong passwords, different for each site, and keep them in a safe place? You can go the traditional way and have them all in a physical notepad, written on paper, or you can go with password storage services like LastPass or 1Password. However, these online services are massively attacked by hackers, and some of them have even succumbed. This is why KeePass exists and is my preferred alternative, but more about this option in a future post.


netdata: The perfect real time monitoring tool

When you have to deal with lots of machines and their well-being is your responsibility, you tend to use tools like Nagios, Centreon, or something alike.

However, for your day-to-day usage on single machines, it’s pretty useful to have a place to see all your system’s stats, in order to find out those horrible bottlenecks that are locking your system up, or just to have a glimpse of how your system performs.

Netdata can do all that… and more… and how!

This is netdata’s top part, where you get a quick overview of your system load

It’s got many, many sections and it’s fully configurable, but here you have some captures of some parts of your system netdata monitors:


CPU Utilization


Disk Usage

TCP Connections

Network traffic


…and many more, all on your web browser when pointing at your machine’s ip:19999

You can install it from the AUR on Arch Linux. It’s pretty straightforward: install, start/enable the systemd unit, and you’re good to go.

And here you have a live preview on their official page:

Socks5 tunnel over SSH on windows

A couple of days ago, I was trying to reach a certain page on the internet. The strange thing was that sometimes the page seemed to be down, so I let it rest for a couple of hours: they could be under maintenance and dropping connections. But after a couple of hours, the page was facing the same issues. So I tried to connect over my phone’s 4G connection and, to my surprise, the page loaded perfectly, so I tried tracing where the connection was being dropped:

On windows:


On Linux:


After 9 hops, the connection was lost: there was no response from an IP already on the destination page’s sub-network, while on my phone I was still getting a perfect user experience on the page. After switching to Google’s DNS and facing the same set of issues, I figured out that, for some reason, the remote machine was dropping my ISP’s connections, so I decided to try tunneling my connection via my home connection, on a different ISP.

In order to do so, you must have a working SSH server with its ports properly mapped so it can be reached from outside (while SSH uses port 22 by default, I recommend exposing something different, like 2207, and mapping it to your internal port 22: it’s a very tempting port for hackers).

Then, you just have to establish the tunnel. On Linux:

Let’s say your configuration goes like this:

  • Your home’s address (You can get a cheap solution for this with
  • Your ssh server user name to log in: user
  • Your ssh port: 2207
  • The local port you are going to use for socks connection: 8080

ssh -D 8080 -C -p 2207 user@<your-home-address>

Log in and keep the window open.


On Windows, you will have to use third-party software in order to create the tunnel.

I use KiTTY, a PuTTY fork that I prefer for its extended features and for having a better-sounding name in Spanish than the original, but you can use either of them.

Then, configure it like you usually would to establish an SSH connection: input your server’s address and port:

Then, go to tunnel settings and change these:


Once your connection is set, you just have to redirect your browser’s traffic through the tunnel. I used Firefox because it’s easier to configure:

And you’re done. Check your public IP on any “what is my IP” page and you’ll see your new address.

Taking advantage of btrfs snapshots

One of the killer features of BTRFS is snapshots.

In a nutshell, imagine you just installed your system and want to have a “copy” of that current state in the event of a system crash in the future, so you can go back to that “stable” moment, avoiding all the fuss of reinstalling and potentially losing data:

btrfs subvolume snapshot <source> <destination>

This will create that “copy” of the <source> btrfs subvolume on the desired <destination>.

You can create a snapshot inside of your subvolume: BTRFS is clever enough to know this and not create recursive snapshots.

This is useful for going back to previous states or just to recover data you have previously deleted (un)intentionally.

This feature is pretty neat, but it is far less useful if you can’t have it done automatically.

Sure, you could create a cron job to run the snapshot command every now and then, but the guys over at SUSE have already thought of this and created a very handy tool called snapper.

To start using snapper (after pulling it from the repos or aur), you have to create configurations. Let’s start by creating a configuration for our root subvolume:

snapper -c root create-config /

However, snapper will not work if you don’t have a cron daemon, like cronie, running, so install it (if needed) and enable it:

systemctl enable cronie

This will create, by default, one snapshot every hour. You can list the current snapshots with the command:

snapper -c root list

And see the snapshots you have available

You can delete snapshots using

snapper -c root delete (id)

where for (id) you can either enter one ID or a range of snapshots like “3-20”, and snapper will delete all the snapshots in that range. Don’t worry if, in this case, there’s no snapshot with ID 10: snapper will skip nonexistent snapshots and won’t fail.


Pretty nice, right?

Now, let’s take it up a notch. Let’s say your system fails and you want to revert to a previous snapshot. Snapper has a built-in “snapper rollback” feature that I didn’t manage to make work; besides, I prefer to do this kind of stuff manually: it helps you understand what is really going on 🙂

Just boot a live system and mount the root btrfs filesystem

mount /dev/sda1 /mnt

Now, you will have all your subvolumes under /mnt

Let’s say you created the btrfs filesystem at /dev/sda1 and created two different subvolumes: /mnt/root and /mnt/home

Snapper would have created snapshots under /mnt/root/.snapshots/#/snapshot/, /mnt/root/.snapshots being itself another subvolume.

You should first move this subvolume out of the root subvolume (you can keep it on a separate subvolume at the same level as /mnt/root and /mnt/home, but let’s leave that for later):

mv /mnt/root/.snapshots /mnt

Then, rename the “broken” root subvolume

mv /mnt/root /mnt/root.broken

Then, find a snapshot that you know was still working under /mnt/.snapshots/#/ and move it back to the top subvolume:

mv /mnt/.snapshots/119/snapshot /mnt/root

…and you’re done: unmount, reboot, and you will be back on your working system.
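The recovery dance above can be simulated with plain directories (in this sketch, ordinary mkdir/mv stand in for real btrfs subvolumes, and snapshot number 119 is just an example):

```shell
# Simulate the layout: a root subvolume with snapper's .snapshots inside it
mkdir -p mnt/root/.snapshots/119/snapshot
touch mnt/root/.snapshots/119/snapshot/known-good-file

# 1. Move the .snapshots subvolume out of the broken root
mv mnt/root/.snapshots mnt/

# 2. Rename the broken root out of the way
mv mnt/root mnt/root.broken

# 3. Promote a known-good snapshot to become the new root
mv mnt/.snapshots/119/snapshot mnt/root
```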

Access owncloud via webdav from kde

Owncloud is the best solution if you want to have your own private cloud. I will probably write a step-by-step manual in the future about how to set it up on Arch.

One of the perks of being open software is that they try to keep it as compatible as possible: instead of just having their own client, they also give you the option to connect via the WebDAV protocol.

This is the best choice when you want to open it in your file browser (i.e. KDE’s Dolphin).

All you have to do is enter the URL like so in the address bar:

webdavs://your-owncloud.url/remote.php/webdav
…and you will be prompted with a login dialogue.

Easy, isn’t it?

Let’s go a step further: let’s say you want to mount the DAV folder on your filesystem. All you have to do (after installing the davfs2 package on Arch) is:

mount -t davfs https://your-owncloud.url/remote.php/webdav /path/to/mountpoint

and/or add an entry to your /etc/fstab:

https://your-owncloud.url/remote.php/webdav /path/to/mountpoint davfs user,noauto,uid=username,file_mode=600,dir_mode=700 0 1

More info at the Arch Wiki Davfs page