Running Owncloud on Docker

Managing multiple appliances on the same server has always had its issues: dependencies and updates are a risk you have to deal with in order to keep a service up to date and secure.

Docker is a technology that has been getting a lot of attention lately.

Imagine a world where you don’t have to worry about dependencies or configurations gone wrong: Docker is here for this purpose.

It’s not virtualization as we know it from VirtualBox, VMware and the like. This is a technology built around completely isolated containers with different purposes, abstracted from the underlying OS, so if one appliance becomes unstable, the other ones running on the same server stay unharmed.

With this in mind, you can have a Docker daemon running on an Ubuntu Server or CentOS machine, for example, and run multiple sub-systems (called containers) at the same time.

They offer a public repository of appliances on their web page, contributed to by users and organizations from all around the world: anyone can pack a new appliance and submit it to Docker Hub so that anyone else can pull it onto their machine and run it in minutes.

This is how you can get the simplest owncloud installation you can imagine.

First, you need the Docker daemon running on your machine.

On Arch Linux, follow their wiki:

For Ubuntu, they have a dedicated section on how to configure it:

Once it’s working, you only have to pull the image:

docker pull owncloud

Optionally, create a persistent storage volume, so the data stays out of the docker image, and on your real OS’s filesystem:

mkdir /owncloudData

And finally, run the image on a container:

docker run --restart=always -d -p 80:80 -v /owncloudData:/var/www/html owncloud

Then, just point your browser to the machine where the docker daemon is running and you’ll find the initial screen where you can create your user and, optionally, select the database backend you want to use.

By default, it uses SQLite, which is a good choice for portability and for a simple installation with few users, but you can change this.
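As a sketch of how a heavier backend might look with MariaDB (the network and container names and the password are placeholders I made up, not part of the original setup):

```shell
# Create a user-defined network so the containers can reach each other by name
docker network create owncloud-net

# A MariaDB container to act as the database backend
docker run -d --name owncloud-db --network owncloud-net \
    -e MYSQL_ROOT_PASSWORD=changeme \
    -e MYSQL_DATABASE=owncloud \
    mariadb

# The same ownCloud container as before, attached to that network
docker run -d --restart=always --network owncloud-net \
    -p 80:80 -v /owncloudData:/var/www/html owncloud
```

In the initial screen, pick MySQL/MariaDB as the backend and use "owncloud-db" as the database host.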

It couldn’t be simpler.


Why secure strong passwords matter

“I don’t have a secure password, because I have nothing to hide” is, sadly, something I hear often. But why should everyone use a secure password? Why is this so important? This question is a no-brainer. Over the last few years, social networking has become part of our lifestyle. We share moments with others, and have a public social image that we like to, somehow, control or manage the way we like.

But hackers know this. Hackers are constantly hunting for new vulnerabilities and ways to make a profit using social engineering.

In the Linux world, it’s often said that the best antivirus is common sense. With online scams, it’s more or less the same. But before learning to identify those threats, we should start by securing what is ours, making things overly difficult for hackers so they can’t just turn our doorknob and get access to all our social content, our systems, our backups or, even worse, our financial information. Because let’s make this clear again: hackers are after money, and money is what they will look for.

I wanted to write this post to raise awareness about password security. There was a time when you had a couple of accounts and that was it, but over the years this got more complicated: forums, multiple e-mail accounts, social networks, banking… all of those services require login information, and it would be a tremendous mistake to use the very same credentials on all of them. We can strengthen security using two-step authentication (more about this in a future post), but not all platforms offer this option.

“Yeah, but there’s no way to remember hundreds of different strong passwords by heart” – that’s totally true.

Hackers (god, I hate using this term for the bad guys) use brute-force attacks to log into your accounts. The method is simple: they take a dictionary, which is just a plain text file full of words, and try them one by one until there’s a match with your password. These methods are slow and usually have countermeasures ready: that’s why, after 10 failed tries, an iPhone locks itself and (if the option is enabled) erases its memory.

There are some simple systems to make passwords both unique and easy to remember, hence lowering a brute-force attack’s chance of success. For instance, you can build different passwords from a base, like your name backwards plus your birth date with a symbol in between: “noelnomanoj+20010612x” is a pretty secure password, and I’m sure I’d be able to remember it. Add to this password the service or page you are going to use it with, like “noelnomanoj+20010612x-gmail”, and you’ve got a unique password, different from the other ones you might use somewhere else.
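The scheme above is mechanical enough to sketch in a few lines of shell (the base is the example from the text; the site names are just placeholders):

```shell
# Derive one password per service from a single memorable base
base="noelnomanoj+20010612x"
for site in gmail twitter bank; do
    echo "${base}-${site}"
done
```

Each line is unique per service, so a leak on one site doesn’t hand over the rest.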

Or you can go with random password generators:
On Linux, you have console tools like pwgen, makepasswd, etc.

… But they will be impossible to remember by heart.
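For reference, a quick generator along those lines using only standard tools (pwgen and makepasswd do the same job with many more options):

```shell
# Pull 16 random characters from /dev/urandom, restricted to a sane set
tr -dc 'A-Za-z0-9!@#%+=' < /dev/urandom | head -c 16
echo
```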

There’s a website where you can test your password’s strength, to get an idea of how to tailor your passwords and how long it would take a computer to crack them using brute-force attacks and word combinations:

See the lock at the top of the address bar? It means your password is safe when you enter it on this site, at least while travelling to their server. As you can see, the password I tailored before is quite secure and would take 573 years to crack: good luck.

I consider a “strong” password to be one with a combination of:

  • Lowercase
  • Uppercase
  • Numbers
  • Special Symbols
  • 12 or more characters long
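As a sketch, a small shell function that checks those criteria (the function name is mine, just for illustration):

```shell
# Hypothetical helper: succeeds only if the password has lowercase,
# uppercase, digits, a special symbol and is 12 or more characters long
check_password() {
    p="$1"
    [ "${#p}" -ge 12 ] || return 1                        # length
    case "$p" in *[a-z]*) ;; *) return 1 ;; esac          # lowercase
    case "$p" in *[A-Z]*) ;; *) return 1 ;; esac          # uppercase
    case "$p" in *[0-9]*) ;; *) return 1 ;; esac          # numbers
    case "$p" in *[!a-zA-Z0-9]*) ;; *) return 1 ;; esac   # special symbol
    return 0
}

check_password 'Str0ng-Enough!' && echo "strong"   # prints "strong"
```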

But… how do you remember multiple, unique, strong passwords, different for each site, and keep them in a safe place? You can go the traditional way and have them all in a physical notepad, written on paper, or you can use password storage services like LastPass or 1Password. However, these online services are massively attacked by hackers, and some of them have even succumbed. It is because of this that KeePass exists and is my preferred alternative, but more about that option in a future post.


netdata: The perfect real time monitoring tool

When you have to deal with lots of machines and their well-being is your responsibility, you tend to use tools like Nagios, Centreon, or something alike.

However, for your day-to-day usage on single machines, it’s pretty useful to have one place to see all your system’s stats, in order to find those horrible bottlenecks that are locking your system up, or just to get a glimpse of how your system performs.

Netdata can do all that… and more… and how!

This is netdata’s top part, where you get a quick overview of your system load

It’s got many, many sections and it’s fully configurable, but here are some captures of a few of the things netdata monitors on your system:


CPU Utilization


Disk Usage

TCP Connections

Network traffic


…and many more, all in your web browser when pointing at your machine’s IP on port 19999.

You can install it from the AUR on Arch Linux. It’s pretty straightforward: install, start/enable the systemd unit, and you’re good to go.
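Assuming an AUR helper like yay (any helper works, yay is just my pick here), the whole thing boils down to:

```shell
yay -S netdata                        # build and install from the AUR
sudo systemctl enable --now netdata   # start now and on every boot
# then point your browser at http://<machine-ip>:19999
```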

And here you have a live preview on their official page:

Socks5 tunnel over SSH on windows

A couple of days ago, I was trying to reach a certain page on the internet. The strange thing was that the page seemed to be down, so I let it rest for a couple of hours: they could be under maintenance and dropping connections. But after a couple of hours, the page was facing the same issues. So I tried to connect over my phone’s 4G connection and, to my surprise, the page loaded perfectly, so I tried tracing where the connection was being dropped:

On windows:


On Linux:


After 9 hops, the connection was lost; there was no response from an IP already on the destination page’s sub-network, while on my phone I was still getting a perfect user experience on the page. After switching to Google’s DNS and having the same set of issues, I figured out that, for some reason, the remote machine was dropping my ISP’s connections, most likely because of some issue, so I decided to try tunneling my connection through my home connection, on a different ISP.
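For reference, the commands behind those traces (example.com stands in for the real host):

```shell
# Linux: trace the route hop by hop
traceroute example.com

# Windows equivalent, from cmd.exe:
#   tracert example.com
```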

In order to do so, you must have a working SSH server with its ports properly mapped so it can be reached from outside (while SSH uses port 22 by default, I recommend using something different, like 2207, and mapping it to your internal port 22: it’s a very tempting port for hackers).

Then, you just have to establish the tunnel. On Linux:

Let’s say your configuration goes like this:

  • Your home’s address (you can get a cheap solution for this with a dynamic DNS service if you don’t have a static IP)
  • Your ssh server user name to log in: user
  • Your ssh port: 2207
  • The local port you are going to use for socks connection: 8080

ssh -D 8080 -C -p 2207 user@<your-home-address>

Log in and keep the window open.
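Instead of typing the full command every time, the same tunnel can live in your SSH config (a sketch; the host alias and address are placeholders):

```
# ~/.ssh/config
Host home-proxy
    HostName your-home-address.example.org
    User user
    Port 2207
    DynamicForward 8080    # same as ssh -D 8080
    Compression yes        # same as ssh -C
```

With this in place, `ssh home-proxy` opens the same SOCKS tunnel.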


On Windows, you will have to use third-party software in order to make the tunnel.

I use KiTTY, a PuTTY fork that I prefer for its extended features and a better-sounding name in Spanish than the original, but you can use either one of them.

Then, configure it like you usually would to establish an SSH connection: input your server’s address and port:

Then, go to tunnel settings and change these:


Once your connection is set, you just have to redirect your browser’s traffic through it. I used Firefox because it’s easier to configure:

And you’re just done. Go to any “what is my IP” page to see your new IP.
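If you prefer the command line, you can also check the tunnel with curl (assuming curl is installed; ifconfig.me is just one of many services that echo back your public IP):

```shell
# Route a request through the local SOCKS5 tunnel and print the public IP
curl --socks5-hostname 127.0.0.1:8080 https://ifconfig.me
```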

Taking advantage of btrfs snapshots

One of the killer features of BTRFS is snapshots.

In a nutshell, imagine you just installed your system and want to have a “copy” of that current state in the event of a system crash in the future, so you can go back to that “stable” moment, avoiding all the fuss of reinstalling and potentially losing data:

btrfs subvolume snapshot <source> <destination>

This will create that “copy” of the <source> btrfs subvolume on the desired <destination>.
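For example (the paths are hypothetical, and snapshot creation needs root):

```shell
# Read-only snapshot of the root subvolume, named after the occasion
sudo btrfs subvolume snapshot -r / /snap-before-upgrade

# List every subvolume (snapshots included) on the filesystem
sudo btrfs subvolume list /
```

The `-r` flag makes the snapshot read-only, which is handy when you only want a recovery point nothing can accidentally write to.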

You can create a snapshot inside of your subvolume: BTRFS is clever enough to know this and not create recursive snapshots.

This is useful for going back to previous states or just to recover data you have previously deleted (un)intentionally.

This feature is pretty neat, but it’s not very useful unless it happens automatically.

Sure, you can create a cron job to run the snapshot command every now and then, but the folks over at SUSE have already thought of this and created a very handy tool called snapper.

To start using snapper (after pulling it from the repos or aur), you have to create configurations. Let’s start by creating a configuration for our root subvolume:

snapper -c root create-config /

However, snapper will not work if you don’t have a cron daemon, like cronie, running, so install it (if needed) and enable it:

systemctl enable --now cronie

This will create, by default, one snapshot every hour. You can list the current snapshots with the command:

snapper -c root list

And see the snapshots you have available
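The hourly timeline and how many snapshots are kept can be tuned in the configuration snapper created (an excerpt; the values shown are examples, the variable names are snapper’s own):

```
# /etc/snapper/configs/root
TIMELINE_CREATE="yes"        # take timeline snapshots at all
TIMELINE_LIMIT_HOURLY="10"   # keep at most 10 hourly snapshots
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_MONTHLY="2"
TIMELINE_LIMIT_YEARLY="0"
```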

You can delete snapshots using

snapper -c root delete <id>

where <id> can be a single snapshot ID or a range like “3-20”, in which case snapper will delete all the snapshots in that range. Don’t worry if there’s no snapshot with ID 10 in this case: snapper will skip nonexistent snapshots and won’t fail.


Pretty nice, right?

Now, let’s take it up a notch. Let’s say your system fails and you want to revert to a previous snapshot. Snapper has a built-in “snapper rollback” feature that I didn’t manage to make work; besides, I prefer doing this kind of stuff manually: it helps you understand what is really going on 🙂

Just boot a live system and mount the root btrfs filesystem

mount /dev/sda1 /mnt

Now, you will have all your subvolumes under /mnt

Let’s say you created the btrfs filesystem at /dev/sda1 and created two different subvolumes: /mnt/root and /mnt/home

Snapper would have created snapshots under /mnt/root/.snapshots/#/snapshot/, /mnt/root/.snapshots being another subvolume.

You should first move this subvolume out of the root subvolume (you could have it on a separate subvolume at the same level as /mnt/root and /mnt/home, but let’s leave that for later):

mv /mnt/root/.snapshots /mnt

Then, rename the “broken” root subvolume

mv /mnt/root /mnt/root.broken

Then, find a snapshot that you know was still working under /mnt/.snapshots/#/ and move it back to the top subvolume:

mv /mnt/.snapshots/119/snapshot /mnt/root

…and you’re done: unmount, reboot, and you will be back on your working system.