Access ownCloud via WebDAV from KDE

ownCloud is the best solution if you want to have your own private cloud. I will probably write a step-by-step manual in the future about how to set it up on Arch.

One of the perks of being open-source software is that the project tries to keep it as compatible as possible: instead of just shipping their own client, they also give you the option to connect via the WebDAV protocol.

This is the best choice when you want to open it from your file browser (e.g. KDE’s Dolphin).

All you have to do is enter the URL like so in the address bar:

webdavs://your-owncloud.url/remote.php/webdav

…and you will be prompted with a login dialogue.

Easy, isn’t it?

Let’s go a step further: say you want to mount the DAV folder on your filesystem. All you have to do (provided the davfs2 package is installed on Arch) is:

mount -t davfs https://your-owncloud.url/remote.php/webdav /path/to/mountpoint

and/or add an entry to your /etc/fstab:

https://your-owncloud.url/remote.php/webdav /path/to/mountpoint davfs user,noauto,uid=username,file_mode=600,dir_mode=700 0 0
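
If you don’t want to be prompted for the password on every mount, davfs2 can read the credentials from a secrets file. A minimal sketch, where the mountpoint, username and password are placeholders for your own:

# /etc/davfs2/secrets (or ~/.davfs2/secrets for user mounts), readable only by its owner
/path/to/mountpoint your-username your-password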

More info at the Arch Wiki davfs2 page.


Importing a non-exported ZFS pool to a new system

So, recently, I replaced my system’s HDD. I could have just copied the system over to the new disk, but that Arch installation had been with me for years without major issues and I wanted to start fresh, also because I was leaving BIOS behind and diving into UEFI.

For a bit of background: I have four hard drives. One is the system drive, with a Windows partition and an Arch Linux partition; the other three are 2 TB drives forming a raidz (RAID 5-like) 4 TB setup, where I can afford to lose one disk and still keep my data while I run to the nearest components shop and get a replacement.

ZFS keeps its information in a cache file under /etc/zfs in order to re-assemble the pools at boot time, but I simply forgot to export the pool, that is, to “release” it from the old system so it could be imported cleanly on the new one.

The process for importing a non-exported pool is fairly easy:

zpool import -d /dev/disk/by-id -aN -f

This will scan the disks on your system and import any pools it can find. Verify the results with:

zpool status
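
To make sure the pool comes back by itself on the next boot, you also want to regenerate the cache file mentioned above and enable the import services. A sketch, assuming a pool named tank and the standard OpenZFS systemd units on Arch:

# write the pool configuration to the cache file ZFS reads at boot
zpool set cachefile=/etc/zfs/zpool.cache tank
# enable the units that import and mount the pool at boot
systemctl enable zfs-import-cache.service zfs-mount.service zfs.target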

Pretty simple, isn’t it? Well, not quite. When you install Arch with:

pacstrap /mnt base

“base” includes mdadm, which is a clever system that scans your partitions automatically at boot time and assembles any array it can detect, regardless of the configuration set in /etc/mdadm.conf, pulling its information from the metadata each disk carries: if there are partitions marked with the array type, mdadm will assume there’s an array, not minding whether the disk also has, let’s say, a ZFS partition.

This would not have been a problem if my disks had never been part of a software array, but they had, and when I made the change to ZFS I did not clear the old metadata properly.

Before you start using mdadm, or after you stop using it, you should issue the command

mdadm --zero-superblock /dev/sda

on every member disk of your array: this clears the md superblock from the disk, avoiding future problems. Be careful, though: you can’t use this command after you have put new data on the disk, or you will lose all that data.
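
As a sketch, assuming the old array was /dev/md0 and its members were /dev/sdb, /dev/sdc and /dev/sdd (adjust to your own setup):

# stop the array first if it is still assembled
mdadm --stop /dev/md0
# then wipe the md superblock from every member disk
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    mdadm --zero-superblock "$disk"
done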

There’s another, more generic way (also destructive, sorry) to do this:

wipefs -a /dev/sdX

This will clear all (-a) signature information on your disk: it wipes the filesystem, RAID and partition-table signatures and leaves the disk partitionless.
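
If you just want to see which signatures are present before destroying anything, run wipefs without options; it only lists them:

wipefs /dev/sdX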

Going back to my issues with ZFS: after re-importing the pools and letting the system generate the cache files needed for the ZFS import at boot time, my pool was not surviving the reboot; instead, I found myself with a “/dev/mdxxx” device.

My first reaction after finding this out was to remove the mdadm package completely.
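
On Arch that’s a one-liner, assuming nothing else on the system depends on it:

pacman -Rns mdadm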

In the future, I plan to wipe every disk of my array one by one to avoid this issue the next time I reinstall the system: pull a device out of the pool, run wipefs on it, re-add it and let it resilver. I’m just not sure whether the stress of three resilvering operations is worth the effort.
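
A rough sketch of what one round of that might look like, with the pool name (tank) and the disk id as pure placeholders:

# take one member out of service (hypothetical pool and disk names)
zpool offline tank ata-EXAMPLE-DISK-1
# wipe the stale md superblock along with everything else on it
wipefs -a /dev/disk/by-id/ata-EXAMPLE-DISK-1
# put the now-blank disk back in its slot and let ZFS resilver it
zpool replace tank ata-EXAMPLE-DISK-1
# watch the resilver progress before moving on to the next disk
zpool status tank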

Back up your Linux system to a compressed file

It’s well known that Linux, compared to Windows, handles hardware migration pretty well. On Windows it involves third-party (usually paid) apps, or some time editing registry settings and a bit of luck to get it right. On any *nix environment, however, all you have to do is create the new partitions on the target system, pack up your files on the old one (or just transfer them directly), deploy them on the new one, edit /etc/fstab and the bootloader reference to the disk you want to boot from, and perhaps run a

mkinitcpio -p linux

to regenerate the boot images and you’re done!

This is a little command I use to make a backup of my whole system from time to time in case something goes wrong:

tar cvpzf /home/backup-$(date +%Y-%m-%d_%H%M%S).tgz --exclude=/proc --exclude=/lost+found --exclude=/mnt --exclude=/var/cache/pacman --exclude=/sys --exclude=/home --exclude=/.snapshots /

The output is a file under /home with the date in its name, so you can keep old versions easily. Basically, I skip every volatile and external directory from the backup. Finally, I want to keep the btrfs snapshots out of the picture as well, hence the last --exclude option.
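
Restoring it on a freshly partitioned disk is the reverse operation. A minimal sketch, assuming the new root partition is mounted at /mnt and the archive name is a placeholder:

# unpack the backup onto the new root, preserving permissions and numeric owners
tar xvpzf /path/to/backup-YYYY-MM-DD_HHMMSS.tgz -C /mnt --numeric-owner
# recreate the directories that were excluded from the archive
mkdir -p /mnt/proc /mnt/sys /mnt/mnt /mnt/home /mnt/var/cache/pacman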

Fix deluge-web error with default certs when connecting via HTTPS

I use Deluge on a headless server as a seedbox. The way to connect is via HTTP, using the deluge-web service that comes with the standard deluge package in Arch Linux. As with every HTTP connection, you should enable HTTPS whenever possible, just to get that basic security level, even though you will (most of the time) use self-signed certificates that give you the typical security warnings when connecting.

For reasons I still don’t know (perhaps because the certificates I was using were already quite old), a setup I had been using for years suddenly started giving errors upon connecting, without offering the choice to accept the security risk and connect regardless, so I had to regenerate the certificates.

It’s a pretty easy task: you just have to navigate to the Deluge configuration folder where the certs are stored:

/srv/deluge/.config/deluge/ssl

And generate new certificates like so:

openssl req -new -x509 -nodes -out deluge.cert.pem -keyout deluge.key.pem

Then go back to the Deluge web UI and reference them in the Preferences -> Interface tab.
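
Depending on your setup, you may also want to tighten the key’s permissions and restart the web UI so it picks up the new files. A sketch, assuming the service runs as the deluge user under systemd:

# keep the private key readable only by the deluge user
chown deluge:deluge deluge.cert.pem deluge.key.pem
chmod 600 deluge.key.pem
# reload the web UI so it picks up the new certificates
systemctl restart deluge-web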

Hope it helps, since it took me a couple of hours to figure it out!

Migrate BTRFS setup to a new (bigger or equal) disk: The dirty way

Imagine you are in a situation where your btrfs disk is filling up, or is just about to fail, or you simply want to install a bigger HD, or even better: you are migrating to an SSD setup.

Migrating partitions with traditional filesystems such as ext4 or FAT32 is easy, since you just have to format a partition of the same type on the new disk and

cp -a /mnt/source /mnt/destination

where /mnt/source is the mountpoint of the old disk’s partition and /mnt/destination the new one’s.

But with btrfs, things get a little more complicated, since the contents of those partitions are not just plain files and folders: you have subvolumes, snapshots, etc., and they behave in a special way and have specific commands to manage them.

There is a fast solution in case you want to migrate without all the fuss of replicating the subvolumes, using btrfs send and receive, and so on.
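
For reference, the “clean” way would be something like the following, repeated for every subvolume; the subvolume name (@) and the new filesystem’s mountpoint (/mnt/new) are just placeholders:

# take a read-only snapshot of the subvolume and stream it to the new filesystem
btrfs subvolume snapshot -r /mnt/@ /mnt/@_migrate
btrfs send /mnt/@_migrate | btrfs receive /mnt/new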

You can just add the new disk’s partition to the existing btrfs filesystem, convert the data profiles to raid0 so the data is spread across both devices, and finally remove the old disk’s partition from the filesystem.

I’m going to assume /dev/sda2 is the btrfs partition on the old disk, while /dev/sdb2 is the btrfs partition on the new one.

First, we create the btrfs partition:

mkfs.btrfs /dev/sdb2

Then, we add the partition to our filesystem. Let’s suppose you have /dev/sda2 mounted on /mnt; we would add the new partition like so:

btrfs device add /dev/sdb2 /mnt

You will be warned by a message saying all the data on /dev/sdb2 will be destroyed: that’s because the partition is wiped and joins the existing filesystem, so everything on it, including the UUID, will now belong to the filesystem you originally had on /dev/sda2.

Afterwards, we convert the data profiles to raid0:

btrfs balance start -dconvert=raid0 -mconvert=raid0 -sconvert=raid0 /mnt -f

We will get a warning if we don’t specify the “-f” option: btrfs refuses to explicitly convert the system chunks unless it’s forced, but in this case it’ll be fine.

NOTE: These operations might take a long time to complete, depending on the amount of data on your btrfs partition(s).
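
You can keep an eye on the progress from another terminal with:

btrfs balance status /mnt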

Finally, once the balance concludes, you can remove the “old” disk’s partition:

btrfs device delete /dev/sda2 /mnt

Btrfs will take a really long time on this final step, since it has to move every chunk of data remaining on /dev/sda2 over to /dev/sdb2, but once it’s done, you can safely reboot and remove the old disk.
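
Before rebooting, it doesn’t hurt to double-check that only the new device is left in the filesystem:

btrfs filesystem show /mnt
btrfs filesystem usage /mnt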

I must say it again: this is a “dirty” and not-so-elegant method, but it does the job and is the best way if you are in a rush or your subvolume/snapshot structure is way too complicated. Keep in mind the new partition must be the same size as or bigger than the old one for this to work without problems.

Enjoy!

Hello world!

Well, here we are. Ten years working in IT for a small company have given me lots of experience and enough time to test and try multiple ways of building environments and solutions, so my colleagues can go through their work day without worrying about their computer, the network or anything computer related. That is not the most challenging part, however. Aside from the fact that an IT employee’s work stays in the background as long as there are no problems, there’s a secondary handicap: to do things right, to do them fast and, sometimes most importantly, to do them for free or as cheaply as possible. That’s where my interest in free software comes in handy: not only as a way to save money for a small company in a declining market such as printing, but also as a way to find fitting solutions that don’t lock you into closed software. Sticking to standards may be one of the most appealing qualities of free software, and that’s why I choose to implement it whenever I can, both at my workplace and back at home.

It’s been a long time since I first tried Linux: Slackware, back in ’97 or ’98… I really can’t remember, it was way too long ago. Needless to say, there was no internet at that time and you fully depended on your skill at reading manuals. I never got it to work, but it was a very interesting couple of months trying to figure out how *nix systems worked.

Six years later, while I was studying IT, I went back to try my luck with Linux: it was getting more and more mature, and at that time Mandrake was my favorite. Making my USB modem work was a total challenge, but it was worth it!

Then along came Ubuntu, which I used for a couple of years, always feeling that lack of total freedom. I have nothing against them and still think it’s the perfect starter distro, but that’s all it is: a simple, yet robust, distro.

2008 was the year I discovered Arch Linux, and since then I haven’t looked back. The atomic configuration, the KISS mentality, its Wiki and community, the fact that it’s a rolling release distro (no need to reinstall or fear new releases) and the AUR are the best qualities I can speak of regarding this distro. I have it on every available PC at home: on my two PCs and on my Raspberry Pi, which I use as a live cam.

But Linux is not my only passion: technology itself is. I enjoy reading articles about new gadgets, computer hardware, new inventions and breakthroughs in science. Even though my specialty while studying was software development, and I had to design a couple of apps (in Visual Basic first and Java later), I have always been more interested in the hardware and systems side than in programming.

Which finally led me here: after all these years of hoarding handbooks, tips I wrote myself and thousands of hours spent testing software and ways to implement solutions, I finally thought it would be a nice experience to try and share it with the world.

I’m not looking to be a big-time blog with thousands of visits; my main goal is to help anyone who goes through the same things I do and give them an insight into my experience.

But why in English?, you might ask. Even though it’s not my mother tongue, I have always believed language exists so that humanity can communicate, and, even though I speak the second most spoken language, Spanish, I decided to write my notes in English to make them more accessible to everyone. Spanish is the second most spoken language after Chinese, of that I’m sure, but English seems like the best middle ground for everyone after all: it’s accessible, as it’s taught in school in most countries, and it’s a fairly simple language as well. (However, I don’t think it would be too difficult to have a Spanish version of this blog in the future, who knows!)

Anyway, to sum it up: whoever you are, and wherever you are, I hope you enjoy my notes and find them useful. Feel free to ask any question you might have. See you around!