
Raspberry Pi 4 Personal Datacentre Part 1: Ansible, Docker and Nextcloud

Pi 4 packs plenty of punch and is perfect for a private home or office cloud server.

In this series of posts, we show how a Raspberry Pi 4 can be used to create a personal cloud solution that is managed using Ansible and Docker: powerful tools used by many large-scale cloud platforms, which automate configuration tasks and provide containerisation for applications.

This first post takes a look at initial Raspbian configuration, followed by setting up Ansible and Docker, and then finally the installation of Nextcloud using these tools.

Nextcloud provides a whole host of services, including file sharing, calendaring, videoconferencing and much more. However, as we will come to see, this is just the beginning and with our tooling set up, the capable Pi 4 can quickly be put to many other uses as a personal cloud server.

Hardware

With its quad-core processor, the Raspberry Pi 3 Model B was no slouch, and the new Pi 4 features a system-on-chip upgrade that delivers a significant further performance boost. Not only that, but with models offering up to 4GB RAM, it is capable of running more advanced and multiple larger workloads. Add to this the upgrades from 100M to 1G Ethernet and from USB 2.0 to USB 3.0, and it is perfect for a personal server, to which fast external storage can be attached.

The OKdo Raspberry Pi 4 4GB Model B Starter Kit (192-5286) provides everything that we need to get up and running: a 4GB Pi 4, heatsinks, enclosure with a fan, mains power supply, cables, a 32GB Micro SD card, and an SD card reader.

Basic setup

We start by downloading the latest version of Raspbian; since this is going to be a “headless” server, the Lite variant should be selected. Once unzipped, the image needs to be written out to an SD card. On Linux, this was done with:

$ sudo dd if=2019-09-26-raspbian-buster-lite.img of=/dev/mmcblk0 bs=1M

Note that the input filename (if) will vary depending upon the Raspbian version, and the output filename (of) will vary depending upon your system. Great care must be taken when using the dd command; if unsure, or if using Windows or a Mac, see the installation guide.

If you’re going to configure your Raspberry Pi over a network connection, see the SSH documentation. VNC could be used instead, but this would mean installing the full desktop version of Raspbian, which feels like overkill for a server and would use up more disk space.
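As an aside, SSH can be enabled before first boot by creating an empty file named ssh in the boot partition of the freshly written card. A minimal sketch, assuming the boot partition is mounted at /media/$USER/boot (the mount point will vary by system):

$ touch /media/$USER/boot/ssh

On the next boot, Raspbian sees this file, enables the SSH server and removes the file.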

Once booted the latest Raspbian updates can be applied with:

$ sudo apt update
$ sudo apt dist-upgrade

To set the hostname to something more meaningful, two files need to be edited. E.g. using nano:

$ sudo nano /etc/hostname
$ sudo nano /etc/hosts

Where “raspberrypi” is seen in each file, replace this with your hostname of choice. For example, if you were to use “cloud” this would mean that, after rebooting the Pi, you could connect via SSH from another Linux system by using the command:

$ ssh pi@cloud.local
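The same hostname edit can also be scripted, which is handy if more than one Pi is being set up. A sketch using sed, assuming the stock hostname of “raspberrypi” and a new hostname of “cloud”:

$ sudo sed -i 's/raspberrypi/cloud/' /etc/hostname /etc/hosts
$ sudo reboot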

Ansible

Ansible is a powerful open-source tool for provisioning, configuration management, and application deployment. In short, it allows you to automate complex tasks that can quickly become boring and error-prone. By writing an Ansible script to do something, you can also cheat a little and get away with taking fewer notes, since you don’t have to remember all the individual steps involved. Furthermore, via the magic of Ansible “roles”, we can take advantage of common tasks that have been automated by others as pre-packaged units of work.

To start with, we use the Raspbian package management system to install Ansible, along with a Python module that will enable it to manage Docker:

$ sudo apt install ansible python3-docker

Then to install an Ansible role which automates setting up Docker, we simply enter:

$ ansible-galaxy install geerlingguy.docker_arm

Note that this hasn’t installed Docker itself; rather, it has just installed a role called geerlingguy.docker_arm.

There are lots of freely available roles for automating all manner of tasks and for details see the Ansible Galaxy website.
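For example, the roles installed locally can be listed, and the Galaxy index searched, from the command line:

$ ansible-galaxy list
$ ansible-galaxy search --author geerlingguy docker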

Docker installation

Now we could just install Docker using the Raspbian package management system (with the apt or apt-get command), but instead we’re going to use the Ansible role that we just installed, via an Ansible playbook, which is basically a list of tasks.

Playbooks are written in YAML, so we need to be careful to ensure that any indentation is with spaces and not tabs! We create a file called docker.yml with the contents:

---
- name: "Docker playbook"
  hosts: localhost
  connection: local
  become: yes
  vars:
    docker_install_compose: false
    docker_users:
      - pi
  roles:
    - geerlingguy.docker_arm

What this says is that the playbook is called “Docker playbook” and it should be run on the local computer, in this case our Raspberry Pi. The line “become: yes” means that we become the root user before executing tasks. The user “pi” is in the docker_users variable, which means that it will be added to the docker group so that it is allowed to manage containers. Finally, we specify the role which actually sets up Docker.

More typically an Ansible playbook would be used to configure a number of remote servers and in such cases, these would be specified in the line starting “hosts:” instead of localhost (the local computer). In fact here the playbook could have been run from a different computer which then uses Ansible to remotely configure our Raspberry Pi over SSH, but for the sake of simplicity, we’ll just run the playbook directly on the Pi itself.
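Before running a playbook, it can be given a quick check. Ansible will validate the YAML syntax without executing anything, and a dry run reports what would be changed:

$ ansible-playbook docker.yml --syntax-check
$ ansible-playbook docker.yml --check

Note that a dry run of a role such as this one may report errors for tasks that depend upon earlier tasks having actually run, so treat its output as indicative only.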

So now on to running our first playbook!

$ ansible-playbook docker.yml

Note how we didn’t need to prefix the command with sudo. This is because the playbook contains the line “become: yes” and so we become root before running tasks.

If all goes well we should see output similar to that above.

And eventually, the playbook should complete with zero failed “plays”.

Docker should now be installed and started. Its ps command lets you see the running containers. See above how the first time we ran this it failed, because we weren’t yet in the docker group; all we had to do was log out and back in again, following which the docker group membership was picked up and we could use commands to manage Docker.
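As a quick sanity check that the daemon can pull images and run containers, the tiny stock hello-world image can be used (it is downloaded from Docker Hub on first run):

$ docker ps
$ docker run --rm hello-world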

Docker basics

Docker provides a containerisation system whereby applications are packaged along with all their dependencies, so that in principle you can deploy a Docker container on any flavour or version of Linux, provided it has Docker available. This hugely reduces the overhead of having to maintain software for different distributions and versions. Furthermore, it allows you to neatly bundle lots of different pieces of software together, making it trivial to distribute complex application stacks.

Distribution of software is via images that are typically published to an online registry, such as Docker Hub. When you create a new container, the required image is first pulled from the online registry and once a local copy is on your machine, this is then used to create the container.

Other very cool Docker features include private networking that can be set up between containers and the ability to map ports on external IP addresses to ports inside containers. This means that you can easily have multiple different containers, based on the same or different images, running on different ports. E.g. instances of an app for home and work, or for production and testing.
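As an illustrative sketch of port mapping (nginx is used here purely as a convenient example image), two containers from the same image can run side-by-side on different external ports:

$ docker run -d --name web-test -p 8080:80 nginx
$ docker run -d --name web-live -p 8081:80 nginx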

Because data is kept separate from Docker containers, it means that you can delete a container and retain the application data, then subsequently deploy a new container that might use an updated image and which is configured to use data stored in an existing volume.
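For example, a container can be deleted and a new one created against the same volume; a sketch with purely illustrative names (app for the container, appdata for the volume, myapp for the image):

$ docker stop app && docker rm app
$ docker run -d --name app -v appdata:/data myapp:latest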

Some useful commands are as follows (a short worked example follows the list):

  • docker image ls (list images on this machine)
  • docker ps (show running containers)
  • docker volume ls (list volumes)
  • docker start|stop|restart <container> (start, stop or restart a container)
  • docker logs <container> (show the logs for a container)
  • docker inspect <container> (show container config)
  • docker volume inspect <volume> (show volume config)
  • docker rm <container> (delete a container)
  • docker volume rm <volume> (delete a volume)
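Most of these take a container or volume name. Once the Nextcloud container created below is up, for instance, a single field such as its restart policy can be extracted from the inspect output with a Go template:

$ docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' nextcloud
$ docker volume inspect ncdata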

One of the key things to note with Docker is that we only have one copy of the operating system running, in contrast to virtualisation where we have many. This means that Docker makes far more efficient use of resources, as we don’t have to allocate CPUs and memory to VMs which then each run their own copy of an operating system. Instead, we have applications running side-by-side, albeit in their own containers, which provide increased manageability and security.

Nextcloud installation

---
- name: "Nextcloud playbook"
  hosts: localhost
  connection: local
  become: yes
  tasks:
    - docker_container:
        restart: yes
        restart_policy: always
        name: nextcloud
        image: ownyourbits/nextcloudpi
        pull: yes
        volumes:
          - 'ncdata:/data'
        ports:
          - '80:80'
          - '443:443'
          - '4443:4443'

Above we can see the contents of the playbook, nextcloud.yml, which we will use to install Nextcloud via a Docker container. This time, instead of using a ready-made “role”, we have specified tasks, with the first and only one being to set up a Docker container with the configuration:

  • To always restart the container (app) if it fails
  • Use the name of “nextcloud” for the container
  • Deploy using the container image, ownyourbits/nextcloudpi
  • Always pull the latest image version
  • Use a volume called “ncdata” to store user data
  • Map external to internal ports: 80>80, 443>443 and 4443>4443

This mapping ability gives us a lot of flexibility, both with data (here the volume ncdata is mounted, i.e. appears, inside the container as /data) and with the port numbers used for networking.
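For example, if port 80 on the Pi were needed for something else, only the external side of the mappings would change; a sketch of an alternative ports section (the external numbers here are arbitrary):

        ports:
          - '8080:80'
          - '8443:443'
          - '4443:4443'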

To run the playbook simply enter:

$ ansible-playbook nextcloud.yml

Once again this should complete without any failures. We can also check the container logs with:

$ docker logs nextcloud

And we should see something similar to the output shown above.

Nextcloud configuration

The Nextcloud container we have selected is generally used by itself as part of a turnkey SD card image which has a hostname of “nextcloudpi” configured. However, we eventually want to be able to run other things alongside it and so we picked a more generic hostname for our Raspberry Pi. Hence there is going to be just a little bit more configuration required.

If we browse to http://cloud.local (or whatever hostname you configured), this will redirect to a secure page and, initially, there may be a certificate error that we need to acknowledge. We will then be presented with a page similar to that shown above, with login details for the NextcloudPi and Nextcloud web interfaces. These should be noted down. Also, unless we used a hostname of nextcloudpi earlier, the two URLs will need to be modified to use the hostname that we selected.

We can then select Activate. The NextcloudPi web interface is where we can easily make key Nextcloud system changes and we should load this (the one with a :4443 suffix) next. So just to be clear, this will be https://<hostname>.local:4443.

Here we need to select nc-trusted-domains from the left-hand menu and enter the appropriate domain. E.g. for a hostname of “cloud” this would be “cloud.local”. Then select Apply.
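The same setting can also be changed from the command line using Nextcloud’s occ tool. A sketch, assuming the container follows the standard Nextcloud layout with the code under /var/www/nextcloud and running as www-data (the index 2 is simply a free slot in the trusted_domains array):

$ docker exec -u www-data nextcloud php /var/www/nextcloud/occ config:system:set trusted_domains 2 --value=cloud.local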

Finally, we can now visit the other URL, which will be https://<hostname>.local (no :4443 suffix). Then log in using the other set of credentials from the activation page. At this point, we can now create new users, upload files, collaborate, make use of apps and install new ones.

Nextcloud has a vibrant developer ecosystem and lots of great apps that can be installed with just a click from within the web interface. There are also companion apps for Android and iOS.

Wrapping up

We’ve by no means taken the simplest route to get Nextcloud up and running on our Raspberry Pi, but have done this in a way which makes it trivial to re-deploy whenever we need — onto the same or additional Pi boards! Furthermore, as we’ll come to see in the next post, the approach taken will allow us to easily run additional applications side-by-side and manage these with great ease.

  — Andrew Back

Open source (hardware and software!) advocate, Treasurer and Director of the Free and Open Source Silicon Foundation, organiser of Wuthering Bytes technology festival and founder of the Open Source Hardware User Group.

12 Nov 2019, 13:10

Comments

March 24, 2020 14:18

How do you do updates of NextCloud with this type of installation?


March 27, 2020 09:35

@Henry13 Ah, that's super easy. You stop the container and then delete it with "docker rm nextcloud". Then re-run the Ansible playbook and because this has the line "pull: yes", it will pull the latest version of the Nextcloud container image down before re-creating the container. Any config/data will be in the ncdata volume and hence is separated from the application code.
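In command form, that update sequence is simply (using the container name from this article):

$ docker stop nextcloud && docker rm nextcloud
$ ansible-playbook nextcloud.yml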

CB1

February 3, 2020 13:38


Hey, a nice general intro to Ansible on a Pi, thanks.

Going slightly out of scope, if I set: docker_install_compose: TRUE
implying it also installs docker-compose though, it errors out. is that because pip is "pip v2" and not v3? Any hints to fix?

I guess though I'll force it in now and play with it all, and then see how to debug playbooks later. Thanks!


November 21, 2019 08:22

thanks for this post, its exactly what I was looking for. will order a rpi 4 today, already have a big usb hard drive I can use for this.

November 21, 2019 09:59

@firehopper enjoy! Run into any problems and let me know.

November 23, 2019 10:05

@Andrew Back okay, doing the docker logs nextcloud thing says it can't find /nc-init.sh not found so I dont think I did something right. and going to nas.local just pops up a Initializing NextCloudPi for the first time Please wait... and it just sits there.. nothing happens..

November 25, 2019 08:50

@firehopper oops, my bad! Just updated the nextcloud.yml playbook, as it was pointing to the wrong image (tried a few different ones out...) This is seemingly a known problem with the -armhf one previously referenced, which is a simpler container without the additional web admin interface on :4443. So you need to update the playbook and then delete the container and its data volume with:

$ docker stop nextcloud && docker rm nextcloud && docker volume rm ncdata

Then just re-run the updated playbook and it will use the correct image this time.

November 25, 2019 08:50

@Andrew Back added a 4443:4443 to the ports setting in the nextcloud playbook. and it works now :)

November 25, 2019 08:50

@firehopper now just need to figure out how to add the hard drive on usb aka /dev/sda

November 25, 2019 08:50

@firehopper you'd need to update /etc/fstab so that it gets mounted at boot time. Then if you inspect the ncdata volume you will see this is mapped to "/var/lib/docker/volumes/ncdata/_data", so on the assumption that all apps will use Docker you could use a mount point of "/var/lib/docker/volumes" (after having copied the contents of this to the root of the drive). Alternatively you could probably mount the drive to a more generic mount point, e.g. /data, and then sym link the Docker volumes root to a sub-directory, e.g. /data/docker. Or maybe update the Docker config to create volumes elsewhere. First approach feels like the cleanest and the others would need some research. Note that there are also other drivers available for volumes. The default is called "local".
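A sketch of such an fstab entry, assuming the drive’s first partition is /dev/sda1 and formatted as ext4 (the device name, filesystem and options will vary):

/dev/sda1  /var/lib/docker/volumes  ext4  defaults,nofail  0  2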


November 26, 2019 16:21

@Andrew Back Hi, Thank you very much for the guide, everything works right from the box! I got a question though, which is bugging me. As i'm learning Docker/Ansible and overall Linux stuff, I managed to get my external USB HDD (NTFS) (its dir is /mnt/mydisk) setup and its working fine. The thing that I cannot understand is how can i get my data always be uploaded there instead of my SD card? What and where do I need to change so that all the data goes into lets say /mnt/mydisk/Nextcloud Thanks again!

November 27, 2019 08:48

@Alex9112 see my earlier response to @firehopper. Easiest way would be to: 1) stop the container; 2) copy the contents of /var/lib/docker/volumes to the external drive; 3) unmount the drive and remount it as /var/lib/docker/volumes; 4) start the container. Then each time you create a new volume, e.g. for additional containers, it would exist on the external drive.

November 27, 2019 08:47

@Andrew Back Thanks for the reply, I will test it out. Do you know if it matters if the external disk is ntfs or it must be ext4? I've been banging my head for the past few days and apparently NTFS will not suffice.

December 2, 2019 08:13

@Alex9112 last I heard I think Linux could read NTFS but not write and even if it can now, I'd avoid it and use ext4 instead.

November 25, 2019 08:50

@Andrew its not the router as far as I can see.. not sure what to do to fix it. the pi is running no gui so I cant try on that. :)

November 25, 2019 08:50

@Andrew Back well it runs and all.. but I cant connect to the 4443 port. Firefox can’t establish a connection to the server at nas.local:4443. it might be the new router. I'm gonna see if I can fix it there.

November 22, 2019 14:12

@Andrew Back thanks, have the pi, and a router with cables, everything is installed, now to follow your instructions and setup the raspberrypi. might get a small 2x20 or something lcd screen and use that to show the IP of the pi. as the router app doesnt show ip addresses of the things on it for some reason. for use to ssh into the pi for controlling it from my pc on the network.

November 23, 2019 10:05

@firehopper Nice idea. Though you should also be able to SSH to it using the hostname, e.g. from another Linux computer and if you haven't changed the hostname, "ssh pi@raspberrypi.local". Not sure whether the ".local" suffix is needed from Windows. In any case, you should be able to use mDNS to connect by name and therefore not need to know the IP.

November 19, 2019 13:21

Ordered a Pi 4 today and looking forward to your next post. Any idea when it will be coming out? Great work!

November 20, 2019 09:16

@Conjada thanks for the kind words! Probably the first or second week in December. Just in the process of figuring out a few final details.