In this Mastodon era, I wanted to try another approach to use as a personal ActivityPub microblogging playground instance. After some research I decided to install Akkoma on a Raspberry Pi 4. Quite a straightforward installation!
To play around. Actually, my idea is to stop spreading my bots around and have one Raspberry Pi that runs them all, and what better place to allocate them than my own instance? At this point, what I'm thinking of is a single-user instance with some bots publishing stats from my Raspberries, instances, processes and other bots, in a kind of alerting-system approach.
To be honest, the motivation is the least important thing. I just want to play around with any system that is not the mainstream Mastodon one.
I will use my Raspberry Pi 4 4GB to host it, now that I moved Nextcloud to the 8GB one. I have installed the official Raspberry Pi OS 64-bit Lite version (no desktop). The steps are the same as the ones I explained in my past article Quick DNS server in a Raspberry Pi, but making sure to select the 64-bit Lite version. What do I want to have set up in my Raspberry Pi?
Download and install Docker
curl -sSL https://get.docker.com | sh
Add the current user to the docker group:
sudo usermod -aG docker $USER
Exit the session and ssh in again.
Test that it works:
docker run hello-world
Get the container ID of this test run; in my case it is 9373dee4491c:
docker ps -a
Remove the test container
docker rm 9373dee4491c
First of all, check what the latest version is by navigating to their releases: https://github.com/docker/compose/releases. At the time of writing this article, the latest stable release was v2.16.0
Download the right binary from their releases into the binaries directory of our system:
sudo curl -L "https://github.com/docker/compose/releases/download/v2.16.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Give executable permissions to the downloaded file
sudo chmod +x /usr/local/bin/docker-compose
Test the installation:
docker-compose --version
It should print something like
Docker Compose version v2.16.0
Assuming we're in corellia and in our home path:
Clone the repository:
git clone https://akkoma.dev/AkkomaGang/akkoma.git -b stable
Move into the akkoma directory
cd akkoma
Get the example environment variables file and copy it into our definitive file
cp docker-resources/env.example .env
Add the current user and the current user's group into the environment variables file:
echo "DOCKER_USER=$(id -u):$(id -g)" >> .env
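If you want to see what that command actually produces before touching the real file, you can rehearse it in a throwaway directory; this is just a sketch, the real target is the .env inside the akkoma checkout:

```shell
# Rehearse the append in a temp dir and show the resulting line
# (the real target is the .env file inside the akkoma checkout)
cd "$(mktemp -d)"
: > .env
echo "DOCKER_USER=$(id -u):$(id -g)" >> .env
grep '^DOCKER_USER=' .env
```

The output is a single `DOCKER_USER=<uid>:<gid>` line; the exact numbers depend on your user.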
We can see how it ended up:
cat .env
Create the folder that will contain the data
Retrieve the dependencies. It may ask to install some of them; just do it.
./docker-resources/manage.sh mix deps.get
Compile everything together, generating the pleroma app. It may again ask to install some other dependencies. It will also take quite a long time.
./docker-resources/manage.sh mix compile
Generate the configuration. It asks some questions, for which the default values should suffice. According to the official documentation, the database hostname is db, the database password is akkoma (not auto-generated), and the IP should be set to 0.0.0.0.
./docker-resources/manage.sh mix pleroma.instance gen
Copy the configuration to the expected location
cp config/generated_config.exs config/prod.secret.exs
Start the DB container.
docker-compose run --rm --user akkoma -d db
Run the PostgreSQL setup script:
docker-compose run --rm akkoma psql -h db -U akkoma -f config/setup_db.psql
According to the official documentation, here we should receive the name of the container, something like akkoma_db_run, but instead I received some warnings telling me that the role "akkoma" already exists, and the database "akkoma" too.
Get to know the containers that are running at this moment:
docker ps
This gave me 2 containers. One was named akkoma_db_run_ac56fc75fa55, which I guess is what the official documentation was expecting. The other one is called akkoma-db-1 and seems to have started together with the instance setup, because it had been running for 30 minutes already. I take the first name for the next step.
Stop the container that we opened to run the PostgreSQL setup script:
docker stop akkoma_db_run_ac56fc75fa55
Run the migrations. It will take some time, because it also compiles the files as the configuration changed.
./docker-resources/manage.sh mix ecto.migrate
It took some time to compile, and when actually applying the migrations the DB disconnected, leaving the process with a beautiful red error. I retried the same command and it performed the migration, but at the very end it showed that one query had failed. I ran the migration once more and it skipped everything, simply showing the same error again (in particular, it complained about the migration 20191220174645 Pleroma.Repo.Migrations.AddScopesToPleromaFEOAuthRecords.up/0). I could fix the issue with the following command:
./docker-resources/manage.sh mix ecto.reset
That asked me to drop the database (I said yes), then created the DB again and applied all the migrations without problems. Yay!
Start the server in the foreground so that we can see that everything is working:
docker-compose up
The official docs mention that we should be able to navigate to http://localhost:4000, but we're on a headless Debian 11, so open another ssh session to corellia and, once the shell is active, curl localhost on port 4000. It should return an HTTP 200 OK.
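A quick way to check is to have curl print only the status code; a minimal sketch, assuming curl is installed on the Pi:

```shell
# Print only the HTTP status code returned by the local Akkoma instance
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:4000
```

If everything is fine it prints 200; a 000 means the server is not reachable at all.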
Now shut down the instance running in the foreground with Ctrl+C.
Start the instance in the background:
docker-compose up -d
Create the admin user:
./docker-resources/manage.sh mix pleroma.user new xavi firstname.lastname@example.org --admin
It prints a summary of the data that will be generated for this user and, after confirmation, creates it. It also outputs a link to change the password. Keep it, as we'll need it once the SSL is fixed.
At this point every configuration will differ. In my case I don't want to install yet another reverse proxy, so I'm going to reuse the one that I installed some days ago on a separate Raspberry Pi. To do so:
Edit the file that holds the Virtual Hosts configuration
sudo nano /etc/apache2/sites-available/010-reverse-proxy.conf
Add a new Virtual Host for the Akkoma:
<VirtualHost *:80>
    ServerName social.mydomain.com
    ProxyPreserveHost On
    ProxyPass / http://corellia:4000/ nocanon
    ProxyPassReverse / http://corellia:4000/
    ProxyRequests Off
</VirtualHost>
Stop the reverse proxy. The certbot needs full control of the HTTP & HTTPS ports:
sudo service apache2 stop
Execute the certbot for this new domain:
sudo certbot --agree-tos -d social.mydomain.com --apache
Start Apache again:
sudo service apache2 start
Once the backend is up and running, we should install a frontend so that we can interact with it nicely. No, it does not come bundled, and I actually wonder why; I'll check. The installation is kinda trivial:
Install the pleroma frontend:
./docker-resources/manage.sh mix pleroma.frontend install pleroma-fe --ref stable
Install the admin frontend:
./docker-resources/manage.sh mix pleroma.frontend install admin-fe --ref stable
At this point, I was expecting to be able to navigate to http://social.mydomain.com, and it actually receives the requests and redirects to HTTPS (thanks to the changes done by certbot), but then it throws a 503.
After some digging, I saw that Akkoma's docker-compose.yml still listens only on 127.0.0.1, so only localhost can forward traffic to it. Let's change it:
Move yourself to the akkoma stack
cd ~/akkoma
Edit the docker-compose file
nano docker-compose.yml
Search for the ports section and replace 127.0.0.1:4000:4000 with 0.0.0.0:4000:4000. We want the container to accept requests to port 4000 from any host, not only the ones that come from localhost.
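The relevant fragment of docker-compose.yml ends up looking roughly like this (a sketch; the surrounding keys in your file may differ):

```yaml
services:
  akkoma:
    ports:
      # was: "127.0.0.1:4000:4000", reachable only from the Pi itself
      - "0.0.0.0:4000:4000"   # now reachable from the reverse proxy host too
```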
Recreate the container:
docker-compose up -d
Now we should be ready to navigate to http://social.mydomain.com and see the Akkoma interface appearing!
Oh yeah, I have no clue what I'm doing. What I do have clear is the goal of this server, at least at this point in time:
Very first thing, before even trying anything else, just in case I regret forgetting anything.
All by default, then:
Everything is Disabled. No changes.
General MRF settings
I guess this is where I should come back later to set up an external disk, whenever I have one.