Setting up a debian package archive with aptly

This article describes how to set up a debian archive with aptly on a debian 9 “stretch” computer, served by an nginx web server.

Initial setup

  1. Add a DNS alias for your virtual nginx web site (outside the scope of this blog post). The examples below assume this DNS alias
  2. Install the required software. Logged in as root, give the following command
    apt-get install gnupg pinentry-curses nginx aptly
  3. Logged in as your regular user, do the following:
    1. Create a gpg key, by giving the following command at the command line.
       Note! It is a good idea to do the key generation when logged into a debian desktop, and to move the mouse about during generation, to get good random values for the key generation.
      gpg --full-generate-key
      1. At the prompt for key type, just press ENTER to select the default (RSA and RSA)
      2. At the prompt for key size, type “4096” (without the quotes) and press ENTER
      3. At the prompt for how long the key should be valid, type “0” without the quotes and press ENTER
      4. At the prompt for “Real name”, type your real name and press ENTER
      5. At the prompt for “Email address”, type your email address and press ENTER
      6. At the prompt for “Comment”, type the host name of your archive web server, e.g. “” and press ENTER
      7. At the prompt “Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit?”, type “O” (without the quotes) and press ENTER
      8. At the prompt for a passphrase, give a passphrase you will remember. You will be asked for this passphrase every time the repository is published
    2. Export the public key of the gpg key pair in a form that the web server can use
      1. First list the keys to find the id of the key
        steinar@lorenzo:~$ gpg --list-keys
        pub   rsa4096 2017-12-27 [SC]
        uid           [ultimate] Steinar Bang ( <>
        sub   rsa4096 2017-12-27 [E]
      2. Then use the id of the key to export the public key of the gpg key pair in a form that the web server can return
        gpg --output apt_pub.gpg --armor --export 6B7638490707CCE365DF5415D2BA778731DC1EAC
      3. Publish the key with the default GPG keyserver
        gpg --send-keys 6B7638490707CCE365DF5415D2BA778731DC1EAC
    3. Create a local repository “stable”
      aptly repo create -distribution="stable" stable
    4. Configure an architecture in the archive: open the ~/.aptly.conf file in a text editor, and change the line
      "architectures": [],

      to

      "architectures": ["amd64"],

      Note! Without a concrete architecture in place, aptly refuses to publish. So add an architecture here, even if you are only going to publish packages with architecture “all” (e.g. java, python, shell script). In the example I’m using “amd64” which, despite its name, is appropriate for modern 64 bit intel chips (i5 or i7 of various generations).

    5. Import a package into “stable” (the example uses the package built in Installing apache karaf on debian stretch)
      aptly repo add stable git/karaf-deb-packaging/karaf_4.1.4-1_all.deb
    6. Publish the archive (switch the gpg-key with the id of your actual repository key):
      aptly publish repo --gpg-key="6B7638490707CCE365DF5415D2BA778731DC1EAC" stable

      Note! If you get a time out error instead of a prompt for the GPG key passphrase, and you’re logged in remotely to the server, the reason could be that gpg tries to open a GUI pinentry tool. Switch to explicitly using a curses-based pinentry, and try the “aptly publish” command again. Give the command:

      update-alternatives --config pinentry

      and select “pinentry-curses” in “Manual mode”

  4. Log in as root and do the following
    1. Create a root directory for a new web server and copy in the public key used to sign the published archive
      mkdir -p /var/www-apt
      cp /home/steinar/apt_pub.gpg /var/www-apt/
    2. In a text editor, create the file /etc/nginx/sites-available/apt with the following content
      server {
      	listen 80;
      	listen [::]:80;
      	root /var/www-apt;
      	allow all;
      	autoindex on;
      	# Full access for everybody for the stable debian repo
      	location /public {
      		root /home/steinar/.aptly;
      		allow all;
      	}
      	# Allow access to the top level to be able to download the GPG key
      	location / {
      		allow all;
      	}
      }

      Note! I actually started out also serving HTTPS, signed with letsencrypt, but as it turns out APT doesn’t support HTTPS out of the box, so there was no point in including it in this HOWTO

    3. Enable the site by creating a symlink and restarting nginx
      cd /etc/nginx/sites-enabled
      ln -s /etc/nginx/sites-available/apt .
      systemctl restart nginx

Your APT archive is now up and running.
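For reference, the user-side aptly steps above can be condensed into a couple of small shell helpers. This is a sketch, not part of the original procedure: the function names are mine, and the fingerprint parsing assumes the machine-readable `gpg --list-keys --with-colons` output format, where the `fpr` line carries the fingerprint in field 10.

```shell
# Extract the first key fingerprint from `gpg --list-keys --with-colons`
# output, so the publish step does not need a copy-pasted key id.
key_fingerprint() {
    awk -F: '/^fpr/ { print $10; exit }'
}

# Enable an architecture in an aptly config read from stdin
# (the same edit done in ~/.aptly.conf above).
enable_architecture() {
    sed "s/\"architectures\": \[\]/\"architectures\": [\"$1\"]/"
}

# Intended use (aptly and gpg must be installed; shown as comments only):
#   KEYID=$(gpg --list-keys --with-colons | key_fingerprint)
#   enable_architecture amd64 < ~/.aptly.conf > ~/.aptly.conf.new \
#       && mv ~/.aptly.conf.new ~/.aptly.conf
#   aptly publish repo --gpg-key="$KEYID" stable
```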

Use the new APT archive

To start using the APT archive, do the following on a debian computer:

  1. Log in as root
  2. Add the archive key
    wget -O - | apt-key add -
  3. Add the archive by adding the following lines to /etc/apt/sources.list
    # My own apt archive
    deb stable main
  4. Update APT to get the information from the new archive
    apt-get update
  5. Install a package from the archive
    apt-get install karaf

Future additions and updates of existing packages can be done as your regular user, with no need to log in as root during the process.
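The deb line in step 3 above has the archive URL elided; the shape of what goes into /etc/apt/sources.list can be sketched like this. The host name below is an assumed placeholder, and the /public path follows from the nginx configuration earlier, which serves ~/.aptly/public under /public:

```shell
# Build the sources.list line for an aptly archive served by the nginx
# site above. "apt.example.com" is a placeholder, not a real archive.
APT_HOST="apt.example.com"

sources_line() {
    echo "deb http://$1/public stable main"
}

# Intended use on the client, as root (shown as comments only):
#   wget -O - "http://$APT_HOST/apt_pub.gpg" | apt-key add -
#   sources_line "$APT_HOST" >> /etc/apt/sources.list
#   apt-get update
```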

Publish a new version of a package

To update an existing package:

  1. Build a new version of the package
  2. Add the new version of package to the package archive
    aptly repo add stable git/karaf-deb-packaging/karaf_4.1.5-1_all.deb
  3. Update the published package archive (i.e. the package archive served by nginx)
    aptly publish update --gpg-key="6B7638490707CCE365DF5415D2BA778731DC1EAC" stable
  4. Do “apt-get update” on the computers using the archive
  5. Do “apt-get dist-upgrade” on the computers using the archive, and the package should be upgraded
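The two aptly commands above lend themselves to a tiny wrapper function; a sketch of mine (the key id is the example id used throughout this article, and the function is only defined here, not run):

```shell
# Add a new .deb to the local "stable" repo and republish it in one go.
# Requires aptly set up as described above.
publish_new_version() {
    deb="$1"
    aptly repo add stable "$deb" &&
        aptly publish update --gpg-key="6B7638490707CCE365DF5415D2BA778731DC1EAC" stable
}

# Example: publish_new_version git/karaf-deb-packaging/karaf_4.1.5-1_all.deb
```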

Installing apache karaf on debian stretch

Apache karaf is an OSGi container/application server with some nice properties:

  1. It has an SSH server you can log into and a command line where you can inspect and configure the karaf instance
  2. It can be provisioned using apache maven: basically, you can start with an empty karaf, ssh into the SSH server, and pull in and start your application using “maven magic”
  3. It is much simpler to get an OSGi application up and running in apache karaf than in any other system I have tried, since I was first introduced to OSGi in 2006
  4. Karaf can also be used to run non-OSGi applications packaged as jar or war files
  5. In a development setting it is very simple to deploy new versions of the code using maven, and to remote debug the deployed code from eclipse or IntelliJ

Running karaf on a debian GNU/linux system is a little hampered by there not being a native .deb package. I have opened an RFP (Request For Packaging) bug for karaf in the debian bug tracker. When/if that issue is ever resolved as done, karaf will be easily available on debian, and also on all of the distros that are based on debian (e.g. ubuntu and mint).

Until then, I do my own debian packaging. I forked an existing packaging I found, and made some changes:

  1. Switched from oracle JDK 8, to openjdk 8
  2. Updated to karaf version 4.0.7 (the currently newest stable release at the time of forking), later upgraded to karaf 4.1.1 and again upgraded to karaf 4.1.2
  3. Use /var/lib/karaf/data instead of /usr/local/karaf/data
  4. Use package version “-1” instead of “-3”
  5. Switched from using the service wrapper (karaf-wrapper) to plain systemd start using the scripts and config from bin/contrib in the karaf distribution
  6. Made the stop of running services more robust
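For illustration, a minimal systemd unit along the lines of the bin/contrib scripts mentioned in item 5 could look like the sketch below. The paths, names and options here are my assumptions based on the packaging described in this post, not the actual contents of the package:

```ini
[Unit]
Description=Apache Karaf
After=network.target

[Service]
Type=simple
User=karaf
Group=karaf
ExecStart=/usr/local/karaf/bin/karaf server
ExecStop=/usr/local/karaf/bin/karaf stop
SuccessExitStatus=143
Restart=on-failure

[Install]
WantedBy=multi-user.target
```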

The resulting .deb package will follow the usual service pattern of a debian service: the service will run as a user named after the service (i.e. user “karaf”, which is the single member of group “karaf”, and the owner of all files the service needs to touch). The service will log to the regular debian syslog. The configuration will end up in /etc/karaf, and all files not part of the installation will be left untouched on a .deb package uninstall or upgrade.

My fork of the packaging lives at

To create the package and install karaf, do the following steps:

  1. Log in as root on a debian system
  2. Install the prerequisites for building the package (debian packages and a ruby gem):
    apt-get update
    apt-get install git maven openjdk-8-jdk postgresql ruby ruby-dev build-essential
    gem install fpm
  3. Clone the packaging project and build the deb package:
    cd /tmp
    git clone
    cd karaf-deb-packaging
    mkdir -p /root/debs
    cp *.deb /root/debs
  4. Install the .deb package:
    dpkg --install karaf_4.1.4-1_all.deb

After karaf has been installed it is possible to log in as user “karaf”, with the following command

ssh -p 8101 karaf@localhost

The password is also “karaf” (without the quotes).

This opens the karaf console command line

        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (4.1.4)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit 'system:shutdown' to shutdown Karaf.
Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.


At this command line, you can e.g.:

  1. install an application
  2. start, stop and list running applications
  3. set the configuration used by the applications

But all of these, except for the first, will be items for later posts.

In memory of Daisy the rabbit

I am taking a break from postings about computing, to remember Daisy the rabbit.

The last picture of Daisy the rabbit
Still a happy rabbit, enjoying dandelions.

A small black and white lionhead rabbit.

She was born in November 2009 and died 29 August 2017, a little over 7.5 years old. The kids first got to meet her on 22 December 2009. From just after New Year 2010 and until she died, she lived with us.

When she came to us she was the world’s softest and cutest little rabbit. A tiny ball of fur that was afraid of jumping down from the sofa. We had to put pillows on the floor for her to dare to jump down. Silky soft fur was a quality she kept right up until her death.

The kids were 7 and 8 years old when we got her, so she has been part of the family through a large part of their childhood. We had Daisy when grandma and grandpa at Mo sold their house, and the first time the kids drove Trollstigen and Geiranger and hiked Besseggen.

She got to come along on autumn holidays and winter holidays and Easter holidays. She has been to Gålå and to Sjusjøen and to Femundsenden and lots of other places. She was not very fond of the travel cage, but she seemed to thrive when she got to hop around and explore a new place.

Most of the holidays, however, she spent at Heidi’s cat and rabbit boarding kennel at Rotnes, and she had a good time there.

She got her own instagram account, kaninen Daisy.

I got my own Santa hat🎅

A post shared by ❤️Daisy❤️ (@kaninen_daisy) on


Daisy loved Christmas, but most of all she loved the Christmas trees. She was also fond of snow that let her dig caves and tunnels on the veranda, where she spent a lot of time.

Not all rabbits are equally friendly, but Daisy was an affectionate and friendly rabbit, who would happily hop into people’s laps, and who loved a good ear rub.

She never got any rabbit friends, but she got a lot of attention from her humans, and I think she had a nice little rabbit life with us.

Daisy loved twigs from weeping birch, disliked dry mountain birch, but munched both dandelions and clover with great relish. She was a fairly clean and tidy little rabbit lady, and she stayed on the carpet (i.e. she was not very fond of the hardwood floor), which meant that she got to roam free a lot. Nor did she chew wires, if we disregard two or three pairs of headphones belonging to Eirik.

It is sad that we will never again get to see the rabbit dance in front of the veranda door to be let in, and never again get to see the rabbit sit on the little mat in front of her cage, staring hopefully at the food cupboard. And never again get to see the rabbit go “crazy Daisy” and sprint around the carpet and up into the sofa.

It is sad that Daisy is dead, but I am very glad that she of all rabbits became our rabbit, and that she was with us all these years. Farewell, little friend!

Sign nginx website and dovecot imap server on debian with let’s encrypt

If you have a setup with a single server running multiple services (web, IMAP etc.), with one DNS CNAME per service, and you would like to get the services signed in a manner that doesn’t give warnings or errors in browsers (especially browsers on phones and tablets with iOS and Android), then this article may be of interest.

Self-signed certificates are a nuisance, and the initiative has been losing support. Let’s encrypt offers the possibility of having free (as in both cost and freedom) SSL certificates that don’t give warnings in web browsers. The only problem is the threshold of taking the time to figure out how to use it.

It turned out there wasn’t much figuring to do: on a debian jessie GNU/linux system, the certbot program takes care of everything, including keeping the certificates automatically updated (the .deb package for certbot sets up a cronjob that does the right thing).

The way certbot works, each server you wish to sign must be accessible over http (port 80), and the local path “/.well-known/” on each server must map to a file area that certbot can put files in.

The certbot program works by contacting let’s encrypt, saying that it wants a certificate for a DNS name, and let’s encrypt will then access an HTTP URL under “/.well-known/” to verify that certbot is indeed running on a server that can be found using that DNS name.

This means that, for certbot to work:

  1. Even if your HTTP server responds only on HTTPS and/or requires authentication, you will need to make a plain HTTP connection available, and have the local path “/.well-known/” map to a part of the file system and be available without authentication
  2. Even if you’re making a certificate for a non-HTTP service (e.g. an IMAP server), you will need a plain http (port 80) server responding to that DNS CNAME, that can serve the local path “/.well-known/” from the local file system

This article explains how to set up free SSL certificates signed with let’s encrypt on an nginx web server and a dovecot IMAP server, on a debian jessie GNU/linux system.

The certbot instructions take you part of the way, but they have some holes and not a lot of explanation, which is why I wrote this article.

The steps are:

  1. Add jessie-backports to APT (click the link and follow the instructions)
  2. Install certbot from jessie-backports:
    1. Open a command shell as root and give the following command:
      apt-get install certbot -t jessie-backports
  3. Disable the default nginx site
    1. Edit the /etc/nginx/sites-available/default file to have the following content:
      server {
              listen 80 default_server;
              listen [::]:80 default_server;
              root /var/www/html;
              server_name _;
              location / {
                      deny all;
              }
      }
    2. Run the following command in the command shell opened as root
      systemctl reload nginx
  4. Add DNS CNAME-records for the virtual hosts you are going to sign.
    In the examples used in this procedure, the host is and it has two CNAME aliases: and
  5. Add a nginx site
    1. Create a file /etc/nginx/sites-available/www with the following contents:
      server {
              listen 80;
              listen [::]:80;
              root /var/www/html;
              index index.html index.htm index.nginx-debian.html;
              location / {
                      allow all;
              }
      }
    2. Give the following commands in the command shell opened as root:
      cd /etc/nginx/sites-enabled/
      ln -s /etc/nginx/sites-available/www .
      systemctl reload nginx
  6. Add an nginx site
    Note! This isn’t a real website but it is necessary to give HTTP access to a web server listening to this CNAME alias so that the certbot program can create and auto-update the certificate that dovecot uses.

    1. Create a file /etc/nginx/sites-available/imap with the following contents:
      # The port 80 listener only gives access to certbot
      server {
              listen 80;
              listen [::]:80;
              root /var/www-imap/;
              location /.well-known/ {
                      allow all;
              }
              location / {
                      deny all;
              }
      }
    2. Give the following commands in the command shell opened as root:
      mkdir -p /var/www-imap
      cd /etc/nginx/sites-enabled/
      ln -s /etc/nginx/sites-available/imap .
      systemctl reload nginx
  7. Add a certificate for
    1. Give the following command in the command shell opened as root:
      certbot certonly --webroot -w /var/www/html -d
  8. Configure certificates for the nginx web site
    1. Change the /etc/nginx/sites-available/www file to the following:
      server {
              listen 80;
              listen [::]:80;
              # SSL configuration
              listen 443 ssl default_server;
              listen [::]:443 ssl default_server;
              ssl_certificate     /etc/letsencrypt/live/;
              ssl_certificate_key /etc/letsencrypt/live/;
              root /var/www/html;
              location / {
                      allow all;
              }
      }
    2. Give the following command in the command shell opened as root, to activate the changed configuration:
      systemctl reload nginx
    3. Open the server (replace with your actual URL) and observe that the browser reports it as secure with a valid certificate
  9. Add a certificate for
    1. Give the following command in the command shell opened as root:
      certbot certonly --webroot -w /var/www-imap -d
  10. Configure dovecot to use the certificate
    1. Change/modify the following lines in the /etc/dovecot/conf.d/10-ssl.conf file:
      # SSL/TLS support: yes, no, required. <doc/wiki/SSL.txt>
      ssl = yes
      # PEM encoded X.509 SSL/TLS certificate and private key. They're opened before
      # dropping root privileges, so keep the key file unreadable by anyone but
      # root. Included doc/ can be used to easily generate self-signed
      # certificate, just make sure to update the domains in dovecot-openssl.cnf
      ssl_cert = </etc/letsencrypt/live/
      ssl_key = </etc/letsencrypt/live/
    2. Give the following command in the command shell opened as root:
      /etc/init.d/dovecot reload

The certificates have a 90 day lifetime, but as mentioned earlier, the certificates will be automatically updated by certbot when they have 30 days of valid time remaining. The certbot deb package installs a cronjob that runs twice every day, at a random second in the hour following 00:00 and 12:00, and checks if certificates need to be updated, and updates the ones that are ready for updating.
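The 30-day renewal window can also be checked by hand with openssl. This is a sketch of mine, not part of certbot; it relies on the `-checkend` flag of `openssl x509`, which exits non-zero if the certificate expires within the given number of seconds:

```shell
# Report whether a certificate is inside the 30 day window where certbot
# would renew it. $1 is the path to a PEM certificate, e.g. one of the
# fullchain.pem files under /etc/letsencrypt/live/.
cert_renewal_status() {
    if openssl x509 -checkend $((30 * 24 * 3600)) -noout -in "$1" > /dev/null; then
        echo "ok"
    else
        echo "renew"
    fi
}

# Example: cert_renewal_status /etc/letsencrypt/live/<your-host>/fullchain.pem
```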

Making a Java windows service in 10 minutes

This blog post describes how to create a windows service from a Java application, it is a slightly more fleshed out version of the JavaZone 2016 lightning talk “A Java windows service in 10 minutes”.

A problem sometimes encountered by a Java programmer is how to make a Java program into a Windows Service. This may be a bump in your project, particularly if you don’t know anything about windows services, or much about windows for that matter.

The demo created a running, working Windows service using 14 lines of Java code and some maven configuration.

Before starting on the demo, a few words on what windows services are (from a GNU/linux/UNIX perspective):

  • Windows services are the “daemons” of the windows world
  • Windows services are normally started when the windows system starts, and stopped when the windows system shuts down
  • Windows services can be stopped and started by administrator users, both using a GUI and using command line commands
  • Windows services can be configured to run with a particular user, restricting what the service can do (default is the local user “Local System”)

To create the installer the demo uses a maven plugin called maven-windows-service-installer-plugin. The maven plugin in turn relies on izpack for the installer, and uses the apache commons daemon to execute the Java program.

The Java program turned into a windows service during the demo is the Wiser test SMTP server. Wiser was picked because:

  1. It has an appropriate API
  2. An SMTP service is easy to demonstrate, and it is something other than yet another HTTP service

Since the demo might be hard to follow (a lot of information in 10 minutes), this blog post describes all steps of the demo (note: the complete code can be found on github).

Required prior knowledge:

  • Java programming
  • Apache maven

Required software to retrace the demo:

  • Apache maven (any maven 2 or 3 will do)
  • A Java SDK (I’m using the newest Java 1.8, but any Java SDK 1.7 will probably do)
  • An eclipse IDE (I’m using Eclipse Neon, but any recent eclipse will probably do)
  • A telnet command line application (since this is for windows, use cygwin bash with the inetutils package: run the cygwin installer and include inetutils)

To retrace the demo, do the following operations:

  1. Start eclipse and open the Workspace “C:\workspace”
  2. Right click the package explorer and select New->Other…
  3. In the “New” dialog box:
    1. Select Maven->Maven Project
    2. Click the “Next>” button
    3. Checkmark the checkbox “Create a simple project (skip archetype selection)” at the top of the dialog box
    4. Click the “Next>” button
    5. In the “Group id” text box, type
    6. In the “Artifact id” text box, type
    7. Click the “Finish” button
  4. Open the “ansmtpserver” project and double click “pom.xml” to open it
  5. In the pom.xml editor (title “ansmtpserver/pom.xml”):
    1. Select the Dependencies tab
    2. Click the “Add…” button
    3. In the “Select Dependency” dialog box:
      1. In the field “Enter groupId, artifactId or sha1 prefix or pattern (*)”, type
      2. Select “com.alexkasko.installer windows-service-installer-common”
      3. Click the “OK” button
    4. Click the “Add…” button
    5. In the “Select Dependency” dialog box:
      1. In the field “Enter groupId, artifactId or sha1 prefix or pattern (*)”, type
      2. Select “org.subethamail subethasmtp”
      3. Click the “OK” button
    6. Click the “Add…” button
    7. In the “Select Dependency” dialog box:
      1. In the field “Enter groupId, artifactId or sha1 prefix or pattern (*)”, type
      2. Select “org.slf4j slf4j-simple”
      3. Click the “OK” button
    8. Save the pom.xml file
  6. Right-click ansmtpserver->src/main/java in the “Package Explorer” and select New->Package
  7. In the “New Java Package” dialog box:
    1. Let the “Name” field have its default (“ansmtpserver”)
    2. Click the “Finish” button
  8. Right-click the ansmtpserver->src/java/main->ansmtpserver package in the “Package Explorer” and select New->Class
  9. In the “New Java Class Dialog”
    1. In the “Name” field, type
    2. In “Interfaces”, click the “Add…” button
    3. In the “Implemented Interfaces Selection” dialog box:
      1. In “Choose interfaces”, type
      2. In “Matching items”, select “DaemonLauncher – com.alexkasko.installer”
      3. Click the “OK” button
    4. Click the “Finish” button
  10. Modify the generated file in the following way
    package ansmtpserver;
    import org.subethamail.wiser.Wiser;
    import com.alexkasko.installer.DaemonLauncher;
    public class AnSmtpServer implements DaemonLauncher {
    	private Wiser server;
    	public AnSmtpServer() {
    		server = new Wiser();
    		server.setHostname("localhost");
    		server.setPort(2200);
    	}
    	public void startDaemon() {
    		server.start();
    	}
    	public void stopDaemon() {
    		server.stop();
    	}
    }
    1. Add a Wiser field
    2. In the constructor, create a Wiser instance, set the host name, and the port number
    3. In the startDaemon() method start the Wiser server
    4. In the stopDaemon() method stop the Wiser server
  11. Save the modified file
  12. Right-click ansmtpserver->src/main/resources in the “Package Explorer” and select New->File
  13. In the “New File” dialog box
    1. In “File name”, type
    2. Click the “Finish” button
  14. Modify the “” file to have the following content

    and save the file

  15. Select the “ansmtpserver/pom.xml” editor, and select the “pom.xml” tab, and paste the following before the </project> end tag. This configuration will be the same for all installers with the exception of the <prunsrvDaemonLauncherClass> tag
  16. Open ansmtpserver->src/main/java->ansmtpserver-> in the “Package Explorer”, right-click the “AnSmtpServer” class, and select “Copy Qualified Name” and paste the name into the <prunsrvDaemonLauncherClass> element
  17. Save the pom.xml file
  18. Open a cmd.exe window, and type the following commands to build the installer
    cd C:\workspace\ansmtpserver
    mvn clean install
  19. Open a windows explorer on C:\workspace\ansmtpserver\target
  20. Right click the file and select “Extract all…” to the folder “C:\workspace\ansmtpserver\target”
  21. Open the folder “C:\workspace\ansmtpserver\target\ansmtpserver-0.0.1-SNAPSHOT-installer”, right-click the “install.exe” file and select “Run as administrator”
  22. Open a “Cygwin 64 terminal” window and type the following command
    telnet localhost 2200

    The expected response is

    telnet: Unable to connect to remote host: Connection refused

    since nothing is listening to port 2200

  23. Click the installer all the way to the end, using defaults for everything
  24. Open the windows services window and there will be a new windows service “ansmtpservice” shown as “Running”
  25. Try the “telnet localhost 2200” command again, and this time there will be a response, and it will be possible to talk SMTP over the connection
  26. Stop the “ansmtpservice” service and the telnet connection will be disconnected

Thus ends the installer part.

Some simple improvements to this installer are possible:

  • Better description for the service in “Windows Services”
    • Just add the following to the <configuration> setting of the maven-windows-service-installer-plugin:
      <prunsrvDisplayName>An SMTP server</prunsrvDisplayName>
      <prunsrvDescription>This service responds to incoming STMP connections on port 2200.</prunsrvDescription>
  • Install the service under “C:\Program Files”
    • Just add the following to the <configuration> setting of the maven-windows-service-installer-plugin:
  • Attach the zip file containing the installer to the maven artifact, so that the installer can be deployed to a maven repository, where other maven builds can download and unpack the installer from (easy distribution)
    • Add the following inside <build><plugins></plugins></build> of the pom.xml build

A windows-service-installer that contains the above improvements and more, is this installer for Apache Jena Fuseki.

Get update notifications in the MATE desktop on debian jessie

One thing I have been missing since Gnome 2 was succeeded by the (IMO) horrible Gnome 3, is a tool tray notification icon for pending debian updates.

When someone continued Gnome 2 as MATE and MATE became available on debian, there was no notification tooltray icon to be found.

But now there is such a tooltray icon: pk-update-icon, and since debian with MATE again is my primary desktop, this was something I was happy to discover.

When there are packages available, the icon looks like this:

Then you can either click on “Install updates” and use the GUI to inspect and install the updates, or you can pop open a root terminal window and do:

apt-get dist-upgrade

To install the pk-update-icon:

  1. First ensure that you have the apt-line for “jessie-backports”, by adding the following line to /etc/apt/sources.list :
    deb jessie-backports main contrib non-free
  2. Update APT to get the index files for “jessie-backports”; give the following command in a root command line window:
    apt-get update
  3. Install pk-update-icon from jessie backports, give the following command in a root command line window:
    apt-get install -t jessie-backports pk-update-icon

    Answer yes to the question about whether the install should proceed

  4. Log out of the desktop and log back in, and the next time updates arrive the icon will show up, like in the screen shot above

Logging to persistent tmpfs on Raspbian “jessie”

At the end of Using a Raspberry Pi 2 Model B as a router/firewall for the home LAN I wrote that I decided not to put /var/log into tmpfs, because:

  1. I wanted the logs to be persistent
  2. I thought that the wear would just result in less and less of the sd card being available (and 16GB for logs should last a long time)

As it turned out the sd card died after one month.

I don’t know if the cause was excessive logging, the use of ntopng (which did write quite a lot, both in the number of files and in the total storage used, which was approximately 0.5GB after 30 days of uptime), or simply a bad sd card.

However, going forward with a new sd card, I’ve done the following:

  1. Removed ntopng
  2. Put /var/log on tmpfs (limited to 100MB in size), synced to a backing store on the sd card using rsync

For setting up the logging I found some existing web pages that took me part of the way, but not all the way.

Here is what I did:

  1. Logged in as root and did everything below as root
  2. Edited /etc/fstab and added the following line:
    tmpfs    /var/log    tmpfs    defaults,noatime,nosuid,mode=0755,size=100m    0 0
  3. Created an /etc/init.d/ramdiskvarlog file with the following contents
    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          ramdiskvarlog
    # Required-Start:    $local_fs $time
    # Required-Stop:     $local_fs
    # X-Stop-After:      $time
    # Default-Start:     S
    # Default-Stop:      0 1 6
    # Short-Description: Restore to and save logs from tmpfs filesystem
    # Description:       Restore to and save logs from tmpfs filesystem
    ### END INIT INFO
    # /etc/init.d/ramdiskvarlog
    case "$1" in
        start)
            echo "Copying files to ramdisk"
            rsync -av /var/backup/log/ /var/log/
            echo [`date +"%Y-%m-%d %H:%M"`] Ramdisk Synched from HD >> /var/log/ramdisk_sync.log
            ;;
        stop)
            echo "Synching files from ramdisk to Harddisk"
            echo [`date +"%Y-%m-%d %H:%M"`] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log
            rsync -av --delete --recursive --force /var/log/ /var/backup/log/
            ;;
        sync)
            echo "Synching logfiles from ramdisk to Harddisk"
            echo [`date +"%Y-%m-%d %H:%M"`] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log
            rsync -av --delete --recursive --force /var/log/ /var/backup/log/
            ;;
        *)
            echo "Usage: /etc/init.d/ramdiskvarlog {start|stop|sync}"
            exit 1
            ;;
    esac
    exit 0
  4. Made /etc/init.d/ramdiskvarlog executable:
    chmod +x /etc/init.d/ramdiskvarlog
  5. Created a directory to store the logs persistently, and populated it initially with the contents of the existing /var/log with the following command line commands :
    mkdir -p /var/backup/log
    /etc/init.d/ramdiskvarlog sync
  6. Made the /etc/init.d/ramdiskvarlog script be run at boot time and during orderly shutdown with the following command line command
    systemctl enable ramdiskvarlog
  7. Made the /etc/init.d/ramdiskvarlog script copy the contents of /var/log to the sd card once every 24 hours
    1. At the command line gave the command
      crontab -e
    2. In the editor that opened on the crontab, added a line with the following contents
      2 7 * * * /etc/init.d/ramdiskvarlog sync >> /dev/null 2>&1
  8. Created a test file with “touch /var/log/test.log”, rebooted the raspberry pi with “sync; reboot”, and then:
    1. Checked with the mount command that /var/log was on tmpfs, and found the following line in the output:
      tmpfs on /var/log type tmpfs (rw,nosuid,noatime,size=102400k,mode=755)
    2. Checked that the /var/log/test.log file was present (and the file was present, which meant that it had been synced to persistent storage on shutdown and restored on boot)
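The check in step 8.1 can also be done programmatically by parsing /proc/mounts, which carries the same information as the mount command output. A sketch (the function name is mine):

```shell
# Print the filesystem type of a given mount point, by parsing a mounts
# file in /proc/mounts format (field 2 is the mount point, field 3 the
# filesystem type). Defaults to the live /proc/mounts.
mount_fstype() {
    awk -v mp="$1" '$2 == mp { print $3 }' "${2:-/proc/mounts}"
}

# Example: [ "$(mount_fstype /var/log)" = "tmpfs" ] && echo "/var/log is on tmpfs"
```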

After completing the setup, I popped the sd card out and put it into a card reader on a debian desktop computer. Then I made an image of the working sd card, so that if/when the sd card dies, getting a working router again should be as quick as dd’ing the image to a new sd card and putting that card into the raspberry Pi.

Lesson learned!