Category Archives: debian

How I learned about linux’ “OOM Killer”

This blog post describes how I discovered a linux feature called “OOM Killer” that can have strange effects if it interrupts a program at a place where it really shouldn’t be interrupted.

I have a low-end VPS (Virtual Private Server), or at least: it used to be low-end, now it’s at least one step above lowest and cheapest.

On this server I’m running various personal stuff: email (with spamassassin), some public web pages, and some private web pages with various small webapps for the household (e.g. the “weekly allowance app” registering work done and payments made).

I had noticed occasional slow startups of the webapps, and in particular in September this year, when I was demonstrating/showing off the webapps at this year’s JavaZone, the demos were less than impressive since the webapps took ages to load.

I was quick to blame the slowness on the wi-fi, but as it turns out, that may have been completely unfair.

The webapps had no performance issues on my development machine even when running with a full desktop, IDE and other stuff.  The webapps on the VPS also seemed to have no performance issues once they loaded.

I thought “this is something I definitely will have to look into at some later time…” and then moved on with doing more interesting stuff, i.e. basically anything other than figuring out why the webapps on a VPS were slow to start.

But then the webapps started failing nightly and I had to look into the problem.

What I saw in the logs was that the reason the webapps were broken in the morning was that they were stuck waiting for a liquibase database changelog lock that was never released.

Liquibase is how my webapps set up and update database schemas. Every time a webapp starts it connects to the database, checks which liquibase scripts have been run against that database, and applies the ones that haven’t been run yet. The list of scripts that have been run is kept in a table called databasechangelog. And to avoid having more than one liquibase client attempting to modify the database schema at the same time, liquibase uses a different table called databasechangeloglock to moderate write access to the database.

I.e. the databasechangeloglock is just a database table that has one or zero rows. A liquibase client tries to insert a lock into the table at startup, and waits and retries if this fails (and eventually gives up completely).

In my case the webapps were failing because they were hanging at startup: they tried to get a liquibase lock, never got one, and hung in limbo without ever completing their startup process. Manually clearing the lock from the table and restarting the webapps made them start up normally. However, the next day the webapps were failing again for the same reason: they were stuck waiting for a liquibase lock.
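
Clearing a stale lock amounts to resetting the databasechangeloglock table. A minimal sketch of what that looks like, assuming the webapp database is PostgreSQL and is named webappdb (both the database type and the name are assumptions made up for the example):

# inspect the lock table (id, locked, lockgranted and lockedby are liquibase's own columns)
psql webappdb -c 'SELECT id, locked, lockgranted, lockedby FROM databasechangeloglock;'
# release the stale lock
psql webappdb -c 'UPDATE databasechangeloglock SET locked = FALSE, lockgranted = NULL, lockedby = NULL;'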

I initially suspected errors in my code, specifically in the liquibase setup. But googling for similar problems, code examination and debugging revealed nothing. I found nothing because there was nothing to be found.  The actual cause of the problem had nothing to do with the code or with liquibase.

I run my webapps in an instance of apache karaf that is started and controlled by systemd. And I saw that karaf was restarted at 06:30 (or close to 06:30) every morning. So my next theory was that systemd for some reason decided to restart karaf at 06:30 every morning.

No google searches for similar symptoms found anything interesting.

So I presented my problem to a mailing list with network and linux/unix experts and got back two questions:

  1. Was something else started at the same time?
  2. Did that something else use a lot of memory and trigger the OOM killer?

And that turned out to be the case.

I have been using and maintaining UNIX systems since the mid-to-late 80s, and setting up, using and maintaining linux systems since the mid-to-late 90s, but this was the first time I’d heard of the OOM killer.

The OOM killer has been around for a while (the oldest mention I’ve found is from 2009), but I’ve never encountered it before.

The reason I’ve never encountered it before is that I’ve mostly dealt with physical machines. Back in the 80s I was told that approximately two and a half times physical memory was a good rule of thumb for sizing swap space, and that’s a rule I’ve followed ever since (keeping the ratio as the number of megabytes increased, eventually turning into gigabytes).

And when you have two and a half times the physical memory as a fallback, you never encounter the conditions that make the OOM killer come alive.  Everything slows down and the computer starts thrashing long before the conditions that trigger the OOM killer come into play.

The VPS on the other hand, has no swap space. And with the original minimum configuration (1 CPU core, 1GB of memory), if it had been a boat it would have been said to be riding low in the water. It was constantly running at a little less than the available 1GB. And if nothing special happened, everything ran just fine.

But when something extraordinary happened, such as spamassassin’s spamd starting at 06:30 and requiring more memory than was available, the OOM killer started looking for a juicy fat process to kill, and the apache karaf process was a prime candidate (perhaps because of “apache” in its name combined with the OOM killer’s notorious hatred of feathers?).

And then systemd discovered that one of its services had died and immediately tried to restart it, only to have the OOM killer shoot it down, and this continued for quite a while.
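
If you suspect the OOM killer, the kernel log is the place to confirm it. The messages look roughly like “Out of memory: Kill process ... (java)”; the exact wording varies between kernel versions, so the grep patterns below are deliberately loose:

# look for OOM killer activity in the kernel log
dmesg -T | grep -i 'out of memory'
journalctl -k | grep -iE 'oom-killer|killed process'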

And in one of the attempted restarts, the webapp got far enough to set the databasechangeloglock before it was rudely shot down, and the next time(s) it was started it got stuck waiting for a lock that would never be released.

The solution was to bump the memory to the next step, i.e. from 1GB to 2GB. Most of the time the VPS is running at the same load as before (i.e. slightly below 1GB), but now a process that suddenly requires a lot of memory no longer triggers the OOM killer and everything’s fine.  Also, the extra memory is used for buff/cache and everything becomes much faster.
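
The memory headroom (and the absence of swap) is easy to keep an eye on with the standard tools:

# memory and swap overview; the "available" column is the interesting one
free -h
# lists configured swap devices, prints nothing if there is no swap at all
swapon --show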

I bumped the memory 8 weeks ago and the problem hasn’t occurred again, so it looks like (so far) the problem has been solved.

Faking a debian repository for package development

I use aptly to deliver my unofficial debian packages both to myself and others that might be interested.

However I’ve found that using aptly to do package development is a bad idea, because you can’t (by design, probably) overwrite packages in an aptly archive.  You can only create new versions.

For some installation tests it’s OK to use “dpkg --install”. But if your package needs to pull in dependencies, or if you wish to test a package upgrade, you need to use APT.

This article explains how to create a fake debian repository for use in package development.

Initial setup


  1. Create the repository directory (this should be done as the same user you use to build the package)
    mkdir -p /tmp/repo
  2. Open /etc/apt/sources.list in a text editor and add the following (you need to be root to do this)
    # Test install apt archive
    deb [trusted=yes] file:///tmp repo/
    

Add new package to the repo

This is the development cycle (a small wrapper script combining the steps follows the list):

  1. Build the package (the example builds karaf)
    cd ~/git/karaf-debian/
    dpkg-buildpackage
  2. Copy the package to the fake repo (this example uses karaf, replace with your own package and package version):
    cp ~/git/karaf_4.1.5-0~9.30_all.deb /tmp/repo
  3. Generate a Packages file
    (cd /tmp; dpkg-scanpackages repo >repo/Packages)
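
If you run this cycle often, the three steps can be wrapped in a small shell script. This is just a sketch using the package name, version and paths from the example above; adjust them to your own package:

#!/bin/sh
# rebuild the package and refresh the fake APT repo in /tmp/repo
set -e
cd ~/git/karaf-debian/
dpkg-buildpackage
cp ~/git/karaf_4.1.5-0~9.30_all.deb /tmp/repo
(cd /tmp && dpkg-scanpackages repo > repo/Packages)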

Upgrading an existing package

This has to be done as root:

  1. First update APTs database of packages and archives
    apt-get update
  2. Run an upgrade with the following command
    apt-get dist-upgrade
  3. There will be a question asking if you wish to continue; continuing is the default, so just press ENTER
    Do you want to continue? [Y/n]
  4. There will be a warning that the packages to be installed cannot be authenticated; the default here is not to install, so press “y” (without the quotes) to continue
    Install these packages without verification? [y/N]

Installing a package

This is used to e.g. test that a package is able to install its dependencies.

These operations have to be done as root:

  1. First update APT’s database of packages and archives
    apt-get update
  2. Use the following command to install a package and its dependencies, this example installs apache karaf:
    apt-get install karaf
  3. If the package pulls in new dependencies, there will be a prompt asking if you wish to continue. The default is to continue, so just press ENTER
    Do you want to continue? [Y/n]
  4. There will be a warning that the packages to be installed cannot be authenticated; the default here is not to install, so press “y” (without the quotes) to continue
    Install these packages without verification? [y/N]

Installing apache karaf on debian

Until the RFP (Request For Packaging) bug for karaf in the debian bug tracker is resolved, here is an APT archive with a karaf package for debian (architecture “all”).  The package is created using native debian packaging tools and built from a source tarball, and the APT archive itself is created using aptly.

The package has been tested on Debian 9 “stretch” (the current stable), amd64.

Do the following commands as root on a debian GNU/linux system:

  1. Add the keys for the APT archive (Edit: because of an aptly bug I needed to sign with the first key in the keyring, so that key must be added as well when using the repository)
    wget -O - https://apt.bang.priv.no/apt_pub.gpg | apt-key add -
    wget -O - https://apt.bang.priv.no/maven_pub.gpg | apt-key add -
  2. Open the /etc/apt/sources.list file in a text editor, and add the following lines:
    # APT archive for apache karaf
    deb http://apt.bang.priv.no/public stable main
  3. Install karaf with apt-get
    apt-get update
    apt-get install openjdk-8-jdk karaf
  4. Log in with SSH (password is “karaf” (without the quotes)) and try giving some commands (a few examples follow below):
    ssh -p 8101 karaf@localhost
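
Once logged in, a couple of harmless commands to try at the console (these are standard karaf 4 console commands, not specific to this package):

feature:list     # list available and installed karaf features
bundle:list      # list the OSGi bundles in the container
logout           # disconnect from the console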

 

Packaging karaf with native debian packaging tools

Note! This is an improvement over the packaging in Installing apache karaf on debian stretch: this package is made using native debian packaging tools instead of fpm, and is built from the karaf source tarball instead of the karaf binary tarball.

Apache karaf is an OSGi container and application server that is provisioned from maven, and has an ssh server. Basically it is possible to start an empty karaf, ssh in and give some commands to install an application using maven.

There still isn’t a native karaf .deb package in debian (see the RFP (Request For Packaging) bug for karaf in the debian bug tracker), but this package can be installed from my own APT repository.

The packaging project can be found on github: https://github.com/steinarb/karaf-debian

Procedure to build the package

  1. Install the required build tools
    apt-get update
    apt-get install openjdk-8-jdk git maven-debian-helper devscripts
  2. Clone the karaf package repository
    mkdir -p ~/git
    cd ~/git/
    git clone https://github.com/steinarb/karaf-debian.git
  3. Build the package
    cd ~/git/karaf-debian/
    dpkg-buildpackage

After this, there will be a karaf-*.deb package in the directory above the karaf-debian directory.
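
Before installing or publishing the result, it can be sanity checked with the standard dpkg queries. The glob below assumes the freshly built package is the only karaf .deb in ~/git; replace it with the actual file name if not:

# show the package metadata (name, version, dependencies)
dpkg --info ~/git/karaf_*_all.deb
# list the files the package will install
dpkg --contents ~/git/karaf_*_all.deb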

Setting up a debian package archive with aptly

This article describes how to set up a debian archive with aptly on a debian 9 “stretch” computer, served by an nginx web server.

Initial setup

  1. Add a DNS alias for your virtual nginx web site (outside of the scope of this blog post). The examples below assume that apt.mydomain.com is the DNS alias
  2. Install the required software, logged in as root, give the following command
    apt-get install gnupg pinentry-curses nginx aptly
  3. Logged in as your regular user, do the following:
    1. Create a gpg key. Note! It is a good idea to do the key generation when logged into a debian desktop, and to move the mouse about during generation, to get good random values for the key generation. Give the following command at the command line:
      gpg --full-generate-key
      1. At the prompt for key type, just press ENTER to select the default (RSA and RSA)
      2. At the prompt for key size, type “4096” (without the quotes) and press ENTER
      3. At the prompt for how long the key should be valid, type “0” without the quotes and press ENTER
      4. At the prompt for “Real name”, type your real name and press ENTER
      5. At the prompt for “Email address”, type your email address and press ENTER
      6. At the prompt for “Comment”, type the host name of your archive web server, e.g. “apt.mydomain.com” and press ENTER
      7. At the prompt “Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit?”, type “O” (without the quotes) and press ENTER
      8. At the prompt for a passphrase, give a passphrase you will remember. You will be asked for this passphrase every time the repository is published
    2. Export the public key of the gpg key pair in a form that the web server can use
      1. First list the keys to find the id of the key
        steinar@lorenzo:~$ gpg --list-keys
        /home/steinar/.gnupg/pubring.gpg
        ---------------------------
        pub   rsa4096 2017-12-27 [SC]
              6B7638490707CCE365DF5415D2BA778731DC1EAC
        uid           [ultimate] Steinar Bang (apt.mydomain.com) <sb@dod.no>
        sub   rsa4096 2017-12-27 [E]
        
      2. Then use the id of the key to export the public key of the gpg key pair in a form that the web server can return
        gpg --output apt_pub.gpg --armor --export 6B7638490707CCE365DF5415D2BA778731DC1EAC
      3. Publish the key with the default GPG keyserver
        gpg --send-keys 6B7638490707CCE365DF5415D2BA778731DC1EAC
    3. Create a local repository “stable”
      aptly repo create -distribution="stable" stable
    4. Configure an architecture in the archive: open the ~/.aptly.conf file in a text editor, and change the line
      "architectures": [],

      to

      "architectures": ["amd64"],

      Note! Without a concrete architecture in place, aptly refuses to publish. So add an architecture here, even if you are only going to publish packages with architecture “all” (e.g. java, python, shell script). In the example I’m using “amd64” which, despite its name, is appropriate for modern 64 bit intel chips (i5 or i7 of various generations).

    5. Import a package into “stable” (the example uses the package built in Installing apache karaf on debian stretch)
      aptly repo add stable git/karaf-deb-packaging/karaf_4.1.4-1_all.deb
    6. Publish the archive (switch the gpg-key with the id of your actual repository key):
      aptly publish repo --gpg-key="6B7638490707CCE365DF5415D2BA778731DC1EAC" stable

      Note! If you get a time out error instead of a prompt for the GPG key passphrase, and you’re logged in remotely to the server, the reason could be that gpg tries to open a GUI pinentry tool. Switch to explicitly using a curses-based pinentry and try the “aptly publish” command again. Do the command:

      update-alternatives --config pinentry

      and select “pinentry-curses” in “Manual mode”

  4. Log in as root and do the following
    1. Create a root directory for a new web server and copy in the public key used to sign the published archive
      mkdir -p /var/www-apt
      cp /home/steinar/apt_pub.gpg /var/www-apt/
    2. In a text editor, create the file /etc/nginx/sites-available/apt with the following content
      server {
      	listen 80;
      	listen [::]:80;
      
      	server_name apt.mydomain.com;
      	root /var/www-apt;
      	allow all;
      	autoindex on;
      
      	# Full access for everybody for the stable debian repo
      	location /public {
      		root /home/steinar/.aptly;
      		allow all;
      	}
      
      	# Allow access to the top level to be able to download the GPG key
      	location / {
      		allow all;
      	}
      }
      

      Note! I actually started out with also serving HTTPS and signing with let’s encrypt, but as it turns out APT doesn’t support HTTPS out of the box, so there was no point in including it in this HOWTO.

    3. Enable the site by creating a symlink and restarting nginx
      cd /etc/nginx/sites-enabled
      ln -s /etc/nginx/sites-available/apt .
      systemctl restart nginx

Your APT archive is now up and running.

Use the new APT archive

To start using the APT archive, do the following on a debian computer:

  1. Log in as root
  2. Add the archive key
    wget -O - https://apt.mydomain.com/apt_pub.gpg | apt-key add -
  3. Add the archive by adding the following lines to /etc/apt/sources.list
    # My own apt archive
    deb http://apt.mydomain.com/public stable main
  4. Update APT to get the information from the new archive
    apt-get update
  5. Install a package from the archive
    apt-get install karaf

Future additions and updates of existing packages can be done as your regular user, with no need to log in as root during the process.

Publish a new version of a package

To update an existing package:

  1. Build a new version of the package
  2. Add the new version of package to the package archive
    aptly repo add stable git/karaf-deb-packaging/karaf_4.1.5-1_all.deb
  3. Update the published package archive (i.e. the package archive served by nginx)
    aptly publish update --gpg-key="6B7638490707CCE365DF5415D2BA778731DC1EAC" stable
  4. Do “apt-get update” on the computers using the archive
  5. Do “apt-get dist-upgrade” on the computers using the archive, and the package should be upgraded

Remove old versions of a package

To delete old versions of a package:

  1. Do a query to verify that the expression matches only the packages you want to delete
    aptly repo search stable 'karaf (<=4.1.4-4)'
  2. Remove all packages matching the query
    aptly repo remove stable 'karaf (<=4.1.4-4)'
  3. Clean up the aptly database and remove unreferenced package files (this is where the disk usage of the repository is actually reduced)
    aptly db cleanup
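
To verify what is left in the local repo and what is currently published, aptly can be queried directly (standard aptly commands, using the repo name from this article):

# list the packages currently in the local "stable" repo
aptly repo show -with-packages stable
# list the published repositories
aptly publish list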

 

Installing apache karaf on debian stretch

Edit: It is now possible to install karaf on debian without building it yourself. The package installed then is not the one described here, but the new and improved package built from source with native debian packaging tools, which can be found here: https://github.com/steinarb/karaf-debian

Apache karaf is an OSGi container/application server with some nice properties:

  1. It has an SSH server you can log into and a command line where you can inspect and configure the karaf instance
  2. It can be provisioned using apache maven, basically you can start with an empty karaf, ssh into the SSH server and pull in and start your application using “maven magic”
  3. It is much simpler to get an OSGi application up and running in apache karaf than in any other system I have tried since I first was introduced to OSGi in 2006
  4. Karaf can also be used to run non-OSGi applications packaged as jar or war files
  5. In a development setting it is very simple to deploy new versions of the code using maven, and to remote debug the deployed code from eclipse or IntelliJ

Running karaf on a debian GNU/linux system is a little hampered by there not being a native .deb package. I have opened an RFP (Request For Packaging) bug for karaf in the debian bug tracker. When/if that issue is ever resolved as done, karaf will be easily available on debian and also on all of the distros that are based on debian (e.g. ubuntu and mint).

Until then I do my own debian packaging. I forked the packaging I found at https://github.com/DemisR/karaf-deb-packaging and made some changes:

  1. Switched from oracle JDK 8, to openjdk 8
  2. Updated to karaf version 4.0.7 (the newest stable release at the time of forking), later upgraded to karaf 4.1.1 and again to karaf 4.1.2
  3. Use /var/lib/karaf/data instead of /usr/local/karaf/data
  4. Use package version “-1” instead of “-3”
  5. Switched from using the service wrapper (karaf-wrapper) to plain systemd start, using the scripts and config from bin/contrib in the karaf distribution
  6. Made the stop of running services more robust

The resulting .deb package will follow the usual service pattern of a debian service: the service will run as a user named after the service (i.e. user “karaf”, which is the single member of group “karaf” and the owner of all files the service needs to touch). The service will log to the regular debian syslog. The configuration will end up in /etc/karaf, and all files not part of the installation will be left untouched on a .deb package uninstall or upgrade.

My fork of the packaging lives at https://github.com/steinarb/karaf-deb-packaging

To create the package and install karaf, do the following steps:

  1. Log in as root on a debian system
  2. Install the prerequisites for building the package, both debian packages and a ruby gem:
    apt-get update
    apt-get install git maven openjdk-8-jdk postgresql ruby ruby-dev build-essential
    gem install fpm
  3. Clone the packaging project and build the deb package:
    cd /tmp
    git clone https://github.com/steinarb/karaf-deb-packaging
    cd karaf-deb-packaging
    ./dist_karaf.sh
    mkdir -p /root/debs
    cp *.deb /root/debs
  4. Install the .deb package:
    dpkg --install karaf_4.1.4-1_all.deb

After karaf has been installed it is possible to log in as user “karaf”, with the following command

ssh -p 8101 karaf@localhost

The password is also “karaf” (without the quotes).

This opens the karaf console command line

        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (4.1.4)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit 'system:shutdown' to shutdown Karaf.
Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.

karaf@root()>

At this command line, you can e.g.

  1. install an application
  2. start, stop and list running applications
  3. set the configuration used by the applications

But all of these, except for the first, will be items for later posts.
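
As a small taste of the first item: installing a bundle straight from a maven repository at the karaf console looks something like this (the maven coordinates are just an illustration, not one of my applications; the -s flag starts the bundle right after installing it):

karaf@root()> bundle:install -s mvn:commons-lang/commons-lang/2.6
karaf@root()> bundle:list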

Sign nginx website and dovecot imap server on debian with let’s encrypt

If you have a setup with a single server running multiple services (web, IMAP etc.), and one CNAME per service (www.mydomain.com, imap.mydomain.com), and you would like to get the services signed in a manner that doesn’t give warnings or errors in browsers (especially browsers on phones and tablets with iOS and Android), then this article may be of interest.

Self-signed certificates are a nuisance and the cacert.org initiative has been losing support. Let’s encrypt offers the possibility of free (as in both cost and freedom) SSL certificates that don’t give warnings in web browsers. The only problem is the threshold of taking the time to figure out how to use it.

It turned out there wasn’t much figuring to do: on a debian jessie GNU/linux system, the certbot program from eff.org takes care of everything, including keeping the certificates automatically updated (the .deb package for certbot sets up a cronjob that does the right thing).

The way certbot works requires that each server you wish to sign is accessible over http (port 80), and that the local path “/.well-known/” on each server maps to a file area that certbot can put files in.

The certbot program works by contacting let’s encrypt saying that it wants a certificate for a DNS name,  and let’s encrypt will then access the HTTP URL to verify that certbot is indeed running on a server that can be found using that DNS name.

This means that, for certbot to work:

  1. Even if your web server normally responds only on HTTPS and/or requires authentication, you will need to make a plain HTTP connection available, and have the local path “/.well-known/” map to a part of the file system and be accessible without authentication
  2. Even if you’re making a certificate for a non-HTTP service (e.g. an IMAP server), you will need a plain http (port 80) server responding to that DNS CNAME, which can serve the local path “/.well-known/” from the local file system

This article explains how to set up free SSL certificates signed with let’s encrypt on an nginx web server and a dovecot IMAP server, on a debian jessie GNU/linux system.

The certbot instructions take you part of the way, but they have some holes and not a lot of explanation, which is why I wrote this article.

The steps are:

  1. Add jessie-backports to APT (click the link and follow the instructions)
  2. Install certbot from jessie-backports:
    1. Open a command shell as root and give the following command:
      apt-get install certbot -t jessie-backports
      
  3. Disable the default nginx site
    1. Edit the /etc/nginx/sites-available/default file to have the following content:
      server {
              listen 80 default_server;
              listen [::]:80 default_server;
      
              root /var/www/html;
      
              server_name _;
      
              location / {
                      deny all;
              }
      }
      
    2. Run the following command in the command shell opened as root
      systemctl reload nginx
      
  4. Add DNS CNAME-records for the virtual hosts you are going to sign.
    In the examples used in this procedure, the host is hostname.mydomain.com and it has two CNAME aliases: www.mydomain.com and imap.mydomain.com.
  5. Add a www.mydomain.com nginx site
    1. Create a file /etc/nginx/sites-available/www with the following contents:
      server {
              listen 80;
              listen [::]:80;
      
              server_name www.mydomain.com;
      
              root /var/www/html;
      
              index index.html index.htm index.nginx-debian.html;
      
              location / {
                      allow all;
              }
      }
      
    2. Give the following commands in the command shell opened as root:
      cd /etc/nginx/sites-enabled/
      ln -s /etc/nginx/sites-available/www .
      systemctl reload nginx
      
  6. Add an imap.mydomain.com nginx site
    Note! This isn’t a real website, but it is necessary to give HTTP access to a web server listening on this CNAME alias so that the certbot program can create and auto-renew the certificate that dovecot uses.

    1. Create a file /etc/nginx/sites-available/imap with the following contents:
      # The port 80 listener only gives access to certbot
      server {
              listen 80;
              listen [::]:80;
      
              server_name imap.mydomain.com;
      
              root /var/www-imap/;
      
              location /.well-known/ {
                      allow all;
              }
      
              location / {
                      deny all;
              }
      }
      
    2. Give the following commands in the command shell opened as root:
      cd /etc/nginx/sites-enabled/
      ln -s /etc/nginx/sites-available/imap .
      systemctl reload nginx
      
  7. Add a certificate for www.mydomain.com
    1. Give the following command in the command shell opened as root:
      certbot certonly --webroot -w /var/www/html -d www.mydomain.com
      
  8. Configure certificates for the www.mydomain.com nginx web site
    1. Change the /etc/nginx/sites-available/www file to the following:
      server {
              listen 80;
              listen [::]:80;
      
              server_name www.mydomain.com;
      
              # SSL configuration
              #
              listen 443 ssl default_server;
              listen [::]:443 ssl default_server;
              ssl_certificate     /etc/letsencrypt/live/www.mydomain.com/fullchain.pem;
              ssl_certificate_key /etc/letsencrypt/live/www.mydomain.com/privkey.pem;
      
              root /var/www/html;
      
              location / {
                      allow all;
              }
      }
      
    2. Reload nginx so the new configuration takes effect, by giving the following command in the command shell opened as root
      systemctl reload nginx
      
    3. Open https://www.mydomain.com (replace with your actual URL) in a browser and observe that the browser reports the site as secure with a valid certificate
  9. Add a certificate for imap.mydomain.com
    1. Give the following command in the command shell opened as root:
      certbot certonly --webroot -w /var/www-imap -d imap.mydomain.com
      
  10. Configure dovecot to use the imap.mydomain.com certificate
    1. Change/modify the following lines in the /etc/dovecot/conf.d/10-ssl.conf file:
      # SSL/TLS support: yes, no, required. 
      ssl = yes
      
      # PEM encoded X.509 SSL/TLS certificate and private key. They're opened before
      # dropping root privileges, so keep the key file unreadable by anyone but
      # root. Included doc/mkcert.sh can be used to easily generate self-signed
      # certificate, just make sure to update the domains in dovecot-openssl.cnf
      ssl_cert = </etc/letsencrypt/live/imap.mydomain.com/fullchain.pem
      ssl_key = </etc/letsencrypt/live/imap.mydomain.com/privkey.pem
      
    2. Give the following command in the command shell opened as root:
      /etc/init.d/dovecot reload
      

The certificates have a 90 day lifetime, but as mentioned earlier, they will be automatically renewed by certbot when they have 30 days of valid time remaining. The certbot deb package installs a cronjob that runs twice a day, at a random time within the hour following 00:00 and 12:00, checks if any certificates are due for renewal, and renews the ones that are.
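
If you want to convince yourself that the automatic renewal will actually work long before the 90 days are up, certbot can do a dry run of the whole renewal, including the webroot challenges, without saving anything:

certbot renew --dry-run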

Get update notifications in the MATE desktop on debian jessie

One thing I have been missing since Gnome 2 was succeeded by the (IMO) horrible Gnome 3, is a tool tray notification icon for pending debian updates.

When someone continued Gnome 2 as MATE and MATE became available on debian, there was no notification tooltray icon to be found.

But now there is such a tooltray icon: pk-update-icon, and since debian with MATE again is my primary desktop, this was something I was happy to discover.

When there are packages available, the icon looks like this: [screenshot: software-updates notification icon in MATE on jessie]

Then you can either click on “Install updates” and use the GUI to inspect and install the updates, or you can pop over to a root terminal window and do:

apt-get dist-upgrade

To install the pk-update-icon:

  1. First ensure that you have the apt-line for “jessie-backports”, by adding the following line to /etc/apt/sources.list:
    deb http://http.debian.net/debian jessie-backports main contrib non-free
  2. Update APT to get the index files for “jessie-backports”; do the following command in a root command line window:
    apt-get update
  3. Install pk-update-icon from jessie backports, give the following command in a root command line window:
    apt-get install -t jessie-backports pk-update-icon

    Answer yes to the question about whether the install should proceed

  4. Log out of the desktop and log back in, and the next time updates arrive the icon will show up, like in the screen shot above

Debian “jessie” on Intel “Skylake”

Except for work computers with GNU/linux, the last of which was retired in 2008, my GNU/linux computers have been outdated hand-me-downs. And when the P4 I got back in 2010 went belly up, I figured it was time to try a modern machine.

Note: I wasn’t going for a top-of-the-line gaming computer with high performance everything. Just a modern state of the art computer.

I wasn’t satisfied with the combination of price and specs on the desktop computers sold by the consumer electronic retailers, so I asked an old colleague who likes building his own computers (thanks Alexey!) to help me come up with an order for components that would work when I put it together. This is what I ordered:

  • Main board: ASUS H170M-PLUS, Socket-1151
  • CPU: Intel Core i5-6600 Skylake
  • Memory: Corsair Vengeance LPX DDR4 2133MHz 16GB
  • SSD: Kingston SSDNow V300 120GB 2.5″ OEM
  • Hard disk: Seagate Barracuda® 1TB
  • Cabinet: Fractal Design Define S Black
  • Power supply: Corsair CX500, 500W PSU

I won’t spend much time on the task of putting the parts together to make a working computer; suffice to say that with re-watching of this video, frequent phone calls to Alexey, combined with close reading of the documentation (Alexey told me to do that), I got it working and was greeted by the fancy screen that has taken the place of the BIOS.

UEFI Boot screen

I tried, and gave up on, making PXE boot work for the debian install on the UEFI BIOS, and put the debian-8.3.0-amd64-netinst.iso image on a USB flash drive. I then inserted the USB flash drive in one of the USB3 connectors on the front of the cabinet, pressed F10 in the UEFI BIOS, and then kept F8 pressed until I got to the boot menu.

In the boot menu, I selected

UEFI: Generic Flash Disk 8.07, Partition 1 (7640MB)

and then pressed ENTER.

In the debian installer:

  1. Selected the “Graphical installer”
  2. Selected “English” as the installer language
  3. Selected “Norway” as the time zone
  4. Selected “en_US.UTF-8” as the locale
  5. Selected “Norwegian” as the keyboard layout
  6. Gave “lorenzo” as the computer name
  7. Gave “hjemme.lan” as the domain name
  8. Set the root password
  9. I created a user for myself, and set the password
  10. Partitioned the disks manually:
    1. Partitioned the 120GB SSD. I put the root partition on the SSD to get a quick startup of the system, and to get fast startup of applications. I also had to put an EFI partition here. Without an EFI partition, the base-installer failed with a “No space left on device” error message:
      Number Size File system Name Flags
      #1 1GB fat32 efi boot,esp
      #2 119GB ext4 root
    2. Partitioned the 1TB HDD:
      1. I put the swap there, sized to twice the physical memory (something I’ve been doing since I installed my first GNU/linux box back in the 90s)
      2. To avoid SSD wear from frequent writing, I put the /var partition (where /var/log resides) on the spinning disk
      3. Finally, I made the rest of the disk the /home directory
      Number Size File system Name Flags
      #1 32GB linux-swap(v1) swap
      #2 100GB ext4 var
      #3 868GB ext4
  11. In the installer, I selected a package mirror from Norway (it doesn’t really matter which one, because of the NIX), selected “No proxy”, and continued
  12. I let the installer install GRUB on the hard disk
  13. During the installation of the system, the installer stopped with the following error message:
    Unable to install busybox
    An error was returned while trying to install the busybox package
    onto the target system.
    
    Check /var/log/syslog or see virtual console 4 for details
  14. I googled for the error message, found this ubuntu bug report, and tried the following workaround from a comment on the bug, and the installer continued past the problem spot. The workaround/hack was to press Ctrl-Alt-F2 in the running installer to get a virtual console, and at the prompt in that console, type:
    # while true; do rm /var/run/chroot-setup.lock; sleep 1; done

    and then switch back to the installer in Ctrl-Alt-F1 and continue with the installation

  15. I let the installer run until completion, and pulled the USB flash drive from the USB3 connection (probably not necessary, since pressing F8 was necessary to get to the boot menu in the first place), and let the computer reboot
  16. The computer booted with the familiar debian gdm login screen, and a disappointing 1024×768 screen resolution
  17. I logged in to see what the display settings of the desktop had to say, but the display setting had 1024×768 as the only choice
  18. I let apt-get update the distribution
    apt-get update
    apt-get dist-upgrade
  19. I rebooted again after the update had completed, but the update wasn’t enough to fix the screen resolution, the display still had 1024×768 as the only available resolution
  20. This was my first test of Gnome 3 (when “gnome” in debian changed its meaning from the quite usable “Gnome 2” to “Gnome 3”, the old hardware on my previous debian computer wasn’t able to display anything at all), and I found it ugly and incomprehensible
  21. So I installed MATE
    apt-get install mate-desktop-environment

    and rebooted and logged in again

  22. This time, after logging in, I met something that looked very much like the old and familiar “Gnome 2” desktop, but still with 1024×768 as the only available display resolution
  23. I edited /etc/apt/sources.list and added apt lines for jessie-backports
    # jessie backports (4.3 kernel)
    deb http://http.debian.net/debian jessie-backports main contrib non-free
    
  24. I installed the kernel, firmware and xserver-xorg-video-intel
    apt-get -t jessie-backports install linux-image-amd64 firmware-linux
    apt-get -t jessie-backports install xserver-xorg-video-intel
  25. After a new reboot I was up and running, and this time with 1600×1200 resolution on the display, which is the maximum the old display I was using would support
  26. Since I got a working system by using packages from backports, I didn’t make the jump to debian testing immediately, but I figured I might as well get as new packages as possible from jessie-backports, so I created an /etc/apt/preferences file with the following contents (see the apt-cache policy check after this list):
    Package: *
    Pin: release a=jessie
    Pin-Priority: 700

    Package: *
    Pin: release a=jessie-updates
    Pin-Priority: 710

    Package: *
    Pin: release a=jessie-backports
    Pin-Priority: 720
    
  27. Then I did apt-get update followed by dist-upgrade and pulled in new versions of many packages
    apt-get update
    apt-get dist-upgrade
  28. I used apt-get to install many familiar packages from my old system
    apt-get install xscavenger
    apt-get install default-jdk
    apt-get install chromium
    apt-get install flightgear
    apt-get install oolite
  29. Like I always do on debian systems, I pulled in “real” firefox from Mint debian edition:
    1. I edited /etc/apt/sources.list file and added the apt lines for Mint debian edition
      # Linux Mint Debian Edition (has firefox)
      deb http://packages.linuxmint.com debian import
    2. I installed the key for Mint debian edition
      apt-get update
      apt-get install linuxmint-keyring
      apt-get update
    3. I used apt to install firefox
      apt-get install firefox
  30. I installed apticron that will check for updates daily and notify me about updates
    apt-get install apticron
  31. Then I rebooted into the system I’m currently running
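
To check that the pinning in the /etc/apt/preferences file behaves as intended, apt can be asked how it ranks the archives and which version of a package it would pick:

# show the priorities apt assigns to each configured archive
apt-cache policy
# show the candidate version for a single package, e.g. the kernel meta package
apt-cache policy linux-image-amd64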

That’s it, basically. Things seem to work out of the box, sound, video etc. (youtube doesn’t play in chromium, but it does play in firefox).

Installing debian “squeeze” with PXE boot on a Samsung N145 Plus netbook

Introduction

This article describes the steps necessary to install debian 6 “squeeze” on a Samsung N145 Plus netbook, with the following specification:

  • Intel Atom processor
  • 10.1″ display
  • 1GB RAM
  • 340GB HDD
  • Windows 7 preinstalled

Setting up netboot of the debian installer

DHCP requests in my home LAN are answered by dnsmasq on a desktop PC running GNU/linux debian stable (which at the time of writing was Debian 6 squeeze). One nice feature of dnsmasq is that it can provide PXE network boot.

So what I did was to download the i386 network boot image and put the contents in the /var/tftpd/debian-installer/i386 directory of the computer running dnsmasq, and then edit the /etc/dnsmasq.conf file in the following way (the combined snippet is shown after the list):

  1. Remove the comment in front of the dhcp-boot config line:
    dhcp-boot=pxelinux.0
  2. Set the tftp-root pointing to the directory containing the pxelinux.0 file:
    tftp-root=/var/tftpd/debian-installer/i386
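
Put together, the relevant part of /etc/dnsmasq.conf looks something like this. Note that enable-tftp is needed for dnsmasq’s built-in TFTP server; I’m assuming it wasn’t already enabled in the existing config:

# PXE boot the debian installer from dnsmasq's built-in TFTP server
enable-tftp
tftp-root=/var/tftpd/debian-installer/i386
dhcp-boot=pxelinux.0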

Installing debian

Booting from the network

I connected the netbook to the switch in my home LAN with an RJ45 twisted pair cable, powered on the netbook, kept the F12 button pressed during boot, and ended up in the debian text based installer.

I set the time zone and location of the install (Oslo, Norway), created an initial user and set the root password.

Partitioning

The netbook came with a 340GB hard disk and Windows 7 preinstalled.  The hard disk was partitioned so that the Win7 system had both a C: and a D: drive, with the operating system installed on the C: drive.

The plan was to keep the Windows 7 installation, sans its D: drive and install debian in the part of the hard disk occupied by the D: drive.

The initial partitioning table looked like this:

#1 primary 104.9 MB B ntfs
#2 primary 93.4 GB ntfs
#5 logical 138.3 GB ntfs
#4 primary 28.2 GB ntfs

I guessed that partition #1 was the boot partition, that partition #2 was the C: drive containing the Windows 7 installation, and that #4 was either some kind of Samsung software (diagnostics possibly) or something belonging to the Windows 7 installation.

I left partition #1, #2 and #4 alone, and deleted the partition containing the D: drive (partition #5), and turned that into free space:

#1 primary 104.9 MB B ntfs
#2 primary 93.4 GB ntfs
pri/log 138.3 GB FREE SPACE
#4 primary 18.2 GB ntfs

I added a swap partition twice the size of the physical memory i.e. 2GB, and added an ext3 partition using the rest of the free space, and ended up with a partitioning table looking like this:

#1 primary 104.9 MB B ntfs
#2 primary 93.4 GB ntfs
#5 logical 136.3 GB B f ext3 /
#6 logical 2.0 GB f swap swap
#4 primary 18.2 GB ntfs

I saved the partitioning table and continued.

Installing the system

After completing the partitioning, I selected the following items to install:

  • SSH server
  • Laptop
  • Base tools

I let the installer run, using defaults for all questions. I answered YES to the question of whether GRUB should be installed on MBR. The installer found the Windows 7 installation and added it to the GRUB boot menu.  When the time came to reboot, I let the installer reboot.

After the reboot I logged in as root and installed the “KDE Plasma netbook” package:

apt-get install plasma-netbook kde-l10n-nb

I opened /etc/apt/sources.list in a text editor and modified it (see the sketch below):
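
Judging from the packages installed below (linuxmint-keyring, firefox and ttf-mscorefonts-installer), the modification was presumably something along these lines. Only the Mint line is known from the jessie article further up; the rest is an assumption:

# Linux Mint Debian Edition (has firefox)
deb http://packages.linuxmint.com debian import
# contrib and non-free are needed for ttf-mscorefonts-installer
deb http://ftp.debian.org/debian/ squeeze main contrib non-free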

I then updated the APT database with the new sources and added all updates to the already installed software:

apt-get update
apt-get install linuxmint-keyring
apt-get update
apt-get dist-upgrade

I then installed all software I assumed was necessary:

apt-get install ttf-mscorefonts-installer
apt-get install openoffice.org openoffice.org-l10n-nb
apt-get install firefox firefox-l10n-nb

I rebooted the laptop and then logged into the plasma desktop using the user created at the start of the installation process. The desktop was missing network support and other useful software.

I logged in as root using the “failsafe” alternative, and installed missing software in the terminal window:

apt-get install network-manager-kde update-notifier-kde
apt-get install synaptic software-center gdebi

I rebooted and logged into plasma again. I tried to plug in a USB flash drive, and discovered that the desktop had no file manager; konqueror was missing. I installed konqueror (and discovered I should have picked the package “kde-plasma-netbook”, rather than just “plasma-netbook”):

apt-get install konqueror

The plasma desktop looked great, but was way too slow on an atom processor without much in the way of graphical hardware acceleration.

So I decided to try gnome and installed gnome with the command:

apt-get install gnome

I let apt set gdm3 as the default login instead of kdm.

I rebooted and logged into the gnome desktop, and it performed a lot better than the plasma desktop.

I rebooted again, chose Windows 7 from the grub menu, and Windows 7 booted and logging into the desktop worked.

Making the Fn keys adjust the display brightness

The Fn keys for adjusting the display brightness didn’t work. I googled, and found two promising web pages:

  1. Fixing brightnes control, etc. on a Samsung R510 with Debian Squeeze
  2. InstallingDebianOn Samsung Samsung N150

I decided to try the first approach, and downloaded the packages created for Ubuntu Natty from https://launchpad.net/~voria/+archive/ppa

I then installed the downloaded .deb packages in the following way:

  1. Installed the easy-slow-manager:
    1. I let gdebi pull in all dependencies (gcc, the linux-headers, make, etc.)
  2. Installed samsung-backlight:
    1. Edited /etc/default/grub changing the line GRUB_CMDLINE_LINUX_DEFAULT
      GRUB_CMDLINE_LINUX_DEFAULT="quiet"
      to
      GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_backlight=vendor"
    2. Ran the command
      update-grub
  3. Installed samsung-tools:
      1. Installed the devscripts
        apt-get install devscripts
      2. Unpacked the samsung-tools tarball
        cd /tmp
        tar zxvf samsung-tools_1.4~ppa3~loms~natty.tar.gz
        cd /tmp/samsung-tools_1.4~ppa3~loms~natty
        dch -l sb
        1. Added “Compiled for debian squeeze” as the final comment

     

     

  4. Built the deb package
    cd /tmp/samsung-tools-1.4~ppa3~loms~nattysb1
    dpkg-buildpackage -rfakeroot -us -uc
  5. Installed the deb package
    gdebi /tmp/samsung-tools_1.4~ppa3~loms~nattysb1_all.deb
    1. I let gdebi install all of the required dependencies
  6. Rebooted

After the reboot I tried the Fn+Up and Fn+Down keys to adjust the display brightness and the keys worked fine.