My Little Corner of the Net

Let’s Encrypt Enters Public Beta

There’s a new certificate authority in town, and it seeks to change the way we think about web security.  Let’s Encrypt, run by the Internet Security Research Group (ISRG), a partnership of a number of Internet-focused organizations like the EFF, Mozilla, Cisco, Akamai, and now Facebook, entered its public beta period this week and is now issuing free, browser-trusted, domain-validated certificates to pretty much anyone who wants them.

I’ve been doing some experimenting with Let’s Encrypt for the past few weeks as part of their limited beta program, and this post was already in the works.  In fact, I hadn’t even noticed that they had entered public beta until I went to the site to check some facts while writing it.

While there are many issues with the current SSL/TLS certificate-issuing paradigm, Let’s Encrypt aims to solve one of the biggest: the barrier to entry.  Not only are Let’s Encrypt certificates free, the process for getting them is mostly automatic.  By contrast, I’ve used another CA to obtain free certs for a number of my personal sites and projects for years, and the process to get them is not easy.  With that CA, getting a new certificate is a multi-step process that includes creating a client certificate to authenticate with the service, installing it in my browser, multiple steps of verifying domains, email addresses, and the like, waiting for verification codes to come by email, and manually installing the certificates when they arrive.  That CA also doesn’t support Subject Alternative Names (SANs), so I’m limited to creating certificates that work with only one domain or subdomain.

With Let’s Encrypt, after a little bit of one-time server setup, I just run a single shell command and it does everything for me (or most of it, anyway).

Before going too much further, let’s point out that Let’s Encrypt is not for everyone.  The certificates they issue are domain validated (DV), which means that the only validation done is a check that the server responding to the domain name in the certificate request is actually aware of the request.  As such, there is no information about the organization running the site in the certificate, so while a Let’s Encrypt certificate will enable encrypted connections in browsers (and without any certificate warnings), it does nothing to validate that the server is actually being run by the entity it purports to be.  Because of this, Let’s Encrypt certificates probably shouldn’t be used for most production-level sites, especially not for ecommerce, banking, or anything else where a high level of trust is required.  On the other hand, they could be perfect for dev and test servers, where that level of trust isn’t required.

Let’s Encrypt intends their certificates to be used on the millions of personal sites, running software like WordPress, where site owners log in in the clear.  These kinds of sites are often the targets of hackers because (among many other reasons) people tend to reuse passwords, so the clear-text passwords they use to log in to their blog may very well be the same ones they use to log in to their banks.

So how does automatic certificate generation work?  In a nutshell, you install the Let’s Encrypt software on your server and use it to request a certificate for one or more domains.  The client uses the Automated Certificate Management Environment (ACME) protocol, which was developed by Let’s Encrypt, to submit the certificate request (which it generates for you), to the Let’s Encrypt CA.  The CA then verifies the request by checking that a specific file, created by the client software, is available from your server via the web.  If the server finds the file, the domain is verified, and the certificate is issued.  If the request is made for a domain that isn’t hosted by the server, the verification will fail, and the certificate won’t be issued.  When the certificate is issued, it’s returned to the client which will save it to the server and, depending on the implementation, may also reconfigure the web server to use the certificate for the domains it covers.
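To give a sense of what that looks like in practice, here’s a sketch of a webroot-based request using the client as it existed during the beta; the domain names and the webroot path are placeholders, and the exact flags may vary by client version:

./letsencrypt-auto certonly --webroot -w /var/www/example -d example.com -d www.example.com

The client answers the CA’s challenge by dropping a file under /var/www/example/.well-known/ and, if validation succeeds, saves the certificate and key under /etc/letsencrypt/live/example.com/.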

Let’s Encrypt supports SANs on its certificates, so it’s possible, for instance, to cover the .com, .net, and .org variants of a given domain, both with and without www prefixes, in a single cert (this isn’t even possible with most commercial CAs unless you are willing to pay for it—SANs are usually a premium feature, and it often costs almost as much to add an additional domain as it does to get a whole new cert).  Let’s Encrypt also says that wildcard certs are in the works, though there are some issues they need to work through concerning how to validate them, since a wildcard certificate would be valid for any subdomain of the domain it was issued to.  Fortunately, since there’s no limit to the number of SANs you can add or the number of certificates you can request (within reasonable limits, for now, at least), wildcard certificates would only be necessary in some very specific use cases.

The biggest downside to Let’s Encrypt is also one of its biggest strengths: certificates are only valid for 90 days.  This is a stark contrast to the current industry standard, which allows certificates to be issued for up to two years, with terms of five years or more having been common not that long ago.  The 90-day limit means that site owners need to stay vigilant about expiration dates, but in a world of script-kiddie exploits and expired domains getting scooped up by domain squatters faster than you can say “connect,” it’s a pretty smart idea to recycle certificates quickly.  Plus, renewing is as easy as running the ACME client again, and an auto-renewal process is promised somewhere down the road.  For now, most monitoring tools can warn about certificate expiration well in advance.  Let’s Encrypt recommends renewing every 60 days to ensure there’s plenty of time to fix any issues that may come up in the renewal process.
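If you want to check expiration by hand, openssl can do it; this one-liner (hostname is a placeholder) prints the expiration date of the certificate being served on port 443 and exits non-zero if it expires within the next 30 days:

echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null | openssl x509 -noout -enddate -checkend 2592000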

If a public key ever gets compromised, revoking a certificate is easy, too…it’s just another command to the Let’s Encrypt client.  Remember that other CA I mentioned?  They actually charge a fee to revoke a cert and won’t issue a new one until either the old one expires or you revoke it, even if you just lost the private key because of stupidly copying another file on top of it…not that I’ve ever done that!

I figure I’ll probably start replacing my free certs with Let’s Encrypt certs as they expire.  Using them will be a bit more work in that I’ll have to remember to renew them every couple months, but I’m working on a script that handles the installation process with Vesta, where most of my sites reside, so the process will be much, much easier than it is now.  I plan to release this script once I’ve done some thorough testing, so watch for another post in the next week or so.

Using Mailman with Vesta on CentOS 6

About a year and a half ago I moved several of my websites to a server running the VestaCP control panel. At the time I posted an extensive review of the software, including a footnote that I found a way to get Mailman working with it. Apparently this is something a lot of people want to do because, since then, several people have contacted me looking for instructions. Life’s been hectic, and this wasn’t on the top of my mind, but I promised that I’d write a post. So, better late than never, here it is…

This tutorial will help you get GNU Mailman 2.1.x running on a Linux server. It’s geared toward CentOS 6.x, but will probably work with other Linux distros, although some file paths may change. It also assumes a standard VestaCP installation, with both Apache and Nginx running on the server.

Before we get too far into this, I should also point out that this tutorial only gets Mailman running on a Vesta server, it does not integrate it with the Vesta web interface. This means that you, as root, will need to set up new lists on the command line—you won’t be able to let your users create their own lists. Once a list is set up, however, it can be completely administered through the Mailman web interface, so hopefully this won’t be too big a deal for most situations.

Mailman is available as a CentOS package, but at the time that I did the install, several of the big email providers had recently made changes to their DMARC policies that broke older versions of Mailman, so I chose to build from the then-latest source release, which included new workarounds to address the issue.

When I did my installation, Mailman 3 was in beta, I believe, but since it wasn’t yet stable, and since I was moving existing Mailman 2.1 lists, I chose to stick with 2.1. Mailman 3 is now generally available and has some interesting new features, but it’s a significantly different beast and this tutorial probably won’t be helpful if you want to jump to 3.0. Fortunately, Mailman 2.1 is still supported and still receiving regular updates, with 2.1.20 being the most recent version as of this writing.

This tutorial assumes that System V is used to manage services on the server. Many new Linux distros, including CentOS 7, have switched to systemd, which uses a different configuration format. If you’re using a systemd-based distribution, you’re on your own to figure that out, as I haven’t tried to do it yet (with Mailman anyway) myself.

There are five main parts to getting Mailman running on Vesta:

  • Installing prerequisites
  • Building and installing Mailman
  • Configuring Exim
  • Configuring Apache and Nginx
  • Creating your lists

While Mailman 2.1 supports multiple domain names, it does not allow the same list name to be used multiple times on different domains. In other words, if you host cats.com and dogs.com on the same server, and you create a customers@cats.com list, you can’t also create a customers@dogs.com list. While this wasn’t really a deal breaker for me, I decided to come up with a workaround anyway, taking my lead from cPanel. cPanel appends the domain name onto the list name (i.e. customers_cats.com), but somehow strips it out in the email messages so that users only see customers@cats.com. I couldn’t figure out exactly how cPanel does this, but I came up with a pretty good facsimile by using Exim’s address rewriting features.

Installing Prerequisites

Mailman requires that a C compiler and the Python development libraries be installed on the server, neither of which is installed by default. In addition, the pip command is required to install the dnspython Python module:

yum install -y gcc gcc-c++ python-devel python-pip

Now run the following command to install dnspython:

pip install dnspython

Building Mailman

We’ll start by downloading the latest release of the 2.1 branch of Mailman (2.1.20 as of this writing). Check http://launchpad.net/mailman to be sure you’re using the latest version.

wget https://launchpad.net/mailman/2.1/2.1.20/+download/mailman-2.1.20.tgz

Unzip the package:

tar xvzf mailman-2.1.20.tgz

Switch to the directory that was created when the package was unzipped:

cd ./mailman-2.1.20

Create the account and group that Mailman will run under:

useradd -r mailman

Mailman expects that the directories it will be installed into exist when you start the installation. Create those directories and set the ownership and permissions that Mailman requires:

mkdir -p /usr/local/mailman /var/mailman
chown mailman.mailman /usr/local/mailman /var/mailman
chmod 02775 /usr/local/mailman /var/mailman

Run the configure script to ensure all necessary libraries are available and to get Mailman ready to build:

./configure --prefix=/usr/local/mailman --with-var-prefix=/var/mailman --with-mail-gid=mailman --with-cgi-gid=apache

Build the package:

make

And install it:

make install

Run the check_perms script to ensure that permissions are as Mailman expects them to be:

/usr/local/mailman/bin/check_perms

If the above script returns any errors (and it probably will), run it again with the -f flag to have it try to fix the errors. In some cases, you may need to do this a few times before everything works.

/usr/local/mailman/bin/check_perms -f

Copy the /usr/local/mailman/scripts/mailman file to /etc/init.d. This is the script that will be used to start and stop the mailman service:

cp /usr/local/mailman/scripts/mailman /etc/init.d

Copy the mailman crontab file to /etc/cron.d so that Mailman’s periodic tasks, such as sending out email reminders of posts awaiting moderation and managing the list archive, are run regularly.

cp /usr/local/mailman/cron/crontab.in /etc/cron.d/mailman

Now set a default password for Mailman. This password can be used in place of any list’s administrator password, so be sure to select a strong password.

/usr/local/mailman/bin/mmsitepass

Mailman requires that a default list, aptly named “mailman,” be created. You’ll be prompted for an administrator email address and a list password when you run this command. You can ignore the list of aliases that is displayed when you run the command.

/usr/local/mailman/bin/newlist mailman

Configure the system so that Mailman is automatically started when the server boots up:

chkconfig mailman on

And start the mailman service:

service mailman start

Mailman writes its log files to /var/mailman/logs. To be more consistent with other services, thereby making the logs easier to find, symlink Mailman’s logs directory in /var/log:

ln -s /var/mailman/logs /var/log/mailman

You’ll also want to rotate these logs on a regular basis so that they don’t get too big. To do this, create a new log rotate script. (Note: I prefer vi and use it in any instructions that require editing files in this tutorial, but feel free to substitute nano or your favorite editor if you don’t know vi or prefer something else.)

vi /etc/logrotate.d/mailman

Add the following contents to that file:

/var/log/mailman/bounce /var/log/mailman/digest /var/log/mailman/error /var/log/mailman/post /var/log/mailman/smtp /var/log/mailman/smtp-failure /var/log/mailman/qrunner /var/log/mailman/locks /var/log/mailman/fromusenet /var/log/mailman/subscribe /var/log/mailman/vette {
  missingok
  sharedscripts
  postrotate
  /usr/local/mailman/bin/mailmanctl reopen >/dev/null 2>&1 || true
  endscript
}
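To make sure logrotate is happy with the new file, you can do a dry run in debug mode, which prints what would be rotated without actually touching anything:

logrotate -d /etc/logrotate.d/mailman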

Configuring Exim

The next step in the configuration process is to integrate Mailman with Exim, Vesta’s mail transfer agent (MTA). This allows Vesta to properly route incoming messages sent to a Mailman list to the Mailman software for processing. Exim is actually a preferred MTA to use with Mailman because its ability to route messages based on directory listings means that no per-list Exim configuration is necessary. By contrast, most other MTAs require you to set up several email aliases for each list you create.

The next several steps require editing the /etc/exim/exim.conf file. To make what’s going on more understandable, I’m going to start at the bottom of the file and work my way toward the top.

First, create a backup of the conf file, just to be safe:

cp /etc/exim/exim.conf /etc/exim/exim.bak

Then open it to edit.

vi /etc/exim/exim.conf

As mentioned above, Mailman does not support lists on multiple domains that share the same name. To work around this, I decided to follow cPanel’s lead and appended the domain name to the list name in the form listname__domain.tld__. (Note that the trailing double underscore is necessary to properly parse addresses that contain a command, such as “-unsubscribe.” I couldn’t get the rewrite to properly parse these addresses without the underscores there.) This creates list email addresses that look like listname__domain.tld__@domain.tld, which is undesirable. When mail is sent, however, Exim rewrites the email addresses it finds in the message into the preferred form, listname@domain.tld, so end users see the cleaner form of the address.

Jump to the bottom of the file, find the line that starts with “begin rewrite,” and add the following after that line:

#messages generated by Mailman will have the format of list__domain__@domain
#this rule will rewrite them to list@domain before they are delivered
^([a-z0-9-\.]+)__[a-z0-9-\.]+__(-[a-z0-9]+)?@(.*) $1$2@$3 SEh

The above rewrite rule will strip the extraneous domain name from the list name when messages are sent, but only when the address appears in specific email headers. While this ensures that the To, From, and Cc headers, for example, are rewritten, it does not rewrite some of the more obscure headers, such as those that instruct mail clients how to handle unsubscribe requests. For this reason, we include two transports and two routers, the next features we’ll configure, that will properly route inbound messages that use either address format.
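Once the rewrite rule is saved, you can see how Exim will apply it by using its rewrite-testing mode; the list name and domain here are placeholders:

exim -brw customers__cats.com__@cats.com

Exim prints the address as it would appear in each header and in the envelope after rewriting, so the contexts covered by the rule’s flags should show customers@cats.com.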

Transports tell Exim how to handle a given incoming message. In this case, when Exim receives a message associated with a list, the transports instruct it to open a pipe to Mailman and pass the contents of the message through it, allowing Mailman to take over the processing.

Find the line “begin transports,” and add the following lines after this line but before the next “begin” section.  There are several other transports already defined.  Where you put these doesn’t matter, as long as they’re in the “transports” section of the file.

mailman_transport:
  driver = pipe
  command = /usr/local/mailman/mail/mailman \
    '${if def:local_part_suffix \
    {${sg{$local_part_suffix}{-(\\w+)(\\+.*)?}{\$1}}} \
    {post}}' \
    ${lc:$local_part}__${lc:$domain}__
  current_directory = /usr/local/mailman
  home_directory = /usr/local/mailman
  user = mailman
  group = mailman
mailman_transport_norewrite:
  driver = pipe
  command = /usr/local/mailman/mail/mailman \
    '${if def:local_part_suffix \
    {${sg{$local_part_suffix}{-(\\w+)(\\+.*)?}{\$1}}} \
    {post}}' \
    ${lc:$local_part}
  current_directory = /usr/local/mailman
  home_directory = /usr/local/mailman
  user = mailman
  group = mailman

Finally, we create two routers. These tell Exim where to look to determine whether an incoming message has a valid destination on the server and, when a match is found, which transport it should be routed to for processing.

Add the following lines to the file between “begin routers” and “begin transports.”  Like with the transports, positioning doesn’t matter, as long as both definitions appear before the “begin transports” line of the file.

mailman_router:
  driver = accept
  require_files = /usr/local/mailman/mail/mailman : \
    /var/mailman/lists/${lc::$local_part}__${lc::$domain}__/config.pck
  local_part_suffix_optional
  local_part_suffix = -admin : \
    -bounces : -bounces+* : \
    -confirm : -confirm+* : \
    -join : \
    -leave : \
    -owner : \
    -request : \
    -subscribe : \
    -unsubscribe
  transport = mailman_transport
mailman_router_norewrite:
  driver = accept
  require_files = /usr/local/mailman/mail/mailman : \
    /var/mailman/lists/${lc::$local_part}/config.pck
  local_part_suffix_optional
  local_part_suffix = -admin : \
    -bounces : -bounces+* : \
    -confirm : -confirm+* : \
    -join : \
    -leave : \
    -owner : \
    -request : \
    -subscribe : \
    -unsubscribe
  transport = mailman_transport_norewrite

Save the changes to the file and restart exim to enable them:

service exim restart
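If Exim refuses to start, a quick way to find the problem is to ask it to read the configuration and report its version; any syntax error in exim.conf will be flagged with a line number:

exim -bV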

Configuring Apache

Configuring Apache was a bit of a challenge because Mailman is very specific about which user account it runs under. Vesta, on the other hand, uses suexec to run all scripts under the UID of the site owner, which breaks Mailman. The solution is to run Mailman on a different port, where it is not bound by suexec’s rules. Later, we’ll set up a proxy in Nginx so that users won’t need to remember complicated URLs to manage their lists.

Mailman’s web-based admin tool has several small images at the bottom, which it expects to find in Apache’s /var/www/icons directory. Create symlinks in the icons directory that point at these images.

cp -s /usr/local/mailman/icons/* /var/www/icons

Create a new Apache configuration file for Mailman.

vi /etc/httpd/conf.d/mailman.conf

Add a listen directive at the very top of this file. This tells Apache to bind to port 8090 when it starts and to listen for HTTP connections on this port.

Listen 8090

Next, add a VirtualHost block to handle requests coming in to this port.

<VirtualHost *:8090>
  ScriptAlias /mailman/ /usr/local/mailman/cgi-bin/
  <Directory /usr/local/mailman/cgi-bin/>
    AllowOverride None
    Options ExecCGI
    Order allow,deny
    Allow from all
  </Directory>

  Alias /pipermail/ /usr/local/mailman/archives/public/
  <Directory /usr/local/mailman/archives/public/>
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>

Save the file and restart Apache:

service httpd restart

Now, depending on your firewall settings, you may be able to access the Mailman web interface at http://domain.tld:8090/mailman/listinfo. If you can’t, don’t worry about it as we’ll set that up next.
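If the port is blocked and you’d like to test directly, you could temporarily open it with a rule like the following (this assumes a standard iptables setup and isn’t persistent across reboots; Vesta also manages its own firewall rules, so adjust to whatever you actually use):

iptables -I INPUT -p tcp --dport 8090 -j ACCEPT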

Configuring Nginx

Next, Nginx needs to be configured on each site to proxy requests to Mailman’s Apache listener on port 8090. Fortunately, we only need to apply this change to a few template files. Vesta offers a tool to reapply the template to a site’s configuration, which is a big help if you already have a lot of sites configured on the server.

Vesta stores its Nginx configuration templates in /usr/local/vesta/data/templates/web/nginx. This directory contains templates for each of the “proxy templates” you can choose when setting up a hosting package. Files with the extension .tpl are for HTTP configurations and .stpl files are for HTTPS configurations. You’ll need to make the following edits to each of the templates in the directory (or at least each of the templates that you use on your server).

We’ll start with HTTPS. Mailman uses relative URLs for most of its interface, so it runs fine on both HTTP and HTTPS. The user administration pages, however, use absolute URLs, so when managing users you can be dropped to HTTP unexpectedly. While ngx_http_sub_module is not a standard part of Nginx, the CentOS builds include it, and it can do string substitutions on page output. We can use this to rewrite the HTTP URLs in Mailman’s output to HTTPS to avoid problems.

Open the hosting.stpl file in the directory noted above:

vi /usr/local/vesta/data/templates/web/nginx/hosting.stpl

Between the end of the block that begins “location /error/” and before the beginning of the block that starts “location @fallback” add the following:

location ~ ^/((mailman|pipermail)/?.*)$ {
  proxy_pass http://127.0.0.1:8090/$1$is_args$args;
  sub_filter http://%domain_idn% https://%domain_idn%;
  sub_filter_once off;
}
location /icons/ {
  alias /var/www/icons/;
}

Save the file and do the same for the other .stpl files in the directory.

There are two options for the HTTP configuration. If you know that all of your sites will have SSL certificates, as mine do, you can use the following configuration to direct all HTTP requests to their HTTPS counterparts. I recommend this approach if you can support it.

Open the hosting.tpl file:

vi /usr/local/vesta/data/templates/web/nginx/hosting.tpl

Again, between the “location /error/” and “location @fallback” blocks, add the following:

location ~ ^/((mailman|pipermail)/?.*)$ {
  rewrite ^(.*)$ https://$host$1;
}
location /icons/ {
  alias /var/www/icons/;
}

And, like before, repeat the change on the other .tpl files as well.

If you can’t rely on all of your sites having SSL available, you can instead use a variation on the HTTPS configuration that doesn’t include the HTTP-to-HTTPS conversion. Follow the same instructions for the above HTTP changes, but add the following instead:

location ~ ^/((mailman|pipermail)/?.*)$ {
  proxy_pass http://127.0.0.1:8090/$1$is_args$args;
}
location /icons/ {
  alias /var/www/icons/;
}

Now that the templates are updated, we need to apply them to each existing site. To do this, run the following command for each hosting user account (not domain) on the server:

/usr/local/vesta/bin/v-rebuild-web-domains username
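If you host a lot of accounts, a small loop can take care of all of them at once; this is a sketch that assumes the first column of v-list-users’ plain output is the username, so double-check that on your install before relying on it:

for user in $(/usr/local/vesta/bin/v-list-users plain | cut -f1)
do
    /usr/local/vesta/bin/v-rebuild-web-domains $user
done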

Once all accounts have been updated, restart Nginx to make the changes take effect:

service nginx restart
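If Nginx complains on restart, running its configuration test will usually point at the template that rendered badly:

nginx -t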

Creating Lists

Mailman is now up and running. All that’s left to do is to start creating lists.

When setting up a new list, remember that it is necessary to use the full address syntax, listname__domain.tld__@domain.tld for the list address. You should also specify both the emailhost and urlhost options to ensure that the list is configured correctly.

/usr/local/mailman/bin/newlist --emailhost=domain.tld --urlhost=www.domain.tld listname__domain.tld__@domain.tld

You’ll be prompted for an email address of the list administrator and for a list password. When the list is set up, you can access it at http://www.domain.tld/mailman/admin/listname__domain.tld__.

Since, through the Exim rewrites, we’re running the list from a different address than the one we configured Mailman to use, one small change to the list’s settings is necessary. Open the list’s administrative page in a browser and log in with the password you provided in the previous step.

Once logged in, click on “Privacy Options” in the “Configuration Categories” menu. Then click on “Recipient Filters.”

On the Recipient Filters page, find the field labelled “Alias names (regexps) which qualify as explicit to or cc destination names for this list” and add the preferred list address (listname@domain.tld). Without this, Mailman will not recognize the preferred address as a valid list address and will hold any messages sent to it for moderation.

Click the “Submit Your Changes” button to save the change. Then make any additional settings changes you require and add recipients to the list under “Membership Management” in the “Configuration Categories” menu. Your list is now ready to use.

Remote Access on a Raspberry Pi

OK, so you have a Raspberry Pi running headlessly (no keyboard or screen) on your network and you want to do something with it. What do you do? Well, there’s SSH, of course, but what if you want to play with any of the Pi’s graphical tools?

The Raspbian OS (as well as most of the other general-use OS options available on the Raspberry Pi site) runs an X Window server by default. This provides the GUI when the Pi is plugged in to a screen, but it can also be accessed remotely. This post will look at some of the several ways to do this.

TL;DR version: most of these examples are either too difficult to set up or too impractical to use reliably. For a no-nonsense tutorial on a tool that works pretty well, jump straight to the last section, XRDP.

X11 Forwarding

On Raspbian, the Pi’s SSH server has X11 forwarding turned on by default. This means that you can run GUI programs on your Pi but display the interface on your local desktop, provided your local desktop has an X server itself. If you’re on Linux or some other form of graphical Unix, you’re good to go. Mac OS X users will need to install an X server, such as XQuartz. Then ssh into the Pi as you normally would, but add a -Y flag to enable the local machine to receive the X11 data (replace the IP in the example with that of your Pi, of course):

ssh -Y pi@192.168.1.123

Once you log in, you’ll have a prompt that looks like any normal SSH session, but try running an X Window program, like xeyes:

xeyes &

You should see a window open with two eyes in it that follow your mouse around the screen. Note the ampersand at the end of the command. This tells the Linux shell to move the xeyes process to the background, allowing the shell to return a prompt for the next thing you want to run. If you don’t include it, you’ll need to close xeyes before you can run something else.

Windows users don’t need to feel left out, either, as there are a number of X server implementations for Windows, such as Xming, Cygwin/X, and XWin32 (commercial).

The advantage of X11 forwarding is that it’s already built in to the Raspbian OS and doesn’t require a lot of work to set up. The downsides are that you need to know the names of the programs you want to run, since you don’t have the GUI menu bar to select from, and that it can be a little tricky to get working on the client desktop, especially if that machine runs Windows.

On the first point, however, you can launch the LXDE (Raspbian’s graphical environment) menu system by running:

lxsession &

This will open the Raspberry Pi menu bar on your local screen so you can easily launch programs, but it won’t create a windowed version of the Pi desktop as you might expect; instead you get a weird mix of your local desktop and the remote desktop that’s confusing and difficult to use. Some X servers have an option to switch to a windowed mode, but if you want the windowed interface without a lot of fuss, you may want to consider another option, such as one of the ones below.

VNC

VNC, which stands for Virtual Network Computing, is a graphical desktop sharing protocol developed at the Olivetti & Oracle Research Lab (later part of AT&T) in the late 1990s. Since the code for the protocol was open sourced, many different clients and servers have been developed for nearly every platform you might encounter.

There are plenty of tutorials for getting VNC running on a Raspberry Pi, so I won’t spend time on that here. If you want to try it, this tutorial on the Raspberry Pi site will get you going.

You’ll also probably need to install a VNC client on your desktop—TightVNC seems to be one of the more popular choices, as are RealVNC (from the original developers of the protocol), UltraVNC (Windows only), and Chicken (formerly Chicken of the VNC, Mac only). Mac users take note: there’s already a VNC client built in to OS X—it’s called “Screen Sharing.app.” It’s buried pretty deep in the system, so you won’t find it in your Applications folder, but it should come up in a Spotlight search.

The problem with VNC is that its underlying Remote Framebuffer (RFB) protocol sends copies of the remote screen to the client even if only a small portion of the screen has changed, which means it can feel extremely sluggish, even when doing simple tasks, like editing a document.

Chrome Remote Desktop

Chrome Remote Desktop is a remote access solution created by Google and available for the Chrome browser via the Chrome Web Store. Rather than connecting directly, machines running the Chrome Remote Desktop service register themselves with Google’s servers when they start up, and Google serves as a proxy between the remote machine and the client accessing it, in a way similar to how instant messaging services work. This allows connecting over the Internet to remote computers that are sitting behind NAT firewalls, which is not possible with any of the other services listed here.

I use Chrome Remote Desktop regularly to access the Mac Pro in my office when I need to work remotely. The service uses SSL encryption to ensure privacy and Google’s VP8 video format to send the screen image, and it’s very responsive.

I have not tried Chrome Remote Desktop on a Raspberry Pi, but others have reported good luck with it on the older Raspbian Wheezy. Unfortunately, Chromium, Chrome’s open source cousin, is not available in the new Raspbian Jessie repositories (yet?), so short of building from source, this isn’t an option for me at the moment. Also, since it requires using Chromium to set it up, the Pi needs to be connected to something with a screen, at least initially (or you could configure it with X11 Forwarding).

NX

NX is a protocol developed by NoMachine, with client and server implementations for Linux, Mac, and Windows. Prior to version 4, NX was open source software that was tunneled to the client over SSH, so it was extremely easy to get running on a Linux box with very little fuss. Unlike VNC, however, NX uses compression to reduce the transferred data so that connections are responsive, even over slower networks.

NX has been my go-to tool for Linux remote access for years, but unfortunately no precompiled versions of it, or any of the open source forks of it, are readily available for the Raspberry Pi, and compiling it from source is tricky given the Pi’s limited resources.

XRDP

Fortunately I discovered XRDP some time ago. If you’re a Windows user, you might be familiar with the Windows Remote Desktop Protocol (RDP), which has been included as part of most Windows distributions since Windows XP. RDP achieves very fast speeds by sending only the portions of the screen that have changed to the client. Because of this, many programs appear almost as responsive remotely as they do when logged in to the machine directly.

XRDP is implemented as a hybrid between VNC and RDP. The actual remote control of the machine is done via VNC, but data is sent back to the client through RDP, where it can benefit from the efficiencies of that protocol. This helps make XRDP faster than VNC, since much of the VNC overhead is never sent over the wire. RDP is also a widely supported protocol, with clients built in to most Windows computers.

To install XRDP, simply run the following commands on the Pi:

sudo apt-get update
sudo apt-get install xrdp

Once XRDP is installed, you’ll want to make one small change to the configuration. By default, XRDP is configured with limited encryption, so someone could conceivably eavesdrop on your session. To fix this, open the xrdp.ini file on your Pi in your favorite editor (mine is vi (vim, actually), but feel free to substitute nano or something else if you’re not comfortable with vi):

sudo vi /etc/xrdp/xrdp.ini

In the [globals] section at the top of the file, find the line that starts with crypt_level and set it to high:

crypt_level=high

The crypt levels are defined as follows:

– low — Data you send to the server is encrypted with 40-bit RC4 encryption, but the data you receive is sent in the clear.
– medium — Data is sent in both directions using 40-bit RC4 encryption.
– high — Data is sent in both directions using 128-bit RC4 encryption.

Once you’ve set the crypt_level, save the file and restart the service:

sudo service xrdp restart

Then open your Remote Desktop Connection app (it’s in the Accessories folder of nearly every Windows machine, the official Microsoft version is a free download in the Mac App Store, and there are lots of third-party versions for Linux, iOS, and Android) and type in your Pi’s IP address:

[Screenshot: Remote Desktop Setup]

When you connect, you’ll be prompted for your credentials on the Pi. Leave the “module” set to “sesman-Xvnc,” enter your username and password, and click OK.

[Screenshot: XRDP Login Window]

In a few seconds, you should have full access to your Raspberry Pi desktop.

[Screenshot: XRDP Desktop]

Dead Simple Dynamic DNS Updater

I run a VPN on my home network which lets me access my systems and files remotely and gives me a secure route to the Internet when I have to use questionable networks. Since my Internet provider does not give me a static IP address, I rely on dynamic DNS services to keep my IP mapped to a hostname I can always use to “phone home.”

Since the DNS servers for the service I’ve been using seemed to vanish a couple weeks ago, I started “shopping” for a new provider and came across dtDNS. dtDNS allows you to set up five dynamic DNS hostnames for free, or you can pay a $5.00 one-time fee to get unlimited (“within reason,” according to the site) hosts.

Once I had my new hostname set up, it was time to set up a client app to keep my IP in sync. I had some trouble getting ddclient, which I’ve been using for a while now, to work with dtDNS, and the Linux options on dtDNS’s update clients page were either no longer available, required Java, or expected the machine to have a public IP address, which mine does not. So with a bit of research, I wrote my own.

My updater is a simple shell script with less than 10 lines of code. It uses icanhazip.com to find the external IP address, so it will work on systems that don’t have public IPs, and it only pushes a change request when it sees that the IP has changed.

#!/bin/bash
# dtDNS Dynamic IP update Script
# Author: Jason R. Pitoniak 
# 
# Copyright (c) 2015 Jason R. Pitoniak

# Set your dtDNS hostname and password below
HOSTNAME='MYNAME.dtdns.net'
PASSWORD='PASSWORD'

# We need to find your external IP address, as your system may have a non-public address
# on your local network. icanhazip.com (or any number of other sites) will do this for us
EXTIP=`curl -s http://icanhazip.com/`

# Now we check which IP dtDNS currently has recorded by checking their DNS server
LASTIP=`nslookup $HOSTNAME ns1.darktech.org | tail -2 | awk '{ print $2 }'`

# If the current external IP is different from the one with dtDNS, update dtDNS
if [ "$EXTIP" != "$LASTIP" ]
then
    curl "https://www.dtdns.com/api/autodns.cfm?id=$HOSTNAME&pw=$PASSWORD&ip=$EXTIP"
fi

It should run on any Unix-like system, including Mac OS X. It will probably even work on Windows with Cygwin, but I haven’t tried. Just copy it to a file named dtdns-update somewhere on your system, update the HOSTNAME and PASSWORD variables to reflect your account, and chmod the file so that it is accessible only to the user that will run it:

chmod 700 dtdns-update

To test the script, call it from the command line:

/path/to/dtdns-update

The script will return whatever response it receives from the dtDNS update API, whether it is an error or success message. If nothing is returned it means that dtDNS already has the correct IP, so no action was taken.

Now we’ll set up a cron job to run the script periodically. To do this, enter the following on the command line:

crontab -e

A text editor will open. Add the following to the end of the file:

*/5 * * * * /path/to/dtdns-update >/dev/null 2>&1

This will run the script once every five minutes. You can adjust the interval as you feel is appropriate. Once you save the file, the new cron job will be installed and will begin running within a few minutes. Now you can rest assured that your IP address will always be up to date with dtDNS.

Protip: If your dtDNS hostname is too difficult to easily remember and you own a domain name, you can set up a hostname on your own domain that points to your dtDNS name. If you maintain your own DNS, create a CNAME record for whatever host name you want with your dtDNS hostname as the target. If you don’t maintain your DNS yourself, ask your host if they can configure this for you.
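For example, in a BIND-style zone file the record could be as simple as this (hostnames are placeholders):

home.example.com.    IN    CNAME    myname.dtdns.net.

Afterward, dig +short home.example.com should return your dtDNS hostname followed by its current IP.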

Domain Aliases with Exim 4

As I’ve noted before, I recently moved several of my sites to a server running VestaCP. One of the convenient features of Vesta is the ability to specify any number of domain aliases when setting up a new site. I’ve now learned, however, that Vesta only aliases these domains for web access, not for email.

One of the sites I host has the .com, .net, and .org variants of its domain. Being a not-for-profit organization, we use the .org as our primary domain and do 301 redirects to it on the web, but for historic reasons much of our email comes through several .net addresses, so it was important to me to keep email flowing through this variant.

We currently have around 100 email addresses hosted with this site, most of which are aliases. Since Vesta configures Exim, its standard SMTP server, to check per-domain text files for aliases, it would have been trivial for me to set up additional files for the additional domains. However, since the aliases change on a somewhat-regular basis, I wanted to avoid having to duplicate files. And since the domain name is part of the alias in these files, simple symlinks aren’t an option.

The Internet wasn’t much help here, either. I found several references on how to set up individual redirects on each of the domains, and references on how to create wildcard redirects that send all mail on a domain to a single address on another domain, neither of which works for what I want to do. I simply want to redirect any mail that comes in on one of the secondary domains to the same “local part” on the primary.

Not being an Exim expert, it took some work to get what I wanted. After some trial and error, I finally found a working solution. I’m not sure that this is the best way to handle this, but since the Internet seems to be devoid of this specific solution, I figured I’d share it here. I should also point out that I’m running this on CentOS 6; YMMV with other platforms and configurations.

To start, I created a file, /etc/exim/domain_aliases, that simply maps secondary domains to the primary domain to which they should forward:

sudo vi /etc/exim/domain_aliases

The contents of this file are as follows:

ourdomain.com: ourdomain.org
ourdomain.net: ourdomain.org

Next, I added a new router to /etc/exim/exim.conf. This can go anywhere after the “begin routers” section of the config as long as it falls before the “begin transports” section.

sudo vi /etc/exim/exim.conf

The configuration to add looks like this:

domain_aliases:
     driver = redirect
     data = ${extract{1}{:}{${lookup{$domain}lsearch{/etc/exim/domain_aliases}{"$local_part@$value"}}}}
     require_files = /etc/exim/domain_aliases

The most important line here is the one that starts with “data.” Since it’s rather complex, I’ll break it down.

The first part of this action is the “extract.” This takes a string, parses it at a delimiter, and returns a result based on whether or not a search string (in this case it’s the incoming domain name) is found. The general syntax of this directive is as follows:

${extract{search_parameter}{delimiter}{search_string}{return_string_success}{return_string_fail}}

* search_parameter is an integer that specifies which parameter position will be checked for the match. In this example, we want to check the first (leftmost) parameter.
* delimiter is the character used as the delimiter between columns. I’m using a colon.
* search_string is the full string being searched. This can be a simple string or, as in this case, a more complex expression. I’ll explain what I’m doing in more detail below.
* return_string_success is returned when a successful match is made. If this parameter is not specified, the result of the match (i.e. the string searched) is returned instead.
* return_string_fail is returned when no match is made. It defaults to an empty string when not specified.

In this implementation a second directive, a lookup, is inserted in place of the search_string. Lookups match a value to a key and take this form:

${lookup{key}search_type{path}{return_string_success}{return_string_fail}}

* key is a string value containing the key we want to find. In this case we pass in $domain which holds the domain name to which the incoming message was sent.
* search_type is the type of search we want to do. I’m using lsearch which allows searching in the individual lines of a file.
* path is the absolute path to the text file to be used for the lookup. This is the domain_aliases file created above.
* return_string_success and return_string_fail work the same way they do in the extract. In this case, if a match on the original domain is found, we return a new email address, built from the $local_part (everything to the left of the “@”) of the original email address and the $value returned by the lookup, a variable containing the string result of the match. Again, if no match is made, an empty string is returned.

Together, these two directives provide Exim with a new destination address for messages coming in via the secondary domains. Once the match is made, Exim restarts the lookup process, looking now for a handler for the message using the newly returned address.
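If you’d like to see the expansion in action before touching anything else, Exim’s expansion-testing mode is handy; with the domain_aliases file above in place, this should print “ourdomain.org”:

exim -be '${lookup{ourdomain.com}lsearch{/etc/exim/domain_aliases}{$value}{no match}}'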

There is still one more step, however, because Exim needs to know that it is able to accept mail for the secondary domains. To do this, the secondary domains need to be added to Exim’s local_domains and relay_to_domains lists. While there’s a number of ways to do this, I found it easiest to add another reference to my domain_aliases file.

First, find the lines in /etc/exim/exim.conf that start “domainlist local_domains” and “domainlist relay_to_domains.” Under Vesta, they’ll look something like this:

domainlist local_domains = dsearch;/etc/exim/domains/
domainlist relay_to_domains = dsearch;/etc/exim/domains/

Vesta stores each mail domain’s configuration in a directory named with the domain name, so it does a directory search (dsearch) and, if it finds a matching directory, it knows it can handle mail for that domain. We’ll add a second option: another lsearch of the domain_aliases file, like so:

domainlist local_domains = dsearch;/etc/exim/domains/:lsearch;/etc/exim/domain_aliases
domainlist relay_to_domains = dsearch;/etc/exim/domains/:lsearch;/etc/exim/domain_aliases

Finally, reload Exim’s configuration and your secondary domains should start handling mail:

service exim reload
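You can confirm the full chain with Exim’s address-testing mode; the address here is a placeholder, but the output should show the message being redirected to the same local part on the primary domain and then routed normally from there:

exim -bt info@ourdomain.net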

Again, as I mentioned above, I am far from being an expert at configuring Exim, so I could be missing something completely obvious. Still, this seems like it would be a pretty normal thing to do and unless my Google-fu has failed me, it seems not many others are doing it. Until I learn of a better way, this is how I’m configuring my domains. If you know of a better way to handle multiple domains, or you find this helpful, please comment. Exim, like most popular daemons, is a Swiss Army Knife of possibilities. Getting the most from it takes time, patience, and the generosity of those willing to share their struggles.
