My Little Corner of the Net

Dead Simple Dynamic DNS Updater

I run a VPN on my home network which lets me access my systems and files remotely and gives me a secure route to the Internet when I have to use questionable networks. Since my Internet provider does not give me a static IP address, I rely on dynamic DNS services to keep my IP mapped to a hostname I can always use to “phone home.”

Since the DNS servers for the service I’ve been using seemed to vanish a couple weeks ago, I started “shopping” for a new provider and came across dtDNS. dtDNS allows you to set up five dynamic DNS hostnames for free, or you can pay a $5.00 one-time fee to get unlimited (“within reason,” according to the site) hosts.

Once I had my new hostname set up, it was time to set up a client app to keep my IP in sync. I had some trouble getting ddclient, which I’ve been using for a while now, to work with dtDNS, and the Linux options on dtDNS’s update clients page were either no longer available, required Java, or expected the machine to have a public IP address, which mine does not. So with a bit of research, I wrote my own.

My updater is a simple shell script with fewer than ten lines of code. It uses icanhazip.com to find the external IP address, so it will work on systems that don’t have public IPs, and it only pushes a change request when it sees that the IP has changed.

#!/bin/bash
# dtDNS Dynamic IP update Script
# Author: Jason R. Pitoniak 
# 
# Copyright (c) 2015 Jason R. Pitoniak

# Set your dtDNS hostname and password below
HOSTNAME='MYNAME.dtdns.net'
PASSWORD='PASSWORD'

# We need to find your external IP address as your system may have a non-public address
# on your local network. icanhazip.com (or any number of other sites) will do this for us
EXTIP=`curl -s http://icanhazip.com/`

# Now we check which IP dtDNS currently has recorded by checking their DNS server
LASTIP=`nslookup $HOSTNAME ns1.darktech.org | tail -2 | awk '{ print $2 }'`

# If the current external IP is different from the one with dtDNS, update dtDNS
if [ "$EXTIP" != "$LASTIP" ]
then
    curl "https://www.dtdns.com/api/autodns.cfm?id=$HOSTNAME&pw=$PASSWORD&ip=$EXTIP"
fi

It should run on any Unix-like system, including Mac OS X. It will probably even work on Windows with Cygwin, but I haven’t tried. Just copy it to a file named dtdns-update somewhere on your system, update the HOSTNAME and PASSWORD variables to reflect your account, and chmod the file so that it is accessible only to the user that will run it:

chmod 700 dtdns-update

To test the script, call it from the command line:

/path/to/dtdns-update

The script will return whatever response it receives from the dtDNS update API, whether it is an error or success message. If nothing is returned it means that dtDNS already has the correct IP, so no action was taken.

Now we’ll set up a cron job to run the script periodically. To do this, enter the following on the command line:

crontab -e

A text editor will open. Add the following to the end of the file:

*/5 * * * * /path/to/dtdns-update >/dev/null 2>&1

This will run the script once every five minutes. You can adjust the interval as you feel is appropriate. Once you save the file, the new cron job will be installed and will begin running within a few minutes. Now you can rest assured that your IP address will always be up to date with dtDNS.

Protip: If your dtDNS hostname is too difficult to easily remember and you own a domain name, you can set up a hostname on your own domain that points to your dtDNS name. If you maintain your own DNS, create a CNAME record for whatever hostname you want with your dtDNS hostname as the target. If you don’t maintain your DNS yourself, ask your host if they can configure this for you.
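For example, if you manage your zone in a BIND-style zone file and wanted a hypothetical host called “home” on your domain, a record along these lines should do it (the trailing dot on the target matters):

home    IN    CNAME    MYNAME.dtdns.net.

Once the change propagates, home.yourdomain.com will resolve to whatever IP your dtDNS hostname currently points at.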

Domain Aliases with Exim 4

As I’ve noted before, I recently moved several of my sites to a server running VestaCP. One of the convenient features of Vesta is the ability to specify any number of domain aliases when setting up a new site. I’ve now learned, however, that Vesta only aliases these domains for web access, not for email.

One of the sites I host has the .com, .net, and .org variants of its domain. Being a not-for-profit organization, we use the .org as our primary domain and do 301 redirects to it on the web, but for historic reasons much of our email comes through several .net addresses, so it was important to me to keep email flowing through this variant.

We currently have around 100 email addresses hosted with this site, most of which are aliases. Since Vesta configures Exim, its standard SMTP server, to check per-domain text files for aliases, it would have been trivial for me to set up additional files for the additional domains. However, since the aliases change on a somewhat-regular basis, I wanted to avoid having to duplicate files. And since the domain name is part of the alias in these files, simple symlinks aren’t an option.

The Internet wasn’t much help here, either. I found several references on how to set up individual redirects on each of the domains, and references on how to create wildcard redirects that send all mail on a domain to a single address on another domain, neither of which works for what I want to do. I simply want to redirect any mail that comes in on one of the secondary domains to the same “local part” on the primary.

Not being an Exim expert, it took some work to get what I wanted. After some trial and error, I finally found a working solution. I’m not sure that this is the best way to handle this, but since the Internet seems to be devoid of this specific solution, I figured I’d share it here. I should also point out that I’m running this on CentOS 6; YMMV with other platforms and configurations.

To start, I created a file, /etc/exim/domain_aliases, that simply maps secondary domains to the primary domain to which they should forward:

sudo vi /etc/exim/domain_aliases

The contents of this file are as follows:

ourdomain.com: ourdomain.org
ourdomain.net: ourdomain.org

Next, I added a new router to /etc/exim/exim.conf. This can go anywhere after the “begin routers” section of the config as long as it falls before the “begin transports” section.

sudo vi /etc/exim/exim.conf

The configuration to add looks like this:

domain_aliases:
     driver = redirect
     data = ${extract{1}{:}{${lookup{$domain}lsearch{/etc/exim/domain_aliases}{"$local_part@$value"}}}}
     require_files = /etc/exim/domain_aliases

The most important line here is the one that starts with “data.” Since it’s rather complex, I’ll break it down.

The first part of this action is the “extract.” This takes a string, parses it at a delimiter, and returns a result based on whether or not a search string (in this case it’s the incoming domain name) is found. The general syntax of this directive is as follows:

${extract{search_parameter}{delimiter}{search_string}{return_string_success}{return_string_fail}}

* search_parameter is an integer that specifies which parameter position will be checked for the match. In this example, we want to check the first (leftmost) parameter.
* delimiter is the character used as the delimiter between columns. I’m using a colon.
* search_string is the full string being searched. This can be a simple string or, as in this case, a more complex expression. I’ll explain what I’m doing in more detail below.
* return_string_success is returned when a successful match is made. If this parameter is not specified, the result of the match (i.e. the string searched) is returned instead.
* return_string_fail is returned when no match is made. It defaults to an empty string when not specified.
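
As a quick sanity check of the numeric extract by itself, Exim’s expansion tester (exim -be) can evaluate one against a throwaway string:

exim -be '${extract{2}{:}{one:two:three}}'

That prints “two,” the second colon-separated field, and it is the same mechanism the router applies to the string returned by the lookup.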

In this implementation a second directive, a lookup, is inserted in place of the search_string. Lookups match a value to a key and take this form:

${lookup{key}search_type{path}{return_string_success}{return_string_fail}}

* key is a string value containing the key we want to find. In this case we pass in $domain which holds the domain name to which the incoming message was sent.
* search_type is the type of search we want to do. I’m using lsearch which allows searching in the individual lines of a file.
* path is the absolute path to the text file to be used for the lookup. This is the domain_aliases file created above.
* return_string_success and return_string_fail work the same way they do in the extract. In this case, if a match on the original domain is found we return a new email address, built from the $local_part (everything to the left of the “@“) of the original email address and the $value returned by the lookup, a variable containing the string result of the match. Again, if no match is made, an empty string is returned.

Together, these two directives provide Exim with a new destination address for messages coming in via the secondary domains. Once the match is made, Exim restarts the routing process, looking for a handler for the message using the newly returned address.
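
If you want to sanity-check the file and the lookup before relying on the router, Exim’s expansion-test mode (exim -be) can evaluate the lookup portion by hand. Here ourdomain.net is just the example secondary domain from the file above:

exim -be '${lookup{ourdomain.net}lsearch{/etc/exim/domain_aliases}{found: $value}{no match}}'

If everything is in place, this should print “found: ourdomain.org”.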

There is still one more step, however, because Exim needs to know that it is able to accept mail for the secondary domains. To do this, the secondary domains need to be added to Exim’s local_domains and relay_to_domains lists. While there are a number of ways to do this, I found it easiest to add another reference to my domain_aliases file.

First, find the lines in /etc/exim/exim.conf that start “domainlist local_domains” and “domainlist relay_to_domains.” Under Vesta, they’ll look something like this:

domainlist local_domains = dsearch;/etc/exim/domains/
domainlist relay_to_domains = dsearch;/etc/exim/domains/

Vesta stores each mail domain’s configuration in a directory named with the domain name, so it does a directory search (dsearch) and, if it finds a matching directory, it knows it can handle mail for that domain. We’ll add a second option: another lsearch of the domain_aliases file, like so:

domainlist local_domains = dsearch;/etc/exim/domains/:lsearch;/etc/exim/domain_aliases
domainlist relay_to_domains = dsearch;/etc/exim/domains/:lsearch;/etc/exim/domain_aliases

Finally, reload Exim’s configuration and your secondary domains should start handling mail:

service exim reload
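
To verify that routing is actually happening, Exim’s address-testing mode is useful. The address below is made up; substitute a real local part on one of your secondary domains:

exim -bt info@ourdomain.net

The output should show the address being redirected by the domain_aliases router to the corresponding address on the primary domain.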

Again, as I mentioned above, I am far from being an expert at configuring Exim, so I could be missing something completely obvious. Still, this seems like it would be a pretty normal thing to do and unless my Google-fu has failed me, it seems not many others are doing it. Until I learn of a better way, this is how I’m configuring my domains. If you know of a better way to handle multiple domains, or you find this helpful, please comment. Exim, like most popular daemons, is a Swiss Army Knife of possibilities. Getting the most from it takes time, patience, and the generosity of those willing to share their struggles.

Vesta CP

I’ve been running most of my personal sites from a VPS running the Interworx control panel on CentOS for the past several years. After a long, stable run, my operating system reached end of life and it was time to upgrade.

I’m pretty comfortable with the Linux command line, I have a ton of experience configuring Apache, and I’m pretty good at keeping MySQL up and running, so I could probably get away without a control panel. On the other hand, I have very little experience with mail servers and I like the convenience of a point-and-click interface to handle most of my administrative needs. Happy with Interworx, I considered buying a new Interworx license for the new server, but I wanted to shop the competition a bit as well. That’s when I discovered that there are quite a few open source control panels available.

I started downloading some of the open source panels and installed them on VMWare virtual machines on my laptop to try them out. Most, I found, either had questionable histories in terms of security, didn’t seem to be in active development, or had horrible user interfaces. Others, like ISPConfig, took perceived security a bit too far, forcing PHP into such a small sandbox that much of my code, which follows industry best practices in terms of structure and security, would not work without extensive modification.

I finally tried and settled on Vesta, a relatively new PHP-based control panel launched by a Russian development team in early 2013. Vesta offers the essential features for hosting, like web, database, email, and DNS, without a lot of unnecessary cruft. Installation was easy and while I do think the UI and UX could use a little work, Vesta’s web front-end is cleanly designed and easy to use.

Installation

Vesta is designed to be installed on a “bare metal” server with just an operating system installed. While I chose RedHat-clone CentOS for my server, Vesta will also run on Debian-based systems like Ubuntu. Setup is as easy as downloading a script and running it on the command line; the script then uses the OS’s package manager to download and install all of its components, making it easy to keep things up to date as the OS releases updated packages.
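
I won’t reproduce the exact commands here since they may change (check vestacp.com for the current instructions), but when I installed it the process looked roughly like this:

curl -O http://vestacp.com/pub/vst-install.sh
bash vst-install.sh

The script asks a few questions, such as the hostname and admin email, and then installs and configures everything else on its own.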

Accounts

Vesta accounts are standard Linux accounts which are created in /home. Each account can host multiple websites and, I believe, accounts can be configured as resellers, so that they can create new accounts as well. Several quotas are enforced, including disk space, number of sites, and number of email accounts per site. You can also manage the account’s shell from the web interface, including a “nologin” option if you want to disable command line access to the account (and allow FTP access only).

Vesta stores all of the config files used by an account in the account’s home directory, symlinking them to the locations where their respective applications expect to see them. This makes backing up a site a snap. In fact, Vesta backs up every site automatically each night, keeping the last three backups available as a downloadable tar file.

When adding a site, a DNS zone is created automatically. You can also specify any number of alias domains, and these will be set up automatically as well.

I did not see a way to “jail” (chroot) an account, though this isn’t important to me as I am the only user with direct access to the server.

One thing that bugged me a bit was that Vesta’s quota settings did not seem to allow for “unlimited” options. In Interworx I created an “unlimited package” to which I subscribe all of my sites, effectively disabling individual site quotas. In Vesta I simply set all of my limits extremely high, to the point that I should never exceed any of them.

Web

Vesta installs both Nginx and Apache web servers, and is the only control panel I’ve seen that does this. Nginx, known for being extremely fast, sits at the front end and handles most static content while proxying anything it can’t handle to Apache. This helps speed up response times for things like images and CSS files while still allowing most off-the-shelf web software, like WordPress and Drupal, to run without modification, since most of these packages are preconfigured to run on Apache.

There are several options available for Nginx, including the ability to use it as a reverse proxy cache. Caching requires a lot of memory, however, so I wouldn’t recommend trying this on a VPS.

Early on I was having an issue where Apache would keep consuming more and more memory until the server ran out of RAM, at which point MySQL would be shut down. Since MySQL is a requirement of most of my sites, this wasn’t acceptable, so I started trying to tune Apache to avoid it. After several failed attempts (I never did find a definite cause), I switched Apache to use Worker MPM (i.e. threaded processes) and the memory footprint of the server immediately dropped to almost zero. While Worker mode is not compatible with several Apache modules, experience has taught me that it can be a huge help in improving server performance. In fact, it probably negates the benefits of running Nginx now, but I don’t feel like trying to configure Nginx out of the picture.
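
For anyone wanting to try the same thing on CentOS 6, no rebuild is needed: the stock httpd package ships a worker binary that you enable in /etc/sysconfig/httpd and then restart Apache (paths here are CentOS-specific; other distributions do this differently):

# in /etc/sysconfig/httpd, uncomment this line:
HTTPD=/usr/sbin/httpd.worker

# then restart Apache:
service httpd restart

Just remember that mod_php isn’t thread-safe, so this only works cleanly when PHP runs through CGI, FastCGI, or PHP-FPM rather than as an Apache module.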

Database

Vesta installs MySQL by default. It looks like a patch for adding PostgreSQL support is also available, but I haven’t tried it.

PHP

Vesta installs the OS’s PHP packages. By default, it runs PHP under mod_ruid which, by my understanding, is basically a variation of mod_php that runs scripts under the owner’s UID. This can be changed, on a per-site basis, to several other options including straight CGI and PHP-FPM.

I’ve decided to use mod_fcgid because this is what I have the most experience using and because it works well in the enterprise hosting environment I oversee at work. I did have to tweak the default settings a bit to get the best performance for my server’s resources, but I kind of expect that every server is going to need some degree of customization to balance available resources to desired performance level. With the tweaks in place, my PHP sites load quickly with minimal memory overhead.
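
The settings I mean are the standard mod_fcgid directives; something along these lines (the numbers are illustrative, not a recommendation) goes in Apache’s fcgid configuration:

# illustrative mod_fcgid limits -- tune to your own RAM and traffic
FcgidMaxProcesses          10
FcgidMaxProcessesPerClass   4
FcgidIdleTimeout           60
FcgidMaxRequestsPerProcess 500

Lower process caps keep memory use predictable on a small VPS at the cost of queueing requests under load, so it’s worth watching traffic before and after any change.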

CentOS 6 ships with PHP 5.3. After the installation I decided to upgrade to PHP 5.5 using the Webtatic repo. I basically did a “yum erase” on each of the installed “php” packages and installed the equivalent “php55w” package in its place (note that they are not a 1:1 match). So far I have not seen a single issue with this, though YMMV.
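
The general shape of the swap, with the Webtatic repository already enabled, looked something like this (the package lists are illustrative; compare your rpm -qa output against Webtatic’s php55w package names rather than trusting a one-to-one mapping):

# see which PHP packages are currently installed
rpm -qa | grep ^php

# remove the stock 5.3 packages...
yum erase php php-common php-cli php-mysql

# ...then install the php55w equivalents
yum install php55w php55w-common php55w-cli php55w-mysql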

DNS

Vesta installs BIND 9 as its DNS server. The Vesta web interface makes it easy to configure manual DNS zones for domains not hosted on the server or for adding additional records to hosted domains.

With my old host, I had three IP addresses, one for the server and two to use as “separate” DNS servers. Without getting into the reasons why this is a bad idea, this is how I ran my DNS for the past several years, though it did burn me a couple times. My new host only allows one IP per server, so for a backup DNS I installed PowerDNS on another VPS I have with another provider in another datacenter. With PowerDNS’s MySQL storage engine and the concept of “supermasters” plus a tiny config change to Vesta’s main BIND configuration, the secondary server is updated automatically every time I add or change a domain in Vesta, making my NS2 server truly “set and forget.”
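
For anyone wanting to replicate this, the moving parts are roughly these: BIND on the Vesta server has to notify the secondary and allow zone transfers to it, PowerDNS has to be allowed to act as a slave, and the Vesta server has to be registered as a supermaster in PowerDNS’s database. With placeholder IPs and hostnames, the sketch looks something like this:

# in named.conf on the Vesta server (options block):
#   also-notify { 203.0.113.53; };
#   allow-transfer { 203.0.113.53; };

# in pdns.conf on the secondary:
#   slave=yes

# register the Vesta server as a supermaster in the PowerDNS database:
mysql -u pdns -p pdns -e "INSERT INTO supermasters (ip, nameserver, account) VALUES ('198.51.100.10', 'ns2.example.com', 'vesta');"

Once the supermaster row exists, PowerDNS creates and transfers any zone it receives a NOTIFY for from that IP, which is what makes the secondary “set and forget.”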

Mail

Vesta uses Exim as its MTA (SMTP) and Dovecot as its MDA (POP3 and IMAP).

Email accounts are configured in the Vesta web interface and can be set up with any number of aliases. Incoming messages can be forwarded elsewhere, with or without a copy being kept, and you can specify an auto-reply message to send when mail is received. RoundCube is installed for webmail.

Vesta will install SpamAssassin and ClamAV (clamd) automatically if a server has more than 3 GB of RAM. Mine does not, so I had to install them manually. SpamAssassin was not a problem, but on my first attempt at building the server, with 512 MB of RAM, I was not able to start clamd. After reconfiguring the VPS to have 1 GB of RAM, I was able to start clamd, but it consumed most of my available memory. At that point, I decided that I didn’t really need to virus scan my email on the server, so I disabled it. I enabled it again after seeing how little memory Apache was using after switching to Worker, and I’m now consistently using a bit less than 50% of my available memory when the server is at normal load. I could probably switch back to 512 MB, but I don’t plan to. Of the email I’ve received on the new server, only two or three spam messages have made it to my inbox.

What Vesta is missing is an easy way to create email forwarders that aren’t attached to an email account. One of the sites that I’m hosting makes extensive use of these. Fortunately I was able to locate where Exim stores aliases for the domains it manages and I added them all manually. I also tested to ensure that my manual edits would be safe when I make email changes in Vesta and so far they seem to be, but I’m being careful to keep a backup of that file, just in case.

Vesta also doesn’t include a mailing list manager. One of the sites I host relies heavily on mailing lists, currently with Mailman. I tried to get Mailman working with Vesta and thought I had a solution in place, but I ran into some complications when I started moving the lists over. To prevent delays in my migration, I created a subdomain on a cPanel server and used its built-in Mailman installation to manage the lists for now. I still plan to continue working on getting Mailman running and, who knows, I might submit my method to the Vesta team for future implementation if I’m successful.

Conclusion

While I’m not sure that I’d use Vesta as a control panel for hosting paying clients, as it still has some rough edges, I think it will meet my personal needs quite well. The product still has some bugs, but in the month or so since I’ve installed it, I’ve already seen several of them fixed. The development team seems to be focused on making a lightweight control panel that works well on small servers and VPSes, which is nice to see.

Vesta documentation is still somewhat lacking, consisting of mostly just an FAQ page right now. There is a user forum and a bug/feature request tracker, but being a Russian project, many of the posts are in Russian. Still, Vesta seems to be catching on, so it is hopefully only a matter of time before documentation improvements start to be made. Truth be told, I haven’t had much of a need for more documentation, but I’d suspect someone with less Linux administration experience might. The developers do offer paid support plans, but I have not purchased one.

My biggest gripe with Vesta is how it formats lists: lists of users, sites, domains, etc. are extremely verbose with all of the details of the list item presented, making the page difficult to scan or to find the links to administrative actions for a list item. Clicking to do simple, everyday actions, like modifying an email address, often takes several more clicks than seem necessary. I’d much rather see terser lists with clear calls to action, including the option to see more info about a list item when I need to. That said, this is a wart I can live with.

So far, after some tweaking, Vesta seems like a pretty good panel. It will be interesting to watch as it progresses over the next few years; I think it has a lot of potential.

M.T.A.

“Why’s it called a CharlieTicket?” I asked of the ticket I bought so that the troop could ride the “T” into Boston earlier this week, proving that while I may have grown up in Massachusetts, I am not a Bostonian.

Local TV Nightly Sign-Off

It’s hard to believe that, back when I was in high school (though I know that really was a long time ago now), local television stations didn’t broadcast 24 hours a day. Here’s a nightly sign-off clip from WOKR (now 13-WHAM) in Rochester, recorded in 1992, that I stumbled upon on YouTube. The recognizable voice of Don Alhart provides the voiceover.
