My Little Corner of the Net

I Finally Figured Out Why There’s an Ethernet Port on the Back of Some Cable Boxes

A pair of ActionTec ECB6200 MoCA 2.0 adapters

As I’m about to drop cable TV in an effort to start saving money on something I hardly use, I got curious about the Ethernet port on the back of my set-top boxes.  When we moved about four years ago, the tech doing the install in the new house looked at my existing box and said “wow, that’s old” and then set me up with two brand new, much smaller, modern-looking Arris boxes.  I noticed at the time that they had an Ethernet port on the back, and wondered about it, but I didn’t look into it.  I thought maybe the box was some kind of all-in-one unit that could double as a cable modem.

Recently, one of my mother-in-law’s cable boxes started to malfunction, so I took it home with me so that I could get it swapped out.  Her box was a little older, but while it was sitting on the half wall between my kitchen and family room, I noticed that it, too, had an Ethernet port.  Intrigued, I started doing some research.

It turns out that there is a technology called MoCA, short for Multimedia Over Coax Alliance. MoCA allows sending Ethernet signals over the coax cable in your home, similar to the way powerline Ethernet (i.e. Homeplug) works over electrical lines.  It looks like this is the reason for the Ethernet port on the box, at least for the Arris boxes, anyway, though it isn’t clearly documented in the product manual beyond a one-liner that says it supports MoCA.

To make it work, you add a MoCA adapter near your network switch.  Adapters are readily available on Amazon, with prices ranging anywhere from about $20 for a pair all the way up to over $200 apiece.  You plug the Ethernet end of the adapter into a port on your router and connect the coax end to your home’s cable TV line.  Then you connect another Ethernet cable between the cable box and the device you want to put on the network, like a smart TV or a streaming adapter, or even a computer.  From what I’ve read, you can expect maximum speeds of around 200Mbps—not super fast, but certainly decent enough to make streaming work in a WiFi dead zone.

Though I haven’t actually used it, MoCA is supposed to work fine alongside your regular cable service.  I’m now wondering if I might be able to squeeze even a bit more speed out of it if I disconnect the incoming line (which will soon only carry Internet service, with my modem already located where the service enters the house), and run it over the otherwise dead coax lines.

I’ve been considering running CAT-6 to my TVs to get better connections as I switch to streaming everything, but MoCA could make for a much easier installation, and probably with no noticeable difference in video quality.

Have you tried MoCA?  I’d love to hear your experience in the comments.

Waxing Nostalgic

I just watched a webinar on how to hold a conference online. Given the current state of affairs, with COVID-19 keeping us locked up in our homes, the planning committee for the Rochester Security Summit is rightly concerned that we might not be able to hold our traditional conference this October and we’re exploring virtual options.

During the presentation, the host noted that in the online conference that she had run, they had placed white rabbits around the virtual exhibit hall. If you clicked on these Easter eggs (or perhaps Easter bunnies), they linked you out to a website where you can use your mouse to slap some random guy in the face with an eel.

This got me thinking of another site, one that I first learned about from the crudely drawn posters hung along RIT’s Quarter Mile during my freshman year, Poke Alex in the Eye. (Incidentally, I once interviewed Alex, the creator and star—or perhaps victim is a better term—of the site, for a position we had available at one time, and I was surprised to learn that the site was still up so many years later. Sadly, it no longer seems to be.)

That, in turn, led me to think of two other online games I remember playing back in the mid-’90s. I believe they were both Shockwave games (yeah, that thing like Flash before Flash), and as such are probably long lost to the history of the Internet. I’m pretty sure that I linked to both of them on an early version of my personal “home page” while I was in school, but the earliest version of my site that the Wayback Machine has captured does not have them.

The first site was a whack-a-mole style game that had celebrities popping up for you to whack with your mouse. IIRC, they were all celebrities that had some form of black mark on them at the time for something they did. I also remember that the Queen of England would occasionally pop out of one of the holes and I think you’d lose points if you whacked her.

The other was a celebrity boxing game where you’d pick a celebrity and then go at them in the ring, first-person-shooter style. Again, I think the celebrities were all people we loved to hate at the time. This one may have been Celebrity Slugfest, which I’ve found a few references to on the web, but unsurprisingly the site itself appears to be long gone.

So, do you remember these old Internet classics? Do you have any other old favorites that you’d like to reminisce about? Please share in the comments; I’d love to hear your Internet nostalgia.

Jekyll for Drupal Users

For the past ten years or so, I’ve had various responsibilities over a web hosting environment that relies on Drupal to power hundreds of sites.  I was largely responsible for the selection of Drupal and it was definitely the right solution for us when we picked it, but over the years I’ve grown frustrated with it, mainly because of its complexity, its insanely granular templating system, and the fact that its proprietary database format (which holds content, configuration, and debugging logs, ugh!) makes it very difficult to migrate anything between a development and production environment without a full overwrite.

When I started redesigning the Tay House site for our centennial this past spring, I decided I was definitely not using Drupal.  But what would I use?  The site had been running in ModX Evolution (now Evolution CMS), a CMS I was excited about when I first launched the site years ago. Evolution is, however, an older product now, and I’ve kind of lost interest in it as time has gone by.  I wanted to see what was out there, and my search brought me to Jekyll.

Jekyll is a static site builder.  Rather than relying on a server-side technology to build the site as it is accessed, a Jekyll site consists of a series of files containing the structure and content of the site which are compiled into a set of stand-alone HTML files to be deployed to a server.  Since no code runs on the server, the site will be both very fast and very secure.  I liked it because it meant that I could deploy the site and essentially forget about it.  While there are some dynamic aspects of the Tay House site, the site is largely static and the content doesn’t change that often.  Though I had set up several sites with ModX back in the day, Tay House was the only one I had that still used it, and since I wasn’t working on the site all the time, it was easy to lose track of when it needed to be updated.

This post explores my initial Jekyll experience and compares it to my many years of experience with Drupal.  I also touch on some of the interesting solutions I’ve come up with to get around some of the shortcomings of not running an on-server CMS, though I’ll probably write some follow-up posts that get into them in more detail.

Modules and Plugins

In Drupal, everything is controlled by modules.  If you want to implement a feature, you install a module to do it.  The code behind the module controls everything from the way the feature is displayed on the front-end to how it’s configured and interacted with by site editors.  Drupal has a thorough system of hooks that allow modules to interact with each other and can influence or override just about any action.

Jekyll has a concept of plugins to allow sites to implement features that are not natively provided by Jekyll.  Plugins are very similar to Drupal modules, but since Jekyll does not have a web-based backend and does not run interactively, the plugins tend to be much less complex.  Also, since Jekyll provides a lot of flexibility over how sites are built in its core, there’s much less need to rely on plugins to extend Jekyll.  In my initial launch of the Tay House site, I only needed one plugin—to help me generate context-specific menus—though I’ve since added a couple more as I’ve continued to build out the site.

Content

Content in Drupal is stored as what Drupal calls “nodes.”  Typically a node is analogous to a page of the site, though sometimes nodes are used to store data that is used in other ways and is never accessed directly as a page.  Each node has a content type which defines which fields are available to that node and, therefore, what type of data can be stored.  If you’re a developer, a node is essentially an object and the content type is the class.

Jekyll has a looser data structure in terms of what fields are available to a piece of data, so there’s some degree of flexibility to how it can be used.  A page in Jekyll is typically written in Markdown (though HTML can also be used) and each page is stored in a separate file.  The file contains a YAML header, known as the front matter, which can hold variables specific to that page, followed by the page content, which is analogous to Drupal’s default content field.  The front matter can consist of any of several standard variables, such as title (the title of the page) and permalink (the URL of the page in the generated site), but it can also contain custom variables which can be accessed when the site is generated.  Custom variables do not need to be defined anywhere, so you can add as many page-specific variables as you’d like.
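As a sketch, a page file might look like the following (layout, title, and permalink are standard variables; hero_image is a hypothetical custom variable of my own invention):

```markdown
---
layout: page
title: Our History
permalink: /about/history/
hero_image: /images/hall.jpg
---

Page content, written in Markdown, goes here.
```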

One of Jekyll’s standard page variables is layout, which specifies the template that will be used to render the page.  Layouts can be used as a simple way to support Drupal-like content types by matching layout elements with expected front matter variables.

Jekyll also supports a concept called collections which can take the content type analogy a bit farther.  Collections are groups of related data which are stored in individual files in a specifically-named directory.  They can be set up to render as individual pages, though they don’t have to be–in some cases it makes more sense to access the data more like you would with a Drupal view.  When data is rendered into pages, however, collections give some specific advantages, such as the ability to apply a permalink template to all collection items, similar to Drupal’s Pathauto module.
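As a minimal sketch, a hypothetical “locations” collection would be declared in _config.yml like this, with the permalink template playing the Pathauto role:

```yaml
# _config.yml (hypothetical "locations" collection)
collections:
  locations:
    output: true                  # render each item as its own page
    permalink: /locations/:name/  # URL template applied to every item
```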

Blocks

Blocks in Drupal are used to show the secondary content on a page.  A block might be used to show a site’s navigation, a list of related pages, a Twitter feed, an advertisement, or pretty much anything else you might want to show on one or more pages of a site.  Blocks containing static HTML can be defined in Drupal’s UI, they can be created with views to show dynamically updated data based on the view results, or they can be implemented through a module, enabling almost any imaginable functionality.

Jekyll does not have the same concept as blocks, but because of its flexible templating system, it’s possible to implement something similar.  Typically, the functionality of a block would be implemented in an include, a sub-template that can be called from within a layout template.  This makes the “block” reusable, as it can be included in multiple layouts easily.  Another approach would be to create an overarching layout that contains all of the “block” content and then use a sub-layout for each “content type” that contains the page-specific layout, similar to Drupal’s onion-skin template model.
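As a minimal sketch (the file name and markup are my own invention), a reusable “block” implemented as an include might look like:

```liquid
{% comment %} _includes/recent-posts.html: a reusable "block" {% endcomment %}
<aside class="recent-posts">
  <h2>Recent Posts</h2>
  <ul>
    {% for post in site.posts limit: 5 %}
      <li><a href="{{ post.url }}">{{ post.title }}</a></li>
    {% endfor %}
  </ul>
</aside>
```

Any layout can then pull it in with a one-liner: {% include recent-posts.html %}.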

Theming

Drupal has a very elaborate, and very complicated, theme system.  Drupal uses an onion-skin approach to theming that starts at the meta-structure of the page, continues to the basic page layout, and then gets into individual elements of the page.  Default templates are specified in the modules that implement them, but they can be overridden by themes.  Further, generic templates can be overridden by more specific ones by category or specific element.  It’s a very powerful system, but it’s very difficult to understand, especially for novices.

Jekyll uses layouts, which are template files implemented in the Liquid layout language developed by Shopify.  I found Liquid very easy to work with since it is very similar to Smarty, which I’ve used for years with my PHP applications.

A Jekyll layout is a single file containing HTML with additional markup for inserting values from variables as well as some rudimentary logic such as loops and if statements.  A layout is specified in a page’s front matter via the layout variable. Layouts can access any of several variables including page, which contains the page’s front matter values, and site, for site-wide values including information about other pages and any auxiliary data loaded from files in the _data directory. The main content of the page is available in the content variable.

Jekyll pages can also contain their own Liquid logic and layouts can insert themselves into “parent” templates by specifying a layout variable in their own front matter.  When that happens, the rendered content of the child layout is passed in as the content to the parent layout.
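A minimal sketch of that nesting (file names are hypothetical): the child layout renders the page-specific markup, and its output is handed to the parent layout as content:

```liquid
---
layout: default
---
{% comment %} _layouts/page.html: nests inside _layouts/default.html {% endcomment %}
<article>
  <h1>{{ page.title }}</h1>
  {{ content }}
</article>
```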

Views

One of Drupal’s most powerful features is the Views module, a query-by-example interface for accessing the data stored in Drupal.  Views is commonly used to create lists of data, such as all of the nodes of a given content type tagged with a certain value.  For example, to create a landing page that automatically includes all of the pages within a certain section of a site, you might use a view displaying summaries of each page, and links to them.  Or, you could use a view to build a list of locations for a business, with each location being its own node, so that they could all be cleanly listed on a single page of a site, even if those nodes are never accessed individually.  This way, when you add or remove locations, the page is adjusted and sorted automatically, with no redesign necessary.

Jekyll doesn’t have the concept of views, but it does have multiple ways of getting data into the site, including data files and collections.  Once Jekyll has that data, it’s easy to iterate over it using Liquid logic in a page, layout, or include, mimicking the output of a view.

I’ve touched on collections already, so I won’t discuss them further, other than to say that a collection’s items can be accessed in Liquid through the site variable.  For example, if a collection of files is stored in a directory named _locations, the data becomes available in site.locations, which is an array of the parsed contents of each file in the directory.
Similarly, data can be passed to Jekyll in CSV, tab-separated, JSON, or YAML files, which are stored in the site’s _data directory.  When Jekyll runs, it parses each file and inserts the contents, as an array, into the site.data variable, with the key being the file’s name without the extension.  So a file named locations.csv would be accessed through site.data.locations.  For CSV and TSV files, the individual elements’ keys are derived from the first line of the file.
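For example, assuming a hypothetical _data/locations.csv whose header row is name,city, a page or include could render the whole set like this:

```liquid
<ul>
  {% for location in site.data.locations %}
    <li>{{ location.name }} ({{ location.city }})</li>
  {% endfor %}
</ul>
```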

Web-Based Tools

Drupal is completely web-based: from configuration, to content editing, to viewing the site, everything can be done through a web UI.  While this poses some management challenges, such as making it difficult to promote changes through a typical dev-test-prod workflow, it does make it easy for site owners with no web authoring experience to make changes to their sites.

Jekyll is file based and has no web backend.  Software engineers will like that Jekyll can be easily integrated into source control systems like git and can be deployed using CI/CD tools, but less technical users may struggle with the semi-complex file organization and the need to write Markdown or raw HTML without an editor.  Fortunately, there are a few tools to help with this, such as Forestry and NetlifyCMS.  These tools provide a web interface for editing content and automatically commit changes to git repositories without the user needing to know anything about git.  Forestry is a hosted tool and has a subscription cost associated with it, though a limited free version is available.  NetlifyCMS is open source and can be installed alongside the Jekyll site, but it isn’t as polished.  Neither is as tightly integrated or as customizable as the Drupal admin, but they do make decent solutions for content editing.

Feeds

Drupal has a pretty elaborate feeds system for importing data in various formats.  It can be used for everything from creating nodes from data in a CSV file to aggregating news from another site’s RSS feed, to populating a dropdown in a form with options from a JSON API.  Imported data is stored in Drupal entities, most often as nodes though it can be stored in any entity type, such as taxonomy terms.  The data is linked back to the original feed via a unique ID so that updates can be automatically applied, and there are many options available for how and when to expire old content.

Like many things Drupal, the feeds system is powerful when you need it, but it can be overkill for simple tasks.  Want to display the latest headlines or today’s events in a block on the homepage?  Create a feed importer, import each item as its own node, and then create a view to show those locally stored nodes.  Then set the feed’s deletion policies so that those nodes get deleted when they drop out of the feed, or else you end up with a lot of cruft in your database.

With Jekyll, I was able to do something similar with two plugins.  jekyll_get enables the import of JSON data from a URL.  The URL gets called early in the site’s build process, the feed is parsed, and the data it contains gets added to site.data under a key you specify in the site’s _config.yml, much as if the data had been dropped into a file in the _data directory.  From there you can use it throughout the site in Liquid markup.  Since the feed is pulled each time the site is rendered, there’s no need to worry about stale data being left behind, though it’s more difficult to collect old data if the feed is limited to only the newest content.
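As I understand the plugin, the configuration looks something like this (the endpoint URL and key name are placeholders of my own):

```yaml
# _config.yml
jekyll_get:
  - data: events                                 # exposed as site.data.events
    json: 'https://example.com/api/events.json'  # fetched at build time
```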

Data, whether from a file or an API, is not rendered into pages but can be easily iterated over to create something analogous to a Drupal view.  Sometimes, however, you may want to create individual pages.  For example, I want to pull events out of an external calendar but display the details of each event as a page on the site.  For this I found the data_page_generator plugin.  This plugin lets you specify a data set, a layout (which the plugin refers to as a template), some filters, and some details about how to name the files it generates and, when the site is built, you’ll end up with a set of pages containing data from the data set.  Again, since the data is reprocessed each time the site is built, if a particular row of data is removed, the page containing that data will also be removed.
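Its configuration, as best I can summarize it, looks roughly like this (the data set, layout, field, and directory names are placeholders):

```yaml
# _config.yml
page_gen:
  - data: events     # the data set to iterate over
    template: event  # layout used to render each page
    name: id         # field used to name each generated file
    dir: events      # output directory for the generated pages
```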

Dynamic Content

Drupal is built in PHP and pages are built on the fly (unless, of course, they are cached), so it’s easy to add dynamic content via modules, template files (ugh!), or through PHP code embedded into blocks or nodes (double ugh!).

With Jekyll, being a static site builder, you’d think dynamic content would be out of the question, but it is actually possible with a little creativity.  For example, I wanted a dynamic feedback form for the Tay House site, so I wrote it in PHP and added a <?php include('/blah/blah/contact-form.php'); ?> into my Jekyll page where I wanted the form to appear.  Then I just set the permalink of the generated page to have a .php extension and now I have a dynamic page on my static site.

I’m looking to take this concept a bit farther as I continue to develop the site: some sections will be locked down via version control, others can be automatically updated (but still statically built) by having a separate headless CMS trigger the build process, and still others can pull late-breaking information in from an API on page load.

Some Jekyll purists warn against embedding code, saying it defeats the purpose of a static site generator and the security it provides.  Many Jekyll sites, I’ve noticed, rely on JavaScript-embeddable services, like Disqus, to handle add-on features like blog comments, but I’m an experienced developer with a background in web security, and I’d much rather trust code I’ve written over some black-box service that I have no control over.

My Project

As I was starting to redo the Tay House site, I was quite surprised at how well Jekyll was able to do everything I wanted without much effort.  So far I’ve rebuilt all of the site’s general content as Jekyll pages and can render them using a set of layouts that I built from scratch in a matter of hours.  (For the record, I’ve never been able to build a Drupal theme completely from scratch, and I often spend as long as it took me to build the Jekyll templates, if not longer, just trying to disable crap I don’t want from Drupal’s starter themes.)  I probably had the first pass at the site built and functional, but without a lot of content, in the time I’d otherwise spend just trying to figure out what modules I’d need in Drupal.

The site is currently managed in my own git repository, which I host using Gitea.  I may end up moving it to GitHub, however, since I’m not sure I’ll be able to get NetlifyCMS or Forestry to work with Gitea and I hope to get one of them working in the near future.

Once I had the basic site done, it was time for some semi-dynamic content.  While most of the site doesn’t change often, some things, namely the announcements and calendar, need to be edited more frequently.  I figured that putting these in a headless CMS would make it easier for me to let other people keep on top of them.  I selected Directus for this, since I like how it uses normalized SQL while still retaining revision history.

Now that I’ve proven that Jekyll will work for my use case, I’ve started to come up with my ideal configuration.  I feed the data from Directus into the site at build time using its JSON API and jekyll_get and create an individual page for each item with data_page_generator.

For now, I have to build the site manually each time I make a change and then deploy it manually with rsync.  I’m looking to automate this with some sort of CI/CD workflow.  Ideally, I’ll get a web-based editor set up that deploys changes to a devel branch of the git repo, allowing me to give other people an easy way to manage the site while retaining editorial control by managing the merges to prod.

I also hope to configure Directus to rebuild the prod site automatically whenever a calendar event or news item is added or changed, as I want those changes to become available as soon as possible.  Since I don’t store the rendered site in git, I can do this without having to worry about merge conflicts in the repository.  I’d also like to do automatic nightly rebuilds so that I can, say, show a list of the next five events on the homepage and have them drop off automatically as they pass.

There’s a pretty good community of Jekyll users out there and, whenever I’ve gotten stuck, I’ve been able to find answers to most of my questions online.  Now that I’ve gotten more used to using Jekyll, I’m starting to push the envelope a bit more, so I’ll start posting some tutorials with some of the cool stuff I’m doing soonish.

Replacing Xmarks with xBrowserSync

I started using Xmarks to sync my bookmarks between multiple browsers and computers so long ago that it may have still been called Foxmarks when I started using it.  While I had a handful of problems with it from time to time–mainly with syncs failing and leaving my bookmarks corrupted in a given browser–the tool worked very well for me, so I was quite disappointed when LastPass announced that they were discontinuing the service back in May.

Today I think I may have found my replacement in xBrowserSync.  xBrowserSync is an open-source, anonymous, encrypted, and decentralized bookmark syncing tool that works a lot like Xmarks used to. xBrowserSync doesn’t do everything that Xmarks did, but it syncs bookmarks (which is the only Xmarks feature I ever used), works in Chrome and Firefox, and treats bookmarks as bookmarks (as opposed to making you have to access them through a website), so it meets my needs.  There’s also Android support that I might check out.  I added a few bookmarks in one browser after installing xBrowserSync to one of my machines today and confirmed that they synced to other browsers, so it seems to work.

xBrowserSync is completely anonymous and doesn’t require any signup to use.  Instead, you simply provide the extension with an “encryption password” that is used to create an encryption key, used to encrypt your bookmark collection before it is sent to the server.  When you set up your first browser, the extension generates a unique “sync ID” that identifies your bookmark collection.  On subsequent browser setups, you simply provide these two pieces of information and xBrowserSync retrieves and decrypts your bookmarks.  Encryption and decryption is done in the browser via the cryptography API, and your password and encryption key never leave your browser.

There are currently three public xBrowserSync service providers to choose from, which, combined with the fact that the code is all open source, helps alleviate concerns that this service, too, may go the way of Xmarks.  If the developer decides to no longer support the project, users will just need to move their bookmark collections to another service provider.  Switching is easily done via the extension’s settings.  The server code is also available on GitHub, so it’s also possible to run your own server if you are truly paranoid.

The only “complaint” I have about xBrowserSync at this point, now that I’ve installed it on several browsers on Windows, Mac, and Linux machines, is that when pulling down the bookmark library for the first time in a new browser, xBrowserSync wipes all of your existing bookmarks and replaces them with the copy from the server.  This wasn’t a big deal for me as all of my bookmarks were pretty much in sync across systems already, but a first-time user, trying to merge work and home bookmarks for example, might be in for quite a surprise when one of the two collections gets wiped out.  To xBrowserSync’s credit, though, the extension does give ample warnings about this.

Complaints aside, xBrowserSync seems to do exactly what it says it will.  If you’re still wondering what to do now that Xmarks is gone, give xBrowserSync a look.

Getting Around Spectrum’s Email Blocks

Our local cable TV and broadband provider, Spectrum, in their infinite wisdom, appears to have blocked the entire IP range owned by Digital Ocean (and possibly other similar hosting providers) from sending mail to their email users. I discovered this a month or two ago when mail from my scout troop’s email addresses just stopped going to any of our families that use Spectrum addresses. Of course, this is just speculation, because Spectrum has not acknowledged any of my many, many emails requesting unblocking, and trying to get help from customer service is a painful experience of being bounced between help desks of techs who are trained to handle front-end issues and who have no idea what to do with back-end questions. I suspect the block is more widespread than just Digital Ocean, too, as another email account I use, this one hosted by a much smaller hosting company, also seems no longer to be able to communicate with my Spectrum account.

Since I can’t seem to get anywhere with Spectrum, I started thinking about alternative solutions. The mail server in question runs on a Digital Ocean droplet running VestaCP, which uses Exim 4 as its MTA. A little research into Exim configurations showed that I could set up a “smart host,” basically a process for relaying all outgoing mail matching certain criteria to an external mail server. “Perfect,” I thought, “I’ll just set up a new Spectrum email and relay all of the mail bound for Spectrum users through that!” For the most part, that worked—the only issue was authentication. Exim already had two authentication schemes set up to authenticate email clients when they’d try to send email, and these conflicted with the configuration I needed for Exim to authenticate to Spectrum’s SMTP server.

Not to be deterred, I quickly had a new idea. Exim supports a construct called a pipe, based on the Unix construct of the same name, where the contents of an email message are passed on to an external program for further processing. If I could just find some utility that could take the message contents from Exim and pass them on to an authenticated SMTP session with Spectrum, I’d be all set. After a couple of hours of comparing Linux command-line tools for sending mail, I came up with nothing that would do exactly what I wanted. Maybe I’d have to write something myself.

As I was researching SMTP client libraries for PHP, hoping not to have to write my own, I came across a recommendation to reconfigure PHP to use msmtp, instead of sendmail, with the mail() function. “msmtp—what’s that?” Turns out that it’s an SMTP client that implements the sendmail command-line interface. Unlike Exim, however, which also implements the sendmail interface, msmtp won’t try to route mail itself; it will only send it through one of its preconfigured SMTP servers. In other words, exactly what I needed.

I should note that I happen to be a Spectrum customer, so I was able to create a new email account with Spectrum to do this. If you don’t have that luxury, you can probably use a different provider, such as Gmail (there are numerous examples of using Gmail with msmtp available online). Note, though, that doing this could cause problems for you with SPF, DKIM, and DMARC validations, so plan accordingly. When using a Spectrum account, since the connection is authenticated, Spectrum seems to accept the message without doing any further verifications.

Installing msmtp on my CentOS server was easy since there’s already a package available. The package is in the epel repo, however, so if you don’t have epel configured already, you’ll need to do that first. You’ll also need to be root to do this.
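If epel isn’t configured yet, it can typically be enabled first with the epel-release package (this is the standard approach on CentOS; adjust for your distribution):

```shell
# Enable the EPEL repository on CentOS
yum install -y epel-release
```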

yum install -y msmtp

Thinking about how best to set it up, I decided to create a new service account to handle mail being passed to msmtp. While a global configuration would not have required a new account, it would have meant making a file containing the password for the mail relay readable to everyone on the server. In reality, I’m the only user on the server, but still, that’s not a very good security decision. With the separate user account, I can better protect the msmtp configuration.

I created the user account with the following command. The -r flag sets it up as a service account with no password, while -m and -d set up a home directory for the account, which is not normally done for service accounts. I’m using this home directory to store the configuration and scripts needed to make this thing work.

useradd -rmd /usr/lib/msmtp msmtp

msmtp wants to write to a log file, so I’ve also set up a directory for that. I’m putting it into /var/log so that it would be easy to find when I need it and also to be close to the logs for other processes, such as Exim, that I may need to consult concurrently when doing email debugging.

mkdir /var/log/msmtp
chmod 750 /var/log/msmtp
chown msmtp.msmtp /var/log/msmtp

Next, I switch to the newly created msmtp account to set up my configuration.

su - msmtp

Here I create the configuration file. When it starts up, msmtp looks for a file named .msmtprc in the user’s home directory.

vi .msmtprc

The contents of the file should look like the following sample. Note that it is possible to set up multiple accounts in the file; just start each one with a new “account” line and a unique name. The indentation in the file is my own—I found it easier to follow what was what when I used indentation, but it is not required.

defaults
    tls on
    tls_starttls on
    tls_trust_file /etc/ssl/certs/ca-bundle.crt
    logfile /var/log/msmtp/msmtp.log

account spectrum
    host SMTP_SERVER
    port 587
    auth on
    user EMAIL_ADDRESS
    password PASSWORD
    from EMAIL_ADDRESS

OK…so what’s going on here? The lines following the “defaults” line apply to all accounts configured in the file. Here I’m saying that I want to use TLS encryption on all connections, and I provide the path to the CentOS default certificate store that msmtp will use to validate the certificate that servers present when a connection is made. I also define the path to the log file that msmtp should use to record its activities.

The next section defines an account. I’ve given the account the name “spectrum,” which will be used when calling msmtp a bit later. The rest is fairly straightforward: SMTP server host and port, a directive to use an authenticated connection along with the credentials to use, and the default “from” address for the so-called SMTP envelope, the initial communication between the client and server before the message is actually sent. Obviously, you should replace EMAIL_ADDRESS and PASSWORD with your actual credentials.

For security reasons, msmtp checks that the .msmtprc file is only readable and writable by the owner and won’t let you use the program if it is not.

chmod 600 .msmtprc

msmtp is now set up and ready to use. We can test it by creating a sample email message in a file and trying to send it.

vi email.txt

The contents of this file should look something like this:

From: EMAIL_ADDRESS
To: RECIPIENT_ADDRESS
Subject: This is a test

Testing mail from msmtp.

Note that the headers in this file, such as the “From” and “To” email addresses, only reflect what is displayed in the recipient’s email client when they receive the message. The actual recipients are specified in the SMTP Envelope, which is generated from parameters passed to msmtp on the command line or specified in the .msmtprc file.
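To make the envelope/header distinction concrete, here is a schematic SMTP exchange (server responses omitted, addresses are placeholders). The MAIL FROM and RCPT TO commands form the envelope that determines where the message actually goes; the From:, To:, and Subject: lines travel inside the DATA section as mere display headers.

    EHLO client.example.com
    MAIL FROM:<EMAIL_ADDRESS>        <- envelope sender (from .msmtprc)
    RCPT TO:<RECIPIENT_ADDRESS>      <- envelope recipient (command line)
    DATA
    From: EMAIL_ADDRESS              <- headers shown in the mail client
    To: RECIPIENT_ADDRESS
    Subject: This is a test

    Testing mail from msmtp.
    .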

To test msmtp, run the following, replacing EMAIL_ADDRESS with the email address of your desired recipient.

cat email.txt | msmtp -a spectrum EMAIL_ADDRESS

msmtp won’t show any output if it’s successful, but you can check the log file to make sure your message went through.

tail /var/log/msmtp/msmtp.log

If it worked, you should see something like this:

Nov 29 10:34:03 tls=on auth=on user=YOUR_EMAIL from=YOUR_EMAIL 
recipients=RECIPIENT_EMAILS mailsize=7959 smtpstatus=250 smtpmsg='250 2.0.0 MESSAGE_ID 
mail accepted for delivery' exitcode=EX_OK

Since msmtp emulates sendmail, I had figured that I’d be able to simply include the call to msmtp in my Exim transport, but I found that Exim includes an extra newline at the beginning of the message that causes the email headers to get pushed into the message body. To get around this, I use a short shell script that uses the read command to strip off the extra line.

vi route-spectrum

Add the following to this file:

#!/bin/sh

CONFIG_FILE=/usr/lib/msmtp/.msmtprc
ACCOUNT=spectrum

# strip leading newline from message passed from exim
read line

# pass remainder of stdin (via cat) to msmtp to send to remote MTA
# additional arguments from Exim are passed in $@
cat | msmtp -C "$CONFIG_FILE" -a "$ACCOUNT" "$@"

I’m not sure that Exim creates a full shell environment for the script when it runs, so I added the full path to the .msmtprc file I wanted to use on the command line, via the -C option. Be sure to adjust the $ACCOUNT variable to reflect the name you gave to the server configuration in the .msmtprc file.

Since it’s a script, it needs to be executable.

chmod 755 route-spectrum

Now I’m done working in the msmtp account, so let’s go back to the root account.

exit

Now I need to build the Exim configuration. My server uses a single file for the Exim config, so this tutorial reflects that. YMMV.

vi /etc/exim/exim.conf

In the section that starts with begin routers I add the following. This tells Exim to give special attention to messages bound for a particular domain (replace DOMAIN with the domain you want to route; a colon-separated list like “DOMAIN1 : DOMAIN2” would be more inclusive).

spectrum:
   driver = accept
   domains = DOMAIN
   transport = spectrum_smtp

This basically tells Exim that when it encounters a message bound for someone on that domain, it should process the message using the spectrum_smtp transport, which I’ll add next.

In the section that starts begin transports, I add the block below. This is the workhorse of the process, passing the email message on to the script we created above so that it can be handed off to msmtp.

spectrum_smtp:
  driver = pipe
  command = "/usr/lib/msmtp/route-spectrum $pipe_addresses"
  user = msmtp
  batch_max = 10

The user directive tells Exim to run the command as the msmtp user we set up earlier. This ensures that the script has access to the server configuration in the .msmtprc file that we took care to protect because it contains our password. The batch_max directive tells Exim that it can process up to 10 recipients of the same message with one call to the script; otherwise it would process each one with a separate connection to the Spectrum server. I’m not sure what the perfect number to use here is, but 10 seemed decent for my needs. The list of email addresses to be processed is placed into the $pipe_addresses variable and passed as an argument to the script.
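As a rough illustration of what batch_max buys (hypothetical numbers, not anything Exim actually runs): with batches of 10, a message addressed to 25 recipients on the routed domain needs only 3 invocations of the script, and therefore 3 connections to the Spectrum server, instead of 25.

```shell
#!/bin/sh
# Back-of-the-envelope sketch: number of script invocations is
# ceil(recipients / batch_max).
batch_max=10
recipients=25
invocations=$(( (recipients + batch_max - 1) / batch_max ))
echo "invocations needed: $invocations"   # prints 3, not 25
```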

Since I changed the configuration, I need to restart Exim.

service exim restart

Now I can go into my email client and send a message to a Spectrum user. Checking the Exim main log (/var/log/exim/main.log) and msmtp log (/var/log/msmtp/msmtp.log) will confirm that it was delivered correctly.

The last thing that I want to do is set up log rotation for the msmtp log file. This will start a new log file each week, compressing the old one to save space on the server. To do this, create a new file in the logrotate.d directory:

vi /etc/logrotate.d/msmtp

And add the following to it:

/var/log/msmtp/msmtp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    create 640 msmtp msmtp
}

That’s about it. This process isn’t the most sustainable, but it’s a decent workaround until Spectrum realizes that legitimate small businesses and non-profits do run mail servers on bulk hosting and they shouldn’t treat us all as spammers.

There is a flaw in this configuration, but it shouldn’t affect anyone too often: if, for some reason, msmtp is not able to connect to the remote mail server, Exim will view the failure as a hard fail and the message will be dropped. While I can work around this, I need to do a little more research on the exit status codes returned by msmtp, which I haven’t yet done.
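For what it’s worth, the fix will probably look something like the sketch below (untested against msmtp’s real exit codes): have the wrapper translate a failed msmtp run into exit code 75 (EX_TEMPFAIL), which Exim’s pipe transport treats as a temporary error (see the transport’s temp_errors option) and retries rather than dropping the message.

```shell
#!/bin/sh
# Hypothetical variant of the route-spectrum logic: any failure of
# the delivery command is converted to exit code 75 (EX_TEMPFAIL)
# so Exim retries the delivery later instead of hard-failing it.
deliver_or_tempfail() {
    # "$@" is the delivery command; in the real script it would be
    # msmtp -C "$CONFIG_FILE" -a "$ACCOUNT" followed by recipients
    "$@" && return 0
    return 75
}

deliver_or_tempfail true    # stand-in for a successful msmtp run
echo "success exit: $?"     # prints 0
deliver_or_tempfail false   # stand-in for a failed msmtp run
echo "failure exit: $?"     # prints 75
```

A refinement would be to pass through only the msmtp exit codes that indicate transient problems (connection failures, greylisting) and let permanent errors fail hard, but that requires the exit-code research mentioned above.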