My Little Corner of the Net

Jekyll for Drupal Users

For the past ten years or so, I’ve had various responsibilities for a web hosting environment that relies on Drupal to power hundreds of sites.  I was largely responsible for the selection of Drupal, and it was definitely the right solution for us when we picked it, but over the years I’ve grown frustrated with it, mainly because of its complexity, its insanely granular templating system, and the fact that its proprietary database format (which holds content, configuration, and debugging logs, ugh!) makes it very difficult to migrate anything between a development and production environment without a full overwrite.

When I started redesigning the Tay House site for our centennial this past spring, I decided I was definitely not using Drupal.  But what would I use?  The site had been running in ModX Evolution (now Evolution CMS), a CMS I was excited about when I first launched the site years ago. Evolution is, however, an older product now, and I’ve kind of lost interest in it as time has gone by.  I wanted to see what was out there, and my search brought me to Jekyll.

Jekyll is a static site builder.  Rather than relying on a server-side technology to build the site as it is accessed, a Jekyll site consists of a series of files containing the structure and content of the site which are compiled into a set of stand-alone HTML files to be deployed to a server.  Since no code runs on the server, the site will be both very fast and very secure.  I liked it because it meant that I could deploy the site and essentially forget about it.  While there are some dynamic aspects of the Tay House site, the site is largely static and the content doesn’t change that often.  Though I had set up several sites with ModX back in the day, Tay House was the only one I had that still used it, and since I wasn’t working on the site all the time, it was easy to lose track of when it needed to be updated.
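
If you want to see the compile step for yourself, here’s a minimal sketch of a first build, assuming Ruby and RubyGems are already installed:

gem install jekyll bundler
jekyll new demo-site
cd demo-site
bundle exec jekyll build    # compiles the source into static HTML in _site/

The _site directory is the finished product; copy it to any web server and you’re done.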

This post explores my initial Jekyll experience and compares it to my many years of experience with Drupal.  I also touch on some of the interesting solutions I’ve come up with to get around some of the shortcomings of not running an on-server CMS, though I’ll probably write some follow-up posts that get into them in more detail.

Modules

In Drupal, everything is controlled by modules.  If you want to implement a feature, you install a module to do it.  The code behind the module controls everything from the way the feature is displayed on the front-end to how it’s configured and interacted with by site editors.  Drupal has a thorough system of hooks that allow modules to interact with each other and can influence or override just about any action.

Jekyll has a concept of plugins to allow sites to implement features that are not natively provided by Jekyll.  Plugins are very similar to Drupal modules, but since Jekyll does not have a web-based backend and does not run interactively, the plugins tend to be much less complex.  Also, since Jekyll provides a lot of flexibility over how sites are built in its core, there’s much less need to rely on plugins to extend Jekyll.  In my initial launch of the Tay House site, I only needed one plugin (to help me generate context-specific menus), though I’ve since added a couple more as I’ve continued to build out the site.

Nodes

Content in Drupal is stored as what Drupal calls “nodes.”  Typically a node is analogous to a page of the site, though sometimes nodes are used to store data that is used in other ways and is never accessed directly as a page.  Each node has a content type, which defines which fields are available to that node and, therefore, what type of data can be stored.  If you’re a developer, a node is essentially an object and the content type is the class.

Jekyll has a looser data structure in terms of what fields are available to a piece of data, so there’s some degree of flexibility to how it can be used.  A page in Jekyll is typically written in Markdown (though HTML can also be used) and each page is stored in a separate file.  The file contains a YAML header, known as the front matter, which can hold variables specific to that page, followed by the page content, which is analogous to Drupal’s default content field.  The front matter can consist of any of several standard variables, such as title (the title of the page) and permalink (the URL of the page in the generated site), but it can also contain custom variables which can be accessed when the site is generated.  Custom variables do not need to be defined anywhere, so you can add as many page-specific variables as you’d like.

One of Jekyll’s standard page variables is layout, which specifies the template that will be used to render the page.  Layouts can be used as a simple way to support Drupal-like content types by matching layout elements with expected front matter variables.
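
As a concrete sketch, here’s what a simple page file might look like.  The title, permalink, and layout variables are standard; sponsor is a made-up custom variable for illustration:

---
title: About Us
permalink: /about/
layout: page                 # render with _layouts/page.html
sponsor: Example Sponsor     # custom variable, available as page.sponsor
---

This is the page content, written in **Markdown**.  The layout (or the
page itself) can read the custom variable as {{ page.sponsor }}.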

Jekyll also supports a concept called collections, which can take the content type analogy a bit farther.  Collections are groups of related data stored in individual files in a specifically-named directory.  They can be set up to render as individual pages, though they don’t have to be; in some cases it makes more sense to access the data more like you would with a Drupal view.  When data is rendered into pages, however, collections give some specific advantages, such as the ability to apply a permalink template to all collection items, similar to Drupal’s Pathauto module.
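
As a sketch, a hypothetical locations collection could be declared in _config.yml like this; output and permalink are standard collection settings:

collections:
  locations:
    output: true                    # render each file in _locations/ as its own page
    permalink: /locations/:name/    # Pathauto-style URL pattern for every item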

Blocks

Blocks in Drupal are used to show the secondary content on a page.  A block might be used to show a site’s navigation, a list of related pages, a Twitter feed, an advertisement, or pretty much anything else you might want to show on one or more pages of a site.  Blocks containing static HTML can be defined in Drupal’s UI, they can be created with views to show dynamically updated data based on the view results, or they can be implemented through a module, enabling almost any imaginable functionality.

Jekyll does not have a direct equivalent to blocks, but because of its flexible templating system, it’s possible to implement something similar.  Typically, the functionality of a block would be implemented in an include, a sub-template that can be called from within a layout template.  This makes the “block” reusable, as it can be included in multiple layouts easily.  Another approach would be to create an overarching layout that contains all of the “block” content and then use a sub-layout for each “content type” that contains the page-specific layout, similar to Drupal’s onion-skin template model.
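
A minimal sketch of the include approach, using a hypothetical _includes/recent-news.html as the “block”:

<!-- _includes/recent-news.html: a reusable "block" -->
<ul class="recent-news">
{% for post in site.posts limit:5 %}
  <li><a href="{{ post.url }}">{{ post.title }}</a></li>
{% endfor %}
</ul>

Any layout (or page) can then pull the block in with a single tag:

{% include recent-news.html %}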

Themes

Drupal has a very elaborate, and very complicated, theme system.  Drupal uses an onion-skin approach to theming that starts at the meta-structure of the page, continues to the basic page layout, and then gets into individual elements of the page.  Default templates are specified in the modules that implement them, but they can be overridden by themes.  Further, generic templates can be overridden by more specific ones by category or specific element.  It’s a very powerful system, but it’s very difficult to understand, especially for novices.

Jekyll uses layouts, which are template files written in the Liquid templating language developed by Shopify.  I found Liquid very easy to work with since it is very similar to Smarty, which I’ve used for years with my PHP applications.

A Jekyll layout is a single file containing HTML with additional markup for inserting values from variables as well as some rudimentary logic such as loops and if statements.  A layout is specified in a page’s front matter via the layout variable. Layouts can access any of several variables including page, which contains the page’s front matter values, and site, for site-wide values including information about other pages and any auxiliary data loaded from files in the _data directory. The main content of the page is available in the content variable.

Jekyll pages can also contain their own Liquid logic and layouts can insert themselves into “parent” templates by specifying a layout variable in their own front matter.  When that happens, the rendered content of the child layout is passed in as the content to the parent layout.
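
Here’s a sketch of that parent/child relationship, with hypothetical file names.  First, _layouts/default.html, the outer skin:

<html>
  <body>
    {% include recent-news.html %}
    {{ content }}
  </body>
</html>

And _layouts/post.html, a child layout that nests inside it (the front matter must be the very first thing in the file):

---
layout: default
---
<article>
  <h1>{{ page.title }}</h1>
  {{ content }}
</article>

When a page specifies layout: post, Jekyll renders the article markup first, then passes the result into default as its content variable.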

Views

One of Drupal’s most powerful features is the Views module, a query-by-example interface for accessing the data stored in Drupal.  Views is commonly used to create lists of data, such as all of the nodes of a given content type tagged with a certain value.  For example, to create a landing page that automatically includes all of the pages within a certain section of a site, you might use a view displaying summaries of each page and links to them.  Or, you could use a view to build a list of locations for a business, with each location being its own node, so that they could all be cleanly listed on a single page of a site, even if those nodes are never accessed individually.  This way, when you add or remove locations, the page is adjusted and sorted automatically, with no redesign necessary.

Jekyll doesn’t have the concept of views, but it does have multiple ways of getting data into the site, including data files and collections.  Once Jekyll has that data, it’s easy to iterate over it using Liquid logic in a page, layout, or include, mimicking the output of a view.

I’ve touched on collections already, so I won’t discuss them further, other than to say that their contents can be accessed in Liquid through the site variable.  For example, if a collection’s files are stored in a directory named _locations, the parsed contents of each file become available in site.locations, an array with one entry per file.

Similarly, data can be passed to Jekyll in CSV, tab-separated, JSON, or YAML files, which are stored in the site’s _data directory.  When Jekyll runs, it parses each file and inserts the contents, as an array, into the site.data variable with the key being the file’s name without the extension.  So a file named locations.csv would be accessed through site.data.locations.  For CSV and TSV files, the individual elements’ keys are derived from the first line of the file.
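
As a sketch, suppose a hypothetical _data/locations.csv contains:

name,city
Main Hall,Rochester
Annex,Buffalo

A page, layout, or include can then loop over it like a small view:

<ul>
{% for loc in site.data.locations %}
  <li>{{ loc.name }} ({{ loc.city }})</li>
{% endfor %}
</ul>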

Web-Based Tools

Drupal is completely web based: from configuration, to content editing, to viewing the site, everything can be done through a web UI.  While this poses some management challenges, such as making it difficult to promote changes through a typical dev-test-prod workflow, it does make it easy for site owners with no web authoring experience to make changes to their sites.

Jekyll is file based and has no web backend.  Software engineers will like that Jekyll can be easily integrated into source control systems like git and can be deployed using CI/CD tools, but less technical users may struggle with the semi-complex file organization system and the need to write markdown or raw HTML without an editor.  Fortunately there are a few tools to help with this, such as Forestry and NetlifyCMS.  These tools provide a web interface for editing content and automatically commit changes to git repositories without the user needing to know anything about git.  Forestry is a hosted tool and has a subscription cost associated with it, though a limited free version is available.  NetlifyCMS is open source and can be installed alongside the Jekyll site, but it isn’t as polished.  Neither is as tightly integrated or as customizable as the Drupal admin, but they do make decent solutions for content editing.

Feeds

Drupal has a pretty elaborate feeds system for importing data in various formats.  It can be used for everything from creating nodes from data in a CSV file to aggregating news from another site’s RSS feed, to populating a dropdown in a form with options from a JSON API.  Imported data is stored in Drupal entities, most often as nodes though it can be stored in any entity type, such as taxonomy terms.  The data is linked back to the original feed via a unique ID so that updates can be automatically applied, and there are many options available for how and when to expire old content.

Like many things Drupal, the feeds system is powerful when you need it, but it can be overkill for simple tasks.  Want to display the latest headlines or today’s events in a block on the homepage?  Create a feed importer, import each item as its own node, and then create a view to show those locally stored nodes.  Then set the feed’s deletion policies so that those nodes get deleted when they get dropped from the feed, or else you end up with a lot of cruft in your database.

With Jekyll, I was able to do something similar with two plugins.  jekyll_get enables the import of JSON data from a URL.  The URL gets called early in the site’s build process, the feed is parsed, and the data it contains gets added to site.data under a key you specify in the site’s _config.yml, much as if the data had been dropped in a file in the _data directory.  From there you can use it throughout the site in Liquid markup.  Since the feed is pulled each time the site is rendered, there’s no need to worry about stale data being left behind, though it’s more difficult to collect old data if the feed is limited to only the newest content.
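
The plugin is configured in _config.yml.  The sketch below reflects my reading of the plugin’s README, so treat the exact keys as an assumption to verify against the version you install:

jekyll_get:
  - data: announcements       # becomes available as site.data.announcements
    json: https://example.com/api/announcements.json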

Data, whether from a file or an API, is not rendered into pages but can be easily iterated over to create something analogous to a Drupal view.  Sometimes, however, you may want to create individual pages.  For example, I want to pull events out of an external calendar but display the details of each event as a page on the site.  For this I found the data_page_generator plugin.  This plugin lets you specify a data set, a layout (which the plugin refers to as a template), some filters, and some details about how to name the files it generates, and, when the site is built, you’ll end up with a set of pages containing data from the data set.  Again, since the data is reprocessed each time the site is built, if a particular row of data is removed, the page containing that data will also be removed.
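
Its configuration also lives in _config.yml.  Again, a hedged sketch based on my reading of the plugin’s documentation; verify the keys against the version you install:

page_gen:
  - data: events        # the data set to iterate over
    template: event     # the layout used to render each page
    name: id            # the field used to name each generated file
    dir: events         # generated pages land under /events/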

Dynamic Content

Drupal is built in PHP and pages are built on the fly (unless, of course, they are cached), so it’s easy to add dynamic content via modules, template files (ugh!), or through PHP code embedded into blocks or nodes (double ugh!).

With Jekyll, being a static site builder, you’d think dynamic content would be out of the question, but it is actually possible with a little creativity.  For example, I wanted a dynamic feedback form for the Tay House site, so I wrote it in PHP and added a <?php include('/blah/blah/contact-form.php'); ?> into my Jekyll page where I wanted the form to appear.  Then I just set the permalink of the generated page to have a .php extension, and now I have a dynamic page on my static site.
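
The whole page ends up looking something like this (the include path is the same placeholder as above):

---
title: Contact Us
permalink: /contact.php     # the .php extension makes the server execute the embedded code
layout: page
---

<p>Questions?  Drop us a line.</p>

<?php include('/blah/blah/contact-form.php'); ?>

Liquid leaves the PHP tag alone, so it passes through to the generated file untouched and the server runs it on each request.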

I’m looking to take this concept a bit farther as I further develop the site by having some sections of the site that are locked down via version control, others that can be automatically updated (but still statically built) by having a separate headless CMS trigger the build process, and still others that can pull late-breaking information in from an API on page load.

Some Jekyll purists warn against embedding code, saying it defeats the purpose of a static site generator and the security it provides.  Many Jekyll sites, I’ve noticed, rely on JavaScript-embeddable services, like Disqus, to handle add-on features like blog comments, but I’m an experienced developer with a background in web security, and I’d much rather trust code I’ve written over some black-box service that I have no control over.

My Project

As I was starting to redo the Tay House site, I was quite surprised at how well Jekyll was able to do everything I wanted without much effort.  So far I’ve rebuilt all of the site’s general content as Jekyll pages and can render it using a set of layouts that I built from scratch in a matter of hours. (For the record, I’ve never been able to build a Drupal theme completely from scratch, and I often spend as long as it took me to build the Jekyll templates, if not longer, just trying to disable crap I don’t want from Drupal’s starter themes.)  I probably had the first pass at the site built and functional, but without a lot of content, in the time I’d have spent just trying to figure out what modules I’d need in Drupal.

The site is currently managed in my own git repository, which I host using Gitea.  I may end up moving it to GitHub, however, since I’m not sure I’ll be able to get NetlifyCMS or Forestry to work with Gitea and I hope to get one of them working in the near future.

Once I had the basic site done, it was time for some semi-dynamic content.  While most of the site doesn’t change often, some things, namely the announcements and calendar, need to be edited more frequently.  I figured that putting these in a headless CMS would make it easier for me to let other people keep on top of them.  I selected Directus for this, since I like how it uses normalized SQL while still retaining revision history.

Now that I’ve proven that Jekyll will work for my use case, I’ve started to come up with my ideal configuration.  I feed the data from Directus into the site at build time using its JSON API and jekyll_get and create an individual page for each item with data_page_generator.

For now, I have to build the site manually each time I make a change and then deploy it manually with rsync.  I’m looking to automate this with some sort of CI/CD pipeline.  Ideally, I’ll get a web-based editor set up that deploys changes to a devel branch of the git repo, allowing me to give other people an easy way to manage the site while letting me retain editorial control by managing the merges to prod.
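
The manual routine is just two commands, something like this (the host and paths are placeholders):

bundle exec jekyll build
rsync -avz --delete _site/ deploy@example.com:/var/www/site/

The --delete flag removes server-side files that are no longer in the build, which keeps retired pages from lingering; use it with care.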

I also hope to configure Directus to rebuild the prod site automatically whenever a calendar event or news item is added or changed, as I want those changes to become available as soon as possible.  Since I don’t store the rendered site in git, I can do this without having to worry about merge conflicts in the repository.  I’d also like to do automatic nightly rebuilds so that I can, say, show a list of the next five events on the homepage and have them drop off automatically as they pass.

There’s a pretty good community of Jekyll users out there and, whenever I’ve gotten stuck, I’ve been able to find answers to most of my questions online.  Now that I’ve gotten more used to using Jekyll, I’m starting to push the envelope a bit more, so I’ll start posting some tutorials with some of the cool stuff I’m doing soonish.

xBrowserSync

I started using Xmarks to sync my bookmarks between multiple browsers and computers so long ago that it may have still been called Foxmarks when I started using it.  While I had a handful of problems with it from time to time (mainly with syncs failing and leaving my bookmarks corrupted in a given browser), the tool worked very well for me, so I was quite disappointed when LastPass announced that they were discontinuing the service back in May.

Today I think I may have found my replacement in xBrowserSync.  xBrowserSync is an open-source, anonymous, encrypted, and decentralized bookmark syncing tool that works a lot like Xmarks used to.  xBrowserSync doesn’t do everything that Xmarks did, but it syncs bookmarks (which is the only Xmarks feature I ever used), works in Chrome and Firefox, and treats bookmarks as bookmarks (as opposed to making you access them through a website), so it meets my needs.  There’s also Android support that I might check out.  I added a few bookmarks in one browser after installing xBrowserSync on one of my machines today and confirmed that they synced to other browsers, so it seems to work.

xBrowserSync is completely anonymous and doesn’t require any signup to use.  Instead, you simply provide the extension with an “encryption password” that is used to create an encryption key, which encrypts your bookmark collection before it is sent to the server.  When you set up your first browser, the extension generates a unique “sync ID” that identifies your bookmark collection.  On subsequent browser setups, you simply provide these two pieces of information and xBrowserSync retrieves and decrypts your bookmarks.  Encryption and decryption are done locally via the browser’s cryptography API, and your password and encryption key never leave your machine.

There are currently three public xBrowserSync service providers to choose from, which, combined with the fact that the code is all open source, helps alleviate concerns that this service, too, may go the way of Xmarks.  If the developer decides to stop supporting the project, users will just need to move their bookmark collections to another service provider.  Switching is easily done via the extension’s settings.  The server code is also available on GitHub, so it’s also possible to run your own server if you are truly paranoid.

The only “complaint” I have about xBrowserSync at this point, now that I’ve installed it on several browsers on Windows, Mac, and Linux machines, is that when pulling down the bookmark library for the first time in a new browser, xBrowserSync wipes all of your existing bookmarks and replaces them with the copy from the server.  This wasn’t a big deal for me, as all of my bookmarks were pretty much in sync across systems already, but a first-time user, trying to merge work and home bookmarks for example, might be in for quite a surprise when one of the two collections gets wiped out.  To xBrowserSync’s credit, though, the extension does give ample warnings about this.

Complaints aside, xBrowserSync seems to do exactly what it says it will.  If you’re still wondering what to do now that Xmarks is gone, give xBrowserSync a look.

Getting Around Spectrum’s Email Blocks

Our local cable TV and broadband provider, Spectrum, in their infinite wisdom, appears to have blocked the entire IP range owned by Digital Ocean (and possibly other similar hosting providers) from sending mail to their email users. I discovered this a month or two ago when mail from my scout troop’s addresses just stopped going to any of our families that use @rochester.rr.com addresses. Of course, this is just speculation, because Spectrum has not acknowledged any of my many, many emails requesting unblocking, and trying to get help from customer service is a painful experience of being bounced between help desks of techs who are trained to handle front-end issues and have no idea what to do with back-end questions. I suspect the block is more widespread than just Digital Ocean, too, as another email account I use, this one hosted by a much smaller hosting company, also seems no longer able to communicate with my Spectrum account.

Since I can’t seem to get anywhere with Spectrum, I started thinking about alternative solutions. The mail server in question runs on a Digital Ocean droplet running VestaCP, which uses Exim 4 as its MTA. A little research into Exim configurations showed that I could set up a “smart host,” basically a process for relaying all outgoing mail matching certain criteria to an external mail server. “Perfect,” I thought, “I’ll just set up a new Spectrum email and relay all of the mail bound for Spectrum users through that!” For the most part, that worked; the only issue was authentication. Exim already had two authentication schemes set up to authenticate email clients when they’d try to send email, and these conflicted with the configuration I needed for Exim to authenticate to Spectrum’s SMTP server.

Not to be deterred, I quickly had a new idea. Exim supports a construct called a pipe, based on the Unix construct of the same name, where the contents of an email message are passed on to an external program for further processing. If I could just find some utility that could take the message contents from Exim and pass them on to an authenticated SMTP session with Spectrum, I’d be all set. After a couple of hours of comparing Linux command-line tools for sending mail, I came up with nothing that would do exactly what I wanted. Maybe I’d have to write something myself.

As I was researching SMTP client libraries for PHP, hoping not to have to write my own, I came across a recommendation to reconfigure PHP to use msmtp, instead of sendmail, with the mail() function. msmtp? What’s that? Turns out that it’s an SMTP client that implements the sendmail command-line interface. Unlike Exim, however, which also implements the sendmail interface, msmtp won’t try to route mail itself; it will only send it through one of its preconfigured SMTP servers. In other words, exactly what I needed.

I should note that I happen to be a Spectrum customer, so I was able to create a new email account with Spectrum to do this. If you don’t have that luxury, you can probably use a different provider, such as Gmail (there are numerous examples of using Gmail with msmtp available online). Note, though, that doing this could cause problems for you with SPF, DKIM, and DMARC validations, so plan accordingly. When using a Spectrum account, since the connection is authenticated, Spectrum seems to accept the message without doing any further verifications.

Installing msmtp on my CentOS server was easy since there’s already a package available. The package is in the epel repo, however, so if you don’t have epel configured already, you’ll need to do that first. You’ll also need to be root to do this.

yum install -y msmtp

Thinking about how best to set it up, I decided to create a new service account to handle mail being passed to msmtp. While I could have used a global configuration that would not have required this, it would require me to make a file containing the password for the mail relay readable to everyone on the server. In reality, I’m the only user on the server, but still, that’s not a very good security decision. With the separate user account, I can better protect the msmtp configuration.

I created the user account with the following command. The -r sets it up as a service account with no password, but the -m and -d set up a home directory for the account, which is not normally done for service accounts. I’m using this home directory to store the configuration and scripts needed to make this thing work.

useradd -rmd /usr/lib/msmtp msmtp

msmtp wants to write to a log file, so I’ve also set up a directory for that. I’m putting it into /var/log so that it would be easy to find when I need it and also to be close to the logs for other processes, such as Exim, that I may need to consult concurrently when doing email debugging.

mkdir /var/log/msmtp
chmod 750 /var/log/msmtp
chown msmtp.msmtp /var/log/msmtp

Next, I switch to the newly created msmtp account to set up my configuration.

su - msmtp

Here I create the configuration file. When it starts up, msmtp looks for a file named .msmtprc in the user’s home directory.

vi .msmtprc

The contents of the file should look like the following sample. Note that it is possible to set up multiple accounts in the file; just start each one with a new “account” line and a unique name. The indentation in the file is my own—I found it easier to follow what was what when I used indentation, but it is not required.

defaults
    tls on
    tls_starttls on
    tls_trust_file /etc/ssl/certs/ca-bundle.crt
    logfile /var/log/msmtp/msmtp.log

account spectrum
    host mail.twc.com
    port 587
    auth on
    user EMAIL_ADDRESS
    password PASSWORD
    from EMAIL_ADDRESS

OK…so what’s going on here? The lines following the “defaults” section apply to all accounts configured in the file. Here I’m saying that I want to use TLS encryption on all connections and I provide the path to the CentOS default certificate store that msmtp will use to validate the certificate that servers present when a connection is made. I also define the path to the log file that msmtp should use to record its activities.

The next section defines an account. I’ve given the account the name “spectrum,” which will be used when calling msmtp a bit later. The rest is fairly straightforward: SMTP server host and port, a directive to use an authenticated connection along with the credentials to use, and the default “from” address for the so-called SMTP envelope, the initial communication between the client and server before the message is actually sent. Obviously, you should replace EMAIL_ADDRESS and PASSWORD with your actual credentials.

For security reasons, msmtp checks that the .msmtprc file is only readable and writeable by its owner and won’t let you use the program if it is not.

chmod 600 .msmtprc

msmtp is now set up and ready to use. We can test it by creating a sample email message in a file and trying to send it.

vi email.txt

The contents of this file should look something like this:

From: me@mydomain.com
To: you@yourdomain.com
Subject: This is a test

Testing mail from msmtp.

Note that the headers in this file, such as the “From” and “To” email addresses, only reflect what is displayed in the recipient’s email client when they receive the message. The actual recipients are specified in the SMTP Envelope, which is generated from parameters passed to msmtp on the command line or specified in the .msmtprc file.

To test msmtp, run the following, replacing me@rochester.rr.com with the email address of your desired recipient.

cat email.txt | msmtp -a spectrum me@rochester.rr.com

msmtp won’t show any output if it’s successful, but you can check the log file to make sure your message went through.

tail /var/log/msmtp/msmtp.log

If it worked, you should see something like this:

Nov 29 10:34:03 host=mail.twc.com tls=on auth=on user=YOUR_EMAIL from=YOUR_EMAIL 
recipients=RECIPIENT_EMAILS mailsize=7959 smtpstatus=250 smtpmsg='250 2.0.0 MESSAGE_ID 
mail accepted for delivery' exitcode=EX_OK

Since msmtp emulates sendmail, I had figured that I’d be able to simply include the call to msmtp in my Exim transport, but I found that Exim includes an extra newline at the beginning of the message that causes the email headers to get pushed into the message body. To get around this, I use a short shell script that uses the read command to strip off the extra line.

vi route-spectrum

Add the following to this file:

#!/bin/bash
CONFIG_FILE=/usr/lib/msmtp/.msmtprc
ACCOUNT=spectrum

# strip the leading newline from the message passed in from Exim
read

# pass the remainder of stdin (via cat) to msmtp to send to the remote MTA;
# the recipient addresses from Exim are passed along in "$@"
cat | msmtp -C "$CONFIG_FILE" -a "$ACCOUNT" "$@"

I’m not sure that Exim creates a full shell environment for the script when it runs, so I added the full path to the .msmtprc file I wanted to use on the command line, via the -C option. Be sure to adjust the $ACCOUNT variable to reflect the name you gave to the server configuration in the .msmtprc file.

Since it’s a script, it needs to be executable.

chmod 755 route-spectrum

Now I’m done working in the msmtp account, so let’s go back to the root account.

exit

Now I need to build the Exim configuration. My server uses a single file for the Exim config, so this tutorial reflects that. YMMV.

vi /etc/exim/exim.conf

In the section that starts with begin routers I add the following. This tells Exim to give special attention to messages bound for the rochester.rr.com domain (adjust this as necessary for your use; something like “rr.com : *.rr.com” might be more inclusive).

spectrum_mail:
   driver = accept
   domains = rochester.rr.com
   transport = spectrum_smtp
   no_more
   no_verify

This basically tells Exim that when it encounters a message bound for someone on rochester.rr.com to process that message using the spectrum_smtp transport, which I’ll add next.

In the section that starts begin transports, I add the block below. This is the workhorse of the process, passing the email message on to the script we created above so that it can be relayed through msmtp.

spectrum_smtp:
  driver = pipe
  command = "/usr/lib/msmtp/route-spectrum  $pipe_addresses"
  user = msmtp
  batch_max = 10

The user directive tells Exim to run the command as the msmtp user we set up earlier. This ensures that the script has access to the server configuration in the .msmtprc file that we took care to protect because it contained our password. The batch_max directive tells Exim that it can process up to 10 recipients of the same message with one call to the script; otherwise it would process each one with a separate connection to the Spectrum server. I’m not sure what the perfect number to use here is, but 10 seemed decent for my needs. The list of email addresses that are processed are placed into the $pipe_addresses variable and are passed as an argument to the script.

Since I changed the configuration, I need to restart Exim.

service exim restart

Now I can go into my email client and send a message to a Spectrum user. Checking the Exim main log (/var/log/exim/main.log) and msmtp log (/var/log/msmtp/msmtp.log) will confirm that it was delivered correctly.

The last thing that I want to do is set up log rotation for the msmtp log file. This will start a new log file each week, compressing the old one to save space on the server. To do this, create a new file in the logrotate.d directory:

vi /etc/logrotate.d/msmtp

And add the following to it:

/var/log/msmtp/msmtp.log {
    weekly
    compress
    missingok
}

That’s about it. This process isn’t the most sustainable, but it’s a decent workaround until Spectrum realizes that legitimate small businesses and non-profits do run mail servers on bulk hosting and stops treating us all as spammers.

There is a flaw in this configuration, but it shouldn’t affect anyone too often: if, for some reason, msmtp is not able to connect to the remote mail server, Exim will view the failure as a hard fail and the message will be dropped. While I can work around this, I need to do a little more research on the exit status codes returned by msmtp, which I haven’t yet done.

Setting Up a Raspberry Pi From Scratch

The SD card on one of my Raspberry Pi systems filled up recently, so I decided to wipe it and reinstall Raspbian. The main Raspbian distribution, however, is designed to help teach kids to code and includes lots of tools that I generally don’t use, so I decided to build my system from scratch.

The Raspberry Pi Foundation provides a light version of the Raspbian OS, which is intended for Pis used as servers or in embedded devices. Raspbian Jessie Lite doesn’t include a graphical interface by default, nor does it include any of the various browsers, office tools, or programming tools that you’d normally expect to see on a Pi. It’s pretty much just the basic OS and the standard utilities you’d expect to find on any Linux machine.

To get started, download the latest Raspbian Jessie Lite image and use your method of choice to install it on your SD card. Since I’m working from a Mac, I usually use IvanX’s Pi Filler. Once the card is built, stick it in your Pi, boot up, and run raspi-config right away to set up your Pi and change your password.

sudo raspi-config

Since this is an advanced tutorial, I’ll assume you’re already familiar with raspi-config, so I won’t spend time on it here. If you need help, there are plenty of tutorials online.

Now, I’ve never been a fan of the Raspberry Pi’s desktop environment, LXDE. I just don’t really like how it looks and it’s difficult to customize. PIXEL, the new implementation of LXDE on newer versions of Raspbian, is a little better looking, but I still can’t get into it. Instead, I much prefer XFCE, another lightweight desktop environment. Unlike LXDE, you can configure XFCE up the ying-yang, making things look exactly the way you want.

We first need to install the X Window System, which is the low-level software that makes graphical desktops possible. Since we don’t want to accidentally install any of the default PIXEL libraries, however, we’ll use the --no-install-recommends option with apt-get:

sudo apt-get install --no-install-recommends xserver-xorg
sudo apt-get install --no-install-recommends xinit

Next we’ll install XFCE, the XFCE terminal application, and the LightDM display manager:

sudo apt-get install xfce4 xfce4-terminal lightdm

This will install a ton of dependencies, and will take a while to complete.

At this point you could run startx and have a working system, but it would be extremely basic, so we’ll also add a bunch of XFCE utilities. These let you add desktop features like status bar icons and desktop widgets that help make the interface more user friendly:

sudo apt-get install xfce4-goodies xfce4-indicator-plugin xfce4-limelight-plugin xfce4-mpc-plugin xfce4-whiskermenu-plugin

Now let’s start the graphical environment:

startx

The first time you run XFCE, you’ll see a screen that looks something like this:

XFCE First Launch

The first time you start XFCE, you’ll be prompted to set up your desktop.

XFCE can be configured in a lot of different ways, and everyone has their own preference. Personally, I prefer something that looks kind of like an older Windows system with a bar at the bottom of the screen featuring an application menu, buttons to indicate running programs, and some status indicators, quick access menus, and the like. I’ll take you through my personal preferred setup in the next few steps, but feel free to customize things to your liking.

First, click the “One empty panel” button in the dialog box. A small white box will appear near the top of the screen.

First Panel

The first panel will initially appear near the top of the screen.

Drag the box to the bottom left corner of the screen. Right click on this box and choose “Panel preferences…”

Panel Preferences

Right click on the panel and select Panel Preferences.

On the “Display” tab of the Panel dialog, set the mode to “Horizontal,” the row size to 32 pixels, number of rows to one, and the length to 100%.

Panel Preferences

Sample settings for the panel.

Next, click on the “Items” tab. Then click the “+” button to start adding items to the panel. Below is the list of items I use on my panel, in order from left to right:

* Whisker Menu (an application launcher similar to the “Start” menu in Windows)
* Separator
* Window Buttons
* Separator
* Workspace Switcher (lets you quickly change between virtual desktops)
* Notification Area (a place for applications to show status indicators; for example, an email app might indicate when you have new messages)
* Show Desktop (minimize all windows)
* Audio Mixer (control the speaker volume)
* Wastebasket Applet (quick access to your system’s trash)
* Clock
* Action Buttons (these give you quick access to functions like lock screen, logout, and reboot)

Feel free to add other options that make sense for your system. For example, if you’re using WiFi, you might want to add the Wavelan item to help you manage your wireless network connections. When you’re done, your screen should look something like this:

Customized panel

The newly customized panel.

Most of the panel items have a set of preferences that go along with them which you can access by selecting the item you want to configure and clicking the properties button. It probably looks like a wrench on top of a sheet of paper and is located just below the “-“ button.

You can explore the settings on your own, but to help make things look a little cleaner, choose the second separator and click the properties button. Then check the “Expand” checkbox and click “Close.” That pushes the items after the separator up against the right side of the screen and makes the panel look more balanced.

Separator Preferences

Expanding the separator helps balance the look of the screen.

Now close the Panel dialog. We’ve got our desktop set up, but it still looks a little blah. Let’s do something about that.

Use the newly added Whisker Menu to run the Terminal Emulator, which should be at the top of the list.

Using Whisker Menu

Running the terminal emulator from the Whisker Menu.

We’ll now add some additional desktop themes (which set the style and color of system elements like window borders, buttons, and the like), icons, and background images. We’ll start with the themes. At the moment, I’m really liking the MurrinaBlue theme that’s part of the Murrine theme package:

sudo apt-get install gtk2-engines-murrine

And, although I’m not a big fan of PIXEL, I really do like its background images:

sudo apt-get install pixel-wallpaper

Finally, we’ll install the Elementary icon set. ElementaryOS is a lesser-known Linux distribution, but it has one of the nicest, most professional looking sets of icons I’ve seen—and they work well with XFCE.

While some Linux distributions have an XFCE-Elementary icon package available, Raspbian does not, so we’ll simply clone it from GitHub.

First install git:

sudo apt-get install git

Then run these commands:

sudo mkdir /usr/local/share/icons
cd /usr/local/share/icons
sudo git clone https://github.com/elementary/icons.git elementary

Now open the Whisker Menu, type “appearance,” and open the “Appearance (Customize the look of your desktop)” applet that appears.

On the Appearance dialog’s “Style” tab, you’ll see a long list of desktop themes. As you select them, your screen will change to show you a preview. As I noted earlier, my go-to theme is currently MurrinaBlue.

Selecting a Desktop Theme

Selecting a theme for the desktop

Next, click on the Icons tab. Again, as you click through the various options, you’ll see the desktop change. Choose “elementary” (not “elementary Xubuntu dark”).

Selecting an Icon Set

Selecting an Icon Set

Now close the Appearance dialog, right-click on the desktop, and select “Desktop Settings.”

Desktop Settings

Right click on the desktop and select Desktop Settings.

You’ll see that XFCE already comes with several wallpaper images, but we want to use the PIXEL images, so click the “+” button:

Selecting Desktop Images

Click the “+” icon to add additional wallpaper images.

In the “Add Image File(s)” dialog, click in the whitespace of the file listing (middle box) and start typing /usr/share/pixel-wallpaper and then press enter.

Click on the first image (aurora.jpg), then scroll to the bottom of the list, hold down Shift, and click on the last image (waterfall.jpg) to select the entire set of images. Then click add. Now select your wallpaper and close the Desktop dialog.

Selecting PIXEL Wallpaper

Selecting all of the PIXEL wallpaper images will make them available in the XFCE Desktop Properties window.

Now that we’ve gotten everything set up, you’ll probably want to open a terminal window and run raspi-config again.

sudo raspi-config

Go to “Boot Options,” then “Desktop / CLI,” and choose to boot to “Desktop” instead of the default command-line interface. Reboot when prompted.

Now you’ve got a basic system, but it doesn’t do much. You need some software. You’ll probably want a web browser or two:

sudo apt-get install firefox-esr chromium-browser

Maybe an office suite (I’m also including the LibreOffice SIFR icon set, as it’s much nicer looking than the default set):

sudo apt-get install libreoffice libreoffice-style-sifr

You’ll probably want a PDF viewer:

sudo apt-get install xpdf

And don’t forget an email client:

sudo apt-get install icedove

That should be enough to get you started. There is, of course, a ton of other software you can install, depending on your needs. Poke around on the Internet and you’ll be sure to find applications to do just about anything you want.

Extra Credit

When you rebooted after switching to the “Desktop” login, you probably noticed that the default Debian wallpaper appeared under the login dialog. Want to change that?

Open a terminal window and type:

sudo vi /etc/lightdm/lightdm-gtk-greeter.conf

If you don’t like vi, feel free to use your preferred editor (such as nano), but if you’re going to be a Linux user, understanding vi is a very useful skill that you should really take the time to learn. And don’t worry, I’ll clearly explain everything.

Press the / key (forward slash) to search the file and then type “background” (without the quotes, of course), followed by the n key (for “next match,” since the word background appears once earlier in the file), which will jump you to where the login screen’s wallpaper image is set.

Use the arrow keys to position the cursor on the first / after the equals sign and press the c key (for “change”), followed by the $ key (shift is required), which tells vi that you want to edit everything to the end of the line.

Now type the path to one of the PIXEL wallpaper images, such as

/usr/share/pixel-wallpaper/aurora.jpg

So that the entire line looks like this:

background=/usr/share/pixel-wallpaper/aurora.jpg

When you’re done typing, press the esc key and then type :wq (colon, w, q), which tells vi to write the file back to disk and then quit.

Now reboot the Pi again and you should see a much more welcoming login screen.

Extra Credit Part II

Now that you’re on your quest for software to install on your Pi, it might make sense for you to install a graphical package manager. Let’s face it, using apt-get to install packages or run updates is pretty easy, but using the command line to search for packages? Not so much.

The graphical package manager I generally prefer is called Synaptic. It’s pretty no-frills as far as the interface goes, but it’s easy to use and gets the job done. Like everything else, you can install Synaptic via apt-get:

sudo apt-get install synaptic

Synaptic will automatically add itself to the Whisker Menu (under “System”), but the default settings use an internal mechanism that prompts for the root user’s password to get the privileges needed to install packages. By default, most Debian-based OSes, such as Raspbian, don’t set a root password, so the menu item won’t work.

Fortunately, it’s easy to fix. First, you’ll need to install gksu, a package containing graphical equivalents to the su and sudo command-line programs:

sudo apt-get install gksu

Next we need to edit the synaptic.desktop file that creates the menu item. We can do this with a couple of quick commands:

sudo mkdir -p /usr/local/share/applications
sed "s/synaptic-pkexec/gksudo synaptic/" /usr/share/applications/synaptic.desktop | sudo tee /usr/local/share/applications/synaptic.desktop > /dev/null

This creates a new synaptic.desktop file in /usr/local/share/applications with an updated command in it. Without getting into the differences between /usr and /usr/local, the file in /usr/local will take precedence over the one in /usr, and we don’t have to worry about it getting overwritten when an updated package is released.

Now, when you select Synaptic from the Whisker Menu, you should get right into the app without being prompted for a non-existent password.

Extra Credit Part III

Sometimes you may want to add your own items to your Whisker Menu. While it’s not too difficult to manually write the .desktop files yourself, there’s a much easier way: using a menu editor. One such menu editor that’s available for the Pi is called MenuLibre.
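
For reference, a .desktop file is just a small INI-style text file. Here’s a minimal sketch for a hypothetical application:

[Desktop Entry]
Type=Application
Name=My App
Comment=Hypothetical example entry
Exec=/usr/local/bin/myapp
Icon=utilities-terminal
Categories=Utility;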

To install MenuLibre, run:

sudo apt-get install menulibre

MenuLibre will add itself as “Menu Editor” to the Accessories and Settings menus in Whisker Menu. Run it and you’ll have access to tweak any of your system’s menu items, or even add new ones.

The .desktop files that MenuLibre creates are stored in your own account and will only appear in your menu, not the menus of other users of the system (assuming you have any). If you want to share them, you should move the files from ~/.local/share/applications to /usr/local/share/applications:

sudo mv ~/.local/share/applications/myapp.desktop /usr/local/share/applications

Printing Pi

A while back I somewhat impulsively bought a floor-model Brother MFC-7365DN printer that was on the clearance shelf at Sam’s. I say somewhat impulsively because I needed a new printer and I was seriously looking at the Brother MFC line, but I didn’t know much about this particular model and hadn’t, prior to walking in to the store at least, completely decided on that line of printers. The deal was too good to pass up, though, and the printer has proven itself to be a great decision.

One thing that I didn’t know (and didn’t really think much about) going into the purchase was Linux support. As it turns out, Brother has great support for Linux, but only for x86 devices. Their Linux drivers are proprietary and they don’t offer source code, so printing from ARM devices is a challenge. There’s plenty of discussion online about which Gutenprint drivers usually work (none of the recommendations I found did more than push out blank sheets of paper, if anything at all) or using QEMU to emulate x86 hardware (which was reported as being extremely slow, so I didn’t try).

I generally don’t print from my Raspberry Pis directly but, as we use Google Apps more and more at work, I find myself wanting to use Cloud Print more often. I really wanted, as I had done in the past, to set one of my Pis up as a Cloud Print server, but without a printer driver that wasn’t happening. In fact, I even went so far as to create an extremely kludgy, but surprisingly effective, solution that involved an always available SSH connection (using autossh and a dynamic DNS service) between a remote x86-based VPS and one of my Pis, with a tunnel to my printer. I then set up Cloud Print on the server and used that as my workaround…until this weekend.

I was browsing some packages in Synaptic when I decided to search for “Brother.” I’m pretty sure I did this before with no luck, but this time I found a package, printer-driver-brlaser, described as a “CUPS filter driver supporting (some) Brother laser printers,” which appears to be distributed by the OpenPrinting project. One of the printers listed was model DCP-7065DN, a model number quite similar to my printer’s. On checking out the specs for the DCP-7065DN, it looked to be pretty much the same printer as mine, less the fax capabilities, so I installed the package, set up the printer in CUPS (which was already installed from previous attempts), selecting “Brother” as the make and “Brother DCP-7065DN” as the model, and printed a test page…successfully!

I should note that CUPS found my printer automatically, but I couldn’t get it to work from the discovered selection. Instead, I manually configured the printer’s address to be socket://IP_ADDRESS:9100 (with my printer’s IP address in place of IP_ADDRESS, obviously). The driver didn’t give me an option to set the duplex default, so I’m not sure if I’ll be able to do two-sided printing or not (I haven’t tried printing anything longer than one page yet), but it’s not the end of the world if I can’t.

Just thought I’d share my experience as there seem to be lots of frustrated Brother owners trying to print from ARM devices out there.
