Simplifying local development: The ./run executable
Robin Winslow
on 5 July 2017
Tags: Design
Canonical’s webteam manage over 18 websites as well as many supporting projects and frameworks. These projects are built with any combination of Python, Ruby, NodeJS, Go, PostgreSQL, MongoDB or OpenStack Swift.
We have 9 full-time developers – half the number of websites we have. And naturally some of our projects get a lot of time spent on them (like www.ubuntu.com), and others only get worked on once every few months. Most devs will touch most projects at some point, and some may work on a few of them on any given day.
Before any developer can start a new piece of work, they need to get the project running on their computer. These computers may be running any flavour of Linux or macOS (thankfully we don’t yet need to support Windows).
A focus on tooling
If you’ve ever tried to get up and running on a new software project, you’ll certainly appreciate how difficult that can be. Sometimes developers can spend days simply working out how to install a dependency.
Given the number and diversity of our projects, and how often we switch between them, this is a delay we simply cannot afford.
This is why we’ve invested a lot of time into refining and standardising our local development tooling, making it as easy as possible for any of our devs, or any outside contributors, to get up and running.
The standard interface
We needed a simple, standardised set of commands that could be run across all projects, to achieve predictable results. We didn’t want our developers to have to dig into the README or other documentation every time they wanted to get a new project running.
This is the standard interface we chose to implement across our projects, covering the basic functions common to almost all of them:
./run # An alias for "./run serve"
./run serve # Prepare the project and run the local server
./run build # Build the project, ready for distribution or release
./run watch # Watch local files for changes and rebuild as necessary
./run test # Check code syntax and run unit tests
./run clean # Remove any temporary or built files or local databases
We decided on using a single run executable as the entry point into all our projects only after trying and eventually rejecting a number of alternatives:
- A Makefile: The syntax can be confusing. Makefiles are really made for compiling system binaries, which doesn’t usually apply to our projects
- gulp, or NPM scripts: Not all our projects need NodeJS, and NodeJS isn’t always available on a developer’s system
- docker-compose: Although we do ultimately run everything through Docker (see below), the docker-compose entrypoint alone wasn’t powerful enough to achieve everything we needed
In contrast to all these options, the run script allows us to perform whatever actions we choose, using any interpreter that’s available on the local system. The script is currently written in Bash because it’s available on all Linux and macOS systems. As an additional bonus, ./run is quicker to type than the other options, saving our devs crucial nanoseconds.
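To illustrate the idea, a minimal entry point could dispatch the standard commands with a simple Bash case statement. The sketch below is purely illustrative and much simpler than our real scripts, which also handle the Docker plumbing described later:

#!/usr/bin/env bash
# ./run: minimal dispatcher sketch (illustrative, not our actual script)
set -euo pipefail

COMMAND="${1:-serve}"   # default to "serve" when no argument is given

case "${COMMAND}" in
  serve) echo "Prepare the project and run the local server" ;;
  build) echo "Build the project, ready for distribution or release" ;;
  watch) echo "Watch local files for changes and rebuild as necessary" ;;
  test)  echo "Check code syntax and run unit tests" ;;
  clean) echo "Remove any temporary or built files or local databases" ;;
  *)     echo "Usage: ./run [serve|build|watch|test|clean]" >&2; exit 1 ;;
esac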
The single dependency that developers need to install to run the script is Docker, for reasons outlined below.
Knowing we can run or build our projects through this standard interface is not only useful for humans, but also for supporting services – like our build jobs and automated tests. We can write general solutions, and know they’ll be able to work with any of our projects.
Using ./run is optional
All our website projects are openly available on GitHub. While we believe the ./run script offers a nice easy way of running our projects, we are mindful that people from outside our team may want to run the project without installing Docker, want to have more fine-grained control over how the project is run, or just not trust our script.
For this reason, we have tried to keep the addition of the ./run script from affecting the wider shape of our projects. It remains possible to run each of our projects using standard methods, without ever knowing or caring about the ./run script or Docker.
- Django projects can still be run with pip install -r requirements.txt; ./manage.py runserver
- Jekyll projects can still be run with bundle install; bundle exec jekyll serve
- NodeJS projects can still be run with npm install; npm run serve
While the documentation in our READMEs recommends the ./run script, we also try to mention the alternatives, e.g. in www.ubuntu.com’s HACKING.md.
Using Docker for encapsulation
Although we strive to keep our projects as simple as possible, every software project relies on dependent libraries and programs. These dependencies pose 2 problems for us:
- We need to install and run these dependencies in a predictable way – which may be difficult in some operating systems
- We must keep these dependencies from affecting the developer’s wider system – there’s nothing worse than having a project break your computer
For a while now, developers have been solving this problem by running applications within virtual machines running Linux (e.g. with VirtualBox and Vagrant), which is a great way of encapsulating software within a predictable environment.
Linux containers offer light-weight encapsulation
More recently, containers have entered the scene.
A container is a part of the existing system with carefully controlled permissions and an encapsulated filesystem, to make it appear and behave like a separate operating system. Containers are much lighter and quicker to run than a full virtual machine, and yet provide similar benefits.
The easiest and most direct way to run containers is probably LXD, but unfortunately there’s no easy way to run LXD on macOS. By contrast, Docker CE is trivial to install and use on macOS, and so this became our container manager of choice. When it becomes easier to run LXD on macOS, we’ll revisit this decision.
Each project uses a number of Docker images
Running containers through Docker helps us to carefully manage our projects’ dependencies, by:
- Keeping all our software, from Python modules to databases, from affecting the wider system
- Logically grouping our dependencies into separate light-weight containers: one for the database, and a separate one for each technology stack (Python, Ruby, Node etc.)
- Easily cleaning up a project by simply deleting its associated containers
So the ./run script in each project starts the project by running the relevant commands inside the relevant Docker containers. For example, in partners.ubuntu.com, the ./run command will:
- Install the NPM dependencies (vanilla-framework and node-sass) using our canonicalwebteam/node image
- Build CSS with node-sass using the canonicalwebteam/node image
- Start an empty database using the library/postgres image
- Link the database container to a Django container running our canonicalwebteam/django image (which will automatically install or update Python dependencies from requirements.txt as needed)
- Use the Django container to provision the database and run the site
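As a rough illustration of that flow, a serve step could chain Docker commands along these lines. This is a simplified sketch: the image names are the ones mentioned above, but the exact flags, file paths and ports are assumptions rather than the project’s actual script:

#!/usr/bin/env bash
set -euo pipefail

# Install NPM dependencies (vanilla-framework, node-sass) inside the node image,
# mounting the project directory so node_modules lands in the checkout
docker run --rm -v "$(pwd)":/app -w /app canonicalwebteam/node npm install

# Build CSS with node-sass, using the same node image
# (the .scss path here is a placeholder, not the real project path)
docker run --rm -v "$(pwd)":/app -w /app canonicalwebteam/node \
  ./node_modules/.bin/node-sass static/css/styles.scss static/css/styles.css

# Start an empty PostgreSQL database in its own container
docker run --name partners-db -d postgres

# Provision the database, then run the site, in a Django container
# (the canonicalwebteam/django image installs requirements.txt as needed)
docker run --rm -v "$(pwd)":/app -w /app --link partners-db:postgres \
  canonicalwebteam/django ./manage.py migrate
docker run --rm -it -v "$(pwd)":/app -w /app --link partners-db:postgres \
  -p 8001:8001 canonicalwebteam/django ./manage.py runserver 0.0.0.0:8001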
Docker is the only dependency
By using Docker images in this way, the developer doesn’t need to install any of the project dependencies on their local system (NodeJS, Python, PostgreSQL etc.). Docker – which should be trivial to install on both Linux and macOS – is the single dependency they need to run any of our projects.
Keeping the ./run script up-to-date across projects
A key feature of our solution is that it provides a consistent interface across all of our projects. However, the script itself will vary between projects, as different projects have different requirements. So we needed a way of sharing relevant parts of the script while keeping the ability to customise it locally.
It is also important that we don’t add significant bloat to the project’s dependencies. This script is just meant to be a useful shorthand way of running the project, but we don’t want it to affect the shape of the project at large, or add too much extra complexity.
However, we still need a way of making improvements to the script in a centralised way and easily updating the script in existing projects.
A Yeoman generator
To achieve these goals, we maintain a Yeoman generator called canonical-webteam. This generator contains a few ways of adding the ./run architecture for some common types of projects we use:
$ yo canonical-webteam:run # Add ./run for a basic node-only project
$ yo canonical-webteam:run-django # Add ./run for a databaseless Django project
$ yo canonical-webteam:run-django-db # Add ./run for a Django project with a database
$ yo canonical-webteam:run-jekyll # Add ./run for a Jekyll project
These generator scripts can be used either to add the ./run script to a project that doesn’t have it, or to replace an existing ./run script with the latest version. They will also optionally update .gitignore and package.json with some of our standard settings.
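If you want to try the generator yourself, Yeoman generators are installed globally through npm; assuming this one follows the usual generator-&lt;name&gt; package naming convention, installing and running it would look something like:

npm install -g yo generator-canonical-webteam    # install Yeoman and the generator
cd my-project                                    # your own project directory
yo canonical-webteam:run-django-db               # add ./run for a Django project with a database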
Try it out!
To see this ./run tooling in action, first install Docker by following the official instructions.
Run the www.ubuntu.com website
You should now be able to run a version of the www.ubuntu.com website on your computer:
- Download the www.ubuntu.com codebase, e.g.:
curl -L https://github.com/canonical-websites/www.ubuntu.com/archive/master.zip > www.ubuntu.com-master.zip
unzip www.ubuntu.com-master.zip
cd www.ubuntu.com-master
- Run the site!
$ ./run
# Wait a while (the first time) for it to download and install dependencies, until you see:
# Starting development server at http://0.0.0.0:8001/
# Quit the server with CONTROL-C.
- Visit http://127.0.0.1:8001 in your browser, and you should see the latest version of the https://www.ubuntu.com website.
Forking or improving our work
We have documented this standard interface in our team practices repository, and we keep the central code in our canonical-webteam Yeoman generator.
Feel free to fork our code, or if you’d like to suggest improvements please submit an issue or pull-request against either repository.
Also published on Medium.