A basic deploy pipeline for Rust on CentOS

Some basic things you can do to package and deploy Rust software on CentOS hosts

A recap of some things I learned recently while packaging Rust code for deployment on CentOS hosts.

This is meant to be a complementary piece to Go on CentOS.

Building statically compiled artifacts

I prefer to compile our Rust binaries with musl for static linking and easy deployment. A problem I run into frequently when deploying Rust binaries on CentOS 7 is:

myrustbinary: /lib64/libc.so.6: version 'GLIBC_2.18' not found (required by myrustbinary)

CentOS 7's glibc is “capped” at 2.17, so binaries compiled on my Fedora laptop (which has a newer glibc) hit this issue.

Now, scping binaries from a laptop isn’t a good solution even if it did work. I invoke clux/muslrust to build a statically-compiled musl binary from our Travis-CI job. The Makefile looks like this:

CARGO_CACHE     := -v $(HOME)/.cargo-docker-cache:/root/.cargo:Z
MUSL_DOCKER     := <url to our private docker registry>/rust_build_nightly_2017-06-13
CHOWN_CMD       := chown -R $$UID:$$UID ./ /root/.cargo/
DOCKER_ARGS     := run $(CARGO_CACHE) -e CARGO_FLAGS=$(CARGO_FLAGS) -v $(PWD):/volume:Z -w /volume -t $(MUSL_DOCKER)

all: debug

docker:
        docker pull $(MUSL_DOCKER)

debug: docker
        docker $(DOCKER_ARGS) sh -c "make -k -f Makefile.debug && $(CHOWN_CMD)"

release: docker
        docker $(DOCKER_ARGS) sh -c "make -k -f Makefile.release && $(CHOWN_CMD)"

.PHONY: all docker debug release

The Docker image rust_build_nightly_2017-06-13 is a very light modification of clux/muslrust that pins a specific Rust nightly. In fact, after writing this Makefile, I contributed it upstream.
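For reference, such a pin can be as small as a one-line Dockerfile; the tag name here is hypothetical, so check the clux/muslrust repository for the actual tagging scheme:

```dockerfile
# Hypothetical Dockerfile: pin the upstream musl builder image to one nightly,
# then push the result to the private registry referenced in the Makefile.
FROM clux/muslrust:nightly-2017-06-13
```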

Also, the chown commands are necessary: Docker runs as root, and there's frankly no better way to make the build artifacts inherit the invoking user's UID.

A plus side of having a Docker-based build system is that the artifacts are homogeneous - Travis, a MacBook, and a Thinkpad running Linux will all produce the same binaries. This removes the requirement for each developer to have:

  • A rust toolchain (although it’s a good idea since the Docker build is slooow)
  • A musl toolchain

After these have been built by Travis, they’re tarred up and deployed to our instance of Artifactory:

tarball_name="${application_name}_${TRAVIS_TAG}" # $TRAVIS_TAG holds the git tag in a Travis build
tar -czvf $tarball_name target/release/<my_binary_name>
curl -T $tarball_name <artifactory_url>

A general deployment template I recommend is a simple one based on symlinks:

lrwxrwxrwx   1 rusty rusty  31 Jul 26 16:36 my-app -> /usr/local/my-app-1.0.1
drwxr-xr-x   2 rusty root 4096 Jul 20 17:16 my-app-1.0.0
drwxr-xr-x   2 rusty root  153 Jul 26 16:54 my-app-1.0.1

This way, rolling back a bad deploy is just a matter of re-pointing the symlink at the previous version and restarting the service.
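Concretely, a rollback is two commands. Here's a sketch using a scratch directory instead of /usr/local, with paths and versions mirroring the listing above:

```shell
# Recreate the layout from the listing above in a scratch directory
mkdir -p /tmp/deploy-demo/my-app-1.0.0 /tmp/deploy-demo/my-app-1.0.1
ln -sfn /tmp/deploy-demo/my-app-1.0.1 /tmp/deploy-demo/my-app  # current release

# Roll back: re-point the symlink at the previous version
ln -sfn /tmp/deploy-demo/my-app-1.0.0 /tmp/deploy-demo/my-app
readlink /tmp/deploy-demo/my-app  # -> /tmp/deploy-demo/my-app-1.0.0
# On a real host, follow up with: systemctl restart my-app
```

The -n flag matters: without it, ln would create the new link *inside* the directory the old symlink points to instead of replacing the symlink itself.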

We use Salt for deploys but this part isn’t too important. You just need a way to get your desired tarball version from Artifactory (or wherever you hosted it) and untarred into /usr/local/.
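For illustration, the fetch-and-extract step in Salt could look roughly like this - the state name, URL, and version are placeholders, and depending on your Salt version archive.extracted may want a source_hash instead of skip_verify:

```yaml
# Hypothetical Salt state: fetch the tarball and unpack it into /usr/local
my-app-tarball:
  archive.extracted:
    - name: /usr/local/my-app-1.0.1
    - source: <artifactory_url>/my-app_1.0.1.tar.gz
    - skip_verify: True
```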

Also important is to restart the service when the symlink changes. Salt lets us do that with a concise syntax:

{{ prefix }}-restart:
  cmd.run:
    - name: systemctl restart my-app.service
    - require:
      - file: {{ prefix }}-symlink
    - onchanges:
      - file: {{ prefix }}-symlink
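The {{ prefix }}-symlink state it depends on can be a plain file.symlink; the paths here are illustrative:

```yaml
# Hypothetical Salt state managing the "current version" symlink
{{ prefix }}-symlink:
  file.symlink:
    - name: /usr/local/my-app
    - target: /usr/local/my-app-1.0.1
```

Because the restart state uses onchanges, the service only restarts when this symlink actually changes target.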

Systemd files

If you don’t handle daemonization within your application, then Type=simple is your friend. This is what a basic systemd unit file for a binary looks like:

[Unit]
Description=My Rust application
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
ExecStart=/usr/local/<my_binary_location>/<my_binary> \
  --long --contrived --command --line --params \
  --stretching --to -a --new --line

[Install]
WantedBy=multi-user.target


Systemd magically takes care of daemonization, forking, and PID files for you.

There's nothing particularly Rust-specific here, except that an environment file is handy for enabling/disabling RUST_BACKTRACE. The systemd directive EnvironmentFile points at a text file of newline-separated VAR=VALUE pairs. Simple.
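For example - the path here is arbitrary, anywhere readable by systemd works:

```ini
# /etc/sysconfig/my-app, referenced from the [Service] section with:
#   EnvironmentFile=/etc/sysconfig/my-app
RUST_BACKTRACE=1
```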

Call this my-app.service, copy it to /etc/systemd/system, and run systemctl daemon-reload. After this, you can run systemctl enable my-app, systemctl start my-app, systemctl status my-app, etc.

Different parameters

A trick with systemd is that you can instantiate variations of your service. First, the unit file must be named with an @ suffix - my-app@.service. Now you can instantiate a variation by doing:

systemctl start my-app@param1

In the unit file, the basic substitution is %i, which expands to whatever follows the @:

ExecStart=/usr/local/my-app/my-binary --%i

So to run my-binary --param1 you use systemctl start my-app@param1, and so on.
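Putting it together, a minimal template unit might look like this - a sketch, with the Description and [Install] lines being my additions:

```ini
# my-app@.service: %i expands to the instance name after the @
[Unit]
Description=My Rust application (%i instance)

[Service]
Type=simple
ExecStart=/usr/local/my-app/my-binary --%i

[Install]
WantedBy=multi-user.target
```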

Each instance logs to the journal under its own identifier:

journalctl -u my-app@param1
<param 1 output>
journalctl -u my-app@param2
<param 2 output>