Deploy Rust and Go on CentOS


Some build and deploy ideas for Go and Rust static binaries on CentOS: systemd units, RPMs, etc.

A recap of some things I learned recently while packaging and distributing Go and Rust projects on CentOS hosts.

Go + rpm

RPM spec file

Until now I had never released any of my software as an RPM. I just dropped minimal instructions into README.md and called it a day:

$ sudo yum install <list of dependencies>
$ wget <link_to_my_project_binary_release>
$ sudo mv binary /usr/bin/
$ sudo chmod +x binary

Even worse is software where I also had systemd unit files:

$ sudo cp file.service /etc/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl enable my.service
$ sudo systemctl start my.service

Unfortunately, regardless of how well-intentioned my users are (public users for my open source projects, coworkers for my company/private projects), requiring them to execute a handful of shell commands isn't great UX.

I pieced together an RPM spec file for my project goat, and installation (on RPM and systemd-enabled distros) became a breeze.

The first part is simple, some naming and descriptions:

%define pkgname goat

Name: %{pkgname}
Version: %{_version}
Release: 1%{?dist}
Summary: Attach and mount EBS volumes

License: BSD 3-clause
URL: https://github.com/sevagh/goat

As we see here, pkgname is defined as goat and used throughout the rest of the spec file, but we never %define _version - that's because I pass the version in from the outside, so there's a single source of version truth: @rpmbuild [...] --define "_version $(VERSION)".
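For illustration, the Makefile side of that single source of truth might look like this (deriving VERSION from the latest git tag is just one hypothetical approach):

# hypothetical: the one place the version is defined
VERSION := $(shell git describe --tags --abbrev=0)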

Then we declare the source files:

Source0: %{pkgname}
Source1: %{pkgname}.service

This says that the files included in my RPM are goat (the Go binary, built with go build) and goat.service, the systemd unit file.

Here we define the requirements:

Requires: systemd mdadm

Some unimportant steps (in goat’s case at least):

%description
Automatically attach and mount EBS volumes to a running EC2 instance.


#%prep
#%setup
#%build

Now for the meat of the install phase:

%install
%{__mkdir} -p %{buildroot}/%{_bindir}
%{__mkdir} -p %{buildroot}/%{_unitdir}
%{__install} -m0755 %{SOURCE0} %{buildroot}/%{_bindir}/%{pkgname}
%{__install} -m0644 %{SOURCE1} %{buildroot}/%{_unitdir}/%{pkgname}.service


%files
%{_bindir}/%{pkgname}
%{_unitdir}/%{pkgname}.service

These are all spec file macros that I learned from the excellent documentation. Read the lines and familiarize yourself with them. It basically says to install the binary and the systemd unit file into their respective locations (/usr/bin/ and /usr/lib/systemd/system/).
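You can check what these macros expand to with rpm --eval (note that %{_unitdir} is only defined if the systemd RPM macros are installed):

$ rpm --eval '%{_bindir}'
/usr/bin
$ rpm --eval '%{_unitdir}'
/usr/lib/systemd/system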

Finally, some pre/post shell script sections to run some systemd commands.

Post-install: systemctl daemon-reload if it's the first time goat is being installed:

%post
if [ $1 -eq 1 ]; then
        /bin/systemctl daemon-reload >/dev/null 2>&1 || :
fi
#/bin/systemctl enable goat.service >/dev/null 2>&1 || :

You can choose to enable your service (and even start it) here but I chose to allow my users to have more control over what goat does, and have my RPM just install it.

The pre-uninstall phase: disable and stop goat:

%preun
if [ $1 -eq 0 ] ; then
        # Package removal, not upgrade
        /bin/systemctl disable goat.service >/dev/null 2>&1 || :
        /bin/systemctl stop goat.service >/dev/null 2>&1 || :
fi

The post-uninstall phase: another daemon-reload:

%postun
/bin/systemctl daemon-reload >/dev/null 2>&1 || :
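The $1 checks in these scriptlets work because RPM passes each phase a count of installed instances:

# $1 as passed by RPM to each scriptlet:
#   %post:   1 = first install, 2 = upgrade
#   %preun:  0 = full removal,  1 = upgrade
#   %postun: 0 = full removal,  1 = upgrade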

rpmlint and rpmbuild

This is the Make rule I wrote for goat to generate RPMs:

@rpmlint specfile.spec
@rpmbuild -ba specfile.spec --define "_sourcedir $$PWD" --define "_version $(VERSION)"

rpmlint does what the name suggests: lints your specfile. Useful.

The rpmbuild command has two defines. The first was mentioned above: _version is sourced from the same place in the Makefile ($(VERSION)), so I don't have to run around and change a version string in five different places when making a release.

The other define is _sourcedir $$PWD, which localizes the build phase to $PWD. This way I can run the rpmbuild command from the goat repo.
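Without that define, rpmbuild would look for Source0 and Source1 in the default source tree, which you can inspect with rpm --eval (the exact path varies per user):

$ rpm --eval '%{_sourcedir}'
/home/<user>/rpmbuild/SOURCES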

Go version from Makefile

Tangentially, this is how I pass the same $(VERSION) to the Go code itself:

@go build -ldflags "-X main.VERSION=$(VERSION)" .

In the Go code this is how it’s used:

// in main.go
package main

import "fmt"

var VERSION string // populated at build time via -ldflags "-X main.VERSION=..."

func main() { fmt.Println("goat version:", VERSION) }
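To illustrate, with the version baked in at build time (the version string here is hypothetical):

$ go build -ldflags "-X main.VERSION=1.2.3" .
$ ./goat
goat version: 1.2.3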

Rust + systemd

Statically compiled Rust

I prefer to compile our Rust binaries against musl for static linking and easy deployment. A problem I run into frequently when deploying Rust binaries on CentOS 7 is: myrustbinary: /lib64/libc.so.6: version 'GLIBC_2.18' not found (required by myrustbinary)

CentOS 7's glibc is capped at 2.17, so Rust binaries compiled against the newer glibc on my Fedora laptop hit this error.
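One local workaround is to target musl directly with rustup (the binary name here is hypothetical); ldd confirms the result is fully static:

$ rustup target add x86_64-unknown-linux-musl
$ cargo build --release --target x86_64-unknown-linux-musl
$ ldd target/x86_64-unknown-linux-musl/release/my-binary
        not a dynamic executable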

Still, scping binaries from a laptop isn't a good solution even if it did work. I use clux/muslrust to build a statically compiled musl binary in our Travis-CI job. The Makefile looks like this:

CARGO_CACHE     := -v $(HOME)/.cargo-docker-cache:/root/.cargo:Z
MUSL_DOCKER     := <url to our private docker registry>/rust_build_nightly_2017-06-13
CHOWN_CMD       := chown -R $$UID:$$UID ./ /root/.cargo/
DOCKER_ARGS     := run $(CARGO_CACHE) -e CARGO_FLAGS=$(CARGO_FLAGS) -v $(PWD):/volume:Z -w /volume -t $(MUSL_DOCKER)

all: debug

docker:
        docker pull $(MUSL_DOCKER)

debug: docker
        docker $(DOCKER_ARGS) sh -c "make -k -f Makefile.debug && $(CHOWN_CMD)"

release: docker
        docker $(DOCKER_ARGS) sh -c "make -k -f Makefile.release && $(CHOWN_CMD)"

.PHONY: all debug release

The docker image rust_build_nightly_2017-06-13 is a very light modification of clux/muslrust to pin a version of Rust nightly. In fact after writing this Makefile, I contributed it upstream.

Also, all of the chown commands are necessary since Docker runs as root, and there's frankly no better solution to make it inherit the user's UID.

A plus side of a Docker-based build system is that the artifacts are homogeneous: Travis, a MacBook, and a ThinkPad running Linux will all produce the same binaries. This removes the requirement for each developer to have:

  • A Rust toolchain (although having one is a good idea, since the Docker build is slooow)
  • A musl toolchain

After these have been built by Travis, they’re tarred up and deployed to our instance of Artifactory:

tarball_name="${application_name}_${TRAVIS_TAG}.tar.gz" # $TRAVIS_TAG holds the git tag in a Travis build
tar -czvf "$tarball_name" target/release/<my_binary_name>
curl -T "$tarball_name" <artifactory_url>

A general deployment template I recommend is a simple one based on symlinks:

lrwxrwxrwx   1 rusty rusty  31 Jul 26 16:36 my-app -> /usr/local/my-app-1.0.1
drwxr-xr-x   2 rusty root 4096 Jul 20 17:16 my-app-1.0.0
drwxr-xr-x   2 rusty root  153 Jul 26 16:54 my-app-1.0.1

This way, rolling back a bad deploy is as easy as relinking the old version and restarting.
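For example, assuming the layout above:

$ sudo ln -sfn /usr/local/my-app-1.0.0 /usr/local/my-app
$ sudo systemctl restart my-app.service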

We use Salt for deploys, but this part isn't too important: you just need a way to fetch the desired tarball version from Artifactory (or wherever you host it) and untar it into /usr/local/.

Also important is to restart the service when the symlink changes. Salt lets us do that with a concise syntax:

{{ prefix }}-restart:
  cmd.run:
    - name: systemctl restart my-app.service
    - require:
      - file: {{ prefix }}-symlink
    - onchanges:
      - file: {{ prefix }}-symlink
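For completeness, the symlink state that the require/onchanges pair references might look like this (a sketch; the version variable is hypothetical):

{{ prefix }}-symlink:
  file.symlink:
    - name: /usr/local/my-app
    - target: /usr/local/my-app-{{ version }}
    - force: True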

Systemd files

If you don’t handle daemonization within your application, then Type=simple is your friend. This is what a basic systemd unit file for a binary looks like:

[Unit]
Description=My Rust application
Documentation=<link-to-documentation>
Requires=network.target remote-fs.target
After=network.target remote-fs.target
ConditionPathExists=/usr/local/<where-you-put-your-binaries>

[Service]
Type=simple
EnvironmentFile=/var/lib/my-rust-application.env
User=rusty
Group=rusty
ExecStart=/usr/local/<my_binary_location>/<my_binary> \
  --long --contrived --command --line --params \
  --stretching --to -a --new --line
SyslogIdentifier=my-binary
Restart=always

[Install]
WantedBy=multi-user.target

Systemd magically takes care of daemonization, forking, and PID files for you.

There's nothing particularly Rust-specific here, except that the env file is useful for enabling/disabling RUST_BACKTRACE. The systemd EnvironmentFile directive points to a text file of newline-separated VAR=VALUE pairs. Simple.
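For example, the env file might contain (RUST_LOG assumes your application uses the env_logger crate; RUST_BACKTRACE is built into Rust):

# /var/lib/my-rust-application.env
RUST_BACKTRACE=1
RUST_LOG=info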

Call this my-app.service, copy it to /etc/systemd/system, and run systemctl daemon-reload. After this, you can run systemctl enable my-app, systemctl start my-app, systemctl status my-app, etc.

Different parameters

A trick with systemd is that you can instantiate variations of your service. First, name your unit file my-app@.service. Now you can start an instance with a parameter:

systemctl start my-app@param1

In the unit file, the basic substitution is %i:

ExecStart=/usr/local/my-app/my-binary \
  --%i

So to run my-binary --param1 you use systemctl start my-app@param1, and so on.
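You can also thread %i through other directives so each instance is distinguishable in the journal; a sketch:

[Service]
ExecStart=/usr/local/my-app/my-binary --%i
SyslogIdentifier=my-app-%i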

Check the syslog with the same identifier:

journalctl -u my-app@param1
<param 1 output>
journalctl -u my-app@param2
<param 2 output>