In Unlock The Secret Powers of the New Relic Ruby Agent, I shared how to gain richer telemetry data from your Ruby applications. As with any technology, especially open source projects, we’re constantly improving our Ruby agent. This means I’m back with more tips on how to unlock even MORE secret powers we discovered during our efforts.

Along the way to open sourcing the Ruby agent, we moved our continuous integration (CI) from an internally hosted Travis CI implementation to GitHub Actions. We learned a lot about GitHub Actions, and it would be a shame to let this knowledge remain tucked away in the dark shadows of our repositories. While Ruby-themed, this post isn’t filled with Ruby tidbits. Instead, I’m going to show Ruby developers how to step outside their comfort zones and into the realm of JavaScript, so that they can build reliable continuous integration workflows.


The New Relic Ruby agent’s continuous integration workflow

First, let’s establish some of the foundational work we on the Ruby agent team completed. As part of our effort to invest more in open source, we built a continuous integration workflow that can reliably build binaries for every Ruby interpreter from version 2.0 through the current release and run our test suite against each one. Our test matrix expands to more than 150 jobs. That’s a lot of jobs, and we need them to run efficiently to keep total build and test time at a reasonable level. In fact, check out our All Things Open 2020 presentation to learn more about the results we achieved: we boosted test performance/efficiency by 577%, and that’s no small feat. The following tips cover topics that aren’t well documented or that may not be obvious to developers who aren’t JavaScript experts.

GitHub advises that actions be single-purpose, reusable, and shareable. However, large, complex actions also exist, and if they’re well written, they can greatly enhance your workflows by removing repetitive code that’s brittle and hard to maintain in a YAML file. Neither approach should be shunned just because of a design goal envisioned by GitHub.

Getting started with a JavaScript-based GitHub Actions script

In a complex workflow, the biggest challenge is getting a complete solution assembled and working. In such cases, I can’t emphasize enough the value of taking the simplest path: build first, extract later.

It might be tempting to follow a template or blog post that shows how to set up separate repositories for tutorials and demo action scripts. For your first action script, this is an unnecessary burden. Unless you know you’re planning to publish and share a solution that you’ll maintain over the long term, the extra steps of setting up a new repository, documenting it, building test suites, and so on will only get in the way of simply getting things working. It’s perfectly OK for actions to live inside the main project repository.

In this example, you’ll follow GitHub’s convention of keeping GitHub-related files in a .github folder at the project’s root. Each action script you build lives in its own folder, .github/actions/<action-name>, alongside an action.yml metadata file that declares the action’s inputs and entry point. A locally hosted action is referenced in workflow files via a relative path from the project’s root:

   - name: Build Ruby ${{ matrix.ruby-version }}
     uses: ./.github/actions/build-ruby
     with:
       ruby-version: ${{ matrix.ruby-version }}

GitHub actions are implemented in either plain JavaScript or TypeScript (which adds static typing and richer object-oriented features). You’ll want to make a careful choice between these options, because much of the effort of setting up your action’s build and runtime environment hinges on that choice. For our purposes, let's stick with plain JavaScript.

There are lots of detailed guides for installing a JavaScript build environment, but in all cases you’ll need Node.js for the JavaScript runtime, Yarn for dependency management, and ncc to compile your action script and its dependencies into a single distributable file. If you’re on macOS, run the following commands:

brew install node
brew install yarn
npm i -g @vercel/ncc

Next, add the two entries shown below to the "scripts" section of the action’s package.json so that everything compiles into the action’s dist folder:

{
  "name": "your-action-name",
  "version": "1.0.0",
  "description": "short action description",
  "main": "index.js",
  "scripts": {
    "lint": "eslint *.js",
    "package": "ncc build index.js -o dist"
  },
  ...

Now install the GitHub Actions Toolkit components that give you access to the building blocks you’ll need for your action script:

npm install @actions/cache  # for interacting with GitHub Actions caches
npm install @actions/core   # for interacting with GitHub Actions core functionality
npm install @actions/exec   # for executing shell commands from JavaScript
npm install @actions/io     # for file I/O operations (like file copy)

Create an index.js file in the action’s root folder

To correctly integrate your action with Node’s concurrency model and with GitHub Actions, you’ll need to pay close attention to how you’re defining your methods and the initial entry into the script. The following is a good way to start:

const os = require('os')
const fs = require('fs')
const path = require('path')
const crypto = require('crypto')

const core = require('@actions/core')
const exec = require('@actions/exec')
const cache = require('@actions/cache')
const io = require('@actions/io')

// promise-based sleep helper (in seconds) so the example below is runnable
function sleep(seconds) {
  return new Promise((resolve) => setTimeout(resolve, seconds * 1000))
}

async function doSomethingUseful() {
  core.startGroup(`Doing something useful`)
  await sleep(1)
  core.endGroup()
}

async function main() {
  try {
    await doSomethingUseful()
  }
  catch (error) {
    core.setFailed(`Action failed with error ${error}`)
  }
}

main()

As you can see above, the main entry function is defined as async, so it will run in a non-blocking, concurrent fashion. For most actions, you’re going to want to wait on specific steps in your action to complete. You’ll also typically declare each new function/step in your action as an async function. Use await to effectively block until your steps run to completion. This pattern starts with the main entry function and continues throughout your entire script.

One final setup component: a pre-commit hook

Because everything needs to be compiled/assembled in order to run once deployed, it’s easy to forget to build and check in dist/index.js. A pre-commit hook ensures the latest build of your action script is always checked in:

#!/bin/sh
set -e
cd .github/actions/your-action-script
yarn run package
exec git add dist/index.js

I find it good practice to save this pre-commit script as a file in the action’s root folder. In the README, provide instructions for activating the hook locally so that all contributors can discover and install it.

Tips and tricks for working with action scripts

Caching in action scripts

Caching is well documented for workflow files, and it’s easier than you might think to do the same within an action script. Controlling caches in the action script instead of the workflow gives you more fine-grained control over when to restore and when to save cache contents. We also found that managing caches in the action script significantly DRY’d up our workflow file, since we had multiple steps and each step needed to declare and restore the cached Ruby binaries.

Here’s how our action script saves and restores the Ruby binaries it builds:

function rubyCachePaths(rubyVersion) {
  return [ `${process.env.HOME}/.rubies/ruby-${rubyVersion}` ]
}

function rubyCacheKey(rubyVersion) {
  return `v8-ruby-cache-${rubyVersion}`
}

// attempts to restore a previously built Ruby environment if one exists
async function restoreRubyFromCache(rubyVersion) {
  core.startGroup(`Restore Ruby from Cache`)
  const key = rubyCacheKey(rubyVersion)
  await cache.restoreCache(rubyCachePaths(rubyVersion), key, [key])
  core.endGroup()
}

// archives and caches the current Ruby environment
async function saveRubyToCache(rubyVersion) {
  core.startGroup(`Save Ruby to Cache`)
  const key = rubyCacheKey(rubyVersion)
  await cache.saveCache(rubyCachePaths(rubyVersion), key)
  core.endGroup()
}
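One detail worth knowing: cache.restoreCache resolves to the matched key on a cache hit and undefined on a miss, so its return value can be used to skip an expensive build entirely. Here’s a minimal sketch of that idea (buildRuby is a hypothetical build step, not our actual code):

async function buildRubyUnlessCached(rubyVersion) {
  const key = rubyCacheKey(rubyVersion)
  // restoreCache returns the key that matched, or undefined on a cache miss
  const cacheHit = await cache.restoreCache(rubyCachePaths(rubyVersion), key, [key])
  if (cacheHit) { return }           // already built and restored; nothing to do
  await buildRuby(rubyVersion)       // hypothetical: compile the interpreter
  await saveRubyToCache(rubyVersion)
}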

The rubyVersion comes from the workflow file as part of the matrix; we extract it at the script’s main entry point and pass it through to these functions. We chose caching over building and publishing artifacts because these Ruby binaries weren’t really meant for public consumption, and publishing would have meant setting up separate build repositories that others might come to depend on.
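For reference, here’s a minimal sketch of how the entry point might read that matrix value, assuming the action declares a ruby-version input in its action.yml (matching the workflow snippet shown earlier):

async function main() {
  try {
    // reads the ruby-version input supplied by the workflow's matrix
    const rubyVersion = core.getInput('ruby-version')
    await restoreRubyFromCache(rubyVersion)
    // ... build and test steps run here ...
    await saveRubyToCache(rubyVersion)
  }
  catch (error) {
    core.setFailed(`Action failed with error ${error}`)
  }
}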

Hashing function for file fingerprints

Most actions we saw for saving and restoring bundled Ruby gems depended on a hash key derived from the contents of Gemfile.lock. Since we were publishing a gem ourselves, we didn’t check in a Gemfile.lock, per best practices for authoring Ruby gems. That meant we needed another way to build a fingerprint, so we chose to fingerprint our gem’s .gemspec file. The trick was discovering how to synchronously read and hash that file. Here’s the solution we arrived at:

// fingerprints the given filename, returning a hex string representation
function fileHash(filename) {
  let sum = crypto.createHash('md5')
  sum.update(fs.readFileSync(filename))
  return sum.digest('hex')
}

function bundleCacheKey(rubyVersion) {
  const keyHash = fileHash(`${process.env.GITHUB_WORKSPACE}/newrelic_rpm.gemspec`)
  return `v2-bundle-cache-${rubyVersion}-${keyHash}`
}

This is why I included 'crypto' and 'fs' in the earlier example: the 'fs' module lets us read a file’s contents synchronously (that is, with blocking I/O), and the 'crypto' module generates an MD5 digest fingerprint of those contents.
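Putting the key to work looks much like the Ruby cache functions above. Here’s a sketch, with an assumed gem install path (yours will depend on your bundler configuration):

// a sketch: restore bundled gems keyed by the .gemspec fingerprint
async function restoreBundleFromCache(rubyVersion) {
  const paths = [ `${process.env.HOME}/.bundle` ]   // illustrative path
  const key = bundleCacheKey(rubyVersion)
  await cache.restoreCache(paths, key, [key])
}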

Write utility functions—they’ll make your life much easier

As we built our action script, we discovered that writing small utility functions made our lives much easier. Because we don’t use JavaScript in our day-to-day work, these smaller functions gave us a safe way to avoid introducing bugs. Here’s an example of how we consistently prepended values to environment variables:

// prepends the given value to the named environment variable
function prependEnv(envName, envValue, divider=' ') {
  let existingValue = process.env[envName]
  if (existingValue) {
    envValue += `${divider}${existingValue}`
  }
  core.exportVariable(envName, envValue)
}

// any settings needed specifically for the EOL'd Rubies
async function setupOldRubyEnvironments(rubyVersion) {
  core.startGroup("Setup for EOL Ruby Environments")

  // rubyOpenSslPath is a helper defined elsewhere in our script
  const openSslPath = rubyOpenSslPath(rubyVersion)

  core.exportVariable('OPENSSL_DIR', openSslPath)

  prependEnv('LDFLAGS', `-L${openSslPath}/lib`)
  prependEnv('CPPFLAGS', `-I${openSslPath}/include`)
  prependEnv('PKG_CONFIG_PATH', `${openSslPath}/lib/pkgconfig`, ':')

  core.endGroup()
}
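To make the behavior concrete, here’s what prependEnv does when a variable is already set (the paths are made up):

// suppose the runner already has LDFLAGS="-L/usr/local/lib"
prependEnv('LDFLAGS', '-L/opt/openssl/lib')
// the LDFLAGS exported to subsequent steps is now:
//   "-L/opt/openssl/lib -L/usr/local/lib"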

Download in parallel; install serially

When downloading large files, we took advantage of Node’s concurrency: we download in parallel but run installs serially, so we don’t have to contend with package manager lock race conditions. The following example shows how to start JavaScript Promises and then resolve them:

// The older Rubies also need an older MySQL that was built against the older
// OpenSSL libraries. Otherwise the mysql adapter will segfault in Ruby because
// it attempts to dynamically link to the 1.1 series while Ruby links against
// the 1.0 series.
async function downgradeMySQL() {
  core.startGroup(`Downgrade MySQL`)

  const pkgDir = `${process.env.HOME}/packages`
  const pkgOption = `--directory-prefix=${pkgDir}/`
  const mirrorUrl = 'https://mirrors.mediatemple.net/debian-security/pool/updates/main/m/mysql-5.5'

  // the following all execute in parallel
  const promise1 = exec.exec('sudo', ['apt-get', 'remove', 'mysql-client'])
  const promise2 = exec.exec('wget', [pkgOption, `${mirrorUrl}/libmysqlclient18_5.5.62-0%2Bdeb8u1_amd64.deb`])
  const promise3 = exec.exec('wget', [pkgOption, `${mirrorUrl}/libmysqlclient-dev_5.5.62-0%2Bdeb8u1_amd64.deb`])

  // wait for the parallel processes to finish
  await Promise.all([promise1, promise2, promise3])

  // these execute serially
  await exec.exec('sudo', ['dpkg', '-i', `${pkgDir}/libmysqlclient18_5.5.62-0+deb8u1_amd64.deb`])
  await exec.exec('sudo', ['dpkg', '-i', `${pkgDir}/libmysqlclient-dev_5.5.62-0+deb8u1_amd64.deb`])

  core.endGroup()
}

Getting the output of shell commands

Unlike most languages I’ve worked with, running a shell command from JavaScript with @actions/exec returns the exit code rather than whatever the command writes to STDOUT. Also, the exec function returns a Promise, so it’s non-blocking. Be aware of both of these behaviors so that later steps in your action don’t fail in surprising ways. If you want to run a shell command and capture its output, wire up callback listeners like this:

// invokes the @actions/exec exec function with listeners that capture the
// output stream as the return result
async function execute(command) {
  try {
    let outputStr = ''

    const options = {}
    options.listeners = {
      stdout: (data) => { outputStr += data.toString() },
      stderr: (data) => { core.error(data.toString()) }
    }

    await exec.exec(command, [], options)

    return outputStr
  }
  catch (error) {
    console.error(error.toString())
  }
}
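Calling it is then a one-liner inside any async function. The command below is just an illustration:

// example usage: capture the active Ruby version string
const rubyVersionString = await execute('ruby --version')
core.info(`Building against ${rubyVersionString}`)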

Using tree to inspect your environment setup

When you’re in unfamiliar territory, it can be challenging to figure out where things get installed and where things execute. In such cases, we use the Linux tree command to see what’s installed where. You can use tree in the action script as well as in the workflow. Using tree on an Ubuntu runner is as simple as this:

- name: show me the tree
  run: |
    sudo apt-get install tree
    tree .
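From inside an action script, the same inspection is a couple of exec calls. Here’s a sketch, assuming an Ubuntu runner and an async context:

// install and run tree from within the action script
await exec.exec('sudo', ['apt-get', 'install', '-y', 'tree'])
await exec.exec('tree', ['.'])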

The pièce de résistance: annotations

When you have failures among more than 150 running jobs, the last thing you want to do is navigate into each of the various jobs to see exactly why things failed. Annotations in GitHub offer a way to see that information in summary form. However, this isn’t a well-documented feature, and without it, debugging even one or two job failures can be a major chore.

So, let’s fix that. We can capture the output of the failing jobs and write those messages where GitHub will grab them, format them, and display them in the workflow’s Annotations section.

Fortunately, we have a home-grown test framework called Multiverse. This tool is similar to Appraisal and other Ruby gems that run test suites under varying combinations of Ruby binaries and bundled gems. Multiverse captures the output of every test run, and we simply extended that existing functionality so that whenever a failing suite is detected, the output, which is typically written to the console, is also written to an errors.txt file. Here’s how we do that:

# Saves the failing output to the working directory of the container
# where it is later read and output as annotations of the GitHub workflow
def self.save_output_to_error_file(lines)
  # Because the various environments potentially run in separate threads to
  # start their processes, make sure we don't blatantly interleave output.
  @output_lock.synchronize do
    filepath = ENV["GITHUB_WORKSPACE"]
    output_file = File.join(filepath, "errors.txt")

    existing_lines = []
    if File.exist?(output_file)
      existing_lines += File.read(output_file).split("\n")
    end

    lines = lines.split("\n") if lines.is_a?(String)
    File.open(output_file, 'w') do |f|
      f.puts existing_lines
      f.puts "*" * 80
      f.puts lines
    end
  end
end

The @output_lock mutex prevents concurrently running suites from interleaving their output. Now, back over in the action script, we look for that errors.txt file and transcribe its contents into annotations like this:

const fs = require('fs')
const core = require('@actions/core')
const command = require('@actions/core/lib/command')

async function main() {
  const workspacePath = process.env.GITHUB_WORKSPACE
  const errorFilename = `${workspacePath}/errors.txt`

  try {
    if (fs.existsSync(errorFilename)) {
      let lines = fs.readFileSync(errorFilename).toString('utf8')
      command.issueCommand('error', undefined, lines)
    }
    else {
      core.info(`No ${errorFilename} present. Skipping!`)
    }
  }
  catch (error) {
    core.setFailed(`Action failed with error ${error}`)
  }
}

main()

Note that since annotations had nothing to do with building Ruby binaries, we wrote the annotation step as its own self-contained JavaScript action. This action needed access to 'fs' (for the filesystem), 'core', and the 'command' module within that 'core' library in order to write to the Annotations section. All the script has to do is read the contents of errors.txt, if present, and call issueCommand with the 'error' tag and the lines from the file.
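Under the hood, issueCommand simply prints a workflow command string (::error::...) to STDOUT, which the runner converts into an annotation. If you’d rather not reach into the internals of 'core', the public core.error function produces the same kind of error annotation; a minimal sketch:

// an alternative using only the public @actions/core API
if (fs.existsSync(errorFilename)) {
  core.error(fs.readFileSync(errorFilename).toString('utf8'))
}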

The final result is that each failing suite’s output appears, in summary form, in the workflow’s Annotations section.

Now we can see at a glance whether there’s a general failure happening across many jobs or whether each job is exhibiting its own unique point of failure. We no longer have to drill down into each failed job, and then through thousands of lines of output, to find exactly what went wrong.

Because annotating is a separate action, we have to call it out in the workflow wherever it’s needed. At the end of each step that might have errors to annotate, we do this:

   - name: Annotate errors
     if: ${{ failure() }}
     uses: ./.github/actions/annotate

Closing remarks

GitHub Actions is a powerful solution for automating pretty much anything in your workflow. While our first instinct was to avoid JavaScript because we aren’t JavaScript specialists, we soon learned to work with it within the context GitHub provides. There’s a rich ecosystem built around JavaScript here because it’s the language GitHub chose for its Actions Toolkit. Hopefully, these tips and tricks give you a running start on your own efforts.

To learn more about the Ruby agent and our other open source contributions, check out all our projects at New Relic Open Source.