Welcome to the Gofer documentation! This documentation is a reference for all available features and options of Gofer.

Gofer: Run short-lived jobs easily.



Gofer is an opinionated, cloud-native, container-focused, continuous thing do-er, that focuses on simplicity and usability for both developers and ops.

You deploy it as a single static binary service, pass it declarative configurations written in real programming languages, and watch as it automatically handles periodic scheduling of your automation workloads.

Gofer runs your workloads on your choice of container scheduler: Nomad, K8s, or local Docker.

Its purpose is to run short-term jobs such as code linters, build tools, tests, port scanners, ETL tooling, and anything else you can package into a Docker container and run as a result of some other event happening.


  • Deploy it as a single static binary.
  • Write your pipelines in a programming language you're familiar with (Go or Rust for now).
  • Pluggable: Write your own triggers, shared tasks, and more in any language (through GRPC).
  • DAG(Directed Acyclic Graph) support.
  • Reliability tooling: A/B test, version, and canary new pipelines.
  • Bring your own everything! Secret store, object store, container scheduler. Gofer has the interfaces to support all of them.


Documentation & Getting Started

If you want to fully dive into Gofer, check out the documentation site!


Extended installation information is available through the documentation site.

Download a specific release:

You can view and download releases by version here.

Download the latest release:

  • Linux: wget

Build from source:

You'll need to install protoc and its associated golang/grpc modules first.

  1. git clone && cd gofer
  2. make build OUTPUT=/tmp/gofer

The Gofer binary comes with a CLI to manage the server as well as act as a client.

Dev Setup

Gofer is set up such that the base run mode is the development mode, so simply running the binary without any additional flags allows easy authless development.

You'll need to install the following first:

To run Gofer dev mode:

To build protocol buffers:

Run from the Makefile

Gofer uses flags, env vars, and files to manage configuration (in order of precedence). The Makefile already includes all the commands and flags you need to run in dev mode; simply run make run.

In case you want to run without the Makefile, simply run:

go build -o /tmp/gofer
DEBUG=true SEMVER=0.0.0 /tmp/gofer service start

Editing Protobufs

Gofer uses grpc and protobufs to communicate with both plugins and provide an external API. These protobuf files are located in /proto. To compile new protobufs once the original .proto files have changed you can use the make build-protos command.

Editing Documentation

Documentation is done with mdbook.

To install:

cargo install mdbook
cargo install mdbook-linkcheck

Once you have mdbook you can simply run make run-docs to give you an auto-reloading dev version of the documentation in a browser.

Gofer's Philosophy

Things should be easy and fast. For if they are not, people will look for an alternate solution.

Gofer focuses on the usage of common docker containers to run workloads that don't belong as long-running applications. The ability to run containers easily is a powerful tool for users who need to run various short-term workloads and don't want to care about the idiosyncrasies of the tooling that they run on top of.

How do I use Gofer? What's a common workflow?

  1. Create a docker container with the workload/code you want to run.
  2. Create a configuration file (kept with your workload code) in which you tell Gofer what containers to run and when they should be run.
  3. Gofer takes care of the rest!

What problem is Gofer attempting to solve?

The current landscape for running short-term jobs is heavily splintered and could do with some centralization and sanity.

1) Tooling in this space is often CI/CD focused and treats gitops as a core tenet.

This is actually a good thing in most cases and something that most small companies should embrace. The guarantees and structure of gitops is useful for building and testing software.

Eventually as your workload grows though, you'll start to notice that tying your short-term job runner to gitops leaves a few holes in the proper management of those jobs. Gitops works for your code builds, but what about things in different shapes? Performing needful actions on a schedule (or a trigger) like database backups, port scanning, or maybe just smoke testing leaves something to be desired from the gitops model.

Let's take a look at an example:

Let's imagine you've built a tool that uses static analysis to examine PRs for potential issues1. The philosophy of gitops would have you store your tool's job settings in the same repository as the code it is examining. This ties the static analysis job to the version of code on a specific branch2.

This model of joining your job to the commit it's operating on works well until you have to fix something outside of its narrow paradigm.

Suddenly you have to fix a bug in your static analysis tool and it's a breaking change.

Here is how it would work in the realm of long-running jobs traditionally:

  1. You fix the bug
  2. You push your code
  3. You create a new release
  4. You update to the new version.

Done! The users of your tool (builds that depend on the static analysis tooling) see the breakage fix instantly.

Here is how it would work in a workload tied to gitops:

  1. You fix the bug
  2. You push your code
  3. All users who are working in the most recent commit are happy.
  4. All previous users who are working in an old commit are terribly unhappy as they do not yet have the update. And as such they are still calling upon your tool in the old, broken way. They receive weird breakage messages from their trusted static analysis tooling.
  5. You stress eat from having to figure out a way to tell everyone on old commits to update their branch.

This is due to the lack of an operator-led deployment mechanism for gitops related tooling. If you have to make a breaking change, it's either each user performs a rebase or they're broken until further notice.

This leads to a poor user experience for the users who rely on that job and a poor operator experience for those who maintain it.

When this happens it's a headache. You can try different ways of getting around this problem, but they all have their drawbacks.

How does Gofer help?

Instead of tying itself to gitops wholly, Gofer leaves it as an option for the job implementer. Each pipeline exists independent of a particular repository, while providing the job operator the ability to use triggers to still implement gitops related features. Now the structure of running our static analysis tool becomes "code change is detected" -> "pipeline is run".

It's that simple.

Separating from gitops also allows us to treat our job as we would our long-running jobs. We can do things like canary out new versions, Blue/Green test and more.

2) Tooling in this space can lack testability.

Ever set up a CI/CD pipeline for your team and ended up with a string of commits simply testing or fixing bugs in your assumptions of the system? This is usually due to not understanding how the system works, what values it will produce, or testing being difficult.

These are issues because most CI/CD systems make it hard to test locally. In order to support a wide array of job types (and lean toward being fully gitops focused), most of them run custom agents which in turn run the jobs you want.

This can be bad, since it's usually non-trivial to understand exactly what these agents will do once they handle your workload. Dealing with these agents can also be an operational burden. Operators are generally unfamiliar with these custom agents and it doesn't play to the strengths of an ops team that is already focused on other complex systems.

How does Gofer help?

Gofer plays to the strengths that both operators and users already have. Instead of implementing a custom agent, Gofer runs all containers via an already configured cluster that you're already running. This makes it so the people controlling the infrastructure your workloads are running on don't have to understand anything new. Once you understand how to run a container, everything else follows naturally.

All Gofer does is run the same container you know locally and pass it the environment variables you expect.


3) Tooling in this space can lack simplicity.

Some user experience issues I've run into using other common CI/CD tooling:

  • 100 line bash script (filled with sed and awk) to configure the agent's environment before my workload was loaded onto it.
  • Debugging docker in docker issues.
  • Reading the metric shit ton of documentation just to get a project started.
  • Trying to understand a groovy script nested so deep it got into one of the layers of hell.
  • Dealing with the security issues of a way too permissive plugin system.
  • Agents giving vague and indecipherable errors to why my job failed.

How does Gofer help?

Gofer aims to use tooling that users are already familiar with and get out of the way. Running containers should be easy. Forgoing things like custom agents and being opinionated in how workloads should be run allows users to understand the system immediately and be productive quickly.

Familiar with the logging, metrics, and container orchestration of a system you already use? Great! Gofer will fit right in.

Why should you not use Gofer?

1) You need to simply run tests for your code.

While Gofer can do this, the gitops process really shines here. I'd recommend using any one of the widely available gitops focused tooling. Attempting to do this with Gofer will require you to recreate some of the things these tools give you for free, namely git repository management.

2) The code you run is not idempotent.

Gofer does not guarantee a single run of a container. Even though it makes a best effort, a perfect storm of operator error, trigger errors, or sudden shutdowns could cause multiple runs of the same container.

3) The code you run does not follow cloud native best practices.

The easiest primer on cloud native best practices is the 12-factor guide, specifically the configuration section. Gofer provides tooling for containers to operate following these guidelines, the most important being that your code will need to take configuration via environment variables.

4) The scheduling you need is precise.

Gofer makes a best effort to start jobs on their defined timeline, but it is at the mercy of many parts of the system (scheduling lag, image download time, competition with other pipelines). If you need precise down to the second or minute runs of code Gofer does not guarantee such a thing.

Gofer works better when jobs are expected to run +1 to +5 mins of their scheduled event/time.

Why not use <insert favorite tool> instead?

  • Jenkins (general thing-doer): Supports generally anything you might want to do ever, but because of this it can be operationally hard to manage, usually has massive security issues, and isn't opinionated enough by default to provide users a good interface into how they should be managing their workloads.

  • Buildkite/CircleCI/GitHub Actions/etc (gitops cloud builders): Gitops focused cloud build tooling is great for most situations and probably what most companies should start out using. The issue is that running your workloads can be hard to test since these tools use custom agents to manage those jobs. This causes local testing to be difficult as the custom agents generally work very differently locally. Many times users will fight with yaml and make commits just to test that their job does what they need, due to there being no way to determine that beforehand.

  • ArgoCD (Kubernetes focused CI/CD tooling): In the right direction with its focus on running containers on already established container orchestrators, but Argo is tied to gitops, making it hard to test locally, and also closely tied to Kubernetes.

  • Concourse CI (container focused thing do-er): Concourse is great and where much of the inspiration for this project comes from. It sports a sleek CLI, great UI, and cloud-native primitives that make sense. The drawback of Concourse is that it uses a custom way of managing docker containers that can be hard to reason about. This makes testing locally difficult, and running in production means that your short-lived containers exist on a platform that the rest of your company is not used to running containers on.

  • Airflow (ETL systems): I haven't worked with large scale data systems enough to know deeply about how ETL systems came to be, but (maybe naively) they seem to fit into the same paradigm of "run x thing every time y happens". Airflow was particularly rough to operate in the early days of its release, with security and UX around DAG runs/management being nearly non-existent. As an added bonus, the scheduler regularly crashed from poorly written user workloads, making it a reliability nightmare.

    Additionally, Airflow's model of combining the execution logic of your DAGs with your code led to issues with testing and iterating locally.

    Instead of having tooling specifically for data workloads, it might be easier for both data teams and ops teams to work in the model of distributed cron as Gofer does. Write your stream processing using dedicated tooling/libraries like Benthos (or in whatever language you're most familiar with), wrap it in a Docker container, and use Gofer to manage which containers should run when, where, and how often. This gives you easy testing, separation of responsibilities, and no python decorator spam around your logic.

  • Cadence (ETL systems): I like Uber's Cadence; it does a great job of providing a platform that does distributed cron, and it has some really nifty features by choosing to interact with your workflows at the code level. The ability to bake in sleeps and polls just like you would in regular code is awesome. But just like with Airflow, I don't want to marry my scheduling platform to my business logic. I write the code as I would for a normal application context and I just need something to run that code. When we unmarry the business logic and the scheduling platform, we are able to treat it just like we treat all our other code, which means the code workflows (testing, for example) we were all already used to and the ability to foster code reuse for these same processes.

1: cough cough

2: Here is an example of Buildkite's approach, where your job definition is uploaded on every run via the Buildkite config file at that certain commit.


This software is provided as-is. It's a hobby project, done in my free time, and I don't get paid for doing it.

How does Gofer work?

Gofer works in a very simple client-server model. You deploy Gofer as a single binary to your favorite VPS and you can configure it to connect to all the tooling you currently use to run containers.

Gofer acts as a scheduling middle man between a user's intent to run a container at the behest of an event and your already established container orchestration system.


Interaction with Gofer is mostly done through its command line interface which is included in the same binary as the master service.

General Workflow

  1. Gofer is connected to a container orchestrator of some sort. This can be just your local docker service or something like K8s or Nomad.
  2. It launches its configured triggers (triggers are just docker containers) and these triggers wait for events to happen.
  3. Users create pipelines (by configuration file) that define exactly in which order and what containers they would like to run.
  4. These pipelines don't have to, but usually involve triggers so that pipelines can run automatically.
  5. Either by trigger or manual intervention a pipeline run will start and schedule the containers defined in the configuration file.
  6. Gofer will collect the logs, exit code, and other essentials from each container run and provide them back to the user along with summaries of how that particular run performed.

Trigger Implementation

  1. When Gofer launches the first thing it does is create the trigger containers the same way it schedules any other container.
  2. The trigger containers are all small GRPC services that are implemented using a specific interface provided by the SDK.
  3. Gofer passes the trigger a secret value that only it knows so that the trigger doesn't respond to any requests that might come from other sources.
  4. After the trigger is initialized Gofer will subscribe any pipelines that have requested this trigger (through their pipeline configuration file) to that trigger.
  5. The trigger then takes note of this subscription and waits for the relevant event to happen.
  6. When the event happens it figures out which pipeline should be alerted and sends an event to the main Gofer process.
  7. The main gofer process then starts a pipeline run on behalf of the trigger.


  • Pipeline: A pipeline is a collection of tasks that can be run at once. Pipelines can be defined via a pipeline configuration file. Once you have a pipeline config file you can create a new pipeline via the CLI (recommended) or API.

  • Run: A run is a single execution of a pipeline. A run can be started automatically via triggers or manually via the API or CLI.

  • Trigger: A trigger is an automatic way to run your pipeline. Once mentioned in your pipeline configuration file, your pipeline subscribes to those triggers, passing them conditions on when to run. Once those conditions are met, those triggers will then inform Gofer that a new run should be launched for that pipeline.

  • Task: A task is the lowest unit in Gofer. It is a small abstraction over running a single container. Through tasks you can define what container you want to run, when to run it in relation to other containers, and what variables/secrets those containers should use.

  • Task Run: A task run is an execution of a single task container. Referencing a specific task run is how you can examine the results, logs, and details of one of your tasks.


> I have a job that works with a remote git repository, other CI/CD tools make this trivial, how do I mimic that?

The drawback of this model and architecture is that it does not specifically cater to GitOps. So certain workflows that come out of the box from other CI/CD tooling will need to be recreated, due to Gofer's inherently distributed nature.

Gofer has provided several tooling options to help with this.

There are two problems that need to be solved around the managing of git repositories for a pipeline:

1) How do I authenticate to my source control repository?

Good security practice suggests that you should be managing repository deploy keys per repository, per team. You can potentially forgo the "per team" suggestion if you use a read-only key and the scope of what uses the key isn't too big.

Gofer's suggestion here is to make deploy keys self service and then simply enter them into Gofer's secret store to be used by your pipeline's tasks. Once there you can then use it in each job to pull the required repository.

2) How do I download the repository?

Three strategies:

  1. Just download it when you need it. Depending on the size of your repository and the frequency of the pull, this can work absolutely fine.
  2. Download it as you need it using a local caching git server. Once your repository starts becoming large or you do many pulls quickly, it might make more sense to use a cache1,2. It also makes sense to only download what you need using git tools like sparse checkout.
  3. Use the object store as a cache. Gofer provides an object store to act as a permanent (pipeline-level) or short-lived (run-level) cache for your workloads. Simply store the repository inside the object store and pull down per job as needed.

Feature Guide

Write your pipelines in a real programming language.

Other infrastructure tooling tried configuration languages (yaml, hcl)... and they kinda suck1. The Gofer CLI allows you to create your pipelines in a fully featured programming language. Pipelines can currently be written in Go or Rust2.

DAG(Directed Acyclic Graph) Support.

Gofer provides the ability to run your containers in reference to your other containers.

With DAG support you can run containers:

  • In parallel.
  • After other containers.
  • When particular containers fail.
  • When particular containers succeed.
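As a standalone illustration of what "after other containers" means (this is not Gofer's configuration API), here is how a set of dependency declarations resolves into a valid launch order:

```go
package main

import "fmt"

// launchOrder resolves task dependencies ("task -> tasks it must run after")
// into a valid start order, the same resolution any DAG-aware runner performs.
// Input is assumed to be acyclic, as a pipeline DAG is by definition.
func launchOrder(deps map[string][]string) []string {
	var order []string
	visited := map[string]bool{}
	var visit func(task string)
	visit = func(task string) {
		if visited[task] {
			return
		}
		visited[task] = true
		for _, dep := range deps[task] {
			visit(dep) // dependencies launch first
		}
		order = append(order, task)
	}
	for task := range deps {
		visit(task)
	}
	return order
}

func main() {
	// "build" runs first, then "lint" and "test" may run in parallel,
	// then "notify" runs once both succeed.
	deps := map[string][]string{
		"build":  {},
		"lint":   {"build"},
		"test":   {"build"},
		"notify": {"lint", "test"},
	}
	fmt.Println(launchOrder(deps))
}
```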


Gofer uses GRPC and Protobuf to construct its API surface. This means that Gofer's API is easy to use, well defined, and can easily be developed for in any language.

The use of Protobuf gives us a few main advantages:

  1. The most up-to-date API contract can always be found by reading the .proto files included in the source.
  2. Developing against the API for developers working within Golang/Rust simply means importing the autogenerated proto package.
  3. Developing against the API for developers not working within Go/Rust means simply importing the proto files and generating them for the language you need.

You can find more information on protobuf, proto files, and how to autogenerate the code you need to use them to develop against Gofer in the protobuf documentation.


Gofer allows you to separate out your pipelines into different namespaces, allowing you to organize your teams and set permissions based on those namespaces.


Triggers are the way users can automate their pipelines by waiting on bespoke events (like the passage of time).

Gofer supports any trigger you can imagine by making triggers pluggable and portable3! Triggers are nothing more than docker containers themselves that talk to the main process when it's time for a pipeline to be triggered.

Gofer out of the box provides some default triggers like cron and interval. But even more powerful than that, it accepts any type of trigger you can think up and code using the included SDK.

Triggers are brought up alongside Gofer as long-running docker containers that it launches and manages.

Object Store

Gofer provides a built in object store you can access with the Gofer CLI. This object store provides a caching and data transfer mechanism so you can pass values from one container to the next, but also store objects that you might need for all containers.
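A toy in-memory model of the two lifetimes described above (this is not Gofer's API, just an illustration of the semantics): pipeline-level objects persist across runs, while run-level objects are dropped when the run ends.

```go
package main

import "fmt"

// store models Gofer's object store scopes: pipeline-level objects act as a
// permanent cache, run-level objects as a short-lived one.
type store struct {
	pipeline map[string][]byte // permanent (pipeline-level) cache
	run      map[string][]byte // short-lived (run-level) cache
}

func newStore() *store {
	return &store{pipeline: map[string][]byte{}, run: map[string][]byte{}}
}

func (s *store) putRun(key string, val []byte)      { s.run[key] = val }
func (s *store) putPipeline(key string, val []byte) { s.pipeline[key] = val }

// get checks the run-level cache first, then falls back to pipeline-level.
func (s *store) get(key string) ([]byte, bool) {
	if v, ok := s.run[key]; ok {
		return v, true
	}
	v, ok := s.pipeline[key]
	return v, ok
}

// endRun clears the run-level cache, as happens when a run completes.
func (s *store) endRun() { s.run = map[string][]byte{} }

func main() {
	s := newStore()
	s.putPipeline("repo-cache", []byte("tarball"))
	s.putRun("build-artifact", []byte("binary"))

	s.endRun()

	_, artifactSurvives := s.get("build-artifact")
	_, repoSurvives := s.get("repo-cache")
	fmt.Println(artifactSurvives, repoSurvives) // false true
}
```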

Secret Store

Gofer provides a built in secret store you can access with the Gofer CLI. This secret store provides a way to pass secret values needed by your pipeline configuration into Gofer.


Gofer provides a list of events for the most common actions performed. You can view this event stream via the Gofer API, allowing you to build on top of Gofer's actions and even use Gofer as a trigger.

External Events

Gofer allows triggers to consume external events. This allows for triggers to respond to webhooks from favorite sites like Github and more.

Common Tasks

Much like triggers, Gofer allows users to install "common tasks". Common tasks are Gofer's way of cutting down on some of the setup and allowing containers to be pre-setup by the system administrator for use in any pipeline.

For example, if you wanted to do some common action like post to Slack, it would be annoying to have to set up the container that does this for every pipeline. Instead, Gofer allows you to set it up once and include it everywhere.

Pluggable Everything

Gofer plugs into all your favorite backends your team is already using. This means that you never have to maintain things outside of your wheelhouse.

Whether you want to schedule your containers on K8s or AWS Lambda, or maybe you'd like to use an object store that you're more familiar with in minio or AWS S3, Gofer provides either an already created plugin or an interface to write your own.


1: Initially, why configuration languages are used made sense: namely, lowering the bar for users who might not know how to program and making things simpler overall to maintain (read: not shooting yourself in the foot with crazy inheritance structures). But, in practice, we've found that they kinda suck. Nobody wants to learn yet another language for this one specific thing. Furthermore, using a separate configuration language doesn't allow you to plug into the years of practice/tooling/testing teams have with a certain favorite language.


2: All pipelines eventually reduce to protobuf, so technically, given the correct libraries, your pipelines can be written in any language you like!



Best Practices

In order to schedule workloads on Gofer your code will need to be wrapped in a docker container. This is a short workflow blurb about how to create containers to best work with Gofer.

1) Write your code to be idempotent.

Write your code in whatever language you want, but it's a good idea to make it idempotent. Gofer does not guarantee single container runs (but even if it did that doesn't prevent mistakes from users).
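A minimal sketch of the difference, using a hypothetical backup log: the idempotent version can safely run twice, so an accidental duplicate container run does no harm.

```go
package main

import "fmt"

// Not idempotent: running twice records the backup twice.
func recordBackup(log []string, name string) []string {
	return append(log, name)
}

// Idempotent: running twice leaves the same state as running once.
func ensureBackupRecorded(log []string, name string) []string {
	for _, entry := range log {
		if entry == name {
			return log // already recorded; do nothing
		}
	}
	return append(log, name)
}

func main() {
	log := []string{}
	log = ensureBackupRecorded(log, "2024-01-01")
	log = ensureBackupRecorded(log, "2024-01-01") // duplicate run: no change
	fmt.Println(len(log))                         // 1
}
```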

2) Follow 12-factor best practices.

Configuration is the important one. Gofer manages information into containers by environment variables so your code will need to take any input or configuration it needs from environment variables.

3) Keep things simple.

You could, in theory, create a super complicated graph of containers that run off each other. But the main theme of Gofer is simplicity. Make sure you're thinking through the benefits of managing something in separate containers vs just running a monolith container. There are good reasons for both; always err on the side of clarity and ease of understanding.

4) Keep your containers lean.

Because of the potentially distributed nature of Gofer, the larger the containers you run, the greater the potential lag before your container starts executing. This is because there is no guarantee that your container will end up on a machine that already has the image. Downloading large images takes a lot of time and a lot of disk space.

Troubleshooting Gofer

This page provides various tips on how to troubleshoot and find issues/errors within Gofer.

Debugging triggers

Triggers are simply long running docker containers that internally wait for an event to happen and then communicate with Gofer through a long GRPC poll on what should be the result of that event.

Debugging Common Tasks

Common Tasks are pre-setup containers that run at a user's request. Errors in common tasks should show up as errors for your pipeline as normal.

To aid in debugging common tasks in general, there is a debug common task available. It simply prints out all environment variables found and takes some straight-forward parameters and configurations.

Debugging Custom Tasks

When custom tasks aren't working quite right, it helps to have some simple tasks that you can use to debug. Gofer provides a few of these to aid in debugging.

The debug tasks (each a small container) include ones that:

  • print out all environment variables found;
  • exit with a non-zero exit code, useful for testing that pipeline failures or alerting work correctly;
  • print a couple paragraphs of log lines with 1 second in between, useful as a container that takes a while to finish and for testing that log following works correctly;
  • wait a specified amount of time and then successfully exit.

Getting Started

Let's start by setting up our first Gofer pipeline!

Installing Gofer

Gofer comes as an easy to distribute pre-compiled binary that you can run on your machine locally, but you can always build Gofer from source if need be.

You can download the latest version for linux here:


From Source

Gofer contains protobuf assets which will not get compiled if used via go install. Instead, we can use make to build ourselves an impromptu version.

git clone && cd gofer
make build path=/tmp/gofer
/tmp/gofer --version

Running the Server Locally

Gofer is deployed as a single static binary allowing you to run the full service locally so you can play with the internals before committing resources to it. Spinning Gofer up locally is also a great way to debug "what would happen if?" questions that might come up during the creation of pipeline config files.

Install Gofer

Install Docker

The way in which Gofer runs containers is called a Scheduler. When deploying Gofer at scale we can deploy it with a more serious container scheduler (Nomad, Kubernetes), but for now we're just going to use the default local docker scheduler included. This simply uses your local docker instance to run containers.

But before we use your local docker service... you have to have one in the first place. If you don't have docker installed, the installation is quick. Rather than covering the specifics here you can instead find a guide on how to install docker for your operating system on its documentation site.

Start the server

By default the Gofer binary is able to run the server in development mode. Simply start the service by running:

gofer service start

🪧 The Gofer CLI has many useful commands, try running gofer -h to see a full listing.

Create Your First Pipeline Configuration

Before you can start running containers you must tell Gofer what you want to run. To do this we create what is called a pipeline configuration.

The creation of this pipeline configuration is very easy and can be done in either Golang or Rust. This allows you to use a fully-featured programming language to organize your pipelines, instead of dealing with YAML mess.

Let's Go!

As an example, let's just copy a pipeline that has been given to us already. We'll use Go as our language, which means you'll need to install it if you don't have it. The Gofer repository gives us a simple pipeline that we can copy and use.

Let's first create a folder where we'll put our pipeline:

mkdir /tmp/simple_pipeline

Then let's copy the Gofer provided pipeline's main file into the correct place:

cd /tmp/simple_pipeline

This should create a main.go file inside our /tmp/simple_pipeline directory.

Lastly, let's initialize the new Golang program:

To complete our Go program we simply have to initialize it with the go mod command.

go mod init test/simple_pipeline
go mod tidy

The pipeline we copied above is a very simple pipeline with a few pre-prepared testing docker containers. You should be able to view it using your favorite IDE.

The configuration itself is very simple. Essentially a pipeline consists of a few parts:

  • Some basic attributes so we know what to call it and how to document it.
  • The containers we want to run are defined through tasks.
  • And when we want to automate when the pipeline runs automatically, we can do that through triggers.

Register your pipeline

Now we will register your newly created pipeline configuration with Gofer!

More CLI to the rescue

From your terminal, lets use the Gofer binary to run the following command, pointing Gofer at your newly created pipeline folder:

gofer pipeline create /tmp/simple_pipeline

Examine created pipeline

It's that easy!

The Gofer command line application uses your local Golang compiler to compile, parse, and upload your pipeline configuration to Gofer.

You should have received a success message and some suggested commands:

 ✓ Created pipeline: [simple] "Simple Pipeline"

  View details of your new pipeline: gofer pipeline get simple
  Start a new run: gofer run start simple

We can view the details of our new pipeline by running:

gofer pipeline get simple

If you ever forget your pipeline ID you can list all pipelines that you own by using:

gofer pipeline list

Start a Run

Now that we've set up Gofer, defined our pipeline, and registered it we're ready to actually run our containers.

Press start

gofer run start simple

What happens now?

When you start a run Gofer will attempt to schedule all your tasks according to their dependencies onto your chosen scheduler. In this case that scheduler is your local instance of Docker.

Your run should be chugging along now!

View a list of runs for your pipeline:

gofer run list simple

View details about your run:

gofer run get simple 1

List the containers that executed during the run:

gofer taskrun list simple 1

View a particular container's details during the run:

gofer taskrun get simple 1 <task_id>

Stream a particular container's logs during the run:

gofer taskrun logs simple 1 <task_id>

What's Next?


  1. Keep playing with Gofer locally and check out all the CLI commands.
  2. Spruce up your pipeline definition!
  3. Learn more about Gofer terminology.
  4. Deploy Gofer for real. Pair it with your favorite scheduler and start using it to automate your jobs.

Pipeline Configuration

A pipeline is a directed acyclic graph of tasks that run together. A single execution of a pipeline is called a run. Gofer allows users to configure their pipeline via a configuration file written in Golang or Rust.

The general hierarchy for a pipeline is:

pipeline
    \_ run
         \_ task

Each execution of a pipeline is a run and every run consists of one or more tasks (containers). These tasks are where users specify their containers and settings.


Creating a pipeline involves using Gofer's SDK, currently available in Go or Rust.

Extensive documentation can be found on the SDK's reference page. There you will find most of the features and idiosyncrasies available to you when creating a pipeline.

Small Walkthrough

To introduce some of the concepts slowly, let's build a pipeline step by step. We'll be using Go as our pipeline configuration language, and this documentation assumes you've already set up your project and are operating in a main.go file.

A Simple Pipeline

Every pipeline is initialized with a simple pipeline declaration. It's here that we name our pipeline, giving it a machine-referable ID and a human-readable name.

err := sdk.NewPipeline("my_pipeline", "My Simple Pipeline")

It's important to note here that while your human-readable name ("My Simple Pipeline" in this case) can contain a wide range of characters, the ID can only contain alphanumeric characters and underscores. Any other characters will result in an error when attempting to register the pipeline.

Add a Description

Next we'll add a simple description to remind us what this pipeline is used for.

err := sdk.NewPipeline("my_pipeline", "My Simple Pipeline").
    WithDescription("This pipeline is purely for testing purposes.")

The SDK uses a builder pattern, which allows us to simply chain another function onto our Pipeline object and type our description into it.

Add a trigger

Next we'll add a trigger. Triggers allow us to automate when our pipeline runs. Triggers usually execute a pipeline for us based on some event. In this example, that event is the passage of time.

To do this we'll use a trigger included with Gofer called the interval trigger. This trigger simply counts time and executes pipelines based on each pipeline's specific time configuration.

err := sdk.NewPipeline("my_pipeline", "My Simple Pipeline").
    WithDescription("This pipeline is purely for testing purposes.").
    WithTriggers(
        *sdk.NewTrigger("interval", "every_one_minute").WithSetting("every", "1m"),
    )

Here you can see we create a new WithTriggers block and then add a single interval trigger. We also add a setting. Different triggers have different settings that pipelines can pass to them. In this case, passing the setting every along with the value 1m tells interval that this pipeline should be executed every minute.

When this pipeline is registered, Gofer will check that a trigger named interval actually exists and it will then communicate with that trigger to tell it which pipeline wants to register and which configuration values it has passed along.

If this registration with the trigger cannot be completed, the registration of the overall pipeline will fail.

Add a task

Lastly, let's add a task (container) to our pipeline. We'll add a simple ubuntu container and change the command that gets run on container start to say "Hello from Gofer!".

err := sdk.NewPipeline("my_pipeline", "My Simple Pipeline").
    WithDescription("This pipeline is purely for testing purposes.").
    WithTriggers(
        *sdk.NewTrigger("interval", "every_one_minute").WithSetting("every", "1m"),
    ).
    WithTasks(
        sdk.NewCustomTask("simple_task", "ubuntu:latest").
            WithDescription("This task simply prints our hello-world message and exits!").
            WithCommand("echo", "Hello from Gofer!"),
    )

We use the WithTasks function to add tasks and the SDK's NewCustomTask function to create each one. You can see we:

  • Give the task an ID, much like our pipeline earlier.
  • Specify which image we want to use.
  • Tack on a description.
  • And then finally specify the command.

To tie a bow on it, we add the .Finish() function to specify that our pipeline is in its final form.

err := sdk.NewPipeline("my_pipeline", "My Simple Pipeline").
    WithDescription("This pipeline is purely for testing purposes.").
    WithTriggers(
        *sdk.NewTrigger("interval", "every_one_minute").WithSetting("every", "1m"),
    ).
    WithTasks(
        sdk.NewCustomTask("simple_task", "ubuntu:latest").
            WithDescription("This task simply prints our hello-world message and exits!").
            WithCommand("echo", "Hello from Gofer!"),
    ).
    Finish()

That's it! This is a fully functioning pipeline.

You can run and test this pipeline much like you would any other code you write. Running it will produce a protobuf binary output which Gofer uses to pass to the server.

Full Example

package main

import (
	"log"

	sdk ""
)

func main() {
	err := sdk.NewPipeline("trigger", "Trigger Pipeline").
		WithDescription("This pipeline shows off the various features of a simple Gofer pipeline. Triggers, Tasks, and " +
			"dependency graphs are all tools that can be wielded to create as complicated pipelines as need be.").
		WithTriggers(
			*sdk.NewTrigger("interval", "every_one_minute").WithSetting("every", "1m"),
		).
		WithTasks(
			sdk.NewCustomTask("simple_task", "ubuntu:latest").
				WithDescription("This task simply prints our hello-world message and exits!").
				WithCommand("echo", "Hello from Gofer!"),
		).
		Finish()
	if err != nil {
		log.Fatal(err)
	}
}

Custom Tasks

Gofer's abstraction for running a container is called a Task. Specifically Custom Tasks are containers you point Gofer to and configure to perform some workload.

A Custom Task can be any Docker container you want to run. In the Getting Started example we take a regular standard ubuntu:latest container and customize it to run a passed in bash script.

    sdk.NewCustomTask("simple_task", "ubuntu:latest").
        WithDescription("This task simply prints our hello-world message and exits!").
        WithCommand("echo", "Hello from Gofer!"),

Custom Task Environment Variables and Configuration

Gofer handles container configuration the cloud native way. That is to say every configuration is passed in as an environment variable. This allows for many advantages, the greatest of which is standardization.

As a user, you pass your configuration in via the Variable(s) flavor of functions in your pipeline-config.

When a container is run by Gofer, the Gofer scheduler has the potential to pass in configuration from multiple sources1:

  1. Your pipeline configuration: Configs you pass in by using the Variable(s) functions.

  2. Trigger/Manual configurations: Triggers are allowed to pass in custom configuration for a run. Usually this configuration gives extra information the run might need. (For example, the git commit that activated the trigger.).

    Alternatively, if this run was not activated by a trigger and instead kicked off manually, the user who launched the run might opt to pass in configuration at that time.

  3. Gofer's system configurations: Gofer will pass in system configurations that might be helpful to the user. (For example, what current pipeline is running.)2

The exact key names injected for each of these configurations can be seen on any taskrun by getting that taskrun's details: gofer taskrun get <pipeline_name> <run_id>


These sources are ordered from most to least important. Since the configuration is passed in a "Key => Value" format any conflicts between sources will default to the source with the greater importance. For instance, a pipeline config with the key GOFER_PIPELINE_ID will replace the key of the same name later injected by the Gofer system itself.


The current Gofer system injected variables can be found here. Below is a possibly out of date short reference:

| Variable | Description |
|----------|-------------|
| GOFER_PIPELINE_ID | The pipeline identification string. |
| GOFER_RUN_ID | The run identification number. |
| GOFER_TASK_ID | The task run identification string. |
| GOFER_TASK_IMAGE | The image name the task is currently running with. |
| GOFER_API_TOKEN | Runs can be assigned a unique Gofer API token automatically. This makes it easy and manageable for tasks to query Gofer's API and do lots of other convenience tasks. |

What happens when a task is run?

The high level flow is:

  1. Gofer checks to make sure your task configuration is valid.
  2. Gofer parses the task configuration's variables list. It attempts to replace any substitution variables with their actual values from the object or secret store.
  3. Gofer then passes the details of your task to the configured scheduler, variables are passed in as environment variables.
  4. Usually this means the scheduler will take the configuration and attempt to pull the image mentioned in the configuration.
  5. Once the image is successfully pulled the container is then run with the settings passed.

Server Configuration

Gofer runs as a single static binary that you deploy onto your favorite VPS.

While Gofer will happily run in development mode without any additional configuration, this mode is NOT recommended for production workloads and not intended to be secure.

Instead, Gofer allows you to edit its startup configuration, letting you run it on your favorite container orchestrator, object store, and storage backend.


There are a few steps to setting up the Gofer service for production:

1) Configuration

First you will need to properly configure the Gofer service.

Gofer accepts configuration through environment variables or a configuration file. If a configuration key is set both in an environment variable and in a configuration file, the environment variable's value takes precedence.

You can view a list of environment variables Gofer takes by using the gofer service printenv command. It's important to note that each environment variable starts with the prefix GOFER_. So the host configuration can be set as:

export GOFER_HOST=localhost:8080

Configuration file

The Gofer service configuration file is written in HCL.

Load order

The Gofer service looks for its configuration in one of several places (ordered by first searched):

  1. Path given through the GOFER_CONFIG_PATH environment variable
  2. /etc/gofer/gofer.hcl

🪧 You can generate a sample Gofer configuration file by using the command: gofer service init-config

Bare minimum production file

These are the bare minimum values you should populate for a production ready Gofer configuration.

The values below should be changed depending on your environment; leaving them as they currently are will lead to loss of data on server restarts.

🪧 To keep your deployment of Gofer safe make sure to use your own TLS certificates instead of the default localhost ones included.

// Gofer Service configuration file is used as an alternative to providing the server configurations via envvars.
// You can find an explanation of these configuration variables and where to put this file so the server can read this
// file in the documentation:
dev_mode                   = false
event_log_retention        = "4380h"
event_prune_interval       = "3h"
ignore_pipeline_run_events = false
log_level                  = "info"
task_run_log_expiry        = 50
task_run_logs_dir          = "/tmp"
task_run_stop_timeout      = "5m"

external_events_api {
  enable = true
  host   = "localhost:8081"
}

object_store {
  engine = "sqlite"
  sqlite {
    path = "/tmp/gofer-object.db"
  }
  pipeline_object_limit = 50
  run_object_expiry     = 50
}

secret_store {
  engine = "sqlite"
  sqlite {
    path           = "/tmp/gofer-secret.db"
    encryption_key = "changemechangemechangemechangeme"
  }
}

scheduler {
  engine = "docker"
  docker {
    prune          = true
    prune_interval = "24h"
  }
}

server {
  host                  = "localhost:8080"
  shutdown_timeout      = "15s"
  tls_cert_path         = "./localhost.crt"
  tls_key_path          = "./localhost.key"
  storage_path          = "/tmp/gofer.db"
  storage_results_limit = 200
}

triggers {
  install_base_triggers = true
  stop_timeout          = "5m"
  tls_cert_path         = "./localhost.crt"
  tls_key_path          = "./localhost.key"
}
2) Running the binary

You can find the most recent releases of Gofer on the github releases page.

Simply use whatever configuration management system you're most familiar with to place the binary on your chosen VPS and manage it. You can find a quick and dirty wget command to pull the latest version in the getting started documentation.

As an example, a simple systemd service file set up to run Gofer is shown below:

Example systemd service file

[Unit]
Description=gofer service

[Service]
ExecStart=/usr/bin/gofer service start
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target


3) First steps

You will notice upon service start that the Gofer CLI is unable to make any requests due to permissions.

You will first need to handle the problem of auth. Every request to Gofer must use an API key so Gofer can appropriately direct requests.

More information about auth in general terms can be found here.

To create your root management token use the command: gofer service token bootstrap

🪧 The token returned is a management token and as such has access to all routes within Gofer. It is advised that:

  1. You use this token only in admin situations and to generate other, lesser-permissioned tokens.
  2. You store this token somewhere safe.

From here you can use your root token to provision extra lower permissioned tokens for everyday use.

When communicating with Gofer through the CLI you can set the token to be automatically passed per request in one of many ways.

Configuration Reference

Gofer has a variety of parameters that can be specified via environment variables or the configuration file.

To view a listing of the possible environment variables use: gofer service printenv.

The most up to date config file values can be found by reading the code, but a best-effort list of keys and descriptions is given below.

If examples of these values are needed you can find a sample file by using gofer service init-config.



| Key | Type | Default | Description |
|-----|------|---------|-------------|
| dev_mode | boolean | true | Dev mode controls many aspects of Gofer to make it easier to run locally for development and testing. Because of this you should not run dev mode in production as it is not safe. A non-complete list of things dev-mode helps with: the use of localhost certificates, auto-generation of encryption key, bypass of authentication for all routes. |
| event_log_retention | string (duration) | 4380h | Controls how long Gofer will hold onto events before discarding them. This is an important factor in disk space and memory footprint. Example: rough math on a 5,000 pipeline Gofer instance with a full 6 months of retention puts the memory and storage footprint at about 9GB. |
| event_prune_interval | string | 3h | How often to check for old events and remove them from the database. |
| ignore_pipeline_run_events | boolean | false | Controls the ability for the Gofer service to execute jobs on startup. If this is set to false you can set it to true manually using the CLI command gofer service toggle-event-ingress. |
| log_level | string | debug | The logging level that is output. It is common to start with info. |
| run_parallelism_limit | int | N/A | The limit automatically imposed if the pipeline does not define a limit. 0 is unlimited. |
| task_run_logs_dir | string | /tmp | The path of the directory to store task run logs. Task run logs are stored as a text file on the server. |
| task_run_log_expiry | int | 20 | The total amount of runs before logs of the oldest run will be deleted. |
| task_run_stop_timeout | string | 5m | The amount of time Gofer will wait for a container to gracefully stop before sending it a SIGKILL. |
| external_events_api | block | N/A | The external events API controls webhook type interactions with triggers. HTTP requests go through the events endpoint and Gofer routes them to the proper trigger for handling. |
| object_store | block | N/A | The settings for the Gofer object store. The object store assists Gofer with storing values between tasks since Gofer is by nature distributed. This helps jobs avoid having to download the same objects over and over or simply just allows tasks to share certain values. |
| secret_store | block | N/A | The settings for the Gofer secret store. The secret store allows users to securely populate their pipeline configuration with secrets that are used by their tasks, trigger configuration, or scheduler. |
| scheduler | block | N/A | The settings for the container orchestrator that Gofer will use to schedule workloads. |
| server | block | N/A | Controls the settings for the Gofer API service properties. |
| triggers | block | N/A | Controls settings for Gofer's trigger system. Triggers are different workflows for running pipelines usually based on some other event (like the passing of time). |

External Events API (block)

The external events API controls webhook type interactions with triggers. HTTP requests go through the events endpoint and Gofer routes them to the proper trigger for handling.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| enable | boolean | true | Enable the events api. If this is turned off the events http service will not be started. |
| host | string | localhost:8081 | The address and port to bind the events service to. |


external_events_api {
  enable = true
  host   = ""
}

Object Store (block)

The settings for the Gofer object store. The object store assists Gofer with storing values between tasks since Gofer is by nature distributed. This helps jobs avoid having to download the same objects over and over or simply just allows tasks to share certain values.

You can find more information on the object store block here.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| engine | string | sqlite | The engine Gofer will use to store state. The accepted values here are "sqlite". |
| sqlite | block | N/A | The sqlite storage engine. |
| pipeline_object_limit | int | 50 | The limit to the amount of objects that can be stored at the pipeline level. Objects stored at the pipeline level are kept permanently, but once the object limit is reached the oldest object will be deleted. |
| run_object_expiry | int | 50 | Objects stored at the run level are unlimited in number, but only last for a certain number of runs. This number controls how many runs until the run objects for the oldest run will be deleted. Ex. an object stored on run #5 with an expiry of 2 will be deleted on run #7 regardless of run health. |

Sqlite (block)

The sqlite store is a built-in, easy to use object store. It is meant for development and small deployments.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| path | string | /tmp/gofer-object.db | The path of the file that sqlite will use. If this file does not exist Gofer will create it. |
object_store {
  engine = "sqlite"
  sqlite {
    path = "/tmp/gofer-object.db"
  }
}

Secret Store (block)

The settings for the Gofer secret store. The secret store allows users to securely populate their pipeline configuration with secrets that are used by their tasks, trigger configuration, or scheduler.

You can find more information on the secret store block here.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| engine | string | sqlite | The engine Gofer will use to store state. The accepted values here are "sqlite". |
| sqlite | block | N/A | The sqlite storage engine. |

Sqlite (block)

The sqlite store is a built-in, easy to use secret store. It is meant for development and small deployments.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| path | string | /tmp/gofer-secret.db | The path of the file that sqlite will use. If this file does not exist Gofer will create it. |
| encryption_key | string | "changemechangemechangemechangeme" | Key used to encrypt secrets to keep them safe. It MUST be 32 characters long and cannot be changed for any reason once it is set, or else all data will be lost. |
secret_store {
  engine = "sqlite"
  sqlite {
    path           = "/tmp/gofer-secret.db"
    encryption_key = "changemechangemechangemechangeme"
  }
}

Scheduler (block)

The settings for the container orchestrator that Gofer will use to schedule workloads.

You can find more information on the scheduler block here.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| engine | string | docker | The engine Gofer will use as a container orchestrator. The accepted values here are "docker". |
| docker | block | N/A | Docker is the default container orchestrator and leverages the machine's local docker engine to schedule containers. |

Docker (block)

Docker is the default container orchestrator and leverages the machine's local docker engine to schedule containers.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| prune | boolean | false | Controls if the docker scheduler should periodically clean up old containers. |
| prune_interval | string | 24h | Controls how often the prune container job should run. |
scheduler {
  engine = "docker"
  docker {
    prune          = true
    prune_interval = "24h"
  }
}

Server (block)

Controls the settings for the Gofer service's server properties.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| host | string | localhost:8080 | The address and port for the service to bind to. |
| shutdown_timeout | string | 15s | The time Gofer will wait for all connections to drain before exiting. |
| tls_cert_path | string | | The TLS certificate Gofer will use for the main service endpoint. This is required. |
| tls_key_path | string | | The TLS certificate key Gofer will use for the main service endpoint. This is required. |
| storage_path | string | /tmp/gofer.db | Where to put Gofer's sqlite database. |
| storage_results_limit | int | 200 | The amount of results Gofer's database is allowed to return on one query. |
server {
  host                  = "localhost:8080"
  dev_mode              = false
  tls_cert_path         = "./localhost.crt"
  tls_key_path          = "./localhost.key"
  tmp_dir               = "/tmp"
  storage_path          = "/tmp/gofer.db"
  storage_results_limit = 200
}

Triggers (block)

Controls settings for Gofer's trigger system. Triggers are different workflows for running pipelines usually based on some other event (like the passing of time).

You can find more information on the trigger block here.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| install_base_triggers | boolean | true | Attempts to automatically install the cron and interval triggers on first startup. |
| stop_timeout | string | 5m | The amount of time Gofer will wait until trigger containers have stopped before sending a SIGKILL. |
| tls_cert_path | string | | The TLS certificate path Gofer will use for the triggers. This should be a certificate that the main Gofer service will be able to access. |
| tls_key_path | string | | The TLS certificate key path Gofer will use for the triggers. This should be a key for a certificate that the main Gofer service will be able to access. |
triggers {
  install_base_triggers = true
  stop_timeout          = "5m"
  tls_cert_path         = "./localhost.crt"
  tls_key_path          = "./localhost.key"
}


Gofer's auth system is meant to be extremely lightweight and a stand-in for a more complex auth system.

How auth works

Gofer uses API Tokens for authorization. You pass a given token in whenever talking to the API, and Gofer will evaluate internally what type of token you possess and which namespaces it has access to.

Management Tokens

The first type of token is a management token. Management tokens essentially act as root tokens and have access to all routes.

It is important to be extremely careful about where your management tokens end up and how they are used.

Other than system administration, the main use of management tokens is the creation of new tokens. You can explore token creation through the CLI.

It is advised that you use a single management token as the root token by which you create all user tokens.

Client Tokens

The most common token type is a client token. The client token simply controls which namespaces a user might have access to.

During token creation you can choose one or multiple namespaces for the token to have access to.

How to auth via the API

The Gofer API uses GRPC's metadata functionality to read tokens from requests:

md := metadata.Pairs("Authorization", "Bearer "+<token>)

How to auth via the CLI

The Gofer CLI accepts several ways of setting a token once you have one.

External Events

Gofer has an alternate endpoint specifically for external events streams1. This endpoint takes in http requests from the outside and passes them to the relevant trigger.

You can find more about external event configuration in the configuration-values reference.

 external_events_api {
   enable = true
   host   = ""
 }

It works like this:

  1. When the Gofer service is started it starts the external events service on a separate port per the service configuration settings. It is also possible to just turn off this feature via the same configuration file.

  2. External services can send Gofer http requests with payloads and headers specific to the trigger they're trying to communicate with. It's possible to target specific triggers by using the /events endpoint.

    ex: /events/<trigger_label>

  3. Gofer serializes and forwards the request to the relevant trigger where it is validated for authenticity of sender and then processed.

  4. A trigger may then handle this external event in any way it pleases. For example, the Github trigger takes in external events which are expected to be Github webhooks and starts a pipeline if the event type matches one the user wanted.


The reason for the alternate endpoint is the security concern with sharing the same endpoint as the main Gofer API service. Since this endpoint is separate, you can set up security groups such that it is only exposed to IP addresses that you trust, without exposing those same addresses to Gofer as a whole.


Gofer runs the containers you reference in the pipeline configuration via a container orchestrator referred to here as a "scheduler".

The vision of Gofer is for you to use whatever scheduler your team is most familiar with.

Supported Schedulers

The only currently supported scheduler is local docker. This scheduler is used for small deployments and development work.

How to add new Schedulers?

Schedulers are pluggable! Simply implement a new scheduler by following the given interface.

type Engine interface {
	// StartContainer launches a new container on the scheduler.
	StartContainer(request StartContainerRequest) (response StartContainerResponse, err error)

	// StopContainer attempts to stop a specific container identified by a unique container name. The scheduler
	// should attempt to gracefully stop the container, unless the timeout is reached.
	StopContainer(request StopContainerRequest) error

	// GetState returns the current state of the container translated to the "models.ContainerState" enum.
	GetState(request GetStateRequest) (response GetStateResponse, err error)

	// GetLogs reads logs from the container and passes them back to the caller via an io.Reader. This io.Reader can
	// be written to from a goroutine so that the user gets logs as they are streamed from the container.
	// Finally, once finished, the io.Reader should be closed with an EOF denoting that there are no more logs to be read.
	GetLogs(request GetLogsRequest) (logs io.Reader, err error)
}

Docker scheduler

The docker scheduler uses the machine's local docker engine to run containers. This is great for small or development workloads and very simple to implement. Simply download docker and go!

scheduler {
  engine = "docker"
  docker {
    prune          = true
    prune_interval = "24h"
  }
}


Docker needs to be installed, and the Gofer process needs the required permissions to run containers on it.

Other than that the docker scheduler just needs to know how to clean up after itself.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| prune | bool | false | Whether or not to periodically clean up containers that are no longer in use. If prune is not turned on, eventually the disk of the host machine will fill up with different containers that have run over time. |
| prune_interval | string (duration) | 24h | How often to run the prune job. Depending on how many containers you run per day this value could easily be set to monthly. |

Object Store

Gofer provides an object store as a way to share values and objects between containers. It can also be used as a cache. It is common for one container to run, generate an artifact or values, and then store that object in the object store for the next container or next run. The object store can be accessed through the Gofer CLI or through the normal Gofer API.

Gofer divides the objects stored into two different lifetime groups:

Pipeline-level objects

Gofer can store objects permanently for each pipeline. You can store objects at the pipeline level by using the gofer pipeline store commands:

gofer pipeline store put my-pipeline my_key1=my_value5
gofer pipeline store get my-pipeline my_key1
#output: my_value5

The limitation of pipeline-level objects is that there is a limit to the number of objects that can be stored per pipeline. Once that limit is reached, the oldest object in the store will be removed to make room for the newest one.

Run-level objects

Gofer can also store objects on a per-run basis. Unlike pipeline-level objects, run-level objects do not have a limit on how many can be stored, but instead have a limit on how long they last. Typically, after a certain number of runs, an object stored at the run level will expire and be deleted.

You can access the run-level store using the run level store CLI commands. Here is an example:

gofer run store put simple_pipeline my_key=my_value
gofer run store get simple_pipeline my_key
#output: my_value

Supported Object Stores

The only currently supported object store is the sqlite object store. Reference the configuration reference for a full list of configuration settings and options.

How to add new Object Stores?

Object stores are pluggable! Simply implement a new object store by following the given interface.

type Engine interface {
	GetObject(key string) ([]byte, error)
	PutObject(key string, content []byte, force bool) error
	ListObjectKeys(prefix string) ([]string, error)
	DeleteObject(key string) error
}

Sqlite object store

The sqlite object store is great for development and small deployments.

object_store {
  engine = "sqlite"
  sqlite {
    path = "/tmp/gofer-object.db"
  }
}


Sqlite needs to create a file on the local machine making the only parameter it accepts a path to the database file.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| path | string | /tmp/gofer-object.db | The path on disk to the sqlite db file. |

Secret Store

Gofer provides a secret store as a way to enable users to pass secrets into pipeline configuration files.

The secrets included in the pipeline file use a special syntax so that Gofer understands when it is given a secret value instead of a normal variable.

env_vars = {
  "SOME_SECRET_VAR" = "secret{{my_key_here}}"
}

Supported Secret Stores

The only currently supported secret store is the sqlite secret store. Reference the configuration reference for a full list of configuration settings and options.

How to add new Secret Stores?

Secret stores are pluggable! Simply implement a new secret store by following the given interface.

type Engine interface {
	GetSecret(key string) (string, error)
	PutSecret(key string, content string, force bool) error
	ListSecretKeys(prefix string) ([]string, error)
	DeleteSecret(key string) error
}

Sqlite secret store

The sqlite secret store is great for development and small deployments.

secret_store {
  engine = "sqlite"
  sqlite {
    path           = "/tmp/gofer-secret.db"
    encryption_key = "changemechangemechangemechangeme"
  }
}


Sqlite needs to create a file on the local machine making the only parameter it accepts a path to the database file.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| path | string | /tmp/gofer-secret.db | The path on disk to the sqlite db file. |
| encryption_key | string | | 32 character key required to encrypt secrets. |


Triggers are Gofer's way of automating pipeline runs. Triggers are registered to your pipeline on creation and will alert Gofer when it's time to run your pipeline (usually when some event has occurred).

The most straightforward example of this is the event of passing time. Let's say you have a pipeline that needs to run every 5 mins. You would set your pipeline up with the interval trigger set to an interval of 5m.

On startup, Gofer launches the interval trigger as a long-running container. When your pipeline is created, it "subscribes" to the interval trigger with an interval of 5m. The interval trigger starts a timer and when 5 minutes have passed an event is sent from the trigger to Gofer, causing Gofer to run your pipeline.

Gofer Provided Triggers

You can create your own triggers, but Gofer also ships with several triggers ready for use.

How do I install a Trigger?

Triggers must first be installed by Gofer administrators before they can be used. They can be installed via the CLI. For more information on how to install a specific trigger run:

gofer triggers install -h

How do I configure a Trigger?

Triggers allow for both system and pipeline configuration. This is what makes them so dynamically useful!

Pipeline Configuration

Most Triggers allow for some user specific configuration usually referred to as "Parameters" or "Pipeline configuration".

These variables are passed by the pipeline configuration file into the Trigger when the pipeline is registered.

System Configuration

Most Triggers have system configurations which allow the administrator or system to inject some needed variables. These are defined when the Trigger is installed.


See a specific Trigger's documentation for the exact variables accepted and where they belong.

How to add new Triggers?

Just like tasks, triggers are simply docker containers, which makes them easily testable and portable. To create a new trigger, simply use the included Gofer SDK.

The SDK provides an interface from which a fully functioning GRPC service is created from your concrete implementation.

// TriggerServiceInterface provides a light wrapper around the GRPC trigger interface. This light wrapper
// provides the caller with a clear interface to implement and allows this package to bake in common
// functionality among all triggers.
type TriggerServiceInterface interface {
	// Watch blocks until the trigger has a pipeline that should be run, then it returns. This is ideal for setting
	// the watch endpoint as a channel result.
	Watch(context.Context, *proto.TriggerWatchRequest) (*proto.TriggerWatchResponse, error)

	// Info returns information on the specific plugin
	Info(context.Context, *proto.TriggerInfoRequest) (*proto.TriggerInfoResponse, error)

	// Subscribe allows a trigger to keep track of all pipelines currently
	// dependant on that trigger so that we can trigger them at appropriate times.
	Subscribe(context.Context, *proto.TriggerSubscribeRequest) (*proto.TriggerSubscribeResponse, error)

	// Unsubscribe allows pipelines to remove their trigger subscriptions. This is
	// useful if the pipeline no longer needs to be notified about a specific
	// trigger automation.
	Unsubscribe(context.Context, *proto.TriggerUnsubscribeRequest) (*proto.TriggerUnsubscribeResponse, error)

	// Shutdown tells the trigger to cleanup and gracefully shutdown. If a trigger
	// does not shutdown in a time defined by the gofer API the trigger will
	// instead be Force shutdown(SIGKILL). This is to say that all triggers should
	// lean toward quick cleanups and shutdowns.
	Shutdown(context.Context, *proto.TriggerShutdownRequest) (*proto.TriggerShutdownResponse, error)

	// ExternalEvent are json blobs of gofer's /events endpoint. Normally
	// webhooks.
	ExternalEvent(context.Context, *proto.TriggerExternalEventRequest) (*proto.TriggerExternalEventResponse, error)
}

For a commented example of a simple trigger that you can follow to build your own, view the interval trigger:

// Trigger interval simply triggers the subscribed pipeline at the given interval.
// This package is commented in such a way to make it easy to deduce what is going on, making it
// a perfect example of how to build other triggers.
// What is going on below is relatively simple:
//   - We create our trigger as just a regular program, paying attention to what we want our variables to be
//     when we install the trigger and when a pipeline subscribes to this trigger.
//   - We assimilate the program to become a long running trigger by using the Gofer SDK and implementing
//     the needed sdk.TriggerServiceInterface.
//   - We simply call NewTrigger and let the SDK and Gofer go to work.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	// The proto package provides some data structures that we'll need to return to our interface.
	proto ""

	// The plugins package contains a bunch of convenience functions that we use to build our trigger.
	// It is possible to build a trigger without using the SDK, but the SDK makes the process much
	// less cumbersome.
	sdk ""

	// Golang doesn't have a standardized logging interface and as such Gofer triggers can technically
	// use any logging package, but because Gofer and provided triggers use zerolog, it is heavily encouraged
	// to use zerolog. The log level for triggers is set by Gofer on trigger start via Gofer's configuration.
	"github.com/rs/zerolog/log"
)

// Triggers have two types of variables they can be passed.
//   - They take variables called "config" when they are installed.
//   - And they take variables called parameters for each pipeline that subscribes to them.

// This trigger has a single parameter called "every".
const (
	// "every" is the time between pipeline runs.
	// Supports golang native duration strings:
	// Examples: "1m", "60s", "3h", "3m30s"
	ParameterEvery = "every"
)

// And a single config called "min_duration".
const (
	// The minimum duration pipelines can set for the "every" parameter.
	ConfigMinDuration = "min_duration"
)

// Triggers are subscribed to by pipelines when that pipeline is registered. Gofer will call the `subscribe`
// function for the trigger and pass it details about the pipeline and the parameters it wants.
// This structure is to keep details about those subscriptions so that we may perform the triggers duties on those
// pipeline subscriptions.
type subscription struct {
	pipelineTriggerLabel string
	pipeline             string
	namespace            string
	quit                 context.CancelFunc
}

// SubscriptionID is simply a composite key of the many things that make a single subscription unique.
// We use this as the key in a hash table to lookup subscriptions. Some might wonder why label is part
// of this unique key. That is because, when relevant triggers should be expected that pipelines might
// want to subscribe more than once.
type subscriptionID struct {
	pipelineTriggerLabel string
	pipeline             string
	namespace            string
}

// Trigger is a structure that every Gofer trigger should have. It is essentially the God struct. It contains
// all information about our trigger that we might want to reference.
type trigger struct {
	// The limit for how long a pipeline configuration can request a minimum duration.
	minDuration time.Duration

	// During shutdown the trigger will want to stop all intervals immediately. Having the ability to stop all goroutines
	// is very useful.
	quitAllSubscriptions context.CancelFunc

	// The events channel is simply a store for trigger events.
	// It keeps them until Gofer calls the Watch function and requests them.
	events chan *proto.TriggerWatchResponse

	// The parent context is stored here so that we have a common parent for all goroutines we spin up.
	// This enables us to manipulate all goroutines at the same time.
	parentContext context.Context

	// Mapping of subscription id to actual subscription. The subscription in this case also contains the goroutine
	// cancel context for the specified trigger. This is important, as when a pipeline unsubscribes from a this trigger
	// we will need a way to stop that specific goroutine from running.
	subscriptions map[subscriptionID]*subscription
}

// NewTrigger is the entrypoint to the overall trigger. It sets up all initial trigger state, validates any config
// that was passed to it and generally gets the trigger ready to take requests.
func newTrigger() (*trigger, error) {
	// The GetConfig function wraps our `min_duration` config and retrieves it from the environment.
	// Trigger config environment variables are passed in as "GOFER_PLUGIN_CONFIG_%s" so that they don't conflict
	// with any other environment variables that might be around.
	minDurationStr := sdk.GetConfig(ConfigMinDuration)
	minDuration := time.Minute * 1
	if minDurationStr != "" {
		parsedDuration, err := time.ParseDuration(minDurationStr)
		if err != nil {
			return nil, err
		}
		minDuration = parsedDuration
	}

	ctx, cancel := context.WithCancel(context.Background())

	return &trigger{
		minDuration:          minDuration,
		events:               make(chan *proto.TriggerWatchResponse, 100),
		quitAllSubscriptions: cancel,
		parentContext:        ctx,
		subscriptions:        map[subscriptionID]*subscription{},
	}, nil
}

// startInterval is the main logic of what enables the interval trigger to work. Each pipeline that is subscribed runs
// this function which simply waits for the set duration and then pushes a "WatchResponse" event into the trigger's main channel.
func (t *trigger) startInterval(ctx context.Context, pipeline, namespace, pipelineTriggerLabel string, duration time.Duration) {
	for {
		select {
		case <-ctx.Done():
			return
		case <-time.After(duration):
			t.events <- &proto.TriggerWatchResponse{
				Details:              "Triggered due to the passage of time.",
				PipelineTriggerLabel: pipelineTriggerLabel,
				NamespaceId:          namespace,
				PipelineId:           pipeline,
				Result:               proto.TriggerWatchResponse_SUCCESS,
				Metadata:             map[string]string{},
			}
			log.Debug().Str("namespaceID", namespace).Str("pipelineID", pipeline).
				Str("trigger_label", pipelineTriggerLabel).Msg("new tick for specified interval; new event spawned")
		}
	}
}

// Gofer calls subscribe when a pipeline configuration is being registered and a pipeline wants to subscribe to this trigger.
// The logic here is simple:
//   - We retrieve the pipeline's requested parameters.
//   - We validate the parameters.
//   - We create a new subscription object and enter it into our map.
//   - We call the `startInterval` function in a goroutine for that specific pipeline and return.
func (t *trigger) Subscribe(ctx context.Context, request *proto.TriggerSubscribeRequest) (*proto.TriggerSubscribeResponse, error) {
	interval, exists := request.Config[strings.ToUpper(ParameterEvery)]
	if !exists {
		return nil, fmt.Errorf("could not find required configuration parameter %q; received config params: %+v", ParameterEvery, request.Config)
	}

	duration, err := time.ParseDuration(interval)
	if err != nil {
		return nil, fmt.Errorf("could not parse interval string: %w", err)
	}

	if duration < t.minDuration {
		return nil, fmt.Errorf("durations cannot be less than %s", t.minDuration)
	}

	subctx, quit := context.WithCancel(t.parentContext)
	t.subscriptions[subscriptionID{
		pipelineTriggerLabel: request.PipelineTriggerLabel,
		pipeline:             request.PipelineId,
		namespace:            request.NamespaceId,
	}] = &subscription{request.PipelineTriggerLabel, request.PipelineId, request.NamespaceId, quit}

	go t.startInterval(subctx, request.PipelineId, request.NamespaceId, request.PipelineTriggerLabel, duration)

	log.Debug().Str("namespace_id", request.NamespaceId).Str("trigger_label", request.PipelineTriggerLabel).Str("pipeline_id", request.PipelineId).Msg("subscribed pipeline")
	return &proto.TriggerSubscribeResponse{}, nil
}

// Gofer continuously calls the watch endpoint to receive events from the Trigger. Below we simply block until we
// are told to shutdown or we have an event to give.
func (t *trigger) Watch(ctx context.Context, request *proto.TriggerWatchRequest) (*proto.TriggerWatchResponse, error) {
	select {
	case <-ctx.Done():
		return &proto.TriggerWatchResponse{}, nil
	case event := <-t.events:
		return event, nil
	}
}

// Pipelines change and this means that sometimes they will no longer want to be executed by a particular trigger or maybe
// they want to change the previous settings on that trigger. Because of this we need a way to remove pipelines that were
// previously subscribed.
func (t *trigger) Unsubscribe(ctx context.Context, request *proto.TriggerUnsubscribeRequest) (*proto.TriggerUnsubscribeResponse, error) {
	subscription, exists := t.subscriptions[subscriptionID{
		pipelineTriggerLabel: request.PipelineTriggerLabel,
		pipeline:             request.PipelineId,
		namespace:            request.NamespaceId,
	}]
	if !exists {
		return &proto.TriggerUnsubscribeResponse{},
			fmt.Errorf("could not find subscription for trigger %s pipeline %s namespace %s",
				request.PipelineTriggerLabel, request.PipelineId, request.NamespaceId)
	}

	// Stop the specific goroutine for this subscription before removing it.
	subscription.quit()

	delete(t.subscriptions, subscriptionID{
		pipelineTriggerLabel: request.PipelineTriggerLabel,
		pipeline:             request.PipelineId,
		namespace:            request.NamespaceId,
	})
	return &proto.TriggerUnsubscribeResponse{}, nil
}

// Info is mostly used as a health check endpoint. It returns some basic info about a trigger, the most important
// being where to get more documentation about that specific trigger.
func (t *trigger) Info(ctx context.Context, request *proto.TriggerInfoRequest) (*proto.TriggerInfoResponse, error) {
	return sdk.InfoResponse("")
}

// The ExternalEvent endpoint tells the trigger what to do if they get messages from Gofer's external event system.
// This system is set up to facilitate webhook interactions like those that occur for github
// (A user pushes a branch, Gofer gets an event from github).
// The ExternalEvent will come with a payload which the trigger can then authenticate, process, and take action on.
func (t *trigger) ExternalEvent(ctx context.Context, request *proto.TriggerExternalEventRequest) (*proto.TriggerExternalEventResponse, error) {
	return &proto.TriggerExternalEventResponse{}, nil
}

// A graceful shutdown for a trigger should clean up any resources it was working with that might be left hanging.
// Sometimes that means sending requests to third parties that it is shutting down, sometimes that just means
// reaping its personal goroutines.
func (t *trigger) Shutdown(ctx context.Context, request *proto.TriggerShutdownRequest) (*proto.TriggerShutdownResponse, error) {
	t.quitAllSubscriptions()
	return &proto.TriggerShutdownResponse{}, nil
}

// InstallInstructions are Gofer's way to allowing the trigger to guide Gofer administrators through their
// personal installation process. This is needed because some triggers might require special auth tokens and information
// in a way that might be confusing for trigger administrators.
func installInstructions() sdk.InstallInstructions {
	instructions := sdk.NewInstructionsBuilder()
	instructions = instructions.AddMessage(":: The interval trigger allows users to trigger their pipelines on the passage"+
		" of time by setting a particular duration.").
		AddMessage("First, let's prevent users from setting too low of an interval by setting a minimum duration. "+
			"Durations are set via Golang duration strings. For example, entering a duration of '10h' would be 10 hours, "+
			"meaning that users can only run their pipeline at most every 10 hours. "+
			"You can find more documentation on valid strings here:").
		AddQuery("Set a minimum duration for all pipelines", ConfigMinDuration)

	return instructions
}

// Lastly we call our personal NewTrigger function, which now implements the TriggerServiceInterface, and then we
// pass it to the NewTrigger function within the SDK.
// From here the SDK will use the given interface and run a GRPC service whenever this program is called with the
// positional parameter "server". Ex. "./trigger server"
// Whenever this program is called with the parameter "installer" then it will print out the installation instructions
// instead.
func main() {
	trigger, err := newTrigger()
	if err != nil {
		panic(err)
	}

	sdk.NewTrigger(trigger, installInstructions())
}

Provided Triggers

Gofer provides some pre-written triggers for quick use:

| name | image | included by default | description |
| ---- | ----- | ------------------- | ----------- |
| interval | | | Triggers an event after a predetermined amount of time has passed. |
| cron | | | Used for longer termed, more nuanced intervals. For instance, running a pipeline every year on Christmas. |
| github | | | Allows your pipelines to run based on branch, tag, or release activity. |

Cron Triggers

Cron allows users to schedule events on long term intervals and specific days.

It uses a stripped down version of the cron syntax to do so:

Field           Allowed values  Allowed special characters

Minutes         0-59            * , -
Hours           0-23            * , -
Day of month    1-31            * , -
Month           1-12            * , -
Day of week     0-6             * , -
Year            1970-2100       * , -

┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │ ┌───────────── Year (1970-2100)
│ │ │ │ │ │
* * * * * *
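Reading the fields left to right (minute, hour, day of month, month, day of week, year), a few illustrative expressions under this stripped-down syntax:

```
30 9 * * 1-5 *     At 09:30 on weekdays (Monday through Friday)
0 0 1 1,7 * *      At midnight on January 1st and July 1st
0 12 * * 0 2024    At noon every Sunday, during 2024 only
```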

Pipeline Configuration

  • expression : Specifies the cron expression of the interval desired.

Every year on Xmas

    *sdk.NewTrigger("cron", "yearly_on_xmas").WithSetting("expression", "0 1 25 12 * *"),

Trigger Configuration


Interval Trigger

Interval simply triggers the subscribed pipeline at the given interval.

Parameters/Pipeline Configuration

  • every : Specifies the time duration between events. Unless changed via the trigger configuration, the minimum for this is 5 mins.
    *sdk.NewTrigger("interval", "every_five_mins").WithSetting("every", "5m"),

Trigger Configuration

Trigger configurations are set upon trigger startup and cannot be changed afterwards without restarting said trigger.

| EnvVar | Default | Description |
| ------ | ------- | ----------- |
| MIN_DURATION | "5m" | The minimum duration users can set their pipelines to run |

Github Trigger

The Github trigger allows Gofer pipelines to be triggered by Github webhook events. This makes it possible to write event driven workloads that depend on an action happening on Github.

See the events section below for all supported events and the environment variables they pass to each pipeline.

:::info Due to the nature of Github's API and webhooks, you'll need to first set up a new Github app to use with Gofer's Github trigger.

Steps to accomplish this can be found in the additional steps section. :::

:::danger The Github trigger requires the external events feature of Gofer in order to accept webhooks from Github's servers. This requires your application to take traffic from external, potentially unknown sources.

Visit the external events page for more information on how to configure Gofer's external events endpoint.

If Github is your only external trigger, to increase security consider limiting the IP addresses that can access Gofer's external events endpoint. :::

Pipeline Configuration

  • repository : The Github repository you would like to listen for events from. The format is in the form <organization>/<repository>.
trigger "github" "only_from_experimental" {
    repository = "clintjedwards/experimental"
    events = "push"
}
  • events : A comma separated list of events you would like to listen for.
trigger "github" "only_from_experimental" {
    repository = "clintjedwards/experimental"
    events = "push,create"
}

Trigger Configuration

Trigger configurations are set upon startup and cannot be changed afterwards.

The Github trigger requires the setup and use of a new Github app. You can view setup instructions below, which will walk you through how to retrieve the required environment variables.

| EnvVar | Required | Description |
| ------ | -------- | ----------- |
| GOFER_TRIGGER_GITHUB_APPS_INSTALLATION | Required | The Github installation ID. This can be found by viewing the webhook payload delivery. See a more detailed walkthrough on where to find this below. |
| GOFER_TRIGGER_GITHUB_APPS_KEY | Required | The base64'd private key of the Github app. This can be generated during Github app creation time. |
| GOFER_TRIGGER_GITHUB_APPS_WEBHOOK_SECRET | Required | The Github app webhook secret key. This should be a long, randomized character string. It will be used to verify that an event came from Github and not another source. |
triggers {
  registered_triggers "github" {
    image = ""
    env_vars = {
      "GOFER_TRIGGER_GITHUB_APPS_WEBHOOK_SECRET": "somereallylongstringofcharacters",
    }
  }
}

Additional setup

Due to the nature of Github's API and webhooks, you'll need to first set up a new Github app to use with Gofer's Github trigger. Once this app has been set up, you'll have access to all the required environment variables that you'll need to pass into Gofer's server configuration.

Here is a quick and dirty walkthrough on the important parts of setting up the Github application.

1. Create a new Github application:

Github's documentation will be the most up to date and relevant so please see their walkthrough.

On the configuration page for the new Github application the following should be noted:

  • APP ID: Take note of the id; it will be used later for trigger configuration.

  • Webhook URL: Should be the address of your Gofer's external trigger instance and pointing to the events/github endpoint:


  • Webhook Secret: Make this a secure, long, random string of characters and note it for future trigger configuration.

  • Private Keys: Generate a private key and store it somewhere safe. You'll need to base64 this key and insert it into the trigger configuration.

    base64 ~/Desktop/myorg-gofer.2022-01-24.private-key.pem

2. Find the installation ID

Once the Github application has been created, install it. This will give you an opportunity to configure the permissions and scope of the Github application. It is recommended that you grant read-only permissions wherever possible, with read-write only for code-suite and code-runs.

The installation ID is unfortunately hidden in an event that gets sent once the Github app has been created and installed. You can find it by navigating to the settings page for the Github application and then viewing it in the "Recent Deliveries" page.

Recent Deliveries Installation webhook event


Gofer's triggers have the ability to pass along event specific information in the form of environment variables that get injected into each container's run. Most of these variables are pulled from the webhook request that comes in.

Below is a breakdown of the environment variables that are passed to a run based on the event that was generated. You can find more information about the format the variables will be in by referencing the payloads for the event.

Events below are the only events that are supported.


Common Tasks

Common Tasks are Gofer's way of allowing you to pre-setup tasks such that they can be set up once and used in multiple pipelines.

This allows you to do potentially complicated or single use setups that make it easier for different pipelines to consume without going through the same process.

An example of this might be having pipelines post to Slack. Setting up a new slack bot account for each and every pipeline that would want to post to slack is cumbersome and slows down productivity. Instead, Gofer's common tasks allow you to set up a single Slack bot, set up a single task, and have each pipeline just specify that task.

Common tasks work just like any other task except that they are registered just like triggers.

Common Tasks are installed by a Gofer administrator and can be used by all pipelines.

Gofer Provided Common Tasks

| name | image | description |
| ---- | ----- | ----------- |
| debug | | Useful for debugging common tasks; simply prints out the env vars for each run. A good example of how to set up other common tasks. |

How do I install a Common Task?

Common Tasks are installed by the CLI. For more information run:

gofer common-task install -h

How do I configure a Common Task?

Common tasks allow for both system and user configuration. This is what makes them so dynamically useful!

Pipeline Configuration

Most common tasks allow for some user specific configuration usually referred to as "Parameters" or "Pipeline configuration".

These variables are passed by the pipeline configuration file into the common task when run.

System Configuration

Most Common Tasks have system configurations which allow the administrator or system to inject some needed variables. These are defined when the Common Task is installed.


See the Common Task's documentation for the exact variables and where they belong.

How to add new Common Tasks?

Just like custom tasks, common tasks are simply docker containers, which makes them easily testable and portable. To create a new common task, simply use the included Gofer SDK.

The SDK provides simple functions to help in creating common tasks. To see an example of how a common task is structured and created view the debug task.

Debug Common Task

Pipeline Configuration

WithTasks(sdk.NewCommonTask("debug", "debug_task"))

Common Task Configuration

Common Task configurations are set upon common task startup and cannot be changed afterwards.

| EnvVar | Default | Description |
| ------ | ------- | ----------- |
| FILTER | false | Don't print any env vars that contain the strings "key" or "token" |

Command Line

Gofer's main way of providing interaction is through a command line application included in the Gofer binary.

This command line tool is how you upload pipelines, view runs, upload artifacts and many other common Gofer tasks.

To view the possible commands for Gofer, simply run gofer -h.


The Gofer CLI accepts configuration through flags, environment variables, or a configuration file.

When multiple configuration sources are used, the hierarchy is (from lowest to highest): config file values -> environment variables -> flags. Meaning that if you give the same configuration different values through a configuration file and through flags, the value given in the flag will prevail.


You can view Gofer's global flags by simply typing gofer -h.

Environment variables

You can also set configuration values through environment variables. Each environment variable has a prefix of GOFER_CLI_.

For example, setting your API token:

export GOFER_CLI_TOKEN=mysupersecrettoken
gofer service token whoami

The most up to date list of possible environment variables exists here

Remember that any environment variable found within the above struct is prefixed with GOFER_CLI_. So for example to set a new host variable you would write:

export GOFER_CLI_HOST=localhost:8080

Configuration file

For convenience reasons Gofer can also use a standard configuration file. The language of this file is HCL. Most of the options are simply in the form of key=value.

Configuration file locations

You can put your CLI configuration file in any of the following locations and Gofer will automatically detect and read from it (in order of first searched):

  1. The path given to the --config flag
  2. $HOME/.gofer.hcl
  3. $HOME/.config/gofer.hcl

Configuration file options

The options available in the configuration file are the same as possible environment variables. To find the most up to date values you can use the code here. For convenience, a possible out of date list and explanation is listed below:

| name | type | description |
| ---- | ---- | ----------- |
| namespace | string | The namespace ID of the namespace you'd like to default to. This is used to target specific namespaces when there might be multiple. |
| format | string | Can be one of three values: pretty, json, silent. Controls the output of CLI commands. |
| host | string | The URL of the Gofer server; used to point the CLI at the correct host. |
| no_color | bool | Turns off color globally for all CLI commands. |
| token | string | The authentication token passed to Gofer for identification and authentication purposes. |

Example configuration file

// /home/clintjedwards/.gofer.hcl
namespace = "myNamespace"
format    = "pretty"
host      = "localhost:8080"
no_color  = false
token     = "mysupersecrettoken"