Increase your development velocity in Golang with gocheck

I've had the opportunity to write several different production applications in Golang. I have also written software libraries in Golang. In all cases, I've relied on the gocheck package to test that software. I'll outline my approach to testing and show you how I use the gocheck package to implement it. Following these practices has allowed me to achieve a higher velocity when developing software.

This article does not include a discussion of why you should write tests, or the merits of methodologies such as test driven development. Such a discussion may come in the future.

Getting hooked up with Gocheck

Gocheck works by extending the functionality of the existing "testing" package that comes built into Golang.

In order to hook things up, we need to do a few things. Here is the standard example from gocheck's page

package hello

import "testing"
import . "gopkg.in/check.v1" // Step 1

func Test(t *testing.T) { TestingT(t) } // Step 2

type TestSuite struct{} // Step 3 

var _ = Suite(&TestSuite{}) // Step 4 
  1. Import the "gopkg.in/check.v1" package.
  2. Define a single Test function that takes a parameter of type *testing.T and passes it to gocheck's TestingT function.
  3. Define a test suite type; by convention its name ends with Suite.
  4. Instantiate the suite type and pass it to gocheck's Suite function.

By convention the gocheck package is always imported using the import . syntax. This brings the package's identifiers, such as Suite and Equals, into scope unqualified and makes test code easier to read and write.
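
For comparison, here is a sketch of the same hookup without the dot import. It works just as well, but every gocheck identifier has to be qualified with the package name:

import "testing"
import check "gopkg.in/check.v1"

func Test(t *testing.T) { check.TestingT(t) }

type TestSuite struct{}

var _ = check.Suite(&TestSuite{})

// Every assertion must also name the package explicitly
func (s *TestSuite) TestQualified(c *check.C) {
  c.Assert("bob", check.Equals, "bob")
}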

Defining your tests

The fundamental aspect of a test is asserting that some condition is true. All test definitions for gocheck take a single parameter, of type *gocheck.C. This parameter is used to test all assertions. Here's an example of what a test looks like

func (s *TestSuite) TestBob(c *C) {
  c.Assert("bob", Equals, "bob")
}

The receiver of the function is the suite type defined previously in step 3 and passed to gocheck's Suite function. Any method on that type whose name starts with Test is automatically invoked as part of the test suite.

The call to c.Assert is used to check that two values are equal. Both the values and the test to be performed are passed as arguments to c.Assert. This is possible because the signature of c.Assert is actually func (c *C) Assert(interface{}, Checker, ...interface{}).

Why Checkers?

The Checker type is responsible for implementing the assertion logic in a test. As you can probably guess, the Equals checker is equivalent to using Golang's == operator. So why don't we just use the == operator directly? Let's consider what that test and its output might look like

func (s *TestSuite) TestASimplerTime(c *C) {
  var a = "bob"
  var b = "jim"
  c.Assert(a == b) // assertion fails 
}

The above code definitely does not work, so don't waste your time trying to run it. But the hypothetical output might look like this

--- TestASimplerTime Failed!
Location: test.go:1216

  c.Assert(a == b)

This tells us that the test failed, but not much more. Let's reconsider the above test using gocheck's Checkers.

func (s *TestSuite) TestASimplerTime(c *C){
  var a = "bob"
  var b = "jim"
  c.Assert(a, Equals, b)
}

Output:

FAIL: tmp_test.go:12: TestSuite.TestASimplerTime

tmp_test.go:15:
    c.Assert(a, Equals, b)
... obtained string = "bob"
... expected string = "jim"

Gocheck is able to show us not only which assertion failed, but the values involved as well. This allows us to address test failures much faster, with less head scratching and fewer log statements.

Using the built-in Checkers

Gocheck defines many built-in checkers. Here's a quick summary of what is available

IsNil

Check that the value on the left hand side is nil

Example:

func (s *TestSuite) TestOpeningFile(c *C) {
  _, err := os.Open("/etc/passwd")
  c.Assert(err, IsNil)
}

NotNil

Check that the value on the left hand side is not nil

Example:

func (s *TestSuite) TestOpeningNonExistentFile(c *C) {
  _, err := os.Open("/etc/i_dont_exist")
  c.Assert(err, NotNil)
}

Equals

Check that the value on the left hand side is equal to the value on the right hand side

Example:

func (s *TestSuite) TestEquals(c *C) {
  c.Assert(os.Getenv("HOME"), Equals, "/home/ericu")
}

DeepEquals

Check that the value on the left hand side is deeply equal (via reflect.DeepEqual) to the value on the right hand side. Useful when comparing maps, slices, and other composite values.

Example:

func (s *TestSuite) TestMapComparison(c *C) {
  a := make(map[string]string)
  b := make(map[string]string)
  a["foo"] = "bar"
  b["foo"] = "bar"

  c.Assert(a, DeepEquals, b)
}

HasLen

Check that the value on the left hand side has a length equal to the value on the right hand side

Example:

func (s *TestSuite) TestHasLen(c *C) {
  a := "bob"
  c.Assert(a, HasLen, 3) //Check the length of the string 


  b := []int{1,2,3,4}
  c.Assert(b, HasLen, 4) //Also can check the length of a slice
}

Matches

Check that the string value on the left hand side matches the regex on the right hand side

Example:

func (s *TestSuite) TestMatches(c *C) {
  a := "bob is home"

  c.Assert(a, Matches, ".+home") //The right hand side is treated as a regular expression
}

ErrorMatches

Check that the error value on the left hand side has an Error() value matching the regex on the right hand side

Example:

func (s *TestSuite) TestErrorMatches(c *C) {
  _, err := os.Open("/etc/i_dont_exist")
  //The right hand side is treated as a regular expression, and matches the error's
  //string value returned from Error()
  c.Assert(err, ErrorMatches, ".*no such file.*")
}

Panics

Checks that the function on the left hand side panics when invoked. This is difficult to use correctly.

Example:

func (s *TestSuite) TestPanic(c *C) {
  // The function on the left is invoked; it must take no arguments.
  // It is expected to panic, and the panic value is compared against the
  // value on the right hand side using DeepEquals
  c.Assert(func() {
    panic("recoils in terror")
  }, Panics, "recoils in terror")
}

FitsTypeOf

Checks that the value on the left hand side has the same type as the sample value on the right hand side.

Example:

func (s *TestSuite) TestFitsTypeOf(c *C) {
  buf := new(bytes.Buffer)

  t := make(chan interface{}, 1) //Anything can be sent over this channel
  t <- buf

  // Verify that the value coming out of the channel (statically an
  // interface{}) really is a *bytes.Buffer
  c.Assert(<-t, FitsTypeOf, &bytes.Buffer{})
}

Implements

Checks that the value obtained on the left hand side implements the interface specified by a pointer on the right hand side.

Example:

func (s *TestSuite) TestImplements(c *C){
  buf := new(bytes.Buffer)
  var w io.Writer //Value does not matter here
  //Make sure the buffer object really can be used as an io.Writer
  c.Assert(buf, Implements, &w) // Pass a pointer to interface
}

Inverting a checker

Gocheck also includes the special Not checker that takes another checker! You can use this to invert the expected outcome.

Example:

func (s *TestSuite) TestNotEqual(c *C){
  c.Assert("bob", Not(Equals), "jim") // Passes, because the values are not equal
}

Testing a real application

To test a real application, you need to start by identifying the following in your code

  1. Expected behavior - what happens when everything works correctly
  2. Failure behavior - what happens when things fail
  3. Edge case behavior - behavior details that may seem insignificant, but can have larger consequences

Let's use an example program to find cases of those three kinds of behavior. This program reads lines of text and replaces the first occurrence of the word "snowman" on each line with the actual snowman character. We'll call this program "winter_wonderland".

The full source code is available here.

This program implements the *WinterWonderland type. This type behaves like any other io.Reader implementation, but expects text as input and produces text as output. The example application reads from standard input and writes to standard output, but when writing tests we can interact with the *WinterWonderland type directly. This is an important consideration. If the functionality of the program had been written entirely within the main() function it would work, but it would be much more difficult to test.
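
To make that structure concrete, here is a rough sketch of how such a wrapper could be put together. This is not the actual source of "winter_wonderland" linked above; it only illustrates the pattern of placing the behavior in an io.Reader implementation rather than in main():

import (
  "bufio"
  "bytes"
  "io"
  "strings"
)

// WinterWonderland wraps another io.Reader and rewrites its text
type WinterWonderland struct {
  scanner *bufio.Scanner
  pending bytes.Buffer
}

func NewWinterWonderland(r io.Reader) *WinterWonderland {
  return &WinterWonderland{scanner: bufio.NewScanner(r)}
}

func (w *WinterWonderland) Read(p []byte) (int, error) {
  // Refill the internal buffer one transformed line at a time
  for w.pending.Len() == 0 {
    if !w.scanner.Scan() {
      if err := w.scanner.Err(); err != nil {
        return 0, err // Failures from the wrapped reader are passed through
      }
      return 0, io.EOF
    }
    line := strings.Replace(w.scanner.Text(), "snowman", "\u2603", 1)
    w.pending.WriteString(line + "\n")
  }
  return w.pending.Read(p)
}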

The expected

The expected cases of this program are straightforward: they cover what the program was intended to do. We can describe those cases as

  1. Transform a document that has lines containing "snowman"
  2. Transform a document having lines that contain "snowman" more than once
  3. Transform a document that does not have lines containing "snowman"

In order to test the *WinterWonderland type, we'll need an io.Reader that it can read strings from. We could of course actually write and read from files for the test, but a much better option is to use the *bytes.Buffer object.

func (s *TestSuite) TestDocumentWithSnowman(c *C) {
  var err error 
  buf := new(bytes.Buffer)

  _, err = buf.WriteString("Hello world\n")
  c.Assert(err,IsNil)
  _, err = buf.WriteString("Do you know what a snowman is?\n")
  c.Assert(err,IsNil) 
  _, err = buf.WriteString("Hello bob\n")
  c.Assert(err,IsNil)

  reader := NewWinterWonderland(buf)

  output := new(bytes.Buffer)
  _, err = io.Copy(output, reader)
  c.Assert(err, IsNil)

  c.Assert(strings.Count(output.String(), "\u2603"), Equals, 1)
}

This test sets up a *bytes.Buffer object to contain 3 lines. Each time the buffer is written to, the result is checked to verify that no error happened. Although the purpose of this test is not to test the *bytes.Buffer type, it is still mandatory to check the result. If the result of the writes is not checked, a failed write could cause confusing failures later on in the test.

After the buffer is ready, it is wrapped with the *WinterWonderland type by calling NewWinterWonderland. Another buffer is used as the destination for the transformed output. The function io.Copy is used to copy everything into this output buffer. After that we can get a string by calling the String() method on the buffer instance. The function strings.Count is used to count occurrences of the snowman character in the output. Since exactly one occurrence is expected, the Equals checker is used to verify that the count is 1.

The complete set of tests is available here.

On the edge

The edge cases of this program are cases that may not have been considered when the program was written, but that it needs to handle anyway. Here are the cases that are tested for the example program

  1. Transforming a document that has zero length
  2. Transforming a document that has no newlines
  3. Transforming a document that has Windows newlines

The complete set of tests is available here.

When I wrote the test for a document with Windows newlines, I found out that my application does not properly handle them! Thankfully, I don't really care about supporting Microsoft Windows.

Failing well

The failure cases of this program are those that involve a failure of the program or of something that the program depends on. Due to the way error handling works in Go, it is important that our application either addresses an error value or returns it when one is encountered. In the example program, the only real problem we have is if the io.Reader that the *WinterWonderland type reads from fails.

This brings up an important question: How do we create an io.Reader that fails? We could perhaps use some sort of filesystem specific features to achieve this, but the test would likely be brittle and not portable. Since io.Reader is an interface, we can create an implementation that fails. In that implementation, we'll return our own instance of an error. We can check that calling Read([]byte) on the *WinterWonderland type returns the error we defined. I refer to this test pattern as using a sentinel error value.

type failureReader int //Type does not matter here

var failureReaderSentinel = errors.New("Sentinel from failure reader")

func (failureReader) Read([]byte) (int, error) {
  return 0, failureReaderSentinel
}

func (s *TestSuite) TestReaderFailure(c *C) {
  reader := NewWinterWonderland(failureReader(0))
  var output [1]byte
  _, err := reader.Read(output[:])
  c.Assert(err, Equals, failureReaderSentinel)
}

The complete set of tests is available here.

Test helpers

Test helpers are any code that helps you write tests, but does not verify the implementation directly. If you review the tests for the "winter_wonderland" program you may notice that some of the tests contain the same code over and over again. This repetition can be avoided by the use of several simple patterns that can be applied when writing test code.

When writing helper code for a test suite, each helper function should have the test suite type as its receiver. It should also take a *gocheck.C as its first parameter.

Factories

A test factory is a function that builds an object for test purposes. The constructed objects should always be in a consistent state. However, it may be necessary to make some identifying properties of the objects somewhat random.

Example:
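
As a sketch, a factory for the Account type used later in this article might look like the following. CreateAccount is assumed to be a constructor from the application under test, and the random suffix keeps the identifying name unique between tests:

import (
  "fmt"
  "math/rand"
)

// Hypothetical factory: builds a valid account with a randomized name
func (s *TestSuite) BuildAccount(c *C) *Account {
  name := fmt.Sprintf("user_%d", rand.Intn(1000000))
  account, err := CreateAccount(name)
  c.Assert(err, IsNil)

  return account
}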

Build things automatically

In some test suites, it is common to need the same object in each test. The gocheck package supports running code before each test. If you define a method named SetUpTest on the suite, it is run directly before each test. In order to make the object accessible from the test, declare the object as a field of the test suite structure. Using a factory from your SetUpTest function is common.

Example:

type TestSuite struct {
  buf *bytes.Buffer
}

func (s *TestSuite) SetUpTest(c *C) {
  s.buf = new(bytes.Buffer) // Initialized to an empty buffer before each test
}

Fixtures

A test fixture is a function that returns the same object each time it is called: the object is constructed once and then reused. Fixtures are not as flexible as factories and can lead to brittle tests in some cases, but there are valid use cases for them. Consider a web application where all data changes are recorded with an indicator of which user made the change. If any code needs to update some data as a maintenance task, it would use an internal maintenance account, of which there would be only one instance in the production system. Using a fixture to access this maintenance account in tests is good practice because it reflects the way the actual application is used.

Example:

type TestSuite struct {
  maintenanceUser *Account
}

func (s *TestSuite) loadMaintenanceUser(c *C) *Account{
  if s.maintenanceUser != nil {
    return s.maintenanceUser
  }

  var err error
  s.maintenanceUser, err = CreateAccount("maint")
  c.Assert(err, IsNil)

  return s.maintenanceUser
}

Edge case - sampling production data

One good use case for fixtures is replaying data from your production application in the test. This can be useful if you find a message that is valid but causes a failure in your system. If the data is small, it can be stored as a constant directly in the test. If it is large, I suggest committing the test data to your revision control system. It can be loaded in the test when you need it.
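
For the second case, one possible sketch is to commit the captured message under the conventional testdata directory (which the go tool ignores when building) and load it with a small helper. The file name here is purely illustrative:

import "io/ioutil"

// Hypothetical helper: loads a message captured from production and
// committed alongside the tests
func (s *TestSuite) loadCapturedMessage(c *C) []byte {
  data, err := ioutil.ReadFile("testdata/captured_message.json")
  c.Assert(err, IsNil)

  return data
}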

Reusable helpers

If your application is small, testing is straightforward. You can usually write just one test suite with a handful of tests to cover the important cases. Many applications wind up having a diverse set of responsibilities. This in turn tends to require multiple test suites. Although the functionality tested by each suite is likely to be independent, some helpers might logically be shared amongst multiple suites. For example, a factory for creating a new account might be used by all suites.

You can create a reusable helper by defining it as its own type. Then embed that type in each test suite that needs it. This makes the helper's methods available on the test suite.

Example:

type AccountFactory int //Type used here does not matter in this example

func (AccountFactory) BuildAccount(c *C) *Account {
  // Logic to create a new account here...

  return account
}

type TestSuite struct {
  // Embed helpers to use them 
  AccountFactory
  ReportFactory
}

func (s *TestSuite) TestThings(c *C) {
  // Use the functions from the helpers
  account := s.BuildAccount(c)
}

Don't return error values from your helpers

Reviewing the above code samples, one thing should stand out very quickly. None of the return signatures of any of the functions include the error type. This is completely contrary to best practices when writing application code in Golang. But if you consider it, there really is no need to return an error type in test code. If you've encountered an error you did not expect, then the test has failed. You could return the error, but the caller would not be able to proceed either. It's far simpler to assert that there is no error, and let the test suite take care of everything from there onwards. This shows you exactly where your test failed and why.

Testing with multiple goroutines

If you are writing software in Golang, it is almost a given that your software uses multiple goroutines. The design of Golang is heavily influenced by CSP (Communicating Sequential Processes). A consequence of this is that sending one message to a goroutine usually causes it to send another message in response. In order to test this sort of behavior, you may be tempted to start additional goroutines in a test. I don't think this is inherently bad, but it can cause race conditions in the test, which can be difficult to understand. It's also likely that those race conditions have no significance in the real world, so they just become noise in your test suite.

To create a robust test suite, it is necessary to apply design guidelines to both your application and your test suites. These are the guidelines I like to refer to when designing software.

  • Goroutines are logically grouped together if they are executing the same code
  • A group of goroutines should have exactly one instance of a struct that is responsible for them
  • It should be obvious what channels are used for input and output from a group of goroutines
  • All goroutines must have a "failure" channel available to them. Any unprocessable message from a channel is written to this channel
  • The business logic of a goroutine should be broken into functions which lend themselves to testing independently
  • Tests should focus around testing the expected behavior of these functions without starting additional goroutines
  • At least one test should verify that the goroutine can receive and send messages

By adhering to these design guidelines, it is possible to write software that has a high amount of test coverage. Some of this winds up generating additional development cost. For example, the failure channel I mentioned is commonly consumed by another goroutine that just logs the failure and drops it. While it serves no purpose in the running application, it is a great aid in testing the code.

Example code:

/*
The output value for each input.
Either 'RootOfValue' or 'Failure'
is set.
*/
type SquareRootOutput struct{
  Value float64
  RootOfValue float64
  Failure error
}

type SquareRooter struct {
  Input chan float64
  Output chan SquareRootOutput
  Failures chan SquareRootOutput
} 

func NewSquareRooter() *SquareRooter{
  this := new(SquareRooter)
  this.Input = make(chan float64, 1)
  this.Output = make(chan SquareRootOutput, 1)
  this.Failures = make(chan SquareRootOutput, 1)
  return this
}
var ErrInputNotPositive = errors.New("The input must be a value equal to or greater than zero")

func (this *SquareRooter) Main(stop <-chan int, wg *sync.WaitGroup) {
  defer wg.Done()
  for {
    select {
      case <-stop:
        return
      case v := <-this.Input:
        if v < 0.0 {
          this.Failures <- SquareRootOutput{
            Value: v,
            Failure: ErrInputNotPositive,
            RootOfValue: 0.0,
          }
        } else {
          this.Output <- SquareRootOutput{
            Value: v,
            RootOfValue: math.Sqrt(v),
            Failure: nil,
          }
        }
    }
  }
}

Example code for test:

func (s *TestSuite) TestComputesSqrt(c *C) {
  routine := NewSquareRooter()
  wg := new(sync.WaitGroup)
  stop := make(chan int)

  wg.Add(1)
  go routine.Main(stop, wg)
  routine.Input <- 2.0
  select {
    case output := <- routine.Output:
      c.Assert(output.RootOfValue, Equals, math.Sqrt(2.0))
    case failure := <- routine.Failures:
      c.Fatalf("Should not have received %v", failure)
    case <-time.After(time.Second * 15):
      c.Fatal("test timed out")
  }

  //Instruct the goroutine to stop and wait on it.
  close(stop)
  wg.Wait()

}

The above example is highly contrived, but presents the concept well enough. The SquareRooter type takes values and computes the square root of each one, or fails. The only failure case is if the value is negative. The Main function of the struct is intended to be run as a goroutine. The arguments to Main do not convey information; they are there just so the goroutine can be stopped and waited on. Even if you don't think your application needs to stop its goroutines, you want to be able to do so during testing.

An important note is the use of time.After in the select statement of the test. Without it, the test could simply get stuck if a bug in your goroutine results in values not being emitted. There is no actual guarantee that the goroutine can process the value within any particular amount of time, so a generous timeout should be used. This is preferable to creating a test that simply hangs.

Writing your own Checkers

If the existing checkers are not comprehensive enough, you can write your own Checker. You only need to implement the gocheck.Checker interface.

For example, what if we had a function where all we care about is whether it changes a value when invoked? Let's consider a trivial definition of such a function

import "math/rand"

func add_random(v *int) {
  *v = *v + rand.Intn(64) 
}

This function cannot be easily tested, because the amount it adds is random, between 0 and 63. But if the value passed in is changed, the function likely did the correct thing. Our test would look like the following

func (s *TestSuite) TestChanges(c *C) {
  var x int
  c.Assert(func() {
    add_random(&x)
  }, ChangesValue, &x)
}

We'll need to define the ChangesValue checker to make this work. The implementation needs to copy the value of x before the invocation, run the function, then compare the previous value of x with the new one. The pointer &x is passed so that the change made by add_random is visible to the checker.

The gocheck.Checker interface is defined as follows.

type Checker interface {
    Info() *CheckerInfo
    Check(params []interface{}, names []string) (result bool, error string)
}

The Info() method must return a *gocheck.CheckerInfo object. This lets gocheck know more about how your Checker works. The CheckerInfo type has two fields. The first one, Name, is used to refer to the checker. The second field, Params, is a slice of strings. These are not the parameter values to be checked, but the names of the parameters the checker expects. This data is exposed primarily so gocheck can produce helpful logging.

The Check() method is invoked each time your checker is passed to the Assert function. This function performs the actual comparison. The values passed to this function are always passed as interface{}. Due to this, it is often necessary to use the "reflect" package in the implementation.
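
Here is a minimal sketch of what the ChangesValue checker could look like. It assumes the dot import from earlier so that CheckerInfo refers to gocheck's type, and it keeps the parameter handling deliberately simple:

import "reflect"

type changesValueChecker struct{}

// ChangesValue passes when invoking the function in the first parameter
// changes the value pointed to by the second parameter
var ChangesValue = &changesValueChecker{}

func (*changesValueChecker) Info() *CheckerInfo {
  return &CheckerInfo{
    Name:   "ChangesValue",
    Params: []string{"function", "pointer"},
  }
}

func (*changesValueChecker) Check(params []interface{}, names []string) (bool, string) {
  fn, ok := params[0].(func())
  if !ok {
    return false, "first parameter must be a func()"
  }
  ptr := reflect.ValueOf(params[1])
  if ptr.Kind() != reflect.Ptr {
    return false, "second parameter must be a pointer"
  }

  // Copy the pointed-to value, invoke the function, then compare
  before := reflect.Indirect(ptr).Interface()
  fn()
  after := reflect.Indirect(ptr).Interface()

  return !reflect.DeepEqual(before, after), ""
}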

Test environment preparation

If you develop applications that are completely stateless, then this section is not likely to be of interest to you. For example, my package stalecucumber can be tested without any external dependencies. It is always nice to produce such a clean product, but most real world applications don't have that luxury.

The best advice I can give here is not to spend time trying to develop "mocks" for your infrastructure. A mature, real world piece of software such as a database has all sorts of idiosyncrasies in it that are the result of its development process. In normal usage you do not need to understand all of these. But if you start trying to develop mocks for infrastructure, you suddenly need to reproduce all of these behaviors yourself. Instead of investing time in building mocks, I suggest investing time in building a robust test environment. In the past for me this meant writing a bash or python script that would get everything set up as needed. This turned out to work well. But newer ideas like containers are starting to change this. Using docker compose or Gitlab CI to create your test environment is a good idea. These technologies are very much still in the process of maturing, but they will eventually get to a stable state.
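
As a small Go-side sketch of this approach, the test suite can pick up whatever connection details your setup script, docker compose file, or CI job provides through the environment. The variable name and the choice of PostgreSQL here are assumptions for illustration, not part of any example program in this article:

import (
  "database/sql"
  "os"

  _ "github.com/lib/pq" // hypothetical choice of database driver
  . "gopkg.in/check.v1"
)

type DBSuite struct {
  db *sql.DB
}

var _ = Suite(&DBSuite{})

func (s *DBSuite) SetUpSuite(c *C) {
  dsn := os.Getenv("TEST_DATABASE_URL")
  if dsn == "" {
    c.Skip("TEST_DATABASE_URL is not set; the test environment is not prepared")
  }

  db, err := sql.Open("postgres", dsn)
  c.Assert(err, IsNil)
  c.Assert(db.Ping(), IsNil)
  s.db = db
}

func (s *DBSuite) TearDownSuite(c *C) {
  if s.db != nil {
    c.Assert(s.db.Close(), IsNil)
  }
}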


Copyright Eric Urban 2016, or the respective entity where indicated