Introduction
One thing I admire about people from different backgrounds is that they all think differently, and it's rare for them to arrive at the same idea. A possible explanation is that an idea is shaped by each person's experiences, which may lead to a breakthrough innovation or to something others have already built. The rise of the tech industry has led me to discuss tech-related ideas with quite a few people, and a naive younger self of mine would often respond with imprecise estimates of the development cycle and inaccurate designs. As I gained more experience in the field, I realised that software is not only about development; there are additional phases that play a crucial role in its stability and sustainability. Nowadays, I break the development cycle of a piece of software into four main phases: design, development, testing and release. The main reason for the impractical estimates I used to give a few years ago was that I failed to include the testing and release phases in my development plan. Time has shown me that all phases are equally valuable, and we developers need to pay attention to all of them.
As such, in this tutorial I will focus on testing, explaining and providing examples of idioms and designs for testing a modern web application. It is the first in a series on testing that will cover unit, functional, integration, end-to-end and contract testing; this instalment lays down the foundations and starts with unit tests. I have also prepared a repository containing a simple microservice architecture with all the forms of tests necessary for a scalable, easy-to-maintain modern application. It includes a Go-based API that exposes two endpoints, and a React-based frontend application that consumes those endpoints.
The Importance of Automating Tasks
If you haven't noticed, the web is big, and when I say big, I mean huge. However, of all the applications out there, only a small portion are dynamic, meaning that new features and changes are added at regular intervals. These applications need to scale well, and to achieve this, the development and release cycles need to be automated. That said, not every application has to be automated; it mostly pays off for platforms that will potentially serve a significant amount of traffic.
Fortunately, continuous integration (CI) and continuous delivery (CD) have evolved tremendously over the last couple of decades, introducing ideas and practices for automating tasks around your application. It's common nowadays to include testing as part of your pipeline, so that new features are tested quickly and failures are caught in a matter of minutes. This is far more cost-efficient for companies, since it doesn't require a group of software testers to manually test the application on every release. Moreover, it allows teams to release software more frequently.
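As a rough illustration, a pipeline stage that runs the tests on every push could look like the following sketch, assuming a GitHub Actions setup (the workflow layout, job names and commands are hypothetical and will differ per project):

# .github/workflows/test.yml — hypothetical workflow; adjust to your project.
name: test
on: [push, pull_request]
jobs:
  api:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: 'stable'
      # Run the Go unit tests; a failure blocks the pipeline within minutes.
      - run: go test ./...
  frontend:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      # Jest unit tests for the React application.
      - run: npm test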
The Test Pyramid
Knowing that the combination of CI/CD and testing provides the stability and scalability needed for robust software, the next question that may come to your mind is:
What tests should I run and how should I structure them?
The test pyramid is a conceptual model that splits testing into different layers. It's often expressed with three layers: Unit, Service and User Interface (UI). However, variations exist that look deeper into a certain layer and break it into more fine-grained pieces. Unit tests sit at the bottom of the pyramid, followed by service tests, with UI tests at the top.
Moving from bottom to top, each layer gets slower: unit tests are faster than service tests, which in turn are faster than UI tests. When it comes to confidence, the flow is reversed, since UI tests provide more confidence than any other layer. Therefore, the norm is to have a large set of unit tests that check each individual component of your application and, as you move higher up the pyramid, fewer but more business-focused tests.
If you are asking yourself why not have a huge collection of tests in the upper levels of the pyramid instead, there are a few reasons. Firstly, business requirements change often, and a large suite of top-level tests makes the application harder to maintain. Rather than covering every single aspect of your application, it's preferable to cover the user journeys that define the core value of your product. Secondly, as you move higher up the pyramid, the application starts to acquire external dependencies. These dependencies introduce flakiness and slowness into the test suite, and it's better to minimise their effect. You probably don't want to spend an entire day trying to figure out why your application fails on staging, only to realise that the cause is an external dependency.
Failing to follow the general structure of the test pyramid will introduce problems in maintaining and scaling your application. The most common anti-pattern is the ice-cream cone, which is essentially the test pyramid inverted.
Unit Testing
Unit tests sit at the bottom of the test pyramid and they are really fast; it's common to run hundreds of them in a few seconds, and they are the first line of defence for testing the functionality of your application. As the name suggests, unit tests aim to test each individual unit of your application; however, the definition of a unit depends on the programming paradigm. For example, in a web application developed in a functional paradigm (FP), the unit is a function, and unit tests should exercise the inputs and outputs of that function. By contrast, an application developed in an object-oriented paradigm (OOP) considers a class its unit. In this case, the goal is to test the methods of an object, as well as the transitions between its different states.
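As a quick, hypothetical illustration (the add function and Counter class below are not from the sample repository), this is how the unit differs between the two paradigms:

// FP: the unit is a pure function; tests feed it inputs and assert on outputs.
const add = (a, b) => a + b

// OOP: the unit is a class; tests exercise its methods and state transitions.
class Counter {
  constructor() {
    this.count = 0
  }
  increment() {
    this.count += 1
    return this.count
  }
}

// Jest unit tests, one per paradigm.
it('add returns the sum of its inputs', () => {
  expect(add(2, 3)).toBe(5)
})

it('Counter transitions from 0 to 1 on the first increment', () => {
  const counter = new Counter()
  expect(counter.increment()).toBe(1)
  expect(counter.count).toBe(1)
})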
Now, let's try to answer this question:
What should we test in each unit?
The quick answer is: all the dependencies and conditions within the unit. A more thorough explanation follows.
A unit in a test environment is also called a system under test (SUT). The components an SUT depends on are called depended-on components (DOCs); a DOC is any further invocation of functionality outside the SUT. These calls affect the state of the SUT, so we need to account for them in our test suite. In addition, conditions can alter the outputs of the SUT, which means we should cover those as well.
The following code snippet demonstrates an HTTP request that fetches some data from an external API. The unit tests for this snippet need to cover the following scenarios:
What is the expected output if the email is empty
What is the expected output if the HTTP call fails
What is the expected output if the HTTP call succeeds but returns a status code other than 200
What is the expected output if the HTTP call succeeds with a 200 status code
const getUser = async (email) => {
  // Guard clause: no email means there is nothing to look up.
  if (!email) {
    return new Error('Bad request')
  }
  try {
    const res = await fetch(`http://localhost:3000/api/v1/users/emails/${email}`)
    const data = await res.json()
    // The API reports its own status inside the response body.
    if (data.status !== 200) {
      throw new Error(data.message)
    }
    return data.users
  } catch (e) {
    return e
  }
}
However, we still can't write our unit tests. The snippet as it stands requires an available API, which brings the network into the equation of our unit tests and will also slow them down. To avoid that, DOCs are often replaced with test doubles. A test double is a replacement for a DOC that provides a similar API, with the difference that we have full control over it. That way we can drive our test suite through all possible scenarios without much hassle, while keeping our tests fast and free of side-effects.
There are two common approaches to replacing DOCs:
Sociable: replace only those components that introduce side-effects.
Solitary: replace all dependencies with test doubles.
I find myself mostly using the first approach, mainly because it's less likely to deviate from the intended functionality of the SUT; however, in some cases I also apply the solitary approach. Use whichever approach fits you, but make sure that your doubles behave like the original functionality (e.g. the inputs and outputs of the doubles have the same data types). It's very easy to fall into the trap of testing things that your SUT is not intended to do.
The general definition of a test double is something that replaces the API of a DOC, but that "something" comes in a few different flavours. The four core types are listed below, followed by a short sketch that ties two of them back to the getUser example:
Dummy object: an input that you don't care much about and that plays no crucial role in the SUT, but is needed to execute the existing functionality. It's mostly used in the solitary approach.
Spy: extends the DOC with additional capabilities that let you monitor its usage. A common scenario is counting the number of times it has been called.
Stub: replaces the API of the DOC while providing definitions of what it should do on each call. This helps drive your SUT into different states.
Mock: an extension of a stub with more advanced configuration.
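Here is a minimal sketch of how a stub and a spy could look for getUser in jest: we replace the global fetch DOC with a jest.fn() double, which acts as a stub (we script its responses) and as a spy (jest records its calls). It assumes a jest environment and that getUser from the earlier snippet is in scope, e.g. imported:

// A jest.fn() double for the fetch DOC: a stub we can script,
// and a spy whose calls jest records.
global.fetch = jest.fn()

// Reset the double between tests so recorded calls don't leak across them.
beforeEach(() => global.fetch.mockReset())

it('returns the users on a 200 response', async () => {
  // Stub: drive the SUT down the success path.
  global.fetch.mockResolvedValue({
    json: async () => ({status: 200, users: [{email: 'john@example.com'}]}),
  })
  const users = await getUser('john@example.com')
  expect(users).toEqual([{email: 'john@example.com'}])
  // Spy: the DOC was called exactly once.
  expect(global.fetch).toHaveBeenCalledTimes(1)
})

it('returns an error without hitting the network when the email is empty', async () => {
  const result = await getUser('')
  expect(result).toBeInstanceOf(Error)
  expect(global.fetch).not.toHaveBeenCalled()
})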
Now that we are ready to write our unit tests, let's check a few examples from the sample repository. The following code snippet is available here and ensures that the HelloWithName controller of our Go-based API returns a 400 status code for an invalid username and succeeds with a 200 response for a valid one. We use testify to write our unit tests.
package unit

import (
    "encoding/json"
    "hello-world/internal"
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/julienschmidt/httprouter"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/suite"
)

type helloWithNameSuite struct {
    suite.Suite
}

func (hs *helloWithNameSuite) TestHelloWithName() {
    // Table-driven test: each row pairs router params with the expected response.
    var table = []struct {
        expectedResponse internal.ResponseBody
        params           httprouter.Params
    }{
        {
            // Missing name parameter.
            params: httprouter.Params{},
            expectedResponse: internal.ResponseBody{
                Status:  400,
                Success: false,
                Message: "Error: Bad Request",
            },
        },
        {
            params: httprouter.Params{
                httprouter.Param{
                    Key:   "name",
                    Value: "!starts with invalid",
                },
            },
            expectedResponse: internal.ResponseBody{
                Status:  400,
                Success: false,
                Message: "Error: Bad Request",
            },
        },
        {
            params: httprouter.Params{
                httprouter.Param{
                    Key:   "name",
                    Value: "ends with invalid!",
                },
            },
            expectedResponse: internal.ResponseBody{
                Status:  400,
                Success: false,
                Message: "Error: Bad Request",
            },
        },
        {
            params: httprouter.Params{
                httprouter.Param{
                    Key:   "name",
                    Value: "john",
                },
            },
            expectedResponse: internal.ResponseBody{
                Status:  200,
                Success: true,
                Data:    "hello John",
            },
        },
    }
    var t = hs.T()
    var assert = assert.New(t)
    for _, row := range table {
        // Record the controller's response without spinning up a server.
        var w = httptest.NewRecorder()
        var r = httptest.NewRequest(http.MethodGet, "/", nil)
        internal.HelloWithName(w, r, row.params)
        var result = w.Result()
        var resBody internal.ResponseBody
        if e := json.NewDecoder(result.Body).Decode(&resBody); e != nil {
            t.Errorf("Error: Invalid object for decoding\n")
        }
        assert.Equal(result.StatusCode, row.expectedResponse.Status)
        assert.EqualValues(resBody, row.expectedResponse)
    }
}

func TestHelloWithNameSuite(t *testing.T) {
    suite.Run(t, new(helloWithNameSuite))
}
On the front-end side of our application, we use jest to write our unit tests, with a bit of help from React Testing Library for testing our components. The following code snippets are available here and here; the first covers the fetchHelloWithName utility and the second the App component.
import {fetchHelloWithName} from '@utils/fetchHelloWithName'
import axios from 'axios'

jest.mock('axios')

describe('fetchHelloWithName', () => {
  // Reset the mocked transport between tests so calls don't leak across them.
  beforeEach(() => axios.get.mockReset())

  const baseURL = 'http://localhost:4000'

  it('should successfully fetch the data', async () => {
    const payload = {
      status: 200,
      success: true,
      data: 'hello john'
    }
    // Stub the transport to resolve with a successful payload.
    axios.get.mockResolvedValue({data: payload})
    const loggerStub = jest.fn(() => null)
    const onSuccessStub = jest.fn(() => true)
    const onFailureStub = jest.fn(() => false)
    const options = {}
    const res = await fetchHelloWithName(loggerStub)(baseURL, 'john', onSuccessStub, onFailureStub, options)
    expect(res).toBe(true)
    expect(loggerStub.mock.calls.length).toBe(0)
    expect(onFailureStub.mock.calls.length).toBe(0)
    expect(onSuccessStub.mock.calls.length).toBe(1)
    expect(onSuccessStub.mock.calls[0][0]).toEqual(payload)
    expect(axios.get.mock.calls[0]).toEqual(['http://localhost:4000/hello/john', {}])
  })

  it('should handle an unsuccessful payload', async () => {
    const payload = {
      status: 400,
      success: false,
      message: "error: bad request",
    }
    axios.get.mockResolvedValue({data: payload})
    const loggerStub = jest.fn(() => null)
    const onSuccessStub = jest.fn(() => true)
    const onFailureStub = jest.fn(() => false)
    const options = {}
    const res = await fetchHelloWithName(loggerStub)(baseURL, 'john', onSuccessStub, onFailureStub, options)
    expect(res).toBe(false)
    expect(loggerStub.mock.calls.length).toBe(1)
    expect(onFailureStub.mock.calls.length).toBe(1)
    expect(onSuccessStub.mock.calls.length).toBe(0)
    expect(onFailureStub.mock.calls[0][0].message).toEqual(payload.message)
    expect(axios.get.mock.calls[0]).toEqual(['http://localhost:4000/hello/john', {}])
  })

  it('should handle a failure', async () => {
    const payload = {
      status: 500,
      success: false,
      message: "error: internal server error",
    }
    // Simulate a rejected request, e.g. a network or server error.
    axios.get.mockRejectedValue({response: {data: payload}})
    const loggerStub = jest.fn(() => null)
    const onSuccessStub = jest.fn(() => true)
    const onFailureStub = jest.fn(() => false)
    const options = {}
    const res = await fetchHelloWithName(loggerStub)(baseURL, 'john', onSuccessStub, onFailureStub, options)
    expect(res).toBe(false)
    expect(loggerStub.mock.calls.length).toBe(1)
    expect(onFailureStub.mock.calls.length).toBe(1)
    expect(onSuccessStub.mock.calls.length).toBe(0)
    expect(loggerStub.mock.calls[0][0].response.data).toEqual(payload)
    expect(onFailureStub.mock.calls[0][0].response.data).toEqual(payload)
    expect(axios.get.mock.calls[0]).toEqual(['http://localhost:4000/hello/john', {}])
  })
})
import React from 'react'
import App from '@components/App'
import fetchHello from '@utils/fetchHello'
import fetchHelloWithName from '@utils/fetchHelloWithName'
import { act, render, cleanup } from '@testing-library/react'
import '@testing-library/jest-dom'

// Both fetch utilities are replaced with doubles, so these tests never touch the network.
jest.mock('../utils/fetchHello.js')
jest.mock('../utils/fetchHelloWithName.js')

afterEach(cleanup)

describe('App.js', () => {
  beforeEach(() => {
    process.env.API_BASE_URL = "http://localhost:3000"
    fetchHello.mockReset()
    fetchHelloWithName.mockReset()
  })

  describe('fetchHelloWithName', () => {
    it('should handle success', async () => {
      fetchHelloWithName.mockResolvedValue(true)
      fetchHello.mockResolvedValue(true)
      const {getAllByText, getByTestId} = render(<App/>)
      expect(getAllByText(/Loading/).length).toBe(2)
      await act(async () => fetchHelloWithName)
      // Capture the arguments the component passed to the double.
      const [args] = fetchHelloWithName.mock.calls
      expect(args[0]).toBe('http://localhost:3000')
      expect(args[1]).toBe('foo')
      expect(args[2]).toBeTruthy()
      expect(args[3]).toBeTruthy()
      expect(args[4].cancelToken).toBeTruthy()
      // Invoke the captured onSuccess callback and assert the rendered output.
      await act(async () => args[2]({hello: "hello Foo"}))
      expect(getByTestId('helloWithName').textContent).toEqual(JSON.stringify({hello: 'hello Foo'}))
    })

    it('should handle failure', async () => {
      fetchHelloWithName.mockResolvedValue(false)
      fetchHello.mockResolvedValue(true)
      const {getAllByText, getByTestId} = render(<App/>)
      expect(getAllByText(/Loading/).length).toBe(2)
      await act(async () => fetchHelloWithName)
      const [args] = fetchHelloWithName.mock.calls
      expect(args[0]).toBe('http://localhost:3000')
      expect(args[1]).toBe('foo')
      expect(args[2]).toBeTruthy()
      expect(args[3]).toBeTruthy()
      expect(args[4].cancelToken).toBeTruthy()
      // Invoke the captured onFailure callback and assert the rendered output.
      await act(async () => args[3]({message: "error"}))
      expect(getByTestId('helloWithName').textContent).toEqual(JSON.stringify({message: 'error'}))
    })
  })

  describe('fetchHello', () => {
    it('should handle success', async () => {
      fetchHello.mockResolvedValue(true)
      fetchHelloWithName.mockResolvedValue(true)
      const {getAllByText, getByTestId} = render(<App/>)
      expect(getAllByText('Loading...').length).toBe(2)
      await act(async () => fetchHello)
      const [argsFetchHello] = fetchHello.mock.calls
      expect(fetchHello).toHaveBeenCalledTimes(1)
      expect(argsFetchHello[0]).toBe('http://localhost:3000')
      expect(argsFetchHello[2]).toBeTruthy()
      expect(argsFetchHello[3].cancelToken).toBeTruthy()
      act(() => argsFetchHello[1]({hello: "world"}))
      expect(getByTestId('helloPlain').textContent).toEqual(JSON.stringify({hello: "world"}))
    })

    it('should handle failure', async () => {
      fetchHello.mockResolvedValue(false)
      fetchHelloWithName.mockResolvedValue(true)
      const {getAllByText, getByTestId} = render(<App/>)
      expect(getAllByText(/Loading/).length).toBe(2)
      await act(async () => fetchHello)
      const [argsFetchHello] = fetchHello.mock.calls
      expect(fetchHello).toHaveBeenCalledTimes(1)
      expect(argsFetchHello[0]).toBe('http://localhost:3000')
      expect(argsFetchHello[1]).toBeTruthy()
      expect(argsFetchHello[3].cancelToken).toBeTruthy()
      act(() => argsFetchHello[2]({message: "error"}))
      expect(getByTestId('helloPlain').textContent).toBe(JSON.stringify({message: 'error'}))
    })
  })
})
Summary
In this very first tutorial of the testing modern applications series, we have mainly talked about three things:
Automation, and how we can benefit from it in CI/CD pipelines that test our code.
The test pyramid, and how this model can guide us in structuring our tests efficiently.
Basic definitions of unit tests, as well as code examples of unit tests in a microservice architecture.
In the upcoming tutorials we will talk about functional, integration, end-to-end and contract testing. Until then, stay tuned.