Integration Tests with TypeScript
Integration tests are the glue that holds your system together, verifying that your code interacts correctly with external components. From databases to APIs and third-party services, these tests exercise your system's integrations under real-world conditions, guarding against bugs and bottlenecks.
Why integration tests?
Integration tests are crucial because they validate the interactions between your code and its external dependencies, ensuring the system works as a whole. By simulating real-world scenarios, they uncover bugs and inconsistencies that unit tests overlook. They also build confidence in the system's behavior under varied conditions and catch integration issues early, saving time and resources by preventing larger problems from surfacing during deployment or in production. Overall, integration tests help maintain system integrity and deliver high-quality software.
How to think about integration testing
In integration testing, the setup and teardown processes are vital for establishing a controlled testing environment and ensuring test reliability.
During setup, infrastructure components like databases or services are initialized, configurations are tailored to mimic production settings, and necessary test data is prepared. This phase aims to create a stable foundation for executing the tests.
Teardown involves cleaning up resources, resetting states, and restoring the environment to its original state, maintaining test independence and repeatability.
Proper setup and teardown procedures enhance the effectiveness of integration tests by providing consistent environments and reliable results, ultimately contributing to the overall quality and stability of the software being tested. Automated CI pipelines can seamlessly integrate these tests, ensuring continuous system integrity validation throughout the development lifecycle.
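As a rough sketch of what this pattern looks like in code (illustrative only; concrete implementations follow later in the article), a suite typically pairs a setup hook with a matching teardown hook, for example using vitest:
// illustrative setup/teardown skeleton
import { afterAll, beforeAll } from 'vitest';
beforeAll(async () => {
  // setup: start infrastructure, apply production-like configuration, seed test data
});
afterAll(async () => {
  // teardown: release resources and restore the environment to its original state
});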
Integration tests with Docker
When integration tests involve interacting with a database, it’s common to spin up a Docker container running the database engine. Before executing the tests, the setup phase consists of pulling the required Docker image, creating a container instance, exposing the necessary ports, applying any required configuration, and waiting for the container to start running.
This step ensures a controlled environment mimicking production settings. Once the container is running, the tests create a database, configure the schema, and populate it with test data representative of real-world scenarios. This process validates the functionality and integrity of database interactions.
Upon test completion, the teardown phase kicks in, and the Docker container is gracefully brought down, ensuring cleanup and resource optimization. Since no volumes persist beyond the container’s lifespan, any data generated or manipulated during the tests is ephemeral, maintaining test independence and avoiding data pollution across test runs.
That seems like a lot of work to get a couple of tests up and running. You could do it manually (which isn’t an option in a CI pipeline) or write a script to start your Docker containers (the script approach comes from a good book by Leonardo Giordani, Clean Architectures in Python).
As much fun as it is to write your own scripts, production applications call for battle-tested software, not because it necessarily works better, but because of the documentation behind it and the larger community helping to make it work.
Testcontainers is one such tool. It makes working with Docker containers in a test environment easy, providing functionality to spin containers up, configure them, and spin them down again.
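For a quick feel of the API (a minimal sketch, separate from the setup that follows; the Redis image is chosen purely for illustration), any image can be started through the generic container class:
// generic-container-example.ts (illustrative sketch)
import { GenericContainer, StartedTestContainer } from 'testcontainers';
async function example() {
  // spin up a throwaway container from an arbitrary image
  const container: StartedTestContainer = await new GenericContainer('redis:7')
    .withExposedPorts(6379)
    .start();
  // host and mapped port to point a client at
  console.log(container.getHost(), container.getMappedPort(6379));
  // spin the container down when done
  await container.stop();
}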
Setup test environment
NOTE: Testcontainers requires the Docker daemon to be up and running.
We use vitest as our testing framework and vite-tsconfig-paths to help vitest resolve paths from the tsconfig.json file. We use @vitest/coverage-istanbul for code coverage because it ignores type and interface declarations when calculating test coverage. Finally, update the scripts so that integration tests run once rather than in watch mode, since they are resource-intensive. To keep integration tests separate from other tests, each file should contain only one kind of test, and all integration test files should end with the suffix *.integration.test.ts.
pnpm add --save-dev vitest
pnpm add --save-dev vite-tsconfig-paths
pnpm add --save-dev @vitest/coverage-istanbul
// vitest.config.ts
/// <reference types="vitest" />
import { defineConfig } from 'vitest/config';
import tsconfigPaths from 'vite-tsconfig-paths';
export default defineConfig({
  plugins: [tsconfigPaths()],
  test: {
    coverage: { provider: 'istanbul' },
    setupFiles: ['tests/setup.integration.ts'],
  },
});
// package.json
{
  "scripts": {
    "test": "vitest run unit --reporter=verbose",
    "test:integration": "vitest run integration --reporter=verbose --coverage"
  }
}
Database setup and teardown
We define a setup file with a hook that runs once before all tests and a cleanup function that runs once after they finish. This hook is responsible for setting up and tearing down the database.
Since Vitest v0.10.0, beforeAll also accepts an optional cleanup function (equivalent to afterAll).
pnpm add --save-dev @testcontainers/mysql
// tests/setup.integration.ts
import { beforeAll } from 'vitest';
import { MySqlContainer, StartedMySqlContainer } from '@testcontainers/mysql';
const SIXTY_SECONDS = 60 * 1000;
const container: MySqlContainer = new MySqlContainer();
let startedContainer: StartedMySqlContainer;
beforeAll(async () => {
  // called once before all tests run
  startedContainer = await container.start();
  // called to retrieve the database connection uri
  // e.g. mysql://USER:PASSWORD@HOST:PORT/DATABASE
  console.log(startedContainer.getConnectionUri());
  // clean up function, called once after all tests run
  return async () => {
    await startedContainer.stop();
  };
}, SIXTY_SECONDS);
It’s important to pass SIXTY_SECONDS as the hook timeout: it gives the setup function up to 60 seconds to complete, and if it doesn’t finish within that time, the hook fails. The default hook timeout is much shorter (10 seconds in recent Vitest versions), and Docker needs time to pull the image and spin up a container, so we don’t want the hook to time out before that happens. 60 seconds is a reasonable allowance for a database instance to start.
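With the container running, a test file can connect to it. The sketch below is illustrative only: it assumes the setup file is extended with one extra line, process.env.DATABASE_URL = startedContainer.getConnectionUri();, so that tests can reach the database, and it uses the mysql2 package (an additional dev dependency) as the client.
// src/db/health.integration.test.ts (illustrative sketch)
import { describe, expect, it } from 'vitest';
import mysql from 'mysql2/promise';
describe('database connectivity', () => {
  it('executes a simple query against the containerized MySQL', async () => {
    // DATABASE_URL is assumed to be set by the setup file (see note above)
    const connection = await mysql.createConnection(process.env.DATABASE_URL as string);
    const [rows] = await connection.query('SELECT 1 + 1 AS result');
    expect((rows as { result: number }[])[0].result).toBe(2);
    await connection.end();
  });
});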
Integration tests without Docker
Some services don’t offer Docker images to integrate against. One such service is AWS S3. In this scenario, we’ll look at how to set up and tear down integration tests against the AWS S3 service itself.
The setup involves creating a new test bucket, and the teardown deletes it.
// tests/setup.integration.ts
import { beforeAll } from 'vitest';
import { S3Client, CreateBucketCommand, DeleteBucketCommand } from '@aws-sdk/client-s3';
const SIXTY_SECONDS = 60 * 1000;
// random suffix so repeated runs don't collide on bucket names
const testBucketName = `s3-test-bucket-${Math.ceil(Math.random() * 1000000)}`;
const s3Client = new S3Client();
beforeAll(async () => {
  await s3Client.send(new CreateBucketCommand({ Bucket: testBucketName }));
  // clean up function, called once after all tests run
  return async () => {
    await s3Client.send(new DeleteBucketCommand({ Bucket: testBucketName }));
  };
}, SIXTY_SECONDS);
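As a usage sketch (again illustrative, assuming the bucket name is shared with tests, for example via process.env.TEST_BUCKET_NAME set in the setup file), a test can round-trip an object through the test bucket:
// src/storage/s3.integration.test.ts (illustrative sketch)
import { describe, expect, it } from 'vitest';
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
} from '@aws-sdk/client-s3';
const s3Client = new S3Client();
// assumed to be shared by the setup file, e.g. process.env.TEST_BUCKET_NAME = testBucketName
const bucket = process.env.TEST_BUCKET_NAME as string;
describe('s3 object round trip', () => {
  it('writes and reads back an object from the test bucket', async () => {
    await s3Client.send(
      new PutObjectCommand({ Bucket: bucket, Key: 'greeting.txt', Body: 'hello' })
    );
    const response = await s3Client.send(
      new GetObjectCommand({ Bucket: bucket, Key: 'greeting.txt' })
    );
    expect(await response.Body?.transformToString()).toBe('hello');
    // remove the object so the teardown's DeleteBucketCommand succeeds on an empty bucket
    await s3Client.send(new DeleteObjectCommand({ Bucket: bucket, Key: 'greeting.txt' }));
  });
});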
The same integration test works with Cloudflare’s R2 object storage, which is S3-compatible and can be accessed through the AWS S3 SDK, with only a minor tweak to how the client is constructed.
// Cloudflare R2 Object Storage Client
const s3Client = new S3Client({
  region: process.env.CLOUDFLARE_REGION,
  endpoint: process.env.CLOUDFLARE_S3_CLIENTS_ENDPOINT,
  credentials: {
    // these environment variables are assumed to be set; assert them for strict TypeScript
    accessKeyId: process.env.CLOUDFLARE_ACCESS_KEY_ID!,
    secretAccessKey: process.env.CLOUDFLARE_SECRET_ACCESS_KEY!,
  },
});
Conclusion
Integration tests are a vital part of ensuring the robustness and reliability of a software system. By focusing on how different parts of our application interact with each other and with the external environment, they validate that the pieces actually work together.
A showcase of the code examples used in this article is available in this repository.