Mockito is a mocking framework that helps you test particular components of software in isolation. You use Mockito to replace collaborators of the components you’re testing so that methods in each collaborator return desired outputs for given inputs. That way, if an error occurs when testing the component, you know where and why.
Adding Mockito to the Project
First, in order to use Mockito in our project, update the Gradle build script to include JUnit and Mockito:
dependencies {
    testImplementation 'junit:junit:4.13.2'

    // mockito-inline is needed instead of mockito-core if you plan to mock
    // final methods or classes, constructors, or static methods
    testImplementation 'org.mockito:mockito-inline:4.11.0'
}
If this project used Maven instead of Gradle, the same two dependencies would be required, just using the Maven XML-based syntax in pom.xml.
A Test Class with Everything
A sample unit test utilizing Mockito might look like the following sketch, in which the class under test (NotificationPublisher) and its three collaborators are illustrative names:
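import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.*;

import java.util.List;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class NotificationPublisherTest {

    // The three collaborators of the class under test; all names here are illustrative
    @Mock
    private MessageFormatter formatter;

    @Mock
    private SubscriberRepository repository;

    @Mock
    private EmailGateway gateway;

    private NotificationPublisher publisher;

    @Before
    public void setUp() {
        MockitoAnnotations.openMocks(this);
        // Inject the mocked collaborators into the class under test
        publisher = new NotificationPublisher(formatter, repository, gateway);
    }

    @Test
    public void publishNotification_sendsFormattedMessageToEverySubscriber() {
        // Set expectations on the stubs
        when(formatter.format("hello")).thenReturn("Hello!");
        when(repository.findAll()).thenReturn(List.of("a@example.com", "b@example.com"));

        // Invoke the method under test
        publisher.publishNotification("hello");

        // Verify the collaborator was called the expected number of times
        verify(gateway, times(2)).send(anyString(), eq("Hello!"));
    }
}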
As you can see, we can test the publishNotification method by mocking the three collaborators, injecting them into the class under test, and then calling the method under test.
Working with the Mockito API
The above example demonstrates the general Mockito test process:
Creating stubs to stand in for collaborators
Setting expectations on the stubs to do what you want
Injecting the stubs into the class you plan to test
Testing the methods in the class under test by invoking its methods, which in turn call methods on the stubs
Checking the methods work as expected
Verifying that the methods on the collaborators got invoked the correct number of times, in the correct order
You can use these steps every time you want to use Mockito for replacing collaborators in the class under test, thereby writing true unit tests.
Creating Mocks and Stubs
Mockito has two ways of creating mocks and stubs: using the static mock method or using annotations.
In the example, we added the @Mock annotation to those collaborators that we wanted Mockito to mock:
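@Mock
private MessageFormatter formatter;

@Mock
private SubscriberRepository repository;

@Mock
private EmailGateway gateway;

Expectations are then set on the resulting stubs with when; for example:

when(repository.findAll()).thenReturn(List.of("a@example.com", "b@example.com"));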
The argument to when is the declaration of an invocation of the method you want to call on the stub. The call to when returns an object on which you chain the various then methods, like thenReturn, thenThrow, or thenAnswer, to specify the stub's desired output.
Verifying Method Calls
After setting expectations with when, you can take advantage of one more capability of Mockito: verifying that the methods on the mocks were called the correct number of times, in the correct order:
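// Continuing the illustrative example above: verify the exact number of calls
verify(gateway, times(2)).send(anyString(), eq("Hello!"));

// An InOrder verifies that the calls happened in the given sequence
InOrder inOrder = inOrder(repository, gateway);
inOrder.verify(repository).findAll();
inOrder.verify(gateway, times(2)).send(anyString(), eq("Hello!"));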
For years, I’ve been using the following snippet in the solution for one of my interview questions:
anagrams = dict()
with open(WORDS_PATH) as f:
    for line in f:
        key = ''.join(sorted(line.strip()))
        if key not in anagrams:
            anagrams[key] = list()
            anagrams[key].append(line.strip())
        else:
            anagrams[key].append(line.strip())
Recently, I learned to use the dict.setdefault method to optimize it further, and the end result looks like the following:
anagrams = dict()
with open(WORDS_PATH) as f:
    for line in f:
        key = ''.join(sorted(line.strip()))
        anagrams.setdefault(key, []).append(line.strip())
setdefault gets the list of anagrams for key, or sets it to [] if the key is not found; it returns the value either way, so the list can be updated without requiring a second search. In other words, the end result of this line…
anagrams.setdefault(key, []).append(line.strip())
…is the same as running…
if key not in anagrams:
    anagrams[key] = list()
    anagrams[key].append(line.strip())
else:
    anagrams[key].append(line.strip())
…except that the latter code performs at least two searches for key – three if it’s not found – while setdefault does it all with a single lookup.
axum is a very popular web application framework in the Rust world.
How does axum handle incoming requests?
Like many other web application frameworks, axum routes the incoming requests to handlers.
What is a handler?
In axum, a handler is an async function that accepts zero or more “extractors” as arguments and returns something that can be converted into a response.
What is an extractor?
In axum, an extractor is a type for extracting data from requests, which implements FromRequest or FromRequestParts.
For example, Json is an extractor that consumes the request body and deserializes it as JSON into some target type:
use axum::{
    Json,
    routing::post,
    Router,
};
use serde::Deserialize;

#[derive(Deserialize)]
struct CreateUser {
    email: String,
    password: String,
}

async fn create_user(Json(payload): Json<CreateUser>) {
    // ...
}

let app = Router::new().route("/users", post(create_user));
We can spot a few things in the above example:
the async function create_user() is the handler, which is set up to handle POST requests against the /users endpoint
the function argument Json(payload) is a Json extractor that consumes the request body and deserializes it into the target type CreateUser
How to make an API endpoint accept POST requests with an optional JSON body?
All extractors defined in axum will reject the request if it doesn’t match. If you wish to make an extractor optional, you can wrap it in Option:
use axum::{
    Json,
    routing::post,
    Router,
};
use serde_json::Value;

async fn create_user(payload: Option<Json<Value>>) {
    if let Some(payload) = payload {
        // We got a valid JSON payload
    } else {
        // Payload wasn't valid JSON
    }
}

let app = Router::new().route("/users", post(create_user));
In Rust, a reference to an open file on the filesystem is represented by the struct std::fs::File. And we can use the File::open method to open an existing file for reading:
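pub fn open<P: AsRef<Path>>(path: P) -> io::Result<File>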
This method takes a path (anything it can borrow a &Path from, i.e. AsRef<Path>, to be exact) as a parameter and returns an io::Result<File>.
How to read a file line by line efficiently?
For efficiency, readers can be buffered, which simply means they have a chunk of memory (a buffer) that holds some input data in memory. This saves on system calls. In Rust, a BufRead is a type of Read with an internal buffer, which enables extra ways of reading. Note that File is not automatically buffered, as File implements Read but not BufRead. However, it’s easy to create a buffered reader for a File:
BufReader::new(file);
Finally, we can use the std::io::BufRead::lines() method to return an iterator over the lines of this buffered reader:
BufReader::new(file).lines();
Put everything together
Now we can easily write a function that reads a text file line by line efficiently:
use std::fs::File;
use std::io::{self, BufRead, BufReader, Lines};
use std::path::Path;

fn read_lines<P>(path: P) -> io::Result<Lines<BufReader<File>>>
where
    P: AsRef<Path>,
{
    let file = File::open(path)?;
    Ok(BufReader::new(file).lines())
}
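For example, assuming a words.txt file in the current directory (the file name is illustrative), the function can be used like this:

fn main() -> io::Result<()> {
    // Each item yielded by the iterator is an io::Result<String>
    for line in read_lines("words.txt")? {
        println!("{}", line?);
    }
    Ok(())
}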
Amazon Elastic Container Registry (Amazon ECR) is an AWS managed container image registry service that is secure, scalable, and reliable. Amazon ECR supports private container image repositories with resource-based permissions using AWS IAM, so that specified users or Amazon EC2 instances can access your container repositories and images. You can use your preferred CLI to push, pull, and manage Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts.
Components of Amazon ECR
Amazon ECR contains the following components:
Registry
An Amazon ECR registry is provided to each AWS account; you can create image repositories in your registry and store images in them.
Repository
An Amazon ECR image repository contains your Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts.
Image
You can push and pull container images to your repositories. You can use these images locally on your development system, or you can use them in Amazon ECS task definitions and Amazon EKS pod specifications.
Create an IAM user, and then grant this user full Amazon ECR permissions by attaching the existing managed policy AmazonEC2ContainerRegistryFullAccess to it.
Create an image repository
A repository is where you store your Docker or Open Container Initiative (OCI) images in Amazon ECR. Each time you push or pull an image from Amazon ECR, you specify the repository and the registry location, which determine where the image is pushed to or pulled from.
For Visibility settings, choose the visibility setting for the repository.
For Repository name, provide a concise name. For example, sonarqube.
For Tag immutability, enable tag immutability to prevent image tags from being overwritten by subsequent image pushes using the same tag. Disable tag immutability to allow image tags to be overwritten.
For Image scan settings and Encryption settings, leave them as Disabled.
Choose Create repository.
Create a Docker image
For brevity, pull an image from Docker Hub instead of building one. For example, sonarqube:8.9.2-enterprise:
docker pull sonarqube:8.9.2-enterprise
Authenticate to your default registry
After you have installed and configured the AWS CLI, authenticate the Docker CLI to your default registry. That way, the docker command can push and pull images with Amazon ECR. The AWS CLI provides a get-login-password command to simplify the authentication process.
The get-login-password command is the preferred way to authenticate to an Amazon ECR private registry when using the AWS CLI. Ensure that you have configured your AWS CLI to interact with AWS. For more information, see AWS CLI configuration basics:
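aws ecr get-login-password --region [region] | docker login --username AWS --password-stdin [aws_account_id].dkr.ecr.[region].amazonaws.com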
Make sure to replace [region] and [aws_account_id] with your AWS Region and account ID.
Push an image to Amazon ECR
Now you can push your image to the Amazon ECR repository you created in the previous section. You use the docker CLI to push images, but there are a few prerequisites that must be satisfied for this to work properly:
Docker version 1.7 or later is installed.
The Amazon ECR authorization token has been configured with docker login.
The Amazon ECR repository exists and the user has access to push to the repository.
After those prerequisites are met, you can push your image to your newly created repository in the default registry for your account.
Tag the image to push to your registry, which is sonarqube:8.9.2-enterprise in this case:
docker tag sonarqube:8.9.2-enterprise [aws_account_id].dkr.ecr.[region].amazonaws.com/sonarqube:8.9.2-enterprise
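Then push the image with docker push:

docker push [aws_account_id].dkr.ecr.[region].amazonaws.com/sonarqube:8.9.2-enterprise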
Pull an image from Amazon ECR
After your image has been pushed to your Amazon ECR repository, you can pull it from other locations. Use the docker CLI to pull images, but there are a few prerequisites that must be satisfied for this to work properly:
Docker version 1.7 or later is installed.
The Amazon ECR authorization token has been configured with docker login.
The Amazon ECR repository exists and the user has access to pull from the repository.
After those prerequisites are met, you can pull your image. To pull your example image from Amazon ECR, run the following command:
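docker pull [aws_account_id].dkr.ecr.[region].amazonaws.com/sonarqube:8.9.2-enterprise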
The JUnit Platform serves as a foundation for launching testing frameworks on the JVM.
JUnit Jupiter is the combination of the new programming model and extension model for writing tests and extensions in JUnit 5.
JUnit Vintage provides a TestEngine for running JUnit 3 and JUnit 4 based tests on the platform.
Why the word “Jupiter”?
Because it starts with “JU”, and it’s the 5th planet from the Sun.
All the packages associated with JUnit 5 have the word “jupiter” in there.
Setup
Use Gradle or Maven
@Test
New package: org.junit.jupiter.api
Other test annotations:
@RepeatedTest
@ParameterizedTest
@TestFactory
Lifecycle Annotations
Each test gets @Test
@BeforeEach, @AfterEach
@BeforeAll, @AfterAll
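A minimal sketch of where each lifecycle annotation runs (class and method names are illustrative):

import org.junit.jupiter.api.*;

class LifecycleDemoTest {

    @BeforeAll
    static void initAll() { /* runs once, before all tests */ }

    @BeforeEach
    void init() { /* runs before each test */ }

    @Test
    void someTest() { Assertions.assertTrue(true); }

    @AfterEach
    void tearDown() { /* runs after each test */ }

    @AfterAll
    static void tearDownAll() { /* runs once, after all tests */ }
}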
Disabled tests
@Disabled -> skip a particular test or tests
Method level or class level
Optional parameter to give a reason
Replaces @Ignored in JUnit 4
Test names
Use @DisplayName on class or methods
Supports Unicode and even emojis
Assertions
New methods in JUnit 5
assertAll
assertThrows, assertDoesNotThrow
assertTimeout
assertTimeoutPreemptively
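A sketch of the new assertion methods; person, divide(), and slowOperation() are hypothetical helpers:

import static org.junit.jupiter.api.Assertions.*;
import java.time.Duration;

@Test
void newAssertions() {
    // assertAll reports all failures in the group, not just the first one
    assertAll("person",
        () -> assertEquals("Jane", person.getFirstName()),
        () -> assertEquals("Doe", person.getLastName()));

    // assertThrows fails unless the executable throws the given exception
    assertThrows(ArithmeticException.class, () -> divide(1, 0));

    // assertTimeout fails if the executable takes longer than the duration
    assertTimeout(Duration.ofSeconds(1), () -> slowOperation());
}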
Assumptions
Let you test pre-conditions
Static methods in org.junit.jupiter.api.Assumptions
Conditional Execution
Can make tests or test classes conditional, based on:
Operating system
Java version
Some boolean condition
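For example, using the built-in annotations from org.junit.jupiter.api.condition (method bodies elided):

@Test
@EnabledOnOs(OS.LINUX)
void onlyOnLinux() { /* ... */ }

@Test
@EnabledOnJre(JRE.JAVA_11)
void onlyOnJava11() { /* ... */ }

@Test
@EnabledIfEnvironmentVariable(named = "ENV", matches = "staging")
void onlyOnStaging() { /* ... */ }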
Tagging and Filtering
Test Execution Order
By default, test methods will be ordered using an algorithm that is deterministic but intentionally nonobvious. This ensures that subsequent runs of a test suite execute test methods in the same order, thereby allowing for repeatable builds.
To control the order in which test methods are executed, annotate your test class or test interface with @TestMethodOrder and specify the desired MethodOrderer implementation. You can implement your own custom MethodOrderer or use one of the built-in implementations, such as MethodOrderer.OrderAnnotation.
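For example, with the built-in OrderAnnotation orderer (test names are illustrative):

import org.junit.jupiter.api.*;

@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class OrderedTests {

    @Test
    @Order(1)
    void createAccount() { /* ... */ }

    @Test
    @Order(2)
    void deleteAccount() { /* ... */ }
}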
Test Instance Lifecycle
Nested Test Classes
Use @Nested on non-static inner classes
Nesting can be as deep as you want
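A sketch of nested test classes (the stack scenario is illustrative):

import org.junit.jupiter.api.*;

class StackTests {

    @Nested
    class WhenNew {

        @Test
        void isEmpty() { /* ... */ }

        @Nested
        class AfterPushing {

            @Test
            void isNotEmpty() { /* ... */ }
        }
    }
}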
Constructor and Method Parameters
In JUnit 4, no parameters in constructors or test methods
Now, parameters can be injected automatically
TestInfo
TestReporter
RepetitionInfo
Other custom ParameterResolvers supplied by extensions
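For example, JUnit can inject these resolver-backed parameters automatically:

@Test
void reportName(TestInfo info, TestReporter reporter) {
    // TestInfo exposes the display name, tags, and test class/method
    reporter.publishEntry("running", info.getDisplayName());
}

@RepeatedTest(3)
void repeat(RepetitionInfo info) {
    // RepetitionInfo knows the current repetition and the total count
    System.out.println(info.getCurrentRepetition() + "/" + info.getTotalRepetitions());
}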
Parameterized Tests
Arguably more useful than plain repeated tests
Run a test multiple times with different arguments
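A minimal sketch with @ValueSource (requires the junit-jupiter-params artifact):

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import static org.junit.jupiter.api.Assertions.assertTrue;

class PalindromeTests {

    @ParameterizedTest
    @ValueSource(strings = {"racecar", "level", "noon"})
    void isPalindrome(String candidate) {
        assertTrue(new StringBuilder(candidate).reverse().toString().equals(candidate));
    }
}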
Python has this idea of abstract base classes (ABCs), which define a set of methods and properties that a class must implement in order to be considered a duck-type instance of that class. The class can extend the abstract base class itself in order to be used as an instance of that class, but it must supply all the appropriate methods.
Case study
Recently, I worked on an internal tool to parse source files containing test cases written in three different testing frameworks: pytest, Robot Framework, and Cucumber. Since the tool needs to parse source files with three different parsers that share the same set of APIs, it is advisable to create an abstract base class to document what APIs the parsers should provide (documentation is one of the stronger use cases for ABCs). The abc module provides the tools I need to do this, as demonstrated in the following block of code:
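from abc import ABC, abstractmethod

class Parser(ABC):
    """Common API for the pytest, Robot Framework, and Cucumber parsers."""

    @abstractmethod
    def filter_test_case_from_source_file(self, source_file):
        """Extract the test cases defined in the given source file."""
        # The parameter name is illustrative; each concrete parser
        # must supply its own implementation of this method.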
Though I omitted the implementation of the filter_test_case_from_source_file() method for brevity, the idea is that any subclass of the Parser class must implement this method, as it is decorated with @abstractmethod.
More conventional object-oriented languages (like Java and Kotlin) have a clear separation between the interface and the implementation of a class. For example, some languages provide an explicit interface keyword that allows us to define the methods that a class must have without any implementation. In such an environment, an abstract class is one that provides both an interface and a concrete implementation of some, but not all, methods. Any class can explicitly state that it implements a given interface. Python’s ABCs help to supply the functionality of interfaces without compromising on the benefits of duck typing.
Recently, I wrote a Cypress test where I had to copy a configuration file from one container to another. As you may know, the easiest way to do that is to use the “docker cp” command. This post is a step-by-step account of how I achieved this.
Installing Docker
The “tester” container is based on the official cypress/included:6.3.0 docker image, which in turn is based on the official node:12.18.3-buster docker image. So as the first step, I had to figure out how to install Docker on Debian 10 in order to be able to run “docker cp” from within the container:
FROM cypress/included:6.3.0

RUN apt-get update && apt-get install -y \
    apt-transport-https \
    gnupg2 \
    software-properties-common

RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian buster stable"

RUN apt-get update && apt-get install -y \
    docker-ce
Creating a Volume for /var/run/docker.sock
In order to talk to the Docker daemon running outside of the “tester” container, I had to add a volume to mount the famous /var/run/docker.sock in the docker compose file:
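services:
  tester:
    # built from the Dockerfile above; other settings omitted for brevity
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock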
Finally, I was able to execute “docker cp” to copy the configuration file from the “tester” container to the “web_app” container using the Cypress exec command:
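// The source and destination paths are illustrative
cy.exec('docker cp config/app.conf web_app:/etc/app/app.conf');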