Mockito Quick Start Guide

Why Mockito?

Mockito is a mocking framework that helps you test particular components of software in isolation. You use Mockito to replace collaborators of the components you’re testing so that methods in each collaborator return desired outputs for given inputs. That way, if an error occurs when testing the component, you know where and why.

Adding Mockito to the Project

First, to use Mockito in the project, update the Gradle build script to include JUnit and Mockito:

dependencies {
    testImplementation 'junit:junit:4.13.2'
    
    // mockito-inline is needed instead of mockito-core if you plan to mock final methods or classes, constructors, or static methods
    testImplementation 'org.mockito:mockito-inline:4.11.0'
}

If this project used Maven instead of Gradle, the same two dependencies would be required, just using the Maven XML-based syntax in pom.xml.

A Test Class with Everything

A sample unit test utilizing Mockito looks like the following:

package com.example.catalog.model.pubsub;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.ArgumentMatchers.startsWith;
import static org.mockito.Mockito.lenient;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URL;
import java.time.ZonedDateTime;

import org.hibernate.Session;
import org.hibernate.Transaction;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

import com.example.catalog.exception.CatalogException;
import com.example.catalog.util.CatalogConstants;

@RunWith(MockitoJUnitRunner.class)
public class SubscriptionTest {
    private static final long SUBSCRIBER_ID = 10L;
    private static final long SUBSCRIPTION_ID = 100L;
    private static final long NOTIFICATION_ID = 1000L;
    private static final String MY_FILENAME = "thisIsMyFileName.txt";
    private static final String MY_URL = "http://localhost";

    @Mock
    private Session mockSession;

    @Mock
    private HttpURLConnection mockHttpConnection;

    @Mock
    private Transaction mockTransaction;

    private final Subscriber subscriber = new Subscriber();
    private final Subscription subscription = new Subscription();
    private final ZonedDateTime myDatetime = ZonedDateTime.now();
    private final Notification notification = new Notification(MY_FILENAME, CatalogConstants.Changes.ADDED, myDatetime);

    @Before
    public void setUp() throws IOException {
        subscriber.setId(SUBSCRIBER_ID);
        subscriber.setEmail("abc@qc.com");
        subscriber.setName("Bobby Junior");

        subscription.setId(SUBSCRIPTION_ID);
        subscription.setSubscriber(subscriber);
        subscription.setDatasetName("myAwesomeData.s20250801.e20250802");
        subscription.setRepeatingTimeIntervalString("0 0 * * *");
        subscription.setUrl(MY_URL);
        subscription.setTestMode(true);

        notification.setId(NOTIFICATION_ID);
        notification.setSubscription(subscription);

        lenient().when(mockSession.beginTransaction()).thenReturn(mockTransaction);
        lenient().when(mockHttpConnection.getOutputStream()).thenReturn(new ByteArrayOutputStream());
    }

    @Test
    public void testPublishToSuccess() throws IOException, CatalogException {
        URL mockUrl = spy(URI.create(MY_URL).toURL());
        when(mockUrl.openConnection()).thenReturn(mockHttpConnection);
        when(mockHttpConnection.getResponseCode()).thenReturn(HttpURLConnection.HTTP_ACCEPTED);

        subscription.setUsername("username");
        subscription.setPassword("password");
        NotificationLog nlog = subscription.publishNotification(notification, mockUrl, mockSession);

        verify(mockHttpConnection).setRequestMethod("POST");
        verify(mockHttpConnection).setDoOutput(true);
        verify(mockHttpConnection).setRequestProperty("Content-Type", "application/json");
        verify(mockHttpConnection)
                .setRequestProperty(eq("Authorization"), startsWith("Basic dXNlcm5hbWU6cGFzc3dvcmQ="));
        verify(mockHttpConnection).getResponseCode();
        verify(mockHttpConnection).disconnect();

        assertEquals(NOTIFICATION_ID, nlog.getNotification().getId());
        assertEquals(SUBSCRIPTION_ID, nlog.getNotification().getSubscription().getId());
        assertEquals(
                SUBSCRIBER_ID,
                nlog.getNotification().getSubscription().getSubscriber().getId());
    }
}

As you can see, we can test publishNotification by mocking its three collaborators, injecting them into the class under test, and then calling the method under test.

Working with the Mockito API

The above example demonstrates the general Mockito test process:

  1. Creating stubs to stand in for collaborators
  2. Setting expectations on the stubs to do what you want
  3. Injecting the stubs into the class you plan to test
  4. Testing the methods in the class under test by invoking its methods, which in turn call methods on the stubs
  5. Checking the methods work as expected
  6. Verifying that the methods on the collaborators got invoked the correct number of times, in the correct order

You can use these steps every time you want to use Mockito for replacing collaborators in the class under test, thereby writing true unit tests.
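For readers more at home outside Java, the six steps above map directly onto Python's unittest.mock; the Publisher and HTTP-client names below are hypothetical, a minimal sketch of the same stub-inject-verify flow rather than a translation of the Mockito example itself:

```python
from unittest.mock import Mock

# Hypothetical class under test: publishes a payload through an injected HTTP client.
class Publisher:
    def __init__(self, http_client):
        self.http_client = http_client

    def publish(self, payload):
        status = self.http_client.post("/notifications", payload)
        return status == 202

# Steps 1-2: create a stub and set expectations on it.
mock_client = Mock()
mock_client.post.return_value = 202

# Steps 3-4: inject the stub and invoke the method under test.
publisher = Publisher(mock_client)
result = publisher.publish({"file": "thisIsMyFileName.txt"})

# Steps 5-6: check the result and verify the collaborator was called exactly once.
assert result is True
mock_client.post.assert_called_once_with("/notifications", {"file": "thisIsMyFileName.txt"})
```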

Creating Mocks and Stubs

Mockito has two ways of creating mocks: using the static mock method or using annotations.

In the example, we added the @Mock annotation to those collaborators that we wanted Mockito to mock:

    @Mock
    private Session mockSession;

    @Mock
    private HttpURLConnection mockHttpConnection;

    @Mock
    private Transaction mockTransaction;

We have three collaborators, so we needed three @Mock annotations on the attributes here.

Setting Expectations

The example mocked the collaborators of Subscription and used the when and thenReturn methods to set expectations on various calls:

when(mockSession.beginTransaction()).thenReturn(mockTransaction);

when(mockHttpConnection.getOutputStream()).thenReturn(new ByteArrayOutputStream());

when(mockUrl.openConnection()).thenReturn(mockHttpConnection);

when(mockHttpConnection.getResponseCode()).thenReturn(HttpURLConnection.HTTP_ACCEPTED);

The argument to when is an invocation of the method you want to stub. Its return value connects to the various then methods, like thenReturn, thenThrow, or thenAnswer, which are chained onto it to define the stub's behavior.

Verifying Method Calls

After exercising the class under test, you can take advantage of one more capability of Mockito: verifying that the methods on the mocks were called the correct number of times (and, with the InOrder API, in the correct order):

        verify(mockHttpConnection).setRequestMethod("POST");
        verify(mockHttpConnection).setDoOutput(true);
        verify(mockHttpConnection).setRequestProperty("Content-Type", "application/json");
        verify(mockHttpConnection)
                .setRequestProperty(eq("Authorization"), startsWith("Basic dXNlcm5hbWU6cGFzc3dvcmQ="));
        verify(mockHttpConnection).getResponseCode();
        verify(mockHttpConnection).disconnect();

Running JUnit Tests with Mockito

For JUnit 4, in order to work with the @Mock annotation, add the following @RunWith annotation to your test class:

@RunWith(MockitoJUnitRunner.class)

Alternatively, you can invoke MockitoAnnotations.openMocks(this) in a setup method annotated with @Before.

Using dict.setdefault in Python


For years, I’ve been using the following snippet in the solution for one of my interview questions:

anagrams = dict()

with open(WORDS_PATH) as f:
    for line in f:
        key = sort(line.strip())
        if key not in anagrams:
            anagrams[key] = list()
            anagrams[key].append(line.strip())
        else:
            anagrams[key].append(line.strip())

Recently, I learned to use the dict.setdefault function to further optimize it, and the end result looks like the following:

anagrams = dict()

with open(WORDS_PATH) as f:
    for line in f:
        key = sort(line.strip())
        anagrams.setdefault(key, []).append(line.strip())

Get the list of anagrams for key, or set it to [] if not found; setdefault returns the value, so it can be updated without requiring a second search. In other words, the end result of this line…

anagrams.setdefault(key, []).append(line.strip())

…is the same as running…

if key not in anagrams:
    anagrams[key] = list()
    anagrams[key].append(line.strip())
else:
    anagrams[key].append(line.strip())

…except that the latter code performs at least two searches for key – three if it’s not found – while setdefault does it all with a single lookup.
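The equivalence is easy to check with a runnable snippet; since the post's sort() helper isn't shown, the sorted-letters key is reconstructed inline here:

```python
words = ["listen", "silent", "enlist", "google"]

anagrams = {}
for word in words:
    key = "".join(sorted(word))  # stand-in for the sort() helper above
    anagrams.setdefault(key, []).append(word)

print(anagrams)
# {'eilnst': ['listen', 'silent', 'enlist'], 'eggloo': ['google']}
```

collections.defaultdict(list) is another common way to get the same single-lookup behavior, at the cost of the dictionary silently creating an entry on every missed lookup.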

How to Make an API Endpoint Accept POST Requests with an Optional JSON body in axum?


What is axum?

axum is a very popular web application framework in the Rust world.

How does axum handle incoming requests?

Like many other web application frameworks, axum routes the incoming requests to handlers.

What is a handler?

In axum, a handler is an async function that accepts zero or more “extractors” as arguments and returns something that can be converted into a response.

What is an extractor?

In axum, an extractor is a type for extracting data from requests, which implements FromRequest or FromRequestParts.

For example, Json is an extractor that consumes the request body and deserializes it as JSON into some target type:

use axum::{
    Json,
    routing::post,
    Router,
};
use serde::Deserialize;

#[derive(Deserialize)]
struct CreateUser {
    email: String,
    password: String,
}

async fn create_user(Json(payload): Json<CreateUser>) {
    // ...
}

let app = Router::new().route("/users", post(create_user));

We can spot a few things in the above example:

  1. the async function create_user() is the handler, which is set up to handle POST requests against the /users endpoint
  2. the function argument Json(payload) is a Json extractor that consumes the JSON body and deserializes it as JSON into the target type CreateUser

How to make an API endpoint accept POST requests with an optional JSON body?

All extractors defined in axum will reject the request if it doesn’t match. If you wish to make an extractor optional, you can wrap it in Option:

use axum::{
    Json,
    routing::post,
    Router,
};
use serde_json::Value;

async fn create_user(payload: Option<Json<Value>>) {
    if let Some(payload) = payload {
        // We got a valid JSON payload
    } else {
        // Payload wasn't valid JSON
    }
}

let app = Router::new().route("/users", post(create_user));

How to Read a Text File Line by Line Efficiently in Rust?


How to open a file for reading?

In Rust, a reference to an open file on the filesystem is represented by the struct std::fs::File. And we can use the File::open method to open an existing file for reading:

pub fn open<P: AsRef<Path>>(path: P) -> Result<File>

This method takes a path (anything a &Path can be borrowed from, i.e. AsRef&lt;Path&gt; is implemented for it) as a parameter and returns an io::Result&lt;File&gt;.

How to read a file line by line efficiently?

For efficiency, readers can be buffered: they hold a chunk of input data in an in-memory buffer, which saves on system calls. In Rust, a BufRead is a Read with an internal buffer, which enables additional ways of reading, such as reading a line at a time. Note that File is not automatically buffered, as File implements Read but not BufRead. However, it's easy to create a buffered reader for a File:

BufReader::new(file);

Finally, we can use the std::io::BufRead::lines() method to return an iterator over the lines of this buffered reader:

BufReader::new(file).lines();

Put everything together

Now we can easily write a function that reads a text file line by line efficiently:

use std::fs::File;
use std::io::{self, BufRead, BufReader, Lines};
use std::path::Path;

fn read_lines<P>(path: P) -> io::Result<Lines<BufReader<File>>>
where
    P: AsRef<Path>,
{
    let file = File::open(path)?;
    Ok(BufReader::new(file).lines())
}

Want to buy me a coffee? Do it here: https://www.buymeacoffee.com/j3rrywan9

Getting Started with Amazon Elastic Container Registry

What is Amazon Elastic Container Registry?

Amazon Elastic Container Registry (Amazon ECR) is an AWS managed container image registry service that is secure, scalable, and reliable. Amazon ECR supports private container image repositories with resource-based permissions using AWS IAM, so that specified users or Amazon EC2 instances can access your container repositories and images. You can use your preferred CLI to push, pull, and manage Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts.

Components of Amazon ECR

Amazon ECR contains the following components:

Registry

An Amazon ECR registry is provided to each AWS account; you can create image repositories in your registry and store images in them.

Repository

An Amazon ECR image repository contains your Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts.

Image

You can push and pull container images to your repositories. You can use these images locally on your development system, or you can use them in Amazon ECS task definitions and Amazon EKS pod specifications.

Getting Started with Amazon ECR

Prerequisites

  • Sign up for AWS
  • Install the AWS CLI
  • Install Docker

Create an IAM user

Create an IAM user, and then grant this user the required permissions by attaching the AWS managed policy AmazonEC2ContainerRegistryFullAccess to it.

Create an image repository

A repository is where you store your Docker or Open Container Initiative (OCI) images in Amazon ECR. Each time you push or pull an image from Amazon ECR, you specify the repository and the registry location, which determine where the image is pushed to or pulled from.

  • Choose Get Started.
  • Inside the Create repository form:
      • For Visibility settings, choose the visibility setting for the repository.
      • For Repository name, provide a concise name. For example, sonarqube.
      • For Tag immutability, enable tag immutability to prevent image tags from being overwritten by subsequent image pushes using the same tag. Disable tag immutability to allow image tags to be overwritten.
      • For Image scan settings and Encryption settings, leave them as Disabled.
  • Choose Create repository.

Create a Docker image

For brevity, pull a Docker image from Docker Hub instead of building one. For example, sonarqube:8.9.2-enterprise:

docker pull sonarqube:8.9.2-enterprise

Authenticate to your default registry

After you have installed and configured the AWS CLI, authenticate the Docker CLI to your default registry. That way, the docker command can push and pull images with Amazon ECR. The AWS CLI provides a get-login-password command to simplify the authentication process.

The get-login-password command is the preferred method for authenticating to an Amazon ECR private registry when using the AWS CLI. Ensure that you have configured your AWS CLI to interact with AWS. For more information, see AWS CLI configuration basics:

aws ecr get-login-password --region [region] | docker login --username AWS --password-stdin [aws_account_id].dkr.ecr.[region].amazonaws.com

Make sure to replace [region] and [aws_account_id] with your region and AWS account ID.

Push an image to Amazon ECR

Now you can push your image to the Amazon ECR repository you created in the previous section. You use the docker CLI to push images, but there are a few prerequisites that must be satisfied for this to work properly:

  • Docker version 1.7 or later is installed.
  • The Amazon ECR authorization token has been configured with docker login.
  • The Amazon ECR repository exists and the user has access to push to the repository.

After those prerequisites are met, you can push your image to your newly created repository in the default registry for your account.

Tag the image to push to your registry, which is sonarqube:8.9.2-enterprise in this case:

docker tag sonarqube:8.9.2-enterprise [aws_account_id].dkr.ecr.[region].amazonaws.com/sonarqube:8.9.2-enterprise

Push the image:

docker push [aws_account_id].dkr.ecr.[region].amazonaws.com/sonarqube:8.9.2-enterprise

Pull an image from Amazon ECR

After your image has been pushed to your Amazon ECR repository, you can pull it from other locations. Use the docker CLI to pull images, but there are a few prerequisites that must be satisfied for this to work properly:

  • Docker version 1.7 or later is installed.
  • The Amazon ECR authorization token has been configured with docker login.
  • The Amazon ECR repository exists and the user has access to pull from the repository.

After those prerequisites are met, you can pull your image. To pull your example image from Amazon ECR, run the following command:

docker pull [aws_account_id].dkr.ecr.[region].amazonaws.com/sonarqube:8.9.2-enterprise

Next Generation Java Testing with JUnit 5

What is JUnit 5?

Unlike previous versions of JUnit, JUnit 5 is composed of several different modules from three different sub-projects.

JUnit 5 = JUnit Platform + JUnit Jupiter + JUnit Vintage

The JUnit Platform serves as a foundation for launching testing frameworks on the JVM.

JUnit Jupiter is the combination of the new programming model and extension model for writing tests and extensions in JUnit 5.

JUnit Vintage provides a TestEngine for running JUnit 3 and JUnit 4 based tests on the platform.

Why the word “Jupiter”?

Because it starts with “JU”, and it’s the 5th planet from the Sun.

All the packages associated with JUnit 5 have the word “jupiter” in their names.

Setup

Use Gradle or Maven

@Test

New package: org.junit.jupiter.api

Other test annotations:

  • @RepeatedTest
  • @ParameterizedTest
  • @TestFactory

Lifecycle Annotations

Each test gets @Test

@BeforeEach, @AfterEach

@BeforeAll, @AfterAll

Disabled tests

@Disabled -> skip a particular test or tests

Method level or class level

Optional parameter to give a reason

Replaces @Ignore in JUnit 4

Test names

Use @DisplayName on class or methods

Supports Unicode and even emojis

Assertions

New methods in JUnit 5

  • assertAll
  • assertThrows, assertDoesNotThrow
  • assertTimeout
  • assertTimeoutPreemptively

Assumptions

Let you test pre-conditions

Static methods in org.junit.jupiter.api.Assumptions

Conditional Execution

Can make tests or test classes conditional, based on:

  • Operating system
  • Java version
  • Some boolean condition
  • Tagging and Filtering

Test Execution Order

By default, test methods will be ordered using an algorithm that is deterministic but intentionally nonobvious. This ensures that subsequent runs of a test suite execute test methods in the same order, thereby allowing for repeatable builds.

To control the order in which test methods are executed, annotate your test class or test interface with @TestMethodOrder and specify the desired MethodOrderer implementation. You can implement your own custom MethodOrderer or use one of the following built-in MethodOrderer implementations.

Test Instance Lifecycle

Nested Test Classes

Use @Nested on non-static inner classes

Nesting can be as deep as you want

Constructor and Method Parameters

In JUnit 4, no parameters in constructors or test methods

Now, parameters can be injected automatically

  • TestInfo
  • TestReporter
  • RepetitionInfo
  • Other custom ParameterResolvers supplied by extensions

Parameterized Tests

Arguably more useful than repeated tests

Run a test multiple times with different arguments

@ParameterizedTest

Need at least one source of parameters

NOTE: Stable as of 5.7 (finally!)

Dynamic Tests

Generated at runtime by a factory method

Annotated with @TestFactory

Python Abstract Base Classes in Action


Abstract base classes

Python has this idea of abstract base classes (ABCs), which define a set of methods and properties that a class must implement in order to be considered a duck-type instance of that class. The class can extend the abstract base class itself in order to be used as an instance of that class, but it must supply all the appropriate methods.

Case study

Recently, I worked on an internal tool to parse source files containing test cases written in three different testing frameworks, i.e. pytest, Robot Framework, and Cucumber. Since the tool needs to parse source files with three different parsers that share the same set of APIs, it is advisable to create an abstract base class to document what APIs the parsers should provide (documentation is one of the stronger use cases for ABCs). The abc module provides the tools I need to do this, as demonstrated in the following block of code:

from abc import ABC, abstractmethod
from pathlib import Path


class Parser(ABC):
    @property
    def name(self) -> str:
        return self._name

    @property
    def test_case_indicator(self) -> str:
        return self._test_case_indicator

    @abstractmethod
    def filter_test_case_from_source_file(self, path: Path) -> str:
        pass


class PytestParser(Parser):
    def __init__(self, source_file: Path) -> None:
        self._name = 'pytest'
        self._file_ext = 'py'
        self._test_case_indicator = 'def '

    def filter_test_case_from_source_file(self, path: Path) -> str:
        pass


class RobotParser(Parser):
    def __init__(self, source_file: Path) -> None:
        self._name = 'robot'
        self._file_ext = 'robot'
        self._test_case_indicator = '*** Test Cases ***'

    def filter_test_case_from_source_file(self, path: Path) -> str:
        pass


class GherkinParser(Parser):
    def __init__(self, source_file: Path) -> None:
        self._name = 'gherkin'
        self._file_ext = 'feature'
        self._test_case_indicator = 'Scenario: '

    def filter_test_case_from_source_file(self, path: Path) -> str:
        pass

Though I omitted the implementation of the filter_test_case_from_source_file() method for brevity, the idea is that any subclass of the Parser class must implement this method, as it is decorated with @abstractmethod.
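This enforcement can be seen directly: instantiating a subclass that forgets the abstract method fails at construction time, while a complete subclass works. A minimal, self-contained sketch (the class names here are illustrative):

```python
from abc import ABC, abstractmethod
from pathlib import Path


class Parser(ABC):
    @abstractmethod
    def filter_test_case_from_source_file(self, path: Path) -> str:
        ...


class BrokenParser(Parser):
    """Forgets to implement the abstract method."""


class PytestParser(Parser):
    def filter_test_case_from_source_file(self, path: Path) -> str:
        return ''


try:
    BrokenParser()
except TypeError as e:
    print(e)  # the message names BrokenParser and the missing abstract method

PytestParser()  # fine: every abstract method is implemented
```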

More common object-oriented languages (like Java, Kotlin) have a clear separation between the interface and the implementation of a class. For example, some languages provide an explicit interface keyword that allows us to define the methods that a class must have without any implementation. In such an environment, an abstract class is one that provides both an interface and a concrete implementation of some, but not all, methods. Any class can explicitly state that it implements a given interface.
Python’s ABCs help to supply the functionality of interfaces without compromising on the benefits of duck typing.

References

PEP-3119

Executing System Command in Cypress Tests


Recently, I wrote a Cypress test where I had to copy a configuration file from one container to another container. As you may know, the easiest way to do that is to use the “docker cp” command. This post is a step-by-step how-to I used to achieve this.

Installing Docker

The “tester” container is based on the official cypress/included:6.3.0 docker image, which in turn is based on the official node:12.18.3-buster docker image. So as the first step, I had to figure out how to install Docker in Debian 10 in order to be able to run “docker cp” from within the container:

FROM cypress/included:6.3.0

RUN apt-get update && apt-get install -y \
  apt-transport-https \
  gnupg2 \
  software-properties-common

RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian buster stable"

RUN apt-get update && apt-get install -y \
  docker-ce

Creating a Volume for the /var/run/docker.sock

In order to talk to the Docker daemon running outside of the “tester” container, I had to add a volume to mount the famous /var/run/docker.sock in the docker compose file:

volumes:
  - /var/run/docker.sock:/var/run/docker.sock

Executing “docker cp” in Cypress Test

Finally, I was able to execute “docker cp” to copy the configuration file from the “tester” container to the “web_app” container using the Cypress exec command:

const configYamlPath = 'cypress/fixtures/config.yaml';

cy.exec(`docker cp ${configYamlPath} web_app:/opt/web_app`)
  .then(() => cy.reload());


JavaScript Object Literal Shorthand with ECMAScript 2015


Last Friday, I hit an ESLint error as below:

335:13  error  Expected property shorthand        object-shorthand

Per the “object-shorthand” rule’s documentation, ECMAScript 2015 (ES6) provides syntactic sugar for defining object literal methods and properties.

Quite often, when declaring an object literal, property values are stored in variables whose names are equal to the property names. For example:

const timeout = 30000;

const options = { timeout: timeout };

There is a shorthand for this situation:

const timeout = 30000;

const options = { timeout };
