How to Make an API Endpoint Accept POST Requests with an Optional JSON body in axum?


What is axum?

axum is a very popular web application framework in the Rust world.

How does axum handle incoming requests?

Like many other web application frameworks, axum routes the incoming requests to handlers.

What is a handler?

In axum, a handler is an async function that accepts zero or more “extractors” as arguments and returns something that can be converted into a response.

What is an extractor?

In axum, an extractor is a type for extracting data from requests, which implements FromRequest or FromRequestParts.

For example, Json is an extractor that consumes the request body and deserializes it as JSON into some target type:

use axum::{
    Json,
    routing::post,
    Router,
};
use serde::Deserialize;

#[derive(Deserialize)]
struct CreateUser {
    email: String,
    password: String,
}

// The `Json` extractor deserializes the request body into `CreateUser`
async fn create_user(Json(payload): Json<CreateUser>) {
    // ...
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/users", post(create_user));
    // ... serve `app` with your axum/hyper server setup of choice
}

We can spot a few things in the above example:

  1. the async function create_user() is the handler, which is set up to handle POST requests against the /users endpoint
  2. the function argument Json(payload) is a Json extractor that consumes the request body and deserializes it into the target type CreateUser

How to make an API endpoint accept POST requests with an optional JSON body?

All extractors defined in axum will reject the request if it doesn’t match. If you wish to make an extractor optional, you can wrap it in Option:

use axum::{
    Json,
    routing::post,
    Router,
};
use serde_json::Value;

async fn create_user(payload: Option<Json<Value>>) {
    if let Some(payload) = payload {
        // We got a valid JSON payload
    } else {
        // The body was missing or wasn't valid JSON
    }
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/users", post(create_user));
    // ... serve `app` as before
}
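To see the optional extractor in action, you can exercise the endpoint with and without a body (a quick smoke test, assuming the app is served locally on port 3000):

# With a JSON body, the handler receives Some(payload)
curl -X POST -H 'Content-Type: application/json' -d '{"name":"test"}' http://localhost:3000/users

# With no body, the handler receives None
curl -X POST http://localhost:3000/users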

How to Read a Text File Line by Line Efficiently in Rust?


How to open a file for reading?

In Rust, a reference to an open file on the filesystem is represented by the struct std::fs::File. And we can use the File::open method to open an existing file for reading:

pub fn open<P: AsRef<Path>>(path: P) -> Result<File>

It takes a path as a parameter (anything from which a &Path can be borrowed, i.e. AsRef<Path>, to be exact) and returns an io::Result<File>.

How to read a file line by line efficiently?

For efficiency, readers can be buffered: they hold a chunk of input data in memory (a buffer), which saves on system calls. In Rust, BufRead is a trait for a type of Read that has an internal buffer, allowing it to perform extra ways of reading. Note that File is not automatically buffered, as File implements Read but not BufRead. However, it’s easy to create a buffered reader for a File:

BufReader::new(file);

Finally, we can use the std::io::BufRead::lines() method to return an iterator over the lines of this buffered reader:

BufReader::new(file).lines();

Put everything together

Now we can easily write a function that reads a text file line by line efficiently:

use std::fs::File;
use std::io::{self, BufRead, BufReader, Lines};
use std::path::Path;

fn read_lines<P>(path: P) -> io::Result<Lines<BufReader<File>>>
where P: AsRef<Path>,
{
    let file = File::open(path)?;
    Ok(BufReader::new(file).lines())
}
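A minimal sketch of calling it (the file name input.txt is just a placeholder):

fn main() {
    if let Ok(lines) = read_lines("input.txt") {
        // Each item is an io::Result<String>, so flatten() skips unreadable lines
        for line in lines.flatten() {
            println!("{}", line);
        }
    }
}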

Want to buy me a coffee? Do it here: https://www.buymeacoffee.com/j3rrywan9

Getting Started with Amazon Elastic Container Registry

What is Amazon Elastic Container Registry?

Amazon Elastic Container Registry (Amazon ECR) is an AWS managed container image registry service that is secure, scalable, and reliable. Amazon ECR supports private container image repositories with resource-based permissions using AWS IAM, so that specified users or Amazon EC2 instances can access your container repositories and images. You can use your preferred CLI to push, pull, and manage Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts.

Components of Amazon ECR

Amazon ECR contains the following components:

Registry

An Amazon ECR registry is provided to each AWS account; you can create image repositories in your registry and store images in them.

Repository

An Amazon ECR image repository contains your Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts.

Image

You can push and pull container images to your repositories. You can use these images locally on your development system, or you can use them in Amazon ECS task definitions and Amazon EKS pod specifications.

Getting Started with Amazon ECR

Prerequisites

  • Sign up for AWS
  • Install the AWS CLI
  • Install Docker

Create an IAM user

Create an IAM user, and then grant this user full Amazon ECR access by attaching the AWS managed policy AmazonEC2ContainerRegistryFullAccess to it.
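If you prefer the command line, the same can be done with two AWS CLI calls (the user name ecr-user below is just an example):

aws iam create-user --user-name ecr-user
aws iam attach-user-policy \
  --user-name ecr-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess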

Create an image repository

A repository is where you store your Docker or Open Container Initiative (OCI) images in Amazon ECR. Each time you push or pull an image from Amazon ECR, you specify the repository and the registry location, which determine where the image is pushed to or pulled from.

  • Choose Get Started.
  • Inside the Create repository form:
      • For Visibility settings, choose the visibility setting for the repository.
      • For Repository name, provide a concise name. For example, sonarqube.
      • For Tag immutability, enable tag immutability to prevent image tags from being overwritten by subsequent image pushes using the same tag, or disable it to allow image tags to be overwritten.
      • For Image scan settings and Encryption settings, leave them as Disabled.
  • Choose Create repository.
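Alternatively, the repository can be created with a single AWS CLI command (a sketch matching the console choices above; replace [region] as before):

aws ecr create-repository \
  --repository-name sonarqube \
  --image-tag-mutability IMMUTABLE \
  --region [region]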

Create a Docker image

For brevity, pull an image from Docker Hub instead of building your own. For example, sonarqube:8.9.2-enterprise:

docker pull sonarqube:8.9.2-enterprise

Authenticate to your default registry

After you have installed and configured the AWS CLI, authenticate the Docker CLI to your default registry. That way, the docker command can push and pull images with Amazon ECR. The AWS CLI provides a get-login-password command to simplify the authentication process.

The get-login-password command is the preferred method for authenticating to an Amazon ECR private registry when using the AWS CLI. Ensure that you have configured the AWS CLI to interact with AWS. For more information, see AWS CLI configuration basics:

aws ecr get-login-password --region [region] | docker login --username AWS --password-stdin [aws_account_id].dkr.ecr.[region].amazonaws.com

Make sure to replace [region] and [aws_account_id] with your Region and AWS account ID.

Push an image to Amazon ECR

Now you can push your image to the Amazon ECR repository you created in the previous section. You use the docker CLI to push images, but there are a few prerequisites that must be satisfied for this to work properly:

  • Docker version 1.7 or later is installed.
  • The Amazon ECR authorization token has been configured with docker login.
  • The Amazon ECR repository exists and the user has access to push to the repository.

After those prerequisites are met, you can push your image to your newly created repository in the default registry for your account.

Tag the image to push to your registry, which is sonarqube:8.9.2-enterprise in this case:

docker tag sonarqube:8.9.2-enterprise [aws_account_id].dkr.ecr.[region].amazonaws.com/sonarqube:8.9.2-enterprise

Push the image:

docker push [aws_account_id].dkr.ecr.[region].amazonaws.com/sonarqube:8.9.2-enterprise

Pull an image from Amazon ECR

After your image has been pushed to your Amazon ECR repository, you can pull it from other locations. Use the docker CLI to pull images, but there are a few prerequisites that must be satisfied for this to work properly:

  • Docker version 1.7 or later is installed.
  • The Amazon ECR authorization token has been configured with docker login.
  • The Amazon ECR repository exists and the user has access to pull from the repository.

After those prerequisites are met, you can pull your image. To pull your example image from Amazon ECR, run the following command:

docker pull [aws_account_id].dkr.ecr.[region].amazonaws.com/sonarqube:8.9.2-enterprise

Next Generation Java Testing with JUnit 5

What is JUnit 5?

Unlike previous versions of JUnit, JUnit 5 is composed of several different modules from three different sub-projects.

JUnit 5 = JUnit Platform + JUnit Jupiter + JUnit Vintage

The JUnit Platform serves as a foundation for launching testing frameworks on the JVM.

JUnit Jupiter is the combination of the new programming model and extension model for writing tests and extensions in JUnit 5.

JUnit Vintage provides a TestEngine for running JUnit 3 and JUnit 4 based tests on the platform.

Why the word “Jupiter”?

Because it starts with “JU”, and it’s the 5th planet from the Sun.

All the packages associated with JUnit 5 contain the word “jupiter”.

Setup

Use Gradle or Maven
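For example, a minimal Gradle setup might look like the following sketch (the 5.7.0 version is just an example; Maven works equally well with the same junit-jupiter artifact):

dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter:5.7.0'
}

test {
    useJUnitPlatform()
}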

@Test

New package: org.junit.jupiter.api

Other test annotations:

  • @RepeatedTest
  • @ParameterizedTest
  • @TestFactory
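A minimal Jupiter test using the new package (class and method names are made up for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.RepeatedTest;
import org.junit.jupiter.api.Test;

class CalculatorTest {
    @Test
    void addsTwoNumbers() {
        assertEquals(4, 2 + 2);
    }

    @RepeatedTest(3)  // runs three times
    void repeatedAddition() {
        assertEquals(4, 2 + 2);
    }
}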

Lifecycle Annotations

Each test gets @Test

@BeforeEach, @AfterEach

@BeforeAll, @AfterAll
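Putting the lifecycle annotations together (note that @BeforeAll and @AfterAll methods must be static under the default per-method lifecycle):

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class LifecycleTest {
    @BeforeAll
    static void initAll() { /* runs once before all tests */ }

    @BeforeEach
    void init() { /* runs before each test */ }

    @Test
    void someTest() { }

    @AfterEach
    void tearDown() { /* runs after each test */ }

    @AfterAll
    static void tearDownAll() { /* runs once after all tests */ }
}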

Disabled tests

@Disabled -> skip a particular test or tests

Method level or class level

Optional parameter to give a reason

Replaces @Ignore in JUnit 4
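For example (the reason string is made up):

import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

class DisabledDemoTest {
    @Test
    @Disabled("Disabled until the flakiness is understood")  // reason is optional
    void notRunForNow() { }
}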

Test names

Use @DisplayName on class or methods

Supports Unicode and even emojis
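For example:

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

@DisplayName("A special test case")
class DisplayNameDemoTest {
    @Test
    @DisplayName("Custom test name containing spaces ✔")
    void testWithDisplayName() { }
}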

Assertions

New methods in JUnit 5

  • assertAll
  • assertThrows, assertDoesNotThrow
  • assertTimeout
  • assertTimeoutPreemptively
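A quick sketch of two of them:

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class AssertionsDemoTest {
    @Test
    void groupedAssertions() {
        // assertAll reports every failed assertion, not just the first
        assertAll(
            () -> assertEquals(2, 1 + 1),
            () -> assertEquals(4, 2 * 2)
        );
    }

    @Test
    void expectedException() {
        assertThrows(ArithmeticException.class, () -> {
            int unused = 1 / 0;
        });
    }
}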

Assumptions

Let you test pre-conditions

Static methods in org.junit.jupiter.api.Assumptions
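For example, aborting (rather than failing) a test when a precondition doesn’t hold (the ENV variable check is illustrative):

import static org.junit.jupiter.api.Assumptions.assumeTrue;

import org.junit.jupiter.api.Test;

class AssumptionsDemoTest {
    @Test
    void onlyOnCiServer() {
        // If the assumption fails, the test is reported as aborted, not failed
        assumeTrue("CI".equals(System.getenv("ENV")));
        // ... assertions that only make sense on the CI server
    }
}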

Conditional Execution

Can make tests or test classes conditional, based on:

  • Operating system
  • Java version
  • Some boolean condition
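For example (a sketch using the built-in conditions in org.junit.jupiter.api.condition):

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledIfEnvironmentVariable;
import org.junit.jupiter.api.condition.EnabledOnJre;
import org.junit.jupiter.api.condition.EnabledOnOs;
import org.junit.jupiter.api.condition.JRE;
import org.junit.jupiter.api.condition.OS;

class ConditionalDemoTest {
    @Test
    @EnabledOnOs(OS.LINUX)
    void onlyOnLinux() { }

    @Test
    @EnabledOnJre(JRE.JAVA_11)
    void onlyOnJava11() { }

    @Test
    @EnabledIfEnvironmentVariable(named = "ENV", matches = "staging")
    void onlyOnStaging() { }
}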

Tagging and Filtering

Tag tests or test classes with @Tag, then include or exclude them by tag when launching the platform.

Test Execution Order

By default, test methods will be ordered using an algorithm that is deterministic but intentionally nonobvious. This ensures that subsequent runs of a test suite execute test methods in the same order, thereby allowing for repeatable builds.

To control the order in which test methods are executed, annotate your test class or test interface with @TestMethodOrder and specify the desired MethodOrderer implementation. You can implement your own custom MethodOrderer or use one of the built-in implementations.
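For example, using the built-in OrderAnnotation orderer:

import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class OrderedDemoTest {
    @Test
    @Order(1)
    void runsFirst() { }

    @Test
    @Order(2)
    void runsSecond() { }
}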

Test Instance Lifecycle

By default, JUnit creates a new instance of the test class before executing each test method. To share a single instance across all methods of a class, annotate it with @TestInstance(TestInstance.Lifecycle.PER_CLASS).

Nested Test Classes

Use @Nested on non-static inner classes

Nesting can be as deep as you want
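A sketch (the class names are illustrative):

import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;

class StackTest {
    @Test
    void isInstantiatedWithNew() { }

    @Nested
    class WhenEmpty {
        @Test
        void throwsOnPop() { }

        @Nested
        class AfterPushing {
            @Test
            void isNoLongerEmpty() { }
        }
    }
}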

Constructor and Method Parameters

In JUnit 4, no parameters in constructors or test methods

Now, parameters can be injected automatically

  • TestInfo
  • TestReporter
  • RepetitionInfo
  • Other custom ParameterResolvers supplied by extensions
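For example, injecting TestInfo and TestReporter:

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInfo;
import org.junit.jupiter.api.TestReporter;

class InjectionDemoTest {
    @Test
    void reportsItsOwnName(TestInfo testInfo, TestReporter reporter) {
        // TestInfo describes the running test; TestReporter publishes extra output
        reporter.publishEntry("display name", testInfo.getDisplayName());
    }
}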

Parameterized Tests

Arguably more useful than plain repeated tests

Run a test multiple times with different arguments

@ParameterizedTest

Need at least one source of parameters

NOTE: Stable as of 5.7 (finally!)
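A classic sketch using @ValueSource (note that parameterized tests live in the separate junit-jupiter-params artifact):

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class PalindromeTest {
    @ParameterizedTest
    @ValueSource(strings = {"racecar", "radar", "level"})
    void isPalindrome(String candidate) {
        String reversed = new StringBuilder(candidate).reverse().toString();
        assertTrue(reversed.equals(candidate));
    }
}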

Dynamic Tests

Generated at runtime by a factory method

Annotated with @TestFactory
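A sketch of a factory that generates one test per input at runtime:

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.DynamicTest.dynamicTest;

import java.util.stream.IntStream;
import java.util.stream.Stream;

import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;

class DynamicDemoTest {
    @TestFactory
    Stream<DynamicTest> squaresAreNonNegative() {
        // One dynamic test per input value
        return IntStream.rangeClosed(-2, 2)
            .mapToObj(n -> dynamicTest(n + " squared is non-negative",
                () -> assertTrue(n * n >= 0)));
    }
}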

Python Abstract Base Classes in Action


Abstract base classes

Python has this idea of abstract base classes (ABCs), which define a set of methods and properties that a class must implement in order to be considered a duck-type instance of that class. The class can extend the abstract base class itself in order to be used as an instance of that class, but it must supply all the appropriate methods.

Case study

Recently, I worked on an internal tool to parse source files containing test cases written for three different testing frameworks, i.e. pytest, Robot Framework, and Cucumber. Since the tool needs to parse source files with three different parsers sharing the same set of APIs, it is advisable to create an abstract base class to document what APIs the parsers should provide (documentation is one of the stronger use cases for ABCs). The abc module provides the tools I need to do this, as demonstrated in the following block of code:

from abc import ABC, abstractmethod
from pathlib import Path


class Parser(ABC):
    @property
    def name(self) -> str:
        return self._name

    @property
    def test_case_indicator(self) -> str:
        return self._test_case_indicator

    @abstractmethod
    def filter_test_case_from_source_file(self, path: Path) -> str:
        # Concrete parsers must override this method
        pass


class PytestParser(Parser):
    def __init__(self, source_file: Path) -> None:
        self._name = 'pytest'
        self._file_ext = 'py'
        self._test_case_indicator = 'def '

    def filter_test_case_from_source_file(self, path: Path) -> str:
        pass


class RobotParser(Parser):
    def __init__(self, source_file: Path) -> None:
        self._name = 'robot'
        self._file_ext = 'robot'
        self._test_case_indicator = '*** Test Cases ***'

    def filter_test_case_from_source_file(self, path: Path) -> str:
        pass


class GherkinParser(Parser):
    def __init__(self, source_file: Path) -> None:
        self._name = 'gherkin'
        self._file_ext = 'feature'
        self._test_case_indicator = 'Scenario: '

    def filter_test_case_from_source_file(self, path: Path) -> str:
        pass

Though I omitted the implementation of the filter_test_case_from_source_file() method for brevity, the idea is that any subclass of the Parser class must implement this method, as it is decorated with @abstractmethod.
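A quick sketch of the effect (test_login.py is a hypothetical source file):

from pathlib import Path

parser = PytestParser(Path('test_login.py'))
print(parser.name)                 # -> pytest
print(parser.test_case_indicator)  # -> 'def '

try:
    Parser()  # abstract classes cannot be instantiated directly
except TypeError as error:
    print(error)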

More common object-oriented languages (like Java, Kotlin) have a clear separation between the interface and the implementation of a class. For example, some languages provide an explicit interface keyword that allows us to define the methods that a class must have without any implementation. In such an environment, an abstract class is one that provides both an interface and a concrete implementation of some, but not all, methods. Any class can explicitly state that it implements a given interface.
Python’s ABCs help to supply the functionality of interfaces without compromising on the benefits of duck typing.

References

PEP 3119 – Introducing Abstract Base Classes

Executing System Command in Cypress Tests


Recently, I wrote a Cypress test in which I had to copy a configuration file from one container to another. As you may know, the easiest way to do that is the “docker cp” command. This post is a step-by-step walkthrough of how I achieved this.

Installing Docker

The “tester” container is based on the official cypress/included:6.3.0 Docker image, which in turn is based on the official node:12.18.3-buster Docker image. So as the first step, I had to figure out how to install Docker on Debian 10 in order to be able to run “docker cp” from within the container:

FROM cypress/included:6.3.0

RUN apt-get update && apt-get install -y \
  apt-transport-https \
  gnupg2 \
  software-properties-common

RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian buster stable"

RUN apt-get update && apt-get install -y \
  docker-ce

Creating a Volume for /var/run/docker.sock

In order to talk to the Docker daemon running outside of the “tester” container, I had to add a volume mounting the famous /var/run/docker.sock in the Docker Compose file:

volumes:
  - /var/run/docker.sock:/var/run/docker.sock
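In context, the relevant service definition might look like this sketch (the image and service names are illustrative; only the volume line comes from my actual setup):

services:
  tester:
    image: tester:latest  # built from the Dockerfile above
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock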

Executing “docker cp” in Cypress Test

Finally, I was able to execute “docker cp” to copy the configuration file from the “tester” container to the “web_app” container using the Cypress exec command:

const configYamlPath = 'cypress/fixtures/config.yaml';

cy.exec(`docker cp ${configYamlPath} web_app:/opt/web_app`)
  .then(() => cy.reload());

Want to buy me a coffee? Do it here: https://www.buymeacoffee.com/j3rrywan9

JavaScript Object Literal Shorthand with ECMAScript 2015


Last Friday, I hit the following ESLint error:

335:13  error  Expected property shorthand        object-shorthand

Per the “object-shorthand” rule’s documentation, ECMAScript 2015 (ES6) provides syntactic sugar for defining object literal methods and properties.

Quite often, when declaring an object literal, property values are stored in variables whose names are equal to the property names. For example:

const timeout = 30000;

const options = { timeout: timeout };

There is a shorthand for this situation:

const timeout = 30000;

const options = { timeout };
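The same rule covers methods: ES6 lets you drop the function keyword inside object literals (a contrived example):

// Longhand
const logger = {
  log: function (message) {
    console.log(message);
  },
};

// Shorthand
const shortLogger = {
  log(message) {
    console.log(message);
  },
};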


Installing Ruby from Source on Debian 8 Using SaltStack


The default Ruby shipped with Debian 8 is version 2.1.5, which is quite old. You can use the following SaltStack states to install Ruby 2.5.1 from source:

bison:
  pkg.installed

libgdbm-dev:
  pkg.installed

libreadline-dev:
  pkg.installed

libssl-dev:
  pkg.installed

openssl:
  pkg.installed

zlib1g-dev:
  pkg.installed

download_ruby_2.5.1_source:
  cmd.run:
    - name: curl -s -S --retry 5 https://cache.ruby-lang.org/pub/ruby/2.5/ruby-2.5.1.tar.gz | tar xz
    - runas: jenkins
    - cwd: /var/lib/jenkins
    - unless: command -v ruby && test '2.5.1p57' = $(ruby -v|awk '{print $2}')

install_ruby_2.5.1_from_source:
  cmd.run:
    - name: cd /var/lib/jenkins/ruby-2.5.1 && ./configure && make && make install
    - onchanges:
      - download_ruby_2.5.1_source

remove_ruby_2.5.1_source:
  file.absent:
    - name: /var/lib/jenkins/ruby-2.5.1
    - onchanges:
      - download_ruby_2.5.1_source
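Assuming you save the states above as, say, ruby.sls in your state tree, you can apply them locally on the minion with:

sudo salt-call state.apply ruby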


Adding Multiple Lines to a File using Ansible


The Ansible module lineinfile searches a file for a line and ensures that it is present or absent. It is useful when you want to change only a single line in a file. But how do you add multiple lines to a file? You can combine lineinfile with a loop, like the following:

- name: ASE Deps | Configure sudoers
  lineinfile:
    dest: /etc/sudoers
    line: "{{ item }}"
  with_items:
    - "Defaults:sybase !requiretty"
    - "sybase ALL=(ALL) NOPASSWD: /bin/mount, /bin/umount, /bin/mkdir, /bin/rmdir, /bin/ps"

Want to buy me a coffee? Do it here: https://www.buymeacoffee.com/j3rrywan9

Configuring DNS when DHCP is Used on Ubuntu


When eth0 is configured to use DHCP on Ubuntu (14.04 LTS), the contents of /etc/resolv.conf are overwritten by resolvconf (man 8 resolvconf), which in turn is called by dhclient. So you can set “dns-nameservers” and “dns-search” neither in /etc/resolv.conf nor in /etc/network/interfaces.d/eth0.cfg.

The solution is to supersede the “domain-name-servers” and “domain-search” values in /etc/dhcp/dhclient.conf (man 5 dhclient.conf):

supersede domain-name-servers 172.16.101.11;
supersede domain-search "example.com";

And you may need to renew the DHCP lease to make the above change effective:

sudo dhclient -r eth0
sudo dhclient eth0
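Afterwards, you can verify that the superseded values landed in the resolver configuration:

cat /etc/resolv.conf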