Iteration 3: Microservice Tests. NoSQL

Learning objective

Using TestContainers to make integration/consumer-driven tests. Deploying NoSQL database instance(s) and using them as the storage tier for SkyCave.

Deadline

October 12th at 23.59

Note: The Redis exercises are new, so please report any inconsistencies, errors, etc. Also, dependencies on specific versions are almost always out of date; so if things do not work, try to update e.g. TestContainers to the latest version. (Keeping code snippets and zipped build.gradle files up-to-date is a pretty enormous task, sorry...)

Exercise 'consumer-driven-test-hello-spark'

This is a warm-up exercise - most/all of the code is provided below.

Create a Consumer-Driven Test/Contract Test using TestContainers to validate your 'hello-spark' image from the previous 'docker-hello-spark' exercise. (Or, if you did not finish it, you may pull 'henrikbaerbak/hellospark'.)

To help you out, here is a working 'build.gradle', that retrieves the libraries for TestContainers and the Unirest client HTTP library (you may of course substitute any other HTTP library that you may prefer):

plugins {
  id 'java'
}

repositories {
    mavenCentral()
}

dependencies {

  testImplementation group: 'com.konghq', name: 'unirest-java',
    version: '3.3.00'

  testImplementation 'junit:junit:4.13.2'
  testImplementation group: 'org.hamcrest', name: 'hamcrest', version: '2.2'
  testImplementation "org.testcontainers:testcontainers:1.15.5"
}         
        

And an almost-complete template for the JUnit code in 'src/test/java/example' is

package example;

import kong.unirest.HttpResponse;
import kong.unirest.Unirest;
import kong.unirest.UnirestException;
import org.testcontainers.containers.GenericContainer;

import org.junit.*;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.CoreMatchers.*;

public class TestHelloSpark {

  public static final int SERVER_PORT = 4567;
  @ClassRule
  public static GenericContainer helloSpark =
          new GenericContainer("(your image here)")
                  .withExposedPorts(SERVER_PORT);
  private String serverRootUrl;

  @Before
  public void setup()
  {
    String address = helloSpark.getContainerIpAddress();
    Integer port = helloSpark.getMappedPort(SERVER_PORT);
    serverRootUrl = "http://" + address + ":" + port + "/hello/";
  }

  @Test
  public void shouldGETonPathHello() throws UnirestException {
    HttpResponse<String> reply =
            Unirest.get(serverRootUrl + "Henrik").asString();
    System.out.println("** ROOT: " + reply.getBody().toString());
    assertThat(reply.getStatus(), is(200));
  }
}
        

Or download this zip: consumer-driven-test-hello-spark.zip, which probably needs the dependencies in the build.gradle file to be updated.

Exercise 'cdt-quote-service' [M 40]

The official quote service stems from the docker hub image

 
henrikbaerbak/quote:msdo_1_0_1
        

In this exercise you should make Consumer Driven Tests (CDT)/Contract tests for the quote service - based upon the REST API described earlier in exercise 'quote-service'.

Requirements:

Hand-in:

Evaluation:

I will review your code on the Crunch machine; and I will run your tests in the 'integration' subproject. I will use the grade sheet to evaluate your submission.

Learning Goal Assessment parameters
Submission Required artifacts are all present.
Test Type The test code is CDT code, not integration test code (exercise the REST API, not the QuoteService implementation).
Test Code The test code is simple (no complex constructs, basically only assignments, simple private method calls, very basic for loop). The test code reflects using the TDD principles (Isolated Test, Evident Test, Evident Data, etc.). Robert Martin 'Clean Code' properties are generally kept.
Test Completeness All central aspects of the API are well covered by tests. Irrelevant test aspects are not present.
Functionality The tests execute correctly, all tests pass.

Exercise 'integration-quote-service' [M 40]

In this exercise, you should make Integration Tests (in the Fowler sense), or Connector tests in the Bærbak sense, of your QuoteService implementation - that is, the connector developed earlier in the 'quote-service' exercise.

Requirements:

Hand-in:

Evaluation:

I will review your code on the Crunch machine; and I will run your tests in the 'integration' subproject. I will use the grade sheet to evaluate your submission.

Learning Goal Assessment parameters
Submission Required artifacts are all present.
Test Type The test code is integration test code (it exercises the 'RealQuoteService' implementation), not CDT test code (it is not exercising raw REST calls) nor service tests (testing indirectly through exercising the PlayerServant code).
Test Code The test code is simple (no complex constructs, basically only assignments, simple private method calls, very basic for loop). The test code reflects using the TDD principles (Isolated Test, Evident Test, Evident Data, etc.). Robert Martin 'Clean Code' properties are generally kept.
Test Completeness All central aspects of the integration are well covered by tests. Irrelevant test aspects are not present.
Functionality The tests execute correctly, all tests pass.

Exercise 'no-cdt-for-redis'

Argue why it does not make sense to create Consumer-Driven Tests for the Redis database.

Exercise 'redis-datatype-model'

In this exercise, the learning focus is on using the Redis shell and exploring the key-value paradigm of a NoSQL database.
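To get a feel for the paradigm, a short redis-cli session could look like the one below. The key names ('room:(0,0,0)', 'wall:(0,0,0)') and the field names are my own invention - pick whatever suits your data model:

```
127.0.0.1:6379> SET room:(0,0,0) "You are standing at the end of a road..."
OK
127.0.0.1:6379> GET room:(0,0,0)
"You are standing at the end of a road..."
127.0.0.1:6379> HSET room:(0,1,0) creatorId "user-001" description "A dark cave."
(integer) 2
127.0.0.1:6379> HGETALL room:(0,1,0)
1) "creatorId"
2) "user-001"
3) "description"
4) "A dark cave."
127.0.0.1:6379> LPUSH wall:(0,0,0) "first posting"
(integer) 1
127.0.0.1:6379> LRANGE wall:(0,0,0) 0 -1
1) "first posting"
```

Note how plain strings, hashes, and lists each suggest themselves for different parts of the domain.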

In the mandatory exercise 'integration-redis-connector', you will develop a Redis 'CaveStorage' connector/driver, and in order to do that, you of course have to reflect upon which Redis datatypes (and potentially 'secondary indices') best suit the domain - just as Entity-Relation models are developed for 'good old SQL'.
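Part of that reflection is settling on a key-naming scheme. The sketch below shows one possible scheme for room and wall keys; the 'room:' and 'wall:' prefixes are my own invention, not anything prescribed by SkyCave - choose names that match your own data model:

```java
// Hypothetical key-naming scheme for a Redis-backed CaveStorage.
// The 'room:' and 'wall:' prefixes are illustrative assumptions.
public class RedisKeySchema {
  /** Key for the record describing the room at (x,y,z). */
  public static String roomKey(int x, int y, int z) {
    return "room:(" + x + "," + y + "," + z + ")";
  }

  /** Key for the list of wall postings in the room at (x,y,z). */
  public static String wallKey(int x, int y, int z) {
    return "wall:(" + x + "," + y + "," + z + ")";
  }

  public static void main(String[] args) {
    System.out.println(roomKey(0, 0, 0)); // prints room:(0,0,0)
    System.out.println(wallKey(1, -2, 3)); // prints wall:(1,-2,3)
  }
}
```

Keeping the scheme in one place like this makes it trivial to change later, and gives you an obvious spot to add 'secondary index' keys if your model needs them.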

Requirements:

Exercise 'architectural-prototyping-redis-connector'

In my 'Software Architecture in Practice' course, I teach about Architectural Prototyping: Small codebases that explore/experiment with an architectural issue or an architectural tradeoff.

Prototyping work is often too cumbersome in the original codebase context, so a minimal codebase is often harvested and used for quick experiments.

This exercise is basically a warm up to the 'integration-redis-connector' exercise later; and shows some of the setup you need.

The code base uses 'Jedis' as the Java driver; see some examples at How to use Redis in Java using Jedis.

Exercise: Partially implement a Redis-backed CaveStorage using TestContainers as the tool for an Integration Test (in the Fowler sense) suite.

You will find the initial steps for a solution in the gradle project: ap-redis-connector.zip

Hint:

  1. JavaTPoint's description of Hashes states that "they are the perfect data type to represent objects." You can see that I instead use Gson to marshall/demarshall to JSON. You are of course free to pick what you find most readable/easy to code.
  2. JUnit tests for the CaveStorage interface already exist, so it makes good sense to reuse them in a TDD and 'small-steps' fashion: "Copy" them one by one to your integration tests, ensuring your implementation makes each pass before proceeding to "copying" the next - essentially growing your RedisCaveStorageConnector implementation. I say "copy" in quotes, as it would be smarter to refactor the existing test cases into something reusable, à la the CommonXXXTests in the code base.
  3. Getting the connection code right is another story. Find inspiration at Jedis tutorial full version.
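To make the marshall/demarshall flow from hint 1 concrete, here is a runnable sketch. A plain Map stands in for the live Jedis connection so the idea can be tried without Docker; in the real connector you would call jedis.set(key, json) / jedis.get(key) and marshall with Gson. 'RoomRecord' and its fields are stand-ins for whatever record type your CaveStorage interface actually uses:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the store/retrieve flow for a Redis-backed CaveStorage.
// A HashMap stands in for the Jedis connection; 'RoomRecord' is a
// hypothetical stand-in for the real record type.
public class RedisCaveStorageSketch {
  static class RoomRecord {
    final String creatorId;
    final String description;
    RoomRecord(String creatorId, String description) {
      this.creatorId = creatorId;
      this.description = description;
    }
  }

  // Stand-in for the live Jedis connection.
  private final Map<String, String> redis = new HashMap<>();

  public void addRoom(String positionKey, RoomRecord room) {
    // Real code would be: jedis.set(positionKey, gson.toJson(room));
    redis.put(positionKey, room.creatorId + "|" + room.description);
  }

  public RoomRecord getRoom(String positionKey) {
    // Real code: gson.fromJson(jedis.get(positionKey), RoomRecord.class);
    String stored = redis.get(positionKey);
    if (stored == null) return null;
    String[] parts = stored.split("\\|", 2);
    return new RoomRecord(parts[0], parts[1]);
  }

  public static void main(String[] args) {
    RedisCaveStorageSketch storage = new RedisCaveStorageSketch();
    storage.addRoom("room:(0,0,0)",
        new RoomRecord("creator-1", "The entry room"));
    System.out.println(storage.getRoom("room:(0,0,0)").description);
  }
}
```

The point is the shape of the code - one marshall on write, one demarshall on read - not the ad-hoc "id|description" encoding, which Gson replaces in the real connector.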

Exercise 'integration-redis-connector' (*) [M 50]

Presently, the CaveStorage is a Fake Object test double, so our SkyCave cannot handle restarts at all. We of course need a real storage tier for production. Therefore, implement the CaveStorage interface so it talks to a running Redis database. That is, make a Redis connector implementation.

In this exercise you should make Integration Tests (in the Fowler sense / Connector Tests in Bærbak sense) of your CaveStorage implementation that connects to a real Redis container, and use these tests to make a (test-driven?) implementation.

I advise you to 'take small steps' by solving the 'architectural-prototyping-redis-connector' exercise first! This exercise then more or less only deals with the integration into SkyCave.

Requirements:

Hand-in:

Evaluation:

I will review your code on the Crunch machine; and I will run your tests in the 'integration' subproject. I will use the grade sheet to evaluate your submission.

Learning Goal Assessment parameters
Submission Required artifacts are all present.
Test Type The test code is integration (connector) test code, not CDT or Service test code.
Test Code The test code is simple (no complex constructs, basically only assignments, simple private method calls, very basic for loop). The test code reflects using the TDD principles (Isolated Test, Evident Test, Evident Data, etc.). Robert Martin 'Clean Code' properties are generally kept.
Test Completeness All central aspects of the integration are well covered by tests. Irrelevant test aspects are not present. Wall message methods are optional to include.
Production Code The CaveStorage implementation that connects Redis is 'Clean Code' and handles connections correctly. You may ignore handling JedisExceptions for now.
Functionality The tests execute correctly, all tests pass.

Hints and issues:

Exercise 'redis-storage-journey' (*) [A 80]

Crunch will execute a couple of test journeys on your SkyCave connected to a Redis storage, validating that it works. Wall behaviour will not be tested. Crunch will test against a Redis v 6.2.5.

Requirements:

Hand-in By Crunch.

Evaluation: Crunch will bind your daemon to its own, empty, Redis database by overriding the SKYCAVE_CAVESTORAGE_SERVER_ADDRESS value in the CPF. It will start your daemon, go to room (0,0,0), and validate that the cave has its initial configuration. Then Crunch will dig some random rooms, restart the daemon, log in a random new user, and make certain these rooms are present. It will also try to update a room's description, both as creator and non-creator.

Hints and issues:

Exercise 'redis-storage-journey-wall' [A 60]

Crunch will execute wall behaviour test journeys.

Requirements:

Hand-in By Crunch. Crunch will start your daemon using 'redis-storage-journey.cpf'.

Evaluation: Crunch will create 11 postings in one room and 2 in another, and test the pagination and ordering of wall postings. It will update a message, both as author and as non-author, and validate that the semantics are as expected.
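If your connector stores wall postings in a Redis list, pagination boils down to computing the inclusive (start, stop) index pair that LRANGE expects. The sketch below assumes a page size of 10 - check what your SkyCave wall specification actually prescribes:

```java
// Computes inclusive LRANGE (start, stop) indices for a wall page.
// The page size of 10 is an assumption, not a SkyCave requirement.
public class WallPagination {
  public static final int PAGE_SIZE = 10;

  /** Inclusive start index of pageNumber (0-based). */
  public static int startIndex(int pageNumber) {
    return pageNumber * PAGE_SIZE;
  }

  /** Inclusive stop index of pageNumber, as LRANGE expects. */
  public static int stopIndex(int pageNumber) {
    return startIndex(pageNumber) + PAGE_SIZE - 1;
  }

  public static void main(String[] args) {
    // With 11 postings, page 0 covers indices 0..9 and page 1 asks
    // for 10..19; LRANGE simply returns the single element at 10.
    System.out.println(startIndex(1) + ".." + stopIndex(1)); // prints 10..19
  }
}
```

Remember that LRANGE's stop index is inclusive (unlike Java's usual half-open ranges), and that LPUSH naturally keeps the newest posting at index 0 - convenient if your wall must list newest postings first.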

Exercise 'operations-redis' [M 30]

Production Checkpoint! Update your production environment so your production server in the cloud uses Redis as persistent storage.

Requirements:

Hand-in: Submit screenshot(s) with shells on your production server that shows

  1. The IP address of your server ('ifconfig eth0' or 'ifconfig ens32' or ...).
  2. Output of 'docker ps' showing the daemon and the database containers.
  3. Output from 'docker exec -ti (your-db-container) redis-cli' shell where you find a user created room in your collection of rooms. Both the 'docker exec' and the room data contents must be visible in that shell.
  4. Evidence that Redis stores data on the production/host machine (volume mounts, directory contents, ...)
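For item 4, one way (an assumption - your setup, container names, and host paths will differ) is to bind-mount a host directory onto the container's /data directory and enable append-only persistence; the image tag matches the Redis version Crunch tests against:

```shell
# Host path and container name are hypothetical examples.
docker run -d --name cave-db \
  -v /home/ubuntu/redis-data:/data \
  redis:6.2.5 redis-server --appendonly yes

# Inspect the stored data from the Redis shell:
docker exec -ti cave-db redis-cli
```

Listing the contents of the host directory (here /home/ubuntu/redis-data) after digging a room is then one piece of the evidence asked for.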

Evaluation: I will review your output. I may log into your production server using your Cmd, DIG a room, and ask you to find that room in the database at any time during the course.