Mandatory 3: Microservice Tests. NoSQL

Learning objective

Using TestContainers to make integration/consumer-driven tests. Deploying NoSQL database (Redis) instance(s) and using them as the storage tier for SkyCave.

Deadline

December 4th at 23.59

Exercise 'consumer-driven-test-hello-spark'

This is a warm-up exercise - most/all of the code is provided below, and it is a good foundation for the later CDT exercises!

Create a Consumer-Driven Test/Contract Test using TestContainers to validate your 'hello-spark' image from the previous 'docker-hello-spark' exercise. (Or, if you did not finish it, you may pull 'henrikbaerbak/hellospark'.)

To help you out, here is a working 'build.gradle' that retrieves the libraries for TestContainers and the Unirest client HTTP library (you may of course substitute any other HTTP library that you prefer):

plugins {
    id 'java'
}
repositories {
    mavenCentral()
}

dependencies {
    testImplementation group: 'com.konghq', name: 'unirest-java',
      version: '3.14.5'

    // Need JUnit
    testImplementation 'org.junit.jupiter:junit-jupiter:5.8.2'
    testImplementation group: 'org.hamcrest',
            name: 'hamcrest', version: '2.2'

    // Need TestContainers
    testImplementation "org.testcontainers:junit-jupiter:1.19.1"
    testImplementation 'org.testcontainers:testcontainers:1.19.1'
}

tasks.named('test') {
    // Use JUnit Platform for unit tests.
    useJUnitPlatform()
}
        

An almost-complete template for the JUnit code in 'src/test/java/example' is

package example;

import kong.unirest.HttpResponse;
import kong.unirest.Unirest;
import org.junit.jupiter.api.*;

import static org.hamcrest.CoreMatchers.*;
import static org.hamcrest.MatcherAssert.assertThat;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
public class TestHelloSpark {

  public static final int SERVER_PORT = 4567;
  @Container
  public GenericContainer<?> helloSpark =
          new GenericContainer<>(DockerImageName.parse("(your image here)"))
                  .withExposedPorts(SERVER_PORT);
  private String serverRootUrl;
        
  @BeforeEach
  public void setup()
  {
    String address = helloSpark.getHost();
    Integer port = helloSpark.getMappedPort(SERVER_PORT);
    serverRootUrl = "http://" + address + ":" + port + "/hello/";
  }
        
  @Test
  public void shouldGETonPathHello() {
    HttpResponse<String> reply =
            Unirest.get(serverRootUrl + "Henrik").asString();
    System.out.println("** ROOT: " + reply.getBody());
    assertThat(reply.getStatus(), is(200));

    assertThat(reply.getBody(),
            containsString("Hello to you Henrik"));
  }
}

        

Or download this zip: consumer-driven-test-hello-spark.zip.

Exercise 'cdt-quote-service' [M 40]

The official quote service stems from the Docker Hub image

 
henrikbaerbak/quote:msdo_2_3
        

In this exercise you should make Consumer Driven Tests (CDT)/Contract tests for the quote service - based upon the REST API described earlier in exercise 'quote-service'.

Requirements:

Update 21/11: Ensure that the failing test case in 'TestCaveStorage.java' is silenced (or fixed). I will assess your hand-in by A) pulling your skycave image and extracting the code, B) running your integration tests and reviewing them. All tests must pass.

Hand-in:

Evaluation:

I will review your code from your image; and I will run your tests in the 'integration' subproject. I will use the grade sheet to evaluate your submission.

Learning Goal Assessment parameters
Submission Required artifacts are all present.
Test Type The test code is CDT code, not integration test code (exercise the REST API, not the QuoteService implementation).
Test Code The test code is simple (no complex constructs, basically only assignments, simple private method calls, very basic for loop). The test code reflects using the TDD principles (Isolated Test, Evident Test, Evident Data, etc.). Robert Martin 'Clean Code' properties are generally kept.
Test Completeness All central aspects of the API are well covered by tests. Irrelevant test aspects are not present.
Functionality The tests execute correctly, all tests pass.

Exercise 'integration-quote-service' [M 40]

In this exercise, you should make Integration Tests (in the Fowler sense) (or Connector Tests in the Bærbak sense) of your QuoteService implementation, that is, the connector developed earlier in the 'quote-service' exercise.
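To make the distinction from the CDT exercise concrete, here is a minimal sketch of what 'integration level' means. All names are hypothetical, not the actual SkyCave types; a stub stands in for the real connector so the sketch is self-contained:

```java
// All names here are hypothetical sketches, NOT the actual SkyCave types.
// The point is the test LEVEL: the test drives the QuoteService Java
// interface rather than issuing raw REST calls.
interface QuoteService {
  String getQuote(int quoteId); // hypothetical signature
}

// In the real exercise this is your RealQuoteService talking to the quote
// container; a stub stands in here so the sketch is runnable as-is.
class StubQuoteService implements QuoteService {
  public String getQuote(int quoteId) {
    return "Quote #" + quoteId;
  }
}

public class IntegrationLevelSketch {
  public static void main(String[] args) {
    QuoteService service = new StubQuoteService();
    // Integration-test style: call the connector's Java API...
    String quote = service.getQuote(7);
    // ...and assert on the returned domain value, not on HTTP details.
    System.out.println(quote);
  }
}
```

A CDT of the same service would instead use GenericContainer plus an HTTP client, as in the warm-up exercise above; here that machinery belongs inside the implementation under test.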

Requirements:

Hand-in:

Evaluation:

I will review your code on the Crunch machine; and I will run your tests in the 'integration' subproject. I will use the grade sheet to evaluate your submission.

Learning Goal Assessment parameters
Submission Required artifacts are all present.
Test Type The test code is integration test code (it exercises the 'RealQuoteService' implementation), not CDT test code (it is not exercising raw REST calls) nor service tests (testing indirectly through exercising the PlayerServant code).
Test Code The test code is simple (no complex constructs, basically only assignments, simple private method calls, very basic for loop). The test code reflects using the TDD principles (Isolated Test, Evident Test, Evident Data, etc.). Robert Martin 'Clean Code' properties are generally kept.
Test Completeness All central aspects of the integration are well covered by tests. Irrelevant test aspects are not present.
Functionality The tests execute correctly, all tests pass.

Exercise 'no-cdt-for-redis'

Argue why it does not make sense to create Consumer-Driven Tests for the Redis database.

Exercise 'redis-datatype-model'

In this exercise, the learning focus is on using the Redis shell and exploring the key-value paradigm of a NoSQL database.

In the mandatory exercise 'integration-redis-connector', you will develop a Redis 'CaveStorage' connector/driver, and in order to do that, you of course have to reflect upon which Redis datatypes (and potentially 'secondary indices') suit the domain - just as Entity-Relation models are developed for 'good old SQL'.
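As a concrete starting point for that reflection, here is a minimal sketch of one possible key design (the key layout and all names are my own assumptions, not a prescribed solution): rooms keyed by their (x,y,z) position, which you can then experiment with directly in the Redis shell.

```java
// Sketch of ONE possible Redis key design for rooms (my own assumption,
// not a prescribed solution): key each room by its (x,y,z) position.
public class RoomKeyDesign {
  // e.g. position (0,0,0) -> key "room:0:0:0"
  public static String keyFor(int x, int y, int z) {
    return "room:" + x + ":" + y + ":" + z;
  }

  public static void main(String[] args) {
    String key = keyFor(0, 0, 0);
    // In redis-cli you could then experiment with, for example:
    //   HSET room:0:0:0 description "You are standing..." creator "mark"
    //   HGETALL room:0:0:0
    System.out.println(key);
  }
}
```

Whether the value is a Redis hash (as above) or a single JSON string is exactly the kind of tradeoff this exercise asks you to explore in the shell.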

Requirements:

Exercise 'architectural-prototyping-redis-connector'

In my 'Software Architecture in Practice' course, I teach about Architectural Prototyping: Small codebases that explore/experiment with an architectural issue or an architectural tradeoff.

Prototyping work is often too cumbersome in the original codebase context; therefore a minimal codebase is often harvested and used for quick experiments.

This exercise is basically a warm-up to the 'integration-redis-connector' exercise later, and shows some of the setup you need.

The code base uses 'Jedis' as the Java driver; see some examples at How to use Redis in Java using Jedis. Beware, though, that the Jedis version in that tutorial is pretty old (2.4.2 versus currently 5.0).

Exercise: Implement (in part) a Redis-backed CaveStorage implementation, using TestContainers as the tool for an Integration Test (in the Fowler sense) suite.

You will find the initial steps for a solution in the gradle project: ap-redis-connector.zip

Hint:

  1. JavaTPoint's description of Hashes states that "they are the perfect data type to represent objects." You can see that I instead use Gson to marshall/demarshall to JSON. You are of course free to pick what you find most readable/easy to code.
  2. JUnit tests for the FakeCaveStorage interface already exist ('server' project, package: cloud.cave.service.TestCaveStorage), so it makes good sense to reuse them in a TDD and 'small-steps' fashion: copy them one by one to your integration tests, ensuring your implementation makes each pass before proceeding to "copying" the next - essentially growing your RedisCaveStorageConnector implementation.
  3. Getting the connection code right is another story. Find inspiration at Jedis tutorial full version.

Exercise 'integration-redis-connector' (*) [M 50]

Presently, the CaveStorage is a Fake Object test double, and our SkyCave cannot handle restarts at all. We need a real storage tier for production, of course. Therefore, implement the CaveStorage interface so it interfaces with a running Redis database. That is, make a Redis connector implementation.

In this exercise you should make Integration Tests (in the Fowler sense / Connector Tests in the Bærbak sense) of your CaveStorage implementation that connects to a real Redis container, and use these tests to make a (test-driven?) implementation.

I advise you to 'take small steps' by solving the 'architectural-prototyping-redis-connector' exercise first! This exercise then more-or-less only deals with the integration-into-SkyCave aspect.

Requirements:

Hand-in:

Evaluation:

I will review your code on the Crunch machine; and I will run your tests in the 'integration' subproject. I will use the grade sheet to evaluate your submission.

Learning Goal Assessment parameters
Submission Required artifacts are all present.
Test Type The test code is integration (connector) test code, not CDT or Service test code.
Test Code The test code is simple (no complex constructs, basically only assignments, simple private method calls, very basic for loop). The test code reflects using the TDD principles (Isolated Test, Evident Test, Evident Data, etc.). Robert Martin 'Clean Code' properties are generally kept.
Test Completeness All central aspects of the integration are well covered by tests. Irrelevant test aspects are not present. Wall message methods are optional to include.
Production Code The CaveStorage implementation that connects to Redis is 'Clean Code' and handles connections correctly. You may ignore handling JedisExceptions for now.
Functionality The tests execute correctly, all tests pass.

Hints and issues:

Exercise 'redis-storage-journey' (*) [A 80]

Crunch will execute a couple of test journeys on your SkyCave connected to a Redis storage, validating that it works. Wall behaviour will not be tested. Crunch will test against a Redis v 7.2.1.

Requirements:

Hand-in By Crunch.

Evaluation: Crunch will bind your daemon to its own, empty, Redis database by overriding the SKYCAVE_CAVESTORAGE_SERVER_ADDRESS value in the CPF. It will start your daemon, go to room (0,0,0) and validate that the cave has its initial configuration. Then Crunch will dig some random rooms, restart the daemon, log in a random new user, and make certain these rooms are present. It will also try to update a room's description, both as creator and non-creator.
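For reference, the override corresponds to a CPF fragment along these lines (the key name is from the exercise text; the host:port value is a placeholder, and the exact syntax follows whatever form your existing CPF files use):

```
SKYCAVE_CAVESTORAGE_SERVER_ADDRESS = (redis-host):6379
```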

Hints and issues:

Exercise 'redis-storage-journey-wall' [A 60]

Crunch will execute wall behaviour test journeys.

Requirements:

Hand-in By Crunch. Crunch will start your daemon using 'redis-storage-journey.cpf'.

Evaluation: Crunch will create 11 postings in one room and 2 in another, and test the pagination and ordering of wall postings. It will update a message, both as author and as non-author, and validate that the semantics are as expected.
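The pagination being tested boils down to simple sublist arithmetic; here is a minimal sketch (the page size and newest-first ordering are assumptions for illustration - use whatever your wall contract actually specifies):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal pagination sketch. Page size and newest-first ordering are
// assumptions for illustration, not the official wall contract.
public class WallPaginationSketch {
  static final int PAGE_SIZE = 10; // assumed page size

  // Return the given page (0-based) of a newest-first posting list.
  public static List<String> page(List<String> newestFirst, int pageNo) {
    int from = pageNo * PAGE_SIZE;
    if (from >= newestFirst.size()) return List.of();
    int to = Math.min(from + PAGE_SIZE, newestFirst.size());
    return newestFirst.subList(from, to);
  }

  public static void main(String[] args) {
    List<String> wall = new ArrayList<>();
    for (int i = 11; i >= 1; i--) wall.add("msg " + i); // msg 11 is newest
    // 11 postings with page size 10 -> page 0 has 10 entries, page 1 has 1
    System.out.println(page(wall, 0).size() + " / " + page(wall, 1).size());
  }
}
```

With these assumptions, 11 postings span two pages (10 + 1), which is presumably why Crunch creates exactly 11 postings in one room.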

Exercise 'operations-redis' [M 30]

Production Checkpoint! Update your production environment, so your production server in the cloud uses a Redis as persistent storage.

Requirements:

Hand-in: Submit screenshot(s) with shells on your production server that shows

  1. The IP address of your server ('ifconfig eth0' or 'ifconfig ens32' or ...) so I can unambiguously determine that all screenshots are actually from the production machine (match the DNS/IP in your 'operations.cpf').
  2. Output of 'docker ps' showing the daemon and the database containers.
  3. Output from 'docker exec -ti (your-db-container) redis-cli' shell where you find a user created room in your collection of rooms. Both the 'docker exec' and the room data contents must be visible in that shell.
  4. Evidence that Redis stores data on the production/host machine (volume mounts, directory contents, ...)
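One way to produce the evidence in item 4 is to run Redis with a host volume mount; a sketch in docker-compose form (the service name and host path are placeholders - adapt them to your setup):

```yaml
services:
  cavestorage:
    image: redis:7.2.1
    volumes:
      # The official redis image writes its dump/AOF files to /data, so
      # listing ./redis-data on the host is your persistence evidence.
      - ./redis-data:/data
```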

Evaluation: I will review your output. I will log into your production server using your Cmd and verify that the room you have created exists. I may DIG a room, and ask you to find that room in the database at any time during the course.