Using TestContainers to make integration/consumer-driven tests. Deploying NoSQL database (Redis) instance(s) and using it as storage tier for SkyCave.
December 4th at 23.59
This is a warm-up exercise - most/all of the code is provided below, and it is a good foundation for the later CDT exercises!
Create a Consumer-Driven Test/Contract Test using TestContainers to validate your 'hello-spark' image from the previous 'docker-hello-spark' exercise. (Or, if you did not finish it, you may pull 'henrikbaerbak/hellospark'.)
To help you out, here is a working 'build.gradle' that retrieves the libraries for TestContainers and the Unirest client HTTP library (you may of course substitute any other HTTP library that you prefer):
```groovy
plugins {
    id 'java'
}

repositories {
    mavenCentral()
}

dependencies {
    testImplementation group: 'com.konghq', name: 'unirest-java', version: '3.14.5'
    // Need JUnit
    testImplementation 'org.junit.jupiter:junit-jupiter:5.8.2'
    testImplementation group: 'org.hamcrest', name: 'hamcrest', version: '2.2'
    // Need TestContainers
    testImplementation "org.testcontainers:junit-jupiter:1.19.1"
    testImplementation 'org.testcontainers:testcontainers:1.19.1'
}

tasks.named('test') {
    // Use JUnit Platform for unit tests.
    useJUnitPlatform()
}
```
An almost-complete template for the JUnit code in 'src/test/java/example' is:
```java
package example;

import kong.unirest.HttpResponse;
import kong.unirest.Unirest;
import org.junit.jupiter.api.*;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.hamcrest.CoreMatchers.*;
import static org.hamcrest.MatcherAssert.assertThat;

@Testcontainers
public class TestHelloSpark {
  public static final int SERVER_PORT = 4567;

  @Container
  public GenericContainer<?> helloSpark =
      new GenericContainer<>(DockerImageName.parse("(your image here)"))
          .withExposedPorts(SERVER_PORT);

  private String serverRootUrl;

  @BeforeEach
  public void setup() {
    String address = helloSpark.getHost();
    Integer port = helloSpark.getMappedPort(SERVER_PORT);
    serverRootUrl = "http://" + address + ":" + port + "/hello/";
  }

  @Test
  public void shouldGETonPathHello() {
    HttpResponse<String> reply = Unirest.get(serverRootUrl + "Henrik").asString();
    System.out.println("** ROOT: " + reply.getBody());
    assertThat(reply.getStatus(), is(200));
    assertThat(reply.getBody(), containsString("Hello to you Henrik"));
  }
}
```
Or download this zip: consumer-driven-test-hello-spark.zip.
The official quote service is available as the Docker Hub image
henrikbaerbak/quote:msdo_2_3
In this exercise you should make Consumer Driven Tests (CDT)/Contract tests for the quote service - based upon the REST API described earlier in exercise 'quote-service'.
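As a starting point, a CDT for the quote service can follow the same Testcontainers + Unirest pattern as the hello-spark template above. Note that in the sketch below the server port (6777) and the endpoint path '/msdo/v1/quotes/{quoteIndex}' are my assumptions - verify both against the REST API description in the 'quote-service' exercise before relying on them.

```java
package example;

import kong.unirest.HttpResponse;
import kong.unirest.Unirest;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.hamcrest.CoreMatchers.*;
import static org.hamcrest.MatcherAssert.assertThat;

// Sketch of a CDT for the quote service. The port and path below are
// ASSUMPTIONS - check them against the documented REST API.
@Testcontainers
public class TestQuoteService {
  private static final int QUOTE_PORT = 6777; // assumed server port

  @Container
  public GenericContainer<?> quoteService =
      new GenericContainer<>(DockerImageName.parse("henrikbaerbak/quote:msdo_2_3"))
          .withExposedPorts(QUOTE_PORT);

  private String rootUrl() {
    return "http://" + quoteService.getHost() + ":"
        + quoteService.getMappedPort(QUOTE_PORT);
  }

  @Test
  public void shouldGetSpecificQuote() {
    // The path is an assumed example - adapt it to the documented API
    HttpResponse<String> reply =
        Unirest.get(rootUrl() + "/msdo/v1/quotes/7").asString();
    assertThat(reply.getStatus(), is(200));
    // The contract: a JSON reply containing the quote's attributes
    assertThat(reply.getBody(), containsString("author"));
  }
}
```

Remember that a CDT exercises the raw REST API (HTTP calls against the running container), not your own QuoteService implementation - that distinction is exactly what the grade sheet's 'Test Type' row checks.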
Requirements:
Update 21/11: Ensure that the failing test case in 'TestCaveStorage.java' is silenced (or fixed). I will assess your hand-in by A) pulling your skycave image and extracting the code, and B) running your integration tests and reviewing them. All tests must pass.
Hand-in:
Evaluation:
I will review your code from your image; and I will run your tests in the 'integration' subproject. I will use the grade sheet to evaluate your submission.
Learning Goal | Assessment parameters |
Submission | Required artifacts are all present. |
Test Type | The test code is CDT code, not integration test code (exercise the REST API, not the QuoteService implementation). |
Test Code | The test code is simple (no complex constructs, basically only assignments, simple private method calls, very basic for loop). The test code reflects using the TDD principles (Isolated Test, Evident Test, Evident Data, etc.). Robert Martin 'Clean Code' properties are generally kept. |
Test Completeness | All central aspects of the API are well covered by tests. Irrelevant test aspects are not present. |
Functionality | The tests execute correctly, all tests pass. |
In this exercise, you should make Integration Tests (in the Fowler sense; Connector tests in the Bærbak sense) of your QuoteService implementation, that is, the connector developed earlier in the 'quote-service' exercise.
Requirements:
Hand-in:
Evaluation:
I will review your code on the Crunch machine; and I will run your tests in the 'integration' subproject. I will use the grade sheet to evaluate your submission.
Learning Goal | Assessment parameters |
Submission | Required artifacts are all present. |
Test Type | The test code is integration test code (it exercises the 'RealQuoteService' implementation), not CDT test code (it is not exercising raw REST calls) nor service tests (testing indirectly through exercising the PlayerServant code). |
Test Code | The test code is simple (no complex constructs, basically only assignments, simple private method calls, very basic for loop). The test code reflects using the TDD principles (Isolated Test, Evident Test, Evident Data, etc.). Robert Martin 'Clean Code' properties are generally kept. |
Test Completeness | All central aspects of the integration are well covered by tests. Irrelevant test aspects are not present. |
Functionality | The tests execute correctly, all tests pass. |
Argue why it does not make sense to create Consumer-Driven Tests for the Redis database.
In this exercise, the learning focus is on using the Redis shell and exploring the key-value paradigm of a NoSQL database.
In mandatory exercise 'integration-redis-connector', you will develop a Redis 'CaveStorage' connector/driver, and in order to do that, you of course have to reflect upon which Redis datatypes (and potentially 'secondary indices') suit the domain - just as Entity-Relation models are developed for 'good old SQL'.
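To make that data-modelling reflection concrete, here is a minimal sketch of one possible key design for rooms, kept as pure Java so it can be reasoned about without a running Redis. The key format "room:(x,y,z)" and the hash field names are my own assumptions, not a prescribed schema.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of one possible Redis data model for SkyCave rooms.
    The key format and field names are ASSUMPTIONS, not a given schema. */
public class RoomKeyDesign {
  /** Build the Redis key for the room at position (x,y,z). */
  static String roomKey(int x, int y, int z) {
    return "room:(" + x + "," + y + "," + z + ")";
  }

  /** Map a room's attributes onto Redis hash fields (cf. the HMSET
      examples below): one field per attribute. */
  static Map<String, String> asHash(String description, String creatorId) {
    Map<String, String> fields = new LinkedHashMap<>();
    fields.put("description", description);
    fields.put("creatorId", creatorId);
    return fields;
  }

  public static void main(String[] args) {
    System.out.println(roomKey(0, 1, 0));
    System.out.println(asHash("You are in an open forest", "27"));
  }
}
```

A design like this answers 'get room at (x,y,z)' in a single HGETALL, but note that queries such as computeListOfPlayersAt() have no natural key here - that is where a 'secondary index' (for instance a Redis set per room position) would come in.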
Requirements:
computeListOfPlayersAt()
addMessage() + updateMessage()
```
csdev@m1:~/proj/cave$ docker run -d -p 6379:6379 --name redis redis:7.2.1-alpine
5a2802c17c21cd550b068b11f6071c64f669fb31c0ca0cd215c1be190b3e99ed
csdev@m1:~/proj/cave$ docker exec -ti redis redis-cli
```
and then:
```
127.0.0.1:6379> set room(0,0,0) "You are standing"
OK
127.0.0.1:6379> get room(0,0,0)
"You are standing"
127.0.0.1:6379> set room(0,1,0) "You are in an open forest"
OK
```
```
127.0.0.1:6379> hmset room(0,0,0) description "You are standing" creatorId 0
OK
127.0.0.1:6379> hmset room(0,1,0) description "You are in an open forest" creatorId 27
OK
127.0.0.1:6379> hget room(0,1,0) description
"You are in an open forest"
```
In my 'Software Architecture in Practice' course, I teach about Architectural Prototyping: Small codebases that explore/experiment with an architectural issue or an architectural tradeoff.
Prototyping work is often too cumbersome in the original codebase context, so a minimal codebase is often harvested and used for quick experiments.
This exercise is basically a warm up to the 'integration-redis-connector' exercise later; and shows some of the setup you need.
The code base uses 'Jedis' as the Java driver; see some examples at How to use Redis in Java using Jedis. Beware, though, that the Jedis version in that tutorial is pretty old (2.4.2 versus the current 5.0).
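As a bridge from the redis-cli session above to Java, a minimal Jedis sketch could look like the following. It assumes a Redis instance on localhost:6379 (e.g. the Docker container started earlier); the key names mirror the shell examples.

```java
import java.util.Map;
import redis.clients.jedis.Jedis;

// Minimal Jedis sketch mirroring the redis-cli session above.
// ASSUMES a Redis server is reachable on localhost:6379.
public class JedisDemo {
  public static void main(String[] args) {
    try (Jedis jedis = new Jedis("localhost", 6379)) {
      // Plain string value, like: set room(0,0,0) "You are standing"
      jedis.set("room(0,0,0)", "You are standing");
      System.out.println(jedis.get("room(0,0,0)"));

      // Hash with one field per room attribute, like the hmset example
      jedis.hset("room(0,1,0)",
          Map.of("description", "You are in an open forest",
                 "creatorId", "27"));
      System.out.println(jedis.hget("room(0,1,0)", "description"));
    }
  }
}
```

The try-with-resources block ensures the connection is closed again; in the real connector you would rather keep a connection (or a JedisPool) for the lifetime of the storage object.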
Exercise: Partially implement a Redis-backed CaveStorage implementation, using TestContainers as the tool for an Integration Test (in the Fowler sense) suite.
You will find the initial steps for a solution in the gradle project: ap-redis-connector.zip
Hint:
Presently, the CaveStorage is a Fake Object test double, and our SkyCave cannot handle restarts at all. We need a real storage tier for production, of course. Therefore, implement the CaveStorage interface so it talks to a running Redis database. That is, make a Redis connector implementation.
In this exercise you should make Integration Tests (in the Fowler sense; Connector Tests in the Bærbak sense) of your CaveStorage implementation that connects to a real Redis container, and use these tests to drive a (test-driven?) implementation.
I advise you to 'take small steps' by solving the 'architectural-prototyping-redis-connector' exercise first! This exercise then more or less only deals with the integration-into-SkyCave aspect.
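Such a 'small step' could be one first test case against a Testcontainers-managed Redis. In the sketch below, the class name 'RedisCaveStorage', the 'RoomRecord' type, and the method names 'addRoom'/'getRoom' are assumptions on my part - align them with the actual CaveStorage interface in your code base.

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;

// Sketch of an integration (connector) test for a Redis-backed
// CaveStorage. 'RedisCaveStorage', 'RoomRecord', 'addRoom' and
// 'getRoom' are ASSUMED names - adapt to your real interface.
@Testcontainers
public class TestRedisCaveStorage {
  @Container
  public GenericContainer<?> redis =
      new GenericContainer<>(DockerImageName.parse("redis:7.2.1-alpine"))
          .withExposedPorts(6379);

  private RedisCaveStorage storage;

  @BeforeEach
  public void setup() {
    // Connect the connector-under-test to the container's mapped port
    storage = new RedisCaveStorage(redis.getHost(), redis.getMappedPort(6379));
  }

  @Test
  public void shouldStoreAndRetrieveRoom() {
    storage.addRoom("(0,0,0)", new RoomRecord("You are standing", "0"));
    assertThat(storage.getRoom("(0,0,0)").getDescription(),
        is("You are standing"));
  }
}
```

Because each test gets a fresh, empty Redis container, the tests stay Isolated in the TDD sense - no test depends on state left behind by another.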
Requirements:
Keep the tests in the 'integration' subproject (run by 'gradle itest'), as they are out-of-process tests that are slow to execute.
Hand-in:
computeListOfPlayersAt()
Evaluation:
I will review your code on the Crunch machine; and I will run your tests in the 'integration' subproject. I will use the grade sheet to evaluate your submission.
Learning Goal | Assessment parameters |
Submission | Required artifacts are all present. |
Test Type | The test code is integration (connector) test code, not CDT or Service test code. |
Test Code | The test code is simple (no complex constructs, basically only assignments, simple private method calls, very basic for loop). The test code reflects using the TDD principles (Isolated Test, Evident Test, Evident Data, etc.). Robert Martin 'Clean Code' properties are generally kept. |
Test Completeness | All central aspects of the integration are well covered by tests. Irrelevant test aspects are not present. Wall message methods are optional to include. |
Production Code | The CaveStorage implementation that connects to Redis is 'Clean Code' and handles connections correctly. You may ignore handling JedisExceptions for now. |
Functionality | The tests execute correctly, all tests pass. |
Hints and issues:
Crunch will execute a couple of test journeys on your SkyCave connected to a Redis storage, validating that it works. Wall behaviour will not be tested. Crunch will test against a Redis v 7.2.1.
Requirements:
Create a 'redis-storage-journey.cpf', defining the following configuration:
Subscription: Real | CaveStorage: **Redis** | Quote: Real | PlayerNameService: InMemory |
Hand-in By Crunch.
Evaluation: Crunch will bind your daemon to its own, empty, redis database through overriding the SKYCAVE_CAVESTORAGE_SERVER_ADDRESS value in the CPF. It will start your daemon, go to room (0,0,0) and validate that the cave has its initial configuration. Then Crunch will dig some random rooms, restart the daemon, log in a random new user, and make certain these rooms are present. It will also try to update a room's description, both as creator and non-creator.
Hints and issues:
Crunch will execute wall behaviour test journeys.
Requirements:
Hand-in By Crunch. Crunch will start your daemon using 'redis-storage-journey.cpf'.
Evaluation: Crunch will create 11 postings in one room and 2 in another, and test the pagination and ordering of wall postings. It will update a message, both as author and as non-author, and validate that the semantics is as expected.
Production Checkpoint! Update your production environment, so your production server in the cloud uses a Redis as persistent storage.
Requirements:
Hand-in: Submit screenshot(s) with shells on your production server that shows
Evaluation: I will review your output. I will log into your production server using your Cmd and verify that the room you have created exists. I may DIG a room, and ask you to find that room in the database at any time during the course.