Iteration 3: MicroService Architecture Outlook

Deadline: 9th December 2021.

Learning objective

Pick one or two techniques from the course and apply them in practice. Conduct experiments by applying the scientific method.

Prerequisites

You have a working SkyCave microservice architecture from the previous exercises.

Exercise

You are required to formulate a scientific experiment on one or two techniques from the course, conduct the experiment(s) systematically, draw conclusions from them, and relate your findings to relevant theory and concepts.

Three-person groups must pick at least two techniques to experiment with.

A scientific experiment is a process in which you formulate a hypothesis or a problem statement that is experimentally testable (it can be shown to be either correct or incorrect), conduct a controlled experiment to obtain data, and finally conclude whether the hypothesis is true or false based on an analysis of the obtained data. By 'controlled experiment' is meant that the work is carried out systematically and carefully, so that the result obtained is valid and not due to some other, random issue (a false positive).

Your experiment can be within the following areas (formulated as 'starting hypotheses/problem statements'):

  1. The number of image vulnerabilities in our SkyCave daemon / REST service image can be reduced by following (a subset of) Vermeer's recommendations.
  2. SkyCave / OurService stability can be improved by introducing Redis Clustering.
  3. SkyCave stability can be further improved by using Nygard stability pattern X (choose any feasible X you like); a minimal circuit-breaker sketch is shown after this list as one possible starting point.
  4. Monitoring and logging using Humio can provide a good overview of a running SkyCave system.
  5. Correlation IDs can be introduced in SkyCave to allow proper tracing and correlation of a specific user's request across the (full/sub-) set of services (daemon, playerservice, caveservice, ...); see the filter sketch after this list.
  6. Breaking API changes can be handled using Nygard's principles to allow dual API co-existence.
  7. Refactoring our current tests for our REST service to use Nygard's testing proposal with separate request side and response side tests (§14) will expose non-robustness of our service connectors implementations.
  8. Applying the event-sourcing paradigm on the REST service storage tier (or SkyCave storage tier) improves data reliability in case of human or computational errors.
  9. The Trickle-then-batch pattern can support live migration of the SkyCave data with no downtime.
  10. Canary releasing can be implemented using standard Swarm compose-files.
  11. Reporting (Newman pp 93) is best achieved in SkyCave using (service calls, data pump, event data pump, backup data pump) [pick one to investigate].
  12. Sharding can improve performance in a highly geographical distributed SkyCave system.
  13. Invent your own experiment within the area of microservices; consult Henrik to ensure approval before you begin.
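
To make hypothesis 3 more concrete: the following is a minimal sketch of Nygard's Circuit Breaker stability pattern in plain Java. It is not SkyCave code; all class and method names are hypothetical, and a real experiment would have to integrate something like it into your own connector or request-handler code and define measurable stability criteria.

    import java.util.concurrent.Callable;

    /** Minimal circuit breaker sketch. All names are hypothetical, not SkyCave code. */
    public class CircuitBreaker {
      private enum State { CLOSED, OPEN, HALF_OPEN }

      private final int failureThreshold;    // consecutive failures before the breaker opens
      private final long openPeriodMillis;   // how long to stay open before a trial call

      private State state = State.CLOSED;
      private int failureCount = 0;
      private long openedAt = 0;

      public CircuitBreaker(int failureThreshold, long openPeriodMillis) {
        this.failureThreshold = failureThreshold;
        this.openPeriodMillis = openPeriodMillis;
      }

      /** Run the protected call, or fail fast if the breaker is open. */
      public synchronized <T> T call(Callable<T> protectedCall) throws Exception {
        if (state == State.OPEN) {
          if (System.currentTimeMillis() - openedAt >= openPeriodMillis) {
            state = State.HALF_OPEN;   // allow a single trial call
          } else {
            throw new IllegalStateException("Circuit open: failing fast");
          }
        }
        try {
          T result = protectedCall.call();
          failureCount = 0;            // success: close the breaker again
          state = State.CLOSED;
          return result;
        } catch (Exception e) {
          failureCount++;
          if (state == State.HALF_OPEN || failureCount >= failureThreshold) {
            state = State.OPEN;        // trip the breaker
            openedAt = System.currentTimeMillis();
          }
          throw e;
        }
      }
    }

A usage along the lines of breaker.call(() -> subscriptionService.lookup(loginName)) (a hypothetical call) makes the daemon fail fast while a downstream service is unhealthy, which is exactly the kind of behaviour an experiment can measure against a baseline without the breaker.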

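Similarly for hypothesis 5, the sketch below shows one way a correlation ID could be attached to incoming HTTP requests. It assumes, purely for illustration, a javax.servlet based HTTP layer and an invented header name; SkyCave's actual connector framework may require a different hook point.

    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    /** Illustration only: assigns or propagates a correlation ID for each request. */
    public class CorrelationIdFilter implements Filter {
      // Hypothetical header name; any name works as long as all services agree on it.
      public static final String HEADER = "X-Correlation-Id";

      @Override
      public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
          throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse resp = (HttpServletResponse) response;

        // Reuse the ID if an upstream service already assigned one; otherwise create it here.
        String correlationId = req.getHeader(HEADER);
        if (correlationId == null || correlationId.isEmpty()) {
          correlationId = UUID.randomUUID().toString();
        }

        // Echo the ID back, and make it available to log statements further down the call chain.
        resp.setHeader(HEADER, correlationId);
        request.setAttribute(HEADER, correlationId);

        chain.doFilter(request, response);
      }

      @Override
      public void init(FilterConfig filterConfig) { }

      @Override
      public void destroy() { }
    }

Each service must also copy the header onto its own outgoing calls and include the ID in every log statement, so a single user request can be traced across daemon, playerservice, caveservice, etc. (for instance in Humio, cf. hypothesis 4).
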
Hand-in:

Evaluation:

The final report, including all three exercises, is evaluated together with the final oral defense to determine the final grade for the course.

Hints and Guides

You are set loose in this exercise. Some of the proposals are more complex than others and require a larger workload. All of them require you to limit the scope, which is fine as long as you remember to describe these limitations in the report. Industrial development is about applying a technique to all aspects to ensure product quality; science is about applying the technique correctly and systematically to the most complex aspect, to maximize the learning.

Work with what you find most interesting (or the least boring, depending on viewpoint :). One path you could consider is doing a 'warm-up' for the project in the next course.

In the project course, we will focus quite a lot more on scientific analysis and writing. You may have a look at the Synop template and Review guide.