Iteration 5: Test Stub and Abstract Factory

Deadline

Always at 23.59 on the same day as your TA session (unless another deadline has been agreed with the TA). Consult the deadline for your class in the delivery plan.

Learning Goals

In this iteration, the learning goals are using test doubles to get "randomness" under automated test control, and applying the Abstract Factory pattern.

Prerequisites

Carefully read FRS § 37.4.2 which outlines the EpsilonStone variant.

Kata

Spend the first 15-20 minutes of the class in plenum discussing...

... Test Stub for Random Minion

A SWEA group that did not follow the Test First principle came up with the following implementation of the RedWine hero power, which lacks the ISO 9126 Testability capability (FRS §3):

public class RedWinePowerStrategy implements HeroPowerStrategy {
  @Override
  public void executeEffect(StandardGame game) {
    // Get the size of the field of the opponent
    Player opponent = Player.computeOpponent(game.getPlayerInTurn());
    int opponentFieldSize = game.getFieldSize(opponent);
    // Compute a random index in range 0..opponentFieldSize-1
    int index = (int) (Math.random() * opponentFieldSize);
    // Tell game to reduce health of that minion by two
    game.changeCardHealth(opponent, index, -2);
  }
}
        

Of course, their TA was not happy (nor was the lecturer), as no automated testing is possible: the random 'index' is indirect input from the Java random library, which makes writing a reproducible test case impossible (or at least cumbersome and not aligned with the Evident Test principle). We need a test stub!

They came up with this bad design for a test stub: they simply copied the above code into a new "stub" class:

public class RedWinePowerSTUBStrategy implements HeroPowerStrategy {
  private final int whichIndexToApplyEffectOn;

  public RedWinePowerSTUBStrategy(int whichIndexToApplyEffectOn) {
    this.whichIndexToApplyEffectOn = whichIndexToApplyEffectOn;
  }

  @Override
  public void executeEffect(StandardGame game) {
    // Get the size of the field of the opponent
    Player opponent = Player.computeOpponent(game.getPlayerInTurn());
    int opponentFieldSize = game.getFieldSize(opponent);
    // STUB the random generation
    int index = whichIndexToApplyEffectOn;
    // Tell game to reduce health of that minion by two
    game.changeCardHealth(opponent, index, -2);
  }
}
which was then used to write a reproducible/deterministic test case like this one:

@Test
public void shouldApplyRedWineToMinionAtIndex0() {
  // Given a game
  StandardGame game = new StandardGame();
  // Given the stub for the RedWinePowerStrategy which
  // always applies the effect to the minion at index 0
  HeroPowerStrategy redwine = new RedWinePowerSTUBStrategy(0);
  // When I apply the effect
  redwine.executeEffect(game);
  // Then the minion at index 0 has health reduced by 2
  [... the proper assertThat here]
}

Analyze and discuss the above design proposal

  1. Does it adhere to the ③ principle, Encapsulate what varies?
  2. The 'generate a random index' variability point is handled by which of our four variability techniques (source-code-copy; parametric; polymorphic; compositional)?
  3. At a later point in time, a maintainer accidentally changes the line in RedWinePowerStrategy (that is, not in the RedWinePowerSTUBStrategy) from the correct one:
      game.changeCardHealth(opponent, index, -2);
                
    to
      game.changeCardHealth(opponent, index, +2);
                
    Will our automated JUnit tests fail?
  4. I called it a bad design above, which is rather subjective. To rephrase it in more correct and objective terms, which ISO 9126 capability (FRS §3) is it lacking? Analyzability? Changeability? Stability? Testability?

Discuss (and implement in your HotStone system) a better design.
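One compositional design is to encapsulate the 'select a minion index' step behind its own small interface, so that the production code and the test code compose the very same RedWinePowerStrategy with different index-selection delegates. The sketch below uses hypothetical names (IndexStrategy, RandomIndexStrategy, FixedIndexStrategy are not part of HotStone); adapt them to your own design.

```java
import java.util.Random;

/** Encapsulates the 'select a minion index' variability point. */
interface IndexStrategy {
  /** Returns an index in the range 0..bound-1 (bound must be positive). */
  int nextIndex(int bound);
}

/** Production implementation, delegating to java.util.Random. */
class RandomIndexStrategy implements IndexStrategy {
  private final Random random = new Random();
  @Override
  public int nextIndex(int bound) {
    return random.nextInt(bound);
  }
}

/** Test stub: always returns the fixed index given at construction. */
class FixedIndexStrategy implements IndexStrategy {
  private final int fixedIndex;
  public FixedIndexStrategy(int fixedIndex) {
    this.fixedIndex = fixedIndex;
  }
  @Override
  public int nextIndex(int bound) {
    return fixedIndex;
  }
}
```

RedWinePowerStrategy then receives an IndexStrategy through its constructor and computes `int index = indexStrategy.nextIndex(opponentFieldSize);` instead of calling Math.random() directly. Production code composes it with RandomIndexStrategy, while the JUnit fixture composes it with, say, new FixedIndexStrategy(0). Note that only one copy of executeEffect now exists, which is exactly what question 3 above is probing.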

Exercises

EpsilonStone

Develop the EpsilonStone variant using a compositional approach by refactoring the existing HotStone production code. As much production code as possible must be under automated test control.

Abstract Factory

Refactor your present HotStone design to use Abstract Factory for creating delegates (such as strategies for winner determination, mana production, deck building, hero powers, etc.). All your existing variants, Alpha, Beta, ..., should be represented by concrete factories.
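As a rough illustration of the intended shape, here is a minimal sketch with only two of the delegate types, using hypothetical names (GameFactory, AlphaStoneFactory, and the marker interfaces below are placeholders; your HotStone interfaces will differ):

```java
// Minimal delegate interfaces, shown as empty markers for brevity.
interface WinnerStrategy { }
interface ManaProductionStrategy { }

// Concrete delegates of the (hypothetical) Alpha variant.
class AlphaWinnerStrategy implements WinnerStrategy { }
class AlphaManaProductionStrategy implements ManaProductionStrategy { }

/** Abstract Factory: one creation method per delegate ("product"). */
interface GameFactory {
  WinnerStrategy createWinnerStrategy();
  ManaProductionStrategy createManaProductionStrategy();
  // ... one create method per variability point (deck building, hero powers, ...)
}

/** Concrete factory bundling the delegates of the Alpha variant. */
class AlphaStoneFactory implements GameFactory {
  @Override
  public WinnerStrategy createWinnerStrategy() {
    return new AlphaWinnerStrategy();
  }
  @Override
  public ManaProductionStrategy createManaProductionStrategy() {
    return new AlphaManaProductionStrategy();
  }
}
```

StandardGame would then take a GameFactory in its constructor and pull all its delegates from it, so switching from one variant to another becomes a single change at the construction site, e.g. `new StandardGame(new AlphaStoneFactory())`.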

Notes

Drawing the full Abstract Factory UML diagram leads to way too many UML association lines---essentially making the diagram look like spaghetti. UML should provide clarity and overview, not a mess. So, do not draw the lines from factories to the individual concrete products (the strategies)---this is information that a developer (knowing the Abstract Factory pattern) will find in a configuration table or in the code instead.

Deliveries:

  1. Develop on a feature branch "iteration5". (You can, of course, make sub-branches for the individual exercises if you want.)
  2. You should document your group's work by following the more detailed requirements defined in the iteration5 report template (pdf) (LaTeX source)

Evaluation criteria

Your submission is evaluated against the learning goals and adherence to the submission guidelines. The grading is explained in Grading Guidelines. The TAs will use the Iteration 5 Grade sheet to evaluate your submission.

Learning Goal: Assessment parameters
Submission: Git repository contains a merge request for branch "iteration5". Git repository is not public! Required artefacts (report following the template) must be present.
Test Stub: The "randomness" of the 'select minion algorithm' is properly encapsulated in a test stub using a compositional design. JUnit test cases for EpsilonStone and/or hero power methods are deterministic and correct. The design is clearly and correctly documented in the report.
Abstract Factory Pattern: The abstract factory pattern is correctly designed and implemented, and well documented in the report.
UML: The UML diagrams are syntactically correct, correctly reflect the patterns/doubles, and give a correct overview of the architecture. The UML diagrams do not show irrelevant, implementation-oriented details.
TDD and Clean Code: The TDD process has been applied. Test code and production code keep obeying the criteria set forth in the previous iterations, including adhering to Clean Code properties for newly developed code. Missing features are noted in the backlog (minor omissions allowed).