This page contains summaries of all non-confidential reports that I have supervised in the course modules (fagpakker) of the part-time Master's programme.
Note: Due to GDPR, I have had to remove the actual reports as well as the author names. Please mail me at hbc (at) cs (dot) au (dot) dk to get access to individual non-confidential reports.
This study is an investigation into the world of observability, using a predefined tool stack to implement a proof of concept of both log aggregation and metrics collection from a distributed system, within a fixed time frame of 40 hours and with the objective of acquiring knowledge about log aggregation, metrics, and tracing. A tool stack consisting of Grafana, Prometheus, and Loki is chosen as the monitoring tooling, and the SkyCave microservice system is chosen as the system to monitor. A basic set of requirements for log aggregation and metrics collection is defined, and the setup of the tool stack and the implementation adding observability to SkyCave are described and verified through the execution of a number of SkyCave interaction simulations. The conclusion is that it is possible to implement the basic observability features within the given time frame, without prior knowledge of the tool stack.
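As a hedged illustration of the metrics half of such a setup (a sketch under my own assumptions, not the report's implementation), a plain Java service can expose a Prometheus scrape endpoint via Micrometer; the metric name skycave.requests, the port, and the path are invented for the example:

```java
import com.sun.net.httpserver.HttpServer;
import io.micrometer.core.instrument.Counter;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Sketch: expose a Prometheus scrape endpoint from a plain Java service using
// Micrometer. Metric name, port, and path are invented for the example.
public class MetricsEndpointSketch {
    public static void main(String[] args) throws Exception {
        PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
        // Rendered as skycave_requests_total in the Prometheus exposition format.
        Counter requests = Counter.builder("skycave.requests")
                .description("Requests handled by the service")
                .register(registry);

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/metrics", exchange -> {
            requests.increment();                       // count each scrape as a demo signal
            byte[] body = registry.scrape().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();                                 // Prometheus then scrapes http://host:8080/metrics
    }
}
```

Prometheus would be configured to scrape this endpoint, and Grafana would chart the resulting series; the log-aggregation half (Loki) is configured analogously on the logging side.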
This document describes the process of researching and testing compilation optimization techniques in the .NET ecosystem for microservices, in pursuit of lower running costs for cloud-hosted services when using Docker for deployment. Several tests were performed to measure the impact of different styles of compilation on computing resource usage. The project set out to answer whether pursuing these optimizations is worthwhile relative to some of their drawbacks.
This project investigates the learnability of Azure Functions by refactoring the player microservice. The refactoring process entails converting the player microservice into serverless functions on the Azure platform. Two team members participated in this study: one refactored the microservice using Java, while the other used C#. Each member was allocated 20 hours to complete the refactoring, while maintaining a logbook as documentation. The primary objective is to evaluate the ease with which developers can transition from a traditional microservice architecture to a serverless architecture using Azure Functions. By analyzing the logbooks, we gain insights into the specific challenges and time-consuming aspects of the refactoring process. The findings will contribute to a better understanding of the practical implications of adopting serverless computing with Azure Functions, aiding organizations in making informed decisions about similar transitions.
Sydbank currently has a legacy price distribution system (SPF) that the developers would like to see upgraded and rebuilt as a microservice system, distributed as Docker containers, so that it is ready for cloud-native hosting technologies. To persuade the business to prioritize the task, they want to be able to show how much more performance the upgraded system can deliver. Cloud-native technologies support horizontal scaling, which can be used to increase the performance of a system, but due to the team's lack of experience with this technique they cannot say precisely how much more performance it will yield. This report therefore investigates the percentage increase in performance obtained by scaling a system horizontally. The results are obtained by performance testing a system similar to the real SPF, built to be as useful to Sydbank as possible. The results are presented together with observations from the tests, as well as the potential sources of error the investigation has faced.
This study explores containerization as a means to streamline the deployment procedures for API services hosted on Microsoft Azure. Existing deployment methodologies encounter obstacles stemming from vendor lock-in. Containerization offers a solution by providing portability, thereby minimizing these challenges. Through dockerizing an existing service and deploying it via a continuous deployment pipeline, this study assesses Docker’s efficacy in enhancing deployment workflows. Infrastructure as a service (IaaS) is explored as a measure to minimize further vendor lock-in.
This paper describes the process of enhancing the observability and scalability of Sparinvest's REST API, which currently lacks effective usage tracking and logging mechanisms. The project, developed by the group "Golf", also aims to gain practical knowledge in using Docker and Graylog. Guided by Sam Newman's "Building Microservices" and the Graylog documentation, the methodological approach combines theoretical and practical insights. The project outcomes include Docker images for the API and its MSSQL database, configured as a read-only database, and a Docker Swarm setup with multiple replicas. Additionally, a comprehensive Graylog monitoring setup with a custom dashboard and detailed logging middleware will ensure traceability within the microservices architecture.
This report examines the integration of Event Sourcing into CaveService, which is part of SkyCave, a massive multi-user online game. The project aims to move away from traditional data management systems that overwrite data, which can be harmful to data integrity in case of errors. Event Sourcing promises an improvement by maintaining an immutable log of changes, thus offering complete traceability, audit trails, and the ability to reconstruct past states. These capabilities are important for data-driven decision-making, auditability, and historical analysis. The project aims to demonstrate the viability of implementing Event Sourcing without prior experience, focusing on how it can provide insights into player actions within game rooms. The application of Event Sourcing in CaveService is expected to improve data integrity and reporting capabilities, and to enable more advanced insight into the rooms of the game. The project explores these outcomes via the development of a visualization, so that stakeholders are able to improve the game in the future based on data rather than intuition.
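The core mechanics the report builds on can be hinted at in a few lines. The following sketch (domain and names are invented, not CaveService code) shows the append-only event log and a read model derived by replaying it:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Event Sourcing core idea: state is never overwritten; an
// immutable event log is appended to, and current state is derived by replay.
// The room/visit domain and all names are invented for the example.
public class EventSourcingSketch {

    record RoomEvent(long sequence, String type, String playerId) {}

    static class RoomEventLog {
        private final List<RoomEvent> events = new ArrayList<>();

        void append(String type, String playerId) {           // append-only, never update
            events.add(new RoomEvent(events.size(), type, playerId));
        }

        long visitCount() {                                    // derived read model
            return events.stream().filter(e -> e.type().equals("PLAYER_ENTERED")).count();
        }

        List<RoomEvent> history() {                            // full audit trail for free
            return List.copyOf(events);
        }
    }

    public static void main(String[] args) {
        RoomEventLog log = new RoomEventLog();
        log.append("PLAYER_ENTERED", "alice");
        log.append("PLAYER_ENTERED", "bob");
        log.append("PLAYER_LEFT", "alice");
        System.out.println("visits so far: " + log.visitCount());   // prints 2
        log.history().forEach(System.out::println);
    }
}
```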
This report presents the development of a proof-of-concept distributed system to visualize access token flows in the OAuth 2.0 protocol. The system supports teaching access control and OAuth 2.0 concepts to a broad variety of IT security students. Using log aggregation and distributed tracing, the system traces and visualizes token flows across multiple services. The proof-of-concept system uses Graylog for log collection, aggregation, and visualization, supporting observability through correlation IDs added to requests sent between services. While Graylog has some limitations in visualizing the token flow in a process-oriented manner, tracking and visualization of the token flow are proven feasible. Future improvements include incorporating authorization grant processes and the use of OAuth 2.0 access token scopes. The proof-of-concept system could also serve as a tool to introduce students to concepts such as distributed tracing and log aggregation.
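A minimal sketch of the correlation-ID technique the system relies on (the header name, URL, and helper are invented for illustration; this is not the report's code): the first service mints an ID, every downstream call carries it as a header, and every log line includes it so the log aggregator can stitch the token flow back together.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.UUID;

// Sketch of correlation-ID propagation for distributed tracing via log
// aggregation. Header name, URL, and method names are hypothetical.
public class CorrelationIdSketch {

    static final String HEADER = "X-Correlation-Id";           // hypothetical header name

    static String ensureCorrelationId(String incoming) {
        // Reuse an incoming ID if present; otherwise mint one as the first hop.
        return (incoming != null) ? incoming : UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        String correlationId = ensureCorrelationId(null);      // we are the first hop

        // Propagate the ID on the outgoing call to the next service in the flow.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://token-service.local/introspect"))
                .header(HEADER, correlationId)
                .GET()
                .build();

        // Log with the ID so the aggregator can correlate this line with
        // downstream lines carrying the same value.
        System.out.printf("correlationId=%s forwarding request to %s%n", correlationId, request.uri());
    }
}
```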
This paper examines the saga pattern within a small microservice architecture consisting of one central application and three microservices. Our paper aimed to assess the saga pattern's ability to provide ACID-like functionality, its implementation complexity, and its effectiveness in ensuring eventual consistency. Contrary to our initial hypothesis, our findings reveal that the saga pattern, on its own, does not fully provide ACID functionality, as it lacks isolation. Despite the pattern's inherent complexity, requiring significantly more than the hypothesized 20 hours for implementation, it ensures eventual consistency. Our analysis of the two saga implementations concluded that the choreography approach is marginally more suitable, while neither approach particularly fits a small-scale microservice architecture due to the high effort-to-value ratio.
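For orientation, here is a toy choreography sketch (domain, event names, and the in-memory bus are all invented; a real saga would ride on a message broker) showing the emit-and-compensate flow the paper evaluates:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of a choreographed saga: each service reacts to events and emits the
// next event, or a compensating event on failure. All names are invented.
public class ChoreographySagaSketch {

    static class EventBus {
        private final List<Consumer<String>> subscribers = new ArrayList<>();
        void subscribe(Consumer<String> handler) { subscribers.add(handler); }
        void publish(String event) {
            System.out.println("event: " + event);
            List.copyOf(subscribers).forEach(h -> h.accept(event));
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        boolean paymentWillFail = true;                        // flip to see the happy path

        // Payment service: reacts to OrderCreated with success or failure.
        bus.subscribe(event -> {
            if (event.equals("OrderCreated")) {
                bus.publish(paymentWillFail ? "PaymentFailed" : "PaymentCompleted");
            }
        });

        // Order service: compensates its own local transaction when payment fails.
        bus.subscribe(event -> {
            if (event.equals("PaymentFailed")) {
                bus.publish("OrderCancelled");                 // compensating action
            }
        });

        bus.publish("OrderCreated");                           // kick off the saga
    }
}
```

Note how no step is isolated from the others while the saga is in flight, which is exactly the missing-isolation point the paper's findings highlight.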
This project aims to apply the Strangler Pattern to transition a monolithic game server to a microservice architecture. Initially, automated tests were created to validate the existing functionality of the GameDAO class, ensuring confidence in maintaining behavior throughout the transition. A new microservice was developed following a well-defined API specification and validated with Contract- or Consumer-Driven Tests (CDTs). Using the Branch by Abstraction pattern, we incrementally replaced the monolithic functionality with a RemoteGameDAO connector that interfaces with the new microservice. Our detailed logbook tracked the process, which took a total of 17.5 hours, slightly exceeding our 15-hour hypothesis. The comprehensive tests confirmed that the functionality remained consistent before and after the transition, validating the effectiveness of the Strangler Pattern in this real-world scenario.
We will explore how to build an Infrastructure Delivery Pipeline by following the methods outlined by Morris [Mor20]. We will build offline tests, stack tests and integration tests and show how each of these test stages improves our ability to make reliable changes to the system’s infrastructure-as-code.
This report details experiments on how quickly various patterns and tools could be applied to a system to increase its stability and observability. The patterns and tools covered consist of:
The goal of this project is to accommodate performance and energy efficiency concerns by providing the knowledge needed to make architectural decisions that reduce computational overhead in existing or upcoming applications. In this project we study the effect of changing the data serialization format to raise performance and reduce energy requirements. We have built an architectural prototype to handle the serialization and deserialization process for three different data formats: XML, JSON, and Protobuf. The results indicate that the binary Protobuf format is superior in the measured aspects (performance, object size, and energy efficiency), at the cost of sacrificing human readability.
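The shape of such a comparison can be sketched as below, assuming Jackson for the JSON and XML legs (Protobuf is omitted because it requires generated classes); the Measurement DTO and its values are invented for the example:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;

// Sketch: serialize the same object in two text formats and compare payload
// sizes, the kind of measurement the prototype automates. DTO is invented.
public class SerializationSizeSketch {

    public static class Measurement {                          // simple DTO with public fields
        public String sensorId = "turbine-7";
        public long timestamp = 1700000000L;
        public double value = 42.5;
    }

    public static void main(String[] args) throws Exception {
        Measurement m = new Measurement();

        byte[] json = new ObjectMapper().writeValueAsBytes(m); // Jackson JSON
        byte[] xml = new XmlMapper().writeValueAsBytes(m);     // Jackson XML

        System.out.println("JSON bytes: " + json.length);      // JSON is usually smaller than XML;
        System.out.println("XML  bytes: " + xml.length);       // binary Protobuf is smaller still
    }
}
```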
In this project, we investigate the performance and modifiability characteristics of HTTP long polling, server-sent events, and WebSockets. The aim is to provide software architects with guidance for selecting the best protocol. The investigation is done by building an architectural prototype using the three protocols and conducting experiments testing the round-trip message delivery latency for a varying number of clients and message size. We find that in all experiments HTTP long polling is slowest. WebSockets is the fastest protocol for small message sizes, and server-sent events is the fastest protocol for large message sizes. However, performance should not be the only focus when choosing a protocol. WebSockets and server-sent events have different properties, making them suitable for different scenarios.
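Of the three protocols, server-sent events is the simplest to sketch. The following minimal SSE endpoint on the JDK's built-in HTTP server (path and payloads invented) shows the one-long-lived-response shape such prototypes build on; WebSockets and long polling need more machinery:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Sketch of server-sent events: one long-lived HTTP response on which the
// server pushes "data:" frames. Endpoint path and messages are invented.
public class SseServerSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/events", exchange -> {
            exchange.getResponseHeaders().add("Content-Type", "text/event-stream");
            exchange.sendResponseHeaders(200, 0);              // 0 = streaming body
            try (OutputStream os = exchange.getResponseBody()) {
                for (int i = 0; i < 5; i++) {
                    // Each SSE frame is "data: <payload>\n\n"; the browser's
                    // EventSource fires one message event per frame.
                    os.write(("data: tick " + i + "\n\n").getBytes(StandardCharsets.UTF_8));
                    os.flush();
                    Thread.sleep(1000);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        server.start();                                        // try: curl -N http://localhost:8080/events
    }
}
```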
Based on the raised importance of energy-efficient software as a part of making more sustainable products, this project seeks to apply tactics for energy-efficient software in an embedded context. The project investigates whether these tactics can be used to optimize the energy consumption of embedded devices. The project runs experiments in three main groups: how embedded units communicate, software components, and software's impact on hardware components. The results show that, measured against the goals set for each experiment, only the hardware group achieved its goals. The hardware category achieved a 19.2% power reduction by reducing LED brightness and turning off the display backlight, and a 7.14% power reduction by reducing the processor frequency; both changes achieved the intended energy optimization. The changes within the software components and communication groups did not achieve their goals and all yielded power reductions below 1%.
AI tools such as GitHub's Copilot present an opportunity for developers, but few studies have investigated exactly how the usage of AI tools impacts developer productivity. In this study, 6 developers were given two exercises of similar complexity and solved one with and one without Copilot assistance. To evaluate developer productivity, the time spent solving the exercises was measured and compared. The results indicate a significant decrease in time spent solving the exercise with Copilot assistance. Furthermore, a metric for developer productivity that takes experience into account was used, and a score was calculated for the two scenarios. This also showed an increase in productivity, though not as clearly as when based solely on time. Lastly, the developers were asked to report on their perceived productivity increase and their experience with Copilot in terms of the helpfulness and quality of suggested solutions. Most developers' perceived productivity increases were supported by the findings. Developers generally reported that Copilot seems most helpful in a predefined context, but that the suggestions had little built-in quality.
This paper describes the development and prototyping of time series databases as a replacement for an MS SQL database, in a test setup that resembles a real-life scenario in the finance sector. The differences between the database types are assessed through comparative queries in a replicable setup.
This report explores messaging brokers' capability to secure data consistency in distributed systems when faults occur in the infrastructure. Drawing upon RabbitMQ and Kafka as the chosen technologies, this study delves into an exploration of their respective capabilities for achieving robust fault tolerance when consuming messages.
This project explores the FaaS (Function as a Service) architecture in comparison to a traditional RESTful Web API, specifically development complexity for the FaaS and response times for both. The purpose is to investigate whether FaaS can reduce infrastructure complexity and decrease time to market. Two simplistic prototypes are constructed: a RESTful Web API and its FaaS equivalent. The quality attributes modifiability, testability, and performance are scrutinized using Quality Attribute Scenarios (QAS). Initial findings indicate that FaaS may be advantageous for smaller projects, calling for further exploration.
This study/report shows how to develop a set of microservices that makes it possible to remove functionality from an IBM z/OS mainframe monolith. We demonstrate how a specific service on the mainframe can be subjected to strangulation as per the strangler pattern.
Traceability in the microservice stack of TDC Erhverv is poor, making it difficult to find the root cause of the errors experienced by customers. To mitigate this, a logging library has been created. It uses Logback in combination with Spring Cloud Sleuth to add correlation IDs, and other information relevant to tracing errors, to log messages. The logging library logs incoming requests by implementing an aspect using AspectJ, and outgoing requests by means of a Spring interceptor. Adding the library to the microservices improves traceability by making it possible to follow the correlation ID through the microservice stack, and by logging relevant information such as payloads, endpoints, and errors. The library has been added to four microservices and tested. Implementing the library in a microservice is easy and takes no more than 20 minutes. A Grafana dashboard further improves traceability by giving an easy-to-use overview of all logs with a specific correlation ID. Instrumenting the remaining services with the logging library will greatly improve traceability in the TDC Erhverv microservices, and can be done in a relatively short time.
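A sketch of what such an aspect can look like (the pointcut expression, package name, and MDC key are my assumptions for illustration, not the library's actual code; Sleuth populates trace identifiers in the SLF4J MDC under similar keys):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

// Sketch of aspect-based request logging: an @Around advice that logs
// entry/exit/failure of controller methods together with the correlation ID
// held in the MDC. Pointcut and MDC key are hypothetical.
@Aspect
@Component
public class RequestLoggingAspect {

    private static final Logger log = LoggerFactory.getLogger(RequestLoggingAspect.class);

    @Around("execution(* com.example.api..*Controller.*(..))")   // hypothetical pointcut
    public Object logRequest(ProceedingJoinPoint joinPoint) throws Throwable {
        String correlationId = MDC.get("traceId");                // id put there by the tracing setup
        log.info("correlationId={} entering {}", correlationId, joinPoint.getSignature());
        try {
            Object result = joinPoint.proceed();                  // run the actual endpoint
            log.info("correlationId={} exiting {}", correlationId, joinPoint.getSignature());
            return result;
        } catch (Throwable t) {
            log.error("correlationId={} failed {}: {}", correlationId,
                    joinPoint.getSignature(), t.getMessage());
            throw t;
        }
    }
}
```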
This report looks at a challenge from my daily work of managing software packages in the situation where you are responsible for both the package and its usage. It applies the concept of a Software Bill of Materials to build an architectural prototype: a database storing metadata that describes package usage in the context of an existing code base spanning many different applications and services.
In classical monolithic software solutions, developers (in the large majority of projects, at least) work within some shared choices of technology for frontend, backend, hosting, deployment, etc.: the tech stack of the application. This has been the case in an open-source hobby project the author is involved in with a group of friends. The project was started by five highly motivated friends, but is now only being contributed to by two of the developers. The main reason for the drop in engagement is that the chosen tech stack simply is not within all the developers' regular set of programming languages or tools. Contributing to the project has hence required some of the developers to spend extra time learning the basics of a new tech stack, and this "barrier to entry" has led some to deem it too much work before being able to contribute efficiently. Using the application described, the report investigates how, and which, microservice and DevOps techniques can help lower the barrier to entry and foster more seamless co-development in software projects.
This report explores the learnability and usability of the Amazon Web Services Internet-of-Things (IoT) framework through practical experience, by building two architectural prototypes. The first prototype consists of a Raspberry Pi feeding monitoring data to a server in the cloud using the software offered by the Amazon Web Services IoT framework. The second, more advanced prototype explores the usability of the IoT framework in the context of an industrial application performing local real-time control. The perspective is to replace the existing logging system with a service-based solution that stores data in the cloud rather than locally. Furthermore, in order to have an end-to-end system, a way to visualize the monitoring data is also explored.
Feedback on assignments in the computer science AP degree programme (datamatikeruddannelsen) can be very time-consuming, and with the focus on differentiated teaching there are, in practice, only a few minutes for each student, which the students find highly unsatisfactory. Automated feedback could help reduce this problem. This report describes a system, based on a microservice architecture and distributed in Docker containers, that does exactly this, while also making it possible to extend the system with new exercises.
Inspired by the chaos engineering principles applied to complex microservice architectures used by e.g. Netflix, this report describes the introduction of chaos engineering on the SkyCave system in order to expose unsafe failure modes. The experiments are carried out in a local and isolated environment, using a load generator that simulates user journeys to give the best possible insights without having a real-life system. The user journeys are challenged by using service-level fault injections to disrupt the communication within the architecture. The result shows that part of the SkyCave system is actually resilient to failures, while other parts simply crashed for the user due to insufficient error handling. The introduction of chaos engineering has proven able to expose weaknesses in the architecture that were not found by the previously applied classic test methods (unit tests, integration tests, etc.). Had this been a real-life system, the tests carried out would have pointed to areas where more robustness could be introduced in order to increase the resilience of the complete architecture. Notably, this has been achieved on a system before disrupting it in production, which can be an advantage for systems that are not that mature. The initial steps have thus been taken to continue the chaos engineering journey in production.
Automating the promotion work process by building an architectural prototype that runs inside a Red Hat OpenShift (OKD) Kubernetes container cluster.
When users start complaining about the performance of an application, there is no catch-all silver bullet a developer can go to for relief. Such was the case for a small team of developers who had launched a small web application hosting a game statistics website: queries were quickly becoming too slow. A performance investigation into the application was carried out. It helped the developers establish performance goals through quality attribute scenarios, analyse the current state and scalability of the application using stress and load tests, and finally identify the specific bottlenecks, and remedies, in the querying of data via profiling and best-practice guidelines on performance tactics.
The CLM system suffers from performance issues when running on major installations. This project describes the approach taken to investigate two specific scenarios that stress the system, and the discoveries made.
This report explores the learnability and usability of Apache Kafka through practical experience, by building an architectural prototype and running it in the cloud. The prototype is based on the architecture-demo system TeleMed, in which we integrate Apache Kafka to play a central role in the infrastructure. We experience both pros and cons of working with Apache Kafka, and why it is only on the surface that it looks like the traditional messaging system it is often compared to.
This report examines properties of MSSQL and MongoDB through a small literature study, and tests performance through response times in both databases on a telemedical ECG application. The test shows that no improvement in response times could be achieved by replacing the MSSQL database with MongoDB, neither when measurement points are stored individually nor when stored as a complete measurement set.
This project seeks to explore which serverless computing provider is appropriate for developing a typical Web API. In practice, a serverless reference Web API is initially constructed on a non-hosted serverless platform as a common starting point, which is then adapted and deployed to each of the evaluated providers. An analysis model is constructed based on statements from e.g. Martin Fowler and is used for the evaluation of the providers. The analysis model has a focus on usability and performance, and with special emphasis on the cold start problem. The evaluated serverless computing providers are Amazon Web Services, Google Cloud, and Cloudflare.
(No abstract)
For applications operating on user input, there is a possibility that the user erroneously updates the application with incorrect values, and wishes to undo such an update. This project seeks to explore whether Event Sourcing can be employed to provide the user with the capability of correcting such errors, by allowing the user to revert to an earlier application state for the affected domain object. In practice, the hypothesis is tested by adding the capability to an existing service by updating its storage layer to be based on Event Sourcing. Furthermore, the performance impact of applying Event Sourcing is measured, both before and after implementing a read model, to indicate the necessity of creating and maintaining such a model.
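The revert mechanism itself can be sketched in a few lines: with an event-sourced store, an earlier state of a domain object is recovered by replaying only a prefix of the event log. The domain and event type below are invented, not the service's code:

```java
import java.util.List;

// Sketch of revert-by-replay: current state is a fold over all events, and an
// earlier state is the same fold over a prefix. Domain is invented.
public class EventReplaySketch {

    record AmountChanged(int newAmount) {}                     // single event type for simplicity

    static int replay(List<AmountChanged> events, int upToExclusive) {
        int state = 0;                                         // initial state of the object
        for (int i = 0; i < upToExclusive && i < events.size(); i++) {
            state = events.get(i).newAmount();                 // apply event to state
        }
        return state;
    }

    public static void main(String[] args) {
        List<AmountChanged> log = List.of(
                new AmountChanged(100),
                new AmountChanged(250),
                new AmountChanged(999));                       // erroneous user input

        System.out.println("current state:  " + replay(log, log.size()));   // 999
        System.out.println("reverted state: " + replay(log, 2));            // 250, before the error
    }
}
```

Replaying the full log on every read is what makes a separate read model attractive, which is exactly the performance trade-off the project measures.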
Large, distributed online systems often employ CQRS (Command Query Responsibility Segregation), ES (Event Sourcing), and Eventual Consistency as techniques to achieve better performance and reliability. This report describes these techniques, which problems they solve, and how, by studying various available case studies and literature in the field. The principles are demonstrated on a small POC, loosely based on SkyCave.
This report looks into the learnability of serverless computing with AWS and the Serverless Framework. This is done by first implementing two microservices with corresponding test cases and a CI/CD pipeline, and then evaluating the developer experience, the tooling, and the AWS services used. Based on the conducted experiments, the report verifies the hypothesis that the services can be built, tested, and deployed in 60 hours each, as the actual time spent averaged less than half of that. It can therefore be concluded that developer productivity is increased, making serverless computing fulfill the often-told promise of a shorter time-to-market.
This study examines the use of the Flagger progressive delivery operator to execute canary deployments on a Kubernetes cluster using Istio as the service mesh. The application used as an example is the Online Boutique demo application. A set of non-functional requirements for the deployment is proposed and refined using Quality Attribute Scenario templates. Then Flagger is configured, the analysis is calibrated to meet the proposed response measures, and a number of experiments are executed to verify that the requirements for the deployment have been achieved. The conclusion is that Flagger can be used to execute canary deployments, but also that a response measure of rollout in less than 10 minutes cannot be met for all services on a system with as little activity as the target demo application.
The purpose of this report is to gain knowledge about the Istio service mesh and its capabilities for increasing the availability of a software system based on a microservice architecture. We set up a Kubernetes cluster, install the Istio service mesh, and experiment with implementing some of Nygard's stability patterns with Istio.
In this report I improve the quality of logging in a multi-service system by implementing the Splunk logging best practices. Furthermore, I add monitorability to the system by using Humio. I investigate whether this is possible within a given time frame.
This report looks at how authentication and authorization can be done in a microservice architecture. The report is based on an existing system in development at Energinet called Project-Origin. The system currently uses OAuth2 for authentication and authorization, both externally and internally, and is experiencing issues because of this misuse of OAuth2. The report recommends that OAuth2 should only be used externally, for delegation to clients, while JWTs are used internally; this, coupled with the addition of an SSO gateway, should solve most issues and simplify the architecture.
This report describes the work of splitting up an existing monolithic architecture, with a special focus on communication paths and maintainability, but also with an eye to performance, security, and availability where relevant. The architecture implements a call and locking system, and the focus is especially on the communication between the distributed units in the field and the servers.
This report analyses strategies for managing an API's lifecycle with regard to versioning, deployment, and deprecation. Strategies within each category are analysed and implemented in a microservice called CaveService, to determine whether they can solve the common issues around changes to the API. Semantic versioning is used to handle version numbers, HAProxy is used as a load balancer for canary deployment, and Humio is used to monitor the usage of the API versions with regard to deprecation. These strategies have proved effective in managing the lifecycle of an API in the selected microservice.
In this report it will be verified that a monolithic system that does not support scaling and upgrading of independent parts can be redesigned using a microservice architecture that supports scaling and upgrading of independent roles of the system. It will be shown that the new design fulfills the above based on: a) Quality Attribute Scenarios (QAS) and tactics from (Bass, Clements and Kazman 2012); b) specific design suggestions taken from (Newman 2014); c) architectural prototyping taken from (Bardram, Christensen and Hansen 2004). The system design is documented using the 3+1 approach (Christensen, Corry and Hansen, The 3+1 Approach to Software Architecture Description Using UML, 2016).
This project examines three open source databases in a distributed environment by reviewing their documentation: MariaDB as a relational database, MongoDB as a document database, and Cassandra as a wide-column database. The three are compared with respect to their cluster structure, their scalability, performance, and availability, and whether they satisfy the ACID or BASE principles.
Emerging new memory technologies may be about to change how data persistency has been handled for decades. One of these new technologies is from Intel and is called Optane or 3D XPoint; it has the capacity of modern hard drives and the speed of modern DRAM, while also being persistent. To fully utilize this new combination of capabilities, the whole stack of hardware and software will have to change, all the way from CPUs and motherboards to operating systems and software architecture. This project analyses the possibility of using a software library from Intel, called Persistent Collections for Java (PCJ), instead of regular SQL databases. We find that this is indeed an alternative option, even with some benefits, but at the current state it requires some careful coding and use of Java locking mechanisms.
This report documents the process of defining a set of availability quality attribute scenarios (QAS) and finding relevant availability-related tactics and patterns that can be employed in an architectural prototype of a system responsible for collecting data from a wind turbine controller. The goal of this process is to increase the potential availability of the system. The tactics and patterns identified are then documented in the current context, describing how they will be implemented and what the desired outcome is. Ways of ensuring a high degree of correctness for the data collected from the turbine controller are also covered, in order to avoid data quality issues.
This report examines the splitting of a monolithic system into a modular microservice architecture. The report attempts the process of transforming a part of the codebase into a new microservice-based architecture, performed on an existing system used for the modelling, design, and calculation of products for an industrial company. The architectural change is done through an architectural prototype, and afterwards the process and the prototype are evaluated to observe which obstacles, faults, or mistakes might occur when migrating a system from a monolithic to a microservice architecture.
This report explores and examines the orchestration of software deployment on two well-established platforms: Docker Swarm and Kubernetes. The report describes what must be done to automate the deployment of software and maintain the deployment in the modern software development world. At the same time, we discuss the pros and cons of the two platforms with a focus on availability and usability.
This project uses performance testing to test whether the proprietary generic storage component DataVault can meet the performance requirement for a new website. Firstly, a performance use case is specified as a Quality Attribute Scenario that can be used for testing. Secondly, the performance requirement is tested as a load test to see whether DataVault meets the requirement under the given workload. The test showed that DataVault did not meet the requirement for bigger response sizes. Thirdly, DataVault was tested in a single-thread test to see how it performed with a rising workload size, and it was found that the majority of the time was spent retrieving data from the database.
This report is about performance optimization of the VanDa data migration project, which I have been working on at the Bioscience institute, Aarhus University. The purpose of the project is to migrate data from an SQL database to an Azure Cosmos document database in the cloud. We employ the performance tactics "Reduce Overhead", "Increasing Resource Efficiency", and "Bound Queue Sizes" [Bass et al. 2013] to optimize the performance of the application, and use performance tests to verify that we have met our performance objectives.
The purpose of this paper is to investigate and detail the decisions, constraints, and reasoning, as they pertain to architecture and raw source code, behind the transition from "monolith" to microservice on the TopDanmark project in Sydbank. The project was initiated in October 2018 with a deadline in December 2018. This was later pushed back because the requirements were not ready in December 2018, and pushed back further due to a lack of business resources for testing. As this is not a proof of concept but rather a leap of faith, some claims made may be backed by observations from running in production.
While the widespread microservice architectural style is broadly discussed in the scientific literature, it is hard to find clear guidelines for the process of refactoring existing applications. The importance of the topic is underlined by the high costs and investments of a refactoring process, in which numerous new applications are introduced along with accompanying deployment processes (DevOps) and team structures. Software architects facing this challenge need to choose an appropriate strategy and refactoring technique. One of the most discussed aspects in this context is finding the right service granularity to fully reap the benefits of microservice architectures. This literature study first discusses the topic of architectural refactoring and then compares 11 existing refactoring approaches described in the scientific literature. The approaches are classified based on their underlying decomposition techniques and presented in the form of a visual decision tree. The literature study results in an overview of a number of strategies for decomposing monolithic applications into independent services. Apart from the Service Cutter method, most approaches are only applicable under specific conditions. A further concern is the considerable amount of preparation/input data that some of the methods require, as well as the limited maturity of the underlying tool support.
The study looks at how serverless technologies can be used to implement the storage and aggregation of the Danish DataHub. Is it possible to create a high-performance application in just 75 man-hours with the help of off-the-shelf technologies, and without running a single server? I built and tested a whole serverless solution within 75 man-hours; sadly, the key figures for returning the aggregated results are too slow.
The purpose of this paper is to gain insight into how we can test visual output in our web applications. To examine this problem we will create a model that can be used to evaluate and compare visual regression testing tools. The result of the comparison will be a matrix with the output of our tests for each visual regression testing tool.
This report contains an analysis aimed at finding Nygard's antipatterns in a distributed system. It contains concrete proposals for implementing solutions based on three different sources: Nygard's stability patterns, Uwe Friederichsen's resilience patterns, and Bass's tactics. A number of antipatterns are identified and corresponding solutions proposed. The conclusion is that the reliability of the system can be improved by implementing all, or just a subset, of these.
This report looks into how to increase testability by decreasing the time spent on manual system testing: we investigate how to ease the deployment of our system and how to make it independent of hardware. We investigate the concept of single-command-line deployment and apply it with Docker, by deploying our application and its dependencies. We investigate how to record real data communication and use it to create an offline data feed that can be fed into the system, making the test hardware-independent. This report requires a minimum knowledge of Docker and .NET technologies.
We present a literature study of the software reliability properties provided by using the concepts Event Sourcing and Command Query Responsibility Segregation. The document provides an overview of the two concepts and explains their purpose when used in a microservice architecture. Related concepts and patterns, including Event-Driven Architecture, Events, Separation of Read and Write Models and Databases, and Eventual Consistency, are discussed to position the applicability of and foundation for Event Sourcing and Command Query Responsibility Segregation. Based on the established foundation, the impacts on reliability are analysed and discussed with a focus on system availability and testability in a reference system defined in the document. The study identifies Event Sourcing and Command Query Responsibility Segregation as having an overall positive impact on both system availability and testability.
This project shows how automation, continuous integration, and deployment can be used to achieve higher availability as well as better reliability and maintainability, by introducing changes in smaller increments and by enabling a more seamless handover from development to operations. In addition, by using known patterns and techniques, we ensure that our own products are delivered with high velocity and remain reliable even if supporting dependencies and integration points are not.
This report takes a peek at the Forex Backoffice (FXBO) system and tries to locate stability patterns. This is done by looking at integration points. The report documents the stability patterns, describes how they have been implemented, and then describes the stability antipatterns that they deter. The stability patterns and antipatterns are from the book "Release it!" by Michael Nygard [Nygard 2018].
This project is about Continuous Delivery Maturity Models (CDMM) and how to create value for a project using one of these models. We examine a maturity model, assess the maturity of a project, and then use the model to provide a target for improving maturity.
This report analyses the reliability and availability of the e-commerce application eShopOnContainers. The application is analysed for Nygard stability antipatterns, and it is examined whether reliability and availability can be increased by using stability patterns. Selected stability patterns are evaluated by subjecting the application to run-time fault injection.
In my experience with various smaller ERP solutions, their Web APIs often have problems achieving high availability and reliability. This report tests and analyses a number of techniques for improving an existing Web API for a smaller ERP solution. The focus is on introducing redundancy and replication techniques, and on how they solve the challenges of high availability and reliability in a distributed environment, based on [Bass] and [Nygard]. The report describes how the concrete techniques have improved the quality of the Web API. Finally, the results of the various experiments are compared and evaluated.
A theoretical study of the impact of Reactive Programming on reliability is presented. It includes: the motivation behind the Reactive Manifesto, and how it is directly related to reliability; the reason why asynchronous messaging, failures as messages, and non-blocking back pressure are required; an introduction to the Reactive Streams API and how it relates to official Java APIs; an analysis of the availability and testability architecture tactics supported by Reactive Programming; and an analysis of deferred pull-push asynchronous event handling from the perspective of reliability. An empirical study of the effects of Reactive Programming on reliability is also presented, using an existing Java application as a case study. It is shown that introducing Reactive Programming to an application does not necessarily lead to a decrease in the failure rate of the system.
This paper includes an analysis of stability antipatterns for a software module, the Bambora paygate, which is used to execute transactions with a payment terminal running Bambora software. The analysis is meant to be used as a stepping stone for improving the reliability of this paygate. It also demonstrates how a stability pattern can be used to mitigate some of these antipatterns. This is an exam paper for Reliable Software and Architecture at Aarhus University, and it represents my experience with stability patterns and antipatterns in a project.
The Window Driver pattern is an approach to automating the testing of user interfaces. It strives to make tests less vulnerable to changes in the structure of the user interface. This is achieved by not testing directly on the widgets/controls, but instead through a thin layer of abstraction. This project investigates how to implement the Window Driver pattern in a C# application using Windows Presentation Foundation (WPF) and the Model-View-ViewModel (MVVM) pattern. It is also evaluated how well the pattern works when the goal is to replace manual testing with automation. It is found that the Window Driver pattern, when applied to an application with WPF and MVVM, requires a number of technical issues to be overcome, but the result works well and is a good choice for automated testing.
This report evaluates to what degree reliability benefits from the added availability that comes from using Docker technology on a Microsoft .NET application. To make the evaluation more realistic, a Proof of Concept (PoC) mimicking a real mission-critical integration component is used for testing and evaluating the Docker container technology. The conclusion is that if a .NET application is ported to .NET Core and inserted into a Docker container ("Dockerized"), and replication techniques are used, the reliability of the application can be greatly enhanced; the downside is that the Docker technology in combination with .NET is not fully matured. This report is intended for an audience with a bachelor's degree of science or similar, with no prior knowledge of Docker containers.
Message Queuing Telemetry Transport (MQTT) is a rather new machine-to-machine (M2M) protocol especially targeting the Internet of Things (IoT). At Lodam electronics a/s, a new distributed system is to be developed using the MQTT protocol [1]. Today, many initiatives exist aiming to create the best MQTT broker application. The goal of this project was to find three broker candidates for a series of performance tests. This paper presents the selection process for the three broker candidates and the performance tests of those, and finishes with the test results and a general discussion.
Using systematic methods, we examine how Elasticsearch, Logstash, and Kibana (ELK) support real-time viewing of enormous amounts of log data. We demonstrate that Logstash can be configured to pull data from an MSSQL database into Elasticsearch in real time and make it searchable. Statistics reports in Kibana are then validated against an original report from Biltorvet.
We compare sequential reading of denormalized data from MongoDB and MySQL databases. The hypothesis is that sequential reads in databases perform several orders of magnitude better than random reads, and that MongoDB performs better at sequential reads than MySQL. This is examined through a performance test on an architectural prototype. We find that sequential reading is 1-2 orders of magnitude faster, but that MongoDB and MySQL behave identically when reading denormalized data.
In this report we compare three different database types in order to grade their usability and performance relative to each other. We work with two NoSQL databases, Neo4j and Cassandra, and a traditional relational database, MySQL. They are all among the most widely used and therefore rank highly on the DB-Engines ranking. The data foundation we used is identical for all systems, as are the requirements for data queries. The obstacles and the total work each system implementation requires are documented and included in the overall conclusion.
This project uses performance testing on architectural prototypes to show how different compositions of performance tactics influence the performance of an embedded Internet of Things (IoT) receiver. The document describes how performance testing of embedded software is carried out in a simulated environment in Java. Based on measurements on the real target, the tasks of the embedded environment are moved to a purpose-built simulation framework for better profiling and simulation possibilities. Workload conditions for the simulations are derived from an analysis of a comprehensive data stream obtained from a real-world IoT receiver installed in the field.
During this project, we have investigated the performance of big data graph databases using the RDF data model, containing at least 10^7 triples. We have measured performance in terms of response time latency while performing simple and complex queries. We have evaluated the system for production fitness and asserted some of the performance claims stated by the vendors.
When working with minor software projects with limited needs for data storage and consumption, it can seem like a large overhead to commission and run a distributed database setup to handle this storage. As an alternative to the distributed setup, a variety of self-contained embedded databases are available. The self-contained databases run in process with the application and enable the programs to run without connections to remote database servers. This eases deployment of the software projects by minimizing the initial configuration needed and reduces system complexity. This report addresses the evaluation of the performance of embedded databases by screening the available options in a desktop comparison, as well as an in-use evaluation of the advantages and disadvantages of different self-contained/embedded databases within the fields of performance, memory footprint, etc.
StoreFront is a web application for the employees in JYSK stores, used for handling click-and-collect orders. JYSK plans to introduce mobile devices in the shops, which will mean a doubling of open sessions for the application. In this project a performance test is carried out to verify whether the changed scenario can comply with the set requirements for response time and RAM utilization. A test environment similar to production, isolated from backend applications, is established. Tests are performed by simulating user requests and measuring responses in a distributed configuration where the test tools are placed on other servers. It is concluded that the application will be able to comply with the requirements for RAM utilization, but not necessarily with the response time requirement.
We propose the hypothesis that a suggested microservice architecture (MSA) can satisfy a stated quality attribute scenario (QAS). We verify the hypothesis using an architectural prototype and performance engineering, and based on measurements we show that the new architecture can meet the response measure in the QAS.
This report covers the implementation of an ELK stack for use in an automated analysis of performance on an SOA platform. The goal is to achieve a continuous analysis of response times without affecting the existing platform. Emphasis is placed on finding deviations outside the 99th percentile, and on the data being available with a tolerance of less than 10 seconds.
A high availability architecture is constructed from four key design principles and presented as a set of quality attribute scenarios and viewpoints.
Crisplant builds sorting systems used in, among other places, airports, where there are high demands on uptime. The software products controlling these systems are very stable. This is partly due to the high demands, but decades of experience in building software for sorting systems have also produced a stable software architecture. Michael Nygard has written the book "Release It", in which, based on his experience from the web domain, he introduces the concepts of Stability Antipatterns and Stability Patterns. In this project we have examined whether Nygard's experience can be used to identify stability problems in Crisplant's software products. Our analysis shows that, using Nygard's Stability Antipatterns as a starting point, we can identify a number of stability problems. Four of these problems we have demonstrated through tests. We then explain how, among other things, Nygard's Stability Patterns can remedy these problems. Besides concluding that Nygard's experience is applicable in Crisplant's domain, we have also learned that a list of concrete antipatterns is a good tool for identifying problems. Such a list would likely be useful in architecture reviews, so that issues relating to Stability Antipatterns are considered as early as the design phase.
This paper presents a stability antipattern analysis of a concrete system, namely Bobkapp. The analysis is used as a starting point for improving the stability and reliability of the system. To show how the risks of the antipatterns can be mitigated, several solutions are presented, e.g. the implementation of a Circuit Breaker. Bobkapp is a system under development, although the modules we are analysing in this paper are currently in User Acceptance Testing (UAT). This report is an exam paper for Reliable Software and Reliable Software Architecture that illustrates our new experiences with stability antipatterns and patterns in a practical work context.
This report starts by describing an existing system and sets a goal to increase the availability of one specific system service. A fault tree analysis is performed to identify possible faults that could lead to failure in that service. Different availability tactics are evaluated and a solution is proposed to increase availability.
An evaluation is performed of the acceptance test frameworks FitNesse, SpecFlow, and Selenium in the context of a wind farm application. The evaluation is based partly on signal testing (time-series testability), test case simplicity/readability, test reporting, framework features, user profile, and test suite handling. The architecture of an acceptance test suite is also addressed, since it is an important aspect of achieving a scalable and maintainable test suite. The evaluation results in a recommendation of the framework we believe is best suited for achieving a full system test in the wind farm domain.
We compare the time spent on integration tests when Docker is used for the external systems against setting up the systems manually. We demonstrate that Docker can reduce the time spent on configuration and setup before an integration test is run.
A literature study of the concept of Lambda Architecture, focusing on reliability and availability compared to solutions built with a traditional RDBMS/CRUD approach.
In this project we find and evaluate tools for automatically starting and initializing external systems for use in distributed integration testing, so that these tests can be run frequently and without any manual intervention.
Cloud computing tends to be held up as a guarantee of high availability and reliability. This report tests that tendency by analysing cloud-based persistence services from Google and Amazon. The focus is on introducing the individual services and how they solve the challenges of availability in a distributed environment, based on the CAP theorem as described by [Eric Brewer]. The report describes how the concrete services are used and to what degree the complexity associated with distributed technology is hidden from the developer. Finally, we compare the two providers' availability per service.
In this report we examine the production system for electricity meters at a company, analyse to what degree the system meets a number of reliability parameters, and propose concrete improvements that would increase the reliability of the system. Our analysis found a number of opportunities for improvement in the system with respect to availability and reliability. The company is an electronics company selling electricity, water, heat, and gas meters to many different customers all over the world.
This report outlines the motivation and hypothesis for a project in reliable software architecture. The hypothesis is that availability can be achieved for an existing web-based application by means of redundancy and by replicating state between redundant components. The report presents two solutions for achieving this using redundancy and load balancing.
This report deals with Michael Nygard's stability antipattern Unbalanced Capacities and his stability pattern Handshaking. A simple client/server system is implemented in Java and used to demonstrate the effects of Unbalanced Capacities; in addition, blocking flow control and rate-limited flow control are implemented as examples of Nygard's Handshaking stability pattern.
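A minimal sketch of the Handshaking idea (my own illustration, not the report's implementation; all names are invented): the server advertises its remaining capacity, and the client probes before sending work, backing off when the server is saturated:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch of Nygard-style Handshaking: the client asks before sending work
// instead of piling requests onto an undersized server. All names invented.
public class HandshakingSketch {

    static class Server {
        private final Semaphore capacity = new Semaphore(2);   // two concurrent requests max
        private final ExecutorService workers = Executors.newFixedThreadPool(2);

        boolean canAccept() {                                  // the handshake probe
            return capacity.availablePermits() > 0;
        }

        void handle(String request) {
            if (!capacity.tryAcquire()) {
                throw new IllegalStateException("server saturated: " + request);
            }
            workers.submit(() -> {
                try {
                    Thread.sleep(100);                         // simulate slow work
                    System.out.println("handled " + request);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    capacity.release();
                }
            });
        }

        void shutdown() throws InterruptedException {
            workers.shutdown();
            workers.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server();
        for (int i = 0; i < 10; i++) {
            if (server.canAccept()) {                          // client side of the handshake
                server.handle("req-" + i);
            } else {
                System.out.println("backing off before req-" + i);
                Thread.sleep(60);                              // blocking flow control
            }
        }
        server.shutdown();
    }
}
```

Without the canAccept probe, the fast client would overwhelm the two-worker server, which is the Unbalanced Capacities failure mode in miniature.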
We compare the response times of MongoDB and MySQL by running a series of performance tests using a distributed MongoDB setup. We demonstrate that MongoDB can achieve lower response times than MySQL by using sharding.
In this report we introduce four different architectural patterns that can be used for messaging. We evaluate the patterns with respect to the performance they can deliver and their ability to hide the complexity of concurrency. Both aspects are examined through prototyping as well as performance engineering. We identify which hardware-related factors affect performance, and through performance tests we determine which pattern interacts best with exactly those factors.
This report presents a set of component analyses for a well-established platform, with the aim of extracting key properties affecting the cost of compile-time configuration (defined as the quality attribute Configurability). Results are gathered in a lightweight, questioning-technique-based assessment checklist, useful both for software architects evaluating the platform architecture and for platform developers implementing changes impacting platform Configurability. Furthermore, results are generalized and adapted to the quality framework of [Bass et al., 2003], providing concrete tactics for achieving Configurability Quality Attribute Scenarios (QASs).
This document presents research in Enterprise Cloud Storage and Data Migration. The hypothesis is that it is easy to migrate data between cloud platforms, including changing the API for persistence. For the research we created a prototype application that uses the MovieLens data set. We created the prototype application for Google App Engine, after which we migrated the application and data to Oracle Cloud and Microsoft Azure. We measured the time used to adapt the prototype to platform-specific storage options and to implement the data migrations. As it turns out, it might not be easy at all. We present options for persisting and migrating data, and we ask some of the questions you should seek to answer before you start your migration between clouds.
In this paper we investigate whether there exists an off-the-shelf (OTS) system that helps achieve active redundancy, as we define it, in a messaging system. We describe the issues such a messaging system has to overcome, and we measure the potential systems against a quality attribute scenario we define for active redundancy. Finally, we conclude whether any of the investigated messaging systems provide active redundancy.
This paper describes the work of developing and evaluating an architecture for a service deployed on Google App Engine that can be used to facilitate generic game initialization, turns, and completion for games running on Android devices. Google App Engine and its relevant components, such as persistence and load balancing, are evaluated for the purpose. I also examine how to push turns to Android devices. Relevant quality attributes for turn-based Android games are discussed and evaluated for Google App Engine and the architecture.
This report is the result of the final project in the course "Software Architecture in Practice". It covers a case study of adapting and evaluating the Symphony process for architectural reconstruction. Furthermore, a few tools and techniques for supporting the process are demonstrated and evaluated. The case used is an analysis of modifiability qualities for a Product Lifecycle Management system.
(No abstract)
We examine BEC's informal architecture design method through a series of interviews with architects at BEC. Based on these interviews, BEC's existing architecture design method is mapped out. The existing method is then analyzed, and a new and improved method is proposed.
Software architecture is a subject of interest for many in the software community, and many techniques and procedures have been developed to support this area. Little research, however, has examined whether these techniques and procedures have any effect on major decisions. We explore one small aspect by testing the thesis "There are common indicators, in relation to the elements of the Architecture Business Cycle, that influence the choice of either letting a system retire or reconstruct the architecture for further development of the system".
This report is Group Delta's project deliverable for the master's programme in software construction, in the course Advanced Topics in Software Architecture in Practice (ASAiP). In the project we test the suitability of selected quality-attribute-based methods for designing and describing a rowing protocol for a rowing club. Through a Quality Attribute Workshop the driving quality attribute scenarios are identified, and an architecture candidate is developed and described based on these quality attributes. The architecture development is based on patterns and tactics, and the description is based on viewpoints. Finally, we conclude on the applicability of the selected methods to the concrete case.
A system's software architecture is typically defined and described independently of the system's implementation. To prevent an architecture description from eroding, it must periodically be synchronized with the implementation. To the extent this is done at all, it typically happens today through (manual) reviews. In this report we propose a method for tying the architecture description and the implementation together, and we present a tool that validates whether the implementation conforms to central parts of the architecture. Our method is inspired by ArchJava but targets .NET and uses newer techniques to achieve expressiveness similar to ArchJava's without requiring changes to the language. The result is a lightweight approach to expressing a component-and-connector architecture and validating architectural conformance to the described architecture. We have chosen to call our method and validation tool ArchNet.
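ArchNet itself targets .NET and its details are not reproduced here; as a rough, language-swapped illustration of the general idea of static conformance checking, the Python sketch below compares each component's actual imports against an allowed component-and-connector rule set. All component names and rules are hypothetical, and this is not ArchNet.

```python
# Sketch of static architectural conformance checking: flag imports
# that violate an allowed component dependency rule set.
import ast
from pathlib import Path

# Allowed connectors: component -> components it may depend on (hypothetical).
ALLOWED = {
    "ui": {"services"},
    "services": {"persistence"},
    "persistence": set(),
}

def imports_of(path: Path) -> set[str]:
    """Top-level packages imported by a Python source file."""
    tree = ast.parse(path.read_text())
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found |= {alias.name.split(".")[0] for alias in node.names}
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

def check(root: Path) -> list[str]:
    """Report imports that break the allowed dependency rules."""
    violations = []
    for component, allowed in ALLOWED.items():
        for src in (root / component).rglob("*.py"):
            bad = imports_of(src) & (ALLOWED.keys() - allowed - {component})
            violations += [f"{src}: illegal dependency on {b}" for b in sorted(bad)]
    return violations

if __name__ == "__main__":
    for v in check(Path("src")):
        print(v)
```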
This document contains a software architecture research project in which we compare three architecture prototypes that search large volumes of XML files. One prototype emulates an existing system in which the XML data resides in a text field in a relational database. The other two prototypes use full-text indexing in the MySQL database and an Apache Solr index server, respectively.
In this report we evaluate the use of architectural prototyping on a concrete case based on a graph module. To test different aspects of architectural prototyping, we evaluate two architectural variations of the case: one variant uses Google Image Charts to generate graphs, while the other uses our own implementation. Based on the results of architecturally prototyping the case, held against the relevant literature, we assess the maturity of architectural prototyping.
MultiArchive is a document management system that, in addition to documents, also handles a large amount of other data about each individual document. The solution provides workflow functionality and search indexing, and it can be integrated with many different ERP systems. A strategic business decision has been made to improve the performance and availability of MultiArchive in order to meet the requirements its customers are expected to pose within the next few years. In this report we show that we can identify significant weaknesses in the current software architecture that have a strong negative impact on current performance. Based on these weaknesses, we design a new software architecture in which both performance and availability are improved. Finally, we show that the new software architecture can clearly improve both performance and availability.
This report contains our analysis of whether it is possible to build a product line architecture by restructuring a number of existing Sitecore CMS solutions in a project-oriented company.
A proposal for an architecture for a Massively Multiplayer Online (MMO) game, with parallel processing of rules against a distributed database.
This paper covers architectural disciplines and methods treated in the "Reliable Software Architecture" fagpakke. We try out methods for describing availability and reliability requirements and failure modes in a concrete system, along with tactics for achieving an architecture that meets the described requirements.
The applicability of fault tree analysis as a risk decomposition technique for business-critical software systems is evaluated through application to a concrete software system.
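To illustrate the arithmetic that fault tree analysis rests on, here is a minimal sketch assuming independent basic events and the standard AND/OR gate formulas; the tree and probabilities are hypothetical, not taken from the evaluated system.

```python
# Minimal fault tree evaluation sketch. Assumes independent basic
# events; P(AND) = product of inputs, P(OR) = 1 - product of (1 - input).
from math import prod

def and_gate(*probs):
    """Probability that all input events occur."""
    return prod(probs)

def or_gate(*probs):
    """Probability that at least one input event occurs."""
    return 1 - prod(1 - p for p in probs)

# Hypothetical tree: the system fails if the database is lost
# (primary AND backup fail) OR the network link fails.
p_primary_db = 0.01
p_backup_db = 0.05
p_network = 0.002

p_top = or_gate(and_gate(p_primary_db, p_backup_db), p_network)
print(f"Top event probability: {p_top:.6f}")  # ~0.002499
```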
We intend to show that using automatic test generation tools makes it possible to achieve the same test case quality in less time, compared to a traditional approach. In this paper we compare two very different ways of generating test cases: equivalence class partitioning combined with boundary value analysis versus PEX, an automatic white-box test generation tool from Microsoft Research. Lastly, we try to give a best-practice recommendation.
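As a minimal sketch of the traditional approach being compared against, the test class below applies equivalence class partitioning with boundary value analysis to a hypothetical `classify_age` function; none of this code is from the paper.

```python
# Equivalence class partitioning + boundary value analysis sketch.
# `classify_age` is a hypothetical function under test.
import unittest

def classify_age(age: int) -> str:
    """Valid ages are 0..120; under 18 is a minor."""
    if age < 0 or age > 120:
        raise ValueError("age out of range")
    return "minor" if age < 18 else "adult"

class BoundaryValueTests(unittest.TestCase):
    # One representative per equivalence class, plus each boundary.
    def test_invalid_below(self):
        with self.assertRaises(ValueError):
            classify_age(-1)          # just below the valid range

    def test_valid_boundaries(self):
        self.assertEqual(classify_age(0), "minor")    # lower boundary
        self.assertEqual(classify_age(17), "minor")   # just below partition edge
        self.assertEqual(classify_age(18), "adult")   # partition edge
        self.assertEqual(classify_age(120), "adult")  # upper boundary

    def test_invalid_above(self):
        with self.assertRaises(ValueError):
            classify_age(121)         # just above the valid range

if __name__ == "__main__":
    unittest.main()
```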
A review of existing technologies that support parts of autonomic computing, together with concrete experiments with Rio.
We compare different mock frameworks with the aim of selecting one suitable candidate for decoupling a database from production code during unit testing. We then conduct experiments decoupling a database from production code. We succeeded in decoupling the production code from the database during unit testing. In addition, we discovered that a mock framework is a highly useful and versatile tool that can solve many other problems within unit testing, refactoring of source code, isolation under test, and so on.
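A minimal sketch of the decoupling idea, using Python's built-in unittest.mock rather than any of the frameworks the report compares; the `UserService` and repository names are hypothetical.

```python
# Decoupling a database dependency with a mock during unit testing.
import unittest
from unittest.mock import Mock

class UserService:
    """Production code that normally talks to a database-backed repository."""
    def __init__(self, repository):
        self.repository = repository

    def display_name(self, user_id):
        user = self.repository.find_by_id(user_id)
        return f"{user['first']} {user['last']}"

class UserServiceTest(unittest.TestCase):
    def test_display_name_without_database(self):
        # The mock stands in for the real repository; no database needed.
        repo = Mock()
        repo.find_by_id.return_value = {"first": "Ada", "last": "Lovelace"}

        service = UserService(repo)
        self.assertEqual(service.display_name(42), "Ada Lovelace")
        repo.find_by_id.assert_called_once_with(42)

if __name__ == "__main__":
    unittest.main()
```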
Is cloud computing a reliable paradigm? I describe aspects of reliability and cloud computing as they are treated in four articles from four different research groups. By way of introduction, I also outline some main characteristics of cloud computing as it is viewed in today's literature and industry.
In the following report we investigate the possibility of introducing an automated environment with continuous integration, code quality inspection, automated tests, and preparation for deployment. The work is part of the course Reliable Software and Architecture at ITEV. Our focus has been on one company with a large IT department supporting its production and sales. Through interviews we have learned how the company currently tests the software base it develops and maintains; much of that testing is done manually. For inspiration, we have also interviewed developers at two other companies that use automated environments quite extensively. We believe it is possible to introduce an automated environment that builds the code on check-in, checks code quality, runs tests, and prepares the code for deployment, but it requires commitment from management, enthusiasm from the employees, and some initial investment.
A literature study of cloud computing from an availability point of view.
Delta debugging is a systematic, automated debugging method. "Delta debugging in practice" examines, first, whether a delta debugging ddChange plugin for Eclipse is usable and, second, whether delta debugging with ddmin, applied to the loading of a file, is easy to use and implement in practice. The version of the plugin we tried appears rather unfinished and lacks functionality before it can be used in practice. Implementing and applying delta debugging with ddmin is possible, but the complexity can quickly become high. This is the case in our example, where the file being loaded has internal dependencies, which clearly limits the practical applicability of delta debugging.
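For reference, a compact sketch of the ddmin idea (following Zeller's algorithm in spirit, not the report's implementation): repeatedly split the failing input into chunks and keep any chunk, or complement of a chunk, that still triggers the failure. The failure predicate here is hypothetical.

```python
# ddmin sketch: shrink a failing input to a smaller input that still
# fails. `test` returns True when the candidate input still fails.

def ddmin(data, test):
    n = 2  # number of chunks to split into
    while len(data) >= 2:
        chunk = len(data) // n
        subsets = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        reduced = False
        for i, subset in enumerate(subsets):
            complement = [x for j, s in enumerate(subsets) if j != i for x in s]
            if test(subset):            # this chunk alone still fails
                data, n, reduced = subset, 2, True
                break
            if test(complement):        # removing this chunk keeps the failure
                data, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(data):
                break                   # already at single-element granularity
            n = min(n * 2, len(data))   # try a finer-grained split
    return data

# Hypothetical failure: the input fails whenever it contains both 3 and 7.
failing = lambda xs: 3 in xs and 7 in xs
print(ddmin(list(range(10)), failing))  # -> [3, 7]
```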
(No abstract)
This document contains an architecture research and development project. The main focus of the project is the task of converting existing software into a software product line using architectural reconstruction, architectural prototypes, and architectural redesign, along with several product line disciplines such as feature modeling and identifying product variations.
The Symphony theory provides a framework for architectural reconstruction. The theory is founded on cases from the authors' personal experience, but it has not been validated against a practical case. This report performs such a validation of Symphony.
Service-oriented applications deployed in large and medium-sized companies today often consist of multiple components distributed across several hardware nodes. To make these kinds of applications manageable in a corporate datacenter environment, it is necessary to build in monitoring and management support. The aim of this paper is to analyze the architecturally significant requirements for monitoring and management services for a generic service-oriented application with N components deployed on M hardware nodes. The analysis is then distilled into an architectural description following the [IEEE1471] conceptual model. Finally, the knowledge accumulated during this work is captured in two checklists: one for evaluating any monitoring and management system, and one for evaluating to what degree the architectural qualities transparency and manageability are present in an architecture. The latter checklist can also serve as inspiration when formulating quality attribute scenarios for these qualities.
(No abstract).
This document presents our work on reconstructing the architecture of DLBR Dyreregistrering, as well as the development of a new service-oriented reference architecture, based on an architecture prototype.
(No abstract).