Siemens AS (TR)
Buse Ozarslan holds a B.S. from the Computer Science department of Bilkent University and an M.Sc. from the ETM department of Bogazici University. She worked as a software developer for 2.5 years at an insurance company. For the last two years she has been working at Siemens as a test engineer, scrum master, and quality manager.
ABOUT THE PRESENTATION
How to ensure Testing Robustness in Microservice Architectures and Cope with Test Smells
In projects where multiple units or modules are integrated, each unit or subsystem is tested individually, but the integrated product must still be verified, which underlines the prominence of end-to-end (E2E) testing. Bugs are likely to surface after integration, and after continuous deployments retesting is needed to ensure product quality. Test automation is therefore essential to reduce manual testing effort and to trigger tests automatically. Testing asynchronous web services is considerably more difficult when their working principles are taken into account, so the robustness of the tests is of great importance; otherwise, sporadic results may lead to conflicts and misleading conclusions. To handle sporadic results, all test code should be analyzed to check whether any test smells exist. In this work, possible test smell types and solutions against them are investigated, and the preventive measures and their results are shared.
This paper defines the various actions taken to overcome the challenges faced in test automation. Some inventive code-level solutions are produced to stand against the automation difficulties.
The fundamental points of the issue, which will be described in detail, are:
– Against instabilities, scheduled jobs are created over pipelines to execute each case multiple times and catch sporadic failures. After each execution, automated reporting stores the results; filtered queries over these reports then reveal the detected instabilities.
– To achieve robustness, adaptive retry algorithms are applied in the test code with the help of various Java libraries.
– To remove duplication, special classes named “helper classes” are created, whose methods are called from multiple test cases.
– To assure scope coverage, exploratory tests are performed, and the additional test cases they reveal are included in the test plans.
– For code quality, test design reviews and multiple test code reviews are applied, and the test code is refactored when needed.
– To speed up slow-running UI cases, headless execution modes are utilized via HtmlUnit and Selenium.
– For the sake of compatibility, cross-browser testing is enabled with Selenium Grid.
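The adaptive retry point above can be illustrated with a minimal hand-rolled sketch in plain Java (the abstract does not name the specific libraries used, so this is an illustrative assumption, not the paper's actual implementation): a flaky check is re-invoked with exponentially growing waits until it succeeds or the attempts run out.

```java
import java.util.function.Supplier;

// Minimal adaptive-retry sketch: re-invoke a flaky action with
// exponentially growing delays until it succeeds or attempts run out.
public class AdaptiveRetry {
    public static <T> T retry(Supplier<T> action, int maxAttempts, long initialDelayMs)
            throws InterruptedException {
        long delay = initialDelayMs;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // back off between attempts
                }
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Simulated asynchronous service: not ready on the first two calls.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("not ready");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

In a real test suite the same idea is usually provided by a library rather than hand-rolled, but the mechanism is the same: tolerate transient asynchrony instead of failing sporadically.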
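The helper-class point can likewise be sketched; the class and method names below are hypothetical, chosen only to show the shape of the pattern, not taken from the paper's code base.

```java
// Hypothetical helper class: a shared step extracted once and reused
// from multiple test cases instead of being copy-pasted into each.
class AuthHelper {
    static String buildAuthHeader(String user, String token) {
        return "Bearer " + user + ":" + token;
    }
}

public class HelperClassDemo {
    public static void main(String[] args) {
        // Two different "test cases" call the same helper, so a change
        // to the header format is made in exactly one place.
        System.out.println(AuthHelper.buildAuthHeader("alice", "t1"));
        System.out.println(AuthHelper.buildAuthHeader("bob", "t2"));
    }
}
```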
After applying these solutions, maintenance effort is considerably reduced. Bugs originating in the production environment show a decreasing tendency. The actions resulted in faster automation and allowed rapid adaptation. Root cause analysis and feedback to the development teams are also improved.