Siemens AS (TR)
Mesut Durukal is QA & Test Automation Manager at Siemens. He holds BSc and MSc degrees in Electrical & Electronic Engineering from Boğaziçi University. He has seven years of experience in the defence industry, working on multi-location projects as the Manager of Verification & Validation activities. Since then he has been working on Agile software testing projects for more than two years, acting as Product Owner & E2E Test Automation Leader for the QA team.
His expertise includes:
- Project Management
- Agile Methodologies: Scrum Framework
- Process Improvement, Requirement Analysis
- Stakeholder & Risk Management
- Verification & Validation Management
- Planning, Managing, Coordination, Scheduling, Monitoring and Reporting
- Audits and Reviews, Nonconformance Handling, Root Cause Analysis
- Proficiency in the Verification & Validation section of the CMMI program
- Software Testing
- Cloud Testing (SAP, AWS)
- Test automation, SW testing frameworks: TestNG, JUnit, Mockito, NUnit, Selenium, Cucumber, JMeter
- API testing frameworks (SOAP & RESTful web services testing): Swagger, SoapUI, Postman
- Non-Functional Tests: Performance, Electrical, Environmental (Thermal Vacuum, Vibration, Shock, EMI/EMC/ESD), Security, Safety, Reliability
ABOUT THE PRESENTATION
How to Ensure Testing Robustness in Microservice Architectures and Cope with Test Smells
In projects in which multiple units or modules are integrated, each unit or subsystem is tested individually; still, the integrated product must be verified, which highlights the prominence of E2E testing. After integration, bugs are very likely to surface, and after continuous deployments, retesting is needed to ensure product quality. To reduce manual testing effort and to trigger tests automatically, test automation is essential. Testing asynchronous web services is considerably more difficult when their working principles are taken into consideration. In such cases, the robustness of the tests is of great importance; otherwise, sporadic results may lead to conflicts and misleading conclusions. To handle sporadic results, all test code should be analyzed to check whether any test smell exists. Various actions taken to overcome the challenges faced in test automation are defined in this paper, and some inventive code-level solutions are produced to stand against automation difficulties.
The fundamental points of the issue, which will be described in detail, are:
– Against instabilities, scheduled jobs are created over pipelines to execute each case multiple times and catch sporadic issues. After each execution, automated reporting stores the results; at the end, filtered queries reveal the detected instabilities.
– To achieve robustness, adaptive retry algorithms are applied in the test code using various Java libraries.
– To remove duplication, special classes named “helper classes” are created, whose methods are called from multiple test cases.
– To assure scope coverage, exploratory tests are performed and, as a result, additional test cases are added to the test plans.
– For code quality, test design reviews and multiple test code reviews are applied, and test code is refactored when needed.
– To speed up slow-running UI cases, headless execution modes are utilized via HtmlUnit and Selenium.
– For the sake of compatibility, cross-browser testing is enabled with Selenium Grid.
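The adaptive retry idea above could be sketched as follows. This is a minimal illustration in plain Java under stated assumptions: the talk mentions "various Java libraries" without naming them, so the class and method names here (`AdaptiveRetry`, `withRetry`) are hypothetical, and the backoff policy (doubling the wait between attempts) is one common choice for tests against asynchronous services.

```java
import java.util.concurrent.Callable;

/**
 * Hypothetical sketch of an adaptive retry wrapper for flaky test steps.
 * Each failed attempt doubles the wait, giving asynchronous back-ends
 * progressively more time to reach the expected state.
 */
public class AdaptiveRetry {

    /** Runs the action, retrying up to maxAttempts with exponential backoff. */
    public static <T> T withRetry(Callable<T> action, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // adaptive part: back off further each time
                }
            }
        }
        throw last; // all attempts exhausted: surface the final failure
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky check: fails twice, then succeeds on the third call.
        final int[] calls = {0};
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new IllegalStateException("not ready yet");
            return "ready";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

A wrapper like this keeps the retry policy out of individual assertions, so a sporadic timing failure is distinguished from a genuine bug by whether it survives all attempts.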
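The "helper class" point could look like the sketch below: shared polling logic that many E2E cases would otherwise duplicate lives in one place. The names (`WaitHelper`, `waitUntil`) are illustrative, not taken from the talk.

```java
import java.util.function.BooleanSupplier;

/**
 * Illustrative "helper class": wait-until-condition logic shared by
 * multiple test cases instead of being copy-pasted into each one.
 */
public class WaitHelper {

    /** Polls the condition until it holds or the timeout expires. */
    public static boolean waitUntil(BooleanSupplier condition, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // condition never became true in time
            }
            Thread.sleep(pollMs);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        // Two different test cases would call the same helper rather than
        // each maintaining its own polling loop.
        long start = System.currentTimeMillis();
        boolean ok = waitUntil(() -> System.currentTimeMillis() - start > 50, 1000, 10);
        System.out.println("condition met: " + ok);
    }
}
```

Centralizing such logic means a fix to the polling strategy (a test-smell hotspot) is made once, not in every test class.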
After applying these solutions, maintenance effort is considerably reduced. Bugs coming from the production environment show a decreasing tendency. The actions resulted in faster automation and enabled rapid adaptation. Root cause analysis and feedback to the development teams have also improved.