CONFERENCE PROGRAM

Please note:
This page displays only the English-language sessions of OOP 2021 Digital. You can find all conference sessions, including the German-language ones, here.

The times given in the conference program of OOP 2021 Digital correspond to Central European Time (CET).

By clicking on "VORTRAG MERKEN" ("save talk") within the lecture descriptions, you can put together your own schedule. You can view your schedule at any time using the icon in the upper right corner.

Track: Testing & Quality

  • Tuesday, 09.02.
  • Wednesday, 10.02.
14:00 - 14:45
Di 8.2
Back to the Data - Now That We (Machine) Learned From Test Results, What Else Did We Gain?

80% of machine learning is said to be data wrangling. Is all this wasted effort? Hardly - often the journey really is its own reward.

In this talk, we'll briefly describe a machine learning project that predicts the outcome of test cases in a large-scale software development cycle. We'll then show what we gained from collecting the necessary data and how these insights can have lasting impact on the day-to-day work of developers, testers and architects. This includes a quick answer to the well-known question: Whose defect is it anyway?

Target Audience: Developers, Testers, Architects
Prerequisites: Basic knowledge of software development and testing and an interest in data analytics
Level: Basic

Extended Abstract:
Data science folk wisdom holds that at least 80% of machine learning consists of data wrangling, i.e. finding, integrating, annotating, and cleaning the necessary data.

While sometimes viewed as less "glorious" than the deployment of powerful models, this journey has its own rewards as well.

Some of these benefits are fairly predictable, if still non-trivial, such as data cleaning exposing errors in the underlying databases.

In other cases, though, there is low-hanging fruit that a casual glance might miss, even though it is genuinely rewarding.

Data integration often reveals opportunities for statistical analyses that are relatively simple, but may still have high impact.

In this talk, we'll start at the destination: the result of a machine learning project where we successfully predicted test results from code changes.

A necessary task was the integration of several data sources from the full software development cycle - from code to testing to release in a large industry project with more than 500 developers.
Naturally, this required all typical steps in the data science cycle: building up domain knowledge and problem understanding, both semantic and technical data integration, data base architecture and administration, machine learning feature design, model training and evaluation, and communicating results to stakeholders.
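
To give a flavour of the modelling step, the following sketch shows the general shape of such a test-outcome predictor. The data file, feature names and choice of classifier are illustrative assumptions, not the actual setup used in the project.

```python
# Minimal sketch of predicting test outcomes from code-change features.
# File, feature and column names are assumptions for illustration only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical export of the integrated data: one row per (code change, test case) pair.
df = pd.read_csv("change_test_history.csv")
features = ["files_changed", "lines_added", "lines_deleted",
            "recent_failures_of_test", "changed_modules_covered_by_test"]
X, y = df[features], df["test_failed"]  # 1 = test failed after the change

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```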

These steps yielded several benefits, on which we will focus in our talk.

Among other things, these include data quality insights (e.g. about actual "back to the future" timestamps) and new analyses made possible by a unified view of the data (e.g. survival analysis of defects).
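
As an illustration of the second point, a survival analysis of defect lifetimes can be set up in a few lines once the defect data is available in one place. The column names and the use of the lifelines library are assumptions, not necessarily the speakers' tooling.

```python
# Sketch of a Kaplan-Meier survival analysis of defect lifetimes.
# Column names ("opened", "closed") are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter

defects = pd.read_csv("defects.csv", parse_dates=["opened", "closed"])
closed = defects["closed"].notna()  # defects still open are right-censored
duration_days = (defects["closed"].fillna(pd.Timestamp.now())
                 - defects["opened"]).dt.days

kmf = KaplanMeierFitter()
kmf.fit(durations=duration_days, event_observed=closed)
print("Median defect lifetime (days):", kmf.median_survival_time_)
```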

Last but not least, we demonstrate a simple answer to a well-known question, especially in larger software development contexts: Whose defect is it anyway - how can we avoid assigning defects to teams that have nothing to do with them?
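
One conceivable attribution scheme, sketched below purely for illustration and not necessarily the answer given in the talk, joins defect-fixing commits with file ownership and lets the touched files decide which team a defect belongs to.

```python
# Illustrative defect attribution: assign each defect to the team that owns
# the files touched by its fixing commits. File and column names are hypothetical.
import pandas as pd

fixes = pd.read_csv("defect_fix_commits.csv")   # columns: defect_id, commit, file
ownership = pd.read_csv("file_ownership.csv")   # columns: file, team

attributed = fixes.merge(ownership, on="file", how="left")
# Majority vote over the touched files decides the responsible team per defect.
team_per_defect = (attributed.groupby("defect_id")["team"]
                   .agg(lambda teams: teams.mode().iat[0]))
print(team_per_defect.value_counts())
```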

Thanks to the integrated information sources from systems concerned with version control, test result logging, and defect management, we are able to support the claims made in this talk with statistics taken from 6 years of real-world data.

Gregor Endler holds a doctorate in Computer Science for his thesis on completeness estimation of timestamped data. His work at codemanufaktur GmbH deals with Machine Learning and Data Analysis.
Marco Achtziger is a Test Architect working for Siemens Healthcare GmbH in Forchheim. He holds several qualifications from iSTQB and iSQI and is a Siemens AG certified Software Architect.
Gregor Endler, Marco Achtziger
Talk: Di 8.2
09:00 - 10:45
Mi 8.1
The C4 Testpyramid - An Architecture-Driven Test Strategy

The Test Pyramid is an efficient and effective approach for Software Testing but does not come with any details about concrete test methods or fixtures.

In my talk I will show you how to combine the principles of the Test Pyramid and the C4 Model for Software Architecture to elaborate a specific test strategy for your software product in a simple manner.

Target Audience: Architects, Developers, QA Engineers
Prerequisites: Basic knowledge in Software Architecture and QA Engineering
Level: Advanced

Extended Abstract:
Software tests have to specify the behaviour of your product as extensively and reliably as possible. At the same time, their implementation and maintenance costs should be kept to a minimum.

As Kent Beck put it: "I get paid for code that works, not for tests".

The Test Pyramid is a proven approach for this problem but leaves a lot of room for interpretation when elaborating a test strategy for your product.

Furthermore, there are many partly contradictory definitions of unit, service and integration tests, and variations like the “Test Diamond” and the “Test Trophy” make it even more confusing. So, how do we get from the Test Pyramid to a concrete test strategy?

In my talk I will show you how to combine the principles of the Test Pyramid and the C4 Model for Software Architecture to obtain a test concept tailored for your product. Examples taken from a recent project will demonstrate how this approach balances test coverage, maintainability and development costs.
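
As a rough idea of what such a mapping can look like in practice, the sketch below tags tests with the C4 level they exercise via pytest markers. The markers, fixtures and tests are hypothetical and only hint at the approach; the concrete mapping presented in the talk may differ.

```python
# Illustrative only: tagging tests with the C4 level they exercise.
# Markers would be registered in pytest.ini; the fixtures shown are hypothetical.
import pytest

@pytest.mark.code        # C4 "code" level: many small, fast unit tests
def test_price_is_rounded_to_cents():
    assert round(19.999, 2) == 20.0

@pytest.mark.component   # C4 "component" level: service logic with test doubles
def test_order_service_rejects_empty_cart(order_service_with_fake_repository):
    ...

@pytest.mark.container   # C4 "container" level: API of a deployable unit
def test_rest_api_returns_404_for_unknown_order(http_client):
    ...

@pytest.mark.system      # C4 "system context" level: few end-to-end user journeys
def test_customer_can_order_and_receives_confirmation(browser):
    ...
```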

Christian Fischer works as a Software Engineering Coach at DB Systel and loves TDD, Extreme Programming and Craft Beer.
Testing a Data Science Model

What would your first thought be when you are told there is no testing or quality structure in a team? How would you inspire a team to follow vital processes to thoroughly test a data science model?

I would like to share my knowledge about testing a model in a data science team.

Data science is a very interesting area to explore. It presents testing challenges that are quite different from “traditional” software applications. I will share my journey introducing testing activities to help build quality into a data science model.
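
To hint at what such testing activities can look like, the checks below are illustrative examples of properties one might verify about a model; the fixtures and checks are assumptions, not the ones from the talk.

```python
# Illustrative checks for a data science model; the model, features and labels
# are assumed to be provided by (hypothetical) pytest fixtures.
import numpy as np

def test_predicted_probabilities_are_valid(model, validation_features):
    probabilities = model.predict_proba(validation_features)
    assert np.all(probabilities >= 0.0) and np.all(probabilities <= 1.0)

def test_model_beats_majority_class_baseline(model, validation_features, validation_labels):
    accuracy = (model.predict(validation_features) == validation_labels).mean()
    baseline = np.bincount(validation_labels).max() / len(validation_labels)
    assert accuracy > baseline  # better than always predicting the majority class

def test_predictions_are_deterministic(model, validation_features):
    assert np.array_equal(model.predict(validation_features),
                          model.predict(validation_features))
```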

Target Audience: Testers, QA, Developers, Delivery Managers, Product Owners, Scrum Masters; everyone is welcome
Prerequisites: QA, Testing
Level: Basic

Laveena Ramchandani is a vibrant, motivated and committed individual whose aim in the IT industry is to apply herself and dedicate her energy to becoming the best possible Technical and/or Business Consultant, and to build a bridge of trust between the company and her education.

She holds a Business Computing degree from Queen Mary University of London. She thoroughly enjoyed the technical aspects of her degree and applied those skills to its business side.

She brings an innovative and valuable contribution to any programme through her aptitude for IT, her interest in the business world and her interpersonal skills. She is a practical and valuable member of any team, able to work effectively and efficiently with others to complete any given task.

She has excellent communication skills gained from both academic and non-academic environments.

Christian Fischer
Talk: Mi 8.1-1
Laveena Ramchandani
Talk: Mi 8.1-2
14:30 - 15:30
Mi 8.3
Validation of Autonomous Systems

Autonomous and automated systems are increasingly being used in IT, for instance in finance, but also in transport, medical surgery and industrial automation. Yet distrust in their reliability is growing. This presentation introduces the validation of autonomous systems. We evaluate different validation methods in practical situations such as automated driving and autonomous robots. The conclusion: classic methods are relevant for coverage in defined situations but must be supplemented with cognitive test methods and scenario-based testing.

Target Audience: Testers, Quality assurance, Architects, Requirements Engineers, Product Owners, Software Engineers
Prerequisites: Testing basic know-how
Level: Advanced

Extended Abstract:
Autonomous and automated systems are increasingly being used in IT, for instance in finance, but also in transport, medical surgery and industrial automation. Yet distrust in their reliability is growing. There are many open questions about the validation of autonomous systems: How do we define reliability? How do we trace back decision-making and judge it after the fact? How do we supervise such systems? And how do we define liability in the event of failure? The underlying algorithms are difficult to understand and thus opaque. Traditional validation is complex, time-consuming and therefore expensive. In addition, no repeatable, effective coverage for regression strategies for upgrades and updates is available, which limits over-the-air (OTA) updates and continuous deployment.

With artificial intelligence and machine learning, we need to ensure algorithmic transparency. For instance, what are the rules in a neural network that is no longer algorithmically tractable which determine who gets a credit, or how an autonomous vehicle reacts to several simultaneous hazards? Classic traceability and regression testing will certainly not work here. Rather, future verification and validation methods and tools will include more intelligence, based on big data exploitation, business intelligence and their own learning, in order to learn about and improve software quality in a dynamic way.

Verification and validation depend on many factors. Every organization implements its own methodology and development environment, based on a combination of several of the tools presented in this talk. It is, however, important not only to deploy tools, but also to build the necessary verification and validation competences. Too often we see solid tool chains but no tangible test strategy. To mitigate these purely human risks, software must increasingly be capable of automatically detecting its own defects and failure points.

Various new intelligent validation methods and tools are evolving which can assist in a smart validation of autonomous systems.

This presentation introduces the validation of autonomous systems. We evaluate different validation methods in practical situations such as automated driving and autonomous robots. The conclusion: classic methods are relevant for coverage in defined situations but must be supplemented with cognitive test methods and scenario-based testing.
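
As a small taste of scenario-based testing, the sketch below runs one decision function against a catalogue of parametrized scenarios. The emergency_braking_decision function and the scenarios are entirely hypothetical and stand in for a real system under test.

```python
# Minimal sketch of scenario-based testing with a hypothetical system under test.
import pytest

def emergency_braking_decision(distance_m: float, own_speed_kmh: float) -> bool:
    """Hypothetical stand-in: brake if the obstacle is within the rough
    rule-of-thumb emergency stopping distance."""
    stopping_distance_m = (own_speed_kmh / 10) ** 2 / 2
    return distance_m <= stopping_distance_m

@pytest.mark.parametrize("scenario, distance_m, own_speed_kmh, expect_braking", [
    ("pedestrian close ahead, urban speed",  10.0,  50.0, True),
    ("obstacle far ahead, highway speed",   300.0, 120.0, False),
    ("stationary vehicle, highway speed",    60.0, 120.0, True),
    ("clear road, low speed",                50.0,  30.0, False),
])
def test_emergency_braking_scenarios(scenario, distance_m, own_speed_kmh, expect_braking):
    assert emergency_braking_decision(distance_m, own_speed_kmh) == expect_braking
```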

Christof Ebert is managing director at Vector Consulting Services. He supports clients around the world in product development. Before that, he worked for twelve years in global senior management positions. A trusted advisor for companies around the world and a member of several industry boards, he is a professor at the University of Stuttgart and at the Sorbonne in Paris. He has authored several books, including the recent “Global Software and IT”, published by Wiley, and "Requirements Engineering", published by dPunkt and in China by Motor Press. He has served for many years on the editorial board of the prestigious "IEEE Software" journal.
Michael Weyrich is the director of the University of Stuttgart’s Institute for Automation and Software Systems. Before that, he spent many years at Daimler and Siemens, where he held senior management positions in engineering. He is engaged in technology transfer and heads numerous industry projects on automation and validation. He has authored several books, including the leading reference book on "Industrial Automation" published by Springer. He has served for many years in various leadership positions within the VDI and leads the VDI/VDE committee on the testing of connected systems and Industry 4.0.
Christof Ebert, Michael Weyrich
Talk: Mi 8.3
