Conference Program

Please note:
This page displays only the English-language sessions of OOP 2022 Digital. You can find all conference sessions, including the German-language ones, here.

The times given in the conference program of OOP 2022 Digital correspond to Central European Time (CET).

By clicking on "EVENT MERKEN" (save event) within the session descriptions you can put together your own schedule. You can view your schedule at any time via the icon in the upper right corner.

Topic: Testing & Quality

(Monday, 31 January 2022)
10:00 - 13:00
Mo 10
Timing in Testing (limited attendance)

Today we must deal with shorter time-to-market, increasing complexity and more agility while keeping quality and other key system properties high.

To address these challenges the right timing in testing is critical, but often not explicitly tackled. Therefore, in this interactive tutorial we reflect on our current approach to timing in testing, investigate and discuss the needed strategies, tactics, and practices in different areas, and share experiences and lessons learned to improve timing in testing – because it is time to act now!

Maximum number of participants: 50

Target Audience: Test Architects, Test Engineers, Product Owners, Quality Managers, Software Architects, Developers
Prerequisites: Basic knowledge about testing and quality engineering
Level: Advanced

Extended Abstract
Today we must deal with shorter time-to-market, increasing complexity and more agility while keeping quality and other key system properties high. Our test systems increase in size, volume, flexibility, velocity, complexity, and unpredictability. Additionally, digitalization requires more than just a face lift in testing.

To address these challenges the right timing in testing (“when to do what kind of testing and how?”) is critical, but often not explicitly tackled. Therefore, in this interactive tutorial we reflect on our current approach to timing in testing, investigate and discuss the needed strategies, tactics, and practices in different areas, and share experiences and lessons learned to improve timing in testing – because it is time to act now!

Some of the areas in testing that are covered in the tutorial are:

  • When to do what kind of testing in the lifecycle – agile, lean, DevOps, and beyond
  • Testing too early vs. too late – risks and opportunities
  • Test automation and the test pyramid – shift-left, shift-right
  • When to stop testing – test exit criteria
  • Repetition in testing – regression testing
Peter Zimmerer is a Principal Key Expert Engineer at Siemens AG, Technology, in Munich, Germany. For more than 30 years he has been working in the field of software testing and quality engineering. He performs consulting, coaching, and training on test management and test engineering practices in real-world projects and drives research and innovation in this area. As ISTQB® Certified Tester Full Advanced Level he is a member of the German Testing Board (GTB). Peter has authored several journal and conference contributions and is a frequent speaker at international conferences.

(Wednesday, 2 February 2022)
09:00 - 10:45
Mi 3.1
Agile Games – Creating Business Impact

(Agile) games are everywhere these days. Everyone plays games and many people facilitate them. But what makes playing games "interesting" from the business owner's perspective?

We look at the criteria for the effectiveness and efficiency of games, and thus their capacity to create business impact for the company.

It turns out that a game is just a game, and remains mere play, if it is not aligned with the underlying business needs. Sounds familiar? But you wonder how to achieve that alignment?

In this talk, we will look at a 4-Step Model that makes the obvious tangible. In the end, it yields a structured approach for how you, too, might create business impact.

Target Audience: Moderators, (young) Scrum Masters, Project Leaders, Managers, Decision-Makers, Facilitators
Prerequisites: General understanding of games and agility, and how to lead games successfully
Level: Basic

Anne Hoffmann is an expert in self-pathed leadership. For more than 15 years, she has been successfully leading international teams to higher performance. By changing ourselves, we are able to change the world around us; that is what she believes in and what her (agile) games activities are designed for. Anne is in the final steps of her PhD on "Using Improvisation Theater in (Project) Management Training" and is co-authoring a book on "Agile Games".
Improving Your Quality and Testing Skills with Gamification

So many challenges, so little time. As testers or quality engineers, we need to sharpen the saw, but how? Gamification can be a way to look at how you're doing and find out where to improve. It's a great way to have everyone engaged and get the best out of people.

In this presentation, Ben Linders will show how playing games (onsite or online) with the Agile Testing Coaching Cards and Agile Quality Coaching Cards help to explore your current quality and testing practice and decide as a team on what to improve or experiment with.

Target Audience: Testers, Agile Teams, Tech Leads, Technical Coaches, Scrum Masters
Prerequisites: None
Level: Advanced

Extended Abstract

The Agile Testing Coaching Cards and Agile Quality Coaching Cards are a deck of cards with statements that help people to share and reflect. Examples of cards are "Testers help developers design acceptance criteria for user stories", "Failing tests get proper attention even when no defect in the product has been detected", and "Refactoring is done to keep code maintainable and reduce technical debt".

Playing games with these coaching cards (onsite or online), you can learn from each other. Teams can use the coaching cards to discuss quality and testing values, principles, and practices, and share their experiences and learnings.

Different game formats can be used to share experiences on testing and quality principles and practices and explore how they can be applied effectively. The playing formats from the Agile Self-assessment Game (benlinders.com/game) can be used to play with these cards. This presentation provides an overview of playing formats and will inspire you to come up with your own formats.

Facilitation is key when playing with these coaching cards. Ben Linders will present how to prepare a game session and facilitate it, what can be done to keep people engaged, and how debriefing can help to pull out learnings and ideas for improvement.

Takeaways:

- Show how to use gamification to self-assess your current way of working.

- Present examples of playing games with the Agile Testing Coaching Cards and Agile Quality Coaching Cards.

- Explore how facilitating games can help to enhance quality and testing in agile teams.

Ben Linders is an Independent Consultant in Agile, Lean, Quality, and Continuous Improvement. As an adviser, trainer, and coach, he helps organizations with effectively deploying software development and management practices. He focuses on continuous improvement, collaboration and communication, and professional development, to deliver business value to customers. Ben is an active member of networks on Agile, Lean, and Quality, and a well-known speaker and author.



09:00 - 10:45
Mi 8.1
Quality Engineering Instead of Testing… Why? How?

To continuously deliver IT systems at speed with a focus on business value, high-performance IT delivery teams integrate quality engineering in their way of working.

Quality engineering is the new concept in achieving the right quality of IT systems. Testing only after an IT product has been developed is an outdated approach. Built-in quality from the start is needed to guarantee business value in today’s IT delivery models. Quality engineering is about changes in skills, organization, automation and quality measures.

Target Audience: All people involved in high-performance IT delivery teams
Prerequisites: General knowledge of IT delivery
Level: Advanced

Extended Abstract

To continuously deliver IT systems at speed with a focus on business value, high-performance cross-functional IT delivery teams integrate quality engineering in their way of working.

Quality engineering is the new concept in achieving the right quality of IT systems. Testing an application only after the digital product has been fully developed has long been a thing of the past. More is needed to guarantee the quality of applications that are delivered faster and more frequently in today’s high-performance IT delivery models. It is about achieving built-in quality. The road to quality engineering means changes in terms of starting points, skills, organization, automation and quality measures.

Our new VOICE model guides teams to align their activities with the business value that is pursued, and by measuring indicators, teams give the right information to stakeholders to establish their confidence that the IT delivery will actually result in business value for the customers.

Teams benefit from the clear definition of QA & testing topics, which provide a useful grouping of all activities relevant to quality engineering. Organizing topics serve to align activities between teams, while performing topics focus on the operational activities within a team.

Also, to deliver quality at speed, it is crucial for today’s teams to benefit from automating activities, for example in a CI/CD pipeline. People must remember, however, that automation is not the goal but merely a means to increase quality and speed.

In this presentation the audience will learn why a broad view on quality engineering is important and how quality engineering can be implemented to achieve the right quality of IT products, the IT delivery process and the people involved.

This presentation is based on our new book "Quality for DevOps teams" (ISBN 978-90-75414-89-9) which supports high-performance cross-functional teams in implementing quality in their DevOps culture, with practical examples, useful knowledge and some theoretical background.

Rik Marselis is principal quality consultant at Sogeti in the Netherlands. He is a well-appreciated presenter, trainer, author, consultant, and coach in the world of quality engineering. His presentations are always appreciated for their liveliness, his ability to keep the talks serious but light, and his use of practical examples with humorous comparisons.
Rik has been a trainer for test design techniques for over 15 years.
Impact Assessment 101 to 301: From Beginner to Journeyman

In large software projects, assessing the impact of a code change can be a cumbersome task. If the software has grown and shows an evolutionary design, there are always unwanted side effects.

Change control boards are established. But on what data do they judge what may happen with the changes? Very often the HiPPO syndrome prevails: decisions rest on the highest paid person's opinion.

In this talk we will show you ways to come to a deterministic prediction of the impact, what data you need and what you can do with it.

Target Audience: Architects, Test Managers, Developers, Testers
Prerequisites: Basic knowledge of collected data in software projects
Level: Advanced

Marco Achtziger is a Test Architect at Siemens Healthineers. He holds several qualifications from ISTQB/iSQI and is a certified Software Architect and a trainer for the Test Architect curriculum at Siemens AG.
Gregor Endler holds a doctorate in Computer Science for his thesis on completeness estimation of timestamped data. His work at codemanufaktur GmbH deals with Machine Learning and Data Analysis.

18:30 - 20:00
Nmi 2
Data Technical Debt: Looking Beyond Code

Data technical debt refers to quality challenges associated with legacy data sources, including both mission-critical sources of record and “big data” sources of insight. Data technical debt impedes your organization’s ability to leverage information effectively for better decision making, increases operational costs, and hampers your ability to react to changes in your environment. The annual cost of bad data is in the trillions of dollars; this problem is real, and it won't go away on its own.

Target Audience: Developers, Data Professionals, Managers, Architects
Prerequisites: Understanding of basic data terms
Level: Basic

Extended Abstract

Data technical debt refers to quality challenges associated with legacy data sources, including both mission-critical sources of record and “big data” sources of insight. Data technical debt impedes your organization’s ability to leverage information effectively for better decision making, increases operational costs, and hampers your ability to react to changes in your environment. Bad data is estimated to cost the United States alone $3 trillion annually, yet few organizations have a realistic strategy in place to address data technical debt.

This presentation defines what data technical debt is and why it is often a greater issue than classic code-based technical debt. We describe the types of data technical debt, why each is important, and how to measure them. Most importantly, this presentation works through Disciplined Agile (DA) strategies for avoiding, removing, and accepting data technical debt. Data is the lifeblood of our organizations; we need to ensure that it is clean if we’re to remain healthy.
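
As a concrete, entirely illustrative notion of what "measuring" data technical debt can mean, the sketch below computes two simple indicators over a handful of hypothetical customer rows: field completeness and duplicate records. The record layout and metric names are my own, not part of the DA tool kit.

```python
# Illustrative data-quality metrics over a small in-memory dataset.
records = [
    {"id": 1, "email": "a@example.com", "country": "DE"},
    {"id": 2, "email": None,            "country": "DE"},
    {"id": 3, "email": "c@example.com", "country": None},
    {"id": 3, "email": "c@example.com", "country": None},  # exact duplicate
]

def completeness(records, field):
    """Share of records in which the given field is present (not None)."""
    filled = sum(1 for r in records if r.get(field) is not None)
    return filled / len(records)

def duplicate_count(records):
    """Number of fully identical records beyond their first occurrence."""
    seen, dupes = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:
            dupes += 1
        seen.add(key)
    return dupes

print(completeness(records, "email"))    # 0.75
print(completeness(records, "country"))  # 0.5
print(duplicate_count(records))          # 1
```

In practice such indicators would be tracked over time per data source, so that a team can see whether its data technical debt is being paid down or is accruing.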

Learning objectives:

• Discover what data technical debt is

• Understand the complexities of data technical debt and why they’re difficult to address

• Learn technical and management strategies to address data technical debt

Scott Ambler is the Vice President, Chief Scientist of Disciplined Agile at Project Management Institute. Scott leads the evolution of the Disciplined Agile (DA) tool kit and is an international keynote speaker. Scott is the (co)-creator of the Disciplined Agile (DA) tool kit as well as the Agile Modeling (AM) and Agile Data (AD) methodologies.


(Thursday, 3 February 2022)
09:00 - 10:45
Do 3.1
Software Quality is Not Only About Code and Tests

Each project has its own unique technology stack, different business logic and a unique team. The definition of quality in our projects can vary greatly. However, there are good practices that will work everywhere. There are steps that can be taken in every project and team to produce software of better quality. I will tell you how to improve communication and processes, and what tools we can use so we are not ashamed of the fruits of our work. Everything from a programmer's perspective.

Target Audience: Developers and Tech/Project Leaders
Prerequisites: Some experience in professional software development
Level: Advanced

Extended Abstract
Each project has its own unique technology stack, different business logic and a unique team. Some of us work on mature products that have been in production for many years. Others are constantly struggling to innovate in the race against time. The definition of quality in our projects can vary greatly. However, there are good practices that will work everywhere. There are steps that can be taken in every project and team to produce software of better quality. I will tell you how to improve communication and processes, and what tools we can use so we are not ashamed of the fruits of our work. Everything from a programmer's perspective.

Aleksandra Kunysz has been creating software for years. She has experience in full-stack programming, testing, requirements gathering, and delivering training. She has worked in corporations, in startups, and pro bono, across various industries and countries. She enjoys solving problems and writing meaningful code. Since 2019, she has been advocating quality among programmers. When she's offline, she rides two wheels, walks her dog, or practices yoga.
TDD Misconceptions

“TDD is when you write tests before implementing the business logic”: a simple sentence that is nonetheless often misunderstood.
Moving from one project to another, I have observed how many times people were terrified of TDD. I have been there too.
This session will focus on trying to understand HOW and more importantly WHY you should consider TDD. I've transformed failures from my experience into a series of lessons learned, things that in hindsight should have been obvious.
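
The red-green-refactor loop behind that opening sentence can be sketched in a few lines. This is a generic illustration (a hypothetical `fizzbuzz` kata, not material from the talk): the test is written first and would fail, then just enough logic is added to make it pass.

```python
# Step 1 (red): the test is written *before* the implementation.
# If run at this point, it would fail with a NameError, because
# fizzbuzz() does not exist yet. The failing test drives the design.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"       # business rule: multiples of 3
    assert fizzbuzz(10) == "Buzz"      # multiples of 5
    assert fizzbuzz(15) == "FizzBuzz"  # multiples of both
    assert fizzbuzz(7) == "7"          # everything else: the number itself

# Step 2 (green): write just enough logic to make the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3 (refactor): with the test as a safety net, the
# implementation can now be restructured without fear.
test_fizzbuzz()
print("all tests pass")
```

The misconception the session title hints at is treating step 1 as mere test-writing order; the point is that the test shapes the design before any business logic exists.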

Target Audience: Architects, Developers
Prerequisites: Basic knowledge in testing techniques
Level: Basic

Olena Borzenko is a full-stack developer at The Adecco Group, based in Berlin, Germany. She previously worked at a service company in Ukraine and took part in the creation of various products, from small startups and B2B applications to enterprise platforms.
Moreover, she is passionate about new technologies, clean code, and best practices.
In her free time, when she’s not spending it on hobbies, she likes to build demos around real-life use cases, share knowledge with others, and, conversely, learn from other people's experiences.


(Friday, 4 February 2022)
09:00 - 16:00
Fr 8
Structured Test Design and Condition-Oriented Test Case Design with ECT and MCDC

Test case design is one of the core competences of the testing profession. This tutorial is about an effective and elegant technique that is still too little known.

After an overview presentation of test design using coverage-based test design techniques and experience-based test approaches, this tutorial addresses one of the (seemingly) harder techniques from the condition-oriented group of coverage-based test design techniques, the Elementary Comparison Test (ECT) that uses Modified Condition Decision Coverage (MCDC).

Target Audience: Quality Engineers, Test Engineers, Developers
Prerequisites: General knowledge of IT delivery, quality engineering and testing
Level: Advanced

Extended Abstract
Test case design is one of the core competences of the testing profession.

Which test design techniques do you use? And is this effective and efficient?

In the TMAP body of knowledge we distinguish 4 main groups of test design techniques: Process-oriented (such as path testing), Condition-oriented (such as decision table testing), Data-oriented (such as Data Combination test) and Appearance-oriented (such as Syntactic testing and performance testing).

After an overview presentation of test design using coverage-based test design techniques and experience-based test approaches, this tutorial addresses one of the (seemingly) harder techniques from the condition-oriented group of coverage-based test design techniques, the Elementary Comparison Test that uses Modified Condition Decision Coverage.

Suppose you must test the entry-check of the new Wuthering Heights Rollercoaster in the QualityLand amusement park. Every person must be at least 120 cm tall to be allowed in the rollercoaster. What technique would you use? Boundary Value Analysis (from the group data-oriented testing), right? That’s not a tough choice.

But now the marketing department of QualityLand has a special offer for Halloween. To be admitted at the special discount rate, a person must still be at least 120 cm tall, but must also wear a Halloween outfit and needs to be at the gate on 31 October (that’s a decision with 3 conditions). On top of this, if the person has bought a ticket online and paid 10% extra, they may skip the line (that’s another decision, with 2 conditions).

So, all in all you have 2 decision points with 5 conditions in total.

Now what test case design technique do you use?

In the above example (with 5 conditions) choose a condition-oriented test design technique. But which? Probably you know Decision Table Testing. Applying that technique would lead to 2^5 = 32 test cases. That’s a bit too much to call an efficient test set.

My advice is to use the Elementary Comparison Test (ECT) design technique, together with Modified Condition Decision Coverage (MCDC). This way, with only 6 (!!) test cases you can guarantee that EVERY condition has contributed to trigger EVERY outcome of the entry-check of the rollercoaster. So, it is both effective and efficient!
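
The rollercoaster example can be sketched in code. The following is my own illustrative Python, not the tutorial's material: it models the two decision points as Boolean functions, lists one possible set of 6 test vectors, and programmatically checks the MCDC property that every condition independently flips a decision outcome.

```python
# Hypothetical model of the QualityLand entry-check (my own sketch).
def discount_ok(tall, outfit, oct31):
    # Decision 1: Halloween discount (3 conditions, all must hold)
    return tall and outfit and oct31

def skip_line_ok(online, extra):
    # Decision 2: skip the line (2 conditions, all must hold)
    return online and extra

# One candidate MCDC test set: (tall, outfit, oct31, online, extra)
tests = [
    (True,  True,  True,  True,  True),   # all conditions true
    (False, True,  True,  True,  True),   # flip 'tall' only
    (True,  False, True,  True,  True),   # flip 'outfit' only
    (True,  True,  False, True,  True),   # flip 'oct31' only
    (True,  True,  True,  False, True),   # flip 'online' only
    (True,  True,  True,  True,  False),  # flip 'extra' only
]

def mcdc_covered(decision, arg_indices, tests):
    """True if every condition has a pair of tests that differ only in
    that condition and produce different outcomes of the decision."""
    for i in arg_indices:
        found = False
        for a in tests:
            for b in tests:
                others_equal = all(a[j] == b[j] for j in arg_indices if j != i)
                if a[i] != b[i] and others_equal:
                    if decision(*[a[j] for j in arg_indices]) != \
                       decision(*[b[j] for j in arg_indices]):
                        found = True
        if not found:
            return False
    return True

print(mcdc_covered(discount_ok, [0, 1, 2], tests))   # True
print(mcdc_covered(skip_line_ok, [3, 4], tests))     # True
print(len(tests), "test cases instead of", 2 ** 5)   # 6 test cases instead of 32
```

Each of the 6 vectors flips exactly one condition against the all-true baseline, which for AND-only decisions is enough to show that every condition independently determines an outcome, as opposed to the 32 rows a full decision table would need.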

Have you never heard of ECT before? Well, even though it has existed for decades, I can imagine that, because ISTQB doesn’t teach you this. But it has been part of the TMAP body of knowledge since 1995 :-)

At this conference you will get your chance to learn all about ECT & MCDC!

Join me in this half-day or full-day tutorial and I will make sure that you will get hands-on experience.

Rik Marselis is principal quality consultant at Sogeti in the Netherlands. He is a well-appreciated presenter, trainer, author, consultant, and coach in the world of quality engineering. His presentations are always appreciated for their liveliness, his ability to keep the talks serious but light, and his use of practical examples with humorous comparisons.
Rik has been a trainer for test design techniques for over 15 years.
