Conference Program

Please note:
On this page you will only see the English-language presentations of the conference. You can find all conference sessions, including the German-speaking ones, here.

The times given in the conference program of OOP 2024 correspond to Central European Time (CET).

By clicking on "VORTRAG MERKEN" within the lecture descriptions you can arrange your own schedule. You can view your schedule at any time using the icon in the upper right corner.

Topic: Quality Assurance

Monday, January 29, 2024
10:00 - 13:00
Mo 5
Testing Wisdoms to Expand Our Horizons (limited capacity)

To expand our horizons in testing, we should ask ourselves the following questions:

  1. What did we learn from the history of testing?
  2. What did we miss and what did we forget?
  3. How can we do better testing in the future?

Therefore, in this interactive tutorial we will identify, discover, investigate, reflect, and discuss testing wisdoms from different categories to answer these questions and to expand our horizons – you are invited to bring your own top 3 testing wisdoms (I will bring my top n) and share them with your peers in this tutorial!

Max. number of participants: 50

Target Audience: Test Architects, Test Engineers, Software Architects, Developers, Product Owners, Quality Managers
Prerequisites: Basic knowledge about testing and quality engineering
Level: Advanced

Extended Abstract:
Effective and efficient software and system development requires having superior test approaches in place and a strong commitment to quality in the whole team. Realizing the right mix of test methods and quality measures is no easy task in real project life, due to the increasing demand for system reliability and cost efficiency, and market demands for speed, flexibility, and sustainability.
To address these challenges and to expand our horizons in testing, we should ask ourselves the following questions:

  • What did we learn from the history of testing?
  • What did we miss and what did we forget?
  • How can we do better testing in the future?

Therefore, in this interactive tutorial we will identify, discover, investigate, reflect, and discuss testing wisdoms from different categories (techniques, people, history) to answer these questions and at the same time to expand our horizons – you are invited to bring your own top 3 testing wisdoms (I will bring my top n) and share them with your peers in this tutorial!
Projected learning outcomes and lessons learned

  • Get familiar with testing wisdoms – known and unknown, old and new.
  • Learn and share experiences on how to discover and adopt testing wisdoms.
  • Apply discussed testing wisdoms to improve your test approaches in the future!

Peter Zimmerer is a Principal Key Expert Engineer at Siemens AG, Technology, in Garching, Germany. For more than 30 years he has been working in the field of software testing and quality engineering. He performs consulting, coaching, and training on test management and test engineering practices in real-world projects and drives research and innovation in this area. As an ISTQB® Certified Tester (Full Advanced Level), he is a member of the German Testing Board (GTB). Peter has authored several journal and conference contributions and is a frequent speaker at international conferences.

Peter Zimmerer
Room 03

Tuesday, January 30, 2024
09:00 - 10:30
Di 3.1
All tests are green? Oh no! Why it is sometimes good if a test fails

Test coverage: 100% - Check!
And why do we still have bugs?
OK, tests don't prove the absence of errors.
And at the end of the day, they are just code which could contain bugs as well.
And perhaps they give us a false sense of security.
But how do I know that my tests are good?
One way to find out is using Mutation Testing.
In this talk I want to explain what Mutation Testing is, how to do it, and when it is helpful.

Target Audience: Developers, Architects, Testers
Prerequisites: Basic knowledge of programming, some experience in writing tests
Level: Basic

Extended Abstract:
More and more teams are writing tests for their production code, be it by applying concepts like TDD or BDD or by just writing them "after the fact". Sometimes there is also a test coverage metric that needs to be met. The positive effect is definitely that there are tests. Be it for my future self or a future colleague or as a form of documentation.
Tests are a means of telling something about the quality of production code. Mutation testing can help tell something about the quality of tests. It helps to find missing tests and potential bugs.
The concept of mutation testing is already more than 50 years old, but its application has not yet become widespread.
This talk should encourage you to take a closer look at mutation testing to find out what possibilities it offers in your own project, but also to see what disadvantages or pitfalls there are.
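
To make the concept concrete before the talk: below is a minimal, hand-rolled sketch in Python of what a mutation testing tool does under the hood. The function, the test, and the single mutant are invented for illustration; real tools such as PIT (Java) or mutmut (Python) generate and run many mutants automatically.

```python
# Hand-rolled illustration of mutation testing: mutate the code under
# test and check whether the test suite notices. Invented example code.

def price_with_discount(price, discount):
    """Code under test."""
    return price - discount

def test_price_with_discount():
    """A weak test: 100% coverage, but it only checks discount == 0."""
    assert price_with_discount(10, 0) == 10

def mutant(price, discount):
    """One generated 'mutant': '-' replaced by '+'."""
    return price + discount

def run_against(test, implementation):
    """Swap in an implementation, run the test, report pass/fail."""
    global price_with_discount
    original = price_with_discount
    price_with_discount = implementation
    try:
        test()
        return True          # test passed -> mutant survived
    except AssertionError:
        return False         # test failed -> mutant killed
    finally:
        price_with_discount = original

if __name__ == "__main__":
    survived = run_against(test_price_with_discount, mutant)
    print("mutant survived" if survived else "mutant killed")
```

Here the mutant survives, because with `discount == 0` both `-` and `+` give the same result. A second assertion such as `price_with_discount(10, 2) == 8` would kill it; surviving mutants are exactly the hint that a test is missing.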

Birgit Kratz is a freelance software developer and consultant with more than 20 years of experience in the Java ecosystem.
Her domain as well as her passion is using agile development methods and spreading software-crafting ideas.
This is why she has been a co-organizer of the German software crafting community (Softwerkskammer) events in Cologne and Düsseldorf for many years.
She also helps organize the SoCraTes conference (Software Crafting and Testing Conference).
To balance her job activities she rides her road bike quite extensively.

Shifting accessibility testing to the left

How often have you heard ‘Yes, this is important, but we don’t have the capacity right now’ or ‘Sure, let’s put it in the backlog’?
This is something we should not brush off or take lightly. Accessibility testing is vital, especially when your product is a user-facing application.
We need to be socially aware as a team and build quality into our product by making it more accessible.

Target Audience: Everyone, as accessibility is a matter of social awareness
Prerequisites: None
Level: Basic

Extended Abstract:
At least 1 in 5 people in the UK have a long-term illness, impairment or disability. Many more have a temporary disability. A recent study found that 4 in 10 local council homepages failed basic tests for accessibility.
This is vital: the sooner we as testers advocate for accessibility in our teams, the sooner we make our products more accessible, reduce the risk of bad product reviews and reputational damage, and become more socially aware. Let's shift left and make accessibility testing built into our teams.

  1. Understand why accessibility testing is important.
  2. How I adopted an accessibility mindset.
  3. How to coach your team and bring accessibility into it.
  4. Demonstrate various tools available to perform accessibility testing (with demo; a minimal example follows below).
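
The tools demoed in the session are not named in this abstract, so as a taste of what an automated accessibility check can look like, here is a toy sketch (Python standard library, invented HTML) that flags images without alt text, one of the most common WCAG failures:

```python
# Toy accessibility check: flag <img> tags without an alt attribute
# (WCAG 1.1.1). Real audits use engines such as axe-core or Lighthouse;
# this only illustrates how such a check could run in a CI pipeline.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.violations.append(attributes.get("src", "<unknown>"))

html = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="chart.png">
</body></html>
"""

checker = MissingAltChecker()
checker.feed(html)
for src in checker.violations:
    print(f"image without alt text: {src}")
# A pipeline would fail the build whenever violations is non-empty.
```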

Laveena Ramchandani is a dedicated and experienced Test Manager with over 10 years of experience in the field. She is deeply passionate about continuous learning and sharing knowledge within the testing community. As a leader in data science testing and testing in general, Laveena has significantly contributed to enhancing the skillsets of individuals within the industry, particularly through her impactful presence on digital platforms.
In 2022, Laveena was recognized as a finalist for the prestigious Digital Star Award at the everywoman in Technology Awards. In addition to her recognition, she has been featured on several podcasts and blogs, serves as a trainer for new testers with The Coders Guild, and is a sought-after international speaker. Her commitment to the community continues to inspire and shape the next generation of testers.

Birgit Kratz
Room 04b

Laveena Ramchandani
Room 04b

14:00 - 14:45
Di 2.2
From Legacy to Cloud – Mistakes You Don’t Want to Make Your Own

Come and hear the story of a company that is on the journey from the old monolithic, on-premise, waterfall world to the new modular, agile, domain-driven, multi-tenant, cloud-based microservices world. The challenges come from different directions: both technical and organizational aspects have to be mastered. The domain has to be understood, so that the system can be structured right. The big bang has to be avoided.
In this talk we will look at how our “fictional” company has struggled with and finally overcome those challenges.

Target Audience: Architects, Developers, Project Leaders, Managers
Prerequisites: Programming Experience
Level: Advanced

More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/henning.schwentner

Henning loves high-quality programming. He lives out this passion as a coder, coach, and consultant at WPS – Workplace Solutions. There he helps teams structure grown monoliths or build new systems with a sustainable architecture from the start. The result is often microservices or self-contained systems. Henning is the author of "Domain Storytelling" (Addison-Wesley, 2022) and the www.LeasingNinja.io, as well as the translator of "Domain-Driven Design kompakt" (dpunkt, 2017).

More content from this speaker? Have a look at sigs.de: https://www.sigs.de/experten/henning-schwentner/

Henning Schwentner
Room 01

16:15 - 17:15
Di 9.3
Asking the Right Questions When Testing AI Systems

While AI systems differ in some points from "traditional" systems, testing them does not have to be more difficult - knowing the right questions to ask will go a long way. In this talk we will:

  • Arm you with a checklist of questions to ask when preparing to test an AI system
  • Show you that testers and data scientists have common ground when testing AI systems

Keep calm and test on - AI systems are not that different from "normal" systems.

Target Audience: Testers, Data Scientists, Developers, Product Owners, Architects
Prerequisites: Basic knowledge of software testing
Level: Advanced

Extended Abstract:
If you're a tester about to test your first AI system, or wanting to move into that area, you're probably wondering how you can prepare for the role. While we usually do not deal with complexity on the magnitude of Large Language Models like ChatGPT, AI systems still seem to offer different challenges than "traditional" systems.
You're not the first person to deal with these questions. In fact, a group of us got together to explore it in more detail. Is there a general framework of questions that testers can use to help develop a quality strategy for systems that use AI? We wanted to see if we could design one. To this end, we got together a group with diverse roles: tester, test architect, data scientist, project lead and CEO.
Join us in this talk to hear how we approached the task and what our results are, including an example of using our checklist of questions to analyse a system that uses AI. Along the way we also addressed questions like "What is the role of a tester in such projects?" and "How much math do I need?" - we'll talk about those discussions, too. We encourage participants to use our checklist and give us feedback on it!

Gregor Endler received his doctorate in computer science in 2017 with the dissertation "Adaptive Data Quality Monitoring with a Focus on the Completeness of Timestamped Data". Since then he has worked as a Data Scientist at codemanufaktur GmbH. His work centers on machine learning, data analysis, and data visualization.

Marco Achtziger is a Test Architect working for Siemens Healthineers in Forchheim. In this role he supports teams working in an agile environment in implementing and executing tests in the preventive test phase in a large project. He has several qualifications from ISTQB and iSQI and is a certified Software Architect by Siemens AG and Siemens Senior Key Expert in the area of Testing and Continuous Delivery.

Gregor Endler, Marco Achtziger
Room 13b

Wednesday, January 31, 2024
09:00 - 10:30
Mi 6.1
Coffee chat with documentation, are you ready?

The introduction of ChatGPT and Copilot X has brought a lot of hype around developer experience, especially documentation. But are we ready to chat with our documentation instead of reading it, using these tools? How can we, as maintainers, leverage these tools to offer a better documentation experience for developers as our users? Join my talk and let's find out.
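
As a rough technical illustration of what "chatting with documentation" involves, here is a hedged sketch of the retrieval step in retrieval-augmented generation (RAG). The `embed` stub and the doc snippets are invented so the example runs offline; a real system would use an embedding model and pass the retrieved chunk together with the question to an LLM.

```python
# Sketch of doc-chat retrieval (RAG). embed() is a crude stand-in for a
# real embedding model so this runs without external services.
from math import sqrt

def embed(text: str) -> list[float]:
    # Invented stub: bag-of-letters vector, NOT a real embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ch.isascii():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [  # invented documentation snippets
    "Install the CLI with `npm install -g ourtool`.",
    "Configuration lives in ourtool.config.js at the project root.",
    "Use `ourtool deploy --env prod` to ship a release.",
]

def build_prompt(question: str) -> str:
    # Retrieve the most similar snippet, then hand it to the LLM as context.
    q = embed(question)
    best = max(docs, key=lambda d: cosine(q, embed(d)))
    return f"Answer using this documentation:\n{best}\n\nQ: {question}"

print(build_prompt("How do I install the tool?"))
# A real system would send this prompt to an LLM and stream the answer.
```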

Target Audience: Engineers, Developers
Prerequisites: Programming
Level: Advanced

Maya Shavin is a Senior Software Engineer at Microsoft, working extensively with JavaScript and frontend frameworks, and is based in Israel. She founded and currently organizes the VueJS Israel Meetup Community, helping to create a strong playground for Vue.js lovers and like-minded developers. Maya is also a published author, international speaker, and open-source maintainer of frontend and web projects. As a core maintainer of the StorefrontUI framework for e-commerce, she focuses on delivering performant components and best practices to the community, believing that strong vanilla JavaScript knowledge is necessary to be a good web developer.

GenAI: Revolutionizing Software Testing with Automated Test Cases and AI Analysis

Discover the transformative power of GenAI in software testing. This lecture showcases a powerful GenAI-powered test framework that enhances testing efficiency. Learn how GenAI analyzes applications to generate automated test cases, and how generative AI's random exploratory tests uncover hidden defects. Experience AI-powered peer reviewers for code analysis and quality evaluations. Explore Smart Report AI, providing comprehensive analysis and insights into test execution, results, and defects. Join us to revolutionize your software testing with GenAI.

Target Audience: Quality Engineers, Architects, Developers, Project Leaders, Managers
Prerequisites: Basic knowledge of modern software development
Level: Advanced

Extended Abstract:
In the rapidly evolving landscape of software development, generative AI holds immense potential to revolutionize the field of software testing. This lecture aims to explore the transformative capabilities of GenAI in software testing by showcasing a powerful GenAI-powered test framework and its key components.
The lecture begins by introducing the main framework, which combines the power of GenAI and automation to enhance the efficiency and effectiveness of software testing. Attendees will learn how GenAI can analyze the application under test and generate automated test cases based on a predefined test framework. These test cases can be executed locally or seamlessly integrated into DevOps processes, allowing for efficient and comprehensive testing of applications.
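
The framework itself is not published in this abstract, so the following is only a hypothetical sketch of the core idea: prompt a model with the source of the code under test and ask it for executable test cases. `call_llm` is a placeholder, and `normalize_iban` is an invented example function.

```python
# Hypothetical sketch of LLM-driven test generation, NOT the framework
# presented in the lecture. call_llm() is a placeholder for a model API.
import inspect

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your LLM provider of choice.
    raise NotImplementedError

def generate_tests(func) -> str:
    """Build a prompt from the function's source and ask for pytest tests."""
    source = inspect.getsource(func)
    return call_llm(
        "Write pytest unit tests, including edge cases, for this "
        f"Python function:\n\n{source}"
    )

def normalize_iban(raw: str) -> str:
    """Invented system under test."""
    return raw.replace(" ", "").upper()

# In a DevOps pipeline, the generated tests would be written to a file
# and executed like any other test module, e.g.:
#   Path("test_generated.py").write_text(generate_tests(normalize_iban))
```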
Furthermore, the lecture delves into the utilization of generative AI for the generation of random exploratory tests. By leveraging the capabilities of GenAI, testers can uncover hidden defects and vulnerabilities that may go unnoticed with traditional testing approaches. This demonstration highlights the innovative potential of GenAI in driving comprehensive test coverage.
Additionally, the lecture showcases the Smart Peer AI component of the framework. Attendees will discover how AI-powered peer reviewers can anticipate functional and non-functional issues, providing insightful code analysis and quality evaluations. By incorporating specific coding best practices, architectural standards, and non-standard attributes, Smart Peer AI enhances code quality and accelerates the development process.
Finally, the lecture concludes by presenting the Smart Report AI, an AI-powered solution that analyzes test reports generated from automated testing processes. This component provides a comprehensive analysis of test execution, environment configuration, test results, defect severity automated analysis, reproducibility steps, and suggestions for fixes. Smart Report AI empowers testers with valuable insights and facilitates effective decision-making in the testing and development lifecycle.

Davide Piumetti is a Technology Architect with expertise in software testing and part of the quality engineering leadership in Switzerland. With over 15 years of experience, he drives innovative projects in Big Data, Generative AI, and Test@DevOps. He is also exploring the potential of high-performance computing and quantum computing in software testing. Committed to pushing boundaries, he shapes the future of quality engineering through innovation and continuous improvement.

Maya Shavin
Room 05

Davide Piumetti
Room 05

14:30 - 15:30
Mi 9.3
Fostering the EU AI Act | A new dimension in assuring high risk AI

In the evolving AI landscape, the EU AI Act introduces new standards for assuring high-risk AI systems. This presentation will explore the tester's role in navigating these standards, drawing from the latest research and from our experiences with an Automatic Employment Decision System, a high-risk AI. We'll discuss emerging methodologies, conformity assessments, and post-deployment monitoring, offering insights and practical guidance for aligning AI systems with the Act's requirements.

Target Audience: QA Professionals, AI Engineers/Architects, Business Leaders, POs/PMs, Policy Makers
Prerequisites: Basic Understanding of AI, Familiar with Testing, Awareness of EU AI Act, Interest in AI Assurance
Level: Advanced

Extended Abstract:
As the landscape of Artificial Intelligence (AI) rapidly evolves, the upcoming EU AI Act is set to introduce a new paradigm for assuring high-risk AI systems. This session will allow participants to delve into the pivotal role of testers in this context. We will decode the complexities of the Act and get to see how fostering the Act will ensure robust, transparent, and ethically aligned AI systems.
Drawing on recent research and my own experience in testing high-risk AI systems, I will discuss the emerging methodologies for testing high-risk AI, including explainability methods, robustness testing, and fairness testing. Together, we will also explore the Act's emphasis on conformity assessments and post-deployment monitoring, highlighting the tester's role in these processes. Participants will gain a unique behind-the-scenes look at how we have gone from chaos to order in testing an Automatic Employment Decision System, a high-risk AI.
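
As one concrete example of the fairness testing mentioned above, here is a minimal sketch of a demographic parity check, a common fairness metric for decision systems. The groups, decisions, and any acceptance threshold are invented, not taken from the speaker's project.

```python
# Demographic parity sketch: compare positive-decision rates per group.
# All data below is invented for illustration.
def demographic_parity_gap(decisions):
    """decisions: iterable of (group, hired) pairs -> (gap, rates)."""
    totals, positives = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(hired)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(sample)
print(rates)                 # hiring rate per group
print(f"gap = {gap:.2f}")    # a test might fail the build above, say, 0.2
```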
Joining this session will equip participants with valuable insights and practical guidance on aligning AI systems with the EU AI Act. This is a must-attend for testers, AI developers, and business leaders alike who are navigating this new frontier, exploring the challenges, opportunities, and future directions in the assurance of high-risk AI systems. By the end of the session, participants will be better prepared to face the challenges posed by the EU AI Act and will have a clear understanding of the future directions in the assurance of high-risk AI systems.

More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/andrei.nutas

Andrei Nutas is a Test Architect at Adesso with over 7 years of industry experience. For the past year, among other things, Andrei has helped align an Automated Employment Decision System to the upcoming EU AI Act. In his free time, he is a research fellow with the West University of Timisoara where he focuses on AI Alignment and AI Ethics.

More content from this speaker? Have a look at sigs.de: https://www.sigs.de/experten/andrei-nutas/

Andrei Nutas
Room 04a

17:00 - 18:00
Mi 6.4
Techniques for Improving Data Quality: The Key to Machine Learning

One of the fundamental challenges for machine learning (ML) teams is data quality, or more accurately the lack of data quality. Your ML solution is only as good as the data that you train it on, and therein lies the rub: Is your data of sufficient quality to train a trustworthy system? If not, can you improve your data so that it is? You need a collection of data quality “best practices”, but what is “best” depends on the context of the problem that you face. Which of the myriad of strategies are the best ones for you?

Target Audience: Developers, Data Engineers, Managers, Decision Makers
Prerequisites: None
Level: Advanced

Extended Abstract:
This presentation compares over a dozen traditional and agile data quality techniques on five factors: timeliness of action, level of automation, directness, timeliness of benefit, and difficulty to implement. The data quality techniques explored include: data cleansing, automated regression testing, data guidance, synthetic training data, database refactoring, data stewards, manual regression testing, data transformation, data masking, data labeling, and more. When you understand what data quality techniques are available to you, and understand the context in which they’re applicable, you will be able to identify the collection of data quality techniques that are best for you.
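
To make one of these techniques tangible, automated regression testing of data quality, the sketch below encodes a few checks (completeness, validity, uniqueness) as a function that could run in CI. The column names and thresholds are invented for the example.

```python
# Data-quality regression test sketch using pandas; invented schema.
import pandas as pd

def check_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality violations; empty means 'passed'."""
    problems = []
    if df["age"].isna().mean() > 0.01:                 # completeness
        problems.append("more than 1% of 'age' values are missing")
    if not df["age"].dropna().between(0, 120).all():   # validity
        problems.append("'age' outside the plausible range 0..120")
    if df.duplicated().any():                          # uniqueness
        problems.append("duplicate rows found")
    return problems

df = pd.DataFrame({"age": [34, 41, None, 150], "label": [1, 0, 1, 1]})
for problem in check_training_data(df):
    print("FAIL:", problem)   # run as part of the training pipeline
```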

Scott Ambler is an Agile Data Coach and Consulting Methodologist with Ambysoft Inc., leading the evolution of the Agile Data and Agile Modeling methods. Scott was the (co-)creator of PMI’s Disciplined Agile (DA) tool kit and helps organizations around the world to improve their way of working (WoW) and ways of thinking (WoT). Scott is an international keynote speaker and the (co-)author of 30 books.

Scott W. Ambler
Room 22a

Thursday, February 1, 2024
11:00 - 11:45
Do 2.2
Qualityland of Confusion

Are you lost when folks talk about "quality" in the context of software? Just when you thought "high quality" means "good" and "QA" means "assure it's good", somebody hits you over the head with ISO 25010, where "quality" is just a neutral property of a software system. It's all a big happy pile of terminology quicksand where you sink faster the more you struggle for unambiguous and clear definitions. But we're here to help you out. We'll be looking at what's relevant about quality from a software architecture perspective.

Target Audience: Architects, Developers
Prerequisites: None
Level: Basic

Extended Abstract:
Among the prominent confusions is "quality requirements" vs. "functional requirements" - someone is sure to tell you that the first one is something not entirely unlike “non-functional” requirements. But if that distinction even is a thing, what's that "functional suitability" category of the ISO 25010 quality model? And where exactly do requirements even enter the picture? Once they do, how do we tell whether they are satisfied or not? And if they're not, how does all this terminology help with devising tactics for making things better? We'll separate out the taxonomy from the metrology, the metrology from the requirements, the functional from the non-functional (as far as this makes sense) and everything in between. We'll survey different ways of looking at quality and identify the murky areas where you need to be explicit about what you mean.

More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/michael.sperber

Dr. Michael Sperber is managing director of Active Group GmbH, which develops custom software exclusively with functional programming. He is an internationally recognized expert in functional programming and has applied it in research, teaching, and industrial development for more than 20 years. He has also written numerous articles and books on the subject. Michael Sperber is co-founder of the blog funktionale-programmierung.de and co-organizer of the developer conference BOB.

More content from this speaker? Have a look at sigs.de: https://www.sigs.de/experten/michael-sperber/

Alexander Lorz is a member of the iSAQB Foundation Level Working Group, head of the WG Train-the-Trainer, and author of the CPSA-F reference training material for international training providers.

Michael Sperber, Alexander Lorz
Room 04a

11:00 - 11:45
Do 4.2
What if? Simulation in portfolio management and replacing estimation as a risk management strategy

Managers and leaders worldwide struggle to decide between projects A, B, or both. Traditional estimation techniques fail because humans can't predict the future. This talk proposes a simulation-based approach inspired by investment strategies, industrial management, and poker playing. By leveraging AI, forecasting, and computing power, simulations offer a reliable and adaptable portfolio planning strategy. Rather than relying on human estimation, simulations streamline decision-making and provide reassurance.

Target Audience: Portfolio Managers, Product Leaders, CPO, CEO, CTO, Product Managers, Product Owners
Prerequisites: Beginner-level probabilistic forecasting, familiarity with portfolio-level decisions
Level: Basic

Extended Abstract:
Right now, around the world, managers and leaders are scratching their heads trying to answer the question “Should we take project A, B, or both?”. The techniques they are using are woefully inadequate to answer that question because they rely on a skill humans don’t possess: predicting the future!
Estimation as a portfolio and risk management strategy relies on our ability to predict the future. But we don’t have that skill! What can we use instead then?
In this talk, we explore how we can learn from the world of investment (risk management), industrial management (process control), and poker playing (thinking in bets) to create a powerful simulation strategy that will streamline and reassure your portfolio planning team. Unlike humans, simulation can take as many ideas as you can throw at it, and can come up with the most likely winning scenarios quickly, repeatably, and is infinitely adaptable to future surprises.
Why rely on estimation when we can rely on AI, Forecasting, and the near-infinite computing power we have in even the most humble of spreadsheet programs?
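
To give a concrete feel for the approach, here is a minimal Monte Carlo sketch in the spirit of probabilistic forecasting: instead of asking people to estimate, it resamples historical weekly throughput to answer "when will the backlog be done?". The throughput history and backlog size are invented.

```python
# Monte Carlo forecast: resample observed weekly throughput instead of
# estimating. History and backlog below are invented for illustration.
import random

history = [3, 5, 2, 6, 4, 4, 7, 3]   # items finished per week, observed
backlog = 60                          # items remaining in project A

def weeks_to_finish():
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(history)   # replay a random historical week
        weeks += 1
    return weeks

runs = sorted(weeks_to_finish() for _ in range(10_000))
p50, p85 = runs[len(runs) // 2], runs[int(len(runs) * 0.85)]
print(f"50% of simulations finish within {p50} weeks, 85% within {p85}")
# Running this per candidate project (or combination) turns "A, B, or
# both?" into a comparison of distributions rather than point estimates.
```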
Key Learnings

  • Basics of simulating portfolio decisions
  • Comparing simulation vs estimation for portfolio level decisions
  • Examples of simulation use in complex scenario assessment, with N >> 1 options for decision
  • How to effectively support decision making with simulation.

Vasco Duarte, a leading figure in the agile community, co-founded Agile Finland and hosts the popular Scrum Master Toolbox Podcast with over 10 million downloads. His book "NoEstimates" provides a unique approach to Agile, enhancing software development's sustainability and profitability. As a keynote speaker, he shares his expertise, empowering organizations to improve effectiveness, adaptability, and responsiveness. Vasco's contributions have reshaped the landscape of software development.

Daniel Vacanti is a 25-plus-year software industry veteran who has spent most of his career focusing on Lean and Agile practices. In 2007, he helped to develop Kanban as a strategy for knowledge work and managed the world’s first project implementation using Kanban that year. He has been conducting Lean-Agile training, coaching, and consulting ever since. In 2013 he founded ActionableAgile™, which provides industry-leading predictive analytics tools and services to organizations that utilize Lean-Agile practices. In 2014 he published his book, “Actionable Agile Metrics for Predictability”, which is the definitive guide to flow-based metrics and analytics. In 2017, he helped to develop the “Professional Scrum with Kanban” class with Scrum.org, and in 2018 he published his second book, “When Will It Be Done?”. Most recently, Daniel co-founded ProKanban.org, whose aim is to create a safe, diverse, inclusive community to learn about Kanban.

Vasco Duarte, Daniel S. Vacanti
Room 03

14:30 - 15:30
Do 3.3
NEW: Predicting the Future of Quality, Testing and Teams

The world is constantly changing. As IT professionals, we are aware of the intrinsic changeability of projects, contexts and our business, but the events of recent years have put this into sharper focus. How will external changes shape our teams and our work?
Alex looks at what factors are at work now and what kinds of effects they will have on how we work and on the roles of testers and software professionals. She will also look at activities on an individual and company level to best prepare ourselves for a nebulous future.

Target Audience: Everyone
Prerequisites: None
Level: Advanced

Extended Abstract:
The world is constantly changing, and everything is impermanent. As IT professionals, we are aware of the intrinsic changeability of projects, contexts and our business, but the events of recent years have put this into sharper focus. How will external changes shape our teams and our work?
How can we shape ourselves proactively in order to be able to respond to changes, make changes of our own and even thrive? Alex looks at what factors are at work now and what kinds of effects they will have on how we work and on the roles of testers and software professionals. She will also look at concrete activities on an individual and company level to best prepare ourselves for a nebulous future.

Alex Schladebeck is one of the managing directors at Bredex GmbH in Braunschweig, Germany with 7 years of managing experience, and 4 years of managing managers. An effervescent addition to any situation, Alex uses her unique style to her advantage in the corporate world of negotiating contracts and keeping a medium-sized company afloat amidst bigger players. Next to her leadership and strategic activities, Alex still works with customers in workshops about quality, agile and communication topics. She has won the award for “Most Influential Agile Testing Professional”, is a member of the ASQF Steering Committee and has been an active speaker since 2009 and international keynote speaker since 2016. Alex's current career grew from her years of expertise as an exploratory tester. Her presentations on the topic, with her contribution to the field through microheuristics, brought her to the realisation that the same skills of systematically exploring and categorising give her the best tools she could have to both succeed as a manager of managers, and teach others to do the same.

Alex Schladebeck
Room 12a

Friday, February 2, 2024
09:00 - 16:00
Fr 6
Exploratory Testing – Agile Testing on Steroids (limited capacity)

In this interactive training session, we will dive into the fascinating world of exploratory testing. Exploratory testing is a mindset and approach that empowers testers to uncover hidden defects, explore the boundaries of software systems, and provide valuable feedback to improve overall quality.
Through a combination of theory, practical examples, and hands-on exercises, participants will gain a solid understanding of exploratory testing principles and techniques, and learn how to apply them effectively in their testing efforts.

Max. number of participants: 12

Target Audience: Developers, Testers, Business Analysts, Scrum Masters, Project Managers
Prerequisites: None
Level: Basic

Extended Abstract:
In this interactive and engaging 3-hour training session, we will dive into the fascinating world of exploratory testing. Exploratory testing is a mindset and approach that empowers testers to uncover hidden defects, explore the boundaries of software systems, and provide valuable feedback to improve overall quality.
Through a combination of theory, practical examples, and hands-on exercises (with an e-commerce platform), participants will gain a solid understanding of exploratory testing principles and techniques, and learn how to apply them effectively in their testing efforts.
Whether you are a beginner or an experienced tester, this training will equip you with the skills and knowledge to become a more effective and efficient explorer of software.
Learning Outcomes:

  1. Understand the fundamentals of exploratory testing and its importance in software development.
  2. Learn various techniques and strategies for conducting exploratory testing.
  3. Develop the ability to identify high-risk areas and prioritize testing efforts during exploration.
  4. Acquire practical tips for documenting and communicating exploratory testing findings.
  5. Gain hands-on experience through interactive exercises to apply exploratory testing techniques.
  6. Enhance critical thinking and problem-solving skills to uncover hidden defects.
  7. Improve overall testing efficiency and effectiveness by incorporating exploratory testing into your testing process.
  8. Learn how to collaborate effectively with developers, product owners, and other stakeholders during exploratory testing.
  9. Gain insights into tools and technologies that can support and enhance exploratory testing activities.
  10. Leave with a comprehensive understanding of exploratory testing and the confidence to apply it in real-world scenarios.

Join us for this immersive training session, and unlock the potential of exploratory testing to uncover valuable insights and improve the quality of your software products.

Matthias Zax is an accomplished Agile Engineering Coach at Raiffeisen Bank International AG (RBI), where he drives successful digital transformations through agile methodologies. With a deep-rooted passion for software development, Matthias is a #developerByHeart who has honed his skills in software testing and test automation in the DevOps environment since 2018.
Matthias is a key driving force behind the RBI Test Automation Community of Practice, where he leads by example. He is a firm believer in the importance of continuous learning and innovation, which he actively promotes through his coaching and mentorship.
 

Matthias Zax
Room: Sissi
