On this page you will see only the English-language sessions of the conference. You can find all conference sessions, including the German-language ones, here.
The times given in the conference program of OOP 2024 correspond to Central European Time (CET).
By clicking on "VORTRAG MERKEN" ("bookmark this talk") within the lecture descriptions you can arrange your own schedule. You can view your schedule at any time using the icon in the upper right corner.
Topic: Quality Assurance
- Architecture – for Humans?
- C++ and possible Alternatives
- Domain-Driven Design expands our horizons
- Embedding AI into your Products: Practical Applications of Foundation Models
- Full Day Tutorial
- Half Day Tutorial
- Shaping the future: Overcoming Boundaries with New Ideas in Product Ownership, UX & Requirement Engineering
- Social Integration
- Software Architecture – Systematically Handling Quality Attributes
- Special Event
- Testing & Quality
- Thinking DevOps further
- Trends & Techniques
To expand our horizons in testing, we should ask ourselves the following questions:
- What did we learn from the history of testing?
- What did we miss and what did we forget?
- How can we do better testing in the future?
Therefore, in this interactive tutorial we will identify, discover, investigate, reflect, and discuss testing wisdoms from different categories to answer these questions and to expand our horizons – you are invited to bring your own top 3 testing wisdoms (I will bring my top n) and share them with your peers in this tutorial!
Max. number of participants: 50
Target Audience: Test Architects, Test Engineers, Software Architects, Developers, Product Owners, Quality Managers
Prerequisites: Basic knowledge about testing and quality engineering
Effective and efficient software and system development requires having superior test approaches in place and a strong commitment to quality in the whole team. Realizing the right mix of test methods and quality measures is no easy task in real project life, due to increasing demands for system reliability and cost efficiency, and market pressure for speed, flexibility, and sustainability.
To address these challenges and to expand our horizons in testing, we should ask ourselves the following questions:
- What did we learn from the history of testing?
- What did we miss and what did we forget?
- How can we do better testing in the future?
Therefore, in this interactive tutorial we will identify, discover, investigate, reflect, and discuss testing wisdoms from different categories (techniques, people, history) to answer these questions and at the same time to expand our horizons – you are invited to bring your own top 3 testing wisdoms (I will bring my top n) and share them with your peers in this tutorial!
Projected learning outcomes and lessons learned
- Get familiar with testing wisdoms – known and unknown, old and new.
- Learn and share experiences on how to discover and adopt testing wisdoms.
- Apply discussed testing wisdoms to improve your test approaches in the future!
Peter Zimmerer is a Principal Key Expert Engineer at Siemens AG, Technology, in Munich, Germany. For more than 30 years he has been working in the field of software testing and quality engineering. He performs consulting, coaching, and training on test management and test engineering practices in real-world projects and drives research and innovation in this area. As ISTQB® Certified Tester Full Advanced Level he is a member of the German Testing Board (GTB). Peter has authored several journal and conference contributions and is a frequent speaker at international conferences.
Test coverage: 100% - Check!
And why do we still have bugs?
OK, tests don't prove the absence of errors.
And at the end of the day, they are just code which could contain bugs as well.
And perhaps they give us a false sense of security.
But how do I know that my tests are good?
One way to find out is mutation testing.
In this talk I want to explain what mutation testing is, how to do it, and when it is helpful.
Target Audience: Developers, Architects, Testers
Prerequisites: Basic knowledge of programming, some experience in writing tests
More and more teams are writing tests for their production code, be it by applying concepts like TDD or BDD or by just writing them "after the fact". Sometimes there is also a test coverage metric that needs to be met. The positive effect is definitely that there are tests. Be it for my future self or a future colleague or as a form of documentation.
Tests are a means of telling something about the quality of production code. Mutation testing can help tell something about the quality of tests. It helps to find missing tests and potential bugs.
The concept of mutation testing is already more than 50 years old, but its application has not yet become widespread.
This talk should encourage you to take a closer look at mutation testing to find out what possibilities it offers in your own project, but also to see what disadvantages or pitfalls there are.
Birgit Kratz is a freelance software developer and consultant with more than 20 years of experience in the Java ecosystem.
Her domain as well as her passion is using agile development methods and spreading software-crafting ideas.
That is why she has been a co-organizer of the German software crafting community (Softwerkskammer) events in Cologne and Düsseldorf for many years.
She also helps organize the SoCraTes conference (Software Crafting and Testing Conference).
To balance her job activities, she rides her road bike quite extensively.
How often have you heard that ‘Yes this is important, but we don’t have the capacity right now’ or ‘sure let’s put it in the backlog’?
This is something we should not brush off or take lightly. Accessibility testing is vital especially when your product is a user facing application.
We need to be socially aware as a team and build quality into our product by making it more accessible.
Target Audience: Everyone, as accessibility is about social awareness
At least 1 in 5 people in the UK have a long term illness, impairment or disability. Many more have a temporary disability. A recent study found that 4 in 10 local council homepages failed basic tests for accessibility.
This is vital, and the sooner we as testers advocate for it within our teams, the sooner we make our products more accessible, reduce the risk of bad product reviews and reputational damage, and become more socially aware. Let's shift left and build accessibility testing into our teams.
- Understand why accessibility testing is important
- How I adopted an accessibility mindset
- How to coach your team and bring accessibility into it
- Demonstrate various tools available to perform accessibility testing (with demo)
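As a taste of what automated accessibility checks do, here is a minimal sketch (my own illustration, not one of the session's demo tools) that flags `<img>` elements without a text alternative, using only Python's standard library:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags without an alt attribute -- one of the most
    common accessibility failures (WCAG 1.1.1, non-text content)."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # (line, column)

checker = MissingAltChecker()
checker.feed('<main><img src="logo.png" alt="Company logo">'
             '<img src="chart.png"></main>')
# One violation: the second image has no text alternative.
```

Dedicated tools such as axe, WAVE, or Lighthouse cover far more WCAG criteria, and automated checks catch only a fraction of accessibility issues, which is why coaching the team's mindset matters just as much.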
The tech world is ever growing, and Laveena Ramchandani has been working in it for 10 years now. She works in testing and quality assurance, a good mix of technical and business awareness. Laveena has learned a lot through her career, looks forward to gaining more knowledge, and at the same time aims to inspire and spread testing eminence around the world.
Laveena Ramchandani is an experienced Software Testing Manager with a comprehensive understanding of the tools available for software testing and analysis. She aims to provide valuable, technically grounded insights and hopes to inspire others through her work, blogs, and podcasts; she regularly speaks at events on data science models and other topics.
Come and hear the story of a company that is on the journey from the old monolithic, on-premise, waterfall world to the new modular, agile, domain-driven, multi-tenant, cloud-based microservices world. The challenges come from different directions: both technical and organizational aspects have to be mastered. The domain has to be understood, so that the system can be structured right. The big bang has to be avoided.
In this talk we will look at how our “fictional” company has struggled with and finally overcome those challenges.
Target Audience: Architects, Developers, Project Leaders, Managers
Prerequisites: Programming Experience
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/henning.schwentner
Henning Schwentner loves programming in high quality. He lives this passion as coder, coach, and consultant at WPS – Workplace Solutions, where he helps teams to restructure their grown monoliths or to build new systems with a sustainable architecture from the start; microservices or self-contained systems are a frequent outcome. Henning is the author of "Domain Storytelling" (Addison-Wesley, 2022), "Domain-Driven Transformation" (dpunkt, 2023), and the LeasingNinja.io, as well as the translator of "Domain-Driven Design kompakt" (dpunkt, 2017).
While AI systems differ in some points from "traditional" systems, testing them does not have to be more difficult - knowing the right questions to ask will go a long way. In this talk we will:
- Arm you with a checklist of questions to ask when preparing to test an AI system
- Show you that testers and data scientists have common ground when testing AI systems
Keep calm and test on - AI systems are not that different from "normal" systems.
Target Audience: Testers, Data Scientists, Developers, Product Owners, Architects
Prerequisites: Basic knowledge of software testing
If you're a tester about to test your first AI system, or wanting to move into that area, you're probably wondering how you can prepare for the role. While we usually do not deal with complexity of the magnitude of large language models like ChatGPT, AI systems still seemingly pose different challenges than "traditional" systems.
You're not the first person to deal with these questions. In fact, a group of us got together to explore it in more detail. Is there a general framework of questions that testers can use to help develop a quality strategy for systems that use AI? We wanted to see if we could design one. To this end, we got together a group with diverse roles: tester, test architect, data scientist, project lead and CEO.
Join us in this talk to hear how we approached the task and what our results are, including an example of using our checklist of questions to analyse a system that uses AI. Along the way we also addressed questions like "What is the role of a tester in such projects?" and "How much math do I need?" - we'll talk about those discussions, too. We encourage participants to use our checklist and give us feedback on it!
Gregor Endler holds a doctorate in computer science for his thesis on the completeness of timestamped data.
At codemanufaktur GmbH he works on machine learning and data analysis.
Marco Achtziger works for Siemens Healthineers in Forchheim. He holds several iSTQB and iSQI qualifications and is a Siemens AG certified Senior Software Architect, but a test architect at heart. In that area he also works as a trainer in a Siemens AG/Healthineers-wide training program for test architects. He always seeks to exchange knowledge and experiences with other companies to make sure that we all learn from each other, also as a speaker at conferences such as OOP and the Agile Testing Days.
The introduction of ChatGPT and Copilot X has brought a lot of hype to developer experience, especially documentation. But are we ready to chat with our documentation instead of reading it, using these tools? How can we, as maintainers, leverage these tools to offer developers, as our users, a better documentation experience? Join my talk and let's find out.
Target Audience: Engineers, Developers
Discover the transformative power of GenAI in software testing. This lecture showcases a powerful GenAI-powered test framework that enhances testing efficiency. Learn how GenAI analyzes applications to generate automated test cases, and how generative AI's random exploratory tests uncover hidden defects. Experience AI-powered peer reviewers for code analysis and quality evaluations. Explore Smart Report AI, providing comprehensive analysis and insights into test execution, results, and defects. Join us to revolutionize your software testing with GenAI.
Target Audience: Quality Engineers, Architects, Developers, Project Leaders, Managers
Prerequisites: Basic knowledge of modern software development
In the rapidly evolving landscape of software development, Generative AI holds immense potential to revolutionize the field of software testing. This lecture aims to explore the transformative capabilities of GenAI in software testing by showcasing the powerful GenAI-powered test framework and its key components.
The lecture begins by introducing the main framework, which combines the power of GenAI and automation to enhance the efficiency and effectiveness of software testing. Attendees will learn how GenAI can analyze the application under test and generate automated test cases based on a predefined test framework. These test cases can be executed locally or seamlessly integrated into DevOps processes, allowing for efficient and comprehensive testing of applications.
Furthermore, the lecture delves into the utilization of generative AI for the generation of random exploratory tests. By leveraging the capabilities of GenAI, testers can uncover hidden defects and vulnerabilities that may go unnoticed with traditional testing approaches. This demonstration highlights the innovative potential of GenAI in driving comprehensive test coverage.
Additionally, the lecture showcases the Smart Peer AI component of the framework. Attendees will discover how AI-powered peer reviewers can anticipate functional and non-functional issues, providing insightful code analysis and quality evaluations. By incorporating specific coding best practices, architectural standards, and non-standard attributes, Smart Peer AI enhances code quality and accelerates the development process.
Finally, the lecture concludes by presenting the Smart Report AI, an AI-powered solution that analyzes test reports generated from automated testing processes. This component provides a comprehensive analysis of test execution, environment configuration, test results, defect severity automated analysis, reproducibility steps, and suggestions for fixes. Smart Report AI empowers testers with valuable insights and facilitates effective decision-making in the testing and development lifecycle.
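The framework's internals are not public, so the following is only an illustrative sketch of the first component's core idea: assembling a prompt that asks a model to generate test cases from an application description. The function name, prompt wording, and application details are my own assumptions:

```python
# Illustrative only: sketches the test-generation step of a GenAI-powered
# framework. In a real setup the returned prompt would be sent to a model
# API and the generated tests fed into a CI/DevOps pipeline.

def build_test_generation_prompt(app_description, test_framework="pytest"):
    """Assemble a prompt asking a model for automated test cases."""
    return (
        f"You are a software test engineer.\n"
        f"Application under test: {app_description}\n"
        f"Generate {test_framework} test cases covering happy paths, "
        f"boundary values, and error handling. "
        f"Return only runnable code."
    )

prompt = build_test_generation_prompt(
    "REST endpoint POST /orders accepting JSON with fields "
    "'item' (string) and 'quantity' (positive integer)"
)
```

The generated test cases would then be reviewed and versioned like hand-written ones; the lecture's other components (peer review, exploratory tests, reporting) wrap further model calls around the same pipeline.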
Davide Piumetti is a Technology Architect with expertise in software testing and part of the Swiss quality engineering leadership. With over 15 years of experience, he drives innovative projects in Big Data, Generative AI, and Test@DevOps. He is also exploring the potential of high-performance computing and quantum computing in software testing. Committed to pushing boundaries, he shapes the future of quality engineering through innovation and continuous improvement.
In the evolving AI landscape, the EU AI Act introduces new standards for assuring high-risk AI systems. This presentation will explore the tester's role in navigating these standards, drawing from the latest research and from our experiences with an Automatic Employment Decision System, a high-risk AI. We'll discuss emerging methodologies, conformity assessments, and post-deployment monitoring, offering insights and practical guidance for aligning AI systems with the Act's requirements.
Target Audience: QA Professionals, AI Engineers/Architects, Business Leaders, POs/PMs, Policy Makers
Prerequisites: Basic Understanding of AI, Familiarity with Testing, Awareness of EU AI Act, Interest in AI Assurance
As the landscape of Artificial Intelligence (AI) rapidly evolves, the upcoming EU AI Act is set to introduce a new paradigm for assuring high-risk AI systems. This session will allow participants to delve into the pivotal role of testers in this context. We will decode the complexities of the Act and get to see how fostering the Act will ensure robust, transparent, and ethically aligned AI systems.
Drawing on recent research and my own experience in testing high-risk AI systems, I will discuss the emerging methodologies for testing high-risk AI, including explainability methods, robustness testing, and fairness testing. Together, we will also explore the Act's emphasis on conformity assessments and post-deployment monitoring, highlighting the tester's role in these processes. Participants will gain a unique behind-the-scenes look at how we have gone from chaos to order in testing an Automatic Employment Decision System, a high-risk AI.
Joining this session will equip participants with valuable insights and practical guidance on aligning AI systems with the EU AI Act. This is a must-attend for testers, AI developers, and business leaders alike who are navigating this new frontier, exploring the challenges, opportunities, and future directions in the assurance of high-risk AI systems. By the end of the session, participants will be better prepared to face the challenges posed by the EU AI Act and will have a clear understanding of the future directions in the assurance of high-risk AI systems.
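As one concrete example of the fairness testing mentioned above, here is a minimal sketch of a demographic parity check for an employment-decision system. The data, threshold, and metric choice are illustrative assumptions, not details of the speaker's actual system:

```python
def selection_rate(decisions):
    """Fraction of positive (e.g. 'invite to interview') outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_group_a, decisions_group_b):
    """Absolute gap in selection rates between two protected groups.
    The EU AI Act does not fix a numeric threshold; 0.1 is a common
    heuristic, the 'four-fifths rule' (a ratio test) is another."""
    return abs(selection_rate(decisions_group_a)
               - selection_rate(decisions_group_b))

# 1 = positive decision, 0 = negative decision
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 3/8 = 0.375
gap = demographic_parity_difference(group_a, group_b)  # 0.25 -> review
```

A real conformity assessment would combine several such metrics with robustness and explainability checks, and repeat them in post-deployment monitoring as the input population drifts.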
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/andrei.nutas
Andrei Nutas is an AI Assurance Consultant with over 7 years of industry experience. For the past year, among other things, Andrei has helped Nagarro align its Automated Employment Decision System to the upcoming EU AI Act. In his free time, he is a research fellow with the West University of Timisoara where he focuses on AI Alignment and AI Ethics.
One of the fundamental challenges for machine learning (ML) teams is data quality, or more accurately the lack of data quality. Your ML solution is only as good as the data that you train it on, and therein lies the rub: Is your data of sufficient quality to train a trustworthy system? If not, can you improve your data so that it is? You need a collection of data quality “best practices”, but what is “best” depends on the context of the problem that you face. Which of the myriad of strategies are the best ones for you?
Target Audience: Developers, Data Engineers, Managers, Decision Makers
This presentation compares over a dozen traditional and agile data quality techniques on five factors: timeliness of action, level of automation, directness, timeliness of benefit, and difficulty to implement. The data quality techniques explored include: data cleansing, automated regression testing, data guidance, synthetic training data, database refactoring, data stewards, manual regression testing, data transformation, data masking, data labeling, and more. When you understand what data quality techniques are available to you, and understand the context in which they’re applicable, you will be able to identify the collection of data quality techniques that are best for you.
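To make one of the listed techniques concrete, here is a minimal sketch of automated regression testing applied to training data; the field names and validation rules are illustrative assumptions, not part of the presentation:

```python
# Automated data regression test: run the same rule set against every new
# batch of training records and compare violation counts over time.

def validate_training_records(records):
    """Return per-rule violation counts for a batch of training records."""
    violations = {"missing_label": 0, "age_out_of_range": 0, "duplicate_id": 0}
    seen_ids = set()
    for r in records:
        if r.get("label") is None:
            violations["missing_label"] += 1
        if not (0 <= r.get("age", -1) <= 120):
            violations["age_out_of_range"] += 1
        if r["id"] in seen_ids:
            violations["duplicate_id"] += 1
        seen_ids.add(r["id"])
    return violations

batch = [
    {"id": 1, "age": 34, "label": "approved"},
    {"id": 2, "age": 150, "label": "rejected"},  # age out of range
    {"id": 2, "age": 41, "label": None},         # duplicate id, missing label
]
report = validate_training_records(batch)
```

In the presentation's terms this technique is highly automated and acts early (before training), whereas a technique like data cleansing acts later and more directly on the stored values.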
Scott Ambler is an Agile Data Coach and Consulting Methodologist with Ambysoft Inc., leading the evolution of the Agile Data and Agile Modeling methods. Scott was the (co-)creator of PMI’s Disciplined Agile (DA) tool kit and helps organizations around the world to improve their way of working (WoW) and ways of thinking (WoT). Scott is an international keynote speaker and the (co-)author of 30 books.
Are you lost when folks talk about "quality" in the context of software? Just when you thought "high quality" means "good" and "QA" means "assure it's good", somebody hits you over the head with ISO 25010, where "quality" is just a neutral property of a software system. It's all a big happy pile of terminology quicksand where you sink faster the more you struggle for unambiguous and clear definitions. But we're here to help you out. We'll be looking at what's relevant about quality from a software architecture perspective.
Target Audience: Architects, Developers
Among the prominent confusions is "quality requirements" vs. "functional requirements" - someone is sure to tell you that the first one is something not entirely unlike “non-functional” requirements. But if that distinction even is a thing, what's that "functional suitability" category of the ISO 25010 quality model? And where exactly do requirements even enter the picture? Once they do, how do we tell whether they are satisfied or not? And if they're not, how does all this terminology help with devising tactics for making things better? We'll separate out the taxonomy from the metrology, the metrology from the requirements, the functional from the non-functional (as far as this makes sense) and everything in between. We'll survey different ways of looking at quality and identify the murky areas where you need to be explicit about what you mean.
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/michael.sperber
Dr. Michael Sperber is CEO of Active Group GmbH in Tübingen, Germany. He is an internationally recognized expert in functional programming and functional architecture, and has authored many papers on the subject as well as several books. Mike is co-founder of the blog funktionale-programmierung.de and co-organizer of the developer conference BOB. He is also an accredited iSAQB trainer, one of the primary authors and curator of its FUNAR ("Funktionale Software-Architektur") and DSL curricula, and a member of iSAQB's Foundation working group.
Alexander Lorz is a member of the iSAQB Foundation Level Working Group, head of the WG Train-the-Trainer, and author of the CPSA-F reference training material for international training providers.
Managers and leaders worldwide struggle to decide between projects A, B, or both. Traditional estimation techniques fail because humans can't predict the future. This talk proposes a simulation-based approach inspired by investment strategies, industrial management, and poker playing. By leveraging AI, forecasting, and computing power, simulations offer a reliable and adaptable portfolio planning strategy. Rather than relying on human estimation, simulations streamline decision-making and provide reassurance.
Target Audience: Portfolio Managers, Product Leaders, CPO, CEO, CTO, Product Managers, Product Owners
Prerequisites: Beginner level probabilistic forecasting, familiarity with portfolio level decisions
Right now, around the world, managers and leaders are scratching their heads to try and answer the question "Should we take project A, B, or both?". The techniques they are using are woefully inadequate to answer their question because they rely on a skill humans don't possess: predicting the future!
Estimation as a portfolio and risk management strategy relies on our ability to predict the future. But we don’t have that skill! What can we use instead then?
In this talk, we explore how we can learn from the world of investment (risk management), industrial management (process control), and poker playing (thinking in bets) to create a powerful simulation strategy that will streamline and reassure your portfolio planning team. Unlike humans, simulation can take as many ideas as you can throw at it, and can come up with the most likely winning scenarios quickly, repeatably, and is infinitely adaptable to future surprises.
Why rely on estimation when we can rely on AI, Forecasting, and the near-infinite computing power we have in even the most humble of spreadsheet programs?
- Basics of simulating portfolio decisions
- Comparing simulation vs estimation for portfolio level decisions
- Examples of simulation use in complex scenario assessment, with N >> 1 options for decision
- How to effectively support decision making with simulation.
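The basics of such a simulation fit in a few lines. This sketch (illustrative numbers, not from the talk) forecasts two candidate projects by Monte Carlo sampling of historical weekly throughput instead of asking anyone to estimate:

```python
import random

def weeks_to_finish(backlog_items, weekly_throughput_samples,
                    runs=10_000, rng=None):
    """Simulate completion many times; return the sorted week counts."""
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    results = []
    for _ in range(runs):
        remaining, weeks = backlog_items, 0
        while remaining > 0:
            # each simulated week, sample a throughput from history
            remaining -= rng.choice(weekly_throughput_samples)
            weeks += 1
        results.append(weeks)
    return sorted(results)

def percentile(sorted_results, p):
    """p-th percentile, e.g. 0.85 for an '85% confident' forecast."""
    return sorted_results[int(p * (len(sorted_results) - 1))]

history = [3, 5, 2, 6, 4, 4, 1, 5]       # items finished per past week
project_a = weeks_to_finish(60, history)  # candidate A: 60 backlog items
project_b = weeks_to_finish(45, history)  # candidate B: 45 backlog items
forecast_a = percentile(project_a, 0.85)  # weeks, at 85% confidence
forecast_b = percentile(project_b, 0.85)
```

With the same throughput history the smaller backlog finishes first at any confidence level; in a real portfolio you would sample per-team histories and compare scenarios such as "A first, then B" against "A and B in parallel".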
Vasco Duarte, a leading figure in the agile community, co-founded Agile Finland and hosts the popular Scrum Master Toolbox Podcast with over 10 million downloads. His book "NoEstimates" provides a unique approach to Agile, enhancing software development's sustainability and profitability. As a keynote speaker, he shares his expertise, empowering organizations to improve effectiveness, adaptability, and responsiveness. Vasco's contributions have reshaped the landscape of software development.
Daniel Vacanti is a 25-plus year software industry veteran who has spent most of his career focusing on Lean and Agile practices. In 2007, he helped to develop Kanban as a strategy for knowledge work and managed the world's first project implementation using Kanban that year. He has been conducting Lean-Agile training, coaching, and consulting ever since. In 2013 he founded ActionableAgile™, which provides industry-leading predictive analytics tools and services to organizations that utilize Lean-Agile practices. In 2014 he published his book, "Actionable Agile Metrics for Predictability", the definitive guide to flow-based metrics and analytics. In 2017, he helped to develop the "Professional Scrum with Kanban" class with Scrum.org, and in 2018 he published his second book, "When Will It Be Done?". Most recently, Daniel co-founded ProKanban.org, whose aim is to create a safe, diverse, inclusive community to learn about Kanban.
Today's systems require a wide range of qualities: always online, fast, robust, elastic, scalable, and secure, or whatever else your stakeholders understand by quality.
I explain what software development projects need: specific, concrete, and verifiable quality requirements, and why existing standards (such as ISO) fall short in this respect.
Finally, I show a pragmatic, lightweight (open-source) approach that leads to specific and actionable quality requirements.
Today's systems require an impressive range of qualities: always online, fast, robust, elastic, scalable, and secure, or whatever else your stakeholders understand by quality.
And this is exactly where the problem begins: stakeholders often do not know precisely what their quality requirements are. And when they do know, those requirements often remain implicit.
I explain what software development projects need: specific, concrete, and verifiable quality requirements, and why existing standards (such as ISO) fall short in this respect.
Finally, I show a pragmatic, lightweight approach that leads to specific and actionable quality requirements.
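To illustrate what "specific, concrete, and verifiable" can look like, here is a sketch of a quality requirement written as a measurable scenario. The field names follow the common stimulus/response scenario style and are my own illustration, not necessarily the speaker's template:

```python
from dataclasses import dataclass

@dataclass
class QualityScenario:
    """A quality requirement as a concrete, checkable scenario,
    rather than a vague '-ility' such as 'the system shall be fast'."""
    quality: str      # which quality attribute this refines
    stimulus: str     # what happens
    environment: str  # under which conditions
    response: str     # expected system behaviour
    measure: str      # how we verify it

checkout_latency = QualityScenario(
    quality="performance efficiency",
    stimulus="a customer submits the checkout form",
    environment="normal load, 500 concurrent sessions",
    response="the order confirmation page is rendered",
    measure="95th percentile response time below 800 ms",
)

def is_checkable(s: QualityScenario) -> bool:
    """Crude heuristic: a verifiable scenario names a number to measure."""
    return any(ch.isdigit() for ch in s.measure)
```

The point is the shape, not the tooling: once a requirement carries a stimulus, an environment, and a numeric measure, it can be tested, monitored, and negotiated with stakeholders.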
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/gernot.starke
Dr. Gernot Starke, INNOQ Fellow, works as a coach and consultant for software development and architecture. He is co-founder and maintainer of the open-source projects arc42 (https://arc42.de) and aim42 (https://aim42.github.io), a book author, and an occasional conference speaker.
The world is constantly changing. As IT professionals, we are aware of the intrinsic changeability of projects, contexts and our business, but the events of the last years have put this into sharper focus. How will external changes shape our teams and our work?
Alex looks at what factors are at work now and what kinds of effects they will have on how we work and on the roles of testers and software professionals. She will also look at activities on an individual and company level, to best prepare ourselves for a nebulous future.
Target Audience: Everyone
The world is constantly changing, and everything is impermanent. As IT professionals, we are aware of the intrinsic changeability of projects, contexts and our business, but the events of the last years have put this into sharper focus. How will external changes shape our teams and our work?
How can we shape ourselves proactively in order to be able to respond to changes, make changes of our own, and even thrive? Alex looks at what factors are at work now and what kinds of effects they will have on how we work and on the roles of testers and software professionals. She will also look at concrete activities on an individual and company level, to best prepare ourselves for a nebulous future.
Alex Schladebeck is a whirlwind of enthusiasm for quality, agility and humans. She started out in testing and had an interesting and varied career as a product owner, consultant and team leader before becoming a part of the management team at the beginning of 2020.
She spends her time communicating with people! A typical week involves working with customers, teaching and coaching testers and developers about quality, being an agile leader, working on strategy and developing her team to fulfil their potential. She keeps up to date on her favourite topics by supporting and consulting for teams and customers.
Alex is a frequent speaker and keynote speaker at conferences about agility and quality from her experiences in projects and with customers, and she was awarded the Most Influential Agile Testing Professional Person award in 2018. In her free time, she loves doing sports, playing music and being an auntie. She describes herself as an explorer and loves discovering places, cultures, perspectives and people.
Digital accessibility is picking up speed and is an absolute trend topic. In a sense, everything has already been said about it; it just needs to be put into practice. It is crucial to integrate accessibility systematically into the workflows of designers and developers. Three main questions are in focus:
- Why is accessibility often not prioritized in projects?
- What needs and challenges do designers & developers have?
- How can accessibility be integrated into development processes?
Target Audience: Designers, Developers, Project Leaders, Managers, Decision Makers
The further a project has progressed, the more difficult it becomes to incorporate accessibility. The problem with many resources and tools is their retrospective nature: they assess the accessibility of a product once it is already finished. At that point, changes are often too costly or time is too short. So how does our approach ensure that accessibility flows into every phase of the process?
- Provide action-oriented tasks: Designers and developers need actionable information. Guidelines and standards are often formulated too abstractly to be operationalized in processes.
- Create clarity about responsibilities: Each task is assigned to a concrete role. Accessibility is the responsibility of all team members, not only of developers.
- Show task dependencies: This fosters collaboration among project participants by making visible which tasks depend on each other.
- "On demand" instead of information overload: Each task is assigned to a concrete project phase. This ensures that only essential information reaches the right person at the right time.
Franziska Kroneck is studying in the master's program User Experience Design (UXD). She has 5 years of professional experience in UXD at companies such as sepp.med, Bosch Safety Systems, Cariad, and msg systems.
Dr. Andrea Nutsi has been working as a Senior UX Consultant at msg systems ag since 2018, focusing on user research and user testing across industries. Before that, she completed a doctorate in media informatics.
In this interactive training session, we will dive into the fascinating world of exploratory testing. Exploratory testing is a mindset and approach that empowers testers to uncover hidden defects, explore the boundaries of software systems, and provide valuable feedback to improve overall quality.
Through a combination of theory, practical examples, and hands-on exercises, participants will gain a solid understanding of exploratory testing principles and techniques, and learn how to apply them effectively in their testing efforts.
Max. number of participants: 12
Target Audience: Developers, Testers, Business Analysts, Scrum Masters, Project Managers
In this interactive and engaging 3-hour training session, we will dive into the fascinating world of exploratory testing. Exploratory testing is a mindset and approach that empowers testers to uncover hidden defects, explore the boundaries of software systems, and provide valuable feedback to improve overall quality.
Through a combination of theory, practical examples, and hands-on exercises (with an e-commerce platform), participants will gain a solid understanding of exploratory testing principles and techniques, and learn how to apply them effectively in their testing efforts.
Whether you are a beginner or an experienced tester, this training will equip you with the skills and knowledge to become a more effective and efficient explorer of software.
- Understand the fundamentals of exploratory testing and its importance in software development.
- Learn various techniques and strategies for conducting exploratory testing.
- Develop the ability to identify high-risk areas and prioritize testing efforts during exploration.
- Acquire practical tips for documenting and communicating exploratory testing findings.
- Gain hands-on experience through interactive exercises to apply exploratory testing techniques.
- Enhance critical thinking and problem-solving skills to uncover hidden defects.
- Improve overall testing efficiency and effectiveness by incorporating exploratory testing into your testing process.
- Learn how to collaborate effectively with developers, product owners, and other stakeholders during exploratory testing.
- Gain insights into tools and technologies that can support and enhance exploratory testing activities.
- Leave with a comprehensive understanding of exploratory testing and the confidence to apply it in real-world scenarios.
Join us for this immersive training session, and unlock the potential of exploratory testing to uncover valuable insights and improve the quality of your software products.
Matthias Zax is an accomplished Agile Engineering Coach at Raiffeisen Bank International AG (RBI), where he drives successful digital transformations through agile methodologies. With a deep-rooted passion for software development, Matthias is a #developerByHeart who has honed his skills in software testing and test automation in the DevOps environment since 2018.
Matthias is a key driving force behind the RBI Test Automation Community of Practice, where he leads by example. He is a firm believer in the importance of continuous learning and innovation, which he actively promotes through his coaching and mentorship.