Please note:
On this page you will only see the English-language presentations of the conference. You can find all conference sessions, including the German-speaking ones, here.
The times given in the conference program of OOP 2024 correspond to Central European Time (CET).
By clicking on "VORTRAG MERKEN" within the lecture descriptions you can arrange your own schedule. You can view your schedule at any time using the icon in the upper right corner.
Topic: Testing
Monday 29.01. – Tuesday 30.01. – Wednesday 31.01. – Thursday 01.02. – Friday 02.02.
To expand our horizons in testing, we should ask ourselves the following questions:
- What did we learn from the history of testing?
- What did we miss and what did we forget?
- How can we do better testing in the future?
Therefore, in this interactive tutorial we will identify, discover, investigate, reflect, and discuss testing wisdoms from different categories to answer these questions and to expand our horizons – you are invited to bring your own top 3 testing wisdoms (I will bring my top n) and share them with your peers in this tutorial!
Max. number of participants: 50
Target Audience: Test Architects, Test Engineers, Software-Architects, Developers, Product Owners, Quality Managers
Prerequisites: Basic knowledge about testing and quality engineering
Level: Advanced
Extended Abstract:
Effective and efficient software and system development requires superior test approaches and a strong commitment to quality across the whole team. Realizing the right mix of test methods and quality measures is no easy task in real-world projects, given increasing demands for system reliability and cost efficiency as well as market pressure for speed, flexibility, and sustainability.
To address these challenges and to expand our horizons in testing, we should ask ourselves the following questions:
- What did we learn from the history of testing?
- What did we miss and what did we forget?
- How can we do better testing in the future?
Therefore, in this interactive tutorial we will identify, discover, investigate, reflect, and discuss testing wisdoms from different categories (techniques, people, history) to answer these questions and at the same time to expand our horizons – you are invited to bring your own top 3 testing wisdoms (I will bring my top n) and share them with your peers in this tutorial!
Projected learning outcomes and lessons learned
- Get familiar with testing wisdoms – known and unknown, old and new.
- Learn and share experiences on how to discover and adopt testing wisdoms.
- Apply discussed testing wisdoms to improve your test approaches in the future!
Peter Zimmerer is a Principal Key Expert Engineer at Siemens AG, Technology, in Garching, Germany. For more than 30 years he has been working in the field of software testing and quality engineering. He performs consulting, coaching, and training on test management and test engineering practices in real-world projects and drives research and innovation in this area. As ISTQB® Certified Tester Full Advanced Level he is a member of the German Testing Board (GTB). Peter has authored several journal and conference contributions and is a frequent speaker at international conferences.
Test coverage: 100% - Check!
And why do we still have bugs?
OK, tests don't prove the absence of errors.
And at the end of the day, they are just code which could contain bugs as well.
And perhaps they give us a false sense of security.
But how do I know that my tests are good?
One way to find out is using Mutation Testing.
In this talk I want to explain what Mutation Testing is, how to do it, and when it is helpful.
Target Audience: Developers, Architects, Testers
Prerequisites: Basic knowledge in Programming, some experience in writing tests
Level: Basic
Extended Abstract:
More and more teams are writing tests for their production code, be it by applying concepts like TDD or BDD or by just writing them "after the fact". Sometimes there is also a test coverage metric that needs to be met. The positive effect is definitely that there are tests – be it for my future self, a future colleague, or as a form of documentation.
Tests are a means of telling something about the quality of production code. Mutation testing can help tell something about the quality of tests. It helps to find missing tests and potential bugs.
The concept of mutation testing is already more than 50 years old, but its application has not yet become widespread.
This talk should encourage you to take a closer look at mutation testing to find out what possibilities it offers in your own project, but also to see what disadvantages or pitfalls there are.
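A minimal illustration of the idea (not taken from the talk, just a sketch assuming Java and JUnit 5): a tiny production method, a test that reaches 100 % line coverage but lets a boundary mutant survive, and the additional tests that kill it.

    // Discount.java - production code under test
    public class Discount {
        // Customers aged 18 or older get the adult rate.
        public static boolean isAdult(int age) {
            return age >= 18;
        }
    }

    // DiscountTest.java - test code
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class DiscountTest {

        @Test
        void thirtyYearOldIsAdult() {
            // Alone, this test gives 100 % line coverage of isAdult(),
            // yet a mutant that changes ">=" into ">" still passes - it survives.
            assertTrue(Discount.isAdult(30));
        }

        @Test
        void exactlyEighteenIsAdult() {
            // Boundary value: kills the ">=" to ">" mutant.
            assertTrue(Discount.isAdult(18));
        }

        @Test
        void seventeenIsNotAdult() {
            // Kills mutants that negate or remove the comparison.
            assertFalse(Discount.isAdult(17));
        }
    }

Tools such as PIT (pitest.org) generate and run such mutants automatically; in a Maven project with the pitest-maven plugin (plus the pitest-junit5-plugin for JUnit 5) the report can typically be produced with mvn org.pitest:pitest-maven:mutationCoverage.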
Birgit Kratz is a freelance software developer and consultant with more than 20 years of experience in the Java ecosystem.
Her domain as well as her passion is using agile development methods and spreading software-crafting ideas.
This is why she has been a co-organizer of the German software crafting community (Softwerkskammer) events in Cologne and Düsseldorf for many years.
She also helps organize the SoCraTes conference (Software Crafting and Testing Conference).
To balance her job activities she rides her road bike quite extensively.
How often have you heard ‘Yes, this is important, but we don’t have the capacity right now’ or ‘Sure, let’s put it in the backlog’?
This is something we should not brush off or take lightly. Accessibility testing is vital especially when your product is a user facing application.
We need to be socially aware as a team and build quality into our product by making it more accessible.
Target Audience: Everyone, as accessibility is a matter of social awareness
Prerequisites: None
Level: Basic
Extended Abstract:
At least 1 in 5 people in the UK have a long-term illness, impairment or disability. Many more have a temporary disability. A recent study found that 4 in 10 local council homepages failed basic tests for accessibility.
This is vital: the sooner we as testers advocate for accessibility within our teams, the sooner we make our product more accessible, reduce the risk of bad product reviews and reputational damage, and become more socially aware. Let's shift left and build accessibility testing into our teams.
- Understand why accessibility testing is important
- How I adopted an accessibility mindset
- How to coach your team and bring accessibility into it
- Demonstrate various tools available to perform accessibility testing (with demo) – a minimal automated check is sketched below
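As a small, hedged illustration (the tools shown in the demo may differ), the following Java/Selenium sketch flags images without alternative text, one of the most common basic accessibility findings:

    import java.util.List;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class ImageAltTextCheck {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                // Placeholder URL - point this at the page under test.
                driver.get("https://example.org/");

                // Report every <img> element without an alt attribute.
                // Note: purely decorative images may legitimately carry alt="",
                // so the findings still need human review.
                List<WebElement> images = driver.findElements(By.tagName("img"));
                for (WebElement img : images) {
                    String alt = img.getAttribute("alt");
                    if (alt == null || alt.isBlank()) {
                        System.out.println("Missing alt text: " + img.getAttribute("src"));
                    }
                }
            } finally {
                driver.quit();
            }
        }
    }

Dedicated accessibility engines such as axe-core cover far more WCAG rules than a hand-rolled check like this and are usually the better starting point.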
Laveena Ramchandani is a dedicated and experienced Test Manager with over 10 years of experience in the field. She is deeply passionate about continuous learning and sharing knowledge within the testing community. As a leader in data science testing and testing in general, Laveena has significantly contributed to enhancing the skillsets of individuals within the industry, particularly through her impactful presence on digital platforms.
In 2022, Laveena was recognized as a finalist for the prestigious Digital Star Award at the everywoman in Technology Awards. In addition to her recognition, she has been featured on several podcasts and blogs, serves as a trainer for new testers with The Coders Guild, and is a sought-after international speaker. Her commitment to the community continues to inspire and shape the next generation of testers.
While AI systems differ in some points from "traditional" systems, testing them does not have to be more difficult - knowing the right questions to ask will go a long way. In this talk we will:
- Arm you with a checklist of questions to ask when preparing to test an AI system
- Show you that testers and data scientists have common ground when testing AI systems
Keep calm and test on - AI systems are not that different from "normal" systems.
Target Audience: Testers, Data Scientists, Developers, Product Owners, Architects
Prerequisites: Basic knowledge of software testing
Level: Advanced
Extended Abstract:
If you're a tester about to test your first AI system, or wanting to move into that area, you're probably wondering how you can prepare for the role. While we usually do not deal with complexity on the scale of large language models like ChatGPT, AI systems still seem to pose different challenges than "traditional" systems.
You're not the first person to deal with these questions. In fact, a group of us got together to explore them in more detail. Is there a general framework of questions that testers can use to help develop a quality strategy for systems that use AI? We wanted to see if we could design one. To this end, we brought together a group with diverse roles: tester, test architect, data scientist, project lead, and CEO.
Join us in this talk to hear how we approached the task and what our results are, including an example of using our checklist of questions to analyse a system that uses AI. Along the way we also addressed questions like "What is the role of a tester in such projects?" and "How much math do I need?" - we'll talk about those discussions, too. We encourage participants to use our checklist and give us feedback on it!
Gregor Endler received his doctorate in computer science in 2017 with his dissertation "Adaptive Data Quality Monitoring with a Focus on the Completeness of Timestamped Data". Since then he has been working as a Data Scientist at codemanufaktur GmbH. His work focuses in particular on machine learning, data analysis, and data visualization.
Marco Achtziger is Test Architect working for Siemens Healthineers in Forchheim. In this role he supports teams working in an agile environment in implementing and executing tests in the preventive test phase in a large project. He has several qualifications from ISTQB and iSQI and is a certified Software Architect by Siemens AG and a Siemens Senior Key Expert in the area of Testing and Continuous Delivery.
The introduction of ChatGPT and CoPilot X has brought a lot of hype around developer experience, especially documentation. But are we ready to chat with our documentation, instead of reading it, using these tools? How can we, as maintainers, leverage these tools to offer a better documentation experience for developers as our users? Join my talk and let's find out.
Target Audience: Engineers, Developers
Prerequisites: Programming
Level: Advanced
Maya Shavin is a Senior Software Engineer at Microsoft, based in Israel, working extensively with JavaScript and frontend frameworks. She founded and is currently the organizer of the VueJS Israel Meetup Community, helping to create a strong playground for Vue.js lovers and like-minded developers. Maya is also a published author, international speaker and an open-source library maintainer of frontend and web projects. As a core maintainer of the StorefrontUI framework for e-commerce, she focuses on delivering performant components and best practices to the community, while believing that strong vanilla JavaScript knowledge is necessary for being a good web developer.
Discover the transformative power of GenAI in software testing. This lecture showcases a powerful GenAI-powered test framework that enhances testing efficiency. Learn how GenAI analyzes applications to generate automated test cases, uncover hidden defects with generative AI's random exploratory tests. Experience AI-powered peer reviewers for code analysis and quality evaluations. Explore Smart Report AI, providing comprehensive analysis and insights into test execution, results, and defects. Join us to revolutionize your software testing with GenAI.
Target Audience: Quality Engineers, Architects, Developers, Project Leader, Managers
Prerequisites: Basic knowledge of modern software development
Level: Advanced
Extended Abstract:
In the rapidly evolving landscape of software development, Generative AI holds immense potential to revolutionize the field of software testing. This lecture aims to explore the transformative capabilities of GenAI in software testing by showcasing a powerful GenAI-powered test framework and its key components.
The lecture begins by introducing the main framework, which combines the power of GenAI and automation to enhance the efficiency and effectiveness of software testing. Attendees will learn how GenAI can analyze the application under test and generate automated test cases based on a predefined test framework. These test cases can be executed locally or seamlessly integrated into DevOps processes, allowing for efficient and comprehensive testing of applications.
Furthermore, the lecture delves into the utilization of generative AI for the generation of random exploratory tests. By leveraging the capabilities of GenAI, testers can uncover hidden defects and vulnerabilities that may go unnoticed with traditional testing approaches. This demonstration highlights the innovative potential of GenAI in driving comprehensive test coverage.
Additionally, the lecture showcases the Smart Peer AI component of the framework. Attendees will discover how AI-powered peer reviewers can anticipate functional and non-functional issues, providing insightful code analysis and quality evaluations. By incorporating specific coding best practices, architectural standards, and non-standard attributes, Smart Peer AI enhances code quality and accelerates the development process.
Finally, the lecture concludes by presenting the Smart Report AI, an AI-powered solution that analyzes test reports generated from automated testing processes. This component provides a comprehensive analysis of test execution, environment configuration, test results, defect severity automated analysis, reproducibility steps, and suggestions for fixes. Smart Report AI empowers testers with valuable insights and facilitates effective decision-making in the testing and development lifecycle.
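The framework itself is proprietary, but the core idea of its first component – letting a generative model draft automated test cases from a description of the application under test – can be sketched roughly as follows. The LlmClient interface, the GenAiTestDrafter class, and the prompt wording are hypothetical placeholders, not the actual framework:

    /** Hypothetical abstraction over any generative-AI completion API. */
    interface LlmClient {
        String complete(String prompt);
    }

    /** Sketch: ask a generative model to draft JUnit tests for a described feature. */
    class GenAiTestDrafter {

        private final LlmClient llm;

        GenAiTestDrafter(LlmClient llm) {
            this.llm = llm;
        }

        String draftTests(String featureDescription, String apiSignature) {
            // The prompt encodes the "predefined test framework" mentioned above:
            // target language, test style, and naming conventions.
            String prompt = """
                    You are a test engineer. Write JUnit 5 tests (arrange/act/assert,
                    descriptive method names) for the feature below.

                    Feature: %s
                    Public API: %s

                    Return only compilable Java code.
                    """.formatted(featureDescription, apiSignature);

            // The result is a draft: it still has to be reviewed, compiled, and
            // executed (locally or in the DevOps pipeline) before it counts as a test.
            return llm.complete(prompt);
        }
    }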
Davide Piumetti is a Technology Architect with expertise in software testing and part of the quality engineering leadership in Switzerland. With over 15 years of experience, he drives innovative projects in Big Data, Generative AI, and Test@DevOps. He is also exploring the potential of high-performance computing and quantum computing in software testing. Committed to pushing boundaries, he shapes the future of quality engineering through innovation and continuous improvement.
I think differently. Why? I have a combination of ADHD and autism incl. high sensitivity – also known as "neurodivergent". I want to share my personal story of which strategies and characteristics have helped me find my career path. I hope to inspire some of my fellow testers, especially those who also sometimes feel different. I'd like to make a stand that some typical qualities make neurodivergent people especially valuable in testing. I want to widen the horizon for colleagues and companies on what they can do to appreciate them and others.
Target Audience: Testers, Test Managers, Developers, Team Leads, HR
Prerequisites: None
Level: Basic
Extended Abstract:
Ever felt different or worked with someone you couldn’t understand? Who seemed to think somehow differently than you? Maybe they did indeed. Are you aware that there are two main categories of human thinking: neurotypical and neurodivergent? Whereas neurotypical thinking is often described as “linear thinking”, neurodivergent thinking is more “cross-linked”. And even within these categories we have a broad spectrum of diversity. I am fascinated by the differences in behavior and perception of the world that result from this important detail.
I think differently. But that's not all. I behave differently and I have different needs. Why? I recently discovered that I have a combination of ADHD and autism incl. high sensitivity. But I adapted to the world as it is and blended in. That makes me a perfect fit for an agile environment where adapting to changing circumstances is key. I blame this part on my ADHD brain that loves discovering new things. My autistic brain, on the other hand, hates changes. So I consider myself lucky that I have both of them. I developed a lot of implicit strategies to cope with it. To give an example: my family always wondered why I used checklists so extensively, even as a child. Now I know: that was my autistic side bringing my chaotic ADHD brain under control.
I would like to share my personal story of which strategies, characteristics, and external enablers have helped me find my career path. For instance, how I went from the awkward little girl who hid in the bushes, to being voted the “biggest daydreamer” in school, to giving speeches in front of hundreds of people and taking leading roles in an international consulting company. It is a story about how I get to shine in an industry of extroverts. I hope to inspire some of my fellow testers, especially those who also sometimes feel different. I'd like to make a stand that some typical qualities make neurodivergent people especially valuable in the testing world.
Perhaps it is time to say goodbye to "linear thinking" and premature judgments. Especially in IT we can benefit a lot if we widen our view and enable change by looking beyond the horizon. That starts with acceptance - even for different ways of thinking.
Viviane's passion lies in improving quality processes. Since 2018 she has been combining testing, coaching, and training experience at Accenture with her background in communication management.
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/experten/viviane-hennecke/
The world is constantly changing. As IT professionals, we are aware of the intrinsic changeability of projects, contexts and our business, but the events of the last years have put this into sharper focus. How will external changes shape our teams and our work?
Alex looks at what factors are at work now and what kinds of effects they will have on how we work and on the roles of testers and software professionals. She will also look at activities on an individual and company level to best prepare ourselves for a nebulous future.
Target Audience: Everyone
Prerequisites: None
Level: Advanced
Extended Abstract:
The world is constantly changing, and everything is impermanent. As IT professionals, we are aware of the intrinsic changeability of projects, contexts and our business, but the events of the last years have put this into sharper focus. How will external changes shape our teams and our work?
How can we shape ourselves proactively in order to be able to respond to changes, make changes of our own, and even thrive? Alex looks at what factors are at work now and what kinds of effects they will have on how we work and on the roles of testers and software professionals. She will also look at concrete activities on an individual and company level to best prepare ourselves for a nebulous future.
Alex Schladebeck is one of the managing directors at Bredex GmbH in Braunschweig, Germany with 7 years of managing experience, and 4 years of managing managers. An effervescent addition to any situation, Alex uses her unique style to her advantage in the corporate world of negotiating contracts and keeping a medium-sized company afloat amidst bigger players. Next to her leadership and strategic activities, Alex still works with customers in workshops about quality, agile and communication topics. She has won the award for “Most Influential Agile Testing Professional”, is a member of the ASQF Steering Committee and has been an active speaker since 2009 and international keynote speaker since 2016. Alex's current career grew from her years of expertise as an exploratory tester. Her presentations on the topic, with her contribution to the field through microheuristics, brought her to the realisation that the same skills of systematically exploring and categorising give her the best tools she could have to both succeed as a manager of managers, and teach others to do the same.
In this interactive training session, we will dive into the fascinating world of exploratory testing. Exploratory testing is a mindset and approach that empowers testers to uncover hidden defects, explore the boundaries of software systems, and provide valuable feedback to improve overall quality.
Through a combination of theory, practical examples, and hands-on exercises, participants will gain a solid understanding of exploratory testing principles and techniques, and learn how to apply them effectively in their testing efforts.
Max. number of participants: 12
Target Audience: Developers, Testers, Business Analysts, Scrum Masters, Project Manager
Prerequisites: None
Level: Basic
Extended Abstract:
In this interactive and engaging 3-hour training session, we will dive into the fascinating world of exploratory testing. Exploratory testing is a mindset and approach that empowers testers to uncover hidden defects, explore the boundaries of software systems, and provide valuable feedback to improve overall quality.
Through a combination of theory, practical examples, and hands-on exercises (with an e-commerce platform), participants will gain a solid understanding of exploratory testing principles and techniques, and learn how to apply them effectively in their testing efforts.
Whether you are a beginner or an experienced tester, this training will equip you with the skills and knowledge to become a more effective and efficient explorer of software.
Learning Outcomes:
- Understand the fundamentals of exploratory testing and its importance in software development.
- Learn various techniques and strategies for conducting exploratory testing.
- Develop the ability to identify high-risk areas and prioritize testing efforts during exploration.
- Acquire practical tips for documenting and communicating exploratory testing findings.
- Gain hands-on experience through interactive exercises to apply exploratory testing techniques.
- Enhance critical thinking and problem-solving skills to uncover hidden defects.
- Improve overall testing efficiency and effectiveness by incorporating exploratory testing into your testing process.
- Learn how to collaborate effectively with developers, product owners, and other stakeholders during exploratory testing.
- Gain insights into tools and technologies that can support and enhance exploratory testing activities.
- Leave with a comprehensive understanding of exploratory testing and the confidence to apply it in real-world scenarios.
Join us for this immersive training session, and unlock the potential of exploratory testing to uncover valuable insights and improve the quality of your software products.
Matthias Zax is an accomplished Agile Engineering Coach at Raiffeisen Bank International AG (RBI), where he drives successful digital transformations through agile methodologies. With a deep-rooted passion for software development, Matthias is a #developerByHeart who has honed his skills in software testing and test automation in the DevOps environment since 2018.
Matthias is a key driving force behind the RBI Test Automation Community of Practice, where he leads by example. He is a firm believer in the importance of continuous learning and innovation, which he actively promotes through his coaching and mentorship.