Conference Program

Please note:
On this page you will see only the English-language presentations of the conference. You can find all conference sessions, including the German-language ones, here.

The times given in the conference program of OOP 2024 correspond to Central European Time (CET).

By clicking on "VORTRAG MERKEN" within the lecture descriptions you can arrange your own schedule. You can view your schedule at any time using the icon in the upper right corner.

Topic: Limited Workshops

Monday, 29 January 2024
10:00 - 13:00
Mo 5
Limited: Testing Wisdoms to Expand Our Horizons

To expand our horizons in testing, we should ask ourselves the following questions:

  1. What did we learn from the history of testing?
  2. What did we miss and what did we forget?
  3. How can we do better testing in the future?

Therefore, in this interactive tutorial we will identify, discover, investigate, reflect on, and discuss testing wisdoms from different categories to answer these questions and expand our horizons. You are invited to bring your own top 3 testing wisdoms (I will bring my top n) and share them with your peers in this tutorial!

Max. number of participants: 50

Target Audience: Test Architects, Test Engineers, Software Architects, Developers, Product Owners, Quality Managers
Prerequisites: Basic knowledge about testing and quality engineering
Level: Advanced

Extended Abstract:
Effective and efficient software and system development requires superior test approaches and a strong commitment to quality across the whole team. Achieving the right mix of test methods and quality measures is no easy task in real project life, given increasing demands for system reliability and cost efficiency, and market pressure for speed, flexibility, and sustainability.
To address these challenges and to expand our horizons in testing, we should ask ourselves the following questions:

  • What did we learn from the history of testing?
  • What did we miss and what did we forget?
  • How can we do better testing in the future?

Therefore, in this interactive tutorial we will identify, discover, investigate, reflect on, and discuss testing wisdoms from different categories (techniques, people, history) to answer these questions and expand our horizons. You are invited to bring your own top 3 testing wisdoms (I will bring my top n) and share them with your peers in this tutorial!
Projected learning outcomes and lessons learned:

  • Get familiar with testing wisdoms – known and unknown, old and new.
  • Learn and share experiences on how to discover and adopt testing wisdoms.
  • Apply discussed testing wisdoms to improve your test approaches in the future!

Peter Zimmerer is a Principal Key Expert Engineer at Siemens AG, Technology, in Munich, Germany. For more than 30 years he has been working in the field of software testing and quality engineering. He performs consulting, coaching, and training on test management and test engineering practices in real-world projects and drives research and innovation in this area. As ISTQB® Certified Tester Full Advanced Level he is a member of the German Testing Board (GTB). Peter has authored several journal and conference contributions and is a frequent speaker at international conferences.

Peter Zimmerer
Raum 03

14:00 - 17:00
Mo 10
Limited: Embarking on the path to production: Building robust Generative AI-powered applications

Developing functional and effective generative AI solutions requires addressing various challenges. Ensuring moderated content and factual accuracy without hallucinations, integrating proprietary and domain-specific knowledge, adhering to stringent data-residency and privacy requirements, and ensuring traceability and explainability of results all demand meticulous engineering efforts. In this hands-on workshop we will explore strategies to overcome these challenges, learn about best practices and implement examples using Cloud services.

Max. number of participants: 200
Laptop (with browser access) is required.

Target Audience: Data Architects, Data Engineers, Data Scientists, Machine Learning Engineers
Prerequisites: Basic knowledge about AI solutions and related Cloud services is a plus
Level: Advanced

Extended Abstract:
Generative AI is taking the world by storm and enterprises across industries are rallying to adopt the technology. However, developing functional and effective generative AI solutions within organizations requires addressing various challenges beyond the management of these novel machine learning models. Ensuring moderated content and factual accuracy without hallucinations, integrating proprietary and domain-specific knowledge, adhering to stringent data-residency and privacy requirements, and ensuring traceability and explainability of results all demand meticulous engineering efforts. Moreover, the user experience of the application has emerged as a crucial performance indicator, while maintaining a lean application footprint is essential for a positive business case.
In this hands-on workshop we will explore strategies to overcome these challenges and implement examples using Cloud services of Amazon Web Services (AWS). You'll get a temporary AWS account (free of charge), but you must bring your own laptop to participate. We will delve into best practices, design patterns, and reference architectures.
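One widely used strategy for the proprietary-knowledge and hallucination challenges described above is retrieval-augmented generation (RAG): retrieve relevant documents first, then constrain the model to answer only from that context. The sketch below is a minimal, cloud-agnostic illustration of the idea, not part of the workshop materials; the toy document store and keyword scorer stand in for a real embedding-based vector search service.

```python
from collections import Counter

# Toy stand-in for a proprietary document store (illustrative content).
DOCUMENTS = {
    "doc-1": "Refunds are processed within 14 days of the return request.",
    "doc-2": "Premium support is available Monday to Friday, 9:00-17:00 CET.",
    "doc-3": "All customer data is stored in the eu-central-1 region.",
}

def score(query: str, text: str) -> int:
    """Naive keyword-overlap score; real systems use vector embeddings."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the ids of the k best-matching documents."""
    ranked = sorted(DOCUMENTS, key=lambda d: score(query, DOCUMENTS[d]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the model: answer only from retrieved context, citing source ids."""
    context = "\n".join(f"[{d}] {DOCUMENTS[d]}" for d in retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

prompt = build_prompt("Where is customer data stored?")
print(prompt)
```

Keeping source ids in the prompt is what later enables the traceability of results the abstract mentions: an answer can cite which document grounded it.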

Aris Tsakpinis is a Specialist Solutions Architect for AI & Machine Learning with a special focus on natural language processing (NLP), large language models (LLMs), and generative AI.

Dennis Kieselhorst is a Principal Solutions Architect at Amazon Web Services with over 15 years of experience with software architectures, especially in large distributed heterogeneous environments.

Aris Tsakpinis, Dennis Kieselhorst
Raum 13b

14:00 - 17:00
Mo 13
Limited: Sustainable Development: Managing Technical Debt

As systems become complex, teams can be burdened with technical debt and architectural challenges that slow development and ultimately leave them less agile and nimble than desired. If not enough attention is paid to technical debt, design problems will creep in until the code becomes muddy, making it hard to deliver features quickly and reliably. This workshop discusses ways to sustain development by understanding and managing technical debt. We will present the technical debt metaphor, including its impact, and how to identify and monitor technical debt.

Max. number of participants: 50

Target Audience: Architects, Technical Managers, Agile Coaches, Developers, POs, Scrum Masters, QA
Prerequisites: Understanding architecture is beneficial though not necessary
Level: Advanced

Extended Abstract:
When building complex systems, it can be easy to focus primarily on features and overlook software qualities, specifically those related to the architecture and to dealing with technical debt. Some believe that by following Agile practices—starting as fast as possible, keeping code clean, and having lots of tests—a good clean architecture will magically emerge. While an architecture will emerge, if there is not enough attention paid to the architecture and the code, technical debt and design problems will creep in until it becomes muddy, making it hard to deliver new features quickly and reliably. Often, technical debt items are unknown, unmonitored, and therefore not managed, resulting in high maintenance costs throughout the software life-cycle. This workshop discusses elements of sustainable development, specifically for dealing with technical debt. The main topics that will be explained are the technical debt metaphor and concept, the impact of incurring technical debt, some types of technical debt, and what is not technical debt. Additionally, we will discuss the technical debt teams may incur, where and how it arises, how to identify, monitor, and manage these debts over the long term, and living with technical debt.
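To make "identify and monitor" concrete, many teams keep an explicit technical-debt register and rank items by how quickly paying a debt down breaks even against its ongoing interest. The register below is an illustrative sketch only, not the workshop's method; the field names, example items, and break-even formula are assumptions invented for the example.

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str
    principal: float  # estimated effort (days) to pay the debt down
    interest: float   # extra effort (days) the debt costs the team per month

    @property
    def payoff_months(self) -> float:
        """Months until the accumulated interest exceeds the repayment cost."""
        return self.principal / self.interest

# Hypothetical register entries for illustration.
register = [
    DebtItem("duplicated pricing logic", principal=5.0, interest=2.0),
    DebtItem("missing integration tests", principal=8.0, interest=0.5),
    DebtItem("outdated framework version", principal=12.0, interest=6.0),
]

# Pay down first whatever breaks even fastest.
for item in sorted(register, key=lambda i: i.payoff_months):
    print(f"{item.name}: pays for itself after {item.payoff_months:.1f} months")
```

The point of such a register is less the arithmetic than the visibility: once debt items are named, estimated, and reviewed regularly, they stop being the "unknown, unmonitored" costs the abstract warns about.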

Graziela Simone Tonin has worked in the technology market for over 19 years in Brazil and abroad. She holds a Ph.D. in Computer Science and received the US IBM World Award and the Women of Value Award.
Graziela has mentored and worked in several national entrepreneurship and innovation programs, such as Innovativa Brasil, and is an ambassador of Clube Bora Fazer, an entrepreneurship community. She works as a professor at Insper Institution, teaching Executive Education and customized programs for C-levels, and is also a professor in the Computer Science and Engineering program. She led the Women In Tech Project and co-leads the Gender Front of the Diversity Committee at Insper. Graziela leads volunteer projects throughout Brazil through the Grupo Mulheres do Brasil. In addition, she is part of a worldwide research project that analyzes initiatives aimed at women in software engineering.

Joseph (Joe) Yoder is a research collaborator at IME/USP, owner of The Refactory, and president of the Hillside Group which is dedicated to improving the quality of life of everyone who uses, builds, and encounters software systems. Joe is best known for the Big Ball of Mud pattern, which illuminates many fallacies in software architecture. Recently, the ACM recognized Joe as a Distinguished Member in the category of "Outstanding Engineering Contributions to Computing".

Graziela Simone Tonin, Joseph Yoder
Raum 03

Friday, 2 February 2024
09:00 - 16:00
Fr 6
Limited: Exploratory Testing – Agile Testing on Steroids

In this interactive training session, we will dive into the fascinating world of exploratory testing. Exploratory testing is a mindset and approach that empowers testers to uncover hidden defects, explore the boundaries of software systems, and provide valuable feedback to improve overall quality.
Through a combination of theory, practical examples, and hands-on exercises, participants will gain a solid understanding of exploratory testing principles and techniques, and learn how to apply them effectively in their testing efforts.

Max. number of participants: 12

Target Audience: Developers, Testers, Business Analysts, Scrum Masters, Project Managers
Prerequisites: None
Level: Basic

Extended Abstract:
In this interactive and engaging 3-hour training session, we will dive into the fascinating world of exploratory testing. Exploratory testing is a mindset and approach that empowers testers to uncover hidden defects, explore the boundaries of software systems, and provide valuable feedback to improve overall quality.
Through a combination of theory, practical examples, and hands-on exercises (with an e-commerce platform), participants will gain a solid understanding of exploratory testing principles and techniques, and learn how to apply them effectively in their testing efforts.
Whether you are a beginner or an experienced tester, this training will equip you with the skills and knowledge to become a more effective and efficient explorer of software.
Learning Outcomes:

  1. Understand the fundamentals of exploratory testing and its importance in software development.
  2. Learn various techniques and strategies for conducting exploratory testing.
  3. Develop the ability to identify high-risk areas and prioritize testing efforts during exploration.
  4. Acquire practical tips for documenting and communicating exploratory testing findings.
  5. Gain hands-on experience through interactive exercises to apply exploratory testing techniques.
  6. Enhance critical thinking and problem-solving skills to uncover hidden defects.
  7. Improve overall testing efficiency and effectiveness by incorporating exploratory testing into your testing process.
  8. Learn how to collaborate effectively with developers, product owners, and other stakeholders during exploratory testing.
  9. Gain insights into tools and technologies that can support and enhance exploratory testing activities.
  10. Leave with a comprehensive understanding of exploratory testing and the confidence to apply it in real-world scenarios.

Join us for this immersive training session, and unlock the potential of exploratory testing to uncover valuable insights and improve the quality of your software products.
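One lightweight way to structure such sessions, in the spirit of session-based test management, is a charter plus categorized session notes that feed a team debrief (outcome 4 above). The helper below is a hypothetical sketch, not the trainer's template; its fields and note categories are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExplorationSession:
    charter: str  # mission: what to explore, with what resources, to find what
    timebox_minutes: int = 60
    notes: list[tuple[str, str]] = field(default_factory=list)

    def log(self, kind: str, text: str) -> None:
        """Record a timestamped finding: e.g. 'bug', 'question', or 'idea'."""
        stamp = datetime.now(timezone.utc).strftime("%H:%M")
        self.notes.append((kind, f"{stamp} {text}"))

    def debrief(self) -> dict[str, int]:
        """Count findings per category for the team debrief."""
        summary: dict[str, int] = {}
        for kind, _ in self.notes:
            summary[kind] = summary.get(kind, 0) + 1
        return summary

session = ExplorationSession(
    charter="Explore checkout with invalid coupon codes to find error-handling gaps"
)
session.log("bug", "expired coupon accepted when applied twice")
session.log("question", "should partial refunds re-validate the coupon?")
print(session.debrief())
```

The charter keeps the exploration focused on high-risk areas, while the categorized notes make findings easy to communicate to developers and product owners afterwards.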

Matthias Zax is an accomplished Agile Engineering Coach at Raiffeisen Bank International AG (RBI), where he drives successful digital transformations through agile methodologies. With a deep-rooted passion for software development, Matthias is a #developerByHeart who has honed his skills in software testing and test automation in the DevOps environment since 2018.
Matthias is a key driving force behind the RBI Test Automation Community of Practice, where he leads by example. He is a firm believer in the importance of continuous learning and innovation, which he actively promotes through his coaching and mentorship.
Matthias Zax
Raum: Sissi
