Conference Program

Please note:
On this page you will only see the English-language presentations of the conference. You can find all conference sessions, including the German-speaking ones, here.

The times given in the conference program of OOP 2023 Digital correspond to Central European Time (CET).

By clicking on "VORTRAG MERKEN" ("save talk") within the lecture descriptions, you can arrange your own schedule. You can view your schedule at any time using the icon in the upper right corner.

Topic: Artificial Intelligence

Tuesday, February 7, 2023
09:00 - 10:45
Di 8.1
How (Not) to Measure Quality

Measuring quality requires many questions to be answered. The most obvious one may be "What is quality?", but there are also "How can we measure it?", "Which metrics are most accurate?", and "Which are most practical?".

In this talk, I share some general motivations for measuring quality. I review commonly used metrics that claim to measure quality and rate them with regard to how helpful or harmful they may be in achieving actual goals. I give some examples of how the weaknesses of one metric might be countered by another to create a beneficial system.

Target Audience: Developers, Project Leaders, Managers, Decision Makers, Quality Engineers, Testers, Product Owners
Prerequisites: Basic Software Project Experience, Rough Understanding of Software Development
Level: Advanced

Extended Abstract:
Measuring quality requires many questions to be answered. The most obvious one may be "What is quality?", but there are also "How can we measure it?", "Which metrics are most accurate?", and "Which are most practical?".

In my experience, one question is often not answered or postponed until it is too late: “Why do we want to measure quality?” Is it because we want to control how well our developers are performing? Is it to detect problems early? Is it to measure the impact of changes? Is it the product or the process we care about? Is it to improve locally in a single team or globally across the company? Is there a specific problem that we are trying to solve, and if so, which one?

Instead of trying to define what software quality is – which is hard and depends on a lot of factors – we should first focus on the impact of our measuring. Some metrics may work great for one team, but not for the company as a whole. Some will help to reach your team or organizational goal, some will not help at all, and some will even have terrible side effects by setting unintended incentives. Some can be gamed, others might be harmful to motivation. Consider an overemphasis on lead time, which can lead to cutting corners. Or measuring the number of bugs found, which can cause a testers versus developers situation.

In this talk, I share some general motivations for measuring quality. I review various commonly used metrics that claim to measure quality. Based on my experience, I rate them with regard to how helpful or harmful they may be in achieving actual goals and which side effects are to be expected. I give some examples of how the weaknesses of one metric might be countered by another to create a beneficial system.
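
As one concrete illustration of such a pairing (a hedged sketch with invented numbers, not material from the talk): tracking median lead time together with the escaped-defect rate makes it visible when one metric improves at the expense of the other.

    # Illustrative sketch: pairing two metrics so that gaming one
    # (e.g. cutting corners to shrink lead time) shows up in the other.
    # All numbers below are hypothetical.
    from statistics import median

    lead_times_days = [2, 3, 1, 5, 2, 8]   # per-change lead times
    bugs_escaped = 4                        # defects found in production
    changes_shipped = len(lead_times_days)

    lead_time = median(lead_times_days)
    escape_rate = bugs_escaped / changes_shipped

    print(f"median lead time: {lead_time} days")
    print(f"escaped-defect rate: {escape_rate:.2f} per change")
    # A falling lead time combined with a rising escape rate suggests
    # corners are being cut rather than genuine improvement.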

Michael Kutz has been working in professional software development for more than 10 years. He loves to write working software, and he hates fixing bugs. Hence, he developed a strong focus on test automation, continuous delivery/deployment, and agile principles.
Since 2014 he has worked at REWE digital as a software engineer and internal coach for QA and testing. As such, his main objective is to support the development teams in QA and test automation, empowering them to write awesome bug-free software fast.

The State and Future of UI Testing

UI testing is an essential part of software development. But the automation of UI tests is still considered too complex and flaky.
This talk will cover the state of the art of UI testing with an overview of tools and techniques. It will show which kinds of representations today's test tools use and how elements in the UI are addressed.
In addition, the role of artificial intelligence in the different approaches is shown, and a prediction of the testing tools of the future is presented.

Target Audience: Developers, Testers
Prerequisites: Basic knowledge of UI testing
Level: Advanced

Extended Abstract:
UI testing is an essential part of software development. Despite technological progress, the automation of UI tests is still considered too complex to function entirely without manual intervention.
In addition to classical selector-based approaches, more and more image-based methods are being pursued.
This talk will cover the state of the art of UI testing with an overview of tools and techniques. In particular, current problems and future developments will be discussed. Furthermore, it will show which kinds of UI representations today's test tools use and how elements in the user interface are addressed.
In addition, the role of artificial intelligence in the different approaches is shown, and a prediction of the testing tools of the future is presented on the basis of current research.
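
To make the two addressing styles concrete (a minimal sketch, using Selenium for the selector-based style and pyautogui as a simple stand-in for the image-based style; the URL, selector, and reference image are hypothetical, and the tools surveyed in the talk may differ):

    # Selector-based addressing: the element is located via the DOM.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")  # hypothetical page
    driver.find_element(By.CSS_SELECTOR, "#login-button").click()
    driver.quit()

    # Image-based addressing: the element is located by matching a
    # reference screenshot; AI-based tools replace this simple template
    # matching with learned visual models.
    import pyautogui

    center = pyautogui.locateCenterOnScreen("login_button.png")
    if center is not None:  # older pyautogui returns None if not found
        pyautogui.click(center)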

Johannes Dienst is Developer Advocate at askui. His focus is on automation, documentation, and software quality.

Michael Kutz
Johannes Dienst

14:00 - 14:45
Di 8.2
Testing AI Systems

At first glance, testing AI systems seems very different from testing “conventional” systems. However, many standard testing activities can be preserved as they are or only need small extensions.

In this talk, we give an overview of topics that will help you test AI systems: Attributes of training/testing/validation data, model performance metrics, and the statistical nature of AI systems. We will then relate these to testing tasks and show you how to integrate them.

Target Audience: Developers, Testers, Architects
Prerequisites: Basic knowledge of testing
Level: Basic

Extended Abstract:
From a testing perspective, systems that include AI components seem like a nightmare at first glance. How can you test a system that contains enough math to fill several textbooks and changes its behavior on the whims of its input data? How can you test what even its creators don’t fully understand?

Keep calm, grab a towel, and carry on: what you have already been doing is still applicable, and most of the new things you should know are not as arcane as they might seem. Granted, some dimensions of AI systems, like bias or explainability, can likely not be tested for in all cases. However, this kind of complexity has been around for decades, even in systems without any AI whatsoever. Additionally, you will have allies: data scientists love talking about testing data.

In this talk, we give an overview of topics that will help you test AI systems: Attributes of training/testing/validation data, model performance metrics, and the statistical nature of AI systems. We will then relate these to testing tasks and show you how to integrate them.
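
As a small taste of these topics (a sketch using scikit-learn on synthetic data, not an example from the talk): a train/test split plus aggregate performance metrics, testing against statistics rather than single outputs.

    # Sketch: train/test split and performance metrics on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    X, y = make_classification(n_samples=1000, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)  # held-out test set

    model = LogisticRegression().fit(X_train, y_train)
    y_pred = model.predict(X_test)

    # The expected result of an AI system is statistical, not exact:
    # assert on aggregate metrics instead of single predictions.
    assert accuracy_score(y_test, y_pred) > 0.7
    print(precision_score(y_test, y_pred), recall_score(y_test, y_pred))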

Gregor Endler holds a doctorate in Computer Science for his thesis on completeness estimation of timestamped data. His work at Codemanufaktur GmbH deals with Machine Learning and Data Analysis.

Marco Achtziger is a Test Architect working for Siemens Healthcare GmbH in Forchheim. He has several qualifications from iSTQB and iSQI and is a certified Software Architect by Siemens AG.

Gregor Endler, Marco Achtziger

Thursday, February 9, 2023
14:30 - 15:30
Do 5.3
Leading AI Transformations

Artificial Intelligence (AI) and its sub-domain, Machine Learning (ML), have been developing quickly. Your organization could be planning for or be in the middle of an AI transformation.

In this talk, I will speak from my own experience managing the strategy and delivery for AI/ML programs and discuss practical steps for the executive leadership to ensure the success of their AI strategy and delivery.

Target Audience: Project Leaders, IT Leaders, Executives, Decision Makers
Prerequisites: None
Level: Basic

Zorina Alliata is a Sr. Machine Learning Strategist at Amazon, working with global customers to find solutions that speed up operations and enhance processes using Artificial Intelligence and Machine Learning. Zorina helps companies across several industries identify strategies and tactical execution plans for their ML use cases, platforms, and ML at scale implementations.

Zorina Alliata

17:00 - 18:00
Do 7.4
The Next Decade of Software Is About Climate – What Is the Role of ML?

Climate action and green software engineering have risen to the top of many technology companies' agendas. With accuracy-hungry algorithms, ML software is consuming more and more computational resources, largely enabled by increasingly better hardware. Are the results worth the environmental cost?

This talk introduces the field of green software engineering, showing options to estimate the carbon footprint and discussing ideas on how to make Machine Learning greener, giving you the tools to take an active part in the climate solution.

Target Audience: Architects, Developers, Data Scientists
Prerequisites: Basic understanding of the AI lifecycle
Level: Advanced

Extended Abstract:
AI systems have a huge carbon footprint and impact our global commitment to keep global warming to no more than 1.5°C, as called for in the Paris Agreement. To reach this goal, emissions need to be reduced by 45 % by 2030 and reach net zero by 2050. Getting a better handle on the carbon emissions of the AI lifecycle has garnered interest from the research and practitioner communities across industry, government, academia, and civil society.

The objective of this session is to go beyond surface-level discussions and dive deep into understanding the challenges and opportunities related to assessing and mitigating the carbon impacts of AI systems.

This session will also walk through the Green Software Foundation's Software Carbon Intensity (SCI) specification and explain why it matters how you measure impact.
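
For orientation, the published SCI specification defines a simple rate, SCI = ((E * I) + M) per R, where E is the energy consumed by the software, I the carbon intensity of the grid it runs on, M the embodied emissions allocated to the software, and R a functional unit such as one API call. A minimal sketch with made-up numbers:

    # Sketch of the SCI rate from the GSF specification: ((E * I) + M) per R.
    # All numbers below are invented for illustration.
    E = 120.0      # energy consumed by the software, kWh
    I = 400.0      # grid carbon intensity, gCO2e per kWh
    M = 5000.0     # embodied emissions allocated to the software, gCO2e
    R = 1_000_000  # functional unit: number of API calls served

    sci = (E * I + M) / R  # gCO2e per API call
    print(f"SCI: {sci:.3f} gCO2e per API call")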

Sara Bergman is a Senior Software Engineer at Microsoft Development Center Norway, working in a team that owns several backend APIs powering people experiences in the Microsoft ecosystem. She is an advocate for green software practices at MDCN and M365. She is a member of the Green Software Foundation and the chair of the Writer's project, which curates and creates written articles for the main GSF website and the GSF newsletter.

Sara Bergman
