Please note:
This page displays only the English-language sessions of OOP 2022 Digital. You can find all conference sessions, including the German-language ones, here.
The times given in the conference program of OOP 2022 Digital correspond to Central European Time (CET).
By clicking on "EVENT MERKEN" within the lecture descriptions you can arrange your own schedule. You can view your schedule at any time using the icon in the upper right corner.
Traditionally, requirements have been used as a means of communication between customers and development organizations. Unfortunately, requirements suffer from many limitations.
An alternative approach is to focus on outcomes and to use value modeling as a mechanism to quantitatively define the desired outcomes. This value model can then be used for experimentation by humans using DevOps and A/B testing or using machine learning models for automated experimentation.
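To make the idea concrete, here is a minimal sketch of what a quantitative value model might look like; the metric names and weights are invented for illustration and are not the tutorial's actual model:

```python
# Hypothetical value model: desired outcomes expressed as weighted metrics.
from dataclasses import dataclass


@dataclass
class ValueModel:
    weights: dict  # metric name -> relative importance (negative = cost)

    def score(self, metrics: dict) -> float:
        # The value of an experiment variant is the weighted sum of its metrics.
        return sum(w * metrics[name] for name, w in self.weights.items())


model = ValueModel(weights={"conversion_rate": 0.5, "retention": 0.3, "support_tickets": -0.2})

# Compare two A/B test variants by their modeled value.
variant_a = {"conversion_rate": 0.12, "retention": 0.80, "support_tickets": 0.05}
variant_b = {"conversion_rate": 0.14, "retention": 0.78, "support_tickets": 0.06}
winner = "A" if model.score(variant_a) >= model.score(variant_b) else "B"
print(f"Variant {winner} delivers more modeled value")
```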
The tutorial provides an introduction to these topics, along with exercises.
Target Audience: Architects, Product Managers, Senior Developers, Business Leaders
Prerequisites: None
Level: Basic
Today, computer vision has taken such a significant spot in our phones, our roads, and our markets that we don’t always even recognize if and where it is deployed. Nonetheless, our industries still have great potential to automate their recurrent tasks with CV, reducing costs while simultaneously increasing both the quality of the product and the efficiency of the process itself. We will look at some interesting industrial examples that benefit first-hand from simple automation and perhaps get inspired by them.
Target Audience: Software Developers, Data Scientists, AI Engineers, Managers, Project Leaders, Decision Makers
Prerequisites: None
Level: Advanced
Extended Abstract
In the era of Industry 4.0, we are focusing on digitalizing operations using data. We let cameras ‘see’, detect and classify objects, and even take some decisions for us. In this talk, we discuss what Computer Vision (CV) is and learn how different industries can benefit from it and save recurring costs. We will go through some interesting industrial applications of Computer Vision so you can get inspiration on how it could help your company grow, or simply understand the advancements in today’s world.
In these 45 minutes, we discuss how CV can:
- automate industrial processes,
- make them more efficient,
- ensure product quality,
- or detect anomalies.
Using some demos and videos you’ll get the chance to consolidate the knowledge acquired during the talk.
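As a taste of how simple such automation can be, here is a minimal sketch of a quality-inspection check that compares a product image against a known-good reference; the file names and thresholds are illustrative assumptions rather than a demo from the talk:

```python
# Hypothetical surface-defect check: compare an inspected part against a
# known-good reference image (paths and thresholds are invented).
import cv2

reference = cv2.imread("good_part.png", cv2.IMREAD_GRAYSCALE)
sample = cv2.imread("inspected_part.png", cv2.IMREAD_GRAYSCALE)

# Pixel-wise difference highlights deviations from the reference part.
diff = cv2.absdiff(reference, sample)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Contours of the thresholded regions approximate defect candidates;
# tiny regions are discarded as sensor noise.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defects = [c for c in contours if cv2.contourArea(c) > 25.0]
print(f"{len(defects)} defect candidate(s) found")
```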
The adoption of static analysis for C++ and Java requires that findings and errors can be prioritised efficiently. Our work shows that machine learning (ML) can support this presentation of static analysis results to end users. The ML engine learns from the codebase itself and also observes which violations the user fixes and which they ignore. It uses this to suggest the next best violations to fix, relying on the probability that a violation is harmful rather than mere noise.
Target Audience: Developer Managers, R&D Managers, Software Architects, Software Engineers
Prerequisites: English, Software development, Coding experience, C++, Java, C#.
Level: Expert
Extended Abstract
Static code analysis is often understood as a mandatory part of checking source code compliance with government and industry regulations, company-wide guidelines and practices. It can, however, play a more fundamental role: estimating the quality of the code in general, understanding the amount of technical debt, creating a strategy to reduce that debt, and helping make decisions on how to speed up development by creating a more maintainable, understandable, sociable codebase.
However, by its nature, static code analysis is bound to produce a large amount of noise and false alerts that can distract the team from the actual bugs in the code and prevent them from working thoroughly with the findings. One reason for this is the soundness level of the static analysis tool: to be sure the analysis finds all errors in the code, the tool has to report all possible candidates. The more sound the tool is configured to be, the more possible errors it reports, which increases the number of false positives as well.
To improve the user experience of working with static analysis technology, we have developed a machine learning (ML) based approach to presenting the results of the static analysis to users. The ML engine can learn from the code base itself, from a user's preferences, and from the interaction within the team. At the code level, our engine learns from the syntactic and semantic structure of the analyzed code to understand which violations are more likely to cause harm, which are more likely to be noise, and which underlying problems can be fixed to drastically reduce the number of reported violations. At the user level, the ML engine observes which violations the user fixes and which the user ignores. Based on these observations, the ML engine builds a model and uses it to suggest the next best violations to fix.
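As an illustration of the ranking step only, here is a minimal sketch that trains a classifier on hypothetical violation features and past fix/ignore decisions, then orders new violations by predicted fix probability; the features and data are invented, and the engine described above additionally learns from code structure and team interaction:

```python
# Hypothetical features per violation: rule severity, file churn, nesting depth.
# Labels come from past user behaviour: 1 = fixed, 0 = ignored.
from sklearn.ensemble import RandomForestClassifier

X_train = [[3, 12, 4], [1, 2, 1], [2, 30, 6], [1, 1, 2]]
y_train = [1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Rank new violations by the predicted probability that fixing them is worthwhile.
new_violations = [[2, 8, 3], [3, 25, 5], [1, 1, 1]]
probabilities = model.predict_proba(new_violations)[:, 1]
for p, features in sorted(zip(probabilities, new_violations), reverse=True):
    print(f"fix-probability {p:.2f} for violation features {features}")
```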
Maintaining a database containing millions of products can be very challenging, especially when the information you require about these products is subject to change over time.
We show how we used state-of-the-art Deep Learning methods (such as Transformers and BERT) in combination with smart text matching to extract relevant information from free-form text.
We also explain how we leveraged the existing database to create an automatically labelled training dataset.
Our model enables us to continuously update idealo's database automatically.
Target Audience: Decision Makers, Technical Project Leaders, Developers
Prerequisites: Basic knowledge of machine learning methodology
Level: Advanced
Extended Abstract
To maintain idealo's product base, product information in the form of values of predefined product attributes needs to be extracted from free-form text product descriptions.
Before the machine-learning-based solution, this process required a lot of manual work to define extraction rules. Keeping these rules consistent across the whole database and across different types of products also takes considerable effort, especially since both the source of this information (the product descriptions) and the required information (the product properties) are subject to change over time.
In this talk we present a machine learning solution, based on fine-tuned state-of-the-art models such as BERT, which extracts product information automatically from product descriptions with production-ready performance.
Our solution contains two different models, each following one of the well-known problem settings in Natural Language Processing (NLP): semantic segmentation of text (also known as token classification) and question answering. We will present both models in detail, discuss their advantages and disadvantages for the task at hand, and explain how we measured their performance (metrics).
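For a flavour of the token-classification variant, here is a minimal sketch using the Hugging Face transformers pipeline; the model name is a generic placeholder (in practice, a checkpoint fine-tuned on labelled product descriptions would be used), so this is not idealo's production setup:

```python
# Token classification over a product description; the base model below has
# no fine-tuned extraction head, so real use requires a fine-tuned checkpoint.
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="bert-base-german-cased",  # placeholder model name
    aggregation_strategy="simple",   # merge word pieces into whole spans
)

description = "Waschmaschine mit 8 kg Fassungsvermoegen und 1400 U/min."
for span in extractor(description):
    print(span["entity_group"], span["word"], round(float(span["score"]), 2))
```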
We will also emphasize the importance of identifying aspects of your data that ensure that the developed model can actually fulfill your business needs before curating your dataset.
This highlights another benefit of implementing a Machine Learning model for a huge database: You will get sanity checks of your existing data “for free”, as consistent data is a prerequisite for a successful Machine Learning project.
One problem that is very common in large organisations is that little or no training data is available in the form of labels for specific text sections. We show how we mitigated this problem by leveraging the existing database to generate a large artificial training dataset. This allowed us to use only a few thousand manually labelled examples for training and testing to reach sufficient performance.
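The auto-labelling idea can be sketched in a few lines: known attribute values from the existing database are matched against the description text to produce span labels. The data and attribute names below are invented for illustration:

```python
# Generate weak labels by locating known attribute values in the description.
description = "Waschmaschine mit 8 kg Fassungsvermoegen und 1400 U/min."
known_attributes = {"capacity": "8 kg", "spin_speed": "1400 U/min"}

labels = []
for attribute, value in known_attributes.items():
    start = description.find(value)
    if start != -1:  # only label attributes whose value literally occurs
        labels.append({"attribute": attribute, "start": start, "end": start + len(value)})

print(labels)  # character spans usable as training data for token classification
```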
Jona Welsch is Machine Learning Project Lead at dida, where he is responsible for the development of machine learning solutions in the areas of Natural Language Processing and Computer Vision.
To continuously deliver IT systems at speed with a focus on business value, high-performance IT delivery teams integrate quality engineering in their way of working.
Quality engineering is the new concept for achieving the right quality of IT systems. Testing only after an IT product has been developed is an outdated approach. Built-in quality from the start is needed to guarantee business value in today’s IT delivery models. Quality engineering involves changes in skills, organization, automation and quality measures.
Target Audience: All people involved in high-performance IT delivery teams
Prerequisites: General knowledge of IT delivery
Level: Advanced
Extended Abstract
To continuously deliver IT systems at speed with a focus on business value, high-performance cross-functional IT delivery teams integrate quality engineering in their way of working.
Quality engineering is the new concept in achieving the right quality of IT systems. Testing an application only after the digital product has been fully developed has long been a thing of the past. More is needed to guarantee the quality of applications that are delivered faster and more frequently in today’s high-performance IT delivery models. It is about achieving built-in quality. The road to quality engineering means changes in terms of starting points, skills, organization, automation and quality measures.
Our new VOICE model guides teams to align their activities with the business value that is pursued, and by measuring indicators, teams give the right information to stakeholders to establish their confidence that the IT delivery will actually result in business value for the customers.
Teams benefit from the clear definition of QA & testing topics, which form a useful grouping of all activities relevant to quality engineering. Organizing topics serve to align activities between teams, while performing topics focus on the operational activities within a team.
Also, to deliver quality at speed, it is crucial for today’s teams to automate activities, for example in a CI/CD pipeline, while remembering that automation is not the goal itself but a way to increase quality and speed.
In this presentation the audience will learn why a broad view on quality engineering is important and how quality engineering can be implemented to achieve the right quality of IT products, the IT delivery process and the people involved.
This presentation is based on our new book "Quality for DevOps teams" (ISBN 978-90-75414-89-9) which supports high-performance cross-functional teams in implementing quality in their DevOps culture, with practical examples, useful knowledge and some theoretical background.
Rik Marselis is principal quality consultant at Sogeti in the Netherlands. He is a well-appreciated presenter, trainer, author, consultant, and coach in the world of quality engineering. His presentations are praised for their liveliness, his ability to keep talks serious but light, and his use of practical examples with humorous comparisons.
Rik has been a trainer in test design techniques for over 15 years.
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/rik.marselis
In large software projects, assessing the impact of a code change can be a cumbersome task. If the software has grown and shows an evolutionary design, there are always unwanted side effects.
Change control boards are established, but on what data do they judge what may happen with the changes? Very often the HiPPO syndrome prevails: decisions follow the highest paid person's opinion.
In this talk we will show you ways to arrive at a deterministic prediction of the impact, what data you need, and what you can do with it.
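One concrete data source for such predictions is logical coupling mined from version-control history: files that were repeatedly changed together in the past are likely to be affected together again. Here is a minimal sketch with invented commit data; it illustrates the general technique, not necessarily the speakers' exact approach:

```python
# Count how often pairs of files were touched by the same commit.
from collections import Counter
from itertools import combinations

# Each inner list is the set of files touched by one commit (invented data).
commits = [
    ["billing.py", "invoice.py"],
    ["billing.py", "invoice.py", "report.py"],
    ["ui.py"],
    ["billing.py", "invoice.py"],
]

coupling = Counter()
for files in commits:
    for pair in combinations(sorted(files), 2):
        coupling[pair] += 1

# Files most often co-changed with billing.py are impact candidates for a change to it.
for (a, b), count in coupling.most_common():
    if "billing.py" in (a, b):
        print(f"{a} <-> {b}: co-changed {count} time(s)")
```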
Target Audience: Architects, Test Managers, Developers, Testers
Prerequisites: Basic knowledge of collected data in software projects
Level: Advanced
Marco Achtziger is a Test Architect working for Siemens Healthcare GmbH in Forchheim. He has several qualifications from iSTQB and iSQI and is a certified Software Architect by Siemens AG.
Gregor Endler holds a doctor's degree in Computer Science for his thesis on completeness estimation of timestamped data. His work at Codemanufaktur GmbH deals with Machine Learning and Data Analysis.
Agile is becoming a standard delivery method adopted by organizations across the globe, according to VersionOne’s 11th Annual State of Agile Report. While 94 percent of survey respondents said their organizations practiced Agile, 80 percent said their organization was at or below a “still maturing” level. There are multiple reasons why the Agile maturity of teams is low, but the key one is that teams look at Agile as a process change rather than a cultural change.
At Accenture we have been delivering solutions using Agile practices and principles, and in our experience Agile teams face challenges such as the whole team’s limited experience with Agile, work slowing down due to limited access to the Product Owner, incomplete or less detailed user stories leading to high onshore dependency, and struggling both to keep momentum through the continuous churn of Agile events with active participation and to maintain the quality of artefacts (backlog, burndown, impediment list, retrospective action log, etc.). These challenges multiply when delivery teams are practicing distributed Agile at scale. Beyond these challenges, at Accenture we are focused on how to make our teams more productive and ease the ways of working for Agile teams.
In this presentation, we share our experiences in leveraging AI for Agile, along with feedback from the teams where these solutions were deployed. Did AI in Agile help to mitigate the challenges it was introduced to address?
Jefferson Dsouza, as an Accenture Managing Director, brings over 21 years of software industry experience to his role as Agile Community of Practice Lead, Living System Advisory Lead and Automation Deployment Lead at Accenture. He has over 14 years of extensive expertise in Agile and Lean, with deep knowledge of program management disciplines across Agile, Waterfall and Lean methodologies. Jeff has experience in large-scale transformation, mentoring and coaching leadership, building sustainability by developing internal coaching capabilities, and setting up Communities of Practice.
LinkedIn: https://www.linkedin.com/in/jeffson-dsouza-9738134/
Raghavendra Meharwade (Raghu) has been an active member of the Accenture Agile Community of Practice since 2011 and has worked as Scrum Master, Agile SME and Agile Coach for projects spread across geographies and domains. In his current role he leads the portion of the myWizard platform that brings AI into Agile and coaches teams on its use.
LinkedIn: https://www.linkedin.com/in/raghavendra-meharwade/
Unfortunately, this session has been cancelled without replacement.
With AI entering more and more aspects of our lives, scepticism and worries about this technology are increasing too. Empathy towards basic human needs and a great user experience can help AI become more widely accepted and used.
But how to get there?
After covering basic UX principles, the talk will deep dive into the fields of trust, transparency and explainable AI.
The goal is to outline a path to a fruitful collaboration and mutual understanding between humans and AI.
Target Audience: Software Engineers, Data Scientists, Product Owners, Researchers, Designers
Prerequisites: None
Level: Basic
I will introduce two AWS services: CodeGuru and DevOps Guru.
CodeGuru Reviewer uses ML and automated reasoning to automatically identify critical issues, security vulnerabilities, and hard-to-find bugs during application development.
DevOps Guru analyzes data like application metrics, logs, events, and traces to establish baseline operational behavior and then uses ML to detect anomalies. It does this by having the ability to correlate and group metrics together to understand the relationships between those metrics, so it knows when to alert.
Target Audience: Developers, Architects, Decision Makers
Prerequisites: Basic understanding of the code quality metrics and observability
Level: Basic
Extended Abstract
In this talk I will introduce two AWS services: CodeGuru and DevOps Guru.
Code reviews are one example: they are important for improving software quality and security and for increasing knowledge transfer in teams working with critical code bases. Amazon CodeGuru Reviewer uses ML and automated reasoning to automatically identify critical issues, security vulnerabilities, and hard-to-find bugs during application development. CodeGuru Reviewer also gives developers recommendations on how to fix issues, improving code quality and dramatically reducing the time it takes to fix bugs before they reach customer-facing applications and result in a bad experience.
Amazon DevOps Guru analyzes data like application metrics, logs, events, and traces to establish baseline operational behavior and then uses ML to detect anomalies. The service uses pre-trained ML models that are able to identify spikes in application requests. It does this by having the ability to correlate and group metrics together to understand the relationships between those metrics, so it knows when to alert and when not to.
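To illustrate the baseline-plus-anomaly idea in its simplest form, here is a generic sketch that flags metric values far from an established baseline; it shows the concept only and does not use the DevOps Guru API:

```python
# Establish a baseline from historic metric values, then flag outliers.
import statistics

baseline = [120, 118, 125, 130, 122, 119, 127, 124]  # e.g. request latency in ms
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)


def is_anomalous(value: float, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    return abs(value - mean) > threshold * stdev


for observation in [123, 126, 210]:
    print(observation, "anomaly" if is_anomalous(observation) else "normal")
```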
For most people, AI means robots taking human jobs or China’s surveillance of its citizens. Despite the hype around it and its image of progress, the real workings of artificial intelligence are not widely understood. Companies are already implementing a web of algorithms to optimize manual business processes. Most of the time, the larger IT organization is not included on the journey. This talk is an overview of how IT leaders can center the development of human teams in a world that is increasingly optimized by algorithms.
Machine Learning appears to have made impressive progress on many tasks from image classification to autonomous vehicle control and more. ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. This is not necessarily a good thing. Systematic risk is invoked by adopting ML in a haphazard fashion. Understanding and categorizing security engineering risks introduced by ML at design level is critical. This talk focuses on results of an architectural risk analysis of ML systems.
Target Audience: Architects, Technical Leads, and Developers and Security Engineers of ML Systems
Prerequisites: Risk Managers, Software Security Professionals, ML Practitioners, everyone who is confronted by ML
Level: Advanced
Extended Abstract
Machine Learning appears to have made impressive progress on many tasks including image classification, machine translation, autonomous vehicle control, playing complex games including chess, Go, and Atari video games, and more. This has led to much breathless popular press coverage of Artificial Intelligence, and has elevated deep learning to an almost magical status in the eyes of the public. ML, especially of the deep learning sort, is not magic, however. ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. In my view, this is not necessarily a good thing. I am concerned with the systematic risk invoked by adopting ML in a haphazard fashion. Our research at the Berryville Institute of Machine Learning (BIIML) is focused on understanding and categorizing security engineering risks introduced by ML at the design level. Though the idea of addressing security risk in ML is not a new one, most previous work has focused on either particular attacks against running ML systems (a kind of dynamic analysis) or on operational security issues surrounding ML. This talk focuses on the results of an architectural risk analysis (sometimes called a threat model) of ML systems in general. A list of the top five (of 78 known) ML security risks will be presented.