Please note:
On this page you will only see the English-language presentations of the conference. You can find all conference sessions, including the German-speaking ones, here.
The times given in the conference program of OOP 2024 correspond to Central European Time (CET).
By clicking on "VORTRAG MERKEN" within the lecture descriptions you can arrange your own schedule. You can view your schedule at any time using the icon in the upper right corner.
Track: Embedding AI into your Products: Practical Applications of Foundation Models
Wednesday, 31.01. - Thursday, 01.02.
The introduction of ChatGPT and CoPilot X has brought a lot of hype to developer experience, especially documentation. But are we ready to chat with our documentation instead of reading it, using these tools? How can we, as maintainers, leverage these tools to offer developers, as our users, a better documentation experience? Join my talk and let's find out.
Target Audience: Engineers, Developers
Prerequisites: Programming
Level: Advanced
Maya Shavin is a Senior Software Engineer at Microsoft, based in Israel, working extensively with JavaScript and frontend frameworks. She founded and currently organizes the VueJS Israel Meetup community, helping to create a strong playground for Vue.js lovers and like-minded developers. Maya is also a published author, international speaker, and open-source maintainer of frontend and web projects. As a core maintainer of the StorefrontUI framework for e-commerce, she focuses on delivering performant components and best practices to the community, believing that strong vanilla JavaScript knowledge is necessary to be a good web developer.
Discover the transformative power of GenAI in software testing. This lecture showcases a powerful GenAI-powered test framework that enhances testing efficiency. Learn how GenAI analyzes applications to generate automated test cases and uncovers hidden defects through random exploratory tests. Experience AI-powered peer reviewers for code analysis and quality evaluations. Explore Smart Report AI, which provides comprehensive analysis of and insights into test execution, results, and defects. Join us to revolutionize your software testing with GenAI.
Target Audience: Quality Engineers, Architects, Developers, Project Leader, Managers
Prerequisites: Basic knowledge of modern software development
Level: Advanced
Extended Abstract:
In the rapidly evolving landscape of software development, Generative AI holds immense potential to revolutionize the field of software testing. This lecture aims to explore the transformative capabilities of GenAI in software testing by showcasing a powerful GenAI-powered test framework and its key components.
The lecture begins by introducing the main framework, which combines the power of GenAI and automation to enhance the efficiency and effectiveness of software testing. Attendees will learn how GenAI can analyze the application under test and generate automated test cases based on a predefined test framework. These test cases can be executed locally or seamlessly integrated into DevOps processes, allowing for efficient and comprehensive testing of applications.
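To make the generation step concrete, here is a minimal sketch of how an LLM could be asked to produce test cases from an application description. It assumes the OpenAI Python SDK and a hypothetical API spec; the speaker's actual framework, prompts, and model choice are not public.

```python
from openai import OpenAI

client = OpenAI()  # hypothetical setup; reads OPENAI_API_KEY from the environment

# Hypothetical description of the application under test.
API_SPEC = """
POST /orders creates an order; it requires 'customer_id' and a non-empty 'items' list
and returns 201 on success or 400 on validation errors.
"""

prompt = (
    "You are a test engineer. Generate pytest test cases for the API described below. "
    "Cover the happy path and all validation errors. Return only Python code.\n\n"
    + API_SPEC
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)

# The generated module can then be executed locally or in a CI/CD pipeline,
# ideally after a human review of the generated assertions.
with open("test_orders_generated.py", "w") as f:
    f.write(response.choices[0].message.content)
```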
Furthermore, the lecture delves into the utilization of generative AI for the generation of random exploratory tests. By leveraging the capabilities of GenAI, testers can uncover hidden defects and vulnerabilities that may go unnoticed with traditional testing approaches. This demonstration highlights the innovative potential of GenAI in driving comprehensive test coverage.
Additionally, the lecture showcases the Smart Peer AI component of the framework. Attendees will discover how AI-powered peer reviewers can anticipate functional and non-functional issues, providing insightful code analysis and quality evaluations. By incorporating specific coding best practices, architectural standards, and non-standard attributes, Smart Peer AI enhances code quality and accelerates the development process.
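As an illustration of the general idea (not of the Smart Peer AI component itself), a checklist-driven review of a diff could look roughly like the following sketch, again assuming the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI()  # hypothetical setup, not the speaker's Smart Peer AI component

# Encode team-specific coding best practices and architectural rules in the system prompt.
REVIEW_CHECKLIST = (
    "Review the following diff for violations of our coding standards, missing error "
    "handling, security issues, and deviations from our layered architecture. "
    "Report each finding as a bullet point with a severity (low/medium/high)."
)

def review_diff(diff_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": REVIEW_CHECKLIST},
            {"role": "user", "content": diff_text},
        ],
        temperature=0.0,
    )
    return response.choices[0].message.content
```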
Finally, the lecture concludes by presenting Smart Report AI, an AI-powered solution that analyzes the test reports generated by automated testing processes. This component provides a comprehensive analysis of test execution, environment configuration, and test results, along with automated defect severity assessment, reproducibility steps, and suggested fixes. Smart Report AI empowers testers with valuable insights and facilitates effective decision-making in the testing and development lifecycle.
Davide Piumetti is a Technology Architect with expertise in software testing and part of the quality engineering leadership in Switzerland. With over 15 years of experience, he drives innovative projects in Big Data, Generative AI, and Test@DevOps. He is also exploring the potential of high-performance computing and quantum computing in software testing. Committed to pushing boundaries, he shapes the future of quality engineering through innovation and continuous improvement.
In today's economy, creating intelligent customer experiences is a key differentiator for organizations looking to gain a competitive advantage. The use of AI, and especially Generative AI, has become more prevalent in the business world. We will discuss some of the work we did on creating an AI-Driven CX Platform that offers data management, Customer360 views, personalization and chatbots infused with Generative AI, and advanced security features. We will also discuss practical use cases and outcomes of our approach.
Target Audience: Managers, Decision Makers, Leaders
Prerequisites: None
Level: Advanced
Currently a Principal AI Strategist with Amazon, Zorina Alliata works with global customers to find solutions that speed up operations and enhance processes using Artificial Intelligence and Machine Learning. Zorina is also an Adjunct Professor at Georgetown University SCS and OPIT, where she creates and teaches AI courses.
Zorina is involved in AI for Good initiatives, working with non-profit organizations to address major social and environmental challenges using AI/ML. She also volunteers with the Zonta organization and as the Chair of the Artificial Intelligence Committee at AnitaB.org, supporting women in tech.
You can find Zorina on LinkedIn:
https://www.linkedin.com/in/zorinaalliata/
Hara Gavriliadi is a Senior CX Strategist at AWS Professional Services, helping customers reimagine and transform their customer experience using data, analytics, and machine learning. Hara has 13 years of experience in helping organisations become more data-driven and in turning analytics and insights into commercial advice that enables growth and innovation. Hara is passionate about ID&E, and she is an AWS GetIT Ambassador inspiring young students to consider a future in STEAM.
This session will introduce embedding vectors and their use in artificial intelligence. It will illustrate how these constructs can be effectively utilized in enterprise AI solutions, specifically in conjunction with prompt engineering. Rainer Stropek will present practical demonstrations using Microsoft's Azure Cloud and OpenAI's ChatGPT 4 model, showcasing real-world application scenarios and potential business benefits. Attendees will gain insights into emerging AI trends and practices in enterprise contexts.
Target Audience: Architects, Developers, Data Scientists
Prerequisites: Basic knowledge about LLMs, ability to read source code
Level: Advanced
Extended Abstract:
This session aims to demystify the technical concepts of embedding vectors and their integration into the Artificial Intelligence (AI) world, with a specific focus on enterprise applications.
We provide an accessible introduction to embedding vectors – the mathematical constructs that translate complex data types into a format that machine learning algorithms can comprehend. Attendees will grasp the foundational principles of embeddings and their pivotal role in enhancing the capabilities of AI algorithms.
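As a minimal illustration of this idea, texts can be turned into vectors and compared by cosine similarity. The sketch below assumes the OpenAI Python SDK and NumPy; the session's own demos run on Azure, and the example texts are purely hypothetical.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # hypothetical setup; the session itself uses Azure OpenAI

texts = [
    "How do I reset my password?",
    "Instructions for resetting a password",
    "Quarterly revenue report",
]
response = client.embeddings.create(model="text-embedding-ada-002", input=texts)
vectors = np.array([item.embedding for item in response.data])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related texts end up close together in the embedding space.
print(cosine_similarity(vectors[0], vectors[1]))  # relatively high
print(cosine_similarity(vectors[0], vectors[2]))  # relatively low
```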
Next, we delve into the realm of prompt engineering. Here, we explore how the strategic crafting of prompts can steer AI models, such as OpenAI's ChatGPT 4, toward generating more precise and contextually accurate responses. We'll explain the process and strategies involved in prompt engineering, offering a roadmap for businesses to adopt these practices effectively.
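A common pattern that combines both ideas is to retrieve the most relevant documents via embeddings and inject them into a carefully structured prompt. The following sketch assumes the OpenAI Python SDK and a hypothetical retrieval step that has already produced the context chunks.

```python
from openai import OpenAI

client = OpenAI()  # hypothetical setup; the live demos use Azure OpenAI

def answer_from_context(question: str, retrieved_chunks: list[str]) -> str:
    # Prompt engineering: constrain the model to the supplied context and make
    # the expected behaviour explicit instead of relying on defaults.
    system_prompt = (
        "You are an enterprise assistant. Answer only from the provided context. "
        "If the context does not contain the answer, say that you don't know."
    )
    user_prompt = "Context:\n" + "\n\n".join(retrieved_chunks) + f"\n\nQuestion: {question}"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0.0,
    )
    return response.choices[0].message.content
```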
Rainer Stropek will demonstrate these principles in a practical setting using Microsoft's Azure Cloud. We will walk through real-life scenarios, showing how enterprises can implement such technologies.
The session targets AI enthusiasts, software architects, developers, and data scientists. By the end of this session, participants will have gained a better understanding of these AI concepts, and how they can be harnessed.
More content from this speaker? Take a look at sigs.de: https://www.sigs.de/autor/rainer.stropek
Rainer Stropek is co-founder and CEO of the company software architects and has been serving this role since 2008. At software architects Rainer and his team are developing the award-winning SaaS time tracking solution “time cockpit”. Previously, Rainer founded and led two IT consulting firms that worked in the area of developing software solutions based on the Microsoft technology stack.
Rainer is recognized as an expert in software development, software architecture, and cloud computing. He has written numerous books and articles on these topics. Additionally, he regularly speaks at conferences, workshops, and trainings in Europe and the US. In 2010, Rainer became one of the first MVPs for the Microsoft Azure platform. In 2015, he also became a Microsoft Regional Director.
Rainer graduated with honors from the Higher Technical School Leonding (AT) for MIS and holds a BSc (Hons) in Computer Studies from the University of Derby (UK).
More content from this speaker? Take a look at sigs.de: https://www.sigs.de/experten/rainer-stropek/
One of the fundamental challenges for machine learning (ML) teams is data quality, or more accurately the lack of data quality. Your ML solution is only as good as the data that you train it on, and therein lies the rub: Is your data of sufficient quality to train a trustworthy system? If not, can you improve your data so that it is? You need a collection of data quality “best practices”, but what is “best” depends on the context of the problem that you face. Which of the myriad of strategies are the best ones for you?
Target Audience: Developers, Data Engineers, Managers, Decision Makers
Prerequisites: None
Level: Advanced
Extended Abstract:
This presentation compares over a dozen traditional and agile data quality techniques on five factors: timeliness of action, level of automation, directness, timeliness of benefit, and difficulty to implement. The data quality techniques explored include: data cleansing, automated regression testing, data guidance, synthetic training data, database refactoring, data stewards, manual regression testing, data transformation, data masking, data labeling, and more. When you understand what data quality techniques are available to you, and understand the context in which they’re applicable, you will be able to identify the collection of data quality techniques that are best for you.
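One of the techniques mentioned, automated regression testing of data, can be sketched in a few lines. The dataset and column names below are purely hypothetical and stand in for whatever checks fit your context.

```python
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical training dataset

# A handful of automated data quality checks that can run on every data refresh.
checks = {
    "no_missing_labels": df["label"].notna().all(),
    "no_duplicate_rows": not df.duplicated().any(),
    "age_within_valid_range": df["age"].between(0, 120).all(),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```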
Scott Ambler is an Agile Data Coach and Consulting Methodologist with Ambysoft Inc., leading the evolution of the Agile Data and Agile Modeling methods. Scott was the (co-)creator of PMI’s Disciplined Agile (DA) tool kit and helps organizations around the world to improve their way of working (WoW) and ways of thinking (WoT). Scott is an international keynote speaker and the (co-)author of 30 books.
During the talk, we'll dive into the historical context of Generative AI and examine its challenges. From legal compliance to fairness, transparency, security, and accountability, we'll discuss strategies for implementing Responsible AI principles.
It's important to note that the landscape for AI-driven products is still evolving, and there are no established best practices. The legislative framework surrounding these models remains uncertain, making it even more vital to engage in discussions that shape responsible AI practices.
Target Audience: Decision Makers, Developers, Managers, Everyone - AI-driven products require cross-functional teams
Prerequisites: None
Level: Basic
Extended Abstract:
Foundation models like GPT-4, BERT, or DALL-E 2 are remarkable in their versatility, trained on vast datasets using self-supervised learning. However, the adaptability of these models brings forth ethical, socio-technical, and legal questions that demand responsible development and deployment.
During the talk, we'll delve into the history of AI to better understand the evolution of generative models. We'll explore strategies for implementing Responsible AI principles, tackling issues such as legal compliance, fairness, transparency, security, accountability and their broader impact on society.
It's important to note that there are currently no established best practices for AI-driven products, and the legislative landscape surrounding them remains unclear. This underscores the significance of our discussion as we collectively navigate this emerging field.
Isabel Bär is a skilled professional with a Master's degree in Data Engineering from the Hasso-Plattner-Institute. She has made contributions in the field of AI software, focusing on areas like MLOps and Responsible AI. Beyond being a regular speaker at various conferences, she has also taken on the role of organizing conferences on Data and AI, showcasing her commitment to knowledge sharing and community building. Currently, she is working as a consultant in a German IT consulting company.
Are Large Language Models (LLMs) sophisticated pattern matchers ('parrots') without understanding, or potential prodigies that will eventually surpass human intelligence? Drawing insights from both camps, we attempt to reconcile these perspectives, examine the current state of LLMs and their potential trajectories, and explore the profound impact these developments have on how we engineer software in the years to come.
Target Audience: Developers and Architects
Prerequisites: A basic understanding of Large Language Models is helpful but not required
Level: Basic
Extended Abstract:
Large Language Models (LLMs) are complex 'black box' systems. Their capabilities remain largely mysterious, only beginning to be understood through interaction and experimentation. While these models occasionally yield surprisingly accurate responses, they also exhibit shocking, elementary mistakes and limitations, creating more confusion than clarity.
When seeking expert insights, we find two diverging perspectives. On one side, we have thinkers like Noam Chomsky and AI experts such as Yann LeCun, who view LLMs as stochastic 'parrots' — sophisticated pattern matchers without true comprehension.
In contrast, AI pioneers like Geoffrey Hinton and Ilya Sutskever see LLMs as potential 'prodigies': AI systems capable of eventually surpassing human intelligence. Visionaries like Yuval Noah Harari go further and view LLMs as substantial societal threats.
Regardless of whether we see LLMs as 'parrots' or 'prodigies', they are undeniably catalyzing a paradigm shift in software engineering, broadening horizons and pushing the boundaries of the field.
What are the theories underpinning these experts' views? Can their perspectives be reconciled, and what can we learn for the future of software engineering?
To answer these questions, we examine the current capabilities and developments of LLMs and explore their potential trajectories.
Steve Haupt, an agile software developer at andrena objects, views software development as a quality-oriented craft. Out of enthusiasm for artificial intelligence, he explores its impact on the software craft, works on AI projects, and develops best practices along the way. He speaks regularly about artificial intelligence and develops training courses to spread this knowledge.
More content from this speaker? Take a look at SIGS.de: https://www.sigs.de/experten/steve-haupt/
Artificial Intelligence (AI) has become integral to software development, automating complex tasks and shaping the field's future. However, it also comes with challenges. In this talk, we explore how AI impacts current software development and what possibilities it opens for the future. We'll delve into AI language models in programming, discussing their pros, cons, and challenges. This talk, tailored to both supporters and skeptics of AI in software development, doesn't shy away from discussing the ethical obligations tied to this technology.
Target Audience: Software Engineers, Architects and Project Leaders in an enterprise environment
Prerequisites: Basic Understanding of Software Development
Level: Advanced
Extended Abstract:
The integration of Artificial Intelligence (AI), especially AI language models, is fundamentally reshaping the landscape of software development. This transformative technology offers exciting possibilities, from enhancing developer productivity to simplifying complex tasks. However, it is crucial to acknowledge and address the inherent challenges that accompany these advancements.
In this session, we delve into AI's role in software development, emphasizing both the potential benefits and the drawbacks of AI. We will also examine the ethical implications of this technology.
By the end of the session, attendees will have a comprehensive understanding of AI's role in software development, reinforced by practical demonstrations. Participants will gain insight and skills for their daily roles as well as strategies for adopting and applying AI within their software development context.
As AI continues to permeate software development, understanding its potential and preparing for its challenges has become vital. My session, directly tied to the conference theme of "Expanding Horizons", provides unique, practical insights to help attendees navigate an AI-integrated future. Join me in encouraging the responsible and effective use of this transformative technology.
Marius Wichtner works as a Lead Software Engineer in the IT Stabilization & Modernization department at MaibornWolff. Focused on the quality and architecture of diverse applications and backend systems, he has a particular interest in how artificial intelligence intersects with and evolves the realm of software development.
Great engineers often use back-of-the-envelope calculations to estimate resources and costs. This practice is equally beneficial in Machine Learning Engineering, helping to confirm the feasibility and value of an ML project. In my talk, I'll introduce a collaborative design toolkit for ML projects. It includes the Machine Learning Canvas and the MLOps Stack Canvas to identify ML use cases and perform initial prototyping, ensuring that a business problem can be solved effectively within reasonable cost and resource parameters. A rough example of such an estimate is sketched below.
Target Audience: Architects, Developers, Project Leader, Data Scientist
Prerequisites: Basic knowledge in machine learning
Level: Advanced
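As an example of the kind of back-of-the-envelope estimate the abstract refers to, a rough serving-cost calculation might look like this; all numbers are illustrative assumptions, not benchmarks or quoted prices.

```python
# Rough monthly cost estimate for an LLM-backed feature (illustrative numbers only).
requests_per_day = 50_000
tokens_per_request = 1_500      # prompt + completion, assumed average
price_per_1k_tokens = 0.002     # USD, assumed blended rate

daily_cost = requests_per_day * tokens_per_request / 1_000 * price_per_1k_tokens
monthly_cost = daily_cost * 30
print(f"~${daily_cost:,.0f} per day, ~${monthly_cost:,.0f} per month")
# If that exceeds the value the feature is expected to create, the project
# needs rethinking before any model training or infrastructure work starts.
```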