Please note:
On this page you will only see the English-language presentations of the conference. You can find all conference sessions, including the German-speaking ones, here.
The times given in the conference program of OOP 2024 correspond to Central European Time (CET).
By clicking on "VORTRAG MERKEN" ("save talk") within the session descriptions, you can put together your own schedule. You can view your schedule at any time using the icon in the upper right corner.
Topic: AI & Generative AI
- Monday, 29.01.
- Tuesday, 30.01.
- Wednesday, 31.01.
- Thursday, 01.02.
We will dive into the foundations of Generative AI, especially Large Language Models, and look at how to use them in products and to speed up business processes.
Have you ever wondered how Large Language Models will impact your products? How you can use them to speed up your business processes? And how Security, Data Protection, Tracing, FinOps, … will work in a world of AI?
Product Owners/Managers, Team Leads and Managers will benefit from an easy-to-understand workshop that gives practical advice you can use the next day.
Target Audience: Product Owners, Product Managers, Head ofs, Team Leads, Managers
Prerequisites: General business/product knowledge
Level: Basic
Extended Abstract:
In this workshop you will learn:
- History of LLMs
- How do LLMs work?
- Building blocks of LLMs: What makes them "human"
- Real use cases
- Security & Data Protection
- Tracing
- FinOps for LLMs
- Blueprint for orchestrating an AI Discovery workshop
Björn Schotte is co-founder and managing director at MAYFLOWER GmbH. In his role as Executive Consultant he helps companies with their agile transformation. More than 100 crew members at MAYFLOWER create and develop modern software products with agile teams.
He's an astonished explorer on his life-long agile journey.
Developing functional and effective generative AI solutions requires addressing various challenges. Ensuring moderated content and factual accuracy without hallucinations, integrating proprietary and domain-specific knowledge, adhering to stringent data-residency and privacy requirements, and ensuring traceability and explainability of results all demand meticulous engineering efforts. In this hands-on workshop we will explore strategies to overcome these challenges, learn about best practices and implement examples using Cloud services.
Max. number of participants: 200
A laptop (with browser access) is required.
Target Audience: Data Architects, Data Engineers, Data Scientists, Machine Learning Engineers
Prerequisites: Basic knowledge about AI solutions and related Cloud services is a plus
Level: Advanced
Extended Abstract:
Generative AI is taking the world by storm and enterprises across industries are rallying to adopt the technology. However, developing functional and effective generative AI solutions within organizations requires addressing various challenges beyond the management of these novel machine learning models. Ensuring moderated content and factual accuracy without hallucinations, integrating proprietary and domain-specific knowledge, adhering to stringent data-residency and privacy requirements, and ensuring traceability and explainability of results all demand meticulous engineering efforts. Moreover, the user experience of the application has emerged as a crucial performance indicator, while maintaining a lean application footprint is essential for a positive business case.
In this hands-on workshop we will explore strategies to overcome these challenges and implement examples using Cloud services of Amazon Web Services (AWS). You'll get a temporary AWS account (free of charge), but you must bring your own laptop to participate. We will delve into best practices, design patterns, and reference architectures.
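The grounding challenge described above is often tackled with retrieval-augmented generation: fetch the most relevant proprietary document first, then constrain the model to it. A minimal sketch in plain Python follows; the documents and the word-overlap scoring are illustrative assumptions, not the workshop's AWS-based implementation, which would use real embeddings and a managed LLM endpoint.

```python
# Minimal retrieval-augmented prompting sketch (illustrative only).
import re

def _words(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def score(query: str, doc: str) -> int:
    """Naive relevance: shared words (a real system uses embedding similarity)."""
    return len(_words(query) & _words(doc))

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Ground the prompt in the most relevant proprietary document."""
    best = max(docs, key=lambda d: score(query, d))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n"
        f"Context: {best}\n"
        f"Question: {query}"
    )

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9:00-17:00 CET.",
]
prompt = build_grounded_prompt("When can I return a purchase?", docs)
```

The explicit "answer only from the context" instruction is one common tactic against hallucinations; data-residency requirements are then addressed by where the documents and the model run, not by the prompt.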
Aris Tsakpinis is a Specialist Solutions Architect for AI & Machine Learning with a special focus on natural language processing (NLP), large language models (LLMs), and generative AI.
Dennis Kieselhorst is a Principal Solutions Architect at Amazon Web Services with over 15 years of experience in software architectures, especially in large distributed heterogeneous environments.
While AI systems differ in some points from "traditional" systems, testing them does not have to be more difficult - knowing the right questions to ask will go a long way. In this talk we will:
- Arm you with a checklist of questions to ask when preparing to test an AI system
- Show you that testers and data scientists have common ground when testing AI systems
Keep calm and test on - AI systems are not that different from "normal" systems.
Target Audience: Testers, Data Scientists, Developers, Product Owners, Architects
Prerequisites: Basic knowledge of software testing
Level: Advanced
Extended Abstract:
If you're a tester about to test your first AI system, or wanting to move into that area, you're probably wondering how you can prepare for the role. While we usually do not deal with complexity of the magnitude of Large Language Models like ChatGPT, AI systems still seemingly pose different challenges than "traditional" systems.
You're not the first person to deal with these questions. In fact, a group of us got together to explore it in more detail. Is there a general framework of questions that testers can use to help develop a quality strategy for systems that use AI? We wanted to see if we could design one. To this end, we got together a group with diverse roles: tester, test architect, data scientist, project lead and CEO.
Join us in this talk to hear how we approached the task and what our results are, including an example of using our checklist of questions to analyse a system that uses AI. Along the way we also addressed questions like "What is the role of a tester in such projects?" and "How much math do I need?" - we'll talk about those discussions, too. We encourage participants to use our checklist and give us feedback on it!
Gregor Endler received his doctorate in computer science in 2017 with his dissertation "Adaptive Data Quality Monitoring with a Focus on the Completeness of Timestamped Data". Since then he has been working as a Data Scientist at codemanufaktur GmbH. His work focuses in particular on machine learning, data analysis, and data visualization.
Marco Achtziger is a Test Architect working for Siemens Healthineers in Forchheim. In this role he supports teams working in an agile environment in implementing and executing tests in the preventive test phase of a large project. He has several qualifications from iSTQB and iSQI, is a certified Software Architect of Siemens AG, and is a Siemens Senior Key Expert in the area of Testing and Continuous Delivery.
Security engineering from TARA and security requirements to security testing demands mechanisms to generate, verify, and connect the resulting work products. Traditional methods need a lot of manual work and yet show inconsistencies and imbalanced tests. Generative AI enables novel methods with semi-automatic cyber security requirements engineering, traceability, and testing. In this industry presentation, we show two promising approaches with NLP and transformers and how to embed them into an industry-scale security pipeline from TARA to test.
Target Audience: Test Engineers, QA Experts, Security Experts, Requirements and Systems Engineers
Prerequisites: Some background on security and testing. The AI methods will be introduced hands-on.
Level: Advanced
Extended Abstract:
Security engineering from TARA and security requirements to security testing demands mechanisms to generate, verify, and connect the resulting work products. Traditional methods require a lot of manual work, for instance for traceability, and yet show little impact given the many inconsistencies and imbalanced tests. NLP, especially transformers, enables novel methods with semi-automatic cyber security requirements engineering, traceability, and testing.
We focus here on using generative AI with NLP because they can support the methods described in the standard while there is no need to change the form of representation from what is required by cybersecurity standards and respective stakeholders. Especially the use of Large Language Models (LLM) for text generation, aggregation, and classification has recently proven promising to improve the efficiency and effectiveness of security analysis and tests.
Grey Box Penetration Testing is an approach where only publicly available information is used to perform an attack on the SUT. This often requires massive research effort. Threat catalogs, in which known and frequently used threats are recorded, can increase performance during testing. To provide additional aid, we are currently working towards building an AI-supported threat catalogue. For this, we use a special transformer model that is specialized in searching and summarizing information. When fed with known information about the SUT, this model searches all available databases such as CVE or CAPEC, previously recorded attack patterns, and other available contextual information, and gives the penetration test engineer an initial idea of how to approach an attack on the SUT.
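The catalogue idea can be pictured in miniature: given the components known to be in the SUT, look up recorded threat entries and hand the tester an initial briefing. The data structure and helper below are hypothetical stand-ins; the actual tooling uses a specialized transformer over live sources such as CVE and CAPEC.

```python
# Toy threat-catalogue lookup (the catalogue and its shape are assumptions;
# real entries would come from CVE/CAPEC and recorded attack patterns).
CATALOGUE = [
    {"id": "CVE-2021-44228", "component": "log4j",
     "summary": "Remote code execution via JNDI lookup."},
    {"id": "CVE-2014-0160", "component": "openssl",
     "summary": "Heartbleed memory disclosure."},
]

def briefing(sut_components: list[str]) -> list[str]:
    """Return initial attack ideas for the components known to be in the SUT."""
    hits = [e for e in CATALOGUE if e["component"] in sut_components]
    return [f'{e["id"]} ({e["component"]}): {e["summary"]}' for e in hits]
```

The transformer's role in the described pipeline is to produce and condense such summaries automatically rather than to rely on hand-curated entries.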
Using AI to generate both grey and white box attack paths is an approach to check how much information is available about the system or about components such as libraries and dependencies used in the SUT. Having introduced these methods to the security life-cycle, we will in the next step better integrate the tools. This will facilitate a swift turn-around upon changes in an agile delivery pipeline and thus achieve consistency from TARA to security requirements and (regression) test cases.
Vector, together with the University of Stuttgart, has developed transformer and generative AI-based methodologies for the specification and validation of cybersecurity requirements, with the goal of increasing productivity and quality.
In this industry presentation, we practically show how generative AI can scale into an industry-scale security pipeline.
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/christof.ebert
Christof Ebert is the managing director of Vector Consulting Services in Stuttgart, Germany. He holds a PhD from the University of Stuttgart, is a Senior Member of the IEEE, and teaches at the University of Stuttgart and the Sorbonne University in Paris. Cybersecurity has been his focus since his studies in the USA, where he contributed directly to countering the Morris worm.
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/experten/christof-ebert/
The introduction of ChatGPT and Copilot X has brought a lot of hype to developer experiences, especially documentation. But are we ready to chat with our documentation instead of reading it, using these tools? How can we, as maintainers, leverage these tools to offer a better documentation experience for developers as our users? Join my talk and let's find out.
Target Audience: Engineers, Developers
Prerequisites: Programming
Level: Advanced
Maya Shavin is a Senior Software Engineer at Microsoft, based in Israel, working extensively with JavaScript and frontend frameworks. She founded and currently organizes the VueJS Israel Meetup Community, helping to create a strong playground for Vue.js lovers and like-minded developers. Maya is also a published author, international speaker and open-source maintainer of frontend and web projects. As a core maintainer of the StorefrontUI framework for e-commerce, she focuses on delivering performant components and best practices to the community, believing that strong vanilla JavaScript knowledge is necessary for being a good web developer.
Discover the transformative power of GenAI in software testing. This lecture showcases a powerful GenAI-powered test framework that enhances testing efficiency. Learn how GenAI analyzes applications to generate automated test cases and uncovers hidden defects with random exploratory tests. Experience AI-powered peer reviewers for code analysis and quality evaluations. Explore Smart Report AI, providing comprehensive analysis and insights into test execution, results, and defects. Join us to revolutionize your software testing with GenAI.
Target Audience: Quality Engineers, Architects, Developers, Project Leader, Managers
Prerequisites: Basic knowledge of modern software development
Level: Advanced
Extended Abstract:
In the rapidly evolving landscape of software development, Generative AI holds immense potential to revolutionize the field of software testing. This lecture aims to explore the transformative capabilities of GenAI in software testing by showcasing a powerful GenAI-powered test framework and its key components.
The lecture begins by introducing the main framework, which combines the power of GenAI and automation to enhance the efficiency and effectiveness of software testing. Attendees will learn how GenAI can analyze the application under test and generate automated test cases based on a predefined test framework. These test cases can be executed locally or seamlessly integrated into DevOps processes, allowing for efficient and comprehensive testing of applications.
Furthermore, the lecture delves into the utilization of generative AI for the generation of random exploratory tests. By leveraging the capabilities of GenAI, testers can uncover hidden defects and vulnerabilities that may go unnoticed with traditional testing approaches. This demonstration highlights the innovative potential of GenAI in driving comprehensive test coverage.
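The exploratory idea above can be sketched in miniature: throw randomly generated inputs at a function under test and record the ones that crash it. The function and the input generator below are hypothetical; a GenAI-driven framework would derive far richer inputs and oracles from the application itself.

```python
# Random exploratory testing in miniature (function under test is a toy
# with a planted defect: whitespace-only input crashes it).
import random
import string

def normalize_username(raw: str) -> str:
    """Function under test: capitalize the first letter, lowercase the rest."""
    trimmed = raw.strip()
    return trimmed[0].upper() + trimmed[1:].lower()  # IndexError if trimmed == ""

def explore(runs: int = 200, seed: int = 42) -> list[str]:
    """Return the random inputs that raised an exception."""
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        candidate = "".join(
            rng.choices(string.ascii_letters + " ", k=rng.randint(0, 5))
        )
        try:
            normalize_username(candidate)
        except Exception:
            failures.append(candidate)
    return failures
```

Each crashing input becomes a regression test candidate, which is how such runs feed back into the automated suite.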
Additionally, the lecture showcases the Smart Peer AI component of the framework. Attendees will discover how AI-powered peer reviewers can anticipate functional and non-functional issues, providing insightful code analysis and quality evaluations. By incorporating specific coding best practices, architectural standards, and non-standard attributes, Smart Peer AI enhances code quality and accelerates the development process.
Finally, the lecture concludes by presenting the Smart Report AI, an AI-powered solution that analyzes test reports generated from automated testing processes. This component provides a comprehensive analysis of test execution, environment configuration, test results, defect severity automated analysis, reproducibility steps, and suggestions for fixes. Smart Report AI empowers testers with valuable insights and facilitates effective decision-making in the testing and development lifecycle.
Davide Piumetti is a Technology Architect with expertise in software testing and part of the quality engineering leadership in Switzerland. With over 15 years of experience, he drives innovative projects in Big Data, Generative AI, and Test@DevOps. He is also exploring the potential of high-performance computing and quantum computing in software testing. Committed to pushing boundaries, he shapes the future of quality engineering through innovation and continuous improvement.
In today's economy, creating intelligent customer experiences is a key differentiator for organizations looking to compete and gain an advantage. The use of AI, and especially Generative AI, has become more prevalent in the business world. We will discuss some of the work we did on creating an AI-driven CX platform that offers data management, Customer360 views, personalization and chatbots infused with Generative AI, and advanced security features. We will also discuss practical use cases and outcomes of our approach.
Target Audience: Managers, Decision Makers, Leaders
Prerequisites: None
Level: Advanced
Currently a Principal AI Strategist with Amazon, Zorina Alliata works with global customers to find solutions that speed up operations and enhance processes using Artificial Intelligence and Machine Learning. Zorina is also an Adjunct Professor at Georgetown University SCS and OPIT, as a creator and instructor for AI courses.
Zorina is involved in AI for Good initiatives and working with non-profit organizations to address major social and environmental challenges using AI/ML. She also volunteers with the Zonta organization and as the Chair of the Artificial Intelligence Committee at AnitaB.org, to support women in tech.
You can find Zorina on LinkedIn:
https://www.linkedin.com/in/zorinaalliata/
Hara Gavriliadi is a Senior CX Strategist at AWS Professional Services, helping customers reimagine and transform their customer experience using data, analytics, and machine learning. Hara has 13 years of experience in supporting organisations to become more data-driven and in turning analytics and insights into commercial advice that enables growth and innovation. Hara is passionate about ID&E, and she is an AWS GetIT Ambassador inspiring young students to consider a future in STEAM.
This session will introduce embedding vectors and their use in artificial intelligence. It will illustrate how these constructs can be effectively utilized in enterprise AI solutions, specifically in conjunction with prompt engineering. Rainer Stropek will present practical demonstrations using Microsoft's Azure Cloud and OpenAI's ChatGPT 4 model, showcasing real-world application scenarios and potential business benefits. Attendees will gain insights into emerging AI trends and practices in enterprise contexts.
Target Audience: Architects, Developers, Data Scientists
Prerequisites: Basic knowledge about LLMs, ability to read source code
Level: Advanced
Extended Abstract:
This session aims to demystify the technical concepts of embedding vectors and their integration into the Artificial Intelligence (AI) world, with a specific focus on enterprise applications.
We provide an accessible introduction to embedding vectors – the mathematical constructs that translate complex data types into a format that machine learning algorithms can comprehend. Attendees will grasp the foundational principles of embeddings and their pivotal role in enhancing the capabilities of AI algorithms.
Next, we delve into the realm of prompt engineering. Here, we explore how the strategic crafting of prompts can steer AI models, such as OpenAI's ChatGPT 4, toward generating more precise and contextually accurate responses. We'll explain the process and strategies involved in prompt engineering, offering a roadmap for businesses to adopt these practices effectively.
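The interplay of the two ideas can be shown with a toy example in which hand-made three-dimensional vectors stand in for real embedding output; a production setup would obtain the vectors from an embedding endpoint (for instance in Azure) and send the resulting prompt to the chat model.

```python
# Toy illustration: pick the document whose "embedding" is closest to the
# query's, then craft a prompt around it. Vectors here are invented stand-ins.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Pretend each document was embedded into a space where semantically
# similar texts lie close together.
corpus = {
    "invoice handling": [0.9, 0.1, 0.0],
    "holiday policy":   [0.0, 0.8, 0.2],
}
query_vec = [0.85, 0.15, 0.0]  # "embedding" of the user's question

best = max(corpus, key=lambda name: cosine(query_vec, corpus[name]))
prompt = f"Using the '{best}' document, answer the user's question."
```

The prompt construction in the last line is where prompt engineering takes over: the retrieved material is framed so the model answers precisely and in context.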
Rainer Stropek will demonstrate these principles in a practical setting using Microsoft's Azure Cloud. We will walk through real-life scenarios, showing how enterprises can implement such technologies.
The session targets AI enthusiasts, software architects, developers, and data scientists. By the end of this session, participants will have gained a better understanding of these AI concepts, and how they can be harnessed.
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/rainer.stropek
Rainer Stropek is co-founder and CEO of the company software architects and has been serving this role since 2008. At software architects Rainer and his team are developing the award-winning SaaS time tracking solution “time cockpit”. Previously, Rainer founded and led two IT consulting firms that worked in the area of developing software solutions based on the Microsoft technology stack.
Rainer is recognized as an expert concerning software development, software architecture, and cloud computing. He has written numerous books and articles on these topics. Additionally, he regularly speaks at conferences, workshops and trainings in Europe and the US. In 2010 Rainer has become one of the first MVPs for the Microsoft Azure platform. In 2015, Rainer also became a Microsoft Regional Director.
Rainer graduated with honors from the Higher Technical School Leonding (AT) for MIS and holds a BSc (Hons) in Computer Studies from the University of Derby (UK).
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/experten/rainer-stropek/
In the evolving AI landscape, the EU AI Act introduces new standards for assuring high-risk AI systems. This presentation will explore the tester's role in navigating these standards, drawing from the latest research and from our experiences with an Automatic Employment Decision System, a high-risk AI. We'll discuss emerging methodologies, conformity assessments, and post-deployment monitoring, offering insights and practical guidance for aligning AI systems with the Act's requirements.
Target Audience: QA Professionals, AI Engineers/Architects, Business Leaders, POs/PMs, Policy Makers
Prerequisites: Basic Understanding of AI, Familiarity with Testing, Awareness of EU AI Act, Interest in AI Assurance
Level: Advanced
Extended Abstract:
As the landscape of Artificial Intelligence (AI) rapidly evolves, the upcoming EU AI Act is set to introduce a new paradigm for assuring high-risk AI systems. This session will allow participants to delve into the pivotal role of testers in this context. We will decode the complexities of the Act and see how following it will help ensure robust, transparent, and ethically aligned AI systems.
Drawing on recent research and my own experience in testing high-risk AI systems, I will discuss the emerging methodologies for testing high-risk AI, including explainability methods, robustness testing, and fairness testing. Together, we will also explore the Act's emphasis on conformity assessments and post-deployment monitoring, highlighting the tester's role in these processes. Participants will gain a unique behind-the-scenes look at how we have gone from chaos to order in testing an Automatic Employment Decision System, a high-risk AI.
Joining this session will equip participants with valuable insights and practical guidance on aligning AI systems with the EU AI Act. This is a must-attend for testers, AI developers, and business leaders alike who are navigating this new frontier, exploring the challenges, opportunities, and future directions in the assurance of high-risk AI systems. By the end of the session, participants will be better prepared to face the challenges posed by the EU AI Act and will have a clear understanding of the future directions in the assurance of high-risk AI systems.
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/andrei.nutas
Andrei Nutas is a Test Architect at Adesso with over 7 years of industry experience. For the past year, among other things, Andrei has helped align an Automated Employment Decision System to the upcoming EU AI Act. In his free time, he is a research fellow with the West University of Timisoara where he focuses on AI Alignment and AI Ethics.
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/experten/andrei-nutas/
One of the fundamental challenges for machine learning (ML) teams is data quality, or more accurately the lack of data quality. Your ML solution is only as good as the data that you train it on, and therein lies the rub: Is your data of sufficient quality to train a trustworthy system? If not, can you improve your data so that it is? You need a collection of data quality “best practices”, but what is “best” depends on the context of the problem that you face. Which of the myriad of strategies are the best ones for you?
Target Audience: Developers, Data Engineers, Managers, Decision Makers
Prerequisites: None
Level: Advanced
Extended Abstract:
This presentation compares over a dozen traditional and agile data quality techniques on five factors: timeliness of action, level of automation, directness, timeliness of benefit, and difficulty to implement. The data quality techniques explored include: data cleansing, automated regression testing, data guidance, synthetic training data, database refactoring, data stewards, manual regression testing, data transformation, data masking, data labeling, and more. When you understand what data quality techniques are available to you, and understand the context in which they’re applicable, you will be able to identify the collection of data quality techniques that are best for you.
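To make one of the listed techniques concrete, here is a sketch of an automated completeness check: a basic data quality test that flags fields with too many missing values before a model is trained on them. The records and the 0.9 threshold are illustrative assumptions.

```python
# Automated per-field completeness check (illustrative data and threshold).
def completeness(records: list[dict], fields: list[str]) -> dict[str, float]:
    """Fraction of records with a non-empty value for each field."""
    total = len(records)
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / total
        for f in fields
    }

rows = [
    {"id": 1, "label": "cat", "source": "web"},
    {"id": 2, "label": "",    "source": "web"},
    {"id": 3, "label": "dog", "source": None},
]
report = completeness(rows, ["id", "label", "source"])
# Flag fields below the (assumed) 0.9 threshold as candidates for cleansing.
flagged = [f for f, ratio in report.items() if ratio < 0.9]
```

Such a check scores high on level of automation and timeliness of action, which is exactly the kind of trade-off the five-factor comparison makes explicit.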
Scott Ambler is an Agile Data Coach and Consulting Methodologist with Ambysoft Inc., leading the evolution of the Agile Data and Agile Modeling methods. Scott was the (co-)creator of PMI’s Disciplined Agile (DA) tool kit and helps organizations around the world to improve their way of working (WoW) and ways of thinking (WoT). Scott is an international keynote speaker and the (co-)author of 30 books.
Expanding horizons has many facets. It means taking advantage of new opportunities that arise from technical progress, such as Large Language Models, or societal challenges like Sustainability. Expanding horizons also means taking responsibility. AI and data analytics have a direct impact on our future life, good and bad. Expanding horizons also means reflection on existing practice. We have perhaps forgotten the benefits of structured monoliths, or have sometimes overdone it with agility, which suggests a critical and learning retrospective.
Moderation: Frank Buschmann
Panelists: Isabel Bär, Zoe Lopez-Latorre, Carola Lilienthal, Xin Yao
Target Audience: Software Practitioners
Prerequisites: None
Level: Advanced
Extended Abstract:
The motto of OOP 2024 has many facets. Expanding horizons means understanding and taking advantage of new opportunities that arise from technological progress or societal challenges. For example, on Large Language Models, Sustainability, and the Metaverse. Expanding horizons also means taking responsibility and not blindly applying new technologies. For example, AI and data analytics have a direct impact on our future life and social interaction. With all the consequences, good and bad. But broadening horizons also means reflecting on existing technologies and practices. In the course of the euphoria about microservices, for example, we have perhaps forgotten the advantages of the structured monolith too much, or have sometimes overdone it with agility. A critical and learning retrospective seems appropriate.
In this panel, we will examine the various aspects of the motto of OOP 2024 to give us all meaningful guiding thoughts for the exciting journey to expanding our horizons.
Frank Buschmann is a Distinguished Engineer at Siemens Technology in Garching. His interests are in modern software architecture and in development approaches for industrial digitization.
Isabel Bär is a skilled professional with a Master's degree in Data Engineering from the Hasso-Plattner-Institute. She has made contributions in the field of AI software, focusing on areas like MLOps and Responsible AI. Beyond being a regular speaker at various conferences, she has also taken on the role of organizing conferences on Data and AI, showcasing her commitment to knowledge sharing and community building. Currently, she is working as a consultant in a German IT consulting company.
Dr. Carola Lilienthal is a software architect and managing director at Workplace Solutions GmbH. She has been analyzing the future viability of software architectures since 2003 and speaks about this topic at conferences. In 2015 she summarized her experiences in her books "Langlebige Softwarearchitekturen" and "Domain-Driven Transformation".
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/experten/carola-lilienthal/
Zoe is a digital sustainability and web performance engineer with 3 years of experience in the field. They have published user research and actionable advice for brands and advertisers, and they are currently running web performance and web energy consumption correlation research studies. Zoe is also a member of the Sustainable Web W3C Community Group, focused on web digital sustainability measurement and standards to offer actionable advice to developers. They are a contributor to the Web Sustainability Guidelines 1.0 Draft.
Zoe is passionate about using their skills to help businesses reduce their digital environmental impact. They believe that digital sustainability is an important issue that everyone should be aware of, and they are committed to raising awareness and sharing their knowledge with others.
DDD Consultant, Sociotechnical Architect & Advocate of Systems Thinking
Xin Yao is an independent consultant specialized in Domain-Driven Design (DDD), Sociotechnical Architecture and Systems Leadership. She frequently speaks at international design and architecture conferences. Earlier in her career, Xin was chief architect at Danske Bank, spearheading large-scale change initiatives. An experienced architect and an avid change agent, Xin nudges organizations at crossroads to move beyond seeing architecture as an upfront design blueprint. She is deeply committed to collective reasoning, participatory discovery and systems leadership. Xin facilitates languaging, modeling and reflective conversations to help teams and organisations make sense, make decisions and make intuitive business software.
During the talk, we'll dive into the historical context of Generative AI and examine their challenges. From legal compliance to fairness, transparency, security, and accountability, we'll discuss strategies for implementing Responsible AI principles.
It's important to note that the landscape for AI-driven products is still evolving, and there are no established best practices. The legislative framework surrounding these models remains uncertain, making it even more vital to engage in discussions that shape responsible AI practices.
Target Audience: Decision Makers, Developers, Managers, Everyone - AI-driven products require cross-functional teams
Prerequisites: None
Level: Basic
Extended Abstract:
Foundation models like GPT-4, BERT, or DALL-E 2 are remarkable in their versatility, trained on vast datasets using self-supervised learning. However, the adaptability of these models brings forth ethical, socio-technical, and legal questions that demand responsible development and deployment.
During the talk, we'll delve into the history of AI to better understand the evolution of generative models. We'll explore strategies for implementing Responsible AI principles, tackling issues such as legal compliance, fairness, transparency, security, accountability and their broader impact on society.
It's important to note that there are currently no established best practices for AI-driven products, and the legislative landscape surrounding them remains unclear. This underscores the significance of our discussion as we collectively navigate this emerging field.
Isabel Bär is a skilled professional with a Master's degree in Data Engineering from the Hasso-Plattner-Institute. She has made contributions in the field of AI software, focusing on areas like MLOps and Responsible AI. Beyond being a regular speaker at various conferences, she has also taken on the role of organizing conferences on Data and AI, showcasing her commitment to knowledge sharing and community building. Currently, she is working as a consultant in a German IT consulting company.
Are Large Language Models (LLMs) sophisticated pattern matchers ('parrots') without understanding, or potential prodigies that will eventually surpass human intelligence? Drawing insights from both camps, we attempt to reconcile these perspectives, examine the current state of LLMs and their potential trajectories, and discuss the profound impact these developments have on how we engineer software in the years to come.
Target Audience: Developers and Architects
Prerequisites: A basic understanding of Large Language Models is helpful but not required
Level: Basic
Extended Abstract:
Large Language Models (LLMs) are complex 'black box' systems. Their capabilities remain largely mysterious, only beginning to be understood through interaction and experimentation. While these models occasionally yield surprisingly accurate responses, they also exhibit shocking, elementary mistakes and limitations, creating more confusion than clarity.
When seeking expert insights, we find two diverging perspectives. On one side, we have thinkers like Noam Chomsky and AI experts such as Yann LeCun, who view LLMs as stochastic 'parrots' — sophisticated pattern matchers without true comprehension.
In contrast, AI pioneers like Geoffrey Hinton and Ilya Sutskever see LLMs as potential 'prodigies' — AI systems capable of eventually surpassing human intelligence — while visionaries like Yuval Noah Harari view LLMs as substantial societal threats.
Regardless of whether we see LLMs as 'parrots' or 'prodigies', they are undeniably catalyzing a paradigm shift in software engineering, broadening horizons, and pushing the boundaries of the field.
What are the theories underpinning these experts' views? Can their perspectives be reconciled, and what can we learn for the future of software engineering?
To answer these questions, we examine the current capabilities and developments of LLMs and explore their potential trajectories.
Steve Haupt, an agile software developer at andrena objects, views software development as a quality-driven craft. Fascinated by AI, he explores its implications for software craftsmanship, working on AI projects and developing best practices. Steve focuses on applying Clean Code and XP principles to AI development. He regularly speaks on AI and co-created an AI training course, aiming to bridge traditional software development with modern AI technologies for sustainable solutions.
More content from this speaker? Have a look at sigs.de: https://www.sigs.de/experten/steve-haupt/
Artificial Intelligence (AI) has become integral to software development, automating complex tasks and shaping this field's future. However, it also comes with challenges. In this talk, we explore how AI impacts current software development and possibilities for the future. We'll delve into AI language models in programming, discussing pros, cons and challenges. This talk, tailored to both supporters and skeptics of AI in software development, doesn't shy away from discussing the ethical obligations tied to this technology.
Target Audience: Software Engineers, Architects and Project Leaders in an enterprise environment
Prerequisites: Basic Understanding of Software Development
Level: Advanced
Extended Abstract:
The integration of Artificial Intelligence (AI), especially AI language models, is fundamentally reshaping the landscape of software development. This transformative technology offers exciting possibilities, from enhancing developer productivity to simplifying complex tasks. However, it is crucial to acknowledge and address the inherent challenges that accompany these advancements.
In this session, we delve into AI's role in software development, emphasizing both the potential benefits and potential drawbacks of AI. We will also examine the ethical implications of this technology.
By the end of the session, attendees will have a comprehensive understanding of AI's role in software development, reinforced by practical demonstrations. Participants will gain insight and skills for their daily roles as well as strategies for adopting and applying AI within their software development context.
As AI continues to permeate software development, understanding its potential and preparing for its challenges has become vital. My session, directly tied to the conference theme of "Expanding Horizons", provides unique, practical insights to help attendees navigate an AI-integrated future. Join me in encouraging the responsible and effective use of this transformative technology.
Marius Wichtner works as a Lead Software Engineer in the IT Stabilization & Modernization department at MaibornWolff. Focused on the quality and architecture of diverse applications and backend systems, he has a particular interest in how artificial intelligence intersects with and evolves the realm of software development.
Great engineers often use back-of-the-envelope calculations to estimate resources and costs. This practice is equally beneficial in Machine Learning Engineering, aiding in confirming the feasibility and value of an ML project. In my talk, I'll introduce a collaborative design toolkit for ML projects. It includes Machine Learning Canvas and MLOps Stack Canvas to identify ML use cases and perform initial prototyping, thus ensuring a business problem can be effectively solved within reasonable cost and resource parameters.
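A back-of-the-envelope feasibility check of this kind can be sketched in a few lines. The throughput and price figures below are purely illustrative assumptions (not real cloud benchmarks), and the helper function is hypothetical; the point is the shape of the calculation, not the numbers:

```python
# Back-of-the-envelope estimate of an ML project's training cost.
# All figures are illustrative assumptions, not vendor benchmarks.

def training_cost_estimate(dataset_gb, epochs, gb_per_gpu_hour, gpu_hourly_usd):
    """Rough cost: GPU-hours needed to stream the dataset once per epoch,
    multiplied by the number of epochs and the hourly GPU price."""
    gpu_hours = dataset_gb / gb_per_gpu_hour * epochs
    return gpu_hours, gpu_hours * gpu_hourly_usd

# Example: 500 GB of data, 10 epochs, assuming one GPU processes
# 50 GB/hour and is billed at $3/hour.
hours, cost = training_cost_estimate(dataset_gb=500, epochs=10,
                                     gb_per_gpu_hour=50, gpu_hourly_usd=3.0)
print(f"~{hours:.0f} GPU-hours, ~${cost:.0f}")  # ~100 GPU-hours, ~$300
```

An estimate like this is deliberately crude; its value lies in revealing early whether a use case is off by an order of magnitude from a reasonable budget, before any detailed prototyping begins.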
Target Audience: Architects, Developers, Project Leader, Data Scientist
Prerequisites: Basic knowledge in machine learning
Level: Advanced
In this session, we focus on the topic of software product management (PdM) and how PdM practices are rapidly changing. Together we explore and define how to do PdM for digital products as well as software-, data- and AI-intensive systems. Some questions we explore include:
- How to change current PdM practices to work with digital technologies and digital offerings?
- What is the future of PdM practices and what are the key characteristics of digital product management?
Target Audience: Product Managers, Software-Architects, Software Engineers, R&D Management
Prerequisites: No specific technology expertise required.
Level: Advanced
Jan Bosch is professor at Chalmers University of Technology in Gothenburg, Sweden and director of the Software Center (www.software-center.se), a strategic partner-funded collaboration between 17 large European companies (including Ericsson, Volvo Cars, Volvo Trucks, Saab Defense, Scania, Siemens and Bosch) and five universities focused on digitalization. Earlier, he worked as Vice President Engineering Process at Intuit Inc., where he also led Intuit's Open Innovation efforts and headed the central mobile technologies team. Before Intuit, he was head of the Software and Application Technologies Laboratory at Nokia Research Center, Finland. Prior to joining Nokia, he headed the software engineering research group at the University of Groningen, The Netherlands. He received an MSc degree from the University of Twente, The Netherlands, and a PhD degree from Lund University, Sweden.
His research activities include digitalisation, evidence-based development, business ecosystems, artificial intelligence and machine/deep learning, software architecture, software product families and software variability management. He is the author of several books including "Design and Use of Software Architectures: Adopting and Evolving a Product Line Approach" published by Pearson Education (Addison-Wesley & ACM Press) and “Speed, Data and Ecosystems: Excelling in a Software-Driven World” published by Taylor and Francis, editor of several books and volumes and author of hundreds of research articles. He is editor for Journal of Systems and Software as well as Science of Computer Programming, chaired several conferences as general and program chair, served on numerous program committees and organised countless workshops. Jan is a fellow member of the International Software Product Management Association (ISPMA) and a member of the Royal Swedish Academy of Engineering Science.
Jan serves on the boards of IVER, Peltarion and Burt Intelligence and on the advisory boards of Assia Inc. in Redwood City, CA and Pure Systems GmbH (Germany). Earlier he was chairman of the board of Auqtus, Fidesmo and Remente. In the startup space, Jan is an angel investor in several startup companies. He also runs a boutique consulting firm, Boschonian AB, that offers its clients support around the implications of digitalization including the management of R&D and innovation. For more information see his website: www.janbosch.com.
Helena Holmström Olsson is a professor of Computer Science at Malmö University, Sweden and a senior researcher in Software Center (software-center.se). Her research interests and expertise include engineering aspects of AI systems, data driven development practices, digitalization and digital transformation, and software and business ecosystems. She is the supervisor of several PhD students in the area of data driven development and AI engineering, focusing in particular on the business and organizational implications of AI deployment, and she has a well-established and continuous collaboration with the European software-intensive embedded systems industry.
Her research is published in high-quality software engineering journals and conferences and she is program chair for the “2nd International Conference on AI Engineering - Software Engineering for AI” (CAIN) 2023. Helena is a fellow member of the International Software Product Management Association (ISPMA) and a board member of Malmö University, Sweden.