Conference Program

Please note:
This site displays only the English-language sessions of OOP 2022 Digital. You can find all conference sessions, including the German-language ones, here.

The times given in the conference program of OOP 2022 Digital correspond to Central European Time (CET).

By clicking "EVENT MERKEN" within the session descriptions, you can compile your own schedule. You can view your schedule at any time via the icon in the upper right corner.

Topic: Security

  • Monday
    31.01.
  • Tuesday
    01.02.
  • Thursday
    03.02.
Monday, 31 January 2022
14:00 - 17:00
Mo 14
Security Games – Playfully Improve your Security

Security is an important topic, especially when developing software. But it is often seen as complex and as something that holds everyone back, so it is frequently put off until the end and delegated to an external person or group.

To be effective, security needs to be a continuous part of the development process and to involve the whole team.

Security games can help to achieve this. They involve the whole team and facilitate the learning and application of security principles. They offer a way to integrate expert knowledge and make security less scary, maybe even fun.

Maximum number of participants: 50

Target Audience: Architects, Developers, Project Leaders, Testers, Security Experts
Prerequisites: General interest in security, basic development experience
Level: Basic

Claudius Link has been working in IT since 1994, in roles ranging from system and network administration, support, software development, and development management to information security officer.
During this time he has worked in many domains, from medical devices and laboratory systems, numerical simulations, transportation, and the financial industry to security software, including serving as information security officer for a medium-sized subsidiary of a global enterprise.
He is currently self-employed, promoting human-centred security.
Matthias Altmann is a software developer and IT security expert at Micromata GmbH, where he and his colleagues oversee and develop the IT security area. He is also co-founder and organizer of the IT Security Meetup Kassel, a network of IT security enthusiasts dedicated to professional exchange on the topic. More information on his blog: https://secf00tprint.github.io/blog
Claudius Link, Matthias Altmann

Tuesday, 1 February 2022
16:15 - 17:15
Di 3.3
Making your Bureaucracy Value Stream Lean and Automated

In today’s software-driven world, the integrity of software assets isn’t just a regulatory and compliance requirement; it’s critical for maintaining trust and avoiding irreparable damage to your brand and reputation. We found that compliance, software chain of custody, in-app security, and API security are seen as overburdening bureaucracy. But they have to be part of your software value stream. So the question is: how can they be made so lean, automated, and optimized that they contribute actual value within your DevSecOps approach?

Target Audience: Architects, Developers
Prerequisites: Project development experience
Level: Advanced

Extended Abstract
In today’s software-driven world, the integrity of software assets isn’t just a regulatory and compliance requirement; it’s critical for maintaining trust and avoiding irreparable damage to your brand and reputation.
The same applies to quality, in-app security, and API security in an increasingly digitized world.
In many case studies, we found that compliance, software chain of custody, in-app security, and API security are seen as overburdening bureaucracy. But in all cases, they have to be part of your software value stream.
So the question is: how can they be made so lean, automated, and optimized that they contribute actual value within your DevSecOps approach? In this lecture we provide key insights into how to solve that dilemma and integrate these concerns into your day-to-day work.

Matthias Zieger has been working in the IT industry for almost 25 years, in roles spanning software development, architecture, test automation, application lifecycle management, and DevOps for IBM, Borland, Microsoft, and codecentric. In recent years he has helped large enterprises bring their software into production faster with XebiaLabs release orchestration and deployment automation, from classic Java EE environments through containers and cloud to serverless architectures. For the past two years at Digital.ai, he has been helping large enterprises achieve their digital transformation goals faster through value stream management.

Matthias Zieger
Talk: Di 3.3
Topics: Security

Thursday, 3 February 2022
17:00 - 18:00
Do 5.4
Security Engineering for Machine Learning

Machine Learning appears to have made impressive progress on many tasks, from image classification to autonomous vehicle control and more. ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. This is not necessarily a good thing: systematic risk is invoked by adopting ML in a haphazard fashion. Understanding and categorizing the security engineering risks introduced by ML at the design level is critical. This talk focuses on the results of an architectural risk analysis of ML systems.

Target Audience: Architects, Technical Leads, and Developers and Security Engineers of ML systems; Risk Managers, Software Security Professionals, ML Practitioners, and everyone who is confronted with ML
Prerequisites: None
Level: Advanced

Extended Abstract
Machine Learning appears to have made impressive progress on many tasks, including image classification, machine translation, autonomous vehicle control, playing complex games such as chess, Go, and Atari video games, and more. This has led to much breathless popular press coverage of Artificial Intelligence, and has elevated deep learning to an almost magical status in the eyes of the public. ML, especially of the deep learning sort, is not magic, however.

ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. In my view, this is not necessarily a good thing. I am concerned with the systematic risk invoked by adopting ML in a haphazard fashion.

Our research at the Berryville Institute of Machine Learning (BIML) is focused on understanding and categorizing security engineering risks introduced by ML at the design level. Though the idea of addressing security risk in ML is not a new one, most previous work has focused either on particular attacks against running ML systems (a kind of dynamic analysis) or on operational security issues surrounding ML.

This talk focuses on the results of an architectural risk analysis (sometimes called a threat model) of ML systems in general. A list of the top five (of 78 known) ML security risks will be presented.

Gary McGraw is co-founder of the Berryville Institute of Machine Learning. He is a globally recognized authority on software security and the author of eight best-selling books on the topic. His titles include Software Security, Exploiting Software, Building Secure Software, Java Security, Exploiting Online Games, and six other books; he is also editor of the Addison-Wesley Software Security series. Dr. McGraw has written over 100 peer-reviewed scientific publications. Gary serves on the Advisory Boards of Irius Risk, Maxmyinterest, Runsafe Security, and Secure Code Warrior. He has also served as a Board member of Cigital and Codiscope (acquired by Synopsys) and as Advisor to CodeDX (acquired by Synopsys), Black Duck (acquired by Synopsys), Dasient (acquired by Twitter), Fortify Software (acquired by HP), and Invotas (acquired by FireEye). Gary produced the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine for thirteen years. His dual PhD is in Cognitive Science and Computer Science from Indiana University, where he serves on the Dean’s Advisory Council for the Luddy School of Informatics, Computing, and Engineering.
Gary McGraw
