Serverless computing allows you to build and run applications without provisioning or managing servers. With serverless computing, you can build web, mobile, and IoT backends; run stream processing or big data workloads; run chatbots; and more.
In this session, we will learn how to get started with serverless computing using AWS Lambda, which lets you run code without provisioning or managing servers.
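To give a flavour of the programming model, the sketch below shows a minimal Lambda-style handler in Python: AWS invokes a function of this shape with an event payload and a runtime context object, and the returned dictionary becomes the response. The event key "name" is purely illustrative, not part of any AWS API.

```python
import json

def lambda_handler(event, context):
    """Minimal handler: AWS Lambda calls this with the invocation
    event (a dict for JSON payloads) and a context object."""
    # "name" is an example field we assume the caller sends.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is an ordinary function, it can be exercised locally by passing a test event and `None` for the context before deploying it to Lambda.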
Target Audience: Developers
Prerequisites: Practical development experience
IoT systems consist of components that run across different compute nodes, from the cloud to the edge. Some of these components interface with sensors and actuators, while others process information. Seamless computing is a new approach for flexibly allocating functionality to the compute nodes of such a system, based on application and infrastructure models. This talk discusses industrial use cases of seamless computing and presents the concepts as well as an implementation based on open technologies like Docker and Kubernetes.
Target Audience: Architects, Developers, Technology Managers
Prerequisites: Knowledge of container and container orchestration technologies
Internet of Things (IoT) applications are, by nature, massively distributed systems consisting of heterogeneous hardware platforms. Sensors and actuators are usually connected to small embedded systems, while data and information processing is often done on physical or virtual compute nodes in data centers or in the cloud. Today, the software lifecycle is handled in a completely different way across these platforms, using different methods, processes, and tools. This is not only complicated, making it difficult to develop, test, and iterate on the system software efficiently; it also forces the functions of the distributed system to be assigned to hardware platforms at a very early stage. Changing this assignment later usually requires considerable effort, restricting the flexibility and speed of system changes.
Seamless computing is an approach to enable flexible allocation of workloads to compute platforms at deploy time or even at runtime. In addition, it targets the specific requirements of industrial systems, such as real-time capability and data security. One pillar of seamless computing is a unified execution environment for the software components across the supported compute domains, which can be based on open source container and container orchestration technologies like Docker and Kubernetes. The second pillar is defining appropriate models for the applications and for the infrastructure, which allow the requirements and constraints of the application to be matched against the capabilities and current state of the infrastructure. Using this modelling approach, the allocation of workloads to compute nodes can evolve into a self-organizing process. In the most advanced stage, the mapping can be driven by optimization criteria and dynamically revisited at runtime, taking into account changes in the state of the system.
This talk presents the concepts of the new seamless computing paradigm and gives insight into the application and infrastructure models as well as the mapping of requirements. It also demonstrates an implementation of seamless computing based on the open source container and container orchestration technologies Docker and Kubernetes. The talk concludes with an outlook on remaining challenges and approaches for extending the technologies used.