Track: Intel Special Day
- Tuesday, 30.01.
In this session you will learn how to maximize the performance of machine learning algorithms on Intel® Architecture with optimization libraries for popular Python frameworks. Get practical insights and explore real-world use cases in a live demo, achieving exceptional speedups in classical ML applications with only a few lines of code.
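As an illustration of the "few lines of code" claim, the Intel Extension for Scikit-learn (scikit-learn-intelex) patches stock scikit-learn so that supported estimators dispatch to Intel-optimized kernels. A minimal sketch, guarded so it also runs where the extension is not installed:

```python
# Minimal sketch: enable Intel-optimized scikit-learn kernels if available.
try:
    from sklearnex import patch_sklearn  # Intel Extension for Scikit-learn
    patch_sklearn()   # scikit-learn estimators imported after this use optimized kernels
    patched = True
except ImportError:
    patched = False   # extension not installed: stock scikit-learn is used unchanged
```

Existing scikit-learn code needs no other changes; the patch transparently redirects supported algorithms (e.g. KMeans, RandomForest) to the optimized implementations.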
Vladimir Kilyazov is an AI Software Solutions Engineer at Intel with notable experience across domains including Computer Vision and Speech Processing. He has won numerous ML contests and hackathons, including the Intel OpenVINO Hackathon, the IDR&D Anti-Spoofing Challenge, and the Junction Hackathon.
During this presentation, we'll showcase the most recent advancements in Generative AI, covering Large Language Models and Diffusion Models. We will explore how Intel plays a crucial role in powering this technology, from training and fine-tuning to inference across a spectrum of Intel hardware platforms.
Intel Developer Cloud is a development environment that gives access to cutting-edge Intel hardware and software innovations to build and test AI, machine learning, HPC, and security applications for cloud, enterprise, client, and edge deployments. Learn about Intel's advanced CPUs, GPUs, and accelerators, along with open software tools, to optimize your AI products and solutions. This is a discovery session with vouchers for all participants.
Intel Geti is a software platform that facilitates the creation of computer vision models in a fraction of the time and with minimal data. It streamlines laborious tasks such as data labeling, model training, and optimization throughout the AI model development process, empowering teams to generate custom AI models at scale. In this presentation, we will explore diverse computer vision use cases in different industries, showcasing how Intel Geti simplifies the process of model training, optimization, and deployment. Learn how AI applications can now be developed more easily and quickly than ever before.
Olga Perepelkina is a Product Manager at Intel responsible for the scope, vision, and strategy of the AI Products. Olga has a PhD in Neuroscience and post-master’s degree in computer science. She serves as an Industrial Advisor at the University of Glasgow, UK, and a mentor in Intel Ignite global startup accelerator program.
Participants will gain insights into how Intel's AI optimizations contribute to eco-friendly solutions, reducing carbon footprints and promoting sustainable practices in the tech industry. In the demo, we will measure the carbon footprint of an AI model and explore how it could be reduced over one year of inference.
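The back-of-the-envelope arithmetic behind such an estimate is simple: energy = average power draw × hours, and CO2 = energy × grid carbon intensity. A sketch with assumed illustrative figures (the power draw and grid intensity below are placeholders, not measured values):

```python
# Rough yearly CO2 estimate for continuous model inference (illustrative figures).
POWER_DRAW_KW = 0.35              # assumed average server power during inference
GRID_KG_CO2_PER_KWH = 0.4         # assumed grid carbon intensity
HOURS_PER_YEAR = 24 * 365

def co2_kg_per_year(power_kw, utilisation=1.0):
    """CO2 in kg for one year of inference at the given utilisation."""
    energy_kwh = power_kw * utilisation * HOURS_PER_YEAR
    return energy_kwh * GRID_KG_CO2_PER_KWH

baseline = co2_kg_per_year(POWER_DRAW_KW)
optimised = co2_kg_per_year(POWER_DRAW_KW * 0.6)  # e.g. 40% lower power after optimization
```

In practice one would measure power draw directly (or use a tool such as CodeCarbon) rather than assume it, but the proportional saving from an optimized model follows the same arithmetic.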
Standard machine learning approaches require centralizing the training data on one machine or in a datacenter. Federated Learning enables training models on distributed and private datasets without the need to centralize them. Privacy and security are key considerations for dataset owners participating in Federated Learning.
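The core idea can be sketched with Federated Averaging (FedAvg): each client takes a training step on its private data locally, and only the resulting model weights (never the data) are shared and size-weighted averaged. A toy 1-D linear-regression sketch, not any specific framework's API:

```python
# Toy FedAvg: clients hold private (x, y) data; only weights leave each client.
def local_update(w, data, lr=0.1):
    # One gradient step of 1-D linear regression y = w * x on local data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights, client_sizes):
    # Size-weighted average of the clients' locally updated weights.
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two clients whose datasets (here, samples of y = 2x) never leave their machines.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):  # communication rounds
    updates = [local_update(w, data) for data in clients]
    w = federated_average(updates, [len(d) for d in clients])
# w converges toward the true slope 2.0 without any data centralization
```

Real systems (e.g. Intel's OpenFL) add secure aggregation and attestation on top of this loop, which is where the privacy and security considerations mentioned above come in.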
Walter Riviera is AI Technical Specialist EMEA Lead at Intel.
Walter joined Intel in 2017 as an AI TSS (Technical Solution Specialist) covering EMEA, and he now plays an active role in most of the AI project engagements within the Data Center business in Europe. He is responsible for increasing technical and business awareness of the Intel AI offering, enabling and providing technical support to end-user customers, ISVs, OEMs, and partners implementing HPC and/or cloud solutions for AI based on Intel's products and technologies. Before joining Intel, Walter gathered research experience adopting ML techniques to enhance image-retrieval algorithms for robotic applications, conducting sensitive data analysis in a start-up environment, and developing software for text-to-speech applications.
Your code is making you lose money and increasing CO2 emissions. Learn how to harness LLMs and ML-based optimisation technologies to enhance software performance, reduce costs, and cut carbon emissions.
Extended Abstract:
In today’s digital landscape, efficiency is more than a luxury—it’s a necessity. Businesses worldwide are striving to streamline operations and reduce costs, yet they often overlook the importance of optimising their code. Inefficient code can be a silent drain on resources, leading to increased compute costs, sluggish runtime, and even a higher carbon footprint.
However, code optimisation has long been a pain point for developers: identifying and improving underperforming code is a cumbersome, inefficient process, and even the most experienced engineers can spend days finding the best ways to optimise it.
Large Language Models (LLMs) are revolutionising the way developers write code. However, developing high-performance software for complex environments presents significant challenges, often surpassing what LLMs can handle alone.
In this talk, we will delve into GenAI-powered code optimisation and share real-world examples using Artemis AI, TurinTech AI’s automatic code optimisation platform. You'll learn about:
• Why code performance is important
• The latest advancements in LLMs for code
• Evolutionary optimisation techniques
• How Artemis AI empowers developers
Join us to unlock peak performance with efficient code!
Michail brings a background in software engineering, blockchain, and database systems to the company, developed through his experience at top financial institutions such as Commerzbank and BNP Paribas.
Michail holds a PhD in Computer Science from University College London, where he specialised in big data systems, evolutionary optimisation for software development, and blockchain technology. He was instrumental in implementing Commerzbank's first blockchain proof of concept and was part of a consulting team at BNP Paribas that applied data science and innovative technologies to solve internal data inconsistencies.
Jonas Mayer works in the Innovation Hacking Team at TNG Technology Consulting, where he focuses mainly on developing innovative showcases and prototypes in software and hardware. Since 2018 he has worked on a wide variety of projects, such as deepfakes, mixed-reality AI artworks, and autonomously flying mini drones.
Niclas Hülsmann is a Junior Consultant at TNG Technology Consulting, where he is part of the Innovation Hacking Team. In his role, he has worked on various AI-related showcases and prototypes, ranging from an AI music generator to LLM-powered chatbots.
He is currently pursuing his Master of Science in Computer Science at the Technical University of Munich.
In this workshop you will learn how to build your own Document chatbot, based on Llama2 running on the Intel Developer Cloud.
We will kick things off with a look under the hood at how you can add your own data to a Large Language Model (LLM) with Retrieval-Augmented Generation (RAG). After that, we will build a simple RAG chatbot that you can feed with your own data. You will learn how to build a chat interface with Gradio and connect your LLM and embedding model running on the Intel Developer Cloud with Langchain. Make sure to bring your laptops, because this will be a hands-on workshop with minimal Python coding in Google Colab.
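The retrieval step at the heart of RAG can be sketched without any framework: embed the documents and the question, rank documents by similarity, and paste the best matches into the prompt. Here a toy bag-of-words "embedding" stands in for the real embedding model the workshop uses via Langchain:

```python
# Framework-free sketch of RAG retrieval: a word-count vector stands in for
# a real embedding model, and cosine similarity ranks the documents.
from collections import Counter
import math

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Llama2 is a family of open large language models.",
    "Gradio builds simple web interfaces for ML demos.",
    "RAG retrieves relevant documents and adds them to the prompt.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question, k=1):
    q = embed(question)
    return [d for d, _ in sorted(index, key=lambda p: cosine(q, p[1]), reverse=True)[:k]]

def build_prompt(question):
    # The retrieved context is prepended so the LLM answers from your data.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In the workshop, the embedding model and the Llama2 LLM run on the Intel Developer Cloud, Langchain wires the retrieval and prompt steps together, and Gradio provides the chat interface; the control flow is the same as in this sketch.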
Local-first is a new architecture for building fast, collaborative, and resilient software applications. This session will cover the benefits and trade-offs of local-first architecture and walk through the practical steps of creating a local-first application with conflict-free, active-active replication between Postgres in the cloud and SQLite on the local device. electric-sql.com
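"Conflict-free, active-active replication" typically rests on CRDTs: data types whose merge operation is commutative, so replicas converge no matter which order updates arrive in. A toy grow-only counter illustrates the idea (this is the general CRDT concept, not ElectricSQL's actual implementation):

```python
# Toy grow-only counter CRDT: each replica increments only its own slot;
# merging takes the per-replica maximum, so merges commute and replicas converge.
def increment(state, replica_id):
    state = dict(state)
    state[replica_id] = state.get(replica_id, 0) + 1
    return state

def merge(a, b):
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(state):
    return sum(state.values())

cloud, device = {}, {}                                  # e.g. Postgres and SQLite replicas
cloud = increment(cloud, "postgres")
device = increment(increment(device, "sqlite"), "sqlite")

# Merging in either order yields the identical state: no conflicts to resolve.
assert merge(cloud, device) == merge(device, cloud)
```

Because both replicas can accept writes independently and still converge after merging, the application keeps working offline on the device and syncs cleanly with the cloud later.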
James is co-founder and CEO of ElectricSQL, an open source system for building local-first software. Currently being accelerated by Intel Ignite in Munich. Prior to Electric, he founded a series of VC-backed startups, including Hazy, LGN and Opendesk and developed software for Apple and IDEO. James's projects have won a TED Prize and the $1M Microsoft Prize for the best startup in Europe.