Artificial Intelligence and the Internet of Things – Edge AI

This focused short course explores how artificial intelligence (AI) can be embedded into Internet of Things (IoT) systems, with special emphasis on edge AI – running machine learning (ML) models directly on devices, close to where data is generated. Participants will learn how to design AIoT pipelines, when to process data at the edge versus in the cloud, and how to deploy lightweight ML models on resource-constrained hardware. The course is intended for students, researchers, and professionals who want to move from merely connected devices to intelligent ones.

Course date: 05.11.2025 at 13:30
Venue: S34, UDG
Registration: required
Registration link: https://forms.gle/2DktEUqf5KZosFth7

Designed for: students, researchers, and professionals interested in AI, IoT, edge computing, and applied ML.

Course Content Overview

Session 1 — AI + IoT theoretical framework

  • AI–IoT convergence: from sensing to intelligent action
  • edge vs. cloud vs. fog: latency, bandwidth, privacy, cost
  • edge AI pipeline: device → preprocessing → inference → actuation
  • lightweight/embedded ML (TinyML, quantization, pruning)
  • platforms and use cases (Raspberry Pi, Jetson, smart agriculture, industry)
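
The pipeline bullet above (device → preprocessing → inference → actuation) can be sketched in plain Python. Everything here is illustrative: the sensor reading is mocked, and the "model" is a single logistic unit with made-up weights, standing in for a real TinyML model.

```python
import math
import random

def read_sensor():
    """Device stage: mock temperature sensor, returns a reading in °C."""
    return 20.0 + random.random() * 15.0

def preprocess(raw):
    """Preprocessing stage: normalize to [0, 1] over an assumed 0-50 °C range."""
    return max(0.0, min(1.0, raw / 50.0))

def infer(x):
    """Inference stage: a single logistic unit with hypothetical weights."""
    w, b = 9.0, -5.4  # made-up parameters, not from a trained model
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def actuate(score, threshold=0.5):
    """Actuation stage: switch a fan on when the overheat score crosses threshold."""
    return "FAN_ON" if score >= threshold else "FAN_OFF"

# One pass through the pipeline, end to end.
action = actuate(infer(preprocess(read_sensor())))
```

In a real deployment the mock functions would be replaced by a sensor driver, a feature-extraction step, and a quantized model runtime, but the data flow stays the same.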

Session 2 — hands-on edge AI lab

  • preparing the edge/IoT environment and data source (sensor/camera/mock)
  • deploying a small ML model to the device
  • running inference locally and sending results to backend/cloud
  • monitoring and simple performance checks
  • scaling to real deployments
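
As a rough sketch of the lab steps, the fragment below quantizes the weights of a toy linear model to int8 (the TinyML-style technique from Session 1), runs inference locally, times it as a simple performance check, and packages the result as a JSON payload a backend could ingest. The weights, device ID, and payload fields are all hypothetical.

```python
import json
import time

def quantize(weights, bits=8):
    """Symmetric post-training quantization: map floats to signed ints."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int representation."""
    return [qi * scale for qi in q]

# Hypothetical weights for a tiny linear model (illustration only).
weights = [0.31, -1.20, 0.77]
q, scale = quantize(weights)

def infer(features):
    """Local inference: dot product with the dequantized weights."""
    w = dequantize(q, scale)
    return sum(wi * xi for wi, xi in zip(w, features))

# Run inference on-device and record latency as a basic performance check.
t0 = time.perf_counter()
score = infer([1.0, 0.5, 2.0])
latency_ms = (time.perf_counter() - t0) * 1000

# Package the result for the backend/cloud; field names are assumptions.
payload = json.dumps({"device_id": "edge-01",
                      "score": round(score, 4),
                      "latency_ms": round(latency_ms, 3)})
```

Note the accuracy trade-off: the dequantized weights differ slightly from the originals, which is why the lab includes a monitoring step to verify that quantization has not degraded results.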

Learning outcomes

By the end of the course, participants will be able to:

  • explain the relationship between AI, IoT and edge computing
  • decide when inference should run on the device and when in the cloud
  • deploy a lightweight ML model to an IoT/edge setup
  • outline an end-to-end AIoT application for their own domain (e.g. agriculture, smart city, industry)