Course syllabus

Course memo

The course is offered by the Department of Mechanics and Maritime Sciences.

General information

The course will be given only on-site at Chalmers. There will be no remote (Zoom) lectures. More information will follow in the first lecture.

Contact details

During the course, we strive to be as available as possible. You are welcome to ask questions at any time, for example in the lectures, via e-mail, or by telephone. You are also always welcome at our offices. You do not need to make an appointment, but since we are not always in our offices, it is a good idea to first check (via e-mail or telephone) that we are there.

Lecturer and examiner:

Professor Mattias Wahde, tel.: 031 772 3727, e-mail: mattias.wahde@chalmers.se

Course assistants:

Minerva Suvanto, e-mail: minerva.suvanto@chalmers.se

Vivien Lacorre, e-mail: vivien.lacorre@chalmers.se

Finding our offices: Go to Hörsalsvägen 7 and enter the building (nya M-huset) so that Café Bulten is on your right as you enter. Then go up one flight of stairs and enter the corridor (Vehicle Engineering and Autonomous Systems). If the door is locked, please dial the appropriate number, as shown in the list beside the door.

Course purpose

The aim of the course is for the students to gain knowledge of interpretable methods in artificial intelligence, as well as applications of such methods, especially in high-stakes settings such as healthcare, automated driving, and finance. The course also aims to highlight the differences between interpretable systems and so-called black-box models, e.g., deep neural networks. Ethical aspects of AI are also covered.

Schedule

The course schedule is given below. The lectures can also be found in TimeEdit.

Date        Room  Time         Content
2026-01-20  HC2   08.00-09.45  Course introduction and motivation; brief description of the topics covered in the course.
2026-01-21  HA3   13.15-17.00  Black-box AI vs. glass-box (interpretable) AI. Dangers of using black-box models indiscriminately. Interpretability vs. explainability. Ethical issues (particularly regarding black-box models). Experiment design and performance measures. Brief introduction to Python for AI.
2026-01-27  HC2   08.00-09.45  Black-box architectures (neural networks): deep neural networks (DNNs), e.g., convolutional neural networks (CNNs) and large language models (LLMs).
2026-01-28  HA3   13.15-17.00  Interpretable models: linear models (linear perceptrons, linear and logistic regression), Bayesian methods, k-nearest neighbour methods, decision trees, symbolic regression.
2026-02-03  HC2   08.00-09.45  Time series prediction. Handout of assignments (projects).
2026-02-04  HA3   13.15-17.00  Data classification (images, text).
2026-02-06  HC2   08.00-09.45  Natural language processing.
2026-02-11  HA3   13.15-17.00  Assignment work session (assistants available as tutors in the classroom).
2026-02-17  HC2   08.00-09.45  Evolvable rule-based models (e.g., symbolic regression).
2026-02-27  HC2   08.00-09.45  Assignment work session (teacher and assistant available as tutors in the classroom).
2026-03-03  HC2   08.00-09.45  Assignment work session (teacher and assistant available as tutors in the classroom).
2026-03-04  ---   ---          No lecture!
2026-03-10  HC2   08.00-09.45  Assignment work session (teacher and assistant available as tutors in the classroom). Hand-in of the presentation (see below).
2026-03-11  HC1   13.15-17.00  Assignment presentations (mandatory attendance). Hand-in of assignments.

Course literature

The course literature will consist of lecture notes (slides and other notes), links to various scientific papers, and web resources. All course material will be provided gradually during the course, free of charge, on the Modules page.

Course design

The course starts with a few weeks of lectures and some practical activities (preparation for the programming work later in the course). Assignments will be handed out in Study week 3 and should be handed in on the day of the final session, March 11. From Study week 4 onwards, there is a mix of lectures and work sessions; in the work sessions, the students are expected to work on their assignments, which involve Python programming. All assignments are solved individually (there is no group work). In the final session (March 11), we will have presentations of the assignments, where each student is given a few minutes to present their work.

Changes made since the last occasion (2025)

The course is completely new (given for the first time), but it has a certain overlap with the course that preceded it (Intelligent Agents), in particular regarding the parts that deal with natural language processing.

Learning outcomes

After completing the course, the student should be able to:

  • Define and contrast black-box models and interpretable (glass-box) models in artificial intelligence (AI)
  • Define and describe neuro-symbolic models and methods
  • Discuss and compare different kinds of AI applications
  • Select a suitable model class for a given application
  • Define, implement, and train AI models (both black-box models and interpretable models) for different applications, e.g., natural language processing (NLP), data classification, image processing, time series prediction, and autonomous robots
  • Discuss various ethical aspects related to artificial intelligence

Examination

Examination parts:

  • There will be one assignment, with several parts (some of which are mandatory whereas others are voluntary), worth a total of 100 points. More details (for example, grade requirements) will follow when the course starts.
  • In addition, every student must prepare a presentation PDF covering the solution to one of the assignments, and send it to the examiner at least one day before the mandatory presentation session on 2026-03-11. A group of randomly selected students will then give their presentations in that session. The presentation document is graded (pass or fail), but the oral presentation is not. More information will follow later.

 
