Subject: COMPUTATIONAL AND STATISTICAL LEARNING (A.A. 2020/2021)
Unit: Computational and Statistical Learning
Related or Additional Studies (lesson)
The course Computational and Statistical Learning aims to train students to analyse a machine learning problem from a numerical point of view, selecting the most appropriate algorithm according to the peculiarities of the specific problem. Implementing the analysed methods in Matlab or Python will allow students, on the one hand, to put their theoretical knowledge into practice and, on the other, to strengthen their knowledge and practice of programming languages that are leaders in scientific computing.
- Differential calculus for real functions of real variables.
- Basics of linear algebra.
- Basics of probability and statistics.
- Basics of computer programming in Matlab and Python.
- Introduction to learning from examples
- Supervised learning: loss functions and Vapnik theory
- Kernel methods and Support Vector Machines
- Statistical methods: discriminant analysis, mixture models
- Ensemble learning: decision trees, random forests
- Unsupervised learning: principal component analysis, manifold learning, clustering
- Deep learning: stochastic optimization, artificial, recurrent and convolutional neural networks
- Case studies: laboratories in collaboration with Ammagamma S.r.l.
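As an illustration of the kind of implementation work carried out in the laboratories, the following is a minimal sketch of principal component analysis (one of the unsupervised learning topics above) in NumPy; the function name and example data are our own, not part of the course material.

```python
import numpy as np

def pca(X, k):
    """Project the data onto the first k principal components.

    X: (n_samples, n_features) data matrix.
    Returns the (n_samples, k) projected data and the component matrix.
    """
    # Center the data
    Xc = X - X.mean(axis=0)
    # Eigendecomposition of the sample covariance matrix
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order: take the last k columns
    components = eigvecs[:, ::-1][:, :k]
    return Xc @ components, components

# Example: 2-D points lying close to a line are captured by one component
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
X = np.hstack([t, 2 * t]) + 0.01 * rng.normal(size=(100, 2))
Z, W = pca(X, 1)
```

Projecting back with `Z @ W.T` reconstructs the centered data almost exactly, since the data are nearly rank one.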
The lectures will be divided into theoretical lessons and practical exercises. In both cases, due to the COVID-19 health situation, the lectures will be delivered remotely as video recordings made available to students during the teaching period.
Examination: oral exam. The candidate must demonstrate a thorough knowledge of:
- the course content, including both the institutional part (frontal lessons/classroom activities) and the laboratory practices;
- the statistical formulation of the machine learning problem;
- the main notions of the supervised learning problem and the properties of the main strategies, such as kernel methods, statistical methods and ensemble learning strategies;
- the main techniques for unsupervised learning problems;
- deep learning techniques, with particular attention to the stochastic optimization strategies needed for neural network training;
- the Matlab/Python syntax for the implementation of an elementary algorithm.
The verification covers the whole of the course's content; the student's ability to relate specific subject content to the knowledge listed as prerequisites is also assessed. The oral exam consists of the implementation, in a Matlab/Python environment, of one of the algorithms analysed during the class, together with an in-depth discussion of some topics treated during the lectures. The score of the oral exam, on a scale of thirty, is divided as follows: 5 points for communicative skills, 5 points for multidisciplinary skills, and 20 points for knowledge of the contents.
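An "elementary algorithm" of the kind the exam refers to could be, for instance, k-means clustering (Lloyd's algorithm), one of the clustering techniques covered in the unsupervised learning unit. A minimal NumPy sketch, with function name and example data of our own choosing:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    # Initialize centroids as k distinct data points chosen at random
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster
        # (keep the old centroid if a cluster becomes empty)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Example: two well-separated Gaussian blobs
rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.3, size=(50, 2))
B = rng.normal(5.0, 0.3, size=(50, 2))
X = np.vstack([A, B])
labels, centroids = kmeans(X, 2)
```

On well-separated data like this, the algorithm recovers the two blobs regardless of which points are drawn as initial centroids.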
Knowledge and understanding:
At the end of the course the student will know the fundamental methods of machine learning, and will be able to implement these methods within the Matlab/Python software environments and to analyse their performance in terms of efficiency and computational complexity.
Applying knowledge and understanding:
At the end of the course the student will have adequate knowledge to tackle machine learning problems arising from real applications. The student will be able to identify the machine learning methods suitable for the problems considered and to develop the corresponding Matlab/Python codes.
At the end of the course the student must be able to independently choose the appropriate methods for a specific problem in machine learning.
At the end of the course the student must be able to describe the considered machine learning methods in a clear and rigorous way and to discuss their efficiency.
At the end of the course the student must be able to explore in depth, on his or her own, the main aspects of the subjects proposed in the course.
Lecture notes provided by the teacher
C.M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.