Workshop
Moscow, December 5, 16:00 (GMT +3)

Running ML models with C++

There are two critical stages in developing products with ML-based functionality: training the model and applying it.
Training is typically done in Python and its ecosystem, but applying the model offers no such clarity: you may need to integrate the trained ML model into product code written in C++.
We will take the examples from the talk "Learning in Python, Applying in C++" and build working, tested C++ modules from them.

Workshop plan:

Basic machine learning ideas and terms
The simplest linear regression model
Implementing model predictions in C++ (see the first sketch below the plan)
Approaches to testing and metrics reconciliation
The CatBoost gradient boosting library
Code generation or connecting the C API
Batch processing and performance issues
Neural network models and vectorization
Implementing a neural network in C++ via matrix operations (see the second sketch below the plan)
The TensorFlow (C API) and TensorFlow Lite (C++ API) libraries (see the third sketch below the plan)
Model conversion and ONNX library
Conclusions and further steps
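
A taste of the linear-regression part: once the weights have been trained (for example in Python), prediction in C++ is just a dot product plus a bias. A minimal sketch, with made-up placeholder coefficients rather than anything from the workshop:

#include <cstddef>
#include <iostream>
#include <vector>

// Linear regression prediction: y = w · x + b
double predict(const std::vector<double>& weights, double bias,
               const std::vector<double>& features) {
    double y = bias;
    for (std::size_t i = 0; i < weights.size(); ++i) {
        y += weights[i] * features[i];
    }
    return y;
}

int main() {
    // Placeholder coefficients, as if exported from a model trained in Python
    const std::vector<double> weights{0.5, -1.2, 3.0};
    const double bias = 0.1;
    std::cout << predict(weights, bias, {1.0, 2.0, 3.0}) << "\n";  // prints 7.2
}

Comparing such predictions against the output of the original Python model is the starting point for the testing and metrics-reconciliation topic.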
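
For the neural-network part, a one-hidden-layer forward pass reduces to two matrix-vector products with a nonlinearity in between. A minimal sketch using plain std::vector and ReLU (illustrative choices, not necessarily what the workshop uses):

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

using Vector = std::vector<double>;
using Matrix = std::vector<Vector>;

// Affine transform: y = W * x + b
Vector affine(const Matrix& W, const Vector& b, const Vector& x) {
    Vector y(b);
    for (std::size_t i = 0; i < W.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += W[i][j] * x[j];
    return y;
}

// Elementwise ReLU nonlinearity
Vector relu(Vector v) {
    for (double& e : v) e = std::max(0.0, e);
    return v;
}

// Forward pass of a network with one hidden layer
Vector forward(const Matrix& W1, const Vector& b1,
               const Matrix& W2, const Vector& b2, const Vector& x) {
    return affine(W2, b2, relu(affine(W1, b1, x)));
}

int main() {
    // Tiny 2-3-1 network with placeholder weights
    const Matrix W1{{0.1, 0.2}, {0.3, 0.4}, {0.5, 0.6}};
    const Vector b1{0.0, 0.0, 0.0};
    const Matrix W2{{1.0, -1.0, 0.5}};
    const Vector b2{0.1};
    std::cout << forward(W1, b1, W2, b2, {1.0, 2.0})[0] << "\n";  // prints 0.35
}

In practice the inner loops would typically be replaced by a vectorized library such as Eigen or a BLAS call, which is where the vectorization topic above comes in.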
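
And for the TensorFlow Lite part, running inference through the C++ API follows roughly this pattern (the model path is a placeholder, tensor shapes depend on your model, and error handling is omitted):

#include <iostream>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
    // "model.tflite" is a placeholder path to a model converted from Python
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
    if (!model) return 1;

    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    interpreter->AllocateTensors();

    // Fill the first input tensor (size and layout depend on the model)
    float* input = interpreter->typed_input_tensor<float>(0);
    input[0] = 1.0f;

    interpreter->Invoke();

    float* output = interpreter->typed_output_tensor<float>(0);
    std::cout << output[0] << "\n";
}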

To prepare:

Install Docker
Download the image with all dependencies pre-installed: docker pull sdukshis/cppml
Learn how to mount the source directory inside the container and run builds via CMake
cd <hello, world>
docker run --rm -v $(pwd):/usr/src/app -ti sdukshis/cppml
cmake -B build
cmake --build build
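
If your example directory does not yet contain a build script, a minimal CMakeLists.txt along these lines is enough for the commands above (the project name, target name, and source file are placeholders):

cmake_minimum_required(VERSION 3.14)
project(cppml_workshop CXX)

# Build a single example executable from main.cpp
add_executable(hello main.cpp)
target_compile_features(hello PRIVATE cxx_std_17)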