
For our exhibition design project, "Lange Nacht der Münchner Museen", we wanted to create an interactive exhibit that would explain the technology used by the Max Planck Institute in a playful way. We called it "The Accelerometer Lab".
An accelerometer is an instrument that measures the acceleration of an object, from which its orientation can also be derived. We use them every day without thinking about it; for example, they rotate the screen in smartphones and count steps in smartwatches. Imagine a small box with a ball suspended inside by springs. Depending on how the box is oriented or moved, the ball compresses and stretches different springs. These readings provide three values (AX, AY, and AZ) that allow us to determine orientation and movement through trigonometry.
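As a small illustration (not the exhibit's code), the tilt of the device can be estimated from the three values with basic trigonometry, using one common convention for pitch and roll:

// Illustrative JavaScript: estimating tilt from the three accelerometer values.
// The function and variable names are ours, not part of the exhibit's software.
function tiltAngles(ax, ay, az) {
  const toDegrees = 180 / Math.PI;
  const pitch = Math.atan2(-ax, Math.sqrt(ay * ay + az * az)) * toDegrees;
  const roll = Math.atan2(ay, az) * toDegrees;
  return { pitch, roll };
}

// A device lying flat, with gravity only on the Z axis, gives roughly 0° for both.
console.log(tiltAngles(0, 0, 1));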
At this station, visitors engage with a toy baboon equipped with an accelerometer, while a nearby screen offers a real-time visualization of the scientist's "vision", translating physical motion into an analytical interpretation. The system relies on three elements: a device with an accelerometer; a computer for data acquisition and analysis; and a dedicated processing unit that can transform the numbers (AX, AY, and AZ) into understandable information, such as walking, running, or eating.
For the sensor we used a micro:bit, an educational device with an integrated accelerometer. We wrote software in C that reads the data at 20 Hz (20 times per second) and sends it to a computer over USB via serial communication.
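Our firmware is written in C, but purely as an illustration, the same sampling loop expressed in the micro:bit's MakeCode JavaScript editor would look like this:

// Sketch of the sampling loop in MakeCode JavaScript (the actual firmware is C).
basic.forever(function () {
    let x = input.acceleration(Dimension.X)
    let y = input.acceleration(Dimension.Y)
    let z = input.acceleration(Dimension.Z)
    // One comma-separated line per sample, sent over USB serial.
    serial.writeLine("" + x + "," + y + "," + z)
    basic.pause(50) // 50 ms between samples ≈ 20 Hz
})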
The data is received by a Raspberry Pi computer and arrives as simple lines of text:
-10, 41, 189
5, 30, 195
21, 10, 192, etc.
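One possible way to receive and parse these lines is the Node.js serialport package; the port path, baud rate, and helper function below are assumptions, not the exhibit's actual configuration:

// Hedged sketch: reading the serial stream on the Raspberry Pi with Node.js.
const { SerialPort } = require('serialport');
const { ReadlineParser } = require('@serialport/parser-readline');

const port = new SerialPort({ path: '/dev/ttyACM0', baudRate: 115200 }); // example values
const parser = port.pipe(new ReadlineParser({ delimiter: '\n' }));

parser.on('data', (line) => {
  // Each line carries the three accelerometer values AX, AY, AZ.
  const [ax, ay, az] = line.trim().split(',').map(Number);
  if ([ax, ay, az].every(Number.isFinite)) {
    handleSample(ax, ay, az); // hypothetical callback feeding the rest of the pipeline
  }
});

function handleSample(ax, ay, az) {
  console.log(ax, ay, az);
}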

The challenge lies in translating three numbers into a recognizable behavior. To achieve this, we use machine learning, which applies established mathematical models to classify complex data.
Unlike traditional programming, which relies on fixed logic, this system is trained on examples. For this purpose, we developed software that operates in three stages: data collection, model training, and real-time classification.
In the first stage, the program collects data from the micro:bit and organizes it into ordered samples (arrays of numbers such as [.929, .121, .85, .30453, .121 ... ]). From the raw data we extract features that describe the movement. For example, using trigonometry, the three accelerometer values can be used to determine the orientation - the tilt of the device along the three axes in space. By analyzing fixed time windows, typically around two seconds, we can compute additional parameters such as maximum and minimum values, average values, standard deviation, the number of oscillations along an axis, and the amplitude of variations. These extracted features summarize the movement and turn a stream of raw numbers into a compact description that a learning algorithm can use.

During this phase, the software - written in JavaScript using libraries such as Plotly for data visualization and p5.js for the user interface - records samples while we simulate the baboon’s movements (for example running, walking, eating, or sleeping). The collected samples are then passed to the next stage.
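Before moving on, here is an illustration of the window statistics described above; the names and the exact choice of statistics are ours, not the exhibit's code:

// Hedged sketch: summarizing a ~2-second window of samples, each an object { ax, ay, az }.
function extractFeatures(samples) {
  const mean = (v) => v.reduce((a, b) => a + b, 0) / v.length;
  const std = (v) => {
    const m = mean(v);
    return Math.sqrt(mean(v.map((x) => (x - m) * (x - m))));
  };
  // Crossings of the signal's own mean, as a rough count of oscillations.
  const crossings = (v) => {
    const m = mean(v);
    let n = 0;
    for (let i = 1; i < v.length; i++) {
      if ((v[i - 1] - m) * (v[i] - m) < 0) n++;
    }
    return n;
  };

  const features = [];
  for (const key of ['ax', 'ay', 'az']) {
    const v = samples.map((s) => s[key]);
    features.push(
      Math.max(...v), // maximum value
      Math.min(...v), // minimum value
      mean(v),        // average value
      std(v),         // standard deviation
      crossings(v)    // oscillations along this axis
    );
  }
  // 3 axes x 5 statistics = 15 values, one plausible way to obtain
  // the 15 network inputs mentioned below.
  return features;
}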
The second stage is model training. Using the ml5.js library, which is built on top of TensorFlow.js, the recorded samples are grouped by movement type and used to train a neural network. By analyzing multiple examples of each movement, the network learns the features that characterize each behavior.
The result of this training process is a set of numerical values organized as arrays of parameters such as [.3112, .2321, .11221 ... ]. These numbers are called weights and, together with the configuration of the neural network, define our trained model.
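In ml5.js, this training stage can be expressed in a few lines; the sketch below is illustrative, and the labels, epoch count, and the data structure holding the samples are our assumptions:

// Hedged sketch of the training stage with ml5.js.
const nn = ml5.neuralNetwork({
  task: 'classification',
  inputs: 15, // one value per extracted feature
  outputs: 6, // one class per movement
  debug: true // shows the training curves provided by ml5.js
});

// `samples` is assumed to be an array of { features, label } objects
// produced during the data-collection stage.
for (const sample of samples) {
  nn.addData(sample.features, [sample.label]);
}

nn.normalizeData();
nn.train({ epochs: 80 }, () => {
  console.log('Training finished');
  nn.save(); // stores the model, its weights and metadata for later use
});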

If we look under the hood, the underlying model is a neural network with 15 input parameters, 16 neurons in a dense hidden layer, and 6 output nodes (one for each movement). This is how it can be visualized:


Each input is combined with the others using the weights learned during training.
The last layer uses a softmax function, which converts the outputs into a probability for each movement.
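Since ml5.js builds its models on TensorFlow.js, the same structure written directly in TensorFlow.js would look roughly like this (a sketch of the architecture, not the code ml5.js actually generates; the ReLU activation in the hidden layer is an assumption):

// Rough TensorFlow.js equivalent of the network described above.
const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [15], units: 16, activation: 'relu' }));
model.add(tf.layers.dense({ units: 6, activation: 'softmax' }));
model.compile({
  optimizer: 'adam',
  loss: 'categoricalCrossentropy',
  metrics: ['accuracy']
});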
The result displayed on the screen is always the movement with the highest probability. Once trained, the lightweight model can run in real time, even on simple hardware such as a Raspberry Pi.
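In code, asking the trained model for a prediction follows the ml5.js classify() pattern; the callback shape below matches that API, while the surrounding names are illustrative:

// Classifying one two-second window and picking the most probable movement.
nn.classify(extractFeatures(currentWindow), (error, results) => {
  if (error) {
    console.error(error);
    return;
  }
  // `results` is a list of { label, confidence } pairs, one per movement,
  // sorted by probability; the first entry is what the screen shows.
  const best = results[0];
  console.log(best.label + ' (' + (best.confidence * 100).toFixed(1) + '%)');
});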
(The Raspberry Pi also controls the graphical interface. A small graph shows the raw data from the accelerometer, and real-time classification labels appear above it.)
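A stripped-down p5.js version of that display could look like this; the layout, value range, and variable names are our own assumptions:

// Minimal p5.js sketch of the display: classification label above, raw trace below.
// `recentValues` and `currentLabel` are assumed to be filled elsewhere by the
// acquisition and classification code.
let recentValues = [];
let currentLabel = '...';

function setup() {
  createCanvas(600, 200);
}

function draw() {
  background(255);
  fill(0);
  textSize(24);
  textAlign(CENTER);
  text(currentLabel, width / 2, 40);

  // Raw accelerometer trace, assuming values roughly in the range -1024..1024.
  noFill();
  stroke(0);
  beginShape();
  for (let i = 0; i < recentValues.length; i++) {
    vertex(map(i, 0, recentValues.length - 1, 0, width),
           map(recentValues[i], -1024, 1024, height, 60));
  }
  endShape();
}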


(Back to the station.) The interaction begins with a yellow button, managed by an Arduino Pro Micro that emulates a standard keyboard. A simple press is interpreted as a spacebar command, instantly kicking off the interactive animation. On screen, the graphical output unfolds as a circular timeline, capturing an entire day in the life of a baboon. Within this 30-second interaction window, the system processes the visitor's movements in real time, translating complex sensor data into simple, recognizable behaviors.
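On the software side, the button press simply arrives as a key event. A sketch of how the 30-second window could be started in p5.js (our assumptions, not the exhibit's code):

// The Arduino's emulated spacebar press lands in keyPressed() and starts the timer.
const INTERACTION_MS = 30 * 1000;
let startTime = null;

function setup() {
  createCanvas(600, 600);
}

function keyPressed() {
  if (key === ' ' && startTime === null) {
    startTime = millis();
  }
}

function draw() {
  background(0);
  if (startTime !== null) {
    const elapsed = millis() - startTime;
    const progress = constrain(elapsed / INTERACTION_MS, 0, 1);

    // Circular timeline: an arc that fills over the 30-second window.
    noFill();
    stroke(255, 200, 0);
    strokeWeight(8);
    arc(width / 2, height / 2, 300, 300, -HALF_PI, -HALF_PI + progress * TWO_PI);

    if (elapsed >= INTERACTION_MS) {
      startTime = null; // ready for the next visitor
    }
  }
}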
Want to try again? Press the button!