Now that you’ve got all the hardware and software in place, it’s time to start capturing frames with your camera and use them to train your model. First, configure the MLX90640 plugin in your platypush configuration file:
camera.ir.mlx90640:
    fps: 16          # Frames per second
    rotate: 270      # Can be 0, 90, 180, 270
    rawrgb_path: /path/to/your/rawrgb
Restart platypush. If you enabled the HTTP backend you can test if you are able to take pictures:
curl -XPOST -H 'Content-Type: application/json' -d '{
  "type": "request",
  "action": "camera.ir.mlx90640.capture",
  "args": {
    "output_file": "~/snap.png",
    "scale_factor": 20
  }
}' http://localhost:8008/execute?token=...
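If you'd rather script the capture than call curl by hand, a rough Python equivalent of the request above could look like this (the host, port and token are the same placeholders used in the curl example; adjust them to your setup):

```python
import json
import urllib.request


def build_capture_request(output_file="~/snap.png", scale_factor=20):
    """Build the JSON payload for the camera.ir.mlx90640.capture action."""
    return {
        "type": "request",
        "action": "camera.ir.mlx90640.capture",
        "args": {"output_file": output_file, "scale_factor": scale_factor},
    }


def capture(host="localhost", port=8008, token="..."):
    """POST the capture request to the platypush HTTP backend."""
    req = urllib.request.Request(
        f"http://{host}:{port}/execute?token={token}",
        data=json.dumps(build_capture_request()).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```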
The thermal picture should have been stored under ~/snap.png. In my case, it looks like this when I'm in front of the sensor:
Notice the glow at the bottom-right corner: that's actually heat from my Raspberry Pi 4's CPU. It's there in all the images I take, and you may see similar results if you mounted your camera on top of the Raspberry Pi itself, but it shouldn't be an issue for training your model.
If you open the web panel (http://your-host:8008) you'll also notice a new tab, represented by the sun icon, that you can use to monitor your camera from a web interface.
You can also monitor the camera directly outside of the webpanel by pointing your browser to
Now add a cronjob to your platypush configuration to take snapshots every minute:
cron.ThermalCameraSnapshotCron:
    cron_expression: '* * * * *'
    actions:
        - action: camera.ir.mlx90640.capture
          args:
            output_file: "$"
            grayscale: true
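For reference, a small Python helper that produces file names in the timestamped format the cron writes (the folder name here is just the example path from this section):

```python
from datetime import datetime


def snapshot_filename(folder="/img/folder", now=None):
    """Return a /img/folder/YYYY-mm-dd_HH-MM-SS.jpg path for a snapshot."""
    now = now or datetime.now()
    return f"{folder}/{now.strftime('%Y-%m-%d_%H-%M-%S')}.jpg"
```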
The images will be stored under /img/folder in the YYYY-mm-dd_HH-MM-SS.jpg format. No scale factor is applied: even though the images are tiny, they're all we need to train our model. The images are also converted to grayscale: the neural network will be lighter and actually more accurate, as it only has to rely on one value per pixel instead of being tricked by RGB combinations.
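To put a number on the "lighter" claim: the MLX90640 produces 32×24 frames, so a flattened grayscale input has one third of the features of an RGB one (quick sanity-check arithmetic, assuming the frames are fed to the network flattened):

```python
# MLX90640 resolution: 32x24 pixels
WIDTH, HEIGHT = 32, 24

grayscale_features = WIDTH * HEIGHT      # one value per pixel -> 768 inputs
rgb_features = WIDTH * HEIGHT * 3        # three channels per pixel -> 2304 inputs
```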
Restart platypush and verify that a new picture is created under your images directory every minute. Let it run for a few hours or days, until you're happy with the number of samples. Try to balance the number of pictures with no people in the room and those with people in the room, covering as many cases as possible: sitting, standing at different points of the room, and so on. As I mentioned earlier, in my case fewer than 1000 pictures with enough variety were sufficient to achieve accuracy above 99%.
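One way to keep an eye on that balance is a quick per-class count, assuming you sort the snapshots into one subfolder per label (the folder layout is an assumption, not something platypush does for you):

```python
from pathlib import Path


def count_samples(dataset_dir):
    """Count .jpg files in each immediate subfolder of dataset_dir."""
    return {
        d.name: sum(1 for _ in d.glob("*.jpg"))
        for d in Path(dataset_dir).iterdir()
        if d.is_dir()
    }
```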