# nbdev_fhemb

Configure fhemb logging and override levels for selected modules:

```python
import fhemb

fhemb.setup_logging(
    force=True,
    module_levels={
        "embedding": "WARNING",
        "tutils": "ERROR",
    },
)
```
> **Important:**
> `nbdev_fhemb` is **not** the `fhemb` library.
> It is a demonstration wrapper that imports `fhemb` as an external dependency.
> No internal algorithms, models, or implementation details of `fhemb` are contained or exposed in this repository.
## Overview

This nbdev project presents the `fhemb` library (facial heatmap embedding), a toolkit for the analysis of multidimensional time series.
## Introduction

The library operates on time-varying matrices (facial heatmaps) stored in a PostgreSQL + TimescaleDB backend maintained by the Department of Human Physiology at IIT Ferrara, Italy.

The analysis pipeline has two major stages:

1. Embedding extraction: transforming raw heatmaps into compact numerical representations.
2. Time-series similarity analysis: comparing embedding trajectories across subjects, sessions, or conditions.
### Embedding

Embedding extraction may involve up to three steps:

- Feature extraction
  - raw heatmaps
  - statistical moments
  - frequency bins
- Dimensionality reduction
  - principal components (eigenfaces) computed via several PCA variants and normalization schemes
- Frequency analysis
  - wavelet-based decomposition into frequency bands
  - reconstruction within selected frequency intervals
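The feature-extraction and dimensionality-reduction steps can be sketched with plain NumPy. This is an illustrative example, not the fhemb API; the heatmap data and array shapes are invented for demonstration.

```python
import numpy as np

# Illustrative only -- synthetic "heatmap" frames, not fhemb data.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 32, 32))   # 200 frames of a 32x32 heatmap

# Feature extraction: flatten frames and compute simple statistical moments.
flat = frames.reshape(len(frames), -1)
moments = np.stack([flat.mean(axis=1), flat.std(axis=1)], axis=1)

# Dimensionality reduction: PCA via SVD on the centered feature matrix;
# the rows of vt are the principal components ("eigenfaces").
centered = flat - flat.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:10].T          # keep the first 10 components

print(embedding.shape)                    # (200, 10)
```

Each heatmap frame is thereby reduced to a 10-dimensional embedding vector whose temporal sequence can be fed into the similarity analyses below.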
### Time-series similarity analysis

Once embeddings are computed, their temporal evolution can be compared using:

- Lagged cross-correlation: produces heatmaps of correlation across temporal shifts
- Dynamic Time Warping (DTW): computes alignment-invariant distances under multiple normalization options
- Clustering: groups embedding trajectories into similarity-based clusters
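The first two techniques can be illustrated with a small self-contained sketch (plain NumPy, not the fhemb implementation); the lag convention and sample trajectories are invented for demonstration:

```python
import numpy as np

def lagged_xcorr(a, b, max_lag):
    """Pearson correlation between a and b for each temporal shift.

    Under this sketch's convention, negative lags mean b is delayed
    relative to a.
    """
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            x, y = a[:lag], b[-lag:]
        elif lag > 0:
            x, y = a[lag:], b[:-lag]
        else:
            x, y = a, b
        out[lag] = float(np.corrcoef(x, y)[0, 1])
    return out

def dtw_distance(a, b):
    """Textbook O(n*m) dynamic-time-warping distance for 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Synthetic trajectories: b is a delayed copy of a (0.5 rad, about 8 samples).
t = np.linspace(0, 4 * np.pi, 200)
a, b = np.sin(t), np.sin(t - 0.5)
corr = lagged_xcorr(a, b, max_lag=20)
best_lag = max(corr, key=corr.get)        # correlation peaks near lag -8
print(best_lag, round(dtw_distance(a, b), 3))
```

Collecting `corr` for every pair of trajectories yields exactly the kind of lag-versus-correlation heatmap described above; DTW gives the alignment-invariant pairwise distances that clustering can then operate on.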
## Quick Start

This section shows the minimal steps required to get `fhemb` running on a local machine.
### 1. Install the fhemb library

Download the latest wheel from the CI release assets (see scripts/README.md, section "1. CI release asset download").

> ⚠️ Access requires an authentication token provided by the repository owner!
```sh
pip install /path/to/<fhemb-wheel-name>.whl
```

### 2. Create your configuration directory
```sh
mkdir -p ~/.config/fhemb
```

Copy the provided templates:

```sh
cp config/.env.db.template ~/.config/fhemb/.env.db
cp config/.env.paths.template ~/.config/fhemb/.env.paths
```

Edit both files and fill in your real SSH, database, and NAS paths.
> **Note:** The path `~/.config/fhemb` is correct for macOS and Linux. Windows users should instead use `%APPDATA%\fhemb\`. See the Local Configuration section below for full platform-specific paths.
### 3. Mount the NAS

fhemb expects the NAS directories defined in .env.paths to be mounted before use. On macOS, use the provided mount_nas_storage.sh (documented in scripts/README):

```sh
~/scripts/mount_nas_storage.sh
```

On Linux or Windows, mount the NAS using your usual SMB/sshfs method.

> ⚠️ Network prerequisite: mounting assumes the NAS hosts are reachable on your local network. If you are off-LAN, establish your VPN connection (e.g., Tunnelblick) before running any mount script.
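A small pre-flight check along these lines can guard against running analyses on an unmounted NAS; `mount_ready` is a hypothetical helper, not part of fhemb:

```python
from pathlib import Path

def mount_ready(mount_point: str) -> bool:
    """True if the path exists as a directory and is non-empty.

    An unmounted mount point is either missing or an empty directory,
    so this catches the common failure mode before any analysis starts.
    """
    p = Path(mount_point).expanduser()
    return p.is_dir() and any(p.iterdir())

# Hypothetical mount point from .env.paths; prints False if not mounted.
print(mount_ready("/Volumes/NASstorage/"))
```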
### 4. Run a minimal example

```python
from fhemb.piece import Piece
from fhemb.utils.factories import wfactory

# Declare a heatmap time series
dg0 = Piece(
    time_interval=(10000, 10500),
    title='Don Giovanni',
    subjs=(65, 69),            # interval of subjects whose time series we are interested in
    temperature_threshold=30,  # mask values under the threshold
)

# Retrieve the time series from the DB
dg0.max(
    subjs=[65, 66],            # narrow the time series selection
    factory=wfactory(
        'db4',                 # Daubechies wavelet of order 4
        8,                     # with decomposition level 8
    ),
    fbands=[2, 3, 5],          # specify the reconstruction frequency bands
).plot_signal()
```

You can now run embeddings, decompositions, and analysis routines.
## Local Configuration

nbdev_fhemb uses the same configuration file layout as the fhemb library, because it imports fhemb as an external dependency and relies on its runtime settings.

nbdev_fhemb does not ship default configuration files. If you already use fhemb, you can reuse the same configuration directory without modification.

fhemb expects its settings in two user-side configuration files, which must be placed in the standard per-user config directory for your platform:
| Platform | Config directory | Example path |
|---|---|---|
| macOS | `~/.config/fhemb/` | `/Users/<user>/.config/fhemb/` |
| Linux | `~/.config/fhemb/` | `/home/<user>/.config/fhemb/` |
| Windows | `%APPDATA%\fhemb\` | `C:\Users\<user>\AppData\Roaming\fhemb\` |
Required configuration files inside that directory:
- `.env.db`
- `.env.paths`
You can start from the provided templates:

```sh
mkdir -p ~/.config/fhemb
cp config/.env.db.template ~/.config/fhemb/.env.db
cp config/.env.paths.template ~/.config/fhemb/.env.paths
```

Edit both files and fill in your real values (SSH, DB, NAS paths). Do not commit the filled-in versions.
> **Important:** The NAS directories specified in `.env.paths` must be mounted before running any analysis. Refer to the scripts/README for mount instructions.
### Configuration file reference
Both configuration files (.env.db and .env.paths) must be created in your per-user config directory (see table above). Below is a description of all fields and recommended defaults for each platform.
#### .env.db fields

| Key | Meaning | Notes / Defaults |
|---|---|---|
| `SSH_PASSPHRASE` | Passphrase for your SSH private key | Leave empty if your key is unencrypted |
| `REMOTE_HOST` | Remote server hosting PostgreSQL/TimescaleDB | e.g. `12.69.2.30` |
| `REMOTE_PORT` | SSH port | Default: `22` |
| `SSH_USERNAME` | Username for SSH login | required |
| `SSH_PKEY` | Path to your SSH private key | • macOS/Linux: `~/.ssh/id_rsa` • Windows: `%USERPROFILE%\.ssh\id_rsa` |
| `REMOTE_BIND_HOST` | Local bind address for SSH tunnel | Default: `localhost` |
| `REMOTE_BIND_PORT` | Local port for forwarded DB | Default: `5432` |
| `DB_NAME` | Database name | e.g. `tcamera` |
| `DB_USERNAME` | Database user | required |
| `DB_PASSWORD` | Database password | required |
| `DB_HOST` | Host for DB connection (after tunnel) | Default: `localhost` |
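For illustration, settings files in this key=value format are typically read along the lines of the following sketch (the `parse_env` helper is hypothetical, not fhemb's loader; field names follow the table above):

```python
def parse_env(text: str) -> dict:
    """Tiny key=value parser: skips blanks and comments, strips quotes."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip().strip('"')
    return settings

# A fragment in the same shape as .env.db, using values from the table.
example = """
# SSH tunnel
REMOTE_PORT=22
REMOTE_BIND_HOST=localhost
DB_NAME="tcamera"
"""
cfg = parse_env(example)
print(cfg["DB_NAME"])   # tcamera
```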
#### .env.paths fields

| Key | Meaning | macOS/Linux default | Windows default |
|---|---|---|---|
| `NODENAME` | Human-readable name of your machine | `"My Machine"` | `"My Machine"` |
| `LOCALROOT` | Local project root directory | `~/Projects/fhemb/` | `%USERPROFILE%\Projects\fhemb\` |
| `MOUNT` | NAS mount point | `/Volumes/NASstorage/` | `Z:\` (or any SMB-mapped drive) |
| `AUDIOFILES` | Path to thermal data on the NAS | `/Volumes/NASstorage/thermal_data/` | `Z:\thermal_data\` |
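The platform defaults above use `~` and `%USERPROFILE%`; if you consume these values in your own scripts, the standard library expands both forms (an illustrative sketch, not part of fhemb):

```python
import os

def expand(path: str) -> str:
    # expandvars resolves %USERPROFILE%-style variables on Windows
    # (and $VAR-style on POSIX); expanduser resolves "~" on macOS/Linux.
    return os.path.expanduser(os.path.expandvars(path))

# LOCALROOT default from the table; the result depends on the current user.
print(expand("~/Projects/fhemb/"))
```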
Example template with platform-aware defaults:
```sh
# Human-readable node name (your machine)
NODENAME="My Machine"

# Local project root
# macOS/Linux: ~/Projects/fhemb/
# Windows: %USERPROFILE%\Projects\fhemb\
LOCALROOT="~/Projects/fhemb/"

# NAS mount points
# macOS/Linux: /Volumes/NASstorage/
# Windows: Z:\
MOUNT="/Volumes/NASstorage/"

# Thermal data directory on the NAS
# macOS/Linux: /Volumes/NASstorage/thermal_data/
# Windows: Z:\thermal_data\
AUDIOFILES="/Volumes/NASstorage/thermal_data/"
```

## OpenMP Runtime Note
Some environments can load both Intel OpenMP (libiomp, typically via torch) and LLVM OpenMP (libomp, typically via scikit-learn). Mixing them in the same Python process can trigger warnings and (on some platforms) instability.
If you see this warning, the most reliable workaround is to keep torch and scikit-learn in separate environments and run each workload in the appropriate one. If you must use both in a single environment, be aware of the risk and consider diagnosing with `threadpoolctl.threadpool_info()`.
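A minimal diagnosis sketch with threadpoolctl (a third-party package, installed separately) that lists the native threading runtimes loaded in the current process:

```python
try:
    from threadpoolctl import threadpool_info
except ImportError:       # pip install threadpoolctl
    threadpool_info = None

if threadpool_info is not None:
    for rt in threadpool_info():
        # Each entry describes one native runtime; seeing both a
        # "libiomp" and a "libomp" prefix indicates the mixed setup.
        print(rt.get("user_api"), rt.get("prefix"), rt.get("filepath"))
```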
## Logging Configuration

nbdev_fhemb uses the default fhemb logging configuration. Nevertheless, you can adjust fhemb logger levels from outside the library after import.

### Quick start

Update levels at runtime:
```python
import fhemb

# Batch update
fhemb.set_log_levels({
    "embedding": "INFO",
    "dbms": "DEBUG",
})

# Single logger update
fhemb.set_logger_level("tseries", "CRITICAL")
```

> **Note:**
> - `setup_logging()` is idempotent. Use `force=True` to re-apply the full logging configuration.
> - Accepted levels: standard names (`"DEBUG"`, `"INFO"`, `"WARNING"`, `"ERROR"`, `"CRITICAL"`) or numeric logging levels.
## Developer Guide
Contributions are welcome. Please open an issue or submit a pull request on GitHub if you want to propose improvements, report bugs, or extend the library.
This project uses nbdev 3. All development happens inside the nbs/ directory, and notebooks are exported into the Python package using nbdev-export.
We recommend the following four-step workflow:
### 1. Install nbdev_fhemb in development mode

> ⚠️ Cloning the repository requires that your GitHub account has been granted access to this private repository.

You may authenticate using either:

- an SSH key registered with your GitHub account that has access to this repository, or
- an HTTPS clone using a GitHub Personal Access Token (PAT) with the `repo` scope.
```sh
# SSH clone (requires SSH key and repo access)
git clone git@github.com:rdned/nbdev_fhemb.git
# or HTTPS clone with a Personal Access Token (PAT)
git clone https://github.com/rdned/nbdev_fhemb.git

cd nbdev_fhemb
pip install -e ".[dev]"
```

- Activate your favorite isolated environment first (venv/pyenv/conda). Examples (alternatives):

  ```sh
  # venv
  source .venv/bin/activate
  # pyenv
  pyenv activate <env-name>
  ```

- Install the package in editable mode together with developer extras (including `ipykernel` for local notebook work):

  ```sh
  pip install -e ".[dev]"
  ```

- Optionally register a named kernel in Jupyter's kernelspec list:
  - recommended if you use JupyterLab/classic Jupyter or share notebooks across environments
  - usually not needed for VS Code if the correct Python env is already selected

  ```sh
  python -m ipykernel install --user --name fhemb --display-name "Python (fhemb)"
  ```

### 2. Work inside nbs/
Work in Jupyter notebooks, use nbdev directives, export images using the “kaleido” engine.
### 3. Update CI Infrastructure
- Check the CI dependency pins
- Build and publish a new Docker image if you have edited any of the Docker build scripts or the Dockerfile.
For more details refer to workflows/README
### 4. Export code and clean notebooks

a) Hooks configured in .pre-commit-config.yaml:

This repository includes hooks from fastai/nbdev, pre-commit/pre-commit-hooks, and a local nbdev-readme hook.

- `nbdev-clean`: strip outputs (except frozen cells) and normalize notebooks
- `nbdev-export`: export Python modules from notebooks
- `nbdev-readme`: render README content from this source notebook
- `trailing-whitespace` and `end-of-file-fixer`: basic text formatting checks

pre-commit is installed by `pip install -e ".[dev]"`. Run

```sh
pre-commit install
```

to register the Git hooks from .pre-commit-config.yaml so checks run automatically on each git commit.
Optionally, when you want to bump the pinned hook versions (the `rev:` entries in .pre-commit-config.yaml):

```sh
pre-commit autoupdate
```
b) To execute the hooks manually:

- nbdev hooks only:

  ```sh
  nbdev-export
  nbdev-clean
  nbdev-readme
  ```

- All pre-commit hooks (if you do not run `pre-commit install`, run the checks manually):

  ```sh
  pre-commit run --all-files
  ```

  This executes all hooks configured for the pre-commit stage in .pre-commit-config.yaml: nbdev-clean, nbdev-export, nbdev-readme, trailing-whitespace, and end-of-file-fixer (subject to each hook's exclude rules).
## License

fhemb is released under the Apache-2.0 License. See the LICENSE file for details.

For additional examples, use cases, and advanced workflows, see the project documentation hosted on the GitHub Pages site: https://rdned.github.io/ml-projects/. Package-manager-specific guidelines are available on conda and PyPI, respectively.

