Yes. Use the Automation tab, load a parameter file, then toggle Auto Run to run the pipeline steps for a session. This covers most “API-like” needs for now.
Session-to-session variability is real. Even within the same animal, different sessions can need different parameter “loving care.” Bulk processing is possible, but it can silently bake in mistakes, so the default workflow prioritizes per-session validation.
The key sanity check is the red trace: it should be relatively flat. If you see big spikes or parabola-like artifacts, adjust background/noise parameters.
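One way to make this check concrete is to flag frames where the background trace deviates sharply from a slowly varying baseline. The sketch below is a hypothetical helper (the function name and the export format of the "red" trace are assumptions, not the pipeline's API): it subtracts a rolling-median baseline to absorb slow drift, then flags residuals beyond a robust threshold.

```python
import numpy as np

def check_background_trace(trace, window=101, spike_sigma=5.0):
    """Flag large deviations in a background ("red") trace.

    Hypothetical helper: `trace` is assumed to be a 1-D array of per-frame
    background values exported from the pipeline.
    """
    trace = np.asarray(trace, dtype=float)
    # Rolling-median baseline absorbs slow drift (parabola-like trends).
    pad = window // 2
    padded = np.pad(trace, pad, mode="edge")
    baseline = np.array([np.median(padded[i:i + window]) for i in range(trace.size)])
    residual = trace - baseline
    # Robust spread via the median absolute deviation.
    mad = np.median(np.abs(residual - np.median(residual))) * 1.4826
    spikes = np.flatnonzero(np.abs(residual) > spike_sigma * max(mad, 1e-12))
    return spikes, residual

# A flat trace with one injected artifact should be flagged at that frame.
flat = np.ones(500) + 0.01 * np.random.default_rng(0).standard_normal(500)
flat[250] += 1.0
spikes, residual = check_background_trace(flat)
```

A trace that passes this check (empty `spikes`) is "relatively flat" in the sense described above; a parabola-like artifact would instead show up as a large, slowly varying `residual` even without discrete spikes.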
There is no single perfect visual check. For now, treat denoising as a standard preprocessing step. If your pipeline produces weird “glow removal” artifacts, it is usually fixable via parameter tuning and more conservative recording brightness (see below).
Check the motion summary plots and the displacement distribution. If motion is small and stable, with no large, abrupt jumps, your surgery and recording stability are likely good. Line-splitting artifacts often show up here; the pipeline is designed to remove many of those.
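If you want a numeric version of that eyeball test, the sketch below (hypothetical helper, assuming per-frame rigid shifts are available as a `(T, 2)` array of `(dy, dx)` values) flags frame-to-frame displacements above a pixel threshold:

```python
import numpy as np

def summarize_motion(shifts, jump_px=5.0):
    """Summarize per-frame rigid shifts; flag abrupt frame-to-frame jumps.

    `shifts` is assumed to be a (T, 2) array of (dy, dx) motion estimates.
    """
    shifts = np.asarray(shifts, dtype=float)
    step = np.linalg.norm(np.diff(shifts, axis=0), axis=1)  # per-frame displacement
    return {
        "max_step_px": float(step.max()),
        "jump_frames": np.flatnonzero(step > jump_px) + 1,  # frames after a jump
    }

# Slow drift (random walk) with one injected abrupt jump at frame 100.
rng = np.random.default_rng(1)
shifts = np.cumsum(0.1 * rng.standard_normal((200, 2)), axis=0)
shifts[100] += np.array([8.0, 0.0])
report = summarize_motion(shifts)
```

An empty `jump_frames` with a small `max_step_px` corresponds to "small and stable" motion; the jump threshold should be chosen relative to your cell diameter in pixels.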
You want a smooth decay curve, not a spiky pattern. Component 0 is typically background and should be much larger than others. Early components should capture most visible structure; later components capture small variations that add up downstream.
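The decay-curve check can be scripted once you have the per-component variances (or singular values) in pipeline order. This is a minimal sketch with assumed inputs, not a pipeline function: it measures how strongly component 0 dominates and finds any "upticks" that break a smooth decay.

```python
import numpy as np

def inspect_spectrum(component_var):
    """component_var: per-component variance (or singular values), pipeline order."""
    v = np.asarray(component_var, dtype=float)
    background_dominance = v[0] / v[1]           # component 0 vs. next component
    upticks = np.flatnonzero(np.diff(v) > 0)     # indices where decay is broken
    return background_dominance, upticks

# A healthy spectrum: a dominant background term, then smooth geometric decay.
good = np.concatenate(([1000.0], 50.0 * 0.8 ** np.arange(19)))
dominance, upticks = inspect_spectrum(good)
```

A large `background_dominance` and an empty `upticks` array match the description above; isolated upticks late in the spectrum correspond to the "spiky pattern" to avoid.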
Not automatically. It often means parameters are too permissive early (especially minimum component size). It is easier to remove extra components later than recover missed ones, but thousands can slow later steps. Increase minimum component size and related pruning thresholds to reduce tiny “dot” components.
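The "remove extra components later" step amounts to filtering on spatial footprint size. The sketch below shows the idea under assumed data layout (a `(K, H, W)` array of spatial masks); the function name and threshold are illustrative, not the pipeline's own:

```python
import numpy as np

def prune_small_footprints(footprints, min_size_px=25):
    """Keep only components whose spatial mask covers >= min_size_px pixels.

    `footprints` is assumed to be a (K, H, W) array of spatial masks.
    """
    sizes = (footprints > 0).reshape(len(footprints), -1).sum(axis=1)
    keep = sizes >= min_size_px
    return footprints[keep], keep

fp = np.zeros((3, 32, 32))
fp[0, 5:15, 5:15] = 1.0    # 100-px cell-sized component
fp[1, 20:22, 20:22] = 1.0  # 4-px "dot" component
fp[2, 0:10, 20:28] = 1.0   # 80-px component
kept, mask = prune_small_footprints(fp)
```

Raising `min_size_px` is the post hoc equivalent of raising the pipeline's minimum component size up front; doing both keeps the count manageable without risking missed cells.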
Both are thresholds between 0 and 1. Increasing either makes merging stricter. If you want cleaner separation, raise the overlap threshold (for example, 0.9 instead of ~0.5), and increase the temporal correlation threshold only if your expected cell size and imaging resolution support it.
AR=1 favors simple calcium-like dynamics. AR=2 can capture tighter timing structure, but may admit more noise artifacts. If you are seeing too many fast, noise-like events, AR=1 can help. If you expect very fast dynamics, AR=2 may be justified.
Start with: zero threshold (raise it if you have too many tiny components), sparse penalty (increase to discourage diffuse/global signals), and basic pruning thresholds like minimum spike std, minimum calcium variance, and minimum spatial std. Don’t over-optimize early rounds; later QC filters exist.
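As a starting point, the parameters above can be collected into one place before a first pass. Every name and value below is illustrative; match the names to the ones in your pipeline's parameter file rather than copying them verbatim.

```python
# Illustrative first-pass values only; parameter names are hypothetical
# and must be mapped onto your pipeline's actual parameter file.
first_pass_params = {
    "zero_threshold": 0.05,        # raise if too many tiny components survive
    "sparse_penalty": 1.0,         # increase to discourage diffuse/global signals
    "min_spike_std": 0.5,          # prune components with near-noise spike trains
    "min_calcium_variance": 0.01,  # prune essentially flat calcium traces
    "min_spatial_std": 0.5,        # prune spatially diffuse footprints
}
```

Keeping the first-round values loose and relying on later QC filters, as suggested above, usually beats tuning each number tightly on round one.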
Aim for “as dim as possible while still seeing cells,” similar to a game brightness slider where you keep it just barely visible. Overly bright recordings can increase autofluorescent background and swamp real signals.
Sometimes. Global signals can leak into temporal components when the model is allowed to explain too much shared variance. Options include: tightening sparsity, using AR=1 in cases where AR=2 is admitting fast artifacts, and applying post hoc frequency-based filtering if the pattern is clearly periodic.
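For the clearly periodic case, frequency-based filtering can be as simple as a notch filter on the affected traces. This is a generic sketch using SciPy, not a pipeline feature; the sampling rate and artifact frequency below are assumptions you would replace with your own values.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 30.0  # frame rate in Hz (assumed)
f0 = 7.5   # periodic artifact frequency to remove (assumed)

# Design a narrow notch at f0 and apply it forward-backward (zero phase).
b, a = iirnotch(f0, Q=30, fs=fs)

# Synthetic trace: 1 Hz "signal" plus a 7.5 Hz periodic artifact.
t = np.arange(0, 10, 1 / fs)
trace = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * f0 * t)
clean = filtfilt(b, a, trace)
```

Because the notch is narrow (bandwidth ≈ f0/Q), signal content away from the artifact frequency passes through nearly untouched, which is why this works as a post hoc fix only when the global pattern is clearly periodic.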
In this pipeline, background removal is implemented via deglow plus denoising steps (conceptually similar to other 1p pipelines). More customization and better visualization for the early steps are planned, but you can always inspect the implementation directly in the source.
CNMF remains a strong “gold standard” for many 1p use cases. ML methods can work well but often require training data matched to your lens, expression, and brain region. They can also miss rare but meaningful signals due to long-tail generalization issues. If you need maximum interpretability, CNMF-style pipelines still win for many labs.
Use the GitHub Issue Tracker: https://github.com/ariasarch/MPS_1.0.0/issues. Reporting publicly helps others and makes fixes easier to track.