Image Pre-Processing

Build and test filter stacks interactively with the ImagePairViewer. See real-time results before committing to full processing, with side-by-side comparison of raw and filtered images.

Getting Started

The Pre-Processing panel provides a complete workspace for developing and testing image filter pipelines. Build filter stacks, see results instantly, and iterate until you achieve optimal particle visibility and background removal.

1. Add Filters

Build your filter stack from temporal and spatial options

2. Test

Apply filters to the current frame with one click

3. Compare

View raw vs processed side-by-side

4. Iterate

Adjust parameters and play back through frames to verify

Why Pre-process?

Image pre-processing is a critical step in PIV analysis that significantly improves cross-correlation accuracy and vector quality. By removing noise, correcting uneven illumination, and subtracting stationary backgrounds, you can enhance particle visibility and reduce spurious vectors. PIVTools provides two categories of filters, each serving distinct purposes in the preprocessing pipeline:

Temporal Filters

Temporal filters analyze patterns across multiple consecutive frames (batches) to identify and remove features that persist over time. These are particularly effective for eliminating static backgrounds, reflections, and slowly-varying illumination patterns that would otherwise contaminate cross-correlation results.

  • time: Subtracts the local minimum intensity across the batch, removing static features
  • pod: Uses Proper Orthogonal Decomposition to remove coherent large-scale structures

Spatial Filters

Spatial filters operate on individual frames, modifying pixel intensities based on local neighbourhoods. Use these for smoothing high-frequency noise, enhancing particle contrast, normalising intensity variations, and preparing images for optimal cross-correlation performance.

  • Smoothing: Gaussian, Median
  • Contrast: Clip, Normalise, Max-Norm, Local Max
  • Correction: Invert, Background Subtract, Levelize
  • Geometric: Transpose

Processing Order: Filters are applied sequentially from top to bottom in your filter stack. Temporal filters should typically come first to remove background features, followed by spatial filters for noise reduction and contrast enhancement.
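
In code terms, the stack behaves like a simple top-to-bottom fold over the filter list. The snippet below is a conceptual Python sketch only, not PIVTools internals; the two filter callables are placeholder examples.

import numpy as np

# Conceptual sketch (not PIVTools internals): each entry in the stack
# transforms the output of the previous one, so order matters.
def apply_stack(batch, stack):
    """batch: frame array of shape (n_frames, height, width);
    stack: ordered list of filter callables, temporal filters first."""
    for apply_filter in stack:
        batch = apply_filter(batch)
    return batch

# Example order: temporal background removal first, then a simple intensity clip.
remove_static = lambda b: b - b.min(axis=0, keepdims=True)
clip_bright = lambda b: np.clip(b, 0, np.percentile(b, 99))
filtered = apply_stack(np.random.rand(10, 64, 64), [remove_static, clip_bright])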

The ImagePairViewer

The ImagePairViewer is your interactive workspace for developing and refining filter stacks. It displays raw and processed images in a synchronised side-by-side layout, with shared zoom and pan controls for precise comparison.

Viewer Layout

Left Panel — Raw Image

Displays the original, unprocessed image from your source data. This panel serves as your reference point for evaluating filter effectiveness. Contrast adjustments are independent of the processed view, while zoom and pan stay synchronised between the two panels for direct comparison.

  • Frame A/B toggle to view either image of the pair
  • Independent contrast controls with auto-scale option
  • Download buttons for exporting raw images

Right Panel — Processed Image

Shows the result after applying your complete filter stack. This panel updates when you click "Test Filters" and allows you to evaluate how each filter affects particle visibility and background removal. For temporal filters, processing includes the entire batch of frames.

  • Frame A/B toggle matching the raw panel
  • Processing status indicator during batch operations
  • Download buttons for exporting processed results

Frame Playback Controls

Navigate through your image sequence to verify that filters perform consistently across all frames.

Navigation Controls
  • Frame Slider: Drag to jump to any frame in your sequence
  • Direct Input: Type a specific frame number for precise navigation
  • Previous/Next Buttons: Step through frames one at a time
  • Frame Counter: Shows current position (e.g., "25 / 100")
Playback Controls
  • Play/Pause: Toggle automatic frame advancement
  • Playback Speed: Select from 0.5, 1, 2, 5, or 10 FPS
  • Loop: Playback automatically wraps from last frame to first

Performance Tip: Use JPEG format for faster playback during filter development. Switch to PNG only when you need lossless precision for final inspection.

Frame A / Frame B Toggle

PIV analysis works with image pairs, where Frame A and Frame B are captured with a short time delay. Each viewer panel includes an A / B toggle that lets you inspect either frame independently. This is essential for verifying that filters process both frames consistently and that particle patterns are preserved correctly for cross-correlation.

  • Both raw and processed panels have independent A/B toggles
  • Download buttons export the currently selected frame (A or B)
  • Auto-contrast adjusts independently for each frame if enabled

Image Format Selection

The ImagePairViewer supports two image formats for displaying frames. Choose the format that best matches your current workflow stage.

JPEG

Fast Mode

JPEG compression provides significantly smaller file sizes and faster network transfer, making it ideal for interactive filter development and playback. The compression introduces minor artifacts, but these are typically imperceptible during normal inspection.

  • ✓ Faster frame loading and smoother playback
  • ✓ Reduced bandwidth and memory usage
  • ✓ Recommended for filter development workflow
  • ✗ Minor compression artifacts (usually invisible)

PNG

Precise Mode

PNG provides lossless compression that preserves exact pixel values. Use this mode when you need to inspect fine details, verify intensity distributions, or export images for publication or further analysis.

  • ✓ Lossless quality with exact pixel values
  • ✓ Best for final inspection and export
  • ✓ Accurate intensity measurements
  • ✗ Larger files and slower loading

Grid Overlay

The Grid Overlay draws interrogation window boundaries on your images, helping you visualize how PIV processing will divide the image for cross-correlation. This is invaluable for verifying that particles span appropriate sizes relative to window dimensions and that each window contains sufficient particle density.

Grid Controls

Grid Size Options

Select from preset sizes or enter a custom value to match your intended interrogation window dimensions:

  • 8×8 — Fine resolution for small particles
  • 16×16 — Common final pass size
  • 32×32 — Standard intermediate pass
  • 64×64 — Large windows for initial passes
  • Custom — Enter any value from 1-512 pixels

Grid Thickness

Adjust line thickness for visibility at different zoom levels:

  • 1px — Minimal visual interference
  • 2px — Good balance (recommended)
  • 4-6px — High visibility at low zoom
  • 8-10px — Maximum visibility

PIV Guidelines: For optimal cross-correlation, particles should span at least 2-4 pixels in diameter, and each interrogation window should contain 5-10 particle images. Use the grid overlay to verify these conditions are met across your entire field of view.
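
As a quick back-of-the-envelope check, you can estimate particle images per window from the seeding density. The numbers below are example values, assuming roughly uniform seeding:

# Rough check, assuming approximately uniform seeding across the image.
seeding_density = 0.01          # particle images per pixel (example value)
window = 32                     # interrogation window size in pixels
particles_per_window = seeding_density * window ** 2
print(particles_per_window)     # 10.24 -> at the upper end of the 5-10 guideline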

Zoom & Contrast Controls

Both image panels share zoom and pan state, so when you zoom into a region of interest, both raw and processed views show the same area for direct comparison. Contrast controls are independent for each panel, allowing you to optimise visibility for both the original and filtered images.

Zoom & Pan

  • Scroll Wheel: Zoom in and out centred on cursor position
  • Click + Drag: Pan the view to explore different regions
  • Double-Click: Reset zoom and pan to fit the entire image
  • Synchronised: Both panels track the same zoom level and position

Contrast (Percentage-Based)

  • Auto Scale: Automatically adjusts vmin/vmax based on image statistics (1st-99th percentile); see the sketch after this list
  • Manual Sliders: Dual-thumb slider for precise min/max control (0-100%)
  • Direct Input: Type exact percentage values in the input fields
  • Independent: Raw and processed panels have separate contrast settings
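
The percentile-based Auto Scale behaviour can be approximated as follows. This is a sketch of the assumed logic, not the viewer's actual code:

import numpy as np

def auto_contrast_limits(image, lo=1, hi=99):
    """Approximate the Auto Scale behaviour: set the display limits at the
    1st and 99th intensity percentiles so outliers don't dominate the view."""
    vmin, vmax = np.percentile(image, [lo, hi])
    return vmin, vmax

# Example: pass the limits to any plotting backend, e.g. imshow(img, vmin=vmin, vmax=vmax)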

Colourmap Selection

Choose between two colourmap options for visualising intensity distributions:

Grayscale

Traditional black-to-white mapping. Best for natural image appearance and evaluating particle visibility against backgrounds.

Viridis

Perceptually uniform colourmap from purple through blue-green to yellow. Excellent for revealing subtle intensity variations and gradients.

Building a Filter Stack

Filters are applied in order from top to bottom of your stack. Build your preprocessing pipeline by adding filters, configuring their parameters, and reordering as needed. All changes are automatically saved to your config.yaml file with a 500ms debounce to prevent excessive writes during rapid editing.
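
The debounce pattern itself is simple: every edit restarts a short countdown, and the save fires only once the edits stop. The GUI presumably implements this in its frontend; the Python sketch below merely illustrates the pattern with the documented 500 ms delay.

import threading

class DebouncedSaver:
    """Illustrative debounce: the save callback fires 0.5 s after the last change."""

    def __init__(self, save_fn, delay=0.5):
        self.save_fn = save_fn
        self.delay = delay
        self._timer = None

    def notify_change(self, config):
        # Cancel any pending save and restart the countdown.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.delay, self.save_fn, args=(config,))
        self._timer.start()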

Filter Stack Controls

Adding Filters
  • Filter Dropdown: Select from temporal (time, pod) or spatial filters
  • Add Button: Appends the selected filter to the bottom of your stack
  • Default Parameters: New filters use sensible defaults that you can adjust
Managing Filters
  • Move Up/Down: Change a filter's position in the processing order
  • Remove: Delete a filter from the stack
  • Expand/Collapse: Click the filter header to show or hide its parameters

1. Select a filter type

Use the dropdown menu to choose from Temporal filters (Time, POD) or Spatial filters (Gaussian, Median, Clip, Normalise, Max-Norm, Local Max, Invert, Background Subtract, Levelize, Transpose). The dropdown groups filters by category for easier navigation.

2. Click Add Filter

The selected filter is added to the bottom of your stack with default parameters. If you add temporal filters, a Batch Size control appears to configure how many frames are processed together.

3. Configure parameters

Expand the filter card by clicking on it to reveal parameter controls. Each filter type has specific parameters (e.g., sigma for Gaussian, kernel size for Median). Changes are saved automatically after a brief delay.

4. Reorder if needed

Use the up/down arrow buttons to change the processing order. Remember that temporal filters should typically precede spatial filters for best results.

Recommended Order: Start with temporal filters (Time, POD) to remove backgrounds, then apply spatial filters for noise reduction and contrast enhancement. A typical effective stack: Time → POD → Gaussian → Median → Normalise.
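
For reference, the equivalent filters section of config.yaml for that typical stack can be generated programmatically. The snippet below uses PyYAML, with parameter values taken from the defaults documented later on this page; it is one possible way to write the stack out, not the GUI's own save routine.

import yaml  # PyYAML

# The "typical effective stack" from the tip above, expressed with the keys
# used in the config.yaml reference further down this page.
stack = [
    {"type": "time"},
    {"type": "pod"},
    {"type": "gaussian", "sigma": 1.0},
    {"type": "median", "size": [5, 5]},
    {"type": "norm", "size": [7, 7], "max_gain": 1.0},
]
print(yaml.safe_dump({"filters": stack}, sort_keys=False))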

Testing Filters

The Test Filters button applies your current filter stack to the active frame and displays the result in the processed panel. This is the core workflow for developing effective preprocessing pipelines—test, evaluate, adjust, and repeat until you achieve optimal results.

Testing Workflow

Spatial Filters Only

When your stack contains only spatial filters, results appear nearly instantly since processing applies only to the single current frame. This enables rapid iteration and parameter tuning.

With Temporal Filters

When temporal filters (Time, POD) are present, clicking Test Filters processes a full batch of frames. A progress indicator shows "Processing..." during computation. Results for all frames in the batch become available for playback after processing completes.

1. Add filters to your stack

Build your filter pipeline using the dropdown and Add Filter button. Configure parameters for each filter as needed.

2. Click Test Filters

Apply the filter stack to the current frame (or batch for temporal filters). The button shows "Processing..." and is disabled during computation.

3. Compare raw vs processed

Use the side-by-side view to evaluate filter effectiveness. Zoom into regions of interest to inspect particle visibility and background removal.

4. Adjust and re-test

Tweak filter parameters, reorder filters, or add/remove filters as needed. Click Test Filters again to see updated results.

5. Verify across frames

Use the playback controls to step or play through multiple frames, verifying that filters perform consistently across your entire sequence.

Processing State Indicators

  • Loading Spinner: Appears over the processed panel during computation
  • "Frame not yet processed": Shown when navigating to a frame outside the processed batch
  • "No filters configured": Displayed when the filter stack is empty
  • Processing Blocked Dialog: Appears if you try to modify filters during batch processing, offering to cancel or wait

Pro Tip: After testing, use the Play button to automatically advance through several frames. Watch for any frames where filters underperform—what works on one frame may need adjustment for frames with different lighting or particle density.

Batch Size for Temporal Filters

Temporal filters (Time and POD) analyze patterns across multiple consecutive frames to identify and remove persistent features. The Batch Size setting controls how many frames are included in each processing batch, directly affecting both filter effectiveness and computational requirements.

Larger Batch Size (50-100)

  • ✓ Better statistical estimation of background features
  • ✓ More effective removal of slowly-varying patterns
  • ✓ POD can identify more coherent modes
  • ✗ Higher memory usage (frames loaded simultaneously)
  • ✗ Longer processing time per batch

Best for: Steady flows, consistent backgrounds, stationary cameras

Smaller Batch Size (10-30)

  • ✓ Lower memory usage during processing
  • ✓ Faster iteration during filter development
  • ✓ Better for time-varying conditions
  • ✗ Less effective background estimation
  • ✗ May miss slowly-varying features

Best for: Dynamic flows, varying illumination, testing filters quickly
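
To gauge memory requirements before committing to a batch size, a rough estimate is frames × pixels × bytes per pixel. The figures below are assumptions for illustration (16-bit camera data, both frames of each pair held in memory):

# Rough memory estimate for one temporal-filter batch.
# Assumptions for illustration: 16-bit images, Frame A and Frame B both in memory.
batch_size = 50
height, width = 1024, 1280
bytes_per_pixel = 2
frames_per_pair = 2
gigabytes = batch_size * frames_per_pair * height * width * bytes_per_pixel / 1e9
print(f"~{gigabytes:.2f} GB")   # ~0.26 GB, before any float copies made during filtering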

Batch Size Control

When temporal filters are present in your stack, a Batch Size input field appears next to the filter selector. The value is validated against your total frame count and saved automatically to the configuration.

  • Minimum: 1 frame (though temporal filters need multiple frames to be effective)
  • Maximum: Total number of frame pairs in your sequence
  • Changes saved on blur (when you click away from the input)

Temporal Filters

Temporal filters analyze intensity patterns across multiple frames to identify and remove features that persist over time. These are essential for removing static backgrounds, reflections, and large-scale flow structures that would otherwise dominate cross-correlation peaks.

time

Time Filter

The Time filter subtracts the local minimum intensity from each pixel across all frames in the batch. Any feature that appears consistently across frames (static background, reflections, sensor artifacts) will be removed because its minimum value equals its typical value. Moving particles, which only occupy each pixel briefly, are preserved.

Algorithm: For each pixel position (x, y), compute the minimum intensity value across all N frames in the batch. Subtract this minimum from every frame at that position. Clip negative values to zero. Process Frame A and Frame B channels independently.
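
A minimal NumPy sketch of this operation, assuming the batch is stacked as a single array of frames, looks like the following:

import numpy as np

def time_filter(frames):
    """Sketch of the Time filter described above. `frames` has shape
    (n_frames, height, width); Frame A and Frame B batches are filtered separately."""
    frames = frames.astype(np.float32)
    background = frames.min(axis=0, keepdims=True)  # per-pixel minimum across the batch
    return np.clip(frames - background, 0, None)    # subtract and clip negatives to zero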

Effective For
  • Static background removal
  • Constant reflections and glare
  • Fixed sensor artifacts

Less Effective For
  • Moving backgrounds
  • Time-varying illumination
  • Large coherent structures

Parameters: None — operates automatically on the batch

pod

POD Filter (Proper Orthogonal Decomposition)

The POD filter uses Proper Orthogonal Decomposition (also known as Principal Component Analysis in the temporal domain) to identify and remove coherent structures from the image sequence. This advanced technique decomposes the flow field into ranked spatial modes, automatically identifies which modes represent "signal" (background/large structures) versus "noise" (particles), and reconstructs images with signal modes removed.

Algorithm (Mendez et al.), sketched in Python after the list:

  1. Reshape each frame to a 1D vector and stack into matrix M (frames × pixels)
  2. Compute covariance matrix C = M × Mᵀ
  3. Perform SVD: C = PSI × S × PSIᵀ to get eigenvectors and eigenvalues
  4. Auto-detect the first "noise mode", where mean(PSI) < 0.01 and the eigenvalue difference is small
  5. Compute spatial modes (PHI) and temporal coefficients for the signal modes
  6. Reconstruct and subtract the signal contribution from each frame
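
The sketch below follows these steps with NumPy. It is an approximation for illustration only: the exact mode-selection logic in PIVTools may differ, and the thresholds are simply the documented defaults.

import numpy as np

def pod_filter(frames, eps_psi=0.01, eps_sigma=0.01):
    """Approximate snapshot-POD background removal (illustrative only)."""
    n, h, w = frames.shape
    M = frames.reshape(n, -1).astype(np.float64)       # frames x pixels
    C = M @ M.T                                        # n x n covariance matrix
    eigvals, psi = np.linalg.eigh(C)                   # returned in ascending order
    eigvals, psi = eigvals[::-1], psi[:, ::-1]         # re-sort descending
    # Find the first "noise" mode: near-zero mean temporal coefficient and a
    # small relative eigenvalue gap (thresholds are the documented defaults).
    gaps = np.abs(np.diff(eigvals)) / eigvals[0]
    means = np.abs(psi.mean(axis=0))
    cut = next((k for k in range(1, n - 1)
                if means[k] < eps_psi and gaps[k] < eps_sigma), 1)
    # Project onto the leading "signal" modes, subtract, and clip at zero.
    signal = psi[:, :cut] @ (psi[:, :cut].T @ M)
    return np.clip(M - signal, 0, None).reshape(n, h, w)
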
Effective For
  • Large-scale flow structures
  • Coherent vortices and wakes
  • Periodic flow features
  • Complex time-varying backgrounds

Considerations
  • Computationally intensive (SVD)
  • Needs sufficient batch size (>30)
  • May over-filter sparse particle fields

Parameters: None — automatic mode detection using eps_auto_psi=0.01 and eps_auto_sigma=0.01

Spatial Filters

Spatial filters operate on individual frames, modifying pixel intensities based on local neighbourhoods. These filters are applied per-frame after any temporal filters, and are used for noise reduction, contrast enhancement, and image correction.

gaussian
Gaussian Blur

Smooths images using a Gaussian kernel with specified standard deviation. Reduces high-frequency noise while preserving larger features. Applied in both spatial dimensions.

  • sigma=1.0: Standard deviation (pixels)

median
Median Filter

Replaces each pixel with the median value of its neighbourhood. Excellent for removing salt-and-pepper noise while preserving edges better than Gaussian smoothing.

  • size=[5, 5]: Kernel size [height, width]

clip
Clip Filter

Clips pixel intensities to a threshold range. Auto-mode computes threshold as median ± n×std for each frame, handling hot pixels and sensor noise automatically.

  • n=2.0: Std devs for auto threshold
  • threshold=null: Optional explicit [min, max]

norm
Normalisation

Local contrast normalisation that subtracts sliding minimum and divides by local range (max-min). Equalises intensity variations across the image.

  • size=[7, 7]: Kernel size [height, width]
  • max_gain=1.0: Maximum normalisation gain

maxnorm
Max-Norm

Normalises by local max-min contrast with smoothing. Similar to norm but includes uniform filtering of the contrast field for smoother results.

  • size=[7, 7]: Kernel size [height, width]
  • max_gain=1.0: Maximum allowed gain

lmax
Local Maximum

Morphological dilation that replaces each pixel with the maximum value in its neighbourhood. Useful for enhancing bright features (particles) and filling small gaps.

  • size=[7, 7]: Kernel size [height, width]

invert
Invert

Inverts image intensities: output = offset - input. Use when you have dark particles on a light background (e.g., shadowgraph, backlit imaging).

  • offset=255: Value to subtract from (typically max intensity)

sbg
Subtract Background

Subtracts a reference background image from all frames and clips at zero. Requires a pre-captured background image without particles.

  • bg=null: Path to background image file

levelize
Levelize

Divides by a white reference image to correct uneven illumination. The white reference should capture your illumination pattern without particles.

  • white=null: Path to white reference image

transpose
Transpose

Swaps height and width dimensions of the image. Use when your camera orientation doesn't match expected coordinate system.
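
These spatial filters map closely onto standard image-processing primitives. The sketch below shows rough per-frame equivalents using NumPy and SciPy; these are assumptions about the underlying operations (including the interpretation of max_gain), not the exact PIVTools implementations.

import numpy as np
from scipy import ndimage

def gaussian(img, sigma=1.0):
    return ndimage.gaussian_filter(img.astype(np.float32), sigma=sigma)

def median(img, size=(5, 5)):
    return ndimage.median_filter(img, size=size)

def clip_auto(img, n=2.0):
    centre, spread = np.median(img), img.std()
    return np.clip(img, centre - n * spread, centre + n * spread)

def norm(img, size=(7, 7), max_gain=1.0):
    img = img.astype(np.float32)
    local_min = ndimage.minimum_filter(img, size=size)
    local_max = ndimage.maximum_filter(img, size=size)
    # Subtract the sliding minimum and divide by the local range, capping the gain.
    gain = np.minimum(1.0 / np.maximum(local_max - local_min, 1e-6), max_gain)
    return (img - local_min) * gain

def lmax(img, size=(7, 7)):
    return ndimage.maximum_filter(img, size=size)   # morphological dilation

def invert(img, offset=255):
    return offset - img.astype(np.float32)          # dark particles -> bright particles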

Download & Export

Both raw and processed images can be downloaded directly from the viewer. Use these exports for documentation, publication figures, or further analysis in external tools.

Download Options

Raw Image Downloads

Download the original, unprocessed image data for the currently displayed frame.

  • Download Raw A: Exports Frame A of the current pair
  • Download Raw B: Exports Frame B of the current pair
  • Format matches current viewer setting (JPEG or PNG)

Processed Image Downloads

Download the filtered result after your filter stack has been applied.

  • Download Processed A: Exports filtered Frame A
  • Download Processed B: Exports filtered Frame B
  • Only available after running Test Filters

Complete YAML Reference

All filter configurations from the GUI are automatically saved to config.yaml. Here's a complete reference of all preprocessing parameters and their relationships.

# config.yaml - Complete preprocessing configuration

filters:
  # TEMPORAL FILTERS (require batch processing)
  # These analyze multiple frames together
  - type: time            # No parameters - subtracts local minimum
  - type: pod             # No parameters - automatic mode detection

  # SPATIAL FILTERS (applied per-frame)
  # Order matters - applied sequentially
  - type: gaussian
    sigma: 1.0            # float: Gaussian std dev (pixels)
  - type: median
    size: [5, 5]          # [int, int]: Kernel size [height, width]
  - type: clip
    n: 2.0                # float: Std devs for auto threshold
    # threshold: [lo, hi] # Alternative: explicit [min, max] values
  - type: norm
    size: [7, 7]          # [int, int]: Kernel size
    max_gain: 1.0         # float: Maximum normalisation gain
  - type: maxnorm
    size: [7, 7]
    max_gain: 1.0
  - type: lmax
    size: [7, 7]          # Morphological dilation kernel
  - type: invert
    offset: 255           # int: Value to subtract from
  - type: sbg
    bg: null              # string: Path to background image
  - type: levelize
    white: null           # string: Path to white reference
  - type: transpose       # No parameters - swaps H and W

# BATCH SETTINGS (for temporal filters)
batches:
  size: 30                # Number of frames per batch

# Note: Kernel sizes are automatically adjusted to be odd
# (e.g., [6, 6] becomes [7, 7]) for proper centering
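
If you need to read this file outside the GUI, it parses with any standard YAML library. The sketch below (PyYAML, illustrative only, not the PIVTools parser) loads the filters section and mimics the documented odd-kernel-size adjustment:

import yaml  # PyYAML

# Illustrative loader, not the PIVTools parser.
with open("config.yaml") as fh:
    cfg = yaml.safe_load(fh)

for f in cfg.get("filters", []):
    if "size" in f:
        # Mimic the documented rule: even kernel sizes become the next odd value.
        f["size"] = [s if s % 2 else s + 1 for s in f["size"]]

print(cfg["filters"])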

Parameter Quick Reference

Filter      Type      Parameter   Data Type        Default
time        Temporal  (none)      -                -
pod         Temporal  (none)      -                -
gaussian    Spatial   sigma       float            1.0
median      Spatial   size        [int, int]       [5, 5]
clip        Spatial   n           float            2.0
clip        Spatial   threshold   [float, float]   null
norm        Spatial   size        [int, int]       [7, 7]
norm        Spatial   max_gain    float            1.0
maxnorm     Spatial   size        [int, int]       [7, 7]
maxnorm     Spatial   max_gain    float            1.0
lmax        Spatial   size        [int, int]       [7, 7]
invert      Spatial   offset      int              255
sbg         Spatial   bg          string           null
levelize    Spatial   white       string           null
transpose   Spatial   (none)      -                -

Ready to Configure PIV Processing?

With your filter stack configured and verified across multiple frames, you're ready to configure cross-correlation parameters for vector field computation.

Configure Processing →