Measurements

Measurement Session

The MeasurementSession class in PyTestLab provides a high-level, context-managed interface for orchestrating complex measurement workflows. It is designed to coordinate multiple instruments, manage experiment metadata, and ensure reproducibility and traceability of your measurements.


Overview

A MeasurementSession encapsulates:

  • The set of instruments involved in a measurement.
  • Experiment metadata (operator, DUT, environmental conditions, etc.).
  • The sequence of measurement steps and their results.
  • Automatic logging and database integration.

This abstraction is ideal for automating multi-instrument experiments, batch measurements, or compliance/audit scenarios.


API Reference

pytestlab.measurements.MeasurementSession(name=None, description='', tz='UTC', *, bench=None)

Bases: AbstractContextManager

Core builder for measurement sessions: it registers instruments, sweep parameters, measurement functions, and background tasks, then executes them via run(), collecting the results into an Experiment.

Source code in pytestlab/measurements/session.py
def __init__(
    self,
    name: str | None = None,
    description: str = "",
    tz: str = "UTC",
    *,
    bench: Bench | None = None,
) -> None:
    self.name = name or "Untitled"
    self.description = description
    self.tz = tz
    self.created_at = datetime.now().astimezone().isoformat()
    self._parameters: dict[str, _Parameter] = {}
    self._instruments: dict[str, _InstrumentRecord] = {}
    self._meas_funcs: list[tuple[str, T_MeasFunc]] = []
    self._tasks: list[tuple[str, T_TaskFunc]] = []
    self._data_rows: list[dict[str, Any]] = []
    self._experiment: Experiment | None = None
    self._has_run = False
    self._bench = bench

    # Inherit experiment data from bench if available
    if bench is not None and bench.experiment is not None:
        # Assign bench experiment properties
        self._experiment = bench.experiment
        self.name = bench.experiment.name
        self.description = bench.experiment.description

    # Set up instruments
    if self._bench:
        # Print debug info
        print(f"DEBUG: Setting up {len(self._bench.instruments)} instruments from bench")
        for alias, inst in self._bench.instruments.items():
            self._instruments[alias] = _InstrumentRecord(
                alias=alias,
                resource=f"bench:{alias}",
                instance=inst,
                auto_close=False,
            )
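
A minimal construction sketch using only the documented arguments (the string values are illustrative):

from pytestlab.measurements import MeasurementSession

# Standalone session; instruments are registered later via session.instrument()
session = MeasurementSession(
    name="Diode IV sweep",
    description="Forward-bias characterization",
    tz="Europe/Zurich",
)
print(session.name, session.created_at)

When a Bench with an attached experiment is passed via bench=..., the session inherits the experiment's name and description and registers the bench instruments under their bench aliases.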

Attributes

  • created_at = datetime.now().astimezone().isoformat() (instance attribute)
  • data (property)
  • description = description (instance attribute)
  • name = name or 'Untitled' (instance attribute)
  • tz = tz (instance attribute)

Functions

__enter__()

Synchronous context manager entry.

Source code in pytestlab/measurements/session.py
def __enter__(self) -> MeasurementSession:  # noqa: D401
    """Synchronous context manager entry."""
    return self

__exit__(exc_type, exc, tb)

Synchronous context manager exit.

Source code in pytestlab/measurements/session.py
def __exit__(self, exc_type, exc, tb) -> Literal[False]:  # noqa: D401
    """Synchronous context manager exit."""
    # Directly call the synchronous disconnect method
    try:
        self._disconnect_all_instruments()
    except Exception:  # noqa: BLE001
        pass  # Keep original error handling behavior
    return False

acquire(func=None, /, *, name=None)

Register a measurement (acquisition) function, either by calling directly or as a decorator; name overrides the function's __name__, and duplicate names raise ValueError.

Source code in pytestlab/measurements/session.py
def acquire(self, func: T_MeasFunc | None = None, /, *, name: str | None = None):
    if func is None:  # decorator usage
        return lambda f: self.acquire(f, name=name)

    reg_name = name or func.__name__
    if any(n == reg_name for n, _ in self._meas_funcs):
        raise ValueError(f"Measurement '{reg_name}' already registered.")
    self._meas_funcs.append((reg_name, func))
    return func
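
A usage sketch (assuming a session with a "dmm" instrument and a "set_voltage" parameter already registered, and the convention from the full example below that measurement functions receive instruments and current parameter values by name):

# Bare decorator: registered under the function's __name__
@session.acquire
def output_voltage(dmm, set_voltage):
    return {"vout": dmm.measure_voltage_dc()}

# Explicit name; registering a duplicate name raises ValueError
@session.acquire(name="output_voltage_settled")
def settled_reading(dmm, set_voltage):
    return {"vout_settled": dmm.measure_voltage_dc()}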

instrument(alias, config_key, /, **kw)

Register an instrument under alias. With a bench attached, the alias must refer to an instrument defined in the bench configuration; otherwise the instrument is created from config_key via AutoInstrument.from_config(config_key, **kw).

Source code in pytestlab/measurements/session.py
def instrument(self, alias: str, config_key: str, /, **kw) -> Any:
    if alias in self._instruments:
        record = self._instruments[alias]
        if not record.resource.startswith("bench:"):
            raise ValueError(f"Instrument alias '{alias}' already in use.")
        return record.instance
    if self._bench:
        raise ValueError(
            f"Instrument '{alias}' not found on the bench. "
            "When using a bench, all instruments must be defined in the bench configuration."
        )
    from ..instruments import AutoInstrument as _AutoInstrument

    inst = _AutoInstrument.from_config(config_key, **kw)
    self._instruments[alias] = _InstrumentRecord(alias, config_key, inst)
    return inst
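
When no bench is attached, the alias is bound to a new instrument built from the given profile key, and extra keyword arguments (for example simulate=True) are forwarded to AutoInstrument.from_config. A short, self-contained sketch:

from pytestlab.measurements import MeasurementSession

with MeasurementSession(name="instrument demo") as session:
    # Created from the profile key and stored under the alias "dmm"
    dmm = session.instrument("dmm", "keysight/EDU34450A", simulate=True)

    # Reusing the alias raises ValueError (unless it refers to a bench instrument):
    # session.instrument("dmm", "keysight/EDU34450A")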

parameter(name, values, /, *, unit=None, notes='')

Define a sweep parameter. values may be a sequence (list, tuple, or NumPy array), a callable returning an iterable, or a StepSpec created with the step helpers.

Source code in pytestlab/measurements/session.py
def parameter(
    self, name: str, values: T_ParamIterable, /, *, unit: str | None = None, notes: str = ""
) -> None:
    if name in self._parameters:
        raise ValueError(f"Parameter '{name}' already exists.")
    if isinstance(values, StepSpec):
        resolved = values.values()
    elif callable(values) and not isinstance(values, list | tuple | np.ndarray):
        resolved = list(values())
    else:
        resolved = list(values)
    self._parameters[name] = _Parameter(name, resolved, unit, notes)
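
The three accepted value forms, shown as a sketch (parameter names and ranges are illustrative):

import numpy as np
from pytestlab.measurements import MeasurementSession, step

with MeasurementSession(name="sweep shapes") as session:
    # Plain sequence: list, tuple, or NumPy array
    session.parameter("bias", np.linspace(0.0, 1.0, 11), unit="V")

    # Callable returning an iterable; resolved when the parameter is registered
    session.parameter("offset", lambda: (0.1 * i for i in range(5)), unit="V")

    # StepSpec from the step helpers (see "Step Helpers for Parameters" below)
    session.parameter("freq", step.log(start=1e3, stop=1e6, count=10), unit="Hz")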

plot(spec=None, **kwargs)

Plot the current session data (if any). Typically called after run().

Parameters:

  • spec (PlotSpec | None, default: None): Optional PlotSpec. If not provided, one is created from kwargs.
  • **kwargs: Fields for PlotSpec (kind, x, y, title, xlabel, ylabel, legend, grid).

Returns:

  A matplotlib Figure object.

Source code in pytestlab/measurements/session.py
def plot(self, spec: PlotSpec | None = None, **kwargs):
    """
    Plot the current session data (if any). Typically called after run().

    Args:
        spec: Optional PlotSpec. If not provided, one is created from kwargs.
        **kwargs: Fields for PlotSpec (kind, x, y, title, xlabel, ylabel, legend, grid).

    Returns:
        A matplotlib Figure object.
    """
    from ..plotting import PlotSpec  # local import to keep plotting optional
    from ..plotting import plot_dataframe  # local import to keep plotting optional

    df = self.data
    if df.is_empty():
        raise ValueError(
            "Session has no data yet. Call run() first or ensure measurements produced rows."
        )

    spec_to_use = spec or (PlotSpec(**kwargs) if kwargs else PlotSpec())
    return plot_dataframe(df, spec_to_use)
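
For example, after run() has produced rows, a plot can be requested purely via keyword fields; the column names below are hypothetical and must match columns in session.data:

fig = session.plot(
    x="set_voltage",
    y="dmm_voltage",
    title="PSU output vs. setpoint",
    xlabel="Set voltage (V)",
    ylabel="Measured voltage (V)",
    grid=True,
)
fig.savefig("psu_sweep.png")  # fig is a standard matplotlib Figure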

run(duration=None, interval=0.1, show_progress=True)

Execute the measurement session.

If background tasks have been registered with @session.task, this will run in parallel mode. Otherwise, it will perform a sequential sweep over the defined parameters.

Parameters:

  • duration (float | None, default: None): Total time in seconds to run (only for parallel mode).
  • interval (float, default: 0.1): Time in seconds between acquisitions (only for parallel mode).
  • show_progress (bool, default: True): Whether to display a progress bar.

Source code in pytestlab/measurements/session.py
def run(
    self,
    duration: float | None = None,
    interval: float = 0.1,
    show_progress: bool = True,
) -> Experiment:
    """Execute the measurement session.

    If background tasks have been registered with @session.task, this will run in
    parallel mode. Otherwise, it will perform a sequential sweep over the defined
    parameters.

    Args:
        duration: Total time in seconds to run (only for parallel mode).
        interval: Time in seconds between acquisitions (only for parallel mode).
        show_progress: Whether to display a progress bar.
    """
    if self._tasks:
        return self._run_parallel(duration, interval, show_progress)
    else:
        return self._run_sweep(show_progress)
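
A sketch of both modes, assuming a session that already has parameters and acquire functions registered:

# Sequential sweep over all registered parameter combinations
experiment = session.run(show_progress=False)
print(session.data)  # rows collected during the sweep

# Parallel mode is selected automatically when @session.task functions exist;
# here acquisitions would be taken every 0.5 s for 10 s:
# experiment = session.run(duration=10.0, interval=0.5)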

task(func=None, /, *, name=None)

Decorator to register a function as a background task for parallel execution.

Source code in pytestlab/measurements/session.py
def task(self, func: T_TaskFunc | None = None, /, *, name: str | None = None):
    """Decorator to register a function as a background task for parallel execution."""
    if func is None:  # decorator usage
        return lambda f: self.task(f, name=name)

    if not callable(func):
        raise TypeError("Only callable functions can be registered as tasks.")
    reg_name = name or func.__name__
    self._tasks.append((reg_name, func))
    return func
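
A hedged sketch of a background stimulus task; the signature shown (the task receiving a registered instrument by name) is an assumption that mirrors the measurement-function convention used elsewhere on this page:

import time

@session.task
def ramp_supply(psu):
    # Assumed signature: tasks receive registered instruments by name.
    # Slowly step the supply while acquire functions sample in parallel.
    for v in (1.0, 2.0, 3.0):
        psu.channel(1).set(voltage=v, current_limit=0.5).on()
        time.sleep(1.0)

# With at least one task registered, run() switches to parallel mode:
# session.run(duration=10.0, interval=0.2)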

Example Usage

from pytestlab.measurements import MeasurementSession

def main():
    # Start a measurement session
    with MeasurementSession(
        name="Power Supply Test",
        description="Sweep the PSU output and read it back with the DMM.",
    ) as session:
        # Register instruments (simulated for this example)
        psu = session.instrument("psu", "keysight/EDU36311A", simulate=True)
        dmm = session.instrument("dmm", "keysight/EDU34450A", simulate=True)

        # Define the sweep parameter
        session.parameter("set_voltage", [1.0, 2.0, 3.3], unit="V")

        # Measurement function: called for each parameter point with the
        # registered instruments and the current parameter values
        @session.acquire
        def measure_output(psu, dmm, set_voltage):
            psu.channel(1).set(voltage=set_voltage, current_limit=0.5).on()
            return {"dmm_voltage": dmm.measure_voltage_dc()}

        # Execute the sweep; results are collected as rows in session.data
        session.run()
        print(session.data)

    # The session disconnects its instruments when the context exits

main()

Key Features

  • Context Management: Instruments opened by the session are disconnected automatically when the with block exits.
  • Metadata Tracking: Each session carries a name, description, timezone, and creation timestamp for traceability.
  • Result Recording: Measurement results are collected row by row during run() and exposed as a DataFrame via session.data (see the sketch below).
  • Integration: Works seamlessly with PyTestLab's bench, experiment, and database modules.
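
For instance, after run() the collected rows can be inspected or exported; this assumes session.data returns a Polars DataFrame, as the is_empty() call in plot() suggests:

df = session.data
print(df.head())
df.write_csv("power_supply_test.csv")  # export for later analysis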

Step Helpers for Parameters

PyTestLab exposes pytestlab.measurements.step helpers so you can succinctly define logarithmic, exponential, geometric, or fully custom sequences for session.parameter(...):

from pytestlab.measurements import MeasurementSession, step

with MeasurementSession() as session:
    session.parameter("freq", step.log(start=1e3, stop=1e6, count=100))
    session.parameter("gain", step.exp(exponent_start=-3, exponent_stop=2, count=25))
    session.parameter("impedance", step.points([1+1j, 1-1j, -1+1j]))

Each helper returns a StepSpec whose values are resolved when the parameter is registered, keeping scripts tidy while still supporting exotic sweep shapes.


For more advanced usage, see the Experiments & Sweeps API and the 10-Minute Tour.