Temporal state-window recording and comparison.

Record when keyed state was active, then compare those windows across sources, providers, or pipeline stages. Use it when the latest value is not enough.

How to install

.NET CLI

dotnet add package Spanfold
dotnet add package Spanfold.Testing

NuGet Package Manager

Install-Package Spanfold
Install-Package Spanfold.Testing

Project file

<PackageReference Include="Spanfold" Version="0.1.0" />
<PackageReference Include="Spanfold.Testing" Version="0.1.0" />

Python port (editable install)

cd packages/python
python -m pip install -e ".[dev]"

Local path

python -m pip install -e ./packages/python

requirements.txt

-e ./packages/python

Python tracks the core API surface with idiomatic snake-case names.

Model

Predicates become queryable time ranges.

A temporal window is the period where a predicate stayed true for a key. Keep the lane, segment, and tag context with that range, then compare it without hand-written interval joins.

A window, not a flag

A latest-value table would only show that the device is online at the end. The window keeps the intervening offline span as the thing you can query, measure, and compare.
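As a conceptual sketch (plain Python, not the Spanfold API), here is what "the window, not the flag" means: a keyed boolean signal is folded into (start, end) spans where the predicate stayed true, instead of being collapsed to its latest value.

```python
# Conceptual sketch (plain Python, not the Spanfold API): derive offline
# windows from a boolean signal instead of keeping only the latest value.
def offline_windows(signals):
    """signals: list of (position, is_online) pairs, in order.
    Returns (start, end) spans where the device stayed offline;
    end is None for a window that is still open at the end of input."""
    windows, start = [], None
    for pos, is_online in signals:
        if not is_online and start is None:
            start = pos                       # predicate became true: open a window
        elif is_online and start is not None:
            windows.append((start, pos))      # predicate became false: close it
            start = None
    if start is not None:
        windows.append((start, None))         # still open at end of input
    return windows

signals = [(0, True), (1, False), (2, False), (3, False), (4, False), (5, True)]
print(offline_windows(signals))  # [(1, 5)]
```

A latest-value table over the same input would only say "online"; the fold keeps the offline span from position 1 to 5 as data.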

Overlap, residual, missing

One comparison plan emits three row kinds at once: overlap is agreement between both lanes, residual is target-only, missing is against-only. Coverage, gap, containment, lead/lag, and as-of extend the same rows.
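The interval math behind those three row kinds can be sketched in plain Python (this is a conceptual illustration, not the Spanfold API): a target window is compared against another lane's windows, and every sub-range is tagged as one of the three kinds.

```python
# Conceptual sketch (plain Python, not the Spanfold API): tag sub-ranges of a
# target window as overlap (agreement), residual (target-only), or
# missing (against-only).
def compare(target, against):
    """target: (start, end); against: list of (start, end).
    Returns (kind, start, end) rows."""
    t0, t1 = target
    rows, covered = [], []
    for a0, a1 in against:
        lo, hi = max(t0, a0), min(t1, a1)
        if lo < hi:
            rows.append(("overlap", lo, hi))          # both lanes agree here
            covered.append((lo, hi))
        if a0 < t0:
            rows.append(("missing", a0, min(a1, t0))) # against-only, before target
        if a1 > t1:
            rows.append(("missing", max(a0, t1), a1)) # against-only, after target
    cursor = t0
    for lo, hi in sorted(covered):                    # target ranges no against
        if cursor < lo:                               # window covered: residual
            rows.append(("residual", cursor, lo))
        cursor = max(cursor, hi)
    if cursor < t1:
        rows.append(("residual", cursor, t1))
    return rows

print(compare((10, 20), [(5, 12), (15, 25)]))
```

For a target of (10, 20) compared against (5, 12) and (15, 25), this yields overlap rows (10, 12) and (15, 20), a residual row (12, 15), and missing rows (5, 10) and (20, 25).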

Live vs final evidence

The horizon is the moment the live run asks its question. Rows derived from windows still open at that moment are provisional: they can change, and exports say so.
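Horizon clipping can be sketched the same way (plain Python, not the Spanfold API): windows that are still open at the horizon are clipped to it and flagged provisional, while windows that closed before the horizon are final.

```python
# Conceptual sketch (plain Python, not the Spanfold API): clip windows to a
# live horizon and flag rows that depend on still-open state as provisional.
def clip_to_horizon(windows, horizon):
    """windows: list of (start, end), where end is None if still open.
    Returns (start, end, provisional) rows knowable at the horizon."""
    rows = []
    for start, end in windows:
        if start >= horizon:
            continue                               # not yet knowable at the horizon
        if end is None or end > horizon:
            rows.append((start, horizon, True))    # clipped open window: may change
        else:
            rows.append((start, end, False))       # closed before the horizon: final
    return rows

print(clip_to_horizon([(1, 5), (8, None)], horizon=10))  # [(1, 5, False), (8, 10, True)]
```

The provisional flag is what an export would carry forward, so downstream consumers can tell which rows are settled and which depend on ongoing state.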

Problem

Current state is not enough for temporal debugging.

A current-state table can answer what is true now. It cannot answer when a state became true, which source saw it first, whether another source missed it, or what information was knowable at the decision point.

Keep the temporal record as data in application code. Persist it, compare it, export it, or render it as a debug artifact.

Use cases

When intervals matter more than the latest value.

Monitoring provider comparison

Find where one provider reported an outage that another missed, reported late, or recovered from sooner.

Decision-point audit

Evaluate only the windows and annotations that were knowable at a specific processing position or timestamp.

Live/open-window analysis

Include still-open windows at a live horizon while keeping provisional rows separate from final rows.

Historical pipeline analysis

Compare stages in a pipeline to see where state diverged, lagged, disappeared, or gained extra coverage.

Numeric thresholds

Track periods where readings stayed above, below, or inside a range, for example a temperature series such as 23.4 °C, 23.5 °C, 23.8 °C.
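A numeric threshold is just another predicate, so it produces windows the same way a boolean flag does. A conceptual sketch in plain Python (not the Spanfold API), using the temperature series above:

```python
# Conceptual sketch (plain Python, not the Spanfold API): a threshold predicate
# over numeric readings produces windows exactly like a boolean flag does.
def above_windows(readings, threshold):
    """readings: list of (position, value) pairs, in order.
    Returns (start, end) spans where value stayed above threshold;
    end is None for a span still open at the end of input."""
    windows, start = [], None
    for pos, value in readings:
        if value > threshold and start is None:
            start = pos                       # crossed above: open a window
        elif value <= threshold and start is not None:
            windows.append((start, pos))      # dropped back: close it
            start = None
    if start is not None:
        windows.append((start, None))
    return windows

readings = [(0, 23.4), (1, 23.5), (2, 23.8), (3, 23.6), (4, 23.4)]
print(above_windows(readings, threshold=23.5))  # [(2, 4)]
```

"Below" and "inside a range" variants only change the predicate on `value`; the window fold is identical.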

Primitives

Small API surface, explicit temporal output.

Record windows and intervals: capture opened and closed state windows with key, source, partition, segment, and tag context.
Compare overlap, residual, missing, and coverage: turn lane histories into structured rows instead of hand-written interval joins.
Reason at a point in time: query windows and annotations at a horizon without leaking future knowledge into an audit.
Separate live from final: clip open windows to a horizon and preserve whether rows depend on ongoing state.
Export deterministic artifacts: produce JSON, JSON Lines, Markdown, LLM context JSON, and self-contained HTML debug output from comparison results.

Visual Auditing

Render comparison output as a timeline.

Export a self-contained HTML artifact when a row needs inspection: selected windows, aligned segments, gaps, and provisional live rows.

Code examples

Define the state once, then query or compare recorded history.

Install

dotnet add package Spanfold
dotnet add package Spanfold.Testing

`Spanfold.Testing` is optional. It provides fixtures, snapshots, and assertions for consumer tests.

cd packages/python
python -m pip install -e ".[dev]"

The Python package tracks the C# core API surface with idiomatic snake-case names.

Record windows

using Spanfold;

var pipeline = Spanfold.Spanfold
    .For<DeviceSignal>()
    .RecordWindows()
    .TrackWindow(
        "DeviceOffline",
        key: signal => signal.DeviceId,
        isActive: signal => !signal.IsOnline);

pipeline.Ingest(new DeviceSignal("device-17", false), source: "provider-a");
pipeline.Ingest(new DeviceSignal("device-17", true), source: "provider-a");

public sealed record DeviceSignal(string DeviceId, bool IsOnline);

from dataclasses import dataclass

from spanfold import Spanfold


@dataclass(frozen=True)
class DeviceSignal:
    device_id: str
    is_online: bool


pipeline = (
    Spanfold.for_events()
    .record_windows()
    .track_window(
        "DeviceOffline",
        key=lambda signal: signal.device_id,
        is_active=lambda signal: not signal.is_online,
    )
)

pipeline.ingest(DeviceSignal("device-17", False), source="provider-a")
pipeline.ingest(DeviceSignal("device-17", True), source="provider-a")

Compare recorded windows

var result = pipeline.History
    .Compare("Provider QA")
    .Target("provider-a", selector => selector.Source("provider-a"))
    .Against("provider-b", selector => selector.Source("provider-b"))
    .Within(scope => scope.Window("DeviceOffline"))
    .Using(comparators => comparators
        .Overlap()
        .Residual()
        .Missing()
        .Coverage())
    .Run();

result.ExportDebugHtml("artifacts/provider-qa.html");
result.ExportLlmContext("artifacts/provider-qa.llm.json");

result = (
    pipeline.history.compare("Provider QA")
    .target("provider-a")
    .against("provider-b")
    .within(window_name="DeviceOffline")
    .using("overlap", "residual", "missing", "coverage")
    .run()
)

result.export_debug_html("artifacts/provider-qa.html")
result.export_llm_context("artifacts/provider-qa.llm.json")

Boundaries

What this is not.

Storing timestamps in a database

A database can persist windows, but it will not give you staged comparison plans, selectors, normalization, live finality, diagnostics, or deterministic exports by itself.

Latest-state tracking

Latest-state tables answer what is true now. Window history answers when it was true, who saw it, and whether another lane missed it.

Generic event processing

Stream processors route and aggregate events. This layer stays smaller: it records interpreted state windows and compares their temporal evidence.

Metrics dashboards

Dashboards compress time into counters, rates, and charts. Window comparison keeps individual ranges and emits rows that can be audited.

Event sourcing

Event sourcing preserves facts and rebuilds state. Window comparison analyzes the ranges produced after those facts have been interpreted.