Structured concurrency, cancel-correct channels, four-valued outcomes, and deterministic testing — compiled to WebAssembly and running live right here.
Most WASM libraries discover version incompatibility when something crashes in production. Before any runtime operation, Asupersync's WASM module and JavaScript host perform a version handshake with a 64-bit fingerprint. Incompatible consumers are rejected at the first API call — before any state is mutated. No other browser runtime has built-in ABI version negotiation at the WASM boundary.
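The handshake described above can be sketched in plain JavaScript. This is an illustrative sketch only: the export name abiFingerprint and the constant EXPECTED_FINGERPRINT are assumptions, not Asupersync's actual API. (A WASM i64 return surfaces in JavaScript as a BigInt, which is why the fingerprint is compared as one.)

```javascript
// Hypothetical 64-bit fingerprint baked into the JS host at build time.
const EXPECTED_FINGERPRINT = 0x1a2b3c4d5e6f7081n;

function handshake(wasmExports) {
  // Ask the module for its fingerprint before any other runtime call.
  const actual = wasmExports.abiFingerprint();
  if (actual !== EXPECTED_FINGERPRINT) {
    // Reject before any state is mutated.
    throw new Error(
      `ABI mismatch: host expects 0x${EXPECTED_FINGERPRINT.toString(16)}, ` +
      `module reports 0x${actual.toString(16)}`
    );
  }
  return wasmExports; // safe to use from here on
}
```

The key property is ordering: the fingerprint check is the first call made against the module, so an incompatible pairing fails before any handle or slot exists.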
In tokio and async-std, spawn() creates detached tasks with no owner. In Asupersync, every task is owned by a region. Regions form a tree rooted at the runtime. When a region closes, it guarantees quiescence: all children complete, all finalizers run, all obligations resolve. No orphan tasks, ever — by construction, not convention.
Click any region to trigger cascading cancellation and see real WASM handles close.
tokio::spawn() returns a detached JoinHandle. If you drop it, the task becomes an orphan — running indefinitely, holding resources, invisible to the parent scope. There is no "close region and wait for children" primitive. You must manually track every spawned task and .await each JoinHandle. Miss one? Zombie task. Forever.
Tokio tasks return Result&lt;T, JoinError&gt; — two states, no algebra. Asupersync has a four-valued Outcome with a severity lattice: Ok &lt; Err &lt; Cancelled &lt; Panicked. Combinators aggregate by taking the worst severity automatically — making error propagation algebraic instead of manual. No other async runtime has this.
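The lattice described above is small enough to sketch directly. This is a minimal model of the assumed semantics (Ok &lt; Err &lt; Cancelled &lt; Panicked, worst wins), not Asupersync's actual Outcome type:

```javascript
// Severity lattice: higher number = worse outcome.
const SEVERITY = { ok: 0, err: 1, cancelled: 2, panicked: 3 };

// Lattice join: the worse of two outcomes wins.
function combineOutcomes(a, b) {
  return SEVERITY[a] >= SEVERITY[b] ? a : b;
}

// Aggregating a whole batch is a fold over the join, so the rule
// is the same for 2 outcomes or 200:
const worst = ["ok", "err", "cancelled", "ok"].reduce(combineOutcomes);
// worst === "cancelled"
```

Because combineOutcomes is associative and commutative, the aggregate is independent of completion order, which is what makes the propagation algebraic rather than a hand-written scan.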
Cancellation flows through a multi-phase protocol: Running → Requested → Draining → Finalizing → Completed. Budget is consumed during drain. Compare with tokio's approach below: runtime dropped, futures abandoned mid-execution, resources leaked.
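The phases above can be modeled as a small state machine. The phase names follow the text; the budget bookkeeping shown here is an assumption for illustration, not Asupersync's real accounting:

```javascript
// Phase order: running → requested → draining → finalizing → completed.
class CancelState {
  constructor(drainBudget) {
    this.phase = "running";
    this.budget = drainBudget; // consumed during the drain phase
  }
  requestCancel() {
    if (this.phase === "running") this.phase = "requested";
  }
  // Each drain step does bounded cleanup work and pays for it from
  // the budget; exhausting the budget forces finalization.
  drainStep(cost) {
    if (this.phase === "requested") this.phase = "draining";
    if (this.phase !== "draining") return;
    this.budget -= cost;
    if (this.budget <= 0) this.phase = "finalizing";
  }
  finalize() {
    this.phase = "completed";
  }
}
```

The contrast with dropping a future mid-poll is that every phase transition here is observable and bounded: cleanup cannot run forever, and it cannot be skipped.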
All tasks executing normally.
Tasks running...
In tokio, cleanup is best-effort — there is no "budget" concept, no compositional algebra, no guarantee that nested timeouts compose correctly. Asupersync's cleanup budgets compose as a semiring: combine(b1, b2) takes componentwise min for quotas/deadlines and max for priority. "Who constrains whom?" is algebraic, not ad-hoc — sufficient conditions, not hopes.
Browser APIs and most runtimes use opaque object references with no protection against use-after-free. If an ID is recycled, old references silently alias new state — the classic ABA problem. Asupersync's handles carry a generation counter that increments on every slot recycle. Stale handles are rejected with a precise error message. No other browser-facing WASM runtime has this safety mechanism.
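A generation-checked handle table is simple to sketch. This is a minimal model of the mechanism described above, with illustrative names, not Asupersync's real allocator:

```javascript
// A handle is a (slot, generation) pair. Recycling a slot bumps its
// generation, so every outstanding copy of the old handle goes stale.
class HandleTable {
  constructor() {
    this.slots = []; // { gen, value } per slot
    this.free = [];  // indices available for reuse
  }
  alloc(value) {
    const slot = this.free.length
      ? this.free.pop()
      : this.slots.push({ gen: 0, value: null }) - 1;
    this.slots[slot].value = value;
    return { slot, gen: this.slots[slot].gen };
  }
  get(handle) {
    const entry = this.slots[handle.slot];
    if (!entry || entry.gen !== handle.gen) {
      // Precise rejection instead of silent ABA aliasing.
      throw new Error(
        `stale handle: slot ${handle.slot} gen ${handle.gen}, ` +
        `current gen ${entry ? entry.gen : "none"}`
      );
    }
    return entry.value;
  }
  release(handle) {
    const entry = this.slots[handle.slot];
    if (entry.gen !== handle.gen) throw new Error("double free or stale handle");
    entry.gen += 1; // invalidate all outstanding copies
    entry.value = null;
    this.free.push(handle.slot);
  }
}
```

The generation check turns use-after-free from silent aliasing into an immediate, attributable error at the API boundary.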
In tokio, closing a "scope" means manually tracking every spawned task, connection, and timer, then cleaning each one in every error path. Miss one? Resource leak. In Asupersync, closing a scope cascades through all descendants — tasks are cancelled, child scopes are closed, handles are released. This demo builds a real 4-level scope tree with real WASM handles, then closes a mid-level node to prove zero orphans.
scope.close() — one call, guaranteed.
In tokio, calling .abort() on a JoinHandle kills the task instantly — no acknowledgment, no cleanup confirmation, no protocol. The handle may or may not be safe to reuse. In Asupersync, task_cancel() only requests cancellation. The task stays pinned. You must call task_join() to acknowledge the outcome and release the handle. This is a real protocol enforced by the runtime.
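The request/acknowledge discipline can be modeled as a tiny state machine. This sketch only illustrates the enforced ordering; it is not Asupersync's implementation:

```javascript
// States: running → cancel_requested → joined.
// cancel() never releases anything; only join() does.
class TaskHandle {
  constructor() {
    this.state = "running";
  }
  cancel() {
    // Request only. The handle stays pinned; the slot cannot be reused.
    if (this.state === "running") this.state = "cancel_requested";
  }
  join() {
    if (this.state === "joined") throw new Error("handle already released");
    const outcome = this.state === "cancel_requested" ? "cancelled" : "ok";
    this.state = "joined"; // only now is the handle released
    return outcome;
  }
}
```

Separating "request" from "acknowledge" is what makes the outcome observable: the caller always learns whether cancellation actually happened before the slot can be recycled.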
This isn't a simulation. Type JavaScript below and it executes against the real WASM runtime running in your browser. Every handle, every outcome, every error is real. Try creating scopes, spawning tasks, cancelling them — experiment freely.
Most runtimes are never tested with random create/cancel/close sequences — the kind of chaos that reveals use-after-free, double-free, and handle leak bugs. This demo creates hundreds of real WASM handles at high speed, cancels them randomly, closes scopes with live children, and verifies: zero leaked handles, zero orphaned tasks, every obligation resolved. The structural guarantees that make this possible simply don't exist in tokio or async-std.
In vanilla JavaScript, cancelling an in-flight fetch() requires manually creating an AbortController, threading its signal through the call, and remembering to call abort() in every error path. In Asupersync, the fetch handle is scoped to a region. Close the region and every in-flight request under it is automatically cancelled. No manual wiring. No forgotten cleanup paths. Structural, not conventional.
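The region-scoped behavior can be approximated with the standard AbortController API. The Scope class here is an illustrative sketch of the ownership idea, not Asupersync's real region type:

```javascript
// A scope that owns one AbortController per request registered under it.
// close() fans out to everything in flight, with no per-call-site wiring.
class Scope {
  constructor() {
    this.controllers = new Set();
  }
  signal() {
    const c = new AbortController();
    this.controllers.add(c);
    return c.signal;
  }
  close() {
    for (const c of this.controllers) c.abort(); // cancel everything in flight
    this.controllers.clear();
  }
}

// Usage: fetch(url, { signal: scope.signal() }) ... later, scope.close().
```

The difference from manual wiring is where the obligation lives: each call site only asks the scope for a signal, and the single close() call is the only cleanup path.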
In tokio, when you join_all multiple tasks, you get back a Vec&lt;Result&lt;T, JoinError&gt;&gt; and must manually scan for errors, distinguish cancellations from panics, and decide which failure is "worst." Asupersync's four-valued Outcome forms a severity lattice: Ok &lt; Err &lt; Cancelled &lt; Panicked. When combining outcomes, the worst severity wins automatically. This demo spawns 5 real tasks with mixed outcomes and shows the lattice composition in real time.
When a tokio task is cancelled, you get... nothing. The future is dropped. You have no idea who cancelled it, why, from which region, or at what phase. Asupersync's cancellation outcome carries a full attribution chain: the cancel kind (user/timeout/fail_fast/race_lost/shutdown), the phase (requested/draining/finalizing/completed), the originating region, task, timestamp, message, and whether the chain was truncated. This is cancel forensics, not cancel amnesia.
Tokio's internal task IDs are opaque and non-deterministic — re-running the same test produces different IDs, making trace replay impossible. Asupersync exposes deterministic slot allocation with LIFO free-list recycling. The same create/close sequence always produces the same slot assignments — critical for deterministic replay and debugging. This demo creates and closes 40 real WASM scopes and visualizes which slots get reused, how generations climb, and proves the LIFO pattern.
In tokio, cancelling one task group can accidentally leak signals to siblings through shared CancellationTokens, thread-locals, or global state. Asupersync guarantees structural isolation: closing or cancelling one scope has zero effect on sibling scopes. This demo creates 4 real sibling scopes with tasks, closes them one at a time, and probes the living siblings after each close to prove they're completely unaffected.
Tokio's internal task IDs are opaque and non-deterministic — re-running the same test produces different IDs, making trace replay impossible. Asupersync's handle allocator uses a deterministic LIFO free-list: when slots are freed, the most recently freed slot is reused first. This demo creates 4 scopes, frees them in order, then creates 4 new scopes and verifies they reuse slots in exact reverse order. This deterministic pattern means you can record a handle allocation trace and replay it identically.
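The LIFO free-list is small enough to sketch whole. This is a minimal model of the allocation pattern described above, not Asupersync's actual allocator:

```javascript
// Deterministic slot allocator: freed slots are reused most-recent-first,
// so the same create/free sequence always yields the same slot numbers.
class SlotAllocator {
  constructor() {
    this.next = 0;  // next never-used slot
    this.free = []; // stack of freed slots (LIFO)
  }
  alloc() {
    return this.free.length ? this.free.pop() : this.next++;
  }
  release(slot) {
    this.free.push(slot);
  }
}

const a = new SlotAllocator();
const first = [a.alloc(), a.alloc(), a.alloc(), a.alloc()]; // [0, 1, 2, 3]
first.forEach((s) => a.release(s));                         // freed in order 0,1,2,3
const second = [a.alloc(), a.alloc(), a.alloc(), a.alloc()];
// LIFO reuse: second === [3, 2, 1, 0]
```

Because the free list is a pure stack with no randomness, recording the sequence of alloc/release calls is enough to replay the exact slot assignments later.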
What happens when a programmer forgets to join a spawned task? In tokio, the task becomes an orphan — it runs indefinitely with no owner, leaking memory and potentially holding locks. In Asupersync, the scope won't let you. Tasks are pinned on spawn and tracked as obligations. When the parent scope closes, it cascade-closes every unjoinable child. The runtime cleans up your mistakes structurally. This demo intentionally "forgets" to join 5 tasks and proves the scope catches all of them.
In vanilla JavaScript, implementing a timeout for a group of concurrent operations requires creating an AbortController, a setTimeout, wiring the signal to every fetch(), handling the AbortError in every .catch(), and remembering to clearTimeout on success. That's 6 coordination points where bugs hide. In Asupersync, you create a scope and close it when the deadline fires. Every task and fetch handle under it is automatically cancelled. One line of cleanup instead of six.
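The manual wiring described above looks like the following, using only standard APIs. Each numbered comment marks one coordination point; someAbortableWork() is a stand-in for fetch(url, { signal }) so the sketch is self-contained:

```javascript
// Stand-in for an abortable async operation such as fetch().
function someAbortableWork(ms, signal) {
  return new Promise((resolve, reject) => {
    const fail = () => {
      const err = new Error("aborted");
      err.name = "AbortError";
      reject(err);
    };
    if (signal.aborted) return fail();
    const t = setTimeout(() => resolve("done"), ms);
    signal.addEventListener("abort", () => {          // (3) wire the signal through
      clearTimeout(t);
      fail();
    });
  });
}

async function allWithTimeout(jobs, ms) {
  const controller = new AbortController();           // (1) create the controller
  const timer = setTimeout(() => controller.abort(), ms); // (2) arm the timeout
  try {
    return await Promise.all(jobs.map((j) => j(controller.signal)));
  } catch (e) {
    if (e.name === "AbortError") return null;         // (4) handle AbortError
    controller.abort();                               // (5) cancel siblings on failure
    throw e;
  } finally {
    clearTimeout(timer);                              // (6) clear the timer on success
  }
}
```

Every numbered point is a place where forgetting a line leaks a timer, a request, or a pending promise; the scope-based version collapses all six into one close() call.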
Browser resources (WebSockets, MessageChannels, streams) have no concept of "ownership." If your component unmounts without explicitly closing them, they become zombies. In Asupersync, resources are scoped to a region with WASM tasks tracking each end. Close the region and everything under it is torn down — ports closed, tasks released, handles freed. The resource cannot outlive its scope. This demo creates a real MessageChannel with WASM-tracked tasks, sends messages, then closes the scope.
Every exhibit above demonstrates a structural guarantee. But the real question is: how much code do YOU have to write? Here's the same scenario — 3 concurrent fetches with a 5-second timeout — implemented in vanilla JavaScript (22 lines, 11 coordination points where bugs hide) vs Asupersync (6 lines, 1 coordination point). The difference isn't incremental — it's a different programming model entirely.
scope.close() handles everything. No signal wiring. No manual cleanup. No error-path bugs. The scope IS the AbortController, the timeout, and the cleanup — all in one structural primitive.
The common objection to structured concurrency is performance: "all that bookkeeping must be slow." This benchmark measures real WASM operations per second — scope create + close cycles including handle allocation, generation tracking, parent registration, and full cleanup. The result: structured concurrency at microsecond latency. These are the same operations that power every exhibit on this page, running at production speed.