
The Async Runtime
Running in Your Browser

Structured concurrency, cancel-correct channels, four-valued outcomes, and deterministic testing — compiled to WebAssembly and running live right here.

Exhibit 1

ABI Handshake & Version Negotiation

Most WASM libraries discover version incompatibility when something crashes in production. Before any runtime operation, Asupersync's WASM module and JavaScript host perform a version handshake with a 64-bit fingerprint. Incompatible consumers are rejected at the first API call — before any state is mutated. No other browser runtime has built-in ABI version negotiation at the WASM boundary.

JavaScript
// Initialize the WASM module
await init('./packages/browser-core/asupersync_bg.wasm');

// Query the ABI version contract
const version = abiVersion();
console.log(version); // → { major: 1, minor: 0 }

// Verify fingerprint hasn't drifted
const fp = abiFingerprint();
console.log(`Fingerprint: ${fp}`); // → 4558451663113424898n
Live ABI Status
Click Run or Initialize Runtime to see live results
Console Output
Exhibit 2

Structured Concurrency — Region Tree

In tokio and async-std, spawn() creates detached tasks with no owner. In Asupersync, every task is owned by a region. Regions form a tree rooted at the runtime. When a region closes, it guarantees quiescence: all children complete, all finalizers run, all obligations resolve. No orphan tasks, ever — by construction, not convention. Click any region to trigger cascading cancellation and see real WASM handles close.

JavaScript
// Enter a scope from the runtime — creates a child region
const root = rt.enterScope('http-server');
const httpRegion = root.value;

// Nest deeper — regions form a tree
const reqA = httpRegion.enterScope('request-A').value;
const reqB = httpRegion.enterScope('request-B').value;

// Spawn tasks inside regions
const task1 = reqA.spawnTask({ label: 'parse-headers' }).value;
const task2 = reqA.spawnTask({ label: 'read-body' }).value;

// Closing a region waits for all children
reqA.close(); // → quiescence guaranteed
reqB.close();
httpRegion.close();
Interactive Region Tree
Legend: Running · Cancel Requested · Draining · Quiescent
Without structured concurrency (tokio, async-std): tokio::spawn() returns a detached JoinHandle. If you drop it, the task becomes an orphan — running indefinitely, holding resources, invisible to the parent scope. There is no "close region and wait for children" primitive. You must manually track every spawned task and .await each JoinHandle. Miss one? Zombie task. Forever.
Region Events
Exhibit 3

Four-Valued Outcome Algebra

Tokio tasks return Result<T, JoinError> — two states, no algebra. Asupersync has a four-valued Outcome with a severity lattice: Ok < Err < Cancelled < Panicked. Combinators aggregate by taking the worst severity automatically — making error propagation algebraic instead of manual. No other async runtime has this.
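The lattice semantics can be modeled in a few lines of plain JavaScript. This is an illustrative sketch of the documented "worst severity wins" rule, not the runtime's actual implementation; the `combine` helper is hypothetical:

```javascript
// Severity lattice: Ok < Err < Cancelled < Panicked.
// Combinators aggregate by taking the worst severity.
const SEVERITY = { ok: 0, err: 1, cancelled: 2, panicked: 3 };

function combine(a, b) {
  // Whichever outcome sits higher in the lattice wins
  return SEVERITY[a] >= SEVERITY[b] ? a : b;
}

console.log(combine('ok', 'err'));        // → 'err'
console.log(combine('err', 'cancelled')); // → 'cancelled'

// Aggregating many outcomes is just a fold over the lattice
console.log(['ok', 'err', 'ok', 'panicked'].reduce(combine)); // → 'panicked'
```

Because `combine` is associative and commutative, the aggregate outcome of a group of tasks is the same regardless of completion order.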

JavaScript
// Click an outcome card above to see its code
Severity Lattice
Exhibit 4

Cancellation Protocol — Not a Silent Drop

Cancellation flows through a multi-phase protocol: Running → Requested → Draining → Finalizing → Completed. Budget is consumed during drain. Compare with tokio's approach below: runtime dropped, futures abandoned mid-execution, resources leaked.
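The phase ordering can be sketched as a tiny state machine in plain JavaScript. This models only the progression named above; it is not the runtime's real implementation:

```javascript
// Hypothetical model of the cancellation phases:
// Running → Requested → Draining → Finalizing → Completed
const PHASES = ['running', 'requested', 'draining', 'finalizing', 'completed'];

class CancelStateMachine {
  constructor() { this.phase = 'running'; }
  advance() {
    const i = PHASES.indexOf(this.phase);
    if (i < PHASES.length - 1) this.phase = PHASES[i + 1];
    return this.phase;
  }
}

const sm = new CancelStateMachine();
sm.advance(); // 'requested'  — cancellation acknowledged, task still alive
sm.advance(); // 'draining'   — cleanup runs, consuming budget
sm.advance(); // 'finalizing' — finalizers execute
sm.advance(); // 'completed'  — only now is the handle releasable
console.log(sm.phase); // → 'completed'
```

The key contrast with dropping a future: every phase is observable, so a task is never silently abandoned mid-cleanup.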

Cancel State Machine
Shutdown Budget: 100%

All tasks executing normally.

Tokio: Running
DB Connection: active
File Handle: active
Network Socket: active

Tasks running...

Asupersync: Running
DB Connection: active
File Handle: active
Network Socket: active

Tasks running...

JavaScript
// Cancel a task with structured protocol
const task = scope.spawnTask({ label: 'worker' }).value;

// Request cancellation — begins the multi-phase protocol
const result = task.cancel(
  'user',              // kind: user | timeout | fail_fast | race_lost | shutdown
  'User clicked stop'  // optional message for attribution
);

// The cancel outcome carries full attribution
// { outcome: "cancelled", cancellation: {
//     kind: "user",
//     phase: "completed",
//     origin_region: "browser-sdk",
//     message: "User clicked stop"
// }}
Exhibit 5

Budget Algebra — Semiring Composition

In tokio, cleanup is best-effort — there is no "budget" concept, no compositional algebra, no guarantee that nested timeouts compose correctly. Asupersync's cleanup budgets compose as a semiring: combine(b1, b2) takes componentwise min for quotas/deadlines and max for priority. "Who constrains whom?" is algebraic, not ad-hoc — sufficient conditions, not hopes.

JavaScript
// Create budgets with validated bounds
const outer = createBudget({
  pollQuota: 2048, deadlineMs: 30000, priority: 100, cleanupQuota: 512
});
const inner = createBudget({
  pollQuota: 512, deadlineMs: 5000, priority: 200, cleanupQuota: 128
});

// Semiring meet: tighter constraint wins
// combine(outer, inner) →
//   pollQuota:    min(2048, 512)   = 512
//   deadlineMs:   min(30000, 5000) = 5000
//   priority:     max(100, 200)    = 200
//   cleanupQuota: min(512, 128)    = 128
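The combine rule in the comments above can be written out as a plain function. This is a model of the documented min/max semantics, not the library's internal code; `combineBudgets` is a hypothetical name:

```javascript
// Model of the budget semiring meet: componentwise min for
// quotas and deadlines, max for priority — the tighter constraint wins.
function combineBudgets(b1, b2) {
  return {
    pollQuota:    Math.min(b1.pollQuota, b2.pollQuota),
    deadlineMs:   Math.min(b1.deadlineMs, b2.deadlineMs),
    priority:     Math.max(b1.priority, b2.priority),
    cleanupQuota: Math.min(b1.cleanupQuota, b2.cleanupQuota),
  };
}

const outer = { pollQuota: 2048, deadlineMs: 30000, priority: 100, cleanupQuota: 512 };
const inner = { pollQuota: 512, deadlineMs: 5000, priority: 200, cleanupQuota: 128 };
console.log(combineBudgets(outer, inner));
// → { pollQuota: 512, deadlineMs: 5000, priority: 200, cleanupQuota: 128 }
```

Because each component is a min or max, `combineBudgets` is associative, commutative, and idempotent, which is exactly what makes nested budget composition order-independent.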
Budget Composition Visualizer
Click Run to see budget algebra in action
Exhibit 6

Generation Counters — Use-After-Free Prevention

Browser APIs and most runtimes use opaque object references with no protection against use-after-free. If an ID is recycled, old references silently alias new state — the classic ABA problem. Asupersync's handles carry a generation counter that increments on every slot recycle. Stale handles are rejected with a precise error message. No other browser-facing WASM runtime has this safety mechanism.

JavaScript
// Create and close a scope — slot is recycled
const s1 = rt.enterScope('first').value;
// s1 = { slot: N, generation: 0 }
s1.close(); // slot N freed, generation → 1

// New scope reuses the same slot
const s2 = rt.enterScope('second').value;
// s2 = { slot: N, generation: 1 }

// Try using the old handle — REJECTED
const staleResult = scopeClose(s1);
// → { outcome: "err", failure: {
//     code: "invalid_handle",
//     message: "StaleGeneration ..."
// }}
Handle Slot Timeline
Click Run to see real handle recycling with generation counters
Without generation counters (most runtimes): When a handle/ID is recycled, old references silently alias new state. This is the classic ABA problem — you think you're talking to Task A, but the slot was recycled and you're now corrupting Task B. In production, this manifests as impossible-to-reproduce state corruption. Asupersync makes this structurally impossible.
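The mechanism is easy to model in plain JavaScript. This `SlotTable` is an illustrative sketch of generation-counted handles, not the runtime's actual allocator:

```javascript
// Model of generation-counted handles: each slot carries a generation
// that is bumped on free, so stale handles are detectable.
class SlotTable {
  constructor() { this.generations = []; this.free = []; }
  alloc() {
    const slot = this.free.length
      ? this.free.pop()                    // reuse a freed slot
      : this.generations.push(0) - 1;      // or grow the table
    return { slot, generation: this.generations[slot] };
  }
  release(handle) {
    if (this.generations[handle.slot] !== handle.generation) {
      return { outcome: 'err', code: 'invalid_handle' }; // stale generation
    }
    this.generations[handle.slot] += 1;    // recycle: bump generation
    this.free.push(handle.slot);
    return { outcome: 'ok' };
  }
}

const table = new SlotTable();
const s1 = table.alloc();       // { slot: 0, generation: 0 }
table.release(s1);              // slot 0 freed, generation → 1
const s2 = table.alloc();       // { slot: 0, generation: 1 } — same slot reused
console.log(table.release(s1)); // → { outcome: 'err', code: 'invalid_handle' }
```

The stale `s1` can never alias `s2`: its recorded generation no longer matches the slot, so the ABA case is rejected instead of silently corrupting state.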
Console Output
Exhibit 7

Deep Cascade Close — Quiescence by Construction

In tokio, closing a "scope" means manually tracking every spawned task, connection, and timer, then cleaning each one in every error path. Miss one? Resource leak. In Asupersync, closing a scope cascades through all descendants — tasks are cancelled, child scopes are closed, handles are released. This demo builds a real 4-level scope tree with real WASM handles, then closes a mid-level node to prove zero orphans.
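Cascading close can be sketched as a recursive walk over a scope tree. This is a plain-JS model of the behavior described above, not the WASM runtime's code:

```javascript
// Model of cascading scope close: closing a node closes every
// descendant depth-first, while sibling subtrees are untouched.
class Scope {
  constructor(label, parent = null) {
    this.label = label;
    this.children = [];
    this.open = true;
    if (parent) parent.children.push(this);
  }
  close() {
    for (const child of this.children) child.close(); // cascade first
    this.open = false;                                // then close self
  }
}

const root   = new Scope('runtime');
const server = new Scope('http-server', root);
const reqA   = new Scope('request-A', server);
const reqB   = new Scope('request-B', server);
const parse  = new Scope('parse-headers', reqA);

reqA.close();            // cascades: parse-headers closes too
console.log(parse.open); // → false (no orphan survives)
console.log(reqB.open);  // → true  (sibling unaffected)
```

Because the close is structural, there is no list of children for the programmer to forget: reaching every descendant is a property of the tree, not of cleanup code.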

Live Scope Tree (real WASM handles)
Click Build Tree to create a real 4-level scope hierarchy
Without cascade close (tokio, Node.js): You must manually track every spawned task, every open connection, every pending timer, and ensure each one is cleaned up in every error path. Close the HTTP handler but forget the database connections it spawned? Leaked. Close the connection pool but forget a query mid-flight? Leaked. With N components, you have O(N²) cleanup coordination points. Asupersync: scope.close() — one call, guaranteed.
Cascade Events
Exhibit 8

Cancel is Not Enough — You Must Join

In tokio, calling .abort() on a JoinHandle kills the task instantly — no acknowledgment, no cleanup confirmation, no protocol. The handle may or may not be safe to reuse. In Asupersync, task_cancel() only requests cancellation. The task stays pinned. You must call task_join() to acknowledge the outcome and release the handle. This is a real protocol enforced by the runtime.

JavaScript
// Spawn a task — handle is PINNED
const t = scope.spawnTask({ label: 'worker' }).value;

// Cancel transitions to Cancelling — but task is still alive!
t.cancel('user', 'stop requested');

// Double-cancel fails — already in Cancelling state
const bad = t.cancel('timeout');
// → { outcome: "err", code: "invalid_handle" }

// MUST join to release the handle and acknowledge
t.join(Outcome.cancelled({
  kind: 'user',
  phase: 'completed',
  origin_region: 'demo',
  message: 'stop requested'
}));
// → Handle unpinned, slot released. Clean.
Task State Machine (Live)
Click Run to see the cancel→join protocol enforced by real WASM
Protocol Events
Exhibit 9 — Interactive

Live Runtime REPL

This isn't a simulation. Type JavaScript below and it executes against the real WASM runtime running in your browser. Every handle, every outcome, every error is real. Try creating scopes, spawning tasks, cancelling them — experiment freely.

REPL Input
REPL Output
Ready. Type code and click Run (or press Ctrl+Enter).
Exhibit 10 — Stress Test

Chaos Mode — Prove Zero Leaks Under Pressure

Most runtimes are never tested with random create/cancel/close sequences — the kind of chaos that reveals use-after-free, double-free, and handle leak bugs. This demo creates hundreds of real WASM handles at high speed, cancels them randomly, closes scopes with live children, and verifies: zero leaked handles, zero orphaned tasks, every obligation resolved. The structural guarantees that make this possible simply don't exist in tokio or async-std.

Runtime Stress Test
Operations: 0 · Peak Handles: 0 · Max Generation: 0 · Errors Caught: 0 · Leaked: 0
Ready 0%
Chaos Log
Exhibit 11 — Unique to Asupersync

Scope-Bounded HTTP — Close the Region, Kill the Request

In vanilla JavaScript, cancelling an in-flight fetch() requires manually creating an AbortController, threading its signal through the call, and remembering to call abort() in every error path. In Asupersync, the fetch handle is scoped to a region. Close the region and every in-flight request under it is automatically cancelled. No manual wiring. No forgotten cleanup paths. Structural, not conventional.

Region-Scoped HTTP Lifecycle
Start 3 parallel requests, then close their parent region to cancel them all at once
Scope Events
Exhibit 12 — Unique to Asupersync

Outcome Severity Propagation — Algebraic Error Handling

In tokio, when you join_all multiple tasks, you get back a Vec<Result> and must manually scan for errors, distinguish cancellations from panics, and decide which failure is "worst." Asupersync's four-valued Outcome forms a severity lattice: Ok < Err < Cancelled < Panicked. When combining outcomes, the worst severity wins automatically. This demo spawns 5 real tasks with mixed outcomes and shows the lattice composition in real-time.

Severity Lattice Composition
Click Run to spawn 5 tasks with different outcomes and watch severity propagate
Exhibit 13 — Unique to Asupersync

Cancellation Forensics — Full Attribution Chain

When a tokio task is cancelled, you get... nothing. The future is dropped. You have no idea who cancelled it, why, from which region, or at what phase. Asupersync's cancellation outcome carries a full attribution chain: the cancel kind (user/timeout/fail_fast/race_lost/shutdown), the phase (requested/draining/finalizing/completed), the originating region, task, timestamp, message, and whether the chain was truncated. This is cancel forensics, not cancel amnesia.

Cancel Attribution Inspector
Click Run to cancel tasks with different reasons and inspect the full attribution
Exhibit 14 — Unique to Asupersync

Handle Slot Recycling — LIFO Determinism Under Pressure

Tokio's internal task IDs are opaque and non-deterministic — re-running the same test produces different IDs, making trace replay impossible. Asupersync exposes deterministic slot allocation with LIFO free-list recycling. The same create/close sequence always produces the same slot assignments — critical for deterministic replay and debugging. This demo creates and closes 40 real WASM scopes and visualizes which slots get reused, how generations climb, and proves the LIFO pattern.

Slot Allocation Heatmap
Click Run to see 40 rapid create/close cycles with real slot tracking
Allocation Events
Exhibit 15 — Unique to Asupersync

Sibling Scope Isolation — No Cross-Contamination

In tokio, cancelling one task group can accidentally leak signals to siblings through shared CancellationTokens, thread-locals, or global state. Asupersync guarantees structural isolation: closing or cancelling one scope has zero effect on sibling scopes. This demo creates 4 real sibling scopes with tasks, closes them one at a time, and probes the living siblings after each close to prove they're completely unaffected.

Sibling Scope Independence
Click Build to create 4 sibling scopes, then close them one-by-one to prove isolation
Isolation Events
Exhibit 16 — Unique to Asupersync

LIFO Slot Recycling — Deterministic by Construction

Asupersync's handle allocator uses a deterministic LIFO free-list: when slots are freed, the most recently freed slot is reused first. This demo creates 4 scopes, frees them in order, then creates 4 new scopes and verifies they reuse slots in exact reverse order. Because allocation is deterministic, you can record a handle allocation trace and replay it identically, which tokio's opaque, run-to-run-varying task IDs (see Exhibit 14) rule out.
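The LIFO free-list itself fits in a few lines. This is an illustrative model of the documented reuse pattern, not the runtime's allocator:

```javascript
// Model of a LIFO free-list: the most recently freed slot is reused
// first, so freeing slots 0..3 in order and re-allocating yields 3,2,1,0.
class Allocator {
  constructor() { this.next = 0; this.freeList = []; }
  alloc() {
    return this.freeList.length ? this.freeList.pop() : this.next++;
  }
  free(slot) { this.freeList.push(slot); }
}

const a = new Allocator();
const first = [a.alloc(), a.alloc(), a.alloc(), a.alloc()]; // [0, 1, 2, 3]
for (const slot of first) a.free(slot);                     // free in order 0..3
const reused = [a.alloc(), a.alloc(), a.alloc(), a.alloc()];
console.log(reused); // → [3, 2, 1, 0] — exact reverse: LIFO
```

Since the same create/free sequence always yields the same slot assignments, a recorded trace of this allocator replays byte-for-byte identically.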

LIFO Recycling Proof
Creates scopes, frees them, then verifies reuse follows LIFO order
Exhibit 17 — Unique to Asupersync

Handle Leak Detection — The Runtime Catches Your Mistakes

What happens when a programmer forgets to join a spawned task? In tokio, the task becomes an orphan — it runs indefinitely with no owner, leaking memory and potentially holding locks. In Asupersync, the scope won't let you. Tasks are pinned on spawn and tracked as obligations. When the parent scope closes, it cascade-closes every unjoinable child. The runtime cleans up your mistakes structurally. This demo intentionally "forgets" to join 5 tasks and proves the scope catches all of them.

Obligation Enforcement
Spawns 5 tasks and intentionally "forgets" to join them
Leak Detection
Exhibit 18 — Unique to Asupersync

Scope-as-Timeout — Structural Deadlines, Not Afterthoughts

In vanilla JavaScript, implementing a timeout for a group of concurrent operations requires creating an AbortController, a setTimeout, wiring the signal to every fetch(), handling the AbortError in every .catch(), and remembering to clearTimeout on success. That's 6 coordination points where bugs hide. In Asupersync, you create a scope and close it when the deadline fires. Every task and fetch handle under it is automatically cancelled. One line of cleanup instead of six.

Structural Timeout (2s deadline)
Spawns 3 tasks with a 2-second scope deadline — scope auto-closes when time's up
Timeout Events
Exhibit 19 — Unique to Asupersync

Scoped Channel Lifecycle — Resources Die with Their Scope

Browser resources (WebSockets, MessageChannels, streams) have no concept of "ownership." If your component unmounts without explicitly closing them, they become zombies. In Asupersync, resources are scoped to a region with WASM tasks tracking each end. Close the region and everything under it is torn down — ports closed, tasks released, handles freed. The resource cannot outlive its scope. This demo creates a real MessageChannel with WASM-tracked tasks, sends messages, then closes the scope.

Scoped Channel Lifecycle
Opens a MessageChannel with WASM-tracked tasks — close the scope to tear down everything
Channel Events
Exhibit 20 — The Whole Point

Vanilla JS vs Asupersync — Side-by-Side

Every exhibit above demonstrates a structural guarantee. But the real question is: how much code do YOU have to write? Here's the same scenario — 3 concurrent fetches with a 5-second timeout — implemented in vanilla JavaScript (22 lines, 11 coordination points where bugs hide) vs Asupersync (6 lines, 1 coordination point). The difference isn't incremental — it's a different programming model entirely.

Vanilla JavaScript — 22 lines
const controller = new AbortController();  // 1. create controller
const signal = controller.signal;          // 2. extract signal

// 3. wire timeout to abort
const timer = setTimeout(() => {
  controller.abort();                      // 4. abort on timeout
}, 5000);

try {
  const results = await Promise.all([
    fetch(url1, { signal }),               // 5. pass signal
    fetch(url2, { signal }),               // 6. pass signal
    fetch(url3, { signal }),               // 7. pass signal
  ]);
  clearTimeout(timer);                     // 8. clear on success
  return results;
} catch (err) {
  clearTimeout(timer);                     // 9. clear on error too!
  if (err.name === 'AbortError') {
    // 10. distinguish timeout from real error
    throw new Error('Request timed out');
  }
  throw err;                               // 11. re-throw others
}
11 coordination points where bugs can hide. Forget #8 or #9? Timer leaks. Forget #5-7? Request ignores timeout. Mix up #10? Wrong error type propagates.
Asupersync — 6 lines
const scope = rt.enterScope('requests').value;

// All requests are scoped — no signal wiring needed
scope.fetchRequest({ url: url1, method: 'GET' });
scope.fetchRequest({ url: url2, method: 'GET' });
scope.fetchRequest({ url: url3, method: 'GET' });

// After 5s, close the scope — ALL requests auto-cancelled
setTimeout(() => scope.close(), 5000);
1 coordination point. scope.close() handles everything. No signal wiring. No manual cleanup. No error-path bugs. The scope IS the AbortController, the timeout, and the cleanup — all in one structural primitive.
Exhibit 21 — Performance

Rapid Scope Churn — How Fast Is the Runtime?

The common objection to structured concurrency is performance: "all that bookkeeping must be slow." This benchmark measures real WASM operations per second — scope create + close cycles including handle allocation, generation tracking, parent registration, and full cleanup. The result: structured concurrency at microsecond latency. These are the same operations that power every exhibit on this page, running at production speed.

WASM Runtime Throughput
Measures real scope create/close operations per second