Hungarian Algorithm

Min-cost perfect matching on a 10×10 matrix, executed on a compiled transformer stack machine

Cost matrix

Cells of the optimal assignment are highlighted. Every value lives in the stack machine’s heap at addresses 0–99, written there by parabolic attention during initialization.
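The layout can be sketched in plain Python; the row-major addressing rule (addr = 10·i + j) and the `heap` list standing in for the machine’s heap are assumptions for illustration.

```python
# Hypothetical sketch of heap initialization, assuming row-major addressing
# (addr = 10*i + j), so a 10x10 cost matrix fills heap addresses 0..99.
HEAP_SIZE = 210  # per the memory map, the heap extends to address 209

def init_costs(cost):
    heap = [0] * HEAP_SIZE
    for i in range(10):
        for j in range(10):
            heap[10 * i + j] = cost[i][j]
    return heap
```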

Execution trace

Each row is one step of the compiled transformer: a parabolic attention head fires, fetching an instruction or reading memory, and the stack and locals mutate. 23,167 steps in total, shown here in phases.
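One such step can be sketched as a fetch/execute cycle. The opcode names and the `State` layout below are invented for illustration; the real machine realizes this loop with parabolic attention heads rather than Python control flow.

```python
from dataclasses import dataclass, field

@dataclass
class State:
    program: list            # list of (opcode, argument) pairs -- hypothetical ISA
    pc: int = 0              # program counter
    stack: list = field(default_factory=list)
    heap: list = field(default_factory=lambda: [0] * 210)

def step(state):
    op, arg = state.program[state.pc]   # instruction fetch
    if op == "PUSH":
        state.stack.append(arg)
    elif op == "LOAD":                  # heap read onto the stack
        state.stack.append(state.heap[arg])
    elif op == "STORE":                 # pop the stack into the heap
        state.heap[arg] = state.stack.pop()
    elif op == "ADD":                   # transient arithmetic on the stack
        b, a = state.stack.pop(), state.stack.pop()
        state.stack.append(a + b)
    state.pc += 1
```

Stepping through `PUSH 2, PUSH 3, ADD, STORE 0`, for instance, leaves 5 at heap address 0 with an empty stack.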

Iteration profile

The algorithm loops once per matrix row. Later rows require longer augmenting-path searches as more columns are already assigned.

Opcode frequency

How the machine works

The key insight
Parabolic encoding k = (2j, −j²) makes dot-product attention peak sharply at a target position. The same mechanism addresses program memory (instruction fetch), the stack, local variables, and the heap. The transformer’s weights are the interpreter — compiled analytically, not trained.
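The peak is easy to verify numerically. Assuming a query of the form q = (i, 1), the score q·k_j = 2ij − j² = i² − (i − j)² is uniquely maximized at j = i, so a sharp softmax puts essentially all attention mass on position i. The query form and the sharpness parameter `beta` here are assumptions for illustration.

```python
import numpy as np

def parabolic_attention(i, n=16, beta=50.0):
    # Keys k_j = (2j, -j^2); with assumed query q = (i, 1), the score
    # q . k_j = 2*i*j - j**2 = i**2 - (i - j)**2 peaks uniquely at j = i.
    j = np.arange(n)
    scores = 2 * i * j - j ** 2
    weights = np.exp(beta * (scores - scores.max()))  # numerically stable softmax
    return weights / weights.sum()
```

For example, `parabolic_attention(7)` concentrates nearly all of its mass on index 7, which is what lets one attention head act as a pointer into program memory, the stack, locals, or the heap.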

The Hungarian algorithm uses all three memory spaces: the heap (100 cost entries plus 66 entries of potential/assignment arrays, spanning addresses 0–209), 9 local variables (loop counters and temporaries), and the stack (transient arithmetic, maximum depth ~3). The 682-instruction program was generated by a Python assembler, not written by hand.
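For reference, the algorithm the program implements can be sketched in plain Python: the O(n³) Hungarian method with row/column potentials u, v and one augmenting-path search per matrix row. This is a textbook sketch, not the assembler’s output.

```python
def hungarian(cost):
    # O(n^3) Hungarian algorithm with potentials; 1-indexed arrays, column 0
    # is a virtual starting column for the augmenting-path search.
    n = len(cost)
    INF = float("inf")
    u = [0] * (n + 1)        # row potentials
    v = [0] * (n + 1)        # column potentials
    p = [0] * (n + 1)        # p[j] = row currently assigned to column j
    way = [0] * (n + 1)      # back-pointers for path reconstruction
    for i in range(1, n + 1):            # one augmenting path per row
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)
        used = [False] * (n + 1)
        while True:
            used[j0] = True
            i0, delta, j1 = p[j0], INF, 0
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]  # reduced cost
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):       # update potentials
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:               # reached an unassigned column
                break
        while j0:                        # flip the augmenting path
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    assignment = [0] * n                 # assignment[row] = column
    for j in range(1, n + 1):
        assignment[p[j] - 1] = j - 1
    total = sum(cost[i][assignment[i]] for i in range(n))
    return assignment, total
```

Later rows are more expensive because each augmenting-path search must skirt the columns already claimed by earlier rows, which matches the iteration profile above.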