Trillim's Tokens

Trillim Adds Support For Bonsai

April 9, 2026

Trillim v0.8.0 adds support for PrismML's Bonsai models, bringing DarkNet's CPU-first runtime to a new class of 1-bit Qwen3-based models.

On March 31, 2026, PrismML released Bonsai, a new family of Qwen3-based 1-bit language models. That immediately caught our attention at Trillim.

The Bonsai models are interesting for the same reason BitNet was interesting: they push useful language models much closer to the hardware people actually have. In Bonsai, weights and embeddings are binary. In BitNet, the core idea is ternary weights. Both families reduce dependence on expensive floating-point matrix multiplies and fit naturally with CPU-first inference.
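To make the arithmetic point concrete, here is an illustrative sketch of why binary weights avoid floating-point multiplies. This is not DarkNet's actual kernel code; it just shows that a dot product between {-1, +1} vectors reduces to counting bit agreements, which real kernels implement with XNOR and popcount instructions.

```python
import numpy as np

def binary_dot(a_bits: np.ndarray, b_bits: np.ndarray) -> int:
    """Dot product of two {-1, +1} vectors stored as {0, 1} bits.

    When both operands are binary, multiply-accumulate collapses to
    counting positions where the bits agree: dot = 2 * matches - n.
    A production kernel does this with XNOR + popcount on packed words.
    """
    n = a_bits.size
    matches = int(np.count_nonzero(a_bits == b_bits))
    return 2 * matches - n

# Reference check against an ordinary float dot product
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 64)
b = rng.integers(0, 2, 64)
assert binary_dot(a, b) == int(np.dot(2 * a - 1, 2 * b - 1))
```

The same idea generalizes to ternary weights with one extra mask for zeros, which is why both model families map well onto integer-heavy CPU instruction sets.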

PrismML also published a whitepaper for Bonsai 8B, and the benchmark quality was strong enough that we wanted to move quickly.

That led directly to v0.8.0: Trillim now supports Bonsai through the same managed bundle flow as the rest of the product. You can quantize compatible checkpoints into Local/...-TRNQ, load them through the CLI, use them from the Python SDK, or serve them over the local OpenAI-compatible server. If you are new to Trillim, start with the install docs and then head to the CLI reference.
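As a sketch of the serving path: once a bundle is served locally, any OpenAI-style HTTP client should work against it. The port, the `/v1` base path, and the bundle name below are placeholder assumptions for illustration, not documented Trillim defaults; consult the install docs for the real values.

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str,
                       base_url: str = "http://localhost:8080/v1"):
    """Build a standard OpenAI-style chat completion request.

    The default port and the bundle name passed in by the caller are
    placeholders, not documented Trillim defaults.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{base_url}/chat/completions", payload

def send(url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example, assuming a Trillim server is already running locally
# (bundle name is hypothetical):
# url, payload = build_chat_request("Local/example-TRNQ", "Hello")
# print(send(url, payload)["choices"][0]["message"]["content"])
```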

Why we cared about Bonsai

Trillim started because we were not satisfied with the state of local BitNet inference.

Microsoft’s bitnet.cpp was an important research project and it demonstrated that carefully designed low-bit kernels could work well in practice. But it also had real product limitations for the workflow we cared about: it supported only a narrow subset of models, it was not designed as a general local AI surface, and at the time we started Trillim it did not cover some features we considered important for real use, including LoRA adapters and broader unembedding quantization support.

So we built our own runtime. That runtime became DarkNet: the CPU inference engine behind Trillim. It gave us control over kernels, model support, adapters, serving surfaces, and the full path from quantization to local deployment. We have already shown in our earlier DarkNet vs bitnet.cpp benchmark write-up that this approach lets us move faster and ship stronger CPU inference performance.

That same logic applies to Bonsai. If a new 1-bit model family is genuinely useful on consumer hardware, we want it inside the same runtime and the same product surface instead of forcing users onto a separate toolchain.

What ships in v0.8.0

This release adds Bonsai support directly to DarkNet and Trillim’s bundle workflow.

  • compatible Bonsai checkpoints can be quantized into Trillim-managed bundles
  • those bundles load through the same chat, serve, and SDK flows as existing Trillim models
  • DarkNet now includes dedicated CPU kernels for Bonsai on AVX2 and Arm NEON

The practical result is simple: if you are running on CPU, Trillim now gives you a fast path for Bonsai without asking you to learn a new local stack.

There is one important caveat. These results are about CPU inference. Right now our focus is AVX2 and Arm NEON. If you are relying on Apple GPU cores, AVX512, or AVX-VNNI, this release does not yet represent our end state. We are actively working on AVX512, AVX-VNNI, and Metal support next.

Benchmark framing

We benchmarked DarkNet against PrismML’s Bonsai runtime in two configurations:

  • the base bonsai.cpp path from PrismML’s fork of llama.cpp
  • a manually patched version with unmerged pull requests applied, including AVX2 kernels where relevant

For these tables:

  • pp 512 means prefill throughput with a 512-token prompt
  • tg 256 means decode throughput over 256 output tokens
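These two figures are enough for a back-of-envelope latency estimate: the prompt is processed at the pp rate and output tokens are generated at the tg rate, ignoring per-request overhead. A minimal sketch:

```python
def estimated_latency_s(prompt_tokens: int, output_tokens: int,
                        pp_tps: float, tg_tps: float) -> float:
    """Rough end-to-end time: prefill at pp throughput, decode at tg throughput."""
    return prompt_tokens / pp_tps + output_tokens / tg_tps

# A 512-token prompt at 128 t/s prefill, plus 256 output tokens at 32 t/s decode
print(estimated_latency_s(512, 256, 128.0, 32.0))  # 12.0 seconds
```

This is also why the prefill and decode columns matter differently: long prompts are dominated by pp, long generations by tg.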

AVX2 Results

These AVX2 runs were collected on a consumer Intel laptop. That means there is some unavoidable variance from thermal behavior, boost behavior, and scheduler noise. We followed the same general methodology as in our earlier DarkNet vs bitnet.cpp benchmark post, so while the numbers are not perfectly noiseless, the comparison is still fair.

The base bonsai.cpp path was not useful here because it does not have a meaningful AVX2 fast path. In practice it falls back to a generic implementation that is too slow to make the results worth reporting.

bonsai.cpp with unmerged PRs

| Model | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| Bonsai 1.7B | 93.61 | 44.29 |
| Bonsai 4B | 36.70 | 16.11 |
| Bonsai 8B | 17.99 | 10.00 |

DarkNet

| Model | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| Bonsai 1.7B | 126.25 | 43.76 |
| Bonsai 4B | 49.48 | 16.43 |
| Bonsai 8B | 25.59 | 8.65 |

The AVX2 story is straightforward. DarkNet is clearly faster on prefill across all three sizes, while decode is roughly even once you account for normal laptop noise, with DarkNet slightly behind at 8B. The biggest practical win is that Trillim ships a real AVX2 path instead of falling back to something too slow to matter.

Arm Results

These Arm runs were collected on an Apple Mac Studio with an M3 Ultra. The machine is stable under sustained load, so we used a continuous benchmark process: for each category we ran the 1.7B, 4B, and 8B models while sweeping thread counts, and we repeated each run five times to smooth out variation.
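The sweep-and-repeat protocol can be sketched as follows. The `run_with_threads` callable is a stand-in for launching either runtime with a given thread count, and aggregating by median is our choice for the illustration rather than a claim about either project's benchmark tooling.

```python
import statistics
import time

def measure_tps(run_once, repeats: int = 5) -> float:
    """Run a workload several times and report the median tokens/s."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        tokens = run_once()  # must return the number of tokens processed
        samples.append(tokens / (time.perf_counter() - start))
    return statistics.median(samples)

def sweep(run_with_threads, thread_counts=(1, 4, 8, 10, 20)) -> dict:
    """Thread sweep: one median t/s figure per thread count."""
    return {t: measure_tps(lambda: run_with_threads(t)) for t in thread_counts}
```

A real harness would also pin threads and discard warm-up runs, but this is the shape of the measurement loop behind the tables below.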

For fairness, both bonsai.cpp variants were compiled without Metal support because DarkNet is currently CPU-only. When we ship Metal support, we will publish that comparison separately.

Base bonsai.cpp

Bonsai-1.7B

| threads | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| 1 | 21.57 | 16.67 |
| 4 | 77.82 | 54.05 |
| 8 | 154.05 | 95.01 |
| 10 | 189.51 | 109.75 |
| 20 | 358.81 | 145.44 |

Bonsai-4B

| threads | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| 1 | 8.38 | 7.14 |
| 4 | 30.23 | 23.78 |
| 8 | 59.82 | 43.40 |
| 10 | 73.58 | 50.73 |
| 20 | 141.05 | 71.74 |

Bonsai-8B

| threads | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| 1 | 4.44 | 3.80 |
| 4 | 16.27 | 13.06 |
| 8 | 32.33 | 24.56 |
| 10 | 40.06 | 28.80 |
| 20 | 77.98 | 46.04 |

bonsai.cpp with unmerged PRs

Bonsai-1.7B

| threads | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| 1 | 21.50 | 17.81 |
| 4 | 77.86 | 60.13 |
| 8 | 152.93 | 102.06 |
| 10 | 188.19 | 117.21 |
| 20 | 358.59 | 141.08 |

Bonsai-4B

| threads | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| 1 | 8.26 | 7.73 |
| 4 | 30.21 | 26.47 |
| 8 | 59.46 | 47.02 |
| 10 | 73.43 | 54.62 |
| 20 | 140.96 | 75.08 |

Bonsai-8B

| threads | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| 1 | 4.43 | 4.18 |
| 4 | 16.30 | 14.58 |
| 8 | 32.32 | 26.90 |
| 10 | 40.03 | 31.27 |
| 20 | 77.94 | 48.69 |

DarkNet

Bonsai-1.7B

| threads | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| 1 | 68.47 | 19.08 |
| 4 | 243.49 | 64.41 |
| 8 | 467.64 | 111.72 |
| 10 | 529.39 | 124.38 |
| 20 | 851.07 | 152.11 |

Bonsai-4B

| threads | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| 1 | 26.64 | 8.10 |
| 4 | 96.66 | 28.46 |
| 8 | 188.43 | 50.82 |
| 10 | 218.25 | 58.50 |
| 20 | 387.01 | 82.68 |

Bonsai-8B

| threads | pp 512 t/s | tg 256 t/s |
| --- | --- | --- |
| 1 | 15.44 | 4.67 |
| 4 | 56.55 | 16.75 |
| 8 | 112.28 | 30.86 |
| 10 | 133.34 | 36.51 |
| 20 | 241.28 | 55.08 |

On Apple Silicon CPU-only runs, DarkNet pulls ahead very clearly on prefill and still wins on decode, though the decode gains are much smaller. That pattern holds across the thread sweep and becomes more pronounced as thread counts rise.

Summary At 20 Threads

| Category | DarkNet over base | DarkNet over PR |
| --- | --- | --- |
| Bonsai-1.7B pp 512 | 2.37x | 2.37x |
| Bonsai-1.7B tg 256 | 1.05x | 1.08x |
| Bonsai-4B pp 512 | 2.74x | 2.75x |
| Bonsai-4B tg 256 | 1.15x | 1.10x |
| Bonsai-8B pp 512 | 3.09x | 3.10x |
| Bonsai-8B tg 256 | 1.20x | 1.13x |
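The ratios in this summary can be reproduced directly from the 20-thread rows in the tables above; a quick sketch:

```python
def speedup(darknet_tps: float, other_tps: float) -> float:
    """Throughput ratio rounded to two decimals, as reported in the summary."""
    return round(darknet_tps / other_tps, 2)

# 20-thread Arm figures from the tables above (t/s)
bonsai_1p7b_pp = {"darknet": 851.07, "base": 358.81, "pr": 358.59}
bonsai_8b_tg = {"darknet": 55.08, "base": 46.04, "pr": 48.69}

print(speedup(bonsai_1p7b_pp["darknet"], bonsai_1p7b_pp["base"]))  # 2.37x over base
print(speedup(bonsai_8b_tg["darknet"], bonsai_8b_tg["pr"]))        # 1.13x over PR
```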

That is the main takeaway from this release. On CPU, especially on Arm, DarkNet is not just compatible with Bonsai, it is materially faster where prefill matters most, and it still preserves a decode advantage in the aggregate.

Where this leaves Trillim

Trillim was never meant to be only a benchmark project. The point was to build a local AI stack that makes high-performance CPU inference practical to use.

That means two things have to be true at the same time:

  • the runtime needs to be fast
  • the product surface needs to be easy to install, easy to load, and easy to integrate

That is why Bonsai support matters to us. It is not only about adding another model family. It is about folding a new class of efficient models into the same local workflow: quantize, load, chat, serve, and embed.

If you want to try it, start with the install docs, then use the CLI reference or the Python Components guide depending on how you prefer to work.

Enjoy v0.8.0.