What happens when you annotate, extract, and disambiguate every entity mentioned in the longest U.S. Supreme Court decision in history? And what if you then link those entities to each other and visualize them as a network?
This is the result of enriching all 241 pages and 111,267 words of Dred Scott v. Sandford (1857) with Kanon 2 Enricher in under ten seconds, at a cost of 47 cents.
Dred Scott v. Sandford is the longest U.S. Supreme Court decision by far, and has variously been called "the worst Supreme Court decision ever" and "the Court's greatest self-inflicted wound" due to its denial of the rights of African Americans.
Thanks to Kanon 2 Enricher, we now also know that the case contains 950 numbered paragraphs, 6 footnotes, 178 people mentioned 1,340 times, 99 locations mentioned 1,294 times, and 298 external documents referenced 940 times.
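To give a flavor of what that looks like under the hood, here's a hypothetical sketch of one enriched paragraph as a Python record. Isaacus hasn't published the Enricher's output schema, so every field name below is my own invention:

# Hypothetical sketch of one enriched paragraph; the real Kanon 2 Enricher
# schema is not public, so all field names here are invented.
enriched_paragraph = {
    "paragraph": 928,                   # the decision's numbered paragraph
    "text": "...",                      # original paragraph text (elided)
    "entities": [
        {
            "surface": "Magna Charta",  # the string as it appears in the text
            "type": "external_document",
            "canonical": "Magna Carta", # disambiguated canonical entity
            "date": "1215",
        },
    ],
}

Disambiguation is what collapses 1,340 person mentions into 178 distinct people, and it's what makes the network view possible.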
For an American case, there are a decent number of references to British precedents (27 to be exact), including the Magna Carta (¶ 928).
Surprisingly, though, the Magna Carta is not the oldest citation referenced. That would be the Institutes of Justinian (¶ 315), dated to around 533 CE.
The oldest city mentioned is Rome (founded 753 BCE) (¶ 311), the oldest person is Justinian (born 527 CE) (¶ 314), and the oldest year referenced is 1371, when 'Charles V of France exempted all the inhabitants of Paris from serfdom' (¶ 370).
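And once every mention resolves to a canonical entity, the network visualization is almost free. Here's a minimal sketch with networkx, assuming the enricher gives you (paragraph, entities) pairs; this is my own toy example, not Isaacus code:

import networkx as nx

# Toy co-mention graph: connect two canonical entities whenever they
# appear in the same numbered paragraph.
def build_entity_graph(paragraphs):
    g = nx.Graph()
    for _, entities in paragraphs:
        for i, a in enumerate(entities):
            for b in entities[i + 1:]:
                if g.has_edge(a, b):
                    g[a][b]["weight"] += 1  # co-mention count
                else:
                    g.add_edge(a, b, weight=1)
    return g

g = build_entity_graph([
    (314, ["Justinian", "Rome"]),
    (315, ["Justinian", "Institutes of Justinian"]),
])
print(g.edges(data=True))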
All this information and more was extracted in 9 seconds. That's the power of Kanon 2 Enricher, my latest LLM for document enrichment and hierarchical graphitization. If you'd like to play with it yourself now that it's in closed beta, you can apply to the Isaacus Beta Program here: https://isaacus.com/beta.
I see all Chinese labs are turning TL;DR into TL;DRGB
Problem: 1M text tokens == 1M opportunities for your GPU to file a workers' comp claim.
Solution: don't feed the model War & Peace; feed it the movie poster.
This is Glyph, Zai's new visual-text compression voodoo:
• 10k words → 3 PNGs ≈ 3k visual tokens
• Compression ratio: 4.3×
• Throughput: 40–60 tok/s
i.e. your context window now finishes before my coffee does.
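For what it's worth, the 4.3× checks out on the back of an envelope, if you assume English prose runs at roughly 1.3 BPE tokens per word (my assumption, not Zai's number):

# Back-of-envelope check of the claimed 4.3x compression ratio,
# assuming ~1.3 text tokens per English word.
words = 10_000
text_tokens = int(words * 1.3)       # ~13,000 tokens fed in the usual way
visual_tokens = 3_000                # ~3 PNGs' worth of image patches
print(text_tokens / visual_tokens)   # ≈ 4.3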
So I did the only reasonable thing: asked GLM-4.6 to port Glyph for Qwen3-VL-8B-Thinking. Translation: I made one model compress a novel into a comic strip, then made another model read the comic strip and still ace QA. It’s basically passing notes in class, except the note is a 1920×1080 meme and the teacher is a transformer.
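The note-passing step itself is just text rendering. Here's a minimal Pillow sketch of the idea, emphatically not Glyph's actual pipeline (Glyph tunes fonts, DPI, and layout far more carefully; the page size here is just the meme resolution from above):

import textwrap
from PIL import Image, ImageDraw, ImageFont

# Toy stand-in for the rendering step: paint a chunk of text onto a
# 1920x1080 page that the VLM reads as a few thousand visual tokens.
def render_page(text, path, width=1920, height=1080, line_height=26):
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # swap in a real TTF for legibility
    y = 10
    for line in textwrap.wrap(text, width=160):
        draw.text((10, y), line, fill="black", font=font)
        y += line_height
        if y > height - line_height:
            break  # page full; the rest goes on the next PNG
    img.save(path)

render_page("It was the best of times..." * 50, "page_001.png")

Reading the comic strip back is then an ordinary VLM QA call on those PNGs, which is where Qwen3-VL-8B-Thinking comes in.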
We've gone from "Attention is All You Need" to "Attention is Too Expensive, Just Use Your Eyes." Remember kids: in 2025 literacy is optional, but JPEG is forever.