# API Guide
This guide covers practical patterns and best practices for integrating TOON into your LLM pipelines. For detailed function signatures, see the API Reference.
## Integration Strategy
The most common way to use TOON is as a translation layer between your application logic and your LLM prompts.
```mermaid
graph LR
    JSON[JSON Data] -->|encode| TOON[TOON String]
    TOON -->|prompt| LLM[Large Language Model]
    LLM -->|output| TOON_OUT[TOON Response]
    TOON_OUT -->|decode| JSON_OUT[JSON Result]
```

## Common Patterns
### 1. Feeding Large Datasets to LLMs
When sending large arrays of objects (e.g., search results, product catalogs), use Tab delimiters for maximum token efficiency.
```ts
import { encode } from '@toon-format/toon'

const products = await getProducts()
const prompt = `Here are our products:\n\n${encode(products, { delimiter: '\t' })}`
```

### 2. Structured LLM Responses
You can ask an LLM to respond in TOON format. Because TOON has explicit array headers, it's easier for the LLM to maintain structure than with raw CSV, and uses fewer tokens than JSON.
Prompt Example:

```text
Please list the top 3 cities in France by population. Respond using TOON format with headers: [3]{city,population,region}.
```
Response Example:

```yaml
cities[3]{city,population,region}:
  Paris,2148271,Île-de-France
  Marseille,861635,Provence-Alpes-Côte d'Azur
  Lyon,513275,Auvergne-Rhône-Alpes
```

### 3. Key Folding for Deep Objects
If your data has deeply nested single-key wrappers, use keyFolding to flatten them and save tokens.
```ts
const data = { response: { data: { items: [...] } } }
const compact = encode(data, { keyFolding: 'safe' })
// Result: "response.data.items[N]: ..."
```

## Best Practices
### Use Tabs in Production

While commas are more human-readable, tabs (`\t`) are often tokenized as single tokens and rarely conflict with data values, reducing the need for quoting.
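To see why this matters, here is a minimal sketch of a simplified quoting rule (not the library's actual implementation): a cell must be quoted whenever it contains the active delimiter, so comma-heavy text data forces quotes under `,` but almost never under `\t`. The `needsQuoting` helper is hypothetical, for illustration only.

```ts
// Simplified rule: quote a cell if it contains the delimiter, a quote, or a newline.
// (Illustrative only — the real encoder's quoting rules are more nuanced.)
const needsQuoting = (cell: string, delimiter: string): boolean =>
  cell.includes(delimiter) || cell.includes('"') || cell.includes('\n')

const cells = ['Widget, large', 'In stock']

// With commas, prose-like values collide with the delimiter and need quoting.
const quotedWithComma = cells.filter((c) => needsQuoting(c, ',')).length // 1
// With tabs, real-world data values almost never contain the delimiter.
const quotedWithTab = cells.filter((c) => needsQuoting(c, '\t')).length // 0
```

Fewer quoted cells means fewer quote characters in the prompt, which is where the token savings come from.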
### Always Specify Array Lengths
If you are generating TOON manually or via a custom implementation, always include the [N] length. LLMs use this to verify they haven't truncated the output.
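The declared length also lets your application verify model output before decoding. Below is a hypothetical sanity check (not part of `@toon-format/toon`) that compares the `[N]` in a tabular array header against the number of data rows, catching truncated responses early:

```ts
// Returns true only when the header's declared [N] matches the row count.
// Assumes a single tabular array: one header line followed by one line per row.
function declaredLengthMatches(toon: string): boolean {
  const lines = toon.split('\n').filter((l) => l.trim() !== '')
  const header = lines[0]?.match(/\[(\d+)\]/)
  if (!header) return false
  return Number(header[1]) === lines.length - 1
}

const ok = 'cities[3]{city,population}:\n  Paris,2148271\n  Marseille,861635\n  Lyon,513275'
const truncated = 'cities[3]{city,population}:\n  Paris,2148271'

console.log(declaredLengthMatches(ok)) // true
console.log(declaredLengthMatches(truncated)) // false
```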
### Streaming Large Data
For very large datasets, use encodeLines to avoid building a massive string in memory:
```ts
import { encodeLines } from '@toon-format/toon'

for (const line of encodeLines(hugeObject)) {
  process.stdout.write(line + '\n')
}
```

## Error Handling
When decoding model output, always wrap in a try-catch, as models may occasionally fail to follow the format perfectly.
```ts
import { decode } from '@toon-format/toon'

try {
  const result = decode(modelOutput)
  // Process result...
} catch (e) {
  console.error("Failed to parse model output:", e)
  // Fallback logic...
}
```

## Support Matrix
TOON is natively supported in:
- TypeScript/JavaScript: `@toon-format/toon`
- CLI: `@toon-format/cli`
- Python: (Coming Soon)
- Go: (Coming Soon)