Everything you need to get started with ATON.
From installation to advanced usage.
ATON (Adaptive Token-Oriented Notation) is a novel data serialization format specifically designed for LLM applications. It reduces token usage by up to 56% compared to JSON while maintaining full data integrity and human readability.
JSON (125 tokens)
{
  "products": [
    {"id": 1, "name": "Laptop", "price": 999},
    {"id": 2, "name": "Mouse", "price": 49},
    {"id": 3, "name": "Keyboard", "price": 79}
  ]
}
ATON (55 tokens - 56% reduction)
@schema[id:int, name:str, price:float]
products(3):
1, "Laptop", 999
2, "Mouse", 49
3, "Keyboard", 79
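Want to reproduce this comparison yourself? A minimal sketch, assuming the package's estimate_tokens helper (listed in the Python API reference below) is importable from converter.aton:
import json
from converter.aton import ATONEncoder, estimate_tokens
data = {
    "products": [
        {"id": 1, "name": "Laptop", "price": 999},
        {"id": 2, "name": "Mouse", "price": 49},
        {"id": 3, "name": "Keyboard", "price": 79}
    ]
}
aton = ATONEncoder(optimize=True).encode(data)
print(f"JSON: ~{estimate_tokens(json.dumps(data, indent=2))} tokens")
print(f"ATON: ~{estimate_tokens(aton)} tokens")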
Get up and running with ATON in 3 steps
Download the complete package with libraries, examples, and documentation.
Download from GitHub, then extract the archive and you're ready to use ATON.
# Extract
tar -xzf aton-format-main.tar.gz
cd aton-format-main
# No installation needed! Ready to use.
Use the Python or JavaScript library in your projects.
# Python
from converter.aton import ATONEncoder
encoder = ATONEncoder(optimize=True)
aton = encoder.encode(your_data)
Version 1.0.1 • 87 KB • Updated Nov 18, 2025
git clone https://github.com/dagoSte/aton-format
cd aton-format
GitHub repository
# Extract package
tar -xzf aton-format-main.tar.gz
cd aton-format-main
# Use directly (no pip install needed)
from converter.aton import ATONEncoder
# Or add to your Python path
export PYTHONPATH="${PYTHONPATH}:/path/to/aton-format-main"
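Alternatively, extend the search path at runtime instead of exporting PYTHONPATH (a sketch; point it at wherever you extracted the package):
# Add the package directory to sys.path at runtime
import sys
sys.path.append("/path/to/aton-format-main")
from converter.aton import ATONEncoder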
Install via pip
pip install aton-format
<!-- Include in your HTML -->
<script src="path/to/aton.js"></script>
<script>
// Use ATON
const converter = new ATON.Converter();
const aton = converter.jsonToAton(jsonString);
</script>
from converter.aton import ATONEncoder
# Create encoder
encoder = ATONEncoder(optimize=True)
# Your data
data = {
    "users": [
        {"id": 1, "name": "John", "email": "[email protected]"},
        {"id": 2, "name": "Jane", "email": "[email protected]"}
    ]
}
# Convert to ATON
aton_string = encoder.encode(data)
print(aton_string)
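Given the @schema syntax shown earlier, the printed output should look along these lines (exact formatting may vary by version):
@schema[id:int, name:str, email:str]
users(2):
1, "John", "[email protected]"
2, "Jane", "[email protected]"
The encoder's behavior can be tuned through its constructor options: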
encoder = ATONEncoder(
    optimize=True,         # Enable optimizations
    include_schema=True,   # Generate @schema[...]
    include_defaults=True, # Generate @defaults[...]
    min_items=1            # Minimum items for optimization
)
from converter.aton import ATONEncoder, ATONDecoder
# Encode
encoder = ATONEncoder(optimize=True)
aton = encoder.encode(data)
# Decode (round-trip)
decoder = ATONDecoder()
original_data = decoder.decode(aton)
# Verify
assert data == original_data  # ✅ Zero data loss!
<script src="aton.js"></script>
<script>
// Create converter
const converter = new ATON.Converter({
  optimize: true,
  includeSchema: true,
  includeDefaults: true
});
// Convert JSON to ATON
const jsonString = JSON.stringify(yourData);
const aton = converter.jsonToAton(jsonString);
// Get savings info
const savings = converter.calculateSavings(jsonString, aton);
console.log(`Saved ${savings.savedTokens} tokens!`);
</script>
const { ATONConverter } = require('./aton.js');
const converter = new ATONConverter({
  optimize: true,
  includeSchema: true
});
const data = { /* your data */ };
const aton = converter.jsonToAton(JSON.stringify(data));
console.log(aton);
Document retrieval with metadata and chunks
import json
from converter.aton import ATONEncoder, estimate_tokens
rag_data = {
    "documents": [{
        "doc_id": "doc_001",
        "filename": "Q4_Report.pdf",
        "pages": 87,
        "processed": True
    }],
    "chunks": [{
        "chunk_id": "ch_001",
        "doc_id": "doc_001",
        "content": "Revenue reached $145.7M..."
    }]
}
encoder = ATONEncoder(optimize=True)
aton = encoder.encode(rag_data)
# Result: 57% token reduction!
print(f"JSON: {estimate_tokens(json.dumps(rag_data))} tokens")
print(f"ATON: {estimate_tokens(aton)} tokens")
Find 27+ detailed examples in the package:
examples/EXAMPLES.md - 8 basic examples
examples/ADVANCED_EXAMPLES.md - 10+ enterprise scenarios
examples/practical_examples.py - Executable Python code
examples/VISUAL_COMPARISON.md - Side-by-side comparisons

| Option | Type | Default | Description |
|---|---|---|---|
| optimize | boolean | true | Enable all optimizations |
| includeSchema | boolean | true | Generate @schema[...] header |
| includeDefaults | boolean | true | Generate @defaults[...] and omit values |
| minItemsForOptimization | number | 1 | Minimum array size for optimization |
ATONEncoder(optimize=True, include_schema=True, include_defaults=True)
ATONEncoder(optimize=False, include_schema=True, include_defaults=False)
ATONEncoder(optimize=False, include_schema=False, include_defaults=False)
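To see what each preset costs in tokens, a small sketch (again assuming estimate_tokens is importable from converter.aton):
from converter.aton import ATONEncoder, estimate_tokens
data = {"users": [{"id": 1, "name": "John", "email": "[email protected]"}]}
presets = {
    "maximum": dict(optimize=True, include_schema=True, include_defaults=True),
    "schema_only": dict(optimize=False, include_schema=True, include_defaults=False),
    "minimal": dict(optimize=False, include_schema=False, include_defaults=False),
}
for name, opts in presets.items():
    print(f"{name}: ~{estimate_tokens(ATONEncoder(**opts).encode(data))} tokens")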
Test ATON with local LLMs using Ollama
# 1. Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# 2. Download a model
ollama pull llama3.1:8b
# 3. Run ATON tests
cd aton-format-main/tests
python3 quick_ollama_test.py
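Beyond the bundled tests, Ollama also exposes a local HTTP API you can call directly. A minimal sketch that asks a model a question over ATON-encoded context (assumes Ollama is running on its default port 11434 and the requests package is installed):
import requests
from converter.aton import ATONEncoder

aton = ATONEncoder(optimize=True).encode(
    {"products": [{"id": 1, "name": "Laptop", "price": 999}]}
)
resp = requests.post("http://localhost:11434/api/generate", json={
    "model": "llama3.1:8b",
    "prompt": f"Data (ATON format):\n{aton}\n\nWhat is the price of the Laptop?",
    "stream": False,
})
print(resp.json()["response"])  # The model should answer from the ATON context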
Comprehensive testing suite included in the package:
tests/quick_ollama_test.py - 2-minute quick test
tests/test_ollama.py - Full test suite (3 scenarios)
tests/OLLAMA_TEST_GUIDE.md - Complete guide
💡 Tests verify ATON token savings and LLM comprehension
Main class for converting data to ATON format
ATONEncoder(
    optimize: bool = True,
    include_schema: bool = True,
    include_defaults: bool = True,
    min_items: int = 1
)
encode(data: dict) → str
Convert dictionary to ATON string
estimate_tokens(text: str) → int
Estimate token count (rough approximation)
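The source describes this only as a rough approximation; a common heuristic of this kind is about four characters per token for English text. A hypothetical stand-in, not necessarily the library's implementation:
def estimate_tokens(text: str) -> int:
    # Hypothetical sketch: ~4 characters per token for English text.
    # The library's own implementation may differ.
    return max(1, len(text) // 4)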
Class for parsing ATON back to Python objects
decode(aton_str: str) → dict
Parse ATON string to dictionary
JavaScript/Browser API
new ATON.Converter({
  optimize: true,
  includeSchema: true,
  includeDefaults: true
})
jsonToAton(jsonString: string): string
Convert JSON string to ATON
atonToJson(atonString: string): string
Convert ATON back to JSON
calculateSavings(json: string, aton: string): object
Calculate token savings and metrics
Yes! ATON has been extensively tested and validated. It maintains 100% data integrity with zero data loss and typically achieves 50-60% token reduction across use cases.
ATON works with all major LLMs including GPT-4, Claude, Llama, Mistral, and more. It's been tested with Ollama for local inference with excellent results.
ATON is ideal for LLM applications where token efficiency matters. For public REST APIs, browser storage, or human-edited config files, JSON remains a better choice due to ecosystem compatibility and tooling support.
Typical savings range from 50-60% depending on your data structure. For enterprise applications processing 100M tokens/day, this translates to $613,200/year in savings.
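That figure appears to assume GPT-4-class input pricing of about $0.03 per 1K tokens; the arithmetic checks out:
# 100M tokens/day at 56% reduction, assuming $0.03 per 1K tokens
tokens_saved_per_day = 100_000_000 * 0.56             # 56M tokens/day
dollars_per_day = tokens_saved_per_day / 1000 * 0.03  # $1,680/day
print(f"${dollars_per_day * 365:,.0f}/year")          # $613,200/year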
Zero. ATON guarantees 100% data integrity. You can convert JSON → ATON → JSON and get exactly the same data back.
Check the documentation in the package, particularly the WHITEPAPER.md for technical details and ADVANCED_EXAMPLES.md for real-world use cases. For issues, refer to the troubleshooting sections in each guide.