
Advancing the science of agentic systems

At Open Gigantic, we research and build token-efficient agent architectures from first principles. Our peer-reviewed work is published on arXiv and drives everything we ship.

2 arXiv Papers
4 Peer-Reviewed
6 CS Subfields
Apr '26 Latest Pub.
01 · 30 Apr 2026
cs.AI

ObjectGraph: From Document Injection to Knowledge Traversal — A Native File Format for the Agentic Era

Mohit Dubey · Open Gigantic

Every document format in existence was designed for a human reader moving linearly through text. Autonomous LLM agents do not read — they retrieve. This fundamental mismatch forces agents to inject entire documents into their context window, wasting tokens on irrelevant content, compounding state across multi-turn loops, and broadcasting information indiscriminately across agent roles. We argue this is not a prompt engineering problem, not a retrieval problem, and not a compression problem: it is a format problem. We introduce OBJECTGRAPH (.og), a file format that reconceives the document as a typed, directed knowledge graph to be traversed rather than a string to be injected. OBJECTGRAPH is a strict superset of Markdown — every .md file is a valid .og file — requires no infrastructure beyond a two-primitive query protocol, and is readable by both humans and agents without tooling. We formalize the Document Consumption Problem, characterize six structural properties no existing format satisfies simultaneously, and prove OBJECTGRAPH satisfies all six. We further introduce the Progressive Disclosure Model, the Role-Scoped Access Protocol, and Executable Assertion Nodes as native format primitives. Empirical evaluation across five document classes and eight agent task types demonstrates up to 95.3% token reduction with no statistically significant degradation in task accuracy (p > 0.05). Transpiler fidelity reaches 98.7% content preservation on a held-out document benchmark.
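The abstract's core idea — a document as a typed, directed graph queried through a two-primitive protocol — can be sketched in a few lines. This is an illustrative mock-up only: the node types, edge labels, and primitive names (`fetch`, `follow`) are our assumptions, not the .og specification.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_type: str                              # e.g. "section", "claim", "table"
    content: str                                # Markdown payload of this node
    edges: dict = field(default_factory=dict)   # edge label -> list of node ids

class ObjectGraphDoc:
    """Hypothetical in-memory view of an ObjectGraph-style document."""

    def __init__(self):
        self.nodes: dict[str, Node] = {}

    def add(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    # A two-primitive query protocol could look like this:
    def fetch(self, node_id: str) -> str:
        """Primitive 1: return the content of a single node."""
        return self.nodes[node_id].content

    def follow(self, node_id: str, edge_label: str) -> list[str]:
        """Primitive 2: return ids of nodes one labeled edge away."""
        return self.nodes[node_id].edges.get(edge_label, [])
```

Under this model an agent starts at a root node and calls `fetch` only on the nodes its task requires, instead of injecting the whole document — which is where the token savings the abstract reports would come from.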

12 pages · 4 figures · 4 tables
Artificial Intelligence (cs.AI) · Databases (cs.DB) · Information Retrieval (cs.IR) · Multiagent Systems (cs.MA)
arXiv:2604.27820
02 · 19 Apr 2026
cs.AI

Phase-Scheduled Multi-Agent Systems for Token-Efficient Coordination

Mohit Dubey · Open Gigantic

Multi-agent systems (MAS) powered by large language models suffer from severe token inefficiency arising from two compounding sources: (i) unstructured parallel execution, where all agents activate simultaneously irrespective of input readiness; and (ii) unrestricted context sharing, where every agent receives the full accumulated context regardless of relevance. Existing mitigation strategies — static pruning, hierarchical decomposition, and learned routing — treat coordination as a structural allocation problem and fundamentally ignore its temporal dimension. We propose Phase-Scheduled Multi-Agent Systems (PSMAS), a framework that reconceptualizes agent activation as continuous control over a shared attention space modeled on a circular manifold. Each agent i is assigned a fixed angular phase θᵢ ∈ [0, 2π], derived from the task dependency topology; a global sweep signal φ(t) rotates at velocity ω, activating only agents within an angular window ε. Idle agents receive compressed context summaries, reducing per-step token consumption. We implement PSMAS on LangGraph, evaluate on four structured benchmarks (HotPotQA-MAS, HumanEval-MAS, ALFWorld-Multi, WebArena-Coord) and two unstructured conversational settings, and prove stability, convergence, and optimality results for the sweep dynamics. PSMAS achieves a mean token reduction of 27.3% (range 21.4–34.8%) while maintaining task performance within 2.1 percentage points of a fully activated baseline (p < 0.01, n = 500 per configuration), and outperforms the strongest learned routing baseline by 5.6 percentage points in token reduction with 2.0 percentage points less performance drop.
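The activation rule the abstract describes — agent i with fixed phase θᵢ fires only when the rotating sweep signal φ(t) falls within an angular window ε of it — can be sketched directly. A minimal sketch, assuming φ(t) = ωt mod 2π and a symmetric window; the variable names follow the abstract, but the exact window test is our assumption.

```python
import math

def circular_distance(a: float, b: float) -> float:
    """Shortest angular distance between two phases on the circle."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def active_agents(phases: list[float], omega: float, t: float,
                  epsilon: float) -> list[int]:
    """Indices of agents whose phase lies within epsilon of the sweep at time t.

    phases  -- fixed angular phase theta_i per agent, on [0, 2*pi)
    omega   -- sweep velocity of the global signal phi(t)
    epsilon -- half-width of the activation window
    """
    phi = (omega * t) % (2 * math.pi)  # current sweep position
    return [i for i, theta in enumerate(phases)
            if circular_distance(phi, theta) <= epsilon]
```

With phases derived from the task dependency topology, each full rotation of φ(t) activates agents in dependency order, while idle agents receive only compressed context — the temporal structure that static pruning and learned routing lack.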

8 pages · 3 figures
Artificial Intelligence (cs.AI) · Algebraic Topology (math.AT)
arXiv:2604.17400
More papers coming soon