MAYA RESEARCH SERIES · POST-SERIES LLM EXTENSION

Maya-LLM FAQ

Antahkarana in the Age of Transformers · Continual Fine-Tuning via Vedantic Mechanisms

Venkatesh Swaminathan · ORCID 0000-0002-3315-7907 · Nexus Learning Labs, Bengaluru

8.3% · BWT IMPROVEMENT · Condition F calibrated vs Condition A baseline
0.988 · BUDDHI S-CURVE · cross-substrate invariant: SNN P4–P9 and LLM identical (fit sketch below)
3 domains · BHAYA FIRED ON · Py150, ScienceQA, NumGLUE-ds (genuine loss spikes)
Scale · CALIBRATION NEEDED · SNN thresholds don't transfer directly to LLM loss scale
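
The Buddhi S-curve figure above is a goodness-of-fit statistic (R² = 0.988) for a logistic trajectory. The following is a minimal sketch of how such a fit can be computed, assuming a standard three-parameter logistic and synthetic checkpoint data; the series' actual Buddhi metric, the variable names, and the fitting procedure are assumptions here, not the authors' method.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    # Standard logistic S-curve: L / (1 + exp(-k * (t - t0))).
    return L / (1.0 + np.exp(-k * (t - t0)))

# Synthetic stand-in for a per-checkpoint Buddhi trajectory (assumption).
t = np.arange(20, dtype=float)
y = logistic(t, 1.0, 0.6, 9.0)
y += np.random.default_rng(0).normal(0.0, 0.02, t.size)

params, _ = curve_fit(logistic, t, y, p0=[1.0, 1.0, t.mean()])
resid = y - logistic(t, *params)
r2 = 1.0 - resid.var() / y.var()
print(f"logistic fit R^2 = {r2:.3f}")  # close to 1.0 for a clean S-curve

An identical R² from the SNN trajectories (P4–P9) and the LLM trajectory is what the card above calls a cross-substrate invariant.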
⚠ CALIBRATION NOTE · KEY FINDING

Condition F with SNN-calibrated hyperparameters (Bhaya threshold = 1.8×) performs worse than the Condition A baseline (BWT = 1.42 vs 1.05) because Bhaya fires on 18.8% of all steps at LLM loss scale, treating normal loss variance as catastrophic events. Recalibrating the threshold to 4.0×, matched to the LLM mean loss of ~1.08, achieves the best result (BWT = 1.11, an 8.3% improvement). This is not a failure: it is a quantified cross-substrate scaling finding with a clear path forward.
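
The mechanism behind the miscalibration is simple: Bhaya flags a step as catastrophic when its loss exceeds a multiple of a reference loss, so the same multiplier implies very different fire rates at different loss scales. The sketch below is a hypothetical reconstruction of that check; the function name and the running-mean reference are assumptions, and only the 1.8× and 4.0× multipliers and the ~1.08 LLM mean loss come from the note above.

import numpy as np

def bhaya_fire_rate(losses, threshold):
    # Fire when the current loss exceeds `threshold` times the running
    # mean of all previous losses (the reference choice is an assumption).
    running_mean = np.cumsum(losses) / np.arange(1, len(losses) + 1)
    fires = losses[1:] > threshold * running_mean[:-1]
    return fires.mean()

# Illustrative LLM-scale loss trace: mean near 1.08, heavy-tailed noise.
rng = np.random.default_rng(0)
losses = rng.lognormal(mean=np.log(1.08), sigma=0.35, size=10_000)

for thr in (1.8, 4.0):
    print(f"threshold={thr}x -> fire rate {bhaya_fire_rate(losses, thr):.2%}")

Exact fire rates depend on the variance of the trace, but the contrast holds: an SNN-calibrated 1.8× multiplier treats ordinary LLM loss variance as spikes, while 4.0× fires only on genuine outliers.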

MAYA RESEARCH SERIES · P1–P9 + POST-SERIES
P1 · Nociceptive · 66.6% ↑ velocity
P2 · Maya-OS · OS arbitration
P3 · Maya-CL · AA=62.38%
P4 · Maya-Smriti · AA=31.84%
P5 · Maya-Viveka · AA=16.03%
P6 · Maya-Chitta · AA=14.42%
P7 · Maya-Manas · AA=15.19%
P8 · Maya-Śūnyatā · 59% pruned
P9 · Maya-Prana · Full Antahkarana
POST-SERIES · Maya-mPCI · Δ=−0.0489
POST-SERIES · Maya-LLM · BWT=1.11 ★