Antahkarana in the Age of Transformers · Continual Fine-Tuning via Vedantic Mechanisms
Venkatesh Swaminathan · ORCID 0000-0002-3315-7907 · Nexus Learning Labs, Bengaluru
Condition F with SNN-calibrated hyperparameters (Bhaya threshold=1.8×) performs worse than baseline (BWT=1.42 vs 1.05) because Bhaya fires on 18.8% of all steps at LLM loss scale — treating normal variance as catastrophic events. Recalibrating threshold to 4.0× (matched to LLM mean loss ~1.08) achieves the best result (BWT=1.11, 8.3% improvement). This is not a failure — it is a quantified cross-substrate scaling finding with a clear path forward.