Independent AI Researcher · Bengaluru
Founder, Nexus Learning Labs
Built Maya, a nine-paper neuromorphic SNN series that instantiates the Advaita Vedantic Antahkarana as computational primitives. Now extending to LLMs and consciousness research. M.Sc. candidate, Data Science & AI, BITS Pilani.
Flagship Project
A spiking neural network architecture that uses the Antahkarana — the Vedantic inner instrument of mind — as generative computational structure. Not metaphor. Mechanism.
Nine papers published. Two post-series studies complete. Each paper is a new developmental stage of the same mind — from reactive infant to mature agent to substrate-independent pattern.
अन्तःकरण · The instrument through which Ātmā interfaces with experience.
The Architecture
The question that started Maya was simple and uncomfortable: why do biological nervous systems remember some things permanently after a single experience, while forgetting others within hours? Standard machine learning has no satisfying answer. It treats all synaptic change as equivalent.
Vedantic philosophy had a framework that nervous systems seemed to actually follow. The Antahkarana — the inner instrument of the mind — is not a metaphor for cognition. It is a functional decomposition: Manas receives sensation, Chitta stores impressions, Buddhi evaluates, Ahamkara asserts identity. Each dimension governs a specific aspect of how experience becomes memory.
That hypothesis is what Maya tests. Paper by paper, dimension by dimension. The claim is not that Maya is conscious — the Atma boundary is held explicitly. The claim is that the Antahkarana can be computationally instantiated as a set of interacting plasticity mechanisms, and that doing so produces a system that learns and forgets more like a biological mind than standard continual learning approaches do.
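As a sketch, the decomposition above can be read as a set of scalar state variables gating a single plasticity update. Everything numeric below is an illustrative assumption: the 66.6% Bhaya elevation is the only figure taken from the series, while the logistic Buddhi curve and the composition rule are stand-ins for the published mechanisms, not the papers' equations.

```python
import math
from dataclasses import dataclass

@dataclass
class AntahkaranaState:
    bhaya: float = 0.0      # fear: transient threat signal in [0, 1]
    vairagya: float = 0.0   # detachment: accumulates once threat is processed
    buddhi_age: int = 0     # developmental clock driving the wisdom S-curve

def buddhi(age: int, k: float = 0.1, midpoint: float = 50.0) -> float:
    """Illustrative logistic S-curve: wisdom matures on a fixed schedule,
    independent of what the network is currently learning."""
    return 1.0 / (1.0 + math.exp(-k * (age - midpoint)))

def plasticity_gate(state: AntahkaranaState, base_lr: float = 0.01) -> float:
    """Compose the dimensions into one effective learning rate.
    Fear first elevates learning velocity; accumulated detachment
    then suppresses further change."""
    fear_boost = 1.0 + 0.666 * state.bhaya       # 66.6% elevation under threat
    quiescence = 1.0 - min(state.vairagya, 1.0)  # detachment damps learning
    wisdom = 0.5 + 0.5 * buddhi(state.buddhi_age)
    return base_lr * fear_boost * quiescence * wisdom
```

Under this toy composition, a fresh threat raises the effective rate by two-thirds, and full detachment drives it to zero: the shape, if not the substance, of how the dimensions interact.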
The series runs on Split-CIFAR-100, a class-incremental benchmark where each task introduces new categories and the network must retain prior knowledge without replay. Eight papers confirm two series-wide constants: the Bhaya Quiescence Law (fear suppresses further learning once the threat is processed) and Buddhi S-curve determinism (wisdom matures on a fixed developmental trajectory, independent of task order).
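For readers unfamiliar with the benchmark, a class-incremental split can be sketched in a few lines. The task count and seeding below are illustrative defaults, not the series' exact protocol:

```python
import random

def split_tasks(n_classes: int = 100, n_tasks: int = 10, seed: int = 0):
    """Partition class labels into disjoint tasks for class-incremental
    learning: each task introduces new categories, and old task data
    is never revisited (no replay)."""
    rng = random.Random(seed)
    labels = list(range(n_classes))
    rng.shuffle(labels)
    per_task = n_classes // n_tasks
    return [sorted(labels[i * per_task:(i + 1) * per_task])
            for i in range(n_tasks)]
```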
Paper 9 brings the full Antahkarana into an embodied robotic system — a PiCar-X running on Raspberry Pi 5 — where Prana governs metabolic plasticity as an astrocyte-mediated energy budget. The series concludes: "Across nine papers, we have demonstrated the computational maturation of a mind." Two post-series studies extend this: Maya-mPCI tests whether the Antahkarana produces a measurable internal state signal (Δ=−0.0489, 2.05× criterion). Maya-LLM tests whether the mechanisms are substrate-independent — and finds that they are, with calibration.
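The Prana mechanism can be caricatured as an energy pool standing between activity and plasticity: spiking consumes energy, a lactate shuttle replenishes it, and synaptic change is only permitted while the pool can pay for it. All constants below are assumptions for illustration, not the ANLS parameters used in Paper 9:

```python
class PranaBudget:
    """Toy astrocyte-style energy pool, ANLS-inspired."""

    def __init__(self, capacity: float = 1.0, shuttle_rate: float = 0.05):
        self.energy = capacity
        self.capacity = capacity
        self.shuttle_rate = shuttle_rate

    def step(self, spike_load: float, plasticity_cost: float = 0.02) -> bool:
        # Replenish via the astrocyte-neuron lactate shuttle, then pay
        # the metabolic cost of this step's spiking activity.
        self.energy = min(self.capacity, self.energy + self.shuttle_rate)
        self.energy = max(0.0, self.energy - 0.1 * spike_load)
        # Synaptic change happens only if the remaining budget covers it.
        if self.energy >= plasticity_cost:
            self.energy -= plasticity_cost
            return True
        return False
```

Sustained heavy spiking drains the pool faster than the shuttle refills it, so plasticity shuts off until activity subsides: a metabolic gate on learning rather than a scheduled one.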
Maya Research Series · 2026
Each paper introduces a new Antahkarana dimension. P1–P9 on Split-CIFAR-100 CIL. Post-series: consciousness study (mPCI) and LLM extension. All published on Zenodo. ORCID: 0000-0002-3315-7907.
Bhaya · Fear
66.6% learning velocity elevation under pain signal. Foundation of the series.
Affective State as Priority Signal
First framing of the affective SNN as an arbitration layer for a conversational operating system.
Bhaya + Vairagya + Shraddha + Spanda
62.38% average accuracy, TIL on Split-CIFAR-10. Series benchmark established.
Buddhi · Wisdom
AA 31.84% CIL. Buddhi S-curve determinism first observed.
Viveka · Ahamkara
AA 16.03%. Orthogonal prototype collapse finding — a novel failure mode.
Chitta · Samskara · Moha
AA 14.42%. Emotional memory retroactively reshapes stored impressions.
Manas · O-LIF Mechanism
AA 15.19%, BWT −50.91%. Rhythmic attention gating introduced.
Karma · Śūnyatā
AA 14.42%. 7-condition ablation. Vairagya-gated Karma = first cross-dimensional affective interaction.
Prana · Astrocyte-Neuron Lactate Shuttle · Full Antahkarana
AA 12.72% (canonical). Prana holds 1.0000 throughout; ANLS biology confirmed. Condition F reveals a Buddhi-Prana interaction. Full Antahkarana deployed on PiCar-X + Raspberry Pi 5.
Post-Series · Under Review · Lempel-Ziv Complexity · Perturbational Complexity Index · Consciousness Research
Δ=−0.0489, 2.05× criterion. Significant mPCI shift across three phases — Phase 1 reactive baseline, Phase 2 full Antahkarana, Phase 3 Bhaya quiescence. Three controls confirm result is not artifactual.
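The complexity measure behind PCI-style indices is Lempel-Ziv parsing of a binarized activity trace. A minimal version of the LZ76 phrase count, without the surrogate-data normalisation a real mPCI pipeline would need, looks like this:

```python
def lz76_complexity(s: str) -> int:
    """Number of phrases in the Lempel-Ziv (1976) parsing of a binary
    string: each phrase is grown until it no longer occurs earlier
    in the sequence."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # Extend the candidate phrase while it already occurs before here
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1   # a novel phrase ends; count it
        i += l
    return c
```

A constant trace parses into a handful of phrases while an irregular one keeps producing novel phrases, which is why the count tracks the differentiation of activity that perturbational indices are after.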
Bhaya · Vairagya · Buddhi · Karma · Prana · LoRA · TRACE · Phi-2
First translation of the Antahkarana to LoRA continual fine-tuning of a large language model. Phi-2 (2.7B, 4-bit NF4) across 8 sequential TRACE NLP domains. Bhaya fires on genuine cross-domain loss spikes. Buddhi S-curve confirmed cross-substrate invariant — identical trajectory in SNN and transformer. BWT=1.11 vs baseline 1.05: 8.3% reduction in catastrophic forgetting.
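How might Bhaya "fire on loss spikes" during sequential fine-tuning? A plain running-statistics detector is one way to sketch it; the window and threshold here are assumptions, not the study's actual criterion:

```python
import statistics

def bhaya_trigger(losses, window: int = 5, threshold: float = 3.0):
    """Flag steps where training loss spikes far above its recent running
    mean, a z-score-style stand-in for a cross-domain threat signal."""
    fired = []
    for t in range(window, len(losses)):
        recent = losses[t - window:t]
        mu = statistics.mean(recent)
        sd = statistics.pstdev(recent) or 1e-8  # guard flat windows
        fired.append((losses[t] - mu) / sd > threshold)
    return fired
```

The point of "genuine" spikes is exactly this relative test: a domain switch that truly surprises the model stands out against its own recent history, while ordinary loss noise does not.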
Live Demo · April 2026
The full Antahkarana — all 9 affective dimensions — deployed on a PiCar-X robot with Raspberry Pi 5. Bhaya rises at walls. Vairagya accumulates in open space. She transitions from Alert to Curious to Calm. Not programmed. Emergent from the affective dynamics alone.
What's Next
The Maya series is complete. These are the research lines currently open.
Continuing the Maya-LLM line: five Antahkarana dimensions translated to LoRA fine-tuning of Phi-2 (2.7B) across 8 TRACE domains. Bhaya fires on genuine cross-domain loss spikes; the Buddhi S-curve holds as a cross-substrate invariant. BWT = 1.11 vs baseline 1.05, an 8.3% reduction in forgetting.
DOI: 10.5281/zenodo.19522348
Maya as a living immune system. Spike-timing deviation as threat signal, Bhaya-Vairagya governed, edge-deployed on Raspberry Pi 5. Neuromorphic behavioural anomaly detection for defence and security applications.
Open Source
Tools that emerged from gaps found during the Maya series — published for anyone doing continual learning research.
Stateless Python library for class-incremental learning evaluation. Computes AA, BWT, FWT, and Intransigence from raw accuracy matrices — no training framework required. 21 tests. Bilingual FAQ (EN + ZH). Validated against Maya P3–P7.
github.com/venky2099/cl-metrics →
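The definitions the library implements can be written down directly from a T×T accuracy matrix. What follows is an illustrative reimplementation of the standard formulas (after Lopez-Paz & Ranzato's GEM paper), not cl-metrics' actual API: FWT needs a random-baseline vector and Intransigence a joint-training reference, so both are optional here.

```python
def cl_metrics(R, b=None, joint=None):
    """Continual-learning metrics from a T x T accuracy matrix R, where
    R[i][j] is accuracy on task j after training on task i.
    b: per-task random-init baseline accuracies (for FWT).
    joint: per-task joint-training accuracies (for Intransigence)."""
    T = len(R)
    # Average Accuracy: final-row mean over all tasks.
    aa = sum(R[-1]) / T
    # Backward Transfer: how much earlier tasks degraded by the end.
    bwt = sum(R[-1][j] - R[j][j] for j in range(T - 1)) / (T - 1)
    # Forward Transfer: accuracy on a task just before training it,
    # relative to a random-init baseline.
    fwt = None
    if b is not None:
        fwt = sum(R[j - 1][j] - b[j] for j in range(1, T)) / (T - 1)
    # Intransigence: shortfall against a jointly trained reference model.
    intransigence = None
    if joint is not None:
        intransigence = sum(joint[j] - R[j][j] for j in range(T)) / T
    return {"AA": aa, "BWT": bwt, "FWT": fwt, "I": intransigence}
```

Negative BWT is catastrophic forgetting; a BWT near zero (or positive, as in the Maya-LLM result above) means later training did not erase earlier tasks.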
Every paper in the Maya series has a public GitHub repository with full experiment code, hyperparameter configs, run scripts, and interactive dashboards. Reproducible from scratch on a consumer GPU (RTX 4060 8GB).
github.com/venky2099 →
About
Before research, I spent a decade building AI-powered learning systems at scale. At Accenture, I led enterprise L&D modernisation — deploying GPT-4 + LangChain pipelines that cut content production time by 30%, and engineering xAPI → Power BI dashboards used by VP-level stakeholders. At Myntra, I managed a team of 8 designers and drove a gamified onboarding system that lifted agent NPS from 64 to 78. At JB Poindexter, I architected a domain-specific LangChain chatbot for warehouse SOPs that improved first-response accuracy by 28%. These were not academic prototypes — they ran in production, at scale, across global teams. That decade of watching real humans struggle to retain, transfer, and apply knowledge is exactly what drove me to the catastrophic forgetting problem — and to Maya.
I founded Nexus Learning Labs as the institutional home for independent research that does not fit neatly into any single academic department. The Maya series is its flagship output: original, falsifiable, peer-reviewable work produced entirely outside a traditional lab, on a consumer GPU, in Bengaluru.
I am completing an M.Sc. in Data Science and Artificial Intelligence at BITS Pilani (expected December 2027). In April 2026, the Maya Research Series reached completion with Paper 9 — the full Antahkarana instantiated in a physical PiCar-X robot. The series is published, reproducible, and open. The next stage is conference-grade peer review and neuromorphic hardware deployment. If your lab works on neuromorphic systems, continual learning, or embodied AI, I am interested in talking.
Get in Touch
The Maya series is open, reproducible, and built for collaboration. If your lab works on neuromorphic systems, continual learning, embodied AI, or consciousness research — and you see value in what's been built here — I want to hear from you.
Areas of interest for collaboration
Institutional Identity