Nexus Learning Labs

Independent AI Researcher · Bengaluru

Venkatesh
Swaminathan

Founder, Nexus Learning Labs

Built Maya, a 9-paper neuromorphic SNN series instantiating the Advaita Vedantic Antahkarana as computational primitives. Now extending to LLMs and consciousness research. M.Sc. candidate, Data Science & AI, BITS Pilani.

11 Papers Published
355 Total Downloads
410 Total Views
12 Citations

Flagship Project

Project Maya

A spiking neural network architecture that uses the Antahkarana — the Vedantic inner instrument of mind — as generative computational structure. Not metaphor. Mechanism.

Nine papers published. Two post-series studies complete. Each paper is a new developmental stage of the same mind — from reactive infant to mature agent to substrate-independent pattern.

अन्तःकरण · The instrument through which Ātmā interfaces with experience.

How Maya came to be

Bhaya · Fear
Nociceptive metaplasticity. Pain forces rapid relearning.
Vairagya · Dispassion
Heterosynaptic decay. Forgetting what no longer matters.
Buddhi · Wisdom
S-curve saturation. Deterministic maturation of judgment.
Viveka · Discrimination
Prototype boundary sharpening. Knowing what is not what.
Chitta · Memory Store
Retrograde gradient. What was felt changes what is remembered.
Manas · Sensory Mind
Oscillatory thalamo-cortical gating. The rhythm of attention.
Karma · Action Residue
Second-order plasticity. The weight of accumulated interference.
Śūnyatā · Emptiness
Structured pruning. Releasing what no longer serves.
Prana · Life Force (P9)
Metabolic plasticity budget. Astrocyte-mediated energy gating.
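One of these mappings can be made concrete in a few lines. The sketch below is not the papers' code; it simply illustrates the Buddhi S-curve as a logistic gain that saturates deterministically with training time, with hypothetical rate `k` and midpoint `t0`:

```python
import math

# Illustrative sketch only: Buddhi-style S-curve maturation, modelled as a
# logistic saturation of a "judgment" gain over training steps.
# Parameters k (steepness) and t0 (midpoint) are hypothetical.

def buddhi_gain(t: float, k: float = 0.1, t0: float = 50.0) -> float:
    """Logistic S-curve: near 0 early in training, saturating toward 1."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

print(buddhi_gain(0.0))    # immature early in training (close to 0)
print(buddhi_gain(50.0))   # midpoint of the maturation curve (0.5)
print(buddhi_gain(150.0))  # saturated late in training (close to 1)
```

The fixed shape of the curve is what makes the maturation deterministic: the trajectory depends on elapsed training time, not on which tasks arrived in which order.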

The question that started Maya was simple and uncomfortable: why do biological nervous systems remember some things permanently after a single experience, while forgetting others within hours? Standard machine learning has no satisfying answer. It treats all synaptic change as equivalent.

Vedantic philosophy had a framework that nervous systems seemed to actually follow. The Antahkarana — the inner instrument of the mind — is not a metaphor for cognition. It is a functional decomposition: Manas receives sensation, Chitta stores impressions, Buddhi evaluates, Ahamkara asserts identity. Each dimension governs a specific aspect of how experience becomes memory.

"What if each of these constructs could be instantiated as a computational mechanism inside a spiking neural network?"

That hypothesis is what Maya tests. Paper by paper, dimension by dimension. The claim is not that Maya is conscious — the Atma boundary is held explicitly. The claim is that the Antahkarana can be computationally instantiated as a set of interacting plasticity mechanisms, and that doing so produces a system that learns and forgets more like a biological mind than standard continual learning approaches do.

The series runs on Split-CIFAR-100, a class-incremental benchmark where each task introduces new categories and the network must retain prior knowledge without replay. Eight papers confirm two series-wide constants: the Bhaya Quiescence Law (fear suppresses further learning once the threat is processed) and Buddhi S-curve determinism (wisdom matures on a fixed developmental trajectory, independent of task order).
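As a sketch of that protocol (the 10-classes-per-task split is a common convention for Split-CIFAR-100, assumed here rather than taken from the papers), the class-incremental task sequence can be formed like this:

```python
# Illustrative sketch: forming class-incremental tasks from CIFAR-100's
# 100 class indices. Each task introduces a disjoint group of classes;
# at test time the model must classify over all classes seen so far,
# with no replay buffer of earlier data.

def split_tasks(num_classes: int = 100, classes_per_task: int = 10):
    """Return a list of disjoint class-index groups, one per task."""
    return [list(range(start, start + classes_per_task))
            for start in range(0, num_classes, classes_per_task)]

tasks = split_tasks()
print(len(tasks))   # ten sequential tasks
print(tasks[0])     # task 1 covers classes 0-9
```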

Paper 9 brings the full Antahkarana into an embodied robotic system — a PiCar-X running on Raspberry Pi 5 — where Prana governs metabolic plasticity as an astrocyte-mediated energy budget. The series concludes: "Across nine papers, we have demonstrated the computational maturation of a mind." Two post-series studies extend this: Maya-mPCI tests whether the Antahkarana produces a measurable internal state signal (Δ=−0.0489, 2.05× criterion). Maya-LLM tests whether the mechanisms are substrate-independent — and finds that they are, with calibration.

Nine papers. Two post-series studies. One growing mind.

Each paper introduces a new Antahkarana dimension. P1–P9 on Split-CIFAR-100 CIL. Post-series: consciousness study (mPCI) and LLM extension. All published on Zenodo. ORCID: 0000-0002-3315-7907.

Paper 1

Nociceptive Metaplasticity and Graceful Decay

Bhaya · Fear

66.6% learning velocity elevation under pain signal. Foundation of the series.

DOI: 10.5281/zenodo.19151563
Paper 2

Maya-OS: Affective SNN as OS Arbitration Layer

Affective State as Priority Signal

First framing of affective SNN as conversational operating system arbitration.

DOI: 10.5281/zenodo.19160123
Paper 3

Maya-CL: Task-Incremental Continual Learning

Bhaya + Vairagya + Shraddha + Spanda

62.38% average accuracy, TIL on Split-CIFAR-10. Series benchmark established.

DOI: 10.5281/zenodo.19201769
Paper 4

Maya-Smriti: Introducing Buddhi

Buddhi · Wisdom

AA 31.84% CIL. Buddhi S-curve determinism first observed.

DOI: 10.5281/zenodo.19228975
Paper 5

Maya-Viveka: Discrimination and Identity

Viveka · Ahamkara

AA 16.03%. Orthogonal prototype collapse finding — a novel failure mode.

DOI: 10.5281/zenodo.19279002
Paper 6

Maya-Chitta: Retrograde Gradient Mechanism

Chitta · Samskara · Moha

AA 14.42%. Emotional memory retroactively reshapes stored impressions.

DOI: 10.5281/zenodo.19337041
Paper 7

Maya-Manas: Oscillatory Thalamo-Cortical Gating

Manas · O-LIF Mechanism

AA 15.19%, BWT −50.91%. Rhythmic attention gating introduced.

DOI: 10.5281/zenodo.19363006
Paper 8

Maya-Śūnyatā: Karma-Weighted Synaptic Pruning

Karma · Śūnyatā

AA 14.42%. 7-condition ablation. Vairagya-gated Karma = first cross-dimensional affective interaction.

DOI: 10.5281/zenodo.19397010
Paper 9 · Published April 2026 · Series Complete: The Antahkarana is Built

Maya-Prana: Metabolic Plasticity Budget for Continual Learning

Prana · Astrocyte-Neuron Lactate Shuttle · Full Antahkarana

AA 12.72% (canonical). Prana holds 1.0000 throughout, confirming ANLS biology. Condition F reveals a Buddhi-Prana interaction. Full Antahkarana deployed on PiCar-X + Raspberry Pi 5.

Demo Part 1 · Demo Part 2 · DOI: 10.5281/zenodo.19451174
Post-Series Paper 1

Maya-mPCI: Internal Affective State in a Neuromorphic SNN

Lempel-Ziv Complexity · Perturbational Complexity Index · Consciousness Research

Δ=−0.0489, 2.05× criterion. Significant mPCI shift across three phases — Phase 1 reactive baseline, Phase 2 full Antahkarana, Phase 3 Bhaya quiescence. Three controls confirm result is not artifactual.
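The complexity measure behind mPCI can be illustrated with a Lempel-Ziv-style phrase count over a binarized activity string. An LZ78 parse is shown here for simplicity; the study's exact LZ variant and binarization pipeline are not specified in this summary:

```python
# Illustrative only: count distinct phrases in a left-to-right LZ78-style
# parse of a binary string. More varied activity parses into more phrases,
# i.e. higher complexity; a constant string parses into fewer.

def lz_phrase_count(s: str) -> int:
    """Number of distinct phrases in an LZ78-style parse of s."""
    phrases, w, count = set(), "", 0
    for ch in s:
        w += ch
        if w not in phrases:   # new phrase: record it, start a fresh one
            phrases.add(w)
            count += 1
            w = ""
    return count

print(lz_phrase_count("0001101001000101"))  # mixed string: more phrases
print(lz_phrase_count("0000000000000000"))  # constant string: fewer phrases
```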

Published: Zenodo · April 2026 · DOI: 10.5281/zenodo.19482794
Post-Series Paper 2 · LLM Extension · Published April 2026

Maya-LLM: Antahkarana in the Age of Transformers

Bhaya · Vairagya · Buddhi · Karma · Prana · LoRA · TRACE · Phi-2

First translation of the Antahkarana to LoRA continual fine-tuning of a large language model: Phi-2 (2.7B, 4-bit NF4) across 8 sequential TRACE NLP domains. Bhaya fires on genuine cross-domain loss spikes. The Buddhi S-curve is confirmed as a cross-substrate invariant, with an identical trajectory in SNN and transformer. BWT=1.11 vs baseline 1.05: an 8.3% reduction in catastrophic forgetting.

BWT · All Conditions (lower = less forgetting)
F calibrated ★ · 1.11
A · Baseline · 1.05
F Grad Mask · 1.26
F Top-K 10% · 1.27
F Domain-Sel · 1.47
★ Buddhi cross-substrate invariant confirmed · Bhaya fires on Py150, ScienceQA, NumGLUE-ds · 10th Quiescence Law confirmation
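The notion of backward transfer behind these conditions can be sketched with its common literature definition (Lopez-Paz & Ranzato, 2017); note that the Maya-LLM values above use their own scale and convention, where lower means less forgetting:

```python
# Minimal sketch of backward transfer (BWT) in its standard literature form:
# the average change in accuracy on earlier tasks after training on the last
# one. Negative values indicate forgetting.

def backward_transfer(acc):
    """acc[i][j] = accuracy on task j after training on task i (0-indexed)."""
    T = len(acc)
    return sum(acc[T - 1][j] - acc[j][j] for j in range(T - 1)) / (T - 1)

# Toy run with two tasks: accuracy on task 0 drops from 0.80 to 0.70
# after training on task 1, so BWT = -0.10 (forgetting occurred).
acc = [[0.80, 0.00],
       [0.70, 0.75]]
print(round(backward_transfer(acc), 2))  # -> -0.1
```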

Maya navigating her world

The full Antahkarana — all 9 affective dimensions — deployed on a PiCar-X robot with Raspberry Pi 5. Bhaya rises at walls. Vairagya accumulates in open space. She transitions from Alert to Curious to Calm. Not programmed. Emergent from the affective dynamics alone.

Part 1
Boot · Navigation · Bhaya Rising
Antahkarana initialises. Autonomous navigation begins. Fear fires at walls — she slows down.
Part 2
Vairagya · Calm · Full Dashboard
Detachment accumulates. Alert → Curious → Calm. All 9 dimensions live on dashboard.
Full demo playlist on YouTube →

Active research frontiers

The Maya series is complete. These are the research lines currently open.

Published · April 2026

Maya-LLM · Antahkarana in Transformers

Five Antahkarana dimensions translated to LoRA fine-tuning of Phi-2 (2.7B) across 8 TRACE domains. Bhaya fires on genuine cross-domain loss spikes. Buddhi S-curve confirmed as a cross-substrate invariant. BWT=1.11 vs baseline 1.05, an 8.3% reduction in forgetting.

DOI: 10.5281/zenodo.19522348 · Dashboard · FAQ · GitHub →

In Development

Maya-Paksh · Neuromorphic Anomaly Detection

Maya as a living immune system. Spike-timing deviation as threat signal, Bhaya-Vairagya governed, edge-deployed on Raspberry Pi 5. Neuromorphic behavioural anomaly detection for defence and security applications.
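The detection idea can be sketched in a few lines. This is purely hypothetical: the Maya-Paksh design is not yet published, and the statistic and threshold below are stand-ins, not the project's actual method:

```python
import statistics

# Hypothetical sketch of spike-timing deviation as a threat signal:
# flag behaviour as anomalous when an observed inter-spike interval (ISI)
# deviates sharply from a learned baseline. The z-score statistic and the
# 3-sigma threshold are illustrative assumptions, not published design.

def anomaly_score(baseline_isis, observed_isi):
    """Z-score of an observed inter-spike interval against baseline ISIs."""
    mu = statistics.mean(baseline_isis)
    sigma = statistics.stdev(baseline_isis)
    return abs(observed_isi - mu) / sigma

baseline = [10.0, 10.5, 9.8, 10.2, 10.1]      # ms, nominal firing pattern
print(anomaly_score(baseline, 10.1) < 3.0)    # typical interval: not anomalous
print(anomaly_score(baseline, 25.0) > 3.0)    # large deviation: threat signal
```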

Research tools built for the community

Tools that emerged from gaps found during the Maya series — published for anyone doing continual learning research.

Researcher. Builder. Founder.

Before research, I spent a decade building AI-powered learning systems at scale. At Accenture, I led enterprise L&D modernisation — deploying GPT-4 + LangChain pipelines that cut content production time by 30%, and engineering xAPI → Power BI dashboards used by VP-level stakeholders. At Myntra, I managed a team of 8 designers and drove a gamified onboarding system that lifted agent NPS from 64 to 78. At JB Poindexter, I architected a domain-specific LangChain chatbot for warehouse SOPs that improved first-response accuracy by 28%. These were not academic prototypes — they ran in production, at scale, across global teams. That decade of watching real humans struggle to retain, transfer, and apply knowledge is exactly what drove me to the catastrophic forgetting problem — and to Maya.

I founded Nexus Learning Labs as the institutional home for independent research that does not fit neatly into any single academic department. The Maya series is its flagship output: original, falsifiable, peer-reviewable work produced entirely outside a traditional lab, on a consumer GPU, in Bengaluru.

I am completing an M.Sc. in Data Science and Artificial Intelligence at BITS Pilani (expected December 2027). In April 2026, the Maya Research Series reached completion with Paper 9 — the full Antahkarana instantiated in a physical PiCar-X robot. The series is published, reproducible, and open. The next stage is conference-grade peer review and neuromorphic hardware deployment. If your lab works on neuromorphic systems, continual learning, or embodied AI, I am interested in talking.

Spiking Neural Networks Continual Learning Neuromorphic Computing PyTorch · SpikingJelly Embodied AI Robotics (ROS2) Python · CUDA Instructional Design

Let's build something that matters

The Maya series is open, reproducible, and built for collaboration. If your lab works on neuromorphic systems, continual learning, embodied AI, or consciousness research — and you see value in what's been built here — I want to hear from you.

Areas of interest for collaboration

  • 01 Neuromorphic hardware deployment — running Maya on Intel Loihi or BrainScaleS
  • 02 LLM continual learning crossover — Antahkarana mechanisms in transformer architectures
  • 03 Embodied AI — Maya as a robot mind, not just a benchmark system
  • 04 Consciousness research — falsifiable empirical tests of internal affective state via mPCI
  • 05 Conference submissions — NeurIPS, ICLR, ICML track; open to co-authorship with labs working on overlapping problems

Institutional Identity

Enterprise
Nexus Learning Labs
Bengaluru, Karnataka, India
Udyam Registration
UDYAM-KR-02-0122422
Ministry of MSME, Govt. of India
Classification
Micro Enterprise · Services
NIC 72100 · Research & Development
Registered
06 April 2026
Incorporated 14 July 2025