01 Key Results
What did this paper actually achieve? Five numbers tell the story.
Average Accuracy
15.19%
Full Maya-Manas · E
+0.84pp vs P6
How well the network remembers everything it has ever learned
Best BWT in Series
−50.91%
Least forgetting ever
Series record
How much it forgot old lessons while learning new ones — this is the lowest forgetting in 7 papers
A_manas canonical
0.10
Calibrated amplitude
Half-cycle O-LIF
The dial setting that worked best — like tuning a radio to the clearest signal
Bhaya Quiescence
0.000
Under replay · all conditions
7th confirmation
The fear signal goes completely silent once the network has something to compare against
Negative Control
B = A
Structure ≠ mechanism
Oscillation is key
Proof that just adding the wiring does nothing — the oscillation itself is what matters
02 O-LIF Mechanism — Vikalpa → Sankalpa
How does the gate actually work? Think of it like waking up — your senses open gradually, and only what's loud enough to reach you in those first moments truly registers.
Threshold Schedule Across T_STEPS = 4
V_threshold(t) = V_base + A_manas · cos(π · t / (T_STEPS − 1))
Half-cycle descent: maximum suppression at t = 0, full receptivity at t = 3.
t = 0 · 0.400 · Vikalpa — doubt · ★ Peak-aligned · Only the loudest signals survive
t = 1 · 0.350 · descending · Gate beginning to open
t = 2 · 0.250 · opening · More signals allowed through
t = 3 · 0.200 · Sankalpa — resolve · Fully receptive — all signals welcome
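The schedule above follows directly from the half-cycle formula. A minimal sketch, assuming V_base = 0.3 (inferred from the tabulated values, since V(0) = 0.400 with the canonical A_manas = 0.10); this is an illustration, not the paper's implementation:

```python
import math

T_STEPS = 4
V_BASE = 0.3    # inferred: V(0) = 0.400 when A_manas = 0.10
A_MANAS = 0.10  # canonical calibrated amplitude

def v_threshold(t: int, a: float = A_MANAS, v_base: float = V_BASE) -> float:
    """Half-cycle cosine descent: peak suppression at t=0, fully open at t=T_STEPS-1."""
    return v_base + a * math.cos(math.pi * t / (T_STEPS - 1))

schedule = [round(v_threshold(t), 3) for t in range(T_STEPS)]
print(schedule)  # [0.4, 0.35, 0.25, 0.2]
```

The rounded values reproduce the four tabulated thresholds exactly.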
Manas-GANE Intersection:
Only synapses that fire at t=0 (peak threshold, Vikalpa phase) qualify for amplified Vairagya protection. Noise spikes riding the open window at t=3 receive no amplification. The gate selects for saliency, not abundance.
In plain terms: Only signals that were strong enough to get through the tight early filter earn long-term protection. Signals that snuck in through the open window later get no bonus. The network rewards what mattered first.
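The intersection rule can be sketched as a simple set filter. All names and data here are hypothetical illustrations of the stated rule (synapses qualify only if their presynaptic neuron fired at t = 0 and GANE flags them as salient); the paper's actual data structures are not shown in this summary:

```python
def protected_synapses(spikes_t0, gane_important):
    """Manas-GANE intersection (sketch): a synapse (pre, post) earns amplified
    Vairagya protection only if its presynaptic neuron fired during the
    Vikalpa phase (t=0) AND GANE flags the synapse as important."""
    return {(pre, post) for (pre, post) in gane_important if pre in spikes_t0}

spikes_t0 = {0, 3}                       # only the loudest inputs crossed V(0) = 0.400
gane = {(0, 7), (1, 7), (3, 2), (5, 2)}  # hypothetical GANE-salient synapses
print(sorted(protected_synapses(spikes_t0, gane)))  # [(0, 7), (3, 2)]
```

Synapses (1, 7) and (5, 2) are GANE-salient but their presynaptic neurons fired only after the gate opened, so they receive no amplification.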
03 Ablation Study — 5 Conditions
We ran 5 versions of the experiment — each changing one thing — to prove the oscillation itself caused the improvement, not something else.
Average Accuracy (AA %)
Higher is better · Split-CIFAR-100 CIL · 10 tasks · seed=42
Backward Transfer (BWT %)
Less negative = less forgetting · series record at −50.91%
E ★ · −50.91%
D · −51.87%
A · −52.68%
B · −52.68%
C · −53.00%
★ E achieves best BWT in the entire Maya series — Manas protects prior knowledge
04 Interactive Experiment Explorer
Drag the slider and watch the experiment change in real time. This is the actual data — no simulation.
A_manas Amplitude Explorer
Drag the slider to explore how the oscillation amplitude shapes the O-LIF threshold schedule, predicted accuracy, and network behaviour. Values at A=0, 0.05, 0.10, 0.25 are real experimental results — intermediate values are interpolated from the ablation data.
What you're controlling: How aggressively the gate filters signals at each timestep. Too low — the gate barely does anything. Too high — the network goes blind and can't learn properly. The sweet spot is A=0.10, marked with ★.
Predicted AA
15.19%
canonical result
Predicted BWT
−50.91%
best in series
Peak-aligned frac.
~32%
t=0 salient spikes
O-LIF Threshold Schedule — Live Preview
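What the live preview computes can be sketched by sweeping the half-cycle formula across the four measured amplitudes. V_base = 0.3 is assumed here, inferred from the canonical schedule in Section 02:

```python
import math

T_STEPS, V_BASE = 4, 0.3  # V_BASE inferred from the canonical schedule

def schedule(a: float) -> list[float]:
    """Threshold at each timestep for oscillation amplitude a."""
    return [round(V_BASE + a * math.cos(math.pi * t / (T_STEPS - 1)), 3)
            for t in range(T_STEPS)]

for a in (0.0, 0.05, 0.10, 0.25):  # the four measured amplitudes
    print(a, schedule(a))
# 0.0  -> [0.3, 0.3, 0.3, 0.3]       (flat gate: negative control)
# 0.25 -> [0.55, 0.425, 0.175, 0.05] (steep gate: early spike starvation)
```

At A = 0 the gate is flat and does nothing; at A = 0.25 the t = 0 threshold nearly doubles, which is consistent with the spike-starvation behaviour reported for condition C.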
05 Accuracy Matrix R[i][j]
Each row = after training one more task. Each column = how well it still remembers that task. The highlighted diagonal = fresh performance. Everything below = memory retained. Ideally nothing drops to zero.
Per-Task Accuracy After Each Training Task
R[i][j] = accuracy on task j evaluated after training on task i. Diagonal = immediate accuracy. Below diagonal = retained knowledge. Above = unseen tasks.
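From such a matrix, the two headline metrics can be computed with the standard continual-learning definitions (AA as the mean of the final row, BWT as the mean drop from each task's immediate accuracy). A minimal sketch with illustrative numbers, not the paper's data:

```python
def aa(R: list[list[float]]) -> float:
    """Average accuracy: mean over tasks j of the final row R[T-1][j]."""
    T = len(R)
    return sum(R[T - 1]) / T

def bwt(R: list[list[float]]) -> float:
    """Backward transfer: mean of R[T-1][j] - R[j][j] over j < T-1.
    More negative means more forgetting."""
    T = len(R)
    return sum(R[T - 1][j] - R[j][j] for j in range(T - 1)) / (T - 1)

# Toy 3-task matrix (illustrative values only)
R = [
    [80.0,  0.0,  0.0],
    [55.0, 78.0,  0.0],
    [40.0, 50.0, 75.0],
]
print(aa(R), bwt(R))  # 55.0 -34.0
```

Under these definitions, the paper's reported AA = 15.19% and BWT = −50.91% would come from the 10×10 matrix visualised in this section.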
06 Affective Signals Across Tasks
Two internal signals tracked throughout training — how well the gate selected salient signals, and how much of the network was locked in and protected.
Manas Signal — Peak-aligned Fraction per Task
Fraction of fc1 neurons firing during Vikalpa phase (t=0). High amplitude suppresses early spikes; calibrated A=0.10 maintains ~32% salient fraction throughout.
What fraction of the network's signals were strong enough to pass the early tight filter. Think of it as — what percentage truly earned their place.
Vairagya Protection % per Task
Fraction of fc1 synapses above protection threshold. Full Manas shows slower accumulation but more targeted protection via Manas-GANE intersection.
How much of the network is locked in and protected from being overwritten by new lessons. Higher = better long-term memory.
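Both plotted signals reduce to simple fractions over per-timestep spike records and per-synapse Vairagya values. A hypothetical sketch (data structures and names are assumptions, not the paper's code):

```python
def peak_aligned_fraction(spikes: list[list[bool]]) -> float:
    """spikes[t][n] is True if fc1 neuron n fired at timestep t.
    Returns the fraction of neurons firing in the Vikalpa phase (t=0)."""
    fired_t0 = spikes[0]
    return sum(fired_t0) / len(fired_t0)

def protection_fraction(vairagya: list[list[float]], threshold: float) -> float:
    """Fraction of fc1 synapses whose Vairagya value exceeds the protection threshold."""
    flat = [v for row in vairagya for v in row]
    return sum(v > threshold for v in flat) / len(flat)

spikes = [[True, False, True, False],  # t=0: tight gate, 2 of 4 neurons fire
          [True] * 4, [True] * 4, [True] * 4]  # gate open, all fire
print(peak_aligned_fraction(spikes))                        # 0.5
print(protection_fraction([[0.9, 0.1], [0.6, 0.2]], 0.5))   # 0.5
```

With the calibrated A = 0.10, the paper reports the first quantity holding near ~32% across tasks.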
07 Bhaya Quiescence Law — 7th Confirmation
Bhaya is the fear signal. Once the network has something to compare against, fear goes silent — every time, without exception. Seven papers in a row. We've named it a law.
Bhaya Signal per Task · All Conditions
In any Maya series SNN with a functioning replay buffer, Bhaya firing rate approaches 0.000 from Task 1 onward — regardless of oscillation amplitude. Confirmed across seven consecutive papers. Now a named series law.
🔒
All Conditions Converge — The Law Holds
From T2 onward, every condition — spike starvation (C), optimal gating (E), no oscillation (A/B) — shows Bhaya = 0.000.
The oscillatory gate makes no difference: the replay buffer dominates.
This is the 7th consecutive confirmation of the Bhaya Quiescence Law across the Maya series.
T0 — no replay yet, small Bhaya spike
T1 — replay begins, Bhaya → 0.001
T2–T9 · Quiescence Law confirmed · 0.000 ✓
Note on Condition C (High Amplitude):
Even with A_manas=0.25 causing spike starvation at early timesteps, the Bhaya Quiescence Law still holds — confirming it is driven by the replay mechanism, not the oscillatory gate.
08 Key Findings
Four things this paper proves — stated plainly alongside the science.
Primary Finding
+0.84pp
O-LIF oscillatory gating adds +0.84pp AA over the P6 Maya-Chitta baseline — an isolatable, falsifiable contribution confirmed by the 5-condition ablation.
The oscillation genuinely helped — and we can prove it wasn't anything else.
BWT Series Record
−50.91%
Full Maya-Manas achieves the best backward transfer in the Maya series. Manas doesn't just improve accuracy — it reduces forgetting more effectively than any prior mechanism.
This paper's network forgot less than any previous version in the series.
Negative Control
B = A
Manas structure wired with A_manas=0 contributes exactly zero gain over baseline. The oscillation is the mechanism — not the architectural wiring. A clean negative control.
We proved the gate needs to actually open — structure alone does nothing.
Spike Starvation
−1.34pp
A_manas=0.25 suppresses early timestep spikes so aggressively that the network cannot accumulate Vairagya protection, leading to AA below all other conditions.
When the filter is too tight, the network goes partially blind and performance collapses.
09 Maya Research Series
Each paper adds one new cognitive dimension from Advaita Vedanta philosophy, implemented as a concrete, testable computational mechanism. This is paper 7 of a planned 9.
P1
Nociceptive Metaplasticity
Bhaya · Vairagya · Shraddha · Spanda
Foundation — lability + heterosynaptic decay
P2
Maya-OS
Affective SNN as OS arbitration layer
Ahamkara dimension
P3
Maya-CL
Split-CIFAR-10 TIL
AA = 62.38%
P4
Maya-Smriti
Split-CIFAR-100 CIL · Buddhi dimension
AA = 31.84%
P5
Maya-Viveka
Split-CIFAR-100 CIL · GANE intersection
AA = 16.03%
P6
Maya-Chitta
Samskara traces · Moha release
AA = 14.42%
P7
Maya-Manas ← You are here
O-LIF · Vikalpa→Sankalpa · Manas-GANE
AA = 15.19% · BWT = −50.91% ★ · DOI →
Series AA Progression
CIL papers only (P4–P7) · Split-CIFAR-100
IP Protection Active
All outputs signed via sign_paper.py
ORCID magic: 0.002315
LSB steganography: active
Canary: MayaNexusVS2026
Watermark: all figures