About Me

I'm currently an independent researcher working toward brain-like foundation models. My work focuses on adaptive computation, continuous-depth models, optimization algorithms, and making black-box models transparent.

I'm also a huge fan of the engineering side of AI, such as training large models on GPU clusters and developing frameworks for analyzing models during inference.

Current Interests

I'm particularly interested in:

• Adaptive computation and dynamic neural architectures
• Reasoning and interpretability in AI systems
• Designing efficient networks for real-world applications

Blog Posts

Stay tuned for an interesting one!
March 6, 2025 · research

Publications

Void in Language Models
Mani Shemiranifar
arXiv preprint, 2025

Despite advances in transformer-based language models (LMs), a fundamental question remains largely unanswered: Are all layers activated during inference? We investigate this question by detecting unactivated layers (which we refer to as Voids) using a non-trainable and parameter-free adaptive computation method called L2 Adaptive Computation (LAC). We adapt LAC from its original efficiency-focused application to trace activated layers during inference. This method monitors changes in the L2-norm of activations to identify voids. We analyze layer activation in instruction-tuned LMs across two phases: Prompt Processing (PP), where we trace activated layers for each token in the input prompts, and Response Generation (RG), where we trace activated layers for each generated token. We further demonstrate that distinct layers are activated during these two phases. To show the effectiveness of our method, we evaluated three distinct instruction-tuned LMs from the Llama, Mistral, and Qwen families on three benchmarks: MMLU, GPQA Diamond, and BoolQ. For example, on MMLU with a zero-shot setting, skipping voids in Qwen2.5-7B-Instruct resulted in an improvement from 69.24 to 71.29 while the model uses only 30% of the layers. Similarly, Mistral-7B-Instruct-v0.3 on GPQA Diamond improved from 13.88 to 18.36 when using 70% of the layers during both the PP and RG phases. These results show that not all layers contribute equally during inference, and that selectively skipping most of them can improve the performance of models on certain tasks.
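As a rough illustration of the idea, here is a minimal PyTorch sketch of detecting voids by tracking the L2 norm of a token's activations across layers. The relative-change criterion, the 0.05 threshold, and the function name detect_voids are illustrative assumptions, not the exact rule from the paper.

# Sketch: flag layers whose application barely changes the activation's L2 norm.
# The relative-change test and threshold are assumptions for illustration.
import torch

def detect_voids(hidden_states, threshold=0.05):
    # hidden_states: per-layer activations for one token, each of shape
    # (hidden_dim,), with the embedding output at index 0.
    voids = []
    prev_norm = torch.linalg.vector_norm(hidden_states[0])
    for layer_idx, h in enumerate(hidden_states[1:], start=1):
        norm = torch.linalg.vector_norm(h)
        rel_change = torch.abs(norm - prev_norm) / (prev_norm + 1e-8)
        if rel_change < threshold:  # layer left the norm nearly unchanged
            voids.append(layer_idx)
        prev_norm = norm
    return voids

# Toy usage: an embedding output plus 12 "layers" of random activations.
token_states = [torch.randn(768) for _ in range(13)]
print(detect_voids(token_states))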

L2 Norm Guided Adaptive Computation
Mani Shemiranifar, Mostafa Dehghani
ICLR TinyPapers, 2023

Although the human brain can adjust the amount of time and energy it uses to solve problems of varying complexity, many standard neural networks require a fixed computation budget regardless of the problem's complexity. This work introduces L2 Adaptive Computation (LAC), a new algorithm that adjusts the computation budget by tracking changes in the L2 norm of a neural network's hidden state as layers are applied to the input. Unlike previous methods, LAC does not require additional trainable modules or auxiliary loss terms to make halting decisions. LAC matches the results of the best-performing methods on a complex synthetic task and improves image classification accuracy while also increasing efficiency.
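To give a feel for the halting mechanism, here is a minimal sketch of norm-guided halting on a toy stack of linear layers. The fixed relative-change tolerance and the L2HaltingStack module are assumptions for illustration; the paper's actual stopping rule may differ, but, as in LAC, no trainable halting module or auxiliary loss is needed.

# Sketch: stop applying layers once the hidden state's L2 norm stabilizes.
# The stopping rule (relative norm change below a fixed tolerance) is an
# illustrative assumption, not the paper's exact criterion.
import torch
import torch.nn as nn

class L2HaltingStack(nn.Module):
    def __init__(self, dim=64, depth=12, tol=1e-2):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])
        self.tol = tol

    def forward(self, x):
        prev_norm = torch.linalg.vector_norm(x)
        for step, layer in enumerate(self.layers, start=1):
            x = torch.tanh(layer(x))
            norm = torch.linalg.vector_norm(x)
            # Halt once another layer no longer moves the L2 norm appreciably.
            if torch.abs(norm - prev_norm) / (prev_norm + 1e-8) < self.tol:
                return x, step
            prev_norm = norm
        return x, len(self.layers)

model = L2HaltingStack()
out, used = model(torch.randn(64))
print(f"halted after {used} layers")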