When Models Go Wrong: Detecting Failures and Hidden Biases in Classification Models | Colloquium with Dr. Rakshith Subramanyam

Dr. Rakshith Subramanyam

When

Noon – 1 p.m., Oct. 27, 2025

Join us for the next College of Information Science colloquium, featuring Dr. Rakshith Subramanyam, Lead AI Researcher at Axio AI.

Attend in person or via Zoom (registration is required):

Register Now


Modern classifiers can fail in surprising ways because their decision rules are not grounded in the attributes that matter, and calibration-based failure detectors often miss these cases.

I will introduce DECIDER, a multimodal approach that uses vision-language models to surface task-relevant attributes, trains a debiased model around them, and flags failures through disagreement between the original and debiased predictors. This approach yields state-of-the-art failure detection on subpopulation-shift benchmarks such as Waterbirds and CelebA, and extends to more complex domain shifts. It also enables human-interpretable explanations through attribute weights.
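The talk will cover the full method; purely as an illustration of the disagreement step, the minimal sketch below compares an original and a debiased predictor and flags inputs where they disagree. The names `original_model` and `debiased_model`, and the distance threshold, are stand-in assumptions, not the DECIDER implementation.

```python
# Illustrative sketch of disagreement-based failure flagging (not DECIDER itself).
import torch

def flag_failures(original_model, debiased_model, images, metric="argmax"):
    """Flag inputs where the original and debiased predictors disagree."""
    with torch.no_grad():
        p_orig = torch.softmax(original_model(images), dim=-1)
        p_deb = torch.softmax(debiased_model(images), dim=-1)
    if metric == "argmax":
        # Disagreement on the predicted label is treated as a likely failure.
        return p_orig.argmax(dim=-1) != p_deb.argmax(dim=-1)
    # Alternatively, threshold a distance between the two output distributions.
    tv_distance = 0.5 * (p_orig - p_deb).abs().sum(dim=-1)
    return tv_distance > 0.5  # threshold chosen arbitrarily for illustration

# Toy usage with stand-in linear predictors and random features
original_model = torch.nn.Linear(16, 3)
debiased_model = torch.nn.Linear(16, 3)
x = torch.randn(4, 16)
print(flag_failures(original_model, debiased_model, x))
```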

We will connect these ideas to visual and textual biases that creep into black-box large language model and vision-language model pipelines, and show practical probes to reveal, mitigate and communicate their impact.
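As one example of the kind of probe such an audit might use, the sketch below computes per-subgroup accuracy for a candidate spurious attribute and reports the gap between the best and worst groups. The attribute (background type) and the data are hypothetical stand-ins, assumed only for illustration.

```python
# Illustrative bias probe: compare accuracy across subgroups defined by a
# candidate spurious attribute (e.g., land vs. water background).
import torch

def subgroup_accuracy_gap(preds, labels, attribute):
    """Return per-group accuracy and the best-vs-worst accuracy gap."""
    accs = {}
    for g in attribute.unique().tolist():
        mask = attribute == g
        accs[g] = (preds[mask] == labels[mask]).float().mean().item()
    return accs, max(accs.values()) - min(accs.values())

# Toy usage with random predictions and a binary background attribute
preds = torch.randint(0, 2, (100,))
labels = torch.randint(0, 2, (100,))
background = torch.randint(0, 2, (100,))
per_group, gap = subgroup_accuracy_gap(preds, labels, background)
print(per_group, gap)
```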


About Rakshith Subramanyam

Rakshith Subramanyam, Ph.D., leads AI research at Axio, where he develops agentic LLM and retrieval-augmented systems for personalized learning and knowledge-aware reasoning. His research centers on VLLM uncertainty estimation, knowledge-graph-driven LLMs, failure detection, and trustworthy AI. He advances priors for vision and vision-language models to improve robustness, data efficiency, and explainability. His contributions include inverting out-of-distribution images into StyleGAN's latent space for high-fidelity reconstruction, ill-posed restoration, and semantic editing; single-shot GAN adaptation with target-aware synthesis; meta-learning improved by knowledge graphs and contrastive distillation; learnable prompting for visual relationship prediction; and methods that detect and explain model failures using vision-language priors. He has published this work in top-tier venues including ICML (Spotlight), ECCV, WACV, and ICASSP.