Intuitions by Hamidreza Saghir

Research

Selected publications and the questions I keep coming back to.

How do we know a system is doing what we think it’s doing? How do we shrink a big model into something that still works under tight constraints? And how do we get learned systems to behave robustly when the inputs drift away from the training distribution?

Current interests

  • Evaluation — measurement design, LLM-as-judge methods, and the long tail of “the metric went up but the product got worse.”
  • Agents and planning — how learned decision-makers behave in adversarial or long-horizon settings.
  • On-device & edge ML — compressing and factorizing transformers for strict memory / latency budgets.

Selected publications

  1. J. Niu et al. (incl. H. Saghir). Llama See, Llama Do: Contextual Entrainment and Distraction in LLMs. ACL 2025.
  2. M. Mirbagheri, H. Saghir, T. Chau. Mimicking Linguistic Features of Atypical Speech Transcripts. IEEE TASLP, 2025.
  3. L. Hebert et al. (incl. H. Saghir). Robust Candidate Generation for Entity Linking on Short Social Media. WNUT @ COLING 2022.
  4. H. Saghir, S. Choudhary, S. Eghbali, C. Chung. Factorization-Aware Training of Transformers for NLU on the Edge. Interspeech 2021.
  5. H. Tu, S. Choudhary, H. Saghir, R. McGowan. Multilingual Neural Language Models for On-Device NLU. The Web Conference (WWW) 2021.
  6. P. Xu, H. Saghir, et al. A Cross-Domain Transferable Neural Coherence Model. ACL 2019.
  7. H. Saghir, A. Dupuis, T. Chau, A. Kushki. Atypical autonomic nervous system complexity accompanies social cognition task performance in ASD. Research in Autism Spectrum Disorders, 2017.
  8. H. Saghir, T. Chau, A. Kushki. Clustering of time-evolving scaling dynamics in a complex signal. Physical Review E, 2016.

Full list on Google Scholar.