About me

I am a Visiting Assistant Professor in the Department of Computer Science (CS) at Purdue University. My research lies at the intersection of Natural Language Processing (NLP) and Computational Social Science (CSS). I earned my Ph.D. in CS from Purdue University, advised by Dr. Dan Goldwasser, and my M.Sc. in CS from Old Dominion University (ODU). Prior to joining ODU, I worked for two years as a software developer on the R&D team of Dohatec New Media. I completed my B.Sc. in Computer Science and Engineering (CSE) at Bangladesh University of Engineering and Technology (BUET).

Research Overview

We now live in a world where we can reach people directly through social media, without relying on traditional media such as television, radio, and print. These platforms not only facilitate massive reach but also collect extensive user data, enabling highly targeted advertising. While microtargeting can improve content relevance, it also raises serious concerns: manipulation of user behavior, creation of echo chambers, and amplification of polarization. My research is motivated by the premise that some of these risks can be mitigated by providing transparency, identifying conflicting or harmful messaging choices, and surfacing bias introduced in messaging in a nuanced way. I develop NLP and LLM-based methods to understand and analyze microtargeting dynamics: what messages are sent, to whom, and how they are received. My research delivers both CS artifacts (datasets, models, and human-in-the-loop and machine-in-the-loop frameworks) and empirical insights grounded in real-world data.

Understanding microtargeting and activity patterns presents major technical challenges. Messaging strategies are dynamic, context-dependent, and often opaque. Furthermore, user identities and motivations are typically hidden or ambiguous. My work addresses these challenges through several core research directions:

A growing focus of my research is the role of LLMs in enabling scalable and socially responsible analysis. This includes (but is not limited to):

Responsible AI Integration: How can LLMs function as post-hoc (third-party) tools to analyze patterns in targeted communication, especially when internal platform logic is not transparent? While platforms have white-box access, external stakeholders (researchers, auditors, policymakers) do not. My method offers an explainable approach to reverse-engineer targeting practices and uncover potential bias or messaging disparities.

Human‑AI collaboration: Can LLMs support a broader range of psycholinguistic tasks across diverse domains and social issues, particularly in contexts with varying data availability and complexity?

  • Example: Exploring how LLMs can assist human annotators in identifying morality frames within vaccination debates on social media. [ACM WebSci 2025]

Unsupervised Topic Synthesis: How can LLMs uncover latent discourse, generate semantically rich topic labels, and serve as unsupervised annotators for large-scale social media texts?

  • Examples: Integrating LLMs with advanced clustering algorithms enhances semantic coherence, supports unsupervised annotation, and enables scalable analysis of vegan discourse [Preprint 2025]; combining clustering with prompt-based labeling lets LLMs iteratively build topic taxonomies and annotate moral framing in political messaging, without seed sets or domain expertise [Preprint 2025].
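The clustering-plus-labeling loop behind this direction can be sketched in miniature. Everything below is a hypothetical stand-in: `kmeans` replaces a real clustering library run over sentence embeddings, and `label_cluster` replaces a prompt to an LLM asking for a topic label.

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means over lists of floats (stand-in for sentence embeddings)."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            # Assign each vector to its nearest centroid (squared Euclidean).
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])),
            )
            clusters[nearest].append(v)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

def label_cluster(members):
    """Stand-in for prompting an LLM to name the cluster's latent topic."""
    return f"topic ({len(members)} posts)"

# Toy 2-D "embeddings" for four posts; a real pipeline would embed the text.
posts = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]]
clusters = kmeans(posts, k=2)
taxonomy = [label_cluster(c) for c in clusters if c]
```

In the full setting, the labeling step can also feed back into clustering (merging or splitting clusters based on the generated labels), which is what makes the taxonomy construction iterative.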

See my publications here.

My Google Scholar and ResearchGate profiles.

Recent News