About

Hi there!

I am a final-year Ph.D. candidate under the supervision of Dr. Huan Liu at the Data Mining and Machine Learning Lab in the School of CS & AI, Arizona State University. My current research primarily focuses on: (i) leveraging large language models (LLMs) for human-intensive tasks in a machine learning pipeline, (ii) robust detection of AI-generated content, and (iii) NLP applications in domain adaptation and domain generalization. During the summer of 2024, I was an Applied Research Intern at NVIDIA NeMo Guardrails, working with Christopher Parisien and Shaona Ghosh on inference-time safety steering of LLMs. During the summer of 2022, I interned at Nokia Bell Labs as a Software Systems Research Intern under the guidance of Buvaneswari Ramanan and Thomas Woo, where I worked on problems in continual learning and low-resource learning for real-world use cases. When I’m not working, I enjoy traveling, hiking, kayaking, camping, and chilling with my cat Luna!

📢📢 For speaking opportunities, check out my speaker profile here, and reach out!

📢 News

[Feb 2025] I’m speaking at CactusCon 13 on Building Trust in AI: On Safe and Responsible Use of LLMs. Slides

[Jan 2025] I accepted a full-time offer as an Applied Scientist II at Amazon in Palo Alto, CA, where I will be working on LLM post-training and alignment!

[Nov 2024] Received the NSF Travel Award for attending IEEE BigData 2024!

[Oct 2024] Our paper Zero-shot LLM-guided Counterfactual Generation: A Case Study on NLP Model Evaluation has been accepted to IEEE BigData 2024!

[Oct 2024] Our paper Towards Inference-time Category-wise Safety Steering for Large Language Models [Work done with NVIDIA!] has been accepted to the Safe Generative AI Workshop at NeurIPS 2024!

[Sep 2024] Two papers accepted at EMNLP 2024 Main Track: (1) Defending Against Social Engineering Attacks in the Age of LLMs, (2) Large Language Models for Data Annotation: A Survey.

[Sep 2024] Delivered an in-person tutorial on Defending Against Generative AI Threats in NLP at SBP-BRiMS 2024, Pittsburgh, PA, USA. Tutorial Resources.

[Sep 2024] Received the SBP-BRiMS 2024 Student Travel Award!

[Sep 2024] Invited to be a Program Committee Member for SDM 2025.

[Jun 2024] Received the SIGIR 2024 NSF Travel Award!

[Jun 2024] Invited to be a Program Committee Member for AAAI 2025.

[May 2024] Invited to be a Program Committee Member for SBP-BRiMS 2024.

[May 2024] My chat with ScienceNewsExplores (SNE) got featured in this SNE news story.

[Apr 2024] Our paper Towards Interpretable Hate Speech Detection using Large Language Model-extracted Rationales got accepted to NAACL WOAH 2024!

[Apr 2024] Gave an invited talk at the ASU Cronkite School of Journalism for an AI in Journalism course for Master’s students in Investigative Journalism, on Large Language Models: Opportunities and Threats in Journalism. Slides

[Mar 2024] Our demo paper “ResumeFlow: An LLM-facilitated Pipeline for Personalized Resume Generation and Refinement” has been accepted at SIGIR 2024! Paper Link

[Jan 2024] I have accepted an offer to join NVIDIA as an Applied Research Intern for Summer 2024, working on LLM Safety!

[Jan 2024] Invited to serve as a Program Committee member at IJCAI 2024 (Survey Track).

[Jan 2024] Our paper “Adversarial Text Purification: A Large Language Model Approach for Defense” has been accepted at PAKDD 2024! Paper Link

[Dec 2023] Our paper “Towards LLM-guided Causal Explainability for Black-box Text Classifiers” has been accepted at the AAAI 2024 Responsible Language Models workshop, Vancouver, Canada! Paper Link

[Nov 2023] Our paper “ConDA: Contrastive Domain Adaptation for AI-generated Text Detection” won the 🏆 Outstanding Paper Award 🏆 at IJCNLP-AACL 2023, held in Bali, Indonesia! Paper Link



Meet my wonderful rescue cat, Luna! 🐈‍⬛