My research interests are Natural Language Understanding and Generation, Faithfulness and Factual Correctness of Generation, Unsupervised/Self-supervised Learning, and AI Applications.
I joined AWS Connect as an applied scientist working on task-oriented dialogue systems.
Sep 20, 2024
One paper accepted to EMNLP 2024.
Feb 21, 2024
I will join the Office of Applied Research @ Microsoft as a Research Intern working on Copilot optimization this summer. I’m super excited to have the opportunity to be back in Seattle during its prime season!
Oct 7, 2023
Two papers accepted to EMNLP 2023, see you in Singapore! Credit and many thanks to Cav and other collaborators.
Jul 10, 2023
I’m happy to share that our paper on Faithful Low-Resource Data-to-Text Generation through Cycle Training received the Outstanding Paper Award @ ACL 2023.
As large language models (LLMs) continue to advance, aligning these models with human preferences has emerged as a critical challenge. Traditional alignment methods, relying on human or LLM annotated datasets, are limited by their resource-intensive nature, inherent subjectivity, and the risk of feedback loops that amplify model biases. To overcome these limitations, we introduce WildFeedback, a novel framework that leverages real-time, in-situ user interactions to create preference datasets that more accurately reflect authentic human values. WildFeedback operates through a three-step process: feedback signal identification, preference data construction, and user-guided evaluation. We applied this framework to a large corpus of user-LLM conversations, resulting in a rich preference dataset that reflects genuine user preferences. This dataset captures the nuances of user preferences by identifying and classifying feedback signals within natural conversations, thereby enabling the construction of more representative and context-sensitive alignment data. Our extensive experiments demonstrate that LLMs fine-tuned on WildFeedback exhibit significantly improved alignment with user preferences, as evidenced by both traditional benchmarks and our proposed user-guided evaluation. By incorporating real-time feedback from actual users, WildFeedback addresses the scalability, subjectivity, and bias challenges that plague existing approaches, marking a significant step toward developing LLMs that are more responsive to the diverse and evolving needs of their users. In summary, WildFeedback offers a robust, scalable solution for aligning LLMs with true human values, setting a new standard for the development and evaluation of user-centric language models.
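Below is a minimal, illustrative sketch of the preference-data construction idea: when a user pushes back on an assistant response and the assistant revises it, the pre-feedback response can be treated as the rejected sample and the revised one as the chosen sample. The `Turn` structure and the `detect_feedback_signal` classifier are hypothetical placeholders, not the released WildFeedback pipeline.

```python
# Illustrative sketch (not the released pipeline): turning in-situ user feedback
# into preference pairs. `detect_feedback_signal` stands in for the feedback-signal
# identification step and is a hypothetical, keyword-based placeholder.
from dataclasses import dataclass

@dataclass
class Turn:
    role: str   # "user" or "assistant"
    text: str

def detect_feedback_signal(user_text: str) -> str | None:
    """Hypothetical classifier: returns 'dissatisfaction', 'satisfaction', or None."""
    lowered = user_text.lower()
    if any(cue in lowered for cue in ("that's not what i asked", "too long", "wrong")):
        return "dissatisfaction"
    if any(cue in lowered for cue in ("thanks, perfect", "exactly what i needed")):
        return "satisfaction"
    return None

def build_preference_pairs(conversation: list[Turn]) -> list[dict]:
    """When a user expresses dissatisfaction and the assistant revises its answer,
    treat the pre-feedback response as 'rejected' and the revised one as 'chosen'."""
    pairs = []
    for i, turn in enumerate(conversation):
        if turn.role != "user" or detect_feedback_signal(turn.text) != "dissatisfaction":
            continue
        if i >= 1 and i + 1 < len(conversation):
            prompt = conversation[i - 2].text if i >= 2 else ""
            pairs.append({
                "prompt": prompt,
                "rejected": conversation[i - 1].text,  # response the user pushed back on
                "chosen": conversation[i + 1].text,    # revised response after feedback
            })
    return pairs
```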
@misc{shi2024wildfeedbackaligningllmsinsitu,
  title         = {WildFeedback: Aligning LLMs With In-situ User Interactions And Feedback},
  author        = {Shi, Taiwei and Wang, Zhuoer and Yang, Longqi and Lin, Ying-Chun and He, Zexue and Wan, Mengting and Zhou, Pei and Jauhar, Sujay and Xu, Xiaofeng and Song, Xia and Neville, Jennifer},
  year          = {2024},
  booktitle     = {NeurIPS 2024 Behavioral ML Workshop},
  eprint        = {2408.15549},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CL},
  url           = {https://arxiv.org/abs/2408.15549},
}
Large Language Models (LLMs) are increasingly serving as evaluators in Natural Language Generation (NLG) tasks. However, the capabilities of LLMs in scoring NLG quality remain inadequately explored. Current studies depend on human assessments and simple metrics that fail to capture the discernment of LLMs across diverse NLG tasks. To address this gap, we propose the Discernment of Hierarchical Perturbation (DHP) benchmarking framework, which provides quantitative discernment scores for LLMs utilizing hierarchically perturbed text data and statistical tests to measure the NLG evaluation capabilities of LLMs systematically. We have re-established six evaluation datasets for this benchmark, covering four NLG tasks: Summarization, Story Completion, Question Answering, and Translation. Our comprehensive benchmarking of five major LLM series provides critical insight into their strengths and limitations as NLG evaluators.
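As a rough illustration of the idea behind the benchmark (not the official DHP code), one can score original and increasingly perturbed texts with an LLM judge and then run a paired statistical test to check whether the judge reliably prefers the less-perturbed version. `score_fn` and `perturb_fn` below are placeholders for the LLM evaluator and the hierarchical perturbation step.

```python
# Rough sketch of a discernment check: score original and hierarchically perturbed
# texts with an LLM judge, then test whether scores drop as perturbation severity grows.
from scipy.stats import wilcoxon

def discernment_check(score_fn, originals, perturb_fn, levels=(1, 2, 3)):
    """score_fn(text) -> float is the LLM evaluator; perturb_fn(text, level)
    injects increasingly severe errors (both are placeholders here)."""
    results = {}
    prev_scores = [score_fn(t) for t in originals]
    for level in levels:
        perturbed_scores = [score_fn(perturb_fn(t, level)) for t in originals]
        # One-sided test: does the judge score the less-perturbed version higher?
        stat, p_value = wilcoxon(prev_scores, perturbed_scores, alternative="greater")
        results[level] = p_value
        prev_scores = perturbed_scores
    # Small p-values at every level suggest the judge can discern quality differences.
    return results
```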
@misc{wang2024dhpbenchmarkllmsgood,
  title         = {DHP Benchmark: Are LLMs Good NLG Evaluators?},
  author        = {Wang, Yicheng and Yuan, Jiayi and Chuang, Yu-Neng and Wang, Zhuoer and Liu, Yingchi and Cusick, Mark and Kulkarni, Param and Ji, Zhengping and Ibrahim, Yasser and Hu, Xia},
  year          = {2025},
  booktitle     = {Findings of the Association for Computational Linguistics: NAACL 2025},
  eprint        = {2408.13704},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CL},
  url           = {https://arxiv.org/abs/2408.13704},
}
EMNLP Findings
FANTAstic SEquences and Where to Find Them: Faithful and Efficient API Call Generation through State-tracked Constrained Decoding and Reranking
API call generation is the cornerstone of large language models’ tool-using ability that provides access to the larger world. However, existing supervised and in-context learning approaches suffer from high training costs, poor data efficiency, and generated API calls that can be unfaithful to the API documentation and the user’s request. To address these limitations, we propose an output-side optimization approach called FANTASE. Two of the unique contributions of FANTASE are its State-Tracked Constrained Decoding (SCD) and Reranking components. SCD dynamically incorporates appropriate API constraints in the form of a Token Search Trie for efficient and guaranteed generation faithfulness with respect to the API documentation. The Reranking component efficiently brings in the supervised signal by leveraging a lightweight model as the discriminator to rerank the beam-searched candidate generations of the large language model. We demonstrate the superior performance of FANTASE in API call generation accuracy, inference efficiency, and context efficiency on the DSTC8 and API Bank datasets.
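The constrained-decoding idea can be pictured with a small trie-based sketch: the trie encodes token sequences permitted by the API documentation, and at each decoding step only tokens that keep the prefix inside the trie are eligible. This is a simplification for illustration rather than the released FANTASE implementation, and `next_token_logits` is a stand-in for the language model.

```python
# Minimal illustration of trie-constrained decoding (a simplification of state-tracked
# constrained decoding, not the released FANTASE implementation).
class TokenTrie:
    def __init__(self):
        self.children, self.is_end = {}, False

    def insert(self, token_ids):
        node = self
        for tok in token_ids:
            node = node.children.setdefault(tok, TokenTrie())
        node.is_end = True

def constrained_greedy_decode(next_token_logits, trie, max_len=32):
    """next_token_logits(prefix) -> dict[token_id, float] stands in for the LM.
    At each step, only tokens that keep the prefix inside the trie are considered,
    so the generated API call stays within the documented vocabulary."""
    prefix, node = [], trie
    for _ in range(max_len):
        logits = next_token_logits(prefix)
        candidates = {tok: logits.get(tok, float("-inf")) for tok in node.children}
        if not candidates:
            break
        best = max(candidates, key=candidates.get)
        prefix.append(best)
        node = node.children[best]
        if node.is_end:
            break
    return prefix
```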
@inproceedings{wang-etal-2024-fantase,
  title         = {FANTAstic SEquences and Where to Find Them: Faithful and Efficient API Call Generation through State-tracked Constrained Decoding and Reranking},
  author        = {Wang, Zhuoer and Ribeiro, Leonardo F. R. and Papangelis, Alexandros and Mukherjee, Rohan and Wang, Tzu-Yen and Zhao, Xinyan and Biswas, Arijit and Caverlee, James and Metallinou, Angeliki},
  year          = {2024},
  booktitle     = {Findings of the Association for Computational Linguistics: EMNLP 2024},
  month         = nov,
  address       = {Miami, United States},
  publisher     = {Association for Computational Linguistics},
  eprint        = {2407.13945},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CL},
  url           = {https://arxiv.org/abs/2407.13945},
}
EMNLP Findings
Unsupervised Candidate Answer Extraction through Differentiable Masker-Reconstructor Model
Question generation is a widely used data augmentation approach with extensive applications, and extracting qualified candidate answers from context passages is a critical step for most question generation systems. However, existing methods for candidate answer extraction are reliant on linguistic rules or annotated data that face the partial annotation issue and challenges in generalization. To overcome these limitations, we propose a novel unsupervised candidate answer extraction approach that leverages the inherent structure of context passages through a Differentiable Masker-Reconstructor (DMR) Model with the enforcement of self-consistency for picking up salient information tokens. We curated two datasets with exhaustively annotated answers and benchmarked a comprehensive set of supervised and unsupervised candidate answer extraction methods. We demonstrate the effectiveness of the DMR model by showing its performance is superior among unsupervised methods and comparable to supervised methods.
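A compressed sketch of the masker-reconstructor interplay is shown below (the architecture is simplified and hypothetical, not the paper’s exact model): the masker produces a differentiable per-token mask via Gumbel-Softmax, and the reconstructor is trained to recover the masked tokens, so the tokens the reconstructor most depends on surface as salient candidate answers.

```python
# Simplified sketch of a differentiable masker plus a reconstruction objective.
# This illustrates the general idea only; module shapes and details are assumptions.
import torch
import torch.nn as nn

class DifferentiableMasker(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, token_embeddings, temperature=1.0):
        # Per-token mask decision; Gumbel-Softmax keeps it differentiable.
        logits = self.scorer(token_embeddings).squeeze(-1)
        pair_logits = torch.stack([logits, -logits], dim=-1)
        mask = nn.functional.gumbel_softmax(pair_logits, tau=temperature, hard=True)[..., 0]
        return mask  # 1.0 = masked (candidate answer token), 0.0 = kept

def reconstruction_loss(reconstructor, token_embeddings, token_ids, mask, mask_embedding):
    # Replace masked positions with a learned [MASK] embedding, then ask the
    # reconstructor (any encoder producing vocab logits) to recover the originals.
    masked_inputs = (token_embeddings * (1 - mask).unsqueeze(-1)
                     + mask_embedding * mask.unsqueeze(-1))
    logits = reconstructor(masked_inputs)              # (batch, seq, vocab)
    loss = nn.functional.cross_entropy(
        logits.view(-1, logits.size(-1)), token_ids.view(-1), reduction="none"
    )
    # Average the loss over masked positions only.
    return (loss * mask.view(-1)).sum() / mask.sum().clamp(min=1.0)
```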
@inproceedings{wang-etal-2023-dmr,
  title     = {Unsupervised Candidate Answer Extraction through Differentiable Masker-Reconstructor Model},
  author    = {Wang, Zhuoer and Wang, Yicheng and Zhu, Ziwei and Caverlee, James},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2023},
  month     = dec,
  year      = {2023},
  address   = {Singapore},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2023.findings-emnlp.379/},
}
ACL Outstanding Paper Award
Faithful Low-Resource Data-to-Text Generation through Cycle Training
Methods to generate text from structured data have advanced significantly in recent years, primarily due to fine-tuning of pre-trained language models on large datasets. However, such models can fail to produce output faithful to the input data, particularly on out-of-domain data. Sufficient annotated data is often not available for specific domains, leading us to seek an unsupervised approach to improve the faithfulness of output text. Since the problem is fundamentally one of consistency between the representations of the structured data and text, we evaluate the effectiveness of cycle training in this work. Cycle training uses two models which are inverses of each other: one that generates text from structured data, and one which generates the structured data from natural language text. We show that cycle training, when initialized with a small amount of supervised data (100 samples in our case), achieves nearly the same performance as fully supervised approaches for the data-to-text generation task on the WebNLG, E2E, WTQ, and WSQL datasets. We perform extensive empirical analysis with automated evaluation metrics and a newly designed human evaluation schema to reveal different cycle training strategies’ effectiveness at reducing various types of generation errors. Our code is publicly available at https://github.com/Edillower/CycleNLG.
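For intuition, here is a high-level sketch of a single cycle-training iteration (the model interfaces are simplified and hypothetical; the actual implementation is in the linked CycleNLG repository): each model produces pseudo-targets for the other, and each is trained to reconstruct the original input from the other model’s output.

```python
# High-level sketch of one cycle-training iteration over unpaired data and text.
# `data2text` and `text2data` are assumed seq2seq models exposing generate() and
# train_step(); these interfaces are placeholders, not the released code.
def cycle_training_step(data2text, text2data, unpaired_data, unpaired_text):
    """Each model generates pseudo-targets for the other, and each is trained to
    reconstruct the original input from the other's output."""
    # Cycle 1: data -> text -> data (reconstruct the structured data)
    pseudo_text = data2text.generate(unpaired_data)      # intermediate, no gradient
    loss_data = text2data.train_step(inputs=pseudo_text, targets=unpaired_data)

    # Cycle 2: text -> data -> text (reconstruct the natural-language text)
    pseudo_data = text2data.generate(unpaired_text)
    loss_text = data2text.train_step(inputs=pseudo_data, targets=unpaired_text)

    return loss_data + loss_text
```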
@inproceedings{wang-etal-2023-faithful,
  title     = {Faithful Low-Resource Data-to-Text Generation through Cycle Training},
  author    = {Wang, Zhuoer and Collins, Marcus and Vedula, Nikhita and Filice, Simone and Malmasi, Shervin and Rokhlenko, Oleg},
  booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  month     = jul,
  year      = {2023},
  address   = {Toronto, Canada},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2023.acl-long.160},
  pages     = {2847--2867},
}
EMNLP
PARADE: A New Dataset for Paraphrase Identification Requiring Computer Science Domain Knowledge
We present a new benchmark dataset called PARADE for paraphrase identification that requires specialized domain knowledge. PARADE contains paraphrases that overlap very little at the lexical and syntactic level but are semantically equivalent based on computer science domain knowledge, as well as non-paraphrases that overlap greatly at the lexical and syntactic level but are not semantically equivalent based on this domain knowledge. Experiments show that both state-of-the-art neural models and non-expert human annotators have poor performance on PARADE. For example, BERT after fine-tuning achieves an F1 score of 0.709, which is much lower than its performance on other paraphrase identification datasets. PARADE can serve as a resource for researchers interested in testing models that incorporate domain knowledge. We make our data and code freely available.
@inproceedings{he-etal-2020-parade,
  title     = {{PARADE}: {A} {N}ew {D}ataset for {P}araphrase {I}dentification {R}equiring {C}omputer {S}cience {D}omain {K}nowledge},
  author    = {He, Yun and Wang, Zhuoer and Zhang, Yin and Huang, Ruihong and Caverlee, James},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  month     = nov,
  year      = {2020},
  address   = {Online},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2020.emnlp-main.611},
  doi       = {10.18653/v1/2020.emnlp-main.611},
  pages     = {7572--7582},
}