CS PhD Student Wins Adobe Research Fellowship

Viet Dac Lai

Viet Dac Lai, a fourth-year Ph.D. student in the Computer Science department at the University of Oregon, has won a competitive Adobe Research Fellowship. Only ten students worldwide received the award for 2022.

Viet is a Ph.D. student in the CS department under the supervision of Prof. Thien Huu Nguyen. He graduated with a B.Sc. degree in Computer Science from the Posts and Telecommunications Institute of Technology, Vietnam, and earned his M.Sc. degree in Computer Science from the Japan Advanced Institute of Science and Technology. His research focuses on Natural Language Processing (NLP) and Deep Learning for Information Extraction, which aims to extract valuable information from text. Currently, he is working on extending Event Extraction to new domains that suffer from data scarcity; his research studies how machine learning models can learn to perform a task from just a few observations.

The Adobe Research Fellowship is a program supported by Adobe Research that recognizes outstanding graduate students anywhere in the world carrying out exceptional research in computer science. The program awards $10,000 to each winner, and this year it was highly competitive, with hundreds of Ph.D. students applying from top CS departments worldwide, including Stanford, CMU, NYU, Cornell, Toronto, and UT Austin. All Ph.D. students working on research topics including, but not limited to, Artificial Intelligence & Machine Learning, Audio, Content Intelligence, Graphics (2D & 3D), Systems & Languages, Computer Vision, and Natural Language Processing are eligible to apply. The key selection criterion is whether the student's research is creative, impactful, important, and realistic in scope. The relevance of the research to Adobe products, along with the applicant's technical and personal skills (e.g., communication and leadership), is also weighed.

More about Viet Lai’s research

Viet’s research explores methods to integrate useful knowledge resources to facilitate the learning of deep learning models for NLP. One such knowledge resource involves the relationships between available data examples, their structural representations, and available knowledge bases. A direct use of these example relations appears in few-shot learning (FSL) models, which can learn to perform a new task using only a handful of training examples. In his research, Viet proposed leveraging relations between the support examples to explicitly regularize the representation learning process of FSL models for event detection, thereby improving the models' robustness against noise in the data. He also proposed a graph-based representation learning method that employs both full and pruned dependency parsing graphs to effectively learn representations of input texts for FSL models via consistency regularization. In his most recent FSL study, Viet explored how the relationships between tasks can help improve representation learning. In particular, he introduced a novel FSL model that not only facilitates the interaction between examples in the training data but also enforces the prediction consistency of the model across tasks. The resulting model performs well under skewed data scenarios with only a few given examples.
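To give a flavor of the ideas above, here is a minimal, illustrative sketch (not Viet's actual models) of two of the ingredients mentioned: classifying queries against class prototypes built from a handful of support examples, and a consistency penalty that encourages two encodings of the same examples (e.g., from full vs. pruned dependency graphs) to agree. All function names and the toy embeddings are hypothetical.

```python
import numpy as np

def prototypes(support_embs, support_labels, n_classes):
    # Prototypical-network-style prototypes: the mean embedding of the
    # support examples belonging to each class.
    return np.stack([support_embs[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Assign the query to the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(protos - query_emb, axis=1)
    return int(np.argmin(dists))

def consistency_loss(view_a, view_b):
    # Mean squared disagreement between two views of the same examples;
    # adding this to the training loss pushes the encoder to produce
    # consistent representations across views.
    return float(np.mean((view_a - view_b) ** 2))

# Toy 2-way, 2-shot episode with 2-dimensional embeddings.
support = np.array([[0.0, 0.0], [0.0, 1.0],   # class 0
                    [5.0, 5.0], [5.0, 6.0]])  # class 1
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, n_classes=2)

print(classify(np.array([0.2, 0.3]), protos))  # nearest to class 0
print(classify(np.array([4.8, 5.2]), protos))  # nearest to class 1
```

In a real system the embeddings would come from a trained neural encoder and the consistency term would be backpropagated through it; the sketch only shows the episode-level bookkeeping.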

Overall, Viet has published 15 papers at top-tier conferences and workshops in Natural Language Processing and Artificial Intelligence (e.g., ACL, NAACL, EACL, EMNLP, and SIGIR). Recently, together with his advisor, Viet organized a workshop on Video Transcript Understanding at the AAAI 2022 conference and a shared task on Mathematical Symbol and Description Linking at the NAACL 2022 conference.