GUCL: Computational Linguistics @ Georgetown
We are a group of Georgetown University faculty, student, and staff researchers at the intersection of language and computation. Our areas of expertise include natural language processing, corpus linguistics, information retrieval, text mining, and more. Members belong to the Linguistics and/or Computer Science departments.
GUCL holds monthly group meetings about research and maintains a mailing list for its members. (Contact Nathan Schneider to subscribe.) This website will also promote courses, talks, and other events on campus that relate to topics in computational linguistics.
- Congratulations to Arman, Nazli, and Georgetown alum Andrew Yates for winning a Best Long Paper award at EMNLP 2017! The paper is entitled "Depression and Self-Harm Risk Assessment in Online Forums."
- Congratulations to Ophir Frieder, who has been named to the European Academy of Sciences and Arts (EASA)!
- Burr Settles (Duolingo): CS 1/26/18, St. Mary’s 111
- Sam Han (Washington Post): CS, Thursday 2/1/18, 10:00
- Shuly Wintner (Haifa): CS 2/2/18, 1:30
- Rebecca Hwa (Pittsburgh): Linguistics 2/16/18
- Mark Steedman (Edinburgh): Linguistics 2/23/18
- Arman Cohan dissertation defense: CS 3/19/18, 10:00
- PyTorch tutorial: GUCL 3/23/18, 10:00, St. Mary’s 201
- Claire Bonial (ARL): Linguistics 3/23/18
- Luca Soldaini dissertation defense: CS 3/26/18, 10:00
- John Conroy (IDA Center for Computing Sciences): CS 4/20/18, room TBA
- Maite Taboada (Simon Fraser): Linguistics 4/20/18
- Alexander Rush (Harvard): CS, Thursday 4/26/18, 12:30
Hal Daumé (UMD)
CS Colloquium 10/14/16, 11:00 in St. Mary’s 326
Learning Language through Interaction
Machine learning-based natural language processing systems are amazingly effective when plentiful labeled training data exists for the task/domain of interest. Unfortunately, for broad-coverage (both in task and domain) language understanding, we're unlikely to ever have sufficient labeled data, and systems must find some other way to learn. I'll describe a novel algorithm for learning from interactions, and several problems of interest, most notably machine simultaneous interpretation (translation while someone is still speaking).
This is all joint work with some amazing (former) students He He, Alvin Grissom II, John Morgan, Mohit Iyyer, Sudha Rao and Leonardo Claudino, as well as colleagues Jordan Boyd-Graber, Kai-Wei Chang, John Langford, Akshay Krishnamurthy, Alekh Agarwal, Stéphane Ross, Alina Beygelzimer and Paul Mineiro.
Hal Daumé III is an associate professor in Computer Science at the University of Maryland, College Park. He holds joint appointments in UMIACS and Linguistics. He was previously an assistant professor in the School of Computing at the University of Utah. His primary research interest is in developing new learning algorithms for prototypical problems that arise in the context of language processing and artificial intelligence. This includes topics like structured prediction, domain adaptation and unsupervised learning; as well as multilingual modeling and affect analysis. He associates himself most with conferences like ACL, ICML, NIPS and EMNLP. He earned his PhD at the University of Southern California with a thesis on structured prediction for language (his advisor was Daniel Marcu). He spent the summer of 2003 working with Eric Brill in the machine learning and applied statistics group at Microsoft Research. Prior to that, he studied math (mostly logic) at Carnegie Mellon University. He still likes math and doesn't like to use C (instead he uses O'Caml or Haskell).
Yulia Tsvetkov (CMU/Stanford)
Linguistics Speaker Series 11/11/16, 3:30 in Poulton 230
On the Synergy of Linguistics and Machine Learning in Natural Language Processing
One way to provide deeper insight into data and to build more powerful, robust models is bridging between linguistic knowledge and statistical learning. I’ll present model-based approaches that incorporate linguistic knowledge in novel ways.
First, I’ll show how linguistic knowledge comes to the rescue in processing languages which lack large data resources. I’ll describe a new approach to cross-lingual knowledge transfer that models the historical process of lexical borrowing between languages, and I will show how its predictions can be used to improve statistical machine translation systems.
In the second part of my talk, I’ll argue that linguistic insight helps improve learning also in resource-rich conditions. I’ll present three methods to integrate linguistic knowledge into training data, neural network architectures, and the evaluation of word representations. The first method uses features quantifying linguistic coherence, prototypicality, simplicity, and diversity to find a better curriculum for learning distributed representations of words. Distributed representations of words capture which words have similar meanings and occur in similar contexts. With improved word representations, we improve part-of-speech tagging, parsing, named entity recognition, and sentiment analysis. The second method describes polyglot language models, neural network architectures trained to predict symbol sequences in many different languages using shared representations of symbols and conditioning on typological information about the language to be predicted. Finally, the third is an intrinsic evaluation measure of the quality of distributed representations of words. It is based on correlations of learned vectors with features extracted from manually crafted lexical resources. This computationally inexpensive method obtains strong correlation with performance of the vectors in a battery of downstream semantic and syntactic evaluation tasks. I’ll conclude with future research questions.
Yulia Tsvetkov is a postdoc in the Stanford NLP Group, where she works on computational social science with professor Dan Jurafsky. During her PhD in the Language Technologies Institute at Carnegie Mellon University, she worked on advancing machine learning techniques to tackle cross-lingual and cross-domain problems in natural language processing, focusing on computational phonology and morphology, distributional and lexical semantics, and statistical machine translation of both text and speech. In 2017, Yulia will join the Language Technologies Institute at CMU as an assistant professor.
Marine Carpuat (UMD)
Linguistics Speaker Series 11/18/16, 3:30 in Poulton 230
Toward Natural Language Inference Across Languages
Natural language processing tasks as diverse as automatically extracting information from text, answering questions, translating or summarizing documents all require the ability to compare and contrast the meaning of words and sentences. State-of-the-art techniques rely on dense vector representations which capture the distributional properties of words in large amounts of text in a single language. We seek to improve these representations to capture not only similarity in meaning between words or sentences, but also inference relations such as entailment and contradiction, and enable comparisons not only within, but also across languages.
In this talk, we will present novel approaches to inducing word representations from multilingual text corpora. First, we will show that translations in other languages (e.g., Chinese) can be used as distant supervision to induce English word representations that can be composed into better representations of English sentences (Elgohary and Carpuat, ACL 2016). Then we will show how sparsity constraints can further improve word representations, and enable the detection not only of semantic similarity (do "cure" and "remedy" have the same meaning?), but also of entailment (does "antidote" entail "cure"?) between words in different languages (Vyas and Carpuat, NAACL 2016).
Marine Carpuat is an Assistant Professor in Computer Science at the University of Maryland, with a joint appointment at UMIACS. Her research interests are in natural language processing, with a focus on multilinguality. Marine was previously a Research Scientist at the National Research Council of Canada, and a postdoctoral researcher at the Columbia University Center for Computational Learning Systems. She received a PhD in Computer Science from the Hong Kong University of Science & Technology (HKUST) in 2008. She also earned an MPhil in Electrical Engineering from HKUST and an engineering degree from the French Grande Ecole Supélec.
Shomir Wilson (Cincinnati)
CS Colloquium 11/21/16, 11:00 in St. Mary’s 326
Text Analysis to Support the Privacy of Internet Users
Shomir Wilson is an Assistant Professor of Computer Science in the Department of Electrical Engineering and Computing Systems at the University of Cincinnati. His professional interests span pure and applied research in natural language processing, privacy, and artificial intelligence. Previously he held postdoctoral and lecturer positions in Carnegie Mellon University's School of Computer Science, and he spent a year as an NSF International Research Fellow in the University of Edinburgh's School of Informatics. He received his Ph.D. in Computer Science from the University of Maryland in 2011.
Mark Dredze (JHU)
CS Colloquium 11/29/16, 11:00 in St. Mary’s 326
Topic Models for Identifying Public Health Trends
Twitter and other social media sites contain a wealth of information about populations and have been used to track sentiment towards products, measure political attitudes, and study social linguistics. In this talk, we investigate the potential for Twitter and social media to impact public health research. Broadly, we explore a range of applications for which social media may hold relevant data. To uncover these trends, we develop new topic models that can reveal trends and patterns of interest to public health from vast quantities of data.
Mark Dredze is an Assistant Research Professor in Computer Science at Johns Hopkins University and a research scientist at the Human Language Technology Center of Excellence. He is also affiliated with the Center for Language and Speech Processing and the Center for Population Health Information Technology. His research in natural language processing and machine learning has focused on graphical models, semi-supervised learning, information extraction, large-scale learning, and speech processing. He focuses on public health informatics applications, including information extraction from social media and from biomedical and clinical texts. He obtained his PhD from the University of Pennsylvania in 2009.
Mona Diab (GW)
CS Colloquium 12/2/16, 2:30 in St. Mary’s 414
Processing Arabic Social Media: Challenges and Opportunities
We recently witnessed an exponential growth in Arabic social media usage. Processing such media is of great utility for all kinds of applications ranging from information extraction to social media analytics for political and commercial purposes to building decision support systems. Compared to other languages, Arabic, especially the informal variety, poses a significant challenge to natural language processing algorithms since it comprises multiple dialects, linguistic code switching, and a lack of standardized orthographies, on top of its relatively complex morphology. Inherently, the problem of processing Arabic in the context of social media is the problem of how to handle resource-poor languages. In this talk I will go over some of our insights into these problems and show how there is a silver lining: we can generalize some of our solutions to other low-resource language contexts.
Mona Diab is an Associate Professor in the Department of Computer Science, George Washington University (GW). She is the founder and Director of the GW NLP lab (CARE4Lang). Before joining GW, she was a Research Scientist (Principal Investigator) at the Center for Computational Learning Systems (CCLS), Columbia University in New York. She is also co-founder, with Nizar Habash and Owen Rambow, of the CADIM group, one of the leading centers and reference points for computational processing of Arabic and its dialects. Her research interests span several areas in computational linguistics/natural language processing: computational lexical semantics, multilingual processing, social media processing, information extraction & text analytics, machine translation, and computational socio-pragmatics. She has a special interest in low resource language processing with a focus on Arabic dialects.
Joel Tetreault (Grammarly)
CS Colloquium 1/27/17, 11:00 in St. Mary’s 326
Analyzing Formality in Online Communication
Full natural language understanding requires comprehending not only the content or meaning of a piece of text or speech, but also the stylistic way in which it is conveyed. To enable real advancements in dialog systems, information extraction, and human-computer interaction, computers need to understand the entirety of what humans say, both the literal and the non-literal. This talk presents an in-depth investigation of one particular stylistic aspect, formality. First, we provide an analysis of humans' subjective perceptions of formality in four different genres of online communication. We highlight areas of high and low agreement and extract patterns that consistently differentiate formal from informal text. Next, we develop a statistical model for predicting formality at the sentence level, using rich NLP and deep learning features, and then evaluate the model's performance against human judgments across genres. Finally, we apply our model to analyze language use in online debate forums. Our results provide new evidence in support of theories of linguistic coordination, underlining the importance of formality for language generation systems.
This work was done with Ellie Pavlick (UPenn) during her summer internship at Yahoo Labs.
Joel Tetreault is Director of Research at Grammarly. His research focus is Natural Language Processing, with specific interests in anaphora, dialogue and discourse processing, and machine learning, and in applying these techniques to the analysis of English language learning and automated essay scoring, among other areas. Currently he works on the research and development of NLP tools and components for the next generation of intelligent writing assistance systems. Prior to joining Grammarly, he was a Senior Research Scientist at Yahoo Labs, Senior Principal Manager of the Core Natural Language group at Nuance Communications, Inc., and worked at Educational Testing Service for six years as a managing research scientist, where he researched automated methods for essay scoring, detecting grammatical errors by non-native speakers, plagiarism detection, and content scoring. Tetreault received his B.A. in Computer Science from Harvard University and his M.S. and Ph.D. in Computer Science from the University of Rochester. He was also a postdoctoral research scientist at the University of Pittsburgh's Learning Research and Development Center, where he worked on developing spoken dialogue tutoring systems. In addition, he has co-organized the Building Educational Applications workshop series for eight years and several shared tasks, and is currently NAACL Treasurer.
Kenneth Heafield (Edinburgh)
CS Colloquium 2/2/17, 11:00 in St. Mary’s 326
Machine Translation is Too Slow
We're trying to make machine translation output less terrible, but we're impatient. A neural translation system took two weeks to train in 1996 and two weeks to train in 2016 because the field used twenty years of computing advances to build bigger and better models subject to the same patience limit. I'll talk about multiple efforts to make things faster: coarse-to-fine search algorithms and sparse gradient updates to reduce network communication.
Kenneth Heafield is a Lecturer (~Assistant Professor) in computer science at the University of Edinburgh. Motivated by machine translation problems, he takes a systems-heavy approach to improving quality and speed of neural systems. He is the creator of the widely-used KenLM library for efficient language modeling.
Margaret Mitchell (Google Research)
CS Colloquium 2/16/17, 11:00 in St. Mary’s 326
Algorithmic Bias in Artificial Intelligence: The Seen and Unseen Factors Influencing Machine Perception of Images and Language
The success of machine learning has recently surged, with similar algorithmic approaches effectively solving a variety of human-defined tasks. Tasks testing how well machines can perceive images and communicate about them have exposed strong effects of different types of bias, such as selection bias and dataset bias. In this talk, I will unpack some of these biases, and how they affect machine perception today. I will introduce and detail the first computational model to leverage human Reporting Bias—what people mention—in order to learn ground-truth facts about the visual world.
I am a Senior Research Scientist in Google's Research & Machine Intelligence group, working on advancing artificial intelligence towards positive goals, as well as ethics in AI and demographic diversity of researchers. My research is on vision-language and grounded language generation, focusing on how to help computers communicate based on what they can process. My work combines computer vision, natural language processing, social media, many statistical methods, and insights from cognitive science. Before Google, I was a founding member of Microsoft Research's "Cognition" group, focused on advancing vision-language artificial intelligence. Before MSR, I was a postdoctoral researcher at The Johns Hopkins University Center of Excellence, where I mainly focused on semantic role labeling and sentiment analysis using graphical models, working under Benjamin Van Durme. Before that, I was a postgraduate (PhD) student in the natural language generation (NLG) group at the University of Aberdeen, where I focused on how to naturally refer to visible, everyday objects. I primarily worked with Kees van Deemter and Ehud Reiter. I spent a good chunk of 2008 getting a Master's in Computational Linguistics at the University of Washington, studying under Emily Bender and Fei Xia. Simultaneously (2005 - 2012), I worked on and off at the Center for Spoken Language Understanding, part of OHSU, in Portland, Oregon. My title changed with time (research assistant/associate/visiting scholar), but throughout, I worked on technology that leverages syntactic and phonetic characteristics to aid those with neurological disorders under Brian Roark. I continue to balance my time between language generation, applications for clinical domains, and core AI research.
Glen Coppersmith (Qntfy & JHU)
CS Colloquium 2/24/17, 11:00 in St. Mary’s 326
Quantifying the White Space
Behavioral assessment and measurement today are typically invasive and human intensive (for both patient and clinician). Moreover, by their nature, they focus on retrospective analysis by the patient (or the patient’s loved ones) about emotionally charged situations—a process rife with biases, not repeatable, and expensive. We examine all the data in the “white space” between interactions with the healthcare system (social media data, wearables, activities, nutrition, mood, etc.), and have shown quantified signals relevant to mental health that can be extracted from them. These methods to gather and analyze disparate data unobtrusively and in real time enable a range of new scientific questions, diagnostic capabilities, assessment of novel treatments, and quantified key performance measures for behavioral health. These techniques hold special promise for suicide risk, given the dearth of unbiased accounts of a person’s behavior leading up to a suicide attempt. We are beginning to see the promise of using these disparate data for revolution in mental health.
Glen is the founder and CEO of Qntfy (pronounced “quantify”), a company devoted to scaling therapeutic impact by empowering mental health clinicians and patients with data science and technology. Qntfy brings a deep understanding of the underlying technology and an appreciation for the human processes these technologies need to fit into in order to make an impact. Qntfy, in addition to providing analytic and software solutions, considers it a core mission to push the fundamental and applied research at the intersection of mental health and technology. Qntfy built the data donation site OurDataHelps.org to gather and curate the datasets needed to drive mental health research, working closely with the suicide prevention community. Qntfy also won the grand prize in the 2015 Foundry Cup, a design competition seeking innovative approaches to diagnosing and treating PTSD.
Prior to starting Qntfy, Glen was the first full-time research scientist at the Human Language Technology Center of Excellence at Johns Hopkins University, which he joined in 2008. His research has focused on the creation and application of statistical pattern recognition techniques to large disparate data sets for addressing challenges of national importance. Oftentimes, the data of interest was human language content and associated metadata. Glen has shown particular acumen for enabling inference tasks that bring together diverse and noisy data. His work spans principled exploratory data analysis, anomaly detection, graph theory, statistical inference, and visualization.
Glen earned his Bachelors in Computer Science and Cognitive Psychology in 2003, a Masters in Psycholinguistics in 2005, and his Doctorate in Neuroscience in 2008, all from Northeastern University. As this suggests, his interests and knowledge are broad, from computer science and statistics to biology and psychology.
Jeniya Tabassum (OSU)
GUCL 4/6/17, 2:00 in St. Mary’s 326
Large Scale Learning for Temporal Expressions
Temporal expressions are words or phrases that refer to dates, times or durations. Social media especially contains time-sensitive information about various events and requires accurate temporal analysis. In this talk, I will present our work on TweeTIME, a minimally supervised time resolver that learns from large quantities of unlabeled data and does not require any hand-engineered rules or hand-annotated training corpora. This is the first successful application of distant supervision for end-to-end temporal recognition and normalization. Our proposed system outperforms all previous supervised and rule-based systems in the social media domain. I will also present ongoing work applying deep learning methods for resolving time expressions and discuss opportunities and challenges that a deep learning system faces when extracting time sensitive information from text.
Jeniya Tabassum is a third-year PhD student in the Department of CSE at the Ohio State University, advised by Prof. Alan Ritter. Her research focuses on developing machine learning techniques that can effectively extract relevant and meaningful information from social media data. Prior to OSU, she received a B.S. in Computer Science and Engineering from Bangladesh University of Engineering and Technology.
Jacob Eisenstein (GA Tech)
Linguistics Speaker Series 4/21/17, 3:30 in Poulton 230
Social Networks, Social Meaning
Language is socially situated: both what we say and what we mean depend on our identities, our interlocutors, and the communicative setting. The first generation of research in computational sociolinguistics focused on large-scale social categories, such as gender. However, many of the most socially salient distinctions are locally defined. Rather than attempt to annotate these social properties or extract them from metadata, we turn to social network analysis, which has been only lightly explored in traditional sociolinguistics. I will describe three projects at the intersection of language and social networks. First, I will show how unsupervised learning over social network labelings and text enables the induction of social meanings for address terms, such as “Ms” and “dude”. Next, I will describe recent research that uses social network embeddings to induce personalized natural language processing systems for individual authors, improving performance on sentiment analysis and entity linking even for authors for whom no labeled data is available. Finally, I will describe how the spread of linguistic innovations can serve as evidence for sociocultural influence, using a parametric Hawkes process to model the features that make dyads especially likely or unlikely to be conduits for language change.
Jacob Eisenstein is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He works on statistical natural language processing, focusing on computational sociolinguistics, social media analysis, discourse, and machine learning. He is a recipient of the NSF CAREER Award, a member of the Air Force Office of Scientific Research (AFOSR) Young Investigator Program, and was a SICSA Distinguished Visiting Fellow at the University of Edinburgh. His work has also been supported by the National Institutes of Health, the National Endowment for the Humanities, and Google. Jacob was a postdoctoral researcher at Carnegie Mellon and the University of Illinois. He completed his Ph.D. at MIT in 2008, winning the George M. Sprowls dissertation award. Jacob's research has been featured in the New York Times, National Public Radio, and the BBC. Thanks to his brief appearance in If These Knishes Could Talk, Jacob has a Bacon number of 2.
Christo Kirov (JHU)
GUCL 4/28/17, 2:00 in St. Mary’s 250
Rich Morphological Modeling for Multi-lingual HLT Applications
In this talk, I will discuss a number of projects aimed at improving HLT applications across a broad range of typologically diverse languages by modeling morphological structure. These include the creation of a very large, normalized morphological paradigm database derived from Wiktionary, consensus-based morphology transfer via cross-lingual projection, and approaches to lemmatization and morphological analysis and generation based on recurrent neural network architectures. Much of this work falls under the umbrella of the UniMorph project at CLSP, led by David Yarowsky and supported by DARPA LORELEI, and was developed in close collaboration with John Sylak-Glassman.
Dr. Christo Kirov is a Postdoctoral Research Fellow at the Center for Language and Speech Processing at JHU, working with David Yarowsky. His current research combines novel machine learning approaches with traditional linguistics to represent and learn morphological systems across the world’s languages, and to leverage this level of language structure in Machine Translation, Information Extraction, and other HLT tasks. Prior to joining CLSP, he was a Visiting Professor at the Georgetown University Linguistics Department. He received his PhD in Cognitive Science from Johns Hopkins University, studying under Colin Wilson, with dissertation work focusing on Bayesian approaches to phonology and phonetic expression.
Bill Croft (UNM)
Linguistics 5/18/17, 1:00 in Poulton 230
Linguistic Typology Meets Universal Dependencies
Current work on universal dependency schemes in NLP does not make reference to the extensive typological research on language universals, but could benefit since many principles are shared between the two enterprises. We propose a revision of the syntactic dependencies in the Universal Dependencies scheme (Nivre et al. 2015, 2016) based on four principles derived from contemporary typological theory: dependencies should be based primarily on universal construction types over language-specific strategies; syntactic dependency labels should match lexical feature names for the same function; dependencies should be based on the information packaging function of constructions, not lexical semantic types; and dependencies should keep distinct the “ranks” of the functional dependency tree.
William Croft received his Ph.D. in 1986 at Stanford University under Joseph Greenberg. He has taught at the Universities of Michigan, Manchester (UK) and New Mexico, and has been a visiting scholar at the Max Planck Institutes of Psycholinguistics and Evolutionary Anthropology, and at the Center for Advanced Study in the Behavioral Sciences. He has written several books, including Typology and Universals, Explaining Language Change, Radical Construction Grammar, Cognitive Linguistics [with D. Alan Cruse] and Verbs: Aspect and Causal Structure. His primary research areas are typology, semantics, construction grammar and language change. He has argued that grammatical structure can only be understood in terms of the variety of constructions used to express functions across languages; that both qualitative and quantitative methods are necessary for grammatical analysis; and that the study of language structure must be situated in the dynamics of evolving conventions of language use in social interaction.
Spencer Whitehead (RPI)
GUCL 8/15/17, 11:00 in St. Mary’s 326
Multimedia Integration: Event Extraction and Beyond
Multimedia research is becoming increasingly important, as we are immersed in an ever-growing ocean of noisy, unstructured data of various modalities, such as text and images. A major thrust of multimedia research is to leverage multimodal data to better extract information, including the use of visual information to post-process or re-rank natural language processing results, or vice versa. In our work, we seek to tightly integrate multimodal information into a flexible, unified approach that jointly utilizes text and images. Here we focus on one application: improving event extraction by incorporating visual knowledge with words and phrases from text documents. Such visual knowledge provides a means to overcome the challenges that the ambiguities of language introduce. We first discover named visual patterns in a weakly-supervised manner in order to avoid the requirement of parallel/well-aligned annotations. Then, we propose a multimodal event extraction algorithm where the event extractor is jointly trained with textual features and visual patterns. We find absolute F-score gains of 7.1% and 8.5% on event trigger and argument labeling, respectively. Moving forward, we intend to extend the idea of tight integration of multimodal information to other tasks, namely image and video captioning.
Spencer Whitehead is a PhD student in the Computer Science Department at Rensselaer Polytechnic Institute, where he is advised by Dr. Heng Ji. His interests broadly span Natural Language Processing, Machine Learning, and Computer Vision, but mainly lie in the intersection of these fields: multimedia information extraction and natural language generation from multimedia data. A primary goal of his work is to develop intelligent systems that can utilize structured, unstructured, and multimodal data to extract information as well as generate coherent, accurate, and focused text. Central to his research is the creation of novel architectures, deep learning or otherwise, which can properly incorporate such heterogeneous data. He received his Bachelor of Science degree in Mathematics and Computer Science from Rensselaer Polytechnic Institute with highest honors.
Cristian Danescu-Niculescu-Mizil (Cornell)
Linguistics Speaker Series 10/13/17, 3:30 in Poulton 230
Conversational markers of social dynamics
Can conversational dynamics—the nature of the back and forth between people—predict the outcomes of social interactions? In this talk I will introduce a computational framework for modeling conversational dynamics and for extracting the social signals they encode, and apply it in a variety of different settings. First, I will show how these signals can be predictive of the future evolution of a dyadic relationship. In particular, I will characterize friendships that are unlikely to last and examine temporal patterns that foretell betrayal in the context of the Diplomacy strategy game. Second, I will discuss conversational patterns that emerge in problem-solving group discussions, and show how these patterns can be indicative of how (in)effective the collaboration is. I will conclude by focusing on the effects of under- and over-confidence on the dynamics and outcomes of decision-making discussions.
This talk includes joint work with Jordan Boyd-Graber, Liye Fu, Dan Jurafsky, Srijan Kumar, Lillian Lee, Jure Leskovec, Vlad Niculae, Chris Potts and Justine Zhang.
Cristian Danescu-Niculescu-Mizil is an assistant professor in the information science department at Cornell University. His research aims at developing computational frameworks that can lead to a better understanding of human social behavior, by unlocking the unprecedented potential of the large amounts of natural language data generated online. He is the recipient of several awards—including the WWW 2013 Best Paper Award, a CSCW 2017 Best Paper Award, and a Google Faculty Research Award—and his work has been featured in popular-media outlets such as the Wall Street Journal, NBC's The Today Show, NPR and the New York Times.
Antonios Anastasopoulos (Notre Dame)
GUCL 10/20/17, 1:00 in Poulton 255
Speech translation for documentation of endangered languages
Most of the world's languages do not have a writing system, so recent documentation efforts for endangered languages have switched focus to annotating corpora with translations. This talk will present work on modelling parallel speech without access to transcriptions, both using a neural attentional model (Long et al., NAACL 2016) and an unsupervised probability model (Anastasopoulos et al., EMNLP 2016), as well as some recent work on using translations for term discovery (Anastasopoulos et al., SCNLP 2017).
Antonis Anastasopoulos is a fourth-year PhD student at the University of Notre Dame, working with Prof. David Chiang. His research lies at the intersection of low-resource speech recognition and machine translation, focusing on developing technologies for the documentation of endangered languages.
Katherine Waldock (GU MSB)
GUCL 10/27/17, 1:00 in Poulton 230
NLP Applications to a Corpus of Corporate Bankruptcy Documents
Data extraction from legal text presents a number of challenges that can be addressed using Natural Language Processing (NLP) methods. I discuss several applications that arise from a corpus of approximately 50 million pages of bankruptcy documents. These constitute substantially all documents from the universe of Chapter 11 cases filed between 2004 and 2014 that involved firms with over $10 million in assets. Examples of NLP applications include various classification issues (nested-phrase docket entries, financial reports, and legal writing), Part-of-Speech tagging, Optical Character Recognition, and quasi-tabular text.
Katherine Waldock is an Assistant Professor of Finance at the McDonough School of Business and holds a courtesy joint appointment with the Georgetown Law Center. She received a Ph.D. in Finance from the NYU Stern School of Business and a B.A. in Economics from Harvard University. Her primary research interests are in corporate bankruptcy, law and finance, small businesses, and financial institutions.
Tim Finin (UMBC)
CS Colloquium 10/27/17, 11:00 in St. Mary’s 326
From Strings to Things: Populating Knowledge Graphs from Text
The Web is the greatest source of general knowledge available today but its current form suffers from two limitations. The first is that text and multimedia objects on the Web are easy for people to understand but difficult for machines to interpret and use. The second is the Web's access paradigm, which remains dominated by information retrieval, where keyword queries produce a ranked list of documents that must be read to find the desired information. I'll discuss research in natural language understanding and semantic web technologies that addresses both problems by extracting information from text to produce and populate Web-compatible knowledge graphs. The resulting knowledge bases have multiple uses, including (1) moving the Web's access paradigm from retrieving documents to answering questions, (2) embedding semi-structured knowledge in Web pages in formats designed for computers to understand, (3) providing intelligent computer systems with information they need to perform their tasks, (4) allowing the extracted data and knowledge to be more easily integrated, enabling inference and advanced analytics, and (5) serving as background knowledge to improve text and speech understanding systems. I will also cover current work on applying the techniques to extract and use cybersecurity-related information from documents, the Web and social media.
Tim Finin is the Willard and Lillian Hackerman Chair in Engineering and a Professor of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County (UMBC). He has over 35 years of experience in applications of artificial intelligence to problems in information systems and language understanding. His current research is focused on the Semantic Web, analyzing and extracting information from text, and on enhancing security and privacy in computing systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, an IEEE technical achievement award recipient and was selected as the UMBC Presidential Research Professor in 2012. He received an S.B. degree from MIT and a Ph.D. from the University of Illinois at Urbana-Champaign. He has held full-time positions at UMBC, Unisys, the University of Pennsylvania and the MIT AI Laboratory. He served as an editor-in-chief of the Journal of Web Semantics and is a co-editor of the Viewpoints section of the Communications of the ACM.
Matthew Marge (ARL)
CS Colloquium 11/3/17, 1:00 in St. Mary’s 326
Towards Natural Dialogue with Robots
Robots can be more effective teammates with people if they can engage in natural language dialogue. In this talk, I will address one fundamental research problem to achieving this goal: understanding how people will talk to robots in collaborative tasks, and how robots could respond in natural language to maintain an effective dialogue that stays on track. The unique contribution of this research is the adoption of a multi-phased approach to building spoken dialogue systems that starts with exploratory data collection of human-robot dialogue with a human “wizard” standing in for the robot’s language processing behind the scenes, and ends with training a dialogue system that automates away the wizard.
With the ultimate goal of an autonomous conversational robot in mind, I will focus on the initial experiments that aim to collect computationally tractable human-robot dialogue without sacrificing naturalness. I will show how this approach can efficiently collect dialogue in the navigation domain, and in a form suitable for training a conversational robot. I will also present a novel annotation scheme for dialogue semantics and structure that captures the types of instructions that people gave to the robot, showing that over time these can change as people better assess the robot’s capabilities. Finally, I’ll place this research effort in the broader context of enabling better teaming between people and robots.
This is joint work with colleagues at ARL and at the USC Institute for Creative Technologies.
Matthew Marge is a Research Scientist at the Army Research Lab (ARL). His research focuses on improving how robots and other artificial agents can build common ground with people via natural language. His current interests lie at the intersection of computational linguistics and human-robot interaction, specializing in dialogue systems. He received the Ph.D. and M.S. degrees in Language and Information Technologies from the School of Computer Science at Carnegie Mellon University, and the M.S. degree in Artificial Intelligence from the University of Edinburgh.
Ben Carterette (Delaware)
CS Colloquium 11/10/17, 11:00 in St. Mary’s 326
Offline Evaluation of Search Systems Using Online Data
Evaluation of search effectiveness is very important for being able to iteratively develop improved algorithms, but it is not always easy to do. Batch experimentation using test collections—the traditional approach dating back to the 1950s—is fast but has high start-up costs and requires strong assumptions about users and their information needs. User studies are slow and have high variance, making them difficult to generalize and certainly not possible to apply during iterative development. Online experimentation using A/B tests, pioneered and refined by companies such as Google and Microsoft, can be fast but is limited in other ways.
In this talk I present work we have done and work in progress on using logged online user data to do evaluation offline. I will discuss some of the user simulation work I have done with my students in the context of evaluating system effectiveness over user search sessions (in the context of the TREC Session track), based on training models on logged data for use offline. I will also discuss work on using historical logged data to re-weight search outputs for evaluation, focusing on how to collect that data to arrive at unbiased conclusions. The latter is work I am doing while on sabbatical at Spotify, which provides many motivating examples.
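One standard technique for drawing unbiased conclusions from logged online data (though not necessarily the specific method discussed in this talk) is inverse propensity scoring: each logged reward is reweighted by the probability that the logging system showed that result. A minimal sketch, with invented log entries and policies:

```python
def ips_estimate(logs, target_policy):
    """Inverse propensity scoring: estimate the reward a target policy
    would have earned, using only data logged under another policy.

    Each log entry is (context, shown_item, reward, logging_prob), where
    logging_prob is the probability the logging policy showed that item.
    """
    total = 0.0
    for context, shown, reward, logging_prob in logs:
        # Weight by how often the target policy would make the same choice.
        target_prob = target_policy(context, shown)
        total += reward * target_prob / logging_prob
    return total / len(logs)

# Hypothetical logged data: a uniform logging policy over 4 items (prob 0.25).
logs = [
    ("q1", "doc_a", 1.0, 0.25),
    ("q1", "doc_b", 0.0, 0.25),
    ("q2", "doc_a", 1.0, 0.25),
    ("q2", "doc_c", 0.0, 0.25),
]

# A deterministic target policy that always shows doc_a.
always_a = lambda context, item: 1.0 if item == "doc_a" else 0.0

print(ips_estimate(logs, always_a))  # → 2.0 (doc_a's rewards, upweighted by 1/0.25)
```

Randomizing the logging policy (and recording its probabilities) is what makes this offline estimate unbiased, which is why how the data is collected matters as much as how it is reweighted.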
Ben Carterette is an Associate Professor in the Department of Computer and Information Sciences at the University of Delaware, and currently on sabbatical as a Research Scientist at Spotify in New York City. He primarily researches search evaluation, including everything from designing search experiments to building test collections to obtaining relevance judgments to using them in evaluation measures to statistical testing of results. He completed his PhD with James Allan at the University of Massachusetts Amherst on low-cost methods for acquiring relevance judgments for IR evaluation. He has published over 80 papers, won 4 Best Paper Awards, and co-organized two ACM SIGIR-sponsored conferences—WSDM 2014 and ICTIR 2016—in addition to nearly a decade's worth of TREC tracks and several workshops on topics related to new test collections and evaluation. He was also elected SIGIR Treasurer in 2016.
Laura Dietz (UNH)
CS Colloquium 11/14/17, 11:00 in St. Mary’s 326
Retrieving Complex Answers through Knowledge Graph and Text
We all turn towards Wikipedia with questions we want to know more about, but eventually find ourselves at the limits of its coverage. Instead of providing "ten blue links" as is common in Web search, why not answer any web query with something that looks and feels like Wikipedia? This talk is about algorithms that automatically retrieve relevant entities and relations and identify text to explain this relevance to the user. The trick is to model the duality between structured knowledge and unstructured text. This leads to supervised retrieval models that can jointly identify relevant Web documents and Wikipedia entities, and extract support passages to populate knowledge articles.
Laura Dietz is an Assistant Professor at the University of New Hampshire, where she teaches "Information Retrieval" and "Data Science for Knowledge Graphs and Text". She coordinates the TREC Complex Answer Retrieval Track and runs a tutorial/workshop series on Utilizing Knowledge Graphs in Text-centric Retrieval. Previously, she was a research scientist in the Data and Web Science group at Mannheim University, and a research scientist with Bruce Croft and Andrew McCallum at the Center for Intelligent Information Retrieval (CIIR) at UMass Amherst. She obtained her doctoral degree with a thesis on topic models for networked data from Max Planck Institute for Informatics, supervised by Tobias Scheffer and Gerhard Weikum.
Ben Van Durme (JHU)
CS Colloquium 11/17/17, 11:00 in St. Mary’s 326
Universal Decompositional Semantics
The dominant strategy for capturing a symbolic representation of natural language has focused on categorical annotations that lend themselves to structured multi-class classification. For example, predicting whether a given syntactic subject satisfies the definition of the AGENT thematic role. These annotations typically result from professionals coming to mutual agreement on semantic ontologies. The JHU Decompositional Semantics Initiative (decomp.net) is exploring a framework for semantic representation utilizing simple statements confirmed by everyday people, e.g., "The [highlighted syntactic subject] was aware of the [eventuality characterized by the salient verb]". This is conducive to a piece-wise, incremental, exploratory approach to developing a meaning representation. The resulting data relates to recent work in natural language inference and common sense, two topics of growing interest within computational linguistics.
Benjamin Van Durme is an Assistant Professor in the departments of Computer Science and Cognitive Science at Johns Hopkins University, a member of the Center for Language and Speech Processing (CLSP), and the lead of Natural Language Understanding research at the JHU Human Language Technology Center of Excellence (HLTCOE). His research group in CLSP consists of over a dozen graduate students, with additional post-docs, research staff, and a variety of close collaborations with fellow faculty at JHU and universities in the mid-Atlantic. His research covers a spectrum from computational semantics to applied frameworks for knowledge discovery on large, possibly streaming collections of text and recently photos. He is currently the PI for projects under DARPA DEFT, DARPA LORELEI, DARPA AIDA, and co-PI on IARPA MATERIAL. His work has been supported by the NSF and companies including Google, Microsoft, Bloomberg, and Spoken Communications. Benjamin has worked previously at Google, Lockheed Martin, and BAE Systems. He received an MS in Language Technologies from the CMU Language Technologies Institute, followed by a PhD in Computer Science and Linguistics at the University of Rochester, working with Lenhart Schubert, Daniel Gildea and Gregory Carlson.
Jordan Boyd-Graber (UMD)
CS Colloquium 12/1/17, 1:00 in St. Mary’s 326
Cooperative and Competitive Machine Learning through Question Answering
My research goal is to create machine learning algorithms that are interpretable to humans, that can understand human strengths and weaknesses, and can help humans improve themselves. In this talk, I'll discuss how we accomplish this through a trivia game called quiz bowl. These questions are written so that they can be interrupted by someone who knows more about the answer; that is, harder clues are at the start of the question and easier clues are at the end of the question: a player must decide when they have enough information to "buzz in". Our system for answering quiz bowl questions has two parts: one to identify the answer to a question and one to determine when to buzz. We discuss how deep averaging networks—fast neural bag of words models—can help us answer questions quickly using diverse training data (previous questions, raw text of novels, Wikipedia pages) to determine the right answer and how deep reinforcement learning can help us determine when to buzz.
More importantly, however, this setting also helps us build systems to adapt in cooperation and competition with humans. In competition, we are also able to understand the skill sets of our competitors to adjust our strategy to optimize our performance against players using a deep mixture of experts opponent model. The game of quiz bowl also allows opportunities to better understand interpretability in deep learning models to help human players perform better with machine cooperation. This cooperation helps us with a related task, simultaneous machine translation.
Finally, I'll discuss opportunities for broader participation through open human-computer competitions: http://hcqa.boydgraber.org/
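The deep averaging networks mentioned above are simple: average a question's word embeddings, then pass that average through feed-forward layers and a softmax over candidate answers. A minimal untrained sketch (the vocabulary, answers, and dimensions are all illustrative, not the actual system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and answer set (hypothetical; the real system is trained on
# past quiz bowl questions, novels, and Wikipedia pages).
vocab = {"this": 0, "author": 1, "wrote": 2, "hamlet": 3}
answers = ["Shakespeare", "Dickens", "Austen"]

EMB, HID = 8, 16
W_emb = rng.normal(size=(len(vocab), EMB))
W1, b1 = rng.normal(size=(EMB, HID)), np.zeros(HID)
W2, b2 = rng.normal(size=(HID, len(answers))), np.zeros(len(answers))

def dan_forward(tokens):
    """Deep averaging network: average the word embeddings (ignoring word
    order), then apply feed-forward layers and a softmax over answers."""
    avg = W_emb[[vocab[t] for t in tokens]].mean(axis=0)
    h = np.tanh(avg @ W1 + b1)
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # probability distribution over candidate answers

probs = dan_forward(["this", "author", "wrote", "hamlet"])
print(answers[int(np.argmax(probs))])  # untrained weights, so the answer is arbitrary
```

Because averaging discards word order, the forward pass is cheap enough to rerun after every new clue, which is what makes the model a good fit for incremental buzzing decisions.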
Jordan Boyd-Graber is an associate professor in the University of Maryland's Computer Science Department, iSchool, UMIACS, and Language Science Center. Jordan's research focus is in applying machine learning and Bayesian probabilistic models to problems that help us better understand social interaction or the human cognitive process. He and his students have won "best of" awards at NIPS (2009, 2015), NAACL (2016), and CoNLL (2015), and Jordan won the British Computing Society's 2015 Karen Spärck Jones Award and a 2017 NSF CAREER award. His research has been funded by DARPA, IARPA, NSF, NCSES, ARL, NIH, and Lockheed Martin and has been featured by CNN, Huffington Post, New York Magazine, and the Wall Street Journal.
Ellie Pavlick (Google/Brown)
CS Colloquium 1/19/18, 11:00 in St. Mary’s 204
Compositional Lexical Entailment for Natural Language Inference
In this talk, I will discuss my thesis work on training computers to make inferences about what is true or false based on information expressed in natural language. My approach combines machine learning with insights from formal linguistics in order to build data-driven models of semantics which are more precise and interpretable than would be possible using linguistically naive approaches. I will begin with my work on automatically adding semantic annotations to the 100 million phrase pairs in the Paraphrase Database (PPDB). These annotations provide the type of information necessary for carrying out precise inferences in natural language, transforming the database into the largest available lexical semantics resource for natural language processing. I will then turn to the problem of compositional entailment, and present an algorithm for performing inferences about long phrases which are unlikely to have been observed in data. Finally, I will discuss my current work on pragmatic reasoning: when and how humans derive meaning from a sentence beyond what is literally contained in the words. I will describe the difficulties that such "common-sense" inference poses for automatic language understanding, and present my on-going work on models for overcoming these challenges.
Ellie Pavlick is currently a Postdoctoral Fellow at Google Research in NY. She will join Brown University as an Assistant Professor of Computer Science in July. Ellie received her PhD from University of Pennsylvania, where her dissertation focused on natural language inference and entailment. Outside of her dissertation research, Ellie has published work on stylistic variation in paraphrase—e.g. how paraphrases can affect the formality or the complexity of language—and on applications of crowdsourcing to natural language processing and social science problems.
Burr Settles (Duolingo)
CS Colloquium 1/26/18, 11:00 in St. Mary’s 111
Duolingo: Improving Language Learning and Assessment with Data
Student learning data can and should be analyzed to develop new instructional technologies, such as personalized practice schedules and data-driven proficiency assessments. I will describe several projects at Duolingo—the world's most popular language education platform with more than 200 million students worldwide—where we combine vast amounts of learner data with machine learning, computational linguistics, and psychometrics to improve learning, testing, and engagement.
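One published Duolingo technique for personalized practice schedules is half-life regression (Settles & Meeder, ACL 2016), which models the probability of recalling an item as decaying exponentially with the time since it was last practiced. A minimal sketch of the recall model (the numbers below are illustrative; the actual system learns each item's half-life from student data):

```python
def recall_probability(delta_days, half_life_days):
    """Half-life regression view of spaced repetition: recall probability
    decays exponentially with time since last practice, p = 2^(-delta/h),
    where h is the item's estimated half-life in days."""
    return 2.0 ** (-delta_days / half_life_days)

# A word with a 7-day half-life: after 7 days, recall probability is 0.5.
print(round(recall_probability(7, 7), 2))   # → 0.5
print(round(recall_probability(14, 7), 2))  # → 0.25
```

Scheduling then follows naturally: practice an item when its predicted recall probability drops below a target threshold.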
Burr Settles leads the research group at Duolingo, developing statistical machine learning systems to improve learning, engagement, and assessment. He also runs FAWM.ORG (a global collaborative songwriting experiment) and is the author of Active Learning—a text on AI algorithms that are curious and exploratory (if you will). His research has been published in numerous journals and conferences, and has been featured in The New York Times, Slate, Forbes, and WIRED. In past lives, he was a postdoc at Carnegie Mellon and earned his PhD from UW-Madison. He lives in Pittsburgh, where he gets around by bike and plays guitar in the pop band Delicious Pastries.
Sam Han (Washington Post)
CS Colloquium 2/1/18, 10:00 in St. Mary’s 326
Data Science @WaPo: How data science can help publishers succeed
The data science team at the Post has built a Big Data platform that supports applications for personalization, newsroom productivity improvement, and targeted advertising. The team ingests data from the digital side (washingtonpost.com and apps), the paper, and external sources into the platform, and builds applications on top of it to enhance user experience, perform targeted advertising, and improve newsroom productivity. They also build tools to help the newsroom adapt to the demands of digital journalism.
Sam will cover some of these applications in detail and share challenges and insights learned from the projects:
- Clavis is an audience targeting platform and is at the root of personalization efforts. Clavis analyzes everything they publish, builds user profiles, and provides personalized content and brand messaging.
- Virality is a machine learning based system that predicts the popularity of articles based on content, meta data, site traffic and social chatters.
- ModBot is an automatic comment moderation system that helps maintain a high-quality comment section.
- Heliograf is an automated storytelling agent that automates the writing of data-driven articles and frees up reporters to focus on the high-quality stories.
- Bandito allows the newsroom to test different headlines and other UX elements, determines the best-performing experience, and serves it to as much traffic as quickly as possible.
- Headliner is an automated system that proposes several headlines for an article.
Eui-Hong (Sam) Han is Director, Data Science & AI at The Washington Post. Sam is an experienced practitioner of data mining and machine learning. He has an in-depth understanding of analytics technologies and experience successfully applying these technologies to solve real business problems. At the Washington Post, he is leading a team to build an integrated Big Data platform to store all aspects of customer profiles and activities from both digital and print circulation, metadata of content, and business data. His team builds an infrastructure, tools, and services to provide personalized experience to customers, to empower the newsroom with data for better decisions, and to provide targeted advertising capability. Prior to joining The Washington Post, he led the Big Data practice at Persistent Systems, started the Machine Learning Group at the Sears Holdings Online Business Unit, and worked for a data mining startup company. His expertise includes data mining, machine learning, information retrieval, and high performance computing. He holds a PhD in Computer Science from the University of Minnesota.
Shuly Wintner (Haifa)
CS Colloquium 2/2/18, 1:30 in St. Mary’s 326
Computational Approaches to the Study of Translation (and other Crosslingual Language Varieties)
Translated texts, in any language, have unique characteristics that set them apart from texts originally written in the same language; to some extent, they form a sub-language, called "translationese". Some of the properties of translationese stem from interference from the source language (the so-called "fingerprints" of the source language on the translation product); others are source-language-independent, and are presumably universal. These include phenomena resulting from three main processes: simplification, standardization and explicitation.
I will describe research that uses standard (supervised and unsupervised) text classification techniques to distinguish between translations and originals. I will focus on the features that best separate the two classes, and how these features corroborate some (but not all) of the hypotheses set forth by Translation Studies scholars. More generally, I will show how computational methodologies shed light on pertinent Translation Studies questions.
Translation is only one instance of language that is affected by the interaction of more than one linguistic system. Another instance is the language of advanced, highly-fluent non-native speakers. Are translations and non-native language similar? In what respects? And are such similarities the result of interference or of more "universal" properties? I will discuss recent work that uses text classification to address these questions. In particular, I will describe work that addresses the identification of the source language of translations, and relate it to the task of Native Language Identification.
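To illustrate the kind of classifier involved (not the speaker's actual features or model), translationese detection often relies on surface cues such as function-word frequencies. A toy nearest-centroid sketch over such features, with invented training sentences:

```python
from collections import Counter

# Hypothetical feature set: relative frequencies of a few function words,
# a common cue in translationese studies.
FUNCTION_WORDS = ["the", "of", "and", "that", "which"]

def features(text):
    tokens = text.lower().split()
    counts = Counter(tokens)
    return [counts[w] / len(tokens) for w in FUNCTION_WORDS]

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(text, orig_centroid, trans_centroid):
    """Nearest-centroid classification: label a text by which class mean
    its feature vector is closer to (squared Euclidean distance)."""
    v = features(text)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return "original" if dist(v, orig_centroid) <= dist(v, trans_centroid) else "translation"

# Tiny made-up training sets.
originals = ["the cat sat on the mat and purred",
             "she said that the plan worked"]
translations = ["it is the case that the decision which was taken is final",
                "the report of the committee of which he spoke"]

c_orig = centroid([features(t) for t in originals])
c_trans = centroid([features(t) for t in translations])
print(classify("the treaty of which the terms were agreed", c_orig, c_trans))  # → translation
```

Real studies use richer features (n-grams, POS patterns, cohesive markers) and stronger classifiers, but the pipeline, feature extraction followed by supervised classification, has this same shape.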
Shuly Wintner is professor of computer science at the University of Haifa, Israel. His research spans various areas of computational linguistics and natural language processing, including formal grammars, morphology, syntax, language resources, and translation. He served as the editor-in-chief of Springer's Research on Language and Computation, a program co-chair of EACL-2006, and the general chair of EACL-2014. He was among the founders, and twice (6 years) the chair, of ACL SIG Semitic. He is currently the Chair of the Faculty Union at the University of Haifa.
Rebecca Hwa (Pittsburgh)
Linguistics Speaker Series 2/16/18, 3:30 in Poulton 230
Separating the Sheep from the Goats: On recognizing the Literal and Figurative Usages of Idioms
Typically, we think of idioms as colorful expressions whose literal interpretations don’t match their underlying meaning. However, many idiomatic expressions can be used either figuratively or literally, depending on their contexts. In this talk, we survey both supervised and unsupervised methods for training a classifier to automatically distinguish usages of idiomatic expressions. We will conclude with a discussion about some potential applications.
Rebecca Hwa is an Associate Professor in the Department of Computer Science at the University of Pittsburgh. Her recent research focuses on understanding persuasion from a computational linguistics perspective. Some of her recent projects include: modeling student behaviors in revising argumentative essays, identifying symbolism in visual rhetoric, and understanding idiomatic expressions. Dr. Hwa is a recipient of the NSF CAREER Award. Her work has also been supported by NIH and DARPA. She has been the Chair of the North American Chapter of the Association for Computational Linguistics.
Mark Steedman (Edinburgh)
Linguistics Speaker Series 2/23/18, 3:30 in Poulton 230
Bootstrapping Language Acquisition
Recent work with Abend, Kwiatkowski, Smith, and Goldwater (2016) has shown that a general-purpose program for inducing parsers incrementally from sequences of paired strings (in any language) and meanings (in any convenient language of logical form) can be applied to real English child-directed utterances from the CHILDES corpus to successfully learn the child's ("Eve's") grammar, combining lexical and syntactic learning in a single pass through the data.
While the earliest stages of learning necessarily proceed by pure "semantic bootstrapping", building a probabilistic model of all possible pairings of all possible words and derivations with all possible decompositions of logical form, the later stages of learning show emergent effects of "syntactic bootstrapping" (Gleitman 1990), where the program's increasing knowledge of the grammar of the language allows it to identify the syntactic type and meaning of unseen words in one trial, as has been shown to be characteristic of real children in experiments with nonce-word learning. The concluding section of the talk considers the extension of the learner to a more realistic semantics including information structure and conversational dynamics.
Mark Steedman is Professor of Cognitive Science in the School of Informatics at the University of Edinburgh. Previously, he taught as Professor in the Department of Computer and Information Science at the University of Pennsylvania, which he joined as Associate Professor in 1988. His PhD in Artificial Intelligence is from the University of Edinburgh. He is a Fellow of the Association for the Advancement of Artificial Intelligence, the British Academy, the Royal Society of Edinburgh, the Association for Computational Linguistics, and the Cognitive Science Society, and a Member of the European Academy. His research interests cover issues in computational linguistics, artificial intelligence, computer science and cognitive science, including syntax and semantics of natural language, wide-coverage parsing and open-domain question-answering, comprehension of natural language discourse by humans and by machine, grammar-based language modeling, natural language generation, and the semantics of intonation in spoken discourse. Much of his current NLP research is addressed to probabilistic parsing and robust semantics for question-answering using the CCG grammar formalism, including the acquisition of language from paired sentences and meanings by child and machine. He sometimes works with colleagues in computer animation using these theories to guide the graphical animation of speaking virtual or simulated autonomous human agents, for which he recently shared the 2017 IFAAMAS Influential Paper Award for a 1994 paper with Justine Cassell and others. Some of his research concerns the analysis of music by humans and machines.
Claire Bonial (ARL)
Linguistics Speaker Series 3/23/18, 3:30 in Poulton 230
Event Semantics in Text Constructions, Vision, and Human-Robot Dialogue
“Ok, robot, make a right and take a picture” – a simple instruction like this exemplifies some of the obstacles in our research on human-robot dialogue: how are make and take to be interpreted? What precise actions should be executed? In this presentation, I explore three challenges: 1) interpreting the semantics of constructions in which verb meanings are extended in novel usages, 2) recognizing activities and events in images/video by employing information about the objects and participants typically involved, and 3) mapping natural language instructions to the physically situated actions executed by a robot. Throughout these distinct research areas, I leverage both Neo-Davidsonian styles of event representation and the principles of Construction Grammar in addressing these challenges for interpretation and execution.
Claire Bonial is a computational linguist specializing in the murky world of event semantics. In her efforts to make this world computationally tractable, she has collaborated on a variety of Natural Language Processing semantic role labeling projects, including PropBank, VerbNet, and Abstract Meaning Representation. A focused contribution to these projects has been her theoretical and psycholinguistic research on both the syntax and semantics of English light verb constructions (e.g., take a walk, make a mistake). Bonial received her Ph.D. in Linguistics and Cognitive Science in 2014 from the University of Colorado Boulder. Bonial began her current position in the Computational and Information Sciences Directorate of the Army Research Laboratory (ARL) in 2015. Since joining ARL, she has expanded her research portfolio to include multi-modal representations of events (text and imagery/video), as well as human-robot dialogue.
Arman Cohan (Georgetown)
CS Ph.D. dissertation defense 3/19/18, 10:00 in St. Mary’s 326
Text summarization and categorization for scientific and health-related data
The increasing amount of unstructured health-related data has created a need for intelligent processing to extract meaningful knowledge. This knowledge can be utilized to promote healthcare and wellbeing of individuals. My research goal in this dissertation is to develop Natural Language Processing (NLP) and Information Retrieval (IR) methods for better understanding, summarizing and categorizing scientific literature and other health-related information.
First, I focus on scientific literature as the main source of knowledge distribution in scientific fields. It has become a challenge for researchers to keep up with the increasing rate at which scientific findings are published. As an attempt to address this problem, I propose summarization methods using citation texts and discourse structure of the papers to provide a concise representation of important contributions of the papers. I also investigate methods to address the problem of citation inaccuracy by linking the citations to their related parts in the target paper, capturing their relevant context. In addition, I raise the problem of the inadequacy of current summarization evaluation metrics for summarization in the scientific domain and present a method based on semantic relevance for evaluating the summaries.
In the second part, I focus on other significant sources of health-related information, including clinical narratives and social media. I investigate categorization methods to address the critical problem of medical errors, which are among the leading causes of death worldwide. I demonstrate how we can effectively identify significant reporting errors and harmful cases in medical narratives, which could help prevent similar problems in the future. These approaches include both carefully designed feature-rich methods and more generalizable neural networks. Mental health is another significant dimension of health and wellbeing. Suicide, the most serious challenge in mental health, accounts for approximately 1.4% of all deaths worldwide; approximately one person dies by suicide every 40 seconds. I present both feature-rich and data-driven methods to capture mental health conditions, such as depression, self-harm, and suicide risk, based on the general language expressed on social media. These methods have clear clinical and scientific applications and can help individuals with mental health conditions.
Advisor: Nazli Goharian
CS Ph.D. dissertation defense 3/26/18, 10:00 in St. Mary’s 326
The Knowledge and Language Gap in Medical Information Seeking
Interest in medical information retrieval has risen significantly in the last few years. The Internet has become a primary source for consumers looking for health information and advice; however, their lack of expertise causes a language and knowledge gap that affects their ability to properly formulate their information needs. Health experts also struggle to efficiently search the large amount of medical literature available to them, which impacts their ability to integrate the latest research findings into clinical practice. In this dissertation, I propose several methods to overcome these challenges, thus improving search outcomes. For queries issued by lay users, I introduce query clarification, a technique to identify the expert expression that best describes their information need; this expression is then used to expand the query. I experiment with three existing synonym mappings and show that the best one leads to a 7.3% improvement over non-clarified queries. When a classifier that predicts the most appropriate mapping for each query is used, an additional 5.2% improvement over non-clarified queries is achieved. Furthermore, I introduce a set of features to capture semantic similarity between consumer queries and retrieved documents, which are then exploited by a learning-to-rank framework. This approach yields a 26.6% improvement over the best known results on a dataset designed to evaluate medical information retrieval for lay users. To improve literature search for medical professionals, I propose and evaluate two query reformulation techniques that expand complex medical queries with relevant latent and explicit medical concepts. The first is an unsupervised system that combines statistical query expansion with a medical terms filter, while the second is a supervised neural convolutional model that predicts which terms to add to medical queries.
Both approaches are competitive with the state of the art, achieving up to 8% improvement in inferred nDCG. Finally, I conclude my dissertation by showing how the convolutional model can be adapted to reduce clinical notes that contain significant noise, such as medical abbreviations, incomplete sentences, and redundant information. This approach outperforms the best query reformulation system for this task by 27% in inferred nDCG.
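To make the query clarification idea concrete, here is a minimal sketch assuming a toy consumer-to-expert synonym mapping; the entries and function names are invented for illustration, whereas the dissertation evaluates real mappings and a classifier that chooses among them:

```python
# Hypothetical consumer-to-expert synonym mapping (invented entries).
CONSUMER_TO_EXPERT = {
    "heart attack": "myocardial infarction",
    "high blood pressure": "hypertension",
    "shortness of breath": "dyspnea",
}

def clarify_query(query):
    """Expand a lay query with the matching expert expression, if any."""
    for lay, expert in CONSUMER_TO_EXPERT.items():
        if lay in query.lower():
            # Append the expert term so retrieval matches expert documents
            # without losing the original lay phrasing.
            return f"{query} {expert}"
    return query

print(clarify_query("treatment for a heart attack"))
# -> treatment for a heart attack myocardial infarction
```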
Advisor: Nazli Goharian
John Conroy (IDA Center for Computing Sciences)
CS Colloquium 4/20/18, 11:00 in St. Mary’s room TBA
Multilingual Summarization and Evaluation Using Wikipedia Featured Articles
Multilingual text summarization is a challenging task and an active area of research within the natural language processing community. In this talk I will give an overview of how Wikipedia featured articles are used to create datasets, covering about 40 languages, for the training and testing of automatic single document summarization methods. These datasets were used in the 2015 and 2017 MultiLing Workshops' single document summarization tasks. I will give an overview of the methods used both to generate and to evaluate the summaries submitted for the tasks. Systems' overall performance is measured using automatic and human evaluations, and these data are analyzed to assess the effectiveness of the automatic methods for multilingual summarization evaluation. Thus, the results suggest not only which approaches to automatic text summarization generalize across a wide range of languages but also which evaluation metrics are best at predicting human judgments in the multilingual summarization task. This talk is based on a soon-to-appear book chapter to be published by World Scientific Press, jointly written with Jeff Kubina, Peter Rankel, and Julia Yang.
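As a toy illustration of the metric-validation step described above, one can correlate an automatic metric's scores with human ratings of the same summaries; a high Pearson correlation suggests the metric is a good proxy for human judgment. The numbers below are invented, not MultiLing results:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Invented scores for four summaries: an automatic metric vs. human ratings.
metric_scores = [0.42, 0.35, 0.58, 0.61]
human_scores = [3.0, 2.5, 4.0, 4.5]

print(round(pearson(metric_scores, human_scores), 3))  # prints 0.992
```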
John M. Conroy is a graduate of Central High School of Philadelphia, where he received a BA, and of St. Joseph's University of Philadelphia, where he received a BS in Mathematics. Conroy then studied Applied Mathematics with a concentration in Computer Science at the University of Maryland, where he received an MS and PhD in Mathematics. He has been a research staff member at the IDA Center for Computing Sciences for over 30 years. Conroy is a co-developer of the CLASSY and OCCAMS text summarization systems. He has published widely on text summarization and evaluation, serves on numerous program committees for summarization, and is a co-inventor of patented summarization methods. His other publications span high-performance matrix computations, graph matching, and anomaly detection, with applications to neuroscience and network security. He is a member of the Association for Computational Linguistics, a life member of the Society for Industrial and Applied Mathematics, and a member of the Institute of Electrical and Electronics Engineers.