Bloomberg Data Science Research Grant Program
Bloomberg invites academic researchers worldwide to apply for unrestricted gifts that support data science research, typically in natural language processing, information retrieval, machine learning, and crowdsourcing. We also invite proposals for the creation of, or contributions to, open-source software used for data science.
The Bloomberg Data Science Research Grant Program is currently being revised. In the interim, please see our Ph.D. Fellowship Program.
Previous Grant Recipients
Round 6 – June 2019
Unsupervised Abstractive News Summarization, Marti A. Hearst (University of California, Berkeley)
Data-Driven Transfer Clustering, Maria-Florina Balcan (Carnegie Mellon University)
Differentiable ranking losses, Stefano Ermon (Stanford University)
An Adaptive Crowdsourcing System for Real-Time Domain Adaptation, Walter Lasecki and Jonathan Kummerfeld (University of Michigan)
Extracting Spatial Timelines from Text, Eduardo Blanco (University of North Texas)
A Multi-task Model for Information Extraction and Entity-Centric Ranking tasks, Jeff Dalton (University of Glasgow)
Round 5 – July 2018
Neural Information Retrieval with Limited Data, Bruce Croft (University of Massachusetts Amherst)
Interactive Explorative Summarization: Closing the Summarization Gap, Ido Dagan (Bar-Ilan University)
Representing Knowledge by Learning to Link, Kevin Gimpel and Karl Stratos (Toyota Technological Institute at Chicago [TTIC])
Cost-effective Learning for Complex Crowdsourcing Tasks, Xi Chen (NYU Stern School of Business)
Task Oriented Information Interaction Systems for Proactive Task Assistance Support, Emine Yilmaz (University College London)
Round 4 – April 2017
Combining structured knowledge and big data for coreference resolution, Greg Durrett (University of Texas at Austin)
Question answering and reasoning in multimodal data, Hannaneh Hajishirzi (University of Washington)
Entity salience via sophisticated syntactic and semantic features, Paolo Ferragina (Università di Pisa, Italy)
Counterfactual learning with log data, Thorsten Joachims (Cornell University)
Learning hidden semantics by machine reading using entailment graphs, Mark Steedman (University of Edinburgh)
Deep explanation learning for knowledge graph relations, Maarten de Rijke (University of Amsterdam)
Dynamic word embeddings and applications in analysis of real-world discourses, Simon Preston, Karthik Bharath, Yves van Gennip (University of Nottingham) and Michaela Mahlberg (University of Birmingham)
Coarse-to-fine neural attention and generation with applications to document analysis, Alexander Rush (Harvard University)
Round 3 – April 2016
Spectral Learning with Prior Information with Applications to Topic Models, Daniel Hsu (Columbia University) and Kamalika Chaudhuri (University of California, San Diego)
Dynamic Interpretability in Machine Learning, Yisong Yue (California Institute of Technology)
Latent-Variable Spectral Learning Kernelization for NLP, Shay Cohen (University of Edinburgh)
Online clustering of time-sensitive data, Stephen Becker (University of Colorado at Boulder)
Character-level neural network sequence models for varied text named entity recognition, Christopher Manning (Stanford University)
What’s The Angle? Disentangling Perspectives from Content in the News, Noah Smith (University of Washington), Amber Boydstun (University of California, Davis), Philip Resnik (University of Maryland), Justin Gross (University of Massachusetts, Amherst)
Multimodal Event Summarization, Mohit Bansal (TTI-Chicago, UNC-Chapel Hill)
Contextual Entity Recommendation, Maarten de Rijke (University of Amsterdam)
Round 2 – October 2015
Deep Topic Models, Alexander Smola and Chris Dyer (Carnegie Mellon University)
Distributed Local Learning via Random Forests, Ameet Talwalkar (University of California, Los Angeles)
Establishing Trust in Tweets, Mark Dredze (Johns Hopkins University)
Report Linking, Benjamin Van Durme (Johns Hopkins University)
Coherent Multi-Document Summarization, Mausam (Indian Institute of Technology Delhi)
Round 1 – April 2015
Scalable Probabilistic Deep Learning, Jinwoo Shin (KAIST, South Korea)
Algorithms for Offline, Online and Stochastic Clustering, Viswanath Nagarajan (University of Michigan)
Latent-Variable Learning for Transition-Based Parsing, Shay Cohen (University of Edinburgh) and Giorgio Satta (University of Padua)