Tech At Bloomberg

New Research: Building a Better Decision Tree…Fast

June 26, 2017

Many applications solve problems that are sequential decision processes: an algorithm is given an a priori distribution over scenarios from which an unknown scenario is chosen, and the task is to identify this unknown scenario using the fewest possible actions. For example, a doctor might narrow down which disease a patient has by conducting a series of medical tests. The distribution of diseases is mostly well-known, and each test can rule out one or more diseases. In the adaptive setting, algorithms may base each decision, among other things, on the feedback they have received from their previous decisions. This important class of problems has been widely studied in the literature, but typically in an isolated, non-generic manner.
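To make the setting concrete, here is a minimal sketch in Python of the medical-diagnosis example above. The diseases, tests, outcomes, and probabilities are all invented for illustration: a known prior over scenarios (diseases), a set of actions (tests), and an adaptive loop that uses the feedback from each test to rule out inconsistent scenarios.

```python
# Toy adaptive identification setting (all names and numbers are illustrative).
# Scenarios: candidate diseases with a known prior distribution.
# Actions: medical tests; each observed outcome rules out every scenario
# that would have produced a different outcome.

# test_results[test][disease] -> outcome the test would show for that disease
test_results = {
    "blood_test": {"flu": "pos", "cold": "neg", "measles": "pos"},
    "rash_check": {"flu": "neg", "cold": "neg", "measles": "pos"},
}
prior = {"flu": 0.5, "cold": 0.3, "measles": 0.2}

def identify(true_disease, test_order):
    """Run tests in a fixed order, adaptively discarding scenarios
    whose predicted outcome disagrees with the observed feedback."""
    candidates = set(prior)
    actions = 0
    for test in test_order:
        if len(candidates) == 1:
            break  # the unknown scenario has been identified
        observed = test_results[test][true_disease]  # feedback
        candidates = {d for d in candidates
                      if test_results[test][d] == observed}
        actions += 1
    return candidates, actions

print(identify("measles", ["blood_test", "rash_check"]))
# -> ({'measles'}, 2)
```

The goal of the research is to choose the order of actions adaptively so that the expected number of actions taken is as small as possible.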

Fatemeh Navidi, a Bloomberg intern and Ph.D. student at the University of Michigan, Anju Kambadur, a senior research scientist in the NLP group at Bloomberg, and Professor Viswanath Nagarajan, Navidi’s thesis advisor and a 2015 Bloomberg Data Science Research Grant award winner, recently developed an algorithm that provides a generic solution to the above class of problems. In their work, they present a simple adaptive greedy-style algorithm that, at each step, assigns a score to each candidate action and picks the action with the maximum score; the algorithm achieves the best possible performance guarantee. Their research paper, titled “Adaptive Submodular Ranking,” will be presented at Integer Programming and Combinatorial Optimization (IPCO) 2017, this week at the University of Waterloo.
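The paper's actual scoring function is built on adaptive submodular ranking and is more sophisticated than anything we can reproduce here; the sketch below only shows the *shape* of such a greedy adaptive loop, using a simple stand-in score (the expected prior mass a test would eliminate from the remaining candidates, in the spirit of generalized binary search). All data and names are invented for illustration.

```python
# Sketch of an adaptive greedy loop: score every remaining action, take the
# best one, observe feedback, prune candidates, repeat. The stand-in score
# below (expected eliminated probability mass) is NOT the score from the
# "Adaptive Submodular Ranking" paper; it only illustrates the structure.

test_results = {
    "blood_test":  {"flu": "pos", "cold": "neg", "measles": "pos"},
    "rash_check":  {"flu": "neg", "cold": "neg", "measles": "pos"},
    "throat_swab": {"flu": "pos", "cold": "pos", "measles": "neg"},
}
prior = {"flu": 0.5, "cold": 0.3, "measles": 0.2}

def expected_elimination(test, candidates):
    """Expected prior mass ruled out by running `test`, given the
    remaining candidate set (our illustrative greedy score)."""
    total = sum(prior[d] for d in candidates)
    score = 0.0
    for outcome in {test_results[test][d] for d in candidates}:
        consistent = sum(prior[d] for d in candidates
                         if test_results[test][d] == outcome)
        p_outcome = consistent / total
        score += p_outcome * (total - consistent)
    return score

def greedy_identify(true_disease):
    """Adaptively pick the highest-scoring test until one scenario remains."""
    candidates = set(prior)
    unused = set(test_results)
    history = []
    while len(candidates) > 1 and unused:
        best = max(unused, key=lambda t: expected_elimination(t, candidates))
        unused.discard(best)
        observed = test_results[best][true_disease]  # feedback
        candidates = {d for d in candidates
                      if test_results[best][d] == observed}
        history.append(best)
    return candidates, history
```

With this toy data, the loop first runs `blood_test` (it splits the prior mass most evenly) and then, if needed, one more test to separate the survivors. The theoretical contribution of the paper is showing that a greedy rule of this general form, with the right score, is within the best achievable factor of the optimal adaptive strategy.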

In addition to the theoretical results, they experimentally compared the performance of their algorithm against several competitors: a clustering-based machine learning algorithm that uses cluster weights for choosing the next scenario and re-weights clusters based on feedback, a static algorithm that did not incorporate user feedback, and a combination algorithm with an adaptive component. One of the data sets they used to test the algorithm was a public data set called MovieLens 100K, which they refined to match their problem setting. They implemented the algorithms for medical diagnosis and another application on this data set and reported their results. Says Navidi: “In our experiments, our algorithm outperformed the other algorithms in most cases. Theoretically, in a general setting, this is the best result we could possibly have.”

Navidi had been working on related problems as part of her Ph.D. work at the University of Michigan with Professor Nagarajan. Kambadur had become good friends with Nagarajan when they worked together at IBM Research, and they had been discussing the problem. “He had a great student [in Navidi], she was interested and we went to work on it,” says Kambadur. Navidi eventually became a summer intern at Bloomberg. “I really enjoyed our work with Anju on this problem,” says Navidi. “Now I’m working on active learning algorithms in the NLP group.”

Their work also suggests some opportunities for further research. While their algorithm is superior in a general setting, there may be specific cases where there is room for improvement. The best possible result for making a medical diagnosis, for instance, might improve for some types of disease distributions. Another future direction for this work is to consider faulty feedback. For example, some medical tests (e.g., blood tests) might not be accurate and can return erroneous results. The open question is how to modify the algorithm, or its analysis, to preserve the same guarantees in these cases.

This problem may seem a bit unrelated to, say, financial modeling or any of Bloomberg’s other products. Even so, the research might prove helpful in getting new Bloomberg Terminal users up the learning curve, for example, by directing them to the set of functions most relevant to their role. Kambadur says that, to a large extent, that’s beside the point. “Often, breakthroughs are made when you apply things known in one field to your own field,” he says. “Natural language processing experienced a paradigm shift when researchers started incorporating ideas from statistics and machine learning when building applications. It is hard to determine – a priori – which research in mathematics and computer science will turn out to be useful to a given application field.”