A Conversation about Human Computation with Bloomberg’s Human Computation Architect Walter S. Lasecki

Walter Lasecki

Human Computation Architect Walter S. Lasecki recently joined the Data Science People+Language AI team in Bloomberg’s Office of the CTO to further his study of human computation and to explore its wide-ranging business applications. Walter began his relationship with Bloomberg in 2019, while an assistant professor of Computer Science & Engineering at the University of Michigan. As the recipient of a Bloomberg Data Science Research Grant, he researched how merging crowdsourcing and Natural Language Processing (NLP) can improve efficiency in data annotation projects.

Ahead of the Eighth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2020) this week, we sat down with Walter to discuss the broad potential for human computation and how it can be applied to financial data. The conversation — which also covers how his work is enabling Bloomberg to take advantage of the unique qualities of human and machine intelligences working together — was edited for length and clarity.

What is human computation?
Human computation is the integration of human intelligence and insight into computational processes. The idea is that, if we apply a computational lens to how we organize human effort, we can improve the efficiency, the effectiveness, and even the quality of work-life of the human contributors to a given project. It also helps us integrate AI and machine learning more flexibly than if we just think about humans as extremely smart “black boxes.”

Why is human computation important to Bloomberg?
Financial market analysis is characterized by two key needs: lots of information and domain expertise. The latter tends to be something that is inherent to the people involved in these processes – a deep understanding of where financial trends come from and where they might go. However, processing lots of information very quickly is the purview of machines. Fusing these two things is a necessary component of being able to make decisions and predictions in the context of complex markets.

In order to bring value to customers, Bloomberg needs to organize, augment, and understand data at a deeper level. We aim to fill in the gaps left by the limitations of computational methods with human intelligence. Being able to create more effective and efficient processes for carrying out human computation is critical for Bloomberg’s business.

Tell us about the state of human computation in 2020. How is it growing, evolving, and expanding into new industries and applications?
Over the last decade, we’ve seen huge growth in interest in this area and its methodologies. In its early days, human computation was about dividing work into small, strategic units, or “microtasks,” and figuring out how to complete those in reliable ways, with perhaps a little too much focus on low costs. By 2020, the field has moved on to complex, open-ended tasks, trying to leverage more than the most basic knowledge people might possess and taking on work that requires experience, domain expertise, or rapid responses. We’ve also seen a growing set of applications — not just abstract research problems, but at-scale deployment of systems like search engines, document processing services, and visual understanding tools.
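To make the early “microtask” approach concrete: a common way to get reliable results from small units of work is to ask several contributors to label the same item and then aggregate their answers. The Python sketch below is only an illustration of that idea, with hypothetical labels and an arbitrary agreement threshold; it is not a description of any Bloomberg pipeline.

```python
from collections import Counter

def aggregate_labels(labels_by_item, min_agreement=0.6):
    """Majority-vote each item's redundant labels; flag low-agreement items."""
    results = {}
    for item_id, labels in labels_by_item.items():
        label, count = Counter(labels).most_common(1)[0]
        agreement = count / len(labels)
        results[item_id] = {
            "label": label,
            "agreement": agreement,
            "needs_review": agreement < min_agreement,  # route back to people
        }
    return results

# Hypothetical example: three workers label each headline's sentiment.
raw = {
    "headline-1": ["positive", "positive", "neutral"],
    "headline-2": ["negative", "positive", "neutral"],
}
print(aggregate_labels(raw))
```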

What is the significance of the HCOMP conference?
The Conference on Human Computation and Crowdsourcing is one of the most vibrant venues for work of this type. It really focuses on core methodologies and novel applications across the whole range of Human Computation and Crowdsourcing topics. It started out as a workshop that was hosted at several key AI conferences, and eventually formed its own conference under the sponsorship of the Association for the Advancement of Artificial Intelligence (AAAI).

What is the mission of the Office of the CTO’s People+Language AI team?
The People+Language AI team is focused on understanding human language, interaction, and effort in the context of AI systems. That encompasses both how people work with AI systems and how AI can provide additional insight into the problems people are working on.

Tell us about “Best Practices for Managing Data Annotation Projects.” Why is this practical guide vital right now?
Creating effective data annotation processes is often seen as a bit of a “dark art,” drawing on ideas from computer science, design, engineering, human computation, and a number of other fields. Right now, nobody is cross-trained in all of these different areas – at least not in a traditional setting like college. The “Best Practices for Managing Data Annotation Projects” handbook we recently published, a collaboration with our Global Data department and Bloomberg Law, gets at some of the core issues that frequently arise when setting up effective data annotation practices. For anyone who needs to process large quantities of information, especially for machine learning purposes, the ability to annotate data efficiently and properly is vital.
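One measurement that comes up in almost any annotation effort is inter-annotator agreement. As an illustration of the kind of check an annotation manager might run, the sketch below computes Cohen’s kappa, a standard chance-corrected agreement score, for two annotators labeling the same items; the labels and the helper function are hypothetical and are not drawn from the handbook.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical example: two annotators tag the same five documents.
a = ["ORG", "ORG", "PER", "ORG", "LOC"]
b = ["ORG", "PER", "PER", "ORG", "LOC"]
print(round(cohens_kappa(a, b), 3))
```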

Modern AI methods (e.g., deep learning) rely on large quantities of annotated data. Recently, we have seen some approaches to machine learning problems that are “zero-shot” or “few-shot.” How can humans work together with neural network models to solve tasks that cannot be solved by either humans or machines alone?
I separate the two ideas of neural models/neural networks and “zero-shot” (or “one-shot”) learning, although one is often implemented as the other and there’s a lot of overlap. All these advances in neural networks are really exciting from a capabilities point of view — that is, they expand the range of problems we can solve via machine learning algorithms. From the standpoint of human-machine collaboration, though, it’s often a bit harder to make that interaction work, despite the improved system accuracy, because there’s more obfuscation in terms of how decisions are being made by the thousands of nodes in a neural network.

Zero-shot learning methods don’t require any data (or as much data) on a new class, but this is predicated on understanding many other things about the domain in which that new object or class arises. There is still a lot of training and labeling involved in understanding the domain, which is ultimately what allows you to transfer knowledge to a new class the machine has not seen before. The challenge is to figure out the right way to coordinate efforts based on observations and determine where to insert human effort and where to leave the machine on its own.
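One simple way to picture “where to insert human effort” is confidence-based routing: accept the model’s answer when it is confident, and send the item to a person otherwise. The sketch below is a minimal, hypothetical example of that pattern; the model, threshold, and task are assumptions for illustration, not a Bloomberg system.

```python
def route(item, model, human_annotate, threshold=0.9):
    """Use the model when it is confident; otherwise fall back to a person."""
    label, confidence = model(item)
    if confidence >= threshold:
        return label, "machine"
    return human_annotate(item), "human"

# Stand-in model and human functions for the example.
def toy_model(text):
    return ("company", 0.95) if "Inc." in text else ("unknown", 0.4)

def toy_human(text):
    return "person"  # a worker or domain expert would answer here

print(route("Acme Inc. reports earnings", toy_model, toy_human))
print(route("Jane Doe joins the board", toy_model, toy_human))
```

In practice, the threshold itself becomes a design decision, trading off cost, latency, and accuracy against how much human attention is available.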

Can you tell us about any human computation research projects that the team at Bloomberg has been working on in 2020?
We’re working on how people can help bridge some of the understanding gaps that often arise in real-world uses of natural language. NLP is an interesting application area for human computation methods because understanding natural language is “AI-complete,” meaning that until machines achieve “human-level intelligence,” there will always be problems that are not solvable automatically. Until we get there, being able to use human intelligence to fill in the gaps in machine learning capabilities is vital to making automated systems more capable.

What are some key areas of opportunity for Bloomberg related to human computation in the coming years?
Two things are exciting to me. First is the ability to look at how the traditional organizational structure intersects with how we think about human computation at the task level — that is, incorporating both the micro and macro structures into the design of “hybrid intelligence” systems. The second thing is real-time applications of human computation methods — thinking about ways to create new systems and tools to provide human-in-the-loop answers within seconds of the data becoming available. We are building state-of-the-art platforms to support current and future use cases: infrastructure for complex work, for integrating subject matter expertise and domain knowledge with massive parallel efforts, as well as thinking about how latency can be reduced while improving efficiency, accuracy, and working conditions in high-impact, time-sensitive settings.


Say hello to Walter at HCOMP 2020 this week.