[This post originally appeared on the Innovation Endeavors Blog and is posted here with permission and some edits.]
Several weeks ago, Innovation Endeavors partnered with Bloomberg BETA to host an exclusive deep learning event for top experts – academics, entrepreneurs, and engineers – and investors to take a closer look at the space. The event centered around a panel discussion with
- Ilya Sutskever, Research Director at OpenAI
- Reza Zadeh, CEO at Matroid and Consulting Professor at Stanford University
- Richard Socher, CEO at MetaMind
The conversation was moderated by Jack Clark of Bloomberg and organized by Shivon Zilis of Bloomberg Beta.
The conversation focused on the future of deep learning and relevant startups. Below are a few interesting insights:
(1) Deep learning is accessible: It was surprising to hear the panelists describe their field as approachable. As complex as deep learning seems, it is made accessible by data availability, increasingly commoditized systems, and the ability to find useful applications through thoughtful implementation rather than formal theory. Consider this your invitation to get into the deep learning game.

First, data. Startups clearly lack the sort of data Google can access, but when asked how startups could access meaningful datasets, the panelists were quick to say this should not be a key barrier. There are a few ways to build the dataset you need: crowdsource it, use open source tools, or build it yourself by scraping and crawling your own data on cheaply rented hardware.

Second, systems. A question was raised about using GPUs for training and whether expensive resources are needed to be effective in deep learning. While many of the required systems are being commoditized, what you need depends largely on the application. If a model can be trained on a single machine, you can likely do it on your own; if you need a cluster of machines, you will likely require greater resources.

Third, effective heuristics. There is no required or accepted formal theory in the deep learning space, and heuristics work well enough. If you fully grasp the problem, understand the heuristics, and have the data, the leap to success in deep learning is not too great. With enough data, training will converge if you are careful with how you set parameters. Many existing models are also publicly available (e.g., code posted online for download).
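To make the "heuristics are enough" point concrete, here is a minimal sketch in plain Python (no frameworks, and the parameter choices are illustrative, not from the panel): a tiny network trained on a single machine, where common-sense heuristics such as small random initialization and a moderate learning rate are all it takes to drive the error down.

```python
import math
import random

random.seed(0)

# Toy dataset: XOR, a classic task that a linear model cannot solve.
DATA = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

HIDDEN = 4    # heuristic: a small hidden layer is enough here
LR = 0.5      # heuristic: a moderate learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Small random weights: a common initialization heuristic.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

err_before = total_error()
for _ in range(5000):                      # plain stochastic gradient descent
    for x, target in DATA:
        h, y = forward(x)
        dy = (y - target) * y * (1 - y)    # output-layer delta
        for j in range(HIDDEN):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # hidden delta (pre-update w2)
            w2[j] -= LR * dy * h[j]
            for i in range(2):
                w1[j][i] -= LR * dh * x[i]
            b1[j] -= LR * dh
        b2 -= LR * dy
err_after = total_error()

print(f"squared error: {err_before:.3f} -> {err_after:.3f}")
```

Nothing here requires formal theory or expensive hardware; the whole loop runs in seconds on a laptop, which is the panelists' point in miniature.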
(2) Unsupervised learning has limited near-term applications: While unsupervised learning has potential, and experts expect the community to move in this direction eventually (given the endless supply of unlabelled data), the near-term applications seem limited. The problem is that we do not understand what an unsupervised model is supposed to do, how to evaluate its performance, or how to improve it. As a result, semi-supervised learning is currently outperforming unsupervised approaches.
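For readers unfamiliar with the term, one common semi-supervised idea is self-training: fit a model on the few labeled examples you have, use it to pseudo-label the large unlabeled pool, then refit on everything. A minimal, hypothetical sketch on toy 1-D data (a nearest-centroid classifier standing in for a real model):

```python
import random

random.seed(1)

# A few labeled points and a large pool of unlabeled ones (two clusters).
labeled = [(0.1, 0), (0.2, 0), (0.9, 1)]                 # (value, class)
unlabeled = ([random.uniform(0.0, 0.45) for _ in range(50)] +
             [random.uniform(0.55, 1.0) for _ in range(50)])

def centroids(points):
    by_class = {}
    for x, y in points:
        by_class.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_class.items()}

def predict(cents, x):
    return min(cents, key=lambda y: abs(x - cents[y]))    # nearest centroid

cents = centroids(labeled)                                # 1: fit on labeled
pseudo = [(x, predict(cents, x)) for x in unlabeled]      # 2: pseudo-label
cents = centroids(labeled + pseudo)                       # 3: refit on both

print(f"class centroids after self-training: {cents}")
```

The labels give the evaluation criterion that pure unsupervised learning lacks, while the unlabeled pool still does most of the work of locating the clusters.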
(3) Bringing deep learning to mobile takes care: Most current neural network models are optimized for desktop and server hardware rather than for phones. Some smaller networks that require less computation can run natively on the phone, but it can be quite difficult to shrink a model while retaining accuracy. We have seen some successes on mobile, with vision and speech recognition in particular, but larger models remain out of reach: very deep networks are often too computationally expensive. The more succinct the representation, the more mobile-friendly the model.
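One standard way to make a representation more succinct (not discussed in detail by the panel, but a common technique) is weight quantization: storing each 32-bit float weight as a single byte, trading a small reconstruction error for a 4x size reduction. A toy sketch:

```python
import random

random.seed(0)
# Pretend layer: 1000 float weights in [-1, 1].
weights = [random.uniform(-1.0, 1.0) for _ in range(1000)]

lo, hi = min(weights), max(weights)
scale = (hi - lo) / 255.0                      # map the range onto 0..255

quantized = [round((w - lo) / scale) for w in weights]   # one byte per weight
restored = [q * scale + lo for q in quantized]           # dequantize

# Rounding guarantees each weight is off by at most half a quantization step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(f"max reconstruction error: {max_err:.4f}")
```

A real deployment would also quantize activations and fuse operations, but even this linear scheme shows why smaller representations and careful accuracy accounting go hand in hand.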
(4) Computers only loosely imitate the brain: While the image of engineers copying the brain may be nice marketing, experts in the room found the claim to have little basis. The brain may serve as initial inspiration for making computers smart (e.g., memory, neural network) and the correlations can be interesting, but the processes and algorithms engineers use are extremely different from brain functionality.
Panelists expressed doubt that, as we move forward, developments in neuroscience will affect deep learning. Reza expects that we will rely on the tools engineering and mathematics provide to improve at the tasks we set out to do. Accordingly, progress in engineering, rather than progress in neuroscience, is likely to drive changes in deep learning.
Perhaps even more interestingly, our experts expressed optimism about the speed of progress on numerous applications. When asked what is most amazing about working in deep learning, Richard answered “how quickly the deep learning community has been able to supersede so much work from so many smart people in a variety of fields.” One can ask fundamental and difficult questions in a field previously untouched by deep learning and, very quickly, have a real impact and a real product.
To continue scaling deep learning rapidly, we will need to ask the right questions and feed enough data into the relevant models. The applications are seemingly endless: deep learning is best positioned to add value on problems that involve large datasets and that take humans time to solve. Key to success, of course, is creating a product that is truly valuable to the end customer. A couple of points of guidance from the panelists: (1) create a complete product in which machine learning plays a role, and (2) focus on a specific task, ideally something you are good at, and apply machine learning to answer questions better, faster, and cheaper.
When asked to speculate on the future of deep learning, the panelists threw out a few ideas: using a commodity cluster to train many different neural networks and pick the best one; bringing natural language processing to applications such as robotics in the near term; and eliminating menial human tasks by making us more efficient (e.g., answering 100 emails in 30 minutes).
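The "train many networks and keep the best" idea is essentially random hyperparameter search. A minimal sketch, with a hypothetical one-parameter model standing in for real networks and the learning rate as the only hyperparameter:

```python
import random

random.seed(0)

# Toy regression task: the true relationship is y = 2x.
data = [(x, 2.0 * x) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

def train(lr, steps=100):
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x    # gradient step on squared error
    return w

def loss(w):
    return sum((w * x - y) ** 2 for x, y in data)

# Each candidate plays the role of one network config trained on one node.
candidates = [random.uniform(0.001, 0.2) for _ in range(10)]
results = [(loss(train(lr)), lr) for lr in candidates]
best_loss, best_lr = min(results)

print(f"best learning rate: {best_lr:.3f}, loss: {best_loss:.6f}")
```

On a commodity cluster, each candidate would simply be trained on a separate machine, since the runs are independent; the selection step at the end is all that needs to see every result.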