
Finding Genius


by Kunal Mehta


  The second axis is supervised learning versus unsupervised learning. Supervised means that you have historical, accurately labeled data, known as “training data,” which you can feed to the ML model in order to produce an accurate initial prediction. Unsupervised simply means that you do not have that labeled, accurate dataset to begin with, so the ML algorithm puts a stronger emphasis on observations of its initial outputs in order to quickly optimize. For example, in the spam filter example above, if the input data used to train the ML model is a set of emails with a corresponding classification of spam or not spam, then this would be a supervised learning problem. However, if you only had an initial dataset of emails and did not know whether each one should be labeled as spam, this would be an unsupervised learning problem, as you are relying on feedback from the ML algorithm to identify the differences between a regular email and a spam email without explicitly labeling one or the other. The benefit of an unsupervised learning algorithm over a supervised one is that you do not need to individually label the training dataset; the tradeoff is that you often need more data and more feedback loops in order to produce a strong prediction from your ML model.
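  To make the distinction concrete, here is a minimal sketch in Python of the spam example in both settings, assuming scikit-learn is available; the tiny email list and its labels are purely illustrative.

```python
# Minimal sketch contrasting supervised and unsupervised spam detection.
# Assumes scikit-learn is installed; the toy emails and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

emails = [
    "Win a free prize now", "Meeting rescheduled to 3pm",
    "Claim your free reward", "Quarterly report attached",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (only needed in the supervised case)

X = TfidfVectorizer().fit_transform(emails)

# Supervised: learn directly from the labeled examples.
clf = LogisticRegression().fit(X, labels)

# Unsupervised: no labels, so simply group similar emails and inspect the clusters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # which cluster corresponds to "spam" must be inferred afterward
```

  In the unsupervised version, the extra work of deciding which cluster is spam is exactly the feedback loop described above.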

  Deep Learning (DL)

  If machine learning can be thought of as the hammer in your AI toolkit, deep learning is the set of screwdrivers; they can be incredibly useful in certain situations, but there is a bit more complexity (head type, size of screw) involved in using them properly. The recent rise in the use and effectiveness of deep learning algorithms is one of the biggest drivers of today’s excitement around AI.

  Most of today’s deep learning algorithms are based on a neural network, which is a type of non-linear machine learning algorithm. A neural network is built to mimic the structure of the neurons in our brain. Each node (neuron) is connected to other nodes, and those connections are weighted based on the type of data being processed. Just as the neurons in our brain associate pieces of information with one another, a deep learning neural network does the same: each node captures certain features of a dataset in order to make a prediction on new, incoming data. Relating this back to the human brain: when we meet someone new, that experience is broken up into features we unconsciously store, such as the shape of the person’s face, where and when we met them, and the sound and spelling of their name. With each new meeting, we continue to capture those features in neurons and form connections between people and places based on how related these experiences are. The more neurons (or nodes) available, the more features from that information we can capture, resulting in more accurate predictions.
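  For readers who want to see the mechanics, here is a minimal sketch of a single forward pass through a tiny two-layer network in plain NumPy; the layer sizes, random weights, and activation choice are illustrative assumptions, not a description of any particular system.

```python
# A minimal sketch of how nodes weight and combine their inputs, using plain NumPy.
# Layer sizes and random weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # incoming features (e.g. pixels, measurements)
W1 = rng.normal(size=(8, 4))      # each of 8 hidden nodes weights all 4 inputs
W2 = rng.normal(size=(1, 8))      # the output node weights the 8 hidden nodes

hidden = np.maximum(0, W1 @ x)    # each node fires based on its weighted inputs (ReLU)
prediction = W2 @ hidden          # the output combines the features the hidden nodes captured
print(prediction)
```

  In a real deep learning model the weights are not random; they are adjusted during training so that each node ends up capturing a useful feature.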

  There are various flavors of neural networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), each with their own specific set of use cases. CNNs are often used to make image predictions; a real-world example of a CNN is the FaceID algorithm on the iPhone. FaceID uses an initial scan of a user’s face to build a CNN, with each node capturing features that are used to uniquely identify that person. From then on, each time the user places their face in front of the phone, the algorithm makes a prediction of whether or not that face matches the features stored in the CNN. If there is a strong match, the phone unlocks. In contrast, RNNs are better suited for time-series data. An example of an RNN application is the algorithm powering your Alexa or Google Home speaker: as you speak, it captures the speech data over the course of the sentence, and once you are done speaking, it processes that data together.
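  As a rough illustration of the difference, the sketch below defines a tiny CNN for an image-shaped input and a tiny RNN (an LSTM) for a sequence of audio-like features, assuming PyTorch; the shapes and layer sizes are invented for the example and have nothing to do with FaceID or Alexa themselves.

```python
# Hedged sketch: a tiny CNN (image-shaped input) vs. a tiny RNN (sequence input) in PyTorch.
# All shapes and layer sizes are illustrative, not taken from any production system.
import torch
import torch.nn as nn

# CNN: slides filters over a 2-D image to pick up spatial features (edges, facial features).
cnn = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),               # e.g. "match" vs. "no match"
)
image = torch.randn(1, 1, 28, 28)  # a batch of one 28x28 grayscale image
print(cnn(image).shape)            # -> torch.Size([1, 2])

# RNN: consumes a time series step by step, keeping a running hidden state.
rnn = nn.LSTM(input_size=13, hidden_size=32, batch_first=True)
speech = torch.randn(1, 50, 13)    # 50 time steps of 13 audio features
outputs, (h, c) = rnn(speech)
print(h.shape)                     # the final hidden state summarizes the whole utterance
```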

  What has made DL neural networks so powerful recently is the number of nodes, and layers of nodes, that can now be computed. However, DL algorithms are not only very compute- and data-intensive; the complexity of the interactions between nodes also leads to what’s commonly referred to as a “black-box” solution. Just as you cannot easily articulate which features of an experience your neurons have stored, it is not easy to determine what features a neural net extracts from a dataset. You may remember someone because of their name, while I may remember that person because of the features of their face. A DL algorithm is the same: the complexity of the neural net makes it difficult to understand why the model outputs what it does, even though it is often highly accurate.

  Despite all of that, the prediction power of deep learning algorithms compared to traditional machine learning often outweighs the “black-box” cost. And with the exponential increase in compute capabilities thanks to processors like GPUs and cloud computing, as well as the massive amount of available data today (80-90% of today’s data has been created in the last two years), deep learning algorithms are now one of an engineer’s preferred tools to use from their AI toolkit.

  Reinforcement Learning (RL)

  The third major field of AI that I often see companies use is reinforcement learning. In your AI toolkit, reinforcement learning is the set of wrenches. Just like deep learning, it is a more specialized tool that requires upfront preparation to use effectively. Reinforcement learning algorithms are often used when the objective of a problem can be optimized through rewards in a “cause-and-effect” manner. We see this type of behavior in the real world every day. For example, when a dog owner is teaching their dog to sit, they will often do this by giving the dog a treat (a “reward”) when the dog performs the behavior, and no treat (a “punishment”) when it does not. Over time, the dog will learn to associate sitting on command with a treat.

  Reinforcement learning algorithms are similar. Unlike deep learning, which requires an extensive set of upfront training data, reinforcement learning models are best used when there is a goal (e.g. teach the dog to sit), along with the ability to train the model by providing cues along the way (e.g. treats when the dog sits, no treats when it does not). Today, reinforcement learning is widely used, from the control algorithms that optimize the path of a robot within a warehouse to the agents that compete in the most complex video games and puzzles.
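  In the spirit of the dog-training analogy, here is a toy sketch of a tabular, bandit-style learner that discovers the rewarded action purely from treats; the actions, reward values, and learning rate are made-up illustrations rather than a real RL system.

```python
# Toy reinforcement learning sketch: the agent learns that "sit" earns a reward (treat).
# The environment, actions, and reward values are invented for illustration.
import random

actions = ["sit", "bark", "wander"]
q = {a: 0.0 for a in actions}      # the agent's estimate of each action's value
alpha, epsilon = 0.1, 0.2          # learning rate and exploration rate

def reward(action):
    return 1.0 if action == "sit" else 0.0   # a treat only when the dog sits

random.seed(0)
for episode in range(500):
    # explore occasionally, otherwise exploit the best-known action
    a = random.choice(actions) if random.random() < epsilon else max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])       # nudge the estimate toward the observed reward

print(q)   # "sit" ends up with the highest estimated value
```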

  This is just a short introduction that scratches the surface of the complexity of the subdomains within AI. But while billions of dollars and decades of research effort are spent pushing the limits of particular specialties within AI, it is the ensemble models that combine multiple domains of AI expertise and research into a single solution that have produced some of the best-performing AI today. Just as you would use more than just a hammer to build a house, the best solutions are often constructed by combining multiple tools from an engineer’s AI toolkit. For example, the famous 4-1 victory in Go by Google DeepMind’s AlphaGo over Lee Sedol in 2016 was built on a combination of deep learning methodologies coupled with reinforcement learning.

  As an early stage investor in AI startups, I am often investing in companies before any commercial maturity. This means that understanding the AI technology and differentiation at a fundamental level is critical to the investment decision, especially given all of the hype and promise around AI. This leads into the next framework I want to share, which is around investing through the hype.

  Investing through the AI hype

  There is no shortage of companies using some sort of AI to build a new product or service, ranging from the latest consumer app to the next enterprise software that promises to reinvent the way an enterprise works. The power of machine learning, deep learning, and reinforcement learning is real. The challenge today is cutting through the noise to determine what is truly an AI-first company creating long-term value versus one masquerading as an AI company in order to take advantage of the current market hype.

  One way that I look at the potential of an AI startup is by using a two-by-two framework that evaluates a company along its technology and business model innovation. On one axis, I look for companies that have differentiated datasets or algorithms. Differentiated datasets can be both proprietary datasets as well as unique access to scalable, labeled data. An example of this is Netflix’s dataset of user preferences based on their watch history, or Facebook’s photo-tagging feature, which allows them to amass a large number of labeled photos, done almost entirely by leveraging their user base.

  On the algorithm side, this is often in the form of a new mathematical model out of academia, or a unique combination of existing AI models that has been optimized to solve a particular problem. Access to this proprietary dataset and/or algorithm allows the company to build a long-term competitive moat around their technology. As you already know from the AI toolkit description above, there is a strong feedback loop between AI algorithms and data. The better the data, the better the algorithm will perform at future predictions. And the better those predictions, the better the output data, which is then fed back into the algorithm. What this means is that a company with even the slightest head start with a better proprietary dataset or algorithm will have an ever-increasing advantage over their competitors. This winner-take-all characteristic of AI is one of the things that makes these companies so powerful.

  The second way to evaluate an AI company is by the innovation of their business model. Companies that can build a business model that leverages their differentiated AI in a way that is fundamentally disruptive to the traditional economics of their competitors will build long-term value that cuts through the noise within a sector. For example, Amazon’s Kiva robots are used to bring products from one end of their 1-million-square-foot fulfillment centers to the other. This drastically reduces the number of humans needed for retrieval and instead allows them to focus on tasks that require more cognitive load, such as picking and packing items into a customer’s box. The use of these AI algorithms, which enable the robots to autonomously navigate the warehouse, disrupts the traditional unit economics of the business. Amazon not only has AI powering their backend logistics, but, like Netflix, they have built a recommendation engine on the frontend that personalizes the site for each individual user. The use of AI throughout the business is one of the reasons Amazon has built the largest e-commerce website in the world while offering a superior customer experience with a disruptive model of 2-day, 1-day, and even 1-hour shipping.

  Companies that excel on both axes will not only have a differentiated business model but will enjoy the dataset/algorithm defensibility in a space where competitors struggle to survive the new world order. As an investor, I use this framework as a starting place to help me ask the right questions to determine if an AI company is built for long-term success. It is often the case that companies excel along one axis but not the other. This results in short-term success, but competitors that come along with better access to unique datasets/algorithms or innovative business models will ultimately win out. The companies that will succeed in this next wave of AI will need to excel along both axes. Not only will these companies change the way an industry views their business, but by the time the competition figures it out and tries to challenge them, it will be too late to break the AI company’s defensive moat of better data and algorithms.

  AI + Physical world = Intelligent Robots

  Building on the framework for investing in AI startups, one area that I am particularly excited about is at the intersection of AI and the physical world, aka intelligent robots. Today’s world is still largely manual and labor-intensive. Take the largest industries in the world today that leverage physical labor; from construction to manufacturing to agriculture, 80% of the tasks are still done by humans, with relatively simple machines aiding in very specific pieces of the remaining 20% of the work. Today’s robot is often relegated to repetitive tasks in very constrained environments. But as companies continue to use their AI toolkit to enhance robots to deal with more complex scenarios, I predict that the 80/20 split of human/machine will not only flip, but intelligent robots will unlock new business models. Humans will no longer be limited to simplifying the manufacturing line around the low-level capabilities of robots but will be free to set up complex environments that are better optimized for rapid production and improved service. Early-stage investors are often searching for the next big platform shifts in technology and industry, and I believe this has all of the makings of a big one.

  One of the biggest barriers to intelligent robotics penetrating industries such as agriculture or retail has been the high capex with unproven ROI. However, there has been a commoditization of sensor hardware over the last decade, largely driven by the rapid innovation cycles in consumer smartphones and personal electronics. HD cameras, flash memory, and compute processors are pennies compared to what they used to cost. This not only has greatly lowered the barrier for startups to take on the capex required to build robots, but it has enabled new business models such as RaaS (robots-as-a-service) that allow once skeptical industry incumbents to now consider intelligent robots as a viable solution to augment human labor. In addition, this has exponentially increased the amount of sensor training data that a young startup can capture and process for their AI algorithm, which rapidly levels the playing field against the incumbents. Today, startups like Blue River Technology in agriculture and Bossa Nova in retail are leading the charge, but this is just the beginning.

  AI is at the heart of these robots’ ability to make decisions and take actions in massively unstructured environments. It cannot be overstated how different intelligent robots are from the machines we think of today. Human perception is a highly complex process dependent on our past, current, and future predictions of the world. The physical world is incredibly unstructured; the analogy of a nicely organized Excel table doesn’t exist in real life. While humans are innately skilled at perceiving and making decisions with imperfect information, machines historically are not. Robots were only used to automate 20% of the physical world because the environment needed to be structured enough for a robot to make sense of it. Take the industrial robot arm from ABB or Kuka that is used to build an automobile; it takes months to program that robot to do a single task along the manufacturing line. Because of that, a company needs to produce thousands, even millions, of units of a single product in order to be profitable. But as these robots improve in their ability to rapidly learn and execute new tasks in complex, unstructured environments, they will open up new ways to build a business with entirely new economics. We have already seen this happen with Amazon’s acquisition of Kiva changing the economics of logistics, and this is continuing with companies like Zume in the food space, and Google’s Waymo and GM’s Cruise in transportation.

  When I meet with startups building intelligent robots, I go back to first understanding where the company falls along the two frameworks I shared in this chapter. First, what is in their AI toolkit? What combination of ML, DL, and RL are they using? And second, do they have access to proprietary data/algorithms coupled with a disruptive business model? There are a number of startups that are building intelligent robots applied to traditionally labor-intensive industries that excel along both of these frameworks. From my point of view, intelligent robots are the “how” to Andrew Ng’s statement of AI transforming industries. And while we are in the early innings of it all, I predict that today’s startups that are leveraging AI to build intelligent robots will be tomorrow’s giants.

  ANDREW KANGPAN

  TWO SIGMA VENTURES

  The common conversations surrounding AI are about how the technology will replace humans: a discourse that forebodingly points to a dystopian future. As the authors of the thesis chapters explore, this may not be the case. Through decades of technological advancement and study of the human body, AI has the potential to elevate human performance to heights never before seen. With a better understanding of how our bodies function, down to the organs and the genome itself, the most genius entrepreneurs are using technology to eradicate disease, repair organ function, and optimize humans for their best attributes and qualities. This is an interest area that Andrew Kangpan, an investor with Two Sigma Ventures, is focused on. He writes in this chapter that ‘as our lives become more quantified, we will have a better understanding of how our decisions impact our lives both positively and negatively. This in turn will allow us to optimize the trajectory of our health in a way that allows us to more knowingly accept the benefits and risks of how we choose to live our lives.’ Within those words lies the crux of his thesis: we are interacting with technology in ways that we never have, creating truly novel insights into the human body, and with that, there is immense potential.

  This discussion is one that Andrew has considered in his venture capital career spanning FF Ventures and now Two Sigma Ventures, a data-focused fund in New York City. In 2016, nearly three years before publishing this chapter, Andrew wrote in a post on Medium:

  “Lately, there’s been a lot of excitement regarding new forms of human-computer interfaces. The way users interact with their computing devices are becoming more varied as we shift beyond traditional point and click. Textual conversations, voice commands and VR/AR experiences are new user interfaces that present interesting questions in relation to how their ecosystems will continue to develop and impact markets they penetrate.”

  These questions became more focused in 2018 as he homed in on how data science and technology intersect with the human body. In a post during that same year, Andrew posed three questions: How is our health trajectory affected by our daily choices? How can we catch disease and deliver treatment earlier? How can we account for individual variability when we treat disease? In this chapter, Andrew begins to answer some of these questions and explores a key point made earlier in this book: investment theses are often the result of years of research and deep introspective thinking informed by the entrepreneurs they meet.

  OPTIMIZING HUMAN HEALTH AND WELLNESS: DATA, AI, AND THE FUTURE OF THE QUANTIFIED SELF

  Andrew Kangpan, Two Sigma Ventures

  In 2007, Gary Wolf and Kevin Kelly began to organize a diverse group of tech-enthusiasts called the “Quantified Self,” which informally gave name to the band of individuals using digital technology to measure all aspects of their lives. “Numbers are making their way into the smallest crevices of our lives,” Wolf once wrote in reference to the movement; “with an accelerometer and some decent algorithms, you will soon be able to record your sleep patterns with technology that costs less than $100.”

 
