Solomon's Code

by Olaf Groth


  The Brilliant panel includes a screen on the front end, and a platform to combine and learn from many of the data streams produced within a home. Analyzing a breadth of data can help make the home more comfortable, more secure, more livable for older residents, and healthier as users have it monitor diet and sleep patterns. For example, Emigh says, AI-powered systems that bridge a range of data sources could provide stronger home security, at a lower cost, and with fewer false reports than current offerings. Audio-monitoring devices could flag the sound of glass breaking. Cameras could feed back into facial-recognition systems to distinguish between a family member and a potential intruder. “You lose out on a lot of possibilities for the value you can provide if this data is not stored and accessible for use,” Emigh says.

  Yet, he readily acknowledges the concerns users might have about the security and privacy of all that data, especially coming out of a place as intimate as one’s home. Physically, the smart switch has an opaque plastic panel that can cover the camera embedded in it. Digitally, Emigh and his colleagues realize it’s impossible to completely lock down all the data going from multiple devices to multiple providers. To some extent, they’re trying to make sure Brilliant doesn’t provide yet one more attack vector for hackers. But they set out to address security concerns at some of the earliest stages of development, something many companies do only after trying to build and increase demand for their products. Different people will have different levels of trust, and Brilliant tries to accommodate that from the outset. “The change for me has been entirely positive—the convenience and comfort and enjoyment,” he says. “But I agree with you, there are downstream consequences on everything we do. In some cases we can figure out what they are, but in others they’re not foreseeable.”

  What Brilliant or any single company can’t foresee is just how much these relatively simple devices, such as a smart thermostat, will reveal about the humans and environments they measure, nor how those individual factors will feed into broader environments and communities. Ideally that’s what it’s all about—a cognitive element in the house that reflects on your home life and helps enhance security, comfort, affordability, and entertainment as it integrates and manages these different resources.

  That is still uncomfortable for many. Yet, we’ve already acceded to the idea of carrying around a far more powerful and complex monitoring device almost everywhere we go. A search engine on your mobile phone can tilt results based on your location, past and recent activity, and perceived goals. If you suddenly start to walk with a limp, a mobile phone’s accelerometers and gyros could identify the change in your gait. And, if you happen to be in a high-risk group for falling, the phone can send out alerts, either encouraging you to stop or urging others to intervene before you let pride prevail and end up with a broken hip.

  At some point along this continuum—from simple thermostat, to comprehensive and intelligent home-automation systems, to the phones we take everywhere—our regard for the system changes. Certainly, networks that integrate machine learning capabilities bring an increased level of intelligence and potential value, but at what point do they make the types of judgments that matter to us and, thus, require greater degrees of trust?

  THE MACHINE WILL SEE YOU NOW

  Self-awareness, image, and values help mold the complex mix of factors that shape our private and public identities. A rich and ever-changing blend of all our education, socialization, and past experiences, both at home and in the workplace, infuses itself into our view of the world and our unique role in it. Even the most introspective among us have little clarity about what’s going on under the hood. Now, into this miasma of wonderfully mysterious identities, we introduce systems with a bizarre mix of their own—equally capable of identifying the subtle identity hints we can’t or don’t want to see and overlooking some of the most important facets of our humanity.

  Isn’t it hard enough for us to figure out who we are, who we want to be, and what we want the world to think of us? In today’s hyper-connected and mediated world, it has become even harder to assess the identity of ourselves and others, and our natural biases and generalizations often spark frustration. Oona King, the global head of diversity at YouTube, sees it in the push for diversity in programming. “It’s not enough to feature more women or black people on TV shows,” King says. “We have to get much more granular about this.” The deeper push extends beyond the script into the way characters speak and how subtle behaviors and reactions are displayed. And here, AI-powered systems could help generate some long-overlooked humanizing insights. For example, to analyze how much bias is expressed in its programming, YouTube has started to use facial recognition algorithms to identify how much protagonists of a certain gender or ethnicity are represented in programming and how much or little they speak. “We have found that when a male is the main actor in a scene and has a female opposite him, the male speaks 90 percent of the time,” King says. “But when a female is the main protagonist with a male present, the female only speaks 50 percent of the time.”
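
  The measurement King describes is simple once a recognition model has labeled who is speaking and for how long. The sketch below is a hypothetical illustration of that tallying step, not YouTube’s actual pipeline; the Segment structure, the gender labels, and the example numbers are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker_gender: str   # label assumed to come from an upstream recognition model
    seconds: float        # duration of the spoken segment

def speaking_share(segments, lead_gender):
    """Fraction of total spoken time in a scene used by the lead's gender."""
    lead_time = 0.0
    total_time = 0.0
    for seg in segments:
        total_time += seg.seconds
        if seg.speaker_gender == lead_gender:
            lead_time += seg.seconds
    return lead_time / total_time if total_time else 0.0

# Made-up numbers for illustration only, not the book's data:
scene = [Segment("male", 45.0), Segment("female", 5.0)]
print(f"Male lead speaking share: {speaking_share(scene, 'male'):.0%}")
```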

  Those sorts of frictions exist in so many of the factors and experiences that shape our lives and relationships. In our workplaces, human bias is notoriously pervasive in hiring decisions. Several new firms, including a Seattle-based company called Koru, offer platforms designed to help companies improve their interview processes and provide better access to minority and other often-overlooked candidates. The traditional hiring process resembles an odd mix of investigatory interrogation and beauty pageant—one side trying to probe for flaws, the other trying desperately not to reveal any. Companies and job candidates can and often do switch roles, but the same sorts of narrow judgments, ill-formed opinions, and outright biases remain no matter who’s trying to impress whom. So, Koru, HireVue, and other companies employ AI systems in hopes of facilitating better, more-objective matches between employers and candidates. Using facial and voice recognition software and AI algorithms to analyze a video feed of an interview, recruiters can pick up on subtle clues of comfort and discomfort, truthfulness, confidence levels, and overall appearance. They use this analysis to predict a candidate’s performance against other candidates along a number of psychological parameters the employer deems key for the role in question. Then they compare the external candidates to internal employees who are already performing similar jobs well.

  Koru amasses about 450 data points and, rather than feeding them into a single predictive model, runs those variables through a combination of five different models simultaneously, says Josh Jarrett, the company’s cofounder and chief product officer. By testing multiple models on data for current employees, Koru can find the model that best identifies the factors that correlate with success at the company. It then uses that model to process candidates, zeroing in on what Jarrett calls “GRITCOP”—for grit, rigor, impact, teamwork, curiosity, ownership, and polish. In minutes, Koru can run the data and find the best model, then test candidates with a twenty-question survey and rank them on the probability that they’ll be a good fit.
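
  A minimal sketch of that multi-model step, under stated assumptions, might look like the following: fit several candidate models on current employees’ survey scores and a known success label, keep whichever cross-validates best, then rank applicants by predicted probability of fit. The feature columns, the specific models, and all of the data here are illustrative stand-ins, not Koru’s actual system or data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: survey-derived scores for current employees (columns loosely
# analogous to GRITCOP-style traits) and a label for who performs the role well.
X_employees = rng.normal(size=(200, 7))
y_success = (X_employees[:, :3].sum(axis=1) > 0).astype(int)

# Several candidate models, all evaluated on the same employee data.
candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Keep the model whose cross-validated accuracy on current employees is highest.
best_name, best_model = max(
    candidates.items(),
    key=lambda kv: cross_val_score(kv[1], X_employees, y_success, cv=5).mean(),
)
best_model.fit(X_employees, y_success)

# Score applicants' survey responses and rank them by predicted probability of fit.
X_applicants = rng.normal(size=(20, 7))
fit_probability = best_model.predict_proba(X_applicants)[:, 1]
ranking = np.argsort(-fit_probability)
print(best_name, [round(p, 2) for p in fit_probability[ranking[:5]]])
```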

  The system will not replace the human-to-human interview, but it can help counter individual and organizational bias, helping companies find employees from places they might not currently consider. For example, one of the AI models associates high performance with top colleges and universities. But that correlation might be an artifact of the managers’ bias toward, say, Ivy League graduates. “The whole point of this was to widen the funnel,” Jarrett says. “We’d get top Harvard candidates if we want them, so let’s find people from other schools. Let’s find the GRITCOP stuff.” In that sense, Koru can help corporations escape the tyranny of the résumé or curriculum vitae, which emphasizes pedigree, creating structural barriers to equal opportunity, and de-emphasizes important psychometric variables.‡‡‡

  Koru uses feedback from the models to help candidates, too. In surveys, job seekers regularly complain that résumés disappear into a black hole and they never receive feedback. So, Koru positions its services as a supplement to the interview process, returning advice specific to each individual and engaging them in an exchange that helps inform both Koru and the candidates who use the platform. A candidate whose job interview responses are analyzed by Koru or similar AI systems could learn from comparisons with other candidates, but also with successful existing employees, who are held to a higher standard of real-time behavioral and psychometric evaluation, rather than historical analyses of résumés that over-emphasize pedigree and reinforce social strata along lines of education and socioeconomic status.

  Meanwhile, the computer vision algorithms mentioned above assess minute clues in one’s interview conduct that are often hidden to the human eye. After all, the human interviewer is also beholden to their own biases, notes David Swanson, former chief human resources officer of SAP North America and author of The Data Driven Leader, which looks at how to use data to drive measurable business outcomes. “We have found that all but the most experienced and trained interviewers often spend the first five minutes of an interview forming an opinion, and then spend the next 55 minutes reconfirming it,” Swanson says, “rather than continuing their exploration of the candidate.” Early evidence suggests AI systems such as Koru’s can mitigate this and give better feedback to both interviewees and interviewers, but the jury is still out on whether that leads to demonstrable job success in the medium and long term, as the value and agility of a human employee in an evolving workplace environment becomes clear. The risk might be that corporations use data metalabels that measure narrow, near-term suitability for certain tasks but sacrifice the longer-term flexibility of an individual, and their fit for evolving corporate strategies or ventures.

  Yet, the returns are promising enough to spawn a range of similar applications. In April 2018, Cisco acquired a start-up called Gong.io, which can record sales calls and provide analysis to salespeople and their managers alike.§§§ The platform allows the sales force to better identify key leads and how to improve their efforts to close on deals, including ways to view the best practices of top performers. Managers, on the other hand, can get a more transparent view of the field of potential deals and how their sales staff is working to close on them. Few salespeople anywhere report the raw, unbridled, and unpolished truth. After all, their ability to tell appealing narratives and believe in them is what makes them good salespeople. But it’s critical that managers ascertain an accurate portrayal of the situation.

  Relying in part on the judgment of Gong, Koru, and other AI platforms might give managers a clearer picture into the potential of candidates and existing employees, but the same systems might also displace some of the deeply useful human intuition that helps people work better and more closely together. An X-ray system of the mind can help avoid the imperfections in our human decision processes, alleviating certain biases in our cognition and consciousness, but it might also focus too narrowly on certain measurable dimensions. Discarding the richness of all the other, tough-to-quantify cognitive attributes—our creativity, inspiration, adaptability, and intuition—precludes our chance to consider the rich, emergent potential of our minds and the power of the constantly evolving human consciousness.

  CONSCIOUSNESS, VALUES, AND THE ETHEREAL HUMAN SOUL

  We all live inside our conscious minds, but few people spend as much time contemplating that curious and indefinable space as David Chalmers. A philosophy professor and codirector of the Center for Mind, Brain, and Consciousness at New York University, Chalmers treads in some tempestuous territory, yet he seems to exude an indefatigable calm. He assumes he’ll never know a comprehensive answer to his field’s primary question—what, precisely, is consciousness?—yet he’s perfectly happy to keep striving for it nonetheless. His demeanor almost belies the intensity of thought, but one of the few conclusions he’s reached with any certainty is this: “Our understanding today will look awfully primitive 100 years from now.”

  Recently, though, he and other thinkers in the field of the mind and consciousness have moved away from a hierarchical sense of consciousness at the top of a ladder. Traditionally, the thinking might have gone from cognition, to intelligence, and finally up to whatever consciousness was. To the layperson, Chalmers describes it as the little movie going on inside your mind, the one that only you can see and hear. Yet, a rising movement among philosophers of the mind has started to look at consciousness as a more primitive phenomenon—essentially, as any subjective experience, rather than the subjective experience. Pain, then, is a basic form of consciousness, Chalmers explains.¶¶¶ Fish feel it. Infants feel it. Adult primates feel it. One recently developed argument in the field takes it even a step further. Integrated information theory, introduced by the neuroscientist Giulio Tononi, posits that consciousness is inherent in any physical system, deriving from the cause-and-effect relationships between the system’s constituent parts.

  Whatever the true nature of consciousness and the continuum on which it might exist, Chalmers says, we might make some useful distinctions when thinking about the subjective nature of humans and machines. “Some systems have elements of cognition but aren’t able to think about themselves, so maybe self-consciousness kicks in at the level of primates or something,” he muses. “Then maybe that’s another step in this chain.” The chain might start at perception, or how we perceive the world. Chalmers suggests that consciousness comes in very early, with perception, but that many systems have conscious perception without the ability to think or reason. So, perhaps our next step is cognition, that ability to think and reason about the world around us. Then, we might move another notch beyond with self-cognitive reasoning, in which we think about ourselves and our cognition. And with that, we might have the self-conscious, subjective, “first-person” experience of ourselves.

  Yet even these distinctions suggest a hierarchy that Chalmers might question. The debates and discussions that surface on YouTube or at various conference panels can veer off into some surreal hypotheticals and trains of thought, which might explain why Chalmers enjoys the field in all its enigmatic glory. But the same uncertainty raises a critical conundrum for the development of AI systems that will monitor us, make decisions for us, have opinions about us, and judge us. “The science of consciousness has really developed a lot in the last twenty to thirty years, and AI and computer science are in principle a part of that,” Chalmers says, “but it’s hard to approach consciousness directly because [AI developers] don’t know what exactly they have to model. . . . In the human case, we can start with things we know are conscious, like other humans, and try to track back from there. But there’s no piece of code we can write and then say, ‘That’s it. Now we have consciousness!’”

  Rather, consciousness appears to be an emergent phenomenon, irreducible to its individual physical parts or a set of clear causes. The interplay of billions of neurons in our brains with the millions of sensory inputs they get through our bodies creates thoughts of a higher order—almost like a space station of the brain, orbiting on a higher plane, clearly supported by the physics of earth’s resources and atmosphere but hovering above. Human self-reflection might offer the best evidence of complex consciousness, even if we can’t explain precisely how it happens or from whence it emerges, Chalmers says. But what’s clear is that consciousness, like identity and the many other indefinable attributes that make us human, is not a fixed or finite variable. Even we, as humans, will try to chase a higher plane of increased consciousness through meditation, Tai Chi, spiritual engagement, experiences in nature, or experiments with drugs. Some of us merely want to think and argue better. Some wish to perform better in their professional roles and careers. Still others want to become more fulfilled in their relationships, seeking wisdom and growth toward a higher state of being.

  All these pursuits require a heightened level of brain function, alertness, and awareness. One need not become metaphysical to find this challenge appealing. Keeping the brain fit and ensuring stronger mental health as we age is enough, thank you. It’s not for nothing that brain workout start-ups like BrainGym and SharpBrains have gained big followings in the past decade. Of course, the insights that spring from these wells of digital pattern recognition and reflection are not always pleasant or ego-reinforcing. Like any good feedback, this can force people to think hard about who they are and what they want to become. And in this sense, especially, AI systems appear to approximate a more complex consciousness than we might otherwise grant a collection of silicon, metal, and code.

  Even that anthropomorphic sentiment is enough to rankle Jerry Kaplan, who lectures about the social and economic impact of AI at Stanford University and has written extensively about consciousness and artificial intelligence. Kaplan gives no quarter on the consciousness of machines: “It’s a mistake to use that terminology,” he says. “We don’t know what human consciousness is, so to apply it to machines is completely inappropriate. And there’s no evidence to date that it will ever be appropriate to talk about machine consciousness.” It’s not so much the concept that bothers Kaplan; it’s the language. A computer program could model its own existence in the world and reflect on that simulation of its actions. A robot takes an action, fields the resulting inputs from the surrounding environment, and then sees if its action had the predicted effect. To that extent, there’s a sort of metareflection occurring, but to ascribe to that an anthropomorphic “consciousness” gets Kaplan more animated. There’s no higher plane that the robot’s circuits create, no place where it can reflect critically about its existence, its place in the universe, or its feelings of satisfaction, doubt, or curiosity.

 
