trolley problem

44 results


pages: 202 words: 58,823

Willful: How We Choose What We Do by Richard Robb

activist fund / activist shareholder / activist investor, Alvin Roth, Asian financial crisis, asset-backed security, Bear Stearns, behavioural economics, Bernie Madoff, Brexit referendum, capital asset pricing model, cognitive bias, collapse of Lehman Brothers, Daniel Kahneman / Amos Tversky, David Ricardo: comparative advantage, delayed gratification, diversification, diversified portfolio, effective altruism, endowment effect, Eratosthenes, experimental subject, family office, George Akerlof, index fund, information asymmetry, job satisfaction, John Maynard Keynes: Economic Possibilities for our Grandchildren, lake wobegon effect, loss aversion, market bubble, market clearing, money market fund, Paradox of Choice, Pareto efficiency, Paul Samuelson, Peter Singer: altruism, Philippa Foot, principal–agent problem, profit maximization, profit motive, Richard Thaler, search costs, Silicon Valley, sovereign wealth fund, survivorship bias, the scientific method, The Wealth of Nations by Adam Smith, Thomas Malthus, Thorstein Veblen, transaction costs, trolley problem, ultimatum game

Society, in deciding whether to let the government build a dam, would have to make a for-itself decision about whether the benefit of the electricity exceeds its costs to an unwilling few.

Two Moral Dilemmas

The distinction between purposeful and for-itself decision-making can be demonstrated by looking at two enduring moral puzzles: the merchant’s choice posed by Cicero in 44 BCE and the trolley problem posed by Philippa Foot in 1967.3 The merchant’s choice belongs in the purposeful category, where options can be evaluated, ranked, and traded. Choices in the trolley problem, however, depend ultimately on impulse—attempts to calculate the trade-offs are swamped by an individual exercise of will. Action (or inaction) is for-itself. In Cicero’s story, Rhodes is suffering a famine when a merchant arrives at the port with a ship full of grain.

It’s possible that personal moral principles would dictate a particular response—a utilitarian might favor pushing, while a Kantian might not. For effective altruists, utilitarians, and Kantians, the moral considerations arising from the trolley problem fit with purposeful choice. As long as they don’t abandon their principles at the crucial moment, their actions should be predictable. For the rest of us, though, it might not be so easy. The trolley problem is carefully constructed so that there is no Pareto-efficient solution. Variations of the problem that deal with injury, where everyone can be made better off, are easy to solve. Say the man you pushed would break his arm to save five people from breaking their arms.

But if the fat man is a shot putter about to compete in the Olympics, don’t push, since the cost of breaking his arm likely exceeds the sum cost of breaking the arms of five random people. In the actual trolley problem, though, he can’t be compensated for blocking the trolley since he’ll be dead. I don’t think we can resort to a “veil of ignorance” solution, either. If I didn’t know ex ante whether I’d be the fat man or one of the five people on the tracks and I had a one-sixth chance of each, of course I would choose “push.” But that doesn’t help with the trolley problem. It is already resolved who will be the fat man and that’s the individual you’d have to kill. In the end, I probably wouldn’t push, although I can’t say for sure.
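The ex ante arithmetic behind the veil-of-ignorance argument can be made explicit. A minimal sketch (the six-way lottery is the passage's own assumption; the function name is mine, purely for illustration):

```python
# Veil-of-ignorance arithmetic from the passage: six people are involved
# (the fat man plus the five on the tracks), and ex ante you are equally
# likely to end up in any of the six roles.

def death_probability(push: bool) -> float:
    """Chance that a randomly assigned role dies under each action."""
    n = 6
    deaths = 1 if push else 5  # pushing kills one; doing nothing lets five die
    return deaths / n

# Behind the veil, "push" minimizes your own risk of death:
assert death_probability(push=True) == 1 / 6
assert death_probability(push=False) == 5 / 6
```

Once the roles are already assigned, of course, this calculation no longer helps, which is exactly the author's point.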


pages: 346 words: 97,890

The Road to Conscious Machines by Michael Wooldridge

Ada Lovelace, AI winter, algorithmic bias, AlphaGo, Andrew Wiles, Anthropocene, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, basic income, Bletchley Park, Boeing 747, British Empire, call centre, Charles Babbage, combinatorial explosion, computer vision, Computing Machinery and Intelligence, DARPA: Urban Challenge, deep learning, deepfake, DeepMind, Demis Hassabis, don't be evil, Donald Trump, driverless car, Elaine Herzberg, Elon Musk, Eratosthenes, factory automation, fake news, future of work, gamification, general purpose technology, Geoffrey Hinton, gig economy, Google Glasses, intangible asset, James Watt: steam engine, job automation, John von Neumann, Loebner Prize, Minecraft, Mustafa Suleyman, Nash equilibrium, Nick Bostrom, Norbert Wiener, NP-complete, P = NP, P vs NP, paperclip maximiser, pattern recognition, Philippa Foot, RAND corporation, Ray Kurzweil, Rodney Brooks, self-driving car, Silicon Valley, Stephen Hawking, Steven Pinker, strong AI, technological singularity, telemarketer, Tesla Model S, The Coming Technological Singularity, The Future of Employment, the scientific method, theory of mind, Thomas Bayes, Thomas Kuhn: the structure of scientific revolutions, traveling salesman, trolley problem, Turing machine, Turing test, universal basic income, Von Neumann architecture, warehouse robotics

So, what would and what should an AI agent do when faced with a Trolley Problem, or something like it? Well, first of all, we should ask ourselves whether it is reasonable to expect more of an AI system than we would expect of a person in a situation like this. If the greatest philosophical thinkers in the world cannot definitively resolve the Trolley Problem, then is it reasonable of us to expect an AI system to do so? Secondly, I should point out that I have been driving cars for decades, and in all that time I have never faced a Trolley Problem. Nor has anyone else I know. Moreover, what I know about ethics, and the ethics of the Trolley Problem, is roughly what you read above: I wasn’t required to pass an ethics exam to get my driving licence.

We’ll begin our survey with one particular scenario which has attracted more attention in the ethical AI community than perhaps any other. The Trolley Problem is a thought experiment in the philosophy of ethics, originally introduced by British philosopher Philippa Foot in the late 1960s.7 Her aim in introducing the Trolley Problem was to disentangle some of the highly emotive issues surrounding the morality of abortion. There are many versions of Foot’s trolley problem, but the most common version goes something like this (see Figure 21): A trolley (i.e. a tram) has gone out of control, and is careering at high speed towards five people who are unable to move.

There is a lever by the track; if you pull the lever, then the trolley will be diverted down an alternative track, where there is just one person (who also cannot move). If you pull the lever, then this person would be killed, but the five others would be saved. Should you pull the lever or not? The Trolley Problem has risen rapidly in prominence recently because of the imminent arrival of driverless cars. Pundits were quick to point out that driverless cars might well find themselves in a situation like the Trolley Problem, and AI software would then be called upon to make an impossible choice. ‘Self-driving cars are already deciding who to kill’ ran one Internet headline in 2016.8 There was a flurry of anguished online debate, and several philosophers of my acquaintance were surprised and flattered to discover that there was suddenly an attentive audience for their opinions on what had hitherto been a rather obscure problem in the philosophy of ethics.


pages: 175 words: 54,755

Robot, Take the Wheel: The Road to Autonomous Cars and the Lost Art of Driving by Jason Torchinsky

autonomous vehicles, barriers to entry, call centre, commoditize, computer vision, connected car, DARPA: Urban Challenge, data science, driverless car, Elon Musk, en.wikipedia.org, interchangeable parts, job automation, Philippa Foot, ransomware, self-driving car, sensor fusion, side project, Tesla Model S, trolley problem, urban sprawl

George, on the other hand, is actually and directly murdering the poor fat man to save the five people. Is this difference actually significant? Does anything about the trolley problem really matter? In truth, I don’t think the trolley problem is a likely dilemma that autonomous cars will face. Sure, they may end up in situations where sacrifice of life is unavoidable, but the idea that the robotic vehicles will have access to all the information that makes up the trolley problem—the number of passengers in the vehicle, specifically—is by no means assured, and as such is not likely to be a factor in the cars’ decision making.

These are, of course, extremely important questions and concerns, but let’s be real here: we’re all sort of being hypocrites whenever we wring our hands over how we expect robotic vehicles to behave in morally or ethically difficult situations, where real lives are at stake. We’re hypocrites because humanity is basically a collection of all kinds of often miserable jackasses who wouldn’t know the best ethical solution to the trolley problem if it shoved its ethical and hypothetical tongue in their nostril, and just about all of those miserable jackasses have car keys. Oh, and in case you’re not familiar with it, I’ll explain the trolley problem soon. The recent interest in autonomous vehicles has made this fifty-two-year-old thought experiment surprisingly popular, so, don’t worry, before you fling this book to the ground in disgust, you’ll know what the hell everyone’s talking about.

Let’s look into this aspect first, and think about how future robotic cars will deal with a confusing world. This means we should probably address the trolley problem first, since almost every discussion of autonomous car ethics will bring this up, and I’ve put it off as long as I could. The trolley problem⁵³ was first “officially” stated by the British philosopher, ethicist, and hilarious-name-haver Philippa Foot in 1967. Foot’s original description of the trolley problem reads like this: Edward is the driver of a trolley, whose brakes have just failed. On the track ahead of him are five people; the banks are so steep that they will not be able to get off the track in time.


pages: 88 words: 26,706

Against the Web: A Cosmopolitan Answer to the New Right by Michael Brooks

4chan, Anton Chekhov, Ayatollah Khomeini, basic income, Bernie Sanders, capitalist realism, centre right, Community Supported Agriculture, crony capitalism, deindustrialization, Donald Trump, drone strike, Flynn Effect, gun show loophole, invisible hand, Jeremy Corbyn, late capitalism, market fundamentalism, mass incarceration, moral hazard, Nelson Mandela, open borders, Peter Thiel, Philippa Foot, public intellectual, Silicon Valley, Slavoj Žižek, The Bell Curve by Richard Herrnstein and Charles Murray, trolley problem, universal basic income, upwardly mobile

The consequentialist approach to the Trolley Problem is to make whatever decision—in this case, turning the trolley to the right—that will result in the fewest deaths. When they first hear Foot’s version of the Trolley Problem, the majority of people have a consequentialist reaction. (Or their eyes glaze over, as yours might be doing. Just give me a minute here. This is going to come up later.) The usual response is to argue that the morally “right” thing for Edward to do is to turn the trolley to the right, killing the one person to save the five. Nevertheless, Judith Jarvis Thomson amended the Trolley Problem in such a way that, when hearing her version, people have the opposite reaction.
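The consequentialist rule described here is simple enough to state as code. A minimal sketch (the action names and death counts are illustrative, not from Foot's text):

```python
# Consequentialist choice rule as described above: among the available
# actions, take the one that results in the fewest deaths.

def consequentialist_choice(options: dict[str, int]) -> str:
    """options maps each available action to the number of deaths it causes."""
    return min(options, key=options.get)

# Edward's dilemma: turn right (one death) or stay on course (five deaths).
assert consequentialist_choice({"turn right": 1, "stay on course": 5}) == "turn right"
```

The point of Thomson's footbridge variant is precisely that most people refuse to apply this rule once applying it means pushing someone, even though the death counts are unchanged.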

George can shove the fat man onto the track in the path of the trolley, or he can refrain from doing this, letting the five die. When presented with this version of the Trolley Problem, most people refuse to sacrifice the fat man’s life to save the five people. In other words, though on the face of it the moral calculation in both Trolley Problems is the same—in both versions of the story, one person dies to save five—the different responses that people give demonstrate that in real life, people distinguish between actively participating in a killing and letting someone die. Whatever you think about the solutions to the Trolley Problems, you can see the point of the thought experiment. Two principles are being pitted against each other to test which one we think ‘outranks’ the other.

According to my friend Ben Burgis, the author of Give Them An Argument: Logic for the Left, a thought experiment generally refers to two things: first, an imaginary situation designed to test whether a certain definition of a concept captures what we really mean by it, and second, an imaginary situation in which we bring two moral principles into conflict in order to discover which one we care more about. The most famous thought experiment is the so-called “Trolley Problem,” which was originally formulated by the British philosopher Philippa Foot, though the version that most people are familiar with incorporates a change suggested by the American philosopher Judith Jarvis Thomson. Here’s Foot’s original example: Edward is the driver of a trolley whose brakes have just failed.


pages: 472 words: 80,835

Life as a Passenger: How Driverless Cars Will Change the World by David Kerrigan

3D printing, Airbnb, airport security, Albert Einstein, autonomous vehicles, big-box store, Boeing 747, butterfly effect, call centre, car-free, Cesare Marchetti: Marchetti’s constant, Chris Urmson, commoditize, computer vision, congestion charging, connected car, DARPA: Urban Challenge, data science, deep learning, DeepMind, deskilling, disruptive innovation, Donald Shoup, driverless car, edge city, Elon Musk, en.wikipedia.org, fake news, Ford Model T, future of work, General Motors Futurama, hype cycle, invention of the wheel, Just-in-time delivery, Lewis Mumford, loss aversion, Lyft, Marchetti’s constant, Mars Rover, megacity, Menlo Park, Metcalfe’s law, Minecraft, Nash equilibrium, New Urbanism, QWERTY keyboard, Ralph Nader, RAND corporation, Ray Kurzweil, ride hailing / ride sharing, Rodney Brooks, Sam Peltzman, self-driving car, sensor fusion, Silicon Valley, Simon Kuznets, smart cities, Snapchat, Stanford marshmallow experiment, Steve Jobs, technological determinism, technoutopianism, TED Talk, the built environment, Thorstein Veblen, traffic fines, transit-oriented development, Travis Kalanick, trolley problem, Uber and Lyft, Uber for X, uber lyft, Unsafe at Any Speed, urban planning, urban sprawl, warehouse robotics, Yogi Berra, young professional, zero-sum game, Zipcar

Trolleyology

In philosophy circles, there’s an ethical question to explore this phenomenon, known as the trolley problem. It asks: if you had to push one large person in front of a moving trolley to save a group of people on the tracks, would you? This abstract thought exercise has been widely applied in discussion about how we should design the programming for self-driving cars: what should it choose to do in a trolley-style situation where not everybody can be saved but relative value choices need to be made? In an interesting public exploration of the trolley problem in the context of driverless cars, MIT have created a website[291] offering users the chance to choose their preferred outcome in a variety of scenarios.

The MIT reworking of the trolley problem replaces the trolley with a driverless car experiencing brake failure. The experiment depicts 13 variations of the “trolley problem”, asking users to decide who should perish, which involves agonising priority choices: more deaths against fewer, humans over animals, elderly compared to young, professionals against criminals, law-abiding people over jaywalkers, and larger people against athletes. I strongly recommend you try it yourself at http://moralmachine.mit.edu/ and see how your choices compare with others who have completed the experiment.

They may make no choice, frozen into inaction by fear. So programming cars for the best possible outcome, even if unfavourable, adds a degree of certainty we don't currently have. The driverless-car trolley problem discussions portend many forthcoming debates about ethics in the time of Artificial Intelligence and how we will hold machines to different standards than we do humans. We don’t endlessly debate the trolley problem for human drivers, nor is it part of any driver test. Patrick Lin, a philosopher at California Polytechnic State University, San Luis Obispo, and a legal scholar at Stanford University, notes that “Even if a machine makes the exact same decision as a human being, I think we’ll see a legal challenge.”[298] For all the debate about how to treat ethics in relation to driverless cars, it’s also noteworthy that today we commonly put our safety in the hands of a driver who may be forced to make a life-or-death decision every time we get into a taxi.


pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence by Jacob Turner

"World Economic Forum" Davos, Ada Lovelace, Affordable Care Act / Obamacare, AI winter, algorithmic bias, algorithmic trading, AlphaGo, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, autonomous vehicles, backpropagation, Basel III, bitcoin, Black Monday: stock market crash in 1987, blockchain, brain emulation, Brexit referendum, Cambridge Analytica, Charles Babbage, Clapham omnibus, cognitive dissonance, Computing Machinery and Intelligence, corporate governance, corporate social responsibility, correlation does not imply causation, crowdsourcing, data science, deep learning, DeepMind, Demis Hassabis, distributed ledger, don't be evil, Donald Trump, driverless car, easy for humans, difficult for computers, effective altruism, Elon Musk, financial exclusion, financial innovation, friendly fire, future of work, hallucination problem, hive mind, Internet of things, iterative process, job automation, John Markoff, John von Neumann, Loebner Prize, machine readable, machine translation, medical malpractice, Nate Silver, natural language processing, Nick Bostrom, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, nudge unit, obamacare, off grid, OpenAI, paperclip maximiser, pattern recognition, Peace of Westphalia, Philippa Foot, race to the bottom, Ray Kurzweil, Recombinant DNA, Rodney Brooks, self-driving car, Silicon Valley, Stanislav Petrov, Stephen Hawking, Steve Wozniak, strong AI, technological singularity, Tesla Model S, The Coming Technological Singularity, The Future of Employment, The Signal and the Noise by Nate Silver, trolley problem, Turing test, Vernor Vinge

The participant has a choice: act, and divert the trolley so that it hits the one person, or do nothing and allow the trolley to kill five.105 The most direct analogy to the Trolley Problem for AI is the programming of self-driving cars.106 For instance: if a child steps into the road, should an AI car hit that child, or steer into a barrier and thereby kill the passenger? What if it is a criminal who steps into the road?107 The parameters can be tweaked endlessly, but the basic choice is the same—which of two (or more) unpleasant or imperfect outcomes should be chosen? Aspects of the Trolley Problem are by no means unique to autonomous vehicles. For instance, whenever a passenger gets into a taxi, they delegate such decisions to the driver.

An autonomous weapon may have to decide whether to fire a weapon at an enemy when the enemy is surrounded by civilians, taking the risk of causing collateral damage in order to eliminate the target.112 A common objection to the Trolley Problem or its variants being applied to AI is to say that humans are very rarely faced with extreme situations where they must choose between, for example, killing five schoolchildren or one member of their family. However, this objection confuses the individual example with the underlying philosophical dilemma. Moral dilemmas do not arise only in life and death situations. To this extent, the Trolley Problem is misleading in that it could encourage people to think that AI’s moral choices are serious, but rarely arise.

However, these will be autonomous in the relevant sense so long as the software within the vehicles, which may come from a single central hub and be sent to the individual vehicles via the Internet, contains features which would qualify as AI within this book’s definition. See, for example, Joel Achenbach, “Driverless Cars Are Colliding with the Creepy Trolley Problem”, Washington Post, 29 December 2015, https://www.washingtonpost.com/news/innovations/wp/2015/12/29/will-self-driving-cars-ever-solve-the-famous-and-creepy-trolley-problem/?utm_term=.30f91abdad96, accessed 1 June 2018; Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan, “The Social Dilemma of Autonomous Vehicles”, Cornell University Library Working Paper, 4 July 2016, https://arxiv.org/abs/1510.03346, accessed 1 June 2018. 107 The scenario involving a criminal pedestrian was posed by researchers at MIT, in their “Moral Machine” game, which is described by its designers as “A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.”


pages: 223 words: 66,428

The Comforts of a Muddy Saturday by Alexander McCall Smith

British Empire, do what you love, North Sea oil, Philippa Foot, trolley problem

She drew in her breath and read quickly to the bottom; there was the signature, bold as brass: Christopher Dove. She read the letter again, more slowly this time. Dear Ms. Dalhousie, I enclose with this letter an article that I have recently completed and that I think is suitable for publication in the Review. You may be familiar, of course, with the famous Trolley Problem that Philippa Foot raised all those years ago in Virtues and Vices. I have recently given this matter considerable thought and feel that I have a new approach to propose. There are a number of other editors keen to take this piece (both here and in the United States), but I thought that I would give you first option.

Then she picked up the letter again and began to enumerate its various effronteries and, not to beat about the bush, lies. To begin with, there was Dove’s choice of the words you may be familiar with: that may have sounded innocuous, but was in reality a piece of naked condescension. Of course she would be familiar with the Trolley Problem, one of the most famous thought experiments of twentieth-century philosophy—and twenty-first-century philosophy, too, as the problem continued to rumble along, as everyone knew. Everyone professionally involved in philosophy, that is, and that included Isabel. To suggest that she may be familiar with it was to imply ignorance on her part; what Dove should have written was you will of course be familiar with.

He had gone to East Berlin, as had Dove, and had publicly complained about reactionaries, as he described them, who had questioned the visit on the grounds that meetings would be restricted to those with posts in the universities, Party men every one of them. Dove…She thought of his paper on the Trolley Problem; she felt a vague unease about that, and she felt that there would be more to come. But Brecht and the GDR, and even Dove and Lettuce, seemed far away. “Let’s leave Brecht out of it for a moment,” she said. “After Charlie has been fed, I thought we could go out to the Pentlands and just…just go for a walk.


pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

Ada Lovelace, AI winter, Alignment Problem, AlphaGo, Amazon Mechanical Turk, Apple's 1984 Super Bowl advert, artificial general intelligence, autonomous vehicles, backpropagation, Bernie Sanders, Big Tech, Boston Dynamics, Cambridge Analytica, Charles Babbage, Claude Shannon: information theory, cognitive dissonance, computer age, computer vision, Computing Machinery and Intelligence, dark matter, deep learning, DeepMind, Demis Hassabis, Douglas Hofstadter, driverless car, Elon Musk, en.wikipedia.org, folksonomy, Geoffrey Hinton, Gödel, Escher, Bach, I think there is a world market for maybe five computers, ImageNet competition, Jaron Lanier, job automation, John Markoff, John von Neumann, Kevin Kelly, Kickstarter, license plate recognition, machine translation, Mark Zuckerberg, natural language processing, Nick Bostrom, Norbert Wiener, ought to be enough for anybody, paperclip maximiser, pattern recognition, performance metric, RAND corporation, Ray Kurzweil, recommendation engine, ride hailing / ride sharing, Rodney Brooks, self-driving car, sentiment analysis, Silicon Valley, Singularitarianism, Skype, speech recognition, Stephen Hawking, Steve Jobs, Steve Wozniak, Steven Pinker, strong AI, superintelligent machines, tacit knowledge, tail risk, TED Talk, the long tail, theory of mind, There's no reason for any individual to have a computer in his home - Ken Olsen, trolley problem, Turing test, Vernor Vinge, Watson beat the top human players on Jeopardy!, world market for maybe five computers

The trolley problem has recently reemerged as part of the media’s coverage of self-driving cars,20 and the question of how an autonomous vehicle should be programmed to deal with such problems has become a central talking point in discussions on AI ethics. Many AI ethics thinkers have pointed out that the trolley problem itself, in which the driver has only two horrible options, is a highly contrived scenario that no real-world driver will ever encounter. But the trolley problem has become a kind of symbol for asking about how we should program self-driving cars to make moral decisions on their own. In 2016, three researchers published results from surveys of several hundred people who were given trolley-problem-like scenarios that involved self-driving cars, and were asked for their views of the morality of different actions.

If you steer the trolley to the right, the trolley will kill the single worker. What is the moral thing to do? The trolley problem has been a staple of undergraduate ethics classes for the past half century. Most people answer that it would be morally preferable for the driver to steer onto the spur, killing the single worker and saving the group of five. But philosophers have found that a different framing of essentially the same dilemma can lead people to the opposite answer.19 Human reasoning about moral dilemmas turns out to be very sensitive to the way in which the dilemmas are presented.

Clarke, 2001: A Space Odyssey (London: Hutchinson & Co, 1968).
17. Ibid., 192.
18. N. Wiener, “Some Moral and Technical Consequences of Automation,” Science 131, no. 3410 (1960): 1355–58.
19. J. J. Thomson, “The Trolley Problem,” Yale Law Journal 94, no. 6 (1985): 1395–415.
20. For example, see J. Achenbach, “Driverless Cars Are Colliding with the Creepy Trolley Problem,” Washington Post, December 29, 2015.
21. J.-F. Bonnefon, A. Shariff, and I. Rahwan, “The Social Dilemma of Autonomous Vehicles,” Science 352, no. 6293 (2016): 1573–76.
22. J. D. Greene, “Our Driverless Dilemma,” Science 352, no. 6293 (2016): 1514–15.
23.


pages: 502 words: 132,062

Ways of Being: Beyond Human Intelligence by James Bridle

Ada Lovelace, Airbnb, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Anthropocene, Any sufficiently advanced technology is indistinguishable from magic, autonomous vehicles, behavioural economics, Benoit Mandelbrot, Berlin Wall, Big Tech, Black Lives Matter, blockchain, Californian Ideology, Cambridge Analytica, carbon tax, Charles Babbage, cloud computing, coastline paradox / Richardson effect, Computing Machinery and Intelligence, corporate personhood, COVID-19, cryptocurrency, DeepMind, Donald Trump, Douglas Hofstadter, Elon Musk, experimental subject, factory automation, fake news, friendly AI, gig economy, global pandemic, Gödel, Escher, Bach, impulse control, James Bridle, James Webb Space Telescope, John von Neumann, Kickstarter, Kim Stanley Robinson, language acquisition, life extension, mandelbrot fractal, Marshall McLuhan, microbiome, music of the spheres, negative emissions, Nick Bostrom, Norbert Wiener, paperclip maximiser, pattern recognition, peer-to-peer, planetary scale, RAND corporation, random walk, recommendation engine, self-driving car, SETI@home, shareholder value, Silicon Valley, Silicon Valley ideology, speech recognition, statistical model, surveillance capitalism, techno-determinism, technological determinism, technoutopianism, the long tail, the scientific method, The Soul of a New Machine, theory of mind, traveling salesman, trolley problem, Turing complete, Turing machine, Turing test, UNCLOS, undersea cable, urban planning, Von Neumann architecture, wikimedia commons, zero-sum game

This is the paradox at the heart of the Trolley problem, an ethical problem posed for self-driving cars and other autonomous systems such as an automated trolley (or tramcar, for non-Americans). The Trolley problem asks what an automated vehicle should do if there are two unavoidable paths for it to take: one towards a group of people and one towards an individual, for example. Whose life is worth more? The Trolley problem has even been turned into an online game, the Moral Machine, by researchers at MIT seeking to formulate rules for autonomous vehicles.21 The problem with the Trolley problem is that it was originally formulated for a human operator at the controls of a runaway tram car: the power of the problem resides in the unavoidable nature of its two outcomes.

These include, but are not limited to, the car-centric design of modern cities; the education or otherwise of pedestrians in road safety and much else; the fatally addictive design of the app they were playing with on their phone at the time; the financial incentives of automation; the assumptions of actuaries and insurance companies; and the legal processes which govern everything from the speed limit to the assignation of blame and recompense. In short, the most important factor in the Trolley problem is not the internal software of the vehicle, but the culture which surrounds the self-driving car, which has infinitely more impact on the outcomes of the crash than any split-second decision by the driver, human or otherwise. This is the real lesson of scenarios like the Trolley problem, the Basilisk and the paperclip machine: we cannot control every outcome, but we can work to change our culture. Technological processes like artificial intelligence won’t build a better world by themselves, just as they tell us nothing useful about general intelligence.

See, for example, Stuart Russell and Peter Norvig’s Artificial Intelligence: A Modern Approach, 3rd edn (Harlow: Pearson Education, 2016), the standard textbook on the subject, which cites Yudkowsky’s concerns about AI safety. 21. The Trolley problem was first given that name by the moral philosopher Judith Jarvis Thomson in ‘Killing, Letting Die, and the Trolley Problem’, The Monist, 59 (2), April 1976, pp. 204–17. Her conclusion, from a number of examples, was that ‘there are circumstances in which – even if it is true that killing is worse than letting die – one may choose to kill instead of letting die’.


pages: 197 words: 59,656

The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically by Peter Singer

Albert Einstein, clean water, cognitive load, corporate social responsibility, correlation does not imply causation, David Brooks, effective altruism, en.wikipedia.org, Flynn Effect, hedonic treadmill, Large Hadron Collider, Nick Bostrom, Peter Singer: altruism, purchasing power parity, randomized controlled trial, stem cell, Steven Pinker, TED Talk, trolley problem, William MacAskill, young professional

Bloom, “The Baby in the Well.” 12. Gleichgerrcht and Young, “Low Levels of Empathic Concern,” e60418. For an entertaining discussion of trolley problems, see David Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us about Right and Wrong (Princeton: Princeton University Press, 2013). 13. C. D. Navarrete, M. M. McDonald, M. L. Mott, and B. Asher, “Virtual Morality: Emotion and Action in a Simulated Three-Dimensional ‘Trolley Problem,’” Emotion 12 (2011): 364–70. I owe this reference to Gleichgerrcht and Young. 14. Bloom, “The Baby in the Well.” 15. Immanuel Kant, Critique of Practical Reason, trans.

You may feel a little worse, but it is unlikely that you feel anything like ten times worse.11 Effective altruists, as we have seen, need not be utilitarians, but they share a number of moral judgments with utilitarians. In particular, they agree with utilitarians that, other things being equal, we ought to do the most good we can. In a study of the role of emotion in moral decision making, subjects were presented with so-called trolley problem dilemmas in which, for example, a runaway trolley is heading for a tunnel in which there are five people, and it will kill them all unless you divert it down a sidetrack, in which case only one person will be killed. In a variant, the only way to stop the five being killed is to push a heavy stranger off a footbridge.

Empathic concern is, as we have seen, one aspect of emotional empathy. Other aspects of empathy, including personal distress and perspective taking, did not vary between those who made consistently utilitarian judgments and those who did not. Neither did demographic or cultural differences, including age, gender, education, and religiosity.12 Another trolley problem study used virtual reality technology to give people a more vivid sense of being in the situation in which they must decide whether to throw the switch to divert the trolley down the sidetrack, thereby killing one but saving five. In this study, the experimenters measured their subjects' skin conductance as the subjects made these decisions.


pages: 296 words: 78,631

Hello World: Being Human in the Age of Algorithms by Hannah Fry

23andMe, 3D printing, Air France Flight 447, Airbnb, airport security, algorithmic bias, algorithmic management, augmented reality, autonomous vehicles, backpropagation, Brixton riot, Cambridge Analytica, chief data officer, computer vision, crowdsourcing, DARPA: Urban Challenge, data science, deep learning, DeepMind, Douglas Hofstadter, driverless car, Elon Musk, fake news, Firefox, Geoffrey Hinton, Google Chrome, Gödel, Escher, Bach, Ignaz Semmelweis: hand washing, John Markoff, Mark Zuckerberg, meta-analysis, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, pattern recognition, Peter Thiel, RAND corporation, ransomware, recommendation engine, ride hailing / ride sharing, selection bias, self-driving car, Shai Danziger, Silicon Valley, Silicon Valley startup, Snapchat, sparse data, speech recognition, Stanislav Petrov, statistical model, Stephen Hawking, Steven Levy, systematic bias, TED Talk, Tesla Model S, The Wisdom of Crowds, Thomas Bayes, trolley problem, Watson beat the top human players on Jeopardy!, web of trust, William Langewiesche, you are the product

Michael Taylor, ‘Self-driving Mercedes-Benzes will prioritize occupant safety over pedestrians’, Car and Driver, 7 Oct. 2016, https://blog.caranddriver.com/self-driving-mercedes-will-prioritize-occupant-safety-over-pedestrians/. 34. Jason Kottke, ‘Mercedes’ Solution to the Trolley Problem’, Kottke.org, 24 Oct. 2016, https://kottke.org/16/10/mercedes-solution-to-the-trolley-problem. 35. Jean-François Bonnefon, Azim Shariff and Iyad Rahwan, ‘The social dilemma of autonomous vehicles’, Science, vol. 352, 24 June 2016, DOI 10.1126/science.aaf2654; https://arxiv.org/pdf/1510.03346.pdf. 36. All quotes from Paul Newman are from private conversation. 37.

Except, Hugo wasn’t being asked about any old crash. He was being tested on his response to a well-worn thought experiment dating back to the 1960s, involving a very particular kind of collision. The interviewer was asking him about a curious conundrum that forces a choice between two evils. It’s known as the trolley problem, after the runaway tram that was the subject of the original formulation. In the case of driverless cars, it goes something like this. Imagine, some years into the future, you’re a passenger in an autonomous vehicle, happily driving along a city street. Ahead of you a traffic light turns red, but a mechanical failure in your car means you’re unable to stop.

Because when the same study asked participants if they would actually buy a car which would murder them if the circumstances arose, they suddenly seemed reluctant to sacrifice themselves for the greater good. This is a conundrum that divides opinion – and not just in what people think the answer should be. As a thought experiment, it remains a firm favourite of technology reporters and other journalists, but all the driverless car experts I interviewed rolled their eyes as soon as the trolley problem was mentioned. Personally, I still have a soft spot for it. Its simplicity forces us to recognize something important about driverless cars, to challenge how we feel about an algorithm making a value judgement on our own, and others’, lives. At the heart of this new technology – as with almost all algorithms – are questions about power, expectation, control, and delegation of responsibility.


pages: 386 words: 113,709

Why We Drive: Toward a Philosophy of the Open Road by Matthew B. Crawford

1960s counterculture, Airbus A320, airport security, augmented reality, autonomous vehicles, behavioural economics, Bernie Sanders, Big Tech, Boeing 737 MAX, British Empire, Burning Man, business logic, call centre, classic study, collective bargaining, confounding variable, congestion pricing, crony capitalism, data science, David Sedaris, deskilling, digital map, don't be evil, Donald Trump, driverless car, Elon Musk, emotional labour, en.wikipedia.org, Fellow of the Royal Society, Ford Model T, gamification, gentrification, gig economy, Google Earth, Great Leap Forward, Herbert Marcuse, hive mind, Ian Bogost, income inequality, informal economy, Internet of things, Jane Jacobs, labour mobility, Lyft, mirror neurons, Network effects, New Journalism, New Urbanism, Nicholas Carr, planned obsolescence, Ponzi scheme, precautionary principle, Ralph Nader, ride hailing / ride sharing, Ronald Reagan, Sam Peltzman, security theater, self-driving car, sharing economy, Shoshana Zuboff, Silicon Valley, smart cities, social graph, social intelligence, Stephen Hawking, surveillance capitalism, tacit knowledge, tech worker, technoutopianism, the built environment, The Death and Life of Great American Cities, the High Line, time dilation, too big to fail, traffic fines, Travis Kalanick, trolley problem, Uber and Lyft, Uber for X, uber lyft, Unsafe at Any Speed, urban planning, Wall-E, Works Progress Administration

On one side, you have inputs consisting of empirical facts, on the other side you have outputs consisting of some new state of affairs in the world, and in the middle you have a person who applies principles. These principles need to be likewise precise, capable of clear articulation, and universally applicable. One appeal of the trolley problem, then, is that it lends itself to a kind of moral calculus that resembles the input-output logic of a computer. The most widely adopted moral operating system, if you will, is utilitarianism, the motto of which is “the greatest good for the greatest number.” Another appeal of the trolley problem is that one can ask people to imagine themselves in such a scenario, vary the specifications of the scenario, and see how they respond, thereby gathering social data.
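The "moral calculus that resembles the input-output logic of a computer" which Crawford describes can be caricatured in a few lines. This is a deliberately crude, purely illustrative sketch: the `utilitarian_choice` function and its casualty counts are invented for this example and appear nowhere in the book.

```python
# A crude caricature of utilitarian moral calculus: each option is
# scored solely by its expected casualty count, and the option with
# the smallest count wins. All names here are invented for illustration.

def utilitarian_choice(options: dict[str, int]) -> str:
    """Return the option with the fewest expected casualties."""
    return min(options, key=options.get)

# The classic switch case: stay on course (5 deaths) or divert (1 death).
decision = utilitarian_choice({"stay": 5, "divert": 1})
print(decision)  # -> divert
```

The point of the caricature is how much it leaves out, which is precisely Crawford's objection: the "principles" must be made this precise and universal before a machine can apply them at all.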

Automation as Moral Reeducation What happens when an autonomous car cannot avoid colliding with another car, or with pedestrians, or a dog, and it must decide whom to hit? What sort of moral priorities shall the computers be programmed with? Anyone who took an undergraduate philosophy class in the last twenty years is likely to have encountered the “trolley problem,” a classic thought experiment that goes like this. Suppose a trolley is headed on a collision course with a group of pedestrians. But you, as an alert bystander, can pull a lever to switch the track to a different course. The problem is, there is an innocent on this new track as well. But only one.

In both its idealist and empirical versions, this is a way of thinking about ethics that has a long pedigree, and has been subject to critique (perhaps most witheringly by Nietzsche, in his treatment of “the English moralists”) for nearly as long, but has lately taken on a new life for reasons that should be obvious.1 It offers a certain intellectual tractability that makes it seem a good fit with machine logic.2 And sure enough, when talk turns to the ethical dilemmas posed by driverless cars, the industry and its adjuncts in academia and journalism quickly settle into the trolley problem, that reassuringly self-contained conundrum of utilitarian ethics, and proceed to debate the “death algorithm.” (Mercedes-Benz was the first automaker to come out and declare that its cars would be programmed to prioritize the lives of the car’s occupants.) This way of thinking about ethics seems to permit the transfer of a moral burden to a machine.


pages: 280 words: 85,091

The Wisdom of Psychopaths: What Saints, Spies, and Serial Killers Can Teach Us About Success by Kevin Dutton

Asperger Syndrome, Bernie Madoff, business climate, corporate governance, corporate social responsibility, dark triade / dark tetrad, delayed gratification, epigenetics, Fellow of the Royal Society, G4S, impulse control, iterative process, John Nash: game theory, meta-analysis, mirror neurons, Neil Armstrong, Nicholas Carr, no-fly zone, Norman Mailer, Philippa Foot, place-making, RAND corporation, Ronald Reagan, seminal paper, Steve Jobs, Steven Pinker, theory of mind, trolley problem, ultimatum game

Schug, “The Neural Correlates of Moral Decision-Making in Psychopathy,” Molecular Psychiatry 14 (January 2009): 5–6, doi:10.1038/mp.2008.104. 10 Consider, for example, the following conundrum (case 1) … The Trolley Problem was first proposed in this form by Philippa Foot in “The Problem of Abortion and the Doctrine of the Double Effect,” in Virtues and Vices: And Other Essays in Moral Philosophy (Berkeley: University of California Press, 1978). 11 Now consider the following variation (case 2) … See Judith Jarvis Thomson, “Killing, Letting Die, and the Trolley Problem,” The Monist 59, no. 2 (1976): 204–17. 12 Daniel Bartels at Columbia University and David Pizarro at Cornell … See Daniel M.

That, of course, leaves 10 percent unaccounted for: a less morally hygienic minority who, when push quite literally comes to shove, have little or no compunction about holding another person’s life in the balance. But who is this unscrupulous minority? Who is this 10 percent? To find out, Bartels and Pizarro presented the trolley problem to more than two hundred students, and got them to indicate on a four-point scale how much they were in favor of shoving the fat guy over the side—how “utilitarian” they were. Then, alongside the trolleyological question, the students also responded to a series of personality items specifically designed to measure resting psychopathy levels.

These included statements such as “I like to see fistfights” and “The best way to handle people is to tell them what they want to hear” (agree/disagree on a scale of one to ten). Could the two constructs—psychopathy and utilitarianism—possibly be linked? Bartels and Pizarro wondered. The answer was a resounding yes. Their analysis revealed a significant correlation between a utilitarian approach to the trolley problem (push the fat guy off the bridge) and a predominantly psychopathic personality style. Which, as far as Robin Dunbar’s prediction goes, is pretty much on the money. But which, as far as the traditional take on utilitarianism goes, is somewhat problematic. In the grand scheme of things, Jeremy Bentham and John Stuart Mill, the two nineteenth-century British philosophers credited with formalizing the theory of utilitarianism, are generally thought of as good guys.


pages: 450 words: 144,939

Unthinkable: Trauma, Truth, and the Trials of American Democracy by Jamie Raskin

2021 United States Capitol attack, affirmative action, Affordable Care Act / Obamacare, back-to-the-land, Bernie Sanders, Black Lives Matter, clean water, coronavirus, COVID-19, cuban missile crisis, defund the police, desegregation, disinformation, Donald Trump, failed state, fake news, George Floyd, hindsight bias, Johann Wolfgang von Goethe, Lyft, mandatory minimum, opioid epidemic / opioid crisis, public intellectual, QAnon, race to the bottom, Ronald Reagan, Silicon Valley, social distancing, Steve Bannon, traumatic brain injury, trolley problem

Real morality cannot be just an exercise for the classroom; the classroom must help us discover and exercise morality out in the world. There is plainly no right answer to the trolley problem, no real “solution” to it. Yet Tommy had managed to solve it in his own way, by completely changing the terms of the question. Sometimes, when I let my thoughts run away with me these days, I wonder if Tommy’s even thinking about the trolley problem led him down a blind alley. Did he think, in his stressed frame of mind, that by taking one life, his own, he could somehow save ninety-nine other lives? Did he think he would redirect people’s attention to the necessity of human decency and kindness, or was it just a psychological compulsion he was acting on, his illness speaking?

—Sophocles

Epigraph: I realized, through it all, that in the midst of winter, there was, within me, an invincible summer. —ALBERT CAMUS

Contents: Cover; Title Page; Dedication; Epigraph; Preface; Prologue: Democracy Winter. Part I: Chapter 1: Democracy Summer; Chapter 2: A Sea of Troubles; Chapter 3: The Trolley Problem. Part II: Chapter 4: “There Is a North”; Chapter 5: Complete the Count; Chapter 6: Midnight Meditations and Orwellian Preparations; Chapter 7: “This Is About the Future of Democracy”; Chapter 8: An All-American Defense of Democracy; Chapter 9: Reverse Uno; Chapter 10: Writing Trump. Part III: Chapter 11: Violence v.

As uncomfortable and intrusive as it may seem, it is essential to use the word suicide itself in order to demystify and deflate it, to strip it of its phony pretense to omnipotence and supernatural force. Suicide is not a “bad word,” as Tommy Raskin might have said, for there is no such thing as a bad word. It is just, in reality, a terrible thing and an irreversible detour from the road we all try to walk down together, the road of life. Chapter 3 The Trolley Problem It is only with the heart that one can see right; what is essential is invisible to the eye. —ANTOINE DE SAINT-EXUPÉRY, THE LITTLE PRINCE It was important to me for a long while—less so now—to reconstruct the final days to map out exactly how we let our guard down in the final week of 2020.


pages: 245 words: 83,272

Artificial Unintelligence: How Computers Misunderstand the World by Meredith Broussard

"Susan Fowler" uber, 1960s counterculture, A Declaration of the Independence of Cyberspace, Ada Lovelace, AI winter, Airbnb, algorithmic bias, AlphaGo, Amazon Web Services, autonomous vehicles, availability heuristic, barriers to entry, Bernie Sanders, Big Tech, bitcoin, Buckminster Fuller, Charles Babbage, Chris Urmson, Clayton Christensen, cloud computing, cognitive bias, complexity theory, computer vision, Computing Machinery and Intelligence, crowdsourcing, Danny Hillis, DARPA: Urban Challenge, data science, deep learning, Dennis Ritchie, digital map, disruptive innovation, Donald Trump, Douglas Engelbart, driverless car, easy for humans, difficult for computers, Electric Kool-Aid Acid Test, Elon Musk, fake news, Firefox, gamification, gig economy, global supply chain, Google Glasses, Google X / Alphabet X, Greyball, Hacker Ethic, independent contractor, Jaron Lanier, Jeff Bezos, Jeremy Corbyn, John Perry Barlow, John von Neumann, Joi Ito, Joseph-Marie Jacquard, life extension, Lyft, machine translation, Mark Zuckerberg, mass incarceration, Minecraft, minimum viable product, Mother of all demos, move fast and break things, Nate Silver, natural language processing, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, One Laptop per Child (OLPC), opioid epidemic / opioid crisis, PageRank, Paradox of Choice, payday loans, paypal mafia, performance metric, Peter Thiel, price discrimination, Ray Kurzweil, ride hailing / ride sharing, Ross Ulbricht, Saturday Night Live, school choice, self-driving car, Silicon Valley, Silicon Valley billionaire, speech recognition, statistical model, Steve Jobs, Steven Levy, Stewart Brand, TechCrunch disrupt, Tesla Model S, the High Line, The Signal and the Noise by Nate Silver, theory of mind, traumatic brain injury, Travis Kalanick, trolley problem, Turing test, Uber for X, uber lyft, Watson beat the top human players on Jeopardy!, We are as Gods, Whole Earth Catalog, women in the workforce, work 
culture , yottabyte

It was like the time I went to Le Bernardin for lunch, then came home and realized the only thing we had for dinner was hot dogs. As a car, the Tesla is amazing. As an autonomous vehicle, I am skeptical. Part of the problem is that the machine ethics haven’t been finalized because they are very difficult to articulate. The ethical dilemma is generally led by the trolley problem, a philosophical exercise. Imagine you’re driving a trolley that’s hurtling down the tracks toward a crowd of people. You can divert it to a different track, but you will hit one person. Which do you choose: certain death for one, or for many? Philosophers have been hired by Google and Uber to work out the ethical issues and embed them in the software.

Do you want to be on the road, or on the sidewalk, or on a bicycle, next to cars that have no drivers and have unreliable software that is designed to kill you or the driver? Do you trust the unknown programmers who are making these decisions on your behalf? In a self-driving car, death is a feature, not a bug. The trolley problem is a classic teaching example of computer ethics. Many engineers respond to this dilemma in an unsatisfying way. “If you know you can save at least one person, at least save that one. Save the one in the car,” said Christoph von Hugo, Mercedes’s manager of driverless car safety, in an interview with Car and Driver.22 Computer scientists and engineers, following the precedent set by Minsky and previous generations, don’t tend to think through the precedent that they’re establishing or the implications of small design decisions.

There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules? Ito replied: “When we did the car trolley problem, we found that most people liked the idea that the driver and the passengers could be sacrificed to save many people. They also said they would never buy a self-driving car.” It should surprise no one that members of the public are both more ethical and more intelligent than the machines we are being encouraged to entrust our lives to.


pages: 147 words: 39,910

The Great Mental Models: General Thinking Concepts by Shane Parrish

Albert Einstein, anti-fragile, Atul Gawande, Barry Marshall: ulcers, bitcoin, Black Swan, colonial rule, correlation coefficient, correlation does not imply causation, cuban missile crisis, Daniel Kahneman / Amos Tversky, dark matter, delayed gratification, feminist movement, Garrett Hardin, if you see hoof prints, think horses—not zebras, index fund, Isaac Newton, Jane Jacobs, John Bogle, Linda problem, mandelbrot fractal, Pepsi Challenge, Philippa Foot, Pierre-Simon Laplace, Ponzi scheme, Richard Feynman, statistical model, stem cell, The Death and Life of Great American Cities, the map is not the territory, the scientific method, Thomas Bayes, Torches of Freedom, Tragedy of the Commons, trolley problem

This experiment was first proposed in modern form by Philippa Foot in her paper “The Problem of Abortion and the Doctrine of the Double Effect,”3 and further considered extensively by Judith Jarvis Thomson in “The Trolley Problem.”4 In both cases the value of the thought experiment is clear. The authors were able to explore situations that would be physically impossible to reproduce without causing serious harm, and in so doing significantly advanced certain questions of morality. Moreover, the trolley problem remains relevant to this day as technological advances often ask us to define when it is acceptable, and even desirable, to sacrifice one to save many (and lest you think this is always the case, Thomson conducts another great thought experiment considering a doctor killing one patient to save five through organ donation).

Retrieved from: https://plato.stanford.edu/entries/thought-experiment/ 2 Isaacson, Walter. Einstein: His Life and Universe. New York: Simon and Schuster, 2007. 3 Foot, Philippa. “The Problem of Abortion and the Doctrine of the Double Effect.” Oxford Review, No. 5 (1967). 4 Thomson, Judith Jarvis. “The Trolley Problem.” Yale Law Journal, Vol. 94, No. 6 (May, 1985). 5 Rawls, John. A Theory of Justice, revised edition. Cambridge: Harvard University Press, 2005. Second-Order Thinking 1 Keller, Evelyn Fox. A Feeling for the Organism: The Life and Work of Barbara McClintock. New York: W.H. Freeman and Company, 1983. 2 Atwood, Margaret.


Driverless: Intelligent Cars and the Road Ahead by Hod Lipson, Melba Kurman

AI winter, Air France Flight 447, AlphaGo, Amazon Mechanical Turk, autonomous vehicles, backpropagation, barriers to entry, butterfly effect, carbon footprint, Chris Urmson, cloud computing, computer vision, connected car, creative destruction, crowdsourcing, DARPA: Urban Challenge, deep learning, digital map, Donald Shoup, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, General Motors Futurama, Geoffrey Hinton, Google Earth, Google X / Alphabet X, Hans Moravec, high net worth, hive mind, ImageNet competition, income inequality, industrial robot, intermodal, Internet of things, Jeff Hawkins, job automation, Joseph Schumpeter, lone genius, Lyft, megacity, Network effects, New Urbanism, Oculus Rift, pattern recognition, performance metric, Philippa Foot, precision agriculture, RFID, ride hailing / ride sharing, Second Machine Age, self-driving car, Silicon Valley, smart cities, speech recognition, statistical model, Steve Jobs, technoutopianism, TED Talk, Tesla Model S, Travis Kalanick, trolley problem, Uber and Lyft, uber lyft, Unsafe at Any Speed, warehouse robotics

While raised in a new context, this ethical choice question is actually an old chestnut, a variant of the well-known Trolley Problem4 that students in philosophy classes have discussed for decades. The Trolley Problem, conceived by Philippa Foot in 1967, describes the ethical conundrum of “a driver of a runaway tram [who] can steer only from one narrow track onto another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed.” Most people will do the simple utilitarian calculation that five lives are worth more than one, and consider this a no-brainer. But the Trolley Problem case then continues to get more complicated with other morbid choices that lead, eventually, to paradoxical dilemmas.

The Trolley Problem is not unique to driverless cars. Recently, in downtown Ithaca in upstate New York, we witnessed a tragic demonstration of the Trolley Problem. One sunny Friday afternoon while driving down the steep hill that leads into Ithaca’s bustling downtown, a truck driver became aware that his brakes had given out. He was forced to make the painful decision about which way to aim his deadly, out-of-control two-ton truck. The driver elected to steer his truck away from a group of construction workers and instead aimed his truck into a nearby café, accidentally killing Amanda Bush, 27, a young mother spending that summer afternoon earning extra money as a bartender.

See Simultaneous Localization and Mapping
Software companies versus car companies, 46–55, 63
State space, 76, 165
Sun, Jian, 225
SuperVision, 224–227. See also Deep learning
Sutskever, Ilya, 224
Taxi drivers, 260
Template-based perception, 91, 229, 230. See also Shakey the robot
Templeton, Brad, 142–146
Thrun, Sebastian, 152, 168
Traffic congestion, 25–28
Traffic prediction software
Trolley problem. See Ethics
Truckers, 259–263
Uber, 68, 260
Unemployment. See Jobs
U.S. Department of Transportation (USDOT), 128–132
V2I. See V2X
V2V. See V2X
V2X
  Drawbacks of, 136–140
  Overview of, 129, 130, 136
Vehicular lifespan, 28, 29
Werbos, Paul, 210, 213
“Who to kill.” See Ethics
Wiesel, Torsten, 229
World’s Fair (New York, 1939), 107–110
World’s Fair (New York, 1964), 121
XOR problem, 208
Yosinski, Jason, 232
Zero Principle, 255–258.


Autonomous Driving: How the Driverless Revolution Will Change the World by Andreas Herrmann, Walter Brenner, Rupert Stadler

Airbnb, Airbus A320, algorithmic bias, augmented reality, autonomous vehicles, blockchain, call centre, carbon footprint, clean tech, computer vision, conceptual framework, congestion pricing, connected car, crowdsourcing, cyber-physical system, DARPA: Urban Challenge, data acquisition, deep learning, demand response, digital map, disruptive innovation, driverless car, Elon Musk, fault tolerance, fear of failure, global supply chain, industrial cluster, intermodal, Internet of things, Jeff Bezos, John Zimmer (Lyft cofounder), Lyft, manufacturing employment, market fundamentalism, Mars Rover, Masdar, megacity, Pearl River Delta, peer-to-peer rental, precision agriculture, QWERTY keyboard, RAND corporation, ride hailing / ride sharing, self-driving car, sensor fusion, sharing economy, Silicon Valley, smart cities, smart grid, smart meter, Steve Jobs, Tesla Model S, Tim Cook: Apple, trolley problem, uber lyft, upwardly mobile, urban planning, Zipcar

A car that prioritises the safety of its occupants above all other considerations is socially just as unacceptable as a vehicle that sacrifices its passengers to save other road users’ lives. Is the decision over life and death to be left to a random generator, or is the ultimate authority a matter for the driver or occupants?

TROLLEY PROBLEM
A central aspect of the debate about ethical principles for autonomous driving is the trolley problem, which is based on a philosophical thought experiment [38, 77]. Should a runaway trolley that threatens to run over five people be deliberately diverted along a side track so that only one innocent person is killed? This is based on the question of whether, in a situation of danger, one death may be sacrificed in order to save several.

Key Takeaways: As soon as autonomous vehicles are on the roads, situations will occur in which they have to decide on life and death. Depending on the manoeuvre selected in a dangerous situation, more or fewer, and different, people will be killed or injured. This ethical reflection must be pre-programmed into the cars, and such decisions cannot be made by an individual programmer. The debate centres on the trolley problem. This is based on the question of whether, in a dangerous situation, the death of a smaller number of people should be accepted in order to save the lives of a larger number. An economic or utilitarian approach consists of comparing human lives with each other and possibly sacrificing an individual for the sake of a group.

An economic or utilitarian approach consists of comparing human lives with each other and possibly sacrificing an individual for the sake of a group. This offsetting of human lives not only violates many people’s moral intuition, it also contravenes the principle of human dignity. This conviction goes back to the philosopher Kant, and is a fixed element of many national legal systems. As there is no quick and easy answer to the trolley problem, a social discourse is the only way forward. Society is compelled to reflect upon ethical principles and to begin a far-reaching debate.

PART 7: IMPACT ON VEHICLES. CHAPTER 26: THE VEHICLE AS AN ECOSYSTEM. The information and communication technologies that make self-driving cars possible are fundamentally changing the nature of what a vehicle is.


pages: 389 words: 119,487

21 Lessons for the 21st Century by Yuval Noah Harari

"World Economic Forum" Davos, 1960s counterculture, accounting loophole / creative accounting, affirmative action, Affordable Care Act / Obamacare, agricultural Revolution, algorithmic trading, augmented reality, autonomous vehicles, Ayatollah Khomeini, basic income, behavioural economics, Bernie Sanders, bitcoin, blockchain, Boris Johnson, Brexit referendum, call centre, Cambridge Analytica, Capital in the Twenty-First Century by Thomas Piketty, carbon tax, carbon-based life, Charlie Hebdo massacre, cognitive dissonance, computer age, computer vision, cryptocurrency, cuban missile crisis, decarbonisation, DeepMind, deglobalization, disinformation, Donald Trump, Dr. Strangelove, failed state, fake news, Filter Bubble, Francis Fukuyama: the end of history, Freestyle chess, gig economy, glass ceiling, Google Glasses, illegal immigration, Intergovernmental Panel on Climate Change (IPCC), Internet of things, invisible hand, job automation, knowledge economy, liberation theology, Louis Pasteur, low skilled workers, Mahatma Gandhi, Mark Zuckerberg, mass immigration, means of production, Menlo Park, meta-analysis, Mohammed Bouazizi, mutually assured destruction, Naomi Klein, obamacare, pattern recognition, post-truth, post-work, purchasing power parity, race to the bottom, RAND corporation, restrictive zoning, Ronald Reagan, Rosa Parks, Scramble for Africa, self-driving car, Silicon Valley, Silicon Valley startup, TED Talk, transatlantic slave trade, trolley problem, Tyler Cowen, Tyler Cowen: Great Stagnation, universal basic income, uranium enrichment, Watson beat the top human players on Jeopardy!, zero-sum game

Based on its lightning calculations, the algorithm driving the car concludes that the only way to avoid hitting the two kids is to swerve into the opposite lane, and risk colliding with an oncoming truck. The algorithm calculates that in such a case there is a 70 per cent chance that the owner of the car – who is fast asleep in the back seat – would be killed. What should the algorithm do?16 Philosophers have been arguing about such ‘trolley problems’ for millennia (they are called ‘trolley problems’ because the textbook examples in modern philosophical debates refer to a runaway trolley car racing down a railway track, rather than to a self-driving car).17 Up till now, these arguments have had embarrassingly little impact on actual behaviour, because in times of crisis humans all too often forget about their philosophical views and follow their emotions and gut instincts instead.
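The "lightning calculations" Harari attributes to the algorithm amount to an expected-harm comparison. A minimal sketch of that arithmetic, with the caveat that the probabilities and the `expected_deaths` helper are invented for this illustration, and that no real system reduces the choice to arithmetic this cleanly:

```python
# Hypothetical expected-harm comparison of the kind Harari describes.
# The probability for staying in lane is an assumption; the 70% figure
# for swerving comes from the scenario in the text.

def expected_deaths(p_fatal: float, people_at_risk: int) -> float:
    """Expected number of deaths for one manoeuvre."""
    return p_fatal * people_at_risk

# Stay in lane: assume a near-certain fatal impact with the two children.
stay = expected_deaths(p_fatal=0.95, people_at_risk=2)
# Swerve: a 70% chance the single sleeping owner is killed.
swerve = expected_deaths(p_fatal=0.70, people_at_risk=1)

print("swerve" if swerve < stay else "stay")  # -> swerve
```

The arithmetic is trivial; as the passage goes on to note, the hard part is that humans in a crisis do not follow it, whereas an algorithm will follow whatever rule it was given, every time.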

However, there might be some new openings for philosophers, because their skills – hitherto devoid of much market value – will suddenly be in very high demand. So if you want to study something that will guarantee a good job in the future, maybe philosophy is not such a bad gamble. Of course, philosophers seldom agree on the right course of action. Few ‘trolley problems’ have been solved to the satisfaction of all philosophers, and consequentialist thinkers such as John Stuart Mill (who judge actions by consequences) hold quite different opinions to deontologists such as Immanuel Kant (who judge actions by absolute rules). Would Tesla have to actually take a stance on such knotty matters in order to produce a car?

, Forbes, 24 November 2010; Cecilia Mazanec, ‘Will Algorithms Erode Our Decision-Making Skills?’, NPR, 8 February 2017. 16 Jean-François Bonnefon, Azim Shariff and Iyad Rahwan, ‘The Social Dilemma of Autonomous Vehicles’, Science 352:6293 (2016), 1573–6. 17 Christopher W. Bauman et al., ‘Revisiting External Validity: Concerns about Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology’, Social and Personality Psychology Compass 8:9 (2014), 536–54. 18 John M. Darley and Daniel C. Batson, ‘“From Jerusalem to Jericho”: A Study of Situational and Dispositional Variables in Helping Behavior’, Journal of Personality and Social Psychology 27:1 (1973), 100–8. 19 Kristofer D.


pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI by John Brockman

AI winter, airport security, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alignment Problem, AlphaGo, artificial general intelligence, Asilomar, autonomous vehicles, basic income, Benoit Mandelbrot, Bill Joy: nanobots, Bletchley Park, Buckminster Fuller, cellular automata, Claude Shannon: information theory, Computing Machinery and Intelligence, CRISPR, Daniel Kahneman / Amos Tversky, Danny Hillis, data science, David Graeber, deep learning, DeepMind, Demis Hassabis, easy for humans, difficult for computers, Elon Musk, Eratosthenes, Ernest Rutherford, fake news, finite state, friendly AI, future of work, Geoffrey Hinton, Geoffrey West, Santa Fe Institute, gig economy, Hans Moravec, heat death of the universe, hype cycle, income inequality, industrial robot, information retrieval, invention of writing, it is difficult to get a man to understand something, when his salary depends on his not understanding it, James Watt: steam engine, Jeff Hawkins, Johannes Kepler, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John von Neumann, Kevin Kelly, Kickstarter, Laplace demon, Large Hadron Collider, Loebner Prize, machine translation, market fundamentalism, Marshall McLuhan, Menlo Park, military-industrial complex, mirror neurons, Nick Bostrom, Norbert Wiener, OpenAI, optical character recognition, paperclip maximiser, pattern recognition, personalized medicine, Picturephone, profit maximization, profit motive, public intellectual, quantum cryptography, RAND corporation, random walk, Ray Kurzweil, Recombinant DNA, Richard Feynman, Rodney Brooks, self-driving car, sexual politics, Silicon Valley, Skype, social graph, speech recognition, statistical model, Stephen Hawking, Steven Pinker, Stewart Brand, strong AI, superintelligent machines, supervolcano, synthetic biology, systems thinking, technological determinism, technological singularity, technoutopianism, TED Talk, 
telemarketer, telerobotics, The future is already here, the long tail, the scientific method, theory of mind, trolley problem, Turing machine, Turing test, universal basic income, Upton Sinclair, Von Neumann architecture, Whole Earth Catalog, Y2K, you are the product, zero-sum game

For example, the computer might infer that the person who would escape death if the trolley is left alone is a convicted terrorist recidivist loaded up with doomsday pathogens, or a saintly POTUS—or part of a much more elaborate chain of events in detailed alternative realities. If one of these problem descriptions seems paradoxical or illogical, it may be that the authors of the Trolley Problem have adjusted the weights on each side of the balance such that hesitant indecision is inevitable. Alternatively, one can use misdirection to rig the system, such that the error modes are not at the level of attention. For example, in the Trolley Problem, the real ethical decision was made years earlier when pedestrians were given access to the rails—or even before that, when we voted to spend more on entertainment than on public safety.

Faith and ethics are widespread in our species and can be studied using scientific methods, including but not limited to fMRI, psychoactive drugs, questionnaires, etc. Very practically, we have to address the ethical rules that should be built in, learned, or probabilistically chosen for increasingly intelligent and diverse machines. We have a whole series of Trolley Problems. At what number of people in line for death should the computer decide to shift a moving trolley to one person? Ultimately this might be a deep-learning problem—one in which huge databases of facts and contingencies can be taken into account, some seemingly far from the ethics at hand.

See singularity Tegmark, Max, 76–87 AI safety research, 81 Asilomar AI Principles, 2017, 81, 84 background and overview of work of, 76–77 competence of superintelligent AGI, 85 consciousness as cosmic awakening, 78–79 general expectation AGI achievable within next century, 79 goal alignment for AGI, 85–86 goals for a future society that includes AGI, 84–86 outlook, 86–87 rush to make humans obsolescent, reasons behind, 82–84 safety engineering, 86 societal impact of AI, debate over, 79–82 Terminator, The (film), 242 three laws of artificial intelligence, 39–40 Three Laws of Robotics, Asimov’s, 250 threshold theorem, 164 too-soon-to-worry argument against AI risk, 26–27, 81 Toulmin, Stephen, 18–19 transhumans, rights of, 252–53 Treister, Suzanne, 214–15 Trolley Problem, 244 trust networks, building, 200–201 Tsai, Wen Ying, 258, 260–61 Turing, Alan, 5, 25, 35, 43, 60, 103, 168, 180 AI-risk message, 93 Turing Machine, 57, 271 Turing Test, 5, 46–47, 276–77 Tversky, Amos, 130–31, 250 2001: A Space Odyssey (film), 183 Tyka, Mike, 212 Understanding Media (McLuhan), 208 understanding of computer results, loss of, 189 universal basic income, 188 Universal Turing Machine, 57 unsupervised learning, 225 value alignment (putting right purpose into machines) Dragan on, 137–38, 141–42 Griffiths on, 128–33 Pinker on, 110–11 Tegmark on, 85–86 Wiener on, 23–24 Versu, 217 Veruggio, Gianmarco, 243 visualization programs, 211–13 von Foerster, Heinz, xxi, 209–10, 215 Vonnegut, Kurt, 250 von Neumann, John, xx, 8, 35, 60, 103, 168, 271 digital computer architecture of, 58 second law of AI and, 39 self-replicating cellular automaton, development of, 57–58 use of symbols for computing, 164–65 Watson, 49, 246 Watson, James, 58 Watson, John, 225 Watt, James, 3, 257 Watts, Alan, xxi Weaver, Warren, xviii, 102–3, 155 Weizenbaum, Joe, 45, 48–50, 105, 248 Wexler, Rebecca, 238 Whitehead, Alfred North, 275 Whole Earth Catalog, xvii “Why the Future Doesn’t Need Us” (Joy), 92 Wiener, Norbert, xvi, 
xviii–xx, xxv, xxvi, 35, 90, 96, 103, 112, 127, 163, 168, 256 on automation, in manufacturing, 4, 154 on broader applications of cybernetics, 4 Brooks on, 56–57, 59–60 control via feedback, 3 deep-learning and, 9 Dennett on, 43–45 failure to predict computer revolution, 4–5 on feedback loops, 5–6, 103, 153–54 Hillis on, 178–80 on information, 5–6, 153–59, 179 Kaiser on Wiener’s definition of information, 153–59 Lloyd on, 3–7, 9, 11–12 Pinker on, 103–5, 112 on power of ideas, 112 predictions/warnings of, xviii–xix, xxvi, 4–5, 11–12, 22–23, 35, 44–45, 93, 104, 172 Russell on, 22–23 on social risk, 97 society, cybernetics impact on, 103–4 what Wiener got wrong, 6–7 Wilczek, Frank, 64–75 astonishing corollary (natural intelligence as special case of AI), 67–70 astonishing hypothesis of Crick, 66–67 background and overview of work of, 64–65 consciousness, creativity and evil as possible features of AI, 66–68 emergence, 68–69 human brain’s advantage over AI, 72–74 information-processing technology capacities that exceed human capabilities, 70–72 intelligence, future of, 70–75 Wilkins, John, 275 wireheading problem, 29–30 With a Rhythmic Instinction to Be Able to Travel Beyond Existing Forces of Life (Parreno), 263–64 Wolfram, Stephen, 266–84 on AI takeover scenario, 277–78 background and overview of work of, 266–67 computational knowledge system, creating, 271–77 computational thinking, teaching, 278–79 early approaches to AI, 270–71 on future where coding ability is ubiquitous, 279–81 goals and purposes, of humans, 268–70 image identification system, 273–74 on knowledge-based programming, 278–81 purposefulness, identifying, 281–84 Young, J.


pages: 255 words: 79,514

How Many Friends Does One Person Need? Dunbar’s Number and Other Evolutionary Quirks by Robin Dunbar, Robin Ian MacDonald Dunbar

agricultural Revolution, Albert Michelson, Donner party, Fellow of the Royal Society, glass ceiling, Gregor Mendel, Isaac Newton, mass immigration, Nash equilibrium, nuclear winter, out of africa, pattern recognition, Richard Feynman, social intelligence, Steven Pinker, theory of mind, Thomas Bayes, Thomas Malthus, trolley problem, University of East Anglia, upwardly mobile

They asked subjects to make judgements about morally dubious behaviour, but some did so while rather closer than they might have wished to a smelly toilet or a messy desk, and others did so in a more salubrious environment. The first group gave much harsher judgements than the second, suggesting that their judgements were affected by their emotional state. One of the classic dilemmas used in studies of morality is known as the ‘trolley problem’. It goes like this. Imagine you are the driver of a railway trolley approaching a set of points. You realise that your route takes you down a line where five men are working on the railway unaware of your approach. But there is a switch you can pull that would throw the points and send you off down the other line where just one man is working.

The important role of intentions was borne out by a study of stroke patients, which showed that people with damage to the brain’s frontal lobe will usually opt for the rational utilitarian option and throw their companion off the bridge. The frontal lobes provide one area in the brain where we evaluate intentional behaviour. The importance of intentionality has recently been confirmed by Marc Hauser from Harvard and Rebecca Saxe from MIT: they found that, when subjects are processing moral dilemmas like the trolley problem, the areas in the brain that are especially involved in evaluating intentionality (such as the right temporal-parietal junction just behind your right ear) are particularly active. Our appreciation of intentions is crucially wrapped up with our ability to empathise with others. The final piece in the jigsaw has now been added by Ming Hsu and colleagues at the California Institute of Technology in Pasadena.

., 223 temperature rises, 156–7 testes, size, 253 testosterone, 247 tetrachromatic women, 17–18 theology, 287–8 Thomas, Dylan, 22 Thornhill, Randy, 102 titis, 259 tits, 193, 260 Tomasello, Mike, 194 tools, 131, 137, 192 touching, 61–3 toumaï (Sahelanthropus tchadensis), 133–5 traders, 54–6 tree-climbing, 134 Treherne, John, 217 tribal groupings, 25–6 ‘trolley problem’, 269–70 trust, 63–6 tsunami, Indian Ocean, 145, 156 turtles, 98 Tusi, Nasir al-Din, 119 Tyrannosaurus rex, 120, 121 ultraviolet radiation (UVR), 89–91 Upper Palaeolithic Revolution, 137 vasopressin, 262–5 Venus figures, 137 vervet monkeys, 195–6 village sizes, 27 visual processing, 181, 272–3 vitamin: B, 90, 92; D, 87, 90–2 Vivaldi, Antonio, 71 Voland, Eckart, 42, 227, 237 voting patterns, 165–9 Vugt, Mark van, 68 walking upright, see bipedalism Walum, Hasse, 262 war chiefs, 250–1 waulking songs, 78, 155 Waynforth, David, 231, 236 wealth: advertising, 233, 236, 241; differentials, 227–8, 230, 240; inherited, 221; IQ and, 207 Whiten, Andy, 29, 179 Wilberforce, ‘Soapy Sam’, 117 Wilson, Edward O., 5 Wilson, Margo, 259 Wilson, Sandra, 95–6 Winston, Robert, 217 women: attractiveness, 233–5; colour vision, 17–20; conversations, 75, 79–80; extra-pair mating, 258–9; female–female bonding, 16, 79–80; Lonely Hearts adverts, 228–32; marriage, 227–8; skin colour, 91; social skills, 16–17 Young, Thomas, 183 Younger Dryas Event, 156–7 Zulus, 90


pages: 250 words: 79,360

Escape From Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It by Erica Thompson

Alan Greenspan, Bayesian statistics, behavioural economics, Big Tech, Black Swan, butterfly effect, carbon tax, coronavirus, correlation does not imply causation, COVID-19, data is the new oil, data science, decarbonisation, DeepMind, Donald Trump, Drosophila, Emanuel Derman, Financial Modelers Manifesto, fudge factor, germ theory of disease, global pandemic, hindcast, I will remember that I didn’t make the world, and it doesn’t satisfy my equations, implied volatility, Intergovernmental Panel on Climate Change (IPCC), John von Neumann, junk bonds, Kim Stanley Robinson, lockdown, Long Term Capital Management, moral hazard, mouse model, Myron Scholes, Nate Silver, Neal Stephenson, negative emissions, paperclip maximiser, precautionary principle, RAND corporation, random walk, risk tolerance, selection bias, self-driving car, social distancing, Stanford marshmallow experiment, statistical model, systematic bias, tacit knowledge, tail risk, TED Talk, The Great Moderation, The Great Resignation, the scientific method, too big to fail, trolley problem, value at risk, volatility smile, Y2K

Even if you are a theologian or a string theorist (perhaps they are not so very different), that still holds true. But if the aim of models is to inform better decisions, then there is an unavoidable question of defining what we mean by a better decision, and this is not trivial even for seemingly quite trivial questions. The well-known ‘trolley problem’ is one philosophical attempt to grapple with this problem: there are three people tied to one branch of a railway line and one person tied to another branch, in such a way that the passage of a train would lead to their certain deaths. A train is coming and you are at the points, which are set so that the train will kill the three people.

People who make models are primarily well-educated, middle-class individuals, often trained in a certain way that values what they perceive as scientific detachment and therefore seeks to suppress value judgements and make them less visible. Their choices reflect the social norms of the modelling environment. The target of the trolley problem memes mentioned above is the incommensurability of value judgements. What if the three people on one rail are terrible criminals and the singleton on the other is a highly respected and productive member of society? What if we are in fact trading off biodiversity for economic gain, air quality outside a school for lower commuting times or the quality of a personal relationship for higher productivity at work?

Sometimes, where a dollar value is just too crude, we find alternative commensurable units such as Quality-Adjusted Life Years which perform the same operation of reducing trade-offs to quantitative comparison. Mathematical models don’t need to do this: we can always choose to keep incommensurables separate. That the contrived trolley problem is discussed at all is a bizarre and even somewhat morbid symptom of an obsession with quantifying, comparing and judging. But, again, the social norms of the modelling environment do prioritise comprehensiveness, generalisability and universality. All of these speak in favour of slicing, dicing and weighting the multiple outputs of a model, or of many models, in order to be able to present them on the same chart.
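The commensuration operation described here — collapsing unlike outcomes into a single unit such as Quality-Adjusted Life Years so they can be ranked — can be sketched in a few lines. This is a hypothetical illustration of the operation itself, not an endorsement of it; all weights and durations are invented:

```python
# Sketch of the QALY commensuration the passage describes.
# quality_weight in [0, 1]: 1.0 is full health, 0.0 is death.
# Every figure below is an invented assumption.

def qalys(years_remaining, quality_weight):
    """Quality-adjusted life years for one health state."""
    return years_remaining * quality_weight

# Gain from an operation that raises quality of life for 15 years:
hip_replacement = qalys(15, 0.9) - qalys(15, 0.6)  # 4.5 QALYs gained
# Gain from a treatment that adds 2 years at reduced quality:
life_extension = qalys(2, 0.7)                     # 1.4 QALYs gained

# Once both outcomes sit on the same scale, the model can "prefer" one --
# the reduction of trade-offs to quantitative comparison at issue here.
prefer_hip = hip_replacement > life_extension
```

The point of the surrounding passage is that nothing forces this reduction: a model can keep incommensurable outputs separate rather than weighting them onto one chart.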


pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots by John Markoff

A Declaration of the Independence of Cyberspace, AI winter, airport security, Andy Rubin, Apollo 11, Apple II, artificial general intelligence, Asilomar, augmented reality, autonomous vehicles, backpropagation, basic income, Baxter: Rethink Robotics, Bill Atkinson, Bill Duvall, bioinformatics, Boston Dynamics, Brewster Kahle, Burning Man, call centre, cellular automata, Charles Babbage, Chris Urmson, Claude Shannon: information theory, Clayton Christensen, clean water, cloud computing, cognitive load, collective bargaining, computer age, Computer Lib, computer vision, crowdsourcing, Danny Hillis, DARPA: Urban Challenge, data acquisition, Dean Kamen, deep learning, DeepMind, deskilling, Do you want to sell sugared water for the rest of your life?, don't be evil, Douglas Engelbart, Douglas Engelbart, Douglas Hofstadter, Dr. Strangelove, driverless car, dual-use technology, Dynabook, Edward Snowden, Elon Musk, Erik Brynjolfsson, Evgeny Morozov, factory automation, Fairchild Semiconductor, Fillmore Auditorium, San Francisco, From Mathematics to the Technologies of Life and Death, future of work, Galaxy Zoo, General Magic , Geoffrey Hinton, Google Glasses, Google X / Alphabet X, Grace Hopper, Gunnar Myrdal, Gödel, Escher, Bach, Hacker Ethic, Hans Moravec, haute couture, Herbert Marcuse, hive mind, hype cycle, hypertext link, indoor plumbing, industrial robot, information retrieval, Internet Archive, Internet of things, invention of the wheel, Ivan Sutherland, Jacques de Vaucanson, Jaron Lanier, Jeff Bezos, Jeff Hawkins, job automation, John Conway, John Markoff, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Perry Barlow, John von Neumann, Kaizen: continuous improvement, Kevin Kelly, Kiva Systems, knowledge worker, Kodak vs Instagram, labor-force participation, loose coupling, Marc Andreessen, Mark Zuckerberg, Marshall McLuhan, medical residency, Menlo Park, military-industrial complex, Mitch 
Kapor, Mother of all demos, natural language processing, Neil Armstrong, new economy, Norbert Wiener, PageRank, PalmPilot, pattern recognition, Philippa Foot, pre–internet, RAND corporation, Ray Kurzweil, reality distortion field, Recombinant DNA, Richard Stallman, Robert Gordon, Robert Solow, Rodney Brooks, Sand Hill Road, Second Machine Age, self-driving car, semantic web, Seymour Hersh, shareholder value, side project, Silicon Valley, Silicon Valley startup, Singularitarianism, skunkworks, Skype, social software, speech recognition, stealth mode startup, Stephen Hawking, Steve Ballmer, Steve Jobs, Steve Wozniak, Steven Levy, Stewart Brand, Strategic Defense Initiative, strong AI, superintelligent machines, tech worker, technological singularity, Ted Nelson, TED Talk, telemarketer, telepresence, telepresence robot, Tenerife airport disaster, The Coming Technological Singularity, the medium is the message, Thorstein Veblen, Tony Fadell, trolley problem, Turing test, Vannevar Bush, Vernor Vinge, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, We are as Gods, Whole Earth Catalog, William Shockley: the traitorous eight, zero-sum game

Congress could create a liability exemption for self-driving vehicles, as it has done for childhood vaccines. Insurance companies could impose a no-fault regime when only autonomous vehicles are involved in accidents. Another aspect of the liability issue is what has been described as a version of the “trolley problem,” which is generally stated thus: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks that has only one person on it, but that person will be killed.

A technology known as V2X that continuously transmits the location of nearby vehicles to each other is now being tested globally. In the future, even schoolchildren will be carrying sensors to alert cars to their presence and reduce the chance of an accident. It’s puzzling, then, that the philosophers generally don’t explore the trolley problem from the point of view of the greater good, but rather as an artifact of individual choice. Certainly it would be an individual tragedy if the technology fails—and of course it will fail. Systems that improve the overall safety of transportation seem vital, even if they aren’t perfect. The more interesting philosophical conundrum is over the economic, social, and even cultural consequences of taking humans out of the loop in driving.

Some people have been so bamboozled by the word ‘machine’ that they don’t realize what can be done and what cannot be done with machines—and what can be left, and what cannot be left to the human beings.”19 Only now, six and a half decades after Wiener wrote Cybernetics in 1948, is the machine autonomy question becoming more than hypothetical. The Pentagon has begun to struggle with the consequences of a new generation of “brilliant” weapons,20 while philosophers grapple with the “trolley problem” in trying to assign moral responsibility for self-driving cars. Over the next decade the consequences of creating autonomous machines will appear more frequently as manufacturing, logistics, transportation, education, health care, and communications are increasingly directed and controlled by learning algorithms rather than humans.


The Science of Language by Noam Chomsky

Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alfred Russel Wallace, backpropagation, British Empire, Brownian motion, Computing Machinery and Intelligence, dark matter, Drosophila, epigenetics, finite state, Great Leap Forward, Howard Zinn, language acquisition, phenotype, public intellectual, statistical model, stem cell, Steven Pinker, Stuart Kauffman, theory of mind, trolley problem

And there's been no progress. These are just questions that are too hard. There is by now some study – like John Mikhail's – some empirical study of elements of human moral nature. Contemporary ethical philosophy has given interesting examples, the kind that Judith Thomson talks about, and Gil Harman and others – the trolley problem and others. There are situations in which we just have an intuition about what the right answer is – and it's a very strange one. For example, sometimes it leads everybody to prefer an outcome that will kill more people when they have a choice of killing one person; and the results are pretty systematic.

NC: There is now for the first time some serious research into it. A lot of it grew out of John Mikhail's dissertation; now Marc Hauser is doing work, Elizabeth Spelke, and others. And they're finding some quite interesting things. There are these kinds of paradoxical situations that have been worked on by ethical philosophers for some time – trolley problems, for example – conditions under which you have a choice to make. A typical case is a doctor in a hospital who has five patients who each have different diseased organs, and they're all going to die. And a healthy person comes in and you could kill him and take the appropriate organs and transplant them and save five patients.

S. 51, 53 Hale, Kenneth 17, 62 Halle, Morris 21 Hamilton, William D. 104 Harman, Gilbert 100 Harris, Zellig 38, 80, 81, 86 Hauser, Marc 100, 109, 286evolution of communication 12, 58 faculty of language 60, 170, 172, 268, 269 hearing 48 Helmholtz, Hermann von 73, 97 Herbert of Cherbury 181 Higginbotham, Jim 129, 130 Hirsh-Pasek, Kathy 196 homunculus 37, 290 Hornstein, Norbert 29, 183, 265 human behavior 138–151, 286 human evolution 2, 13, 71developmental constraints on 41 ‘great leap forward' 13, 70, 77 human nature 95–102, 108–112 and biological capacities 95 Chomsky on 95–102 determined and uniform 95, 99 distinctiveness of 176–179 enlightenment conception of 142 and evolution 103–107 ‘great leap forward' 179 moral agency 101 plasticity of 121 humanitarian intervention 121, 122, 287 humans, genetic variation 13 Hume, David 26, 90, 99, 106, 179color problem 247–248, 286 theory of moral nature 63, 99, 109 Huxley, Thomas 23 I-beliefs 153–156 definition of 156 I-concepts 153–156 definition of 155 I-language 81, 153–156, 164, 239, 258, 266intensional specification of 167 imagination 70, 161 inclusiveness 62, 281 induction 88, 90, 95 inference 73, 165, 221 information 208, 213, 218, 228, 229, 254pragmatic 30 semantic 29, 260 innateness 39–45, 60, 89, 91, 255, 267, 284 innatism 123 innovation 71, 74, 95, 177, 178, 185, 282technological 145 insects, study of 147 instinct 96, 143, 178, 181, 247, 248, 287 instrumentalism 211 intention (see also nativism) 163 internalism 6, 228, 248, 262–263, 269, 287and concepts 188, 190, 209, 255–257, 260, 272 intuitions 125, 126 island sentences 50 Jackendoff, Ray 170, 172 Jacob, François 24, 53, 60, 243 Joos, Martin 145 justice 120 Kahneman, Daniel 140 Kant, Immanuel 90 Kauffman, Stuart 21, 22, 266 Kayne, Richard 55, 84, 241 Keller, Helen 45 Kissinger, Henry 101, 107, 113, 287 Klein, Ralph 111 knowledge 70, 193See also information Kripke, Saul 126 Kropotkin, Peter 103, 111 languageand agency 124–128 as an animal instinct 178 and 
arithmetical capacities 16 and biology 21–30, 80, 235, 284 biophysical explanations of 208 and brain morphology 46 capacity for 70, 164 characteristic uses of 11–12 cognitive benefits of 2 competence and use 63 and complex thought 1 complexity of 52, 146 compositional character of 37 computational theory of 174, 272 and concepts 71, 198 conceptual resources of 212 displacement property 16 distinctive features 22 domination 232–238 expectations for 54 externalization of 52, 78, 79, 153, 222, 278 flexibility 95, 162, 197, 210, 224, 227 formal languages 16, 17, 289 formal theory of 21–30 functions of 11–20, 164, 165 generative capacity 49 head-first 240 hierarchical structure 232–238 I-language 153–156, 164, 239, 258, 266 interface conditions 25 internal 37 internal, individual and intensional 37, 154, 167 internal use of 52, 69, 124, 153, 160, 197, 262–263, 272–274 a ‘knowledge' system 187, 193 localization of 46, 59, 69–74 and mathematics 181 modularity of 59 movement property 16, 85, 108, 264–265 as a natural object 2, 7 nominalizing languages 155 open texture of 273 and other cognitive systems 271 phonetic features 42 phonological features 42, 57 precursors of 43, 77 properties of 22, 37, 60, 62 public language 153, 288 purposes of 224 and reason 181 result of historical events 84 rules of 165, 221, 223, 224, 225, 283, 284 and science 124–128 sounds available in 282 structural features of 42 structure of 236, 277–278 study of 36, 76, 79, 154See also linguistics theories of 164, 193, 239, 243, 285 unboundedness 177, 262 uniqueness to humans 150 variation in the use of 164, 239–242 language faculty 74, 172, 177, 243, 260, 261, 270adicity requirements of 198, 199 perfection of 50 language of thought 27, 71, 189, 190, 220, 230, 269 Lasnik, Howard 85 learning 95, 180, 200, 226, 281, 282empiricism and 173, 179 learning a language 187, 225, 226 Lenneberg, Eric 21, 43, 47, 59 Lepore, E. 
195 Lewis, David 153, 165, 220, 222, 223, 224 Lewontin, Richard 58, 157, 170, 172, 173, 175, 231 lexical items 62categories of 234 origin of 46 liberalism 98 linguistic communities 222 linguistic development 39See also development linguistic practices 221, 223 linguistic principles 237, 276 linguistics 19, 36, 82, 145and biology 150 first factor considerations 45, 96, 148 and natural science 38 and politics 152 procedural theories in 149 second factor considerations 148, 277 structural 80 theories of 87, 265 third factor considerations:separate entry Locke, John 26, 125, 267personal identity 31, 271 secondary qualities 256 logic, formal 251 Logical Structure of Linguistic Theory 84–85 Lohndal, Terje 57 Lorenz, Konrad 21 Marx, Karl 122 mathematics 127, 165, 214, 215, 266capacity for 15, 136 formal functions in 166–169 and language 181 semantics for 251, 252 Mayr, Ernst 174 meaning 29, 98, 199, 206, 250, 252, 270, 273computational theory of 213 construction of a science of 226–230 externalist science of 209–220 methodology for a theory of 226, 227 study of 261 theories of 221 theory of 212, 214, 217, 226 Mehler, Jacques 55 Merge 16, 77, 91, 181, 236, 243, 263, 279–280 centrality of 41, 60, 62, 176, 245 consequences of 17 and edge properties 17, 41 Merge, external 17, 166, 201, 238, 263 Merge, internal 16, 25, 29, 85, 201, 238, 264 mutation giving rise to 43, 52 origin of 14, 15 Pair Merge 201, 264 and psychic identity 28 uniqueness to humans 25, 200, 205 metaphor 195 metaphysics 125, 157 Mikhail, John 63, 99, 100, 109, 129, 286 Mill, John Stuart 121, 122, 287 Miller, George 81 mindas a causal mechanism 138 computational sciences of 247 computational theory of 280 philosophy of 186, 255 place of language in 69–74 representational theory of 162, 188 science of 138–151, 212, 288 theory of 14 Minimalist Program 24, 83, 84, 233, 235–236, 237, 245, 246, 264and adaptationism 172 aim of 42, 199 simplicity and 80, 243, 285 modes of presentation (MOPs) 187, 190, 217, 219, 
275roles of 218 morality 99, 100, 109, 287character of 110 conflicting systems 114 generation of action or judgment 110 moral truisms 101, 102 theories of 110, 135 trolley problems 109 and universalization 113–117 Moravcsik, Julius 164 morphemes 81, 149 morphology 52, 54, 195distributed 27 and syntax 200 Morris, Charles 250 Move 108 mutations 14, 43, 170, 171survival of 51, 53 mysterianism 97 Nagel, Thomas 98 Narita, Hiroki 57 nativism 187, 217, 283 natural numbers 204 natural sciences 18, 38 natural selection 58, 76, 104, 143, 157 Navajo language 277 neural networks 225 neurophysiology 74 Newton, Isaac 66, 67, 72, 88, 127, 134alchemy 67 nominalism 87, 91 non-violence 114 Norman Conquest 84 objective existence 169 optimism 118–123, 288 parameters 39–45, 54, 239–242, 277, 282, 283and acquisition of language 241 choice of 45, 83 developmental constraints in 243 functional categories 240 head-final 55, 240 headedness macroparameter 241, 276 linearization parameter 55 macroparameters 55 microparameters 55, 84, 241 polysynthesis 55 and simplicity 80 Peck, James 288 Peirce, Charles Sanders 96, 132, 184, 250abduction 168, 183, 246, 248 truth 133, 136 perfection 50–58, 172, 175, 263–264, 279 person, concept of 125, 126, 271, 284‘forensic' notion of 125 persuasion 114, 116 Pesetsky, David 30 Petitto, Laura-Ann 48, 78 phenomenalism 211 philosophers 129–131, 282, 283contribution of 129 contribution to science 129 philosophy 181accounts of visual sensations 255–257 of language 35, 273 of mind 186, 255 problems in 286 and psychology 140 phonemes 81 phonetic/phonological interfaces 161, 194, 253, 278 phonology 28, 40, 52, 54, 57, 109, 208 physicalism 187 physics 19, 65, 106, 144and chemistry 65 folk physics 72 theoretical 18, 65, 73, 100 Piattelli-Palmarini, Massimo 140, 246, 279 Pietroski, Paulconcepts 47, 199, 200, 209 semantics 198, 211, 223, 229, 254 Pinker, Steven 166, 170, 172, 176 Pirahã language 30 Plato 115 Plato's Problem 23, 195, 236, 244, 246, 266 Poincaré, Henri 65 
politics 116, 119, 145, 146, 152 poverty of the stimulus observations 5, 23, 40, 177, 200, 227, 233, 262 power 120 pragmatic information 30 pragmatics 36, 130, 250–254, 289definition of 250 and reference 253 principles and parameters approach to linguistic theory 24, 53, 235, 236, 240, 245, 276language acquisition 60, 82, 83, 149 and simplicity 246 progress 118, 145, 183 projection problem 83, 89 prosody 37 psychic continuity 26, 205, 207, 271 psychology 219of belief and desire 138, 141 comparative 21 evolutionary 103–107, 111 folk psychology 72, 141 and philosophy 140 rationalistic 255 scientific 140 psychology, comparative 21 public intellectuals 122 Pustejovsky, James 164, 195 Putnam, Hilary 95, 126, 138 Quine, W.


pages: 338 words: 100,477

Split-Second Persuasion: The Ancient Art and New Science of Changing Minds by Kevin Dutton

availability heuristic, Bernie Madoff, call centre, Cass Sunstein, classic study, cognitive bias, cognitive dissonance, cognitive load, credit crunch, different worldview, double helix, Douglas Hofstadter, equity premium, fundamental attribution error, haute couture, job satisfaction, Jon Ronson, loss aversion, Milgram experiment, Philippa Foot, placebo effect, Stephen Fry, Stephen Hawking, Steven Pinker, theory of mind, trolley problem, ultimatum game, upwardly mobile

Discover (April 2004). http://discovermagazine.com/2004/apr/whose-life-would-you-save (accessed January 9th, 2007). 5 Consider, for example … The Trolley Problem was first proposed in this form by Philippa Foot in ‘The Problem of Abortion and the Doctrine of the Double Effect’. In Virtues and vices and other essays in moral philosophy (Berkeley, CA: University of California Press, 1978). 6 Now consider the following … Thomson, Judith J. ‘Killing, Letting Die, and the Trolley Problem.’ The Monist 59 (1976): 204–17. Want to take things a stage further? How about this? A brilliant transplant surgeon has five patients.

A healthy young traveller, just passing through, comes in to the doctor’s surgery for a routine checkup. While performing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that were the young man to disappear, no-one would suspect the doctor … (See Thomson, Judith J. ‘The Trolley Problem.’ Yale Law Journal 94 (1985): 1395–1415.) 7 Harvard psychologist Joshua Greene … Greene, Joshua D., Sommerville, R. Brian, Nystrom, Leigh E., Darley, John M. and Cohen, Jonathan D., ‘An fMRI Investigation of Emotional Engagement in Moral Judgement.’ Science 293 (2001): 2105–2108. For a more general account of the neuroscience of morality see Greene, Joshua D. and Haidt, Jonathan, ‘How (and Where) Does Moral Judgement Work?’


pages: 198 words: 59,351

The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning by Justin E. H. Smith

3D printing, Ada Lovelace, Adrian Hon, agricultural Revolution, algorithmic management, artificial general intelligence, Big Tech, Charles Babbage, clean water, coronavirus, COVID-19, cryptocurrency, dark matter, disinformation, Donald Trump, drone strike, Elon Musk, game design, gamification, global pandemic, GPT-3, Internet of things, Isaac Newton, Jacquard loom, Jacques de Vaucanson, Jaron Lanier, jimmy wales, Joseph-Marie Jacquard, Kuiper Belt, Mark Zuckerberg, Marshall McLuhan, meme stock, new economy, Nick Bostrom, Norbert Wiener, packet switching, passive income, Potemkin village, printed gun, QAnon, Ray Kurzweil, Republic of Letters, Silicon Valley, Skype, strong AI, technological determinism, theory of mind, TikTok, Tragedy of the Commons, trolley problem, Turing machine, Turing test, you are the product

In this short book we will range widely in topic and time, permitting ourselves to linger far from some of the questions that internet users and tech analysts today consider most pressing: the outsized power of the tech monopolies; the racism built into AI applications in security, social media, and credit-rating algorithms; the variations on the trolley problem to which self-driving vehicles give rise; the epidemic of disinformation and the corollary crisis of epistemic authority in our culture; internet mobs and the culture wars; and so on, ad nauseam. For the most part, this aloofness is intentional. This book does describe itself as a “philosophy” of the internet and, while there will be much disagreement about what that might mean, most of us can at least agree that a philosophy of something, whatever else it may be, has the right to zoom out from that thing and to consider it in relation to its precedents, or in relation to other things alongside which it exists in a totality.

See Cantwell Smith, Brian sociobiology, 71 Source, The (computer network), 8 Spotify, 47–49, 164 Srinivasan, Balaji, 29 Stanley, Manfred, 6–7 Stendhal (Marie-Henri Beyle), 35 telecommunication: among humans, 59, 83–84, 124; among plants and animals, 56–59, 73–74, 83–84 teledildonics, 164 TikTok, 50 Tinder, 21 Tormé, Mel, 47 trolley problem, 13 Trump, Donald, 44, 49 Tupi (language), 108 Turing test, 30 Turing Tumble (toy), 110–11 Twitter, 32, 53–55, 122, 155, 164 Tyson, Neil DeGrasse, 90 Uber, 45 Vaucanson, Jacques de, 98, 119, 128–30 video games, 41, 43–45, 122 virality. See viruses viruses, 141–43 Vischer, Friedrich Theodor, 26 Vischer, Robert, 25–26 Vosterloch, Captain, 78 Wales, Jimmy, 156 Walton, Izaak, 40 Walzer, Michael, 10 Warhol, Andy, 31 Watson, James D., 70 weaving, 66, 127–39 White, Leslie, 80 Wiener, Norbert, 6, 60, 116–18, 142 Wikipedia, 154–58, 168, 170 Williams, James, 30, 37–38 Wilson, E.


pages: 233 words: 69,745

The Reluctant Carer: Dispatches From the Edge of Life by The Reluctant Carer

call centre, cognitive dissonance, deskilling, disinformation, gig economy, Jeff Bezos, load shedding, place-making, stem cell, telemarketer, trolley problem

Mum could get by without him, up to a point; the reverse is impossible. The longer this lasts, the worse it will be and perhaps the worse I will become. The more of us there are, the faster we sink. Our life is a leaking lifeboat. Or so it seems to me. In psychology and ethics this is known as a Trolley Problem, after a model in which one might change the direction of a runaway tram to spare one group of people, but in doing so kill another. There is no ‘right’ answer, but as Wikipedia explains, ‘Under some interpretations of moral obligation, simply being present in this situation and being able to influence its outcome constitutes an obligation to participate.’

If you want to see how much you love someone, try and fix their computer. ‘Password?’ ‘Don’t know.’ ‘User ID?’ ‘Don’t know.’ I would consider taking my own life and the lives of others before dealing with the online ‘help’ desk of his email provider again, bearing so little basic information. No Trolley Problems there. If anything gets logged out of now that will be end of his online adventures. * Into this stable instability land the groceries. A spin-off of the Amazon issue, except this is about things we need. If I am not there to monitor the delivery, one or both of two things will happen. Either my mother will struggle to unpack it and give my dad a hard time about all the things he has or hasn’t ordered and the expense of all this, or my sister will arrive and unpack it herself along with a diatribe against the whole household, which she will save and relay to me another time.


pages: 1,261 words: 294,715

Behave: The Biology of Humans at Our Best and Worst by Robert M. Sapolsky

autism spectrum disorder, autonomous vehicles, behavioural economics, Bernie Madoff, biofilm, blood diamond, British Empire, Broken windows theory, Brownian motion, car-free, classic study, clean water, cognitive dissonance, cognitive load, corporate personhood, corporate social responsibility, Daniel Kahneman / Amos Tversky, delayed gratification, desegregation, different worldview, domesticated silver fox, double helix, Drosophila, Edward Snowden, en.wikipedia.org, epigenetics, Flynn Effect, framing effect, fudge factor, George Santayana, global pandemic, Golden arches theory, Great Leap Forward, hiring and firing, illegal immigration, impulse control, income inequality, intentional community, John von Neumann, Loma Prieta earthquake, long peace, longitudinal study, loss aversion, Mahatma Gandhi, meta-analysis, microaggression, mirror neurons, Mohammed Bouazizi, Monkeys Reject Unequal Pay, mouse model, mutually assured destruction, Nelson Mandela, Network effects, nocebo, out of africa, Peter Singer: altruism, phenotype, Philippa Foot, placebo effect, publication bias, RAND corporation, risk tolerance, Rosa Parks, selective serotonin reuptake inhibitor (SSRI), self-driving car, Silicon Valley, Skinner box, social contagion, social distancing, social intelligence, Stanford marshmallow experiment, Stanford prison experiment, stem cell, Steven Pinker, strikebreaker, theory of mind, Tragedy of the Commons, transatlantic slave trade, traveling salesman, trickle-down economics, trolley problem, twin studies, ultimatum game, Walter Mischel, wikimedia commons, zero-sum game, zoonotic diseases

The Frontal Cortex and Its Relationship with the Limbic System We now have a sense of what different subdivisions of the PFC do and how cognition and emotion interact neurobiologically. This leads us to consider how the frontal cortex and limbic system interact. In landmark studies Joshua Greene of Harvard and Princeton’s Cohen showed how the “emotional” and “cognitive” parts of the brain can somewhat dissociate.66 They used philosophy’s famous “runaway trolley” problem, where a trolley is bearing down on five people and you must decide if it’s okay to kill one person to save the five. Framing of the problem is key. In one version you pull a lever, diverting the trolley onto a side track. This saves the five, but the trolley kills someone who happened to be on this other track; 70 to 90 percent of people say they would do this.

More interesting than squabbling about the relative importance of reasoning and intuition are two related questions: What circumstances bias toward emphasizing one over the other? Can the differing emphases produce different decisions? As we’ve seen, then–graduate student Josh Greene and colleagues helped jump-start “neuroethics” by exploring these questions using the poster child of “Do the ends justify the means?” philosophizing, namely the runaway trolley problem. A trolley’s brake has failed, and it is hurtling down the tracks and will hit and kill five people. Is it okay to do something that saves the five but kills someone else in the process? People have pondered this since Aristotle took his first trolley ride;* Greene et al. added neuroscience.

Alabama, 171, 589 mimicry, 390 empathic, 102, 522–24 mirror neurons and, see mirror neurons minimal group paradigm, 389–91 Minsky, Marvin, 603, 605 mirror neurons and supposed functions, 166n, 180n, 536–41 autism and, 539–40 empathy and, 540–41 social interactions and, 538–39 Mischel, Walter, 186–87 Mitchell, David, 657 M’Naghten, Daniel, 586–87, 598 Mogil, Jeffrey, 133, 524, 544 mole rats, 120, 352 Moniz, Egas, 9 Money, John, 215 monkeys, 4, 35, 36, 47, 48, 50–51, 55, 67, 68, 70, 71, 73–74, 82, 104, 109–10, 123, 148, 172, 221, 429, 535, 557 baboons, 17, 123, 131–32, 162, 172, 191–92, 196, 207, 295, 303, 337, 338, 429, 648–52, 648, 650 “Garbage Dump” troop of, 648–50, 649 hierarchies and, 426–27, 427, 428, 436–39, 442, 455 deception in, 513 “executive,” stress in, 436 Harlow’s experiments with, 189–90, 190, 192 kinship understanding in, 337–38 langurs and competitive infanticide, 334–35 moral judgments in, 484–85, 487 sex differences in behaviors of, 213–14, 214 social rank and, 433, 434 tamarins, 110, 213, 355, 357 monoamine oxidase-A (MAO-A), 251–55, 257, 264, 605 monogamy, 339, 366 morality and moral decisions, 478–520 in animals, 484–87 applying science of, 504–20 automaticity and, 50 in children, 181–85 reasoning in, 182–83 competition and, 495–500 consequentialism and, 504–7, 520 context in, 488–503 cultural, 275, 493–503 framing, 491–92 language, 491 proximity, 491 special circumstances, 492–93 cooperation and, 495–500, 508–9 cultural differences and, 275 deontology and, 504, 505, 520 disgust and, 398, 454, 561–65 doing the harder thing when it’s the correct thing to do, 45, 47–48, 50, 51, 55, 56, 63, 64, 74, 75, 92, 130, 134, 513, 515, 614 dumbfounding in, 483 honesty and duplicity and, 512–20 in infants, 483–84 internal motives and external actions in, 493 intuition in, 478, 479, 481–83, 507–8 “me vs. us” and “us vs. 
them” in, 508–12 obedience and, 471, 473 see also obedience and conformity political orientation and, 449–50 punishment and, see punishment reasoning in, 169, 478–81, 487–88, 507–8, 542 in adolescents, 167–69 in children, 182–83 in infants, 483–84 runaway trolley problem (killing one person to save five) and, 55, 56, 58–59, 117, 482, 488–91, 505–7 self-driving cars and, 612n saving person vs. dog, 368, 371 and sins of commission vs. omission, 490 and tragedy of the commons vs. tragedy of commonsense morality, 508–11, 533 universals of, 494–95 utilitarianism and, 505–7 virtue ethics and, 504, 520 Moral Life of Children, The (Coles), 181n Moral Origins: The Evolution of Virtue, Altruism, and Shame (Boehm), 323 Moral Politics: How Liberals and Conservatives Think (Lakoff), 558 Moral Tribes: Emotion, Reason, and the Gap Between Us and Them (Greene), 508–9 Mormons, 367 Morozov, Pavlik, 368–69, 487 Morse, Stephen, 598–600 Moscone, George, 92n Mother Teresa, 535 motivation, “you must be so smart” vs.


pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, Avi Goldfarb

Abraham Wald, Ada Lovelace, AI winter, Air France Flight 447, Airbus A320, algorithmic bias, AlphaGo, Amazon Picking Challenge, artificial general intelligence, autonomous vehicles, backpropagation, basic income, Bayesian statistics, Black Swan, blockchain, call centre, Capital in the Twenty-First Century by Thomas Piketty, Captain Sullenberger Hudson, carbon tax, Charles Babbage, classic study, collateralized debt obligation, computer age, creative destruction, Daniel Kahneman / Amos Tversky, data acquisition, data is the new oil, data science, deep learning, DeepMind, deskilling, disruptive innovation, driverless car, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, everywhere but in the productivity statistics, financial engineering, fulfillment center, general purpose technology, Geoffrey Hinton, Google Glasses, high net worth, ImageNet competition, income inequality, information retrieval, inventory management, invisible hand, Jeff Hawkins, job automation, John Markoff, Joseph Schumpeter, Kevin Kelly, Lyft, Minecraft, Mitch Kapor, Moneyball by Michael Lewis explains big data, Nate Silver, new economy, Nick Bostrom, On the Economy of Machinery and Manufactures, OpenAI, paperclip maximiser, pattern recognition, performance metric, profit maximization, QWERTY keyboard, race to the bottom, randomized controlled trial, Ray Kurzweil, ride hailing / ride sharing, Robert Solow, Salesforce, Second Machine Age, self-driving car, shareholder value, Silicon Valley, statistical model, Stephen Hawking, Steve Jobs, Steve Jurvetson, Steven Levy, strong AI, The Future of Employment, the long tail, The Signal and the Noise by Nate Silver, Tim Cook: Apple, trolley problem, Turing test, Uber and Lyft, uber lyft, US Airways Flight 1549, Vernor Vinge, vertical integration, warehouse automation, warehouse robotics, Watson beat the top human players on Jeopardy!, William Langewiesche, Y Combinator, zero-sum game

Even if we’re all comfortable with complete machine autonomy, the law might not allow it. Isaac Asimov anticipated the regulatory issue by opting for hard coding robots with three laws, cleverly designed to remove the possibility that robots harm any human.8 Similarly, modern philosophers often pose ethical dilemmas that seem abstract. Consider the trolley problem: Imagine yourself standing at a switch that allows you to shift a trolley from one track to another. You notice five people in the trolley’s path. You could switch it to another track, but along that path is one person. You have no other options and no time to think. What do you do? That question confounds many people, and often they just want to avoid thinking about the conundrum altogether.

., 49–50 human weaknesses in, 54–58 stereotypes, 19 Stern, Scott, 169–170, 218–219 Stigler, George, 105 strategy, 2, 18–19 AI-first, 179–180 AI’s impact on, 153–166 boundary shifting in, 157–158 business transformation and, 167–178 capital and, 170–171 cheap AI and, 15–17 data and, 174–176 economics of, 165 hybrid corn adoption and, 158–160 judgment and, 161–162 labor and, 171–174 learning, 179–194 organizational structure and, 161–162 value capture and, 162–165 strokes, predicting, 44–46, 47–49 Sullenberger, Chesley “Sully,” 184 supervised learning, 183 Sweeney, Latanya, 195, 196 Tadelis, Steve, 199 Taleb, Nassim Nicholas, 60–61 The Taming of Chance (Hacking), 40 Tanner, Adam, 195 task analysis, 74–75, 125–131 AI canvas and, 134–139 job redesign and, 142–145 Tay chatbot, 204–205 technical support, 90–91 Tencent Holdings, 164, 217, 218 Tesla, 8 Autopilot legal terms, 116 navigation apps and, 89 training data at, 186–187 upgrades at, 188 Tesla Motor Club, 111–112 Thinking, Fast and Slow (Kahneman), 209–210 Tinder, 189 tolerance for error, 184–186 tools, AI, 18 AI canvas and, 134–138 for deconstructing work flows, 123–131 impact of on work flows, 126–129 job redesign and, 141–151 usefulness of, 158–160 topological data analysis, 13 trade-offs, 3, 4 in AI-first strategy, 181–182 with data, 174–176 between data amounts and costs, 44 between risks and benefits, 205 satisficing and, 107–109 simulations and, 187–188 strategy and, 156 training data for, 43, 45–47 data risks, 202–204 in decision making, 74–76, 134–138 by humans, 96–97 in-house and on-the-job, 185 in medical imaging, 147 in modeling skills, 101 translation, language, 25–27, 107–108 trolley problem, 116 truck drivers, 149–150 Tucker, Catherine, 196 Tunstall-Pedoe, William, 2 Turing, Alan, 13 Turing test, 39 Tversky, Amos, 55 Twitter, Tay chatbot on, 204–205 Uber, 88–89, 164–165, 190 uncertainty, 3, 103–110 airline industry and weather, 168–169, 170 airport lounges and, 105–106 business boundaries and, 168–170 
contracts in dealing with, 170–171 in e-commerce delivery times, 157–158 reducing, strategy and, 156–157 strategy and, 165 unknown knowns, 59, 61–65, 99 unknown unknowns, 59, 60–61 US Bureau of Labor Statistics, 171 US Census Bureau, 14 US Department of Defense, 14, 116 US Department of Transportation, 112, 185 Validere, 3 value, capturing, 162–165 variables, 45 omitted, 62 Varian, Hal, 43 variance, 34–36 fulfillment industry and, 144–145 taming complexity and, 103–110 Vicarious, 223 video games, 183 Vinge, Vernor, 221 VisiCalc, 141–142, 163, 164 Wald, Abraham, 101 Wanamaker, John, 174–175 warehouses, robots in, 105 Watson, 146 Waymo, 95 Waze, 89–90, 106, 191 WeChat, 164 Wells Fargo, 173 Windows 95, 9–10 The Wizard of Oz, 24 work flows AI tools’ impact on, 126–129 decision making and, 133–140 deconstructing, 123–131 iPhone keyboard design and, 129–130 job redesign and, 142–145 task analysis, 125–131 World War II bombing raids, 100–102 X.ai, 97 Xu Heyi, 164 Yahoo, 216 Y Combinator, 210 Yeomans, Mike, 117 YouTube, 176 ZipRecruiter, 93–94, 100 About the Authors AJAY AGRAWAL is professor of strategic management and Peter Munk Professor of Entrepreneurship at the University of Toronto’s Rotman School of Management and the founder of the Creative Destruction Lab.


Psychopathy: An Introduction to Biological Findings and Their Implications by Andrea L. Glenn, Adrian Raine

dark triade / dark tetrad, epigenetics, longitudinal study, loss aversion, meta-analysis, phenotype, randomized controlled trial, selection bias, selective serotonin reuptake inhibitor (SSRI), statistical model, theory of mind, trolley problem, twin studies

Finally, studies have also found that damage to the ventromedial PFC alters moral judgment. One popular way for examining moral judgment has been to present individuals with a series of hypothetical moral dilemmas and ask them to make judgments (Greene et al. 2001). One of the most famous of these dilemmas is the trolley problem: A runaway trolley is heading down the tracks toward five workmen who will be killed if the trolley proceeds on its present course. You are on a footbridge over the tracks, in between the approaching trolley and the five workmen. Next to you on this footbridge is a stranger who happens to be very large.

Notably, there is significant overlap between the brain regions implicated in psychopathy and the regions important in emotional responding during moral decision making (for a review, see Raine and Yang 2006). In a study conducted in our laboratory, we presented participants with a series of moral dilemmas that had been compiled in a previous study examining the neural correlates of moral judgment (Greene et al. 2001). The trolley problem, presented in Chapter 4, is one of these dilemmas. Another example is the crying baby scenario: Enemy soldiers have taken over your village. They have orders to kill all remaining civilians. You and some of your townspeople have sought refuge in the cellar of a large house. Outside you hear the voices of soldiers who have come to search the house for valuables.


pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell

3D printing, Ada Lovelace, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Alfred Russel Wallace, algorithmic bias, AlphaGo, Andrew Wiles, artificial general intelligence, Asilomar, Asilomar Conference on Recombinant DNA, augmented reality, autonomous vehicles, basic income, behavioural economics, Bletchley Park, blockchain, Boston Dynamics, brain emulation, Cass Sunstein, Charles Babbage, Claude Shannon: information theory, complexity theory, computer vision, Computing Machinery and Intelligence, connected car, CRISPR, crowdsourcing, Daniel Kahneman / Amos Tversky, data science, deep learning, deepfake, DeepMind, delayed gratification, Demis Hassabis, Elon Musk, en.wikipedia.org, Erik Brynjolfsson, Ernest Rutherford, fake news, Flash crash, full employment, future of work, Garrett Hardin, Geoffrey Hinton, Gerolamo Cardano, Goodhart's law, Hans Moravec, ImageNet competition, Intergovernmental Panel on Climate Change (IPCC), Internet of things, invention of the wheel, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Nash: game theory, John von Neumann, Kenneth Arrow, Kevin Kelly, Law of Accelerating Returns, luminiferous ether, machine readable, machine translation, Mark Zuckerberg, multi-armed bandit, Nash equilibrium, Nick Bostrom, Norbert Wiener, NP-complete, OpenAI, openstreetmap, P = NP, paperclip maximiser, Pareto efficiency, Paul Samuelson, Pierre-Simon Laplace, positional goods, probability theory / Blaise Pascal / Pierre de Fermat, profit maximization, RAND corporation, random walk, Ray Kurzweil, Recombinant DNA, recommendation engine, RFID, Richard Thaler, ride hailing / ride sharing, Robert Shiller, robotic process automation, Rodney Brooks, Second Machine Age, self-driving car, Shoshana Zuboff, Silicon Valley, smart cities, smart contracts, social intelligence, speech recognition, Stephen Hawking, Steven Pinker, 
superintelligent machines, surveillance capitalism, Thales of Miletus, The Future of Employment, The Theory of the Leisure Class by Thorstein Veblen, Thomas Bayes, Thorstein Veblen, Tragedy of the Commons, transport as a service, trolley problem, Turing machine, Turing test, universal basic income, uranium enrichment, vertical integration, Von Neumann architecture, Wall-E, warehouse robotics, Watson beat the top human players on Jeopardy!, web application, zero-sum game

This is really not too much to ask for the AI systems of the future, given that present-day Facebook systems are already maintaining more than two billion individual profiles. A related misunderstanding is that the goal is to equip machines with “ethics” or “moral values” that will enable them to resolve moral dilemmas. Often, people bring up the so-called trolley problems,12 where one has to choose whether to kill one person in order to save others, because of their supposed relevance to self-driving cars. The whole point of moral dilemmas, however, is that they are dilemmas: there are good arguments on both sides. The survival of the human race is not a moral dilemma.

., “A decision-theoretic model of assistance,” Journal of Artificial Intelligence Research 50 (2014): 71–104. 11. A critique of beneficial AI based on a misinterpretation of a journalist’s brief interview with the author in a magazine article: Adam Elkus, “How to be good: Why you can’t teach human values to artificial intelligence,” Slate, April 20, 2016. 12. The origin of trolley problems: Frank Sharp, “A study of the influence of custom on the moral judgment,” Bulletin of the University of Wisconsin 236 (1908). 13. The “anti-natalist” movement believes it is morally wrong for humans to reproduce because to live is to suffer and because humans’ impact on the Earth is profoundly negative.


System Error by Rob Reich

"Friedman doctrine" OR "shareholder theory", "World Economic Forum" Davos, 2021 United States Capitol attack, A Declaration of the Independence of Cyberspace, Aaron Swartz, AI winter, Airbnb, airport security, Alan Greenspan, Albert Einstein, algorithmic bias, AlphaGo, AltaVista, artificial general intelligence, Automated Insights, autonomous vehicles, basic income, Ben Horowitz, Berlin Wall, Bernie Madoff, Big Tech, bitcoin, Blitzscaling, Cambridge Analytica, Cass Sunstein, clean water, cloud computing, computer vision, contact tracing, contact tracing app, coronavirus, corporate governance, COVID-19, creative destruction, CRISPR, crowdsourcing, data is the new oil, data science, decentralized internet, deep learning, deepfake, DeepMind, deplatforming, digital rights, disinformation, disruptive innovation, Donald Knuth, Donald Trump, driverless car, dual-use technology, Edward Snowden, Elon Musk, en.wikipedia.org, end-to-end encryption, Fairchild Semiconductor, fake news, Fall of the Berlin Wall, Filter Bubble, financial engineering, financial innovation, fulfillment center, future of work, gentrification, Geoffrey Hinton, George Floyd, gig economy, Goodhart's law, GPT-3, Hacker News, hockey-stick growth, income inequality, independent contractor, informal economy, information security, Jaron Lanier, Jeff Bezos, Jim Simons, jimmy wales, job automation, John Maynard Keynes: Economic Possibilities for our Grandchildren, John Maynard Keynes: technological unemployment, John Perry Barlow, Lean Startup, linear programming, Lyft, Marc Andreessen, Mark Zuckerberg, meta-analysis, minimum wage unemployment, Monkeys Reject Unequal Pay, move fast and break things, Myron Scholes, Network effects, Nick Bostrom, Northpointe / Correctional Offender Management Profiling for Alternative Sanctions, NP-complete, Oculus Rift, OpenAI, Panopticon Jeremy Bentham, Parler "social media", pattern recognition, personalized medicine, Peter Thiel, Philippa Foot, premature optimization, profit 
motive, quantitative hedge fund, race to the bottom, randomized controlled trial, recommendation engine, Renaissance Technologies, Richard Thaler, ride hailing / ride sharing, Ronald Reagan, Sam Altman, Sand Hill Road, scientific management, self-driving car, shareholder value, Sheryl Sandberg, Shoshana Zuboff, side project, Silicon Valley, Snapchat, social distancing, Social Responsibility of Business Is to Increase Its Profits, software is eating the world, spectrum auction, speech recognition, stem cell, Steve Jobs, Steven Levy, strong AI, superintelligent machines, surveillance capitalism, Susan Wojcicki, tech billionaire, tech worker, techlash, technoutopianism, Telecommunications Act of 1996, telemarketer, The Future of Employment, TikTok, Tim Cook: Apple, traveling salesman, Triangle Shirtwaist Factory, trolley problem, Turing test, two-sided market, Uber and Lyft, uber lyft, ultimatum game, union organizing, universal basic income, washing machines reduced drudgery, Watson beat the top human players on Jeopardy!, When a measure becomes a target, winner-take-all economy, Y Combinator, you are the product

For example, when confronted with a choice about whether to swerve into a bicycle lane to protect the car’s driver or to harm parents bicycling with their children, what should the autonomous system that pilots the car be programmed to do? Consider a hypothetical dilemma introduced by the English philosopher Philippa Foot in the late 1960s, the “Trolley Problem,” that has now become a real problem for engineers. In the context of autonomous cars, the problem asks whether a vehicle should be programmed to endanger or sacrifice the life of its sole passenger by running off the road in order to avoid potentially hitting five pedestrians crossing the road.

See also Y Combinator start-up mindset, xxi “Statement on the Purpose of a Corporation” (Business Roundtable), 181 Stiglitz, Joseph, 254 stock options, 26–28 substantive fairness, 92–93 success disasters, 20–21 Sundar Pichai, 64–65 Sundararajan, Arun, 49 Sunflower Movement, Taiwan, 242 supervised data, 85–86 Supreme Court of the United States, 199, 201 surveillance capitalism, 115, 121–22 surveillance society, 151 surveillance technologies, 21, 112, 113–14, 125–26 Swartz, Aaron, xxi–xxvi, 44 Sweeney, Latanya, 130 Swift, Taylor, 111–12 systemic problems in a democracy, 239–43 Taiwan, 242–43, 261–62 Tang, Audrey, 242–43 Taylor, Frederick, and Taylorism, 14 technological innovation overview, 240 balancing the competing values created by, 240–43, 258 Clipper Chip technology, 115–16 deceleration in, 52 democratic resolution of rival values, xxxiii–xxxiv externalities created by, xxvii failure to examine potential societal harm, xxi and governance, 52–53 insider argument for a reflective stance, 254 instant wealth as a priority, xxv–xxvi maximizing benefits while minimizing harms, xiii–xiv, 65 See also algorithmic decision-making; governance; innovation technological unemployment, 174–76 technologists enablers of, xxviii funding for OpenAI’, 234 governing us vs. 
governing them, xxviii–xxix, 68–69, 257–63 lack of diversity, 17, 41 legislative ignorance of, 66–68 libertarian tendencies, 25, 52, 67 new masters of the universe, 22–23 optimizing facial recognition software, 17 small group of humans make choices for all of us, 11, 25–26 transforming education to create civic-minded technologists, 251 See also optimization mindset technology, 21, 53–59, 169, 174, 237–39 Telecommunications Act (1996), 60, 61, 62 telegraph, 56–57 telephone system, 60 Terman, Frederick, 28–29 terrorist attack, San Bernardino, California, 72 Theranos, xxx Thiel, Peter, 28, 38, 42, 52 Thrun, Sebastian, 154 Time magazine, 30 transparency of algorithmic decisions, 105, 107–9 and control, 134 Facebook Oversight Board, 215–16 requiring internet platforms to disclose information on credibility of sources, 225–26 “Traveling Salesperson Problem” (TSP), 12–13 Triangle Waist Company fire in 1911, 53–55 Trolley Problem, the, 155 truck drivers and trucking industry, 175 Trump, Donald J., xi, 187–88, 215 Tuskegee experiment, xxxi Twitter as digital civic square, 21 leaders surprised by ways the platform could do harm, 254 Trump’s access denied after January 6, 2021, xi–xii, 187–88 See also big tech platforms ultimatum game, 91 unicorns, 37–38, 39, 43 United Kingdom, 165, 218, 254, 260–62 United Nations Development Programme (UNDP), 173 United States Postal Service, 3–4 universal basic income (UBI), 182–84, 185 University College London Jeremy Bentham display, 120–21, 124 unsupervised data, 85 US Air Force Academy, 103 US Capitol assault (Jan. 
6, 2021), xi-xii, xxvi, 115, 187, 209, 215, 223 US Census Bureau, 41 US Department of Justice (DOJ), 257 US Federation of Worker Cooperatives, 180 US security forces and message encryption, 128–29 USA PATRIOT Act, 116 user engagement in online platforms, 40 user-centric privacy, 149–50 utilitarianism, 9, 121, 168, 245 Vacca, James, 104–5 values overview, xvii, xxix balancing the competing values created by innovation, 240–43, 258 expressing ourselves in support of each other, 178 free expression, democracy, individual dignity at risk online, 190–91 freedom as, 172–73 goals assessment for evaluating efficiency vs. values, 15–16 replacing governance by big tech with process of deciding, xxix resolving trade-offs between rival values, xxxi–xxxiii, 45 at risk from new, unregulated innovations, 56 of tech leaders as expert rulers, 67–68 See also dignity, fairness, free speech, privacy, safety, security Varian, Hal, 174 venture capital, inequality in distribution of, 41 venture capitalists (VCs), 25–49 ecosystem of, 31–33 funding Soylent, 8 funds as investment vehicles for their LPs, 38–39 hackers and, 28, 52, 68 high value exits, 40–41 increasing numbers of, 39 narrow view of success as white, male, nerd, 41 optimizing from multiple starting points, 43–45 and scalability of businesses, xxviii and Silicon Valley, 17, 26–28 at Stanford showcasing their new companies, 42–45 unicorns, search for, 37–38, 39, 43 Vestager, Margrethe, 252–53, 255 virtual reality, the experience machine, 167–69 Waal, Frans de, 92 Wales, Jimmy, 195 Walker, Darren, 180 Wall Street Journal, 42–43 Warren, Elizabeth, 181, 256 washing machines and laundry, 157–58 watch time metric, 34 Watchdog.net, xxiii Weapons of Math Destruction (O’Neil), 98 Weinberg, Gabriel, 135–36 Weinstein, Jeremy, xv–xvi, 72 Weld, William, 130 Western Union, 57 Westin, Alan, 137–38 WhatsApp, 127–28 Wheeler, Tom, 63, 76 Whitt, Richard, 149 “Why Software Is Eating the World” (Wall Street Journal), 42–43 Wikipedia, 195–96 
Wikipedia conference, xxiii–xxiv Wilde, Oscar, 63 winner-take-all, disruption vs. democracy, 51–76 overview, 51–53 democracy and regulation of technology, 68–73 democracy as a guardrail, 73–76 government’s complicity in absence of regulation, 59–63 innovation vs. regulation, 53–59 and Plato’s philosopher kings, 63–68 Wisconsin’s COMPAS system, 88, 98 Wong, Nicole, 40, 254 worker cooperatives, 180 workers’ compensation benefit, 55 workplace safety, 53–54, 55 World Economic Forum 1996, Davos, Switzerland, 25 World Health Organization, 154 World Wide Web, 29, 30.


pages: 698 words: 198,203

The Stuff of Thought: Language as a Window Into Human Nature by Steven Pinker

airport security, Albert Einstein, Bob Geldof, classic study, colonial rule, conceptual framework, correlation does not imply causation, Daniel Kahneman / Amos Tversky, David Brooks, Douglas Hofstadter, en.wikipedia.org, experimental subject, Ford Model T, fudge factor, George Santayana, language acquisition, Laplace demon, loss aversion, luminiferous ether, Norman Mailer, Philippa Foot, Plato's cave, Richard Feynman, Ronald Reagan, Sapir-Whorf hypothesis, science of happiness, social contagion, social intelligence, speech recognition, stem cell, Steven Pinker, Thomas Bayes, Thorstein Veblen, traffic fines, trolley problem, urban renewal, Yogi Berra

The semantic distinction between after and from points to a causal distinction between succession and impingement, which in turn animates a moral distinction between tragedy and evil. Another force-dynamic distinction, the one between causing and letting, deeply penetrates our moral reasoning. The difference is exposed in the trolley problem, a famous thought experiment devised by the philosopher Philippa Foot that has long been a talking point among moral philosophers.141 A trolley is hurtling out of control and is bearing down on five railroad workers who don’t see it coming. You are standing by a switch and can divert the trolley onto another track, though it will then kill a single worker who hasn’t noticed the danger either.

Berkeley: University of California Press. Tetlock, P. E., Kristel, O. V., Elson, B., Green, M. C., & Lerner, J. 2000. The psychology of the unthinkable: Taboo tradeoffs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology, 78, 853–870. Thomson, J. J. 1985. The trolley problem. Yale Law Journal, 94, 1395–1415. Tomasello, M. 2003. Constructing a language: A usage-based theory of language acquisition. Cambridge, Mass.: Harvard University Press. Tooby, J., & Cosmides, L. 1992. Psychological foundations of culture. In J. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology and the generation of culture.

Singer, Isaac Bashevis skin color, words for skirt lengths slang Sloman, Steven Smith, Anna Nicole Smith, Susan Snedeker, Jesse sniglets snow, Eskimo words for social contract social relationships Authority Ranking Communal Sharing Exchange in human nature mismatches of type of politeness as signal of switching types words and society: duality in social life individual decision in trends and moral responsibility variation and language type words and see also social relationships Soja, Nancy solidarity sound: meaning’s relation to symbolism working memory and space as cognitive framework in Authority Ranking body terms for place and direction causal discontinuities as aligning with spatial terms in conceptual semantics dimensionality of economizing of spatial terms either-or distinctions in engineering finite versus infinite ideal vocabulary for imagination constrained by imprecision of language of Kant on Linguistic Determinism and linguistic variation regarding as medium metaphorical use in other domains in metaphors for time in physics polysemy of terms for precision in expressing shape versus location words similarity between time and time contrasted with spam Spanish language Spaulding, Bailey species, concepts of specious present Spelke, Elizabeth spell, preserving the Spellman, Barbara Sperber, Dan Splash Starr, Kenneth states: aspect category (Aktionsart) as involuntary in language of thought state-change effect state-space statistical thinking status Steve Stewart, Potter straw man Stroop effect structural parallelism Subbiah, Ilavenil Subbiah, Saroja substance in Authority Ranking in conceptual semantics in engineering applications Kant on in locative construction subtexts suffixes superlatives Sutcliffe, Stuart swearing (taboo language): aloud in aphasia basal ganglia and cathartic desensitization of different ways of emotional charge of feminism and historical changes in persecution of pros and cons of profanity’s meaning religious semantics of sexual 
swearing “on” and “by,” syntax of truncated profanities as universal see also taboo words syllepsis sympathetic politeness syntax of taboo expressions see also adjectives; causative construction; dative construction; double-object (ditransitive) dative; grammar; intransitive verbs; locative construction; nouns; prepositions; transitive verbs; verbs taboos: food in human nature in prenuptial agreements relationship type mismatches leading to in swearing by see also taboo words taboo words: abusive use of acceptability of in aphasia brain and common denominator of in coprolalia count and mass nouns among descriptive use of domains dysphemistic emphasis euphemisms for for excretion and bodily effluvia historical changes idiomatic use of negative emotion triggered by paradox of identifying without using seven you can’t say on television substituting for one another as word magic see also swearing (taboo language) tact Talmy, Len Tamil language Tannen, Deborah television, seven words you can’t say on telic events Tenner, Edward tense: and aspect basic discrete nature of in engineering in English as “location” in time and “thinking for speaking” effect time as embedded in Tetlock, Philip thing “thinking for speaking” effect third commandment third-person pronoun, gender-neutral Thomas, Clarence Thomas, Dylan Thurber, James threats: rational ignorance of veiled time in Authority Ranking in conceptual semantics and consciousness counting events in as cultural universal as cyclical in engineering finite versus infinite as cognitive framework goals and language of intuitive versus scientific conceptions of “landscape” versus “procession” metaphor as medium model underlying language moving time metaphor as one-dimensional in physics precision in expressing representation of space contrasted with spatial metaphors for time-orientation metaphor see also aspect; tense Tipping Point, The (Gladwell) titles tits Tlingit language To Have and Have Not toilet token bow Tootsie 
topology Tourette syndrome Tower of Hanoi problem T pronouns transitive verbs absence of polite verb for sex and causative construction meaning and moral judgments and used intransitively trolley problem Truman, Bess truncation truth and tense see also reality; relativism Truth, Sojourner Tucker, Sophie Turkish language tumor problem Turner, Mark Tversky, Amos Twain, Mark Twin Earth thought experiment Tzeltal language Tzotzil language United Nations Resolution Universal Grammar.


pages: 533

Future Politics: Living Together in a World Transformed by Tech by Jamie Susskind

3D printing, additive manufacturing, affirmative action, agricultural Revolution, Airbnb, airport security, algorithmic bias, AlphaGo, Amazon Robotics, Andrew Keen, Apollo Guidance Computer, artificial general intelligence, augmented reality, automated trading system, autonomous vehicles, basic income, Bertrand Russell: In Praise of Idleness, Big Tech, bitcoin, Bletchley Park, blockchain, Boeing 747, brain emulation, Brexit referendum, British Empire, business process, Cambridge Analytica, Capital in the Twenty-First Century by Thomas Piketty, cashless society, Cass Sunstein, cellular automata, Citizen Lab, cloud computing, commons-based peer production, computer age, computer vision, continuation of politics by other means, correlation does not imply causation, CRISPR, crowdsourcing, cryptocurrency, data science, deep learning, DeepMind, digital divide, digital map, disinformation, distributed ledger, Donald Trump, driverless car, easy for humans, difficult for computers, Edward Snowden, Elon Musk, en.wikipedia.org, end-to-end encryption, Erik Brynjolfsson, Ethereum, ethereum blockchain, Evgeny Morozov, fake news, Filter Bubble, future of work, Future Shock, Gabriella Coleman, Google bus, Google X / Alphabet X, Googley, industrial robot, informal economy, intangible asset, Internet of things, invention of the printing press, invention of writing, Isaac Newton, Jaron Lanier, John Markoff, Joseph Schumpeter, Kevin Kelly, knowledge economy, Large Hadron Collider, Lewis Mumford, lifelogging, machine translation, Metcalfe’s law, mittelstand, more computing power than Apollo, move fast and break things, natural language processing, Neil Armstrong, Network effects, new economy, Nick Bostrom, night-watchman state, Oculus Rift, Panopticon Jeremy Bentham, pattern recognition, payday loans, Philippa Foot, post-truth, power law, price discrimination, price mechanism, RAND corporation, ransomware, Ray Kurzweil, Richard Stallman, ride hailing / ride sharing, road to serfdom, 
Robert Mercer, Satoshi Nakamoto, Second Machine Age, selection bias, self-driving car, sexual politics, sharing economy, Silicon Valley, Silicon Valley startup, Skype, smart cities, Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia, smart contracts, Snapchat, speech recognition, Steve Bannon, Steve Jobs, Steve Wozniak, Steven Levy, tech bro, technological determinism, technological singularity, technological solutionism, the built environment, the Cathedral and the Bazaar, The Structural Transformation of the Public Sphere, The Wisdom of Crowds, Thomas L Friedman, Tragedy of the Commons, trolley problem, universal basic income, urban planning, Watson beat the top human players on Jeopardy!, work culture , working-age population, Yochai Benkler

(Think, for example, of ‘the traitorous coffee maker’ sold by Keurig that refused to brew coffee from non-Keurig brand beans.)44 Each individual limitation induced by these technologies may constitute only a small exertion of force, but the cumulative effect will be that we’re subject to a good deal of power flowing from whoever controls those technologies. The implications for freedom are discussed in the next Part. Take the famous example of the ‘trolley problem’.45 You are motoring down the freeway in a self-driving car, and a small child steps into the path of your vehicle. If you had control of the car, you would swerve to avoid the child. You know that this would cause a collision with the truck in the adjacent lane, probably killing both you and the trucker—but to preserve the life of the child, that’s a sacrifice you are willing to make. Your car, however, has different ideas.

In return for these affordances, however, we’ll necessarily sacrifice other ­freedoms. The freedom (occasionally) to drive over the speed limit. The freedom (occasionally) to make an illegal manoeuvre or park on a double yellow line. The freedom to make a journey with no record of it. Perhaps even the freedom to make moral choices, like (in the case of the trolley problem described in chapter six) whether to kill the child or the trucker. Again, I don’t seek to ­suggest that this isn’t a deal worth striking. But I do suggest that we see it for what it is: a trade-off in which our precious liberties are part of the bargain. From the perspective of freedom, there are four important differences between the power wielded by the state and that wielded by tech firms.


pages: 225 words: 70,180

Humankind: Solidarity With Nonhuman People by Timothy Morton

a long time ago in a galaxy far, far away, Anthropocene, capitalist realism, David Brooks, Georg Cantor, gravity well, Ian Bogost, invisible hand, means of production, megacity, microbiome, mirror neurons, Oklahoma City bombing, phenotype, planetary scale, Plato's cave, Richard Feynman, self-driving car, Silicon Valley, Slavoj Žižek, trolley problem, Turing test, wage slave, zero-sum game

Utilitarian holism, the holism of populations, is explosive—the whole is especially different (better or worse) than the part. There is no such thing as society! Or, specific people don’t matter! Utilitarian holism sets up a zero-sum game between the actually existing lifeform and the population. One consequence is the trolley problem: it is better to kill one person tied to the tracks by diverting the trolley than it is to kill hundreds of people on the trolley who will go off a cliff if we don’t divert the trolley. There’s the left-wing variant: talk of wholes is necessarily violent (racist, sexist, homophobic, transphobic and so on) because what exists are highly differentiated beings that are radically incommensurable.


pages: 1,351 words: 385,579

The Better Angels of Our Nature: Why Violence Has Declined by Steven Pinker

1960s counterculture, affirmative action, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, Albert Einstein, availability heuristic, behavioural economics, Berlin Wall, Boeing 747, Bonfire of the Vanities, book value, bread and circuses, British Empire, Broken windows theory, business cycle, California gold rush, Cass Sunstein, citation needed, classic study, clean water, cognitive dissonance, colonial rule, Columbine, computer age, Computing Machinery and Intelligence, conceptual framework, confounding variable, correlation coefficient, correlation does not imply causation, crack epidemic, cuban missile crisis, Daniel Kahneman / Amos Tversky, David Brooks, delayed gratification, demographic transition, desegregation, Doomsday Clock, Douglas Hofstadter, Dr. Strangelove, Edward Glaeser, en.wikipedia.org, European colonialism, experimental subject, facts on the ground, failed state, first-past-the-post, Flynn Effect, food miles, Francis Fukuyama: the end of history, fudge factor, full employment, Garrett Hardin, George Santayana, ghettoisation, Gini coefficient, global village, Golden arches theory, Great Leap Forward, Henri Poincaré, Herbert Marcuse, Herman Kahn, high-speed rail, Hobbesian trap, humanitarian revolution, impulse control, income inequality, informal economy, Intergovernmental Panel on Climate Change (IPCC), invention of the printing press, Isaac Newton, lake wobegon effect, libertarian paternalism, long peace, longitudinal study, loss aversion, Marshall McLuhan, mass incarceration, McMansion, means of production, mental accounting, meta-analysis, Mikhail Gorbachev, mirror neurons, moral panic, mutually assured destruction, Nelson Mandela, nuclear taboo, Oklahoma City bombing, open economy, Peace of Westphalia, Peter Singer: altruism, power law, QWERTY keyboard, race to the bottom, Ralph Waldo Emerson, random walk, Republic of Letters, Richard Thaler, Ronald Reagan, Rosa Parks, Saturday Night Live, security theater, 
Skinner box, Skype, Slavoj Žižek, South China Sea, Stanford marshmallow experiment, Stanford prison experiment, statistical model, stem cell, Steven Levy, Steven Pinker, sunk-cost fallacy, technological determinism, The Bell Curve by Richard Herrnstein and Charles Murray, the long tail, The Wealth of Nations by Adam Smith, theory of mind, Timothy McVeigh, Tragedy of the Commons, transatlantic slave trade, trolley problem, Turing machine, twin studies, ultimatum game, uranium enrichment, Vilfredo Pareto, Walter Mischel, WarGames: Global Thermonuclear War, WikiLeaks, women in the workforce, zero-sum game

(It also may be a response to whatever external threat would have caused a fellow animal to issue an alarm call.)247 The participants in Stanley Milgram’s famous experiment, who obeyed instructions to deliver shocks to a bogus fellow participant, were visibly distraught as they heard the shrieks of pain they were inflicting.248 Even in moral philosophers’ hypothetical scenarios like the Trolley Problem, survey-takers recoil from the thought of throwing the fat man in front of the trolley, though they know it would save five innocent lives.249 Testimony on the commission of hands-on violence in the real world is consistent with the results of laboratory studies. As we saw, humans don’t readily consummate mano a mano fisticuffs, and soldiers on the battlefield may be petrified about pulling the trigger.250 The historian Christopher Browning’s interviews with Nazi reservists who were ordered to shoot Jews at close range showed that their initial reaction was a physical revulsion to what they were doing.251 The reservists did not recollect the trauma of their first murders in the morally colored ways we might expect—neither with guilt at what they were doing, nor with retroactive excuses to mitigate their culpability.

Today, for example, people might be dumbfounded when asked whether we should burn heretics, keep slaves, whip children, or break criminals on the wheel, yet those very debates took place several centuries ago. We even saw a neuroanatomical basis for the give-and-take between intuition and reasoning in Joshua Greene’s studies of trolley problems in the brain scanner: each of these moral faculties has distinct neurobiological hubs.215 When Hume famously wrote that “reason is, and ought to be, only the slave of the passions,” he was not advising people to shoot from the hip, blow their stack, or fall head over heels for Mr. Wrong.216 He was basically making the logical point that reason, by itself, is just a means of getting from one true proposition to the next and does not care about the value of those propositions.

Response to torture warrants: Dershowitz, 2004b; Levinson, 2004a. 245. Taboo against torture is useful: Levinson, 2004a; Posner, 2004. 246. Aversiveness of conspecifics in pain: de Waal, 1996; Preston & de Waal, 2002. 247. Reasons for aversiveness of pain displays: Hauser, 2000, pp. 219–23. 248. Anxiety while hurting others: Milgram, 1974. 249. Trolley Problem: Greene & Haidt, 2002; Greene et al., 2001. 250. Aversion to direct violence: Collins, 2008. 251. Ordinary Germans: Browning, 1992. 252. Nausea not soul-searching: Baumeister, 1997, p. 211. 253. Distinguishing fiction from reality: Sperber, 2000. 254. Blunted emotions in psychopathy: Blair, 2004; Hare, 1993; Raine et al., 2000. 255.


pages: 249 words: 77,342

The Behavioral Investor by Daniel Crosby

affirmative action, Asian financial crisis, asset allocation, availability heuristic, backtesting, bank run, behavioural economics, Black Monday: stock market crash in 1987, Black Swan, book value, buy and hold, cognitive dissonance, colonial rule, compound rate of return, correlation coefficient, correlation does not imply causation, Daniel Kahneman / Amos Tversky, disinformation, diversification, diversified portfolio, Donald Trump, Dunning–Kruger effect, endowment effect, equity risk premium, fake news, feminist movement, Flash crash, haute cuisine, hedonic treadmill, housing crisis, IKEA effect, impact investing, impulse control, index fund, Isaac Newton, Japanese asset price bubble, job automation, longitudinal study, loss aversion, market bubble, market fundamentalism, mental accounting, meta-analysis, Milgram experiment, moral panic, Murray Gell-Mann, Nate Silver, neurotypical, Nick Bostrom, passive investing, pattern recognition, Pepsi Challenge, Ponzi scheme, prediction markets, random walk, Reminiscences of a Stock Operator, Richard Feynman, Richard Thaler, risk tolerance, Robert Shiller, science of happiness, Shai Danziger, short selling, South Sea Bubble, Stanford prison experiment, Stephen Hawking, Steve Jobs, stocks for the long run, sunk-cost fallacy, systems thinking, TED Talk, Thales of Miletus, The Signal and the Noise by Nate Silver, Tragedy of the Commons, trolley problem, tulip mania, Vanguard fund, When a measure becomes a target

., MOO) to be less risky than those with hard to pronounce tickers (e.g., NTT). So, rather than trying to scour your local mall for the next big investment idea, put in place a plan that diversifies across geographies and asset classes, both familiar and foreign. Don’t know what you own The trolley problem is a formulation used in many philosophy and ethics courses. A slight modification of the general form of the problem is as follows: There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them.


pages: 383 words: 92,837

Incognito: The Secret Lives of the Brain by David Eagleman

Ada Lovelace, Albert Einstein, Any sufficiently advanced technology is indistinguishable from magic, Charles Babbage, Columbine, Daniel Kahneman / Amos Tversky, delayed gratification, endowment effect, facts on the ground, impulse control, invisible hand, Isaac Newton, Jeff Hawkins, Johann Wolfgang von Goethe, out of africa, Pierre-Simon Laplace, Ralph Waldo Emerson, Robert Shiller, Rodney Brooks, Saturday Night Live, selective serotonin reuptake inhibitor (SSRI), Steven Pinker, Stuart Kauffman, subprime mortgage crisis, Thales of Miletus, trolley problem

Something about interacting with the person up close stops most people from pushing the man to his death. Why? Because that sort of personal interaction activates the emotional networks. It changes the problem from an abstract, impersonal math problem into a personal, emotional decision. When people consider the trolley problem, here’s what brain imaging reveals: In the footbridge scenario, areas involved in motor planning and emotion become active. In contrast, in the track-switch scenario, only lateral areas involved in rational thinking become active. People register emotionally when they have to push someone; when they only have to tip a lever, their brain behaves like Star Trek’s Mr.


pages: 356 words: 106,161

The Glass Half-Empty: Debunking the Myth of Progress in the Twenty-First Century by Rodrigo Aguilera

"Friedman doctrine" OR "shareholder theory", "World Economic Forum" Davos, activist fund / activist shareholder / activist investor, Alan Greenspan, Anthropocene, availability heuristic, barriers to entry, basic income, benefit corporation, Berlin Wall, Bernie Madoff, Bernie Sanders, bitcoin, Boris Johnson, Branko Milanovic, Bretton Woods, Brexit referendum, Capital in the Twenty-First Century by Thomas Piketty, capitalist realism, carbon footprint, Carmen Reinhart, centre right, clean water, cognitive bias, collapse of Lehman Brothers, Colonization of Mars, computer age, Corn Laws, corporate governance, corporate raider, creative destruction, cryptocurrency, cuban missile crisis, David Graeber, David Ricardo: comparative advantage, death from overwork, decarbonisation, deindustrialization, Deng Xiaoping, Doha Development Round, don't be evil, Donald Trump, Doomsday Clock, Dunning–Kruger effect, Elon Musk, European colonialism, fake news, Fall of the Berlin Wall, first-past-the-post, Francis Fukuyama: the end of history, fundamental attribution error, gig economy, Gini coefficient, Glass-Steagall Act, Great Leap Forward, green new deal, Hans Rosling, housing crisis, income inequality, income per capita, index fund, intangible asset, Intergovernmental Panel on Climate Change (IPCC), invisible hand, Jean Tirole, Jeff Bezos, Jeremy Corbyn, Jevons paradox, job automation, job satisfaction, John Maynard Keynes: Economic Possibilities for our Grandchildren, joint-stock company, Joseph Schumpeter, karōshi / gwarosa / guolaosi, Kenneth Rogoff, Kickstarter, lake wobegon effect, land value tax, Landlord’s Game, late capitalism, liberal capitalism, long peace, loss aversion, low interest rates, Mark Zuckerberg, market fundamentalism, means of production, meta-analysis, military-industrial complex, Mont Pelerin Society, moral hazard, moral panic, neoliberal agenda, Network effects, North Sea oil, Northern Rock, offshore financial centre, opioid epidemic / opioid crisis, 
Overton Window, Pareto efficiency, passive investing, Peter Thiel, plutocrats, principal–agent problem, profit motive, public intellectual, purchasing power parity, race to the bottom, rent-seeking, risk tolerance, road to serfdom, Robert Shiller, Robert Solow, savings glut, Scientific racism, secular stagnation, Silicon Valley, Silicon Valley ideology, Slavoj Žižek, Social Justice Warrior, Social Responsibility of Business Is to Increase Its Profits, sovereign wealth fund, Stanislav Petrov, Steven Pinker, structural adjustment programs, surveillance capitalism, tail risk, tech bro, TED Talk, The Spirit Level, The Wealth of Nations by Adam Smith, too big to fail, trade liberalization, transatlantic slave trade, trolley problem, unbiased observer, universal basic income, Vilfredo Pareto, Washington Consensus, Winter of Discontent, Y2K, young professional, zero-sum game

People are said to be irrational when they behave in ways that would appear inconsistent with logical or factual decision-making, relying instead on intuitive or emotional impulses, or other forms of motivated reasoning. Put in the simplest dichotomy: Facts good. Feelings bad. Reason, however, is a far more complex epistemological process, of which rationality is just one of many components. Take, for example, the famous “trolley problem” in philosophy, which in its original conception goes something like this: Imagine that you are on a street and you see a trolley which has lost control of its brakes. It is hurtling down at such speed that it will inevitably kill five pedestrians who are crossing the tracks. However, in front of you is a lever which will divert the trolley into a second set of tracks where only one pedestrian is crossing, but who will also be killed.


pages: 412 words: 115,266

The Moral Landscape: How Science Can Determine Human Values by Sam Harris

Albert Einstein, banking crisis, Bayesian statistics, behavioural economics, cognitive bias, cognitive load, end world poverty, endowment effect, energy security, experimental subject, framing effect, higher-order functions, hindsight bias, impulse control, John Nash: game theory, language acquisition, longitudinal study, loss aversion, meta-analysis, mirror neurons, Monty Hall problem, out of africa, Paradox of Choice, pattern recognition, peak-end rule, placebo effect, Ponzi scheme, public intellectual, Richard Feynman, risk tolerance, scientific worldview, stem cell, Stephen Hawking, Steven Pinker, TED Talk, the scientific method, theory of mind, traumatic brain injury, trolley problem, ultimatum game, World Values Survey

Theory-based Bayesian models of inductive reasoning. In A. Feeney & E. Heit (Eds.), Inductive reasoning: Experimental, developmental, and computational approaches (pp. 167–204). Cambridge, UK: Cambridge University Press. Teresi, D. (1990). The lone ranger of quantum mechanics. New York Times. Thomson, J. J. (1976). Killing, letting die, and the trolley problem. The Monist, 59 (2), 204–217. Tiihonen, J., Rossi, R., Laakso, M. P., Hodgins, S., Testa, C., Perez, J., et al. (2008). Brain anatomy of persistent violent offenders: More rather than less. Psychiatry Res, 163 (3), 201–212. Tom, S. M., Fox, C. R., Trepel, C., & Poldrack, R. A. (2007). The neural basis of loss aversion in decision-making under risk.


pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again by Eric Topol

"World Economic Forum" Davos, 23andMe, Affordable Care Act / Obamacare, AI winter, Alan Turing: On Computable Numbers, with an Application to the Entscheidungsproblem, algorithmic bias, AlphaGo, Apollo 11, artificial general intelligence, augmented reality, autism spectrum disorder, autonomous vehicles, backpropagation, Big Tech, bioinformatics, blockchain, Cambridge Analytica, cloud computing, cognitive bias, Colonization of Mars, computer age, computer vision, Computing Machinery and Intelligence, conceptual framework, creative destruction, CRISPR, crowdsourcing, Daniel Kahneman / Amos Tversky, dark matter, data science, David Brooks, deep learning, DeepMind, Demis Hassabis, digital twin, driverless car, Elon Musk, en.wikipedia.org, epigenetics, Erik Brynjolfsson, fake news, fault tolerance, gamification, general purpose technology, Geoffrey Hinton, George Santayana, Google Glasses, ImageNet competition, Jeff Bezos, job automation, job satisfaction, Joi Ito, machine translation, Mark Zuckerberg, medical residency, meta-analysis, microbiome, move 37, natural language processing, new economy, Nicholas Carr, Nick Bostrom, nudge unit, OpenAI, opioid epidemic / opioid crisis, pattern recognition, performance metric, personalized medicine, phenotype, placebo effect, post-truth, randomized controlled trial, recommendation engine, Rubik’s Cube, Sam Altman, self-driving car, Silicon Valley, Skinner box, speech recognition, Stephen Hawking, techlash, TED Talk, text mining, the scientific method, Tim Cook: Apple, traumatic brain injury, trolley problem, War on Poverty, Watson beat the top human players on Jeopardy!, working-age population

There are two fundamental levels of the ethics of AI: machine ethics, which refers to the AI systems per se, and the wider domain, not specific to the algorithms. The prototypical example of machine ethics involves how driverless cars handle the dilemma of choosing between evils in the case of an impending accident, when no matter how it responds, people are going to die. It’s the modern-day version of the trolley problem introduced more than fifty years ago. Jean-Francois Bonnefon and colleagues examined the driverless car dilemma in depth using simulations and input from more than 1,900 people.59 In each of the three scenarios (Figure 5.1), there is no good choice; it’s just a matter of who and how many people are killed, whether the car’s passenger, a pedestrian, or several of them.


pages: 533 words: 125,495

Rationality: What It Is, Why It Seems Scarce, Why It Matters by Steven Pinker

affirmative action, Albert Einstein, autonomous vehicles, availability heuristic, Ayatollah Khomeini, backpropagation, basic income, behavioural economics, belling the cat, Black Lives Matter, butterfly effect, carbon tax, Cass Sunstein, choice architecture, classic study, clean water, Comet Ping Pong, coronavirus, correlation coefficient, correlation does not imply causation, COVID-19, critical race theory, crowdsourcing, cuban missile crisis, Daniel Kahneman / Amos Tversky, data science, David Attenborough, deep learning, defund the police, delayed gratification, disinformation, Donald Trump, Dr. Strangelove, Easter island, effective altruism, en.wikipedia.org, Erdős number, Estimating the Reproducibility of Psychological Science, fake news, feminist movement, framing effect, George Akerlof, George Floyd, germ theory of disease, high batting average, if you see hoof prints, think horses—not zebras, index card, Jeff Bezos, job automation, John Nash: game theory, John von Neumann, libertarian paternalism, Linda problem, longitudinal study, loss aversion, Mahatma Gandhi, meta-analysis, microaggression, Monty Hall problem, Nash equilibrium, New Journalism, Paul Erdős, Paul Samuelson, Peter Singer: altruism, Pierre-Simon Laplace, placebo effect, post-truth, power law, QAnon, QWERTY keyboard, Ralph Waldo Emerson, randomized controlled trial, replication crisis, Richard Thaler, scientific worldview, selection bias, social discount rate, social distancing, Social Justice Warrior, Stanford marshmallow experiment, Steve Bannon, Steven Pinker, sunk-cost fallacy, TED Talk, the scientific method, Thomas Bayes, Tragedy of the Commons, trolley problem, twin studies, universal basic income, Upton Sinclair, urban planning, Walter Mischel, yellow journalism, zero-sum game

., 168 representativeness heuristic, 27, 155–56 Stone, Oliver, JFK, 303 Stoppard, Tom, Jumpers, 44–45, 66 straw man, 88, 291 Styron, William, Sophie’s Choice, 184 subjective reality, claims for, 39 subjectivist interpretation of probability, 115, 116, 151, 194–96 sucker’s payoff, 239, 242, 244 suicide, 156 Suits, Bernard, 346n28 sunk cost fallacy, 237–38, 320, 323 Sunstein, Cass, 56 Superfund sites, 191 superstitions the cluster illusion and, 147 and coincidences, prevalence of, 143–44, 287 confirmation bias and, 14, 142 openness to evidence and, 311 prevalence of, 285–86, 354–55n8 syllogisms, 12, 81 synchronicity, 144, 305 System 1 & 2 defined, 10 equality and System 2, 108–9 the Monty Hall dilemma and, 20 rational choice and, 187 reflective and unreflective thinking, 8–10, 311 visual illusions and, 30 taboos and communal outrages, 123–25 definition, 62 forbidden base rates, 62, 163–66 heretical counterfactuals, 64–65 taboo on discussing taboo, 166 taboo tradeoffs, 62–64, 184, 350n15 victim narratives, 124 taboo tradeoffs, 62–64, 184, 350n15 talent and practice, 272–73, 277–78, 278 Talking Heads, 35 Talleyrand, Charles-Maurice de, 337 tautologies, 80 See also begging the question; circular explanations taxicab problem, 155, 168, 170, 171 television, 216, 238–39, 267–68, 303, 305 temporal discounting, 47–56, 320 temporal logic, 84 temporal stability, 258 tendentious presuppositions, 89 terrorism availability bias and, 122 Bayesian reasoning and prediction of, 162–63 man carrying own bomb joke, 127– 28, 138 media coverage and, 126 paradoxical tactics and, 60 profiling and, 156–57 torture of terrorists, 218 Tetlock, Philip, 62–65, 162–66 Texas sharpshooter fallacy, 142–46, 160, 321 Thaler, Richard, 56 theocracies, 43 theoretical reason, 37 #thedress, 32 threats, and paradoxical tactics, 58, 60 Three Cards in a Hat, 138 The Threepenny Opera (Brecht), 121 time, San people and, 3 See also goals—time-frame conflicts Tit for Tat strategy, 241–42, 243–44 Tooby, J., 169 
Toplak, M. F., 356–57n67 trade and investment, international, 327 Tragedy of the Carbon Commons, 242–44, 328 Tragedy of the Commons, 242, 243–44, 315 Tragedy of the Rationality Commons, 298, 315–17 Trivers, Robert, 241 trolley problem, 97 Trump, Donald, 6, 60, 82–83, 88, 92, 126, 130–31, 145, 245, 283–84, 284, 285, 288, 303, 306, 310, 312–13, 313 truth tables, 76–78 tu quoque (what-aboutery), 89 Turkmenistan, 245–47, 251 Tversky, Amos, 7, 25–29, 119, 131, 146, 154–55, 156, 186–87, 190–95, 196, 254, 342n15, 349–50nn6,27 Twain, Mark, 201 Twitter, 313, 316, 321–23 uncertainty, distinguished from risk, 177 United Nations, 327 unit homogeneity, 258 universal basic income (UBI), 85–87 universal realism, 300–301 universities academic freedom in, 41 benefits of college education, 264 college admissions, 262, 263, 266–67, 294 sexual misconduct policies, 218 suppression of opinions in, 43, 313–14 viewpoint diversity, lack of, 313–14 See also academia; education unreflective thinking, 8–10, 311 See also System 1 & 2 urban legends, 287, 306, 308 Uscinski, Joseph, 287 US Constitution, 75, 333 US Department of Education, 218 USSR, 60, 89, 122 vaccines, 284, 325.


AI 2041 by Kai-Fu Lee, Chen Qiufan

3D printing, Abraham Maslow, active measures, airport security, Albert Einstein, AlphaGo, Any sufficiently advanced technology is indistinguishable from magic, artificial general intelligence, augmented reality, autonomous vehicles, basic income, bitcoin, blockchain, blue-collar work, Cambridge Analytica, carbon footprint, Charles Babbage, computer vision, contact tracing, coronavirus, corporate governance, corporate social responsibility, COVID-19, CRISPR, cryptocurrency, DALL-E, data science, deep learning, deepfake, DeepMind, delayed gratification, dematerialisation, digital map, digital rights, digital twin, Elon Musk, fake news, fault tolerance, future of work, Future Shock, game design, general purpose technology, global pandemic, Google Glasses, Google X / Alphabet X, GPT-3, happiness index / gross national happiness, hedonic treadmill, hiring and firing, Hyperloop, information security, Internet of things, iterative process, job automation, language acquisition, low earth orbit, Lyft, Maslow's hierarchy, mass immigration, mirror neurons, money: store of value / unit of account / medium of exchange, mutually assured destruction, natural language processing, Neil Armstrong, Nelson Mandela, OpenAI, optical character recognition, pattern recognition, plutocrats, post scarcity, profit motive, QR code, quantitative easing, Richard Feynman, ride hailing / ride sharing, robotic process automation, Satoshi Nakamoto, self-driving car, seminal paper, Silicon Valley, smart cities, smart contracts, smart transportation, Snapchat, social distancing, speech recognition, Stephen Hawking, synthetic biology, telemarketer, Tesla Model S, The future is already here, trolley problem, Turing test, uber lyft, universal basic income, warehouse automation, warehouse robotics, zero-sum game

NON-TECHNOLOGY ISSUES THAT MAY IMPEDE L5

In order to make autonomous vehicles pervasive, a number of challenges will need to be overcome, including ethics, liability issues, and sensationalism. This is to be expected because there are millions of lives at stake, not to mention many industries and hundreds of millions of jobs. There will be circumstances that force AVs to make agonizing ethical decisions. Perhaps the most famous ethical dilemma is “the trolley problem,” which boils down to a scenario in which a decision would need to be made between taking action and killing person A, or taking no action and killing persons B and C. If you think the answer is obvious, what if person A is a child? What if person A is your child? What if the car belongs to you, and person A is your child?


Lifespan: Why We Age—and Why We Don't Have To by David A. Sinclair, Matthew D. Laplante

Albert Einstein, Albert Michelson, Anthropocene, anti-communist, Any sufficiently advanced technology is indistinguishable from magic, Atul Gawande, basic income, Berlin Wall, Bernie Sanders, biofilm, Biosphere 2, blockchain, British Empire, caloric restriction, carbon footprint, Charles Babbage, Claude Shannon: information theory, clean water, creative destruction, CRISPR, dark matter, dematerialisation, discovery of DNA, double helix, Drosophila, Easter island, Edward Jenner, en.wikipedia.org, epigenetics, experimental subject, Fall of the Berlin Wall, Fellow of the Royal Society, global pandemic, Grace Hopper, helicopter parent, income inequality, invention of the telephone, Isaac Newton, John Snow's cholera map, Kevin Kelly, Khan Academy, labor-force participation, life extension, Louis Pasteur, McMansion, Menlo Park, meta-analysis, microbiome, mouse model, mutually assured destruction, Paul Samuelson, personalized medicine, phenotype, Philippa Foot, placebo effect, plutocrats, power law, quantum entanglement, randomized controlled trial, Richard Feynman, ride hailing / ride sharing, self-driving car, seminal paper, Skype, stem cell, Stephen Hawking, Steven Pinker, TED Talk, the scientific method, Thomas Kuhn: the structure of scientific revolutions, Thomas Malthus, Tim Cook: Apple, Tragedy of the Commons, trolley problem, union organizing, universal basic income, WeWork, women in the workforce, zero-sum game

The chairperson of the Joint Chiefs of Staff tells you that six US Air Force F-22 Raptor fighters are tracking the plane as it circles over the Pacific Ocean. The pilots have it locked in; their missiles are ready. The plane is running out of gas. The fate of the passengers, and the entire United States, rests upon your orders. What do you do? This, of course, is a “trolley problem,” an ethical thought experiment, of the type popularized by the philosopher Philippa Foot, that pits our moral duty not to inflict harm on others against our social responsibility to save a greater number of lives. It’s also, however, a handy metaphor, because the highly contagious disease the passengers are carrying is, as you doubtless have noticed, nothing more than a faster-acting version of aging.


pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

agricultural Revolution, AI winter, Albert Einstein, algorithmic trading, anthropic principle, Anthropocene, anti-communist, artificial general intelligence, autism spectrum disorder, autonomous vehicles, backpropagation, barriers to entry, Bayesian statistics, bioinformatics, brain emulation, cloud computing, combinatorial explosion, computer vision, Computing Machinery and Intelligence, cosmological constant, dark matter, DARPA: Urban Challenge, data acquisition, delayed gratification, Demis Hassabis, demographic transition, different worldview, Donald Knuth, Douglas Hofstadter, driverless car, Drosophila, Elon Musk, en.wikipedia.org, endogenous growth, epigenetics, fear of failure, Flash crash, Flynn Effect, friendly AI, general purpose technology, Geoffrey Hinton, Gödel, Escher, Bach, hallucination problem, Hans Moravec, income inequality, industrial robot, informal economy, information retrieval, interchangeable parts, iterative process, job automation, John Markoff, John von Neumann, knowledge worker, Large Hadron Collider, longitudinal study, machine translation, megaproject, Menlo Park, meta-analysis, mutually assured destruction, Nash equilibrium, Netflix Prize, new economy, Nick Bostrom, Norbert Wiener, NP-complete, nuclear winter, operational security, optical character recognition, paperclip maximiser, pattern recognition, performance metric, phenotype, prediction markets, price stability, principal–agent problem, race to the bottom, random walk, Ray Kurzweil, recommendation engine, reversible computing, search costs, social graph, speech recognition, Stanislav Petrov, statistical model, stem cell, Stephen Hawking, Strategic Defense Initiative, strong AI, superintelligent machines, supervolcano, synthetic biology, technological singularity, technoutopianism, The Coming Technological Singularity, The Nature of the Firm, Thomas Kuhn: the structure of scientific revolutions, time dilation, Tragedy of the Commons, transaction costs, trolley problem, Turing machine, Vernor Vinge, WarGames: Global Thermonuclear War, Watson beat the top human players on Jeopardy!, World Values Survey, zero-sum game

Similarly, although some philosophers have spent entire careers trying to carefully formulate deontological systems, new cases and consequences occasionally come to light that necessitate revisions. For example, deontological moral philosophy has in recent years been reinvigorated through the discovery of a fertile new class of philosophical thought experiments, “trolley problems,” which reveal many subtle interactions among our intuitions about the moral significance of the acts/omissions distinction, the distinction between intended and unintended consequences, and other such matters; see, e.g., Kamm (2007). 26. Armstrong (2010). 27. As a rule of thumb, if one plans to use multiple safety mechanisms to contain an AI, it may be wise to work on each one as if it were intended to be the sole safety mechanism and as if it were therefore required to be individually sufficient.


pages: 743 words: 201,651

Free Speech: Ten Principles for a Connected World by Timothy Garton Ash

"World Economic Forum" Davos, A Declaration of the Independence of Cyberspace, Aaron Swartz, activist lawyer, Affordable Care Act / Obamacare, Andrew Keen, Apple II, Ayatollah Khomeini, battle of ideas, Berlin Wall, bitcoin, British Empire, Cass Sunstein, Chelsea Manning, citizen journalism, Citizen Lab, Clapham omnibus, colonial rule, critical race theory, crowdsourcing, data science, David Attenborough, digital divide, digital rights, don't be evil, Donald Davies, Douglas Engelbart, dual-use technology, Edward Snowden, Etonian, European colonialism, eurozone crisis, Evgeny Morozov, failed state, Fall of the Berlin Wall, Ferguson, Missouri, Filter Bubble, financial independence, Firefox, Galaxy Zoo, George Santayana, global village, Great Leap Forward, index card, Internet Archive, invention of movable type, invention of writing, Jaron Lanier, jimmy wales, John Markoff, John Perry Barlow, Julian Assange, Laura Poitras, machine readable, machine translation, Mark Zuckerberg, Marshall McLuhan, Mary Meeker, mass immigration, megacity, mutually assured destruction, national security letter, Nelson Mandela, Netflix Prize, Nicholas Carr, obamacare, Open Library, Parler "social media", Peace of Westphalia, Peter Thiel, power law, pre–internet, profit motive, public intellectual, RAND corporation, Ray Kurzweil, Ronald Reagan, semantic web, Sheryl Sandberg, Silicon Valley, Simon Singh, Snapchat, social graph, Stephen Fry, Stephen Hawking, Steve Jobs, Steve Wozniak, Streisand effect, technological determinism, TED Talk, The Death and Life of Great American Cities, The Wisdom of Crowds, Tipper Gore, trolley problem, Turing test, We are Anonymous. We are Legion, WikiLeaks, World Values Survey, Yochai Benkler, Yom Kippur War, yottabyte

In a study conducted with 1,800 undecided voters in India’s 2014 parliamentary election, he claimed to have shifted votes by an average of 12.5 percent to particular candidates simply by improving their placings in search results found by the individual voter.52 An extreme example of algorithmic choice could be provided by Google’s computer-driven car. An old chestnut for students of ethics is the ‘trolley problem’: you control the railway points and have to decide whether the trolley will turn left and run over one person or turn right and kill five. Now suppose this automated Google car, steered by computer, faces a similar choice. It cannot stop in time. It has to run over either that grey-haired old woman on the left or that funky young man on the right.


pages: 669 words: 210,153

Tools of Titans: The Tactics, Routines, and Habits of Billionaires, Icons, and World-Class Performers by Timothy Ferriss

Abraham Maslow, Adam Curtis, Airbnb, Alexander Shulgin, Alvin Toffler, An Inconvenient Truth, artificial general intelligence, asset allocation, Atul Gawande, augmented reality, back-to-the-land, Ben Horowitz, Bernie Madoff, Bertrand Russell: In Praise of Idleness, Beryl Markham, billion-dollar mistake, Black Swan, Blue Bottle Coffee, Blue Ocean Strategy, blue-collar work, book value, Boris Johnson, Buckminster Fuller, business process, Cal Newport, call centre, caloric restriction, Carl Icahn, Charles Lindbergh, Checklist Manifesto, cognitive bias, cognitive dissonance, Colonization of Mars, Columbine, commoditize, correlation does not imply causation, CRISPR, David Brooks, David Graeber, deal flow, digital rights, diversification, diversified portfolio, do what you love, Donald Trump, effective altruism, Elon Musk, fail fast, fake it until you make it, fault tolerance, fear of failure, Firefox, follow your passion, fulfillment center, future of work, Future Shock, Girl Boss, Google X / Alphabet X, growth hacking, Howard Zinn, Hugh Fearnley-Whittingstall, Jeff Bezos, job satisfaction, Johann Wolfgang von Goethe, John Markoff, Kevin Kelly, Kickstarter, Lao Tzu, lateral thinking, life extension, lifelogging, Mahatma Gandhi, Marc Andreessen, Mark Zuckerberg, Mason jar, Menlo Park, microdosing, Mikhail Gorbachev, MITM: man-in-the-middle, Neal Stephenson, Nelson Mandela, Nicholas Carr, Nick Bostrom, off-the-grid, optical character recognition, PageRank, Paradox of Choice, passive income, pattern recognition, Paul Graham, peer-to-peer, Peter H. Diamandis: Planetary Resources, Peter Singer: altruism, Peter Thiel, phenotype, PIHKAL and TIHKAL, post scarcity, post-work, power law, premature optimization, private spaceflight, QWERTY keyboard, Ralph Waldo Emerson, Ray Kurzweil, recommendation engine, rent-seeking, Richard Feynman, risk tolerance, Ronald Reagan, Salesforce, selection bias, sharing economy, side project, Silicon Valley, skunkworks, Skype, Snapchat, Snow Crash, social graph, software as a service, software is eating the world, stem cell, Stephen Hawking, Steve Jobs, Stewart Brand, superintelligent machines, TED Talk, Tesla Model S, The future is already here, the long tail, The Wisdom of Crowds, Thomas L Friedman, traumatic brain injury, trolley problem, vertical integration, Wall-E, Washington Consensus, We are as Gods, Whole Earth Catalog, Y Combinator, zero-sum game

I don’t always get a chance to do it, but I find that it clears the head in a very useful way.” more in Audio Listen to episode #87 of The Tim Ferriss Show (fourhourworkweek.com/87) for Sam’s thoughts on the following: What books would you recommend everyone read? (6:55) A thought experiment worth experiencing: The Trolley Problem (55:25) * * * Caroline Paul Caroline Paul (TW: @carowriter, carolinepaul.com) is the author of four published books. Her latest is the New York Times bestseller The Gutsy Girl: Escapades for Your Life of Epic Adventure. Once a young scaredy-cat, Caroline decided that fear got in the way of the life she wanted.