How to stay in charge in a world populated by algorithms that beat us in chess, find us romantic partners, and tell us to “turn right in 500 yards.”
Doomsday prophets of technology predict that robots will take over the world, leaving humans behind in the dust. Tech industry boosters think replacing people with software might make the world a better place—while tech industry critics warn darkly about surveillance capitalism. Despite their differing views of the future, they all seem to agree: machines will soon do everything better than humans. In How to Stay Smart in a Smart World, Gerd Gigerenzer shows why that’s not true, and tells us how we can stay in charge in a world populated by algorithms.
Machines powered by artificial intelligence are good at some things (playing chess), but not others (life-and-death decisions, or anything involving uncertainty). Gigerenzer explains why algorithms often fail at finding us romantic partners (love is not chess), why self-driving cars fall prey to the Russian Tank Fallacy, and how judges and police rely increasingly on nontransparent “black box” algorithms to predict whether a criminal defendant will reoffend or show up in court. He invokes Black Mirror, considers the privacy paradox (people want privacy but give their data away), and explains that social media get us hooked by programming intermittent reinforcement in the form of the “like” button. We shouldn’t trust smart technology unconditionally, Gigerenzer tells us, but we shouldn’t fear it unthinkingly, either.
"Sinopsis" puede pertenecer a otra edición de este libro.
Gerd Gigerenzer is Director of the Harding Center for Risk Literacy at the University of Potsdam, Director Emeritus at the Max Planck Institute for Human Development, and Partner of Simply Rational—the Institute for Decisions. He is the author of Calculated Risks, Gut Feelings, Risk Savvy, and How to Stay Smart in a Smart World (MIT Press).
Technological solutionism is the belief that every societal problem is a “bug” that needs a “fix” through an algorithm. Technological paternalism is its natural consequence: government by algorithms. It doesn’t need to peddle the fiction of a superintelligence; it instead expects us to accept that corporations and governments record where we are, what we are doing, and with whom, minute by minute, and also to trust that these records will make the world a better place. As Google’s former CEO Eric Schmidt explains, “The goal is to enable Google users to be able to ask the question such as ‘What shall I do tomorrow?’ and ‘What job shall I take?’”23 Quite a few popular writers fuel our awe of technological paternalism by telling stories that are, at best, economical with the truth.24 More surprisingly, even some influential researchers see no limits to what AI can do, arguing that the human brain is merely an inferior computer and that we should replace humans with algorithms whenever possible.25 AI will tell us what to do, and we should listen and follow. We just need to wait a bit until AI gets smarter. Oddly, the message is never that people need to become smarter as well.
I have written this book to enable people to gain a realistic appreciation of what AI can do and how it is used to influence us. We do not need more paternalism; we have had more than our share in past centuries. Nor do we need technophobic panic, which is revived with every breakthrough technology. When trains were invented, doctors warned that passengers would die from suffocation.26 When radio became widely available, the concern was that listening too much would harm children because they need repose, not jazz.27 Instead of fright or hype, the digital world needs better-informed and healthily critical citizens who want to keep their lives in their own hands.
23. Daniel and Palmer, “Google’s Goal.”
24. Overstated claims about algorithms without supporting evidence can be found, for instance, in Harari, Homo Deus. I provide examples in chapter 11.
25. See the spectrum of opinions in Brockman, Possible Minds. Also, Kahneman (“Comment,” 609) poses the question whether AI can eventually do whatever people can do: “Will there be anything that is reserved for human beings? Frankly, I don’t see any reason to set limits on what AI can do.” And: “You should replace humans by algorithms whenever possible” (610).
26. Gigerenzer, Risk Savvy.
27. On fear cycles, see Orben, “Sisyphean Cycle.”
"Sobre este título" puede pertenecer a otra edición de este libro.
Bookseller: mountain, GEORGETOWN, CO, United States of America
Paperback. Condition: Acceptable. A handful of pages are creased; otherwise in good shape. Item ref. no.: mon0000011659
Quantity available: 1
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: As New. Unread book in perfect condition. Item ref. no.: 46282367
Quantity available: 6
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: New. Item ref. no.: 46282367-n
Quantity available: 6
Bookseller: Grand Eagle Retail, Bensenville, IL, United States of America
Paperback. Condition: New. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Item ref. no.: 9780262548441
Quantity available: 1
Bookseller: Rarewaves USA, OSWEGO, IL, United States of America
Paperback. Condition: New. Item ref. no.: LU-9780262548441
Quantity available: 8
Bookseller: Massive Bookshop, Greenfield, MA, United States of America
Paperback. Condition: New. Item ref. no.: 9780262548441
Quantity available: 10
Bookseller: Rarewaves.com USA, London, LONDO, United Kingdom
Paperback. Condition: New. Item ref. no.: LU-9780262548441
Quantity available: 8
Bookseller: Magers and Quinn Booksellers, Minneapolis, MN, United States of America
Paperback. Condition: New. Brand new. Item ref. no.: 1507384
Quantity available: 1
Bookseller: Kennys Bookshop and Art Galleries Ltd., Galway, GY, Ireland
Condition: New. 2025. Paperback. Item ref. no.: V9780262548441
Quantity available: 15
Bookseller: Biblios, Frankfurt am Main, HESSE, Germany
Condition: New. Item ref. no.: 18398973194
Quantity available: 3