Table of Contents
- Introduction
- Life 3.0: Being Human in the Age of Artificial Intelligence
- Superintelligence: Paths, Dangers, Strategies
- The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma
- Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
- Human Compatible: Artificial Intelligence and the Problem of Control
- The Alignment Problem: Machine Learning and Human Values
- Artificial Intelligence: A Modern Approach (Pearson Series in Artificial Intelligence)
- Exploring the Most Interesting Books on Artificial Intelligence
Introduction
2024 has been declared the year of AI, when we supposedly see even more progress and a transition from exploration to execution. I don’t know to what extent this is true, but I’d like to be prepared for it, at least mentally. So today, we’ll discuss some of the most interesting books on artificial intelligence, written by well-known experts. At the end of the article, I’ll give you links to all the books so you can go straight to them.
Life 3.0: Being Human in the Age of Artificial Intelligence
The first book I will discuss is Life 3.0 by Max Tegmark, a physicist and machine learning researcher. In this book, Tegmark describes three tiers of life since the start of the universe. Life 1.0 is simple, biological life, like a little bacterium that can change neither its hardware (its body) nor its software (its behavior). Life 2.0 is cultural life: a species that still cannot redesign its biological body, but can design its own software by learning new skills, languages, and ideas. The author argues that this flexibility has given humans the power to dominate the planet.
But our brains are still essentially the same as those of our ancestors thousands of years ago. Next comes Life 3.0, a technological species that can design both its software and its hardware, potentially causing an intelligence explosion. There’s a lot of debate about what this future, artificial general intelligence, Life 3.0, or whatever you want to call it, will look like. A few people believe in the extreme scenarios, good or awful: either we all die within a few years at the hands of AI, or we will live in a heaven-like world thanks to AI.
Most people fall into two main camps, as Tegmark calls them:
- Techno-skeptics, who believe AGI is so complex that it won’t happen for hundreds of years, so there’s no need to worry about it.
- The beneficial-AI movement, which believes human-level AGI is possible within this century and that a good outcome is not guaranteed.
We need to work hard for it. You might still remember that around this time last year, there was a heated debate about the open letter calling for a pause on giant AI experiments, which many well-known figures in the field signed. Reading this book made me realize that the techno-skeptics who believed this letter was unnecessary are not reckless people who don’t care about the risks.
It’s just that they have a much longer timeline in mind. Andrew Ng put it this way: fearing the rise of killer robots is like worrying about overpopulation on Mars. Yann LeCun also thinks today’s LLMs are still too stupid to be concerned about. On the other hand, the people who worry about AI risk are not necessarily AI doomers; they simply have a closer timeline in mind for when AGI will happen. So, there’s no consensus on how fast things will go.
The book also discusses AI’s impact on the military, healthcare, and finance, as well as why AI safety is complex and deserves more research. It also walks through various AI aftermath scenarios, from the best to the most absurd: should we have an AI protector god, an enslaved god, or a 1984-style surveillance world? I found it entertaining and thought-provoking at the same time. My favourite takeaway from this book is that asking “what will happen?” is asking the wrong question.
The better question is: what should happen? We do have the power to influence and shape our future, so it’s essential to figure out what we want. What kind of world do we want to live in? Do we want complete job automation? Who should be in control of society: humans, AI, or cyborgs? If you enjoy these high-level discussions and want a bird’s-eye view of all things AI-related, I highly recommend this book. It’s exceptionally well-written, easy to read, and very insightful.
Superintelligence: Paths, Dangers, Strategies
The next book I will discuss is another classic, Superintelligence by Nick Bostrom. The central idea of this book is that in the grand spectrum of intelligence, the distance between a village idiot and Einstein is relatively tiny. Once AI passes the chimpanzee and dumb-human stages, it can become much more intelligent than us. At a specific crossover point, the AI system itself starts driving further improvements. This is why Bostrom believes that the takeoff to superintelligence, if it happens, is more likely to be fast and explosive.
One reason to believe an intelligence explosion is more likely than a slow process is that machine intelligence can benefit from breakthroughs in other fields in rather unexpected ways. That’s not to mention quantum computing, or the possibility that one day machines might develop new ideas to improve themselves or rewrite themselves completely. Another interesting point in this book is that there are two ways to design superintelligent machines.
What we are currently doing with AI is primarily teaching computers to imitate human thinking by training large neural networks on a lot of data. The alternative would be to get the computer to simulate the human brain, not just imitate it. This idea of whole brain emulation is about building a computer that can learn like a child and eventually get smarter through interacting with the real world. It sounds a bit like the movie Minority Report, where it’s possible to precisely predict who people will grow into and, from there, even foresee future crimes.
The only problem with that idea is that we know very little about how our brains and consciousness work, and no one knows whether it would even be possible, or wise, to emulate a human brain without understanding consciousness. As Stuart Russell put it in his book: on consciousness, “we know nothing, so I’m going to say nothing.” No one in AI is working on making machines conscious, nor would anyone know where to start.
So that sounds like a long shot. But today, with GPT-4 and many powerful language models, I feel we’re making good progress on the first route, imitating human thinking. This book also discusses why we need to prioritize AI safety, as it’s never guaranteed that a superintelligent AI would be benevolent. There are many different ways a superintelligent AI might not be aligned with human values, and the book describes many failure modes where things could go wrong.
One is instrumental convergence, which means an AI agent with an unbounded but seemingly harmless goal can act in surprisingly harmful ways. For example, an AI tasked with making paper clips might turn us all into paper clips to maximize production. These scenarios are mostly thought experiments, but they’re fascinating to read and make a lot of sense. Another good point of this book is that global collaboration is the key to making AI safe and beneficial, while an arms race or secret government programs will likely lead to terrible outcomes.
This point hits home, even though the book was written over ten years ago. It’s a good sign that we have many open-source large language models that anyone can use and contribute to. You can now download a whole uncensored large language model for free from the internet. These open-source projects will hopefully help startups compete and drive progress toward safer AI. Well, provided everyone has a kind heart and uses these models for good.
The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma
The next book is The Coming Wave by Mustafa Suleyman. He’s also the co-founder of Google DeepMind. He thinks we are approaching a threshold in human history where everything is about to change, and we are unprepared. This work is one of the newest books on AI that also covers recent breakthroughs like robotics and large language models. The book is divided into four parts. The first two parts talk about the endless acceleration of technology throughout human history.
The idea of this book is that technologies and inventions come and go like waves and shape the world we live in, from the printing press, electricity, steam engines, cars, and computers to machine intelligence. Many unstoppable incentives and forces push this progress: not just financial and political incentives, but also human ego, curiosity, and the desire to win the race, to help the world, or to change it, whatever it might be.
So what’s the coming wave? It will include advanced AI, quantum computing, and biotechnology. The book also discusses a few features that distinguish this wave from previous waves of technology in human history. One of the main features is that it’s happening at an accelerating pace. It will be a general-purpose technology, just like electricity, but much more potent, because it can become autonomous and do things by itself.
The next part of this book describes different states of failure: what are the consequences of these technologies for nation-states and democracy? If the state cannot contain this wave, the nation-state will collapse. Reading this chapter made me realize how fragile our world is. Imagine how new AI technology makes it possible to create the next generation of digital weapons.
Think of the sophisticated cyber-attacks we see in Black Mirror, or imagine a world where deepfakes are everywhere, spreading false information and targeting those who want to believe it. Other doom scenarios involve biological weapons and a plethora of autonomous weapons. Another effect of the coming wave is job automation, which we’ll discuss more in-depth with the next book. Suleyman believes new jobs will undoubtedly be created, but they won’t come in the numbers and on the timescale needed to truly help.
Also, even if we have new jobs, there might not be enough people with the right skills; many people would need complete retraining. So, in the short term, many people could become unemployed. In the last part of the book, the author discusses why containment must be possible, because our lives depend on it. He also lays out ten steps to make this possible, which require coordination among technical researchers, developers, businesses, and governments.
Also, I find this book super interesting and relevant, covering the more immediate challenges we face today. I’d highly recommend it. Check out this nice explainer video on The Coming Wave website to learn more about this book.
Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
The next book we will discuss is Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity by Daron Acemoglu and Simon Johnson. I appreciate that this book was recommended to me by one of you. It examines the relationship between technology, prosperity, and societal progress. The authors challenge the popular notion that technological advancement, including AI, automatically leads to progress and shared prosperity.
Instead, they argue that technological advancement can often exacerbate inequality, with the benefits captured mainly by a small group of individuals and corporations. Just think of workers in textile factories during the Industrial Revolution, who were forced to work long hours in horrible conditions while a small group of wealthy people captured most of the wealth. Similarly, in recent decades, computer technologies have made a small group of entrepreneurs and businesses wealthy while the poorer part of the population has seen their real incomes decline.
Data tells us that in the last four decades, the real wages of goods-producing workers in the US have declined, even though productivity has grown. The book discusses many explanations for this and how we could address it. To me, the most exciting and relevant chapters are the ones on digital damage and artificial struggle, which analyze the impact of digital and AI automation on jobs and human workers.
The authors argue that AI technology should focus on automating routine tasks, like ATMs automating routine work previously done by bank tellers, rather than taking over creative and non-routine tasks from humans. They point out that technology should empower us to be more productive rather than try to replace us altogether. This is how we can make technology benefit everyone. The book also uses the term “so-so automation,” which I find pretty interesting.
The idea is that many companies rush to replace workers with machinery or automated AI customer service, for example, only to find out that the automation does not work well and the machines do a poorer job than human workers. Elon Musk once tried to automate everything possible at Tesla and later admitted it was a mistake, his mistake, and that “humans are underrated.” So, the book argues that humans are good at most of what they do.
We have developed sophisticated communication, problem-solving, and creativity skills over thousands of years, so let’s let humans and machines each do what they do best. Put another way, this is a case against building the kind of artificial general intelligence that big tech is chasing today. Although I’m not sure I agree with this point, I can relate to it.
The authors offer a range of policy recommendations to help redirect technology toward a better future for all of us. Overall, this is a very thought-provoking book. I’d highly recommend it to those who enjoy a more critical discussion of AI, especially if you’re into economics and politics or work in law-making organizations.
Human Compatible: Artificial Intelligence and the Problem of Control
The next book is Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell, who is also the co-author of the classic AI textbook we discuss at the end of this article. Despite the serious-sounding title, this book is an enjoyable read. It talks about designing intelligent machines that can help us solve complex problems while ensuring they never behave in ways harmful to humans. The first part of the book talks about AI in general:
how AI can be misused in different ways, and why we should take seriously the challenge of building superintelligent AI that is aligned with human goals. Russell says success would be the most significant event in human history, and perhaps the last event in human history. He also briefly addresses the question of when we will reach human-level AI. Russell believes that with the technology we have today, we still have a long way to go, and that deep learning, the approach behind today’s large AI models, “falls far short of what is needed,” so deep learning is probably not going to lead to human-level AI directly.
He also thinks several breakthroughs are needed for us to reach human-level AI. One of the most critical missing pieces of the puzzle is making computers understand the hierarchy of abstract actions and the notions of time and space, which are needed to construct complex plans and build models of the world. An example he gives is that it’s easy to train a robot to stand up using reinforcement learning, but the real challenge is for the robot to discover by itself that standing up is a thing.
In the book’s second part, Stuart Russell explains why the standard approach to building AI systems is fundamentally flawed. According to him, we are essentially building optimization machines that try to optimize specific objectives that we feed into them. They are utterly indifferent to human values, which could lead to catastrophic outcomes. Imagine that we tell an AI system to come up with a cure for cancer as soon as possible.
Well, this sounds like an innocent and reasonable objective. But the AI might decide to come up with a poison to kill everyone, so that no one would ever die from cancer again, or it might choose to inject a lot of people with cancer so it can carry out experiments at scale and see what works. By then it would be a little too late for us to say, “Oh, I forgot to mention something important: people don’t like to be killed.”
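This failure mode can be sketched in a few lines of code. The scenario, the action names, and the numbers below are entirely hypothetical, purely for illustration: an optimizer that maximizes a literal objective will happily pick a catastrophic action that the objective forgot to penalize.

```python
# Hypothetical toy example (not from the book): an optimizer that
# maximizes exactly the objective it is given, and nothing else.
# Action names and numbers are made up purely for illustration.

actions = {
    "fund_research":   {"cancer_deaths": 100, "total_deaths": 100},
    "poison_everyone": {"cancer_deaths": 0,   "total_deaths": 8_000_000_000},
}

def stated_objective(outcome):
    # What we *said*: minimize deaths from cancer.
    return -outcome["cancer_deaths"]

def intended_objective(outcome):
    # What we *meant*: minimize deaths overall.
    return -outcome["total_deaths"]

best_stated = max(actions, key=lambda a: stated_objective(actions[a]))
best_intended = max(actions, key=lambda a: intended_objective(actions[a]))
print(best_stated)    # the literal objective prefers the catastrophic action
print(best_intended)  # the intended objective prefers funding research
```

The gap between `stated_objective` and `intended_objective` is exactly the loophole the book worries about: the machine never knew the unstated constraint existed.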
So, this book argues that the world is complex, and it’s very hard to devise a good objective for a machine that closes all possible loopholes. We need to teach the machine to do what we mean, not just what we say. To solve this problem, Russell proposes a new approach: the idea of beneficial machines, AI systems designed to best realize human values and never harm us, no matter how intelligent they are. To make this possible, Russell proposes three principles, which remind me of Asimov’s three laws of robotics:
- The first principle is that the machines are purely altruistic. They don’t care about their own well-being or even their own existence.
- The second principle is that the machines are humble and don’t assume they know everything perfectly, including what objective they should have.
- The third principle is that machines learn human preferences by observing and predicting human behavior.
For example, a machine should know that most humans prefer to live rather than die, and it should be able to recognize human preferences even when our actions are not perfectly rational. The book goes on to argue that these principles should work, and that this can be mathematically guaranteed. Well, this would be a perfect plan if there were only one human on Earth. But there are billions of unique humans, and our preferences can ultimately collide.
So there’s a whole chapter about the biggest complication to this entire plan: humans ourselves. Overall, this book is fun to read but also very nuanced; you’ll find so many original ideas and arguments here. I find it an essential read and highly recommend it.
The Alignment Problem: Machine Learning and Human Values
The next book on the list is The Alignment Problem: Machine Learning and Human Values by Brian Christian. This book tackles the issue of making AI systems aligned with human values and intentions. It walks you through a tour of deep learning and neural networks since the beginning of the field, and talks about how AI goes wrong and how people have been trying to fix it. The book will be especially fascinating and helpful if you are already somewhat familiar with machine learning and data science, since you’ll come across many terms like training data, the gradient descent algorithm, word embeddings, and other jargon.
The first part of the book talks about bias, fairness, and transparency in machine learning models. You get almost a full history of large neural networks, with the names of those who have contributed to the progress of the last decade. It also covers a bunch of mistakes and all kinds of instances where machine learning went wrong.
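For readers new to the jargon, here is a minimal sketch of gradient descent, the workhorse algorithm mentioned above, applied to a toy one-dimensional function. The function and learning rate are my own illustrative choices, not from the book.

```python
# Minimal gradient descent sketch: minimize f(w) = (w - 3)^2.
# The derivative is f'(w) = 2 * (w - 3); each step moves against it.

def grad(w):
    return 2 * (w - 3)

w = 0.0    # initial guess
lr = 0.1   # learning rate
for _ in range(200):
    w -= lr * grad(w)

print(round(w, 4))  # converges to the minimum at w = 3
```

Training a neural network is the same loop at scale: the gradient is computed over millions of parameters from the training data, but the "step against the slope" idea is identical.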
For example, in 2015, Google Photos mistakenly classified Black people as gorillas. Google realized this was not okay and decided to remove the label entirely. Embarrassingly, three years later, in 2018, Google Photos still refused to tag anything as gorillas, including real gorillas. There are more examples with far more severe consequences, like biased models in healthcare or justice systems.
I enjoyed the discussion of what caused these issues, what people have done to fix them, and how to remove bias from machine learning models. When the world itself is biased, it’s a reality that more men are doctors, and the data is full of stereotypes like “Asians are good at math,” how can we even define fairness when life is unfair in so many ways? It’s captivating because these are all real stories, not just thought experiments, and they could impact the lives of billions of people. The book’s second part is about reinforcement learning, where machines learn from rewards, and imitation learning, where they learn by copying our behavior.
Imitation is also one of the main ideas behind self-driving cars: we try to teach machines to drive by showing them how we do it. We’ve seen a lot of success with this, but there are still limitations. Hence, the book goes on to discuss inverse reinforcement learning and reinforcement learning from human feedback, which OpenAI also used to train its language models. In the last chapter, Christian delves into how AI should deal with uncertainty. Overall, this book is a must-read for anyone interested in the ethical implications of AI and the challenges of building fair machine learning systems.
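To make "learning from rewards" concrete, here is a hedged sketch of tabular Q-learning, my own toy example and not code from the book: an agent on a five-cell corridor receives a reward only at the goal cell and must discover, from that signal alone, that moving right is the best policy.

```python
import random

# Hedged toy sketch of tabular Q-learning (my own example, not from the book).
# An agent on a 5-cell corridor gets reward 1 only at the rightmost cell and
# must discover, from that reward alone, that "move right" is the best policy.
random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # move left, move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3

for _ in range(3000):                       # training episodes
    s = random.randrange(GOAL)              # random non-goal start state
    for _ in range(50):                     # cap episode length
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0      # reward only at the goal
        # standard Q-learning update toward the bootstrapped target
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# After training, the greedy action in every non-goal state should be +1.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

Notice that nobody ever tells the agent "go right"; the preference emerges purely from the reward, which is exactly why a badly specified reward is so dangerous at larger scales.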
Artificial Intelligence: A Modern Approach (Pearson Series in Artificial Intelligence)
It would be a mistake if we didn’t mention this huge textbook, Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig. It’s a comprehensive textbook that covers all the foundations of building an AI agent, from problem solving and knowledge representation to planning, with further chapters on machine learning, natural language processing, and computer vision. It’s a staple for anyone studying computer science and AI, giving a detailed overview of all the core AI concepts. You might be thinking that, as a textbook,
it’s meant for students and researchers, but it’s actually quite accessible and engaging. The only thing is that you need some basic math and familiarity with mathematical notation. I wish I had more chances to dive deeper into many of its chapters. If you’re learning the technical side of AI, this is an excellent resource, and I highly recommend getting it sooner or later.
Exploring the Most Interesting Books on Artificial Intelligence
So, this is a longer article than usual, and thank you for sticking around. I hope I did justice to these fantastic books. They give me a more grounded view of AI developments, which is refreshing, and I have much less angst whenever I see a headline saying we’ll have AGI in 2024. I’m like, probably not. Things will surely get exciting, and if you got some value from my article, please comment and tell me your opinion.
Here are the links to the books I have discussed in the article.
- Life 3.0: Being Human in the Age of Artificial Intelligence
- Superintelligence: Paths, Dangers, Strategies
- The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma
- Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
- Human Compatible: Artificial Intelligence and the Problem of Control
- The Alignment Problem: Machine Learning and Human Values
- Artificial Intelligence: A Modern Approach (Pearson Series in Artificial Intelligence)
If you find the content valuable, leave a comment below. And if you are interested in making money with AI, check out the article “How to Make Money with AI – 7 Great Ideas for ChatGPT Marketplace“. Thanks for reading. I’ll be back with more valuable content soon. Bye & take care of yourself.