I assembled these 11 quotes for Techopedia back in early 2019. At the end of 2020, I was asked to put together a whole new set, which has displaced the original one under the same URL: https://www.techopedia.com/11-quotes-about-ai-thatll-make-you-think/2/33718 (click through to see the new version). This post preserves the original quotes, which remain timeless and relevant, particularly the last one, which is misquoted in almost every other list of quotes.
Takeaway: The advance of AI is inevitable, but what that translates into for humanity is not altogether clear. Some believe we can look forward to a great future, while others think it puts us on the path to being supplanted by our robotic overlords. A third perspective acknowledges the risks but considers them manageable.
We hear a lot about AI and its transformative potential. What that means for the future of humanity, however, is not altogether clear. Some futurists believe life will be improved, while others think it is under serious threat. There’s also a spectrum of positions in the middle. Here’s a range of takes from 11 experts.
1. “By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” – Eliezer Yudkowsky
That is the first sentence in Yudkowsky’s 2008 paper “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” written for the Machine Intelligence Research Institute (MIRI). While the term AI wasn’t bandied about nearly as much then as it is now, a lack of understanding of the technology’s capabilities and limits remains a problem. In fact, in the past couple of years, there’s been more of a push to make AI not just understandable, but explainable.
2. “What is vital is to make anything about AI explainable, fair, secure and with lineage, meaning that anyone could very simply see how any application of AI developed and why.” – Ginni Rometty
IBM's CEO made that statement during her keynote address at CES on January 9, 2019. The case for explainable AI is that keeping it a sealed black box makes it impossible to check for and fix biases or other problems in the programming. IBM has put itself in the camp of those working to solve this problem, offering companies not just computing services but also consultations on reducing bias for those building machine learning systems. (Learn more about explainable AI in AI’s Got Some Explaining to Do.)
3. “The ultimate search engine, which would understand, you know, exactly what you wanted when you typed in a query, and it would give you the exact right thing back, in computer science we call that artificial intelligence. That means it would be smart, and we’re a long ways from having smart computers.” – Larry Page
Google’s co-founder said this in November 2002 during a PBS NewsHour segment entitled “Google: The Search Engine that Could.” The host opened with a reflection on Google’s rising popularity in the year the American Dialect Society voted the verb “google” the most useful word of the year, though it would take another few years for it to be recognized by the likes of Merriam-Webster. But even early on, the company signaled its interest in AI.
4. “Avoid at ALL COSTS any mention or implication of AI. Weaponized AI is probably one of the most sensitized topics of AI – if not THE most. This is red meat to the media to find all ways to damage Google.” – Fei-Fei Li, an AI pioneer at Google, in an email to colleagues about the company’s involvement in Project Maven
Google found that being a major player in AI can have a downside. In July 2017, the Defense Department presented its goals for Project Maven. Marine Corps Col. Drew Cukor, chief of the Algorithmic Warfare Cross-Function Team in the Intelligence, Surveillance and Reconnaissance Operations Directorate-Warfighter Support in the Office of the Undersecretary of Defense for Intelligence, described the goal for the year: “People and computers will work symbiotically to increase the ability of weapon systems to detect objects.”
Google had been a partner in this venture, but, as the quote above indicates, its employees didn’t like it. The company eventually bowed to the pressure and announced in June 2018 that it would not renew its contract with the Defense Department. As The Intercept reported:
Google faced growing pressure since the contract was revealed by Gizmodo and The Intercept in March. Nearly a dozen employees resigned in protest, and several thousand signed an open letter declaring that “Google should not be in the business of war.” More than 700 academics also signed a letter demanding that “Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes.”
5. “Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” – Ray Kurzweil
The futurist and inventor said this in a 2012 interview in which he spoke about achieving immortality through computing power. He confirmed the “billion-fold” figure and explained it as follows: “That’s such a singular change that we borrow this metaphor from physics and call it a singularity, a profound disruptive change in human history. Our thinking will become a hybrid of biological and non-biological thinking.” Obviously, he is one of the optimistic futurists, picturing a disruptive change that will be of great benefit. He further explained why he believes immortality is within reach: “we will be adding more than a year every year to your remaining life expectancy, where the sands of time are running in rather than running out, where your remaining life expectancy actually stretches out as time goes by.”
6. “The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.” – Kevin Kelly
The co-founder of Wired wrote this sweeping assertion in his 2016 book, “The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future.” Envisioning the rise of automation and jobs taken over by robots, he anticipates a repeated cycle of denial, yet holds that progress is inevitable and that we will have to adapt accordingly. As he explained in an interview with IBM: “Through AI, we’re going to invent many new types of thinking that don’t exist biologically and that are not like human thinking,” and the silver lining to the computer cloud he highlights is this: “Therefore, this intelligence does not replace human thinking, but augments it.”
7. “The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.” – Stephen Hawking
This quote dates to 2015. It was the answer Stephen Hawking offered during a Reddit AMA (Ask Me Anything) session to a question from a teacher who wanted to know how to address certain AI concerns that come up in his classes, namely the following:
How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style “evil AI” is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?
Hawking expresses some concern about the potentially destructive effects of AI on humanity, though he appears to believe the risk can be managed if we plan for it, a view shared by others.
8. “You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animal cousins. It’s not a perfect analogy, of course, but it is the best archetype we can actually observe rather than just imagine.” – Yuval Noah Harari
Professor Harari made that pronouncement in his 2017 book, “Homo Deus: A Brief History of Tomorrow.” His view is poles apart from that of the positive futurists: he pictures the rise of what he calls dataism, in which humans cede the superior ground to advanced artificial intelligence, leaving us in the position of the ants in Hawking’s flooded-anthill scenario. This is a future dominated by an omnipresent and omniscient “cosmic data-processing system,” and resistance is futile.
9. “We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.” – Klaus Schwab
Schwab published his thoughts on the Fourth Industrial Revolution in January 2016. Like the positive futurists, he envisioned a future that fuses “the physical, digital and biological worlds in ways that will fundamentally transform humankind.” But he did not take it for granted that such a “transformation is positive,” urging people to plan ahead with awareness of both “the risks and opportunities that arise along the way.”
10. “Much has been written about AI’s potential to reflect both the best and the worst of humanity. For example, we have seen AI providing conversation and comfort to the lonely; we have also seen AI engaging in racial discrimination. Yet the biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly bigger than before. As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive.” – Andrew Ng
This quote comes from “What Artificial Intelligence Can and Can’t Do Right Now,” the article Andrew Ng, founding lead of the Google Brain team and former director of the Stanford Artificial Intelligence Laboratory, wrote for Harvard Business Review in 2016, when he was the overall lead of Baidu’s AI team. (In 2017 he became the founder and director of Landing AI.) It explains the capabilities and limits of AI as they stood then, and it remains relevant today. While Ng does not posit a data-dominated dystopian future, he does hold that those who develop AI have a responsibility to apply it responsibly, with full understanding of both its intended and unintended consequences.
11. “There is no reason and no way that a human mind can keep up with an artificial Intelligent machine by 2035.” – Gray Scott
This quote is not mistyped, though it deviates from the version you will see everywhere else online, which always appears as “There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.” Here’s the story. Judging by how far back it appears in digital sources, it was likely said in 2015. However, I could not pin it down to any particular context even after hours of searching through texts and videos from that period, so I contacted Scott himself to ask for the source. He admitted, “I do not recall when the first time was that I said this or where it was.” But he was definite about the wording: “The quote has always been wrong. It should read ‘artificial Intelligent.’”