Quotes About AI
Elon Musk is a leading voice on the potential dangers of AI. His concern over AI led him to co-found OpenAI, a non-profit AI research company. Scroll down to read the quotes, or watch Elon share his thoughts on AI.
Monica Swinton: 'I'm sorry I didn't tell you about the world.' Professor Hobby: 'You are a real boy. At least as real as I've ever made one.' (from the film A.I. Artificial Intelligence)
Many positive artificial intelligence quotes look at what AI stands to change and allow us to do. One of the more frequent themes is the impact it could have on our humanity, that is, our compassion and emotional intelligence. Our intelligence is what makes us human, and AI is an extension of that quality.
Feel free to leave a comment at the bottom with your favorite quote. Enjoy!
'Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.' (Aug, 2014 Source)
'So the goal of OpenAI is really just to take the set of actions that are most likely to improve the positive futures. Like, if you can think of like the future as a set of probability streams that branch out and then converge, collapse down to a particular event and then branch out again, there's a certain set of probabilities associated with the future being positive, and different flavours of that, and at OpenAI we want to try to guide… do whatever we can to increase the probability of the good futures happening.' (June, 2016 Source)
'I think if AI power is widely distributed then, and there's not, say, one entity that has some super AI that is a million times smarter than anything else, if instead the AI power is broadly distributed and, to the degree that we can link AI power to each individual's will, like you'd have your AI agent, everyone would have their AI agent, and then if somebody did try and do something really terrible then the collective will of others could overcome that bad actor, which you can't do if there's one AI that's a million times better than everyone else.' (June, 2016 Source)
'I think AI is something that is risky at the civilization level, not merely at the individual risk level, and that's why it really demands a lot of safety research.' (Jan, 2015 Source)
'I think A.I. is probably the single biggest item in the near term that’s likely to affect humanity. So it’s very important that we have the advent of A.I. in a good way, that it’s something that if you could look into a crystal ball and see the future, you would like that outcome. Because it is something that could go wrong… So we really need to make sure it goes right.' (Sep, 2016 Source)
'I think having a high bandwidth interface to the brain [is extremely important]; we are currently bandwidth limited. We have a digital tertiary self, in the form of our email, computers, phones, applications; we are effectively superhuman. But we are extremely bandwidth constrained in that interface between the cortex and that tertiary digital form of yourself, and helping solve that bandwidth constraint would be very important for the future as well.' (Sep, 2016 Source)
'The best of the available alternatives that I can come up with [regarding A.I.], and maybe somebody else can come up with a better approach or a better outcome, is that we achieve democratization of A.I. technology. Meaning that no one company or small set of individuals has control over advanced A.I. technology. That’s very dangerous, it could also get stolen by somebody bad. Like some evil dictator. A country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation if you have any incredibly powerful A.I. You just don’t know who’s going to control that.' (Sep, 2016 Source)
'It’s not as though I think the risk is that A.I. will develop a will of its own right off the bat, the concern is more that someone may use it in a way that is bad. Or even if they weren’t going to use it in a way that's bad, that someone could take it from them and use it in a way that’s bad. That I think is quite a big danger. So I think we must have democratization of A.I. technology and make it widely available. And that’s obviously the reason that you [Sam Altman], me, and the rest of the team created OpenAI - was to help spread out A.I. technologies so it doesn’t get concentrated in the hands of a few.' (Sep, 2016 Source)
'If we can effectively merge with A.I. by improving the neural link between the cortex and your digital extension of yourself - which already exists, it just has a bandwidth issue - then effectively you become an A.I. human symbiote. And if that then is widespread, [where] anyone who wants it can have it, then we solve the control problem as well. We don’t have to worry about some evil dictator A.I., because we are the A.I. collectively. That seems to be the best outcome I can think of.' (Sep, 2016 Source)
'[OpenAI] seems to be going really well. We have a really talented team that are working hard. OpenAI is structured as a non-profit, but many non-profits do not have a sense of urgency… but OpenAI does. I think people really believe in the mission, I think it’s important. It’s about minimizing the risk of existential harm in the future…' (Sep, 2016 Source)
When it comes to the possibilities and perils of artificial intelligence (AI), that is, learning and reasoning by machines without human intervention, there are plenty of opinions out there. Only time will tell which of these quotes comes closest to our future reality. Until we get there, it’s interesting to contemplate who might predict that reality best.
“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”
—Stephen Hawking told the BBC
“I visualise a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.”
—Claude Shannon
“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We're nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”
—Larry Page
“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.”
—Elon Musk wrote in a comment on Edge.org
“The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”
—Nick Bilton, tech columnist, wrote in the New York Times
“I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of 'bug out' houses, to which they could flee if it all hits the fan.”
—James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, told the Washington Post
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”
—Elon Musk warned at MIT’s AeroAstro Centennial Symposium
“The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”
—Gray Scott
“We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.”
—Klaus Schwab
“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we'll augment our intelligence.”
—Ginni Rometty
“I'm more frightened than interested by artificial intelligence - in fact, perhaps fright and interest are not far away from one another. Things can become real in your mind, you can be tricked, and you believe things you wouldn't ordinarily. A world run by automatons doesn't seem completely unrealistic anymore. It's a bit chilling.”
—Gemma Whelan
“You have to talk about 'The Terminator' if you're talking about artificial intelligence. I actually think that that's way off. I don't think that an artificially intelligent system that has superhuman intelligence will be violent. I do think that it will disrupt our culture.”
—Gray Scott
“If the government regulates against use of drones or stem cells or artificial intelligence, all that means is that the work and the research leave the borders of that country and go someplace else.”
—Peter Diamandis
“The key to artificial intelligence has always been the representation.”
—Jeff Hawkins
“It's going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.”
—Colin Angle
“Anything that could give rise to smarter-than-human intelligence—in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement - wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.”
—Eliezer Yudkowsky
“Artificial intelligence is growing up fast, as are robots whose facial expressions can elicit empathy and make your mirror neurons quiver.”
—Diane Ackerman
“Someone on TV has only to say, ‘Alexa,’ and she lights up. She’s always ready for action, the perfect woman, never says, ‘Not tonight, dear.’”
—Sybil Sage, as quoted in a New York Times article
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.”
—Alan Kay
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.”
—Ray Kurzweil
“Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It's really an attempt to understand human intelligence and human cognition.”
—Sebastian Thrun
“A year spent in artificial intelligence is enough to make one believe in God.”
—Alan Perlis
“There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.”
—Gray Scott
“Is artificial intelligence less than our intelligence?”
—Spike Jonze
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
—Eliezer Yudkowsky
“The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.”
—Jean Baudrillard
“Forget artificial intelligence - in the brave new world of big data, it's artificial idiocy we should be looking out for.”
—Tom Chatfield
“Before we work on artificial intelligence why don’t we do something about natural stupidity?”
—Steve Polyak
So, how would you weigh in? What’s your opinion about artificial intelligence?