“Machine Learning Is Like Money Laundering For Bias”

Maciej Ceglowski quote from Noah Friedman

Originally shared by Rick Wayne (Author)

Robotics pioneer Rodney Brooks debunks AI hype

This article is so fantastic that I want to sit and annotate it with you. Since I can't, we'll have to settle for a few notes.

First of all, ILLUSTRATIONS BY JOOST SWARTE!!!

Second of all, CLARKE'S LAWS!!! Everyone needs to know them, rather than just the last one.

Now that I got that out of the way, note this section:

This is a problem I regularly encounter when trying to debate with people about artificial general intelligence, or AGI—the idea that we will build autonomous agents that operate much like beings in the world. I am told that I do not understand how powerful AGI will be. That is not an argument. We have no idea whether it can even exist. I would like it to exist—this has always been my own motivation for working in robotics and AI. But modern-day AGI research is not doing well at all on either being general or supporting an independent entity with an ongoing existence. It mostly seems stuck on the same issues in reasoning and common sense that AI has had problems with for at least 50 years. All the evidence that I see says we have no real idea yet how to build one. Its properties are completely unknown, so rhetorically it quickly becomes magical, powerful without limit.

Here is the author's bio, for those who don't know the name and want to be immediately critical:

Rodney Brooks is a former director of the Computer Science and Artificial Intelligence Laboratory at MIT and a founder of Rethink Robotics and iRobot.

Former director of the AI lab at MIT... confirming what I and others have been noting -- that the discipline has been stuck on the same problems for the last 50 or more years. This is almost certainly because its founding principle, the computational theory of mind, is wrong.

Then there's this:

When people hear that machine learning is making great strides in some new domain, they tend to use as a mental model the way in which a person would learn that new domain. However, machine learning is very brittle, and it requires lots of preparation by human researchers or engineers, special-purpose coding, special-purpose sets of training data, and a custom learning structure for each new problem domain. Today’s machine learning is not at all the sponge-like learning that humans engage in, making rapid progress in a new domain without having to be surgically altered or purpose-built.

In other words, what we call artificial intelligence is not actually intelligence. It is an algorithmic encoding of the intelligence of the humans who made it. Calling it intelligent is mistaking the painting for the artist.

Then there's this:

The building I live in was built in 1904, and it is not nearly the oldest in my neighborhood. Many of the cars we are buying today, which are not self-driving, and mostly are not ­software-enabled, will probably still be on the road in the year 2040. This puts an inherent limit on how soon all our cars will be self-driving. If we build a new home today, we can expect that it might be around for over 100 years.

This is actually the basis of this project (note the name of this Collection). As the speed of change quickens, radically different technologies will co-exist for longer periods of time. It's the fundamental reason the work of artists like Simon Stålenhag resonates so much now.

The article quotes Amara’s Law, which says we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run, and that is very true. A big reason for that is we don't accurately measure "long run."

Anyway, check it out.
https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/
