arXiv paper (PDF) from Adnan Darwiche of UCLA: Human-Level Intelligence or Animal-Like Abilities? A thought-provoking and insightful discussion of the current AI rage.
Mainstream scientific intuition stands in the way of accepting that a method, which does not require any explicit modeling or sophisticated reasoning, can be sufficient for reproducing human-level intelligence. This dilemma is further amplified by the observation that recent developments did not culminate in a clearly characterized and profound scientific discovery—such as a new theory of the mind—that would normally mandate massive updates to the AI curricula. Scholars from outside AI and computer science often sense this dilemma as they complain that they are not receiving an intellectually satisfying answer to the question of “What just happened in AI?”
The answer to this dilemma lies in a careful assessment of what we managed to achieve with deep learning, and in identifying and appreciating the key scientific outcomes of recent developments in this area of research.
Darwiche perfectly sums up the feeling I first had when Deep Learning started mowing down Go champions like a pit bull shredding Raggedy Ann dolls - "What just happened in AI?" After researching the breakthroughs in Deep Learning, I began to realize that most of the headlining advancements in AI were breakthroughs of degree, not breakthroughs of kind. Yet most people who write about AI in the tech media do not understand what is happening. Darwiche says,
Public perceptions about AI progress and its future are very important. The current misperceptions and associated fears are being nurtured by the absence of scientific, precise and bold perspectives on what just happened, therefore leaving much to the imagination.

Darwiche goes on to explain that the narrative surrounding recent AI breakthroughs measures progress against a different, much more forgiving standard than the one used by early AI researchers:
Consider machine translation for example, which received significant attention in the early days of AI. The represent-and-reason approach is considered to have failed on this task, with machine learning approaches being the state of the art now (going beyond function-based approaches). In the early days of AI, success was measured by how far a system accuracy was from 100%, with intelligence being a main driving application (a failure to translate correctly can potentially lead to a political crisis). The tests that translation systems were subjected to compared their performance to human translation abilities (e.g., translate a sentence from English to Russian then back to English). Today, the almost sole application of machine translation is to the web. Here, success is effectively measured in terms of how far a system accuracy is from 0%. If I am looking at a page written in French, a language that I don’t speak, I am happy with any translation that gives me a sense of what the page is saying. In fact, the machine translation community rightfully calls this gist translation. It can work impressively well on prototypical sentences that appear often in the data, but can fail badly on novel text. This is still very valuable, yet it corresponds to a task that is significantly different from what was tackled by early AI researchers. Current translation systems will fail miserably if subjected to the tests adopted by early translation researchers. Moreover, these systems will not be suitable, or considered successful, if integrated within a robot that is meant to emulate human behavior or abilities.
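To make the gap between the two standards concrete, here is a toy sketch of the round-trip test early researchers favored. The translate() function is purely hypothetical - no real MT API is implied - and the word-overlap score is only a crude stand-in for a human judgment of fidelity.

    # Toy round-trip test in the spirit of early MT evaluation.
    # translate(text, src, dst) is a hypothetical function, not a real API.
    def round_trip_score(sentence, translate, pivot="ru"):
        """Translate English -> pivot -> English, score word overlap with the original."""
        forward = translate(sentence, src="en", dst=pivot)
        back = translate(forward, src=pivot, dst="en")
        original = set(sentence.lower().split())
        recovered = set(back.lower().split())
        return len(original & recovered) / len(original) if original else 0.0

    # Early-AI-style bar: demand near-perfect recovery, not just the gist.
    # assert round_trip_score("The spirit is willing but the flesh is weak.", translate) > 0.9

Gist translation would pass a much looser bar: any output that conveys roughly what the page is saying counts as a win.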
One area of AI progress that I think Darwiche overlooks is "Universal AI" (Hutter's term) - AI built on Solomonoff's theory of induction. This field offers something that no other field in AI has - a formal definition of action, among other things. While these formal definitions are uncomputable (cannot be directly implemented with any real hardware or software), the latest developments in other areas of AI - such as Deep Learning - are inadvertently paving the way for resource-feasible approximations to Universal AI.
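To give a flavor of what a "resource-feasible approximation" could mean, the sketch below applies the length-weighted prior at the heart of Solomonoff induction to a deliberately tiny hypothesis class (repeating bit patterns). This is nothing like AIXI; the hypothesis class and the cap on pattern length are illustrative assumptions that stand in for a real program enumeration.

    # Toy illustration of the length-prior idea behind Solomonoff induction.
    # Hypotheses (an assumption of this sketch): "the bit stream repeats pattern p
    # forever", for every binary pattern p up to MAX_LEN bits. Each hypothesis gets
    # prior weight 2**-len(p); we predict by the weighted vote of the hypotheses
    # still consistent with what we have seen. The length bound is the resource cap.
    from itertools import product

    MAX_LEN = 8

    def hypotheses():
        for n in range(1, MAX_LEN + 1):
            for bits in product("01", repeat=n):
                yield "".join(bits)

    def consistent(pattern, observed):
        return all(observed[i] == pattern[i % len(pattern)] for i in range(len(observed)))

    def predict_next(observed):
        votes = {"0": 0.0, "1": 0.0}
        for p in hypotheses():
            if consistent(p, observed):
                votes[p[len(observed) % len(p)]] += 2.0 ** (-len(p))
        return max(votes, key=votes.get)

    print(predict_next("010101"))  # the shortest consistent pattern "01" dominates -> "0"

Real approximations would replace the enumerated patterns with far richer program classes and the exact prior with learned or sampled surrogates, which is where I suspect deep learning ends up contributing.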
My view is that we need to be more cautious about AI, but not for the reasons given by the AI-spells-doom community. No one can predict the advancements that will occur in this field, including advancements that are dangerous. We are building AI on Turing-universal platforms, and that is inherently unsafe. We need to start fencing AI inside containers with provable properties. Otherwise we are flying blind: anybody's opinion about what is going to happen next is as good as anybody else's. Fencing AI inside containers with provable properties removes speculation from the discussion. This approach imposes some overhead on AI development, but the overhead is modest, and it is the only rational way forward.
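As a very rough operational gesture at the "fencing" idea - nothing like the provable guarantees argued for above - here is a sketch that runs an untrusted AI workload in a child process under hard OS resource limits on Linux. The script name and the limit values are placeholders chosen for illustration.

    # Crude confinement sketch: hard OS resource limits on an untrusted child process.
    # This is defence in depth, not a container with provable properties.
    import resource
    import subprocess

    def limit_resources():
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))              # 5 seconds of CPU time
        resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20,) * 2)   # 512 MiB of address space
        resource.setrlimit(resource.RLIMIT_NOFILE, (16, 16))         # at most 16 open files

    # "untrusted_agent.py" is a placeholder for whatever process is being confined.
    proc = subprocess.run(
        ["python3", "untrusted_agent.py"],
        preexec_fn=limit_resources,
        capture_output=True,
        timeout=10,
    )
    print(proc.returncode)

OS limits like these only bound resource use; the containers I have in mind would carry machine-checked proofs about what the enclosed system can and cannot do.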