History of Artificial Intelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the gods."

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true. Eventually it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s the investors became disillusioned and withdrew funding again. This cycle of boom and bust, of "AI winters" and summers, continues to haunt the field. Undaunted, there are those who make extraordinary predictions even now.

Progress in AI has continued, despite the rise and fall of its reputation in the eyes of government bureaucrats and venture capitalists. Problems that had begun to seem impossible in 1970 have been solved and the solutions are now used in successful commercial products. However, no machine has been built with a human level of intelligence, contrary to the optimistic predictions of the first generation of AI researchers. "We can only see a short distance ahead," admitted Alan Turing, in a famous 1950 paper that catalyzed the modern search for machines that think. "But," he added, "we can see much that must be done."


