The term “artificial intelligence” was coined seventy years ago to describe studies based on the conjecture that every feature of human intelligence can be simulated by a machine. The seven decades that followed were characterized by exaggerated promises and subsequent disappointments, by unexpected advances and the resurgence of discredited methods, and by widespread excitement and anxiety fueled by a credulous press and popular fiction.
August 1955 The term “artificial intelligence” is coined in a proposal for a study of artificial intelligence submitted by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The workshop that followed, in the summer of 1956 at Dartmouth College, is generally considered the official birthdate of the new field.
December 1955 Herbert Simon and Allen Newell develop the Logic Theorist, the first artificial intelligence program. It would eventually prove 38 of the first 52 theorems in Whitehead and Russell’s Principia Mathematica.
1957 Frank Rosenblatt develops the Perceptron, an early artificial neural network, implemented on a machine built specially for image recognition. The New York Times reported the Perceptron to be “the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
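The learning rule at the heart of the Perceptron is simple enough to sketch in a few lines. The following is a minimal illustration in modern Python, not Rosenblatt’s hardware implementation; the toy data, learning rate, and epoch count are invented for the example.

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=0.1):
    """Learn weights w and bias b so that sign(w @ x + b) matches labels y (+1/-1)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # The classic perceptron rule: update only on misclassified points.
            if yi * (w @ xi + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy linearly separable data: two classes on either side of a line.
X = np.array([[2.0, 1.0], [3.0, 1.5], [1.0, 2.0], [0.5, 3.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(w, b)  # parameters of a separating hyperplane for the toy data
```

For linearly separable data like this, the rule is guaranteed to converge; Minsky and Papert’s 1969 critique (see the 1969 entry below) turned on what a single such layer cannot represent.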
1957 In the film Desk Set, when a “methods engineer” (Spencer Tracy) installs the fictional computer EMERAC, the head librarian (Katharine Hepburn) tells her worried colleagues in the corporate research department: “They can’t build a machine to do our job; there are too many cross-references in this place.” She proves her point by winning not only the engineer’s heart but also a contest with the imposing, room-size “electronic brain.”
1958 John McCarthy develops Lisp, which becomes the most popular programming language used in artificial intelligence research.
1959 Arthur Samuel coins the term “machine learning,” reporting on programming a computer “so that it will learn to play a better game of checkers than can be played by the person who wrote the program.”
1959 John McCarthy publishes “Programs with Common Sense” in the Proceedings of the Symposium on Mechanization of Thought Processes, in which he describes the Advice Taker, a program for solving problems by manipulating sentences in formal languages with the ultimate objective of making programs “that learn from their experience as effectively as humans do.”
1961 The first industrial robot, Unimate, starts working on an assembly line in a General Motors plant in New Jersey.
1965 Herbert Simon predicts that “machines will be capable, within twenty years, of doing any work a man can do.”
1965 Hubert Dreyfus publishes “Alchemy and AI,” arguing that the mind is not like a computer and that there were limits beyond which AI would not progress.
1965 I. J. Good writes that “the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
1965 Joseph Weizenbaum develops ELIZA, an interactive program that carries on a dialogue in the English language on any topic. Weizenbaum, who wanted to demonstrate the superficiality of communication between man and machine, was surprised by the number of people who attributed human-like feelings to the computer program.
1965 Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi start working on DENDRAL at Stanford University. The first expert system, it automated the decision-making process and problem-solving behavior of organic chemists.
1966 Shakey, the first intelligent mobile robot, is developed at the Stanford Research Institute. In a 1970 Life magazine article about this “first electronic person,” Marvin Minsky is quoted saying with “certitude”: “In from three to eight years we will have a machine with the general intelligence of an average human being. . . . Once the computers got control, we might never get it back. We would survive at their sufferance. If we are lucky, they might decide to keep us as pets.”
1968 The film 2001: A Space Odyssey is released, featuring HAL, a sentient and murderous computer.
1968 Terry Winograd develops SHRDLU, an early natural language understanding computer program.
1969 Marvin Minsky and Seymour Papert publish Perceptrons: An Introduction to Computational Geometry, highlighting the limitations of simple artificial neural networks. The book is believed to have been a major factor in the decline of funding and research on artificial neural networks over the next fifteen years.
1972 The first full-scale humanoid robot, WABOT-1, is demonstrated at Waseda University in Japan. It consisted of a limb-control system, a vision system, and a conversation system.
1972 MYCIN, an expert system for identifying bacteria causing severe infections and recommending antibiotics, is developed at Stanford University.
1973 James Lighthill reports to the British Science Research Council on the state of artificial intelligence research, concluding that “in no part of the field have the discoveries made so far produced the major impact that was then promised.”
1973 Raj Reddy develops Hearsay I, the first system capable of continuous speech recognition.
1978 The XCON (eXpert CONfigurer) program, an expert system assisting in the ordering of DEC’s VAX computers by automatically selecting the components based on the customer’s requirements, is developed at Carnegie Mellon University.
1979 The Stanford Cart successfully crosses a chair-filled room without human intervention in about five hours, becoming one of the earliest examples of an autonomous vehicle.
1979 Kunihiko Fukushima develops the neocognitron, a hierarchical, multilayered artificial neural network.
1981 The Japanese Ministry of International Trade and Industry budgets $850 million for the Fifth Generation Computer project. The project aimed to develop computers that could carry on conversations, translate languages, interpret pictures, and reason like human beings. The project ended in 1992 without achieving its goals.
1984 At the annual meeting of the Association for the Advancement of Artificial Intelligence, a panel warns of the coming “AI winter,” predicting an imminent bursting of the AI bubble. The bubble, centered on expert systems, did deflate three years later.
1986 A driverless van equipped with cameras and sensors, developed at Bundeswehr University in Munich under the direction of Ernst Dickmanns, drives up to 55 mph on empty streets.
1986 David Rumelhart, Geoffrey Hinton, and Ronald Williams publish “Learning representations by back-propagating errors.” The backpropagation learning algorithm played a vital role in the success of deep learning in the 2010s.
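For readers who want to see the mechanism, here is a minimal sketch of backpropagation training a one-hidden-layer network on the XOR function. The architecture, learning rate, and iteration count are arbitrary choices for illustration, not details drawn from the 1986 paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    # Forward pass through the hidden layer and output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error toward the input layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]] for most random seeds
```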
1988 Judea Pearl publishes Probabilistic Reasoning in Intelligent Systems. His 2011 Turing Award citation credited Pearl for inventing “Bayesian networks, a mathematical formalism for defining complex probability models, as well as the principal algorithms used for inference in these models. This work not only revolutionized the field of artificial intelligence but also became an important tool for many other branches of engineering and the natural sciences.”
1988 Members of the IBM T.J. Watson Research Center publish “A Statistical Approach to Language Translation,” heralding the shift from rule-based to probabilistic methods of machine translation, and reflecting a broader transition to machine learning based on a statistical analysis of known examples.
1988 R. Colin Johnson and Chappell Brown publish Cognizers: Neural Networks and Machines That Think, proclaiming that artificial neural networks “could very well revolutionize our society and will inevitably lead to a new understanding of our own cognition.”
1989 Yann LeCun and other researchers at AT&T successfully apply a backpropagation algorithm to a multi-layer artificial neural network, recognizing handwritten ZIP codes. Given the hardware limitations at the time, it took about three days to train the network, still a significant improvement over earlier efforts.
1990 Rodney Brooks publishes “Elephants Don’t Play Chess,” proposing a new approach to AI—building intelligent systems, specifically robots, from the ground up and on the basis of ongoing physical interaction with the environment: “The world is its own best model… The trick is to sense it appropriately and often enough.”
1993 Vernor Vinge publishes “The Coming Technological Singularity,” predicting that “within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”
1995 Richard Wallace develops the chatbot A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), inspired by Joseph Weizenbaum’s ELIZA program, but with the addition of natural-language sample data collection on an unprecedented scale, enabled by the advent of the Web.
1997 Sepp Hochreiter and Jürgen Schmidhuber propose Long Short-Term Memory (LSTM), a recurrent neural network architecture that enables the network to learn which information will be needed later in a sequence and when that information is no longer needed.
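To make the gating idea concrete, here is a minimal single-step LSTM cell in NumPy. This is an illustrative sketch under simplified assumptions (no bias terms, random untrained weights), not the authors’ original formulation.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, params):
    """One time step of an LSTM cell: returns new hidden state h and cell state c."""
    Wf, Wi, Wc, Wo = params              # each maps [h_prev, x] to the hidden size
    z = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ z)                  # forget gate: what to erase from memory
    i = sigmoid(Wi @ z)                  # input gate: what new information to store
    c_tilde = np.tanh(Wc @ z)            # candidate values to write
    c = f * c_prev + i * c_tilde         # update the long-term cell state
    o = sigmoid(Wo @ z)                  # output gate: what to expose as output
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
hidden, inputs = 3, 2
params = [rng.normal(size=(hidden, hidden + inputs)) for _ in range(4)]
h = c = np.zeros(hidden)
for x in rng.normal(size=(5, inputs)):   # run over a short random input sequence
    h, c = lstm_step(x, h, c, params)
print(h)
```

The additive cell-state update is what lets gradients flow across long sequences, addressing the vanishing-gradient problem that plagued earlier recurrent networks.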
1997 Deep Blue becomes the first computer chess-playing program to beat a reigning world chess champion.
1998 Dave Hampton and Caleb Chung create Furby, the first domestic or pet robot.
2000 Yoshua Bengio advances the field of natural language processing, co-authoring the paper “A Neural Probabilistic Language Model.” The paper introduced “high-dimensional word embeddings,” allowing neural networks to recognize the similarity between new phrases and those included in their training sets, even when the specific words used are different.
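A small, hypothetical illustration of the idea: if words used in similar contexts receive nearby vectors, a model can treat sentences like “the cat is walking” and “a dog was running” as related even though the words differ. The 4-dimensional vectors below are made up for this example; real embeddings are learned from text, as in Bengio et al.’s paper.

```python
import numpy as np

# Invented toy embeddings; learned embeddings are typically hundreds of dimensions.
embeddings = {
    "cat": np.array([0.9, 0.1, 0.3, 0.0]),
    "dog": np.array([0.8, 0.2, 0.3, 0.1]),
    "car": np.array([0.0, 0.9, 0.1, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["cat"], embeddings["dog"]))  # high: similar usage contexts
print(cosine(embeddings["cat"], embeddings["car"]))  # low: different contexts
```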
2000 Cynthia Breazeal develops Kismet, a robot that can recognize and simulate emotions.
2000 Honda’s ASIMO, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in a restaurant setting.
2001 A.I. Artificial Intelligence, a Steven Spielberg film about David, a childlike android uniquely programmed with the ability to love, is released.
2004 The first DARPA Grand Challenge, a prize competition for autonomous vehicles, is held in the Mojave Desert. None of the autonomous vehicles finished the 150-mile route.
2006 The Dartmouth “Artificial Intelligence: The Next Fifty Years” conference (AI@50) commemorates the 50th anniversary of the 1956 workshop. The conference director concludes that “although AI has enjoyed much success over the last fifty years, numerous dramatic disagreements remain within the field.”
2007 Fei-Fei Li and her colleagues at Princeton University start assembling ImageNet, a large database of annotated images designed to aid research in visual object recognition.
2009 Rajat Raina, Anand Madhavan, and Andrew Ng publish “Large-scale Deep Unsupervised Learning using Graphics Processors,” arguing that “modern graphics processors far surpass the computational capabilities of multicore CPUs, and have the potential to revolutionize the applicability of deep unsupervised learning methods.”
2009 Google starts developing, in secret, a driverless car. In 2014, it became the first to pass a U.S. state self-driving test, in Nevada.
2010 The ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual object recognition competition, is launched.
2011 An artificial neural network wins the German traffic sign recognition competition with 99.46% accuracy (versus 99.22% for humans).
2011 Watson, a natural language question-answering computer, competes on Jeopardy! and defeats two former champions.
2011 Researchers at IDSIA in Switzerland report a record-low error rate in handwriting recognition, achieved with a GPU-based artificial neural network.
June 2012 Jeff Dean and Andrew Ng report on the results of an experiment in which they showed a very large neural network 10 million unlabeled images randomly taken from YouTube videos, and “to our amusement, one of our artificial neurons learned to respond strongly to pictures of… cats.”
October 2012 AlexNet, a GPU-based artificial neural network designed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, achieves a 15.3% error rate in the ImageNet Large Scale Visual Recognition Challenge, compared with 26.2% for the second-best entry.
Artificial neural networks, rebranded around 2007 as “deep learning,” were soon rebranded again as “AI,” attracting venture capital investment, stimulating widespread experimentation by enterprises, and prompting growing government scrutiny.
March 2016 Google DeepMind’s AlphaGo defeats Go champion Lee Sedol.
2017 Google researchers publish “Attention Is All You Need,” introducing the “transformer” architecture, which allows an artificial neural network to capture relationships between distant words, improving its handling of context and leading to the development of many large language models (LLMs).
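At the core of the transformer is scaled dot-product attention, in which every position in a sequence scores its relevance to every other position, so relationships between distant words are captured directly. A minimal NumPy sketch, with tiny dimensions and random inputs as placeholders for real learned projections of word embeddings:

```python
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (sequence_length, d) arrays; returns weighted mixtures of V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise relevance of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # each output mixes all positions

rng = np.random.default_rng(0)
seq_len, d = 4, 8                                    # e.g., a 4-token "sentence"
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))
print(attention(Q, K, V).shape)                      # (4, 8)
```

Because every token attends to every other in a single step, no information has to be threaded through a recurrent state, which is what made transformers both more parallelizable and better at long-range context than LSTMs.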
November 2022 ChatGPT, an LLM-based chatbot, is released, becoming the fastest-growing consumer software application in history with more than one hundred million users in two months.
April 2024 Elon Musk predicts the arrival of artificial general intelligence, AI that is “smarter than the smartest human,” probably by 2025 or 2026.