User:Grofaz

From Wikipedia, the free encyclopedia

Theories of the Digital Revolution

The Memex

Vannevar Bush[1] introduced the concept of what he called the memex in the 1930s: a microfilm-based "device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility."

After thinking about the potential of augmented memory for several years, Bush set out his thoughts at length in the essay "As We May Think", published in the Atlantic Monthly in July 1945; the essay is described as having been written in 1936 but set aside when war loomed. In the article, Bush predicted that "Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified." A few months later (10 September 1945), Life magazine published a condensed version of "As We May Think," accompanied by several illustrations showing the possible appearance of a memex machine and its companion devices. This version of the essay was subsequently read by both Ted Nelson and Douglas Engelbart, and was a factor in their independent formulations of the various ideas that became hypertext. The memex remains an important milestone because it directly inspired the development of hypertext technology.

Binary Code

The term binary code can mean several different things:

There are a variety of different methods of coding numbers or symbols into strings of bits, including fixed-length binary numbers, prefix codes such as Huffman codes, and arithmetic coding. A binary code in this sense is made up of only zeros and ones (zero standing for off and one standing for on) and is used in computers to stand for letters and digits. For example, computers using Western languages often use 8-bit binary codes for characters. The ISO 8859-1 character code uses 8 bits for one letter, e.g. "R" is "01010010" and "b" is "01100010"; a block of 8 bits is called a byte. The ASCII code uses 7 bits to represent 128 characters (0–127).
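
As a minimal illustration of the character codes above, the following Python sketch prints the 8-bit ISO 8859-1 pattern for a character (the helper name to_binary is ours, not part of any standard):

    # Show the 8-bit binary code for a character under ISO 8859-1 (Latin-1).
    def to_binary(ch):
        # ord() gives the character's code point; Latin-1 code points fit in one byte (0-255).
        return format(ord(ch), "08b")

    print(to_binary("R"))  # 01010010
    print(to_binary("b"))  # 01100010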

A binary code can also refer to a linear code over the finite field F2 = Z/2Z.
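
To illustrate the coding-theory sense, here is a small sketch of the [3,1] repetition code, one of the simplest linear codes over F2; the example message and error position are invented for illustration:

    # [3,1] repetition code over F2: encode each bit as three copies,
    # decode by majority vote (corrects any single bit flip per block).
    def encode(bits):
        return [b for b in bits for _ in range(3)]

    def decode(coded):
        blocks = [coded[i:i + 3] for i in range(0, len(coded), 3)]
        return [1 if sum(block) >= 2 else 0 for block in blocks]

    msg = [1, 0, 1]
    sent = encode(msg)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
    sent[4] = 1                 # introduce a single-bit error
    print(decode(sent) == msg)  # True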

Moore's Law[2]

Growth of transistor counts for Intel processors (dots) and Moore's Law (upper line = 18 months; lower line = 24 months).

Moore's Law describes an important trend in the history of computer hardware: the number of transistors that can be inexpensively placed on an integrated circuit increases exponentially, doubling approximately every two years. The observation was first made by Intel co-founder Gordon E. Moore in a 1965 paper. The trend has continued for more than half a century and is not expected to stop for at least another decade, and perhaps much longer.

Almost every measure of the capabilities of digital electronic devices is linked to Moore's Law: processing speed, memory capacity, even the resolution of LCD screens and digital cameras. All of these are improving at (roughly) exponential rates as well. This has dramatically changed the usefulness of digital electronics in nearly every segment of the world economy. Moore's Law describes this driving force of technological and social change in the late 20th and early 21st centuries.
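
A rough illustration of the doubling described above; the 2,300-transistor, 1971 starting point (the Intel 4004) is used only as a convenient reference, and the two-year doubling period is the assumption being illustrated, not measured data:

    # Project transistor counts under Moore's Law (doubling every 2 years).
    start_year, start_count = 1971, 2300  # Intel 4004 as a reference point
    doubling_period = 2                   # years per doubling (the assumption)

    for year in range(1971, 2012, 10):
        count = start_count * 2 ** ((year - start_year) / doubling_period)
        print(year, f"{count:,.0f}")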

Artificial intelligence[3]

Garry Kasparov playing against Deep Blue, the first machine to win a chess match against a reigning world champion.

The modern definition of artificial intelligence (or AI) is "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines." Other names for the field have been proposed, such as computational intelligence, synthetic intelligence or computational rationality. The term artificial intelligence is also used to describe a property of machines or programs: the intelligence that the system demonstrates.
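
A minimal sketch of the intelligent-agent definition above: the agent perceives a state and chooses the action with the highest estimated chance of success. The environment, actions and scoring function here are invented placeholders:

    # Minimal intelligent-agent loop: perceive the environment, then choose
    # the action with the highest estimated chance of success.
    def perceive(environment):
        return environment["state"]

    def choose_action(state, actions, estimate):
        return max(actions, key=lambda a: estimate(state, a))

    def estimate(state, action):
        # Invented scoring: prefer "charge" when the battery is low.
        if state == "battery_low":
            return 1.0 if action == "charge" else 0.1
        return 1.0 if action == "explore" else 0.5

    env = {"state": "battery_low"}
    actions = ["explore", "charge", "wait"]
    print(choose_action(perceive(env), actions, estimate))  # charge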

AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability, optimization and logic. AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.

History

Main articles: History of artificial intelligence and Timeline of artificial intelligence

The field was born at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English. By the mid-1960s their research was heavily funded by DARPA[13] and they would make extraordinary predictions about their work:

1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do"
1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

These predictions, and many like them, would not come true. They had failed to anticipate the difficulty of some of the problems they faced: the lack of raw computer power, the intractable combinatorial explosion of their algorithms, the difficulty of representing commonsense knowledge and doing commonsense reasoning, the incredible difficulty of perception and motion, and the failings of logic. In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, DARPA cut off all undirected, exploratory research in AI. This was the first AI Winter.

In the early 80s, the field was revived by the commercial success of expert systems and by 1985 the market for AI had reached more than a billion dollars. Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow. Minsky was right. Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began.

In the 90s AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas. The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.

Mechanisms

Expert systems were one of the earliest types of AI system. They are built around automated inference engines that perform forward and backward reasoning: based on certain conditions ("if"), the system infers certain consequences ("then").

In terms of consequences, AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers, however, also classify conditions before inferring actions, and therefore classification forms a central part of most AI systems.
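
A toy sketch of the classifier/controller distinction, using the "if shiny" examples above; the predicates and actions are invented for illustration:

    # Toy if-then rules illustrating the classifier/controller distinction.
    # "Classifier" rules infer a label; "controller" rules infer an action.
    def classify(percept):
        if percept.get("shiny"):
            return "diamond"          # "if shiny then diamond"
        return "unknown"

    def control(percept):
        label = classify(percept)     # controllers classify first...
        if label == "diamond":
            return "pick up"          # ...then infer an action
        return "ignore"

    print(classify({"shiny": True}))  # diamond
    print(control({"shiny": True}))   # pick up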

Classifiers make use of pattern recognition for condition matching. In many cases this does not imply an exact match, but rather the closest match. Techniques to achieve this divide roughly into two schools of thought: conventional AI and computational intelligence (CI).

Conventional AI research focuses on attempts to mimic human intelligence through symbol manipulation and symbolically structured knowledge bases. This approach limits the situations to which conventional AI can be applied. Lotfi Zadeh stated that "we are also in possession of computational tools which are far more effective in the conception and design of intelligent systems than the predicate-logic-based methods which form the core of traditional AI." These techniques, which include fuzzy logic, have become known as soft computing. These often biologically inspired methods stand in contrast to conventional AI and compensate for the shortcomings of symbolic AI.[27] These two methodologies have also been labeled as neats vs. scruffies, with neats emphasizing the use of logic and formal representation of knowledge while scruffies take an application-oriented, heuristic, bottom-up approach.[28]

Classifiers

Classifiers are functions that can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set.

When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are mainly statistical and machine learning approaches.

A wide range of classifiers are available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is, however, still more an art than a science.

The most widely used classifiers are the neural network, support vector machine, k-nearest neighbor algorithm, Gaussian mixture model, naive Bayes classifier, and decision tree.
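
As one concrete example from the list above, a minimal k-nearest-neighbour classifier; the observations, labels and choice of k are invented for illustration:

    # Minimal k-nearest-neighbour classifier: label a new observation by
    # majority vote among the k closest labelled observations.
    from collections import Counter
    import math

    def knn_classify(data, labels, query, k=3):
        nearest = sorted(range(len(data)),
                         key=lambda i: math.dist(data[i], query))
        votes = Counter(labels[i] for i in nearest[:k])
        return votes.most_common(1)[0][0]

    data = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
    labels = ["A", "A", "A", "B", "B", "B"]
    print(knn_classify(data, labels, (2, 2)))  # A
    print(knn_classify(data, labels, (7, 8)))  # B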

AI Types

Conventional AI

Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI and Good Old Fashioned Artificial Intelligence (GOFAI). Methods include:

Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on them.
Case-based reasoning: stores a set of problems and answers in an organized data structure called cases. A case-based reasoning system, upon being presented with a problem, finds the case in its knowledge base that is most closely related to the new problem and presents its solutions as an output with suitable modifications (a toy retrieval sketch appears below).
Bayesian networks
Behavior-based AI: a modular method of building AI systems by hand.
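
A toy sketch of the case-based reasoning cycle described above: retrieve the stored case closest to the new problem and reuse its solution. The cases and distance measure are invented placeholders, and a real system would also adapt the retrieved solution:

    # Toy case-based reasoning: retrieve the stored case closest to the new
    # problem and return its solution.
    cases = [
        {"problem": {"temp": 40, "rash": True},  "solution": "treatment_a"},
        {"problem": {"temp": 37, "rash": False}, "solution": "no_treatment"},
    ]

    def distance(p, q):
        return abs(p["temp"] - q["temp"]) + (p["rash"] != q["rash"])

    def solve(new_problem):
        best = min(cases, key=lambda c: distance(c["problem"], new_problem))
        # "Suitable modification" is domain-specific; here the solution is reused as-is.
        return best["solution"]

    print(solve({"temp": 39, "rash": True}))  # treatment_a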

Computational intelligence

Computational intelligence involves iterative development or learning (e.g., parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing. Subjects in computational intelligence as defined by the IEEE Computational Intelligence Society mainly include:

Neural networks: trainable systems with very strong pattern recognition capabilities.
Fuzzy systems: techniques for reasoning under uncertainty; widely used in modern industrial and consumer product control systems, and capable of working with concepts such as 'hot', 'cold', 'warm' and 'boiling'.
Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to the problem. These methods most notably divide into evolutionary algorithms (e.g., genetic algorithms) and swarm intelligence (e.g., ant algorithms); a toy genetic algorithm sketch appears below.

With hybrid intelligent systems, attempts are made to combine these two groups. Expert inference rules can be generated through neural networks or production rules from statistical learning, such as in ACT-R or CLARION. It is thought that the human brain uses multiple techniques to both formulate and cross-check results. Thus, systems integration is seen as promising and perhaps necessary for true AI, especially the integration of symbolic and connectionist models (e.g., as advocated by Ron Sun).
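
A toy sketch of the evolutionary idea noted above: a small genetic algorithm that evolves bit strings toward an all-ones target using selection, crossover and mutation. The target, population size and mutation rate are arbitrary illustrative choices:

    # Toy genetic algorithm: evolve bit strings toward an all-ones target
    # using selection, crossover and mutation.
    import random

    LENGTH, POP, GENS = 20, 30, 60

    def fitness(bits):
        return sum(bits)  # count of ones; maximum is LENGTH

    def mutate(bits, rate=0.02):
        return [1 - b if random.random() < rate else b for b in bits]

    def crossover(a, b):
        cut = random.randrange(1, LENGTH)
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]  # survival of the fittest
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP - len(parents))]
        pop = parents + children

    print(fitness(max(pop, key=fitness)), "of", LENGTH)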

GOFAI research is often done in programming languages such as Prolog or Lisp. Matlab and Lush (a numerical dialect of Lisp) include many specialist probabilistic libraries for Bayesian systems. AI research often emphasises rapid development and prototyping, using such interpreted languages to enable rapid command-line testing and experimentation. Real-time systems are however likely to require dedicated optimized software.

Many expert systems are organized collections of such if-then statements, called productions. These can include stochastic elements, producing intrinsic variation, or rely on variation produced in response to a dynamic environment.

Research challenges

A legged league game from RoboCup 2004 in Lisbon, Portugal.

The €800 million EUREKA Prometheus Project on driverless cars (1987–1995) showed that fast autonomous vehicles, notably those of Ernst Dickmanns and his team, can drive long distances (over 100 miles) in traffic, automatically recognizing and tracking other cars through computer vision and passing slower cars in the left lane. But the challenge of safe door-to-door autonomous driving in arbitrary environments will require additional research.

In the post-dot-com boom era, some search engine websites use a simple form of AI to provide answers to questions entered by the visitor. Questions such as What is the tallest building? can be entered into the search engine's input form, and a list of answers will be returned.

AI in other disciplines

Philosophy

Main articles: Philosophy of artificial intelligence and Ethics of artificial intelligence

The strong AI vs. weak AI debate ("can a man-made artifact be conscious?") is still a hot topic amongst AI philosophers. This involves philosophy of mind and the mind-body problem. Most notably Roger Penrose in his book The Emperor's New Mind and John Searle with his "Chinese room" thought experiment argue that true consciousness cannot be achieved by formal logic systems, while Douglas Hofstadter in Gödel, Escher, Bach and Daniel Dennett in Consciousness Explained argue in favour of functionalism. In many strong AI supporters' opinions, artificial consciousness is considered the holy grail of artificial intelligence. Edsger Dijkstra famously opined that the debate had little importance: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

Epistemology, the study of knowledge, also makes contact with AI, as engineers find themselves debating similar questions to philosophers about how best to represent and use knowledge and information (e.g., semantic networks).

Neuro-psychology

Main article: Cognitive science

Techniques and technologies in AI which have been directly derived from neuroscience include neural networks, Hebbian learning and the relatively new field of Hierarchical Temporal Memory, which simulates the architecture of the neocortex.
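
A minimal sketch of the Hebbian rule mentioned above (weights strengthen when pre- and post-synaptic units are active together, Δw = η·x·y); the activity pairs are invented:

    # Hebbian learning: strengthen the weight between two units in
    # proportion to how often they are active together (dw = eta * x * y).
    eta = 0.1                                    # learning rate
    patterns = [(1, 1), (1, 1), (1, 0), (0, 1)]  # (pre, post) activity pairs

    w = 0.0
    for x, y in patterns:
        w += eta * x * y   # weight grows only when both units are active

    print(round(w, 2))     # 0.2 -> two co-active presentations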

Computer Science

Notable examples include the languages LISP and Prolog, which were invented for AI research but are now used for non-AI tasks. Hacker culture first sprang from AI laboratories, in particular the MIT AI Lab, home at various times to such luminaries as John McCarthy, Marvin Minsky, Seymour Papert (who developed Logo there) and Terry Winograd (who abandoned AI after developing SHRDLU).

Business

Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition (BBC News, 2001).[32] A medical clinic can use artificial intelligence systems to organize bed schedules, make a staff rotation, and provide medical information. Many practical applications are dependent on artificial neural networks, networks that pattern their organization in mimicry of a brain's neurons, which have been found to excel in pattern recognition. Financial institutions have long used such systems to detect charges or claims outside of the norm, flagging these for human investigation. Neural networks are also being widely deployed in homeland security, speech and text recognition, medical diagnosis (such as in Concept Processing technology in EMR software), data mining, and e-mail spam filtering.

Robots have become common in many industries. They are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive, where a lapse in concentration may lead to mistakes or accidents, and in jobs which humans may find degrading. General Motors uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan is the world leader in using and producing robots. In 1995, 700,000 robots were in use worldwide, over 500,000 of which were from Japan.[33]

Fiction

Main article: Artificial intelligence in fiction

In science fiction, AI is often portrayed as an upcoming power trying to overthrow human authority, usually in the form of futuristic humanoid robots. Best known examples include the films The Terminator and The Matrix, as well as TV shows such as the re-imagined Battlestar Galactica series.

Another common theme is humanity's suspicion and hatred of AIs, and the AIs' attempts to gain human acceptance. Films include Bicentennial Man, A.I. Artificial Intelligence and The Iron Giant. This concept is also explored in the Uncanny Valley hypothesis.

Isaac Asimov wrote stories in which engineers understood these potential problems and designed their robots accordingly. Positive examples of AIs include Robby from Forbidden Planet, R2-D2, C-3PO and Data from Star Trek.

The inevitability of the integration of AI into human society is also argued by some science and futurist writers such as Kevin Warwick and Hans Moravec, and in the manga Ghost in the Shell.

Toys and games

The 1990s saw some of the first attempts to mass-produce basic artificial intelligence aimed at the home, for education or leisure. This prospered greatly with the Digital Revolution and helped introduce people, especially children, to a life of dealing with various types of AI, specifically in the form of Tamagotchis and Giga Pets, the Internet (for example, basic search engine interfaces are one simple form), and the first widely released robot, Furby. A mere year later, an improved type of domestic robot was released in the form of Aibo, a robotic dog with intelligent features and autonomy.