Reverse Turing test

From Wikipedia, the free encyclopedia

A reverse Turing test is a Turing test[1] in which failure suggests that the test-taker is human, while success suggests the test-taker is automated.

Conventionally, the Turing test is conceived as having a human judge and a computer subject that attempts to appear human.

Reversal of objective

Arguably the standard form of the reverse Turing test is one in which the subjects attempt to appear to be a computer rather than a human.

A formal reverse Turing test follows the same format as a Turing test: human subjects attempt to imitate the conversational style of a conversation program. Doing this well involves deliberately ignoring, to some degree, the meaning of the conversation that is immediately apparent to a human, and simulating the kinds of errors that conversational programs typically make. Arguably unlike the conventional Turing test, this is most interesting when the judges are very familiar with conversation programs, and could therefore, in a regular Turing test, very rapidly tell the difference between a computer program and a human behaving normally.
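
As an illustration, the following Python sketch (purely hypothetical, and not drawn from any program actually used in such tests) shows the kind of shallow pattern-matching behaviour that classic conversation programs in the style of ELIZA exhibit, and that a human subject in a reverse Turing test would try to imitate:

    import re

    # Minimal ELIZA-style responder (hypothetical sketch; not the program used in
    # any actual reverse Turing test). It answers by shallow keyword matching and
    # falls back to a canned remark when nothing matches.
    RULES = [
        (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
        (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
    ]
    FALLBACK = "Please go on."

    def respond(utterance: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                # The matched text is echoed without rewriting pronouns, a
                # characteristic error a human imitator would reproduce.
                return template.format(*match.groups())
        return FALLBACK

    print(respond("I am worried about my exam"))   # How long have you been worried about my exam?
    print(respond("The weather is nice today"))    # Please go on.

The canned fallback and the failure to rewrite pronouns in the echoed text are exactly the kinds of characteristic mistakes that a well-informed imitator would reproduce.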

The humans who perform best in the reverse Turing test are those who know computers best, and so know the types of errors that computers can be expected to make in conversation. There is much shared ground between the skill of the reverse Turing test and the skill of mentally simulating a program's operation in the course of computer programming and especially debugging. As a result, programmers (especially hackers) will sometimes indulge in an informal reverse Turing test for recreation.

An informal reverse Turing test involves an attempt to simulate a computer without the formal structure of the Turing test. The judges of the test are typically not aware in advance that a reverse Turing test is occurring, and the test subject attempts to elicit from the 'judges' (who, correctly, think they are speaking to a human) a response along the lines of "is this really a human?". Describing such a situation as a "reverse Turing test" typically occurs retroactively.

There are also cases of accidental reverse Turing tests, occurring when a programmer is in a sufficiently non-human mood that their conversation unintentionally resembles that of a computer.[citation needed] In these cases the description is invariably retroactive and humorously intended. The subject may be described as having passed or failed a reverse Turing test, or as having failed a Turing test. The latter description is arguably more accurate in these cases; see also the next section.

Failure by control subjects

Since Turing test judges are sometimes presented with genuinely human subjects, as a control, it inevitably occurs that a small proportion of such control subjects are judged to be computers. This is considered humorous and often embarrassing for the subject.[citation needed]

This situation may be described literally as the human "failing the Turing test", since a computer (the intended subject of the test) achieving the same result would likewise be described as having failed. The same situation may also be described as the human "failing the reverse Turing test", because considering the human to be the subject of the test involves reversing the roles of the real and control subjects.[citation needed]

Judgement by computer

The term "reverse Turing test" has also been applied to a Turing test (test of humanity) that is administered by a computer. In other words, a computer administers a test to determine if the subject is or is not human. Such procedures, called CAPTCHAs, are used in some anti-spam systems to prevent automated bulk use of communications systems.

The use of CAPTCHAs is controversial.[2] Circumvention methods exist that reduce their effectiveness, and many implementations (particularly those designed to counter circumvention) are inaccessible to humans with disabilities or are simply difficult for humans to pass.

Note that "CAPTCHA" is an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart" so that the original designers of the test regard the test as a Turing test to some degree.

Judgement of sufficient input

An alternative conception of a reverse Turing test is to use the test to determine whether sufficient information is being transmitted between the tester and the subject. For example, if the information sent by the tester is insufficient for a human doctor to make an accurate diagnosis, then a medical diagnostic program could not be blamed for also failing to diagnose accurately from the same information.[citation needed]
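
A hypothetical Python sketch of this formulation (all names and data here are invented for illustration): the program's errors are counted against it only on cases where a human expert, given exactly the same input, reaches the correct diagnosis.

    def blameworthy_errors(cases, human_diagnose, program_diagnose):
        """Return the cases the program got wrong but a human got right from the same input."""
        failures = []
        for inputs, true_diagnosis in cases:
            human_ok = human_diagnose(inputs) == true_diagnosis
            program_ok = program_diagnose(inputs) == true_diagnosis
            if human_ok and not program_ok:
                failures.append((inputs, true_diagnosis))
            # If the human also fails, the input is judged insufficient and the
            # program's error is not held against it.
        return failures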

This formulation is of particular use in developing artificial intelligence programs, because it gives an indication of the input needed by a system that attempts to emulate human activities.[citation needed][3]

References

  1. ^ Albury, W. R. (June 1996). "Claude Bernard: Rationalité d'une méthode. Pierre Gendron". Isis. 87 (2): 372–373. doi:10.1086/357537. ISSN 0021-1753.
  2. ^ "Techi".
  3. ^ "What Is Machine Learning and Why Is It Important?". Enterprise AI. Retrieved 2023-07-18.