
Imageability

From Wikipedia, the free encyclopedia

Imageability is a measure of how easily a physical object, word or environment will evoke a clear mental image in the mind of any person observing it.[1][2] It is used in architecture and city planning, in psycholinguistics,[3] and in automated computer vision research.[4] In automated image recognition, training models to connect images with concepts that have low imageability can lead to biased and harmful results.[4]

History and components

Kevin A. Lynch first introduced the term "imageability" in his 1960 book The Image of the City.[1][5] In the book, Lynch argues that cities contain a key set of physical elements that people use to understand the environment, orient themselves within it, and assign it meaning.[6]

Lynch argues that the five key elements that impact the imageability of a city are Paths, Edges, Districts, Nodes, and Landmarks.

  • Paths: channels in which people travel. Examples: streets, sidewalks, trails, canals, railroads.
  • Edges: objects that form boundaries around space. Examples: walls, buildings, shoreline, curbstone, streets, and overpasses.
  • Districts: medium to large areas people can enter into and out of that have a common set of identifiable characteristics.
  • Nodes: large areas people can enter that serve as the foci of the city, neighborhood, district, etc.
  • Landmarks: memorable points of reference that people cannot enter. Examples: signs, mountains, and public art.[1]

In 1914, half a century before The Image of the City was published, Paul Stern discussed a concept similar to imageability in the context of art. Stern, in Susanne Langer's Reflections on Art, names the attribute that describes how vividly and intensely an artistic object could be experienced "apparency".[7]

In computer vision

Automated image recognition was developed by using machine learning to find patterns in large, annotated datasets of photographs, such as ImageNet. Images in ImageNet are labelled using concepts from WordNet. Concepts that are easily expressed verbally but do not correspond to a physical object, like "early", are considered less "imageable" than nouns referring to physical objects, like "leaf". Training AI models to associate low-imageability concepts with specific images can lead to problematic bias in image recognition algorithms. This has particularly been critiqued as it relates to the "person" category of WordNet, and therefore also of ImageNet. Trevor Paglen and Kate Crawford demonstrated in their essay "Excavating AI" and their art project ImageNet Roulette how this leads to photos of ordinary people being labelled by AI systems as "terrorists" or "sex offenders".[8]
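
ImageNet identifies each class by a WordNet ID formed from the synset's part-of-speech letter followed by its eight-digit offset. The following is a minimal sketch of that correspondence, assuming NLTK with its WordNet corpus installed; the example words and the two-sense cutoff are illustrative rather than taken from the cited sources.

    # Sketch: map words to WordNet-style IDs of the kind ImageNet uses as labels.
    # Requires: pip install nltk, then nltk.download("wordnet")
    from nltk.corpus import wordnet as wn

    for word in ["leaf", "early"]:
        # Look at the first two WordNet senses of each word.
        for synset in wn.synsets(word)[:2]:
            # WordNet ID in ImageNet's format: part-of-speech letter + zero-padded offset.
            # ImageNet labels come only from noun synsets (IDs starting with "n").
            wnid = f"{synset.pos()}{synset.offset():08d}"
            print(word, wnid, synset.definition())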

Images in datasets are often labelled as having a certain level of imageability. As described by Kaiyu Yang, Fei-Fei Li and co-authors, this is often done following criteria from Allan Paivio and collaborators' 1968 psycholinguistic study of nouns.[3] Yang et al. write that dataset annotators tasked with labelling imageability "see a list of words and rate each word on a 1-7 scale from 'low imagery' to 'high imagery'".[4]
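
As a rough illustration of that annotation scheme (a sketch, not code from the cited studies), per-annotator ratings on the 1-7 scale can be averaged into a single imageability score per word; the words and ratings below are invented.

    from statistics import mean

    # Hypothetical annotator ratings on the 1-7 "low imagery" to "high imagery" scale.
    ratings = {
        "leaf":  [7, 6, 7, 6],   # concrete noun: consistently rated high imagery
        "early": [2, 3, 1, 2],   # abstract concept: consistently rated low imagery
    }

    def imageability_score(word_ratings):
        """Average the ratings given by individual annotators."""
        return mean(word_ratings)

    for word, scores in ratings.items():
        print(f"{word}: {imageability_score(scores):.2f}")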

To avoid biased or harmful image recognition and image generation, Yang et al. recommend not training vision recognition models on concepts with low imageability, especially when the concepts are offensive (such as sexual or racial slurs) or sensitive (their examples for this category include "orphan", "separatist", "Anglo-Saxon" and "crossover voter"). Even "safe" concepts with low imageability, like "great-niece" or "vegetarian", can lead to misleading results and should be avoided.[4]
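
A minimal sketch of that filtering step, using invented concept data; the field names and the cutoff on the 1-7 scale are assumptions for illustration, not values prescribed by the paper.

    # Invented concepts with imageability scores (1-7 scale) and a flag marking
    # concepts deemed offensive or sensitive.
    concepts = [
        {"name": "leaf",        "imageability": 6.5, "unsafe": False},
        {"name": "great-niece", "imageability": 2.2, "unsafe": False},
        {"name": "separatist",  "imageability": 2.0, "unsafe": True},
    ]

    IMAGEABILITY_THRESHOLD = 4.0  # assumed cutoff, not taken from the paper

    def keep_for_training(concept):
        """Keep only concepts that are both safe and sufficiently imageable."""
        return not concept["unsafe"] and concept["imageability"] >= IMAGEABILITY_THRESHOLD

    training_concepts = [c["name"] for c in concepts if keep_for_training(c)]
    print(training_concepts)  # -> ['leaf']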

References

  1. ^ a b c Lynch, Kevin (1960). The Image of the City. Cambridge, Mass.: MIT Press. ISBN 0-262-12004-6. OCLC 230082.
  2. ^ Dellantonio, Sara; Job, Remo; Mulatti, Claudio (2014-04-03). "Imageability: now you see it again (albeit in a different form)". Frontiers in Psychology. 5: 279. doi:10.3389/fpsyg.2014.00279. ISSN 1664-1078. PMC 3982064. PMID 24765083.
  3. ^ a b Paivio, Allan; Yuille, John C.; Madigan, Stephen A. (1968). "Concreteness, imagery, and meaningfulness values for 925 nouns". Journal of Experimental Psychology. 76 (1, Pt.2): Suppl:1–25. doi:10.1037/h0025327. ISSN 0022-1015. PMID 5672258.
  4. ^ a b c d Yang, Kaiyu; Qinami, Klint; Fei-Fei, Li; Deng, Jia; Russakovsky, Olga (2020-01-27). "Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the ImageNet hierarchy". Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. FAT* '20. New York, NY, USA: Association for Computing Machinery. pp. 547–558. arXiv:1912.07726. doi:10.1145/3351095.3375709. ISBN 978-1-4503-6936-7. S2CID 209386709.
  5. ^ "Analyzing Lynch's City Imageability in the Digital Age". Planetizen - Urban Planning News, Jobs, and Education. Retrieved 2020-02-15.
  6. ^ Larice, Michael; Macdonald, Elizabeth, eds. (2013). The Urban Design Reader (Second ed.). London. ISBN 978-0-203-09423-5. OCLC 1139281591.
  7. ^ Langer, Susanne K. (1979) [1958]. Reflections on Art. New York: Arno Press. ISBN 0-405-10611-4. OCLC 4570406.
  8. ^ Crawford, Kate; Paglen, Trevor (2019). "Excavating AI: The Politics of Images in Machine Learning Datasets". The AI Now Institute.