
Article review: This Researcher Says AI Is Neither Artificial nor Intelligent

Kate Crawford, who holds positions at USC and Microsoft, says in a new book that even experts working on the technology misunderstand AI. 

Technology companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century phrenological skull archive to illustrate the natural resources, human sweat, and bad science underpinning some versions of the technology.

[Book cover: Atlas of AI by Kate Crawford. Link to book review.]

Crawford, a professor at the University of Southern California and researcher at Microsoft, says many applications and side effects of AI are in urgent need of regulation.

Crawford recently discussed these issues with WIRED senior writer Tom Simonite. An edited [and further excerpted] transcript follows.

KATE CRAWFORD: It [AI] is presented as this ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.

You take on that myth by showing how AI is constructed. Like many industrial processes, it turns out to be messy. Some machine learning systems are built with hastily collected data, which can cause problems like face recognition services that are more error prone on minorities.

We need to look at the nose-to-tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside, or concern for privacy. It was just “raw” material, reused across thousands of projects.

This evolved into an ideology of mass data extraction, but data isn’t an inert substance—it always brings a context and a politics. 

You trace the roots of emotion recognition software to dubious science funded by the Department of Defense in the 1960s. A recent review of more than 1,000 research papers found no evidence a person’s emotions can be reliably inferred from their face.

Emotion detection represents the fantasy that technology will finally answer questions that we have about human nature that are not technical questions at all. This idea that’s so contested in the field of psychology made the jump into machine learning because it is a simple theory that fits the tools. Recording people’s faces and correlating that to simple, predefined emotional states works with machine learning—if you drop culture and context, and the fact that you might change the way you look and feel hundreds of times a day.

We’ve seen research focused too narrowly on technical fixes and narrow mathematical approaches to bias, rather than a wider-lensed view of how these systems integrate with complex and high stakes social institutions like criminal justice, education, and health care. I would love to see research focus less on questions of ethics and more on questions of power. These systems are being used by powerful interests who already represent the most privileged in the world.

Is AI still useful?

Let’s be clear: Statistical prediction is incredibly useful; so is an Excel spreadsheet. But it comes with its own logic, its own politics, its own ideologies that people are rarely made aware of.

https://www.wired.com/story/researcher-says-ai-not-artificial-intelligent/

(My highlighting) The highlighted parts relate directly to my thinking about how AI/technology can be used across a general (diverse) population when it has been designed and programmed by fallible and inevitably biased humans. Fashions change, as do theories, perspectives, experiences, cultures, languages and dialects; add the effects of globalisation, first-world power and dominance, disparities between the global ‘North and South’ and the ‘East and West’, and religious and political influence, and the question becomes: who is building and programming AI? As Crawford says in her final comment, AI “comes with its own logic, its own politics, its own ideologies that people are rarely made aware of”, and this is one of my main concerns. How can this be mitigated? Should we (users/educators) be cognisant of these issues of power and bias when we choose our tools? Should we educate our learners to be critical, to always consider minority perspectives, and to examine the tools they/we use for what might be missed or not considered, and for how those tools ensure that power (and knowledge) is wielded by those with conflicting interests?

A Leve Reflections: 1 May, 2021