(comment & excerpts from…) Artificial intelligence research may have hit a dead end

“Misfired” neurons might be a brain feature, not a bug — and that’s something AI research can’t take into account

By THOMAS NAIL
APRIL 30, 2021 10:00PM (UTC)

https://www.salon.com/2021/04/30/why-artificial-intelligence-research-might-be-going-down-a-dead-end/

[…]  artificial intelligence researchers and scientists are busy trying to design “intelligent” software programmed to do specific tasks. There is no time for daydreaming.

Or is there? What if reason and logic are not the source of intelligence, but its product? What if the source of intelligence is more akin to dreaming and play?

Recent research into the “neuroscience of spontaneous fluctuations” points in this direction. If true, it would be a paradigm shift in our understanding of human consciousness. It would also mean that just about all artificial intelligence research is heading in the wrong direction.

Yet all approaches have one thing in common: they treat intelligence computationally, i.e., like a computer with an input and output of information. 

Narrow AI excels at accomplishing specific tasks in a closed system where all possibilities are known. It is not creative and typically breaks down when confronted with novel situations. On the other hand, researchers define “general AI” as the innovative transfer of knowledge from one problem to another.

Decades of neuroscience have experimentally proven that neurons can change their function and firing thresholds, unlike transistors or binary information. It’s called “neuroplasticity,” and computers do not have it.  

Spontaneous fluctuations are neuronal activities that occur in the brain even when no external stimulus or mental behavior correlates to them. These fluctuations make up an astounding 95% of brain activity while conscious thought occupies the remaining 5%. In this way, cognitive fluctuations are like the dark matter or “junk” DNA of the brain. They make up the biggest part of what’s happening but remain mysterious.   

Neuroscientists have known about these unpredictable fluctuations in electrical brain activity since the 1930s, but have not known what to make of them. Typically, scientists have preferred to focus on brain activity that responds to external stimuli and triggers a mental state or physical behavior. They “average out” the rest of the “noise” from the data.

This is why computer engineers, just like many neuroscientists, go to great lengths to filter out “background noise” and “stray” electrical fields from their binary signal. 

This is a big difference between computers and brains. For computers, spontaneous fluctuations create errors that crash the system, while for our brains, it’s a built-in feature.    

What if noise is the new signal? What if these anomalous fluctuations are at the heart of human intelligence, creativity, and consciousness? 

There is no such thing as matter-independent intelligence. Therefore, to have conscious intelligence, scientists would have to integrate AI in a material body that was sensitive and non-deterministically responsive to its anatomy and the world. Its intrinsic fluctuations would collide with those of the world like the diffracting ripples made by pebbles thrown in a pond. In this way, it could learn through experience like all other forms of intelligence without pre-programmed commands. 

In my view, there will be no progress toward human-level AI until researchers stop trying to design computational slaves for capitalism and start taking the genuine source of intelligence seriously: fluctuating electric sheep.

My comment/reflections…

Yes, I read this and excerpted elements that resonated particularly strongly with me. Whenever I hear discussions about AI, I have misgivings. This article helps me to articulate some of these.

Notions such as creativity, addressing ‘novel situations’, and going beyond ‘what is known’, or programmed, to find novel solutions that may not already have been attempted. A “closed system where all possibilities are known” is simply a translation of human fallibility, with all its potential biases and blind spots, into what the author calls “computational slaves for capitalism”. One that works faster, cheaper and more efficiently, but without the potential for the fluctuations and ‘noise’ to get in the way.

Well, this ‘noise’, to me, is the human condition, and I believe it contributes to the wonders of diversity, of difference, of creativity, and even to what might be considered bohemian or eccentric responses and ways of being that provide the colours of our world.

In terms of the origins of the new technological and AI machinery, what would it mean for the ethics, morals and understandings of ‘right and wrong’, good/bad, and the acceptability of ‘solutions’, if any nation, sect or belief system of the world were able to program and develop it? Any religion, any philosophy, any group or individual? We know who is ruling the development of AI right now. Is that ok with you and me? With our neighbours, our extended families, our region or our place in the world? Have we thought about why this might be, or how it might feel different if our own belief systems were completely incompatible or in opposition?

Considering immersive technology in relation to pre-service teacher education*

*Note that I am referring to classroom pedagogical responses in general, not subject- or discipline-specific ones. This can be seen from the description of the subject in which I am presently teaching:

This subject will introduce you to key philosophical, sociological, political and historical underpinnings of education and educational research in the Australian context. 

 University Handbook entry for Educational Foundations EDUC90901.

Or prior to that: Health, Wellbeing and Inclusive Education:

Students will engage in an exploration of the relationship between learning, learning outcomes and well-being. Individual wellbeing and identity and wellbeing from ecological, Indigenous and intercultural perspectives will be explored. Students will investigate learning environments and how these impact on learner well-being and in turn learning.

The unit also explores ways to create safe, secure and nurturing learning environments for all children and young people that enhance positive learner engagement.

Unit | Deakin (my emphasis)

In our #EDUC90970 Facilitating Online Learning course (professional development program) we have been exploring possibilities for incorporating Immersive Technology such as Virtual Reality (VR) and Extended Reality (XR) as part of the Higher Education (HE) learning experience.

See my previous post: MODULE 7: IMMERSIVE REALITY – PRE AND POST REFLECTIONS

Today I joined the inaugural MCSHE SoTEL Showcase #1 Webinar in which four presentations were given by UniMelbourne academic staff about how they have been using ‘Technology Enhanced Learning’ in their various fields.

Note: SoTEL – SOTEL | Melbourne CSHE Scholarship of Technology Enhanced Learning research network (unimelb.edu.au)

I’m always impressed when I see what others are doing to set up innovative programmes in their discipline areas. It is clear from each of the presentations that whilst there are things to learn, there are also important considerations regarding purpose – using the technology that is available, as well as the educational/professional outcomes being sought. Each of the presenters was working in a team – clearly a crucial aspect in combining technical expertise of different kinds. Having a great idea for building a platform and tool for your learners also requires the technical expertise of others (not to mention the funding required), so a great idea in and of itself is not going to be enough to ‘get it off the ground’.

As per DBR (Design Based Research) Principles:

Design Principle 2: The Collaboration that Is Essential to Instantiating Authentic Tasks-Based Learning Strategies Online Is a New Experience for Most Learners and Must Be Carefully Nurtured (Kartoğlu et al., 2020)

… collaboration is crucial. This principle is sort of obvious, but in my personal situation (still working from home, teaching contract work only, no real links or collaborative possibilities in my own faculty, let alone across all of the HE institutions that I have worked at over the previous years), this is an ongoing challenge.

I have been producing online materials for years, most, if not all, on an individual basis, or working with a team that dissipates at the end of my contract. Maybe this is my own inability to make the right connections, or to follow up on possible collaborative opportunities. This always amuses me, as I teach and operate in ways that always encourage and try to facilitate any opportunities for students to work together in teams, and in fact I get very envious of those who manage to co-publish and co-research together! (Personality? Independence? Pride? Stubbornness? Inability to commit? Or what about practical elements: the ongoing need to make money, support a family, run a household, manage a chronic health condition?)

But back to Pre-Service Teacher Education. When pre-service teacher educators talk about ‘immersive’ or ‘experiential’ learning, they are usually referring to experiences in a classroom: practical experience, practicums or placements. That is, traditional modes of ‘placement in situ’, with a ‘mentor’ to help guide them (and also to formally assess their competence). This experience gives pre-service teachers an opportunity to observe interactions, to get to know and to work directly with the learners, to experience the moment-to-moment pedagogical decision-making of practising teachers, and, most interestingly for me, to get to understand some of the complexities in the classroom beyond their teaching discipline.

Graziano (2017) notes the limited literature on the use of contemporary immersive technologies with preservice teachers. He discusses a small (N=27) study of undergraduate preservice teachers’ reactions to creating and interfacing with immersive technology. Of course they found it ‘relevant to their needs and interests’. However, as I keep finding, this related to ‘teacher instructional design’ and to teacher educators becoming familiar with immersive technologies in order to integrate them into teacher preparation curricula. Important work, but not my key interest.

https://www.researchgate.net/publication/319171811_Immersive_Technology_Motivational_Reactions_from_Preservice_Teachers

What I am looking for…

I can find numerous articles about bringing technology into pre-service teacher education to improve teachers’ skills in integrating technology into lesson planning, instruction, assessment, student interaction and collaborative work, discipline immersion… But in searching, I have realised that what I am looking for is more particular to my area of interest and expertise, and this relates to understanding and working appropriately with difference in the classroom. I’m not referring to psychology here; I am thinking about social situations, inclusive practices, and culturally responsive and relational pedagogies. This is where I want to explore possibilities to integrate immersive technology, particularly since the 2020 Covid-19 lockdowns, a time in which all face-to-face teaching was put on pause and actual teacher practicums were cancelled and/or delayed. I was working with pre-service teachers in the second year of their Master of Teaching who had not been in a classroom since their own schooling.

The AITSL (Australian Institute for Teaching & School Leadership) site ‘articulates what classroom practice looks like…’ and provides a resource guide that ‘aid[s] classroom observation’, as does ACARA (the Australian Curriculum, Assessment and Reporting Authority). There are many other online samples (short videos of interactions, examples from practice, ‘expert’ and novice teachers talking about their experiences, etc.) that have been produced to inform pre-service and practising teachers and to assist them in ‘gathering evidence’ to demonstrate their competence in meeting the AITSL Australian Professional Standards for Teachers (APSTs).

(See relevant discussion re AITSL in previous post: Module-7-Immersive-Reality-Pre-and-Post-Reflections).

Numerous video case studies are integrated with the materials and advice provided on the AITSL site. They are expertly produced and provide selected examples to illustrate practice.

https://www.aitsl.edu.au/deliver-ite-programs/learn-about-ite-accreditation-reform/improved-professional-experience-for-ite-students/effective-professional-experience-case-studies

State and Territory Departments of Education host a huge range of online materials, videos, links, case studies, classroom exemplars, curriculum support materials, etc. (Find links here…) The number and range of these sources are, frankly, quite overwhelming, but useful to access as required, or as advised.

Halt … and suspend !!!!! (to be continued…)

I’m going to stop adding to this post now as I am moving further off topic, and in fact, being prompted to write and publish other posts while this one awaits in draft form! (See new page: messy filing cabinets…my mind)

REFERENCES:

Graziano, K. J. (2017). Immersive Technology: Motivational Reactions from Preservice Teachers. Internet Learning, 6(1). DOI: 10.18278/il.6.1.4

Kartoğlu, Ü., Siagian, R. C., & Reeves, T. C. (2020). Creating a “good clinical practices inspection” authentic online learning environment through educational design research. TechTrends, 1–12. DOI: 10.1007/s11528-020-00509-0

Article review: This Researcher Says AI Is Neither Artificial nor Intelligent

Kate Crawford, who holds positions at USC and Microsoft, says in a new book that even experts working on the technology misunderstand AI. 

Technology companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century phrenological skull archive to illustrate the natural resources, human sweat, and bad science underpinning some versions of the technology.


Crawford, a professor at the University of Southern California and researcher at Microsoft, says many applications and side effects of AI are in urgent need of regulation.

Crawford recently discussed these issues with WIRED senior writer Tom Simonite. An edited [and further excerpted] transcript follows.

KATE CRAWFORD: It [AI] is presented as this ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.

You take on that myth by showing how AI is constructed. Like many industrial processes, it turns out to be messy. Some machine learning systems are built with hastily collected data, which can cause problems like face recognition services being more error-prone for minorities.

We need to look at the nose to tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside, or concern for privacy. It was just “raw” material, reused across thousands of projects.

This evolved into an ideology of mass data extraction, but data isn’t an inert substance—it always brings a context and a politics. 

You trace the roots of emotion recognition software to dubious science funded by the Department of Defense in the 1960s. A recent review of more than 1,000 research papers found no evidence a person’s emotions can be reliably inferred from their face.

Emotion detection represents the fantasy that technology will finally answer questions that we have about human nature that are not technical questions at all. This idea that’s so contested in the field of psychology made the jump into machine learning because it is a simple theory that fits the tools. Recording people’s faces and correlating that to simple, predefined, emotional states works with machine learning—if you drop culture and context, and the fact that you might change the way you look and feel hundreds of times a day.

We’ve seen research focused too narrowly on technical fixes and narrow mathematical approaches to bias, rather than a wider-lensed view of how these systems integrate with complex and high stakes social institutions like criminal justice, education, and health care. I would love to see research focus less on questions of ethics and more on questions of power. These systems are being used by powerful interests who already represent the most privileged in the world.

Is AI still useful?

Let’s be clear: Statistical prediction is incredibly useful; so is an Excel spreadsheet. But it comes with its own logic, its own politics, its own ideologies that people are rarely made aware of.

https://www.wired.com/story/researcher-says-ai-not-artificial-intelligent/

(My highlighting) The highlighted parts relate directly to my thinking about how AI/technology can be used across a general (diverse) population when it has been designed and programmed by fallible and inevitably biased humans. As fashions, theories, perspectives, experiences, cultures, languages and dialects change, and amid the effects of globalisation, first-world power and dominance, disparities between the global ‘North and South’ and the ‘East and West’, and religious and political influence, who is building and programming AI? As the author says in the final comment, AI “comes with its own logic, its own politics, its own ideologies that people are rarely made aware of”, and this is one of my main concerns. How can this be mitigated? Should we (users/educators) be cognisant of these issues of power and bias when we choose our tools? Should we ensure we educate our learners to be critical, to always consider minority perspectives, and to consider the tools they/we use for what might be missed or not considered, or for how they support and ensure that power (and knowledge) is wielded by those with conflicting interests?

A Leve Reflections: 1 May, 2021