Perihelion Science Fiction

Sam Bellotto Jr.

Eric M. Jones
Associate Editor



A.I. Invasion or A.I. in Education?

By Jason M. Harley

“ARE YOU TRYING TO REPLACE teachers with artificial intelligence?”

I can’t count how many times I’ve been asked that question after academic talks, throughout my graduate studies, postdoctoral work, and tenure-track job interviews. But across the years and contexts, my answer has always been the same: “No.”

Fear, Incredulity, and What A.I. and Education is Not About

From my point of view, there are two striking aspects to this question. The first is its projection of fear, as people imagine rogue HAL-esque systems, GLaDOS, Skynet, Cylons, and/or voiceless, non-anthropomorphic appendages wagging about. (Inevitably, some people think robots when I talk about A.I.) Given the rich imagery science fiction has supplied us with, it’s no surprise that many people aren’t quick to jump on board a re-imagining of the classroom built around technology they have little experience interacting with, one associated with themes of world domination, the ruthless pursuit of narrow objectives, and thorny ethical questions about potential rights.

The question is a particularly interesting one given the use of computer metaphors in cognitive psychology to describe how we store and process information. Describing ourselves in computer terms certainly opens the door to thinking about A.I. in human terms.

This leads to what strikes me as the second implication in the question: incredulity. But a little skepticism makes sense. In fact, the way it is typically expressed as a follow-up to the initial question makes a lot of sense. Putting on an evolutionary psychologist’s hat for just a moment, it is more adaptive to identify and react to potential threats in an environment than to wait to conduct a detailed threat analysis.

In other words, it’s typically safer to listen to your gut (or, more accurately, your physiology) and get the hell out of that cold, synthetic classroom with matrix code dripping down the walls and a stuttering answering-machine voice informing you that you should relax as its Doctor Octopus-style tendrils dart toward you, chip clutched, laser pointer targeting the center of your head where the drill bit is aimed. It probably isn’t interested in debating the instructional merits of its (and soon to be your!) chipset anyway.

So when the fight or flight response has passed (and the above mental image has been shelved) and there are still people sitting in conference room chairs with furrowed brows and crossed arms, where is that skepticism coming from? If anything, popular science fiction has been presenting sophisticated A.I. for decades. Is it possible to both fear and doubt the sophistication of A.I.?

The answer seems to lie in the grand task of developing machines (I use the term loosely here) so sophisticated that they can effectively capture the complexity, nuance, and, most of all, the human warmth of a good teacher. Those warm and fuzzy qualities are central to our definitions of what constitutes an effective teacher; I and many others have reached this conclusion by probing people’s mental inventories of the qualities that separate the good educators from the cringe-worthy. And from my experience designing and evaluating advanced educational and training software, I can tell you that researchers’ and educators’ incredulity is well-founded when it comes to replicating the characteristics of a good teacher, though not necessarily for the same reasons.

The most immediate concern I’ve mentioned so far is the challenge of creating A.I.-driven instructors that are capable of mimicking warmth. What this entails, if nothing else, is creating emotionally intelligent A.I. The good news is, the scientific community has come a long way in recent years developing means to accurately detect users’ emotional (and other psychological) states, often unobtrusively.

What this means concretely is that a student can sit down in front of a computer and have their emotions read with everything from automatic facial recognition software (it’s not just Felicity Smoak who has those toys), to physiological signals of emotional arousal (e.g., spikes in heart rate and sweating from anxiety) captured by watch-like bracelets, to navigational and keystroke behavior. I’ve published extensively on emotion recognition and measurement in advanced computer-based environments, including A.I.-driven tutoring systems, and while there are challenges that require us to advance our theories and methods of analysis, we’re making exciting headway.
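To make the idea concrete, here is a deliberately simplified sketch of how a physiological detector of the kind described above might flag likely anxiety. It is not any real system’s pipeline: the signal names, baselines, and thresholds are invented for illustration.

```python
# Hypothetical sketch: flag possible anxiety from wearable signals.
# Signal names, baseline values, and spike thresholds are illustrative
# only, not drawn from any actual emotion-recognition system.

def flag_anxiety(heart_rate_bpm, skin_conductance_uS,
                 baseline_hr=70.0, baseline_sc=2.0):
    """Return True if both signals spike well above the student's baseline."""
    hr_spike = heart_rate_bpm > baseline_hr * 1.25      # ~25% over resting rate
    sc_spike = skin_conductance_uS > baseline_sc * 1.5  # sweating response
    return hr_spike and sc_spike

# A calm reading vs. an aroused one:
print(flag_anxiety(72, 2.1))   # False
print(flag_anxiety(95, 3.4))   # True
```

Real systems are far more sophisticated, typically calibrating baselines per student and fusing several channels (face, physiology, behavior) with machine-learned models rather than fixed thresholds.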

There is more to being perceived as warm and caring than the ability to detect a student’s emotions, however. And this is where things get messy. In this field (often referred to as affective computing), we are currently working to advance classifications and frameworks that can provide guidelines for designers of intelligent systems. Controlled scientific examinations of A.I.-driven systems delivering the sort of motivational and constructive comments that come to mind suggest that the answer is more complicated than one size fits all. This is almost always the answer in scientific research, especially in fields involving psychological processes and individual differences. We aren’t all the same, and we don’t respond the same way to different stimuli. (We don’t necessarily respond the same way to the same stimuli on different days, but that’s another discussion.)

Perhaps some of you are sighing in relief right about now, content that your kids (and probably grandkids) will have human teachers; that, for the moment, the nuances and variance of the human mind will escape rigorous decision-tree mapping (the planned responses an A.I. provides for given situations) and machine-learning techniques, even when educators, psychologists, and computer scientists combine our (mighty!) forces.

But you should know that replacing teachers isn’t our goal; at least, it isn’t mine, nor that of any colleague I know. I mentioned this at the beginning of the article, but it usually takes a second mention to see the arms uncross and the guarded expressions relax. So what’s the point? What’s up, Doc?

Tutoring. Intelligent tutoring.

Adaptive, Personalized Learning: Open Your Textbooks to A.I.

We’ve all felt (and likely been) behind others in our class at some point in our lives, some more often than others, and not all of us have (had) the money for the individualized help that a tutor could provide.

Enter technology. Enter a better question: what can artificial intelligence and other advanced technologies do for education? Now we’re cooking.

My answer to this question is two-fold. First, A.I. has the capacity to equalize access to individualized, supplementary instructional support. In the post-apprenticeship educational model, teachers have precious little time to spend helping struggling students in classes of thirty or more, often requiring students who need extra help demystifying a concept to seek out the assistance of a human tutor. Not every student can afford a human tutor; they aren’t cheap. Moreover, not all human tutors (even those who are great subject-matter experts) are good tutors. Content mastery doesn’t always translate to instructional mastery, something most of us are probably familiar with from having had a teacher at some point who knew their stuff but seemed clueless about how to share the wealth.

[A robot teaches Elroy Jetson in a classroom of the future, at right.]

These two limitations of human tutoring are opportunities for A.I.-based tutoring, henceforth referred to as intelligent tutoring systems (ITS). ITS are educational software programs that learners use to study academic material and receive feedback from dedicated, embedded A.I. in the form of text and/or audio messages. Feedback can range from suggestions to establish broader or more specific learning objectives in a studying session to recommendations to revisit content after a less-than-ideal score on a sub-unit quiz administered within the ITS.
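The feedback loop just described can be caricatured as a small rule table: compare a quiz score against thresholds and pick a message. The 60% and 85% cutoffs and the message wording below are invented for the sketch; real ITS select feedback from far richer student models.

```python
# Hypothetical sketch of rule-based ITS feedback after a sub-unit quiz.
# Thresholds and message strings are illustrative, not from any real system.

def quiz_feedback(score_pct, unit_name):
    """Map a quiz score (0-100) to a feedback message for the learner."""
    if score_pct < 60:
        return f"Consider revisiting '{unit_name}' before moving on."
    if score_pct < 85:
        return f"Good progress on '{unit_name}'; try the extra practice set."
    return f"Excellent work on '{unit_name}'! You're ready for the next unit."

print(quiz_feedback(45, "Factoring polynomials"))
```

In practice, such rules would also weigh the learner’s goals, history, and detected emotional state, which is precisely where the design questions discussed above get hard.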

Feedback can also take on an emotional-support dimension, as mentioned, if the ITS is equipped with the channels to detect, and adequate programming to respond to, emotions such as anxiety, frustration, and boredom; the latter is viewed as the most commonly occurring problematic emotion in these and other advanced learning environments.

Returning to the affordability of ITS: most systems developed to date for primary, secondary, and tertiary education are available free of charge to students, though they might require an appointment in a laboratory. Moreover, ITS have flexed their muscles fostering learning and problem solving across an impressive range of educational topics, including teaching algebra, number factorization, microbiology, diagnostic reasoning (medical education), and cultural sensitivity training (military and corporate applications), amongst others. Aside from specialized military and corporate applications of ITS, these systems (generally) present opportunities to take advantage of free individualized support within or beyond class time.

An important note is that some of these systems have been purchased by companies with the infrastructure to distribute and manage them (e.g., set up servers) within and across different countries. While few things are truly free (“free-to-play” freemium games being an excellent contemporary example), the ideal distribution end-goal of ITS for general education would be a model where school boards pay for licenses for them, as they do with other educational software and subscription-based services and resources.

This leads us to a second advantage of ITS over human tutors and why school boards should purchase them as they become available: they are standardized, rigorously evaluated, educational tools for learning. Traditionally, ITS have been developed in university labs with the combined expertise of educational and cognitive psychologists, computer scientists, and discipline-specific-content experts (e.g., math or science teachers). The most advanced ITS have therefore been the subjects of numerous experiments and dozens (sometimes hundreds) of scientific analyses, presentations, and publications.

Consider the level of quality control of a system designed to tutor in this sort of environment in comparison to a well-intentioned and/or profit-seeking bright individual who is unlikely to have any formal tutorial training and, at best, has limited word-of-mouth endorsements.

The easy answer to the dichotomous question of “do ITS work?” is “yes.” Learning gains (positive increases in knowledge from pre-to-post test evaluations), improvements on standardized tests post-use, and increases in learning behaviors associated with better and deeper learning outcomes have all been found with these systems (in various combinations).
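One common way such pre-to-post learning gains are quantified is the normalized gain: the improvement achieved divided by the improvement that was possible. A minimal computation, with scores as percentages:

```python
# Normalized learning gain: (post - pre) / (100 - pre), a standard
# metric for pre/post test evaluations of instructional interventions.

def normalized_gain(pre_pct, post_pct):
    """Fraction of the possible improvement the learner actually achieved."""
    if pre_pct >= 100:
        return 0.0  # perfect pre-test leaves no room to improve
    return (post_pct - pre_pct) / (100 - pre_pct)

# A student moving from 40% to 70% closes half of the remaining gap:
print(normalized_gain(40, 70))  # 0.5
```

The appeal of the metric is that it puts students with different starting points on a comparable scale, which matters for exactly the question raised next: how well these systems serve struggling learners.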

But this question sidesteps one I consider even more meaningful: how effective are these systems at supporting struggling students? This is a question that needs more focus, especially in light of findings (including my own) that students with lower prior knowledge and different psychological characteristics (e.g., high predispositions to anxiety while studying, higher levels of neuroticism) react differently to various instructional interventions provided by these systems. More research and meta-analyses specially focused on this topic are required to identify features of systems empirically associated with improvements in learning, more adaptive learning behaviors, and psychological states. Given the diversity of learners, this won’t be an easy task, but is one that is critical to advancing ITS and implementing them in contexts where they can benefit learners who need them the most.

There are, however, approaches that are likely to appeal to a larger body of learners, and these include taking advantage of increasingly accessible technologies for fostering immersion and new ways of interacting with educational content.

Such strategies include embedding learning content in 3D virtual worlds and framing learning outcomes around a narrative (e.g., solving a mystery) using game engines such as Unity.

Mobile technology can also take education beyond the walls of a classroom or laboratory via applications designed to contrast the past with the present. Using a combination of overlay technology, historical images, texts, and video, new skills such as historical reasoning can be fostered by treating smartphones and tablets as digital looking glasses into the past. In such apps (e.g., mobile augmented reality), A.I. could serve to share wisdom from the past, and even provide historically accurate feedback by emulating the views, beliefs, and knowledge structures of past decades or centuries, opening the door to richer understandings and interpretations of, for example, past civilizations’ views through interactive dialogue.

Imagine debating with, rather than reading about, a controversial historical figure over a defining policy or strategic battle decision. (Sign me up for that homework!)

An A.I. Future Can be a Friendly One

Fortunately or unfortunately (I’ll let you decide for yourselves) the answer to better and more efficient learning for all isn’t developing a pill or cerebral chip to instantly “acquire” information. (And I’ll just pretend I can’t hear a fell wind blow down the corridor outside my office as the ghosts of deceased educational psychologists vent hot words in my general direction for even going there.) The answer isn’t offloading all our work onto robots and automated systems, either—however appealing that fantasy may be when it’s humans doing the offloading and not the other way around.

The answer is educational collaboration.

In closing, A.I. has proven itself to be more than just a tool to spit out data and recommendations. While A.I. is built and “taught” by humans, we can gain from having the pedagogical roles reversed. We might have been inadvertently taught to fear it by engaging and imaginative media (from books to TV), but that’s no reason not to explore the potential A.I. has to help level, and even advance, the playing field of education.

Indeed, no kid should be left behind. Can a teacher reasonably ensure that all kids and young adults in their classes have the one-on-one time needed to support their learning? And where are the lines in terms of what support can be offered? What is missing that A.I. and advanced technology can help accomplish? When a student says they’ve tried, what does that mean? What did they try? Where did they get stuck? What might have led to them reaching an impasse? Was it sloppy skimming? Not taking notes? Skipping exercises? An unidentified error that might not be caught if they don’t attend a remedial lunch hour session?

The reality is that ITS can bring different and highly advantageous tools to the table. While teachers might have eyes in the backs of their heads, those eyes don’t tend to extend into the libraries or onto the kitchen tables where homework and studying get done. Nor could they track, monitor, and integrate students’ learning behavior and the psychological states experienced during homework completion (or non-completion) and studying.

Can A.I. ensure that no child is left behind? Maybe not, but I believe that ITS and other advanced technology being developed and piloted in and outside of classrooms across the world stand to give students, especially those most at risk of falling and potentially staying behind, new and valuable tools to catch up, and even get ahead.

Further Reading

For more information on the topic of Artificial Intelligence in Education (AIED), check out the international AIED society, which publishes a peer-reviewed international journal through Springer and is affiliated with several peer-reviewed international conferences, including “Intelligent Tutoring Systems” and “Artificial Intelligence in Education”; proceedings from these conferences are published in Springer’s Lecture Notes in Computer Science and Lecture Notes in Artificial Intelligence, respectively. Please also feel free to check out my related publications indexed on my academic and personal websites. END

Jason M. Harley, Ph.D., is an assistant professor of educational technology and psychology at the University of Alberta, and author or co-author of twenty-five academic publications on advanced tech. Follow him on Twitter @JasonHarley07.

