The ELIZA Effect, VR, and CUIs

“The ELIZA effect, in computer science, is the tendency to unconsciously assume computer behaviors are analogous to human behaviors.” – Wikipedia

An experiment designed to evoke the ELIZA effect with a combination of 3D animation and a conversational user interface (CUI)
A chatterbot with manners and body language – idoru.js at http://idoru.ca

The ELIZA effect is named after a “chatterbot” called ELIZA that was developed between 1964 and 1966 at MIT. A “chatterbot” is a computer program that conducts a conversation.

ELIZA’s creator, Joseph Weizenbaum, described it as a “parody” of “the responses of a nondirectional psychotherapist in an initial psychiatric interview.” (Weizenbaum 1976, p. 188)

But to Weizenbaum’s surprise, people who interacted with the program were quick to develop an emotional relationship with it, even ascribing human-like motives to the simple algorithm: “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” (Weizenbaum 1976, p. 7)
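For context on just how little machinery was behind the illusion: an ELIZA-style chatterbot boils down to keyword patterns plus pronoun “reflection”. Here is a minimal sketch in JavaScript – purely illustrative, not Weizenbaum’s original script:

```javascript
// Minimal ELIZA-style responder: keyword patterns plus pronoun reflection.
// Illustrative only – not Weizenbaum's original script.
const reflections = { i: "you", my: "your", am: "are", me: "you", you: "I", your: "my" };

// Swap first- and second-person words so the reply mirrors the user's phrasing.
function reflect(text) {
  return text
    .toLowerCase()
    .split(/\s+/)
    .map((word) => reflections[word] || word)
    .join(" ");
}

const rules = [
  { pattern: /i need (.*)/i,  reply: (m) => `Why do you need ${reflect(m[1])}?` },
  { pattern: /i am (.*)/i,    reply: (m) => `How long have you been ${reflect(m[1])}?` },
  { pattern: /because (.*)/i, reply: () => "Is that the real reason?" },
  { pattern: /.*/,            reply: () => "Please, go on." } // fallback keeps the conversation moving
];

function respond(input) {
  for (const rule of rules) {
    const match = input.match(rule.pattern);
    if (match) return rule.reply(match);
  }
}

console.log(respond("I am worried about my project"));
// -> "How long have you been worried about your project?"
```

A handful of rules like these, and people will still read intent into the answers – that is the whole effect.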

One of the things people in the UX/UI field have started talking about is CUIs, or Conversational User Interfaces. Siri is one of the first examples that comes to mind these days: a chatterbot descended from ELIZA, equipped with voice recognition, speech synthesis, and access to the entire web. It is an excellent illustration of what a CUI is.

According to this article, CUIs are all the rage on the Chinese web – and will be all the rage here in the near future:

https://medium.com/life-learning/the-future-of-cui-isn-t-conversational-fa3d9458c2b5#.j10zyso24

This article also gives good background on the famous “Turing Test”, how it relates to the ELIZA effect, and how, as the range of user input an AI can observe increases (say, not just keys pressed, but body temperature), people become more willing to ascribe human-like motivations to it.

I’ve been working a little on this from the other direction.

While an ability to measure user reactions is key to establishing rapport with a user, so too are the gestures and body language the AI is able to exhibit. On a 2D interface, this won’t go far. But in a 3D virtual world, an artificial character has a decent shot at appearing as real and sentient as an actual person.

In a text chat – as with ELIZA – users can only imagine a real person sitting at a keyboard, answering their questions. In VR, they can imagine a real person chatting, moving, and gesturing, as though someone were driving the avatar remotely.

As a demo, I’ve set up a 3D “artificial character” that is about as simple as something with three dimensions in a VR environment can be: it is built from 9 spheres, and it carries a slight upgrade over a standard ELIZA-style chatterbot.
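To give a flavour of how little geometry is involved, here is a rough sketch of a sphere-built head – the choice of three.js, the arrangement, and the numbers are placeholders rather than the demo’s actual code:

```javascript
// Rough sketch of a character built entirely from spheres, using three.js.
// The arrangement and values are placeholders, not the demo's actual layout.
import * as THREE from "three";

function makeSphere(radius, color, x, y, z) {
  const mesh = new THREE.Mesh(
    new THREE.SphereGeometry(radius, 32, 16),
    new THREE.MeshStandardMaterial({ color })
  );
  mesh.position.set(x, y, z);
  return mesh;
}

// Group the spheres under one node so the whole head can be oriented at once.
function buildHead() {
  const head = new THREE.Group();
  head.add(makeSphere(1.0, 0xf0d0b0, 0, 0, 0));            // head
  head.add(makeSphere(0.18, 0xffffff, -0.35, 0.2, 0.85));  // left eye
  head.add(makeSphere(0.18, 0xffffff, 0.35, 0.2, 0.85));   // right eye
  head.add(makeSphere(0.08, 0x222222, -0.35, 0.2, 1.0));   // left pupil
  head.add(makeSphere(0.08, 0x222222, 0.35, 0.2, 1.0));    // right pupil
  head.add(makeSphere(0.1, 0x5a3a1a, -0.35, 0.55, 0.8));   // left brow
  head.add(makeSphere(0.1, 0x5a3a1a, 0.35, 0.55, 0.8));    // right brow
  head.add(makeSphere(0.12, 0xe0b090, 0, -0.1, 1.0));      // nose
  head.add(makeSphere(0.15, 0xa05050, 0, -0.5, 0.85));     // mouth
  return head;                                             // nine spheres in total
}
```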

Much as any rudimentary chatterbot uses a few simple techniques to establish the impression of engagement with the user, this demo uses a few simple techniques to establish the visual illusion of engagement. It maintains eye contact while its head tilts expressively, and it projects a range of vague emotional expressions, none based on a specific emotion, so that it appears more “real” than a static model with a chat interface.
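The attentive behaviour itself can be approximated with a per-frame update along these lines – again a rough sketch assuming three.js, with made-up motion values rather than the demo’s actual tuning:

```javascript
// Rough sketch of an "attentive avatar" update loop, assuming three.js.
// The motion values are illustrative, not the demo's actual tuning.
import * as THREE from "three";

const clock = new THREE.Clock();

// head: the group of spheres; camera: stands in for the user's viewpoint.
function updateAvatar(head, camera) {
  const t = clock.getElapsedTime();

  // 1. Eye contact: keep the head oriented toward the viewer every frame.
  head.lookAt(camera.position);

  // 2. Expressive head tilt: a slow, oscillating roll layered on top of the look-at.
  head.rotation.z += Math.sin(t * 0.5) * 0.15;

  // 3. Vague "emotional" motion: small drifting offsets that never settle,
  //    so the character reads as alive without acting out any specific emotion.
  head.position.y = Math.sin(t * 0.8) * 0.02 + Math.sin(t * 2.3) * 0.005;
}

// Call updateAvatar(head, camera) once per frame from the render loop.
```

Nothing here is sophisticated – the point is that a few cheap, continuous motions are enough to read as attention.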

Check it out and chat it up: idoru.ca

Preliminary tests seem to indicate one reaction people have to it that they do not necessarily have to a straight text-only chatterbot: they think it’s funny. So not only does it exude attentiveness – it exudes a sense of humour. Without doing anything goofy, mind you – it simply appears to smile slyly as it gives its dry, sometimes pithy, responses. While this may not make the ELIZA effect any easier to achieve, it would certainly increase the depth to which the illusion can be carried.

While primitive, the demo has one notable strength: it is not platform-specific. The framework and the code are written entirely in JavaScript, so it can be used in any 3D environment on the web.

Much will be explored and discovered in this field as CUIs and VR mature into commercially viable user interface paradigms. And the two will evolve together – in a virtual space, it would be more unnerving to “speak” with something like Siri as a disembodied, omnipresent voice than to speak to a cute little avatar with an autopilot programmed for manners, charm, and presence.

In conclusion, I’d like to leave you with an artificial character who is charting a course in this direction – Gary the Gull. Gary is the brainchild of two ex-Pixar guys, Tom Sanocki and Mark Walsh.

“I got really interested in how to have an interactive, not passive, experience and how to create interaction using the character skills I built at Pixar. The first person I started talking to was Tom because Tom had a similar interest and he comes from a technical background,” says Walsh.

“The philosophy Mark and I have is we need to take our cues from real life and think what’s real life like? Because that’s the promise of VR. So by taking real life into VR we can achieve that promise, and a lot of that is about people, because people are the user interface of real life. That’s what we care about and that’s a lot about responding and going on a story, because all our conversations are really mini-stories,” says Sanocki.

While Gary doesn’t involve a CUI, he does embody the idea that for something to be engaging in VR, it would do well to mimic the behaviours that engage us in the real world.


Author: Pete

Editor-in-Chief, Lead Software Developer and Artistic Director @ 3dspace.com