The scripts were written to encourage “cute”, “thoughtful”, and “charming” expressions, and to steer clear of the random ranges that lead to “intimidating”, “creepy”, or “menacing” ones. This thing can do those too. For it to give good UX, it’s gotta be well-mannered.
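One way that biasing could look in code – a minimal sketch, with parameter names and numeric ranges invented for illustration – is to sample each expression parameter from a narrower “well-mannered” sub-range rather than the full mechanical range:

```python
import random

# Hypothetical expression parameters. The hardware might allow -1.0..1.0 on
# each axis, but only a sub-range of each reads as friendly; values outside
# these bounds are where "creepy" and "menacing" live. All names and numbers
# here are made up for the sketch.
FRIENDLY_RANGES = {
    "head_tilt": (-0.4, 0.4),   # extreme tilts can read as menacing
    "eye_open": (0.5, 1.0),     # half-lidded eyes can read as creepy
    "lean": (-0.2, 0.5),        # a slight lean-in reads warmer than looming
}

def random_expression():
    """Pick a random pose, sampling only within the charming sub-ranges."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in FRIENDLY_RANGES.items()}

pose = random_expression()
```

The point is just that “random” doesn’t have to mean “uniform over everything the body can do” – the scripts get to draw the box that randomness lives inside.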
The idea is that the squirming is prototypical body language for the artificial character. It has 4 ways of controlling its few body parts, each of which is designed to mimic a basic ingredient in human body language. As you can see, the number of expressions it can conjure up with so few body parts, and so few ways of controlling them, isn’t half bad.
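The combinatorics work in your favor here. As a back-of-the-envelope sketch – the channel names and levels below are entirely invented, since the post doesn’t name the 4 controls – even quantizing each channel to just three expressive levels multiplies out fast:

```python
from itertools import product

# Hypothetical: 4 control channels, each quantized to 3 expressive levels.
channels = {
    "tilt":    ["left", "center", "right"],
    "nod":     ["down", "level", "up"],
    "tempo":   ["slow", "medium", "quick"],
    "posture": ["shy", "neutral", "bold"],
}

# Every combination of one level per channel is a distinct pose.
poses = list(product(*channels.values()))
print(len(poses))  # 3**4 = 81 distinct poses from just 4 channels
```

And that’s before blending continuously between levels, or sequencing poses over time – which is where most of the perceived expressiveness likely comes from.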
At one point, I put on audio of a newscast in the background, a very nice BBC lady with a very proper voice, and watched this thing emote while the lady talked. It was really pretty funny – it did a much better job of convincing me that it was reading the news than any cardboard-cutout 3D model might.
One of the biggest elements of body language, and charm, is how one (real or artificial) interacts with other people spatially. This thing does none of that, at this point. It doesn’t move around in space – it doesn’t even know where the user is yet – it just appears in a place where it’s making eye contact, as long as you don’t move. Note how the eye contact is maintained as the head tilts about – a significant part of the illusion.
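Holding eye contact through a head tilt is the same trick the vestibulo-ocular reflex pulls in real animals: counter-rotate the eyes by however much the head moved. A minimal sketch, assuming a fixed gaze target and simple yaw/pitch angles (the function and its limits are hypothetical, not the post’s actual code):

```python
def compensate_gaze(head_yaw, head_pitch,
                    target_yaw=0.0, target_pitch=0.0,
                    eye_limit=0.6):
    """Counter-rotate the eyes so gaze stays on the target as the head moves.

    Angles are in radians; target defaults to straight ahead (the user,
    assuming they haven't moved). eye_limit is an invented mechanical bound
    on how far the eyes can swivel in their sockets.
    """
    def clamp(x):
        return max(-eye_limit, min(eye_limit, x))

    # Whatever the head does, the eyes subtract it back out.
    eye_yaw = clamp(target_yaw - head_yaw)
    eye_pitch = clamp(target_pitch - head_pitch)
    return eye_yaw, eye_pitch
```

So when the head tilts 0.2 rad to the right, the eyes swing 0.2 rad back to the left, and the locked gaze survives the squirm – until the head moves past what the eyes can compensate for, which is one reason knowing where the user actually is would matter.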
It would be easy to imagine that somehow, someday, “artificial intelligence” will be able to conjure up some sort of synthetic charm spontaneously, just because it wants to. Though I think in the meantime, I’d like to look at more ways of synthesizing charm with good old fashioned code.