360 Photos with Prisma Effects

For weeks, I’d been seeing photos retouched by the Prisma app, and I was impressed by what I’d seen. So when the app became available for Android just the other day, I downloaded it right away.

As I’d been having a good time with 360 photos recently, I decided to try adding Prisma effects to one of them. Could I make a 360 photo look like a 360 painting?

360 Photo with effects from the Prisma app. Looks like a 360 painting.

[Click image to enter photosphere.]

Continue reading “360 Photos with Prisma Effects”


Snow and the WebVR Boilerplate

Back in the days of Flash, I’d get to “model” snow a couple of times around Christmas every year. For some animated corporate Christmas cards, usually. So, when the time came to try out the WebVR Boilerplate – that’s what I chose for my first experiment. Gently falling snow.

(The above example is an iFrame – to pop it out for your VR goggles, click here)

The WebVR Boilerplate is a collection of files that handles everything you need to get a WebVR scene up and running, really easily. That seemed perfect for me! The base state of the boilerplate shows just a rotating cube, in a room defined by a bright green grid. So all I did, to make this demo, was remove the cube and start coding up the snow!
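To give a flavour of what “coding up the snow” involves, here’s a minimal sketch (not the actual demo code – all names and numbers are illustrative) of a classic falling-snow particle simulation: each flake drifts downward with a little horizontal sway, and respawns at the top when it leaves the volume.

```javascript
// Hypothetical snow volume dimensions.
const AREA = 10;    // half-width of the snow volume
const HEIGHT = 10;  // top of the snow volume

function makeFlake() {
  return {
    x: (Math.random() * 2 - 1) * AREA,
    y: Math.random() * HEIGHT,
    z: (Math.random() * 2 - 1) * AREA,
    speed: 0.5 + Math.random() * 0.5,   // fall speed, units per second
    sway: Math.random() * Math.PI * 2,  // phase offset for horizontal drift
  };
}

function updateFlake(flake, dt, time) {
  flake.y -= flake.speed * dt;                   // fall
  flake.x += Math.sin(time + flake.sway) * 0.01; // gentle side-to-side sway
  if (flake.y < 0) flake.y = HEIGHT;             // respawn at the top
  return flake;
}

const flakes = Array.from({ length: 500 }, makeFlake);

// Each animation frame you'd advance every flake, then copy the
// positions into your THREE.Points (or similar) geometry:
flakes.forEach(f => updateFlake(f, 1 / 60, 0));
```

In a real scene the flake positions would be written into a Three.js geometry buffer each frame, but the simulation logic itself is just this.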

Continue reading “Snow and the WebVR Boilerplate”


360 Photos on the Open Web

One of my hobbies is photography. So as I begin dabbling in VR, I find myself quite curious about 360 photos. A 360 photo is the “bare minimum” of a VR experience.

A flattened equirectangular image (an unwrapped 360 image) of a little girl sitting on a jetty at the beach
How can we make this 360 photo fully immersive and available to everyone on the Open Web?

What is a 360 photo?

A 360 photo goes by many names, but each describes a photograph that completely surrounds the camera. It shows what is in front, behind, off to the sides, and above and below. ALL of it. You can look all around, and everywhere you look has been captured in the photo.
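The math underneath a photo like that is surprisingly small. An equirectangular image maps every viewing direction to one point in the flat picture via longitude and latitude, which is also why these images are twice as wide as they are tall: the width covers a full 2π of longitude, the height only π of latitude. A sketch of that mapping (function and axis names are illustrative, not from any particular library):

```javascript
// Map a unit direction vector (x, y, z) to equirectangular image
// coordinates (u, v), each in the range 0..1.
function directionToUV(x, y, z) {
  const lon = Math.atan2(x, -z); // longitude: -PI..PI around the viewer
  const lat = Math.asin(y);      // latitude: -PI/2 (down) .. PI/2 (up)
  return {
    u: lon / (2 * Math.PI) + 0.5, // 0..1 across the image width
    v: lat / Math.PI + 0.5,       // 0..1 up the image height
  };
}

// Looking straight ahead (down -z) lands in the middle of the image:
console.log(directionToUV(0, 0, -1)); // → { u: 0.5, v: 0.5 }
```

A 360 viewer just runs this mapping in reverse: it wraps the flat image onto a sphere around the camera, so that whichever way you look, the right pixels are there.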

Continue reading “360 Photos on the Open Web”


Maple Keys revisited with WebVR

Lie down under a maple tree, and watch the keys lazily drift down towards you. In your VR goggles, with WebVR!

(This is in an iFrame – to pop it out for your VR goggles, click here)

Some time back, I’d written a demo using THREE.js that simulated maple keys falling in the spring. Having secured some VR goggles (a Samsung Gear VR with a Galaxy S7), and tried no end of VR experiences, it seemed like I should “port” that demo into an immersive version.

Continue reading “Maple Keys revisited with WebVR”


Super Stylized Solar System with A-Frame VR

For my first attempt at creating VR content for the web, I tried something called A-Frame. And it was as easy as the day is long.

(This is in an iFrame – to pop it out for your VR goggles, click here)

Obviously, this demo is just a doodle. A boisterous doodle.

Continue reading “Super Stylized Solar System with A-Frame VR”


WebVR is Easy with A-Frame

About a week ago, I got my first set of VR goggles. Nothing fancy – it’s the Samsung Gear VR. I explored some of the demos (great fun!). Some of what I explored was WebVR, which became available on Gear VR (without the use of “experimental browsers”) a couple of weeks ago – albeit with a “deprecated API”, which means an old, obsolete version. Then I took a crack at a tool called A-Frame.

A-Frame makes WebVR easy. Easy peasy. It reminds me of X3DOM – it’s a “declarative language”, so it drives a lot like HTML5. Everything you declare when you’re using A-Frame gets added to the DOM (Document Object Model), so everything in your “world” can be accessed and manipulated just like the elements on a plain, old-fashioned web page. Which, really, makes a lot of things easy.
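To show what that declarative style looks like, here’s a minimal sketch (the element names are A-Frame’s; the particular scene and the `my-box` id are my own invention, and it assumes the A-Frame script is included on the page):

```html
<!-- An entire WebVR scene, written like ordinary HTML. -->
<a-scene>
  <a-box id="my-box" position="0 1 -3" color="tomato"></a-box>
  <a-sky color="skyblue"></a-sky>
</a-scene>

<script>
  // Because the box is just a DOM element, plain old DOM code can
  // find it and change it, like any other element on a web page:
  document.querySelector('#my-box').setAttribute('color', 'purple');
</script>
```

That’s the whole trick: no render loop to set up, no camera boilerplate – declare the entities, then script them with the DOM APIs you already know.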

Continue reading “WebVR is Easy with A-Frame”


VR Goggles and Corrective Eyewear (Glasses)

I was a little concerned about the optics in my first VR goggles, given that I wear glasses.

I’d heard that you can’t fit your glasses into the headset – and it’s true – you’ve got to take your glasses off to put the headset on.

Samsung Gear VR goggles don't fit over glasses - but that's okay.
My first VR rig is the Samsung Gear VR Headset – and I wear glasses…

Now, my glasses are pretty pedestrian – not too powerful, and they only correct for near-sightedness. I certainly have to wear them to drive. But I do take them off for photography – so I can press my eye right against the viewfinder.

Continue reading “VR Goggles and Corrective Eyewear (Glasses)”


The ELIZA Effect, VR, and CUIs

“The ELIZA effect, in computer science, is the tendency to unconsciously assume computer behaviors are analogous to human behaviors.” – Wikipedia

An experiment designed to evoke the ELIZA effect with a combination of 3D animation and a conversational user interface (CUI)
A chatterbot with manners and body language – idoru.js at http://idoru.ca

The ELIZA effect is named after a “chatterbot” called ELIZA that was developed between 1964 and 1966 at MIT. A “chatterbot” is a computer program that conducts a conversation.

ELIZA’s creator, Joseph Weizenbaum, said it was a “parody” of “the responses of a nondirectional psychotherapist in an initial psychiatric interview.” (Weizenbaum 1976, p. 188)

Continue reading “The ELIZA Effect, VR, and CUIs”


Blend4web vs. Three.js

As a sequel to my Feb 23 post X3DOM vs. Three.js, I’d like to quickly compare Blend4web vs. Three.js, using that same old arbitrary VRML file as a neutral sample. Blend4web is an add-on for Blender, the open-source 3D authoring tool I use.

Here’s how the Blender workspace containing my old VRML file looks when I export it using Blend4web:

And here’s the same Blender workspace exported as a COLLADA file and then imported into Three.js:

You can drag your mouse on either of those to move them around. Each mouse button gives you a different motion when you drag, and each example maps the buttons differently.

Continue reading “Blend4web vs. Three.js”


Introducing Idoru.js

idoru.js is an experiment I’m working on with artificial characters in virtual worlds. The idea is that to provide good “user experience” (UX) in a virtual world, a character must have good “stage presence” to stimulate engagement.

The goal is to create a framework for an artificial character that is charming and attentive to the user. That character can then be “dressed up” with any imaginable avatar, and given any “job” that anyone cares to script.

A screenshot of the very first prototype of idoru.js - with a rudimentary avatar and chatbot.
First idoru.js experiment.

A good suit and deep knowledge are not enough to make a person engaging in the real world. A person needs body language. A person needs to be attentive to the person they are engaging. They need to make eye contact. They need to interact with a person’s personal space in a thoughtful, polite way.

Continue reading “Introducing Idoru.js”
