3dspace.com is a site about programming for the 3D web. As such, there will be programming here. We’re not going to build any commercially viable systems – just experiment with developing things in various programming languages, on various web-based platforms.
Back in the days of Flash, I’d get to “model” snow a couple of times around Christmas every year. For some animated corporate Christmas cards, usually. So, when the time came to try out the WebVR Boilerplate – that’s what I chose for my first experiment. Gently falling snow.
(The above example is an iFrame – to pop it out for your VR goggles, click here)
The WebVR Boilerplate is a collection of files that handles everything you need to get something up and running in WebVR, really easily. So that seemed perfect for me! The base state of the boilerplate shows just a rotating cube, in a room defined by a bright green grid. So all I did to make this demo was remove the cube and start coding up the snow!
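The snow itself is just a simple particle update. Here's a minimal sketch of that kind of logic in plain JavaScript – the function names are mine, for illustration; in the actual demo the positions get copied into a THREE.js point geometry every frame:

```javascript
// Minimal snow simulation: flakes drift downward with a little
// horizontal sway, and respawn at the top when they reach the floor.
// (Illustrative sketch only -- constants are tuned for looks.)
function createSnowfall(count, bounds) {
  const flakes = [];
  for (let i = 0; i < count; i++) {
    flakes.push({
      x: (Math.random() - 0.5) * bounds.width,
      y: Math.random() * bounds.height,
      z: (Math.random() - 0.5) * bounds.depth,
      speed: 0.5 + Math.random() * 0.5,   // fall speed, units per second
      phase: Math.random() * Math.PI * 2, // per-flake offset for the sway
    });
  }
  return flakes;
}

function updateSnowfall(flakes, bounds, time, dt) {
  for (const f of flakes) {
    f.y -= f.speed * dt;                     // gravity, gently
    f.x += Math.sin(time + f.phase) * 0.01;  // lazy side-to-side drift
    if (f.y < 0) f.y += bounds.height;       // respawn at the top
  }
}
```

Each animation frame you'd call `updateSnowfall` and write the flake positions into the point cloud's vertex buffer.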
One of my hobbies is photography. So as I begin dabbling in VR, I find myself quite curious about 360 photos. A 360 photo is the “bare minimum” of a VR experience.
What is a 360 photo?
A 360 photo goes by many names, but each describes a photograph that completely surrounds the camera. It shows what is in front – in back – off to the sides – and above and below. ALL of it. You can look all around, and everywhere you look has been captured in the photo.
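The usual way to store all of that in one flat image is the equirectangular format: longitude runs across the image, latitude runs up it, and a viewer wraps the image around the inside of a sphere. As a sketch of the underlying math (plain JavaScript, written by me for illustration – in a real viewer, THREE.js handles this when you texture the sphere):

```javascript
// Map a 3-D view direction onto equirectangular (u, v) texture
// coordinates, the common storage format for 360 photos.
// u runs 0..1 around the horizon; v runs 0..1 from straight down
// to straight up.
function directionToEquirect(dir) {
  const len = Math.hypot(dir.x, dir.y, dir.z);
  const x = dir.x / len, y = dir.y / len, z = dir.z / len;
  const longitude = Math.atan2(z, x); // -PI .. PI, around the horizon
  const latitude = Math.asin(y);      // -PI/2 .. PI/2, up and down
  return {
    u: (longitude + Math.PI) / (2 * Math.PI),
    v: (latitude + Math.PI / 2) / Math.PI,
  };
}
```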
Lie down under a maple tree, and watch the keys lazily drift down towards you. In your VR goggles, with WebVR!
(This is in an iFrame – to pop it out for your VR goggles, click here)
I’d written a demo using THREE.js some time back that simulated maple keys falling in the spring. Having secured some VR goggles (a Samsung Gear VR with a Galaxy S7) and tried no end of VR experiences, it seemed like I should “port” that demo into an immersive version.
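What makes maple keys fun to simulate is that they autorotate: once spinning, they descend at a slow, steady rate while whirling like little helicopters. Here's a toy model of that motion in plain JavaScript – my own sketch, with constants eyeballed for looks rather than physics:

```javascript
// Toy model of a falling maple key (a "samara"): helicopter-style
// spin, a slow constant descent, and a small horizontal circle
// traced out by the spin. All names and constants are illustrative.
function updateMapleKey(key, dt) {
  key.spin += key.spinRate * dt;            // autorotation
  key.y -= key.terminalSpeed * dt;          // slow, steady descent
  // the spin carries the key around a small horizontal circle
  key.x += Math.cos(key.spin) * key.drift * dt;
  key.z += Math.sin(key.spin) * key.drift * dt;
  return key;
}
```

In the demo, each key's `spin` also drives its visible rotation, which is most of what sells the effect when you're lying underneath looking up.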
About a week ago, I got my first set of VR goggles. Nothing fancy – it’s the Samsung Gear VR. I explored some of the demos. (Great fun!) Some of what I explored was WebVR, which just became available on Gear VR (without the use of “experimental browsers”) a couple of weeks ago – albeit with a “deprecated API” (which means an old, obsolete version). Then I took a crack at a tool called A-Frame.
A-Frame makes WebVR easy. Easy peasy. It reminds me of X3DOM – it’s a “declarative language”, so it drives a lot like HTML5. All of the things you declare when you’re using A-Frame get added to the “DOM” (Document Object Model), so everything in your “world” can be accessed and manipulated just like you would the elements on a plain, old-fashioned web page. Which, really, makes a lot of things easy. Easy peasy.
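To give a sense of what “declarative” means here, a whole A-Frame scene is just markup – this sketch assumes the A-Frame script is already included on the page:

```html
<!-- A complete VR scene in a few tags: a box floating above a ground
     plane, under a pale blue sky. -->
<a-scene>
  <a-box id="my-box" position="0 1 -3" color="#C03030"></a-box>
  <a-plane rotation="-90 0 0" width="20" height="20" color="#30C030"></a-plane>
  <a-sky color="#88CCFF"></a-sky>
</a-scene>
```

And because those entities live in the DOM, ordinary JavaScript works on them – `document.querySelector('#my-box').setAttribute('color', '#3030C0')` repaints the box, just as it would a regular page element.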
I was a little concerned about the optics in my first VR goggles, given that I wear glasses.
I’d heard that you can’t fit your glasses into the headset – and it’s true – you gotta take your glasses off, to put the headset on.
Now, my glasses are pretty pedestrian – not too powerful, and they only correct for near-sightedness. I certainly have to wear them to drive. But I do take them off for photography – so I can press my eye right against the viewfinder.
As a sequel to my Feb 23 post X3DOM vs. Three.js, I’d like to quickly compare Blend4web vs. Three.js, using that same old arbitrary VRML file as a neutral sample. Blend4web is an add-on for Blender, the open-source 3D authoring tool I use.
Here’s how the Blender workspace containing my old VRML file looks when I export it using Blend4web:
And here’s the same Blender workspace exported as a COLLADA file and then imported into Three.js:
You can drag your mouse on either of those to move them around. Each mouse button gives you a different motion when you drag – and the two examples map the buttons differently.
idoru.js is an experiment I’m working on with artificial characters in virtual worlds. The idea is that to provide good “user experience” (UX) in a virtual world, a character must have good “stage presence” to stimulate engagement.
The goal is to create a framework for an artificial character that is charming and attentive to the user. This character can then be “dressed up” with any imaginable avatar, and given any “job” that anyone cares to script.
A good suit and deep knowledge are not enough to make a person engaging in the real world. A person needs body language. A person needs to be attentive to the person they are engaging with. They need to make eye contact. They need to treat a person’s personal space in a thoughtful, polite way.
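In code, “attentive” boils down to checks a character can run every frame: is the user in my field of view, and am I crowding them? Here's the kind of gaze decision I have in mind, as a plain-JavaScript sketch – all names and thresholds here are illustrative, not the actual idoru.js API:

```javascript
// A character is "attentive" if it notices when the user has left its
// view and turns to face them -- and "polite" if it backs off when it
// is inside the user's personal space. Positions are on the ground
// plane (x, z); heading is in radians. Illustrative names throughout.
function gazeDecision(character, user,
                      opts = { fieldOfView: Math.PI / 2, personalSpace: 1.0 }) {
  const dx = user.x - character.x;
  const dz = user.z - character.z;
  const distance = Math.hypot(dx, dz);
  const toUser = Math.atan2(dx, dz);        // heading toward the user
  let delta = toUser - character.heading;   // how far we'd have to turn
  // wrap into -PI..PI so the character always turns the short way
  delta = Math.atan2(Math.sin(delta), Math.cos(delta));
  return {
    shouldTurn: Math.abs(delta) > opts.fieldOfView / 2, // user out of view
    turnBy: delta,
    tooClose: distance < opts.personalSpace,            // back off politely
    distance,
  };
}
```

The point of returning `turnBy` rather than snapping the heading is stage presence: a charming character eases toward the user over several frames instead of whipping around.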