Reactable

From FourMs

Revision as of 10:18, 28 May 2009 by Kyrrehg (Talk | contribs)

About

The Reactable concept is an electronic instrument built around a tangible interface with visual feedback: users place objects on, and touch, a screen to control sound generation. The Reactable UiO project aims to reproduce this concept, with the same kind of tangibles for the controller interface, but with a different mapping and sound generation solution.

How it works

Overview of reactable internals. Image from http://reactivision.sourceforge.net/

The Reactable consists mainly of a camera which tracks the movements of the tangibles on the surface, and a projector which projects graphical feedback onto the same surface. So that the projected image does not disturb the camera's tracking of the objects, tracking is performed in the infrared range: the tangibles are illuminated with infrared light, and the camera filters out visible light. Likewise, the infrared illumination does not disturb the users' perception of the projected image. On the software side, the reacTIVision image-processing software sends OSC messages to self-developed programs which generate the visuals and sound.

Mechanical Structure

Sensing / Camera

Tracking the surface requires a camera which can detect infrared (IR) light and has a wide enough angle of view to cover the entire surface. Most camera lenses have an infrared filter coating which blocks most IR light. In addition, a filter which blocks visible light is needed to avoid interference from the image projected onto the surface.


Camera

We are using a Unibrain Fire-i camera since it was available in the lab. It captures 640x480 mono at 30 FPS and has a FireWire interface. The lens is replaceable, and lenses without IR coating are available.

We have tried non-IR-coated 4.3 mm and 2.1 mm (wide-angle) lenses. At the moment we are using the 4.3 mm lens, since we had problems detecting the objects with the 2.1 mm lens, probably because not enough light reaches the sensor and the tangibles appear smaller in the image. With the current illumination, the camera needs maximum exposure time and a high gain, which indicates that not enough light is available for sensing. A long exposure time increases the reaction time, and a high gain introduces noise in the image. Overall, we would have benefited from a camera with one or more of the following: a better (bigger) sensor (for less noise), better optics (for more light and better definition), and possibly higher resolution.
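
A rough calculation shows why the tangibles appear small in the image. The 90 cm surface diameter and the 640x480 sensor are from this page; the 5 cm fiducial size is an assumption for illustration:

```python
# Rough check of how large a fiducial appears in the camera image,
# given a 90 cm surface covered by a 640x480 camera.

SURFACE_CM = 90.0   # shortest surface dimension covered by the image
IMAGE_PX = 480      # pixels along that dimension
FIDUCIAL_CM = 5.0   # assumed printed fiducial size (illustrative)

px_per_cm = IMAGE_PX / SURFACE_CM
fiducial_px = FIDUCIAL_CM * px_per_cm
print(f"{px_per_cm:.1f} px/cm -> a {FIDUCIAL_CM:.0f} cm fiducial spans ~{fiducial_px:.0f} px")
```

At roughly 5 px/cm, each fiducial only spans a few dozen pixels, which leaves little margin once noise from high gain is added; a higher-resolution camera would increase this figure directly.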

IR pass / visible light cut-off filter

Since the IR illumination is 850 nm, ideally we would use a filter which passes only this wavelength. We have ordered such a filter, which should be possible to mount on top of the camera with a filter mount adapter.

Due to shipping problems we are currently using a cut-out piece of photographic film negative, which filters out most (but not all) of the visible light. A commercial filter is expected to give a cleaner result.

This shop has a large selection of filters, should further ordering become necessary.

Infrared illumination

The table uses the principle of rear diffused illumination (DI), with IR lights illuminating the surface from below (for more information see this page).

The challenge is to provide enough light for the camera sensor to produce good, recognizable images. It is also important that the light is distributed evenly.

After trying some QED222 5 mm LEDs, we found that even many of these would not give a sufficient amount of light. We decided instead to go for a smaller number of high-power LED emitters. Although these are rated at 3.2-3.5 V / 350 mA, a customer comment on the product sheet suggests they can handle up to 700 mA. We are currently feeding them around 500 mA. Such LEDs generate heat, so a thermal dissipation system was constructed by cutting pieces from a large aluminium heat sink and boring holes for screw-fixing the LED stars; some thermal conducting paste was applied. At the moment 5 such emitters are used. In addition, some simple diffusers were mounted.
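
The drive current and forward voltage above also determine the series resistor and the heat to dissipate. A minimal sketch, assuming the LEDs are driven through a simple series resistor from a 5 V supply (the supply voltage is an assumption; the text only gives the ratings and the 500 mA target):

```python
# Series-resistor sizing for one high-power IR emitter.
V_SUPPLY = 5.0    # assumed supply voltage (not stated in the text)
V_FORWARD = 3.35  # midpoint of the rated 3.2-3.5 V forward voltage
I_TARGET = 0.5    # 500 mA drive current used in the project

r = (V_SUPPLY - V_FORWARD) / I_TARGET            # ohms
p_resistor = (V_SUPPLY - V_FORWARD) * I_TARGET   # watts burned in the resistor
p_led = V_FORWARD * I_TARGET                     # watts dissipated by the LED

print(f"R = {r:.2f} ohm, resistor: {p_resistor:.2f} W, LED: {p_led:.2f} W")
```

The roughly 1.7 W dissipated per emitter explains why the aluminium heat sink is needed; a constant-current LED driver would be a more robust choice than a resistor at these currents.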

The High-power IR LED setup

We found that pointing the LEDs directly at the surface either gave reflection highlights on the surface, or good but too uneven lighting with our 4-5 LEDs. It might have been sufficiently even with a few more (8?) emitters. For now we have therefore decided to point the LEDs away from the surface, giving a more "ambient" illumination setup. Some aluminium foil was added to the inside walls in an attempt to diffuse the light more evenly.

Surface Projection

We needed a projection covering a circle with a diameter of 90 cm, which means that the shortest dimension (the height) of the image needed to be at least this size. With a box height of ca. 90 cm, this requires either an extremely short-throw projector or a mirror setup (which increases the available throw distance for the projector). Since the projector in our lab did not give a large enough image even with a mirror setup, we needed to buy a new one.
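
These two constraints translate into a maximum throw ratio. A quick estimate, assuming a 4:3 image (an assumption; it matches XGA-class projectors like those considered below):

```python
# Maximum throw ratio: image height must reach 90 cm while the throw
# distance inside the box is at most ~90 cm.
IMAGE_HEIGHT_CM = 90.0
THROW_CM = 90.0     # available throw distance (box height)
ASPECT = 4 / 3      # assumed image aspect ratio

image_width = IMAGE_HEIGHT_CM * ASPECT
throw_ratio = THROW_CM / image_width   # throw ratio = distance / image width
print(f"image width {image_width:.0f} cm -> throw ratio <= {throw_ratio:.2f}")
```

A required throw ratio of about 0.75 or less rules out standard projectors (typically 1.5-2.0) and points directly at the short-throw models discussed next.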

We chose the Hitachi CP-A100 projector, which has an extremely short throw (via a built-in mirror system), making a separate mirror setup unnecessary. The projector performs well; note, however, that the offset between the bottom of the projector and the bottom of the image can be a problem. We solved this by placing the projector in an "add-on box" on one of the table walls.

The projection calculator is useful when looking for candidate projectors.

Other candidate short-throw projectors were Optoma EX-525ST, Epson EMP-400W and BenQ MP771.

Sound

We are using a Terratec Aureon 5.1 USB MK II sound card connected to 5 speakers (4 outside the box) and a subwoofer.

Software

We are using the reacTIVision software for tracking the objects on the surface. It sends Open Sound Control (OSC) messages via UDP to client software. The messages comply with the TUIO protocol and contain information such as the position and orientation of the objects on the surface. On the client side, easy-to-interface TUIO client libraries are available for several programming languages.
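
To illustrate what such a message carries, here is a minimal sketch of decoding a TUIO 1.1 "/tuio/2Dobj" message once the OSC address and argument list are available (in practice a TUIO client library delivers these; the dataclass and function names are our own):

```python
from dataclasses import dataclass

@dataclass
class TuioObject:
    session_id: int   # unique id per placed tangible
    fiducial_id: int  # which marker symbol is printed on it
    x: float          # position on the surface, normalised 0..1
    y: float
    angle: float      # rotation in radians

def handle_2dobj(args):
    """Dispatch one /tuio/2Dobj message; returns a TuioObject for 'set'."""
    if args[0] == "set":
        # set: s i x y a X Y A m r (velocities/accelerations ignored here)
        s, i, x, y, a = args[1:6]
        return TuioObject(s, i, x, y, a)
    # "alive" lists current session ids; "fseq" carries the frame number
    return None

obj = handle_2dobj(["set", 12, 3, 0.25, 0.75, 1.57, 0.0, 0.0, 0.0, 0.0, 0.0])
print(obj.fiducial_id, obj.x, obj.y)
```

The normalised coordinates and angle are exactly what the visuals and sound mapping consume, independent of camera resolution.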

Processing Prototype Playground Program

We have implemented a prototype program in Processing which receives TUIO messages from the reacTIVision software via the Processing TUIO client. The purpose of the program is to provide a simple and fun interface for trying out some of the Reactable functionality; the concept could later be extended to a more powerful music generator for live use.

PPPP prototype software

Its main concepts are the "radar percussion sequencer" and the "loop blocks". The radar sweeps a line around the outer part of the surface, and when it hits a percussion marker, a sound is triggered. The inner area has space for markers which trigger loops. We plan to extend this later with effects which can influence other markers. We are currently using the Minim library in Processing for generating sound; however, the sound is somewhat unstable, and we plan to move sound processing to Max/MSP by sending OSC control messages from Processing.
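
The radar trigger amounts to detecting, each frame, whether the sweep passed a marker's angle; the subtle case is the wrap-around at a full revolution. A sketch of this logic (the function is our own illustration, not code from the prototype):

```python
def crossed(prev_angle, cur_angle, marker_angle):
    """True if the radar sweep moved past marker_angle since the last frame.

    All angles are in radians in [0, 2*pi), and the sweep rotates in the
    direction of increasing angle.
    """
    if prev_angle <= cur_angle:                      # normal advance
        return prev_angle < marker_angle <= cur_angle
    # the sweep wrapped past 2*pi back towards 0 this frame
    return marker_angle > prev_angle or marker_angle <= cur_angle

# sweep advances ~0.2 rad per frame; percussion marker sits at 1.0 rad
print(crossed(0.9, 1.1, 1.0))   # marker passed this frame
print(crossed(1.1, 1.3, 1.0))   # already passed earlier
print(crossed(6.2, 0.1, 6.25))  # passed during wrap-around
```

Frame-to-frame crossing detection like this keeps triggers independent of frame rate, so a dropped frame delays a hit slightly rather than skipping it.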

OpenGL is used to visualize the feedback (blocks around the tangible boxes, the radar line, and more). With a graphics accelerator present, this makes it possible to scale to high resolutions and high-quality visual effects while keeping a high frame rate and placing less demand on the CPU. This matters especially when the vision software runs on the same computer and also needs CPU power.

Project Status

At the moment we have a working prototype which can be seen as a proof of concept. Searching for and purchasing parts took more time than expected, so little time has been spent on software development. This can be seen as the first step towards a Reactable-like device for the fourMs lab; it would be desirable to take the project further, from an "alpha" state to a "beta" or more finished state. Only then will the real possibilities of the instrument open up, allowing interesting experiments in user interface, mapping and sound generation.

The following points are the most immediate possible improvements:

  • Better optics: The vision area is currently limited, and it could probably be improved by a camera with a better sensor or better optics.
  • Better IR illumination: The quality of the camera images would improve with more IR light. Currently the exposure and gain need to be at high settings, which makes the camera output noisy.
  • More useful software
