- UPDATE: Demo 2 of the Kinect Theremin, Therenect, by Martin Kaltenbrunner
- ICE PAD: Interactive Multitouch Ice Sculpture by Art Below Zero (video)
- Interactive Information Visualization for the Kinect? Something like Jer Thorp's "Just Landed - 36 Hours" might work nicely if revamped!
UPDATE: Demo 2 of the Kinect Theremin, Therenect, by Martin Kaltenbrunner

Posted: 06 Dec 2010 05:07 PM PST

I recently posted about the Therenect, a gesture-controlled digital theremin created for Microsoft's Kinect by Martin Kaltenbrunner - Therenect: Theremin for the Kinect! (via Martin Kaltenbrunner)

It looks like Martin has been busy polishing up the application over the past few days, as you can see from the video below:

Therenect - Kinect Theremin - 2nd Demo from Martin Kaltenbrunner on Vimeo.

RELATED
Virtual Theremin Made with Kinect; Real Thereminists Will Make It Useful
Peter Kirn, Create Digital Music, 11/30/10
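The core theremin idea is simple: one hand position controls pitch, the other controls volume. As a rough illustration of that mapping (not Martin's actual code), here is a minimal Processing sketch using the Minim audio library, with the mouse standing in for a Kinect-tracked hand:

```processing
// Hypothetical sketch: theremin-style pitch/volume control in Processing.
// The mouse stands in for a tracked hand; Therenect itself uses Kinect depth data.
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Oscil osc;

void setup() {
  size(400, 400);
  minim = new Minim(this);
  out = minim.getLineOut();
  osc = new Oscil(440, 0.5f, Waves.SINE);  // start at A4, half amplitude
  osc.patch(out);
}

void draw() {
  background(0);
  // Horizontal position -> pitch, like the theremin's pitch antenna
  float freq = map(mouseX, 0, width, 110, 880);
  // Vertical position -> volume, like the volume antenna
  float amp = map(mouseY, 0, height, 1.0f, 0.0f);
  osc.setFrequency(freq);
  osc.setAmplitude(amp);
}
```

Swapping the mouse coordinates for a tracked hand's x/y (or depth) values is what turns a sketch like this into a gesture instrument.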
ICE PAD: Interactive Multitouch Ice Sculpture by Art Below Zero (video)

Posted: 06 Dec 2010 01:53 PM PST

ICE PAD: Interactive Multitouch Ice Sculpture by Art Below Zero

Here is the information about the interactive sculpture from the Art Below Zero YouTube channel:

"Created by David Sauer & Max Zuleta for the Lake Forest Tree Lighting Festival. This Ice Crystal Display was the first to be created in the USA, transforming 300 pounds of ice into the equivalent of a giant iPad touch screen. 'People always want to touch our ice sculptures. This interactive display gave them the perfect reason to get their hands cold,' said Max Zuleta, owner of Art Below Zero. The public response was amazement and interest in the workings of the touch screen in ice. Our favorite guess was 'It must work by sensing body heat!'..."

"...The system is known as Rear Diffused Illumination, or Rear DI. It works because infrared light is shone from the opposite side of the ice wall, through the ice. When an object such as a finger, hand, or mitten stops the infrared light, it reflects the light back to a custom camera built by Peau Productions. The illuminated objects are then converted to points of interaction using an open source program, Community Core Vision, which outputs TUIO data streams to a Flash program for animation. We like the look and feel of the Fluid Solver Flash application. The output from the computer is then projected into the ice, and the ice diffracts the light into something beautiful. By this method the user can manipulate a visible light screen via an invisible light that only the camera can see..."

Thanks to Nolan Ramseyer, of Peau Productions, for the link!

PeauProductions Blog: Multitouch and Technology

RELATED
Ubice = Multi-touch On Ice at the Nokia Research Center in Finland (Video + Pic via Albrecht Schmidt)
Art Below Zero
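Because Community Core Vision speaks the open TUIO protocol, the ice wall's touch points could drive any TUIO-aware client, not just a Flash animation. As a hedged illustration, here is a minimal Processing sketch using the standard TUIO client library (written, fittingly, by Martin Kaltenbrunner of the Therenect post above) that draws a circle at each tracked touch point:

```processing
// Minimal TUIO client sketch: draws each touch point reported by a tracker
// such as Community Core Vision. Assumes the TUIO library for Processing is
// installed and the tracker is sending to the default UDP port 3333.
import TUIO.*;

TuioProcessing tuioClient;

void setup() {
  size(640, 480);
  // Registers this sketch as a TUIO listener on port 3333
  tuioClient = new TuioProcessing(this);
}

void draw() {
  background(0);
  fill(255);
  // Iterate over the touch points (cursors) currently alive
  for (TuioCursor tcur : tuioClient.getTuioCursorList()) {
    ellipse(tcur.getScreenX(width), tcur.getScreenY(height), 20, 20);
  }
}

// TUIO callbacks, declared so the library can notify the sketch of events
void addTuioCursor(TuioCursor tcur) {}
void updateTuioCursor(TuioCursor tcur) {}
void removeTuioCursor(TuioCursor tcur) {}
```

The Fluid Solver application the team used sits at the end of the same pipeline: it simply consumes the TUIO cursor stream and feeds the points into a fluid simulation instead of drawing plain circles.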
Interactive Information Visualization for the Kinect? Something like Jer Thorp's "Just Landed - 36 Hours" might work nicely if revamped!

Posted: 06 Dec 2010 05:09 PM PST

I follow the O'Reilly Radar blogs and came across a recent post about an information visualization created by blprnt two years ago using Processing. I think it would have great potential if it were repurposed for use on the Kinect!

In the article, Edd Dumbill discusses the advantages of using Processing to create data and information visualizations. One example of the power of Processing is the information visualization "Just Landed - 36 Hours", created by Jer Thorp. Jer gathered tweets from Twitter that included the statement "just landed", along with location information for each tweet, within a 36-hour period, to create the visualization.

"Just Landed - 36 Hours" is a great 3D visualization of air travel on our planet. I especially like the different views that the application provides. As soon as I watched the Just Landed video, I thought it would be great if it could be revamped for use on the Kinect! (Leave a comment if you know of anyone working on a project in this area.)

Just Landed - 36 Hours from blprnt on Vimeo.

Information about the video from blprnt's Vimeo site:

"I was discussing H1N1 with a bioinformatics friend of mine last weekend, and we ended up talking about ways that epidemiologists model transmission of disease. I wondered how some of the information that is shared voluntarily on social networks might be used to build useful models of various kinds...I'm also interested in visualizing information that isn't implicitly shared - but instead is inferred or suggested...This piece looks for tweets containing the phrases 'just landed in...' or 'just arrived in...'. Locations from these tweets are located using MetaCarta's Location Finder API. The home location for the traveling users are scraped from their Twitter pages. The system then plots these voyages over time...I'm not entirely sure where this will end up going, but I am reasonably happy with the results so far. Built with Processing (processing.org) You can read more about this project on my blog - blog.blprnt.com"

RELATED
Strata Gems: Write your own visualizations: The Processing language is an easy way to get started with graphics
Edd Dumbill, O'Reilly Radar, 12/3/10
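The plotting step Jer describes - drawing a voyage from a traveler's home location to their "just landed" location - comes down to projecting latitude/longitude pairs onto the screen and connecting them with an arc. Here is a rough, hypothetical Processing sketch of that technique (not Jer's actual code; the sample coordinates are made up for illustration, and the real piece geocoded tweets via MetaCarta):

```processing
// Hypothetical sketch of the voyage-plotting idea behind "Just Landed":
// map lat/lon pairs to screen coordinates (equirectangular projection)
// and connect origin to destination with a bowed arc.

// Convert longitude/latitude to x/y on an equirectangular world map
float lonToX(float lon) { return map(lon, -180, 180, 0, width); }
float latToY(float lat) { return map(lat, 90, -90, 0, height); }

void setup() {
  size(800, 400);
  background(0);
  stroke(255, 160);
  noFill();
  // Example voyage: New York (home) -> London ("just landed in London")
  drawVoyage(-74.0, 40.7, -0.1, 51.5);
  // Example voyage: Tokyo -> Sydney
  drawVoyage(139.7, 35.7, 151.2, -33.9);
}

// Draw a curved line between two lon/lat points, bowed toward the top
void drawVoyage(float lon1, float lat1, float lon2, float lat2) {
  float x1 = lonToX(lon1), y1 = latToY(lat1);
  float x2 = lonToX(lon2), y2 = latToY(lat2);
  float mx = (x1 + x2) / 2;
  float my = min(y1, y2) - dist(x1, y1, x2, y2) * 0.25;  // control point above
  bezier(x1, y1, mx, my, mx, my, x2, y2);
  ellipse(x2, y2, 6, 6);  // mark the arrival point
}
```

A Kinect adaptation could keep this plotting core unchanged and simply bind hand gestures to the camera rotation and view switching that the original piece drives with its different views.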