Some pictures

November 3, 2010
by Nathan Whitmore (nww10)

These are from the newest version of the Hackerhat software that I’m playing around with.

This is the Hackerhat’s “home screen”, the main mode where the feed from the camera and the layers are displayed. In this picture, the Hackerhat is running a news ticker layer (on top) and a test layer that displays the RGB values of a point. The Hackerhat also supports zooming the image up to 4x. The text in the lower right corner is a measurement of latency, the time spent processing the frame before it could be displayed.
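That latency number can be produced by timestamping a frame when it is captured and again when it is ready to display. A minimal sketch of the idea; the class and method names here are my own, not the Hackerhat’s:

```java
// Sketch of per-frame latency measurement; all names are hypothetical,
// not taken from the Hackerhat source.
public class LatencyMeter {

    // Milliseconds elapsed between frame capture and display readiness.
    public static long frameLatencyMillis(long captureNanos, long displayNanos) {
        return (displayNanos - captureNanos) / 1_000_000;
    }

    public static void main(String[] args) {
        long captured = System.nanoTime();
        // ... frame processing would happen here ...
        long ready = captured + 16_000_000L; // simulate a 16 ms frame
        System.out.println(frameLatencyMillis(captured, ready) + " ms");
    }
}
```

`System.nanoTime()` is the right clock for this because it is monotonic, so it measures elapsed time correctly even if the wall clock changes mid-frame.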

This is the layer toggling screen, which allows the user to turn various layers on or off. Layers are selected using a visual target attached to the finger, which the computer recognizes and uses as a pointer.

The document viewer, shown above, allows layers to easily retrieve text from the Internet and display it. Text scrolling and contrast are controlled gesturally by the same motion-tracking device.

What’s a hackerhat?

October 31, 2010
by Nathan Whitmore (nww10)

Hacker (n.) 1. A person who enjoys exploring the details of programmable systems and how to stretch their capabilities…

The Hackerhat is designed to be an open-source platform for developing augmented reality applications. It runs in machine-independent Java and is designed so that an augmented reality system can be implemented on any device that has a camera, a Java environment, and a display of some type. It is a “pass-through” augmented reality system, which means that everything shown on the display device passes through the processing system (as opposed to a “pass-by” system, which optically superimposes computerized data on top of a transparent surface, like a pair of glasses).
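The pass-through idea can be sketched in a few lines: the display only ever receives the processed copy of the camera frame, never the raw optical view. This is an illustrative sketch under assumed names, not the project’s actual pipeline:

```java
// Illustrative pass-through step: the frame shown to the user is always
// the output of the processing stage. Names are hypothetical.
import java.awt.image.BufferedImage;
import java.util.function.UnaryOperator;

public class PassThroughLoop {

    // Runs one frame through the processor; the display sink would only
    // ever be handed the returned (annotated) image.
    public static BufferedImage passThrough(BufferedImage frame,
                                            UnaryOperator<BufferedImage> processor) {
        return processor.apply(frame);
    }

    public static void main(String[] args) {
        BufferedImage raw = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        // Example processor: stamp one pixel red, standing in for an overlay.
        BufferedImage shown = passThrough(raw, f -> {
            f.setRGB(0, 0, 0xFF0000);
            return f;
        });
        System.out.println(Integer.toHexString(shown.getRGB(0, 0) & 0xFFFFFF));
    }
}
```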

The Hackerhat can run specially designed Java programs as “layers”. In the Hackerhat design, each layer adds a particular piece of information to the visual display (e.g., a navigation layer would display directions, a Twitter layer would pull trending topics from Twitter, etc.).
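One plausible shape for such a layer, sketched as a small Java interface; the interface name and signature are assumptions on my part, since the post does not show the real API:

```java
// Hypothetical sketch of a layer interface; not the Hackerhat's API.
import java.awt.image.BufferedImage;

public class LayerDemo {

    interface Layer {
        // Draw this layer's piece of information onto the current frame.
        void render(BufferedImage frame);
    }

    // Example layer: reads the RGB value at a fixed probe point,
    // in the spirit of the test layer described in the earlier post.
    static class RgbProbeLayer implements Layer {
        final int x, y;
        int lastRgb;

        RgbProbeLayer(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public void render(BufferedImage frame) {
            lastRgb = frame.getRGB(x, y) & 0xFFFFFF;
            // A real layer would draw the value as text onto the frame here.
        }
    }

    public static void main(String[] args) {
        BufferedImage frame = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        frame.setRGB(1, 1, 0x00FF00);
        RgbProbeLayer probe = new RgbProbeLayer(1, 1);
        probe.render(frame);
        System.out.println(Integer.toHexString(probe.lastRgb));
    }
}
```

Keeping each layer behind one small interface is what lets the main loop run any number of them in sequence over the same frame.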

Other things it can do:

  • Gestural control using a homemade motion tracking device
  • (Somewhat rudimentary) integration with speech-recognition systems
  • Layer APIs for easy access to network resources and image processing systems
  • “Markerless” object recognition system

Cool things it will do in the near future: