This blog is the platform for publishing the research conducted by the students of the "Prehystories of New Media" class at The School of the Art Institute of Chicago (SAIC), Fall 2008. Instructor: Nina Wenhart.

tagging

For tagging, please use only your name and the title of your paper, so that we'll get an index. Make sure that you end your tag with a comma.

here is an example:
Nina Wenhart: Prehystories of New Media,
thanks!


Tuesday, November 4, 2008

Ben Carney: Interface: From Cords and Jacks to Gesture and Thoughts

Abstract
In this paper, I attempt to weave a narrative of the development of the human-computer interface as it evolved and changed through history. It becomes apparent that what is being aimed for, time and time again, is the ability to interface with fewer devices in our hands. The transparency of the interface is something that is now becoming a reality. This paper traces the path that has led us there and explains some of the more groundbreaking interfaces along the way.




1. A Brief History of our Current Interface

From the time that digital computers first took form, we have needed an intuitive way of interacting or communicating with these machines. The earliest computers had large, cumbersome interfaces consisting of terminals of input and output ports, and were programmed by setting up large patch configurations, sometimes requiring several hours of setup to mold into a workable state suited for running calculations. ENIAC was the first computer to feature a display, as it was to be featured on a television program. An array of ping pong balls with light bulbs behind them lit up in correspondence with the calculations being performed, so as to show the viewers at home that this machine was, in fact, performing a task the masses had never before seen done by a piece of machinery. This method of interface and display was clumsy at best. Shortly after, in late 1947, early forms of transistors were invented by William Shockley, Walter Brattain, and John Bardeen at Bell Laboratories, sending computers into a new age of smaller sizes, faster speeds, and greatly reduced costs.1 It wasn't until much later that any great advancement in human-computer interaction was made.
In 1963, Ivan Sutherland introduced a new way in which humans could communicate with computers. Sutherland's demonstration of his program "Sketchpad" showed that it was possible to directly manipulate the inner workings of a computer, and thus the elements displayed on the screen, through a process very similar to drawing with pen and paper. It was the first computer program ever to have done this. Sutherland used a light pen to directly manipulate simple forms drawn on a cathode ray tube screen.2 Interestingly, this program may be the first example of a form of science fiction advancing and inspiring new and innovative computer technology: Sutherland was greatly inspired by Vannevar Bush's description of a "memex," short for "memory extender."3
Bush was by no means a science fiction writer; he is known for his work with analogue computing and the military during World War Two. Still, the ideas contained in his famous article "As We May Think" have inspired many to create much of the technology we use today. In "As We May Think," Bush describes the use of a graphical interface involving the manipulation of processes and information between multiple users. Bush's descriptions have mostly to do with the manipulation and duplication of bits of microfilm, as cathode ray tubes were not in wide use at the time of the article's writing and microfilm was a medium gaining wide use (Bush).4
Sketchpad was the first program to feature "instances," a capability still used in popular drawing and animation software today. Sketchpad was also the first program that spoke directly to visual artists. Programs like these could be used for drafting, scientific visualization, and the visual arts. It would be safe to say that Ivan Sutherland's Sketchpad gave birth to the computer as a tool for the artist.
Inspired by Sutherland's groundbreaking work, Douglas Engelbart went on to create the most popular way of interfacing with a machine of all time. Made from hand-carved wood and metal wheels, Engelbart's patented "X-Y position indicator for a display system" was later given the name "mouse," as that is what the device resembled, its single cord recalling the tail of the common rodent. Engelbart's original mouse lacked the ball found in most common mice today, instead relying on two wheels in direct contact with the surface. The development of the mouse as a method of human-computer interaction was backed by extensive research on tasks that came naturally to small children; obviously, operating a keyboard was a task far out of the scope of a young child's motor skills. Small children learned by touch first and foremost.5 The mouse made its way into nearly every home computer system through its ease of use. Coupled with the graphical user interface, which was developed originally by Xerox, then perfected and mainstreamed by Apple and Microsoft, the computer was no longer a daunting and scary behemoth of a machine. The average person could now access computers for nearly any task he or she wished to achieve.
It is strange that from the time the mouse became standard, the innovation of new and better human-computer interaction methods came nearly to a halt. Improvements on the already existent mouse were made, and are still being made: better tracking, better design, better mechanisms for movement, more buttons. Still, the basic and limited model for interacting with these machines remained much as it began, literally so simple even a small child could master it. It is fascinating to think what new innovations may have sprung up had the trackball, the mouse's predecessor by ten years, become the mainstream standard. One could imagine innovations aimed at faster, more expressive control over a free-flowing control surface. Smooth control, the gliding of hands fluidly over an interface, much like a psychic and her crystal ball, conjuring up images and sound and information from her machine.
2. Progress
At approximately the same time that the mouse was being developed, a computer scientist and artist by the name of Myron Krueger began developing interactive systems alongside Dan Sandin, Jerry Erdman, and Richard Venezky. The project aimed to expand on the restricted dialogue humans had with these new, exciting machines. The first project that came from the collaboration was called Glowflow. Glowflow consisted of a room rigged with several pressure-sensitive foot switches. When a foot switch was depressed by an observer of the piece, corresponding electro-luminescent elements would illuminate one of six columns placed throughout the space. Sounds emitted from Moog synthesizers were triggered at or around the same time. The timing of the response was offset quite a bit from the action it responded to, as Krueger wanted the viewers of the piece to be somewhat unaware of their role in the interaction, so as to keep the whole installation from becoming too game-like, with participants frantically moving about making all attempts to coerce a direct response. Since Glowflow was one of the first interactive spaces, observing the response and interaction that it provided led Krueger to some conclusions about interactive art as a whole:

1. Interactive art is potentially a richly composable medium quite distinct from the concerns of sculpture, graphic art or music.
2. In order to respond intelligently the computer should perceive as much as possible about the participant's behavior.
3. In order to focus on the relationships between the environment and the participants, rather than among participants, only a small number of people should be involved at a time.
4. The participants should be aware of how the environment is responding to them.
5. The choice of sound and visual response systems should be dictated by their ability to convey a wide variety of conceptual relationships.
6. The visual responses should not be judged as art nor the sounds as music. The only aesthetic concern is quality of the interaction.6

Following Glowflow, Krueger and his team began to develop a number of other human-computer interaction environments that eventually conglomerated into the project now known as "Videoplace." Using some of the time's latest advances in processing power, video cameras, monitors, and computer pen tablets, Krueger created an interaction system that allowed people placed in separate buildings located over a mile apart to communicate as they never had before. An image of one interactor's silhouette was captured by video camera and transferred via television cable to another building, where a second interactor's hands were captured on a separate video camera and superimposed over the previously mentioned silhouette. Krueger developed the software for Videoplace in such a way as to allow non-verbal and gestural communication to take place between the two participants. One could draw directly on the same screen on which the other was displayed. The two could then form a dialogue about what they perceived the simple line drawings produced by the interactor to be. Video documentation shows one interactor's hand appearing giant in comparison to the other's silhouette. The hand then proceeds to nudge the silhouette, causing it to fall over and bounce back up. The games and new forms of communication that the participants could conceive were nearly endless. Krueger's work on Videoplace was the first of its kind and paved the way for some of today's communication platforms, such as video conferencing.
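To make the image processing behind Videoplace a little more concrete, here is a minimal sketch, in Python with NumPy, of how a backlit silhouette might be extracted from one camera's frame and a remote participant's figure superimposed on it. The thresholding approach, brightness values, and function names are my own assumptions for illustration; Krueger's actual system was built from dedicated video hardware, not code like this.

    import numpy as np

    def silhouette(frame, backdrop_brightness=200):
        # A person standing in front of a bright backdrop shows up as the dark pixels.
        # "frame" is a 2-D array of grayscale values (0-255); the result is a boolean mask.
        return frame < backdrop_brightness

    def composite(local_frame, remote_frame):
        # Overlay the remote participant's figure onto the local participant's silhouette.
        out = np.zeros_like(local_frame)
        out[silhouette(local_frame)] = 60       # render the local silhouette as a flat gray
        out[silhouette(remote_frame)] = 255     # draw the remote figure (e.g. hands) over it
        return out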
Shortly after Krueger's groundbreaking interactive installations, David Rokeby began work on a system that greatly simplified the process of humans interacting with a computer by means of video cameras. Rokeby's system used only a single camera and some very clever use of pixel detection. "Very Nervous System" by David Rokeby, preceded by two other similar though less robust interactive systems, produced sound in response to a viewer's movement within the field of view of a video camera. Rokeby's method of detecting subtle movement was extremely precise. So precise, in fact, that when it was first exhibited, viewers of the piece mistook the extremely accurate interpretations of their movement for semi-random responses to their basic presence in the space.7 Rokeby found that lowering the system's capabilities of detection allowed the viewer a more legible understanding of the response. Rokeby's system worked through a process in which the camera and computer system were constantly analyzing the camera's image for differences from the previous frame captured. This means that if there were no one in the frame, the output would be silence. If someone were to enter the frame, Rokeby's program would recognize this and output the sound corresponding to the group of pixels affected by the figure entering the frame. Silence could also be achieved if one were to stand absolutely still; thus a dance of sorts, consisting of periods of rest and subtle motion, could be performed by the viewer, coaxing from the system a composition of their own creation. Rokeby's system is perhaps, even with today's technology, the best example of a completely transparent interface. It required absolutely nothing external of the interactor to make it function.
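The frame-differencing process described above can be sketched in a few lines of Python with NumPy. The grid size, the change threshold, and the trigger_sound callback are illustrative assumptions of mine, not Rokeby's actual implementation.

    import numpy as np

    GRID_W, GRID_H = 8, 8    # coarse grid of image regions, each mapped to its own sound
    THRESHOLD = 25           # per-pixel brightness change that counts as movement

    def detect_movement(prev_frame, curr_frame):
        # Return the grid cells where the image changed between two grayscale frames.
        diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
        changed = diff > THRESHOLD
        h, w = changed.shape
        active = []
        for gy in range(GRID_H):
            for gx in range(GRID_W):
                cell = changed[gy * h // GRID_H:(gy + 1) * h // GRID_H,
                               gx * w // GRID_W:(gx + 1) * w // GRID_W]
                if cell.mean() > 0.05:          # enough of the cell moved
                    active.append((gx, gy))
        return active

    def step(prev_frame, curr_frame, trigger_sound):
        # No movement means silence; movement triggers the sounds for the affected cells.
        for cell in detect_movement(prev_frame, curr_frame):
            trigger_sound(cell)                 # hypothetical synthesizer callback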
3. Leaning Towards Transparency
In the early 1990s, the technology that made virtual reality possible became less expensive, which allowed more people to experiment with the possibility of creating audio and visual environments that existed entirely within computers. Experiencing and navigating these virtual spaces, however, most often required extremely bulky and delicate gear. The use of large head-mounted display screens and a mess of wires was par for the course in the development of and interaction within these new virtual spaces.
One project in particular made brilliant use of the bulky equipment necessary to experience a virtual environment. Char Davies, a Canadian artist with a background in painting, directed a landmark virtual reality piece, "Osmose," along with an entire development team. Her approach to control within this environment allowed the user to completely forget about the extremely cumbersome equipment required to experience the space. Osmose allowed its viewers to drift through twelve environments, ranging from depictions of structural concepts such as a view of the computer's code and a Cartesian grid, to more natural environments filled with flowing elements and depictions of trees, all pulsing with elements that suggest the transfer of natural energy.
As usual, a head-mounted display was worn to provide the stereoscopic visuals of the space. A tilt sensor was used to measure the angle at which the user leaned. The leaning motion allowed the user to float through the space without the need of a hand-held controller of any sort. Exploration became a very simple task, as it had no learning curve. Viewers of the piece needed only to lean in the direction they wished to travel, and it was so. To travel upward within the space of Osmose, the viewer needed only to inhale deeply; traveling downward required an exhalation. Again, this is very intuitive and simple, requiring no extra thought towards external control and allowing a more direct communication with the space. Davies was an experienced scuba diver, so this method of control came as quite an obvious answer to the control of her 3D space. It was to be nearly as natural as some experiences in the real.
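To make the mapping concrete, here is a minimal sketch, in Python, of how lean and breath readings might be turned into a movement vector. The sensor names, value ranges, and scaling factors are assumptions of mine for illustration, not the actual Osmose implementation.

    import math

    def movement_vector(lean_angle_deg, lean_heading_deg, breath_level, speed=1.0):
        # lean_angle_deg   -- how far the user leans from upright (0 = standing still)
        # lean_heading_deg -- which way the user leans (0 = forward, 90 = to the right)
        # breath_level     -- chest expansion normalized so 0.5 is a neutral breath;
        #                     above 0.5 (inhaling) rises, below 0.5 (exhaling) sinks
        horizontal = speed * (lean_angle_deg / 45.0)      # lean harder, drift faster
        heading = math.radians(lean_heading_deg)
        vx = horizontal * math.sin(heading)
        vz = horizontal * math.cos(heading)
        vy = speed * (breath_level - 0.5) * 2.0           # inhale up, exhale down
        return (vx, vy, vz)

    # Example: leaning 20 degrees forward while inhaling deeply drifts forward and upward.
    print(movement_vector(20.0, 0.0, 0.8))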
Those who experienced the piece during its many exhibitions reported feelings of complete relaxation, euphoria, and even spiritual experiences. Supposedly, some even wept.8 Davies' intuitive interface allowed users to completely forget that they were interfacing with a machine, something that is still very rarely achieved.
One other VR piece featuring a very intuitive interface, though with much less of an emotional impact on those who viewed it, is Jeffrey Shaw's "Legible City." In this work, the interface is one that would be instantly accessible to nearly everyone: a bicycle. The viewer of this piece simply rides a stationary bicycle through a virtual city where the buildings are made of giant blocks of text, allowing the viewer to read the city as she rides along. Though Jeffrey Shaw's interface is not quite as advanced as the interface of Osmose, it is worth noting that it was developed some eight years earlier.
4. Now
When Vannevar Bush inked his essay "As We May Think," he sparked a great wave of intrigue and innovation around the possibilities of new technology and its use in everyday life. As it goes now, it seems that this generation's "As We May Think" came in the form of a few brief scenes from a Hollywood blockbuster adapted from a short story by Philip K. Dick. In Minority Report, a few scenes depict the film's protagonist interacting with a futuristic interface used mostly for solving crimes that have not yet happened. He wears gloves with small lights placed on the thumb and index finger. He swings his arms in front of him, coaxing images from the machine's database of images and video files, sorting and viewing select parts of images before tossing them aside with a sweep of his arm, only to materialize more images in front of him with another gesture from his other arm.
This scene from the film depicted an interface of the near future: realistic in its approach, but using technology that had not yet been invented, much like the words that make up Bush's famous article. These images struck a chord with curious researchers, and as new technology surfaced, a few bright developers put it to use.
It is amazing, really. An Internet search for Minority Report returns nearly as many links to new interface development as it does to information about the film. So much is going on that deals with interaction with machines as depicted in the film. A few have done it so right that they seem to be the current stars of the Internet, their names popping up on social networks and news pages alike.
Several similar attempts at a tactile, gestural interface are currently being developed, all with extremely similar goals in mind. Jeff Han, a researcher from the labs at NYU, developed a multi-touch interface similar in look to the interface depicted in Minority Report, though one still must physically touch the screen to interact. Han went on to found his own company, Perceptive Pixel. He now sells his technology to the military and the Central Intelligence Agency. An example of his interface is used today during CNN's news broadcasts via their "Magic Wall."
Johnny Lee, a researcher from Carnegie Mellon University, is leading research into an extremely low-cost method of interfacing with computers using not much more than a video game controller. Lee put to use the inexpensive infrared camera inside Nintendo's newest video game controller, the Wii Remote, to track points in space. Lee demonstrated that with a few inexpensive items, like reflective tape and a few extra infrared light-emitting diodes, nearly anyone could have the basics of a control system that requires no touch whatsoever. Lee's videos demonstrating how his simple systems work have been viewed several million times.
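As a rough illustration of the idea, the sketch below, in Python, maps an infrared point reported by the Wii Remote's camera onto screen coordinates and smooths it into a cursor position. The read_ir_point function is a hypothetical stand-in for whatever Wiimote library one might use, and the screen size and smoothing factor are assumptions of mine, not Lee's code.

    IR_CAM_W, IR_CAM_H = 1024, 768    # resolution of the Wii Remote's infrared camera
    SCREEN_W, SCREEN_H = 1920, 1080   # assumed display resolution
    SMOOTHING = 0.3                   # blend new readings with the previous position

    def to_screen(ir_x, ir_y):
        # Scale a raw infrared camera coordinate to a screen coordinate.
        return ir_x / IR_CAM_W * SCREEN_W, ir_y / IR_CAM_H * SCREEN_H

    def track_cursor(read_ir_point, move_cursor):
        # Continuously follow the brightest infrared point (an LED or reflective tape).
        cursor_x, cursor_y = SCREEN_W / 2, SCREEN_H / 2
        while True:
            point = read_ir_point()              # None if no infrared source is visible
            if point is None:
                continue
            target_x, target_y = to_screen(*point)
            # Low-pass filter so the cursor does not jitter with the raw readings.
            cursor_x += SMOOTHING * (target_x - cursor_x)
            cursor_y += SMOOTHING * (target_y - cursor_y)
            move_cursor(cursor_x, cursor_y)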
Several industry-related companies have taken Lee's ideas and expanded on them using development teams and bigger budgets. The most impressive of these is a demonstration by Cynergy Labs, which built upon Lee's innovative ideas and gave him credit in the process. Using information that already existed on the Internet, it took one man only eight days to build a simple gestural interface with light-glove hardware. It will not be long until input such as this is inexpensive and commonplace alongside existing computer hardware.
Furthering the transparency of the interface to the absolute is the development of the brain-computer interface, where human thought is mapped to the placement of an object on a screen. Simple proof-of-concept demonstrations have already been realized in this field. One such demonstration, by a team of researchers at Washington University in St. Louis, allowed a disabled teenager to play the classic game Space Invaders just by thinking of where to place the spaceship within the game world. He mastered the first two levels with ease, using just his imagination.9
This is just a brief example of how the interface becomes completely transparent. The mouse has stuck around so long because it is so efficient: there is a 1:1 correlation between moving your hand and moving a cursor on the screen. The brain-computer interface takes that efficiency one step further, cutting out the middleman that is the human body. In the coming years, it will be fascinating to see how artists and researchers alike mold these new tools into intuitive systems for pleasure, play, and the expansion of human intellect.



Critique of Sources Used

The sources I used to gather ideas and research this topic were generally very good. One of the main reasons is that many of these sources were original writings by the artists who produced the work in question. It was great to read David Rokeby's own words about the troubles he had in developing "Very Nervous System." The same goes for Myron Krueger; an extensive essay by him in the book Multimedia: From Wagner to Virtual Reality was extremely helpful. Other sources, such as the timeline of computer history put together by the Computer History Museum, were a great reference point along the way.




Bibliography



Porter, Stephen. "Expanding the VR Aesthetic." Computer Graphics World 19, no. 7 (1996): 4.

WGBH Boston and the BBC. The Machine That Changed the World.

Rokeby, David. "Transforming Mirrors." March 7, 1996. http://homepage.mac.com/davidrokeby/mirrors.html (accessed November 1, 2008).

Bush, Vannevar. "As We May Think." The Atlantic, July 1945.


Packer, Randall, and Ken Jordan, eds. Multimedia: From Wagner to Virtual Reality. 1st ed. New York, NY: W. W. Norton & Company, 2002.

Fitzpatrick, Tony. "Teenager Moves Video Icons Just by Imagination." Washington University in St. Louis News & Information (2006). http://news-info.wustl.edu/news/page/normal/7800.html (accessed November 15, 2008).

Computer History Museum. "Timeline of Computer History." 2006. http://www.computerhistory.org/timeline/?year=1947 (accessed October 20, 2008).
