Interactive Systems
Interactive systems are computer systems characterized by significant amounts of interaction between humans and the computer. Most users have grown up using the Macintosh or Windows operating systems, which are prime examples of graphical interactive systems. Editors, CAD/CAM (computer-aided design/computer-aided manufacturing) systems, and data entry systems all involve a high degree of human-computer interaction, as do games and simulations. Web browsers and integrated development environments (IDEs) are examples of very complex interactive systems.
Some estimates suggest that as much as 90 percent of computer technology development effort is now devoted to enhancements and innovations in interface and interaction. To improve the efficiency and effectiveness of computer software, programmers and designers need not only a good knowledge of programming languages but also a sound understanding of human information-processing capabilities. They need to know how people perceive screen colors, why and how to construct unambiguous icons, what errors users commonly make, and how user effectiveness is related to the mental models of systems that people hold.
Types of Interactive Systems
The earliest interactive systems were command line systems, which tightly controlled the interaction between the human and the computer. The user was required to know the commands that might be issued and how their arguments were to be ordered; the UNIX operating system and DOS (Disk Operating System) are classic examples. Users had to enter data in a particular sequence, and the options for output were also tightly controlled and generally limited. Such systems placed heavy demands on the user to remember commands and their syntax.
Command line systems gradually gave way to a second generation of menu-, form-, and dialog-based systems that eased some of these demands on memory. An automated teller machine (ATM) is a good example of a form-based program in which users are given a tightly controlled set of possible actions. Data entry systems are frequently form- or dialog-oriented systems that offer the user a limited set of choices but greatly relieve the memory demands of the earlier command line systems.
A third generation of interactive computing was introduced by Xerox Corporation in 1980. The Xerox Star was the result of a half dozen years of research and development during which the mouse, icons, the desktop metaphor, windows, and bit-mapped displays were all brought together and made to function. The Star's design was replicated in the Lisa and Macintosh computers offered by Apple Computer Inc. in the early to mid-1980s, and the windows, icons, menus, and pointer (WIMP) approach became ubiquitous with Microsoft's Windows family of operating systems during the 1990s. With the maturation of WIMP interfaces, also known as graphical user interfaces (GUIs, pronounced "gooey"), interaction moved from command-based to direct manipulation.
In command-based systems, the user specifies an action and then an object on which that action is to be performed. In a direct manipulation system, the user selects an object and then specifies the action to be performed on it. The most recent developments in interactive systems have focused on virtualization, visualization, and agents. The following sections describe in more detail the nature of the current generation of direct manipulation systems and the coming generation of agents and virtual systems.
The Importance of Understanding Human Capabilities
It is important that users be able to understand how to use a highly interactive computer system. Cognitive science professor Donald A. Norman describes human-computer interaction in terms of the "gulfs of execution and evaluation." Basically, this means that the user has a goal in mind and must reformulate that goal as a plan that ultimately involves executing a series of actions on the system. These actions change the state of the system, and the changes must be perceived, interpreted, and evaluated by the user. Computer system developers need to understand how human beings perceive, interpret, evaluate, and respond to these computer actions.
Although even a cursory review of the literature on human perception, human information processing, and human motor skills is far beyond this brief overview, it may be useful to consider a very select set of principles developed from that literature. Hundreds of research studies have been done on the limits of short-term memory. These include research on how information is "chunked," on how many "chunks" can be kept in memory at one time, and on how the number of chunks varies when the information is sensory or symbolic. Similarly, there are hundreds of research studies on how best to access information in long-term memory. For example, by "priming" a subject with some fact that requires access to long-term memory, the access time for closely related concepts can be improved. Finally, there are thousands of studies on the acuity of and variation in human sensory and motor capabilities. All of these studies have led to principles for:
- menu construction, related to the limits of short-term memory;
- system design based on metaphors that activate areas of long-term memory;
- the target size of buttons and icons, based on studies of motor skills;
- the use of visual and auditory cues, based on human sensory capabilities and limits.
Although these examples are only the tip of a vast and growing body of research on how humans perceive and use data provided by computers, they represent the kinds of developments that are moving interactive system design from an art to an engineering science.
Direct Manipulation Systems
As noted earlier, a direct manipulation system is one in which the user selects an object and then specifies the actions to be taken on it. This is in contrast to command line systems, in which the user normally specifies an action and then selects the object upon which the action is to be performed. This fundamental paradigm shift changed how such systems were designed and implemented in code.
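The difference in ordering can be made concrete with a short fragment of code. The sketch below is purely illustrative; the Document class and file name are hypothetical, and Python is used only for convenience. It contrasts a command-style call, in which the action is named first, with a direct manipulation style, in which an object is selected and an action is then applied to it.

```python
# Purely illustrative: the Document class and file names are hypothetical.

class Document:
    """A stand-in for an object the user might see on a desktop."""
    def __init__(self, name):
        self.name = name

    def delete(self):
        print(f"deleting {self.name}")


def run_command(action, object_name, documents):
    """Command-based style: the action is named first, then the object."""
    target = documents[object_name]
    if action == "delete":
        target.delete()


documents = {"report.txt": Document("report.txt")}

# Command-based interaction: "delete report.txt" -- action, then object.
run_command("delete", "report.txt", documents)

# Direct manipulation: the object is selected first (say, by clicking its
# icon), and the chosen action is then applied to the current selection.
selected = documents["report.txt"]   # object chosen first
selected.delete()                    # action applied to the selection
```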
The basic programming paradigm had to change from a process-driven approach to an event-driven one. In earlier systems, the program's main process controlled what the user could do. Now it was possible for the user to initiate a broad range of actions by selecting an "object" such as a window, an icon, or a text box. This required some method for collecting events and handling them; the X Window System on UNIX was one of the early popular systems for doing so. Each graphical component of the interface is capable of producing one or more events. For example, a window might be opened or closed, generating an event; a button might be pressed; or the text in a text box might be changed. There are mouse events as well, such as when the mouse enters a window or moves over a button. These events are dispatched to a window manager. For Apple systems and all Windows systems since Windows 95, this functionality is built into the operating system.
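The window manager's role can be suggested with a highly simplified sketch. The Python fragment below is a hypothetical dispatcher, not the code of X or Windows: components post events to a queue, handlers register an interest in particular events on particular components, and a loop routes each event to its handler.

```python
from collections import deque

# A minimal, hypothetical event dispatcher, loosely in the spirit of a
# window manager. All names here are invented for illustration.

handlers = {}          # maps (component, event_type) -> handler function
event_queue = deque()  # events waiting to be dispatched


def register(component, event_type, handler):
    """Record that `handler` is interested in `event_type` on `component`."""
    handlers[(component, event_type)] = handler


def post(component, event_type):
    """Called by interface components when something happens."""
    event_queue.append((component, event_type))


def dispatch_pending():
    """Route each queued event to its registered handler, if any."""
    while event_queue:
        component, event_type = event_queue.popleft()
        handler = handlers.get((component, event_type))
        if handler is not None:
            handler(component, event_type)


# Example: a handler interested in presses of a hypothetical "OK" button.
register("ok_button", "press", lambda c, e: print(f"{e} on {c}"))
post("ok_button", "press")      # the component reports an event
dispatch_pending()              # the loop routes it to the handler
```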
The programmer's task is to display a coordinated set of components that can generate events. The programmer must also write code that initiates some action when an event occurs; these code fragments are called event-handling functions. Once the programmer has defined the objects that might generate events and the code to respond to them, the final programming task is to "register" the event handlers as having an interest in certain classes of events produced by certain objects. When those events occur, the window manager dispatches them to the appropriate event handler. In object-based and object-oriented programming environments, handling events is made easier by object classes, which associate default event-handling methods with specific classes of objects. For example, the code that changes a button's appearance when it is pressed may be provided as a default method of the button class, which may likewise provide default button-release behavior. The programmer simply adds code that performs some application-specific action when the button is released.
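As a concrete, if toolkit-specific, illustration, the short sketch below uses Python's Tkinter toolkit, chosen only for brevity and not discussed above. The Button class supplies the default code that changes the button's appearance on press and release; the programmer writes only the application-specific handler and registers it with the widget.

```python
import tkinter as tk

# Illustrative only: Tkinter stands in for any event-driven GUI toolkit.
# The Button class already knows how to redraw itself when pressed and
# released; the programmer supplies just the application-specific action.

def on_ok(event=None):
    """Application-specific handler run when the button is released."""
    print("OK button activated")

root = tk.Tk()
ok_button = tk.Button(root, text="OK")
ok_button.pack(padx=20, pady=20)

# "Register" the handler: ask the toolkit to call on_ok for this widget's
# button-release events. The default press/release appearance changes
# still come from the Button class itself.
ok_button.bind("<ButtonRelease-1>", on_ok)

root.mainloop()   # the toolkit's loop collects and dispatches events
```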
Visualization, Virtualization, and Agents/Embedded Systems
Throughout the 1980s and 1990s, there were numerous efforts to take advantage of the human ability to process information visually. At the simplest level, consider that a human looking at an image on a television screen has no problem discerning a pattern in millions of individual pixels changing every second in both time and space. Or consider the example provided in Figure A: a set of X,Y pairs in which the X value is the upper number in each column and the Y value is the lower number. Ask yourself what pattern those numbers represent.
Even knowing they are pairs of X,Y coordinates, most people have trouble seeing a pattern in the numbers. If, however, the points are plotted, or visualized, a pattern emerges rather quickly, as shown in Figure B. Visualization systems manipulate information at high levels of aggregation, making the information more accessible to users. The aggregates may be records, documents, or any other entities defined as objects. Working with large numbers of objects that have multiple attributes, designers can map those attributes to the interface in such a way as to visualize the data or simulate some process. This mapping of abstract data onto sensory interfaces is central to software such as geographic information systems and data mining applications.
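The numbers themselves appear only in Figure A, but the effect is easy to recreate. The sketch below uses Python with the matplotlib library (not part of the original example) and a hypothetical set of X,Y pairs taken from a sine curve: printed as a table, the pattern is hard to see, while the plot makes it obvious, much as in Figure B.

```python
import math
import matplotlib.pyplot as plt

# Hypothetical data standing in for Figure A: X,Y pairs that look
# patternless as a table of numbers but are points on a sine curve.
xs = [i * 0.5 for i in range(40)]
ys = [round(math.sin(x), 2) for x in xs]

# Printed as columns of numbers, the structure is hard to see ...
for x, y in zip(xs, ys):
    print(f"{x:5.1f} {y:6.2f}")

# ... but plotted, the pattern emerges at a glance.
plt.plot(xs, ys, "o-")
plt.xlabel("X")
plt.ylabel("Y")
plt.title("The same X,Y pairs, visualized")
plt.show()
```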
In the 1990s, researchers began to experiment with extending interactive systems from symbolic interaction—icons, mice, and pointers—to virtual systems. In these systems, every effort was made to allow the user to explore a virtual world with little or no translation to symbolic form. Thus, with visualization techniques and new forms of input devices, such as data gloves, hand movements could be used to manipulate virtual objects represented graphically in a virtual world. This virtual environment was presented to the user via two display screens, each of which provided a slightly different perspective, giving the user a stereoscopic view of a virtual space that appeared to have depth. Work on virtual and artificial reality continues on a number of specialized fronts, including a field known as telemedicine.
The next generation of interactive systems, represented by agents in embedded systems, will again change how humans and computers interact. Direct manipulation environments will still be around for many years to come. At the same time, we have begun to see both agents and embedded systems make their appearance. Embedded systems can be as simple as the analog sensor systems that open a department store door, or turn on lights when someone enters a room. At a more complex level, most cars being built today include air bag deployment systems and antilock brakes that operate invisibly by gathering data from the environment and inserting computer control between our actions and the environment. As air bag deployment systems become more complex, they react based not simply on acceleration data, but also based on the weight of the individuals occupying the seat and their relative position (leaning forward or back) on the seat.
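The kind of decision such an embedded system makes can be suggested with a toy sketch. The Python function below is purely hypothetical; its thresholds and logic bear no relation to any real air bag controller, but it shows how several sensor readings might be combined into a single embedded decision.

```python
# A toy sketch only: thresholds and logic are entirely hypothetical and are
# not those of any real air bag controller.

def should_deploy(deceleration_g, occupant_weight_kg, leaning_forward):
    """Combine several sensor readings into a single deployment decision."""
    if deceleration_g < 20:         # impact not severe enough
        return False
    if occupant_weight_kg < 30:     # very light occupant (e.g., child seat)
        return False
    if leaning_forward:             # occupant close to the bag; a real system
        return True                 # would also adjust deployment force
    return True


# Example reading: a severe impact with an adult sitting back in the seat.
print(should_deploy(deceleration_g=35, occupant_weight_kg=80, leaning_forward=False))
```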
As the information processing of sensory inputs becomes more complex, these embedded systems begin to act like agents. For example, programs that monitor typing activity and automatically correct spelling errors are beginning to mature. Although early versions frustrated sophisticated users, more advanced versions are demonstrating their ability to learn user preferences and new forms of errors to correct. The perceptive user will note that the most recent applications remember lots of things about user activity—which web sites they frequent, for example, or where they hold meetings and with whom they meet. In the next generation, programs will use these data stores to communicate on the user's behalf with agents of other users.
In summary, new systems are emerging in which the interface between the human and the computer system is becoming invisible. When the programs are very complex and act on behalf of the user, they are called "agents." These agents make use of increasingly sophisticated methods of data acquisition, evolving from stores of data gathered from user activity toward real-time input such as facial, gesture, and speech recognition. Increasingly, these agents will perform tasks such as storing and retrieving files for the user and undertaking simple actions such as making or confirming appointments. The help feature in Microsoft's Office software is an example of an active agent that observes user activity and offers help based on actions that suggest it may be needed, from formatting documents to correcting common spelling and grammatical errors.
see also Game Controllers; Games; Hypermedia and Multimedia; Mouse.
Michael B. Spring
Bibliography
Baecker, Ronald M., and William A. S. Buxton, eds. Readings in Human-Computer Interaction: A Multidisciplinary Approach. Los Altos, CA: Morgan-Kaufmann Publishers, 1987.
Card, Stuart K., Thomas P. Moran, and Allen Newell. The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates, 1983.
Johnson, Jeff, et al. "The Xerox Star: A Retrospective." IEEE Computer 22, no. 9 (1989): 11–30.
Norman, Donald A. The Psychology of Everyday Things. New York: Basic Books, 1988. (Also published as The Design of Everyday Things, 1990.)
Perlman, Gary, Georgia K. Green, and Michael S. Wogalter, eds. Human Factors Perspectives on Human-Computer Interaction: Selections from Proceedings of Human Factors and Ergonomics Society Annual Meetings 1983–1994. Santa Monica, CA: HFES, 1995.
Shneiderman, Ben. Designing the User Interface: Strategies for Effective Human-Computer Interaction, 3rd ed. Reading, MA: Addison-Wesley Publishing, 1997.
Smith, David, et al. "Designing the Star User Interface." Byte, April 1982: 242–283.
Weiser, Mark. "The Computer for the Twenty-First Century." Scientific American 265, no. 3 (1991): 94–104.