Proceedings of the International Workshop on Biorobotics:
Human-Robot Symbiosis, Tsukuba, Japan, May 1995

Copyright © Elsevier Press, 1995.


Tele-Service-Robot: integrating the socio-technical framework of human service through the InterNet-World-Wide-Web

Larry Leifer+, George Toye+ and Machiel Van der Loos++

+ Stanford Center for Design Research
Mechanical Engineering Design
Bldg. 560, Duena Street, Stanford, CA 94305-4026
Leifer's E-mail address
Toye's E-mail address

++Rehabilitation R&D Center
VA Palo Alto Health Care System
3801 Miranda Ave. #153
Palo Alto, CA 94304-1200
Van der Loos' E-mail address


Abstract

In a recent survey of robotics in rehabilitative human service, Stanger et al. (1994) re-establish the central role of task assessment in defining technical R&D priorities. Among their key findings, and central to the thesis of this paper, is the re-affirmation that engineers and scientists, intent on being helpful, must first assess just who is being served, where they are, what they are trying to do and who is going to pay for it. Moreover, the cost associated with an integral socio-technical framework that addresses user needs for interaction, support and maintenance after the initial installation is the real driver toward adoption of robotics technology over equivalent human service.

1. Background

The U.S. Department of Veterans Affairs conducted a three-year, multi-site field evaluation of DeVar, the Desktop Vocational Assistant Robot (Cupo & Sheredos, 1994). Three key results guide the formulation of this paper. First, the manufacturer could give the robot away and still incur installation, training, and maintenance costs that would destroy the robot's cost-benefit potential. Second, though DeVar was safe, reliable and effective, service downtime for technically trivial problems sharply reduced the perceived value of the system. Third, placing real robots in the field places a premium on robust simplicity, at the same time that it demands very sophisticated technology for human-to-human, human-to-machine and machine-to-machine communication.

These experiences lead us to the following hypothesis. Human-Robot productivity will increase and downtime will decrease if all system users have interactive access to each other, to on-line training, and to vendor support. Text, graphic, video, and haptic software tools should also support remote programming. These communication services must be on-line, on-site and on-demand. The reduction in training and equipment downtime will make field placement of sophisticated robotic systems feasible and cost effective. Their absence will severely limit service robot utility (Leifer et al., 1994).

The InterNet and the world-wide-web are infrastructure technologies that facilitate virtual presence via electronic communication between entities, whether human or machine. The richness of the communication media offered through the InterNet and the web provides high-level interactivity. The web supports key robotics technology, including visual robot-user-interfaces (RUI), robot system fault tolerance, monitoring, control, and operational history recording. But most importantly, the web creates the social framework required for field placement of sophisticated robotics technology, e.g., email, interactive remote audio-video presence and shared multi-media bulletin boards.

2. Tele-service-robotics technology framework

The conditions for effective use of robots in unstructured (or even moderately structured) environments are at least fourfold. The technical system must include one or more sensory modalities. The control architecture must support adaptive, sensor-driven motion control. There must be utilities for task and motion programming. A fourth, and rather subtle, requirement is that these factors must deal with ambiguity. The collective importance of these requirements must be judged by the degree of accessibility to a wide range of operators, placing an additional burden on the RUI.

Robot programming languages (RPLs) have always presented the applications engineer with a deep dilemma. How does one use rigid RPL semantics and syntax to specify robot motion in an uncertain environment? Many investigators can testify that sensing the environment is just the beginning of adaptive control. In fact, sensor interpretation and multi-sensor fusion contribute massively to the RUI programming burden. Fault tolerance is often the first casualty of environmental sensing, as when sensor failure leads to undesirable robot motion.
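To make the dilemma concrete, the sketch below shows a guarded move written so that most of the code is devoted to sensor interpretation and fault handling rather than to the motion itself. It is a minimal illustration in Java, assuming hypothetical RobotArm and ForceSensor interfaces; it is not the VAL-II code used with DeVar.

    // Illustrative sketch only (not DeVar's VAL-II code): a "guarded move" in
    // which most of the code handles sensor interpretation and sensor failure,
    // not the motion itself. RobotArm and ForceSensor are hypothetical interfaces.
    interface RobotArm    { void stepForward(double millimetres); void halt(); }
    interface ForceSensor { double readNewtons() throws SensorFault; }
    class SensorFault extends Exception {}

    class GuardedMove {
        // Advance in small increments until contact force exceeds a threshold,
        // stopping safely if the sensor fails or the guard is never satisfied.
        static boolean moveUntilContact(RobotArm arm, ForceSensor sensor,
                                        double thresholdNewtons, int maxSteps) {
            for (int i = 0; i < maxSteps; i++) {
                double force;
                try {
                    force = sensor.readNewtons();
                } catch (SensorFault fault) {
                    arm.halt();             // sensor failure must not become robot motion
                    return false;
                }
                if (force >= thresholdNewtons) {
                    arm.halt();             // contact detected: the guard is satisfied
                    return true;
                }
                arm.stepForward(1.0);       // small step between sensor readings
            }
            arm.halt();                     // guard never met: stop rather than drift on
            return false;
        }
    }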

While the richness of media carried over the InterNet demonstrably supports information exchange, it also represents a set of technologies that can be deployed to manage some of the design requirements in tele-service-robotics.

Figure 1 illustrates the strategic relationship amongst RUI research and development projects at the Stanford University Center for Design Research (CDR). The following narrative describes each issue briefly. Special attention should be given to how each previous project can now take advantage of enabling InterNet technologies. The first example is at the user-robot interface.




Figure 1: The interaction between a service-robot and the people in the user-world may be represented as a set of barriers to effective symbiosis. Each of these barriers must be addressed explicitly and solved technically. We suggest that the solutions should be balanced, i.e., superior performance in one area will be of little value if other factors limit overall system performance. Over a 15 year period, we have systematically addressed each of these issues (as have others in the field). Most recently, we have found that limited robot integration within the socio-technical application environment severely limits its utility. We propose that interactive use of InterNet-WWW technology can cost-effectively address this barrier. The work of three investigators, Lees (1993), Rosenberg (1994) and Toye (1989), is particularly germane to these issues.

2.1 RoboGlyph - a visual robot programming language :

In one line of human-computer integration research, Richard Steele and collaborators (1989) developed a visual language prosthesis, C-VIC, that allows people with global aphasia (profound impairment of natural language processing skills) to communicate better through the computer than they can through "natural" language conventions. Experience with Lingraphica(TM), the commercial version of C-VIC, led us to speculate that the communicative power of this GUI promised a substantial performance breakthrough in RUI design. The linguistic notion of "readily inferable meaning" has a parallel in graphic representation (as in cartoons). This is a key feature of the lingraphic approach to robot programming. Figure 2 presents a few of the 2300 iconic communication building blocks in Lingraphica(TM). This sample has been chosen to reveal its manipulation-language potential.



Figure 2: Visual communication icons for four lexical items. These icons include an animated verb to "turn", the preposition "against", the adjective "small" and the noun "book". When clicked, they reveal their meaning through graphic animation, spoken word, text and sound effects. A complete message is composed in storyboard fashion, "turn against the small book".


Building in part on Steele's work, David Lees (Leifer et al., 1991; Lees & Leifer, 1993) developed a text-free robot programming environment. Key features of the environment include: a storyboard programming metaphor; graphic representations with readily inferable meaning; composition primitives; motion primitives; orientation primitives; grasp primitives; and environmental mapping. Whereas visual language programming uses a linguistic framework, body language programming uses a kinesthetic analogy. Motions are represented by animation with mime qualities. Gestures replace words (Figure 3).
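As a minimal sketch of how such a storyboard might be represented internally, the Java fragment below models a program as an ordered list of iconic primitives, each carrying a caption with "readily inferable meaning" and the parameters needed for later translation into a textual RPL. The primitive set, class names and translation step are illustrative assumptions, not the actual RoboGlyph data structures.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: a storyboard as an ordered list of iconic primitives,
    // each of which eventually expands into low-level robot commands.
    // The primitive set and names are hypothetical, not the RoboGlyph internals.
    enum PrimitiveKind { MOVE, ORIENT, GRASP, RELEASE, GUARDED_MOVE }

    class StoryboardFrame {
        final PrimitiveKind kind;
        final String caption;          // the "readily inferable meaning" shown to the user
        final double[] parameters;     // e.g. target pose or force threshold
        StoryboardFrame(PrimitiveKind kind, String caption, double... parameters) {
            this.kind = kind; this.caption = caption; this.parameters = parameters;
        }
    }

    class Storyboard {
        private final List<StoryboardFrame> frames = new ArrayList<>();
        void append(StoryboardFrame f) { frames.add(f); }

        // Translation step: each icon becomes one or more textual RPL statements.
        List<String> toRplStatements() {
            List<String> out = new ArrayList<>();
            for (StoryboardFrame f : frames)
                out.add(f.kind + "  ; " + f.caption);
            return out;
        }

        public static void main(String[] args) {
            Storyboard story = new Storyboard();
            story.append(new StoryboardFrame(PrimitiveKind.MOVE, "approach the small book"));
            story.append(new StoryboardFrame(PrimitiveKind.GUARDED_MOVE, "push until contact", 0.5));
            story.toRplStatements().forEach(System.out::println);
        }
    }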





Figure 3: Lees uses a "storyboard" metaphor to help the operator visualize the robot's configuration before, during and after a motion sequence. The sequence itself can be specified graphically at varied levels of detail. The tactile sensing capability of the RoboGlyph system plays an important role in telling "alternate" stories, e.g., move forward until you (the arm) encounter a resistance force greater than 2 ounces; then invoke planar surface calibration to prepare for the next step. "A" is the activate button. Arm configurations are created by a 3D simulation of the arm itself and are not pre-configured icons.


The RoboGlyph robot programming environment was particularly useful in situations where manipulation takes place in an unstructured environment. A priori models of such environments are not available. Rehabilitation applications tend to fall in this domain. The RoboGlyph system was used to write application programs for DeVar. The creation and maintenance of DeVar applications software was dramatically improved over the original RPL environment. Most importantly, these programs could be written by attending para-medical healthcare staff rather than robot programmers.

Once construction of the RoboGlyph system was complete, its effectiveness was tested by having users write application programs for DeVar. The system was tested in the clinic by 6 individuals with high level quadriplegia and 6 occupational therapists. Test subjects who had experience with computer programming, but not with RPLs, were easy to recruit. We also tested the system with people who had experience with text-based RPLs. This study provided valuable data about the performance of experienced users with the graphical system. The subjects were asked to write programs for three tasks:

1. Getting a cup of water from a cooler and bringing it to the front of the workstation, then returning it to the cooler.

2. Retrieving a mouth-stick from its storage holder and presenting it to the user.

3. Writing a program to operate a mechanical timer knob on a microwave oven.

These tasks are representative of the types of robot motions needed in DeVar's application environment (Lees & Leifer, 1993). The time required to complete a program provided a rough indication of the overall effort involved in code creation. This measure was supplemented by a measure of debugging effort, the number of test runs required before the program worked correctly. Since the graphical language was so different from the text-based RPL (VAL-II), it was difficult to compare code size, even though the icons representing actions are eventually translated into VAL code. Code compactness was not judged to be important, whereas coding ease and accuracy were considered very important. These measures allowed some comparison of efficiency to be made. However, this comparison does not separate the performance of the user from the process of code translation. Nonetheless, comparing program size and execution speed provided useful information about the ultimate efficiency of the graphical programming method.

We also measured the users' understanding of graphical programs relative to their text-based counterparts. Test subjects were presented with code fragments from VAL and from the graphical language and then asked questions about the code's function as an indicator of program comprehension. A second measure of code intelligibility was the ease with which program segments could be re-used. Test subjects were given a collection of functioning programs in both text and graphical formats and asked to write a new program using portions of the old ones when possible. The extent to which the old code was re-used indicated how comprehensible it was and therefore how easy it was to adapt to new applications. This combination of tests allowed us to document the strengths and weaknesses of graphical programming as compared with text based robot programming languages. The results of this testing program were overwhelmingly positive (Lees & Leifer, 1993) and support the main theme of this paper, that web interactivity is a key enabling technology for health care robotics.

It is now feasible to create highly visual and interactive programming environments, such as RoboGlyph, using Sun Microsystems' Java technology. Consistent with web technology, Java-based user interfaces are completely platform independent. And used together with the world-wide-web, the sharing of robot program fragments is merely a click away.
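As a hedged illustration of "a click away", the Java sketch below fetches a shared program fragment from a web server by URL; the address and the plain-text fragment format are invented for this example and do not refer to an existing service.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.URL;

    // Illustrative sketch: retrieving a shared robot-program fragment over the web.
    // The URL and the plain-text fragment format are assumptions for this example.
    class FragmentFetcher {
        static String fetchFragment(String urlText) throws IOException {
            URL url = new URL(urlText);
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream()))) {
                String line;
                while ((line = in.readLine()) != null)
                    body.append(line).append('\n');
            }
            return body.toString();
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical address of a shared storyboard fragment.
            String fragment = fetchFragment("http://cdr.stanford.edu/assistiveBOT/fragments/cup.sb");
            System.out.println(fragment);   // a therapist could now splice this into a storyboard
        }
    }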

2.2 Haptic RUIs, perception based design of virtual fixtures:

Rosenberg (1993) introduced the notion of virtual fixtures for use in telepresence systems and demonstrated that such fixtures can enhance operator performance within remote environments. Just as tools and fixtures in the real world can enhance human performance by guiding manual operations, providing localizing references, and reducing the mental processing required to perform a task, virtual fixtures are computer generated percepts, overlaid onto the workspace, which can provide similar benefits. Because such perceptual overlays are virtual constructions they can be diverse in modality, abstract in form, and custom tailored to individual task or user needs.





Figure 4: Complete Telepresence System developed to implement the testing of teleoperator performance in a standardized peg insertion task with and without the aid of virtual fixtures.

This study investigated the potential of virtual fixtures by implementing simple combinations of haptic and auditory sensations as perceptual aids in a standardized peg insertion telemanipulation task. Eight subjects were tested wearing an exoskeleton device linked to a slave robot arm to perform the manipulatory task in the remote workspace. A Fitts Law paradigm was used to quantify operator performance and human information processing capacity for a variety of virtual fixture configurations (Fitts, 1954). It was found that virtual fixtures composed of haptic and auditory perceptual overlays could increase operator performance up to 70%. Because simple fixtures devised from basic elements were shown to be such powerful perceptual aids, it was thought that a workstation environment could be developed to allow a teleoperator to design and implement assistive virtual fixtures interactively. Such a workstation was expected to facilitate teleoperation within unstructured environments upon first encounter (Figure 4).
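For reference, the Fitts' law paradigm scores each trial with an index of difficulty and derives an information-processing rate; the formulation below is the standard one (Fitts, 1954), stated here for the reader rather than reproduced from Rosenberg's thesis.

    % Requires amsmath. A = movement amplitude, W = target tolerance (e.g., peg-hole clearance).
    \begin{align}
      ID &= \log_2\!\left(\frac{2A}{W}\right)  && \text{index of difficulty (bits)} \\
      MT &= a + b\,ID                          && \text{movement time; $a$, $b$ fit per fixture condition} \\
      IP &= \frac{ID}{MT}                      && \text{index of performance (bits/s)}
    \end{align}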

Test results confirmed that overlaying abstract sensory information in the form of virtual fixtures on top of the instrumented sensory feedback from a remote environment can greatly enhance teleoperator performance. Virtual fixtures composed of simple combinations of impedance surfaces and abstract auditory information increased operator performance by up to 70%. Analysis of some basic perceptual elements suggests that virtual fixtures enhance performance in a number of ways, including:

1. simplifying the operator's internal model of the workspace;

2. altering the conceptualization of the task;

3. providing localizing references for the remote worksite; and

4. reducing the demands on high load sensory modalities by providing information through alternate sensory pathways.

Because effective virtual fixtures were developed from very basic elements like rigid impedance planes and simple gradient fields, the creation of an interactive perceptual workstation that allows a teleoperator to develop virtual fixtures on-the-fly is entirely feasible. Such virtual fixtures can be defined using Virtual Reality Modeling Language (VRML) tools linked to web browsers. Using this web-related technology, remotely distributed users can quickly convene in a virtual space to manage complex teleoperations by jointly designing virtual fixtures.
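A minimal sketch of one such basic element follows: a rigid impedance plane that pushes back in proportion to penetration of the commanded tool point and that can also key an auditory cue. The class name, units and stiffness value are assumptions for illustration; this is not Rosenberg's implementation.

    // Illustrative sketch of one basic virtual-fixture element: a stiff impedance
    // plane. When the commanded tool point penetrates the plane, a restoring force
    // proportional to penetration depth is returned for the haptic display, and a
    // simple auditory cue can be keyed off the same test. Units and the stiffness
    // value are assumptions.
    class ImpedancePlane {
        final double[] normal;   // unit normal of the plane (points into the free region)
        final double offset;     // plane equation: normal . x = offset
        final double stiffness;  // newtons per metre of penetration

        ImpedancePlane(double[] normal, double offset, double stiffness) {
            this.normal = normal; this.offset = offset; this.stiffness = stiffness;
        }

        private double penetration(double[] p) {
            return offset - (normal[0]*p[0] + normal[1]*p[1] + normal[2]*p[2]);
        }

        // Force to superimpose on the haptic display for a commanded tool position.
        double[] restoringForce(double[] position) {
            double depth = penetration(position);
            if (depth <= 0) return new double[] {0, 0, 0};   // outside the fixture
            double magnitude = stiffness * depth;            // spring-like push-back
            return new double[] { normal[0]*magnitude, normal[1]*magnitude, normal[2]*magnitude };
        }

        boolean shouldCue(double[] position) {               // auditory overlay trigger
            return penetration(position) > 0;
        }
    }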

2.3 Non-homogeneous fault tolerance for safety and reliability:

After safety, no single factor has been more important in our clinical field trials than reliability. Not surprisingly, safety and reliability are strongly dependent on the same underlying technical issues. To date, most safety and reliability considerations have been passive. We build things with large safety factors. It is appropriate and timely to move forward with active safety and fault-tolerant control. George Toye (1989) demonstrated a non-homogeneous fault-tolerant architecture and operating system that could maintain normal operation of a prototype mobile robot in the face of sensor and actuator degradation and failure. The approach is compatible with manipulator requirements as well.

In programmable electro-mechanical system applications where the consequences of system failure are unbearable, fault tolerance is mandatory. Fault tolerant programmable electro-mechanical systems (FT-PEMS) continue to function despite component failures. It is hypothesized that programmable electro-mechanical systems can be made fault tolerant through judicious management of non-homogeneous functional modular redundancy.

Application of conventional fault tolerant digital computer design techniques to the design of FT-PEMS is inappropriate. Replicated analog devices, such as sensors and actuators, rarely produce identical outputs. Yet, fault tolerant PEMS must deal with these inconsistencies. Hence, the simple fault detection hardware and software used in digital systems must be replaced with an analog voting scheme. A democratic dynamic weighted voting algorithm (DDWV) has been developed. This software voting mechanism provides the versatility required to handle analog variability between redundant modules, and is the heart of the non-homogeneous functional modular redundancy architecture.
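The Java sketch below conveys the flavor of dynamic weighted voting over non-identical analog modules: readings are fused into a weighted consensus, and modules that persistently disagree lose influence. It is an illustrative simplification, not Toye's DDWV algorithm; the tolerance band, decay rate and re-admission policy are assumptions.

    // Illustrative weighted-voting sketch in the spirit of dynamic weighted voting;
    // this is not Toye's DDWV algorithm, only the general idea. Each redundant,
    // non-identical module contributes an analog reading; modules that drift from
    // the weighted consensus gradually lose influence, tolerating analog variability.
    class WeightedVoter {
        private final double[] weights;
        private final double tolerance;    // acceptable analog disagreement
        private final double decay;        // how fast a disagreeing module loses weight

        WeightedVoter(int modules, double tolerance, double decay) {
            weights = new double[modules];
            java.util.Arrays.fill(weights, 1.0);
            this.tolerance = tolerance; this.decay = decay;
        }

        double vote(double[] readings) {
            // 1. Weighted consensus of the current readings.
            double num = 0, den = 0;
            for (int i = 0; i < readings.length; i++) {
                num += weights[i] * readings[i];
                den += weights[i];
            }
            double consensus = num / den;
            // 2. Demote modules whose reading falls outside the tolerance band;
            //    slowly restore modules that agree again (an assumed re-admission policy).
            for (int i = 0; i < readings.length; i++) {
                if (Math.abs(readings[i] - consensus) > tolerance) weights[i] *= decay;
                else weights[i] = Math.min(1.0, weights[i] + 0.01);
            }
            return consensus;
        }
    }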

The definition of fault tolerant actuator subsystems implies that no single actuator may be allowed to dominate the outputs produced by the remainder of the actuators. The task of designing a fault tolerant actuator subsystem is challenging. Energy releasing elements are required. This and other requirements of fault tolerant actuator subsystem design are formally developed in Toye (1989).

The proposed architecture incorporates a uniform framework for handling analog inconsistencies in all PEMS sub-systems: sensors, computational hardware, software and actuators. Thus, redundant modules only need to be functionally equivalent. For example, the mixing of precise, high cost sensors with simple, low cost sensors in a redundant sensor configuration is encouraged. Without sacrificing fault tolerance, reduced system cost and configuration flexibility can be achieved. The architecture takes advantage of redundant module diversity to promote common mode failure resistance. Results from the testing of this fault tolerant programmable electro-mechanical system, based on non-homogeneous functional modular redundancy and democratic dynamic weighted voting, confirm the effectiveness and feasibility of the architecture.

Maintaining fault tolerance and maximizing availability requires early replacement of failed sub components, before critical redundancies are lost. By linking the control and monitoring of the robot's health to the InterNet, replacement parts and repair services can be quickly dispatched. Computational results can be fault-tolerance tested remotely with simulated and real representations of the deployed system without full local hardware redundancy. The user is notified by the servicing organization and further servicing information can be sent to the user, all via the InterNet.
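One way such health events might be pushed to a servicing organization is sketched below; the HTTP endpoint, message fields and acknowledgment semantics are invented for illustration, since the paper does not prescribe a particular protocol.

    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Illustrative sketch: posting a robot health event to a remote service centre
    // over HTTP. The endpoint, message format and field names are assumptions.
    class HealthReporter {
        static int report(String endpoint, String robotId, String event) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (Writer out = new OutputStreamWriter(conn.getOutputStream())) {
                out.write("robot=" + robotId + "&event=" + event);   // e.g. "redundant sensor 2 degraded"
            }
            return conn.getResponseCode();   // service centre acknowledges and can dispatch parts
        }
    }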




Figure 5: Task specification is a pre-requisite for service robot technical specifications. In this matrix we present the distribution of requirements versus three task categories. Across all known studies, these are the most frequently identified rehabilitative service task candidates. All tasks have a strong "social" component. Definitions include: metaphor refers to descriptions of tasks "like" other activities, e.g., "hold it like an egg"; process refers to procedural variables, e.g., "smoothly", in the sense of low accelerations; manipulation is used here in the narrow sense of "motion vector specification"; temporal is often used to define sequential relationships, e.g., "pour the batter after the griddle is hot"; spatial relationships include "put the printer output in the wastebasket"; and social refers to those facets of a task that may require, or are most naturally done in a context that includes other people, e.g., "the social nature of taking a meal". Activities of Daily Living (ADL) are routine living activities judged necessary for independent living.

3. Socio-Technology Framework

Despite the fact that we are making steady advances in tele-service-robotics, it is clear that the technology alone is insufficient to bridge the existing chasm to widespread adoption and acceptance. Our clinical experience with human service robots led to a sharp distinction between the robot-user-interface (RUI) and the graphical-user-interface (GUI) (Leifer, 1992). In that analysis we were able to isolate key differences that led to the development of RoboGlyph (Lees, 1993), a visual-language story-board system for programming DeVar. Performance studies showed that occupational therapists could create and debug vocational robot programs as quickly and accurately as experienced robot engineers. While elated with these findings, concurrent results from multi-site field trials were telling us that even the best programming interface would not be enough to transfer system authority to the clinic. We were learning that real-time, every-day access to the robot and the constellation of people who engaged it, from patient to therapist and physician to janitor, was the barrier to achieving high system availability.



Table 1: A requirement matrix summarizes RUI-GUI design differences. Whereas the GUI designer tries to create the illusion of reality, the RUI designer must make reality accessible. Whereas GUI-interaction is traditionally a one-to-one dialog, InterNet mediated RUI-interaction is a multi-media sharing space.



                         GUI                               WWW RUI
                         Graphical User Interface          Robot User Interface
                         Requirements                      Requirements

Metaphor                 create symbols that evoke         create symbols that are
                         usable analogies and              consistent with manifest
                         metaphors                         reality

Cognition                visualize abstract                visualize real robot motion
                         representations for data &        consistent with the user's
                         data processing algorithms        own kinesthetic models

Manipulation             create the illusion that a        directly manipulate physical
                         symbol directly manipulates       objects safely & reliably
                         reality                           (there is no post-facto undo)

Time                     create the illusion of time       make the interface work in
                                                           real-time

Space                    2 DoF, fixed point of view        6 DoF, variable point of view
                         (eye to screen)                   (end-effector to object)

WWW (interactive         universal text-graphic            text-graphic-haptic
access has been the      msg. posting                      manipulation and story-board
missing element)                                           programming


We were forced to learn the obvious, that human-service robots must operate effectively within a specific social context and a constellation of users. Failure to recognize, and design for, the social nature of rehabilitation was a deep flaw in the system architecture. Figure 5, adapted from our 1992 RUI paper, includes the addition of an InterNet communication framework to more accurately reflect the full dimensions of interaction requirements. Multi-media InterNet web services are now expected to complete the product definition for robots in working human environments.



Figure 6: The RUI must work for each member of the care delivery team. Visual language communication empowers the impaired user. Training and calibration modules support therapy and automated performance assessment reports enhance therapist productivity. The health care administrator benefits most from quality assurance features. All of these users need access to each other and technical system support. We strongly advocate that the InterNet world-wide-web be adopted for integrated rehabilitation community communication.


The RUI must work for each member of the care delivery team (figure 6). Visual language communication empowers the impaired user. Training and calibration modules are required to support therapy. Automated performance assessment reports document therapist productivity and satisfy the needs of third-party-payers. Health care administrators require quality assurance features. All of these users need access to each other and the technical system community. The InterNet world-wide-web is the right medium for integrating natural-language, visual-language and haptic communication requirements (Table 1).

Our approach to building such a virtual clinic and community is based on world-wide-web (WWW) standards (e.g., HTML). WWW interaction is based on viewer applications such as Mosaic(TM), NetScape(TM), and MBone(TM). Web authoring is done with our proprietary PENS(TM) (Personal Engineering Notebook with Sharing) application for informal material and FrameMaker(TM) for formal documentation (Toye et al., 1993). Both authoring tools support automatic text-graphic translation to HTML. Eventually, WWW-based virtual reality tools will support direct tele-operation of the patient's robot for programming, service and training. Initially, a small set of cost-sensitive medical tasks in diagnosis, treatment and emergency intervention is being studied. We are working with equipment manufacturers to convert training and maintenance documents to HTML.

3.1 Maintenance and support:

For interactive computer controlled systems such as robots, records of user and system events are stored in "history lists". The history of commands, system states and actions taken by the robot is important to safety, performance, service and quality assurance. Our history list design guidelines for tele-service-robots are based on 3000 hours of history list data and 30 hours of videotaped DeVar operation (Van der Loos, 1992; Van der Loos & Leifer, 1995). Data analysis (e.g., regression, state transition networks, task action grammar, maximal repeating pattern) illustrates that the client's needs must be understood to accurately dictate data structure and system implementation. Tests of the methodology with DeVar also show that: (1) user interface redesign informed by these observations would easily increase system safety and effectiveness; (2) relatively subtle refinements in the history list data structure greatly facilitate analysis; and (3) extensions to the DeVar history list are needed to cover the range of anticipated users and uses. The following five-step guideline to the design of a real-time data logging system was developed:

1. Identify the real clients for the history list information.

2. Identify the purpose for which each client will use the information.

3. Define the data acquisition system's technical specifications.

4. Define the data structure and variables for the data list.

5. De-limit the applicability of the data, e.g., tailor the extent of data collection to satisfy the users' informational needs lest the magnitude of the history list grow to defeat its utility.
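A minimal sketch of a history-list record and logger shaped by these steps is given below; the field names, the bounded retention policy (step 5) and the example query are illustrative assumptions, not the DeVar schema.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustrative history-list sketch: a bounded log of user and system events,
    // with the record fields and the retention bound chosen to match the clients'
    // informational needs. Field names are assumptions, not the DeVar data structure.
    class HistoryList {
        record Event(long timestampMillis, String actor, String command,
                     String systemState, String outcome) {}

        private final Deque<Event> events = new ArrayDeque<>();
        private final int capacity;                 // de-limit data so the list stays useful

        HistoryList(int capacity) { this.capacity = capacity; }

        void log(String actor, String command, String systemState, String outcome) {
            if (events.size() == capacity) events.removeFirst();   // oldest record retired
            events.addLast(new Event(System.currentTimeMillis(), actor, command, systemState, outcome));
        }

        long countOutcome(String outcome) {         // e.g. failed commands for a safety review
            return events.stream().filter(e -> e.outcome().equals(outcome)).count();
        }
    }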

It is estimated that 50% or more of the maintenance visits by factory repair technicians can be eliminated by shifting responsibility to on-site clinical staff. However, this is achievable only if the information is available on-site, on-demand, and the design of the system minimizes the expertise required to troubleshoot and perform maintenance. It is the goal of our service-robot network to support on-site personnel in this role and to allow factory personnel to assist them in "over-the-shoulder" supervision of the more complex maintenance and repair tasks. Extensive product documentation need not be on-site. Instead, only pertinent information need be delivered on-demand and shared. Furthermore, on-site staff can send photographs, video and audio evidence regarding malfunction to the vendor's design staff to deal with unforeseen problems or usage patterns. These capabilities come fully into play when patient-specific treatment programs must be developed. The local care-giver and even the patient can work interactively with remote technical and medical experts.

Assessment of user acceptance is crucial to any technology change, especially when such a fundamental change in the way of doing business is proposed. Throughout the project, the user and technical support aspects will be assessed through history-list activity tracking and WWW-mediated surveys. When indicated, video-interaction-analysis of taped usage scenarios provides the broad-band data required to deal with the unforeseen. The success of this project will be measured by the response from manufacturers, clinical staff and administrators. Manufacturers must see a substantial cost-of-business savings. Clinical staff must see a substantial improvement in their ability to perform their own jobs, including better treatment for the patient. Healthcare administrators must see substantiated net cost savings without compromising care delivery. All parties must be winners.

4. Summary

The robot-user interface presents many challenges. We may build upon advances in graphical-user interface design but must go beyond them to explicitly address the physical realities of robot teleoperation. We have focused on three lines of RUI investigation in our laboratory: visual language programming (Steele, Lees); haptic interfaces for virtual fixtures (Rosenberg); and non-homogeneous fault-tolerance (Toye). All aspects of the robot-user-interface deserve attention. No one laboratory can expect to cover all of these "bases". Accordingly, we call for a high degree of international cooperation and collaboration in the development of a single, world-linked rehabilitation robot. Cost-benefit evidence supports the value of assistive robot technology once deployed (Hammel et al., 1992). However, the cost of development is prohibitive for any single national market. The rehabilitation robot is an "orphan product" in need of adoption by both major and minor laboratories around the world.

The implications of our proposed InterNet-based framework go far beyond telerobotics. By deploying currently available InterNet technology, we can now merge multiple modes of control and communication through one user interface, an enhanced web browser, to form a socio-technological framework. Moreover, based on the hypothesis of this paper, the World-Wide-Web gives us, perhaps for the first time, a medium to support the scale of collaboration called for to deploy this technology. We call for the creation of a rehabilitation-robotics-web to support collaboration on a universal rehabilitation robot architecture, to support the training and education of all those whose lives will be affected by the technology, and finally, to support routine continuous remote interaction with these robots once deployed. Please check URL http://cdr.stanford.edu/assistiveBOT/.

5. Acknowledgments

Work reported in this paper has been supported by the Stanford Center for Design Research, the Tolfa Corporation, the Stanford Center for the Study of Language and Information, the National Science Foundation and the U.S. Department of Veterans Affairs Rehabilitation Research and Development Service. DeVar(TM) is a registered trademark of Independence Works Incorporated.

Other members of the rehabilitation robotics team, variously involved in one or more aspects of the DeVar R&D program include: Joy Hammel, David Lees, Larry Edwards, Richard Steele, Jim Kramer, Charlie Wampler, John Jameson, Charles Buckley, Stefan Michalowski and Inder Perkash.

6. References

Cannon, D.J., "Point-and-Direct Telerobotics: interactive supervisory control at the object level in unstructured human-machine system environments", Ph.D. Thesis, Dept. Mechanical Engineering, Stanford University, April, 1992.

Cupo, M.E., Sheredos, S., "Clinical Evaluation of the Desktop Vocational Assistant Robot (DeVAR)", Technology Transfer Section Report, U.S. Department of Veterans Affairs, September, 1994.

Cutkosky, M.R., Kovacs, G., and Howe, R., "Tactile Sensing and Information Processing for Man and Machine Systems", URI proposal manuscript, July, 1991.

Fitts, P. M., "The Information Capacity of Human Motor Systems in Controlling the Amplitude of a Movement," Journal of Experimental Psychology., Vol 47, pp. 381-391, 1954.

Hammel, J., Van der Loos, H.F.M., Perkash, I., "Evaluation of a Vocational Robot with a Quadriplegic Employee", Archives of Physical Medicine and Rehabilitation, Vol. 73, July, 1992, pp. 683-693.

Kerr, J., "The Zebra-Zero Technical Description", Zebra Robotics, Inc., 576 Middlefield Rd., Palo Alto, CA, 94303, 1991.

Lees, D., Leifer, L., "A Graphical Programming Language for Robots Operating in Lightly Structured Environments", IEEE Conference on Robotics and Automation, Atlanta, GA, May, 1993, pp. 648-653.

Leifer, L., "RUI: factoring the robot user interface", Proceedings of the Special Interest Group on Rehabilitation Robotics, RESNA '92, Toronto, June 6-11, 1992, 4 pages.

Leifer, L., Van der Loos, H.F.M., and Toye, G., "The Management of Rehabilitation Technology through Network-Mediated Interactive Instruments", white-paper, NSF-Whitaker Foundation, December, 1994.

Leifer, L., Van der Loos, H.F.M., and Lees, D., "Visual Language Programming: for robot command-control in unstructured environments", Proceedings of the 5th International Conference on Advanced Robotics, Pisa, June, 1991, pp. 31-36.

Liang, L., "Implementation of a Theory of Robotic Machine Learning of Natural Language", Ph.D. Thesis, Dept. Mechanical Engineering, Stanford University, Dec., 1991.

Rosenberg, L., "Virtual Fixtures", Ph.D. Thesis, Dept. Mechanical Engineering, Stanford University, 1994.

Stanger, C., Anglin, C., Harwin, W., and Romilly, D., "Devices for Assisting Manipulation: a summary of user task priorities", IEEE Transactions Rehabilitation Engineering, Vol. 2, No. 4, pp. 256-265, Dec., 1994.

Steele, R., Weinrich, M., Wertz, R., Kleczewska, M., and Carlson, G., "Computer-based visual communication in aphasia", Neuropsychologia, Vol. 27, pp. 409-426, 1989.

Toye, G., "Management of Non-Homogeneous Functional Modular Redundancy for Fault Tolerant Programmable Electro-Mechanical Systems," Ph.D. Thesis, Dept. Mechanical Engineering, Stanford University, July, 1989.

Toye, G., Cutkosky, M., Leifer, L., Tenenbaum, J., and Glicksman, J., "SHARE: a Methodology and Environment for Collaborative Product Development", Proceedings of the 2nd Workshop on Enabling Technologies, (WETICE'93), IEEE Computer Society Press, pp. 33-47, April, 1993.

Van der Loos, H.F.M., "A History List Design Methodology for Interactive Robots." Ph.D. Thesis, Department of Mechanical Engineering, Stanford University, CA, 1992.

Van der Loos, H.F.M., and Leifer, L., "The Design and Use of History List Systems for Rehabilitation Robots: enhancing safety and performance through activity recording and analysis", Technology and Disability, special section on rehabilitation robotics, Mahoney, R., editor, in-press, expected December, 1995.




Citation (Copyright Elsevier, 1996)

L.J. Leifer, G. Toye, H.F.M. Van der Loos, Integrating the socio-technical framework of human service through the InterNet-World-Wide-Web. Proceedings of the International Workshop on Biorobotics: Human-Robot Symbiosis, Tsukuba, Japan, May 1995, published in Robotics and Autonomous Systems, vol. 18, Elsevier Press, Amsterdam, 1996, pp. 117-126.