Last time, I talked about advanced telepresence, instances where the brain and the hand (communications and physical capability) go to work on a problem or opportunity that is distant. I mentioned examples from medicine, science, sports coaching, and the arts. But the possibilities for this kind of telepresence are limited by technology and by what we, as humans, can handle well.
The technological improvements are easy to understand. Making sensors, waldos, and actuators faster, cheaper, and (in some cases) smaller can help extend telepresence options. Greater bandwidth would be useful, too, as would algorithms that complete experiences, compress data, and anticipate and prioritize information. Providing convenient ways to power devices, and materials (such as flexible elements and protections against heat, pressure, and corrosion) that allow robots to go into new environments, would also extend the possibilities for telepresence.
The human dimension may be less obvious, but it is no less critical. It may seem that the more realistic the user experience, the more capable the system would be. Unfortunately, this is not always the case. Greater fidelity does not always make for a better experience.
The uncanny valley is a phenomenon wherein an animation that is almost, but not quite, lifelike can pull you out of the experience, even make you squeamish. Some people felt something like that with the higher frame rate version of the first Hobbit film. Willful suspension of disbelief, it seems, requires enough sensory space for participation.
It has also been found that too much information in a simulation can lead to simulator sickness, a phenomenon similar to motion sickness. This may occur because of conflicts between what users sense (e.g., the visual experience not aligning with the sense of motion) or between the experience and the phenomenon being simulated. (Think of how people accommodate themselves over time to travel on a ship.) In addition to being unpleasant, these sickness experiences can distract users or even motivate them to make suboptimal choices in the task at hand if those choices reduce the unpleasantness. Dialing in the stimuli to include just the right amount of detail is one way to address this.
Attention is another important aspect. The right overlay of data (such as labeling visuals) for virtual reality can help the user pick out and prioritize which phenomena are important. Take drone targeting applications as an example. The system must, of course, correctly sense the environment (e.g., project clear, accurate images), but it also must provide analysis that distinguishes targets from what should not be targets. When time lags are in effect, there must be anticipatory analysis (e.g., predicting where the target will be when a command is actually executed). Interfering phenomena, including those in motion (e.g., animals, traffic, noncombatants), must be accounted for. And there probably should be provisions to warn users of potential mistakes and even to override their actions if they are too reckless.
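The anticipatory analysis mentioned above can be sketched in its simplest form as dead reckoning: extrapolating a target's position over the known command delay. This is a hypothetical illustration, not a real targeting system; the constant-velocity assumption, function name, and numbers are all mine.

```python
# Hypothetical sketch: constant-velocity dead reckoning to compensate
# for a known command latency. All names and values are illustrative.

def predict_position(pos, vel, latency_s):
    """Estimate where a moving target will be when a delayed command
    actually takes effect, assuming roughly constant velocity."""
    return tuple(p + v * latency_s for p, v in zip(pos, vel))

# A target at (100.0, 50.0) meters moving at (4.0, -2.0) m/s, with a
# 0.5 s round-trip command delay:
predicted = predict_position((100.0, 50.0), (4.0, -2.0), 0.5)
# predicted == (102.0, 49.0)
```

A real system would need a richer motion model (acceleration, maneuver prediction) and an estimate of how uncertain the prediction is, so that the interface can warn the user when the lag makes a command unreliable.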
One other factor is having an effective interface. The science fiction dream here is direct control by the brain. That makes sense for applications for people with disabilities, but it’s hard to imagine instances of telepresence where this would be required. Being able to look around as if you were there (either with imaging goggles that track head movement or with a human-scale display, perhaps in 3-D) makes sense and can heighten the sense of immediacy. Similarly, instrumenting motion (for instance, with gloves that cause waldos to move with the dexterity of hands) and including haptics (so touch can be transmitted to the user) may be helpful, though delays would be a concern for delicate work. Voice control is a good option for some applications.
Artificial intelligence could play an important role in integrating users with the tools of telepresence. Some systems might provide advice and information at optimum times. There is also the opportunity to learn and adapt to the quirks of individual users over time.
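As a minimal illustration of adapting to individual users, consider smoothing out an operator's hand tremor, with a smoothing weight that could be tuned per user over time. This is an assumed example of mine, not something from the original; the class name, parameter, and constants are all illustrative.

```python
# Hypothetical sketch: exponential moving average over raw control
# input, with a per-user smoothing weight. Illustrative names only.

class InputSmoother:
    def __init__(self, alpha=0.3):
        self.alpha = alpha   # lower alpha = heavier smoothing
        self.state = None    # last smoothed value, if any

    def update(self, raw):
        """Blend the new raw reading with the smoothed history."""
        if self.state is None:
            self.state = raw
        else:
            self.state = self.alpha * raw + (1 - self.alpha) * self.state
        return self.state

# A user whose hand jitters around a steady position:
smoother = InputSmoother(alpha=0.5)
out = None
for reading in [10.0, 12.0, 11.0]:
    out = smoother.update(reading)
# out == 11.0
```

A system that learns could adjust `alpha` from observed behavior, smoothing more for a user with pronounced tremor and less for one who makes fast, deliberate motions.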
Finally, the real world, especially with humans in the loop, is full of challenges and unexpected occurrences. Anticipating as much as possible and creating modes for graceful failure (ending or suspending operations in the least harmful ways) should be part of any telepresence design.
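One concrete form graceful failure can take is a communications watchdog: if the control link goes silent past a deadline, the remote side stops motion and holds position rather than continuing blindly. The sketch below is a hypothetical illustration under that assumption; the class, timeout value, and callback are all invented for the example.

```python
# Hypothetical sketch of a graceful-failure guard: suspend operation
# when the operator's link has been silent too long. Illustrative only.

import time

class LinkWatchdog:
    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Call whenever a message arrives from the operator."""
        self.last_heartbeat = time.monotonic()

    def safe_to_act(self):
        return (time.monotonic() - self.last_heartbeat) < self.timeout_s

def control_step(watchdog, command, stop_motion):
    """Pass the command through while the link is live; otherwise
    invoke the least-harmful suspend action and drop the command."""
    if watchdog.safe_to_act():
        return command
    stop_motion()
    return None
```

What "least harmful" means is domain-specific: a surgical waldo should freeze in place, while a drone might loiter or return home, so `stop_motion` would be supplied per application.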
In summary, much of the potential for telepresence depends on a deeper understanding of how people work in such artificial circumstances. In many ways, we will simply accommodate ourselves to the new circumstances, perhaps with training. But there will be broader opportunities for taking advantage of what telepresence offers if we can consider the human dimension and engineer systems that better fit the way we naturally work. This will be especially critical when the users (e.g., physicians, coaches, scientists) already have specialized capabilities and may not have the time or the aptitude to become experts in telepresence.