BEAMING: Being in Augmented Multi-Modal Naturally-Networked Gatherings


Robot avatar talking to a remote user

Today, in spite of advanced video conferencing, shared virtual environments, and gaming environments such as Second Life, it is still simply much more efficient to physically travel to a remote location for business, scientific or family meetings—even at a huge environmental, energetic and opportunity cost. The science and technology developed in BEAMING will for the first time give people a real sense of physically being in a remote location with other people, and vice versa—without actually traveling.

BEAMING is a four-year FP7 EU collaborative project which started on January 1st, 2010. BEAMING raises a number of ethical and legal issues that are familiar from existing virtual reality and telecommunications technologies. It also raises several novel issues.

BEAMING is the process of instantaneously transporting people (visitors) from one physical place in the world to another (the destination) so that they can interact with the local people there. This is achieved by shifting their means of perception into the destination, and by decomposing their actions and their physiological and even emotional state into a stream of data that is transferred across the internet.

Simultaneous streams of data—from the destination site to the visitor’s perceptual apparatus, and from the actions and state of the visitor to the destination site—cohere to form a unified virtual environment representing the physical space of the destination in real time, a destination that now includes the beamed people. BEAMING will endow this process with physicality.
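The two-way streaming just described can be illustrated with a toy data model: one message type per direction, serialized for transfer, then merged into a single shared scene. This is purely an illustrative sketch—the names (`VisitorState`, `DestinationPercepts`, `merge`) and fields are assumptions for exposition, not part of the BEAMING system:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical message for the visitor -> destination stream:
# the visitor's actions and physiological state, to be rendered remotely.
@dataclass
class VisitorState:
    head_pose: list   # e.g. [x, y, z] position of the visitor's head
    gesture: str      # symbolic gesture label for the avatar or robot
    heart_rate: int   # physiological state mirrored on the remote embodiment

# Hypothetical message for the destination -> visitor stream:
# stand-ins for captured sensory data at the destination site.
@dataclass
class DestinationPercepts:
    video_frame_id: int
    audio_chunk_id: int

def encode(msg) -> bytes:
    """Serialize a message into a byte stream for transfer across the internet."""
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(raw: bytes) -> dict:
    """Reconstruct a message dictionary from the received byte stream."""
    return json.loads(raw.decode("utf-8"))

def merge(percepts: dict, visitor: dict) -> dict:
    """Cohere both streams into one unified scene description:
    the destination's physical space plus the beamed visitor."""
    return {"scene": percepts, "beamed_visitor": visitor}

if __name__ == "__main__":
    v = VisitorState(head_pose=[0.0, 1.7, 0.0], gesture="wave", heart_rate=72)
    p = DestinationPercepts(video_frame_id=1042, audio_chunk_id=1042)
    scene = merge(decode(encode(p)), decode(encode(v)))
    print(scene["beamed_visitor"]["gesture"])  # the visitor's action, seen at the destination
```

In a real system the two streams would of course carry video, audio, haptic and tracking data over network transports rather than JSON dictionaries; the point of the sketch is only the symmetry of the two directions and their fusion into one shared environment.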


Movement and gesture rendering

The visitor’s actions at the destination site can have physical consequences there, and the actions of local people can have physical consequences for the visitor. The visitor may be embodied at the destination site as a physical robot, and yet be seen by the locals virtually in human form.


Different environmental setups

This project will bring today’s networking, computer vision, computer graphics, virtual reality, haptics, robotics and user interface technology together in a way that has never been tried before, thereby transcending what is possible today. The goal is to produce a new kind of virtual transportation, in which a person can be physically embodied, interacting with life-sized people who may be thousands of kilometers away. Moreover, this is underpinned by the practical utilization of recent advances in cognitive neuroscience in understanding the process whereby the brain represents our own body.

The project brings technology researchers together with neuroscientists in order to develop and understand this complex but far-reaching technology. The profound ethical and legal issues raised by a (near) future world in which this will be possible are considered in a dedicated work package.