All R&D activities within the 3DLife Network of Excellence are organized under a series of Integrated Projects (IPs). Each IP involves a different set of 3DLife partners, with its leadership rotating among them every six months. The list of 3DLife IPs is given in the table below.
|Integrated Project|Timeline|Partners|Poster|Link|
|---|---|---|---|---|
|Autonomous Virtual Humans|M1 – Today|QMUL, ITI, UNIGE|AVH Poster (200)|-|
|Robust Virtual Mirror|M1 – Today|DCU, HHI, ITI|RVM Poster (285)|Link|
|Simulating the Body and Motion of an Athlete|M1 – Today|QMUL, DCU, UNIGE, TPT|ABMS Poster (222)|Link|
|Immersive Worlds|M7 – Today|ITI, UNIGE, TPT|IW Poster (209)|Link|
Autonomous Virtual Humans. In this integrated project, UNIGE is developing an Embodied Conversational Agent system by integrating its own technologies for 3D graphics, animation and dialogue management. The dialogue manager accepts text- and speech-based input and responds in speech with the appropriate facial expressions. QMUL is recording the head and gaze movements of individual humans with a tracking device during a predefined dialogue. Based on these data, QMUL and UNIGE are developing a model of facial gesture and eye gaze controlled by specific dialogue events. This head-and-gaze model is being integrated into the Embodied Conversational Agent system.

ITI is developing a Face Recognition module that is also being integrated with the Embodied Conversational Agent system. Face recognition starts by detecting the face region in each video frame. Detection is based on low-level image features and runs in real time; its output is a bounding box of the face, within which recognition takes place. A second set of low-level features is computed inside the face bounding box and serves as the basis for face matching. Matching relies on specialized classifiers that assign the unknown input face to the most similar person among those registered in the database. The system can register new users when commanded by the dialogue manager, either in bulk (many users at once) or individually (adding a new user to an existing database). After each registration the system undergoes a training step, invisible to the user, which is required by the recognition algorithms. Once trained, the system can recognize any registered user, retrieving his/her name and presenting it to the dialogue manager.
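The detect-then-match flow described above (find a face bounding box per frame, compute features inside it, then classify against registered users) can be illustrated with a minimal sketch. The detector, recognizer and threshold below are OpenCV stand-ins (Haar cascade plus LBPH, requiring opencv-contrib-python) chosen for illustration only; they are assumptions, not ITI's actual features or classifiers.

```python
# Minimal sketch of a detect -> crop -> match pipeline.
# OpenCV's Haar cascade and LBPH recognizer stand in for the project's
# low-level features and specialized classifiers (assumptions, not the
# actual implementation). Requires opencv-contrib-python.
import cv2
import numpy as np

FACE_SIZE = (100, 100)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

def register(face_crops, labels):
    """Train the matcher on cropped grayscale face images of enrolled users.

    `labels` are integer user IDs; bulk registration passes all users at once,
    incremental registration retrains with the extended set.
    """
    samples = [cv2.resize(face, FACE_SIZE) for face in face_crops]
    recognizer.train(samples, np.array(labels))

def recognize(frame, names, threshold=80.0):
    """Detect face bounding boxes in a frame, then match each against the database."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in boxes:
        crop = cv2.resize(gray[y:y + h, x:x + w], FACE_SIZE)
        label, distance = recognizer.predict(crop)
        # Lower distance means a closer match; weak matches are reported as unknown.
        results.append(names[label] if distance < threshold else "unknown")
    return results
```

In this sketch the hidden training step mentioned above corresponds to the `register` call, which the dialogue manager would trigger after each enrollment before recognition is queried again.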
Clothing is usually purchased by trying it on in front of a mirror. We look at how the fabric drapes on our body and how we like the blend of colours and textures, and we do so while asking our friends in the dressing room for their opinion. When buying customized, tailored clothes, however, we purchase something that has not been produced yet. An augmented-reality dressing room, equipped with a Virtual Mirror that shows the client wearing a virtual version of the customized product, could assist the user in selecting design, fabrics, textures and patterns, and thereby restore the familiar shopping experience to the purchase of customized and tailored clothes.
During the development of such a system, several issues have to be addressed. On the one hand, methods and algorithms for realistically rendering virtually textured clothes, or entirely virtual clothes, onto the real person have to be investigated, such that the virtual garments accurately follow the movements of the body. On the other hand, to support a large collection of cloth textures, image retrieval is also of great importance. Finally, to make shopping a shared experience with friends, streaming over networks to remote devices and data privacy could also be addressed.
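Purely as an illustration of the retrieval issue, the sketch below ranks catalogue fabric textures against a query image by colour-histogram similarity using OpenCV. The descriptor and similarity measure are assumptions for demonstration, not the features actually used in the project.

```python
# Illustrative only: ranking fabric textures by colour-histogram similarity
# to a query image. The project's actual retrieval features are not specified
# here; this descriptor is an assumption.
import cv2

def colour_histogram(image):
    """HSV colour histogram as a crude colour/texture descriptor."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def rank_textures(query_image, catalogue):
    """Return catalogue texture names sorted from most to least similar to the query."""
    query_hist = colour_histogram(query_image)
    scored = [
        (cv2.compareHist(query_hist, colour_histogram(img), cv2.HISTCMP_CORREL), name)
        for name, img in catalogue
    ]
    return [name for score, name in sorted(scored, reverse=True)]
```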
In this integrated project, several advanced tools and computer graphics methods are used to achieve visual realism.
As a real-world scenario, the simulation of a tennis player in motion will be demonstrated.
The production pipeline of this integrated project consists of the following stages (sketched in code after the list):
- Firstly, a human model is scanned and post-processed to generate a high-quality textured mesh.
- Secondly, these models are processed for scalable rendering.
- Thirdly, the resulting high-quality mesh is mapped to an animation library.
- Fourthly, the corresponding motion data are generated.
- Finally, the resulting model is streamed over the network and rendered in an interactive environment for a visual demonstration.
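The stages above could be chained roughly as in the sketch below. Every function and class name is a hypothetical placeholder for an entire processing module; none of them come from the actual 3DLife tools.

```python
# Hypothetical outline of the athlete-simulation pipeline described above.
# Each function is a placeholder for a whole processing module.
from dataclasses import dataclass

@dataclass
class Asset:
    """Carries the intermediate product of each pipeline stage."""
    name: str
    stage: str

def scan_and_postprocess(subject: str) -> Asset:
    # Stage 1: scan the human model and clean it up into a textured mesh.
    return Asset(subject, "textured mesh")

def prepare_scalable_rendering(mesh: Asset) -> Asset:
    # Stage 2: build scalable (level-of-detail) representations of the mesh.
    return Asset(mesh.name, "scalable mesh")

def map_to_animation_library(mesh: Asset) -> Asset:
    # Stage 3: bind the mesh to the skeleton used by the animation library.
    return Asset(mesh.name, "rigged mesh")

def generate_motion(rig: Asset, action: str) -> Asset:
    # Stage 4: produce motion data (e.g. tennis strokes) for the rigged model.
    return Asset(rig.name, f"motion: {action}")

def stream_and_render(rig: Asset, motion: Asset) -> None:
    # Stage 5: stream model and motion over the network and render interactively.
    print(f"streaming {rig.stage} of {rig.name} with {motion.stage}")

if __name__ == "__main__":
    mesh = scan_and_postprocess("tennis player")
    mesh = prepare_scalable_rendering(mesh)
    rig = map_to_animation_library(mesh)
    motion = generate_motion(rig, "forehand stroke")
    stream_and_render(rig, motion)
```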
Giving end-users the power to dynamically construct their own immersive worlds, based on their preferences and selected session parameters, is the motivation behind the “Immersive Worlds” integrated project.
To achieve this goal, new tools should be developed that provide an intuitive user experience and encourage the end-user to discover new ways of interacting with the media content available on the Future Internet.
The “Immersive Worlds” IP started at M7 of the 3DLife project lifetime, i.e. July 2010. Its initial timeline extends to M20 of the 3DLife project lifetime, i.e. August 2011.
During this initial timeline, a tool was developed that enables the average user to easily create and view his/her own personalized virtual 3D worlds using any available multimedia content (e.g. his/her own sketches, existing images, 3D models found on the internet, etc.).
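Purely as an illustration of what such a tool might produce, the sketch below shows a hypothetical declarative scene description assembled from user-supplied media; it is not the IP's actual scene format.

```python
# Purely illustrative: a declarative world description a tool of this kind
# might generate from user-supplied media. Not the IP's actual scene format.
import json

scene = {
    "world": "my_living_room",
    "assets": [
        {"type": "sketch",   "source": "wall_layout.png",   "role": "floor plan"},
        {"type": "image",    "source": "poster.jpg",        "placed_on": "north wall"},
        {"type": "3d_model", "source": "sofa_from_web.obj", "position": [1.2, 0.0, 3.5]},
    ],
    "session": {"lighting": "evening", "navigation": "first_person"},
}

print(json.dumps(scene, indent=2))
```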
Beyond August 2011 – from M21 up to M42 of the 3DLife project – the ultimate aim of the IP is to offer the user a personalized, immersive 3D experience. This will be achieved by enhancing the aforementioned tool so as to enable the user to add a personal avatar, as well as autonomous agent feeds, to the virtual 3D world.