Name: FRÉDÉRIC ELISEI
Affiliation: INPG/Univ. Stendhal, Grenoble Cedex, France
Papers: 35
Collaborators: 41
Citations: 275
PageRank: 25.05
Referers: 576
Referees: 611
References: 371
Title | Citations | PageRank | Year
Impact of Social Presence of Humanoid Robots: Does Competence Matter? | 0 | 0.34 | 2021
Comparing Cascaded LSTM Architectures for Generating Head Motion from Speech in Task-Oriented Dialogs | 1 | 0.36 | 2018
Learning off-line vs. on-line models of interactive multimodal behaviors with recurrent neural networks | 1 | 0.35 | 2017
Graphical models for social behavior modeling in face-to-face interaction | 8 | 0.52 | 2016
Beaming the Gaze of a Humanoid Robot | 1 | 0.36 | 2015
Learning multimodal behavioral models for face-to-face social interaction | 7 | 0.51 | 2015
Design and Validation of a Talking Face for the iCub | 2 | 0.41 | 2015
Vizart3D - real-time system of visual articulatory feedback | 0 | 0.34 | 2013
Vizart3D : Retour Articulatoire Visuel pour l'Aide à la Prononciation (Vizart3D: Visual Articulatory Feedback for Computer-Assisted Pronunciation Training) [in French] | 0 | 0.34 | 2012
I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation | 42 | 2.04 | 2012
Gaze, conversational agents and face-to-face communication | 27 | 1.64 | 2010
Can you 'read' tongue movements? Evaluation of the contribution of tongue display to speech understanding | 22 | 1.71 | 2010
On the importance of eye gaze in a face-to-face collaborative task | 3 | 0.66 | 2010
Lip-synching using speaker-specific articulation, shape and appearance models | 10 | 0.56 | 2009
From 3-D Speaker Cloning to Text-to-Audiovisual-Speech | 0 | 0.34 | 2008
An Audiovisual Talking Head for Augmented Speech Generation: Models and Animations Based on a Real Speaker's Articulatory Data | 18 | 0.93 | 2008
Speaking with smile or disgust: data and models | 5 | 0.53 | 2008
Can You "Read Tongue Movements"? | 7 | 0.65 | 2008
Retargeting cued speech hand gestures for different talking heads and speakers | 0 | 0.34 | 2008
Lips2008: Visual Speech Synthesis Challenge | 32 | 1.72 | 2008
A Trainable Trajectory Formation Model TD-HMM Parameterized for the LIPS 2008 Challenge | 1 | 0.39 | 2008
Gaze Patterns during Face-to-Face Interaction | 2 | 0.50 | 2007
Intelligibility of natural and 3D-cloned German speech | 8 | 0.79 | 2007
Analyzing Gaze During Face-to-Face Interaction | 0 | 0.34 | 2007
Scrutinizing Natural Scenes: Controlling the Gaze of an Embodied Conversational Agent | 8 | 1.17 | 2007
Towards eye gaze aware analysis and synthesis of audiovisual speech | 0 | 0.34 | 2007
Analyzing and modeling gaze during face-to-face interaction | 2 | 0.50 | 2007
Does a Virtual Talking Face Generate Proper Multimodal Cues to Draw User's Attention to Points of Interest? | 0 | 0.34 | 2006
Embodied conversational agents: computing and rendering realistic gaze patterns | 2 | 0.51 | 2006
Capturing Data and Realistic 3D Models for Cued Speech Analysis and Audiovisual Synthesis | 1 | 0.37 | 2005
Basic components of a face-to-face interaction with a conversational agent: mutual attention and deixis | 2 | 0.41 | 2005
Audiovisual text-to-cued speech synthesis | 1 | 0.36 | 2004
Tracking talking faces with shape and appearance models | 9 | 0.98 | 2004
Evaluation of a Speech Cuer: From Motion Capture to a Concatenative Text-to-cued Speech System | 3 | 0.54 | 2004
Audiovisual Speech Synthesis | 50 | 2.86 | 2003