Name: JEESUN KIM
Papers: 64
Collaborators: 40
Citations: 64
PageRank: 24.84
Referers: 120
Referees: 212
References: 113
Title | Citations | PageRank | Year
Perceiving Older Adults Producing Clear and Lombard Speech | 0 | 0.34 | 2019
Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines. | 0 | 0.34 | 2018
Disgust expressive speech: The acoustic consequences of the facial expression of emotion. | 0 | 0.34 | 2018
Contribution of visual rhythmic information to speech perception in noise. | 0 | 0.34 | 2017
The effect of age and hearing loss on partner-directed gaze in a communicative task. | 0 | 0.34 | 2017
The Effect Of Spectral Profile On The Intelligibility Of Emotional Speech In Noise | 0 | 0.34 | 2017
The Processing of Attended and Predicted Sounds in Time | 2 | 0.44 | 2016
Introduction to Poster Presentation of Part II. | 0 | 0.34 | 2016
Influences of visual speech information on the perception of foreign-accented speech in noise. | 0 | 0.34 | 2015
Exploring Acoustic Differences Between Cantonese (Tonal) And English (Non-Tonal) Spoken Expressions Of Emotions | 0 | 0.34 | 2015
Explaining the visual and masked-visual advantage in speech perception in noise: the role of visual phonetic cues. | 0 | 0.34 | 2015
The stability of mouth movements for multiple talkers over multiple sessions. | 1 | 0.36 | 2015
Syllabic structure and informational content in English and Spanish. | 0 | 0.34 | 2015
Anticipation of Turn-switching in Auditory-Visual Dialogs | 1 | 0.35 | 2015
Cross-Modality Matching Of Linguistic And Emotional Prosody | 0 | 0.34 | 2015
Cross-modality matching of linguistic prosody in older and younger adults. | 0 | 0.34 | 2015
Visual vs. auditory emotion information: how language and culture affect our bias towards the different modalities. | 0 | 0.34 | 2015
Auditory, visual, and auditory-visual spoken emotion recognition in young and old adults. | 0 | 0.34 | 2015
Examining speech production using masked priming. | 0 | 0.34 | 2015
The effect of auditory and visual signal availability on speech perception. | 0 | 0.34 | 2015
Does elderly speech recognition in noise benefit from spectral and visual cues? | 0 | 0.34 | 2014
The effect of expression clarity and presentation modality on non-native vocal emotion perception | 0 | 0.34 | 2014
Interplay of informational content and energetic masking in speech perception in noise. | 0 | 0.34 | 2014
Comparing the consistency and distinctiveness of speech produced in quiet and in noise | 5 | 0.47 | 2014
Tracking eyebrows and head gestures associated with spoken prosody. | 6 | 0.55 | 2014
How far out? the effect of peripheral visual speech on speech perception. | 0 | 0.34 | 2013
Detecting auditory-visual speech synchrony: how precise? | 0 | 0.34 | 2013
Auditory and auditory-visual Lombard speech perception by younger and older adults. | 2 | 0.54 | 2013
Acoustic and visual adaptations in speech produced to counter adverse listening conditions. | 1 | 0.37 | 2013
Spontaneous synchronisation between repetitive speech and rhythmic gesture. | 0 | 0.34 | 2013
The Intelligibility Of Lombard Speech: Communicative Setting Matters | 0 | 0.34 | 2012
Auditory-Visual Speech To Infants And Adults: Signals And Correlations | 0 | 0.34 | 2012
Perceiving visual prosody from point-light displays. | 0 | 0.34 | 2011
Temporal Relationship Between Auditory And Visual Prosodic Cues | 2 | 0.38 | 2011
The Effect Of Seeing The Interlocutor On Speech Production In Different Noise Types | 1 | 0.38 | 2011
Audiovisual speech processing in visual speech noise. | 1 | 0.40 | 2011
The effect of seeing the interlocutor on auditory and visual speech production in noise. | 6 | 0.52 | 2011
Visual speech influences speeded auditory identification. | 0 | 0.34 | 2011
Testing Audio-Visual Familiarity Effects on Speech Perception in Noise. | 0 | 0.34 | 2011
Visual Speech Speeds Up Auditory Identification Responses | 1 | 0.36 | 2011
Auditory Speech Processing Is Affected By Visual Speech In The Periphery | 0 | 0.34 | 2011
Emotion perception by eye and ear and halves and wholes. | 0 | 0.34 | 2010
Infants match auditory and visual speech in schematic point-light displays. | 0 | 0.34 | 2010
Abstracting visual prosody across speakers and face areas. | 0 | 0.34 | 2010
Audiovisual perception in adverse conditions: Language, speaker and listener effects | 3 | 0.41 | 2010
Prosody off the top of the head: Prosodic contrasts can be discriminated by head motion | 8 | 0.63 | 2010
Are virtual humans uncanny?: varying speech, appearance and motion to better understand the acceptability of synthetic humans. | 0 | 0.34 | 2009
Non-Automaticity Of Use Of Orthographic Knowledge In Phoneme Evaluation | 0 | 0.34 | 2009
Speaker Discriminability For Visual Speech Modes | 0 | 0.34 | 2009
Recognizing spoken vowels in multi-talker babble: spectral and visual speech cues. | 0 | 0.34 | 2009