CoFee Workshop, November 26-27th

• Dates: 26/11/2015 – 27/11/2015
• Location: Laboratoire Parole et Langage, 5 av. Pasteur,
Aix-en-Provence, room A003 (26/11) and room A-102 (27/11,
restricted access)

——

Program:

CoFee Workshop – Day 1, November 26th

(Public access, room A003)

9:00-9:15 Introduction and workshop structure

9:15-9:45 CoFee general objective and current situation, Laurent Prévot

9:45-10:45 Shifting preferences: How the brain interprets the pregnant pause in conversation, Sarah Bögels

10:45-11:00 CoFee Break

11:00-12:00 Backchannels and turn-taking, Stefan Benus

12:00-13:30 Lunch Break (@LPL)

13:30-14:30 Automatic synchronisation of audio and video signals, Jan
Gorisch
14:30-15:15 Is prosodic entrainment a turn-competitive resource in
multi-party conversations? Emina Kurtiç

15:15-15:30 CoFee Break

15:30-16:30 Current Results of CoFee, Laurent Prévot
16:30-17:30 Wrap-up: Open time for discussion

*****

CoFee Workshop – Day 2, November 27th

(ONLY CoFee members + guests, room A-102)

Working sessions / Open Discussion

9:00-9:45 Data session / Looking more into CoFee (and related datasets)
– CoFee Corpora Presentation
– Processing steps
– Datasets extraction

9:45-10:30 Multimodality from a practical perspective
– Theoretical issues (multimodal units,…)
– Methodological issues
– Tools (Annotation and Extraction)

10:30-10:45 CoFee Break

10:45-11:30 Theoretical and applied perspectives and implications for
the work on conversational feedback
– From Corpora to Experiments (and back) (0h45)
– Inter-individual and inter-situational variation
– Cross-linguistic studies
– Formal modelling

11:30-12:00 Wrap-up and conclusions

12:00-14:00 Closing Lunch

Laurent Prévot

Professor in Language Sciences


Kristiina Jokinen’s Talk @ LPL

Engagement and Affects in WikiTalk Interactions

September 30th, 14h

LPL Conference Room (5 avenue Pasteur)
Kristiina Jokinen
University of Helsinki (Finland)
University of Tartu (Estonia)

In this talk I will present my work on interaction modelling,
and discuss issues related to engagement in WikiTalk, a robot application
that enables the user to query Wikipedia via the Nao robot. The robot
supports open-domain conversations using Wikipedia as a knowledge source.
To manage smooth interaction, it is important to capture the user's
emotional and attentional state. I will focus on the challenges related
to the topic structure, new information, and tracking the user's interest
level, engagement, and emotional state in interaction.

Slides

 

Laurent Prévot

Professor in Language Sciences


CoFee presented @ LIF (Marseille)

Une approche quantitative de l’usage des items de feedback en français (A quantitative approach to the use of feedback items in French)

Laurent Prévot, Séminaire Talep

10/02/2015, LIF, Marseille

Feedback utterances are among the most frequent in interactive communication situations. They are crucial for theories that study communication as a joint activity of the participants.
In this work we try to determine the contribution of the different linguistic domains (lexicon, prosody, gesture) and contextual parameters (discourse, speech, conversation) to the signalling of the communicative functions traditionally associated with feedback (grounding, attitudes, information status, …). Our work draws on 4 transcribed corpora (2 of which were created entirely within the ANR CoFee project), for a total of about 16 hours of dialogue speech. We will present these corpora, our methodology (extraction of relevant features and annotation) and the first results obtained on these data.

Slides
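
To make the kind of analysis described in the abstract more concrete, here is a minimal, purely illustrative Python sketch. It assumes a hypothetical table feedback_items.csv with made-up feature names (lexical_form, duration, f0_slope, position_in_turn) and made-up function labels; it is not the project's actual pipeline. It uses a random forest's feature importances as one rough way of gauging how much each feature, and hence each domain, contributes to signalling feedback functions.

    # Illustrative sketch only: hypothetical data file and feature names.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # One row per feedback item: lexical, prosodic and contextual features,
    # plus a manually annotated communicative function label.
    items = pd.read_csv("feedback_items.csv")
    features = ["lexical_form", "duration", "f0_slope", "position_in_turn"]
    X = pd.get_dummies(items[features])   # one-hot encode categorical features
    y = items["function"]                 # e.g. acknowledgement, evaluation, ...

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    clf.fit(X, y)
    # Feature importances give a rough ranking of each feature's contribution.
    for name, importance in sorted(zip(X.columns, clf.feature_importances_),
                                   key=lambda p: -p[1]):
        print(f"{name}: {importance:.3f}")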

Laurent Prévot

Professor in Language Sciences


New MTX Corpus (Audio Visual Maptask)


A new corpus has been recorded. The transcription and alignment of this “MTX-Corpus” are complete. One aim of the project is to analyze conversational feedback in different interactional situations (spontaneous conversation, task-oriented dialogue and an intermediate condition). While working on the French Map Task we realised that it would be interesting to vary the recording condition (remote vs. face-to-face), as was done for the Edinburgh Map Task. These two conditions make it possible to compare feedback behaviours precisely across presence conditions while the task remains the same.

 

The resource is accessible on Ortolang at the following address: http://sldr.org/sldr000875
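
As a toy illustration of the comparison that the two recording conditions make possible, the following Python sketch computes the proportion of common French feedback tokens in each condition. The folder layout (mtx/remote, mtx/face-to-face), the one-transcript-per-file format and the token list are assumptions made for the example, not the structure of the released corpus.

    # Toy sketch: compare feedback-token rates across two recording conditions.
    from collections import Counter
    from pathlib import Path

    # A small, non-exhaustive list of frequent French feedback tokens.
    FEEDBACK_TOKENS = {"ouais", "oui", "mh", "mmh", "d'accord", "ok", "voilà"}

    def feedback_rate(transcript_dir):
        """Proportion of tokens that are feedback items, over all transcripts in a folder."""
        counts = Counter()
        for path in Path(transcript_dir).glob("*.txt"):
            tokens = path.read_text(encoding="utf-8").lower().split()
            counts["total"] += len(tokens)
            counts["feedback"] += sum(tok.strip(".,?!") in FEEDBACK_TOKENS for tok in tokens)
        return counts["feedback"] / counts["total"] if counts["total"] else 0.0

    for condition in ("remote", "face-to-face"):
        print(condition, round(feedback_rate(f"mtx/{condition}"), 3))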

Laurent Prévot

Professor in Language Sciences


New DVD-Corpus


A new corpus has been recorded. The transcription of this “DVD-Corpus” is nearly finished. One aim of the project is to analyse conversational feedback in different interactional situations (spontaneous conversation, task-oriented dialogue and an intermediate condition). A corpus of discussions and negotiations about movies has therefore been recorded, since the first two conditions are already covered by the CID corpus and the Maptask (remote and audio-visual) corpus.

jangorisch

Post-Doctoral Researcher at Aix-Marseille Université, Laboratoire Parole et Langage.


Annotation guidelines for the visual domain

Now that the visual maptask corpus has been recorded, annotation guidelines for the visual modality are being developed. In parallel to the verbal transcriptions, the first version of the visual annotation guidelines is being tested on the pilot recordings. The guidelines are largely informed by those suggested by David McNeill and by the MUMIN scheme. The aim was to take advantage of both schemes while simplifying the new guidelines without a major loss of information and detail. The guidelines are tailored to the needs of (i) annotating nonverbal feedback as well as the movements of the interlocutor and (ii) annotating vast quantities of data. If improvements to the guidelines are necessary, a second version will be created and then used to annotate the visual maptask corpus. Depending on time constraints, a strict limit to a certain amount of data, or a focus on the verbal feedback items, might be envisaged, but this has not been decided yet.

The annotations of the visual behaviour will be used in two ways. On the one hand, a qualitative analysis of interactional sequences will be performed using methods from Conversation Analysis (CA). On the other hand, a quantitative analysis will provide a general overview of the visual resources used by participants in task-oriented dialogues, and in particular statistical support.
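
For the quantitative side, a first overview can be obtained by simply tabulating the visual annotation labels per tier. The sketch below assumes the annotations have been exported to tab-separated files with one row per annotation (tier, start, end, label); this export format and the tier names are assumptions for illustration, not part of the guidelines themselves.

    # Illustrative sketch: count visual annotation labels per tier from TSV exports.
    import csv
    from collections import Counter
    from pathlib import Path

    def label_counts(export_dir):
        """Count annotation labels per tier (e.g. head, gaze, facial) over all files."""
        counts = {}
        for path in Path(export_dir).glob("*.tsv"):
            with path.open(encoding="utf-8", newline="") as f:
                for tier, _start, _end, label in csv.reader(f, delimiter="\t"):
                    counts.setdefault(tier, Counter())[label] += 1
        return counts

    for tier, labels in label_counts("visual_annotations").items():
        print(tier, labels.most_common(5))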

jangorisch

Post-Doctoral Researcher at Aix-Marseille Université, Laboratoire Parole et Langage.


Gesture days

The 3rd and 4th of July were used to look at initial data recordings and trial transcripts. The focus was on the visual modality, i.e. gestures, gaze, facial displays.

For this occasion we invited a specialist in gesture research, Gaëlle Ferré from Nantes (http://www.lling.univ-nantes.fr/index.php?option=com_profils&user=64&lang=en).

The material for a CA-style data session (CA = Conversation Analysis) was taken from an extract of the CID corpus and three extracts of the pilot recordings of the new Maptask corpus: two in which each participant in turn acted as the director or the recipient of the map, and one of free conversation. Together with other researchers from the project, the following points were discussed and decisions taken.

The data session was used to make initial observations on how the interacting participants made use of gestures to achieve what we call “feedback”. The focus was on regularities, sequential organisation, participants’ orientation, and feedback as social action. We also discussed the usefulness of CA procedures on the way to a description of feedback in terms of resources such as prosody, gesture and lexical content. This resembles an Interactional Phonetics approach (cf. research by John Local, Richard Ogden, Bill Wells and many others).

The annotation of gestures was discussed with respect to the minimum amount of detail needed in order to explain the interactional processes, and the maximum amount of detail in order to achieve statistical power.

One important aspect is also to avoid the risk of circularity in using the annotated gestures for the classification of the communicative functions of feedback: the information given in the classification guidelines must not be reused in the subsequent analysis of the annotation results in relation to the communicative functions.

Another important point is the annotation and analysis of gaze. What kinds of gaze should be distinguished? Are mid-distance gaze and gaze-to-participant sufficient? What counts as gaze and what as facial display? For example, are closed eyes part of the one or the other?

jangorisch

Post-Doctoral Researcher at Aix-Marseille Université, Laboratoire Parole et Langage.


CoFee Post-Doc Position open

Postdoc on “Conversational Feedback: Multidimensional analyses and
modeling” (CoFee), Laboratoire Parole et Langage (Aix-En-Provence)
(UMR 7309)

Applications are invited for an 18-month postdoctoral position on
linguistic feedback modeling.

The aim of the project is to study the interfaces between linguistic
domains (lexicon, prosody, kinesics) in the context of linguistic
feedback behaviors. Only candidates with experience in at least two
of the areas below will be considered:

– speech and prosody
– dialogue / conversation / interaction analysis and modeling
– kinesics (especially facial expression and head movements)
– natural language processing

Concerning the kinds of approaches considered, both corpus-based/quantitative
and formal backgrounds are welcome.

The position is planned to start around February 1st, but an earlier or
later starting date can be discussed. The salary will be determined
according to French university standards (about 1950 euros / month after tax and health insurance coverage for a young researcher). Funding for presenting
relevant research results at conferences comes with the position.

Speaking French will be a plus for everyday life in Aix-en-Provence,
but fluency in English is the minimum requirement.

The research lab is located in the center of Aix-en-Provence
(http://www.aixenprovencetourism.com/). Aix is a sunny mid-sized
city in the south-east of France, nestled in the Provence countryside, 30
minutes from the Mediterranean and 1h30 from the Alps.

The “Laboratoire Parole et Langage” is a very active lab currently
involved in several large-scale projects (Labex “Brain and Language
Research Institute”: www.blri.fr; EQUIPEX Open Resources and Tools
for LANGuage (ORTOLANG): www.ortolang.fr; and Erasmus Mundus
“Multilingualism and Multiculturalism”: www.em-multi.eu), offering a
stimulating research environment and diverse opportunities for
collaboration.

A curriculum vitae and a list of publications should be sent to
Laurent Prévot (laurent.prevot@lpl-aix.fr) by December 31st, but please contact us as soon as possible if you are interested in the position, so that we can get acquainted before the formal application.

For more information, please visit the following web pages:

CoFee: cofee.hypotheses.org
Laboratoire Parole et Langage: lpl-aix.fr

Séminaire CoFee Ellen Gurman Bard

We are pleased to announce a seminar by Ellen Gurman Bard, which will take place on Thursday, November 29th at 2:30 pm in the conference room.
Ellen Gurman Bard, Dept of Linguistics and English Language, University of Edinburgh.
Title: Maybe not the first time: Alignment at varying lags in dialogue
Location: Room B011 (Salle de Conférences), Laboratoire Parole et Langage.

Conversational Feedback

This site is about the ANR project “Conversational Feedback”. In a conversation, feedback is mostly performed through short utterances produced by a participant other than the main current speaker. These utterances are among the most frequent in conversational data. They are also considered crucial communicative tools for achieving coordination in dialogue. They have been the topic of various descriptive studies and are often given a central role in applications such as dialogue systems. The present project addresses this issue from a linguistic viewpoint and combines fine-grained corpus analyses of semi-controlled data with formal and statistical modeling.