Candidates are strongly encouraged to write a proposal that fits into one of the following research lines and to contact the relevant supervisors while developing their proposal. Other projects may be developed, but the potential supervisors at the partner universities MUST be contacted prior to application.
When writing your research proposal, please use the structure provided in this proposal template.
The list of projects may be updated throughout the application period.
Characterizing speech motor control across age
Supervisory team:
Assistant Prof. Defne Abur (University of Groningen, d.abur@rug.nl)
Prof. Martijn Wieling (University of Groningen, m.b.wieling@rug.nl)
Dr. Michael Proctor (Macquarie University, michael.proctor@mq.edu.au)
Speech changes due to aging are increasingly important to society: fluent communication is a core feature of quality of life, and the average lifespan is steadily rising. Characterizing speech changes with age is therefore critical for the structural inclusion of older adults in an aging society. Although acoustic impairments in speech in later stages of life are well documented, the way in which the neural control of speech changes with older age (i.e., feedforward control for motor function and feedback control for monitoring sensory feedback) has not been clearly examined. This project will comprehensively evaluate the components of speech motor control across a wide age range to provide insight into the specific sensory and motor mechanisms of speech that are impaired by aging. In addition, this project will include a modeling component: computational models of speech motor control will be leveraged to map behavioral data to neural regions and provide a mechanistic interpretation of the impact of aging on the neural control of speech.
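As a purely illustrative aside (not part of the project description), the feedforward/feedback distinction can be sketched as a toy simulation in Python, in which a stored feedforward command is gradually updated in response to perturbed auditory feedback; all parameter values below are invented for illustration and do not reflect the project's actual models:

    import numpy as np

    TARGET_F1 = 700.0      # intended first-formant frequency in Hz (hypothetical target)
    PERTURBATION = 100.0   # auditory feedback is shifted upward by 100 Hz
    FEEDBACK_GAIN = 0.3    # within-trial feedback correction gain (assumed)
    LEARN_RATE = 0.1       # trial-to-trial feedforward learning gain (assumed)
    N_TRIALS = 50

    feedforward = TARGET_F1
    produced = np.zeros(N_TRIALS)

    for t in range(N_TRIALS):
        # Feedback control: partially correct the perceived error within the trial.
        heard_error = (feedforward + PERTURBATION) - TARGET_F1
        produced[t] = feedforward - FEEDBACK_GAIN * heard_error
        # Feedforward control: update the stored command based on the remaining error.
        remaining_error = (produced[t] + PERTURBATION) - TARGET_F1
        feedforward -= LEARN_RATE * remaining_error

    # With these gains, production drifts downward to oppose the shift (the classic adaptation effect).
    print(f"Produced F1 on the final trial: {produced[-1]:.1f} Hz (auditory target: {TARGET_F1} Hz)")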
Multisensory Integration in Speech Perception: Priming with Smell
Supervisory team:
Dr. Anita Szakay (Macquarie University, anita.szakay@mq.edu.au)
Associate Prof. Anja Schüppert (University of Groningen, a.schueppert@rug.nl)
Speech perception is a highly dynamic, contextually sensitive, multisensory system that has been shown to integrate auditory information with visual information (Sumby & Pollack 1954; McGurk & MacDonald 1976) as well as aero-tactile information (Gick & Derrick 2009). However, little is known about the effects of olfactory information on cognition, and specifically on speech perception. Odours are particularly potent in eliciting rich memories (e.g. de Bruijn & Bender 2017), and the cortical areas where olfactory sensations are integrated have important interconnections with memory and language areas.
Recent research in experimental sociophonetics has shown that listeners store in memory, and are sensitive to, the phonetic consequences of a speaker’s social characteristics. The interpretation of linguistic forms depends on the perceived social characteristics of the speaker, which are often manipulated via visual cues in experimental settings (e.g. gender in Johnson, Strand & D’Imperio 1999; social class in Hay, Warren & Drager 2006; age in Drager 2011). Speech perception is also sensitive to implicit contextual cues priming a particular social category, likewise investigated mainly through visual cues (e.g. stuffed toys in Hay & Drager 2010; regional newspapers in Portes & German 2019).
The current project therefore aims to test whether listeners interpret linguistic information differently depending on which social category is evoked by olfactory cues, where smell is used either as an explicit characteristic of a speaker, or as an implicit contextual prime.
Motor learning in speech and other domains in patients with Parkinson’s disease
Supervisory team:
Prof. Martijn Wieling (University of Groningen, m.b.wieling@rug.nl)
Dr. Michael Proctor (Macquarie University, michael.proctor@mq.edu.au)
Dr. Roel Jonkers (University of Groningen, r.jonkers@rug.nl)
Parkinson’s disease (PD) is a progressive neurodegenerative disorder that predominantly affects the elderly. Symptoms include motor impairments, such as resting tremor, slowness of movement, postural instability and rigidity, as well as speech problems, such as imprecise articulation, slurring, reduced volume, and a monotonous tone of voice. It is unclear, however, whether these problems stem predominantly from problems with feedforward control (i.e., planning and learning movements) or with feedback control (i.e., monitoring movements and integrating sensory feedback). This project will therefore investigate PD patients’ ability for motor learning and feedback integration in speech and in other domains (e.g., vision). This will deepen our understanding of how PD affects the ability to adapt and learn, and advance our knowledge of how speech is connected to other motor domains. Students can choose to study speech using acoustic as well as several articulatory methods (electromagnetic articulography, ultrasound tongue imaging), and will have the new mobile laboratory of the Faculty of Arts at their disposal for data collection.
Speech-music therapy and multilingualism
Supervisory team:
Prof. Dr. Wander Lowie (University of Groningen, w.m.lowie@rug.nl)
Prof. Dr. Roel Jonkers (University of Groningen, r.jonkers@rug.nl)
Dr. Michael Proctor (Macquarie University, michael.proctor@mq.edu.au)
Dr. Joost Hurkmans (Revalidatie Friesland, j.hurkmans@revalidatie-friesland.nl)
There are several speech-language therapies that use musical parameters to treat neurological language and speech disorders. The speech-music therapy approach derives from the overlap between music and language in the brain. Speech-Music Therapy for Aphasia (SMTA) is a method developed and implemented in the Netherlands, and its treatment outcomes have been effective. However, results have been inconsistent, and a potential reason is the influence of cross-linguistic differences in prosody (rhythm and melody) across the native languages of the patients receiving SMTA.
This project aims to explore the differences in prosody between various languages on a large scale and to use the findings to develop an adapted version of SMTA that provides more individualized therapy to patients. The investigation of prosodic rhythm and melody can be extended to other languages, which broadens the scope of SMTA. Ultimately, the musical parameters of SMTA would be adjusted to better suit the prosodic characteristics of the patient’s native language. A clinical case series will follow the adaptation of the SMTA protocol, comparing treatment outcomes of the new versus the original protocol. The working mechanisms of speech-music therapy remain unclear, and this investigation could shed light on the way prosody and musical parameters influence each other. Clinically, the final goal is to provide patients with the therapy that is most effective for improving intelligibility and verbal communication in daily life.
Language processing: The effect of time/cognitive load in different linguistic domains using off-line and on-line measures
Supervisory team:
PD Dr. Frank Burchert (University of Potsdam, burchert@uni-potsdam.de)
Dr. Nicole Stadie (University of Potsdam, nstadie@uni-potsdam.de)
Prof. Dr. Roel Jonkers (University of Groningen, r.jonkers@rug.nl)
The objective of the current project is to examine a rather neglected effect: the effect of time/cognitive load during syntactic and phonological processing, both in individuals with aphasia (IWA) and in neurotypical adults. IWA often encounter difficulties in language processing, and previous studies have shown that deficits can occur in different language domains, e.g., syntax and phonology. In the domain of syntax, processing deficits have been described as impaired comprehension of sentences with a non-canonical word order. In the domain of phonology, impaired comprehension of sentences with a phonological load has been observed for individuals with phonological working memory limitations. Various effects on language processing have also been examined, such as the effect of morphological cues on the comprehension of non-canonical sentences, and the effect of sentence length on the comprehension of sentences with a phonological load. Similar limitations, though much less pronounced, have also been observed in elderly neurotypical adults.
The project will investigate the effect of time/cognitive load by conducting two experiments focusing on syntactic and phonological processing in IWA with syntactic deficits and/or limited phonological working memory, and in neurotypical adults. Different sentence types will be used in a sentence-picture matching task, and long vs. short sentences in a rhyme judgement task. In order to capture the effect of time/cognitive load, sentences will be presented in two conditions: (1) a self-paced listening (SPL) condition, in which the sentences are divided into constituents and the participant presses a button to hear the next constituent, and (2) a regular listening condition, in which sentences are presented at a normal speech rate. The effect of time/cognitive load will be operationalized in terms of accuracy, reaction times and listening times. In order to measure cognitive load, we will investigate to what extent neural resources are used to perform a language task, using methods such as pupillometry and possibly also the Brain Engagement Index (BEI).
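As a rough illustration of how per-constituent listening times in the SPL condition could be logged (a minimal sketch only; the playback and key-press functions are hypothetical placeholders, and an actual experiment would typically use presentation software such as PsychoPy or OpenSesame):

    import time

    def run_spl_trial(constituent_audio_files, play_audio, wait_for_keypress):
        """Present one sentence constituent-by-constituent and return listening times in seconds."""
        listening_times = []
        for audio_file in constituent_audio_files:
            t0 = time.perf_counter()
            play_audio(audio_file)       # blocking playback of one constituent (placeholder function)
            wait_for_keypress()          # participant presses a button to request the next constituent
            listening_times.append(time.perf_counter() - t0)
        return listening_times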
The impact of accent familiarity on film subtitle processing
Supervisory team:
Prof Jan-Louis Kruger (Macquarie University, janlouis.kruger@mq.edu.au)
A/Prof Hanneke Loerts (University of Groningen, h.loerts@rug.nl)
With the rise of video on demand, viewers have a multitude of films at their fingertips, leading to a rise in the popularity of translated film through subtitling. At the same time, with developments in AI and machine translation, large parts of the subtitling process can now be automated, resulting in an increase in subtitle speed (the rate at which subtitle text is presented, which determines how much time a viewer has to read each subtitle), as less human intervention means that subtitles tend to be closer to a direct transcript or translation of the dialogue. The assumption behind a higher tolerance for fast subtitles is that all viewers can process subtitles efficiently, regardless of how much they rely on the subtitles to understand foreign-language films, or even films in an unfamiliar accent.
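For concreteness, subtitle speed is commonly expressed in characters per second (cps); a minimal sketch of the calculation, using an invented subtitle and invented display times, might look like this (conventions differ on whether spaces and punctuation are counted):

    def subtitle_speed_cps(text, start_s, end_s):
        """Characters per second for one subtitle (here counting spaces and punctuation)."""
        return len(text) / (end_s - start_s)

    # Invented example: a 39-character subtitle shown for 2.5 seconds -> 15.6 cps
    example = "I never thought I'd see you here again."
    print(round(subtitle_speed_cps(example, start_s=12.0, end_s=14.5), 1))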
Research has also shown that accent can impede processing of information even for L1 speakers. For example, word processing has been shown to be impeded in the presence of regional or foreign accents, as evidenced by delayed word identification (cf. Floccia, Butler, Goslin & Ellis, 2009). This finding was robust for participants from the South West of England when listening to either a French or an Irish accent compared to a familiar Plymouth accent, and did not change with habituation. Similarly, L1 speakers of Canadian English were shown by Arnhold et al. (2020) to be unable to make effective use of prosodic cues to disambiguate between two possible referents in British English instructions. In other words, word identification was again shown to be impeded by a regional accent (British English for Canadian English participants in this case), similar to what is found for L2 speakers.
This project will investigate how accent familiarity impacts viewers’ reliance on subtitles, their eye movement control during the reading of subtitles, and their comprehension of film in an unfamiliar accent or language. The project will build on models of eye movement control during reading as well as during the processing of multimodal input such as subtitled film (cf. Liao, Yu, Kruger & Reichle, 2021).
Multidimensional word relationships: The interplay of meaning and form through network metrics
Supervisory team:
Dr Adrià Rofes (University of Groningen, a.rofes@rug.nl)
Dr Lisi Beyersmann (Macquarie University, lisi.beyersmann@mq.edu.au)
Prof Roel Jonkers (University of Groningen, r.jonkers@rug.nl)
Network metrics indicate how words are connected. They reveal how a word like “pear” is close to other fruit words but far from animal words like “dog” or “cat.” Recent studies use network metrics to detect differences between young and older people, and between healthy individuals and individuals with neurological disorders. It is unclear, however, whether network metrics only capture relationships between word meanings (like fruits and animals) or also relationships between word forms, i.e., how words look and sound. This matters because a word like “pear” might connect to animal words that sound alike, such as “bear” or “hare.” This project aims to explore this aspect of network metrics. To do so, new tasks will be designed in which people decide how words relate to each other (like “pear” and “apple” versus “pear” and “dog” versus “pear” and “bear”). The study will involve healthy people and, optionally, individuals with neurological disorders.
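To make the idea concrete, here is a minimal sketch of a word network with both meaning-based and form-based links, using the Python library networkx; the edges are invented for illustration and do not come from the project’s data or methods:

    import networkx as nx

    G = nx.Graph()
    # Meaning-based (semantic) links
    G.add_edges_from([("pear", "apple"), ("apple", "banana"), ("dog", "cat")])
    # Form-based (phonological) links: words that sound alike
    G.add_edges_from([("pear", "bear"), ("bear", "hare")])
    # A semantic link between the two clusters
    G.add_edge("bear", "dog")

    # Two simple network metrics
    print(nx.shortest_path_length(G, "pear", "dog"))              # 2, via the form-based neighbour "bear"
    print(max(dict(G.degree()).items(), key=lambda kv: kv[1]))    # ("bear", 3), the most connected word

Without the form-based edges, “pear” and “dog” would not be connected at all, which illustrates how including form-based relationships can change the metrics a network yields.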
Speech and language impairments in infant and toddler survivors of posterior fossa tumors
Supervisory team:
Dr Vânia de Aguiar (University of Groningen, vania.de.aguiar@rug.nl)
Prof Roel Jonkers (University of Groningen, r.jonkers@rug.nl)
Prof Dr. Natalie Boll-Avetisyan (University of Potsdam, nboll@uni-potsdam.de)
Dr Ditte Boeg Thomsen (University of Copenhagen, ditte.boeg@hum.ku.dk)
Dr Jonathan Kjær Grønbæk (Rigshospitalet Copenhagen, jonathan.kjaer.groenbaek@regionh.dk)
Mutism, reduced speech, and other speech and language symptoms occur frequently in children after the resection of a brain tumor in the posterior fossa. While approximately half of childhood brain tumors occur between the ages of 0 and 4 years, few studies have reported communication outcomes specifically for individuals within this age range. Among those, only three cases are reported to have experienced postoperative mutism or reduced speech (De Smet et al., 2009; Hudson et al., 1989; Murdoch et al., 1994). However, studies that included children aged 4 or younger generally evaluated language abilities at least 1 year after surgery (cf. Di Rocco et al., 2011). Given that impairments in communication years after surgery are reported for many of these individuals (e.g., Svaldi et al., 2024), there is a clear gap in reporting language impairments shortly after surgery. Such reports are critical for understanding the genesis of language impairment in this population and for identifying variables that predict such long-term outcomes.
In the current project, we aim to develop a short evaluation to characterize and follow up speech and language abilities in infant and toddler survivors of posterior fossa tumors. This evaluation may be based on video assessments of caregiver-child interactions, parental questionnaires, or direct assessments. Children will be examined before surgery, shortly after surgery, at 2 months after surgery, and up to 1 year after surgery. This project will be embedded within the European Study of Cerebellar Mutism Syndrome (Grønbæk et al., 2021, 2022; Persson et al., 2023). In addition to the language measures, MRI data and clinical data related to the tumor and treatment can be used to study the factors associated with speech and language disorders in this population.