Candidates are strongly recommended to write a proposal that fits into one of the following research lines and to contact the relevant supervisors when developing their proposals. Other projects may be developed, but the potential supervisors at the partner universities MUST be contacted prior to application.

The list of projects may be updated throughout the application period.

Characterizing speech motor control across age

Supervisory team:

Assistant Prof. Defne Abur (University of Groningen)

Prof. Martijn Wieling (University of Groningen)

Dr. Michael Proctor (Macquarie University)

Speech changes due to aging are increasingly important to society, since fluent communication is a core feature of quality of life and the average lifespan is steadily rising. Characterizing speech changes with age is therefore critical for the structural inclusion of older adults in an aging society. Although acoustic impairments in speech in later stages of life have been well documented, the way in which neural control of speech changes with older age (i.e., feedforward control for motor function and feedback control for monitoring sensory feedback) has not been clearly examined. This project will comprehensively evaluate the components of speech motor control across a wide age range to provide insight into the specific sensory and motor mechanisms of speech that are impaired by aging. In addition, this project will include a modeling component: computational models of speech motor control will be leveraged to map behavioral data to neural regions and provide a mechanistic interpretation of the impact of aging on neural control of speech.

Exploring language differences in how infants and adults use prosody in conversation

Supervisory team:

Prof. Natalie Boll-Avetisyan (University of Potsdam)

Dr. Laurence White (Newcastle University)

Prof. David Schlangen (University of Potsdam)

Patterns of variation in pitch, timing, rhythm and other prosodic features are exploited by listeners in multiple ways. In language development, prosody appears important for engaging and maintaining infants’ attention to infant-directed speech. For children and adults alike, prosody also provides cues to spoken word boundaries (“speech segmentation”) and to appropriate turn-taking points in conversational interaction. Importantly, infants’ sensitivity to prosody, including prosody-based speech segmentation and turn-taking performance, predicts later language outcomes. Thus, infants’ exploitation of prosody is crucial for spoken language development.

Languages differ systematically in prosody, however. For example, in some languages, but not all, syllables contrast in lexical stress and/or words contrast in phrasal prominence. Indeed, much work has been dedicated to quantifying cross-linguistic prosodic variation. Much further work is required, however, to better understand the effects of observed prosodic differences on language development and processing.

This project would employ convergent experimental methods to study infants’ and adults’ use of prosody for speech segmentation and turn-taking. Specifically, infants and adults would be exposed to speech in familiar and unfamiliar languages that differ in (a) the provision of cues to boundaries between spoken words and (b) the signalling of turn-transition points in dialogue. Some of these studies could employ robots (e.g., Furhat or Nao) as interlocutors in controlled settings.

Multisensory integration in speech perception: priming with smell

Supervisory team:

Dr. Anita Szakay (Macquarie University)

Associate Prof. Anja Schüppert (University of Groningen)

Speech perception is a highly dynamic, contextually sensitive, multisensory system that has been shown to integrate auditory information with visual information (Sumby & Pollack 1954; McGurk & MacDonald 1976) as well as aero-tactile information (Gick & Derrick 2009). However, little is known about the effects of olfactory information on cognition, and specifically on speech perception. Odours are particularly potent in eliciting rich memories (e.g. de Bruijn & Bender 2017), and the cortical areas of integration of the olfactory sensations have important interconnections with memory and language.

Recent research in experimental sociophonetics has shown that listeners store in memory, and are sensitive to, the phonetic consequences of a speaker’s social characteristics. The interpretation of linguistic forms depends on the perceived social characteristics of the speaker, which are often manipulated via visual cues in experimental settings (e.g. gender in Johnson, Strand & D’Imperio 1999; social class in Hay, Warren & Drager 2006; age in Drager 2011). Speech perception is also sensitive to implicit contextual cues priming a particular social category, likewise investigated mainly through visual cues (e.g. stuffed toys in Hay & Drager 2010; regional newspapers in Portes & German 2019).

The current project therefore aims to test whether listeners interpret linguistic information differently depending on which social category is evoked by olfactory cues, where smell is used either as an explicit characteristic of a speaker, or as an implicit contextual prime.

Motor learning in speech and other domains in patients with Parkinson’s disease

Supervisory team:

Prof. Martijn Wieling (University of Groningen)

Dr. Michael Proctor (Macquarie University)

Dr. Roel Jonkers (University of Groningen)

Parkinson’s disease (PD) is a progressive neurodegenerative disorder that predominantly affects older adults. It is characterized by motor symptoms such as resting tremor, slowness of movement, postural instability and rigidity, but also by speech problems such as imprecise articulation, slurring, reduced volume, and a monotonous tone of voice. It is unclear, however, whether these speech problems stem predominantly from problems with feedforward control (i.e., planning and learning movements) or with feedback control (i.e., monitoring movements and integrating sensory feedback). This project will therefore investigate PD patients’ capacity for motor learning and feedback integration in speech and other domains (e.g., vision). This will deepen our understanding of how PD affects the ability to adapt and learn, and advance our knowledge of how speech is connected to other motor domains. Students can choose to study speech using acoustic as well as several articulatory methods (electromagnetic articulography, ultrasound tongue imaging), and will have the new mobile laboratory of the Faculty of Arts at their disposal for data collection.

Language processing: The effect of time/cognitive load in different linguistic domains using off-line and on-line measures

Supervisory team:

PD Dr. Frank Burchert (University of Potsdam)

Dr. Nicole Stadie (University of Potsdam)

Dr. Christos Salis (Newcastle University)

The objective of this project is to examine a rather neglected effect: the effect of time/cognitive load during syntactic and phonological processing, in both individuals with aphasia (IWA) and neurotypical adults. IWA often encounter difficulties in language processing, and previous studies have shown that deficits can occur in different language domains, e.g., syntax and phonology. In the domain of syntax, processing deficits have been described as impaired comprehension of sentences with a non-canonical word order. In the domain of phonology, impaired comprehension of sentences with a high phonological load has been observed in individuals with phonological working-memory limitations. Various effects on language processing have also been examined, such as the effect of morphological cues on the comprehension of non-canonical sentences, and the effect of sentence length on the comprehension of sentences with phonological load. Similar limitations, though much less pronounced, have also been observed in elderly neurotypical adults.

The project will investigate the effect of time/cognitive load in two experiments focusing on syntactic and phonological processing in IWA with syntactic deficits and/or limited phonological working memory, and in neurotypical adults. Different sentence types will be used in a sentence-picture matching task, and long vs. short sentences in a rhyme-judgement task. To capture the effect of time/cognitive load, sentences will be presented in two conditions: (1) a self-paced listening (SPL) condition, in which sentences are divided into constituents and the participant presses a button to hear the next constituent, and (2) a regular listening condition, in which sentences are presented at a normal speech rate. The effect of time/cognitive load will be operationalized in terms of accuracy, reaction times and listening times. To measure cognitive load, we will investigate to what extent neural resources are used to perform a language task, using methods such as pupillometry and possibly also the Brain Engagement Index (BEI).

Building articulatory routines in more than one language

Supervisory team:

Dr. Ghada Khattab (Newcastle University)

Dr. Michael Proctor (Macquarie University)

Prof. Felicity Cox (Macquarie University)

This project examines the acquisition of laterals by bilingual children using instrumental phonetic data. Laterals involve complex lingual articulations, and are typically acquired late and sometimes imperfectly. Because they are produced in different ways in different languages and by different speakers, laterals are also strong markers of social identity. Of particular interest are the articulations of the tongue tip and tongue body, and the coordination of these two gestures in patterns of lateral allophony along the continuum from ‘clear’ (palatalised) to ‘dark’ (velarised/pharyngealised) /l/. Lateral realisation is influenced by many factors, including prosodic and morphological conditioning, as well as dialectal and cross-linguistic systematic variation.

Syllables containing laterals require complex gestural coordination, and children must learn these coarticulatory patterns from an early age, along with their linguistic conditioning. Little is known about how these coarticulatory routines develop as a child acquires two languages with different types of laterals and different morpho-phonologies. Laterals differ across Arabic, German, Hindi, Mandarin, and varieties of English in ways that offer important insights into bilingual phonological acquisition. This project will investigate how bilingual or multilingual children acquire language- and dialect-specific gestural timing and articulatory routines for laterals in each of their languages, and how their acquisition patterns compare with those of monolingual children acquiring these languages. The findings will advance our understanding of typical and atypical acquisition of gestural coordination across languages and multilingual contexts.

Prosodic investigations in aphasia

Supervisory team:

Dr. Christos Salis (Newcastle University)

Dr. Cong Zhang (Newcastle University)

Dr. Michael Proctor (Macquarie University)

Aphasia is a persistent spoken language disorder that results from neurological conditions affecting the brain. Consequently, aphasia is a communication impairment that affects individuals’ ability to participate effectively and meaningfully in domestic, occupational, and recreational settings.

In some conditions, such as stroke, aphasia is non-progressive, but it can also be progressive when the aetiology stems from progressive conditions such as Alzheimer’s or Parkinson’s disease. Regardless of aetiology, it compromises the formulation and conversion of hierarchically organised abstract linguistic representations at word, phrase and discourse levels into speech. It also varies in severity across affected individuals.

As well as affecting spoken word production, aphasia also affects prosody and speech fluency. Studies of prosody have been key in advancing our understanding of the nature of aphasia. For example, Broca’s aphasia, a fairly severe non-fluent type of aphasia, is characterised by striking prosodic difficulties that manifest themselves as long pauses, limited pitch range, and word lengthening conditioned by the linguistic environment, e.g., phrase- or sentence-finally. Moreover, recent research has shown that even in people who have recovered from aphasia after a stroke, very subtle or latent aphasic symptoms, such as prolonged pauses when producing multi-word utterances in connected speech, remain evident; these pauses are much longer than those of neurotypical controls.

The purpose of this project is to capitalise on the sensitivity of prosodic measures to answer novel research questions that would further our understanding of the underlying neuro-phonetic and neuro-linguistic mechanisms affecting people with aphasia when they produce speech in discourse contexts. The project could utilise data (secondary and/or primary) from non-progressive and/or progressive aphasia.

Language decline in subjective cognitive impairment: Early detection of dementia

Supervisory team:

Dr. Branislava Ćurčić-Blake (University of Groningen)

Prof. Roel Jonkers (University of Groningen)

Assistant Prof. Srđan Popov (University of Groningen)

Prof. Greg Savage (Macquarie University)

Subjective cognitive decline (SCD) is a self-reported decline in cognition and memory without measurable objective cognitive impairment, and one of the early predictors of Alzheimer’s disease (AD), preceding objective mild cognitive impairment (MCI). In recent years SCD has come into focus because of its importance for the early detection of AD. While it is often possible to detect early AD using biomarkers, it is important to find less invasive methods for early detection in order to avoid invasive and cumbersome tests (lumbar puncture and PET).

Language deficits have been reported for AD and amnestic MCI (aMCI). They include impaired lexical access, a restricted vocabulary range, reduced idea density, impaired semantic fluency, reduced sentence-level complexity, reduced discourse cohesion, “empty speech,” and paraphasias. Our project aims to investigate language deficits in people with SCD, with a focus on deficits at the sentence level. We will integrate clinical linguistic findings with brain activation measured by fNIRS and eye-movement patterns measured by eye-tracking. This is a longitudinal study with a follow-up of one or two years. In this way, we plan to create a neurolinguistic profile of SCD using neuropsychological and neuroimaging data.

Digital assessment of dyslexia

Supervisory team:

Dr. Dörte de Kok (University of Groningen)

Dr. Barry de Groot (University of Groningen)

tbd (Macquarie University)

In order to come to a proper diagnosis of dyslexia, clinicians depend on a purposeful selection of high-quality assessment materials. The resulting assessment procedures can be quite lengthy, and most available instruments include only limited online, i.e., “live”, information about the reading and writing processes themselves. For example, many standardized tests take into account only the total response times and/or the number of correct items, possibly discarding further qualitative information about specific pitfalls or relative strengths that could be highly relevant for individualized instruction and remediation strategies.

By developing new dynamic digital assessment materials, we strive to make the diagnostic process not only more enjoyable for clients, but also more comprehensive and accurate, as well as more time-efficient. Using a range of modern techniques, many tasks can be scored automatically, yielding instant test results that can be presented and used dynamically. Going beyond aggregate test scores, we can easily record and utilize the response times and accuracy of individual test items, but we could also monitor self-corrections and many other aspects of the reading/writing process, e.g., phonological-articulatory (automatic speech recognition), visual-attentional (eye tracking), and emotive aspects (facial expressions). Based on such extended item information, we can, for example, identify which words were especially difficult for a participant and determine which features these words share, making the identification of the underlying problem areas more secure. Most importantly, we can strive for a better fit of the assessment to the abilities of the client by using adaptive materials that adjust the level of difficulty depending on actual performance during the task, rendering the test procedure more efficient.

We welcome proposals focusing on the development, implementation, and evaluation of such digital assessment tools for dyslexic populations.
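The adaptive principle described above, adjusting item difficulty to a participant’s ongoing performance, can be sketched in a few lines. The sketch below is a purely hypothetical illustration (a simple one-up/one-down staircase with made-up function names), not a description of any existing instrument:

```python
# Hypothetical sketch: a one-up/one-down staircase that adapts item
# difficulty to ongoing performance. All names are illustrative; a real
# assessment would use a validated adaptive procedure (e.g., item
# selection based on a psychometric model).

def run_staircase(respond, start=5, lowest=1, highest=10, n_items=20):
    """Present n_items, raising the difficulty after a correct response
    and lowering it after an error. `respond(difficulty)` returns True
    if the (simulated) participant answers the item correctly."""
    difficulty = start
    history = []
    for _ in range(n_items):
        correct = respond(difficulty)
        history.append((difficulty, correct))
        if correct:
            difficulty = min(highest, difficulty + 1)
        else:
            difficulty = max(lowest, difficulty - 1)
    return history

# Simulated participant who succeeds on items up to difficulty 6:
# after a short run-in, the staircase oscillates around that level.
demo = run_staircase(lambda d: d <= 6)
```

A staircase of this kind converges on the difficulty level where the participant succeeds about half the time, which is why adaptive procedures need far fewer items than fixed-order tests to locate a client’s ability level.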

Speech-music therapy and multilingualism

Supervisory team:

Prof. Dr. Wander Lowie (University of Groningen)

Prof. Dr. Roel Jonkers (University of Groningen)

Dr. Michael Proctor (Macquarie University)

Dr. Joost Hurkmans (Revalidatie Friesland)

Several speech-language therapies use musical parameters to treat neurological language and speech disorders. The speech-music therapy approach derives from the overlap between music and language in the brain. Speech-Music Therapy for Aphasia (SMTA) is a method developed and implemented in the Netherlands, and its treatment outcomes have been positive. However, results have been inconsistent, and a potential reason is the influence of cross-linguistic differences in prosody (rhythm and melody) associated with the native languages of the patients receiving SMTA.

This project aims to explore differences in prosody between various languages on a large scale and to use the findings to develop an adapted version of SMTA that provides more individualized therapy to patients. The investigation of prosodic rhythm and melody can extend to other languages, which would broaden the scope of SMTA. Ultimately, the musical parameters of SMTA would be adjusted to better suit the prosodic characteristics of the patient’s native language. A clinical case series will follow the adaptation of the SMTA protocol, comparing treatment outcomes of the new versus the old protocol. The working mechanisms of speech-music therapy remain unclear, and this investigation could shed light on the way prosody and musical parameters influence each other. Clinically, the final goal is to provide patients with the therapy that is most effective for improving intelligibility and verbal communication in daily life.