Projects

Candidates are strongly encouraged to write a proposal that fits into one of the following research lines, and to contact the relevant supervisors when developing their proposals. Other projects may be proposed, but the potential supervisors at the partner universities MUST be contacted prior to application.

When writing your research proposal, please use the structure provided in this proposal template.

The list of projects may be updated throughout the application period.

Neurocognitive investigation of reading aloud vs silently and their effects on memory retention

Supervisory team:

Dr Frank Tsiwah (University of Groningen, f.tsiwah@rug.nl)

Dr Lili Yu (Macquarie University, lili.yu@mq.edu.au)

Reading text aloud has been shown to result in better retention of textual information in memory than reading silently. This phenomenon has been termed the “production effect” (PE; Ozubko & MacLeod, 2010). However, the neurophysiological mechanisms underlying this effect remain poorly understood. This project aims to investigate the PE’s impact on memory recognition using both behavioural and brain-imaging approaches. We will examine its long-term sustainability in native and non-native speakers of English and Dutch. By exploring the PE in non-native speakers, we seek to determine whether reading aloud or silently differentially influences second language learning. There is also an option to examine the PE across various age groups to measure the effects of ageing. This research will provide insights into the cognitive processes underlying reading and memory consolidation in both the short and long term. There will be an opportunity to use both behavioural and neuroimaging techniques (such as EEG, MEG and/or eye-tracking) in this project.

Characterizing speech motor control across age

Supervisory team:

Assistant Prof. Defne Abur (University of Groningen, d.abur@rug.nl)

Prof. Martijn Wieling (University of Groningen, m.b.wieling@rug.nl)

Dr. Michael Proctor (Macquarie University, michael.proctor@mq.edu.au)

Speech changes due to aging are increasingly important to society: fluent communication is a core feature of quality of life, and the average lifespan is steadily rising. Characterizing speech changes with age is therefore critical for the structural inclusion of older adults in an aging society. Although acoustic impairments in speech in later stages of life have been well documented, the way in which neural control of speech changes with older age (i.e., feedforward control for motor function and feedback control for monitoring sensory feedback) has not been clearly examined. This project will comprehensively evaluate the components of speech motor control across a wide age range to provide insight into the specific sensory and motor mechanisms of speech that are impaired by aging. In addition, the project will include a modeling component: computational models of speech motor control will be leveraged to map behavioral data to neural regions and provide a mechanistic interpretation of the impact of aging on neural control of speech.

Motor learning in speech and other domains in patients with Parkinson’s disease

Supervisory team:

Prof. Martijn Wieling (University of Groningen, m.b.wieling@rug.nl)

Dr. Michael Proctor (Macquarie University, michael.proctor@mq.edu.au)

Dr. Roel Jonkers (University of Groningen, r.jonkers@rug.nl)

Parkinson’s disease (PD) is a progressive neurodegenerative disorder that predominantly affects the elderly. It involves several motor symptoms, such as resting tremor, slowness of movement, postural instability and rigidity, but also speech problems, such as imprecise articulation, slurring, reduced volume, and a monotonous tone of voice. It is unclear, however, whether these problems stem predominantly from problems with feedforward control (i.e., planning and learning movements) or with feedback control (i.e., monitoring movements and integrating sensory feedback). This project will therefore investigate PD patients’ ability for motor learning and feedback integration in speech and other domains (e.g., vision). This will deepen our understanding of how PD affects the ability to adapt and learn, and advance our knowledge of how speech is connected to other motor domains. Students can choose to study speech using acoustic as well as several articulatory methods (electromagnetic articulography, ultrasound tongue imaging), and will have the new mobile laboratory of the Faculty of Arts at their disposal for data collection.

Speech-music therapy and multilingualism

Supervisory team:

Prof. Dr. Wander Lowie (University of Groningen, w.m.lowie@rug.nl)

Prof. Dr. Roel Jonkers (University of Groningen, r.jonkers@rug.nl)

Dr. Michael Proctor (Macquarie University, michael.proctor@mq.edu.au)

Dr. Joost Hurkmans (Revalidatie Friesland, j.hurkmans@revalidatie-friesland.nl)

There are several speech-language therapies that use musical parameters to treat neurological language and speech disorders. The speech-music therapy approach derives from the overlap between music and language in the brain. Speech-Music Therapy for Aphasia (SMTA) is a method developed and implemented in the Netherlands, and its treatment outcomes have been effective. However, results have been inconsistent, and a potential reason is the influence of cross-linguistic differences in prosody (rhythm and melody) across the native languages of the patients receiving SMTA.

This project aims to explore the differences in prosody between various languages on a large scale and use the findings to develop an adapted version of SMTA that provides more individualized therapy to patients. The investigation of prosodic rhythm and melody can be extended to other languages, which broadens the scope of SMTA. Ultimately, the musical parameters of SMTA would be adjusted to better suit the prosodic characteristics of the patient’s native language. A clinical case series will follow the adaptation of the SMTA protocol, comparing treatment outcomes of the new versus the old protocol. The working mechanisms of speech-music therapy remain unclear, and this investigation could shed light on the way prosody and musical parameters influence each other. Clinically, the final goal is to provide patients with the therapy that is most effective for improving intelligibility and verbal communication in daily life.

Language processing: The effect of time/cognitive load in different linguistic domains using off-line and on-line measures

Supervisory team:

Dr. Nicole Stadie (University of Potsdam, nstadie@uni-potsdam.de)

Dr. Sandra Hanne (University of Potsdam, hanne@uni-potsdam.de)

Prof. Dr. Roel Jonkers (University of Groningen, r.jonkers@rug.nl)

The objective of this project is to examine a rather neglected effect: the effect of time/cognitive load during syntactic and phonological processing, both in individuals with aphasia (IWA) and in neurotypical adults. IWA often encounter difficulties in language processing, and previous studies have shown that deficits can occur in different language domains, e.g., syntax and phonology. In the domain of syntax, processing deficits have been described as impaired comprehension of sentences with a non-canonical word order. In the domain of phonology, impaired comprehension of sentences with phonological load has been observed in individuals with phonological working memory limitations. Various effects on language processing have also been examined, such as the effect of morphological cues on the comprehension of non-canonical sentences, and the effect of sentence length on the comprehension of sentences with phonological load. Similar limitations, though much less pronounced, have also been observed in elderly neurotypical adults.

The project will investigate the effect of time/cognitive load in two experiments focusing on syntactic and phonological processing in IWA with syntactic deficits and/or limited phonological working memory, and in neurotypical adults. Different sentence types will be used in a sentence-picture matching task, and long vs. short sentences in a rhyme judgement task. To capture the effect of time/cognitive load, sentences will be presented in two conditions: (1) a self-paced listening (SPL) condition, in which sentences are divided into constituents and the participant presses a button to hear the next constituent, and (2) a regular listening condition, in which sentences are presented at a normal speech rate. The effect of time/cognitive load will be operationalized in terms of accuracy, reaction times and listening times. To measure cognitive load, we will investigate to what extent neural resources are used to perform a language task, using methods such as pupillometry and possibly also the Brain Engagement Index (BEI).

The impact of accent familiarity on film subtitle processing

Supervisory team:

Prof Jan-Louis Kruger (Macquarie University, janlouis.kruger@mq.edu.au)

A/Prof Hanneke Loerts (University of Groningen, h.loerts@rug.nl)

With the rise of video on demand, viewers have a multitude of films at their fingertips, leading to a rise in the popularity of film translated through subtitling. At the same time, with developments in AI and machine translation, large parts of the subtitling process can now be automated, resulting in an increase in subtitle speed (reducing the time a viewer has to read a subtitle), as less human intervention means that subtitles tend to be closer to a direct transcript or translation of the dialogue. The assumption behind a higher tolerance for fast subtitles is that all viewers can process subtitles efficiently, regardless of their reliance on the subtitles to understand foreign-language films, or even film in an unfamiliar accent.

Research has also shown that accent can impede the processing of information, even for L1 speakers. For example, word processing has been shown to be impeded in the presence of regional or foreign accents, as evidenced by delayed word identification (cf. Floccia, Butler, Goslin & Ellis, 2009). This finding was robust for participants from the South West of England listening to either a French or an Irish accent compared to a familiar Plymouth accent, and did not change with habituation. Similarly, L1 speakers of Canadian English were shown by Arnhold et al. (2020) to be unable to make effective use of prosodic cues to disambiguate between two possible referents in British English instructions. In other words, word identification was again shown to be impeded by a regional accent (British English for Canadian English participants in this case), similar to L2 speakers.

This project will investigate how accent familiarity impacts reliance on subtitles, as well as eye movement control during the reading of subtitles and comprehension of film in an unfamiliar accent or language. The project will build on models of eye movement control during reading as well as during the processing of multimodal input such as subtitled film (cf. Liao, Yu, Kruger & Reichle, 2021).

Leveraging simultaneous interpreters’ neural networks for hearing difficulties

Supervisory team:

Dr Sriram Boothalingam (Macquarie University, sriram.boothalingam@mq.edu.au)

Prof Outi Tuomainen (University of Potsdam, tuomainen@uni-potsdam.de)

Prof Anina Rich (Macquarie University)

Prof Marc Orlando (Macquarie University)

“I can hear but not understand” is the most common complaint of individuals with hearing impairment (HI). HI is a global epidemic affecting over 1.5 billion individuals. While amplification through hearing aids remains the most suitable solution, its adoption and usage are curtailed by persistent difficulties in environments with competing speech and noise (e.g., restaurants). This is because holding sustained conversations in such suboptimal acoustic environments requires engaging multiple cognitive processes, including working memory, selective attention and inhibition. In stark contrast to HI individuals, simultaneous interpreters (SIs) navigate analogous complex auditory scenarios with remarkable proficiency: they seamlessly decode speech in one language, translate it into another, and concurrently monitor and rectify their own output in the target language. This fluency is a testament to the intricate neural networks and cognitive processes that are strengthened through professional training. We thus posit that the expertise gleaned from SI training holds promise for aiding HI individuals in everyday noisy settings.

The objective of this project is to bridge hearing and simultaneous interpreting through cognitive neuroscience with the ultimate goal of developing auditory training paradigms that complement hearing aid use for improved outcomes in the hearing impaired. The project is a collaboration between the Departments of Linguistics and Psychology at Macquarie University (Australian Hearing Hub; primary location) and Department of Linguistics at the University of Potsdam. The candidate will also have access to experienced and trainee SIs through the world-class SI program at Macquarie University.

This PhD work will lay the foundation for this program of research by identifying neural markers of SI training. Both neuroimaging (e.g., M/EEG, fMRI) and behavioural (e.g., attention, inhibition, working memory) methods will be used.

Speech: a dynamical perspective

Supervisory team:

Prof Adamantios Gafos (University of Potsdam, gafos@uni-potsdam.de)

Dr Michael Proctor (Macquarie University, michael.proctor@mq.edu.au)

Speech has been argued to be the most highly developed motor skill we all possess. Speech production involves precise control of the articulatory organs as they form and release constrictions in a limited space inside the body, and speech has evolved to harness this complex activity for the purposes of communication. Using instrumental laboratory methods to study this activity, this project aims to sharpen and broaden a research program in which tools and concepts from dynamical systems theory are used to understand speech production, perception and language-particular phonological organization. This research program has inspired work, by the PIs and others, that takes a deeper look at the relation between phonology and phonetics, exploring the idea that the discrete versus continuous character that distinguishes them may be formally parallel to the qualitative versus quantitative aspects of non-linear dynamical systems in the biological and physical sciences. Topics of interest include the dynamical modeling of speech gestures, the expression of phonological structure via different modes of cohesion or coordination among constellations of primitives, and the role of dynamics in phonological computation expressed via constraint interaction.

The impact of textual enhancement in subtitled videos on reading and language development

Supervisory team:

Prof Jan-Louis Kruger (Macquarie University, janlouis.kruger@mq.edu.au)

A/Prof Hanneke Loerts (University of Groningen, h.loerts@rug.nl)

Dr Anastasia Pattemore (University of Groningen, a.pattemore@rug.nl)

Reading is a fundamental skill for human development, significantly influencing educational achievement, employability, and overall well-being. It is therefore a major concern that a substantial portion of the population, even in resource-rich nations, faces challenges in acquiring proficient reading skills. In this context, there is a significant need for literacy development tools. Subtitled video has been gaining visibility as a potential form of reading practice: with the rise of video on demand, viewers have a multitude of films at their fingertips, making subtitled video an accessible and scalable tool for reading development if used in a structured manner. To support language learners in noticing and acquiring the target language, textually enhanced subtitles have been proposed as an effective tool, as they increase reading time and promote longer fixations.

However, there is limited empirical evidence on whether subtitles can help low-literacy individuals improve their reading skills in their first language, and exactly how this might be optimised. Furthermore, most existing approaches look at the role of subtitles in incidental vocabulary learning. This project will investigate the use of textual enhancement in subtitled video as a targeted intervention to address specific aspects of reading development in beginner readers or readers with low literacy. It will also look at the classification of reading skills and the assessment of the development of such skills over time using eye tracking and other assessment formats.

Infant precursors of reading skills: visual-phonetic mappings and neural phase tracking

Supervisory team:

Dr Lisi Beyersmann (Macquarie University, lisi.beyersmann@mq.edu.au)

Prof Natalie Boll-Avetisyan (University of Potsdam, nboll@uni-potsdam.de)

Prof Stefanie Höhl (University of Vienna, stefanie.hoehl@univie.ac.at)

Longitudinal research has found that children who later struggle with reading exhibit atypical speech sound processing from infancy. A theoretical account of this link is that phonetic awareness is an essential precursor skill for reading development, particularly in the acquisition of alphabetic systems requiring phoneme-to-grapheme mapping. Notably, a recent study attested signs of these precursor skills in early infancy: in an EEG experiment, 3-month-olds were able to associate distinct phonetic representations with visual labels upon brief exposure (Mersad, Kabdebon & Dehaene-Lambertz, 2021, Cognition). Modern accounts of dyslexia suggest that the difficulties in speech sound processing may relate to difficulties in the neural phase tracking of speech sounds. While young infants show neural phase tracking of speech sounds (Di Liberto et al., 2023, Nat. Comm.; Attaheri et al., 2024, bioRxiv), direct evidence for a link with reading skills (e.g., from longitudinal studies of infants at risk for dyslexia) is still lacking. However, initial studies of older children with diagnosed dyslexia (e.g., Araújo et al., 2024, Frontiers) provide some evidence for such claims.

This project will shed more light on the relationship between these potential precursor skills of reading in infancy by investigating whether individual differences in infants’ ability to map phonetic representations onto visual labels are associated with their neural and pupillary phase tracking of speech sounds. Both EEG and eye-tracking methods will be employed. The project includes a prolonged stay at the University of Vienna, where the experiments will be conducted.

Effects of gesture production and auditory conditions on communicative success

Supervisory team:

Prof Naomi Sweller (Macquarie University, naomi.sweller@mq.edu.au)

Prof Outi Tuomainen (University of Potsdam, tuomainen@uni-potsdam.de)

When communicating with others, we adjust our speech to match the surrounding auditory conditions. For example, in challenging auditory conditions, such as those with loud background noise, we may increase vocal effort to be heard and understood (Hazan et al., 2018). However, communication goes beyond verbal speech alone, with important contributions from non-verbal modalities such as gesture. Under conditions of auditory interference, non-verbal communication becomes increasingly important: gesture production has increasingly beneficial effects on a listener’s comprehension when tasks are made more difficult by background noise (McKern et al., 2021). Further, individual cognitive and personality characteristics, in addition to the rapport between communicative partners, may affect the extent to which individuals use gestures, thereby further influencing communication. This project will examine the effects of individuals’ pre-existing naturalistic propensity to produce gestures, as well as individual differences, on communicative success under conditions of varying auditory difficulty. The primary location of the project will be the School of Psychological Sciences at Macquarie University, with the secondary location being the Department of Linguistics at the University of Potsdam.