Projects

Candidates are strongly encouraged to write a proposal that fits into one of the following research lines and to contact the relevant supervisors when developing their proposal. Other projects may be developed, but the potential supervisors at the partner universities MUST be contacted prior to application.

When writing your research proposal, please use the structure provided in this proposal template.

The list of projects may be updated throughout the application period.

App-based treatment of verb and sentence production deficits in aphasia

Supervisory team:

Dr Dörte de Kok (University of Groningen, d.a.de.kok@rug.nl)

Dr Vânia de Aguiar (University of Groningen, vania.de.aguiar@rug.nl)

Dr Lisi Beyersmann (Macquarie University, lisi.beyersmann@mq.edu.au)

In a previous project, Pauline Cuperus developed an app to treat verb and sentence production deficits in aphasia (Cuperus, 2023; https://doi.org/10.33612/diss.769923247). We invite a project that builds on this tool by studying the efficacy of the treatment. Project proposals should therefore include a (case-series) efficacy study investigating the treatment effects by comparing various pre- and post-treatment measures. A second research chapter should focus on the in-app data (e.g., cue usage, time spent per item, and accuracy development across sessions) in relation to the outcomes of the treatment.

The project should be completed with a third research study that could take several directions. We welcome proposals using online tasks that take a closer look at language processing in order to understand what drives potential improvement during treatment; the focus could be on morphological or sentence-level processing, lexical retrieval, or argument structure. Another option would be to study lesion-symptom mapping by extending the baseline tasks and analyzing structural MRI data in relation to those tasks. A third option would be to focus on the narrative speech that is collected before and after treatment and to investigate it not only with traditional measures but also with computational (i.e., NLP) measures, analyzing how these measures change as a result of treatment. While the two chapters described above are essential to the project, there is room to define this third research study more freely. We are open to suggestions other than those stated here, as long as they fit into the overall context of the project.

Audio-visual integration in Childhood Apraxia of Speech

Supervisory team:

Prof. Dr. Roel Jonkers (University of Groningen, r.jonkers@rug.nl)

Dr. Michael Proctor (Macquarie University, michael.proctor@mq.edu.au)

Childhood Apraxia of Speech (CAS) is a severely under-researched speech-sound disorder affecting up to 1% of the world population. CAS is classified as a speech-motor preparation dysfunction, in which speech articulation is impaired in the absence of muscular degeneration. The nature of CAS is still poorly understood, which has hindered the development of optimally effective interventions.
Several common CAS interventions rely on audio-visual integration, yet not all children with CAS benefit to the same degree from these interventions. This variation in response to treatment is poorly understood and may relate to children’s audio-visual integration skills. The present project aims to understand the underlying nature of CAS by addressing audio-visual integration, a key phenomenon that has not yet been tested in this population. In doing so, we aim to refine current treatments, enhance prediction of treatment candidacy, and establish a foundation for developing new, targeted interventions for children with CAS.

Neurocognitive investigation of reading aloud vs silently and their effects on memory retention

Supervisory team:

Dr Frank Tsiwah (University of Groningen, f.tsiwah@rug.nl)

Dr Lili Yu (Macquarie University, lili.yu@mq.edu.au)

Reading text aloud has been shown to result in better retention of textual information in memory than reading silently. This phenomenon has been termed the “production effect” (PE; Ozubko & MacLeod, 2010). However, the neurophysiological mechanisms underlying this effect remain poorly understood. This project aims to investigate the PE’s impact on memory recognition using both behavioural and brain-imaging approaches. We will examine its long-term sustainability in native and non-native speakers of English and Dutch. By exploring the PE in non-native speakers, we seek to determine whether reading aloud or silently differentially influences second-language learning. There is also an option to examine the PE across various age groups to measure the effects of ageing. This research will provide insights into the cognitive processes underlying reading and memory consolidation in both the short and the long term. There will be an opportunity to use both behavioural and neuroimaging techniques (such as EEG, MEG, and/or eye-tracking) in this project.

Speech-music therapy and multilingualism

Supervisory team:

Prof. Dr. Wander Lowie (University of Groningen, w.m.lowie@rug.nl)

Prof. Dr. Roel Jonkers (University of Groningen, r.jonkers@rug.nl)

Dr. Michael Proctor (Macquarie University, michael.proctor@mq.edu.au)

Dr. Joost Hurkmans (Revalidatie Friesland, j.hurkmans@revalidatie-friesland.nl)

There are several speech-language therapies that use musical parameters to treat neurological language and speech disorders. The speech-music therapy approach derives from the overlap between music and language in the brain. Speech-Music Therapy for Aphasia (SMTA) is a method developed and implemented in the Netherlands, and treatment outcomes have been positive. However, results have been inconsistent, and a potential reason is the influence of cross-linguistic differences in prosody (rhythm and melody) across the native languages of the patients receiving SMTA.

This project aims to explore the differences in prosody between various languages on a large scale and to use the findings to develop an adapted version of SMTA that provides more individualized therapy to patients. The investigation of prosodic rhythm and melody can be extended to other languages, which broadens the scope of SMTA. Ultimately, the musical parameters of SMTA would be adjusted to better suit the prosodic characteristics of the patient’s native language. A clinical case series will follow the adaptation of the SMTA protocol, comparing treatment outcomes of the new versus the old protocol. The working mechanisms of speech-music therapy remain unclear, and this investigation could shed light on how prosody and musical parameters influence each other. Clinically, the final goal is to provide patients with the therapy that is most effective for improving intelligibility and verbal communication in daily life.

The impact of accent familiarity on film subtitle processing

Supervisory team:

Prof Jan-Louis Kruger (Macquarie University, janlouis.kruger@mq.edu.au)

A/Prof Hanneke Loerts (University of Groningen, h.loerts@rug.nl)

With the rise of video on demand, viewers have a multitude of films at their fingertips, leading to a rise in the popularity of translated film through subtitling. At the same time, with developments in AI and machine translation, large parts of the subtitling process can now be automated, resulting in an increase in subtitle speed (and thus a reduction in the time a viewer has to read each subtitle), as less human intervention means that subtitles tend to be closer to a direct transcript or translation of the dialogue. The assumption behind a higher tolerance for fast subtitles is that all viewers can process subtitles efficiently, regardless of their reliance on the subtitles to understand foreign-language films, or even films in an unfamiliar accent.

Research has also shown that accent can impede processing of information even for L1 speakers. For example, word processing has been shown to be impeded in the presence of regional or foreign accents, as evidenced by delayed word identification (cf. Floccia, Butler, Goslin & Ellis, 2009). This finding was robust for participants from the South West of England listening to either a French or an Irish accent compared to a familiar Plymouth accent, and did not change with habituation. Similarly, Arnhold et al. (2020) showed that L1 speakers of Canadian English were unable to make effective use of prosodic cues to disambiguate between two possible referents in British English instructions. In other words, word identification was again shown to be impeded by a regional accent (British English for Canadian English participants in this case), similar to what is observed for L2 speakers.

This project will investigate how accent familiarity impacts reliance on subtitles, eye movement control during the reading of subtitles, and comprehension of film in an unfamiliar accent or language. The project will build on models of eye movement control during reading as well as during the processing of multimodal input such as subtitled film (cf. Liao, Yu, Kruger & Reichle, 2021).

Speech: a dynamical perspective

Supervisory team:

Prof Adamantios Gafos (University of Potsdam, gafos@uni-potsdam.de)

Dr Michael Proctor (Macquarie University, michael.proctor@mq.edu.au)

Speech has been argued to be the most highly developed motor skill possessed by all of us. Speech production involves precise control of the articulatory organs as they form and release constrictions in a limited space inside the body. Speech has evolved to harness this complex activity for the purposes of communication. Using instrumental laboratory methods to study this activity, this project aims to sharpen and broaden a research program in which tools and concepts from dynamical systems theory are used to understand speech production, perception, and language-particular phonological organization. This research program has inspired work, by the PIs and others, that takes a deeper look at the relation between phonology and phonetics, exploring the idea that the discrete versus continuous character that distinguishes them may be formally parallel to the qualitative versus quantitative aspects of non-linear dynamical systems in the biological and physical sciences. Topics of interest include the dynamical modeling of speech gestures, the expression of phonological structure via different modes of cohesion or coordination among constellations of primitives, and the role of dynamics in phonological computation expressed via constraint interaction.