Tailored driving lessons reduced the critical driving errors made by older adults. Longer-term follow-up and larger trials are required.
Computerized training for cognitive enhancement is of great public interest; however, there is inconsistent evidence for the transfer of training gains to everyday activity. Several large trials have focused on speed of processing (SOP) training, with some promising findings for long-term effects on daily activity but no immediate transfer to other cognitive tests. Here, we examine the transfer of SOP training gains to cognitive measures that are known predictors of driving safety in older adults. Fifty-three adults aged 65-87 years who were current drivers participated in a two-group non-randomized design with repeated measures and a no-contact matched control group. The intervention group completed an average of 7.9 (SD = 3.0) hours of self-administered online SOP training at home. The control group was matched on age, gender, and test-retest interval. Measures included the Useful Field of View (UFOV) test, a Hazard Perception test, choice reaction time (Cars RT), Trail Making Test B, a Maze test, visual motion threshold, as well as road craft and road knowledge tests. Speed of processing training resulted in significant improvement in processing speed on the UFOV test relative to controls, with an average change of -45.8 ms (SD = 14.5) and an effect size of ω = 0.21. Performance on the Maze test also improved, but significant slowing on the Hazard Perception test was observed after SOP training. Training effects on the UFOV task were associated with similar effects on the Cars RT, but not the Hazard Perception and Maze tests, suggesting transfer to some but not all driving-related measures. There were no effects of training on any of the other measures examined. Speed of processing training effects on the UFOV task can be achieved with self-administered, online training at home, with some transfer to other cognitive tests. However, differential effects of training may be observed for tasks requiring goal-directed search strategies rather than diffuse attention.
Driving is normative for many older Australians in their 70s. Similar factors are associated with actual cessation and expectation of driving, suggesting that older adults do have a sense of their expected driving life.
We aimed to evaluate risk of unsafe on-road driving performance among older adults with mild cognitive impairment (MCI). Adults with MCI exhibit a similar range of driving ability to cognitively normal (CN) adults, although on average they scored lower on off-road and on-road assessments. Driving-specific tests were more strongly associated with safety ratings than traditional neuropsychological tests.
In the general population, the ANU-ADRI, comprising lifestyle, medical, and demographic factors, is associated with the risk of progression from cognitively normal (CN) status to mild cognitive impairment (MCI), whereas a genetic risk score (GRS) comprising the main Alzheimer's disease (AD) risk genes was not associated with this risk. The ANU-ADRI may be used for population-level risk assessment and screening.
Mid-life cannabis users had poorer verbal recall than non-users, but this was not related to their current level of cannabis use, and cannabis use was not associated with accelerated cognitive decline.
To design a low-cost simulator-based driving assessment for older adults and to compare its validity with that of an on-road driving assessment and other measures of older driver risk. A low-cost simulator-based assessment is valid as a screening instrument for identifying at-risk older drivers, but not as an alternative to on-road evaluation when accurate data on competence or pattern of impairment are required for licensing decisions and training programs.
Objective. To examine the effect of diabetes treatment on change in measures of specific cognitive domains over 4 years. Research Design and Methods. The sample was drawn from a population-based cohort study in Australia (the PATH Through Life Study) and comprised 1814 individuals aged 65-69 years at first measurement, of whom 211 were diagnosed with diabetes. Cognitive function was measured using 10 neuropsychological tests. The effect of type of diabetes treatment (diet, oral hypoglycemic agents, and insulin) on measures of specific cognitive domains was assessed using generalized linear models adjusted for age, sex, education, smoking, physical activity level, BMI, and hypertension. Results. Comparison of cognitive function between diabetes treatment groups showed no significant effect of type of pharmacological treatment on cognitive function compared to the diet-only group or the no-diabetes group. Of those on oral hypoglycemic treatment only, participants who used metformin alone had better cognitive function at baseline in the domains of verbal learning, working memory, and executive function compared to participants on other forms of diabetes treatment. Conclusion. This study observed no significant effect of type of pharmacological treatment for diabetes on cognitive function, except that participants who used metformin alone showed a protective effect in the domains of verbal learning, working memory, and executive function.
Grading instruments are an important part of evidence-based medicine and are used to inform health policy and the development of clinical practice guidelines. They are extensively used in the development of clinical guidelines and the assessment of research publications, having particular impact on health care and policy sectors. The positive effects of using grading instruments are, however, potentially undermined by their misuse and a number of shortcomings. This review found eight key concerns about grading instruments: (1) lack of information on validity and reliability, (2) poor concurrent validity, (3) may not account for external validity, (4) may not be inherently logical, (5) susceptibility to subjectivity, (6) complex systems with inadequate instructions, (7) may be biased toward randomized controlled trial (RCT) studies, and (8) may not adequately address the variety of non-RCTs. This narrative review concludes that there is a need to take into account these criticisms and domain-specific limitations, to enable the use and development of the most appropriate grading instruments. Grading systems need to be matched to both the research question being asked and the type of evidence being used.
To determine whether dance benefits executive function more than walking, an activity that is simple and functional. The superior potential of dance over walking on executive functions of cognitively healthy and active older adults was not supported. Dance improved one of the cognitive domains (spatial memory) important for learning dance. Controlled trials targeting inactive older adults and using a higher dose may produce stronger effects, particularly for novice dancers.
With the number of older drivers projected to increase by up to 70% over the next 20 years, preventing injury resulting from crashes involving older drivers is a significant concern for both policy-makers and clinicians. While the total number of fatal crashes per annum has steadily decreased since 2005 in Australia, the rate of fatalities has demonstrated an upward trend since 2010 in drivers aged 65 years and above (8.5 per 100,000), such that it is now on par with the fatality rate in drivers aged 17-25 years (8.0 per 100,000) (Austroads, 2015). Similar statistics are reported for the United States (NHTSA, 2012), implying there is a need for better identification of those older drivers who are unsafe and implementation of strategies that can enhance mobility while maximizing road safety.
Dementia risk reduction is a global health and fiscal priority given the current lack of effective treatments and the projected increased number of dementia cases due to population ageing. There are often gaps among academic research, clinical practice, and public policy. We present information on the evidence for dementia risk reduction and evaluate the progress required to formulate this evidence into clinical practice guidelines. This narrative review provides capsule summaries of current evidence for 25 risk and protective factors associated with AD and dementia across domains including biomarker, demographic, lifestyle, medical, and environmental factors. We identify the factors for which evidence is strong and thereby especially useful for risk assessment, with the goal of personalising recommendations for risk reduction. We also note gaps in knowledge, and discuss how the field may progress towards clinical practice guidelines for dementia risk reduction.
Those with type 2 diabetes, younger males with high non-diabetic HbA1c, and adults with high stable blood glucose are at increased risk of poorer cognition. The findings reinforce the need for management of diabetes risk factors in midlife.
The development and integration of risk assessment and clinical risk management for Alzheimer's disease (AD) and dementia is a rapidly emerging field of research and practice. At present, risk management is the only available approach with potential for a large impact on the projected rates of dementia, given population aging. This review describes six available risk assessment tools, including those developed specifically for AD and those for dementia. These tools differ along several important dimensions, including whether they (a) include clinical measures, (b) require a clinician's ratings, (c) are predominantly self-report, (d) are independently validated, and (e) are available online. A narrative review of recently identified risk factors not included in these instruments is included, indicating future directions for risk assessment. Finally, consideration is given to the prioritization of risk advice according to the ease of risk modification and the potential for synergies among risk factors.
There is continuing debate about long-term effects of brain injury. We examined a range of traumatic brain injury (TBI) variables (TBI history, severity, frequency, and age of injury) as predictors of cognitive outcome over 8 years in an adult population, and interactions with apolipoprotein E (APOE) genotype, sex, and age cohorts. Three randomly sampled age cohorts (20-24, 40-44, 60-64 years at baseline; N = 6333) were each evaluated three times over 8 years. TBI variables, based on self-report, were separately modeled as predictors of cognitive performance using linear mixed effects models. TBI predicted longitudinal cognitive decline in all three age groups. APOE ε4+ genotypes in the young and middle-aged groups predicted lower baseline cognitive performance in the context of TBI. Baseline cognitive performance was better for young females than males but this pattern reversed in middle age and old age. The findings suggest TBI history is associated with long-term cognitive impairment and decline across the adult lifespan. A role for APOE genotype was apparent in the younger cohorts but there was no evidence that it is associated with impairment in early old age. The effect of sex and TBI on cognition varied with age cohort, consistent with a proposed neuroprotective role for estrogen.
Atypical asymmetries of spatial attention have been reported in children with attention deficit hyperactivity disorder (ADHD) and may be exacerbated by non-spatial factors such as attentional capacity. Although preliminary evidence suggests that asymmetries of attention in ADHD may be modifiable by the psychostimulant, methylphenidate, further placebo-controlled studies are required. This study first aimed to confirm recent evidence that increasing non-spatial processing load at fixation can unmask a spatial gradient of target detection in children with ADHD but not Controls. Second, we used placebo-controlled randomized trial methodology to ask whether 20 mg of methylphenidate (MPH) could remediate any load-dependent asymmetry of spatial attention in adolescents with ADHD. Twelve male adolescents with ADHD were assessed twice in a double-blind, randomized design, under either placebo or an acute dose of methylphenidate. Thirteen typically developing adolescent Controls completed a single session under placebo. Participants completed a computer-based task in which they monitored a centrally presented rapid serial visual presentation stream for a probe stimulus, while also responding to brief peripheral events. The attentional load of the central task was manipulated by varying the target instructions but not the physical stimuli or the frequency of targets. Between-group analyses under placebo conditions indicated that increased attentional load induced a spatial gradient for target detection in the ADHD group but not Controls, such that load slowed response times for left, but not right, hemi-field targets. This load-dependent spatial asymmetry in the adolescents with ADHD was abolished by administration of methylphenidate. Methylphenidate may "normalise" target detection between the hemi-fields in ADHD via enhancement of the right-lateralised ventral attention networks that support non-spatial attention.
This study examined the prevalence of co-morbid age-related eye disease and symptoms of depression and anxiety in late life, and the relative roles of visual function and disease in explaining symptoms of depression and anxiety. A community-based sample of 662 individuals aged over 70 years was recruited through the electoral roll. Vision was measured using a battery of tests including high and low contrast visual acuity, contrast sensitivity, motion sensitivity, stereoacuity, Useful Field of View, and visual fields. Depression and anxiety symptoms were measured using the Goldberg scales. The prevalence of self-reported eye disease [cataract, glaucoma, or age-related macular degeneration (AMD)] in the sample was 43.4%, with 7.7% reporting more than one form of ocular pathology. Of those with no eye disease, 3.7% had clinically significant depressive symptoms. This rate was 6.7% among cataract patients, 4.3% among those with glaucoma, and 10.5% for AMD. Generalized linear models adjusting for demographics, general health, treatment, and disability examined self-reported eye disease and visual function as correlates of depression and anxiety. Depressive symptoms were associated with cataract only, AMD, comorbid eye diseases and reduced low contrast visual acuity. Anxiety was significantly associated with self-reported cataract, and reduced low contrast visual acuity, motion sensitivity and contrast sensitivity. We found no evidence for elevated rates of depressive or anxiety symptoms associated with self-reported glaucoma. The results support previous findings of high rates of depression and anxiety in cataract and AMD, and in addition show that mood and anxiety are associated with objective measures of visual function independently of self-reported eye disease. The findings have implications for the assessment and treatment of mental health in the context of late-life visual impairment.
Few studies report incidence of mild cognitive impairment (MCI) and other mild cognitive disorders (MCD) in cohorts in their 60s, at an age when diagnoses are less stable. The authors' goal was to estimate the incidence and prevalence of MCI and MCD, characterize subgroups with stable vs nonstable diagnoses, and evaluate the impact of diagnosis on daily life in a young-old cohort. MCDs in individuals in their 60s occur in at least 10% of the population and are likely to be heterogeneous in terms of their etiology and long-term prognosis, but may cause a significant impact in everyday life.
Converging evidence suggests that right-hemisphere dominant spatial attention systems can be modulated by non-spatial processes such as attentional capacity. The severity of neglect in right-hemisphere stroke patients, for example, is correlated with impairments in non-lateralized attention. Evidence also suggests the coexistence of lateralized inattention and reduced capacity in developmental disorders of attention, such as attention deficit hyperactivity disorder (ADHD), which is marked by cognitive impairments suggestive of right hemisphere dysfunction. These lines of evidence argue against a coincident damage hypothesis and suggest instead a direct modulation of spatial attention by non-spatial processes. Here we sought experimental evidence for this relationship in both acquired and developmental disorders of attention. Six adult stroke patients with focal right brain injury and 19 children with ADHD were studied in comparison to control groups of both healthy older adults and typically developing children. The participants were required to detect transient, unilateral visual targets while simultaneously monitoring a stream of alphanumeric characters at fixation. Load at fixation was manipulated by asking participants either to ignore the central stream and focus on the peripheral detection task (no-report condition), or to monitor the central stream for a probe item that was defined by either a unique feature (low-load condition) or a conjunction of features (high-load condition). As expected, in all participants greater load at fixation slowed responses to peripheral targets. Crucially, in right-brain-injured patients, but not older healthy adults, left target detection was slowed significantly more than central and right target detection. A qualitatively similar pattern was seen in children with ADHD, but not in typically developing children.
The imposition of load at fixation slowed responses to left compared with right targets, and this response time asymmetry was correlated with the severity of ADHD symptoms. These results suggest that a direct manipulation of non-spatial attention can reveal lateralised attention deficits in both acquired and developmental forms of inattention. Our findings support the view that spatial attention networks are tightly integrated with non-lateralized aspects of attention.
In this relatively young cohort, retrospective self-report of cognitive decline does not reflect objective deterioration in cognition over the time period in question, but it may identify individuals in the initial stages of dementia and those with elevated psychological and genotypic risk factors for the development of dementia.
Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that message is presented in a noisy background. Speech is a particularly important example of multisensory integration because of its behavioural relevance to humans and also because brain regions have been identified that appear to be specifically tuned for auditory speech and lip gestures. Previous research has suggested that speech stimuli may have an advantage over other types of auditory stimuli in terms of audio-visual integration. Here, we used a modified adaptive psychophysical staircase approach to compare the influence of congruent visual stimuli (brief movie clips) on the detection of noise-masked auditory speech and non-speech stimuli. We found that congruent visual stimuli significantly improved detection of an auditory stimulus relative to incongruent visual stimuli. This effect, however, was equally apparent for speech and non-speech stimuli. The findings suggest that speech stimuli are not specifically advantaged by audio-visual integration for detection at threshold when compared with other naturalistic sounds.
A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the 'ventriloquist aftereffect', reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their physical spatial discordance. Such dynamic changes to sensory representations are likely to underlie the brain's ability to accommodate inter-sensory discordance produced by sensory errors (particularly in sound localization) and variability in sensory transduction. It is currently unknown, however, whether these plastic changes induced by adaptation to spatially disparate inputs occur automatically or whether they are dependent on selectively attending to the visual or auditory stimuli. Here, we demonstrate that robust auditory spatial aftereffects can be induced even in the presence of a competing visual stimulus. Importantly, we found that when attention is directed to the competing stimuli, the pattern of aftereffects is altered. These results indicate that attention can modulate the ventriloquist aftereffect.
Unilateral spatial neglect is a disorder of attention and spatial representation, in which early visual processes such as figure-ground segmentation have been assumed to be largely intact. There is evidence, however, that the spatial attention bias underlying neglect can bias the segmentation of a figural region from its background. Relatively few studies have explicitly examined the effect of spatial neglect on processing the figures that result from such scene segmentation. Here, we show that a neglect patient's bias in figure-ground segmentation directly influences his conscious recognition of these figures. By varying the relative salience of figural and background regions in static, two-dimensional displays, we show that competition between elements in such displays can modulate a neglect patient's ability to recognise parsed figures in a scene. The findings provide insight into the interaction between scene segmentation, explicit object recognition, and attention.
Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed objects with those predicted by change-detection models based on signal detection theory (SDT) and high-threshold theory (HTT). Detected changes were not identified as accurately as predicted by models based on either theory, suggesting that some changes are detected by a process that does not support change identification. Undetected changes were identified as accurately as predicted by the HTT model but much less accurately than predicted by the SDT models. The process underlying change detection was investigated further by determining receiver-operating characteristics (ROCs). ROCs did not conform to those predicted by either an SDT or an HTT model but were well modeled by a dual-process model that incorporated HTT and SDT components. The dual-process model also accurately predicted the rates at which detected and undetected changes were correctly identified.
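The abstract above compares change-detection models grounded in signal detection theory. As a minimal illustration of the equal-variance SDT sensitivity index (d') on which such models build, the sketch below converts hit and false-alarm rates to d'; the numerical rates shown are hypothetical and are not data from the study:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance SDT sensitivity index: d' = z(H) - z(F),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical listener: detects 80% of scene changes,
# false-alarms on 20% of no-change trials.
sensitivity = d_prime(0.80, 0.20)
print(round(sensitivity, 2))  # ≈ 1.68
```

A high-threshold account, by contrast, assumes a discrete detect/guess process rather than a continuous evidence variable, which is why the two model families predict different ROC shapes.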
Prismatic adaptation is increasingly recognised as an effective procedure for rehabilitating symptoms of unilateral spatial neglect, producing relatively long-lasting improvements on a variety of spatial attention tasks. The mechanisms by which the aftereffects of adaptation change neglect patients' performance on these tasks remain controversial. It is not clear, for example, whether adaptation directly influences the pathological ipsilesional attention bias that underlies neglect, or whether it simply changes exploratory motor behaviour. Here we used visual and auditory versions of a target detection task with a secondary task at fixation. Under these conditions, patients with neglect demonstrated a spatial gradient in their ability to orient to the brief, peripheral visual or auditory targets. Following prism adaptation, we found that overall performance on both the auditory and visual task improved; however, most patients in our sample did not show changes in their visual or auditory spatial gradient of attention, despite adequate aftereffects of adaptation and significant improvement in neglect on visual cancellation. Although there were individual cases that suggested prism-induced changes in visual target detection, and even reversal of the visual spatial gradient, such cases were not evident for the auditory modality. The findings indicate that spatial gradients in stimulus-driven attention may be less responsive to the effects of prism adaptation than neglect symptoms in voluntary orienting and exploratory behaviour. Individual factors such as lesion site and symptom severity may also determine the expression of prism effects on spatial neglect.
Visuomotor adaptation to a shift in visual input produced by prismatic lenses is an example of dynamic sensory-motor plasticity within the brain. Prism adaptation is readily induced in healthy individuals, and is thought to reflect the brain's ability to compensate for drifts in spatial calibration between different sensory systems. The neural correlate of this form of functional plasticity is largely unknown, although current models predict the involvement of parieto-cerebellar circuits. Recent studies that have employed event-related functional magnetic resonance imaging (fMRI) to identify brain regions associated with prism adaptation have discovered patterns of parietal and cerebellar modulation as participants corrected their visuomotor errors during the early part of adaptation. However, the role of these regions in the later stage of adaptation, when 'spatial realignment' or true adaptation is predicted to occur, remains unclear. Here, we used fMRI to quantify the distinctive patterns of parieto-cerebellar activity as visuomotor adaptation develops. We directly contrasted activation patterns during the initial error correction phase of visuomotor adaptation with that during the later spatial realignment phase, and found significant recruitment of the parieto-cerebellar network, with activations in the right inferior parietal lobe and the right posterior cerebellum. These findings provide the first evidence of both cerebellar and parietal involvement during the spatial realignment phase of prism adaptation.
Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distractor sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.
The majority of research findings to date indicate that spatial cues play a minor role in enhancing listeners' ability to parse and detect a sound of interest when it is presented in a complex auditory scene comprising multiple simultaneous sounds. Frequency and temporal differences between sound streams provide more reliable cues for scene analysis as well as for directing attention to relevant auditory 'objects' in complex displays. The present study used naturalistic sounds with varying spectro-temporal profiles to examine whether spatial separation of sound sources can enhance target detection in an auditory search paradigm. The arrays of sounds were presented in virtual auditory space over headphones. The results of Experiment 1 suggest that target detection is enhanced when sound sources are spatially separated relative to when they are presented at the same location. Experiment 2 demonstrated that this effect is most prominent within the first 250 ms of exposure to the array of sounds. These findings suggest that spatial cues may be effective for enhancing early processes such as stream segregation, rather than simply directing attention to objects that have already been segmented.
Unilateral spatial neglect due to right brain damage (RBD) can occur in several different sensory modalities in the same patient. Previous studies of the association between auditory and visual neglect have yielded conflicting outcomes. Most such studies have compared performance on relatively simple clinical measures of visual neglect, such as target cancellation, with that on more sophisticated measures of auditory perception. This is problematic because such tasks are typically not matched for the cognitive processes they exercise. We overcame this limitation by using equivalent visual and auditory versions of extinction and temporal-order judgment (TOJ) tasks. RBD patients demonstrated lateralized deficits on both visual and auditory tasks when compared with same-aged, healthy controls. Critically, a significant association between the severity of visual and auditory deficits was apparent on the TOJ task but not the extinction task, suggesting that even when task demands are matched across modalities, dissociations between visual and auditory neglect can be apparent. Across the auditory tasks, patients showed more pronounced deficits for verbal stimuli than for non-verbal stimuli. These findings have implications for recent models proposed to explain the role of spatial attention in multimodal perception.
In natural environments that contain multiple sound sources, acoustic energy arising from the different sources sums to produce a single complex waveform at each of the listener's ears. The auditory system must segregate this waveform into distinct streams to permit identification of the objects from which the signals emanate. Although the processes involved in stream segregation are now reasonably well understood [1, 2 and 3], little is known about the nature of our perception of complex auditory scenes. Here, we examined complex scene perception by having listeners detect a discrete change to an auditory scene comprising multiple concurrent naturalistic sounds. We found that listeners were remarkably poor at detecting the disappearance of an individual auditory object when listening to scenes containing more than four objects, but they performed near perfectly when their attention was directed to the identity of a potential change. In the absence of directed attention, this "change deafness" was greater for objects arising from a common location in space than for objects separated in azimuth. Change deafness was also observed for changes in object location, suggesting that it may reflect a general effect of the dependence of human auditory perception on attention.