Which of the following aspects of reading has to do with word recognition?

For children who lack a strong knowledge of phonics, adopt a similar flash-card approach, beginning with easily recognized sound combinations (e.g., “th”) and eventually using two cards at a time to combine (blend) sounds into a whole word (e.g., bi+rd).

Use flash cards of simple number combinations (begin with addition) that can be solved mentally, without pencil and paper or a calculator. Begin with the 1’s (1+3=4, with the answer shown on the reverse side), then introduce the 2’s, then the 3’s, and so on until all combinations up to the 9’s have been mastered and can be answered quickly and accurately. Use a game-like format similar to the one recommended immediately above, and limit training sessions to 15 minutes.
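As an illustrative sketch (not from the chapter), the recommended card sequence can be generated programmatically; the function name and card format here are assumptions:

```python
# Illustrative sketch: generate addition flash cards in the recommended
# order (the 1's first, then the 2's, and so on up to the 9's).
def fact_families(max_addend=9):
    """Map each addend n to its list of (card front, card back) pairs."""
    return {
        n: [(f"{n}+{m}", n + m) for m in range(1, max_addend + 1)]
        for n in range(1, max_addend + 1)
    }

cards = fact_families()
print(cards[1][2])  # the 1+3 card with its answer: ('1+3', 4)
```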

URL: https://www.sciencedirect.com/science/article/pii/B9780128157558000083

Assessment and identification of learning disabilities

Emily A. Farris, ... Timothy N. Odegard, in The Clinical Guide to Assessment and Treatment of Childhood Learning and Attention Problems, 2020

Achievement

Most methods used for the identification of students with LDs include low achievement as a defining characteristic, which is necessary but insufficient for the identification of LDs. Identification procedures based on any model of LDs will include measures of a child’s level of achievement. Such information is crucial because it provides a direct observation of the exact behaviors of interest: the child’s ability to read, write, spell, and solve mathematical problems (Fletcher & Miciak, 2017; Siegel, 1999; Stanovich, 1999). Furthermore, empirical data suggest that there are five different forms of LDs, each impacting a different area of achievement: word recognition and spelling, reading comprehension, mathematical computations, mathematical problem-solving, and written expression (Fletcher et al., 2018).

Standardized norm-referenced measures of achievement are used to assess a child’s level of performance in these areas. Common examples of such measures include the Woodcock–Johnson Test of Achievement, currently in the fourth edition (Schrank, Mather, & McGrew, 2014), the Kaufman Test of Educational Achievement, currently in the third edition (Kaufman & Kaufman, 2014), and the Wechsler Individual Achievement Test, currently in the third edition (Psychological Corporation, 2009). Each of these test batteries measures the specific domains impacted by LDs and includes age and grade norms that allow standard scores to be computed. Standard scores support efforts to determine whether a child is achieving at a level comparable to his or her peers. This is often operationalized as a cut point; for example, a cut point at the 25th percentile could be adopted. Children whom 75% of their age group outperformed on a given measure of academic achievement (e.g., untimed isolated word reading) would be deemed to exhibit low achievement in this area. Yet difficulties arise in determining the exact placement of cut points, which affects the provision of services to individual children (Francis et al., 2005). These difficulties emphasize that, if cut points are to be used, they should be created with great care.
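As a minimal sketch of the cut-point logic (assuming standard scores scaled to a mean of 100 and an SD of 15, which is typical of these batteries but an assumption here), the 25th-percentile example translates into a standard-score threshold of roughly 90:

```python
from statistics import NormalDist

# Assumed scaling: standard scores with mean 100, SD 15.
norm = NormalDist(mu=100, sigma=15)

def low_achievement(standard_score, cut_percentile=0.25):
    """True if the score falls below the chosen percentile cut point."""
    return standard_score < norm.inv_cdf(cut_percentile)

threshold = norm.inv_cdf(0.25)  # about 89.9 under these assumptions
print(threshold, low_achievement(85), low_achievement(95))
```

Moving the cut point even slightly (say, to the 20th or 30th percentile) shifts this threshold by a few standard-score points, which is one concrete way the placement difficulties noted by Francis et al. (2005) play out.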

URL: https://www.sciencedirect.com/science/article/pii/B9780128157558000010

Julie Gawrylowicz, Georgina Bartlett, in The Handbook of Alcohol Use, 2021

Methodological challenges and future research directions

One methodological issue present in alcohol administration studies is the timing and rate of alcohol absorption. An examination of the variability in alcohol absorption during a drinking session found that the average peak BAC was 0.073 g/dL, with a range of 0.047–0.100 g/dL (Winek, Wahba, & Dowdell, 1996). The average peak time (i.e., when the BAC reached its highest point) was 17.4 minutes, ranging from 0 to 74 minutes. Other factors, such as the timing and nature of one’s last meal and participants’ gender, can further impact absorption rate and target BAC, as seen in Hildebrand Karlén et al. (2015), where female participants reached significantly higher BACs than males.

To complicate things further, research has shown that memory encoding and retrieval might be affected differently depending on the specific limb of the BAC curve. Söderlund, Parker, Schwartz, and Tulving (2005) showed that alcohol impaired encoding in cued and free recall and in recognition of completed word fragments regardless of limb, but word recognition was impaired only during the ascending BAC. Thus, how alcohol affects our memory depends not only on the memory task utilized, but also on the specific timing of the memory test, that is, on the ascending or the descending limb of the BAC curve. Future studies should therefore examine the effects of alcohol on eyewitness memory performance on different limbs of the BAC curve to shed more light on when exactly episodic memory might be impaired.

Whilst there are methodological difficulties associated with the timing of alcohol administration and absorption, there are also challenges associated with assigning appropriate control groups. Studies typically use a control group, in which participants knowingly do not consume alcohol, and/or a placebo group, in which participants are under the impression that they are consuming alcohol but do not actually receive any. The placebo condition is useful for measuring the behavioral and cognitive effects of expecting alcohol in the absence of pharmacological effects. Eyewitness memory studies have now begun to use fully balanced placebo designs to control for alcohol expectancies (see Flowe et al., 2019; Gawrylowicz et al., 2019). In addition to the usual alcohol, placebo, and sober control groups, a fourth group is included: the reverse placebo group (individuals do not believe that they received alcohol when they actually did). Gawrylowicz et al. (2019) found that their reverse placebo group performed consistently worse on a cued recall task: they gave fewer correct responses and made more errors compared to the alcohol, control, and placebo groups. Flowe et al. (2019) found a significant expectancy effect for recall completeness when participants were interviewed with the Self-Administered Interview; there was no significant effect of actual alcohol consumption. These findings suggest that the pharmacological effects of alcohol might not be solely responsible for differences in memory performance and that alcohol-related expectancies play a crucial role too.

Including placebo groups can be challenging, as it requires convincing participants that they received alcohol when they did not, or vice versa. In Flowe et al.’s (2019) study, 27% of participants who had been told that they received tonic water thought that they drank alcohol, whereas 22% believed they had alcohol when in fact they consumed tonic. Similarly, Schreiber Compo et al. (2017) reported that 16% of placebo participants believed that they had not consumed alcohol. Even if the placebo manipulation is successful, placebo participants often report feeling less intoxicated than their intoxicated counterparts (Kneller & Harvey, 2016).

To summarize, the administration of alcohol in laboratory settings comes with a myriad of methodological challenges. These challenges range from variation in peak BACs to ensuring that the timing of administration is appropriate and consistent across participants. Moreover, the inclusion of viable placebo groups is often not possible, as the drink deception is difficult to execute, especially in the reverse placebo condition.

URL: https://www.sciencedirect.com/science/article/pii/B9780128167205000165

WISC–V and the Evolving Role of Intelligence Testing in the Assessment of Learning Disabilities

Donald H. Saklofske, ... Lawrence G. Weiss, in WISC-V (Second Edition), 2019

Subtypes of Learning Disabilities

An enormous body of research has accumulated with various approaches for identifying subtypes of LDs. Inquiry into possible subtypes of LDs began with Johnson and Myklebust (1967) and colleagues (e.g., Boshes & Myklebust, 1964; Myklebust & Boshes, 1960). They proposed a nonverbal disability characterized by the absence of serious problems in areas of language, reading, and writing, but with deficiencies in social perception, visual-spatial processing, spatial and right–left orientation, temporal perception, handwriting, mathematics, and executive functions such as disinhibition and perseveration. The neuropsychology literature later characterized a nonverbal LD as reflecting a right hemisphere deficit (Pennington, 1991; Rourke, 1989); however, it remains the least well understood.

Subtyping holds widespread appeal because it offers a way to explain the heterogeneity within the category of LDs and to describe the learning profiles of students with LDs more specifically, which can lead to an individualized approach to intervention. Still, there is considerable debate over whether LD subtypes exist as distinct categories that can be reliably identified, how best to categorize them, and whether instructional implications differ by subtype.

According to Fletcher et al. (2003), there are three main approaches to subtyping LDs: achievement subtypes, clinical inferential (rational) subtypes, and empirically based subtypes.

The first approach, achievement subtypes, relies on achievement testing profiles. For example, subgroups of reading difficulties are differentiated by performance on measures of word recognition, fluency, and comprehension. Subgroups with reading disability, math disability, and an FSIQ score below 80 exhibit different patterns of cognitive attributes (Fletcher et al., 2003; Grigorenko, 2001). Compton et al. (2011) found distinctive patterns of strengths and weaknesses in abilities for several LD subgroups, in contrast to the flat pattern of cognitive and academic performance manifested by the normally achieving group. For example, students with LDs in reading comprehension showed a strength in math calculation alongside weaknesses in language involving listening comprehension, oral vocabulary, and syntax, whereas students with LDs in word reading showed strengths in math problem-solving and reading comprehension alongside weaknesses in working memory and oral language.

The second approach, clinical inferential, involves rationally defining subgroups based on clinical observations, typically by selecting individuals with similar characteristics. One example is students with a core deficit in phonological processing. The double-deficit model distinguishes three subtypes: two with a single deficit in either phonological processing or rapid automatic naming, and a third with a double deficit in both areas (Wolf & Bowers, 1999; Wolf, Bowers, & Biddle, 2000). Another subtype model classifies three types of poor decoders based on performance in reading nonwords and irregular (exception) words: those with phonological dyslexia, those with orthographic/surface dyslexia, and those with mixed dyslexia, meaning weaknesses in both phonological and orthographic processing (Castles & Coltheart, 1993; Feifer & De Fina, 2000).

The third approach, empirically based subtypes of LDs, is based on multivariate empirical classification. It uses techniques such as Q-factor analysis and cluster analysis, and subsequent measures of external validity. Empirical subtyping models have been criticized for being atheoretical and unreliable; however, these models have provided additional support for rational subtyping methods, including the double-deficit model and differentiating garden-variety from specific reading disabilities (Fletcher et al., 2003). For example, a study by Pieters, Roeyers, Rosseel, Van Waelvelde, and Desoete (2013) used data-driven model-based clustering to identify two clusters of math disorder: one with number fact-retrieval weaknesses, and one with procedural calculation problems. When both motor and mathematical variables were included in the analysis, two clusters were identified: one with weaknesses in number fact retrieval, procedural calculation, as well as motor and visual-motor integration skills; a second with weaknesses in procedural calculation and visual-motor skills.

The identification and classification of an LD rely on either a dimensional or a categorical framework. Subtyping efforts are based on evidence that the heterogeneity within LDs is best represented as distinct subtypes. For example, reading and math LDs can be differentiated because students with reading LDs tend to have a relative strength in mathematics, whereas students with mathematics LDs tend to have a relative strength in reading (Compton et al., 2011). However, some researchers contend that the attributes of reading disability and math disability are dimensional, and that efforts to categorize them as distinct subtypes rest on cut scores and correlated assessments (Branum-Martin, Fletcher, & Stuebing, 2013). Continued research is needed to advance our understanding of LD subtypes and their instructional implications for providing tailored intervention to a heterogeneous population of individuals with LDs.

URL: https://www.sciencedirect.com/science/article/pii/B9780128157442000094

The role of word-recognition accuracy in the development of word-recognition speed and reading comprehension in primary school: A longitudinal examination

Panagiotis Karageorgos, ... Johannes Naumann, in Cognitive Development, 2020

3.3.1 Word-recognition accuracy and speed

Word-recognition accuracy and speed were assessed with the computerized lexical decision subtest of the ProDi-L test battery (Richter et al., 2017; see also Richter, Isberner, Naumann, & Kutzner, 2013). Children were presented with 16 letter strings, words (e.g., Traktor [tractor]) and pseudowords (e.g., Spinfen), in randomized order. Their task was to decide whether the presented letter string was a real word or not by using two response keys (yes/no). Pseudowords were orthographically and phonologically legal letter strings and varied in their similarity to actual German words. Pseudowords similar to actual words were constructed by changing the first character of an existing word (e.g., Name → Bame). Pseudowords dissimilar to actual words were constructed by combining the syllables of two existing words with irregular spellings. For example, the pseudoword Chilance was constructed by combining the first syllable of the word Chili and the second and third syllables of the word Balance. The pseudowords also included pseudohomophones (1–3 per measurement point), which sound like real words but have a different orthographical form (e.g., Heckse instead of Hexe/witch). These items cannot be solved via the application of phoneme-grapheme translation rules but require direct word recognition via the lexical route. Seven items in the first measurement point, nine items in the second measurement point, and eight items in the last two measurement points were regular and irregular real German words. Different but parallel words and pseudowords were used at all four measurement points. Apart from the slight difference in the proportions of words and pseudowords in the first and second item sets (which was due to an error), the item sets were strictly parallelized according to item features, mean accuracy, and mean response time, which were obtained in another cross-sectional study (Richter et al., 2013).
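The two pseudoword construction rules can be sketched as simple string operations; the helper names below are illustrative assumptions, not part of the ProDi-L materials:

```python
# Sketch of the two pseudoword constructions described above.
def swap_first_letter(word, new_first):
    """Word-like pseudoword: replace the first character (Name -> Bame)."""
    return new_first + word[1:]

def combine_syllables(first_syllable, remaining_syllables):
    """Word-unlike pseudoword: first syllable of one word plus the remaining
    syllables of another (Chi from Chili + lance from Balance -> Chilance)."""
    return first_syllable + remaining_syllables

print(swap_first_letter("Name", "B"), combine_syllables("Chi", "lance"))
```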

The word stimuli were systematically varied in frequency and number of orthographical neighbours. They had an average frequency of 1.25 (SD = .87), retrieved from the CELEX German lemma lexicon (metric: Mannheim written frequency, logarithmic; Baayen, Piepenbrock, & Gulikers, 1995; Baayen, Piepenbrock, & van Rijn, 1993), an average length of 5.62 (SD = 1.56) characters, and on average 1.75 (SD = 2.46) orthographical neighbours. The pseudowords were matched in length and frequency to the word stimuli: they were based on words with an average frequency of 1.03 (SD = .66), had an average length of 6.31 (SD = 2.16) characters, and had on average 1.69 (SD = 3.25) orthographical neighbours. To examine whether words and pseudowords differed in frequency, length, and orthographical neighbours across the measurement points, we ran three separate analyses of variance. The results indicated no significant differences between words and pseudowords across the measurement points (for all comparisons, p > .17). These results suggest that largely parallel items were used at each measurement point.

Following the ProDi-L manual, two criteria were applied to identify and remove outliers. Logarithmic latencies that were three standard deviations below or above the mean logarithmic latency for the item in the norming sample were coded as missing. The idea behind this criterion is that very short response times are likely to indicate an irregular response, such as clicking through items without reading them, and thus they should not be included in further analyses. Likewise, very long response times are likely due to disturbances, mind wandering, etc. Furthermore, for each child, response times that deviated more than two standard deviations from the average of the individual logarithmic response times were also coded as missing. Further data preparation was performed separately for each measurement point according to the procedure reported by Karageorgos et al. (2019). The sum of correct responses was transformed into proportions representing word recognition accuracy. Furthermore, a words-per-minute score was calculated as an indicator of word-recognition speed. The number of correct and incorrect responses to words and pseudowords was multiplied by 60 000 ms and then divided by the overall latency across all items measured in ms. A child, for example, who responded to 10 items in 10 000 ms received a score of 60 words per minute. Words-per-minute scores were not computed for participants with more than 10 % missing values (due to the outlier removal criteria discussed above) at the relevant measurement point. Thus, word-recognition speed scores were missing for 449 of 4380 data points. The test-retest reliability between measurement points was computed as the intraclass correlation of word-recognition scores at the end of each school year for a total of 692 children (those with complete data sets) using the R-package irr (Gamer, Lemon, Fellows, & Singh, 2019). 
A two-way mixed-effects model for mean rating and absolute agreement was used for computing the ICC (Koo & Li, 2016; McGraw & Wong, 1996; Price et al., 2015). According to the interpretation guidelines proposed by Cicchetti (1994), the estimated test-retest reliability (i.e., stability) was good for the accuracy score, ρI = .624, F(691, 32.9) = 3.48, p < 0.001, 95 % CI [.40, .75], and fair for the words-per-minute score, ρI = .50, F(543, 6.7) = 4.46, p = 0.005, 95 % CI [.01, .73].
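The scoring rules above can be sketched as follows (variable and function names are ours, not the ProDi-L manual’s; latencies are in ms, and outlier-coded responses are marked None):

```python
import math
from statistics import mean, stdev

def code_person_outliers(latencies_ms, n_sd=2):
    """Code latencies deviating more than n_sd SDs from the person's mean
    logarithmic latency as missing (None)."""
    logs = [math.log(lat) for lat in latencies_ms]
    m, s = mean(logs), stdev(logs)
    return [lat if abs(math.log(lat) - m) <= n_sd * s else None
            for lat in latencies_ms]

def words_per_minute(latencies_ms, max_missing=0.10):
    """Responses per minute: n_responses * 60 000 ms / total latency in ms.
    Returns None when more than 10 % of trials are outlier-coded."""
    n_missing = sum(lat is None for lat in latencies_ms)
    if n_missing / len(latencies_ms) > max_missing:
        return None
    valid = [lat for lat in latencies_ms if lat is not None]
    return len(valid) * 60_000 / sum(valid)

# The worked example from the text: 10 responses in 10 000 ms total.
print(words_per_minute([1000] * 10))  # 60.0 words per minute
```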

URL: https://www.sciencedirect.com/science/article/pii/S0885201420301039

Intelligence, dual coding theory, and the brain

Allan Paivio, in Intelligence, 2014

3.1.1 Logogens and imagens

Morton's (1979) word-recognition model uses modality-specific visual and auditory input logogens and output logogens to account for recognition performance. In DCT, the variety of modality-specific representations is expanded to include auditory, visual, haptic, and motor logogens, as well as separate logogen systems for the different languages of multilingual individuals. Moreover, DCT treats logogens as hierarchical sequential structures of increasing length, from phonemes (or letters) to syllables, conventional words, fixed phrases, idioms, sentences, and longer discourse units: anything learned and remembered as an integrated language sequence. Imagens are likewise multimodal (visual, auditory, haptic, motor) representations, organized hierarchically into spatially nested sets that are most apparent in visual objects: for example, we see pupils within eyes within faces within rooms within houses within larger scenes, and so on. Importantly, the modality-specificity of logogens and imagens excludes abstract mental representations such as propositions. Thus the functional domains associated with stimulus meaning and cognitive abilities are conceptualized entirely in terms of modality-specific logogens and imagens.

URL: https://www.sciencedirect.com/science/article/pii/S0160289614001305

What are the components of word recognition?

The Four-Part Processing Model for word recognition is a simplified model that illustrates how the brain reads or recognizes words. It shows four processors at work in the reading brain: phonological, orthographic, meaning, and context processors (Moats & Tolman, 2019).

Which of the following is the process involving word recognition?

Reading is a multifaceted process involving word recognition, comprehension, fluency, and motivation.

What is an example of word recognition?

This is when students understand that letter combinations often make specific sounds, such as th, wh, thr, ou, ough, and ound. For example, when students see words like 'bound' or 'through' for the first time, they can recite and use them correctly without having to sound them out.

What are the main aspects of reading?

There are five aspects to the process of reading: phonics, phonemic awareness, vocabulary, reading comprehension, and fluency. These five aspects work together to create the reading experience. As children learn to read, they must develop skills in all five of these areas in order to become successful readers.