Rogers, Lambon Ralph, Hodges, and Patterson (2004) studied two-alternative forced-choice visual lexical decision performance in patients with semantic dementia. For item pairs in which the target word was more “typical” (i.e., higher in bigram and trigram frequency) than the foil (all foils were pseudohomophones), lexical decision performance was good and was unaffected by word frequency. For item pairs in which the target word was less “typical” (i.e., lower in bigram and trigram frequency) than the foil, lexical decision performance was worse and was affected by word frequency, being particularly inaccurate when the word targets were low in frequency. We show (using as materials all the monosyllabic items used by Rogers and colleagues) that the same pattern of results occurs in the lexical decision performance of the DRC (dual-route cascaded) computational model of reading when the model is lesioned by probabilistic deletion of low-frequency words from its orthographic lexicon. We argue that the PDP (parallel distributed processing) computational model of reading used by Woollams, Plaut, Lambon Ralph, and Patterson (2007) to simulate reading in semantic dementia cannot simulate this lexical decision result. We take this, in conjunction with previous work on computational modelling of reading aloud in surface dyslexia, phonological dyslexia, and semantic dementia using the DRC and PDP reading models, to indicate that the DRC model accounts better than the PDP model for what is known about the various forms of acquired dyslexia.
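The lesioning procedure mentioned above, probabilistic deletion of low-frequency words from the orthographic lexicon, can be sketched as follows. This is a minimal illustrative sketch, not the DRC implementation itself: the function name, the square-root frequency scaling, and the toy frequency values are all assumptions introduced here for exposition.

```python
import random

def lesion_lexicon(lexicon, severity=0.5, seed=0):
    """Probabilistically delete lexical entries, with low-frequency
    words more likely to be removed.

    Hypothetical sketch: the deletion-probability formula is an
    assumption, not the rule used in the actual DRC simulations.

    lexicon: dict mapping word -> frequency (e.g. counts per million).
    severity: scales overall deletion probability (0 = no lesion).
    """
    rng = random.Random(seed)
    max_freq = max(lexicon.values())
    lesioned = {}
    for word, freq in lexicon.items():
        # Deletion probability falls as relative frequency rises,
        # so low-frequency entries are lost first as severity grows.
        p_delete = severity * (1 - (freq / max_freq) ** 0.5)
        if rng.random() >= p_delete:
            lesioned[word] = freq
    return lesioned

# Toy lexicon with illustrative (not corpus-derived) frequencies.
lexicon = {"the": 60000, "house": 500, "yacht": 20, "gnome": 3}
damaged = lesion_lexicon(lexicon, severity=0.8)
```

On such a scheme, the highest-frequency entries survive almost every lesion while low-frequency entries are frequently lost, which is the property that produces frequency-sensitive lexical decision failures once the surviving lexicon is consulted.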