This paper presents a novel theory of lexical exceptionality within a probabilistic, constraint-based grammar. In Representational Strength Theory (RST), traditional faithfulness constraints are replaced by Phonological Form Constraints (PFCs), which encode lexicalized properties of a word. A series of simulations demonstrates that the weights of PFCs can be learned alongside the weights of markedness constraints, such that probability-matching behavior is predicted for novel words while real words are accurately represented, whether or not they are exceptions to the language's grammar. Because RST models the learning of the lexicon and the grammar simultaneously, it makes specific predictions about the grammar-lexicon relationship. Two predictions are explored here: (1) high-frequency exceptions should be more stable than low-frequency ones, and (2) features that are entirely predictable from the grammar should not be stored on individual words.