Many have investigated the sensitivity of face processing to both spatial frequencies and face orientation, but few have researched the sensitivity of face processing to the orientation of spatial frequencies. One recent exception is Yu, Chai, & Chung (2011), who investigated facial expression recognition with respect to the orientation of spatial filters and showed that most information is contained in the horizontal orientation. Here, we model the Yu, Chai, & Chung (2011) study using EMPATH, a feed-forward neural network that has been used to model facial expression recognition (Dailey, Cottrell, Padgett, & Adolphs, 2002). We used the NimStim set of facial expressions, which was the basis for the Yu, Chai, & Chung (2011) experiment, and followed their method of filtering images at different spatial orientations. Our results show that this simple, biologically plausible model produces results very similar to those of the human subjects in their study.
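The orientation-filtering step referred to above can be sketched as follows. This is a minimal illustration of an orientation filter applied in the Fourier domain; the Gaussian angular profile, the bandwidth value, and the orientation convention (horizontal image structure carrying its spectral energy along the vertical frequency axis) are illustrative assumptions, not the exact parameters of Yu, Chai, & Chung (2011).

```python
import numpy as np

def orientation_filter(image, center_deg, bandwidth_deg=23.0):
    """Keep spatial-frequency content near one orientation.

    A Gaussian orientation filter applied in the Fourier domain.
    center_deg is the filter's center orientation in the frequency
    plane and bandwidth_deg its angular standard deviation (both
    chosen here for illustration only).
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]   # vertical frequencies, shape (h, 1)
    fx = np.fft.fftfreq(w)[None, :]   # horizontal frequencies, shape (1, w)
    # Orientation of each frequency component, in degrees.
    theta = np.degrees(np.arctan2(fy, fx))
    # Angular distance to the filter center, folded into [0, 90] because
    # the spectrum of a real-valued image is symmetric about the origin.
    d = np.abs((theta - center_deg + 90.0) % 180.0 - 90.0)
    gain = np.exp(-0.5 * (d / bandwidth_deg) ** 2)
    gain[0, 0] = 1.0  # pass the DC component (mean luminance) unchanged
    return np.real(np.fft.ifft2(np.fft.fft2(image) * gain))

# Example: filter a face-sized grayscale image, retaining mostly the
# energy along the vertical frequency axis (horizontal image structure
# under the convention assumed above).
img = np.random.rand(64, 64)
filtered = orientation_filter(img, center_deg=90.0)
```

Sweeping `center_deg` over a range of orientations and feeding each filtered image to the recognition model is one way to reproduce the orientation-sensitivity manipulation described above.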