We develop and implement a new approach to using color information for object and scene recognition, inspired by the characteristics of color- and object-selective neurons in the high-level inferotemporal cortex of the primate visual system. In our hierarchical model, we introduce a new dictionary of features that represent visual information as quantized color blobs preserving coarse, relative spatial information. We evaluate the model on several datasets, including Caltech101, Outdoor Scenes, and Underwater Images. Combining our color features with (grayscale) shape features leads to significant increases in performance over shape or color features alone. With our model, performance is significantly higher than when color is used naively, i.e., by concatenating the channels of various color spaces. This indicates that using color information per se is not enough to produce good performance, and that it is specifically our biologically inspired treatment of color that yields the significant improvement.
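To make the contrast concrete, the following is a minimal sketch of the two kinds of color representation contrasted above: a quantized color-blob feature that pools color assignments over a coarse spatial grid (preserving rough relative position), versus naive concatenation of raw color channels. The fixed prototype dictionary and 2x2 grid here are illustrative assumptions, not the paper's actual learned dictionary or architecture.

```python
import numpy as np

# Hypothetical dictionary of prototype colors (RGB); an assumption for
# illustration only -- the actual model's dictionary is part of its design.
PROTOTYPES = np.array([
    [255, 0, 0], [0, 255, 0], [0, 0, 255],
    [255, 255, 0], [255, 255, 255], [0, 0, 0],
], dtype=float)

def quantize(image):
    """Assign each pixel to its nearest prototype color (vector quantization)."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(float)
    # Squared Euclidean distance from every pixel to every prototype.
    d = ((pixels[:, None, :] - PROTOTYPES[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1).reshape(h, w)

def blob_feature(image, grid=2):
    """Coarse spatial histogram of quantized colors: one normalized histogram
    per grid cell, so the feature keeps rough relative blob positions."""
    labels = quantize(image)
    h, w = labels.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = labels[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            hist = np.bincount(cell.ravel(), minlength=len(PROTOTYPES))
            feats.append(hist / cell.size)
    return np.concatenate(feats)

def naive_feature(image):
    """Naive baseline: flatten and concatenate the raw color channels."""
    return image.reshape(-1).astype(float) / 255.0

# Toy image: a red blob top-left, a blue blob bottom-right.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[:4, :4] = [255, 0, 0]
img[4:, 4:] = [0, 0, 255]

f = blob_feature(img)
print(f.shape)  # (24,) = 4 grid cells x 6 prototype colors
```

Unlike the naive flattened channels, the blob feature is low-dimensional and invariant to small shifts within a grid cell while still recording that the red blob is top-left and the blue blob is bottom-right.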