There is a growing body of evidence that the human brain may be organized according to principles of hierarchical predictive coding. A current conjecture in neuroscience is that a brain organized in this way can effectively and efficiently perform genuine Bayesian inferences. Given that many forms of cognition seem to be well characterized as Bayesian inferences, this conjecture has great import for cognitive science. It suggests that hierarchical predictive coding may provide a neurally plausible account of how forms of cognition that are modeled as Bayesian inference may be physically implemented in the brain. Yet the jury is still out on whether the conjecture is true. In this presentation, we demonstrate that each key sub-computation invoked in hierarchical predictive coding potentially hides a computationally intractable problem. We furthermore identify ways in which computational modelers may or may not overcome these 'intractability hurdles.'
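As an illustrative sketch only (not drawn from the presentation itself), the generic source of such intractability can be made concrete: computing an exact Bayesian posterior over n binary latent variables by brute-force enumeration requires summing over all 2^n joint hypotheses, so runtime grows exponentially in n. The function name `exact_posterior` and the toy log-joint below are hypothetical choices for illustration.

```python
import math
from itertools import product

def exact_posterior(log_joint, n):
    """Exact Bayesian posterior over n binary latent variables by enumeration.

    The normalizing constant Z sums over all 2**n joint hypotheses, so the
    cost doubles with every variable added -- the exponential blow-up that
    makes exact inference intractable in general.
    """
    hypotheses = list(product([0, 1], repeat=n))  # all 2**n joint states
    log_weights = [log_joint(h) for h in hypotheses]
    z = sum(math.exp(w) for w in log_weights)  # normalizing constant
    return {h: math.exp(w) / z for h, w in zip(hypotheses, log_weights)}

# Toy (hypothetical) unnormalized log-joint favoring hypotheses with few 1s.
def toy_log_joint(h):
    return -float(sum(h))

posterior = exact_posterior(toy_log_joint, 4)  # already 16 terms at n = 4
```

At n = 4 the sum has 16 terms; at n = 40 it would have roughly 10^12, which is why practical models must resort to approximation, restricted model structure, or other strategies for clearing the intractability hurdles.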