The loosely symmetric (LS) model, a conditional-probability-like formula describing human symmetric cognitive biases, has proven effective in a wide range of areas including causal induction, learning (reinforcement, supervised, and unsupervised), game-theoretic situations, and digital game AI. However, despite its interesting mathematical properties, a complete rationale for the model has not been given. In this study, we analyze LS from the viewpoint of Bayesian statistics, in particular empirical Bayes methods. As a result, we show that the bias terms in LS, which make it deviate from ordinary conditional probability, are the hyperparameters of the Beta prior distribution that LS implicitly assumes. Given this analysis, LS is shown to describe various cognitive biases simultaneously, including the gambler's fallacy, the status quo bias, the framing effect, reference point dependence, and the reflection effect.
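As a minimal sketch of the empirical-Bayes mechanism referred to above (not the LS formula itself), the following illustrates how the hyperparameters of a Beta prior enter a probability estimate as additive pseudo-counts, i.e. as bias terms that deviate the estimate from the raw conditional frequency; the function name and parameter values are illustrative assumptions.

```python
def beta_posterior_mean(successes, trials, a, b):
    """Posterior mean of a Bernoulli parameter under a Beta(a, b) prior,
    after observing `successes` out of `trials` outcomes.

    The hyperparameters a and b act exactly like pseudo-counts added to
    the observed data, biasing the estimate away from the raw frequency
    successes / trials -- the role the analysis attributes to the bias
    terms in LS.
    """
    return (a + successes) / (a + b + trials)

# Raw frequency: 3 successes in 4 trials gives 0.75.
# Under a uniform Beta(1, 1) prior the estimate is pulled toward 0.5:
print(beta_posterior_mean(3, 4, 1, 1))  # (1 + 3) / (2 + 4) = 0.666...
```

The pull toward the prior mean a / (a + b) is strongest when few observations are available and vanishes as the number of trials grows, which is the qualitative behavior a biased-but-asymptotically-correct estimator should exhibit.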