Four biases (or constraints) are known to guide word learning by infants: the whole-object bias, the noun-category bias, mutual exclusivity, and shape similarity. These biases can be understood as manifesting or invalidating the logical types that relate an object to its class, or a token to its type. We can therefore suppose that the biases are closely connected to self-reference and to the flexibility of human thought. Because the biases partly contradict one another, some mechanism must adjust among them. We propose the loosely symmetric (LS) model as a plausible such mechanism. LS, a heuristic that describes human symmetric cognitive biases in the form of conditional probabilities, has been shown to be effective in a wide range of areas, including causal induction; reinforcement, supervised, and unsupervised learning; game-theoretic situations; and digital-game AI. An infant agent that uses LS to infer what to learn shows not only efficient word learning but also appropriate use of the acquired knowledge for communication. The agent also appropriately adjusts which bias prevails.
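As a minimal illustration of one of the biases named above, the sketch below shows how mutual exclusivity constrains reference assignment: a learner assumes each object has exactly one name, so a novel word is mapped to an object with no known label. This is a hypothetical toy function, not the paper's LS model or agent.

```python
def mutual_exclusivity(lexicon, novel_word, objects):
    """Pick the referent of novel_word under a mutual-exclusivity bias.

    lexicon: dict mapping known words to objects.
    objects: objects currently visible in the scene.
    Returns the first object without a known name, or None if all are named.
    """
    named = set(lexicon.values())  # objects that already have a name
    # The bias: a new word should label an as-yet-unnamed object.
    unnamed = [o for o in objects if o not in named]
    return unnamed[0] if unnamed else None

lexicon = {"ball": "ball_object", "cup": "cup_object"}
scene = ["ball_object", "cup_object", "whisk_object"]
print(mutual_exclusivity(lexicon, "dax", scene))  # -> whisk_object
```

In the paper's framing, such a rigid rule conflicts with the other biases (e.g. the whole-object bias), which is why an arbitration mechanism like LS is needed rather than any single hard constraint.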