Knowledge about social information is thought to derive from many different sources, such as interviews and formal relationships, and social networks can likewise be generated from such external information. Recent work has demonstrated that statistical linguistic data can account for findings previously attributed to external factors alone, such as perceptual relations. The current study explored whether language implicitly encodes information from which social networks can be extracted, by testing the hypotheses that individuals who are socially related are talked about together, and that individuals who are more closely socially related are talked about together more often. In a first analysis, using first-order co-occurrences of character names in the Harry Potter novels, we found that a multidimensional scaling (MDS) solution correlated with the characters' actual social network as rated by humans. In a second analysis, using higher-order co-occurrences, a latent semantic analysis (LSA) space was trained on all seven Harry Potter novels, and LSA cosine values were obtained for all character pairs as a measure of their semantic similarity. Again, an MDS analysis comparing the LSA data with the actual social relationships yielded a significant bidimensional regression. These results demonstrate that linguistic information indeed encodes social relationship information, and show that implicit information within language can be used to generate social networks.
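The first step of the pipeline described above, counting first-order co-occurrences of character names and deriving a similarity measure between characters, can be sketched as follows. This is a minimal illustration on a hypothetical toy corpus, not the study's actual materials or parameters: the sentences, the name list, and the use of cosine similarity over co-occurrence profiles are all assumptions made for demonstration purposes.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

# Toy corpus: each "sentence" is a list of tokens (illustrative only,
# not text from the novels).
sentences = [
    ["harry", "ron", "walked", "to", "class"],
    ["harry", "hermione", "studied"],
    ["ron", "hermione", "argued"],
    ["snape", "taught", "potions"],
    ["harry", "snape", "clashed"],
]
names = ["harry", "ron", "hermione", "snape"]

# First-order co-occurrence: count how often each pair of names
# appears within the same sentence.
cooc = Counter()
for sent in sentences:
    present = sorted(set(n for n in names if n in sent))
    for a, b in combinations(present, 2):
        cooc[(a, b)] += 1

def vector(name):
    # Co-occurrence profile of `name` against every name (self = 0).
    return [0 if other == name
            else cooc.get(tuple(sorted((name, other))), 0)
            for other in names]

def cosine(u, v):
    # Cosine similarity between two co-occurrence profiles.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

print(cooc[("harry", "ron")])                          # raw co-occurrence count
print(cosine(vector("harry"), vector("ron")))          # higher: shared contexts
print(cosine(vector("harry"), vector("snape")))        # lower: few shared contexts
```

In the study itself, such pairwise similarities (or, in the second analysis, LSA cosines) would then serve as input to an MDS analysis, whose configuration is compared to the human-rated social network via bidimensional regression.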