The ability to ask questions during learning is a key aspect of human cognition. While recent research has suggested common principles underlying human and machine “active learning,” the existing literature has focused on relatively simple types of queries. In this paper, we study how humans construct rich and sophisticated natural language queries to search for information in a large yet computationally tractable hypothesis space. In Experiment 1, participants were allowed to ask any question they liked in natural language. In Experiment 2, participants were asked to evaluate questions that they did not generate themselves. While people rarely asked the most informative questions in Experiment 1, they strongly preferred more informative questions in Experiment 2, as predicted by an ideal Bayesian analysis. Our results show that rigorous information-based accounts of human question asking are more widely applicable than previously studied, explaining preferences across a diverse set of natural language questions.
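The "ideal Bayesian analysis" mentioned above typically scores a question by its expected information gain: the learner's prior entropy over hypotheses minus the expected entropy of the posterior after hearing the answer. The sketch below is a minimal, self-contained illustration of that quantity; the function names, the dictionary-based representation of the hypothesis space, and the toy yes/no question are all illustrative assumptions, not the paper's actual model or stimuli.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, likelihood):
    """Score a question by expected reduction in entropy over hypotheses.

    prior: dict mapping hypothesis -> P(h)
    likelihood: dict mapping hypothesis -> {answer: P(answer | h, question)}
    (Both structures are illustrative, not the paper's implementation.)
    """
    answers = set()
    for h in likelihood:
        answers.update(likelihood[h])

    eig = entropy(prior.values())  # start from prior uncertainty
    for a in answers:
        # Marginal probability of observing answer a under the prior
        p_a = sum(prior[h] * likelihood[h].get(a, 0.0) for h in prior)
        if p_a == 0:
            continue
        # Posterior over hypotheses given answer a (Bayes' rule)
        posterior = [prior[h] * likelihood[h].get(a, 0.0) / p_a for h in prior]
        # Subtract the answer-weighted posterior entropy
        eig -= p_a * entropy(posterior)
    return eig

# Toy example: two equally likely hypotheses and a yes/no question
# whose answer perfectly discriminates between them.
prior = {"h1": 0.5, "h2": 0.5}
lik = {"h1": {"yes": 1.0}, "h2": {"no": 1.0}}
print(expected_information_gain(prior, lik))  # 1.0 bit
```

Under this measure, a question whose answer never changes the posterior scores zero, while the perfectly diagnostic question above earns the full one bit of prior uncertainty, matching the intuition that people should prefer more informative questions.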