Humans are capable of generalizing and learning new concepts from very little experience. They can build semantic structures from the concepts they acquire, learn appropriate inductive biases that later serve as priors for new tasks, and learn novel categories from very few examples. While recent advances in neural networks and other machine learning methods are beginning to approach human-level performance on several tasks, building computational models that replicate these abilities has proven difficult. We propose a model that combines powerful features extracted from a deep neural network with a semantic structure inferred through hierarchical Bayesian modeling. We demonstrate the capabilities of our model on three tasks: learning a new concept from a single example of a novel category, learning new categories from a few examples each, and inferring the semantic tree over an unlabeled set of novel objects.