Virtual agents have quietly entered our lives across diverse everyday domains. Human-agent interaction can evoke reactions ranging from complete rejection to great interest. But do humans implicitly regard virtual agents as pure machines, or as anthropomorphic beings? We asked participants to train an error-prone virtual agent on a cognitive task and to reward or punish it. The agent showed human-like emotional facial reactions for the experimental group but not for the control group. We expected participants in the experimental group to give less harmful reinforcement and to hesitate longer before punishing. Additionally, we hypothesised that participants with higher empathy would show more compassion towards the agent and would therefore give more positive reinforcement and feel worse when punishing it. The results indicate that the agent’s expression of emotionality is not the decisive factor in eliciting compassion towards it. Rather, human empathy appears to be an important factor underlying compassion for virtual agents.