Many cognitive models are evaluated by implementing an artificial system (a program, a robot) that performs the concrete task the model addresses. Successful task performance by the program is then taken as proof of the adequacy of the proposed cognitive model. In general, however, this inference is not valid. The reason is that the transformation of the model from a textual or graphical form into a computer implementation is not transparent, which implies that phenomena and properties observed in the program cannot be predicated of the model. Transparency is lost because models are not expressed in a rigorous language and the transformations into implementations are not rigorous either. Hacks are introduced during the construction of the program to make it work, and they are not carried back into the model, thereby invalidating the model-implementation relation.
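The kind of divergence at issue can be illustrated with a minimal, entirely hypothetical sketch: a textual "model" that says only "the agent picks the option with the highest expected utility", and an implementation that works only because of an ad-hoc hack absent from that textual description. All names and utility values below are illustrative, not drawn from any particular cognitive model.

```python
# Toy "model" as stated in text: "the agent picks the option with the
# highest expected utility." The implementation below only behaves
# acceptably because of a hack that the textual model never mentions.

def expected_utility(option):
    # Hypothetical utilities standing in for whatever the model computes.
    return {"a": 0.4, "b": 0.4, "c": 0.1}[option]

def choose(options):
    best = max(options, key=expected_utility)
    # HACK: the textual model says nothing about ties between options,
    # so during testing a hard-coded preference for "a" was added to
    # make the program behave deterministically. This rule was never
    # carried back into the model, so properties observed in the
    # program (e.g. a systematic bias toward "a") cannot be predicated
    # of the model itself.
    if "a" in options and expected_utility(best) == expected_utility("a"):
        best = "a"
    return best
```

Without the hack, `choose(["b", "a"])` would return `"b"` (the first maximal option); with it, the program returns `"a"`. An observer of the running program would attribute a tie-breaking bias to the model that the model, as written, does not contain.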