This paper is based on a study that evaluates the ability of early Building Performance Simulation (BPS) users to critically examine their simulation model results with minimal external support. The objective of the study was to deepen the current understanding of the challenges faced by early users of BPS tools in setting up reliable simulation models. The study builds upon an existing experiential-learning approach that calls for an "autopsy", or critical examination of simulation results, as an important step in creating better simulation users. We examine the stage where early users are explicitly asked to verify their model inputs/outputs and autonomously find ways to assess the reliability of their simulation model. This self-verification process requires the early users to articulate appropriate and practical diagnostic questions before proceeding with the verification itself. We evaluate the ability of early users, in this case graduate students, to independently conduct a satisfactory autopsy, compared against matrices of questions generated by experts (in this case, class tutors). Bloom's taxonomy was then used to assess the cognitive level at which the users chose to engage with the diagnostic activity. Most users engaged with the diagnosis at the "Evaluation" level, where a basis of comparison had to be brought in. The "Understanding" level, which required the early user to exhibit a grasp of the underlying physical phenomena or processes, was utilized the least in conducting the autopsy. Further work will examine the knowledge gap as a potential barrier to becoming reflective and conscientious BPS users.