This is the continuation of my article published last month: "I passed my certification exam with a 96 percent. Now I know everything ... right?" In that piece I focused on the problem of deciding how much weight employers should give to the act of passing a certification exam. In this article, I will deal with the corollary. When someone has failed a certification exam, how much does that tell you about their knowledge of the topic being tested? Does the act of failing a certification exam suggest a fundamental lack of critical skills and understanding?
A significant percentage of certification candidates will fail an exam at some point in their careers. When this happens, what conclusions can be drawn from the experience? When someone passes an exam, the expectation is that they understand the information that was being tested and can apply that information in a real-world situation. The intent of last month's article was to shine a light on the various reasons that passing scores leave a degree of uncertainty about whether that is truly the case.
By contrast, if someone fails a certification exam, you can be reasonably certain that they do not know the information covered. In fact, the same four arguments that were used in the prior article against trusting a passing score still apply, and their implications are doubly damning to test takers when considered against a failing score. Once again, I will use the Oracle SQL Fundamentals exam as the hypothetical "failed" test.
People Cheat - Failing an exam does not automatically mean that the candidate did not cheat. I recall a specific incident where a candidate complained on a certification forum about having failed his exam because the production test had errors. As "proof," he posted a question and answer combination from the materials he had used to prepare for the exam. The posted material was from a brain dump, and the "correct" answer selected was wrong.
If someone uses a brain dump to study for an exam and still fails, it certainly does not mean the candidate is more knowledgeable than the exam score would indicate. It could mean that they were using materials with significant errors and failed to do any corroborating research on the answers. Alternately, it could mean that, despite having the questions and answers in advance, the test taker was unable to memorize enough of the material to generate a passing score. Neither option paints them in an attractive light.
Cramming is not Always Forever - The simple truth is that, in large part, passing a certification exam does require the memorization of a significant number of facts, syntax, commands, and so forth. The inability to pass an exam may be a sign that a candidate was unable to memorize and retain the tested information even for a short period of time. Not being able to do so for an exam does not guarantee they cannot learn and retain this information for a real-world position, but it is certainly not a positive indicator for a potential (or current) employee.
The Real World is not Multiple Choice - The real world is much harder than multiple choice. If someone cannot locate the correct answer from a field of five, it is difficult to argue that they can provide solutions in a situation where a list of answers is not supplied to them. Anyone who has the capability to understand real-world questions and come up with competent answers to them should find multiple choice tests on the same information to be a breeze.
Knowledge and Skill are not Synonyms - As noted last month, knowing SQL syntax does not guarantee that an individual can write complex SQL statements. The reverse, however, is not true. An individual who does not know SQL syntax is virtually guaranteed to be unable to write complex SQL statements. Not all certification exams are linked to this degree. However, most of the exams that I have taken that are geared toward entry-level individuals are built around core topics. It is difficult to imagine someone being able to perform the duties required by the associated job if they do not understand the tested information.
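To make the syntax-versus-skill distinction concrete, here is a small sketch. The schema and data are hypothetical and have nothing to do with any actual exam question; the point is only that the second query layers a join, a grouping, and a subquery on top of the basic syntax the first query uses, and someone who cannot produce the first has no path to the second. (Python's built-in sqlite3 stands in for an Oracle database here purely for illustration.)

```python
import sqlite3

# Hypothetical schema for illustration only -- not taken from any exam.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
CREATE TABLE employees (emp_id INTEGER PRIMARY KEY, dept_id INTEGER,
                        salary REAL);
INSERT INTO departments VALUES (10, 'Sales'), (20, 'Engineering');
INSERT INTO employees VALUES (1, 10, 40000), (2, 10, 42000),
                             (3, 20, 90000), (4, 20, 95000);
""")

# Basic syntax -- the kind of statement an entry-level exam tests directly.
simple = conn.execute(
    "SELECT salary FROM employees WHERE dept_id = 10"
).fetchall()

# A more complex statement builds on that same syntax: a join, a GROUP BY,
# and a correlated aggregate subquery to find departments whose average
# salary exceeds the company-wide average.
above_avg = conn.execute("""
    SELECT d.dept_name, AVG(e.salary) AS avg_salary
    FROM departments d
    JOIN employees e ON e.dept_id = d.dept_id
    GROUP BY d.dept_name
    HAVING AVG(e.salary) > (SELECT AVG(salary) FROM employees)
""").fetchall()

print(above_avg)  # [('Engineering', 92500.0)]
```

The complex query cannot even be attempted without the SELECT/WHERE fundamentals the simple one exercises, which is the sense in which a failing score on a fundamentals exam is a strong negative signal.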
Where does that leave us?
I have seen several people over the years indicate that Oracle certification exams cover a lot of material that "they" have never used in the real world. In the cases where I have known the individuals making the statement, the back story is that they took the test cold because they expected their job experience to carry them. When they failed the exam, they blamed the exam rather than their lack of preparation.
Even if it could be argued, however, that a given exam contained information of little use in the real world (an assertion I generally disagree with), it would not be a valid excuse for failing it. Instead, it would be an indicator that passing the exam was less valuable than would be the case if 100 percent of the material was relevant.
While the arguments presented in these articles seem to imply that only failing scores are worth paying any attention to, that is not the message I want readers to take from this analysis. There is no form of standardized testing I am aware of that provides for absolute certainty when assessing results.
Students with high SAT scores do not always excel in college - nor does a low score guarantee someone will do poorly. I could make similar arguments about numerous other such tests. Exams provide at best an indicator of potential performance. It is more useful to have such an indicator than to have nothing at all, but unwise to trust it completely.
The real test of whether someone will make a good database administrator, developer, etc., is to put them in a position where they have to use their knowledge to perform a real-world task. When someone has passed a given Oracle certification exam, the topics of that exam provide a very clear picture of the knowledge they should possess. The topics list from the certification exam can be used as a template for designing such a task. If they are unable to demonstrate the ability to utilize that information, then it is a reasonable indicator that their test results are misleading.