Earlier this month, IT governance association ISACA announced a new portfolio of cybersecurity certifications. Of note, the seven new CSX certifications combine skills-based training with performance-based exams. Other IT organizations like CompTIA have also begun to experiment with performance-based testing in recent years, and there are plenty of advocates for making a strong move in that direction. Are we witnessing the beginning of the end for more traditional multiple choice certification exams?
Certification testing has for years been dominated by multiple-choice questions. The details can vary a little (pick the most correct answer, check all that apply, and so forth), and sometimes there’s a certain amount of work that must be done before selecting your answer ... but it usually boils down to choosing A, B, C, or D. In contrast, performance-based tests present the test taker with an actual IT problem similar to what might be encountered in the workplace. They must use their knowledge of whatever discipline is being tested to fix it, and a simulation of some sort is usually involved.
There are valid reasons for the shift in focus that favors performance-based testing over multiple choice testing, so let’s examine the pros and cons of both approaches.
Choose A, B, or C
Multiple choice testing is something everyone tends to immediately recognize. This familiarity allows test takers to focus on the task of answering questions, and there’s a steady pace to moving through the exam. It’s easy for a test taker to gauge progress and ensure that he or she finishes in the allotted time. Creating and administering the test questions is straightforward — the answers, expected duration of the test, and other factors are all known quantities. This allows a multiple choice exam to include far more questions, which in turn can provide a better overall view of the examinee’s knowledge.
There are problems with multiple choice testing, of course, just as there are problems with other types of testing. It can encourage rote memorization of details, leading to people studying to pass a test rather than learning the underlying material. Exam designers may resist this trend by including trick questions, which can trip up even skilled and knowledgeable test takers. In many cases, multiple choice testing does not encourage deep thinking: It is often easier to recognize the correct answer from its wording, rather than by recalling or articulating actual information. Due to the repetitive nature of such testing, multiple choice can also be tedious. Finally, the limited number of answer options for most questions means savvy test takers can often supply a correct answer without being entirely fluent in the test subject, simply by process of elimination. Exam designers may respond by making their questions more complex, but that doesn’t inherently make them better.
All of this leads to what is arguably the biggest concern with multiple choice testing, specifically in regard to IT certifications: Does it actually correlate well with skills, knowledge and ability, or is it possible to score well on the test without understanding many of the underlying concepts? Knowledge on its own only gets you so far; people who can apply that knowledge are what IT departments are really looking for. Fundamentally, there’s a risk that multiple choice testing demonstrates not whether someone is competent, but merely whether they are good at taking a test.
Take the correct action
Performance-based testing is intended to address many of the shortcomings of multiple choice testing. By using simulations of real-world situations, it can test the ability of the examinee to take appropriate action. In the best case, studying for such scenarios means test takers are learning the skills and tools that will be used in the workplace. For example, CompTIA advises the following: “To prepare for exams with performance-based questions, CompTIA encourages candidates to gain hands-on practice with the topics covered by the exam objectives.” In other words, the recommended way to study for performance-based testing is to gain practical experience.
That is both the blessing and curse of performance-based testing in a nutshell. It can provide better insight into the skills and training of the examinee, but gaining those skills will in general require more time, study and practical application. The best training is on-the-job training, but many of those who are looking for certifications are doing so in order to gain employment in the first place. It’s a classic Catch-22, and it effectively makes holding a certification even more valuable to those who are able to pass the test.
While the difficulty in studying for performance-based exams certainly doesn't invalidate their use, creating appropriate testing simulations is generally far more complex than writing a multiple choice question. The ideal performance-based scenario would involve a real-world problem that needs to be fixed, and the correct “answer” would be to address the problem or take the appropriate action. Performing such a task in a test environment, however, isn’t always practical. Consider, for example, a test scenario such as a malware infection of a PC or server. Properly cleaning off malware can be done in a variety of ways, but most of these would require much more than 15 minutes in the real world, and checking all of the possible infections can require hours in some cases. Simulations of real-world problems often oversimplify complex situations.
How real is real enough?
And with any simulation, the level of fidelity will vary, which inherently limits the possible correct answers. If we look closely at CompTIA's sample Performance-Based Question (formatting the D: drive in Windows 8), anyone familiar with Windows 8 will immediately recognize items that are missing — e.g. you can’t do anything at the Start Screen other than click “Desktop,” and once there the only viable option is to double-click the Computer icon on the desktop. In short, the fidelity of the simulation in this case is quite low, and it doesn't take much time at all to stumble into the single way available to complete the question.
Increasing the fidelity isn’t a simple or desirable solution in every case, however, as the answer still needs to be evaluated. Make the simulation real enough, and it eventually becomes difficult or even impossible to check the examinee’s answer, short of having a human proctor observe and evaluate the entire process. That’s something that might be done during a hiring interview, but seems highly impractical for certification testing. There’s clearly a tricky balance to be struck between fidelity and evaluation of the answer.
Ultimately, both modes of testing have their pros and cons. There doesn't seem to be a simple argument for using either method exclusively. As with any sort of education or training, it’s not possible to weigh or even consider every possible scenario that stems from a given subject. To that end, it may be good to recall that certifications confirm a base level of competency. Passing certification tests is intended to help the examinees learn enough to know how to get more information when a particular problem arises. Over time, competency and knowledge will ultimately improve and increase primarily through on-the-job experience.