This feature first appeared in the Spring 2021 issue of Certification Magazine.
I am puzzled. It seems as though news outlets, industry reports, and vendors galore keep talking more and more about large companies or organizations being hacked — yet very few of those businesses stop operations. Time and time again, we hear that hackers are inventing new methods of accessing data, avoiding detection, and wreaking havoc on anyone with a server, or even so much as a login ID.
Labor projections point to a current shortage of qualified security administrators and forecast that the shortage will get worse in years to come as demand outstrips supply by a wide margin. And yet, retail giant Target, the victim of a large-scale breach a few years ago, is doing quite well according to most financial analysts — as is almost every other company linked to a cyberattack in recent years.
If the situation is as dire as it is often portrayed, then why hasn't society descended into anarchy? Are businesses, governments, and organizations smarter than we give them credit for, actually winning the cybersecurity war while keeping their victories silent? Is behind-the-scenes cybersecurity technology actually much stronger than anyone is letting on?
Perhaps hackers tend to intentionally limit the scope — and severity — of their sophisticated attacks to stay under the radar. Perhaps hackers are far better at avoiding detection than anyone has yet realized. Perhaps hackers are so much fewer in number than we assume, and potential targets so rich in abundance, that only a fraction of hackable assets are truly at risk.
There may not be many measurable answers when it comes to evaluating the true tenor of cybersecurity's 'Cold War,' but there are questions, as well as some who are profiting from the unknowns. Media outlets get attention, businesses (and governments) get to hide information, schools get more students, and third parties such as insurance providers get a lucrative cottage industry.
A generation ago, the original Cold War, for all of its posturing and talk, stayed cold. And just as military institutions never had more abundant funding than during the Cold War, the institutions just mentioned stand to profit from the hype around a conflict that is difficult to measure, either in terms of its severity or its casualties. So who is really winning? Let's consider the battlefield.
Where I live, the local news focuses more on the weather than anything else. One television station routinely runs spots that make it sound as though the world is coming to an end: 'Ice! Snow! A whole lot more is coming our way. Details at 11.' I have used a snow blower once in the past three years, and have yet to use up a whole bag of sidewalk salt over that same period.
Not surprisingly, though, the newscast staffed by the weather alarmists has the highest rating of any in the area. They consistently (and successfully) scare viewers into watching at 11 to find out just how bad conditions may be.
In the same way that predictions of terrible weather can influence TV ratings, coverage of personal data at risk can do the same, and this leads to a bit of sensationalism on the part of news agencies.
This is not to minimize the very real risk and harm that data attacks pose. It's certainly worth considering, however, that the potential exists for the issue to be blown out of proportion if there is the possibility that highly visible news coverage will attract more readers, or viewers, or (perhaps most critically) advertisers.
If we entertain the possibility that self-interest on the part of news agencies factors into the sensationalism equation, then it is a natural extension to allow that vendors may have similar motives.
Those who market products that aid in intrusion detection and prevention (as well as malware protection and related services) naturally have the ability to sell more product when there is more of a need and/or when that need is better known. In marketing literature, it is widely accepted that fear is a powerful emotion that can be used quite effectively to increase sales.
Lack of transparency
At the other end of the spectrum from the news agencies and security vendors are the organizations that have had their data or systems jeopardized. They are at the opposite end because they don't want to be on the evening news, they don't want to share their story, and they don't want to focus on what led to their being vulnerable and exposed.
This desire to keep quiet and to try to minimize what happened as much as possible leads to a lack of transparency on the part of compromised businesses and organizations and makes it difficult to know the real scale of the problem.
There are times when there is an absolute need for a lack of transparency — such as when a criminal investigation is ongoing. Many companies that announce a cybersecurity breach do so long after it happened and attribute the time delay to attempts to identify and catch the perpetrators without tipping them off ahead of time.
While there is likely legitimacy to this (as opposed to their just hoping no one would need to be alerted to the problem), it still results in a lack of transparency which makes it impossible to truly — objectively — grasp the scope of the problem. How many organizations have been compromised that we never learn of? Is it one or more than 1,000? Without an answer, it is hard to put the true scale of the war into perspective.
Need to feed the machine
Education is big business and there is money in training individuals to become IT security professionals. As long as companies continue to post positions with good salaries for security professionals, and as long as projections continue to show a shortage of these individuals in the future, then there will be institutions promoting the discipline. Those institutions range from traditional four-year colleges and universities to training centers, certification providers, and everyone in between.
I do not mean to imply that this is wrong — only that it skews the view. It is not wrong in that it is no different than any other field: Shortages lead to increased demand and higher wages. Higher wages and higher demand lead to more individuals gaining those skills, thus increasing the labor supply.
An increase in labor supply eventually leads to a decrease in wages and the market settles on a new, lower, equilibrium pay rate. I am old enough to have lived through the bubble for NetWare-certified administrators, as well as a few others that came after that, and recognize that the demand for more qualified labor creates a wealth of opportunities for educational providers.
The cost of education in this field pales in comparison to fields that require heavy investment in equipment and training materials, so those in the education business won't object to riding this wave as long as they can. Just as the media can profit from a scare, educational institutions can profit from a scarcity — and the more they sell it the better.
Transferring the risk

If every business that incurred a cybersecurity-related loss had to pay the full cost of that loss, they would shut their doors very quickly or enforce measures that exponentially decrease their risk (such as multi-factor authentication, 64-character passwords, and so forth).
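The word "exponentially" is not an exaggeration here. A rough, purely illustrative calculation (assuming an attacker must brute-force every combination and that passwords draw from the 95 printable ASCII characters — both simplifying assumptions, not a claim about any real attack) shows how quickly length alone inflates an attacker's workload:

```python
# Illustrative only: compare brute-force search spaces for passwords of
# different lengths, assuming 95 printable ASCII characters per position.
ALPHABET_SIZE = 95

def search_space(length: int) -> int:
    """Number of candidates an exhaustive brute-force attacker must consider."""
    return ALPHABET_SIZE ** length

short_pw = search_space(8)    # a typical 8-character password
long_pw = search_space(64)    # the 64-character password mentioned above

# Each added character multiplies the attacker's work by 95, so going
# from 8 characters to 64 grows the search space by a factor of 95**56.
print(f"8 characters:  {short_pw:.3e} candidates")
print(f"64 characters: {long_pw:.3e} candidates")
```

Real attackers use dictionaries and leaked-credential lists rather than pure brute force, so this overstates the protection in practice — but the exponential shape of the curve is why length and multi-factor requirements are such cheap, effective controls.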
The fact that they don't, even in the presence of some very highly publicized breaches, points to the likelihood of risk being transferred elsewhere. Two entities making this transference possible are third-party partners and insurance companies.
As for the first entity, the news at the time of this writing is abuzz with reports from Malaysia Airlines that a security breach compromised nine years' worth of personal data from their frequent flyer program's database (including date of birth, contact information, and so on). The fault is not really theirs, they assert, as that program (called Enrich) is managed by a third-party partner.
Singapore Telecommunications Limited (Singtel) also placed the blame on a third-party file sharing provider when they disclosed a breach a few weeks earlier. By using a third-party IT provider, some of the risk (and, thus, the blame) can at least in theory be transferred elsewhere.
Insurance providers make it possible to transfer some of the risk to them through Data Breach and Cyber Liability Insurance policies. While the benefits they offer overlap a bit, data breach insurance is typically more focused on PII (personally identifiable information) or PHI (personal health information) data being lost or stolen.
Hartford Insurance, for example, offers a policy for small businesses that includes notifying those affected, hiring a PR firm, and providing credit monitoring. Cyber liability insurance is intended to offset financial losses due to the attack(s) and cover the costs of lawsuits or investigations that follow. Cyber liability policies often also cover cyber extortion, should data be held hostage.
Regardless of the name the policy is marketed under, providing insurance and third-party support for such vulnerabilities can lead to what is known in economics as moral hazard: a reduction in the incentive to guard against risk when protected from its consequences.
In other words, if insurance covers the losses when someone accesses proprietary data because passwords are weak, there really isn't much of an incentive to make the passwords strong. After all, enforcing stronger controls might result in a business or institution losing some customers.
What lies ahead?
While I am certain there are many other factors that come into play, these four in particular distort our ability to accurately assess the true scale and severity of the world's ongoing cybersecurity problem. As long as they continue to do so, it will be difficult to ascertain whether businesses, governments, and organizations are winning the war, whether hackers are fewer in number than purported, or whether we are in worse shape than anyone ever imagined.