A brief history of cloud computing
Posted on April 11, 2022 by Emmett Dulaney

This feature first appeared in the Spring 2022 issue of Certification Magazine.

It is difficult to find a networking-, security-, or even basic computing-related certification exam today that does not include one or more questions related to the cloud. Every exam candidate should expect to see an “aaS” on the end of at least one set of answer choices and be able to think through why that option might be the better one.

All of this raises the question of what brought the cloud furor about: Certification exams — and the computing world in general — have not always been so obsessed with cloud computing. Just 10 years ago, most people outside of the information technology (IT) sector would have had only a vague notion of what cloud computing is. So how did we get here?

In the beginning

To modestly oversimplify things, if we go back in time to a bit more than 40 years ago, the mainframe ruled most business environments. Users worked at dumb terminals (keyboards, monitors, and so forth) connected to a central computer on which everything was stored and accessed.

When it was time to purchase new technology, money was usually spent on buying the best central machine that could be afforded (whether it was a true mainframe, a minicomputer, or something similar), and relatively little was spent on the terminals.

The benefit of this environment was that all data was in one location within the organization and could be accessed from any terminal that could connect to that machine. Data was easily backed up and could be reasonably secured. The downside of this situation was that workers typically needed to be onsite to access business data.

Remote terminals were less common back then, and all of the users competed for system resources at the same time. Everything was on the same drive(s) and accessed the same processor(s), which could make the environment very slow during peak periods and left little room for easy scalability.

The PC revolution transformed the workplace and was a direct assault against the central computer model. Now every user was able to store data locally, process it locally, and transport it to and from the workplace.

Projects could be worked on while traveling to a client site, while at home on the weekend, while attending kids’ sporting events, and generally at any time and any location. What began as an attempt by users to wrest more control and ownership of data morphed into a way to increase productivity at the expense of “off hours.”

The benefit of this environment is that anyone can work on anything at any time and from any location. The downside of this situation is that users are not always the best guardians of data — they don’t back it up, they don’t protect it, and they have data on their local machines that needs to be worked on by others but cannot be accessed.

Tying IT all together

Networking as we most often think of it today evolved to address the issues brought on by the PC revolution and to create a workable hybrid model. By connecting PCs together, whether in the peer-based model used by a tiny office or in a more scalable client-server model, it became possible to bring the best features of the mainframe world into the new realm.

Now some data could be stored locally, but key files could be maintained centrally, making it possible for files to be accessed and backed up. Users could share both hardware and software resources, network administrators could add in additional security features, and so on.

As the move to network everything began to pick up steam, it made sense that instead of creating massive individual networks, it would be more efficient to utilize one existing global network: the Internet.

What was started by DARPA (Defense Advanced Research Projects Agency) in the 1960s has gone by several names. During most of its experimental phases, it was known as ARPANET (Advanced Research Projects Agency Network). When it began to be accessed by the public and started gaining a small amount of traction, it became known as the Internet.

Once HTTP took off — and it really became popular — the internet started being referred to as the World Wide Web, even though the web is but one component/service of it. No matter what we choose to call it, the internet is a global network of connected servers, devices, and networks running TCP/IP that can offer a plethora of services: hosting, storage, e-mail, and many others.

Coining a term

In technology, it is often necessary to create words as new services or offerings come into being. One of the best examples of this is “Google” — a proper noun and the name of one of the world’s largest tech companies — providing us with “google,” a verb that now means using a search engine to search for information online.

Sometimes these new words are used because nothing else would capture the essence of what is being described and sometimes they are created for marketing purposes — often in an attempt to convince the market that something new has been created when that may not actually be the case.

Sometimes both of these factors play a part. That is how we came up with terms like “the cloud” and “cloud computing,” and their origin merits some discussion. Technical manuals and documents have a history of using an image of a cloud to represent anything outside of what was under the direct control of an administrator or developer.

The image of a cloud predates the term “cloud” actually being used to mean anything specific. The following figure, for example, from CNE Short Course, a book on NetWare certification published in 1995, shows an image of a cloud representing the public network that data is traveling on.

The public network is outside the control of the administrator setting up, or managing, a Novell NetWare-based network. As one of the authors on the book, I can safely say that this image was merely following the convention of the time and was not intended to imply that there was anything then known as “the cloud,” per se.

Images of clouds fit with the general air/sky/heavens theme prevalent in networking lingo: Over a decade earlier, for example, the “ether” had turned into Ethernet as IEEE 802.3 and that technology began replacing most other forms of physical networking. Ever so slowly, the cloud as a noun began creeping into shared documentation.

Entering the lexicon

First, an article in the April 1994 issue of Wired magazine mentioned there was an “entire Cloud out there.” Then, in 1996, a document that purportedly circulated internally at Compaq talked about not just “the cloud,” but “cloud computing.”

Three years later, Salesforce incorporated the phrase into their marketing as they explained how users could access everything they needed from anywhere (home, client site, office, etc.) that an internet connection was available. If you could access the Internet, you could get what you needed to do your job on demand — and as a service. (Remember the “aaS” mentioned above?)

The now-defunct Enron touted some similar concepts as it promoted a world in which computing could be treated the same as a power or water utility. A few years after this — now we’re in the early 2000s — Amazon launched what went on to become Amazon Web Services (AWS) and used the cloud metaphor to market the leasing of its excess capacity.

That excess capacity — which gave them an economy of scale — could be used to host other web sites, to store data, to run programs, or to do almost anything else. An organization could run everything server-intensive from AWS and bypass up-front investment in its own servers and infrastructure altogether.

As all of this was happening, the network types previously referred to as the internet (public), intranet (private), and extranet (combination) started being marketed, respectively, as the public cloud, private cloud, and hybrid cloud. Oracle is often credited with differentiating the varying levels of service with its marketing of SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service), and those three levels have since become the standard.

Both Adobe and Google deserve a lot of credit for pioneering ways to make applications run smoothly within this environment: Adobe with its application suite, and Google with its Microsoft Office-like tools. Yet while a lot of companies contributed one or more key elements to the growth of the cloud as we know it today, nothing accelerated its adoption into daily life more than a non-company did: COVID-19.

When it suddenly became necessary to meet remotely, work remotely, learn remotely, buy remotely, and socialize remotely, our dependence on “the cloud” and “cloud computing” exploded. Possibilities that had once seemed questionable quite quickly became the accepted standard.

So … what is cloud computing?

It is easy to look at the progression we’ve just described and conclude that “the cloud” is just a marketing idiom for something that can be done via the internet. NIST (the National Institute of Standards and Technology) helps make a bit more of a distinction.

According to their definition, cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

The five essential characteristics detailed in NIST’s SP 800-145, and their elaboration, are:

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.

Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).

Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control over or knowledge of the exact location of the provided resources, but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.

Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.

Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
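
To make those characteristics a little more concrete, here is a minimal, hypothetical Python sketch that models them with plain objects. The CloudProvider class, its methods, and the tenant names are invented for illustration only and do not correspond to any real provider’s API; think of it as a toy stand-in for the kind of provisioning interface a cloud vendor exposes.

# Toy model of NIST's essential characteristics -- not any real provider's API.
from dataclasses import dataclass, field

@dataclass
class CloudProvider:
    pool_gb: int = 1000                          # resource pooling: one shared pool of capacity
    allocations: dict = field(default_factory=dict)
    usage_log: list = field(default_factory=list)

    def provision(self, tenant: str, gb: int) -> None:
        """On-demand self-service: a tenant grabs storage with no human in the loop."""
        if gb > self.pool_gb:
            raise RuntimeError("pool exhausted")
        self.pool_gb -= gb
        self.allocations[tenant] = self.allocations.get(tenant, 0) + gb
        self.usage_log.append((tenant, "provision", gb))   # measured service: every change is metered

    def release(self, tenant: str, gb: int) -> None:
        """Rapid elasticity: capacity scales back in and returns to the shared pool."""
        gb = min(gb, self.allocations.get(tenant, 0))
        self.allocations[tenant] = self.allocations.get(tenant, 0) - gb
        self.pool_gb += gb
        self.usage_log.append((tenant, "release", gb))

# Broad network access would normally sit in front of this object (an HTTPS API);
# here we call it directly to keep the sketch self-contained.
cloud = CloudProvider()
cloud.provision("tenant-a", 200)   # tenant A scales out ...
cloud.provision("tenant-b", 300)   # ... while tenant B draws from the same pool (multi-tenancy)
cloud.release("tenant-a", 150)     # ... and scales back in when demand drops
print(cloud.allocations, cloud.pool_gb)
print(cloud.usage_log)             # the metering record both provider and consumer can see

In a real service the provision and release calls would be reached over the network and the usage log would feed billing, but the shape of the interaction is the same: pooled capacity, self-service requests, elastic scale-out and scale-in, and metering on every change.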

It is those five concepts that separate the cloud from the Internet-based services that preceded it — and that open the door of possibilities for those that will follow.

The future

The explosive growth and lightning-fast adoption of the cloud are inspiring and point to any number of options and opportunities that can follow in a migration of computing to a more utility-like model. It is important to note, though, that no matter what features are said to make the cloud the cloud, it is built upon the internet.

For now, there are a host of security and trust issues inherent in that foundation. In order to continue to grow and thrive, cloud technologies will need to find ways to buttress security beyond what is currently available. More than anything else, the evolution of cybersecurity will dictate the evolution of cloud computing.

What can be said with certainty is this: Get used to cloud computing. Now that the cloud and its unique strengths have been made a part of daily life for millions of people, it will only continue to become as ubiquitous as its natural namesake.

References:

Cady, Heywood, Homer, Dulaney, Arnett, and Niedermiller-Chaffins, CNE Short Course, New Riders Publishing, 1995, p. 287

NIST SP 800-145, The NIST Definition of Cloud Computing, https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf

About the Author

Emmett Dulaney is a professor at Anderson University and the author of several books including Linux All-in-One For Dummies and the CompTIA Network+ N10-008 Exam Cram, Seventh Edition.

