The CIA model

As we have seen in the ISO 27000 definition, there are three words that are very important when speaking of security: Confidentiality, Integrity, and Availability. Even though many other models have been proposed over the years, the CIA model is still the most widely used. Let's look at its various parts.

Confidentiality

Confidentiality is the first part of the CIA model and is usually the first thing that people consider when they think about security. Many models have been created to guarantee the confidentiality of information, but by far the most famous and widely used is the Bell-LaPadula model. Implementing this model means dividing users into multiple levels and allowing every user at the nth level to read all documents located at any level lower than or equal to n and to write documents at any level higher than or equal to n. This is often summarized by the phrase no read up, no write down.
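The two Bell-LaPadula rules can be sketched in a few lines of code. This is a minimal illustration, not a real reference monitor; the level names and numbers are assumptions chosen for clarity.

```python
# Minimal sketch of Bell-LaPadula access checks ("no read up, no write down").
# Level names and numeric ranks are illustrative, not from any real system.

LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: int, object_level: int) -> bool:
    # Simple security property: a subject may read only at or below its level.
    return object_level <= subject_level

def can_write(subject_level: int, object_level: int) -> bool:
    # Star property: a subject may write only at or above its level,
    # so information never flows downward to less cleared users.
    return object_level >= subject_level

analyst = LEVELS["secret"]
assert can_read(analyst, LEVELS["confidential"])    # read down: allowed
assert not can_read(analyst, LEVELS["top_secret"])  # read up: denied
assert can_write(analyst, LEVELS["top_secret"])     # write up: allowed
assert not can_write(analyst, LEVELS["public"])     # write down: denied
```

Note that the two rules together mean information can only ever move upward in the hierarchy, which is exactly what protects confidentiality.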

A lot of security attacks try to break the confidentiality of data, mainly because doing so is very lucrative. Today, companies and governments are willing to pay thousands or even millions of dollars for information about a competitor's future products or a rival nation's secrets.

One of the easiest ways to ensure confidentiality is encryption. Encryption cannot solve all confidentiality problems, though: we have to be sure that the keys to decrypt the data are not stored with the data; otherwise, the encryption is pointless. Nor is encryption the solution to every problem, since encrypting a data set decreases the performance of any operation over it (read/write). Encryption also brings a possible problem of its own: if the encryption key is lost, access to the data set is lost with it, so encryption can become a hazard to the availability of the data.

You can think of confidentiality as a chain: a chain is only as strong as its weakest link. I believe this is one of the most important things to remember about confidentiality, because very often we do a lot of work and spend a lot of money hardening a specific link of the chain while leaving other links very weak, nullifying all our work and the money spent.

I once had a client that engineers and designs its products in a sector where the average R&D expense for a single product is well beyond a million USD. When I met them, they were very concerned about the confidentiality of one of their not-yet-released products, since they believed it involved several years of research and was more advanced than their competitors' projects. They knew that if a competitor obtained that information, it would be able to close the gap in less than 6 months. The main focus of this company was the confidentiality of the data; therefore, we created a solution based on a single platform (hardware, software, and configurations) with limited replication to maximize confidentiality, even at the cost of reducing availability. The data was divided into four levels based on importance, using, for the sake of clarity, names inspired by the US Department of Defense system, and for each level we assigned different kinds of requirements in addition to the authorization:

  • Public: All information at this level was public for everyone, including people inside the company and outsiders, such as reporters. This was information the company wanted to be public. No security clearance or requirements were needed.
  • Confidential: All information at this level was available to people working on the project, mainly manuals and generic documentation, such as user manuals, repair manuals, and so on. People needed to be authorized by their manager.
  • Secret: All information at this level was available only to selected people working on the project and was divided into multiple categories to fine-grain permissions. It was used mainly for low-risk economic evaluations and noncritical blueprints. People needed to be authorized directly by the project manager and to use two-factor authentication.
  • Top secret: The information at this level was available only to a handful of people working on the project and was divided into multiple categories to fine-grain permissions. It was used for encryption keys and all critical blueprints and economic and legal evaluations. People needed to be authorized directly by the project manager, to use three-factor authentication, and to be in specific high-security parts of the building.
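The four levels above can be expressed as a simple policy table checked at access time. This is an illustrative sketch of the scheme described in the text; the function name, level keys, and approver strings are assumptions, not the client's actual implementation.

```python
# Illustrative policy table for the four-level scheme described above.
# Factor counts and approvers follow the text; all names are hypothetical.

REQUIREMENTS = {
    "public":       {"approver": None,              "auth_factors": 0, "secure_room": False},
    "confidential": {"approver": "line manager",    "auth_factors": 1, "secure_room": False},
    "secret":       {"approver": "project manager", "auth_factors": 2, "secure_room": False},
    "top_secret":   {"approver": "project manager", "auth_factors": 3, "secure_room": True},
}

def access_allowed(level, factors_presented, in_secure_room, approved_by):
    req = REQUIREMENTS[level]
    if req["approver"] is not None and approved_by != req["approver"]:
        return False                      # missing the required authorization
    if factors_presented < req["auth_factors"]:
        return False                      # not enough authentication factors
    if req["secure_room"] and not in_secure_room:
        return False                      # top secret requires a secure room
    return True

assert access_allowed("secret", 2, False, "project manager")
assert not access_allowed("top_secret", 3, False, "project manager")  # wrong room
```

Keeping the policy in one table makes it easy to audit and to tighten a level without touching the enforcement code.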

All the information was stored on a single cluster, and encrypted backups made daily were shipped to three secure locations. As you can see, Top Secret data could not leave the building unless heavily encrypted. This helped the company keep its advantage over competitors.

Integrity

By integrity, we mean maintaining and assuring the accuracy and consistency of data during its entire lifecycle. The Biba integrity model is the best-known integrity model and works in exactly the opposite way to the Bell-LaPadula model. In fact, it is characterized by the phrase no read down, no write up.

There are some attacks structured to destroy integrity. There are two main reasons why an attacker would be interested in doing this:

  • A lot of data has legal value only if its integrity has been maintained for the entire life span of the data. An example of this is forensic evidence. So, an attacker could be interested in creating reasonable doubt about the integrity of the data to make it unusable.
  • Sometimes an attacker would like to change a small element of data in order to affect future decisions based on that bit of data. An example is an attacker who edits the value of some stocks so that an automatic trading program concludes that selling at a very low price is a good idea. As soon as the automatic trading program makes this transaction, the company (or bank) owning it loses a huge amount of money, and the loss is very hard to trace back to the attacker.
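One common way to detect this kind of tampering is a keyed message authentication code (HMAC): anyone who alters the data without knowing the key also invalidates the stored tag. A minimal sketch with Python's standard library, with an illustrative key and record:

```python
# Detecting data tampering with an HMAC. The key and the record are
# illustrative; in practice, the key must be stored away from the data.
import hashlib
import hmac

KEY = b"server-side secret, kept away from the data"

def tag(data: bytes) -> str:
    return hmac.new(KEY, data, hashlib.sha256).hexdigest()

record = b"ACME stock: 132.50 USD"
stored_tag = tag(record)                 # computed when the record is written

tampered = b"ACME stock: 1.32 USD"       # an attacker's edited copy
assert hmac.compare_digest(stored_tag, tag(record))        # intact: verifies
assert not hmac.compare_digest(stored_tag, tag(tampered))  # altered: detected
```

Note the use of `hmac.compare_digest` for the comparison, which avoids leaking information through timing differences.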

An example of integrity is the Internet DNS service, which is a very critical service whose core is composed of a few clusters that have to guarantee integrity and availability. Availability is really important here, because otherwise the Internet would be down for many users. However, integrity is even more important, because otherwise an attacker could change a DNS value for a big website or a bank and create a perfectly undetectable phishing attack, also known as pharming, on a global scale. Each of these clusters is managed by a different company or organization, with different hardware, different software, and different configurations. Availability has been implemented using multiple hardware platforms, software stacks, and configurations, to avoid a single faulty or hackable component bringing down the whole system. Confidentiality is not the focus of this system, since the DNS service does not contain any sensitive data (or, at least, it shouldn't). Integrity is guaranteed by a pyramidal system in which the top DNS servers (the root DNS servers) are trusted by all other DNS servers. Also, lately, DNS software has been adding support for encryption and for distrusting unknown DNS servers, to prevent DNS cache poisoning attacks, which have become more frequent.

Availability

Availability simply means that, at any given moment, a document that should be available has to be available. No matter what has happened to your server or to the main server farm, the data has to be available.

You can think of availability as a wire rope. A wire rope holds as long as at least one wire holds, so we can say that a wire rope is as strong as its strongest wire. Naturally, the fewer wires still in place, the more load each has to carry, so they become more susceptible to failure.
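The wire-rope intuition can be put in numbers: with independent replicas, the system is down only if every replica is down, so availability rises quickly with redundancy. The per-node figures below are illustrative.

```python
# Availability of a redundant system with independent, identical replicas.
# The per-node availability figures are illustrative.

def combined_availability(per_node: float, replicas: int) -> float:
    # P(at least one node up) = 1 - P(all nodes down)
    return 1 - (1 - per_node) ** replicas

single = combined_availability(0.99, 1)  # one node: 99% ("two nines")
triple = combined_availability(0.99, 3)  # three nodes: 99.9999% ("six nines")

assert abs(single - 0.99) < 1e-9
assert abs(triple - 0.999999) < 1e-9
```

The assumption of independence is exactly why the DNS example above uses different hardware, software, and configurations per cluster: replicas that share a flaw fail together, and the formula no longer applies.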

There is a type of attack that tries to reduce or knock out availability: the Denial of Service attack. This family of attacks, also known as DoS, or DDoS if it's distributed, has become very popular thanks to groups such as Anonymous, and can create huge losses if the target system generates profits for the company. Also, these attacks are often combined with attacks to steal confidential information, since DoS attacks create a huge amount of traffic and can easily be used as a diversion.

In February 2014, CloudFlare, a big content delivery network and distributed DNS company, was hit by a massive 400Gb/s DDoS attack that caused a huge slowdown in CloudFlare services. This was the single biggest DDoS attack in history (until the end of 2014, when this book is being written). Lately, huge DDoS attacks are becoming more frequent; in fact, from 2013 to 2014, the number of DDoS attacks over 20Gb/s doubled.

An interesting case I would like to relate here is the Feedly DDoS attack, which happened between July 10 and July 14, 2014. During this attack, Feedly's servers were attacked, and a person claiming to be the attacker asked the company to pay some money to end the attack, which Feedly affirms not to have paid. I think this case gives us a lot to think about. Many companies now rely completely on computers, so new forms of extortion could become popular, and you should start to think about how to defend yourself and your company.

Another type of DoS attack that is becoming more popular with the advent of public clouds, where you can scale up your infrastructure virtually without limit, is the Economic Denial of Sustainability (EDoS) attack. In this kind of attack, the goal is not to max out the resources, since that would be pretty difficult, but to make running the service economically unsustainable for the company under attack. This could even be a persistent attack in which the attacker increases a company's cloud bill by 10-20 percent without creating any income for the company. In the long run, this could make a company fail.
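A back-of-the-envelope calculation shows why this is dangerous: a persistent 10-20 percent inflation of the monthly cloud bill compounds into a significant yearly loss. The bill figures below are illustrative.

```python
# Back-of-the-envelope EDoS cost: attack traffic that silently inflates the
# monthly cloud bill. All figures are illustrative.

def yearly_edos_cost(monthly_bill: float, inflation: float) -> float:
    # Extra spend caused by attack traffic over twelve months.
    return monthly_bill * inflation * 12

assert round(yearly_edos_cost(50_000, 0.10)) == 60_000   # 10% of a 50k USD bill
assert round(yearly_edos_cost(50_000, 0.20)) == 120_000  # 20% doubles that
```

Because the extra traffic looks like growth, the loss can go unnoticed for months, which is what makes EDoS a persistent rather than an acute attack.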

Some considerations

As you can imagine, based on the CIA model, there is no way a system can meet 100 percent of the requirements, because confidentiality, availability, and integrity pull in opposite directions. For instance, to decrease the probability of a leak (that is, a loss of confidentiality), we can decide to use a single platform (hardware, software, and configuration) so that 100 percent of our effort can go towards hardening that single platform. However, to achieve better availability, we should create many different platforms, as different as possible, to be sure that at least one would survive an attack or failure. How can we handle this? We simply have to understand our system's needs and design the right mix of the two. I will go over a real-life example here that will give you a better understanding of matching your resources to your needs.

A real-world example

Recently, I helped a client figure out how to store files safely. The client was an international company with more than 10 buildings in as many countries. The company had had a few unhappy experiences that led it to treat keeping its data safe as a higher priority. Specifically, the following things had happened in the previous months:

  • Many employees wanted an easy way to share their documents between their devices and with colleagues, so they often used unauthorized third-party services
  • Some employees had been stopped at security controls in airports, and the airport security had copied their entire hard drives
  • Some employees had lost phones, tablets, and computers full of company information
  • Some employees had reported data loss after their computer hard drive failed and the IT team had to replace it
  • An employee had left the company without revealing his passwords, locking the company out of his data

As often happens, companies decide to change their current system when multiple problems occur, and they prefer a solution that solves all their problems at once.

The solution we came up with was to create a multiregional Ceph cluster, which provided the object storage we needed to put all the employees' data into. This gave us multizone redundancy, which was necessary to guarantee availability. It also allowed us to keep backups in only two places instead of at every site, which increased the availability of the backups and decreased their cost.

Also, client applications for computers, tablets, and phones were created to allow users to manage their files and automatically synchronize all files in the system. A nice feature of these clients is that they encrypt all data with a password that is dynamically generated for each file and stored on another system (in a different data center), encrypted with the user's GNU Privacy Guard (GPG) key. The user's GPG key is also kept in a Hardware Security Module in a different data center, so that the company can decrypt a user's data if they leave. This guaranteed a very high level of security and allowed a document to be shared between two or more colleagues.
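The per-file-key scheme described above is a form of envelope encryption: each file gets its own random key, and that key is in turn encrypted ("wrapped") with the user's long-term key and stored elsewhere. The sketch below shows only the structure of the scheme; the XOR keystream is a toy stand-in for the real ciphers (GPG/AES) and must never be used in practice.

```python
# Structure of envelope encryption: per-file data keys wrapped by a long-term
# user key. The XOR keystream cipher is a TOY for illustration only.
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy cipher: XOR data against a SHA-256-derived keystream (counter style).
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

user_key = os.urandom(32)   # stands in for the user's long-term GPG key
file_key = os.urandom(32)   # fresh key generated for this one file

document = b"Q3 blueprint, draft 2"
ciphertext = keystream_xor(file_key, document)   # stored in the main cluster
wrapped_key = keystream_xor(user_key, file_key)  # stored in another data center

# To read the file, unwrap the file key first, then decrypt the data.
recovered_key = keystream_xor(user_key, wrapped_key)
assert keystream_xor(recovered_key, ciphertext) == document
```

Splitting the ciphertext and the wrapped key across data centers is what makes a breach of either location useless on its own, and revoking a user only requires re-wrapping file keys, not re-encrypting the files themselves.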

The GPG key is also used to sign each file, to guarantee that the file's integrity has not been compromised.

To guard against the loss or copying of computers, all company computers have their hard drives completely encrypted with a key known only to the employee.

This solved all the technical problems. To make sure that people were trained well enough to keep the system safe, the company decided to give a 5-day security course to all its employees and to add 1 mandatory day of security refresher training every year.

No further incidents happened in the company.