CIA Triad

From a security management standpoint, there is a risk-based model called the CIA Triad: Confidentiality, Integrity, and Availability (CIA).

Confidentiality Defined

Confidentiality means that only authorized users and processes should be able to access or modify data.

Confidentiality is often simplified to mean encryption.  But there are three separate technology areas: encryption at rest, encryption in transit, and emerging technologies applying encryption during processing (a.k.a. confidential computing). This oversimplification is an artifact of pre-Zero-Trust siloed thinking.

In this older technological paradigm, encryption was deployed piecemeal across the infrastructure:

  • Encryption of Data at Rest: by Storage Engineers using the encryption technologies supported by the various vendor choices
  • Encryption of Data in Transit: by Network Engineers using technologies such as MACsec or WAN tunnels with IPsec, IWAN, DMVPN, or other SD-WAN technologies
  • Encryption of Data in Use: an emerging technology called Confidential Computing that closes gaps in data security while data is in use

However, confidentiality has always involved privileged access: verifying that the user accessing the data has the right to see or modify it.  So, the older operational approach treated the infrastructure work and the user access technology as independent issues.

As a result, to maintain data confidentiality, an enterprise needed multiple independent groups to all be firing on all cylinders.

The Zero Trust approach to confidentiality is to integrate controls across all of these silos.  This means implementing least-privilege access technologies such as role-based access control (RBAC) and even attribute-based access control (ABAC), an emerging technology standard that can apply context to permissions.
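
As a minimal illustration of least-privilege access, a role-based check might look like the sketch below; the roles, permissions, and user names are hypothetical.

```python
# Minimal RBAC sketch: roles map to permissions, users map to roles.
# All role, permission, and resource names here are hypothetical examples.

ROLE_PERMISSIONS = {
    "nurse":  {"patient_record:read"},
    "doctor": {"patient_record:read", "patient_record:write"},
    "admin":  {"patient_record:read", "patient_record:write", "user:manage"},
}

USER_ROLES = {
    "alice": {"doctor"},
    "bob":   {"nurse"},
}

def is_authorized(user: str, permission: str) -> bool:
    """Grant access only if one of the user's roles carries the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_authorized("bob", "patient_record:write"))    # False: nurses cannot write
print(is_authorized("alice", "patient_record:write"))  # True: doctors can write
```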

Confidentiality Examples

Loss of confidentiality is defined as data being seen by unauthorized users.  As a result, most of the cyber incidents in the press are examples of confidentiality breaches.

To fight this, we need authentication, authorization, and encryption.

Authentication covers a huge number of technologies and techniques, but it can be satisfied with multi-factor authentication (MFA).

This can consist of a combination of at least two of the following; a brief sketch combining two of these factors follows the list:

  • Something the user knows (e.g., password, PIN, or account number)
  • Something the user has (e.g., key or security token)
  • Something the user is (e.g., biometrics)
  • Somewhere the user is (e.g., location validated by GPS)
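
A minimal sketch of how two of these factors might be combined: something the user knows (a password checked against a stored PBKDF2 hash) plus something the user has (an RFC 6238 time-based one-time code from an authenticator app).  The secrets, iteration count, and login flow are illustrative assumptions, not a production design.

```python
import base64, hashlib, hmac, os, struct, time

def hash_password(password: str, salt: bytes) -> bytes:
    # Factor 1 (something the user knows): store only a salted PBKDF2 hash.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # Factor 2 (something the user has): RFC 6238 time-based one-time password.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time() // period))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Illustrative enrollment data (in practice these live in a credential store).
salt = os.urandom(16)
stored_hash = hash_password("correct horse battery staple", salt)
shared_secret = base64.b32encode(os.urandom(20)).decode()

def login(password: str, submitted_code: str) -> bool:
    knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
    has = hmac.compare_digest(submitted_code, totp(shared_secret))
    return knows and has   # both factors must pass

print(login("correct horse battery staple", totp(shared_secret)))  # True
print(login("correct horse battery staple", "000000"))             # False: second factor fails
```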

Authorization involves ‘need to know’ mechanisms, and sometimes this is as simple as having separate user IDs for admin access.  However, authorization can be more complex, and this is where the NIST guidance on ABAC (SP 800-162) comes in.  ABAC permits policies that differentiate not just on read/write access or specific data sets; they can also accommodate dynamic rulesets based on location or even a risk score computed from a series of risk-based attributes.
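
For illustration, a simplified ABAC-style decision in the spirit of that NIST guidance might combine subject, resource, and environment attributes into one policy check.  The attribute names, departments, and risk thresholds below are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Request:
    # Subject, resource, and environment attributes (all names are illustrative).
    role: str
    department: str
    record_department: str
    on_site: bool
    risk_score: float   # e.g., fed from a device/behavior analytics source

def abac_decision(req: Request) -> bool:
    """Allow only if the contextual attributes satisfy the policy."""
    if req.role not in {"doctor", "nurse"}:
        return False
    if req.department != req.record_department:   # need-to-know by department
        return False
    if not req.on_site and req.risk_score > 0.3:   # stricter threshold off-site
        return False
    return req.risk_score <= 0.7                   # deny very risky sessions outright

print(abac_decision(Request("doctor", "oncology", "oncology", True, 0.2)))    # True
print(abac_decision(Request("doctor", "oncology", "cardiology", True, 0.2)))  # False
```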

Encryption seems straightforward but can be very complex.  Consider that many current data centers use overlay technologies that do not support encryption.  While this may be viewed as a problem, it can normally be worked around using hardware technologies such as MACsec (802.1AE).  The trick is to step back and look at the problem holistically.

However, encryption requires managing a large number of keys.  As a result, you need to think through the process and make sure your plans include a comprehensive view of key management.
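
To make the key-management point concrete, here is a minimal envelope-encryption sketch using the third-party Python cryptography package: each record gets its own data key, which is wrapped by a key-encryption key that would, in practice, live in an HSM or KMS rather than in application code.  The record contents and field names are illustrative.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Key-encryption key (KEK): in practice held in an HSM/KMS, never in application code.
kek = AESGCM.generate_key(bit_length=256)

def encrypt_record(plaintext: bytes) -> dict:
    """Envelope encryption: a fresh data key per record, wrapped by the KEK."""
    dek = AESGCM.generate_key(bit_length=256)      # data-encryption key
    data_nonce, key_nonce = os.urandom(12), os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(data_nonce, plaintext, None)
    wrapped_dek = AESGCM(kek).encrypt(key_nonce, dek, None)
    return {"ct": ciphertext, "dn": data_nonce, "wdek": wrapped_dek, "kn": key_nonce}

def decrypt_record(blob: dict) -> bytes:
    dek = AESGCM(kek).decrypt(blob["kn"], blob["wdek"], None)
    return AESGCM(dek).decrypt(blob["dn"], blob["ct"], None)

blob = encrypt_record(b"patient chart 1234")
print(decrypt_record(blob))   # b'patient chart 1234'
```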

But confidentiality technology alone cannot solve all issues.  NetCraftsmen does a lot of work in healthcare and the infrastructure we develop often supports electronic medical records (EMR) systems. Many of these are old and cannot differentiate access to patient data as required by HIPAA regulations.  As a result, if you can see and modify records for one patient, the only thing preventing you from looking up data on someone you are not treating (and therefore not authorized to view) is an HR policy.

In these cases, the policy might be enforced through the examination of log files.  While after the fact, the presence of a forensic trail is a powerful deterrent to snooping.
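
A rough sketch of that after-the-fact review, assuming a hypothetical EMR access log and a care-team roster (the field names and identifiers are invented):

```python
# Hypothetical inputs: an EMR access log and a roster of which clinicians
# are assigned to which patients. Field names are illustrative only.
access_log = [
    {"user": "dr_smith", "patient_id": "P-1001", "action": "view"},
    {"user": "dr_smith", "patient_id": "P-2002", "action": "view"},
    {"user": "nurse_lee", "patient_id": "P-1001", "action": "view"},
]

care_team = {
    "P-1001": {"dr_smith", "nurse_lee"},
    "P-2002": {"dr_jones"},
}

def flag_possible_snooping(log, roster):
    """Flag accesses by users who are not on the patient's care team."""
    return [
        entry for entry in log
        if entry["user"] not in roster.get(entry["patient_id"], set())
    ]

for entry in flag_possible_snooping(access_log, care_team):
    print("Review needed:", entry)   # dr_smith viewed P-2002 without an assignment
```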

Integrity Defined

Integrity means that data should be maintained in a correct state, and nobody should be able to improperly modify it, either accidentally or maliciously.

Integrity is often simplified to mean checksums, backups and/or disaster recovery (DR).  But it literally means that data should be maintained in a correct state, and no person or process should be able to improperly modify it.

As a result, there is substantial crossover with confidentiality.  Encryption closes off several attack vectors, while role-based access control (RBAC) and attribute-based access control (ABAC) define user access to data.  Together, this creates an environment that segments access, so a user can’t modify what they can’t reach.

But integrity’s unique contribution is maintained through technologies such as digital certificates, digital signatures, hashing and, yes, backup and recovery.  The goal is to ensure the data is both trustworthy and tamper-proof.
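
A minimal sketch of those integrity primitives, again assuming the third-party Python cryptography package: a SHA-256 digest catches accidental corruption, while an Ed25519 signature over that digest lets a recipient detect tampering and attribute the data to the signer.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

document = b"2024-06-30 ledger totals: 1,204,117.55"

# Hashing: detects accidental corruption, but an attacker could recompute it.
digest = hashlib.sha256(document).digest()

# Digital signature: only the private-key holder can produce a valid signature,
# so anyone with the public key can verify both integrity and origin.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(digest)

def verify(doc: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(doc).digest())
        return True
    except InvalidSignature:
        return False

print(verify(document, signature))                                   # True
print(verify(b"2024-06-30 ledger totals: 9,999,999.99", signature))  # False
```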

The ultimate safeguard is immutable storage.  This is where copies of the data are made that cannot be modified.  This is emerging as a primary defense against ransomware attacks where the attacker encrypts the data and holds it hostage to extort money.  With one client we designed a solution moving the immutable backups to a colocation facility not visible from within their environment.  This kind of offsite storage is also a safeguard against any number of DR scenarios.
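
As one hedged illustration, object stores such as Amazon S3 support write-once retention through Object Lock.  The sketch below assumes a bucket created with Object Lock enabled and AWS credentials already configured; the bucket, key, and file names are placeholders.

```python
from datetime import datetime, timedelta, timezone
import boto3  # AWS SDK for Python; credentials and region assumed to be configured

s3 = boto3.client("s3")

# Write a backup object that cannot be modified or deleted until the retention
# date passes. The bucket must have been created with Object Lock enabled.
s3.put_object(
    Bucket="example-immutable-backups",
    Key="emr/backup-2024-06-30.tar.gz",
    Body=open("backup-2024-06-30.tar.gz", "rb"),
    ObjectLockMode="COMPLIANCE",   # retention cannot be shortened, even by the account root user
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)
```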

The Zero Trust approach to integrity is to integrate controls across all IT silos.  This means implementing least-privilege access technologies such as RBAC and even ABAC, an emerging technology standard that can apply context to permissions.  It also involves coordinating encryption technologies, certificate management, and backups that include immutable storage as needed.

Integrity Examples

Loss of integrity is defined as data being modified without authorization.  A public example of an integrity breach is defacing a public website to sully a firm’s reputation.  A more insidious example would be breaching an administrative account and changing file permissions to allow unauthorized modifications.

We need all the technologies deployed for confidentiality, along with operational excellence.  On the integrity-specific side, the technologies needed include digital certificate management, backups, disaster recovery planning, and immutable storage.

Availability Defined

Availability means that an authorized user should be able to access data wherever and whenever they need it.

Availability is often simplified to mean backups, disaster recovery (DR) and system design.  But it literally means that data should be available to users whenever and wherever it’s needed to support the business.

As a result, there is substantial crossover with integrity.

Again, as with integrity, the ultimate safeguard is immutable storage: unmodifiable copies of the data, ideally held offsite where they are not visible from within the production environment.  The same immutable, offsite backups that blunt ransomware also protect availability across any number of DR scenarios.

The Zero Trust approach to availability is likewise to integrate controls across all IT silos: least-privilege access through RBAC and ABAC, coordinated encryption technologies and certificate management, and backups that include immutable storage as needed.

Finally, we need to examine the system with an availability mindset.  This means a lot more than simply providing redundancy; it means thinking through what the end user needs from a readiness standpoint.

In reliability engineering we talk about 5 9s as the benchmark for a system being highly available (HA).  That number was inherited from the telecommunications service provider industry.  The literal definition is that the system is 99.999% available, which translates to an expectation of no more than about 5.26 minutes of downtime per year.

But what if you need continuous availability? And how does one maintain these systems?

Even more challenging, for the user to be able to interact with the data, several discrete systems must all be working:

  • Data storage technologies being used
  • The database and file storage systems utilizing the physical storage
  • The application suites involved
  • The network infrastructure between the users and the data

If each of these 4 systems is individually at 5 9s, their combination in series yields an availability of less than 5 9s.  To achieve 5 9s for the complete system, each component must be closer to 6 9s, as the short calculation below shows.
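
The arithmetic behind those statements is worth doing explicitly.  This short calculation assumes the four components are chained in series and fail independently:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes(availability: float) -> float:
    """Expected downtime per year for a given availability target."""
    return (1 - availability) * MINUTES_PER_YEAR

print(round(downtime_minutes(0.99999), 2))   # ~5.26 minutes/year for 5 9s

# Four serially dependent components (storage, database, application, network),
# each individually at 5 9s, assumed to fail independently:
combined = 0.99999 ** 4
print(combined, round(downtime_minutes(combined), 2))     # ~99.996%, ~21 minutes/year

# Pushing each component to 6 9s restores better than 5 9s overall:
combined6 = 0.999999 ** 4
print(combined6, round(downtime_minutes(combined6), 2))   # ~99.9996%, ~2.1 minutes/year
```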

And this does not even answer the question on continuous availability.

A system that can only be down for a few minutes per year rarely gets maintenance windows, so it is very likely to fall behind on patching and end up in terrible shape from a security and operations standpoint.  As a result, designing a system, or a system of systems, that can operate during maintenance permits it to remain current and up to date with security and other patches.

Availability Examples

Loss of availability is defined as being unable to access, modify, or add data.  A public example of a security breach based on availability is a distributed denial of service (DDoS) attack.  This type of attack consumes a firm’s Internet infrastructure, making it difficult to do business.

A more subtle example would be loss of access due to a systemic IT issue, a failed design, or a facilities loss.  Such a loss could be caused by power failure, weather, or a cybersecurity incident.

This brings up disaster recovery and business continuity planning.  What many organizations fail to plan for is the length of the recovery period and the amount of work needed to ensure the plans will work when needed.
