These notes are directly copied from my Notion. Please excuse any formatting issues.

Access Control Matrix

  • An access control matrix is a table that defines the access permissions between specific subjects and objects. The matrix is a data structure that the operating system can use as a lookup table when deciding whether to grant an access request.
Users  | Data Access File #1 | Data Creation Application
-------|---------------------|--------------------------
John   | Read/Write          | Execute
Alex   | Read                | Execute
Becky  | None                | None
  • Rows show the capabilities for each subject, a.k.a. a capability list. Columns show the access control list (ACL) for each object.
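The matrix above can be sketched as a nested mapping that supports both views: the row (capability list) and the column (ACL). This is an illustrative Python sketch, not any particular OS's data structure:

```python
# Access control matrix as a nested dict: subjects are rows, objects are
# columns. The entries mirror the example table above.
matrix = {
    "John":  {"Data Access File #1": {"Read", "Write"}, "Data Creation Application": {"Execute"}},
    "Alex":  {"Data Access File #1": {"Read"},          "Data Creation Application": {"Execute"}},
    "Becky": {"Data Access File #1": set(),             "Data Creation Application": set()},
}

def capability_list(subject):
    """Row view: everything one subject can do (a capability list)."""
    return matrix[subject]

def access_control_list(obj):
    """Column view: which subjects can do what to one object (an ACL)."""
    return {subj: perms[obj] for subj, perms in matrix.items() if perms[obj]}

def is_allowed(subject, obj, action):
    """Table lookup the OS would perform on an access request."""
    return action in matrix.get(subject, {}).get(obj, set())
```

Note how the same structure answers both questions: `capability_list` slices a row, `access_control_list` slices a column.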

Bell-LaPadula

  • Confidentiality
  • DoD approved

Properties:

  • Simple Security Property - No Read UP (prevents subjects from reading information above their own clearance level)
  • * (Star) Property - No Write DOWN (prevents information from leaking to lower levels)
  • Strong * Property - Same Level (reading and writing happen only at the subject’s own level)
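The three properties reduce to simple comparisons between clearance levels. A hedged sketch, using an illustrative set of level names (higher number = more sensitive):

```python
# Bell-LaPadula checks over numeric sensitivity levels. The level names
# here are illustrative, not mandated by the model.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level, object_level):
    # Simple Security Property: no read up
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # * (Star) Property: no write down
    return LEVELS[subject_level] <= LEVELS[object_level]

def can_read_write(subject_level, object_level):
    # Strong * Property: read/write only at the same level
    return LEVELS[subject_level] == LEVELS[object_level]
```

A Secret-cleared subject can read Confidential data (read down) but cannot write to it, since a write down could leak Secret information.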

Clark-Wilson Model

  • Commercial model
  • Keep users out of your stuff or they’ll break it.
  • Input validation through an application or intermediary. Do not let end user access database/back-end
  • Enforces well-formed transactions through the use of the access triple:
    • user → transformation procedure → constrained data item (CDI)
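The access triple can be sketched as a single mediated entry point: users never touch the CDI directly, only through a registered transformation procedure. All names below (the balances CDI, the `deposit` TP, the users) are hypothetical:

```python
# Clark-Wilson access triple sketch: user -> TP -> CDI.
balances = {"acct-1": 100}           # the constrained data item (CDI)

AUTHORIZED = {("alice", "deposit")}  # (user, TP) pairs allowed to run

def deposit(cdi, acct, amount):
    """A well-formed transaction: keeps the CDI internally consistent."""
    if amount <= 0:
        raise ValueError("deposits must be positive")
    cdi[acct] += amount

TPS = {"deposit": deposit}           # registered transformation procedures

def run_tp(user, tp_name, *args):
    """The only entry point to the CDI: enforces the access triple."""
    if (user, tp_name) not in AUTHORIZED:
        raise PermissionError(f"{user} may not run {tp_name}")
    TPS[tp_name](balances, *args)
```

Because every change goes through `run_tp`, the CDI can only move between consistent states, and only at the hands of authorized users.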

3 Integrity Goals

  • Prevent unauthorized users from making modifications

  • Prevent authorized users from making improper modifications (enforced through separation of duties)

  • Maintain internal and external consistency (enforced through well-formed transactions)

Defense in Depth

a.k.a. Layered Defense or Onion Defense


Defense in depth is a security strategy that involves implementing multiple layers of defense to protect against potential threats. The idea behind defense in depth is that if one layer of defense fails, there are still additional layers to prevent a complete security breach.

The multiple layers of defense typically include physical security measures, network security measures, and data security measures. Physical security measures could include security cameras, access control systems, and security guards. Network security measures could include firewalls, intrusion detection systems, and antivirus software. Data security measures could include encryption, data backups, and access controls.

Implementing a defense-in-depth strategy can significantly strengthen an organization’s overall security posture. However, it is not a one-time solution: the layers require ongoing monitoring and maintenance to remain effective against new and emerging threats.

In short, defense in depth minimizes the risk of a complete breach by ensuring that no single point of failure can defeat the organization’s security.

Least Privilege

Least privilege is a computer security concept that requires users and processes to have only the minimum level of access necessary to perform their duties. This principle is based on the idea that limiting user privileges to the bare minimum will reduce the risk of unauthorized access, data breaches, and malware infections.

In practice, least privilege means that users should only have access to the files, folders, applications, and systems that they need to do their job. This can be enforced by implementing role-based access control (RBAC) policies, which assign permissions based on job functions or responsibilities.

For example, a receptionist may only need access to the company’s visitor log system, while a software developer may require access to the source code repository, build servers, and testing environments. By limiting access to only the necessary resources, the risk of accidental or intentional misuse of sensitive information is greatly reduced.
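The receptionist/developer example maps naturally onto an RBAC sketch, where each role carries only the permissions that job needs. Role and permission names below are made up for illustration:

```python
# Minimal RBAC sketch of least privilege: permissions are granted per role,
# and each role holds only what that job function requires.
ROLE_PERMISSIONS = {
    "receptionist": {"visitor_log:read", "visitor_log:write"},
    "developer":    {"repo:read", "repo:write", "build:run", "test_env:use"},
}

USER_ROLES = {"dana": "receptionist", "sam": "developer"}

def has_permission(user, permission):
    """Check a user's access via their role; unknown users get nothing."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Changing what a job can do means editing one role, not hunting down every user who holds it.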

In addition to RBAC, other security measures such as firewalls, antivirus software, and intrusion detection systems can complement least privilege policies and provide additional layers of protection against cyber threats.

In conclusion, implementing least privilege policies is a crucial step in securing sensitive information and maintaining the integrity of computer systems. By limiting user privileges to the minimum level necessary, organizations can greatly reduce the risk of data breaches, insider threats, and other cyber attacks.

Need to Know

The need to know principle is a security concept that limits access to sensitive information to only those who need it to perform their job duties. This principle ensures that confidential or sensitive data is only accessible on a need-to-know basis, reducing the risk of unauthorized access and data breaches. In practice, need-to-know policies can be enforced by implementing access controls and user authentication mechanisms. This principle is closely related to the least privilege principle, which also aims to limit access to only the necessary resources.

Difference between Least Privilege and Need to Know

“Need to know” is commonly applied in scenarios where there is a need to protect sensitive data, such as classified government information, trade secrets, or personally identifiable information (PII). It helps limit exposure and minimize the risk of unauthorized disclosure or misuse of sensitive information. Access is granted only when there is a legitimate and justifiable need, and it is strictly controlled and monitored.

“Least Privilege” applies to various aspects of information systems, including user accounts, processes, applications, and systems. It ensures that users have only the necessary permissions required to carry out their specific job functions, limiting their capabilities to the minimum needed for effective performance. This principle helps reduce the attack surface and potential impact of unauthorized or malicious activities.

While “Need to know” primarily deals with access to information, “Least Privilege” is broader in scope, encompassing access privileges across various system components. “Least Privilege” can be seen as an implementation of the “Need to know” principle in terms of access permissions.

Privacy by Design

Privacy by Design (PbD) is a framework and approach that promotes incorporating privacy considerations into the design and development of systems, products, and services from the very beginning. It aims to proactively address privacy requirements and protect individuals’ personal information throughout the entire lifecycle of a project.

Principles of Privacy by Design

  • Proactive not Reactive approach; Preventative not Remedial
  • Privacy as the default setting
  • Privacy is Embedded into Design; not bolted on or added on later.
  • Full functionality - Privacy should not hinder functionality - Positive Sum, not Zero Sum
  • End to End security - Full Lifecycle Protection
  • Visibility and Transparency - Organizations should be transparent about their data handling practices and provide clear information to individuals about how their personal information is collected, used, and shared. Users should have control and the ability to make informed choices about their data.
  • Respect for User Privacy - Keep it user-centric

Secure Defaults

When it comes to security, it’s always better to be safe than sorry. One of the best ways to ensure the safety of your systems and data is to set secure defaults. Secure defaults are preconfigured settings that are designed to provide a high level of security without any additional configuration.

Some examples of secure defaults include:

  • Using strong passwords that are difficult to guess
  • Enabling two-factor authentication
  • Disabling unnecessary services or protocols
  • Using encryption for sensitive data
  • Enforcing regular system updates and patches
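One way to bake these defaults in is a configuration object whose zero-argument state is already the safe choice, so a deployment that forgets to configure anything still starts locked down. A hedged sketch with made-up setting names:

```python
# Secure-defaults sketch: every field defaults to the safe option, and
# risky capabilities (extra services) are opt-in rather than opt-out.
from dataclasses import dataclass, field

@dataclass
class ServerConfig:
    require_mfa: bool = True               # two-factor on by default
    tls_only: bool = True                  # plaintext disabled by default
    enabled_services: set = field(default_factory=set)  # nothing extra runs
    min_password_length: int = 14          # strong passwords enforced
    auto_update: bool = True               # patches applied by default

cfg = ServerConfig()                       # no arguments: still secure
```

The design choice is that weakening security requires an explicit, visible override (e.g. `ServerConfig(require_mfa=False)`), never silence.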

By setting secure defaults, you can greatly reduce the risk of security breaches, data theft, and other cyber attacks. It’s important to note that secure defaults are not a panacea; they should be seen as a baseline of security that can be further enhanced by additional measures.

In summary, secure defaults are an essential part of any security strategy. By implementing them, you can greatly reduce the risk of security breaches and ensure the safety of your systems and data.

Fail Securely

Fail securely is a concept that refers to the way a system should behave when it encounters an error or a failure. When a system fails securely, it means that it fails in a way that does not compromise the security or integrity of the system or the data it contains. This may involve shutting down the system, isolating the failure, or triggering an alert to notify the system administrator or security team. The goal of failing securely is to minimize the impact of the failure and prevent any further damage or exploitation. By failing securely, you can ensure that your systems and data remain secure even in the face of unexpected failures or errors.
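In code, failing securely usually means failing closed: any unexpected error during an authorization decision results in "deny", never "allow". A small sketch with a made-up permission store:

```python
# Fail-closed authorization sketch: errors in the lookup deny access.
def load_permissions(user):
    """Hypothetical permission store; raises KeyError for unknown users."""
    store = {"alice": {"read"}}
    return store[user]

def is_authorized(user, action):
    try:
        return action in load_permissions(user)
    except Exception:
        # Fail securely: on any failure, default to denying access
        # (and in a real system, log or alert on the error).
        return False
```

The anti-pattern is the mirror image: a handler that returns `True` or skips the check when the store is unreachable fails open.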

Fail Safe

Fail safe is a concept that refers to the way a system should behave when it encounters a critical failure or malfunction. When a system fails safe, it means that it fails in a way that minimizes the risk of harm to people, property, or the environment. This may involve triggering an emergency shutdown, activating safety systems, or alerting emergency responders. The goal of failing safe is to prevent any harm or damage that could result from the failure. By failing safe, you can ensure the safety and well-being of your users and the public.

Separation of Duties (SoD)

Separation of Duties (SoD) is a security principle that is used to ensure that no single individual has complete control over a business process from start to finish. The principle is based on the idea that having multiple individuals involved in a process reduces the risk of errors, fraud, and malicious activities.

In practice, SoD means that different roles and responsibilities are assigned to different individuals within an organization. For example, the person who approves a purchase order should not also be the person who receives the goods. By separating these duties, the organization reduces the risk of fraud or errors, as it requires collusion between two or more individuals to carry out any malicious activity.
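The purchase-order example can be sketched as a workflow object that refuses to let the approver also receive the goods. The class and names are illustrative:

```python
# Separation-of-duties sketch: the same user cannot both approve a
# purchase order and receive the goods.
class SoDViolation(Exception):
    pass

class PurchaseOrder:
    def __init__(self):
        self.approved_by = None
        self.received_by = None

    def approve(self, user):
        self.approved_by = user

    def receive(self, user):
        if user == self.approved_by:
            raise SoDViolation("approver may not also receive the goods")
        self.received_by = user
```

Fraud now requires collusion: no single account can complete the process end to end.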

SoD is particularly important in the areas of finance and accounting, where it can help prevent financial fraud. The principle is also used in IT environments to ensure that no one person has complete control over critical systems or data.

To implement SoD, organizations need to identify the business processes that require separation of duties, and then ensure that the roles and responsibilities for each process are clearly defined and documented. This can be done through policies, procedures, and training.

In conclusion, Separation of Duties is a critical security principle that helps organizations reduce the risk of errors, fraud, and malicious activities. By assigning different roles and responsibilities to different individuals, organizations can ensure that no single person has complete control over a process or system.

TCB and OS Architecture

  • The Trusted Computing Base (TCB) is the totality of protection mechanisms within a system that enforce the security policy

  • TCB components include the CPU, RAM, OS Kernel, BIOS, etc.

  • The TCB is what is evaluated when certifying a system

OS Kernel

  1. Reference Monitor

    • The Law
    • defines the set of design requirements to make the determination regarding subject/object access. It provides the rules that govern access.
    • a conceptual security mechanism that enforces access controls and mediates all interactions between subjects (such as processes or users) and objects (such as files or devices) in a system. It acts as an authoritative source of security policies and ensures that all access requests are evaluated and enforced consistently.
  2. Security Kernel

    • The Police
    • enforces the reference monitor

    Three requirements for the reference monitor/security kernel:

    1. Must facilitate isolation of processes
    2. Must be invoked at every access attempt
    3. Must be small enough to be tested and verified in a comprehensive manner
  • Open System - uses open hardware and standards, built from standard components available from a variety of vendors
  • Closed System - uses proprietary hardware or software
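The reference monitor concept can be sketched as one small chokepoint that is invoked on every access attempt and records every decision. The policy format below (an allowlist of subject/object/action triples) is purely illustrative:

```python
# Conceptual reference monitor sketch: small, always invoked, auditable.
class ReferenceMonitor:
    def __init__(self, policy):
        self.policy = policy          # set of (subject, obj, action) triples
        self.audit_log = []           # every decision is recorded

    def check(self, subject, obj, action):
        """Invoked at every access attempt; mediates subject/object access."""
        allowed = (subject, obj, action) in self.policy
        self.audit_log.append((subject, obj, action, allowed))
        return allowed

rm = ReferenceMonitor({("proc-a", "file-1", "read")})
```

Note how the sketch mirrors the three requirements: the monitor is the only decision point (invoked at every attempt), it is tiny enough to verify, and keeping policy per subject/object supports process isolation.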

Protection Rings


  • Layer or Ring 0 - OS Kernel (Privileged mode)
  • Ring 1 - Non-privileged part of the OS
  • Ring 2 - OS and I/O drivers and OS utilities
  • Ring 3 - Applications or User Programs
    • Note → Ring -1 is used for virtualization (the hypervisor)

Threat Modeling

Threat modeling is a process that helps identify potential threats and vulnerabilities to an organization’s assets, such as data, systems, and people. It is a systematic approach that involves identifying, prioritizing, and mitigating security risks based on their likelihood and impact.

The process of threat modeling typically involves the following steps:

  1. Identify the assets: Determine the critical assets that need protection, such as data, applications, systems, and people.
  2. Define the threat environment: Identify potential attackers, including internal and external threats, and consider the various methods they might use to compromise the assets.
  3. Create a threat model: Develop a comprehensive model of the threats that could potentially harm the assets. This includes identifying potential attack vectors, determining the likelihood and impact of each threat, and estimating the cost of each potential incident.
  4. Prioritize the threats: Determine which threats pose the greatest risk to the assets and prioritize them for mitigation.
  5. Mitigate the threats: Develop and implement a plan to mitigate the identified threats. This may include technical controls, such as firewalls and intrusion detection systems, as well as administrative controls, such as policies and procedures.
  6. Review and update: Threat modeling is an ongoing process that should be reviewed and updated regularly to ensure that the organization’s security posture remains effective.

Overall, threat modeling helps organizations identify potential security risks and implement effective controls to protect their assets. It is an important component of any comprehensive security program and is a critical skill for security professionals.

Threat models

  1. STRIDE

STRIDE is a threat modeling methodology that focuses on identifying and mitigating threats based on six categories: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.

  • Spoofing: An attacker pretends to be someone else in order to gain access to unauthorized resources or information.
  • Tampering: An attacker modifies data or code in order to gain unauthorized access or cause harm.
  • Repudiation: An attacker denies a legitimate action or event in order to avoid accountability.
  • Information Disclosure: An attacker gains access to sensitive information without authorization.
  • Denial of Service: An attacker prevents legitimate users from accessing a resource or service.
  • Elevation of Privilege: An attacker gains access to higher-level permissions or privileges than they should have.

By using the STRIDE model, organizations can identify potential threats and implement effective controls to mitigate them.
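Each STRIDE category attacks one security property, which suggests a matching class of mitigation. The mapping below uses commonly cited examples; the specific mitigations listed are illustrative, not exhaustive:

```python
# STRIDE category -> (security property attacked, typical mitigations).
STRIDE = {
    "Spoofing":               ("Authentication",  "strong authentication, MFA"),
    "Tampering":              ("Integrity",       "digital signatures, hashing"),
    "Repudiation":            ("Non-repudiation", "audit logging, signed records"),
    "Information Disclosure": ("Confidentiality", "encryption, access controls"),
    "Denial of Service":      ("Availability",    "rate limiting, redundancy"),
    "Elevation of Privilege": ("Authorization",   "least privilege, input validation"),
}

def property_attacked(threat):
    """Which security property a given STRIDE threat category undermines."""
    return STRIDE[threat][0]
```

Reading the table column-wise is a quick completeness check during a threat modeling session: if a design review never mentions, say, non-repudiation, the Repudiation row is probably unexamined.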

  2. PASTA

PASTA is a threat modeling methodology that stands for Process for Attack Simulation and Threat Analysis. It is a seven-step process that involves:

  1. Define the application: Identify the scope of the application and the assets that need protection.
  2. Create an abuse case: Develop a comprehensive list of potential attacks that could compromise the assets.
  3. Identify the security objectives: Determine the overall security objectives for the application and identify any specific security requirements.
  4. Create a threat profile: Develop a detailed profile of each potential attacker and their capabilities, motivations, and resources.
  5. List the security controls: Identify the security controls that are currently in place and list any additional controls that may be needed to mitigate the identified threats.
  6. Perform a threat analysis: Evaluate the effectiveness of the current security controls and identify any vulnerabilities or weaknesses that need to be addressed.
  7. Report and manage: Compile a report detailing the findings of the threat analysis and develop a plan to manage the identified risks.

By using the PASTA model, organizations can identify potential threats and implement effective controls to mitigate them.

Zero Trust & Trust but verify

Zero Trust is a security concept that assumes that every user and device is a potential threat. It requires strict access controls and verification for every user, device, and network attempting to access resources. Zero Trust is based on the principle of “never trust, always verify.”

On the other hand, Trust but verify is a security approach that assumes that users and devices are trusted until there is evidence that they are not. It involves granting users and devices access to resources, but monitoring and auditing their activities for any signs of malicious behavior.

Both Zero Trust and Trust but verify have their advantages and disadvantages. Zero Trust provides a higher level of security by assuming that every user and device is a potential threat, but it can be complex to implement and manage. Trust but verify is easier to implement and manage, but it may not provide enough security against sophisticated attacks.

In conclusion, Zero Trust and Trust but verify are two different security approaches that can be used to protect resources and data. The choice between them depends on the organization’s risk tolerance, security requirements, and resources available for implementation and management.