Romanus Onyekwere


The Best Way Of Making Cybersecurity Effective

Chapter 1: Architecting for security
Chapter 2: Protecting Payment Card Data
Chapter 3: Clouding The Issues
Chapter 4: Securing Things On The Internet
Chapter 5: Ensuring Security is Effective
Chapter 6: Incident Management Basics
Chapter 7: Measuring Incident Management Maturity
Chapter 8: Detecting an Attack
Chapter 9: Hunting For Threats
Chapter 10: Responding to an Incident
Chapter 11: Communications Plan And Notification
Chapter 12: Cybersecurity Goes Global
Chapter 13: Understanding Cyber Norms
Chapter 14: Cybil And The Global Forum on Cyber Expertise
Chapter 15: The Traffic Light Protocol

Chapter 1: Architecting for security
Security doesn't exist in isolation. It's a characteristic of a business service, a business system, and business information, each of which is either secure or not secure. In reality, though, security isn't black and white; it comes in shades of gray. Nevertheless, organizations will often adopt a set of controls to secure their IT systems without considering whether these reflect any of the requirements of the business. Rather than just adopt a generic control set, we can use what's known as enterprise security architecture to architect a security solution which meets the needs of our business, and then apply the controls that are necessary to achieve that architecture. One of the most popular enterprise security architecture frameworks is SABSA, the Sherwood Applied Business Security Architecture. SABSA is used to capture business requirements and then determine what security is needed to meet those requirements. The basic construct in SABSA is its architecture matrix.

The two top layers, the contextual and conceptual, contain the elements of the architecture necessary in the strategy and planning stage, while the three lower layers, the logical, physical, and component, contain those necessary to design secure IT systems and processes to support the higher-level business goals and objectives. This matrix is used to capture all relevant security concepts and activities for the enterprise, and these are shown in summary in the cells. Security is often defined in terms of the information assurance we achieve by considering a system's confidentiality, integrity, and availability. This approach came from early work on models of security, and while it's a very common approach, it's also a very constricting and artificial paradigm. Confidentiality, integrity, and availability are indeed three attributes, but they're not the only ones.

We can add more, such as non-repudiation, authenticity, and utility. But to create an effective information security architecture that's business-centric rather than security-centric, this is still inadequate. The SABSA framework provides a comprehensive set of business security attributes, which have been collected from hundreds of consultancy projects and contain most, if not all, of the attributes an organization needs to define its own concept of security.

SABSA's many attributes are grouped into seven categories in what's called an attribute taxonomy: user, management, operational, risk, technical, legal, and business. These categories represent the focus of the business outcome which is being protected by the attributes. This is, in effect, a picture of what business success looks like. This set of attributes is often used as a pick list from which to choose a relevant subset of attributes for an architecture project, and it's quite useful as a cross-check on the attributes derived from the business. We can also show the attributes in a business-relevant form. For a military organization, for example, the attributes represent activities which are important to the military: preparation of the force, intelligence regarding the battlefield, the characteristics of operations, commanding and sustaining the force, and providing protection of the force.

When we architect security, we start with business goals and objectives. Here's a strategic business construct showing goals and objectives. The smaller ellipses together represent the objectives which are required to meet the goals in the larger ellipses. We can analyze the objectives to determine what we need in terms of security attributes to ensure the objectives are met. In this table, we can see some general business objectives, which we call business drivers, mapped to a number of the individual enterprise, technology, and business division objectives. For each of these, we can describe the business driver as a set of security-focused attributes. We can then use these attributes to measure security across the organization, and we can map them down to the information systems which support the various business processes. By measuring the effect of a security incident on an attribute, we can map this easily back to the business goals and objectives which depend upon it. There's much more to SABSA and enterprise security architecture in general, but the main takeaway is that we need to always look at security through business eyes, because security is what we need to achieve business success.
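That driver-to-attribute mapping lends itself to a simple data structure. Here's a minimal, hypothetical sketch in Python (the driver and attribute names are invented for illustration, not taken from SABSA) showing how an incident's effect on an attribute can be traced back to the business drivers that depend on it:

```python
# Illustrative only: hypothetical business drivers mapped to the
# security attributes that support them.
DRIVER_ATTRIBUTES = {
    "protect customer trust": {"private", "trustworthy", "reputable"},
    "ensure service continuity": {"available", "recoverable", "resilient"},
    "meet regulatory obligations": {"compliant", "auditable", "traceable"},
}

def drivers_affected(attribute: str) -> list[str]:
    """Return the business drivers that depend on a given attribute,
    e.g. to report an incident's business impact."""
    return sorted(d for d, attrs in DRIVER_ATTRIBUTES.items()
                  if attribute in attrs)

print(drivers_affected("available"))  # ['ensure service continuity']
```

With a structure like this, an incident that degrades the "available" attribute can be reported immediately in terms of the business driver it threatens, rather than in purely technical terms.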

Chapter 2: Protecting Payment Card Data
Cyber criminals understand that credit cards are a lucrative target for attack. The payment card industry governing body, the PCI Council, has responded to this threat by issuing the PCI Data Security Standard as an actionable framework for developing a robust security regime for cardholder data. More recently, and in the light of the state-sponsored attacks on personal and government data identified in the Snowden leaks, government regulators have enacted regulatory requirements for notification of data breaches, in particular the European General Data Protection Regulation (GDPR). This has increased the business liability in the event of a data breach.

It's now critically important for any business taking payments through credit cards to protect their information and transactions. It helps to understand the terminology of PCI when reviewing the Data Security Standard. Let's look at the key terms. A merchant is someone who takes a credit card or debit card as a form of payment. A service provider is someone who provides a service that is used for payment card information storage, or transactions of a merchant. A qualified security assessor is an independent person certified to report on PCI compliance. An internal security assessor works for the merchant, and is certified to submit a self-assessment. A data breach is a failure of security which results in the loss of cardholder information. Cardholder data, or information, is the primary account number, the cardholder name, the expiration date, and the service code. The sensitive authentication data is the encoded data on the magnetic stripe or chip, the card verification value, and the PIN. In the event of a data breach, the card company will launch an investigation to determine the cause. If the company has maintained its PCI compliance, it can claim safe harbor. If not, it could face a hefty fine, or removal of its right to accept credit cards. Regardless, it will face remediation costs, which could include card replacement and possibly customer compensation. With that background to PCI, let's now look at the PCI Data Security Standard itself. The standard provides a set of actionable controls together with testing procedures to provide a clear definition of what has to be done to achieve compliance. Version 3.2 of the standard provides 12 technical and operational requirement areas, covering almost 200 mandatory controls. Let's have a look at some of the key controls for the first six requirement areas. The first requirement is to have an effective firewall configuration.
This means that firewall configuration standards have been set, that all firewall changes are tested, and that the rule sets are reviewed every six months. A network diagram must be maintained for any part of the network that stores, transmits, or interfaces to payment card data. And data flows across the network need to be defined. Traffic not related to cardholder transactions must be denied access to the cardholder systems. So it's normal to have a segregated PCI zone on the internal network so that the traffic can be managed at the PCI zone gateway. A demilitarized zone is required for any systems with direct internet access, and this needs to be firewalled at both the internal and external gateways. Firewalls are not just for the enterprise. Mobile devices, including any employee-owned devices that are allowed to be connected to the internal network, must have personal firewall software installed and operational, with a configuration that's not able to be changed by the employee. This is a key consideration when thinking about bring your own device, or BYOD, environments. The next requirement is that all default passwords and insecure configuration settings are changed. Security configuration standards are required for each device and system component to allow effective hardening, and all unnecessary ports and services should be removed. Stored cardholder data has to be kept to a minimum and protected, and no sensitive authentication data can be stored, even in encrypted form, once authentication has been completed. The account number, when displayed, must be masked, typically by replacing all except the last four digits with asterisks. When stored, the account number must be protected through strong cryptography, or one-way hashing. Key management is a critical part of any cryptographic solution, and must be implemented effectively. Transmitted cardholder data on open networks, such as the internet and unprotected wireless networks, must be encrypted.
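The masking and one-way hashing requirements just described can be illustrated with a short Python sketch. This is a simplified illustration, not a PCI-compliant implementation; in particular, proper key management is out of scope here:

```python
import hashlib
import hmac

def mask_pan(pan: str) -> str:
    """Mask a primary account number for display: everything except
    the last four digits is replaced with asterisks."""
    return "*" * (len(pan) - 4) + pan[-4:]

def hash_pan(pan: str, secret_key: bytes) -> str:
    """One-way keyed hash of a PAN for storage. Using a keyed hash
    (HMAC) rather than a bare digest makes brute-forcing the limited
    space of valid PANs impractical without the key."""
    return hmac.new(secret_key, pan.encode(), hashlib.sha256).hexdigest()

print(mask_pan("4111111111111111"))  # ************1111
```

The masked form can safely appear on receipts and screens, while the keyed hash allows a stored value to be matched against a presented card without the real PAN ever being stored.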
And then user systems, such as email and messaging, must never be used to send unprotected account numbers. The cryptographic scheme must be effective, and what's effective may change over time. For many years, the Secure Sockets Layer, SSL, had been a common cryptographic solution for web access. In 2014, a fundamental flaw in OpenSSL, a widely used implementation of the protocol, was disclosed and exploited as the Heartbleed vulnerability. Subsequently, the PCI Security Standards Council determined that the Secure Sockets Layer protocol was no longer an acceptable solution for the protection of cardholder data. Systems processing cardholder data must implement antivirus software to provide protection against malware on both endpoint devices and servers. As some malware may enter a system prior to its signature being included in the antivirus database, regular scans must also be undertaken. Threats and vulnerabilities should be monitored through vendor alerts and threat intelligence feeds, and critical security patches must be installed within one month of release. Development and test accounts must be removed before systems are put into production, and custom development must include source code review prior to implementation, with special attention given to common vulnerabilities, such as SQL injection and cross-site scripting. Production account numbers must not be used for testing. The next six security requirements in the PCI Data Security Standard address the level of security required outside of the PCI environment. Of particular interest is the requirement to restrict physical access, which extends to special purpose devices used to read cards. ATMs, and more recently embedded readers in devices such as gas pumps, are regularly targeted by criminals who install skimmers, which can copy credit card data. These are big business. In April 2015, a sweep of 6,000 gas stations in Florida found 81 skimmers attached to gas pumps.
This particular scam has been estimated to make, in the US, as much as $3 billion a year for criminals. This has been a quick introduction to the PCI Data Security Standard. There's much more detail provided by the PCI Council on this and their other standards, and these are available for download from their website, shown here.

Chapter 3: Clouding The Issues
Cloud technology is no longer a novel approach to deploying infrastructure, but is a mainstream option for enterprises. Cloud security solutions may be deployed by the enterprise IT team, or solutions may be deployed by business groups.

There are three forms of Cloud deployment in common use: infrastructure as a service, platform as a service, and software as a service. However, there are many more specialist forms of Cloud service that can be used. In all cases, there's a need for security controls to protect the Cloud solution, just as there is for an on-premises solution. However, there are some differences in the controls when using them for Cloud, and there are some new controls that need to be considered. The International Organization for Standardization has produced an ISO 27000-series standard for Cloud known as ISO 27017. This is based on ISO 27002, and includes an additional six controls. NIST has produced SP 800-144, "Guidelines on Security and Privacy in Public Cloud Computing," which refers back to the SP 800-53 controls. However, the main reference for Cloud security is the Cloud Security Alliance's "Security Guidance for Critical Areas of Focus in Cloud Computing," with its 14 domains of controls. These domains provide a full description of Cloud, its security needs, and the controls needed to protect Cloud deployments. The guidance is supported by the Cloud Controls Matrix, which can be downloaded from the CSA site, shown here. The Cloud Controls Matrix provides the fundamental security principles that should be adopted by Cloud vendors and that can assist Cloud customers in assessing the overall security risk of a Cloud provider. It provides clarity on the shared responsibilities between the Cloud service provider and customer, according to the form of Cloud. More importantly, it provides a controls framework cross-referenced to the other major security standards recognized in industry. Version 3.0.1 of the CCM has 133 controls in 16 domains.

The 16 domains can be seen here. They don't align with the domains in the "Security Guidance for Critical Areas of Focus in Cloud Computing," but they do provide comprehensive coverage of security across Cloud, starting with application and interface security and finishing with threat and vulnerability management.

Chapter 4: Securing Things On The Internet
The Internet of Things is a term which means everything that's connected to the internet that isn't a standard laptop, workstation or server. One dictionary definition of the Internet of Things is the interconnection via the internet of computing devices embedded in everyday objects, enabling them to send and receive data. Wikipedia goes a little deeper and defines the Internet of Things as a system of interrelated computing devices, mechanical and digital machines, objects, animals, or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human to human or human to computer interaction.

An obvious characteristic of the Internet of Things is that it's connected to the internet. It may only send data, it may only receive data, or it may do both. An important class of internet things are the low-power things, those objects which have an embedded battery and no external power supply. These are often required to have a life of 10 years or longer, and so require very low-power operation. One of the first organizations to provide guidance on security for the Internet of Things was the IoT Alliance Australia, a part of the Australian Communications Alliance. Its initial Internet of Things Security Guideline was published in February 2017, and provides an introduction to IoT technology and the key IoT industry sectors. It covers legal, privacy, security, resilience, and survivability issues, as well as IoT device development considerations. There is no definitive set of security controls for IoT, although organizations such as OWASP and GSMA have provided some guidance. The IoT Security Foundation has published a comprehensive set of 142 controls in their security guideline, grouped into 13 areas of compliance. Take a moment to think about the challenges in providing guidance for IoT. A small sensor may have little memory and a very low-power processor, but an industrial SCADA device may be as powerful as a modern PC. Think about an IoT soil moisture sensor which is deployed out in the field and has to run on its own internal battery for no less than 10 years. Jot down two reasons why you wouldn't want it to have to run antivirus software. An interesting additional attribute that the IoT Security Foundation has tagged to each control is its compliance class, which can be one of five values relating to the data generated or the level of control provided by the device. The control is then relevant to the IoT device if its compliance class is equal to or higher than the class tagged on the control.
Class 0 means that a compromise is likely to result in little discernible impact on an individual or organization. Class 1 means that a compromise would likely have limited impact on an individual or organization. Class 2 devices are those designed to resist attacks on availability that would have a significant impact on individuals or an organization. Class 3 devices are additionally designed to protect sensitive data, and Class 4 devices are those which have the potential to affect critical infrastructure or cause personal injury. We're likely to see much more attention being given to IoT security controls as we see deployments into key sectors such as intelligent transport and smart cities.
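The compliance-class rule described above is simple enough to express directly in code. Here's an illustrative Python sketch (the helper names and short class descriptions are mine, condensed from the text, not the IoT Security Foundation's wording):

```python
# Condensed descriptions of the five IoTSF compliance classes.
CLASS_DESCRIPTIONS = {
    0: "compromise causes little discernible impact",
    1: "compromise has limited impact",
    2: "must resist attacks on availability",
    3: "must additionally protect sensitive data",
    4: "could affect critical infrastructure or cause personal injury",
}

def control_applies(device_class: int, control_class: int) -> bool:
    """A control is relevant to a device when the device's compliance
    class is equal to or higher than the class tagged on the control."""
    return device_class >= control_class

# A Class 1 field sensor: Class 0 and 1 controls apply, Class 3 do not.
print(control_applies(1, 0), control_applies(1, 3))  # True False
```

This is why a battery-powered soil moisture sensor escapes the heavyweight controls: its low class filters them out, while a Class 4 device connected to critical infrastructure inherits the full set.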

Chapter 5: Ensuring Security is Effective
Commercial cybersecurity products and services can be very expensive, and often we'll pay for 100 percent of a product and use only 5 to 10 percent of its functionality. However, cybersecurity doesn't need to be expensive. We can cover many of our cybersecurity needs with open source products and get a solid capability in place for minimal cost. Having gained experience with them, we can then better determine where we have gaps and which areas we need to invest in to grow our capability. A good place to start in looking for open source solutions is the Kali Purple platform, which has been developed as both a cyberdefense workstation and as a cyberdefense platform for server-based tools. You can check out the details of Kali Purple in my Complete Guide to Kali Purple course, or by going to the Kali Purple wiki in GitLab. Let's take a look at some of the open source products we can use. There are three popular open source firewalls: pfSense, OPNsense, and Smoothwall. These are great solutions to start with, and as we outgrow them, we can consider moving up to higher-performance commercial products. Behind the firewall, we might want to set up a demilitarized zone, or DMZ, and run a proxy server to manage all traffic in and out of our internal network. NGINX is a well-respected web and proxy server which we can use. Another important tool to have running is a web application firewall. This tool monitors our web traffic to stop attacks on our web applications. Unfortunately, the open source WAF solutions for NGINX have been pretty much discontinued, so this is a gap we'll need to address when we upgrade NGINX to NGINX Plus. We'll also want to have an intrusion detection system to monitor for malware coming through the network. Suricata is an open source intrusion detection system that we can install. Another key area of cybersecurity capability is logging and alert monitoring.
There are a number of open source solutions for this, including the ELK stack, which stands for Elasticsearch, Logstash, and Kibana. These solutions provide dashboards and real-time log displays, which allow us to see everything that's happening on our networks and in our systems. Here's an example of the ELK solution running on Kali Purple. Being able to check that we've patched our vulnerabilities is one of the more important capabilities that we need.
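Under the hood, the alerting these monitoring stacks provide comes down to correlation rules over log events. Here's a minimal, hypothetical sketch of one such rule in Python (the threshold and window are arbitrary choices for illustration; real stacks express rules in their own query languages):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy correlation rule: alert when one source IP produces too many
# failed logins within a sliding time window.
WINDOW = timedelta(minutes=5)
THRESHOLD = 5

def brute_force_alerts(events):
    """events: iterable of (timestamp, source_ip) for failed logins,
    assumed sorted by timestamp. Returns the IPs that trip the rule."""
    recent = defaultdict(list)
    alerts = set()
    for ts, ip in events:
        # Keep only this IP's events that fall inside the window.
        recent[ip] = [t for t in recent[ip] if ts - t <= WINDOW]
        recent[ip].append(ts)
        if len(recent[ip]) >= THRESHOLD:
            alerts.add(ip)
    return alerts

base = datetime(2024, 1, 1, 12, 0, 0)
events = [(base + timedelta(seconds=30 * i), "203.0.113.9") for i in range(6)]
print(brute_force_alerts(events))  # {'203.0.113.9'}
```

Six failed logins 30 seconds apart trip the rule; the same six spread over an hour would not, which is exactly the kind of discrimination a dashboard alert encodes.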

The Greenbone Vulnerability Manager provides a dedicated vulnerability scanning solution as a community product, and as our needs grow, we can seamlessly move up to its commercial version. Here's an example of GVM running on Kali Purple. A slightly more sophisticated capability is threat hunting, which is where we proactively check our networks for any malicious activity we might not have caught with our monitoring. A good example of what we might use is the Malcolm solution developed by Idaho National Laboratory and the Department of Homeland Security. This integrates a number of tools so we can check the details of sessions that have been run and deep dive into our logs after the event.

Velociraptor is another threat hunting tool with which we can run queries concurrently across large networks. This provides an extremely efficient way of investigating the spread of malware. Again, it's running on Kali Purple. Another open source threat hunting solution is SELKS, which is again built on Elasticsearch and Kibana. This is designed for smaller networks and we can move to the commercial grade Stamus Security Platform as we outgrow the community version. Wazuh is an open source multi-role solution providing alert monitoring, compliance, and vulnerability management all in the one tool. Here we see Wazuh running on Kali Purple. There are many more open source solutions and guidance on installation and use of many of these is being delivered as part of the Kali Purple Initiative. Putting open source or commercial tools in place is a good start to securing our networks, but it isn't the complete answer. Tools need trained staff to use them effectively, and cybersecurity requires lifelong learning.

Chapter 6: Incident Management Basics
With the resources being invested in both cybercrime and state-sponsored malware, it's inevitable that an attack will eventually penetrate even the most careful organization. When that happens, the difference between inconvenience and disaster will be how well prepared the organization is to respond to the incident. The NIST Cybersecurity Framework provides a set of control objectives under the functional area Respond. This consists of five categories: Response Planning, Communications, Analysis, Mitigation, and Improvements. The framework also includes a Recover function, which complements three of the Respond categories. The five Cybersecurity Framework categories align closely with the four-stage incident handling process defined in NIST Special Publication SP 800-61, the Computer Security Incident Handling Guide. Unlike in the Cybersecurity Framework, the communications which occur throughout these four stages are not shown as a separate stage. The Cybersecurity Framework and SP 800-61 can also be aligned to the three-stage model published by CREST UK, with its model of Prepare, Respond, and Follow Up. Whatever the model, a key aspect of incident management is information sharing. This includes threat intelligence in the preparation stage and operational response matters during an incident. The Forum of Incident Response and Security Teams, or FIRST, was established in 1990, and continues today as an active forum helping support the industry, government, and vendor communities. FIRST runs workshops and conferences to foster cooperation and coordination in incident prevention, to stimulate rapid reaction to incidents, and to give subject matter experts a place to meet and share information. The community of Computer Emergency Response Teams, or CERTs, operates at a national level to protect governments and their critical infrastructure and to provide community advice on cybersecurity matters. The US-CERT, for example, is part of the Department of Homeland Security.
Through its 24-by-7 operations center, US-CERT accepts, triages, and collaborates on incidents, provides technical assistance, and disseminates notifications of current and potential issues. CERTs also collaborate at the international level through the Forum of Incident Response and Security Teams. This involves not only maintaining national CERT-to-CERT channels, running training courses, and participating in annual conferences, but also being the main contact point between CERTs and organizations such as the Global Forum on Cyber Expertise and the International Telecommunication Union. It's useful to have a common language when talking about types of incidents, and to have a set of generic templates which are fit for purpose for each. US-CERT defines seven categories of incidents. Category 0 covers incidents that are part of cyber exercises for testing network defenses. Category 1 incidents are those where an individual gains logical or physical access without permission to a network, system, application, data, or other resource. Category 2 incidents are denial-of-service events, where the attack successfully prevents or impairs the normal authorized functionality of a network, system, or application by exhausting resources. Category 3 covers the successful installation of malicious software not quarantined by antivirus software. Category 4 incidents are those involving a breach of acceptable use. Category 5 incidents are scans and probes of a system, looking for open ports, protocols, or services, which don't directly result in a compromise or denial of service. Category 6 is for incidents involving unconfirmed but potentially malicious activity which justifies further investigation. Incidents don't often appear in a way which is immediately obvious for categorization. We'll usually have some form of event that's flagged as suspicious, and some investigation is needed.
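Those seven categories can be captured as a simple lookup table, which a triage workflow or ticketing system might use to label incidents consistently (the short labels here are my condensed versions of the descriptions above):

```python
# US-CERT incident categories, condensed into short triage labels.
US_CERT_CATEGORIES = {
    0: "Exercise / network defense testing",
    1: "Unauthorized access",
    2: "Denial of service",
    3: "Malicious code",
    4: "Improper usage (acceptable-use breach)",
    5: "Scans, probes, attempted access",
    6: "Investigation (unconfirmed, potentially malicious)",
}

def label(category: int) -> str:
    """Render a consistent ticket label for an incident category."""
    return f"CAT {category}: {US_CERT_CATEGORIES[category]}"

print(label(2))  # CAT 2: Denial of service
```

A suspicious event would typically enter the system as CAT 6 and be recategorized as the investigation firms up what actually happened.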
An important tool for incident management is the trouble ticket system, which enables us to maintain all relevant information on an event through to it becoming an incident and eventually being resolved. Here's an example of a trouble ticket system called osTicket, displaying its list of open tickets. The US Cybersecurity and Infrastructure Security Agency (CISA) runs the National Initiative for Cybersecurity Careers and Studies, and through that has published what is known as the NICE Framework, which describes workforce roles in cybersecurity. There are three roles related to incident response: cyber defense analyst, whose role includes running vulnerability scans, monitoring for attacks, and analyzing malware; cyber defense incident responder, whose role is to investigate, analyze, and respond to cyber incidents; and cyber defense forensics analyst, whose role is to analyze digital evidence and investigate incidents. The NICE Framework provides a useful reference to the skills and knowledge required for each of these roles. Why don't you pause the course and take a moment to check out the skills and knowledge required to be a cyber defense incident responder, and the tasks you'll be expected to undertake.

Chapter 7: Measuring Incident Management Maturity
The effort put into preparing for an incident will be paid back many times over through a timely and effective response which contains the damage. Preparation involves establishing and training an incident response team, establishing and exercising processes, and acquiring the necessary tools and resources. Once a basic program is up and running, it's useful to carry out a baseline survey of incident response preparedness. CREST UK has developed an incident response maturity assessment tool, which is free to download and use. This is a spreadsheet-based tool which contains over 600 questions across the three stages of incident management. And it can be used to assess an organization's readiness to respond to a cyber attack. A summary version of the tool with just a handful of higher level questions is also available for download from its website. Another early task for the incident response team will be to take advantage of the strategic threat intelligence sources which are being used to inform the cyber risk management team. In addition to the threat reports, it's useful to have tactical and operational threat intelligence. Tactical information exchange is usually available within an operating community. A good example of this is the financial services FS-ISAC. FS-ISAC is an intelligence sharing community for the banking industry. This allows organizations to get early warnings and real-time information about the kind of activities that are impacting other members of the community. At an operational level, the use of mechanisms for distributing indicators of compromise provides real-time actionable intelligence for feeding into firewalls and intrusion detection devices. MITRE has been leading the development of standards for operational feed mechanisms, and the STIX/TAXII protocols are widely recognized within the incident response community. Incident response procedures need to be defined and installed. 
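As a sketch of how the operational indicator feeds mentioned above turn into something actionable, here's a simplified Python example. The feed format is invented for illustration; real STIX 2.x content is far richer and is usually handled with dedicated libraries rather than hand-rolled parsing:

```python
import json

# Hypothetical, simplified indicator feed (NOT real STIX): each entry
# has a type, a value, and a confidence score from the provider.
feed = json.loads("""
[
  {"type": "ipv4", "value": "198.51.100.7", "confidence": 90},
  {"type": "domain", "value": "bad.example", "confidence": 60},
  {"type": "ipv4", "value": "203.0.113.44", "confidence": 30}
]
""")

def blocklist(indicators, min_confidence=50):
    """Extract high-confidence IP indicators suitable for feeding a
    firewall or IDS blocklist."""
    return [i["value"] for i in indicators
            if i["type"] == "ipv4" and i["confidence"] >= min_confidence]

print(blocklist(feed))  # ['198.51.100.7']
```

The confidence threshold is the interesting design choice: pushing every low-confidence indicator straight into a firewall risks blocking legitimate traffic, so feeds are usually filtered before they become enforcement.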
An issue tracking system is an important tool to enable effective incident management from operational detection through to resolution and recovery. While a standard service management or IT operations ticket system may include incident tracking, it may not satisfy the full requirements for security incident handling.

TheHive is an open source cybersecurity incident management system which runs in the cloud and allows multiple teams to collaborate on incident investigations. It enables automated analysis at scale of incoming incident information and includes integrated real-time threat intelligence. Another requirement is to establish a set of response playbooks which detail the actions to be taken for specific categories of incident. Many incident response teams create a jump kit, which is a portable case that contains materials that may be needed during an investigation. A jump kit typically includes a laptop loaded with networking and forensic software, backup devices, blank media, and basic networking equipment and cables. The preparation stage is a good time to build relationships in the incident response community, so that access to information and support comes naturally during a crisis. It's also a good time to build relationships inside the company, particularly with the IT team, so that there are no political stumbling blocks when a response is necessary. Finally, incident responders will need to be able to function effectively when managing the containment of an incident, and this means having pre-authorization to take unilateral action and make or direct emergency changes. The last thing a good crisis needs is decision making by committee. With a team established, the key element of ongoing preparation is cyber crisis exercises. These exercise the incident response procedures as well as the skills of the team, and provide visibility of the impact of a cybersecurity incident on the organization. The ENISA website provides a substantial amount of training and exercise material which can be used for internal CERT training and as the basis for customization to the wider crisis management program. This includes handbooks, tools, and a full program of pre-exercise training through to complete exercises.

Chapter 8: Detecting an Attack
Let's look at the operational response phases of incident response. In the NIST model of incident response, these are detection and analysis; containment, eradication, and recovery; and post-incident activities. Detection and analysis is the non-stop process of monitoring for evidence of a cyber attack, and this is the job of the SOC analyst. During the detection phase, the SOC analyst is looking for evidence of malware or intrusive behavior coming into the organization from external sources. This will usually involve watching real-time alerting screens, which run 24-by-7. The analyst is also looking for evidence of malware that has succeeded in penetrating the organization, by running file scans and monitoring for signals going out to the malware's command and control servers. A further requirement is to monitor for lateral malicious movement between systems inside the organization, to detect malware or an intruder that's penetrating deeper into our networks. Here's an example of the monitoring screen which SOC analysts use. This is the Splunk system, but there are others, such as Graylog and the ELK stack, each with its own pros and cons. All of them, however, digest log records and raise alerts when certain conditions are met. Life in the operations room monitoring for attack isn't easy. It involves many hours of staring at screens of scrolling log records and alerts, picking out candidate incidents which are relevant enough to pull out and investigate further. Even when there's real evidence of an incident, such as a crashed server, it's often difficult to determine whether the incident is just an IT issue or whether it really is security related, and, if so, the type, extent, and magnitude of the problem. Spotting cyber attacks often requires as much intuition as intelligence. Another challenge is that many alert sources, such as IDS, have a high rate of false positives. Being under-responsive will let the attack in, but being over-responsive means there's a risk of crying wolf.
When an incident is confirmed as being security-related, incident responders will often be asked to analyze ambiguous, contradictory, and incomplete symptoms to determine what's happened. This is where an analyst's skill really becomes important. Signs of an incident fall into one of two categories. A precursor is a sign that an incident may occur in the future. A port scan may be a precursor to an attack, as an adversary would likely do surveillance before launching a hard attack. Similarly, the release of an exploit in the wild to attack a known vulnerability in the organization would be a precursor to an attack. An indicator is a sign that an incident may have occurred or may be occurring now. A beaconing connection back to an unusual IP address may be an indicator that malware is attempting to make a command and control connection. Many of the alerts which are raised in the operations room will be false positives, and it's important to validate any detection before raising alarms. Understanding normal behavior is one of the best ways of discriminating between false precursors and indicators and real ones. Having a knowledge base helps, as this can be used to quickly determine whether the same anomaly has been seen before. Here's what a traffic flow monitoring screen looks like. With this, a SOC analyst can check for unusual flows of information such as might occur in a major data breach. And here's another showing unusually large amounts of traffic going to a port which normally has minimal traffic flows. Detection may involve correlating information over a period of time. Today's analytical tools tend to use big data analytics as a key strategy to detect long and slow APT infections. Deep packet inspection can be used to provide a detailed snapshot of activity on a particular part of the network, and this may give more context to the precursor or indicator. 
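As an illustration of the kind of correlation an analyst might run over a period of time, here is a minimal sketch of beaconing detection based on the regularity of connection intervals. The jitter threshold and the input format are assumptions made for illustration:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter=0.1):
    """Flag a connection series whose inter-arrival times are
    suspiciously regular -- a common trait of C2 beacons.
    timestamps: sorted connection times (seconds) to one destination."""
    if len(timestamps) < 4:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg == 0:
        return False
    # Low relative deviation => near-constant beacon interval.
    return pstdev(gaps) / avg < max_jitter

# Malware checking in every ~300 seconds with slight jitter:
beacon = [0, 301, 599, 902, 1200]
# Human browsing: irregular gaps.
human = [0, 12, 340, 355, 2000]
print(looks_like_beaconing(beacon), looks_like_beaconing(human))
```

Production tooling adds defenses this sketch omits, such as handling deliberately randomized beacon intervals, but the underlying statistical idea is the same.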
Host-based packet capture tools such as Wireshark can be used, as can network-based devices such as FireEye and NetWitness. Once an indicator is turned into an incident, prioritization is perhaps the most critical decision point in the incident handling process. Incidents shouldn't be handled on a first-come, first-served basis, but should be prioritized based on their criticality to the business.
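A prioritization scheme along these lines can be sketched as a simple scoring function. The weightings and impact levels below are illustrative assumptions, loosely inspired by the impact categories in NIST SP 800-61:

```python
# Weighting factors are illustrative assumptions; a real scheme
# would derive them from the organization's risk framework.
IMPACT = {"low": 1, "medium": 2, "high": 3}

def incident_priority(business_criticality, functional_impact,
                      information_impact):
    """Score an incident so the queue is ordered by business
    criticality rather than first come, first served."""
    return (IMPACT[business_criticality] * 3
            + IMPACT[functional_impact] * 2
            + IMPACT[information_impact])

queue = [
    ("crashed test server", incident_priority("low", "medium", "low")),
    ("payment system breach", incident_priority("high", "high", "high")),
]
queue.sort(key=lambda item: item[1], reverse=True)
print(queue[0][0])  # payment system breach handled first
```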

Chapter 9; Hunting For Threats
SOC analysts don't have to just wait for an attack to be detected. They can be proactive and hunt for any threats that might have got into the system undetected. Threat hunting involves looking for the threat agent itself or detecting traces of activity related to it. For example, finding a file of user credit cards in a temporary shared folder or finding an account which shouldn't exist are both evidence that there may have been an attack. A comprehensive set of threat characteristics, known as indicators of compromise, is necessary to enable the threat hunter to search for known threats. We can also use advanced analytics and big datasets to look for traces of threat activity in logs. This is how we might find beaconing, the regular connections malware sends out to its command and control system. The threat hunting process is a continuous cycle of looking for a trigger to provide the context for a specific investigation, the investigation itself, and then resolution through taking action to mitigate the threat that has been found. Idaho National Laboratory, in conjunction with the Department of Homeland Security, has released an excellent tool for threat hunting called Malcolm. This tool can be used in real time to monitor an attack as it happens, or more usually, as a way of analyzing a packet capture file to hunt for signs of an attack.
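A minimal sketch of how a hunter might sweep for known indicators of compromise follows. The IOC values, the log line format, and the idea of a local hash set are all invented for illustration; in practice, indicators would come from a threat intelligence feed:

```python
import hashlib

# Illustrative indicators of compromise -- stand-in values only.
KNOWN_BAD_HASHES = {
    # sha256 of the string "malicious payload", used as a dummy IOC
    hashlib.sha256(b"malicious payload").hexdigest(),
}
KNOWN_BAD_IPS = {"198.51.100.23"}

def hunt_file(content: bytes) -> bool:
    """Return True if the file's hash matches a known-bad IOC."""
    return hashlib.sha256(content).hexdigest() in KNOWN_BAD_HASHES

def hunt_connections(log_lines):
    """Yield log lines whose destination IP is a known-bad IOC.
    Assumes the destination IP is the last field of each line."""
    for line in log_lines:
        if line.split()[-1] in KNOWN_BAD_IPS:
            yield line

print(hunt_file(b"malicious payload"))   # matches the dummy hash IOC
print(list(hunt_connections(
    ["10.0.0.5 -> 198.51.100.23", "10.0.0.5 -> 93.184.216.34"])))
```

Tools like Malcolm automate this matching at scale across full packet captures and enriched logs.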

We can view a dashboard, or we can view the packet capture, either as packets using the Malcolm component called Arkime, or at the session level using the Malcolm component called Zeek.

Chapter 10; Responding to an Incident
Early containment is necessary to stop an incident from overwhelming resources or increasing the level of damage it inflicts. Pre-authorization to take action enables containment and allows time to develop a tailored remediation strategy. Containment decisions, such as disconnecting a system, are much easier to make if a response plan template for this kind of incident has been predetermined. Separate containment strategies for each major incident type need to be prepared and pre-authorized. Most incidents will require an ongoing investigation to trace back to the source and the cause of the attack, and this will occur in parallel with containment and recovery activities. This is likely to be the primary role for cyber incident responders, with IT and networks taking the lead on containment and recovery. Access to a wide range of sensor information is important to getting the network visibility required to fit all the pieces together. If the incident is serious, then it's likely that a major incident management (MIM) event will be called. This will typically be run by IT or network operations and will be under the control of the MIM manager. A MIM consists of a group of key stakeholders establishing regular meetings or conference calls to monitor the progress of incident resolution, make decisions collaboratively, and coordinate messaging. Although the primary reason for gathering evidence during an incident is to resolve the incident, it may also be needed for legal proceedings. In such cases, it's important to clearly document how all evidence, including compromised systems, has been preserved, using an official chain of custody evidence tag. After an incident has been contained, eradication may be necessary to delete malware, disable breached user accounts, and identify and mitigate all vulnerabilities that were exploited.
It's important to eradicate the issues, not only on the affected hosts, but on all hosts that could be affected through the same or a similar attack. For example, removing a default administrator account on one server, whilst leaving the same account open on another, is just asking for more trouble. The last and probably most important rule when responding to an incident, is to continue monitoring for other incidents. An attack may well be a diversion in order to gain more subtle access somewhere else on the network.
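The fleet-wide check described above might be sketched like this. The host inventory and the `get_accounts` lookup are hypothetical stand-ins for real directory or configuration-management queries:

```python
# A minimal sketch of fleet-wide eradication checking, assuming a
# simple in-memory inventory in place of real directory queries.
DEFAULT_ADMIN = "admin"

def get_accounts(host, inventory):
    """Stand-in for querying a host's local accounts."""
    return inventory[host]

def hosts_still_exposed(hosts, inventory):
    """Return hosts where the default admin account still exists,
    so eradication covers every system, not just the breached one."""
    return [h for h in hosts
            if DEFAULT_ADMIN in get_accounts(h, inventory)]

inventory = {
    "web01": {"svc_web", "alice"},          # cleaned
    "web02": {"svc_web", "admin"},          # still exposed
    "db01":  {"svc_db", "admin", "bob"},    # still exposed
}
print(hosts_still_exposed(inventory.keys(), inventory))
```

The point of automating the sweep is that eradicating a weakness on one host while missing it on another leaves the same door open to the attacker.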

Chapter 11; Communications Plan And Notification
One of the critical activities in any incident response is communications. It's particularly important to get this right when we have to prepare our senior executives to face interviews with the media. Doctors Knight and Nurse are two British researchers who've developed a practical framework for effective corporate communication in the event of a data breach. It covers the preparation of the communications plan in advance of an incident and execution of the plan as part of the response. The pre-crisis component of the framework covers five objectives, as shown. It requires that we establish and prioritize our long-term aims beyond just the response. These might include protecting our stock value, our brand, and our ability to trade. We need to determine security gaps so that we're not caught flat-footed by weaknesses in our systems that might have contributed to the incident. Better to know about and explain them than be caught unawares. We need to make sure, before we have an incident, that we have the capability to respond to a crisis, both in terms of tools and skilled staff. We can gain a lot by making sure our response plans include working with our partners and key organizations in our supply chain. We'll have a more effective response and we'll be prepared to communicate with a unified voice. Last but not least, we need to perform regular rehearsals and testing to make sure the response plans work and that we're experienced in following them. When we experience an incident that has an external impact, we'll need to decide when and how to disclose it. The framework provides for two situations: firstly, where we are required to disclose it, and secondly, where we choose to disclose it to avoid the potential downstream issues of being perceived to be hiding it. Having made the decision to disclose, we then need to address the main points of our disclosure.
Can those impacted by the incident mitigate the risk, for example, by changing their credit cards? We need to be able to say what has been lost and its impact, and provide a point of contact for any questions. We'll also be asked, and need to have an accurate answer for, the size of the breach. We need to be mindful of what interpretation the media may put on this, and ensure we establish our own interpretation of the incident. The way we frame the message will have a significant bearing on how it's interpreted. There are four key things we need to do. Accept responsibility for having let data in our care be breached. Avoid trying to make the incident seem less important than it is, because the truth will out eventually. Be aware of, and make a point of addressing, the fact that those impacted, our staff or our customers, may feel quite vulnerable as a result of their data being potentially made public or misused. Finally, it's important that we don't try to blame someone else for the incident. Even if the weakness came from a service or product we're using, it's our responsibility to make sure these are fit for purpose. By this point, we've had an incident and we need to make as good an impression as we can under the circumstances. Being upfront and taking responsibility will go a long way to mitigating the long-term impact. It's likely that, for a significant incident, the person who has to face up to the regulators and media will be the chair of the board or the CEO. In addition, there's an increasing focus from regulators on making directors accountable for cyber incidents, and data breaches in particular. An example of this is the U.S. Securities and Exchange Commission ruling that came into force in December 2023. This requires public companies to disclose material cybersecurity incidents within four days and to disclose, on an annual basis, material information regarding their cybersecurity risk management, strategy, and governance.
In addition to disclosing the governance processes, directors will need to be able to evidence that they have, in fact, provided effective oversight of cybersecurity. There are guidelines that directors can follow in governing cybersecurity:

- Ensure CIOs provide effective cyber resilience for IT systems.
- Ensure cyber resilience is a critical project success factor.
- Recognize that cybersecurity is about managing risk and requires separation of duties, so the CISO should report to the chief risk officer, not the CIO.
- Acknowledge that security is never perfect, and ensure CISOs are able to effectively detect and respond to a cyber attack.
- Ensure IT systems are checked and approved for operation on a regular basis, a process known as accreditation.
- Ensure the board-level risk and resilience dashboard is maintained.

Chapter 12; Cybersecurity Goes Global
With the advent of the internet, there came a need to interconnect certain aspects of information technology. Telecommunications providers needed to be able to connect data services through global gateways, and with that came the need to provide security at those gateways. Electronic information evolved from simple bulletin boards to sophisticated websites, and a simple exchange of text messages evolved into the now ubiquitous electronic mail system. Such evolution required establishing global technical standards for interconnectivity and security. The Internet Engineering Task Force (IETF) had been producing technical requests for comments, or RFCs, from the start of the internet. Shortly thereafter, ISO, the International Organization for Standardization, initiated a project to develop a more sophisticated set of standards known as Open Systems Interconnection, or OSI. These were not widely adopted, and the IETF continues to be the driving force in internet standardization. Coordination of internet addressing is managed by the Internet Corporation for Assigned Names and Numbers. ICANN, as it's known, is an American-based organization responsible for the databases which determine internet naming and traffic routing. While this arrangement is designed to ensure the stable and secure operation of the internet, it also gave America control of the internet.

This became a bone of contention with some of the other cyber-savvy nations. As early as 2008, there were signs that Russia was concerned about US control over the internet and was considering breaking away and running its own national network. This was also driven by Russia's goal of managing the information available to its citizens. China, too, was making sure that free information and Western culture did not permeate the emerging Chinese cyberspace domain. The Great Firewall of China, with its estimated 50,000 cybersecurity defenders, carries out a highly effective program of cyber control and surveillance of its citizens. By 2010, the West was becoming very nervous that the global economic miracle being realized through the internet was about to crash. Would the internet become a splinternet? This led the UK to run the first of what was to be an ongoing program of global conferences on cyberspace. The conference considered a set of seven principles for use of the internet, which provided the foundation for maintaining a global network while ensuring nations were able to operate within their own cultures.

The seven principles proposed at the start of the initiative were as follows:

- Proportionality: governments should act in cyberspace in accordance with national and international law.
- Accessibility: all people should be able to access cyberspace.
- Respect: users of cyberspace should show tolerance and respect for diversity of language, culture, and ideas.
- Human rights: the internet should encourage the right to privacy and the protection of intellectual property for all people.
- Openness: cyberspace should be an open forum for innovation and the free flow of ideas, information, and expression.
- Collaboration: nations should work collectively to tackle the threat from criminals acting online.
- Competition: the internet should be a competitive environment which ensures a fair return on investment in network services and content.

Before moving on, take a moment to think about these principles. How well do they fit with the approaches the US, UK, Russia, and China take to the internet?

Chapter 13; Understanding Cyber Norms
The principles tabled at the First Cyberspace Conference have evolved into the United Nations cyber norms, the rules of normally acceptable behavior for any nation using the internet. These are managed by the Group of Governmental Experts at the UN's Office for Disarmament Affairs, UNODA. The United Nations encourages peaceful use of the internet through adherence to the set of cyber norms and through an active program of cyber diplomacy. UNODA provides a full training course on cyber diplomacy, which includes a module on cyber norms, rules, and principles. Implementing cyber norms isn't always easy, however. The first cyber norm is cooperation between states in order to increase stability and cybersecurity, and to discourage harmful cyber practices, particularly those that might pose threats to international peace and security. There's been a lot of progress on cooperation, with nations maintaining a technical focus and avoiding political issues. The second cyber norm is a duty of care over incidents. This means not jumping to conclusions and making sure that all aspects of the incident are considered. This includes addressing the challenges of determining accurate attribution and understanding the impact that's occurred. This is important to avoid misunderstandings and wrongful blame escalating into a more serious event. The third cyber norm is that states should not knowingly allow their territory to be used for malicious cyber activities, including launching cyber attacks and running malicious servers. This is a challenging norm to uphold, especially when private citizens or groups respond to international events by launching private attacks, or when a state relays its attacks through another country. The fourth cyber norm is similar to the first in that it involves cooperation between states. However, the focus in this norm is to counter terrorist and criminal use of cyberspace.
The norm suggests that nations exchange information, assist each other, and pursue prosecution as part of bilateral and multilateral cooperation. The fifth cyber norm is to respect human rights on the internet, including freedom of expression and privacy online. There are many cultural challenges in meeting this norm, and challenges also with the growing use of misinformation and the oversight of social media. As a result, this norm encourages nations to apply the same rights online as exist in their nation offline. The next norm is similar to the third norm, encouraging nations not to carry out or support malicious cyber activities, but with a focus on those that impact critical infrastructure. This is the first of three norms relating to critical infrastructure. Following this is the second critical infrastructure norm, encouraging nations to proactively protect their critical infrastructure from attack. The third critical infrastructure norm is that nations are encouraged to respond to requests from other nations whose critical infrastructure is under attack, particularly where that attack emanates from or relays through their nation. The ninth cyber norm is to take steps to protect the supply chain from being compromised, starting with nations where information technology products are designed and developed. This is a challenging norm for technology-producing countries, where the temptation to subvert equipment is high. The tenth norm is about sharing vulnerability information between nations to support early global mitigation. The final norm again encourages nations not to carry out or support malicious cyber activity, this time with a focus on the systems of the Computer Emergency Response Teams of other nations. Take a moment to consider the fifth cyber norm, which covers freedom of expression and privacy online. We're seeing a lot of hateful commentary on the internet, some of which is nation-state generated to influence another nation's opinion.
Is this okay, because we're encouraging freedom of speech? Consider the privacy of terrorists communicating about an attack they're planning. Should they be allowed to do this in private? And if not, then how do we manage legitimate privacy concerns? The United Nations cyber norms set out what are generally accepted behaviors on the internet and have evolved significantly from the initial London principles. While laudable, there is a big gap between what nations accept as global norms and what they practice as global participants.

Chapter 14; Cybil And The Global Forum on Cyber Expertise
The Global Forum on Cyber Expertise (GFCE) was established during the 2015 GCCS meeting in The Hague, with the aim of strengthening cyber capacity building globally. At the New Delhi GCCS meeting in 2017, the GFCE launched the Global Agenda for Cyber Capacity Building, and in doing so became the global coordinating body for capacity building. The GFCE has five themes: cybersecurity policy and strategy; cyber incident management and critical infrastructure protection; countering cybercrime; cybersecurity culture and skills; and cybersecurity standards. The GFCE encourages voluntary participation by governments, private companies, civil society, the technology industry, and academia, in order to share expertise. The GFCE operates a number of conferences, meetings, working groups, and task forces, and provides a clearinghouse to enable participants to offer their services and expertise to countries which need assistance in developing their cyberspace. This is achieved through the use of a collaboration portal called Cybil. The GFCE provides a range of reports on the development of cybersecurity, resilience, and cyber diplomacy, and these are available via the Cybil portal. Cybil also provides details of the various projects by beneficiary country. Here we see the first 4 of 27 projects relating to assisting Cambodia develop its cyber expertise.

Chapter 15; The Traffic Light Protocol
As cybersecurity collaboration between governments, private industry, and other nations has grown, it became apparent there was a need to manage information exchange without resorting to national classification schemes. Information needed to flow freely to those who needed it, but not be accessible to the point where it compromised the global cybersecurity activities it was intended to assist. This led to the creation of a scheme called the Traffic Light Protocol (TLP), which adds markings to information being exchanged to indicate how freely the information can be shared. There are four marking levels, three of which reflect the colors used in traffic lights:

- TLP White: the information can be freely shared, as there is no risk of misuse.
- TLP Green: the information can be circulated widely within the recipient's sector community, but not via publicly accessible channels, such as an open website. An example of this would be sharing a sector-specific malware analysis.
- TLP Amber: the information can be shared with members of the recipient's organization and with clients or customers who need it to protect themselves. Once again, this information should not be shared via publicly accessible channels. This form of information might include items such as sensitive indicators of compromise.
- TLP Red: the highest level of marking in the protocol, used when information is intended for the recipient only. This may be an individual or a committee. Unauthorized disclosure of TLP Red information could impact a party's privacy, reputation, or operations. Examples might include tentative attribution of a cyber attack.

ENISA provides more detailed information on what we might need to think about when we receive TLP-marked information.
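The sharing rules above can be captured in a small lookup table. Here is a minimal sketch; the audience categories are a simplification made for illustration:

```python
# Sketch of the Traffic Light Protocol sharing rules described
# above, with simplified audience categories for illustration.
ALLOWED_AUDIENCES = {
    "WHITE": {"public", "community", "organization", "recipient"},
    "GREEN": {"community", "organization", "recipient"},
    "AMBER": {"organization", "recipient"},
    "RED":   {"recipient"},
}

def may_share(tlp_label: str, audience: str) -> bool:
    """Return True if information with this TLP marking may be
    shared with the given audience."""
    return audience in ALLOWED_AUDIENCES[tlp_label.upper()]

print(may_share("green", "community"))   # True: within the sector
print(may_share("amber", "public"))      # False: no public channels
print(may_share("red", "organization"))  # False: recipient only
```

A threat-sharing platform would apply checks like this before posting or forwarding a report, so the marking travels with the information.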
