If a theme emerged during the third annual NTSC National CISO Policy Conference, it was one of hope. Hope that the partnership between the public and private sectors continues to improve and strengthen national security. Hope that cybersecurity evolves toward models that help CISOs more proactively fend off cyberattacks. Hope that a continued focus on cybersecurity threats leads toward better legislation, regulations, and best practices to help our industry.
But some clouds continued to darken that hope. Many of the keynote speakers and panelists acknowledged that threats to critical infrastructure keep them up at night. Many acknowledged that the cyber threat intelligence exchange needed to keep up with cyber threats is not moving fast enough. Many acknowledged that technology advances—despite a lot of excitement and promise—continue to outpace our ability to secure that technology.
This blog post summarizes several important discussions among the keynote speakers, panelists, CISOs, and senior technology executives who attended our annual conference. These discussions are an essential part of our mission to bring the public and private sectors into productive dialogue with each other, and these candid conversations lead to action items (facilitated by the NTSC) that impact national cybersecurity standards.
Early in his talk, Krebs applauded the NTSC’s work in helping operationalize the public-private partnership on a national basis, citing H.R. 1975 as proof of the NTSC’s leadership. With CISA’s sharpened focus (compared to its predecessor, the National Protection and Programs Directorate) and collaboration with other federal departments and agencies, Krebs made the case that the federal government is getting better with its processes, policies, and measures. CISA is actively working on better informing information collectors, aligning resources, operationalizing, partnering, and centralizing to save money and find efficiencies.
Using the CISA Statement on Iranian Cybersecurity Threats as a recent example, Krebs described how CISA takes scattered data points and stitches them together to benefit businesses and the public. CISA can pull together a bigger picture of specific cyber threats than the private sector can assemble for itself. Because of this capability, Krebs wants to support the CISO’s mission by engaging more CEOs and boards and making the broader case for public-private cyber intelligence exchange to business leaders. Considering itself the nation’s risk advisor, CISA wants to put information in the private sector’s hands to help them make decisions. Ultimately, the private sector manages risk—CISA will advise but not manage risk for companies.
Right now, major areas of focus for CISA include critical infrastructure (as seen by CISA’s national critical functions set that maps to economic functions), the global supply chain (seen by examples such as Kaspersky or Huawei), ransomware (especially at the state and local government level), and issues related to the fundamental fabric of the internet (such as DNS).
General Davis pointed out that the world of cybersecurity is rushing away from the world of technology at a time when it needs to align. Right now, technology grows simpler, easier, and more convenient with fewer people involved, native integration into products, and more automation. At the same time, cybersecurity becomes more difficult and complicated by involving more people, relying on too many manual processes, and reactively responding to attacks.
To do a U-turn and align to technology, cybersecurity needs to involve four “Is”:
Drawing upon his experiences in the federal government, General Davis said that nation states just want companies to make a mistake. They recon and probe, exploit vulnerabilities, and deliver malware. To counter such sophisticated attacks, a security framework needs to use unstructured machine learning and respond at the speed that cyber threats operate. That requires complete visibility, reducing the attack surface, and preventing known and unknown threats. As cybersecurity aligns more with technology, we must nurture a prevention mindset.
To conclude, General Davis pointed out that we need to get over our industry’s culture of leeriness about sharing (which took a big hit with Edward Snowden). The public-private sector relationship is better than it appears. Businesses answer to shareholders and customers, so they may not see the importance of information sharing. Congress can help by advocating for legislative incentives to move in this direction.
At first, a CMO’s point of view seemed out of place at an event full of CISOs. However, Neumeier’s presentation became one of the day’s most well-received—emphasizing the CMO’s critical role before, during, and after cyber events.
In many cases, lawyers end up leading crisis communications when a data breach occurs. That’s not a good idea. Lawyers are not professional communicators like marketers, and CMOs should not take a backseat to lawyers. A host of activities needs to take place beyond a crisis plan, such as responding quickly and effectively within hours, communicating apologies through multiple mediums, and explaining how the data breach won’t happen again.
It’s a mistake to act stoic during a cyber event, as others could construe that reaction as indifference or guilt and allow others to define the narrative. Neumeier posed the question, “What would reasonable people appropriately expect a responsible organization or leader to do when confronted with this kind of situation?” Four additional questions included:
Neumeier said that smaller teams are better than larger teams when enacting a crisis plan. His concluding recommendations to CISOs were to find your CMO, review your crisis communications plan, make sure it covers your cybersecurity scenarios, and drill the plan.
Arsenault led a discussion about cybersecurity transformation that involves empowering employees, engaging customers, optimizing operations, and transitioning products. As an example, Microsoft (like many other companies in the early days of technology) would give its technology product to customers and the customer would be responsible for operating it. A company like Microsoft would have no visibility into how a customer ran their product. Today, we take the opposite approach where many technology companies have visibility into how customers run their products—and that includes a responsibility to make sure those products are secure.
As an example of protecting the security of both customers and employees, the discussion explored why we make two-factor authentication (2FA) so hard when it’s one of the single best ways to secure accounts. Eliminating passwords is possible with the help of 2FA, and yet fewer than one percent of people turn on 2FA. This leads to a dilemma for CISOs. What’s their responsibility? To force it on users? To make 2FA the default? These approaches are risky as they tend to degrade the user experience and add friction.
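For readers less familiar with what happens behind a 2FA prompt, the time-based one-time passwords (TOTP) used by most authenticator apps are straightforward to compute. The sketch below is a minimal illustration of the RFC 6238 algorithm using only the Python standard library; it is not any vendor’s implementation, and a production system would also need secure secret storage, rate limiting, and clock-drift tolerance.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of time steps since the Unix epoch.
    now = time.time() if timestamp is None else timestamp
    counter = struct.pack(">Q", int(now // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Verifying against the RFC 6238 test vectors (the shared secret is the ASCII string `12345678901234567890`, base32-encoded) confirms the logic; the point for the usability debate is that the entire mechanism fits in a dozen lines, so the friction is in enrollment and habit, not the cryptography.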
The discussion also delved into the opportunities and challenges of AI, which attendees seemed to agree should be called “augmented” (rather than “artificial”) intelligence. Practical applications of AI include using machine learning to prevent people from using common passwords, helping executives fend off spear phishing attacks, or tracking and interpreting regulations. As AI develops, it needs to be rooted in ethical principles and timeless values such as fairness, inclusiveness, reliability, safety, transparency, privacy, security, and accountability.
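The common-password check mentioned above does not require sophisticated machine learning to get started; even a simple screening step against a deny list (as NIST SP 800-63B recommends) blocks the most frequently breached choices. The sketch below is a hypothetical illustration: the word list is a tiny sample for demonstration, not a real corpus, and a production check would screen against a large breached-password dataset.

```python
# Illustrative sample only; real deployments screen against large
# breached-password corpora rather than a hand-picked set.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "iloveyou"}

def is_acceptable(password: str, min_length: int = 8) -> bool:
    """Reject passwords that are too short or appear on the deny list."""
    if len(password) < min_length:
        return False
    return password.lower() not in COMMON_PASSWORDS
```

A learned model can extend the same idea by scoring near-variants (e.g., "P@ssw0rd1") that a literal list misses.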
Attendees wrestled with thoughts about assessing and reskilling people for digital transformation. Too much outsourcing occurs instead of developing in-house, full-time talent. When developing this talent, companies need to simplify skills and recruit more technical product builders. Overall, companies must build for the future and increase diversity by hiring people early in their career. However, one downside of reducing vendor count and hiring more full-time people is losing flexibility with budget. Digital transformation will inevitably involve tradeoffs that challenge the status quo and lead to different ways that CISOs approach their organizations.
US Cyber Command and the NSA are facing continual threats below the level of armed conflict such as intellectual property theft and threats to democratic principles—but threats to our critical infrastructure worried the three generals most. The United States is facing unacceptable threats, and decades of not responding to low-level attacks have led to strategic harm.
Before cyberspace, we could clearly identify if a situation was war or not war. Now, cyberspace presents a vast middle area. To confront such ambiguity, we need to be collaborative, not accept the status quo of the old deterrence rules, and make sure someone is in charge of responding to cyberspace threats. At the same time, we need to be careful about contingencies and consequences of decisions in cyberspace. For example, Eastern European countries share the same electric grid as Russia. If we do something to Russia’s grid, we also affect partners and allies.
In addition, the panelists noted we are defending the nation with very few people and resources. This is why the concept of “collective defense” holds such importance as a vision that leads us toward breaking down barriers between the public and private sectors, eliminating proprietary information sharing restrictions, and using AI more. Otherwise, we may be opening ourselves up to a “Cyber Pearl Harbor” event.
The way to beat these threats involves operationalizing the public-private sector partnership, and it’s clear we have a long way to go. While the panelists acknowledged that the federal government is slowly getting better at information sharing, it’s not moving fast enough. One key problem is trust. While we hear a lot of talk about public-private partnerships, actions speak louder than words. A tremendous amount of work still needs to be done across all entities that share information, and it takes a lot of time and resources to nurture these partnerships. Only by working together can we understand the entire cyber threat picture.
The good news is that different partnerships are rapidly growing in different areas, and information sharing platforms are also exploding. Solutions must move at the speed and scale of the threat. We need a common data standard to help public-private sector cyber threat intelligence exchange and encourage sharing this information more broadly. The US government is having trouble doing this internally. Plus, many industries still don’t see the national security imperative of public-private partnerships. We need more buy-in from private industry, and they need to see themselves as part of national security.
A clear correlation exists between cybersecurity compliance and national security. When companies comply with cybersecurity standards, national security improves. However, compliance challenges include budgets and board support, CISOs’ creation of innovative and entrepreneurial environments, and the cybersecurity workforce shortage. Because CISOs aren’t the most popular executives within their companies, it’s sometimes difficult for them to get the resources they need to do their jobs.
CISOs often turn over, so the impact of the team they leave behind matters when it comes to ongoing compliance. In addition, the cybersecurity workforce shortage is impacting compliance. With 3.5 million positions projected to go unfilled by 2021 and many barriers to entry for a wider, more diverse talent pool, organizational support is often insufficient; many resumes never make it from HR to CISOs. CISOs are also getting pressure from boards to offshore SOCs.
Competency program outcomes need to be based on a repeatable solution, a standard, and measuring risk to the organization. The outcome needs to result in a highly skilled, diverse cybersecurity workforce. Instead of thinking about the traditional GRC, Gina coined GTPRC (Governance Threat People Risk Compliance). The NICE Framework serves as a standard to align people to cybersecurity strategy by helping CISOs map job descriptions to it, develop organizational structure and management teams (through labs, cyber ranges, and boot camps), assess skills, and create workforce development plans.
An attendee noted that we need to change our culture from the inside. CISOs and the security industry don’t do a great job at opening their arms. There is a tribal aspect within the security industry encouraging the hiring of people “like them.” As a result, the talent shortage becomes essentially a shortage of our own making.
As the next evolution in cyber defense, Integrated Adaptive Cyber Defense (IACD) is often misconstrued as a “thing”—as if it were just another product. It’s actually a strategy and framework that works differently for each company, using a combination of integration, automation, and orchestration that together drive network defense.
During this event, the NTSC announced a partnership with Johns Hopkins related to IACD. The two organizations will team up to establish a working group. The NTSC will facilitate bringing CISO projects to Johns Hopkins, and Johns Hopkins can work with a CISO’s organization on implementing an IACD program. To establish the proof of concept, Johns Hopkins ran two successful IACD pilots with Huntington Bank and HP Enterprise that drew out lessons learned and proved important ideas about information sharing.
At large enterprises, the implementation of IACD proves extremely effective. Many organizations that wish to skip IACD often make a few mistakes, such as thinking they just need an automation and SOAR technology upgrade, thinking they don’t need orchestration because they already do scripting, and thinking they just need an orchestrator. None of these are effective. The solution is not just about technology; it’s about people and process. Scripts can conflict with each other, and orchestration provides a sustainability and manageability that scripting alone does not.
Panelists talked about the importance of automation. For example, if incident response and investigation processing is not automated, then you’re behind. Automation works especially well when addressing incidents with various levels of benefit and regret. An IACD strategy can help organizations automate the determination of risk and the action to be taken. When this occurs, teams handle fewer tasks and free up time for higher-value work. IACD values consistency, joint prioritization, and visibility into everything. Does something matter or not? What’s going to allow you to act? This strategy leads to quick cross-organizational process improvements and gives you threat intelligence that drives network defense.
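To make the benefit-versus-regret idea concrete, an IACD-style playbook can score each alert and route it to an automated action, an analyst, or a log entry. The sketch below is a hypothetical illustration, not part of the IACD reference materials; the scoring formula, thresholds, and action names are invented for demonstration, and a real orchestration playbook would tune these to the organization’s own risk tolerance.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g., "edr", "ids" (illustrative labels)
    severity: int      # 1 (low) .. 5 (critical)
    confidence: float  # 0.0 .. 1.0

def triage(alert: Alert) -> str:
    """Map an alert to a response, weighing benefit against regret.

    High-score alerts get an automated containment action (high benefit,
    low regret); ambiguous ones escalate to a human; the rest are logged.
    """
    score = alert.severity * alert.confidence
    if score >= 4.0:
        return "isolate_host"   # act automatically, at machine speed
    if score >= 2.0:
        return "open_ticket"    # hand to an analyst for judgment
    return "log_only"           # record it and move on
```

The value is less in the arithmetic than in the consistency: every alert is prioritized the same way, and analysts only see the cases where human judgment actually changes the outcome.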
At a high level, reflections about the IACD pilots were positive. At first, challenges existed making the business case or getting vendors on board, but everyone eventually saw the value of IACD. The IACD community ended up as an unexpected bonus. Abstracted capabilities become playbooks that are shared and can be adapted for each company. The community also shares ideas and use cases across different industries.
When the discussion centered on how teams react, panelists acknowledged that people can be reluctant about IACD. They sometimes think the end game of automation eliminates them from the picture, and they feel fear. Getting people comfortable with the change is important. Teams need to know that if they automate, they succeed. The impact to the team isn’t losing work or experiencing job threats. Instead, they can keep learning.
Instead of having to manage many different products, CISOs can use IACD to streamline operations and return to the driver’s seat. To develop a strategy that works, panelists recommended taking highly repetitive tasks that people don’t like to do and automating them. Don’t try to do it all at once; perhaps start with a third of the organization. Taking such steps over time allows the entire strategy to eventually come together.
We need to grow bolder because the adversary uses automation and orchestration, leaving us no choice but to adopt IACD. The attackers are moving faster than us, and we’re not doing a good job responding. Organizations cannot resist IACD. It’s the next evolution in cyber operations, moving the conversation from speed to scale.
It always helps to hear directly from the FBI about trends that relate to national cybersecurity policy, information sharing, and best practices. After establishing some basic definitions of how the FBI identifies cybercrime, Frigm pointed out that the definition of "authorization" is key when legally enforcing hacking cases—especially when assessing if someone exceeded authorized access. This definition is important because such language makes tactics such as “hacking back” legally problematic.
Also useful was a review of the mission of the FBI Cyber Division and how PPD 41 in July 2016 demarcated rules of responding to cyber incidents between DHS, DoD / NSA, and the DOJ / FBI. The FBI’s information sharing is all about actionable intelligence. Advisories are disseminated to private sector partners based on current threat analyses, and the FBI collaborates with IC3, DHS, and the Secret Service. Investigative resources include the Cyber Action Team (deployed within 24 hours to an incident), the National Cyber Investigative Joint Task Force, and CART (which focuses on computer forensics).
Today’s top cyber threats are driven by hacktivism, crime, insider threats, espionage, terrorism, and warfare. While threat actors have different motivations, they often use the same tactics—such as ransomware. Ransomware originated as a petty, low payment crime but has evolved into more sophisticated, targeted attacks asking for very large amounts of money. Why? Because people pay. Threats to industrial control systems, IoT, and the supply chain are also especially worrisome right now.
As an example, we indicted Iranians in 2016 for denial of service attacks they conducted in 2013. They gained access to the control system for a dam in Rye, New York and could have flooded several towns. It was just luck that it didn’t happen—the control systems were disconnected for maintenance. Nation states are also using destructive malware to attack organizations and destroy data in the United States.
Baseline best practices include backing up, developing a strong incident response plan, getting sensitive data off the internet (and being careful about attached systems), turning logging on, and paying attention to social engineering risks and threats.
Interested in participating in next year’s conference, joining our Board, or becoming an NTSC underwriter? Reach out to the NTSC at firstname.lastname@example.org.