From my perspective, the collaboration and conversation that have been occurring within business, industry, and government have really allowed us to look at the way we used to do things and see how we can improve. I'm not looking for prescriptions, but direction is nice. I'd also like to know what my peers are doing. Many of the meetings we attend today are specifically about getting together with peers and discussing how we're meeting problems and challenges. So, just the fact that industry has opened up to have that cooperative conversation is important.
But government has also evolved. If you look at older documentation versus the new cyber framework, there's a lot more real-world collaborative information in the documentation today that allows people to actually get things done instead of just checking a box. You have the government actually trying to move in the direction of industry and say, "Okay, how do we improve ourselves? How do we get beyond the checkbox mentality?" For me, while we're always going to have audits and regulations, I am seeing a shift in these organizations and entities moving toward a more proactive, non-audit security mentality. They really address security up front, whether you're developing, installing, or protecting something. It's not just, "Check this box and you're done." It's moving beyond that.
Security awareness has also come a long way along the maturity curve. Security used to be something that geeks in a dark room worried about. You know the old saying, "They'll keep the barbarians at the gate."
That’s changed. Technology has advanced and the perimeter is no longer defined by hard lines such as firewalls and IPS/IDS. Now, you take devices with you. They're mobile. They're outside your environment. You bring things that are outside your environment into it. And unfortunately, that has presented an opportunity for events and incidents to occur, as that type of advancement has outpaced the security field in many cases.
But to take a negative and spin it into a positive, this situation has helped increase awareness because now everyone is part of the conversation. Things like the Sony hack brought a level of awareness to the average individual even though national security isn’t their purview. And then, within the security realm as that conversation has been fostered, you now get board members and senior executives saying, "Hey, you know, I heard about that Sony breach. Are we protected?" That was a real world example where awareness, as an unfortunate event for somebody else, helped foster a stronger posture and community perspective.
Unfortunately, I think it's a bit of a Wild West show at this point. Manufacturers in many of these arenas are trying to be first to market from an innovation or UX perspective to deliver a service, and they're not even thinking about security in many regards. The average consumer certainly isn't thinking that their Samsung TV might listen to them or that someone may access their baby monitor. Those are things that I don't think in any way, shape, or form cross the minds of consumers—and they certainly do not, in many cases, cross the minds of developers. That's because despite more awareness and everything about software development and security development lifecycles, businesses still drive hard on either first to market or increased market share. They will pay attention to that until something major happens.
Beyond the consumer in a more commercial sense, non-traditional things becoming internet-aware or internet-enabled create security issues. For example, take HVAC. Obviously, it created a big challenge for Target as it was the point of entry for their massive breach. I think many organizations are going to struggle with the fact that the systems needing protection may be managed by a non-traditional group, like facilities working with HVAC vendors. It can be something as simple as your vending machine.
So, I think that is a huge challenge from a security professional's perspective as those items are being brought into the workspace more and more. You have everything from patching to vulnerability management to deal with and, unfortunately, many of these things are black boxes. They don't make patching easy to do, and that's if you even know about them.
I absolutely think it will be a continual problem to keep up with it all. Let's be honest. The real security incentive to an organization that's creating IoT is likely to be either a lawsuit or legislation because the average consumer is not going to stop buying baby monitors, cameras for their house, or internet-enabled thermostats. They're not stopping, period. A lot of people like to say, "Well, the consumer will vote with their feet and their wallet in this case." The reality is the consumer has voted. The consumer wants the convenience, ease, and coolness. They expect, but don't demand, that it is secure. That's because if they demanded it, they wouldn't buy the product.
It's a challenge, for sure. And a delicate balance. A lot of security practitioners in the last few years have tried to jump on the “we are an enabler” bandwagon. "We're an enabler. We're not a disabler. We're not a speed bump."
Sorry, but yes we are. Security is all about ensuring that something bad doesn't happen. To do that, you have to put certain controls in place that add an extra business layer and prevent you from doing something without a specific reason. It's as simple as the lock on the door in my house that keeps me secure. If I want true ease of access, I have no door. If I want the easiest possible access with a door, I have no lock.
As a security apparatus controlling things, you are going to introduce a certain amount of hindrance to business. The challenge becomes picking and choosing which hindrances reduce the greatest risk to the organization. How much am I willing to hinder people bringing, say, BYOD into the organization? Every organization is different. It depends on the information you're dealing with and which information users need access to. In my particular industry, I'm certainly not going to allow someone to put credit card data on a private machine that I cannot control. If you were a medical facility, you're certainly not going to allow patient records with HIPAA impact to be placed on a machine that you're not controlling.
It’s an arms race, in a way, and technology moves at lightning speed. Often, security lags in that arms race. It’s part of security's responsibility to come up with a solution that meets the business need and helps minimize the shadow IT challenge. While I prefer the carrot point of view, the stick is also needed. No organization that deals with some form of regulatory or legislative data such as HIPAA wants to be on the news because Bob's laptop got stolen and he realized he had downloaded 1,500 patient records because he wanted to do some work on the weekend.
It’s an old saying, but the human firewall is the most fallible. So, ongoing education and awareness are going to remain top of mind because, in my opinion, you can always do blanket awareness efforts but then you must do targeted education. It's fine to do phishing campaigns and educate everyone about phishing. But if you have specific needs, from dealing with PCI or PAN data to HIPAA data and finance, there's an additional education component that I believe has to go beyond just general awareness to educate those people about why.
Not the what, because that's easy. "Don't do this." That's the what. I find, once people can make a logical or emotional connection to the reasoning why, then they begin to own the problem. They begin to own their response to how they're going to interact with the systems and how they're going to behave. User behavior, at that point, becomes much more important. Shortly after a phishing campaign, you'll often see an uptick in reported suspected phishing activity to your cybersecurity organization because that awareness has been raised. People still feel that pain.
If we don't educate the user, we are doomed to failure. The attackers are human. The threat actors are human. The tools they use are technological. But all they are doing is finding ways to exploit the human condition to get that individual to do something with a piece of technology. Most of the attacks that you'll see today are in some way, shape, or form exploiting the human.
I think the role is going to continue to change. We used the term "shadow IT" before. I think it's all about bringing the leadership role of the security organization out of the shadows. For a long time, CISOs have stood in the shadow of the CIO, and that's not to say it was a bad thing. It's just natural growth. The CIO is often seen as the pinnacle of IT leadership. The challenge becomes that often security requirements and recommendations may introduce some slowness or additional overhead that would conflict with the primary goal of the CIO, which is to have an IT organization that supports the business.
Many of the peers with whom I now interact are no longer just pure security geeks, if you will. They are becoming business-savvy. The Chief Security Officer truly understands business in general and the specific business needs of their organization, and they are able to tailor their vision, view, and implementation of the security apparatus to that organization. That approach has garnered the attention of the executive team and executive leadership that now says, "Okay, we've got someone who can sit at the table. They can understand the business, where we need to go, and tailor what needs to be done to that."
And on the flip side, you have a security individual who doesn't just recite the security side to the business individual, but translates it into business terms. I think that ability to bridge the gap has been incredibly important to the Chief Security Officer. If you have someone who comes in and speaks just pure technical language, you're going to lose the executive team in about four sentences.