The "Higher Security Mindset" - Seven Best Practices to Keep you Safe
Updated: Mar 16, 2018
Recently I received a couple of emails about some of my posts where I refer to the fact that we must have a "higher level" of thinking when it comes to information security. These emails asked just what that higher level means... and what it consists of.
I wish I could take credit for this type of thinking, but it really was taught to me by Kevin Day. However, I don't mind passing it on to you to further that knowledge to others. Most of this is ripped from his book "Inside the Security Mind", and I highly recommend you check out the book if you don't already own it.
When looking at infosec as a whole, we've got to stop worrying about the next whiz-bang security tool and start thinking about security best practices that, when followed, will help keep an organization safe. Even though the security landscape is constantly changing, these practices (when applied) will adapt to the highly dynamic nature of information warfare and allow you to repel your adversaries without much incident. And that is what makes a higher security mindset.
So let's talk about seven best practices that, when applied, will do more to protect you than running out to buy the next whiz-bang security tool uninformed.
Think in terms of Zones
Zoning is the process in which you define and isolate different subjects and objects based on their unique security requirements. For those uninitiated to the terms, a "subject" is a person, place or thing gaining access. An "object" is the person, place or thing the subject is gaining access to. I use the terms generically since, when zoning, you really could be applying them to anything: a file, a server, or even physical access to your safe. You have probably seen the concept of zoning in Internet Explorer, where Microsoft breaks zones down into the Internet, Local Intranet, Trusted Sites and Restricted Sites. This is just one example of how you can break something into zones. Of course, the concept of zoning can be applied anywhere, as long as each zone treats security in a different manner.
Although I have seen most people think of zones in a network-centric manner, they don't have to be. Zones can apply to applications, physical areas and even employee interactions with others as a defense against social engineering tactics.
Anyways, a zone is a grouping of resources that have a similar security profile. In other words, it has similar risks, trust levels, exposures and/or security needs. For example, an Internet facing web server will have a different trust and exposure level than an intranet web site. As such, the two should be in different zones. Though you can have umpteen different zones, typically the most common scenarios involve three zones:
The trusted (internal) zone
The semi-trusted (dmz) zone
The untrusted (external) zone
These three zones can apply to almost anything, from network based services, application programming and even physical security layouts.
The trick is separating zones in such a way that we can maintain higher levels of security by protecting resources from zones with lesser security controls. The separation mechanism between zones could be as simple as a firewall, a piece of managed code or a locked door. The goal is to have some degree of control over what happens between the zones, and to have logical communication mediums that allow zones to communicate safely where appropriate.
Theoretically it would be nice to live in isolation and never care about other zones. But in reality, at times some zones will need to be able to talk to others. If we didn't allow that, you couldn't reach the untrusted zone of the Internet from the trusted zone of your internal LAN; the connection would have to be severed. The trick is to understand the risks of exposure when communicating between zones, ensuring that some sort of filtering safeguard is working in between to determine what is, and more importantly what is NOT, allowed to communicate through the filter. As an example, there is a much higher level of risk in allowing a direct inbound connection from an untrusted zone to a trusted zone. This is why we have firewalls on our perimeters. (You DO have a firewall between the Internet and your computers, don't you????) The risks are significantly reduced if we place an untrusted inbound connection into a semi-trusted DMZ instead.
See how this all fits together? Zones give us the ability to reduce risk by applying technical safeguards in a logical manner through grouped resources. How we communicate between a trusted and semi-trusted zone would be different than an untrusted to trusted zone. And we can make better security decisions by understanding that.
I have been using a six-step process that Kevin showed me to apply the zoning concept to infosec decision-making. The following procedures can help in that process:
Identify any instance where an untrusted or less trusted object comes in contact with a trusted, valuable or more sensitive object.
Determine the direction of communication that is needed. Ask yourself: "Is it possible to use an outbound communication model (trusted going to less trusted), or do I need to have the untrusted object initiate the communication?" Where possible, ALWAYS try to have the more trusted zone control the communication.
Determine where it would be possible to separate the trusted object into two components; one that handles sensitive information and the other that acts as a relay or middle entity in the transaction. This is why proxies can work so well in security.
Determine what forms of communication need to take place between zones and block everything else. Understand the different levels of risk exposure and determine if it's necessary to perform the task. As an example, why use clear-text telnet when you can use secure shell (SSH)?
Place as many security controls between each of the components as is reasonably possible, remembering what assets you are trying to protect. A $50,000 firewall doesn't make a lot of sense to protect your $500 collection of Michael Bolton MP3s.
Document the reasoning, supporting data and conclusions in this decision-making process. Keep this document for reference and to simplify the decision-making process for similar situations in the future.
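The direction-of-communication rule in step 2 lends itself to a simple policy sketch. Here's a minimal Python illustration; the zone names, trust levels and return values are my own assumptions for demonstration, not from the book:

```python
# Trust levels for the three common zones (higher = more trusted).
TRUST = {"untrusted": 0, "semi-trusted": 1, "trusted": 2}

def evaluate_connection(source_zone: str, dest_zone: str) -> str:
    """Apply the zoning rules of thumb: prefer communication initiated
    by the more trusted zone, and never allow a direct untrusted ->
    trusted connection; relay it through the semi-trusted (DMZ) zone."""
    src, dst = TRUST[source_zone], TRUST[dest_zone]
    if src >= dst:
        return "allow"  # outbound model: the trusted side initiates
    if source_zone == "untrusted" and dest_zone == "trusted":
        return "deny: relay via semi-trusted zone"
    return "allow with filtering"  # inbound across one trust step

print(evaluate_connection("trusted", "untrusted"))       # allow
print(evaluate_connection("untrusted", "trusted"))       # deny: relay via semi-trusted zone
print(evaluate_connection("untrusted", "semi-trusted"))  # allow with filtering
```

Crude as it is, this captures the heart of step 2: the decision depends on which zone initiates and how far apart the trust levels are.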
Create Chokepoints
Since the dawn of time, chokepoints have been a key part of security practices in warfare. A chokepoint is a tight area of control through which all inbound and outbound access is forced to traverse. Kings of medieval times understood that if you could funnel the enemy through a tight doorway, it was much easier to pour fiery oil down on them. Likewise, it's much easier to keep a thief out of your network when the network has only one gateway leading in and out. In the infosec space, chokepoints also grant us the advantages of:
Security focus - We can focus on particular areas of control.
Ease of monitoring - It is much easier to watch our enemies when there are only a few places to look.
Ease of control - It is much easier to implement good security mechanisms when dealing with only a limited space.
Cost reduction - By filtering access at chokepoints, we will only need to implement one control device at the chokepoint rather than having separate controls for every object. This reduces the time and materials required for the implementation and maintenance of security measures.
Exposure reduction - By focusing on just a few chokepoints of access, we introduce fewer opportunities for error and exposure than if we enforce security controls in multiple areas.
Chokepoints are a critical component of a higher security mind. They greatly reduce the infinite number of possible attacks that can take place, and thus are some of the best tools to use in information security.
One thing to consider when using chokepoints, though: they also become single points of failure. As such, it is important to increase the availability measures taken in proportion to the number of access points consolidated. As an example, if everyone has to go through a single point to access the Internet, it makes a lot of sense to ensure there is a level of redundancy at that chokepoint.
Applying chokepoints is pretty easy. Here are some simple steps when contemplating chokepoints:
Identify all access points to a particular resource or related set of resources
Consolidate all such access points through a single security object
Enforce tight controls, monitoring and redundancy on that security object
Establish a policy for future access points, stating that they must be filtered through an approved chokepoint
Continue to test and scan for new access points that do not filter through a chokepoint.
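That last scanning step can be sketched as a simple audit. A minimal Python sketch of checking which access paths bypass the approved chokepoint; the path names and data shape are made up for illustration (in practice this data would come from network discovery):

```python
# Each known access path to a resource, and whether it traverses
# the approved chokepoint. Entries here are illustrative only.
access_paths = [
    {"name": "corporate-gateway", "via_chokepoint": True},
    {"name": "dev-team-dsl-line", "via_chokepoint": False},  # rogue path
    {"name": "vpn-concentrator",  "via_chokepoint": True},
]

def unfiltered_paths(paths):
    """Audit step: flag access points that bypass the chokepoint,
    so they can be consolidated or shut down."""
    return [p["name"] for p in paths if not p["via_chokepoint"]]

print(unfiltered_paths(access_paths))  # ['dev-team-dsl-line']
```

Anything this audit flags is exactly the kind of back door that erodes the cost and exposure benefits listed above.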
Apply Layered Security
I think Bruce Schneier said it best when he stated that "security is a process, not a product." When looking at security architecture, it is important to recognize that no single device is without flaws. Every significant application, server, router and firewall on the market today harbors some vulnerabilities. Additionally, most of these same resources have a good chance of being misconfigured, unmonitored or improperly maintained. On its own, each object will eventually become a weak link that would allow an attacker to get in. As such, layered defenses are crucial to repel intruders and ensure that any one weakness on its own will not let an attacker in (or out, for that matter).
Layered security is a hot topic and I don't have to really go into great detail. But here are a few things you can do to apply layered security in your organization:
Take an object and apply as much security directly on the object as is reasonably possible
Consider the access points to the object and apply as much security between the subject and the object as is reasonably possible
Consider all the object's dependencies, including the OS, third-party services, etc., and apply security to each. This should be performed for both the object itself and any security mechanisms protecting the object
Make sure the object itself and anything guarding the object are monitored and generate access logs. If one object is compromised, secured logs should exist elsewhere on a secure device for forensic analysis
NEVER consider an object safe simply because another object is protecting it. NEVER forgo directly applying security on the object assuming no one will ever be able to attack it
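The points above can be sketched as a chain of independent checks that are all evaluated and all logged, so a failure (or compromise) of one layer still leaves evidence elsewhere. The layer functions here are toy stand-ins I invented for illustration:

```python
# Toy stand-ins for real controls: a firewall rule, application
# authentication, and an access control list.
def firewall(req):  return req.get("port") in {443}
def app_auth(req):  return req.get("token") == "valid"
def acl(req):       return req.get("user") in {"alice"}

LAYERS = [("firewall", firewall), ("app_auth", app_auth), ("acl", acl)]

def check_access(req, log):
    """Evaluate EVERY layer independently and deny if any one fails.
    Each decision is logged, so compromising one layer still leaves
    a record at the others."""
    allowed = True
    for name, layer in LAYERS:
        ok = layer(req)
        log.append((name, "pass" if ok else "fail"))
        allowed = allowed and ok
    return allowed

log = []
print(check_access({"port": 443, "token": "valid", "user": "alice"}, log))  # True
print(log)  # all three layers recorded a decision
```

Note that no layer is skipped just because an earlier one passed (or failed): each object applies its own security, per the last point above.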
Understand Relational Security
Information security involves numerous chains and relationships. Any given object will almost always have a series of relationships with other networks, applications, events etc which will prove to be of great significance to our security considerations. The security of any object is dependent on the security of its related objects, and if we fail to see these relationships, we will be unable to properly address security. This is called relational security.
A server, for example, may be considered safe because it is not connected to the Internet. It is, however, accessible by the administrator's home computer through a dial-up session. The admin's system itself is connected to the Internet through a broadband connection. Thus, by following this chain of relationships, the server is actually connected to the Internet. Following such chains can point out where systems and networks that are considered to be safe are, in reality, vulnerable. And this is exactly how hackers typically gain access to systems. They go in through less secure back doors to gain access to more trusted systems.
Vulnerability inheritance is probably the most vital and yet most neglected security relationship. The level of vulnerability within any object should be considered in relation to the vulnerability of its related objects. A file share between a secure system and a vulnerable system greatly diminishes the security of the secure system. If the secure system is accessible in any way from the vulnerable system, then, to some degree, it will inherit those vulnerabilities.
This is exactly how modern worms breach sensitive systems. Which is why I think it's NUTS to have things like nuclear power plants remotely accessible. When will we ever learn!!!
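Chain-following like this is just transitive reachability, which is easy to sketch. A minimal Python sketch with a made-up relationship graph mirroring the dial-up example above:

```python
# Directed "can reach" relationships between systems; the systems
# and links here are illustrative only.
reachable_from = {
    "internet": ["admin-home-pc"],        # broadband connection
    "admin-home-pc": ["server"],          # dial-up session
    "server": [],
}

def exposed_to(start, graph):
    """Follow chains of relationships (simple depth-first search) to
    find every system a given starting point can ultimately reach."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(exposed_to("internet", reachable_from))  # the "offline" server shows up
```

Run against a real inventory of trust relationships, this kind of walk is how you discover that a system "not connected to the Internet" actually is.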
Understand Secretless Security
The best security solutions are those that rely as little as possible on secrecy for protection. Relying on secrets for security has several weaknesses. For example, secrets tend to leak out. If you keep your life savings under your mattress, and yet talk in your sleep... your secret may be easily compromised. Secrets can also be guessed. A thief breaking into your house may just look under your mattress during the burglary. Multiply this problem by a few thousand end-users and several administrators, and you will probably spend more time securing your secrets than securing your valuables. Let's look at some classic examples where secretless security is commonly applied.
Open encryption algorithms - Cryptographic history is riddled with the failures of encryption that relied on a secret algorithm to protect information. Secretless security didn't come into play until modern algorithms moved the secret into keys. Not that it's perfect, mind you: instead of worrying about keeping the algorithm secret, we still have to keep our keys secret. Which is why the focus of encryption today is on protecting the keys... and not the code itself.
Open security applications - This is the old argument of black box vs crystal box security. You can read my article on Shattering the crystal and poking holes in the black box to understand what I mean about this. As has been proven time and time again, an application that bases its security on a secret will eventually have that secret discovered, and its security will be rendered useless. Good security applications do not base their security on secrets.
Secretless authentication - With the dismal failure of secret-based solutions such as passwords over the years, many organizations are now turning to alternate approaches to safeguard authentication. Advanced authentication no longer bases itself on just what you know; it typically also includes something you have and/or something you are. This is why two-factor authentication is surging in the enterprise space right now. It is much easier, for example, to fake someone's password at an authentication prompt than it is to fake their eye pattern during a retinal scan.
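The open-algorithm idea is easy to demonstrate with an HMAC: the algorithm (HMAC-SHA256) is completely public, and all of the security rests in the key. A minimal Python sketch using only the standard library:

```python
import hashlib
import hmac
import secrets

# The algorithm is public knowledge; only this key is secret.
key = secrets.token_bytes(32)

def tag(message: bytes) -> str:
    """Compute a message authentication code over the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, mac: str) -> bool:
    # Constant-time comparison avoids leaking the secret via timing.
    return hmac.compare_digest(tag(message), mac)

t = tag(b"transfer $100 to alice")
print(verify(b"transfer $100 to alice", t))    # True
print(verify(b"transfer $900 to mallory", t))  # False
```

An attacker can read every line of this code (and of the SHA-256 spec) without gaining anything; the key is the only thing that needs protecting, which is exactly the point.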
Divide Responsibilities
Have you ever heard the phrase "don't put all your eggs in one basket"? Never have all your investments in one industry; never rely on a single person to do a critical process; and never, never, assign all security responsibilities to one employee, one system, or one process. And, if you are a security professional, make sure you are never the one with all the responsibilities and power. (Even if you WANT to be a BOFH... don't)
Separating responsibilities does not stop with personnel, however. This concept applies just as strongly to placing all our faith in one security application or one security device. If Server X is the only thing protecting our entire company, performing filtering, content management, intrusion detection and authentication, and running VPN and logging, we have a security issue. No system is perfect, and no security device is unbreakable. (No matter how many vendors claim theirs is... even when offering rewards to hack it) At a minimum we should have something monitoring and protecting the security of our main security devices.
Here are some standard management practices you can take to divide responsibility within your organization:
Maintain redundant staff - Always have a designated backup employee who can take over another security employee's primary responsibilities. Rotate people through the positions so that they are not only familiar with the role, but can also take on the responsibility if the primary employee in that role leaves, gets sick or goes on vacation.
Monitor everyone equally - Ensure that any security measure applied to the organization is either universally enforced or has some equivalent security measure applied to the admin and security staff. If the security staff monitor everyone's access, make sure someone else is monitoring them. (Such as the admin group, and vice versa)
Enforce security rules on everyone equally - Everyone should be made aware of the rules and the fact that NO ONE is an exception. That includes the CEO... and the security staff.
Always follow layered security practices - By applying depth of security in your organization, when one thing fails, the external components assisting in security should know about it. Separating responsibility in this manner ensures that if a system is compromised, the logging and monitoring systems will know about it.
Plan for Failure
If you read my blog at all, you know I typically bring this up when talking about designing secure software. It is more important to test the code's execution paths when something fails than when it succeeds. The same thinking should be applied to the higher security mindset.
Everything is subject to failure, no matter how robust or expensive it is. Such failures often lead to lost productivity and potential security issues. As such, potential failure scenarios should be considered before any new implementation. When programming an application, failures should lock down security. When a network architecture is designed, failures should not result in bypassing security, as commonly happens; it should fail "CLOSED" (granting no access). If a power outage occurs, services, applications and devices should apply security during the reboot process. Consider failures in all devices and services, walk through the contingency plan, and consider the security implications therein. This is especially essential for major failure plans like disaster recovery policies.
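Failing "CLOSED" can be sketched in a few lines: any error in the security check itself results in a denial, never a bypass. The authorization backend here is simulated, and the names are mine:

```python
def is_authorized(user: str) -> bool:
    # Imagine this queries a directory server that happens to be down.
    raise ConnectionError("auth backend unreachable")

def guarded_access(user: str) -> bool:
    """Fail CLOSED: any failure inside the security check denies
    access rather than bypassing the check."""
    try:
        return is_authorized(user)
    except Exception:
        return False  # never fall through to "allow" on error

print(guarded_access("alice"))  # False: backend down, access denied
```

The common mistake is the inverse: wrapping the check so that an exception (or a disabled service after a reboot) quietly grants access, which is failing "OPEN".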
I used to get annoyed with Windows Server 2003 when rebooting after a power failure. (Our battery backup only lasts an hour right now, and doesn't signal Windows properly to halt *sigh*) It always prompts you with a dialog to explain WHY the system shut down unexpectedly. Then I realized what a great opportunity this was. It allowed us to consolidate the logs and make business contingency planning decisions about power based on knowledge collected in the audit logs for the Windows servers. Understanding that on failure it rebooted and then prompted the admin before it would start up allowed us to track what was going on at all times. And this has been very useful information as we make decisions on our new office architecture needs in relation to power.
So now that you are thinking...
If you apply these seven security best practices in your daily infosec life, you will really move to a higher security mindset. Policy decisions will be made on a more informed basis and you will be able to adapt to the dynamic nature of the information security field. Although it may not be perfect, it will go a long way toward effectively applying the technical safeguards within your organization. And that is what will make you a better security professional. Not because you know how to configure and use the next whiz-bang security product, but because you will know how to apply it in the midst of your security best practices to make it work FOR you more effectively.