The Archaic Traditions of Security

TL;DR:

Information security threats facing companies are constantly changing, yet many organizations have information security policies dripping with outdated procedures that are ineffective or even harmful to the organization’s posture against the latest threats. Security incidents can be extremely detrimental to all organizations, but especially to small and medium-sized ones. Nevertheless, many organizations (especially small and medium-sized ones) do not allocate enough resources to information security until after an incident has already occurred. Part of this issue lies in the belief that properly securing a company’s infrastructure will cost millions of dollars, and that malicious actors will not target smaller organizations as much as larger ones. In reality, expensive electronic controls are often less effective than simply having the proper organizational policies in place, enforcing them, and reviewing them frequently (ideally with third-party input). As for the size of an organization deterring attackers, remember that security through obscurity is not security at all; it is simply dumb luck. A business that relies on dumb luck is set up to fail from the start.

“Red alert! New threat detected!” This was a Slack message I was used to seeing constantly in the infosec channel at a former job. I even managed to thoroughly annoy my coworkers by building a plugin that played the Star Trek “red alert” sound effect every time the message was received (I personally thought it was hilarious). As the sole security engineer for the entire organization, it was easy to see why this message was so frequent. No sooner would I finish incident response on one threat than a new one would pop up. I was even getting calls in the middle of the night when users would inevitably get compromised and start sending phishing/spam to everyone else inside and outside the company. I learned that I was doing things the hard way: the constant cycle of asking upper management for money to buy more “toys”. If I wasn’t telling them that I needed the newest and greatest software or gadget on the market to curb our users’ enthusiasm for clicking on “stuff”, I was asking to buy the latest and greatest security awareness training program. Anyone who has sat through a budget meeting at a medium-sized organization knows that asking for new stuff is never fun, but it was a conversation we held frequently. Don’t get me wrong, all of those are great things when used properly (and you certainly need the correct tools for the job; maintaining a secure infrastructure is not possible with policy alone), but they are only band-aids that mask the real problem: a lack of enforced policies. Whenever I had to tell a user “no”, I always had the acceptable use policy in tow; but the wake-up call came the day my boss said “oh… don’t tell anyone that it’s against the policy… none of us really think that it is enforceable… or at the very least no one currently follows it”. That is when I realized the premise of this discussion: the information security threats we face are constantly changing, but many organizations (including the one I was working for at the time) have security policies dripping with outdated procedures that are ineffective or even harmful to the organization’s overall posture against the latest threats. It isn’t a problem with our users; it’s a problem with us, and with poorly written policies that users could never be expected to understand, much less follow.

Policy failure does not only affect small to medium-sized organizations; it affects larger organizations as well. One of the most famous recent examples is the Equifax breach. The initial breach reportedly stemmed from a vulnerability in Apache Struts which had been patched on March 8th, 2017. According to congressional testimony by Equifax’s former CEO Richard Smith, Equifax’s patching policy at the time required that this patch be installed within 48 hours, yet the security team failed to identify the vulnerable software in their environment until it had already been compromised. Mr. Smith went on to state that “Based on the investigation to date, it appears that the first date the attacker(s) accessed sensitive information may have been on May 13, 2017. The company was not aware of that access at the time. Between May 13 and July 30, there is evidence to suggest that the attacker(s) continued to access sensitive information, exploiting the same Apache Struts vulnerability. During that time, Equifax’s security tools did not detect this illegal access.” (“Prepared Testimony of Richard F. Smith before the U.S. House Committee on Energy and Commerce Subcommittee on Digital Commerce and Consumer Protection”, 2017, p. 3) Based on this testimony, it is clear that Equifax had the knowledge necessary to protect its systems against this vulnerability before it was exploited; in fact, its policy required it to do so. The issue lay in the fact that the policy was not enforced properly and the systems were not adequately documented. Even with an internal memo alerting the security team that a patch needed to be installed, the inadequate documentation allowed the team to believe that none of their systems needed the patch, and no proper follow-up was performed to confirm this. This first incident could have been prevented if the systems had been properly documented and the patching policy had required that, once a potentially necessary patch was identified, a supervisor or other employee manually verify that it was installed. No fancy tools were needed to fix the issue before it was exploited. Of course, this assumes that the policy would have been properly enforced even if it had been properly written.
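To make that follow-up step concrete, here is a minimal sketch of the kind of verification such a policy could require. This is not Equifax’s actual process; it assumes a hypothetical inventory file mapping each host to the software versions it runs, and the file name, package name, and fixed version are all illustrative.

```python
#!/usr/bin/env python3
"""Hypothetical follow-up check: is the required patch actually installed?

A minimal sketch, not Equifax's process. It assumes an inventory file
(inventory.json, a made-up name) mapping each host to the software
versions it runs; the package name and fixed version are illustrative.
"""
import json

INVENTORY_FILE = "inventory.json"              # hypothetical inventory dump
PACKAGE, FIXED_IN = "apache-struts", "2.3.32"  # versions below this need the patch


def version_tuple(version: str) -> tuple:
    """Turn '2.3.31' into (2, 3, 31) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))


def find_unpatched(inventory: dict) -> list:
    """Return every host still running a version older than the fix."""
    return sorted(
        host
        for host, software in inventory.items()
        if PACKAGE in software
        and version_tuple(software[PACKAGE]) < version_tuple(FIXED_IN)
    )


if __name__ == "__main__":
    with open(INVENTORY_FILE) as f:
        inventory = json.load(f)  # {"hostname": {"package": "version", ...}, ...}

    unpatched = find_unpatched(inventory)
    if unpatched:
        # Per the policy change suggested above, a supervisor signs off on this list.
        print("UNPATCHED HOSTS:", ", ".join(unpatched))
    else:
        print("No vulnerable hosts found; now verify the inventory itself is current.")
```

Of course, a check like this is only as trustworthy as the inventory feeding it, which is exactly where Equifax’s documentation failed; automated verification and accurate documentation have to come as a pair.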

While policy failure can obviously happen at large organizations, small to medium-sized organizations also tend to have issues with inadequate or outdated policies. This is often because small and medium-sized organizations tend to rely on general IT administrators (such as support staff) to develop security policies and practices instead of dedicated security personnel, and those IT administrators often do not have time to research current best practices, so they default to what they know best. A perfect example of this is password policies. For years, many IT departments have used password policies to the effect of “a minimum of 8 characters, must be changed every 90 days, and must have three of the four types of characters: capital A-Z, lowercase a-z, 0-9, and symbols”. Current research shows that these policies are ineffective, and not because they need to be strengthened with even more complex and harder-to-use passwords; quite the opposite, in fact. This research has been around since 1999, yet many people working in IT remain hesitant to accept it even today. Anne Adams and Angela Sasse note that “Many users have to remember multiple passwords, that is, use different passwords for different applications and/or change passwords frequently due to password expiration mechanisms. Having a large number of passwords reduces their memorability and increases insecure work practices, such as writing passwords down—50% of questionnaire respondents wrote their passwords down in one form or another. One employee emphasized this relationship when he said ‘…because I was forced into changing it every month I had to write it down.’ Poor password design (for example, using ‘password’ as the password) was also found to be related to multiple passwords. ‘Constantly changing passwords’ were blamed by another employee for producing ‘…very simple choices that are easy to guess, or break, within seconds of using ‘Cracker’. Hence there is no security.’ It is interesting to note here that users, again, perceive their behavior to be caused by a mechanism designed to increase security.” (Adams & Sasse, 1999, p. 42) The National Institute of Standards and Technology states that “[Password] Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets [passwords]. [Password] Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.” (Grassi et al., “Digital identity guidelines: authentication and lifecycle management”, 2017, p. 14) This means that not only does peer-reviewed research suggest that stringent password policies often cause more problems than they fix, NIST also explicitly states that complexity requirements and periodic password expiration should not be required. (The one exception is evidence that the password has been compromised, in which case a change should be forced immediately.)
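What does a policy in this spirit look like in code? Below is a minimal sketch of a password check following the NIST guidance quoted above: a minimum length and a screen against known-compromised passwords, with no composition rules and no scheduled expiry. The breached-hash file name is hypothetical (an assumed offline copy of a list like HaveIBeenPwned’s).

```python
"""A minimal sketch of a password check in the spirit of NIST SP 800-63B:
enforce a minimum length and screen against known-compromised passwords,
but impose no composition rules and no scheduled expiry. The file name
pwned-sha1.txt is made up; assume an offline copy of a breached-password
hash list such as HaveIBeenPwned's."""
import hashlib

COMPROMISED_FILE = "pwned-sha1.txt"  # one SHA-1 hex digest per line (or HASH:count)


def is_compromised(password: str) -> bool:
    """True if the password's SHA-1 digest appears in the breached list."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    with open(COMPROMISED_FILE) as f:
        return any(line.strip().split(":")[0].upper() == digest for line in f)


def validate(password: str) -> list:
    """Return the reasons a password is unacceptable (empty list means OK)."""
    problems = []
    if len(password) < 8:  # 800-63B's minimum length for user-chosen secrets
        problems.append("must be at least 8 characters")
    if is_compromised(password):
        problems.append("appears in a known breach; please pick another")
    # Deliberately absent: character-class rules and periodic expiration,
    # which the guidance quoted above says SHOULD NOT be imposed.
    return problems
```

A real deployment would not scan a multi-gigabyte file on every login attempt; HaveIBeenPwned’s k-anonymity range API or a pre-sorted local index would be the practical route, but the policy point is the same.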

Even with multiple studies, standards, and authorities recommending against requiring different character types and periodic password changes, many IT administrators are still hesitant to change their policies. One such organization gave me the opportunity to prove the ineffectiveness of this policy when they had doubts about my recommended modifications to their password policy. This organization has approximately 1,000 users and uses the complexity requirements listed above. Using a list of NTLM hashes downloaded from https://haveibeenpwned.com/Passwords and the open source software “Compromise Checker” (https://semsec.net/2018/08/28/introducing-compromise-checker/), I was able to compare the NTLM hashes stored in the organization’s Active Directory environment with the NTLM hashes of passwords known to have been previously compromised (provided by Troy Hunt/HaveIBeenPwned). The assumption was that if the current password policies were effective, only a small percentage of accounts should be using passwords known to be compromised. What we found instead was that just over 300 accounts were using compromised passwords. The list included IT admins with “Domain Admin” permissions as well as executives. The “Domain Admins” at the organization certainly should have known better than to use passwords that could have been compromised, and based on the training programs in place at the organization, it is arguable that the executives (who should be motivated to follow best practices when they know how) should have known better as well. While this specific experiment did not necessarily prove that the password requirements were directly at fault, it did strongly suggest that the organization’s current password policies were ineffective and needed to be revised in some fashion. Further testing would be needed to evaluate the effectiveness of any password policy adjustments, but with roughly a third of the company’s users relying on compromised passwords, it was obvious that something needed to be done differently.
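For readers who want to picture how such an audit works, here is a rough sketch of the comparison logic. This is not the actual Compromise Checker tool; it assumes two files with made-up names, an authorized extraction of the organization’s NTLM hashes and the HaveIBeenPwned NTLM download. Only ever run something like this with explicit written authorization.

```python
"""A rough sketch of the audit described above; not the actual Compromise
Checker tool. Assumes two files with hypothetical names: an authorized
extraction of the organization's NTLM hashes ("user:hash" per line) and
the HaveIBeenPwned NTLM list ("HASH" or "HASH:count" per line)."""

AD_DUMP = "ad-ntlm.txt"        # "username:ntlmhash" per line (hypothetical)
PWNED_LIST = "pwned-ntlm.txt"  # HIBP NTLM download (hypothetical file name)


def load_pwned(path: str) -> set:
    """Load the breached NTLM hashes into a set for constant-time lookups."""
    with open(path) as f:
        return {line.split(":", 1)[0].strip().upper() for line in f if line.strip()}


def flag_compromised(ad_path: str, pwned: set) -> list:
    """Return every username whose current hash appears in the breach list."""
    flagged = []
    with open(ad_path) as f:
        for line in f:
            if ":" not in line:
                continue  # skip malformed lines
            user, ntlm = line.strip().rsplit(":", 1)
            if ntlm.upper() in pwned:
                flagged.append(user)
    return flagged


if __name__ == "__main__":
    hits = flag_compromised(AD_DUMP, load_pwned(PWNED_LIST))
    print(f"{len(hits)} account(s) using known-compromised passwords")
```

(The full HIBP list is large enough that a real tool would use a sorted file with binary search, or a Bloom filter, rather than holding every hash in memory; the comparison logic is the point here.)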

Another area where organizations tend to fall behind is user education. Cryptographer Bruce Schneier notes that “Every few years, a researcher replicates a security study by littering USB sticks around an organization’s grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as ‘teachable moments’ for others. ‘If only everyone was more security aware and had more security training,’ they say, ‘the Internet would be a much safer place.’ Enough of that. The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?” (Schneier, 2016, p. 96) While I certainly don’t believe that anyone is advocating that users shouldn’t understand the basics of security awareness, this brings up a point similar to the one Adams and Sasse made: users want to stay safe, but if they perceive the methods for doing so as restrictive, they are less likely to even attempt to follow them (and may even put significant effort into bypassing controls altogether). I would argue that most users already know they shouldn’t use the same password on multiple sites. Similarly, I suspect most users already know they shouldn’t write those passwords on a sticky note stuck to their monitor; but what they know most of all is that it is not physically possible for them to remember hundreds of different passwords. Most information security professionals recommend a password manager, yet in my experience very few end-users have even realized that a password manager was a possibility (when security professionals advise never writing down passwords, users often cannot tell whether that advice also rules out a password manager unless the recommendation to use one immediately follows). While a password manager does create a single point of failure, information security isn’t about removing risk entirely (which would be impossible); it is about reducing risk to an acceptable level. I believe Troy Hunt said it best in his response to arguments against password managers (prompted by a vulnerability disclosed in LastPass shortly before he wrote this): “Password managers don’t have to be perfect, they just have to be better than not having one” … “The [arguments are] generally centred around the premise that here was proof a password manager should never be used because it poses an unacceptable risk. It’s the same irrational response we’ve seen after previous disclosures relating to LastPass and other password managers, my favourite 1Password included in that. It’s irrational because it’s a single-dimension response: the password manager had a flaw therefore we should no longer use it.” (Hunt, 2017) Instead of focusing on training users on what not to do (reuse passwords across sites, click on links in email, etc.), what if we focused on training them how to use different passwords for each account effectively (through password managers) and how to click on links safely (inspecting the URL and understanding what makes a URL likely to be unsafe)? By focusing on the positive (how to perform these behaviors safely instead of avoiding them altogether), we can help eliminate the attitude of “it is impossible to use a computer safely, so why should I even try?” That attitude often stems from the disconnect between system designers and system users. “The usable security community has so far focused on enhancing the usability of already existing security systems, with a narrow interpretation of usability: they focus exclusively on ‘fixing’ human users (rather than fixing technologies), so as to render them ‘able’ to ‘use’ security. Consequently, non-usability-related causes of user disengagement from security are not examined. Security experts fail to notice the divergence between what they imagine user values to be and users’ actual values. This divergence, in turn, can cause otherwise usable security artefacts to be useless, counter-productive or even harmful.” (Becker, Sasse, Dodier-Lazaro, & Abu-Salma, 2017, p. 1) An added benefit of focusing on the positive, and helping users learn to safely perform the actions we have told them to avoid over the years, is that we start to see how our existing controls and technology can be improved so that the user experience promotes security.
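As an illustration of what “inspecting the URL” can actually be taught as, here is a toy sketch of a few common red flags expressed as code. The heuristics and the look-alike domains listed are purely illustrative and far from exhaustive; this is the sort of checklist one might teach, not a phishing detector.

```python
"""A toy sketch of the kind of URL red flags users can be taught to spot.
The heuristics and look-alike domains here are illustrative only; this is
a teaching checklist, not a phishing detector."""
from urllib.parse import urlparse


def url_red_flags(url: str) -> list:
    """Return human-readable warnings for a few common phishing tells."""
    warnings = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    if parsed.scheme != "https":
        warnings.append("not HTTPS")
    if parsed.username is not None:
        warnings.append("credentials before '@'; the real host comes after it")
    if host.replace(".", "").isdigit():
        warnings.append("raw IP address instead of a domain name")
    if host.count(".") >= 4:
        warnings.append("unusually deep subdomain nesting")
    if any(fake in host for fake in ("paypa1", "g00gle", "micros0ft")):
        warnings.append("possible look-alike domain (digits swapped for letters)")
    return warnings


# The registrable domain here is example.com, not the "paypa1" prefix:
print(url_red_flags("http://paypa1.com.accounts.login.example.com/reset"))
```

The most valuable lesson in that sketch is the last example: the part of the hostname that matters is the registrable domain at the end, which is exactly the piece phishers try to bury under convincing-looking prefixes.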

The third (and final) policy I will discuss is restrictions. Restrictions are almost always implemented for a reason, and it is important to review whether those reasons are still valid. One very specific instance comes to mind. A few months back, I was evaluating a well-known SaaS product for a client. The software would be handling highly sensitive personally identifiable information, so it was important that the platform was safe to use. One of the first things I noticed while creating a test account was that the password requirements stated I couldn’t use a set of characters often used in cross-site scripting attacks. My pen-tester brain kicked in, and I began to question why those restrictions would be in place; after all, if the software was properly hashing (and salting) passwords, there would be no reason for them. My suspicion was that the passwords were stored in plain text or with reversible encryption. I informed the client that I had suspicions about the SaaS provider’s password handling that brought the security of the rest of the application into question. (After all, if an application can’t even process passwords properly, how can I expect it to process the rest of my data safely?) I got in touch with my client’s sales rep at the SaaS provider and voiced my concerns. After signing plenty of NDA paperwork stating that I wouldn’t disclose any vulnerabilities (if found) or any of the provider’s internal processes or proprietary information, I was sent a huge security packet. It was admittedly impressive (if everything in the packet was accurate, they had one of the most mature security practices I have seen), but it still didn’t answer my question about the character restrictions. I later received an email from a senior product manager who told me that I was the first person to ever question that restriction, and that it had started an internal discussion. It turned out I was partially correct: the restrictions were originally put in place to prevent cross-site scripting, but after years of process improvements, the consensus (once questioned) was that they probably were no longer needed and should be considered for removal. It comes down to this: if a restriction is no longer necessary, why keep it in place? This particular restriction only served to weaken their defenses, since it made it harder for users to use a password manager, and it told potential attackers that they could remove two characters from the character set used in any brute-force or dictionary attack against the system. A malicious actor will try to extrapolate information from error messages and restrictions; why not beat them to the punch and give them less of a foothold to start from?
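To see why proper hashing makes those character restrictions pointless, here is a minimal sketch using Python’s standard library. The password is only ever an opaque byte string fed to a key-derivation function, never rendered or interpreted as markup, so characters that would matter in an XSS context are harmless. The KDF parameters are illustrative, not a recommendation for any particular product.

```python
"""A minimal sketch of salted password hashing: the password is an opaque
byte string fed to a key-derivation function, never interpreted as markup
or code, so XSS-relevant characters are inert. Parameters are illustrative."""
import hashlib
import hmac
import os


def hash_password(password: str) -> tuple:
    """Salt and hash with PBKDF2; store (salt, digest), never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return salt, digest


def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return hmac.compare_digest(candidate, digest)


# Characters that would matter in an HTML context cause no trouble here:
salt, digest = hash_password('correct<script>"horse\'battery&staple')
assert verify('correct<script>"horse\'battery&staple', salt, digest)
```

If a system cannot accept those characters in a password, the reasonable inference is that the password is being handled as text somewhere it shouldn’t be, which is exactly the suspicion that prompted my questions to the vendor.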

There are many more commonplace policies that could be discussed which are pointless (at best) or harmful (at worst), but the examples above should be a good sampling of the negative impacts that outdated policies can have. Once we understand the problem, we can attempt to fix it; but caution must be observed, as making wild changes to policy for no reason can also be a problem. The solution is not an easy one, and it requires time to implement properly. It comes down to one simple concept: every policy should be reviewed regularly. The review interval depends on the type of policy. Some policies, such as threat detection policies, may need to be reviewed every few months, whereas others (such as acceptable use policies) may only need to be reviewed every few years to remain effective. The National Center for Education Statistics states that “By definition, security policy refers to clear, comprehensive, and well-defined plans, rules, and practices that regulate access to an organization’s system and the information included in it. Good policy protects not only information and systems, but also individual employees and the organization as a whole. It also serves as a prominent statement to the outside world about the organization’s commitment to security.” (Szuba, 1998, p. 28) If a policy is no longer effectively regulating access to an organization’s system and the information on it, why is it still in place? Organizations change over time, so policies need to as well. Without regular changes, policies become archaic traditions, or simply a waste of paper that everyone ignores. One way to ensure good policies is to get upper management involved. Many IT administrators dread involving upper management in the decision-making process for IT resources. Upper management tends to ask things like “why do we need to do XYZ?” or “how do we know that ZYX is working the way we wanted it to; what metrics do you have for it?”. These are hard questions to answer, because as people well versed in IT, we tend to take certain things for granted. It turns out these are exactly the questions we need to be asking ourselves. How do we know that our acceptable use policy is working like we want it to? Do we have to resort to spying on our users in order to enforce it? Do our passwords really need to expire every 90 days? Do we have mechanisms in place to check our users’ passwords against ones we know are compromised? How can we develop accurate measurements of success without resorting to the ever-famous “well… we haven’t been hacked yet, so it must be working… right?” These are some of the things upper management can help with, if IT professionals would stop acting like hermits who jump into the safety of our cubicles every time we see another person in the distance.

While involving upper management is a good first step, another way to improve our policies is user involvement. Especially in an environment where many users are remote and may not see the IT team around the office, there can be an attitude of “IT against the rest of the world”. This often stems from IT administrators forgetting that using a computer is a method of supporting the business; it is rarely the business itself (which is easy to forget when the IT department’s job revolves around the same technology that supports the business). “Employees do not go online primarily to create complex passwords and query the authenticity of phishing emails – these are secondary tasks intended to make communication, collaboration, and sharing more secure (for both the individual and the organisation). Within large organisations, secondary security tasks constitute security responsibilities, defined in a central security policy and mediated by provisioned systems that have integrated security controls (e.g., password-protected access, access cards, email filters). Where the design of these systems does not adequately consider the fit of security with the primary task, compliance with security policy and expected use can become burdensome.” (Parkin & Krol, 2015, pp. 1-2) Involving a set of end-users in the decision-making process often makes it easier to understand the business needs the policies will support. While in many cases it would not be feasible to involve all end-users, involving some of them allows IT to articulate the organization’s needs from a security standpoint, and allows users to voice any concerns over how those controls should be implemented.

As policy items are questioned, the types of data stored should also be questioned (which requires involvement from all business units). Business needs change over time, and privacy issues can creep in. Many IT administrators have a tendency to save everything by default, but the reality is that the more data is stored, the more the organization will suffer in the event of a breach. Organizations should take a privacy-first approach in which only the PII of customers, employees, etc. that is absolutely necessary is stored. In addition to lessening the amount of data that could be exposed in a potential breach, this sets a positive example in a world where many organizations treat privacy advocates like an ancient cult. IT departments often want to push writing the privacy policy onto the legal department, while the legal department frequently considers it largely IT’s responsibility; the result is a policy that stays stagnant over time.

While end-user and upper management involvement is essential to writing good policies, another essential foundation is the willingness to break with tradition. Every part of a policy should be questioned, no matter how much sense it makes at face value. The threats we face change every day. Gone are the days when an antivirus and common sense were enough to stay safe; these days we need to be on the lookout for supply chain attacks, SMB vulnerabilities, Spectre-class issues, and much more. The reality is that attack vectors are expanding more quickly than our controls can, and what is best practice now may change in an hour. We must accept that we can’t remove risk entirely; we can only mitigate it to an acceptable level. Doing so while maintaining user productivity is essential. One of the best ways to accomplish this is through third-party involvement. Outside auditors and testing firms can assist with developing proper strategy and reviewing policy. In addition to offering a fresh perspective and a different take on policy items that may be outdated, they bring the experience of working with many other organizations and knowing what did and didn’t work for them. Ensure that the policy reviewers assist with developing proper metrics and testing scenarios to confirm that the new policies are working effectively. An added bonus of using third parties to review your policy is that they can often help determine what (if any) regulatory compliance is required of your organization. Once regulatory requirements are identified, it is important that the policies reflect them. Third-party review helps ensure that nothing is missing from organizational policies that could get you in trouble with regulators.

While you are involving third parties, this is the perfect time to schedule a pen-test. Identifying the flaws in your organization’s systems can go a long way toward driving policy. As flaws are identified, policies should be written to address their resolution and to aid in mitigating similar issues in the future; after all, policies exist to protect the organization and define how incidents should be handled. If a policy does not adequately deal with an issue found during pen-testing, it becomes obvious that the policy requires further review before it can be considered appropriate for the organization.

Luckily, policy does not need to be written from scratch, especially when it comes to information security policies. While there are many frameworks available, one of the most popular is the NIST Cybersecurity Framework. Originally developed at the federal government’s direction for critical infrastructure, it is published so that any organization can implement it effectively. “The Framework focuses on using business drivers to guide cybersecurity activities and considering cybersecurity risks as part of the organization’s risk management processes. The Framework consists of three parts: the Framework Core, the Implementation Tiers, and the Framework Profiles. The Framework Core is a set of cybersecurity activities, outcomes, and informative references that are common across sectors and critical infrastructure. Elements of the Core provide detailed guidance for developing individual organizational Profiles. Through use of Profiles, the Framework will help an organization to align and prioritize its cybersecurity activities with its business/mission requirements, risk tolerances, and resources. The Tiers provide a mechanism for organizations to view and understand the characteristics of their approach to managing cybersecurity risk, which will help in prioritizing and achieving cybersecurity objectives.” (“Framework for improving critical infrastructure cybersecurity”, 2018, p. 5)
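To make the Core/Profile/Tier vocabulary a little more tangible, here is a toy sketch of an organizational Profile expressed as data. The category IDs and names (ID.AM, PR.AC, DE.CM, RS.RP) come from the Framework Core, but everything else (the maturity scores, priorities, and owners) is invented. Note also that the real Framework applies Implementation Tiers to the organization as a whole, not per category, so the per-category current/target numbers below are a deliberate simplification.

```python
"""A toy Profile: which Framework Core categories matter to us, where we
are, and where we want to be. Scores and owners are invented; the real
Implementation Tiers apply organization-wide, so the per-category numbers
here are a deliberate simplification for illustration."""

profile = {
    "ID.AM": {"name": "Asset Management",      "current": 1, "target": 3, "owner": "IT ops"},
    "PR.AC": {"name": "Access Control",        "current": 2, "target": 3, "owner": "Security"},
    "DE.CM": {"name": "Continuous Monitoring", "current": 1, "target": 2, "owner": "Security"},
    "RS.RP": {"name": "Response Planning",     "current": 1, "target": 2, "owner": "CISO"},
}

# The gap between current and target state drives the remediation roadmap.
ranked = sorted(profile.items(),
                key=lambda kv: kv[1]["target"] - kv[1]["current"],
                reverse=True)
for cat_id, cat in ranked:
    gap = cat["target"] - cat["current"]
    print(f"{cat_id} ({cat['name']}): gap {gap}, owner {cat['owner']}")
```

The value of writing a Profile down in a structured form like this is that the gaps become explicit, reviewable line items rather than a vague sense that “security could be better”.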

Using existing frameworks such as the NIST Cybersecurity Framework can be extremely helpful, but it is also important not to treat them as 100% authoritative and simply copy and paste them with a few changes here and there. Frameworks are designed to be guides, and each has its own challenges and shortcomings. Each organization must evaluate every part of the framework and compare it with existing policy to judge how effective it will be within that organization. An alternative approach is to build organizational policy first and then compare it against the targeted framework, using the framework to identify shortcomings in the organizational policy rather than using organizational policy to identify shortcomings in the framework. One trick for using a framework effectively in this manner is to have a different person in the organization in charge of reviewing the policy each year. While the policies themselves may need to be approved by the same person each year, having multiple people drive the changes drastically helps avoid “blinders” when it comes to identifying the faults of outdated or ineffective policy.

At this point, many readers may be thinking that while this discussion started with small to medium-sized organizations, the latter parts seem to apply only to larger corporations with large IT departments, legal departments, and plenty of resources. While it is easy to dismiss these issues within smaller organizations, it is even more crucial that they be considered there. In smaller organizations, it may be more effective to have C-level upper management be the primary drivers of the policy. With assistance from the IT department and third-party consultants, upper management often has the best chance of seeing the entire picture from a business perspective. While there are plenty of technical details to work out (which IT can help with), the reality is that a policy is a business document, and upper management is often considerably better suited to write that document once the facts have been presented.

Admittedly, this paper could be summarized in five words: “Review Your Security Policies Frequently”; everything else is a long-winded way of conveying that point. The reality is that there is more to it than simply reviewing the policy. A significant amount of research is necessary to thoroughly understand the topic that each policy item is designed to address. The threats organizations face can change on an hourly basis, and obviously we cannot review policy that often; otherwise no other work would get done. Exceptions to policy will always be necessary; however, if left alone long enough, the policy will end up becoming the exception instead of the rule. When that happens, the organization becomes the Wild West, where anything can happen. Clearly defined and frequently reviewed policies are critical to the long-term success and safety of an organization. Once the policies have been reviewed, ensuring that users understand them and that they are enforced is critical. Make them easy to understand and digestible, so that users are not overwhelmed when attempting to comply. The fact that there are policies users are expected to know should never come as a surprise.

References

Adams, A., & Sasse, M. A. (1999). Users are not the enemy. Communications of the ACM, 42(12), 40-46. doi:10.1145/322796.322806

Becker, I., Sasse, M. A., Dodier-Lazaro, S., & Abu-Salma, R. (2017). From paternalistic to user-centred security: Putting users first with value-sensitive design. In CHI 2017 Workshop on Values in Computing. ACM.

National Institute of Standards and Technology. (2018). Framework for improving critical infrastructure cybersecurity. Washington, DC: U.S. Department of Commerce.

Grassi, P. A., Fenton, J. L., Newton, E. M., Perlner, R. A., Regenscheid, A. R., . . . Theofanos, M. F. (2017). Digital identity guidelines: Authentication and lifecycle management (NIST Special Publication 800-63B). National Institute of Standards and Technology.

Hunt, T. (2017, April 04). Password managers don’t have to be perfect, they just have to be better than not having one. Retrieved November 6, 2018, from https://www.troyhunt.com/password-managers-dont-have-to-be-perfect-they-just-have-to-be-better-than-not-having-one/

Parkin, S., & Krol, K. (2015). Appropriation of security technologies in the workplace. Experiences of Technology Appropriation: Unanticipated Users, Usage, Circumstances, and Design.

Prepared testimony of Richard F. Smith before the U.S. House Committee on Energy and Commerce, Subcommittee on Digital Commerce and Consumer Protection (2017) (testimony of Richard F. Smith).

Schneier, B. (2016). Stop Trying to Fix the User. IEEE Security & Privacy, 14(5), 96. doi:10.1109/msp.2016.101

Szuba, T. (1998). Safeguarding your technology: Practical guidelines for electronic education information security. Washington, DC: National Center for Education Statistics.
