Vulnerability Management
Organizations are subject to innumerable vulnerabilities on a daily basis. Some are small and require minimal attention, while others pose a substantial risk and must be handled with urgency. Risk management is required to properly identify and assess specific vulnerabilities so that controls can be put in place to best mitigate the associated risk. Risk identification involves not only determining what risks may exist, but also categorizing the risks and documenting as much information as possible, including the source and potential threats (“Risk assessment and,” 2007). In some cases generalized data about the impact to an organization, along with any significant characteristics that clarify relevance, is provided (“Information Security Risk,” 1999).
After identification, the next step is a risk assessment to prioritize the discovered risks. An assessment involves determining what impact each risk could have on the organization, then sorting the risks by severity. This often requires substantial estimation, as the likelihood of a given risk becoming a reality cannot easily be determined.
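The assessment step described above – estimating impact and likelihood, then sorting by severity – can be sketched as a simple scoring routine. This is a hypothetical illustration: the risk names, likelihood values, and impact scores below are invented for the example, not drawn from any real assessment.

```python
# Hypothetical risk-prioritization sketch. Each risk carries an estimated
# likelihood (0-1) and an impact score (1-10); severity is their product.

def prioritize(risks):
    """Sort risks by severity score (likelihood x impact), highest first."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

risks = [
    {"name": "SQL injection on web portal", "likelihood": 0.3, "impact": 9},
    {"name": "Laptop theft",                "likelihood": 0.5, "impact": 4},
    {"name": "Data-center flood",           "likelihood": 0.1, "impact": 10},
]

for r in prioritize(risks):
    print(f'{r["name"]}: severity {r["likelihood"] * r["impact"]:.1f}')
```

Because the likelihood figures are estimates, the ranking is only as good as the judgment behind them; the value of a scheme like this is in forcing those estimates to be made explicit and comparable.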
Individual organizations must perform their own analysis to determine which specific vulnerabilities may affect them, though certain risks are common enough that a majority of organizations should assess them. Cyber security vulnerabilities are generally thought of as causing a substantial amount of damage. Because these vulnerabilities can affect software and web portals, they can be very difficult to mitigate and can quickly gain public attention (“The Top Cyber,” 2009). While it is extremely important that an organization consider these vulnerabilities, in many cases physical and client vulnerabilities have a higher probability of posing a risk.
Any piece of software, from an operating system to a simple application, can be riddled with unanticipated vulnerabilities. Software developers write code with a desired function in mind, but attackers can find vulnerabilities simply by trying things the programmers did not consider and the program is not expecting. This is very troublesome, especially as the number of vulnerabilities being discovered, and subsequently corrected, rises over time. Applications used internally are often plagued with vulnerabilities, though software developed and used only in house may actually pose a lower threat than mainstream software, since attackers have very limited access to attempt to manipulate the code. Commercial software has had security holes brought to light in very public, record-setting fashion numerous times. In August 2010 Microsoft had a record ‘patch Tuesday,’ providing updates for 34 separate vulnerabilities (Mills, 2010). Only eight months later, in April 2011, a new record of 64 vulnerabilities fixed nearly doubled the previous one (Bort, 2011). Adobe’s own record of 23 patches was set in October 2010 (Reeder, 2010). Almost a year later, in September 2011, code execution vulnerabilities were discovered and a ‘critical’ patch was released (Naraine, 2011). Applications are often slow to be patched because it may be difficult to determine what combination of factors is causing a vulnerability to exist (“The Top Cyber,” 2009).

Web portals are also under constant threat of attack, both from attackers attempting to gain access and from those using trusted websites to infect visitors. One of the most common attacks to gain illicit access is an SQL injection attack, in which SQL commands are entered into input fields, such as user name or password, in an attempt to query a database and retrieve information.
On the opposite side, malicious code may be embedded on a trusted website; when a user visits the site, an unpatched vulnerability in an application is exploited to provide an entry point for a virus, worm, or data theft (SANS).
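The SQL injection attack described above can be demonstrated in a few lines. This is a minimal sketch using an in-memory SQLite database; the table and credentials are invented for illustration. It shows how attacker-controlled text spliced into a query subverts a login check, and how binding the same input as a parameter defeats the attack.

```python
import sqlite3

# Invented example table and credentials for the demonstration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # Vulnerable: attacker-controlled input is spliced into the SQL text,
    # so the input can rewrite the query's logic.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return db.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: input is bound as data and never parsed as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone() is not None

evil = "' OR '1'='1"
print(login_unsafe("alice", evil))  # True  - the injected OR clause matches every row
print(login_safe("alice", evil))    # False - the injection is treated as a literal password
```

The defense is not input filtering but query structure: with bound parameters the database driver never interprets user input as SQL, which is why parameterized queries are the standard mitigation for this attack class.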
Physical security is a concern that should be at the forefront of any organization. There are many risks to information systems, but the threats posed by intruders are some of the first that come to mind – perhaps someone hanging from the ceiling in the style of Mission Impossible. Every way an attacker could gain illicit access must be evaluated and controlled, and many of these controls are also the most obtrusive. Multiple controls should be used in combination to deter, delay, detect, and respond – either preventing an attacker from gaining access or stopping an intruder who does. Deterrent controls may include gates and walls, security cameras, guards, and identification badges. If these do not scare a potential intruder away, they add an additional element of support by creating a delay. Further delays can be introduced with strong locks on doors to compartmentalize the building by clearance, electronic locks tied into an alarm system, and time-delayed doors. It is important to have additional security measures inside the facility in case an attacker were to gain access. Workstations must always be locked down with passwords, and all drive bays and nonessential input ports should be disabled or restricted. Even this may prove insufficient if an attacker were to gain access posing as an IT employee fixing a computer or a messenger picking up a package, as hardware could be physically removed. All machines must be locked down: desktop towers must be immovable and sealed to prevent removal of a drive, and laptops are especially vulnerable, so employees should be provided a lock to be used on and off the premises. This type of protection is especially important for servers. Some attackers may be more interested in destroying property than stealing it, making measures from cages to specially designed and located rooms practical depending on the potential risk.
The damage an intruder can do is substantial, but the extensive number of risks posed by natural disasters creates an even greater threat that is often overlooked. Flooding is one very serious concern, as it can destroy machinery, reduce computing power, overload circuits, and even ruin the building itself (P., & Lawrence, 2007). Even in areas where natural flooding is not a concern, a broken water main could happen at any time. Even more dangerous than water is fire, as it often moves more quickly and poses a more immediate threat to human life (P., & Lawrence, 2007). Fire detection, which may include smoke and heat detectors, is important to ensure the proper response is deployed to suppress or extinguish the threat. Though water is typically used to suppress a fire, we have already seen how this can be a poor choice in combination with electronics. Restricting the fire and removing oxygen would work, but is overly complex and expensive for many operations, so frequently sprinklers are used in combination with chemicals that will put out the flames. The vibrations caused by even the slightest tremor could cause a hard drive head to skip, ruining the disk, cause a machine to fall over, or make a connector cable come loose. Lightning poses an electrical threat, but so do a downed power line and even an exceptionally dry day; uninterruptible power supplies and humidity controls or other anti-static measures are potential solutions, respectively.
Hardware failure is another naturally occurring disaster that can strike in a number of ways. In Fighting Computer Crime, Donn B. Parker identifies the seven major sources of physical loss as:
- Extreme temperature: heat, cold
- Gases: war gases, commercial vapors, humid or dry air, suspended particles
- Liquids: water, chemicals
- Living organisms: viruses, bacteria, people, animals, insects
- Projectiles: tangible objects in motion, powered objects
- Movement: collapse, shearing, shaking, vibration, liquefaction, flow waves, separation, slide
- Energy anomalies: electrical surge or failure, magnetism, static electricity, aging circuitry; radiation: sound, light, radio, microwave, electromagnetic, atomic
Hard disk failures are not at all uncommon, and a Google study examining hard drive failure trends suggests that other effects, such as manufacturing quality, may have a more substantial impact than usage or even heat (Pinheiro, Weber, & Barroso, 2007). Hardware can fail for what seems like no reason, making redundancy and backup of the utmost importance. Backups should be performed frequently, with both local and offsite copies, to provide the fastest return to normalcy; a single backup is never enough if a disaster were to strike, destroying the building or simply erasing the data. Depending on the needs of the organization, redundancy can exist not just for disks but for machines as well. For an organization needing to provide the highest possible continuous uptime, backup machines that can take over when the primary machines fail may be a better solution than drive backups alone.
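The local-plus-offsite backup practice described above is only useful if each copy can be trusted; a common technique is to verify every copy against a cryptographic digest of the source. The sketch below is a hypothetical illustration – the file names and contents are invented, and throwaway temporary files stand in for real local and offsite targets.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(source, destinations):
    """Copy the source to each destination and confirm every digest matches."""
    expected = sha256_of(source)
    for dest in destinations:
        Path(dest).write_bytes(Path(source).read_bytes())
        if sha256_of(dest) != expected:
            raise IOError(f"backup to {dest} is corrupt")
    return expected

# Demo: throwaway files stand in for the real local and offsite copies.
work = Path(tempfile.mkdtemp())
source = work / "records.db"
source.write_bytes(b"customer records")
digest = backup_and_verify(source, [work / "local.bak", work / "offsite.bak"])
print("all copies verified, digest", digest[:12], "...")
```

In practice the offsite copy would travel over a network or to removable media rather than a local path, but the verification step is the same: a stored digest lets the organization detect a silently corrupted backup before it is needed.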
For many organizations, one of the most difficult risks to control is related to clients. From a risk management standpoint both internal and external clients must be considered: internal clients generally refers to the employees of an organization, while external clients are typically customers. Employees may use weak or easy-to-guess passwords, or write them down on a slip of paper hidden under the keyboard; enforcing password requirements is a simple way to mitigate this vulnerability. The employee who visits an unauthorized website or installs an unapproved piece of software could be exposing an entire network to attack, but this can be avoided by restricting software installation capabilities. A phone call could seem legitimate but really be social engineering – an attacker pretending to be a client, for example. Regular training is a must for client-facing employees, but even the simple act of raising awareness at various times throughout the year can help cut down on the success of these attacks.

External customers pose many similar threats, but are more difficult to control. Passwords pose an additional challenge, as there needs to be a balance between security and ease for the customer – stringent password requirements increase security, but may also increase the cost associated with customer service calls for password reset assistance. A virus on a customer’s computer may track customer information, providing access to an attacker in the future; this could even open the firm to risk if a vulnerability existed when an infected client logged into the organization’s website. Customers are very susceptible to phishing attacks, especially as these attacks become more sophisticated and more frequent. In September 2011 the number of identified phishing attempts jumped 45 percent, setting an all-time high of 38,970 attacks (“Cyber Security,” 2011).
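The password requirements mentioned above are usually enforced by checking a candidate password against an explicit policy before accepting it. The sketch below is a hypothetical illustration; the specific rules (a minimum length of ten characters plus mixed case and a digit) are invented assumptions for the example, not a recommendation of any particular standard.

```python
import re

# Hypothetical password policy check. The rules are illustrative
# assumptions, not a specific organization's requirements.

def password_problems(password, min_length=10):
    """Return a list of policy violations; an empty list means acceptable."""
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"\d", password):
        problems.append("no digit")
    return problems

print(password_problems("password"))        # several violations
print(password_problems("Blue42Horizon!"))  # [] - passes
```

Returning the full list of violations, rather than a simple pass/fail, lets the interface tell the user exactly what to fix – which matters for the security-versus-convenience balance discussed above, since opaque rejections drive password reset calls of their own.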
An organization can take steps to make customers aware of phishing and to limit an attacker’s ability to emulate its emails or official website. Additional steps could include specialized password cues, or a policy of never including URLs in email.
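A no-URLs-in-email policy like the one suggested above is simple to enforce automatically by scanning outbound messages for anything that looks like a link. The sketch below is a hypothetical illustration with an intentionally simple pattern; a production filter would need a more thorough URL grammar and would hook into the mail pipeline rather than a standalone function.

```python
import re

# Naive link detector for an outbound-mail policy check: flag anything
# resembling an http(s) or www link. Deliberately simple for illustration.
URL_PATTERN = re.compile(r"(https?://\S+|www\.\S+)", re.IGNORECASE)

def contains_url(message):
    """Return True if the message appears to contain a link."""
    return URL_PATTERN.search(message) is not None

print(contains_url("Please log in through our official site as usual."))  # False
print(contains_url("Verify your account at http://example.com/login"))    # True
```

If every legitimate message passes such a check, customers can be taught a single durable rule – real mail from the organization never contains a link – which removes the attacker's main phishing lure.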