Physical Security
Cryptology
Access Controls
Common Attacks
Security Policy
Public Key Cryptography
Intellectual Property
Mainstream Privacy
One of the wildest of these cases involves Michigan man Leon Walker. In the summer of 2009 Mr. Walker accessed his wife’s Gmail account because he suspected she was cheating on him. He claims Clara Walker, his now ex-wife, kept her password in a book near the computer and that he believed it was acceptable to read her email in their home to confirm his suspicions. Ms. Walker and Oakland County prosecutors pressed charges of felony misuse of a computer, which carries a maximum penalty of five years in prison.
The prosecution contends that Walker used his specialized skills and knowledge as a computer technician to hack into his wife’s email (Oosting, 2011). The law they are using was passed in 1979 by the Michigan State Legislature. Act 53 of 1979 relates to fraudulent access to computers, computer systems, and computer networks but the prosecution focused on Section 752.795:
A person shall not intentionally and without authorization or by exceeding valid authorization do any of the following:
(a) Access or cause access to be made to a computer program, computer, computer system, or computer network to acquire, alter, damage, delete, or destroy property or otherwise use the service of a computer program, computer, computer system, or computer network.
(b) Insert or attach or knowingly create the opportunity for an unknowing and unwanted insertion or attachment of a set of instructions or a computer program into a computer program, computer, computer system, or computer network, that is intended to acquire, alter, damage, delete, disrupt, or destroy property or otherwise use the services of a computer program, computer, computer system, or computer network […]
Unsurprisingly, the defense feels the law is being inappropriately applied in this case. The law was originally created to prevent identity theft and hacking that could lead to other crimes. In reference to the act, defense attorney Leon Weiss said, “The word 'e-mail' does not appear in this statute. This is an anti-hacking statute. It does not, in any way, shape or form encompass reading somebody's e-mail.” He continued, “People who live under the same roof, be they married or not, and who share a computer -- as in this instance -- they may have some personal privacy lines that they adhere to. And if they don't, that's between the two individuals” (Bury & Hovell, 2010).
This case expanded past only those involved. Former House Speaker Chuck Perricone told the Oakland Press he was concerned by the precedent the case could set. State Representative Tom McMillin wants to introduce legislation to clarify the law. Criticizing Oakland County Prosecutor Jessica Cooper's handling of the case, McMillin feels she is wasting taxpayer money and was quoted as saying, “Since it appears at least one prosecutor in the state can’t see that [the law was never meant to curb spousal snooping], I’ll introduce legislation early next year to clarify … the obvious” (Oosting, 2011).
Walker did discover his wife was having an affair with her second ex-husband; Walker was her third. However guilty Clara Walker may be of having an extramarital affair, a Nightline interview illustrated how much education and reasonable legislation will be required to address privacy. In the interview Clara Walker said, “I hope that he will let go of this anger that he has towards me… we are a team, whether he likes it or not.” As the audience must have been thinking, interviewer Bill Weir points out the obvious: “But you do know that you’re going to testify against him with the goal of putting him in jail, so that’s a lot to ask.” Walker responds, “It is, but what does he expect me to do? I mean, he violated my privacy. I was violated. So what about me? … He’s making it all about him. He forgot what he did. And it’s not fair for him to keep this going and forgetting about what he actually did to me” (Wilson, 2011). A strong emotional response, to be sure, but one wonders how reading email can be considered a significantly more substantial crime than lying and cheating to cover up an affair.
There is no easy solution. However, it is clear that past lawmakers could never foresee how technology would be used, and new laws must be created to address the concerns of today. Old laws are too antiquated, inflexible and unrelated to modern society to be applied properly. Definitions of public and private are changing every day, as are the aspects of our lives people are willing to make public or want to keep private. In the case of Leon and Clara Walker, one wonders whether a husband could go to jail for opening his wife’s mail or looking through her drawers, as reading her email is essentially the modern-day equivalent.
Defending Networks
It is becoming increasingly common to hear about new, dangerous malware or another company being hacked. The number of attacks is on the rise, and it seems this may simply come with the territory. Companies are spending more and more money to ensure they have the strongest, most up-to-date defenses to protect their networks. Frequently, however, an attacker does not need to penetrate these defenses, as simple vulnerabilities are often left unpatched or unguarded.
Exploits are growing in number and sophistication every day, so it is unreasonable to think every vulnerability could be located and secured. However, a number of well-known exploits are routinely overlooked. Default passwords or blank administrative passwords are all too common; this is primarily a concern with networking hardware itself and allows easy access for an attacker. Sometimes an administrator may be rushing when creating a privileged user account and leave the password blank, creating a clean point of entry for an attacker (“Common Exploits”). Intrusion Prevention / Detection Systems (IPS/IDS) are used by more and more companies; however, incomplete or incorrect deployment of these systems is one of the most common vulnerabilities. Companies with a comprehensive security strategy should have a plan in place to map the network and find segments that were missed so they can be incorporated into the IPS/IDS deployment (Markluec, 2010).
Attackers are more likely to use a tried and true method, so it is important for a company to know about and defend against the most common attacks. Network attacks generally fall into the category of logic attacks or resource attacks. Exploiting existing vulnerabilities or software bugs with the intent of crashing a system, granting access to an intruder or degrading network performance is categorized as a logic attack. An attacker may compromise a system by finding a flaw or vulnerability in a service provided over the internet. These service vulnerabilities can go unnoticed during development and testing, as an issue such as a buffer overflow is unlikely to be discovered there (“Common Exploits”). Frequently, simple or basic websites are manipulated without the owner’s knowledge. Web application vulnerabilities like Cross-Site Scripting and SQL injection are not uncommon even on trusted websites, giving an attacker access to information and systems (“Top Risks”). While difficult to guard against, an administrator should always keep security and application software up to date; ensuring services are not given privileged access or run as the root user also provides the best chance of defense (“Common Exploits”). Client-side applications are vulnerable as well. Directing users to websites with malicious scripting and spear phishing e-mail attacks help an attacker gain access to a network or system. Windows link files, Office documents, PDF documents, QuickTime videos and everything Flash has to offer have been used to attack and gain access from the client side (“Top Risks”). Keeping systems up to date and educating users are the primary methods of defense, but restricting use of and access to unneeded file types can be employed when security requirements are greater.
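To make the Cross-Site Scripting risk above concrete, here is a minimal sketch (assuming nothing beyond Python's standard html module; render_comment is a hypothetical helper) of one common mitigation: escaping user-supplied text before it is rendered in a page.

import html

def render_comment(user_text: str) -> str:
    # html.escape converts characters like <, >, & and quotes into HTML
    # entities, so injected markup is displayed as text instead of executed.
    safe_text = html.escape(user_text, quote=True)
    return '<p class="comment">' + safe_text + '</p>'

if __name__ == "__main__":
    malicious = "<script>document.location='http://evil.example/steal'</script>"
    print(render_comment(malicious))
    # Output contains &lt;script&gt;... rather than a live script tag.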
Resource attacks attempt to bring a system down by consuming its physical resources, such as memory or processor time. Denial of Service (DoS) attacks are happening more and more frequently, as downloadable applications have made it trivial to send massive amounts of unrecognized data to a network or server (“Common Exploits”). This risk has become even more substantial in light of distributed DoS (DDoS) attacks, especially when a botnet is under the control of a single individual or group. Implementing systems that recognize and block or ignore data generated for DoS attacks is one of the most effective ways to ensure legitimate traffic can still be processed quickly.
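As a hedged illustration of that last point, the toy token-bucket rate limiter below (plain Python; the RATE and BURST values are assumptions) shows the general shape of dropping excess traffic from a single source so legitimate requests can still be served. Real DoS and DDoS mitigation is done in dedicated network equipment and upstream scrubbing services, not in application code like this.

import time
from collections import defaultdict

RATE = 5.0    # tokens replenished per second, per client (assumed value)
BURST = 10.0  # maximum bucket size (assumed value)

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_ip: str) -> bool:
    bucket = _buckets[client_ip]
    now = time.monotonic()
    # Refill tokens based on elapsed time, capped at the burst size.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True   # process the request
    return False      # drop or delay the request

if __name__ == "__main__":
    results = [allow_request("203.0.113.7") for _ in range(20)]
    print(results.count(True), "of 20 rapid requests allowed")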
While these are only a few of the potential attacks a network may face, the most common attacks are often the most destructive. A network may never be completely safe but proper administration, up-to-date systems, quick action and effective testing are the components of strong security. Common attacks change as new exploits are discovered, but often have roots in exploits from the past. Staying current about vulnerabilities and security news while maintaining a strong awareness and understanding of old exploits will also give an administrator a substantially better chance at success.
Securing Data
Databases are essential for storing, accessing, manipulating and organizing data. Some databases contain only unimportant information, but more and more highly sensitive data is maintained in them. Names, addresses, phone numbers, credit card numbers, Social Security numbers and more are stored in databases, and many times this data is accessible through some type of web interface. As the amount and type of data stored online continues to expand, attackers have become increasingly focused on gaining illicit access to these systems. As a result, keeping data secure in a database is a primary concern for developers moving into the future. The primary areas of security concern are physical security, software, access and redundancy. These must all be taken into consideration in order to ensure the security of the database and the data therein.
Physical security is the first line of defense against any intrusion, yet this aspect is frequently overlooked. Every step required to gain physical access should be considered, especially who, including employees, might be able to obtain access to the system. Multiple layers of security should be put in place to ensure people are only in the areas in which they are allowed. To secure the building itself, security staff can be hired, cameras installed and an identification badge required to enter the front doors. The data warehouse itself could have an additional layer of identification to gain entry, and the physical hardware should be locked in cages or otherwise secured to prevent theft or damage. For extremely sensitive data, 24-hour video monitoring can be added inside the data center and proximity, motion, vibration, sound or even temperature sensors can be installed to minimize the risk of any physical theft or harm.
Securing the software is not as simple, and it is inherently tied to access. If one were to take a computer with a fresh install of Windows XP, no service packs, and connect it to the internet, the machine would be attacked, infected and completely insecure within minutes. Any server made accessible from the internet is vulnerable to attackers; the more valuable the data, the higher the risk of attack. Data should be placed on its own server, separate from the web server and inaccessible from the internet. Otherwise, if an attacker gained access to the web server they would immediately have access to the database as well (Cobb, 2005). A firewall between the web server and data server would provide an additional layer of protection. Passwords within the database should be hashed and other sensitive data encrypted, never stored in plain text, and proprietary cryptography should never be used: it is prone to errors, while open, publicly reviewed cryptography is tested and proven. The data server should never allow anonymous connections and should only respond to the known IP of the web server (Cobb, 2005). File access and modification need to be locked down; user authentication can help block attacks, but is also effective in ensuring users do not modify data they should not (Davis, 2010).
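As a minimal sketch of the hashing point, and assuming only Python's standard library, the example below stores a salted PBKDF2 digest instead of the password itself; the iteration count is an assumed value that should be tuned for real hardware.

import hashlib
import hmac
import os

ITERATIONS = 200_000  # assumed work factor; tune for your environment

def hash_password(password: str):
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest    # store both; the password itself is never stored

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

if __name__ == "__main__":
    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                   # False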
When the data is accessible from the web, additional security considerations must be made. One of the biggest risks to a database from the web is an SQL injection attack, wherein an SQL command is entered in place of normally expected data, allowing an unauthorized user to gain higher level access to the database. One defense is to avoid running queries directly against the tables and instead use stored procedures, also known as SPROCs. Not only does using SPROCs make it easier to modify how data handling is processed and provide better performance, but a SPROC will incur a syntax error and halt execution if an SQL statement is inserted (McLaws, 2003). There are also certain things that can help stop an attacker who does gain access. Windows Access Control Lists (ACLs) can be used to restrict what operations users can perform and the system resources they have access to. Even more data protection can be gained by combining ACLs with permissions and table access control; if a table should only be read, make it read-only (Cobb, 2005). Make it as difficult as possible for any attacker that may gain access by using different, long, random passwords for server, system and software access (McLaws, 2003). This is only a brief overview, as there are many other ways to help protect the software and access side of data in a database.
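The same principle that makes SPROCs safer (user input bound as data rather than concatenated into the statement) can be shown with a hedged sketch using parameterized queries; sqlite3 stands in here for a real database server, and the table and values are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
conn.execute("INSERT INTO users (name, role) VALUES ('alice', 'admin')")

def find_user(name: str):
    # The ? placeholder binds the value as data; even "' OR '1'='1" is
    # treated as a literal name, not as part of the SQL statement.
    return conn.execute(
        "SELECT id, name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [(1, 'alice', 'admin')]
print(find_user("' OR '1'='1"))  # [] -- the injection attempt matches nothing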
Finally, redundancy is of the utmost importance for any data storage. Just like physical security, data loss is rarely thought of as a security risk; and just like physical theft or destruction, loss of data has a substantial impact on business. To recover from a failure as soon as possible, redundant backups, including at least one off-site backup, and redundant hardware are a must. Local, mirrored redundant servers make it quick and easy to switch to a live backup if any substantial failure were to occur. Hardware redundancy allows the system to get back up and running in the event of any hardware failure. Redundancy allows a system to recover from failures quickly and efficiently; without it data could be lost or compromised.
With so much data being stored and transmitted electronically, the security of this data is of the utmost importance. Stories of large sites with many users being hacked are becoming more and more frequent, with user data stolen and released into the wild. More often than not, the attack could have been avoided if best security practices had been employed. Database developers and administrators must take security seriously and incorporate secure practices from the start to ensure the security of the data.
Developing Secure Applications and Programs
Malicious attacks are on the rise and will continue to be. As technology becomes more widespread, miniaturized and ubiquitous, finding and exploiting a vulnerability becomes more and more appealing to a cracker. Attackers who are able to gain illicit access to private information and customer accounts can cause a lot of damage. Physical assets may be stolen, but access to personal data is even more worrisome. Even if an attacker is not able to gain explicit access to an account or system via a flaw, they may be able to obtain information that can be used for social engineering and gain the benefits anyway. While it may be difficult to stop an attacker from manipulating an employee, software engineers can take significant steps to reduce the risk of an attacker gaining access through a vulnerability in an application.
The most important thing a software developer must do to develop a secure application is eliminate any preconceived notions about security. In computer security, being paranoid is a virtue. A developer who believes her software is bug free is setting herself up for failure. Frequently programmers want to put as much functionality into their application as possible, but this may cause the application to have unexpected functions that, when discovered by an attacker, can be utilized to gain illicit access. If a programmer believes their code is perfect and contains no bugs, they may continue adding functionality, increasing the danger. It is also important for the software developer to limit what access the program has. For example, if a fillable web form is created that allows any type of data to be entered for "phone number", an attacker may be able to gain access via cross-site scripting or an SQL injection. The programmer who constrains each field to a specific type of data and bounces anything that does not meet the requirements will produce a substantially more secure application.
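A minimal sketch of that phone-number example, assuming a digits-only target format and a hypothetical clean_phone_number helper:

import re

PHONE_PATTERN = re.compile(r"^\+?[0-9]{7,15}$")  # assumed format: optional +, digits only

def clean_phone_number(raw: str) -> str:
    candidate = raw.strip().replace("-", "").replace(" ", "")
    if not PHONE_PATTERN.fullmatch(candidate):
        raise ValueError("invalid phone number")  # bounce anything unexpected
    return candidate

if __name__ == "__main__":
    print(clean_phone_number("+1 555-867-5309"))          # accepted as +15558675309
    try:
        clean_phone_number("555'); DROP TABLE users;--")  # rejected before it reaches SQL
    except ValueError as err:
        print("rejected:", err)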
Software testing should be enhanced. Typical QA testing looks for bugs and errors that occur during normal use, and in normal use cases it is very unlikely a security flaw would be discovered, as the tester is not looking for security flaws. Testing should be expanded to cover non-normal use scenarios. This may include using random or unusual data, using a penetration testing or vulnerability scanning tool, or even testing under extreme conditions in a fashion similar to browser fuzzing. By having an individual actually attempt to penetrate the application, vulnerabilities can be discovered and patched that would otherwise remain in released code.
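As a toy illustration of the random-data idea, the sketch below hammers a hypothetical parse_record function with junk input and reports any crash; real fuzzing tools are far more sophisticated, so this only shows the shape of the technique.

import random
import string

def parse_record(raw: str) -> dict:
    # Hypothetical function under test: expects "name=value;name=value".
    fields = {}
    for part in raw.split(";"):
        if "=" in part:
            key, value = part.split("=", 1)
            fields[key.strip()] = value.strip()
    return fields

def fuzz(iterations: int = 1000) -> None:
    alphabet = string.printable
    for _ in range(iterations):
        junk = "".join(random.choice(alphabet) for _ in range(random.randint(0, 200)))
        try:
            parse_record(junk)       # we only care that it never blows up
        except Exception as err:     # any crash is a finding worth triaging
            print("Crash on input:", repr(junk), "->", err)

if __name__ == "__main__":
    fuzz()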
A software developer should not rely solely on security tools already in place. The amount of time or work required to develop an application could be significantly reduced if the developer assumes the technology in place will provide protection, but this increases the risk of an attacker exploiting a vulnerability. Encryption may keep an attacker from viewing sensitive data, but internal proprietary encryption is often poorly implemented and easy to crack. It should be trivial for the programmer to use a strong, well-reviewed open-source encryption library within the application, and failing to do so may expose the organization and all users to additional risk. The same can be said for a firewall; normally this should provide protection from unexpected incoming or outgoing traffic, but if the firewall is penetrated, data sent to and from the application may be compromised and the compromise could continue unnoticed for a long time.
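A hedged sketch of leaning on a vetted library instead of home-grown crypto, assuming the third-party cryptography package is installed (pip install cryptography); Fernet provides authenticated symmetric encryption with sane defaults.

from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()   # store this securely, e.g. in a secrets manager
cipher = Fernet(key)

token = cipher.encrypt(b"customer card number 4111 1111 1111 1111")
print(cipher.decrypt(token))  # original bytes, recoverable only with the right key

try:
    Fernet(Fernet.generate_key()).decrypt(token)  # wrong key
except InvalidToken:
    print("decryption rejected: wrong key or tampering detected")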
It is also important to understand the context around security for a particular application. This understanding includes all aspects of security that are of concern, including the overall environment, potential threats and objectives. By knowing to what extent confidentiality, availability and integrity are required for the data, the programmer can build in strong authentication and tools to help with audits. With this knowledge the programmer is also able to account for any vulnerabilities that already exist in the system architecture and build controls into the new application to reduce the risk from these vulnerabilities. Another aspect of this is examining and learning from past mistakes. While an attack can have serious consequences and should be avoided as much as possible, looking at successful attacks and the issues or errors that allowed the attacker to gain access not only allows vulnerabilities to be closed but helps ensure they will not be reintroduced in the future.
There are many development models used with the intention of ensuring applications are secure. Each may apply a different process for ensuring the resulting application is secure, though no code is perfect and vulnerabilities may still persist. By applying some formal methodology to ensure all areas of concern are properly constructed, tested and implemented, the risk of an attack can be substantially reduced. While the specific methodology used may not be of the utmost significance, it is always important for a software developer to keep security at the forefront of concern. The software developer who fails to take security into consideration will not only be less trusted, but will have failed to protect the organization, its employees and clients.
Fencing, fence register, bounds and relocation
Fencing is an early form of memory management designed to ensure applications and user data do not corrupt the operating system. The earliest and most basic form of fencing simply separated the operating system and user data into two sections, creating a fixed boundary between them. While effective for separating the OS from data, the boundary was fixed so neither side could expand beyond a certain point, making a fence very restrictive. Additionally, memory locations were often hard-coded into programs, making adjustment or movement of that data within the fence extremely difficult.
These issues were addressed with the creation of the fence register, a type of hardware register. A fence register is more flexible and allows for a dynamic fence that can resize and move as needed. In this implementation, the operating system and user data are assigned to different sections of memory, with the boundary marked by an address held in the register. When an action or modification falls above that bound, it is recognized as user activity and the modification is accepted. When the action falls below the bound, it is viewed as an attempt to affect the operating system; unless proper permissions are provided the action will fail and produce an error message. Even the fence register has limitations; specifically, protection is unidirectional: the operating system can be protected from a user, but users and applications are not necessarily protected from one another.
Relocation is important so the operating system knows where data is stored and can access it. Ideally a program and its data would start at address 0 and go up, and if the operating system were a fixed size the programmer could know at which address to begin. However, the operating system can change in size from version to version, and it would be extremely difficult and time consuming to manually update every address. By adding a relocation factor to each address in the program, the system can automatically adjust the addresses as needed. Combined with a fence register, a program can be moved to a new memory location while the fence still ensures that access to any location below it is restricted (except in special circumstances).
Fence registers only provide a starting point, or lower bound, which is useful when multiple users are trying to execute programs, but the lack of an upper bound carries inherent risks from overflow and, as previously mentioned, corruption from other users. A second register is typically added to provide an upper limit; this is known as a bounds register. Creating an upper bound on actions forms something of a sandbox, ensuring that one user cannot negatively impact another.
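The toy sketch below (plain Python, not real hardware) illustrates the combined idea: the program uses logical addresses starting at 0, the "hardware" adds a relocation factor equal to the fence, and any access outside [fence, bounds) faults.

class MemoryFault(Exception):
    pass

class FencedMemory:
    def __init__(self, size: int, fence: int, bounds: int):
        self.cells = [0] * size
        self.fence = fence    # relocation factor / lower limit (protects the OS)
        self.bounds = bounds  # upper limit (protects other users' regions)

    def write(self, logical_address: int, value: int) -> None:
        physical = self.fence + logical_address  # relocation
        if not (self.fence <= physical < self.bounds):
            raise MemoryFault(f"address {physical} outside [{self.fence}, {self.bounds})")
        self.cells[physical] = value

if __name__ == "__main__":
    mem = FencedMemory(size=1024, fence=256, bounds=768)
    mem.write(0, 42)        # lands at physical 256, inside the user's region
    try:
        mem.write(600, 99)  # 256 + 600 = 856, past the bounds register
    except MemoryFault as err:
        print("fault:", err)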
As Always, Safari Falls First - Pwn2Own 2011
For the uninformed, Pwn2Own is an annual browser hacking competition held at CanSecWest, an annual security conference in Vancouver (the name is short for Canada Security West, btw). Successful contestants win the rig they exploited, plus some cash money. Sponsor TippingPoint submits a report to the responsible company with details about the vulnerability, including how the exploit worked. This information is not made public until the company fixes the problem.
Fictional Case Study: Don't Shoot the Messenger, Inc.
Many companies recognize the benefits in having an IT infrastructure that is highly integrated and collaborative. Attempting to change to this type of structure is extremely difficult for most companies, as the infrastructure has become a complicated web. Many companies fall into a business-IT alignment trap where IT is continually tapped to support the business, leading to overcomplicated legacy applications and systems which consume time and resources to maintain. This case study examines a company that recognized the importance of good IT infrastructure, made this a primary concern and when faced with massive change was able to switch to an integrated solution with minimal difficulty, allowing them to grow substantially while maintaining the flexibility of a smaller company.
Background/History
Don’t Shoot the Messenger, Inc. (DSM) started with two ex-bike messengers, Josh and John, who wanted to build a better messenger bag. Their design was lighter, more durable and used a proprietary weight distribution system that made it easier to carry heavy loads for longer. Production was outsourced to a local specialty sewing shop. Initially targeting only the messenger market, they approached friends and ex-colleagues for sales - the first order of bags sold out almost instantly. Word began to spread to messengers in nearby areas and purchase requests started pouring in. The orders were accompanied by suggestions for improvements and requests for new features and different bag styles.
The company outgrew the local sewing shop and found a U.S.-based company that manufactured apparel and other fabric-based goods for small to medium companies. The service was relatively inexpensive and provided access to a software solution that could be integrated with a website to handle order management, route each order to the facility where the bag would be manufactured and have it shipped directly to the customer. This software had support for the most popular small and medium business accounting software, so it simplified things tremendously. By outsourcing the manufacturing, the company could grow without having to concern itself with the supply chain and production, which also helped keep costs low. The manufacturing facility allowed for some flexibility, including many different available materials, access to suppliers to find materials that might be better suited to the task and the ability to quickly do small test runs of new designs.
To ensure things would run smoothly, three employees were hired. The first, Kate, handled administration, including accounting and other financial tasks, timely payment to vendors and other suppliers, corporate documents and so on. The second, Jack, verified payment before forwarding each order to the manufacturer, handled monthly order auditing and delivery confirmation, and fielded customer service inquiries, which were expected to be minimal. The third, James, constituted the “IT department,” with responsibilities like website creation, maintenance and integration with the software from the manufacturer as well as the accounting software.
A basic website was created to take orders and gather input and suggestions from customers. Integration of the ordering, manufacturing and accounting software went very smoothly, making nearly every process fully automated. A few design changes were made based on the feedback received before DSM put their messenger 2.0, plus a laptop variation, up for sale on the new website. Demand, spread by word of mouth, was building, and many people were anxious to order a DSM bag. The moment the web store opened, orders started pouring in from around the country, and all the systems worked flawlessly. The founders began to think about the future of the company and, relieved of most day-to-day activities, were able to focus on new product ideas and modifications or upgrades to the old designs. They hired a textiles expert, Michael, to work more closely with suppliers to find new, better materials and keep an eye out for any R&D the company might find useful.
The primary objective of the company was to minimize costs and streamline ordering and production as much as possible so they could maintain a barebones staff and maximize net income. The five-year plan was to expand the product line to many types of bags for daily use, targeted not only at messengers but at students and professionals as well. The longer term plan included various types of luggage, specialty travel bags and apparel including jackets. DSM’s founders had a lot of ideas about how to improve the things people use as mobile storage. They decided the most effective way to do this would be to save as much cash as possible over the next few years by outsourcing as many aspects of the business as possible. Minor changes could be made as needed; they were already researching larger manufacturing facilities and ways to outsource aspects of accounting, order handling, customer service and some IT responsibilities to ensure they were prepared for any amount of growth. They knew outsourcing everything would only work for so long if they wanted to grow. The plan revolved around slowly introducing new products to gain popularity, planning a new corporate structure and governance where all aspects are handled in house, implementing that structure just before announcing plans for the new bags, luggage and apparel, then utilizing the new structure to grow while keeping costs low and maintaining the flexibility of a small company.
Their timeline lengthened significantly as, heading into their fifth year, growth was still steadily increasing. The comfort, design and quality of the messenger bags had attracted students and professionals alike. They made a few modifications and grew the product line to six styles of messenger bags, now version 3.0. They were all basically the same in general design, but one maintained the classic messenger internals, the second added internal organization ideal for students, and the third added internal storage for professionals (especially journalists, a large market) and had a slightly classier fit and finish. Laptop variants of each made for six available choices. A new material was also utilized that was a bit lighter and more durable without giving anything up in feel and allowed for a plethora of color choices. Once released, demand quickly overwhelmed production capabilities, but their business plan had a contingency for this and within a week manufacturing was outsourced to a production facility in China. Because of volume, this ended up being less expensive and provided even more capabilities for future production. One such capability allowed DSM to introduce a custom bag option where customers could pick the style and the colors of the various parts of the bag and have it custom built – this was introduced six months after the initial 3.0 release.
DSM was still fairly small as they had maintained their outsourcing mentality, but they had grown to around 20 employees. Josh and John found that Kate understood and believed in their vision, in fact she had been invaluable in all discussion and planning in aspects of continuity, contingency, growth and the future. She now handled oversight of the day to day activities as CEO. Aspects that had been handled by just one person were now small departments, each with at least a few employees. Jack was made head of Administration and Finances which included oversight of HR and personnel, financial, internal auditing, billing and legal aspects. Michael was now the head of R&D. Josh and John were still heavily involved in the design process, John had especially taken an interest in textiles and materials, but they started staffing up and had begun design on the products they planned to introduce in a few years, creating a requirement for someone to head up R&D and maintain the quality DSM was known for. James was made the head of IT as handling the website and integration soon became too much for just one person. Josh, John and Kate all agreed that IT was one of the most important aspects for the future of the company, especially once all processes are handled in house, so they always supported any way in which IT could help grow the business. Finally, customer service had been outsourced but they hired a head of customer support, Desmond, to ensure customer support was being handled well and compile complaints and confusions to help improve the product in the future. This was the same basic structure, excluding production, the company would take in the future. The candidates were ideal because they all had various levels of experience at large companies in their respective fields; this knowledge was used in planning the future organizational structure to improve efficiencies and eliminate problems they found with their previous experience.
At the beginning of the 7th year, version 4.0 of the bags was released with additional changes and modifications. This was also the first time DSM had released products outside the traditional messenger style bag. Two backpacks, each with a laptop variant, were introduced in addition to a duffle style bag, an airplane carry-on bag and a camera bag. A number of accessories were also released, including sleeves and cases for laptops, cell phones, portable media players and other devices. They also introduced their first clothing products – a zip-up hoodie and a jacket. These new products maintained the quality, durability and comfort of past DSM products, and utilized a specialized pocket structure to maximize storage space while simplifying organization. All services were able to handle the new products and the additional load, but these were the last major iterations DSM planned to release before they started handling all aspects of the business in house.
A lot of time and money had been spent by the R&D department looking for and experimenting with new materials. One of the most promising was a synthetic fiber discovered by some researchers. The materials needed to produce the fiber were readily available and relatively inexpensive. DSM found the fiber easy to manipulate, dye, spin, weave and sew. It was also discovered that slight adjustments to the spinning and weaving processes allowed the resulting fabric to have different properties, making it easy to change flexibility, comfort, waterproofing, etc. as needed. After extensive testing, DSM acquired all rights to the fiber for future production. They had formed solid relationships with producers of the raw materials necessary to produce the new fiber. This was the first step to becoming completely independent.
R&D had also been working on a number of new product designs. Everything had been moving smoothly, and they would be ready to roll out the expanded and improved product line for the company’s 10-year anniversary. Plans were in place to open offices in Europe within a year and the company had just acquired a manufacturing facility in Vietnam. Between steady sales and minimized expenses, the company was extremely cash rich. This was the position they had been building toward, and they were nearly ready to implement the new business structure and strategy.
Infrastructure Requirements
Handling all aspects of the company in house was expected to be a large undertaking. The decision to handle manufacturing, in conjunction with the new material, had more extensive requirements than originally anticipated. Substantial supply chain management was now needed, as well as systems to simplify the steps of taking the raw material, spinning it into thread, weaving it into fabric and constructing the bags. Organizing orders and ensuring packages were shipped in a timely fashion were also important aspects.
DSM would need better order handling and customer tracking history. Historically orders were made by individual customers and were entered via the website. Over the years a number of retailers had approached DSM wanting to sell their product in stores, but in the past DSM never agreed to any deals. As part of their reorganization, they wanted to pursue this market and they needed infrastructure that could support online customers and efficient handling of retail orders. As part of the overall restructuring plan the company had gotten in contact with a number of previously interested retailers and had already established preliminary agreements with a half dozen national retailers who would sell the new line of products from store shelves. Working with retailers creates unique challenges due to volume and requirements for order processing, record maintenance and billing.
In order to protect proprietary information about their new textile materials, DSM decided customer support should be handled in house as well. While easy to care for, the material had some unique quirks. This would require inbound telephony and a larger technical support staff to ensure customer service representatives would have access to customer records and other information. The company would be growing in many other areas as well. New departments were being created and the company structure was about to be much more complicated than it ever had been.
Josh, John and Kate knew the one thing that would make or break the company following the structural change would be the ease of information transmission across departments, ensuring all areas of the company were working together to meet objectives. The previously outsourced solutions had, for the most part, integrated well with each other, and the company recognized the value in this. Even though the new structure was more complicated and dense, DSM wanted to find a solution that would allow for easy integration, compatibility and growth. Some flexibility would be nice so proprietary software could be written and utilized when necessary, but above all they wanted a well-established, strongly supported system. While the IT department had experienced, and would continue to experience, substantial growth, the company preferred not to develop a solution in house.
By the end of the 7th year, European offices and the Vietnamese production facility had been acquired and were in the process of being staffed. Headquarters was still based in the United States, as was the primary R&D facility. Agreements were in place for raw materials and retail sales. State of the art equipment was purchased; the new designs were nearly complete. They had two years to make sure they had a solid information system in place before rebranding and production. With massive changes to the corporate structure and needing to staff up all departments and ensure the systems to connect those departments were effective and reliable, DSM brought in an IT consultant to help find the best top level solution.
Infrastructure Planning
Josh, John and Kate worked directly with the consultant to make sure they got everything they needed. They spent some time explaining the company background, their objectives and what they wanted to accomplish with the future of the company. Aspects of the structure plan were proprietary, but a basic corporate structure was provided (Appendix A) covering all areas of the company that needed to be handled by the system.
There were a lot of aspects to consider – marketing and sales, administration and financial, R&D, purchasing raw materials, supply chain management, maintenance and upkeep and QA. Physical production alone required manufacturing the new fiber, spinning, weaving, cutting, sewing, finishing and sampling. While systems in the past had allowed for integration, there were issues and faults that they wanted to bypass. It was important that marketing, sales, administration, customer service and all aspects of production were tied together to simplify communication, and it needed to be easy for the IT department to maintain all systems, easily make changes as needed and ensure integration. In discussions with the consultant it became clear the issues caused by the amalgamation of various systems, while minor, could have significant impacts to the business in the future.
Various options were discussed, but it was eventually determined that a third party ERP solution would be the most reliable and suitable. There were a number of third party providers with popular and well-established solutions. The tools these companies provided were flexible and customizable, would provide tools for every department and guaranteed integration and compatibility. Though their structure was highly evolved, business practices had remained rather archaic; since they were redoing the entire structure anyway, DSM had no qualms with modifying their practices to meet the requirements of an ERP system.
The Solution
The consultant spent some time with the heads of all departments to better understand their operations and learn what features each department needed and what issues had been encountered with past tools. A number of systems were evaluated, including mySAP, JD Edwards World and EnterpriseOne, NetSuite and Microsoft Dynamics, before the consultant settled on a recommendation. A presentation to Josh, John, Kate and all department heads was summarized as follows:
The best choice is the Oracle E-Business Suite 12.1. This system provides services and applications for all areas of the company, is extremely robust and diverse, is backed by a well-established company and is proven technology. Not only will it meet the needs of the company today, it will grow with the company in the future. The suite has been used by many businesses, is ideal for growing enterprises and provides rapid value solutions, integrated business processes and purpose-built industry solutions. Value chain planning solutions are a part of Oracle Supply Chain Management, which helps maximize value by integrating and optimizing supply, demand, and design chains. Oracle Incentive Compensation helps effectively manage benefits, compensation and pay for performance on a global level, allowing for quick ROI. Inventory management and operational efficiency can be improved using Oracle Warehouse Management, and seamless integration of systems helps reduce deployment time and lowers overall cost by providing analytics on procurement & spend, HR & talent management and product development.
The suite consists of nine primary application areas: Customer Relationship Management (CRM), Service Management, Financial Management, Human Capital Management, Project Portfolio Management, Advanced Procurement, Supply Chain Management, Value Chain Planning, and Value Chain Execution (Logistics). CRM is a set of applications servicing areas like channel revenue management, marketing, order management, sales and service. Service Management provides applications for delivering world class service to customers, including inbound and outbound telephony, a scheduler, depot repair, e-mail center support and other customer support tools. Aspects such as asset lifecycle management, cash & treasury management, credit-to-cash, financial control & reporting, financial analytics, governance, risk & compliance, lease and finance management, procure-to-pay and even travel & expense management are handled primarily by Financial Management.
The Human Capital Management application provides the services necessary to handle human resources, payroll, benefits management, time and labor, workforce scheduling, project resource management and many more. To ensure projects go as smoothly as possible Project Portfolio Management includes tools for project analytics, billing, contracts, collaboration, costing, management, portfolio analysis, resource management and time and labor.
Taking on the responsibilities of manufacturer, especially since DSM will be producing many of the materials, makes procurement and supply chain management very important. Oracle Advanced Procurement will help keep all supply chain management costs low, reduce spending on goods and services and streamline processes. Oracle Supply Chain Management automates all key supply chain processes, including design, planning, procurement, manufacturing and fulfillment, in a single tool. Finally, Value Chain Planning and Value Chain Execution are a set of tools to enhance the flexibility of purchasing, design, implementation and management of supply chain solutions.
Conclusion
Everyone at DSM agreed the Oracle E-Business Suite was an optimal solution. It met all their needs and was extremely versatile and flexible, all for a relatively low cost. After the decision was made, the consultant stayed on, assisted in IT staffing, worked with IT and department heads, and acted as a liaison with Oracle to ensure the system was exactly as requested before implementation. Working diligently over the course of a year and a half, DSM was finally ready for a trial run and switched all systems over to Oracle E-Business two months before they were ready to introduce the new product line. They had now grown to over 500 employees, including extensive growth in IT and R&D, though the majority of the increase was a result of the factory in Vietnam. The system had a few minor bugs, but these were easily resolved and the suite was everything they had hoped for. They announced the new product line, started production and sales and became hugely successful. The Oracle E-Business Suite was able to keep pace with the growth of the company. The founders and CEO had always preached the benefits of a simplified, integrated structure, but it only worked because they recognized its importance and structured the organization to support integration and collaboration.
How To Protect Children On The Internet Without Infringing Upon Free Speech
The First Amendment exists as a protection against any law that infringes on freedom of speech. The internet provides a new medium for personal expression and any limitations, excluding criminal activity, violate First Amendment rights. There are many benefits to an open platform where anyone can express their opinion, but the internet also opens the possibility of exposing children to material that could be harmful or obscene. Debate has raged on since the early days of the internet. One side believes laws need to be enacted to provide additional protection to minors and the other argues this impedes upon the First Amendment.
The Child Online Protection Act (COPA) was passed by Congress in October 1998, but never took effect due to a series of injunctions following a lawsuit claiming the law violated free speech. COPA was an attempt to ensure minors are not exposed to material deemed “harmful” or “obscene.” The ACLU was a main opponent of COPA enforcement. ACLU lawyer Ann Beeson explained their position by pointing out that adults have the right to be open about sex and that any filtering or requirements for identification violate both free-speech rights and the right to privacy. Eventually COPA was thrown out completely, and it is unlikely any similar law could avoid violating free-speech rights; clearly government intervention is not the answer.
The only solution is for parents to take responsibility. Parents should be monitoring the media their children consume and ensuring they are not exposed to anything the parent considers inappropriate. Clearly a single standard cannot be applied across a community or a country as individuals have differing opinions, and this is especially true of the internet. Families need to develop and enforce a household standard for what is appropriate and acceptable conduct. There are certain issues with familial enforcement including parents who are unaware filtering tools are available as well as kids who understand technology better than their parents and can get past filters. These can be mostly overcome by providing easier access to the education, resources and tools parents need to be empowered.
How Viruses, Worms and Trojans Impact Organizations
Similar in many ways to a biological virus, a computer virus is a piece of software that is able to self-replicate, spreads from one computer to another via removable media or over a network, and is typically malicious. Viruses are usually designed to damage systems or extract data, and frequently the computer's performance will be adversely affected. A computer worm is also self-replicating, but is typically able to spread itself across a network on its own without any user intervention. Different still is a Trojan horse, which does not replicate itself but hides inside what appears to be a useful piece of software while attempting to steal data or otherwise harm the system. No matter the specific threat, any of these can have a significant impact on an organization.
The impact of a computer virus can be very far reaching, depending on how much damage it is able to do. Systems may simply perform more slowly as a virus runs in the background, resulting in less productive workers or slower processes. Certain systems may have to be temporarily suspended; for example, an employee clicks a link in an e-mail and a virus starts spreading through a Microsoft Exchange server, sending e-mail to everyone at the corporation. In this case the entire e-mail server may have to be pulled offline, making communication more difficult for employees and potentially affecting business. A survey by ICSA Labs in 2002 found that for companies with more than 500 computers, it took 23 person-days on average to recover from a virus. The same survey found that around 86% of infections were spread via e-mail; the remainder were related to web browsing and downloads.
There are a few primary losses an organization may sustain. There is clearly a cost related to protection against and removal of viruses. It is estimated that in 2003 alone, businesses sustained approximately $55 billion in damages from computer viruses. Specific costs include the time and labor associated with analyzing system impact and repairing infected systems, and the software and hardware tools that may need to be purchased in order to get the network back up and running. Loss of productivity will likely occur if a virus takes down the network and employees no longer have access to critical applications or data to perform their jobs. Data theft and loss are other concerns, as many viruses steal or destroy data completely. Depending on how long the virus has been in the system, it is possible it may have even infected data backups, causing even larger problems for the company in the future. All of these losses compound as revenue and general income may be lost during the network downtime, increasing total losses.
It is important for companies to take proper steps to protect themselves. Companies need to ensure antivirus software is installed on every computer and that the software is maintained and always up to date. Regular data backups are another critical step to limiting losses from a computer virus, and staggering backups may reduce the risk of a virus infecting the backup data. Anti-spyware and anti-malware software should also be used to reduce the risk of potentially malicious software infecting computers from simple web browsing. Most importantly, companies need to educate their employees so the risks are understood, reducing the potential for someone to click on a link in an e-mail or visit a malicious web site.
Sarbanes-Oxley: Review & Impact
The Sarbanes-Oxley Act (SOX), passed in 2002, was created to increase corporate financial and accounting oversight. Corporate scandals, such as those at Enron and MCI, made it evident new legislation needed to be passed to hold corporations accountable and reduce the potential for such activities to be perpetuated in the future. SOX required sweeping changes in reporting for publicly traded companies, which has had, and will continue to have, both negative and positive impacts on corporations. The pros and cons of these impacts are still hotly debated. They will be examined and discussed following a brief summary of the act.
In response to a number of very public corporate and accounting scandals, SOX was enacted July 30, 2002. Public confidence in securities markets was shaken as companies like Enron, Tyco International, Adelphia, Peregrine Systems and WorldCom collapsed and investors lost billions of dollars without warning. As the activities of these companies were examined and better understood, it became clear certain issues were common across companies and needed to be addressed by the act. These issues include conflicts of interest among auditors, boardroom oversight failures, conflicts of interest among securities analysts, poor banking practices, executive compensation practices involving stock options, issues related to the internet bubble that could happen again, and finally poor rule creation and enforcement by an underfunded SEC.
The financial reporting requirements and mandates of SOX are laid out in 11 titles, each containing several sections, summarized here:
I. Public Company Accounting Oversight Board
Title I establishes independent oversight of auditors by forming a centralized oversight board to define procedures for compliance audits, register auditors, and police and enforce compliance with SOX.
II. Auditor Independence
Title II attempts to limit conflicts of interest by introducing standards for external auditors. This title restricts an auditing company from providing audit clients with non-audit services as well as creating requirements for new auditor approval, audit partner rotation and reporting.
III. Corporate Responsibility
Title III attempts to increase confidence in the accuracy and completeness of corporate financial reports by requiring certain senior executives to sign off on those reports and take individual responsibility for them every quarter.
IV. Enhanced Financial Disclosures
Title IV details the enhanced requirements for financial reporting. To provide assurance of the accuracy of financial reports and disclosures, internal controls and audits on these controls, are mandated. Reporting requirements for transactions including off-balance-sheet transactions, pro-forma figures and stock transactions of corporate insiders are also covered by this title, as well as timing to report material changes and SEC review requirements.
V. Analyst Conflicts of Interest
Title V creates a code of conduct for securities analysts and requirements for disclosure of known conflicts of interest, to help restore investor confidence in analyst reporting.
VI. Commission Resources and Authority
Title VI also addresses investor confidence in analysts by introducing defined practices including the conditions under which an individual broker, advisor or dealer would be barred from practicing.
VII. Studies and Reports
Title VII specifies the studies and reports the Comptroller General and the SEC must perform. Aspects such as accounting firm consolidation, the impact of credit rating agencies on securities markets operations, securities violations and enforcement, and what role investment banks played in the corporate scandals must be covered when the findings are reported.
VIII. Corporate and Criminal Fraud Accountability
Title VIII establishes certain protections for whistle-blowers while setting specific criminal penalties for manipulation, alteration or destruction of financial records and other interference with investigations.
IX. White Collar Crime Penalty Enhancements
Title IX raises the criminal penalties related to white-collar crimes by making failure to certify financial reports a criminal offense and recommending stronger sentencing guidelines for white collar crime.
X. Corporate Tax Returns
Title X states the company tax return should be signed by the Chief Executive Officer of the company.
XI. Corporate Fraud and Accountability
Title XI also relates to sentencing and penalties. This title classifies records tampering and corporate fraud as criminal offenses. Modifications to sentencing guidelines and stronger penalties are also included, giving the SEC the ability to temporarily freeze suspicious transactions.
Measuring the impact of SOX is complicated as other factors that influence the stock market are difficult to isolate and remove. Even with significant analysis and research many different conclusions have been reached in regards to the benefits and costs of SOX thus far as well as into the future. Since the primary concern is the accuracy of financial reporting data, under SOX the importance of IT only relates to its ability to make that reporting more reliable. The negative impact primarily revolves around compliance costs, while the goals of the act itself make up the majority of the benefits.
Compliance with SOX can be very expensive. Multiple research firms have conducted surveys and found significant increases in costs for compliance. Expenses such as internal control costs, accounting fees and higher liability insurance premiums for directors and officers are passed on to customers in the form of higher prices, potentially eliminating any operating profit for small companies. In addition to monetary costs, time and priorities are skewed as employees must be assigned to focus on compliance and additional legal or accounting counsel is hired to ensure the company is not breaking the law. Some critics claim an additional negative impact is small or international businesses not listing stock on a US exchange due to higher costs and compliance requirements.
Researchers have analyzed the impact of the act and determined many of the act’s original goals have been met, demonstrating some success. One paper found that corporate transparency for cross-listed firms subject to SOX had increased relative to companies listed only on foreign exchanges. Other studies have found benefits including more conservative earnings reporting, lower borrowing costs for corporations with improved internal controls, and improved internal controls in general. One study’s findings indicated that companies with no significant weaknesses in their internal controls, or whose issues were corrected quickly, tend to experience significantly greater increases in share price than those with problems, even suggesting the increase in share price was greater than the cost of SOX compliance.
The Sarbanes-Oxley Act is very divisive, making it difficult to determine if the benefits outweigh the costs. No matter the side of the argument, SOX was clearly successful in accomplishing its goal of enhancing the standards for compliance, transparency and accountability. Debate continues to this day, and detractors are one of the biggest potential challenges as the constitutionality of the law has been brought under scrutiny and there has been some pressure to repeal from the financial industry. Perhaps finding a way to simplify compliance without impacting transparency would be an acceptable solution to all.
Business Continuity Planning (BCP)
Business continuity planning is about more than how to keep a business running in the event of a disaster; it is about protecting the reputation of the business when something does go wrong. In the context of BCP the term ‘disaster’ encompasses a number of events. Natural disasters such as earthquakes, tornados, fires, blizzards and floods certainly need to be addressed, as any one could cause an office to close or leave employees unable to get to work, leading to delays in processing, production and shipping. Technology disasters, including server failures, hard drive crashes, viruses and malicious attacks, can also lead to business failures if a continuity plan is not in place to get the business up and running.
Businesses that are unable to recover quickly and efficiently from a disaster with minimal client impact face many risks. Customers may lose faith due to unreliability, slow or missing delivery of a product or difficulty in reaching a representative for assistance. Investor confidence may be tested as the company looks weak and unprepared, and the expenses associated with disaster recovery without a preexisting plan could be astronomical. If a business wants to be able to get back to business quickly with as little negative customer, employee and revenue impact as possible it is important to have a BCP ahead of time.
When creating, implementing and maintaining a plan there are five aspects that must be addressed:
- Assess potential threats – Time should be spent brainstorming disaster scenarios and threats. All risks should be evaluated, but additional focus should be given to the highest risk and/or most likely scenarios. Determining which systems are affected, and how, belongs in this step or the next.
- Core operations must be identified – These are the systems necessary for the business to maintain day-to-day operations, specifically the primary business function. Part of this process is determining a minimum acceptable level at which the business can run.
- Identify critical functions of core operations – Determine both the functional tasks and the resources, tools or equipment required to complete them. This includes finding the minimum aspects of operations that must continue to run to meet the minimum acceptable level. This information allows restricted resources to be directed to where they are most needed.
- Delegate critical function responsibilities – Ensure individuals understand their responsibilities in the event of a disaster. Employees must know the plan, their role, and how to get the necessary resources, tools and equipment to complete their tasks. Distribution of emergency procedure materials and training on those procedures are included in this step.
- Testing and Evaluation – Once all other steps are complete and the plan is put in place, it must be tested and evaluated on a regular basis. Testing may range from fire drills to simulated primary server outages. The plan is evaluated by comparing against various benchmarks and the needs of the business, and should be updated or changed as necessary.
EDI - Advantages and Disadvantages
EDI (Electronic Data Interchange) refers to the electronic communication of business transactions between organizations. EDI implies computer-to-computer transactions directly into vendor databases and ordering systems. In 1996 the National Institute of Standards and Technology defined electronic data interchange as "the computer-to-computer interchange of strictly formatted messages that represent documents other than monetary instruments. EDI implies a sequence of messages between two parties, either of whom may serve as originator or recipient. The formatted data representing the documents may be transmitted from originator to recipient via telecommunications or physically transported on electronic storage media." (http://www.itl.nist.gov/fipspubs/fip161-2.htm) The standard specifies that the usual processing is done by computer only, with human intervention intended only for problem resolution, quality review or other special circumstances.
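To give a feel for the "strictly formatted messages" described above, the toy sketch below splits a simplified, X12-style string into segments and elements ('*' separating elements, '~' terminating segments); the sample purchase-order fragment and the parse_edi helper are invented for illustration and skip all real-world validation.

def parse_edi(message: str):
    # Split an X12-style message into segments, then each segment into elements.
    segments = []
    for raw_segment in message.strip().split("~"):
        if raw_segment:
            segments.append(raw_segment.split("*"))
    return segments

if __name__ == "__main__":
    # Hypothetical purchase-order fragment, for illustration only.
    sample = "ST*850*0001~BEG*00*NE*PO12345**20230115~PO1*1*25*EA*9.95~SE*4*0001~"
    for segment in parse_edi(sample):
        print(segment[0], segment[1:])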
Advantages of EDI
- EDI provides cost savings by reducing paper and eliminating paper processing.
- Time savings and eliminating repetition are other benefits from the reduction in paper processing.
- Documents can be transferred more quickly and processing errors can be decreased allowing business to be done more efficiently.
- More efficient processing will likely lead to improved customer service which will ultimately expand the customer base.
Disadvantages of EDI
- In contrast to XML, which is not strictly standardized, EDI is considered by many to have too many standards.
- There are various standards bodies who have developed 'standard document formats' for EDI which can cause problems with cross compatibility.
- These standards bodies also push standards revisions annually which could cause problems if you have a more recent version of a document than a business partner.
- EDI systems are extremely expensive making it difficult for small businesses to implement.
- Many large organizations will only work with others who utilize EDI. This may limit the business small companies are able to do with such organizations and limit trading partners.