In this multipart series I will present some of the leading industry-standard best practices for enterprise network security using Cisco technologies. Each article in the series will cover a different aspect of security technologies and designs, and how each can be deployed in the enterprise to provide the strongest security posture at the lowest possible budgetary and administrative cost.
In Part 1 of this series, I provided an overview of the critical role that properly designed data security architectures play within an Internet-connected organization. Before we begin to discuss the security designs, processes, and recommendations related to Cisco technology, let’s first discuss some of the ways a network becomes insecure.
Rarely are we presented with the opportunity to design and deploy networks in a greenfield environment, that is, an environment free from constraints imposed by previously deployed systems and technologies. The network existed in the organization long before we accepted the responsibility of supporting and securing it, so we must work with the current design and configuration. To further our misery, we typically find ourselves designing network solutions around archaic technologies or seemingly century-old systems that should have visited the scrap heap long ago. We are forced to do this because it makes business sense: that archaic technology or century-old system continues to meet the needs of the business, and like it or not it is here to stay. And with this realization comes the moment of profound epiphany where we recognize that we must not only design our network security around this thorn in our side, but that our security solutions must coexist with it and protect it as well.
To understand the security solutions of tomorrow, it is necessary to understand the mistakes of the past. Knowing why a network is not secure requires understanding how it reached a state of susceptibility to security breach and what events brought it there; that, in turn, requires understanding the network environment's history and its evolution.
Networks can be thought of as living, breathing organisms. These organisms experience life in much the same way as their human creators. Networks grow and change over time, and it is this cycle of growth and change—this evolution of the network—that determines the strength and effectiveness of network security. The breadth and depth of change occurring in a network environment depends on many external driving forces, which often include market direction, business need, customer demand, and availability of budget dollars.
Change also comes as a result of staffing changes in network and security personnel. This turnover leads to changes in technical direction, in design concept and mindset, and in overall philosophy regarding network and security architectures. With each change the network environment is impacted to some degree, and with each degree of change comes an increased potential for security vulnerabilities to appear in the environment.
The Impact of Change
Change is the only constant. This is especially true in the areas of network and data security. Speeds are always improving, networks are always expanding, and the amount of data to be transmitted and secured is seemingly ever-increasing. We are moving more data to more places and need to do it faster and more reliably, all the while under the requirement of doing it cheaply and securely. All of this requires change.
As the level of technology sophistication continues to increase, our network architectures must be flexible and adaptable enough to absorb the change and grow with it. No network environment stays at a constant configuration or design for very long, as market trends and customer demands force new requirements on the business that translate into changes in business practices. Because network architectures are designed to meet the needs of the business, they must adapt to these new requirements and change as needed. As the network environment changes and the business grows, so too do the security policies, designs, and architectures securing the business's information systems. Growth for any organization is good news in today's economy, and network and data security systems must change and grow with the business to ensure future growth and prosperity.
From a security perspective, constant change in the network and security environment is required to stay one step ahead of the seemingly endless number of new vulnerabilities disclosed each week or month. The environment must be updated, patched, and tuned to mitigate these newly exposed threats at as fast a pace as can be managed and tracked. Anti-virus updates, IDS signatures, and hardware and server operating system (OS) patches are just a few examples. These are all changes required on a constant and consistent basis to keep information systems secure, and for that reason this type of change is good. Good change, however, must occur under the strictest of change control processes to prevent it from becoming a change that negatively impacts the security of the network.
However, with all the business justifications, technological advancements, and vulnerability mitigation reasoning we can conjure to rationalize change in our network environment, none of it can overcome one undeniable fact: change, in any form, can be bad for network security. When change is introduced into an environment, the environment as a whole must be re-evaluated and, if necessary, adjusted to properly adapt to that change. We do this every day in life as we unconsciously adapt our lives and our behaviors to the external forces, the changes, taking place around us. Network environments lack the cognitive abilities of humans to make these adjustments automatically. When you introduce change into an environment, the environment in turn must be adapted to the change; when that adaptation fails to take place, the network security posture of the environment is weakened and becomes susceptible to security vulnerabilities.
Days of Risk - The Vulnerability Life Cycle
While we are not able to predict when new security vulnerabilities will occur, we can predict with a high degree of accuracy the process that unfolds from the moment a vulnerability is discovered until it is fully mitigated. This is known in the security industry as the Vulnerability Life Cycle, and it is the starting point in explaining how an unsecured network evolves.
According to the Vulnerability Life Cycle, security vulnerabilities pass through five major stages during their existence:
Stage 1 - Vulnerability Discovered: At stage 1, an individual or group of individuals identify for the first time the vulnerability in a network, security, or application system.
Stage 2 - Vulnerability Disclosed: At this stage of the life cycle the vulnerability is made publicly known, typically through the Internet or the press. Due to widespread distribution of information about the vulnerability, this event increases the likelihood that an exploit will be created to take advantage of the vulnerability.
Stage 3 - Fix Available: At stage 3 a tested patch or mitigation strategy is made available from hardware or software vendors to mitigate the vulnerability.
Stage 4 - Exploit Code Available: At this stage of the life cycle, sample exploit code or information detailing how to take advantage of the vulnerability is made publicly available.
Stage 5 - Fix Deployed (or vulnerability mitigated): At this stage hardware and software systems have been updated to neutralize the vulnerability, reducing security risk back to normal levels.
The length of time it takes a vulnerability to make it from stage 1, when a vulnerability is discovered, to stage 5, when a permanent fix is deployed, is referred to as the Days of Risk. This length of time represents the time of peak danger for an organization affected by a specific vulnerability. A zero-day (or zero-hour) attack, as an example, is an attack that exposes and exploits undisclosed or unpatched system vulnerabilities. Zero-day attacks are considered extremely dangerous because they take advantage of a flaw at an early stage in the Vulnerability Life Cycle, at a time when no fix is currently available.
Logic dictates that the greater the number of Days of Risk, the more susceptible an organization becomes to the vulnerability, as attackers have more time to create exploits for it. Until the vendor is able to release a proper fix, the organization will be dependent on other security measures deployed within the network environment to lessen the vulnerability's impact. Should an event occur, the network environments most affected will be those lacking structured and layered security architectures.
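To make the Days of Risk arithmetic concrete, here is a minimal sketch in Python that records the date at which each life cycle stage is reached and computes the running Days of Risk, along with the window in which exploit code exists but no fix has been deployed. The VulnerabilityRecord class, its field names, and the example dates are illustrative assumptions for this sketch, not part of any real vulnerability tracker.

```python
# Minimal sketch: a vulnerability tracked through the five life cycle stages.
# Stage names follow the list above; the structure and dates are illustrative.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class VulnerabilityRecord:
    name: str
    discovered: date                          # Stage 1 - Vulnerability Discovered
    disclosed: Optional[date] = None          # Stage 2 - Vulnerability Disclosed
    fix_available: Optional[date] = None      # Stage 3 - Fix Available
    exploit_available: Optional[date] = None  # Stage 4 - Exploit Code Available
    fix_deployed: Optional[date] = None       # Stage 5 - Fix Deployed

    def days_of_risk(self, today: date) -> int:
        """Days from discovery (stage 1) until the fix is deployed (stage 5).

        If the fix has not yet been deployed, the clock is still running.
        """
        end = self.fix_deployed or today
        return (end - self.discovered).days

    def exposed_to_exploit(self, today: date) -> bool:
        """True while exploit code exists but no deployed fix neutralizes it."""
        exploit_out = self.exploit_available is not None and self.exploit_available <= today
        fix_in_place = self.fix_deployed is not None and self.fix_deployed <= today
        return exploit_out and not fix_in_place


# Example: exploit code appeared before the vendor fix was rolled out.
vuln = VulnerabilityRecord(
    name="EXAMPLE-2024-0001",
    discovered=date(2024, 1, 10),
    disclosed=date(2024, 2, 1),
    exploit_available=date(2024, 2, 5),
    fix_available=date(2024, 2, 20),
)

print(vuln.days_of_risk(today=date(2024, 3, 1)))      # 51 days and still counting
print(vuln.exposed_to_exploit(today=date(2024, 3, 1)))  # True: exploit out, fix not deployed
```

In practice the dates would come from vendor advisories and the organization's own patch management records; the point of the sketch is simply that the Days of Risk clock keeps running until stage 5 is reached, not when the vendor publishes a fix.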
Many organizations err in their belief that technical staff will quickly and efficiently handle security patching and apply updates in a timely manner, thereby reducing the risk of exposure. In reality, and in almost all situations, many systems remain vulnerable to security flaws months or even years after corrections become available. In far too many cases security incidents could have been prevented had the network and security systems been actively managed, with all security-relevant patches and updates installed.
One of the primary reasons for the failure to properly maintain secured systems is the lack of technical resources brought on by the economics of business. Because most organizations see their information technology departments as cost centers—and few, if any, generate revenue towards the bottom line—budgetary funding for additional labor to maintain and manage security infrastructures and systems is not available. In these cases, the ever-increasing task of maintaining systems falls to a smaller number of available resources who must balance this task with other critical job functions. Security tasks often fall to the bottom of the list when prioritized projects and tasks that positively impact the bottom line are deemed more important, and remain there as new projects and tasks are added to the top of the list. This cycle is one most technical personnel are far too familiar with, and explains one of the factors that allow an unsecured network to evolve.
Yet another cause of an unsecured network's evolution can be directly attributed to the complexity of the network or security architecture. The more complex the architecture, the more planning and labor required to expand, upgrade, and maintain the environment. As organizations grow, merge, or connect with partner organizations, that complexity can increase exponentially. Most organizations depend on the same small set of resources to administer and maintain their network and security architectures, yet few include the cost of additional network personnel in their business growth plans. All too often a company will announce a major push in a new direction but fail to appreciate the work required of its technical resources to accomplish the goal, and maintenance of the network and security architectures is again prioritized at the bottom of the list in order to meet the timetable of these new goals.
No matter what the reason may be for the existence of an unsecured network, with the current threat landscape it is no longer sufficient for an organization to wait for a vendor to respond to security vulnerabilities. As part of an overall security strategy, the organization must adopt a network security design and posture that allows the network not only to withstand the impact of an exploited vulnerability, but also to minimize and localize that impact through the use of intelligent, structured, and layered security architectures.