“Who Makes the Rules?” The Hidden Risks of Defining Visibility Policies

Imagine what would happen if the governor of one state got to change all the laws for the whole country for a day, without the other states or territories ever knowing about it. And then the next day, another governor gets to do the same. And then another.

Such foreseeable chaos is precisely what happens when multiple IT or security administrators define traffic filtering policies without some overarching intelligence keeping tabs on who’s doing what. Each user acts from their own unique perspective with the best of intentions, but with no way to know how the changes they make might impact other efforts.

In most large enterprises, multiple users need to be able to view and alter policies to maximize performance and security as the network evolves. In such scenarios, however, “last in, first out” policy definition creates dangerous blind spots, and the risk may be magnified in virtualized or hybrid environments where visibility architectures aren’t fully integrated.

Dynamic Filtering Accommodates Multiple Rule-makers, Reduces Risk of Visibility Gap

Among the advances in the latest release of Ixia’s Net Tool Optimizer™ (NTO) network packet brokers are enhancements to the solution’s unique Dynamic Filtering capabilities. This patented technique applies that overarching intelligence across the visibility infrastructure as multiple users act to improve efficiency or divert threats. The technology becomes an absolute requirement when automation is used in the data center: dynamic changes to network filters require recalculating other filters so that overlaps are updated and no data is lost.

Traditional rule-based systems may give a false sense of security and leave an organization vulnerable, because security tools don’t see everything they need to see in order to do their jobs effectively. Say you have three tools, each requiring slightly different but overlapping data.

  • Tool 1 wants a copy of all packets on VLAN 1-3
  • Tool 2 wants a copy of all packets containing TCP
  • Tool 3 wants a copy of all packets on VLAN 3-6

Overlap occurs in that both Tools 1 and 3 need to see TCP traffic on VLAN 3. In rule-based systems, once a packet matches a rule, it is forwarded on and is no longer available to subsequent rules. Tool 1 will receive TCP packets on VLAN 3, but Tool 3 will not. This creates a false sense of security: Tool 3 still receives other data and raises no alarm, which seems to indicate all is well. But what if the data stream going only to Tool 1 contains the smoking gun? Tool 3 would have detected it. And as we know from recent front-page breaches, a single incident can ruin a company’s brand image and have a severe financial impact.
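The difference between the two forwarding models can be sketched in a few lines of Python. The packet model and tool names below are illustrative assumptions, not Ixia's implementation:

```python
from dataclasses import dataclass

# Hypothetical packet model for illustration only.
@dataclass
class Packet:
    vlan: int
    protocol: str

# Each tool's filter expressed as a predicate, in rule order.
filters = {
    "tool1": lambda p: p.vlan in (1, 2, 3),      # VLANs 1-3
    "tool2": lambda p: p.protocol == "TCP",       # all TCP
    "tool3": lambda p: p.vlan in (3, 4, 5, 6),    # VLANs 3-6
}

def first_match(packet):
    """Rule-based model: the first matching rule consumes the packet."""
    for tool, pred in filters.items():
        if pred(packet):
            return [tool]
    return []

def overlap_aware(packet):
    """Dynamic-filtering model: every interested tool gets a copy."""
    return [tool for tool, pred in filters.items() if pred(packet)]

pkt = Packet(vlan=3, protocol="TCP")
print(first_match(pkt))    # only tool1 sees the packet
print(overlap_aware(pkt))  # tool1, tool2, and tool3 all see it
```

Run against a TCP packet on VLAN 3, the first-match model delivers it only to Tool 1, while the overlap-aware model replicates it to all three tools, which is the gap the article describes.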

Extending Peace of Mind across Virtual Networks

NVOS 4.3 also integrates physical and virtual visibility, allowing traffic from Ixia’s Phantom™ Virtualization Taps (vTaps) or standard VMware-based visibility solutions to be terminated on NTO along with physical traffic. Together, these enhancements eliminate serious blind spots inherent in other solutions, avoiding potential risk and, in the worst case, liability caused by putting data at risk.

Integrating physical and virtual visibility minimizes equipment costs and streamlines control by eliminating extra devices that add complexity to your network. Other new additions, like the “double your ports” feature, extend the NTO advantage by delivering greater density, flexibility, and ROI.

Download the latest NTO NVOS release from www.ixiacom.com.

Additional Resources:

Ixia Visibility Solutions

Thanks to Ixia for the article.

Ixia Study Finds That Hidden Dangers Remain within Enterprise Network Virtualization Implementations

Ixia (Nasdaq: XXIA), a leading provider of application performance and security resilience solutions, announced global survey results demonstrating that while most companies believe virtualization technology is a strategic priority, there are clear risks that need to be addressed. Ixia surveyed more than 430 targeted respondents in South and North America (50 percent), APAC (26 percent) and EMEA (24 percent).

The accompanying report, titled The State of Virtualization for Visibility Architecture™ 2015, highlights key findings from the survey, including:

  • Virtualization technology could create an environment for hidden dangers within enterprise networks. When asked about top virtualization concerns, over one third of respondents said they were concerned with their ability (or lack thereof) to monitor the virtual environment. In addition, only 37 percent of respondents noted they are monitoring their virtualized environment in the same manner as their physical environment, which demonstrates that virtual environments are insufficiently monitored. At the same time, over two-thirds of respondents are using virtualization technology for their business-critical applications. Without proper visibility, IT is blind to any business-critical east-west traffic passed between virtual machines.
  • There are knowledge gaps regarding the use of visibility technology in virtual environments. Approximately half of the respondents were unfamiliar with common virtualization monitoring technology, such as virtual taps and network packet brokers. This finding indicates an awareness gap about the technology itself and its ability to alleviate concerns around security, performance, and compliance. Additionally, less than 25 percent have a central group responsible for collecting and monitoring data, which increases the likelihood of inconsistent or improper monitoring.
  • Virtualization technology adoption is likely to continue at its current pace for the next two years. Almost 75 percent of businesses are using virtualization technology in their production environment, and 65 percent intend to increase their use of virtualization technology in the next two years.
  • Visibility and monitoring adoption is likely to continue growing at a consistent pace. The survey found that a large majority (82 percent) agree that monitoring is important. While 31 percent of respondents indicated they plan on maintaining current levels of monitoring capabilities, nearly 38 percent of businesses plan to increase their monitoring capabilities over the next two years.

“Virtualization can bring companies incredible benefits – whether in the form of cost or time saved,” said Fred Kost, Vice President of Security Solutions Marketing, Ixia. “At Ixia, we recognize the importance of this technology transformation, but also understand the risks that are involved. With our solutions, we are able to give organizations the necessary visibility so they are able to deploy virtualization technology with confidence.”

Download the full research report here.

Ixia's The State of Virtualization for Visibility Architectures 2015

Thanks to Ixia for the article.

Virtualization Gets Real

Optimizing NFV in Wireless Networks

The promise of virtualization looms large: greater ability to fast-track services with lower costs, and, ultimately, less complexity. As virtualization evolves to encompass network functions and more, service delivery will increasingly benefit from using a common virtual compute and storage infrastructure.

Ultimately, providers will realize:

Lower total cost of ownership (TCO) by replacing dedicated appliances with commodity hardware and software-based control.

Greater service agility and scalability with functions stitched together into dynamic, highly efficient “service chains” in which each function follows the most appropriate and cost-effective path.

Wired and wireless network convergence as the two increasingly share converged networks, virtualized billing, signaling, security functions, and other common underlying elements of provisioning. Management and orchestration (M&O) and handoffs between infrastructures will become more seamless as protocol gateways and other systems and devices migrate to the cloud.

On-the-fly self-provisioning with end users empowered to change services, add features, enable security options, and tweak billing plans in near real-time.

At the end of the day, sharing a common pool of hardware and flexibly allocated resources will deliver far greater efficiency, regardless of what functions are being run and the services being delivered. But the challenges inherent in moving vital networking functions to the cloud loom even larger than the promise, and are quickly becoming real.

The Lifecycle NFV Challenge: Through and Beyond Hybrid Networks

Just two years after a European Telecommunications Standards Institute (ETSI) Industry Specification Group (ISG) outlined the concept, carriers worldwide are moving from basic proof of concept (PoC) demonstrations in the lab to serious field trials of Network Functions Virtualization (NFV). Doing so means making sure new devices and unproven techniques deliver the same (or better) performance when deployments go live.

The risks of not doing so — lost revenues, lagging reputations, churn — are enough to prompt operators to take things in stages. Most will look to virtualize the “low-hanging fruit” first.

Devices like firewalls, broadband remote access servers (BRAS), policy servers, IMS components, and customer premises equipment (CPE) make ideal candidates for quickly lowering CapEx and OpEx without tackling huge real-time processing requirements. Core routing and switching functions responsible for data plane traffic will follow as NFV matures and performance increases.

In the meantime, hybrid networks will be a reality for years to come, potentially adding complexity and even cost (redundant systems, additional licenses) near-term. Operators need to ask key questions, and adopt new techniques for answering them, in order to benefit sooner rather than later.

To thoroughly test virtualization, testing itself must partly become virtualized. Working in tandem with traditional strategies throughout the migration life cycle, new virtualized test approaches help providers explore these 4 key questions:

1. What to virtualize and when? To find this answer, operators need to baseline the performance of existing network functions, and develop realistic goals for the virtualized deployment. New and traditional methods can be used to measure and model quality and new configurations.

2. How do we get it to work? During development and quality assurance, virtualized test capabilities should be used to speed and streamline testing. Multiple engineers need to be able to instantiate and evaluate virtual machines (VMs) on demand, and at the same time.

3. Will it scale? Here, traditional testing is needed, with powerful hardware systems used to simulate high-scale traffic conditions and session rates. Extreme precision and load aid in emulating real-world capacity to gauge elasticity as well as performance.

4. Will it perform in the real world? The performance of newly virtualized network functions (VNFs) must be demonstrated both on their own and in the context of the overall architecture and end-to-end services. New infrastructure components such as hypervisors and virtual switches (vSwitches) need to be fully assessed and their vulnerability minimized.

Avoiding New Bottlenecks and Blind Spots

Each layer of the new architectural model has the potential to compromise performance. In sourcing new devices and devising techniques, several aspects should be explored at each level:

At the hardware layer, server features and performance characteristics will vary from vendor to vendor. Driver-level bottlenecks can be caused by routine aspects such as CPU and memory read/writes.

With more than one type of server platform often in play, testing must be conducted to ensure consistent and predictable performance as virtual machines (VMs) are deployed and moved from one type of server to another. The performance level of NICs can make or break the entire system as well, with performance dramatically impacted by simply not having the most recent interfaces or drivers.

Virtual switches and implementations vary greatly, with some coming packaged with hypervisors and others functioning standalone. vSwitches may also vary from hypervisor to hypervisor, with some favoring proprietary technology while others leverage open source. Finally, functionality varies widely with some providing very basic L2 bridge functionality and others acting as full-blown virtual routers.

In comparing and evaluating vSwitch options, operators need to weigh performance, throughput, and functionality against utilization. During provisioning, careful attention must also be given to resource allocation and the tuning of the system to accommodate the intended workload (data plane, control plane, signaling).

Moving up the stack, hypervisors deliver virtual access to underlying compute resources, enabling features like fast start/stop of VMs, snapshot, and VM migration. Hypervisors allow virtual resources (memory, CPU, and the like) to be strictly provisioned to each VM, and enable consolidation of physical servers onto a virtual stack on a single server.

Again, operators have multiple choices. Commercial products may offer more advanced features, while open source alternatives have the broader support of the NFV community. In making their selection, operators should evaluate both the overall performance of each potential hypervisor, and the requirements and impact of its unique feature set.

Management and orchestration is undergoing a fundamental shift from managing physical boxes to managing virtualized functionality. Increased automation is required, as this layer must interact with both virtualized server and network infrastructures, often using OpenStack protocols and, in many cases, SDN.

VMs and VNFs themselves ultimately impact performance as each requires virtualized resources (memory, storage, and vNICs), and involves a certain number of I/O interfaces. In deploying a VM, it must be verified that the host OS is compatible with the hypervisor. For each VNF, operators need to know which hypervisors the VMs have been verified on, and assess the ability of the host OS to talk to both virtual I/O and the physical layer. The ultimate portability, or ability of a VM to be moved between servers, must also be demonstrated.

Once deployments go live, other overarching aspects of performance, like security, need to be safeguarded. With so much now occurring on a single server, migration to the cloud introduces some formidable new visibility challenges that must be dealt with from start to finish:

Pinpointing performance issues grows more difficult. Boundaries may blur between hypervisors, vSwitches, and even VMs themselves. The inability to source issues can quickly give way to finger pointing that wastes valuable time.

New blind spots also arise. In a traditional environment, traffic is visible on the wire connected to the monitoring tools of choice. Inter-VM traffic within virtualized servers, however, is managed by the hypervisor’s vSwitch, without traversing the physical wire visible to monitoring tools. Traditional security and performance monitoring tools can’t see above the vSwitch, where “east-west” traffic now flows between guest VMs. This newly created gap in visibility may attract intruders and mask pressing performance issues.

Monitoring tool requirements increase as tools tasked with filtering data at rates for which they were not designed quickly become overburdened.

Audit trails may be disrupted, making documenting compliance with industry regulations more difficult, and increasing the risk of incurring fines and bad publicity.

To overcome these emerging obstacles, a new virtual visibility architecture is evolving. As with lab testing, physical and virtual approaches to monitoring live networks are now needed to achieve 100% visibility, replicate field issues, and maintain defenses. New virtualized monitoring Taps (vTaps) add the visibility into inter-VM traffic that traditional tools don’t deliver.
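The east-west blind spot described above comes down to a simple distinction: whether both endpoints of a flow sit on the guest VM network, so the traffic never crosses a physical wire. A minimal sketch of that classification, where the guest subnet is a hypothetical example value:

```python
import ipaddress

# Hypothetical guest VM subnet; a real deployment would derive this
# from its own hypervisor/vSwitch configuration.
GUEST_NET = ipaddress.ip_network("10.0.0.0/24")

def flow_direction(src: str, dst: str) -> str:
    """Classify a flow as 'east-west' when both endpoints are on the
    guest VM subnet (traffic stays inside the vSwitch, invisible to
    wire-attached tools), otherwise 'north-south'."""
    src_inside = ipaddress.ip_address(src) in GUEST_NET
    dst_inside = ipaddress.ip_address(dst) in GUEST_NET
    return "east-west" if (src_inside and dst_inside) else "north-south"

print(flow_direction("10.0.0.5", "10.0.0.9"))    # east-west: vTap needed
print(flow_direction("10.0.0.5", "192.0.2.10"))  # north-south: visible on the wire
```

Only the flows the sketch labels north-south ever reach a physical tap; everything labeled east-west is exactly the traffic a vTap exists to expose.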

From There On…

The bottom line is that the road to network virtualization will be a long one, without a clear end and filled with potential detours and unforeseeable delays. But with the industry as a whole banding together to pave the way, NFV and its counterpart, Software Defined Networking (SDN), represent a paradigm shift the likes of which the industry hasn’t seen since the move to mobility itself.

As with mobility, virtualization may cycle through some glitches, retrenching, and iterations on its way to becoming the norm. And once again, the providers who embrace the change, validating the core concepts and measuring success each step of the way, will benefit most (as well as first), setting themselves up to innovate, lead, and deliver for decades to come.

Thanks to OSP for the article.

3 Steps to Server Virtualization Visibility

Network Instruments- 3 Steps to Server Virtualization

Each enterprise has its own reasons for moving to virtual infrastructure, but it all boils down to the demand for better and more efficient server utilization. Ensure comprehensive visibility with three practical steps.

Virtualization is a money-saving technology that allows the enterprise to stretch its IT budget much further by better utilizing server assets. Consider the immediate reduction in data center footprint, maintenance, and capital and operating expense overhead. Then add the promise to dynamically adjust server workloads and service delivery to achieve optimal user experience—the vision of true orchestration. It’s easy to see why server virtualization is key to many organizations’ operational strategy.

But what if something goes wrong? With network infrastructure, you can usually track the root cause of north/south traffic issues back to one location via careful instrumentation of the resources. Troubleshooting is then facilitated with any number of free and commercial monitoring tools. How can you get the same visibility you need to validate service health within the virtual server hypervisor and its east/west vSwitch traffic?

3 Steps to Virtual Visibility Cheat Sheet

Step One:

Get status of host and virtualization components

  • Use polling technologies such as SNMP, WSD, and WMI to provide performance metrics like CPU utilization, memory usage, and virtualized variables like individual VM instance status to find the real cause of service issues.
  • Do your homework. Poor application response time and other service issues can be tied to unexpected sources.

Step Two:

Monitor vSwitch east/west traffic

  • To the network engineer, everything disappears once it hits the virtual server. To combat this “black box effect,” there are two methods to maintain visibility:

1. Inside Virtual Monitoring Model

a. Create a dedicated VM “monitoring instance”.
b. Transmit relevant data to this instance for analysis.
c. Analyze traffic locally with a monitoring solution.
d. Transmit summary or packet data to an external central analysis solution.

2. Outside Virtual Monitoring Model

a. Push copies of raw, unprocessed vSwitch east/west traffic out of the virtualized server.

Step Three:

Inspect perimeter and client north/south conversations

  • Instrument highly saturated Application Access Layer links with a packet capture device like Observer GigaStor™ to record conversations and rewind them for back-in-time analysis.

To learn more, download the white paper here:

Network Instruments- 3 Steps to Server Virtualization