Virtualization Visibility

See Virtual with the Clarity of Physical

The cost-saving shift to virtualization has made it harder for network teams to maintain an accurate view of their environments. While application performance is often the first casualty when visibility is reduced, the right solution can match, and in some cases even exceed, the capabilities of traditional monitoring strategies.

Virtual Eyes

Network teams are the de facto “first responders” when application performance degrades. For this reason, it’s critical to maintain visibility into and around all virtual constructs for effective troubleshooting and optimal service delivery. Otherwise, much of the value of server virtualization and consolidation efforts may be offset by sub-par application performance.

Fundamentally, achieving comprehensive visibility of a virtualized server environment requires an understanding of the health of the underlying resources, including the host, hypervisor, and virtual switch (vSwitch), along with perimeter, client, and application traffic.

In addition, unique communication technologies like VXLAN and Cisco FabricPath must be supported for full visibility into the traffic in these environments. Without this support, network analyzers cannot gain comprehensive views into virtual data center (VDC) traffic.

Step One: Get Status of Host and Virtualization Components

The host, hypervisor, and vSwitch are the foundation of the entire virtualization effort so their health is crucial. Polling technologies such as SNMP, WSD, and WMI can provide performance insight by interrogating the host and various virtualized elements. A fully-integrated performance management platform can not only provide these views, but also display relevant operating metrics in a single, user-friendly dashboard.

Metrics such as CPU utilization and memory usage, along with virtualization-specific data like the status of individual VM instances, are examples of accessible data. Often, these parameters point to the root cause of service issues that would otherwise manifest themselves only indirectly.
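
As a concrete illustration of this kind of polling, the sketch below walks the HOST-RESOURCES-MIB processor-load table of a hypervisor host over SNMP using the open-source pysnmp library. The hostname and community string are placeholders, and a real platform would poll far more (memory, datastore, per-VM status via vendor MIBs or APIs) and roll the results into a dashboard.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

HOST = "esxi-host.example.com"   # placeholder hypervisor host
COMMUNITY = "public"             # placeholder SNMPv2c community string

# hrProcessorLoad from HOST-RESOURCES-MIB: average load per processor (%)
HR_PROCESSOR_LOAD = "1.3.6.1.2.1.25.3.3.1.2"

for err_indication, err_status, err_index, var_binds in nextCmd(
        SnmpEngine(),
        CommunityData(COMMUNITY),
        UdpTransportTarget((HOST, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(HR_PROCESSOR_LOAD)),
        lexicographicMode=False):          # stop at the end of this table
    if err_indication or err_status:
        print("SNMP error:", err_indication or err_status.prettyPrint())
        break
    for oid, value in var_binds:
        print(f"{oid} = {value}%")
```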

For example, poor response time of an application hosted on a virtualized server may have nothing to do with the service or the network, but may instead be tied to excessively high CPU utilization. Without this monitoring perspective, troubleshooting will be more difficult and time consuming.

Next Steps

Virtualization and consolidation offer significant upside for today’s dynamic data center model and for achieving optimal IT business service delivery. However, monitoring visibility must be maintained so potential application degradation issues can be detected and resolved before they impact the end user.

To learn more about how your team can achieve the same visibility in virtualized environments as you do in physical environments, download the complete 3 Steps to Server Virtualization Visibility White Paper now.

Thanks to Viavi Solutions for the article.

Don’t Miss the Forest for the Trees: Taps vs. SPAN

These days, your network is as important to your business as any other item—including your products. Whether your customers are internal or external, you need a dependable and secure network that grows with your business. Without one, you are dead in the water.

IT managers have a nearly impossible job. They must understand, manage, and secure the network all the time against all problems. Anything less than a 100 percent working network is a failure. There is a very familiar saying: Don’t miss the forest for the trees. Meaning don’t let the details prevent you from seeing the big picture. But what if the details ARE the big picture?

Today’s IT managers can’t miss the forest OR the trees!

Network visibility is a prime tool in properly monitoring your network. You need an end-to-end visibility architecture to truly see your network. This visibility architecture must reveal both the big picture and the smallest details to present a true view of what is happening in the network.

The first building-block to your visibility architecture is access to the data. To efficiently monitor a network, you must have complete visibility into that network. This means being able to reliably capture 100% of the network traffic under all network conditions.

To achieve this, devices must be installed in the network to capture that data, using “taps” or Switched Port Analyzer (SPAN) ports.

A tap is a passive splitting mechanism placed between two network devices. It provides a monitoring connection. Using taps, you can easily connect monitoring devices such as protocol analyzers, RMON probes and intrusion detection and prevention systems to the network. The tap duplicates all traffic on the link and forwards this to the monitoring device. Any monitoring device connected to a tap receives the same traffic as if it were in-line. This includes all errors. Taps do not introduce delay, or alter the content or structure of the data. They also fail open so that traffic continues to flow between network devices, even if you remove a monitoring device or power to the device is lost.

A SPAN port – also known as a mirroring port – is a function of one or more ports on a switch in the network. Like a tap, monitoring devices can also be attached to this SPAN port.
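
To give a feel for the configuration step a SPAN port requires (and a tap avoids), here is a minimal sketch that pushes a typical Cisco IOS monitor-session configuration using the open-source Netmiko library. The hostname, credentials, session number, and interface names are placeholders, and the exact commands vary by switch platform.

```python
from netmiko import ConnectHandler

# Placeholder connection details for the switch being mirrored
switch = {
    "device_type": "cisco_ios",
    "host": "dist-switch-1.example.net",
    "username": "netops",
    "password": "REPLACE_ME",
}

# Mirror all traffic on Gi1/0/1 to the analyzer attached to Gi1/0/48
span_commands = [
    "monitor session 1 source interface GigabitEthernet1/0/1 both",
    "monitor session 1 destination interface GigabitEthernet1/0/48",
]

with ConnectHandler(**switch) as conn:
    print(conn.send_config_set(span_commands))
```

A tap, by contrast, needs no switch configuration at all; it is simply inserted into the link.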

So what are the advantages of taps vs SPAN?

  • A tap captures everything on the wire, including MAC and media errors. A SPAN port will drop those packets.
  • A tap is unaffected by bandwidth saturation. A SPAN port cannot handle heavily used full-duplex links without dropping packets.
  • A tap is simple to install. A SPAN port requires an engineer to configure the switch or switches.
  • A tap is not an addressable network device. It cannot be hacked. SPAN ports leave you vulnerable.
  • A tap doesn’t require you to dedicate a switch port to monitoring, freeing that port to carry switching traffic.

Thanks to Ixia for the article.

A Deeper Look Into Network Device Policy Checking

In our last blog post, “Why You Need NCCM As Part Of Your Network Management Platform,” I introduced the many reasons that growing networks should investigate and implement an NCCM solution. One of those reasons is that an NCCM system can help automate a key area related to network security, compliance, and availability: policy checking.

So, in this post, I will be taking a deeper dive into Network Device Policy Checking which will (hopefully) shed some light onto what I believe is an underutilized component of NCCM.

The main goal of policy checking in general is to make sure that all network devices adhere to pre-determined standards with regard to their configuration. These standards are typically put in place to address different but interrelated concerns. Broadly speaking, these concerns break down into the following:

  1. Device Authentication, Authorization and Accounting (AAA, ACL)
  2. Specialized Regulatory Compliance Rules (PCI DSS, SOX, HIPAA …)
  3. Device Traffic Rules (QoS policies etc.)

Device Authentication, Authorization and Accounting (AAA)

AAA policies focus on access to devices – primarily by engineering staff – for purposes such as configuration and updates, as well as how this access is authenticated and tracked. Access to infrastructure devices is policed and controlled with AAA servers (TACACS+, RADIUS) and ACLs (Access Control Lists) so as to tighten access to device operating systems.

It is highly recommended to create security policies so that security access configurations can be policed for consistency and reported on if they change or if vital elements of the configuration are missing.

Many organizations, including the very security conscious NSA, even publish guidelines for AAA policies they believe should be implemented.

They offer these guidelines for specific vendors such as Cisco, and they can be downloaded from their website, http://www.nsa.gov. These guidelines are useful to anyone interested in securing their network infrastructure, but they become hard requirements if you need to interact in any way with US government or military networks.

Some basic rules include:

  1. Establish a dedicated management network
  2. Encrypt all traffic between the manager and the device
  3. Establish multiple levels or roles for administrators
  4. Log device activities

These rules, as well as many others, offer a first step toward maintaining a secure infrastructure.
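
One way to make rules like these enforceable is to express them as concrete configuration statements that a policy checker can search for. The snippet below is a hypothetical rule set for Cisco IOS-style devices, written in Python so it can feed the kind of checking engine discussed later in this post; the specific statements and the logging host address are illustrative placeholders, not a recommendation.

```python
# Hypothetical policy rules, loosely mirroring the four practices above.
# Real rules would come from your own standards and vendor guidelines.
REQUIRED_LINES = [
    "aaa new-model",                          # centralized AAA for access control
    "ip ssh version 2",                       # encrypt management sessions
    "aaa authorization exec default group tacacs+ local",  # role-based access
    "logging host 192.0.2.50",                # log device activity to a collector
]

FORBIDDEN_LINES = [
    "ip http server",                         # no unencrypted management access
    "enable password",                        # avoid weakly stored credentials
]
```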

Specialized Regulatory Compliance Rules:

Many of these rules are similar to and overlap with the AAA rules mentioned above. However, these policies often have very specialized additional components designed for special restrictions due to regulatory laws, or certification requirements.

Some of the most common policies are designed to meet the requirements of devices that carry traffic with sensitive data like credit card numbers, or personal data like Social Security numbers or hospital patient records.

For example, according to PCI, public WAN link connections are considered untrusted public networks. A VPN is required to securely tunnel traffic between a store and the enterprise network. The Health Insurance Portability and Accountability Act (HIPAA) also provides guidelines around network segmentation (commonly implemented with VLANs), where traffic carrying sensitive patient data should be separated from “normal” traffic like Web and email.

If your company or organization has to adhere to these regulatory requirements, then it is imperative that such configuration policies are put in place and checked on a consistent basis to ensure compliance.

Device Traffic Rules:

These policies are generally concerned with the design of traffic flows and QoS. In large organizations and service providers (telcos, MSPs, ISPs), it is common to differentiate traffic based on pre-defined service types related to prioritization or other distinctions.

Ensuring service design rules are applied and policed is usually a manual process and therefore susceptible to inaccuracies. Creating design policy rules provides greater control over the service offerings – e.g., QoS settings for enhanced service offerings or a complete end-to-end service type – and ensures compliance with the service delivery SLAs (Service Level Agreements).

Summary:

Each of these rules and potentially others should be defined and policed on a continuous basis. Trying to accomplish this manually is very time consuming, inefficient, and fraught with potential errors (which can become really big problems).

The best way to keep up with these policy requirements is with an automated, electronic policy checking engine. These systems should be able to run on a schedule and detect whether the devices under its control are in or out of compliance. When a system is found to be out of compliance, then it should certainly have the ability to report this to a manager, and potentially even have the ability to auto-remediate the situation. Remediation may involve removing any known bad configurations or rolling back the configuration to a previously known “good” state.
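
As a rough sketch of what the core of such an engine does, the function below compares saved device configurations against required and forbidden statements (such as the hypothetical rule set shown earlier) and reports each device as compliant or out of compliance. A real NCCM product layers scheduling, vendor-aware parsing, reporting, and auto-remediation on top of this basic idea; the backups directory and rule lists here are placeholders.

```python
from pathlib import Path

# Example rules (see the earlier sketch); real rules come from your policies.
REQUIRED_LINES = ["aaa new-model", "ip ssh version 2", "logging host 192.0.2.50"]
FORBIDDEN_LINES = ["ip http server"]

def check_policy(config_text, required, forbidden):
    """Return human-readable violations for a single device configuration."""
    lines = [line.strip() for line in config_text.splitlines()]
    violations = [f"missing required statement: {rule}"
                  for rule in required
                  if not any(line.startswith(rule) for line in lines)]
    violations += [f"forbidden statement present: {rule}"
                   for rule in forbidden
                   if any(line.startswith(rule) for line in lines)]
    return violations

# Check every nightly configuration backup in a local directory
for backup in sorted(Path("backups").glob("*.cfg")):
    problems = check_policy(backup.read_text(), REQUIRED_LINES, FORBIDDEN_LINES)
    status = "COMPLIANT" if not problems else "OUT OF COMPLIANCE"
    print(f"{backup.stem}: {status}")
    for problem in problems:
        print(f"  - {problem}")
```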

Thanks to NMSaaS for the article.

Infosim® Global Webinar – Why is this App So Terribly Slow?

Infosim® Global Webinar Day
Why is this app so terribly slow?

How to achieve full
Application Monitoring with StableNet®

Join Matthias Schmid, Director of Project Management with Infosim®, for a Webinar and Live Demo on “How to achieve full Application Monitoring with StableNet®” on September 24th, 2015.

This Webinar will provide insight into:

  • Why you need holistic monitoring for all your company applications
  • How the technologies offered by StableNet® will help you master this challenge

Furthermore, we will provide you with an exclusive insight into how StableNet® was used to achieve full application monitoring for a global company.

A recording of this Webinar will be available to all who register!
(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

Is Network Function Virtualization (NFV) Ready to Deliver?

There is no doubt that virtualization is one of the hottest technology topics with communication service providers (CSPs) today. Nearly all the forecasts suggest that widespread NFV adoption will happen over the next few years, with CSPs benefitting from significantly reduced operational costs and much higher revenues resulting from increased service flexibility and velocity. So much for the hype – but where do NFV standards, guidelines, and technology implementations stand today, and when will the promised benefits be fully realized?

“Nearly all the forecasts suggest that widespread NFV adoption will happen over the next few years, with content service providers benefitting from significantly reduced operational costs and much higher revenues resulting from increased service flexibility and velocity.” – Ronnie Neil, JDSU

All analysts and CSPs agree that the introduction of virtualization will happen in phases. Exactly what the phases will be does vary from forecast to forecast, but a relatively common and simple model details the following three phases:

  1. Islands of specific network functions with little or no service chaining and manual configuration.
  2. Either islands of specific network functions with dynamic self-configuration, or the introduction of service chaining, but again employing manual configuration.
  3. Finally, service chaining coupled with dynamic self-configuration functionality.

The financial benefits of virtualization will grow incrementally as each phase is reached, with the full benefits not realized until phase 3. So where are we today in this NFV evolution?

Phase 1 is already happening, with some early commercial deployments of stand-alone virtualized network functions. These deployments include virtualized functions of customer premise equipment (CPE), for example gateways and firewalls, and evolved packet core (EPC) components such as HLRs and MMEs, functions that lend themselves to virtualization due to their software-only architectures. But generally speaking, this is as far as commercial NFV deployments have reached in their evolution, with phases 2 and 3 still some way off. One of the main reasons for this is that these latter phases introduce major new requirements for the management tools associated with network virtualization.

And it is only recently that industry efforts to define standards, guidelines, and best practices for the management and orchestration of NFV (or MANO, as it is referred to) have started. The emphasis up until now within research forums has been on the basics of delivering the network function virtualization itself.

The TM Forum Zero-touch Operation, Orchestration, and Management (ZOOM) program is one of the foremost industry forums focused on the MANO aspects of virtualization. At this year’s TM Forum Live! event (Nice, France, June 1-4), the following two ZOOM-related catalyst projects will demonstrate aspects of MANO associated with NFV dynamic self-configuration.

  • Maximizing Profitability with Network Functions Virtualization
  • Operations Transformation and Simplifications Enabled by Virtual CPE

Thanks to Viavi Solutions for the article.

Why You Need NCCM As Part Of Your Network Management Platform

In the landscape of Enterprise Network Management, most products (and IT professionals) tend to focus on “traditional” IT monitoring. By that I mean the monitoring of devices, servers, and applications for performance issues and faults. That makes sense because most networks evolve in a similar fashion. They are first built out to accommodate the needs of the business. This primarily involves supporting access for people to applications they need to do their jobs. Once the initial buildout is done (or at least slows down), the next phase is typically implementing a monitoring solution to notify the service desk when there are problems. This pattern of growth, implementation, and monitoring continues essentially forever until the business itself changes through an acquisition or (unfortunately) a shutdown.

However, when a business reaches a certain size, there are a number of new considerations that come into play in order to effectively manage the network. The key word here is “manage” as opposed to “monitor”. These are different concepts, and the distinction is important. While monitoring is primarily concerned with the ongoing surveillance of the network for problems (think alarms that result in a service desk incident), network management covers the processes, procedures, and policies that govern access to devices and the changes made to them.

What is NCCM?

Commonly known by the acronym NCCM, which stands for Network Configuration and Change Management, NCCM is the “third leg” of IT management alongside the traditional Performance and Fault Management (PM and FM). The focus of NCCM is to ensure that as network systems move through their common lifecycle (see Figure 1 below) there are policies and procedures in place that ensure proper governance of what happens to them.

Figure 1. Network Device Lifecycle

Source: huawei.com

NCCM therefore focuses on the device itself as an asset of the organization, and then on how that asset is provisioned, deployed, configured, changed, upgraded, moved, and ultimately retired. Along each step of the way there should be controls put in place as to Who can access the device (including other devices), How they can access it, What they can do to it (with and without approval), and so on. All NCCM systems should also incorporate logging and auditing so that managers can review what happened in case of a problem later.

These controls are becoming more and more important in today’s modern networks. Depending on which research you read, between 60% and 90% of all unplanned network downtime can be attributed to a mistake made by an engineer when reconfiguring a device. Despite many organizations having strict written policies about when a change can be made to a device, the fact remains that many network engineers can and will log into a production device during working hours and make on-the-fly changes. Of course, no engineer willfully brings down a core device. They believe the change they are making is both necessary and non-invasive. But as the saying goes, “The road to (you know where) is paved with good intentions.”

A correctly implemented NCCM system can therefore mitigate the majority of these unintended problems. By strictly controlling access to devices and forcing all changes to devices to be both scheduled and approved, an NCCM platform can be a lifesaver. Additionally, most NCCM applications use some form of automation to accomplish repetitive tasks which are another common source of device misconfigurations. For example, instead of a human being making the same ACL change to 300 firewalls (and probably making at least 2-3 mistakes) the NCCM software can perform that task the same way, over and over, without error (and in much less time).
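
As a simplified sketch of that kind of automation, the script below pushes the same ACL change to a list of firewalls using the open-source Netmiko library. The inventory, credentials, platform type, and ACL commands are placeholders; an actual NCCM platform would wrap this in scheduling, approval workflows, logging, and rollback.

```python
from netmiko import ConnectHandler

# Placeholder inventory -- in practice this comes from the NCCM device database
FIREWALLS = ["fw-branch-001.example.net", "fw-branch-002.example.net"]

ACL_CHANGE = [
    "ip access-list extended BRANCH-IN",
    "permit tcp any any eq 443",          # the single change to apply everywhere
]

for host in FIREWALLS:
    device = {
        "device_type": "cisco_ios",       # placeholder platform type
        "host": host,
        "username": "netops",
        "password": "REPLACE_ME",
    }
    try:
        with ConnectHandler(**device) as conn:
            conn.send_config_set(ACL_CHANGE)
            conn.save_config()
            print(f"{host}: change applied")
    except Exception as exc:              # keep going and report the failure
        print(f"{host}: FAILED ({exc})")
```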

As NCCM is more of a general class of products and not an exact standard, there are many additional potential features and benefits of NCCM tools. Many of them can also perform the initial Discovery and Inventory of the network device estate. This provides a useful baseline of “what we have” which can be a critical component of both NCCM and Performance and Fault Management.

Most NCCM tools should also be able to perform a scheduled backup of device configurations. These backups are the foundation for many aspects of NCCM including historical change reporting, device recovery through rollback options, and policy checking against known good configurations or corporate security and access policies.
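
A minimal sketch of that backup step, again using Netmiko with placeholder device details: it saves the running configuration to a dated file and prints a diff against the previous backup, which is the raw material for historical change reporting, rollback, and policy checking.

```python
import difflib
from datetime import date
from pathlib import Path
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",           # placeholder device details
    "host": "core-rtr-1.example.net",
    "username": "netops",
    "password": "REPLACE_ME",
}

backup_dir = Path("backups") / device["host"]
backup_dir.mkdir(parents=True, exist_ok=True)

with ConnectHandler(**device) as conn:
    running_config = conn.send_command("show running-config")

today_file = backup_dir / f"{date.today()}.cfg"
today_file.write_text(running_config)

# Diff against the most recent earlier backup, if one exists
previous = sorted(p for p in backup_dir.glob("*.cfg") if p != today_file)
if previous:
    diff = difflib.unified_diff(
        previous[-1].read_text().splitlines(),
        running_config.splitlines(),
        fromfile=previous[-1].name,
        tofile=today_file.name,
        lineterm="",
    )
    print("\n".join(diff) or "no changes since last backup")
```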

Lastly, understanding of the vendor lifecycle for your devices such as End-of-Life and End-of-Support is another critical component of advanced NCCM products. Future blog posts will explore each of these functions in more detail.

The benefits of leveraging configuration management solutions reach into every aspect of IT.

Configuration management solutions also enable organizations to:

  • Maximize the return on network investments by 20%
  • Reduce the Total Cost of Ownership by 25%
  • Reduce the Mean Time to Repair by 20%
  • Reduce Overexpansion of Bandwidth by 20%

Because of these operational benefits, NCCM systems have become a critical component of enterprise network management platforms.

Best Practices Guide - 20 Best Practices for NCCM

Thanks to NMSaaS for the article.

Cloud, Virtualization Solution – Example of Innovation

Our team is excited to represent Viavi Solutions during an industry (IT- and cloud-focused) event, VMworld, in San Francisco at booth #2235. We’ll be showcasing our latest innovation – the GigaStor Software Edition, designed for managing performance in virtual, cloud, and remote environments.

Here are some topline thoughts about why this product matters for our customers and core technologies trending today, what a great time it is for the industry and to be Viavi!

For starters, the solution is able to deliver quick and accurate troubleshooting and assurance in next generation network architecture. As networks become virtualized and automated through SDN initiatives, performance monitoring tools need to evolve or network teams risk losing complete visibility into user experience and missing performance problems. With GigaStor Software, engineers have real-time insight to assess user experience in these environments, and proactively identify application problems before they impact the user.

“GigaStor Software Edition helps engineers troubleshoot with confidence in virtual and cloud environments by having all the traffic retained for resolving any challenge, and expert analytics leading to quick resolution.”

With the explosion of online applications and mobile devices, the role of cloud and virtualization will increase in importance, along with the need for enterprises and service providers to guarantee around-the-clock availability or risk losing customers. With downtime costing companies $300K per hour, or $5,600 per minute, the solution that solves the problem the fastest will get the business. Walking the show floor at VMworld, IT engineers will be looking for solutions like GigaStor Software that help ensure quality network and services, as well as speed and accuracy when enabling advanced networks for their customers.

And, what a great time to be Viavi Solutions! Our focus on achieving visibility regardless of the environment and delivering real-time actionable insights in a cost-effective solution means our customers are going to be able to guarantee high levels of service and meet customer expectations without breaking the bank. GigaStor Software Edition helps engineers troubleshoot with confidence in virtual and cloud environments by having all the traffic retained for resolving any challenge and expert analytics that lead to quick resolution.

Thanks to Viavi Solutions for the article.

Do You Have a Network Operations Center Strategy?

The working definition of a Network Operations Center (NOC) varies with each customer we talk with; however, the one point which remains unified is that the NOC should be the main point of visibility for key functions that combine to provide business services.

The level at which a NOC ‘product’ is interactive depends on individual customer goals and requirements. Major equipment vendors trying to increase revenue are delving into management and visibility solutions with acquisitions and mergers, and while their products may provide many good features, those features are focused on their own product lines. In mixed-vendor environments this becomes challenging and expensive if you have to increase the number of visibility islands.

One trend we have seen emerging is the desire for consolidation and simplification within the operations center. In many cases our customers may have the information required to understand the root cause, but getting to that information quickly across multiple standalone tools is a major challenge. Let’s face it, there will never be one single solution that fulfills absolutely all monitoring and business requirements, and having specialized tools is likely necessary.

The balance lies in finding a powerful, yet flexible solution; one that not only offers a solid core functionality and feature set, but also encourages the orchestration of niche tools. A NOC tool should provide a common point of visibility if you want to quickly identify which business service is affected; easily determine the root cause of that problem, and take measures to correct the problem. Promoting integration with existing business systems, such as CMDB and Helpdesk, both northbound and southbound, will ultimately expand the breadth of what you can accomplish within your overall business delivery strategy. Automated intelligent problem resolution, equipment provisioning, and Change and Configuration Management at the NOC level should also be considered as part of this strategy.

Many proven efficiencies are exposed when you fully explore tool consolidation with the goal of eliminating overlapping technologies, process-related bottlenecks, and duplication. While an internal tool review often meets resistance, it is necessary, and the end result can be enlightening from both a financial and a process perspective. Significant cost savings are easily achieved with fewer maintenance contracts, and automation can take over a large percentage of the non-value-adding activities of network engineers, freeing them to work on proactive new innovations and concepts.

The ‘Dark Side’

Forward-thinking companies are deploying innovative products which allow them to move towards an unmanned Network Operations Center, or ‘Dark NOC’. Factors such as energy consumption, bricks-and-mortar costs, and other increasing operational expenditures strengthen the case that a NOC may be located anywhere with a network connection and still provide full monitoring and visibility. Next-generation tools are no longer a nice-to-have, but a reality in today’s dynamic environment! What is your strategy?

Ixia Taps into Visibility, Access and Security in 4G/LTE

The Growing Impact of Social Networking Trends on Lawful Interception

Lawful Interception (LI) is the legal process by which a communications network operator or Service Provider (SP) gives authorized officials access to the communications of individuals or organizations. With security threats mushrooming in new directions, LI is more than ever a priority and major focus of Law Enforcement Agencies (LEAs). Regulations such as the Communications Assistance for Law Enforcement Act (CALEA), mandate that SPs place their resources at the service of these agencies to support surveillance and interdiction of individuals or groups.

CALEA makes Lawful Interception a priority mission for Service Providers as well as LEA; its requirements make unique demands and mandate specific equipment to carry out its high-stakes activities. This paper explores requirements and new solutions for Service Provider networks in performing Lawful Interception.

A Fast-Changing Environment Opens New Doors to Terrorism and Crime

In the past, Lawful Interception was simpler and more straightforward because it was confined to traditional voice traffic. Even in the earlier days of the Internet, it was still possible to intercept a target’s communication data fairly easily.

Now, as electronic communications take on new forms and broaden to a potential audience of billions, data volumes are soaring, and the array of service offerings is growing apace. Lawful Interception Agencies and Service Providers are racing to thwart terrorists and other criminals who have the technological expertise and determination to carry out their agendas and evade capture. This challenge will only intensify with the rising momentum of change in communication patterns.

Traffic patterns have changed: In the past it was easier to identify peer-to-peer applications or chat using well-known port numbers. In order to evade LI systems, the bad guys had to work harder. Nowadays, most applications use standard HTTP, and in most cases SSL, to communicate. This puts an extra burden on LI systems that must identify more targets overall across larger volumes of data with fewer filtering options.

Social Networking in particular is pushing usage to exponential levels, and today’s lawbreakers have a growing range of sophisticated, encrypted communication channels to exploit. With the stakes so much higher, Service Providers need robust, innovative resources that can contend with a widening field of threats. This interception technology must be able to collect volume traffic and handle data at unprecedented high speeds and with pinpoint security and reliability.

LI Strategies and Goals May Vary, but Requirements Remain Consistent

Today, some countries are using nationwide interception systems while others only dictate policies that providers need to follow. While regulations and requirements vary from country to country, organizations such as the European Telecommunications Standards Institute (ETSI) and the American National Standards Institute (ANSI) have developed technical parameters for LI to facilitate the work of LEAs. The main functions of any LI solution are to access Interception-Related Information (IRI) and Content of Communication (CC) from the telecommunications network and to deliver that information in a standardized format via the handover interface to one or more monitoring centers of law enforcement agencies.

High-performance switching capabilities, such as those offered by the Ixia Director™ family of solutions, should map to the following LI standards in order to be effective: they must be able to isolate suspicious voice, video, or data streams for an interception based on IP address, MAC address, or other parameters, and the device must be able to carry out filtering at wire speed. Requirements for supporting Lawful Interception activities include:

  • The ability to intercept all applicable communications of a certain target without gaps in coverage, such as dropped packets, where missing encrypted characters may render a message unreadable or incomplete
  • Total visibility into network traffic at any point in the communication stream
  • Adequate processing speed to match network bandwidth
  • Undetectability, unobtrusiveness, and lack of performance degradation (a red flag to criminals and terrorists on alert for signs that they have been intercepted)
  • Real-time monitoring capabilities, because time is of the essence in preventing a crime or attack and in gathering evidence
  • The ability to provide intercepted information to the authorities in the agreed-upon handoff format
  • Load sharing and balancing of traffic that is handed to the LI system

From the perspective of the network operator or Service Provider, the primary obligations and requirements for developing and deploying a lawful interception solution include:

  • Cost-effectiveness
  • Minimal impact on network infrastructure
  • Compatibility and compliance
  • Support for future technologies
  • Reliability and security

Ixia’s Comprehensive Range of Solutions for Lawful Interception

This Ixia customer (the “Service Provider”) is a 4G/LTE pioneer that relies on Ixia solutions. Ixia serves the LI architecture by providing the access part of an LI solution in the form of Taps and switches. These contribute functional flexibility and can be configured as needed in many settings. Both the Ixia Director solution family and the iLink Agg™ solution can aggregate a group of links and pick out conversations with the same IP address pair from any of the links.
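
To make “picking out a conversation by IP address pair” concrete, the snippet below does the same selection in software with the open-source scapy library and a BPF filter. This is purely a conceptual illustration with placeholder addresses; the Ixia devices described here perform this selection in hardware at wire speed, which a software capture cannot match.

```python
from scapy.all import sniff

# Placeholder endpoints of the conversation of interest
TARGET_A = "203.0.113.10"
TARGET_B = "198.51.100.7"

# BPF filter: only packets exchanged between the two target addresses
bpf = f"host {TARGET_A} and host {TARGET_B}"

# Print a one-line summary of each matching packet (requires root/libpcap)
sniff(filter=bpf, prn=lambda pkt: print(pkt.summary()), store=False)
```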

Following are further examples of Ixia products that can form a vital element of a successful LI initiative:

Test access ports, or Taps, are devices used by carriers and others to meet the capability requirements of CALEA legislation. Ixia is a global leader in the range and capabilities of its Taps, which provide permanent, passive access points to the physical stream.

Ixia Taps reside in both carrier and enterprise infrastructures to perform network monitoring and to improve both network security and efficiency. These devices provide permanent, passive access points to the physical stream. The passive characteristic of Taps means that network data is not affected whether the Tap is powered or not. As part of an LI solution, Taps have proven more useful than SPAN ports: if Law Enforcement Agencies must reconfigure a switch to send the right conversations to the SPAN port every time an intercept is required, there is a risk of misconfiguring the switch and its connections. Also, SPAN ports drop packets – another significant monitoring risk, particularly with encrypted traffic.

Director xStream™ and iLink Agg xStream™ enable deployment of an intelligent, flexible and efficient monitoring access platform for 10G networks. Director xStream’s unique TapFlow™ filtering technology enables LI to focus on select traffic of interest for each tool based on protocols, IP addresses, ports, and VLANs. The robust engineering of Director xStream and iLink Agg xStream enables a pool of 10G and 1G tools to be deployed across a large number of 10G network links, with remote, centralized control of exactly which traffic streams are directed to each tool. Ixia xStream solutions enable law enforcement entities to view more traffic with fewer monitoring tools, as well as relieve oversubscribed 10G monitoring tools. In addition, law enforcement entities can share tools and data access among groups without contention and centralize data monitoring in a network operations center.

Director Pro™ and Director xStream Pro data monitoring switches offer law enforcement the ability to perform better pre-filtering via Deep Packet Inspection (DPI) and to hone in on a specific phone number or credit card number. These products differ from other platforms that may only be able to seek data within portions of the packet, thanks to a unique ability to filter content or perform pattern matching in hardware at wire speed, potentially up to Layer 7. Such DPI provides the ability to apply filters to a packet or multiple packets at any location, regardless of packet length, how “deep” the packet is, or where within the packet the data to be matched is located – the DPI system is totally independent of the packet structure.

Thanks to Ixia for the article.

The Case for an All-In-One Network Monitoring Platform

There are many famous debates in history: dogs vs cats, vanilla vs chocolate & Coke vs Pepsi just to name a few. In the IT world, one of the more common debates is “single platform vs point solution”. That is, when it comes to the best way to monitor and manage a network, is it better to have a single management platform that can do multiple things, or would it be better to have an array of tools that are each specialized for a job?

The choice can be thought of as being between multitaskers and unitaskers: Swiss Army knives vs. dedicated instruments. As with most things in life, the answer can be complex and probably will never be agreed upon by everyone – but that doesn’t mean we can’t explore the subject and form some opinions of our own.

For this debate, we need to look at the major considerations which go into this choice. That is, what key areas need to be addressed by any type of network monitoring and management solution, and how do our two options fare in those spaces? For this post, I will focus on three main areas to try to draw some conclusions:

  • Initial Cost
  • Operations
  • Maintenance

1) Initial Cost

This may be one of the more difficult areas to really get a handle on, as costs can vary wildly from one vendor to another. Many of the “All-In-One” tools come with a steep entry price, but then do not grow significantly after that. Other AIO tools offer flexible licensing options which allow you to purchase only the particular modules or features that you need, and then easily add on other features when you want them.

In contrast, the “Point-Solutions” may not come with a large price tag, but you need to purchase multiple tools in order to cover your needs. You can therefore take a piecemeal approach to purchasing which can certainly spread your costs out as long as you don’t leave critical gaps in your monitoring in the meantime. And, over time, the combined costs for many tools can become larger than a single system.

Newer options like pay-as-you-go SaaS models can greatly reduce or even eliminate the upfront costs for both AIO and point solutions. It is important to investigate whether the vendors you are looking at offer that type of service.

Bottom Line:

Budgets always matter. If your organization is large enough to absorb the initial cost of a larger umbrella NMS, then this typically leads to a lower total cost in the long run, as long as you don’t also need to supplement the AIO solution with too many secondary solutions. SaaS models can be a great way to get going with either option as they reduce the initial Cap-Ex spend necessary.

2) Operations

In some ways, the real heart of the AIO vs PS question should come down to this: “which choice will help me solve issues more quickly?” Most monitoring solutions are used to respond when there is an issue with service delivery, and so the first goal of any NMS should be to help the IT team rapidly diagnose and repair problems.

When thought of in the context of the AIO vs PS debate, you need to think about the workflow involved when an alarm or ticket is raised. With an AIO solution, an IT pro would immediately use that system both to see the alarm and then to dive into the affected systems or devices to try and understand the root cause of the problem.

If the issue is systemic (meaning that multiple locations/users/services are affected) then an AIO solution has the clear advantage of being able to see a more holistic view of the network as a whole instead of just a small portion as would be the case for many Point Solutions. If the AIO application contains a root cause engine then this can be a huge time saver as it may be able to immediately point the staff in the right direction.

On the other hand, if that AIO solution cannot see deeply enough into the individual systems to pinpoint the issues, then a point solution has an advantage due to its (typically) deeper understanding of the systems it monitors. It may be that only a solution provided directly by the systems manufacturer would have insight into the cause of the problem.

Bottom line

All-In-One solutions typically work best when problems occur which affect more than one area of the network, whereas point solutions may be required if there are proprietary components that don’t have good support for standards-based monitoring like SNMP.

3) Maintenance

The last major consideration is one that I don’t think gets enough attention in this debate – the ongoing maintenance of the solutions themselves, i.e. “managing the management solutions”. All solutions require maintenance to keep them working optimally: there are upgrades, patches, server moves, etc. There are also the training requirements of any staff that need to use these systems. This can add up to significant “costs” in time and energy.

This is where AIO solutions can really shine. Instead of having to maintain and upgrade many solutions, your staff can focus on maintaining a single system. The same thing goes for training – think about how hard it can be to really become an expert in anything, then multiply that by the training required to become proficient at X number of tools that your organization has purchased.

I have seen many places where the expertise in certain tools becomes specialized – and therefore becomes a single point of failure for the organization. If only “Bob” knows how to use that tool, then what happens when there is a problem and “Bob” is on vacation, or leaves the group?

Bottom Line:

Unless your organization can spend the time and money necessary to keep the entire staff fully trained on all of the critical network tools, then AIO solutions offer a real advantage over point solutions when it comes to maintainability of your IT management systems.

In the end, I suspect that this debate will never completely be decided. There are many valid reasons for organizations to choose one path over another when it comes how to organize their IT monitoring platforms.

In our view, we see some real advantages to the All-In-One solution approach, as long as the platform of choice does not have too many gaps in it which then need to be filled with additional point solutions.

Thanks to NMSaaS for the article.