The State of Enterprise Security Resilience – An Ixia Research Report

Ixia, an international leader in application performance and security resilience technology, conducted a survey to better understand how network security resilience solutions and techniques are used within the modern enterprise. While plenty of information exists on security products and threats, very little is available on how those products are actually used, or on the techniques and technology that ensure security is fully integrated into the corporate network structure. This report presents our findings.

The survey explored security and visibility architectures through three areas of emphasis: the types of products in use and how they are used, the processes in place, and the people who operate these architectures.

This report features several key findings that include the following:

  • Many enterprises and carriers are still highly vulnerable to the effects of a security breach, owing to failure to follow best practices, process issues, lack of awareness, and lack of proper technology.
  • Lack of knowledge, not cost, is the primary barrier to security improvements. However, typical annual spend on network security is less than $100K worldwide.
  • Security resilience approaches are growing in worldwide adoption. A primary contributor is the merging of visibility and security architectures. Additional data shows that life-cycle security methodologies and security resilience testing are also positive contributors.
  • The top two main security concerns for IT are data loss and malware attacks.

These four key findings confirm that while there are still clear dangers to network security in the enterprise, there is some hope for improvement. The severity of the risk has not gone away, but it appears that some are managing it with the right combination of investment in technology, training, and processes.

To read more, download the report here.


Thanks to Ixia for the article.

5 Reasons Why You Must Back Up Your Routers and Switches

I’ve been working in the Network Management business for over 20 years, and in that time I have certainly seen my share of networks. Big and small, centralized and distributed, brand name vendor devices in shiny datacenters, and no-name brands in basements and bunkers. The one consistent surprise I continue to encounter is how many of these organizations (even the shiny datacenter ones) lack a backup system for their network device configurations.

I find that a little amazing, since I also can’t tell you the last time I talked to a company that didn’t have a backup system for their servers and storage systems. I mean, who doesn’t back up their critical data? It seems so obvious that hard drives need to be backed up in case of problems – and yet many of these same organizations, some of which spend huge amounts of money on server backup, do not even think of backing up the critical configurations of the devices that actually move the traffic around.

So, with that in mind, I present 5 reasons why you must back up your Routers and Switches (and Firewalls, WLAN Controllers, Load Balancers, etc.).

1. Upgrades and new rollouts.

Network devices get swapped out all of the time. In many cases, these rollouts are planned and scheduled. At some point (if you’re lucky) an engineer will think about backing up the configuration of the old device before the replacement occurs. However, I have seen more than one case where this didn’t happen. The old device is gone, and the new device needs to be reconfigured from scratch – hopefully with all of the correct configs. A scheduled backup solution makes these situations a lot less painful.
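
As a rough illustration of what such a scheduled backup can look like, here is a minimal Python sketch, assuming SSH-reachable devices and the open-source netmiko library; the hostnames, credentials, and directory layout are placeholders, not a reference to any particular product:

    # Minimal scheduled-backup sketch (assumes the open-source netmiko
    # library and SSH-reachable devices; all names are placeholders).
    from datetime import datetime
    from pathlib import Path

    from netmiko import ConnectHandler

    DEVICES = [
        {"device_type": "cisco_ios", "host": "core-sw1.example.net",
         "username": "backup", "password": "secret"},
        # ...one entry per device to be backed up
    ]

    def backup_all(dest_dir: str = "config-backups") -> None:
        Path(dest_dir).mkdir(exist_ok=True)
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        for dev in DEVICES:
            conn = ConnectHandler(**dev)                       # open SSH session
            config = conn.send_command("show running-config")  # pull the config
            conn.disconnect()
            path = Path(dest_dir) / f"{dev['host']}-{stamp}.cfg"
            path.write_text(config)                            # timestamped copy

    if __name__ == "__main__":
        backup_all()  # run from cron/a scheduler before any planned swap-out

Run from cron or any scheduler, a script like this means the "old device is gone" scenario starts from a recent config copy instead of from scratch.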

2. Disaster Recovery.

This is the opposite of the simple upgrade scenario. The truth is that many devices are not replaced until they fail – especially those “forgotten” devices that sit at the edge of the network in ceilings, basements, and far-flung places. These systems rarely get much “love” until there is a problem. Then, suddenly, there is an outage – and in the scramble to get back up and running, a central repository of device configurations can be a time (and life) saver.

3. Compliance

We certainly see this more in larger organizations, but it also becomes a real driving force in smaller companies that operate in highly regulated industries like banking and healthcare. If your company falls into one of those categories, then chances are you actually have a duty to back up your devices in order to stay within regulatory compliance. The downside of being non-compliant can be harsh. We have worked with companies that were financially penalized for every day they were out of compliance with a number of policies, including failure to have a simple router / switch / firewall backup system in place.

4. Quick Restores.

Ask most network engineers and they will tell you – we’ve all had that “oops” moment when we were making a configuration change on the fly and realized, just a second after hitting “enter”, that we just broke something. Hopefully, we just took down a single device. Sometimes it’s worse than that and we go into real panic mode. It is at that exact moment that we realize how important configuration backups can be. The restoration process can be simple and (relatively) painless, or it can be really, really painful; and it all comes down to whether or not you have good backups.

5. Policy Checking.

One of the often overlooked benefits of backing up your device configurations is that it allows an NCCM system to automatically scan those backups and compare them to known good configurations in order to ensure compliance with company standards. Done by hand, this is a very tedious (and therefore often skipped) process – especially in large organizations with many devices and many changes taking place. Sophisticated systems can quickly identify when a device configuration has changed, immediately back up the new config, and then scan that config to make sure it’s not violating any company rules. Regular scans can be rolled up into scheduled reports which provide management with a simple but important audit of all devices that are out of compliance.
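
To make the policy-scan idea concrete, here is a minimal Python sketch that runs over previously saved config backups; the two rule lists are illustrative placeholders, not any vendor’s actual policy syntax:

    # Minimal policy-scan sketch: check saved config backups against
    # simple must-contain / must-not-contain rules (rules are
    # illustrative placeholders, not any vendor's policy syntax).
    import re
    from pathlib import Path

    REQUIRED = [r"^service password-encryption", r"^logging host \S+"]
    FORBIDDEN = [r"^ip http server"]  # e.g. company policy bans the HTTP server

    def check(config_text):
        violations = []
        for rule in REQUIRED:
            if not re.search(rule, config_text, re.MULTILINE):
                violations.append("missing required line: " + rule)
        for rule in FORBIDDEN:
            if re.search(rule, config_text, re.MULTILINE):
                violations.append("forbidden line present: " + rule)
        return violations

    for cfg in Path("config-backups").glob("*.cfg"):
        for violation in check(cfg.read_text()):
            print(f"{cfg.name}: {violation}")  # roll up into a scheduled report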

Bottom Line:

Routers, Switches and Firewalls really are the heart of a network. Unless they are running smoothly, everything suffers. One of the simplest yet most effective practices for ensuring the smooth operation of a network is to implement an automatic device configuration backup system.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article. 

NetFlow Auditor – Best Practices for Network Traffic Analysis Webinar

In this Webinar, John Olson of NetFlow Auditor discusses the top 10 best practices for using NetFlow data to analyze, report, and alert on multiple aspects of network traffic.

These best practices include:

  • Understanding the best places in your network to collect NetFlow data
  • How long you should retain your flow data
  • Why you should archive more than just the top 100 flows
  • When static thresholds should be replaced by adaptive thresholds (see the sketch after this list)
  • More…
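
As a rough illustration of the static-vs-adaptive distinction above, the Python sketch below (with arbitrary placeholder factors) alerts only when a traffic sample deviates sharply from an exponentially weighted moving baseline, rather than from a fixed number:

    # Adaptive-threshold sketch: alert when a sample deviates from a
    # moving baseline rather than from a fixed value (factors are
    # arbitrary placeholders; tune for real data).
    class AdaptiveThreshold:
        def __init__(self, alpha=0.2, factor=3.0):
            self.alpha = alpha    # smoothing weight for baseline updates
            self.factor = factor  # deviations-from-baseline needed to alert
            self.mean = None
            self.dev = 0.0

        def update(self, sample):
            if self.mean is None:          # first sample seeds the baseline
                self.mean = sample
                return False
            # floor the deviation so early samples don't false-alarm
            spread = max(self.dev, 0.05 * abs(self.mean))
            alert = abs(sample - self.mean) > self.factor * spread
            self.dev = (1 - self.alpha) * self.dev + self.alpha * abs(sample - self.mean)
            self.mean = (1 - self.alpha) * self.mean + self.alpha * sample
            return alert

    monitor = AdaptiveThreshold()
    for mbps in [80, 82, 79, 85, 81, 450]:  # synthetic per-minute samples
        if monitor.update(mbps):
            print(f"anomaly: {mbps} Mbps")   # only the 450 spike should fire

A static threshold set high enough to ignore a busy Monday would miss a quiet-hours spike; the adaptive version catches both.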


Thanks to NetFlow Auditor for the article.  

CIO Review – Infosim Unified Solution for Automated Network Management

CIO Review

20 Most Promising Networking Solution Providers

Virtualization has become the lifeblood of the networking industry today. With the advent of technologies such as software-defined networking and network function virtualization, the black box paradigm of the legacy networking model has been shattered. In the past, the industry witnessed networking technologies such as Fiber Distributed Data Interface (FDDI) eventually give way to Ethernet, the predominant network of choice. This provided opportunities to refresh infrastructures and create new networking paradigms. Today, we see a myriad of proprietary technologies competing for next generation networking models that are no longer static, opaque, or rigid.

Ushering in a new way of thinking and unlocking possibilities, customers are increasingly demanding automation from network solution providers. The key requirement is an agile network controlled from a single source. Visibility into the network has also become a must-have, providing real-time information about events occurring inside the network.

In order to enhance enterprise agility, improve network efficiency and maintain high standards of security, several innovative players in the industry are delivering cutting-edge solutions that ensure visibility, cost savings and automation in the networks. In the last few months we have looked at hundreds of solution providers who primarily serve the networking industry, and shortlisted the ones that are at the forefront of tackling challenges faced by this industry.

In our selection, we looked at each vendor’s capability to fulfill the burning needs of the sector through a variety of cost effective and flexible solutions that add value to the networking industry. We present to you CIO Review’s 20 Most Promising Networking Solution Providers 2015.

Infosim Unified Solution for Automated Network Management

Today’s networking technology, though very advanced, faces a major roadblock—the lack of automation in network management products. “These products are incapable of delivering a truly unified management approach as they are not an integrated solution but merely a collection of different programs bound together under one GUI to give them the appearance of an integrated solution,” notes Jim Duster, CEO, Infosim. Moreover, the need to continuously update new device information, configuration changes, and alerts and actions across these different toolsets is an ongoing financial burden for enterprises. Addressing these concerns with a unique network management solution is Infosim, a manufacturer of Automated Service Fulfillment and Service Assurance solutions.

Infosim offers StableNet, a unified solution designed to cover performance management, fault management, and configuration management with software engineered on a single code base and a consistent underlying data model. “StableNet is the only ‘suite’ within the network performance management software industry,” claims Duster. The solution addresses the existing operational and technical challenges of managing distributed, virtualized, and mission critical IT infrastructures. “With this approach, we are able to create workflows in every unique customer business and industry to cover many processes efficiently,” he adds. For instance, StableNet monitors the production equipment of a manufacturing company. In case of an equipment failure, the error is reported and StableNet delivers the root cause of the problem while notifying an external service provider. The service provider’s technician can open an inspection window with StableNet, exchange the defective device and, after repair, provide feedback to the customer’s operations center.

To support flexible deployment of StableNet, the company offers the Infosim StableNet appliance, a high performance, preconfigured, security-hardened hardware platform. “Appliances in the StableNet series reduce Total Cost of Ownership (TCO) by simplifying deployment, consolidating network infrastructure, and providing an extensible platform that can scale with your organization,” states Duster. StableNet also provides a low cost agent platform called the StableNet Embedded Agent (SNEA), which enables highly distributed installations to support End-to-End (E2E) Visibility, Cloud Monitoring, and the Internet of Things. SNEA is economical to deploy and is auto-discovered at tactical collection points in networks, resulting in a low TCO for collecting and processing network performance actions and alerts.

Infosim StableNet is deployed across the networks of major players in the Telco and Enterprise markets, including a German auto manufacturer. Acting as the client’s centralized system, StableNet reduced their toolset from over 10 disparate software and hardware offerings from multiple suppliers to fewer than four. This significantly reduced TCO while increasing service levels. “Siloed IT personnel who used to hide behind non-consolidated results from their individual solutions were all synchronized into one solution, speeding productivity, collaboration and communication,” states Duster.

Infosim is currently participating in advanced research projects on Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) with several universities and leading industry device manufacturers. “The company applies 50 percent of its new software development resources to customer needs, which assists customers in achieving milestones in vendor agnostic device support additions, industry specific capabilities, and features that were envisioned by real users,” asserts Duster.

For the years ahead, Infosim plans to build upon its product capability by automating the processes and activities that produce guaranteed service levels and reduce the consumption of human resources in the Network Operations Center (NOC). “Our vision is to enable the Dark NOC, which means a large percent of the non-value adding activities of network engineers can be automated in our product, freeing network engineers to work on proactive new innovations and concepts,” Duster concludes.

Thanks to Infosim for the article. 

Infosim® Global Webinar – Why is this App So Terribly Slow?

Infosim® Global Webinar Day
Why is this app so terribly slow?

How to achieve full
Application Monitoring with StableNet®

Join Matthias Schmid, Director of Project Management with Infosim®, for a Webinar and Live Demo on “How to achieve full Application Monitoring with StableNet®” on September 24th, 2015.

This Webinar will provide insight into:

  • Why you need holistic monitoring for all your company applications
  • How the technologies offered by StableNet® will help you master this challenge

Furthermore, we will provide you with an exclusive insight into how StableNet® was used to achieve full application monitoring for a global company.


A recording of this Webinar will be available to all who register!
(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

Why You Need NCCM As Part Of Your Network Management Platform

In the landscape of Enterprise Network Management, most products (and IT professionals) tend to focus on “traditional” IT monitoring. By that I mean the monitoring of devices, servers, and applications for performance issues and faults. That makes sense, because most networks evolve in a similar fashion. They are first built out to accommodate the needs of the business – primarily giving people access to the applications they need to do their jobs. Once the initial buildout is done (or at least slows down), the next phase is typically implementing a monitoring solution to notify the service desk when there are problems. This pattern of growth, implementation, and monitoring continues essentially forever, until the business itself changes through an acquisition or (unfortunately) a shutdown.

However, when a business reaches a certain size, a number of new considerations come into play in order to effectively manage the network. The key word here is “manage” as opposed to “monitor”. These are different concepts, and the distinction is important. While monitoring is primarily concerned with the ongoing surveillance of the network for problems (think alarms that result in a service desk incident), Network Management comprises the processes, procedures, and policies that govern access to devices and changes to those devices.

What is NCCM?

Commonly known by the acronym NCCM, which stands for Network Configuration and Change Management, NCCM is the “third leg” of IT management alongside the traditional Performance and Fault Management (PM and FM). The focus of NCCM is to ensure that as network systems move through their common lifecycle (see Figure 1 below), there are policies and procedures in place that ensure proper governance of what happens to them.

Figure 1. Network Device Lifecycle


Source: huawei.com

NCCM is therefore focused on the device itself as an asset of the organization, and on how that asset is provisioned, deployed, configured, changed, upgraded, moved, and ultimately retired. Along each step of the way there should be controls in place governing who can access the device (including other devices), how they can access it, what they can do to it (with and without approval), and so on. All NCCM systems should also incorporate logging and auditing so that managers can review what happened in case of a problem later.
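
As a loose illustration of these who/what controls plus auditing, here is a minimal Python sketch; the role table, action names, and log format are hypothetical, not any NCCM product’s actual model:

    # Access-control and audit sketch: gate device actions on a simple
    # role table and log every attempt (role names, actions, and the
    # log format are hypothetical, not any NCCM product's model).
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="nccm-audit.log", level=logging.INFO)

    ROLES = {"alice": {"read", "configure"}, "bob": {"read"}}

    def perform(user, action, device):
        allowed = action in ROLES.get(user, set())
        logging.info("%s user=%s action=%s device=%s allowed=%s",
                     datetime.now(timezone.utc).isoformat(),
                     user, action, device, allowed)
        return allowed  # caller proceeds only when True

    perform("bob", "configure", "core-sw1")  # denied, but still audited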

These controls are becoming more and more important in today’s modern networks. Depending on which research you read, between 60% and 90% of all unplanned network downtime can be attributed to a mistake made by an engineer when reconfiguring a device. Despite many organizations having strict written policies about when a change can be made to a device, the fact remains that many network engineers can and will log into a production device during working hours and make on-the-fly changes. Of course, no engineer willfully brings down a core device. They believe the change they are making is both necessary and non-invasive. But as the saying goes, “The road to (you know where) is paved with good intentions”.

A correctly implemented NCCM system can therefore mitigate the majority of these unintended problems. By strictly controlling access to devices and forcing all changes to be both scheduled and approved, an NCCM platform can be a lifesaver. Additionally, most NCCM applications use some form of automation to accomplish repetitive tasks, which are another common source of device misconfigurations. For example, instead of a human being making the same ACL change to 300 firewalls (and probably making at least 2-3 mistakes), the NCCM software can perform that task the same way, over and over, without error (and in much less time).
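
A sketch of that bulk-change automation, again assuming the open-source netmiko library and a placeholder inventory and ACL line, might look like this:

    # Bulk-change sketch: push one identical change to many devices
    # (assumes the open-source netmiko library; the inventory and the
    # ACL line are placeholders).
    from netmiko import ConnectHandler

    FIREWALLS = [
        {"device_type": "cisco_asa", "host": f"fw{i:03d}.example.net",
         "username": "change", "password": "secret"}
        for i in range(1, 301)   # the 300 firewalls from the example
    ]

    ACL_CHANGE = [
        "access-list OUTSIDE_IN line 1 deny ip host 203.0.113.9 any",
    ]

    for dev in FIREWALLS:
        conn = ConnectHandler(**dev)
        conn.send_config_set(ACL_CHANGE)  # same lines every time, no typos
        conn.save_config()                # persist to startup configuration
        conn.disconnect()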

As NCCM is more of a general class of products and not an exact standard, there are many additional potential features and benefits of NCCM tools. Many of them can also perform the initial Discovery and Inventory of the network device estate. This provides a useful baseline of “what we have” which can be a critical component of both NCCM and Performance and Fault Management.

Most NCCM tools should also be able to perform a scheduled backup of device configurations. These backups are the foundation for many aspects of NCCM including historical change reporting, device recovery through rollback options, and policy checking against known good configurations or corporate security and access policies.
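
For example, historical change reporting can be as simple as diffing the two most recent backups of a device. The sketch below uses only Python’s standard difflib, with a placeholder file layout matching timestamped backup names:

    # Change-report sketch: diff the two most recent backups of a device
    # using only the standard library (the timestamped file layout
    # matches the backup sketch earlier and is a placeholder).
    import difflib
    from pathlib import Path

    def latest_two(device, dest_dir="config-backups"):
        files = sorted(Path(dest_dir).glob(device + "-*.cfg"))
        return files[-2], files[-1]  # timestamped names sort oldest-first

    old, new = latest_two("core-sw1.example.net")
    diff = difflib.unified_diff(
        old.read_text().splitlines(), new.read_text().splitlines(),
        fromfile=old.name, tofile=new.name, lineterm="",
    )
    print("\n".join(diff))  # empty output means nothing changed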

Lastly, understanding of the vendor lifecycle for your devices such as End-of-Life and End-of-Support is another critical component of advanced NCCM products. Future blog posts will explore each of these functions in more detail.

The benefits of leveraging configuration management solutions reach into every aspect of IT.

Configuration management solutions also enable organizations to:

  • Maximize the return on network investments by 20%
  • Reduce the Total Cost of Ownership by 25%
  • Reduce the Mean Time to Repair by 20%
  • Reduce Overexpansion of Bandwidth by 20%

Because of these operational benefits, NCCM systems have become a critical component of enterprise network management platforms.

Best Practices Guide - 20 Best Practices for NCCM

Thanks to NMSaaS for the article.

Cloud, Virtualization Solution – Example of Innovation

Our team is excited to represent Viavi Solutions during an industry (IT and cloud-focused) event, VMworld, in San Francisco at booth #2235. We’ll be showcasing our latest innovation – the GigaStor Software Edition, designed for managing performance in virtual, cloud, and remote environments.

Here are some topline thoughts about why this product matters for our customers and for the core technologies trending today – and what a great time it is for the industry, and to be Viavi!

For starters, the solution is able to deliver quick and accurate troubleshooting and assurance in next generation network architecture. As networks become virtualized and automated through SDN initiatives, performance monitoring tools need to evolve or network teams risk losing complete visibility into user experience and missing performance problems. With GigaStor Software, engineers have real-time insight to assess user experience in these environments, and proactively identify application problems before they impact the user.


With the explosion of online applications and mobile devices, the role of cloud and virtualization will only increase in importance, along with the need for enterprises and service providers to guarantee around-the-clock availability or risk losing customers. With downtime costing companies $300K per hour, or $5,600 per minute, the solution that solves the problem fastest will get the business. Walking the show floor at VMworld, IT engineers will be looking for solutions like GigaStor Software that help ensure quality networks and services, as well as speed and accuracy when enabling advanced networks for their customers.

And what a great time to be Viavi Solutions! Our focus on achieving visibility regardless of the environment and delivering real-time actionable insights in a cost-effective solution means our customers will be able to guarantee high levels of service and meet customer expectations without breaking the bank. GigaStor Software Edition helps engineers troubleshoot with confidence in virtual and cloud environments by retaining all the traffic needed to resolve any challenge, with expert analytics that lead to quick resolution.

Thanks to Viavi Solutions for the article.

Do You Have a Network Operations Center Strategy?

The working definition of a Network Operations Center (NOC) varies with each customer we talk with; however, the one point of agreement is that the NOC should be the main point of visibility for the key functions that combine to provide business services.

The level at which a NOC ‘product’ is interactive depends on individual customer goals and requirements. Major equipment vendors seeking to increase revenue are delving into management and visibility solutions through acquisitions and mergers, and while their products may provide many good features, those features are focused on their own product lines. In mixed vendor environments this becomes challenging and expensive if it increases the number of visibility islands.

One trend we have seen emerging is the desire for consolidation and simplification within the Operations Center. In many cases our customers may have the information required to understand the root cause, but getting to that information quickly is a major challenge across multiple standalone tools. Let’s face it: there will never be one single solution that fulfills absolutely all monitoring and business requirements, and having specialized tools is likely necessary.

The balance lies in finding a powerful yet flexible solution; one that not only offers solid core functionality and a strong feature set, but also encourages the orchestration of niche tools. A NOC tool should provide a common point of visibility so you can quickly identify which business service is affected, easily determine the root cause of the problem, and take measures to correct it. Promoting integration with existing business systems, such as CMDB and Helpdesk, both northbound and southbound, will ultimately expand the breadth of what you can accomplish within your overall business delivery strategy. Automated intelligent problem resolution, equipment provisioning, and Change and Configuration Management at the NOC level should also be considered as part of this strategy.

Many proven efficiencies are exposed when you fully explore tool consolidation with a goal of eliminating overlapping technologies, process-related bottlenecks, and duplication. While internal tool review often meets resistance, it is necessary, and the end result can be enlightening from both a financial and a process perspective. Significant cost savings are easily achieved with fewer maintenance contracts, and with automation a large percentage of the non-value-adding activities of network engineers can be handled within the product, freeing engineers to work on proactive new innovations and concepts.

The ‘Dark Side’

Forward thinking companies are deploying innovative products which allow them to move towards an unmanned Network Operations Center, or ‘Dark NOC’. Factors such as energy consumption, bricks and mortar costs, and other increasing operational expenditures mean that the NOC may be located anywhere with a network connection and still provide full monitoring and visibility. Next generation tools are no longer a nice-to-have, but a reality in today’s dynamic environment! What is your strategy?

Ixia Taps into Visibility, Access and Security in 4G/LTE

The Growing Impact of Social Networking Trends on Lawful Interception

Lawful Interception (LI) is the legal process by which a communications network operator or Service Provider (SP) gives authorized officials access to the communications of individuals or organizations. With security threats mushrooming in new directions, LI is more than ever a priority and major focus of Law Enforcement Agencies (LEAs). Regulations such as the Communications Assistance for Law Enforcement Act (CALEA) mandate that SPs place their resources at the service of these agencies to support surveillance and interdiction of individuals or groups.

CALEA makes Lawful Interception a priority mission for Service Providers as well as LEAs; its requirements make unique demands and mandate specific equipment to carry out its high-stakes activities. This paper explores requirements and new solutions for Service Provider networks in performing Lawful Interception.

A Fast-Changing Environment Opens New Doors to Terrorism and Crime

In the past, Lawful Interception was simpler and more straightforward because it was confined to traditional voice traffic. Even in the earlier days of the Internet, it was still possible to intercept a target’s communication data fairly easily.

Now, as electronic communications take on new forms and broaden to a potential audience of billions, data volumes are soaring, and the array of service offerings is growing apace. Lawful Interception Agencies and Service Providers are racing to thwart terrorists and other criminals who have the technological expertise and determination to carry out their agendas and evade capture. This challenge will only intensify with the rising momentum of change in communication patterns.

Traffic patterns have changed: in the past it was easier to identify peer-to-peer applications or chat using well known port numbers. In order to evade LI systems, the bad guys had to work harder. Nowadays, most applications use standard HTTP and in most cases SSL to communicate. This puts an extra burden on LI systems that must identify more targets overall on larger volumes of data with fewer filtering options.

Social Networking in particular is pushing usage to exponential levels, and today’s lawbreakers have a growing range of sophisticated, encrypted communication channels to exploit. With the stakes so much higher, Service Providers need robust, innovative resources that can contend with a widening field of threats. This interception technology must be able to collect high volumes of traffic and handle data at unprecedented speeds with pinpoint security and reliability.

LI Strategies and Goals May Vary, but Requirements Remain Consistent

Today, some countries are using nationwide interception systems while others only dictate policies that providers need to follow. While regulations and requirements vary from country to country, organizations such as the European Telecommunications Standards Institute (ETSI) and the American National Standards Institute (ANSI) have developed technical parameters for LI to facilitate the work of LEAs. The main functions of any LI solution are to access Interception-Related Information (IRI) and Content of Communication (CC) from the telecommunications network and to deliver that information in a standardized format via the handover interface to one or more monitoring centers of law enforcement agencies.

High-performance switching capabilities, such as those offered by the Ixia Director™ family of solutions, should map to the following LI standards in order to be effective: they must be able to isolate suspicious voice, video, or data streams for an interception based on IP address, MAC address, or other parameters, and the device must be able to carry out filtering at wire speed. Requirements for supporting Lawful Interception activities include:

  • The ability to intercept all applicable communications of a certain target without gaps in coverage, including dropped packets, where missing characters may render an encrypted message unreadable or incomplete
  • Total visibility into network traffic at any point in the communication stream
  • Adequate processing speed to match network bandwidth
  • Undetectability, unobtrusiveness, and lack of performance degradation (a red flag to criminals and terrorists on alert for signs that they have been intercepted)
  • Real-time monitoring capabilities, because time is of the essence in preventing a crime or attack and in gathering evidence
  • The ability to provide intercepted information to the authorities in the agreed-upon handoff format
  • Load sharing and balancing of traffic that is handed to the LI system

From the perspective of the network operator or Service Provider, the primary obligations and requirements for developing and deploying a lawful interception solution include:

  • Cost-effectiveness
  • Minimal impact on network infrastructure
  • Compatibility and compliance
  • Support for future technologies
  • Reliability and security

Ixia’s Comprehensive Range of Solutions for Lawful Interception

One Ixia customer (the “Service Provider”) is a 4G/LTE pioneer that relies on Ixia solutions. Ixia serves the LI architecture by providing the access part of an LI solution in the form of Taps and switches. These contribute functional flexibility and can be configured as needed in many settings. Both the Ixia Director solution family and the iLink Agg™ solution can aggregate a group of links and pick out conversations with the same IP address pair from any of the links.
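
The conversation-aggregation idea – picking out traffic that shares an IP address pair, whichever direction it flows – can be sketched in a few lines of Python; the packet tuples below are synthetic stand-ins for whatever a capture layer actually delivers:

    # Conversation-grouping sketch: bucket packets from several links by
    # their bidirectional IP address pair (packet tuples are synthetic).
    from collections import defaultdict

    packets = [
        ("198.51.100.7", "203.0.113.9", 1420),  # (src, dst, bytes)
        ("203.0.113.9", "198.51.100.7", 60),
        ("198.51.100.7", "192.0.2.33", 900),
    ]

    conversations = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, size in packets:
        key = tuple(sorted((src, dst)))  # same key for both directions
        conversations[key]["packets"] += 1
        conversations[key]["bytes"] += size

    for (a, b), stats in sorted(conversations.items()):
        print(f"{a} <-> {b}: {stats['packets']} pkts, {stats['bytes']} bytes")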

Following are further examples of Ixia products that can form a vital element of a successful LI initiative:

Test access ports, or Taps, are devices used by carriers and others to meet the capability requirements of CALEA legislation. Ixia is a global leader in the range and capabilities of its Taps, which provide permanent, passive access points to the physical stream.

Ixia Taps reside in both carrier and enterprise infrastructures to perform network monitoring and to improve both network security and efficiency. These inline devices provide permanent, passive access points to the physical stream. Because Taps are passive, network data is not affected whether the Tap is powered or not. As part of an LI solution, Taps have proven more useful than Span ports. If Law Enforcement Agencies must reconfigure a switch to send the right conversations to the Span port every time an intercept is required, a risk arises of misconfiguring the switch and connections. Also, Span ports drop packets—another significant monitoring risk, particularly with encrypted traffic, where missing characters can render a message unreadable.

Director xStream™ and iLink Agg xStream™ enable deployment of an intelligent, flexible and efficient monitoring access platform for 10G networks. Director xStream’s unique TapFlow™ filtering technology enables LI to focus on select traffic of interest for each tool based on protocols, IP addresses, ports, and VLANs. The robust engineering of Director xStream and iLink Agg xStream enables a pool of 10G and 1G tools to be deployed across a large number of 10G network links, with remote, centralized control of exactly which traffic streams are directed to each tool. Ixia xStream solutions enable law enforcement entities to view more traffic with fewer monitoring tools as well as relieving oversubscribed 10G monitoring tools. In addition, law enforcement entities can share tools and data access among groups without contention and centralize data monitoring in a network operations center.

Director Pro™ and Director xStream Pro™ data monitoring switches offer law enforcement the ability to perform better pre-filtering via Deep Packet Inspection (DPI) and to home in on a specific phone number or credit card number. These products differ from other platforms in their ability to filter content or perform pattern matching in hardware at wire speed, potentially up to Layer 7. Such DPI can apply filters to a packet, or across multiple packets, at any location – regardless of packet length or how “deep” within the packet the data to be matched resides.

Thanks to Ixia for the article.

The Case for an All-In-One Network Monitoring Platform

There are many famous debates in history: dogs vs. cats, vanilla vs. chocolate, and Coke vs. Pepsi, just to name a few. In the IT world, one of the more common debates is “single platform vs. point solution”. That is, when it comes to the best way to monitor and manage a network, is it better to have a single management platform that can do multiple things, or is it better to have an array of tools, each specialized for a job?

The choice can be thought of as Multitaskers vs. Unitaskers: Swiss Army knives vs. dedicated instruments. As with most things in life, the answer can be complex, and will probably never be agreed upon by everyone – but that doesn’t mean we can’t explore the subject and form some opinions of our own.

For this debate, we need to look at the major considerations which go into this choice – the key areas that need to be addressed by any network monitoring and management solution – and then at how our two options fare in those spaces. For this post, I will focus on 3 main areas to try to draw some conclusions:

  • Initial Cost
  • Operations
  • Maintenance

1) Initial Cost

This may be one of the more difficult areas to really get a handle on, as costs can vary wildly from one vendor to another. Many of the “All-In-One” tools come with a steep entry price, but then the cost does not grow significantly after that. Other AIO tools offer flexible licensing options which allow you to purchase only the particular modules or features that you need, and then easily add on other features when you want them.

In contrast, the “Point-Solutions” may not come with a large price tag, but you need to purchase multiple tools in order to cover your needs. You can therefore take a piecemeal approach to purchasing, which can certainly spread your costs out – as long as you don’t leave critical gaps in your monitoring in the meantime. And, over time, the combined costs of many tools can become larger than that of a single system.

Newer options like pay-as-you-go SaaS models can greatly reduce or even eliminate the upfront costs for both AIO and Point Solutions. It is important to investigate whether the vendors you are looking at offer that type of service.

Bottom Line:

Budgets always matter. If your organization is large enough to absorb the initial cost of a larger umbrella NMS, this typically leads to a lower total cost in the long run, as long as you don’t also need to supplement the AIO solution with too many secondary solutions. SaaS models can be a great way to get going with either option, as they reduce the initial Cap-Ex spend necessary.

2) Operations

In some ways, the real heart of the AIO vs. PS question should come down to this: “which choice will help me solve issues more quickly?” Most monitoring solutions are used to respond when there is an issue with service delivery, and so the first goal of any NMS should be to help the IT team rapidly diagnose and repair problems.

Thought of in the context of the AIO vs. PS debate, you need to consider the workflow involved when an alarm or ticket is raised. With an AIO solution, an IT pro would immediately use that system both to see the alarm and to dive into the affected systems or devices to try to understand the root cause of the problem.

If the issue is systemic (meaning that multiple locations, users, or services are affected), then an AIO solution has the clear advantage of presenting a holistic view of the network, instead of just the small portion many Point Solutions would see. If the AIO application contains a root cause engine, this can be a huge time saver, as it may be able to immediately point the staff in the right direction.

On the other hand, if that AIO solution cannot see deeply enough into the individual systems to pinpoint the issues, then a point solution has an advantage due to its (typically) deeper understanding of the systems it monitors. It may be that only a solution provided directly by the systems manufacturer would have insight into the cause of the problem.

Bottom line

All-In-One solutions typically work best when problems affect more than one area of the network, whereas Point Solutions may be required if there are proprietary components that lack good support for standards-based monitoring like SNMP.

3) Maintenance

The last major consideration is one that I don’t think gets enough attention in this debate: the ongoing maintenance of the solutions themselves, i.e. “managing the management solutions”. All solutions require maintenance to keep them working optimally – upgrades, patches, server moves, etc. There are also the training requirements of any staff that need to use these systems. This can add up to significant time and energy “costs”.

This is where AIO solutions can really shine. Instead of having to maintain and upgrade many solutions, your staff can focus on maintaining a single system. The same thing goes for training – think about how hard it can be to really become an expert in anything, then multiply that by the training required to become proficient at X number of tools that your organization has purchased.

I have seen many places where the expertise in certain tools becomes specialized – and therefore becomes a single point of failure for the organization. If only “Bob” knows how to use that tool, then what happens when there is a problem and “Bob” is on vacation, or leaves the group?

Bottom Line:

Unless your organization can spend the time and money necessary to keep the entire staff fully trained on all of the critical network tools, then AIO solutions offer a real advantage over point solutions when it comes to maintainability of your IT management systems.

In the end, I suspect that this debate will never be completely settled. There are many valid reasons for organizations to choose one path over another when it comes to organizing their IT monitoring platforms.

In our view, we see some real advantages to the All-In-One solution approach, as long as the platform of choice does not have too many gaps in it which then need to be filled with additional point solutions.

Thanks to NMSaaS for the article.