Troubleshooting Cheat Sheet: Layers 1-3

Any time you encounter a user complaint, whether regarding slow Internet access, application errors, or other issues that impact productivity, it is important to begin with a thorough understanding of the user’s experience.

Not sure where to begin?  User complaints usually fall into three categories: slow network, inability to access network resources, and application-specific issues.

Based upon the complaint being presented you need to understand the symptoms and then isolate the issue to the correct layer of the Open Systems Interconnection (OSI) model.

The following Troubleshooting Cheat Sheet shows the questions to ask with a typical slow network complaint.

What to Ask / What it Means

  • What type of application is being used? Is it web-based? Is it commercial, or a homegrown application?
    What it means: Determines whether the person is accessing local or external resources.
  • How long does it take the user to copy a file from the desktop to the mapped network drive and back?
    What it means: Verifies they can send data across the network to a server, and allows you to evaluate the speed and response of the DNS server.
  • How long does it take to ping the server of interest?
    What it means: Validates they can ping the server and obtain the response time.
  • If the time is slow for a local server, how many hops are needed to reach the server?
    What it means: Confirms the number of hops taking place. Look at switch and server port connections, speed to the client, and any errors.
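The timing questions above are easy to script. Here is a minimal Python sketch that parses Linux-style ping and traceroute output to pull out round-trip times and a hop count; the sample output and the 10.0.0.x addresses are illustrative, and the regexes would need adjusting for other operating systems:

```python
import re

def parse_ping_times(ping_output: str) -> list:
    """Extract round-trip times in ms from Linux-style ping output."""
    return [float(t) for t in re.findall(r"time=([\d.]+)\s*ms", ping_output)]

def count_hops(traceroute_output: str) -> int:
    """Count hops reported by traceroute (each hop line starts with its number)."""
    return sum(1 for line in traceroute_output.splitlines()
               if re.match(r"\s*\d+\s", line))

# Sample output from a hypothetical run against a local file server.
ping_sample = (
    "64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=0.412 ms\n"
    "64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=0.398 ms\n"
    "64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=12.7 ms\n"
)
trace_sample = (
    " 1  10.0.0.1      0.3 ms\n"
    " 2  10.1.1.1      0.9 ms\n"
    " 3  10.2.2.5      1.4 ms\n"
)

times = parse_ping_times(ping_sample)
print(f"avg rtt {sum(times)/len(times):.2f} ms over {count_hops(trace_sample)} hops")
```

An occasional large outlier among otherwise sub-millisecond responses, as in the sample, is exactly the kind of pattern that points you down the stack to Layer 1 or 2.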

Quick OSI Layer Review

With these questions answered, working through the OSI model is a straightforward process. Understanding how each layer functions and delivers data will shape how you troubleshoot it.

Physical Layer

  • If it can blind or shock you, think Physical Layer
  • Defines physical characteristics of cables and connectors
  • Provides the interface between network and network devices
  • Describes the electrical, light, or radio data stream signaling

Data Link Layer

  • Converts signals into bits, which become the packet data that everyone wants
  • Performs error detection and correction of the data streams
  • Manages flow and link control between the physical signaling and network
  • Constructs and synchronizes data frames

Network Layer

  • Controls logical addressing, routing, and packet generation
  • Carries out congestion control and error handling
  • Performs route monitoring and message forwarding

Assessing the Physical Layer

Generally speaking, Physical Layer symptoms can be classified into two groups: outage issues and performance issues. In most cases, investigating outage issues is the easiest place to begin, as it’s a matter of confirming that a link light is out or that a box is not functioning. Likewise, validating equipment failure is a matter of replacing the cable or switch and confirming everything works.

Physical Layer issues are often overlooked: people ping or comb through NetFlow looking for the problem, when in reality it’s a Layer 1 issue caused by a cable, jack, or connector.

The next step in investigating Physical Layer issues is delving into performance problems. This means not only dealing with more complex issues, but also having the correct tools to diagnose degraded performance. Essential tools in your toolbox for testing physical issues are a cable tester for cabling problems, and a network analyzer or SNMP poller for other problems.

Assessing Physical Performance Errors

In diagnosing performance issues with a network analyzer, you’ll notice patterns common to these errors, which usually indicate what’s causing the Physical Layer problem. These can be divided into intelligent and non-intelligent errors.

Intelligent Errors: An intelligent host is smashing into your network signal and corrupting the data.

Example: Overloaded WiFi network or a busy channel.

Non-Intelligent Errors: An outside entity causes noise that interferes with the signal or flow of data across the network.

Example: A microwave interfering with a WiFi signal.

Climbing Further up the Stack

Confirming performance problems, taking a systematic approach to troubleshooting, and understanding how communication occurs across the layers of the OSI model are key to slashing troubleshooting times and improving resolution accuracy.

Load Balancing Your Security Solution for Fun and Profit!

Maximizing the Value and Resiliency of Your Deployed Enterprise Security Solution with Intelligent Load Balancing

Correctly implementing your security solution in the presence of complex, high-volume user traffic has always been a difficult challenge for network architects. The data in transit on your network originates from many places and fluctuates with respect to data rates, complexity, and the occurrence of malicious events. Internal users create vastly different network traffic than external users using your publicly available resources. Synthetic traffic from bots has overtaken real users as the most prevalent source of network traffic on the Internet. How do you maximize your investment in a security solution while gaining the most value from the deployed solution? The answer is intelligent deployment through realistic preparation.

Let’s say that you have more than one point of ingress and egress into your network, and predicting traffic loads is very difficult (since your employees and customers are global). Do you simply throw money at the problem by purchasing multiple instances of expensive network security infrastructure that could sit idle at times and get saturated at others? A massive influx of user traffic could overwhelm your security solution in one rack, causing security policies to go unenforced, while the solution at the other point of ingress has resources to spare.

High-speed inline security devices are not just expensive: the more features you enable on them, the less network traffic they can successfully parse. If you start turning on features like sandboxing (which spawns virtual machines to deeply analyze potential new security events), you can really feel the pain.

Using a network packet broker with load balancing capability to combine multiple inline Next Generation Firewalls (NGFWs) into a single logical solution allows you to maximize your security investment. To test the effectiveness of this strategy, I ran four scenarios using an advanced-feature packet broker and load testing tools.
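Packet broker load balancers typically hash each flow's 5-tuple so that every packet of a given flow is pinned to the same firewall, which keeps stateful inspection intact while spreading load. A minimal sketch of that idea (the device names and tuple fields are illustrative, not any vendor's actual algorithm):

```python
import hashlib

NGFW_POOL = ["ngfw-a", "ngfw-b"]  # the two inline firewalls behind the broker

def pick_ngfw(src_ip, dst_ip, proto, src_port, dst_port):
    """Hash the flow's 5-tuple so every packet of a flow hits the same NGFW."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return NGFW_POOL[digest % len(NGFW_POOL)]

# A flow is always pinned to one device, so session state never splits:
flow = ("10.0.0.1", "192.0.2.9", "tcp", 49152, 443)
assert pick_ngfw(*flow) == pick_ngfw(*flow)
print(pick_ngfw(*flow))
```

Because the hash is uniform, a surge of new flows distributes roughly evenly across the pool, which is the behavior the scenarios below put to the test.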

TESTING PLATFORM

Using two high-end NGFWs, I enabled nearly every feature (including scanning traffic for attacks, identifying user applications, and classifying network security risk based on the geolocation of the client) and load balanced the two devices using an advanced-feature packet broker. Then, using our load testing tools, I created all of my real users and a deluge of different attack scenarios. Below are the results of the four testing scenarios.

Scenario One: Traffic Spikes

Your 10GbE NGFW will experience inconsistent amounts of network traffic. It is crucial to be able to enforce security policies effectively during such events. In the first test I created a baseline of 8Gbps of real user traffic, then introduced a large influx of traffic that pushed the overall volume to 14Gbps. The packet broker load balancer ensured that the traffic was split evenly between the two NGFWs, and all of my security policies were enforced.


Figure 1: Network traffic spike

Scenario Two: Endurance Testing

Handling an isolated event is interesting, but maintaining security effectiveness over long periods of time is crucial for a deployed security solution. In the next scenario, I ran all of the applications I anticipated on my network at 11Gbps for 60 hours. The packet broker gave each of my NGFWs just over 5Gbps of traffic, allowing all of my policies to be enforced. Of the 625 million application transactions attempted throughout the duration of the test, users enjoyed a 99.979% success rate.


Figure 2: Applications executed during 60 hour endurance test

Scenario Three: Attack Traffic

Where the rubber meets the road for a security solution is during an attack. Security solutions are insurance policies against network failure, data exfiltration, misuse of your resources, and loss of reputation. I created a 10Gbps baseline of the user traffic described in Figure 2 and added a curveball by launching 7,261 remote exploits from one zone to another. Had these events not been load balanced by the packet broker, a single NGFW might have borne the entire brunt of the attack. It could have been overwhelmed and failed to enforce policies, or been under such duress mitigating the attacks that legitimate users became collateral damage of the policy enforcement. The deployed solution performed excellently, mitigating all but 152 of my attacks.

Concerning the 152 missed attacks: the load testing tool’s library contains a comprehensive set of undisclosed exploits. That said, as with the 99.979% application success rate in the endurance test, nothing is infallible. If my test had shown 100% success, I wouldn’t believe it, and neither should you.


Figure 3: Attack success rate

Scenario Four: The Kitchen Sink

Life would indeed be rosy if the totality of a content-aware security solution was simply making decisions between legitimate users and known exploits. For my final test I added another wrinkle: the solution also had to deal with a large volume of fuzzing on top of my existing deluge of real users and attacks. Fuzzing is sending intentionally flawed network traffic through a device or at an endpoint in hopes of uncovering a bug that could lead to a successful exploitation. Fuzzed traffic can range from incorrectly advertised packet lengths to erroneously crafted application transactions; my test included those two and everything in between. The goal of this test was stability. I achieved this by mixing 400Mbps of pure chaos from the load testing tool’s fuzzing engine with Scenario Three’s 10Gbps of real user traffic and exploits. I wanted to make certain that my load-balanced pair of NGFWs was not going to topple over when the unexpected took place.

The results were also exceptionally good. Of the 804 million application transactions my users attempted, only 4.5 million went awry, leaving me with a 99.436% success rate. This extra measure of maliciousness changed the user experience by increasing failures by only about half a percent. Nothing crashed and burned.


Figure 4: Application Success rates during the “Kitchen Sink” test

Conclusion

All four of the above scenarios illustrate how you can enhance the effectiveness of a security solution while maximizing your budget. However, we are only scratching the surface. What if you needed your security solution to be deployed in a High Availability environment? What if the traffic your network services expands? Setting up the packet broker to operate in HA, or adding additional inline security solutions to the load-balanced pool, is probably the most effective and affordable way of addressing these issues.

Let us know if you are interested in seeing a live demonstration of a packet broker load balancing attacks from a security testing tool over multiple inline security solutions. We would be happy to show you how it is done.

Additional Resources:

Network Packet Brokers

CyPerf

Year-End Network Monitoring Assessment

Planning for the Future

As we approach the New Year, many organizations’ data centers and network configurations are in lockdown mode. Whether this is due to assuming a defensive posture against the onslaught of holiday ecommerce traffic, or an accommodation to vacationing staff, the situation provides network managers an opportunity to perform a year-end network monitoring assessment.

Establish Future Goals, Identify Current Weaknesses and Make Sure Core Tasks and Goals Are Achieved

Q. How many locations will you need to monitor in the New Year?

If there are new server clusters or even new data centers in the works, be sure to plan accordingly, and ensure that your network monitoring tools will have visibility into those areas.  Network Taps can be used to incorporate more points of visibility for your existing monitoring tools within your growing network. Advanced appliances such as Network Packet Brokers (NPBs) can perform more sophisticated switching and filtering to optimize visibility within that network sprawl.

Q. What traffic will you be responsible for monitoring?

If you are providing network support, you need to understand immediately the nature, volume, and security of the traffic flowing over your network. Is your organization planning to implement new applications or services on the network? Even the introduction or expansion of virtualization will require a monitoring plan that incorporates Virtual Taps. Additionally, using advanced features on a packet broker, like load balancing, can extend the useful life of existing tools by sharing current traffic across a pool of devices.

Q. What new threats will the network face, and what preventative measures will you add?

The growing phenomenon of advanced persistent threats (APTs) and directed attacks against network vulnerabilities demands a stronger response from security personnel. Up to 75 percent of devices within an organization’s network can contain a known security vulnerability. Many organizations deploy a defense-in-depth strategy with overlapping security tools to provide more robust security coverage. Be sure to schedule software updates for all of your network security tools, and make sure those security tools have total visibility of the traffic they are monitoring.

Q. What is your replacement plan for older equipment?

Take inventory of network equipment that has reached end-of-life, end-of-sale, or end-of-support. Budgeting and planning for the obsolescence or re-tasking of these devices should be part of your plan for the coming year.

Q. What are your redundancy and failover plans?

One option for extending the useful life of your legacy monitoring tools is to utilize them as redundant tools in case of failover. Utilizing a bypass switch or high-availability modes in NPBs can put these tools to work when a primary device is put in maintenance mode, taken offline, or experiences a hardware failure. Assess each piece of older equipment on that basis: discard it entirely, or re-purpose it as a hot standby.

Q. Have you included hardware/software maintenance in your annual budget?

Most hardware vendors offer annual maintenance and service plans for their devices. Renewing and maintaining these plans is critical to ensuring that you have access to the latest software updates. Additionally, should any of your devices experience hardware failure, advance replacement plans can get replacement equipment into your network as soon as possible.

Managing Your Application Performance


Are you planning to implement new IT projects such as data center consolidation, server virtualization, cloud computing, or perhaps adding new applications on the network?  Do you understand exactly how these upgrades will affect the existing applications and the user experience?  What strategy are you going to use to ensure application performance is not compromised?  It is imperative that you understand how each of the components can affect overall application performance.

A Strategy You Can Use for Application Performance Management (APM)

Most companies have some sort of visibility into their network systems via the network elements, but a healthy infrastructure does not necessarily mean that your applications are running efficiently, because infrastructure metrics do nothing to monitor the actual transactions.  There are many different components to APM, so what capabilities do you need to accurately monitor, measure, and troubleshoot?

6 Components of Application Monitoring

Monitoring the End-user Experience is quite different from Infrastructure Monitoring: we actually measure the end-user experience so we see what users see.

  • Measures response time and availability from the end user’s perspective
  • Aligns performance management with the needs of business users

 

Application Mapping is a separate dimension of APM: it takes the different bits of information about applications and maps them together so you can trace an application and its dependencies/relationships across the network.

  • Discover application components and their relationships
  • Fundamental for managing an application

 

Transaction Following/Tracing is a critical component: being able to trace transactions on the network through the multiple servers that all communicate to make up an application.

  •  Follow transactions through application tiers and components
  • Trace performance of each transaction at the code level
  • Holistic view of transactions from end-to-end

 

Deep Application Component Monitoring – looking deep inside the system that runs the code allows you to get to the fine detail of why there may be application issues.

  • Collect fine-grained metrics from application internals, including code-level performance
  • Produce detail needed for real-time analytics and true root cause analysis
  • Complemented by broad infrastructure performance information

 

Network-Aware APM is another form of data collection, because the network is where applications travel and the network team usually gets called to solve problems first.

  • The network is the backplane of modern applications
  • Understand impact of network performance on application behaviour
  • Provides tracing and visibility for ALL applications

 

Analytics – As you collect data across hundreds of applications and perhaps millions of transactions, there is a tremendous amount of information.  You need to collect and store that data efficiently to be able to trend, analyze, and solve problems with it.

  • Store and index large amounts of performance data
  • Automatically extract anomalous behaviour, correlate information, identify the root cause of problems, and predict events and performance trends
  • Reveals valuable information to alert staff and resolve problems faster

 

With these 6 elements covered you will have complete visibility and will be able to effectively manage application performance and minimize user impact going forward.

Learn More Here

How to Dodge 6 Big Network Headaches

The proper network management tools allow you to follow these six simple tips. This will help you stay ahead of network problems, and if a problem does occur you will have the data to analyze it.

Troubleshoot sporadic issues with the right equipment

The most irksome issues are often sporadic and require IT teams to wait for the problem to reappear or spend hours recreating the issue. With retrospective network analysis (RNA) solutions, it’s possible to eliminate the need to recreate issues. Performance management solutions with RNA have the capacity to store terabytes of data that allow teams to immediately rewind, review, and resolve intermittent problems.

Baseline network and application performance

It’s been said that you can’t know where you’re going if you don’t know where you’ve been. The same holds true for performance management and capacity planning. Unless you have an idea of what’s acceptable application and network behavior today, it’s difficult to gauge what’s acceptable in the future. Establishing benchmarks and understanding long-term network utilization is key to ensuring effective infrastructure changes.
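A baseline is only useful if you compare against it. One common, simple approach is to flag samples that stray several standard deviations from a trailing window of recent history; the sketch below does exactly that (the utilization values, window size, and 3-sigma threshold are all illustrative choices, not a prescription):

```python
from statistics import mean, stdev

def find_anomalies(samples, window=10, n_sigma=3.0):
    """Flag samples that deviate more than n_sigma from the trailing-window baseline."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Skip flat baselines (sigma == 0) to avoid dividing attention by noise-free data.
        if sigma and abs(samples[i] - mu) > n_sigma * sigma:
            anomalies.append(i)
    return anomalies

# Link utilization (%) polled every 5 minutes; one obvious spike at the end.
utilization = [22, 24, 23, 25, 22, 24, 23, 22, 25, 24, 23, 24, 91]
print(find_anomalies(utilization))  # the spike at index 12 is flagged
```

Real monitoring platforms do this with far more sophistication (seasonality, percentiles), but the principle is the same: you cannot call a reading abnormal until you have quantified normal.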

Clarify whether it’s a network or application issue

Users often blame the network when operations are running slow on their computer. To quickly pinpoint network issues, it’s critical to analyze and isolate problems pertaining to both network and application performance.

Leverage critical information already available to you with NetFlow

Chances are your network is collecting NetFlow data. This information can help you easily track active applications on the network. Aggregate this data into your analyzer so that you can get real-time statistics on application activity and drill down to explore and resolve any problems.
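Once flow records are decoded, the aggregation itself is a simple roll-up. A toy sketch that totals bytes per application from already-parsed records (the record fields, addresses, and port-to-application mapping are assumptions for illustration, not a real NetFlow parser):

```python
from collections import defaultdict

# Already-decoded flow records: (src, dst, dst_port, bytes)
flows = [
    ("10.0.0.7",  "192.0.2.10", 443, 120_000),
    ("10.0.0.9",  "192.0.2.10", 443,  80_000),
    ("10.0.0.7",  "192.0.2.53",  53,   4_000),
    ("10.0.0.12", "192.0.2.25",  25,  15_000),
]

PORT_APPS = {443: "https", 53: "dns", 25: "smtp"}  # illustrative mapping

def bytes_by_app(records):
    """Roll flow byte counts up to named applications."""
    totals = defaultdict(int)
    for _src, _dst, port, nbytes in records:
        totals[PORT_APPS.get(port, f"port-{port}")] += nbytes
    return dict(totals)

print(bytes_by_app(flows))
```

A flow analyzer performs essentially this roll-up continuously, which is what lets you drill from "the WAN is busy" down to "HTTPS to this server is busy."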

Run pre-deployment assessments for smooth rollouts

Network teams often deploy an application enterprise-wide before knowing its impact on network performance. Without proper testing of the application or assessing the network’s ability to handle the application, issues can result in the middle of deployment or configuration. Always ensure you run a site survey and application performance testing before rolling out a new application – this allows you to anticipate how the network will respond and to resolve issues before they occur.

Manage proactively by fully understanding network traffic patterns

Administrators frequently only apply analysis tools after the network is already slow or down. Rather than waiting for problems, you should continuously track performance trends and patterns that may be emerging. Active management allows you to solve an emerging issue before it can impact the end user.

Infosim’s SDN/NFV-enabled Security Architecture for Enterprise Networks

Infosim® together with University of Würzburg, TU Munich & genua mbH

Key Learning Objectives 

  • The SarDiNe Research Project
  • Fine-grained Access Control in SDN Networks
  • Resiliency and Offloading for a Firewall Virtual Network Function
  • SDN/NFV & Cloud managed by StableNet®

The ever-increasing number and complexity of cyber threats is continuously causing new challenges for enterprises to keep their network secure. One way to tackle these challenges is offered by SDN/NFV-enabled approaches.

Next stop Würzburg: join the SarDiNe research group as they talk about the prospects of a seamless integration of SDN/NFV-based security operations into existing networks. The SarDiNe research team just came back from SIGCOMM 2017 in L.A., where they presented a demo of SDN/NFV-enabled security for enterprise networks. For this webinar, Infosim got four of them to talk about the joint research project and the demo they presented.

Watch the webinar below

Try your free 30 Day StableNet trial here today

 Thanks to Infosim for this article and webinar

Ixia Special Edition Network Visibility For Dummies

Advanced cyber threats, cloud computing, and exploding traffic volume pose significant challenges if you are responsible for your organization’s network security and performance management. The concept of ‘network visibility’ is frequently introduced as the key to improvement. But what exactly is network visibility, and how does it help an organization keep its defenses strong and optimize performance? This e-book, presented in the straightforward style of the For Dummies series, describes the concept from the ground up. Download this guide to learn how to use a visibility foundation to access all the relevant traffic moving through your organization and deliver the information you need to protect your network and maximize customer experience.

Download your free copy of Ixia’s Special Edition of Network Visibility for Dummies E-Book below

Thanks to Ixia for this article and content.

NMSaaS’ The Importance of Network Performance Monitoring

In 2017, any drop in your network’s performance will affect your enterprise’s overall productivity. Ensuring the optimal performance of your network, in turn, requires sophisticated network performance monitoring. Through exceptional reporting and optimization, a high-quality network performance monitoring service can ensure that every aspect of your network is always performing at its best. Here we take a close look at how network performance monitoring works, and the key benefits it provides. 

​How it works

Getting outside the system

In order to engage in effective network performance monitoring, the service that is monitoring the network must reside outside the network itself.

This is obvious in the case of monitoring a network for system failure: if the monitoring software resides within the network that fails, then it cannot report the failure.

However, keeping the monitoring service outside the network is just as important when it comes to monitoring performance. Otherwise, the service will be monitoring its own performance as yet another part of the system, which will compromise the accuracy of its performance data.

Monitoring key metrics

A high-quality network performance monitoring service monitors all of the devices that make up your company’s network, as well as all of the applications that depend on them.

One of the key metrics the monitoring service will track is your network’s response time. In the context of a computer network, response time is a measure of the time it takes for various components and applications within your company’s network to respond. For example, suppose you have an Enterprise Resource Planning (ERP) system installed on your network. Further, suppose an employee clicks on a tab in the ERP’s main dashboard and experiences a long delay. This indicates a poor response time, which is often due to a network with sub-optimal performance.
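Measuring response time from the user's side is just wall-clock timing around an operation. A minimal sketch using a decorator (the 200 ms threshold and the simulated dashboard call are illustrative assumptions, not part of any particular monitoring product):

```python
import time
from functools import wraps

SLOW_MS = 200  # illustrative alert threshold

def timed(fn):
    """Record how long each call takes and warn when it exceeds the threshold."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_ms = (time.perf_counter() - start) * 1000
        if wrapper.last_ms > SLOW_MS:
            print(f"SLOW: {fn.__name__} took {wrapper.last_ms:.0f} ms")
        return result
    wrapper.last_ms = None
    return wrapper

@timed
def load_dashboard_tab():
    time.sleep(0.05)  # stand-in for an ERP dashboard query
    return "ok"

load_dashboard_tab()
print(f"last response time: {load_dashboard_tab.last_ms:.1f} ms")
```

An external monitoring service does the same thing at scale: it times the transaction as the user would experience it, rather than trusting the server's own view of its health.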

Alerting and reporting

Depending on your company’s preferences, the network monitoring service will generally alert the system administrator if there is a significant drop in a key measure of network performance. Most high-quality monitoring services also provide network optimization, real-time performance data, and detailed periodic reports calling out any weak points the administrator needs to address to preserve and enhance your network’s overall health and performance.

The key benefits

A competitive edge

Suppose that you and your top competitor both have precisely the same enterprise network hardware, software, configuration and bandwidth. However, your network has sophisticated performance monitoring in place to keep it in optimal health, and your competitor’s network does not.

Consequently, your staff is sailing through the same network-dependent tasks that your competitor’s staff is slogging through. In this case, you have a significant edge over your top competitor in terms of productivity. On the other hand, if the roles are reversed, and your competitor is the one with high-quality network performance monitoring in place, then they are the one with the edge. Either way, it’s clear that network performance monitoring is a significant contributor to enterprise productivity.

Time and cost savings

Even a few seconds of delay in performing common tasks and opening up key pages within a networked enterprise application can eventually result in hundreds of hours of lost time each year. Performance monitoring can save you that time. It can also help you optimize your server and other network components to help prolong their expected lifespans. On the other hand, hardware with consistent performance problems is under strain and may fail unexpectedly.

Consistency

Having a network with consistent optimal performance significantly improves productivity and morale. On the other hand, a network with inconsistent performance will lead to unpredictable work conditions and inconsistent output. For example, suppose your staff is working on an urgent project with a tight deadline. If the network’s performance is inconsistent, they will have no way of predicting whether they can reasonably meet the deadline or not. For this reason, a network with inconsistent performance can be as problematic as a network with frequent downtime.

Effective troubleshooting

There are a large number of subtle factors and gradually unfolding events that can degrade the performance of your network and hurt your company’s productivity without your being directly aware of it. One of the most common is a gradual increase in network usage over time without any improvements to the network itself to accommodate the increase. For example, a company might expand its staff without adding servers or increasing network bandwidth. As the new team members ramp up their usage, network performance begins to decline.

In other cases, an enterprise application connected to the network may suffer from sub-optimal performance that staff believe is due to the network, when in fact it’s an enterprise software issue. In all of these cases, a high-quality network performance monitoring service can summon the data to quickly troubleshoot the situation. If usage has gradually increased without any network improvements to support the increase, the monitoring service’s data can identify the need for improvements. If an enterprise application is exhibiting long response times, the monitoring service can identify whether the source of the problem resides in the network or the enterprise software itself. This capacity to quickly and effectively troubleshoot network-related performance problems can save time and heartache.

Moreover, performance monitoring can also serve as a preventative measure by recognizing and troubleshooting performance problems before they become system failures. For example, by identifying a network component that is not performing optimally and that may soon fail, a high-quality performance monitoring service can help prevent the system failure from occurring. 

Getting started with network performance monitoring

Network performance monitoring keeps a close eye on key indicators of your network’s health. Additionally, a high-quality external monitoring service oversees your network from the outside to provide reliable, real-time performance-related reporting, alerting and optimization.

There are several key benefits of network performance monitoring for enterprises: the monitoring service helps your network-dependent applications and tasks perform at their optimal speed, giving you an edge on the competition; it reduces network response times and helps prolong hardware lifespans, saving you time and money; it provides performance consistency, giving you a network you can count on under time-sensitive deadlines; and it greatly accelerates the network troubleshooting process, giving you the freedom to focus on your business and not your network. Get started with NMSaaS here.

Thanks to NMSaaS and author John Olson for this article

3 Key Differences Between NetFlow and Packet Capture Performance Monitoring

The increasing density, complexity and expanse of modern networking environments have fueled the ongoing debate around which network analysis and monitoring tools serve the needs of the modern engineer best – placing Packet Capture and NetFlow Analysis at center-stage of the conversation. Granted, both can be extremely valuable tools in ongoing efforts to maintain and optimize complex environments, but as an engineer, I tend to focus on solutions that give me the insights I need without too much cost on my resources, while complementing my team’s ability to maintain and optimize the environments we support.

So with this in mind, let’s take a look at how NetFlow, in the context of the highly-dense networks we find today, delivers three key requirements network teams rely on for reliable end-to-end performance monitoring of their environments.

A NetFlow deployment won’t drain your resources

Packet Capture, however rich in network metrics, requires sniffing devices and agents throughout the network, which invariably require some level of maintenance during their lifespan. In addition, the amount of space required to store and analyze packet data makes it an inefficient and inelegant method of monitoring or forensic analysis. Combine this with the levels of complexity networks can reach today, and the overall cost and maintenance associated with packet sniffers can quickly become unfeasible. NetFlow, by contrast, enjoys wide vendor support across virtually the entire networking landscape, making almost every switch, router, or firewall a NetFlow-ready device. Devices’ built-in readiness to capture and export data-rich metrics makes NetFlow easy for engineers to deploy and utilize. Also, thanks to its popularity, NetFlow analyzers with varying feature sets are available for network operations center (NOC) teams to take full advantage of data-rich packet flows.
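The storage gap is easy to quantify with back-of-the-envelope numbers. A sketch comparing a day of full packet capture against a day of flow records on a sustained 1Gbps link (the flow rate, record size, and full-rate utilization are rough assumptions; plug in your own figures):

```python
# Rough, illustrative figures -- adjust for your own environment.
LINK_BPS = 1_000_000_000      # sustained 1 Gbps of traffic
FLOWS_PER_SEC = 5_000         # assumed average rate of new flows
FLOW_RECORD_BYTES = 50        # rough size of one exported flow record
SECONDS_PER_DAY = 86_400

pcap_bytes = LINK_BPS / 8 * SECONDS_PER_DAY                       # every payload byte
netflow_bytes = FLOWS_PER_SEC * FLOW_RECORD_BYTES * SECONDS_PER_DAY  # summaries only

print(f"full capture : {pcap_bytes / 1e12:.1f} TB/day")
print(f"netflow      : {netflow_bytes / 1e9:.1f} GB/day")
print(f"ratio        : {pcap_bytes / netflow_bytes:,.0f}x")
```

Under these assumptions the difference is roughly three orders of magnitude per day, which is why retaining weeks of flow history is routine while retaining weeks of raw packets rarely is.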

Striking the balance between detail and context

Considering how network-dependent and widespread applications have become in recent years, NetFlow’s ability to provide WAN-wide metrics in near real time makes it a capable troubleshooting companion for engineers. Version 9 of NetFlow extends the wealth of information it collects via a template-based collection scheme, striking a balance between per-flow detail and high-level insight without placing too much demand on networking hardware – something that cannot be said for Packet Capture. Packet Capture tools, for their part, excel at Deep Packet Inspection (DPI), which can identify aspects of the traffic that were historically invisible to NetFlow analyzers. But NetFlow’s constant evolution alongside the networking landscape has seen it used as a complement to DPI solutions such as Cisco’s NBAR, whose vendors have recognized that Flexible NetFlow can expose details down at the packet level.
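The template idea at the heart of v9 is simple: a template is an ordered list of (field ID, byte length) pairs, and each data record is decoded against it. The sketch below assumes a made-up three-field template; the field IDs (8 = IPv4 source, 12 = IPv4 destination, 1 = byte count) are taken from the v9 specification, but everything else is illustrative:

```python
# Sketch of NetFlow v9's template-based decoding. The template and raw
# record are invented; field IDs 8, 12, and 1 come from the v9 spec.
TEMPLATE = [(8, 4), (12, 4), (1, 4)]  # src addr, dst addr, byte count
FIELD_NAMES = {8: "src_addr", 12: "dst_addr", 1: "in_bytes"}

def decode_record(data: bytes, template) -> dict:
    """Walk the template, slicing fixed-length fields out of the record."""
    record, offset = {}, 0
    for field_id, length in template:
        value = int.from_bytes(data[offset:offset + length], "big")
        record[FIELD_NAMES.get(field_id, field_id)] = value
        offset += length
    return record

raw = ((0xC0000201).to_bytes(4, "big")      # 192.0.2.1
       + (0xC0000202).to_bytes(4, "big")    # 192.0.2.2
       + (1500).to_bytes(4, "big"))
rec = decode_record(raw, TEMPLATE)
print(rec["in_bytes"])  # 1500
```

Because the exporter ships the template alongside the data, collectors can decode new field combinations without firmware or protocol changes, which is exactly what lets v9 grow with the hardware.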

NetFlow places your environment in greater context

Context is the chief area where NetFlow beats out Packet Capture: it lets engineers quickly locate the root causes of performance problems by providing a situational view of the environment, its data flows, bottleneck-prone segments, application behavior, device sessions, and so on. Packet sniffing can supply much of this information too, but it does not give engineers the broader context around what it presents, which hampers IT teams trying to detect performance anomalies that could be ascribed to any number of factors, such as an untimely system-wide application or operating-system update, or a backup application pulling large volumes of data across the WAN during operational hours.
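That situational view often starts with something as simple as aggregating flows per source. As a minimal sketch (the flow records and addresses below are hypothetical; in practice they come from your collector), the backup-job scenario above surfaces immediately as a top talker:

```python
from collections import Counter

# Hypothetical (src, dst, bytes) flow summaries for one interval.
flows = [
    ("10.0.0.5", "10.0.1.20", 900_000_000),  # backup job pulling data
    ("10.0.0.7", "10.0.1.20", 40_000),
    ("10.0.0.5", "10.0.2.9",  600_000_000),
    ("10.0.0.8", "10.0.1.20", 25_000),
]

def top_talkers(flows, n=3):
    """Aggregate bytes per source address to surface bandwidth hogs."""
    totals = Counter()
    for src, _dst, nbytes in flows:
        totals[src] += nbytes
    return totals.most_common(n)

print(top_talkers(flows)[0])  # ('10.0.0.5', 1500000000)
```

A packet trace of any one link would show the same bytes, but only the WAN-wide flow view makes it obvious that a single host accounts for nearly all of the traffic.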

So does NetFlow make Packet Capture obsolete?

The short answer is no. In fact, Packet Capture, when properly coupled with NetFlow, can make for a very elegant solution: for example, using NetFlow to identify an attack profile or illicit traffic, then analyzing the corresponding raw packets. NetFlow, however, strikes that balance between detail and context, giving NOCs intelligent insights that reveal the broader factors influencing a network’s ability to perform. Gartner’s recommendation of roughly 80% NetFlow monitoring coupled with 20% Packet Capture as the ideal performance-monitoring mix attests to NetFlow’s growing prominence as the monitoring tool of choice. And as NetFlow and its relatives, such as sFlow and IPFIX, continue to expand the context they provide network engineers, that margin is set to shift further in its favor over time.
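That 80/20 workflow can be sketched in a few lines: watch flow totals continuously, and only when a source crosses a threshold, emit a capture filter for targeted packet analysis. The threshold, totals, and filter strings below are illustrative (the filters use the standard tcpdump/Wireshark capture-filter syntax):

```python
# Sketch of the flow-first, capture-second workflow. Threshold and
# flow totals are invented for illustration.
THRESHOLD = 100_000_000  # bytes per interval considered anomalous

def capture_filters(flow_totals: dict) -> list:
    """Return capture-filter strings for hosts worth a full packet trace."""
    return [f"host {src}"
            for src, nbytes in flow_totals.items()
            if nbytes > THRESHOLD]

totals = {"10.0.0.5": 1_500_000_000, "10.0.0.7": 40_000}
print(capture_filters(totals))  # ['host 10.0.0.5']
```

The expensive tool (full capture) is only pointed at the small slice of traffic the cheap tool (flow monitoring) has already flagged, which is the essence of the balance argued for above.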

Thank you to NetFlow Auditor for this post.

Infosim® Product Called StableNet® Chosen as Athenahealth Consolidates Network Performance Monitoring

Infosim®, a leader in network performance management, today announced that it has been selected as the supplier of choice to consolidate the IT infrastructure performance monitoring capabilities at Athenahealth.

Following an extensive evaluation of the performance management market, the organization identified StableNet® as the only solution capable of offering a single comprehensive view of the performance and capacity of its IT infrastructure in one unified product, with the highest levels of performance, scalability, and interoperability.

When introducing a performance monitoring solution, it is essential that it can be fully integrated with the existing infrastructure. Interoperability with existing monitoring systems was essential to the organization’s project, and will allow users to create the alerts and reports that they need to maintain current operations and plan for future capacity needs proactively.

Athenahealth’s network engineering team was looking for a tool that could monitor the health and performance of the company’s multivendor network and replace the majority of the point management tools being used. After narrowing the search to Infosim® StableNet®, the team conducted a successful proof of concept and elected to adopt the solution. StableNet® will replace more than a half-dozen point management tools and streamline network management practices.

Supporting Quotes:

Shamus McGillicuddy, Enterprise Management Associates Senior Analyst comments:
“Athenahealth, a provider of cloud-based healthcare software, will replace more than a half-dozen stand-alone network management tools with Infosim StableNet®. StableNet®, an enterprise network availability and performance management system, will help unify operations by providing customizable dashboards and network transparency to all key stakeholders in Athenahealth’s IT organization.”

Brian Lubelczyk, senior manager of data networks at Athenahealth comments:
“I discovered them at Cisco Live two years ago, and I was really impressed overall with how well they were able to hit everything we wanted to do on this project, from monitoring to capacity planning and transparency of the network. The more we used the product, the more we liked it. Even for simple bandwidth trending we were using three or four different tools.”

Link to full case study

ABOUT STABLENET®

StableNet® is available in two versions: Telco (for Telecom Operators and ISPs) and Enterprise (for IT and Managed Service Providers).

StableNet® Telco is a comprehensive unified management solution; offerings include: Quad-play, Mobile, High-speed Internet, VoIP (IPT, IPCC), IPTV across Carrier Ethernet, Metro Ethernet, MPLS, L2/L3 VPNs, Multi Customer VRFs, Cloud and FTTx environments. IPv4 and IPv6 are fully supported.

StableNet® Enterprise is an advanced, unified and scalable network management solution for true End-to-End management of medium to large scale mission-critical IT supported networks with enriched dashboards and detailed service-views focused on both Network & Application services.

Infosim®, the Infosim® logo and StableNet® are registered trademarks of Infosim® GmbH & Co. KG. All other trademarks or registered trademarks seen belong to their respective companies and are hereby acknowledged.

Thank you to Infosim for this post