Load Balancing Your Security Solution for Fun and Profit!

Maximizing the Value and Resiliency of Your Deployed Enterprise Security Solution with Intelligent Load Balancing

Correctly implementing your security solution in the presence of complex, high-volume user traffic has always been a difficult challenge for network architects. The data in transit on your network originates from many places and fluctuates in data rate, complexity, and the occurrence of malicious events. Internal users create vastly different network traffic than external users consuming your publicly available resources, and synthetic traffic from bots has overtaken real users as the most prevalent source of traffic on the internet. So how do you maximize the value of your investment in a deployed security solution? The answer is intelligent deployment through realistic preparation.

Let's say that you have more than one point of ingress and egress into your network, and predicting traffic loads is very difficult (since your employees and customers are global). Do you simply throw money at the problem by purchasing multiple instances of expensive network security infrastructure that could sit idle at times and become saturated at others? A massive influx of user traffic could overwhelm the security solution in one rack, causing security policies to go unenforced, while the solution at the other point of ingress has resources to spare.

High-speed inline security devices are not just expensive; the more features you enable on them, the less network traffic they can successfully parse. If you start turning on features like sandboxing (which spawns virtual machines to deeply analyze potential new security events), you can really feel the pain.

Using a network packet broker with load balancing capability to combine multiple inline Next Generation Firewalls (NGFWs) into a single logical solution allows you to maximize your security investment. To see how effective this strategy is, we ran four scenarios using an advanced-feature packet broker and load testing tools.

TESTING PLATFORM

Using two high-end NGFWs, we enabled nearly every feature (including scanning traffic for attacks, identifying user applications, and classifying network security risk based on the geolocation of the client) and load balanced the two devices using an advanced-feature packet broker. Then, using our load testing tools, we generated realistic user traffic and a deluge of different attack scenarios. Below are the results of the four testing scenarios.
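For readers unfamiliar with how a packet broker can keep sessions intact while spreading load, here is a minimal, hypothetical sketch (not the packet broker's actual implementation) of symmetric, hash-based load balancing: the flow's 5-tuple is normalized and hashed so that both directions of a session always reach the same NGFW, while distinct flows spread evenly across the pool.

```python
import hashlib

# Hypothetical pool of inline security tools (e.g., two NGFWs).
NGFW_POOL = ["ngfw-1", "ngfw-2"]

def flow_key(src_ip, dst_ip, src_port, dst_port, proto):
    """Order the endpoints so both directions of a session hash identically."""
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    lo, hi = (a, b) if a <= b else (b, a)
    return f"{lo[0]}:{lo[1]}-{hi[0]}:{hi[1]}-{proto}"

def pick_ngfw(src_ip, dst_ip, src_port, dst_port, proto):
    """Hash the normalized 5-tuple and map it onto the NGFW pool."""
    digest = hashlib.sha256(
        flow_key(src_ip, dst_ip, src_port, dst_port, proto).encode()
    ).hexdigest()
    return NGFW_POOL[int(digest, 16) % len(NGFW_POOL)]

# Both directions of the same session land on the same device.
print(pick_ngfw("10.0.0.5", "192.0.2.10", 51544, 443, "tcp"))
print(pick_ngfw("192.0.2.10", "10.0.0.5", 443, 51544, "tcp"))
```

Because the hash is symmetric, the stateful inspection each NGFW performs never sees half a conversation, which is what lets a pool of devices behave as one logical solution.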

Scenario One: Traffic Spikes

Your 10GbE NGFW will experience inconsistent amounts of network traffic, and it is crucial to be able to enforce security policies effectively during such events. In the first test, we created a baseline of 8Gbps of real user traffic, then introduced a large influx of traffic that pushed the overall volume to 14Gbps. The packet broker load balancer ensured that the traffic was split evenly between the two NGFWs, and all of our security policies were enforced.


Figure 1: Network traffic spike

Scenario Two: Endurance Testing

Handling an isolated event is interesting, but maintaining security effectiveness over long periods of time is crucial for a deployed security solution. In the next scenario, we ran all of the applications we anticipated on the network at 11Gbps for 60 hours. The packet broker gave each NGFW just over 5Gbps of traffic, allowing all of our policies to be enforced. Of the 625 million application transactions attempted throughout the test, users enjoyed a 99.979% success rate.


Figure 2: Applications executed during 60 hour endurance test

Scenario Three: Attack Traffic

Where the rubber meets the road for a security solution is during an attack. Security solutions are insurance policies against network failure, data exfiltration, misuse of your resources, and loss of reputation. We created a 10Gbps baseline of the user traffic described in Figure 2, then added a curveball by launching 7,261 remote exploits from one zone to another. Had these events not been load balanced by the packet broker, a single NGFW might have taken the entire brunt of the attack: it could have been overwhelmed and failed to enforce policies, or been under such duress mitigating the attacks that legitimate users became collateral damage. The deployed solution performed excellently, mitigating all but 152 of the attacks.

Concerning the 152 missed attacks: the load testing tool's library contains a comprehensive set of undisclosed exploits, and, as with the 99.979% application success rate seen during the endurance test, nothing is infallible. If the test had come back with 100% success, we wouldn't believe it, and neither should you.


Figure 3: Attack success rate

Scenario Four: The Kitchen Sink

Life would indeed be rosy if the totality of a content-aware security solution were simply making decisions between legitimate users and known exploits. For the final test we added another wrinkle: on top of the existing deluge of real users and attacks, the solution also had to deal with a large volume of fuzzing. Fuzzing is the practice of sending intentionally malformed network traffic through a device or at an endpoint in the hope of uncovering a bug that could lead to a successful exploit. Fuzzed traffic ranges from incorrectly advertised packet lengths to erroneously crafted application transactions; this test included those two cases and everything in between. The goal of this test was stability. We achieved it by mixing 400Mbps of pure chaos from the load testing tool's fuzzing engine with Scenario Three's 10Gbps of real user traffic and exploits. We wanted to make certain that the load-balanced pair of NGFWs was not going to topple over when the unexpected took place.
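To make the idea of fuzzed traffic concrete, here is a minimal, hypothetical sketch (not the load testing tool's actual engine) that mutates a well-formed payload with random byte flips and a bogus length prefix, the kind of malformed input a fuzzer throws at a device under test.

```python
import random

def fuzz_packet(payload: bytes, flip_count: int = 4) -> bytes:
    """Return a copy of payload with a bogus length prefix and random byte corruption."""
    data = bytearray(payload)
    for _ in range(flip_count):
        pos = random.randrange(len(data))
        data[pos] ^= random.randrange(1, 256)        # flip random bits in a random byte
    bogus_len = random.choice([0, 1, 65535])          # advertise a length that does not match the data
    return bogus_len.to_bytes(2, "big") + bytes(data)

# Example: corrupt a tiny "application" message before sending it at the device under test.
original = b"GET /index.html HTTP/1.1\r\nHost: example.test\r\n\r\n"
print(fuzz_packet(original).hex())
```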

The results were also exceptionally good. Of the 804 million application transactions users attempted, only 4.5 million went awry, leaving a 99.436% success rate. The extra measure of maliciousness changed the user experience only by increasing failures by about half a percent, and nothing crashed and burned.


Figure 4: Application Success rates during the “Kitchen Sink” test

Conclusion

All four of the above scenarios illustrate how you can enhance the effectiveness of a security solution while maximizing your budget. However, we are only scratching the surface. What if you needed your security solution deployed in a high-availability environment? What if the traffic your network services expands? Setting up the packet broker to operate in HA, or adding more inline security solutions to the load-balanced pool, is probably the most effective and affordable way of addressing these needs.

Let us know if you are interested in seeing a live demonstration of a packet broker load balancing attacks from a security testing tool across multiple inline security solutions. We would be happy to show you how it is done.

Additional Resources:

Network Packet Brokers

CyPerf

Eight Steps to take when conducting your first threat hunt

Unlike traditional, reactive approaches to detection, hunting is proactive. With hunting, security professionals don't wait to take action until they've received a security alert or, even worse, suffered a data breach. Instead, hunting entails looking for adversaries who are already in your environment.

Hunting leads to discovering undesirable activity in your environment and using this information to improve your security posture. These discoveries happen on the security team’s terms, not the attacker’s. Rather than launching an investigation after receiving an alert, security teams can hunt for threats when their environment is calm instead of in the midst of the chaos that follows after a breach is detected.

To help security professionals better facilitate threat hunting, here are step-by-step instructions on how to conduct a hunt.

1. Internal vs. outsourced 

If you decide to conduct a threat hunting exercise, you first need to decide whether to use your internal security team or outsource the work to an external threat hunting service provider. Some organizations have skilled security talent that can lead a threat hunt. To enable a proper exercise, those analysts should work exclusively on the hunting assignment for the span of the operation so they can focus entirely on the task.

When a security team lacks the time and resources hunting requires, they should consider hiring an external hunting team to handle this task.

2. Start with proper planning

Whether you use an internal team or an external vendor, the best hunting engagements start with proper planning. Putting together a process for how to conduct the hunt yields the most value; treating hunting as an ad hoc activity won't produce effective results. Proper planning also ensures that the hunt will not interfere with the organization's daily work routines.

3. Select a topic to examine 

Next, security teams need a security topic to examine. The aim should be to either confirm or deny that a certain activity is happening in their environment. For instance, security teams may want to see if they are being targeted by advanced threats that use tools like fileless malware to evade the organization's current security setup.

4. Develop and test a hypothesis

The analysts then establish a hypothesis by determining the outcomes they expect from the hunt. In the fileless malware example, the purpose of the hunt is to find hackers who are carrying out attacks by using tools like PowerShell and WMI.

Collecting every PowerShell process in the environment would overwhelm the analysts with data and prevent them from finding any meaningful information. They need a smart approach to testing the hypothesis without reviewing each and every event.

Let's say the analysts know that only a few desktop and server administrators use PowerShell for their daily operations. Since the scripting language isn't widely used throughout the company, the analysts executing the hunt can expect to see only limited PowerShell use, and extensive use may indicate malicious activity. One possible approach to testing the hunt's hypothesis, then, is to measure the level of PowerShell use as an indicator of potentially malicious activity.
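As a hedged sketch of what that measurement could look like (the CSV export, column names, expected-host list, and threshold are all hypothetical), the script below counts PowerShell process launches per host and flags hosts where the prevalence is higher than the hypothesis predicts:

```python
import csv
from collections import Counter

# Hypothetical export of process-creation events (e.g., from Windows event ID 4688
# or Sysmon event ID 1), with columns: host, process_name, command_line.
EXPORT = "process_events.csv"
KNOWN_ADMIN_HOSTS = {"admin-ws-01", "admin-ws-02"}   # hosts where PowerShell use is expected
THRESHOLD = 10                                        # executions per host considered "extensive"; tune per environment

powershell_counts = Counter()
with open(EXPORT, newline="") as f:
    for row in csv.DictReader(f):
        if row["process_name"].lower() in {"powershell.exe", "pwsh.exe"}:
            powershell_counts[row["host"]] += 1

# Flag unexpected hosts with extensive PowerShell use for deeper review.
for host, count in powershell_counts.most_common():
    if host not in KNOWN_ADMIN_HOSTS and count > THRESHOLD:
        print(f"review {host}: {count} PowerShell executions")
```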

5. Collect information

To review PowerShell activity, analysts would need network information, which can be obtained by reviewing network logs, and endpoint data, which is found in database logs, server logs or Windows event logs.

To figure out what PowerShell use looks like in a specific environment, the analysts will collect data including process names, command lines, DNS queries, destination IP addresses, and digital signatures. This information allows the hunting team to build a picture of relationships across different data types and look for connections.

6. Organize the data

Once the data has been compiled, analysts need to determine which tools they will use to organize and analyze it. Options include the reporting tools in a SIEM, purchasing analytical tools, or even using Excel to create pivot tables and sort the data. With the data organized, analysts should be able to pick out trends in their environment. In the example reviewing a company's PowerShell use, they could convert event logs into CSV files and upload them to an endpoint analytics tool.
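For teams that stay with lightweight tooling, a minimal sketch of the same pivot-table idea (assuming the hypothetical CSV export described above, with pandas standing in for a spreadsheet) might look like this:

```python
import pandas as pd

# Assumes a hypothetical CSV export with columns: host, user, process_name, command_line.
events = pd.read_csv("process_events.csv")

powershell = events[events["process_name"].str.lower().isin(["powershell.exe", "pwsh.exe"])]

# Pivot: PowerShell executions per host and user, highest totals first.
pivot = powershell.pivot_table(
    index="host", columns="user", values="command_line", aggfunc="count", fill_value=0
)
pivot["total"] = pivot.sum(axis=1)
print(pivot.sort_values("total", ascending=False))
```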

7. Automate routine tasks 

Discussions about automation may turn some security analysts off. However, automating certain tasks is key to a hunting team's success. There are repetitive tasks that analysts will want to automate, and some queries are better searched and analyzed by automated tools.

Automation spares analysts from the tedious task of manually querying the reams of network and endpoint data they’ve amassed. For example, analysts may want to consider automating the search for tools that use DGAs (domain generation algorithms) to hide their command and control communication. While an analyst could manually dig through DNS logs and build data stacks, this process is time consuming and frequently leads to errors.
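As a rough illustration of that automation (the log format, column names, and thresholds are hypothetical), the script below scores queried domains from a DNS log by character entropy and flags the long, high-entropy names that DGAs tend to produce, sparing the analyst a manual dig through the logs:

```python
import csv
import math
from collections import Counter

def entropy(label: str) -> float:
    """Shannon entropy of the characters in a domain label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical DNS log export with columns: client_ip, query.
suspects = []
with open("dns_queries.csv", newline="") as f:
    for row in csv.DictReader(f):
        domain = row["query"].rstrip(".")
        label = domain.split(".")[0]                     # leftmost label is where DGAs add randomness
        if len(label) >= 12 and entropy(label) > 3.5:    # long, high-entropy labels look machine-generated
            suspects.append((row["client_ip"], domain))

for client, domain in suspects:
    print(f"{client} queried DGA-like domain: {domain}")
```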

8. Get your question answered and plan a course of action

Analysts should now have enough information to test their hypothesis, understand what is happening in their environment, and take action. If a breach is detected, the incident response team should take over and remediate the issue. If any vulnerabilities are found, the security team should resolve them.

Continuing with the PowerShell example, let's assume that malicious PowerShell activity was detected. In addition to alerting the incident response team, security teams or IT administrators should adjust the Group Policy Object settings in Windows to prevent PowerShell scripts from executing.

Thanks to Cybereason, and author Sarah Maloney for this article

Ixia Special Edition Network Visibility For Dummies

Advanced cyber threats, cloud computing, and exploding traffic volume pose significant challenges if you are responsible for your organization's network security and performance management. The concept of 'network visibility' is frequently introduced as the key to improvement. But what exactly is network visibility, and how does it help an organization keep its defenses strong and optimize performance? This e-book, presented in the straightforward style of the For Dummies series, describes the concept from the ground up. Download this guide to learn how to use a visibility foundation to access all the relevant traffic moving through your organization and deliver the information you need to protect your network and maximize customer experience.

Download your free copy of Ixia’s Special Edition of Network Visibility for Dummies E-Book below

Thanks to Ixia for this article and content.

Ixia Has Your Secret Weapon Against SSL Threats

It has finally happened: thanks to advances in encryption, legacy security and monitoring tools are now useless when it comes to SSL. Read this white paper from Ixia to learn how this negatively impacts visibility into network applications such as e-mail, e-commerce, online banking, and data storage, and, even worse, how advanced malware increasingly uses SSL sessions to hide, confident that security tools will neither inspect nor block its traffic.

Consider the following challenges:

  • Visibility into ephemeral key traffic
  • Coping with CPU-intensive encryption and decryption tasks
  • Chaining and handling multiple security tools
  • Meeting the demands of regulatory compliance

The very technology that made our applications secure is now a significant threat vector. The good news is, there is an effective solution for all of these problems. Learn how to eliminate SSL related threats in this white paper.

Thanks to Ixia for this article

Private Cloud: The ABCs of Network Visibility

Cloud computing has become the de facto foundation for digital business. As more and more enterprises move critical workloads to private and public clouds, they will face new challenges ensuring security, reliability, and performance of these workloads. If you are responsible for IT security, data center operations, or application performance, make sure you can see what’s happening in the cloud. This is the first of two blogs on the topic of cloud visibility and focuses on private cloud.

VISIBILITY CHALLENGES

If you are wondering why cloud visibility is important, consider the following visibility-related concerns that can occur in private cloud environments.

1. Security blind spots. Traditional security monitoring relies on intercepting traffic as it flows through physical network devices. In virtualized data centers and private clouds, this model breaks down because many packets move between virtual machines (VMs) or application instances and never cross a physical “wire” where they can be tapped for inspection. Because of these blind spots, virtual systems can be tempting targets for malicious breaches.

2. Tools not seeing all relevant data. The point of visibility is not merely to see cloud data, but to export that data to powerful analytics and reporting tools. Tools that receive only a limited view of traffic will have a harder time analyzing performance issues or resolving latency issues, especially as cloud traffic increases. Without access to data from cloud traffic, valuable clues to performance issues may not be identified, which can delay problem resolution or impact the user experience.

3. Security during data generation. Some organizations may use port mirroring in their virtualization platform to access traffic moving between virtual machines. However, this practice can create security issues in highly-regulated environments. Security policies need to be consistently applied, even as application instances move within the cloud environment.

4. Complexity of data collection. With multiple data center and cloud environments, gathering all the relevant data needed by security and monitoring tools becomes complex and time-consuming. Solutions that make it easy to collect traffic from cloud and non-cloud sources can lead to immediate operational savings.

5. Cost of monitoring in the data center. The total cost of a private cloud will rise with the volume of traffic that needs to be transported back to the data center for monitoring. The ability to filter cloud traffic at its source can minimize backhaul and the workload on your monitoring tools.

CLOUD VISIBILITY USE CASES

Given these issues, better visibility can provide valuable benefits to an organization, particularly in:

Security and compliance: Keeping your defenses strong in the cloud, as you do in the data center, requires end-to-end visibility for adequate monitoring and control. Packets that are not inspected represent unnecessary risk to the organization and can harbor malware or other attacks. Regulatory compliance may also require proof that you have secured data as it moves between virtual instances.

Performance analytics: As with security, analysis is dependent on having the necessary data—before, during, and after cloud migration. Your monitoring tools must receive the right inputs to produce accurate insights and to quickly detect and isolate performance problems.

Troubleshooting: If an application that runs in your virtual data center experiences an unusual slow-down, how will you pinpoint the source of the problem? Packet data combined with application-layer intelligence can help you isolate traffic associated with specific combinations of application, user, device, and geolocation, to reduce your mean-time-to-resolution.

In each of these areas, you need the ability to see all of the traffic moving between virtual resources. Without full visibility to what’s happening in your clouds, you increase your risk for data breaches, delays in problem resolution, and loss of productivity or customer satisfaction.

VISIBILITY SOLUTIONS

 So, if cloud visibility is essential to security and application performance, what can you do to address the blind spots that naturally occur? Here are a few things to look for:

Virtual Taps 

Tapping is the process of accessing virtual or cloud packets in order to send them to security and performance monitoring tools. In traditional environments, a physical tap accesses traffic flowing through a physical network switch. In cloud environments, a virtual tap is deployed as a virtual instance in the hypervisor and:

  • Accesses all traffic passing between VMs or application instances
  • Provides basic (Layer 2-4) filtering of virtual traffic

For maximum flexibility, you should choose virtual taps like those in Ixia CloudLens Private that support all the leading hypervisors, including OpenStack KVM, VMware ESXi/NSX, and Microsoft Hyper-V and are virtual switch agnostic.

Virtual Packet Processors 

Packet processing is used for more advanced manipulation of packets, to trim the data down to only what is necessary, for maximum tool efficiency. Look for solutions that provide data aggregation, deduplication, NetFlow generation, and SSL decryption. Ixia CloudLens Private packet processing can also do more granular filtering using application intelligence to identify traffic by application, user, device, or geolocation. You can do advanced packet processing using a physical packet broker by transmitting your cloud data back to the data center. Teams that already have physical packet brokers in place, or are new to monitoring cloud traffic, may choose this approach. Another approach is to perform advanced packet processing right in the cloud. Only Ixia offers this all-cloud solution. With this option, you can send trimmed data directly to cloud-based security or analysis tools, eliminating the need for backhaul to the data center. This can be an attractive option for organizations with extremely high traffic volume.

Common Management Interface

Deploying cloud is complicated enough without having to worry about how to get an integrated view across physical and virtual traffic. Ixia’s CloudLens solution provides a comprehensive graphical view of all your network traffic, from all sources. With the power of application intelligence, the Ixia dashboard can tell you where all your traffic is coming from, which applications and locations are the most active, and which operating systems and devices are on the network—valuable information for performance management.

SUMMARY

 As you move more workloads to private cloud environments, be sure to consider a visibility solution that will let you access and visualize your cloud traffic. Don’t let blind spots in your network result in security breaches, application bottlenecks, or dissatisfied users.

Thanks to Ixia and author Lora O’Haver for this article.

Viavi: Nearly 90 Percent of Enterprise Network Teams Spend Time Troubleshooting Security Issues; 80 Percent Report More Time Spent on Security vs. Last Year

Tenth Annual “State of the Network” Global Survey from Viavi Reveals Network and Security Trends from over 1,000 Network Professionals

In April 2017, Viavi Solutions (NASDAQ: VIAV) released the results of its tenth annual State of the Network global study. This year's study focused on security threats, perhaps explaining why it garnered the highest response rate in the survey's history. Respondents included 1,035 CIOs, IT directors, and network engineers around the world. The study is now available for download.

“As our State of the Network study shows, enterprise network teams are expending more time and resources than ever before to battle security threats. Not only are they faced with a growing number of attacks, but hackers are becoming increasingly sophisticated in their methods and malware,” said Douglas Roberts, Vice President and General Manager, Enterprise & Cloud Business Unit, Viavi Solutions. “Dealing with these types of advanced, persistent security threats requires planning, resourcefulness and greater visibility throughout the network to ensure that threat intelligence information is always at hand.”

Highlights of the 2017 study include:

  • Network team members' involvement in security: Eighty-eight percent of respondents say they are involved in troubleshooting security-related issues. Of those, nearly 80 percent report an increase in the time they spend on such issues, with nearly three out of four spending up to 10 hours a week on them.
  • Evolution of security threats: When asked how the nature of security threats has changed in the past year, IT teams identified a rise in email and browser-based malware attacks (63 percent) and an increase in threat sophistication (52 percent). Nearly one in three also report a surge in distributed denial of service (DDoS) attacks.
  • Key sources of security insight: Syslogs were cited by nearly a third of respondents as the primary method for detecting security issues, followed by long-term packet capture and analysis (23 percent) and performance anomalies (15 percent).
  • Overall factors driving network team workload: Bandwidth usage in enterprises continues to surge, with two out of three respondents expecting bandwidth demand to grow by up to 50 percent in 2017. This trend is in turn driving increased adoption of emerging technologies including software-defined networks (SDN), public and private clouds and 100 Gb. Network teams are managing these major initiatives while simultaneously confronting an aggressive rise in security issues.

 “A combination of new technology adoption, accelerating traffic growth and mounting security risks has spawned unprecedented challenges throughout the enterprise market,” commented Shamus McGillicuddy, Senior Analyst at Enterprise Management Associates. “The need to detect and deal with security threats is notably complicated by the diverse mix of today’s enterprise traffic, which spans across virtual, public and hybrid cloud environments in addition to physical servers.”

Key takeaways: what should IT service delivery teams do?

  • Know your “normal” – Recognizing abnormal traffic is critical for pinpointing an ongoing attack or security issue. Start comparing network traffic and behavior over points in time, either manually with the freeware analyzer Wireshark or using automated benchmarking in commercial network performance monitoring and diagnostic (NPMD) tools; see the sketch after this list.
  • Speed discovery with traffic evidence – According to the recent Mandiant M-Trends report, the median number of days that attackers were present on a victim’s network before being discovered is still 146, despite the use of IDS and other traditional security tools. Using packet capture with retrospective analysis, network teams can rewind to the time of the incident(s) and track exactly what the hackers accessed.
  • Ensure long-term packet retention – For high-traffic enterprise, data center, or security forensics applications, a purpose-built appliance with its own analytics may be the next step. Depending on size and volume, there are appliances that can capture and store up to a petabyte of network traffic for later analysis, simplifying forensic investigation for faster remediation.
  • Facilitate effective network and security team cooperation – Ensure successful collaboration between network and security teams on investigations with documented workflows and integration between security, network forensics, and performance management tools.
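As a minimal illustration of "knowing your normal" (assuming per-protocol byte counts exported from a flow collector or NPMD tool; the values, field names, and threshold are hypothetical), the sketch below compares a current measurement window against a stored baseline and flags large deviations:

```python
# Hypothetical per-protocol byte counts for one measurement window.
baseline = {"https": 420_000_000, "dns": 9_000_000, "smb": 55_000_000, "ssh": 2_000_000}
current  = {"https": 460_000_000, "dns": 38_000_000, "smb": 51_000_000, "ssh": 2_500_000}

DEVIATION = 2.0   # flag anything more than 2x its baseline

for proto, observed in current.items():
    expected = baseline.get(proto)
    if expected is None:
        print(f"{proto}: not in baseline ({observed:,} bytes observed)")
    elif observed / expected > DEVIATION:
        print(f"{proto}: {observed:,} bytes is {observed / expected:.1f}x the baseline of {expected:,}")
```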

Thanks to Viavi for this article.

Infosim explains how the SaaS model of Network & Services Management can transform your IT operations

It is really next to impossible to avoid hearing about how “The Cloud” is transforming business everywhere. Well-known technology companies from the old guard like IBM, Oracle, Microsoft, and Adobe, to established tech powerhouses Google and Amazon, to pretty much every new hot startup like Cloudera are all promoting how they use the cloud to deliver their products through a SaaS delivery model. Infosim® is seeing this trend start to take hold in the Network & Services Management and IT Service Assurance space.

Our customers want to know if they would benefit from using a cloud/SaaS monitoring solution. So, why the big push to SaaS and away from traditional enterprise software sales? From a business standpoint, a recurring revenue model is preferred by salespeople and Wall Street alike: it guarantees a future revenue stream while also making customer interactions a little more “sticky” (to use a Silicon Valley term). But while that may be great for the technology company, can the same be said for the end-user? Do customers see an equal, or even greater, benefit from switching from a long-term contract to a more on-demand model? This article investigates the customer side of the equation to see if the change to SaaS really is a win-win for both the vendor and the customer.

1. Cost

Let’s begin by looking at cost. Make no mistake; the bottom line is always the most important factor in any comparison of delivery mechanisms. If customers can receive the same operations success at a lower price point, they will go that way every time.

The cost analysis can be somewhat complicated: while the entry price is always significantly lower when purchasing software via SaaS, you also have to think about how that cost adds up if you continue to use the software for a long period of time. Thankfully, these types of cost analyses have been studied in depth over the last few years, and the verdict is very clear: SaaS Total Cost of Ownership (TCO) is lower than the traditional way of purchasing. Most models conclude that SaaS reduces total cost (including maintenance, upgrades, personnel, etc.) to between 35% and 50% of the cost of traditional on-premise application systems. This cannot be ignored and is (and will most probably continue to be) the main driver behind the explosion of SaaS deployments.

2. Ease of deployment

The second most frequently cited reason for moving to SaaS is ease of deployment. This really comes down to the fact that with a SaaS model, the end-user typically has to deploy far fewer compute and storage resources than with an on-premise application: fewer (or zero) servers, databases, storage arrays, and so on. This means a smaller datacenter footprint, which means less power, space, cooling, and everything else that goes with managing your own datacenter resources. These factors have cost implications in their own right, but beyond the financial benefits, this reduction in infrastructure means a much lower physical burden on the overall IT service organization. Many see this benefit as having an equal if not larger overall impact on the end-user than the pure financial reduction.

3. Increased flexibility

The third-biggest driver of SaaS from the customer point of view is typically thought to be the increased flexibility that this model delivers. Under the old model, once you purchase a software feature set, you are stuck with what you have committed to. If your needs change, you typically have to eat those sunk costs and then buy more/newer features to meet your changing needs. Conversely, with SaaS you can very rapidly turn features on or off, scale up or down and generally make changes to your systems almost on the fly. This makes management much more comfortable committing to a solution when they know that if their needs change quickly, their software vendor can change with them.

4. Availability

When evaluating any solution, one of the sometimes overlooked but important “abilities” to take into consideration is availability. The concept is simple: if the software is unavailable, your users cannot get their jobs done. The cost of downtime for manufacturers, financial institutions, and most other businesses can be staggering; it can be the difference between success and failure. This is why most companies spend a lot of time and money creating disaster recovery plans, geo-redundant systems, and other solutions to ensure availability. SaaS has an inherent advantage due to the typically large global footprint of these software solutions. AWS from Amazon, Azure from Microsoft, and others have spent huge sums of money to build out global datacenters with more redundancy than a typical organization could afford on its own. Even very large companies that could potentially build out such datacenters have begun to move their systems to the cloud once they realize the advantages of outsourcing availability.

5. Expertise

 Another consideration that may not come to mind initially is the availability of expertise to help solve issues or drive development of features. When many, many customers are using essentially the same services provided in the same way from the SaaS solution vendor, they tend to encounter the same problems (and find solutions) very quickly. And, those solutions tend to get posted to user community websites or YouTube etc. almost immediately. This means that as an end-user, when you have an issue, you can typically go online and find a fix or training almost immediately. This speeds up time to resolution dramatically.

6. Security

Many initially see security concerns as a reason against moving to the cloud. They see their important data being kept someplace outside their control, open to attacks and hackers. However, if you investigate most of the major well-known data breaches over the last few years, you will see that the majority of them happened to organizations that were breached internally, not via large-scale cloud infiltrations. Many smaller and medium-sized organizations do not have the security expertise or budget to effectively block the advanced threats that are commonplace today. In fact, it tends to be the large cloud providers that create a more effective security moat around important data than a smaller company could. SaaS vendors can apply a critical security patch to all of their customers in minutes, without having to rely on end-users downloading and applying the fix themselves. This ultimately creates a much more secure environment for your data.

When combined, both the common features of SaaS along with some of the lesser-known benefits can add up to a complete and very positive transformation of delivering an IT service such as network management and service assurance.

Thanks to Infosim, and author Dietmar Kneidl

3 Key Differences Between NetFlow and Packet Capture Performance Monitoring

The increasing density, complexity and expanse of modern networking environments have fueled the ongoing debate around which network analysis and monitoring tools serve the needs of the modern engineer best – placing Packet Capture and NetFlow Analysis at center-stage of the conversation. Granted, both can be extremely valuable tools in ongoing efforts to maintain and optimize complex environments, but as an engineer, I tend to focus on solutions that give me the insights I need without too much cost on my resources, while complementing my team’s ability to maintain and optimize the environments we support.

So with this in mind, let’s take a look at how NetFlow, in the context of the highly-dense networks we find today, delivers three key requirements network teams rely on for reliable end-to-end performance monitoring of their environments.

A NetFlow deployment won’t drain your resources

Packet Capture, however rich in network metrics, requires sniffing devices and agents throughout the network, which invariably require some level of maintenance during their lifespan. In addition, the amount of space required to store and analyze packet data makes it an inefficient and inelegant method of monitoring or forensic analysis. Combine this with the levels of complexity networks can reach today, and the overall cost and maintenance associated with packet sniffers can quickly become unfeasible. NetFlow, by contrast, enjoys wide vendor support across virtually the entire networking landscape, making almost every switch, router, or firewall a NetFlow-ready device. This built-in readiness to capture and export data-rich metrics makes NetFlow easy for engineers to deploy and utilize. Also, thanks to its popularity, NetFlow analyzers with varying feature sets are available for network operations center (NOC) teams to take full advantage of data-rich packet flows.

Striking the balance between detail and context

Considering how network-dependent and widespread applications have become in recent years, NetFlow's ability to provide WAN-wide metrics in near real time makes it a suitable troubleshooting companion for engineers. And with version 9 of NetFlow extending the wealth of information it collects via a template-based collection scheme, it strikes the balance between detail and high-level insight without placing too much demand on networking hardware, which is something that can't be said for Packet Capture. Packet Capture tools do what they do best, Deep Packet Inspection (DPI), which allows for the identification of aspects of the traffic that were previously hidden from NetFlow analyzers. But NetFlow's constant evolution alongside the networking landscape is seeing it used as a complement to DPI solutions such as Cisco's NBAR, whose vendors have recognized that flexible NetFlow tools can reveal details down at the packet level.

NetFlow places your environment in greater context

Context is a chief area where NetFlow beats out Packet Capture, since it allows engineers to quickly locate the root causes of performance problems by providing a more situational view of the environment: its data flows, bottleneck-prone segments, application behavior, device sessions, and so on. We could argue that packet sniffing provides much of this information too, but it doesn't give engineers the broader context around the information it presents, leaving IT teams hamstrung when a performance anomaly could be attributed to any number of factors, such as an untimely system-wide application or operating system update, or a cross-link backup application pulling loads of data across the WAN during operational hours.
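As a rough illustration of the kind of context flow records provide (assuming flows already exported to CSV by a collector; the file name and column names are hypothetical), a few lines of aggregation are enough to surface the top talkers and busiest services behind a congested link:

```python
import csv
from collections import defaultdict

# Hypothetical flow export with columns: src_ip, dst_ip, dst_port, protocol, bytes.
bytes_by_talker = defaultdict(int)
bytes_by_service = defaultdict(int)

with open("netflow_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        size = int(row["bytes"])
        bytes_by_talker[row["src_ip"]] += size
        bytes_by_service[(row["protocol"], row["dst_port"])] += size

print("Top talkers:")
for ip, total in sorted(bytes_by_talker.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"  {ip}: {total:,} bytes")

print("Busiest services:")
for (proto, port), total in sorted(bytes_by_service.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"  {proto}/{port}: {total:,} bytes")
```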

So does NetFlow make Packet Capture obsolete?

The short answer is no. In fact, Packet Capture, when properly coupled with NetFlow, can make a very elegant solution: for example, using NetFlow to identify an attack profile or illicit traffic and then analyzing the corresponding raw packets. NetFlow, however, strikes that balance between detail and context and gives NOCs intelligent insights that reveal the broader factors influencing your network's ability to perform. Gartner's assertion that a balance of 80% NetFlow monitoring coupled with 20% Packet Capture is the ideal combination for performance monitoring attests to NetFlow's growing prominence as the monitoring tool of choice. And as NetFlow and its various iterations, such as sFlow and IPFIX, continue to expand the breadth of context they provide network engineers, that margin is set to increase in NetFlow's favor over time.

Thank you to NetFlow Auditor for this post.

ThreatARMOR Reduces Your Network’s Attack Surface

2014 saw the creation of more than 317 million new pieces of malware. That means an average of nearly one million new threats were released each day.

Here at Ixia we’ve been collecting and organizing threat intelligence data for years to help test the industry’s top network security products. Our Application and Threat Intelligence (ATI) research center maintains one of the most comprehensive lists of malware, botnets, and network incursions for exactly this purpose. We’ve had many requests to leverage that data in support of enterprise security, and this week you are seeing the first product that uses ATI to boost the performance of existing security systems. Ixia’s ThreatARMOR continuously taps into the ATI research center’s list of bad IP sources around the world and blocks them.

Ixia's ThreatARMOR represents another innovation and an extension of the company's Visibility Architecture, reducing the ever-increasing size of an organization's network attack surface.

A network attack surface is the sum of every access avenue an individual can use to gain access to an enterprise network. The expanding enterprise security perimeter must address new classes of attack, advancing breeds of hackers, and an evolving regulatory landscape.

“What’s killing security is not technology, it’s operations,” stated Jon Oltsik, ESG senior principal analyst and the founder of the firm’s cybersecurity service. “Companies are looking for ways to reduce their overall operations requirements and need easy to use, high performance solutions, like ThreatARMOR, to help them do that.”

Spending on IT security is poised to grow tenfold in ten years. Enterprise security tools inspect all traffic, including traffic that shouldn't be on the network in the first place: traffic from known malicious IPs, hijacked IPs, and unassigned or unused IP space/addresses. These devices, while needed, create more work than a security team could possibly handle. False positives consume an inordinate amount of time and resources: enterprises spend approximately 21,000 hours per year on average dealing with false positive cyber security alerts, per a Ponemon Institute report published in January 2015. You need to reduce the attack surface so you can focus only on the traffic that actually needs to be inspected.
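Conceptually, blocking known-bad sources before inspection is just a set lookup in front of the security stack. Here is a minimal sketch of the idea (the blocklist file and addresses are hypothetical; ThreatARMOR itself does this in purpose-built hardware fed by the ATI program):

```python
import ipaddress

# Hypothetical feed of known-bad sources: individual IPs and unassigned/hijacked ranges.
with open("bad_sources.txt") as f:
    blocked_networks = [ipaddress.ip_network(line.strip()) for line in f if line.strip()]

def should_inspect(src_ip: str) -> bool:
    """Drop traffic from known-bad sources; pass everything else to the security tools."""
    addr = ipaddress.ip_address(src_ip)
    return not any(addr in net for net in blocked_networks)

# Only traffic that survives the blocklist is handed to the (expensive) inspection tools.
for packet_src in ["198.51.100.7", "203.0.113.250"]:
    print(packet_src, "inspect" if should_inspect(packet_src) else "drop")
```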

“ThreatARMOR delivers a new level of visibility and security by blocking unwanted traffic before many of these unnecessary security events are ever generated. And its protection is always up to date thanks to our Application and Threat Intelligence (ATI) program.” said Dennis Cox, Chief Product Officer at Ixia.

“The ATI program develops the threat intelligence for ThreatARMOR and a detailed ‘Rap Sheet’ that provides proof of malicious activity for all blocked IP addresses, supported with on-screen evidence of the activity such as malware distribution or phishing, including date of the most recent confirmation and screen shots.”

ThreatARMOR: your new front line of defense!

Additional Resources:

ThreatARMOR

Thanks to Ixia for the article.

Two Ways Networks Are Transformed By NetFlow

According to an article on techtarget.com, “Your routers and switches can yield a mother lode of information about your network–if you know where to dig.” The article goes on to say that excavating and searching through the endless traffic data and logs produced by your network is a lot like mining for gold: punching random holes to look for a few nuggets of information isn’t very efficient, and your search will be much more fruitful if you know where to look and what it will look like. Fortunately, the data generated by the NetFlow traffic reporting protocol yields specific information that you can easily sort, view, and analyze into what you want or need. In contemporary networks, there is a need to collect and retain a good set of traffic records for several different purposes, including monitoring traffic for network planning, security, and analysis, as well as tracking traffic usage for billing. Every business experiences network problems. The goal is to transform these “badly behaving” networks by investigating the data being generated by the routers, switches, and other hardware that make up the system.

  • Trace and repair network misconfigurations

Problems with networks can run the gamut from mismatched applications and hardware to wireless access points opened to accommodate BYOD users and other business uses. While there is always talk about software flaws and news about the latest internet threat, those things often distract IT pros from the real, everyday threat of unstable networks that have been configured to accommodate legacy hardware and a multitude of software applications.

The increasing complexity of the Internet itself, with the interconnection of many different devices and device types, adds to the challenge of operating a computer network. Even though developing protocols to respond to unpredicted failures and misconfigurations is a workable approach, out-of-date configurations can still cause frequent problems and denial of service (DoS). With many modern network devices performing monitoring functions and gathering data, retrieving and utilizing NetFlow information makes tracing and repairing misconfigurations possible, easier, and more efficient.

  • Detect security breaches

There are many uses for NetFlow but one of the most important is the benefit of network security. This quote from an article by Wagner and Bernhard, describing worm and anomaly detection in fast IP networks, bears out the security problems facing governments, businesses, and internet users today.

“Large-scale network events, such as outbreaks of a fast Internet worm are difficult to detect in real-time from observed traffic, even more so if the observed network carries a large amount of traffic. Even during worm outbreaks, actual attack traffic is only a small fraction of the overall traffic. Its precise characteristics are usually unknown beforehand, making direct matching hard. It is desirable to derive parameters that describe large amounts of traffic data in such a way that details are hidden, but the type of changes associated with worm outbreaks and other network events are visible.”

NetFlow provides a 24/7 account of all network activity. There is an “unblinking” eye observing anything and everything that happens within the network boundaries. All the data needed to identify and enact a clean-up is recorded in the flow and this is invaluable to a security pro trying to reduce the impact of a breach in the network. NetFlow provides a visible, “what’s happening right now” view that other systems cannot provide. Most security systems alert after something has been detected, while NetFlow is constantly gathering information even when things seem to be just fine. In addition, NetFlow-based analysis relies on traffic behavior and algorithms which provides rapid detection of breaches that other technologies often miss.
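A hedged sketch of what behavior-based detection on flow data can look like (the export, column names, baseline values, and threshold are hypothetical): flag hosts whose count of distinct destinations in a window far exceeds their historical average, a fan-out pattern typical of worm propagation or scanning.

```python
import csv
from collections import defaultdict

# Hypothetical flow export for one time window, with columns: src_ip, dst_ip.
distinct_dests = defaultdict(set)
with open("netflow_window.csv", newline="") as f:
    for row in csv.DictReader(f):
        distinct_dests[row["src_ip"]].add(row["dst_ip"])

# Historical per-host averages, e.g. built from previous days of flow data (hypothetical values).
baseline_avg = {"10.0.0.21": 14, "10.0.0.40": 9}
SPIKE_FACTOR = 10   # a 10x jump in fan-out is worth a look

for host, dests in distinct_dests.items():
    usual = baseline_avg.get(host, 10)
    if len(dests) > SPIKE_FACTOR * usual:
        print(f"{host} contacted {len(dests)} distinct hosts (baseline ~{usual}); possible scan or worm")
```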

Thanks to NetFlow Auditor for the article.
