Three Key Areas to Address When Deploying Inline Security Tools

Anyone who works in network security knows that it is a complicated and involved process. The clear goal is to prevent security breaches, but how do you go about that? There are many schools of thought, methods, and configurations. Here are just a few examples:

  • Cyber range training
  • Cyber resilience
  • Defense in depth
  • Encryption
  • Endpoint security
  • Firewalls
  • Inline security tools (IPS, WAF, TLS decryption, etc.)
  • Out-of-band security tools (IDS, DLP, SIEM, etc.)
  • Penetration testing
  • Port scanning
  • Security device testing
  • Threat hunting
  • Threat intelligence
  • And the list goes on …

With so many different approaches and activities to focus on, it can become quite confusing. While several of these items stand out as high value, I want to focus on using inline security tools as part of your security architecture. Once you have a basic security architecture in place (firewalls and basic out-of-band security appliances such as an intrusion detection system (IDS) and a data loss prevention (DLP) solution), an inline security tool approach can give you a lot of “bang for the security buck.” This architecture allows you to actively screen for threats in real time and block a significant number of them as they enter your network. This immediately reduces your risk, because you actively inspect traffic and eliminate or quarantine anything that appears suspicious.

At the same time, life is never a “bed of rose petals,” right? There are always some thorns. While there is a clear, high-value benefit to inline security tools (i.e., the ability to catch malware and other attacks at the entry to the network before they start their dirty work), there are always some areas of concern to watch out for: creating a single point of failure, complexity, additional cost, and alert fatigue. The trick is to implement the right type of inline solution, one that eliminates, or at least mitigates, those concerns.

So, what is a best practice for implementing inline security? There are three key items to address in your plan:

  • How best to insert the equipment into the network
  • The best method for data inspection
  • The reliability of the solution

First, you need to decide how you are going to deploy your inline security tools. For instance, do you plan to insert every appliance directly into the path between your firewall and router? If so, in what order do you plan to insert the devices? Which devices are you planning to deploy, and what happens if you make any architectural changes?

A first reaction to all of these questions may be that this is very overwhelming. It can be. So, the first recommendation is to maximize flexibility and eliminate complexity here. I would recommend implementing an external bypass switch and a network packet broker (NPB) first. As the figure below shows, the bypass switch sits directly in the flow of network data. It then shunts the data off to the NPB for analysis. With this configuration, should anything need to be changed from the NPB onward (i.e., the security appliances), you don’t have to kill the network to make the change. This makes your network design much simpler than trying to install two, three, four, or more appliances directly inline, one after another. This one decision eliminates a great deal of complexity and risk.

It is important to have an external bypass switch in the architecture to enable failover capability during upgrades. Certain inline security tools include an internal bypass switch. This becomes a problem when you want to replace the security tool, as you have to shut the network down. Even some software upgrades can be a problem for an internal bypass if the upgrade requires a reset—another network shutdown. The simple solution is to use an external bypass, so you don’t have to worry about future upgrades.

The second consideration is how you plan to inspect the data. If you did not follow the bypass and NPB scenario I just recommended, then you are in for a nightmare. The security appliances will need to be arranged in the order in which the data will be inspected. For instance, does the data go to an intrusion prevention system (IPS) first, then to a web application firewall (WAF), and then on to another device (possibly a unified threat management (UTM) solution)? After that, how do you plan to handle encrypted data? Are you going to decrypt and re-encrypt after every security tool, or are you going to decrypt once at the start of the chain and send the data all the way back to the front for re-encryption (and how does that work without going back through the whole chain again)? And what transmission delay are these decisions adding to your data transit?

Hopefully you did select the bypass and NPB option. If so, you still need to put some effort into this area, but it is easy compared to the previous discussion. As the next figure shows, once the NPB is in place, you simply connect all of the devices to the NPB and set up filter rules within the NPB for how to route the data. Setting up the filter rules sounds like it could be difficult, but it is not if you choose the right packet broker, one with a graphical user interface (GUI) that uses a drag-and-drop visual method to set up the filters.
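As a rough illustration of what such filter rules amount to under the hood, here is a minimal Python sketch. The rule fields, the tool names such as `tls_decrypt`, and the first-match-wins semantics are all assumptions for illustration, not any vendor’s actual configuration format:

```python
# Hypothetical NPB-style filter rules: each rule matches packet
# attributes and names the tool port the packet is forwarded to.
FILTER_RULES = [
    {"match": {"dst_port": 443}, "forward_to": "tls_decrypt"},
    {"match": {"dst_port": 80},  "forward_to": "waf"},
    {"match": {},                "forward_to": "ips"},  # catch-all default
]

def route(packet: dict) -> str:
    """Return the tool a packet should be forwarded to (first match wins)."""
    for rule in FILTER_RULES:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["forward_to"]
    return "drop"  # no rule matched
```

A GUI-based packet broker presents essentially this rule table visually, so operators drag connections between ports instead of writing match conditions by hand.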

What is especially beneficial to this solution is that the NPB supports serial data chaining. So, if data sent to the IPS is flagged as suspicious, it can then be sent to a WAF or UTM for further analysis to either quarantine the data, kill it, or clear the data as okay.
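The serial chaining idea can be sketched as follows, assuming hypothetical verdict functions for the IPS and WAF (real tools obviously inspect far more than a port number or a payload string):

```python
def ips_verdict(pkt: dict) -> str:
    # Toy rule: treat traffic to unusual high ports as suspicious.
    return "suspicious" if pkt.get("dst_port", 0) > 1024 else "clean"

def waf_verdict(pkt: dict) -> str:
    # Toy rule: quarantine payloads containing a known bad marker.
    return "quarantine" if "attack" in pkt.get("payload", "") else "clean"

def service_chain(pkt: dict) -> str:
    """Serial chaining: only traffic the IPS flags is sent on to the WAF
    for deeper analysis; everything else is forwarded immediately."""
    if ips_verdict(pkt) == "clean":
        return "forwarded"
    if waf_verdict(pkt) == "quarantine":
        return "quarantined"
    return "forwarded"  # cleared by the second tool
```

The benefit is that the second tool only sees the subset of traffic the first tool flagged, rather than the full line rate.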

Another great feature of packet brokers is that they can perform data decryption. This lets you inspect encrypted data for malware, which is a considerable threat now that over half of Internet traffic is encrypted. If you can’t inspect the data, you have no idea what you just let into your network. An integrated decryption function typically saves you time and money during the inspection process and significantly reduces the complexity of dealing with encrypted data. If an external decryption appliance is preferred, the NPB can easily support that option as well: you simply connect the device to the NPB and create the filter rules.

The third area to focus on is the reliability of your solution. No one wants to implement an architecture that completely stops the flow of network data or causes the network intermittent transmission issues. This gets very complicated and expensive if you choose the non-NPB option here. For one thing, you will need redundant equipment (which means double the cost). You will also need load balancers and potentially dual routing capabilities.

With the bypass and NPB option, survivability is built in. The bypass switch and NPB both support heartbeat messaging. Should either device fail to get a return heartbeat, it considers the attached device down and follows its failover procedure. Even in a failover situation, the bypass switch or NPB will continue to send heartbeats to see if the failed device has come back online. If so, the solution reverts to regular operation. This helps create a self-healing architecture. Proper heartbeat algorithms will prevent a “tromboning” condition, should a faulty device fluctuate between being online and offline.
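The heartbeat-with-hysteresis idea can be sketched as follows; the class and thresholds are illustrative assumptions, not any vendor’s actual algorithm. Requiring several consecutive successful heartbeats before reverting is what suppresses the tromboning behavior:

```python
class HeartbeatMonitor:
    """Toy heartbeat tracker for one inline tool. The tool is marked down
    after `fail_threshold` consecutive missed heartbeats, and only restored
    to service after `recover_threshold` consecutive successes, so a
    flapping device cannot trombone in and out of the traffic path."""

    def __init__(self, fail_threshold: int = 3, recover_threshold: int = 5):
        self.fail_threshold = fail_threshold
        self.recover_threshold = recover_threshold
        self.misses = 0
        self.hits = 0
        self.online = True

    def heartbeat(self, replied: bool) -> bool:
        if replied:
            self.hits += 1
            self.misses = 0
            if not self.online and self.hits >= self.recover_threshold:
                self.online = True   # device stable again: revert to normal
        else:
            self.misses += 1
            self.hits = 0
            if self.online and self.misses >= self.fail_threshold:
                self.online = False  # fail over: bypass the tool
        return self.online
```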

The NPB also supports multiple survivability options. First, you can have full redundancy of all equipment (redundant bypass switches, redundant NPBs, and redundant security appliances). Second, you can deploy a high availability option for just the NPB, so that it has dual processors running. If one fails, the other will continue to process data. The third option is to use the NPB to support load balancing and n+1 survivability for your security tools. For instance, if you need 3 tools to process the data load, simply add a fourth tool. The NPB will split the load equally between all four devices. However, should any one of these devices fail, the NPB will divide the load across the remaining three devices. This provides a high level of reliability without the cost of full redundancy. You can also increase your network survivability by implementing n+2 survivability all the way to n+n, if you want.
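A minimal sketch of the n+1 idea, assuming a simple hash-based flow distribution (real NPBs typically balance on flow tuples and redistribute in hardware):

```python
def balance(flows, tools):
    """Distribute flows across the tools currently marked healthy.
    When one tool fails, its share is spread over the survivors
    (n+1 survivability) instead of requiring a fully redundant set."""
    healthy = [name for name, up in tools.items() if up]
    assignment = {name: [] for name in healthy}
    for flow in flows:
        assignment[healthy[hash(flow) % len(healthy)]].append(flow)
    return assignment
```

With four healthy tools each carries a quarter of the load; fail one and the same call spreads the full load over the remaining three, with no operator intervention.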

You now have all the basic information that you need to start planning an inline security architecture. If you want more information on this topic, try reading this whitepaper.

New Partnership Announcement – Aukua Systems + Telnet Networks

Telnet Networks is now partnering with Aukua Systems, Inc. to be the Canadian reseller for Aukua’s industry-leading Ethernet testing and monitoring solutions. Aukua is an Austin, TX-based company whose products have been supporting QA and R&D teams at network equipment manufacturer, semiconductor, military and aerospace, telecommunication service provider, healthcare, finance, and enterprise IT organizations since 2015. Aukua’s MGA2510 and XGA4250 platforms operate as a powerful Ethernet traffic generator, connect transparently inline as a protocol analyzer and traffic monitoring system, or serve as an Ethernet and Fibre Channel network impairment emulator.

The Traffic Generator and Analyzer supports Bit Error Rate (BER) testing, throughput testing, precise latency measurement analysis and impairment jamming.

The Inline Capture and Protocol Analyzer connects transparently inline in the network or test-bed helping Aukua’s customers to detect and resolve complex issues, capture traffic, or analyze throughput and error conditions in real-time.

The Ethernet Network Impairment Emulator and Fibre Channel SAN Delay Emulator connect inline in the test-bed helping Aukua’s customers recreate network delay and impairments, in the lab, for more realistic performance testing.

If you would like more information please contact us.

Should I Be Concerned About Duplicate Packets on My Network?

Duplicate packets – no big deal, right? That answer is most often wrong. While it is normal to have some duplicate packets on your network, the number of duplicate packets matters, as does where they are coming from. You could have a problem and not even know it.

Duplicate packets of monitoring data can come from several sources, including the use of SPAN ports and the geographic location of data captures. For instance, a normally configured SPAN port (which is frequently used to connect monitoring tools to the network) can generate multiple copies of the same packet and transfer that data to your security and monitoring tools. These copies are exact duplicates of the original packet. Even when optimally configured, a SPAN port may generate between one and four copies of a packet and the duplicate packets can represent as much as 50% of the network traffic being sent to a monitoring tool.

It also matters where you capture monitoring data, as this can create duplicate data as well. If you capture the data at the ingress to the network and then again in the core, you have probably copied the same data twice. This double capture is in addition to whatever duplicates were made by the core switches themselves.

Purpose of De-duplication

Why do duplicate packets matter? Eliminating unnecessary packets from the data inspection process increases the amount of data that your monitoring tools can handle. A 50% reduction in the amount of data that needs to be processed is significant. This can result in sizable cost savings for your network, as you should be able to reduce the number of tools you need and potentially delay the implementation of higher-bandwidth pipes.

So how do you accomplish this? Advanced context-aware data processing features, like de-duplication, within a network packet broker (NPB) can remove these duplicate packets. The NPB is capable of removing duplicate packets at full line rate before forwarding the traffic to the monitoring tools. Multiple copies are simply dropped from the data stream, with no negative effect on the tools. A large de-duplication window, and the ability to configure the window size within the NPB, makes the de-duplication feature extremely powerful. Based upon Ixia customer research, tool efficiency improvements of 30 to 50% have been seen when an NPB is used to perform de-duplication.
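Conceptually, windowed de-duplication boils down to remembering a digest of each recent packet and dropping any packet whose digest reappears within the window. A minimal Python sketch, with illustrative window size and digest choice (real NPBs do this in hardware at line rate):

```python
import hashlib
from collections import OrderedDict

def dedup(packets, window=1024):
    """Drop packets whose digest was already seen within the last
    `window` unique packets (a sliding de-duplication window)."""
    seen = OrderedDict()
    unique = []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest in seen:
            continue                  # duplicate inside the window: drop
        unique.append(pkt)
        seen[digest] = True
        if len(seen) > window:
            seen.popitem(last=False)  # age out the oldest entry
    return unique
```

The configurable window matters: too small, and duplicates that arrive far apart slip through; larger windows catch more duplicates at the cost of more state.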

The de-duplication process is literally as simple as deleting the unnecessary copies of the packet data.

Typical Use Cases

As mentioned earlier, the most common de-duplication use case is to filter out unnecessary copies of packet data when SPAN ports are used in the network. This reduces the load on security and monitoring tools.

A second use case is for Cisco Application Centric Infrastructure (ACI) architectures. Redundant traffic streams and a distributed leaf and spine architecture means that you have to tap in multiple places to collect all of the monitoring data needed in this architecture. This creates a significant amount of duplicate data that needs to be removed from the monitoring stream. To complicate matters, leaf portions of the networks are running at 40 Gbps and the spine portions typically run at 100 Gbps. Removal of 40 Gbps duplicate packets can be very expensive, if this is performed by the monitoring tool at line rate instead of by an NPB.

A third use case is to actually turn off de-duplication periodically to perform a network analysis. Once the function is turned off, it can be observed how duplicate traffic is created and from where. This alerts you to probable network errors – either a poor network design that is creating duplicate traffic or equipment that is potentially failing and generating duplicate packets in error.

Considerations

Here are some things to keep in mind when considering de-duplication solutions.

Why not just buy a monitoring tool that has de-duplication built into it and skip the NPB? – Some monitoring tools can definitely perform this function as well. The issue with the tool performing this function is that you are now spending tool CPU resources and time to perform this function. This slows down the processing capability of the tool and might even necessitate buying another tool to handle the extra load. Since monitoring tools are often expensive, this can become a costly choice. A packet broker is usually a much more cost-effective alternative since it is purpose-built for these types of functions.

Packet brokers can perform de-duplication cost-effectively at line speeds – Since an NPB is purpose-built for de-duplication, a properly built NPB can perform de-duplication at line rate up to 100 Gbps. Only a few of the monitoring tools on the market can even handle this capability at 40 Gbps, or higher, as this places a heavy burden on the CPU. So, verify that your NPB selection can truly perform the functions needed at full load. This means you’ll have to test the vendor’s system (and not rely on whatever the vendor says).

More Information on De-duplication

If you want more information on the benefits of de-duplication and Ixia’s NPB solution, read this whitepaper.

The difference between network and security monitoring

Network monitoring can be described in many ways, but very often it is defined as follows: ‘Network management is the process of configuring, monitoring and maintaining a reliable network ensuring connectivity between devices and the people or software applications.’ Several frameworks have been developed around network monitoring, for example FCAPS (Fault, Configuration, Accounting, Performance, and Security) from the TMN model. While this definition focuses on functions, the TeleManagement Forum’s (TMF) Business Process Framework, eTOM, focuses more on business and processes. In the TMF model, the overlaying strategy spans from lifecycle management to operations readiness and support to fulfillment, assurance, and billing.

Security monitoring, in summary, is the automated process of collecting and analyzing indicators of potential security threats, and triaging those threats with the appropriate action.

This blog focuses on comparing the functionality of network and security monitoring. Frameworks aside, network and security operations teams have an ever-increasing amount of work and responsibility in taking care of the overall health, performance, and security of a business’ infrastructure as technologies evolve, complexity increases, and enterprise networks continue to grow.

At first glance network and security operations may look similar or at least partially overlapping, but these functions and tools serve different (and essential) purposes within an organization.

Network Monitoring

Network monitoring looks at analyzing and tracking the health of an organization’s network. Network monitoring detects problems caused by malfunctioning devices, servers, overloaded resources, firewalls, and Virtual Machines (VMs), to name a few. The tools available for cloud-native environments differ in some ways and are worth another blog entirely.

Network operations teams need to understand network topology, configurations, performance, and security. Small enterprises can sometimes get by with cloud-hosted infrastructure and monitoring, without the need to fully comprehend the underlying technology. However, an organization’s infrastructure consists of numerous elements, such as hybrid cloud and even bare metal infrastructure, that frequently span over multiple locations and utilize a variety of technologies making network monitoring more important and complex than ever before.

Many network monitoring technologies enable end-to-end network and application visibility. Network monitoring is carried out with a set of tools such as NMS/EMS platforms, dedicated applications such as application monitoring, and diagnostic and troubleshooting tools. Proactive network monitoring is an essential component that helps detect performance issues early, preventing network failures and downtime. This is typically achieved with forecasting algorithms that combine fault, performance, and configuration data, increasingly with the help of AI/ML.
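As a toy stand-in for such forecasting, a simple least-squares trend over recent utilization samples can flag a link that is projected to hit capacity. The function name, horizon, and threshold logic are illustrative assumptions, far simpler than the AI/ML approaches mentioned above:

```python
def forecast_exhaustion(samples, capacity, horizon=5):
    """Fit a linear trend (ordinary least squares) to recent utilization
    samples and project `horizon` intervals ahead. Returns (alert, value)
    where alert is True if the projection reaches capacity.
    Requires at least two samples."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
             / sum((x - x_mean) ** 2 for x in xs))
    projected = y_mean + slope * (n - 1 + horizon - x_mean)
    return projected >= capacity, projected
```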

The most common network monitoring protocol is the Simple Network Management Protocol (SNMP). In addition to SNMP, Syslog, flow-based monitoring, and packet analysis are used. ICMP is used especially for troubleshooting, by analyzing error messages sent by network devices. The rise of threats, attacks, and ransomware has made the administration and security role pivotal. As a result, CISOs have become an integral part of organizations and are responsible for security and privacy matters at the highest company level.

Network Security Monitoring

Network Security Monitoring, especially XDR (Extended Detection and Response), is a security threat detection and incident response tool rather than just passive observation. SIEM can be a part of the process or it can be used as a tool to support both Network and Security monitoring.

While network monitoring collects data for the analysis of network and application health and overall system structure and integrity, network security monitoring analyzes, among other things:

  • Network signaling
  • Network payload
  • Used protocols
  • Client-server communications
  • Encrypted traffic sessions
  • Traffic patterns and traffic flow
  • Anomaly detection
  • Network confidentiality, integrity, and availability (CIA triad)

Unlike traditional network monitoring, network security monitoring enables evidence-based decision-making by detecting intrusions, for example zero-day vulnerabilities. Modern continuous network monitoring and analysis technologies provide levels of detection and mitigation support that can significantly lower the likelihood of a successful attack or breach. At the same time, however, attackers’ methods have grown more sophisticated, and the race continues.

Though SNMP is useful for monitoring networks and planning future capacity, it does not offer granular information, for example about signaling traffic. In an increasing number of cases, the only way to detect a sophisticated attack is to analyze the packets and traffic patterns. This approach results in high bandwidth utilization and requires high-capacity, high-performance tools. Therefore, many tools either convert packets to PCAP files for analysis or instead use generated NetFlow, IPFIX, or similar flow metrics. There are also products on the market that further compress NetFlow/IPFIX to reduce the hardware requirements at the analytics end. The question is whether detailed, granular data is required or whether aggregated or compressed data is sufficient. This depends on the tool and use case, but full security screening often requires raw packet information.

NetFlow records are created in the network element (or by a device that converts packets to NetFlow), cached, and stored. After a flow goes dormant or a preset time passes, the device exports the flow records to a flow collector, which receives and preprocesses the data. A flow analyzer then provides insights through visualization, statistics, and historical and real-time reporting. Collectors and analyzers are often bundled into a larger network monitoring system.
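The flow-cache step can be sketched as follows, assuming packets represented as simple dictionaries; real exporters also track timestamps, TCP flags, and expiry timers before exporting records:

```python
from collections import defaultdict

# Classic 5-tuple used to key a flow record.
FLOW_KEY = ("src_ip", "dst_ip", "src_port", "dst_port", "proto")

def build_flows(packets):
    """Aggregate packets into NetFlow-style records keyed by the 5-tuple,
    accumulating packet and byte counts per flow."""
    cache = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = tuple(pkt[f] for f in FLOW_KEY)
        rec = cache[key]
        rec["packets"] += 1
        rec["bytes"] += pkt["length"]
    return dict(cache)
```

This is also why flow metrics are so much cheaper to ship and analyze than raw packets: thousands of packets sharing a 5-tuple collapse into one record.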

Why do Businesses need Network and Security Monitoring?

Network monitoring is vital in having visibility and control over the network, optimizing network performance and reliability, improving the bottom line, understanding current and future capacity, and finally ensuring corporate compliance. Automation has become one of the most important factors in network monitoring due to the increasing complexity and number of network elements. Manual actions are just too slow, error-prone, and require too much manpower.

Network monitoring, for example, focuses on understanding the composition, availability, status, behaviour, performance, and configuration of all the components within the infrastructure. It also actively tests the availability and accessibility of IP hosts. Network monitoring as such is not designed for security management, so a set of security management tools is needed, either as an integrated tool or as separate but connected applications.

Today security monitoring is part of a company’s compliance and regulatory requirements. Data breaches can be costly in many ways: ransom cost, fines, and compensations, bad publicity, lower brand value, and paused operations preventing business activities. Some of these can have a long-lasting impact on the company, also reducing its stock value.

Network security monitoring has the core objective of minimizing downtime by preventing attacks and preserving data to keep an organization operational. By combining active and passive security monitoring and automating the processes as far as possible, organizations can protect themselves from network threats and identify attackers.

Together network and security monitoring provide comprehensive information, analysis, and reports:

  • Enable both network operations and network security staff to collect, filter, and refine their investigations in order to identify problems
  • Determine if the event is a normal network or malicious/disruptive activity
  • Provide continuous, real-time, and reliable data gathering for extracting crucial information about the health and security posture of the network
  • Deploy active testing tools to test vital network functions
  • Allow automation and standardized trouble ticketing processes

Both TAPs and NPBs are fundamentally important in providing visibility to the network by providing a copy of packets traversing the network.

Cubro Network Packet Broker EX48200
  • A TAP is a stand-alone piece of hardware that mirrors packets by making an exact copy of the traffic, ensuring that total visibility is provided across all of the network’s security and monitoring platforms.
  • An NPB (also known as a traffic aggregator) optimizes the traffic between the TAP and the monitoring systems. It improves the functionality of network analysis and security tools, and helps optimize network security and the performance of monitoring and analysis tools by decapsulating tunneling protocols, slicing packets if needed, aggregating, filtering, replicating, and load balancing.
  • Advanced NPBs generate NetFlow/IPFIX and PCAP files for a given period of time and provide a basic security view.

While network monitoring can cope with information received from network elements, many other tools require packet information. This is where network visibility becomes essential: without tapping and NPBs, the information from the network is incomplete or even unusable. NPBs play another important role in optimizing, reducing, and formatting the data for the monitoring tool, thus reducing the investment needed.