By Brian Handrigan on Monday, 31 March 2014
Category: Network Management

When to Dedupe Packets: Trending vs. Troubleshooting

When it comes to points of network visibility, common knowledge dictates that more is always better. And, when we’re talking about troubleshooting and identifying the point of delay, increased visibility is important. But multiple points of visibility can also lead to duplicate packets. There are times when duplicate packets help in troubleshooting, but they can also lead to incorrect analysis and forecasting. It’s critical for successful network management to understand when duplicate packets can be your friend or foe.

To determine whether you need duplicate packets, you need to understand what type of analysis is being done: network trending or troubleshooting.

 

How duplicates impact network analysis

Trending: Duplicate packets result in data being counted multiple times, skewing trending statistics such as application performance and utilization metrics. Processing and analyzing the data also takes longer.

Troubleshooting: When correlating packets that traverse multiple segments, for example via MultiHop or Server-to-Server analysis, capturing those duplicates is critical for pinpointing where traffic was dropped or where excessive delay occurred.
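To get a feel for how much duplicates distort trending numbers, you can compare a capture's raw byte count against its deduplicated byte count. The sketch below does this with the scapy library against a hypothetical capture file named core-uplink.pcap; it hashes each packet's raw bytes, so it is a rough estimate rather than the field-aware deduplication a packet broker performs.

# Rough sketch: estimate how much duplicate packets inflate trending stats.
# Assumes scapy is installed; "core-uplink.pcap" is a hypothetical capture file.
import hashlib
from scapy.all import rdpcap

packets = rdpcap("core-uplink.pcap")

seen = set()
total_bytes = 0
unique_bytes = 0
for pkt in packets:
    raw = bytes(pkt)
    total_bytes += len(raw)
    digest = hashlib.sha1(raw).digest()
    if digest not in seen:          # first copy of this packet
        seen.add(digest)
        unique_bytes += len(raw)

overstatement = (total_bytes - unique_bytes) / max(unique_bytes, 1) * 100
print(f"Raw bytes:    {total_bytes}")
print(f"Unique bytes: {unique_bytes}")
print(f"Utilization/throughput stats overstated by roughly {overstatement:.1f}%")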

Typically, as an engineer, you want to have access to duplicate packets when necessary for troubleshooting, but you do not want those duplicate packets to skew network traffic summary statistics. So, how do you design a monitoring infrastructure that gives you the flexibility to quickly troubleshoot while ensuring accurate trending?

1) Utilize two appliances when capturing and analyzing traffic.

The first should be a probe appliance, such as the Gigabit or 10 Gb Probe appliances, dedicated specifically to trending. The second should be a retrospective network analysis solution, such as the GigaStor, devoted to capturing all of the traffic, including duplicate packets. When a problem shows up in the trending data, you then have access to all the packets for troubleshooting.

2) Develop a monitoring strategy that minimizes duplicates for trending.

The advantage of avoiding duplicate packets by design is that it reduces the processing that your hardware will have to perform to remove duplicates.

  1. Be selective when choosing monitoring points.

Identify the aggregation points on your network, such as where traffic enters the core or a server farm and naturally collapses from multiple links and devices into a single primary link. This gives you maximum visibility from a single vantage point when looking at performance or trending statistics.

  2. Don’t get too carried away with SPANs or mirror ports.

Monitoring traffic between devices that communicate on the same switch can be tricky and will produce a lot of duplicate packets if you are not mindful of how the data flows. Identify the key paths the data takes, such as the communication between a front-end server and a back-end server connected to the same switch.

If you monitor all the traffic to and from both devices, you will end up with duplicate traffic. In that case, mirror the traffic to and from the front-end server only. This captures the conversations between the clients and the front end as well as the conversations between the front end and the back end.

Additionally, spanning a VLAN or multiple ports can cause duplicates. Spanning uplink ports, or using TAPs, is very useful when monitoring communication between devices that are connected to different switches.
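If you suspect a SPAN or mirror configuration is double-feeding traffic, a quick check against a capture can show which conversations are being counted twice. This is a minimal sketch assuming the scapy library and a hypothetical capture file named span-session.pcap; the file name and the idea of ranking conversations by duplicate count are illustrative, not a vendor feature.

# Sketch: find which conversations a SPAN session is mirroring twice.
# Assumes scapy is installed; "span-session.pcap" is a hypothetical capture file.
import hashlib
from collections import Counter
from scapy.all import rdpcap, IP

seen = set()
dup_conversations = Counter()

for pkt in rdpcap("span-session.pcap"):
    digest = hashlib.sha1(bytes(pkt)).digest()
    if digest in seen and pkt.haslayer(IP):
        # A repeat of a packet already seen: note which conversation it belongs to.
        pair = tuple(sorted((pkt[IP].src, pkt[IP].dst)))
        dup_conversations[pair] += 1
    seen.add(digest)

for (a, b), count in dup_conversations.most_common(5):
    print(f"{a} <-> {b}: {count} duplicate packets")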

3) When capturing packets for trending, remove duplicates via hardware.

If you’re using a network monitoring switch (or network packet broker), like Matrix, verify that it has packet deduplication. This matters when you aggregate multiple links, which throws all the traffic, including duplicates, into a single bucket before feeding the data to the analysis device. If you have a GigaStor, you can also use its Gen2 capture card to perform deduplication.
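For illustration, the same idea a deduplicating capture card or packet broker applies in hardware can be sketched in software: hash each packet on fields that do not change between copies, and drop repeats seen within a short window. The sketch below assumes the scapy library, a hypothetical capture file named trending.pcap, and an arbitrary 50 ms window; it shows the general technique, not how Matrix or the Gen2 card is actually implemented.

# Sketch: window-based packet deduplication over a capture file.
import hashlib
from scapy.all import rdpcap, wrpcap, IP

WINDOW = 0.05  # seconds; copies of the same packet typically arrive close together

def packet_key(pkt):
    # Hash from the IP layer up, zeroing fields that legitimately differ between
    # copies of the same packet (TTL and IP checksum change per hop; hashing above
    # the Ethernet header ignores differing MAC addresses).
    if not pkt.haslayer(IP):
        return hashlib.sha1(bytes(pkt)).digest()
    ip = pkt[IP].copy()
    ip.ttl = 0
    ip.chksum = 0
    return hashlib.sha1(bytes(ip)).digest()

def dedupe(in_file, out_file):
    last_seen = {}   # packet hash -> timestamp of the most recent copy
    unique = []
    for pkt in rdpcap(in_file):
        key = packet_key(pkt)
        ts = float(pkt.time)
        if key in last_seen and ts - last_seen[key] <= WINDOW:
            continue     # duplicate within the window: drop it
        last_seen[key] = ts
        unique.append(pkt)
    wrpcap(out_file, unique)

dedupe("trending.pcap", "trending-deduped.pcap")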

By being aware of the impact of duplicates on monitoring, and by dedicating hardware to trending and troubleshooting, you can keep forecasting and monitoring accurate while still allowing granular, speedy troubleshooting.

Thanks to Network Instruments for the article. 
