A Simple Solution To Combating Virtual Data Center Blind Spots

Blind spots are a long-established threat to virtual data centers. They are inherent to virtual machine (VM) technology because of the lack of visibility into inter- and intra-VM traffic, the typical practices around VM use, and the use of multiple hypervisors in enterprise environments.

Virtual machines by their nature hide inter- and intra-VM traffic, because that traffic stays within a very small geographic area. As I mentioned in a previous blog, Do You Really Know What’s Lurking in Your Data Center?, Gartner Research found that 80% of VM traffic never reaches the top of the rack, where it can be captured by traditional monitoring technology. This means that if something is happening to that 80% of your data (a security threat, performance issue, compliance issue, etc.), you’ll never know about it. This is a huge area of risk.

In addition, an Ixia market survey on virtualization technology, released in March 2015, exposed a high propensity for data center blind spots due to typical data center practices. The report showed that hidden data (i.e., blind spots) likely exists on typical enterprise data networks because of inconsistent monitoring practices, in several cases the lack of monitoring practices altogether, and the typical absence of one central group responsible for collecting monitoring data.

For instance, only 37% of the respondents were monitoring their virtualized environment with the same processes they use in their physical data center environments, and the monitoring that was done usually used fewer capabilities in the virtual environment. This means key monitoring information may NOT be captured for the virtual environment, which could lead to security, performance, and compliance issues for the business. In addition, only 22% of businesses designated the same staff to be responsible for monitoring and managing their physical and virtual technology. When different groups are responsible for monitoring practices and capabilities, inconsistencies in data collection and in the execution of company processes often follow.

The survey further revealed that only 42% of businesses monitor the personally identifiable information (PII) transmitted and stored on their networks. At the same time, two thirds of the respondents were running critical applications within their virtual environments. Taken together, these “typical practices” should definitely raise warning signs for IT management.

Additional research by firms like IDC and Gartner is exposing another set of risks for enterprises around the use of multiple hypervisors in the data center. For instance, the IDC Virtualization and the Cloud 2013 study found that 16% of customers had already deployed or were planning to deploy more than one hypervisor, and another 45% were open to the idea in the future. In September 2014, another IDC market analysis stated that over half of enterprises (51%) now have more than one type of hypervisor installed. A Gartner poll from July 2014 also corroborated that multiple hypervisors are being used in enterprises.

This trend is positive, as having a second hypervisor is a good strategy for an enterprise. Multiple hypervisors allow you to:

  • Negotiate pricing discounts by simply having multiple suppliers
  • Help address corporate multi-vendor sourcing initiatives
  • Provide improved business continuity scenarios for product centric security threats

But it is also very troubling, because the cons include:

  • Extra expenses for the set-up of a multi-vendor environment
  • Poor to no visibility into a multi-hypervisor environment
  • An increase in general complexity (particularly management and programming)
  • And further complexities if you have advanced data center initiatives (like automation and orchestration)

One of the primary concerns is lack of visibility. With a proper visibility strategy, the other cons of a multi-hypervisor environment can be partially or completely mitigated. One way to accomplish this is to deploy a virtual tap that includes filtering capability. The virtual tap gives you access to all the data you need, and that data can be forwarded to a packet broker for distribution to the right tool(s). Built-in filtering is an important feature of the virtual tap because it lets you limit costs and bandwidth requirements.

Blind spots can create the following issues:

  • Hidden security issues
  • Inadequate access to data for trending
  • Inadequate data to demonstrate proper regulatory compliance policy tracking

Virtual taps (like the Ixia Phantom vTap) address blind spots and their inherent dangers.

If the virtual tap is integrated into a holistic visibility approach using a Visibility Architecture, you can streamline your monitoring costs: instead of two separate monitoring architectures with potentially duplicate equipment (and duplicate costs), you have one architecture that maximizes the efficiency of all your current tools, as well as any future investments. When installing the virtual tap, the key is to make sure that it installs into the hypervisor without adversely affecting it. Once this is accomplished, the virtual tap will have the access it needs to inter- and intra-VM traffic, as well as the ability to efficiently export that information. The virtual tap will also need a filtering mechanism so that exported data can be properly limited and does not overload the LAN/WAN infrastructure; the last thing you want is to cause performance problems on your network. Details on these concepts and best practices are available in the whitepapers Illuminating Data Center Blind Spots and Creating A Visibility Architecture.

As mentioned earlier, a multi-hypervisor environment is now a fact for the enterprise. The Ixia Phantom vTap supports multiple hypervisors and has been optimized for VMware ESX and kernel-based virtual machine (KVM) environments. KVM, which has been part of the Linux kernel since 2007, is starting to make a big push into the enterprise. According to IDC, shipments of KVM licenses were around 5.2 million units in 2014, and IDC expects that number to increase to 7.2 million by 2017. Much of the KVM ecosystem is organized by the Open Virtualization Alliance, whose recommendations the Phantom vTap supports.

To learn more, please visit the Ixia Phantom vTap product page, the Ixia State of Virtualization for Visibility Architectures 2015 report or contact us to see a Phantom vTap demo!

Additional Resources:

Ixia Phantom vTap

Ixia State of Virtualization for Visibility Architectures 2015 report

White Paper: Illuminating Data Center Blind Spots

White Paper: Creating A Visibility Architecture

Blog: Do You Really Know What’s Lurking in Your Data Center?

Solution Focus Category

Network Visibility

Thanks to Ixia for the article.

Ixia Exposes Hidden Threats in Encrypted Mission-Critical Enterprise Applications

Delivers industry’s first visibility solution that includes stateful SSL decryption to improve application performance and security forensics

Ixia (Nasdaq: XXIA), a leading provider of application performance and security resilience solutions, announced it has extended its Application and Threat Intelligence (ATI) Processor™ to include stateful, bi-directional SSL decryption capability for application monitoring and security analytics tools. Stateful SSL decryption provides complete session information to better understand the transaction as opposed to stateless decryption that only provides the data packets. As the sole visibility company providing stateful SSL decryption for these tools, Ixia’s Visibility Architecture™ solution is more critical than ever for enterprise organizations looking to improve their application performance and security forensics.

“Together, FireEye and Ixia offer a powerful solution that provides stateful SSL inspection capabilities to help protect and secure our customer’s networks,” said Ed Barry, Vice President of Cyber Security Coalition for FireEye.

As malware and other indicators of compromise are increasingly hidden by SSL, decryption of SSL traffic for monitoring and security purposes is now more important for enterprises. According to Gartner research, for most organizations, SSL traffic is already a significant portion of all outbound Web traffic and is increasing. It represents on average 15 percent to 25 percent of total Web traffic, with strong variations based on the vertical market. Additionally, compliance regulations such as PCI-DSS and HIPAA increasingly require businesses to encrypt all sensitive data in transit. Finally, business applications like Microsoft Exchange, Salesforce.com and Dropbox run over SSL, making application monitoring and security analytics much more difficult for IT organizations.

Enabling visibility without borders – a view into SSL

In June, Ixia enabled seamless visibility across physical, virtual and hybrid cloud data centers. Ixia’s suite of virtual visibility products allows insight into east-west traffic running across the modern data center. The newest update, which includes stateful SSL decryption, extends security teams’ ability to look into encrypted applications revealing anomalies and intrusions.

Visibility for better performance – improve what you can measure

While it may enhance security of transferred data, encryption also limits network teams’ ability to inspect, tune and optimize the performance of applications. Ixia eliminates this blind spot by providing enterprises with full visibility into mission critical applications.

The ATI Processor works with Ixia’s Net Tool Optimizer® (NTO™) solution and brings a new level of intelligence to network packet brokers. It is supported by the Ixia Application & Threat Intelligence research team, which provides fast and accurate updates to application and threat signatures and application identification code. Additionally, the new capabilities will be available to all customers with an ATI Processor and an active subscription.

To learn more about Ixia’s latest innovations read:

ATI processor

Encryption – The Next Big Security Threat

Thanks to Ixia for the article. 

Campus to Cloud Network Visibility

Visibility. Network visibility. Simple terms that are thrown around quite a bit today. But the reality isn’t quite so simple. Why?

Scale for one. It’s simple to maintain visibility for a small network. But large corporate or enterprise networks? That’s another story altogether. Visibility solutions for these large networks have to scale from one end of the network to the other end – from the campus and branch office edge to the data center and/or private cloud. Managing and troubleshooting performance issues demands that we maintain visibility from the user to application and every step or hop in between.

So deploying a visibility architecture or design from campus to cloud requires scale. When I say scale, I mean scale on multiple layers – 5 layers to be exact – product, portfolio, design, management, and support. Let’s look at each one briefly.

Product Scale

Building an end-to-end visibility architecture for an enterprise network requires products that can scale to the total aggregate traffic from across the entire network, and filter that traffic for distribution to the appropriate monitoring and visibility tools. This specifically refers to network packet brokers that can aggregate traffic from 1GE, 10GE, 40GE, and even 100GE links. But it is more than just I/O. These network packet brokers have to have capacity that scales – meaning they have to operate at wire rate – and provide a completely non-blocking architecture whether they exist in a fixed port configuration or a modular- or chassis-based configuration.

Portfolio Scale

Building an end-to-end visibility architecture for an enterprise network also requires a portfolio that can scale. This means a full portfolio selection of network taps, virtual taps, inline bypass switches, out-of-band network packet brokers, inline network packet brokers, and management. Without these necessary components, your designs are limited and your future flexibility is limited.

Design Scale

Building an end-to-end visibility architecture for an enterprise network also requires a set of reference designs or frameworks that can scale. IT organizations expect their partners to provide solutions and not simply product – partners that can provide architectures or design frameworks that solve the most pressing challenges that IT is grappling with on a regular basis.

Management Scale

Building an end-to-end visibility architecture for an enterprise network requires management scale. Management scale is pretty much self-explanatory – a management solution that can manage the entire portfolio of products used in the overall design framework. However, it goes beyond that. Management requires integration. Look for designs that can also integrate easily into existing data center management infrastructures. Look for designs that allow automated service or application provisioning. Automation can really help to provide management scalability.

Support Scale

Building and supporting an end-to-end visibility architecture for an enterprise network requires support services that scale, both in skill sets and in geography. “Skill sets” implies that deployment and technical support personnel understand more than simply the product: they also understand the environments in which these visibility architectures operate. And obviously, support services must be 24x7 and cover deployments globally.

So, if you’re looking to build an end-to-end visibility solution for your enterprise network, consider the scalability of the solution you’re considering. Consider scale in every sense of the word, not simply product scale. Deploying campus to cloud visibility requires scale from product, to portfolio, to design, to management, to support.

Additional Resources:

Ixia network visibility solutions

Ixia network packet brokers

Thanks to Ixia for the article

NTO Now Provides Twice the Network Visibility

Ixia is proud to announce that we are expanding one of the key capabilities of the Ixia xStream platforms, “Double Your Ports,” to our Net Tool Optimizer (NTO) family of products. As of our 4.3 release, the capability to double the number of network and monitor inputs is now available on the NTO platform. If you are not familiar with Double Your Ports, it is a feature that lets you add network or tool ports to your existing NTO by allowing different devices to share a single port. For example, if you have used all of the ports on your NTO but want to add a new tap, you can enable Double Your Ports so that a Net Optics Tap and a monitoring tool share the same port, utilizing both the RX and TX sides of the port. This is how it works:

Standard Mode

In the standard mode, the ports will behave in a normal manner: when there is a link connection on the RX, the TX will operate. When the RX is not connected, the system assumes the TX link is also not connected (down).

Loopback Mode

When you designate a port to be loopback, the data egressing on the TX side will forward directly to the RX side of the same port. This functionality does not require a loopback cable to be plugged into the port. The packets will not transmit outside of the device even if a cable is connected.

Simplex Mode

When you designate a port to be in simplex mode, the port’s TX state is not dependent on the RX state. In the standard mode, when the RX side of the port goes down, the TX side is disabled. If you assign a port mode to simplex, the TX state is up when there is a link on the TX even when there is no link on the RX. You could use a simplex cable to connect a TX of port A to an RX of port B. If port A is in simplex mode, the TX will transmit even when the port A RX is not connected.

To “double your ports” you switch the port into simplex mode, then use simplex fiber cables and connect the TX fiber to a security or monitoring tool and the RX fiber to a tap or switch SPAN port. On NTO, the AFM ports such as the AFM 16 support simplex mode allowing you to have 32 connections per module: 16 network inputs and 16 monitor outputs simultaneously (with advanced functions on up to 16 of those connections). The Ixia xStream’s 24 ports can be used as 48 connections: 24 network inputs and 24 monitor outputs simultaneously.
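The three port modes above boil down to a simple rule for when the TX side of a port transmits. Here is a rough sketch of that rule (a simplified model for illustration only, not device firmware):

```python
# Simplified model of the three port modes described above.
# This is an illustrative sketch, not Ixia device behavior verbatim.

def tx_operates(mode, rx_link_up, tx_link_up):
    """Whether the TX side of a port transmits, given the port mode."""
    if mode == "standard":
        return rx_link_up        # TX follows the RX link state
    if mode == "loopback":
        return False             # frames loop internally, never egress
    if mode == "simplex":
        return tx_link_up        # TX is independent of RX
    raise ValueError(f"unknown mode: {mode}")

# A tap connected only to TX works in simplex mode but not standard mode:
print(tx_operates("standard", rx_link_up=False, tx_link_up=True))  # False
print(tx_operates("simplex",  rx_link_up=False, tx_link_up=True))  # True
```

Simplex mode is what makes “double your ports” possible: because TX no longer depends on RX, the two sides of one physical port can serve two unrelated devices.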

The illustration below shows the RX and TX links of two AFM ports on the NTO running in simplex mode. The first port’s RX is receiving traffic from the Network Tap and the TX is transmitting to a monitoring tool.

The other port (right hand side on NTO) is interconnected to the Network Tap with its RX using a simplex cable whereas its TX is unused (dust-cap installed).

With any non-Ixia solution, this would have taken up three physical ports on the packet broker. With Ixia’s NTO and xStream packet brokers we are able to double up the traffic and save a port for this simple configuration, with room to add another monitoring tool where the dust plug is shown. If you expand this across many ports you can double your ports in the same space!


Click here to learn more about Ixia’s Net Tool Optimizer family of products.

Additional Resources:

Ixia xStream

Ixia NTO solution

Ixia AFM

Solution Focus Category

Network Visibility

Thanks to Ixia for the article.

Cost-Effective Monitoring for Multi-Device Copper Networks is Here!

Proper access is the core component of any visibility architecture—you need to be able to capture the data before you can properly analyze it. To further help our customers, Ixia has released a new regenerator tap for copper networks. Regeneration means you get the same clean copy of incoming data distributed to multiple output ports in real time.

The Ixia Net Optics Regeneration Taps solve the key physical layer challenges of multi-device monitoring for 10, 100, and 1000 Mbps (1GbE) copper networks. Up to 16 devices can be connected to a single regenerator tap. This helps IT maximize resources and save on access points, because multiple devices can monitor link traffic simultaneously through one cost-effective tap. Secure, passive access for many devices delivers a superior return on your monitoring investments.

The regeneration tap is perfect for simple out-of-band access or when you need in-line monitoring. Once you have the proper data, it can then be forwarded to a packet broker for filtering or sent on directly to monitoring tools.

To get more details on this new product offering, visit the Ixia Copper Regenerator Tap product page.

Additional Resources:

Ixia Copper Regenerator Taps

Solution Focus Category

Network Visibility

Thanks to Ixia for the article.

Improving Network Visibility – Part 4: Intelligent, Integrated, and Intuitive Management

In the three previous blogs in this series, I answered an often asked customer question – “What can really be done to improve network visibility?” – with discussions on data and packet conditioning, advanced filtering, and automated data center capability. In the fourth part of this blog series, I’ll reveal another set of features that can further improve network visibility and deliver even more verifiable benefits.

To quickly summarize, this multi-part blog covers an in-depth view of the various features that deliver true network visibility benefits. There are five fundamental feature sets:

  • Data & Packet Conditioning
  • Advanced Packet Filtering
  • Automated Real Time Response Capability
  • Intelligent, Integrated, and Intuitive Management
  • Vertically-focused Solution Sets

When combined, these capabilities can “supercharge” your network. This is because the five categories of monitoring functionality work together to create a coherent group of features that can, and will, lift the veil of complexity. These feature sets need to be integrated, yet modular, so you can deploy them to attack the complexity. This will allow you to deliver the right data to your monitoring and security tools and ultimately solve your business problems.

This fourth blog focuses on intelligent, integrated, and intuitive management of your network monitoring switches – also known as network packet brokers (NPB). Management of your equipment is a key concern. If you spend too much time on managing equipment, you lose productivity. If you don’t have the capability to properly manage all the equipment facets, then you probably won’t derive the full value from your equipment.

When it comes to network packet brokers, the management of these devices should align to your specific needs. If you purchase the right NPBs, the management for these devices will be intelligent, integrated, and intuitive.

So, what do we mean by intelligent, integrated, and intuitive? The following are the definitions I use to describe these terms and how they can control/minimize complexity within an element management system (EMS):

Intuitive – This involves a visual display of information: in particular, an easy-to-read GUI that shows you your system, ports, and tool connections at a glance, so you don’t waste time or miss things scattered across a myriad of other views.

Integrated – Everyone wants the option of “One Stop Shopping.” For NPBs, this means no separate executables required for basic configuration. Best-of-breed approaches often sound good, but the reality of integrating lots of disparate equipment can become a nightmare. You’ll want a monitoring switch that has already been integrated by the manufacturer with lots of different technologies. This gives you the flexibility you want without the headaches.

Intelligent – A system that is intelligent can handle most of the nitpicky details, which are usually the ones that take the most effort and reduce productivity the most. Some examples include: the need for a powerful filtering engine behind the scenes to prevent overlap filtering and eliminate the need to create filtering tables, auto-discovery, ability to respond to commands from external systems, and the ability to initiate actions based upon user defined threshold limits.

At the same time, scalability is the top technology concern of IT for network management products, according to the EMA report Network Management 2012: Megatrends in Technology, Organization and Process published in February 2012. A key component of being able to scale is the management capability. Your equipment management capability will throttle how well your system scales or doesn’t.

The management solution for a monitoring switch should be flexible but powerful enough to allow for growth as your business grows. It should consistently be part of the solution, not the problem, and must therefore support current and potential future needs. The element management system needs to allow for system growth, either natively or through configuration changes. There are some basic tiered levels of functionality that are needed. I’ve summarized these below, but more details are available in a whitepaper.

Basic management needs (these features are needed for almost all deployments)

  • Centralized console – Single pane of glass interface so you can see your network at a glance
  • The ability to quickly and easily create new filters
  • An intuitive interface to easily visualize existing filters and their attributes
  • Remote access capability
  • Secure access mechanisms

Small deployments – Point solutions of individual network elements (NEs) (1 to 3) within a system

  • Simple but powerful GUI with a drag and drop interface
  • The ability to create and apply individual filters
  • Full FCAPS (fault, configuration, accounting, performance, security) capability from a single interface

Clustered solutions – Larger solutions for campuses or distributed environments with 4 to 6 NEs within a system

  • These systems need an EMS that can look at multiple monitoring switches from a single GUI
  • More points to control also require minimal management and transmission overhead, to reduce clutter on the network
  • Ability to create filter templates and libraries
  • Ability to apply filter templates to multiple NEs

Large systems – Require an EMS for large scale NE control

  • Need the ability for bulk management of NEs
  • Require a web-based (API) interface to the existing NMS
  • Need the ability to apply a single template to multiple NEs
  • Need role-based permissions (that offer the ability to set and forget filter attributes, lock down ports and configuration settings, “internal” multi-tenancy, security for “sensitive” applications like CALEA, and user directory integration – RADIUS, TACACS+, LDAP, Active Directory)
  • Usually need integration capabilities for reporting and trend analysis

Integrated solutions – Very large systems will require integration to an external NMS either directly or through EMS

  • Need Web-based interface (API) for integration to existing NMS and orchestration systems
  • Need standardized protocols that allow external access to monitoring switch information (SYSLOG, SNMP)
  • Require role-based permissions (as mentioned above)
  • Requires support for automation capabilities to allow integration to data center and central office automation initiatives
  • Must support integration capabilities for business intelligence collection, trend analysis, and reporting

Statistics should be available within the NPB, as well as through the element management system, to provide business intelligence information. This information can be used for instantaneous information or captured for trend analysis. Most enterprises typically perform some trending analysis of the data network. This analysis would eventually lead to a filter deployment plan and then also a filter library that could be exported as a filter-only configuration file loadable through an EMS on other NPBs for routine diagnostic assessments.

More information on the Ixia Net Tool Optimizer (NTO) monitoring switch and advanced packet filtering is available on the Ixia website. In addition, we have the following resources available:

  • Building Scalability into Visibility Management
  • Best Practices for Building Scalable Visibility Architectures
  • Simplify Network Monitoring whitepaper

Additional Resources:

Ixia Net Tool Optimizer (NTO)

White Paper: Building Scalability into Visibility Management

Ixia Visibility Solutions

Thanks to Ixia for the article. 

“Who Makes the Rules?” The Hidden Risks of Defining Visibility Policies

Imagine what would happen if the governor of one state got to change all the laws for the whole country for a day, without the other states or territories ever knowing about it. And then the next day, another governor gets to do the same. And then another.

Such foreseeable chaos is precisely what happens when multiple IT or security administrators define traffic filtering policies without some overarching intelligence keeping tabs on who’s doing what. Each user acts from their own unique perspective with the best of intentions, but with no way to know how the changes they make might impact other efforts.

In most large enterprises, multiple users need to be able to view and alter policies to maximize performance and security as the network evolves. In such scenarios, however, “last in, first out” policy definition creates dangerous blind spots, and the risk may be magnified in virtualized or hybrid environments where visibility architectures aren’t fully integrated.

Dynamic Filtering Accommodates Multiple Rule-makers, Reduces Risk of Visibility Gap

Among the advances added to the latest release of Ixia’s Net Tool Optimizer™ (NTO) network packet brokers are enhancements to the solution’s unique Dynamic Filtering capabilities. This patented technique imposes that overarching intelligence over the visibility infrastructure as multiple users act to improve efficiency or divert threats. The technology becomes an absolute requirement when automation is used in the data center, because dynamic changes to network filters require advanced recalculation of other filters to ensure overlaps are updated and no data is lost.

Traditional rule-based systems may give a false sense of security and leave an organization vulnerable as security tools don’t see everything they need to see in order to do their job effectively. Say you have 3 tools each requiring slightly different but overlapping data.

  • Tool 1 wants a copy of all packets on VLAN 1-3
  • Tool 2 wants a copy of all packets containing TCP
  • Tool 3 wants a copy of all packets on VLAN 3-6

Overlap occurs in that both Tools 1 and 3 need to see TCP traffic on VLAN 3. In rule-based systems, once a packet matches a rule, it is forwarded on and no longer available: Tool 1 will receive TCP packets on VLAN 3, but Tool 3 will not. This creates a false sense of security, because Tool 3 still receives data and generates no alarm, which seems to indicate all is well. But what if the data stream going only to Tool 1 contains the smoking gun? Tool 3 would have detected it. And as we know from recent front-page breaches, a single incident can ruin a company’s brand image and have a severe financial impact.
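The difference between the two approaches can be sketched in a few lines. This is a hypothetical model (not Ixia’s implementation); the rule and packet structures are invented for illustration:

```python
# Hypothetical sketch contrasting first-match rule forwarding with
# dynamic filtering that copies overlapping packets to every matching tool.

def matches(rule, pkt):
    """A rule matches when every constrained field's value is allowed."""
    return all(pkt.get(field) in allowed for field, allowed in rule.items())

RULES = [
    ("tool1", {"vlan": range(1, 4)}),    # Tool 1: VLANs 1-3
    ("tool2", {"proto": {"tcp"}}),       # Tool 2: all TCP
    ("tool3", {"vlan": range(3, 7)}),    # Tool 3: VLANs 3-6
]

def first_match(pkt):
    """Rule-based system: the packet goes only to the first matching tool."""
    for tool, rule in RULES:
        if matches(rule, pkt):
            return [tool]
    return []

def dynamic(pkt):
    """Dynamic filtering: every tool whose rule matches gets a copy."""
    return [tool for tool, rule in RULES if matches(rule, pkt)]

pkt = {"vlan": 3, "proto": "tcp"}   # TCP on VLAN 3 - the overlap case
print(first_match(pkt))             # ['tool1'] - tool3 never sees it
print(dynamic(pkt))                 # ['tool1', 'tool2', 'tool3']
```

With first-match forwarding, the overlapping packet reaches only tool 1; dynamic filtering delivers a copy to all three tools, so no tool is silently starved of data it was configured to see.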

Extending Peace of Mind across Virtual Networks

NVOS 4.3 also integrates physical and virtual visibility, allowing traffic from Ixia’s Phantom™ Virtualization Taps (vTaps) or standard VMware-based visibility solutions to be terminated on NTO along with physical traffic. Together, these enhancements eliminate serious blind spots inherent in other solutions, avoiding the potential risk and, in the worst case, liability caused by putting data at risk.

Integrating physical and virtual visibility minimizes equipment costs and streamlines control by eliminating extra devices that add complexity to your network. Other new additions, like the “double your ports” feature, extend the NTO advantage, delivering greater density, flexibility, and ROI.

Download the latest NTO NVOS release from www.ixiacom.com.

Additional Resources:

Ixia Visibility Solutions

Thanks to Ixia for the article

Advanced Packet Filtering with Ixia’s Advanced Filtering Modules (AFM)

An important factor in improving network visibility is the ability to pass the correct data to monitoring tools. Otherwise, it becomes very expensive and aggravating for most enterprises to sift through the enormous amounts of data packets being transmitted (now and in the near future). Bandwidth requirements are projected to continue increasing for the foreseeable future – so you may want to prepare now. As your bandwidth needs increase, complexity increases due to more equipment being added to the network, new monitoring applications, and data filtering rule changes due to additional monitoring ports.

Network monitoring switches are used to counteract complexity with data segmentation. There are several features that are necessary to perform the data segmentation needed and refine the flow of data. The most important features needed for this activity are: packet deduplication, load balancing, and packet filtering. Packet filtering, and advanced packet filtering in particular, is the primary workhorse feature for this segmentation.

While many monitoring switch vendors have filtering, very few can perform the advanced filtering that adds real value for businesses. In addition, filtering rules can become very complex and require a lot of staff time to write initially and then to maintain as the network constantly changes. This is time and money wasted on tool maintenance instead of time spent on quickly resolving network problems and adding new capabilities to the network requested by the business.

Basic Filtering

Basic packet filtering consists of filtering packets as they either enter or leave the monitoring switch. Filtering at the ingress restricts the flow of data (and information) from that point on. This is usually the worst place to filter, as tools and functionality downstream will never have access to the discarded data, and it eliminates the ability to share filtered data with multiple tools. However, ingress filtering is commonly used to limit the amount of data passed on to your tool farm, and/or for very security-sensitive applications that want to filter untrusted information as early as possible.

The following list provides common filter criteria that can be employed:

  • Layer 2
    • Source MAC address
    • VLAN
    • Ethernet type (e.g., IPv4, IPv6, AppleTalk, Novell, etc.)
  • Layer 3
    • DSCP/ECN
    • IP address
    • IP protocol (e.g., ICMP, IGMP, GGP, IP, TCP, etc.)
    • Traffic Class
    • Next Header
  • Layer 4
    • L4 port
    • TCP control flags

Filters can be set to either pass or deny traffic based upon the filter criteria.
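The pass/deny logic above can be sketched as a first-match rule table. The rule set, field names, and `apply_filters` helper below are hypothetical illustrations of the concept, not a vendor API; a packet is modeled as a plain dict of header fields.

```python
# Hypothetical rule table: each rule names a header field, a set of
# matching values, and an action ("pass" or "deny"); first match wins.
RULES = [
    {"field": "vlan", "values": {100, 200}, "action": "pass"},
    {"field": "l4_dport", "values": {23}, "action": "deny"},   # drop Telnet
    {"field": "ip_proto", "values": {"TCP", "UDP"}, "action": "pass"},
]

def apply_filters(packet, rules=RULES, default="deny"):
    """Return 'pass' or 'deny' for a packet represented as a dict of
    header fields (e.g. vlan, ip_proto, l4_dport)."""
    for rule in rules:
        if packet.get(rule["field"]) in rule["values"]:
            return rule["action"]
    return default

action = apply_filters({"vlan": 300, "ip_proto": "TCP", "l4_dport": 23})
# VLAN 300 matches no pass rule, so the Telnet deny rule fires.
```

Whether the default action is pass or deny, and whether rules apply at ingress or egress, is exactly the policy decision discussed above.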

Egress filters are primarily meant for fine-tuning the data packets sent to the tool farm. If an administrator tries to use them as the primary filtering mechanism, the egress port can easily be overloaded and packets dropped: the aggregated traffic from multiple network ports may significantly exceed the capacity of the tool port.
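The arithmetic behind that overload risk is simple. The port speeds below are assumed values for illustration:

```python
# Assumed topology: four 10 Gbit/s network ports aggregated toward a
# single 10 Gbit/s tool port.
ingress_gbps = [10, 10, 10, 10]
tool_port_gbps = 10

oversubscription = sum(ingress_gbps) / tool_port_gbps  # 4.0

# Sustained utilization above 1/oversubscription (25% per link here)
# exceeds the egress capacity and drops packets, which is why egress
# filtering alone cannot be the primary data-reduction step.
headroom_per_link = 1 / oversubscription
```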

Advanced Filtering

Network visibility comes from reducing the clutter and focusing on what’s important when you need it. One of the best ways to reduce this clutter is to add a monitoring switch that can remove duplicate packets and perform advanced filtering to direct data packets to the appropriate monitoring tools and application monitoring products deployed on your network. The fundamental requirement for visibility is getting the right data to the right tool to draw the right conclusions. Basic filtering isn’t enough to deliver correct insight into what is happening on the network.

But what do we mean by “advanced filtering”? Advanced filtering includes the ability to filter packets anywhere across the network using very granular criteria. Most monitoring switches filter only on the ingress and egress data streams.

Besides ingress and egress filtering, operators need packet processing functions as well, such as VLAN stripping, VNTag stripping, GTP stripping, MPLS stripping, deduplication, and packet trimming.

Ixia’s Advanced Feature Modules

The Ixia Advanced Feature Modules (AFM) help network engineers improve monitoring tool performance by optimizing the monitored network traffic to include only the essential information needed for analysis. Used in conjunction with the Ixia Net Tool Optimizer (NTO) product line, the AFM module has sophisticated capabilities that allow it to perform advanced processing of packet data.

Advanced Packet Processing Features

  • Packet De-Duplication – A normally configured SPAN port can generate multiple copies of the same packet, dramatically reducing the effectiveness of monitoring tools. The AFM16 eliminates redundant packets, at full line rate, before they reach your monitoring tools, increasing overall tool performance and accuracy.
  • Packet Trimming – Some monitoring tools only need to analyze packet headers. In other monitoring applications, meeting regulatory compliance requires that tools remove sensitive data from captured network traffic. The AFM16 can remove payload data from the monitored network traffic, which boosts tool performance and keeps sensitive user data secure.
  • Protocol Stripping – Many network monitoring tools have limitations when handling some types of Ethernet protocols. The AFM16 enables monitoring tools to access the data they need by removing GTP, MPLS, and VNTag headers from the packet stream.
  • GTP Stripping – Removes the GTP headers from a GTP packet, leaving the tunneled L3 and L4 headers exposed. This enables tools that cannot process GTP header information to analyze the tunneled packets.
  • NTP/GPS Time Stamping – Some latency-sensitive monitoring tools need to know when a packet traverses a particular point in the network. The AFM16 provides time stamping with nanosecond resolution and accuracy.
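Packet trimming is the easiest of these features to picture in code. The sketch below is a simplified illustration, not the AFM16's algorithm: it keeps a fixed header-sized prefix, whereas real hardware parses the actual header lengths before cutting.

```python
def trim_packet(packet_bytes, keep=64):
    """Keep only the first `keep` bytes of a packet (enough for typical
    Ethernet/IPv4/TCP headers) and discard the payload, so sensitive
    user data never reaches the monitoring tools."""
    return packet_bytes[:keep]

frame = bytes(200)            # a 200-byte frame of placeholder data
trimmed = trim_packet(frame)  # headers kept, 136 payload bytes dropped
```

Protocol stripping works on the same principle in reverse: instead of cutting the tail, it removes an encapsulation header (GTP, MPLS, VNTag) from the front or middle of the packet and splices the remainder together.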

Additional Resources:

Ixia Advanced Feature Modules

Ixia Visibility Architecture

Thanks to Ixia for the article. 

How Not to Rollout New Ideas, or How I Learned to Love Testing

I was recently reading an article in TechCrunch titled “The Problem With The Internet Of Things,” in which the author lamented how bad design or rollout of good ideas can kill promising markets. In his example, he discussed how turning on the lights in a room, through the Internet of Things (IoT), became a five-step process rather than the simple one-step process we currently use (the light switch).

This illustrates the tension between the grand idea and the practicality of the market: it’s awesome to contemplate a future where exciting technology impacts our lives, but only if the realities of everyday use are taken into account. As he effectively states, “Smart home technology should work with the existing interfaces of household objects, not try to change how we use them.”

Part of the problem is that the IoT is still just a nebulous concept. Its everyday implications haven’t been worked out. What does it mean when all of our appliances, communications, and transportation are connected? How will they work together? How will we control and manage them? How the users of exciting technology will actually participate in the experience is the real driver of technology success. And too often, this aspect is glossed over or ignored.

And, once everything is connected, will those connections be a door for malware or hacktivists to bypass security?

Part of the solution to getting new technology to customers in a meaningful way, one that delivers both a quality end-user experience and a profitable model for the provider, is network validation and optimization. Application performance and security resilience are key when rolling out, providing, integrating, or securing new technology.

What do we mean by these terms? Well:

  • Application performance means we enable successful deployments of applications across our customers’ networks
  • Security resilience means we make sure customer networks are resilient to the growing security threats across the IT landscape

Companies deploying applications and network services—in a physical, virtual, or hybrid network configuration—need to do three things well:

  • Validate. Customers need to validate their network architecture to ensure they have a well-designed network, properly provisioned, with the right third party equipment to achieve their business goals.
  • Secure. Customers must secure their network performance against all the various threat scenarios—a threat list that grows daily and impacts their end users, brand, and profitability.

(Just over last Thanksgiving weekend, Sony Pictures was hacked and five of its upcoming pictures leaked online—with the prime suspect being North Korea!)

  • Optimize. Customers seek network optimization by obtaining solutions that give them 100% visibility into their traffic—eliminating blind spots. They must monitor applications traffic and receive real-time intelligence in order to ensure the network is performing as expected.

Ixia helps customers address these pain points, and achieve their networking goals every day, all over the world. This is the exciting part of our business.

When we discuss solutions with customers, no matter who they are—Bank of America, Visa, Apple, NTT—they all do three things the same way in their networks:

  • Design—Envision and plan the network that meets their business needs
  • Rollout—Deploy network upgrades or updated functionality
  • Operate—Keep the production network seamlessly providing a quality experience

These are the three big lifecycle stages for any network design, application rollout, security solution, or performance design. Achieving these milestones successfully requires three processes:

  • Validate—Test and confirm design meets expectations
  • Secure—Assess the performance and security in real-world threat scenarios
  • Optimize—Scale for performance, visibility, security, and expansion

So when it comes to new technology and new applications of that technology, we are in an amazing time—evidenced by the fact that nine billion devices will be connected to the Internet in 2018. Examples of this include Audio Video Bridging, Automotive Ethernet, Bring Your Own Apps (BYOA), etc. Ixia sees only huge potential. Ixia is a first line of defense in creating the kind of quality customer experience that ensures satisfaction, brand excellence, and profitability.

Additional Resources:

Article: The Problem With The Internet Of Things

Ixia visibility solutions

Ixia security solutions

Thanks to Ixia for the article.

What if Sony Used Ixia’s Application and Threat Intelligence Processor (ATIP)?

Trying to detect intrusions in your network and extract data from it is a tricky business. Deep insight requires a deep understanding of the context of your network traffic: where connections are coming from, where they are going, and which specific applications are in use. Without this breadth of insight, you can’t take action to stop and remediate attacks, especially from Advanced Persistent Threats (APTs).

To see how Ixia helps its customers gain this actionable insight into the applications and threats on their network, we invite you to watch this quick demo of Ixia’s Application and Threat Intelligence Processor (ATIP) in action. Chief Product Officer Dennis Cox uses Ixia’s ATIP to help you understand threats in real time, with the actual intrusion techniques employed in the Sony breach.

Additional Resources:

Ixia Application and Threat Intelligence Processor

Thanks to Ixia for the article.