Don’t Be Lulled to Sleep with a Security Fable...

Once upon a time, all you needed was a firewall to call yourself “secure.” But then, things changed. More networks are created every day, every network is visible to the others, and they connect with each other all the time—no matter how far away or how unrelated.

And malicious threats have taken notice...

As the Internet got bigger, anonymity got smaller. It’s impossible to go “unnoticed” on the Internet now. Everybody is a target.

In today’s network landscape, every network is under the threat of attack all the time. The network “security perimeter” has expanded in reaction to new attacks, new breeds of hackers, more regions coming online, and emerging regulations.

Security innovation tracks threat innovation by creating more protection—but this comes with more complexity, more maintenance, and more to manage. Security investment rises with expanding requirements. A firewall alone doesn’t come close to cutting it anymore.

Next-generation firewalls, IPS/IDS, antivirus software, SIEM, sandboxing, DPI: all of these tools have become part of the security perimeter in an effort to stop malicious traffic from getting into (and out of) your network. And they are overloaded, and overloading your security teams.

In 2014, there were close to 42.8 million cyberattacks (roughly 117,339 attacks each day) in the United States alone. These days, the average North American enterprise fields around 10,000 alerts each day from its security systems, far more than its IT team can possibly process, according to a Damballa analysis of traffic.

Your network’s current attack surface is huge. It is the sum of every access avenue an attacker could use to enter your network (or take data out of your network). Basically, every connection to and/or from anywhere.

There are two types of traffic that hit every network: the traffic worth analyzing for threats, and the traffic not worth analyzing, which should be blocked immediately before any security resource is wasted inspecting it or following up on it.

Any way to filter out traffic that is known to be good or known to be bad, so it doesn’t need to go through security screening, reduces the load on your security staff. With a reduced attack surface, your security resources can focus on a much tighter band of information and not get distracted by non-threatening (or obviously threatening) noise.
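To make the triage idea concrete, here is a minimal Python sketch (illustrative only; the address ranges and flow format are invented, not any vendor’s API) of sorting flows into “block,” “pass,” and “inspect” before anything reaches the deeper security stack:

```python
from ipaddress import ip_address, ip_network

KNOWN_BAD = [ip_network("203.0.113.0/24")]    # e.g. a threat-intel blocklist
KNOWN_GOOD = [ip_network("198.51.100.0/24")]  # e.g. trusted partner ranges

def triage(flow):
    """Return 'block', 'pass', or 'inspect' for a flow record."""
    src = ip_address(flow["src_ip"])
    if any(src in net for net in KNOWN_BAD):
        return "block"    # discard immediately; no analyst time spent
    if any(src in net for net in KNOWN_GOOD):
        return "pass"     # forward without deep inspection
    return "inspect"      # only this band reaches the security tools

print(triage({"src_ip": "203.0.113.7"}))  # -> block
```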

Thanks to Ixia for the article.

5 Reasons Why You Should Include LAN Switches in Your NCCM Scope

We’ve been doing a lot of blogging around here lately about NCCM and the importance of having an automated configuration and change management system. We’ve even published a Best practices guide for NCCM. One of the main points in any NCCM system is having consistent and accurate configuration backups of all of your “key” devices.

When I ask Network Managers to name their key devices, they generally start with WAN / Internet routers and Firewalls. This makes sense of course because, in a modern large-scale network, connectivity (WAN / Internet routers) & security (Firewalls) tend to get most of the attention. However, we think that it’s important not to overlook core and access switching layers. After all, without that “front line” connectivity – the internal user cannot get out to the WAN/Internet in the first place.

With that in mind, today’s blog offers up 5 Reasons Why You Should Include LAN Switches in Your NCCM Scope

1. Switch Failure

LAN switches tend to be some of the most utilized devices in a network. They also don’t generally come with the top-quality hardware and redundant power supplies that core devices have. In many cases, they may also be located in less than pristine locations. Dirty manufacturing floors, dormitory closets, remote office kitchens – I have seen access switches in all of these places. When you combine a heavy workload with tough conditions and less expensive parts, you have a recipe for devices that will fail at a higher rate.

So, when the time comes to replace or upgrade a switch, having its configuration backed up, and a system which can automate the provisioning of the new device, can be a real time and workload saver. Just put the IP address and some basic management information on the new device, and the NCCM tool should be able to take care of the rest in mere minutes.
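As a rough illustration of that kind of automated provisioning, here is a minimal Python sketch using the open-source Netmiko library; the device type, host, credentials, and backup path are placeholder assumptions, not a description of any particular NCCM product:

```python
from netmiko import ConnectHandler

def restore_config(host, backup_path):
    # Read the last known-good configuration from the backup repository
    with open(backup_path) as f:
        config_lines = f.read().splitlines()

    conn = ConnectHandler(
        device_type="cisco_ios",  # assumption: an IOS-style access switch
        host=host,
        username="admin",         # placeholder credentials
        password="secret",
    )
    conn.send_config_set(config_lines)  # push the backed-up configuration
    conn.save_config()                  # write it to startup-config
    conn.disconnect()

restore_config("192.0.2.10", "backups/access-sw-01.cfg")
```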

2. User Tracking

As the front line connectivity device for the majority of LAN users, the switch is the best place to track down user connections. You may want to know where a particular user is located, or maybe you are trying to troubleshoot an application performance issue; no matter what the cause, it’s important to have that connectivity data available to the IT department. NCCM systems may use layer 2 management data from CDP/LLDP as well as other techniques to gather this information. A good system will allow you to search for a particular IP/MAC/DNS and return connectivity information like which device/port it is connected to as well as when it was first and last seen on that port. This data can also be used to draw live topology maps which offer a great visualization of the network.
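Under the hood, a lookup like that can be as simple as the following Python sketch, which asks a switch which port a given MAC address is learned on; it assumes a Cisco IOS-style command and output format, and the host and credentials are placeholders:

```python
import re
from netmiko import ConnectHandler

def find_mac_port(host, mac):
    conn = ConnectHandler(device_type="cisco_ios", host=host,
                          username="admin", password="secret")
    output = conn.send_command(f"show mac address-table address {mac}")
    conn.disconnect()
    # Typical row: "  10    0050.7966.6800    DYNAMIC     Gi1/0/14"
    for line in output.splitlines():
        m = re.match(r"\s*\d+\s+([0-9a-f.]+)\s+\w+\s+(\S+)", line, re.I)
        if m:
            return m.group(2)   # the port, e.g. "Gi1/0/14"
    return None                 # MAC not currently learned on this switch

print(find_mac_port("192.0.2.10", "0050.7966.6800"))
```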

3. Policy Checking

Another area where the focus tends to be on “gateway” devices such as WAN routers and firewalls is policy checking. While those devices certainly should have lots of attention paid to them, especially in the area of security policies, we believe that it’s equally as important not to neglect the access layer when it comes to compliance. In general terms, there are two aspects of policy checking which need to be addressed on these devices: QoS policies and regulatory compliance policies.

The vast majority of VoIP and Video systems will connect to the network via a traditional LAN switch. These switches, therefore, must have the correct VLAN and QoS configurations in order to forward the traffic appropriately so that Quality of Service is maintained.

If your organization is subject to regulatory compliance standards such as PCI or HIPAA, then those regulations are applicable to all devices and systems that are connected to or pass sensitive data.

In both of these cases, it is incredibly important to ensure policy compliance on all of your devices, even the ones on the “edge” of your network.
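As a rough sketch of what automated policy checking can look like, the Python below scans a backed-up switch configuration for required lines; the two rules are invented examples (one QoS-style, one security-style), not actual PCI or HIPAA requirements:

```python
import re

POLICY_RULES = {
    # Example QoS rule: access ports should trust DSCP markings
    "qos_trust_dscp": r"^\s*mls qos trust dscp",
    # Example security rule: management access via SSH only, no telnet
    "ssh_only_mgmt": r"^\s*transport input ssh",
}

def check_policy(config_text):
    """Return the names of all rules the configuration fails."""
    return [name for name, pattern in POLICY_RULES.items()
            if not re.search(pattern, config_text, re.MULTILINE)]

with open("backups/access-sw-01.cfg") as f:     # placeholder path
    print(check_policy(f.read()))               # e.g. ['ssh_only_mgmt']
```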

4. Asset Lifecycle Management

Especially in larger and more spread-out organizations, just understanding what you have can be a challenge. At some point (and always when you are least prepared for it) you will get the “What do we have?” question from a manager. An NCCM system is exactly the right tool to use to answer this question. Even though NCCM is generally considered to be the tool for change, it is equally the tool for information. Only devices that are well documented can be managed, and that documentation is best supplied through the use of an automated inventory discovery system. Likewise, when it is time for a technology refresh, or even the build-out of a new location or network, understanding the current state of the existing network is the first step towards building an effective plan for the future.

5. New Service Initiatives

Whether you are a large IT shop or a service provider – new applications and services are always coming. In many cases, that will require widespread changes to the infrastructure. The change may be small or large, but if it needs to be implemented on a number of systems at the same time, it will require coordination and automation to get it done efficiently and successfully. In some instances, this will only require changes to the core, but in many cases it will require changes to the switch infrastructure as well. This is what NCCM tools were designed to do, and there is no reason that you should be handcuffed in your efforts to implement change just because you haven’t added all of your devices into the NCCM platform.

Networks are complicated systems of many individual components spread throughout various locations, with interdependencies that can be hard to comprehend without the help of network management tools. While the temptation may be to focus on the core systems, we think it’s critical to view all parts, even the underappreciated LAN switch, as equal pieces of the puzzle that should not be overlooked when implementing an NCCM system.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.


5 Perks of Network Performance Management

Network performance management is something that virtually every business needs, but not something that every business is actively doing, or even aware of. And why should they?

While understanding the technical side of things is best left to the IT department, understanding the benefits of a properly managed network is something that will help get the business managers on board, especially when good performance management solutions might be a cost that hadn’t been considered. So what are the benefits?

1. Avoiding downtime – Downtime across an entire network is going to be rare, but downtime in small areas of the network is possible if part of it gets overloaded. Downtime of any kind is just not something that a business can tolerate, for a few reasons:

  • it leaves that area of the network unmonitored, which is a serious security issue
  • shared files won’t be accessible, nor will they be updating as users save the files. This will lead to multiple versions of the same file, and quite a few headaches when the network is accessible again
  • downtime that affects customers is even worse, and can result in lost revenue or negative customer experiences

2. Network speed – This is one of the most important and easily quantified aspects of managing network flow. It will affect every user on the network constantly, and anything that slows down users means either more work hours or delays. Obviously, neither of these is a good problem to have. Whether it’s uploading a file, sending a file to a coworker, or sending a file to a client, speed is of paramount importance.

3. Scalability – Almost every business wants to grow, and nowhere is that more true than in the tech sector. As the business grows, the network will have to grow with it to support more employees and clients. By managing the performance of the network, it is very easy to see when or where it is being stretched too thin or overwhelmed. As performance degrades, it’s very easy to set thresholds that show when the network needs to be upgraded or enlarged.

4. Security – Arguably the most important aspect of network management, even though it might not be thought of as a performance aspect. An unsecured network is worse than a useless network, and data breaches can ruin a company. So how does this play into performance management?

By monitoring NetFlow performance, it’s easy to see where the most resources are being used. Many security attacks drain resources, so resource spikes in unusual areas can point to a security flaw. With proper software, these issues can be not only monitored, but also recorded and corrected.
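One simple way to picture that kind of spike detection: compare each host’s current traffic volume against its historical baseline and flag large deviations. The Python sketch below is illustrative only; the threshold and sample numbers are assumptions:

```python
from statistics import mean, stdev

def find_spikes(history, current, threshold=3.0):
    """history: {host: [past byte counts]}; current: {host: bytes now}."""
    spikes = []
    for host, samples in history.items():
        if len(samples) < 2:
            continue                      # not enough data for a baseline
        mu, sigma = mean(samples), stdev(samples)
        if sigma and (current.get(host, 0) - mu) / sigma > threshold:
            spikes.append(host)           # unusual volume: worth a look
    return spikes

history = {"10.0.0.5": [100, 120, 110, 90],
           "10.0.0.9": [200, 210, 190, 205]}
print(find_spikes(history, {"10.0.0.5": 5000, "10.0.0.9": 207}))
# -> ['10.0.0.5']
```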

5. Usability – Unfortunately, not all employees have a working knowledge of how networks operate. In fact, as many in IT support will attest, most employees aren’t tech savvy. However, most employees will need to use the network as part of their daily work. This conflict is why usability is so important. The easiest way to minimize training costs with any network management program is to ensure that it is as user-friendly as possible.

The fanciest, most impressive network performance management system isn’t worth anything if no one knows how to use and optimize it properly. Even if the IT department has no issues with it, the reports and general information should be as easy to decipher as possible.

Is your network as optimized as it could be? Are you able to monitor the network’s performance and flow, or perform network forensics to determine where issues are? Don’t try to tackle all of this on your own; contact us and let us help you support your business with the best network monitoring for your specific needs.


Thanks to NetFlow Auditor for the article.

Benefits of Network Security Forensics

The networks that your business operates on are often open and complex.

Your IT department is responsible for mitigating network risks, managing performance and auditing data to ensure functionality.

Using NetFlow forensics can help your IT team maintain the competitiveness and reliability of the systems required to run your business.

In IT, network security forensics involves the monitoring and analysis of your network’s traffic to gather information, obtain legal evidence and detect network intrusions.

These activities help your company perform the following actions:

  • Adjust to increased data and NetFlow volumes
  • Identify heightened security vulnerabilities and threats
  • Align with corporate and legislative compliance requirements
  • Contain network costs
  • Analyze network performance demands
  • Recommend budget-friendly implementations and system upgrades

NetFlow forensics helps your company maintain accountability and trace usage; these functions become increasingly difficult as your network becomes more intricate.

The more systems your network relies on, the more difficult this process becomes.

While your company likely has standard security measures in place (e.g., firewalls, intrusion detection systems, and sniffers), they lack the capability to record all network activity.

Tracking all your network activity in real-time at granular levels is critical to the success of your organization.

Until recently, the ability to perform this type of network forensics has been limited due to a lack of scalability.

Now, there are web-based solutions that can collect and store this data to assist your IT department with this daunting task.

Solution capabilities include:

  • Record NetFlow data at a micro level
  • Discover security breaches and alert system administrators in real-time
  • Identify trends and establish performance baselines
  • React to irregular traffic movements and applications
  • Provision network services more effectively

The ability to capture all of this activity will empower your IT department to provide more thorough analysis and take faster action to resolve system issues.

But, before your company can realize the full value of NetFlow forensics, your team needs to have a clear understanding of how to use this intelligence to take full advantage of these detailed investigative activities.

Gathering the data through automation is a relatively simple process once the required automation tools have been implemented.
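For a concrete sense of what that raw data looks like, here is a minimal Python sketch that decodes the fixed-size flow records in a NetFlow v5 export packet (a publicly documented format); a real collector would add error handling, v9/IPFIX template support, and storage:

```python
import socket
import struct

HDR = struct.Struct("!HHIIIIBBH")                # v5 header: 24 bytes
REC = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")  # v5 record: 48 bytes

def parse_v5(packet):
    version, count, *_ = HDR.unpack_from(packet, 0)
    assert version == 5, "not a NetFlow v5 packet"
    flows = []
    for i in range(count):
        f = REC.unpack_from(packet, HDR.size + i * REC.size)
        flows.append({
            "src": socket.inet_ntoa(f[0]),   # source IP address
            "dst": socket.inet_ntoa(f[1]),   # destination IP address
            "packets": f[5],
            "bytes": f[6],
            "src_port": f[9],
            "dst_port": f[10],
            "protocol": f[13],               # e.g. 6 = TCP, 17 = UDP
        })
    return flows
```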

Understanding how to organize these massive amounts of data into clear, concise and actionable findings is an additional skill set that must be developed within your IT team.

Having a team member, whether internal or via a third-party vendor, that can aggregate your findings and create visual representations that can be understood by non-technical team members is a necessary part of NetFlow forensics. It is important to stress the necessity of visualization; this technique makes it much easier to articulate the importance of findings.

In order to accurately and succinctly visualize security issues, your IT staff must have a deep understanding of the standard protocols of your network. Without this level of understanding, the ability to analyze and investigate security issues is limited, if not impossible.

Utilizing software to support the audit functions required to perform NetFlow forensics will help your company support the IT staff in gathering and tracking these standard protocols.

Being able to identify, track and monitor the protocols in an automated manner will enhance your staff’s ability to understand and assess the impact of these protocols on network performance and security. It will also allow you to quickly assess the impact of changes driven by real-time monitoring of your network processes.

Sound like a daunting task?

It doesn’t have to be. Choose a partner to support your efforts and help you build the right NetFlow forensics configuration to support your business.

Contact us today and let us help you manage your company’s IT network.

Thanks to NetFlow Auditor for the article.

The State of Enterprise Security Resilience – An Ixia Research Report

Ixia, an international leader in application performance and security resilience technology, conducted a survey to better understand how network security resilience solutions and techniques are used within the modern enterprise. While plenty of information exists on security products and threats, very little is available on how they are actually used, or on the techniques and technology that ensure security is completely integrated into the corporate network structure. This report presents the research we uncovered.

The survey explored security and visibility architectures through three areas of emphasis. The first focused on understanding the product types in use. The second focused on understanding the processes in use. The final area focused on understanding the people components of typical architectures.

This report features several key findings that include the following:

  • Many enterprises and carriers are still highly vulnerable to the effects of a security breach. This is due to failure to follow best practices, process issues, lack of awareness, and lack of proper technology.
  • Lack of knowledge, not cost, is the primary barrier to security improvements. However, typical annual spend on network security is less than $100K worldwide.
  • Security resilience approaches are growing in worldwide adoption. A primary contributor is the merging of visibility and security architectures. Additional data shows that life-cycle security methodologies and security resilience testing are also positive contributors.
  • The top two main security concerns for IT are data loss and malware attacks.

These four key findings confirm that while there are still clear dangers to network security in the enterprise, there is some hope for improvement. The severity of the risk has not gone away, but it appears that some are managing it with the right combination of investment in technology, training, and processes.

To read more, download the report here.

The State of Enterprise Security Resilience

Thanks to Ixia for the article.

5 Reasons Why You Must Back Up Your Routers and Switches

I’ve been working in the Network Management business for over 20 years, and in that time I have certainly seen my share of networks. Big and small, centralized and distributed, brand name vendor devices in shiny datacenters, and no-name brands in basements and bunkers. The one consistent surprise I continue to encounter is how many of these organizations (even the shiny datacenter ones) lack a backup system for their network device configurations.

I find that a little amazing, since I also can’t tell you the last time I talked to a company that didn’t have a backup system for its servers and storage systems. I mean, who doesn’t back up their critical data? It seems so obvious that hard drives need to be backed up in case of problems – and yet many of these same organizations, many of whom spend huge amounts of money on server backup, do not even think of backing up the critical configurations of the devices that actually move the traffic around.

So, with that in mind, I present 5 reasons why you must back up your Routers and Switches (and Firewalls, WLAN Controllers, Load Balancers, etc.).

1. Upgrades and new rollouts.

Network Devices get swapped out all of the time. In many cases, these rollouts are planned and scheduled. At some point (if you’re lucky) an engineer will think about backing up the configuration of the old device before the replacement occurs. However, I have seen more than one occasion when this didn’t happen. In those cases, the old device is gone, and the new device needs to be reconfigured from scratch – hopefully with all of the correct configs. A scheduled backup solution makes these situations a lot less painful.
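A scheduled backup can be as simple as the following Python sketch, shown here with the open-source Netmiko library; a real NCCM system adds scheduling, versioning, and change detection, and the inventory, credentials, and file layout below are placeholders:

```python
import os
from datetime import date
from netmiko import ConnectHandler

DEVICES = ["192.0.2.10", "192.0.2.11"]   # placeholder device inventory

def backup_all():
    os.makedirs("backups", exist_ok=True)
    for host in DEVICES:
        conn = ConnectHandler(device_type="cisco_ios", host=host,
                              username="admin", password="secret")
        running = conn.send_command("show running-config")
        conn.disconnect()
        # One timestamped file per device per day
        with open(f"backups/{host}-{date.today()}.cfg", "w") as f:
            f.write(running)

backup_all()
```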

2. Disaster Recovery.

This is the opposite of the simple upgrade scenario. The truth is that many times a device is not replaced until it fails. Especially those “forgotten” devices that sit on the edge of networks in ceilings and basements and far-flung places. These systems rarely get much “love” until there is a problem. Then, suddenly, there is an outage – and in the scramble to get back up and running, a central repository of the device configuration can be a time (and life) saver.

3. Compliance

We certainly see this more in larger organizations, but it also becomes a real driving force in smaller companies that operate in highly regulated industries like banking and healthcare. If your company falls into one of those categories, then chances are you actually have a duty to backup your devices in order to stay within regulatory compliance. The downside of being non-compliant can be harsh. We have worked with companies that were being financially penalized for every day they were out of compliance with a number of policies including failure to have a simple router / switch / firewall backup system in place.

4. Quick Restores.

Ask most network engineers and they will tell you – we’ve all had that “oops” moment when we were making a configuration change on the fly and realized just a second after hitting “enter” that we just broke something. Hopefully, we just took down a single device. Sometimes it’s worse than that and we go into real panic mode. I can tell you, it is at that exact moment that we realize how important configuration backups can be. The restoration process can be simple and (relatively) painless, or it can be really, really painful; and it all comes down to whether or not you have good backups.

5. Policy Checking.

One of the often overlooked benefits of backing up your device configurations is that it allows an NCCM system to then automatically scan those backups and compare them to known good configurations in order to ensure compliance with company standards. Normally, this is a very tedious (and therefore ignored) process – especially in large organizations with many devices and many changes taking place. Sophisticated systems can quickly identify when a device configuration has changed, immediately back up the new config, and then scan that config to make sure it’s not violating any company rules. Regular scans can be rolled up into scheduled reports which provide management with a simple but important audit of all devices that are out of compliance.
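To illustrate the detect-and-scan step, here is a minimal Python sketch that diffs the newest backup against the previous one; the file names are invented, and a real NCCM system would follow a non-empty diff with its policy scan and an alert:

```python
import difflib

def config_diff(old_path, new_path):
    with open(old_path) as a, open(new_path) as b:
        old, new = a.readlines(), b.readlines()
    # An empty list means the configuration has not changed
    return list(difflib.unified_diff(old, new, old_path, new_path))

changes = config_diff("backups/sw01-2015-10-01.cfg",
                      "backups/sw01-2015-10-02.cfg")
if changes:
    print("".join(changes))   # audit trail of exactly what changed
    # ...then re-run the compliance rules against the new configuration
```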

Bottom Line:

Routers, Switches and Firewalls really are the heart of a network. Unless they are running smoothly, everything suffers. One of the simplest yet most effective practices for helping ensure the smooth operation of a network is to implement an automatic device configuration backup system.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article. 

CIO Review – Infosim Unified Solution for Automated Network Management

CIO Review

20 Most Promising Networking Solution Providers

Virtualization has become the lifeblood of the networking industry today. With the advent of technologies such as software-defined networking and network function virtualization, the black box paradigm or the legacy networking model has been shattered. In the past, the industry witnessed networking technology such as Fiber Distributed Data Interface (FDDI), which eventually gave way to Ethernet, the predominant network of choice. This provided opportunities to refresh infrastructures and create new networking paradigms. Today, we see a myriad of proprietary technologies competing for next-generation networking models that are no longer static, opaque, or rigid.

Ushering in a new way of thinking and unlocking new possibilities, customers are increasingly demanding automation from network solution providers. The key requirement is an agile network controlled from a single source. Visibility into the network has also become a must-have in the networking spectrum, providing real-time information about the events occurring inside the networks.

In order to enhance enterprise agility, improve network efficiency and maintain high standards of security, several innovative players in the industry are delivering cutting-edge solutions that ensure visibility, cost savings and automation in the networks. In the last few months we have looked at hundreds of solution providers who primarily serve the networking industry, and shortlisted the ones that are at the forefront of tackling challenges faced by this industry.

In our selection, we looked at the vendor’s capability to fulfill the burning needs of the sector through the supply of a variety of cost effective and flexible solutions that add value to the networking industry. We present to you CIO Review’s 20 Most Promising Networking Solution Providers 2015.

Infosim Unified Solution for Automated Network Management

Today’s networking technology, though very advanced, faces a major roadblock—the lack of automation in network management products. “These products are incapable of delivering a truly unified management approach as they are not an integrated solution but merely a collection of different programs bound together under one GUI to give them the appearance of an integrated solution,” notes Jim Duster, CEO, Infosim. Moreover, the need for continuously updating new device information, changes in configurations, and alerts and actions across these different toolsets is contributing to an ongoing financial burden for enterprises. Addressing these concerns with a unique network management solution is Infosim, a manufacturer of Automated Service Fulfillment and Service Assurance solutions.

Infosim offers StableNet, a unified solution developed and designed to cover performance management, fault management, and configuration management with software engineered with a single code base and a consistent data model underneath. “StableNet is the only “suite” within the network performance management software industry,” claims Duster. The solution addresses the existing operational and technical challenges of managing distributed, virtualized, and mission critical IT infrastructures. “With this approach, we are able to create work flows in every unique customer business and industry to cover many processes efficiently,” he adds. For instance, StableNet monitors the production equipment of a manufacturing company. In case of an equipment failure, the error is reported and StableNet delivers the root cause of the problem, while notifying an external service provider. The service provider’s technician can open an inspection window with StableNet, exchange the defective device and, after repair, provide feedback to the customer’s operations center.

For supporting the flexible deployment of StableNet, the company offers the Infosim StableNet appliance, a high performance, preconfigured, security-hardened hardware platform. “Appliances related to StableNet series reduce Total Cost of Ownership (TCO) by simplifying deployment, consolidating network infrastructure, and providing an extensible platform that can scale with your organization,” states Duster. StableNet also provides a low cost agent platform called the StableNet Embedded Agent (SNEA) that enables highly distributed installations to support End-to-End (E2E) Visibility, Cloud Monitoring and the Internet of Things. SNEA is economical to deploy and is auto-discovered at tactical collection points in networks, thus resulting in a low TCO for collecting and processing network performance actions and alerts.

Infosim StableNet is deployed across the networks of major players in the Telco and Enterprise markets, including that of a German auto manufacturer. Acting as the client’s centralized system, StableNet reduced their toolset from over 10 disparate software and hardware offerings from multiple suppliers to fewer than four. This significantly reduced TCO while increasing service levels. “Siloed IT personnel who used to hide behind non-consolidated results from their individual solutions were all synchronized into one solution, speeding productivity, collaboration and communication,” states Duster.

Infosim is currently participating in advanced research projects on Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) with several universities and leading industry device manufacturers. “The company applies 50 percent of its new software development resource to customer needs which assists customers in achieving milestones in vendor agnostic device support additions, industry specific capabilities, and features that were envisioned by real users,” asserts Duster.

For the years ahead, Infosim plans to build upon its product capability by automating the processes and activities that produce guaranteed service levels and reduce the consumption of human resources in the Network Operations Center (NOC). “Our vision is to enable the Dark NOC, which means a large percent of the non-value adding activities of network engineers can be automated in our product, freeing network engineers to work on proactive new innovations and concepts,” Duster concludes.

Thanks to Infosim for the article. 

3 Reasons for Real Time Configuration Change Detection

So far, we have explored what NCCM is, and taken a deep dive into device policy checking – in this post we are going to explore Real Time Configuration Change Detection (or just Change Detection, as I will call it in this blog). Change Detection is the process by which your NCCM system is notified – either directly by the device or by a 3rd party system – that a configuration change has been made on that device. Why is this important? Let’s identify 3 main reasons that Change Detection is a critical component of a well-deployed NCCM solution.

1. Unauthorized change recording. As anyone that works in an enterprise IT department knows, changes need to be made in order to keep systems updated for new services, users and so on. Most of the time, changes are (and should be) scheduled in advance, so that everyone knows what is happening, why the change is being made, when it is scheduled and what the impact will be on running services.

However, the fact remains that anyone with the correct passwords and privilege level can usually log into a device and make a change at any time. Engineers who know the network and feel comfortable working on the devices will often just log in and make “on-the-fly” adjustments that they think won’t hurt anything. Unfortunately, as we all know, those “best intentions” can lead to disaster.

That is where Change Detection can really help. Once a change has been made, it will be recorded by the device, and a log can be transmitted either directly to the NCCM system or to a 3rd party logging server which then forwards the message to the NCCM system. At the most basic level this means that if something does go wrong, there is an audit trail which can be investigated to determine what happened and when. It can also potentially be used to roll back the changes to a known good state.

2. Automated actions.

Once a change has been made (scheduled or unauthorized) many IT departments will wish to perform some automated actions immediately at the time of change without waiting for a daily or weekly schedule to kick in. Some of the common automated activities are:

  • Immediate configuration backup. So that all new changes are recorded in the backup system.
  • Launch of a new discovery. If the change involved any hardware or OS type changes like a version upgrade, then the NCCM system should also re-discover the device so that the asset system has up-to-date information about the device

These automated actions can ensure that the NCCM and other network management applications are kept up to date as changes are being made, without having to wait for the next scheduled job to start. This ensures that other systems are not acting “blindly” when they try to perform an action with/on the changed device.

3. Policy Checking. New configuration changes should also prompt an immediate policy check of the system to ensure that the change did not inadvertently breach a compliance or security rule. If a policy has been broken, then a manager can be notified immediately. Optionally, if the NCCM system is capable of remediation, then a rollback or similar operation can happen to bring the system back into compliance immediately.

Almost all network devices are capable of logging hardware / software / configuration changes. Most of the time these can easily be exported in the form of an SNMP trap or Syslog message. A good NCCM system can receive these messages, parse them to understand what has happened and, if the log signifies that a change has taken place, take some action(s) as described above. This real-time configuration change detection mechanism is a staple of an enterprise NCCM solution and should be implemented in all organizations where network changes are commonplace.
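As a sketch of that mechanism, the following Python listens for syslog messages and reacts to the standard Cisco IOS %SYS-5-CONFIG_I configuration-change message; the follow-up calls are placeholders for the automated actions described above:

```python
import socket

def watch_for_changes(bind_ip="0.0.0.0", port=514):
    # Standard syslog UDP port; binding to 514 typically requires root
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_ip, port))
    while True:
        data, (source_ip, _) = sock.recvfrom(4096)
        message = data.decode(errors="replace")
        if "%SYS-5-CONFIG_I" in message:   # "Configured from console by ..."
            print(f"Config change on {source_ip}: {message.strip()}")
            # Placeholder hooks for the actions described above, e.g.:
            # backup_device(source_ip); rediscover(source_ip)

watch_for_changes()
```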

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.

Ixia Taps into Visibility, Access and Security in 4G/LTE

The Growing Impact of Social Networking Trends on Lawful Interception

Lawful Interception (LI) is the legal process by which a communications network operator or Service Provider (SP) gives authorized officials access to the communications of individuals or organizations. With security threats mushrooming in new directions, LI is more than ever a priority and major focus of Law Enforcement Agencies (LEAs). Regulations such as the Communications Assistance for Law Enforcement Act (CALEA), mandate that SPs place their resources at the service of these agencies to support surveillance and interdiction of individuals or groups.

CALEA makes Lawful Interception a priority mission for Service Providers as well as LEA; its requirements make unique demands and mandate specific equipment to carry out its high-stakes activities. This paper explores requirements and new solutions for Service Provider networks in performing Lawful Interception.

A Fast-Changing Environment Opens New Doors to Terrorism and Crime

In the past, Lawful Interception was simpler and more straightforward because it was confined to traditional voice traffic. Even in the earlier days of the Internet, it was still possible to intercept a target’s communication data fairly easily.

Now, as electronic communications take on new forms and broaden to a potential audience of billions, data volumes are soaring, and the array of service offerings is growing apace. Lawful Interception Agencies and Service Providers are racing to thwart terrorists and other criminals who have the technological expertise and determination to carry out their agendas and evade capture. This challenge will only intensify with the rising momentum of change in communication patterns.

Traffic patterns have changed: in the past, it was easier to identify peer-to-peer applications or chat using well-known port numbers, and in order to evade LI systems, the bad guys had to work harder. Nowadays, most applications use standard HTTP and, in most cases, SSL to communicate. This puts an extra burden on LI systems, which must identify more targets overall in larger volumes of data with fewer filtering options.

Social Networking in particular is pushing usage to exponential levels, and today’s lawbreakers have a growing range of sophisticated, encrypted communication channels to exploit. With the stakes so much higher, Service Providers need robust, innovative resources that can contend with a widening field of threats. This interception technology must be able to collect volume traffic and handle data at unprecedented high speeds and with pinpoint security and reliability.

LI Strategies and Goals May Vary, but Requirements Remain Consistent

Today, some countries are using nationwide interception systems while others only dictate policies that providers need to follow. While regulations and requirements vary from country to country, organizations such as the European Telecommunications Standards Institute (ETSI) and the American National Standards Institute (ANSI) have developed technical parameters for LI to facilitate the work of LEAs. The main functions of any LI solution are to access Interception-Related Information (IRI) and Content of Communication (CC) from the telecommunications network and to deliver that information in a standardized format via the handover interface to one or more monitoring centers of law enforcement agencies.

High-performance switching capabilities, such as those offered by the Ixia Director™ family of solutions, should map to the following LI standards in order to be effective: they must be able to isolate suspicious voice, video, or data streams for interception based on IP address, MAC address, or other parameters, and they must be able to carry out filtering at wire speed. Requirements for supporting Lawful Interception activities include:

  • The ability to intercept all applicable communications of a certain target without gaps in coverage, including dropped packets, where missing encrypted characters may render a message unreadable or incomplete
  • Total visibility into network traffic at any point in the communication stream
  • Adequate processing speed to match network bandwidth
  • Undetectability, unobtrusiveness, and lack of performance degradation (a red flag to criminals and terrorists on alert for signs that they have been intercepted)
  • Real-time monitoring capabilities, because time is of the essence in preventing a crime or attack and in gathering evidence
  • The ability to provide intercepted information to the authorities in the agreed-upon handoff format
  • Load sharing and balancing of traffic that is handed to the LI system

From the perspective of the network operator or Service Provider, the primary obligations and requirements for developing and deploying a lawful interception solution include:

  • Cost-effectiveness
  • Minimal impact on network infrastructure
  • Compatibility and compliance
  • Support for future technologies
  • Reliability and security

Ixia’s Comprehensive Range of Solutions for Lawful Interception

This Ixia customer (the “Service Provider”) is a 4G/LTE pioneer that relies on Ixia solutions. Ixia serves the LI architecture by providing the access part of an LI solution in the form of Taps and switches. These contribute functional flexibility and can be configured as needed in many settings. Both the Ixia Director solution family and the iLink Agg™ solution can aggregate a group of links in traffic and pick out conversations with the same IP address pair from any of the links.

Following are further examples of Ixia products that can form a vital element of a successful LI initiative:

Test access ports, or Taps, are devices used by carriers and others to meet the capability requirements of CALEA legislation. Ixia is a global leader in the range and capabilities of its Taps, which provide permanent, passive access points to the physical stream.

Ixia Taps reside in both carrier and enterprise infrastructures to perform network monitoring and to improve both network security and efficiency. The passive characteristic of Taps means that network data is not affected whether the Tap is powered or not. As part of an LI solution, Taps have proven more useful than Span ports. If Law Enforcement Agencies must reconfigure a switch to send the right conversations to the Span port every time an intercept is required, a risk arises of misconfiguring the switch and connections. Also, Span ports drop packets—another significant monitoring risk, particularly when traffic is encrypted.

Director xStream™ and iLink Agg xStream™ enable deployment of an intelligent, flexible and efficient monitoring access platform for 10G networks. Director xStream’s unique TapFlow™ filtering technology enables LI to focus on select traffic of interest for each tool based on protocols, IP addresses, ports, and VLANs. The robust engineering of Director xStream and iLink Agg xStream enables a pool of 10G and 1G tools to be deployed across a large number of 10G network links, with remote, centralized control of exactly which traffic streams are directed to each tool. Ixia xStream solutions enable law enforcement entities to view more traffic with fewer monitoring tools and to relieve oversubscribed 10G monitoring tools. In addition, law enforcement entities can share tools and data access among groups without contention and centralize data monitoring in a network operations center.

Director Pro™ and Director xStream Pro data monitoring switches offer law enforcement the ability to perform better pre-filtering via Deep Packet Inspection (DPI) and to home in on a specific phone number or credit card number. These products differ from other platforms, which might only be able to seek data within portions of the packet, thanks to a unique ability to filter content or perform pattern matching in hardware, at wire speed, potentially up to Layer 7. Such DPI provides the ability to apply filters to a packet or multiple packets at any location, regardless of packet length or of where the data to be matched sits within the packet; the inspection is totally independent of the packet’s layout.

Thanks to Ixia for the article.

Ixia Taps into Hybrid Cloud Visibility

One of the major issues that IT organizations have with any form of external cloud computing is that they don’t have much visibility into what is occurring within any of those environments.

To help address that specific issue, Ixia created its Net Tool Optimizer, which makes use of virtual and physical taps to provide visibility into cloud computing environments. Now, via the latest upgrade to that software, Ixia is providing support for both virtual and physical networks while doubling the number of interconnects supported by the hardware on which Net Tool Optimizer runs.

Deepesh Arora, vice president of product management for Ixia, says providing real-time visibility into both virtual and physical networks is critical, because in the age of the cloud, the number of virtual networks being employed has expanded considerably. For many IT organizations, this means they have no visibility into either the external cloud or the virtual networks that are being used to connect them.

The end goal, says Arora, should be not only to use Net Tool Optimizer to predict what will occur across those hybrid cloud computing environments, but also to enable IT organizations to use that data to programmatically automate responses to changes in those environments.

Most IT organizations find managing the network inside the data center to be challenging enough. With the addition of virtual networks that span multiple cloud computing environments running inside and outside of the data center, that job is more difficult than ever. Of course, no one can manage what they can’t measure, so the first step toward gaining visibility into hybrid cloud computing environments starts with something as comparatively simple as a virtual network tap.

Thanks to IT Business Edge for the article.