Advanced Packet Filtering with Ixia’s Advanced Filtering Modules (AFM)

An important factor in improving network visibility is the ability to pass the correct data to monitoring tools. Otherwise, it becomes expensive and aggravating for most enterprises to sift through the enormous volume of packets being transmitted, both now and in the near future. Bandwidth requirements are projected to keep increasing for the foreseeable future, so it pays to prepare now. As your bandwidth needs grow, complexity increases: more equipment is added to the network, new monitoring applications appear, and data filtering rules change as monitoring ports are added.

Network monitoring switches are used to counteract this complexity with data segmentation. Several features are necessary to perform that segmentation and refine the flow of data; the most important are packet deduplication, load balancing, and packet filtering. Packet filtering, and advanced packet filtering in particular, is the primary workhorse feature for this segmentation.

While many monitoring switch vendors offer filtering, very few can perform the advanced filtering that adds real value for businesses. In addition, filtering rules can become very complex and require a lot of staff time to write initially and then to maintain as the network constantly changes. This is time and money wasted on tool maintenance instead of time spent quickly resolving network problems and adding the new capabilities the business requests.

Basic Filtering

Basic packet filtering consists of filtering packets as they either enter or leave the monitoring switch. Filtering at the ingress restricts the flow of data (and information) from that point on. This is most often the worst place to filter, as tools and functionality downstream from this point will never have access to the discarded data, and it eliminates the ability to share filtered data with multiple tools. However, ingress filtering is commonly used to limit the amount of network data passed on to your tool farm, and/or for very security-sensitive applications that need to filter untrusted information as early as possible.

The following list provides common filter criteria that can be employed:

  • Layer 2
    • Source MAC address
    • VLAN
    • Ethernet Type (e.g. IPv4, IPv6, AppleTalk, Novell, etc.)
  • Layer 3
    • DSCP/ECN
    • IP address
    • IP protocol (ICMP, IGMP, GGP, IP, TCP, etc.)
    • Traffic Class
    • Next Header
  • Layer 4
    • L4 port
    • TCP control flags

Filters can be set to either pass or deny traffic based upon the filter criteria.
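
To make the pass/deny behavior concrete, here is a minimal sketch of how rules built from criteria like those above might be evaluated in software. The field names, sample rules, and sample packets are illustrative only and do not reflect any particular vendor's filter syntax.

    # Minimal illustration of pass/deny filtering over packet metadata.
    # Field names, rules, and packets below are illustrative only.

    def matches(rule, packet):
        """True if every criterion in the rule matches the packet's fields."""
        return all(packet.get(field) == value
                   for field, value in rule["criteria"].items())

    def filter_packets(rules, packets):
        """Apply the first matching rule; packets matching no rule are dropped."""
        for pkt in packets:
            for rule in rules:
                if matches(rule, pkt):
                    if rule["action"] == "pass":
                        yield pkt
                    break  # first match wins; a "deny" silently drops the packet

    rules = [
        {"criteria": {"vlan": 100, "ip_proto": "TCP", "l4_dport": 443}, "action": "pass"},
        {"criteria": {"eth_type": "IPv6"}, "action": "deny"},
    ]
    packets = [
        {"vlan": 100, "eth_type": "IPv4", "ip_proto": "TCP", "l4_dport": 443},
        {"vlan": 200, "eth_type": "IPv6", "ip_proto": "UDP", "l4_dport": 53},
    ]
    print(list(filter_packets(rules, packets)))  # only the first packet passes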

Egress filters are primarily meant for fine-tuning the data packets sent to the tool farm. If an administrator tries to use them as the primary filtering mechanism, the egress port can easily be overloaded and packets dropped. In this scenario, the aggregated data from multiple network ports may be significantly greater than the egress capacity of the tool port.
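
A quick back-of-the-envelope check makes this oversubscription risk obvious; the port counts and utilization figures in the sketch below are invented purely for illustration.

    # Back-of-the-envelope check for tool-port oversubscription.
    # Port counts and utilization figures are illustrative only.
    network_ports_gbps = [10, 10, 10, 10]   # monitored link capacities
    avg_utilization = 0.30                   # average load on each monitored link
    tool_port_gbps = 10                      # egress capacity toward the tool

    expected_load = sum(capacity * avg_utilization for capacity in network_ports_gbps)
    print(f"Aggregate load {expected_load:.1f} Gbps vs tool port {tool_port_gbps} Gbps")
    if expected_load > tool_port_gbps:
        print("Egress port oversubscribed; expect drops at peak.")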

Advanced Filtering

Network visibility comes from reducing the clutter and focusing on what’s important when you need it. One of the best ways to reduce this clutter is to add a monitoring switch that can remove duplicate packets and perform advanced filtering to direct data packets to the appropriate monitoring tools and application monitoring products deployed on your network. The fundamental requirement for visibility is getting the right data to the right tool so it can draw the right conclusions. Basic filtering isn’t enough to deliver the correct insight into what is happening on the network.

But what do we mean by “advanced filtering”? Advanced filtering includes the ability to filter packets anywhere across the network by using very granular criteria. Most monitoring switches just filter on the ingress and egress data streams.

Besides ingress and egress filtering, operators need to perform packet processing functions as well, such as VLAN stripping, VNTag stripping, GTP stripping, MPLS stripping, deduplication, and packet trimming.
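
As a rough, offline illustration of two of these functions, the sketch below uses the open-source scapy library (an assumption; it is not part of any monitoring switch) to strip 802.1Q VLAN tags and drop exact duplicates from a capture file. Hardware modules do this at line rate; this only makes the operations concrete, and the pcap file names are placeholders.

    # Rough offline illustration of VLAN stripping and deduplication using scapy.
    # File names are placeholders; hardware performs this at line rate.
    import hashlib
    from scapy.all import rdpcap, wrpcap, Ether, Dot1Q

    def strip_vlan(pkt):
        """Rebuild the frame without its 802.1Q header, if one is present."""
        if Dot1Q in pkt:
            tag = pkt[Dot1Q]
            return Ether(src=pkt[Ether].src, dst=pkt[Ether].dst, type=tag.type) / tag.payload
        return pkt

    def dedup(packets):
        """Yield each distinct frame once, keyed on a hash of its raw bytes."""
        seen = set()
        for pkt in packets:
            digest = hashlib.sha256(bytes(pkt)).digest()
            if digest not in seen:
                seen.add(digest)
                yield pkt

    packets = [strip_vlan(p) for p in rdpcap("span_capture.pcap")]
    wrpcap("clean_capture.pcap", list(dedup(packets)))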

Ixia’s Advanced Feature Modules

The Ixia Advanced Feature Modules (AFM) help network engineers improve monitoring tool performance by optimizing the monitored network traffic to include only the essential information needed for analysis. In conjunction with the Ixia Net Tool Optimizer (NTO) product line, the AFM modules provide sophisticated capabilities for advanced processing of packet data.

Advanced Packet Processing Features

  • Packet De-Duplication – A normally configured SPAN port can generate multiple copies of the same packet, dramatically reducing the effectiveness of monitoring tools. The AFM16 eliminates redundant packets, at full line rate, before they reach your monitoring tools. Doing so increases overall tool performance and accuracy.
  • Packet Trimming – Some monitoring tools only need to analyze packet headers. In other monitoring applications, meeting regulatory compliance requires that tools remove sensitive data from captured network traffic. The AFM16 can remove payload data from the monitored network traffic, which boosts tool performance and keeps sensitive user data secure. (A minimal sketch follows this list.)
  • Protocol Stripping – Many network monitoring tools have limitations when handling some types of Ethernet protocols. The AFM16 enables monitoring tools to monitor the required data by removing GTP, MPLS, and VNTag headers from the packet stream.
  • GTP Stripping – Removes the GTP headers from a GTP packet, leaving the tunneled L3 and L4 headers exposed. This enables tools that cannot process GTP header information to analyze the tunneled packets.
  • NTP/GPS Time Stamping – Some latency-sensitive monitoring tools need to know when a packet traverses a particular point in the network. The AFM16 provides time stamping with nanosecond resolution and accuracy.
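
The packet trimming feature can likewise be approximated offline with scapy, purely for illustration: drop the application payload and keep only the L2-L4 headers. A real device also fixes up length and checksum fields, which this sketch does not, and the file names are placeholders.

    # Rough offline approximation of packet trimming with scapy (illustrative only).
    # Note: IP/TCP length and checksum fields are left stale in this simple sketch.
    from scapy.all import rdpcap, wrpcap, TCP, UDP, Raw

    def trim_payload(pkt):
        """Drop application payload, keeping L2-L4 headers for analysis."""
        for layer in (TCP, UDP):
            if layer in pkt:
                pkt[layer].remove_payload()
                return pkt
        if Raw in pkt:
            pkt[Raw].load = b""   # non-TCP/UDP packets: blank any raw payload
        return pkt

    trimmed = [trim_payload(p) for p in rdpcap("span_capture.pcap")]
    wrpcap("headers_only.pcap", trimmed)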

Additional Resources:

Ixia Advanced Feature Modules

Ixia Visibility Architecture

Thanks to Ixia for the article. 

Introducing the First Self-Regulating Root Cause Analysis: Dynamic Rule Generation with StableNet® 7

Infosim®, a leading manufacturer of automated Service Fulfillment and Service Assurance solutions for Telcos, ISPs, MSPs and Corporations, today announced a proprietary new technology called Dynamic Rule Generation (DRG) with StableNet® 7.

The challenge: The legacy Fault Management approach includes a built-in dilemma: scalability vs. aggregation. On the one hand, it is infeasible to pre-create all possible rules, while on the other hand, not having enough rules will leave NOC personnel with insufficient data to troubleshoot complex scenarios.

The solution: DRG expands and contracts rules that automatically troubleshoot networks by anticipating all possible scenarios from master rule sets. DRG is like cruise control for a network rule set. When DRG is turned on, it can robotically expand and contract rule sets to keep troubleshooting data at optimum levels constantly, without human intervention. It also allows for automatic ticket generation and reports alarms raised by dynamically generated rules. DRG leads to fast notification and a swift Service Impact Analysis, and results in the first self-regulating Root Cause Analysis in today’s Network Management Software market.
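
Infosim® has not published DRG's internals, so the following is only a conceptual illustration of the expand-and-contract idea: rules are generated from a master template as monitored elements appear and retired when they disappear. This is not Infosim's implementation; all names and thresholds are invented.

    # Conceptual illustration only: expanding and contracting alarm rules
    # from a master template as monitored interfaces appear and disappear.
    # This is NOT Infosim's implementation, just a sketch of the idea.
    MASTER_RULE = {"metric": "if_in_errors", "threshold": 100, "action": "raise_alarm"}

    active_rules = {}  # interface name -> generated rule

    def sync_rules(discovered_interfaces):
        """Create rules for newly seen interfaces, retire rules for removed ones."""
        for ifname in discovered_interfaces:
            if ifname not in active_rules:
                active_rules[ifname] = {**MASTER_RULE, "target": ifname}
        for ifname in list(active_rules):
            if ifname not in discovered_interfaces:
                del active_rules[ifname]

    sync_rules({"ge-0/0/1", "ge-0/0/2"})   # expands to two interface-specific rules
    sync_rules({"ge-0/0/1"})               # contracts back to one as an interface disappears
    print(active_rules)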

Start automating Fault Management and stop manually creating rules! Take your hands off the keyboard and allow the DRG cruise control to take over!

Supporting Quotes:

Dr. Stefan Köhler, CEO of Infosim®, comments:

“We at Infosim® believe you should receive the best value from your network, and exchange of information should be as easy as possible. The way we want to achieve these goals is to simplify the usage and automate the processes you use to manage your network. Rule creation and deletion have been an Achilles’ heel of legacy network management systems. With DRG (Dynamic Rule Generation), we are again delivering another new technology to our customers to achieve our goal of the dark NOC.”

Marius Heuler, CTO of Infosim®, comments:

“By further enhancing the already powerful Root Cause Analysis of StableNet®, we are providing functionality to our users that will both take care of ongoing changes in their networks and automatically keep the rules up to date.”

ABOUT INFOSIM®

Infosim® is a leading manufacturer of automated Service Fulfillment and Service Assurance solutions for Telcos, ISPs, Managed Service Providers and Corporations. Since 2003, Infosim® has been developing and providing StableNet® to Telco and Enterprise customers. Infosim® is privately held with offices in Germany (Würzburg – Headquarters), USA (Austin) and Singapore.

Infosim® develops and markets StableNet®, the leading unified software solution for Fault, Performance and Configuration Management. StableNet® is available in two versions: Telco (for Telecom Operators and ISPs) and Enterprise (for IT and Managed Service Providers). StableNet® is a single platform unified solution designed to address today’s many operational and technical challenges of managing distributed and mission-critical IT infrastructures.

Many leading organizations and Network Service Providers have selected StableNet® due to its enriched features and reduction in OPEX & CAPEX. Many of our customers are well-known global brands spanning all market sectors. References available on request.

At Infosim®, we take pride in the engineering excellence of our high quality and high performance products. All products are available for a trial period and professional services for proof of concept (POC) can be provided on request.

ABOUT STABLENET®

StableNet® is available in two versions: Telco (for Telecom Operators and ISPs) and Enterprise (for IT and Managed Service Providers).

StableNet® Telco is a comprehensive unified management solution; offerings include: Quad-play, Mobile, High-speed Internet, VoIP (IPT, IPCC), IPTV across Carrier Ethernet, Metro Ethernet, MPLS, L2/L3 VPNs, Multi Customer VRFs, Cloud and FTTx environments. IPv4 and IPv6 are fully supported.

StableNet® Enterprise is an advanced, unified and scalable network management solution for true End-to-End management of medium to large scale mission-critical IT supported networks with enriched dashboards and detailed service-views focused on both Network & Application services.

Thanks to Infosim for the article. 

Security Breaches Keep Network Teams Busy

Network Instruments study shows that network engineers are spending more of their day responding to breaches and deploying security controls.

This should come as no big surprise to most network teams. As security breaches and threats proliferate, they’re spending a lot of time dealing with security issues, according to a study released Monday.

Network Instruments’ eighth annual state of the network report shows that network engineers are increasingly consumed with security chores, including investigating security breaches and implementing security controls. Of the 322 network engineers, IT directors and CIOs surveyed worldwide, 85% said their organization’s network team was involved in security. Twenty percent of those polled said they spend 10 to 20 hours per week on security issues.

Almost 70% said the time they spend on security has increased over the past 12 months; nearly a quarter of respondents said the time spent increased by more than 25%.

The top two security activities keeping networking engineers busy are implementing preventative measures and investigating attacks, according to the report. Flagging anomalies and cleaning up after viruses or worms also are other top time sinks for network teams.

“Network engineers are being pulled into every aspect of security,” Brad Reinboldt, senior product manager for Network Instruments, the performance management unit of JDSU, said in a prepared statement.

Network teams are drawn into security investigations and preparedness as high-profile security breaches continue to make headlines. Last year, news of the Target breach was followed by breach reports from a slew of big-name companies, including Neiman Marcus, Home Depot, and Michaels.

A report issued last September by the Ponemon Institute and sponsored by Experian showed that data breaches are becoming more frequent. Of the 567 US executives surveyed, 43% said they had experienced a data breach, up from 33% in a similar survey in 2013. Sixty percent said their company had suffered more than one data breach in the past two years, up from 52% in 2013.

According to Network Instruments’ study, syslogs were cited as the top method for detecting security issues, with 67% of survey respondents reporting using them. Fifty-seven percent use SNMP, while 54% said they use anomalies for uncovering security problems.

In terms of security challenges, half of the survey respondents ranked correlating security and network performance as their biggest problem.

The study also found that more than half of those polled expect bandwidth to grow by more than 51% next year, up from 37% in last year’s study who expected that kind of growth. Several factors are driving the demand, including users with multiple devices, larger data files, and unified communications applications, according to the report.

The survey also queried network teams about their adoption of emerging technologies. It found that year-over-year implementation rates for 40 Gigabit Ethernet, 100GbE, and software-defined networking have almost doubled. One technology that isn’t gaining traction among those polled is 25 GbE, with more than 62% saying they have no plans for it.

Thanks to Network Computing for the article.

How Not to Rollout New Ideas, or How I Learned to Love Testing

I was recently reading an article in TechCrunch titled “The Problem With The Internet Of Things,” where the author lamented how bad design or rollout of good ideas can kill promising markets. In his example, he discussed how turning on the lights in a room, through the Internet of Things (IoT), became a five step process rather than the simple one step process we currently use (the light switch).

This illustrates the tension between the grand idea and the practicality of the market: it’s awesome to contemplate a future where exciting technology impacts our lives, but only if the realities of everyday use are taken into account. As he effectively states, “Smart home technology should work with the existing interfaces of household objects, not try to change how we use them.”

Part of the problem is that the IoT is still just a nebulous concept. Its everyday implications haven’t been worked out. What does it mean when all of our appliances, communications, and transportation are connected? How will they work together? How will we control and manage them? Details about how the users of exciting technology will actually participate in the experience are the real driver of technology success. And too often, this aspect is glossed over or ignored.

And, once everything is connected, will those connections be a door for malware or hacktivists to bypass security?

Part of the solution to getting new technology to customers in a meaningful way, that is both a quality end user experience AND a profitable model for the provider, is network validation and optimization. Application performance and security resilience are key when rolling out, providing, integrating or securing new technology.

What do we mean by these terms? Well:

  • Application performance means we enable successful deployments of applications across our customers’ networks
  • Security resilience means we make sure customer networks are resilient to the growing security threats across the IT landscape

Companies deploying applications and network services—in a physical, virtual, or hybrid network configuration—need to do three things well:

  • Validate. Customers need to validate their network architecture to ensure they have a well-designed network, properly provisioned, with the right third party equipment to achieve their business goals.
  • Secure. Customers must secure their network performance against all the various threat scenarios—a threat list that grows daily and impacts their end users, brand, and profitability. (Just over last Thanksgiving weekend, Sony Pictures was hacked and five of its upcoming pictures leaked online—with the prime suspect being North Korea!)
  • Optimize. Customers seek network optimization by obtaining solutions that give them 100% visibility into their traffic—eliminating blind spots. They must monitor applications traffic and receive real-time intelligence in order to ensure the network is performing as expected.

Ixia helps customers address these pain points, and achieve their networking goals every day, all over the world. This is the exciting part of our business.

When we discuss solutions with customers, no matter who they are— Bank of America, Visa, Apple, NTT—they all do three things the same way in their networks:

  • Design—Envision and plan the network that meets their business needs
  • Rollout—Deploy network upgrades or updated functionality
  • Operate—Keep the production network seamlessly providing a quality experience

These are the three big lifecycle stages for any network design, application rollout, security solution, or performance design. Achieving these milestones successfully requires three processes:

  • Validate—Test and confirm design meets expectations
  • Secure—Assess the performance and security in real-world threat scenarios
  • Optimize—Scale for performance, visibility, security, and expansion

So when it comes to new technology and new applications of that technology, we are in an amazing time—evidenced by the projection that nine billion devices will be connected to the Internet in 2018. Examples of this include Audio Video Bridging, Automotive Ethernet, Bring Your Own Apps (BYOA), etc. Ixia sees only huge potential. Ixia is a first line of defense in creating the kind of quality customer experience that ensures satisfaction, brand excellence, and profitability.

Additional Resources:

Article: The Problem With The Internet Of Things

Ixia visibility solutions

Ixia security solutions

Thanks to Ixia for the article.

End User Experience Testing Made Easier with NMSaaS

End user experience and QoS are consistently ranked at the top of priorities for network management teams today. According to research, over 60% of companies today say that VoIP is present in a significant portion of their networks, and the same is true of streaming media within the organization.

As you can see, effective end user experience testing is vital to any business. If you have a service model, whether you are an actual third-party service provider or a corporation whose IT department acts as a service provider, you have one goal: to provide assured applications and services to your customers at the highest possible standard.

The success of your business rests on your ability to deliver an effective end user experience. How many times have you been working with a business and been told to wait because its computer systems were “slow”? It is something we have all been frustrated by in the past.

To ensure that your organization can provide effective and successful end user experience you need to be able to proactively test your live environment and be alerted to issues in real time.

This comprises five key elements:

1) Must be able to test from end-to-end

2) Point-to-point or meshed testing

3) Real traffic and “live” tests, not just “ping” and traceroute (a minimal probe sketch follows this list)

4) Must be able to simulate the live environments

  • Class of service
  • Number of simultaneous tests
  • Codecs
  • Synthetic login/query

5) Must be cost-effective and easy to deploy.
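
As a very small taste of what “real traffic” testing means in practice (item 3 above), the sketch below times real TCP connections to a service instead of relying on ICMP ping. The host, port, and sample count are placeholders; a production test mesh would run many such probes point-to-point or meshed, with the right class of service, codecs, and synthetic logins.

    # Minimal point-to-point latency probe using real TCP connections.
    # Host, port, and sample count are placeholders.
    import socket, statistics, time

    def tcp_connect_latency(host, port, samples=5):
        """Time a handful of real TCP connections; returns latencies in ms."""
        results = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=2):
                pass
            results.append((time.perf_counter() - start) * 1000)
        return results

    latencies = tcp_connect_latency("example.com", 443)
    print(f"min/avg/max ms: {min(latencies):.1f} / "
          f"{statistics.mean(latencies):.1f} / {max(latencies):.1f}")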

NMSaaS is able to provide all of these services at a cost-effective price.

If this is something you might be interested in, or if you would like to find out more about our services and solutions, why not start a free 30-day trial today?

Thanks to NMSaaS for the article.

Avoid Network Performance Problems with Automated Monitoring

Network administrators can streamline the troubleshooting process by deploying automated monitoring systems.

With automated monitoring in place, admins can get early warnings about emerging problems and address them before the adverse effects drag on. In addition, automated monitoring can help maintain up-to-date information about network configuration and devices on the network, which can be essential for diagnosing network performance problems.

An automated network monitoring regime requires a combination of tools along with policies and procedures for utilizing those tools.

Network hardware vendors and third party software vendors offer a wide range of tools for network management. Here are some tips for identifying the right tool, or set of tools, for your needs.

The first step in setting up an automated monitoring system is having an accurate inventory of the devices on your network. A key requirement for just about any automated network tool set is automated discovery of IP-addressable devices. This includes network hardware, like switches and routers, as well as servers and client devices.
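
For illustration only, a bare-bones discovery sweep might look like the sketch below, which assumes a Linux-style ping command and an invented subnet. Real discovery tools also rely on ARP tables, SNMP, and CDP/LLDP rather than ICMP alone.

    # Bare-bones ICMP discovery sweep of one subnet (Linux-style ping assumed).
    # The subnet is a placeholder; real tools use ARP, SNMP, CDP/LLDP as well.
    import ipaddress, subprocess
    from concurrent.futures import ThreadPoolExecutor

    def is_alive(ip):
        """Send one ICMP echo; True if the host answered."""
        result = subprocess.run(["ping", "-c", "1", "-W", "1", str(ip)],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    subnet = ipaddress.ip_network("192.168.1.0/24")   # placeholder subnet
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(is_alive, subnet.hosts())
        alive = [str(ip) for ip, up in zip(subnet.hosts(), results) if up]
    print(f"{len(alive)} devices responded: {alive}")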

Another valuable feature is the ability to discover network topology. If you cringe every time someone erases your network diagram from the whiteboard, it’s probably time to get a topology mapping tool. Topology discovery may be included with your device discovery tool, but not necessarily.

Device and topology discovery tools provide a baseline of information about the structure of your network. These tools can be run at regular intervals to detect changes and update the device database and topology diagrams. As a side benefit, this data can be useful for compliance reporting as well.

Once you have an inventory of devices on your network, you will need to collect data on the state of those devices. Although IT organizations often separate network administration and server administration duties, it is often helpful to have performance data on servers and the network.

The Simple Network Management Protocol (SNMP) and the Windows Management Instrumentation (WMI) protocols are designed to collect such device data. Network performance monitoring tools can be configured to poll network devices and collect data on availability, latency and traffic volumes using SNMP. WMI is a Microsoft protocol designed to allow monitoring programs to query Windows operating systems about the state of a system. Network performance monitoring tools can collect, consolidate and correlate network and server information from multiple devices.
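
As an example of SNMP polling, the sketch below reads interface octet counters using the classic synchronous pysnmp hlapi (an assumption; newer pysnmp releases use an async API). The device address and community string are placeholders.

    # Poll interface octet counters over SNMPv2c with pysnmp (classic synchronous hlapi).
    # Device address and community string are placeholders.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),              # SNMPv2c
        UdpTransportTarget(("192.0.2.1", 161), timeout=2),
        ContextData(),
        ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", 1)),
        ObjectType(ObjectIdentity("IF-MIB", "ifOutOctets", 1)),
    ))

    if error_indication or error_status:
        print("SNMP poll failed:", error_indication or error_status.prettyPrint())
    else:
        for name, value in var_binds:
            print(f"{name} = {value}")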

In addition to monitoring the state of servers, some tools support running PowerShell monitoring and action scripts on Windows devices, as well as SSH access for administering Linux servers.
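
And for Linux servers, a minimal SSH-based check might look like the following, assuming the paramiko library, key-based authentication, and placeholder host and user names.

    # Minimal SSH health check against a Linux server using paramiko.
    # Host, user, and key path are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("server.example.com", username="monitor",
                   key_filename="/home/monitor/.ssh/id_ed25519")

    _, stdout, _ = client.exec_command("uptime && df -h /")
    print(stdout.read().decode())
    client.close()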

Thanks to Tom’s IT Pro for the article.

Flow-Based Network Intelligence You Can Depend On

NetFlow Auditor is a complete and flexible toolkit for flow-based network analysis, which includes real-time analysis, long-term trending, and baselining.

NetFlow Auditor uses NetFlow-based analysis, as opposed to traditional network analysis products, which focus on the health of network gateway devices with basic information and overview trends.

NetFlow analysis looks at end-to-end performance using an approach that is largely independent of the underlying network infrastructure, thus providing greater visibility of the IP environment as a whole.

NetFlow Auditor provides an entire team in a box and is focused on delivering four main value propositions for reporting on IP-based networks:

  • Network Performance
  • Network Security
  • Network Intelligence
  • Network Accounting

Network Performance

Bandwidth management, bottleneck identification and alerting, resource and capacity planning, asset management, content management, quality of service

Network Security

Network data forensics and anomaly detection, e-security surveillance, network abuse, P2P discovery, access management, compliance, track and trace, and risk management

Network Intelligence

Network Anomaly Detection and Data metrics.

Network Accounting

Customer billing management for shared networks and translation of usage into costs: invoicing, bill substantiation, chargeback, 95th percentile billing, total cost of ownership, forecasting, and substantiation of IT purchases and ROI.
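
For readers unfamiliar with 95th-percentile billing, the usual calculation is: collect the month's 5-minute utilization samples, discard the top 5%, and bill at the highest remaining sample. The sketch below shows this with made-up numbers.

    # Standard 95th-percentile billing calculation (sample values are made up).
    def percentile_95(samples_mbps):
        """Discard the top 5% of samples; the highest remaining value is billable."""
        ordered = sorted(samples_mbps, reverse=True)
        drop = int(len(ordered) * 0.05)
        return ordered[drop]

    # 20 made-up five-minute samples; a real month has roughly 8,640 of them.
    samples = [22, 18, 35, 40, 12, 95, 300, 28, 31, 19,
               44, 52, 61, 27, 33, 48, 25, 38, 29, 41]
    print(f"Billable rate: {percentile_95(samples)} Mbps")   # 95; the 300 Mbps burst is ignored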

How NetFlow Auditor Shines

Scalability – NetFlow Auditor can handle copious numbers of flows per second, so key data won’t be missed when pipes burst or flow volumes increase. It can analyze large network cores, distribution layers, and edge points, deployed as point solutions or multi-collector hierarchies.

Granularity – NetFlow Auditor provides complete drill-down tools to fully explore the data and to perform comparative baselining in real time and over the long term. This gives users the ability to see network data from all perspectives.

Flexibility – NetFlow Auditor allows easy customization of every aspect of the system, from tuning data capture to producing templates and automated reporting and alerting, thus decreasing the workload for engineers, management, and customers.

Anomaly Detection – NetFlow Auditor’s ability to learn a baseline on any kind of data is unsurpassed. The longer it runs, the smarter it becomes.
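
NetFlow Auditor's own algorithm is proprietary, but the general baselining technique can be sketched with a simple exponentially weighted moving average and deviation check, flagging samples that stray too far from what has been learned. The traffic numbers below are invented.

    # Generic baseline-learning sketch (not NetFlow Auditor's algorithm).
    class Baseline:
        """Exponentially weighted baseline with a simple deviation threshold."""
        def __init__(self, alpha=0.1, n_sigma=3.0, warmup=5):
            self.alpha, self.n_sigma, self.warmup = alpha, n_sigma, warmup
            self.mean, self.var, self.count = 0.0, 0.0, 0

        def update(self, value):
            """Return True if the value deviates from the learned baseline, then learn it."""
            self.count += 1
            if self.count == 1:
                self.mean = value
                return False
            deviation = value - self.mean
            anomalous = (self.count > self.warmup and
                         abs(deviation) > self.n_sigma * max(self.var ** 0.5, 1.0))
            self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
            self.mean += self.alpha * deviation
            return anomalous

    baseline = Baseline()
    for mbps in [10, 11, 9, 10, 12, 11, 95, 10]:   # a sudden 95 Mbps spike
        if baseline.update(mbps):
            print(f"Anomalous sample: {mbps} Mbps")   # flags only the spike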

Root Cause Analysis – NetFlow Auditor’s drill filter and discovery tool allows real-time forensic and trending views, with threshold alerting and scheduled reporting.

QoS Analysis – NetFlow Auditor can help analyze VoIP impact and multicast, and separate traffic by Class of Service and by location.

Key Issues Solved Using Flow-Based Network Management

Absolute Visibility – As businesses use their data networks to deliver more applications and services, monitoring and managing the network for performance problems can become a challenge. NetFlow Auditor provides real-time monitoring and improves reaction times for solving network issues, such as identifying and shutting down malicious traffic when it appears on the network.

Compliance and Risk – System relocations, Business and System Mergers.

Convergence – Organizations that are moving disparate networks to a converged platform in an effort to streamline costs and increase productivity can use NetFlow Auditor to understand the impact on security and to address security blind spots in the converged network.

Proactive Network Management – Risk management teams can use NetFlow Auditor to reduce risk and improve incident management by comparing current problems against a baseline of normal network behavior and performance at different times of day.

Customers include Internet Service Providers, Banks, Education, Healthcare and Utilities such as:

  • Bell Aliant
  • KDDI
  • BroadRiver
  • First Digital
  • NSW Department of Education and Training
  • IBM
  • StreamtheWorld
  • Desjardins Bank
  • Commonwealth Bank of Australia
  • Miami Dade County
  • Miami Herald
  • Sheridan College
  • Mitsui Sumitomo
  • Caprock Energy
  • Zesco Electricity
  • Self Regional Healthcare

Thanks to NetFlow Auditor for the article.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

When it comes to IT services, business value and user satisfaction are both dependent upon the server, network, and applications all working together seamlessly.

Failure to adequately monitor each of these and their interactions means that you could be flying blind – susceptible to degraded service levels.

While application and network monitoring receive a lot of the attention, it is important to also understand what’s going on with the server.

Virtualization changes the face of service delivery

The environment in which modern services run is complex. Superficially, it appears as though we’ve traveled back to the 1960s, with data centers again appearing like big monolithic constructs (whether cloud or internally hosted) with highly-virtualized server farms connecting through large core networks.

The emergence of virtualized clients (with most computing done remotely) takes the analogy a step further and makes it feel as if we are on the set of “Mad Men” with the old dumb terminals connected to the mainframe.

But that may be where the analogy ends. Today’s IT service delivery is almost never performed in a homogeneous vendor setting—from a hardware or software perspective. Likewise, the diversity of complex multi-tier applications and methods by which they are accessed continues to proliferate.

To learn more, download the white paper.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

Thanks to Network Instruments for the article.

Solving 3 Key Network Security Challenges

With high-profile attacks from 2014 still fresh in the minds of IT professionals, and almost half of companies having been victims of an attack during the last year, it’s not surprising that security teams are seeking additional resources to augment defenses and investigate attacks.

As IT resources shift to security, network teams are finding new roles in the battle to protect network data. To be an effective asset in the battle, it’s critical to understand the involvement and roles of network professionals in security as well as the 3 greatest challenges they face.

Assisting the Security Team

The recently released State of the Network Global Study asked 322 network professionals about their emerging roles in network security. Eighty-five percent of respondents indicated that their organization’s network team was involved in handling security. Not only have network teams spent considerable time managing security issues, but the amount of time has also increased over the past year:

  • One in four spends more than 10 hours per week on security
  • Almost 70 percent indicated time spent on security has increased

Roles in Defending the Network

Judging by the number of responses above 50 percent, the majority of network teams are involved in many security-related tasks. The top two roles for respondents – implementing preventative measures (65 percent) and investigating security breaches (58 percent) – mean they are working closely with security teams on handling threats both proactively and after the fact.

3 Key Security Challenges

Half of respondents indicated the greatest security challenge was an inability to correlate security and network performance. This was followed closely by an inability to replay anomalous security issues (44 percent) and a lack of understanding to diagnose security issues (41 percent).

The Packet Capture Solution

These three challenges point to an inability of the network team to gain context to quickly and accurately diagnose security issues. The solution lies in the packets.

  • Correlating Network and Security Issues

Within performance management solutions like the Observer Platform, baselining and behavior analysis can identify anomalous client, server, or network activities. Additionally, viewing top talkers and bandwidth utilization reports can identify whether clients or servers are generating unexpectedly high amounts of traffic indicative of a compromised resource (a small top-talkers sketch follows these bullets).

  • Replaying Issues for Context

The inability to replay and diagnose security issues points to long-term packet capture being an under-utilized resource in security investigations. Replaying captured events via retrospective analysis appliances like GigaStor provides full context to identify compromised resources, exploits utilized, and occurrences of data theft.
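
The top-talkers view mentioned above boils down to aggregating bytes per source across flow or packet summaries, as in this small illustrative sketch; the records and addresses are made up.

    # Tiny top-talkers report from flow records (records and addresses are made up;
    # in practice these come from NetFlow/IPFIX exports or packet capture summaries).
    from collections import Counter

    flows = [
        {"src": "10.0.0.5",  "dst": "203.0.113.9",  "bytes": 1_200_000},
        {"src": "10.0.0.7",  "dst": "10.0.0.20",    "bytes": 80_000},
        {"src": "10.0.0.5",  "dst": "198.51.100.4", "bytes": 4_500_000},
        {"src": "10.0.0.12", "dst": "10.0.0.20",    "bytes": 300_000},
    ]

    bytes_by_source = Counter()
    for flow in flows:
        bytes_by_source[flow["src"]] += flow["bytes"]

    for src, total in bytes_by_source.most_common(3):
        print(f"{src}: {total / 1e6:.1f} MB")   # unexpectedly chatty hosts stand out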

As network teams are called upon to assist in security investigations, effective use of packet analysis is critical for quick and accurate investigation and remediation. Learn from cyber forensics investigators how to effectively work with security teams on threat prevention, investigations, and cleanup efforts at the How to Catch a Hacker Webinar. Our experts will uncover exploits and share top security strategies for network teams.

Thanks to Network Instruments for the article.

State of Networks: Faster, but Under Attack

Two recent studies that look at the state of mobile and fixed networks show that while networks are getting ever faster, security is a paramount concern that is taking up more time and resources.

Akamai recently released its fourth quarter 2014 State of the Internet report. Among the findings:

  • In terms of network security, high tech and public sector targets saw increased numbers of attacks from 2013 to 2014, while enterprise targets had fewer attacks over the course of the year – except in Q4, when the commerce and enterprise segments were the most frequently targeted.

“Attacks against public sector targets reported throughout 2014 appear to be primarily motivated by political unrest, while the targeting of the high tech industry does not appear to be driven by any single event or motivation,” Akamai added.

  • Akamai customers saw DDoS attacks up 20% from the third quarter, although the overall number of such attacks held steady from 2013 to 2014 at about 1,150.
  • Average mobile speeds differ widely on a global basis, from 16 megabits per second in the U.K., to 1 Mbps in New Caledonia. Average peak mobile connection speeds continue to increase, from a whopping 157.3 Mbps in Singapore, to 7.5 Mbps in Argentina. And Denmark, Saudi Arabia, Sweden and Venezuela had 97% of unique IP addresses from mobile providers connect to Akamai’s network at speeds faster than the 4 Mbps threshold that is considered the minimum for “broadband.”

Meanwhile, Network Instruments, part of JDSU, recently completed its eighth annual survey of network professionals. It found that security is an increasing area of focus for network teams and that they are spending an increasing amount of time focused on security incidents and prevention.

NI reported that its survey found that the most commonly reported network security challenge is correlating security issues with network performance (reported by 50% of respondents), while the most common method for identifying security issues is “syslogs” (used by 67% of respondents). Other methods included the Simple Network Management Protocol (SNMP) and tracking performance anomalies, while long-term packet capture and analysis was used by slightly less than half of the survey participants – 48%. Network Instruments said that the relatively low utilization of long-term packet capture makes it “an under-utilized resource in security investigations” and that “replaying the events would provide greater context” for investigators.

NI also found that “application overload” is driving a huge increase in bandwidth use expectations, due to users accessing network resources and large files with multiple devices; real-time unified communications applications that require more bandwidth; and private cloud and virtualization adoption. See Network Instruments’ full infographic below:

Network Instruments' State of the Network infographic

Thanks to RCR Wireless News for the article.