How Not to Rollout New Ideas, or How I Learned to Love Testing

I was recently reading an article in TechCrunch titled “The Problem With The Internet Of Things,” in which the author lamented how bad design or rollout can kill promising markets for good ideas. In his example, turning on the lights in a room through the Internet of Things (IoT) became a five-step process rather than the simple one-step process we currently use (the light switch).

This illustrates the tension between the grand idea and the practicality of the market: it’s awesome to contemplate a future where exciting technology impacts our lives, but only if the realities of everyday use are taken into account. As he effectively states, “Smart home technology should work with the existing interfaces of household objects, not try to change how we use them.”

Part of the problem is that the IoT is still a nebulous concept. Its everyday implications haven’t been worked out. What does it mean when all of our appliances, communications, and transportation are connected? How will they work together? How will we control and manage them? The details of how users will actually participate in the experience are the real driver of a technology’s success. And too often, this aspect is glossed over or ignored.

And, once everything is connected, will those connections be a door for malware or hacktivists to bypass security?

Part of the solution to getting new technology to customers in a meaningful way, one that delivers both a quality end user experience AND a profitable model for the provider, is network validation and optimization. Application performance and security resilience are key when rolling out, providing, integrating, or securing new technology.

What do we mean by these terms? Well:

  • Application performance means we enable successful deployments of applications across our customers’ networks
  • Security resilience means we make sure customer networks are resilient to the growing security threats across the IT landscape

Companies deploying applications and network services—in a physical, virtual, or hybrid network configuration—need to do three things well:

  • Validate. Customers need to validate their network architecture to ensure they have a well-designed network, properly provisioned, with the right third-party equipment to achieve their business goals.
  • Secure. Customers must secure their network performance against all the various threat scenarios—a threat list that grows daily and impacts their end users, brand, and profitability. (Just over last Thanksgiving weekend, Sony Pictures was hacked and five of its upcoming pictures leaked online—with the prime suspect being North Korea!)
  • Optimize. Customers seek network optimization by obtaining solutions that give them 100% visibility into their traffic—eliminating blind spots. They must monitor application traffic and receive real-time intelligence to ensure the network is performing as expected.

Ixia helps customers address these pain points, and achieve their networking goals every day, all over the world. This is the exciting part of our business.

When we discuss solutions with customers, no matter who they are—Bank of America, Visa, Apple, NTT—they all do three things the same way in their networks:

  • Design—Envision and plan the network that meets their business needs
  • Rollout—Deploy network upgrades or updated functionality
  • Operate—Keep the production network seamlessly providing a quality experience

These are the three big lifecycle stages for any network design, application rollout, security solution, or performance design. Achieving these milestones successfully requires three processes:

  • Validate—Test and confirm design meets expectations
  • Secure—Assess the performance and security in real-world threat scenarios
  • Optimize—Scale for performance, visibility, security, and expansion

So when it comes to new technology and new applications of that technology, we are in an amazing time—evidenced by the fact that nine billion devices are forecast to be connected to the Internet in 2018. Examples include Audio Video Bridging, Automotive Ethernet, Bring Your Own Apps (BYOA), and more. Ixia sees only huge potential. Ixia is a first line of defense in creating the kind of quality customer experience that ensures satisfaction, brand excellence, and profitability.

Additional Resources:

Article: The Problem With The Internet Of Things

Ixia visibility solutions

Ixia security solutions

Thanks to Ixia for the article.

End User Experience Testing Made Easier with NMSaaS

End user experience and QoS consistently rank at the top of priorities for network management teams today. According to research, over 60% of companies say that VoIP is present across a significant portion of their networks, and the same is true of streaming media within the organization.

As you can see, effective end user experience testing is vital to any business. If you operate a service model, whether you’re an actual third-party service provider or a corporation whose IT department acts as a service provider, you have one goal: to deliver assured applications and services to your customers at the highest standard possible.

The success of your business is based upon your ability to deliver an effective end user experience. How many times have you been working with a business and been told to wait because its computer systems were “slow”? It is something we have all become frustrated with at one time or another.


To ensure that your organization can provide an effective and successful end user experience, you need to be able to proactively test your live environment and be alerted to issues in real time.

Such testing comprises 5 key elements (a minimal probe sketch follows the list):

1) Must be able to test from end-to-end

2) Point to Point or Meshed testing

3) Real traffic and “live” tests, not just ping and traceroute

4) Must be able to simulate the live environment, including:

  • Class of service
  • Number of simultaneous tests
  • Codecs
  • Synthetic login/query

5) Must be cost-effective and easy to deploy.
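As a concrete illustration of elements 3 and 4, here is a minimal synthetic-query probe in Python. It is a sketch of the idea only, not NMSaaS’s implementation, and the intranet endpoint it polls is a hypothetical placeholder:

```python
import statistics
import time
import urllib.request

TARGET = "http://intranet.example.com/health"  # hypothetical internal endpoint
SAMPLES = 10

latencies_ms = []
failures = 0
for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET, timeout=5) as resp:
            resp.read()  # pull the full body, as a real client would
        latencies_ms.append((time.monotonic() - start) * 1000.0)
    except OSError:
        failures += 1  # count timeouts and refusals as lost probes
    time.sleep(1)  # pace the probes like a periodic monitor

if latencies_ms:
    print(f"loss={100.0 * failures / SAMPLES:.0f}%  "
          f"avg={statistics.mean(latencies_ms):.1f} ms  "
          f"jitter={statistics.pstdev(latencies_ms):.1f} ms  "
          f"worst={max(latencies_ms):.1f} ms")
```

Unlike a bare ping, a probe like this exercises the full application path (DNS, TCP, the server itself), which is what the end user actually experiences.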

NMSaaS is able to provide all of these services at a cost-effective price.

If this is something you might be interested in, or if you would like to find out more about our services and solutions, why not start a free 30-day trial today?


Thanks to NMSaaS for the article.

Avoid Network Performance Problems with Automated Monitoring

Network administrators can streamline the troubleshooting process by deploying automated monitoring systems.

With automated monitoring in place, admins can get early warnings about emerging problems and address them before the adverse effects drag on. In addition, automated monitoring can help maintain up-to-date information about network configuration and about the devices on the network, information that can be essential for diagnosing network performance problems.

An automated network monitoring regime requires a combination of tools along with policies and procedures for utilizing those tools.

Network hardware vendors and third party software vendors offer a wide range of tools for network management. Here are some tips for identifying the right tool, or set of tools, for your needs.

The first step in setting up an automated monitoring system is having an accurate inventory of the devices on your network. A key requirement for just about any automated network tool set is automated discovery of IP-addressable devices. This includes network hardware, like switches and routers, as well as servers and client devices.
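To make “automated discovery” concrete, here is a deliberately naive Python sweep that pings every address in a subnet (Linux ping flags; the subnet is a placeholder). Commercial discovery tools go much further, drawing on ARP tables, SNMP, and CDP/LLDP:

```python
import ipaddress
import subprocess

def discover(network: str) -> list[str]:
    """Return the hosts in `network` that answer a single ICMP ping."""
    alive = []
    for host in ipaddress.ip_network(network).hosts():
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", str(host)],  # 1 probe, 1 s timeout
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            alive.append(str(host))
    return alive

print(discover("192.0.2.0/28"))  # placeholder subnet
```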

Another valuable feature is the ability to discover network topology. If you cringe every time someone erases your network diagram from the whiteboard, it’s probably time to get a topology mapping tool. Topology discovery may be included with your device discovery tool but not necessarily.

Device and topology discovery tools provide a baseline of information about the structure of your network. These tools can be run at regular intervals to detect changes and update the device database and topology diagrams. As a side benefit, this data can be useful for compliance reporting as well.

Once you have an inventory of devices on your network, you will need to collect data on the state of those devices. Although IT organizations often separate network administration and server administration duties, it is helpful to have performance data on both the servers and the network.

The Simple Network Management Protocol (SNMP) and Windows Management Instrumentation (WMI) are designed to collect such device data. Network performance monitoring tools can be configured to poll network devices and collect data on availability, latency, and traffic volumes using SNMP. WMI is a Microsoft protocol that allows monitoring programs to query Windows operating systems about the state of a system. Network performance monitoring tools can collect, consolidate, and correlate network and server information from multiple devices.
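For a sense of what SNMP polling looks like in practice, here is a minimal sketch using the open-source pysnmp library’s classic synchronous API (`pip install pysnmp`). The device address and the “public” read community are hypothetical placeholders:

```python
from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType,
    SnmpEngine, UdpTransportTarget, getCmd,
)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),  # SNMP v2c
    UdpTransportTarget(("192.0.2.1", 161), timeout=2, retries=1),
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0)),  # availability
    ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", 1)),     # traffic, interface 1
))

if error_indication or error_status:
    print("poll failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```

A monitoring system simply runs a poll like this on a schedule and stores the counters, turning the deltas into availability, latency, and traffic-volume metrics.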

In addition to monitoring the state of servers, some tools support running PowerShell monitoring and action scripts on Windows devices and offer SSH support for administering Linux servers.
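As a sketch of the SSH side, the snippet below uses the open-source paramiko library (`pip install paramiko`) to sample load and disk usage from a Linux server; the hostname, user, and key path are placeholders:

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify keys in production
client.connect("linux-server.example.com", username="monitor",
               key_filename="/home/monitor/.ssh/id_ed25519")

# Sample load average and root-filesystem usage in one round trip.
_stdin, stdout, _stderr = client.exec_command("uptime && df -h /")
print(stdout.read().decode())
client.close()
```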

Thanks to Tom’s IT Pro for the article.

Flow-Based Network Intelligence You Can Depend On

NetFlow Auditor is a complete and flexible toolkit for flow-based network analysis, which includes real-time analysis, long-term trending, and baselining.

NetFlow Auditor uses NetFlow-based analysis, as opposed to traditional network analysis products, which focus on the health of network gateway devices and offer only basic information and overview trends.

NetFlow analysis looks at end-to-end performance using an approach that is largely independent of the underlying network infrastructure, thus providing greater visibility into the IP environment as a whole.

NetFlow Auditor provides an entire team in a box and is focused on delivering four main value propositions for IP-based network reporting:

  • Network Performance
  • Network Security
  • Network Intelligence
  • Network Accounting

Network Performance

Bandwidth management, bottleneck identification and alerting, resource and capacity planning, asset management, content management, quality of service

Network Security

Network data forensics and anomaly detection, e-security surveillance, network abuse, P2P discovery, access management, compliance, track and trace, and risk management

Network Intelligence

Network anomaly detection and data metrics.

Network Accounting

Customer billing management for shared networks: translation of usage into costs, invoicing, bill substantiation, chargeback, 95th percentile billing, total cost of ownership, forecasting, and substantiation of the ROI of IT purchases.
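For readers unfamiliar with 95th percentile billing, the calculation itself is simple: sort the usage samples for the billing period, discard the top 5%, and bill on the highest remaining sample, so brief bursts are not charged. A small sketch with made-up 5-minute samples:

```python
import math

def percentile_95(samples_mbps):
    """Sort the samples, drop the top 5%, return the highest one kept."""
    ordered = sorted(samples_mbps)
    index = math.ceil(len(ordered) * 0.95) - 1
    return ordered[index]

# 20 made-up 5-minute samples; a real month has ~8,640 of them.
samples = [12, 14, 9, 16, 13, 15, 11, 18, 14, 13,
           95, 17, 12, 15, 14, 13, 16, 12, 15, 14]
print(f"billable rate: {percentile_95(samples):.1f} Mbps")
# Prints 18.0 Mbps: the single 95 Mbps spike falls in the discarded 5%.
```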

How NetFlow Auditor Shines

Scalability – NetFlow Auditor can handle copious amounts of flows per second, so key data won’t be missed when pipes burst or flow volumes increase. Auditor can analyze large network cores, distribution layers, and edge points, whether as a point solution or as a multi-collector hierarchy.

Granularity – NetFlow Auditor provides complete drill-down tools to fully explore the data and to perform comparative baselining in real time and over the long term. This gives users the ability to see network data from every perspective.

Flexibility – NetFlow Auditor allows easy customization of every aspect of the system, from tuning data capture to producing templates and automated reporting and alerting, thus decreasing the workload for engineers, management, and customers.

Anomaly Detection – NetFlow Auditor’s ability to learn a baseline on any kind of data is unsurpassed. The longer it runs, the smarter it becomes.
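As a rough illustration of the general idea (not NetFlow Auditor’s proprietary algorithm), the sketch below keeps a rolling window of traffic samples and flags anything beyond three standard deviations of the learned baseline:

```python
import statistics
from collections import deque

WINDOW = 288        # one day of 5-minute samples
MIN_BASELINE = 30   # withhold judgment until some history exists
SIGMA = 3.0

history = deque(maxlen=WINDOW)

def is_anomalous(bytes_per_sec: float) -> bool:
    """Flag samples far outside the rolling baseline."""
    if len(history) >= MIN_BASELINE:
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # guard against zero spread
        if abs(bytes_per_sec - mean) > SIGMA * stdev:
            return True  # keep outliers out of the baseline
    history.append(bytes_per_sec)
    return False
```

The “longer it runs, the smarter it becomes” property comes from the window: as more normal samples accumulate, the baseline tightens and anomalies stand out more sharply.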

Root Cause Analysis – NetFlow Auditor’s drill, filter, and discovery tools allow real-time forensic and trending views, with threshold alerting and scheduled reporting.

QoS Analysis – NetFlow Auditor can help analyze VoIP and multicast impact, and can separate traffic by class of service and by location.

Key Issues Solved Using Flow-Based Network Management

Absolute Visibility – As businesses use their data networks to deliver more applications and services, monitoring and managing network performance problems can become a challenge. NetFlow Auditor provides real-time monitoring and improves reaction times for solving network issues, such as identifying and shutting down malicious traffic when it appears on the network.

Compliance and Risk – System relocations, business and system mergers.

Convergence – Organizations that are moving disparate networks to a converged platform in an effort to streamline costs and increase productivity can use NetFlow Auditor to understand the move’s impact on security and to address security blind spots in the converged network.

Proactive Network Management – Risk management teams can use NetFlow Auditor to reduce risk and improve incident management by comparing current problems against a baseline of normal network behavior and performance at different times of day.

Customers include Internet service providers, banks, education, healthcare, and utilities, such as:

  • Bell Aliant
  • KDDI
  • BroadRiver
  • First Digital
  • NSW Department of Education and Training
  • IBM
  • StreamtheWorld
  • Desjardins Bank
  • Commonwealth Bank of Australia
  • Miami Dade County
  • Miami Herald
  • Sheridan College
  • Mitsui Sumitomo
  • Caprock Energy
  • Zesco Electricity
  • Self Regional Healthcare

Thanks to NetFlow Auditor for the article.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

When it comes to IT services, business value and user satisfaction are both dependent upon the server, network, and applications all working together seamlessly.

Failure to adequately monitor each of these and their interactions means that you could be flying blind – susceptible to degraded service levels.

While application and network monitoring receive a lot of the attention, it is important to also understand what’s going on with the server.

Virtualization changes the face of service delivery

The environment in which modern services run is complex. Superficially, it appears as though we’ve traveled back to the 1960s, with data centers again appearing like big monolithic constructs (whether cloud or internally hosted) with highly-virtualized server farms connecting through large core networks.

The emergence of virtualized clients (with most computing done remotely) takes the analogy a step further and makes it feel as if we are on the set of “Mad Men” with the old dumb terminals connected to the mainframe.

But that may be where the analogy ends. Today’s IT service delivery is almost never performed in a homogeneous vendor setting—from a hardware or software perspective. Likewise, the diversity of complex multi-tier applications and methods by which they are accessed continues to proliferate.

To learn more, download the white paper.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

Thanks to Network Instruments for the article.

Solving 3 Key Network Security Challenges

With high profile attacks from 2014 still fresh on the minds of IT professionals and almost half of companies being victims of an attack during the last year, it’s not surprising that security teams are seeking additional resources to augment defenses and investigate attacks.

As IT resources shift to security, network teams are finding new roles in the battle to protect network data. To be an effective asset in the battle, it’s critical to understand the involvement and roles of network professionals in security as well as the 3 greatest challenges they face.

Assisting the Security Team

The recently released State of the Network Global Study asked 322 network professionals about their emerging roles in network security. Eighty-five percent of respondents indicated that their organization’s network team was involved in handling security. Not only have network teams spent considerable time managing security issues, but the amount of time has also increased over the past year:

  • One in four spends more than 10 hours per week on security
  • Almost 70 percent indicated time spent on security has increased


Roles in Defending the Network

With multiple response rates above 50 percent, the majority of network teams are clearly involved in many security-related tasks. The top two roles for respondents – implementing preventative measures (65 percent) and investigating security breaches (58 percent) – mean they are working closely with security teams on handling threats both proactively and after the fact.


3 Key Security Challenges

Half of respondents indicated the greatest security challenge was an inability to correlate security and network performance. This was followed closely by an inability to replay anomalous security issues (44 percent) and a lack of understanding to diagnose security issues (41 percent).


The Packet Capture Solution

These three challenges point to an inability of the network team to gain context to quickly and accurately diagnose security issues. The solution lies in the packets.

  • Correlating Network and Security Issues

Performance management solutions like the Observer Platform provide baselining and behavior analysis to identify anomalous client, server, or network activities. Additionally, top-talker and bandwidth utilization reports can identify whether clients or servers are generating unexpectedly high amounts of traffic, indicative of a compromised resource (a minimal top-talkers sketch follows this list).

  • Replaying Issues for Context

The inability to replay and diagnose security issues points to long-term packet capture being an under-utilized resource in security investigations. Replaying captured events via retrospective analysis appliances like GigaStor provides full context to identify compromised resources, exploits utilized, and occurrences of data theft.
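To make the first bullet concrete, here is a minimal top-talkers report computed from raw flow records. It is a sketch of the technique only: the flow tuples are invented, and a real platform’s reporting is far richer:

```python
from collections import Counter

# (source IP, destination IP, bytes), as exported via NetFlow/IPFIX
flows = [
    ("10.0.0.5", "198.51.100.9", 1_200_000),
    ("10.0.0.7", "198.51.100.9", 350_000),
    ("10.0.0.5", "203.0.113.3", 4_800_000),
    ("10.0.0.9", "203.0.113.3", 90_000),
]

bytes_by_source = Counter()
for src, _dst, nbytes in flows:
    bytes_by_source[src] += nbytes

# A host that suddenly tops this list may be a compromised resource.
for ip, total in bytes_by_source.most_common(3):
    print(f"{ip:15s} {total / 1e6:8.1f} MB")
```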

As network teams are called upon to assist in security investigations, effective use of packet analysis is critical for quick and accurate investigation and remediation. Learn from cyber forensics investigators how to effectively work with security teams on threat prevention, investigations, and cleanup efforts at the How to Catch a Hacker Webinar. Our experts will uncover exploits and share top security strategies for network teams.

Thanks to Network Instruments for the article.

State of Networks: Faster, but Under Attack

Two recent studies that look at the state of mobile and fixed networks show that while networks are getting ever faster, security is a paramount concern that is taking up more time and resources.

Akamai recently released its fourth quarter 2014 State of the Internet report. Among the findings:

  • In terms of network security, high tech and public sector targets saw increased numbers of attacks from 2013 to 2014, while enterprise targets saw fewer attacks over the course of the year – except in Q4, when the commerce and enterprise segments were the most frequently targeted.

“Attacks against public sector targets reported throughout 2014 appear to be primarily motivated by political unrest, while the targeting of the high tech industry does not appear to be driven by any single event or motivation,” Akamai added.

  • Akamai customers saw DDoS attacks up 20% from the third quarter, although the overall number of such attacks held steady from 2013 to 2014 at about 1,150.
  • Average mobile speeds differ widely on a global basis, from 16 megabits per second in the U.K. to 1 Mbps in New Caledonia. Average peak mobile connection speeds continue to increase, ranging from a whopping 157.3 Mbps in Singapore to 7.5 Mbps in Argentina. And in Denmark, Saudi Arabia, Sweden, and Venezuela, 97% of unique IP addresses from mobile providers connected to Akamai’s network at speeds faster than the 4 Mbps threshold considered the minimum for “broadband.”

Meanwhile, Network Instruments, part of JDSU, recently completed its eighth annual survey of network professionals. It found that security is an increasing area of focus for network teams and that they are spending an increasing amount of time focused on security incidents and prevention.

NI reported that its survey found the most commonly reported network security challenge to be correlating security issues with network performance (cited by 50% of respondents) – meanwhile, the most common method for identifying security issues is syslog (used by 67% of respondents). Other methods included Simple Network Management Protocol and tracking performance anomalies, while long-term packet capture and analysis was used by slightly less than half of the survey participants – 48%. Network Instruments said that the relatively low utilization of long-term packet capture makes it “an under-utilized resource in security investigations” and that “replaying the events would provide greater context” for investigators.
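As a small example of the syslog-driven detection the survey describes, the sketch below counts failed SSH logins per source address. The log path is the typical Debian/Ubuntu location and will differ on other systems:

```python
import re
from collections import Counter

# Matches typical sshd lines such as:
# "Failed password for invalid user admin from 203.0.113.5 port 48233 ssh2"
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

attempts = Counter()
with open("/var/log/auth.log") as log:
    for line in log:
        match = FAILED.search(line)
        if match:
            attempts[match.group(1)] += 1

for ip, count in attempts.most_common(5):
    print(f"{ip:15s} {count} failed logins")
```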

NI also found that “application overload” is driving a huge increase in bandwidth use expectations, due to users accessing network resources and large files with multiple devices; real-time unified communications applications that require more bandwidth; and private cloud and virtualization adoption. See Network Instruments’ full infographic below:

Network Instruments' State of the Network infographic

Thanks to RCR Wireless News for the article.

See How Ixia’s NTO 7300 Vastly Outperforms the Closest Competitor in 100GbE Visibility, Scalability, Capacity, and Cost-Efficiency

Visibility Is an Urgent Challenge

Lack of visibility is behind the worst IT headaches, leaving the network open to malicious intrusions as well as compliance, availability, and performance problems. Today’s soaring traffic volumes bring greater complexity, proliferating apps and devices, and rising virtual traffic—in fact, “east-west” traffic between virtual machines now makes up half of all traffic on the network. Virtual traffic is the culprit that spawns unmonitored “blind spots,” a breeding ground for errors and attacks.

All these challenges make visibility critical to network security and management. Customers need a highly scalable visibility architecture—one that can eliminate blind spots and reduce complexity, while providing resilience and control. Visibility relies on monitoring tools, and new tool investment can be a real budget-buster. That’s why companies need to protect their investments in 1GbE and 10GbE monitoring tools, and why load balancing has become such a smart approach. Now, as networks move into the 100GbE environment, Ixia offers the NTO 7300, enabling total visibility into multiple 100GbE links and dominating its competition.
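The reason load balancing lets existing 1GbE and 10GbE tools keep up with faster links is flow-aware distribution: hash each packet’s 5-tuple so every packet of a given flow lands on the same tool. Real network packet brokers do this in hardware; the Python sketch below, with placeholder tool names, only illustrates the hashing idea:

```python
import hashlib

TOOLS = ["tool-1", "tool-2", "tool-3"]  # e.g., three 10GbE analyzers

def tool_for_flow(src_ip: str, dst_ip: str,
                  src_port: int, dst_port: int, proto: str) -> str:
    """Deterministically map a flow's 5-tuple to one monitoring tool."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return TOOLS[int.from_bytes(digest[:4], "big") % len(TOOLS)]

# Every packet of this flow hashes to the same tool, so each analyzer
# sees whole conversations rather than fragments of them.
print(tool_for_flow("10.0.0.5", "192.0.2.8", 51512, 443, "tcp"))
```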

Dramatic Design Difference

The NTO 7300 delivers the ability to optimize 1GbE and 10GbE monitoring tools for the intensive 100GbE environment and offers decisive advantages over competitors. No other solution packs as many ports into a compact footprint for industry-leading density and cost-efficiency. The NTO 7300’s one-two punch of design ingenuity plus advanced technology makes it the clear choice in every comparison. If you take a typical 100GbE deployment that requires 8 100GbE ports, advanced filtering, and 10GbE ports for tool access, it becomes clear that other solutions cannot keep up with the density and performance Ixia provides.

The Numbers Speak for Themselves

Compare the Ixia NTO 7300 to its closest competitor, and you see a striking difference in capacity, scalability and performance. The NTO 7300 commands every category for customer needs by providing more performance in 71% less space!

7300: Port-Plentiful

The Ixia NTO 7300 configuration fits neatly and entirely in a single 8U chassis, with many ports to spare.

Competition: Port-Poor

This competitor requires 28U and has insufficient 40GbE ports. It’s significantly lower in density, with no ports on advanced processing blades and fabric modules placed awkwardly in front.

Ixia NTO 7300, per chassis:

  • 24x40GbE ports (or 96x10GbE)
  • 64 10GbE AFM ports
  • 8x100GbE ports
  • 640Gbps deduplication

Competitor, per chassis (2 chassis required):

  • 2x40GbE ports
  • 40x10GbE ports
  • 4x100GbE ports
  • 240Gbps deduplication

With its “pay as you grow” scalability; savings on rack space and power; a simple, rack-mountable chassis; superior advanced features such as header stripping and deduplication; and wire-speed performance in any configuration, the NTO 7300 is ideal for filling that critical visibility gap in the 100GbE environment.

| | Ixia NTO 7300 | Other |
| --- | --- | --- |
| Fabric module location | Rear panel | Occupies front slots |
| 100GbE configuration | 2x100GbE + 4x40GbE or 16x10GbE | 2x100GbE + 8x10GbE |
| Advanced processing capacity per slot | Up to 640Gbps (320Gbps ingress + 320Gbps egress) | Up to 80Gbps |
| Advanced processing card configuration | 2xAFM16s + 4xQSFP + 640Gbps AFM, per slot | No tool or network ports, processor only |
| Slots per chassis | 6 | 8 |
| Chassis RU | 8 (with AC shelf) | 14 |

| Total configuration | Ixia NTO 7300 | Other | Advantage |
| --- | --- | --- | --- |
| 10GbE ports | 64 (up to 160) | 80 (up to 96) | Ixia (67% more max) |
| 40GbE ports | 96 | 8* | Ixia (1100% more max) |
| 100GbE ports | 8 | 8 | Equal |
| Deduplication bandwidth | 640Gbps | 480Gbps* | Ixia (33% more) |
| Total RU | 8 | 28 | Ixia (71% less) |

*Doesn’t meet requirements

Additional Resources:

Ixia Visibility Architecture

Ixia NTO 7300

Thanks to Ixia for the article.

Why Companies are Making the Switch to Cloud Based Network Monitoring

Many enterprises today are making the switch to “the cloud” for a variety of applications. The most popular cloud-based business applications include CRM software, email, project management, development, and backup. It’s predicted that by 2015, end-user spending on cloud services could be more than $180 billion.

Some of you may be asking “Why the cloud?” or “Is it really worth it?” The answers to these questions are both simple and compelling.

If an enterprise decides to use a cloud-based solution of any kind, it is going to see immediate benefits in 3 major areas:

  • Cost savings
  • Speed
  • Flexibility

Cost savings

In the network monitoring space, all of the “big guys” require a hefty upfront fee for their software, and then an equally expensive (if not more expensive) fee for professional services to actually make the system operate and integrate with other platforms.

By contrast, most cloud-based systems are sold as yearly (or shorter-term) SaaS subscriptions. The removal of a huge upfront investment usually makes the CFO happy. So does not having to pay for server hardware, storage, and other costs (like electricity and space) associated with running a solution in house.

Flexibility

“Use what you need, when you need it, and then turn it off when you don’t” – that is one of the most common (and powerful) sales pitches in the cloud world. But unlike the pitch from your local used car salesperson, this one is true! Cloud-based systems are generally much more flexible in terms of deployment, usage, terms, and even support compared to “legacy” software deployments.

Most cloud-based SaaS applications offer a free, no-obligation evaluation period and can be upgraded, downgraded, or cancelled with just a few clicks. This means that organizations are not “locked in” for years to a solution that might not do the job they need. Try that with your behemoth on-premise software!

Speed

In the IT world, speed comes in many forms. You might think of application performance or Internet download speeds, but in the cloud, speed generally means how fast a new application or service can go from “I need it” to “I have it.”

One of the biggest advantages of cloud-based systems is that they are already running. The front end, back end, and associated applications are already installed. As a user, all you have to do is raise your hand and say you want the service, and in most cases it can be provisioned in a matter of hours (or less).

In the cloud world of SaaS, this “lead time” has shrunk from weeks or months to hours or minutes. That means more productivity, less downtime, and happier users.

In the end, all organizations are looking for ways to trim unnecessary costs and increase capabilities. One of the easiest ways to accomplish this today is to switch to a cloud based network monitoring application.


Thanks to NMSaaS for the article.

Network Instruments State of the Network Global Study 2015

Eighth Annual “State of the Network” Global Study from JDSU’s Network Instruments Finds 85 Percent of Enterprise Network Teams Now Involved in Security Investigations

Deployment Rates for High-Performance Network Visibility and Software Defined Solutions Expected to Double in Two Years

Network Instruments, a JDSU performance management solution, today released the results of its eighth annual State of the Network global study. Based on insight gathered from 322 network engineers, IT directors, and CIOs around the world, 85 percent of enterprise network teams are now involved with security investigations, indicating a major shift in the role of those teams within enterprises.

Large-scale and high-profile security breaches have become more common as company data establishes itself as a valuable commodity on the black market. As such, enterprises are now dedicating more IT resources than ever before to protect data integrity. The Network Instruments study illustrates how growing security threats are affecting internal resources, identifies underutilized resources that could help improve security, and highlights emerging challenges that could rival security for IT’s attention.

As threats continue to escalate, one quarter of network operations professionals now spend more than 10 hours per week on security issues and are becoming increasingly accountable for securing data. This reflects an average uptick of 25 percent since 2013. Additionally, network teams’ security activities are diversifying. Teams are increasingly implementing preventative measures (65 percent), investigating attacks (58 percent) and validating security tool configurations (50 percent). When dealing with threats, half of respondents indicated that correlating security issues with network performance is their top challenge.

“Security is becoming so much more than just a tech issue. Regular media coverage of high-profile attacks and the growing number of malware threats that can plague enterprises – and their business – has thrust network teams capable of dealing with them into the spotlight. Network engineers are being pulled into every aspect of security, from flagging anomalies to leading investigations and implementing preventative measures,” said Brad Reinboldt, senior product manager for Network Instruments. “Staying on top of emerging threats requires these teams to leverage the tools they already have in innovative ways, such as applying deep packet inspection and analysis from performance monitoring solutions for advanced security forensics.”

The full results of the survey, available for download, also show that emerging network technologies have gained greater adoption over the past year.

Highlights include:

  • 40, 100 Gigabit Ethernet and SDN approaching mainstream: Year-over-year implementation rates for 40 Gb, 100 Gb and SDN in the enterprise have nearly doubled, according to the companies surveyed. This growth rate is projected to continue over the next two years as these technologies approach more than 50 percent adoption. Conversely, survey respondents were less interested in 25 Gb technology, with over 62 percent indicating no plans to invest in equipment using the newer Ethernet specification.
  • Enterprise Unified Communications remains strong but lacks performance-visibility features: The survey shows that Voice-over-IP, videoconferencing, and instant messaging technologies, which enable deeper collaboration and rich multimedia experiences, continue making strides in the enterprise, with over 50 percent penetration. Additionally, as more applications are virtualized and migrated to the cloud, new visibility challenges arise, along with new sources of performance degradation and delay. To that end, respondents noted a lack of visibility into the end-user experience as a chief challenge. Without visibility into what is causing issues, tech teams can’t ensure uptime and return on investment.
  • Bandwidth use expected to grow 51 percent by 2016: Projected bandwidth growth is a clear factor driving the rollout of larger network pipes. This year’s study found the majority of network teams predicting a much larger surge in bandwidth growth than last year, when bandwidth was expected to grow by only 37 percent. Future bandwidth growth is being fueled by multiple devices accessing network resources and by larger, more complex data such as 4K video. Real-time unified communications applications are also expected to put more strain on networks, while unified computing, private cloud, and virtualization initiatives have the potential to create application overload on the back end.

Key takeaways: what can network teams do?

  • Enterprises need to be on constant alert and agile in aligning IT teams and resources to handle evolving threats. To be more effective in taking on additional security responsibilities, network teams should be trained to think like a hacker and recognize increasingly complex and nefarious network threats.
  • They also need to incorporate performance monitoring and packet analysis tools already used by network teams for security anomaly detection, breach investigations, and assisting with remediation.
  • Security threats aren’t the only thing dictating the need for advanced network visibility tools that can correlate network performance with security and application usage. High-bandwidth activities including 4K video, private clouds and unified communications are gaining traction in the enterprise as well.

State of the Network Global Study Methodology

Network Instruments has conducted its State of the Network global study for eight consecutive years, drawing insight about network trends and painting a picture of what challenges IT teams face. Questions were designed based on interviews with network professionals as well as IT analysts. Results were compiled from the insights of 322 respondents, including network engineers, IT directors, and CIOs from around the world. In addition to geographic diversity, the study’s sample was evenly distributed among networks and business verticals of different sizes. Responses were collected from December 16, 2014 to December 27, 2014 via online surveys.

JDSU Network Instruments State of the Network 2015 Video

Thanks to Network Instruments for the article.