Be Ready for SDN/NFV with StableNet®

Software Defined Networking (SDN) and Network Functions Virtualization (NFV) are two terms that have garnered a great deal of attention in the Telco market over the last couple of years. However, before actually adopting SDN/NFV, several challenges have to be mastered. This article discusses those challenges and explains why StableNet® is the right solution to address them.

SDN and NFV are both very promising approaches. Their main objectives are to increase control-plane flexibility and to reduce costs by moving from expensive special-purpose hardware to common off-the-shelf devices. SDN separates the control plane from the data plane, which yields better control-plane programmability and flexibility, and much cheaper data-plane hardware. NFV is similar but differs in detail: it aims at removing network functions from inside the network and consolidating them in typically central places, such as datacenters.

Six things to think about before implementing SDN/NFV

The benefits of SDN and NFV seem evident. Both promise to increase flexibility and reduce cost. The idea of common standards seems to further ease the configuration and handling of an SDN or NFV infrastructure. However, our experience shows that with the decision to “go SDN or NFV” a lot of new challenges arise. Six of the major ones – far from a complete list – are addressed in the following:

1. Bootstrapping/“underlay” configuration:

How to get the SDN/NFV infrastructure set up before any SDN/NFV protocols can actually be used?

2. Smooth migration:

How to smoothly migrate from an existing environment while continuously assuring the management of the entire system consisting of SDN/NFV and legacy elements?

3. Configuration transparency and visualization:

How to assure that configurations via an abstracted northbound API are actually realized on the southbound API in the proper and desired way?

4. SLA monitoring and compliance:

How to guarantee that the improved application-awareness expected from SDN and NFV really brings the expected benefits? How to objectively monitor the combined SDN/NFV and legacy network, its flows, services, and corresponding KPIs? How to show that the expected benefits have been realized, and quantify that to justify the SDN/NFV migration expenses?

5. Competing standards and proprietary solutions:

How to orchestrate different standardized northbound APIs as well as vendor-specific flavors of SDN and NFV?

6. Localization of failures:

How to locate failures and their root cause in a centralized infrastructure without any distributed intelligence?

Almost certainly, some or even all of these challenges are relevant to any SDN/NFV use case. Solutions need to be found before adopting SDN/NFV.

StableNet® SDN/NFV Portfolio – Get ready to go SDN/NFV

StableNet® is a fully integrated 4-in-1 solution with a single product and data structure, comprising Configuration, Performance, and Fault Management, as well as Network and Service Discovery. By integrating SDN and NFV, StableNet® offers the following benefits:
  • Orchestration of large multi-vendor SDN/NFV and legacy environments – StableNet® is equipped with a powerful and highly automated discovery engine. Besides its own enhanced network CMDB, it offers inventory integration with third-party CMDBs. Furthermore, StableNet® supports over 125 different standardized and vendor-specific interfaces and protocols. Altogether, this leads to an ultra-scalable unified inventory for legacy and SDN/NFV environments.
  • KPI measurements and SLA assurance – The StableNet® Performance Module offers holistic service monitoring on both the server and network levels. It combines traditional monitoring approaches, such as NetFlow, with new SDN and NFV monitoring approaches. A powerful script engine allows the configuration of sophisticated end-to-end monitoring scripts. The availability of cheap plug-and-play StableNet® Embedded Agents further simplifies the distributed measurement of a service. Altogether, this makes it possible to measure all the necessary KPIs of a service and to assure its SLA compliance.
  • Increased availability and mitigation of failures in mixed environments – The StableNet® Fault and Impact Modules, with the SDN extension, combine device-based automated root cause analysis with service-based impact analysis to provide service assurance and fulfillment.
  • Automated service provisioning, including SDN/NFV – StableNet® offers an ultra-scalable, automated change management system. An integration with SDN northbound interfaces adds the ability to configure SDN devices. Support for various standardized and vendor-specific virtualization solutions paves the way for NFV deployments. StableNet® also offers options to help keep track of configuration changes and to check for policy violations and vulnerabilities.
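The SLA-assurance check behind such end-to-end monitoring is easy to illustrate. The following Python sketch evaluates collected response-time samples against an SLA target; the threshold, target, and sample values are invented for illustration and are not StableNet® defaults:

```python
# Sketch: evaluating SLA compliance from end-to-end response-time samples.
# Threshold, target, and samples are illustrative, not product defaults.

def sla_compliance(samples_ms, threshold_ms=200.0):
    """Return the fraction of samples meeting the response-time threshold."""
    if not samples_ms:
        raise ValueError("no samples collected")
    met = sum(1 for s in samples_ms if s <= threshold_ms)
    return met / len(samples_ms)

def sla_report(samples_ms, threshold_ms=200.0, target=0.99):
    """Summarize compliance against a contractual target fraction."""
    compliance = sla_compliance(samples_ms, threshold_ms)
    return {"compliance": compliance, "target": target,
            "breached": compliance < target}

samples = [120.0, 95.5, 210.3, 88.1, 150.0]   # measured RTTs in ms (invented)
report = sla_report(samples, threshold_ms=200.0, target=0.9)
```

With these numbers, 4 of 5 samples meet the threshold, so compliance is 0.8 and the 90% target is breached.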

StableNet® Service Workflow – Predestined for SDN/NFV

The growing IT complexity in today's enterprises increasingly demands a holistic, aggregated view of the services in a network, including all involved entities, e.g. network components, servers, and user devices.

The availability of this service view, including SDN/NFV components, facilitates different NMS tasks, such as SLA monitoring or NCCM.

The definition, rollout, monitoring, and analysis of services are integral parts of the Service Workflow offered by StableNet®. This workflow (see Figure 1) is also predestined to ease the management of SDN and NFV infrastructures.

Figure 1: StableNet® Service WorkFlow – predestined for SDN/NFV management


Trend towards “Virtualize Everything”

Besides SDN and NFV adoption trending upwards, there is also an emerging trend that stipulates “virtualize everything”. Virtualizing servers, software installations, network functions, and even the network management system itself leads to the largest economies of scale and maximum cost reductions.

StableNet® is fully ready to be deployed in virtualized environments or as a cloud service. An excerpt of the StableNet® Management Portfolio is shown in Figure 2.

Figure 2: StableNet® Management Portfolio – Excerpt


Thanks to InterComms for the article.

Solving 3 Key Network Security Challenges

With high-profile attacks from 2014 still fresh in the minds of IT professionals, and almost half of companies having been victims of an attack during the last year, it's not surprising that security teams are seeking additional resources to augment defenses and investigate attacks.

As IT resources shift to security, network teams are finding new roles in the battle to protect network data. To be an effective asset in the battle, it’s critical to understand the involvement and roles of network professionals in security as well as the 3 greatest challenges they face.

Assisting the Security Team

The recently released State of the Network Global Study asked 322 network professionals about their emerging roles in network security. Eighty-five percent of respondents indicated that their organization’s network team was involved in handling security. Not only have network teams spent considerable time managing security issues but the amount of time has also increased over the past year:

  • One in four spends more than 10 hours per week on security
  • Almost 70 percent indicated time spent on security has increased


Roles in Defending the Network

With many responses above 50 percent, the majority of network teams are clearly involved in a wide range of security-related tasks. The top two roles for respondents – implementing preventative measures (65 percent) and investigating security breaches (58 percent) – mean they are working closely with security teams on handling threats both proactively and after the fact.


3 Key Security Challenges

Half of respondents indicated the greatest security challenge was an inability to correlate security and network performance. This was followed closely by an inability to replay anomalous security issues (44 percent) and a lack of understanding to diagnose security issues (41 percent).


The Packet Capture Solution

These three challenges point to an inability of the network team to gain context to quickly and accurately diagnose security issues. The solution lies in the packets.

  • Correlating Network and Security Issues

Performance management solutions like the Observer Platform provide baselining and behavior analysis to identify anomalous client, server, or network activities. Additionally, top-talker and bandwidth-utilization reports can identify whether clients or servers are generating unexpectedly high amounts of traffic, which may indicate a compromised resource.
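As a rough illustration of the baselining idea (the analysis in a real platform is far richer), the sketch below flags hosts whose current traffic exceeds a simple mean-plus-k-standard-deviations baseline; the host addresses and byte counts are made up:

```python
# Sketch: flagging hosts whose traffic deviates from a statistical baseline.
# A real performance-management product uses richer behavior analysis;
# hosts and byte counts here are invented.
from statistics import mean, stdev

def anomalous_hosts(history, current, k=3.0):
    """history: {host: [bytes per interval]}; current: {host: bytes now}.
    Returns hosts whose current volume exceeds mean + k * stddev."""
    flagged = []
    for host, counts in history.items():
        if len(counts) < 2:
            continue  # not enough data to form a baseline
        baseline = mean(counts) + k * stdev(counts)
        if current.get(host, 0) > baseline:
            flagged.append(host)
    return flagged

history = {"10.0.0.5": [100, 110, 90, 105],
           "10.0.0.9": [100, 100, 120, 80]}
current = {"10.0.0.5": 500, "10.0.0.9": 130}
flagged = anomalous_hosts(history, current)   # only 10.0.0.5 stands out
```

The choice of k trades false positives against sensitivity; production baselining would also account for time-of-day patterns.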

  • Replaying Issues for Context

The inability to replay and diagnose security issues points to long-term packet capture being an under-utilized resource in security investigations. Replaying captured events via retrospective analysis appliances like GigaStor provides full context to identify compromised resources, exploits utilized, and occurrences of data theft.

As network teams are called upon to assist in security investigations, effective use of packet analysis is critical for quick and accurate investigation and remediation. Learn from cyber forensics investigators how to effectively work with security teams on threat prevention, investigations, and cleanup efforts at the How to Catch a Hacker Webinar. Our experts will uncover exploits and share top security strategies for network teams.

Thanks to Network Instruments for the article.

Application Intelligence Supercharges Network Security

I was recently at a conference where the topic of network security came up again, as it always does. It seems like there might be a little more attention on it now, not really due to the number of breaches, although that plays into it a little, but more because companies are being held accountable for allowing the breaches. Examples include Target (where both the CIO and CEO were fired over the 2013 breach) and the fact that the FCC and FTC are fining companies (like YourTel America, TerraCom, Presbyterian Hospital, and Columbia University) that allow a breach to compromise customer data.

This is an area where application intelligence could be used to help IT engineers. Just to be clear, application intelligence won’t fix ALL of your security problems, but it can give you additional and useful information that was very difficult to ascertain before now. For those that haven’t heard about application intelligence, this technology is available through certain network packet brokers (NPBs). It’s an extended functionality that allows you to go beyond Layer 2 through 4 (of the OSI model) packet filtering to reach all the way into Layer 7 (the application layer) of the packet data.

The benefit here is that rich data on the behavior and location of users and applications can be created and exported in any format needed—raw packets, filtered packets, or NetFlow information. IT teams can identify hidden network applications, mitigate network security threats from rogue applications and user types, and reduce network outages and/or improve network performance due to application data information.

In short, application intelligence is basically the real-time visualization of application-level data. This includes the dynamic identification of known and unknown applications on the network, application traffic and bandwidth use, detailed breakdowns of applications in use by application type, and geo-locations of users and devices while accessing applications.

Distinct signatures for known and unknown applications can be identified, captured, and passed on to specialized monitoring tools to provide network managers a complete view of their network. The filtered application information is typically sent on to third-party monitoring tools (e.g. Plixer, Splunk, etc.) as NetFlow information, but it could also be consumed through a direct user interface on the NPB. The benefit of sending the information to third-party monitoring tools is that it often gives them more granular, detailed application data than they would otherwise have, improving their efficiency.

With the number of applications on service provider and enterprise networks rapidly increasing, application intelligence provides unprecedented visibility to enable IT organizations to identify unknown network applications. This level of insight helps mitigate network security threats from suspicious applications and locations. It also allows IT engineers to spot trends in application usage which can be used to predict, and then prevent, congestion.

Application intelligence effectively allows you to create an early warning system for real-time vigilance. In the context of improving network security, application intelligence can provide the following benefits:

  • Identify suspicious/unknown applications on the network
  • Identify suspicious behavior by correlating connections with geography and known bad sites
  • Identify prohibited applications that may be running on your network
  • Proactively identify new user applications consuming network resources


A core feature of application intelligence is the ability to quickly identify ALL applications on a network. This allows you to know exactly what is or is not running on your network. The feature is often an eye opener for IT teams, who are surprised to find that there are applications on their network they knew nothing about. Another key feature is that all applications are identified by a signature. If an application is unknown, a signature can be developed to record its existence. Reviewing these unknown application signatures should be the first step in IT threat-detection procedures, so that you can identify any hidden/unknown network applications and user types. The ATI Processor correlates applications with geography and can identify compromised devices and malicious activities, such as Command and Control (CNC) communications from malicious botnet activity.

A second feature of application intelligence is the ability to visualize the application traffic on a world map for a quick view of traffic sources and destinations. This allows you to isolate specific application activity by granular geography (country, region, and even neighborhood). User information can then be correlated with this information to further identify and locate rogue traffic. For instance, maybe there is a user in North Korea that is hitting an FTP server in Dallas, TX and transferring files off network. If you have no authorized users in North Korea, this should be treated as highly suspicious. At this point, you can then implement your standard security protocols—e.g., kill the application session immediately, capture origin and destination information, capture file transfer information, etc.
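The North Korea/FTP scenario above amounts to a simple correlation check. In the Python sketch below, the GeoIP table, country allowlist, blocklist, and flow records are all hypothetical stand-ins for real feeds:

```python
# Sketch: correlating connections with geography and a blocklist.
# GEO, ALLOWED_COUNTRIES, KNOWN_BAD, and the flows are invented examples;
# a real deployment would use a live geolocation and threat feed.

GEO = {"175.45.176.10": "KP", "8.8.8.8": "US", "93.184.216.34": "US"}
ALLOWED_COUNTRIES = {"US", "CA", "GB"}
KNOWN_BAD = {"203.0.113.66"}

def suspicious(flow):
    """flow: (src_ip, dst_ip, dst_port). Flag by blocklist or geography."""
    src, dst, _port = flow
    if src in KNOWN_BAD or dst in KNOWN_BAD:
        return True
    # Unknown or disallowed source countries are treated as suspicious.
    return GEO.get(src, "??") not in ALLOWED_COUNTRIES

flows = [
    ("175.45.176.10", "198.51.100.20", 21),   # FTP from an unexpected country
    ("8.8.8.8", "198.51.100.20", 443),        # ordinary HTTPS traffic
]
alerts = [f for f in flows if suspicious(f)]
```

Only the FTP flow from the disallowed country ends up in `alerts`, at which point the standard security protocols (kill the session, capture origin/destination details) would kick in.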

Another way of using application intelligence is to audit your network policies and usage of those policies. For instance, maybe your official policy is for employees to use Outlook for email. All inbound email traffic is then passed through an anti-viral/malware scanner before any attachments are allowed entry into the network. With application intelligence, you would be able to tell if users are following this policy or whether some are using Google mail and downloading attachments directly through that service, which is bypassing your malware scanner. Not only would this be a violation of your policies, it presents a very real threat vector for malware to enter your network and commence its dirty work.
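A minimal sketch of such a policy audit might look as follows. Classifying mail traffic by destination hostname is a simplification (a real NPB would match Layer 7 application signatures), and the domains and user names are illustrative:

```python
# Sketch: auditing observed flows against an "Outlook only" mail policy.
# Domains and users are hypothetical; real classification would rely on
# Layer 7 application signatures rather than hostnames.

WEBMAIL_DOMAINS = {"mail.google.com", "mail.yahoo.com"}   # bypasses scanner

def policy_violations(flows):
    """flows: list of (user, dst_host).
    Returns users reaching unsanctioned webmail services."""
    return sorted({user for user, host in flows if host in WEBMAIL_DOMAINS})

flows = [
    ("alice", "outlook.office365.com"),   # sanctioned mail path
    ("bob", "mail.google.com"),           # direct webmail, bypassing scanner
]
violations = policy_violations(flows)
```

Here "bob" is flagged: his attachments never pass through the anti-malware scanner, which is exactly the threat vector the paragraph above describes.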

Ixia’s Application and Threat Intelligence (ATI) Processor brings intelligent functionality to the network packet broker landscape with its patent pending technology that dynamically identifies all applications running on a network. The Ixia ATI Processor is a 48 x 10GE interface card that can be used standalone in a compact 1 rack unit high chassis or within an Ixia Net Tool Optimizer (NTO) 7300 network packet broker (NPB) for a high port density option.

As new network security threats emerge, the ATI Processor helps IT improve their overall security with better intelligence for their existing security tools. To learn more, please visit the ATI Processor product page or contact us to see a demo!


Thanks to Ixia for the article.

Manage your SDN Network with StableNet®

SDN – A Promising Approach for Next Generation Networks

Recently, Software Defined Networking (SDN) has become a very popular term in the area of communication networks. The key idea of SDN is to separate the control plane from the data plane of a communication network. The control plane is removed from the normal network elements into typically centralized control components. The normal elements can then be replaced by simpler and therefore cheaper off-the-shelf devices that only take care of the data plane, i.e. forwarding traffic according to rules issued by the control unit.

SDN is expected to bring several benefits, such as reduced investment costs due to cheaper network elements, and a better programmability due to a centralized control unit and standardized vendor-independent interfaces. In particular, SDN is also one of the key enablers to realize network virtualization approaches which enable companies to provide application aware networks and simplify cloud network setups.

There are various SDN use cases and application areas that mostly tend to benefit from the increased flexibility and dynamicity offered by SDN. The application areas range from resource management in the access area with different access technologies, through datacenters where flexible and dynamic cloud and network orchestration are increasingly integrated, to the business area where distributed services can be realized more flexibly over dedicated lines provided by SDN. Each SDN use case is, however, as individual as the particular company. A unified OSS solution supporting SDN, as offered with Infosim® StableNet®, should therefore be an integral part of any setup.

SDN Challenges – 6 things to think about before you go SDN

The benefits of SDN seem evident. It is a very promising approach to increase flexibility and reduce cost. The idea of a common standard further seems to ease the configuration and handling of an SDN network. However, our experience shows that with the decision to “go SDN” a lot of new challenges arise. Six of these major challenges – by no means a complete list – are addressed in the following.

Bootstrapping, Configuration and Change Management

SDN aims at a centralization of the network control plane. Once an SDN network is set up, a central controller can adequately manage the different devices and the traffic on the data plane. Getting to that point, however, is exactly one of the challenges to solve. How can the SDN network be set up before any SDN protocols can actually be used? How are the different SDN devices assigned to their controllers? How are backup controllers configured in case of outages? The common SDN protocols are not adequate for these tasks; they focus on traffic control in an already configured network. Thus, to be ready to “go SDN”, additional solutions are required.
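One way to approach the bootstrapping problem is to compute an explicit controller-assignment plan before any SDN protocol is in use. The sketch below (device names and controller addresses are invented for illustration) gives each switch a primary and a backup controller:

```python
# Sketch: an underlay bootstrap plan assigning each switch a primary and
# backup controller before any SDN protocol runs. Names and addresses
# are invented.

def assign_controllers(devices, controllers):
    """Round-robin primaries; the next controller in the list is the backup."""
    if len(controllers) < 2:
        raise ValueError("need at least two controllers for failover")
    plan = {}
    for i, dev in enumerate(devices):
        plan[dev] = {
            "primary": controllers[i % len(controllers)],
            "backup": controllers[(i + 1) % len(controllers)],
        }
    return plan

CONTROLLERS = ["10.1.0.1", "10.1.0.2"]
plan = assign_controllers(["sw1", "sw2", "sw3"], CONTROLLERS)
```

Such a plan would then be pushed out via conventional means (DHCP options, out-of-band configuration) so that each device knows where to connect once it boots.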

Smooth migration from legacy to SDN

The previously mentioned need for configuration and change management is further exacerbated by the coexistence of SDN and non-SDN devices. From our experience in the Telco industry, it is not possible to start from scratch and move your network to an SDN solution. The challenge is rather to smoothly migrate an existing environment while continuously assuring the management of the entire system consisting of SDN and non-SDN elements. Thus, legacy support is an integral part of SDN support.

Configuration transparency and visualization

By separating the control plane and the data plane, SDN splits the configuration into two different levels. When sending commands to the SDN network, you normally never address any device directly; this “southbound” communication is handled entirely by the controllers, while all control steps are conducted using the “northbound” API(s). On the one hand this approach simplifies the configuration process; on the other hand it leads to a loss of transparency. SDN itself does not offer any neutral/objective view of whether a configuration sent via the northbound API was actually realized on the southbound API in the proper and desired way. However, such a visualization, e.g. as overlay networks or on a per-flow basis, is an important requirement to assure the correctness and transparency of any setup.
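The desired neutral view can be thought of as a diff between intended and installed state. The following sketch models flow rules as (match, action) pairs and reports drift; in practice the intended set would come from the northbound API and the installed set from querying the switches (all rule strings here are invented):

```python
# Sketch: verifying that flow rules requested via the northbound API were
# actually installed southbound. Rules are modeled as (match, action) pairs;
# the rule strings are illustrative only.

def config_drift(intended, installed):
    """Return (rules missing from switches, rules nobody asked for)."""
    missing = intended - installed
    unexpected = installed - intended
    return missing, unexpected

intended = {("dst=10.0.0.1", "fwd:port2"), ("dst=10.0.0.2", "fwd:port3")}
installed = {("dst=10.0.0.1", "fwd:port2"), ("dst=10.0.0.9", "drop")}
missing, unexpected = config_drift(intended, installed)
```

A non-empty `missing` or `unexpected` set is precisely the transparency gap the paragraph describes: the northbound intent and the southbound reality have diverged.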

SLA monitoring and compliance

The requirements for SDN mentioned earlier go even one step further. Assuring the correctness and transparency of the setup only guarantees that path layouts, flow rules, etc. are set up as desired during the configuration process. To assure the effectiveness of resource management and traffic engineering actions, and thus the satisfaction of the customers using the infrastructure, more than just the correctness of the setup is needed. SLAs have to be met to assure that users really get what they expect and what they pay for. The term SDN often goes along with the notion of application-aware networks. However, SDN itself cannot provide any guarantee that this application-awareness really brings the expected benefits. Thus, a means of objectively monitoring the network, its flows, services, and their corresponding KPIs is necessary.

Handling of competing standards and proprietary solutions

One main expectation of SDN is that a common standard will remove the necessity to control proprietary devices. However, recent developments show that the different standardization efforts cannot really agree on a single controller, and thus a single northbound API. Furthermore, many popular hardware vendors even offer their own proprietary flavor of SDN. To cope with these different solutions, an adequate mediation layer is needed.

Localization of failures without any “distributed intelligence”

In a traditional network with a decentralized control plane, the localization of failed components can be done as part of the normal distributed protocols. Since devices are in continuous contact anyway, failure information spreads through the network and the actual location of an outage can thus be identified. In an SDN environment, what used to be a matter of course suddenly cannot be relied on anymore. By design, devices normally only talk to the controller, and only send requests when new instructions are needed. Devices never talk to other devices and might not even have such capabilities. Therefore, a device will in general not recognize that an adjacent link is down, and in particular will not spread such information through the network. New ways thus have to be found to recognize and locate failures.
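One possible approach is controller-driven link probing: the controller injects a probe across each link and checks whether it comes back. The sketch below is only an illustration of the idea (the topology and probe results are invented; an OpenFlow-based implementation would use packet-out/packet-in messages to realize the probes):

```python
# Sketch: controller-driven link probing to localize failures when devices
# do not exchange liveness information among themselves. Topology and probe
# outcomes are invented.

def failed_links(links, probe_results):
    """links: iterable of (a, b) pairs; probe_results: {(a, b): bool}.
    A link whose probe did not return (or was never answered) is suspect."""
    return [link for link in links if not probe_results.get(link, False)]

topology = [("sw1", "sw2"), ("sw2", "sw3"), ("sw3", "sw1")]
probes = {("sw1", "sw2"): True, ("sw2", "sw3"): False, ("sw3", "sw1"): True}
suspects = failed_links(topology, probes)
```

Because the controller, not the devices, drives the probes, this recovers the failure-localization capability that distributed protocols used to provide for free.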

To learn more, download the white paper


Expanding Services Monitoring for Small Sites

Expanding services monitoring to small sites has always been a challenge, especially in environments with large numbers of relatively small sites. While these sites may be physically ‘small’, they are all very important to overall business service delivery. This model includes enterprise businesses like retail, branch-based financial organizations, and educational institutions, as well as providers who deliver services to home and small offices.

Some of the challenges to monitoring these include:

  • Gaining visibility of service quality (QoE) at large numbers of remote sites
  • Establishing secure management and monitoring across the public internet, VPNs, and firewalls for large numbers of remote sites
  • Deploying cost-efficient monitoring of large numbers of small remote sites
  • Gaining online service-quality reference information at distributed customer reference sites
  • Gaining performance metrics like “download speed” for key websites (URLs) from the distributed customer perspective
  • Gaining service availability and quality measurements independent of user devices and applications
  • Testing service availability, quality, and performance of multimedia encoded streams End-to-End or Hop-to-Hop, in relation to the multimedia stream container carrying audio- and video-coded traffic, from distributed customer perspectives

How can StableNet® Embedded Agent (SNEA) technology solve this problem?

Typical use cases

  • Gain visibility of your distributed services, including the customer site areas
  • The critical part: Cost efficient services assurance on large numbers of small sites
  • User and application usage independent services monitoring of large numbers of distributed sites
    • Small and home offices – QoE for distributed customer site, connectivity and availability
    • Bank offices, retail shops, franchise shops, POS terminals
    • Regional government, community offices, police stations, fire stations
    • Industrial distributed small sites, e.g. pump stations, power stations, base stations etc.
    • Distributed installations, e.g. IP-addressable equipment in:
      • Next-Hop services and distributed IT infrastructure monitoring
      • Distributed offices running across provider networks
      • Monitoring regional company offices connectivity via DSL or IP/cable
  • E2E reference monitoring
    • Remote site reference simulation and QoE monitoring of IP multimedia audio and video encoded traffic
    • Remote site reference call simulation of IP telephony calls monitoring quality and call availability
    • Centrally managed remote execution of monitoring tasks
  • Inventory: Discovery of regional small sites IT devices and infrastructure
  • Security:
    • Detecting rogue devices and unwanted devices within small offices
    • Secure monitoring of small sites behind firewalls across public internet

Typical use cases using StableNet® SNEA can be summarized as follows:

1) Use SNEAs for monitoring availability and service performance (jitter, RTT, etc.) on large numbers of small sites/offices, instead of an uneconomical and often inapplicable shadow router:

  • Regionally distributed small offices, regional bank offices
  • Regionally distributed services users, e.g. retail shops, gas stations, franchise shops
  • Regional government, local community offices
  • Police and fire stations
  • Distributed, automated monitoring stations with multiple measurement and monitoring equipment using common IP services

2) Use SNEAs to run plug-in/scripts in distributed small sites, e.g. if you have:

  • several thousand customer sites to reference check your IP or multimedia services
  • several thousand branch offices to run periodic availability and performance reference checks on
  • numerous ATM machines, cash desks, retail locations, etc. that you need to check for accessibility and service availability
  • numerous remote sites connecting back to the centralized DC applications

3) Use SNEAs to measure E2E tests like:

  • IPT/VoIP availability and call quality test (simulate VoIP encoded traffic, simulate SIP IP Telephony call)
  • Video tests (simulate IPTV encoded traffic and video conferencing traffic)
  • Key application availability and response time
  • Wireless access availability and response time

4) Use of SNEA to execute IP, data or VoIP reference calls via mobile sticks

5) “Next-Hop” measurements – Monitor entire distribution and infrastructure chains by performing cost-efficient “Next-Hop” monitoring, e.g. IP: IP-SLA type measurements like Jitter, delay, RTT, packet loss, ICMP Echo/Ping or encoded traffic simulation
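The IP-SLA-type KPIs named here can be derived from raw probe samples. The sketch below computes average RTT, loss, and a simple jitter figure (mean absolute difference between consecutive samples; RFC 3550 defines a smoothed variant used by RTP); the sample values are invented:

```python
# Sketch: deriving IP-SLA-style KPIs (average RTT, jitter, packet loss)
# from probe samples, as a Next-Hop agent might. Jitter here is the mean
# absolute inter-sample difference, a simplified variant of the RFC 3550
# estimator. Sample values are invented.

def kpis(rtts_ms):
    """rtts_ms: list of RTT samples in ms, with None marking a lost probe."""
    received = [r for r in rtts_ms if r is not None]
    if len(received) < 2:
        raise ValueError("need at least two received probes")
    loss = 1.0 - len(received) / len(rtts_ms)
    avg = sum(received) / len(received)
    jitter = (sum(abs(a - b) for a, b in zip(received, received[1:]))
              / (len(received) - 1))
    return {"avg_rtt": avg, "jitter": jitter, "loss": loss}

stats = kpis([20.0, 22.0, None, 21.0, 25.0])   # one lost probe out of five
```

With these samples the agent would report an average RTT of 22 ms, 20% loss, and a jitter of about 2.3 ms.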

6) Use of SNEA to independently monitor IP connected equipment like:

  • Equipment in distributed TV transmission and head-end stations
  • Equipment in mobile base stations
    • Cloud services environments, to monitor QoE from the cloud users’ perspective

These are just a few examples of how StableNet® can expand services monitoring to high numbers of remote sites in a highly functional, yet cost-effective manner. How will you monitor your remote sites?
For more information see:

http://www.telnetnetworks.ca/en/resources/infosim/doc_download/745-stablenet-regional-and-e2e-monitoring-with-the-stablenet-embedded-agent-snea-on-a-banana-pi-type-hardware-platform.html

State of Networks: Faster, but Under Attack

Two recent studies that look at the state of mobile and fixed networks show that while networks are getting ever faster, security is a paramount concern that is taking up more time and resources.

Akamai recently released its fourth quarter 2014 State of the Internet report. Among the findings:

  • In terms of network security, high-tech and public sector targets saw increased numbers of attacks from 2013 to 2014, while enterprise targets had fewer attacks over the course of the year – except in Q4, when the commerce and enterprise segments were the most frequently targeted.

“Attacks against public sector targets reported throughout 2014 appear to be primarily motivated by political unrest, while the targeting of the high tech industry does not appear to be driven by any single event or motivation,” Akamai added.

  • Akamai customers saw DDoS attacks up 20% from the third quarter, although the overall number of such attacks held steady from 2013 to 2014 at about 1,150.
  • Average mobile speeds differ widely on a global basis, from 16 megabits per second in the U.K., to 1 Mbps in New Caledonia. Average peak mobile connection speeds continue to increase, from a whopping 157.3 Mbps in Singapore, to 7.5 Mbps in Argentina. And Denmark, Saudi Arabia, Sweden and Venezuela had 97% of unique IP addresses from mobile providers connect to Akamai’s network at speeds faster than the 4 Mbps threshold that is considered the minimum for “broadband.”

Meanwhile, Network Instruments, part of JDSU, recently completed its eighth annual survey of network professionals. It found that security is an increasing area of focus for network teams and that they are spending an increasing amount of time focused on security incidents and prevention.

NI reported that its survey found that the most commonly reported network security challenge is correlating security issues with network performance (reported by 50% of respondents), while the most common method for identifying security issues is “syslogs” (used by 67% of respondents). Other methods included Simple Network Management Protocol and tracking performance anomalies, while long-term packet capture and analysis was used by slightly less than half of the survey participants (48%). Network Instruments said that the relatively low utilization of long-term packet capture makes it “an under-utilized resource in security investigations” and that “replaying the events would provide greater context” for investigators.

NI also found that “application overload” is driving a huge increase in bandwidth use expectations, due to users accessing network resources and large files with multiple devices; real-time unified communications applications that require more bandwidth; as well as private cloud and virtualization adoption. See Network Instrument’s full infographic below:

Network Instruments' State of the Network infographic

Thanks to RCR Wireless News for the article.

Why Companies are Making the Switch to Cloud Based Network Monitoring

Many Enterprises today are making the switch over to “The Cloud” for a variety of applications. The most popular cloud based (business) applications include CRM software, email, project management, development & backup. It’s predicted that by 2015, end-user spending on cloud services could be more than $180 billion.

Some of you may be asking “Why the cloud?” or “Is it really worth it?” The answers to these questions are both simple and compelling.

If an enterprise decides to use a cloud-based solution of any kind, it will see immediate benefits in three major areas:

  • Cost savings
  • Flexibility
  • Speed

Cost savings

In the network monitoring space, all of the “big guys” require a hefty upfront fee for their software, and then an equally expensive (if not more expensive) fee for professional services to actually make the system operate and integrate with other platforms.

By contrast, most cloud-based systems are sold as yearly (or shorter-term) SaaS subscriptions. The removal of a huge upfront investment usually makes the CFO happy. CFOs are also happy when they don’t need to pay for server hardware, storage, and other costs (like electricity and space) associated with running a solution in-house.

Flexibility

“Use what you need, when you need it, and then turn it off when you don’t” – that is one of the most common (and powerful) sales pitches in the cloud world. But, unlike the sales pitch from your local used car salesperson, this one is true! Cloud-based systems are generally much more flexible in terms of deployment, usage, terms, and even support compared to “legacy” software deployments.

Most cloud-based SaaS applications offer a free, no-obligation evaluation period and can be upgraded, downgraded, or cancelled with just a few clicks. This means that organizations are not “locked in” for years to a solution that might not do the job they need. Try that with your behemoth on-premise software!

Speed

In the IT world, speed comes in many forms. You might think of application performance or Internet download speeds, but in the cloud speed generally means how fast a new application or service can go from “I need it” to “I have it”.

One of the biggest advantages of cloud-based systems is that they are already running. The front end, back end, and associated applications are already installed. As a user, all you have to do is raise your hand and say you want the service; in most cases it can be provisioned in a matter of hours (or less).

In the cloud world of SaaS this “lead time” has shrunk from weeks or months to hours or minutes. That means more productivity, less downtime and happier users.

In the end, all organizations are looking for ways to trim unnecessary costs and increase capabilities. One of the easiest ways to accomplish this today is to switch to a cloud based network monitoring application.


Thanks to NMSaaS for the article.

Network Instruments State of the Network Global Study 2015

Eighth Annual “State of the Network” Global Study from JDSU’s Network Instruments Finds 85 Percent of Enterprise Network Teams Now Involved in Security Investigations

Deployment Rates for High-Performance Network Visibility and Software Defined Solutions Expected to Double in Two Years

Network Instruments, a JDSU Performance Management Solution, released the results of its eighth annual State of the Network global study today. Based on insight gathered from 322 network engineers, IT directors and CIOs around the world, 85 percent of enterprise network teams are involved with security investigations, indicating a major shift in the role of those teams within enterprises.

Large-scale and high-profile security breaches have become more common as company data establishes itself as a valuable commodity on the black market. As such, enterprises are now dedicating more IT resources than ever before to protect data integrity. The Network Instruments study illustrates how growing security threats are affecting internal resources, identifies underutilized resources that could help improve security, and highlights emerging challenges that could rival security for IT’s attention.

As threats continue to escalate, one quarter of network operations professionals now spend more than 10 hours per week on security issues and are becoming increasingly accountable for securing data. This reflects an average uptick of 25 percent since 2013. Additionally, network teams’ security activities are diversifying. Teams are increasingly implementing preventative measures (65 percent), investigating attacks (58 percent) and validating security tool configurations (50 percent). When dealing with threats, half of respondents indicated that correlating security issues with network performance is their top challenge.

“Security is becoming so much more than just a tech issue. Regular media coverage of high-profile attacks and the growing number of malware threats that can plague enterprises – and their business – has thrust network teams capable of dealing with them into the spotlight. Network engineers are being pulled into every aspect of security, from flagging anomalies to leading investigations and implementing preventative measures,” said Brad Reinboldt, senior product manager for Network Instruments. “Staying on top of emerging threats requires these teams to leverage the tools they already have in innovative ways, such as applying deep packet inspection and analysis from performance monitoring solutions for advanced security forensics.”

The full results of the survey, available for download, also show that emerging network technologies have gained greater adoption over the past year.

Highlights include:

  • 40, 100 Gigabit Ethernet and SDN approaching mainstream: Year-over-year implementation rates for 40 Gb, 100 Gb and SDN in the enterprise have nearly doubled, according to the companies surveyed. This growth rate is projected to continue over the next two years as these technologies approach more than 50 percent adoption. Conversely, survey respondents were less interested in 25 Gb technology, with over 62 percent indicating no plans to invest in equipment using the newer Ethernet specification.
  • Enterprise Unified Communications remains strong but lacks performance-visibility features: The survey shows that Voice-over-IP, videoconferencing and instant messaging technologies, which enable deeper collaboration and rich multimedia experiences, continue making strides in the enterprise, with over 50 percent penetration. Additionally, as more applications are virtualized and migrated to the cloud, this introduces new visibility challenges and sources that can impact performance and delay. To that end, respondents noted a lack of visibility into the end-user experience as a chief challenge. Without visibility into what is causing issues, tech teams can’t ensure uptime and return-on-investment.
  • Bandwidth use expected to grow 51 percent by 2016: Projected bandwidth growth is a clear factor driving the rollout of larger network pipes. This year’s study found the majority of network teams are predicting a much larger surge in bandwidth growth than last year, when bandwidth was only expected to grow by 37 percent. Key drivers for future bandwidth growth are being fueled by multiple devices accessing network resources and larger and more complex data such as 4K video. Real-time unified communications applications are also expected to put more strain on networks, while unified computing, private cloud and virtualization initiatives have the potential to create application overload on the backend.

Key takeaways: what can network teams do?

  • Enterprises need to be on constant alert and agile in aligning IT teams and resources to handle evolving threats. To be more effective in taking on additional security responsibilities, network teams should be trained to think like a hacker and recognize increasingly complex and nefarious network threats.
  • They also need to incorporate performance monitoring and packet analysis tools already used by network teams for security anomaly detection, breach investigations, and assisting with remediation.
  • Security threats aren’t the only thing dictating the need for advanced network visibility tools that can correlate network performance with security and application usage. High-bandwidth activities including 4K video, private clouds and unified communications are gaining traction in the enterprise as well.
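Since half of respondents named correlating security issues with network performance as their top challenge, a minimal sketch of that correlation may help. The alert timestamps, latency samples, threshold, and time window below are all illustrative assumptions, not data from the survey or any specific product.

```python
from datetime import datetime, timedelta

# Hypothetical data: security alert timestamps and per-minute latency samples.
alerts = [datetime(2015, 4, 1, 13, 5), datetime(2015, 4, 1, 14, 30)]
latency_ms = {
    datetime(2015, 4, 1, 13, 4): 180,   # latency spike near the first alert
    datetime(2015, 4, 1, 13, 30): 22,
    datetime(2015, 4, 1, 14, 29): 25,   # second alert shows no spike
}

def correlated_alerts(alerts, samples, threshold_ms=100, window=timedelta(minutes=2)):
    """Return alerts that coincide with a latency spike within +/- window."""
    hits = []
    for alert in alerts:
        for ts, ms in samples.items():
            if abs(ts - alert) <= window and ms >= threshold_ms:
                hits.append(alert)
                break
    return hits

print(correlated_alerts(alerts, latency_ms))  # [datetime(2015, 4, 1, 13, 5)]
```

A real deployment would pull both data streams from monitoring tools rather than literals, but the same join-on-time-window idea applies.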

State of the Network Global Study Methodology

Network Instruments has conducted its State of the Network global study for eight consecutive years, drawing insight about network trends and painting a picture of what challenges IT teams face. Questions were designed based on interviews with network professionals as well as IT analysts. Results were compiled from the insights of 322 respondents, including network engineers, IT directors, and CIOs from around the world. In addition to geographic diversity, the study’s sample was evenly distributed among networks and business verticals of different sizes. Responses were collected from December 16, 2014 to December 27, 2014 via online surveys.

JDSU Network Instruments State of the Network 2015 Video

Thanks to Network Instruments for the article. 

Infosim StableNet Legacy Refund Certificate (value up to $250,000.00)

Are you running on Netcool, CA eHealth or any other legacy network management solutions?

$$$Stop throwing away your money$$$

Infosim® will give you a certificate (value up to $250,000) of product credit for switching from your legacy product maintenance spend.

Check whether your legacy NMS applies!

Fill out the request form and we can check whether your system matches one of the ten that qualify.

Find out your trade-up value!

Make your budget work this year!

Thank you!

Thanks to Infosim for the article.

The Importance of Using Network Discovery in your Business

Network discovery is not a single thing. In general terms, it is the process of gathering information about the network resources around you.

You may be asking why this is even important to you. The primary reasons why network discovery is vital for your business are as follows:

  • If you don’t know what you have, you cannot hope to monitor and manage it.
  • You can’t track down interconnected problems.
  • You don’t know when something new comes on the network.
  • You don’t know when you need upgrades.
  • You may be paying too much for maintenance.

All of these factors are vital to maintaining the health of your company’s network resources.

One of the most important aspects mentioned above is not knowing what you have; this is a huge problem for many companies. If you don’t know what you have, how can you manage or monitor it?

Most of the time in network management, you’re trying to track down potential issues within your network and work out how to resolve them. This is a very hard task, especially if you’re dealing with a large-scale network. If one thing goes down within the network, it starts a trickle effect, and more parts of the network will in turn start to go down.

All of these problems are easily fixed. NMSaaS has network discovery capabilities with powerful and flexible tools that let you determine exactly what is subject to monitoring.

These elements are automatically labeled and grouped. This makes automatic data collection possible, as well as threshold monitoring and reporting on already discovered elements.

As a result, we can access critical details such as IP address, MAC address, OS, firmware, services, memory, serial numbers, interface information, routing information, and neighbor data. These are all available at the click of a button or as a scheduled report.
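The labeling and grouping step described above can be sketched as follows. NMSaaS’s actual data model is not public, so the inventory records, field names, and `group_by_role` function are illustrative assumptions only.

```python
# Hypothetical inventory records as a discovery pass might return them;
# field names ("ip", "mac", "os", "role") are illustrative, not NMSaaS's schema.
discovered = [
    {"ip": "10.0.0.1",  "mac": "00:1a:2b:00:00:01", "os": "IOS",   "role": "router"},
    {"ip": "10.0.0.10", "mac": "00:1a:2b:00:00:0a", "os": "Linux", "role": "server"},
    {"ip": "10.0.0.11", "mac": "00:1a:2b:00:00:0b", "os": "Linux", "role": "server"},
    {"ip": "10.0.0.2",  "mac": "00:1a:2b:00:00:02", "os": "JunOS", "role": "router"},
]

def group_by_role(devices):
    """Group discovered elements by role so data collection, threshold
    monitoring, and reporting can target each group automatically."""
    groups = {}
    for device in devices:
        groups.setdefault(device["role"], []).append(device["ip"])
    return groups

print(group_by_role(discovered))
# {'router': ['10.0.0.1', '10.0.0.2'], 'server': ['10.0.0.10', '10.0.0.11']}
```

Once elements are grouped this way, a new device appearing on the network simply shows up in the appropriate group on the next discovery pass.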

Thanks to NMSaaS for the article.