Expanding Services Monitoring for Small Sites

Expanding services monitoring to small sites has always been a challenge, especially in environments where there are large numbers of relatively small sites. While these sites may be physically ‘small’, they are all very important to overall business service delivery. This model includes enterprise businesses like retail, branch-based financial organizations, and educational institutions, as well as providers who deliver services to homes and small offices.

Some of the challenges of monitoring these sites include:

  • Gaining visibility into service quality (QoE) at large numbers of remote sites
  • Establishing secure management and monitoring across the public internet, VPNs, and firewalls for large numbers of remote sites
  • Deploying cost-efficient monitoring of large numbers of small remote sites
  • Gaining online service-quality reference information at distributed customer reference sites
  • Gaining performance metrics such as “download speed” from key websites (URLs) from the distributed customer perspective
  • Gaining service availability and quality data independent of user devices and applications
  • Testing the availability, quality, and performance of multimedia streams End-to-End or Hop-to-Hop, in relation to the multimedia stream container carrying audio- and video-encoded traffic, from distributed customer perspectives
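
One of the metrics listed above, "download speed" from a key URL, is straightforward to gather programmatically. The sketch below is a generic Python illustration only, not SNEA or StableNet code; the example URL is a placeholder:

```python
import time
from urllib.request import urlopen

def throughput_mbps(num_bytes: int, seconds: float) -> float:
    """Convert a byte count and an elapsed time into Mbit/s."""
    return num_bytes * 8 / seconds / 1_000_000

def measure_url(url: str, timeout: float = 10.0) -> float:
    """Download a URL once and report the observed speed in Mbit/s."""
    start = time.monotonic()
    with urlopen(url, timeout=timeout) as resp:
        body = resp.read()
    return throughput_mbps(len(body), time.monotonic() - start)

# Example (placeholder URL):
#   speed = measure_url("https://www.example.com/")
```

A real reference agent would repeat this on a schedule from each distributed site and report the samples centrally.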

How can StableNet® Embedded Agent (SNEA) technology solve this problem?

Typical use cases

  • Gain visibility of your distributed services, including the customer site areas
  • The critical part: Cost efficient services assurance on large numbers of small sites
  • User and application usage independent services monitoring of large numbers of distributed sites
    • Small and home offices – QoE, connectivity, and availability for distributed customer sites
    • Bank offices, retail shops, franchise shops, POS terminals
    • Regional government, community offices, police stations, fire stations
    • Industrial distributed small sites, e.g. pump stations, power stations, base stations etc.
  • Distributed installations of IP-addressable equipment, e.g. for:
      • Next-Hop services and distributed IT infrastructure monitoring
      • Distributed offices running across provider networks
      • Monitoring regional company office connectivity via DSL or IP/cable
  • E2E reference monitoring
    • Remote site reference simulation and QoE monitoring of IP multimedia audio and video encoded traffic
    • Remote site reference call simulation of IP telephony calls monitoring quality and call availability
    • Centrally managed remote execution of monitoring tasks
  • Inventory: Discovery of regional small sites’ IT devices and infrastructure
  • Security:
    • Detecting rogue devices and unwanted devices within small offices
    • Secure monitoring of small sites behind firewalls across public internet

Typical use cases using StableNet® SNEA can be summarized as follows:

1) Use SNEAs for monitoring availability and service performance (jitter, RTT, etc.) on large numbers of small sites/offices, instead of an uneconomical and often inapplicable shadow router:

  • Regionally distributed small offices, regional bank offices
  • Regionally distributed services users, e.g. retail shops, gas stations, franchise shops
  • Regional government, local community offices
  • Police and fire stations
  • Distributed, automated monitoring stations with multiple measurement and monitoring equipment using common IP services

2) Use SNEAs to run plug-in/scripts in distributed small sites, e.g. if you have:

  • several thousand customer sites to reference check your IP or multimedia services
  • several thousand branch offices to run periodic availability and performance reference checks on
  • numerous ATMs, cash desks, retail locations, etc. that you need to check for accessibility and service availability
  • numerous remote sites connecting back to centralized data center (DC) applications
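
Checking thousands of sites on a schedule is mostly a matter of probing them concurrently. The sketch below is a generic Python illustration (not SNEA code); the hostnames, port, and thread count are arbitrary assumptions:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(sites, port=443, workers=64):
    """Probe many sites in parallel; returns {host: reachable}."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        flags = pool.map(lambda h: tcp_check(h, port), sites)
        return dict(zip(sites, flags))
```

Run periodically, a sweep like this gives a first-order availability view; real service checks would exercise the application protocol, not just the TCP handshake.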

3) Use SNEAs to measure E2E tests like:

  • IPT/VoIP availability and call quality test (simulate VoIP encoded traffic, simulate SIP IP Telephony call)
  • Video tests (simulate IPTV encoded traffic and video conferencing traffic)
  • Key application availability and response time
  • Wireless access availability and response time

4) Use SNEAs to execute IP, data, or VoIP reference calls via mobile sticks

5) “Next-Hop” measurements – Monitor entire distribution and infrastructure chains by performing cost-efficient “Next-Hop” monitoring, e.g. IP-SLA-type measurements like jitter, delay, RTT, packet loss, and ICMP echo/ping, or encoded traffic simulation
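
The metrics named here (RTT, jitter, loss) can be summarized from raw probe samples as sketched below. This is a generic illustration: the jitter definition used (mean absolute difference between consecutive RTTs, loosely in the spirit of RFC 3550's interarrival jitter) is one common convention, not the exact IP-SLA formula:

```python
import statistics

def link_metrics(rtt_samples_ms):
    """Summarize probe results; each entry is an RTT in ms, or None for a lost probe."""
    received = [r for r in rtt_samples_ms if r is not None]
    loss_pct = 100.0 * (len(rtt_samples_ms) - len(received)) / len(rtt_samples_ms)
    # Jitter here: mean absolute difference between consecutive RTTs.
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    return {
        "rtt_avg_ms": statistics.fmean(received) if received else None,
        "jitter_ms": statistics.fmean(diffs) if diffs else 0.0,
        "loss_pct": loss_pct,
    }
```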

6) Use SNEAs to independently monitor IP-connected equipment like:

  • Equipment in distributed TV transmission and head-end stations
  • Equipment in mobile base stations
  • Cloud services environments, to monitor QoE from the cloud user’s perspective

These are just a few examples of how StableNet can expand services monitoring to large numbers of remote sites in a highly functional, yet cost-effective manner. How will you monitor your remote sites?
For more information see:

http://www.telnetnetworks.ca/en/resources/infosim/doc_download/745-stablenet-regional-and-e2e-monitoring-with-the-stablenet-embedded-agent-snea-on-a-banana-pi-type-hardware-platform.html

State of Networks: Faster, but Under Attack

Two recent studies that look at the state of mobile and fixed networks show that while networks are getting ever faster, security is a paramount concern that is taking up more time and resources.

Akamai recently released its fourth quarter 2014 State of the Internet report. Among the findings:

  • In terms of network security, high tech and public sector targets saw increased numbers of attacks from 2013 to 2014, while enterprise targets had fewer attacks over the course of the year – except in Q4, when the commerce and enterprise segments were the most frequently targeted.

“Attacks against public sector targets reported throughout 2014 appear to be primarily motivated by political unrest, while the targeting of the high tech industry does not appear to be driven by any single event or motivation,” Akamai added.

  • Akamai customers saw DDoS attacks up 20% from the third quarter, although the overall number of such attacks held steady from 2013 to 2014 at about 1,150.
  • Average mobile speeds differ widely on a global basis, from 16 megabits per second in the U.K. to 1 Mbps in New Caledonia. Average peak mobile connection speeds continue to increase, ranging from a whopping 157.3 Mbps in Singapore down to 7.5 Mbps in Argentina. And in Denmark, Saudi Arabia, Sweden, and Venezuela, 97% of unique IP addresses from mobile providers connected to Akamai’s network at speeds faster than the 4 Mbps threshold that is considered the minimum for “broadband.”

Meanwhile, Network Instruments, part of JDSU, recently completed its eighth annual survey of network professionals. It found that security is an increasing area of focus for network teams and that they are spending an increasing amount of time focused on security incidents and prevention.

NI reported that its survey found that the most commonly reported network security challenge is correlating security issues with network performance (reported by 50% of respondents), while the most common method for identifying security issues is “syslogs” (used by 67% of respondents). Other methods included Simple Network Management Protocol and tracking performance anomalies, while long-term packet capture and analysis was used by slightly less than half of the survey participants (48%). Network Instruments said that the relatively low utilization of long-term packet capture makes it “an under-utilized resource in security investigations” and that “replaying the events would provide greater context” for investigators.

NI also found that “application overload” is driving a huge increase in bandwidth use expectations, due to users accessing network resources and large files with multiple devices; real-time unified communications applications that require more bandwidth; as well as private cloud and virtualization adoption. See Network Instrument’s full infographic below:

Network Instruments' State of the Network infographic

Thanks to RCR Wireless News for the article.

Why Companies are Making the Switch to Cloud Based Network Monitoring

Many enterprises today are making the switch over to “The Cloud” for a variety of applications. The most popular cloud-based business applications include CRM software, email, project management, development & backup. It’s predicted that by 2015, end-user spending on cloud services could be more than $180 billion.

Some of you may be asking “Why the cloud?” or “Is it really worth it?” The answers to these questions are both simple and compelling.

If an Enterprise decides to use a cloud based solution of any kind they’re going to see immediate benefits in 3 major areas:

  • Cost savings
  • Speed
  • Flexibility

Cost savings

In the network monitoring space, all of the “big guys” require a hefty upfront fee for their software, and then an equally (if not more) expensive fee for professional services to actually make the system operate and integrate with other platforms.

On the contrary, most cloud based systems are sold as yearly (or less) SaaS models. The removal of a huge upfront investment usually makes the CFO happy. They’re also happy when they don’t need to pay for server hardware, storage and other costs (like electricity, and space) associated with running a solution in house.

Flexibility

“Use what you need, when you need it, and then turn it off when you don’t” – that is one of the most common (and powerful) sales pitches in the cloud world. But, unlike the sales pitch from your local used car salesperson, this one is true! Cloud-based systems are generally much more flexible in terms of deployment, usage, terms, and even support compared to “legacy” software deployments.

Most cloud based SaaS applications offer a free no-obligation evaluation period and can be upgraded, downgraded or cancelled with just a few clicks. This means that organizations are not completely “locked in” to any solution for many years that might not do the job they need. Try that with your behemoth on premise software!

Speed

In the IT world, speed comes in many forms. You might think of application performance or Internet download speeds, but in the cloud speed generally means how fast a new application or service can go from “I need it” to “I have it”.

One of the biggest advantages of cloud-based systems is that they are already running. The front end, back end, and associated applications are already installed. As a user, all you have to do is raise your hand and say you want it, and in most cases your service can be provisioned in a matter of hours (or less).

In the cloud world of SaaS this “lead time” has shrunk from weeks or months to hours or minutes. That means more productivity, less downtime and happier users.

In the end, all organizations are looking for ways to trim unnecessary costs and increase capabilities. One of the easiest ways to accomplish this today is to switch to a cloud based network monitoring application.


Thanks to NMSaaS for the article.

Network Instruments State of the Network Global Study 2015

Eighth Annual “State of the Network” Global Study from JDSU’s Network Instruments Finds 85 Percent of Enterprise Network Teams Now Involved in Security Investigations

Deployment Rates for High-Performance Network Visibility and Software Defined Solutions Expected to Double in Two Years

Network Instruments, a JDSU Performance Management Solution, released the results of its eighth annual State of the Network global study today. Based on insight gathered from 322 network engineers, IT directors and CIOs around the world, the study found that 85 percent of enterprise network teams are involved with security investigations, indicating a major shift in the role of those teams within enterprises.

Large-scale and high-profile security breaches have become more common as company data establishes itself as a valuable commodity on the black market. As such, enterprises are now dedicating more IT resources than ever before to protect data integrity. The Network Instruments study illustrates how growing security threats are affecting internal resources, identifies underutilized resources that could help improve security, and highlights emerging challenges that could rival security for IT’s attention.

As threats continue to escalate, one quarter of network operations professionals now spend more than 10 hours per week on security issues and are becoming increasingly accountable for securing data. This reflects an average uptick of 25 percent since 2013. Additionally, network teams’ security activities are diversifying. Teams are increasingly implementing preventative measures (65 percent), investigating attacks (58 percent) and validating security tool configurations (50 percent). When dealing with threats, half of respondents indicated that correlating security issues with network performance is their top challenge.

“Security is becoming so much more than just a tech issue. Regular media coverage of high-profile attacks and the growing number of malware threats that can plague enterprises – and their business – has thrust network teams capable of dealing with them into the spotlight. Network engineers are being pulled into every aspect of security, from flagging anomalies to leading investigations and implementing preventative measures,” said Brad Reinboldt, senior product manager for Network Instruments. “Staying on top of emerging threats requires these teams to leverage the tools they already have in innovative ways, such as applying deep packet inspection and analysis from performance monitoring solutions for advanced security forensics.”

The full results of the survey, available for download, also show that emerging network technologies* have gained greater adoption over the past year.

Highlights include:

  • 40, 100 Gigabit Ethernet and SDN approaching mainstream: Year-over-year implementation rates for 40 Gb, 100 Gb and SDN in the enterprise have nearly doubled, according to the companies surveyed. This growth rate is projected to continue over the next two years as these technologies approach more than 50 percent adoption. Conversely, survey respondents were less interested in 25 Gb technology, with over 62 percent indicating no plans to invest in equipment using the newer Ethernet specification.
  • Enterprise Unified Communications remains strong but lacks performance-visibility features: The survey shows that Voice-over-IP, videoconferencing and instant messaging technologies, which enable deeper collaboration and rich multimedia experiences, continue making strides in the enterprise, with over 50 percent penetration. Additionally, as more applications are virtualized and migrated to the cloud, new visibility challenges arise, along with new sources of performance impact and delay. To that end, respondents noted a lack of visibility into the end-user experience as a chief challenge. Without visibility into what is causing issues, tech teams can’t ensure uptime and return on investment.
  • Bandwidth use expected to grow 51 percent by 2016: Projected bandwidth growth is a clear factor driving the rollout of larger network pipes. This year’s study found the majority of network teams are predicting a much larger surge in bandwidth growth than last year, when bandwidth was only expected to grow by 37 percent. Key drivers for future bandwidth growth are being fueled by multiple devices accessing network resources and larger and more complex data such as 4K video. Real-time unified communications applications are also expected to put more strain on networks, while unified computing, private cloud and virtualization initiatives have the potential to create application overload on the backend.

Key takeaways: what can network teams do?

  • Enterprises need to be on constant alert and agile in aligning IT teams and resources to handle evolving threats. To be more effective in taking on additional security responsibilities, network teams should be trained to think like a hacker and recognize increasingly complex and nefarious network threats.
  • They also need to incorporate performance monitoring and packet analysis tools already used by network teams for security anomaly detection, breach investigations, and assisting with remediation.
  • Security threats aren’t the only thing dictating the need for advanced network visibility tools that can correlate network performance with security and application usage. High-bandwidth activities including 4K video, private clouds and unified communications are gaining traction in the enterprise as well.

State of the Network Global Study Methodology

Network Instruments has conducted its State of the Network global study for eight consecutive years, drawing insight about network trends and painting a picture of what challenges IT teams face. Questions were designed based on interviews with network professionals as well as IT analysts. Results were compiled from the insights of 322 respondents, including network engineers, IT directors, and CIOs from around the world. In addition to geographic diversity, the study’s sample was evenly distributed among networks and business verticals of different sizes. Responses were collected from December 16, 2014 to December 27, 2014 via online surveys.

JDSU Network Instruments State of the Network 2015 Video

Thanks to Network Instruments for the article. 

Infosim StableNet Legacy Refund Certificate (value up to $250,000.00)

Are you running on Netcool, CA eHealth or any other legacy network management solutions?

$$$Stop throwing away your money$$$

Infosim® will give you a certificate (value up to $250,000) of product credit for switching from your legacy product maintenance spend.

Check whether your legacy NMS applies!

Fill out the request form and we can check whether your system matches one of the ten that qualify.

Find out your trade-up value!

Make your budget work this year!

Thank you!

Thanks to Infosim for the article.

The Importance of Using Network Discovery in your Business

Network discovery is not a single thing. In general terms, it is the process of gathering information about the network resources in your environment.

You may be asking why this is even important to you. The primary reasons why it is vital for your business to use network discovery are as follows:

  • If you don’t know what you have, you cannot hope to monitor and manage it.
  • You can’t track down interconnected problems.
  • You don’t know when something new comes on the network.
  • You don’t know when you need upgrades.
  • You may be paying too much for maintenance.

All of these key factors above are vital in maintaining the success of your company’s network resources.

One of the most important aspects I’ve mentioned is not knowing what you have; this is a huge problem for many companies. If you don’t know what you have, how can you manage or monitor it?

Most of the time in network management you’re trying to track down potential issues within your network and work out how you’re going to resolve them. This is a very hard task, especially if you’re dealing with a large-scale network. If one thing goes down within the network, it starts a trickle effect, and more aspects of the network will in turn start to go down.

All of these problems are easily fixed. NMSaaS has network discovery capabilities with powerful and flexible tools allowing you to determine what exactly is subject to monitoring.

These elements are automatically labeled and grouped. This makes automatic data collection possible, as well as threshold monitoring and reporting on already discovered elements.

As a result, we can access critical details like IP address, MAC address, OS, firmware, services, memory, serial numbers, interface information, routing information, and neighbor data, all available at the click of a button or as a scheduled report.
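
For readers curious what a minimal discovery pass looks like, here is a toy Python sweep that enumerates a subnet and records hosts answering on common TCP ports. Real discovery engines rely on SNMP, ARP, and neighbor protocols (CDP/LLDP) to gather details like MAC addresses and firmware; this sketch only illustrates the idea, and the CIDR and port list are placeholders:

```python
import ipaddress
import socket
from concurrent.futures import ThreadPoolExecutor

def discover(cidr: str, ports=(22, 80, 443), timeout: float = 0.5):
    """Toy discovery: return {ip: [open ports]} for hosts in `cidr` that answer."""
    def probe(ip):
        open_ports = []
        for port in ports:
            try:
                with socket.create_connection((str(ip), port), timeout=timeout):
                    open_ports.append(port)
            except OSError:
                pass  # closed, filtered, or unreachable
        return str(ip), open_ports

    hosts = list(ipaddress.ip_network(cidr).hosts())
    results = {}
    with ThreadPoolExecutor(max_workers=64) as pool:
        for ip, open_ports in pool.map(probe, hosts):
            if open_ports:
                results[ip] = open_ports
    return results
```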

Thanks to NMSaaS for the article.

Ixia’s Virtual Visibility with ControlTower and OpenFlow

Ixia is announcing support for OpenFlow SDN in Ixia’s ControlTower architecture. Our best-in-breed Visibility Architecture now extends data center visibility by taking advantage of a plethora of qualified OpenFlow hardware.

ControlTower is our innovative platform for distributed visibility launched nearly two years ago. This solution manages a cluster of our Net Tool Optimizers (NTOs) as if you were managing a single logical NTO. At the time of its launch, we leveraged Software Defined Networking (SDN) concepts to achieve powerful distributed monitoring for data centers and campus networks. The drag and drop GUI, advanced packet processing, and patented filter compiler allow multiple users to manage and optimize traffic across the cluster without interfering with each other. We had great response from customers to the ControlTower concept; they loved how we took very complex routing and rules calculation problems and boiled them down to an easy-to-use, single-pane-of-glass GUI (or API) even when spanning across multiple NTOs.

Our announcement takes ControlTower one giant leap further by allowing qualified OpenFlow switches to become members of a ControlTower cluster, incorporating them under one powerful and simple management console, extending powerful network visibility capabilities throughout the data center. You don’t need to be an OpenFlow expert, just hook up your OpenFlow switches and we take care of the complicated management. You get all the benefits of our straightforward GUI and advanced features for the entire cluster.

We heard from many customers that scalable, cost-effective network visibility is critical to operating a secure and high performance data center. They need analytics tools that access any segment of the network quickly and easily. Monitored traffic must be filtered and optimized to ensure tools are used efficiently. Customers need to focus on optimizing application performance and heading off security issues in every part of their data center, not on managing switch ACLs, CLIs, forwarding rulesets, interconnects, etc.

Ixia responded by enhancing ControlTower to recognize OpenFlow devices, allowing customers to scale our powerful visibility features across hundreds of OpenFlow ports. Today, ControlTower is qualified to work with HP, Dell, and Arista OpenFlow switches—and we will expand the list further in the future.

This addition to the ControlTower platform is exciting for several reasons:

  • The powerful advanced features of ControlTower can now be applied across more of your network for greater visibility.
  • You don’t need to be conversant in OpenFlow or deploy an SDN controller; we take care of all the complexity of managing the OpenFlow switches. Just hook them up and our clever software takes control of the configuration details.
  • We provide a RESTful API for integration with automation.
  • You can apply features such as Dynamic Filters, Packet Deduplication, ATIP (Application Threat Intelligence Processor), TimeStamping, Packet Trimming, and Traffic Shaping to any traffic in the cluster.
  • OpenFlow is ubiquitous among Ethernet switch vendors, presenting a tremendous range of deployment options.
  • OpenFlow helps future proof your visibility architecture by incorporating future developments in speed, density and capacity.
  • You have the flexibility to share precious switching hardware and rack space between production and visibility networks.
  • You can easily partition a switch, with some OpenFlow ports for network visibility and some ports for normal production traffic. The production partition doesn’t even need to run OpenFlow; it can be a basic L2 Ethernet switch!
  • You can easily provision more visibility ports dynamically as your network expands or changes.
  • Ixia’s extensive OpenFlow expertise enabled us to make this advancement. Ixia was first in the testing of OpenFlow technologies with our IxNetwork product several years ago, and we have been very active in development of the OpenFlow standard.

Customers who have seen this new feature set have been very excited. ControlTower’s OpenFlow capabilities will help them reach all the corners of their data center, and provide a new flexibility to deploy network resources how they wish with all benefits of an end-to-end Network Visibility Architecture.

Additional Resources:

NTO ControlTower

Network Visibility Architecture

Thanks to Ixia for the article.

Infosim® Global Webinar Day- Return On Investment (ROI) for StableNet®

We all use network performance management systems to help improve the performance of our networks. But what is the return to the operations bottom line from using or upgrading these systems? This Thursday, March 26th, Jim Duster, CEO of Infosim, will be holding a webinar: “How do I convince my boss to buy a network management solution?”

Jim will discuss:

Why would anyone buy a network management system in the first place?

  • Mapping a technology purchase to the business value of making a purchase
  • Calculating a value larger than the technology total cost of ownership (TCO)
  • Two ROI tools (Live Demo)
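
The webinar's own ROI tools are not described in this post, but the classic ROI arithmetic behind any such calculator can be sketched in a few lines. All figures below are made up for illustration:

```python
def roi_percent(annual_benefit: float, annual_cost: float,
                years: int, upfront_cost: float = 0.0) -> float:
    """Classic ROI: (total benefit - total cost) / total cost, as a percentage."""
    total_cost = upfront_cost + annual_cost * years
    total_benefit = annual_benefit * years
    return (total_benefit - total_cost) / total_cost * 100.0

# Hypothetical figures: $150k/yr of avoided outage and labor cost,
# $50k/yr in license/maintenance, $100k up front, over 3 years:
# roi_percent(150_000, 50_000, 3, 100_000) -> 80.0
```

A fuller TCO model would also count staff time, training, and hardware, which is presumably what the live demo covers.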

You can sign up for this 30 minute webinar here

March 26 4:00 – 4:30 EST


A recording of this Webinar will be available to all who register!

(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

Unified Solution for Automated Network Management

Today’s networking technology, though very advanced, faces a major roadblock: the lack of automation in network management products. “These products are incapable of delivering a truly unified management approach as they are not an integrated solution but merely a collection of different programs bound together under one GUI to give them the appearance of an integrated solution,” notes Jim Duster, CEO, Infosim. Moreover, the need to continuously update new device information, configuration changes, and alerts and actions across these different toolsets contributes to an ongoing financial burden for enterprises. Addressing these concerns with a unique network management solution is Infosim, a manufacturer of Automated Service Fulfillment and Service Assurance solutions.

Infosim offers StableNet, a unified solution developed and designed to cover performance management, fault management, and configuration management with software that is engineered with a single code base and a consistent data model underneath. “StableNet is the only ‘suite’ within the network performance management software industry,” claims Duster. The solution addresses the existing operational and technical challenges of managing distributed, virtualized, and mission-critical IT infrastructures. “With this approach, we are able to create workflows in every unique customer business and industry to cover many processes efficiently,” he adds. For instance, StableNet monitors the production equipment of a manufacturing company. In case of an equipment failure, the error is reported and StableNet delivers the root cause of the problem, while notifying an external service provider. The service provider’s technician can open an inspection window with StableNet, exchange the defective device and, after repair, provide feedback to the customer’s operations center.

To support flexible deployment of StableNet, the company offers the Infosim StableNet appliance, a high-performance, preconfigured, security-hardened hardware platform. “Appliances related to StableNet series reduce Total Cost of Ownership (TCO) by simplifying deployment, consolidating network infrastructure, and providing an extensible platform that can scale with your organization,” states Duster. StableNet also provides a low-cost agent platform called the StableNet Embedded Agent (SNEA) that enables highly distributed installations to support End-to-End (E2E) visibility, cloud monitoring, and the Internet of Things. Deployment of SNEA is economical: agents are auto-discovered at tactical collection points in networks, resulting in a low TCO for collecting and processing network performance actions and alerts.

Infosim StableNet is deployed across the networks of major players in the Telco and Enterprise markets including that of a German auto manufacturer. Acting as the client’s centralized system, StableNet reduced their toolset from over 10 disparate software and hardware offerings from multiple suppliers to less than four. This significantly reduced TCO while increasing service levels. “Siloed IT personnel who used to hide behind non-consolidated results from their individual solutions were all synchronized into one solution, speeding productivity, collaboration and communication,” states Duster.

Infosim is currently participating in advanced research projects on Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) with several universities and leading industry device manufacturers. “The company applies 50 percent of its new software development resource to customer needs which assists customers in achieving milestones in vendor agnostic device support additions, industry specific capabilities, and features that were envisioned by real users,” asserts Duster.

For the years ahead, Infosim plans to build upon its product capability by automating the processes and activities that produce guaranteed service levels and reduce the consumption of human resources in the Network Operations Center (NOC). “Our vision is to enable the Dark NOC, which means a large percent of the non-value adding activities of network engineers can be automated in our product, freeing network engineers to work on proactive new innovations and concepts,” Duster concludes.

Thanks to CIO Review for the article.

The Advancements of VoIP Quality Testing


Industry analysts say that approximately 85% of today’s networks will require upgrades to properly support high-quality VoIP and video traffic.

Organizations are always looking for ways to reduce costs, and that’s why they often try to deploy VoIP by switching voice traffic over to existing LAN or WAN links.

In a lot of cases, the data network the business has chosen must handle VoIP traffic appropriately. Generally speaking, voice traffic is uniquely time-sensitive: it cannot be queued, and if datagrams are lost the conversation can become choppy.

To ensure this doesn’t happen, many organizations will conduct a VoIP quality test in the pre- and post-deployment stages.

Pre-deployment testing

There are several steps network engineers can take to ensure VoIP technology can meet expectations. Pre-deployment testing is the first step towards ensuring the network is ready to handle the VoIP traffic.

After the testing process, IT staff should be able to:

  • Determine the total VoIP traffic the network can handle without audio degradation.
  • Discover any configuration errors with the network and VoIP equipment.
  • Identify and resolve erratic problems that affect network and application performance.
  • Identify security holes that allow malicious eavesdropping or denial of service.
  • Guarantee call quality matches user expectations.
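
Call-quality expectations like those above are commonly quantified as a MOS score derived from delay and loss via the ITU-T E-model. The sketch below uses a well-known simplified curve fit (in the style of Cole & Rosenbluth's approximation for G.711); treat the coefficients as illustrative, not a calibrated implementation:

```python
import math

def e_model_mos(one_way_delay_ms: float, loss_fraction: float) -> float:
    """Estimate MOS from one-way delay (ms) and packet loss (0..1) using a
    simplified E-model approximation. Coefficients are illustrative only."""
    d = one_way_delay_ms
    # Delay impairment: grows slowly, then steeply past ~177 ms.
    i_delay = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    # Loss impairment: logarithmic penalty for random packet loss.
    i_loss = 30.0 * math.log(1.0 + 15.0 * loss_fraction)
    r = 94.2 - i_delay - i_loss  # R-factor
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    # Standard R-to-MOS mapping.
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)
```

For example, a clean path scores near the G.711 maximum (about 4.4), while 300 ms of delay with 5% loss drops the estimate below 3, roughly the point where users start to complain.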

Post-deployment testing

Places that already have VoIP/video need to constantly and easily monitor the quality of those links to ensure good quality of service. Just because it was fine when you first installed it, doesn’t mean that it is still working well today, or will be tomorrow.

The main objective of post-deployment VoIP testing is to measure the quality and performance of the system before you decide to go live with it. This will in turn stop people from complaining about poor-quality calls.

Post-deployment testing should be done early and often to minimize the cost of fault resolution and also to provide an opportunity to apply lessons learned later on during the installation.

In both pre and post deployment the testing needs to be simple to setup and provide at a glance actionable information including alarms when there is a problem.

Continuous monitoring

In many cases your network changes every day as devices are added or removed; these could include laptops, IP phones, or even routers. All of these contribute to the continuous churn of the IP network experience.

A key driving factor for any business is finding faults before they become a hindrance to the company; regular monitoring will eliminate many potential threats.

In this manner, you’ll receive maximum benefit from your VoIP investment. Regular monitoring builds upon all the assessments and testing performed in support of a deployment. You continue to verify key quality metrics of all the devices and the overall IP network health.
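
At its simplest, a continuous monitor is just a loop that runs a check on a schedule and raises an alarm when the check fails. A generic sketch, where the check function, interval, and alarm handler are all placeholders for whatever your tooling provides:

```python
import time

def monitor(check, interval_s: float = 60.0, on_alarm=print, cycles=None):
    """Run `check()` every `interval_s` seconds; `check` returns (ok, detail).
    Calls `on_alarm(detail)` on each failure. `cycles=None` runs forever."""
    n = 0
    while cycles is None or n < cycles:
        ok, detail = check()
        if not ok:
            on_alarm(detail)
        n += 1
        if cycles is None or n < cycles:
            time.sleep(interval_s)
```

A production system would add scheduling jitter, alarm de-duplication, and escalation, but the shape is the same.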

If you found this interesting, have a look at the recording of one of our webinars on this topic for a more in-depth treatment.

Thanks to NMSaaS for the article.