Infosim® Global Webinar Day: Return on Investment (ROI) for StableNet®

Many of us use a network performance management system to help improve the performance of our networks. But what is the return to the operations bottom line from using or upgrading these systems? This Thursday, March 26th, Jim Duster, CEO of Infosim, will be holding a webinar: “How do I convince my boss to buy a network management solution?”

Jim will discuss:

Why would anyone buy a network management system in the first place?

  • Mapping a technology purchase to the business value of making a purchase
  • Calculating a value larger than the technology total cost of ownership (TCO)
  • Two ROI tools (Live Demo)

You can sign up for this 30-minute webinar here.

March 26, 4:00 – 4:30 EST

A recording of this Webinar will be available to all who register!

(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

Visibility Architectures Enable Real-Time Network Vigilance

Ixia's Network Visibility Architecture

A couple of weeks ago, I wrote a blog on how to use a network lifecycle approach to improve your network security. I wanted to come back and revisit this as I’ve had a few people ask me why the visibility architecture is so important. They had (incorrectly, IMO) been told by others to just focus on the security architecture and everything else would work out fine.

The reason you need a visibility architecture in place is simple: if you are attacked or breached, how will you know? During a DDoS attack you will most likely know because of website performance problems, but for most other attacks, how will you know?

This is actually a common problem. The 2014 Trustwave Global Security Report stated that 71% of compromised victims did not detect the breach themselves; they had no idea an attack had happened. The report also went on to say that the median number of days from initial intrusion to detection was 87. So most companies never detected the breach on their own (they had to be told by law enforcement, a supplier, a customer, or someone else), and it took almost three months after the breach for that notification to happen. This doesn’t sound like the optimum way to handle network security to me.

The second benefit of a visibility architecture is faster remediation once you discover that you have been breached. In fact, some Ixia customers have seen up to an 80% reduction in their mean time to repair (MTTR) after implementing a proper visibility architecture. If you can’t see the threat, how are you going to respond to it?

A visibility architecture is the way to solve these problems. Once you combine the security architecture with the visibility architecture, you equip yourself with the necessary tools to properly visualize and diagnose the problems on your network. But what is a visibility architecture? It’s a set of components and practices that allow you to “see” and understand what is happening in your network.

The basis of a visibility architecture starts with creating a plan. Instead of just adding components as you need them at sporadic intervals (i.e., crisis points), step back and take a larger view of where you are and what you want to achieve. This one simple act will save you time, money and energy in the long run.

Ixia's Network Visibility Architecture

The actual architecture starts with network access points. These can be either taps or SPAN ports. Taps are traditionally better because they don’t have the time delays, summarized data, duplicated data, and hackability that are inherent in SPAN ports. However, there is a problem if you try to connect monitoring tools directly to a tap: the tools become flooded with too much data, causing packet loss and CPU overload. For the monitoring tools, it’s basically like drinking from a fire hose.

This is where the next level of visibility solutions, network packet brokers, enters the scene. A network packet broker (also called an NPB, packet broker, or monitoring switch) can be extremely useful. These devices filter traffic to send only the right data to the right tool. Packets are filtered at layers 2 through 4, and duplicate packets can be removed and sensitive content stripped before the data is sent to the monitoring tools, if required. This improves the efficiency and utility of your monitoring tools.
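
To make the idea concrete, here is a minimal Python sketch of the kind of layer 2–4 logic a packet broker applies: match packets against a 5-tuple filter and drop duplicates before forwarding to a tool. The packet structure and field names are illustrative assumptions, not any vendor's API.

```python
import hashlib

# Illustrative layer 2-4 filter: send only DNS traffic to the DNS tool.
FILTER = {"proto": "udp", "dst_port": 53}

seen_digests = set()  # for de-duplicating packets tapped at multiple points

def forward_to_tool(packet):
    """Return True if the packet should be sent on to the monitoring tool."""
    # Filtering: every filter field must match the packet's header fields.
    if any(packet.get(key) != value for key, value in FILTER.items()):
        return False
    # De-duplication: hash the payload and drop packets already seen.
    digest = hashlib.sha256(packet["payload"]).digest()
    if digest in seen_digests:
        return False
    seen_digests.add(digest)
    return True

# The same DNS packet tapped at two points is forwarded only once.
pkt = {"proto": "udp", "dst_port": 53, "payload": b"query example.com"}
print(forward_to_tool(pkt), forward_to_tool(pkt))  # True False
```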

Access and NPB products form the infrastructure part of the visibility architecture and focus on layers 2 through 4 of the OSI model. On top of these sit the components that make up the application intelligence layer of a visibility architecture, providing application-aware and session-aware visibility. This capability, available only in certain NPBs, allows filtering and analysis further up the stack at the application layer (layer 7). Depending upon your needs, it can be quite useful, as you can collect the following information:

  • Types of applications running on your network
  • Bandwidth each application is consuming
  • Geolocation of application usage
  • Device types and browsers in use on your network
  • Filtering of data to monitoring tools based upon application type

These capabilities can give you quick access to information about your network and help to maximize the efficiency of your tools.

These layer 7 application oriented components provide high-value contextual information about what is happening with your network. For example, this type of information can be used to generate the following benefits:

  • Maximize the efficiency of current monitoring tools to reduce costs
  • Gather rich data about users and applications to offer a better Quality of Experience for users
  • Provide fast, easy-to-use capabilities to spot-check for security and performance problems

Ixia's Network Visibility Architecture

And then, of course, there are the management components that provide control of the entire visibility architecture: everything from global element management, to policy and configuration management, to data center automation and orchestration management. Engineering flexible management for network components will be a determining factor in how well your network scales.

Visibility is critical to this third stage (the production network) of your network’s security lifecycle that I referred to in my last blog. (You can view a webinar on this topic if you want.) This phase enables the real-time vigilance you will need to keep your network protected.

As part of your visibility architecture plan, you should investigate and be able to answer these three questions.

  1. Do you want to be proactive and aggressively stop attacks in real-time?
  2. Do you actually have the personnel and budget to be proactive?
  3. Do you have a “honey pot” in place to study attacks?

Your answers to those questions will shape the design of your visibility architecture. As you can see from the list below, there are several different options that can be included in a visibility architecture.

  • In-line components
  • Out-of-band components
  • Physical and virtual data center components
  • Layer 7 application filtering
  • Packet broker automation
  • Monitoring tools

In-line and/or out-of-band security and monitoring components will be your first big decision. Hopefully everybody is familiar with in-line monitoring solutions. In case you aren’t, an in-line (also called bypass) tap is placed directly in the network path to allow access for security and monitoring tools. It should be placed after the firewall but before any other equipment. The advantage of this location is that should a threat make it past the firewall, it can be immediately diverted or stopped before it has a chance to compromise the network. The tap also needs to have heartbeat capability and the ability to fail closed so that, should any problems occur with the device, no data is lost downstream. After the tap, a packet broker can be installed to help distribute traffic to the tools; some taps have this capability integrated into them. Depending upon your needs, you may also want to investigate taps that support high-availability options if the devices are placed in mission-critical locations. After that, a security device (like an IPS) is inserted into the network.
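
The heartbeat logic mentioned above is simple enough to sketch. The Python below is a hypothetical illustration, not any vendor's firmware: the tap injects heartbeat packets toward the in-line tool and, after enough consecutive misses, bypasses the tool so traffic keeps flowing. The callback names are assumptions for the example.

```python
import time

MISS_LIMIT = 3          # consecutive lost heartbeats before bypassing the tool
INTERVAL_SECONDS = 1.0  # how often a heartbeat probe is injected

def monitor_inline_tool(send_heartbeat, heartbeat_returned, set_bypass):
    """Bypass the in-line tool when it stops returning heartbeat probes."""
    misses = 0
    while True:
        send_heartbeat()               # inject a probe through the tool path
        time.sleep(INTERVAL_SECONDS)
        if heartbeat_returned():
            misses = 0
            set_bypass(False)          # tool healthy: keep traffic in-line
        else:
            misses += 1
            if misses >= MISS_LIMIT:
                set_bypass(True)       # tool failed: pass traffic straight through
```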

In-line solutions are great, but they aren’t for everyone. Some IT departments just don’t have enough personnel and capabilities to properly use them. But if you do, these solutions allow you to observe and react to anomalies and problems in real-time. This means you can stop an attack right away or divert it to a honeypot for further study.

The next monitoring solution is an out-of-band configuration. These solutions are located further downstream in the network than in-line solutions. The main purpose of this type of solution is to capture data post-event. Depending on whether the interfaces are automated, it is possible to achieve near real-time capabilities, but they won’t be completely real-time like in-line solutions.

Nevertheless, out-of-band solutions have some distinct and useful capabilities. They are typically less risky, less complicated, and less expensive than in-line solutions. Another benefit is that they give your monitoring tools more analysis time. Data recorders can capture information and then send it to forensic, malware, and/or log management tools for further analysis.

Do you need to consider monitoring for your virtual environments as well as your physical ones? Virtual taps are an easy way to gain access to vital visibility information in the virtual data center. Once you have the data, you can forward it to a network packet broker and then on to the proper monitoring tools. The key here is to apply consistent policies across your virtual and physical environments. This allows for consistent monitoring policies, better troubleshooting of problems, and better trending and performance information.

Other considerations include whether you want to take advantage of automation capabilities and whether you need layer 7 application information. Most monitoring solutions only deliver layer 2 through 4 packet data, so layer 7 data could be very useful (depending upon your needs).

Application intelligence can be a very powerful tool. It allows you to see application usage on a per-country, per-state, and even per-neighborhood basis, which gives you the ability to observe suspicious activities. For instance, maybe an FTP server is sending lots of files from the corporate office to North Korea or Eastern Europe, and you don’t have any operations in those geographies. Application intelligence lets you see this in real time. It won’t solve the problem for you, but it will let you know that a potential issue exists so that you can decide what to do about it.
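
As a toy illustration of that kind of check, the sketch below flags flows whose destination country falls outside the geographies where the company operates. The flow records and field names are made up for the example; a real application intelligence feed would supply them.

```python
# Countries where the company actually operates; anything else is suspicious.
APPROVED_COUNTRIES = {"US", "CA", "DE"}

flows = [  # illustrative flow records from an application intelligence feed
    {"app": "ftp",   "src": "corp-fileserver", "dst_country": "KP", "mbytes": 420},
    {"app": "https", "src": "corp-laptop-17",  "dst_country": "US", "mbytes": 3},
]

for flow in flows:
    if flow["dst_country"] not in APPROVED_COUNTRIES:
        print(f"ALERT: {flow['app']} from {flow['src']} sent "
              f"{flow['mbytes']} MB to {flow['dst_country']}")
```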

Another example is that you can conduct an audit for security policy infractions. For instance, maybe your stated process is for employees to use Outlook for email, and you have installed anti-malware software on a server to inspect all incoming attachments before they are passed on to users. With an application intelligence product, you can actually see whether users are connecting to other services (maybe Gmail or Dropbox) and downloading files through those applications. This practice would bypass your standard process and potentially introduce a security risk to your network. Application intelligence can also help identify compromised devices and malicious botnet activities through their Command and Control communications.

Automation capability allows network packet brokers to initiate functions (e.g., apply filters, add connections to more tools) in response to external commands. This allows a switch or controller to make real-time adjustments in response to suspicious activities or problems within the data network. The source of the command could be a network management system (NMS), a provisioning system, a security information and event management (SIEM) tool, or some other management tool on your network that interacts with the NPB.
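
Such integrations are typically built against the packet broker's REST API. The Python sketch below shows the general shape of one: a SIEM alert handler asking the NPB to copy a suspicious host's traffic to a forensics tool. The endpoint URL, credentials, and JSON schema here are hypothetical, not the API of any particular product.

```python
import requests

NPB_API = "https://npb.example.com/api/filters"  # hypothetical endpoint

def divert_suspicious_host(ip_address):
    """Ask the packet broker to copy a host's traffic to the forensics tool."""
    rule = {
        "name": f"divert-{ip_address}",
        "match": {"src_ip": ip_address},          # hypothetical filter schema
        "action": {"forward_to": "forensics-tool-port"},
    }
    response = requests.post(NPB_API, json=rule, timeout=10,
                             auth=("monitor", "secret"))
    response.raise_for_status()

# A SIEM could call this when a host starts behaving oddly:
# divert_suspicious_host("10.1.2.3")
```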

Automation for network monitoring will become critical over the next several years, especially as more of the data center is automated. The reasons for this are plain: how do you monitor your whole network at one time? How do you make it scale? You use automation capabilities to perform this scaling for you and provide near real-time response capabilities for your network security architecture.

Finally, you need to pick the right monitoring tools to support your security and performance needs. This obviously depends on the data you need and want to analyze.

The life-cycle view discussed previously provides a cohesive architecture that can maximize the benefits of visibility, such as the following:

  • Decrease MTTR by up to 80% with faster analysis of problems
  • Monitor your network for performance trends and issues
  • Improve network and monitoring tool efficiencies
  • Save bandwidth and tool processing cycles through application filtering
  • Respond faster to anomalies through automation, without user administration
  • Scale network tools faster

Once you integrate your security and visibility architectures, you will be able to optimize your network in the following ways:

  • Better data to analyze security threats
  • Better operational response capabilities against attacks
  • The application of consistent monitoring and security policies

Remember, the key is that by integrating the two architectures you’ll be able to improve your root cause analysis. This is not just for security problems but all network anomalies and issues that you encounter.

Additional Resources

  • Network Life-cycle eBook – How to Secure Your Network Through Its Life Cycle
  • Network Life-cycle webinar – Transforming Network Security with a Life-Cycle Approach
  • Visibility Architecture Security whitepaper – The Real Secret to Securing Your Network
  • Security Architecture whitepaper – How to Maximize IT Investments with Data-Driven Proof of Concept (POC)
  • Security solution overview – A Solution to Network Security That Actually Works
  • Cyber Range whitepaper – Accelerating the Deployment of the Evolved Cyber Range

Thanks to Ixia for the article. 

Unified Solution for Automated Network Management

Today’s networking technology, though very advanced, faces a major roadblock: the lack of automation in network management products. “These products are incapable of delivering a truly unified management approach as they are not an integrated solution but merely a collection of different programs bound together under one GUI to give them the appearance of an integrated solution,” notes Jim Duster, CEO, Infosim. Moreover, the need to continuously update new device information, configuration changes, and alerts and actions across these different toolsets contributes to an ongoing financial burden for enterprises. Addressing these concerns with a unique network management solution is Infosim, a manufacturer of automated Service Fulfillment and Service Assurance solutions.

Infosim offers StableNet, a unified solution designed to cover performance management, fault management, and configuration management with software engineered on a single code base and a consistent underlying data model. “StableNet is the only ‘suite’ within the network performance management software industry,” claims Duster. The solution addresses the existing operational and technical challenges of managing distributed, virtualized, and mission-critical IT infrastructures. “With this approach, we are able to create workflows in every unique customer business and industry to cover many processes efficiently,” he adds. For instance, StableNet monitors the production equipment of a manufacturing company. In case of an equipment failure, the error is reported and StableNet delivers the root cause of the problem while notifying an external service provider. The service provider’s technician can open an inspection window with StableNet, exchange the defective device and, after repair, provide feedback to the customer’s operations center.

To support flexible deployment of StableNet, the company offers the Infosim StableNet appliance, a high-performance, preconfigured, security-hardened hardware platform. “Appliances related to StableNet series reduce Total Cost of Ownership (TCO) by simplifying deployment, consolidating network infrastructure, and providing an extensible platform that can scale with your organization,” states Duster. StableNet also provides a low-cost agent platform called the StableNet Embedded Agent (SNEA), which enables highly distributed installations to support End-to-End (E2E) Visibility, Cloud Monitoring, and the Internet of Things. SNEA is economical to deploy and is auto-discovered at tactical collection points in networks, resulting in a low TCO for collecting and processing network performance actions and alerts.

Infosim StableNet is deployed across the networks of major players in the Telco and Enterprise markets, including a German auto manufacturer. Acting as the client’s centralized system, StableNet reduced their toolset from over 10 disparate software and hardware offerings from multiple suppliers to fewer than four. This significantly reduced TCO while increasing service levels. “Siloed IT personnel who used to hide behind non-consolidated results from their individual solutions were all synchronized into one solution, speeding productivity, collaboration and communication,” states Duster.

Infosim is currently participating in advanced research projects on Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) with several universities and leading industry device manufacturers. “The company applies 50 percent of its new software development resource to customer needs which assists customers in achieving milestones in vendor agnostic device support additions, industry specific capabilities, and features that were envisioned by real users,” asserts Duster.

For the years ahead, Infosim plans to build upon its product capability by automating the processes and activities that produce guaranteed service levels and reduce the consumption of human resources in the Network Operations Center (NOC). “Our vision is to enable the Dark NOC, which means a large percent of the non-value adding activities of network engineers can be automated in our product, freeing network engineers to work on proactive new innovations and concepts,” Duster concludes.

Thanks to CIO Review for the article.

The Advancements of VoIP Quality Testing

Industry analysts say that approximately 85% of today’s networks will require upgrades to their data networks to properly support high-quality VoIP and video traffic.

Organizations are always looking for ways to reduce costs, which is why they often deploy VoIP by moving voice traffic onto their existing LAN and WAN links.

In many cases, the data network the business has chosen must be prepared to handle VoIP traffic appropriately. Generally speaking, voice traffic is uniquely time sensitive: it cannot be queued, and if datagrams are lost, the conversation can become choppy.

To ensure this doesn’t happen, many organizations conduct VoIP quality tests in both the pre- and post-deployment stages.

Pre-deployment testing

There are several steps network engineers can take to ensure VoIP technology can meet expectations. Pre-deployment testing is the first step towards ensuring the network is ready to handle the VoIP traffic.

After the testing process, IT staff should be able to:

  • Determine the total VoIP traffic the network can handle without audio degradation.
  • Discover any configuration errors with the network and VoIP equipment.
  • Identify and resolve erratic problems that affect network and application performance.
  • Identify security holes that allow malicious eavesdropping or denial of service.
  • Guarantee call quality matches user expectations (see the scoring sketch after this list).
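
On that last point, call quality is usually scored as a Mean Opinion Score (MOS) derived from the ITU-T G.107 E-model. The Python sketch below uses a widely cited simplification of that model (after Cole and Rosenbluth) to turn measured one-way delay and packet loss into an approximate MOS for a G.711 call; treat the constants as published approximations rather than exact values for your codec.

```python
import math

def estimate_mos(one_way_delay_ms, loss_fraction):
    """Approximate MOS for G.711 via a simplified E-model (Cole-Rosenbluth)."""
    d = one_way_delay_ms
    # Delay impairment: small below ~177 ms, then it grows sharply.
    i_delay = 0.024 * d + 0.11 * (d - 177.3) * (d > 177.3)
    # Loss impairment for G.711 without packet loss concealment.
    i_loss = 30 * math.log(1 + 15 * loss_fraction)
    r = max(0.0, min(100.0, 93.2 - i_delay - i_loss))  # R-factor in [0, 100]
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

print(round(estimate_mos(50, 0.00), 2))   # ~4.4: toll quality
print(round(estimate_mos(250, 0.02), 2))  # ~3.7: noticeably degraded
```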

Post-deployment testing

Organizations that already have VoIP or video need to constantly and easily monitor the quality of those links to ensure good quality of service. Just because everything was fine when you first installed it doesn’t mean it is still working well today, or will be tomorrow.

The main objective of post-deployment VoIP testing is to measure the quality of the system before you decide to go fully live with it. This, in turn, stops people from complaining about poor-quality calls.

Post-deployment testing should be done early and often to minimize the cost of fault resolution and to provide an opportunity to apply lessons learned later in the installation.

In both pre- and post-deployment, the testing needs to be simple to set up and provide at-a-glance, actionable information, including alarms when there is a problem.

Continuous monitoring

In many cases your network changes every day as devices are added or removed; these could include laptops, IP phones, or even routers. All of these contribute to the continuous churn of the IP network.

A key driving factor for any business is finding faults before they become a hindrance to the company; regular monitoring helps eliminate potential threats.

In this manner, you’ll receive maximum benefit from your VoIP investment. Regular monitoring builds upon all the assessments and testing performed in support of a deployment. You continue to verify key quality metrics of all the devices and the overall IP network health.

If you found this interesting, have a look at the recording of one of our webinars on this topic for an in-depth discussion.

Thanks to NMSaaS for the article.

Infosim® Announces Release of StableNet® 7.0

Infosim®, the technology leader in automated Service Fulfillment and Service Assurance solutions, today announced the release of its award-winning software suite StableNet® version 7.0 for Telco and Enterprise customers.

StableNet® 7.0 provides a significant number of powerful new features, including:

  • StableNet® Embedded Agent (SNEA) that allows for highly distributed installations to support End-to-End (E2E) Visibility, Cloud Monitoring and Internet of Things (IoT)
  • StableNet® Network Configuration & Change Management (NCCM) now offers a REST API extension to allow easy workflow integration
  • New look and feel of the StableNet® GUI to improve the user experience in terms of usability and workflow
  • StableNet® Server is now based on WildFly 8.2, a modern Java Application Server that supports web services for easier integration of 3rd party systems
  • Extended device support for Phybridge, Fortinet Firewalls, Arista, Sofaware (Checkpoint), Mitel, Keysource UPS, Cisco Meraki and Ixia

StableNet® version 7.0 is available for purchase now. Customers with current maintenance contracts may upgrade free of charge as per the terms and conditions of their contract.

Supporting Quotes:

Marius Heuler, CTO Infosim®

“With this new release of StableNet®, we have enhanced our technological basis and laid out the groundwork to support extensive new automation features for our customers. This is another big step forward towards the industrialization of modern network management.”

Thanks to Infosim for the article.

Why SNMP Monitoring is Crucial for your Enterprise

What is SNMP? Why should we use it? These are common questions people ask when deciding whether it’s the right solution for them, and the answers are simple.

Simple Network Management Protocol (SNMP) is an “Internet-standard protocol for managing devices on IP networks.” Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks, and more.

Key functions

  • Collects data about its local environment.
  • Stores and retrieves administration information as defined in the MIB.
  • Signals an event to the manager.
  • Acts as a proxy for some non–SNMP manageable network device.

SNMP typically uses one or more administrative computers, called managers, which have the task of monitoring or managing a group of hosts or devices on a computer network.

An SNMP monitoring tool provides valuable insight to any network administrator who requires complete visibility into the network, and it acts as a primary component of a complete management solution. Each managed device runs an agent that reports information via SNMP to the manager.

The agents expose data on the managed systems as variables. The protocol also permits active management tasks, such as modifying and applying a new configuration through remote modification of these variables.

Companies such as Paessler and ManageEngine have been providing customers with reliable SNMP tools for years, and it’s obvious why.

Why use it?

It delivers information in a common, non-proprietary manner, making it easy for an administrator to manage devices from different vendors using the same tools and interface.

Its power is in the fact that it is a standard: one SNMP-compliant management station can communicate with agents from multiple vendors, and do so simultaneously.

Another advantage of SNMP is in the type of data that can be acquired. For example, when using a protocol analyzer to monitor network traffic from a switch’s SPAN or mirror port, physical-layer errors are invisible. This is because switches do not forward error packets to either the original destination port or the analysis port.

However, the switch maintains a count of the discarded error frames, and this counter can be retrieved via an SNMP query.
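
Retrieving that counter takes only a few lines. Here is a minimal sketch using the pysnmp library to read IF-MIB::ifInErrors for interface index 1, assuming an SNMPv2c device at 192.0.2.1 with the community string "public":

```python
from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                          ObjectType, SnmpEngine, UdpTransportTarget, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),        # SNMPv2c community string
    UdpTransportTarget(("192.0.2.1", 161)),    # the switch being queried
    ContextData(),
    ObjectType(ObjectIdentity("IF-MIB", "ifInErrors", 1)),
))

if error_indication:
    print(error_indication)                    # e.g. request timed out
else:
    for name, value in var_binds:
        print(f"{name} = {value}")             # e.g. IF-MIB::ifInErrors.1 = 42
```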

Conclusion

When selecting a solution like this, choose one that delivers full network coverage for multi-vendor hardware networks, including a console for managing devices anywhere on your LAN or WAN.

If you want additional information, download our free whitepaper below.

NMSaaS- Top 10 Reasons to Consider a SaaS Based Solution

Thanks to NMSaaS for the article. 

Ixia Extends Visibility Architecture with Native OpenFlow Integration

Ixia (Nasdaq: XXIA), a leading provider of application performance and security resilience solutions, announced an update to its ControlTower distributed network visibility platform that adds support for OpenFlow enabled switches from industry-leading manufacturers. ControlTower OpenFlow support has so far been interoperability-tested with Arista, Dell, and HP OpenFlow enabled switches.

“Dell is a leading advocate for standards such as Openflow on our switching platforms to enable rich and innovative networking applications,” said Arpit Joshipura, Vice President, Dell Networking. “With Ixia choosing to support our Dell Networking switches within its ControlTower management framework, Dell can extend cost-effective visibility and our world-class services to our enterprise customers.”

Ixia’s enhanced ControlTower platform takes a unique, open-standards-based approach to significantly increase scale and flexibility for network visibility deployments. The new integration makes ControlTower the most extensible visibility solution on the market, allowing customers to leverage SDN and seamlessly layer the sophisticated management and advanced processing features of Ixia’s Net Tool Optimizer® (NTO) family of solutions on top of the flexibility and baseline feature set provided by OpenFlow switches.

“Data centers benefit from the power and flexibility that OpenFlow switches can provide but cannot afford to lose network visibility,” said Shamus McGillicuddy, Senior Analyst, Network Management at Enterprise Management Associates. “However organizations can use these same SDN-enabled switches with a visibility architecture to ensure that their existing monitoring and performance management tools can maintain visibility.”

Key highlights of the expanded visibility architecture include:

  • Ease of use, advanced processing functions and single pane of glass configuration through Ixia’s NTO user interface and purpose-built hardware
  • Full programmability and automation control using RESTful APIs
  • Patented automatic filter compiler engine for hassle-free visibility
  • Architectural support for line speeds from 1Gbps to 100Gbps in a highly scalable design
  • Open, standards-based integration with the flexibility to use a variety of OpenFlow enabled hardware and virtual switch platforms
  • Dynamic repartitioning of switch ports between production switching and visibility enablement to optimize infrastructure utilization

“This next-generation ControlTower delivers solutions that leverage open standards to pair Ixia’s field-proven visibility architecture with best-of-breed switching, monitoring and security platforms,” added Deepesh Arora, Vice President of Product Management at Ixia. “These solutions will provide our customers the flexibility needed to access, aggregate and manage their business-critical networks for the highest levels of application performance and security resilience.”

About Ixia’s Visibility Architecture

Ixia’s Visibility Architecture helps companies achieve end-to-end visibility and security in their physical and virtual networks by providing their tools with access to any point in the network. Regardless of network scale or management needs, Ixia’s Visibility Architecture delivers the control and simplicity necessary to improve the usefulness of these tools.

Thanks to Ixia for the article.

Network Device Backup is a Necessity with Increased Cyber Attacks

In the past few years, cyber-attacks have become far more prevalent, with data, personal records, and financial information stolen and sold on the black market in a matter of days. Major companies such as eBay and Domino’s, the Montana Health Department, and even the White House have fallen victim to cyber criminals.

Security Breach

The most recent scandal involved Anthem, one of the country’s largest health insurers. It recently announced that its systems had been hacked and the information of over 80 million customers had been stolen, ranging from Social Security numbers and email data to addresses and income details.

Systems Crashing

If hackers can break into your system, they can take it down. Back in 2012, Ulster Bank’s systems crashed; it is still unreported whether it was a cyber-attack, but regardless, there was a crisis. Ulster Bank’s entire banking system went down, and people couldn’t take money out, pay bills, or even pay for food. As a result of this negligence, the bank was forced to pay substantial fines.

This could have all been avoided if they had installed a proper Network Device Backup system.

Why choose a Network Device Backup system

If your system goes down, you need the easiest and quickest way to get it back up and running. This means having an up-to-date network backup plan in place that enables you to quickly swap out the faulty device and restore its configuration from backup.

Techworld ran a survey and found that 33% of companies do not back up their network device configurations.

The reasons why you should have device configuration backup in place are as follows (a minimal backup sketch follows the list):

  • Disaster recovery and business continuity.
  • Network compliance.
  • Reduced downtime due to failed devices.
  • Quick reestablishment of device configs.
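
A minimal version of such a backup job takes only a few lines. The sketch below uses the netmiko library to pull the running configuration from a Cisco IOS device and save a timestamped copy; the host and credentials are placeholders, and a real deployment would add a device inventory, error handling, and a version-controlled store.

```python
from datetime import datetime
from netmiko import ConnectHandler

device = {                      # placeholder device details for illustration
    "device_type": "cisco_ios",
    "host": "192.0.2.10",
    "username": "backup",
    "password": "secret",
}

conn = ConnectHandler(**device)
config = conn.send_command("show running-config")
conn.disconnect()

# Save a timestamped copy so any revision can be restored after a failure.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
with open(f"{device['host']}-{stamp}.cfg", "w") as f:
    f.write(config)
```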

It’s evident that increased security is a necessity, but even more important is backing up your system. If the crash of Ulster Bank in 2012 is anything to go by, we should all be backing up our systems. If you would like to learn more about this topic, click below.

Thanks to NMSaaS for the article.

Magic Quadrant for Network Performance Monitoring and Diagnostics

Network professionals support an increasing number of technologies and services. With adoption of SDN and network function virtualization, troubleshooting becomes more complex. Identify the right NPMD tools to detect application issues, identify root causes and perform capacity planning.

Market Definition/Description

Network performance monitoring and diagnostics (NPMD) enable network professionals to understand the impact of network behavior on application and infrastructure performance, and conversely, via network instrumentation. Other users and use cases exist, especially because these tools provide insight into the quality of the end-user experience. The goal of NPMD products is not only to monitor network components to facilitate outage and degradation resolution, but also to identify performance optimization opportunities. This is conducted via diagnostics, analytics and debugging capabilities to complement additional monitoring of today’s complex IT environments. At an estimated $1.1 billion, the NPMD market is a fast-growing segment of the larger network management space ($1.9 billion in 2013), and overlaps slightly with aspects of the application performance monitoring (APM) space ($2.4 billion in 2013).

Magic Quadrant

Magic Quadrant for Network Performance Monitoring and Diagnostics

Vendor Strengths and Cautions: Highlights

Ixia

Ixia was founded in 1997, specializing in network testing, and entered the NPMD market through its 2013 acquisition of Net Optics and its Spyke monitoring product. The tool is aimed at small or midsize businesses (SMBs), although it can support gigabit and 10G environments. The Spyke tool has been subject to an end of life (EOL) announcement, with end of sale (EOS) beginning 31 October 2014 and EOL beginning 31 October 2017.

Given Ixia’s focus on the network packet broker (NPB) space, it can cover NPMD and NPB use cases, something only a few other vendors can claim. Ixia launched a new NPB platform, the Network Tool Optimizer (NTO) 7300, in 1H14, which provides a large-scale chassis design and additional modules that help offload some NPMD capabilities. The goal of these modules is optimal use of the existing end-user NPMD tool. Modules include the Ixia Packet Capture Module (PCM), with 14GB of triggered packet capture at 40GbE line rates and 48 ports of NPB, and the Ixia Application and Threat Intelligence (ATI) Processor, which provides extensive processing power in addition to 48 ports of NPB. The ATI Processor requires a subscription at an additional recurring cost. The new 7300 product and platform has no current Gartner-verified customer references. Fundamental VoIP, application visibility and end-user experience metrics are standard capabilities. While the tool provides packet inspection and application visibility, product updates have not been observed for some time and the road map remains unclear.

Ixia’s NPMD revenue is between $5 million and $10 million per year. Ixia did not respond to requests for supplemental information and/or to review the draft contents of this document. Gartner’s analysis for this vendor is therefore based on other credible sources, including previous vendor briefings and interactions, the vendor’s own marketing collateral, public information and discussions with more than 200 end users who either have evaluated or deployed each NPMD product.

Strengths

  • Ixia’s ATI Processor provides visibility of, and rules to classify, traffic based on application types and performance of applications.
  • Ixia has significant R&D resources. Of the 1,800 staff, more than 800 are engineering- and R&D-focused.
  • Ixia’s market leadership in NPB allows it to leverage scalable hardware design with software capabilities to enable NPMD and additional troubleshooting needs by offloading some of these requirements from other more comprehensive NPMD tools.

Cautions

  • With the EOS of the Spyke and Net Optics appTap platforms, Ixia appears to have discontinued investments in pure NPMD capabilities.
  • Since the launch of the NTO 7300 platform in early 2014, there has been limited traction due to existing NPB investments and high cost for the hardware buy-in.
  • Financial reporting restatements and filing delays, combined with the resignation of two senior corporate officers, may hinder overall strategic focus and vision.

JDSU (Network Instruments)

In 2014, we witnessed the completion of JDSU’s acquisition of Network Instruments, its subsequent integration into JDSU’s Network and Service Enablement business segment, the recent release of updates to its NPMD offering, and announced plans to separate JDSU into two entities in 2015. While this action could provide additional efficiencies and focus in the future, the preceding business integration and sales enablement efforts are only now beginning to bear fruit and will have to shift once more in response to the coming changes. The Network Instruments unit has followed a well-established, vertically integrated technology development strategy, designing and manufacturing most of its product components and software. An OEM relationship with CA Technologies, which had Network Instruments providing its GigaStor products to CA customers, devolved into a referral relationship, but no meaningful challenges have been voiced by Gartner clients as a result of this. Two key parts of the NPMD solution have new product names (Observer Apex and Observer Management Server) and a new, modern UI that is a significant improvement. Network Instruments’ current NPMD solution set is now part of the Observer Performance Management Platform 17, and includes Observer Apex, Observer Analyzer, Observer Management Server, Observer GigaStor, Observer Probes and Observer Infrastructure (v.4.2).

JDSU’s (Network Instruments) NPMD revenue is between $51 million and $150 million per year.

Strengths

  • Data- and process-level integration workflows are well-thought-out across the solution’s component products.
  • Network Instruments’ recent addition of a network packet broker product (Observer Matrix) to its offerings may appeal to small-scale enterprises looking for NPMD and NPB capabilities from the same vendor.
  • Packet capture and inspection capability (via GigaStor) is well-regarded by clients.

Cautions

  • While significant business integration activities have not, to date, had a perceptible impact on support or development productivity, this process is ongoing and now part of a larger business separation action that could result in challenges in the near future.
  • The NPMD solution requires multiple components with differing user interfaces that are not consistent across products.
  • The solution is focused on physical appliances, with limited options beyond proprietary hardware.

To learn more, download the full report here

Thanks to Gartner for the article. 

The Highs and Lows of SaaS Network Management

In the technology era we live in, SaaS network management cannot be ignored. In business, almost everything you work with is in some shape or form part of the network: printers, phones, routers, and even electronic notepads. All of these need to be managed successfully within the business to avoid misfortunes.

While looking at SaaS network management there are always going to be some pros and cons.

The ease of deployment

Because SaaS exists in the cloud, it eliminates the need to install software on a system and ensure that it is configured properly. The management of SaaS is typically handled through simple interfaces that allow the user to configure and provision the service as required.

As more establishments move their formerly in-house systems into the cloud, integrating SaaS with these existing services requires limited effort.

Lower costs

SaaS has a cost advantage, since it usually resides in a shared or multitenant environment where hardware and software license costs are low compared with the traditional model. Maintenance costs are reduced as well, since the SaaS provider owns the environment and the cost is split among all customers that use the solution.

Scalability and integration

Usually, SaaS solutions reside in cloud environments that are scalable and integrate with other SaaS offerings. Compared with the traditional model, users do not have to buy another server or more software; they only need to enable a new SaaS offering, and the SaaS provider will own the server capacity planning.

Obviously, nothing in this world is perfect, and there are some slight downsides to SaaS network management. They are minimal, and some will not apply to everyone, but they are still worth mentioning.

Limited applications

SaaS is gaining in popularity. However, there are still many software applications that don’t offer a hosted platform. You may find it essential to still host certain applications on site, especially if your company relies on multiple software solutions.

Maintenance

Obviously, SaaS adoption makes maintenance simpler, because the vendor has more control over the full installation. The catch here is psychological: for an on-premises installation, the customer accepts responsibility for maintenance and allocates human resources to it. With SaaS, the customer tends to think that he or she is released from these responsibilities, which is largely true in most cases, but you should still always keep an eye on the software.

Dependence on high speed internet

A high-speed internet connection is a must for SaaS. While this is not a big challenge in developed nations, it can be a serious limitation in developing nations with poor infrastructure and unreliable connectivity. Firms should therefore choose wisely, understanding the connectivity bottleneck.

As you can see, the pros outweigh the cons. In business today, all organizations are looking for cheaper and faster resources, and SaaS network management is clearly one of them.

Thanks to NMSaaS for the article.