How to Dodge 6 Big Network Headaches

The proper network management tools allow you to follow these six simple tips. They will help you stay ahead of network problems, and if a problem does occur, you will have the data you need to analyze it.

Troubleshoot sporadic issues with the right equipment

The most irksome issues are often sporadic and require IT teams to wait for the problem to reappear or spend hours recreating the issue. With retrospective network analysis (RNA) solutions, it’s possible to eliminate the need to recreate issues. Performance management solutions with RNA have the capacity to store terabytes of data that allow teams to immediately rewind, review, and resolve intermittent problems.

Baseline network and application performance

It’s been said that you can’t know where you’re going if you don’t know where you’ve been. The same holds true for performance management and capacity planning. Unless you have an idea of what’s acceptable application and network behavior today, it’s difficult to gauge what’s acceptable in the future. Establishing benchmarks and understanding long-term network utilization is key to ensuring effective infrastructure changes.
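
As a simple illustration, here is a minimal Python sketch of how a baseline might be derived from periodic utilization samples. The sample values, polling interval, and thresholds are purely illustrative assumptions, not output from any particular tool; in practice the data would come from weeks of SNMP polling or flow records.

# A minimal baselining sketch. Real baselines are built from weeks or months
# of collected data rather than a short hard-coded list.
from statistics import mean, quantiles

# Hypothetical 5-minute utilization samples for one WAN link (percent).
samples = [22, 25, 31, 28, 64, 30, 27, 71, 29, 26, 33, 24]

baseline_avg = mean(samples)
p95 = quantiles(samples, n=100)[94]   # 95th percentile ignores short spikes

print(f"Average utilization: {baseline_avg:.1f}%")
print(f"95th percentile:     {p95:.1f}%")

# A sustained reading well above the 95th-percentile baseline is a signal to
# investigate before it turns into a capacity problem.
alert_threshold = p95 * 1.2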

Clarify whether it’s a network or application issue

Users often blame the network when operations are running slow on their computer. To quickly pinpoint network issues, it’s critical to analyze and isolate problems pertaining to both network and application performance.

Leverage critical information already available to you with NetFlow

Chances are your network is collecting NetFlow data. This information can help you easily track active applications on the network. Aggregate this data into your analyzer so that you can get real-time statistics on application activity and drill down to explore and resolve any problems.
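
As a simple illustration of that drill-down, here is a minimal Python sketch that aggregates flow records by application. The record fields and the port-to-application mapping are illustrative assumptions; a real collector or analyzer would supply and classify the flows for you.

# Aggregate exported flow records by application port to get per-application
# byte counts; the flow records below are invented for the example.
from collections import defaultdict

flows = [
    {"src": "10.1.1.5", "dst": "10.2.0.9", "dst_port": 443, "bytes": 120_000},
    {"src": "10.1.1.7", "dst": "10.2.0.9", "dst_port": 443, "bytes": 80_000},
    {"src": "10.1.2.3", "dst": "10.3.1.1", "dst_port": 53,  "bytes": 4_000},
]

PORT_TO_APP = {443: "HTTPS", 80: "HTTP", 53: "DNS"}  # simplistic mapping

bytes_per_app = defaultdict(int)
for flow in flows:
    app = PORT_TO_APP.get(flow["dst_port"], f"port-{flow['dst_port']}")
    bytes_per_app[app] += flow["bytes"]

# Top talkers by application, most traffic first.
for app, total in sorted(bytes_per_app.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{app:10} {total:>10} bytes")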

Run pre-deployment assessments for smooth rollouts

Network teams often deploy an application enterprise-wide before knowing its impact on network performance. Without properly testing the application or assessing the network’s ability to handle it, issues can arise in the middle of deployment or configuration. Always run a site survey and application performance testing before rolling out a new application – this allows you to anticipate how the network will respond and to resolve issues before they occur.

Manage proactively by fully understanding network traffic patterns

Administrators frequently only apply analysis tools after the network is already slow or down. Rather than waiting for problems, you should continuously track performance trends and patterns that may be emerging. Active management allows you to solve an emerging issue before it can impact the end user.

Managing Phantom Devices on Your Network



So you run a network discovery and notice devices you are not familiar with. A phantom device is an unmanaged device that should be monitored by your Network Management System (NMS).

These devices show up even though you have processes in place to prevent this type of behavior. They could be devices connected to the wrong network, printers, BYOD equipment, and so on. Because a phantom device is invisible to you, you are unaware of it, which opens the door to vulnerabilities, missing patches, misconfigurations, and more.

How to detect and integrate phantom devices

The first step is to find these devices so you know that they exist and track them. Once you find the device you need to extract device information and understand how they are integrated into your network. The detection process cannot interfere with your daily business; you don’t want to add any unnecessary load to the network and false positives need to be avoided.

Once the phantom devices have been discovered, you need to set up a process to incorporate them into your Network Management System (NMS) or remove them from the network.
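
As a rough illustration of this step, the following Python sketch compares a discovery result against the NMS inventory and flags anything unmanaged. The addresses and inventory contents are made up; a real discovery would draw on ARP tables, ping sweeps, or the NMS's own discovery engine.

# Compare discovered devices against the NMS inventory and flag "phantoms".
discovered = {
    "10.0.10.5":  "core-sw-01",
    "10.0.10.23": "printer-3rd-floor",
    "10.0.10.77": "unknown-host",
}

nms_inventory = {"10.0.10.5", "10.0.10.6", "10.0.10.12"}

phantoms = {ip: name for ip, name in discovered.items() if ip not in nms_inventory}

for ip, name in phantoms.items():
    # The next step would be to extract device details (SNMP sysDescr, MAC
    # vendor, switch port) and either onboard the device or remove it.
    print(f"phantom device: {ip} ({name})")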

Infosim StableNet

Infosim StableNet can help you with this process through its automated discovery engine, which allows you to tag and report on phantom devices. You can then see how they are connected to the network and, using SNMP and the NCCM module, manage or remove these devices from your network.

 

3 Key Differences Between NetFlow and Packet Capture Performance Monitoring

The increasing density, complexity and expanse of modern networking environments have fueled the ongoing debate around which network analysis and monitoring tools best serve the needs of the modern engineer – placing Packet Capture and NetFlow Analysis at center stage of the conversation. Granted, both can be extremely valuable tools in ongoing efforts to maintain and optimize complex environments, but as an engineer, I tend to focus on solutions that give me the insights I need without too great a cost to my resources, while complementing my team’s ability to maintain and optimize the environments we support.

So with this in mind, let’s take a look at how NetFlow, in the context of the highly-dense networks we find today, delivers three key requirements network teams rely on for reliable end-to-end performance monitoring of their environments.

A NetFlow deployment won’t drain your resources

Packet Capture, however rich in network metrics, requires sniffing devices and agents throughout the network, which invariably require some level of maintenance during their lifespan. In addition, the amount of space required to store and analyze packet data makes it an inefficient and inelegant method of monitoring or forensic analysis. Combine this with the levels of complexity networks can reach today, and the overall cost and maintenance associated with packet sniffers can quickly become unfeasible. In the case of NetFlow, its wide vendor support across virtually the entire networking landscape makes almost every switch, router or firewall a NetFlow “ready” device. Devices’ built-in readiness to capture and export data-rich metrics makes NetFlow easy for engineers to deploy and utilize. Also, thanks to its popularity, NetFlow analyzers with varying feature sets are available for network operations center (NOC) teams to take full advantage of data-rich packet flows.

Striking the balance between detail and context

Considering how network-dependent and widespread applications have become in recent years, NetFlow’s ability to provide WAN-wide metrics in near real-time makes it a suitable troubleshooting companion for engineers. And with version 9 of NetFlow extending the wealth of information it collects via a template-based collection scheme, it strikes the balance between detail and high-level insight without placing too much demand on networking hardware – which is something that can’t be said for Packet Capture. Packet Capture tools, however, do what they do best: Deep Packet Inspection (DPI), which identifies aspects of the traffic that were previously hidden from NetFlow analyzers. But NetFlow’s constant evolution alongside the networking landscape is seeing it used as a complement to solutions such as Cisco’s NBAR and other DPI technologies, whose vendors have recognized that Flexible NetFlow tools can reveal details at the packet level.

NetFlow places your environment in greater context

Context is a chief area where NetFlow beats out Packet Capture, since it allows engineers to quickly locate root causes relating to performance by providing a more situational view of the environment, its data flows, bottleneck-prone segments, application behavior, device sessions and so on. We could argue that packet sniffing is able to provide much of this information too, but it doesn’t give engineers the broader context around the information it presents, hamstringing IT teams’ efforts to detect performance anomalies that could be ascribed to a number of factors, such as untimely system-wide application or operating system updates or a cross-link backup application pulling loads of data across the WAN during operational hours.

So does NetFlow make Packet Capture obsolete?

The short answer is no. In fact, Packet Capture, when properly coupled with NetFlow, can make a very elegant solution. For example, using NetFlow to identify an attack profile or illicit traffic and then analyzing the corresponding raw packets becomes an attractive approach. NetFlow strikes that balance between detail and context and gives NOCs intelligent insights that reveal the broader factors influencing your network’s ability to perform. Gartner’s assertion that a mix of 80% NetFlow monitoring and 20% Packet Capture is the ideal combination for performance monitoring attests to NetFlow’s growing prominence as the monitoring tool of choice. And as NetFlow and its various iterations, such as sFlow, IPFIX and others, continue to expand the breadth of context they provide network engineers, that margin is set to increase in NetFlow’s favor over time.

Thank you to NetFlow Auditor for this post.

SDN/NFV – From Theory to Praxis with Infosim® StableNet®

InterComms talks to Marius Heuler, CTO Infosim®, about Infosim® StableNet® and the management and orchestration of SDN and NFV environments

Marius Heuler has more than 15 years of experience in network management and optimization. As CTO and founding member of Infosim®, he is responsible for leading the Infosim® technical team in architecting, developing, and delivering StableNet®. He graduated from the University of Würzburg with a degree in Computer Science, holds several Cisco certifications, and has subject matter expert knowledge in various programming languages, databases and protocol standards. Prior to Infosim®, Marius held network management leadership positions and performed project work for Siemens, AOK Bavaria and Vodafone.

Q: The terms SDN and NFV recently have been on everybody’s lips. However, according to the critics, it is still uncertain how many telcos and enterprises use these technologies already. What is your point of view on this topic?

A: People tend to talk about technologies and ask for the support of a certain interface, service, or technology. Does your product support protocol X? Do you offer service Y? What about technology Z?

Experience shows that when looking closer at the actual demand, it is often not the particular technology, interface, or service people are looking for. What they really want is a solution for their particular case. That is why I would rather not expect anybody to start using SDN or NFV as an end in itself. People will start using these technologies once they see that it is the best (and most cost-efficient) way to relieve their pain points.

Andrew Lerner, one of the Gartner Blog Network members, recently gave a statement pointing in the exact same direction, saying that Gartner won’t publish an SDN Magic Quadrant, “because SDN and NFV aren’t markets. They are an architectural approach and a deployment option, respectively.”


Q: You have been talking about use cases for SDN and NFV. A lot of these use cases are also being discussed in different standardization organizations or in research projects. What is Infosim®’s part in this?

A: There are indeed a lot of different use cases being discussed and, as you mentioned, a lot of different standardization and research activities are in progress. At the moment, Infosim® is committing to this area in various ways: We are a member of TM Forum and recently also joined the ETSI ISG NFV. Furthermore, we follow the development of different open source activities, such as the OpenDaylight project, ONOS, or OPNFV, just to name a few. Besides this, Infosim® is part of several national and international research projects in the area of SDN and NFV where we are working together with other subject matter experts and researchers from academia and industry. Topics cover, among others, operation and management of SDN and NFV environments as well as security aspects. Last but not least, Infosim® is also in contact with various hardware and software vendors regarding these topics. We thereby look equally at open source and proprietary solutions.

 

Q: Let us talk about solutions then: With StableNet® you are actually quite popular and successful in offering a unified network management solution. How do SDN and NFV influence the further development of your offering?

A: First of all, we are proud to be one of the leading manufacturers of automated Service Fulfillment and Service Assurance solutions. EMA™ rated our solution as the most definitive Value Leader in the EMA™ Radar for Enterprise Network Availability Monitoring Systems in 2014. We do not see ourselves as one of the next companies to develop and offer their own SDN controller or cloud computing solution. Our intent is rather to provide our well-known strength in unified network management for the SDN/NFV space as well. This includes topics like Service Assurance, Fault Management, Configuration and Provisioning, Service Modelling, etc.

 

Q: Are there any particular SDN controller or cloud computing solutions you can integrate with?

A: There is a wide range of different SDN controllers and cloud computing solutions that are currently of general interest. In its current SDN controller report, SDxCentral gave an overview and comparison of the most common open source and proprietary SDN controllers. None of these controllers can be named a definite leader. Equally, regarding the NFV area, the recent EMA™ report on Open Cloud Management and Orchestration showed that, besides the commonly known OpenStack, there are many other cloud computing solutions that enterprises are looking at and considering working with.

These developments remind me of something that, with my experience in network management, I have known for over a decade now. Also when looking at legacy environments there have always been competing standards. Despite years of standardization activities of various parties, often none of the competing standards became the sole winner and rendered all other interfaces or technologies obsolete. In fact, there is rather a broad range of various technologies and interfaces to be supported by a management system.

This is one of the strengths that we offer with StableNet®. We currently support over 125 different standardized and vendor-specific interfaces and protocols in one unified network management system. Besides this, with generic interfaces for both monitoring and configuration purposes, we can easily integrate with any structured data source through the simple creation of templates rather than the complicated development of new interfaces. This way, we can shift the main focus of our product and development activities to the actual management and orchestration rather than the adaptation to new data sources.

 

Q: Could you provide some examples here?

A: We continuously work on extending StableNet® with innovative new features to further automate the business processes of our customers and to simplify their daily work. Starting from Version 7, we have extended our existing integration interfaces with a REST API to further ease integration with third-party products. With Dynamic Rule Generation, Distributed Syslog Portal, and Status Measurements, we offer the newest technologies for efficient alarming and fault management. Our StableNet® Embedded Agent (SNEA) allows for ultra-scalable, distributed performance monitoring as well as for the management of IoT infrastructures. Being part of our unified network management solution, all these functionalities, including the ultra-scalable and vendor-agnostic configuration management, can equally be used in the context of SDN and NFV. A good way to keep up to date with our newest developments is our monthly Global Webinar Days. I would really recommend having a look at those.

 

Q: As a last question, since we have the unique chance to directly talk with the CTO of Infosim®, please let us be a little curious. What key novelties can people expect to come next from Infosim®?

A: There are of course many things that I could mention here, but the two areas that will probably have the most significant impact on management and orchestration are our new service catalog and the new tagging concept. With the service catalog the management is moved from a rather device- or server-based perspective to a holistic service-based view. This tackles both the monitoring and the configuration perspective and can significantly simplify and speed up common business processes. This is of course also related to our new tagging concept.

This new approach is a small revolution in the way that data can be handled for management and orchestration. We introduce the possibility of an unlimited number of customizable tags for each entity, be it a device, an interface, or an entire service, and combine this with automated relations and inheritance of properties between the different entities. Furthermore, the entities can be grouped in an automated way according to arbitrary tag criteria. This significantly extends the functionality, usability, and also the visualization possibilities.

About Infosim® StableNet®
StableNet® is available in two versions: Telco (for Telecom Operators and ISPs) and Enterprise (for IT and Managed Service Providers).

StableNet® is a 3rd generation highly automated Network Management System. The key differentiation of StableNet® to other legacy type Operational Support Systems (OSS) is that StableNet® is a unified OSS system with three integrated functionalities that focus on Configuration, Fault, and Performance Management, with automated Root Cause Analysis (RCA). StableNet® can be deployed on a Multi-Tenant, Multi-Customer, or dedicated platform and can be operated in a highly dynamic flex-compute environment.

Thank you to InterComms for this post.

Infosim® StableNet® Chosen as Athenahealth Consolidates Network Performance Monitoring

Infosim®, the new leader in network performance management, today announced that it has been selected as the supplier of choice to consolidate the IT infrastructure performance monitoring capabilities at Athenahealth.

Following an extensive evaluation of the performance management market, the organization identified StableNet® as the only solution capable of offering a single comprehensive view of the performance and capacity of its IT infrastructure in one unified system, with the highest levels of performance, scalability, and interoperability.

When introducing a performance monitoring solution, it is essential that it can be fully integrated with the existing infrastructure. Interoperability with existing monitoring systems was essential to the organization’s project, and will allow users to create the alerts and reports that they need to maintain current operations and plan for future capacity needs proactively.

Athenahealth’s network engineering team was looking for a tool that could monitor the health and performance of the company’s multivendor network and replace the majority of the point management tools being used. After narrowing the search to Infosim® StableNet®, the team conducted a successful proof of concept and elected to adopt the solution. StableNet® will replace more than a half-dozen point management tools and streamline network management practices.

Supporting Quotes:

Shamus McGillicuddy, Enterprise Management Associates Senior Analyst comments:
“Athenahealth, a provider of cloud-based healthcare software, will replace more than a half-dozen stand-alone network management tools with Infosim StableNet®. StableNet®, an enterprise network availability and performance management system, will help unify operations by providing customizable dashboards and network transparency to all key stakeholders in Athenahealth’s IT organization.”

Brian Lubelczyk, senior manager of data networks at Athenahealth comments:
“I discovered them at Cisco Live two years ago, and I was really impressed overall with how well they were able to hit everything we wanted to do on this project, from monitoring to capacity planning and transparency of the network. The more we used the product, the more we liked it. Even for simple bandwidth trending we were using three or four different tools.”

Link to full case study

ABOUT STABLENET®

StableNet® is available in two versions: Telco (for Telecom Operators and ISPs) and Enterprise (for IT and Managed Service Providers).

StableNet® Telco is a comprehensive unified management solution; offerings include: Quad-play, Mobile, High-speed Internet, VoIP (IPT, IPCC), IPTV across Carrier Ethernet, Metro Ethernet, MPLS, L2/L3 VPNs, Multi Customer VRFs, Cloud and FTTx environments. IPv4 and IPv6 are fully supported.

StableNet® Enterprise is an advanced, unified and scalable network management solution for true End-to-End management of medium to large scale mission-critical IT supported networks with enriched dashboards and detailed service-views focused on both Network & Application services.

Infosim®, the Infosim® logo and StableNet® are registered trademarks of Infosim® GmbH & Co. KG. All other trademarks or registered trademarks seen belong to their respective companies and are hereby acknowledged.

Thank you to Infosim for this post

Healthcare IT Reveals Network Rx

IT Heroes: A Prescription for Network Health

Each and every day, the bold men and women of IT risk it all to deliver critical applications and services. Their stories are unique. Their triumphs inspire. For the first time ever, the IT Heroes Series offers a revealing glimpse into the secrets and strategies that have won accolades for network teams around the world – and could do the same for you.

Initial Symptoms

Located in South West England, the Northern Devon National Health Service (NHS) trust serves a population of just under half a million. Operating across 1,300 square miles and providing vital IT services to a large district medical center and 17 community hospitals is serious business.

When the network slowed to a crawl, Network Technology Specialist Peter Lee and his team were motivated to provide a fast diagnosis.

Tools of the Trade


Since many life-saving tests and medical information are communicated via the healthcare network, it was critical for the team to get everything back on track fast. After receiving complaints about the “slow network,” Lee tested it out for himself. Like end users, he also experienced a series of timed-out sessions.

“I used Observer® GigaStor™ Retrospective Network Analysis to rewind the data, putting a filter on the machine. All that was coming back was SOPHOS,” says Lee, regarding the popular security software. “I widened the search to the subnet. It was an 11 minute capture with 25,000 hits on SOPHOSXL.net.”

Lee and his team had a hunch that the traffic from the SOPHOS application was abnormally high and hogging valuable network resources. But how could they prove it?

“I went back to a previous capture that I had run last February,” says Lee, referring to an ad hoc baseline established months before. “In some 20 minutes, the average was only 3,000 hits.”

With the previous network snapshot from GigaStor, the team was able to prove that the application traffic had drastically increased and was undoubtedly the cause of the slow network.

An Rx for a Network Fix

“We’ve got a call open with the SOPHOS senior team looking into this,” says Lee. “It works out to between 33 to 50 percent of all our DNS traffic is going out to SOPHOS. Without the GigaStor, I would have never known about the problem. It’s simple, it’s easy, and it’s fantastic.”

Find out how this IT Hero found the hardware issue that brought the network to its knees, and how his team uses Wireshark to troubleshoot on the go. Download the full Northern Devon NHS Case Study now.

Thanks to VIAVI for the article.

Key Factors in NCCM and CMDB Integration – Part 2 – Change Configuration and Backup

In Part 1 of this series I discussed how an NCCM solution and a CMDB can work together to create a more effective IT inventory system. In this post, I will take that a step further and show how your change configuration process will benefit from integration with that same CMDB.

In general, the process of implementing IT infrastructure change happens at three separate stages of an asset’s lifecycle.

  1. Initial deployment / provisioning
  2. In production / changes
  3. Decommissioning / removal

In each of these stages, there is a clear benefit to having the system(s) that are responsible for orchestrating the change be integrated with an asset inventory / CMDB tool. Let’s take a look at each one to see why.

1. Initial Deployment / Provisioning

When a new device is ready to be put onto the network, it must go through one or more pre-deployment steps in order to be configured for its eventual job in the IT system. Getting from “out of the box” to “in production” requires at least the following:

  1. Installation / turn-on / pre-test of hardware
  2. Load / upgrade of software images
  3. Configuration of “base” information like IP address / FQDN / management configuration
  4. Creation / deployment of the full configuration

This may also include policy and security testing and potentially manual acceptance by an authorized manager. It is best practice to control this process through an ITIL-compliant system using a software application which has knowledge of what is required at each step and controls the workflow and approval process. However, the CMDB / service desk rarely, if ever, can also process the actual changes to the devices. This is typically a manual process or (in the best case) is automated with an NCCM system. So, in order to coordinate that flow of activity, it is absolutely essential to have the CMDB be the “keeper” of the process and then “activate” the NCCM solution when it is time to make the changes to the hardware. The NCCM system should then be able to inform the CMDB that the activity was performed and also report back any potential issues or errors that may have occurred.
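
To illustrate the handoff just described, here is a minimal Python sketch in which the CMDB-driven workflow “activates” the change and the NCCM layer reports the outcome back. Every function and field name here is hypothetical; real CMDB and NCCM products expose this through their own APIs.

# Hypothetical sketch of the CMDB -> NCCM -> CMDB workflow described above.

def nccm_apply_change(device: str, config_snippet: str) -> dict:
    # Placeholder for the NCCM system pushing configuration to the device.
    try:
        # ... push config via SSH / SNMP / vendor API ...
        return {"device": device, "status": "success", "detail": ""}
    except Exception as exc:  # sketch only; real code would be more selective
        return {"device": device, "status": "failed", "detail": str(exc)}

def cmdb_record_result(change_id: str, result: dict) -> None:
    # Placeholder for writing the outcome back to the CMDB change record.
    print(f"change {change_id}: {result['status']} on {result['device']}")

# A CMDB-approved change reaches the point where the hardware must be touched:
change_id = "CHG-1042"
result = nccm_apply_change("edge-rtr-03", "ntp server 10.0.0.1")
cmdb_record_result(change_id, result)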

2. In Production / Changes

Once a device has been placed into production, at some point there will come a time when the device needs changes made to its hardware, software, or configuration. Once again, the change control process should be managed through the CMDB / service desk. It is critical that, as this process begins, the CMDB has been kept up to date with the current asset information. That way there are no “surprises” when it comes time to implement the changes. This goes back to having a standard re-discovery process which is performed on a known schedule by the NCCM system. We have found that most networks require a full rediscovery about once per week to be kept up to date – but we have also worked with clients that adjust this frequency up or down as necessary.

Just as in the initial deployment stage, it is the job of the NCCM system to inform the CMDB as to the state of the configuration job including any problems that might have been encountered. In some cases it is prudent to have the NCCM system automatically retry any failed job at least once prior to reporting the failure.
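
A minimal sketch of that retry behavior, with a generic callable standing in for whatever actually pushes the change; the names and return format are assumptions for illustration only.

# Retry a failed configuration job once (by default) before reporting failure.
from typing import Callable

def apply_with_retry(push: Callable[[], dict], retries: int = 1) -> dict:
    result = push()
    attempts = 0
    while result.get("status") != "success" and attempts < retries:
        result = push()       # single retry by default
        attempts += 1
    return result             # the caller reports this outcome back to the CMDB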

3. Decommissioning / Removal

When the time has come for a device to be removed from production and/or decommissioned, the same type of process should be followed as when it was initially provisioned (but in reverse). If the device is being replaced by a newer system, then part of (or potentially the whole) configuration may simply be moved to the new hardware. This is where the NCCM system’s backup process comes into play. As per all NCCM best practices, there should be a regular schedule of backups to ensure the configuration is known and accessible in case of emergency.
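
For example, a scheduled backup job of this kind often boils down to little more than the following Python sketch using the netmiko library. The device address, credentials, and platform type are placeholders, and a real NCCM system would add scheduling, credential management, and versioned storage on top.

# Minimal config-backup sketch: pull the running configuration and store it
# with a timestamp so it is available if the device is replaced or removed.
from datetime import datetime
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",       # placeholder platform
    "host": "10.0.10.5",              # placeholder address
    "username": "backup-user",        # use a credential vault in practice
    "password": "example-password",
}

conn = ConnectHandler(**device)
running_config = conn.send_command("show running-config")
conn.disconnect()

filename = f"{device['host']}_{datetime.now():%Y%m%d_%H%M%S}.cfg"
with open(filename, "w") as fh:
    fh.write(running_config)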

Once the device has been physically removed from the network, it must also either be fully removed from the CMDB or, at the very least, be tagged as decommissioned. This has many benefits, including stopping the accidental purchase of support and maintenance on a device which is no longer in service, as well as preventing the NCCM system from attempting to perform discovery or configuration jobs on the device in the future (which would otherwise generate failures).

NCCM systems and CMDB’s really work hand in hand to help manage the complete lifecycle of an IT asset. While it could be possible to accurately maintain two non-connected systems, the time and effort involved, not to mention that much greater potential for error, makes the integration of your CMDB and NCCM tools a virtual necessity for large modern IT networks.

Thanks to NMSaaS for the article.


Infosim® Announces Release of StableNet® 7.5

Infosim®, the technology leader in automated Service Fulfillment and Service Assurance solutions, today announced the release of version 7.5 of its award-winning software suite StableNet® for Telco and Enterprise customers.

StableNet® 7.5 provides a significant number of powerful new features, including:

  • Dynamic Rule Generation (DRG); a new and revolutionary Fault Management concept
  • REST interface supporting the new StableNet® iPhone (and upcoming Android) app
  • Highly customizable dashboard in both the GUI and Web Portal
  • Enabling integration with SDN/NFV element managers
  • NCCM structurer enabling creation of optimized and well-formatted device configurations
  • New High-Availability (HA) infrastructure based on Linux HA technology
  • Syslog & Trap Forwarding enabling integration of legacy systems that rely on their original trap & syslog data
  • Open Alarms GeoMap enabling geographical representation of open alarms

StableNet® version 7.5 is available for purchase now. Customers with current maintenance contracts may upgrade free of charge as per the terms and conditions of their contract.

Supporting Quotes:

Jim Duster, CEO, Infosim®, Inc.

“We are happy that our newest release is again full of innovative features like DRG. Our customers are stating this new DRG feature will help them receive a faster ROI by improving automation in their fault management area and dramatically increase the speed of Root-Cause Analysis.”

Download the release notes here.

Thanks to Infosim for the article.

Two Ways Networks Are Transformed By NetFlow

According to an article on techtarget.com, “Your routers and switches can yield a mother lode of information about your network – if you know where to dig.” The article goes on to say that excavating and searching through the endless traffic data and logs generated by your network is a lot like mining for gold: punching random holes to look for a few nuggets of information isn’t very efficient. Your search will be much more fruitful if you know where to look and what it will look like. Fortunately, the data generated by the NetFlow traffic reporting protocol yields specific information, and you can easily sort, view, and analyze it into whatever form you need. In contemporary networks, there is a need to collect and retain a good set of traffic records for several different purposes. These include the ability to monitor traffic for network planning, security, and analysis, as well as to track traffic usage for billing purposes. Every business experiences network problems. The goal is to transform these “badly behaving” networks by investigating the data being generated by the routers, switches, and other hardware that make up the system.

  • Trace and repair network misconfigurations

Problems with networks can run the gamut from mismatched applications and hardware to wireless access points opened to accommodate BYOD users and other business uses. While there is always talk about software flaws and news about the latest internet threat, those things often distract IT pros from the real, everyday threat of unstable networks that have been configured to accommodate legacy hardware and a multitude of software applications.

The increasing complexity of the Internet itself, with the interconnection of many different devices and device types, adds to the challenge of operating a computer network. Even though developing protocols to respond to unpredicted failures and misconfigurations is a workable solution, out-of-date configurations can still cause frequent problems and denial of service (DoS). With many modern network devices monitoring their own functions and gathering data, retrieving and utilizing the NetFlow information they produce makes tracing and repairing misconfigurations possible, easier, and more efficient.
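
As a small illustration, the Python sketch below uses flow records to spot one common misconfiguration: clients sending DNS queries to resolvers other than the sanctioned ones. The records, addresses, and policy are invented for the example.

# Flag flows that violate the intended configuration (here: DNS policy).
SANCTIONED_DNS = {"10.0.0.53", "10.0.1.53"}

flows = [
    {"src": "10.1.4.20", "dst": "10.0.0.53", "dst_port": 53},
    {"src": "10.1.4.21", "dst": "8.8.8.8",   "dst_port": 53},   # misconfigured client
]

for flow in flows:
    if flow["dst_port"] == 53 and flow["dst"] not in SANCTIONED_DNS:
        print(f"unexpected DNS traffic: {flow['src']} -> {flow['dst']}")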

  • Detect security breaches

There are many uses for NetFlow, but one of the most important is network security. This quote from an article by Wagner and Bernhard, describing worm and anomaly detection in fast IP networks, bears out the security problems facing governments, businesses, and internet users today.

“Large-scale network events, such as outbreaks of a fast Internet worm are difficult to detect in real-time from observed traffic, even more so if the observed network carries a large amount of traffic. Even during worm outbreaks, actual attack traffic is only a small fraction of the overall traffic. Its precise characteristics are usually unknown beforehand, making direct matching hard. It is desirable to derive parameters that describe large amounts of traffic data in such a way that details are hidden, but the type of changes associated with worm outbreaks and other network events are visible.”

NetFlow provides a 24/7 account of all network activity. There is an “unblinking” eye observing anything and everything that happens within the network boundaries. All the data needed to identify and enact a clean-up is recorded in the flow, and this is invaluable to a security pro trying to reduce the impact of a breach in the network. NetFlow provides a visible, “what’s happening right now” view that other systems cannot provide. Most security systems alert after something has been detected, while NetFlow is constantly gathering information even when things seem to be just fine. In addition, NetFlow-based analysis relies on traffic behavior and algorithms, which provide rapid detection of breaches that other technologies often miss.
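
A minimal Python sketch of that behavior-based approach might compare each host's current flow rate against its recent baseline and flag sudden jumps, since a worm-infected host typically opens flows to many destinations at once. The hosts, numbers, and threshold here are illustrative assumptions only.

# Flag hosts whose flow rate jumps far above their recent baseline.
baseline_flows_per_min = {"10.1.4.20": 12, "10.1.4.21": 9, "10.1.4.22": 15}
current_flows_per_min  = {"10.1.4.20": 14, "10.1.4.21": 480, "10.1.4.22": 13}

SPIKE_FACTOR = 10   # alert when a host exceeds 10x its normal flow rate

for host, current in current_flows_per_min.items():
    baseline = baseline_flows_per_min.get(host, 1)
    if current > baseline * SPIKE_FACTOR:
        print(f"possible breach: {host} at {current} flows/min (baseline {baseline})")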

Thanks to NetFlow Auditor for the article.

