CIO Review – Infosim Unified Solution for Automated Network Management

CIO Review

20 Most Promising Networking Solution Providers

Virtualization has become the lifeblood of the networking industry today. With the advent of technologies such as software-defined networking and network function virtualization, the black-box paradigm, or legacy networking model, has been shattered. In the past, the industry witnessed networking technologies such as Fiber Distributed Data Interface (FDDI), which eventually gave way to Ethernet, the predominant network of choice. This provided opportunities to refresh infrastructures and create new networking paradigms. Today, we see a myriad of proprietary technologies competing for next-generation networking models that are no longer static, opaque, or rigid.

Ushering in a new way of thinking and unlocking new possibilities, customers are increasingly demanding automation from network solution providers. The key requirement is an agile network controlled from a single source. Visibility into the network has also become a must-have, providing real-time information about the events occurring inside the network.

In order to enhance enterprise agility, improve network efficiency and maintain high standards of security, several innovative players in the industry are delivering cutting-edge solutions that ensure visibility, cost savings and automation in the networks. In the last few months we have looked at hundreds of solution providers who primarily serve the networking industry, and shortlisted the ones that are at the forefront of tackling challenges faced by this industry.

In our selection, we looked at the vendor’s capability to fulfill the burning needs of the sector through the supply of a variety of cost-effective and flexible solutions that add value to the networking industry. We present to you CIO Review’s 20 Most Promising Networking Solution Providers 2015.

Infosim Unified Solution for Automated Network Management

Today’s networking technology, though very advanced, faces a major roadblock—the lack of automation in network management products. “These products are incapable of delivering a truly unified management approach as they are not an integrated solution but merely a collection of different programs bound together under one GUI to give them the appearance of an integrated solution,” notes Jim Duster, CEO, Infosim. Moreover, the need to continuously update new device information, configuration changes, and alerts and actions across these different toolsets is contributing to an ongoing financial burden for enterprises. Addressing these concerns with a unique network management solution is Infosim, a manufacturer of Automated Service Fulfillment and Service Assurance solutions.

Infosim offers StableNet, a unified solution developed and designed to cover performance management, fault management, and configuration management with software engineered on a single code base and a consistent underlying data model. “StableNet is the only “suite” within the network performance management software industry,” claims Duster. The solution addresses the existing operational and technical challenges of managing distributed, virtualized, and mission-critical IT infrastructures. “With this approach, we are able to create workflows in every unique customer business and industry to cover many processes efficiently,” he adds. For instance, StableNet monitors the production equipment of a manufacturing company. In case of an equipment failure, the error is reported and StableNet delivers the root cause of the problem while notifying an external service provider. The service provider’s technician can open an inspection window with StableNet, exchange the defective device and, after repair, provide feedback to the customer’s operations center.

To support flexible deployment of StableNet, the company offers the Infosim StableNet appliance, a high-performance, preconfigured, security-hardened hardware platform. “Appliances related to StableNet series reduce Total Cost of Ownership (TCO) by simplifying deployment, consolidating network infrastructure, and providing an extensible platform that can scale with your organization,” states Duster. StableNet also provides a low-cost agent platform called the StableNet Embedded Agent (SNEA) that enables highly distributed installations to support End-to-End (E2E) visibility, cloud monitoring, and the Internet of Things. SNEA is economical to deploy and is auto-discovered at tactical collection points in the network, resulting in a low TCO for collecting and processing network performance actions and alerts.

Infosim StableNet is deployed across the networks of major players in the Telco and Enterprise markets, including a German auto manufacturer. Acting as the client’s centralized system, StableNet reduced their toolset from more than 10 disparate software and hardware offerings from multiple suppliers to fewer than four. This significantly reduced TCO while increasing service levels. “Siloed IT personnel who used to hide behind non-consolidated results from their individual solutions were all synchronized into one solution, speeding productivity, collaboration and communication,” states Duster.

Infosim is currently participating in advanced research projects on Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) with several universities and leading industry device manufacturers. “The company applies 50 percent of its new software development resource to customer needs which assists customers in achieving milestones in vendor agnostic device support additions, industry specific capabilities, and features that were envisioned by real users,” asserts Duster.

For the years ahead, Infosim plans to build upon its product capability by automating the processes and activities that produce guaranteed service levels and reduce the consumption of human resources in the Network Operations Center (NOC). “Our vision is to enable the Dark NOC, which means a large percent of the non-value adding activities of network engineers can be automated in our product, freeing network engineers to work on proactive new innovations and concepts,” Duster concludes.

Thanks to Infosim for the article. 

3 Reasons for Real Time Configuration Change Detection

So far, we have explored what NCCM is and taken a deep dive into device policy checking. In this post we are going to explore Real Time Configuration Change Detection (or just Change Detection, as I will call it in this blog). Change Detection is the process by which your NCCM system is notified—either directly by the device or by a 3rd-party system—that a configuration change has been made on that device. Why is this important? Let’s identify 3 main reasons that Change Detection is a critical component of a well-deployed NCCM solution.

1. Unauthorized change recording. As anyone that works in an enterprise IT department knows, changes need to be made in order to keep systems updated for new services, users and so on. Most of the time, changes are (and should be) scheduled in advance, so that everyone knows what is happening, why the change is being made, when it is scheduled and what the impact will be on running services.

However, the fact remains that anyone with the correct passwords and privilege level can usually log into a device and make a change at any time. Engineers who know the network and feel comfortable working on the devices will often just log in and make “on-the-fly” adjustments that they think won’t hurt anything. Unfortunately, as we all know, those “best intentions” can lead to disaster.

That is where Change Detection can really help. Once a change has been made, it will be recorded by the device and a log can be transmitted either directly to the NCCM system or to a 3rd-party logging server, which then forwards the message to the NCCM system. At the most basic level this means that if something does go wrong, there is an audit trail that can be investigated to determine what happened and when. It can also potentially be used to roll back the changes to a known good state.

2. Automated actions.

Once a change has been made (scheduled or unauthorized) many IT departments will wish to perform some automated actions immediately at the time of change without waiting for a daily or weekly schedule to kick in. Some of the common automated activities are:

  • Immediate configuration backup. So that all new changes are recorded in the backup system.
  • Launch of a new discovery. If the change involved any hardware or OS-type changes, like a version upgrade, then the NCCM system should also re-discover the device so that the asset system has up-to-date information about it.

These automation actions can ensure that the NCCM and other network management applications are kept up to date as changes are being made without having to wait for the next scheduled job to start. This ensures that any other systems are not acting “blindly” when they try to perform an action with/on the changed device.

3. Policy Checking. New configuration changes should also prompt an immediate policy check of the system to ensure that the change did not inadvertently breach a compliance or security rule. If a policy has been broken, then a manager can be notified immediately. Optionally, if the NCCM system is capable of remediation, then a rollback or similar operation can happen to bring the system back into compliance immediately.
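
As a rough illustration of the kind of policy check described above, the sketch below evaluates a retrieved device configuration against a couple of required and forbidden statements. It is a minimal Python example, not part of any particular NCCM product; the rule patterns and the sample configuration are hypothetical.

```python
import re

# Hypothetical compliance rules: patterns that MUST appear and patterns that MUST NOT.
REQUIRED = [r"^service password-encryption", r"^logging host \d+\.\d+\.\d+\.\d+"]
FORBIDDEN = [r"^snmp-server community public", r"^ip http server"]

def check_policy(running_config: str) -> list[str]:
    """Return a list of human-readable violations for a device configuration."""
    violations = []
    for pattern in REQUIRED:
        if not re.search(pattern, running_config, re.MULTILINE):
            violations.append(f"missing required line matching: {pattern}")
    for pattern in FORBIDDEN:
        if re.search(pattern, running_config, re.MULTILINE):
            violations.append(f"forbidden line present: {pattern}")
    return violations

if __name__ == "__main__":
    sample_config = "hostname core-sw1\nip http server\nlogging host 10.0.0.5\n"
    for violation in check_policy(sample_config):
        print("POLICY VIOLATION:", violation)  # feed into notification / remediation
```

In a real system the violation list would feed the notification and remediation steps mentioned above.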

Almost all network devices are capable of logging hardware, software, and configuration changes. Most of the time these logs can easily be exported in the form of an SNMP trap or Syslog message. A good NCCM system can receive these messages, parse them to understand what has happened and, if the log signifies that a change has taken place, take some action(s) as described above. This real-time configuration change detection mechanism is a staple of an enterprise NCCM solution and should be implemented in all organizations where network changes are commonplace.
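
To make the mechanism concrete, here is a minimal sketch of a syslog-based change listener in Python, using only the standard library. It assumes Cisco-style %SYS-5-CONFIG_I messages; other vendors use different formats, and the triggered action is a placeholder for the backup, re-discovery, and policy-check steps described earlier.

```python
import re
import socketserver

# Pattern for a Cisco-style configuration-change syslog message; other vendors
# use different mnemonics, so treat this as an illustrative assumption.
CONFIG_CHANGE = re.compile(r"%SYS-5-CONFIG_I: Configured from (\S+) by (\S+)")

def on_config_change(device_ip: str, source: str, user: str) -> None:
    # Placeholder action: a real NCCM system would queue a config backup,
    # a re-discovery, and a policy check for this device.
    print(f"{device_ip}: config changed via {source} by {user} -> backup + policy check")

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data, _sock = self.request                       # UDP request: (bytes, socket)
        message = data.decode("utf-8", errors="replace")
        match = CONFIG_CHANGE.search(message)
        if match:
            on_config_change(self.client_address[0], match.group(1), match.group(2))

if __name__ == "__main__":
    # Port 5514 avoids needing root privileges for the well-known syslog port 514.
    with socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler) as server:
        server.serve_forever()
```

A production NCCM system would of course also accept SNMP traps and normalize messages from many vendors; the point here is simply how a change log can be parsed and turned into an immediate action.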

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.

Ixia Taps into Visibility, Access and Security in 4G/LTE

The Growing Impact of Social Networking Trends on Lawful Interception

Lawful Interception (LI) is the legal process by which a communications network operator or Service Provider (SP) gives authorized officials access to the communications of individuals or organizations. With security threats mushrooming in new directions, LI is more than ever a priority and major focus of Law Enforcement Agencies (LEAs). Regulations such as the Communications Assistance for Law Enforcement Act (CALEA), mandate that SPs place their resources at the service of these agencies to support surveillance and interdiction of individuals or groups.

CALEA makes Lawful Interception a priority mission for Service Providers as well as LEA; its requirements make unique demands and mandate specific equipment to carry out its high-stakes activities. This paper explores requirements and new solutions for Service Provider networks in performing Lawful Interception.

A Fast-Changing Environment Opens New Doors to Terrorism and Crime

In the past, Lawful Interception was simpler and more straightforward because it was confined to traditional voice traffic. Even in the earlier days of the Internet, it was still possible to intercept a target’s communication data fairly easily.

Now, as electronic communications take on new forms and broaden to a potential audience of billions, data volumes are soaring, and the array of service offerings is growing apace. Lawful Interception Agencies and Service Providers are racing to thwart terrorists and other criminals who have the technological expertise and determination to carry out their agendas and evade capture. This challenge will only intensify with the rising momentum of change in communication patterns.

Traffic patterns have changed: In the past it was easier to identify peer-to-peer applications or chat using well-known port numbers. In order to evade LI systems, the bad guys had to work harder. Nowadays, most applications use standard HTTP and, in most cases, SSL to communicate. This puts an extra burden on LI systems, which must identify more targets overall in larger volumes of data with fewer filtering options.

Social Networking in particular is pushing usage to exponential levels, and today’s lawbreakers have a growing range of sophisticated, encrypted communication channels to exploit. With the stakes so much higher, Service Providers need robust, innovative resources that can contend with a widening field of threats. This interception technology must be able to collect volume traffic and handle data at unprecedented high speeds and with pinpoint security and reliability.

LI Strategies and Goals May Vary, but Requirements Remain Consistent

Today, some countries are using nationwide interception systems while others only dictate policies that providers need to follow. While regulations and requirements vary from country to country, organizations such as the European Telecommunications Standards Institute (ETSI) and the American National Standards Institute (ANSI) have developed technical parameters for LI to facilitate the work of LEAs. The main functions of any LI solution are to access Interception-Related Information (IRI) and Content of Communication (CC) from the telecommunications network and to deliver that information in a standardized format via the handover interface to one or more monitoring centers of law enforcement agencies.

High-performance switching capabilities, such as those offered by the Ixia Director™ family of solutions, should map to the following LI standards in order to be effective: they must be able to isolate suspicious voice, video, or data streams for interception based on IP address, MAC address, or other parameters, and the device must be able to carry out this filtering at wire speed (a software sketch of this selection logic follows the list below). Requirements for supporting Lawful Interception activities include:

  • The ability to intercept all applicable communications of a certain target without gaps in coverage, including dropped packets, where missing encrypted characters may render a message unreadable or incomplete
  • Total visibility into network traffic at any point in the communication stream
  • Adequate processing speed to match network bandwidth
  • Undetectability, unobtrusiveness, and lack of performance degradation (a red flag to criminals and terrorists on alert for signs that they have been intercepted)
  • Real-time monitoring capabilities, because time is of the essence in preventing a crime or attack and in gathering evidence
  • The ability to provide intercepted information to the authorities in the agreed-upon handoff format
  • Load sharing and balancing of traffic that is handed to the LI system
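
As noted above, Ixia’s switching products perform this isolation in hardware at wire speed. Purely as a software illustration of the same selection logic, the sketch below uses the scapy library with a BPF capture filter to pick out one conversation by its IP address pair; the interface name and addresses are placeholders, and this is not a representation of how the Director products are configured.

```python
from scapy.all import sniff  # requires scapy and packet-capture privileges

# Placeholder values: the conversation of interest and the monitored interface.
TARGET_A, TARGET_B = "192.0.2.10", "198.51.100.20"
IFACE = "eth0"

def handle(pkt):
    # In a real deployment the packet would be handed to the LI mediation /
    # monitoring system; here we just summarize it.
    print(pkt.summary())

# A BPF filter keeps only traffic exchanged between the two target addresses.
sniff(iface=IFACE,
      filter=f"host {TARGET_A} and host {TARGET_B}",
      prn=handle,
      store=False)
```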

From the perspective of the network operator or Service Provider, the primary obligations and requirements for developing and deploying a lawful interception solution include:

  • Cost-effectiveness
  • Minimal impact on network infrastructure
  • Compatibility and compliance
  • Support for future technologies
  • Reliability and security

Ixia’s Comprehensive Range of Solutions for Lawful Interception

This Ixia customer (the “Service Provider”) is a 4G/LTE pioneer that relies on Ixia solutions. Ixia serves the LI architecture by providing the access part of an LI solution in the form of Taps and switches. These contribute functional flexibility and can be configured as needed in many settings. Both the Ixia Director solution family and the iLink Agg™ solution can aggregate traffic from a group of links and pick out conversations with the same IP address pair from any of those links.

Following are further examples of Ixia products that can form a vital element of a successful LI initiative:

Test access ports, or Taps, are devices used by carriers and others to meet the capability requirements of CALEA legislation. Ixia is a global leader in the range and capabilities of its Taps, which provide permanent, passive access points to the physical stream.

Ixia Taps reside in both carrier and enterprise infrastructures to perform network monitoring and to improve both network security and efficiency. These inline devices provide permanent, passive access points to the physical stream. The passive characteristic of Taps means that network data is not affected whether the Tap is powered or not. As part of an LI solution, Taps have proven more useful than Span ports. If Law Enforcement Agencies must reconfigure a switch to send the right conversations to the Span port every time an intercept is required, a risk arises of misconfiguring the switch and its connections. Also, Span ports drop packets—another significant monitoring risk, particularly when traffic is encrypted.

Director xStream™ and iLink Agg xStream™ enable deployment of an intelligent, flexible and efficient monitoring access platform for 10G networks. Director xStream’s unique TapFlow™ filtering technology enables LI to focus on select traffic of interest for each tool based on protocols, IP addresses, ports, and VLANs. The robust engineering of Director xStream and iLink Agg xStream enables a pool of 10G and 1G tools to be deployed across a large number of 10G network links, with remote, centralized control of exactly which traffic streams are directed to each tool. Ixia xStream solutions enable law enforcement entities to view more traffic with fewer monitoring tools as well as relieving oversubscribed 10G monitoring tools. In addition, law enforcement entities can share tools and data access among groups without contention and centralize data monitoring in a network operations center.

Director Pro™ and Director xStream Pro data monitoring switches offer law enforcement the ability to perform better pre-filtering via Deep Packet Inspection (DPI) and to home in on a specific phone number or credit card number. These products differ from other platforms, which may only be able to seek data within fixed portions of the packet, thanks to a unique ability to filter content and perform pattern matching in hardware at wire speed, potentially up to Layer 7. Such DPI makes it possible to apply filters to a packet, or to multiple packets, at any location, regardless of packet length, how “deep” the relevant data lies, or where within the packet the data to be matched is located.

Thanks to Ixia for the article.

Ixia Taps into Hybrid Cloud Visibility

One of the major issues that IT organizations have with any form of external cloud computing is that they don’t have much visibility into what is occurring within any of those environments.

To help address that specific issue, Ixia created its Net Tool Optimizer, which makes use of virtual and physical taps to provide visibility into cloud computing environments. Now via the latest upgrade to that software, Ixia is providing support for both virtual and physical networks while doubling the number of interconnects the hardware upon which Net Tool Optimizer runs can support.

Deepesh Arora, vice president of product management for Ixia, says providing real-time visibility into both virtual and physical networks is critical, because in the age of the cloud, the number of virtual networks being employed has expanded considerably. For many IT organizations, this means they have no visibility into either the external cloud or the virtual networks that are being used to connect them.

The end goal, says Arora, should be to use Net Tool Optimizer to predict what will occur across those hybrid cloud computing environments, but also to enable IT organizations to use that data to programmatically automate responses to changes in those environments.

Most IT organizations find managing the network inside the data center to be challenging enough. With the addition of virtual networks that span multiple cloud computing environments running inside and outside of the data center, that job is more difficult than ever. Of course, no one can manage what they can’t measure, so the first step toward gaining visibility into hybrid cloud computing environments starts with something as comparatively simple as a virtual network tap.

Thanks to IT Business Edge for the article.

Infosim® Global Webinar Day – How to prevent – Or Recover From – a Network Disaster

Oh. My. God. This time it IS the network!

How to prevent – or recover from – a network disaster

Join Jason Farrer, Sales Engineer with Infosim® Inc., for a Webinar and Live Demo on “How to prevent – or recover from – a network disaster”.

 

This Webinar will provide insight into:

  • Why is it important to provide for a network disaster?
  • How to deal with network disaster scenarios [Live Demo]
  • How to prevent network corruption & enhance network security

Watch Now!

Infosim® Global Webinar Day August 27th, 2015

A recording of this Webinar will be available to all who register!
(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

A Simple Solution To Combatting Virtual Data Center Blind Spots

Blind spots are a long-established threat to virtual data centers. They are inherent to virtual machine (VM) technology because of the nature of VMs, the lack of visibility into inter- and intra-VM traffic, typical practices around VM usage, and the use of multiple hypervisors in enterprise environments.

Virtual machines by their nature hide inter- and intra-VM traffic, because the traffic stays within a very small geographic area. As I mentioned in a previous blog, Do You Really Know What’s Lurking in Your Data Center?, Gartner Research found that 80% of VM traffic never reaches the top of the rack, where it can be captured by traditional monitoring technology. This means that if something is happening to that 80% of your data (a security threat, performance issue, compliance issue, etc.), you’ll never know about it. This is a huge area of risk.

In addition, an Ixia-conducted market survey on virtualization technology, released in March 2015, exposed a high propensity for data center blind spots to exist due to typical data center practices. This report showed that hidden data, i.e. blind spots, probably exists on typical enterprise data networks due to inconsistent monitoring practices, in several cases a complete lack of monitoring practices, and the typical absence of one central group responsible for collecting monitoring data.

For instance, only 37% of the respondents were monitoring their virtualized environment with the same processes that they use in their physical data center environments, and the monitoring that was done usually used fewer capabilities in the virtual environment. This means there is a potential for key monitoring information NOT to be captured for the virtual environment, which could lead to security, performance, and compliance issues for the business. In addition, only 22% of businesses designated the same staff to be responsible for monitoring and managing their physical and virtual technology. Different groups being responsible for monitoring practices and capabilities often leads to inconsistencies in data collection and execution of company processes.

The survey further revealed that only 42% of businesses monitor the personally identifiable information (PII) transmitted and stored on their networks. At the same time, two-thirds of the respondents were running critical applications within their virtual environment. Taken together, these “typical practices” should definitely raise warning signs for IT management.

Additional research by firms like IDC and Gartner is exposing another set of risks for enterprises around the use of multiple hypervisors in the data center. For instance, the IDC Virtualization and the Cloud 2013 study found that 16% of customers had already deployed or were planning to deploy more than one hypervisor. Another 45% were open to the idea in the future. In September 2014, another IDC market analysis stated that over half of enterprises (51%) now have more than one type of hypervisor installed. Gartner ran a poll in July 2014 that also corroborated that multiple hypervisors were being used in enterprises.

This trend is positive, as having a second hypervisor is a good strategy for an enterprise. Multiple hypervisors allow you to:

  • Negotiate pricing discounts by simply having multiple suppliers
  • Help address corporate multi-vendor sourcing initiatives
  • Provide improved business continuity scenarios for product centric security threats

But it is also very troubling, because the cons include:

  • Extra expenses for the set-up of a multi-vendor environment
  • Poor to no visibility into a multi-hypervisor environment
  • An increase in general complexity (particularly management and programming)
  • And further complexities if you have advanced data center initiatives (like automation and orchestration)

One of the primary concerns is lack of visibility. With a proper visibility strategy, the other cons of a multi-hypervisor environment can be either partially or completely mitigated. One way to accomplish this goal is to deploy a virtual tap that includes filtering capability. The virtual tap gives you access to all the data you need. This data can be forwarded on to a packet broker for distribution to the right tool(s). Built-in filtering capability is an important feature of the virtual tap so that you can limit costs and bandwidth requirements.

Blind spots can create the following issues:

  • Hidden security issues
  • Inadequate access to data for trending
  • Inadequate data to demonstrate proper regulatory compliance policy tracking

Virtual taps (like the Ixia Phantom vTap) address blind spots and their inherent dangers.

If the virtual tap is integrated into a holistic visibility approach using a Visibility Architecture, you can streamline your monitoring costs: instead of having two separate monitoring architectures with potentially duplicate equipment (and duplicate costs), you have one architecture that maximizes the efficiency of all your current tools, as well as any future investments. When installing the virtual tap, the key is to make sure that it installs into the hypervisor without adversely affecting it. Once this is accomplished, the virtual tap will have the access to inter- and intra-VM traffic that it needs, as well as the ability to efficiently export that information. After this, the virtual tap will need a filtering mechanism so that exported data can be properly limited and does not overload the LAN/WAN infrastructure. The last thing you want to do is cause performance problems on your network. Details on these concepts and best practices are available in the whitepapers Illuminating Data Center Blind Spots and Creating A Visibility Architecture.

As mentioned earlier, a multi-hypervisor environment is now a fact of life for the enterprise. The Ixia Phantom vTap supports multiple hypervisors and has been optimized for VMware ESX and kernel-based virtual machine (KVM) environments. KVM is starting to make a big push into the enterprise environment; it has been part of the Linux kernel since 2007. According to IDC, KVM license shipments were around 5.2 million units in 2014, a number IDC expects to increase to 7.2 million by 2017. Much of the KVM ecosystem is organized by the Open Virtualization Alliance, whose recommendations the Phantom vTap supports.

To learn more, please visit the Ixia Phantom vTap product page, the Ixia State of Virtualization for Visibility Architectures 2015 report or contact us to see a Phantom vTap demo!

Additional Resources:

Ixia Phantom vTap

Ixia State of Virtualization for Visibility Architectures 2015 report

White Paper: Illuminating Data Center Blind Spots

White Paper: Creating A Visibility Architecture

Blog: Do You Really Know What’s Lurking in Your Data Center?

Solution Focus Category

Network Visibility

Thanks to Ixia for the article.

Ixia Exposes Hidden Threats in Encrypted Mission-Critical Enterprise Applications

Delivers industry’s first visibility solution that includes stateful SSL decryption to improve application performance and security forensics

Ixia (Nasdaq: XXIA), a leading provider of application performance and security resilience solutions, announced it has extended its Application and Threat Intelligence (ATI) Processor™ to include stateful, bi-directional SSL decryption capability for application monitoring and security analytics tools. Stateful SSL decryption provides complete session information to better understand the transaction as opposed to stateless decryption that only provides the data packets. As the sole visibility company providing stateful SSL decryption for these tools, Ixia’s Visibility Architecture™ solution is more critical than ever for enterprise organizations looking to improve their application performance and security forensics.

“Together, FireEye and Ixia offer a powerful solution that provides stateful SSL inspection capabilities to help protect and secure our customer’s networks,” said Ed Barry, Vice President of Cyber Security Coalition for FireEye.

As malware and other indicators of compromise are increasingly hidden by SSL, decryption of SSL traffic for monitoring and security purposes is now more important for enterprises. According to Gartner research, for most organizations, SSL traffic is already a significant portion of all outbound Web traffic and is increasing. It represents on average 15 percent to 25 percent of total Web traffic, with strong variations based on the vertical market.1 Additionally, compliance regulations such as the PCI-DSS and HIPAA increasingly require businesses to encrypt all sensitive data in transit. Finally, business applications like Microsoft Exchange, Salesforce.com and Dropbox run over SSL, making application monitoring and security analytics much more difficult for IT organizations.

Enabling visibility without borders – a view into SSL

In June, Ixia enabled seamless visibility across physical, virtual and hybrid cloud data centers. Ixia’s suite of virtual visibility products allows insight into east-west traffic running across the modern data center. The newest update, which includes stateful SSL decryption, extends security teams’ ability to look into encrypted applications revealing anomalies and intrusions.

Visibility for better performance – improve what you can measure

While it may enhance security of transferred data, encryption also limits network teams’ ability to inspect, tune and optimize the performance of applications. Ixia eliminates this blind spot by providing enterprises with full visibility into mission critical applications.

The ATI Processor works with Ixia’s Net Tool Optimizer® (NTO™) solution and brings a new level of intelligence to network packet brokers. It is supported by the Ixia Application & Threat Intelligence research team, which provides fast and accurate updates to application and threat signatures and application identification code. Additionally, the new capabilities will be available to all customers with an ATI Processor and an active subscription.

To learn more about Ixia’s latest innovations read:

ATI processor

Encryption – The Next Big Security Threat

Thanks to Ixia for the article. 

The Top 3 Reasons Why Network Discovery is Critical to IT Success

Network discovery is the process of identifying devices attached to a network. It establishes the current state and health of your IT infrastructure.

It’s essential for every business because, without visibility into your entire environment, you can’t successfully accomplish even basic network management tasks.

When looking into why network discovery is critical to IT success, there are three key factors to take into consideration.

1. Discovering the Current State & Health of the Infrastructure.

Understanding the current state and health of the network infrastructure is a fundamental requirement in any infrastructure management environment. What you cannot see you cannot manage, or even understand, so it is vital for infrastructure stability to have a tool that can constantly discover the state and health of the components in operation.
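
As a minimal sketch of what “constantly discovering state and health” can mean at its simplest, the Python example below sweeps a subnet and records basic reachability. It is only an illustration, using the Linux ping command and a placeholder subnet; a real discovery tool would add SNMP polling, inventory collection, and scheduling.

```python
import ipaddress
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder subnet; a real discovery run would take its scope from configuration.
SUBNET = "192.168.1.0/24"

def is_reachable(ip: str) -> bool:
    """Very basic health signal: one ICMP echo with a 1-second timeout (Linux 'ping' flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def sweep(subnet: str) -> dict[str, bool]:
    """Return a reachability map for every host address in the subnet."""
    hosts = [str(ip) for ip in ipaddress.ip_network(subnet).hosts()]
    with ThreadPoolExecutor(max_workers=64) as pool:
        return dict(zip(hosts, pool.map(is_reachable, hosts)))

if __name__ == "__main__":
    for ip, up in sweep(SUBNET).items():
        print(ip, "up" if up else "down")
```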

2. Manage & Control the Infrastructure Environment

Once you know what you have, it’s very easy to compile an accurate inventory of the environment’s components, which provides the ability to:

  • Track hardware across the environment
  • Manage end-of-life and end-of-support
  • Manage hardware thresholds (i.e. swap out a device before failure)
  • Effectively manage the estate’s operating systems and patch management

3. Automate Deployment

Corporations today place a lot of emphasis on automation; therefore, it is very important that the Network Discovery tool you choose to operate your infrastructure environment can integrate seamlessly with your CRM system. Having a consistent view of the infrastructure inventory and services will allow repeatable and consistent deployment of hardware and configuration in order to automate service fulfillment and deployment.

If you’re not using a network discovery tool, don’t worry: we’re offering the service absolutely free. Just click below and you will be one step closer to improving your network management system.

The Top 3 Reasons Why Network Discovery is Critical to IT Success

Thanks to NMSaaS for the article. 

CVE-2015-5119 and the Value of Security Research and Ethical Disclosure

The Hacking Team’s Adobe Flash zero day exploit CVE-2015-5119, as well as other exploits, were recently disclosed.

Hacking Team sells various exploit and surveillance software to government and law enforcement agencies around the world. In order to keep their exploits working as long as possible, Hacking Team does not disclose their exploits. As such, the vulnerabilities remain open until they are discovered by some other researcher or hacker and disclosed.

This particular exploit is a fairly standard, easily weaponizable use-after-free—a type of exploit which accesses a pointer that points to already freed and likely changed memory, allowing for the diversion of program flow, and potentially the execution of arbitrary code. At the time of this writing, the weaponized exploits are known to be public.

What makes this particular set of exploits interesting is less how they work and what they are capable of (not that the damage they are able to do should be downplayed: CVE-2015-5119 is capable of gaining administrative shell on the target machine), but rather the nature of their disclosure.

This highlights the importance of both security research and ethical disclosure. In a typical ethical disclosure, the researcher contacts the developer of the vulnerable product, discloses the vulnerability, and may even work with the developer to fix it. Once the product is fixed and the patch enters distribution, the details may be disclosed publically, which can be useful learning tools for other researchers and developers, as well as for signature development and other security monitoring processes. Ethical disclosure serves to make products and security devices better.

Likewise, security research itself is important. Without security research, ethical disclosure isn’t an option. While there is no guarantee that researchers will find the exact vulnerabilities held secret by the likes of Hacking Team, the probability goes up as the number and quality of researchers increases. Various incentives exist, from credit given by companies and on vulnerability databases, to bug bounties, some of which are quite substantial (for instance, Facebook has awarded bounties as high as $33,500 at the time of this writing).

However some researchers, especially independent researchers, may be somewhat hesitant to disclose vulnerabilities, as there have been past cases where rather than being encouraged for their efforts, they instead faced legal repercussions. This unfortunately discourages security research, allowing for malicious use of exploits to go unchecked in these areas.

Even in events such as the sudden disclosure of Hacking Team’s exploits, security research was again essential. Almost immediately, the vendors affected began patching their software, and various security researchers developed penetration test tools, IDS signatures, and various other pieces of security related software as a response to the newly disclosed vulnerabilities.

Security research and ethical disclosure practices are tremendously beneficial for a more secure Internet. Continued use and encouragement of the practice can help keep our networks safe. Ixia’s ATI subscription program, which is releasing updates that mitigate the damage the Hacking Team’s now-public exploits can do, helps keep network security resilience at its highest level.

Additional Resources:

ATI subscription

Malwarebytes UnPacked: Hacking Team Leak Exposes New Flash Player Zero Day

Thanks to Ixia for the article

3 Steps to Configure Your Network For Optimal Discovery

All good network monitoring / management begins the same way—with an accurate inventory of the devices you wish to monitor. These systems must be onboarded into the monitoring platform so that it can do its job of collecting KPIs, backing up configurations, and so on. This onboarding process is almost always initiated through a discovery process.

This discovery is carried out by the monitoring system and is targeted at the devices on the network. The method of targeting may vary, from a simple list of IP addresses or host names, to a full subnet discovery sweep, or even an exported CSV file from another system. However, the primary means of discovery is usually the same for all network devices: SNMP.
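
As a small illustration of the targeting options just mentioned, the sketch below normalizes an explicit address list, a subnet sweep, and a CSV export into one set of discovery targets. The CSV column name and the example addresses are assumptions, not tied to any particular product.

```python
import csv
import ipaddress

def targets_from_subnet(cidr: str) -> list[str]:
    """Expand a CIDR block into individual host addresses for a discovery sweep."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]

def targets_from_csv(path: str, column: str = "ip") -> list[str]:
    """Read device addresses exported by another system; the column name is an assumption."""
    with open(path, newline="") as f:
        return [row[column] for row in csv.DictReader(f) if row.get(column)]

if __name__ == "__main__":
    explicit = ["10.0.0.1", "core-sw1.example.net"]            # simple list of IPs / host names
    targets = set(explicit) | set(targets_from_subnet("10.0.10.0/28"))
    # targets |= set(targets_from_csv("exported_devices.csv"))  # optional CSV import
    print(sorted(targets))
```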

Additional means of onboarding can (and certainly do) exist, but I have yet to see any full-featured management system that does not use SNMP as one of its primary foundations.

SNMP has been around for a long time, and is well understood and (mostly) well implemented in all major networking vendors’ products. Unfortunately, I can tell you from years of experience that many networks are not optimally configured to make use of SNMP and other important configuration options which, when set up correctly, optimize the network for a more efficient and ultimately more successful discovery and onboarding process.

Having said that, below are 3 simple steps that should be taken to help optimize your network for discovery.

1) Enable SNMP

Yes, it seems obvious to say that if SNMP isn’t enabled then it will not work. But, as mentioned before, it still astonishes me how many organizations I work with still do not have SNMP enabled on all of the devices they should. These days almost any device that can connect to a network has some SNMP support built in. Most networks have SNMP enabled on the “core” devices like routers, switches, and servers, but many IT pros may not realize that SNMP is available on non-core systems as well.

Devices like VoIP phones and video conferencing systems, IP-connected security cameras, point-of-sale terminals, and even mobile devices (via apps) can support SNMP. By enabling SNMP on as many systems in the network as possible, you extend the reach of discovery and monitoring dramatically and gain visibility into network endpoints like never before.
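
To show how little is involved once SNMP is enabled, here is a minimal Python sketch that reads a device’s sysDescr and sysName using the pysnmp library’s high-level API (written against pysnmp 4.x). The host address and community string are placeholders.

```python
# Minimal SNMP GET using pysnmp's high-level API (pysnmp 4.x assumed).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

SYS_DESCR = "1.3.6.1.2.1.1.1.0"   # standard MIB-2 sysDescr
SYS_NAME  = "1.3.6.1.2.1.1.5.0"   # standard MIB-2 sysName

def snmp_get(host: str, community: str, *oids: str) -> dict:
    """Query one or more scalar OIDs over SNMPv2c and return {oid: value}."""
    error_indication, error_status, _index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),              # mpModel=1 -> SNMPv2c
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        *[ObjectType(ObjectIdentity(oid)) for oid in oids]))
    if error_indication or error_status:
        raise RuntimeError(error_indication or error_status.prettyPrint())
    return {str(name): str(value) for name, value in var_binds}

if __name__ == "__main__":
    # Placeholder device and community string.
    print(snmp_get("192.0.2.1", "public", SYS_DESCR, SYS_NAME))
```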

2) Setup SNMP correctly

Just enabling SNMP isn’t enough—the next step is to make sure it is configured correctly. That means removing or changing the default Read-Only (RO) community string (which is commonly set by default to “public”) to a more secure string. It is also best practice to use as few community strings as you can. In many large organizations, there can be some “turf wars” over who gets to set these strings on systems: the server team may have one standard string while the network team has another.

Even though most systems will allow for multiple strings, it is generally best to try to keep these as consistent as possible. This helps prevent confusion when setting up new systems and also helps eliminate unnecessary discovery overhead on the management systems (which may have to try multiple community strings for each device on an initial discovery run). As always, security is important, so you should configure the IP address of the known management server as an allowed SNMP system and block any other systems from being allowed to run an SNMP query against your systems.
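
The discovery overhead mentioned above is easy to picture: when several community strings are in circulation, an initial discovery may have to try each one against every device. A short sketch, again assuming pysnmp 4.x, with placeholder host and candidate strings:

```python
# Illustration of the extra discovery overhead of multiple community strings.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

CANDIDATES = ["netops-ro", "server-ro"]    # hypothetical per-team strings
SYS_NAME = "1.3.6.1.2.1.1.5.0"             # standard MIB-2 sysName

def find_working_community(host: str):
    """Return the first community string the device answers to, or None."""
    for community in CANDIDATES:
        err, status, _idx, _var_binds = next(getCmd(
            SnmpEngine(), CommunityData(community, mpModel=1),
            UdpTransportTarget((host, 161), timeout=1, retries=0),
            ContextData(), ObjectType(ObjectIdentity(SYS_NAME))))
        if not err and not status:
            return community
    return None                            # unreachable, or no known string matches

if __name__ == "__main__":
    print(find_working_community("192.0.2.1"))
```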

3) Enable Layer 2 discovery protocols

In your network, you want much deeper insight into not only what you have, but how it is all connected. One of the best ways to get this information is to enable Layer 2 (link layer) discovery capabilities. Depending on the vendor(s) in your network, this may be accomplished with a proprietary protocol like the Cisco Discovery Protocol (CDP) or with a generic standard like the Link Layer Discovery Protocol (LLDP). In either case, by enabling these protocols, you gain valuable Layer 2 connectivity information such as connected MAC addresses, VLANs, and more.
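
Once LLDP is enabled, that neighbor data can be collected over SNMP by walking the LLDP-MIB remote-systems table. The sketch below again assumes pysnmp 4.x and uses the OID commonly documented for lldpRemSysName (1.0.8802.1.1.2.1.4.1.1.9); verify the OID and LLDP-MIB support against your own devices. Host and community string are placeholders.

```python
# Walk the LLDP-MIB remote-systems table to list Layer-2 neighbors (pysnmp 4.x assumed).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

LLDP_REM_SYS_NAME = "1.0.8802.1.1.2.1.4.1.1.9"   # lldpRemSysName (verify on your devices)

def lldp_neighbors(host: str, community: str) -> list:
    """Return the system names of LLDP neighbors reported by the device."""
    neighbors = []
    for err, status, _idx, var_binds in nextCmd(
            SnmpEngine(), CommunityData(community, mpModel=1),
            UdpTransportTarget((host, 161), timeout=2, retries=1),
            ContextData(), ObjectType(ObjectIdentity(LLDP_REM_SYS_NAME)),
            lexicographicMode=False):            # stop at the end of this table
        if err or status:
            break
        for _oid, value in var_binds:
            neighbors.append(str(value))         # system name of each connected neighbor
    return neighbors

if __name__ == "__main__":
    print(lldp_neighbors("192.0.2.1", "public"))
```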

By following a few simple steps, you can dramatically improve the results of your management system’s onboarding / discovery process and therefore gain deeper and more actionable information about your network.


Thanks to NMSaaS for the article.