An Insight into Fault Monitoring

NMSaaS Network Monitoring

Fault monitoring is the process of watching all hardware, software, and network configurations for deviations from normal operating conditions. This monitoring typically covers major and minor changes to the expected bandwidth, performance, and utilization of the established computing environment.

Fault monitoring features may include:

  • Automated correlation of root-cause events without having to code or update rules
  • Enrichment of alarm information and dashboards with business-impact data
  • Automatic alarm monitors for the crucial KPIs of all network assets
  • Integration via SMS, pager, email, trouble ticket, and script execution on alarm events
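The last feature, acting on alarm events, can be sketched as a severity-based dispatcher. The channel handlers below are stand-ins for real SMTP, SMS-gateway, or ticketing integrations.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Alarm:
    device: str
    severity: str   # e.g. "minor", "major", "critical"
    message: str

@dataclass
class Dispatcher:
    # Handlers registered per severity level.
    channels: dict = field(default_factory=dict)

    def register(self, severity: str, handler: Callable[[Alarm], None]) -> None:
        self.channels.setdefault(severity, []).append(handler)

    def dispatch(self, alarm: Alarm) -> int:
        """Invoke every handler registered for the alarm's severity."""
        handlers = self.channels.get(alarm.severity, [])
        for handler in handlers:
            handler(alarm)
        return len(handlers)

# Usage: route critical alarms to two channels.
sent = []
d = Dispatcher()
d.register("critical", lambda a: sent.append(("email", a.device)))
d.register("critical", lambda a: sent.append(("sms", a.device)))
n = d.dispatch(Alarm("core-sw1", "critical", "link down"))
```

A real system would register one handler per configured integration and severity.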

Network fault management is a big challenge when you have a small team. The duty becomes more complicated if you have to manage a remote site: you may dispatch a technician only to find that the problem could have been fixed remotely, or that the right equipment was left behind and a return trip is needed, which hurts your service restoration time.

In most cases, the time taken to identify the root cause of a problem is actually longer than the time taken to fix it. Having a proactive network fault monitoring tool helps you quickly identify the root cause of the problem and fix it before end-users notice it.

Finding a tool that can perform root cause analysis in real time has many benefits: your engineers get to focus on service-affecting events and can prioritize them properly. Accurate real-time problem analysis, and the problem solving that follows, requires precise automation of several interacting components.

If you would like to learn more about this topic, click below to get our educational whitepaper. It will give you greater insight into cloud services such as fault monitoring and many more.

NMSaaS- 10 Reasons to Consider a SaaS Network Management Solution

Thanks to NMSaaS for the article.

Top Three Policies in Network Configuration Management

When a network needs repair, modification, expansion or upgrading, the administrator refers to the network configuration management database to determine the best course of action.

This database contains the locations and network addresses of all hardware devices, as well as information about the programs, versions and updates installed on network computers.

A main focus when discussing network configuration management is policy checking. There are three key policy checking capabilities which should not be ignored:

  1. Regulatory Compliance Policy
  2. Vendor Default Policy
  3. Security Access Policy

Regulatory compliance policy

The obvious one is regulatory compliance policy. If you have a network configuration system, you should always implement regular checks to ensure consistency with design standards, processes and directives from internal and external regulators.

In the past, people used manual processes, which were time intensive, costly and inaccurate; more importantly, the business was at risk and open to potential attacks through not having the desired real-time visibility.

Now, thanks to the cloud, this is all a thing of the past.

Vendor default policy

Vendor default policy is a best-practice recommendation to scan the configurations of your infrastructure devices and eradicate potential holes, so that risk is mitigated and infrastructure security access is maintained at the highest possible level.

Such holes may arise when configuration settings are overlooked. Sometimes default usernames and passwords, or SNMP ‘public’ and ‘private’ community strings, are not removed, leaving a hole in your security open to potential attacks.
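A minimal sketch of such a scan, assuming plain-text device configurations and an illustrative (not exhaustive) list of default-setting patterns:

```python
import re

# Illustrative patterns for well-known insecure defaults; a real scanner
# would carry a per-vendor catalogue of default credentials and strings.
DEFAULT_PATTERNS = [
    re.compile(r"snmp-server community (public|private)\b"),
    re.compile(r"username (admin|cisco) password"),
]

def find_default_settings(config_text: str) -> list[str]:
    """Return the configuration lines that match a known-default pattern."""
    hits = []
    for line in config_text.splitlines():
        if any(p.search(line) for p in DEFAULT_PATTERNS):
            hits.append(line.strip())
    return hits

sample = """\
hostname edge-rtr1
snmp-server community public RO
snmp-server community s3cr3t-string RW
"""
violations = find_default_settings(sample)
```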

Security Access Policy

Access to infrastructure devices is policed and controlled with AAA (Authentication, Authorization, Accounting), TACACS+, RADIUS servers, and ACLs (Access Control Lists), so as to tighten security access into device operating systems.

It is therefore very important that the configuration elements of infrastructure devices are consistent across the managed estate. It is highly recommended to create security policies so that security access configurations can be policed for consistency and reported on if they change or if vital elements of the configuration are missing.
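A consistency check like the one recommended above can be sketched as a set comparison against required statements; the required lines and the TACACS+ server address below are hypothetical examples, not a complete AAA policy.

```python
# Required security-access statements (illustrative only).
REQUIRED_LINES = {
    "aaa new-model",
    "tacacs-server host 10.0.0.5",   # hypothetical TACACS+ server
}

def check_policy(config_text: str) -> set[str]:
    """Return the required statements missing from a configuration."""
    present = {line.strip() for line in config_text.splitlines()}
    return REQUIRED_LINES - present

compliant = "aaa new-model\ntacacs-server host 10.0.0.5\n"
drifted = "aaa new-model\n"
missing = check_policy(drifted)
```

Running this across every backed-up configuration flags devices that have drifted from the security access policy.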

Thanks to NMSaaS for the article. 

3 Steps to Server Virtualization Visibility

Network Instruments- 3 Steps to Server Virtualization

Each enterprise has its own reasons for moving to virtual infrastructure, but it all boils down to the demand for better and more efficient server utilization. Ensure comprehensive visibility with three practical steps.

Virtualization is a money-saving technology that allows the enterprise to stretch its IT budget much further by better utilizing server assets. Consider the immediate reduction in data center footprint, maintenance, and capital and operating expense overhead. Then add the promise to dynamically adjust server workloads and service delivery to achieve optimal user experience—the vision of true orchestration. It’s easy to see why server virtualization is key to many organizations’ operational strategy.

But what if something goes wrong? With network infrastructure, you can usually track the root cause of north/south traffic back to one location via careful instrumentation of the resources. Troubleshooting is then facilitated with any number of free and commercial monitoring tools. How can you get the same visibility you need to validate service health within the virtual server hypervisor and vSwitch east/west traffic?

3 Steps to Virtual Visibility Cheat Sheet

Step One:

Get status of host and virtualization components

  • Use polling technologies such as SNMP, WSD, and WMI to provide performance metrics like CPU utilization, memory usage, and virtualized variables like individual VM instance status to find the real cause of service issues.
  • Do your homework. Poor application response time and other service issues can be tied to unexpected sources.
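Step One's metric checks can be sketched as a simple threshold evaluation over a polled sample. The metric names and limits below are illustrative; real values would come from SNMP, WSD, or WMI polling.

```python
# Illustrative alert thresholds (percent).
THRESHOLDS = {"cpu_pct": 85.0, "mem_pct": 90.0}

def evaluate(metrics: dict[str, float]) -> list[str]:
    """Return an alert string for every metric above its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{name}={value} exceeds {limit}")
    return alerts

# A plain dict stands in for one poll result from a virtualized host.
poll_result = {"cpu_pct": 93.5, "mem_pct": 71.0}
alerts = evaluate(poll_result)
```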

Step Two:

Monitor vSwitch east/west traffic

  • To the network engineer, everything disappears once it hits the virtual server. To combat this “black box effect,” there are two methods to maintain visibility:

1. Inside Virtual Monitoring Model

a. Create a dedicated VM “monitoring instance”.
b. Transmit relevant data to this instance for analysis.
c. Analyze traffic locally with a monitoring solution.
d. Transmit summary or packet data to an external central analysis solution.

2. Outside Virtual Monitoring Model

a. Push copies of raw, unprocessed vSwitch east/west traffic out of the virtualized server.

Step Three:

Inspect perimeter and client north/south conversations

  • Instrument highly saturated Application Access Layer links with a packet capture device like Observer GigaStor™ to record conversations and rewind for back-in-time analysis.

To learn more, download the white paper here:

Network Instruments- 3 Steps to Server Virtualization

Have You Considered Using a Network Discovery Software Solution?

A network discovery software solution allows your computer to see other computers and devices on the network, and allows people on other network computers to see your computer. This makes it easier to share files and printers, but that’s not all.

You may be asking why this is even important to you. The primary reasons why network discovery is vital for your business are as follows:

  • If you don’t know what you have, you cannot hope to monitor and manage it.
  • You can’t track down interconnected problems.
  • You don’t know when something new comes on the network.
  • You don’t know when you need upgrades.
  • You may be paying too much for maintenance.

Most of the time in network management you’re trying to track down potential issues within your network and work out how to resolve them. This is a very hard task, especially if you’re dealing with a large-scale network: if one thing goes down, it starts a trickle effect, and more aspects of the network will in turn start to go down.

All of these problems are easily fixed. A lot of network discovery capabilities have powerful and flexible tools allowing you to determine what exactly is subject to monitoring.

These elements can be automatically labeled and grouped. This makes automatic data collection possible, as well as threshold monitoring and reporting on already discovered elements.

Another aspect of network discovery software is that it can perform a network topology discovery in the managed network. The discovery process probes each device to determine its configuration and relation to other managed elements.

This information can then be used to build a dependency model. This simplifies event correlation, since no rules programming is needed, and the subsystem guarantees identification of critical problems. The discovery detects network devices and topology automatically.

As a result, critical details such as IP address, MAC address, OS, firmware, services, memory, serial numbers, interface information, routing information, and neighbor data are all available at the click of a button or as a scheduled report.
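The dependency model built from discovered neighbor data can be sketched as a simple downstream walk: given the topology, list every element affected when a device fails. The topology below is invented for illustration.

```python
from collections import deque

# Discovered topology: device -> directly dependent downstream devices.
TOPOLOGY = {
    "core-sw1": ["dist-sw1", "dist-sw2"],
    "dist-sw1": ["access-sw1"],
    "dist-sw2": [],
    "access-sw1": [],
}

def impacted_by(failed: str) -> set[str]:
    """Breadth-first walk of everything downstream of the failed node."""
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for child in TOPOLOGY.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

downstream = impacted_by("core-sw1")
```

This is the structure that lets a monitoring system separate the actual failure from its trickle-down effects.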

If you would like to find out more about how we can benefit your enterprise, schedule a technical discussion with one of our experienced engineers.

Contact Us

Thanks to NMSaaS for the article.

Webinar: Cloud Based Network Device Backup and Compliance Policy Checking

NMSaaS- Webinar: Cloud Based Network Device Backup and Compliance Policy Checking

NMSaaS is a cloud-based network management system with features that allow you to capture your network device configurations and perform detailed policy and compliance checks. The Network Configuration and Change Management (NCCM) module allows you not only to proactively search for compliance issues, but also to protect devices from having compliance violations inadvertently introduced.

This 30-minute webinar will discuss how to:

  • Back up the running configuration of devices
  • Back up additional “show” commands
  • Compare older configurations to the current one
  • Restore configurations from previous backups
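Comparing an older configuration to the current one can be sketched with Python's standard-library difflib; the two snapshots below are made up.

```python
import difflib

# Hypothetical stored backup vs. the current running configuration.
old = "hostname rtr1\nsnmp-server community public RO\n"
new = "hostname rtr1\nsnmp-server community s3cr3t RO\n"

# Produce a unified diff labelled with the snapshot names.
diff = list(difflib.unified_diff(
    old.splitlines(), new.splitlines(),
    fromfile="backup-2015-01-21", tofile="running-config", lineterm=""))
```

A backup system would read the snapshots from its store and alert on any non-empty diff.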

Join us on January 28th from 2:00 to 2:30 EST to discuss how a cloud-based network management system can help you with policy and configuration management.

Listen to the recording here


Thanks to NMSaaS for the article.

Virtual Server Rx

JDSU Network Instruments- Virtual Server Rx

The ongoing push to increase server virtualization rates is driven by its many benefits for the data center and business. A reduction in data center footprint and maintenance, along with capital and operating cost reductions are key to many organizations’ operational strategy. The ability to dynamically adjust server workloads and service delivery to achieve optimal user experience is a huge plus for IT teams working in the virtualized data center – unless something goes wrong.

With network infrastructure, you can usually track the root cause of north/south traffic back to one location via careful instrumentation of the resources. Troubleshooting is then facilitated with any number of free and commercial monitoring tools.

How can you get the same visibility you need to validate service health within the virtualized data center?

First Aid for the Virtual Environment

Network teams often act as “first responders” when application performance degrades. For this reason, it’s critical to maintain visibility into and around virtual constructs for effective troubleshooting and optimal service delivery. Otherwise, much of the value of server virtualization and consolidation efforts can be quickly offset by sub-par app performance. Fundamentally, achieving comprehensive visibility of a virtualized server environment requires an understanding of the health of the underlying resources.

Health Checks

Polling technologies such as SNMP, WSD, and WMI can provide performance insight by interrogating the host and various virtualized elements. A fully-integrated performance management platform can not only provide these views, but also display relevant operating metrics in a single, user-friendly dashboard.

Virtual servers are often highly provisioned and operate at elevated utilization levels. Assessing their underlying health and adding resources when necessary is essential for peak performance.

Use performance monitoring tools to check:

  • CPU Utilization
  • Memory Usage
  • Individual VM Instance Status

Often, these metrics can point to the root cause of service issues that may otherwise manifest themselves indirectly.

For example, poor response time of an application hosted on a virtualized server may have nothing to do with the service or the network, but may instead be tied to excessively high CPU utilization. Without this monitoring perspective, troubleshooting will be more difficult and time consuming.

Further Diagnostics

Virtualization and consolidation offer significant upside for today’s dynamic data center model and for achieving optimal IT business service delivery. However, monitoring visibility must be maintained so potential application degradation issues can be detected and resolved before impacting the end user. To do so, care must be taken to properly instrument virtualized server deployments and the supporting network infrastructure.

Ready for more? Download the free white paper 3 Steps to Server Virtualization Visibility, featuring troubleshooting diagrams, metrics, and detailed strategies to help diagnose what’s really going on in your virtual data centers. You’ll learn two methods to monitor vSwitch traffic, as well as how to further inspect perimeter and client conversations.

Download 3 Steps to Server Virtualization Visibility

Thanks to Network Instruments for the article. 

The Advantages and Disadvantages of Network Monitoring

NMSaaS The Advantages and Disadvantages of Network Monitoring

Implementing network monitoring is nothing new for large enterprise networks, but for small to medium-sized businesses a comprehensive monitoring solution is often beyond their limited budgets.

Network monitoring involves a system that keeps track of the status of the various elements within a network; this can be something as simple as using ICMP (ping) traffic to verify that a device is responsive. However, the more comprehensive options offer a much deeper perspective on the network.
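The simple status tracking described above can be sketched with a pluggable probe; a real deployment might shell out to ping or open a TCP socket, while the stub probe below just marks one device as down for illustration.

```python
from typing import Callable

def poll_devices(devices: list[str],
                 probe: Callable[[str], bool]) -> dict[str, str]:
    """Map each device to "up" or "down" according to the probe result."""
    return {d: ("up" if probe(d) else "down") for d in devices}

# Stub probe standing in for an ICMP echo; 10.0.0.2 is "unreachable".
status = poll_devices(
    ["10.0.0.1", "10.0.0.2"],
    probe=lambda ip: ip != "10.0.0.2",
)
```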

Such elements include:

  • Predictive Analysis
  • Root Cause Analysis
  • Alarm Management
  • SLA Monitoring and Measurement
  • Configuration Management
  • NetFlow Analysis
  • Network Device Backup

All of these elements are key driving factors for any business. If you have read any of my previous blogs, you will be aware of the three clear benefits of using a network monitoring system:

  1. Cost savings
  2. Speed
  3. Flexibility

However, there are a few small cons to consider.

Security

The security of any solution that requires public connectivity is of the utmost importance; using a cloud network monitoring solution requires a great amount of trust being placed in the cloud provider. If you trust your supplier there should be no reason to worry.

Connectivity

With network monitoring applications that are deployed in-house, the systems themselves typically sit at the most central part of an organization’s network.

With a cloud-based solution, the connection to an external entity will not have such straightforward connectivity, so the risk of losing access to managed elements is a real possibility. However, if your provider has an automatic backup device installed, this should not deter you.

Performance

This interlinks with connectivity: the difference between the bandwidth available between an in-house system and its managed elements, and the bandwidth available between an external cloud option and those same elements, can be significant.

If the number of elements that require management is large, it is best to find a cloud option that offers the deployment of an internal collector that sits inside the premises of the organization.

There are certainly a number of advantages that come with cloud offerings which make them very attractive, especially to smaller organizations; however it is important to analyze the whole picture before making a decision.

If you would like to find out more about this topic, why not schedule a one-on-one technical discussion with our experienced technical engineers.

Telnet Networks- Contact us to schedule a live demo

Thanks to NMSaaS for the article.

Top 3 Network Management Solutions to Consider in 2015

As networking becomes more complex, network management and monitoring have continued to evolve. In a world where technology is continually on the rise, people must become more aware of the capabilities that are out there for their networks.

There is a vast array of network management solutions out there to help your business. The elusive goal of network monitoring tools is not only to alert administrators when there is trouble on the network, but also to reveal trends in the health of the network.

The top three solutions for all businesses to consider in 2015 are:

  1. Detailed NetFlow
  2. Network discovery
  3. Fault & event management

Detailed NetFlow

NetFlow is a feature that was introduced on Cisco routers that provides the ability to collect IP network traffic as it enters or exits an interface. By analyzing the data provided by NetFlow a network administrator can determine things such as the source and destination of traffic, class of service, and the causes of congestion. This is a crucial benefit for any organization as it allows them to stay on top of potential difficulties that may arise.
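The kind of source-and-destination analysis described above can be sketched as a simple aggregation over exported flow records; the records below are invented, while a real collector would parse them from NetFlow export packets.

```python
from collections import Counter

# Made-up flow records as a collector might decode them.
flows = [
    {"src": "10.0.0.5", "dst": "8.8.8.8", "bytes": 1200},
    {"src": "10.0.0.5", "dst": "8.8.8.8", "bytes": 800},
    {"src": "10.0.0.9", "dst": "10.0.0.1", "bytes": 300},
]

# Sum bytes per (source, destination) conversation.
traffic = Counter()
for f in flows:
    traffic[(f["src"], f["dst"])] += f["bytes"]

top_talker = traffic.most_common(1)[0]
```

Ranking conversations this way is how an administrator spots the sources of congestion.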

Network Discovery

This is an obvious one to consider for the new year, because if you don’t know what you have, how are you going to fix it? Network discovery has been around for a while now, but not every organization implements it, and that should not be ignored.

Why choose network discovery:

  1. You can’t track down interconnected problems.
  2. You don’t know when something new comes on the network.
  3. You may be paying too much for maintenance.

Fault & Event Management

The fault and event management process is essential in a complex technological environment, and most companies have deployed a myriad of tools to collect millions of events, logs, and messages from their network devices, servers and applications.

The advantages of using it are as follows:

  1. Up-to-date knowledge of availability of all monitored network devices and interfaces.
  2. Fully automated Root Cause Analysis allows administrators to focus on actual point of failure and ignore collateral damage.
  3. Instant visibility of network alerts and problems as reported by devices and servers.
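The root cause analysis in point 2 can be sketched with a dependency model: alarms on devices downstream of another alarming device are treated as collateral and suppressed, leaving only the actual point of failure. The topology mapping below is a made-up example.

```python
# Discovered dependency data: device -> its upstream parent.
UPSTREAM = {
    "access-sw1": "dist-sw1",
    "dist-sw1": "core-sw1",
    "core-sw1": None,
}

def root_causes(alarming: set[str]) -> set[str]:
    """Keep only alarms whose upstream device is not itself alarming."""
    return {d for d in alarming if UPSTREAM.get(d) not in alarming}

# All three devices raise alarms, but only the core switch is the root.
roots = root_causes({"core-sw1", "dist-sw1", "access-sw1"})
```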

These are the top three network management solutions I think every business should take into consideration in 2015. NMSaaS can provide numerous network solutions, all integrated into one package. To find out how we can help your business, get in contact with one of our experienced technical engineers.

Contact Us

Thanks to NMSaaS for the article.

Network Performance Monitoring

Ixia's Net Tool Optimizer

Visibility Into the Business

With virtualization, “Big Data,” and the sheer complexity of enterprise networks on the rise, dynamic network monitoring of performance and security provides a critical business advantage. Ixia’s network visibility solutions deliver ongoing insight into production networks to help maximize your company’s productivity and profitability, as well as its return on new and existing IT investments.

Leveraging state-of-the-art technology and techniques, Ixia’s powerful, high-performance network monitoring switches equip network engineers to meet the growing challenge of testing, assessing and monitoring complex, high-performance networks with limited access points. These solutions add intelligence between network access points and sophisticated monitoring tools to streamline the flow of data, ensuring that each tool receives the exact information it needs. Data from multiple TAP and SPAN ports is aggregated and multicast to performance and security monitoring tools, providing network operators with maximum visibility into both physical and virtual networks.

Ixia network visibility solutions:

  • Optimize traffic for monitoring with advanced filtering, aggregation, and replication
  • Extend investments in 1G monitoring tools to 10G and 40G deployments
  • Automate troubleshooting to reduce MTTR
  • Introduce “drag and drop” simplicity to streamline configuration and management
  • Expand network monitoring capacity enabling simultaneous monitoring of multiple connection points from a single port

Poor application performance leads to poor business performance: lost sales, missed opportunities, inefficient operations, and disgruntled customers, weakening the corporate brand. Mitigating this risk, Ixia’s network visibility solutions equip network engineers to leverage actionable insight—maximizing network and application performance while helping to optimize security, compliance, management, scalability, and ROI.

 

Net Tool Optimizers
Out-of-band traffic aggregation, filtering, dedup, and load balancing

Net Optics Network Taps
Passive network access for security and monitoring tools

 

Thanks to Ixia for the article. 

Network Strategies for 2015

As we say goodbye to 2014 and review our network equipment plans for the new year, looking at replacement options is not enough.

We have to consider the currents that network technology flows in and where they are taking us.

Looking beyond immediate buying decisions to the bigger picture provides an opportunity to assess what emerging companies are doing to redefine and redirect our network thinking, from the higher levels of standardisation, convergence and virtualisation down to how startups are meeting these challenges.

Here is what you should be aware of in 2015.

Standardisation

2015 will see IT investments shift towards standardised hardware and software products. The software and hardware standardisation efforts inherent in software-defined networking (SDN) and network functions virtualisation (NFV) initiatives in the wide area network (WAN) will affect corporate networks.

Virtualisation

Existing datacentre hardware is being optimised in virtualised environments, and applications are being farmed out to public cloud providers, significantly changing the hardware equation.

Convergence

Hyperconverged infrastructure products combine compute, networking and storage resources to create all-in-one solutions. Hyperconverged appliances offer the scale-out architecture that fits the needs of most shared virtualised environments. To facilitate this, unified software packages have been adopted to converge networking functions previously allocated to dedicated hardware boxes such as WAN optimisers, packet shapers, application development controllers, application and network performance managers, load balancers and next-generation firewalls. This means storage and security are becoming intrinsic to networking topologies and, as such, will become embedded in networking hardware and software.

New challenges in 2015

The specific board-level demands on most enterprise network managers in 2015 will include:

  • Handling 100% traffic growth with the same budget as in 2014.
  • Recognising that much of that traffic growth, namely video, will be latency sensitive.
  • Ensuring the growing bring your own device (BYOD) demand for connectivity is secure and delivers quality of service (QoS) to the customers.
  • Minimising capital expenditure by going with industry-standard, bare-metal hardware to support SDN/NFV.
  • Maximising operating expenses in software and hardware deals.

This translates into key concepts around aligning networks to support business processes, shifting more traffic to Ethernet, flexible cloud deployments and better integration of security and storage capabilities. Startups present interesting next-step products to dominant suppliers in all these categories.

Aligning Network Hardware To Business Processes

When the buyer focus shifts to commoditisation, this presents a serious challenge to profit margins for premium network hardware brands such as Cisco, HP and IBM. Conversely, it presents an opportunity for nimble startups in the network hardware business, as brand loyalty is eroded and the focus shifts to supporting horizontal business processes.

Startup hardware suppliers are adopting the same hyperconvergence logic as software suppliers by integrating complementary software functionality into their boxes to facilitate core business processes. The result is hardware with better integration levels, cheaper and simpler deployments and easier scale-out capacity than their software and brand-name competitors. Instead of outsourcing functions, these network hardware startups advocate on-premise enterprise networking strategies. The message certainly whets the appetite of investors.

They are not looking for startups selling Lego blocks for DIY constructions, but rather emerging suppliers with the integrated hardware and software to handle specific business needs with faster time to value than existing value propositions on the market. Market leader VMware, with its Evo: Rail concept, has aligned all parts of its vSphere and Virtual SAN (storage area network) ecosystem with seven hardware partners (Dell, EMC, Fujitsu, Inspur – China’s dominant cloud computing and service provider, NetOne – Japanese infrastructure optimiser, HP, and SuperMicro – the US application-optimised server, workstation, blade, storage and GPU systems provider).

Startup company Scale Computing, with its HC3 platforms, presents an interesting challenge to the Evo: Rail design, aimed at small and medium-sized enterprises (SMEs), and values simplicity and fast deployment. The three HC3 platforms scale from 40 to 400 virtual machines (VMs). Scale Computing uses a customised version of Red Hat’s KVM hypervisor and leverages a block-level storage architecture as opposed to Virtual SAN’s (VSAN) object-based approach. While KVM may not have as many features as vSphere, Scale Computing is banking on the simplicity of operation along with aggressive pricing compared to the competition, and uses a scale-out architecture that can handle four nodes as the infrastructure grows.

Large enterprises should look at the startup Simplivity and its OmniCube, a hyperconverged infrastructure that delivers the economies of scale of a cloud computing model while ensuring enterprise IT performance and resiliency for virtual workloads. OmniCube has a data architecture that addresses data efficiency and global management requirements in virtualised and cloud computing environments. Its single unified stack runs on standard and hyperconverged x86 building blocks, simplifying and lowering the cost of infrastructure. Deploying a network of two or more OmniCubes creates a global federation that facilitates efficient data movement, resource sharing and scalability.

Ethernet deployments

Ethernet adoption continues to expand and startups such as Arista provide important contributions with the 10-1000Gbps Ethernet switches that target cloud service providers with purpose-built hardware. Its EOS network operating system provides single-binary system images across all platforms, maximum system uptime, stateful fault repair, zero-touch provisioning, latency analysis and a fully accessible Linux shell. With native support for VMware virtualisation and hundreds of Linux applications integrated into hardware platforms, it is designed to meet the stringent power and cooling requirements of today’s most demanding datacentres.

Cloud in a box

In the SME market, SixSq’s Nuvlabox offers a turnkey private cloud in a box. The Mac Mini-sized box includes a complete infrastructure as a service (IaaS) framework, powered by StratusLab, and a platform as a service (PaaS) powered by Slipstream. The built-in Wi-Fi provides network connectivity. With the ability to run up to eight VMs, capacity constraints are solved by adding more boxes and managing them as a single unit. Nuvlabox comes with a library of standard apps and operating system images, including different flavours of Linux and Windows and allows secure remote monitoring and application deployment from a single dashboard. To bypass the capital expenditure objection, SixSq has shifted its business model towards business-to-business licensing, where service provider customers pay rental fees for the equipment and SixSq provides ongoing maintenance and call centre support.

Network Security

Increased use of IT adds value to corporate network transactions and attracts a lot of unwelcome attention. In 2015, we expect more hackers, script kiddies, professional thieves and state-sponsored advanced persistent threat (APT) attacks to target corporate networks. But there is still a lot of low-hanging fruit to gather, such as increased employee awareness of weak passwords and phishing exploits, faster remediation of security holes and better denial of service protection measures. There is also a need for better tools and procedures to protect the enterprise network and ensure these measures meet corporate governance, risk and compliance (GRC) requirements.

One supplier aiming to address these needs is Bromium, which combines a software client on any device with a central security server. Instead of using signatures, behaviours or heuristics to identify potential threats, its vSentry client creates hardware-isolated micro‑VMs for every network-related task, such as visiting a web page, downloading a document or opening an email attachment. All micro-VMs are separated from each other and from the trusted enterprise network. Thus, malware is contained in the hardware-isolated micro-VM. Bromium’s Live Attack Visualization and Analysis (Lava) server converts each micro-VM in the enterprise into a honeypot and automates the often prolonged post-attack malware analysis process. An entire attack is automatically and instantly forwarded to the Lava console, which provides an automatic in-depth analysis of the advanced malware.

Network Storage

Video and social network communications from always-on mobile devices have mushroomed data flows. In the enterprise, big data analytics relies on huge volumes of unstructured data, often comprising large file formats that require secure storage and fast retrieval capacity. Network data volumes are moving from exabyte to zettabyte levels and higher. Most pundits and some analyst firms predict traffic and storage volumes will continue to double every two years. Next-generation storage systems include hyperscale data storage, virtualisation to improve utilisation, cloud storage for disaster recovery and lower power consumption to save costs. To enhance storage security, storage systems may incorporate data dispersal and keyless encryption to keep data secure against breaches.
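The "double every two years" prediction is simple exponential growth, which makes capacity projection a one-line calculation; this is a sketch of the arithmetic, not a forecast.

```python
def projected_capacity(current_tb: float, years: float,
                       doubling_period_years: float = 2.0) -> float:
    """Capacity after `years`, doubling every `doubling_period_years`."""
    return current_tb * 2 ** (years / doubling_period_years)

# 100 TB today implies 400 TB in four years at that rate.
in_four_years = projected_capacity(100.0, 4.0)
```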

The startup company Solidfire has developed a storage system built on the native ability to achieve significant scale, guarantee storage performance, and enable complete system automation. Combined with enterprise applications and deeply integrated with key management frameworks, Solidfire delivers validated products that make a next-generation datacentre deployment more cohesive, automated, and dynamically scalable.

At the high end, Insieme Networks is the driving force behind Cisco’s Application Centric Infrastructure (ACI) at the core of Cisco’s long-awaited SDN strategy. The ACI architecture leverages a mix of merchant and custom Asics, along with Cisco’s new line of Nexus 9000 switches and its Application Policy Infrastructure Controller (APIC).

Establishing business models

Startup companies in the network hardware business are not only introducing new technology perspectives, they are also exploring new business models and establishing customer relationships. Building on standardised platforms allows users to do more process management and security tasks themselves. With higher levels of personalisation and control, users can more easily explore alternative business processes and combine functions across different platforms, which translates into faster time to value. 2015 promises to be an exciting year for enterprise IT departments looking to revamp their corporate network infrastructures – they may actually meet their boards’ network targets.

Thanks to Computerweekly for the article.