Virtualization Gets Real

Optimizing NFV in Wireless Networks

The promise of virtualization looms large: greater ability to fast-track services with lower costs, and, ultimately, less complexity. As virtualization evolves to encompass network functions and more, service delivery will increasingly benefit from using a common virtual compute and storage infrastructure.

Ultimately, providers will realize:

Lower total cost of ownership (TCO) by replacing dedicated appliances with commodity hardware and software-based control.

Greater service agility and scalability with functions stitched together into dynamic, highly efficient “service chains” in which each function follows the most appropriate and cost-effective path.

Wired and wireless network convergence as the two increasingly share converged networks, virtualized billing, signaling, and security functions, and other common underlying elements of provisioning. Management and orchestration (M&O) and handoffs between infrastructures will become more seamless as protocol gateways and other systems and devices migrate to the cloud.

On-the-fly self-provisioning with end users empowered to change services, add features, enable security options, and tweak billing plans in near real-time.

At the end of the day, sharing a common pool of hardware and flexibly allocated resources will deliver far greater efficiency, regardless of what functions are being run and the services being delivered. But the challenges inherent in moving vital networking functions to the cloud loom even larger than the promise, and are quickly becoming real.

The Lifecycle NFV Challenge: Through and Beyond Hybrid Networks

Just two years after a European Telecommunications Standards Institute (ETSI) Industry Specification Group (ISG) outlined the concept, carriers worldwide are moving from basic proof-of-concept (PoC) demonstrations in the lab to serious field trials of Network Functions Virtualization (NFV). Doing so means making sure new devices and unproven techniques deliver the same (or better) performance when deployments go live.

The risks of not doing so — lost revenues, lagging reputations, churn — are enough to prompt operators to take things in stages. Most will look to virtualize the “low-hanging fruit” first.

Devices like firewalls, broadband remote access servers (BRAS), policy servers, IMS components, and customer premises equipment (CPE) make ideal candidates for quickly lowering CapEx and OpEx without tackling huge real-time processing requirements. Core routing and switching functions responsible for data plane traffic will follow as NFV matures and performance increases.

In the meantime, hybrid networks will be a reality for years to come, potentially adding complexity and even cost (redundant systems, additional licenses) near-term. Operators need to ask key questions, and adopt new techniques for answering them, in order to benefit sooner rather than later.

To thoroughly test virtualization, testing itself must partly become virtualized. Working in tandem with traditional strategies throughout the migration life cycle, new virtualized test approaches help providers explore these 4 key questions:

1. What to virtualize and when? To find this answer, operators need to baseline the performance of existing network functions and develop realistic goals for the virtualized deployment. New and traditional methods can be used to measure and model quality and new configurations (see the sketch after this list).

2. How do we get it to work? During development and quality assurance, virtualized test capabilities should be used to speed and streamline testing. Multiple engineers need to be able to instantiate and evaluate virtual machines (VMs) on demand, and at the same time.

3. Will it scale? Here, traditional testing is needed, with powerful hardware systems used to simulate high-scale traffic conditions and session rates. Extreme precision and load aid in emulating real-world capacity to gauge elasticity as well as performance.

4. Will it perform in the real world? The performance of newly virtualized network functions (VNFs) must be demonstrated on its own, and in the context of the overall architecture and end-to-end services. New infrastructure components such as hypervisors and virtual switches (vSwitches) need to be fully assessed and their vulnerability minimized.
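To make the first of these questions concrete, baselining can start with a handful of KPIs captured from the existing physical function, against which the virtualized trial is then judged. A minimal sketch follows; the metric names, values, and tolerance are hypothetical placeholders, not figures from any ETSI work:

```python
# Compare physical-baseline KPIs against measurements from a virtualized trial.
# All metric names, values, and the tolerance are hypothetical placeholders.

BASELINE = {"throughput_gbps": 9.4, "latency_ms": 0.8, "sessions_per_sec": 50000}
VNF_TRIAL = {"throughput_gbps": 8.1, "latency_ms": 1.1, "sessions_per_sec": 52000}

TOLERANCE = 0.10  # allow each KPI to degrade by at most 10% before flagging it


def within_goal(metric, baseline, measured):
    """Latency may not rise beyond the tolerance; other KPIs may not fall below it."""
    if metric == "latency_ms":
        return measured <= baseline * (1 + TOLERANCE)
    return measured >= baseline * (1 - TOLERANCE)


for metric, base in BASELINE.items():
    measured = VNF_TRIAL[metric]
    status = "OK" if within_goal(metric, base, measured) else "REVIEW"
    print(f"{metric:20s} baseline={base:>10} trial={measured:>10}  {status}")
```

The same comparison, run continuously against live measurements, is what turns a one-off lab baseline into a realistic goal for the virtualized deployment.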

Avoiding New Bottlenecks and Blind Spots

Each layer of the new architectural model has the potential to compromise performance. In sourcing new devices and devising techniques, several aspects should be explored at each level:

At the hardware layer, server features and performance characteristics will vary from vendor to vendor. Driver-level bottlenecks can be caused by routine aspects such as CPU and memory read/writes.

With more than one type of server platform often in play, testing must be conducted to ensure consistent and predictable performance as virtual machines (VMs) are deployed and moved from one type of server to another. The performance of NICs can make or break the entire system as well; simply lacking the most recent interfaces or drivers can dramatically degrade performance.

Virtual switch implementations vary greatly, with some coming packaged with hypervisors and others functioning standalone. vSwitches may also vary from hypervisor to hypervisor, with some favoring proprietary technology while others leverage open source. Finally, functionality varies widely, with some providing very basic L2 bridging and others acting as full-blown virtual routers.

In comparing and evaluating vSwitch options, operators need to weigh performance, throughput, and functionality against utilization. During provisioning, careful attention must also be given to resource allocation and the tuning of the system to accommodate the intended workload (data plane, control plane, signaling).

Moving up the stack, hypervisors deliver virtual access to underlying compute resources, enabling features like fast start/stop of VMs, snapshot, and VM migration. Hypervisors allow virtual resources (memory, CPU, and the like) to be strictly provisioned to each VM, and enable consolidation of physical servers onto a virtual stack on a single server.

Again, operators have multiple choices. Commercial products may offer more advanced features, while open source alternatives have the broader support of the NFV community. In making their selection, operators should evaluate both the overall performance of each potential hypervisor, and the requirements and impact of its unique feature set.

Management and orchestration is undergoing a fundamental shift from managing physical boxes to managing virtualized functionality. Increased automation is required, as this layer must interact with both virtualized server and network infrastructures, often through OpenStack APIs and, in many cases, SDN.
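As a small illustration of that interaction, an orchestration or assurance layer typically reaches OpenStack through its APIs. The sketch below uses the openstacksdk library to list compute instances and flag any that are not active; the cloud name and the check itself are assumptions for illustration, not part of any particular M&O product:

```python
# Minimal sketch: query an OpenStack environment and flag non-ACTIVE instances.
# Assumes a clouds.yaml entry named "nfv-pod" and the openstacksdk package;
# both are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="nfv-pod")

for server in conn.compute.servers():
    if server.status != "ACTIVE":
        print(f"instance {server.name} is in state {server.status}")
```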

VMs and VNFs themselves ultimately impact performance as each requires virtualized resources (memory, storage, and vNICs), and involves a certain number of I/O interfaces. In deploying a VM, it must be verified that the host OS is compatible with the hypervisor. For each VNF, operators need to know which hypervisors the VMs have been verified on, and assess the ability of the host OS to talk to both virtual I/O and the physical layer. The ultimate portability, or ability of a VM to be moved between servers, must also be demonstrated.
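One way to verify what the hypervisor has actually allocated to each VM is to query it directly. The sketch below uses the libvirt Python bindings against a KVM/QEMU host; the connection URI and the fields printed are illustrative only:

```python
# Minimal sketch: list VMs on a KVM/QEMU host and show their allocated resources.
# Requires the libvirt-python bindings; the URI is an assumption for a local host.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
for dom in conn.listAllDomains():
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(f"{dom.name():20s} vCPUs={vcpus} memory={mem_kib // 1024} MiB "
          f"active={bool(dom.isActive())}")
conn.close()
```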

Once deployments go live, other overarching aspects of performance, like security, need to be safeguarded. With so much now occurring on a single server, migration to the cloud introduces some formidable new visibility challenges that must be dealt with from start to finish:

Pinpointing performance issues grows more difficult. Boundaries may blur between hypervisors, vSwitches, and even VMs themselves. The inability to source issues can quickly give way to finger pointing that wastes valuable time.

New blind spots also arise. In a traditional environment, traffic is visible on the wire connected to the monitoring tools of choice. Inter-VM traffic within virtualized servers, however, is managed by the hypervisor’s vSwitch, without traversing the physical wire visible to monitoring tools. Traditional security and performance monitoring tools can’t see above the vSwitch, where “east-west” traffic now flows between guest VMs. This newly created gap in visibility may attract intruders and mask pressing performance issues.

Monitoring tool requirements increase as tools tasked with filtering data at rates for which they were not designed quickly become overburdened.

Audit trails may be disrupted, making documenting compliance with industry regulations more difficult, and increasing the risk of incurring fines and bad publicity.

To overcome these emerging obstacles, a new virtual visibility architecture is evolving. As with lab testing, physical and virtual approaches to monitoring live networks are now needed to achieve 100% visibility, replicate field issues, and maintain defenses. New virtualized monitoring Taps (vTaps) add the visibility into inter-VM traffic that traditional tools don’t deliver.

From There On…

The bottom line is that the road to the virtualization of the network will be a long one, without a clear end and filled with potential detours and unforeseeable delays. But with the industry as a whole banding together to pave the way, NFV and its counterpart, Software Defined Networking (SDN), represent a paradigm shift the likes of which the industry hasn’t seen since the move to mobile itself.

As with mobility, virtualization may cycle through some glitches, retrenching, and iterations on its way to becoming the norm. And once again, providers who embrace the change, validating the core concepts and measuring success each step of the way, will benefit most (as well as first), setting themselves up to innovate, lead, and deliver for decades to come.

Thanks to OSP for the article.

An Insight into Fault Monitoring


Fault monitoring is the process of monitoring all hardware, software, and network configurations for any deviations from normal operating conditions. This monitoring typically covers major and minor changes to the expected bandwidth, performance, and utilization of the established computing environment.

Some of the features in fault monitoring may include:

  • Automated correlation of root-cause events without having to code or update rules
  • Enrichment of alarm information and dashboards with business-impact data
  • Automatic alarm monitors for crucial KPIs across all network assets
  • Integration via SMS, pager, email, trouble ticket, and script execution on alarm events (a simple alerting loop is sketched after this list)
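As a plain illustration of the last two capabilities, an alerting loop can poll a KPI, compare it to a threshold, and notify by email when the threshold is breached. In the minimal sketch below, the device list, threshold, SMTP relay, and the get_cpu_utilization helper are all hypothetical placeholders rather than NMSaaS functions:

```python
# Minimal sketch of threshold-based alarming with email notification.
# get_cpu_utilization() is a hypothetical stand-in for an SNMP/agent poll;
# the devices, threshold, and SMTP relay are illustrative assumptions.
import smtplib
import time
from email.message import EmailMessage

DEVICES = ["core-rtr-01", "edge-sw-02"]
CPU_ALARM_THRESHOLD = 85  # percent


def get_cpu_utilization(device):
    """Placeholder: in practice this would be an SNMP or agent poll."""
    return 42.0


def send_alarm(device, value):
    msg = EmailMessage()
    msg["Subject"] = f"ALARM: CPU {value:.0f}% on {device}"
    msg["From"] = "nms@example.com"
    msg["To"] = "noc@example.com"
    msg.set_content(f"{device} exceeded {CPU_ALARM_THRESHOLD}% CPU utilization.")
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)


while True:
    for device in DEVICES:
        cpu = get_cpu_utilization(device)
        if cpu > CPU_ALARM_THRESHOLD:
            send_alarm(device, cpu)
    time.sleep(60)  # poll once a minute
```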

Network fault management is a big challenge when you have a small team. The duty becomes more complicated if you manage a remote site and have to dispatch a technician, only to find the problem is something you could have fixed remotely, or to discover you don’t have the right equipment and must go back for it, which hurts your service restoration time.

In most cases, the time taken to identify the root cause of a problem is actually longer than the time taken to fix it. Having a proactive network fault monitoring tool helps you quickly identify the root cause of the problem and fix it before end-users notice it.

Finding a tool that can do root cause analysis in real time has many benefits. With such a tool, your engineers can focus on service-affecting events and properly prioritize them. Accurate problem analysis in real time, and the problem solving that follows, requires precise automation of several interacting components.

If you would like to learn more about this topic, please feel free to click below to get our educational whitepaper. It will give you greater insight into cloud services such as fault monitoring and many more.

NMSaaS- 10 Reasons to Consider a SaaS Network Management Solution

Thanks to NMSaaS for the article.

Top Three Policies in Network Configuration Management


When a network needs repair, modification, expansion, or upgrading, the administrator refers to the network configuration management database to determine the best course of action.

This database contains the locations and network addresses of all hardware devices, as well as information about the programs, versions, and updates installed on network computers.

A main focus to consider when discussing network configuration management is policy checking. There are three key policy checking capabilities which should not be ignored:

  1. Regulatory Compliance Policy
  2. Vendor Default Policy
  3. Security Access Policy

Regulatory compliance policy

The obvious one is the regulatory compliance policy. If you have a network configuration system, you should implement regular checks to ensure consistency with design standards, processes, and directives from internal and external regulators.

In the past, people used manual processes, which were time-intensive, costly, and inaccurate; more importantly, the business was at risk and open to potential attacks through not having the desired real-time visibility.

Now, thanks to the cloud, this is all a thing of the past.

Vendor default policy

Vendor default policy is a best-practice recommendation to scan the configurations of your infrastructure devices and eradicate potential holes, so that risk is mitigated and infrastructure security access is maintained at the highest possible level.

Such holes may arise when configuration settings are overlooked. Sometimes default usernames and passwords, or SNMP ‘public’ and ‘private’ community strings, are not removed, leaving a hole in your security open to potential attacks.

Security Access Policy

Access to infrastructure devices is policed and controlled with AAA (Authentication, Authorization, Accounting), TACACS+, RADIUS servers, and ACLs (Access Control Lists) so as to increase security access into device operating systems.

It is therefore very important that the configuration elements of infrastructure devices are consistent across the managed estate. It is highly recommended to create security policies so that security access configurations can be policed for consistency and reported on if they change or if vital elements of the configuration are missing.
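A minimal sketch of how the three policies above might be checked automatically, assuming device configurations have already been backed up as plain-text files; the directory layout and the required and forbidden patterns are illustrative, not a complete policy set:

```python
# Minimal sketch: scan backed-up device configurations for policy violations.
# The backup directory and the required/forbidden patterns are illustrative only.
import re
from pathlib import Path

REQUIRED = [r"^aaa new-model", r"^tacacs(-| )server"]        # security access policy
FORBIDDEN = [r"snmp-server community (public|private)\b",    # vendor default policy
             r"^username admin password"]

for config in Path("backups").glob("*.cfg"):
    text = config.read_text()
    for pattern in REQUIRED:
        if not re.search(pattern, text, re.MULTILINE):
            print(f"{config.name}: missing required line matching '{pattern}'")
    for pattern in FORBIDDEN:
        if re.search(pattern, text, re.MULTILINE):
            print(f"{config.name}: contains forbidden line matching '{pattern}'")
```

The same pattern lists can be extended to express regulatory requirements, and the scan can be run on every configuration change rather than on a fixed schedule.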

Thanks to NMSaaS for the article. 

3 Steps to Server Virtualization Visibility


Each enterprise has its own reasons for moving to virtual infrastructure, but it all boils down to the demand for better and more efficient server utilization. Ensure comprehensive visibility with three practical steps.

Virtualization is a money-saving technology that allows the enterprise to stretch its IT budget much further by better utilizing server assets. Consider the immediate reduction in data center footprint, maintenance, and capital and operating expense overhead. Then add the promise to dynamically adjust server workloads and service delivery to achieve optimal user experience—the vision of true orchestration. It’s easy to see why server virtualization is key to many organizations’ operational strategy.

But what if something goes wrong? With network infrastructure, you can usually track the root cause of north/south traffic issues back to one location via careful instrumentation of the resources. Troubleshooting is then facilitated with any number of free and commercial monitoring tools. How can you get the same visibility you need to validate service health within the virtual server hypervisor and vSwitch east/west traffic?

3 Steps to Virtual Visibility Cheat Sheet

Step One:

Get status of host and virtualization components

  • Use polling technologies such as SNMP, WSD, and WMI to gather performance metrics like CPU utilization, memory usage, and virtualized variables like individual VM instance status to find the real cause of service issues (a minimal polling sketch follows this list).
  • Do your homework. Poor application response time and other service issues can be tied to unexpected sources.
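The sketch below shows what such a poll might look like using the pysnmp library to read processor load via the standard HOST-RESOURCES-MIB; the target address, community string, and processor index are assumptions for illustration:

```python
# Minimal sketch: poll one hrProcessorLoad value over SNMPv2c with pysnmp.
# The target host, community string, and processor index are illustrative.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

HR_PROCESSOR_LOAD = "1.3.6.1.2.1.25.3.3.1.2.1"  # HOST-RESOURCES-MIB, first CPU

error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),          # SNMPv2c
    UdpTransportTarget(("192.0.2.10", 161)),
    ContextData(),
    ObjectType(ObjectIdentity(HR_PROCESSOR_LOAD)),
))

if error_indication or error_status:
    print(f"poll failed: {error_indication or error_status.prettyPrint()}")
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}%")
```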

Step Two:

Monitor vSwitch east/west traffic

  • To the network engineer, everything disappears once it hits the virtual server. To combat this “black box effect,” there are two methods to maintain visibility (a capture sketch follows the two models below):

1. Inside Virtual Monitoring Model

a. Create a dedicated VM “monitoring instance”.
b. Transmit relevant data to this instance for analysis.
c. Analyze traffic locally with a monitoring solution.
d. Transmit summary or packet data to an external central analysis solution.

2. Outside Virtual Monitoring Model

a. Push copies of raw, unprocessed vSwitch east/west traffic out of the virtualized server.
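Either model ultimately needs something that can capture and summarize the mirrored east/west traffic. Below is a minimal sketch using the scapy library, runnable inside a monitoring instance or on an external analysis host; the interface name and capture size are assumptions:

```python
# Minimal sketch: capture mirrored vSwitch traffic and summarize top talkers.
# The interface name and packet count are illustrative assumptions.
from collections import Counter
from scapy.all import IP, sniff

talkers = Counter()


def account(pkt):
    if IP in pkt:
        talkers[(pkt[IP].src, pkt[IP].dst)] += len(pkt)


sniff(iface="eth1", prn=account, count=500, store=False)

for (src, dst), octets in talkers.most_common(10):
    print(f"{src} -> {dst}: {octets} bytes")
```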

Step Three:

Inspect perimeter and client north/south conversations

  • Instrument highly saturated Application Access Layer links with a packet capture device like Observer GigaStor™ to record conversations and rewind for back-in-time analysis.

To learn more, download the white paper here:

Network Instruments- 3 Steps to Server Virtualization

Have You Considered Using a Network Discovery Software Solution


A network discovery software solution allows your computer to see other network computers and devices, and allows people on other network computers to see your computer. This makes it easier to share files and printers, but that’s not all.

You may be asking why this is even important to you. The primary reasons why it is vital for your business to use network discovery are as follows:

  • If you don’t know what you have, you cannot hope to monitor and manage it.
  • You can’t track down interconnected problems.
  • You don’t know when something new comes on the network.
  • You don’t know when you need upgrades.
  • You may be paying too much for maintenance.

Most of the time in network management, you are trying to track down potential issues within your network and work out how to resolve them. This is a very hard task, especially if you are dealing with a large-scale network. If one thing goes down within the network, it can start a ripple effect, and more parts of the network will in turn start to go down.

All of these problems are easily fixed. Many network discovery solutions provide powerful and flexible tools that let you determine exactly what is subject to monitoring.

These elements can be automatically labeled and grouped. This makes automatic data collection possible, as well as threshold monitoring and reporting on already discovered elements.

Another aspect of network discovery software is that it can perform a network topology discovery in the managed network. The discovery process probes each device to determine its configuration and relation to other managed elements.

This information can then be used to build a dependency model of the managed instances. This simplifies event correlation: no rules programming is required, and the subsystem guarantees identification of critical problems. The discovery detects network devices and topology automatically.

As a result, critical details like IP address, MAC address, OS, firmware, services, memory, serial numbers, interface information, routing information, and neighbor data are all available at the click of a button or as a scheduled report.
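In its simplest form, such a discovery sweep can be done with nothing more than the Python standard library, as in the sketch below; the subnet and the Linux-style ping flags are assumptions:

```python
# Minimal sketch: ping-sweep a subnet and record responsive hosts with their names.
# The subnet and the Linux ping flags (-c, -W) are illustrative assumptions.
import ipaddress
import socket
import subprocess

SUBNET = "192.168.1.0/24"
discovered = []

for ip in ipaddress.ip_network(SUBNET).hosts():
    alive = subprocess.run(["ping", "-c", "1", "-W", "1", str(ip)],
                           stdout=subprocess.DEVNULL).returncode == 0
    if alive:
        try:
            hostname = socket.gethostbyaddr(str(ip))[0]
        except socket.herror:
            hostname = "unknown"
        discovered.append((str(ip), hostname))

for ip, hostname in discovered:
    print(f"{ip:15s} {hostname}")
```

A real discovery engine layers SNMP, ARP/CDP/LLDP, and routing-table probes on top of this to build the topology and dependency model described above.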

If you would like to find out more about how we can benefit your enterprise, schedule a technical discussion with one of our experienced engineers.

Contact Us

Thanks to NMSaaS for the article.

Webinar: Cloud Based Network Device Backup and Compliance Policy Checking


NMSaaS is a cloud-based network management system with features that allow you to capture your network device configurations and perform detailed policy and compliance checks. The Network Configuration and Change Management (NCCM) module allows you not only to proactively search for compliance issues but also to protect devices from having compliance violations inadvertently introduced.

This 30-minute webinar will discuss how to:

  • Back up the running configuration of devices
  • Back up additional “Show” commands
  • Compare older configurations to the current one
  • Restore configurations from previous backups (a rough sketch of this workflow follows the list)
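As a rough illustration of the kind of workflow the webinar covers, the sketch below pulls a running configuration with the netmiko library and diffs it against the previous backup; the device parameters and file layout are illustrative assumptions, not NMSaaS internals:

```python
# Rough sketch: back up a running configuration and diff it against the last backup.
# Device parameters and file paths are illustrative; this is not NMSaaS code.
import difflib
from pathlib import Path
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.1",
    "username": "backup",
    "password": "secret",
}

conn = ConnectHandler(**device)
running = conn.send_command("show running-config")
conn.disconnect()

previous_file = Path("backups/192.0.2.1.cfg")
previous = previous_file.read_text() if previous_file.exists() else ""

for line in difflib.unified_diff(previous.splitlines(), running.splitlines(),
                                 fromfile="previous", tofile="current", lineterm=""):
    print(line)

previous_file.parent.mkdir(exist_ok=True)
previous_file.write_text(running)
```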

Join us on January 28th from 2:00 to 2:30 EST to discuss how a cloud-based network management system can help you with policy and configuration management.

Listen to the recording here


Thanks to NMSaaS for the article.

Virtual Server Rx


The ongoing push to increase server virtualization rates is driven by its many benefits for the data center and business. A reduction in data center footprint and maintenance, along with capital and operating cost reductions are key to many organizations’ operational strategy. The ability to dynamically adjust server workloads and service delivery to achieve optimal user experience is a huge plus for IT teams working in the virtualized data center – unless something goes wrong.

With network infrastructure, you can usually track the root cause of north/south traffic issues back to one location via careful instrumentation of the resources. Troubleshooting is then facilitated with any number of free and commercial monitoring tools.

How can you get the same visibility you need to validate service health within the virtualized data center?

First Aid for the Virtual Environment

Network teams often act as “first responders” when application performance degrades. For this reason, it’s critical to maintain visibility into and around virtual constructs for effective troubleshooting and optimal service delivery. Otherwise, much of the value of server virtualization and consolidation efforts can be quickly offset by sub-par app performance. Fundamentally, achieving comprehensive visibility of a virtualized server environment requires an understanding of the health of the underlying resources.

Health Checks

Polling technologies such as SNMP, WSD, and WMI can provide performance insight by interrogating the host and various virtualized elements. A fully-integrated performance management platform can not only provide these views, but also display relevant operating metrics in a single, user-friendly dashboard.

Virtual servers are often highly provisioned and operate at elevated utilization levels. Assessing their underlying health, and adding resources when necessary, is essential for peak performance.

Use performance monitoring tools to check:

  • CPU Utilization
  • Memory Usage
  • Individual VM Instance Status

Often, these metrics can point to the root cause of service issues that may otherwise manifest themselves indirectly.

For example, poor response time of an application hosted on a virtualized server may have nothing to do with the service or the network, but may instead be tied to excessively high CPU utilization. Without this monitoring perspective, troubleshooting will be more difficult and time consuming.
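A minimal sketch of that kind of host-level health check, using the psutil library to read CPU and memory utilization; the warning thresholds are illustrative, and individual VM instance status would still come from the hypervisor’s own API rather than from psutil:

```python
# Minimal sketch: check host CPU and memory utilization with psutil.
# The thresholds are illustrative; VM instance status would come from the
# hypervisor's own API (vSphere, libvirt, etc.) rather than from psutil.
import psutil

CPU_WARN = 85  # percent
MEM_WARN = 90  # percent

cpu = psutil.cpu_percent(interval=1)
mem = psutil.virtual_memory().percent

print(f"host CPU {cpu:.0f}%  memory {mem:.0f}%")
if cpu > CPU_WARN:
    print("WARNING: sustained CPU pressure may explain poor application response time")
if mem > MEM_WARN:
    print("WARNING: memory pressure; consider adding resources or rebalancing VMs")
```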

Further Diagnostics

Virtualization and consolidation offers significant upside for today’s dynamic data center model and in achieving optimal IT business service delivery. However, monitoring visibility must be maintained so potential application degradation issues can be detected and resolved before impacting the end user. To do so, care must be given in properly instrumenting virtualized server deployments and the supporting network infrastructure.

Ready for more? Download the free white paper 3 Steps to Server Virtualization Visibility, featuring troubleshooting diagrams, metrics, and detailed strategies to help diagnose what’s really going on in your virtual data centers. You’ll learn two methods to monitor vSwitch traffic, as well as how to further inspect perimeter and client conversations.

Download 3 Steps to Server Virtualization Visibility

Thanks to Network Instruments for the article. 

The Advantages and Disadvantages of Network Monitoring


Implementing network monitoring is nothing new for large enterprise networks, but for small to medium-sized businesses, a comprehensive monitoring solution often does not fit within their limited budgets.

Network monitoring involves a system that keeps track of the status of the various elements within a network; this can be something as simple as using ICMP (ping) traffic to verify that a device is responsive. However, the more comprehensive options offer a much deeper perspective on the network.
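At its simplest, that ICMP check is just a scheduled ping from the monitoring system, as in the sketch below; the device list and the Linux-style ping flags are assumptions:

```python
# Minimal sketch: verify device responsiveness with a simple ICMP (ping) check.
# The device list and the Linux ping flags are illustrative assumptions.
import subprocess

DEVICES = ["192.0.2.1", "192.0.2.2", "core-switch.example.com"]

for device in DEVICES:
    result = subprocess.run(["ping", "-c", "1", "-W", "2", device],
                            stdout=subprocess.DEVNULL)
    status = "up" if result.returncode == 0 else "DOWN"
    print(f"{device:30s} {status}")
```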

The more comprehensive options include elements such as:

  • Predictive Analysis
  • Root Cause Analysis
  • Alarm Management
  • SLA Monitoring and Measurement
  • Configuration Management
  • NetFlow Analysis
  • Network Device Backup

All of these elements are key driving factors for any business. If you have read any of my previous blogs, you will be aware of the three clear benefits of using a network monitoring system:

  1. Cost savings
  2. Speed
  3. Flexibility

However, there are a few small cons to consider.

Security

The security of any solution that requires public connectivity is of the utmost importance; using a cloud network monitoring solution requires a great amount of trust to be placed in the cloud provider. If you trust your supplier, there should be no reason to worry.

Connectivity

With network monitoring applications that are deployed in-house, the systems themselves typically sit at the most central part of an organization’s network.

With a cloud-based solution, the connection to an external entity is not going to have such straightforward connectivity, and with this, the risk of losing access to managed elements is a real possibility. However, if your provider has an automatic backup device installed, this should not deter you.

Performance

This interlinks with connectivity: the difference between the bandwidth available between an in-house system and the managed elements, and the bandwidth available between an external cloud option and the managed elements, can be significant.

If the number of elements that require management is large, it is best to find a cloud option that offers the deployment of an internal collector that sits inside the premises of the organization.

There are certainly a number of advantages that come with cloud offerings which make them very attractive, especially to smaller organizations; however it is important to analyze the whole picture before making a decision.

If you would like to find out more about this topic, why not schedule a one-on-one technical discussion with our experienced technical engineers.

Telnet Networks- Contact us to schedule a live demo

Thanks to NMSaaS for the article.

Top 3 Network Management Solutions to Consider in 2015


As networking becomes more complex, network management and monitoring have continued to evolve. In a world where technology is continually on the rise, people must become more aware of what capabilities are out there for their networks.

There is a vast array of network management solutions out there to help your business. The elusive goal of network monitoring tools is not only to alert administrators when there is trouble on the network, but also to track trends in the health of the network.

The top three solutions for all businesses to consider in 2015 are:

  1. Detailed NetFlow
  2. Network Discovery
  3. Fault & Event Management

Detailed Netflow

NetFlow is a feature that was introduced on Cisco routers that provides the ability to collect IP network traffic as it enters or exits an interface. By analyzing the data provided by NetFlow a network administrator can determine things such as the source and destination of traffic, class of service, and the causes of congestion. This is a crucial benefit for any organization as it allows them to stay on top of potential difficulties that may arise.
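As a rough illustration of what analyzing NetFlow data involves, the sketch below listens for NetFlow v5 exports and prints the source, destination, and byte count of each flow record. The UDP port is an assumption, and a production collector does far more (v9/IPFIX templates, storage, aggregation, reporting):

```python
# Minimal sketch: receive NetFlow v5 datagrams and print per-flow talkers.
# The UDP port is an illustrative assumption; only the fields used here are decoded.
import socket
import struct

HEADER = struct.Struct("!HHIIIIBBH")                # 24-byte NetFlow v5 header
RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")  # 48-byte NetFlow v5 flow record

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))

while True:
    data, exporter = sock.recvfrom(65535)
    version, count, *_ = HEADER.unpack_from(data, 0)
    if version != 5:
        continue
    for i in range(count):
        fields = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
        src, dst = socket.inet_ntoa(fields[0]), socket.inet_ntoa(fields[1])
        octets, srcport, dstport, proto = fields[6], fields[9], fields[10], fields[13]
        print(f"{exporter[0]}: {src}:{srcport} -> {dst}:{dstport} "
              f"proto={proto} bytes={octets}")
```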

Network Discovery

This is an obvious one to consider for the new year: if you don’t know what you have, how are you going to fix it? Network discovery has been around for a while now, but not every organization implements it; this is something which should not be ignored.

Why choose network discovery:

  1. You can’t track down interconnected problems.
  2. You don’t know when something new comes on the network.
  3. You may be paying too much for maintenance.

Fault & Event Management

The fault and event management process is essential in a complex technological environment, and most companies have deployed a myriad of tools to collect millions of events, logs, and messages from their network devices, servers and applications.

The advantages of using it are as follows:

  1. Up-to-date knowledge of availability of all monitored network devices and interfaces.
  2. Fully automated Root Cause Analysis allows administrators to focus on actual point of failure and ignore collateral damage.
  3. Instant visibility of network alerts and problems as reported by devices and servers.

These are the top three network management solutions I think every business should take into consideration in 2015. NMSaaS can provide you with numerous network solutions, all integrated into one package. To find out how we can help your business, get in contact with one of our experienced technical engineers.

Contact Us

Thanks to NMSaaS for the article.

Network Performance Monitoring

Ixia's Net Tool Optimizer

Visibility Into the Business

With virtualization, “Big Data,” and the sheer complexity of enterprise networks on the rise, dynamic network monitoring of performance and security provides a critical business advantage. Ixia’s network visibility solutions deliver ongoing insight into production networks to help maximize your company’s productivity and profitability, as well as its return on new and existing IT investments.

Leveraging state-of-the-art technology and techniques, Ixia’s powerful, high-performance network monitoring switches equip network engineers to meet the growing challenge of testing, assessing and monitoring complex, high-performance networks with limited access points. These solutions add intelligence between network access points and sophisticated monitoring tools to streamline the flow of data, ensuring that each tool receives the exact information it needs. Data from multiple TAP and SPAN ports is aggregated and multicast to performance and security monitoring tools, providing network operators with maximum visibility into both physical and virtual networks.

Ixia network visibility solutions:

  • Optimize traffic for monitoring with advanced filtering, aggregation, and replication
  • Extend investments in 1G monitoring tools to 10G and 40G deployments
  • Automate troubleshooting to reduce MTTR
  • Introduce “drag and drop” simplicity to streamline configuration and management
  • Expand network monitoring capacity enabling simultaneous monitoring of multiple connection points from a single port

Poor application performance leads to poor business performance: lost sales, missed opportunities, inefficient operations, and disgruntled customers, weakening the corporate brand. Mitigating this risk, Ixia’s network visibility solutions equip network engineers to leverage actionable insight—maximizing network and application performance while helping to optimize security, compliance, management, scalability, and ROI.

 

Net Tool Optimizers: out-of-band traffic aggregation, filtering, deduplication, and load balancing.

Net Optics Network Taps: passive network access for security and monitoring tools.

 

Thanks to Ixia for the article.