Virtualization Gets Real

Optimizing NFV in Wireless Networks

The promise of virtualization looms large: the ability to fast-track services at lower cost and, ultimately, with less complexity. As virtualization evolves to encompass network functions and more, service delivery will increasingly benefit from using a common virtual compute and storage infrastructure.

Ultimately, providers will realize:

  • Lower total cost of ownership (TCO) by replacing dedicated appliances with commodity hardware and software-based control.
  • Greater service agility and scalability, with functions stitched together into dynamic, highly efficient “service chains” in which each function follows the most appropriate and cost-effective path.
  • Wired and wireless network convergence, as the two increasingly share converged networks, virtualized billing, signaling, security functions, and other common underlying elements of provisioning. Management and orchestration (M&O) and handoffs between infrastructures will become more seamless as protocol gateways and other systems and devices migrate to the cloud.
  • On-the-fly self-provisioning, with end users empowered to change services, add features, enable security options, and tweak billing plans in near real time.

At the end of the day, sharing a common pool of hardware and flexibly allocated resources will deliver far greater efficiency, regardless of what functions are being run and the services being delivered. But the challenges inherent in moving vital networking functions to the cloud loom even larger than the promise, and are quickly becoming real.

The Lifecycle NFV Challenge: Through and Beyond Hybrid Networks

Just two years after a European Telecommunications Standards Institute (ETSI) Industry Specification Group (ISG) outlined the concept, carriers worldwide are moving from basic proof-of-concept (PoC) demonstrations in the lab to serious field trials of Network Functions Virtualization (NFV). Doing so means making sure new devices and unproven techniques deliver the same (or better) performance when deployments go live.

The risks of not doing so — lost revenues, lagging reputations, churn — are enough to prompt operators to take things in stages. Most will look to virtualize the “low-hanging fruit” first.

Devices like firewalls, broadband remote access servers (BRAS), policy servers, IMS components, and customer premises equipment (CPE) make ideal candidates for quickly lowering CapEx and OpEx without tackling huge real-time processing requirements. Core routing and switching functions responsible for data plane traffic will follow as NFV matures and performance increases.

In the meantime, hybrid networks will be a reality for years to come, potentially adding complexity and even cost (redundant systems, additional licenses) near-term. Operators need to ask key questions, and adopt new techniques for answering them, in order to benefit sooner rather than later.

To thoroughly test virtualization, testing itself must become partly virtualized. Working in tandem with traditional strategies throughout the migration life cycle, new virtualized test approaches help providers explore four key questions:

1. What to virtualize and when? To find this answer, operators need to baseline the performance of existing network functions and develop realistic goals for the virtualized deployment. New and traditional methods can be used to measure and model the quality of new configurations.

2. How do we get it to work? During development and quality assurance, virtualized test capabilities should be used to speed and streamline testing. Multiple engineers need to be able to instantiate and evaluate virtual machines (VMs) on demand, and at the same time.

3. Will it scale? Here, traditional testing is needed, with powerful hardware systems used to simulate high-scale traffic conditions and session rates. Extreme precision and load aid in emulating real-world capacity to gauge elasticity as well as performance.

4. Will it perform in the real world? The performance of newly virtualized network functions (VNFs) must be demonstrated both on their own and in the context of the overall architecture and end-to-end services. New infrastructure components such as hypervisors and virtual switches (vSwitches) need to be fully assessed and their vulnerabilities minimized.

Avoiding New Bottlenecks and Blind Spots

Each layer of the new architectural model has the potential to compromise performance. In sourcing new devices and devising techniques, several aspects should be explored at each level:

At the hardware layer, server features and performance characteristics will vary from vendor to vendor. Driver-level bottlenecks can be caused by routine aspects such as CPU and memory read/writes.

With more than one type of server platform often in play, testing must be conducted to ensure consistent and predictable performance as virtual machines (VMs) are deployed and moved from one type of server to another. The performance level of NICs can make or break the entire system as well, with performance dramatically impacted by simply not having the most recent interfaces or drivers.

Virtual switch implementations vary greatly, with some packaged with hypervisors and others functioning standalone. vSwitches may also vary from hypervisor to hypervisor, with some favoring proprietary technology while others leverage open source. Finally, functionality varies widely, with some providing very basic L2 bridge functionality and others acting as full-blown virtual routers.

In comparing and evaluating vSwitch options, operators need to weigh performance, throughput, and functionality against utilization. During provisioning, careful attention must also be given to resource allocation and the tuning of the system to accommodate the intended workload (data plane, control plane, signaling).

Moving up the stack, hypervisors deliver virtual access to underlying compute resources, enabling features like fast start/stop of VMs, snapshot, and VM migration. Hypervisors allow virtual resources (memory, CPU, and the like) to be strictly provisioned to each VM, and enable consolidation of physical servers onto a virtual stack on a single server.

Again, operators have multiple choices. Commercial products may offer more advanced features, while open source alternatives have the broader support of the NFV community. In making their selection, operators should evaluate both the overall performance of each potential hypervisor, and the requirements and impact of its unique feature set.

Management and Orchestration is undergoing a fundamental shift from managing physical boxes to managing virtualized functionality. Increased automation is required, as this layer must interact with both virtualized server and network infrastructures, often via OpenStack APIs, and in many cases SDN.
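For a flavor of what that automation looks like in practice, here is a minimal sketch using the open-source openstacksdk Python library to boot and tear down a VM programmatically, the kind of request an orchestration layer issues constantly. The cloud entry, image, flavor, and network names below are placeholder assumptions, not a prescribed setup:

```python
# Minimal sketch: programmatic VM lifecycle via OpenStack, as an
# orchestrator might drive it. Requires the openstacksdk package and a
# configured cloud entry; all names below are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="example-cloud")        # assumed cloud config

image = conn.compute.find_image("ubuntu-22.04")        # placeholder image
flavor = conn.compute.find_flavor("m1.small")          # placeholder flavor
network = conn.network.find_network("mgmt-net")        # placeholder network

server = conn.compute.create_server(
    name="vnf-test-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
# Block until the VM reaches ACTIVE, then release it again.
server = conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")
conn.compute.delete_server(server)
```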

VMs and VNFs themselves ultimately impact performance, as each requires virtualized resources (memory, storage, and vNICs) and involves a certain number of I/O interfaces. In deploying a VM, it must be verified that its guest OS is compatible with the hypervisor. For each VNF, operators need to know which hypervisors the VMs have been verified on, and assess the ability of the guest OS to talk to both virtual I/O and the physical layer. The ultimate portability, or ability of a VM to be moved between servers, must also be demonstrated.

Once deployments go live, other overarching aspects of performance, like security, need to be safeguarded. With so much now occurring on a single server, migration to the cloud introduces some formidable new visibility challenges that must be dealt with from start to finish:

Pinpointing performance issues grows more difficult. Boundaries may blur between hypervisors, vSwitches, and even VMs themselves. The inability to source issues can quickly give way to finger pointing that wastes valuable time.

New blind spots also arise. In a traditional environment, traffic is visible on the wire connected to the monitoring tools of choice. Inter-VM traffic within virtualized servers, however, is managed by the hypervisor’s vSwitch, without traversing the physical wire visible to monitoring tools. Traditional security and performance monitoring tools can’t see above the vSwitch, where “east-west” traffic now flows between guest VMs. This newly created gap in visibility may attract intruders and mask pressing performance issues.

Monitoring tool requirements increase as tools tasked with filtering data at rates for which they were not designed quickly become overburdened.

Audit trails may be disrupted, making documenting compliance with industry regulations more difficult, and increasing the risk of incurring fines and bad publicity.

To overcome these emerging obstacles, a new virtual visibility architecture is evolving. As with lab testing, physical and virtual approaches to monitoring live networks are now needed to achieve 100% visibility, replicate field issues, and maintain defenses. New virtualized monitoring Taps (vTaps) add the visibility into inter-VM traffic that traditional tools don’t deliver.
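To give a sense of what that restores: once a vTap or vSwitch port mirror copies east-west packets onto an interface a monitoring host can see, ordinary capture tooling applies again. A toy sketch, assuming the scapy package, packet-capture privileges, and a placeholder interface name:

```python
# Toy consumer of mirrored inter-VM traffic: once a vTap or port mirror
# exports east-west packets to a visible interface, standard capture
# tools work as they would on the physical wire. The interface name is
# a placeholder assumption.
from scapy.all import sniff

def show(pkt):
    # Print one summary line per mirrored packet (addresses, protocol).
    print(pkt.summary())

sniff(iface="vtap-mirror0", prn=show, count=20)
```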

From There On…

The bottom line is that the road to network virtualization will be a long one, without a clear end and filled with potential detours and unforeseeable delays. But with the industry as a whole banding together to pave the way, NFV and its counterpart, Software Defined Networking (SDN), represent a paradigm shift the likes of which the industry hasn’t seen since the move to mobility itself.

As with mobility, virtualization may cycle through some glitches, retrenching, and iterations on its way to becoming the norm. And once again, providers who embrace the change, validating the core concepts and measuring success each step of the way will benefit most (as well as first), setting themselves up to innovate, lead, and deliver for decades to come.

Thanks to OSP for the article.

An Insight into Fault Monitoring

NMSaaS Network Monitoring

Fault monitoring is the process used to monitor all hardware, software, and network configurations for any deviations from normal operating conditions. This monitoring process typically includes major and minor changes to the expected bandwidth, performance, and utilization of the established computer environment.

Some of the features of a fault monitoring system may include (a minimal threshold-check sketch follows this list):

  • Automated correlation of root-cause events without having to code or update rules
  • Enrichment of alarm information and dashboards with business-impact data
  • Automatic alarm monitors for crucial KPIs of all network assets
  • Integration via SMS, pager, email, trouble ticket, and script execution on alarm events
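To make the threshold idea concrete, here is a minimal, vendor-neutral sketch of the check behind such KPI alarm monitors: each polled metric is compared against its expected operating band, and a deviation raises an alarm event. The metric names and limits are illustrative assumptions, not NMSaaS settings:

```python
# Minimal sketch of threshold-based fault detection: compare polled KPI
# samples against an expected operating band and emit alarm events on
# deviation. All metric names and limits are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class KpiRule:
    name: str
    low: float    # minimum acceptable value
    high: float   # maximum acceptable value

RULES = [
    KpiRule("cpu_util_pct", 0.0, 85.0),
    KpiRule("bandwidth_util_pct", 0.0, 70.0),
    KpiRule("mem_free_mb", 512.0, float("inf")),   # alarm when free memory drops
]

def check(samples):
    """Return one alarm string per KPI outside its expected band."""
    alarms = []
    for rule in RULES:
        value = samples.get(rule.name)
        if value is None:
            alarms.append(f"MISSING {rule.name}: poller returned no data")
        elif not (rule.low <= value <= rule.high):
            alarms.append(f"ALARM {rule.name}={value}, expected {rule.low}-{rule.high}")
    return alarms

print(check({"cpu_util_pct": 97.0, "bandwidth_util_pct": 40.0, "mem_free_mb": 128.0}))
```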

Network fault management is a big challenge when you have a small team. The duty becomes more complicated if you manage a remote site: you may dispatch a technician only to find the problem was something you could have fixed remotely, or discover the right equipment wasn’t brought along and a return trip is needed, which hurts your service restoration time.

In most cases, the time taken to identify the root cause of a problem is actually longer than the time taken to fix it. Having a proactive network fault monitoring tool helps you quickly identify the root cause of the problem and fix it before end-users notice it.

Finding a tool that can perform root cause analysis in real time has many benefits: your engineers can focus on service-affecting events and prioritize them properly. Genuine real-time problem analysis and subsequent problem solving require precise automation of several interacting components.

If you would like to learn more about this topic, please feel free to click below to get our educational whitepaper. It will give you greater insight into cloud services such as fault monitoring and many more.

NMSaaS- 10 Reasons to Consider a SaaS Network Management Solution

Thanks to NMSaaS for the article.

Top Three Policies in Network Configuration Management


When a network needs repair, modification, expansion, or upgrading, the administrator refers to the network configuration management database to determine the best course of action.

This database contains the locations and network addresses of all hardware devices, as well as information about the programs, versions, and updates installed on network computers.

A main focus to consider when discussing network configuration management is policy checking. There are three key policy checking capabilities which should not be ignored, and they are as follows:

  1. Regulatory Compliance Policy
  2. Vendor Default Policy
  3. Security Access Policy

Regulatory Compliance Policy

The obvious one is regulatory compliance policy. If you have a network configuration system, you should implement regular checks to ensure consistency with design standards and processes, and with the directives of internal and external regulators.

In the past, people used manual processes that were time-intensive, costly, and inaccurate; more importantly, the lack of real-time visibility left the business at risk and open to potential attacks.

Now, thanks to the cloud, this is all a thing of the past.

Vendor Default Policy

Vendor default policy is a best-practice recommendation to scan the configurations of your infrastructure devices and eradicate potential holes, so that risk is mitigated and security access to the infrastructure is maintained at the highest possible level.

Such holes may arise when configuration settings are overlooked. Sometimes default usernames and passwords, or the SNMP ‘public’ and ‘private’ community strings, are not removed, leaving a hole in your security open to potential attacks.
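As a minimal illustration of such a scan (a sketch built on assumptions, not the NMSaaS implementation), the Python snippet below greps saved device configurations for factory community strings and default account names; the patterns and backup directory are placeholders:

```python
# Minimal sketch of a vendor-default scan: flag saved device configs
# that still contain factory credentials or default SNMP community
# strings. Patterns and the backup directory are illustrative
# assumptions, not a vendor's actual rule set.
import re
from pathlib import Path

DEFAULT_PATTERNS = [
    re.compile(r"snmp-server community (public|private)\b", re.I),
    re.compile(r"username (admin|cisco) password", re.I),
]

def scan_config(path):
    findings = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        for pattern in DEFAULT_PATTERNS:
            if pattern.search(line):
                findings.append(f"{path.name}:{lineno}: {line.strip()}")
    return findings

for config in Path("backups").glob("*.cfg"):   # assumed backup location
    for hit in scan_config(config):
        print("DEFAULT-SETTING RISK:", hit)
```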

Security Access Policy

Access to infrastructure devices is policed and controlled with the use of AAA (Authentication, Authorization, Accounting), TACACS+, RADIUS servers, and ACLs (Access Control Lists), so as to tighten security access into device operating systems.

It is therefore very important that these configuration elements are consistent across the managed estate. It is highly recommended to create security policies so that security access configurations can be policed for consistency, and reported on if they change or if vital elements of the configuration are missing.

Thanks to NMSaaS for the article. 

3 Steps to Server Virtualization Visibility

Network Instruments- 3 Steps to Server Virtualization

Each enterprise has its own reasons for moving to virtual infrastructure, but it all boils down to the demand for better and more efficient server utilization. Ensure comprehensive visibility with three practical steps.

Virtualization is a money-saving technology that allows the enterprise to stretch its IT budget much further by better utilizing server assets. Consider the immediate reduction in data center footprint, maintenance, and capital and operating expense overhead. Then add the promise to dynamically adjust server workloads and service delivery to achieve optimal user experience—the vision of true orchestration. It’s easy to see why server virtualization is key to many organizations’ operational strategy.

But, what if something goes wrong? With physical network infrastructure, you can usually track the root cause of north/south traffic issues back to one location via careful instrumentation of the resources. Troubleshooting is then facilitated with any number of free and commercial monitoring tools. How can you get the same visibility into the virtual server hypervisor and east/west vSwitch traffic to validate service health?

3 Steps to Virtual Visibility Cheat Sheet

Step One:

Get status of host and virtualization components

  • Use polling technologies such as SNMP, WSD, and WMI to gather performance metrics like CPU utilization and memory usage, as well as virtualization-specific variables like individual VM instance status, to find the real cause of service issues (see the SNMP sketch after this list).
  • Do your homework. Poor application response time and other service issues can be tied to unexpected sources.
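As one concrete illustration of step one’s polling (a sketch under assumptions, not a prescribed tool), the pysnmp snippet below walks hrProcessorLoad from the standard HOST-RESOURCES-MIB to read per-CPU load on a virtualization host. The target address and community string are placeholders; a real monitor polls many such OIDs on a schedule:

```python
# Minimal sketch of SNMP health polling: walk hrProcessorLoad
# (HOST-RESOURCES-MIB) to read per-processor load from a host.
# Target address and community string are placeholder assumptions.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, nextCmd,
)

target = UdpTransportTarget(("192.0.2.10", 161))   # placeholder host

for err_ind, err_stat, _, var_binds in nextCmd(
        SnmpEngine(), CommunityData("public"), target, ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.25.3.3.1.2")),  # hrProcessorLoad
        lexicographicMode=False):
    if err_ind or err_stat:
        print("SNMP error:", err_ind or err_stat)
        break
    for oid, value in var_binds:
        print(f"{oid} = {value}%")   # one row per (virtual) processor
```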

Step Two:

Monitor vSwitch east/west traffic

  • To the network engineer, everything disappears once it hits the virtual server. To combat this “black box effect,” there are two methods to maintain visibility:

1. Inside Virtual Monitoring Model

a. Create a dedicated VM “monitoring instance”.
b. Transmit relevant data to this instance for analysis.
c. Analyze traffic locally with a monitoring solution.
d. Transmit summary or packet data to an external central analysis solution.

2. Outside Virtual Monitoring Model

a. Push copies of raw, unprocessed vSwitch east/west traffic out of the virtualized server.

Step Three:

Inspect perimeter and client north/south conversations

  • Instrument highly saturated Application Access Layer links with a packet capture device like Observer GigaStor™ to record conversations and rewind for back-in-time analysis.

To learn more, download the white paper here:

Network Instruments- 3 Steps to Server Virtualization

Have You Considered Using a Network Discovery Software Solution?


A network discovery software solution allows your computer to see other computers and devices on the network, and allows people on other network computers to see your computer. This makes it easier to share files and printers, but that’s not all.

You may be asking: why is this even important to me? The primary reasons why network discovery is vital for your business are as follows:

  • If you don’t know what you have, you cannot hope to monitor and manage it.
  • You can’t track down interconnected problems.
  • You don’t know when something new comes on the network.
  • You don’t know when you need upgrades.
  • You may be paying too much for maintenance.

Most of the time in network management you’re trying to track down potential issues within your network and decide how you’re going to resolve them. This is a very hard task, especially if you’re dealing with a large-scale network. If one thing goes down within the network, it starts a trickle effect, and more aspects of the network will in turn start to go down.

All of these problems are easily fixed. Many network discovery solutions provide powerful and flexible tools that let you determine exactly what is subject to monitoring.

These elements can be automatically labeled and grouped. This makes automatic data collection possible, as well as threshold monitoring and reporting on already discovered elements.

Another aspect of network discovery software is that it can perform a network topology discovery in the managed network. The discovery process probes each device to determine its configuration and relation to other managed elements.

This information can then be used to create instances in a dependency model. This simplifies event correlation: no rules programming is required, and the subsystem guarantees identification of critical problems. The discovery detects network devices and topology automatically.
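As a toy illustration of the probing this describes (not the product’s discovery engine), the sketch below sweeps an assumed management subnet and records which addresses answer on common management ports, the raw material a topology discovery builds on:

```python
# Toy discovery sweep: test each address on a management subnet for
# common management ports and record what answers. The subnet and port
# list are illustrative assumptions, not a product's probe logic.
import socket
from concurrent.futures import ThreadPoolExecutor
from ipaddress import ip_network

MGMT_PORTS = (22, 23, 443)   # SSH, Telnet, HTTPS

def probe(host):
    """Return (host, list of management ports that accepted a connect)."""
    open_ports = []
    for port in MGMT_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return host, open_ports

hosts = [str(ip) for ip in ip_network("192.0.2.0/24").hosts()]
with ThreadPoolExecutor(max_workers=64) as pool:
    for host, ports in pool.map(probe, hosts):
        if ports:
            print(f"discovered {host}: responds on {ports}")
```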

As a result, critical details such as IP address, MAC address, OS, firmware, services, memory, serial numbers, interface information, routing information, and neighbor data are all available at the click of a button or as a scheduled report.

If you would like to find out more about how we can benefit your enterprise, schedule a technical discussion with one of our experienced engineers.

Contact Us

Thanks to NMSaaS for the article.

Mobile Network Optimization

Ixia Anue NTO 7300

Visibility Into Quality

What happens when we offload voice traffic to Wi-Fi? As user demand for high-quality anytime, anywhere communications continues growing exponentially, mobile providers are evolving core networks to higher capacity technologies such as 4G LTE. As they do so, mobile network optimization increasingly relies on detecting and preventing potential performance issues. Accomplishing this detection becomes even more challenging, given the expanding mix of tools, probes, interfaces, processes, functions, and servers involved in network monitoring and optimization.

Ixia’s network visibility solutions provide the ongoing data needed for mobile network optimization. They deliver a high-quality subscriber experience reliably and cost-effectively, despite the growing diversity of network technologies, user devices, and security threats. As operational complexity increases, network engineers at leading mobile service providers can leverage Ixia’s suite of network monitoring switches to ensure the end-to-end visibility needed to minimize OpEx, sustain profitability, and safeguard quality and user satisfaction.

Ixia’s mobile network visibility solutions deliver:

  • Traffic optimized for monitoring
  • Automated troubleshooting to reduce MTTR
  • A breakthrough “drag and drop” GUI management interface that streamlines configuration
  • Expanded network monitoring capacity

Carrier-grade Mobile Network Capabilities

Ixia’s expanding suite of network visibility solutions offers a host of new capabilities that equip network engineers at telecommunications providers to achieve end-to-end network visibility—simply and efficiently. NEBS-compliant and suitable for 4G LTE packet cores, these solutions can enable such essential functions as connection of multiple network monitoring tools to a large number of 40GbE, 10GbE, and 1GbE interfaces (up to 16 40GbE ports or up to 64 10GbE ports) in an efficient form factor. Reflecting Ixia’s globally renowned monitoring innovation, these carrier-grade solutions offer such innovative features as:

  • MPLS and GTP filtering
  • Custom dynamic filtering to allow visibility into the first 128 bytes of packets
  • Uninterrupted access for high-availability network monitoring
  • NEBS certification that ensures robustness
  • Redundant, hot-swappable power supplies and fan modules
  • Local and remote alarm relay support
  • Emergency out-of-band reset
  • Intuitive drag-and-drop control panel
  • Aggregation of data from multiple network access points

Ixia provides telecommunications providers easy access to view end-to-end analyses of architected networks, validate field applications, and improve customer loyalty and support. These solutions deliver the actionable insights needed to dynamically detect, avoid, and address issues. Overall, Ixia’s robust end-to-end network visibility solutions allow engineers to evaluate and optimize network and application performance under diverse conditions, maximizing ROI and the quality of the user experience.

 

  • Ixia Anue NTO 7300 (Net Tool Optimizer): out-of-band traffic aggregation, filtering, deduplication, and load balancing
  • Ixia Anue GTP Session Controller: intelligent distribution and control of mobile network traffic

 

Thanks to Ixia for the article.

Think About Network Device Backup & Compliance Policy Checking System


One of the most underrated yet most vital practices in any organization, no matter the size of the business, is network device backup.

The reasons why you should have a device backup solution in place are as follows:

  1. Quick reestablishment of device configs.
  2. Reduced downtime due to failed devices.
  3. Disaster recovery and business continuity.
  4. Network compliance.

These are all key components to the success of your organization.

Another aspect which should not be ignored is policy checking. A policy checking system uses an advanced snippet-based rules engine, with regular-expression-based rules and filters, to quickly create policies that range from the simple to the very complex.

These rules can range from simple text strings that find items present or missing in configuration files, to powerful configuration snippets with section matching and regular-expression searching, to advanced scripting languages (e.g., XML, Perl).
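To make the snippet idea concrete, here is a minimal sketch of section-matching policy checking in Python: inside every interface block of a config, certain lines must appear. The rules and sample config are illustrative assumptions, not the NMSaaS engine itself:

```python
# Minimal sketch of "snippet"-style policy checking: within every
# interface section of a config, require that certain lines appear.
# Rules and sample config are illustrative assumptions.
import re

REQUIRED_IN_INTERFACE = [r"no ip redirects", r"no ip proxy-arp"]

def interface_sections(config):
    """Yield (name, body) per 'interface ...' block; the body is the
    run of indented lines that follows the header."""
    for m in re.finditer(r"^interface (\S+)\n((?: .+\n)*)", config, re.M):
        yield m.group(1), m.group(2)

def audit(config):
    violations = []
    for name, body in interface_sections(config):
        for rule in REQUIRED_IN_INTERFACE:
            if not re.search(rule, body):
                violations.append(f"{name}: missing '{rule}'")
    return violations

sample = (
    "interface GigabitEthernet0/1\n"
    " no ip redirects\n"
    "interface GigabitEthernet0/2\n"
    " description uplink\n"
)
for v in audit(sample):
    print("POLICY VIOLATION:", v)
```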

Our CTO John Olson will educate you on why organizations are turning to NMSaaS to capture their network device configurations and then use that information to run detailed compliance checks.

In this technical webinar he will show you how to leverage the power of the NMSaaS Configuration and Change (NCCM) Module to both protect and report on your critical network device configurations.

Examples of automated functions include (a minimal sketch of the first two follows this list):

  1. Backup the running configuration of devices.
  2. Compare older configurations to the current one.
  3. Restore configurations from previous backups.
  4. Use logging to watch for device changes and then automatically back up the new configuration.
  5. Run Policy checks against those stored configurations for compliance audits.
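As a hedged sketch of the first two functions, the snippet below uses the open-source netmiko library (not the NMSaaS module itself) to pull a device’s running configuration over SSH and diff it against the previous backup; the device details and file paths are placeholders:

```python
# Minimal sketch of backup-and-compare: pull the running config over
# SSH via the open-source netmiko library and diff it against the last
# stored copy. Credentials, address, and paths are placeholders.
import difflib
from pathlib import Path
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",   # netmiko platform key
    "host": "192.0.2.1",          # placeholder management address
    "username": "backup-user",
    "password": "changeme",
}

backup = Path("backups/edge-01.cfg")
previous = backup.read_text().splitlines() if backup.exists() else []

with ConnectHandler(**device) as conn:
    current = conn.send_command("show running-config").splitlines()

# Report any drift since the last backup, then store the new copy.
for line in difflib.unified_diff(previous, current, "previous", "current", lineterm=""):
    print(line)
backup.parent.mkdir(exist_ok=True)
backup.write_text("\n".join(current) + "\n")
```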

To listen to an on-demand webinar on cloud-based network device backup and compliance policy checking, click below.


Contact Us for Live Demo

Thanks to NMSaaS for the article. 

Infosim® Global Webinar Day January 29th, 2015 – I Have 1000 Devices From this Vendor to Manage, Now What?


Join Mike Skripek, Senior Network Engineer, for a Webinar and Live Demo on:

“I Have 1000 Devices From this Vendor to Manage, Now What?”

This Webinar will provide insight into:

  • How custom integration can solve your device management woes
  • Extracting performance data beyond traditional methods
  • 5 examples of custom scripting for custom devices
  • How custom integration can evolve with your code versions
  • Custom integration with Infosim® StableNet® [Live Demo]
  • Stump the presenter: “Can you manage this device?” [Q&A]

A recording of this Webinar will be available to all who register!


(Take a look at our previous Webinars here.)

Thanks to Infosim for the article. 

Webinar: Cloud Based Network Device Backup and Compliance Policy Checking

NMSaaS- Webinar: Cloud Based Network Device Backup and Compliance Policy Checking

NMSaaS is a cloud-based network management system with features that allow you to capture your network device configurations and perform detailed policy and compliance checks. The Configuration and Change Management (NCCM) module allows you not only to proactively search for compliance issues but also to protect devices from having compliance violations inadvertently introduced.

This 30-minute webinar will discuss:

  • Backup the running configuration of devices
  • Backup additional “Show” commands
  • Compare older configurations to the current one
  • Restore configurations from previous backups

Join us on January 28th from 2:00 to 2:30 EST to discuss how a cloud-based network management system can help you with policy and configuration management.

Listen to the recording here


Thanks to NMSaaS for the article.

Virtual Server Rx

JDSU Network Instruments- Virtual Server Rx

The ongoing push to increase server virtualization rates is driven by its many benefits for the data center and business. A reduction in data center footprint and maintenance, along with capital and operating cost savings, is key to many organizations’ operational strategy. The ability to dynamically adjust server workloads and service delivery to achieve optimal user experience is a huge plus for IT teams working in the virtualized data center – unless something goes wrong.

With network infrastructure, you can usually track the root cause of north/south traffic issues back to one location via careful instrumentation of the resources. Troubleshooting is then facilitated with any number of free and commercial monitoring tools.

How can you get the same visibility you need to validate service health within the virtualized data center?

First Aid for the Virtual Environment

Network teams often act as “first responders” when application performance degrades. For this reason, it’s critical to maintain visibility into and around virtual constructs for effective troubleshooting and optimal service delivery. Otherwise, much of the value of server virtualization and consolidation efforts can be quickly offset by sub-par app performance. Fundamentally, achieving comprehensive visibility of a virtualized server environment requires an understanding of the health of the underlying resources.

Health Checks

Polling technologies such as SNMP, WSD, and WMI can provide performance insight by interrogating the host and various virtualized elements. A fully-integrated performance management platform can not only provide these views, but also display relevant operating metrics in a single, user-friendly dashboard.

Virtual servers are often highly provisioned and operating at elevated utilization levels. Assessing their underlying health, and adding resources when necessary, is essential for peak performance.

Use performance monitoring tools to check:

  • CPU Utilization
  • Memory Usage
  • Individual VM Instance Status

Often, these metrics can point to the root cause of service issues that may otherwise manifest themselves indirectly.

For example, poor response time of an application hosted on a virtualized server may have nothing to do with the service or the network, but may instead be tied to excessively high CPU utilization. Without this monitoring perspective, troubleshooting will be more difficult and time consuming.

Further Diagnostics

Virtualization and consolidation offer significant upside for today’s dynamic data center model and for achieving optimal IT business service delivery. However, monitoring visibility must be maintained so potential application degradation issues can be detected and resolved before impacting the end user. To do so, care must be given to properly instrumenting virtualized server deployments and the supporting network infrastructure.

Ready for more? Download the free white paper 3 Steps to Server Virtualization Visibility, featuring troubleshooting diagrams, metrics, and detailed strategies to help diagnose what’s really going on in your virtual data centers. You’ll learn two methods to monitor vSwitch traffic, as well as how to further inspect perimeter and client conversations.

Download 3 Steps to Server Virtualization Visibility

Thanks to Network Instruments for the article.