Troubleshooting Cheat Sheet: Layers 1-3

Any time you encounter a user complaint, whether regarding slow Internet access, application errors, or other issues that impact productivity, it is important to begin with a thorough understanding of the user’s experience.

Not sure where to begin?  User complaints usually fall into three categories: slow network, inability to access network resources, and application-specific issues.

Based upon the complaint being presented, you need to understand the symptoms and then isolate the issue to the correct layer of the Open Systems Interconnection (OSI) model.

The following Troubleshooting Cheat Sheet shows the questions to ask with a typical slow network complaint.

  • What to ask: What type of application is being used? Is it web-based? Is it commercial, or a homegrown application?
    What it means: Determines whether the person is accessing local or external resources.
  • What to ask: How long does it take the user to copy a file from the desktop to the mapped network drive and back?
    What it means: Verifies they can send data across the network to a server, and allows you to evaluate the speed and response of the DNS server.
  • What to ask: How long does it take to ping the server of interest?
    What it means: Validates they can ping the server and obtain the response time.
  • What to ask: If the time is slow for a local server, how many hops are needed to reach the server?
    What it means: Confirms the number of hops taking place. Look at switch and server port connections, speed to the client, and any errors.
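A minimal sketch of how the ping and hop-count checks above could be scripted, assuming a Unix-like workstation with the standard ping and traceroute utilities available (the server name is a placeholder):

```python
import re
import subprocess

SERVER = "fileserver.example.com"  # placeholder for the server of interest

def ping_summary(host, count=4):
    """Run ping and return the round-trip time summary line (min/avg/max)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    lines = [line for line in out.splitlines() if "min/avg/max" in line]
    return lines[0] if lines else "no reply"

def hop_count(host):
    """Run traceroute and count the hops reported on the way to the server."""
    out = subprocess.run(["traceroute", host],
                         capture_output=True, text=True).stdout
    # Each hop line begins with its hop number, e.g. " 3  10.0.0.1  1.2 ms ..."
    return len(re.findall(r"^\s*\d+\s", out, flags=re.MULTILINE))

if __name__ == "__main__":
    print("Ping:", ping_summary(SERVER))
    print("Hops:", hop_count(SERVER))
```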

Quick OSI Layer Review

With these questions answered, working through the OSI model is a straightforward process. Understanding how each layer delivers data and functions will shape how you troubleshoot it.

Physical Layer

  • If it can blind or shock you, think Physical Layer
  • Defines physical characteristics of cables and connectors
  • Provides the interface between network and network devices
  • Describes the electrical, light, or radio data stream signaling

Data Link Layer

  • Converts signals into bits which become the packet data that everyone wants
  • Performs error detection and correction of the data streams
  • Manages flow and link control between the physical signaling and network
  • Constructs and synchronizes data frames

Network Layer

  • Controls logical addressing, routing, and packet generation
  • Carries out congestion control and error handling
  • Performs route monitoring and message forwarding

Assessing the Physical Layer

Generally speaking, Physical Layer symptoms can be classified into two groups: outage issues and performance issues. In most cases, investigating outage issues is the easiest place to begin, as it’s a matter of confirming the link light is out or that a box is not functioning. Validating equipment failure is then simply a matter of replacing the cable or switch and confirming everything works.

Physical Layer issues are often overlooked: people ping or comb through NetFlow looking for the problem, when in reality it’s a Layer 1 issue caused by a cable, jack, or connector.

The next step in investigating Physical Layer issues is delving into performance problems. This means not only dealing with more complex issues, but also having the correct tools to diagnose degraded performance. Essential tools in your toolbox for testing physical issues are a cable tester for cabling problems, and a network analyzer or SNMP poller for other problems.
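To illustrate the SNMP-poller side of that toolbox, the sketch below walks a switch’s interface input-error counters with the net-snmp snmpwalk command-line tool; the switch address, community string, and alert threshold are illustrative placeholders:

```python
import subprocess

SWITCH = "192.0.2.10"     # placeholder management address
COMMUNITY = "public"      # placeholder read-only community string
ERROR_THRESHOLD = 100     # flag interfaces reporting more errors than this

def walk_counter(oid):
    """Walk a per-interface counter and return an {ifIndex: value} dictionary."""
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", COMMUNITY, "-On", SWITCH, oid],
        capture_output=True, text=True).stdout
    counters = {}
    for line in out.splitlines():
        # With -On, lines look like: .1.3.6.1.2.1.2.2.1.14.3 = Counter32: 42
        name, _, value = line.partition(" = ")
        if not value:
            continue
        if_index = int(name.rsplit(".", 1)[-1])
        counters[if_index] = int(value.rsplit(" ", 1)[-1])
    return counters

if __name__ == "__main__":
    in_errors = walk_counter("1.3.6.1.2.1.2.2.1.14")  # IF-MIB::ifInErrors
    for if_index, errors in sorted(in_errors.items()):
        if errors > ERROR_THRESHOLD:
            print(f"ifIndex {if_index}: {errors} input errors - check cable, duplex, or connector")
```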

Assessing Physical Performance Errors

When diagnosing performance issues with a network analyzer, you’ll notice common error patterns that usually indicate what’s causing the Physical Layer problem. These can be divided into intelligent and non-intelligent errors.

Intelligent Errors: An intelligent host is smashing into your network signal and corrupting the data.

Example: Overloaded WiFi network or a busy channel.

Non-Intelligent Errors: An outside entity causing noise that interferes with the signal or flow of data across the network.

Example: A microwave interfering with a WiFi signal.

Climbing Further up the Stack

Confirming performance problems, taking a systematic approach to troubleshooting, and understanding how communication occurs across the layers of the OSI model are key to slashing troubleshooting times and improving resolution accuracy.

Load Balancing Your Security Solution for Fun and Profit!

Maximizing the Value and Resiliency of Your Deployed Enterprise Security Solution with Intelligent Load Balancing

Correctly implementing your security solution in the presence of complex, high-volume user traffic has always been a difficult challenge for network architects. The data in transit on your network originates from many places and fluctuates with respect to data rates, complexity, and the occurrence of malicious events. Internal users create vastly different network traffic than external users accessing your publicly available resources. Synthetic traffic from bots has overtaken real users as the most prevalent source of traffic on the internet. How do you maximize your investment in a security solution while gaining the most value from the deployed solution? The answer is intelligent deployment through realistic preparation.

Let’s say that you have more than one point of ingress and egress into your network, and predicting traffic loads is very difficult (since your employees and customers are global). Do you simply throw money at the problem by purchasing multiple instances of expensive network security infrastructure that could sit idle at times and then get saturated at others? A massive influx of user traffic could overwhelm the security solution in one rack, causing security policies to go unenforced, while the solution at the other point of ingress has resources to spare.

High-speed inline security devices are not just expensive—the more features you enable on them, the less network traffic they can successfully parse. If you start turning on features like sandboxing (which spawns virtual machines to deeply analyze potential new security events), you can really feel the pain.

Using a network packet broker with load-balancing capability to combine multiple inline Next-Generation Firewalls (NGFWs) into a single logical solution allows you to maximize your security investment. To see how effective this strategy is, we ran four scenarios using an advanced-feature packet broker and load-testing tools.
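The core idea behind combining the firewalls into one logical solution is flow-aware distribution: the packet broker keeps every packet of a session on the same firewall while spreading sessions across the group. A minimal conceptual sketch of that hashing logic (not the packet broker’s actual implementation):

```python
import hashlib

NGFW_PORTS = ["ngfw-1", "ngfw-2"]  # the two inline firewalls in the test bed

def pick_ngfw(src_ip, dst_ip, src_port, dst_port, protocol):
    """Hash the flow 5-tuple so every packet of a session lands on the same NGFW."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    return NGFW_PORTS[int.from_bytes(digest[:4], "big") % len(NGFW_PORTS)]

# Packets of the same session always map to the same firewall, while
# different sessions spread roughly evenly across the pair.
print(pick_ngfw("10.0.0.5", "203.0.113.7", 51000, 443, "tcp"))
print(pick_ngfw("10.0.0.9", "203.0.113.7", 52111, 443, "tcp"))
```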

TESTING PLATFORM

Using two high-end NGFWs, we enabled nearly every feature (including scanning traffic for attacks, identifying user applications, and classifying network security risk based on the geolocation of the client) and load balanced the two devices using an advanced-feature packet broker. Then, using our load-testing tools, we created all of the real users and a deluge of different attack scenarios. Below are the results of the four testing scenarios.

Scenario One: Traffic Spikes

Your 10GbE NGFW will experience inconsistent amounts of network traffic. It is crucial to be able to effectively enforce security policies during such events. In the first test, I created a baseline of 8Gbps of real user traffic, then introduced a large influx of traffic that pushed the overall volume to 14Gbps. The packet broker load balancer ensured that the traffic was split evenly between the two NGFWs, and all of my security policies were enforced.


Figure 1: Network traffic spike

Scenario Two: Endurance Testing

Handling an isolated event is interesting, but maintaining security effectiveness over long periods of time is crucial for a deployed security solution. In the next scenario, I ran all of the applications I anticipated on my network at 11Gbps for 60 hours. The packet broker gave each of my NGFWs just over 5Gbps of traffic, allowing all of my policies to be enforced. Of the 625 million application transactions attempted throughout the duration of the test, users enjoyed a 99.979% success rate.


Figure 2: Applications executed during 60 hour endurance test

Scenario Three: Attack Traffic

Where the rubber meets the road for a security solution is during an attack. Security solutions are insurance policies against network failure, data exfiltration, misuse of your resources, and loss of reputation. I created a 10Gbps baseline of the user traffic (described in Figure 2) and added a curveball by launching 7261 remote exploits from one zone to another. Had these events not been load balanced with the packet broker, a single NGFW might have borne the entire brunt of the attack. It could have been overwhelmed and failed to enforce policies, or been under such duress mitigating the attacks that legitimate users became collateral damage. The deployed solution performed excellently, mitigating all but 152 of my attacks.

Concerning the 152 missed attacks: the load-testing tool’s library contains a comprehensive set of undisclosed exploits. That said, as with the 99.979% application success rate seen during the endurance test, nothing is infallible. If my test had shown 100% success, I wouldn’t believe it, and neither should you.


Figure 3: Attack success rate

Scenario Four: The Kitchen Sink

Life would indeed be rosy if the totality of a content-aware security solution was simply making decisions between legitimate users and known exploits. For my final test I added another wrinkle: the solution also had to deal with a large volume of fuzzing on top of my existing deluge of real users and attacks. Fuzzing is the practice of sending intentionally flawed network traffic through a device or at an endpoint in the hope of uncovering a bug that could lead to a successful exploit. Fuzzed traffic can range from incorrectly advertised packet lengths to erroneously crafted application transactions; my test included those two extremes and everything in between. The goal of this test was stability. I achieved this by mixing 400Mbps of pure chaos from the load-testing tool’s fuzzing engine with Scenario Three’s 10Gbps of real user traffic and exploits. I wanted to make certain that my load-balanced pair of NGFWs was not going to topple over when the unexpected took place.

The results were also exceptionally good. Of the 804 million application transactions my users attempted, I only had 4.5 million go awry—leaving me with a 99.436% success rate. This extra measure of maliciousness only changed the user experience by increasing the failures by about ½ of a percent. Nothing crashed and burned.
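As a quick sanity check on those figures, using the rounded transaction counts quoted above (the exact failure count behind the reported 99.436% is slightly higher than the rounded 4.5 million):

```python
attempted = 804_000_000  # application transactions attempted in the Kitchen Sink test
failed = 4_500_000       # transactions that went awry (rounded)

success_rate = (attempted - failed) / attempted * 100
print(f"{success_rate:.3f}% success")  # ~99.440% with rounded inputs; the report cites 99.436%
```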


Figure 4: Application Success rates during the “Kitchen Sink” test

Conclusion

All four of the above scenarios illustrate how you can enhance the effectiveness of a security solution while maximizing your budget. However, we are only scratching the surface. What if you needed your security solution to be deployed in a High Availability environment? What if the traffic your network services expands? Setting up the packet broker to operate in HA, or adding additional inline security solutions to be load balanced, is probably the most effective and affordable way of addressing these issues.

Let us know if you are interested in seeing a live demonstration of a packet broker load balancing attacks from a security testing tool across multiple inline security solutions. We would be happy to show you how it is done.

Additional Resources:

Network Packet Brokers

CyPerf

Tracking the Evolution of UC Technology

Defining unified communications is more complicated than it seems, but a thorough understanding of UC technology is required before informed buying decisions can be made. Not only is the UC value proposition difficult to articulate, but it involves multiple decisions that impact both the IT group and end users.

In brief, UC is a platform that seamlessly integrates communications applications across multiple modes — such as voice, data and video — and delivers a consistent end-user experience across various networks and endpoints. While this describes UC’s technical capabilities, its business value is enabling collaboration, improving personal productivity and streamlining business processes.

At face value, this is a compelling value proposition, but UC offerings are not standardized and are constantly evolving. All vendors have similar core features involving telephony and conferencing, but their overall UC offerings vary widely with new capabilities added regularly.

No true precedent exists to mirror UC technology, which is still a fledgling service. The phone system, however, may be the closest comparison — a point reinforced by the fact that the leading UC vendors are telecom vendors.

But while telephony is a static technology, UC is fluid and may never become a finished product like an IP PBX. As such, to properly understand UC, businesses must abandon telecom-centric thinking and view UC as a new model for supporting all modes of communication.

UC technology blends telephony, collaboration, cloud

UC emerged from the features and limitations of legacy technology. Prior to VoIP, phone systems operated independently, running over a dedicated voice network. Using packet-switched technology, VoIP allowed voice to run on the LAN, sharing a common connection with other communications applications.

For the first time, telephony could be integrated with other modes, and this gave rise to unified messaging. This evolution was viewed as a major step forward by creating a common inbox where employees could monitor all modes of communications.

UC took this development further by allowing employees to work with all available modes of communication in real time. Rather than just retrieve messages in one place, employees can use UC technology to conference with others on the fly, share information and manage workflows — all from one screen. Regardless of how many applications a UC service supports, a key value driver is that employees can work across different modes from various locations with many types of devices.

Today’s UC offerings cover a wide spectrum, so businesses need a clear set of objectives. In most cases, VoIP is already being used, and UC presents an opportunity to get more value from voice technology.

To derive that value, the spectrum of UC needs to be understood in two ways. First, think of UC as a communications service rather than a telephony service. VoIP will have more value as part of UC by embedding voice into other business applications and processes and not just serving as a telephony system. In this context, UC’s value is enabling new opportunities for richer communication rather than just being another platform for telephony.

Secondly, the UC spectrum enables both communication and collaboration. Most forms of everyday communication are one on one, and UC makes this easier by providing a common interface so users don’t have to switch applications to use multiple modes of communication. Collaboration takes this communication to another level when teams are involved.

A major inhibitor of group productivity has long been the difficulty of organizing and managing a meeting. UC removes these barriers and makes the collaboration process easier and more effective.

Finally, the spectrum of UC is defined by the deployment model. Initially, UC technology was premises-based because it was largely an extension of an enterprise’s on-location phone system. But as the cloud has gained prominence, UC vendors have developed hosted UC services — and this is quickly becoming their model of choice.

Most businesses, however, aren’t ready for a full-scale cloud deployment and are favoring a hybrid model where some elements remain on-premises while others are hosted. As such, UC vendors are trying to support the market with a range of deployment models — premises-based, hosted and hybrid.

How vendors sell UC technology

Since UC is not standardized, vendors sell it in different ways. Depending on the need, UC can be sold as a complete service that includes telephony. In other cases, the phone system is already in place, and UC is deployed as the overriding service with telephony attached. Most UC vendors are also providers of phone systems, so for them, integrating these elements is part of the value proposition.

These vendors, however, are not the only option for businesses. As cloud-based UC platforms mature, the telephony pedigree of a vendor becomes less critical.

Increasingly, service providers are offering hosted UC services under their own brand. Most providers cannot develop their own UC platforms, so they partner with others. Some providers partner with telecom vendors to use their UC platforms, but there is also a well-established cadre of third-party vendors with UC platforms developed specifically for carriers.

Regardless of who provides the platform, deploying UC is complex and usually beyond the capabilities of IT.

Most UC services are sold through channels rather than directly to the business. In this case, value-added resellers, systems integrators and telecom consultants play a key role, as they have expertise on both sides of the sale. They know the UC landscape, and this knowledge helps determine which vendor or service is right for the business and its IT environment. UC providers tend to have more success when selling through these channels.

Why businesses deploy UC services

On a basic level, businesses deploy UC because their phone systems aren’t delivering the value they used to. Telephony can be inefficient, as many calls end up in voicemail, and users waste a lot of time managing messages. For this reason, text-based modes such as chat and messaging are gaining favor, as is the general shift from fixed line to mobile options for voice.

Today, telephony is just one of many communication modes, and businesses are starting to see the value of UC technology as a way to integrate these modes into a singular environment.

The main modes of communication now are Web-based and mobile, and UC provides a platform to incorporate these with the more conventional modes of telephony. Intuitively, this is a better approach than leaving everyone to fend for themselves to make use of these tools. But the UC value proposition is still difficult to express.

UC is a productivity enabler — and that’s the strongest way to build a business case. However, productivity is difficult to measure, and this is a major challenge facing UC vendors. When deployed effectively, UC technology makes for shorter meetings, more efficient decisions, fewer errors and lower communication costs, among other benefits.

All businesses want these outcomes, but very few have metrics in place to gauge UC’s return on investment. Throughout the rest of this series, we will examine the most common use cases for UC adoption and explore the major criteria to consider when purchasing a UC product.

Thanks to Unified Communications for the article. 

Ixia Exposes Hidden Threats in Encrypted Mission-Critical Enterprise Applications

Delivers industry’s first visibility solution that includes stateful SSL decryption to improve application performance and security forensics

Ixia (Nasdaq: XXIA), a leading provider of application performance and security resilience solutions, announced it has extended its Application and Threat Intelligence (ATI) Processor™ to include stateful, bi-directional SSL decryption capability for application monitoring and security analytics tools. Stateful SSL decryption provides complete session information to better understand the transaction as opposed to stateless decryption that only provides the data packets. As the sole visibility company providing stateful SSL decryption for these tools, Ixia’s Visibility Architecture™ solution is more critical than ever for enterprise organizations looking to improve their application performance and security forensics.

“Together, FireEye and Ixia offer a powerful solution that provides stateful SSL inspection capabilities to help protect and secure our customer’s networks,” said Ed Barry, Vice President of Cyber Security Coalition for FireEye.

As malware and other indicators of compromise are increasingly hidden by SSL, decryption of SSL traffic for monitoring and security purposes is now more important for enterprises. According to Gartner research, for most organizations, SSL traffic is already a significant portion of all outbound Web traffic and is increasing. It represents on average 15 percent to 25 percent of total Web traffic, with strong variations based on the vertical market.1 Additionally, compliance regulations such as the PCI-DSS and HIPAA increasingly require businesses to encrypt all sensitive data in transit. Finally, business applications like Microsoft Exchange, Salesforce.com and Dropbox run over SSL, making application monitoring and security analytics much more difficult for IT organizations.

Enabling visibility without borders – a view into SSL

In June, Ixia enabled seamless visibility across physical, virtual and hybrid cloud data centers. Ixia’s suite of virtual visibility products allows insight into east-west traffic running across the modern data center. The newest update, which includes stateful SSL decryption, extends security teams’ ability to look into encrypted applications revealing anomalies and intrusions.

Visibility for better performance – improve what you can measure

While it may enhance security of transferred data, encryption also limits network teams’ ability to inspect, tune and optimize the performance of applications. Ixia eliminates this blind spot by providing enterprises with full visibility into mission critical applications.

The ATI Processor works with Ixia’s Net Tool Optimizer® (NTO™) solution and brings a new level of intelligence to network packet brokers. It is supported by the Ixia Application & Threat Intelligence research team, which provides fast and accurate updates to application and threat signatures and application identification code. Additionally, the new capabilities will be available to all customers with an ATI Processor and an active subscription.

To learn more about Ixia’s latest innovations read:

ATI processor

Encryption – The Next Big Security Threat

Thanks to Ixia for the article. 

How Not to Rollout New Ideas, or How I Learned to Love Testing

I was recently reading an article in TechCrunch titled “The Problem With The Internet Of Things,” where the author lamented how bad design or rollout of good ideas can kill promising markets. In his example, he discussed how turning on the lights in a room, through the Internet of Things (IoT), became a five step process rather than the simple one step process we currently use (the light switch).

This illustrates the gap between the grand idea and the practicality of the market: it’s awesome to contemplate a future where exciting technology impacts our lives, but only if the realities of everyday use are taken into account. As he effectively states, “Smart home technology should work with the existing interfaces of household objects, not try to change how we use them.”

Part of the problem is that the IoT is still just a nebulous concept. Its everyday implications haven’t been worked out. What does it mean when all of our appliances, communications, and transportation are connected? How will they work together? How will we control and manage them? Details about how the users of exciting technology will actually participate in the experience are the real driver of technology success. And too often, this aspect is glossed over or ignored.

And, once everything is connected, will those connections be a door for malware or hacktivists to bypass security?

Part of the solution to getting new technology to customers in a meaningful way (one that delivers both a quality end-user experience and a profitable model for the provider) is network validation and optimization. Application performance and security resilience are key when rolling out, providing, integrating or securing new technology.

What do we mean by these terms? Well:

  • Application performance means we enable successful deployments of applications across our customers’ networks
  • Security resilience means we make sure customer networks are resilient to the growing security threats across the IT landscape

Companies deploying applications and network services—in a physical, virtual, or hybrid network configuration—need to do three things well:

  • Validate. Customers need to validate their network architecture to ensure they have a well-designed network, properly provisioned, with the right third party equipment to achieve their business goals.
  • Secure. Customers must secure their network performance against all the various threat scenarios—a threat list that grows daily and impacts their end users, brand, and profitability.

(Just over last Thanksgiving weekend, Sony Pictures was hacked and five of its upcoming pictures leaked online—with the prime suspect being North Korea!)

  • Optimize. Customers seek network optimization by obtaining solutions that give them 100% visibility into their traffic—eliminating blind spots. They must monitor applications traffic and receive real-time intelligence in order to ensure the network is performing as expected.

Ixia helps customers address these pain points, and achieve their networking goals every day, all over the world. This is the exciting part of our business.

When we discuss solutions with customers, no matter who they are— Bank of America, Visa, Apple, NTT—they all do three things the same way in their networks:

  • Design—Envision and plan the network that meets their business needs
  • Rollout—Deploy network upgrades or updated functionality
  • Operate—Keep the production network seamlessly providing a quality experience

These are the three big lifecycle stages for any network design, application rollout, security solution, or performance design. Achieving these milestones successfully requires three processes:

  • Validate—Test and confirm design meets expectations
  • Secure— Assess the performance and security in real-world threat scenarios
  • Optimize— Scale for performance, visibility, security, and expansion

So when it comes to new technology and new applications of that technology, we are in an amazing time—evidenced by the fact that nine billion devices will be connected to the Internet in 2018. Examples of this include Audio Video Bridging, Automotive Ethernet, Bring Your Own Apps (BYOA), etc. Ixia sees only huge potential. Ixia is a first line defense to creating the kind of quality customer experience that ensures satisfaction, brand excellence, and profitability.

Additional Resources:

Article: The Problem With The Internet Of Things

Ixia visibility solutions

Ixia security solutions

Thanks to Ixia for the article.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

When it comes to IT services, business value and user satisfaction are both dependent upon the server, network, and applications all working together seamlessly.

Failure to adequately monitor each of these and their interactions means that you could be flying blind – susceptible to degraded service levels.

While application and network monitoring receive a lot of the attention, it is important to also understand what’s going on with the server.

Virtualization changes the face of service delivery

The environment in which modern services run is complex. Superficially, it appears as though we’ve traveled back to the 1960s, with data centers again appearing like big monolithic constructs (whether cloud or internally hosted) with highly-virtualized server farms connecting through large core networks.

The emergence of virtualized clients (with most computing done remotely) takes the analogy a step further and makes it feel as if we are on the set of “Mad Men” with the old dumb terminals connected to the mainframe.

But that may be where the analogy ends. Today’s IT service delivery is almost never performed in a homogeneous vendor setting—from a hardware or software perspective. Likewise, the diversity of complex multi-tier applications and methods by which they are accessed continues to proliferate.

To learn more, download the white paper.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

Thanks to Network Instruments for the article.

State of Networks: Faster, but Under Attack

Two recent studies that look at the state of mobile and fixed networks show that while networks are getting ever faster, security is a paramount concern that is taking up more time and resources.

Akamai recently released its fourth quarter 2014 State of the Internet report. Among the findings:

  • In terms of network security, high tech and public sector targets saw increased numbers of attacks from 2013 to 2014, while enterprise targets had fewer attacks over the course of the year – except in Q4, when the commerce and enterprise segments were the most frequently targeted.

“Attacks against public sector targets reported throughout 2014 appear to be primarily motivated by political unrest, while the targeting of the high tech industry does not appear to be driven by any single event or motivation,” Akamai added.

  • Akamai customers saw DDoS attacks up 20% from the third quarter, although the overall number of such attacks held steady from 2013 to 2014 at about 1,150.
  • Average mobile speeds differ widely on a global basis, from 16 megabits per second in the U.K., to 1 Mbps in New Caledonia. Average peak mobile connection speeds continue to increase, from a whopping 157.3 Mbps in Singapore, to 7.5 Mbps in Argentina. And Denmark, Saudi Arabia, Sweden and Venezuela had 97% of unique IP addresses from mobile providers connect to Akamai’s network at speeds faster than the 4 Mbps threshold that is considered the minimum for “broadband.”

Meanwhile, Network Instruments, part of JDSU, recently completed its eighth annual survey of network professionals. It found that security is an increasing area of focus for network teams and that they are spending an increasing amount of time focused on security incidents and prevention.

NI reported that its survey found that the most commonly reported network security challenge is correlating security issues with network performance (reported by 50% of respondents), while the most common method for identifying security issues is “syslogs” (used by 67% of respondents). Other methods included simple network management protocol and tracking performance anomalies, while long-term packet capture and analysis was used by slightly less than half of the survey participants – 48%. Network Instruments said that relatively low utilization of long-term packet capture makes it “an under-utilized resource in security investigations” and that “replaying the events would provide greater context” for investigators.

NI also found that “application overload” is driving a huge increase in bandwidth use expectations, due to users accessing network resources and large files with multiple devices; real-time unified communications applications that require more bandwidth; as well as private cloud and virtualization adoption. See Network Instrument’s full infographic below:

Network Instruments' State of the Network infographic

Thanks to RCR Wireless News for the article.

Virtual Server Rx


The ongoing push to increase server virtualization rates is driven by its many benefits for the data center and business. A reduction in data center footprint and maintenance, along with capital and operating cost reductions are key to many organizations’ operational strategy. The ability to dynamically adjust server workloads and service delivery to achieve optimal user experience is a huge plus for IT teams working in the virtualized data center – unless something goes wrong.

With network infrastructure, you can usually track the root cause of a north/south issue back to one location via careful instrumentation of the resources. Troubleshooting is then facilitated with any number of free and commercial monitoring tools.

How can you get the same visibility you need to validate service health within the virtualized data center?

First Aid for the Virtual Environment

Network teams often act as “first responders” when application performance degrades. For this reason, it’s critical to maintain visibility into and around virtual constructs for effective troubleshooting and optimal service delivery. Otherwise, much of the value of server virtualization and consolidation efforts can be quickly offset by sub-par app performance. Fundamentally, achieving comprehensive visibility of a virtualized server environment requires an understanding of the health of the underlying resources.

Health Checks

Polling technologies such as SNMP, WSD, and WMI can provide performance insight by interrogating the host and various virtualized elements. A fully-integrated performance management platform can not only provide these views, but also display relevant operating metrics in a single, user-friendly dashboard.

Virtual servers are often highly provisioned and operating at elevated utilization levels. Assessing their underlying health, and adding resources when necessary, is essential for peak performance.

Use performance monitoring tools to check:

  • CPU Utilization
  • Memory Usage
  • Individual VM Instance Status

Often, these metrics can point to the root cause of service issues that may otherwise manifest themselves indirectly.

For example, poor response time of an application hosted on a virtualized server may have nothing to do with the service or the network, but may instead be tied to excessively high CPU utilization. Without this monitoring perspective, troubleshooting will be more difficult and time consuming.
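As an illustration of this kind of health check, the sketch below samples CPU and memory utilization against alert thresholds using the psutil library. psutil and the threshold values are illustrative assumptions here; the article’s approach polls hosts remotely via SNMP/WMI rather than locally:

```python
import psutil  # third-party library: pip install psutil

CPU_THRESHOLD = 85.0  # percent; illustrative alert level
MEM_THRESHOLD = 90.0  # percent; illustrative alert level

def check_host_health():
    """Sample CPU and memory utilization and warn when either crosses its threshold."""
    cpu = psutil.cpu_percent(interval=1)   # averaged over a one-second sample
    mem = psutil.virtual_memory().percent
    if cpu > CPU_THRESHOLD:
        print(f"WARNING: CPU at {cpu:.1f}% - may explain slow application response")
    if mem > MEM_THRESHOLD:
        print(f"WARNING: memory at {mem:.1f}% - consider adding resources to the host")
    return cpu, mem

if __name__ == "__main__":
    check_host_health()
```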

Further Diagnostics

Virtualization and consolidation offer significant upside for today’s dynamic data center model and for achieving optimal IT business service delivery. However, monitoring visibility must be maintained so potential application degradation issues can be detected and resolved before impacting the end user. To do so, care must be taken to properly instrument virtualized server deployments and the supporting network infrastructure.

Ready for more? Download the free white paper 3 Steps to Server Virtualization Visibility, featuring troubleshooting diagrams, metrics, and detailed strategies to help diagnose what’s really going on in your virtual data centers. You’ll learn two methods to monitor VSwitch traffic, as well as how to further inspect perimeter and client conversations.

Download 3 Steps to Server Virtualization Visibility

Thanks to Network Instruments for the article. 

The Advantages and Disadvantages of Network Monitoring


Implementing network monitoring is nothing new for large enterprise networks, but for small to medium-sized businesses a comprehensive monitoring solution often does not fit within their limited budgets.

Network monitoring involves a system that keeps track of the status of the various elements within a network; this can be something as simple as using ICMP (ping) traffic to verify that a device is responsive. However, the more comprehensive options offer a much deeper perspective on the network.

Such elements include:

  • Predictive Analysis
  • Root Cause Analysis
  • Alarm Management
  • SLA Monitoring and Measurement
  • Configuration Management
  • NetFlow Analysis
  • Network Device Backup

All of these elements are key driving factors for any business. If you have read any of my previous blogs, you will be aware of the three clear benefits of using a network monitoring system:

  1. Cost savings
  2. Speed
  3. Flexibility

However, there are a few small cons to consider on this topic.

Security

The security of any solution that requires public connectivity is of the utmost importance; using a cloud network monitoring solution means placing a great deal of trust in the cloud provider. If you trust your supplier, there should be no reason to worry.

Connectivity

With network monitoring applications that are deployed in-house, the systems themselves typically sit at the most central part of an organization’s network.

With a cloud-based solution, the connection to an external entity is not as straightforward, and the risk of losing access to managed elements is a real possibility. However, if your provider has an automatic backup device installed, this should not deter you.

Performance

This is closely tied to connectivity: the difference between the bandwidth available to an in-house system and that available to an external cloud option for reaching the managed elements can be significant.

If the number of elements that require management is large, it is best to find a cloud option that offers the deployment of an internal collector that sits inside the premises of the organization.

There are certainly a number of advantages that come with cloud offerings which make them very attractive, especially to smaller organizations; however it is important to analyze the whole picture before making a decision.

If you would like to find out more about this topic, why not schedule a one-on-one technical discussion with our experienced technical engineers?

Telnet Networks- Contact us to schedule a live demo

Thanks to NMSaaS for the article.

Application Performance Monitoring


Your network infrastructure exists for one reason: to deliver the services and applications that matter to your customers who demand access now, without interruption. Anything that affects your ability to reach customers has a serious impact on your bottom line.

High-quality application performance requires real-time awareness of what’s happening on the network. Network operators need to monitor, analyze, and report on transactions throughout the IT environment—whether physical, virtual, or in the cloud—to identify issues quickly and resolve problems before they disrupt critical services. This means understanding dependencies between applications and the network, being alerted to issues before business is affected, and accelerating troubleshooting.

For most businesses, network performance must now be evaluated and managed from an application perspective. To accomplish this, you need innovative transaction performance management capabilities that help prioritize problem resolution according to business impact.

Ixia Application Performance Monitoring (APM)

Ixia offers a spectrum of intelligent APM capabilities that work with monitoring devices to capture and analyze network traffic in a scalable solution. Ixia APM solutions accurately, efficiently, and non-disruptively direct out-of-band network traffic from multiple access points, whether SPAN ports or TAPs, to the monitoring device for analysis. The result is application awareness that dramatically raises network performance, availability, and security.

Ixia APM enables:

  • Full network visibility. Ixia’s APM solutions deliver all required traffic from anywhere in the network to the monitoring tools, allowing fully 100 percent of traffic to be monitored and analyzed.

  • Simplified deployment. Flexible enough to work in any network environment, Ixia’s APM shares access with deployed monitoring and security tools.
  • Streamlined scalability. Ixia’s APM allows you to add 1GE, 10GE, 40GE, or 100GE ports, with filters dynamically adjusted to meet bandwidth requirements.
  • Effective security. Ixia’s APM automatically directs traffic as needed to a centralized “farm” of cost effective, high-capacity security tools to monitor distributed buildings and network segments. Traffic of interest is returned to the security tool farm for inspection.
  • Advanced Automation. Ixia’s APM solutions automatically respond in real time to network events that have an impact on applications, including event recording, security analysis, and traffic redirection. This capability improves application performance and availability.

Highlights of Ixia APM

Ixia APM’s advanced filtering capabilities work easily with your own monitoring systems across a range of applications. Additionally, our APM performs:

  • Load-balancing of traffic across multiple monitoring input ports
  • Dynamic tightening of filters as needed to ensure that key transactions are always analyzed when total traffic spikes over 10Gbps
  • Traffic redirection among multiple monitoring appliances on a network to provide high availability
  • Packet capture on demand, based upon NMS/SIEM alerts (see the sketch after this list)
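As a conceptual illustration of that last item, packet capture on demand can be as simple as launching a time-limited capture when an alert arrives. The sketch below uses tcpdump with a placeholder alert format; it is not Ixia’s API:

```python
import subprocess
from datetime import datetime

def capture_on_alert(alert, seconds=60, interface="eth0"):
    """Start a time-limited tcpdump capture filtered to the host named in an alert."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    pcap = f"/tmp/alert-{alert['id']}-{stamp}.pcap"
    # -G/-W rotate once after `seconds` and then exit; tcpdump normally requires root.
    subprocess.run([
        "tcpdump", "-i", interface,
        "-G", str(seconds), "-W", "1",
        "-w", pcap,
        "host", alert["src_ip"],
    ], check=True)
    return pcap

# Example alert as it might arrive from an NMS/SIEM webhook (placeholder values)
print(capture_on_alert({"id": "12345", "src_ip": "198.51.100.23"}))
```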

Related Products

 


Net Tool Optimizers
Out-of-band traffic aggregation, filtering, dedup, load balancing

Net Optics Network Taps
Passive network access for security and monitoring tools

Phantom Virtualization Tap
Passive network access to traffic passing between VMs

Thanks to Ixia for the article.