Ixia Has Your Secret Weapon Against SSL Threats

It has finally happened: thanks to advances in encryption, legacy security and monitoring tools are now useless when it comes to SSL. Read this white paper from Ixia to learn how this negatively impacts visibility into network applications such as e-mail, e-commerce, online banking, and data storage. Even worse, advanced malware increasingly uses SSL sessions to hide, confident that security tools will neither inspect nor block its traffic.

Consider the following challenges:

  • Visibility into ephemeral key traffic
  • Coping with CPU-intensive encryption and decryption tasks
  • Chaining and handling multiple security tools
  • Meeting the demands of regulatory compliance

The very technology that made our applications secure is now a significant threat vector. The good news is that there is an effective solution to all of these problems. Learn how to eliminate SSL-related threats in this white paper.

Thanks to Ixia for this article

Private Cloud: The ABCs of Network Visibility

Cloud computing has become the de facto foundation for digital business. As more and more enterprises move critical workloads to private and public clouds, they will face new challenges ensuring security, reliability, and performance of these workloads. If you are responsible for IT security, data center operations, or application performance, make sure you can see what’s happening in the cloud. This is the first of two blogs on the topic of cloud visibility and focuses on private cloud.

VISIBILITY CHALLENGES

If you are wondering why cloud visibility is important, consider the following visibility-related concerns that can occur in private cloud environments.

1. Security blind spots. Traditional security monitoring relies on intercepting traffic as it flows through physical network devices. In virtualized data centers and private clouds, this model breaks down because many packets move between virtual machines (VMs) or application instances and never cross a physical “wire” where they can be tapped for inspection. Because of these blind spots, virtual systems can be tempting targets for malicious breaches.

2. Tools not seeing all relevant data. The point of visibility is not merely to see cloud data, but to export that data to powerful analytics and reporting tools. Tools that receive only a limited view of traffic will have a harder time analyzing performance issues or resolving latency issues, especially as cloud traffic increases. Without access to data from cloud traffic, valuable clues to performance issues may not be identified, which can delay problem resolution or impact the user experience.

3. Security during data generation. Some organizations may use port mirroring in their virtualization platform to access traffic moving between virtual machines. However, this practice can create security issues in highly-regulated environments. Security policies need to be consistently applied, even as application instances move within the cloud environment.

4. Complexity of data collection. With multiple data center and cloud environments, gathering all the relevant data needed by security and monitoring tools becomes complex and time-consuming. Solutions that make it easy to collect traffic from cloud and non-cloud sources can lead to immediate operational savings.

5. Cost of monitoring in the data center. The total cost of a private cloud will rise with the volume of traffic that needs to be transported back to the data center for monitoring. The ability to filter cloud traffic at its source can minimize backhaul and the workload on your monitoring tools.

CLOUD VISIBILITY USE CASES

Given these issues, better visibility can provide valuable benefits to an organization, particularly in:

Security and compliance: Keeping your defenses strong in the cloud, as you do in the data center, requires end-to-end visibility for adequate monitoring and control. Packets that are not inspected represent unnecessary risk to the organization and can harbor malware or other attacks. Regulatory compliance may also require proof that you have secured data as it moves between virtual instances.

Performance analytics: As with security, analysis is dependent on having the necessary data—before, during, and after cloud migration. Your monitoring tools must receive the right inputs to produce accurate insights and to quickly detect and isolate performance problems.

Troubleshooting: If an application that runs in your virtual data center experiences an unusual slow-down, how will you pinpoint the source of the problem? Packet data combined with application-layer intelligence can help you isolate traffic associated with specific combinations of application, user, device, and geolocation, to reduce your mean-time-to-resolution.

In each of these areas, you need the ability to see all of the traffic moving between virtual resources. Without full visibility to what’s happening in your clouds, you increase your risk for data breaches, delays in problem resolution, and loss of productivity or customer satisfaction.

VISIBILITY SOLUTIONS

 So, if cloud visibility is essential to security and application performance, what can you do to address the blind spots that naturally occur? Here are a few things to look for:

Virtual Taps 

Tapping is the process of accessing virtual or cloud packets in order to send them to security and performance monitoring tools. In traditional environments, a physical tap accesses traffic flowing through a physical network switch. In cloud environments, a virtual tap is deployed as a virtual instance in the hypervisor and:

  • Accesses all traffic passing between VMs or application instances
  • Provides basic (Layer 2-4) filtering of virtual traffic

For maximum flexibility, you should choose virtual taps, like those in Ixia CloudLens Private, that support all the leading hypervisors, including OpenStack KVM, VMware ESXi/NSX, and Microsoft Hyper-V, and that are virtual-switch agnostic.
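
To make the idea of basic Layer 2-4 filtering concrete, here is a minimal Python sketch (illustrative only, not Ixia code) that reads raw Ethernet frames on a Linux interface and keeps only IPv4 TCP traffic on a chosen port. The interface name and port number are placeholder assumptions.

```python
# Minimal illustration of Layer 2-4 filtering, as a virtual tap might apply it.
# Assumptions: Linux, root privileges; "eth0" and port 443 are placeholders.
import socket
import struct

ETH_P_ALL = 0x0003          # capture every protocol
ETH_HEADER_LEN = 14
TARGET_TCP_PORT = 443       # hypothetical filter: keep only TCP/443 (SSL/TLS)

def matches_filter(frame: bytes) -> bool:
    """Return True if the frame is IPv4 TCP traffic to or from TARGET_TCP_PORT."""
    if len(frame) < ETH_HEADER_LEN + 20:
        return False
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:                      # not IPv4
        return False
    ip_header = frame[ETH_HEADER_LEN:]
    ihl = (ip_header[0] & 0x0F) * 4              # IP header length in bytes
    if ip_header[9] != 6:                        # not TCP
        return False
    tcp_ports = ip_header[ihl:ihl + 4]
    if len(tcp_ports) < 4:
        return False
    src_port, dst_port = struct.unpack("!HH", tcp_ports)
    return TARGET_TCP_PORT in (src_port, dst_port)

def tap(interface: str = "eth0") -> None:
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    sock.bind((interface, 0))
    while True:
        frame, _ = sock.recvfrom(65535)
        if matches_filter(frame):
            # A real tap would forward the frame to a monitoring tool;
            # here we simply report that it matched the filter.
            print(f"matched frame of {len(frame)} bytes")

if __name__ == "__main__":
    tap()
```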

Virtual Packet Processors 

Packet processing is used for more advanced manipulation of packets, trimming the data down to only what is necessary for maximum tool efficiency. Look for solutions that provide data aggregation, deduplication, NetFlow generation, and SSL decryption. Ixia CloudLens Private packet processing can also do more granular filtering, using application intelligence to identify traffic by application, user, device, or geolocation. You can do advanced packet processing with a physical packet broker by transmitting your cloud data back to the data center; teams that already have physical packet brokers in place, or are new to monitoring cloud traffic, may choose this approach. Another approach is to perform advanced packet processing right in the cloud. Only Ixia offers this all-cloud solution. With this option, you can send trimmed data directly to cloud-based security or analysis tools, eliminating the need for backhaul to the data center. This can be an attractive option for organizations with extremely high traffic volume.
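
As a rough illustration of one of those functions, the sketch below deduplicates packets by hashing each packet's bytes and dropping repeats seen within a short time window. It is a conceptual example, not CloudLens code; the window length is an arbitrary assumption, and real packet processors typically hash selected header fields rather than the whole frame.

```python
# Conceptual packet deduplication: drop copies of a packet seen within a short window.
import hashlib
import time

DEDUP_WINDOW_SECONDS = 2.0   # arbitrary assumption for the example

class Deduplicator:
    def __init__(self, window: float = DEDUP_WINDOW_SECONDS):
        self.window = window
        self._seen: dict[str, float] = {}   # packet digest -> last time seen

    def is_duplicate(self, packet: bytes) -> bool:
        now = time.monotonic()
        digest = hashlib.sha1(packet).hexdigest()
        # Expire old entries so memory stays bounded.
        self._seen = {d: t for d, t in self._seen.items() if now - t <= self.window}
        duplicate = digest in self._seen
        self._seen[digest] = now
        return duplicate

# Usage: feed every captured packet through the deduplicator and forward
# only the packets that are not duplicates.
dedup = Deduplicator()
for pkt in [b"abc", b"abc", b"def"]:
    if not dedup.is_duplicate(pkt):
        print("forward packet:", pkt)
```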

Common Management Interface

Deploying cloud is complicated enough without having to worry about how to get an integrated view across physical and virtual traffic. Ixia’s CloudLens solution provides a comprehensive graphical view of all your network traffic, from all sources. With the power of application intelligence, the Ixia dashboard can tell you where all your traffic is coming from, which applications and locations are the most active, and which operating systems and devices are on the network—valuable information for performance management.

SUMMARY

 As you move more workloads to private cloud environments, be sure to consider a visibility solution that will let you access and visualize your cloud traffic. Don’t let blind spots in your network result in security breaches, application bottlenecks, or dissatisfied users.

Thanks to Ixia and author Lora O’Haver for this article.

Viavi: Nearly 90 Percent of Enterprise Network Teams Spend Time Troubleshooting Security Issues; 80 Percent Report More Time Spent on Security vs. Last Year

Tenth Annual “State of the Network” Global Survey from Viavi Reveals Network and Security Trends from over 1,000 Network Professionals

In April 2017, Viavi Solutions (NASDAQ: VIAV) released the results of its tenth annual State of the Network global study. This year’s study focused on security threats, perhaps explaining why it garnered the highest response rate in the survey’s history. Respondents included 1,035 CIOs, IT directors, and network engineers around the world. The study is now available for download.

“As our State of the Network study shows, enterprise network teams are expending more time and resources than ever before to battle security threats. Not only are they faced with a growing number of attacks, but hackers are becoming increasingly sophisticated in their methods and malware,” said Douglas Roberts, Vice President and General Manager, Enterprise & Cloud Business Unit, Viavi Solutions. “Dealing with these types of advanced, persistent security threats requires planning, resourcefulness and greater visibility throughout the network to ensure that threat intelligence information is always at hand.”

Highlights of the 2017 study include:

  • Network team members’ involvement in security: Eighty-eight percent of respondents say they are involved in troubleshooting security-related issues. Of those, nearly 80 percent report an increase in the time they spend on such issues, with nearly three out of four spending up to 10 hours a week on them.
  • Evolution of security threats: When asked how the nature of security threats has changed in the past year, IT teams identified a rise in email and browser-based malware attacks (63 percent) and an increase in threat sophistication (52 percent). Nearly one in three also report a surge in distributed denial of service (DDoS) attacks.
  • Key sources of security insight: Syslogs were cited by nearly a third of respondents as the primary method for detecting security issues, followed by long-term packet capture and analysis (23 percent) and performance anomalies (15 percent).
  • Overall factors driving network team workload: Bandwidth usage in enterprises continues to surge, with two out of three respondents expecting bandwidth demand to grow by up to 50 percent in 2017. This trend is in turn driving increased adoption of emerging technologies including software-defined networks (SDN), public and private clouds and 100 Gb. Network teams are managing these major initiatives while simultaneously confronting an aggressive rise in security issues.

 “A combination of new technology adoption, accelerating traffic growth and mounting security risks has spawned unprecedented challenges throughout the enterprise market,” commented Shamus McGillicuddy, Senior Analyst at Enterprise Management Associates. “The need to detect and deal with security threats is notably complicated by the diverse mix of today’s enterprise traffic, which spans across virtual, public and hybrid cloud environments in addition to physical servers.”

Key takeaways: what should IT service delivery teams do?

  • Know your “normal” – Recognizing abnormal traffic is critical for pinpointing an ongoing attack or security issue. Start comparing network traffic and behavior over points in time, either manually with the freeware analyzer Wireshark or using automated benchmarking in commercial network performance monitoring and diagnostic (NPMD) tools (a minimal baselining sketch follows this list).
  • Speed discovery with traffic evidence – According to the recent Mandiant M-Trends report, the median number of days that attackers were present on a victim’s network before being discovered is still 146 days, despite the use of IDS and other traditional security tools. Using packet capture with retrospective analysis, network teams can rewind to the time of the incident(s) and track exactly what the hackers accessed.
  • Ensure long-term packet retention – For high-traffic enterprise, data center, or security forensics applications, a purpose-built appliance with its own analytics may be the next step. Depending on size and volume, there are appliances that can capture and store up to a petabyte of network traffic for later analysis, simplifying forensic investigation for faster remediation.
  • Facilitate effective network and security team cooperation – Ensure successful collaboration between network and security teams on investigations with documented workflows and integration between security, network forensics, and performance management tools.
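
To illustrate the “know your normal” idea from the first takeaway above, here is a minimal Python sketch that builds a traffic baseline from historical samples and flags intervals that deviate sharply from it. The sample figures and the 3-sigma threshold are arbitrary assumptions, not values from the Viavi study.

```python
# Simple baselining: flag traffic samples that deviate sharply from historical norms.
from statistics import mean, stdev

# Historical bytes-per-interval measurements for one link (hypothetical data).
history = [910_000, 880_000, 950_000, 905_000, 890_000, 940_000, 920_000]

baseline = mean(history)
spread = stdev(history)

def is_anomalous(sample: float, sigmas: float = 3.0) -> bool:
    """Return True if the sample sits more than `sigmas` standard deviations
    away from the historical baseline."""
    return abs(sample - baseline) > sigmas * spread

for current in (930_000, 2_400_000):
    status = "ANOMALY" if is_anomalous(current) else "normal"
    print(f"{current:>10} bytes/interval -> {status}")
```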

Thanks to Viavi for this article.

Infosim explains how the SaaS model of Network & Services Management can transform your IT operations

 It is really next to impossible to avoid hearing about how “The Cloud” is transforming business everywhere. Well-known technology companies from the old guard like IBM, Oracle, Microsoft, and Adobe, to established tech powerhouses Google and Amazon, to pretty much every new hot startup like Cloudera are all promoting how they are using the cloud to deliver their product using a SaaS delivery model. Infosim® is seeing this trend start to take hold in the Network & Services Management and IT Service Assurance space.

Our customers want to know if they would benefit from using a cloud/SaaS monitoring solution. So, why the big push to SaaS and away from traditional enterprise software sales? Well, from a business standpoint, a recurring revenue model is preferred by salespeople and Wall Street alike. It guarantees a revenue stream in the future while also making the customer interactions a little more “sticky” (to use a Silicon Valley term). But, while that may be great for the technology company, can the same be said for the end-user? Do their customers see an equal, or even greater, benefit from switching from a long-term contract to a more on-demand model? This article seeks to investigate the customer side of the equation to see if the change to SaaS really is a win-win for both the vendor and the customer.

1. Cost

Let’s begin by looking at cost. Make no mistake: the bottom line is always the most important factor in any comparison of delivery mechanisms. If customers can receive the same operational success at a lower price point, they will go that way every time.

The cost analysis can be somewhat complicated because, while the entry price is always significantly lower when purchasing software via SaaS, you also have to think about how that cost adds up if you continue to use the software for a long period of time. Thankfully, these types of cost analyses have been studied in depth over the last few years and the verdict is very clear: SaaS Total Cost of Ownership (TCO) is lower than that of the traditional way of purchasing. Most models have come to the conclusion that SaaS reduces total cost (including maintenance, upgrades, personnel, etc.) by between 35% and 50% compared with traditional on-premise application systems. This cannot be ignored and is (and will most probably continue to be) the main driver behind the explosion of SaaS deployments.
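
As a back-of-the-envelope illustration of how such a comparison is usually framed, a simple cumulative-cost calculation makes the gap easy to see. Every figure below is a hypothetical assumption, not Infosim or industry pricing.

```python
# Hypothetical 5-year TCO comparison: on-premise purchase vs. SaaS subscription.
# Every figure here is an illustrative assumption, not vendor pricing.
YEARS = 5

# On-premise: up-front license and hardware, plus annual maintenance and staff effort.
onprem_upfront = 250_000
onprem_annual = 60_000          # maintenance, upgrades, admin time, power/cooling

# SaaS: no up-front purchase, a recurring subscription that bundles operations.
saas_annual = 70_000

onprem_tco = onprem_upfront + onprem_annual * YEARS
saas_tco = saas_annual * YEARS

savings_pct = 100 * (onprem_tco - saas_tco) / onprem_tco
print(f"On-premise TCO over {YEARS} years: ${onprem_tco:,}")
print(f"SaaS TCO over {YEARS} years:       ${saas_tco:,}")
print(f"SaaS saves roughly {savings_pct:.0f}% in this example")
```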

2. Ease of deployment

The second most popular reason often cited for moving to SaaS is ease of deployment. This really comes down to the fact that when implementing a SaaS model, the end-user typically has to deploy far fewer compute and storage resources than with an on-premise application: fewer (or zero) servers, databases, storage arrays, etc. This means a smaller datacenter footprint, which means less power, space, cooling, and everything that goes with managing your own datacenter resources. These factors have cost implications in their own right, but beyond the financial benefits, this reduction in infrastructure means a much lower physical burden on the overall IT service organization. Many see this benefit as having an equal if not larger overall impact on the end-user than the pure financial reduction.

3. Increased flexibility

The third-biggest driver of SaaS from the customer point of view is typically thought to be the increased flexibility that this model delivers. Under the old model, once you purchase a software feature set, you are stuck with what you have committed to. If your needs change, you typically have to eat those sunk costs and then buy more/newer features to meet your changing needs. Conversely, with SaaS you can very rapidly turn features on or off, scale up or down and generally make changes to your systems almost on the fly. This makes management much more comfortable committing to a solution when they know that if their needs change quickly, their software vendor can change with them.

4. Availability

When evaluating any solution, one of the sometimes overlooked but important “abilities” to take into consideration is availability. The concept is simple: if the software is unavailable, your users cannot get their jobs done. The cost of downtime for manufacturers, financial institutions, and most other businesses can be staggering. It can be the difference between success and failure. This is why most companies spend a lot of time and money on creating disaster recovery plans, geo-redundant systems, and other solutions to ensure availability. SaaS has an inherent advantage due to the typically large scale of the software solution’s global footprint. AWS from Amazon, Azure from Microsoft, and others have spent huge sums of money to build out global datacenters with more redundancy than a typical organization could afford to build on its own. Even very large companies that could potentially build out these datacenters have begun to move their systems to the cloud once they realize the advantages of outsourcing availability.

5. Expertise

 Another consideration that may not come to mind initially is the availability of expertise to help solve issues or drive development of features. When many, many customers are using essentially the same services provided in the same way from the SaaS solution vendor, they tend to encounter the same problems (and find solutions) very quickly. And, those solutions tend to get posted to user community websites or YouTube etc. almost immediately. This means that as an end-user, when you have an issue, you can typically go online and find a fix or training almost immediately. This speeds up time to resolution dramatically.

6. Security

Many initially see security concerns as a reason against moving to the cloud. They see their very important data as being kept someplace outside their control, open to attacks and hackers. However, if you investigate most of the major well-known data breaches over the last few years, you see that the majority of them happened to organizations that were breached internally, not via large-scale cloud infiltrations. Many smaller and medium-sized organizations do not have the security expertise or budget to effectively block the advanced threats that are commonplace today. In fact, it tends to be the large cloud providers that more effectively create a security moat around important data than a smaller company could. SaaS vendors can apply a critical security patch to all of their customers in minutes, without having to rely on the end-user downloading and applying the fix themselves. This ultimately creates a much more secure environment for your data.

When combined, both the common features of SaaS along with some of the lesser-known benefits can add up to a complete and very positive transformation of delivering an IT service such as network management and service assurance.

Thanks to Infosim, and author Dietmar Kneidl

3 Key Differences Between NetFlow and Packet Capture Performance Monitoring

The increasing density, complexity and expanse of modern networking environments have fueled the ongoing debate around which network analysis and monitoring tools serve the needs of the modern engineer best – placing Packet Capture and NetFlow Analysis at center-stage of the conversation. Granted, both can be extremely valuable tools in ongoing efforts to maintain and optimize complex environments, but as an engineer, I tend to focus on solutions that give me the insights I need without too much cost on my resources, while complementing my team’s ability to maintain and optimize the environments we support.

So with this in mind, let’s take a look at how NetFlow, in the context of the highly-dense networks we find today, delivers three key requirements network teams rely on for reliable end-to-end performance monitoring of their environments.

A NetFlow deployment won’t drain your resources

Packet Capture, however rich in network metrics, requires sniffing devices and agents throughout the network, which invariably require some level of maintenance during their lifespan. In addition, the amount of space required to store and analyze packet data makes it an inefficient and inelegant method of monitoring or forensic analysis. Combine this with the levels of complexity networks can reach today, and the overall cost and maintenance associated with packet sniffers can quickly become unfeasible. In the case of NetFlow, its wide vendor support across virtually the entire networking landscape makes almost every switch, router, or firewall a NetFlow-ready device. Devices’ built-in readiness to capture and export data-rich metrics makes NetFlow easy for engineers to deploy and utilize. Also, thanks to its popularity, NetFlow analyzers of varying feature sets are available for network operations center (NOC) teams to take full advantage of data-rich packet flows.

Striking the balance between detail and context

Considering how network-dependent and widespread applications have become in recent years, NetFlow’s ability to provide WAN-wide metrics in near real-time makes it a suitable troubleshooting companion for engineers. And with version 9 of NetFlow extending the wealth of information it collects via a template-based collection scheme, it strikes the balance between detail and high-level insight without placing too much demand on networking hardware – which is something that can’t be said for Packet Capture. Packet Capture tools do what they do best, which is Deep Packet Inspection (DPI), allowing identification of traffic details that were previously hidden from NetFlow analyzers. But NetFlow’s constant evolution alongside the networking landscape is seeing it used as a complement to solutions such as Cisco’s NBAR and other DPI technologies, whose vendors have recognized that flexible NetFlow tools can help reveal details at the packet level.
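
For readers curious what that template-based export looks like on the wire, the sketch below parses just the fixed 20-byte NetFlow v9 packet header described in RFC 3954; the template and data FlowSets that follow the header are omitted for brevity, and the sample datagram is synthetic.

```python
# Parse the fixed 20-byte NetFlow v9 export packet header (RFC 3954).
# Template and data FlowSets follow the header and are not decoded here.
import struct
from typing import NamedTuple

class NetFlowV9Header(NamedTuple):
    version: int        # always 9 for NetFlow v9
    count: int          # number of FlowSet records in this packet
    sys_uptime_ms: int  # milliseconds since the exporting device booted
    unix_secs: int      # export timestamp (seconds since the epoch)
    sequence: int       # running count of export packets
    source_id: int      # identifies the exporter / observation domain

def parse_v9_header(datagram: bytes) -> NetFlowV9Header:
    if len(datagram) < 20:
        raise ValueError("datagram too short for a NetFlow v9 header")
    header = NetFlowV9Header(*struct.unpack("!HHIIII", datagram[:20]))
    if header.version != 9:
        raise ValueError(f"not a NetFlow v9 packet (version={header.version})")
    return header

# Example with a synthetic header: version 9, 2 FlowSets, then uptime/time/seq/source.
sample = struct.pack("!HHIIII", 9, 2, 123456, 1_490_000_000, 42, 1)
print(parse_v9_header(sample))
```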

NetFlow places your environment in greater context

Context is a chief area where NetFlow beats out Packet Capture, since it allows engineers to quickly locate root causes of performance problems by providing a more situational view of the environment, its data flows, bottleneck-prone segments, application behavior, device sessions, and so on. We could argue that packet sniffing is able to provide much of this information too, but it doesn’t give engineers the broader context around the information it presents, which keeps IT teams from detecting performance anomalies that could be ascribed to any number of factors, such as untimely system-wide application or operating system updates, or a cross-link backup application pulling loads of data across the WAN during operational hours.

So does NetFlow make Packet Capture obsolete?

The short answer is no. In fact, Packet Capture, when properly coupled with NetFlow, can make a very elegant solution. For example, using NetFlow to identify an attack profile or illicit traffic and then analyzing the corresponding raw packets becomes an attractive approach. NetFlow strikes that balance between detail and context and gives NOCs intelligent insights that reveal the broader factors influencing your network’s ability to perform. Gartner’s assertion that a mix of 80% NetFlow monitoring and 20% Packet Capture is the right combination for performance monitoring attests to NetFlow’s growing prominence as the monitoring tool of choice. And as NetFlow and its various iterations, such as sFlow and IPFIX, continue to expand the breadth of context they provide network engineers, that margin is set to increase in NetFlow’s favor over time.

Thank you to NetFlow Auditor for this post.

ThreatARMOR Reduces Your Network’s Attack Surface

2014 saw the creation of more than 317 million new pieces of malware. That means an average of nearly one million new threats were released each day.

Here at Ixia we’ve been collecting and organizing threat intelligence data for years to help test the industry’s top network security products. Our Application and Threat Intelligence (ATI) research center maintains one of the most comprehensive lists of malware, botnets, and network incursions for exactly this purpose. We’ve had many requests to leverage that data in support of enterprise security, and this week you are seeing the first product that uses ATI to boost the performance of existing security systems. Ixia’s ThreatARMOR continuously taps into the ATI research center’s list of bad IP sources around the world and blocks them.

Ixia’s ThreatARMOR represents another innovation and an extension of the company’s Visibility Architecture, reducing the ever-increasing size of an organization’s network attack surface.

A network attack surface is the sum of every access avenue an individual can use to gain access to an enterprise network. The expanding enterprise security perimeter must address new classes of attack, advancing breeds of hackers, and an evolving regulatory landscape.

“What’s killing security is not technology, it’s operations,” stated Jon Oltsik, ESG senior principal analyst and the founder of the firm’s cybersecurity service. “Companies are looking for ways to reduce their overall operations requirements and need easy to use, high performance solutions, like ThreatARMOR, to help them do that.”

Spending on IT security is poised to grow tenfold in ten years. Enterprise security tools inspect all traffic, including traffic that shouldn’t be on the network in the first place: traffic from known malicious IPs, hijacked IPs, and unassigned or unused IP space/addresses. These devices, while needed, create more work than a security team could possibly handle. False positives consume an inordinate amount of time and resources: enterprises spend approximately 21,000 hours per year on average dealing with false-positive cyber security alerts, according to a Ponemon Institute report published in January 2015. You need to reduce the attack surface in order to focus only on the traffic that actually needs to be inspected.
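
To make the “block it before it is ever inspected” idea concrete, here is a minimal sketch, purely illustrative and not how ThreatARMOR works internally, that turns a plain-text list of known-bad IP addresses into Linux iptables drop rules. The blocklist file name and rule placement are assumptions.

```python
# Illustrative only: turn a plain-text blocklist of bad IP addresses into
# iptables DROP rules so the traffic never reaches deeper inspection tools.
# The blocklist path is a placeholder; run the printed commands as root.
import ipaddress

BLOCKLIST_FILE = "bad_ips.txt"   # hypothetical file, one IP or CIDR per line

def build_drop_rules(path: str) -> list[str]:
    rules = []
    with open(path) as handle:
        for line in handle:
            entry = line.strip()
            if not entry or entry.startswith("#"):
                continue
            # Validate so a malformed line cannot produce a broken rule.
            network = ipaddress.ip_network(entry, strict=False)
            rules.append(f"iptables -I INPUT -s {network} -j DROP")
    return rules

if __name__ == "__main__":
    for rule in build_drop_rules(BLOCKLIST_FILE):
        print(rule)
```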

“ThreatARMOR delivers a new level of visibility and security by blocking unwanted traffic before many of these unnecessary security events are ever generated. And its protection is always up to date thanks to our Application and Threat Intelligence (ATI) program.” said Dennis Cox, Chief Product Officer at Ixia.

“The ATI program develops the threat intelligence for ThreatARMOR and a detailed ‘Rap Sheet’ that provides proof of malicious activity for all blocked IP addresses, supported with on-screen evidence of the activity such as malware distribution or phishing, including date of the most recent confirmation and screen shots.”

ThreatARMOR: your new front line of defense!

Additional Resources:

ThreatARMOR

Thanks to Ixia for the article.

Two Ways Networks Are Transformed By NetFlow

According to an article on techtarget.com, “Your routers and switches can yield a mother lode of information about your network–if you know where to dig.” The article goes on to say that excavating and searching through the endless traffic data and logs generated by your network is a lot like mining for gold, and punching random holes to look for a few nuggets of information isn’t very efficient. Your search will be much more fruitful if you know where to look and what it will look like. Fortunately, the data generated by the NetFlow traffic-reporting protocol yields specific information, and you can easily sort, view, and analyze it into what you want to use or need. In contemporary networks, there is a need to collect and retain a good set of traffic records for several different purposes. These include the ability to monitor traffic for network planning, security, and analysis, as well as to track traffic usage for billing purposes. Every business experiences network problems. The goal is to transform these badly behaving networks by investigating the data that is being generated by the routers, switches, and other hardware that make up the system.

  • Trace and repair network misconfigurations

Problems with networks can run the gamut from mismatched applications and hardware to wireless access points opened to accommodate BYOD users and other business uses. While there is always talk about software flaws and news about the latest internet threat, those things often distract IT pros from the real, everyday threat of unstable networks that have been configured to accommodate legacy hardware and a multitude of software applications.

The increasing complexity of the Internet itself, with the interconnection of many different devices and device types, adds to the challenge of operating a computer network. Even though developing protocols to respond to unpredicted failures and misconfigurations is a workable approach, out-of-date configurations can still cause frequent problems and denial of service (DoS). With many modern network devices performing monitoring functions and gathering data, retrieving and utilizing NetFlow information makes tracing and repairing misconfigurations possible, easier, and more efficient.

  • Detect security breaches

There are many uses for NetFlow but one of the most important is the benefit of network security. This quote from an article by Wagner and Bernhard, describing worm and anomaly detection in fast IP networks, bears out the security problems facing governments, businesses, and internet users today.

“Large-scale network events, such as outbreaks of a fast Internet worm are difficult to detect in real-time from observed traffic, even more so if the observed network carries a large amount of traffic. Even during worm outbreaks, actual attack traffic is only a small fraction of the overall traffic. Its precise characteristics are usually unknown beforehand, making direct matching hard. It is desirable to derive parameters that describe large amounts of traffic data in such a way that details are hidden, but the type of changes associated with worm outbreaks and other network events are visible.”

NetFlow provides a 24/7 account of all network activity. There is an “unblinking” eye observing anything and everything that happens within the network boundaries. All the data needed to identify and enact a clean-up is recorded in the flow, and this is invaluable to a security pro trying to reduce the impact of a breach in the network. NetFlow provides a visible, “what’s happening right now” view that other systems cannot provide. Most security systems alert after something has been detected, while NetFlow is constantly gathering information even when things seem to be just fine. In addition, NetFlow-based analysis relies on traffic behavior and algorithms, which provide rapid detection of breaches that other technologies often miss.
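
As a toy illustration of the kind of behavior-based analysis described above (not a production detector), the sketch below scans flow records for hosts that contact an unusually large number of distinct destinations in one interval, a classic signature of worm scanning. The threshold and sample records are arbitrary assumptions.

```python
# Toy flow-based scan detection: flag sources that touch many distinct
# destinations in one interval, a pattern typical of worm propagation.
from collections import defaultdict

FANOUT_THRESHOLD = 100   # distinct destinations per interval before we alert

# Each flow record: (source IP, destination IP, destination port, bytes)
flows = [
    ("10.0.0.5", f"192.168.1.{i}", 445, 120) for i in range(1, 251)
] + [
    ("10.0.0.9", "172.16.4.20", 443, 35_000),
    ("10.0.0.9", "172.16.4.21", 443, 28_000),
]

def find_scanners(flow_records, threshold=FANOUT_THRESHOLD):
    destinations = defaultdict(set)
    for src, dst, _port, _octets in flow_records:
        destinations[src].add(dst)
    return {src: len(dsts) for src, dsts in destinations.items() if len(dsts) >= threshold}

for source, fanout in find_scanners(flows).items():
    print(f"possible scanner: {source} contacted {fanout} distinct hosts")
```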

Thanks to NetFlow Auditor for the article.

Don’t Be Lulled to Sleep with a Security Fable. . .

Once upon a time, all you needed was a firewall to call yourself “secure.” But then, things changed. More networks are created every day, every network is visible to the others, and they connect with each other all the time—no matter how far away or how unrelated.

And malicious threats have taken notice . . .

As the Internet got bigger, anonymity got smaller. It’s impossible to go “unnoticed” on the Internet now. Everybody is a target.

In today’s network landscape, every network is under threat of attack all the time. The network “security perimeter” has expanded in reaction to new attacks, new breeds of hackers, more regions coming online, and emerging regulations.

Security innovation tracks threat innovation by creating more protection—but this comes with more complexity, more maintenance, and more to manage. Security investment rises with expanding requirements. Just a firewall doesn’t nearly cut it anymore.

Next-generation firewalls, IPS/IDS, antivirus software, SIEM, sandboxing, DPI: all of these tools have become part of the security perimeter in an effort to stop traffic from getting in (and out) of your network. And they are overloaded, and overloading your security teams.

In 2014, there were close to 42.8 million cyberattacks (roughly 117,339 attacks each day) in the United States alone. These days, the average North American enterprise fields around 10,000 alerts each day from its security systems—way more than their IT teams can possibly process—a Damballa analysis of traffic found.

Your network’s current attack surface is huge. It is the sum of every access avenue an attacker could use to enter your network (or take data out of your network). Basically, every connection to and/or from anywhere.

There are two types of traffic that hit every network: The traffic worth analyzing for threats, and the traffic not worth analyzing for threats that should be blocked immediately before any security resource is wasted inspecting or following up on it.

Any way to filter out traffic that is either known to be good or known to be bad, and doesn’t need to go through the security system screening, reduces the load on your security staff. With a reduced attack surface, your security resources can focus on a much tighter band of information, and not get distracted by non-threatening (or obviously threatening) noise.

Thanks to Ixia for the article.

5 Reasons Why You Should Include LAN Switches in Your NCCM Scope

We’ve been doing a lot of blogging around here lately about NCCM and the importance of having an automated configuration and change management system. We’ve even published a Best practices guide for NCCM. One of the main points in any NCCM system is having consistent and accurate configuration backups of all of your “key” devices.

When I ask Network Managers to name their key devices, they generally start with WAN / Internet routers and Firewalls. This makes sense, of course, because in a modern large-scale network, connectivity (WAN / Internet routers) and security (Firewalls) tend to get most of the attention. However, we think it’s important not to overlook the core and access switching layers. After all, without that “front line” connectivity, internal users cannot get out to the WAN/Internet in the first place.

With that in mind, today’s blog offers up 5 Reasons Why You Should Include LAN Switches in Your NCCM Scope

1. Switch Failure

LAN switches tend to be some of the most utilized devices in a network. They also don’t generally come with the top-quality hardware and redundant power supplies that core devices have. In many cases, they may also be located in less than pristine locations: dirty manufacturing floors, dormitory closets, remote office kitchens – I have seen access switches in all of these places. When you combine a heavy workload with tough conditions and less expensive parts, you have a recipe for devices that will fail at a higher rate.

So, when the time comes to replace or upgrade a switch, having its configuration backed up, and a system which can automate the provisioning of the new switch, can be a real time and workload saver. Just put the IP address and some basic management information on the new device and the NCCM tool should be able to take care of the rest in mere minutes.

2. User Tracking

As the front line connectivity device for the majority of LAN users, the switch is the best place to track down user connections. You may want to know where a particular user is located, or maybe you are trying to troubleshoot an application performance issue; no matter what the cause, it’s important to have that connectivity data available to the IT department. NCCM systems may use layer 2 management data from CDP/LLDP as well as other techniques to gather this information. A good system will allow you to search for a particular IP/MAC/DNS and return connectivity information like which device/port it is connected to as well as when it was first and last seen on that port. This data can also be used to draw live topology maps which offer a great visualization of the network.
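
As a toy illustration (not how any particular NCCM product stores its data), an endpoint lookup of this kind might be modeled as follows; the switch names, ports, addresses, and dates are all hypothetical stand-ins for the CDP/LLDP and MAC-table data a real system would collect.

```python
# Toy endpoint lookup against data an NCCM system might gather from
# switch MAC tables and CDP/LLDP neighbors. All entries are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Attachment:
    switch: str
    port: str
    mac: str
    ip: str
    first_seen: str
    last_seen: str

inventory = [
    Attachment("sw-access-01", "Gi1/0/14", "00:11:22:33:44:55",
               "10.1.20.37", "2017-03-02", "2017-04-18"),
    Attachment("sw-access-07", "Gi2/0/03", "00:11:22:aa:bb:cc",
               "10.1.22.14", "2017-01-11", "2017-04-17"),
]

def locate(query: str) -> Optional[Attachment]:
    """Find an endpoint by IP or MAC address."""
    q = query.lower()
    for entry in inventory:
        if q in (entry.ip, entry.mac.lower()):
            return entry
    return None

hit = locate("10.1.20.37")
if hit:
    print(f"{hit.ip} is on {hit.switch} port {hit.port}, last seen {hit.last_seen}")
```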

3. Policy Checking

Another area where the focus tends to be on “gateway” devices such as WAN routers and firewalls is policy checking. While those devices certainly should have lots of attention paid to them, especially in the area of security policies, we believe it’s equally important not to neglect the access layer when it comes to compliance. In general terms, there are two aspects of policy checking that need to be addressed on these devices: QoS policies and regulatory compliance policies.

The vast majority of VoIP and Video systems will connect to the network via a traditional LAN switch. These switches, therefore, must have the correct VLAN and QoS configurations in order to accurately forward the traffic in the appropriate manner so that Quality of Service is maintained.

If your organization is subject to regulatory compliance standards such as PCI, HIPAA, etc., then these regulations are applicable to all devices and systems that connect to, or pass, sensitive data.

In both of these cases, it is incredibly important to ensure policy compliance on all of your devices, even the ones on the “edge” of your network.
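
A very simple compliance check of this kind can be sketched as follows; the required configuration statements below are made-up examples for illustration, not a recommended QoS or security policy.

```python
# Minimal configuration compliance check: verify that each backed-up switch
# config contains a set of required statements. The required lines below are
# made-up examples, not a recommended security or QoS policy.
import re

REQUIRED_PATTERNS = {
    "QoS enabled": re.compile(r"^\s*mls qos", re.MULTILINE),
    "Voice VLAN defined": re.compile(r"^\s*switchport voice vlan \d+", re.MULTILINE),
    "SSH-only management": re.compile(r"^\s*transport input ssh", re.MULTILINE),
}

def audit_config(device_name: str, config_text: str) -> list[str]:
    """Return the list of policy items the device's configuration fails."""
    return [name for name, pattern in REQUIRED_PATTERNS.items()
            if not pattern.search(config_text)]

sample_config = """
hostname sw-access-01
mls qos
line vty 0 15
 transport input ssh
"""

failures = audit_config("sw-access-01", sample_config)
print("compliant" if not failures else f"missing: {', '.join(failures)}")
```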

4. Asset Lifecycle Management

Especially in larger and more spread-out organizations, just understanding what you have can be a challenge. At some point (and always when you are least prepared for it) you will get the “What do we have?” question from a manager. An NCCM system is exactly the right tool to use to answer this question. Even though NCCM is generally considered to be the tool for change, it is equally the tool for information. Only devices that are well documented can be managed, and that documentation is best supplied through the use of an automated inventory discovery system. Likewise, when it is time for a technology refresh, or even the build-out of a new location or network, understanding the current state of the existing network is the first step towards building an effective plan for the future.

5. New Service Initiatives

Whether you are a large IT shop or a service provider, new applications and services are always coming. In many cases, that will require widespread changes to the infrastructure. The change may be small or large, but if it needs to be implemented on a number of systems at the same time, it will require coordination and automation to get it done efficiently and successfully. In some instances, this will only require changes to the core, but in many cases it will require changes to the switch infrastructure as well. This is what NCCM tools were designed to do, and there is no reason that you should be handcuffed in your efforts to implement change just because you haven’t added all of your devices into the NCCM platform.

Networks are complicated systems of many individual components spread throughout various locations, with interdependencies that can be hard to comprehend without the help of network management tools. While the temptation may be to focus on the core systems, we think that it’s critical to view all parts, even the underappreciated LAN switch, as equal pieces of the puzzle that should not be overlooked when implementing an NCCM system.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.

5 Perks of Network Performance Management

Network performance management is something that virtually every business needs, but not something that every business is actively doing, or even aware of. And why should they?

While understanding the technical side of things is best left to the IT department, understanding the benefits of a properly managed network is something that will help get the business managers on board, especially when good performance management solutions might be a cost that hadn’t been considered. So what are the benefits?

1. Avoiding downtime – Downtime across an entire network is going to be rare, but downtime in small areas of the network is possible if those areas get overloaded. Downtime of any kind is just not something that a business can tolerate, for a few reasons:

  • it leaves that area of the network unmonitored, which is a serious security issue
  • shared files won’t be accessible, nor will they be updating as users save the files. This will lead to multiple versions of the same file, and quite a few headaches when the network is accessible again
  • downtime that affects customers is even worse, and can result in lost revenue or negative customer experiences

2. Network speed – This is one of the most important and easily quantified aspects of managing netflow. It will affect every user on the network constantly, and anything that slows down users means either more work hours or delays. Obviously, neither of these is a good problem to have. Whether it’s uploading a file, sending a file to a coworker, or sending a file to a client; speed is of paramount importance.

3. Scalability – Almost every business wants to grow, and nowhere is that more true than in the tech sector. As the business grows, the network will have to grow with it to support more employees and clients. By managing the performance of the network, it is very easy to see when or where it is being stretched too thin or overwhelmed. As performance degrades, you can set thresholds that show when the network needs to be upgraded or enlarged.
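
One simple way to set such thresholds is to trend utilization and alert once it crosses an agreed ceiling; the link capacity, weekly samples, and 80% ceiling below are placeholder assumptions.

```python
# Simple capacity-threshold check: alert when a link's utilization trend
# crosses an agreed ceiling. All figures are placeholders for illustration.
CAPACITY_MBPS = 1_000
THRESHOLD = 0.80   # alert once sustained utilization exceeds 80% of capacity

weekly_peak_mbps = [610, 655, 700, 742, 801, 830]   # hypothetical trend

for week, peak in enumerate(weekly_peak_mbps, start=1):
    utilization = peak / CAPACITY_MBPS
    if utilization >= THRESHOLD:
        print(f"week {week}: {utilization:.0%} of capacity - plan an upgrade")
    else:
        print(f"week {week}: {utilization:.0%} of capacity")
```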

4. Security – Arguably the most important aspect of network management, even though it might not be thought of as a performance aspect. An unsecured network is worse than a useless network, and data breaches can ruin a company. So how does this play into performance management?

By monitoring netflow performance, it’s easy to see where the most resources are being used. Many security attacks drain resources, so if there are resource spikes in unusual areas it can point to a security flaw. With proper software, these issues can be not only monitored, but also recorded and corrected.

5. Usability – Unfortunately, not all employees have a working knowledge of how networks operate. In fact, as many in IT support will attest, most employees aren’t tech savvy. However, most employees will need to use the network as part of their daily work. This conflict is why usability is so important. The easiest way to minimize training costs with any network management program is to ensure that it is as user-friendly as possible.

The fanciest, most impressive network performance management system isn’t worth anything if no one knows how to use and optimize it properly. Even if the IT department has no issues with it, the reports and general information should be as easy to decipher as possible.

Is your network as optimized as it could be? Are you able to monitor the network’s performance and flow, or do network forensics to determine where issues are? Don’t try to tackle all of this on your own. Contact us and let us help you support your business with the best network monitoring for your specific needs.

Thanks to NetFlow Auditor for the article.