Troubleshooting Cheat Sheet: Layers 1-3

Any time you encounter a user complaint, whether regarding slow Internet access, application errors, or other issues that impact productivity, it is important to begin with a thorough understanding of the user’s experience.

Not sure where to begin?  User complaints usually fall into three categories: slow network, inability to access network resources, and application-specific issues.

Based upon the complaint being presented you need to understand the symptoms and then isolate the issue to the correct layer of the Open Systems Interconnection (OSI) model.

The following Troubleshooting Cheat Sheet shows the questions to ask with a typical slow network complaint.

What to Ask / What it Means

  • What to ask: What type of application is being used? Is it web-based? Is it commercial, or a homegrown application?
    What it means: Determines whether the person is accessing local or external resources.

  • What to ask: How long does it take the user to copy a file from the desktop to the mapped network drive and back?
    What it means: Verifies they can send data across the network to a server, and allows you to evaluate the speed and response of the DNS server.

  • What to ask: How long does it take to ping the server of interest?
    What it means: Validates they can ping the server and obtain the response time.

  • What to ask: If the time is slow for a local server, how many hops are needed to reach the server?
    What it means: Confirms the number of hops taking place. Look at switch and server port connections, speed to the client, and any errors.
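The ping and hop-count questions above lend themselves to a quick scripted check. The sketch below is illustrative only: it assumes a Linux host with the standard ping and traceroute utilities available, and the server name is a placeholder.

```python
import re
import subprocess

def ping_avg_ms(host, count=4):
    """Return the average round-trip time reported by ping, in milliseconds."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    match = re.search(r"= [\d.]+/([\d.]+)/", out)  # the min/avg/max/mdev summary line
    return float(match.group(1)) if match else None

def hop_count(host):
    """Return the number of hops traceroute needed to reach the host."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True).stdout
    hops = [line for line in out.splitlines()[1:] if line.strip()]
    return len(hops) or None

if __name__ == "__main__":
    server = "fileserver.example.com"  # placeholder for the server of interest
    print("average RTT (ms):", ping_avg_ms(server))
    print("hops to server:", hop_count(server))
```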

Quick OSI Layer Review

With these questions answered, working through the OSI model is a straightforward process. Understanding how each layer functions and delivers data shapes how you troubleshoot it.

Physical Layer

  • If it can blind or shock you, think Physical Layer
  • Defines physical characteristics of cables and connectors
  • Provides the interface between network and network devices
  • Describes the electrical, light, or radio data stream signaling

Data Link Layer

  • Converts signals into bits which become the packet data that everyone wants
  • Performs error detection and correction of the data streams
  • Manages flow and link control between the physical signaling and network
  • Constructs and synchronizes data frames

Network Layer

  • Controls logical addressing, routing, and packet generation
  • Carries out congestion control and error handling
  • Performs route monitoring and message forwarding

Assessing the Physical Layer

Generally speaking, Physical Layer symptoms fall into two groups: outage issues and performance issues. In most cases, investigating outage issues is the easiest place to begin, as it’s a matter of confirming that a link light is out or that a box is not functioning. Likewise, validating equipment failure is a matter of replacing the cable or switch and confirming everything works.

Physical Layer issues are often overlooked: people ping or dig through NetFlow looking for the problem when, in reality, it’s a Layer 1 issue caused by a cable, jack, or connector.

The next step in investigating Physical Layer issues is delving into performance problems. This means not only dealing with more complex issues, but also having the correct tools to diagnose degraded performance. Essential tools in your toolbox for testing physical issues are a cable tester for cabling problems, and a network analyzer or SNMP poller for other problems.
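For the SNMP side of that toolbox, even a short script can pull the interface error counters that reveal Layer 1 trouble. The following is a minimal sketch, assuming the pysnmp library, SNMPv2c read access, and placeholder values for the switch address, community string, and interface index.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

SWITCH, COMMUNITY, IF_INDEX = "10.0.0.2", "public", 1   # placeholders for illustration

# Poll the in/out error counters for one interface (IF-MIB::ifInErrors/ifOutErrors).
error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData(COMMUNITY, mpModel=1),                 # SNMPv2c
    UdpTransportTarget((SWITCH, 161)),
    ContextData(),
    ObjectType(ObjectIdentity("IF-MIB", "ifInErrors", IF_INDEX)),
    ObjectType(ObjectIdentity("IF-MIB", "ifOutErrors", IF_INDEX)),
))

if error_indication or error_status:
    print("SNMP query failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```

Counters only become meaningful over time, so in practice you would poll on an interval and alert on the change between polls rather than on the absolute value.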

Assessing Physical Performance Errors

When diagnosing performance issues with a network analyzer, you’ll notice common error patterns that usually indicate what is causing the Physical Layer problem. These can be divided into intelligent and non-intelligent errors.

Intelligent Errors: An intelligent host is smashing into your network signal and corrupting the data.

Example: Overloaded WiFi network or a busy channel.

Non-Intelligent Errors: An outside entity causing noise that interferes with the signal or flow of data across the network.

Example: A microwave interfering with a WiFi signal.

Climbing Further up the Stack

Confirming performance problems, taking a systematic approach to troubleshooting, and understanding how communication occurs across the layers of the OSI model are key to slashing troubleshooting times and improving resolution accuracy.

Measuring IPTV Quality

If you’re implementing IPTV, it’s important to know what metrics to monitor to ensure the transmission of high quality video. Managing performance entails more than tracking response time. Let’s look at the key metrics for managing IPTV in the following table:

IPTV Service Metrics
  • QoE: Video quality of experience, measured via the Media Delivery Index (MDI) and most often displayed as two numbers separated by a colon: the delay factor (DF) and the media loss rate (MLR)
  • Packet loss: The number of lost or out-of-order packets per second. Since many receivers make no attempt to process out-of-order packets, both are treated as lost in the MLR calculation. The maximum acceptable value for MLR is zero, as any packet loss will impact video quality.
  • Jitter: The variability of delay in packet arrival times
  • Latency: The time taken by the transport network to deliver video packets to the user
  • QoS: Verify that precedence settings are the same for all components of the IPTV transmission

IPTV System Metrics
  • CPU: Amount of CPU available and used
  • Memory: Device memory available and used
  • Buffer utilization: Quantity used and available

Network Metrics
  • CIR utilization: User utilization relative to the Committed Information Rate (CIR)
  • Queue drops: Queue drops due to congestion
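As a concrete illustration of two of these metrics, the sketch below derives a media loss rate and an inter-arrival jitter figure from a list of (sequence number, arrival time) pairs. The input format is an assumption made for the example; a real probe would take these values from RTP headers or an MDI-capable analyzer.

```python
def media_loss_rate(samples):
    """Lost or out-of-order packets per second, from (seq, arrival_time) pairs.

    Simplified: any packet whose sequence number is not the one expected next is
    counted, mirroring receivers that treat out-of-order packets as lost.
    """
    lost_or_out_of_order = 0
    expected = samples[0][0]
    for seq, _ in samples:
        if seq != expected:
            lost_or_out_of_order += 1
        expected = seq + 1
    duration = samples[-1][1] - samples[0][1]
    return lost_or_out_of_order / max(duration, 1e-9)

def interarrival_jitter(samples):
    """Mean absolute deviation of packet inter-arrival times, in seconds."""
    gaps = [t2 - t1 for (_, t1), (_, t2) in zip(samples, samples[1:])]
    nominal = sum(gaps) / len(gaps)
    return sum(abs(g - nominal) for g in gaps) / len(gaps)

# Tiny demo: packet 2 never arrives, so MLR is non-zero.
stream = [(0, 0.000), (1, 0.010), (3, 0.031), (4, 0.040)]
print(media_loss_rate(stream), interarrival_jitter(stream))
```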

With a basic understanding of key metrics and the technical specifications from your IPTV solution, you can set thresholds and alarms on metrics to notify you of potential issues before they impact the user.

5 IPTV Monitoring Best Practices

Chances are you’ve seen Internet Protocol TV (IPTV) but didn’t know it. Different types of IPTV are popping up in our daily lives ranging from Video-On-Demand to being greeted by pre-recorded video messages at the gas pump or ATM. And many businesses are adopting IPTV to broadcast live or on-demand video content to employees, partners, customers, and investors.

But what does this mean for the network team? In this article we’ll outline IPTV basics and focus on primary management challenges and best practices.

IPTV Basics

As far as your company is concerned, IPTV simply means it has systems in place that can send, receive, and display video streams encoded as IP packets. IPTV video signals can be sent as either a unicast or a multicast transmission.

  • Unicast: involves a single client and server in the process of sending and receiving video and communication transmissions. Video-on-Demand is a great example of this.
  • Multicast: the process of one party broadcasting the same video transmission to multiple destinations. An example would be a retail chain broadcasting the same video to kiosks in all their stores.
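For a sense of what the multicast case looks like on the wire, here is a minimal receiver sketch using only the Python standard library; the group address and port are placeholders chosen for illustration.

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004          # placeholder multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on the default interface; the kernel sends the IGMP
# membership report that tells the network to forward this group's traffic to us.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(2048)
    print(f"{len(data)} bytes of video payload from {sender[0]}")
```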

Monitoring Challenges & Best Practices

Implementing best practices can ensure IPTV runs smoothly on your network and performance issues are minimized. As IPTV is deployed, make sure your team is doing the following:

  • Get Visibility, Get Resolution: To ensure video quality, monitor at several points along the video delivery path: headend (point of origin), core, distribution, access, and user/receiver. This is critical for capturing accurate metrics and isolating the source of a problem.
  • Minimize Delay and Packet Loss: IPTV video quality can be compromised by small variations in delay or any significant packet loss. Track, baseline, and alarm on IPTV metrics to proactively identify issues (see the baseline-and-alarm sketch after this list).
  • Avoid Bandwidth Surprises: Transporting video across IP infrastructure consumes considerable bandwidth. Monitor regular use to avoid exceeding thresholds and assist in capacity planning. Reduce the impact of outages by confirming backup network paths have required capacity to carry video.
  • Don’t Monitor in a Vacuum: Confirm your existing performance monitoring tools can track IPTV traffic and metrics alongside existing applications. Incomplete performance views will cause your team to waste time attempting to guess IPTV performance or the impact of other applications on IPTV transmissions.
  • Play Nice with the Video Group: On a converged network, troubleshooting any video issue will involve working with the video group. Attempt to establish processes for coordinating troubleshooting efforts, before problems occur.
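Below is a minimal sketch of the baseline-and-alarm idea referenced above: keep a rolling history of a metric such as jitter or MLR, and alert when a new sample deviates well beyond the recent norm. The window size and sigma threshold are illustrative assumptions, not recommendations.

```python
from collections import deque
from statistics import mean, stdev

class MetricBaseline:
    def __init__(self, window=288, sigmas=3.0):
        self.samples = deque(maxlen=window)   # e.g., 24 hours of 5-minute samples
        self.sigmas = sigmas

    def add(self, value):
        """Record a sample; return True if it is alarm-worthy versus the baseline."""
        alarm = False
        if len(self.samples) >= 30:           # require some history before alarming
            baseline, spread = mean(self.samples), stdev(self.samples)
            alarm = value > baseline + self.sigmas * max(spread, 1e-9)
        self.samples.append(value)
        return alarm

jitter_baseline = MetricBaseline()
if jitter_baseline.add(0.045):                # latest jitter sample, in seconds
    print("ALERT: jitter well above baseline; investigate before users notice")
```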

This article serves as a starting point for understanding IPTV performance challenges and best practices to implement for ensuring success. For more in-depth information on the technologies, critical network preparations, and IPTV monitoring metrics, check out the following resources:

Infosim® provides any-to-any IoT management with StableNet®

The unprecedented complexity of IoT is bringing together a universe of “things” that were not designed to work together or share data. Data is increasing exponentially. Competitive edge often depends on getting new services to market quickly. New management systems can take years to roll out.

Now there’s StableNet® — an innovative, flexible platform from Infosim® that delivers any-to-any management. Based on high-performance Intel® architecture, StableNet brings new levels of interoperability and assurance to both legacy and modern infrastructure.

StableNet® helps ensure protocols, networks, databases, and applications can talk to each other securely. It provides holistic, end-to-end visibility to simplify management of complex systems and speed time to insight for informed decision making.

StableNet® is a certified Operational Support System with integrated configuration, fault, performance, and services management, including fully automated root cause analysis. It works seamlessly across vendors, silos, systems, and technologies and can be operated in legacy or highly dynamic flex-compute or cloud environments.

Intel® architecture adds essential business-level capabilities, enabling increased performance, manageability, connectivity, analytics, and advanced security. Infosim®’s modular licensing model allows companies to pay for what they need, and scale up or down as their business evolves.

With StableNet® powered by Intel technology, the common “zoo” of management systems becomes manageable — helping your business to thrive and compete in our connected world.

Watch Intel’s video below about Infosim’s StableNet

To learn more about Infosim and StableNet, or to sign up for your free trial, visit the Infosim website.

Thanks to Infosim for this article and video.

Eight Steps to take when conducting your first threat hunt

Unlike traditional, reactive approaches to detection, hunting is proactive. With hunting, security professionals don’t wait to take action until they’ve received a security alert or, even worse, suffer a data breach. Instead, hunting entails looking for opponents who are already in your environment.

Hunting leads to discovering undesirable activity in your environment and using this information to improve your security posture. These discoveries happen on the security team’s terms, not the attacker’s. Rather than launching an investigation after receiving an alert, security teams can hunt for threats when their environment is calm instead of in the midst of the chaos that follows after a breach is detected.

To help security professionals better facilitate threat hunting, here are step-by-step instructions on how to conduct a hunt.

1. Internal vs. outsourced 

If you decide to conduct a threat hunting exercise, you first need to decide whether to use your internal security team or outsource the work to an external threat hunting service provider. Some organizations have skilled security talent that can lead a threat hunting session. To enable a proper exercise, those analysts should work only on the hunting assignment for the span of the operation, so they can focus entirely on this task.

When a security team lacks the time and resources hunting requires, they should consider hiring an external hunting team to handle this task.

2. Start with proper planning

Whether using an internal or external vendor, the best hunting engagements start with proper planning. Putting together a process for how to conduct the hunt yields the most value. Treating hunting as an ad hoc activity won’t produce effective results. Proper planning can assure that the hunt will not interfere with an organization’s daily work routines.

3. Select a topic to examine 

Next, security teams need a security topic to examine. The aim should be to either confirm or deny that a certain activity is happening in their environment. For instance, security teams may want to see whether they are being targeted by advanced threats that use techniques like fileless malware to evade the organization’s current security setup.

4. Develop and test a hypothesis

The analysts then establish a hypothesis by determining the outcomes they expect from the hunt. In the fileless malware example, the purpose of the hunt is to find hackers who are carrying out attacks by using tools like PowerShell and WMI.

Collecting every PowerShell process in the environment would overwhelm the analysts with data and prevent them from finding any meaningful information. They need to develop a smart approach to testing the hypothesis without reviewing each and every event.

Let’s say the analysts know that only a few desktop and server administrators use PowerShell for their daily operations. Since the scripting language isn’t widely used throughout the company, the analysts executing the hunt can expect to see only limited PowerShell use, and extensive PowerShell use may indicate malicious activity. One possible approach to testing the hunt’s hypothesis is therefore to measure the level of PowerShell use as an indicator of potentially malicious activity.
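A minimal sketch of how a hunt team might test that hypothesis by stack counting: tally PowerShell launches per host from exported process-creation events and surface the outliers. The CSV file name and its `host`/`process` columns are hypothetical, not a specific product’s format.

```python
import csv
from collections import Counter

powershell_per_host = Counter()
with open("process_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["process"].lower() in ("powershell.exe", "pwsh.exe"):
            powershell_per_host[row["host"]] += 1

# Hosts outside the small set of admins expected to use PowerShell stand out here.
for host, count in powershell_per_host.most_common(10):
    print(f"{host}: {count} PowerShell launches")
```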

5. Collect information

To review PowerShell activity, analysts would need network information, which can be obtained by reviewing network logs, and endpoint data, which is found in database logs, server logs or Windows event logs.

To figure out what PowerShell use looks like in a specific environment, the analysts will collect data including process names, command lines, DNS queries, destination IP addresses, and digital signatures. This information will allow the hunting team to build a picture of relationships across different data types and look for connections.

6. Organize the data

Once that data has been compiled, analysts need to determine what tools they’re going to use to organize and analyze this information. Options include the reporting tools in a SIEM, purchasing analytical tools, or even using Excel to create pivot tables and sort data. With the data organized, analysts should be able to pick out trends in their environment. In the example reviewing a company’s PowerShell use, they could convert event logs into CSV files and upload them to an endpoint analytics tool.

7. Automate routine tasks 

Discussions about automation may turn off some security analysts. However, automating some tasks is key to a hunting team’s success. There are some repetitive tasks that analysts will want to automate, and some queries that are better searched and analyzed by automated tools.

Automation spares analysts from the tedious task of manually querying the reams of network and endpoint data they’ve amassed. For example, analysts may want to consider automating the search for tools that use DGAs (domain generation algorithms) to hide their command and control communication. While an analyst could manually dig through DNS logs and build data stacks, this process is time consuming and frequently leads to errors.
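As one illustration of the DGA case, the heuristic below scores DNS query names by character entropy, since algorithm-generated domains tend to look random. The log format (one queried name per line), the length cutoff, and the entropy threshold are all assumptions made for the sketch.

```python
import math
from collections import Counter

def entropy(label):
    """Shannon entropy of the characters in a domain label, in bits."""
    counts = Counter(label)
    return -sum(c / len(label) * math.log2(c / len(label)) for c in counts.values())

def looks_generated(domain, threshold=3.5):
    label = domain.split(".")[0]              # score the left-most label only
    return len(label) > 12 and entropy(label) > threshold

with open("dns_queries.log") as f:            # hypothetical: one queried name per line
    suspects = {line.strip() for line in f if looks_generated(line.strip())}

for name in sorted(suspects):
    print("possible DGA domain:", name)
```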

8. Get your question answered and plan a course of action

Analysts should now have enough information to confirm or deny their hypothesis, know what’s happening in their environment, and take action. If a breach is detected, the incident response team should take over and remediate the issue. If any vulnerabilities are found, the security team should resolve them.

Continuing with the PowerShell example, let’s assume that malicious PowerShell activity was detected. In addition to alerting the incident response team, security teams or IT administrators should adjust the Group Policy Object settings in Windows to prevent unauthorized PowerShell scripts from executing.

Thanks to Cybereason and author Sarah Maloney for this article.

Infosim: Go with the flow – choose the right tool though!

NetFlow is a handy tool in the daily work of Network Admins and Analysts. It can be used to measure traffic in networks from an End-to-End perspective, and it allows filtering by several types of data:

  • Time
  • Source/destination IP
  • Source/destination port numbers (at least for UDP and TCP)
  • Ingress interface (SNMP ifIndex)
  • TOS information
  • AS numbers
  • TCP flags
  • Protocol type

How does it work?

NetFlow information is exposed by compatible routers or switches. These can be configured to send their NetFlow data in a well-defined format to a dedicated collector that is responsible for storing the data. A reporting engine can then access the data storage and perform analysis, create a statistic, plot charts for the end-user, etc. But, when drilling down into the details, it is unfortunately not as easy as it sounds.
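To make the exporter-to-collector flow concrete, here is a minimal collector sketch that listens on UDP and unpacks just the NetFlow v5 header (version and record count). Port 2055 is a common convention rather than a requirement, and record parsing and storage are deliberately omitted.

```python
import socket
import struct

# NetFlow v5 header: version, count, sysUptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes total).
V5_HEADER = struct.Struct("!HHIIIIBBH")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))

while True:
    packet, exporter = sock.recvfrom(8192)
    if len(packet) < V5_HEADER.size:
        continue                              # ignore runt packets
    version, count, *_ = V5_HEADER.unpack_from(packet)
    if version == 5:
        print(f"{exporter[0]} exported {count} v5 flow records")
    else:
        print(f"{exporter[0]} sent flow version {version}; a different parser is needed")
```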

Flow fragmentation

Originally developed by Cisco, NetFlow has evolved into an open standard (https://tools.ietf.org/html/rfc3954), but it has also spawned other Flow implementations over time, such as sFlow, cFlow, jFlow, and IPFIX, to name only a few. Some of them are compatible with each other; others, like NetFlow and sFlow, are not. Furthermore, the original NetFlow specification has gone through multiple revisions, and different versions coexist in the wild. Maintaining support for all of those versions is not trivial.

Data storage

As NetFlow is a passive monitoring tool, you cannot simply query the Flow emitters. Quite the contrary, data is sent once – via UDP. What’s worse is that the data volume, which might be considerably high, is likely to be sent in bursts. That means the Flow collector must be able to handle all the incoming packets with sufficient speed. If there are packet drops or I/O problems, the data is lost. To perform in a Telco-grade network, the monitoring solution must be well-designed and decouple data collection from necessary data preprocessing and storage.
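One way to achieve that decoupling is a simple producer/consumer split: a receive loop that does nothing but drain the socket into an in-memory queue, and a separate worker that handles the slower preprocessing and storage. The sketch below illustrates the pattern only; the port, queue size, and `store()` routine are placeholders.

```python
import queue
import socket
import threading

backlog = queue.Queue(maxsize=100_000)   # absorbs bursts between receiver and writer
dropped = 0

def receiver(port=2055):
    """Fast path: read packets off the wire and enqueue them, nothing else."""
    global dropped
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        packet = sock.recvfrom(8192)[0]
        try:
            backlog.put_nowait(packet)
        except queue.Full:
            dropped += 1                 # counting drops beats blocking the socket

def store(packet):
    pass                                 # placeholder for parsing and database insert

def writer():
    """Slow path: preprocessing and storage, decoupled from packet arrival."""
    while True:
        store(backlog.get())

threading.Thread(target=receiver, daemon=True).start()
writer()
```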

Correlating the data

Once the data is collected, you will want to make use of it, for example via reports. As you likely have multiple Flow emitters in your network, it is also likely that you have to handle multiple Flow versions. Therefore, it is crucial to preprocess and transform the Flows into a unified data format to provide version-spanning reporting. If you have a large network, you will need multiple Flow collectors to handle the vast amount of data. To provide collector-spanning reporting as well, you will eventually need a central database where it all comes together.

However, bringing it all together is often not enough. Flows only contain “raw” information about source and target: IP addresses, ports, and protocol information. Most of this only becomes valuable in a bigger scope – when you correlate the IP addresses with actual devices and their related services and topology. This enables you not only to track top talkers in your network, but also to keep an eye on their impact, ideally with a method that incorporates fault management.

Integration

Most likely, there are multiple monitoring tools in place. Maybe that is because no tool has covered it all yet, or because different departments and the people in charge have different preferences. As such, NetFlow tools will presumably never be isolated, but always part of a bigger picture. That’s why it is important to have a rich set of northbound interfaces. A relay functionality can be a big help if certain Flows also need to be redirected to legacy systems due to an existing business process, or just temporarily while a new solution is evaluated without touching the daily business of the NOC.

Another integration aspect is the support for multi-tenancy. Service providers often offer statistics and other report data to their customers to give them an overview of their purchased services’ performance. As Flow packets do not carry any customer information, correlation between Flows and customers is not a trivial task.

How to choose the right tool?

Over the years, many companies and people have realized the benefits of NetFlow. As a result, you now have dozens of free and commercial tools to choose from. As you will have noticed, a good NetFlow solution faces lots of requirements, and it is hard to cover them all.

Many free solutions do not have many contributors and consequently have to focus on specific parts of the NetFlow ecosystem – most often collecting, storing, and reporting. Whereas coverage of the different Flow protocols is often good, there is still room for improvement, especially on the integration side. Those free tools also often do not scale well enough to be used in a Telco environment.

Besides community-driven solutions, there are also some free professional solutions developed by companies. Most of them have a business model that allows free usage of the basic functionalities, but limits the integration, correlation, or storage functionalities. To unlock those features, you have to buy the paid version, which is reasonable, as it is made by a company that needs to pay its employees.

The unified approach

Eventually, it is likely that you will end up using a proprietary solution for your NetFlow requirements if you want to implement a solution in an advanced environment and make use of all NetFlow has to offer. That is, in a nutshell, End-to-End visibility of traffic in your network, which can only be achieved if your NetFlow tool can integrate very well in your inventory, performance, and fault management setup. As integration costs can be considerably high, a unified monitoring tool, even if it only provides basic Flow analysis and reporting capabilities, might provide the best ROI in this scenario.

Whatever road you take, make sure to keep the big picture in mind and try not to waste time implementing a solution that leads to a dead end. More information on a NetFlow monitoring technology solution is available here.

Thanks to Infosim for this article.

Ixia Has Your Secret Weapon Against SSL Threats

It has finally happened: thanks to advances in encryption, legacy security and monitoring tools are now useless when it comes to SSL. Read this white paper from Ixia to learn how this negatively impacts visibility into network applications such as e-mail, e-commerce, online banking, and data storage. Or, even worse, how advanced malware increasingly uses SSL sessions to hide, confident that security tools will neither inspect nor block its traffic.

Consider the following challenges:

  • Visibility into ephemeral key traffic
  • Coping with CPU-intensive encryption and decryption tasks
  • Chaining and handling multiple security tools
  • Meeting the demands of regulatory compliance

The very technology that made our applications secure is now a significant threat vector. The good news is, there is an effective solution for all of these problems. Learn how to eliminate SSL related threats in this white paper.

Thanks to Ixia for this article

Viavi’s On-Demand Application Dependency Mapping Central to Latest Observer Update

New Observer Apex offers faster loading speeds, connection dynamics functionality for packet visibility, on-demand application dependency mapping (ADM), improved widget editor for faster insights, VoIP service-level agreement assurance based on threshold monitoring, faster alarms, and more.

On-Demand ADM

On-demand ADM offers fast discovery of application interdependencies, building maps that visualize these complex relationships with simple clarity. Maps can be launched from any widget containing an IP address, IP pair, or client/server IP. The functionality automatically determines the worst connections based on application and network delay threshold deviations and sorts all connections by status, indicating whether that status is critical, marginal, or acceptable.

With the appropriate SPAN and TAP network feeds, Apex can produce this detailed application dependency map in seconds, potentially eliminating hours of war room time by identifying the application tier and domain on-demand. This functionality helps to enable true organizational collaboration.

Connection Dynamics

Apex provides total visibility into every network transaction, from your browser to anywhere in the world with an Internet connection. Begin with a dashboard view, drill down in just a few steps within any Apex widget containing an IP pair or an application, and get an easy-to-read ladder diagram of every packet and the key operational metrics tied to the conversation, including network, client, and server delay along with total application fulfillment time.

Observer Analyzer/GigaStor

Users will also enjoy a redesigned and streamlined Observer Analyzer interface featuring a Windows ribbon bar and a modern application look-and-feel.

Eliminate complexity and troubleshoot network issues faster with the Observer Platform.

Learn more about Observer Apex, Viavi, and their other available products.

NMSaaS’ The Importance of Network Performance Monitoring

In 2017, any drop in your network’s performance will affect your enterprise’s overall productivity. Ensuring the optimal performance of your network, in turn, requires sophisticated network performance monitoring. Through exceptional reporting and optimization, a high-quality network performance monitoring service can ensure that every aspect of your network is always performing at its best. Here we take a close look at how network performance monitoring works, and the key benefits it provides. 

How it works

Getting outside the system

In order to engage in effective network performance monitoring, the service that is monitoring the network must reside outside the network itself.

This is obvious in the case of monitoring a network for system failure: if the monitoring software resides within the network that fails, then it cannot report the failure.

However, keeping the monitoring service outside the network is just as important when it comes to monitoring performance. Otherwise, the service will be monitoring its own performance as yet another part of the system, which will compromise the accuracy of its performance data.

Monitoring key metrics

A high-quality network performance monitoring service monitors all of the devices that make up your company’s network, as well as all of the applications that depend on them.

One of the key metrics the monitoring service will track is your network’s response time. In the context of a computer network, response time is a measure of the time it takes for various components and applications within your company’s network to respond. For example, suppose you have an Enterprise Resource Planning (ERP) system installed on your network. Further, suppose an employee clicks on a tab in the ERP’s main dashboard and experiences a long delay. This indicates a poor response time, which is often due to a network with sub-optimal performance.
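As a simple illustration of a response-time probe, the sketch below times a full HTTP request to an internal application URL; the URL is a placeholder, and a monitoring service would run such probes on a schedule from outside the network being measured.

```python
import time
import urllib.request

def response_time_ms(url):
    """Time a complete HTTP request and response, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()                            # include the full transfer in the timing
    return (time.perf_counter() - start) * 1000.0

# Placeholder URL for an internal ERP dashboard page.
print(f"{response_time_ms('http://erp.example.internal/dashboard'):.1f} ms")
```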

Alerting and reporting

Depending on your company’s preferences, the network monitoring service will generally alert the system administrator if there is a significant drop in a key measure of network performance. Most high-quality monitoring services also provide network optimization, real-time performance data, and detailed periodic reports calling out any weak points the administrator needs to address to preserve and enhance your network’s overall health and performance.

The key benefits

A competitive edge

Suppose that you and your top competitor both have precisely the same enterprise network hardware, software, configuration and bandwidth. However, your network has sophisticated performance monitoring in place to keep it in optimal health, and your competitor’s network does not.

Consequently, your staff is sailing through the same network-dependent tasks that your competitor’s staff is slogging through. In this case, you have a significant edge over your top competitor in terms of productivity. On the other hand, if the roles are reversed, and your competitor is the one with high-quality network performance monitoring in place, then they are the one with the edge. Either way, it’s clear that network performance monitoring is a significant contributor to enterprise productivity.

Time and cost savings

Even a few seconds of delay in performing common tasks and opening key pages within a networked enterprise application can eventually result in hundreds of hours of lost time each year. Performance monitoring can save you that time. It can also help you optimize your server and other network components to help prolong their expected lifespans. On the other hand, hardware with consistent performance problems is under strain and may fail unexpectedly.

Consistency

Having a network with consistent optimal performance significantly improves productivity and morale. On the other hand, a network with inconsistent performance will lead to unpredictable work conditions and inconsistent output. For example, suppose your staff is working on an urgent project with a tight deadline. If the network’s performance is inconsistent, they will have no way of predicting whether they can reasonably meet the deadline or not. For this reason, a network with inconsistent performance can be as problematic as a network with frequent downtime.

Effective troubleshooting

There are a large number of subtle factors and gradually unfolding events that can degrade the performance of your network and hurt your company’s productivity without your being directly aware of it. One of the most common is a gradual increase in network usage over time without any improvements to the network itself to accommodate the increase. For example, a company might expand its staff without adding servers or increasing network bandwidth. As the new team members ramp up their usage, network performance begins to decline.

In other cases, an enterprise application connected to the network may suffer from sub-optimal performance that staff believe is due to the network, when in fact it’s an enterprise software issue. In all of these cases, a high-quality network performance monitoring service can summon the data to quickly troubleshoot the situation. If usage has gradually increased without any network improvements to support the increase, the monitoring service’s data can identify the need for improvements. If an enterprise application is exhibiting long response times, the monitoring service can identify whether the source of the problem resides in the network or the enterprise software itself. This capacity to quickly and effectively troubleshoot network-related performance problems can save time and heartache.

Moreover, performance monitoring can also serve as a preventative measure by recognizing and troubleshooting performance problems before they become system failures. For example, by identifying a network component that is not performing optimally and that may soon fail, a high-quality performance monitoring service can help prevent the system failure from occurring. 

Getting started with network performance monitoring

Network performance monitoring keeps a close eye on key indicators of your network’s health. Additionally, a high-quality external monitoring service oversees your network from the outside to provide reliable, real-time performance-related reporting, alerting and optimization.

There are several key benefits of network performance monitoring for enterprises: the monitoring service helps your network-dependent applications and tasks perform at their optimal speed, giving you an edge on the competition; it reduces network response times and helps prolong hardware lifespans, saving you time and money; it provides performance consistency, giving you a network you can count on under time-sensitive deadlines; and it greatly accelerates the network troubleshooting process, giving you the freedom to focus on your business and not your network. Get started with NMSaaS here.

Thanks to NMSaaS and author John Olson for this article

Managing Phantom Devices on Your Network


Watch the Infosim video: Detecting Phantom Devices on Your Network

So you run a network discovery and notice devices that you are not familiar with. A phantom device is an unmanaged device that should be monitored by your Network Management System (NMS).

These devices seem to show up even though you have processes in place to prevent this type of behavior. They could be devices connected to the wrong network, printers, BYOD devices, and so on. Because a phantom device is invisible to you, you are unaware of it, which opens the door to vulnerabilities, missing patches, misconfigurations, and more.

How to detect and integrate phantom devices

The first step is to find these devices so you know they exist and can track them. Once you find a device, you need to extract its information and understand how it is integrated into your network. The detection process cannot interfere with your daily business: you don’t want to add unnecessary load to the network, and false positives need to be avoided.

Once the phantom devices have been discovered, you need to set up a process to incorporate them into your Network Management System (NMS) or remove them from the network.
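As a rough illustration of the detection step, the sketch below sweeps a subnet with ping and reports any responding address that is not already in the managed-device inventory. The subnet, inventory file, and Linux-style ping flags are assumptions; a real discovery engine would also use ARP and SNMP and schedule its probes so it does not load the network.

```python
import ipaddress
import subprocess
from concurrent.futures import ThreadPoolExecutor

def responds(ip):
    """Return True if the address answers a single one-second ping (Linux flags)."""
    return subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                          stdout=subprocess.DEVNULL).returncode == 0

with open("managed_devices.txt") as f:          # hypothetical: one managed IP per line
    managed = {line.strip() for line in f if line.strip()}

subnet = [str(ip) for ip in ipaddress.ip_network("10.0.10.0/24").hosts()]
with ThreadPoolExecutor(max_workers=32) as pool:
    alive = {ip for ip, up in zip(subnet, pool.map(responds, subnet)) if up}

for phantom in sorted(alive - managed):
    print("phantom device:", phantom)
```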

Infosim StableNet

StableNet can help you with this process through its automated discovery engine, which allows you to tag and then report on phantom devices. You can see how phantom devices are connected to the network and, using SNMP and the NCCM module, manage or remove them from your network.