Infosim® Global Webinar Day - Return on Investment (ROI) for StableNet®

We all use a network performance management system to help improve the performance of our networks. But what is the return to the operations bottom line from using or upgrading these systems? This Thursday, March 26th, Jim Duster, CEO of Infosim, will be holding a webinar: “How do I convince my boss to buy a network management solution?”

Jim will discuss:

Why would anyone buy a network management system in the first place?

  • Mapping a technology purchase to the business value of making a purchase
  • Calculating a value larger than the technology total cost of ownership (TCO)
  • Two ROI tools (Live Demo)

You can sign up for this 30-minute webinar here.

March 26, 4:00 – 4:30 EST

A recording of this Webinar will be available to all who register!

(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

Unified Solution for Automated Network Management

Today’s networking technology, though very advanced, faces a major roadblock: the lack of automation in network management products. “These products are incapable of delivering a truly unified management approach as they are not an integrated solution but merely a collection of different programs bound together under one GUI to give them the appearance of an integrated solution,” notes Jim Duster, CEO, Infosim. Moreover, the need to continuously update new device information, configuration changes, and alerts and actions across these different toolsets adds an ongoing financial burden for enterprises. Addressing these concerns with a unique network management solution is Infosim, a manufacturer of Automated Service Fulfillment and Service Assurance solutions.

Infosim offers StableNet, a unified solution developed and designed to cover performance management, fault management, and configuration management with software that is engineered on a single code base and a consistent underlying data model. “StableNet is the only ‘suite’ within the network performance management software industry,” claims Duster. The solution addresses the existing operational and technical challenges of managing distributed, virtualized, and mission-critical IT infrastructures. “With this approach, we are able to create workflows in every unique customer business and industry to cover many processes efficiently,” he adds. For instance, StableNet monitors the production equipment of a manufacturing company. In case of an equipment failure, the error is reported and StableNet delivers the root cause of the problem while notifying an external service provider. The service provider’s technician can open an inspection window with StableNet, exchange the defective device and, after repair, provide feedback to the customer’s operations center.

To support flexible deployment of StableNet, the company offers the Infosim StableNet appliance, a high-performance, preconfigured, security-hardened hardware platform. “Appliances related to StableNet series reduce Total Cost of Ownership (TCO) by simplifying deployment, consolidating network infrastructure, and providing an extensible platform that can scale with your organization,” states Duster. StableNet also provides a low-cost agent platform called the StableNet Embedded Agent (SNEA), which enables highly distributed installations to support End-to-End (E2E) Visibility, Cloud Monitoring and the Internet of Things. SNEA is economical to deploy and is auto-discovered at tactical collection points in networks, resulting in a low TCO for collecting and processing network performance actions and alerts.

Infosim StableNet is deployed across the networks of major players in the Telco and Enterprise markets, including that of a German auto manufacturer. Acting as the client’s centralized system, StableNet reduced their toolset from over 10 disparate software and hardware offerings from multiple suppliers to fewer than four. This significantly reduced TCO while increasing service levels. “Siloed IT personnel who used to hide behind non-consolidated results from their individual solutions were all synchronized into one solution, speeding productivity, collaboration and communication,” states Duster.

Infosim is currently participating in advanced research projects on Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) with several universities and leading industry device manufacturers. “The company applies 50 percent of its new software development resource to customer needs which assists customers in achieving milestones in vendor agnostic device support additions, industry specific capabilities, and features that were envisioned by real users,” asserts Duster.

For the years ahead, Infosim plans to build upon its product capability by automating the processes and activities that produce guaranteed service levels and reduce the consumption of human resources in the Network Operations Center (NOC). “Our vision is to enable the Dark NOC, which means a large percent of the non-value adding activities of network engineers can be automated in our product, freeing network engineers to work on proactive new innovations and concepts,” Duster concludes.

Thanks to CIO Review for the article.

The Advancements of VoIP Quality Testing

Industry analysts say that approximately 85% of today’s networks will require upgrades to their data networks to properly support high-quality VoIP and video traffic.

Organizations are always looking for ways to reduce costs, which is why they often deploy VoIP by switching voice traffic over to existing LAN or WAN links.

In many cases, the data network the business has chosen must handle VoIP traffic accordingly. Generally speaking, voice traffic is uniquely time sensitive: it cannot be queued, and if datagrams are lost the conversation can become choppy.

To ensure this doesn’t happen, many organizations conduct VoIP quality tests in both the pre- and post-deployment stages.

Pre-deployment testing

There are several steps network engineers can take to ensure VoIP technology can meet expectations. Pre-deployment testing is the first step towards ensuring the network is ready to handle the VoIP traffic.

After the testing process, IT staff should be able to:

  • Determine the total VoIP traffic the network can handle without audio degradation.
  • Discover any configuration errors with the network and VoIP equipment.
  • Identify and resolve erratic problems that affect network and application performance.
  • Identify security holes that allow malicious eavesdropping or denial of service.
  • Guarantee call quality matches user expectations.
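
To make the last point concrete, here is a minimal sketch of how measured one-way delay, jitter, and packet loss can be turned into an estimated Mean Opinion Score (MOS) using a simplified ITU-T E-model; the jitter-buffer allowance and codec constants are illustrative assumptions for a G.711-like codec, not values from any particular test tool.

```python
# Illustrative sketch: estimating MOS from measured one-way delay, jitter,
# and packet loss using a simplified ITU-T G.107 E-model. The jitter-buffer
# allowance and codec constants (Ie, Bpl) are assumed, G.711-like values.

def estimate_mos(one_way_delay_ms: float, jitter_ms: float, loss_pct: float) -> float:
    # Effective delay: add a rough jitter-buffer allowance (assumption).
    d = one_way_delay_ms + 2 * jitter_ms + 10.0

    # Delay impairment (Id), simplified.
    delay_impairment = 0.024 * d
    if d > 177.3:
        delay_impairment += 0.11 * (d - 177.3)

    # Codec/loss impairment (Ie_eff) for an assumed G.711-like codec.
    ie, bpl = 0.0, 25.1
    loss_impairment = ie + (95.0 - ie) * loss_pct / (loss_pct + bpl)

    # R-factor, then the standard mapping from R to MOS.
    r = 93.2 - delay_impairment - loss_impairment
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

if __name__ == "__main__":
    # Example: 80 ms one-way delay, 20 ms jitter, 1% packet loss.
    print(round(estimate_mos(80.0, 20.0, 1.0), 2))  # roughly "toll quality"
```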

Post-deployment testing

Organizations that already have VoIP/video need to constantly and easily monitor the quality of those links to ensure good quality of service. Just because it was fine when you first installed it doesn’t mean that it is still working well today, or will be tomorrow.

The main objective of post-deployment VoIP testing is to measure the quality of the system before you decide to go fully live with it. This will, in turn, stop people from complaining about poor-quality calls.

Post-deployment testing should be done early and often to minimize the cost of fault resolution and to provide an opportunity to apply lessons learned to later stages of the installation.

In both pre- and post-deployment, the testing needs to be simple to set up and provide at-a-glance, actionable information, including alarms when there is a problem.

Continuous monitoring

In many cases your network changes every day as devices are added or removed; these could include laptops, IP phones or even routers. All of these contribute to the continuous churn of the IP network.

A key driving factor for any business is finding faults before they become a hindrance to the company; regular monitoring helps eliminate these potential threats.

In this manner, you’ll receive maximum benefit from your VoIP investment. Regular monitoring builds upon all the assessments and testing performed in support of a deployment. You continue to verify key quality metrics of all the devices and the overall IP network health.
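
As a minimal sketch of what continuous monitoring can look like in practice, the loop below polls a quality metric at a fixed interval and raises an alarm when it crosses a threshold. The probe function, threshold, and interval are placeholders to be replaced by whatever measurement your tooling actually provides.

```python
# Minimal sketch of continuous monitoring: poll a key quality metric at a
# fixed interval and raise an alarm when it crosses a threshold.
import random
import time

def probe_packet_loss_pct() -> float:
    """Placeholder probe; replace with a real measurement (e.g. an SNMP poll)."""
    return random.uniform(0.0, 3.0)

LOSS_ALARM_THRESHOLD_PCT = 1.0   # illustrative threshold
POLL_INTERVAL_SECONDS = 60       # illustrative polling interval

def monitor(cycles: int = 5) -> None:
    for _ in range(cycles):
        loss = probe_packet_loss_pct()
        status = "ALARM" if loss > LOSS_ALARM_THRESHOLD_PCT else "OK"
        print(f"{status}: packet loss {loss:.2f}%")
        time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    monitor()
```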

If you found this interesting, have a look at the recording of one of our webinars on this topic for an in-depth look.

Thanks to NMSaaS for the article.

Infosim® Announces Release of StableNet® 7.0

Infosim®, the technology leader in automated Service Fulfillment and Service Assurance solutions, today announced the release of its award-winning software suite StableNet® version 7.0 for Telco and Enterprise customers.

StableNet® 7.0 provides a significant number of powerful new features, including:

  • StableNet® Embedded Agent (SNEA) that allows for highly distributed installations to support End-to-End (E2E) Visibility, Cloud Monitoring and Internet of Things (IoT)
  • StableNet® Network Configuration & Change Management (NCCM) now offers a REST API extension to allow an easy workflow integration
  • New look and feel of the StableNet® GUI to improve the user experience in terms of usability and workflow
  • StableNet® Server is now based on WildFly 8.2, a modern Java Application Server that supports web services for easier integration of 3rd party systems
  • Extended device support for Phybridge, Fortinet Firewalls, Arista, Sofaware (Checkpoint), Mitel, Keysource UPS, Cisco Meraki and Ixia

StableNet® version 7.0 is available for purchase now. Customers with current maintenance contracts may upgrade free of charge as per the terms and conditions of their contract.

Supporting Quotes:

Marius Heuler, CTO Infosim®

“With this new release of StableNet®, we have enhanced our technological basis and laid out the groundwork to support extensive new automation features for our customers. This is another big step forward towards the industrialization of modern network management.”

Thanks to Infosim for the article.

Why SNMP Monitoring is Crucial for your Enterprise

What is SNMP? Why should we use it? These are common questions people ask when deciding if it’s the right feature for them, and the answers are simple.

Simple Network Management Protocol (SNMP) is an “Internet-standard protocol for managing devices on IP networks”. Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks and more.

Key functions

  • Collects data about its local environment.
  • Stores and retrieves administration information as defined in the MIB.
  • Signals an event to the manager.
  • Acts as a proxy for some non–SNMP manageable network device.

SNMP typically uses one or more administrative computers, called managers, which have the task of monitoring or managing a group of hosts or devices on a computer network.

An SNMP monitoring tool provides valuable insight to any network administrator who requires complete visibility into the network, and it acts as a primary component of a complete management solution. Agents on managed devices report information via SNMP to the manager.

The agents expose management data on the managed systems as variables. The protocol also permits active management tasks, such as modifying and applying a new configuration through remote modification of these variables.

Companies such as Paessler and ManageEngine have been providing customers with reliable SNMP monitoring for years, and it’s obvious why.

Why use it?

It delivers information in a common, non-proprietary manner, making it easy for an administrator to manage devices from different vendors using the same tools and interface.

Its power is in the fact that it is a standard: one SNMP-compliant management station can communicate with agents from multiple vendors, and do so simultaneously.

Another advantage of SNMP is in the type of data that can be acquired. For example, when using a protocol analyzer to monitor network traffic from a switch’s SPAN or mirror port, physical layer errors are invisible. This is because switches do not forward error packets to either the original destination port or to the analysis port.

However, the switch maintains a count of the discarded error frames and this counter can be retrieved via a simple network management protocol query.
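
As a minimal sketch of such a query, the example below reads the IF-MIB ifInErrors counter for one interface, assuming the open-source pysnmp library (classic synchronous high-level API), SNMPv2c, and a device at a placeholder address with the 'public' community string.

```python
# Minimal sketch: reading an interface error counter via an SNMP GET.
# Assumes the pysnmp library (classic synchronous hlapi), SNMPv2c, and a
# reachable device at the placeholder address 192.0.2.1 with community 'public'.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),                      # SNMPv2c
        UdpTransportTarget(('192.0.2.1', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('IF-MIB', 'ifInErrors', 1)),   # ifIndex 1
    )
)

if error_indication:
    print(f"SNMP error: {error_indication}")
elif error_status:
    print(f"SNMP error status: {error_status.prettyPrint()}")
else:
    for var_bind in var_binds:
        # e.g. "IF-MIB::ifInErrors.1 = 42"
        print(' = '.join(x.prettyPrint() for x in var_bind))
```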

Conclusion

When selecting a solution like this, choose one that delivers full network coverage for multi-vendor hardware networks, including a console for managing devices anywhere on your LAN or WAN.

If you want additional information, download our free whitepaper below.

NMSaaS- Top 10 Reasons to Consider a SaaS Based Solution

Thanks to NMSaaS for the article. 

Network Device Backup is a Necessity with Increased Cyber Attacks

In the past few years cyber attacks have become far more prevalent, with data, personal records and financial information stolen and sold on the black market in a matter of days. Major companies such as eBay, Domino’s, the Montana Health Department and even the White House have fallen victim to cyber criminals.

Security Breach

The most recent scandal involved Anthem, one of the country’s largest health insurers. They recently announced that their systems had been hacked and over 80 million customers’ records had been stolen. This information ranged from social security numbers and email data to addresses and income information.

Systems Crashing

If hackers can break into your system, they can take down your system. Back in 2012 Ulster Bank’s systems crashed; it is still unreported whether it was a cyber attack or not, but regardless there was a crisis. Ulster Bank’s entire banking system went down, and people couldn’t take money out, pay bills or even pay for food. As a result of their negligence they were forced to pay substantial fines.

This could have all been avoided if they had installed a proper Network Device Backup system.

Why choose a Network Device Backup system?

If your system goes down, you need to find the easiest and quickest way to get it back up and running. This means having an up-to-date network backup plan in place that enables you to quickly swap out the faulty device and restore its configuration from backup.

Techworld ran a survey and found that 33% of companies do not back up their network device configurations.

The reasons why you should have device configuration backups in place are as follows:

  • Disaster recovery and business continuity.
  • Network compliance.
  • Reduced downtime due to failed devices.
  • Quick reestablishment of device configs.
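
To make this concrete, here is a minimal sketch of what an automated configuration backup might look like, assuming the open-source Netmiko library, SSH access to a Cisco IOS-style device, and placeholder credentials; a real deployment would schedule this across the whole estate and store the results centrally.

```python
# Minimal sketch: backing up a device's running configuration over SSH.
# Assumes the Netmiko library and working SSH access; the host, credentials,
# and device type below are placeholders.
from datetime import datetime
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",
    "username": "backup-user",
    "password": "example-password",
}

with ConnectHandler(**device) as conn:
    running_config = conn.send_command("show running-config")

filename = f"{device['host']}_{datetime.now():%Y%m%d-%H%M%S}.cfg"
with open(filename, "w") as backup_file:
    backup_file.write(running_config)

print(f"Saved configuration backup to {filename}")
```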

It’s evident that increased security is a necessity, but even more important is backing up your system. If the crash of Ulster Bank in 2012 is anything to go by, we should all be backing up our systems. If you would like to learn more about this topic, click below.

Telnet Networks - Contact Us

Thanks to NMSaaS for the article.

The Highs and Lows of SaaS Network Management

In the technology era we live in, something that cannot be ignored is SaaS network management. In business, everything you work with is in some shape or form part of the technology network. This may include printers, phones, routers and even electronic note pads, and all of these need to be managed successfully within the business to avoid misfortunes.

While looking at SaaS network management there are always going to be some pros and cons.

The ease of deployment

Because SaaS exists in the cloud, it eliminates the need to install software on a system and ensure that it is configured properly. The management of SaaS is typically handled through simple interfaces that allow the user to configure and provision the service as required.

As more establishments move their formerly in-house systems into the cloud, integrating SaaS with these existing services requires limited effort.

Lower costs

SaaS has an advantage regarding costs since it usually resides in a shared or multitenant environment where hardware and software license costs are low compared with the traditional model. Maintenance costs are reduced as well, since the SaaS provider owns the environment and the cost is split among all customers that use that solution.

Scalability and integration

Usually, SaaS solutions reside in cloud environments that are scalable and integrate with other SaaS offerings. Compared with the traditional model, users do not have to buy another server or software; they only need to enable a new SaaS offering, and the SaaS provider owns the server capacity planning.

Obviously, in this world nothing is perfect, and there are some slight downsides to SaaS network management. They are minimal, and some of them will not apply to everyone, but it’s still necessary to mention them.

Limited applications

SaaS is gaining in popularity. However, there are still many software applications that don’t offer a hosted platform. You may find it essential to still host certain applications on site, especially if your company relies on multiple software solutions.

Maintenance

Obviously SaaS adoption makes maintenance simpler, because the vendor has more control over the full installation. But the challenge here can be psychological. For an on-premise installation, the customer accepts responsibility for maintenance and allocates human resources for it. With SaaS, the customer tends to think that he or she is released from these responsibilities, which is fairly true in most cases, but you should still always keep an eye on the software.

Dependence on high speed internet

A high-speed internet connection is a must for SaaS. While this is not a big challenge in developed nations, it can be a serious limitation in developing nations with poor infrastructure and unreliable connectivity. Thus, firms should choose wisely, understanding the connectivity bottleneck.

As you can see, the pros outweigh the cons. In business today all organizations are looking for cheaper and faster resources, and it’s obvious that SaaS network management is one of them.

Thanks to NMSaaS for the article.

Virtualization Gets Real

Optimizing NFV in Wireless Networks

The promise of virtualization looms large: greater ability to fast-track services with lower costs, and, ultimately, less complexity. As virtualization evolves to encompass network functions and more, service delivery will increasingly benefit from using a common virtual compute and storage infrastructure.

Ultimately, providers will realize:

Lower total cost of ownership (TCO) by replacing dedicated appliances with commodity hardware and software-based control.

Greater service agility and scalability with functions stitched together into dynamic, highly efficient “service chains” in which each function follows the most appropriate and cost-effective path.

Wired and wireless network convergence as the two increasingly share converged networks, virtualized billing, signaling, security functions, and other common underlying elements of provisioning. Management and orchestration (M&O) and handoffs between infrastructures will become more seamless as protocol gateways and other systems and devices migrate to the cloud.

On-the-fly self-provisioning with end users empowered to change services, add features, enable security options, and tweak billing plans in near real-time.

At the end of the day, sharing a common pool of hardware and flexibly allocated resources will deliver far greater efficiency, regardless of what functions are being run and the services being delivered. But the challenges inherent in moving vital networking functions to the cloud loom even larger than the promise, and are quickly becoming real.

The Lifecycle NFV Challenge: Through and Beyond Hybrid Networks

Just two years after the European Telecommunications Standards Institute (ETSI) Industry Specification Group (ISG) outlined the concept, carriers worldwide are moving from basic proof-of-concept (PoC) demonstrations in the lab to serious field trials of Network Functions Virtualization (NFV). Doing so means making sure new devices and unproven techniques deliver the same (or better) performance when deployments go live.

The risks of not doing so — lost revenues, lagging reputations, churn — are enough to prompt operators to take things in stages. Most will look to virtualize the “low-hanging fruit” first.

Devices like firewalls, broadband remote access servers (BRAS), policy servers, IMS components, and customer premises equipment (CPE) make ideal candidates for quickly lowering CapEx and OpEx without tackling huge real-time processing requirements. Core routing and switching functions responsible for data plane traffic will follow as NFV matures and performance increases.

In the meantime, hybrid networks will be a reality for years to come, potentially adding complexity and even cost (redundant systems, additional licenses) near-term. Operators need to ask key questions, and adopt new techniques for answering them, in order to benefit sooner rather than later.

To thoroughly test virtualization, testing itself must partly become virtualized. Working in tandem with traditional strategies throughout the migration life cycle, new virtualized test approaches help providers explore these four key questions:

1. What to virtualize and when? To find this answer, operators need to baseline the performance of existing network functions and develop realistic goals for the virtualized deployment. New and traditional methods can be used to measure and model quality and new configurations.

2. How do we get it to work? During development and quality assurance, virtualized test capabilities should be used to speed and streamline testing. Multiple engineers need to be able to instantiate and evaluate virtual machines (VMs) on demand, and at the same time.

3. Will it scale? Here, traditional testing is needed, with powerful hardware systems used to simulate high-scale traffic conditions and session rates. Extreme precision and load aid in emulating real-world capacity to gauge elasticity as well as performance.

4. Will it perform in the real world? The performance of newly virtualized network functions (VNFs) must be demonstrated on its own, and in the context of the overall architecture and end-to-end services. New infrastructure components such as hypervisors and virtual switches (vSwitches) need to be fully assessed and their vulnerability minimized.

Avoiding New Bottlenecks and Blind Spots

Each layer of the new architectural model has the potential to compromise performance. In sourcing new devices and devising techniques, several aspects should be explored at each level:

At the hardware layer, server features and performance characteristics will vary from vendor to vendor. Driver-level bottlenecks can be caused by routine aspects such as CPU and memory read/writes.

With more than one type of server platform often in play, testing must be conducted to ensure consistent and predictable performance as Virtual Machines (VMs) are deployed and moved from one type of server to another. The performance level of NICs can make or break the entire system as well, with performance dramatically impacted by simply not having the most recent interfaces or drivers.

Virtual switch implementations vary greatly, with some coming packaged with hypervisors and others functioning standalone. vSwitches may also vary from hypervisor to hypervisor, with some favoring proprietary technology while others leverage open source. Finally, functionality varies widely, with some providing very basic L2 bridge functionality and others acting as full-blown virtual routers.

In comparing and evaluating vSwitch options, operators need to weigh performance, throughput, and functionality against utilization. During provisioning, careful attention must also be given to resource allocation and the tuning of the system to accommodate the intended workload (data plane, control plane, signaling).

Moving up the stack, hypervisors deliver virtual access to underlying compute resources, enabling features like fast start/stop of VMs, snapshot, and VM migration. Hypervisors allow virtual resources (memory, CPU, and the like) to be strictly provisioned to each VM, and enable consolidation of physical servers onto a virtual stack on a single server.

Again, operators have multiple choices. Commercial products may offer more advanced features, while open source alternatives have the broader support of the NFV community. In making their selection, operators should evaluate both the overall performance of each potential hypervisor, and the requirements and impact of its unique feature set.

Management and orchestration is undergoing a fundamental shift from managing physical boxes to managing virtualized functionality. Increased automation is required as this layer must interact with both virtualized server and network infrastructures, often using OpenStack protocols and, in many cases, SDN.

VMs and VNFs themselves ultimately impact performance as each requires virtualized resources (memory, storage, and vNICs), and involves a certain number of I/O interfaces. In deploying a VM, it must be verified that the host OS is compatible with the hypervisor. For each VNF, operators need to know which hypervisors the VMs have been verified on, and assess the ability of the host OS to talk to both virtual I/O and the physical layer. The ultimate portability, or ability of a VM to be moved between servers, must also be demonstrated.

Once deployments go live, other overarching aspects of performance, like security, need to be safeguarded. With so much now occurring on a single server, migration to the cloud introduces some formidable new visibility challenges that must be dealt with from start to finish:

Pinpointing performance issues grows more difficult. Boundaries may blur between hypervisors, vSwitches, and even VMs themselves. The inability to source issues can quickly give way to finger pointing that wastes valuable time.

New blind spots also arise. In a traditional environment, traffic is visible on the wire connected to the monitoring tools of choice. Inter-VM traffic within virtualized servers, however, is managed by the hypervisor’s vSwitch, without traversing the physical wire visible to monitoring tools. Traditional security and performance monitoring tools can’t see above the vSwitch, where “east-west” traffic now flows between guest VMs. This newly created gap in visibility may attract intruders and mask pressing performance issues.

Monitoring tool requirements increase as tools tasked with filtering data at rates for which they were not designed quickly become overburdened.

Audit trails may be disrupted, making documenting compliance with industry regulations more difficult, and increasing the risk of incurring fines and bad publicity.

To overcome these emerging obstacles, a new virtual visibility architecture is evolving. As with lab testing, physical and virtual approaches to monitoring live networks are now needed to achieve 100% visibility, replicate field issues, and maintain defenses. New virtualized monitoring Taps (vTaps) add the visibility into inter-VM traffic that traditional tools don’t deliver.

From There On…

The bottom line is that the road to the virtualization of the network will be a long one, without a clear end and filled with potential detours and unforeseeable delays. But with the industry as a whole banding together to pave the way, NFV and its counterpart, Software-Defined Networking (SDN), represent a paradigm shift the likes of which the industry hasn’t seen since mobilization itself.

As with mobility, virtualization may cycle through some glitches, retrenching, and iterations on its way to becoming the norm. And once again, providers who embrace the change, validating the core concepts and measuring success each step of the way, will benefit most (as well as first), setting themselves up to innovate, lead, and deliver for decades to come.

Thanks to OSP for the article.

An Insight into Fault Monitoring

Fault monitoring is the process used to monitor all hardware, software, and network configurations for any deviations from normal operating conditions. This monitoring typically covers major and minor changes to the expected bandwidth, performance, and utilization of the established computing environment.

Some of the features in fault monitoring may include:

  • Automated correlation of root cause events without having to code or update rules
  • Enrichment of alarm information and dashboards with business-impacting data
  • Alarm monitors for crucial KPIs of all network assets, created automatically
  • Integration via SMS, pager, email, trouble ticket, and script execution on alarm events

Network fault management is a big challenge when you have a small team. The duty becomes more complicated if you have to manage a remote site: you may dispatch a technician only to find that the problem is something you could have fixed remotely, or that you don’t have the right equipment and have to go back and get it, which hurts your service restoration time.

In most cases, the time taken to identify the root cause of a problem is actually longer than the time taken to fix it. Having a proactive network fault monitoring tool helps you quickly identify the root cause of the problem and fix it before end-users notice it.

Finding a tool that can do root cause analysis in real time has many benefits. With such a tool, your engineers get to focus on service-affecting events and are able to properly prioritize them. Authentic problem analysis in real time, and the subsequent problem solving, requires precise automation of several interacting components.
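
To illustrate the idea of automated root-cause correlation, here is a minimal, hypothetical sketch: given a map of which upstream device each device depends on, alarms for devices whose parent is also down are suppressed as symptoms, leaving only the likely root causes. The topology and alarm data are invented for illustration.

```python
# Minimal sketch of alarm root-cause correlation: suppress alarms for devices
# whose upstream dependency is also down, keeping only likely root causes.
# The topology and alarm data are hypothetical examples.

# Map each device to the upstream device it depends on (None = no parent).
upstream = {
    "core-router": None,
    "dist-switch-1": "core-router",
    "access-switch-7": "dist-switch-1",
    "server-42": "access-switch-7",
}

down_devices = {"dist-switch-1", "access-switch-7", "server-42"}

root_causes = [
    device for device in down_devices
    if upstream.get(device) not in down_devices   # parent is up (or unknown)
]

symptoms = sorted(down_devices - set(root_causes))

print("Root cause(s):", root_causes)            # -> ['dist-switch-1']
print("Suppressed symptom alarms:", symptoms)   # -> ['access-switch-7', 'server-42']
```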

If you would like to learn more about this topic, please feel free to click below to get our educational whitepaper. It will give you greater insight into cloud services such as fault monitoring and many more.

NMSaaS- 10 Reasons to Consider a SaaS Network Management Solution

Thanks to NMSaaS for the article.

Top Three Policies in Network Configuration Management

When a network needs repair, modification, expansion or upgrading, the administrator refers to the network configuration management database to determine the best course of action.

This database contains the locations and network addresses of all hardware devices, as well as information about the programs, versions and updates installed on network computers.

A main focus to consider when discussing network configuration management is policy checking. There are three key policy checking capabilities which should not be ignored:

  1. Regulatory Compliance Policy
  2. Vendor Default Policy
  3. Security Access Policy

Regulatory compliance policy

The obvious one is regulatory compliance policy. If you have a network configuration management system, you should implement regular checks to ensure consistency with design standards, processes, and directives from internal and external regulators.

In the past, people would use manual processes. This was time intensive, costly and inaccurate, and, more importantly, your business was at risk and open to potential attacks because you lacked the desired real-time visibility.

Now, thanks to the cloud, this is all a thing of the past.

Vendor default policy

Vendor default policy is a best-practice recommendation to scan the configurations of your infrastructure devices and eliminate potential holes, so that risk is mitigated and infrastructure security access is maintained at the highest possible level.

Such holes may arise when configuration settings are overlooked. Sometimes default usernames and passwords, or SNMP ‘public’ and ‘private’ community strings, are not removed, leaving a hole in your security open to potential attacks.
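
As a minimal sketch of this kind of vendor-default check, the example below scans saved configuration files for a few common defaults (SNMP 'public'/'private' community strings and a default admin account). The patterns, file layout, and Cisco-style syntax are illustrative assumptions, not a complete policy.

```python
# Minimal sketch: scanning saved device configurations for common vendor
# defaults. The patterns and file layout are illustrative, not exhaustive.
import re
from pathlib import Path

DEFAULT_PATTERNS = {
    "SNMP 'public' community": re.compile(r"snmp-server community public", re.I),
    "SNMP 'private' community": re.compile(r"snmp-server community private", re.I),
    "default admin account": re.compile(r"username admin password", re.I),
}

def scan_config(path: Path) -> list[str]:
    """Return a list of vendor-default findings for one config file."""
    text = path.read_text(errors="ignore")
    return [name for name, pattern in DEFAULT_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    # Assumes config backups are stored as *.cfg files in ./config-backups
    for config_file in Path("config-backups").glob("*.cfg"):
        findings = scan_config(config_file)
        if findings:
            print(f"{config_file.name}: {', '.join(findings)}")
```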

Security Access Policy

Access to infrastructure devices is policed and controlled with the use of AAA (Authentication, Authorization, Accounting), TACACS+, RADIUS servers, and ACLs (Access Control Lists) so as to increase security access into device operating systems.

It is therefore very important that the configuration elements of infrastructure devices are consistent across the managed estate. It is highly recommended to create security policies so that security access configurations can be policed for consistency and reported on if they change, or if vital elements of the configuration are missing.

Thanks to NMSaaS for the article.