The Benefits of Using 2-Wire Digital Master Clock System

If you are considering a wired clock system for your facility, Sapling's 2-Wire Digital Master Clock System may be the best option for you. Take a look at the unique advantages of this system below. In addition to the written description, check out the video at the bottom to see a visual depiction of how the 2-Wire Digital Clock System works.

Power/Data on the Same Line

Most wired clock systems require three or four wires. With Sapling’s 2-Wire Digital Communication System, the converter box supplies the power and amplifies the data, so that power and data are integrated on the same line. Fewer wires mean a cleaner, less cumbersome and more efficient system.

Instant Correction

As with all of Sapling’s clock systems, our goal is to provide synchronized, accurate time to keep your education, healthcare or business facility operating at its best. The 2-Wire Digital Communication System provides time updates to all of the clocks as often as once per second. With such frequent corrections, your clocks are guaranteed to show the accurate time, all the time. Another auto-correction feature is the five-minute synchronization after a power outage. If power is lost, you won’t have to worry about resetting the clocks or waiting a few hours for them to be re-synchronized. Within five minutes of power being restored, the master clock will send a signal to reset all of the clocks to the accurate time. Even if a power outage causes some temporary chaos in other areas, clock malfunctions and time inaccuracies will not be added to the mix. Sapling takes care of that part for you.

Effortless Installation

The installation of the 2-Wire System is simple and straightforward for a few reasons. First, the low voltage requirement means that you do not need a certified electrician to install the system in most countries. Having two wires going from the master clock to each individual clock instead of four also makes setup quicker and easier. Even if a mistake is made with the two wires, our reverse polarity detection technology will recognize the error and correct it automatically. What could be easier than that?

Hopefully, the only thing easier is making the decision to install Sapling’s 2-Wire Digital Master Clock System for its advanced technological capabilities, ease, accuracy and the superior quality and service that you can expect from Sapling.

Thanks to Sapling for the article.

Key Factors in NCCM and CMDB Integration – Part 1 Discovery

Part I Discovery

“I am a rock, I am an island…” These lyrics by Simon and Garfunkel pretty appropriately summarize what most IT companies would like you to believe about their products: they are islands that stand alone and don’t need any other products to be useful. Well, despite what they want, the truth is closer to the lyrics by the Rolling Stones – “We all need someone we can lean on”. Music history aside, the fact is that interoperability and integration are among the most important keys to a successful IT Operations Management system. Why? Because no product truly does it all; and, when done correctly, the whole can be greater than the sum of the individual parts. Let’s take a look at the most common IT asset management structure and investigate the key factors in NCCM and CMDB integration.

Step 1. Discovery. The heart of any IT operations management system is a database of the assets that are being managed. This database is commonly referred to as the Configuration Management Database, or CMDB. The CMDB contains all of the important details about the components of an IT system and the relationships between these items. This includes information regarding the components of an asset, like physical parts and operating systems, as well as upstream and downstream dependencies. A typical item in a CMDB may have hundreds of individual pieces of information about it stored in the database. A fully populated and up-to-date CMDB is an extremely useful data warehouse. But that raises the question: how does a CMDB get to be fully populated in the first place?

That’s where discovery software comes in. Inventory discovery systems can be used to automatically gather these critical pieces of asset information directly from the devices themselves. Most hardware and software vendors have built-in ways of “pulling” that data from the device. Network systems mainly use SNMP. Windows servers can use SNMP as well as the Microsoft proprietary WMI protocol. Other vendors, like VMware, also expose an API that can be accessed to gather this data. Once the data has been gathered, the discovery system should be able to transfer it to the CMDB. It may be a “push” from the discovery system to the CMDB, or the CMDB may “pull” the data the other way, but there should always be a means of transfer. This matters because the primary alternatives for populating the CMDB are manually entering the data (sounds like fun) or uploading spreadsheet CSV files (but how do those get populated?).
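To make the push-versus-pull transfer concrete, here is a minimal, hypothetical sketch in Python. The SNMP results are simulated as plain dictionaries; a real discovery system would poll devices over SNMP, WMI, or a vendor API, and the CMDB would be a real database rather than an in-memory dict.

```python
# Hypothetical sketch of the "push" transfer model: the discovery system
# gathers asset facts and writes them into the CMDB keyed by IP address.
# All names and values here are illustrative, not a real product's API.

def discover_device(ip):
    """Simulate a discovery poll returning basic asset facts for one device.

    In practice these values would come from SNMP OIDs such as sysName.0
    and sysDescr.0, or from WMI queries on Windows servers.
    """
    return {
        "ip": ip,
        "hostname": f"switch-{ip.split('.')[-1]}",
        "os": "ExampleOS 15.2",
        "vendor": "ExampleVendor",
    }

def push_to_cmdb(cmdb, record):
    """'Push' model: the discovery system writes records into the CMDB."""
    cmdb[record["ip"]] = record

cmdb = {}  # stand-in for the real Configuration Management Database
for ip in ["10.0.0.1", "10.0.0.2"]:
    push_to_cmdb(cmdb, discover_device(ip))

print(len(cmdb))  # 2 devices populated automatically, no spreadsheets needed
```

The "pull" model would simply invert the last step: the CMDB queries the discovery system's data store on a schedule instead of receiving writes.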

Step 2. Updating. Once the CMDB is populated and running, you are done with the discovery software, right? Um, wrong. Unless your network never changes (please email me if that is the case, because I’d love to talk to you), you need to constantly update the CMDB. In fact, in many organizations, the CMDB has a place in it for pre-deployment, meaning that new systems due to come online soon are entered into the CMDB ahead of time. The good news is that your discovery system should be able to get that information out of the CMDB and then use it as the basis for a future discovery run, which in turn adds details about the device back to the CMDB, and so on. When implemented properly and working well, this cyclical operation really can save enormous amounts of time and effort.
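One turn of that cycle can be sketched as follows. This is an illustrative toy, assuming a dict-based CMDB and a simulated discovery run; the field names (`status`, `pre-deployment`) are hypothetical, not a standard schema.

```python
# Hypothetical sketch of the cyclical update loop: pre-deployment entries
# in the CMDB seed the next discovery run, which writes details back.

cmdb = {
    "10.0.1.5": {"status": "pre-deployment"},  # entered before go-live
    "10.0.0.1": {"status": "active", "os": "ExampleOS 15.2"},
}

def seed_discovery_targets(cmdb):
    """Pull pre-deployment entries out of the CMDB as discovery targets."""
    return [ip for ip, rec in cmdb.items() if rec["status"] == "pre-deployment"]

def run_discovery(ip):
    """Simulated discovery result; real data would come from SNMP/WMI/APIs."""
    return {"status": "active", "os": "ExampleOS 15.2", "hostname": f"new-{ip}"}

# One turn of the cycle: discover the seeded targets, write details back.
for ip in seed_discovery_targets(cmdb):
    cmdb[ip].update(run_discovery(ip))

print(cmdb["10.0.1.5"]["status"])  # the pre-deployment entry is now "active"
```

Repeating this loop on a schedule is what keeps the CMDB current without manual re-entry.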

In the next post in this series, I’ll explore how having an up to date asset system makes other aspects of NCCM like Backup, Configuration, and Policy Checking much easier.


Thanks to NMSaaS for the article.

Rogers Set to ‘Ignite’ Gigabit Rollout in Toronto; 4m Homes to be Covered by End-2016

Rogers Communications, Canada’s second largest broadband provider by subscribers, has confirmed that the rollout of its planned 1Gbps internet service will commence this year in downtown Toronto and the Greater Toronto Area (GTA). Parts of the city earmarked for coverage include: Harbourfront, Cabbagetown, Riverdale, King Street West, Queen Street West, the Financial District, the Discovery District, Yonge & Bloor, Vaughan, Markham, Richmond Hill, Pickering, Ajax and Whitby. By the end of 2016 the Gigabit service – which will be branded ‘Ignite’ – will be available to over four million homes, representing Rogers’ entire cable footprint across Ontario and Atlantic Canada.

Thanks to TeleGeography for the article. 

Benefits of Network Security Forensics

The networks that your business operates on are often open and complex.

Your IT department is responsible for mitigating network risks, managing performance and auditing data to ensure functionality.

Using NetFlow forensics can help your IT team maintain the competitiveness and reliability of the systems required to run your business.

In IT, network security forensics involves the monitoring and analysis of your network’s traffic to gather information, obtain legal evidence and detect network intrusions.

These activities help your company perform the following actions.

  • Adjust to increased data and NetFlow volumes
  • Identify heightened security vulnerabilities and threats
  • Align with corporate and legislative compliance requirements
  • Contain network costs
  • Analyze network performance demands
  • Recommend budget-friendly implementations and system upgrades

NetFlow forensics helps your company maintain accountability and trace usage; these tasks become increasingly difficult as your network grows more intricate and relies on more systems.

While your company likely has standard security measures in place, e.g. firewalls, intrusion detection systems and sniffers, these tools lack the capability to record all network activity.

Tracking all your network activity in real-time at granular levels is critical to the success of your organization.

Until recently, the ability to perform this type of network forensics has been limited due to a lack of scalability.

Now, there are web-based solutions that can collect and store this data to assist your IT department with this daunting task.

Solution capabilities include:

  • Record NetFlow data at a micro level
  • Discover security breaches and alert system administrators in real-time
  • Identify trends and establish performance baselines
  • React to irregular traffic movements and applications
  • Provision network services more effectively

The ability to capture all of this activity will empower your IT department to provide more thorough analysis and take faster action to resolve system issues.
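As a simple illustration of the kind of analysis this enables, the sketch below aggregates flow records into per-host byte counts to surface "top talkers". The records are simulated dictionaries, not a real NetFlow v5/v9 parser; in production the records would be exported by routers and decoded by a collector.

```python
# Illustrative sketch (not a real NetFlow parser): aggregating simulated
# flow records into per-source byte counts for forensic review.
from collections import defaultdict

# Simulated flow records; real ones are exported by routers as NetFlow/IPFIX.
flows = [
    {"src": "10.0.0.5", "dst": "8.8.8.8",  "proto": "UDP", "bytes": 120},
    {"src": "10.0.0.7", "dst": "10.0.0.9", "proto": "TCP", "bytes": 50_000},
    {"src": "10.0.0.5", "dst": "1.1.1.1",  "proto": "UDP", "bytes": 300},
]

# Roll up bytes per source host.
bytes_by_src = defaultdict(int)
for flow in flows:
    bytes_by_src[flow["src"]] += flow["bytes"]

# Rank hosts by volume: the "top talkers" an analyst investigates first.
top_talkers = sorted(bytes_by_src.items(), key=lambda kv: kv[1], reverse=True)
print(top_talkers[0])  # ('10.0.0.7', 50000)
```

The same roll-up pattern extends to destinations, ports or protocols, which is how granular flow data becomes an actionable starting point for an investigation.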

But, before your company can realize the full value of NetFlow forensics, your team needs to have a clear understanding of how to use this intelligence to take full advantage of these detailed investigative activities.

Gathering the data through automation is a relatively simple process once the required automation tools have been implemented.

Understanding how to organize these massive amounts of data into clear, concise and actionable findings is an additional skill set that must be developed within your IT team.

Having a team member, whether internal or via a third-party vendor, that can aggregate your findings and create visual representations that can be understood by non-technical team members is a necessary part of NetFlow forensics. It is important to stress the necessity of visualization; this technique makes it much easier to articulate the importance of findings.

In order to accurately and succinctly visualize security issues, your IT staff must have a deep understanding of the standard protocols of your network. Without this level of understanding, the ability to analyze and investigate security issues is limited, if not impossible.

Utilizing software to support the audit functions required to perform NetFlow forensics will help your company support the IT staff in the gathering and tracking of these standard protocols.

Being able to identify, track and monitor the protocols in an automated manner will enhance your staff’s ability to understand and assess the impact of these protocols on network performance and security. It will also allow you to quickly assess the impact of changes driven by real-time monitoring of your network processes.
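One automated check of this kind can be sketched as a baseline comparison: record normal per-protocol volumes, then flag protocols whose live traffic deviates sharply. The numbers and threshold below are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical sketch: comparing live per-protocol traffic volumes against
# an established baseline and flagging large deviations for investigation.

baseline = {"TCP": 1_000_000, "UDP": 200_000, "ICMP": 5_000}  # bytes/interval
observed = {"TCP": 1_100_000, "UDP": 950_000, "ICMP": 4_800}

def flag_anomalies(baseline, observed, threshold=2.0):
    """Return protocols whose observed volume exceeds threshold x baseline."""
    return [
        proto for proto, base in baseline.items()
        if observed.get(proto, 0) > threshold * base
    ]

# UDP is nearly 5x its baseline here, so it gets flagged for review.
print(flag_anomalies(baseline, observed))  # ['UDP']
```

In a real deployment the baseline would be learned from historical flow data per time-of-day and segment, but the comparison logic remains this simple at its core.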

Sound like a daunting task?

It doesn’t have to be. Choose a partner to support your efforts and help you build the right NetFlow forensics configuration to support your business.

Contact us today and let us help you manage your company’s IT network.

Thanks to NetFlow Auditor for the article.

Tracking the Evolution of UC Technology

Defining unified communications is more complicated than it seems, but a thorough understanding of UC technology is required before informed buying decisions can be made. Not only is the UC value proposition difficult to articulate, but it involves multiple decisions that impact both the IT group and end users.

In brief, UC is a platform that seamlessly integrates communications applications across multiple modes — such as voice, data and video — and delivers a consistent end-user experience across various networks and endpoints. While this describes UC’s technical capabilities, its business value is enabling collaboration, improving personal productivity and streamlining business processes.

At face value, this is a compelling value proposition, but UC offerings are not standardized and are constantly evolving. All vendors have similar core features involving telephony and conferencing, but their overall UC offerings vary widely with new capabilities added regularly.

No true precedent exists to mirror UC technology, which is still a fledgling service. The phone system, however, may be the closest comparison — a point reinforced by the fact that the leading UC vendors are telecom vendors.

But while telephony is a static technology, UC is fluid and may never become a finished product like an IP PBX. As such, to properly understand UC, businesses must abandon telecom-centric thinking and view UC as a new model for supporting all modes of communication.

UC technology blends telephony, collaboration, cloud

UC emerged from the features and limitations of legacy technology. Prior to VoIP, phone systems operated independently, running over a dedicated voice network. Using packet-switched technology, VoIP allowed voice to run on the LAN, sharing a common connection with other communications applications.

For the first time, telephony could be integrated with other modes, and this gave rise to unified messaging. This evolution was viewed as a major step forward by creating a common inbox where employees could monitor all modes of communications.

UC took this development further by allowing employees to work with all available modes of communication in real time. Rather than just retrieve messages in one place, employees can use UC technology to conference with others on the fly, share information and manage workflows — all from one screen. Regardless of how many applications a UC service supports, a key value driver is employees can work across different modes from various locations with many types of devices.

Today’s UC offerings cover a wide spectrum, so businesses need a clear set of objectives. In most cases, VoIP is already being used, and UC presents an opportunity to get more value from voice technology.

To derive that value, the spectrum of UC needs to be understood in two ways. First, think of UC as a communications service rather than a telephony service. VoIP will have more value as part of UC by embedding voice into other business applications and processes and not just serving as a telephony system. In this context, UC’s value is enabling new opportunities for richer communication rather than just being another platform for telephony.

Second, the UC spectrum enables both communication and collaboration. Most forms of everyday communication are one on one, and UC makes this easier by providing a common interface so users don’t have to switch applications to use multiple modes of communication. Collaboration takes this communication to another level when teams are involved.

A major inhibitor of group productivity has long been the difficulty of organizing and managing a meeting. UC removes these barriers and makes the collaboration process easier and more effective.

Finally, the spectrum of UC is defined by the deployment model. Initially, UC technology was premises-based because it was largely an extension of an enterprise’s on-location phone system. But as the cloud has gained prominence, UC vendors have developed hosted UC services — and this is quickly becoming their model of choice.

Most businesses, however, aren’t ready for a full-scale cloud deployment and are favoring a hybrid model where some elements remain on-premises while others are hosted. As such, UC vendors are trying to support the market with a range of deployment models — premises-based, hosted and hybrid.

How vendors sell UC technology

Since UC is not standardized, vendors sell it in different ways. Depending on the need, UC can be sold as a complete service that includes telephony. In other cases, the phone system is already in place, and UC is deployed as the overriding service with telephony attached. Most UC vendors are also providers of phone systems, so for them, integrating these elements is part of the value proposition.

These vendors, however, are not the only option for businesses. As cloud-based UC platforms mature, the telephony pedigree of a vendor becomes less critical.

Increasingly, service providers are offering hosted UC services under their own brand. Most providers cannot develop their own UC platforms, so they partner with others. Some providers partner with telecom vendors to use their UC platforms, but there is also a well-established cadre of third-party vendors with UC platforms developed specifically for carriers.

Regardless of who provides the platform, deploying UC is complex and usually beyond the capabilities of IT.

Most UC services are sold through channels rather than directly to the business. In this case, value-added resellers, systems integrators and telecom consultants play a key role, as they have expertise on both sides of the sale. They know the UC landscape, and this knowledge helps determine which vendor or service is right for the business and its IT environment. UC providers tend to have more success when selling through these channels.

Why businesses deploy UC services

On a basic level, businesses deploy UC because their phone systems aren’t delivering the value they used to. Telephony can be inefficient, as many calls end up in voicemail, and users waste a lot of time managing messages. For this reason, text-based modes such as chat and messaging are gaining favor, as is the general shift from fixed line to mobile options for voice.

Today, telephony is just one of many communication modes, and businesses are starting to see the value of UC technology as a way to integrate these modes into a singular environment.

The main modes of communication now are Web-based and mobile, and UC provides a platform to incorporate these with the more conventional modes of telephony. Intuitively, this is a better approach than leaving everyone to fend for themselves to make use of these tools. But the UC value proposition is still difficult to express.

UC is a productivity enabler — and that’s the strongest way to build a business case. However, productivity is difficult to measure, and this is a major challenge facing UC vendors. When deployed effectively, UC technology makes for shorter meetings, more efficient decisions, fewer errors and lower communication costs, among other benefits.

All businesses want these outcomes, but very few have metrics in place to gauge UC’s return on investment. Throughout the rest of this series, we will examine the most common use cases for UC adoption and explore the major criteria to consider when purchasing a UC product.

Thanks to Unified Communications for the article.