Testing Large Contact Centre Systems

Today’s telecommunications and contact center infrastructures have become extremely complex. As you know, the design and reliability of these infrastructures are critical to delivering the level of customer service your clients demand.

So whether you are installing a new contact center solution or upgrading an existing one, you want to make sure your efforts deliver the best possible customer experience and ROI. If the solution does not work as designed and customers do not use it as expected, your customer satisfaction and cost savings go out the window.

This white paper, Testing Large Contact Center Systems, discusses the steps you can take to be confident that all the integrated elements work together, so you can go live with confidence. Whether you run a small center or a large one, you can make use of these ideas.


RANSOMWARE: DOWN BUT BY NO MEANS OUT

After ransomware attacks crippled public transportation systems, hospitals and city governments over the past two years, 2018 seems poised to be another wildly successful year for attackers who use this threat, which netted criminals more than $1 billion in 2016, according to the FBI. And a recent spate of ransomware infections impacting Boeing, the government of Atlanta and the Colorado Department of Transportation hasn’t done much to dispel this notion.

But don’t expect ransomware to be as menacing in 2018, said Ross Rustici, Cybereason’s Senior Director for Intelligence Services, who looked at some of the myths around ransomware in a webinar this week.

“Despite the fact that 2017 was a banner year for ransomware intrusions, if you look at data points going back to 2015, when ransomware had its zenith in infections, there’s been a steady decline since then,” he said, adding that this doesn’t mean ransomware is no longer a threat.

In this blog, Rustici debunks a few common ransomware myths, talks about what other threats attackers may use instead and explains why organizations still need to remain vigilant against ransomware.

MYTH: RANSOMWARE IS A GROWING TREND

REALITY: RANSOMWARE ATTACKS HAVE PEAKED

While Rustici labelled 2017 as an “abnormal year” for both ransomware delivery methods and the scale of attacks, 2018 doesn’t hold the same outlook. Looking at the number of ransomware families and infections over the past three years reveals that this threat has crested. He noted that in 2015, when ransomware attacks were at their peak, there were an estimated 350 ransomware families. In 2017, that number decreased to 170, a reduction of approximately 50 percent.

“That shrinkage continues in 2018, but not at the same rate,” he said. “When you are looking at the capabilities that are being baked into new variants, we’re seeing a consolidation in ransomware.” From a defender’s perspective, this development is positive since it means they’ll be familiar with how the ransomware operates and won’t encounter new techniques, Rustici said.

As for ransomware infection rates, Rustici said that number is trending downwards after temporarily spiking in 2017. There was a high number of infections in 2015, a decline in 2016, a slight uptick in the middle of 2017 due to NotPetya, BadRabbit and WannaCry, and a decline at the end of 2017 that continued into 2018.

“We’ve hit the high water mark of ransomware variants from the technology side and also its widespread usage and infection rate,” Rustici said, noting that ransomware’s effectiveness means there will always be outlying examples like the Boeing and Atlanta government infections. “But in terms of overall spread and growth, [ransomware] is dropping into the same pool as banking Trojans and other malware that we all hate.”

MYTH: THE SECURITY COMMUNITY FAILED TO PROTECT COMPANIES FROM RANSOMWARE

REALITY: THE SECURITY COMMUNITY CONTAINED RANSOMWARE BY WORKING TOGETHER

The security community has addressed ransomware more forcefully than other malware. Rustici attributes this response to the industry’s realization that ransomware eclipses other malware as a major public nuisance.

“You’ve seen a groundswell in the security industry’s overall response, whether it’s free utilities like RansomFree or researchers figuring out how to decrypt files so [victims] don’t have to pay the ransom,” he said.

The security community’s response to WannaCry and NotPetya particularly stood out to Rustici. Within 48 hours of both infections, hot fixes were available for people who were looking to inoculate themselves, not purchase security products. Years earlier the security community realized the danger posed by ransomware and saw collaboration as the only way to address it, he said.

“And on this rare occasion, they succeeded more often than not. There will always be variants coming out but as the security community keeps its attention on this and continues to try to combat ransomware, we’re getting a better handle on it compared to other malware,” he said.

MYTH: CRYPTOMINER ATTACKS ARE THE NEXT BIG THING

REALITY: CRYPTOMINER ATTACKS ARE NOT REPLACING RANSOMWARE ATTACKS

Don’t expect ransomware attacks to be replaced by a deluge of cryptominer attacks. Wild price fluctuations in cryptocurrencies mean the payoff may not justify the effort required to earn a profit, Rustici said.

“Cryptominers are never going to reach the same level of [ransomware] from a deployment perspective simply because the monetary payoff for these types of tools is tied to the cryptomarkets,” he said.

When one bitcoin is worth $20,000, for instance, cryptomining makes sense since the necessary work would yield a decent profit. But when bitcoin is trading below $10,000, there’s less of an economic incentive to engage in cryptominer attacks, Rustici said. Cryptominer attacks will be less of a sustained trend and more tied to the fluctuations of the cryptocurrency market.
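To make that economic argument concrete, here is a rough back-of-the-envelope sketch in Python. The coin yield and operating cost figures are invented for illustration and are not from the webinar; the point is simply that mining revenue rises and falls with the coin’s market price while the attacker’s costs stay roughly fixed.

# Illustrative only: the yields and costs below are assumed figures, not data from the webinar.
def monthly_mining_profit(coin_price_usd, coins_mined_per_month, operating_cost_usd):
    """Rough monthly profit for a cryptomining campaign of fixed size."""
    return coin_price_usd * coins_mined_per_month - operating_cost_usd

# Same effort and infrastructure in every case; only the coin price changes.
for price in (20_000, 10_000, 5_000):
    profit = monthly_mining_profit(price, coins_mined_per_month=0.5, operating_cost_usd=6_000)
    print(f"coin at ${price:,}: monthly profit ${profit:,.0f}")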

“The money isn’t there as readily as some of the other things they could be doing with the same time and effort,” he said.

MYTH: BAD USER BEHAVIOR EXPLAINS RANSOMWARE INFECTIONS

REALITY: RANSOMWARE DELIVERY METHODS HAVE EVOLVED

Downloading malicious attachments in spear phishing emails and visiting sketchy websites account for many ransomware infections, but bad users aren’t the only ones to blame for these attacks. There has also been an evolution in how infections are delivered, with attackers using more technical techniques like exploit kits.

“They’re using more sophisticated capabilities for delivery mechanisms that ignore the stupid user because as security awareness grows and endpoint protections get better the stupid user becomes a less reliable intrusion vector,” Rustici said.

Attackers started with drive-by downloads and increased the level of delivery sophistication to ensure that ransomware remained a profitable threat. In fact, Rustici pointed out that the number of spear phishing emails and infected webpages decreased last year, in relative terms, as they became less effective delivery methods.

Expect attackers to adopt more advanced delivery techniques in 2018 as the malware becomes more difficult to monetize and less sophisticated actors stop using ransomware, Rustici said.

WHAT THREAT COULD TAKE RANSOMWARE’S PLACE

Adversaries are likely to adopt a new technique as ransomware joins the ranks of adware, banking Trojans and the stable of other malware that attackers can use. But predicting the next great threat is challenging, Rustici said.

One possibility is data extortion, perhaps as an unintended consequence of the General Data Protection Regulation, he said. Companies that violate GDPR by exposing personal data on E.U. citizens risk hefty fines. Attackers may give companies the option of paying them to not publicly disclose the breach and pilfered data. Paying the attackers to keep quiet could cost companies less than the fine imposed by GDPR. Rustici noted that companies have already engaged in this behavior. Uber, for instance, paid attackers $130,000 in an attempt to conceal a breach.

Attackers only adopt techniques that earn them a profit, making organizations responsible for the success or failure of data extortion campaigns.

“It’s going to be up to private industries to decide if they want to pay and if this becomes a trend. Once [attackers] realize that there’s easy money to be made, they’re going to jump on the bandwagon,” Rustici said.

RANSOMWARE: DOWN BUT BY NO MEANS OUT

While better endpoint security and shifting economics may mean ransomware is no longer an attacker’s preferred threat, the malware shouldn’t be written off by organizations.

“This isn’t a victory lap. The security industry has done a good job of containing ransomware’s growth, but it’s still out there. We’re seeing a downward trend in its usage and the infection rate, but it’s still in the hundreds of thousands. This isn’t by any means a solved problem. It’s just not growing as exponentially as we thought it would,” Rustici said.

Thanks to Cybereason for this article


Webinar – Revealing Attacks in Real Time

Rogers revving revenues up by 8% in Q1, driven by cellular

Canadian quad-play operator Rogers Communications reported total revenue up 8% year-on-year to CAD3.633 billion (USD2.876 billion) in the three months ended 31 March 2018. Cable revenue increased 1% to CAD969 million in Q1 2018 as internet revenue growth of 7% continued to drive the segment. The company highlighted the availability of its Ignite Gigabit Internet product over its entire cable footprint as a key differentiator in signing up users, whilst noting the continuing growth in demand for higher speeds; 56% of Rogers’ residential internet customers were on speeds of 100Mbps or higher by end-March 2018, up from 48% twelve months earlier. Cable internet subscribers totalled 2.347 million at end-March 2018, up by 88,000 in twelve months.

The main driver, though, was quarterly wireless revenues which galloped upwards by 9% y-o-y to CAD2.191 billion, as post-paid mobile subscribers increased by 182,000 in twelve months to reach a total of 8.799 million, while the pre-paid user base rose by 43,000 on a yearly net basis to 1.718 million. Rogers’ total EBITDA rose 14% y-o-y in 1Q18 to CAD1.338 billion, and Q1 net profit jumped 37% to CAD425 million.

Thanks to TeleGeography for this article

Big Data and Time: What’s the Connection?

Precision time can play a vital role in improving the operation and efficiency of Big Data systems. However, to understand why or how time can be utilized, the most important thing to grasp is that Big Data is Big!

I know that may sound silly, but really, “big” is actually even an understatement. And that’s the difficult part to grasp when trying to understand Big Data: The volume of data is so large that it’s beyond the scale of things we’re used to seeing or working with every day, and it is almost impossible to imagine. So, before we get into how precision time fits within Big Data, let’s add some context. Specifically, let’s try to get a better sense of the scale of Big Data.

To do that, we are going to use money. When you think about data, let alone Big Data, what do you picture? Maybe you picture a datacenter, a server, some lines of code on a terminal … different people will picture different things because data is relatively intangible. You can’t touch it or feel it. But we’re all familiar with the general size, shape, and feel of money so let’s use that as a proxy for data in understanding the scale of Big Data.

We’ll start with a one hundred dollar bill.

Next, this is what $1M would look like using $100 bills. I know what you’re thinking – it doesn’t look like much.

But if we add a few zeros to that number and turn it into $1B, suddenly it starts to become a much more respectable pile of money. You go from something you could easily carry by hand to something you’d need a moving truck for. But at least you could afford a really good moving truck if you had all of that money. The point, though, is that as we increase the order of magnitude, the scale it relates to grows tremendously. And if you don’t believe me, let’s take a look at $1 trillion.

This is what $1T looks like. In our minds, when we think about the difference between one billion and one trillion, it’s typically in the form of a math equation. We don’t tend to think about what it actually looks like. And this is compounded even more when we talk about Big Data, which is just packets of information moving around a network and has no physical appearance to begin with. We all know what money looks like; we all know its size and physical properties, yet despite that, it’s surprising to see what $1T looks like because we don’t tend to have the right perspective of scale to reference against.

But while we’re at it, let’s keep going with the analogy and look at something closer to the scale we’re dealing with in Big Data: $1Q (one quadrillion dollars).
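To put rough numbers behind those piles, here is a small Python sketch. The bill dimensions are approximations (a US banknote weighs about one gram and is roughly 0.11 mm thick), so treat the output as order-of-magnitude figures rather than exact measurements.

# Rough physical scale of piles of $100 bills; bill weight and thickness are approximations.
BILL_WEIGHT_G = 1.0       # a US banknote weighs roughly one gram
BILL_THICKNESS_MM = 0.11  # and is roughly 0.11 mm thick

def describe(total_usd):
    bills = total_usd / 100
    weight_kg = bills * BILL_WEIGHT_G / 1_000
    stack_m = bills * BILL_THICKNESS_MM / 1_000
    return f"${total_usd:,.0f}: {bills:,.0f} bills, ~{weight_kg:,.0f} kg, stack ~{stack_m:,.0f} m tall"

for amount in (10**6, 10**9, 10**12, 10**15):  # $1M, $1B, $1T, $1Q
    print(describe(amount))

By this estimate $1M is a briefcase-sized 10 kg, while $1T is on the order of 10,000 tonnes with a stack over 1,000 km tall, which is why the jump in scale is so hard to picture.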

The point of all of this is to try to understand and appreciate the scale of data we’re dealing with when we talk about Big Data. It’s very difficult to do. But at the same time, it’s very important to understand because that scale is why precision time is so important to Big Data infrastructures.

There are many, many things we take for granted that rely upon these Big Data systems. Everything just works, and we don’t think much about it. However, there is a huge amount of effort and infrastructure that exists to operate and support it. But the pace of the growth of Big Data is accelerating. Big Data is getting even bigger, even faster. And that’s a problem.

Traditionally, the primary solution has been to throw hardware at the problem. The programs that manage Big Data are designed to use massively parallel and distributed systems. So basically, in order to add capacity, you just add more hardware – servers, switches, routers, etc. – and the problem is solved.

But we’re reaching a point where it’s becoming less and less efficient to just throw hardware at the problem. It turns out that continuously adding hardware doesn’t scale linearly. You start to create new issues, not to mention the physical requirements, ranging from datacenter floor space to power, heating, and cooling, etc.

Right now, there is a big push across the industry to rethink things and find new methods capable of supporting the increasing growth. A lot of that focus has been on improving efficiency. And that’s where precision time comes in. First, precision time is needed for network performance monitoring. Simply put, how can you accurately measure the flow of traffic across your network if the time you’re basing those measurements on is not accurate to begin with, or if you can only loosely correlate events between disparate sites?

Network performance monitoring is key to improving efficiencies in Big Data, and to do that well you need good synchronization across all of the network elements so that you can see exactly where and when bottlenecks occur.
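As a simple illustration of why that synchronization matters, the hypothetical sketch below (not code from any particular monitoring product) computes a one-way latency between two sites from send and receive timestamps. Any offset between the two clocks lands directly in the measurement, so an unsynchronized clock can make a healthy link look congested or hide a real bottleneck.

# Hypothetical example: one-way delay measured from timestamps taken at two different sites.
def one_way_delay_ms(t_sent_ms, t_received_ms):
    """Delay a packet appears to have experienced between site A and site B."""
    return t_received_ms - t_sent_ms

TRUE_DELAY_MS = 4.0     # the path's actual latency
CLOCK_OFFSET_MS = 25.0  # site B's clock runs 25 ms ahead of site A's

t_sent = 1_000.0                                       # timestamp recorded at site A
t_received = t_sent + TRUE_DELAY_MS + CLOCK_OFFSET_MS  # timestamp recorded at site B

print(one_way_delay_ms(t_sent, t_received))  # reports 29 ms for a path that really takes 4 ms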

The second use of precision time for Big Data is a little bit more abstract still. It’s a new methodology centered around making distributed systems more efficient by making decisions based on time. I won’t bore you with the technical details. Instead, I’ll just bore you with the basic concept. And that basic concept is using time to make decisions.

Whenever you do something in the cloud, data is always stored on more than one server and in more than one physical location. Because of this, the same data can arrive at those different servers at different times. What this means is that sometimes, when an application goes to retrieve a piece of data, it can get different results from different servers. These are called data conflicts, and they are especially common in eventually consistent databases.

When data conflicts occur, the application has to stop and talk to the servers to figure out which one is right, that is, which server has the most recent version of the data, which is what the application needs. This all happens very quickly, but it uses processing power, consumes network bandwidth and takes time. The new methodology of making decisions based on time would instead look at the timestamps on the data and be able to easily make a decision based on those timestamps.
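Here is a minimal sketch of that idea along the lines of last-write-wins conflict resolution; the article keeps the details abstract, so treat this as one assumed interpretation rather than the exact method. Instead of a round of coordination between servers, the application simply keeps the copy whose timestamp is newest, which only works if every server’s clock can be trusted.

# Minimal last-write-wins sketch; assumes the replica servers' clocks are tightly synchronized.
replicas = [
    {"server": "us-east", "value": "shipped",    "timestamp": 1_524_000_000.004},
    {"server": "eu-west", "value": "processing", "timestamp": 1_524_000_000.001},
    {"server": "ap-east", "value": "processing", "timestamp": 1_524_000_000.002},
]

def resolve(replicas):
    """Keep the most recently written copy instead of querying the servers to reconcile."""
    return max(replicas, key=lambda r: r["timestamp"])

print(resolve(replicas))  # the "us-east" copy wins on its newer timestamp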

Now I know that may seem trivial. After all, how much time and processing power and bandwidth is that really saving? However, you can’t lose sight of the scale we’re talking about with Big Data. You need to remember the scale of data that Big Data operators are trying to manage – both in terms of the sheer volume of data and the rate at which that data is changing or updating – because at such scale, even those small numbers can accumulate to large sums.

So if precision time can make a big difference for Big Data, then why aren’t all of the Big Data operators using it in their networks? Well, to be clear, some are. But for more detail on that question, stay tuned. We’ll dive into that next time.

Thanks to Jeremy Onyan, Director of Time Sensitive Networks at Spectracom, for this article


Rogers, Ericsson report on 5G tests, network development plans

Canada’s Rogers Communications has announced a multi-year mobile network technology development plan in partnership with Ericsson, including 5G tests. Under the plan, Rogers will continue to roll out a Gigabit LTE network based on technology including 4×4 MIMO, four-carrier aggregation and 256 QAM, and will boost and densify its network with small cells and macro sites across the country. Ericsson and Rogers will meanwhile trial 5G in Toronto, Ottawa and other selected cities over the next year.

On 16 April the partners demonstrated multiple live 5G examples at the Rogers Centre testing environment. Participants wore virtual reality (VR) glasses to toss a baseball back and forth, virtually shopped in a retail store, and controlled robots with real-time responsiveness. Rogers also demonstrated quad-band Licensed Assisted Access (LAA) on Gigabit LTE to show how LAA provides high bandwidth simultaneously across several devices.

Thanks to TeleGeography for this article

5 Steps to Preparing Your Network for Cloud Computing

Cloud computing allows organizations to perform tasks or use applications that harness massive third-party computing and processing power via the Internet. This allows them to quickly scale services and applications to meet changing user demand and avoid purchasing network assets for infrequent, intensive computing tasks.

At the outset, cloud computing may appear to offer a lot of benefits. No longer will you have to worry about large infrastructure deployments, complex server configurations, and troubleshooting the delivery of internally hosted applications. But diving a little deeper reveals that cloud computing also delivers a host of new challenges.

While providing increased IT flexibility and potentially lowering costs, cloud computing shifts IT management priorities from the network core to the WAN/Internet connection. Cloud computing extends the organization’s network via the Internet, tying into other networks to access services, applications and data. Understanding this shift, IT teams must adequately prepare the network, and adjust management styles to realize the promise of cloud computing.

Here are five key considerations for organizations planning, deploying, and managing cloud computing applications and services:

1. Conduct Pre-Deployment and Readiness Assessments

Determine existing bandwidth demands per user, per department, and for the organization as a whole. With the service provider’s help, calculate the average bandwidth demand per user for each new service you plan to deploy. This allows the IT staff to appropriately scale the Internet connection and prioritize and shape traffic to meet the bandwidth demands of cloud applications.
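As an illustration of that arithmetic, the sketch below adds up per-department demand for a planned cloud service and applies a concurrency factor and headroom to size the Internet connection. The user counts, per-user figures and ratios are made-up assumptions for the example, not vendor guidance.

# Illustrative sizing exercise; every figure here is an assumption for the example.
departments = {            # (users, average Mbps each user needs for the new cloud apps)
    "sales":       (120, 0.6),
    "support":     (200, 0.9),
    "engineering": (80,  1.5),
}

CONCURRENCY = 0.6  # fraction of users active during the busiest hour
HEADROOM = 1.3     # 30 percent margin for growth and bursts

peak_mbps = sum(users * mbps for users, mbps in departments.values()) * CONCURRENCY
required_mbps = peak_mbps * HEADROOM
print(f"Peak demand ~{peak_mbps:.0f} Mbps; provision ~{required_mbps:.0f} Mbps")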

2. Shift the Network Management Focus

Cloud computing’s advantage lies in placing the burden of application and data storage and processing on another network. This shifts management priorities from internal data concerns to external ones. Currently, organizations have larger network pipes and infrastructure at the network core, where the computer processing power is located. With cloud computing and Software as a Service (SaaS) applications, the importance of large bandwidth capacity shifts away from the core to the Internet connection. This shift in focus will significantly impact the decisions you make, from whether your monitoring tools adequately track WAN performance to the personnel and resources you devote to managing WAN-related issues.

3. Determine Priorities

With a massive pipeline to the Internet handling online applications and processing, data prioritization becomes critical. Having an individual IP consuming 30 percent of the organization’s bandwidth becomes unworkable. Prioritize cloud and SaaS applications and throttle traffic to make sure bandwidth is appropriately allocated.
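One way to picture that throttling is a per-IP token bucket, sketched below as a conceptual example rather than any particular product’s implementation: each host earns transmit credit at its allocated rate, so no single address can soak up a third of the pipe.

# Conceptual per-IP token-bucket throttle; the rates are example values only.
import time

class TokenBucket:
    def __init__(self, rate_mbps, burst_mb):
        self.rate = rate_mbps / 8.0   # MB of credit earned per second
        self.capacity = burst_mb
        self.tokens = burst_mb
        self.last = time.monotonic()

    def allow(self, packet_mb):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_mb:
            self.tokens -= packet_mb
            return True
        return False  # over its allocation: queue or drop the packet

# Give each client IP a 20 Mbps share of the Internet connection.
buckets = {}
def shape(ip, packet_mb):
    bucket = buckets.setdefault(ip, TokenBucket(rate_mbps=20, burst_mb=2))
    return bucket.allow(packet_mb)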

4. Consider ISP Redundancy

Thoroughly assess the reliability of your existing Internet Service Provider. When the Internet connection is down or degraded, business productivity will also be impacted. Consider having multiple providers should one have a performance issue.

5. Develop a Strong Relationship with your Service Providers

Today, if a problem occurs within the network core, the engineer can monitor the entire path of network traffic from the client to the server in order to locate the problem source. With service providers controlling the majority of information in cloud computing, it becomes more difficult to monitor, optimize, and troubleshoot connections.

As a result, Service Level Agreements (SLAs) take on greater importance in ensuring expected network and Internet performance levels. SLAs should outline the delivery of expected Internet service levels and the performance obligations service providers must meet, and define unacceptable levels of dropped frames and other performance metrics.

An independent review of your WAN link connections allows you to verify the quality of service and gauge whether the provider is meeting all its SLA obligations. You can utilize a network analyzer with a WAN probe to verify the quality of service.
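A simple sketch of what that independent review could report is shown below; the thresholds and sample measurements are invented for illustration, not taken from any particular SLA. The idea is just to compare what the WAN probe measures against the levels the agreement promises.

# Illustrative SLA check; thresholds and measurements are made-up example values.
sla = {"max_frame_loss_pct": 0.5, "max_latency_ms": 80, "min_availability_pct": 99.9}
measured = {"frame_loss_pct": 0.7, "latency_ms": 64, "availability_pct": 99.95}

violations = []
if measured["frame_loss_pct"] > sla["max_frame_loss_pct"]:
    violations.append("frame loss above the agreed level")
if measured["latency_ms"] > sla["max_latency_ms"]:
    violations.append("latency above the agreed level")
if measured["availability_pct"] < sla["min_availability_pct"]:
    violations.append("availability below the agreed level")

print(violations or "provider is meeting its SLA obligations")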

Cloud computing is more than the latest IT buzzword; it’s a real way for companies to quickly obtain greater network flexibility, scalability, and computing power for less money. But like most technologies, these services are not without risk and require proper preparation and refocused management efforts to succeed.

Cloud Solutions


Bell FTTP network reaches most of Toronto

Bell Canada has announced that its fibre-to-the-premises (FTTP) network now covers ‘most homes and business locations’ throughout Toronto, the country’s most populous city. Bell commenced the Toronto FTTP project in 2015, partnering with the city authorities and Toronto Hydro to deploy over 10,000km of fibre, mounted on approximately 90,000 Bell and Toronto Hydro poles and run underground via more than 10,000 manhole access points, alongside upgrading 27 Bell central offices across the city. Bell’s ‘Gigabit Fibe’ FTTP-based service currently offers download speeds up to 1Gbps and uploads reaching 940Mbps (also to reach 1Gbps next year via modem upgrades), alongside ‘Fibe TV’ IPTV and telephony. Peak downlink speeds will increase to ‘at least 5Gbps’ next year and ultimately beyond 40Gbps.

Bell’s FTTP network now covers 3.7 million homes and business premises across seven Canadian provinces, a total it expects to grow to 4.5 million by the end of this year. Bell ‘all-fibre’ cities currently include: St. John’s, Gander, Summerside, Charlottetown, Halifax, Sydney, Moncton and Fredericton (all in Atlantic Canada provinces); Quebec City, Trois-Rivieres, Saint-Jerome and Gatineau (all in Quebec); Cornwall, Kingston, Toronto, North Bay and Sudbury (all Ontario), and Steinbach and The Pas in Manitoba, with ‘major new locations’ to be announced during 2018. Bell also unveiled its Montreal fibre project in 2017 and last month announced plans to expand direct fibre connections throughout the ‘GTA/905’ region surrounding Toronto and extending to the US border.

Thanks to TeleGeography for this Article

90 Seconds to Lose a Customer

A major issue with moving to an e-contact centre is making everything work together. If you purchased multiple best-in-class single solutions as you put this together, you may not end up with a best-in-class total solution. Another issue is turning unstructured data – comments from social media platforms – into actionable outcomes.

Customers who are reaching out to you are usually time sensitive. You can lose them in less than 90 seconds if communication channels aren’t supported adequately.

For example, a request is left on an organization’s Facebook page. Followers and friends of that person see the complaint, as do organizations using social listening tools to pick up on disgruntled competitors’ customers. Before a response can be made, that customer may have received a referral or solution from a competitor.

Companies may opt to initially service a limited number of channels, and customers are relatively happy with this.

However, many of the excluded channels should still be monitored for brand and reputation issues – usually a communications department role but increasingly an e-contact center role.

But it is not enough to just put some or all those channels out there. You owe it to your customers and to your brand to know you are in line with the omnichannel mandate.

All those channels must be available when customers want to use them, do what they’re supposed to do, and do so efficiently and effectively. You need to make sure your customers don’t get pushed away by all the technology you put in place to make it easy to e-connect in the first place.

Customer Experience Management

To ensure all these systems are working correctly, IR Prognosis for Contact Centers allows you to test and then monitor them, helping you pinpoint the system that may be involved in impairments.

Prognosis StressTest™ provides load and performance cloud-based testing, giving you the insight you need to manage, tune and verify contact center performance before you go live.

Prognosis HeartBeat™ initiates Virtual Customer® test calls that interact with your system through the public telephone network just like real customers, giving you assurance that your system is working in every location and for every agent.

With Contact Center solutions from IR, you have the power to be on top of everything 24/7 and identify and fix problems before they impact customer experience.