
Virtualization technology has transformed the modern data centre. Applications and operating systems are installed into virtual machine images rather than onto physical machines, and those images are executed by physical servers running a hypervisor. Virtualizing applications provides many benefits, including consolidation (running multiple applications on a single physical machine) and migration (transparently moving applications across physical machines for load balancing and fault tolerance).

In the future, data centres will have far more complex networks. There will be thousands of microservices to monitor, with each service running on a separate server, rack, or data centre for redundancy. In the worst case, the “application” will be running all over the world across several data centres. Adding to this complexity, applications can move dynamically between data centres. As a result, classical tapping will no longer be possible.

In such a network environment, using virtual TAPs on the hypervisors means installing third-party software on every hypervisor, which degrades hypervisor performance. Because the application can run on different hypervisors, you also need a matching virtual TAP for each hypervisor version. There are security issues as well: a virtual TAP monitors the full virtual switch, not just the one specific microservice of interest. And when applications are breathing (scaling and moving dynamically), virtual tapping demands a very high maintenance effort.
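
To make this concrete, here is a minimal sketch of what a virtual TAP typically amounts to on a hypervisor whose virtual switch is Open vSwitch: a mirror that copies all bridge traffic to a capture port. The bridge name br0 and capture port tap0 are illustrative assumptions, and the select-all=true setting is exactly the security concern raised above, since the mirror sees every workload on the switch, not just one microservice.

```python
import subprocess

# Illustrative assumptions: the hypervisor's virtual switch is an Open
# vSwitch bridge named "br0", and "tap0" is the port feeding the
# monitoring tool. Both names would differ in a real deployment.
BRIDGE = "br0"
CAPTURE_PORT = "tap0"

def create_virtual_tap(bridge: str, capture_port: str) -> None:
    """Create an OVS mirror that copies ALL bridge traffic to capture_port.

    select-all=true is the crux of the security issue discussed above:
    the mirror sees every VM on the switch, not just one microservice.
    """
    subprocess.run(
        [
            "ovs-vsctl",
            "--", "set", "Bridge", bridge, "mirrors=@m",
            "--", "--id=@p", "get", "Port", capture_port,
            "--", "--id=@m", "create", "Mirror",
            "name=vtap", "select-all=true", "output-port=@p",
        ],
        check=True,
    )

if __name__ == "__main__":
    create_virtual_tap(BRIDGE, CAPTURE_PORT)
```

Note that this covers only one hypervisor and one virtual switch type; the same mirror has to be recreated on every hypervisor the application migrates to, which is where the maintenance effort comes from.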

This is the classical approach, and it comes with a number of issues:

  • Bandwidth (a rough estimate of the mirroring overhead follows this list)
  • Dynamic Breathing
  • Cost
  • Complex Configuration
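
To put a rough, hypothetical number on the bandwidth issue: if each virtual TAP mirrors an entire 10 Gbit/s virtual switch while the one microservice of interest accounts for only about 200 Mbit/s per hypervisor, the monitoring fabric carries around fifty times more traffic than it needs to. All figures below are assumptions chosen purely for illustration.

```python
# Hypothetical figures chosen only to illustrate the bandwidth cost of
# full-switch mirroring versus extracting a single microservice's traffic.
switch_traffic_gbps = 10.0    # total traffic on each virtual switch
service_traffic_gbps = 0.2    # traffic of the one microservice of interest
hypervisors = 40              # hypervisors that each need their own vTAP

mirrored_gbps = switch_traffic_gbps * hypervisors   # classical vTAP approach
relevant_gbps = service_traffic_gbps * hypervisors  # traffic actually needed

print(f"Mirrored to monitoring fabric: {mirrored_gbps:.0f} Gbit/s")
print(f"Actually relevant traffic:     {relevant_gbps:.0f} Gbit/s")
print(f"Overhead factor:               {mirrored_gbps / relevant_gbps:.0f}x")
```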

Research from Gartner titled ‘Building Data Center Networks in the Digital Business Era’ notes that when making investments in data centre networking solutions, organizations should determine whether their vendors can provide a single pane of management that extends to both on-premises and public cloud workloads, including (a sketch of such a consolidated view follows the list):

  • Network troubleshooting for workloads in multiple clouds
  • Reporting for workloads in multiple clouds
  • Creation of policy and configuration for workloads
  • Visibility for workloads in multiple clouds
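
As a sketch of what a single pane of management implies at the data level, the code below aggregates per-workload visibility records from several clouds into one consolidated report. Everything here (the Workload record, the cloud names, the fetch_workloads stub) is hypothetical and merely stands in for whatever inventory or telemetry API a given vendor actually exposes.

```python
from dataclasses import dataclass

# Hypothetical record; a real vendor API would define its own schema.
@dataclass
class Workload:
    name: str
    cloud: str     # e.g. "on-prem", "aws", "azure"
    healthy: bool
    policy: str    # policy/configuration applied to the workload

def fetch_workloads(cloud: str) -> list[Workload]:
    """Stub standing in for a per-cloud inventory/telemetry API call."""
    sample = {
        "on-prem": [Workload("billing-db", "on-prem", True, "default")],
        "aws": [Workload("checkout-svc", "aws", False, "strict")],
        "azure": [Workload("auth-svc", "azure", True, "strict")],
    }
    return sample.get(cloud, [])

def single_pane_view(clouds: list[str]) -> None:
    """One consolidated report across all clouds: the 'single pane'."""
    for cloud in clouds:
        for wl in fetch_workloads(cloud):
            status = "OK " if wl.healthy else "FAIL"
            print(f"[{status}] {wl.cloud:8} {wl.name:14} policy={wl.policy}")

if __name__ == "__main__":
    single_pane_view(["on-prem", "aws", "azure"])
```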

With better insight, you can set up a cloud environment that’s fully efficient and optimized for business requirements. The end goal of any data centre is to maintain excellent reliability, security and stability for users.

Thank you to Cubro for the article.
