Cybersecurity Checklist for Secure Timing

Network cybersecurity is top of mind these days for both government agencies and commercial enterprises. As the heart of network synchronization, time and frequency systems should include a standard suite of security features that gives network administrators confidence in the cybersecurity protocols of their time servers. This is our philosophy at Orolia, and the recent recognition of our SecureSync® time server as the only Defense Information Systems Agency (DISA)-approved Timing and Synchronization Device for use in US DoD networks demonstrates our stringent commitment to secure timing.

DISA approval means that a product has been listed on the US Department of Defense Information Network (DoDIN) Approved Products List (APL). The APL process provides for an increased level of confidence through Cybersecurity and Interoperability (IO) certification. The DoDIN APL is the single approving authority for all military departments and DoD agencies in the acquisition of communications equipment that is to be connected to the Defense Information Systems Network.

The APL certification process is rigorous for the purpose of securing military networks in the US and abroad, and this level of security certification could also benefit commercial and private-sector businesses that support critical infrastructure, financial transactions or other operations where failure is not an option. The security functional requirements come from an extensive public document called “Unified Capabilities Requirements” as well as from cybersecurity best practices.

What kinds of cybersecurity features and protocols should you look for in a timing solution?

  • AAA protocol support – refers to Authentication, Authorization and Accounting, a family of computer security protocols including LDAP, RADIUS, and TACACS+ that mediate system access and permissions.
  • Multi-level authorization – permits access by users with different permissions and prevents users from obtaining access to information or making changes for which they lack authorization.
  • Configurable, complex passwords – use different types of characters in unique ways to increase security. Configure the complexity requirements that suit your organization.
  • Access control lists (ACLs) – permit or deny access to the system based on user-defined network addresses or subnets (see the sketch after this list).
  • HTTPS – Hypertext Transfer Protocol Secure (HTTPS) is the secure version of HTTP, the protocol over which data is sent between a browser and a website. The communication is encrypted to secure traffic over a computer network.
  • SSH, SCP, SFTP with public/private key support – There are a number of security technologies and protocols for linking servers and clients. Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network, typically remote sessions. Secure Copy Protocol (SCP) and Secure File Transfer Protocol (SFTP) are means of securely transferring computer files between a local host and a remote host or between two remote hosts operating over an SSH connection.
  • Authenticated NTP – Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency networks. NTP provides two internal security mechanisms to protect the authenticity of the computer systems involved in network clock synchronization.
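
As a minimal illustration of the access control list item above (the permitted subnets and client addresses are hypothetical, and a real time server enforces this internally rather than in user code), an ACL check simply tests whether a requesting address falls within a permitted subnet:

```python
import ipaddress

# Hypothetical ACL: subnets permitted to query or manage the time server.
PERMITTED_SUBNETS = [ipaddress.ip_network(net) for net in ("10.20.0.0/16", "192.168.5.0/24")]

def is_permitted(client: str) -> bool:
    """Return True if the client address falls inside any permitted subnet."""
    address = ipaddress.ip_address(client)
    return any(address in subnet for subnet in PERMITTED_SUBNETS)

print(is_permitted("10.20.4.7"))    # True: inside 10.20.0.0/16
print(is_permitted("172.16.1.9"))   # False: access denied
```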

Orolia’s SecureSync time and frequency reference solution delivers the highest level of Resilient Positioning, Navigation and Timing (PNT) cybersecurity available today, including all the critical functionality described above, as standard PNT cybersecurity features. At Orolia, we’re committed to protecting military and other critical networks around the world with exceptional engineering and rigorous industry standards.

 Click here to learn more. You can also view the DISA approval letter here.

Thank you to David Sohn of Orolia for the article.

Viavi – NetFlow vs Packet Data

IT managers and engineers are constantly challenged to maintain application performance, stay ahead of security breaches and resolve complex network problems. Two of the more powerful data sources available today are NetFlow (IPFIX) and packet data, which have been helping network and security teams gain clear visibility into the network for years.

Troubleshooting expert and network instructor Chris Greer dives into the strengths and weaknesses of both flow- and packet-based visibility, and highlights specific scenarios to illustrate where each data source shines.

Download this guide now to understand:

  • Strengths, weaknesses and technical details of flow and packet data
  • Critical performance management and security use cases for each technology
  • Impact of each data source on visibility, monitoring and troubleshooting strategies

Thank you to Viavi Solutions for the whitepaper.

Infosim – Firewall Configuration/Network Configuration and Change Management (NCCM)

Turn on any “techy” TV show or movie these days and you are bound to see some reference to hackers trying to break into corporate or government networks by breaching firewalls. While many of the scenarios are unrealistic as they are portrayed onscreen, the real-life battle between security vendors and hackers does go on. In their effort to defeat the “black hats”, firewall and security vendors have dramatically increased the complexity of security devices and have started to incorporate firewall technology into all sorts of network infrastructure devices, like switches, routers, UC systems, etc.

Unfortunately, now that more devices are “responsible” for network security, more devices are potential targets for attack and therefore must be managed with the same high level of attention that traditional firewalls receive. These systems must be scrutinized for their security posture and adherence to corporate governance policies, and known vulnerabilities must be patched rapidly. Simple configuration errors may create holes in firewalls, VPN tunneling errors could expose data to the Internet, and inconsistent settings can cause compliance issues with regulatory frameworks.

In modern multi-vendor networks, administrators face many challenges in properly managing firewall configurations, ensuring compliance to regulations, carrying out changes, and minimizing network downtime caused by human error.

This blog looks at the need for an automated NCCM solution to address these concerns, and the main features that one should look for in an NCCM solution.

Configuration management involves identifying the configuration of a firewall system at given points in time, systematically controlling changes to the configuration, and maintaining the integrity and traceability of the configuration throughout its lifecycle. It also involves testing the existing configuration against known-good policies while simultaneously looking for any configuration that might expose the firewall to security or compliance risk.

Configuration management in this context can be summarized as:

  • Device hardware and software inventory collection
  • Device software management
  • Device configuration collection, backup, viewing, archiving, and comparison
  • Device configuration generation and “push”
  • Device configuration policy checking
  • Restoration of firewall configurations to a recent known-good working state
  • Interworking with fault and performance management to monitor and ensure availability and performance of the firewall platform installations

Let’s explore each of these as they relate to firewall and security devices:

 

Device hardware and software inventory collection

The first step in being able to manage any system is to have accurate information about that device. Therefore, any good firewall NCCM system also needs to contain related information from a CMDB, i.e. up-to-date inventory information. It should, at minimum, contain hardware (chassis, daughter cards, memory, etc.) and software (OS, firmware) information that is regularly updated, once a week at minimum and preferably once a day, and changes should be tracked even over the short term.
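
As a minimal illustration (the record fields and comparison helper are a hypothetical simplification of what a CMDB-backed NCCM system stores), keeping inventory as timestamped records makes day-to-day changes easy to detect:

```python
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class InventoryRecord:
    # Hardware and software attributes collected on each run.
    hostname: str
    chassis: str
    memory_mb: int
    os_version: str
    firmware: str
    collected_at: datetime

def changed_fields(previous: InventoryRecord, current: InventoryRecord) -> dict:
    """Return the attributes that differ between two collection runs."""
    old, new = asdict(previous), asdict(current)
    return {key: (old[key], new[key])
            for key in old if key != "collected_at" and old[key] != new[key]}

# Hypothetical records from two consecutive daily collections.
yesterday = InventoryRecord("fw-edge-01", "example-chassis", 8192, "9.8(4)", "1.1.8", datetime(2019, 5, 1))
today = InventoryRecord("fw-edge-01", "example-chassis", 8192, "9.8(4)25", "1.1.8", datetime(2019, 5, 2))
print(changed_fields(yesterday, today))   # {'os_version': ('9.8(4)', '9.8(4)25')}
```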

Device software management

This refers to the ability to push software updates (patches) to the OS/firmware of the firewall. A best practice is both to patch on a regular basis (we have seen larger enterprises routinely push two updates per year) and to have the capability to push emergency bug-fix or vulnerability updates on an ad hoc basis. The NCCM system needs to be able to perform OS and hardware checks such as software checksums, available memory, license compatibility, and so forth as part of the update process.
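
As a minimal sketch of one such pre-update check (the image file name and digest are hypothetical placeholders), an NCCM job might verify a firmware image’s SHA-256 checksum before staging it for a push:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a firmware image without loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical image and digest; use the vendor-published SHA-256 value in practice.
image = Path("fw-9.4.2.bin")
expected = "0" * 64  # placeholder

if sha256_of(image) != expected:
    raise SystemExit("Checksum mismatch: refusing to push firmware update")
print("Checksum verified; image is safe to stage for the update window")
```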

Device configuration collection, backup, viewing, archiving, and comparison


One of the most basic tasks of any firewall NCCM solution is to back up the running configuration of the firewall.

It should be able to store the backup for any length of time the customer requires, as well as any number of historically stored configurations. These historical backups are critical when there is a failure or misconfiguration, as they can be used to restore the firewall to a known-good state. They are also very valuable as a troubleshooting tool, because you can run a “diff” comparison between two or more configs to look for changes that may have impacted service.
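
As a minimal sketch (the archive file names are hypothetical), a unified diff between an archived known-good configuration and the current running configuration quickly surfaces the changes that may have impacted service:

```python
import difflib
from pathlib import Path

def config_diff(known_good: Path, running: Path) -> str:
    """Return a unified diff between an archived config and the current running config."""
    old = known_good.read_text().splitlines(keepends=True)
    new = running.read_text().splitlines(keepends=True)
    return "".join(difflib.unified_diff(old, new,
                                        fromfile=str(known_good),
                                        tofile=str(running)))

# Hypothetical archive layout: one file per nightly backup.
print(config_diff(Path("backups/fw1-2019-05-01.cfg"), Path("backups/fw1-2019-05-02.cfg")))
```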

Device configuration generation and “push”

One of the most common activities that cause network downtime is simple human error when making an “on the fly” configuration change.

Manually performing rule additions, changes, and deletions is not only tedious but also highly error-prone. As the rules increase, the number of possible rule combinations grows rapidly, and it becomes virtually impossible to manually figure out the impact of each rule that is added or changed.

In most networked environments, firewalls from multiple vendors exist concurrently. Even though firewalls from different vendors serve a similar purpose, their design, architecture, and management can differ greatly. This lack of configuration consistency can quickly lead to problems when a new policy must be deployed to a large number of heterogeneous firewall systems simultaneously. Having a central NCCM system with the ability to abstract, and thus automate, the complex rule creation syntax across vendor devices can greatly ease the burden of configuration rollouts.
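
As a minimal sketch of that abstraction (the rule model and the per-vendor rendering functions are hypothetical, and the vendor syntax is simplified), a central NCCM system could keep rules in a vendor-neutral form and render them into each platform’s syntax at push time:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    # Vendor-neutral representation of a firewall rule.
    action: str       # "permit" or "deny"
    protocol: str     # e.g. "tcp"
    source: str       # CIDR
    destination: str  # CIDR
    port: int

def render_vendor_a(rule: Rule) -> str:
    # Simplified ACL-style syntax for a hypothetical "vendor A" firewall.
    return (f"access-list OUTSIDE_IN {rule.action} {rule.protocol} "
            f"{rule.source} {rule.destination} eq {rule.port}")

def render_iptables(rule: Rule) -> str:
    target = "ACCEPT" if rule.action == "permit" else "DROP"
    return (f"iptables -A FORWARD -p {rule.protocol} -s {rule.source} "
            f"-d {rule.destination} --dport {rule.port} -j {target}")

rule = Rule("permit", "tcp", "10.1.0.0/16", "10.2.0.10/32", 443)
for render in (render_vendor_a, render_iptables):
    print(render(rule))
```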

Device configuration policy checking

Corporate governance and regulatory frameworks such as Sarbanes-Oxley (SOX), NERC, PCI-DSS, HIPAA, MiFID II, SAS 70, Basel II, and GDPR have all been introduced to ensure that levels of security and integrity are maintained for company financial information and any stored personal details of customers.

However, translating these policies into an actionable firewall configuration can be a huge challenge. For example, the PCI-DSS policy states that the organization will “install and maintain a firewall configuration to protect cardholder data”, but it does not specify what firewall rules to deploy, what type of firewall to use, and so forth. Standards are used to define the policy goals, but they must be turned into a usable configuration that supports those goals.

Policy compliance then verifies that policies are implemented and remain operational. So compliance is really a continuing process of configuration and verification, and a good NCCM tool can help with both aspects of the job: it provides a mechanism to turn the corporate policy or rule into an electronic policy that can be configured on a firewall, and it must then be able to periodically test the running firewalls to determine whether they still adhere to the originally configured policies and no unwanted changes have been introduced.
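
As a minimal sketch of such a periodic check (the policy rules, config patterns, and backup file name are hypothetical), each corporate rule can be expressed as a predicate over the running configuration and evaluated on every audit run:

```python
import re

# Hypothetical policy rules: each maps a description to a test over the config text.
POLICY = {
    "Telnet management must be disabled": lambda cfg: "transport input telnet" not in cfg,
    "No rule may permit any source to any destination":
        lambda cfg: not re.search(r"permit ip any any", cfg),
    "An explicit deny-all must terminate the inbound ACL":
        lambda cfg: "deny ip any any log" in cfg,
}

def audit(config_text: str) -> list:
    """Return the descriptions of all policy rules the running config violates."""
    return [desc for desc, check in POLICY.items() if not check(config_text)]

running_config = open("backups/fw1-latest.cfg").read()  # hypothetical backup file
for violation in audit(running_config):
    print("VIOLATION:", violation)
```
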
Conclusion

If you oversee managing firewalls or security devices, then network configuration management may well be worth investigating. Network configuration management provides the tools to give you an audit trail of changes to your firewalls, and it can also make enforcing corporate or regulatory policies much easier. A lack of efficient and effective device configuration management affects the business continuity of enterprises: manual configuration of devices eats away at the time and effort of skilled administrators, who struggle to keep track of configuration changes as networks grow larger and larger. Automated NCCM solutions enable network administrators to take control of the entire lifecycle of firewall configuration management. Changing configurations, managing changes, and ensuring compliance and security are all automated. These solutions improve efficiency, enhance productivity, save time, cost, and resources, and minimize human error and network downtime.

With a good NCCM solution in place, enterprises can make best use of their firewall infrastructure. They can achieve increased network up-time and reduced security risk.

Thank you to Peter Moessbauer of Infosim for the article.

Security Spending on Prevention, Detection and Remediation

Security is a big concern for every business, and yet high-profile breaches continue to hit the headlines. This shows that hackers will find a way through even the most robust security measures if the prize is lucrative enough.

Preventive security, even deployed in depth, isn’t enough to save you, your data or your reputation.

Did you know that, according to Gartner, organizations currently spend 50 times more on attack prevention than on post-event analysis and remediation?

Once inside your network, smart criminals will have almost limitless opportunity to access and exploit your most sensitive company assets. 

Can you be absolutely sure your network is uncompromised right now?

Act today and watch our 90-second video to hear network experts, along with a cybersecurity architect at a Fortune 100 company, reveal key deficiencies in current security strategies, including the need for more robust post-attack solutions.

Thank you to Viavi Solutions for the article. 

How to Implement Security Monitoring For Critical Infrastructure

I ran across an interesting statistic a couple of weeks ago. According to a Ponemon Institute report titled “The State of Cybersecurity in the Oil & Gas Industry”, 68 percent of security and risk managers reported losing confidential information or experiencing disruption over the previous year.

The existence of security breaches over the last five-plus years is well documented, so that didn’t bother me. What did bother me is that these breaches are happening in critical building infrastructure and industrial control systems (ICS). That raises my level of concern, because these types of breaches do not appear to be talked about very often.

Security breaches obviously continue to be a persistent challenge for both data center providers and enterprises monitoring their networks, even as expenditures on network security appliances increase. When it comes to ICS, many systems can be vulnerable. Here are some examples of vulnerable systems:

  • Heating, ventilation and air conditioning (HVAC)
  • Building power distribution systems
  • Communication systems
 

In addition, many building control and supervisory control and data acquisition (SCADA) systems remain unhardened against the multitude of security threats that exist. These threats include:

  • Third-party remote and wireless access, since contractors may have lax security processes
  • Proprietary appliances and sensors running potentially outdated software that is prone to vulnerabilities, along with default/easy passwords and a lack of encryption safeguards
  • Insufficient attention from NOC/SOC personnel, due to the auxiliary nature of critical infrastructure networks relative to their daily tasks
  • The common practice of rotating technical personnel that are servicing critical infrastructure equipment — this provides wider access to the physical infrastructure including the network and USB ports
  • Malware insertion through dedicated attacks that take control of critical infrastructure for criminal and nation-state attacks

Malware and cyberattacks can easily interfere with command and control of critical data infrastructure and can also result in successful ransomware attacks that cost thousands, if not millions, of dollars.

Security isn’t the only problem, though. ICS can suffer simple maintenance failures or overload conditions caused by lightning or other natural factors, fires, and other problems. However, consistent monitoring and the installation of simple network visibility solutions provide clear and cost-effective ways to manage these problems. Critical pieces of network data, exposed by a visibility solution and analyzed in real time or near real time, can prevent losses of building functionality such as power outages, air conditioning outages, and equipment damage.

For example, modern HVAC systems need continual monitoring to stay energy efficient and to ensure that building occupants are comfortable. Frequent monitoring is necessary because there are numerous environmental sensors and motorized control systems within HVAC systems. Proper monitoring helps maintain a consistent temperature and reduces energy and maintenance costs.
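
As a minimal illustration (the zone name, comfort band, and readings are hypothetical, and a real deployment would pull values from the building’s sensor network), even a simple check that compares periodic temperature readings against a comfort band can catch an HVAC fault before occupants or equipment are affected:

```python
# Hypothetical comfort band for a monitored zone, in degrees Celsius.
LOW, HIGH = 20.0, 24.0

def check_zone(zone: str, readings: list) -> list:
    """Flag any reading that drifts outside the configured comfort band."""
    return [f"{zone}: reading {value:.1f} C outside {LOW}-{HIGH} C"
            for value in readings if not (LOW <= value <= HIGH)]

# In a real deployment these readings would come from the building's sensors.
alerts = check_zone("ServerRoom-2", [22.1, 22.4, 25.7, 26.3])
for alert in alerts:
    print("ALERT:", alert)   # hand off to the NOC/SOC alerting system here
```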

The benefits of monitoring ICS systems include the following:

  • Remote access 24 x 7 to critical infrastructure and control systems
  • Cost reduction because of faster alerting of system problems
  • Deployment of n+1 survivability for ICS monitoring tools
  • Testing and validation of critical infrastructure against security threats


Whether you are part of the DevOps or SecOps team makes no difference—threats and problems are a daily, if not hourly, occurrence. What you need is good quality data as fast as you can get it to counter security threats, troubleshoot network outages, and remediate performance problems.

Unfortunately, IT security and analytics tools are only as good as the data they are seeing. An integrated approach for proper network visibility, network security, and network testing ensures that your tools get the right data at the right time, every time. Without an approach like this, IT teams will continue to struggle with preventing security breaches—and many will fail.

If you want more information on this topic, try reading this solution brief: Security Monitoring of Critical Infrastructure.

Thank you to Keith Bromley of Ixia, a Keysight Business, for the article.

Hybrid IT Monitoring: The ABCs of Network Visibility

The recently released 2019 State of the Cloud report by RightScale found that 58% of 800 technical professionals surveyed use a hybrid strategy [1]. Combining on-premises, virtual, and cloud-based resources offers great flexibility, but managing hybrid IT is also more complex. IT teams need new strategies to monitor their hybrid infrastructure effectively.

THE CHALLENGE OF HYBRID IT MONITORING

Effective performance and security monitoring requires detailed information about the traffic flowing in your network. In hybrid environments, traditional network taps and SPAN ports cannot see all the traffic in your network. To identify anomalies and the root cause of performance issues, you need complete visibility into every corner of your network, including virtual and cloud-based infrastructure, just as you have in your on-premises data center.

In virtualized infrastructure and private clouds, traffic moving between virtual resources (referred to as east-west traffic) does not pass through a physical network switch. This traffic is a blind spot to your monitoring tools unless you deploy a solution that can access packets inside the virtualization (hypervisor) layer.

When you use public cloud, you do not have access to either the physical infrastructure or the virtualization layer. Public cloud workloads are highly dynamic and moved at will by the cloud provider to maximize performance of their overall infrastructure. To get access to network traffic in public clouds, you need a container-based, cloud-specific data access solution.

SPECIFIC USE CASES FOR HYBRID IT MONITORING 

Reduce migration risk. Deploying applications on hybrid IT carries more risk since traffic will flow on infrastructure that is not directly controlled or managed by the IT team. You will need a specific strategy for monitoring off-premises infrastructure to avoid application disruption and protect sensitive data. 

Maintain cybersecurity in the cloud. We know that clouds are susceptible to security breaches and data loss. The only way to stay vigilant is to monitor 100% of the network packets flowing through your cloud-based and virtual servers. You must not only gain access to all data, but also have a visibility solution that can scale automatically and cost-efficiently as traffic volume grows.

Troubleshoot performance issues. As enterprises shift more of their operations to virtual and cloud-based infrastructure, the network team must ensure performance monitoring is extended there as well. Hybrid environments increase IT complexity. The network operations team is often the first line of defense for the end-user experience, even if the network is not always the root cause of the issue. Network operations center staff need tools that can provide visibility into all traffic, to better identify the source of performance issues.

Monitor service level agreements of providers. With so much riding on the performance of hybrid IT infrastructure, it makes sense for IT to monitor the quality of service they receive from their cloud providers. Every organization using cloud should make sure their contract gives them access to the data they need for independent verification and audit of cloud services. Some cloud providers issue credits for not meeting the service level agreements stipulated in their contracts.

CONSIDERATIONS FOR HYBRID IT MONITORING

As you develop your monitoring strategy and architecture, consider the following:

Pre-deployment testing. You can validate whether your network is up to the task of supporting a new application or service using a pre-deployment performance test platform (such as Ixia Hawkeye). The testing process also lets you establish thresholds and performance standards such as target response time, bandwidth usage, and tolerable packet loss. You can also use these tests to determine the best location in the network to process a particular workload.
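
As a minimal, tool-agnostic sketch of that idea (the endpoint, sample count, and threshold are hypothetical, and a platform such as Ixia Hawkeye does far more), you can measure TCP connect latency against a target service and compare it with the response-time threshold you established:

```python
import socket
import statistics
import time

TARGET = ("app.example.com", 443)   # hypothetical service endpoint
THRESHOLD_MS = 150.0                # response-time target from pre-deployment planning
SAMPLES = 10

def connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Measure the time to complete a single TCP handshake, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

latencies = [connect_latency_ms(*TARGET) for _ in range(SAMPLES)]
p95 = sorted(latencies)[int(0.95 * (SAMPLES - 1))]
print(f"median={statistics.median(latencies):.1f} ms, p95={p95:.1f} ms")
if p95 > THRESHOLD_MS:
    print("FAIL: network does not meet the pre-deployment response-time target")
```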

Cloud-native data access. Simply accepting your clouds as “blind spots” in your network is not an option because clouds are now the dominant mode of IT operation. To get visibility to data in a hybrid environment, you need to tap and filter data in every cloud. A container-based, cloud-native sensor, such as that used in Ixia CloudLens, can be deployed inside every cloud you spin up. With this solution, you do not need to manually configure data access and you automatically receive a copy of every packet that passes through all your clouds.

Support for a large number of tools. Traditionally, IT has physically connected monitoring tools right onto the network segment they wanted to monitor. That approach works well until the number of tools you want to use exceeds the number of available network access points. A visibility platform with a network packet broker (such as Ixia’s Vision Series) overcomes this limitation by establishing a layer between the network and your monitoring tools. The NPB is able to receive traffic from any number of network segments, consolidate and filter the traffic, and send a customized stream of traffic to any number of monitoring tools.

Data capture at the edge. The expansion of data collection and processing at the network edge presents IT with a new challenge. Now they must actively observe user experience at the edge and resolve issues to keep the business running smoothly. Edge-specific visibility platforms are now available to cost-efficiently capture and filter packet data, so the network edge does not become a blind spot. (See Ixia Vision Edge.)

Tool efficiency. As your traffic volume increases, you should look at increasing the efficiency of your monitoring tools to keep costs from rising. You can use a network visibility platform and NPB to pre-process and streamline data before you deliver it to your tools. The NPB functions like a personal assistant to your tools, offloading non-core tasks and helping your tools run more efficiently. The NPB’s intelligent processing engine reduces workload by filtering packets and sending tools only the data they need. The result is less work for your tools and better use of available tool capacity.

Continuous active monitoring. You can reduce the risk of an application disruption by continually monitoring your production network. An active monitoring platform (such as Ixia Hawkeye) runs test traffic through your network on a regular basis, to identify any new issues that emerge as a result of changes and upgrades to your infrastructure. Active monitoring also provides you with a map of how traffic actually moves through your network, which helps speed up troubleshooting.

SUMMARY

As you rely more on hybrid infrastructure, make sure your network monitoring architecture is up to the task of protecting your applications and services from performance disruptions and security breaches. Learn more about Ixia’s approach to total network visibility.

[1] RightScale: State of the Cloud, sponsored by Flexera, February 2019.

Thank you to Lora O’Haver of Ixia, a Keysight Business, for the article.