Network and application monitoring tools have always been critical for maintaining the performance of connected resources. These practices are even more important as our IT infrastructure grows more diverse, our connected devices become smarter, and our customers demand newer, better, faster services.
There are hundreds of solutions available for monitoring network and application performance for organizations of every size. Your aggregate investment in these tools is not trivial. How can you make sure your tools deliver the highest return on your investment and have the capacity to scale up as your organization grows?
One way is to increase the efficiency of the monitoring tools you already own and operate. The following four strategies can help you handle more volume with the tool capacity you already have.
Increase Data Access
This may seem counterintuitive, but providing your monitoring tools with all of the data relevant to the issue you are troubleshooting can actually shorten the time it takes to arrive at a result. If you are only collecting half of the data points available, your analysis can take longer and is at greater risk of being incorrect. A more efficient approach is to gather all the relevant data and diagnose the issue correctly the first time, reducing the risk of recurrence.
To ensure your tools have all the data they need, you need to collect network packets from across your entire network. The network segments most often overlooked are those where data is processed off-premises, such as in a public cloud, or where data moves only between virtual servers on the same physical host. If you use cloud-based and virtual infrastructure, you will need to deploy solutions designed specifically to access network packets in those environments.
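For a concrete, if simplified, picture of what collecting those often-overlooked packets can look like, the sketch below uses the open-source Scapy library on a Linux host to capture traffic from a virtual bridge interface. The interface name virbr0 is only a placeholder; a purpose-built virtual tap or a cloud provider's packet-mirroring service does the same job at production scale.

```python
# Minimal sketch: capturing east-west traffic between co-located virtual machines.
# Assumes a Linux host with Scapy installed; "virbr0" is a placeholder for the
# virtual bridge or vSwitch interface carrying inter-VM traffic.
from scapy.all import sniff

def handle_packet(pkt):
    # A real collector would forward the packet to a broker or tool;
    # here we just print a one-line summary.
    print(pkt.summary())

# Capture 100 packets from the virtual bridge so traffic that never leaves
# the physical host is not a monitoring blind spot.
sniff(iface="virbr0", count=100, prn=handle_packet)
```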
Filter Data Before Delivering to Tools
This strategy works by reducing the volume of packets your monitoring tools must sort through to zero in on the ones they are meant to process. Not all tools require the same data, and it is a waste of tool capacity to process data that is irrelevant to a tool's function. To perform efficient data filtering, you need a fast processing engine that can discern characteristics such as where packets originate, which applications they are associated with, their intended destination, the type of endpoint device accessing your network, and other details.
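As an illustration only, the following sketch shows the kind of header-based decision a filtering engine applies to every packet. It assumes the Scapy library for packet parsing, and the subnet and port values are hypothetical examples rather than recommended settings.

```python
# Illustrative sketch of header-based packet filtering: forward only web traffic
# that originates from a branch-office subnet. Assumes Scapy; the subnet and
# port values are hypothetical examples.
from ipaddress import ip_address, ip_network
from scapy.all import IP, TCP, UDP

WEB_PORTS = {80, 443}                       # traffic relevant to a web APM tool
BRANCH_SUBNET = ip_network("10.20.0.0/16")  # packets originating at branch offices

def matches_filter(pkt):
    """Return True if this packet should be forwarded to the monitoring tool."""
    if not pkt.haslayer(IP):
        return False
    from_branch = ip_address(pkt[IP].src) in BRANCH_SUBNET
    if pkt.haslayer(TCP):
        dport = pkt[TCP].dport
    elif pkt.haslayer(UDP):
        dport = pkt[UDP].dport
    else:
        dport = None
    return from_branch and dport in WEB_PORTS
```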
Products known as network packet brokers (NPBs) provide the processing power to filter data and deliver it at line-rate speed to your NPM and APM solutions. You deploy NPBs between your data collection devices and your monitoring tools. This placement lets you control which packets you send to your tools, which tools receive the data, the speed at which the data arrives, and the path the data takes between the tools connected to the packet broker. You set up these actions on an NPB using an intuitive, drag-and-drop interface that is accessible remotely and eliminates the need for programming.
Once you have established this control layer, you have many more options for increasing the efficiency of your NPM and APM tools and extending their life span. Examples include:
- Ability to decrypt secure traffic one time and deliver the plain text to multiple tools simultaneously
- Ability to aggregate traffic from across your network and allocate to tools based on their available capacity (load balancing)
- Ability to collect traffic from a high-speed network segment and deliver packets at a slower speed to tools that are not upgraded
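To make the load-balancing example above concrete, here is a minimal sketch of flow-aware distribution: hashing a packet's 5-tuple gives a stable assignment, so every packet in a conversation reaches the same tool. The tool names are placeholders, and a real NPB performs this assignment in hardware at line rate.

```python
# Minimal sketch of flow-aware load balancing: a stable hash of the 5-tuple
# keeps all packets of a flow going to the same tool instance.
import hashlib

TOOLS = ["apm-probe-1", "apm-probe-2", "npm-analyzer-1"]  # placeholder tool names

def pick_tool(src_ip, dst_ip, src_port, dst_port, protocol):
    """Map a flow to one of the attached tools based on a hash of its 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    return TOOLS[int.from_bytes(digest[:4], "big") % len(TOOLS)]

# Every packet of this flow lands on the same tool; a production broker would
# also normalize the tuple so both directions of the conversation match.
print(pick_tool("10.20.1.5", "192.0.2.10", 51514, 443, "tcp"))
```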
Reduce the Cost of Monitoring
Many of the tasks performed by the NPB reduce the workload on your monitoring tools. With fewer packets to process, your tools have the capacity they need to keep up as your business grows and your traffic volume increases. Offloading work from your monitoring tools helps you control costs by delaying upgrades to your NPM and APM tools.
You will get the most significant reduction in workload by eliminating duplicate packets from the stream of data you send to your tools. Modern networks are designed with redundancy to increase resiliency and guard against packet loss. As a result, you will collect many duplicate packets as you monitor your network.
A high-speed NPB can eliminate duplicate packets at line rate, reducing the workload on your monitoring tools. The best packet processing engines can deduplicate at the same time they are executing your data filters, without increasing latency or dropping packets. Before you choose an NPB, test it to see exactly how it performs under pressure. The solution you choose should be able to perform all of the functions you need without forcing you to trade one off against another.
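For illustration, here is a minimal sketch of time-window deduplication, assuming a packet's identity can be summarized as a hash of its bytes. A hardware deduplicator in an NPB does this work at line rate and typically masks fields that legitimately differ between copies of the same packet, such as TTL or checksums, before hashing.

```python
# Minimal sketch of time-window packet deduplication: drop a packet if an
# identical one (same byte-level hash) was seen within the last few milliseconds.
import hashlib
import time

WINDOW_SECONDS = 0.05   # dedup windows are typically tens of milliseconds
_seen = {}              # packet hash -> timestamp of last sighting

def is_duplicate(packet_bytes, now=None):
    """Return True if this packet duplicates one seen within the window."""
    now = time.monotonic() if now is None else now
    key = hashlib.sha256(packet_bytes).digest()
    last = _seen.get(key)
    _seen[key] = now
    return last is not None and (now - last) <= WINDOW_SECONDS
```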
Monitor at the Network Edge
In many organizations, critical transactions and operations are shifting to the edge of the network, taking place at remote endpoints or branch offices. Research conducted by Enterprise Management Associates found a steady uptick in the number of remote sites connecting to wide-area networks and the number of devices connecting to the network at those sites. The result is that data critical to NPM and APM is generated farther from the data center, where those tools are typically deployed.
Many enterprises are choosing to monitor at least partially at the network edge. This strategy allows them to maintain performance and user experience without incurring the latency and cost of transporting data back to the data center. As in the data center, the best way to keep NPM and APM solutions working efficiently is to establish a data control layer with an NPB. You may not need all of the processing power of a data center-based NPB, but the ability to filter and condense data will increase the efficiency of your edge-located tools. Look for NPBs that are sized and priced appropriately for edge deployments.
Once you are monitoring at the network edge, you can use the processing power of the existing tools in your cloud or data center to conduct deeper analysis. A newer trend is for companies to use data center tools to run simulations and develop optimization strategies for data collection and analysis at the edge. Combining edge monitoring with centralized analytics can support significant improvements in operational efficiency and customer experience.
About the Author:
Lora O’Haver, senior solutions marketing manager, Keysight Technologies
Lora O’Haver is a senior solutions marketing manager at Keysight, with over twenty-five years of experience in enterprise computing, networking, and cloud technologies. Lora is responsible for marketing Keysight’s network visibility and security solutions and is passionate about translating product capabilities into solutions that solve business and technology challenges.
She regularly produces articles, blogs, white papers, and presentations on topics related to network security and management, particularly in hybrid IT environments.
Lora joined Keysight through the acquisition of Ixia in 2017 and previously held a variety of senior marketing positions at Cisco and HP.