“It’s time to rethink network performance monitoring.”
– Dan Joe Barry, Vice President of Marketing at Napatech, says:
In years past, a trip to the DMV or the laundromat meant hours of flipping through dated newspapers and magazines. But today, with the rise of mobility, consumers are no longer bored as they wait for their fluff and fold, or for the next number to be called. Today’s world is connected to an ongoing data stream, and whether it’s corporate applications or the latest online viral video, consumers are looking for a constant stream of data, whenever and wherever they want it.
While the world is now available at the push of a button, the reality is that today’s mobile computing platforms are tasked with streaming massive quantities of local and social data to end-users, which in turn is weighing down network performance.
Network engineers are working overtime to deliver this constant flow of data in real time, but performance monitoring is losing ground, with frequent usage spikes slowing down network performance.
It’s time to rethink network performance monitoring.
What is driving the need for change? These three players are leading the charge:
- Unified communications, including “SoLoMo” services (Social, Local, Mobile)
- Application awareness
- Expansion of monitoring into new areas
With flexibility and scalability in mind, the emergence of unified communication and the need for application awareness are forcing performance monitoring solution vendors to rethink their strategies.
The reality is that performance monitoring has been evolving for a while. Leading vendors anticipated this shift and acted accordingly. The time is now for the rest to rethink their strategy for network performance monitoring.
Surviving the Change
In order to create useful performance monitoring appliances, one must understand the impact of these drivers. It is necessary to keep in mind what is important for the end-user and where rationalization needs to occur.
This article will provide an inside look at the key drivers causing this change and what OEM vendors and system architects will need to remember when designing performance-monitoring appliances.
Rethinking Application Awareness
For the most part, performance monitoring is a troubleshooting tool. Some use this tool to identify network issues, while end-users, such as product line managers and CXOs, use performance monitoring to identify root-cause issues.
Since most applications in the past followed well-known TCP port designations, a network focus was sufficient. However, with the growth of the Internet, web-based applications and thick-client applications running on wide area networks, basic network information is no longer sufficient. Today, end-users are more concerned with which applications are in use and what resources they are consuming. For some enterprises, applications play an essential role in the business process; therefore, understanding how applications are performing is extremely useful.
The primary difficulty encountered by OEM vendors is how to incorporate application awareness into their products while also ensuring throughput performance with growing data loads and network speeds.
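To illustrate why port-based classification is no longer sufficient, here is a minimal sketch. The port mappings, hostnames and payload heuristics are purely illustrative (production DPI engines use signature databases, not substring checks); the contrast between the two functions is the point:

```python
# Hypothetical sketch: port-based vs. application-aware classification.
# All names and mappings below are illustrative examples, not from any product.

WELL_KNOWN_PORTS = {25: "smtp", 53: "dns", 80: "http", 443: "https"}

def classify_by_port(dst_port):
    """Legacy approach: assume the TCP port identifies the application."""
    return WELL_KNOWN_PORTS.get(dst_port, "unknown")

def classify_flow(dst_port, payload):
    """Application-aware approach: look past the port into the payload.
    Web-based applications all share ports 80/443, so the port alone
    cannot distinguish a CRM transaction from a viral video."""
    label = classify_by_port(dst_port)
    if label in ("http", "https"):
        # Illustrative heuristics only; real engines match signatures.
        if b"Host: video.example.com" in payload:
            return "video-streaming"
        if b"Host: crm.example.com" in payload:
            return "crm-application"
    return label
```

Two flows that look identical at the network layer (both TCP port 80) can thus be reported as distinct applications, which is the granularity the end-users described above actually care about.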
Developing for Unified Communications
Until recently, email and web browsing consumed most corporate IP network traffic. However, with trends such as cloud computing, bring-your-own-device (BYOD) and social networking, networks are being taxed like never before. Moreover, Voice over IP and video teleconferencing are prime business tools that demand a high quality of service.
It is in today’s unified communications context that performance monitoring needs to operate. OEM vendors, system integrators and CXOs have a clear opportunity to provide significant ROI to corporate enterprises by helping them better understand performance issues and network planning to support individual employee and corporate goals.
The challenge for OEM vendors is building support for the various unified communications components into their products while also providing metrics that produce meaningful correlations between application performance and network performance. At the same time, CXOs are looking for ways to monitor and report on performance issues so they can stay on top of their networks and balance the need for ever-increasing data streams.
Creating New Opportunities
Enterprises are concerned with appliance spread. Appliances are required to capture and analyze the network traffic at high speeds without packet loss. Yet many appliance solutions also exist to address concerns such as lawful intercept, policy enforcement, network security and transaction monitoring.
Appliance vendors might face resistance in attempting to introduce “yet another box” into the network, even if it is passive. However, this challenge presents an opportunity for vendors to consolidate multiple functions into one physical device, thus making their appliance more valuable. Currently deployed performance monitoring appliance probes also provide data to other tools focused on network optimization, security and surveillance.
Best practices suggest that it is time to rethink the design of appliances, allowing them to be more open to including or cooperating with functions that are not normally associated with performance monitoring.
Reconsidering Appliance Designs
Because performance monitoring is focused on the network, competence in network hardware and software is necessary for success. Nevertheless, when considering the need for application awareness and unified communications support, the focus needs to shift to understanding how applications and transactions map onto the network.
Separating the application layer from the network supports this focus and simultaneously opens appliances up to new functions not typically associated with them.
We must now reconsider what should be performed in hardware and what should be performed in software. With more network functions moved to hardware, application software can focus on application intelligence. Today’s networks demand smart network adapters that can recognize applications in hardware by examining layer-one to layer-four header information.
In addition, hardware that provides this information can be used to identify flows and distribute them to as many as 32 server CPU cores, allowing massively parallel processing of data. All of this should be achieved with low CPU utilization.
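The flow-distribution step described above is typically a hash over the layer-three/four header fields. A minimal sketch, assuming a symmetric 5-tuple hash (the constant and helper names are hypothetical; real adapters implement this in silicon):

```python
import zlib

NUM_CORES = 32  # as in the article: flows spread across up to 32 cores

def flow_key(src_ip, dst_ip, src_port, dst_port, proto):
    """Build a symmetric key from the 5-tuple: sorting the two endpoints
    ensures both directions of a flow produce the same key, so request
    and response always land on the same core."""
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    lo, hi = (a, b) if a <= b else (b, a)
    return f"{proto}|{lo[0]}:{lo[1]}|{hi[0]}:{hi[1]}".encode()

def core_for_flow(src_ip, dst_ip, src_port, dst_port, proto):
    """Hash the flow key to pick one of NUM_CORES worker cores."""
    key = flow_key(src_ip, dst_ip, src_port, dst_port, proto)
    return zlib.crc32(key) % NUM_CORES
```

Because the key is direction-independent, a core sees every packet of its flows and no packets of any other core’s flows, which is what makes the parallelism lock-free.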
Appliance designers should look for features that free up as much processing power and memory as possible and that can identify applications requiring memory-intensive packet payload processing.
Planning for Flexibility and Scalability
Networks today must be scalable and flexible. With Ethernet speeds now reaching 40 Gbps, networks require tools that support varying transmission speeds. However, because end-users are focused on application performance, they expect performance monitors to give the same results regardless of line rate.
Therefore, it is critical to decouple application intelligence from network line speed. A good network adapter will provide the same features and support from 1G to 10G to 40G and will be accessible via a single application programming interface (API). This enables appliance designers to develop and test application software once, safe in the knowledge that it will behave the same way no matter the hardware configuration.
It is also necessary to look for adapters that can merge traffic from multiple ports on multiple adapters into one analysis stream, thus abstracting the hardware configuration from the software programmer. Regardless of the number or type of ports configured, the programmer sees only one “virtual adapter” with multiple ports.
Furthermore, this abstraction provides valuable per-port statistics that supply essential information for deep-dive analysis of root-cause issues. Data transferred from multiple ports is time-synchronized so that time-stamped packet data can be accurately correlated.
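Conceptually, the “virtual adapter” merge is an ordered merge of per-port streams keyed on the hardware timestamp. A minimal sketch under that assumption (the tuple layout is hypothetical; real adapters merge in hardware or in the driver):

```python
import heapq

def merge_ports(*port_streams):
    """Merge per-port packet streams, each already ordered by hardware
    timestamp, into a single analysis stream. Items are hypothetical
    (timestamp_ns, port_id, packet) tuples; keeping the port_id preserves
    the per-port statistics the analysis layer needs."""
    return heapq.merge(*port_streams, key=lambda item: item[0])
```

Because each input stream is already time-ordered, the merged stream is in global time order, which is exactly the property that lets packets captured on different ports be correlated accurately.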
Preparing for Future Network Challenges
Specialized network appliances can be quite pricey, making scaling to meet increased demand an expensive proposition for carriers, cloud providers, enterprises and telecoms. Even worse, if the market shifts toward adoption of novel network hardware, these organizations must bear the cost of updating their infrastructure in order to stay competitive.
By decoupling network and application data processing and building flexibility and scalability into the design, appliance designers can introduce a powerful, high-speed platform into the network that is capable of capturing data with zero packet loss at speeds up to 40 Gbps.
The analysis stream provided by the hardware platform can support multiple applications, not just performance monitoring. Investigate solutions that provide the capability to share captured data between multiple applications without the need for costly replication. Multiple applications running on multiple cores can be executed on the same physical server with software that ensures that each application can access the same data stream as it is captured.
This transforms the performance monitor into a universal appliance for any application requiring a reliable packet-capture data stream. With this capability, more functions can be incorporated into the same physical server, increasing the value of the appliance.
Staying Ahead of the Curve
With the explosion of growth in today’s mobile data usage, app development and BYOD, networks are being strained more than ever before. Additionally, as Ethernet connectivity speeds increase and network traffic expands, it is clear that today’s standard network monitoring systems are inadequate. In order to remain competitive, cloud providers, OEMs, telecoms and CXOs need to look for new technologies that provide accurate and reliable network measurement and analysis on demand.
About the Author
Daniel Joseph Barry is VP of Marketing at Napatech and has over 20 years of experience in the IT and telecom industry. Prior to joining Napatech in 2009, Dan Joe was Marketing Director at TPACK, a leading supplier of transport chip solutions to the telecom sector. From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now Ignis Photonyx), following various positions in product development, business development and product management at Ericsson. Dan Joe joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc in Electronic Engineering from Trinity College Dublin.