Knowing the state of your servers and the capacity of your infrastructure
– JF Piot, VP of Product Management at GSX Solutions
For most Fortune 500 companies, hardware availability is no longer a problem. Powerful systems are connected in private and hybrid clouds for efficiency and redundancy. Rack and blade servers are equipped with monitoring and hot-fix features that proactively alert IT administrators when problems arise. Capacity planning tools help IT administrators budget for future projects. Such tools can significantly reduce capital expenditure on hardware and storage in the data center and in the cloud, especially in a world of big data, as well as operational costs, particularly around power consumption.
Over the last 10 years, servers have become increasingly virtualized, allowing administrators to dedicate critical applications to a single virtual server. However, those virtual servers and the application workloads running on them still rely heavily on the underlying physical servers. CPU and RAM availability are critical for application performance, and IT administrators need to plan for available capacity and try to predict potential resource bottlenecks before they happen.
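As a rough illustration of that kind of planning, the minimal sketch below samples CPU and RAM utilization and extrapolates a simple linear trend to flag a looming bottleneck. It assumes the open-source psutil library rather than any particular monitoring product, and the sampling interval, window size and 90% threshold are illustrative choices, not recommendations.

```python
# Minimal sketch: sample CPU and RAM utilization and project a simple
# linear trend to flag a potential bottleneck before it happens.
# Thresholds, window size and interval are illustrative assumptions.
import time
import psutil

SAMPLES = 12          # number of samples kept in the sliding window
INTERVAL_SECONDS = 5  # sampling interval
ALERT_THRESHOLD = 90  # percent utilization treated as a bottleneck


def linear_forecast(values, steps_ahead):
    """Least-squares slope over the window, extrapolated steps_ahead samples."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    denom = sum((x - mean_x) ** 2 for x in xs) or 1
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) / denom
    return mean_y + slope * (n - 1 - mean_x + steps_ahead)


cpu_window, ram_window = [], []
while True:
    cpu_window.append(psutil.cpu_percent(interval=None))
    ram_window.append(psutil.virtual_memory().percent)
    cpu_window, ram_window = cpu_window[-SAMPLES:], ram_window[-SAMPLES:]

    if len(cpu_window) == SAMPLES:
        for name, window in (("CPU", cpu_window), ("RAM", ram_window)):
            projected = linear_forecast(window, steps_ahead=SAMPLES)
            if projected >= ALERT_THRESHOLD:
                print(f"warning: {name} projected to reach "
                      f"{projected:.0f}% within {SAMPLES * INTERVAL_SECONDS}s")
    time.sleep(INTERVAL_SECONDS)
```

Production tooling does far more (baselining, seasonality, alert routing), but the underlying idea is the same: trend the resource curve and act before it crosses the line.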
Moreover, since applications and servers are so tightly linked (you don’t buy boxes just for pleasure …), administrators need to understand how critical applications use server resources: how individual processes consume CPU and memory, and how those applications interact with workloads on other servers or respond to user requests, for example.
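A per-process view is easy to sketch. Assuming psutil again, and with process names that are only placeholders for whatever workloads matter in your environment, it might look like this:

```python
# Minimal sketch: per-process view of how an application consumes
# server resources. The watched process names are placeholders for
# whatever workloads you actually care about.
import psutil

WATCHED = {"store.exe", "w3wp.exe"}  # illustrative process names

for proc in psutil.process_iter(["name", "cpu_percent", "memory_info", "num_threads"]):
    info = proc.info
    if info["name"] and info["name"].lower() in WATCHED and info["memory_info"]:
        rss_mb = info["memory_info"].rss / (1024 * 1024)
        print(f"{info['name']:<12} pid={proc.pid:<6} "
              f"cpu={info['cpu_percent']:>5.1f}% rss={rss_mb:,.0f} MB "
              f"threads={info['num_threads']}")
```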
Administrators need tools to determine how many boxes they will need to support critical business applications, such as Microsoft Exchange and SharePoint. To do so, they need to analyze existing performance and usage statistics at both the application and server level, whether virtual or physical. Further, this insight into resource availability and utilization is essential for building a private or hybrid cloud. To budget for such projects, administrators must analyze historical data on application performance, usage and resource bottlenecks.
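The arithmetic behind that sizing can be sketched in a few lines. The figures and the headroom target below are hypothetical; real sizing would draw on measured application- and server-level statistics.

```python
# Minimal sketch: estimate how many servers a workload needs so that
# historical peak demand stays below a target utilization per server.
# All numbers are hypothetical placeholders.
import math

peak_cpu_percent = [78, 85, 91, 74]   # measured peak CPU on each current server
target_utilization = 60               # desired peak per server, in percent

total_demand = sum(peak_cpu_percent)  # demand in "percent of one server" units
required = math.ceil(total_demand / target_utilization)

print(f"current pool: {len(peak_cpu_percent)} servers, "
      f"worst peak {max(peak_cpu_percent)}%")
print(f"servers needed for a {target_utilization}% peak target: {required}")
```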
Big Data Management
To make resource management matters more complicated, organizations are increasingly moving to hybrid on-premise and public cloud environments. Hybrid cloud infrastructure is highly scalable, letting organizations spin up extra server, virtual machine, storage and networking capacity on the fly and on an as-needed basis. Public clouds such as Microsoft Azure and Amazon EC2 can complement private cloud infrastructure at a fraction of the cost, since organizations do not need to make capital investments in infrastructure that may only be needed in special circumstances. Hybrid clouds give organizations the flexibility to scale up or down; however, management of and visibility into such environments becomes more difficult. Automated monitoring and capacity management tools have become a necessity to keep the machine running smoothly.
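The scale-out decision those tools automate can be reduced to a simple rule. The sketch below is a generic illustration with made-up thresholds; real tooling would call the Azure or EC2 provisioning APIs where the comments indicate.

```python
# Minimal sketch of the burst decision a hybrid setup automates:
# when sustained on-premise utilization crosses a threshold, flag that
# extra public-cloud capacity should be provisioned; release it once
# demand falls back. Thresholds and the provisioning hook are illustrative.
BURST_THRESHOLD = 80      # percent utilization that triggers a burst
RELEASE_THRESHOLD = 40    # percent utilization below which burst capacity is released


def plan_capacity(private_utilization_percent, burst_active):
    """Return 'burst', 'release', or 'hold' for the current reading."""
    if not burst_active and private_utilization_percent >= BURST_THRESHOLD:
        return "burst"      # spin up public-cloud instances here
    if burst_active and private_utilization_percent <= RELEASE_THRESHOLD:
        return "release"    # tear them down to stop paying for idle capacity
    return "hold"


# Example readings over time
active = False
for reading in (65, 88, 92, 35):
    action = plan_capacity(reading, active)
    if action == "burst":
        active = True
    elif action == "release":
        active = False
    print(f"utilization {reading}% -> {action}")
```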
Finally, not everybody works in a Fortune 500 company, and a lot of companies don’t have big data centers and extensive private clouds that could guarantee even 99% system availability. Mid-sized companies that still maintain their own IT environment are often impacted by hardware outages, and have limited financial resources and inadequate staff to predict or resolve issues as they arise, or ideally before they do. These organizations need intuitive, automated tools to monitor performance on each server. Further, mid-sized companies, just like their enterprise counterparts, need to be able to analyze the performance of the applications on those servers in order to ensure optimal availability and performance. Ultimately, the goal is to increase ROI.
The fact is that in today’s data centers, whether hybrid, in the cloud or on-premise, physical servers, virtual machines, the OS and applications are linked like a chain, with each link dependent on the layer below it. When one link fails, the others fail as well. In order to optimize IT costs, avoid availability or performance problems, and ensure proper resource management, monitoring and analyzing how these layers perform together is mandatory.
About the author
Jean-François Piot is responsible for articulating the vision for the GSX Solutions product range and translating it into detailed specifications. His experience in IT processes and service management projects comes from previous business development positions at IBM and Beijaflore (an IT service provider).
About GSX Solutions
GSX Solutions is the global leader in proactive, consolidated monitoring, analysis, and management of enterprise collaboration and messaging environments, including Microsoft Exchange, SharePoint, BlackBerry Enterprise Server, and IBM Notes, as well as LDAP and SMTP ports, and any URL. GSX Solutions is a Microsoft Systems Center Alliance Partner, a Microsoft Silver Partner, and a BlackBerry Alliance Elite Partner, and provides automated server maintenance for Domino and Windows-based servers. Monitoring millions of mailboxes for over 600 global enterprises, GSX is headquartered in Geneva, with R&D in Nice, France, and offices in the US, UK and China. For more product information and partner opportunities, please visit www.gsx.com.