

Shehan Akmeemana, CTO of Data Dynamics, says:

Network Attached Storage

Several years ago I managed a sizable NAS (Network Attached Storage) estate. As a relatively early adopter of NAS storage for user and home directories (CIFS), we had an interest in expanding it to the UNIX application world (NFS). Our adoption of NAS for CIFS was accelerated by the events of 9/11, when most of our Windows file servers were lost. We had just started exploring the use of NAS to replace those file servers, and after the fallout our adoption was “all in.”

After 9/11 we moved all of our 12,000 North American users to a new NAS environment. We had bought NAS heads and split them across our various business units (BUs). Each BU had its own set of replicated filers. As we became comfortable with the filers, our footprint began to grow, and we added capacity in terms of both disk and CPU. There were some growing pains, but we continued to increase our NAS adoption.

Time to Refresh

Fast forward three years, and it was time to refresh our estate. The gear had become old and technology had come a long way, so we decided to upgrade our entire pool of NAS systems. At that time our option for migrating was to leverage the array-based replication tools and move everything as is. With some small pain we managed to complete a one-to-one NAS refresh. By this time our bank of systems was growing considerably, and in addition we had just certified NFS and were in the midst of migrating several NFS farms to the NAS environment. We were well on our way to integrating these specialized filers into our data centers.

However, this increase in NAS systems presented a significant challenge for us. We were running out of capacity in our data centers and were also in the middle of a growth phase, so we could not put capacity on the ground fast enough. Everything we put out was consumed by our user community. We had 32 NAS clusters split among five business units, and both user and group capacity was being consumed rapidly. The company had also grown from twelve thousand to about twenty thousand employees, making our data growth, and in turn our filer growth, almost unmanageable.

As this set of hardware came to the end of its maintenance cycle, we had no choice but to look at ways to consolidate. The challenge was made more complex by the fact that the filers were not aligned by business unit, creating multi-tenancy and requiring us to look at every user and group directory. We also had to decide how to consolidate, as the native tools at that time were not conducive to granular data migrations. We could not rely on our vendors, as they did not have this capability either, and we could not move the data as it was.

During this period of analysis we determined an appropriate architecture for a consolidated estate. The hardware had advanced enough, and the CIFS workload was light enough, that we were able to propose a four-to-one consolidation: shrinking our estate from 32 clusters to 8 clusters. There was initial apprehension about putting all of our “eggs” into one basket, but we were assured by the vendor that we had enough overhead to absorb the primary as well as the DR workload if needed.

Moving Data Non-Disruptively

Once the decision to move forward with the design was clear, the next question was how to move the data in a non-disruptive way. Native tools were not useful to us, as they did not provide the flexibility to move data at a granular level. We not only needed to consolidate at a ratio of four to one, but we also wanted to move user and group directories around. There were no tools out there to help do this. Our only option was to attempt it manually using external host-based tools. Our storage and system administrator teams were familiar with the tools at hand, so we started on our migration journey. A minimal sketch of the kind of per-directory, host-based copy involved follows below.
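To make the idea concrete, here is a minimal sketch of a host-based, per-directory migration. It is illustrative only: the mount points, the user-to-filer mapping, and the use of rsync are assumptions for the example, not the exact tooling we used, and a real migration would also need CIFS-side tooling, ACL handling, verification, and a cut-over plan.

#!/usr/bin/env python3
"""Hypothetical sketch: copy user directories from a legacy filer to
consolidated filers, one directory at a time. Assumes both the legacy
and new filers are NFS-mounted on a Linux migration host and that
rsync is installed. All paths and the mapping below are made up."""
import subprocess

# Assumed mapping: which consolidated filer each user directory lands on.
USER_TO_TARGET = {
    "alice": "/mnt/new_filer_bu1/users",
    "bob":   "/mnt/new_filer_bu2/users",
}

LEGACY_ROOT = "/mnt/legacy_filer/users"  # assumed legacy mount point

def migrate(user: str, target_root: str) -> None:
    """Copy one user's directory, preserving permissions and hard links."""
    src = f"{LEGACY_ROOT}/{user}/"   # trailing slash: copy contents of the directory
    dst = f"{target_root}/{user}/"
    subprocess.run(["rsync", "-aH", "--delete", src, dst], check=True)

if __name__ == "__main__":
    for user, target in USER_TO_TARGET.items():
        migrate(user, target)

In practice a first pass like this would be run while users were online, followed by a short outage for a final incremental sync and cut-over of shares and mounts.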

At that time this was not an easy undertaking; several of the businesses and users suffered multiple outages as we moved a combination of user, group, and profile data from our legacy estate to our new environment. It took us several months to complete the endeavor. At the end of it, we not only consolidated our NAS heads at a ratio of four to one, but we were also able to give back about eight floor tiles of data center space. This was a significant achievement, as it allowed us to expand even further.

As a result of the successful transition, the company’s users are now better aligned with the business. In fact, we were able to separate user and group data and give profiles a dedicated place for greater manageability. We also put a plan in place for how each environment would scale. Since there were multiple pools, we expanded by adding capacity where it was needed and leveraged some of the space we had given back for future growth.

In summary, our objective of consolidating and restructuring kept our environment from turning into unmanageable NAS sprawl. It gave us a defined growth plan and kept our users happy. The challenges we faced at that time exist today, and even more so, as data growth has become exponential. Unstructured user data continues to be one of the fastest-growing contributors to the digital explosion. Technology refreshes provide a great opportunity to restructure, optimize, and ensure right-sizing and correct placement of storage based on internal policies and requirements. During these processes, consider leveraging good third-party software that will help make the transformation and migration easier and minimally disruptive.

About the Author:

Shehan Akmeemana is CTO of Data Dynamics, Inc. Prior to Data Dynamics, he spent 13 years in the financial industry, where he was responsible for storage design and architecture, execution of migrations, bespoke storage projects, and data center builds.
