NetWorker Blog

Commentary from a long-term NetWorker consultant and Backup Theorist

  • This blog has moved!

    This blog has now moved to nsrd.info/blog. Please jump across to the new site for the latest articles (and all old archived articles).

Aside – That old spindle problem

Posted by Preston on 2009-07-20

From my days as a system administrator, back when 2GB hard drives were the norm, I remember an Oracle system that needed only about 4GB of storage space, yet our Oracle-savvy system administrator configured an environment with around 30 x 2GB drives.

That was my first introduction to spindles and their relationship to performance.

As the years have gone by and drive capacities have increased, the spindle problem appeared to go away only briefly – not because of capacity, but because of increasing rotational speeds and other improved performance characteristics of drives.
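To put rough numbers on why rotational speed matters so much, here’s a back-of-envelope sketch in Python; the seek times and RPM figures are illustrative assumptions, not measurements of any particular drive:

    # Rough per-spindle random IOPS: each random IO costs roughly one
    # average seek plus half a revolution of rotational latency.
    def random_iops(rpm, avg_seek_ms):
        rotational_latency_ms = 0.5 / (rpm / 60.0) * 1000.0  # half a spin
        return 1000.0 / (avg_seek_ms + rotational_latency_ms)

    print(round(random_iops(7200, 8.5)))   # ~79 random IOPS per spindle
    print(round(random_iops(15000, 3.5)))  # ~182 random IOPS per spindle

Whatever the capacity of the platters, a single spindle still tops out at a couple of hundred random IOPS: capacity keeps growing, but per-spindle performance barely moves.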

However, perhaps even more than high-performance databases, virtualisation is forcing system administrators to become reacquainted with spindle configuration issues; multi-core systems supporting dozens of virtualised servers create IO problems even in relatively small environments.

If you’re interested in reading more about spindle issues, you may want to check out this article on Search Storage – Get More IOPS per dollar with SSD, 2.5″ Drives. Regardless of the particular vendors discussed, it’s a good overview of the spindle problem, and a useful introduction if you’re struggling with virtualised (or database) IO performance and weren’t previously aware of spindle issues. (As practically all storage vendors are moving into high performance options involving SSDs and/or massive numbers of 2.5″ drives, the article is relevant regardless of your preferred storage platform.)

(If you’re looking for a backup angle to this posting, consider the following: as you virtualise, or deploy applications and systems with ever higher IOPS demands, you affect not only primary production operations, but also protection, maintenance and management functions, including (but not limited to) backups. Performance issues that exist but are not yet plaguing production operations sometimes manifest first in backup situations, where there are high-intensity, prolonged sequential reads.)
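To make the sizing side of this concrete, here’s a minimal sketch; the VM count and the per-VM and per-spindle IOPS figures are assumptions for illustration only:

    import math

    # Spindle count is driven by aggregate random IOPS demand, not by
    # capacity. All of the figures below are illustrative assumptions.
    vms = 40                 # virtualised servers on one array
    iops_per_vm = 60         # average random IOPS per VM
    iops_per_spindle = 180   # e.g. a 15K RPM drive (see earlier sketch)

    demand = vms * iops_per_vm
    spindles = math.ceil(demand / iops_per_spindle)
    print(f"{demand} IOPS needs ~{spindles} spindles")  # 2400 IOPS -> ~14

Those 14 spindles are needed for performance even if the VMs’ data would fit on two of them – which is exactly the 30 x 2GB Oracle configuration all over again.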


4 Responses to “Aside – That old spindle problem”

  1. Don said

    One thing to take into consideration for a NetWorker environment.

    While everyone talks about the great READ performance, no one ever mentions much about the almost 50% lower WRITE performance of an SSD vs a hard drive.

    And I would think that in the NetWorker world write performance is king, since most of the IOs are writes… not just the backup data, but all the index updates are writes, writes, writes.

    So I would not be surprised if, in a NetWorker use case, replacing your disks with SSDs got you WORSE performance.

    • Preston said

      Indeed, SSDs are not (yet) the magic silver bullet that some would have us believe. However, regardless of whether you find SSD write performance great, good, mediocre or terrible, the article remains relevant inasmuch as it points out the IOPS problem caused by ever-increasing numbers of applications running on fewer physical systems.

  2. David Magda said

    It depends on how you use SSDs. I think ZFS’s “hybrid storage” idea is the best thing going right now:

    http://blogs.sun.com/studler/entry/zfs_and_the_hybrid_storage

    You use regular spinning rust for your bulk storage needs, and slot in SSDs transparently to improve performance. It can improve write performance with SLC SSDs:

    http://blogs.sun.com/brendan/entry/slog_screenshots

    And optimize reads with MLC SSDs:

    http://blogs.sun.com/brendan/entry/l2arc_screenshots
    http://blogs.sun.com/brendan/entry/test

    Most (all?) other file systems don’t allow for inserting a transparent shim layer.

    • Preston said

      That technique is certainly becoming more common – I remember attending a Compellent web demo a few years ago, where their arrays offered seamless, automated tier migration of data based on access patterns: appropriate RAID levels across appropriate spindle speeds, with data moved up the access chain as it was accessed more frequently, to increase responsiveness. (Data could also be forcibly assigned to particular tiers.)

      I think most storage systems – either array or OS/filesystem – are going to need to (no pun intended) migrate to this style of storage, so as to take advantage of the various hardware storage units that are available on a system or SAN.
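      As a toy illustration of that access-frequency idea (the tier names and promotion threshold are invented for the example, not Compellent’s actual algorithm):

        TIERS = ["ssd", "15k_sas", "7.2k_sata"]   # fastest to slowest

        class Block:
            def __init__(self):
                self.tier = len(TIERS) - 1   # new data starts on the slowest tier
                self.accesses = 0

            def touch(self, promote_after=100):
                # Count an access; promote one tier up once the block runs hot.
                self.accesses += 1
                if self.accesses >= promote_after and self.tier > 0:
                    self.tier -= 1      # move up the access chain
                    self.accesses = 0   # restart the count at the new tier

        b = Block()
        for _ in range(250):
            b.touch()
        print(TIERS[b.tier])   # frequently accessed data ends up on "ssd"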
