NetWorker Blog

Commentary from a long term NetWorker consultant and Backup Theorist

  • This blog has moved!

    This blog has now moved to nsrd.info/blog. Please jump across to the new site for the latest articles (and all old archived articles).

  • Enterprise Systems Backup and Recovery

    If you find this blog interesting, and either have an interest in or work in data protection/backup and recovery environments, you should check out my book, Enterprise Systems Backup and Recovery: A Corporate Insurance Policy. Designed for system administrators and managers alike, it focuses on the features, policies, procedures and human element of ensuring your company has a suitable, working backup system, rather than just a bunch of copies made by unrelated software, hardware and processes.

Archive for the ‘Policies’ Category

Media, CapEx and OpEx

Posted by Preston on 2009-07-02

A common mistake, particularly when planning new system implementations, is trying to calculate media costs across the entire planned growth period. That is, when a new backup system is going to be installed, accounting/management types will want to budget the full tape requirements for the projected growth period of the system (e.g., 3 years) from the outset.

The flaw in this approach is that it accounts for media as capital, rather than operational, expenditure. This often results in false economies elsewhere in the system budget – software and hardware needed now are cut from the budget to make way for media that won’t be needed until later.

The CapEx approach is at its most flawed when the system being put in place uses the “latest and greatest” media type. Let’s assume for the moment that LTO-5 has just been released, and a system with 4 x LTO-5 drives is installed, with planned capacity requirements for the next 3 years suggesting that 4,000 units of media will be required.

At just-released prices, however, that media will be prohibitively expensive. Assume, if you will, that the RRP for each LTO-5 tape is around $180 AU. Even with a bulk purchase discount bringing the price down to, say, $100 AU per unit, that’s $400,000 of media if it’s all purchased at the outset.

However, in the backup industry we know that media gets progressively cheaper the longer it has been on the market. Just look at the LTO series: in Australia, each new format came out at around $180 RRP per cartridge, yet a simple search now shows I could pick up LTO-4 media, on RRP alone, for as little as $85 AU. (That’s from just one search, and for individual unit pricing.)

So, returning to our not-yet-released LTO-5: assuming 4,000 units of media will be required across 3 years, treating media as operational expenditure works out cheaper than capital expenditure, and the media is only purchased on an “as needs” (or “near as needs”) basis, ensuring it doesn’t sit on a shelf for lengthy periods before use.

Let’s say media is purchased every 6 months for such a system, with an equal amount purchased each time. So, every 6 months or so, one would need to order another 667-odd units of media; let’s round that up to 670 so we’re talking about packs of ten. We’ll assume an 11% decrease in the cost of the media every 6 months.

We’ll also assume that any bulk order (say, 300 units of media or more) will result in a 40% discount from the RRP. Let’s run a simple numbers game here then:

  • First purchase – 670 @ RRP $180.00, discounted to $108.00 per unit: $72,360
  • Second purchase – 670 @ RRP $160.20, discounted to $96.12 per unit: $64,400
  • Third purchase – 670 @ RRP $142.58, discounted to $85.55 per unit: $57,316
  • Fourth purchase – 670 @ RRP $126.89, discounted to $76.14 per unit: $51,012
  • Fifth purchase – 670 @ RRP $112.94, discounted to $67.76 per unit: $45,400
  • Sixth purchase – 670 @ RRP $100.51, discounted to $60.31 per unit: $40,406

That’s a total media OpEx budget over 3 years of just under $331,000, as opposed to $400,000 CapEx at the commencement of an implementation.
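
If you want to test how sensitive this result is to the assumptions – purchase size, the rate at which RRP declines, the bulk discount – here’s a minimal Python sketch of the schedule above. The 11% six-monthly decline and 40% bulk discount are this article’s working assumptions, not vendor figures.

```python
# A sketch of the media purchase schedule above. The 670-unit purchases,
# 11% six-monthly RRP decline and 40% bulk discount are assumptions.
UNITS_PER_PURCHASE = 670
INITIAL_RRP = 180.00     # $AU per cartridge at release
RRP_DECLINE = 0.11       # RRP drops 11% every 6 months
BULK_DISCOUNT = 0.40     # 40% off RRP for bulk (300+ unit) orders

total = 0.0
rrp = INITIAL_RRP
for purchase in range(1, 7):   # six purchases across three years
    unit_price = rrp * (1 - BULK_DISCOUNT)
    cost = UNITS_PER_PURCHASE * unit_price
    total += cost
    print(f"Purchase {purchase}: RRP ${rrp:6.2f}, "
          f"discounted ${unit_price:6.2f}, cost ${cost:,.0f}")
    rrp *= (1 - RRP_DECLINE)

print(f"OpEx total over 3 years: ${total:,.0f}")              # ~$330,895
print(f"CapEx alternative (4,000 @ $100): ${4000 * 100:,}")   # $400,000
```

Halving the decline rate shrinks the OpEx advantage accordingly – exactly the kind of check worth doing before committing to either model.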

What’s more, because the media purchases are spread out over the three years, there’s no need to find $400,000 – or even $331,000 – up front, which would seriously dent other budget activities. The most that would be required in any one financial year under OpEx is in the first year, at a much lower $136,760.

Further, because backup should logically and operationally draw its budget from the entire company, this doesn’t have to be OpEx out of the IT budget alone: it can be OpEx shared across all departmental budgets or, if there’s a corporate overheads/OpEx budget, drawn from that instead.

Contrary to popular belief, media purchases don’t have to be a nightmare, or exorbitantly expensive.


Posted in Backup theory, Policies | Comments Off on Media, CapEx and OpEx

Backup is insurance

Posted by Preston on 2009-06-28

Over at The Daily WTF, there’s a story at the moment about a company that went out of business due to a developer deleting the company database for which there were no backups. Lamentably, this is still a common story. Oh, in many cases backups may actually be taken, but it’s still the case that we see situations such as:

  • Backups are never taken off-site,

or

  • Backups are never even taken out of a tape drive (i.e., constantly overwritten),

or

  • Backups are never checked.

My book is titled Enterprise Systems Backup and Recovery: A Corporate Insurance Policy – that’s how strongly I consider backup to be insurance. It’s the level of insurance necessary for any business to survive a disaster.

Failing to treat backup as insurance is, unfortunately, still common. The ever obvious-stating Gartner is frequently quoted as saying that one in three companies hit by a disaster will be unprepared and lose critical data.

I’d like to hope that within my career we’ll see that percentage shrink considerably – one in three is an unacceptably high number. One in a hundred might be more acceptable, but realistically, one in twenty would be a good number to start aiming for.

How do we aim for such an improvement? It’s remarkably simple, and comes from a few basic rules:

  • Backup is insurance; it’s not an IT process.
  • Backup requires buy-in from all aspects of a company.
  • Backup budget is sourced from the entire company, not the IT budget.
  • Company policies should prohibit deployment of new systems without a backup/recovery policy.

A good backup system comprises no more than 50% IT infrastructure and operations. The rest stems from policies, procedures, planning and awareness. Paraphrasing what I state in the introduction to my book, having backup software does not mean you have a backup system.

Posted in Backup theory, Policies | Comments Off on Backup is insurance

Thinking green? Think tape

Posted by Preston on 2009-06-27

Many companies are now becoming increasingly aware of the importance of either achieving carbon neutrality, or at least being as green as possible.

If your company is trying to think green, then let me ask you this. For long term backup storage, which of the following two is likely to be more energy efficient?

  • Writing backups to tape which is then stored in a temperature controlled room,

or

  • Writing backups to disk arrays which are kept in temperature controlled rooms and permanently running.

Much has been said of late about deduplication this or deduplication that, and I’ll agree – deduplication is a valid and important emergent technology in the field of backup and recovery. But it’s not a silver bullet, regardless of how many disk storage vendors want it to be. The problem is that many of the deduplication products currently touted are ineffectual at high speed “tape-out” operations, and thus rely on keeping backups online on disk – with replicas maintained at another location. That’s a whole lot of spinning disk.

The simple fact of the matter is that not only is offline tape safer than spinning disk drives, it’s also considerably more power efficient.
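
To put rough numbers on “considerably more power efficient”, here’s a back-of-the-envelope sketch. Every figure in it – retained capacity, drive size, wattage, replica count – is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope energy comparison for long-term retention: always-on
# disk vs tape on a shelf. All figures are illustrative assumptions.
RETAINED_TB = 100        # long-term backup data to retain
TB_PER_DISK = 1.0        # assumed capacity per disk drive
WATTS_PER_DISK = 10.0    # assumed average draw per spinning drive
REPLICA_COPIES = 2       # deduped disk usually implies a replica site
HOURS_PER_YEAR = 24 * 365

disks = (RETAINED_TB / TB_PER_DISK) * REPLICA_COPIES
disk_kwh = disks * WATTS_PER_DISK * HOURS_PER_YEAR / 1000
print(f"Disk: {disks:.0f} spinning drives, ~{disk_kwh:,.0f} kWh/year, plus cooling")

# Tape on a shelf draws no power of its own; the only ongoing energy cost
# is the temperature-controlled room, shared with everything else in it.
print("Tape: ~0 kWh/year for the media itself")
```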

I want to be clear here – I’m not arguing that all backup should go exclusively to tape. There’s a line to be walked between green and practical, and for many companies the more frequently accessed backups will still need to be on some form of disk initially.

Long term backups, archives and offsite copies, however, are all forms of backup that should be on green, safe technology – and that’s tape.

If you want to think green in your datacentre, think tape.

Posted in Backup theory, NetWorker, Policies | 3 Comments »

Your datazone is only as secure as your NetWorker server

Posted by Preston on 2009-06-26

A topic I discuss in my book that’s worth touching on here is that of datazone security.

Backup is one of those enterprise components that touches a vast amount of infrastructure – so much so that it’s usually one of the broadest-reaching pieces of software in an environment. As such, the temptation is always there to make it “as easy as possible” to configure. Unfortunately, this sometimes leads to making it too easy to configure. By too easy, I mean insecure.

Regardless of the “hassle” it creates, a backup server must be highly secured. To be even blunter: the security of everything backed up by your backup server depends on the security of your backup server. An insecure NetWorker server is like handing over the keys to your datacentre with the administrator/root password for every server stuck to each machine.

Thinking of it that way, do you really want the administrator list on your backup server to include say, any of the following?

  • *@*
  • *@<host>
  • <user>*@

If your answer is yes, then you’re wrong*.

However, datazone security isn’t only about the administrator list (though that forms an important part). At bare minimum, your datazone should meet the following security requirements (a small audit sketch follows the list):

  1. No wild-cards shall be permitted in administrator user list definitions (server, NMC).
  2. No client shall have an empty servers file (client).
  3. No wild-cards shall be permitted in remote access user list definitions (client resources).
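
As an illustration of rule 1, here’s a minimal audit sketch that flags wildcard entries. The entries below are hypothetical examples; in practice you’d be checking the administrator attribute of the server’s NSR resource (and NMC’s equivalent).

```python
# Flag wildcard entries in a (hypothetical) administrator list.
administrators = [
    "user=root,host=backupserver.example.com",  # fine: fully qualified
    "*@*",                                      # anyone, from anywhere
    "user=admin,host=*",                        # a named user, but any host
]

for entry in administrators:
    status = "INSECURE" if "*" in entry else "ok"
    print(f"{status:8} {entry}")
```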

Note: With the advent of lockboxes in version 7.5, security options increase – it’s possible, for instance, to have passwords for application modules stored in such a way that only the application module for the designated host can retrieve the password.


* I do make allowance for some extreme recovery issues that have required users to temporarily enter wild-card administrators where it was not possible to wait for a bug fix.

Posted in NetWorker, Policies, Security | Comments Off on Your datazone is only as secure as your NetWorker server

Backups are not about being miserly

Posted by Preston on 2009-05-18

Recently Australia’s largest grocery chain followed some of the other chains and started offering unit pricing on its products. For example, packaged food now shows not only its actual price but also the price per 100g. That way, you can look at, say, two blocks of cheese and work out which one is the better value, even if one is larger than the other.

This has reminded me of how miserly some companies can be with backup. While it’s something I cover in my book, it’s also something that’s worth explaining in a bit of detail.

To set this up, I want to use LTO-4 media as the backup destination, and look at one of those areas frequently skipped from backups by miserly companies looking to save a buck here and there: the operating system. Too often we see backup configurations that back up the data areas on servers but leave the operating system unprotected because “that can be rebuilt”. That argument is a penny-wise/pound-foolish approach that fails to take into account the purpose of backup – recovery.

Sure, useless backups are a waste of money. For instance, if you back up an Oracle database using the NetWorker module, but also let filesystem backups pick up the datafiles from the running database, then you’re not only backing up the database twice – the filesystem copy is useless, because it can’t be recovered from.

However, are operating system backups a waste of money or time? My argument is that except in circumstances where they’re architecturally illogical or unnecessary, they’re neither. Let’s look at why…

At the time of writing, a casual search of “LTO-4 site:.au best price” in Google yields within the first 10 results LTO-4 media as low as $80 for RRP. That’s RRP, which often has little correlation with bulk purchases, but miserly companies don’t make bulk media purchases, so we’ll work off that pricing.

Now, LTO-4 media has a native capacity of 800 GB. Rather than go fuzzy with any numbers, we’ll assume native capacity only for this example. So, at $80 for 800 GB, we’re talking about $0.10 per GB – 10c per GB.

So, our $80/800GB cartridge has a “unit cost” of 10c/GB, which sounds pretty cheap. However, that’s probably not entirely accurate. Let’s say we’ve got a busy site, and in order to facilitate backups of operating systems as well as all the other data, we need another LTO-4 tape drive. Again, looking at list prices for standalone drives (“LTO-4 drive best price site:.au”), I see prices starting around the $4,500 to $5,000 mark. We should expect the average drive (with warranty) to last for at least 3 years, so that’s $5,000 for 1,095 days, or $4.57 per day of usage. Let’s round that to $5 per day to account for electricity usage.

So, we’re talking about 10c per GB plus $5 per day. Let’s even round that up to $6 per day to account for staff time in dealing with any additional load caused by operational management of operating system backups.

I’ll go on the basis that the average operating system install is about 1.5GB, which means we’re talking about 15c to back that up as a base rate, plus our daily charge ($6). If you had say, 100 servers, that’s 150GB for full backups, or $15.00 for the full backups plus another $6 on that day. Operating system incremental backups tend to be quite small – let’s say a delta of 20% to be really generous. Over the course of a week then we have:

  • Full: 150GB at $15 + $6 for daily usage.
  • Incremental: 6 x (30GB at $3 + $6 for daily usage).

In total, I make that out to be $75 a week, or $3,900 a year, for operating system backups to be folded into your current data backups. Does that seem like a lot of money? Think of this: if you’re not backing up operating system data, you’re usually working on the basis that “if it breaks, we’ll rebuild the server”.

I’d suggest that in most instances your staff will spend at least 4 hours trying to fix the average problem before the business decision is made to rebuild a server. Even with, say, fast provisioning, we’re probably looking at another hour for a full server reinstall/reprovision, current revision patching, etc. So that’s 5 hours of labour. Assuming a fairly low pay rate for Australian system administrators of $25 per hour, a 5 hour attempted fix + rebuild will cost you $125 in labour. Or will it? Servers are servers because they typically provide access or services to more than one person. Let’s assume 50 staff are also unable to work effectively while this is going on, and their average salary is as low as $20 per hour. That’s $5,000 of their labour – or, being fairer and assuming they’re only 50% affected, $2,500 of wasted labour.

How many server rebuilds does it take a year for operating system backups to suddenly be not only cost-effective but also a logically sound business decision? Even when we factor in say, an hour of effort for problem diagnosis plus recovery when actually backing up the operating system regions, there’s still a significant difference in price.
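
Running those figures through a minimal sketch makes the break-even point explicit; every input below is one of the assumptions stated above.

```python
# Break-even: annual OS backup cost vs the cost of a single server rebuild.
weekly_media = 150 * 0.10 + 6 * (30 * 0.10)  # one full + six incrementals, 10c/GB
weekly_overhead = 7 * 6.0                    # $6/day drive + staff overhead
annual_backup_cost = 52 * (weekly_media + weekly_overhead)   # $3,900

sysadmin_labour = 5 * 25.0          # 4h diagnosis + 1h rebuild at $25/hour
user_impact = 50 * 20.0 * 5 * 0.5   # 50 users, $20/hour, 5 hours, 50% affected
rebuild_cost = sysadmin_labour + user_impact                 # $2,625

print(f"Annual OS backup cost:    ${annual_backup_cost:,.2f}")
print(f"Cost of one rebuild:      ${rebuild_cost:,.2f}")
print(f"Break-even rebuilds/year: {annual_backup_cost / rebuild_cost:.1f}")
```

On these assumptions, roughly one and a half rebuilds a year is all it takes for operating system backups to pay for themselves.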

Now, I’m not saying that every company that chooses not to back up operating system data is being miserly, but I will confidently assert that most companies who choose not to back up their operating system data are. To be more accurate, I’d suggest that if the sole rationale for not doing such backups is “to save money” (rather than “from an architectural standpoint it is unnecessary”), then it’s likely the company is wasting money, not saving it.

Posted in Backup theory, Policies | 4 Comments »

Keep your logs

Posted by Preston on 2009-05-15

Something I mention in my book, but which is worth elaborating on further, is the need to keep backups of your backup server for as long as your longest-retained backups – if not longer. One of the primary reasons for this, of course, is the indices: recovering older indices is traditionally far easier than the laborious alternative of scanning in a potential multitude of media.

There is, however, another equally important reason why your backup server should have at least the longest browse/retention time of any client in your site – the logs.

Being able to recover your backup server’s logs (i.e., /nsr/logs/daemon*, /nsr/logs/messages, etc.) is like having your own personal time machine for the backup system. This becomes important when you hit recovery situations you just can’t explain. An error you’re getting now, when trying to recover files backed up 2 years ago, may make no sense at all; however, if you can recover the backup server logs from that period, they may very well fill in the missing information for you. The most common thing I find this helps with is identifying whether what you’re trying to recover was ever actually backed up in the first place. The scenario runs something along the lines of the following (a small log-search sketch follows the list):

  • User asks for file from arbitrary date – e.g., 29 May 2006.
  • Can’t browse to 29 May 2006, but can browse to 28 May 2006 and 30 May 2006.
  • Recover backup server logs from 30 May 2006 to see that the client could not be contacted for backup on that day.
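
Once the relevant daemon log has been recovered, even a crude search answers the question. Here’s a sketch – the client name and log file name are hypothetical, and log message formats vary across NetWorker versions, so adjust the pattern to suit:

```python
# Search a recovered daemon log for signs the client failed to back up.
import re

CLIENT = "mars.example.com"       # hypothetical client name
LOGFILE = "daemon.log.recovered"  # hypothetical recovered log file

with open(LOGFILE, errors="replace") as log:
    for line in log:
        if CLIENT in line and re.search(r"fail|unreach|timed?.?out", line, re.I):
            print(line.rstrip())
```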

Now, some would argue that not being able to recover is the real problem. That isn’t always the case: sometimes, due to circumstances beyond your control, you literally can’t recover – such as the situation above, where there was a failure to back up in the first place. In situations like this, being unable to explain why the recovery can’t be facilitated is just as bad as being unable to recover.

Posted in NetWorker, Policies | 1 Comment »

When should you clean your drives?

Posted by Preston on 2009-05-14

A recent discussion on the NetWorker Mailing List about configuring cleaning cartridges reminded me that it would be worthwhile to quickly cover off the oft-asked questions:

  • How frequently should I clean my drives?
  • Should I have NetWorker, or the tape library, clean my drives?

There are two schools of thought when it comes to the frequency of the cleaning. The first is to only have the drive(s) cleaned when they request cleaning. The second is to clean religiously, every X weeks, regardless of whether they request cleaning or not.

There are pros and cons to both techniques.

When it comes to cleaning only when necessary, one of the primary arguments for this technique is that cleaning is essentially an abrasive action – running a cleaning cartridge through a drive rubs the drive heads clean. This introduces some physical wear, however trivial, which may over time affect the longevity of the drive. Therefore, one can extract maximum life out of one’s tape drives by cleaning only when requested.

The second technique, that being to clean every X weeks, regardless of whether the drives request cleaning or not, is premised on the notion that it reduces any build-up of dust and particulates on the drives, thus reducing the chances of the drive compromising the longevity of the tape.

So, which should you choose? Well, that probably depends on how clean your environment is. If your drives are in a well protected, isolated environment that has excellent dust filtering, you may very well find that using drive-initiated cleaning is the way to go. However, if your environment isn’t so clean, then forcing periodic cleaning may be more appropriate.

These days, given the advances in tape drive technology and typical replacement timeframes, I suspect the “abrasive action” argument holds less force than it used to. Ultimately, if your primary goal is to ensure healthy backups, then periodically running a cleaning cartridge through the drives – reducing the chance of a backup or recovery failing, or requiring a restart, because cleaning was needed – may be the smart thing to do.

Next, we must move on to whether NetWorker should clean the drives, or whether the tape library should do so.

In versions of NetWorker 7.3.x and lower, I always advocated that NetWorker manage the cleaning. NetWorker in such versions had a tendency to not react well to any situation where it went to use a drive only to find it was already occupied, even if that was with a cleaning cartridge.

However, with 7.4.x and higher, I have noticed NetWorker is significantly more capable of detecting that a drive is being cleaned and not treating it as an error; instead it simply chooses to retry the operation.

Thus, these days I’d suggest that the decision as to whether NetWorker or the library controls drive cleaning is entirely a personal one, in the same way that choosing to wear black or blue socks is a personal one. My personal preference is that if the library/drives I’m using supports TapeAlert, and I’m using NetWorker 7.4.x or higher, I’ll now enable library controlled cleaning. With older libraries/drives or NetWorker 7.3.x/lower, I’ll have NetWorker manage the cleaning.

Posted in NetWorker, Policies | 3 Comments »

Basics – Adding new clients in 7.4.4 and higher

Posted by Preston on 2009-05-13

One of the policy changes made in NetWorker 7.4.4 (and which applies to 7.5.x as well) is that of client parallelism when it comes to new clients.

I have to say – and I’ll be blunt here – I find the policy change quite inappropriate.

In a post-7.4.4 world, NetWorker defaults to giving new clients a parallelism of 12. I’d always thought the previous default of 4 was too high for a modern environment; you can imagine, then, what I thought when I found the new default was 12.

There’s a good reason why I find this inappropriate. In fact, it’s implicitly covered in my book by the sheer number of pages I devote to discussing how to plan client parallelism settings. In short, client parallelism settings are typically not something that you should set blindly. Unless you already have very clear ideas of filesystem/LUN layout, processing capabilities, bandwidth, etc., on a client, in my opinion you must start with a parallelism of 1 and work your way up as a result of clear and considered performance testing.

Given the amount of effort that’s been put into the latest NetWorker releases for VMware integration – i.e., the Virtual Client Connection license, etc. – it seems a less than logical choice to increase the default parallelism setting rather than decrease it, when you know that over time the number of virtualised hosts being backed up is going to increase.

This is obviously just a small inconvenience, but if you’ve not picked up on this yet, you should be aware of it when you start working with these newer versions of NetWorker.

What the real solution is

For what it’s worth, I actually don’t think the solution is to change the default client parallelism setting to 1, but to start maintaining a “defaults” component within the NetWorker server resource where local administrators can configure default settings for a plethora of new resources to be created (most typically clients, groups and pools).

For example, you might have options where you can specify the following defaults for any new client:

  • Parallelism
  • Priority*
  • Group
  • Schedule
  • Browse Policy
  • Retention Policy
  • Remote Access
  • etc.

These all have their own defaults, but it’s time to move past the point where NetWorker suggests standard defaults, and have all these default settings modifiable by the administrator. I realise that when the server bootstraps itself, it still needs to fall back on standard defaults, and that’s fine. However, once the server is up and running, being able to modify these defaults would be a Time Saving Feature.

This would reduce the amount of work administrators have to do when creating new resources – let’s face it, most of us spend most of our resource-creation time changing the “default” settings. It would also reduce the number of human errors introduced when adding to the configuration in a hurry. This sort of “defaults” component would preferably run as a wizard in NMC on first install, with administrators asked whether they want to re-run it upon updates.
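
To make the proposal concrete, here’s a hypothetical sketch of what such a defaults component might look like – none of this is existing NetWorker functionality, just what the feature could resemble.

```python
# Hypothetical site-wide defaults applied to every newly created client.
SITE_DEFAULTS = {
    "parallelism": 1,         # start low; tune up via performance testing
    "group": "Default",
    "schedule": "Standard",
    "browse_policy": "Month",
    "retention_policy": "Year",
}

def new_client(name, **overrides):
    """Build a client resource from site defaults plus explicit overrides."""
    return {"name": name, **SITE_DEFAULTS, **overrides}

print(new_client("mars.example.com"))
print(new_client("venus.example.com", parallelism=4, group="Databases"))
```

The point is simply that the site, not the product, decides what a new client looks like.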


* Adding priority to this might suggest a need to have the priority field work better than it has of late…

Posted in Aside, Basics, General thoughts, NetWorker, Policies | 2 Comments »

Meet your library

Posted by Preston on 2009-04-26

For larger sites in particular, we frequently end up in situations where backup or system administrators are sufficiently remote from the datacentre that they rarely interact with servers “face to face”. As remote management features continue to advance, allowing interaction even with powered-down servers and devices, this will only increase.

This remoteness can create unrealistic expectations of operational performance, particularly when the chips are down and something (e.g., a recovery) needs to be done urgently.

So there’s something very important you should do with your tape libraries – you should meet them. By meeting them, I mean the following:

  • Sit in front of them, with a laptop or console.
  • Make sure you can hear the library in operation.
  • Run at least the following commands (a timing sketch follows this list):
    • Load;
    • Unload;
    • Relabel;
    • Inventory;
    • Import;
    • Device clean;
    • Export.
  • If possible, also do the following:
    • Monitor how long it takes media to rewind and become available for eject once EOM is reached;
    • Generate a SCSI bus reset while media is being read from, and observe how long it takes the library to recover;
    • Generate a SCSI bus reset while media is being written to and observe how long it takes the library to recover.
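
To capture these timings in a repeatable way, a wrapper as simple as the following will do. The nsrjb invocations are indicative only – verify the flags against the man page for your NetWorker version before relying on them.

```python
# Time each library operation to establish baseline figures before an emergency.
import subprocess
import time

OPERATIONS = {
    "inventory":   ["nsrjb", "-I"],
    "load slot 1": ["nsrjb", "-l", "-S", "1"],
    "unload":      ["nsrjb", "-u"],
}

for name, cmd in OPERATIONS.items():
    start = time.time()
    subprocess.run(cmd, check=True)
    print(f"{name}: {time.time() - start:.0f} seconds")  # note these down
```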

Knowing how long these operations take to complete fulfills two important (and overlapping) functions:

  1. You now have a timeframe for common activities to rely on when you’re otherwise stressed;
  2. You’re less likely to panic and intervene because something seems to be taking too long, when in actual fact you just don’t normally note how long an operation takes.

This is pretty important – I’ve seen important recoveries go from, say, stressful to full-blown panic when someone intervenes excessively on a tape library instead of giving it appropriate time to “recover” from errors or interrupts.

Meeting brings understanding, understanding brings patience, patience brings success.

Posted in General thoughts, Policies | Comments Off on Meet your library

Zeroth tier: the backup director

Posted by Preston on 2009-04-02

(OK, I just made that term up; there is, within the NetWorker framework, no reference to a “zeroth” tier. That doesn’t preclude me from using the term, though.)

The classic 3-tier architecture of NetWorker is:

  • Backup Server
  • 1 or more storage nodes (1 of which is the backup server)
  • Clients

In a standard environment, as it grows, you typically see a situation where clients are hived off to storage nodes, such that the backup server handles only a portion of the backups, with the remainder going to storage nodes.

One thing that’s not always considered is what I’d call the ability to configure the NetWorker server in a zeroth tier; that is, acting only as backup director, and not responsible for the data storage or retrieval of any client.

Is this a new tier? Technically no, but it’s a configuration that a lot of companies, even bigger ones, seem reluctant to engage in. For the most part this seems due to the perception that by elevating the backup server to a directorial role only, the machine is ‘wasted’ or the solution is ‘costly’. Unfortunately, this means many organisations that could really benefit from a backup server in this zeroth tier continue to limp along with solutions that suffer random, sporadic failures that cannot be accounted for, or that require periodic restarts of services just to “reset” everything.

Now, the backup server still has to have at least one backup device attached to it – the design of NetWorker requires the server itself to write out its media database and resource database. There’s a good reason for this, in fact – if you allow such bootstrap critical data to be written solely to a remote device (i.e., a storage node device), you create too many dependencies and setup tasks in a disaster recovery scenario.

However, if you’re at the point where you need a NetWorker server in the zeroth tier, you should be able to find the budget to allocate at least one device to the NetWorker server. (E.g., a bit of dynamic drive sharing, or dedicated VTL drives, etc., would be one option.) Preferably of course that would be two devices so that cloning could be handled device<->device, rather than across the network to a storage node, but I don’t want to focus too much on the device requirements of a directorial backup server.

There’s actually a surprising amount of work that goes into just directing a backup. This covers such activities as:

  • Checking to see what other backups at any point need to be run (e.g., multiple groups)
  • Enumerating what clients need to be backed up in any group
  • Communicating with each client
  • Receiving index data from each client
  • Coordinating device access
  • Updating media records
  • Updating jobs records
  • Updating configuration database records
  • etc.

In the grand scheme of things, where you don’t have “a lot” of clients, this doesn’t represent a substantial overhead. What we have to consider, though, is that there are two different types of communication going on – data and meta-data. Everything in the above list is meta-data related; none of it is the backup data itself.

So add to the above list the data streams that have one purpose in a normal backup environment – to saturate network links to maximise throughput to backup devices.

Evaluating these two types of communication – meta-data streams and data streams – leads to one very obvious conclusion: they aren’t mutually satisfiable. The data stream will, by necessity, be as greedy with bandwidth as it can be, while the meta-data stream must get the bandwidth it requires, or failures start to happen.

So, as an environment grows (or as NetWorker is deployed into a very large environment), the solution is equally logical: if it gets to the point where the backup server can’t sustain both meta-data bandwidth and regular data bandwidth, there’s only one communications stream that can be cut from its workload – the data stream.

I’m not suggesting that every NetWorker datazone needs to be configured this way; many small datazones operate perfectly with no storage nodes at all (other than the backup server itself); others operate perfectly well with one or more storage nodes deployed and the backup server operating as a storage node. However, if the environment grows to the point where the backup server can be kept fully occupied by directing the backups, then cut the cord and let it be the director.

Posted in NetWorker, Policies | Comments Off on Zeroth tier: the backup director