NetWorker Blog

Commentary from a long term NetWorker consultant and Backup Theorist

  • This blog has moved!

    This blog has now moved to nsrd.info/blog. Please jump across to the new site for the latest articles (and all old archived articles).
  • Enterprise Systems Backup and Recovery

If you find this blog interesting, and either have an interest in or work in data protection/backup and recovery environments, you should check out my book, Enterprise Systems Backup and Recovery: A Corporate Insurance Policy. Designed for system administrators and managers alike, it focuses on the features, policies, procedures and human element required to ensure that your company has a suitable and working backup system rather than just a bunch of copies made by unrelated software, hardware and processes.

Posts Tagged ‘Policies’

The 5 Golden Rules of Recovery

Posted by Preston on 2009-11-10

You might think, given that I wrote an article a while ago about the Procedural Obligations of Backup Administrators, that it wouldn’t be necessary to explicitly spell out any recovery rules – but this isn’t quite the case. It’s handy to have a “must follow” list of rules for recovery as well.

In their simplest form, these rules are:

  1. How
  2. Why
  3. Where
  4. When
  5. Who

Let’s look at each one in more detail:

  1. How – Know how to do a recovery before you need to do it. The worst forms of data loss typically occur when an untried backup routine is put in place on the assumption that it will work. If a new type of backup is added to an environment, it must be tested before it is relied on. In testing, it must be documented by those doing the recovery. In being documented, it must be referenced by operational procedures* (see the sketch after this list).
  2. Why – Know why you are doing a recovery. This directly affects the required resources. Are you recovering a production system, or a test system? Is it for the purposes of legal discovery, or because a database collapsed?
  3. Where – Know where you are recovering from and to. If you don’t know this, don’t do the recovery. You do not make assumptions about data locality in recovery situations. Trust me, I know from personal experience.
  4. When – Know when the recovery needs to be completed by. This isn’t always answered by the why factor – you actually need to know both in order to fully schedule and prioritise recoveries.
  5. Who – Know that whoever requested the recovery is authorised to do so. (In order to know this, there should be operational recovery procedures – forms and company policies – that indicate authorisation.)
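To make the “how” rule concrete, here’s a minimal sketch of the sort of commands a documented, tested recovery procedure might contain. The server name, client name, paths and browse time are illustrative assumptions only – your own procedure should use the values (and recovery method) proven in your testing:

# 1. Confirm what backups exist for the client in question:
mminfo -s backupserver -q "client=fileserver01" -r "savetime,level,name,sumsize,volume"

# 2. Run a non-interactive recovery of a filesystem as at a known point in
#    time, relocating the data so nothing in production is overwritten:
recover -s backupserver -c fileserver01 -t "11/09/2009 23:00:00" -d /restore/fileserver01 -iN -a /home

Even two commands like these, captured in the operational documentation with an explanation of each option, are worth far more during a crisis than a recovery method that lives only in someone’s head.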

If you know the how, why, where, when and who, you’re following the golden rules of recovery.


* Or to put it another way – documentation is useless if you don’t know it exists, or you can’t find it!

Posted in Backup theory, NetWorker, Policies | Comments Off

Zero Error Policy Management

Posted by Preston on 2009-08-25

In the first article on the subject, What is a zero error policy?, I established the three rules that need to be followed to achieve a zero error policy, viz:

  1. All errors shall be known.
  2. All errors shall be resolved.
  3. No error shall be allowed to continue to occur indefinitely.

As a result of various questions and discussions I’ve had about this, I want to expand on the zero error approach to backups to discuss management of such a policy.

Saying that you’re going to implement a zero error policy – indeed, wanting to implement a zero error policy – and actually implementing one are significantly different activities. So, in order to properly manage a zero error policy, the following three components must be developed, maintained and followed:

  1. Error classification.
  2. Procedures for dealing with errors.
  3. Documentation of the procedures and the errors.

In various cases I’ve seen companies try to implement a zero error policy by following one or two of the above, but they’ve never succeeded unless they’ve implemented all three.

Let’s look at each one individually.

Error Classification

Classification is at the heart of many activities we perform. In data storage, we classify data by its importance and its speed requirements, and assign tiers. In systems protection, we classify systems by whether they’re operational production, infrastructure support production, development, QA, test, etc. Stepping outside of IT, we routinely do things by classification – we pay bills in order of urgency, or we shop for the things we need sooner rather than the things we’re going to run out of in three months’ time, etc. Classification is not only important, but it’s also something we do (and understand the need for) naturally – i.e., it’s not hard to do.

In the most simple sense, errors for data protection systems can be broken down into three types:

  • Critical errors – If error X occurs then data loss occurs.
  • Hard errors – If error X occurs and data loss occurs, then recoverability cannot be achieved.
  • Soft errors – If error X occurs and data loss occurs, then recoverability can still be achieved, but with non-critical data recoverability uncertain.

Here’s a logical follow-up from the above classification – any backup system designed such that it can cause a critical error has been incorrectly designed. What’s an example of a critical error? Consider the following scenario:

  • Database is shut down at 22:00 for cold backups by a scheduled system task
  • Cold backup runs overnight
  • Database is automatically started at 06:00 by a scheduled system task

Now obviously our preference would be to use a backup module, but the lack of one isn’t actually the source of the critical error here: it’s the divorcing of the shutdown/startup from the actual filesystem backup. Why does this create a “critical error” situation, you may ask? On any system where exclusive file locking takes place, if for any reason the backup is still running when the database is started, corruption is likely to occur. (For example, I have seen Oracle databases on Windows destroyed by such scenarios.)

So, a critical error is one where the failure in the backup process will result in data loss. This is an unacceptable error; so, not only must we be able to classify critical errors, but all efforts must be made to ensure that no scenarios which permit critical errors are ever introduced to a system.
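One way to design that particular critical error out (short of using a backup module) is to tie the shutdown, backup and startup into a single sequence, so the database can never be restarted while the backup is still in flight. The sketch below is purely illustrative – db_shutdown and db_startup are placeholders for whatever your database actually uses, and the server, pool and path are assumptions:

#!/bin/sh
# Illustrative only: shutdown, backup and startup as one sequence.
# db_shutdown and db_startup are placeholders, not real commands.

db_shutdown || exit 1

# Back up the now-closed database files.
save -s backupserver -b Daily /u01/oradata
STATUS=$?

# Only restart the database once the save has actually finished.
db_startup

exit $STATUS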

Moving on, a hard error is one where we can quantify that if the error occurs and we subsequently have data loss (recovery required), then we will not be able to facilitate that recovery within our preferred (or required) windows. So if a client completely fails to back up overnight, or one filesystem on the client fails, then we would consider that to be a hard error – the backup did not work, and thus if there is a failure on that client we cannot use that backup to recover.

A soft error, on the other hand, is an error that will not prevent core recovery from happening. These are the most difficult to classify. Using NetWorker as an example, you could say that these will often be the warnings issued during the backups where the backup still manages to complete. Perhaps the most common example of this is files being open (and thus inaccessible) during backup. However, we can’t (via a blanket rule) assume that any warning is a soft error – it could be a hard error in disguise.

To borrow an analogy from programming languages: a syntax error is immediately obvious and causes an immediate failure, whereas a semantic error – one where the meaning, rather than the form, is wrong – is not obvious and usually surfaces later as a bug.

Taking that analogy back to soft vs hard errors, and using our file-open example, you can readily imagine a scenario where files open during backup could constitute a hard or a soft error. In the case of a soft error, it may refer to temporary files that are generated by a busy system during backup processing. Such temporary files may have no relevance to the operational state of a recovered system, and thus the recoverability of the temporary files does not affect the recoverability* of the system as a whole. On the other hand, if critical data files are missed due to being open at the time of the backup, then the recoverability of the system as a whole is compromised.

So, to achieve a zero error policy, we must be able to:

  1. Classify critical errors, and ensure situations that can lead to them are designed out of the solution.
  2. Classify hard errors.
  3. Classify soft errors and be able to differentiate them from hard errors.

One (obvious) net result of this is that you must always check your backup results. No ifs, no buts, no maybes. For those who want to automatically parse backup results, as mentioned in the first article, it also means you must configure the automatic parser such that any unknown result is treated as an error for examination and either action or rule updating.
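As a sketch of that principle, an automated parser might look something like the following. The message patterns and report location are illustrative assumptions rather than a definitive list of NetWorker messages – the important part is the final catch-all, which treats anything unrecognised as an error to be examined:

#!/bin/sh
# Sketch of a zero-error-policy results parser. Patterns and the report
# location are assumptions; unknown output is always treated as an error.

REPORT=/nsr/logs/savegrp_completion.txt    # hypothetical report file

while IFS= read -r line; do
    case "$line" in
        *"completed successfully"*)           echo "OK:   $line" ;;
        *"cannot access"*|*"file is open"*)   echo "SOFT: $line" ;;
        *"failed"*|*"aborted"*)               echo "HARD: $line" ;;
        *)                                    echo "UNKNOWN (treat as error): $line" ;;
    esac
done < "$REPORT"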

[Note: An interesting, relatively new feature in NetWorker is the “success threshold” option for backup groups. Set to “Warning” by default, this will see savesets that generated warnings (but not hard errors) flagged as successful. The other option is “Success”, which means that for a saveset to be listed as successful, it must complete without warnings. One could argue that in an environment where all attempts have been made to eliminate errors, and which operates under a zero-error policy, this option should be changed from the default to the more severe setting.]
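If you do decide to change it, the option can be adjusted in the GUI or via nsradmin. A rough sketch of the latter follows – it assumes a group named “Daily” and that your NetWorker release exposes the attribute under the name shown:

# Sketch only: tighten the success threshold for one group via nsradmin.
nsradmin -s backupserver
nsradmin> . type: NSR group; name: Daily
nsradmin> update success threshold: Success
nsradmin> quit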

Procedures for dealing with errors

The ability to classify an error as critical, hard, or soft is practically useless unless procedures are established for dealing with the errors. Procedures for dealing with errors will be driven, at first, by any existing SLAs within the organisation. I.e., the SLA for either maximum amount of data loss or recovery time will drive the response to any particular error.

That response however shouldn’t be an unplanned reaction. That is, there should be procedures which define:

  1. By what time backup results will be checked.
  2. To whom (job title), to where (documentation), and by when critical and hard errors shall be reported.
  3. To where (documentation) soft errors shall be reported.
  4. For each system that is backed up, responses to hard errors. (E.g., some systems may require immediate re-run of the backup, whereas others may require the backup to be re-run later, etc.)

Note that this isn’t an exhaustive list – for instance, it’s obvious that any critical error must be responded to immediately, since data loss has occurred. Equally it doesn’t take into account routine testing, etc. – the above are more the daily procedures associated with enacting a zero error policy.

Now, you may think that the above requirements don’t constitute the need for procedures – that the processes can be followed informally. It may seem a callous argument to make, but in my experience in data protection, informal policies lead to laxity in following them. (Or: if it isn’t written down, it isn’t done.)

Obviously when checks aren’t done it’s rarely for a malicious reason. However, “my boss would like a status report on overnight backups by 9am” is elastic – if we feel there are other things we need to look at first, we can choose to interpret it as “would like by 9am, but will settle for later”. If, however, there’s a procedure that says “management must have backup reports by 9am”, it takes away that elasticity. That matters because it actually helps with time management – tasks can be done in a logical, process-driven order, because there’s a definition of the importance of activities within the role. This is critically important – not only for the person who has to perform the tasks, but also for those who would otherwise feel they can assign other tasks that interrupt these critical processes. You’ve heard that a good offense is a good defense? Well, a good procedure is also a good defense – against lower priority interruptions.
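One small way to take the elasticity out of such a procedure is to make the reporting mechanical. The crontab entry below is a sketch only – check_backups.sh is a hypothetical site script (the results parser sketched earlier would be a reasonable starting point), and the recipient address is an assumption:

# Sketch: mail the overnight backup status to management before 9am.
30 8 * * * /usr/local/bin/check_backups.sh | mail -s "Overnight backup status" backup-reports@example.com

Once the report arrives on a schedule rather than on request, the 9am obligation stops being negotiable.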

Documentation of the procedures and the errors

There are two distinctly different reasons why documentation must be maintained (or three, if you want to start including auditing as a reason). So, to rephrase that, there are three distinctly different reasons why documentation must be maintained. These are as follows:

  1. For auditing and compliance reasons it will be necessary to demonstrate that your company has procedures (and documentation for those procedures) for dealing with backup failures.
  2. To deal with sudden staff absence – it may be as simple as someone not being able to make it in on time, or it could be the backup administrator gets hit by a bus and will be in traction in the hospital for two weeks (or worse).
  3. To assist any staff member who does not have an eidetic memory.

In day to day operations, it’s the third reason that’s the most important. Human memory is a wonderfully powerful search and recall tool, yet it’s also remarkably fallible. Sometimes I can remember seeing the exact message 3 years prior in an error log from another customer, but forget that I’d asked a particular question only a day ago and ask it again. We all have those moments. And obviously, I also don’t remember what my colleagues did half an hour ago if I wasn’t there with them at the time.

I.e., we need to document errors because that guarantees we can reference them later. Again – no ifs, no buts, no maybes. Perhaps the most important factor in documenting errors in a data protection environment, though, is doing so in a system that allows for full text search. At bare minimum, you should be able to:

  1. Classify any input error based on:
    • Date/Time
    • System (server and client)
    • Application (if relevant)
    • Error type – critical, hard, soft
    • Response
  2. Conduct a full text search (optionally date restricted):
    • On any of the methods used to classify
    • On the actual error itself

The above fits nicely with wiki systems, so that may be one good option, but there are other systems out there that can be used equally well.
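Even without a wiki, the principle can be illustrated with something as simple as a flat text register. The sketch below is purely illustrative – the field names follow the classification discussed above, and the log location and example values are assumptions – but it satisfies both the classification and the full text search requirements:

#!/bin/sh
# Sketch of a minimal, text-searchable error register.
LOG=/nsr/logs/error-register.txt    # hypothetical location

cat >> "$LOG" <<EOF
date: $(date '+%Y-%m-%d %H:%M')
system: backupserver / fileserver01
application: Oracle
type: hard
error: saveset /u01 failed - client connection timed out
response: backup re-run at 09:30, completed successfully
---
EOF

# Full text search, optionally date restricted:
grep -i "timed out" "$LOG"
grep -A6 "date: 2009-08" "$LOG"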

The important thing though is to get the documentation done. What may initially seem time consuming when a zero error policy is enacted will quickly become routine and automatic; combined with the obvious reduction in errors over time under a zero error policy, the automatic procedural response to errors will actually streamline the activities of the backup administrator.

That documentation obviously, on a day to day basis, provides the most assistance to the person(s) in the ongoing role of backup administrator. However, in any situation where someone else has to fill in, this documentation becomes even more important – it allows them to step into the role, data mine for any message they’re not sure of and see what the local response was if a situation had happened before. Put yourself into the shoes of that other person … if you’re required to step into another person’s role temporarily, do you want to do it with plenty of supporting information, or with barely anything more than the name of the system you have to administer?

Wrapping Up

Just as when I first discussed zero error policies, you may be left thinking at the end of this that there’s a lot of work involved in managing one. It’s important to understand, however, that there’s always effort involved in any transition from a non-managed system to a managed system (i.e., from informal policies to formal procedures). For the most part, though, this extra work comes in at the institution of the procedures – namely in relation to:

  • Determining appropriate error categorisation techniques
  • Establishing the procedures
  • Establishing the documentation of the procedures
  • Establishing the documentation system used for the environment

Once these activities have been done, day to day management and operation of the zero error policy becomes a standard part of the job, and therefore doesn’t represent a significant impact on work. That’s for two key reasons: once these components are in place, following them really doesn’t take a lot of extra time, and the time it does take is actually factored into the job, so it can hardly be considered wasteful or frivolous.

At both a personal and ethical level, it’s also extremely satisfying to be able to answer the question, “How many errors slipped through the net today?” with “None”.

Posted in Backup theory, NetWorker, Policies | 1 Comment »

Things not to virtualise: backup servers and storage nodes

Posted by Preston on 2009-02-13

Introduction

When it comes to servers, I love virtualisation. No, not to the point where I’d want to marry virtualisation, but it is something I’m particularly keen on. I even use it at home – I’ve gone from three servers (one for databases, one as a fileserver, and one as an internet gateway) down to one, thanks to VMware Server.

Done right, I think the average datacentre should be able to achieve somewhere in the order of 75% to 90% virtualisation. I’m not talking about high performance computing environments – just your standard server farms. Indeed, having recently seen a demo of VMware’s Site Recovery Manager (SRM), and having participated in many site failover tests, I’ve become a bigger fan of the time and efficiency savings available through virtualisation.

That being said, I think backup servers fall into that special category of “servers that shouldn’t be virtualised”. In fact, I’d go so far as to say that even if every other machine in your server environment is virtual, your backup server still shouldn’t be a virtual machine.

There are two key reasons why I think having a virtualised backup server is a Really Bad Idea, and I’ll outline them below:

Dependency

In the event of a site disaster, your backup server should be, at the very least, among the first servers rebuilt. That is, you may start the process of getting equipment ready for restoration of data, but the backup server needs to be up and running in order to achieve data recovery.

If the backup server is configured as a guest within a virtual machine server, it’s hardly going to be the first machine to be configured, is it? The virtual machine server will need to be built and configured first, and only then the backup server.

In this scenario, there is a dependency that results in the build of the backup server becoming a bottleneck to recovery.

I realise that we try to avoid scenarios where the entire datacentre needs to be rebuilt, but this still has to be kept in mind – what do you want to be spending time on when you need to recover everything?

Performance

Most enterprise class virtualisation systems offer the ability to set performance criteria on a per machine basis – that is, in addition to the basics you’d expect such as “this machine gets 1 CPU and 2GB of RAM”, you can also configure options such as limiting the number of MHz/GHz available to each presented CPU, or guaranteeing performance criteria.

Regardless though, when you’re a guest in a virtual environment, you’re still sharing resources. That might be memory, CPU, backplane performance, SAN paths, etc., but it’s still sharing.

That means that at some point you’re sharing performance. The backup server, which is trying to write data out to the backup medium (be that tape or disk), is potentially competing for – or at the very least sharing – backplane throughput with the machines it is backing up.

This may not always make a tangible impact. However, debugging such an impact when it does occur becomes much more challenging. (For instance, in my book, I cover off some of the performance implications of having a lot of machines access storage from a single SAN, and how the performance of any one machine during backup is no longer affected just by that machine. The same non-trivial performance implications come into play when the backup server is virtual.)

In Summary

One way or the other, there’s a good reason why you shouldn’t virtualise your backup environment. It may be that for a small environment the performance impact isn’t an issue and it seems logical to virtualise. However, if you are in a small environment, it’s likely that your failover to another site will be a very manual process, in which case you’ll be far more likely to hit the dependency issue when it comes time for a full site recovery.

Equally, if you’re a large company that has a full failover site, then while the dependency issue may not be as much of a problem (due to, say, replication, snapshots, etc.), there’s a very high chance that backup and recovery operations are time critical, in which case the performance implications of having a backup server share resources with other machines will likely make a virtual backup server an unpalatable solution.

A final request

As someone who has done a lot of support, I’d make one special request if you do decide to virtualise your backup server*.

Please, please make sure that any time you log a support call with your service provider you let them know you’re running a virtual backup server. Please.


* Much as I’d like everyone to do as I suggest, I (a) recognise this would be a tad boring and (b) am unlikely at any point soon or in the future to become a world dictator, and thus wouldn’t be able to issue such an edict anyway, not to mention (c) can occasionally be fallible.

Posted in General thoughts, NetWorker, Policies | 6 Comments »

Offsite your bootstrap reports

Posted by Preston on 2009-02-09

I can’t stress enough the importance of getting your bootstrap reports offsite. If you don’t have a bootstrap report available and you have to rebuild your NetWorker server, then you may have to scan through a lot of media to find your most recent backup.

It’s all well and good having your bootstrap report emailed to your work email address, but what happens if whatever takes out your backup server takes out your mail server as well?

There are two things you should therefore do with your bootstrap reports:

  • Print them out and send them offsite with your media – offsite media storage companies will usually store other records for you as well, so there’s a good chance yours will, even if for a small extra fee. If you send your bootstraps offsite with your media, then in the event of a disaster recovery, a physical printout of your bootstrap report should also come back when you recall your media.
  • Email them to an external, secure email address – I recommend using a free, secure mail service such as, say, Google Mail. Doing so keeps electronic copies of the bootstrap available for easy access in the event of a disaster where internet access is still available even if local mail isn’t. Of course, the password for this account should be (a) kept secure and (b) changed every time someone who knows it leaves the company.

(Hint: if for some reason you need to generate a bootstrap report outside of the regular email, always remember you can at any time run: mminfo -B).
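As a rough illustration of combining the two approaches (the external address below is an assumption), that command can be wrapped so its output goes straight to an offsite mailbox – for example, run from cron after the nightly backups complete:

# Sketch: mail the current bootstrap report to an external, secure mailbox.
mminfo -B | mail -s "NetWorker bootstrap report - $(hostname)" offsite.bootstraps@example.com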

Obviously this should be done in conjunction with local storage methods – local email, and local printouts.

Posted in NetWorker, Policies, Scripting | Comments Off

Monthly rather than weekly fulls using backup to disk

Posted by Preston on 2009-02-05

When backup to disk is deployed, most sites usually just transition from their standard tape backups to disk without any change to the schedules. That is, daily incrementals (or differentials), with weekly fulls. This isn’t necessarily the best way to make use of backup to disk, and I’ll explain why in this post.

One of the traditional reasons why long incremental cycles aren’t used in backup is the load and seek impact during recovery. That is, you’ll certainly reduce the amount of data you back up if you do incrementals for a month, but if they’re all going to tape, then the chances are that a recovery towards the end of that month will require a lot of tapes to be loaded. Unless you’re using high speed loading tape drives (e.g., the StorageTek/Sun 98/99 series drives), this is going to make a significant impact on the recovery. Indeed, even with such drives, you’re still going to have an impact that may be undesirable.

If you’re backing up to disk however, your options change. Disk seek times are orders of magnitude faster than tape seek times, and there’s no ‘load’ time associated with disk as opposed to tape media either.

In an average site where ‘odd’ things aren’t happening (e.g., filesystem backups of databases, etc.), my experience is that nightly incrementals take up somewhere between 5-8% of a full backup. That is, if the full backups are 10TB, the incrementals sit somewhere around 512 GB – 819 GB.

We’ll use these numbers for an example – 10TB full, 820GB incremental. Over the course of an average, 4-week month then, the total data backed up using the weekly-full strategy will be:

  • 4 x 10TB fulls
  • (6 x 820GB) x 4 incrementals

For a total of 59TB of backup.

Looking at a monthly full scenario for a 31-day month however, the sizing will instead be:

  • 1 x 10TB full
  • 30 x 820GB incrementals

This amounts to a total of 34TB of backup.

If you have to pay for a new array for disk backup units that have enough space to hold a month’s worth of backups, which would you rather pay for? 59TB of storage, or 34TB of storage?

(Of course, I know there’s some fudge space required in any such sizing – realistically you’d want to ensure that after you’ve fitted in everything you want to fit, there’s still enough room for another full backup. That way you’ve got sufficient space on disk to continue to back up to it while you’re staging data off.)
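For anyone who wants to plug in their own numbers, the arithmetic above can be expressed in a few lines of shell. The 10TB full and 820GB incremental figures are simply the example values used in this post:

#!/bin/sh
# Rough sizing comparison of weekly fulls vs monthly fulls.
FULL_GB=10240       # 10TB full backup, expressed in GB
INCR_GB=820         # nightly incremental, roughly 8% of the full

WEEKLY=$((4 * FULL_GB + 4 * 6 * INCR_GB))    # 4 fulls + 24 incrementals
MONTHLY=$((FULL_GB + 30 * INCR_GB))          # 1 full + 30 incrementals

echo "Weekly fulls:  $WEEKLY GB (~$((WEEKLY / 1024)) TB)"
echo "Monthly fulls: $MONTHLY GB (~$((MONTHLY / 1024)) TB)"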

Obviously the needs of each individual site must be evaluated, so I’m not advocating a blind switch to this method; instead, it’s a design option you should be aware of.

Posted in NetWorker | 2 Comments »

NetWorker and the Y2038 Problem

Posted by Preston on 2009-01-25

Having spent what seemed like much of 1999 coordinating the system administration efforts of a major Y2K project for an engineering company, I have fundamental problems with the plethora of journalists who claimed that the overall limited number of Y2K issues experienced meant it was never a problem and was thus a waste of money. (I invite such journalists to stop filling their cars with fuel, since they don’t run out of fuel after they’re filled – it’s similar logic.)

Thus I’m also aware of the difficulties posed by 2038 – the point at which the number of seconds since 1 January 1970 can no longer be expressed in a signed 32-bit integer.

Interestingly, NetWorker doesn’t seem to technically have the Y2038 problem, since that problem is meant to manifest in early 2038, yet NetWorker allows retention and browse periods to be specified for its savesets up to and including 31 December 2038 23:59:59.

However, NetWorker does still appear to have a pseudo-2038 issue in that it currently doesn’t allow you to specify a browse or retention period beyond 31 December 2038 23:59:59.

For instance:

[root@nox ~]# save -qb Yearly -e "12/31/2038 23:59:59" /etc/sysconfig
save: /etc/sysconfig  219 KB 00:00:01    115 files
[root@nox ~]# save -qb Yearly -e "01/01/2039 00:00:01" /etc/sysconfig
6890:(pid 18236): invalid expiration time: 01/01/2039 00:00:01

I have 2 theories for this – neither of which I’m willing to bet on without being an EMC engineer who has access to the source code. Either NetWorker doesn’t really store dates as of 1 January 1970 (instead storing from some time later in January 1970) or it’s only partly surpassed the 32-bit barrier for date/time representation – e.g., the back-end supports it but the front-end doesn’t, or the back-end doesn’t support it and the front-end does, but knows the back-end doesn’t and therefore blocks the request.

Either way, it’s something that companies have to be aware of.

Where does that leave you?

Well, being unable to set a browse/retention period beyond 2038 for now is hardly an insurmountable issue, nor is it an issue that should, for instance, discount NetWorker from active consideration at a site.

Instead, it suggests that data with long term retention requirements – e.g., requirements exceeding 30 years (such as government archives, medical records that must be kept for the life of the patient, academic records that must be kept for the life of the student, etc.) – needs to be stored with well established and documented policies in place for extending that retention as appropriate for backups.

Such policies aren’t difficult to enact. After all, data which is to be stored on tape for even 5+ years really should have policies to deal with recall and testing, and it goes without saying that data which is to be kept on tape for 30 years will most certainly need to be recalled for migration at some point during its lifetime. (Indeed, one could easily argue it would need to be recalled for migration to new media types at least 3 times alone.)

So, until NetWorker fully supports post-2038 dates, I’d recommend that companies with long-term data retention requirements document and establish extensions to their policies as follows:

  • Technical policies:
    • All backups that should be kept beyond 2038 should be appropriately tagged – whether simply by being within a particular pool, or by having a data expiration period later than 2037, will depend on the individual company’s requirements (see the sketch at the end of this post).
    • Each new release of NetWorker should be tested, or researched to confirm whether it supports post-2038 dates.
    • At any point that post-2038 dates become supported, the retention on the savesets to be kept beyond 2038 should be extended.
  • Human resource policies:
    • New employee kit for system/backup administrators must make note of this requirement as an ongoing part of the job description of those who are responsible for data retention.
    • New employee kit for managers responsible for IT must make note of this requirement as part of an ongoing part of their job description.
    • HR policy guides should clearly state that these policies and requirements must be maintained, and audited periodically.

With such policies in place, being unable to set a browse/retention period currently beyond 2038 should be little cause for concern.
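As a sketch of the tagging and tracking mentioned above, the query below lists savesets whose retention runs into 2037 or beyond, so they can be identified and re-extended once post-2038 dates are supported. It assumes your NetWorker release exposes the saveset retention time as the ssretent attribute, and the server name is illustrative:

# Sketch only: report savesets whose retention extends into 2037 or later.
mminfo -s backupserver -q "ssretent>=01/01/2037" -r "client,name,ssid,savetime,ssretent,volume"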

Posted in NetWorker, Policies | Comments Off

 