NetWorker Blog

Commentary from a long-term NetWorker consultant and Backup Theorist

  • This blog has moved!

    This blog has now moved to nsrd.info/blog. Please jump across to the new site for the latest articles (and all old archived articles).

  • Enterprise Systems Backup and Recovery

    If you find this blog interesting, and either have an interest in or work in data protection/backup and recovery environments, you should check out my book, Enterprise Systems Backup and Recovery: A Corporate Insurance Policy. Designed for system administrators and managers alike, it focuses on the features, policies, procedures and human element of ensuring that your company has a suitable, working backup system rather than just a bunch of copies made by unrelated software, hardware and processes.

Posts Tagged ‘NetWorker’

Fingers Crossed for some New Year’s Resolutions from EMC

Posted by Preston on 2009-12-15

Over at Storagebod’s Blog, Martin Glassborow has been providing a highly interesting and entertaining set of letters to Father Christmas, which I’d highly recommend reading.

So in the spirit of Martin’s postings, I’m going to do a slight variant – here are a few things I’ll be keeping my fingers crossed that EMC will provide in NetWorker in 2010.

Dear EMC,

Could you please add the following items to your NetWorker New Year’s resolutions?

  • Windows 2008 Service Pack 2 support as soon as possible!
  • Improvements for ADV_FILE devices:
    • Savesets can spill over from one disk backup unit to another should the first become full.
    • Better rotation/selection method for devices/volumes.
  • Tapes:
    • Ability to query the “offsite” mminfo flag.
  • Cloning:
    • Inline cloning (simultaneous clone + original generation).
    • Fixing the “validcopies” flag!
  • NetWorker Management Console:
    • Manual client backup and recovery operations integrated into the console.
    • Manual module backup and recovery operations integrated into the console.
  • Resources/Configuration:
    • Ability to rename clients (and other resources).
    • Syntax for including another directive within a directive.
    • PIDs for nsrmmds stored within the device resource for easy viewing.
  • Operations:
    • Client GUI manual backups to support pools other than Default.
    • More granular savesets.

I could think of a bunch of other enhancements, but these are the ones I most hope are on your New Year’s resolution list for NetWorker in 2010. I’m hoping I don’t have to put any of them on a wish list for your 2011 New Year’s resolutions.

Thanks, and have a happy new year!

Preston de Guise.

Posted in NetWorker | Tagged: , , | 2 Comments »

Enhancing NetWorker Security: A theoretical architecture

Posted by Preston on 2009-11-18

It’s fair to say that no one backup product can be all things to all people – indeed, no product of any kind can be.

Security has had a somewhat interesting past in NetWorker; much of the attention to security over time has had to do with (a) defining administrators, (b) ensuring clients are who they say they are, and (c) providing access controls for directed recoveries.

There are several areas, though, where NetWorker security has remained somewhat lacking. Not 100% lacking – just not complete. For instance, user accounts that are accessed for the purposes of module backup and recovery frequently need higher levels of authority than standard users. Equally, some sites want their <X> admins to be able to control as much as possible of the <X> backups, but without having any administrator privileges over the <Y> backups. I’d like to propose an idea that, if implemented, would both improve security and make NetWorker more flexible.

The change would be to allow the definition of administrator zones. An “administrator zone” would be a subset of a datazone. It would consist of:

  1. User groups:
    • A nominated “administrator” user group.
    • A nominated “user” user group.
    • Any other number of nominated groups with intermediate privileges.
  2. A collection of the following:
    • Clients
    • Groups
    • Pools
    • Devices
    • Schedules
    • Policies
    • Directives
    • etc

These obviously would still be accessible in the global datazone for anyone who is a datazone administrator. Conceptually, this would look like the following:

Datazone with subset "administrator" zones

The first thing this should point out to you is that administrator zones could, if desired, overlap. For instance, in the above diagram we have:

  1. Minor overlap between Windows and Unix admin zones (e.g., they might both have administrative rights over tape libraries).
  2. Overlap between Unix and Oracle admin zones.
  3. Overlap between Windows and Oracle admin zones.
  4. Overlap between Windows and Exchange admin zones.
  5. Overlap between Windows and MSSQL admin zones.

Notably though, the DMZ Admin zone indicates that you can have some zones that have no overlap/commonality with other zones.

There’d need to be a few rules established in order to make this work. These would be:

  1. Only the global datazone can support “<x>@*” user or group definitions in a user group.
  2. If there is overlap between two zones, then the user will inherit the rights of the highest authority they belong to. I.e., if a user is editing a shared feature between the Windows and Unix admin zones, and is declared an admin in the Unix zone, but only an end-user in the Windows zone, then the user will edit that shared feature with the rights of an admin.
  3. Similarly to the above, if there’s overlap between privileges at the global datazone level and a local administrator zone, the highest privileges will “win” for the local resource.
  4. Resources can only be created and deleted by someone with datazone administrator privileges.
  5. Updates to resources that are shared between multiple administrator zones need to be “approved” by an administrator from each overlapping administrator zone, or by a datazone administrator.

Would this be perfect? Not entirely – for instance, it would still require a datazone administrator to create the resources that are then allocated to an administrator zone for control. However, this would prevent a situation occurring where an unprivileged user with “create” options could go ahead and create resources they wouldn’t have authority over. Equally, in an environment that permits overlapping zones, it’s not appropriate for someone from one administrator zone to delete a resource shared by multiple administrator zones. Thus, for safety’s sake, administrator zones should only concern themselves with updating existing resources.

How would the approval process work for edits of resources that are shared by overlapping zones? To start with, the resource that has been updated would continue to function “as is”, and a “copy” would be created (think of it as a temporary resource), with a notification used to trigger a message to the datazone administrators and the other, overlapping administrators. Once the appropriate approval has been done (e.g., an “edit” process in the temporary resource), then the original resource would be overwritten with the temporary resource, and the temporary resource removed.

So what sort of extra resources would we need to establish this? Well, we’ve already got user groups, which is a starting point. The next step is to define an “admin zone” resource, which has fields for:

  1. Administrator user group.
  2. Standard user group.
  3. “Other” user groups.
  4. Clients
  5. Groups
  6. Pools
  7. Schedules
  8. Policies
  9. Directives
  10. Probes
  11. Lockboxes
  12. Notifications
  13. Labels
  14. Staging Policies
  15. Devices
  16. Autochangers
  17. etc.

In fact, pretty much every resource except the server resource itself, and licenses, should be eligible for inclusion in a localised admin zone. At its most basic, you might expect to see the following:

nsradmin> print type: NSR admin zone; name: Oracle
type: NSR admin zone;
name: Oracle;
administrators: Oracle Admins;
users: Oracle All Users;
other user groups: ;
clients: delphi, pythia;
groups: Daily Oracle FS, Monthly Oracle FS,
        Daily Oracle DB, Monthly Oracle DB;
pools: ;
schedules: Daily Oracle, Monthly Oracle;
policies: Oracle Daily, Oracle Monthly;
directives: pythia exclude oracle, delphi exclude oracle;
...etc...
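Creating such a zone would presumably follow standard nsradmin conventions – this is a sketch only, since the resource type itself is hypothetical:

nsradmin> create type: NSR admin zone; name: Oracle;
          administrators: Oracle Admins; users: Oracle All Users;
          clients: delphi, pythia

with the remaining attributes populated via subsequent update operations.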

To date, NetWorker’s administration focus has been far more global. If you’re an administrator, you can do anything to any resource. If you’re a user, you can’t do much with any resource. If you’ve been given a subset of privileges, you can use those privileges against all resources touched by those privileges.

An architecture that worked along these lines would allow for much more flexibility in terms of partial administrative privileges in NetWorker – zones of resources and local administrators for those resources would allow for more granular control of configuration and backup functionality, while still keeping NetWorker configuration maintained at the central server.

Posted in Architecture, Backup theory, NetWorker, Security | Tagged: , , , , | 2 Comments »

Routine filesystem checks on disk backup units

Posted by Preston on 2009-09-28

On Linux, filesystems typically have two settings governing complete filesystem checks at boot time. These are:

  • Maximum number of mounts before a check
  • Interval between checks

The default settings, while reasonably suitable for smaller partitions, are very unsuitable for large partitions, such as those found in disk backup units. In fact, if you don’t pay particular attention to these settings, you may find after a routine reboot that your backup server (or storage node) takes hours to become available. For instance, it’s not unheard of for even a sub-20TB DBU environment (say, 10 x 2TB filesystems) to take several hours to complete mandatory filesystem checks after what should have been a routine reboot.

There are two approaches that you can take to this:

  • If you want to leave the checks enabled, it’s important to ensure that at most one disk backup unit filesystem will be checked at a time after any given reboot; this will at least reduce the duration of any check-on-reboot. Thus, ensure you:
    • Configure each filesystem so that it will have a different number of maximum mounts before check than any other filesystem, and,
    • Configure the interval (days) between checks for each filesystem to be a significantly different number.
  • If you don’t want periodic filesystem checks to ever interfere with the reboot process, you need to:
    • Ensure that following a non-graceful restart of the server the DBU filesystems are unmounted and checked before any new backup or recovery activities are done, and,
    • Ensure that there are processes – planned maintenance windows if you will – for manual running of the filesystem checks that are being skipped.

Neither option is particularly “attractive”. In the first case, you can still, if you cherish uptime or don’t need to reboot your backup server often, get into a situation where multiple filesystems need to be checked on reboot because they’ve all exceeded their days-between-checks parameter. In the second instance, you’re having to insert human-driven processes into what should normally be a routine operating system function. In particular, with the manual option there must be a process in place for shutting down NetWorker and running the checks, even in the middle of the night, should an OS crash occur.

Actually, the above list is a little limited – there are a couple of other options you can consider as well, though they’re a little more left of field:

  • Build into the change control process the timings for complete filesystem checks in case they happen, or
  • Build into the change control process or reboot procedure for the backup server/storage nodes the requirement to temporarily disable filesystem checks (using, say, tune2fs) so that you know the reboot won’t be costly in terms of time.
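To illustrate the tune2fs angle (ext2/ext3 filesystems; the device names here are purely illustrative):

# Stagger: give each DBU filesystem distinct mount-count/interval values
tune2fs -c 29 -i 90d /dev/sdb1
tune2fs -c 37 -i 104d /dev/sdc1

# Or disable periodic checks entirely (only with a manual check process in place!)
tune2fs -c 0 -i 0 /dev/sdd1

# Review the current settings for a filesystem
tune2fs -l /dev/sdb1 | grep -Ei 'mount count|check'

Using mutually distinct (prime-ish) mount counts helps prevent multiple filesystems falling due for a check on the same reboot.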

Personally, I’m looking forward to btrfs – in reality, a modern filesystem such as that should solve most, if not all, of the problems discussed above.

Posted in Linux, NetWorker | Tagged: , , , , | Comments Off on Routine filesystem checks on disk backup units

Aside – Is NetWorker fast enough for my needs?

Posted by Preston on 2009-07-30

Most days my blog stats shows at least one search coming into the blog along the lines of “how fast is NetWorker”, etc. It’s understandable. A lot of people selling products other than NetWorker try to push old FUD that it’s not fast enough. Equally, a lot of people who are considering NetWorker are understandably curious as to whether it will be fast enough to suit their needs.

I thought I should write a (brief) piece on this.

To cut to the chase, NetWorker is as fast as your hardware will allow. Yes, there are obviously some software limitations, but that’s true of any backup product.

Looking at the facts though, we can refer back as far as 2003, when NetWorker broke the (let’s call it) “land speed record” for backup by achieving backup performance of 10TB per hour. Most companies now would still be happy with 10TB an hour, but obviously that performance metric was bound by the devices and infrastructure available at the time. These days, it would come out much faster.

I’m currently struggling to find the original Legato piece about this performance record, but my recollection is that it was:

  • Averaging 10TB/h
  • Achieving 2.86GB/s (that’s gigabytes per second, not gigabits per second)
  • Using real customer data

I did find the (very brief) SGI announcement about the speed achieved here. I also found a Sun/Legato presentation here (search for “10TB/h”), and a “press clipping” here.

The net result? Well, I’m not claiming every environment will get that sort of speed, but what I will reasonably confidently assert is that NetWorker will scale to meet your needs, so long as you have budget.

Backup performance isn’t really a p–ssing competition that you want to get into – in reality, if you want to worry about “speeds and feeds”, look at restore performance. NetWorker does admirably there too – that 10TB/h filesystem backup restored at 4.5TB/h, and a block-level backup that ran at 7.2TB/h restored at 7.9TB/h.

So the next time someone tries to tell you that “NetWorker isn’t fast enough to be enterprise”, remember one thing: they’re wrong.

Posted in Aside, NetWorker | Tagged: , , | Comments Off on Aside – Is NetWorker fast enough for my needs?

My Personal NetWorker Wish List

Posted by Preston on 2009-07-01

This list is not necessarily reflective of my customers, though several of them have expressed a desire for at least one or two of the features in it. However, it does reflect features currently absent from NetWorker that I feel would significantly enhance it.

(I should point out that even missing these features, I still think it’s superior to other backup products.)

So, here’s my personal wish-list for NetWorker (in no particular order), coming from continuous use of the product since 1996:

  • Enhancements to ADV_FILE backup:
    • The nsrmmd service needs to support some form of proxying, such that savesets being written to a disk backup unit that fills can be “moved” to another nsrmmd service for completion on another ADV_FILE unit.
    • When backing up to ADV_FILE units, NetWorker needs to pick the next volume to write to by capacity, not age. This would prevent the same ADV_FILE unit(s) filling frequently, and stagger writes better across all devices.
  • Enhancements to backup, more generally:
    • Simultaneously generate a backup and one or more clones, with options governing whether the failure of one, a particular number, or all copies constitutes a backup failure.
    • Support for expiration/browse dates beyond 2038.
  • Enhancements to recovery:
    • A new recover log, automatically populated with details of who recovered what. This should be kept at the NetWorker server level, rather than within NMC.
    • Administrators should be able to optionally allow cross platform directed recoveries. (This will become particularly pertinent as companies complete migration activities from Novell NetWare to Novell Open Enterprise Server. As one platform is NetWare, and the other Linux, recovering old NetWare data to an OES server is not supported.)
    • Preferred recovery pools – currently NetWorker picks savesets to use for recovery based on nsavetime, which in disk backup environments where data has been cloned and then staged can result in NetWorker requesting the recovery from a clone rather than the staged volume. While this can be ameliorated by individually setting the “offsite” flag for each volume, it would be preferable to be able to nominate “priorities” for pools, such that NetWorker will recommend volumes from the highest priority pool when a recovery is required and no volumes are on-line.
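For reference, that per-volume amelioration is done with nsrmm; if memory serves, it’s along these lines (volume name illustrative):

# nsrmm -o offsite OracleClone.001

which is exactly the sort of manual busy-work that per-pool priorities would eliminate.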

The wish list is by no means complete, but these are the things I tend to chafe against more frequently than others.

Since I don’t want it to be said that I just come up with a wish-list without any idea of how its items would be accomplished, I’ll focus on what would (theoretically) need to be done to fix up disk backup, accomplish simultaneous (inline) cloning, etc. (Admittedly, this is my “I don’t have access to the NetWorker source code and never have, but architecturally I think EMC should do this” train of thought, but that doesn’t invalidate the theoretical architecture.)

The current “tiering” used to get data to a volume needs to be extended by one layer. Currently the (general) process is that the client save process (or other nominated process) sends data to the nominated nsrmmd process for writing directly to a volume.

It’s this model that needs to be changed. Rather than the client sending directly to the nsrmmd process, instead it needs to send to a proxy process – let’s call that the fictional nsrmmpd process. This process would act as a “broker”. It would receive the individual backup processes from each client backup, and determine which nsrmmd would facilitate the backup. The important feature however would be that there would not be a 1-to-1 relationship between nsrmmds and nsrmmpds – rather, there would be a 1-to-1 relationship between client backups and nsrmmpds, and the nsrmmpd would be able to redirect the client data stream to another nsrmmd on an as-needs basis. This would resemble the following:

Proposed nsrmmpd architecture

The advantage of this style of daemon layout would be that client backup processes would not have to re-negotiate with nsrmmd processes; rather, they would be passed through by the proxy process to whichever nsrmmd process was most suitable at any given time. This would in theory be completely seamless and undetectable to the client.

Posted in Backup theory, NetWorker | Tagged: , | 3 Comments »

Basics – Using datazone encryption with NetWorker

Posted by Preston on 2009-03-22

I’m not fond of software encryption (or compression, for that matter). Particularly in a 24×7 enterprise environment, clients (i.e., production servers) have better things to do than on-the-fly software encryption or compression. In these environments, hardware encryption routers should be the product of choice for achieving totally secure backups. Such devices also have advantages in terms of key management – being much more flexible, scalable and appropriate for role-based data access.

That being said, in smaller environments, or environments where servers are relatively idle overnight, NetWorker’s datazone encryption can be sufficient to achieve a reasonable modicum of backup protection with minimum effort – and most importantly, cost.

To get started using NetWorker datazone encryption, you first need to assign a pass phrase. This is done in the NetWorker server properties (typically accessed within NMC):

Datazone encryption pass phrase setting

With the pass phrase in place, you can then configure directives within NetWorker to make use of AES 256-bit encryption. However! As soon as you turn encryption on, you lose all potential for hardware-based compression of your media. Why? Quite simply, compression is about finding patterns in data and reducing all the matching patterns to a single reference point; encryption, by contrast, is all about eliminating patterns, making the data appear completely random.

Thus, if you still want some measure of compression when using this method, you should employ software-based compression in your directive as well.

Thus, a base directive might look like the following:

<< / >>
+compressasm: .
+aes: .

This will apply compression first to all files encountered, then once the file has been compressed, it will be encrypted. A side benefit of this is that by compressing first, you reduce the amount of data to be encrypted*.

So long as the datazone pass phrase is stored on the server, encryption will occur, and no password will be required to recover the data. Remember, this style of encryption, using a single pass-phrase, isn’t about restricting who within the datazone can recover the data; it’s about preventing the data stored on tape (which is potentially off-site, or otherwise at higher risk of theft) from being recovered by whoever gets hold of the media.

[Edit, 2009-08-15]

It’s been pointed out to me that you can’t compress + encrypt at the client side. Indeed, I’ve now found the part in the administration guide that explicitly says this. What is extremely disappointing about this is that NetWorker actually doesn’t warn you that it’s not going to compress + encrypt! To me, that’s a security issue.

So, for the examples above, forget about enabling client-side compression as well as encryption – you can have one or the other, but not both.
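A workable directive for this method therefore reduces to the AES ASM on its own:

<< / >>
+aes: .

accepting that the data written will be effectively incompressible by the tape hardware.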


* In the same way that ice-cream that’s 99% fat free, but 87% sugar is a “benefit”.

Posted in Basics, NetWorker | Tagged: , , , | 4 Comments »

Does your backup administrator have a say in change control?

Posted by Preston on 2009-03-05

…and if not, why?

A common mistake made in many companies is the failure to include the backup administrator (or, if there is a team, the team leader for data protection) in the change control approval process.

Typically the sorts of roles involved in change control include:

  • CIO or other nominated “final say” manager.
  • The tech writing the change request.
  • The tech’s manager approving the change request.
  • Network team.

Obviously there’s exceptions, and many companies will have variances – for instance, in most consulting companies, a sales manager will also get to have a say in change control, since interruptions to sales processes at the wrong time can break a deal.

Too infrequently included in change control is the backup administrator, or the team responsible for backup administration. Any common-sense approach to data protection would suggest this is lunacy. After all, if a change fails, surely one potential remedy will be to recover from backup?

The error is three-fold:

  • Implicit assumption that any issue is recoverable from;
  • Implicit assumption that the backup system is always available;
  • Implicit assumption that what you need backed up is backed up.

Out of all of those assumptions, perhaps only the last is forgivable. As I point out in my book, and many have pointed out before me, it’s always better to back up a little too much than not quite enough. Thus, in a reasonable environment that has been properly configured, systems should be protected.

The three-fold assumptions error can actually be summarised more succinctly though – assuming that having a backup system is a blank cheque for data recovery.

Common issues I’ve seen caused by failures to include backup administrators in change control include:

  • Having major changes timed to occur at the same time as scheduled down-time in the backup environment;
  • Kicking off full backups of large systems prior to changes without notification to the backup administrators, swamping media availability;
  • Scheduling changes to occur just prior to the next backup, thereby maximising the potential data loss within the periodic backup window;
  • Not running fresh, full backups of version-critical database content after upgrades, and thus suffering significant outages later when a cross-version recovery is required;
  • Not checking version compatibility for applications or operating systems, resulting in “upgrades” that can’t be backed up;
  • Wasting backup administrators’ time searching for reasons why failures occurred, because change outages ran during the backups.

To be blunt, any of the above scenarios that occur without pre-change signoff are inexcusable and represent a communications flaw within an organisation.

Any change that has the potential to impact on, or be impacted by, the backup system should be subject to approval by, or at the very least notification of, the backup administrators. The logical consequence of this rule is that any change that has anything to do with IT systems will logically impact on or be impacted by the backup system.

Note that by impact on, I don’t mean just cause a deleterious effect to the backup system, but also more simply, require resources from the backup system (e.g., for the purposes of recovery, or even additional resources for more backups).

All of this falls into establishing policies surrounding the backup system, and I’m not talking what backs up when – but rather, implications that companies must face as a result of having backup systems in place. Helping organisations understand those policies is a major focus of my book.

Posted in General thoughts, NetWorker, Policies | Tagged: , | 1 Comment »

Is your backup server fast enough?

Posted by Preston on 2009-02-26

Is your backup server a modern, state of the art machine with high speed disk, significant IO throughput capabilities and ample RAM so as to not be a bottleneck in your environment?

If not, why?

Given the nature of what it does – support systems via backup and recovery – your backup server is, by extension, “part of” your most critical production server(s). I’m not saying that your backup server should be more powerful than any of your production servers, but what I do want to say is that your backup server shouldn’t be a restricting agent in relation to the performance requirements of those production servers.

Let me give you an example – the NetWorker index region. Using Unix for convenience, we’re talking about /nsr/index. This region should either be on drives as fast as your fastest production system drives, or on something that is still suitably fast.

For instance, in much smaller companies, I’ve often seen the production servers have SCSI drives or SCSI JBODs, but the backup server just be a machine with a couple of mirrored SATA drives.

In larger companies, you’ll have the backup server connected to the SAN with the rest of the production systems, but while the production systems get access to 15,000 RPM SCSI drives, the backup server will instead get 7,200 RPM SATA drives (or worse, previously, 5,400 RPM ATA drives).

This is a flawed design process for one very important reason – for every file you back up, you need to generate and maintain index data. That is, NetWorker server disk IO occurs in conjunction with backups*.

More importantly, when it comes time to do a recovery, and indices must be accessed, do you want to pull index records for say, 20,000,000 files from slow disk drives or fast disk drives?
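If you want a rough feel for how your current index region reads, a crude test can be illuminating (Linux shown; this measures raw sequential read throughput only, not NetWorker itself):

# Flush the page cache so the disks, not RAM, get measured
sync; echo 3 > /proc/sys/vm/drop_caches

# Read the entire index region and time it (wc -c reports the bytes read)
time tar cf - /nsr/index | wc -c

Divide the index region’s size by the elapsed time, and compare that against what your production volumes can sustain.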

(Now, as we move towards flash drives for critical performance systems, I’m not going to suggest that if you’re using flash storage for key systems you should also use it for backup systems. There is always a price point at which you have to start scaling back what you want vs what you need. However, in those instances I’d suggest that if you can afford flash drives for critical production systems, you can afford 15,000 RPM SCSI drives for the backup servers’ /nsr/index region.)

Where cost for higher speed drives becomes an issue, another option is to scale back the speed of the individual drives but use more spindles, even if the actual space used on each drive is less than the capacity of the drive**.

In that case for instance, you might have 15,000 RPM drives for your primary production servers, but the backup servers’ /nsr/index region might reside on 7,200 RPM SATA drives successfully, so long as they’re arrayed (no pun intended) in such a way that there’s sufficient spindles to make reading back data fast. Equally then, in such a situation, hardware RAID (or software RAID on systems that have sufficient CPUs and cores that it equals or exceeds hardware RAID performance) will allow for faster processing of data for writing (e.g., RAID-5 or RAID-3).

In the end, your backup server should be like a butler (or a personal assistant, if you prefer the term) – always there, always ready and able to assist with whatever it is you want done, but never, ever an impediment.


* I see this as a similar design flaw to, say, using 7,200 RPM drives as a copy-on-write snapshot area for 15,000 RPM drives.
** Ah, back in the ‘old’ days, where a database might be spread across 40 x 2GB drives, using only 100 MB from each drive!

Posted in NetWorker, Policies | Tagged: , , | 2 Comments »

Getting jobquery to talk to you

Posted by Preston on 2009-02-19

I’ve been using nsradmin for the last 12 years. So when I read about a new utility in NetWorker 7.5, ‘jobquery’, designed to work in a similar way to nsradmin but to query the jobs database rather than the resource database, I was looking forward to giving it a go. (This was due in no small part to lingering disappointment over how nsrjobsd has been practically a black box since it was introduced.)

So I was rather … disappointed when I ran jobquery for the first time and it appeared to hang.

Running, say:

# jobquery

Appeared to hang.

Running, say:

# jobquery -s server

Also appeared to hang.

Running, say:

# jobquery -s server print

Didn’t return a thing.

So I thought maybe this was a tool that had been let out of the barn a little too soon, and even went to the point of logging a question case with EMC about it. After all, it appeared to not really work at all.

It turns out I’d not anticipated that there might have been a simpler problem with jobquery, that being … less than desirable interface design. Let’s be blunt: if you write an interactive “shell” style query interface, it should tell the user when it’s waiting for input.

The problem with the initial invocation attempts was a simple one – it wasn’t hanging, but instead, it was waiting for input without telling me it was waiting for input. Consequently, I’m currently asking EMC to file a bug about this. I know the difference between RFEs and bugs – an RFE is a request for enhancement, or to change something that’s there by design, but a bug is a problem with the actual implementation. Now, someone might argue that maybe this should be filed as an RFE if it was originally designed to not show any prompt, but my take on it is that any interface that doesn’t differentiate between “waiting for input” and “processing/stuck” is, in actuality, a buggy design.

Oh, and jobquery just doesn’t like being given queries on the command line, even though the man page says it will accept them.

If you’ve been trying to use jobquery and not getting much satisfaction, try it again without waiting for a prompt. Once I got past the lack of prompt, I was quite excited by the promise of jobquery – in fact, I’m hoping that a future release will actually implement the ability to even stop jobs – e.g., kill off a single saveset, or even say, pause a clone/stage operation.
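In other words, a first session looks something like this (the bracketed lines are my annotations, not program output):

# jobquery -s server
[no prompt appears - just start typing]
print
[...current job records are printed...]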

No doubt jobquery needs some improvements, but it wasn’t quite the aborted attempt I’d been initially worried about, and you should give it a go – you’ll be pleasantly surprised.

Posted in NetWorker | Tagged: , | Comments Off on Getting jobquery to talk to you

Basics – Parallelism in NetWorker

Posted by Preston on 2009-02-17

Hi!

The text for this article has moved, and is now at the permanent NetWorker Information Hub Site. Please read it here.

Posted in Basics, NetWorker | Tagged: , | 19 Comments »