NetWorker Blog

Commentary from a long term NetWorker consultant and Backup Theorist

  • This blog has moved!

    This blog has now moved. Please jump across to the new site for the latest articles (and all old archived articles).



  • Enterprise Systems Backup and Recovery

    If you find this blog interesting, and either have an interest in or work in data protection/backup and recovery environments, you should check out my book, Enterprise Systems Backup and Recovery: A Corporate Insurance Policy. Designed for system administrators and managers alike, it focuses on the features, policies, procedures and the human element of ensuring that your company has a suitable, working backup system rather than just a bunch of copies made by unrelated software, hardware and processes.

Archive for the ‘Security’ Category

Storage Tiering vs ILM

Posted by Preston on 2009-11-24

Over at StorageNerve, and on Twitter, Devang Panchigar has been asking “Is Storage Tiering ILM or a subset of ILM, but where is ILM?” I think it’s an important question with some interesting answers.

Devang starts with defining ILM from a storage perspective:

1) A user or an application creates data and possibly over time that data is modified.
2) The data needs to be stored and possibly be protected through RAID, snaps, clones, replication and backups.
3) The data now needs to be archived as it gets old, and retention policies & laws kick in.
4) The data needs to be search-able and retrievable NOW.
5) Finally the data needs to be deleted.

I agree with items 1, 3, 4 and 5 – as per previous posts, for what it’s worth, I believe that 2 belongs to a sister activity which I define as Information Lifecycle Protection (ILP) – something that Devang acknowledges as an alternative theory. (I liken the logic of the separation between ILM and ILP to that between operational production servers and support production servers.)

The above list, for what it’s worth, is actually a fairly astute/accurate summary of the involvement of the storage industry thus far in ILM. Devang rightly points out that Storage Tiering (migrating data between different speed/capacity/cost storage based on usage, etc.), doesn’t address all of the above points – in particular, data creation and data deletion. That’s certainly true.

What’s missing from ILM from a storage perspective are the components that storage can only peripherally control. Perhaps that’s not entirely accurate – the storage industry can certainly participate in the remaining components (indeed, in NAS systems, as a prime example, it’s absolutely necessary) – but it involves more than just the storage industry. It’s operating system vendors. It’s application vendors. It’s database vendors. It is, quite frankly, the whole kit and caboodle.

What’s missing in the storage-centric approach to ILM is identity management – or to be more accurate in this context, identity management systems. The brief outline of identity management is that it’s about moving access control and content control out of the hands of the system, application and database administrators, and into the hands of human resources/corporate management. So a system administrator could have total systems access over an entire host and all its data but not be able to open files that (from a corporate management perspective) they have no right to access. A database administrator can fully control the corporate database, but can’t access commercially sensitive or staff salary details, etc.

Most typically though, it’s about corporate roles, as defined in human resources, being reflected from the ground up in system access options. That is, when human resources set up a new employee as having a particular role within the organisation (e.g., “personal assistant”), that triggers the appropriate workflows to set up that person’s accounts and access privileges for IT systems as well.

If you think that’s insane, you probably don’t appreciate the purpose of it. System/app/database administrators I talk to about identity management frequently raise trust (or the perceived lack thereof) as an objection to such systems – i.e., they think that if the company they work for wants to implement identity management, it doesn’t trust the people who are tasked with protecting the systems. I won’t lie, I think in a very small number of instances this may be the case. Maybe 1%, maybe as high as 2%. But let’s look at the bigger picture: we, as system/application/database administrators, currently have access to such data not because we should have access to it, but because until recently there have been very few options in place to limit data access to only those who, from a corporate governance perspective, should have it. That said, most system/app/database administrators are highly ethical – they know that being able to access data doesn’t equate to actually accessing that data. (Case in point: as the engineering manager and sysadmin at my last job, if I’d been less ethical, I would have seen the writing on the wall long before the company fell down around my ears under financial stresses!)

Trust doesn’t wash in legal proceedings. Trust doesn’t wash in financial auditing – particularly in situations where accurate logs aren’t maintained in an appropriately secured manner to prove that person A didn’t access data X. The fact that the system was designed to permit A to access X (even as part of A’s job) is, in some financial, legal and data sensitivity areas, significant cause for concern.

Returning to the primary point though, it’s about ensuring that the people who have authority over someone’s role within a company (human resources/management) have control over the processes that configure the access permissions that person has. It’s also about making sure that those workflows are properly configured and automated so there’s no room for error.

So what’s missing – or what’s only at the barest starting point – is the integration of identity/access control with ILM (including storage tiering) and ILP. This, as you can imagine, is not an easy task. Hell, it’s not even a hard task – it’s a monumentally difficult task. It involves a level of cooperation and coordination between different technical tiers (storage, backup, operating systems, applications) that we rarely, if ever, see beyond the basic “must all work together or else it will just spend all the time crashing” perspective.

That’s the bit that gives the extra components – control over content creation and destruction. The storage industry on its own does not have the correct levels of exposure to an organisation in order to provide this functionality of ILM. Nor do the operating system vendors. Nor do the database vendors or the application vendors – they all have to work together to provide a total solution on this front.

I think this answers (indirectly) Devang’s question/comment on why storage vendors, and indeed, most of the storage industry, has stopped talking about ILM – the easy parts are well established, but the hard parts are only in their infancy. We are after all seeing some very early processes around integrating identity management and ILM/ILP. For instance, key management on backups, if handled correctly, can allow for situations where backup administrators can’t by themselves perform the recovery of sensitive systems or data – it requires corporate permissions (e.g., the input of a data access key by someone in HR, etc.) Various operating systems and databases/applications are now providing hooks for identity management (to name just one, here’s Oracle’s details on it.)

So no, I think we can confidently say that storage tiering in and of itself is not the answer to ILM. As to why the storage industry has for the most part stopped talking about ILM, we’re left with one of two choices – it’s hard enough that they don’t want to progress it further, or it’s sufficiently commercially sensitive that it’s not something discussed without the strongest of NDAs.

We’ve seen in the past that the storage industry can cooperate on shared formats and standards. We wouldn’t be in the era of pervasive storage we currently are without that cooperation. Fibre-channel, SCSI, iSCSI, FCoE, NDMP, etc., are proof positive that cooperation is possible. What’s different this time is the cooperation extends over a much larger realm to also encompass operating systems, applications, databases, etc., as well as all the storage components in ILM and ILP. (It makes backups seem to have a small footprint, and backups are amongst the most pervasive of technologies you can deploy within an enterprise environment.)

So we can hope that the reason we’re not hearing a lot of talk about ILM any more is that all the interested parties are either working on this level of integration, or even making the appropriate preparations themselves in order to start working together on this level of integration.

Fingers crossed people, but don’t hold your breath – no matter how closely they’re talking, it’s a long way off.


Posted in Architecture, General Technology, General thoughts, Security | Tagged: , , , , , , , , | 2 Comments »

Enhancing NetWorker Security: A theoretical architecture

Posted by Preston on 2009-11-18

It’s fair to say that no one backup product can be all things to all people. More generally, it’s fair to say that no product can be all things to all people.

Security has had a somewhat interesting past in NetWorker; much of the attention to security has, for a lot of the time, been to do with (a) defining administrators, (b) ensuring clients are who they say they are and (c) providing access controls for directed recoveries.

There are a bunch of areas, though, where NetWorker security has remained somewhat lacking. Not 100% lacking, just not complete. For instance, user accounts that are accessed for the purposes of module backup and recovery frequently need higher levels of authority than standard users. Equally, some sites want their <X> admins to be able to control as much as possible of the <X> backups, but not to have any administrator privileges over the <Y> backups. I’d like to propose an idea that, if implemented, would both improve security and make NetWorker more flexible.

The change would be to allow the definition of administrator zones. An “administrator zone” would be a subset of a datazone. It would consist of:

  1. User groups:
    • A nominated “administrator” user group.
    • A nominated “user” user group.
    • Any other number of nominated groups with intermediate privileges.
  2. A collection of the following:
    • Clients
    • Groups
    • Pools
    • Devices
    • Schedules
    • Policies
    • Directives
    • etc

These obviously would still be accessible in the global datazone for anyone who is a datazone administrator. Conceptually, this would look like the following:

Datazone with subset “administrator” zones

The first thing this should point out to you is that administrator zones could, if desired, overlap. For instance, in the above diagram we have:

  1. Minor overlap between Windows and Unix admin zones (e.g., they might both have administrative rights over tape libraries).
  2. Overlap between Unix and Oracle admin zones.
  3. Overlap between Windows and Oracle admin zones.
  4. Overlap between Windows and Exchange admin zones.
  5. Overlap between Windows and MSSQL admin zones.

Notably though, the DMZ Admin zone indicates that you can have some zones that have no overlap/commonality with other zones.

There’d need to be a few rules established in order to make this work. These would be:

  1. Only the global datazone can support “<x>@*” user or group definitions in a user group.
  2. If there is overlap between two zones, then the user will inherit the rights of the highest authority they belong to. I.e., if a user is editing a shared feature between the Windows and Unix admin zones, and is declared an admin in the Unix zone, but only an end-user in the Windows zone, then the user will edit that shared feature with the rights of an admin.
  3. Similarly to the above, if there’s overlap between privileges at the global datazone level and a local administrator zone, the highest privileges will “win” for the local resource.
  4. Resources can only be created and deleted by someone with data zone administrator privileges.
  5. Updates for resources that are shared between multiple administrator zones need to be “approved” either by an administrator from each overlapping administrator zone, or by a datazone administrator.

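Rules 2 and 3 – “highest privilege wins” for shared resources – could be modelled with a simple resolution function. Here’s a minimal sketch of that idea; the zone layouts, role names and privilege ranks are purely illustrative assumptions, not actual NetWorker configuration:

```python
# Toy model of the proposed "highest privilege wins" rule for overlapping
# admin zones. Ranks, zones and user names here are illustrative only.

RANK = {"user": 0, "intermediate": 1, "admin": 2}

def effective_privilege(user, resource, zones):
    """Strongest role the user holds in any zone containing the resource."""
    best = None
    for zone in zones:
        if resource not in zone["resources"]:
            continue
        for role, members in zone["members"].items():
            if user in members and (best is None or RANK[role] > RANK[best]):
                best = role
    return best

zones = [
    # Unix admin zone: jo administers, sam is an end user
    {"resources": {"Tape Library"}, "members": {"admin": {"jo"}, "user": {"sam"}}},
    # Windows admin zone: the roles are reversed
    {"resources": {"Tape Library"}, "members": {"user": {"jo"}, "admin": {"sam"}}},
]

print(effective_privilege("jo", "Tape Library", zones))   # admin in one zone wins -> "admin"
```

Under this model, a user who is an admin in any zone sharing the resource edits it with admin rights, which is exactly the behaviour rules 2 and 3 describe.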
Would this be perfect? Not entirely – for instance, it would still require a datazone administrator to create the resources that are then allocated to an administrator zone for control. However, this would prevent a situation occurring where an unprivileged user with “create” options could go ahead and create resources they wouldn’t have authority over. Equally, in an environment that permits overlapping zones, it’s not appropriate for someone from one administrator zone to delete a resource shared by multiple administrator zones. Thus, for safety’s sake, administrator zones should only concern themselves with updating existing resources.

How would the approval process work for edits of resources that are shared by overlapping zones? To start with, the resource that has been updated would continue to function “as is”, and a “copy” would be created (think of it as a temporary resource), with a notification used to trigger a message to the datazone administrators and the other, overlapping administrators. Once the appropriate approval has been done (e.g., an “edit” process in the temporary resource), then the original resource would be overwritten with the temporary resource, and the temporary resource removed.
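That copy-and-approve flow can be sketched in a few lines. Again, this is only a model of the proposal – the class, attribute names and the “datazone” shortcut are hypothetical, not any NetWorker interface:

```python
# Sketch of the proposed approval flow for a resource shared by overlapping
# admin zones: an edit lands in a pending copy, the live resource keeps
# functioning "as is", and the swap only happens once every overlapping zone
# (or a datazone administrator) has approved.

class SharedResource:
    def __init__(self, name, config, zones):
        self.name = name
        self.config = dict(config)   # live configuration
        self.zones = set(zones)      # admin zones sharing this resource
        self.pending = None          # the "temporary resource"
        self.approvals = set()

    def propose(self, new_config):
        """Record an edit as a pending copy; the original is untouched."""
        self.pending = dict(new_config)
        self.approvals = set()       # every overlapping zone must sign off

    def approve(self, zone):
        """Approve the pending edit; apply it once all zones have signed off."""
        if self.pending is None:
            return False
        if zone == "datazone":       # a datazone admin can approve outright
            self.approvals = set(self.zones)
        else:
            self.approvals.add(zone)
        if self.approvals >= self.zones:
            self.config, self.pending = self.pending, None
            return True
        return False

res = SharedResource("Tape Library", {"slots": 40}, ["Unix", "Windows"])
res.propose({"slots": 80})
res.approve("Unix")            # first approval: live config still unchanged
res.approve("Windows")         # last approval: pending copy swapped in
print(res.config)              # {'slots': 80}
```

The key property is that the live configuration is never half-edited: it either carries the old settings or, after the final approval, the complete new ones.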

So what sort of extra resources would we need to establish this? Well, we’ve already got user groups, which is a starting point. The next step is to define an “admin zone” resource, which has fields for:

  1. Administrator user group.
  2. Standard user group.
  3. “Other” user groups.
  4. Clients
  5. Groups
  6. Pools
  7. Schedules
  8. Policies
  9. Directives
  10. Probes
  11. Lockboxes
  12. Notifications
  13. Labels
  14. Staging Policies
  15. Devices
  16. Autochangers
  17. etc.

In fact, pretty much every resource except for the server resource itself, and licenses, should be eligible for inclusion in a localised admin zone. At its most basic, you might expect to see the following:

nsradmin> print type: NSR admin zone; name: Oracle
                        type: NSR admin zone;
                        name: Oracle;
              administrators: Oracle Admins;
                       users: Oracle All Users;
           other user groups: ;
                     clients: delphi, pythia;
                      groups: Daily Oracle FS, Monthly Oracle FS,
                              Daily Oracle DB, Monthly Oracle DB;
                       pools: ;
                   schedules: Daily Oracle, Monthly Oracle;
                    policies: Oracle Daily, Oracle Monthly;
                  directives: pythia exclude oracle, delphi exclude oracle;

To date, NetWorker’s administration focus has been far more global. If you’re an administrator, you can do anything to any resource. If you’re a user, you can’t do much with any resource. If you’ve been given a subset of privileges, you can use those privileges against all resources touched by those privileges.

An architecture that worked along these lines would allow for much more flexibility in terms of partial administrative privileges in NetWorker – zones of resources and local administrators for those resources would allow for more granular control of configuration and backup functionality, while still keeping NetWorker configuration maintained at the central server.

Posted in Architecture, Backup theory, NetWorker, Security | Tagged: , , , , | 2 Comments »

Preventing users seeing backups from other hosts

Posted by Preston on 2009-09-24

Something I’ve seen a few people complain about – and indeed that I’ve also complained about in the past – is that in high security environments, NetWorker allows end users on one host to see the backups done for other hosts. This is obviously a security concern.

After a brief discussion with EMC, it became obvious that this is readily changeable with only a couple of clicks of the mouse button – so I feel somewhat sheepish that I hadn’t picked up on it before. All you have to do is take away the “Monitor NetWorker” privilege from the Users usergroup.

Here’s the (to some environments) offending setting:

Monitor users privilege

Once that setting is unchecked, end users won’t be able to view the backups for other hosts – just their own.

Posted in NetWorker, Security | Tagged: | 4 Comments »

Avoiding 2GB saveset chunks

Posted by Preston on 2009-08-19

Periodically a customer will report to me that a client is generating savesets in 2GB chunks. That is, they get savesets like the following:

  • C:\ – 2GB
  • <1>C:\ – 2GB
  • <2>C:\ – 2GB
  • <3>C:\ – 1538MB

Under much earlier versions of NetWorker, this was expected; these days, it really shouldn’t happen. (In fact, if it does happen, it should be considered a potential error condition.)

The release notes for 7.4.5 suggest that if you’re currently experiencing chunking in the 7.4.x series, going to 7.4.5 may very well resolve the issue. However, if that doesn’t do the trick for you, the other way of doing it is to switch from nsrauth to oldauth authentication on the backup server for the client exhibiting the problem.

To do this, you need to fire up nsradmin against the client process on the server and adjust the NSRLA record. Here’s an example server output/session, using a NetWorker backup server of ‘tara’ as our example:

[root@tara ~]# nsradmin -p 390113 -s tara
NetWorker administration program.
Use the "help" command for help, "visual" for full-screen mode.
nsradmin> show type:; name:; auth methods:
nsradmin> print type: NSRLA
                        type: NSRLA;
                        name: tara.pmdg.lab;
                auth methods: ",nsrauth/oldauth";

So, what we want to do is adjust the ‘auth methods’ for the client that is chunking data, and we want to switch it to using ‘oldauth’ instead. Assuming we have a client called ‘cyclops’ that is exhibiting this problem, and we want to only adjust cyclops, we would run the command:

nsradmin> update auth methods: "cyclops,oldauth",",nsrauth/oldauth"
                auth methods: "cyclops,oldauth", ",nsrauth/oldauth";
Update? y
updated resource id

Once this has been done, it’s necessary to stop and restart the NetWorker services on the backup server for the changes to take effect.

So the obvious follow up questions and their answers are:

  • Why would you need to change the security model from nsrauth to oldauth to fix this problem? It seems that in some instances the security/authentication model can lead to NetWorker having issues with some clients that force a reversion to chunking. Switching to the oldauth method prevents this behaviour.
  • Should you just change every client to using oldauth? No – oldauth is being retired over time, and nsrauth is more secure, so it’s best to only do this as a last resort. Indeed, if you can upgrade to 7.4.5 that may be the better solution.

[Edit – 2009-10-27]

If you’re on 7.5.1, then in order to avoid chunking you need to be at least on cumulative patch cluster 5 for 7.5.1. If you’re one of those sites experiencing recovery problems from continuation/chunked savesets, you’ll need LGTsc31925 for whatever platform/release of 7.5.1 that you’re running.

Posted in NetWorker, Security | Tagged: , , , , , , , | Comments Off on Avoiding 2GB saveset chunks

What’s wrong with the NMC installation process?

Posted by Preston on 2009-08-17

There is, in my opinion, an unpleasant security hole in the NMC installation/configuration process.

The security hole is simple: it does not prompt for the administrator password on installation. This is inappropriate for a data protection product, and I think it’s something that EMC should fix.

The NMC installation process is slightly different depending on whether you’re working with 7.5.x or 7.4.x and lower.

For 7.4.x and lower, the process works as follows:

  • Install NetWorker management console.
  • (On Unix platforms, manually run the /opt/lgtonmc/bin/nmc_config file to initialise the configuration.)
  • Launch NMC.
  • Use the default username/password until you get around to changing the password.

For 7.5.x and higher installations, the process works as follows:

  • Install NetWorker management console.
  • First person to logon gets to set the administrator password.

In both instances, this represents a clear security threat to the environment – particularly when installing NetWorker on the backup server or another host that already has administrator access to the datazone – and needs to be managed carefully. Two clear options, depending on the level of trust you have within your environment, are:

  • Use firewall/network security configuration options to restrict access to the NMC console port (9000) to a single, known and trusted host, until you are able to log on and change the password.


  • Be prepared to log onto NMC as soon as the installation (or for Unix, installation/configuration) is complete and trust that you “get there first”.

In reality, the second option would not be declared secure by any security expert, but for small environments where the trust level is high, it may be acceptable for local security policies.

The real solution though is simple: EMC must change the NMC installation process to force the input of a secure administrator password at install time. That way, by the time the daemons are first started, they are already secured.

Posted in NetWorker, Security | Tagged: , , , , | Comments Off on What’s wrong with the NMC installation process?

Using SELinux with NetWorker

Posted by Preston on 2009-07-24

I’m not all that conversant with SELinux, and for the most part, disable it on systems that I configure simply because these days 99% of the systems I configure are within a lab and already heavily firewalled. When NetWorker 7.5 came out and the release notes explicitly stated that SELinux was not supported, it seemed inevitable that my involvement with SELinux would continue to decrease.

When SELinux was recently discussed on the NetWorker mailing list, I responded citing the release notes indicating it wasn’t supported. I was therefore surprised to discover there was a workaround. Responding to the thread, Rich Graves posted the following SELinux adjustments that are necessary to get NetWorker and SELinux working together. I present them unaltered, but can attest to having confirmed they do indeed work. Here’s what Rich had to say:

This has worked for me for about a year, on both client and server. The textrel_shlib change is fairly common for proprietary binaries.

semanage fcontext -a -t textrel_shlib_t "/usr/lib/nsr(/.*)?"
semanage fcontext -a -t var_log_t "/nsr/logs(/.*)?"
restorecon -R /usr/lib/nsr
restorecon -R /nsr/logs

Another approach for the logs is to edit syslog.conf and drop them in /var/log instead of /nsr/logs.

If you’re needing to work with NetWorker and SELinux, hopefully the above tips will help.

Posted in NetWorker, Security | Tagged: | 2 Comments »

The Dual Evils – Malware, and Malware Protection

Posted by Preston on 2009-07-05

I worry that I may in this entry come across as a smug Mac user, but that’s not my intent, so if you’re initially worried that’s where I’m heading, stick with me.

Much has been said about viruses and other malware over the last few years. Apple has certainly been quick to point out the relatively malware-free ecosystem it provides through its TV and internet commercials. I’ll be frank – it’s an ecosystem I enjoy being part of on the desktop, and at the server level I equally enjoy being in a (mostly) Linux/Solaris virus-free ecosystem as well.

Without a doubt, the entire malware industry is evil. The individuals and nefarious organisations that thrive on producing malware and subsequently using either the data collected or the systems hijacked for their own purposes are yet another sad and pathetic reflection on the state of collective human moral evolution.

An equally frustrating part of that entire ecosystem is the ‘solution’ – the anti-virus/anti-malware industry. You see, it seems to me that people who need to run vulnerable systems can spend hundreds or thousands of dollars on a system and then do one of the two following things:

  • ‘Allow’ it to be hijacked or otherwise misused by nefarious individuals or organisations for anything from identity and data theft through to coordinated denial of service attacks or hacking networks.
  • ‘Allow’ it to have anti-malware software running which will considerably compromise the performance of the machine, introduce additional layers of software interaction that may prevent other software and equipment from running correctly, and may periodically destroy the system itself.

In a long series of anti-virus software failures, The Register has an article at the moment on yet another cock-up, where the latest updates from an anti-virus software vendor result in serious damage being done to operating systems running the software. What’s disappointing is that this is not an uncommon scenario.

Yet not running anti-malware software and the latest patches on vulnerable systems hardly seems the solution either. Running my own web-server and gateway at home, I’m amazed (and horrified) at the continual stream of hack attempts, particularly at the height of big worms – I can honestly believe that at these times unpatched Windows XP systems, for instance, can be infected within seconds of connecting to the internet.

Those of us who work in backup are acutely aware of many of the aspects of the dual evils of malware and malware protection software. The evils of malware are easy to quantify – we have to clean up the mess left by them by way of data and/or system recoveries.

The relative failings of malware protection software, however, acutely impact backup and recovery in that the interplay between OS, anti-malware software and backup agent can be a difficult beast to coordinate. If the anti-malware software allows it, various settings have to be adjusted to prevent it from scanning what the backup software is reading and transferring. If the anti-malware software doesn’t allow those sorts of settings, performance is abysmal. One well-known vendor lagged years behind others and, instead of allowing the exclusion of particular processes, recommended that its protection agent be turned off during the backup process. Tests using the same hardware – first running Linux, then running Windows XP – were damning: 26MB/s throughput, averaged, for Linux; 24MB/s throughput, averaged, for Windows XP with the anti-malware agent turned off; 4.5MB/s throughput, averaged, for Windows XP with the anti-malware agent left on and constantly scanning the backup process. (Is it any wonder that some computer users feel endlessly compelled to upgrade their systems, given the continual performance drain created by anti-malware software?)

That’s just for backup. When it comes to recovery, anti-malware software can become even more insidiously interruptive if not configured correctly. The complex interplay between permissions of the account running the anti-malware software and the account running the backup agent, not to mention the account logged into by the user running the recovery, can result in bizarre recovery issues that simply should not happen.

I’m not suggesting the solution is to turn off anti-malware software, or that even having the world jump en masse to less vulnerable operating systems will do the trick – it’s likely that will just shift the problem as well.

At the moment we’re stuck with the dual evils – malware, and malware protection. The more vulnerable the operating system or environment, the more likely the case that you can’t live with one and can’t live without the other.

As a long-term computer user, I periodically get the disbelieving/incredulous reaction talking to younger computer users that aren’t aware of industry history. It seems that saying things like “I remember how amazed I was to get a 19K expansion cartridge for my Vic-20” is akin to someone saying “I still remember watching the Wright brothers make their first flight”.

I’d like to think one day there’ll be disbelieving and incredulous reaction from younger computer users to someone saying “I remember the time when malware and anti-malware software used to steal 90% of the processing power of computers…”

Posted in Aside, Security | Tagged: , , , | Comments Off on The Dual Evils – Malware, and Malware Protection

Are you Monitoring RAP?

Posted by Preston on 2009-07-03

Introduced in NetWorker v7 was a feature called “Monitor RAP”. There are two unfortunate aspects to this setting in the NetWorker server resource (“NSR”):

  • It is not enabled by default;
  • The setting name obfuscates the purpose of the setting for most users.

Personally, I would have preferred when this option was made available that it was called “Audit Changes”, and that it was enabled by default.

In the NetWorker management console, with diagnostic mode enabled, this option is available in the first tab of the NetWorker server properties:

Monitor RAP setting

With this setting enabled, NetWorker maintains a new log, rap.log, within the logs directory. This tracks changes that are made to the NetWorker configuration, as they are made.

Here’s an example:

06/30/2009 06:49:19 PM MONITOR_RAP: preston@archon CHANGED 'NSR group'
resource, Staging Development:
 autostart: Enabled;
 autostart: Disabled;

This tells us that at 18:49:19 on 30 June 2009, user ‘preston’ on host ‘archon’ changed the group ‘Staging Development’, changing ‘autostart’ from ‘Enabled’ to ‘Disabled’.
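For sites with many administrators, it can be handy to pull these entries into something structured for reporting. Here’s a minimal sketch; the line layout is inferred purely from the single sample above, so treat the pattern as an assumption rather than a documented rap.log format:

```python
import re

# Parse the header line of a MONITOR_RAP entry. The field layout is
# inferred from the sample entry shown above, not from a format spec.
ENTRY = re.compile(
    r"(?P<date>\d{2}/\d{2}/\d{4}) (?P<time>[\d:]+ [AP]M) MONITOR_RAP: "
    r"(?P<user>[^@\s]+)@(?P<host>\S+) (?P<action>\w+) '(?P<type>[^']+)'"
)

def parse_rap(text):
    """Yield a dict per MONITOR_RAP header line in a rap.log excerpt."""
    for line in text.splitlines():
        match = ENTRY.search(line)
        if match:
            yield match.groupdict()

sample = "06/30/2009 06:49:19 PM MONITOR_RAP: preston@archon CHANGED 'NSR group'"
for entry in parse_rap(sample):
    print(entry["user"], entry["action"], entry["type"])   # preston CHANGED NSR group
```

From there it’s a small step to, say, a daily summary of who changed which resource types.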

This means that from the NetWorker level*, you can easily keep track of who does what to the NetWorker configuration. Interestingly, you can also use this information to track self-changes to the system – i.e., where NetWorker updates its own configuration. As an example, if you use a license manager, then whenever NetWorker updates/checks its licenses against the license server, you’ll get entries in the logs such as:

06/30/2009 05:10:00 PM MONITOR_RAP: root@nox CHANGED 'NSR license' resource,
Autochanger Module, 40 slots/40:
 checksum: \
 checksum: \

Using the Monitor RAP setting allows you to easily monitor changes to the NetWorker configuration, and I believe that every NetWorker site should always have this setting enabled.

* Auditing is also available within NMC. For maximum auditing, I always recommend that both options be used.

Posted in NetWorker, Security | Tagged: , , | Comments Off on Are you Monitoring RAP?

Your datazone is only as secure as your NetWorker server

Posted by Preston on 2009-06-26

A topic I discuss in my book that’s worth touching on here is that of datazone security.

Backup is one of those enterprise components that touches on a vast amount of infrastructure; so much so that it’s usually one of the broadest-reaching pieces of software within an environment. As such, the temptation is always there to make it “as easy as possible” to configure. Unfortunately this sometimes leads to making it too easy to configure. By too easy, I mean insecure.

Regardless of the “hassle” that it creates, a backup server must be highly secured. Or to be perhaps even blunter – the entire security of everything backed up by your backup server depends on the security of your backup server. Having an insecure NetWorker server is like handing over the keys to your datacentre, as well as having the administrator/root password for every server stuck to each machine.

Thinking of it that way, do you really want the administrator list on your backup server to include, say, any of the following?

  • *@*
  • *@<host>
  • <user>*@

If your answer is yes, then you’re wrong*.

However, datazone security isn’t only about the administrator list (though that forms an important part). At bare minimum, your datazone should have the following security requirements:

  1. No wild-cards shall be permitted in administrator user list definitions (server, NMC).
  2. No client shall have an empty servers file (client).
  3. No wild-cards shall be permitted in remote access user list definitions (client resources).
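If you want a quick sanity check against rule 1, something like the following Python snippet will do. It assumes you’ve already extracted the administrator attribute (e.g., via nsradmin) into a plain list of strings; the entries shown are purely illustrative:

```python
def find_wildcard_admins(admin_entries):
    """Return administrator entries containing wild-cards (violating rule 1)."""
    return [entry for entry in admin_entries if "*" in entry]

# Hypothetical administrator list, as might be extracted from the server resource:
admins = ["root@backupserver", "*@*", "administrator@nmc", "preston*@"]
print(find_wildcard_admins(admins))
```

Run periodically against each datazone, a check like this catches the “temporary” wild-card entry that someone added during a crisis and forgot to remove.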

Note: With the advent of lockboxes in version 7.5, security options increase – it’s possible, for instance, to have passwords for application modules stored in such a way that only the application module for the designated host can retrieve the password.

* I do make allowance for some extreme recovery issues that have required users to enter wild-card administrators temporarily, where it was not possible to wait for a bug fix.

Posted in NetWorker, Policies, Security | Tagged: , , , , | Comments Off on Your datazone is only as secure as your NetWorker server

(In)securing your daemon.raw with nsr_render_log -z

Posted by Preston on 2009-06-04

As you may know, the jump from NetWorker 7.3 to 7.4 saw the introduction of a language/locale-neutral log format in NetWorker, referred to as “raw” format. The primary purpose of this format is to allow logs to be generated by NetWorker that can then be rendered into a support-addressable language for EMC.

One of the options for nsr_render_log is “-z”, which according to the man page:

-z   Obfuscate secure information. Hostnames, usernames and network
     addresses shall be aliased.

In theory, this replaces hostnames with neutral hostnames – e.g., the backup server gets renamed to ‘host1’.

If you’re relying on nsr_render_log to totally mask your site details, don’t. You still need to manually review the file and determine whether there are any references to hostnames, usernames, etc., that need to be modified.

Here’s a few examples of where details aren’t aliased:

  • Index paths in initial startup of the NetWorker server.
  • License count details in initial startup of the NetWorker server.
  • Entries of the form client:Saveset Name when referencing savesets starting, stopping, etc. This includes the server hostname, which “-z” is mainly meant to mask (e.g., you’ll get lines like: ‘host1 nsrd cerberus:index:mars’).
  • The infamous “NSR peer information” entries.
  • Usernames in entries for browsing recoveries and completing recoveries.
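If you do use -z, it’s worth at least verifying what leaked before sending the file off. A trivial Python check – the rendered excerpt and the names here are hypothetical, standing in for your real hostnames and usernames:

```python
def residual_references(rendered_text, sensitive_names):
    """Return any sensitive names that survived -z obfuscation."""
    return sorted(name for name in sensitive_names if name in rendered_text)

# Hypothetical excerpt from a -z rendered log, based on the leakage noted above:
rendered = "host1 nsrd cerberus:index:mars\npreston done browsing"
print(residual_references(rendered, ["cerberus", "mars", "preston", "archon"]))
```

Anything this returns is a reference that -z failed to alias, and needs manual treatment before the log leaves your site.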

While I don’t normally like to poke sticks at NetWorker, this isn’t a good implementation of security. Security by obfuscation never is, but if you say you’re going to hide hostnames and usernames, you should at least make every effort to do just that. In fact, using the Australian vernacular, this is a very half arsed implementation of an advertised feature.

In short, if you need to completely “secure” your daemon.raw output before sending it to your support provider, don’t rely on -z; instead, do a manual search and replace.

As a starting point, you may want to consider a procedure such as:

  1. Using nsradmin, extract a list of all client names.
  2. Search and replace each client name with an arbitrary name in the daemon.raw file.
  3. Search for “done browsing” and extract the unique usernames.
  4. Map those unique usernames to arbitrary usernames, and search and replace in the daemon.raw file.

That likely won’t replace everything, but it will give you a good starting point.
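The steps above can be sketched in a few lines of Python. The client names, usernames and log excerpt here are purely illustrative – in practice you’d feed in the lists gathered in steps 1 and 3:

```python
import re

def anonymise(log_text, client_names, usernames):
    """Steps 2 and 4: replace client names and usernames with arbitrary aliases.

    Longest names are replaced first, so that (say) a client 'marsden'
    isn't partially rewritten by a shorter client name like 'mars'.
    """
    mapping = {}
    for i, name in enumerate(sorted(client_names, key=len, reverse=True), 1):
        mapping[name] = f"host{i}"
    for i, user in enumerate(sorted(usernames, key=len, reverse=True), 1):
        mapping[user] = f"user{i}"
    for original, alias in mapping.items():
        log_text = re.sub(re.escape(original), alias, log_text)
    return log_text

# Hypothetical daemon log excerpt:
raw = "nsrd cerberus:index:mars starting\npreston done browsing"
print(anonymise(raw, ["cerberus", "mars"], ["preston"]))
```

Keep a copy of the mapping somewhere safe: if support quotes an aliased hostname back at you, you’ll want to translate it back to the real machine.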

Posted in NetWorker, Security | Tagged: | Comments Off on (In)securing your daemon.raw with nsr_render_log -z