NetWorker Blog

Commentary from a long-term NetWorker consultant and Backup Theorist

  • This blog has moved!

    This blog has now moved to nsrd.info/blog. Please jump across to the new site for the latest articles (and all old archived articles).

  • Enterprise Systems Backup and Recovery

    If you find this blog interesting, and either have an interest in or work in data protection/backup and recovery environments, you should check out my book, Enterprise Systems Backup and Recovery: A Corporate Insurance Policy. Designed for system administrators and managers alike, it focuses on features, policies, procedures and the human element of ensuring that your company has a suitable, working backup system rather than just a bunch of copies made by unrelated software, hardware and processes.

Archive for the ‘General thoughts’ Category

Show me the man pages

Posted by Preston on 2009-12-21

As a long-term Unix admin, it’s frustrating when there are commands on my systems for which there aren’t man pages. As a long-term NetWorker user, it’s equally frustrating when there aren’t man pages for particular NetWorker commands.

When I’ve discussed this in the past, the response has usually been “that’s because you shouldn’t be running that command”. That’s a bad response. The correct response would be something along the lines of “oops, we’ll write a man page for the next release that states:

That command is for internal NetWorker use only. It does X. It should not be run manually.

Having undocumented commands that give no output, hang, or produce strange results is just inviting frustration. Of just the nsr-prefixed commands on my current 7.6 lab server, the following are undocumented:

  • nsravamar
  • nsravtar
  • nsrbmr
  • nsrcatconfig
  • nsr_cp_install
  • nsrdmpix
  • nsrdsa_recover
  • nsrdsa_save
  • nsrfile
  • nsrfsra
  • nsrlmc
  • nsrndmp_2fh
  • nsrrcopy
  • nsrrcopy2
  • nsrvcbserv_tool

So out of the 55 nsr-prefixed commands I have on my server, 15 (or 27%) are undocumented.
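If you’d like to generate the same list on your own server, a loop along these lines will do it. This is only a sketch: I’m assuming the NetWorker binaries live in /usr/sbin and that your man supports the -w flag (print the man page path, exit non-zero when there isn’t one) – adjust both for your platform.

    # List nsr-prefixed commands that have no man page installed.
    # Assumes binaries in /usr/sbin and a man(1) that supports -w.
    for cmd in /usr/sbin/nsr*; do
        name=$(basename "$cmd")
        man -w "$name" >/dev/null 2>&1 || echo "$name"
    done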

Note to EMC: This does not produce a healthy level of trust. Please – get some documentation on these commands, even if that documentation gives us a one line overview of where they’re used and tells us not to run them ourselves.

Posted in General Technology, General thoughts, NetWorker | Comments Off on Show me the man pages

Aside – My Number #1 Pet Peeve in Interface Design

Posted by Preston on 2009-12-20

At University, I had a fascinating lecturer. His typical mode of dress was a t-shirt and stubbies, with bare feet wherever he went around the campus. He had a great big bushy beard that barrelled along in front of him, at times looking like a mane. He had a reputation for reciting the entirety of The Ballad of Eskimo Nell (a rather ribald poem – I’m not providing a link) – though by the time I was at University, he could only ever be encouraged to let fly with a single verse.

None of this though made him fascinating.

What made him fascinating was his name. For your reference, his full name is:

Simon

That’s right, Simon. Just a first name, no last name. You see, at some point in the past Simon had decided to legally remove his surname. So he literally did not have a last name.

Simon was a fascinating case study in the implications of unexpected input in computer programmes – a walking case study, in fact, almost exclusively because of his name. (This led me to having some joy in pointing out this XKCD cartoon to him a couple of years ago.) Every year, the people who made the phone book struggled to work out where to put him. He confounded registration systems everywhere, and reduced compulsory fields on forms to rubbish. Simon was a walking lesson in designing interfaces to handle unexpected input.

Not long after I finished University, I decided to change my name. Not anything so drastic as a removal of my surname; in fact, it was to add to my surname. You see, when my family emigrated to Australia several generations ago, they changed their surname from “de Guise” to just “Guise” so they could more easily assimilate. (So the story goes.)

Not being all that interested in blending in, and having an appreciation of the long term history of the name “de Guise”, I decided to reinstate it. (Some might question why I didn’t remove my middle name or at least change it from “Macdonald” – but that’s another story, to be told another time.)

It was at that point that I started to get an appreciation of the daily struggle Simon must have had in dealing with systems that were not adequately designed to work with non-conformist input.

I’ve learned over the years, therefore, that there are far too many programmers with names like:

  • Mary Jones
  • Bob Smith
  • David Peterson
  • Jane Davidson

And far too few programmers with names like:

  • Simon
  • Preston de Guise
  • Carlos de la Cruz
  • Peter O’Toole

(I had already learned, by the way, that there were far too few companies that simultaneously employed a McDonald, Macdonald and MacDonald.)

So here’s my pet peeve in interface design, stated as examples (a sketch of how these mutilations typically arise follows the list):

  • I am not Preston De Guise
  • I am not Preston De guise
  • I am not Preston de
  • I am not Preston De
  • I am not Preston Deguise
  • I am not Preston DeGuise
  • I am not De, Preston Guise
  • I am not even, any longer, just Preston Guise (and I certainly don’t have a middle name of de).
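
Most of those mutilations come from the same place: code that “helpfully” normalises whatever the user typed rather than storing it verbatim. Here’s a sketch of the pattern – the normalise function is entirely hypothetical, and the one-liner assumes GNU sed:

    # A hypothetical "helpful" normaliser of the kind I keep meeting:
    # capitalise the first letter of every word. Seems harmless, right?
    normalise() {
        echo "$1" | sed 's/\b\(.\)/\u\1/g'    # GNU sed: \u uppercases next char
    }

    normalise "Preston de Guise"    # prints "Preston De Guise" -- wrong
    normalise "Simon"               # survives, right up until the compulsory
                                    # surname field rejects him anyway

The fix isn’t smarter normalisation; it’s none at all. Store the name exactly as the person entered it, and don’t make any single name component compulsory.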

There are too many lazy and/or inconsiderate programmers out there. (There are also too many lazy and/or inconsiderate data entry operators.*)

If you’re a programmer, and want to get onto my good side in 2010, make sure your system gets my name right.


* In the past I have been guilty of name mutilation myself. In my last job I set up an account for someone, mistaking the first word of her surname for a middle name, and egregiously never got around to correcting it. It is actually something I genuinely regret.

Posted in Aside, General thoughts | Comments Off on Aside – My Number #1 Pet Peeve in Interface Design

This holiday season, give your inner geek a gift

Posted by Preston on 2009-12-11

As that time for giving approaches, it’s worth pointing out that with just a simple purchase, you can simultaneously give yourself and me a present. I’m assuming regular readers of the blog would like to thank me, and the best thanks I could get this year would be a nice spike in sales of my book before the end of the year.

“Enterprise Systems Backup and Recovery: A corporate insurance policy” is a book aimed not just at companies that are only now starting to look at implementing a comprehensive backup system. It’s equally aimed at companies who are already doing enterprise backup and need that extra direction to move from a collection of backup products to an actual backup system.

What’s a backup system? At its most simple, it’s an environment geared towards recovery. However, it’s not just about having the right software and the right hardware – it’s also about having:

  • The right policies
  • The right procedures
  • The right people
  • The right attitude

Most organisations actually do pretty well in relation to getting the right software and the right hardware. However, that’s only about 40% of achieving a backup system. It’s the human components – the remaining 60% – that are far more challenging and important to get right. For instance, at your company:

  • Are backups seen as an “IT” function?
  • Are backups assigned to junior staff?
  • Are results not checked until a recovery is required?
  • Are backups only tested in an ad hoc manner?
  • Are recurring errors that aren’t really errors tolerated?
  • Are procedures for requesting recoveries ad hoc?
  • Are backups only thought of after systems are added or expanded?
  • Are backups highly limited to “save space”?
  • Is the backup server seen as a “non-production” server?

If the answer to even a single one of those questions is yes, then your company doesn’t have a backup system, and your ability to guarantee recoverability is considerably diminished.

Backup systems, by integrating the technical and the human aspect of a company, provide a much better guarantee of recoverability than a collection of untested random copies that have no formal procedures for their creation and use.

And if the answer to even a single one of those questions is yes, you’ll get something useful and important out of my book.

So, if you’re interested in buying the book, you can grab it from Amazon using this link, or from the publisher, CRC Press, using this link.

Posted in Backup theory, General thoughts | 4 Comments »

How complex is your backup environment?

Posted by Preston on 2009-12-07

Something I’ve periodically mentioned to various people over the years is that when it comes to data protection, simplicity is King. This can be best summed up with the following rule to follow when designing a backup system:

If you can’t summarise your backup solution on the back of a napkin, it’s too complicated.

Now, the first reaction a lot of people have to that is “but if I do X and Y and Z and then A and B on top, then it’s not going to fit, but we don’t have a complex environment”.

Well, there are two answers to that:

  1. We’re not talking about a detailed technical summary of the environment; we’re talking about a high-level overview.
  2. If you still can’t give a high-level overview on the back of a napkin, it is too complicated.

Another way to approach the complexity issue, if you happen to have a phobia about using the backs of napkins: if you can’t give a 30-second elevator summary of your solution, it’s too complicated. (Something like: “We back everything up to disk nightly, clone to tape each morning, and send the tapes offsite; monthly fulls are kept for seven years.”)

If you’re struggling to think of why it’s important you can summarise your solution in such a short period of time, or such limited space, I’ll give you a few examples:

  1. You need to summarise it in a meeting with senior management.
  2. You need to summarise it in a meeting with your management and a vendor.
  3. You’ve got 5 minutes or less to pitch getting an upgrade budget.
  4. You’ve got a new assistant starting and you’re about to go into a meeting.
  5. You’ve got a new assistant starting and you’re about to go on holiday.
  6. You’ve got consultant(s) (or contractors) coming in to do some work and you’re going to have to leave them on their own.
  7. The CIO asks “so what is it?” as a follow-up question when (s)he accosts you in the hallway and asks, “Do we have a backup policy?”

I can think of a variety of other reasons, but the point remains – a backup system should not be so complex that it can’t be easily described. That’s not to say that it can’t (a) do complex tasks or (b) have complex components, but if the backup administrator can’t readily describe the functioning whole, then the chances are that there is no functioning whole – just a whole lot of mess.

Posted in Backup theory, General thoughts, Policies | Comments Off on How complex is your backup environment?

Introducing my new blog

Posted by Preston on 2009-11-28

As frequent visitors to my blog will know, I don’t buy into all the Cloud Hype that threatens to overwhelm the technology industry at the moment. While I’ve periodically written about the Cloud on this blog when something particularly unsettling has come up, I’ve decided that it’s time to fire up a new blog dedicated to providing an alternative view on Cloud Computing.

So, over at my new blog, you’ll find ongoing commentary about Cloud Computing that will be refreshingly free of the hype that we so often find ourselves exposed to on a daily basis.

I will strive to be as honest as possible, will willingly point out anything being done in Cloud initiatives that is fresh and new, and will be open to people trying to convince me that I’m wrong.

I’m a backup consultant. I don’t go for bleeding edge for the sake of it, I don’t buy into hype, and I don’t recommend or accept anything that jeopardises user data.

So, without further ado, please feel free to visit I Am The Anti-Cloud.

(Moving forward, unless something significantly overlaps NetWorker and The Cloud, I’ll not be posting about the Cloud on this blog.)

Posted in General Technology, General thoughts | Comments Off on Introducing my new blog

First thoughts – VMware Fusion 3 vs Parallels Desktop v5

Posted by Preston on 2009-11-27

As an employee of an EMC partner, I periodically get access to nifty demos as VMs. Unfortunately these are usually heavily geared towards running within a VMware hosted environment, and rarely if ever port across to Parallels.

While this wasn’t previously an issue – I had an ESX server in my lab – I’ve slowly become less tolerant of noisy computers, and so it’s been less desirable to have it switched on; that’s part of the reason why I went out and bought a Mac Pro. (Honestly, PC server manufacturers just don’t even try to make their systems quiet. How Dull.)

With the recent upgrade to Parallels v5 being a mixed bag (much better performance, but Coherence broken for 3+ weeks whenever multiple monitors are attached), on Thursday I decided I’d had enough and felt it was time to at least start trying VMware Fusion. As I only have one VM on my MacBook Pro, as opposed to 34 on my Mac Pro, I felt that testing Fusion out on the MacBook Pro to start with would be a good idea.

[Edit 2009-12-08 – Parallels tech support came through, the solution is to decrease the amount of VRAM available to a virtual machine. Having more than 64MB of VRAM assigned in v5 currently prevents Parallels from entering Coherence mode.]

So, what are my thoughts on it so far, after a day of running with it?

Advantages over Parallels Desktop:

  • VMware’s Unity feature in v3 isn’t broken (as opposed to Coherence with dual monitors currently being dead).
  • VMware’s Unity feature actually merges Coherence and Crystal without needing to just drop all barriers between the VM and the host.
  • VMware Fusion will happily install ESX as a guest machine.
  • (For the above reason, I suspect, though I’ve not yet had time to test, that I’ll be able to install all the other cool demos I’ve got sitting on a spare drive)
  • VMware’s Unity feature extends across multiple monitors in a way that doesn’t suck. Coherence, when it extends across multiple monitors, extends the Windows Task Bar across them in the same position. This means it can run across the middle of the secondary monitor, depending on how your monitors are laid out. (Maybe Coherence in v5 works better … oops, no, wait, it doesn’t work at all for multiple monitors, so I can’t even begin to think that.)

Areas where Parallels kicks Fusion’s Butt:

  • Even under Parallels Desktop v4, Coherence mode was significantly faster than Unity. I’m talking seamless window movement in Coherence, with noticeable ghosting in Unity. It’s distracting; I can live with it, but it’s pretty shoddy.
  • For standard Linux and Windows guests, I’ve imported at least 30 different machines from VMware ESX and VMware Server hosted environments into Parallels Desktop. Not once did I have a problem with “standard” machines. I tried to use VMware’s import utility this morning on both a Windows 2003 guest and a Linux guest and both were completely unusable. The Windows 2003 guest went through a non-stop boot cycle where after 5 seconds or so of booting it would reset. The Linux guest wouldn’t even get past the LILO prompt. Bad VMware, very Bad.
  • When creating pre-allocated disks, Parallels is at least twice as fast as Fusion. Creating a pre-allocated 60GB disk this morning took almost an hour – that’s someone’s idea of a bad joke. Tests creating a few other drives all exhibited similarly terrible performance.
  • Interface (subjective): Parallels Desktop v5 is beautiful – it’s crisp and clean. VMware Fusion’s interface looks like it’s been cobbled together with sticks and duct tape.

Areas where Desktop Virtualisation continues to suck, no matter what product you use:

  • Why do I have to buy a server-class virtualisation product to simulate turning the monitor off and putting the keyboard away? That’s not minimising the window – it’s closing the window – and I should be able to do that regardless of what virtualisation software I’m running.
  • Why does the default for new drives remain splitting them into 2GB chunks? Honestly, I have no sympathy for anyone still running an OS old enough that it can’t (as the virtual machine host) support files bigger than 2GB. At least give me a preference to turn the damn behaviour off.
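
For what it’s worth, Fusion’s bundled vmware-vdiskmanager will at least create single-file disks from the command line. A sketch only – the path below is from my install, so treat it as an assumption and locate the tool on your own system first:

    # Create a 60GB pre-allocated disk as a single file (-t 2) rather than
    # 2GB chunks; -t 0 would give a growable single-file disk instead.
    # The path is an assumption -- find the binary on your own install.
    VDM="/Library/Application Support/VMware Fusion/vmware-vdiskmanager"
    "$VDM" -c -s 60GB -a lsilogic -t 2 bigdisk.vmdk

A preference in the GUI would still be the right answer, of course.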

I’ll be continuing to trial Fusion for the next few weeks before I decide whether I want to transition my Mac Pro from Parallels Desktop to Fusion. The big factor will be whether the advantages of running more interesting operating systems (e.g., ESX) within the virtualisation system are worth the potential hassle of having to recreate all my VMs, given how terribly VMware Fusion’s import routine works…

Posted in Aside, General thoughts | 11 Comments »

Nybbles

Posted by Preston on 2009-11-26

If you thought the storage blogosphere had seen the second coming this week, you’d be right. Well, the second coming of Drobo, that is. With new connectivity options and capacity for more drives, Drobo has had so many reviews this week that I’ve struggled at times to find non-Drobo things to read. (That being said, the new versions do look nifty, and with my power bills shortly cutting over to high on-peak costs and high “not quite on-peak” costs, one or two Drobos may just do the trick as far as reducing the high number of external drives I have running at any given point in time.)

Tearing myself away from the Drobo news, over at Going Virtual, Brian has an excellent overview of NetWorker v7.6 Virtualisation Features. (I’m starting to think that the main reason why I don’t get into VCB much, though, is the ongoing limited support for anything other than Windows.)

At The Backup Blog, Scott asks the perennial question, Do You Need Backup? The answer, unsurprisingly, is yes – that was a given. What remains depressing is that backup consultants such as Scott and myself still need to answer that question!

StorageZilla has started Parting Shot, a fairly rapid fire mini-blogging area with frequent updates that are often great to read, so it’s worth bookmarking and returning to it frequently.

Over at PenguinPunk, Dan has been having such a hard time with Mozy that it’s making me question my continued use of them – particularly when bandwidth in Australia is often index-locked to the price of gold. [Edit: Make that has convinced me to cancel my use of them, particularly in light of a couple of recent glitches I’ve had myself with it.]

Palm continues to demonstrate why it’s a dead company walking with the latest mobile backup scare coming from their department. I’d have prepared a blog entry about it, but I don’t like blogging about dead things.

Grumpy Storage asks for comments and feedback on storage LUN sizings and standards. I think a lot of it is governed by the question “how long is a piece of string”, but there are some interesting points regarding procurement and performance that are useful to have stuck in your head next time you go to bind a LUN or plan a SAN.

Finally, the Buzzword Compliance (aka “Yawn”) award goes to this quote from Lauren Whitehouse of the “Enterprise Strategy Group” that got quoted on half a zillion websites covering EMC’s release of Avamar v5:

“Data deduplicated on tape can expire at different rates — CommVault and [IBM] TSM have a pretty good handle on that,” she said. “EMC Avamar positions the feature for very long retention, but as far as a long-term repository, it would seem to be easy for them to implement a cloud connection for EMC Avamar, given their other products like Mozy, rather than the whole dedupe-on-tape thing.”

(That Lauren quote, for the record, came from Search Storage – but it reads the same pretty much anywhere you find it.)

Honestly, Cloud Cloud Cloud. Cloud Cloud Cloud Cloud Cloud. Look, there’s a product! Why doesn’t it have Cloud!? As you can tell, my Cloud Filter is getting a little strained these days.

Don’t even get me started on the crazy assumption that just because a company owns A and B they can merge A and B with the wave of a magic wand. Getting two disparate development teams to merge disparate code in a rush, rather than as a gradual evolution, is usually akin to seeing if you can merge a face and a brick together. Sure, it’ll work, but it won’t be pretty.

Posted in Aside, General thoughts | Comments Off on Nybbles

Storage Tiering vs ILM

Posted by Preston on 2009-11-24

Over at StorageNerve, and on Twitter, Devang Panchigar has been asking Is Storage Tiering ILM or a subset of ILM, but where is ILM? I think it’s an important question with some interesting answers.

Devang starts with defining ILM from a storage perspective:

1) A user or an application creates data and possibly over time that data is modified.
2) The data needs to be stored and possibly be protected through RAID, snaps, clones, replication and backups.
3) The data now needs to be archived as it gets old, and retention policies & laws kick in.
4) The data needs to be search-able and retrievable NOW.
5) Finally the data needs to be deleted.

I agree with items 1, 3, 4 and 5 – as per previous posts, for what it’s worth, I believe that 2 belongs to a sister activity which I define as Information Lifecycle Protection (ILP) – something that Devang acknowledges as an alternative theory. (I liken the separation between ILM and ILP to that between operational production servers and support production servers.)

The above list, for what it’s worth, is actually a fairly astute/accurate summary of the involvement of the storage industry thus far in ILM. Devang rightly points out that Storage Tiering (migrating data between different speed/capacity/cost storage based on usage, etc.), doesn’t address all of the above points – in particular, data creation and data deletion. That’s certainly true.

What’s missing from ILM from a storage perspective are the components that storage can only peripherally control. Perhaps that’s not entirely accurate – the storage industry can certainly participate in the remaining components (indeed, particularly in NAS systems it’s absolutely necessary, as a prime example) – but it’s more than just the storage industry. It’s operating system vendors. It’s application vendors. It’s database vendors. It is, quite frankly, the whole kit and caboodle.

What’s missing in the storage-centric approach to ILM is identity management – or to be more accurate in this context, identity management systems. The brief outline of identity management is that it’s about moving access control and content control out of the hands of the system, application and database administrators, and into the hands of human resources/corporate management. So a system administrator could have total systems access over an entire host and all its data but not be able to open files that (from a corporate management perspective) they have no right to access. A database administrator can fully control the corporate database, but can’t access commercially sensitive or staff salary details, etc.

Most typically though, it’s about corporate roles, as defined in human resources, being reflected from the ground up in system access options. That is, when human resources sets up a new employee as having a particular role within the organisation (e.g., “personal assistant”), that triggers the appropriate workflows to set up the person’s accounts and access privileges for IT systems as well.

If you think that’s insane, you probably don’t appreciate the purpose of it. System/app/database administrators I talk to about identity management frequently raise the issue of trust (or the perceived lack thereof) in such systems – i.e., they think that if the company they work for wants to implement identity management, it doesn’t trust the people who are tasked with protecting the systems. I won’t lie: in a very small number of instances this may be the case. Maybe 1%, maybe as high as 2%. But let’s look at the bigger picture here – we, as system/application/database administrators, currently have access to such data not because we should, but because until recently there have been very few options in place to limit data access to only those who, from a corporate governance perspective, should have it. To their credit, most system/app/database administrators are highly ethical – they know that being able to access data doesn’t equate to actually accessing it. (Case in point: as the engineering manager and sysadmin at my last job, if I’d been less ethical, I would have seen the writing on the wall long before the company fell down around my ears under financial stresses!)

Trust doesn’t wash in legal proceedings. Trust doesn’t wash in financial auditing. Particularly in situations where accurate logs aren’t maintained in an appropriately secured manner to prove that person A didn’t access data X, the fact that the system was designed to permit A to access X (even as part of A’s job) is, in some financial, legal and data-sensitivity areas, significant cause for concern.

Returning to the primary point though: it’s about ensuring that the people who have authority over someone’s role within a company (human resources/management) have control over the processes that configure the access permissions that person has. It’s also about making sure that those workflows are properly configured and automated so there’s no room for error.

So what’s missing – or what’s only at the barest starting point – is the integration of identity/access control with ILM (including storage tiering) and ILP. This, as you can imagine, is not an easy task. Hell, it’s not even a hard task – it’s a monumentally difficult task. It involves a level of cooperation and coordination between different technical tiers (storage, backup, operating systems, applications) that we rarely, if ever, see beyond the basic “must all work together or else it will just spend all the time crashing” perspective.

That’s the bit that gives the extra components – control over content creation and destruction. The storage industry on its own does not have the correct levels of exposure to an organisation in order to provide this functionality of ILM. Nor do the operating system vendors. Nor do the database vendors or the application vendors – they all have to work together to provide a total solution on this front.

I think this answers (indirectly) Devang’s question/comment on why storage vendors, and indeed most of the storage industry, have stopped talking about ILM – the easy parts are well established, but the hard parts are only in their infancy. We are, after all, seeing some very early processes around integrating identity management and ILM/ILP. For instance, key management on backups, if handled correctly, can allow for situations where backup administrators can’t by themselves perform the recovery of sensitive systems or data – it requires corporate permissions (e.g., the input of a data access key by someone in HR). Various operating systems and databases/applications are now providing hooks for identity management (to name just one, here are Oracle’s details on it).

So no, I think we can confidently say that storage tiering in and of itself is not the answer to ILM. As to why the storage industry has for the most part stopped talking about ILM, we’re left with one of two choices – it’s hard enough that they don’t want to progress it further, or it’s sufficiently commercially sensitive that it’s not something discussed without the strongest of NDAs.

We’ve seen in the past that the storage industry can cooperate on shared formats and standards. We wouldn’t be in the era of pervasive storage we currently are without that cooperation. Fibre-channel, SCSI, iSCSI, FCoE, NDMP, etc., are proof positive that cooperation is possible. What’s different this time is the cooperation extends over a much larger realm to also encompass operating systems, applications, databases, etc., as well as all the storage components in ILM and ILP. (It makes backups seem to have a small footprint, and backups are amongst the most pervasive of technologies you can deploy within an enterprise environment.)

So we can hope that the reason we’re not hearing a lot of talk about ILM any more is that all the interested parties are either working on this level of integration, or at least making the appropriate preparations to start working together on it.

Fingers crossed people, but don’t hold your breath – no matter how closely they’re talking, it’s a long way off.

Posted in Architecture, General Technology, General thoughts, Security | 2 Comments »

Can you trust Azure?

Posted by Preston on 2009-11-18

So The Register has a story about how Microsoft is edging closer to delivering its cloud-based system, Azure.

It seems inept that through the entire article, there wasn’t a single mention of the Sidekick Debacle. As you may remember, that debacle was sponsored by ‘Danger’, a Microsoft subsidiary. If you think Microsoft weren’t involved because Danger was a subsidiary, think again.

If we can learn anything from this, it’s that too many people like to close one eye and half shut the other one to make sure they don’t see all those dark and dangerous storm clouds racing around their silver linings.

Based on Microsoft’s track record, I wouldn’t trust Azure for a minute with a KB of my data even if they were paying me. Not until there’s an industry-wide alliance for certifying cloud-based solutions and ensuring vendors actually treat customer data as if it were their own most sensitive and important data. Not until Microsoft are a gold member of that alliance and have come out of their first two audits with flying colours.

Until then when it comes to Azure, all I see are dark Clouds with no silver linings.

Posted in Aside, General Technology, General thoughts | 2 Comments »

Lessons I’ve recently learned…

Posted by Preston on 2009-11-11

When I was at University, a philosophy lecturer remarked rather sagely that University is the last place people can go to learn for the sake of learning.

That’s sort of correct, but not always so. People can fumble through their jobs on a day-to-day basis learning only what they have to, but they can also work on the basis of trying to soak up as much information as they can along the way. I’m not always a knowledge sponge – particularly if my caffeine quota is on the light side for the day – but I like to think I learn the odd thing here and there.

In the spirit of knowledge acquisition, here’s a few smaller things I’ve learned recently:

  • When simulating network connectivity problems, there’s a big difference between yanking the network cable and shutting down the network interface. (I was doing the interface shutdown, another person was doing the network cable unplug – and our results didn’t correlate.) Lesson: When escalating a case to vendor support, always spell out how you’re simulating the “comms failure” a customer is having.
  • The ‘bigasm’ utility starts to fall in a heap and becomes extremely unreliable once you exceed about 2100 GB of data generated for a single file. Lesson: When setting out to generate 2.3+ TB of backup data, create a bunch of files and have a bigasm directive generate a smaller amount of data per file (there’s a sketch of this after the list).
  • When setting up tests that will take a couple of days to run, always triple check what you’re about to do before you start. Lesson: If you make a typo of 250 files at 100 GB each instead of 250 files at 10 GB each, bigasm/NetWorker won’t infer what you really meant.
  • There’s a hell of a difference between Solaris 10 AMD release 2 and release 8. Lesson: If wanting to get a Solaris 10 AMD 64-bit OS working in Parallels Desktop for Mac v5 with networking, go for release 8. It will save many forehead bruises.
  • ext3 is about as “modern” a filesystem as I am an elite sportsperson. Lesson: If wanting to achieve decent operational activities with backup to disk under Linux, use XFS instead of ext3.
  • All eSATA is not created equal. Lesson: When using a motherboard SATA -> eSATA converter, make sure the dual-drive dock you order doesn’t present its drives via a port multiplier – most motherboard SATA ports don’t support port multiplication.
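
On the bigasm point, the workaround looks roughly like the following. A sketch only – I’m quoting the directive syntax from memory, so verify it against the nsr(5) man page before relying on it:

    # Pre-create 250 placeholder files, then let a local .nsr directive
    # have bigasm inflate each one at backup time.
    # Directive syntax quoted from memory -- check the nsr(5) man page.
    mkdir -p /testdata && cd /testdata
    for i in $(seq 1 250); do
        touch "file$i"
    done
    cat > .nsr <<'EOF'
    << . >>
    bigasm -S10g: file*
    EOF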

Posted in Basics, General thoughts, NetWorker | Comments Off on Lessons I’ve recently learned…