INFO-VAX Fri, 21 Mar 2008 Volume 2008 : Issue 162

Contents:
Re: Can't get into SRM on PWS 433au
Re: Divining the full pathname of a file, all logicals translated
Re: Divining the full pathname of a file, all logicals translated
Re: Divining the full pathname of a file, all logicals translated
Re: Divining the full pathname of a file, all logicals translated
Re: Please critique my backup practices
Re: Please critique my backup practices
Re: Please critique my backup practices
Re: Please critique my backup practices
Re: Please critique my backup practices
Re: Please critique my backup practices
Re: Please critique my backup practices
Re: Revised time in DIR / FULL output
Re: RIP Arthur C. Clarke
Re: Too many files in one directory (again)
Re: Too many files in one directory (again)
Re: Too many files in one directory (again)
RE: Too many files in one directory (again)
Re: Too many files in one directory (again)
Re: Too many files in one directory (again)
Re: Too many files in one directory (again)
Re: Too many files in one directory (again)
Re: Too many files in one directory (again)
Re: Too many files in one directory (again)
Re: VMS Mail translates incoming tilde character into a dollar sign.
Re: VMS Mail translates incoming tilde character into a dollar sign.
Re: What sysgen param needs to be changed?
RE: What sysgen param needs to be changed?
RE: What sysgen param needs to be changed?
Re: What sysgen param needs to be changed?
RE: What sysgen param needs to be changed?

----------------------------------------------------------------------

Date: 21 Mar 2008 07:23:17 GMT
From: "David Weatherall"
Subject: Re: Can't get into SRM on PWS 433au
Message-ID: <64h635F2ajlevU1@mid.individual.net>

John Santos wrote:
> Bob Koehler wrote:
>> In article , John Santos writes:
>>
>>> Label from 0, count from 1, no matter what base you are using.
>>>
>>> Type 0 - Don't understand binary, 01 types
>>> Type 1 - Do understand binary, 10 types.
>>
>> We're not all !@#$%^&*() C programmers here. Counting starts at
>> 1, if you actually have any.
>
> Hey, watch it bud! I'm no stinking C programmer. Macro-11. Real
> programmers program in octal. (OK, Macro-32 kind of forces you to
> use hex, and counting starts at 0, so you don't have to special-case
> when there aren't any.) Do FORTRAN arrays start at 1? It's been about
> 33 years, brain cells have died :-(

I finally replaced the remaining octal constants in our DEC-Fortran
code with hexadecimal just this month. The code started out on DOS-11
in the '70s (IIRC - it predates my employment on the project, which
started 29 years ago :-)

Fortran arrays start at 1 by default, but you can start them at 0 with
the array(0:n) syntax. Very useful for lots of things that really do
start at 0, e.g. 1553 MilBus word numbers.

I'm sure Bob was kidding.

Cheers - Dave.
--

------------------------------

Date: Fri, 21 Mar 2008 09:21:42 +0200
From: "Teijo Forsell"
Subject: Re: Divining the full pathname of a file, all logicals translated
Message-ID:

"Antonio Carlini" wrote in message
news:Xns9A66EE72E7524arcarliniONieeorg@80.5.182.99...
> On 19 Mar 2008, you wrote in comp.os.vms:
>
>> This returns:
>>
>> NODE$DKA0:[SYS0.SYSCOMMON.][SYSMGR]LOGIN.COM;12
>>
>> That's workable but I really need it to be normalized as
>>
>> NODE$DKA0:[SYS0.SYSCOMMON.SYSMGR]LOGIN.COM;12
>
> When I've needed to do this I've used lexical functions:
>
> Given string = "NODE$DKA0:[SYS0.SYSCOMMON.][SYSMGR]LOGIN.COM;12"
>
> loc = F$LOCATE(".][", string)
> len = F$LENGTH(string)
> IF loc .LT. len
> THEN
>    string = F$EXTRACT(0,loc,string) + "." + F$EXTRACT(loc+3,len,string)
> ENDIF
>
> Not terribly concise, but no image activation involved!
>
> I don't recall if it's possible to get more than one ".]["
> sequence in a string ... if it is then you may need a loop.
>
> Antonio

Also can be done simply:

string = string - "]["

Regards,
Teijo

------------------------------

Date: Fri, 21 Mar 2008 08:41:33 -0700 (PDT)
From: Ken.Fairfield@gmail.com
Subject: Re: Divining the full pathname of a file, all logicals translated
Message-ID: <692bd5d0-fcdc-4df7-9279-1955a49d7022@e23g2000prf.googlegroups.com>

On Mar 20, 8:05 am, koeh...@eisner.nospam.encompasserve.org (Bob
Koehler) wrote:
> In article , Antonio Carlini writes:
>
>> I don't recall if it's possible to get more than one ".]["
>> sequence in a string ... if it is then you may need a loop.
>
> No, but it is legitimate to have ".><", ".>[", or ".]<" at that
> location unless you're sure that the string has been returned from
> one of VMS' parsers. (File names can be entered with <> for
> directory delimiters, but they are always translated to [] during
> parsing.)

True, which is why I used to write this sort of thing so:

$ string = string - "][" - "]<" - ">[" - "><"

That pretty much covers all cases. And "subtracting" a substring that
doesn't exist from another string is a no-op, so no worries. :-)

   -Ken

------------------------------

Date: Fri, 21 Mar 2008 11:31:28 -0500
From: David J Dachtera
Subject: Re: Divining the full pathname of a file, all logicals translated
Message-ID: <47E3E2DF.4376A267@spam.comcast.net>

Antonio Carlini wrote:
>
> David J Dachtera wrote in
> news:47E1C21E.953D8662@spam.comcast.net:
>
>> Antonio's solution will work, of course.
>>
>> Myself, I'd lean toward Rich Gilbert's idea of simply reducing "]["
>> out of the string unconditionally:
>
> So would I now that I see how obvious it is :-)
>
> Too much Perl and Ruby lately I guess!

I just had a conversation yesterday with a Cerner AIX guy who still
does some VMS. He said he found DCL harder to code in than shell
scripting.

*SIGH*

David J Dachtera
(formerly dba) DJE Systems

------------------------------

Date: Fri, 21 Mar 2008 13:13:43 -0400
From: "Richard B. Gilbert"
Subject: Re: Divining the full pathname of a file, all logicals translated
Message-ID: <47E3ECC7.7060805@comcast.net>

David J Dachtera wrote:
> Antonio Carlini wrote:
> [...]
>
> I just had a conversation yesterday with a Cerner AIX guy who still
> does some VMS. He said he found DCL harder to code in than shell
> scripting.
>
> *SIGH*

I think that all that proves is that you can grow accustomed to
anything! I can do either DCL or shell but I find DCL easier even
though it requires more typing.
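
For reference, the whole recipe from this thread fits in three lines of
DCL. A minimal sketch only - SYS$MANAGER:LOGIN.COM stands in for any
filespec, and the trailing trims cover the <> delimiter variants Bob
mentioned:

$ ! Let F$PARSE translate all logicals, including concealed ones,
$ ! then strip the rooted-directory delimiters from the result.
$ file = F$PARSE("SYS$MANAGER:LOGIN.COM",,,,"NO_CONCEAL")
$ file = file - "][" - "]<" - ">[" - "><"
$ SHOW SYMBOL file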
------------------------------

Date: 21 Mar 2008 08:26:16 -0600
From: Kilgallen@SpamCop.net (Larry Kilgallen)
Subject: Re: Please critique my backup practices
Message-ID:

In article , helbig@astro.multiCLOTHESvax.de (Phillip Helbig---remove
CLOTHES to reply) writes:
> In article
> <26eb56f6-ec7b-42be-bf46-64def0b22f3e@m44g2000hsc.googlegroups.com>,
> tadamsmar writes:
>
>> To do a backup, I pop a drive out of a shadowset, back it up and put
>> it back.
>
> Are there any open files on the shadow set?

If there are, then quiescing the disk is the way to improve this
process.

The advantage of breaking the shadow set over just doing a backup is
that breaking the shadow set gives a quick way to snapshot a disk that
has been momentarily quiesced.

------------------------------

Date: Fri, 21 Mar 2008 13:37:35 -0000
From: "John Wallace"
Subject: Re: Please critique my backup practices
Message-ID: <13u7ehd2lf5005d@corp.supernews.com>

"John Santos" wrote in message
news:MPG.224cae69cfdc8d4b989793@news.bellatlantic.net...
> In article <13u5gub59gtbmca@corp.supernews.com>, johnwallace4
> @yahoo.spam.co.uk says...
>>
>> "tadamsmar" wrote in message
>> news:26eb56f6-ec7b-42be-bf46-64def0b22f3e@m44g2000hsc.googlegroups.com...
>>> To do a backup, I pop a drive out of a shadowset, back it up and
>>> put it back.
>>>
>>> The problem is, I don't record the backup dates.
>>>
>>> I have an incremental that runs every night that does record backup
>>> dates.
>>>
>>> If I had to restore, I would apply the last image and all the
>>> incrementals after it. But I guess I would get some extra files.
>>>
>>> This is easy, I never have to shut down, but what are the gotchas?
>>>
>>> Note that if I am *planning* to do a restore I don't use
>>> incrementals, I just use a fresh image backup.
>>>
>>> I have never had to do an emergency image restore using
>>> incrementals.
>>
>> Give the community some more background and you may get better
>> answers. Relevant practices may vary depending on what you're backing
>> up (a system disk, a generic data disk with various files on it, a
>> "pure database" disk with nothing on it except the database itself,
>> in which case it may have its own backup tools...).
>>
>> E.g. when I cared about backups, in a software development
>> environment, it was always a goal to have files on at least two sets
>> of media (even files which may only have existed for a couple of
>> days), so that if there was a problem and one set of media wasn't
>> usable for whatever reason, the relevant files would still be
>> recoverable from another set. This was done by abandoning the usual
>> "incremental" or "differential" schemes and doing a weekly full
>> backup and a daily backup with files created in the previous two (or
>> do I mean three) days, so each new file would go on the following
>> day's daily backup, AND the daily one after that. Others might have a
>> "version management" tool in place to achieve the same end result,
>> which might have changed the backup requirement. One backup strategy
>> does not necessarily fit all, not comfortably anyway.
>>
>> Usually the only time you need to completely shut down VMS for doing
>> a backup is if you want a clean image backup of your system disk -
>> always assuming that any applications you may have active can be
>> persuaded to close any files they may have opened, without shutting
>> VMS down.
>> For example, BASEstar (which you have previously mentioned) sometimes
>> has lots of global section files, which should disappear (or at least
>> close cleanly) when you shut down BASEstar. Then again, some BASEstar
>> installations I have known have needed BASEstar running 24x7x365.
>> Keeping the system up and running was more important than the small
>> risk of global sections being inconsistent on the backup (generally
>> the global sections could be recreated correctly and simply from
>> other data within BASEstar). Other applications may not all be so
>> well behaved. You need to understand your needs and relevant
>> application behaviours.
>>
>> Regards
>> John
>
> I want to second everything John said...
>
> The split-the-shadow-set, backup, rejoin-the-shadow-set method
> can be useful and reasonably safe, but you have to understand your
> applications.
>
> It's like hot-splicing an electrical circuit or mid-air refueling:
> if you have to ask, you probably shouldn't be doing it!
>
> If you can quiesce your application (close all files, or at least
> flush all I/O buffers, or activate a recovery journal, or any of a
> dozen other techniques) and hold it there for a little while (10
> seconds to a minute or so, depending on how automated everything is),
> you can dismount one member of the shadow set and be guaranteed it is
> complete and consistent as of that time. If you can't do this (like
> for a system disk), but you can make sure this is done at a quiet
> time, you can at least be reasonably sure that you've got a good
> snapshot, but you really need to understand what might be
> inconsistent and how to recognize and recover from any problems that
> occur when you try to restore. Oftentimes, it will be just like
> trying to recover from a crash.
>
> For a system disk, the issues might be people logging in and changing
> info in the UAF, people changing passwords, creating, modifying or
> deleting user accounts, batch or print jobs (queue manager issues),
> etc. Don't be editing SYSTARTUP_VMS.COM while starting up the backup!
> If you can be sure these things aren't happening (middle of the
> night, nothing in the batch or print queues currently printing or
> scheduled to execute in the next several minutes, etc.), then your
> odds of getting a usable backup are pretty good, but never 100%.
>
> Once you've split off the shadow set member, you can resume normal
> operations, unquiescing the application or restarting it as the case
> may be. Your time tag for the backup date is any time during the
> interval while everything is quiet. You'll need to write this down.
>
> Mount the split-out member privately with /NOWRITE, and back it up
> any way you wish (image, incremental, differential, random bunch of
> specific files, or whatever, depending on what's on the disk and your
> restore strategy.)
>
> *DON'T* use /RECORD!
> (You can't on a disk mounted /NOWRITE, and even if you did, the
> date stamps would all get erased when you add it back to the shadow
> set.)
>
> When the backup completes, dismount the disk and remount it back into
> the shadow set (minicopy is a HUGE win here, reducing the copy time
> from hours to seconds.)
>
> There is no need to quiesce anything while adding the disk back
> into the set.
>
> The advantages of this over just doing a live backup of the shadow
> set are several-fold.
> Since the snapshot is done instantly and while there is no (or very
> little) activity, the chances of getting an inconsistent backup are
> greatly reduced (to zero if you can quiesce or shut down the
> application.) Even if the best you can do is do this during a
> relatively quiet time, this is much better than doing a live backup
> of the active shadow set.
>
> The advantage of this method over an offline backup is speed. You
> don't have to shut the system down first and reboot it afterward.
> The application only has to be offline for a few seconds (or not at
> all, if you have the hooks in it to record your own journal files to
> another disk while the disk being backed up is being split.)
>
> It doesn't matter how long the backup itself takes (well, if it
> takes too long, minicopy will approach normal copy times, and if
> it's a 2-member shadow set, you'll be working without a net until
> the member rejoins.)
>
> An advantage to doing the backup online, vs. booting the VMS CD and
> backing up from there (the only truly safe method of backing up a
> system disk), is you can script everything in DCL, so there are no
> typos (or at least no random, non-repeating operator typos), no
> mounting and backing up the wrong disk, the time-critical bits
> (quiescing, dismounting, resuming) happen as fast as possible, which
> minimizes downtime, and you can keep a log of everything.
>
> BTW, since the time required by the backup doesn't matter much,
> I almost always do full image backups when using this method.
>
> To repeat all the warnings though, this method only gives reliable
> backups if the application can be quiesced or completely shut down
> while the shadow set is being split. Otherwise some operation will
> write some data to both disks before the dismount and only to the
> remaining disk after the dismount, and your backup disk will be
> inconsistent, possibly in a way you won't notice till months later.
> (Data dry rot!)
>
> --
> John

Thank you for the kind words. However, I actually forgot some of the
most important words (they were implied, but not made explicit). That
is: the backup strategy has to be driven by what's important when
doing a *restore*. E.g. a fast non-selective bare-metal restore may
lead to different procedures than something designed for ease of
restoration of individual files. Again, one size does not fit all.

It's helpful to actually test the restore procedures occasionally too,
though testing does not necessarily expose latent flaws such as the
"data dry rot" you mention. (You don't even need a shadowset for that
kind of problem: back in the days of IAS, ANALYSE/DISK/REPAIR didn't
properly lock the disk, and when two apps both thought they had
control of space allocation... well, the damage was instant, but it
wasn't instantly *obvious*, till corrupt files started appearing,
because blocks had become "owned" by more than one file. A serious
learning exercise.)

Doing backups/restores properly isn't always easy. But if they're not
done properly, there's little point doing them at all; it's just a
false sense of security.

2p
John
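
For the archive, the cycle John describes condenses to a few lines of
DCL. This is only a sketch: the shadow set, member device and label
names are invented, minicopy needs Volume Shadowing write-bitmap
support, and the application must be quiesced around the DISMOUNT:

$ ! Split one member out of DSA1, keeping a minicopy write bitmap.
$ DISMOUNT /POLICY=MINICOPY $1$DGA101:
$ ! (Application can resume here.)  Mount the split-off member
$ ! privately, read-only - and note: no /RECORD on the BACKUP.
$ MOUNT /NOWRITE /OVERRIDE=SHADOW_MEMBERSHIP $1$DGA101: USERDISK SPLIT$DISK
$ BACKUP /IMAGE /LOG SPLIT$DISK: MKA500:USERDISK.BCK /SAVE_SET
$ DISMOUNT SPLIT$DISK:
$ ! Rejoin; with a valid bitmap only the changed blocks are copied.
$ MOUNT /SYSTEM DSA1: /SHADOW=($1$DGA101:) USERDISK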
------------------------------

Date: Fri, 21 Mar 2008 07:38:08 -0700 (PDT)
From: tadamsmar
Subject: Re: Please critique my backup practices
Message-ID: <205bf752-bb6f-4abf-8c55-e0c7ddd4f92a@m34g2000hsc.googlegroups.com>

On Mar 20, 6:24 pm, John Santos wrote:
> In article <13u5gub59gtb...@corp.supernews.com>, johnwallace4
> @yahoo.spam.co.uk says...
> [...]
> If you can't do this (like for a system disk), but you can make sure
> this is done at a quiet time, you can at least be reasonably sure
> that you've got a good snapshot, but you really need to understand
> what might be inconsistent and how to recognize and recover from any
> problems that occur when you try to restore. Oftentimes, it will be
> just like trying to recover from a crash.

Does VMS have any real problems coming back up after a power failure?
I have never seen any. I wrote the app on these computers. It's
designed to come up clean after a power failure.

I am backing up the system disk.

> [...]
> An advantage to doing the backup online, vs. booting the VMS CD and
> backing up from there (the only truly safe method of backing up a
> system disk), is you can script everything in DCL,

Anybody that has an extra disk can script everything. You can probably
put it on a floppy, but I never tried that. You just mount the extra
disk and run the script (command file) off that.

> [...]

------------------------------

Date: Fri, 21 Mar 2008 07:49:33 -0700 (PDT)
From: tadamsmar
Subject: Re: Please critique my backup practices
Message-ID: <359d69a9-ec29-4f7f-9d3e-ebdd9773f046@n77g2000hse.googlegroups.com>

On Mar 21, 9:37 am, "John Wallace" wrote:
> [...]
BTW, the image/incrementals are not my only recovery strategy.

I back up key application files every night to two other Alphas.

If a system fails, I can get the app up on a backup computer in less
than 30 minutes without any tapes.

I have never had to restore an image from tape. I think in terms of a
fire or something that would destroy both disks in the shadowset. If I
just had an ordinary computer failure, swapping out a disk (or both
disks) to another computer would be the fastest recovery, perhaps. (I
do have one oddball AS800 with disks I can't swap, but it's not really
being used for production.)

I guess somebody could issue a massive delete by accident; that might
force an image recovery.

------------------------------

Date: Fri, 21 Mar 2008 07:53:15 -0700 (PDT)
From: tadamsmar
Subject: Re: Please critique my backup practices
Message-ID:

On Mar 21, 10:26 am, Kilgal...@SpamCop.net (Larry Kilgallen) wrote:
> In article , hel...@astro.multiCLOTHESvax.de (Phillip Helbig---remove
> CLOTHES to reply) writes:
> [...]
>
> If there are, then quiescing the disk is the way to improve this
> process.
>
> The advantage of breaking the shadow set over just doing a backup is
> that breaking the shadow set gives a quick way to snapshot a disk
> that has been momentarily quiesced.

Does VMS 7.3-2 have a power failure problem that I have never seen? I
don't recall anything more than an automatic disk rebuild. I wrote the
app and it's designed for power failure recovery. So what does
quiescence buy me?

------------------------------

Date: Fri, 21 Mar 2008 08:22:28 -0700 (PDT)
From: AEF
Subject: Re: Please critique my backup practices
Message-ID:

On Mar 21, 9:49 am, tadamsmar wrote:
[...]
> I guess somebody could issue a massive delete by accident; that
> might force an image recovery.

Yes.
This is why volume shadowing is not a substitute for doing backups.

Also, do you send any backup tapes off-site in case of building fire
or the like?

AEF

------------------------------

Date: Fri, 21 Mar 2008 11:41:49 -0500
From: David J Dachtera
Subject: Re: Please critique my backup practices
Message-ID: <47E3E54D.30F5D852@spam.comcast.net>

Dale Dellutri wrote:
>
> On Thu, 20 Mar 2008 12:36:19 -0700 (PDT), tadamsmar wrote:
>> On Mar 20, 2:36 pm, Dale Dellutri wrote:
>>> On Thu, 20 Mar 2008 07:02:14 -0700 (PDT), tadamsmar wrote:
>>>> To do a backup, I pop a drive out of a shadowset, back it up and
>>>> put it back.
>>>
>>> I assume you're talking about VMS Volume Shadowing. As far as I
>>> know, taking a drive out of a shadowset causes the drive to look
>>> the same as if there'd been a power failure. In other words, it
>>> does not properly close open files.
>>>
>>> I assume that you take image backups. If so, there's no benefit to
>>> taking the drive out of the shadowset to take the image backup.
>>> According to service techs that I talked to when I first set up
>>> volume shadowing, there's no advantage over taking an image backup
>>> of the volume set (the DSA device).
>>>
>>> I assume that this is still true.
>>>
>>>> The problem is, I don't record the backup dates.
>>>
>>> You could record backup dates if you simply took an image backup
>>> of the volume set.
>>
>> Don't I have to shut down my system to do that?
>
> I finally found the relevant item in "HP Volume Shadowing for
> OpenVMS", V7.3-2, Chapter 7, section titled "Data Consistency
> Requirements": "Removal of a shadow set member results in what is
> called a crash-consistent copy. That is, the copy of the data on the
> removed member is of the same level of consistency as what would
> result if the system had failed at that instant." (pg. 124 in my
> copy).
>
> Actually, reading the entire section titled "Guidelines for Using a
> Shadow Member for Backup" would be very useful.

The usual technique is to quiesce the application (cause it to close
its files), THEN split the shadow-sets. We do something similar with
BCVs on an EMC DMX array. This minimizes your "backup window".

Once upon many moons ago, I developed some code that would take a
BACKUP/LIST file and produce a file list for use with DFU's SET
command such that backup dates could be recorded after a shadow-set
split backup. I could probably resurrect that, if needed. It uses
SEARCH to reduce the data to the list of files and EDT in batch(!!) to
perform the final edits and cleanup. The result gets fed to the SET
command in DFU:

$ MCR DFU SET/BACKUP='date' "@filespec"

Use SET PROC/PRIO=0 before invoking DFU to minimize impact on
production.

David J Dachtera
(formerly dba) DJE Systems

------------------------------

Date: Fri, 21 Mar 2008 07:14:05 GMT
From: Tad Winters
Subject: Re: Revised time in DIR / FULL output
Message-ID:

Hein RMS van den Heuvel wrote in
news:82d976fd-d9af-4945-b35e-4643dccba618@8g2000hse.googlegroups.com:

> On Mar 18, 10:22 am, koeh...@eisner.nospam.encompasserve.org (Bob
> Koehler) wrote:
>> In article ,
>> shofu...@yahoo.com.au writes:
>>
>>> Hi Group,
>>>
>>> I have noticed that in a dir / full output that after the revised
>>> time that there is a number in brackets.
>>
>> It's in parentheses on my system, and it's the number of times the
>> file has been closed after being opened with write access, and
>> therefore the number of times the revised date has changed.
>
> And just to complete all the other fine answers...
>
> 1) No, it is NOT reliable, as it is a 16-bit counter which silently
> wraps around.
>
> 2) The file contents might not have changed that often.
> Just opening with write intent is enough for a bump.
>
> Hein.

As I recall, that value is at least 1 greater than the number of keys
on an indexed file, when it's been rebuilt
(convert/fdl=fdl-for-indexed-file.fdl source-file destination-file).
That number can also be set with DFU.

------------------------------

Date: Fri, 21 Mar 2008 11:46:42 +0000 (UTC)
From: helbig@astro.multiCLOTHESvax.de (Phillip Helbig---remove CLOTHES to reply)
Subject: Re: RIP Arthur C. Clarke
Message-ID:

In article , "PL" writes:

> One of my big fans, I got a copy of a Discovery program when he
> unveils fake ghost pictures using

Clarke was one of YOUR big fans?

------------------------------

Date: Fri, 21 Mar 2008 11:27:56 GMT
From: Jan-Erik Söderholm
Subject: Re: Too many files in one directory (again)
Message-ID: <0ZMEj.5139$R_4.4363@newsb.telia.net>

Steven M. Schweda wrote:
> I've heard the lecture(s) before, so please spare us all a repeat,
> but I recently had occasion to (try to) unpack a "tar" archive which
> wants to create about 190000 files in one directory. On an HP PA-RISC
> workstation c3700 running HP-UX 11.11 it took about 35 minutes. On an
> HP IA64 workstation zx2000 running VMS V8.3-1H1, it's about eight
> hours into the VMSTAR job, it hasn't created half the files yet, and
> it does not seem to be getting faster as it goes.
>
> The file names all look like "020989f4d6c2f32768d0535c1815344d.zip",
> "11509dd158a696797eca5700f902ce03.zip", and so on.
>
> I just pass this along as a reminder that there's still some room
> for improvement in dealing with cases like this which,...

And some of them are under your control, such as using a large enough
/ALLOCATION=n on the CRE/DIR command. I guess that it would also help
to INIT the device with a large enough /HEADERS=n to begin with.

Even if 190,000 files "works" on VMS, it might not be what VMS was
primarily designed for...

Jan-Erik.
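
A minimal sketch of that pre-allocation, with sizes guessed from the
numbers later in this thread (directory entries for names like these
run about 50 bytes, so roughly 10 fit per block; the device name is
made up, and /HEADERS only helps if you initialize the volume
yourself):

$ ! Room in INDEXF.SYS for ~190000 file headers, plus slack.
$ INITIALIZE /HEADERS=200000 $1$DGA100: SCRATCH
$ MOUNT /SYSTEM $1$DGA100: SCRATCH
$ ! ~190000 entries at ~10 per block, rounded up generously.
$ CREATE /DIRECTORY /ALLOCATION=25000 DISK$SCRATCH:[UNPACK]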
------------------------------

Date: Fri, 21 Mar 2008 05:43:18 -0700 (PDT)
From: Hein RMS van den Heuvel
Subject: Re: Too many files in one directory (again)
Message-ID: <6b8b3af8-e738-4b79-b2b9-452f43a74cdd@n77g2000hse.googlegroups.com>

On Mar 20, 10:19 pm, s...@antinode.org (Steven M. Schweda) wrote:
:
> I just pass this along as a reminder that there's still some room
> for improvement in dealing with cases like this which, bad design or
> not, don't cause nearly so much trouble on other operating systems.

Agree with the sentiment.

If it is caused by directory IO, then you may be able to improve the
speed by setting the sysgen param ACP_MAXREAD to its max of 64.
( $ mcr sysgen show acp_maxread )

And the earlier directory pre-allocation is a good hint, if you know
what's coming. But why would an end user need to worry about that in
the first place? And it will still take hours!

Are we sure it is directory IO, or just the file creates themselves?
If the files are entered IN ORDER, then the directory adds would NOT
be the biggest cost. It would be the INDEXF.SYS IO + data itself.

If the files are NOT in order and go largely to a single directory,
and if this sort of thing needs to happen frequently, and I was paid
very well or had nothing better to do, then I would:

- create files WITHOUT directory entry (FAB$V_TMP = 1)
- add name + fileid to a sequential temp file, flushing every so
  often (RAB$B_MBC=1 :-)
- when all files are created, call sort using the record interface.
- for each record returned call SYS$ENTER.
- For some more money I'd create the directory myself.

grins,
Hein.

------------------------------

Date: Fri, 21 Mar 2008 08:55:55 -0500 (CDT)
From: sms@antinode.org (Steven M. Schweda)
Subject: Re: Too many files in one directory (again)
Message-ID: <08032108555556_2020CE0A@antinode.org>

From: Hein RMS van den Heuvel

> If it is caused by directory IO, then you may be able to improve the
> speed by setting the sysgen param ACP_MAXREAD to its max of 64.
> ( $ mcr sysgen show acp_maxread )

Currently 32 ("Default")

> And the earlier directory pre-allocation is a good hint, if you know
> what's coming.

Currently about 22000 blocks, growing by one block about every four
seconds or so.

> But why would an end user need to worry about that in the first
> place?

Yeah, you'd like to think that if fancy tuning were so critical, that
it'd get done automatically. Even growing the allocation by, say, 25%
instead of one block might pay off without a big penalty (if multiple
small allocations were really a time consumer).

MONI /SYST shows a Direct I/O Rate of about 1900/s, with about 47% of
the CPU busy (36% for the working process). On the bright side, with
4GB of memory (and nothing else to do), the Page Fault Rate is a
steady zero.

> And it will still take hours!

Or, perhaps, days. It's over 130000 files this morning, though, so
there may be some hope.

> Are we sure it is directory IO, or just the file creates themselves?
> If the files are entered IN ORDER, then the directory adds would NOT
> be the biggest cost. It would be the INDEXF.SYS IO + data itself.

As I recall, UNIX "tar" is generally not very reliable on file order,
but in this archive the files seem to be pretty well ordered.

> If the files are NOT in order and go largely to a single directory,
> and if this sort of thing needs to happen frequently, and I was paid
> very well or had nothing better to do, then I would:
> [...]

It's not the only disqualifier there, but "paid" does stand out.

------------------------------------------------------------------------

   Steven M. Schweda               sms@antinode-org
   382 South Warwick Street        (+1) 651-699-9818
   Saint Paul  MN  55105-2547
------------------------------

Date: Fri, 21 Mar 2008 14:28:07 +0000
From: "Main, Kerry"
Subject: RE: Too many files in one directory (again)
Message-ID:

> -----Original Message-----
> From: Steven M. Schweda [mailto:sms@antinode.org]
> Sent: March 20, 2008 10:19 PM
> To: Info-VAX@Mvb.Saic.Com
> Subject: Too many files in one directory (again)
>
> I've heard the lecture(s) before, so please spare us all a repeat,
> but I recently had occasion to (try to) unpack a "tar" archive which
> wants to create about 190000 files in one directory. On an HP PA-RISC
> workstation c3700 running HP-UX 11.11 it took about 35 minutes. On an
> HP IA64 workstation zx2000 running VMS V8.3-1H1, it's about eight
> hours into the VMSTAR job, it hasn't created half the files yet, and
> it does not seem to be getting faster as it goes.
> [...]

Just a WAG, but since you are doing mostly write activities, can we
assume that you have removed disk highwater marking and set OpenVMS to
the file system default that UNIX uses (write-back) vs. OpenVMS's file
system default (write-through)?

Regards

Kerry Main
Senior Consultant
HP Services Canada
Voice: 613-254-8911
Fax: 613-591-4477
kerryDOTmainAThpDOTcom
(remove the DOT's and AT)

OpenVMS - the secure, multi-site OS that just works.

------------------------------

Date: Fri, 21 Mar 2008 15:06:18 GMT
From: Jan-Erik Söderholm
Subject: Re: Too many files in one directory (again)
Message-ID:

Steven M. Schweda wrote:
>> And the earlier directory pre-allocation is a good hint, if you know
>> what's coming.
>
> Currently about 22000 blocks, growing by one block about every four
> seconds or so.

I would think that a pre-allocated .DIR file would have helped here.
It would be interesting to see how those 22000 blocks are allocated
(contiguous? number of frags? and so on)... :-)

And hope you don't hit SYSTEM-F-HEADERFULL with just a few files
left... :-)

How large/small is each individual file? Would it be possible (with
4 GB available) to create a DECram disk and run the un-tar against
that?

Jan-Erik.

------------------------------

Date: Fri, 21 Mar 2008 15:21:27 +0000
From: "R.A.Omond"
Subject: Re: Too many files in one directory (again)
Message-ID:

Jan-Erik Söderholm wrote:
> Steven M. Schweda wrote:
>
>>> And the earlier directory pre-allocation is a good hint, if you
>>> know what's coming.
>>
>> Currently about 22000 blocks, growing by one block about every four
>> seconds or so.
>
> I would think that a pre-allocated .DIR file would have helped here.
> It would be interesting to see how those 22000 blocks are allocated
> (contiguous? number of frags? and so on)... :-)

Jan-Erik, it's a directory, so it's contiguous, *and* it only has one
extent :-)

The pre-allocated, humungously-sized directory would have sped things
up considerably.
------------------------------

Date: Fri, 21 Mar 2008 16:14:51 GMT
From: Jan-Erik Söderholm
Subject: Re: Too many files in one directory (again)
Message-ID: <%9REj.5152$R_4.4314@newsb.telia.net>

R.A.Omond wrote:
> Jan-Erik Söderholm wrote:
> [...]
>
> Jan-Erik, it's a directory, so it's contiguous, *and* it only has
> one extent :-)

OK, ok... :-)

Then I guess one could have other potential problems, with no space to
extend the DIR file (or to move it to another place in whole, if it
does that at all; I'm unsure there...).

> The pre-allocated, humungously-sized directory would have sped
> things up considerably.

Yes, and a pre-allocated DIR file (and a correct /HEADERS=n on INIT)
makes a lot of potential fault cases go away. Right now, at the end of
the un-tar, it can break for a number of reasons... :-)

Jan-Erik.

------------------------------

Date: Fri, 21 Mar 2008 09:24:56 -0700 (PDT)
From: Hein RMS van den Heuvel
Subject: Re: Too many files in one directory (again)
Message-ID: <4d7b4414-11d2-4002-a280-1a6dae60b027@e39g2000hsf.googlegroups.com>

On Mar 21, 9:55 am, s...@antinode.org (Steven M. Schweda) wrote:

> Yeah, you'd like to think that if fancy tuning were so critical, that
> it'd get done automatically. Even growing the allocation by, say,
> 25% instead of one block might pay off without a big penalty

It will be growing by at least a disk cluster at a time. I don't think
it'll honor the SET RMS/EXT.

> MONI /SYST shows a Direct I/O Rate of about 1900/s

MONI FILE would be interesting.

>> And it will still take hours!
>
> Currently about 22000 blocks, growing by one block about every four
> seconds or so.

Well, with the file names as per example, each entry will be about 50
bytes. ( $ dump/dir/blo=count=1 )

So 10 per block, and 10 per 4 seconds. By that estimation alone it
should take:

$ write sys$output (190000*4/10)
76000

seconds, or

$ write sys$output (190000*4/10)/3600
21

hours.

> Or, perhaps, days. It's over 130000 files this morning, though, so
> there may be some hope.

Sounds like you are right on track!

btw, do a semi-random:

$ pipe dump/dir/blo=(star=10000,count=10) bad.dir | searc sys$pipe "End of records"

If those -1's are hovering around 0x0100, then random inserts are
happening. If they are more around 0x01C0, then the blocks are packed,
suggesting a series of ordered inserts in the sampled zone.

To be more precise:

$ perl -le "foreach (qx(dump/dir [-]hein.dir)){if (/^0(\w+) End/){$c++; $t+=hex($1)}} print $t/$c"
228
$ mcr dfu directory/comp [-]hein.dir
%DFU ... HEIN.DIR;1 : 805 files; was : 66/81, now : 32/81 blocks
$ perl -le "foreach (qx(dump/dir [-]hein.dir)){if (/^0(\w+) End/){$c++; $t+=hex($1)}} print $t/$c"
470.25

I would also suggest a

$ pipe dump/header/bloc=count=0 bad.dir | searc sys$pipe lbn,Allocated

A changing LBN will show you the directory being re-allocated, and
thus copied over 32 blocks at a time to its new place. That's 700+
additional reads, and as many extra writes, every time it moves. That
would not explain 1900 IO/sec. An average file being inserted in the
middle of a directory, causing a full block every 10th time or so
(depending on split point), could just about explain that.

Hein.
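
That "MONI FILE would be interesting" can be spelled out. A small
sketch (the interval is an arbitrary choice) to watch, while the
un-tar runs, whether the time is going to XQP file operations or to
file-system cache misses:

$ MONITOR FILE_SYSTEM_CACHE, FCP /INTERVAL=5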
Schweda" wrote: > > I've heard the lecture(s) before, so please spare us all a repeat, > but I recently had occasion to (try to) unpack a "tar" archive which > wants to create about 190000 files in one directory. On an HP PA-RISC > workstation c3700 running HP-UX 11.11 it took about 35 minutes. On an > HP IA64 workstation zx2000 running VMS V8.3-1H1, it's about eight hours > into the VMSTAR job, it hasn't created half the files yet, and it does > not seem to be getting faster as it goes. > > The file names all look like "020989f4d6c2f32768d0535c1815344d.zip", > "11509dd158a696797eca5700f902ce03.zip", and so on. Even (especially) if they are not in sequence, pre-allocating the directory would likely have been a big help here. Knowing that 190,000(!!!) files were coming in, I'd have pre-allocated the directory to 190000 blocks to prevent the system having to find a new contiguous extent every time it needs to extend. I could always SET FILE/TRUNCATE it later, if needs be. > I just pass this along as a reminder that there's still some room > for improvement in dealing with cases like this which, bad design or > not, don't cause nearly so much trouble on other operating systems. This would cause a different set of problems on other systems, I should think. David J Dachtera (formerly dba) DJE Systems ------------------------------ Date: 21 Mar 2008 13:05:53 -0600 From: koehler@eisner.nospam.encompasserve.org (Bob Koehler) Subject: Re: Too many files in one directory (again) Message-ID: In article <08032021190335_2020CE0B@antinode.org>, sms@antinode.org (Steven M. Schweda) writes: > I've heard the lecture(s) before, so please spare us all a repeat, > but I recently had occasion to (try to) unpack a "tar" archive which > wants to create about 190000 files in one directory. On an HP PA-RISC > workstation c3700 running HP-UX 11.11 it took about 35 minutes. On an > HP IA64 workstation zx2000 running VMS V8.3-1H1, it's about eight hours > into the VMSTAR job, it hasn't created half the files yet, and it does > not seem to be getting faster as it goes. > > The file names all look like "020989f4d6c2f32768d0535c1815344d.zip", > "11509dd158a696797eca5700f902ce03.zip", and so on. Can you put a listing in a file using tar -t and then break down the output into multiple tar passes into multiple directories? I sure think I'd try that if I were in your situation. A little time editing to create a script might save a great many hours. ------------------------------ Date: 21 Mar 2008 11:34:02 GMT From: VAXman- @SendSpamHere.ORG Subject: Re: VMS Mail translates incoming tilde character into a dollar sign. Message-ID: <47e39d2a$0$5638$607ed4bc@cv.net> In article <47e32132$1$23906$c3e8da3@news.astraweb.com>, JF Mezei writes: >Phillip Helbig---remove CLOTHES to reply wrote: > >> This really shows how quality control has gone down the drain. > >The only thing of value VMS still has is the clustering software. The >rest is all "legacy". When HP retires VMS, if the clustering is still >state of the art, they may get a few pennies giving it to Microsoft (again). Where it will be lost to antiquity or it will be so completely fucked up by Micro$oft that it will be too bloated to be useful for anything. I'd highly doubt that latter as Micro$oft can't seem to get Weendoze to run and coordinate several applications/processes on one machine; why would they be interested in trying it amongst several instances of Weendoze? >The rest doesn't need to have much quality control or development. 
------------------------------

Date: 21 Mar 2008 11:34:02 GMT
From: VAXman- @SendSpamHere.ORG
Subject: Re: VMS Mail translates incoming tilde character into a dollar sign.
Message-ID: <47e39d2a$0$5638$607ed4bc@cv.net>

In article <47e32132$1$23906$c3e8da3@news.astraweb.com>, JF Mezei writes:
>Phillip Helbig---remove CLOTHES to reply wrote:
>
>> This really shows how quality control has gone down the drain.
>
>The only thing of value VMS still has is the clustering software. The
>rest is all "legacy". When HP retires VMS, if the clustering is still
>state of the art, they may get a few pennies giving it to Microsoft
>(again).

Where it will be lost to antiquity, or it will be so completely fucked up
by Micro$oft that it will be too bloated to be useful for anything. I'd
highly doubt the latter, as Micro$oft can't seem to get Weendoze to run
and coordinate several applications/processes on one machine; why would
they be interested in trying it amongst several instances of Weendoze?

>The rest doesn't need to have much quality control or development.

Sounds like it's perfect for that minuscule-n-flaccid organization.

--
VAXman- A Bored Certified VMS Kernel Mode Hacker    VAXman(at)TMESIS(dot)COM

"Well my son, life is like a beanstalk, isn't it?"

http://tmesis.com/drat.html

------------------------------

Date: 21 Mar 2008 11:41:44 GMT
From: billg999@cs.uofs.edu (Bill Gunshannon)
Subject: Re: VMS Mail translates incoming tilde character into a dollar sign.
Message-ID: <64hl7oF2c3h0sU1@mid.individual.net>

In article <47e32132$1$23906$c3e8da3@news.astraweb.com>, JF Mezei writes:
> Phillip Helbig---remove CLOTHES to reply wrote:
>
>> This really shows how quality control has gone down the drain.
>
> The only thing of value VMS still has is the clustering software. The
> rest is all "legacy". When HP retires VMS, if the clustering is still
> state of the art, they may get a few pennies giving it to Microsoft
> (again).
>
> The rest doesn't need to have much quality control or development.

Considering the internal differences between VMS and everything else, not
even the clustering code would be worth anything to anybody else.

bill

--
Bill Gunshannon          | de-moc-ra-cy (di mok' ra see) n. Three wolves
billg999@cs.scranton.edu | and a sheep voting on what's for dinner.
University of Scranton   |
Scranton, Pennsylvania   | #include 

------------------------------

Date: 21 Mar 2008 08:17:19 -0600
From: koehler@eisner.nospam.encompasserve.org (Bob Koehler)
Subject: Re: What sysgen param needs to be changed?
Message-ID:

In article , Rob Brown writes:
> On Thu, 20 Mar 2008, Hank Vander Waal wrote:
>
>> Trying to open files over DECNET (IV) and I get the error below:
>>
>> -RMS-E-ACC, ACP file access failed
>> -SYSTEM-F-REMRSRC, insufficient system resources at remote node
>>
>> Is this caused by a SYSGEN param. or an account setting for the
>> DECNET account or both ?
>
> What does NETSERVER.LOG on the remote node say? This file will be in
> the login directory of the remote user.

If there weren't enough resources to open the link, there may not have
been enough accomplished on the remote node to open the log.

------------------------------

Date: 21 Mar 2008 08:18:09 -0600
From: koehler@eisner.nospam.encompasserve.org (Bob Koehler)
Subject: RE: What sysgen param needs to be changed?
Message-ID:

In article <009001c88ab1$e57f2a60$6500a8c0@dellxp30>, "Hank Vander Waal"
writes:
>
> Connect request received at 20-MAR-2008 12:43:14.92

Can you correlate that system time to your access attempt?

------------------------------

Date: Fri, 21 Mar 2008 11:13:51 -0400
From: "Hank Vander Waal"
Subject: RE: What sysgen param needs to be changed?
Message-ID: <010201c88b66$2c3c8d30$6500a8c0@dellxp30>

Bob & all,

I have several users on the remote site that are working; it's when I get
more than a couple of them, each one opening about 15 files, that I get
the error. I am not sure if it is a user setting or a SYSGEN setting. The
NETSERVER.LOG file does not report any errors.

-----Original Message-----
From: Bob Koehler [mailto:koehler@eisner.nospam.encompasserve.org]
Sent: Friday, March 21, 2008 10:17 AM
To: Info-VAX@Mvb.Saic.Com
Subject: Re: What sysgen param needs to be changed?

In article , Rob Brown writes:
> On Thu, 20 Mar 2008, Hank Vander Waal wrote:
>
>> Trying to open files over DECNET (IV) and I get the error below:
>>
>> -RMS-E-ACC, ACP file access failed
>> -SYSTEM-F-REMRSRC, insufficient system resources at remote node
>>
>> Is this caused by a SYSGEN param. or an account setting for the
>> DECNET account or both ?
>
> What does NETSERVER.LOG on the remote node say? This file will be in
> the login directory of the remote user.

If there weren't enough resources to open the link, there may not have
been enough accomplished on the remote node to open the log.

------------------------------

Date: Fri, 21 Mar 2008 08:26:49 -0700 (PDT)
From: AEF
Subject: Re: What sysgen param needs to be changed?
Message-ID: <79df2e12-3303-4d96-a210-faabba50aed0@m3g2000hsc.googlegroups.com>

On Mar 20, 12:41 pm, "Hank Vander Waal" wrote:
> I am running the program on I64 and the programs are on that machine. I
> am opening files on alpha 7.1 system
> The netserver.log file does not show any errors
>
> -----Original Message-----
> From: Rob Brooks [mailto:bro...@cuebid.zko.hp.nospam]
> Sent: Thursday, March 20, 2008 2:21 PM
> To: Info-...@Mvb.Saic.Com
> Subject: Re: What sysgen param needs to be changed?
>
> "Hank Vander Waal" writes:
> > Trying to open files over DECNET (IV) and I get the error below:
> >
> > -RMS-E-ACC, ACP file access failed
> > -SYSTEM-F-REMRSRC, insufficient system resources at remote node
> >
> > Is this caused by a SYSGEN param. or an account setting for the DECNET
> > account or both ?
>
> What version of the Operating system and platform are you using?
>
> I believe there is an issue with activating I64 images over the network,
> although I don't know how that error would be signalled.
>
> --
>
> Rob Brooks MSL -- Nashua     brooks!cuebid.zko.hp.com

Is it possible that you've reached the maximum number of links allowed on
the remote node? Run MCR NCP SHOW EXEC and MCR NCP SHOW KNOWN LINKS on the
remote node and see if the numbers are close or equal.
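For reference, that comparison might look like this; the parameter name is
the one NCP reports, but the value in the comment is only illustrative:

$ mcr ncp show executor characteristics   ! look for "Maximum links"
$ mcr ncp show known links                ! count the links actually in use
$ ! If the counts are at the limit, the limit can be raised with
$ ! SET/DEFINE EXECUTOR MAXIMUM LINKS n (e.g. 64); SET affects the
$ ! running system, DEFINE makes it permanent.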
AEF

------------------------------

Date: 21 Mar 2008 13:03:49 -0600
From: koehler@eisner.nospam.encompasserve.org (Bob Koehler)
Subject: RE: What sysgen param needs to be changed?
Message-ID:

In article <010201c88b66$2c3c8d30$6500a8c0@dellxp30>, "Hank Vander Waal"
writes:
> Bob & all,
> I have several users on the remote site that are working; it's when I
> get more than a couple of them, each one opening about 15 files, that I
> get the error. I am not sure if it is a user setting or a SYSGEN
> setting.
> The netserver.log file does not report any errors

Do you by any chance have a limited number of users in the VMS license
for the remote system? If not, I'd check the pool and page file, using
SHOW MEMORY.

------------------------------

End of INFO-VAX 2008.162
************************