INFO-VAX Tue, 16 Jan 2007 Volume 2007 : Issue 32

Contents:
  "no such file" from one node only
  Re: "no such file" from one node only
  BACKUP causing alpha node to crash
  Re: Blast from the 1988s (DEC proposal)
  Re: Blast from the 1988s (DEC proposal)
  Re: Cluster connection lost when one link fails?
  Re: Cluster connection lost when one link fails?
  Re: Cluster connection lost when one link fails?
  Re: Cluster connection lost when one link fails?
  Re: Diskmizer 2.1 License
  Re: ES45 versus ES47
  Re: Odd DSSI/Cluster behaviour. Phantom disks appear online
  Re: Odd DSSI/Cluster behaviour. Phantom disks appear online
  Re: Odd DSSI/Cluster behaviour. Phantom disks appear online
  Re: ODS5 and hardlinks
  Re: PDP11 Tape Copy to VAX: DOS11_BLOCKSIZE
  Re: Purpose of TCPIP NETWORK database ?
  Re: RMS-E-WLK, device currently write locked
  Re: SHOW CLUSTER and Quorum
  Re: Who has a record locked

----------------------------------------------------------------------

Date: 16 Jan 2007 10:07:19 -0800
From: "BIO"
Subject: "no such file" from one node only
Message-ID: <1168970839.022683.10030@m58g2000cwm.googlegroups.com>

OpenVMS V7.1-2 cluster w/ 2 alphas

on one node (G):

$ sho sym dir
  DIR == "DIRE/SIZE=ALLO/DATE=MODIF/PROT/WIDTH=(FILENAME=24,SIZE=8)"
$ dir glauto:ddsinv-ar

Directory GL_DATA_ROOT:[GLDATA.GL_INTERFACE]

DDSINV-AR.SFD;2             86   16-JAN-2007 04:48:45.76   (RWED,RWED,RWED,)
DDSINV-AR.SF_ERROR;1      2408   16-JAN-2007 04:50:22.40   (RWED,RWED,RWED,)

Total of 2 files, 2494 blocks.
$

whereas on the other node (A):

$ dir glauto:ddsinv-ar

Directory GL_DATA_ROOT:[GLDATA.GL_INTERFACE]

DDSINV-AR.SFD;2          no such file
DDSINV-AR.SF_ERROR;1      2408   16-JAN-2007 04:50:22.40   (RWED,RWED,RWED,)

Total of 2 files, 2408 blocks.
$

The glauto and gl_data_root logicals are the same on both nodes, but note
that only ONE of these TWO files (in the same directory) has a problem, so
any issue with the disk directory structure should affect both files the
same, no?  There are numerous other files in this directory and those are
all visible from both nodes; and, no, there are no concealed or rooted
directories on this disk.  I have already run $ ANAL/DISK/REPAIR with no
effect.

I suppose I will have to delete or SET FILE/REMOVE the problematic one, but
I'd like to understand what is causing the problem.  Has anyone seen
anything like this?  Is there a fix (short of a reboot, which is not an
option right now)?

Ingemar

------------------------------

Date: 16 Jan 2007 10:40:36 -0800
From: "Hein RMS van den Heuvel"
Subject: Re: "no such file" from one node only
Message-ID: <1168972834.312957.149940@38g2000cwa.googlegroups.com>

The critical information missing from the topic is a 'simple' directory
listing: $ DIRECTORY/FILE_ID ...  Nothing more, nothing less.

That will show you whether the system is looking for the right file or not,
and thus tell you whether to suspect the directory or something deeper down.

You may also want to use DUMP/DIRECTORY.

hth,
Hein.

BIO wrote:
> OpenVMS V7.1-2 cluster w/ 2 alphas
>
> on one node (G):
> $ sho sym dir
>   DIR == "DIRE/SIZE=ALLO/DATE=MODIF/PROT/WIDTH=(FILENAME=24,SIZE=8)"
> $ dir glauto:ddsinv-ar
> Directory GL_DATA_ROOT:[GLDATA.GL_INTERFACE]
> DDSINV-AR.SFD;2             86   16-JAN-2007 04:48:45.76   (RWED,RWED,RWED,)
> DDSINV-AR.SF_ERROR;1      2408   16-JAN-2007 04:50:22.40   (RWED,RWED,RWED,)
> Total of 2 files, 2494 blocks.
> $
>
> whereas on the other node (A):
> $ dir glauto:ddsinv-ar
> Directory GL_DATA_ROOT:[GLDATA.GL_INTERFACE]
> DDSINV-AR.SFD;2          no such file
> DDSINV-AR.SF_ERROR;1      2408   16-JAN-2007 04:50:22.40   (RWED,RWED,RWED,)
> Total of 2 files, 2408 blocks.
> $
>
> The glauto and gl_data_root logicals are the same on both nodes, but
> note that only ONE of these TWO files (in the same directory) has a
> problem, so any issue with the disk directory structure should affect
> both files the same, no?  There are numerous other files in this
> directory and those are all visible from both nodes; and, no, there are
> no concealed or rooted directories on this disk.
> I have already run $ ANAL/DISK/REPAIR with no effect.
>
> I suppose I will have to delete or SET FILE/REMOVE the problematic one,
> but I'd like to understand what is causing the problem.  Has anyone seen
> anything like this?  Is there a fix (short of a reboot, which is not an
> option right now)?
>
> Ingemar
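(A sketch of the listing Hein is asking for, to be run on each node --
command form only; the file IDs shown here are invented for illustration:)

$ DIRECTORY/FILE_ID GLAUTO:DDSINV-AR.*

Directory GL_DATA_ROOT:[GLDATA.GL_INTERFACE]

DDSINV-AR.SFD;2          (4497,39,0)
DDSINV-AR.SF_ERROR;1     (4512,7,0)

$! Compare the FIDs reported by node G and node A, then look at the raw
$! directory entry and at the file header itself:
$ DUMP/DIRECTORY GL_DATA_ROOT:[GLDATA]GL_INTERFACE.DIR
$ DUMP/HEADER/BLOCK=COUNT=0 GLAUTO:DDSINV-AR.SFD;2

If both nodes report the same FID but one of them still says "no such
file", the directory entry itself is probably fine and the suspicion
shifts away from the directory file toward per-node caching on the
complaining node.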
------------------------------

Date: Tue, 16 Jan 2007 08:27:32 -0500
From: JF Mezei
Subject: BACKUP causing alpha node to crash
Message-ID: <45acd2fc$0$8627$c3e8da3@news.astraweb.com>

Alpha 8.3

$ backup/log *.* $7$mia0:temp.save/save/label=BKP1

$7$mia0 is served by a VAX node (VMS 7.3) and is very flaky.

The tape mounts successfully.  Moments later, there is a PEA0 message
indicating loss of connection to the tape device (TF85/DSSI), and the red
light goes on, indicating it is in read-only mode.  Then a "mount
verification in progress" message.  Then the alpha crashes.  This is
reproducible.  Backup has not saved any files at that time.

If there is any interest in following this up, let me know exactly what I
should do to collect the information you need to track this down.  Assume
I am a newbie.

Interestingly, the tape drive is served by only one of the 2 nodes attached
to the DSSI.  (The following SHOW DEV/FULL is after a reboot.)

Magtape $7$MIA0: (WHEEL), device type TF85, is online, controller supports
tape data caching (write-back cache enabled), file-oriented device,
available to cluster, error logging is enabled.

    Error count                    0    Operations completed              0
    Owner process                 ""    Owner UIC                  [SYSTEM]
    Owner process ID        00000000    Dev Prot       S:RWPL,O:RWPL,G:R,W
    Reference count                0    Default buffer size            2048
    Density                 1600 bpi    Format                    Normal-11
    Host name                "WHEEL"    Host type, avail  VAX 4000-200, yes
    Allocation class               7

  Volume status:  no-unload on dismount, odd parity.

Both VELO and WHEEL have TMSCP_LOAD 1 and TMSCP_SERVE_ALL = 1.
However: the alpha that crashes has TMSCP_LOAD set to 0.  Does that matter?
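(The TMSCP parameters are easy to check; this is just a sketch of the
SYSGEN commands, with no claim that TMSCP_LOAD is related to the crash:)

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW TMSCP_LOAD
SYSGEN> SHOW TMSCP_SERVE_ALL
SYSGEN> EXIT

As I understand it, TMSCP_LOAD only controls whether a node loads the TMSCP
server and offers its own locally connected tapes to the cluster; a node
that merely uses a tape served by another node (as the alpha does here)
does not need it set.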
------------------------------

Date: Tue, 16 Jan 2007 19:36:52 +1030
From: Mark Daniel
Subject: Re: Blast from the 1988s (DEC proposal)
Message-ID: <12qp5hbomi7s326@corp.supernews.com>

Larry Kilgallen wrote:
> In article <87ac0kdrjo.fsf@k9.prep.synonet.com>,
> prep@k9.prep.synonet.com writes:
>
>>JF Mezei writes:
>>
>>>At the time, the graphs showed from the all mighty microvax II to
>>>the VAX 8978.  The document mentions up to 15 8800 nodes in a
>>>cluster.  (when did it go from 15 to 96 nodes in a cluster ?)
>>
>>AIR, the 15* was the limit on the number of nodes on a single system
>>volume, not the cluster size limit.  I *THINK* that back then the size
>>of a cluster was limited by the size of the statically allocated
>>System Director Vector for the DLM.
>
> Formerly there was a distinction between the total number of nodes
> supported in a cluster and the number of large nodes supported in
> a cluster.  This was based on what had been tested, not theoretical
> limits.

From memory (and it's been a while since I've actually seen a CI, though I
think we've still got the cables between our two, redundant machine rooms):
it was 32 nodes per CI (the star coupler), half of which could be VMS
systems and the other half storage controllers (e.g. HSCs).  The number of
CIs (redundant paths or independent storage) on any one system depended on
the size of the iron.  We (HFRD->WASD->SSD->ISRD) had, over the span of
more than a decade, a time-variable collection of systems based on a
mixed-interconnect cluster, including a handful of CI-based systems (at one
stage a VAX 9000), along with NI-based VAXservers, VAXstations, DECstations,
AlphaServers, AlphaStations, etc.  Our main cluster exceeded 70 systems at
one stage.

------------------------------

Date: 16 Jan 2007 05:21:24 -0800
From: etmsreec@yahoo.co.uk
Subject: Re: Blast from the 1988s (DEC proposal)
Message-ID: <1168953683.119479.272300@51g2000cwl.googlegroups.com>

I think you're right, Mark - the message I've heard from a number of people
is that it's a case of needing to take over lots of people's workstations
for a weekend in order to gather enough systems together to form a cluster
that big.  Most companies only have that number of PCs available for this
kind of job; few have that many VMS systems available.

The limit probably got changed with the introduction of LAVc.

Steve

Mark Daniel wrote:
> Mark Daniel wrote:
> > Larry Kilgallen wrote:
> >
> >> In article <87ac0kdrjo.fsf@k9.prep.synonet.com>,
> >> prep@k9.prep.synonet.com writes:
> >>
> >>> JF Mezei writes:
> >>>
> >>>> At the time, the graphs showed from the all mighty microvax II to
> >>>> the VAX 8978.  The document mentions up to 15 8800 nodes in a
> >>>> cluster.  (when did it go from 15 to 96 nodes in a cluster ?)
> >>>
> >>> AIR, the 15* was the limit on the number of nodes on a single system
> >>> volume, not the cluster size limit.  I *THINK* that back then the size
> >>> of a cluster was limited by the size of the statically allocated
> >>> System Director Vector for the DLM.
> >>
> >> Formerly there was a distinction between the total number of nodes
> >> supported in a cluster and the number of large nodes supported in
> >> a cluster.  This was based on what had been tested, not theoretical
> >> limits.
> >
> > From memory (and it's been a while since I've actually seen a CI,
> > though I think we've still got the cables between our two, redundant
> > machine rooms): it was 32 nodes per CI (the star coupler), half of
> > which could be VMS systems and the other half storage controllers
> > (e.g. HSCs).  The number of CIs (redundant paths or independent
> > storage) on any one system depended on the size of the iron.  We
> > (HFRD->WASD->SSD->ISRD) had, over the span of more than a decade, a
> > time-variable collection of systems based on a mixed-interconnect
> > cluster, including a handful of CI-based systems (at one stage a
> > VAX 9000), along with NI-based VAXservers, VAXstations, DECstations,
> > AlphaServers, AlphaStations, etc.  Our main cluster exceeded 70
> > systems at one stage.
>
> What I would have added had I not inadvertently hit [send]: IIRC the 96
> nodes was a supported maximum because that was the largest cluster
> Engineering had managed to configure on the test-bench.  I'm (idly)
> curious about design limitations (ignoring the practical limitations of
> memory, bandwidth, etc.) on cluster size.  Things like the 65k DECnet
> node limit.
(Hope the spelling's less objectionable this time, Steve.)

------------------------------

Date: 16 Jan 2007 03:52:02 -0800
From: "Volker Halle"
Subject: Re: Cluster connection lost when one link fails?
Message-ID: <1168948322.838833.195460@a75g2000cwd.googlegroups.com>

Roy,

if you've seen this problem with DEGXAs, did you escalate it to HP ?  This
is the only way to get problems solved these days, and both Malcom and I
went this route to make sure this bug got fixed - once and for all.

If the problem had been in a common code module used by all LAN drivers,
then chances are high that the solution would also be included in all LAN
drivers in the next patch kit.  But if this was a specific fix in the
DE500BA LAN driver module, the other LAN drivers will most likely not be
analyzed and fixed, even if the 'bad code' had been used in the other
drivers as well.

Volker.

------------------------------

Date: Tue, 16 Jan 2007 12:17:45 +0000
From: "R.A.Omond"
Subject: Re: Cluster connection lost when one link fails?
Message-ID:

Volker Halle wrote:
>
> if you've seen this problem with DEGXAs, did you escalate it to HP ?
> This is the only way to get problems solved these days, and both Malcom
> and I went this route to make sure this bug got fixed - once and for
> all.

Hallo Volker,

as it happens, we're right this moment in the process of raising it with
HP.  We need to sort out some contractual issues first, since being VMS
7.3-2, this is subject to prior-version support.

> If the problem had been in a common code module used by all LAN
> drivers, then chances are high that the solution would also be included
> in all LAN drivers in the next patch kit.  But if this was a specific
> fix in the DE500BA LAN driver module, the other LAN drivers will most
> likely not be analyzed and fixed, even if the 'bad code' had been used
> in the other drivers as well.

This is what surprises me; given the existence of the problem in more than
one LAN driver, I'd expect it to be highly likely to be a generic problem.

So we have the problem in DE500s and DEGXAs.  Has anyone else seen this in
any other types of LAN cluster connections ?

Obvious question: what exactly did the fix change ?

Roy Omond
Blue Bubble Ltd.

------------------------------

Date: Tue, 16 Jan 2007 07:02:03 -0700
From: John Nebel
Subject: Re: Cluster connection lost when one link fails?
Message-ID: <45ACDADB.10202@csdco.com>

R.A.Omond wrote:
> Volker Halle wrote:
>
>> if you've seen this problem with DEGXAs, did you escalate it to HP ?
>> This is the only way to get problems solved these days, and both Malcom
>> and I went this route to make sure this bug got fixed - once and for
>> all.
>
> Hallo Volker,
>
> as it happens, we're right this moment in the process of raising it
> with HP.  We need to sort out some contractual issues first, since
> being VMS 7.3-2, this is subject to prior-version support.
>
>> If the problem had been in a common code module used by all LAN
>> drivers, then chances are high that the solution would also be included
>> in all LAN drivers in the next patch kit.  But if this was a specific
>> fix in the DE500BA LAN driver module, the other LAN drivers will most
>> likely not be analyzed and fixed, even if the 'bad code' had been used
>> in the other drivers as well.
>
> This is what surprises me; given the existence of the problem in
> more than one LAN driver, I'd expect it to be highly likely to be
> a generic problem.
>
> So we have the problem in DE500s and DEGXAs.
> Has anyone else seen this in any other types of LAN cluster connections ?
>
> Obvious question: what exactly did the fix change ?
>
> Roy Omond
> Blue Bubble Ltd.

This does not appear to be a problem with DEGXA and DEGPA - I moved a set
of backup LAN connections over the weekend and yesterday and did plenty of
unplugging and plugging, enough so it was not likely dumb luck that things
stayed up.

VMS 7.3-2 update 9, VMS 7.3-2 update 7, VAX 7.3 2xDEMNA, remote node over
QMOE (single ethernet connection) at VMS 7.3-2 update 5.  Local nodes also
on CI.

John Nebel

------------------------------

Date: 16 Jan 2007 07:41:37 -0800
From: "Volker Halle"
Subject: Re: Cluster connection lost when one link fails?
Message-ID: <1168962097.824798.90090@m58g2000cwm.googlegroups.com>

Roy,

the DEGXA driver is written in C whereas the DE500BA driver is written in
MACRO-32.  It seems unlikely that both would have the SAME error !  Sure -
there can be different coding errors leading to the same symptoms...

The circumstances of this problem showing up may be sufficiently rare,
which would explain why it's not been seen (and reported !) up to now.

You would have to ask HP for a description of the fix or wait for a future
LAN patch kit.

Volker.

------------------------------

Date: 16 Jan 2007 08:17:42 -0800
From: "bclaremont"
Subject: Re: Diskmizer 2.1 License
Message-ID: <1168964262.067191.270330@a75g2000cwd.googlegroups.com>

Nope, no MOD_UNITS available.

------------------------------

Date: 16 Jan 2007 05:33:24 -0800
From: etmsreec@yahoo.co.uk
Subject: Re: ES45 versus ES47
Message-ID: <1168954402.409891.113150@l53g2000cwa.googlegroups.com>

From what I've been told and seen... Integrity's cheaper, but if you want
Alpha then you probably want to run your own benchmarks on the ES45 and
ES47.  For some workloads the 45 is the winner, for others it's the 47.
No clear outright winner for all cases.

In other words, YMMV.

Steve

Robert Deininger wrote:
> In article <1168531305.301331.216830@p59g2000hsd.googlegroups.com>,
> "jbigboote" wrote:
>
> > From what I've read it looks like there is no clear winner in
> > performance between the ES45 and ES47; specifically with a 4 CPU
> > configuration (assuming 1.25GHz and 1.15GHz respectively), and 8-16GB
> > RAM.  Of course the ES45s are cheaper than the ES47s, and cost is a
> > consideration.  Does the performance change at all when they are
> > clustered (two or three nodes)?
> > How about long-term supportability?  Any reason to think the ES47s
> > might have a longer supported lifespan than the ES45s?
>
> Unless there is a specific reason to use an Alpha, an rx3600 or rx6600
> Integrity server would probably be a better choice.  Less expensive,
> probably much better performance (depending on workload) and far better
> access to new I/O technology.

------------------------------

Date: 16 Jan 2007 05:25:34 -0800
From: etmsreec@yahoo.co.uk
Subject: Re: Odd DSSI/Cluster behaviour. Phantom disks appear online
Message-ID: <1168953934.196467.197210@38g2000cwa.googlegroups.com>

Whether it makes a difference or not I'm not sure, but remember that DSSI
disk and tape devices are not seen in the same way as SCSI devices - the
DSSI devices are seen as nodes in their own right by the cluster.

Maybe SHOW CLUSTER/CONT adds some explanation as to what's going on and
what the cluster sees?

Steve

JF Mezei wrote:
> This is not important, but interesting / odd !
>
> Node Velo and Wheel (VAX 7.3) have shared DSSI access to 5 drives $4$dia1
> to $4$dia5.
>
> Nodes Chain and Bike are Alphas at 8.3
>
> 4 of those drives are dismounted from all 4 nodes.
>
> The 4 drives are physically taken out of the DSSI slots.
> All 4 nodes now show them as HostUnavailable.
>
> Velo is rebooted.  It, and the 2 alphas, now see those 4 drives as online
> and served by the newly rebooted VELO !  Wheel still sees them as
> HostUnavailable.
>
> I assume WHEEL made those drives available to VELO, which ignored the
> HostUnavailable status and announced it could serve those drives too
> without checking that it could in fact access them.  So Chain and Bike
> thought the drives were on-line again !
>
> I tried to mount one of the drives for fun, and it asked me to load the
> device (like for a tape).
>
> ----
>
> Now, I shut down both VELO and WHEEL at the same time and rebooted them
> (after disconnecting the 5th drive which was their system drive).  Both
> rebooted without any knowledge of those 5 drives.
>
> On the alphas, the drives remain seen (normal since disk drives never go
> away), but they are still shown as ONLINE !  None of the VAXes know about
> those devices since, when they rebooted, there was no trace of those
> drives.
>
> SYSMAN> do show dev $4$d
>
> %SYSMAN-I-OUTPUT, command execution on node BIKE
> Device             Device   Error  Volume   Free    Trans  Mnt
>  Name              Status   Count  Label    Blocks  Count  Cnt
> $4$DIA1:  (VELO)   Online       0
> $4$DIA2:  (VELO)   Online       0
> $4$DIA3:  (VELO)   Online       0
> $4$DIA4:  (VELO)   Online       0
> $4$DIA5:  (WHEEL)  Online       0
>
> %SYSMAN-I-OUTPUT, command execution on node CHAIN
> Device             Device   Error  Volume   Free    Trans  Mnt
>  Name              Status   Count  Label    Blocks  Count  Cnt
> $4$DIA1:  (VELO)   Online       0
> $4$DIA2:  (VELO)   Online       0
> $4$DIA3:  (VELO)   Online       0
> $4$DIA4:  (VELO)   Online       0
> $4$DIA5:  (WHEEL)  Online       0
>
> %SYSMAN-I-OUTPUT, command execution on node VELO
> %SYSTEM-W-NOSUCHDEV, no such device available
>
> %SYSMAN-I-OUTPUT, command execution on node WHEEL
> %SYSTEM-W-NOSUCHDEV, no such device available
>
> I guess that since neither VELO nor WHEEL knows about those devices, they
> are not sending any messages to the rest of the cluster to advise they
> are offline.
>
> Now, if I try to mount it on an alpha, I get:
>
> $ mount $4$dia1/override=id
> %MOUNT-F-MEDOFL, medium is offline
>
> Disk $4$DIA1: (VELO), device type RF73, is online, file-oriented device,
> shareable, available to cluster, error logging is enabled.
>
>     Error count                 0    Operations completed           3244
>     Owner process              ""    Owner UIC                  [SYSTEM]
>     Owner process ID     00000000    Dev Prot      S:RWPL,O:RWPL,G:R,W
>     Reference count             0    Default buffer size             512
>     Current preferred CPU Id    0    Fastpath                          1
>     Total blocks          3906420    Sectors per track                71
>     Total cylinders          2620    Tracks per cylinder              21
>     Host name              "VELO"    Host type, avail  VAX 4000-600A, yes
>     Alternate host name   "WHEEL"    Alt. type, avail   VAX 4000-200, yes
>     Allocation class            4
>
> You'd think that after a failed mounting attempt, the device would be
> marked as "offline" and host-unavailable.
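(If anyone wants to try that, a rough recipe - class names from memory, see
HELP SHOW CLUSTER for the details:)

$ SHOW CLUSTER/CONTINUOUS
Command > ADD CIRCUITS
Command > ADD CONNECTIONS
Command > EXIT

With the CIRCUITS and CONNECTIONS classes added you can watch circuits open
and close; on the nodes with a direct DSSI connection the RFxx drives show
up there as storage nodes in their own right, which should make it clearer
what VELO and WHEEL are - and are not - telling the rest of the cluster.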
------------------------------

Date: Tue, 16 Jan 2007 08:33:50 -0500
From: JF Mezei
Subject: Re: Odd DSSI/Cluster behaviour. Phantom disks appear online
Message-ID: <45acd475$0$8627$c3e8da3@news.astraweb.com>

etmsreec@yahoo.co.uk wrote:
> Maybe SHOW CLUSTER/CONT adds some explanation as to what's going on and
> what the cluster sees?

The alpha SHOW CLUSTER never shows those VAX-served drives because, to the
alphas, those drives are MSCP-served via ethernet and are thus not seen as
nodes.  And on the VAXes, since the devices no longer exist, SHOW CLUSTER
no longer shows them.
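(One more data point that might be worth collecting while the cluster is in
this state - a sketch only, to be run on the serving VAXes and on one of
the alphas:)

$ SHOW DEVICE/SERVED          ! on VELO and WHEEL: what MSCP thinks it serves
$ SHOW DEVICE/SERVED/HOST     ! which remote hosts are using those devices
$ SHOW DEVICE $4$DIA/FULL     ! on an alpha: host and alternate host paths

If VELO and WHEEL list nothing under SHOW DEVICE/SERVED while the alphas
still report VELO as the host for $4$DIA1-4, that would point at stale
device data structures on the alphas rather than at anything the VAXes are
currently advertising.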
------------------------------

Date: Tue, 16 Jan 2007 13:49:45 +0000
From: baldrick
Subject: Re: Odd DSSI/Cluster behaviour. Phantom disks appear online
Message-ID: <45acd7f8$0$8742$ed2619ec@ptn-nntp-reader02.plus.net>

JF Mezei wrote:
> This is not important, but interesting / odd !
>
> Node Velo and Wheel (VAX 7.3) have shared DSSI access to 5 drives
> $4$dia1 to $4$dia5.
>
> Nodes Chain and Bike are Alphas at 8.3
>
> 4 of those drives are dismounted from all 4 nodes.
>
> The 4 drives are physically taken out of the DSSI slots.
> All 4 nodes now show them as HostUnavailable.
>
> Velo is rebooted.  It, and the 2 alphas, now see those 4 drives as
> online and served by the newly rebooted VELO !  Wheel still sees them
> as HostUnavailable.
> ...
> You'd think that after a failed mounting attempt, the device would be
> marked as "offline" and host-unavailable.

The system(s) not rebooted will see either a physical or a logical
representation of any device they can access, regardless of that device's
final status (or fate) since that system's boot.  It will not be marked as
unavailable because the absence of that device may be temporary, and MSCP
serving is flexible enough to allow devices to be transitory, providing
that no I/O references remain.  If the device returns, it can be remounted
normally.

MSCP is multiple-path capable on the back of any usable cluster
interconnect.  MSCP will, when applicable, report an "alternate host" for
this reason.  The dodgy territory is where a device actually changes but is
named the same (e.g. swapping a 1 gig for a 2 gig disk, etc.), which comes
under the "mostly harmless" category.

To clear out a system's I/O structures, you reboot it, and to clear all
references you can do a rolling reboot of the whole cluster (it is not
necessary to take the whole cluster down).  Needless to say, if your
application is a cluster-aware distributed one, service is uninterrupted.

Regards, Nic.
--
aka. Mr. C. P. Charges, nclews "at" csc dot c o m

------------------------------

Date: 16 Jan 2007 05:24:29 -0800
From: "AEF"
Subject: Re: ODS5 and hardlinks
Message-ID: <1168953868.255529.208360@11g2000cwr.googlegroups.com>

Rob Brown wrote:
> On Mon, 15 Jan 2007, Jeff Campbell wrote:
>
> > prep@k9.prep.synonet.com wrote:
> >> Jeff Campbell writes:
> >>
> >>> A hard link keeps the file 'alive', the reverse of VMS file
> >>> aliases.  If I alias your file and you delete it, I end up with a
> >>> FNF error when I next access it.
> >>
> >> No, you SHOULD get a FILE ID/SEQUENCE NUMBER CHECK error.

Only if the header of the deleted primary file is reused, and only when you
run ANAL/DISK.  See example appended below.

> > Funny, looks like a -FNF error to me.
>
> It probably depends on whether or not the file header has been reused
> when you try to access it.
>
> --
> Rob Brown                          b r o w n a t g m c l d o t c o m
> G. Michaels Consulting Ltd.        (780)438-9343 (voice)
> Edmonton                           (780)437-3367 (FAX)
>                                    http://gmcl.com/

$ COPY NL: PRIMARY.FILE
%COPY-S-COPIED, _NLA0: copied to DISK$DATA1:[SCRATCH]PRIMARY.FILE;1 (0 records)
$ SET FILE PRIMARY.FILE/ENTER=ALIAS
$ DIR/FILE

Directory DISK$DATA1:[SCRATCH]

ALIAS.FILE;1         (3760,14,0)    0/0    16-JAN-2007 08:15:53.43
EQUITIES.DIR;1       (4614,8,0)     1/3    27-DEC-2006 14:28:19.52
PRIMARY.FILE;1       (3760,14,0)    0/0    16-JAN-2007 08:15:53.43

Total of 3 files, 1/3 blocks.
$ DEL PRIMARY.FILE;
%DELETE-I-FILDEL, DISK$DATA1:[SCRATCH]PRIMARY.FILE;1 deleted (0 blocks)
$ DIR/FILE

Directory DISK$DATA1:[SCRATCH]

ALIAS.FILE;1         no such file
EQUITIES.DIR;1       (4614,8,0)     1/3    27-DEC-2006 14:28:19.52

Total of 2 files, 1/3 blocks.
$ ANAL/DISK DISK$DATA1:
Analyze/Disk_Structure for _DSA1: started on 16-JAN-2007 08:17:17.14
%ANALDISK-I-OPENQUOTA, error opening QUOTA.SYS
-SYSTEM-W-NOSUCHFILE, no such file
%ANALDISK-W-BADDIRENT, invalid file identification in directory entry
    [SCRATCH]ALIAS.FILE;1
-ANALDISK-I-BAD_DIRHEADER, no valid file header for directory
$ TYPE ALIAS.FILE
%TYPE-W-OPENIN, error opening DISK$DATA1:[SCRATCH]ALIAS.FILE;1 as input
-RMS-E-FNF, file not found
$ ED ALIAS.FILE
Input file does not exist
[EOB]
*QUIT
$ COPY NL: NEW.FILE
%COPY-S-COPIED, _NLA0: copied to DISK$DATA1:[SCRATCH]NEW.FILE;1 (0 records)
$ DIR/FILE

Directory DISK$DATA1:[SCRATCH]

ALIAS.FILE;1         no such file
EQUITIES.DIR;1       (4614,8,0)     1/3    27-DEC-2006 14:28:19.52
NEW.FILE;1           (3760,16,0)    0/0    16-JAN-2007 08:18:08.27

Total of 3 files, 1/3 blocks.
$ TYPE ALIAS.FILE
%TYPE-W-OPENIN, error opening DISK$DATA1:[SCRATCH]ALIAS.FILE;1 as input
-RMS-E-FNF, file not found
$ ANAL/DISK DISK$DATA1
Analyze/Disk_Structure for _DSA1: started on 16-JAN-2007 08:18:45.95
%ANALDISK-I-OPENQUOTA, error opening QUOTA.SYS
-SYSTEM-W-NOSUCHFILE, no such file
%ANALDISK-W-BADDIRENT, invalid file identification in directory entry
    [SCRATCH]ALIAS.FILE;1
-ANALDISK-I-BAD_DIRFIDSEQ, invalid file sequence number in directory file ID
$

AEF

------------------------------

Date: 16 Jan 2007 07:26:38 -0600
From: koehler@eisner.nospam.encompasserve.org (Bob Koehler)
Subject: Re: PDP11 Tape Copy to VAX: DOS11_BLOCKSIZE
Message-ID:

In article <1168896897.24443@smirk>, Alan Frisbie writes:
>
> That is a DOS-11 format tape.  The RSX-11 FLX program will read/write
> this format, but I don't know if there is a VMS utility that will.
> Check the early VMS Freeware and SIG tape collections.

If it's not DOS-11, what is it (you seem to recognise it)?

EXCHANGE is the VMS equivalent of FLX.  Early releases of VMS included FLX
as a compatibility mode utility.
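(For the archives, roughly how EXCHANGE gets used on a foreign tape - the
device name and file names below are invented, and the syntax is from
memory, so check HELP EXCHANGE before relying on it:)

$ MOUNT/FOREIGN MUA0:
$ EXCHANGE
EXCHANGE> DIRECTORY MUA0:*.*
EXCHANGE> COPY MUA0:SOURCE.MAC []SOURCE.MAC /TRANSFER_MODE=AUTOMATIC
EXCHANGE> EXIT
$ DISMOUNT/NOUNLOAD MUA0:

For a DOS-11 tape the interesting part is usually the record handling;
/TRANSFER_MODE=BLOCK copies the data untouched if the automatic choice
mangles it.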
------------------------------

Date: 16 Jan 2007 04:46:40 -0800
From: "Kari"
Subject: Re: Purpose of TCPIP NETWORK database ?
Message-ID: <1168951599.097429.46150@11g2000cwr.googlegroups.com>

I think you can only define SET NETWORK MYNET/ADDRESS=10.0.0.0, meaning
MYNET equals 10.0.0.0.  The information that 10.0.0.0 actually belongs to
subnet /16 must be defined elsewhere (router/switch ?).

With (BSD anyway) unix variants there is a file called /etc/netmasks where
you can do netmask definitions, but VMS seems to lack this feature.

Yes, I confess that this doesn't look like a very useful command...

-Kari-

JF Mezei wrote:
> Kari wrote:
> > /etc/networks files for the same purposes.  After defining a network
> > name, applications can use that instead of using a plain ip address
> > class.
>
> But if my subnet is 10.0.*.* (255.255.0.0) or 10.0.0.0/16, how can I
> define a network name in TCPIP> SET NETWORK if that command does not
> allow one to specify the network mask ?
>
> If this is meant to define just one node in a network, what is the
> difference between SET HOST and SET NETWORK ?
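(To make the point concrete - a sketch only, with an invented interface
name and addresses; the mask travels with the interface definition, not
with the NETWORK database entry:)

TCPIP> SET NETWORK MYNET /ADDRESS=10.0.0.0
TCPIP> SHOW NETWORK MYNET
TCPIP> SET INTERFACE WE0 /HOST=10.0.1.1 /NETWORK_MASK=255.255.0.0 /BROADCAST_MASK=10.0.255.255

So the NETWORK database really only maps a name onto a bare network number,
much like /etc/networks; the /16 lives with the SET INTERFACE (or SET
CONFIGURATION INTERFACE) command.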
------------------------------

Date: 16 Jan 2007 09:05:15 -0800
From: rajib_agarwala@hotmail.com
Subject: Re: RMS-E-WLK, device currently write locked
Message-ID: <1168967106.287345.75780@q2g2000cwa.googlegroups.com>

Thanks to all!  It was the button causing the write lock.  It's unlocked
now.

------------------------------

Date: Tue, 16 Jan 2007 11:18:13 +0000
From: baldrick
Subject: Re: SHOW CLUSTER and Quorum
Message-ID: <45acb474$0$8714$ed2619ec@ptn-nntp-reader02.plus.net>

Hans Bachner wrote:
> JF Mezei wrote:
>
>> View of Cluster from system ID 1034   node: BIKE   5-JAN-2007 09:16:42
>>
>> Once booted, is it also correct to state that the node-specific QUORUM
>> value is not really used since there is a real cluster-wide quorum
>> value ?
>
> Yes.  When a node boots, the quorum computed from the new node's expected
> votes, the current quorum and the number of present votes including the
> new node are compared - the biggest of these values is used.
>
>> Would the node-specific quorum value be of any use beyond the initial
>> boot?  (perhaps a limit on how low a SET CLUSTER/EXPECTED_VOTES could
>> go?)
>
> No - SET CLUSTER /EXPECTED is the mechanism to go *below* the expected
> votes value computed from the individual nodes' settings (or the current
> active settings in the cluster), e.g. because a voting node has left the
> cluster for a longer period of time.

Just to add to this: when you are down to your last two voting nodes (and
no quorum disk) and you specify the REMOVE_NODE shutdown option, the system
shutting down enables itself to run without quorum, removes its vote, then
effectively issues a SET CLUSTER/EXPECTED_VOTES which, for 1 vote, will
yield a quorum of 1 - hence the cluster may be reduced to a single voting
node.  The other node will of course hang, but cluster membership
processing continues, and the hang is resolved when the adjustment
described above is issued.

Nic.
--
aka. Mr. C. P. Charges, nclews "at" csc dot c o m

------------------------------

Date: 16 Jan 2007 10:21:44 -0800
From: "Hein RMS van den Heuvel"
Subject: Re: Who has a record locked
Message-ID: <1168971704.469157.138300@11g2000cwr.googlegroups.com>

Bart Z. Lederman wrote:
> In article <45ad0c58$0$496$815e3792@news.qwest.net>, "Michael D. Ober"
> writes:
> > I occasionally need to be able to find out which process has a specific
> > RMS record locked in an indexed file.
> :
> There are a couple of ways to do this, but the easiest way is
> with Availability Manager.  It decodes RMS locks very well.  Just
> get AM running, open the node, and go to the LOCKS display.

Yeah but... you'll need some process actively waiting for the lock to
trigger AM and other tools.  So it is easy when the program does an RMS
$GET with ROP=WAT (DCL in 8.3: READ/WAIT).

Just use my BLOCKING tool
(http://h71000.www7.hp.com/freeware/freeware60/rms_tools/), or David
North's BLOCKING.ZIP, or AM, or even a simple ANAL/SYS... SET PROC...
SHOW PROC/LOCK... SHOW LOCK...  (or SHOW PROC/RMS=(RAB,RLB) to see what the
process was trying to get).

However, if the process just returns "record locked" then it gets trickier!

See also:
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=824830
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1068472
and others...

Hth,
Hein van den Heuvel
HvdH Performance Consulting

------------------------------

End of INFO-VAX 2007.032
************************