From: SMTP%"hasan@state.demon.co.uk" 12-MAY-1993 12:41:50.92
To: EVERHART
CC:
Subj: Re: Pros and Cons of single system disc in DSSI VAXcluster

X-Newsgroups: comp.os.vms
From: hasan@state.demon.co.uk (Hasan Ali)
Subject: Re: Pros and Cons of single system disc in DSSI VAXcluster
Distribution: world
Organization: State Modules LTD
Reply-To: hasan@state.demon.co.uk
X-Mailer: Simple NEWS 1.90 (ka9q DIS 1.19)
Lines: 64
Date: Tue, 11 May 1993 13:22:36 +0000
Message-Id: <737126556snz@state.demon.co.uk>
Sender: usenet@demon.co.uk
To: Info-VAX@kl.sri.com
X-Gateway-Source-Info: USENET

In article <737071339snz@kestrel.demon.co.uk> ken@kestrel.demon.co.uk writes:

>I am about to set up our first dual-host DSSI VAXcluster and I am looking
>for advice on the configuration of system discs. I'm sure this is a tiny
>system compared to the large clusters many of you are managing, but I would
>be grateful for the benefit of your knowledge and experience.
>
>I can think of three possibilities:
>
>1. A single system disc. Easiest to manage, but lose it and you lose the
>   whole cluster. Also, there must be a price to pay in performance with
>   both systems using the same system disc.

I set up a system of this type back in November last year. The system
consisted of a couple of 4400s, ten RF72s and an RF35 as the system disk.
We had some money floating around in my department that had to be spent,
so I managed to solve the problem in a nice, but expensive, way: phase II
volume shadowing and another RF35. If you're an academic site, though, the
license costs next to nothing (under DEC's all-licenses-for-about-2000
deal).
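As a rough sketch of what that shadowed-system-disk setup involves (the
parameter names are the standard VMS SYSGEN ones, but the device names and
values here are illustrative, not my actual configuration):

```
! MODPARAMS.DAT additions on each node (then run AUTOGEN)
SHADOWING = 2          ! enable phase II (host-based) volume shadowing
SHADOW_SYS_DISK = 1    ! boot from a shadowed system disk

! Mounting a two-member shadow set for a data disk looks like:
$ MOUNT/SYSTEM DSA1: /SHADOW=($1$DIA1:,$2$DIA1:) DATA1
```

With both members on separate DSSI buses, either node can keep the shadow
set available if the other node (or one member disk) goes away.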
There hasn't been any particularly significant improvement in performance,
but I have a system running a rather maverick Oracle-based application, and
the IO queues on the database disks outweigh everything else :-) BTW, this
takes my system up to maximum DSSI capacity; if I wanted more disks I'd
have to go to a SCSI adaptor (MTI do something like this that allows about
40 DSSI nodes!).

>What would the quorum setting be for these configurations?

On my setup I have a disk designated as the 'quorum' disk, so:

    QDSKVOTES = 1
    VOTES = 1           ! on each node
    EXPECTED_VOTES = 3  ! THIS is the relevant one to you: QUORUM has been
                        ! replaced with EXPECTED_VOTES since VMS 5.0

EXPECTED_VOTES should be equal to the maximum number of votes in your
cluster (sum of the nodes' votes + quorum disk votes).

>What are the pros and cons of having all storage in an expansion box
>separate from the two processor boxes?

I would say mostly pros. I have all my disks in two expansion boxes, both
coming off different power supplies (though the same phase, of course :-)).
If I lose one power supply, half the system keeps going; the disk mapping
is such that the remaining node should be able to continue to run
unimpeded. Of course, this wouldn't be possible without volume shadowing
or similar. The cons are mainly cost; the boxes are pretty small and there
shouldn't be any problem in physically accommodating them.

>Most of DEC's discussion on this subject focuses on redundancy to protect
>against CPU failure. I'm more concerned about the system disc being a
>single point of failure, which in my experience is more common (though I
>haven't had an RF disc fail yet...). Have I got my priorities wrong?
>
>Ken

CPU failure is unlikely (although I've recently heard of a combination of
CPU and memory failure bringing a 6640 to its knees; DEC eventually did a
box swap), and disks *are* far more likely to fail, so I would say your
priorities are reasonably appropriate!
--
Hasan Ali