Path: news.mitre.org!blanket.mitre.org!philabs!newsjunkie.ans.net!newsfeeds.ans.net!news-was.dfn.de!news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!nntprelay.mathworks.com!news.mathworks.com!uunet!in3.uu.net!bulb.garlic.com!not-for-mail
From: Anne & Lynn Wheeler
Newsgroups: comp.arch
Subject: Re: IA64 Self Virtualizable?
Date: 20 Nov 1997 17:03:30 -0800
Organization: South Valley Internet
Lines: 85
Message-ID:
References: <64q6l9$q0v@crl.crl.com> <3470A482.CD1229E3@cs.wisc.edu> <64tj62$pm6$1@murrow.corp.sgi.com> <3474A736.36A6@boston.sgi.com>
Reply-To: Anne & Lynn Wheeler
NNTP-Posting-Host: lynn-18.garlic.com
X-Newsreader: Gnus v5.5/Emacs 19.34

the whole count-key-data out-board search was a design point trade-off for systems with 64kbyte operating system memory and several hundred kbyte/sec smart adapters. configurations were on the wrong side of the trade-off by the mid-70s; system memory for caching disk locations was by then cheaper than tying up the I/O subsystem with linear searches. I could get a 3* speed-up for most i/o intensive workloads with fixed-block architecture and a reasonable filesystem. it wasn't just the fancy ISAM stuff ... but just the simple vtoc and pds operations.

The problem with vtoc, pds, and fancy ISAM is that they did multi-track searches which tied up the device, controller, and channel (bus) for the duration of the search (multiple revolutions).

I was called in as the solution of last resort to a very large customer with a large cluster of mainframes managing a large national business. they had a frequent performance bottleneck which brought all processors in the complex to a grinding halt at the same time. It had been going on for several months and they had been unable to figure out the problem. They gave me a classroom of 15-20 tables ... all covered with paper listings 3' high of performance data. After about 3hrs of eye-balling the data ... the only correlation pattern that I observed was that one drive (out of upwards of a hundred) would peak around 6-7 I/Os per second. In non-bottleneck periods ... the drive would only be doing 3-4 I/Os per second (these are drives commonly clocked at 40-60 I/Os per second).

Turns out that the shared application program library was located on that drive ... with a large number of members and a three cylinder vtoc. Most application program loads would require a PDS vtoc search; on average the vtoc search covered 1.5 cylinders. For drives spinning at 3600rpm, with 19 tracks/cylinder ... a full-cylinder multi-track search would take .3 seconds elapsed time (during which time the drive, controller, and channel/bus were all totally locked out). A typical program load was taking 3 disk I/Os (two searches and a read) which took an aggregate elapsed time of approximately .45 seconds. The effect of doing multi-track searches (of a full cylinder) slowed the disk I/O rate down from 40-60 per second to 4-6 per second (max. thruput). furthermore the multi-track search not only tied up the disk drive, but tied up a significant portion of the rest of the (shared) I/O resources in the system.

The Q&D solution for the customer was to spread the program library across multiple disks with a limit on the size of each vtoc of no more than 1/2 cylinder.
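purely as a sanity check on the numbers above, here is a small python-style sketch of the arithmetic (the constants are just the figures quoted in the text, nothing newly measured):

    # back-of-the-envelope for the shared program library drive
    rpm = 3600
    rev = 60.0 / rpm                # one revolution ~ 16.7 milliseconds
    tracks_per_cyl = 19

    # a full-cylinder multi-track search spins through every track in the
    # cylinder; drive, controller and channel are all busy the whole time
    full_cyl_search = tracks_per_cyl * rev          # ~0.32 sec

    # typical program load: two searches plus a read, ~.45 sec aggregate
    program_load_ios = 3
    program_load_elapsed = 0.45

    io_rate = program_load_ios / program_load_elapsed
    print("full-cylinder search: %.2f sec" % full_cyl_search)   # ~0.32
    print("effective i/o rate: %.1f per sec" % io_rate)         # ~6.7, vs. 40-60 nominal

the arithmetic comes out consistent with the observed 6-7 I/Os per second peak on that drive.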
Another scenario where we ran into the obsolescence of search-id technology was a large shared disk configuration consisting of both MVS and VM processors. The "rule" was that the shared disk configuration only existed for availability and an MVS disk was NEVER to be mounted on a "string" controlled by a "VM" controller.

The problem was that placing an MVS disk on a string belonging to a nominally VM controller would subject the VM controller to the same (but less severe) multi-track search "lock-out" scenarios as the shared program library problem. MVS users nominally never realized the performance degradation caused by multi-track search. However, a single MVS drive on a VM controller could be immediately perceived as a 20-30% degradation in thruput (i.e. MVS users didn't know any better ... it was only if you were used to running in a non-multi-track search environment that you would perceive the significant slow down).

The counter that was used when the MVS group accidentally mis-mounted a disk was to bring up a souped-up VS1 system on VM ... and turn it loose on the mis-mounted MVS drive. In the severe case, they could bring up a souped-up VS1 on a fully loaded VM system running at 100% utilization, and bring the MVS system to its knees (even when the MVS system was only moderately loaded and had a processor 4* faster) .... i.e. a souped-up VS1 with about 3% of the resources of a stand-alone MVS system ... could still turn SIOs to the disk around faster.

The killer that has been with us for a long time ... was that even tho I could show a 300% speed-up for a common set of disk intensive workloads converting to a fixed-block infrastructure ... the business case to rewrite MVS PDS & VTOC support was set at something like $26m.

--
Anne & Lynn Wheeler | lynn@garlic.com, lynn@netcom.com | finger for pgp key