From:	SMTP%"MACRO32%WKUVX1.BITNET@uu7.psi.com" 20-MAY-1993 09:41:40.97
To:	EVERHART
CC:
Subj:	Re: Buffered or Direct I/O in DECnet-VAX?

X-Listname: "VMS Internals, MACRO, & BLISS Discussions"
Warnings-To: <>
Errors-To: MacroMan%WKUVX1.BITNET@uu7.psi.com
Sender: MacroMan%WKUVX1.BITNET@uu7.psi.com
From: jeh%cmkrnl.com@ULKYVM.LOUISVILLE.EDU
Reply-To: MACRO32%WKUVX1.BITNET@uu7.psi.com
X-Newsgroups: comp.os.vms,vmsnet.internals
Subject: Re: Buffered or Direct I/O in DECnet-VAX?
Message-Id: <1993May19.190214.2020@cmkrnl.com>
Date: 19 May 93 19:02:14 PDT
Organization: Kernel Mode Systems, San Diego, CA
Lines: 78
To: MACRO32@WKUVX1.BITNET
X-Gateway-Source-Info: USENET

In article , mme32505@uxa.cso.uiuc.edu (Matthew England) writes:
> I'm looking into message-size limitations in DECnet-VAX.
>
> I want to be able to know whether or not the DECnet-VAX operations
> QIO(READVBLK, WRITEVBLK) are using direct or buffered I/O when
> exchanging data messages between DECnet-VAX tasks.

For writes, the behavior of the implementation which I most recently looked
at (VMS V5.5) is that it's normally buffered; but if both tasks are running
on the same node, and if the buffer is large enough to make it worthwhile,
it's a form of direct I/O (the process buffer is double-mapped into system
address space).

But you aren't supposed to know any of this, and you certainly shouldn't take
advantage of it in your code; it's undocumented behavior, therefore it may
change.  I.e., if you are queueing multiple writes to the link (not that that
will buy you anything), you should use a separate process-space buffer for
each, even though buffered I/O is being used.

> It would be even
> nicer if I could _control_ whether or not it does so.  Is there any way
> to support either of these capabilities?

No, it's up to NETDRIVER.

> I'd like to do this in order to know whether or not there might be a
> problem with the BYTLM, BIOLM, or DIOLM quotas (since these seem to be
> the only limits in terms of buffer transfer size for $QIO) before I
> haul off and start exchanging messages between two tasks.

$QIOs to DECnet links aren't limited by BYTLM even though buffered I/O is
the norm.

The BIOLM and DIOLM situation is complicated.  Normally these requests count
against BIOLM, but if NETDRIVER decides to make one a direct I/O, the charge
is returned to your BIOLM and your DIOLM is checked and decremented instead!
So for the truly general case you need one unit of BIOLM *and* one of DIOLM
available.

But there is NO performance advantage in doing multiple queued writes or
reads to DECnet links, so why worry about it?  Just leave one read queued
all the time, and write to the link when you need to.

> It would also
> be nice to limit the possibilities of what might have happened upon
> receiving a SS$_EXQUOTA status returned from a QIO call (let's say I'm
> using QIOW).

Why don't you just leave process resource wait enabled?  (And it doesn't
matter whether you're using $QIO or $QIOW.)

> (I am familiar with the IO_MULTIPLE function for DECnet-VAX QIO; I'd
> like to avoid having to use that in my program unless I have to, i.e.,
> if I know that I failed because of a BYTLM violation.)

If there is a limit these days as to the size of buffers that may be sent
over DECnet logical links, I haven't found it.  It's definitely much more
than 64K, and it's definitely more than the PIPELINE QUOTA.  I've sent 100K
buffers with no problems (no SS$_EXQUOTA messages either).
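For reference, here is a minimal sketch in C of the transparent task-to-task
$QIO usage under discussion: assign a channel to the target task, write with
IO$_WRITEVBLK, keep one IO$_READVBLK queued.  The node and object names
(MYNODE, SERVER) are made up and error handling is reduced to bare status
checks; treat it as an illustration, not production code.

/* Minimal sketch (illustration only): transparent DECnet-VAX task-to-task
 * I/O with $QIO.  Node/object names are hypothetical. */
#include <stdio.h>
#include <descrip.h>
#include <iodef.h>
#include <starlet.h>

typedef struct { unsigned short status, count; unsigned int devinfo; } IOSB;

int main(void)
{
    $DESCRIPTOR(task, "MYNODE::\"TASK=SERVER\"");   /* hypothetical target */
    unsigned short chan;
    char rbuf[4096];
    char wbuf[] = "hello over the logical link";
    IOSB iosb;
    unsigned int st;

    st = sys$assign(&task, &chan, 0, 0);    /* builds the logical link */
    if (!(st & 1)) return st;

    /* One write.  NETDRIVER decides buffered vs. direct; the caller cannot
     * control it.  Resource wait mode is on by default, so a momentary
     * quota shortfall stalls the request rather than returning
     * SS$_EXQUOTA (see $SETRWM). */
    st = sys$qiow(0, chan, IO$_WRITEVBLK, &iosb, 0, 0,
                  wbuf, sizeof wbuf - 1, 0, 0, 0, 0);
    if ((st & 1) && (iosb.status & 1))
        printf("sent %u bytes\n", (unsigned) iosb.count);

    /* Keep one read outstanding; multiple queued reads or writes on a
     * logical link buy nothing. */
    st = sys$qiow(0, chan, IO$_READVBLK, &iosb, 0, 0,
                  rbuf, sizeof rbuf, 0, 0, 0, 0);
    if ((st & 1) && (iosb.status & 1))
        printf("received %u bytes\n", (unsigned) iosb.count);

    return sys$dassgn(chan);                /* disconnects the link */
}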
The main reason for the existence of IO$M_MULTIPLE, as far as I can tell, is
to let a writing program send multiple messages that are received as a single
message, without requiring the writing program to "assemble" the message
itself.  Or, to deal with messages in "pieces" when the single message would
be too large for the program's buffers.  (A sketch of the write side of this
appears after the signature.)

> The main purpose that I can see for using

> The only thing that I could think of was maybe there is a correlation
> between the virtual, logical, and physical I/O classification set and
> the buffered and direct I/O classification set, but I haven't read
> anything in any documentation to suggest that.

Nor will you ever.  V, L, and P I/O has nothing to do with direct vs.
buffered.  (I suppose someone could write a driver so that this relationship
does hold, but it'd be counter to the way things are supposed to be done on
VMS.)

	--- Jamie Hanrahan, Kernel Mode Systems, San Diego CA
drivers, internals, networks, applications, and training for VMS and Windows NT
uucp 'g' protocol guru and release coordinator, VMSnet (DECUS uucp) W.G., and
Chair, Programming and Internals Working Group, U.S. DECUS VMS Systems SIG
Internet: jeh@cmkrnl.com   Uucp: uunet!cmkrnl!jeh   CIS: 74140,2055
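Sketch of the IO$M_MULTIPLE write usage described above, under the same
assumptions as the earlier example (an already-established channel, a caller-
chosen piece size).  The exact completion semantics are in the DECnet-VAX
programming documentation; this is illustrative only.

/* Send one logical-link message in pieces.  Each piece but the last carries
 * IO$M_MULTIPLE; the final plain IO$_WRITEVBLK ends the message, and the
 * receiver sees a single message. */
#include <iodef.h>
#include <starlet.h>

typedef struct { unsigned short status, count; unsigned int devinfo; } IOSB;

unsigned int send_in_pieces(unsigned short chan, const char *data,
                            unsigned int len, unsigned int piece)
{
    IOSB iosb;
    unsigned int st, off = 0;

    while (len - off > piece) {            /* all but the final piece */
        st = sys$qiow(0, chan, IO$_WRITEVBLK | IO$M_MULTIPLE, &iosb, 0, 0,
                      (char *) data + off, piece, 0, 0, 0, 0);
        if (!(st & 1)) return st;
        if (!(iosb.status & 1)) return iosb.status;
        off += piece;
    }
    /* Final write without the modifier terminates the message. */
    st = sys$qiow(0, chan, IO$_WRITEVBLK, &iosb, 0, 0,
                  (char *) data + off, len - off, 0, 0, 0, 0);
    return (st & 1) ? iosb.status : st;
}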