From: MERC::"uunet!CRVAX.SRI.COM!RELAY-INFO-VAX" 12-APR-1993 23:05:55.33
To:   Info-VAX@KL.SRI.COM
CC:
Subj: *Still* More Dangerous Code (was Re: Declaring Exit Handler)

In article <1993Apr12.011306.19321@dbased.nuo.dec.com>,
lionel@quark.enet.dec.com (Steve Lionel) writes:
>
> [Preceding discussion deleted to conserve bandwidth]
>
> Going back to the original question - I'd like to know how the data
> is "buffered".  If it is waiting in an application data area and the
> application doesn't actually write it until sometime later, perhaps
> it would be better to change the application so that it did the writes
> as the data becomes available, and use RMS multibuffering, available
> through a FORM string, to allow RMS to do the I/O in parallel with the
> application (though Ada I/O is already asynchronous).  If this happens,
> then RMS will get the data written upon image exit.
>

First, I'll preface this by remarking that one of the shortcomings of my
rundown handler was that you could never do RMS or language-specific I/O
with it, because doing so would (again) crash the system.

However, reading the above posting reminded me of another driver I had
written (and it _still_ works :) that *might* be applicable for this
requirement.  I invite anyone to comment on it; let's see where this one
leads.

CAUTION: This post is long.

A while back I wrote a device driver whose purpose was to serve as an
"application interface."  It is a somewhat enhanced (and therefore
specialized) version of the mailbox driver.  Schematically:

    +------------------+                    +----------------+
    ! User Application !----+        +------! Server Process !
    +------------------+    +--------+      +----------------+
                            ! Driver !
    +------------------+    +--------+      +----------------+
    ! User Application !----+        +------! Server Process !
    +------------------+                    +----------------+

It allows any number of user connections and/or server connections.  The
driver itself does no data synchronization: that remains the
responsibility of the server processes, which are expected to have the
requisite intelligence to know what each counterpart is doing.

The driver maintains several queues of IRPs.  They are:

   - A global "virgin" list of untouched client IRPs
   - A global list of pending server "read" IRPs
   - A list (per server) of assigned client IRPs
   - A list of exit-notification servers

The procedure begins when each server process assigns a channel to the
driver, declares itself to be a server, queues an IO$_READVBLK on the
channel, and waits for incoming traffic.

Next, a client comes along and assigns a channel to the driver.  The
client can issue one of several I/O functions to the driver.  They are:

   IO$_READVBLK   - Roughly analogous to the TTDRIVER's IO$_READPROMPT,
                    but with the prompt and read buffers merged into one
                    entity, passed indirectly in P1.

   IO$_WRITEVBLK  - An "asynch" version of READVBLK, allowing the client
                    to pass one-way information to the server.

   IO$_READLCHUNK - The client optionally issues this when it needs to
                    receive unsolicited I/O from the server(s).

The server issues any of the following functions:

   IO$_ASSIGN     - Informs the driver that this is a server process.

   IO$_READVBLK   - Receives an incoming buffer from a client.  The
                    server process acts upon the buffer, updates any
                    applicable fields in it, and then returns it to the
                    driver via a WRITE.

   IO$_WRITEVBLK  - Returns the buffer to the driver for disposal.  If
                    the client's function was a READ, the updated buffer
                    is copied back to the client's buffer (via the
                    client's P1) and the client's IRP is sent to
                    COM$POST.  If the client's function was a WRITE,
                    only the IOSB is updated before the client's IRP is
                    sent to COM$POST.

   IO$_READLCHUNK - Receives unsolicited event data.  For the server
                    process, this includes the exit of any client, or of
                    any server that had outstanding client requests.
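To make the client side concrete, here is a minimal sketch in C of what
a synchronous client call might look like.  Everything driver-specific
in it is an assumption: the device name "APPIF0:", the 512-byte buffer,
and the use of P1/P2 for the merged buffer and its length are all
placeholders for whatever the real driver actually defines.

    #include <descrip.h>
    #include <iodef.h>
    #include <ssdef.h>
    #include <starlet.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        $DESCRIPTOR(devnam, "APPIF0:");   /* hypothetical device name */
        unsigned short chan;
        unsigned short iosb[4];
        char buf[512];
        unsigned int status;

        status = sys$assign(&devnam, &chan, 0, 0);
        if (!(status & 1)) return status;

        /* Merged prompt/read buffer: fill in the request, then issue
           the READ.  The driver hands buf to a server, which updates
           it in place; the QIO completes when the server WRITEs it
           back. */
        strcpy(buf, "request for the server");
        status = sys$qiow(0, chan, IO$_READVBLK, iosb, 0, 0,
                          buf, sizeof(buf), 0, 0, 0, 0);
        if (status & 1) status = iosb[0];   /* final status is in IOSB */
        if (!(status & 1)) return status;

        printf("server replied: %s\n", buf);
        return SS$_NORMAL;
    }

The single QIO both sends the request and collects the server's in-place
reply, which is what makes the merged-buffer READ convenient.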
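On the server side, the service loop might look something like the
following, under the same assumptions.  IO$_ASSIGN is this driver's
private function code, so the definition below is purely a compile-time
placeholder:

    #include <descrip.h>
    #include <iodef.h>
    #include <ssdef.h>
    #include <starlet.h>

    /* IO$_ASSIGN is private to this driver; borrow an existing code
       here purely so the sketch compiles. */
    #define IO$_ASSIGN IO$_SETMODE

    int main(void)
    {
        $DESCRIPTOR(devnam, "APPIF0:");   /* hypothetical device name */
        unsigned short chan;
        unsigned short iosb[4];
        char buf[512];
        unsigned int status;

        status = sys$assign(&devnam, &chan, 0, 0);
        if (!(status & 1)) return status;

        /* Declare ourselves to the driver as a server process. */
        status = sys$qiow(0, chan, IO$_ASSIGN, iosb, 0, 0,
                          0, 0, 0, 0, 0, 0);
        if (!(status & 1)) return status;

        for (;;) {
            /* Wait for the driver to hand us a client buffer... */
            status = sys$qiow(0, chan, IO$_READVBLK, iosb, 0, 0,
                              buf, sizeof(buf), 0, 0, 0, 0);
            if (!(status & 1)) break;

            /* ...act on it, updating fields in place (application-
               specific work goes here)... */

            /* ...and return it for disposal; the driver copies it back
               to a READ client and posts the client's IRP. */
            status = sys$qiow(0, chan, IO$_WRITEVBLK, iosb, 0, 0,
                              buf, sizeof(buf), 0, 0, 0, 0);
            if (!(status & 1)) break;
        }
        return status;
    }

A production server would presumably use sys$qio with ASTs rather than
blocking in sys$qiow, and would keep an IO$_READLCHUNK outstanding to
catch the exit notifications described above; both are omitted here for
brevity.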
Salient points of how the I/O path through the driver operates are as
follows:

   1) Client I/O is passed to the server process(es) in a FIFO manner,
      based upon the order in which the driver receives the READs.

   2) If there are no pending server READs, the client's I/O is placed
      in the Virgin queue, in FIFO fashion.

   3) When a server issues a READ, anything in the Virgin queue is
      extracted first.  If the Virgin queue is empty, the server's READ
      goes into the Pending queue.

   4) If a server exits, the driver scans that server's queue for any
      client IRPs assigned to it.  Those IRPs are then redistributed to
      the remaining servers (if they have pending READs), or placed back
      into the Virgin queue, but in LIFO fashion.  Any remaining servers
      are notified of the exiting server's status.

   5) If a client exits, each server that has outstanding I/Os from that
      client receives an Exit message informing it that the client has
      exited.

The driver could be expanded with little effort to accept alternate
entry points callable by a user-written system service, which also
happens to have a rundown handler built into it.  Then, if your process
were killed, your rundown code could set up a final buffer of messages
to be passed to the driver (and thus to the servers) without fear of
crashing the system.

Thus, you would have a language-independent means of fulfilling your I/O
requirements, one that can also handle process terminations AND has
redundant-process capability built in, and can even serve as a passive
analog of a "watchdog timer" within the system.

What does everyone think about this idea?
--
Bill Laut                             Internet:  laut@alien.gici.com
Gull Island Consultants, Inc.         Phone:     (616) 780-3321
Muskegon, MI  49440

        >> "Usual disclaimers, apply within" <<