I have thought at times about using techniques like fddriver for a sort of remote client disk. A great deal is needed, though, to get it to work properly. First, fddriver and its variants only move logical and physical I/O across the net. The second problem is related...the disk can only be mounted r/w from one site if corruption is to be avoided. While there are hacks to avoid this in a cluster (i.e., setting the cluster avail bit in the device characteristics and giving the driver a unique allocation class), they are useless there, since MSCP serving works just fine already.

What is needed begins with some hefty additions to the fddriver FDT table processing, so that file operations are done for the local system by the remote one where the disk actually is. These additions could allow the disk to be mounted r/w from several sites. One would arrange in these FDT processing areas to send requests for access, deaccess, delete, modify, etc. to the remote side (roughly sketched below). On the local side, one marks a file as open by having an FCB and a window block in existence (though they don't have to do much) and linked to the channel. If the driver fields virtual read/write for the user (and it would have to come up with some tagging scheme so the remote server understands which files are which), then the content of the window block doesn't much matter. You'd have to be able to read attributes from the real file so RMS would stay happy. If the remote disk were an ODS-2 disk, the existing logic to move logical I/O would do. If it isn't, it's necessary to interpret the requests for header info so the correct info can be generated, perhaps on the fly (also sketched below). Stuff like retrieval pointers is unimportant, but stuff like file attributes IS.

While on the subject, it's likely for non-ODS-2 disks too that you'll want to bypass the RMS behavior of reading (and caching) directories with read-logical-block. I don't know if RMS still does this in V6, but in V5 VMS it does. It can be turned off by setting the dev$v_sdi bit in the virtual device characteristics block. To restore normal RMS operation for tapes, it is desirable to have RMS (in rms0srch.mar) test the dev$v_seq bit instead of the dev$v_sdi bit. For VMS 5.5, I found that changing the byte at 8f26 in rms.exe from 4 to 5 does this. Once this is done, the XQP does all directory lookups, so FDT code can catch them. (The new instruction becomes 00008F25: BBS #05,(R9),00008F39, in case that's of interest.)

Obviously the interpretation of all possible ACP/XQP calls can be a significant amount of work; the more I think about it, the more I respect the job that people like Messrs. Kashtan and Adelman have done. Where you are just remoting an already-ODS-2 disk, you can get by with much less of this interpretation, since argument lists can be passed more or less intact to the remote side. (Indeed, these arguments are packaged somewhat similarly in order to get them to the XQP even locally, since the user buffers are in general not available directly to an ACP, and the XQP interface is very similar.) Marking the fd-type disk mounted is again a matter of allocating the VCB and AQB and filling in some of the fields, and of course making sure the rest of the logic is ready to roll. If your disk driver is going to field essentially all FDT-type requests that can be delivered there, the normal mount probably will not want to be used. The fakeup code could be hidden in the driver itself, of course.
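To make the request-forwarding idea a bit more concrete, here is roughly the sort of packaging the FDT code might do, sketched in C rather than the MACRO a real driver would be written in. The operation codes, the tag field, and every name here are invented for illustration; this is not fddriver's actual protocol.

    /* Sketch of packaging an FDT-level file operation for the network link. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    enum fdnet_op {                 /* operations the FDT code would forward */
        FDNET_ACCESS = 1,           /* open/create */
        FDNET_DEACCESS,             /* close */
        FDNET_DELETE,
        FDNET_MODIFY,               /* change attributes */
        FDNET_READVBLK,
        FDNET_WRITEVBLK
    };

    struct fdnet_request {          /* fixed-size header sent over the link */
        uint32_t op;                /* one of fdnet_op */
        uint32_t tag;               /* server-side handle for the open file */
        uint32_t vbn;               /* starting virtual block, if applicable */
        uint32_t count;             /* byte count of data following the header */
    };

    /* Pack the header plus any data into an outgoing buffer; returns the
       total length to hand to whatever transport carries it to the server. */
    size_t fdnet_pack(void *out, const struct fdnet_request *req,
                      const void *data)
    {
        memcpy(out, req, sizeof *req);
        if (data != NULL && req->count != 0)
            memcpy((char *)out + sizeof *req, data, req->count);
        return sizeof *req + req->count;
    }

The tag would be handed out by the server end when the file is accessed, and the local driver would remember it alongside the (mostly dummy) FCB and window block hung off the channel, so the remote server knows which file each later request refers to.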
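And here is the flavor of generating file attributes on the fly for a non-ODS-2 remote file so RMS stays happy. The structure below is only a stand-in, not the real VMS attribute block layout; the point is simply that what RMS needs can be derived from the remote file's byte size.

    #include <stdint.h>

    struct fake_attr {             /* stand-in for the attributes RMS wants */
        uint8_t  rec_format;       /* record format (fixed, in this sketch) */
        uint16_t rec_size;         /* record length in bytes */
        uint32_t highest_block;    /* blocks allocated */
        uint32_t eof_block;        /* virtual block containing EOF */
        uint16_t first_free_byte;  /* offset of EOF within that block */
    };

    /* Derive "good enough" attributes from the remote file's length,
       treating it as fixed 512-byte records (one choice among several). */
    struct fake_attr synth_attr(uint64_t byte_size)
    {
        struct fake_attr a;
        a.rec_format      = 1;
        a.rec_size        = 512;
        a.highest_block   = (uint32_t)((byte_size + 511) / 512);
        a.eof_block       = (uint32_t)(byte_size / 512) + 1;
        a.first_free_byte = (uint16_t)(byte_size % 512);
        return a;
    }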
Some other difficulties that exist are that at FDT time an I/O request can be of most any size, and there's no provision for its serialization, as a network link would most likely want. It would be necessary to route the IRP and its (packaged-for-transport) arguments through start-io, and allow use of postprocessing hooks, either the DEC ones in ioc$iopost or home-grown, to continue too-long I/O (roughly sketched in the P.S. below). Final I/O completion would need to get data back in the right format for things like complex buffers, also...not that bad for a remote ODS-2-speaking disk...harder otherwise. On the whole, I think using a user-mode network transport is a very good hack. Making VMS believe it's talking to a local disk, and synchronizing that access, looks to me to require significant kernel-mode code whose complexity I hope I have begun to touch on.

In principle, it is also possible to do something else...you can try to generate correctly formatted file header blocks, directory blocks, retrieval pointers, and so on entirely on the fly. There are interesting possible uses for such a hack. Suppose for instance that you have a relational DBMS that can be told to pull some of its fields off different disks. If you had other processing that needed access to a memory array in microseconds, but some needs for accessing the data relationally (especially if only for reading and reporting), you might generate a faked disk structure to hand to the DBMS that returned index blocks, header blocks, etc. as normal, but pulled the data out of (say) a global common containing the array, so the relational DBMS would see that data when "reading the disk" (the P.P.S. below gives the flavor of this). (This could save you from having to keep updates to the array stored in a conventional B-tree structure for a relational DBMS at rates that would swamp any one of them on the market.)

While such a hack, where most of the structures were static, is feasible in some cases, I suspect that trying to fake an entire file structure to look like one thing and be something else when presented only with logical block numbers is far more difficult than using the FDT access points, at which one knows when a directory is asked for, when one is opening/closing a file, when one is reading or writing a file, and so on. At least by using the FDT entry points you know something about the file-structured meaning of the I/O that is going on. If all the fakery were done by faking what is returned on reading certain blocks, the host would need far more bookkeeping to know when each type of operation was going on. I believe an extension of the type outlined is feasible and might make sense, but is a lot of work. The prospect of semi-transparent disk access to other file structures could make it very handy... NFS (or even AFS) clients should be the tip of the iceberg.

(I need in fairness to point out too that handling security is a big issue for fddriver and any such descendants. The server end might not even be able to support VMS security, and having a single process doing the remote access (or even some small number of such) means the remote end's security may not be helpful. On fddriver as it stands, the implications are obvious...all info on disk is interpreted as if it were for the remote system. Where file access is remotely handled, some mapping of security domains needs to be done. It should not require that numerical UICs or identifiers be identical.)

Glenn Everhart
Everhart@Raxco.com
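P.S. To make the segmenting point a bit more concrete, here is a minimal sketch of the continue-on-completion pattern, again in C rather than the MACRO a real driver would use. MAX_SEG and all the names are invented, and the state shown would in practice ride with the IRP; start-io would call the routine once, and the postprocessing hook would call it again each time a segment completes.

    #include <stdio.h>
    #include <stdint.h>

    #define MAX_SEG 8192u               /* largest piece the link will carry */

    struct xfer {                       /* per-request state kept with the IRP */
        uint32_t vbn;                   /* next virtual block to transfer */
        uint32_t bytes_left;
    };

    static void send_segment(uint32_t vbn, uint32_t len)
    {
        /* stand-in for queueing one piece to the network transport */
        printf("segment: vbn %u, %u bytes\n", (unsigned)vbn, (unsigned)len);
    }

    /* Called from "start-io" and then from the postprocessing hook after
       each segment finishes; returns nonzero while more remains to send. */
    int continue_xfer(struct xfer *x)
    {
        uint32_t len;

        if (x->bytes_left == 0)
            return 0;                          /* whole request done */
        len = x->bytes_left < MAX_SEG ? x->bytes_left : MAX_SEG;
        send_segment(x->vbn, len);
        x->vbn        += (len + 511) / 512;    /* advance by whole blocks */
        x->bytes_left -= len;
        return 1;
    }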
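P.P.S. And a very rough sketch of the "fake the blocks on the fly" idea: reads of low-numbered blocks return canned structure (headers, index, directory), while reads in the data area come straight out of an in-memory array standing in for the global common. Layout and names are invented; a real ODS-2 fake would of course be far more involved.

    #include <stdint.h>
    #include <string.h>

    #define BLK 512u
    #define META_BLOCKS 256u                  /* low blocks hold canned structure */

    static uint8_t canned_meta[META_BLOCKS][BLK];  /* prebuilt headers, index, etc. */
    static uint8_t shared_array[1u << 20];         /* stands in for the global common */

    /* Satisfy a one-block read-logical-block request into buf. */
    int read_lblk(uint32_t lbn, uint8_t *buf)
    {
        uint64_t off;

        if (lbn < META_BLOCKS) {                   /* structure blocks */
            memcpy(buf, canned_meta[lbn], BLK);
            return 0;
        }
        off = (uint64_t)(lbn - META_BLOCKS) * BLK; /* data area */
        if (off + BLK > sizeof shared_array)
            return -1;                             /* past the end of the array */
        memcpy(buf, shared_array + off, BLK);      /* the DBMS "reads the disk" here */
        return 0;
    }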