Date: 11/13/97 8:05:30 AM
From: "Glenn C. Everhart"
Subject: My proposal, slight rehash, as you requested
To: Charles Pratt

To: Glenn Everhart
Date: 8/13/97 5:44:51 PM
From:
Subject: send secfileprop.text
To: (""@LOCAL)

Notations added in minor ways 11/12/1997

Improved, Secure Filesystem Access
by Glenn C. Everhart, PhD.

Problem Statement:

Computers are used, as a rule, to provide information access. However, this access does not scale well to large systems, is notoriously insecure, and is difficult to manage.

Poor Scaling: Locating information in even a modest computer system is often grotesquely slow and frequently involves automated searches through very large numbers of computer files for series of keywords. Such searching is purely ad hoc and is not machine assisted in any sensible way, save for a few application-based approaches which work only if one uses a particular vendor's packages.

Poor Manageability: Large systems can have huge disk farms; distribute them and they grow larger still. If these are presented as many separate devices, that is a great deal to search through. If they are all linked, a la unix mount points, into a single hierarchy, the hierarchy develops an unreasonably huge number of branches. Finding things is hard, and determining what is important and what is not is impeded by sheer complexity. Finally, tools that provide a seemingly usable interface for access to one or two disks on a desktop break down totally when brought to large scales.

Poor Security: Reference monitors are accessed from all over the place in existing operating systems, with different calls and results, and often without complete coverage. As a result, security holes appear and are not easily closed. (This is particularly difficult where the vendor won't release or even document sources.) Errors in either reference monitor calls or implementations produce huge numbers of faults whose remedies are not obvious.
Moreover, identifying what is or is not sensitive depends on separate action by users, generally done with obscure or unusual commands which have nothing to do with normal system use. Naturally, this leads to very poor coverage of actual sensitive information. What is worse, default system configurations often are not secure, and providing security is idiosyncratic and very often an error-prone activity.

Beyond the basic issues of security, most computer systems are very poor at encoding "need to know". In general, discretionary security is considered adequate if an individual is either given or denied access to an object. Need to know, however, is a test in the human world that means one may access data if one needs it to perform some particular operation. Computers cannot of course read minds, but they have far more information available than a user identity (hopefully the right user identity!). In general, the program being used to access a file, the user, the time of day, some information about the user's location, user privilege levels, and such things as whether the user has a password in hand, perhaps with some bits of information about recent actions, can all be made available in deciding whether a particular piece of information should be accessed or not. These kinds of information are not a perfect mapping to operations on data, but they begin to give some sensitivity to actions.

As an example of being sensitive to actions, consider someone reading the master customer list somewhere. A person reading a few records with the customer database application seems possibly innocuous. Someone reading the same file with the copy application, on the other hand, is a different case. (Reading the same file in the middle of the night, from some unknown location, or with use of system privileges also could be far from harmless.)
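The customer-list example above can be sketched as a small decision function. This is an illustrative sketch only, not code from the proposal; the context fields and thresholds (AccessContext, business hours, the "dialup" location name) are hypothetical stand-ins for the kind of information the monitor would weigh.

```python
# Sketch: a "need to know" check that weighs more context than bare user
# identity. All names and policy constants here are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    program: str      # image actually performing the access
    hour: int         # local time of day, 0-23
    location: str     # e.g. "console", "lan", "dialup"
    privileged: bool  # process currently holds system privileges

def may_read_customer_list(ctx: AccessContext) -> bool:
    # A few record reads via the database application are innocuous;
    # bulk copying, odd hours, unknown locations, or excess privilege are not.
    if ctx.program != "customerdb":
        return False          # e.g. the copy utility reading the whole file
    if ctx.location == "dialup":
        return False          # unknown/remote location
    if not (8 <= ctx.hour <= 18):
        return False          # middle-of-the-night access denied
    if ctx.privileged:
        return False          # system privilege is not need-to-know
    return True

clerk_ok = may_read_customer_list(AccessContext("clerk1", "customerdb", 10, "lan", False))
```

The point is that each condition is information the system already has at open time; no mind reading is required.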
By allowing more of what the computer can know about an access to be used in deciding whether to allow it, we make it possible to approach "need to know" authorization. These considerations begin to show what is wrong.

Proposal:

I will describe here first a complete system, and then the steps that can lead to it. This was worked out some time ago, and a good bit of the VMS material has been implemented. The NT architecture has been clear to me since late 1996. A unix secured system based on a local encrypted filesystem and a server which would decrypt and implement the processing has been in my mind since 1995 at least, maybe earlier, though if kernel source should be available, a more direct method could be used. Much of this scheme is of course implicit in the publication of the Safety specifications and user documents. Some of the rest has been discussed at DECUS, though I've been thinking the entire scheme would make a decent paper to present somewhere in its entirety. I don't believe anything like it has surfaced before (though I'd be glad to find out whether you or anyone else has seen it).

I will add that a certain minimum security of the base OS is needed for this kind of scheme to make sense, and arguably it makes sense first to determine whether NT meets this. The scheme I've had for Unix requires much less inherent OS security (since it relies on encryption to enforce its abstraction) but would tend to be slower. The VMS version was measured and cost something like 1% runtime for opens, and nothing for read/write accesses. I would expect the cost of an NT version to be similar. But the base OS must be able to do a reasonable job enforcing reference abstractions on its own before this should be attempted.
Such things as poor system service argument checking can be improved on with add-in code which would do a better job, but complete coverage of NT is infeasible for a third party, because of undocumented services and because other kernel mode components appear numerous and could be full of holes too. The scheme here is an improvement, not an absolute security add-in. I think of it as a firewall just above the filesystem level, and I'm chiefly concerned that OS holes not make it too easy to disable this firewall.

It is possible to construct a single monitor which will work in VMS (mentioned first because an early demonstration is possible due to existing code) and Windows NT, and in unix dialects where filesystem interface specifications can be obtained, which will address all of the problem statement issues insofar as access to the underlying local computer storage via the local file system is concerned. This monitor acts to improve the usability of any file systems that pre-exist and to provide a robust reference monitor which sits at a relatively well documented point. Rather than trying to plug hundreds of leaks due to quirks of a vendor OS as people identify them, we will solve the problem of **file system mis-access** with a single well-placed reference monitor ahead of the underlying file system. Thus any access to a filesystem (whether local or from a remote site) which goes through the filesystem routines is monitored (and we can, if we like, block raw device access to monitored devices in very similar fashion).

An authenticator architecture will be devised so that our filesystem monitor will be able to contact an authenticator at the source of the file request to obtain information about what and who is doing the access, plus any other desired information. (Remember the example; we would want to know the user, what program is in use, privileges, local time, etc.
The authenticator must supply this information, which must be obtained from the ultimate source of the request, not from the local server system, and is more information than current OSs generally supply.) The path between the two must be made secure; I will not discuss particulars of doing this, since there are known methods for accomplishing it. Moreover, when IP version 6 becomes available, it is expected to have the desired infrastructure ready made.

The monitor will (in the full blown version) implement a disk amalgamator as well, together with a database-relation-based file access system. This will permit huge arrays of storage to be treated as a single but growable management entity, facilitating backup and the adding of storage. (Each underlying disk will remain a valid separate file structure, so that backup of each underlying disk can be done separately, instead of having to be done together as would be needed for a large stripeset or the like. Since each disk is a valid file structure, the aggregate can have new members added or members removed with minimal work.)

Instead of forcing users to rely on ever-longer file names to give clues as to content, file contents and attributes will be directly encoded within the filesystem retrieval (within the monitor system, not touching underlying filesystems). This will make setting one's current path a more general operation (specifying not only path, but content information and other characteristics) and make machine assistance in finding information both routine and fast.

The way this works is that when a create occurs, the filesystem interface is normally supplied with a filename and some information about a directory path to use.
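The monitor/authenticator exchange described above might look like the following sketch. The proposal deliberately leaves the wire format and channel security unspecified, so every field name and value here is an assumption made for illustration; JSON merely stands in for whatever encoding would actually be chosen.

```python
# Hypothetical sketch of the monitor <-> authenticator exchange. The
# authenticator runs at the source of the file request and "peeks" at
# the requesting process, reporting more than current OSs supply.
import json

def authenticator_reply(request: dict) -> str:
    # Fields are illustrative assumptions, not a defined protocol.
    assert request["op"] == "who-is-accessing"
    return json.dumps({
        "user": "clerk1",            # identity on the source machine
        "image": "PAYROLL.EXE",      # program actually doing the access
        "privileges": ["TMPMBX"],    # current privilege mask
        "local_time": "14:05",       # time of day at the source
        "terminal": "LAN node FOO",  # where the user appears to be
    })

reply = json.loads(authenticator_reply({"op": "who-is-accessing", "channel": 17}))
```

The monitor would feed a reply like this into its access checks; securing the path between the two is assumed handled by known methods, as the text says.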
We substitute a call to some new code which selects a disk, creates the file on it using whatever filesystem is available, and sets up our code's database entry or entries, which contain the filename and path information, the file location (including device), and can contain additional information. An open of an existing file will read the database entry, redirect the user's I/O channel pointers to the correct disk, and let the open access the file on the disk where it is. In NT, this would amount to just opening a different file; these operations should be by file identifier to ensure they get the "right" files.

Directory searches, however, are queries to the database which ask for a file in order to list what is there or find a file. In these searches, we again get control first, and instead of calling an underlying filesystem directly we let a database manager (DBMS) do the query on a relation which contains files' names, locations, and other information. The "other information" consists of additional tags which can help classify what is in the file. These would contain security information and things like keywords, gathered perhaps in searches.

To make use of this information, we will define some pseudo directory levels to encode additional criteria used when selecting files. These would generally lead to requests to open the directories and retrieve contents, but we will capture those requests from the OS components above our monitor and use them to qualify searches done later. Thus it would be possible to have a command like "cd foo", "cd bar", "cd $$key__payroll", or (which would be made equivalent) "cd foo/bar/$$key__payroll" to open files in foo/bar which contain the keyword "payroll". (Obviously the exact syntactic sugar to be used needs some more thought.) Security tagging would be simple too.
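The pseudo-directory filtering just described can be sketched as a path splitter: components beginning with "$$" are stripped from the real path and become query criteria for the file database. The "$$key__" and "$$" prefixes follow the examples in the text, but the exact syntactic sugar is explicitly still open, so treat this parsing as one possible reading.

```python
# Sketch: split a path like "foo/bar/$$key__payroll" into the real
# directory path and the extra DBMS query criteria. Syntax is tentative.
def split_pseudo_path(path: str):
    real, criteria = [], {}
    for part in path.split("/"):
        if part.startswith("$$key__"):
            # keyword criterion, e.g. $$key__payroll
            criteria.setdefault("keywords", []).append(part[len("$$key__"):])
        elif part.startswith("$$"):
            # generic tag criterion, e.g. $$sensitive
            criteria.setdefault("tags", []).append(part[2:])
        elif part:
            real.append(part)
    return "/".join(real), criteria

path, crit = split_pseudo_path("foo/bar/$$key__payroll")
# path is "foo/bar"; crit asks the DBMS for files containing "payroll"
```

Software above the monitor sees only an ordinary "cd"; the monitor captures the lookup and peels off the criteria, which is what lets unaware software use the facility.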
A simple command to "cd $$sensitive" could add the "sensitive" tag to files created, and a rename which included the pseudo directory "$$sensitive" could add a sensitivity tag to a file. The best such tags would come from those who created the files, and by making it easy to do this, we make it easy for users to help. Other tags, for sensitivity or keywords, can be driven by keyword searches through files, though with less precision.

The design will build on technology I implemented in the Safety product for VMS (Vax and Alpha). The proposed VMS and NT intercepts and kernel code will derive from this in concept and many details of implementation, since most of the kernel code for the functions described has been implemented for VMS for some years now. The existence of the Safety product clearly shows that the proposed functions are both feasible and efficient. The code not present in Safety would include the communication with authenticators across a network to gain the user information, and the code to do directories with a database manager. My notion would be to use a DBMS system already in existence for those functions. The code which filters pseudodirectories also does not exist. The security monitor code, however (the reference monitor, file redirection, and logic to allow user mode code to redirect what files are accessed), all exists and works. (Safety is, by the way, my personal property.)

The system looks something like this in block diagram:

       Machine 1                                 Machine 2
   +----------------+      Open, Close,      +----------------------+
   | User App       |      Delete, Create,   | Monitor Softw        |
   | Authenticator  | ------- etc. --------> | Filesystem           |
   | Netwk sw       |       (Network)        | Netwk sw and Drivers |
   +----------------+                        +----------------------+
           |                                      |            |
           +-------- Virtual Connection ----------+          Disks
                     (Open, Close, Delete, Create etc.)

Note the virtual circuit between Monitor and Authenticator. The authenticator "peeks" at the user app to grab information needed by the Monitor. The Monitor may also use data on Machine 2 to keep its security records. Note that this means that control never trusts a bare machine.
The above diagram does not show much detail about the monitor, but the monitor is implemented with intercept drivers on VMS and NT. It also contains a DBMS part, obtained from a pre-existing DBMS, as the engine to store file information. When file access is performed, the monitor will find the file desired, based on the then-current pseudo-path (i.e., including file characteristics, attributes, security defaults, etc.), or create it with the then-current security characteristics on an automatically selected disk from the monitor's known disk farm, provided that the monitor's access checks indicate such access is permitted. Should access be denied, either an error will be returned to the user, or the monitor will arrange for some other file to be opened instead (all the while producing audit alarms). The detailed processing is essentially the same as Safety now uses for the reference monitor, though extensions for network authentication and for extending filesystem semantics need to be crafted.

The DBMS used will of course "live" within each machine connected to storage and cover the storage on that machine (whether the machine be a single processor or a coupled set of SMP boxes clustered). The exact syntactic "sugar" to be used to specify additional DBMS queries is to be determined. By overloading "directory path" syntax in this way, however, and filtering out at the monitor the extra lookup requests which correspond to this sugar, we make it possible for software which has no idea about the added facilities to use them efficiently.

Note that the database information about keywords or contents of files could (in the final phase) be driven by software similar to that now used for cataloguing web sites, run now and then to classify new files. This sort of keyword search can be used to flag probably-sensitive information in files automatically, refined by user environment settings at file creation time or user "rename" commands should such be desired.
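The create/open flow through the monitor's file relation can be sketched as follows. This is a toy model, not the proposed kernel code: sqlite3 stands in for the "pre-existing DBMS" the proposal would adopt, and the schema, the disk-selection helper, and the underlying-create helper are all hypothetical.

```python
# Sketch: the monitor's file relation. Create selects a disk from the
# farm and records where the file landed; open is a database lookup
# that yields (device, file identifier) to redirect the real open to.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE files
              (name TEXT, path TEXT, device TEXT, file_id INTEGER, tags TEXT)""")

# Hypothetical stand-ins for the kernel-side pieces:
def pick_least_full_disk():            # the disk amalgamator's choice
    return "DKA100"

def underlying_create(device, name):   # the real filesystem does the create
    return 42                          # returns a file identifier

def create_file(name, path, tags=""):
    device = pick_least_full_disk()
    file_id = underlying_create(device, name)
    db.execute("INSERT INTO files VALUES (?,?,?,?,?)",
               (name, path, device, file_id, tags))
    return device, file_id

def open_file(name, path):
    # Directory lookup is a database query, not a filesystem scan;
    # the open is then redirected, by file identifier, to the right disk.
    return db.execute("SELECT device, file_id FROM files WHERE name=? AND path=?",
                      (name, path)).fetchone()

create_file("PAYROLL.DAT", "foo/bar", tags="sensitive,payroll")
```

Because each underlying disk stays a valid file structure, the relation only records where files are; it never replaces the real on-disk structures.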
Note too that our intercept can be set to tag information which may be of little sensitivity singly, but for which aggregate information may be sensitive. This kind of thing exists all over the place. Consider a customer list. A clerk's access to one or two names on the list could well be perfectly OK. However, rapid-fire access to thousands of names could be rejected by the simple expedient of counting I/Os to files tagged "aggregate sensitive" and delaying them where a user was not authorized for aggregate access. Similar controls on directories full of sensitive files (vs. access to one or two only) could be handled. You can't remove aggregation problems, any more than covert channels; we can, however, bandwidth-limit them. (What is needed is a tag that says there is information here that should not all be disclosed without authorization to do so. It is simple to count successive reads to such information and delay them progressively more and more if they are attempted too close together. This doesn't completely block the access, but it can make the ability to do separate queries for such information less useful to those not permitted access, even as bandwidth limits on covert channels have long been used to make them less useful to those wanting to subvert security rules. In both cases, the channels are artifacts of having a working system and cannot be removed completely. This kind of feature offers a way to address the data vulnerability.)

The system to be implemented can readily have all of Safety's security features (even to the paranoid mode for web browsing) and more. It will largely solve the problems of insecure file access and identification of sensitive information, and will go a long way toward making it easier to locate information in computers. No underlying OS modifications are needed, and so long as we define a reliable scheme for the monitor and authenticators to communicate, the network protocols are largely removed as sources of concern.
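The progressive-delay idea for aggregate-sensitive reads can be sketched directly: count recent reads and delay each one more as they crowd together. The window, free-read allowance, and delay step below are illustrative constants, not values from the proposal.

```python
# Sketch: bandwidth-limit reads of files tagged "aggregate sensitive".
# A few reads go through freely; rapid-fire bulk access gets delayed
# progressively more, without ever blocking access outright.
import time

class AggregateThrottle:
    def __init__(self, window=60.0, free_reads=5, step=0.5):
        self.window = window          # seconds over which reads are counted
        self.free_reads = free_reads  # reads allowed without delay
        self.step = step              # extra seconds per excess read
        self.times = []

    def delay_for_read(self, now=None):
        now = time.monotonic() if now is None else now
        # Forget reads older than the window, then record this one.
        self.times = [t for t in self.times if now - t < self.window]
        self.times.append(now)
        excess = len(self.times) - self.free_reads
        return 0.0 if excess <= 0 else excess * self.step

t = AggregateThrottle()
delays = [t.delay_for_read(now=i * 0.1) for i in range(8)]
# first five reads are free; the sixth onward are delayed more and more
```

A caller would sleep for the returned delay before completing the I/O; authorized aggregate users would simply bypass the throttle.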
Any organization wanting to deploy its information securely could use VMS or NT or unix and know, regardless of vendor secrecy or weakness, that its data store was being monitored reliably.

In actually constructing a system, a VMS based demonstrator could be done to show the authenticator communications issues, taking probably less than a month. This would allow the authenticators to be developed for non-VMS systems (Windows NT would be the most logical first target) and tested with working kernel code. The DBMS-as-directory and disk amalgamator functions would not be done in the first phase implementation. This phase would also show the expected overhead (the experience with Safety is to expect 1-2% overhead on open, but we can check network effects).

The next phase, taking a year or less, would be to construct the NT based reference monitor using available information about the NT file system interface. The monitor would have, as the VMS code has, kernel components and user components. While initially there would not be a full DBMS based directory system (that gets bitten off in a later phase), the user mode code would have access to its own data store to hold security relevant information. Security tagging of files can still be handled as they are created, or at later times, with an environment variable scheme or something similar, so that it is simple to do.

Following an NT implementation, a unix reference monitor system should be devised, also using the same authentication and, to the degree possible, the same decision code. Once these are complete, code can be added to the monitor to handle disk amalgamation and the generalized directory scheme discussed, as a path to make the system more usable and helpful to those who need it. Then the security database gets used for the full file characteristics, as a path to making it simpler to find information that is currently difficult to locate.
Speedups of finding information of several orders of magnitude can reasonably be expected from such a system. This may have valuable benefits for C3I applications. Once the first phase of the development is done, however, there will be in hand a reference monitor for a substantial part of the security relevant data handled for Windows NT and Unix, a monitor whose code will be available for inspection. By placing such a monitor at the upper boundary of the filesystem code, we gain two key benefits:

* The interface is at least semi-documented, so we avoid major changes to that interface and thus avoid having to constantly change our code to keep up with OS version changes.

* There isn't much between our monitor and the data being guarded, so not much can go wrong. In a conventional firewall situation, there are cases where web browsers have had bugs exploited to let websites outside a firewall grab files inside. There is just too much code between the firewall and the data to be protected to ensure that the firewall can plug leaks. Our reference monitor is at the upper boundary where common access to the data is, with very little beneath. Thus it is difficult to subvert.

As an appendix, I will include the Software Product Description of Safety. Safety was distributed on the Spring 1997 DECUS VMS SIG tapes, free for individuals and with a small charge asked of commercial users. The key sources were also distributed.

Further Notes:

It should be noted that the full-blown monitor discussed here would be able to (and usually would) keep its own database containing security and access characteristics of files, for those files it was doing access control on. (These don't need to be all files; there are some fast-executing filters that can be used to avoid any lookup overhead on most files that don't have special access controls.) Because of this, a filesystem that has no room to hold such information can still have access controls done on its contents.
This means that a W95 filesystem or an NT based FAT file system could be protected the same way. Doing so would mean needing a W95 monitor if controlling W95 access; this is a future possibility. Users would find installing the full package attractive because it would greatly facilitate finding information. While setting up the pointers to authenticators is yet another step in installing an NT system, it is arguably simpler than many of the fiddling-around steps needed to configure NT security. The DBMS to be used would preferably be a free one.

It is, by the way, possible to configure a monitor such as the one described so that underlying disks are exactly the same when treated separately as when the package is running. It is an essential design feature that all underlying file systems remain complete and legal to the pre-existing file system (which makes separate backup and restore possible).

Note too that the top of the filesystem is an area for which documentation is in hand, both for VMS and Windows NT, and which is generally more readily available for other OSs than other "internal" interfaces. It also tends to be a relatively stable interface point because it is accessed ubiquitously. Thus an intercept such as this can be expected not to need much maintenance to keep current with new OS versions. VMS experience for this kind of thing has been that changes of roughly a score of lines of code are needed over a 10 year period.

The resulting system will protect computer filesystems without the need to insert fixes all over unknown code, can be verified, and can be made easy to use; it should be given out widely for greatest benefit. It will solve, not patch over, most of the security issues one has with these OSs, and provide moreover a way to extend their security model as any organization needs, not just as the OS vendor sees fit.
By this I mean that the code implementing the reference monitor will be available and clear, not hidden by vendor secrecy, and there won't be much "below" it to cause other security issues. (It will still be necessary to configure the system in various ways, though hopefully making it simple for users to tag security attributes will make this much simpler than otherwise.) This makes construction of a system such as this appropriate as an activity, at least as much so as building testing tools, though such tools are likely to fall out from the effort also. Also, as designer and implementer of the Safety product, I am uniquely well qualified to supply these functions. I don't know of anyone else working in this precise area, but I do know exactly what has to be done and how to do it.

Appendix A. Safety SPD

Software Product Description
Safety V1.3
Comprehensive Data Safety for your VMS systems.
from General Cybernetic Enterprises

Executive Summary:

There are many perils your data faces, and loss of data can cost time, money, and jobs. Intruders, disgruntled insiders, or hidden flaws in installed software can destroy records. What is more, mistaken losses occur constantly. Safety protects your system and your critical data in three ways:

1. A comprehensive security system adds extra checks for access to VMS files so that access by intruders, or by people in non-job-required ways, can be regulated or prevented. This allows your business-critical data to finally be protected against misuse, tampering, or abuse. Access from programs doing background dirty work (viruses, Trojans, worms, and the like, or even programs with security holes which can be exploited remotely, like Java browsers) can also be blocked without damaging normal use.
This active protection works three ways: by checking the integrity of your files against tampering, by preventing untrusted images from gaining privilege, and by regulating what other parts of the system an image may access.

2. A deletion protection system provides a way to undelete files which were deleted by mistake, and to optionally copy deleted files to backup facilities before removal. Unlike all other VMS "undelete" programs on the market, this facility does not rely on finding the disk storage that contained the file and reclaiming it before it is overwritten. Rather, it changes the semantics of the file system delete to use a "wastebasket" system and captures the file intact. Thus, this system works reliably; no others do. This facility is also useful where you have a requirement to keep all files of a certain set of types, since the backup function can be used to capture such files while permitting otherwise normal system function. The shelving or linking functions are also available for moving copies offline if this is desired. The Safety protection features are fully integrated with the DPS subsystem, so that deletion protection does not involve destroying file security.

3. When space runs out, hasty decisions about what to keep online often must be made, and the risk of accidentally losing something important is high. Safety protects you from running out of space. Space can be monitored, and older items in the wastebasket deleted if space is becoming low, without manual intervention. In addition, Safety is able to "shelve" files so that they are stored anywhere else desired on your system, and they are brought back automatically when accessed. Thus no manual arrangements need be made for reloading them. Safety can also keep the files on secondary storage, keeping a "soft link" to the files at their original site so they will be accessed on the secondary storage instead.
Also, Safety can store files compressed, or can store them on secondary storage so that read access is done on the secondary storage, but write access causes the file to be copied back to its original site. Standard VMS utilities are used for all file movement, and moved files are also directly accessible in their swapped sites with standard VMS utilities. The VMS file system remains completely valid at all times. Safety gives you a full complement of tools for dealing with space issues automatically according to your site policy. These facilities are safe and easily understood. A comprehensive utility is provided by which you set your site policy to select which files are and are not eligible for automatic shelving. You are also provided with screen oriented utilities for selecting files to shelve at any time. Access to the shelved files of course causes unshelving if the normal shelving-by-copy mode is used. Also, a simple set of rules permits locating shelved or softlink target files at any time, even without Safety running. Safety at no time invalidates your file structures for normal VMS access, not even for an instant. In addition, Safety contains functions to speed file access and inhibit disk fragmentation.

The major subsystems of Safety will now be described.

The Security Function System:

Summary: Managing access to data critical to your business using ACL facilities in native VMS can be cumbersome, and still leaves you vulnerable to intruders or people acting in excess of their authority. Want to be sure your critical records can't be accessed save at authorized places, at authorized times, and with the programs that are supposed to access them (instead of, say, COPY.EXE)? Want to have protection against privileged users bypassing access controls? Want to be able to password protect individual files? Want to be able to invisibly hide selected files from unauthorized intruders?
Have you read that attacks on machines can happen because a Java browser points at a web site that damages the system (as has been reported in the press)? Want to be able to protect your systems? The Safety security subsystem builds in facilities permitting all of these, and is not vulnerable to intruders who disable the AUDIT facility, as all other commercial packages which purport to monitor access are.

Description: When your business depends on critical files, or when you are obliged by law or contract to maintain confidentiality of data on your system, in most cases the options provided by VMS for securing this data can be cumbersome and far too coarse-grained. The problem is that certain kinds of access to data are often needed by people in a shop, but other access should be prevented and audited. Moreover, the wide system access that can come as a result of having system privileges does not mean that those privileges should be used to browse or disclose data stored on the system. A system manager will in general not, for example, have any valid reason to browse the customer contact file, the payroll database, or a contract negotiation file, save in a few cases where these files need to be repaired or reloaded from backups. Likewise, a payroll clerk may need read and write access to the payroll file, but not in general with the COPY utility, nor from a modem, nor in most cases at 4AM. Finally, a person who must have privileges to design a driver and test it should ordinarily not have the run of the file system as well.

Given examples like these, it is easy to see that simple authorization of user access to files is inadequate. While it is possible to build systems that grant identifiers to attempt some extra control, these can be circumvented by privilege, and they create very long ACLs which become impossible to administer over a long period as users come and go.
What is needed is a mechanism that is secure, cannot be circumvented by turning on privileges, and which provides simple-to-administer, fine-grained control that lets you specify who can get at your critical files, with what images, when, from where, and with what privileges. It is also desirable to be able to control what privileges the images ever see, and to be able to check critical command files or images for tampering before use, so that they cannot be used as back doors to your system. It should be possible to demand extra authentication for particular files as well, and to prevent a malicious user from even seeing a particularly critical file unless he can be permitted access.

The Safety security subsystem is a VMS add-in security package which provides abilities to control security problems due to intruders, to damage or loss by system "insiders" (users exceeding their authority), and to covert code (worms and viruses). It provides a much easier management interface for handling security permissions than bare VMS, and provides facilities permitting control over even privileged file accesses, for cases where there are privileged users whose access should be limited. Unlike systems which only intercept the AUDIT output, Safety can and does protect against ANY file accesses, and can protect files in real time against deletion by unauthorized people or programs as well as against access.

The Safety security subsystem offers the following capabilities:

* Files can be password protected individually. If a file open or delete is attempted for such a file and no password has been entered, the open or delete fails.

* Access can be controlled by time of day. Added protections can be in place only some of the time, access can be denied at some times of day, write accesses can be denied at certain times, or various other modalities of access can be allowed.
* You can control who may access a file, where they may (or may not) be, with what images they may or may not access the file, and with what privileges the file may be accessed. Thus, for instance, it is trivial to allow a clerk access to the payroll file with the payroll programs, but not with COPY or BACKUP, not on dialup lines, and not if the clerk holds unexpected privileges. The privilege checks can be helpful where consultants working on a system should be denied access to sensitive corporate information but need privileges to develop programs, or in similar circumstances. You specify what privileges are permitted for opening the file, and a process with excess privileges is prevented from access. Holding privilege should not always imply access to vital business data. With this system you can be sure your proprietary plans and data stay in house, available only to those with business reasons to need them, not to everyone who needs system privileges for unrelated reasons. Unlike packages that rely on the VMS Audit facility's output (which can be silently turned off by public domain code), Safety cannot be circumvented by well known means. Its controls are also designed to leave evidence of what was done with them.

* You can hide files from unauthorized access. If someone not authorized to access a file tries to open it, they can be made to open some other file anywhere on the system instead. Meanwhile, Safety generates alarms and can execute site-specific commands to react to the illegal access before it can happen. This can be helpful in gathering evidence of what a saboteur is up to without exposing real sensitive files to danger. Normal access goes through transparently.

* You can arrange that opening a file grants identifiers to the process that opens it and that closing it revokes these identifiers.
Set an interpretive file to do this, and set it to be openable only by the interpreter, and you have a protected-subsystem capability that works for interpretive 4GLs. (Safety identifier granting, privilege modification, and base priority alteration are protected by a cryptographic authenticator that prevents forging or duplication.)

* You can actively prevent covert code (viruses and worms) from running, in two ways. First, Safety can attach a cryptographic checksum to a file such that the file will not open if it has been tampered with. Second, Safety can attach a privilege mask to a file which replaces all privilege masks for the process that opens it. By setting such a mask to minimal privileges, you can ensure that an untrusted image never sees a highly privileged environment, and thus cannot perform privilege-based intrusions into your system even if run from a privileged user's account.

* You can control base priority by image. Thus, a particularly CPU-intensive image can be made to run at lower than normal base priority even if it is run interactively.

* You can run a site-chosen script to further refine selection criteria. (Some facilities for doing additional checking while an image runs exist as well.)

* You can have "suspect" images set a "low-integrity-image" mode in which all file opens are checked with a site script which can report or veto access. This can be used to track or regulate what a Java applet can do, in case someone browses a web site which exploits a Java hole to browse or damage your system.

Safety allows you to exempt certain images (e.g., disk defragmenters) from access checks, and it is possible to put a process into a temporary override mode (leaving a record that this was done) where this is needed. Safety facilities are controllable per disk, and impose generally negligible overhead. Safety will work with any VMS file structure using the normal driver interfaces.
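The checks described in the capabilities above (user, image, time of day, line type, and privilege mask) amount to a single conjunctive decision made at file-open time: every dimension must pass or the open is refused. The Python sketch below is a hypothetical illustration of such a rule evaluation, not Safety's actual implementation; all names here (FileRule, check_open, the privilege strings) are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class FileRule:
    """Hypothetical per-file access rule combining several checks."""
    allowed_users: set = field(default_factory=set)
    allowed_images: set = field(default_factory=set)  # images permitted to open the file
    denied_hours: set = field(default_factory=set)    # hours of day when access is refused
    max_privs: set = field(default_factory=set)       # privileges permitted at open time
    deny_dialup: bool = True

def check_open(rule, user, image, hour, privs, on_dialup):
    """Return True only if every dimension of the rule is satisfied."""
    if user not in rule.allowed_users:
        return False
    if image not in rule.allowed_images:
        return False
    if hour in rule.denied_hours:
        return False
    if on_dialup and rule.deny_dialup:
        return False
    # A process holding any privilege outside the permitted set is refused,
    # even though the underlying OS might have let a privileged open succeed.
    if not set(privs) <= rule.max_privs:
        return False
    return True

# Example: a payroll clerk may use the payroll program, but not COPY,
# not on dialup lines, not in the small hours, and not with excess privileges.
payroll = FileRule(
    allowed_users={"CLERK1"},
    allowed_images={"PAYROLL.EXE"},
    denied_hours={0, 1, 2, 3, 4},
    max_privs={"TMPMBX", "NETMBX"},
)
```

Note how the privilege test is a subset check rather than a grant: holding more privilege than the rule permits is itself grounds for refusal, which mirrors the document's point that privilege should not imply access.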
Also, Safety marking information resides sufficiently in kernel space that it cannot be removed from lower access modes, yet it uses a limited amount of memory regardless of volume size. Best of all, Safety protection is provided within the file system and does not depend on the audit facility. Thus it prevents file access or loss before it happens, rather than reacting to it afterwards. Safety allows all of its security provisions to be managed together in a simple screen-oriented display in which files, or groups of files, can be tagged with the desired security profiles or edited as desired. Safety protections are in addition to normal VMS file protections, which are left completely intact. Therefore, no existing security is broken or even altered. Safety simply adds additional checking which finally provides a usable machine encoding of "need to know" for the files where it matters.

The Safety Deletion Protection Subsystem

Description:

The Safety Deletion Protection System is designed to protect against accidental deletion of file types chosen by the site, and to allow files to be routed by the system to backup media before they are finally removed. This is accomplished by an add-in to the VMS file system, so that no security holes are introduced by the system's action. The user interface is an UNDELETE command which permits one or more files to be restored to their original locations, provided it is issued within a site-chosen time window after the undesired deletion took place. In addition, an EXPUNGE command is provided which allows files to be deleted at once, irretrievably, where space is required. Provision for automatic safe-storing of files prior to final deletion is also present in Safety DPS.
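The DELETE/UNDELETE/EXPUNGE flow described above can be sketched as a simple holding area with a time window. This is a hypothetical Python illustration of the idea, not the Safety DPS implementation; the class and method names are invented for the example, and real file contents stand in for a string.

```python
import time

class Wastebasket:
    """Hypothetical sketch of a deletion-protection holding area:
    DELETE moves the file aside, UNDELETE restores it within a window,
    and a purge pass finally removes entries older than N seconds."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.held = {}  # name -> (contents, time of deletion)

    def delete(self, name, contents, now=None):
        # Intercepted DELETE: hold the file instead of destroying it.
        self.held[name] = (contents, now if now is not None else time.time())

    def undelete(self, name, now=None):
        # Restore the file, but only within the site-chosen window.
        now = now if now is not None else time.time()
        contents, when = self.held[name]
        if now - when > self.window:
            raise KeyError("undelete window expired")
        del self.held[name]
        return contents

    def expunge(self, name):
        # Immediate, irretrievable removal (the EXPUNGE command).
        self.held.pop(name, None)

    def purge(self, now=None):
        # Disposal agent: finally delete anything held longer than the window.
        now = now if now is not None else time.time()
        for name in [n for n, (_, t) in self.held.items() if now - t > self.window]:
            del self.held[name]
```

A site script hook, as the document describes, would naturally run inside `purge` to safe-store each file before its final deletion.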
Safety DPS is implemented as a VMS file system add-in which functions by intercepting the DELETE operation, allowing the file to be deleted to be copied or renamed to a "wastebasket" holding area pending final action, and to be disposed of by a disposal agent. The supplied agent allows a site script to save the files if desired, and then finally deletes any files which were deleted more than some number N seconds ago. If the UNDELETE command is given, the undeleted file(s) are replaced in their original locations. The supplied system can also be configured to rename files to a wastebasket area, or to copy them directly, for undeletion by systems people only. (These options are faster than the site command file option.)

Safety DPS can be configured to omit certain file types from deletion protection (for example, *.LIS* or *.MAP* could be omitted), to include only certain files in the protected sets, or both. This can reduce the overhead of saving files which are easily recreated, or tailor the system for such actions as saving all mail files (by selecting *.MAI for inclusion). In addition, Safety DPS monitors free space on disks, and when a file create or extend would cause space exhaustion, Safety DPS runs a site script. By setting this script to perform final deletions, Safety DPS can be run in a purely automatic mode in which deleted files are saved as long as possible, but never less than some minimum period (e.g., 5 or 10 minutes).

Safety DPS files can be stored in any location accessible to VMS. If they are renamed, they must reside on the same disk they came from; otherwise they can be stored in any desired place. Safety DPS is installed and configured using a screen-oriented configuration utility, and basically runs unattended once installed.

The Safety Storage Migration Subsystem

Description:

Safety has the ability to move files to secondary storage and automatically retrieve them when they are accessed.
This backing can be similar to what HSM systems call "shelving", though it can be done in multiple levels, or it can be done in a way which permits files moved to secondary storage to be accessed there as though they remained online. This resembles the "soft links" of Unix systems, in that file opens are transparently redirected to a file stored somewhere else reachable on the system, and the channel is reset to the original device on close. A "readonly link" mode acts like a soft link for read-only access, and like an unshelve operation where a file is opened read/write, should this be desired. Full control over this shelving and unshelving is provided.

This provides a great deal of flexibility in reclaiming space when the Safety space monitoring function detects that space is needed. Not only can previously deleted files be finally moved to backup destinations and deleted, but the system can transparently migrate seldom-accessed files to nearline storage. The site policy can drive this, or the utilities provided can be used instead. Where Safety is run in a lights-out fashion (with Safety reacting to low disk situations by emptying older deleted files from the wastebasket and/or migrating files to backing store), the policy controlling this is set with a full-screen, easy-to-use tool. Should still greater flexibility be needed, the scripts used for a number of operations are supplied together with a full description of the command line interface of the underlying software. This facilitates linking Safety file management functions with other packages should such be desired.

Safety can be run in a mode where essentially no overhead at all is imposed (just a few added instructions along some paths, and no disk access) for any files except those which need soft links or possible unshelving. There is no limit to how many files may be so marked on a disk.
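The redirection behavior described above can be sketched as a small lookup at open time: marked files resolve to their backing-store location on read, while a read/write open triggers an unshelve so the caller writes to the original location. This is a hypothetical Python illustration of the concept only; the table and function names (shelf_table, open_redirected) and the device names are invented for the example.

```python
# Maps a shelved file's original path to its backing-store location.
# Unmarked files are absent and pay essentially no lookup cost.
shelf_table = {
    "DISK1:OLD_REPORT.DAT": "NEARLINE:OLD_REPORT.DAT",
}

def open_redirected(path, mode="r"):
    """Resolve a possibly-shelved file to the path actually opened.

    In "readonly link" mode, read opens go transparently to the backing
    store; a read/write open instead unshelves the file (removes the
    marking) so the caller operates on the original location."""
    target = shelf_table.get(path, path)
    if "w" in mode and target != path:
        # Unshelve: bring the file back online before any write.
        shelf_table.pop(path)
        target = path
    return target
```

A real implementation would of course copy the data back during the unshelve step; the sketch only shows how the open path is chosen.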
A full-screen setup script allows one to select the Safety run modes. Even if Safety is forced to examine all files for its markings, the overhead adds no disk access and costs only a tiny amount of time (typically a percent or two) in open-intensive applications. In addition, Safety can be turned off or back on at any convenient point should this be desired. (This must be done using special tools provided for use by those specially authorized to do so.)

Support:

Safety runs on VAX VMS 5.5 or greater, or AXP VMS 6.1 or greater. The same facilities exist across all systems. Safety must be installed on each node of a VMScluster where it is to be used; on nodes not running the software, its benefits are simply not available. Apart from this, there are no problems with having Safety on only part of a VMS cluster. Safety imposes no restrictions on the types of disk it works with, and will work with any file structure used by VMS, so long as a disk-class device is used to hold it. It is specifically NOT limited to use with ODS-2 disks.

Safety is available for 45-day trial use licenses or can be licensed permanently. Safety may also be used, free of charge, on a single disk indefinitely. The Safety kit may be distributed freely, though keys to permit unlimited use are proprietary and may not be distributed save to those who have bought them.

Safety is brought to you by:

General Cybernetic Enterprises
18 Colburn Lane
Hollis, NH 03049
603 465 9517 voice
603 465 9518 fax

For orders, contact the above address or Sales@GCE.COM. For technical information contact Info@GCE.Com. For support contact Support@GCE.Com via email. (We regret there is no web site at this time.) Do not contact DEC or anyone else. Do inspect the documents in the Safety kit, however.