From: SMTP%"RELAY-INFO-VAX@CRVAX.SRI.COM" 19-JUL-1993 09:01:41.61 To: EVERHART CC: Subj: Re: SLIP X-Newsgroups: comp.os.vms Subject: Re: SLIP Message-Id: <14143197@zl2tnm.gen.nz> From: don@zl2tnm.gen.nz (Don Stokes) Date: 17 Jul 93 04:44:41 GMT Sender: news@zl2tnm.gen.nz (GNEWS Version 2.0 news poster.) Distribution: world Organization: The Wolery Lines: 120 To: Info-VAX@kl.sri.com X-Gateway-Source-Info: USENET zrepachol@cc.curtin.edu.au (Paul Repacholi) writes: > Why should I run a different protocol over say a DUP like dumb sync > interface with a 9600 bps modem versus a dumb DL like async devise > with a 9600 bps modem? Simple: bit-oriented vs byte-oriented. In async comms, you have characters coming at you with an easy-to-spot start bit to say when the character starts. In sync comms, you don't; all you know is where the bits start and end (the modems do this; clocking is (usually) done in the DCE and fed to the DTE, whereas in async comms the clocking is done inside the UART in the DTE). This means that sync comms requires periodic synchronisation so that the two ends agree about which bits are coming when. There are two ways to do this: synch characters and out-of-band framing. Synch characters are easy -- they're just characters, eight-bit patterns that the receiver looks for when it's not sure about where in the bit stream the characters start; typically, a packet will start with one or more SYN characters. Having recognised a SYN, the rest of the packet can be read off the bit stream as bytes. Out-of-band framing is a little more interesting. Each packet both starts and (usually) ends with a frame delimiter, usually 01111110, and usually the hardware is looking for this bit pattern to delimit the packet and pass it to the software. Because that pattern may appear in the packet data, a technique called bit-stuffing is used, and this really requires hardware assistance to do efficiently. If the sender finds six '1' bits to send, it sends five '1's and then follows them with a '0'. It then sends the sixth '1' and continues. The receiver then on receipt of five '1's checks the following bit; if it's a '1', this is a frame delimiter, if it's a '0' it discards the '0' and takes the next bit as the sixth; thus six '1's never appear in the bit-stream except when they represent a frame delimiter or an error. Most modern synchronous links these days use HDLC or SDLC, which utilise the latter method. I'm not sure what DDCMP does (I only ever just plugged it in and watched it go 8-). Braindamaged protocols such as 2780 tended to use SYN characters. (I think I have permanent psychological scars from staring too long at 2780/3780 traces. Gack. 7-layer model? Eh wassat?) The problem here is that sync links usually have hardware helping them along to do bit-stuffing, framing and often FCS computation, whereas async hardware tends to be basically stupid. If you try to model the async protocols on the sync protocols, you find that you have to do something special to create an eight-bit clean protocol, since you have no method of doing out-of-band framing (see below). You also have to do a lot of the stuff you were doing in hardware in software, which means that the solutions chosen have to be lightweight or they burden the CPU. Enter SLIP. SLIP, Serial Line IP is the minimalist's minimalist protocol. All SLIP provides is framing; it has no FCS, relying on the higher level protocols to decide whether a packet is bad. 
Most synchronous links these days use HDLC or SDLC, which utilise the
latter method.  I'm not sure what DDCMP does (I only ever just plugged
it in and watched it go 8-).  Braindamaged protocols such as 2780
tended to use SYN characters.  (I think I have permanent psychological
scars from staring too long at 2780/3780 traces.  Gack.  7-layer
model?  Eh, wassat?)

The problem here is that sync links usually have hardware helping them
along to do bit-stuffing, framing and often FCS computation, whereas
async hardware tends to be basically stupid.  If you try to model the
async protocols on the sync protocols, you find that you have to do
something special to create an eight-bit-clean protocol, since you
have no method of doing out-of-band framing (see below).  You also
have to do in software a lot of the stuff you were doing in hardware,
which means the solutions chosen have to be lightweight or they burden
the CPU.

Enter SLIP.  SLIP, Serial Line IP, is the minimalist's minimalist
protocol.  All SLIP provides is framing; it has no FCS, relying on the
higher-level protocols to decide whether a packet is bad.  (In my own
SLIP code, I do a few things to try to discard packets that might be
bad, but the lack of a real frame check means that this checking is
pretty minimal.)

This is what SLIP does: SLIP defines four special characters: END
(0300 octal), ESC (0333), ESC_END (0334) and ESC_ESC (0335).  Receipt
of an END character terminates the packet.  Note that there is no
packet start character.  If an END character appears in the data
stream, it is replaced with two bytes: ESC ESC_END.  To complete the
picture, an ESC character in the data stream is escaped with ESC
ESC_ESC.

That's it.  No FCS, no protocol type, nothing but simple-minded
framing.  Piss-easy to implement.  Bugger-all overhead.  Really easy
to do in VMS: just sling a bunch of largish QIOs at the terminal
driver with terminator masks specifying just the END character, and
what you get back are properly framed packets.
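In C, the whole sending side boils down to something like this, in the
style of the sample code in RFC 1055 (send_char() here just dumps the
octets so the fragment stands alone; a real driver would hand each
byte to the line):

    /* SLIP framing on the sending side, after RFC 1055. */
    #include <stdio.h>

    #define END     0300            /* frame end */
    #define ESC     0333            /* frame escape */
    #define ESC_END 0334            /* escaped END */
    #define ESC_ESC 0335            /* escaped ESC */

    static void send_char(unsigned char c)  /* stand-in for the wire */
    {
        printf("%03o ", c);
    }

    static void slip_send_packet(const unsigned char *p, int len)
    {
        send_char(END);             /* optional leading END -- see the
                                       optimisation discussed below */
        while (len-- > 0) {
            switch (*p) {
            case END:               /* END in data -> ESC ESC_END */
                send_char(ESC);
                send_char(ESC_END);
                break;
            case ESC:               /* ESC in data -> ESC ESC_ESC */
                send_char(ESC);
                send_char(ESC_ESC);
                break;
            default:
                send_char(*p);
            }
            p++;
        }
        send_char(END);             /* END terminates the packet */
    }

    int main(void)
    {
        unsigned char pkt[] = { 0x45, END, 0x12, ESC, 0x99 };
        slip_send_packet(pkt, sizeof pkt);
        putchar('\n');
        return 0;
    }

The receiving side is the same in reverse: collect bytes until an END
arrives, turning ESC ESC_END back into END and ESC ESC_ESC back into
ESC along the way.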
There is one optimisation to this, which is optional, and that is to
send an END character at the beginning of each packet.  This means
that your SLIP driver gets to deal with zero-length packets, but if
there's any noise between packets, the noise and the next packet are
treated as separate things and the real packet doesn't get lost.

SLIP is described by RFC 1055.  It includes a C code implementation
and a discussion of what SLIP doesn't do.  It's six pages long.  8-)

There are a few gotchas with SLIP: since there is no frame checking in
the protocol, your higher-level protocols had better check their
packets themselves, and in UDP checking the checksum field is
*optional*.  NFS, for example, can be used without UDP checksums, but
you should be able to turn UDP checksumming on in your NFS
implementation.  TCP does check what it's doing.

PPP is a bit more complex, and includes most of the features found in
the synchronous protocols.  Packets are frame-checked and bad packets
discarded before being passed up the protocol stack, and there is a
packet type field to allow more than one protocol to be forwarded over
a PPP link.  There are PPP versions for both sync and async lines; the
async stuff includes dealing with things like XON/XOFF flow control
and other nastiness intended for character-oriented terminals rather
than packet-oriented network links.  However, PPP has more overhead on
async serial links than SLIP, both in terms of bytes sent and
processing required.  If you're just shipping IP around the place,
SLIP is the easiest, most common and lowest-overhead option, but it
has a few caveats.

One other factor in all this is TCP header compression.  A single
keystroke in a telnet session gets sent with 20 bytes of IP header and
another 20 bytes of TCP header, as well as any data-link layer
overhead.  Van Jacobson TCP header compression can collapse that
40-byte overhead into four bytes (plus data plus DLL overhead), using
the principle that very little in those 40 bytes actually changes from
packet to packet, so you only need to send what changes, plus the
checksum.  This makes telnet sessions over 9k6 or worse links
bearable.  Since SLIP lacks a protocol type field, TCP compression on
SLIP links has to be implemented in the SLIP driver, rather than
implemented separately with the compressed packets passed to SLIP as a
separate protocol.  RFC 1144 describes how all this works.  (There's a
section entitled "Compatibility with past mistakes" that describes the
hackery required to do this -- I've implemented SLIP with TCP
compression, and yes, it is a bit yucky....  8-)

-- 
Don Stokes, CSC Network Manager, Victoria University of Wellington, New Zealand
Ph +64 4 495-5052   Fax +64 4 471-5386
Work: don@vuw.ac.nz   Home: don@zl2tnm.gen.nz