PSAS/ CanMuxer

This document represents a rough-draft proposal. Everything in it is subject to change. It serves mostly to formalize my thoughts. Please post comments and suggestions. However, since it has been largely implemented, if you want something to be different you have to do it. :)

Requirements

Downlink Goals

Observe: UDP is well suited to this type of communication.

Uplink Goals

Observe: TCP is well suited to this type of communication.

-- ?JamesPerkins - 31 Aug 2001

We should test changing the values of these two settings under less-than-ideal situations:

The Linux tcp(7) manual page documents a sysctl, "tcp_retries2", which "[d]efines how many times a TCP packet is retransmitted in established state before giving up." If we set this to a small value, perhaps high-loss situations will be handled OK. Though I don't know what Linux does with the connection when "giving up" - it may signal end-of-file, which would be bad.

The Nagle algorithm (RFC1122?), controlled by the TCP_NODELAY socket option, delays sending small packets until previously transmitted data has been acknowledged. Turning it off (enabling TCP_NODELAY) will clearly give us lower latency when we're losing a bunch of acknowledgement packets, though leaving it on will probably give better throughput...
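For testing, here is a minimal sketch of how TCP_NODELAY could be set on a connected client socket; the function name is illustrative and error handling is omitted. (The tcp_retries2 setting is a sysctl, adjustable through /proc/sys/net/ipv4/tcp_retries2.)

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable the Nagle algorithm on an already-connected TCP socket. */
    void disable_nagle(int sock)
    {
        int one = 1;
        setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }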

-- ?JameySharp - 06 Sep 2001

Specification

The purpose of the muxer is to get messages from the CAN bus and to deliver these messages to processes which want them. The muxer will filter messages on a per-client basis so that only the messages which the client is interested in will be sent to it. The implementation may use the filters of its clients to request filtering of the messages it receives.

A further purpose of the muxer is to deliver messages from its clients to the CAN bus. This requires authentication: collected data is open to the public anyway, but there is a potential risk to the public if control of the rocket is compromised. For the September launch, this is not likely to be implemented. In the future, the muxer should support SSL for uplink messages.

The muxer will provide these services by way of IP, at a minimum; Unix sockets and System V IPC are additional possibilities. Providing access by IP allows the muxer to be used directly from both the debugging ethernet port and, during flight, the wireless telemetry port. Transports may be stream- or datagram-oriented. The muxer will not provide reliability itself. ?JamesPerkins suggests that reliability on the downlink is not required, but is required on the uplink; so both TCP and UDP should be supported.

If the connection to a client blocks for writes, the muxer must not be blocked. The muxer is permitted to discard data still queued for output to some clients when it runs out of memory in which to grow the queue. The muxer should not be allowed to use all of the flight computer's memory, so ulimit should be used to limit its memory usage.

Protocol Specification

The protocol is packet-oriented; each packet contains a header followed by variable-length data. Behavior in the presence of unexpected data is unspecified. Since the protocol provides its own framing, either stream- or datagram-oriented protocols can be used for the underlying transport.

The header is a two-byte ("short") number in network byte order, which is most-significant byte first.

The high-order 11 bits (5-15) of the header contain the message ID of the packet, corresponding to the CAN message ID for encapsulation packets or to a protocol command (specified below) otherwise. Bit 4 is 0 if the packet encapsulates a CAN message, or 1 if the packet is for protocol control. The low-order 4 bits (0-3) contain the length in bytes of the attached data. CAN limits the data length to 8 bytes, but protocol control packets may contain up to 15 bytes. A data length of 0 is allowed and indicates that no data follows the header.
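To make the bit layout concrete, here is a minimal sketch of packing and unpacking such a header; the names are illustrative and not taken from the muxer source.

    #include <stdint.h>

    /* Build the 16-bit header from an 11-bit message ID, a control flag
       and a data length (0-15). */
    uint16_t pack_header(uint16_t id, int is_control, uint8_t len)
    {
        return (uint16_t)(((id & 0x07FF) << 5) |
                          ((is_control ? 1 : 0) << 4) |
                          (len & 0x0F));
    }

    /* Serialize the header most-significant byte first (network order). */
    void header_to_wire(uint16_t header, uint8_t buf[2])
    {
        buf[0] = header >> 8;
        buf[1] = header & 0xFF;
    }

    /* Parse a received header back into its fields. */
    void parse_header(const uint8_t buf[2],
                      uint16_t *id, int *is_control, uint8_t *len)
    {
        uint16_t header = ((uint16_t)buf[0] << 8) | buf[1];
        *id = header >> 5;               /* bits 5-15: message ID */
        *is_control = (header >> 4) & 1; /* bit 4: 1 = protocol control */
        *len = header & 0x0F;            /* bits 0-3: data length in bytes */
    }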

These protocol controls are available:

add filter (0x002)
remove filter (0x003)

The default filter state drops all packets; each client must add filters for all message IDs it wants to receive. Add and remove filter requests are cumulative and the implementation is not required to maintain any state beyond the current contents of the filter.

Filter specifications consist of two shorts in network byte order. Only the low-order 11 bits of each are significant. The first number is bitwise-XORed with each CAN message ID being filtered; the result is bitwise-ANDed with the second number. If the result is 0, the message is forwarded.
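A minimal sketch of that filter test, assuming one such pair of shorts is stored per "add filter" request; the names here are illustrative:

    #include <stdint.h>

    struct filter {
        uint16_t xor_bits;   /* first short of the filter specification */
        uint16_t and_bits;   /* second short of the filter specification */
    };

    /* Returns nonzero if the given 11-bit CAN message ID passes the filter. */
    int filter_matches(const struct filter *f, uint16_t can_id)
    {
        return (((can_id ^ f->xor_bits) & f->and_bits) & 0x07FF) == 0;
    }

For example, a filter whose first short is a particular message ID and whose second short is 0x07FF passes exactly that ID, while a second short of 0 passes every message.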

Suggestions welcome on whether this filter thing is overly complicated. I think the notion of generic filters will be helpful, but I'm not sure what the most appropriate way to implement them is.

Architectural Design

Larry wants to know how this could work and why I'm convinced it's simple. Simple is, of course, a relative term.

For an example of the simple case (analogous to one CAN bus, one client), take a look at the attached C program. It communicates with a serial port and its standard input/output, copying all data between the two. It does this by forking off a separate process; see the serial2tty function in particular. (Most of the rest of the code is to set up the serial port.) So there are two processes; neither one runs at all until there's some data to read, and then it writes it and goes back to waiting for more data to read.
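For reference, a minimal sketch of that kind of copy loop (this is not the attached program itself; the name and buffer size are illustrative, and error handling is omitted):

    #include <unistd.h>

    /* Block in read() until data arrives, write it out, and go back to
       waiting; one process (or thread) runs this per direction. */
    void copy_loop(int from_fd, int to_fd)
    {
        char buf[256];
        ssize_t n;

        while ((n = read(from_fd, buf, sizeof(buf))) > 0)
            write(to_fd, buf, n);
    }

Fork once, run this from the serial port to standard output in one process and from standard input to the serial port in the other, and you have the simple one-bus, one-client case.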

How do we generalize this to many-inputs, many-outputs? Let's imagine forking off a pair of processes for every connection - one to read from and one to write to that connection. To use this approach we need a queue (let's make it a ring buffer, no less) to hold any messages which have yet to be delivered by at least one process. But wait, we need some memory to be shared between these processes then. We can use System V IPC to do that, but that would be silly. Instead, we can replace the processes with threads. Threads behave just like processes (on Linux, anyway) except that they don't get their own address space. Well, that's exactly what we want.

So now we have a collection of mostly-identical threads running small loops which wait for input and copy it to their output. Each one uses the queue as either input or output. The queue handles synchronization between the threads - this is important to avoid seriously mangling data structures - and removes an item once all threads have read it.
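Here is a minimal sketch of such a queue, assuming pthreads and simplified to a single writer and a single reader; the real muxer would need a read position per client thread and would drop data according to the policy above.

    #include <pthread.h>

    #define QUEUE_SLOTS 64

    struct message {
        unsigned short id;      /* 11-bit CAN message ID */
        unsigned char len;      /* 0-8 data bytes */
        unsigned char data[8];
    };

    struct queue {
        struct message slots[QUEUE_SLOTS];
        int head, tail;         /* next write / next read positions */
        pthread_mutex_t lock;
        pthread_cond_t nonempty;
    };

    /* Append a message; if the ring is full, drop the oldest one. */
    void queue_put(struct queue *q, const struct message *m)
    {
        pthread_mutex_lock(&q->lock);
        q->slots[q->head] = *m;
        q->head = (q->head + 1) % QUEUE_SLOTS;
        if (q->head == q->tail)
            q->tail = (q->tail + 1) % QUEUE_SLOTS;
        pthread_cond_signal(&q->nonempty);
        pthread_mutex_unlock(&q->lock);
    }

    /* Wait until a message is available, then remove and return it. */
    void queue_get(struct queue *q, struct message *m)
    {
        pthread_mutex_lock(&q->lock);
        while (q->head == q->tail)
            pthread_cond_wait(&q->nonempty, &q->lock);
        *m = q->slots[q->tail];
        q->tail = (q->tail + 1) % QUEUE_SLOTS;
        pthread_mutex_unlock(&q->lock);
    }

(The lock and condition variable are assumed to be initialized with pthread_mutex_init and pthread_cond_init, or the static initializers, before the threads start.)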

The basic queue data structure should be easy to implement for anyone who's passed the equivalent of PSU's CS163; and given a little experience with threads, all of the possible synchronization issues are easy to spot and fix. The loops running in the threads are scarcely longer than those in the serial2tty function. I estimate 30 lines of code to implement the TCP communications threads, and 20-30 lines for CAN communications. Overall, I estimate between 600 and 1,500 lines of code, partially depending on coding style of course. :)

Turns out that, as of 06 Sep 2001, there are 771 lines of code in the implementation of the muxer, counting blank lines and such. I seem to be getting better at these estimates. :)

-- ?JameySharp - 15 Aug 2001
-- ?JameySharp - last updated 22 Aug 2001