
Software Team Meeting

Who: Ian, David, Jamey, Josh, Keith, Eric (welcome, Eric!)
Where: Andrew's cabin
When: December 15, 2004

Software Overview

Andrew asks: Can we be ready in three weeks? What needs to be done?

Critical Tasks

  1. Simulator:
    • Simulating all critical CAN nodes and rocket flight.
  2. Sequencer:
    • Algorithms
    • Testing
  3. Data logging:
    • DMA access to disk
    • How much network bandwidth we have if we don't want to lose IMU data
    • Netforward T to capture data on the psas field server [DONE]
  4. Launch Control:
    • Light status [DONE]

Nice Tasks

Working on the Flight Computer

IDE Testing

We reviewed the IDE controller and learned that our system is probably not capable of DMA. We measured bandwidth with hdparm and with dd from /dev/zero, and got 73kB/s to the CF card. CPU usage was up to 10%, which is well within our CPU budget.
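
For reference, here is a rough C sketch of the same kind of sequential-write test that dd from /dev/zero performs. The output path, block size, and transfer size are placeholders for illustration, not what we actually ran.

    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096
    #define NUM_BLOCKS 1024                       /* 4 MB total, arbitrary */

    int main(void)
    {
        static char block[BLOCK_SIZE];            /* zero-filled, like /dev/zero */
        FILE *out = fopen("/mnt/cf/throughput.tmp", "wb");  /* placeholder path */
        struct timeval start, end;
        double secs;
        int i;

        if (!out) {
            perror("fopen");
            return 1;
        }

        gettimeofday(&start, NULL);
        for (i = 0; i < NUM_BLOCKS; i++)
            fwrite(block, 1, BLOCK_SIZE, out);
        fflush(out);
        fsync(fileno(out));                       /* make sure data hits the card */
        gettimeofday(&end, NULL);
        fclose(out);

        secs = (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
        printf("%.1f kB/s\n", BLOCK_SIZE * (double)NUM_BLOCKS / secs / 1000.0);
        return 0;
    }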

Given our current CAN format of 16 bytes per message, that's about 4,000 messages per second (msps), which is 48kB/s. That's within budget, but barely.

Wireless Testing

When sending max-length packets (1400 bytes), we can send 154 packets per second, or 215kB/s, which is well over our expected 48kB/s data rate to the ground. CPU usage was about 20% at 200kB/s, so that's hopefully about 5% at 50kB/s.
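
Here is a minimal C sketch of the sort of UDP blast test we ran: send fixed-size packets as fast as possible and report the rate. The destination address, port, and packet count are placeholders; the real test setup may have differed.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        char payload[1400];                              /* max-length payload */
        struct sockaddr_in dest;
        struct timeval start, end;
        double secs;
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        int i, count = 10000;

        if (sock < 0) {
            perror("socket");
            return 1;
        }

        memset(payload, 0, sizeof(payload));
        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(5005);                     /* placeholder port */
        inet_pton(AF_INET, "10.0.0.1", &dest.sin_addr);  /* placeholder ground IP */

        gettimeofday(&start, NULL);
        for (i = 0; i < count; i++)
            sendto(sock, payload, sizeof(payload), 0,
                   (struct sockaddr *)&dest, sizeof(dest));
        gettimeofday(&end, NULL);

        secs = (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
        printf("%.0f packets/s, %.0f kB/s\n",
               count / secs, count * sizeof(payload) / secs / 1000.0);
        close(sock);
        return 0;
    }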

If we're using 16 bytes per payload, we can only send 852 packets/second. This means, of course, that we have to pack multiple messages into each packet. We want to pack as few messages per packet as possible in case a packet gets killed, so we're thinking 20 messages per packet, with an overlap of 10 on either side so that each message goes out twice, a sort of forward error correction (FEC) against dropped packets. CPU usage may limit this, however.

At 20 messages @ 16 bytes each (320-byte packets), we can send 435 packets/second, and we need 400 packets/s. At 40 messages @ 16 bytes each (640-byte packets), we can send 285 packets/second, and we need 200 packets/s.
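
A sketch of the proposed packing scheme, to make the overlap concrete: pack 20 of the 16-byte messages per packet, but advance the window by only 10 messages per packet, so every message goes out in two consecutive packets. The structure names and the send_packet() stub below are illustrative only, not the actual flight code.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MSG_SIZE        16      /* bytes per logged CAN message */
    #define MSGS_PER_PACKET 20      /* 20 * 16 = 320-byte packets */
    #define PACKET_STRIDE   10      /* advance 10 messages per packet */

    struct can_msg {
        uint8_t raw[MSG_SIZE];
    };

    /* Stub: the real thing would hand the buffer to the wireless link. */
    static void send_packet(const uint8_t *buf, size_t len)
    {
        printf("sending %u-byte packet\n", (unsigned)len);
    }

    /* Pack msgs[start .. start+19] into one packet.  Calling this with
     * start = 0, 10, 20, ... puts every message into two consecutive
     * packets, which is the double-send overlap described above. */
    static void send_window(const struct can_msg *msgs, size_t start)
    {
        uint8_t packet[MSGS_PER_PACKET * MSG_SIZE];
        size_t i;

        for (i = 0; i < MSGS_PER_PACKET; i++)
            memcpy(packet + i * MSG_SIZE, msgs[start + i].raw, MSG_SIZE);

        send_packet(packet, sizeof(packet));
    }

    int main(void)
    {
        struct can_msg msgs[100];            /* pretend buffer of 100 messages */
        size_t start;

        memset(msgs, 0, sizeof(msgs));
        for (start = 0; start + MSGS_PER_PACKET <= 100; start += PACKET_STRIDE)
            send_window(msgs, start);
        return 0;
    }

Sending everything twice doubles the packet rate needed for a given message rate, which is where the 400 and 200 packets/s requirements above come from.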

Testing packet loss:

We were able to send 640-byte packets at 280 packets/s with no packet loss and ~20% CPU usage, and that was from LV2 to pfield.

run_threads at full IMU data rate

Initial tests showed run_threads still taking 100% of CPU when logging full data rate IMU messages to both flash and network.

Logging only to flash left about 2% idle; logging only to network left 0% idle. Usage was about 3:1 system time over user time.

When disabling network writes and directing flash writes to /dev/null, idle time was still only about 2%.

With both network and flash writes disabled, the system was about 60% idle.

With network writes disabled and a 1ms, 10ms, or 100ms sleep after every disk write, the system was about 55% idle.

With a 1ms delay after every network write and every disk write, the system was about 55% idle, and offhand it looked like all messages were getting through. Success!
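
The fix boils down to yielding between writes. Here is a toy C version of that throttling; the file descriptor and record size are stand-ins, and the real change lives in run_threads.

    #include <fcntl.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        struct timespec pause;
        char record[16];                          /* stand-in for one CAN message */
        int fd = open("/dev/null", O_WRONLY);     /* stand-in for the flash/net fd */
        int i;

        pause.tv_sec = 0;
        pause.tv_nsec = 1000000;                  /* 1 ms */

        memset(record, 0, sizeof(record));
        for (i = 0; i < 1000; i++) {
            write(fd, record, sizeof(record));    /* log one record */
            nanosleep(&pause, NULL);              /* then yield for ~1 ms */
        }
        close(fd);
        return 0;
    }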

measuring actual IMU data rate

When we ask for a divisor of 1 for accelerometer messages, our test for correct data rate seems to report 2.7k messages per second (kmps). (At a divisor of 25, the reported rate is very close to 100 messages per second.) Log file analysis suggests we're getting an average of 2.42kmps over a 52-second period. We're not sure what caused the discrepancy.
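
The log-file average comes from counting messages over the capture period. A rough sketch of that kind of check, assuming (and this is only an assumption) that the log is a plain stream of 16-byte messages with no other framing:

    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_SIZE 16

    int main(int argc, char **argv)
    {
        char msg[MSG_SIZE];
        long count = 0;
        double seconds;

        if (argc != 2) {
            fprintf(stderr, "usage: %s capture-seconds < logfile\n", argv[0]);
            return 1;
        }
        seconds = atof(argv[1]);

        while (fread(msg, 1, MSG_SIZE, stdin) == MSG_SIZE)
            count++;                              /* one fixed-size message */

        printf("%ld messages in %.0f s = %.2f kmps\n",
               count, seconds, count / seconds / 1000.0);
        return 0;
    }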

netforward logging

netforward can now accept "-" instead of a destination address to send data to standard output. The invocation of netforward in /etc/network/interfaces on pfield now uses this to log all data sent down by the rocket into /var/log/psas/netforward.log.XXXXXX, where XXXXXX is six random characters. To find the newest logfile, sort files by date (for example, with ls -t).
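
The six random characters look like a standard mkstemp()-style template. Here is a small sketch of how such a log file name gets created; we haven't checked whether the pfield setup actually uses mkstemp, so treat this as illustrative only.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char path[] = "/var/log/psas/netforward.log.XXXXXX";
        int fd = mkstemp(path);       /* replaces XXXXXX with six random characters */

        if (fd < 0) {
            perror("mkstemp");
            return 1;
        }
        printf("logging to %s\n", path);
        /* data read from netforward's standard output would be written to fd here */
        close(fd);
        return 0;
    }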