Telemetry

There are a few ways of developing your code on Raspberry Pis for PiWars:

  • write your code on the Raspberry Pi using a monitor and keyboard attached to the RPi
  • write your code on the Raspberry Pi using ssh
  • write your code on the Raspberry Pi using X11 tunnelled back to your computer/laptop's X11 terminal (client program)
  • write Python on your laptop/computer, copy it to the Raspberry Pi (scp/samba/...), then ssh back to the RPi to run it
  • use a sophisticated IDE like the Pro version of PyCharm that knows how to execute Python code remotely, piping shell output back to the IDE's console
  • use PyROS

I've enumerated the options from simplest to most complex, and from least convenient (it is quite hard chasing your rover with a monitor and keyboard in your hands while attached to it) to most convenient. Having an IDE do the heavy lifting for you (the Pro version of PyCharm) is very useful, but it costs money (£70 a year for individuals). Using PyROS is almost as easy as an IDE - or at least it can be made that easy.

$ pyros myrover upload -s -r -f wheels wheels-controller.py

This is an example of how to upload a new version of the 'wheels' service (the Python file wheels-controller.py) and keep the 'connection' open (the -f option), pumping the 'logs' (stdout) back. If at any point you stop it, you can continue monitoring a service's stdout with:

$ pyros myrover log wheels

But all of those presuppose that you use print statements in Python to log what is going on with your code. In our case, we have a few loops running at significant speeds (over 200Hz) gathering data (wheel orientation from the AS5600, accelerometer, gyroscope and compass readings from the IMU, and so on). Just imagine the amount of text printed to stdout all the time. Also, as PyROS uses MQTT for communication between processes (and for shell commands like the one above), all that output would become really big and 'clog' the network throughput. An additional problem is that not all the output comes from the same PyROS 'service' (a unix process maintained by PyROS) - so you would need to monitor several process logs simultaneously.

Requirements

One option would be to write to log files. But then, how far can you go writing to an SD card? It has limited throughput as well. You would also end up with several files which would need to be downloaded after the run and analysed in parallel. Not an easy job. All of this led us to decide to create a simple data gathering system - a simple telemetry library!

The ideas were as follows:

  • ability to define the structure of the data
  • ability to 'pump' larger amounts of structured data into the logging system
  • ability to fetch data on demand from the client/analysing software
  • nice to have: ability to produce aggregates of gathered data for real time telemetry

So, here it is - a small side project of the Games Creators Club: GCC-Telemetry.

Design

Telemetry Diagram

This is a high level overview of how GCC-Telemetry works: each process has two 'channels' of communication to the central 'telemetry' service. One is a 'general', low bandwidth channel used for setting up a 'stream' of data (a stream of records of the same type), and the other is as fast as we could devise - a unix pipe for logging the data. The second channel is supposed to be as quick and unobtrusive to the system as possible: it would be unfair for the rest of the system to suffer because a few services are dumping larger amounts of data to the central place (the telemetry service). Unix pipes seem to be one of the fastest means of interprocess communication - especially when shared memory is not an easy option because several processes need the same service. Local sockets seem to be ever so slightly slower, and are kept as a 'Plan B', a fallback solution in case it is needed. Also, sockets would work across several Raspberry Pis, too.
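To illustrate the pipe idea (a minimal sketch, not GCC-Telemetry's actual code; the fifo path and record layout are made up):

        import os
        import struct

        FIFO_PATH = '/tmp/telemetry-fifo'     # made-up path for this sketch

        if not os.path.exists(FIFO_PATH):
            os.mkfifo(FIFO_PATH)              # named pipe both processes can open

        # Producer side: pack a record as raw bytes and push it down the pipe.
        # Note: opening a fifo for writing blocks until a reader attaches.
        with open(FIFO_PATH, 'wb', buffering=0) as pipe:
            pipe.write(struct.pack('<dd', 1.5, 2.5))   # two doubles, 16 bytes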

The telemetry service then collects all the log records thrown at it and stores them in memory. Most PiWars challenges last around 10 minutes max (600 seconds), so if we extrapolate for an example 200Hz feed of 20-ish floating point numbers (double precision - 8 bytes each), each record would be 160 bytes: 160 bytes x 200 times a second x 600 seconds ~= 18MB - an amount of memory we can sacrifice for logging. The Pi Zero is quite tight with memory, but 50-100MB (in the worst possible case) is something that can be put aside for logging if really needed. The downside of keeping it all in memory is losing the data if a process dies or the Pi Zero browns out before the data is extracted. There's still the option of slowly storing that data to the SD card (for example at half the SD card's I/O throughput), but it is not implemented. Yet.

The last piece of the puzzle is a client that can request data when convenient (so it won't affect the finely tuned software and sensor processing) - at the end of the run, or by trickling data through the run. The client uses MQTT (like the rest of the PyROS architecture) to fetch each stream of data into local memory and do something with it.

Stream

Telemetry Stream Class Diagram

A stream defines something like a fixed-size record of byte fields. Using the TelemetryStreamDefinition class, you can define your stream by giving it a name in the constructor and then defining fields by calling the addXXX methods (addByte, addDouble, ...); a short sketch follows the list. The available types (and methods) are:

  • addByte(self, name, signed=False)
  • addWord(self, name, signed=False)
  • addInt(self, name, signed=True)
  • addLong(self, name, signed=True)
  • addFloat(self, name)
  • addDouble(self, name)
  • addTimestamp(self, name)
  • addFixedString(self, name, size)
  • addFixedBytes(self, name, size)
  • addVarLenString(self, name, size)
  • addVarLenBytes(self, name, size)

Unfortunately variable length strings and variable byte arrays are not yet implemented.
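As a sketch, a definition using only the implemented methods above might look like this (the stream and field names are made up):

        stream = TelemetryStreamDefinition('wheel-speed')
        stream.addTimestamp('time')
        stream.addFixedString('wheel', 2)     # e.g. 'fl' for front-left
        stream.addDouble('speed')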

Logger

Telemetry Logger Class Diagram

A stream on its own is just a definition. To put that definition to use, we've extended the class into StreamLogger. The stream logger is the main entry point for processes that need to store telemetry data in the system. We can also use the logger to define the stream. For example:

        steer_logger = telemetry.MQTTLocalPipeTelemetryLogger('wheel-steer')
        steer_logger.addFixedString('wheel', 2)
        steer_logger.addFixedString('action', 2)
        steer_logger.addDouble('current')
        steer_logger.addByte('status')
        steer_logger.addDouble('speed')
        steer_logger.addDouble('pid_last_output')
        steer_logger.addDouble('pid_last_delta')
        steer_logger.addDouble('pid_set_point')
        steer_logger.addDouble('pid_i')
        steer_logger.addDouble('pid_d')
        steer_logger.addDouble('pid_last_error')
        steer_logger.init()

The telemetry server will reject a new stream created with a different definition for exactly the same stream name. You can, however, define exactly the same stream in two different places and send interleaved logs in.

Once the stream has been successfully created, we can use it to log our data through the log method. For instance:

        steer_logger.log(time.time(), bytes(wheelName, 'ascii'), STOP_OVERHEAT,
                         curDeg, status | STATUS_ERROR_MOTOR_OVERHEAT, speed,
                         pid.last_output, pid.last_delta, pid.set_point,
                         pid.i, pid.d, pid.last_error)

Server and Storage

Telemetry Server Class Diagram

The telemetry server needs to be started as a service (as a separate process or a separate thread, for instance) to collect and keep the data.

        server = MQTTLocalPipeTelemetryServer()

That will subscribe to the appropriate MQTT topics (you can change the prefix, but the default is 'telemetry/') and start listening to requests. The server also needs two more details: where data is stored (TelemetryStorage) and how data is collected. For instance, the PubSubLocalPipeTelemetryServer constructor has the following defaults:

    def __init__(self, topic='telemetry', pub_method=None, sub_method=None, telemetry_fifo="~/telemetry-fifo", stream_storage=MemoryTelemetryStorage()):

There are three kinds of extensions for storage (a small wiring sketch follows the class diagram below):

  • MemoryTelemetryStorage - stores all data in a local array (you can limit the number of records; by default it is limited to 100000)
  • LocalPipePubSubTelemetryStorage - was to be used with TelemetryLogger, but that was implemented in a simpler way
  • ClientPubSubTelemetryStorage - for clients that want to add to the log storage remotely (not used)

Telemetry Storage Class Diagram
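As a small wiring sketch (assuming MQTTLocalPipeTelemetryServer accepts the same arguments as the PubSub constructor shown above):

        storage = MemoryTelemetryStorage()    # in-memory store, default limit 100000 records
        server = MQTTLocalPipeTelemetryServer(stream_storage=storage)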

Client

The last component is the client - the TelemetryClient class - which is to be used on a laptop/computer to retrieve the logs.

Telemetry Client Class Diagram

Methods are:

  • getStreams(callback) - retrieving all defined streams
  • getStreamDefinition(stream_name, callback) - getting stream definition
  • getOldestTimestamp(stream, callback) - finding out the oldest timestamp in the set of logs
  • trim(stream, to_timestamp) - removing all logs older than given timestamp
  • retrieve(stream, from_timestamp, to_timestamp, callback) - retrieving all records from from_timestamp that are older than to_timestamp

As you can see, all methods take a callback that is invoked when the data from the server is ready. An example is in download-stream - a utility that downloads a particular stream from the server into a file. You can choose a byte representation or a CSV (human readable) representation.
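As a hedged sketch of that callback style (the TelemetryClient constructor arguments are not shown in this post, so the no-argument construction is an assumption, as is passing the stream by name):

        import time

        client = TelemetryClient()            # assumption: default MQTT connection

        def print_records(records):           # invoked when the data is ready
            for record in records:
                print(record)

        # Fetch everything logged so far for the 'wheel-steer' stream.
        client.retrieve('wheel-steer', 0, time.time(), print_records)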

Conclusion

Since Python is an OO (Object Oriented) programming language, it was quite easy to make a simple logging system. Also, the built-in `struct` package allowed a reduction in the bandwidth of the logged data (in comparison to an ASCII representation), as all numbers can be packed as bytes, words, integers, longs, floats and doubles (signed or unsigned), allowing a smaller memory footprint. After all, it was quite fun playing with the code for this small library.
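For illustration, packing one record similar to the wheel-steer stream above (a sketch using the standard struct module, not the library's internal format):

        import struct
        import time

        # '<' = little endian, no padding; 'd' = double, '2s' = 2-byte string,
        # 'B' = unsigned byte.
        record = struct.pack('<d2s2sdB', time.time(), b'fl', b'st', 123.4, 1)
        print(len(record))    # 21 bytes, versus far more as ASCII text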

Currently the main implementation runs over MQTT (which PyROS heavily relies on) and unix pipes. It would now be quite easy to extend it so logging can happen over IP sockets (for logging across different machines), to replace MQTT with some custom implementation that can go over the same sockets, to add a REST implementation for low bandwidth communication, and so on...

This Year's Distraction


Rovers work no matter what the 'client' side apps look like, right?

PyROS Clients and Agents

PyROS (Python Rover Operating System) is, in essence, a simple Linux service that starts one Python program which listens to particular topics on MQTT (a local queue broker). The client program (on a computer or laptop) communicates with the rover's code over that same MQTT, sending instructions to that Python program (imaginatively called just PyROS). The most important command clients can send to PyROS is to upload a whole piece of Python code (a file with the '.py' extension - a Python program). There is a set of command line tools (pyros) that can do various things to the rover - upload a program/service/agent, query what is running on the rover, start/stop a program/service/agent, check stdout (read logs), etc. PyROS 'recognises' three types of programs: services, programs and agents. Services are maintained by PyROS: service programs are started by PyROS at start up and kept running all the time. The most important services on our rovers are:
  • wheels - drives the servos and motors of individual wheels
  • drive - accepts 'high level' commands like drive (forward), rotate and steer, and 'translates' them to the 'low level' commands for wheels
  • jcontroller - reads (bluetooth) joystick inputs and translates them to 'high level' commands for the drive service
  • mpu9250 (and similar) - reads the mpu9250 board for gyro/accelerometer/compass and provides readouts for other programs/agents that need gyro/accel input
  • vl53l0x - distance sensor - provides distance readings
  • lights - service that turns the rover lights on and off (underneath the rover - originally intended to be used for follow the line)
  • shutdown - service that reads a switch and shuts down the Raspberry Pi
  • discovery - service that listens to UDP broadcasts and replies with the rover's IP/port and name (a simple discovery service)
  • camera - service that reads the camera and sends stills to an agent/program or client
  • storage - service that, when written to, stores data in a tree (similar to the Windows Registry) and, when requested, emits values and changes to values back to all listeners
and a few others.

Unlike services, programs are 'one off' code that is started and, when stopped, left alone. They are not started at start up, but otherwise do not differ from services. There is one more use for programs - libraries. All the Python code in PyROS on the Raspberry Pi (including all programs/services and agents) is exposed as Python modules to the other code. So, if needed (it is still to be considered whether this is a good idea), one service can import another service directly and use its code. That means that if something is uploaded as a program and it just does minimal initialisation (if needed at all!) and stops, it can be treated as a library (module) for other programs. Currently we have only two:

pyroslib - a set of helper methods to subscribe/publish to the MQTT queue and similar frequently used operations

storagelib - a set of helper methods to read/write to the common storage
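As modules, they are used like any other import. For instance (the helper name here is hypothetical - the post only says pyroslib wraps MQTT publish/subscribe):

        import pyroslib   # uploaded code can import other uploaded modules directly

        # Hypothetical call and topic - illustrative only.
        pyroslib.publish('drive/forward', '100')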

And so we slowly come to the last part of this tangent before getting to the distraction: the agents.

Agents

The agents are closer to programs than to services. But unlike programs, which are 'left alone' by PyROS, agents are closely 'watched' by it. Actually, it is not the agents themselves that are closely watched, but the 'interest' in them. But let's go off on another smaller tangent: what are software agents, really?

PiWars rules dictate that for autonomous challenges robots must perform a function autonomously - that means without the help of an operator or another computer. But with PyROS we have the chance to write code on a laptop and upload it for execution on the rovers themselves. Uploading a program to execute remotely is fine - but those programs are not expected to work, let alone work perfectly, immediately. And what is the output of such remote programs? Normally one would use scp to copy Python code, ssh to the Raspberry Pi (Raspbian Linux), start the Python program and watch 'stdout' for the 'debug info'. With PyROS we can do the same: upload the program and use the log function to read the debug information from the stdout of the remote program. But that is not as convenient, nor as visually effective, as running a program on the 'client' (a laptop) which on start up uploads an agent to the Raspberry Pi and keeps close communication with it using the same MQTT mechanism. The client can then send various 'commands' like 'start' and 'stop' for the challenge, along with many other smaller, less important things needed while initially programming the code for an autonomous challenge (like breaking steps down into smaller, more manageable bits), and turn different debug info on and off, presenting it in a convenient form on the screen.

Back to PyROS - what PyROS does for agents is the following: after an agent is uploaded and started, PyROS expects the client to send a 'ping' message for that agent at regular intervals no longer than 5 seconds apart (subject to change). If a ping is missed, the agent is killed. That allows the client to be stopped on the laptop while PyROS, eventually, takes care of the agent code.
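Sketched as plain Python (this is illustrative, not PyROS's actual code; the names and the kill mechanism are assumptions):

        import time

        PING_TIMEOUT = 5.0        # seconds; the post says this is subject to change
        last_ping = time.time()

        def on_agent_ping():      # called for every 'ping' MQTT message received
            global last_ping
            last_ping = time.time()

        def watchdog_check(agent_process):
            # If the client stopped pinging, kill the agent.
            if time.time() - last_ping > PING_TIMEOUT:
                agent_process.terminate()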

We had a few moments of scratching our heads while the rover was doing odd things, only to realise that one of the agents had kept working because we didn't have this 'automatic kill switch' in place yet.

And now, this year's:

Distraction!

Wherever we turn we can see 'quality' images of futuristic UIs - screens of made up applications that run in spaceships or are projected on space suit visors. Many SciFi films and animated films have them. For instance:

Still frame from the anime Blame!

or

Still frame from the TV series The Expanse

We have a small collection of applications that are used with our rovers. Almost every autonomous challenge has a client application (sometimes called 'the controller'), like the one we started writing for the Somewhere Over the Rainbow challenge:

camera-example-old

Or the application for calibrating the servos and ESCs for the wheels:

calibration-old

Or one of the latest additions - a tiny application that reads the local computer's joystick (or keyboard) and sends data to the drive service:

jcontroller-old

But even though they were functional, they didn't have any style or that cutting edge science feel to them. And, after all, we are dealing with some kind of progress. So there came our little distraction: after a weekend of technically useless but visually pleasing work, we spruced up the UI side of our apps. The Somewhere Over the Rainbow controller ended up looking like this:

camera-example

Calibration app:

selecting-rover

Joystick controller:

jcontroller

Since almost all of our apps follow a very similar pattern, it was very easy to convert the other old apps to the new UI style. Here is, for instance, the app we created to test the accelerometer:

accelerometer

And there are a few others as well... If the competition were to be won by fancy graphics, I think we would be among the winners!