Posts about SOTR

Our Take on OpenCV (for SOTR Challenge)


OpenCV is fun. It looked scary before we tried it, but when we did, it turned out to be much easier than we anticipated. Shame we didn't start with it sooner (and by sooner I mean for last year's competition). Our rovers have been equipped with Raspberry Pi cameras since day one. The idea was to use them for the follow-the-line challenge, for recording and for first-person driving - none of which really worked well due to lack of time. Now, for the Somewhere Over the Rainbow challenge, we finally made use of them!

Setting Up the Picture

We read a few tutorials online and decided to go with an HSV picture as the base for image analysis. Our rovers have a camera service that reads 'raw' byte data of an image in RGB format directly from the camera and delivers it to all interested parties over MQTT. That allows us not only to break the code into smaller chunks and make services that provide access to hardware or software resources, but also to easily monitor what is really happening on the rover at any time.

So, as soon as we receive an image we prepare it to be used in OpenCV:

pilImage = toPILImage(message)        # raw RGB bytes -> PIL image
openCVImage = numpy.array(pilImage)   # PIL image -> numpy array OpenCV understands
results = processImageCV(openCVImage)
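
For completeness, here is roughly what a `toPILImage` helper like ours does, assuming the MQTT message payload is the raw RGB bytes and that the resolution is known (the values below are illustrative, not necessarily what our camera service uses):

```python
import PIL.Image

# Illustrative camera resolution - the real service knows the actual camera mode
WIDTH, HEIGHT = 320, 256

def toPILImage(messagePayload):
    # Wrap the raw RGB bytes coming over MQTT into a PIL image
    return PIL.Image.frombytes("RGB", (WIDTH, HEIGHT), messagePayload)
```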

Next we blur the image a bit and convert it to HSV components inside OpenCV:

blurredImage = cv2.GaussianBlur(image, (5, 5), 0)
hsvImage = cv2.cvtColor(blurredImage, cv2.COLOR_RGB2HSV)
hueChannel, satChannel, valChannel = cv2.split(hsvImage)

pyroslib.publish("overtherainbow/processed", PIL.Image.fromarray(cv2.cvtColor(hueChannel, cv2.COLOR_GRAY2RGB)).tobytes("raw"))
pyroslib.publish("overtherainbow/processed", PIL.Image.fromarray(cv2.cvtColor(valChannel, cv2.COLOR_GRAY2RGB)).tobytes("raw"))
pyroslib.publish("overtherainbow/processed", PIL.Image.fromarray(cv2.cvtColor(satChannel, cv2.COLOR_GRAY2RGB)).tobytes("raw"))


Finding Contours

The following step was one of the most important, and we spent lots of time tweaking it, but in the end the solution turned out to be relatively simple. Also, recompiling OpenCV with NEON and VFPV3 optimisations helped - a lot!

The problem is finding the right channel to apply the threshold to, and the right threshold value, to nicely select the balls on the black background. The main issue was that with lots of light the saturation channel was quite noisy, as colour was found everywhere, while the value channel was really nice. In lower light conditions the value channel was not that useful, while the saturation channel was jumping up and down yelling 'pick me!'

The algorithm we used goes something like this:

  1. combine saturation and value channels with some weights (current values are: 0.4 for saturation and 0.6 for value)
  2. start with a threshold value of 225 (25 less than 250, which is nearly at the top)
  3. find contours
  4. sanitise contours
  5. check if the correct number of contours was detected (i.e. more than 0 and less than many)
  6. if not, drop the threshold value by 25 and repeat from step 3

With that we can slowly see the ball forming in the middle of the picture.

Here's the code:

gray = sChannel.copy()
cv2.addWeighted(sChannel, 0.4, vChannel, 0.6, 0, gray)

threshLimit = 225
iteration = 0

while True:
    thresh = cv2.threshold(gray, threshLimit, 255, cv2.THRESH_BINARY)[1]
    iteration += 1

    cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[1]

    initialCntNum = len(cnts)

    pyroslib.publish("overtherainbow/processed", PIL.Image.fromarray(cv2.cvtColor(thresh, cv2.COLOR_GRAY2RGB)).tobytes("raw"))

    if 0 < len(cnts) < 6:
        log(DEBUG_LEVEL_ALL, "Found good number of areas after " + str(iteration) + " iterations, contours " + str(len(cnts)))
        return cnts, thresh

    if threshLimit < 30:
        log(DEBUG_LEVEL_ALL, "Failed to find good number of areas after " + str(iteration) + " iterations")
        return cnts, thresh

    threshLimit -= 25

As you can see, finding the contours is a function OpenCV already provides. Here are the steps our rover went through finding the green colour:


Sanitising Contours

We know we are searching for a ball. A round contour. Or something that looks round to the human eye. Even a young moon looks quite round, as our brain fills in the gaps. And that young moon problem - an area of the ball where too much light reflects, or where there is not enough light - prevents us from simply running a 'find circle' on each contour. So, we went on to check each contour's radius (the radius of the minimal circle that can be drawn around the contour) and area:

for i in range(len(cnts) - 1, -1, -1):
    center, radius = cv2.minEnclosingCircle(cnts[i])
    area = cv2.contourArea(cnts[i])
    if radius < MIN_RADIUS or area < MIN_AREA or area > MAX_AREA or center[1] >= 128:
        del cnts[i]

Since our camera is at the back of the rover, the lower half of the picture is the rover itself, so all contours in that area are immediately ignored (centre y >= 128).

MIN_RADIUS, MIN_AREA and MAX_AREA are derived from real-life runs of the code for a given resolution, position of the camera, size of the arena, etc... And a fudge factor of 0.7!

MIN_AREA = MIN_RADIUS * MIN_RADIUS * math.pi * 0.7
MAX_AREA = 13000.0

Processing Results

After we have found contours in the picture, we needed to find the colour of the area inside each contour. First we use the contour to make a mask and apply it to the hue channel (so we only look at the pixels inside the contour).

Now, the colour itself. It seems easy, but it wasn't. Remember the young moon? Our brain immediately makes it into a full circle, filling in the gaps. If a ball is recognised from less than half of the area of the circle, and colours vary (red and yellow are quite close to each other), it is a problem to find out what exactly the colour is. Taking an average skews the results, so we decided to take a histogram of all colours and pick the most predominant one. And it seems to be working well:

mask = numpy.zeros(hChannel.shape[:2], dtype="uint8")
cv2.drawContours(mask, [contour], -1, 255, -1)
mask = cv2.erode(mask, None, iterations=2)

maskAnd = hChannel.copy()
cv2.bitwise_and(hChannel, mask, maskAnd)

pyroslib.publish("overtherainbow/processed", PIL.Image.fromarray(cv2.cvtColor(maskAnd, cv2.COLOR_GRAY2RGB)).tobytes("raw"))

hist = cv2.calcHist([hChannel], [0], mask, [255], [0, 255], False)

value = numpy.argmax(hist)

if value < 19 or value > 145:
    return "red", value
elif 19 <= value <= 34:
    return "yellow", value
elif 40 <= value <= 76:
    return "green", value
elif 90 <= value <= 138:
    return "blue", value
else:
    return "", value

Here it is when the colour is spotted and the mask applied to the hue channel. The left image is of the found contour, the middle of the mask applied to the hue channel (see above what the hue looked like on its own) and the last image is the result... well, for looks!

The rest is for the main Somewhere Over the Rainbow agent to process the recognised colours. When there is only one, we take it as it is. When more than one coloured ball is recognised, we check if any of them are red and, if so, discard them, as they usually come from noise in the background. If the result is still undetermined, we take more pictures and process them too. Speaking of red: for red and yellow we take multiple readings of the picture, as the camera's adaptive lighting can change over several frames and produce better results. Green and blue are far more deterministic...
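
A sketch of that decision rule (the function and names are ours, for illustration, not the real agent's code):

```python
def pickColour(detected):
    # detected: colour names recognised on one frame, e.g. ["red", "green"]
    if len(detected) == 1:
        return detected[0]
    # More than one candidate: red blobs are usually background noise
    nonRed = [c for c in detected if c != "red"]
    if len(nonRed) == 1:
        return nonRed[0]
    # Still ambiguous - signal the caller to take another picture
    return None
```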

Here it is when all was put together:

Somewhere Over The Rainbow Analysis

The Somewhere Over the Rainbow challenge is new this year and one of the most interesting. We started by breaking down what the rover needs to do into simple tasks/steps (no matter how complex each step is):

  • turn around the arena in 90º steps (starting with an initial 45º turn) - but we need 135º and 180º turns, too
  • scan the colour of the ball the rover is facing
  • go towards a corner
  • follow the left or right wall of the arena to an adjacent corner

Turning around

In Somewhere Over the Rainbow there are several precise angles that the rover should turn:
  • 45º at the beginning of the scanning phase
  • 90º for scanning each new corner or to visit the next corner
  • 135º when facing a corner and needs to go to the adjacent corner on the left or the right
  • 180º when visiting the opposite corner
The first (45º) angle is always to one side, while the 90º and 135º turns go in both directions. For 180º it really doesn't matter which direction it is executed in. We implemented turning using the internal gyro and a PID algorithm. The 'P' component says how quickly the rover should turn, the 'D' component dampens it down if it starts moving way too fast, while the 'I' component gives us a 'nudge' when we are close to the target and the speed (PWM percentage) is not enough to really drive the motors. When the 'D' component is relatively big, 'I' is not collected (it is reset to zero), but the moment 'D' falls below a certain threshold we start adding errors to the 'I' component, which allows us to continue moving when the power would otherwise be too low.
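
That gated-'I' PID can be sketched like this (the gains and the 'D' gate threshold are made-up illustration values, not our tuned constants):

```python
class TurnPID:
    def __init__(self, kp=1.0, ki=0.1, kd=0.5, dGate=5.0):
        self.kp, self.ki, self.kd, self.dGate = kp, ki, kd, dGate
        self.lastError = 0.0
        self.integral = 0.0

    def update(self, targetAngle, gyroAngle, dt):
        error = targetAngle - gyroAngle
        derivative = (error - self.lastError) / dt
        self.lastError = error
        if abs(derivative) > self.dGate:
            self.integral = 0.0          # still turning fast - don't collect 'I'
        else:
            self.integral += error * dt  # slow near the target - let 'I' nudge us
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The output would be fed to the motors as a PWM percentage, clamped to whatever the hardware accepts.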

Scanning for Ball Colours

The scanning step is 'simple': turn 45º, fetch the colour of the ball we face (and add the first letter of the colour to a string), turn 90º, scan, add to the string and repeat. Doing it four times should give us four different colours, and from their order we can deduce the following steps.
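
The loop can be sketched as follows, with hypothetical `turnDegrees` and `scanColour` callbacks standing in for the rover's turn and camera services:

```python
def scanCorners(turnDegrees, scanColour):
    colours = ""
    turnDegrees(45)                # initial 45º turn to face the first ball
    for i in range(4):
        colours += scanColour()[0].upper()  # first letter: R, G, Y or B
        if i < 3:
            turnDegrees(90)        # on to the next corner
    return colours
```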

Now, the 'simple' task was originally done by counting RGB pixels and sorting them into red, green, yellow and blue, but that method turned out to be quite simplistic and not reliable enough. The next blog will explain more about how we used OpenCV...

Finding Nemo

Finding a corner is another small autonomous challenge we've solved. For it, our V formation of distance sensors works like a charm:


In the picture above, we use the middle configuration for this process. The left and right sensors should return similar distances. If not, the rover will drive at an angle that balances the distances. Our rover can continue to 'look' forward while driving to the left or right (strafing to some degree). The amount of 'strafe' (the angle at which all wheels are set) is directly proportional to the imbalance between the left and right sensor distances. When the left distance squared plus the right distance squared equals the squared target distance (let's say 120 or 150mm), we've got quite close to the corner. A PID algorithm is responsible for the rover neither slamming into the corner nor stopping way too soon; it dictates the speed of the rover.
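
One pass of that logic could be sketched like this (the names and the strafe scaling are ours; the real code drives the speed through the PID described above):

```python
def cornerStep(leftDist, rightDist, stopDist=150.0, maxStrafe=45.0):
    # Arrived when left^2 + right^2 drops to the squared target distance
    if leftDist ** 2 + rightDist ** 2 <= stopDist ** 2:
        return 0.0, True
    # Strafe angle proportional to the imbalance between the two sensors
    imbalance = (rightDist - leftDist) / (rightDist + leftDist)
    strafe = max(-maxStrafe, min(maxStrafe, imbalance * maxStrafe))
    return strafe, False
```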

Following Walls

The last piece of the puzzle is following the left or right wall. If the next ball is in the corner to the left or right of the current corner, the rover just needs to turn 135º and follow the wall. In the configuration above, that is the one on the left or right. One side sensor is used to judge the distance from the wall and one the distance to the corner (the opposite wall). The forward sensor is used with a PID algorithm, as above, to calculate the speed of the rover. The side sensors tell us what the rover is supposed to be doing:
  • if delta distances (current distance minus previous distance) are close to zero (some small number), the rover continues driving forward, gently strafing to keep the asked-for distance from the wall (120 or 150mm).
  • if delta distances are increasing, the rover is drifting away from the wall and corrective action is needed. That corrective action is calculated as follows:
    • the front sensor deltas determine the rover's current speed, and from that speed we can calculate the distance the rover will travel in one pass of the loop (~0.1s, as that is how long the VL53L0X takes to measure a distance)
    • the rover rotates around a point at its side so that the circle's arc is the length of the calculated travel. That means the rover adjusts its direction so it is parallel to the wall.
  • if delta distances are decreasing, the rover is closing in on the wall and a similar action is needed, but around a point on the opposite side.
With that we can assume that by the time the rover arrives at the corner it will be adequately parallel to the wall, no matter which angle it started from. Since the gyro is not absolutely correct (gyros have a bad tendency to drift), this step is where we expect to correct accumulated errors.
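
The decision part of that wall following can be sketched as (the `epsilon` 'small number' and the names are ours):

```python
def wallFollowAction(sideDelta, epsilon=2.0):
    # sideDelta: current side-sensor distance minus the previous one, in mm
    if abs(sideDelta) <= epsilon:
        return "forward"         # parallel enough - keep going, gentle strafe
    if sideDelta > 0:
        return "rotate-towards"  # drifting away - rotate about a point on the wall side
    return "rotate-away"         # closing in - rotate about a point on the opposite side
```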

Putting It Together

With the above steps we can now do the challenge.

First we'll turn 45º to face the first ball. Then scan, turn 90º and repeat three more times to collect all the corner colours. If any corner colour is inconclusive, we'll put an 'X' in the list.

If there are 'X' characters in the list, we can turn to those corners again and redo the scanning until we have four distinct colours. The idea is that if we get two of the same colour out of four, we can just invalidate both and re-scan those corners.
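
That invalidation rule is simple to sketch:

```python
def invalidateDuplicates(colours):
    # colours: one letter per corner, e.g. "RGGB"; 'X' marks an unknown corner
    out = list(colours)
    for i, c in enumerate(out):
        if c != "X" and colours.count(c) > 1:
            out[i] = "X"   # two corners claim the same colour - rescan both
    return "".join(out)
```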

Using OpenCV we can even find the 'moment' (centre) of the ball and adjust the required angle to move to the next corner.

When the colours of the balls are determined, we need to rotate to the red one (shifting our string of first letters accordingly). As a result we'll have a string that always starts with 'R'. Analysis gave us 6 distinct combinations, as follows:


The strings are: RGYB, RBGY, RYGB, RGBY, RBYG or RYBG only! And those 6 combinations can be translated into a series of 'find corner'; turn -135º, -90º, 90º or 135º; and 'follow left or right wall' steps, just as described above. Here are those steps:


Simple, right?
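
The rotation that makes every scanned string start with 'R' is a one-liner to sketch:

```python
def normaliseToRed(colours):
    # Shift the four-letter corner string so it always starts with 'R'
    i = colours.index("R")
    return colours[i:] + colours[:i]
```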

Somewhere Over The Rainbow

This is a blog post about making the 'arena' for the Somewhere Over The Rainbow challenge.

The Bill:

9mm MDF 2440mm x 1220mm board - £16.80

Corner Brace 40mm - £7.74

3.5 x 12mm screws - £2.88

Blackboard paint - £5.90 (£11.60 as a second coat was needed to remove the grey quarter circles)

Art Attack PVA glue - the kids had grown up sufficiently not to notice half is missing

Total: £33.32 (£39.02 with the second coat of paint)

Start of the build: [gallery ids="1215,1216" type="rectangular"]

Ready for carrying it around:


Checking the size of the arena vs rover:


Sketching corners:


First, live test...


... has successfully passed - with a hamster!

Priming equipment:



[gallery ids="1206,1205" type="rectangular"]

Putting it all together:


Question: Grey corners or not?


First test of the software's coloured-ball detection (happy path - almost no cheating):


(don't get confused - camera is mounted at 90º CCW)

Update: Patience is a virtue... So, a mere couple of hours later all was done:


So, here we go again - second coat of paint:


Grey quarter circles: now you see them - now you don't!

And - over the weekend we discovered that the screws on the bottom of the arena have a bad habit of damaging furniture (i.e. dining tables). So, here are the finishing touches to address it:

[gallery ids="1223,1222,1221" type="rectangular"]

Next step: coding!