Rescue Line 2018 – Part 1

Intro

This is the first of a two-part blog post featuring Team Chicken McNuggets, who attained 1st place at the RoboCup Singapore Open in 2018 and went on to represent Singapore at the RoboCup Internationals in Montreal, Canada, held from 16–22 June.

Part 1 will go through the team’s progress leading up to the Singapore Open, while Part 2 will dive into the adjustments they made in the lead-up to our first international appearance in this category.

It all started with a vision...

After years of sticking solely to the soccer category in RoboCup, we decided that we were finally ready to expand into Rescue Line, something which Vice-President Xu En had been pushing for ever since he joined us as a JAE student in 2017.

We sent 2 teams, each with a mix of experienced and inexperienced members, with the goal of sweeping the podium together with the soccer teams and punching our ticket to the international competition in Montreal, Canada later in the year.

Team members: Deng Jun, Huy, Hochi, Ian

Mechanical Design

In order to tackle every aspect of the mission, we went through multiple design iterations (see below). Some design considerations included:

  • A stable base with a low centre of gravity (CG) to go up and down the ramp
  • Wheels that can clear the speed bumps
  • A method of picking up the balls (aka victims)
  • Conformance to the size limit
  • Sensor allocation and placement

We eventually settled on a tracked design, which helps the robot clear speed bumps easily. Furthermore, the drive wheels are geared for torque rather than speed, which gives the robot more overall power and control when line tracking.

The EV3 brick and motors were also fitted such that the robot’s CG is low enough to clear the ramp without toppling over on the way down.

Rescue mechanism

To rescue the victims, we settled on a design that sweeps the balls towards the wall, then lifts them into a basket that can tip over and dump the balls backwards into the elevated evacuation zone when triggered.

Sensor allocation

With regard to sensor allocation, we purchased an EV3 multiplexer, which gives the robot 6 sensor ports instead of 4.

This allowed us to fit 3 light sensors for line tracking, 1 touch sensor to locate the evacuation zone, and 1 ultrasonic sensor to detect obstacles in the form of a solid brick or a water bottle. 

All 4 motor ports were used, 2 to drive the wheels and 2 to power a lifting mechanism.

Mindstorms multiplexer gives our robot 2 additional sensor ports
Breakdown of final design

Evacuation Room Strategy

To rescue the ‘victims’, we decided to have the robot repeatedly sweep the room in a rectangular pattern, in order to catch as many victims as possible within the remaining time.

We also attached cable ties to help sweep the balls towards the wall. Once the robot has herded all the balls against the wall, it lifts them up into the cage on top of the robot.

For the detection of the evacuation zone, we settled on an ‘elephant tusk’ design, adding 2 angled beams that trigger the touch sensor when they come into contact with the angled edge of the evacuation zone.
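
As a rough illustration, the sweep logic looks something like the NXC-style sketch below. Our actual program was built in the Mindstorms software, so the ports, power levels, and timings here are assumptions for illustration only.

    // NXC-style sketch of the rectangular sweep. Ports, powers, and
    // timings are illustrative assumptions, not the team's actual code.
    task main()
    {
        SetSensorTouch(S1);                      // 'elephant tusk' touch sensor
        SetSensorLowspeed(S4);                   // ultrasonic sensor

        while (true) {
            OnFwd(OUT_AB, 50);                   // drive one side of the rectangle,
                                                 // cable ties herding balls along
            if (SENSOR_1 == 1) {                 // tusk pressed the angled edge
                Off(OUT_AB);
                OnFwd(OUT_C, 75); Wait(1500);    // lift the basket and tip it
                OnRev(OUT_C, 75); Wait(1500);    // lower it again
                Off(OUT_C);
            }
            if (SensorUS(S4) < 10) {             // wall ahead (reading in cm)
                OnRev(OUT_AB, 50); Wait(300);    // back off a little
                OnRev(OUT_B, 50); OnFwd(OUT_A, 50);
                Wait(600);                       // pivot roughly 90 degrees
            }
        }
    }

The exact shape of the sweep matters less than covering the floor repeatedly: each pass pushes any balls it touches towards the wall for the lift to collect.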

Programming

Initially, we decided on the Proportional–Integral–Derivative (PID) method for line tracking, and even experimented with a Mindstorms light array in order to save sensor ports and line track more accurately.

Mindstorms light array ft. 8 mini light sensors
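
For reference, PID line tracking on a single light sensor looks roughly like the NXC sketch below; the gains, target reading, and ports are placeholder assumptions, not values we actually tuned.

    // Sketch of PID line tracking in NXC. Gains, target, and ports are
    // placeholder assumptions; in real code the motor powers should also
    // be clamped to the -100..100 range.
    task main()
    {
        SetSensorLight(S1);                      // light sensor riding the line's edge

        float kp = 0.8, ki = 0.01, kd = 4.0;     // proportional, integral, derivative gains
        int   target = 50;                       // reading straddling black and white
        float integral = 0, last_error = 0;

        while (true) {
            float error = Sensor(S1) - target;   // positive: drifting onto white
            integral += error;
            float turn = kp * error + ki * integral + kd * (error - last_error);

            int left  = 50 + turn;               // split the base power between
            int right = 50 - turn;               // the two motors to steer
            OnFwd(OUT_A, left);
            OnFwd(OUT_B, right);

            last_error = error;
        }
    }

The appeal of PID over simple bang-bang tracking is that the correction scales with how far the robot has drifted, giving smoother tracking at speed; the catch, as we found, is that all three gains have to be re-tuned whenever the robot’s weight or geometry changes.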

However, as newer iterations of the robot became bulkier after we added the lifting mechanism and cage for rescuing the victims, we decided that the PID method was ultimately unsuitable and difficult to fine-tune properly. The tight turns on certain tiles, the intersections, and the light array’s inconsistent performance also made us abandon the array.

Trial and error

At one point, we also experimented with NXC (Not eXactly C), a programming language with a syntax similar to C, instead of the LEGO Mindstorms programming software. This was because our Mindstorms program had grown too large and the robot was processing a lot of sensor readings per loop, resulting in each program loop taking much longer than expected (around 0.25 s instead of the usual 0.033 s).

This small difference had a huge impact on our line tracking, as the robot often veered off the line or turned incorrectly because it could not react in time. An equivalent NXC program compiles to a much smaller file, which the brick can process faster, allowing it to complete each loop in time.
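
A quick way to see the problem is to time each iteration. In NXC, for example, CurrentTick() returns milliseconds, so a sketch like the one below (the sensor read and display call stand in for one real sense-and-steer iteration) shows immediately whether a loop is running at ~33 ms or ~250 ms.

    // Measuring loop time in NXC: a healthy line-tracking loop should
    // come in around 33 ms; ours was closer to 250 ms before the cleanup.
    task main()
    {
        SetSensorLight(S1);
        while (true) {
            unsigned long start = CurrentTick();          // timestamp in ms

            int reading = Sensor(S1);                     // stand-in for one full
                                                          // sense-and-steer iteration

            NumOut(0, LCD_LINE1, CurrentTick() - start);  // loop time on screen
        }
    }

Shrinking what happens inside that loop, with fewer redundant sensor reads and less block overhead, is what eventually brought our Mindstorms version back to a manageable loop time.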

However, we eventually reverted to LEGO Mindstorms, as switching would have meant recoding everything from scratch in an unfamiliar IDE, and it was almost impossible to fine-tune everything with 2 weeks left to the competition.

Fortunately, we managed to clean up and optimise much of the old code, reducing the loop time to a more manageable level.

Final iteration

For our final program, we used a single light sensor to line track. We settled on the traditional method of line tracking (i.e. when on black, steer left; when on white, steer right) as it provides the consistency needed to score points more reliably. We placed 2 other colour sensors right behind it to detect the small green squares that determine where the robot turns at an intersection.
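
Put together, the final approach looks roughly like the NXC-style sketch below. The ports, threshold, turn timings, and the colour-sensor green test are assumptions for illustration; our actual program was written in the Mindstorms software.

    // Sketch of the final program: bang-bang line tracking on one light
    // sensor, with two colour sensors checking for green turn markers.
    // Ports, threshold, and timings are illustrative assumptions.
    #define BLACK_THRESHOLD 40

    task main()
    {
        SetSensorLight(S1);                              // centre light sensor
        SetSensorColorFull(S2);                          // left colour sensor
        SetSensorColorFull(S3);                          // right colour sensor

        while (true) {
            if (Sensor(S2) == INPUT_GREENCOLOR) {        // green square on the left
                OnRev(OUT_A, 50); OnFwd(OUT_B, 50);      // pivot left through the turn
                Wait(500);
            } else if (Sensor(S3) == INPUT_GREENCOLOR) { // green square on the right
                OnFwd(OUT_A, 50); OnRev(OUT_B, 50);      // pivot right
                Wait(500);
            } else if (Sensor(S1) < BLACK_THRESHOLD) {   // on black: veer left
                OnFwd(OUT_A, 20); OnFwd(OUT_B, 60);
            } else {                                     // on white: veer right
                OnFwd(OUT_A, 60); OnFwd(OUT_B, 20);
            }
        }
    }

The bang-bang method gives up the smoothness of PID, but it has almost nothing to tune, which is exactly the consistency we were after.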

What actually happened...

Despite our best efforts, our robot did not perform to the best of its ability. We were unable to fully fine-tune and debug the line tracking program, which led to relatively inconsistent detection of the green squares.

This led to multiple wrong turns and cost us valuable restarts and points. Another thing we did not account for was whether the colour sensor could reliably detect the reflective tape at the entrance of the last room, which triggers the evacuation room program; this wasted even more time.

During the first run, the evacuation room program was not ready and we were forced to cut our run short without even attempting to rescue the victims. When we finally got to test the evacuation room program, we realised that our touch sensor could not detect the zone properly and thus, we could not dump the balls. At the end of day 1, we were sitting in the middle of the standings with only 1 more run on the last day.

Nonetheless, we worked through the night and modified our robot slightly to perform better in the evacuation of the victims. To identify the elevated evacuation zone, we mounted the touch sensor at a 45-degree angle so that it is only triggered when pressed against the triangular evacuation zone and not the edges of the room.

Dummy wheels were also fitted to the sides of the lifting mechanism and the robot so that it can glide along the evacuation zone smoothly, preventing the robot from stalling and having to restart.

Thanks to these modifications, we managed to pick up most of the balls, finishing the 3rd run with 279 points and clinching first place among the local teams.

Our 2nd last design iteration

Concluding remarks

While things did not turn out perfectly, we did the best we could within a 2-month window. The adjustments we made to our robot design right before our final run probably won us the competition, as our line tracking program remained imperfect and relatively unreliable compared to our evacuation strategy. Given that half of our team had little to no prior experience, our performance was a huge success.

Look out for Part 2 where we will document our international trip to Canada to represent Singapore at RoboCup Montreal 2018!
