First Lego League Lessons¶
Software (Python)¶
Python is much easier to write and organize than block coding. It also has more powerful functions, such as moving motors to relative positions.
In the Spike app, the Python editor is a very stripped-down version of VS Code. There is no documentation, but some VS Code keyboard shortcuts work. Code navigation is poor. It is best to use find.
Useful: the app provides hover hints for Spike functions. Not sure that works in VS Code.
Organizing code into files is possible, but very tedious. See prime lessons.
The code for driving, turning, line finding, and arm moving is general. It carries over from year to year. Reusing that code is key. If one starts from scratch each year, there is not enough time to do a lot of missions.
Editing in VS Code¶
It is possible to write the code in VS Code (but not Windsurf) with the Lego Spike extension. Somewhat tedious: One has to manually upload changes to the hub before each run.
Workflow:
- Edit as usual.
- Upload to a slot on the hub (using the command palette).
- Run on the hub using either the command palette or the "run" button in the extension menu bar.
- There is a way of placing a comment into the file to auto-run on upload.
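A sketch of such a header comment. The exact syntax below follows the pattern one popular extension uses (slot number and `autostart` flag are assumptions); check your extension's README for the exact format:

```python
# LEGO type:standard slot:3 autostart
# ^ Must be the first line of the file. "slot:3" picks the hub slot;
#   "autostart" runs the program right after upload. Syntax varies by
#   extension version - verify against its documentation.

import runloop
# ... program continues as usual
```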
Unresolved¶
- Can one run different missions using the same code?
- Since there is a lot of shared code, one has to hand-edit the code before uploading to each slot for each mission. Is that avoidable?
- One option would be to detect the slot while running the Python program. But that does not seem possible. One team used a color sensor to detect which attachment was installed. That's an option. But using a sensor slot is too expensive.
- Error messages are usually not displayed in the terminal; one typically just sees a serialization error. That basically means debugging has to be done in the Spike app.
- How to keep arm in fixed position?
- There is a built-in function. But it would have to be run in parallel with other code.
- How to structure that? Ask Claude.
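A minimal sketch of one way to structure it, assuming the SPIKE 3 Python API; the arm port and target angles are made up. `runloop.run` accepts several coroutines and runs them concurrently, so a holder loop can run alongside the mission:

```python
import motor, motor_pair, runloop
from hub import port

ARM = port.D  # hypothetical arm motor port

async def hold_arm(target: int):
    # Re-command the arm toward its target whenever it drifts off.
    while True:
        if abs(motor.relative_position(ARM) - target) > 5:
            await motor.run_to_relative_position(ARM, target, 300)
        await runloop.sleep_ms(100)

async def mission():
    motor_pair.pair(motor_pair.PAIR_1, port.A, port.B)
    await motor_pair.move_for_degrees(motor_pair.PAIR_1, 720, 0, velocity=360)

# hold_arm never returns, so stop the program from the hub when done.
runloop.run(mission(), hold_arm(90))
```

Note: `stop=motor.HOLD` on the last arm command may already keep the motor actively holding its position, without any parallel code.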
Ideas¶
Keep track of robot history¶
This is a key idea for understanding and debugging code.
Is it possible to write the history of all sensors and motor positions to a file? Let's say every 1/10 of a second?
How would that file be retrieved from the brick? Do we need pybricks for that?
See Claude "Logging robot state data".
The only solution is to print to the console. If the log is printed at the end of the run: how can the run be interrupted without losing the log? Wrap it in try/finally? A sketch follows.
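A minimal sketch, assuming the SPIKE 3 Python API and wheel motors on ports A and B (adjust for the actual robot). The log lives in RAM and is printed in a `finally` block, so it survives a Python exception; a hard stop from the hub button will still lose it:

```python
import motor, runloop, time
from hub import port, motion_sensor

LOG = []  # one (time, yaw, left, right) row per sample

async def log_state(interval_ms: int = 100):
    # Sample yaw (decidegrees) and wheel positions roughly 10x per second.
    while True:
        LOG.append((time.ticks_ms(),
                    motion_sensor.tilt_angles()[0],
                    motor.relative_position(port.A),
                    motor.relative_position(port.B)))
        await runloop.sleep_ms(interval_ms)

async def main():
    try:
        pass  # mission code goes here
    finally:
        # Printing is the only retrieval path we found; the console shows it.
        for row in LOG:
            print(row)

# log_state never returns; stop the program from the hub after the dump.
runloop.run(main(), log_state())
```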
General arc driving function¶
- Directly control motor power.
- Constantly read wheel positions. For an arc, they should change in fixed proportion.
- Also constantly read gyro. Keep track of gyro history. While gyro changes smoothly, it is probably reliable. Then use it to adjust motor power and compensate for wheel slippage (and to stop when rotation is complete). But when gyro is not moving smoothly, ignore it and rely solely on wheel rotation.
- General function that stops the movement. It can handle:
- Robot is lifted or bumped - hitting an obstacle.
- Rotation is complete - based on gyro or wheel position.
- Distance sensor
- Light sensor
- That function could, for example, drive in an arc until the robot has turned 45 degrees. But stop if x cm from mission model or if robot hits something.
- Possible application
- Drive towards a mission model and stop in front of it.
- Drive an arc with stopping conditions: rotation complete, distance sensor, or bump.
- If bump: handling depends on situation.
- If distance sensor: turn to target angle and proceed
- If rotation: continue driving straight until distance sensor.
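A minimal sketch of the stop-condition function described above, assuming the SPIKE 3 Python API, wheel motors on ports A/B, and a distance sensor on port C (all assumptions). Stop conditions are passed in as callables:

```python
import distance_sensor, motor_pair, runloop
from hub import port, motion_sensor

motor_pair.pair(motor_pair.PAIR_1, port.A, port.B)

def yaw_deg() -> float:
    return motion_sensor.tilt_angles()[0] / 10  # decidegrees -> degrees

async def arc_until(left_v: int, right_v: int, stop_conditions):
    # Drive with a fixed wheel-speed ratio (an arc) until any condition fires.
    motor_pair.move_tank(motor_pair.PAIR_1, left_v, right_v)
    while not any(cond() for cond in stop_conditions):
        await runloop.sleep_ms(10)
    motor_pair.stop(motor_pair.PAIR_1)

async def main():
    # Example: arc until the robot has turned 45 degrees or something is
    # closer than 5 cm. Yaw wraparound at +/-180 degrees is ignored here.
    start = yaw_deg()
    turned = lambda: abs(yaw_deg() - start) >= 45
    close = lambda: 0 <= distance_sensor.distance(port.C) <= 50  # mm, -1 = none
    await arc_until(400, 200, [turned, close])

runloop.run(main())
```

Bump detection (robot lifted or hitting an obstacle) could be added as another callable that watches for a sudden change in the gyro history.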
Documentation and Tutorials¶
There is no complete documentation of the API.
- Knowledge base as website
- Some python code for driving (not very well written, IMHO) - useful for inspiration.
Precise Driving¶
This is key! Get that right first.
Principles¶
Keep track of the robot position¶
Think of the table as an (x,y) plane. The lower left corner is the point (x=0, y=0). Moving right increases x. Moving up increases y.
Set up the robot in the start position. Record its (x,y) position in variables (xPos,yPos). Each time the robot moves, update the (xPos,yPos) variables. That way, you always know where the robot is on the table. Driving to any (x0,y0) on the table, such as the next mission, then simply means driving along a vector (x0-xPos, y0-yPos). Write a function for that.
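A minimal sketch of the bookkeeping in pure Python. The start coordinates and the clockwise-yaw convention are assumptions to adjust:

```python
import math

x_pos, y_pos = 20.0, 30.0  # set to the robot's actual start position (cm)

def vector_to(x0: float, y0: float):
    # Heading (0 = "north" along +y, growing clockwise) and distance to (x0, y0).
    # If your yaw grows counter-clockwise, negate the heading.
    dx, dy = x0 - x_pos, y0 - y_pos
    heading = math.degrees(math.atan2(dx, dy)) % 360
    return heading, math.sqrt(dx * dx + dy * dy)

def record_move(x0: float, y0: float):
    # Call after every completed move so (x_pos, y_pos) stays current.
    global x_pos, y_pos
    x_pos, y_pos = x0, y0
```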
Reset to known position¶
Over time, the robot position gets less and less accurate. Use the black-and-white lines on the table and the walls to recalibrate (xPos,yPos) whenever possible.
Don't touch the yaw angle¶
Always set the gyro yaw angle so that 0 means the robot points "north" along the y-axis.
A lot of code constantly resets the yaw angle to zero. That makes it very hard to keep track of which way the robot is pointing at any point in time. Set the yaw angle to 0 at the start and then only reset it when the robot is in a known position.
Caveats¶
- When the robot runs into a wall at speed, the next driving action is "wrong."
- Reproducible example: Back the bot into a wall at speed. Then drive forward by a small distance. The robot will drive about 20 cm.
- Always run into walls at low speed.
- Driving short distances with acceleration does not work. The bot drives too far.
Driving straight¶
Use the gyroscope. The internet has code that uses the gyro to drive in a straight line. The robot wobbles without the gyro. See prime lessons.
Tricks:
- A good strategy: Use the built-in turn function as a first pass at high speed. Then use the gyro to make the turn more precise at low speed.
How to drive a fixed distance:
- Without gyro: Set the mapping from motor rotations to distance using the built-in block. Then just use the "move a distance" block.
- With gyro: It was not clear how to run a loop up to a certain number of motor rotations; without that, one needs to calibrate the robot to translate time and speed into distance. A sketch of a rotation-based loop follows this list.
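A minimal sketch of such a loop, assuming the SPIKE 3 Python API and wheel motors on ports A/B. It ends the drive on wheel rotations rather than time, and steers with the gyro throughout. The 17.5 cm per rotation figure for medium wheels is from the driving-functions notes below:

```python
import motor, motor_pair, runloop
from hub import port, motion_sensor

motor_pair.pair(motor_pair.PAIR_1, port.A, port.B)
CM_PER_ROTATION = 17.5  # medium wheels; recalibrate for your robot

async def gyro_straight(distance_cm: float, velocity: int = 360):
    target = distance_cm / CM_PER_ROTATION * 360  # wheel degrees to travel
    motor.reset_relative_position(port.A, 0)
    heading = motion_sensor.tilt_angles()[0] / 10  # hold the current heading
    while abs(motor.relative_position(port.A)) < target:
        error = motion_sensor.tilt_angles()[0] / 10 - heading
        # Simple P-controller; flip the gain's sign if it corrects the wrong way.
        steering = max(-100, min(100, int(-4 * error)))
        motor_pair.move(motor_pair.PAIR_1, steering, velocity=velocity)
        await runloop.sleep_ms(10)
    motor_pair.stop(motor_pair.PAIR_1)
```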
To do¶
- Make a test track, so that driving code can be tested when changed.
- Can one accelerate / decelerate smoothly while driving a precise distance? Less wheel slippage.
Turning¶
The gyro yaw angle ranges from -180 to +180. That makes the math awkward. For example, turning from +170 degrees to -170 degrees is a 20 degree right turn. It is best to always work with 0 to 360 degree angles instead. It simplifies the math.
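A sketch of the two conversions (the yaw scaling assumes the hub reports decidegrees):

```python
from hub import motion_sensor

def yaw_360() -> float:
    # Map the hub's -180..180 yaw into 0..360.
    return (motion_sensor.tilt_angles()[0] / 10) % 360

def shortest_turn(current: float, target: float) -> float:
    # Signed turn angle in -180..180; the sign says which way to turn.
    return (target - current + 180) % 360 - 180

# Example: +170 degrees to -170 degrees (= 190 in 0..360) is a 20 degree turn:
# shortest_turn(170, 190) -> 20
```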
Note that the gyro has "drift." It may take some time for it to settle and produce a precise reading. Wrap the gyro reading in a function that repeats the reading until it settles; a sketch follows.
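A minimal sketch of such a settle-wait, assuming the SPIKE 3 API (tolerance in decidegrees is an assumption to tune):

```python
import runloop
from hub import motion_sensor

async def settled_yaw(tolerance: int = 2, interval_ms: int = 100):
    # Read yaw until two consecutive readings agree within `tolerance`
    # decidegrees, then return the settled value.
    last = motion_sensor.tilt_angles()[0]
    while True:
        await runloop.sleep_ms(interval_ms)
        current = motion_sensor.tilt_angles()[0]
        if abs(current - last) <= tolerance:
            return current
        last = current
```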
Note that the robot tends to overshoot when it turns because it takes a bit of time to stop.
To do¶
Can one use gyro during turns?
- Look for existing code
- Test it
- How to detect whether gyro is reliable during the turn? Keep track of gyro history.
Line detection and following¶
Sample code from Lego Ed. Prime lessons
Can either look for a color (black or white) or for a change in the reflectivity of the surface. Reflectivity seems more reliable. Since most of the table is light-colored, it is usually best to look for black lines.
Drive into the vicinity of the line before starting the line finder. That way, the chance that the light sensor gets confused by other colors along the way is reduced.
With two light sensors: Start with, say, the right sensor seeing black and the left seeing white. When the right sensor switches to white, the robot has drifted left and needs to steer right.
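A minimal sketch of that two-sensor logic, assuming the SPIKE 3 API with color sensors on ports E/F and wheel motors on A/B (all assumptions):

```python
import color_sensor, motor_pair, runloop
from hub import port

motor_pair.pair(motor_pair.PAIR_1, port.A, port.B)
LEFT, RIGHT = port.E, port.F  # hypothetical sensor ports

async def follow_line(duration_ms: int, velocity: int = 300):
    # Reflection is 0..100; the black/white threshold needs tuning on the table.
    THRESHOLD = 50
    elapsed = 0  # timing is approximate; loop body time is ignored
    while elapsed < duration_ms:
        left_white = color_sensor.reflection(LEFT) > THRESHOLD
        right_white = color_sensor.reflection(RIGHT) > THRESHOLD
        if right_white and not left_white:
            steering = 30    # drifted left -> steer right
        elif left_white and not right_white:
            steering = -30   # drifted right -> steer left
        else:
            steering = 0
        motor_pair.move(motor_pair.PAIR_1, steering, velocity=velocity)
        await runloop.sleep_ms(10)
        elapsed += 10
    motor_pair.stop(motor_pair.PAIR_1)
```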
Tasks:
- Follow to the end of a black-and-white line - how?
Driving arcs (not implemented)¶
Instead of driving + stopping + turning + driving, why not drive in arcs?
Function: drive linear distance X in direction Y (degrees). End up with yaw of Z degrees.
Need: Convert (X,Y,Z) into circle radius and distance to drive.
Function: drive along a circle with radius R for distance D
Approach:
- The inner wheels drive a circle with radius R-W; the outer wheels drive radius R+W, where 2W is the distance between the wheels.
- The relative length of the two arcs determines the relative speed of the two wheels (rotations per second).
Use the gyro to make arcs precise. This needs yaw as a function of distance driven.
All of the math can be worked out as a function of the radius and the fraction of the circle driven (the terminal angle).
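A sketch of the core math in pure Python. W is half the wheel track; the value below is a placeholder to measure on the robot:

```python
import math

W = 5.6  # half the distance between the wheels, in cm (measure this!)

def arc_parameters(radius_cm: float, angle_deg: float, outer_velocity: int):
    # Inner wheel travels radius - W, outer wheel travels radius + W,
    # so the wheel-velocity ratio equals the ratio of those radii.
    ratio = (radius_cm - W) / (radius_cm + W)
    inner_velocity = int(outer_velocity * ratio)
    # Distance along the center arc for the given terminal angle.
    center_distance = 2 * math.pi * radius_cm * angle_deg / 360
    return inner_velocity, center_distance

# Example: quarter circle of radius 30 cm at outer wheel speed 400 deg/s.
inner_v, dist = arc_parameters(30, 90, 400)
```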
Robot Startup¶
Before turning it on, let the robot sit for a few seconds on a perfectly horizontal surface. Otherwise, the gyro does not initialize correctly.
After booting, set yaw to 0. Then never touch it again, until it can be set based on a known position (against a wall or line).
Strategies for missions¶
Most teams design an attachment for 1 or 2 missions. Then they run the robot for a few seconds and spend a lot of time swapping out attachments.
That strategy can work, IF:
- The team has enough people to develop custom attachments and run tests.
- Each attachment does at least 2 tasks.
- The robot can move 2 attachments at the same time.
- Static attachments can be put on the robot's side at any height. The robot needs to be enclosed in a box.
- Attachments drop into the top of the robot using just gravity. No pushing required.
- The robot is aligned to the same start position for each mission using a box.
- It really helps to let the robot sense the attachment being used. A color sensor on top of the bot can be used for that.
An alternative strategy is to use only 1 or 2 attachments for all missions.
- Most missions require just lifting / pushing down / pushing sideways. A simple horizontal bar with side panels can do most of those missions.
- Then precise driving becomes key. That is much slower than the driving for short missions (because errors must not compound). Alignment against lines is needed. But one saves a lot of time swapping out attachments.
- That strategy is a good one for small teams.
In practice, a mix of both approaches is likely optimal.
Lessons from 2025¶
In 2025, we tried a few runs with several missions each. Success was mixed. One wrong alignment produces cascading failures. What works on the home table may not work on the competition table.
Chaining missions is a great time saver. But then the early missions have to be very robust. The driving code has to work really well. And the robot must not drive onto mission models, which knocks it out of alignment.
The idea is still good. But the missions need to be designed to avoid cascading failures.
Most teams try the opposite strategy: many runs with 1-2 missions each. That also does not work well. It's a lot of stress during competition. One wrong button press crashes an entire run. Changing attachments takes a lot of time.
My view is that a combination approach is optimal. Chain missions when the ones at the start are reliable. Run short missions when one can get lots of points in a short period of time and when a mission requires a very specific attachment.
Alignment¶
Jigs really help. The robot can start far forward and at an angle.
Jigs also make it possible to align everything on a few major lines.
Testing¶
Separate driving from executing mission.
- Mark where robot is supposed to stand with tape on the table
- Get the mission going without driving
- Separately add code for driving to the mark
- Write a test function for each mission model.
Robot Design¶
The Coop bot is not a bad starting point. It is compact, can handle two attachments and light sensors, and is well balanced.
The Coop bot's light sensors are partly obstructed and therefore don't work properly. The area around the light sensors needs to be redesigned. It also needs wire management.
The robot should have a completely flat back side, so it can be aligned easily against a wall.
Make sure weight is distributed so the robot does not tip when starting or stopping. That's a problem with the Advanced Driving Base.
Attachments should drop into the top of the robot using just gravity. No pushing required. Unless only 1 attachment is used.
The robot "walls" should be totally straight and very close to the ground. That prevents the robot from running over anything (loose items and mission models).
Use large motors to move arms. Small motors do not have enough power to move very long attachments precisely (e.g., heavy lifting).
A design idea¶
Wheels in the front, inside a box.
Arm motors are mounted facing left and right. Each attachment can be stuck directly onto the axles - no slack.
In addition, use the back of the motor to drive a horizontal gear. Attachments that drop in from the top can use those.
- It may be possible to operate four attachments at the same time.
It is not clear that the light sensors are all that helpful.
- Without calibration, they can only see black lines. And alignment is slow.
- Sometimes, a dark area on the table is misinterpreted as a line.
Distance sensor:
- Worth trying.
- If it is precise enough, it could be very helpful for alignment.
- Perhaps one on the front and one on the side?
Tools¶
Bricklink Studio is probably the standard design tool for Spike Prime.
- FLL Tutorials tutorial
- Droids robotics video tutorial
Questions¶
Attachments¶
Mounting efficiency is key. Build a complete box around the bot, so that we have attachment points at the top of the bot. Simply drop attachments on and let gravity hold them.
An easy attachment: arms. Point the motors sideways at the front of the bot and simply stick arms directly into the rotating pieces (probably need to attach one piece to hold an axle for that?).
Avoid many gears. They introduce slack.
Look for robust ways of moving a mission model.
- 2025 raise roof: reach through from the other side and pull on horizontal cross bar; or pull on edge of roof. More robust than pushing for some reason.
- 2025 statue: pushing on the tail turned out not to be robust (again, the reason is not clear).
- Keep in mind that attachments can be flexible (e.g., long axles).
- There are even brush pieces that can slide over items and pull them.
With long attachments, avoid turning the robot.
- The robot does not have enough power to turn precisely with a heavy attachment in front.
- Turns need to be very precise with long attachments. E.g., 2025 "heavy lifting."
Forklift attachment: It can press levers, lift stuff, push stuff, and scoop stuff (capture and drag using a wide attachment). Highly versatile.
Each attachment needs a limiter.
- When arm is moved up all the way, it needs to be in a known position. A bar that limits how far the arm can move ensures that.
Moving wall attachment¶
It can move horizontally and vertically - highly versatile. It could likely complete nearly all 2025 missions. A more compact version exists. There are no build instructions, but one can follow along based on the video.
Main downside: The attachment cannot be exchanged. It is an integral part of the robot.
Moving attachments¶
During a run, arm positions are fully repeatable.
Motor positions are absolute. They do not get set to 0 when the robot starts.
During robot startup (before attachments go on), move all attachment motors to 0.
Move the attachments by moving the motor to fixed positions. This makes the movements repeatable. The attachments can be moved to a known position, regardless of their current position after a task.
On the current robot (coop):
- start with arm down and motor in position 0 (motor D)
- counter-clockwise is up
- 270 is about as far up as it goes
- angles close to 0 tend to roll over to 359. Round those angles to 0.
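A sketch of repeatable arm moves based on the list above, assuming the SPIKE 3 API and motor D:

```python
import motor
from hub import port

ARM = port.D  # motor D per the list above

def arm_angle() -> int:
    # Angles just below 360 are really the rolled-over 0 position.
    angle = motor.relative_position(ARM) % 360
    return 0 if angle > 350 else angle

async def arm_to(position: int, velocity: int = 300):
    # Absolute targets make arm moves repeatable, regardless of where
    # the last task left the arm.
    await motor.run_to_relative_position(ARM, position, velocity)
```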
Initialize attachments¶
How to ensure that the robot knows the attachment position at the start of each run?
- Run the arm against a limiter. Takes time; one has to wait for the motor to stall (a sketch follows after this list).
- Start attachment in "natural" position (usually all the way down; gravity). Does not work for long attachments that stick out of the start box. They would have to be started vertically.
- Always run motor to position 0 at end of each run. Seems error prone. What if the run has to be interrupted? What if attachment gets stuck?
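A sketch of the limiter approach, assuming the SPIKE 3 API. Stall detection here just watches whether the motor position stops changing; the port, speed, and thresholds are assumptions to tune:

```python
import motor, runloop
from hub import port

ARM = port.D

async def init_arm(velocity: int = -150):
    # Drive the arm slowly against its limiter until it stalls,
    # then define that position as 0.
    motor.run(ARM, velocity)
    last = motor.relative_position(ARM)
    while True:
        await runloop.sleep_ms(200)
        current = motor.relative_position(ARM)
        if abs(current - last) < 3:  # barely moving -> stalled at the limiter
            break
        last = current
    motor.stop(ARM, stop=motor.BRAKE)
    motor.reset_relative_position(ARM, 0)
```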
Holding an arm in a fixed position¶
Motor commands accept a stop option. That is useful mainly for holding the arm in a fixed position. Example:
- We are pushing against some part of the mission model while driving.
```python
await run_for_degrees(ARM_MOTOR, 90, 200, stop=BRAKE)
```
- Now the motor is held until the next motor command is issued.
- One can also issue `motor.stop(ARM_MOTOR, stop=BRAKE)` after an arm movement. But that may be less reliable because it holds the arm wherever it happens to be.
Questions¶
How to initialize an attachment?
- Let's say we have a simple up-down arm. It gets mounted after a run, so the motor position is random. The arm position is also somewhat random.
- How to ensure that the robot "knows" the arm position?
Challenge Project¶
Items the judges want to know about ...
The process of coming up with the idea:
- How did you come up with the idea? What other ideas did you consider? Why did you dismiss them?
- What research did you do? Did you talk to experts?
- What feedback did you get on your ideas?
How the work on developing the idea was organized:
- How did you keep track of / communicate progress?
Robot run strategy:
- How did you come up with the list of targets? How did you organize them into runs?
- How was labor divided for the different runs?
- Which sensors are used? Why not others?
- How are attachments designed?
Competitions¶
Things to bring:
- Chargers for robot, laptops.
- Wheel wipes.
- Extension cord.
- Food and water.
- Printed code.
- Spare robot parts.
Preparation:
- Upload current code to slots (in the right order).
Judging¶
Students give 5-minute presentations (one each) for the project and for robot design.
Judges are looking for specific rubric items. Check the scoring guides for the current ones.
Judges are looking for evidence and resources. For the code:
- What resources were used to develop code?
- How did code evolve?
- How is it structured?
- What makes it special / reliable?
- Evidence of testing: judges want to see run logs with pass / fail results and lessons learned. Those don't make much sense. But keep track of changes made each day.
- Same for robot: How was it designed? How did it improve over time?
- What is the mission strategy?
Organizing the Season¶
Students need to come in prepared. If we start from scratch in September, there is not enough time to produce much without excessive help from coaches.
If coaches do the work, it defeats the purpose. General driving code can be written by coaches and carry over from season to season. Students need to design and implement missions.
Run a short "Intro to Python" at the start. Students need to know very little. Perhaps make an introduction video that covers:
- How to connect robot to spike.
- How to get basic driving code into spike.
- How to run something in Python (`runloop`).
- How to implement a very simple mission with a given robot and attachment: driving straight, turning, moving an arm, finding lines.
Block Coding¶
- Functions cannot return values, but one can simulate return values and avoid side effects by always storing output values in `vOut`. The flow is then:
  - Call a function that computes a value.
  - Copy `vOut` into a variable with a name that makes sense.
- Bool inputs
  - They are awkward to create when calling a function. One has to input a comparison, such as `0<1`, to get a `true`. That makes the code hard to read.
  - For user-facing functions, it is generally easier to use string inputs ("left", "up", etc.).
Driving functions¶
These are used in our (outdated) block code.
Main user-facing functions:¶
- `DriveToXy`: Drive from the current position to a given (x,y).
  - This is the main function used for driving.
  - Turns the robot into the direction of point (x,y), computes the distance, and drives straight using the gyro.
  - Currently does not handle driving backwards. To do that, turn the robot in the right direction using `TurnToAngle` and then use `GyroDistance` to drive in a straight line.
- `GyroDistance`: Drive in the direction of the current yaw angle for a given distance.
  - Requires that the conversion from wheel rotations to distance is set correctly (see the `Calibrate rotations` function). For medium wheels, the conversion is 17.5 cm per rotation.
Helper functions:
- `GyroDistance`: Drives straight in the current direction using the gyro.
  - The function can handle driving backwards (set `speed < 0`).
Functions for turning:¶
- `TurnToAngle`: Turns until the yaw (converted into 360 degrees) equals a given angle.
  - Decides automatically whether turning left or right is shorter.
- `TurnRight`: Turns right to a given angle, even if a left turn is shorter.
- `TurnLeft`: Same, turning left.
Helper functions:
- `DxDyToAngle`: Converts a vector described by `(dx, dy)` into an angle (0 to 360 degrees).
  - The math uses `atan` for the conversion from the slope `dy/dx` to an angle. `atan2` would simplify the math, but is not available in block code.
- `AngleToDxDy`: Does the reverse calculation.
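In the Python port, `atan2` is available, so the conversion collapses to a couple of lines. A sketch (the snake_case names are hypothetical counterparts of the block-code helpers):

```python
import math

def dx_dy_to_angle(dx: float, dy: float) -> float:
    # atan2 handles all four quadrants and dx == 0, which the atan
    # version has to special-case. 0 degrees = "north" along +y.
    return math.degrees(math.atan2(dx, dy)) % 360

def angle_to_dx_dy(angle: float, distance: float):
    # Reverse calculation: heading + distance -> (dx, dy).
    rad = math.radians(angle)
    return distance * math.sin(rad), distance * math.cos(rad)
```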
Functions for line detection¶
- `LineFinder`: Finds either a white or a black line, using either the left or the right sensor.
- `LineAlign`: Once a line has been found, rotates the robot until the other sensor also sees the line.
FLL Resources¶
- Mr Schaefer's google sheet
- Veracross course page
- MS FLL site
- FLL Unearthed site
- Tech warriors
- PrimeLessons
- Build books
Block code for driving the robot (outdated, messy)
Python code for driving the robot - driving straight, turning, arm movements, line finding. Used for 2025 state championships.
Field mapping tool for tracing out drive paths
Robot designs:
- Discovery robots
- Very simple but effective base design (but older robot)
- Compact and well designed. With a lift attachment, but the mounting accommodates other attachments
- Search for FLL masterclass on youtube
- FLLCasts has tons of robots and attachments. Subscription required, but affordable.
- Rollie is a good starting point
Archaeology Ideas¶
Sifting¶
- Claude research report
- ChatGPT deep research report
Dr. Shebalyn points out two problems with sifting:
- Requires water to sift fine material.
- Requires power. Therefore often done manually.
Pros of wet sifting:
- Separates items that are stuck together. Fewer artifacts are missed.
- Cleans artifacts. Easier to see.
- Makes lumpy soil siftable.
Cons of wet sifting:
- Water is often not available. And one needs lots of it. Even recycling water cuts water use only by about half.
- Takes more time. Samples usually need to be dry-sifted first. Then they get lightly soaked in buckets. Only after they have been sitting for some time can they be sifted again. Finally, samples need to be dried.
- Some artifacts cannot get wet.
Possible benefits of air sifting (our proposal):
- Can separate clumped items to some extent, because the material is vigorously moved around.
- Not labor intensive. Can automatically sift by running sieves through the agitated material.
Limitations:
- Fragile artifacts could be damaged. To some extent also true for conventional dry sifting.
- Does not work on all soils; e.g., clay.
Other Ideas¶
Samples are fragile. Freeing them from the surrounding material takes a lot of time (manual labor). Could that task be partially automated?
Reassembling fragments. Could that be done with 3D scans and software? Probably not a new idea. Maybe a newer idea: make it into a computer game.
Related: Some residues are just tiny fragments or even stains. How to tell those apart from the surrounding dirt?
How to investigate a site without disturbing it?
Finding sites: Researchers have converted scanning of aerial images for buried sites into computer games. So far, this has only been done for photographic images, which don't work in rainforests. Applying the same idea to LiDAR would work in rainforests. But the innovation is pretty small.
- Or crowdsourcing.
- Someone mentioned the idea of a robot with ground-penetrating radar. That exists!
Preparing fossils. Very time consuming. Why not train a model to recognize the parts that are certainly not fossil and let a robot remove those? Leave the tricky details to humans (for now).
Related idea: Robotic surgery enhances human precision by running movements (which have to be tiny) through actuators that make the movements smaller (and could build in safeguards against cutting the wrong parts). Why not apply that to archaeology?
Mechanical sifter - why does it not exist? Power access? Make a chain of sifters. Exists for water screening.
- the idea of dry liquid
Recycle the water used in water sifters. Solar energy.
Taking photos with uniform light is hard.