Entries : Category [ Robotics ]
Articles about robots and robotics.

18 July
2007

The next pool trial of the Aqua robot family is scheduled for tomorrow. We will be using the huge McGill University pool to run several kinds of experiments.

We will be doing tests with both Aqua 1.0 as well as the newer model called Ramius. Ramius has a problem with the motor controllers for one of the legs, probably a defective SA60, but we should be OK despite that, especially since it will be used mainly for testing vision systems.

Olivia will be doing a whole slew of tests with her model-based controllers. These will use Aqua 1.0.

Mike Jenkin and some of the people from York University will be here also. In addition to doing planning and helping with the experiments, they will be getting ready to take one of the Aqua-family robots back to York permanently. This model is known as Kroy, but it's not quite finished yet.

Anqui and Junaed will be testing vision-based gesture control, tracking and servo control, and some neat not-yet-disclosed ideas.


By Gregory Dudek
20 July
2007

CBC, the Canadian Broadcasting Corporation, has a feature on consumer robots in their on-line edition today. It includes articles discussing the destiny and current state of consumer robotics, as well as the need for investment in robotics research and the robotics industry to remain competitive.

It also has a nice bit on robot lore (i.e. notable robots from fiction) as well as a popular robotics quiz. Their photo gallery on robots from "fiction to fact" is a bit of a disappointment though: it has a lot of standard fictional movie robots, but misses several of the more exotic yet interesting ones (for example the robots in the movie "Silent Running", or Gort). It also has very limited coverage of real robots, missing all the interesting research robots and all the interesting industrial and military robots, which is quite a set of omissions.


25 July
2007

The pool trials with simultaneous use of two underwater robots went pretty smoothly, despite major last-minute roadblocks.

The night before the pool trial we discovered that a custom-made video-to-fiber-optic transducer board was not working. Chris tracked the fault down to a bad integrated circuit. He managed to desolder the surface-mount part and order a replacement from Digikey, who expressed it to us the next day. It was soldered back in place in time for the pool trial.

Then, just a few hours before the pool trial, we found a problem with one of the IMUs. This was tracked to a bad solder joint, which may have gone undetected since a leak many months ago. That got fixed too, and we made it to the pool trial just a little bit late.



The experiments went well and we did everything on the "high priority" list. We always have a list of extra experiments to do if there is time, but not many of these got performed. We did manage to validate several different model-based controllers for the robot. They seem to have done really well. We also collected various kinds of vision system data for training and testing, and for validating human-interaction experiments.

01 September
2007

The slate of invited keynote speakers for RSS 2008 (Robotics Science and Systems) is starting to get populated. One of the speakers will be Kevin O'Regan. He does very influential work on human perception, specifically including visual attention, color perception, and change blindness. Change blindness deals with our use of attention and the fact that if our attention is misdirected in the right way we can fail to observe truly significant changes in the visual world.

There are a slew of other very exciting and diverse speakers also planned for RSS. One of the strategies for RSS keynote speakers is to invite well-established figures who are "tangential" to the conventional robotics community, and who can introduce new ideas, issues and research.


18 September
2007

We submitted a paper on automatically extracting interesting pictures from robot navigation video to the International Conference on Robotics and Automation (ICRA). Normally I don't blog about real work, or write about unpublished results, but this time the conditions are right. This work is part of John-Paul Lobos' thesis research and it extends what was dubbed the "Vacation Snapshot Problem" in the work by Eric Bourque and me a few years ago (pdf paper). The idea of the Vacation Snapshot Problem is to select a few descriptive "Vacation Snapshots" from the potentially large set of pictures you (or a robot) might take while navigating along a route (i.e. a video that is a so-called navigation record). In this work we looked at what defines the most appropriate "interesting" photographs that summarize what a robot saw.

An aspect of the approach involves building clusters of images that depict roughly the same content as one another (even though the pixel-to-pixel content of these images may vary quite a bit). The figure here illustrates the kind of results we get fully automatically, but there are a lot of details that determine what kind of pictures are selected.
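To make the clustering idea concrete, here is a minimal sketch of one way such grouping can work; it is not the actual method from the paper, and the histogram features, the greedy grouping rule, and the distance threshold are all illustrative assumptions:

```python
def histogram(pixels, bins=8):
    """Coarse intensity histogram (values 0-255), normalized to sum to 1."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def l1_distance(h1, h2):
    """Sum of absolute differences between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def select_snapshots(frames, threshold=0.5):
    """Greedy clustering: a frame joins an existing cluster if it is within
    `threshold` of that cluster's representative, otherwise it starts a new
    cluster. Returns the index of one representative frame per cluster."""
    reps = []       # indices of representative frames
    rep_hists = []  # their histograms
    for i, frame in enumerate(frames):
        h = histogram(frame)
        if all(l1_distance(h, rh) > threshold for rh in rep_hists):
            reps.append(i)
            rep_hists.append(h)
    return reps

# Tiny synthetic "video": a dark scene, then a bright scene, then dark again.
dark = [20] * 100
bright = [230] * 100
video = [dark, dark, bright, bright, dark]
print(select_snapshots(video))  # two distinct scenes -> [0, 2]
```

Note that the final dark frame rejoins the first cluster rather than starting a third one, which is exactly the property that lets near-duplicate views along a route collapse into a few snapshots.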


03 November
2007

Pictures and narrative from the 2007 Urban Challenge for robot vehicles.

2007/11/03: I'm at the DARPA Grand Challenge and it just ended. Six finalist teams finished, including MIT, Stanford and CMU. MIT's vehicle and Cornell's Skynet vehicle came in quite a bit after the others, and my guess is that Stanford and/or CMU will be declared the winners in the top two spots (getting $2M and $1M respectively).


Stanford car.
DARPA is the US Defense Advanced Research Projects Agency, the same organization that really did sponsor the birth of the Internet and the supporting IP-based protocols that make it all run (the network was originally called the ARPANET). The idea of "Grand Challenge" events has become very popular recently as a technique for spurring applied computer science and engineering research. In this case, a prestigious set of prizes of $2 million, $1 million and $500K was fronted by DARPA for vehicles that could drive safely in an (artificial) urban environment. This is a follow-up to an earlier challenge for outdoor driving.

Note that the actual money itself is probably not the spur for any of the serious teams, since the investment in building a winning entry probably substantially exceeds the expected financial return. Rather, it's a chance to do some very visible ground-breaking research, and to stimulate lots of energetic research activity. Getting in the good books with DARPA specifically probably doesn't hurt either. The benefit to the reputations of the teams (both the people and the institutions) that do well is enormous. Based on what one of the teams told me, the equipment on their vehicle was worth about $500,000. This is actually less than a quote I heard previously. Of course, much or all of that was presumably donated, and it does not include the huge amount of labor or custom engineering.

All the vehicles depended heavily on a large suite of LIDAR sensors, and there was apparently fairly limited use of computer vision in comparison. LIDAR is distance measurement based on the time-of-flight of a laser beam.
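The time-of-flight principle is just arithmetic: the beam travels to the target and back at the speed of light, so the range is half the round trip. A quick sketch (the 100 ns example is my own illustration, not a figure from the event):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_tof(round_trip_seconds):
    """Range to a target from a laser pulse's round-trip time.
    The beam travels out and back, so divide the round trip by two."""
    return C * round_trip_seconds / 2.0

# A 100 ns round trip corresponds to a target about 15 m away.
print(round(range_from_tof(100e-9), 2))  # 14.99
```

The tiny intervals involved (nanoseconds per meter) are why LIDAR units need fast, precise timing electronics.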

Most of the teams sported the impressive rotating Velodyne sensors. The sensor spins rapidly and uses 64 separate laser beams that together return about one million measurements per second (it can also run at a higher data rate and rotate at up to 900 RPM). This means at least a fair amount of computation to crunch that data. It's interesting that a few years ago such sensors were much talked about, and one was produced in Canada by Hymark. This event should help make the Velodyne much more popular (I'd love one), but they're quite expensive (i.e. more costly than the basic cars they are mounted on). Stanford has a rotating Velodyne on top, but also uses about 5 other brands of LIDAR sensor as well as a microwave radar. They use two different kinds of SICK-brand lasers in various configurations, which were a mainstay of their prior outdoor-challenge vehicle.
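To get a feel for the numbers above, here is a back-of-the-envelope sketch using the figures from this post (one million points per second; the specific spin rates are my own illustrative choices): the measurement budget is fixed, so spinning faster spreads the same points over more, sparser scans.

```python
def points_per_revolution(points_per_second, rpm):
    """How many range measurements land in one full rotation
    at the given spin rate."""
    revolutions_per_second = rpm / 60.0
    return points_per_second / revolutions_per_second

# ~1M points/s at 600 RPM (10 scans/s) gives 100k points per scan...
print(int(points_per_revolution(1_000_000, 600)))  # 100000
# ...while spinning at 900 RPM thins each scan out.
print(int(points_per_revolution(1_000_000, 900)))  # 66666
```

Either way, that's on the order of 10^5 range points to process many times per second, before any of the other LIDARs or the radar are even considered.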

2007/11/05: The team from Carnegie Mellon University was declared the winning team, with Stanford getting second place. As far as I know, the precise reason for this is not available. That's because the precise scoring function was not announced in advance.

The general rules for the event were defined in advance, such as observing the rules of the road (stopping at stop signs, for example), avoiding collisions, and staying under the speed limit. The final detailed scoring, however, was not determined in advance, and it looks like Tony Tether from DARPA and/or his associates will make the determination taking subjective factors into account. This makes some sense, since there are many subtle factors that characterize a good driver, especially a robot driver. For example, the MIT vehicle tended to hesitate a lot, which led to legal but inelegant behavior. The evaluation needed to be flexible to take this kind of thing into account.

Out-and-out collisions, such as the one experienced by Georgia Tech's vehicle, were grounds for elimination, and this kind of thing is why the slate of eleven finalist robots was reduced to only six that finished the competition. The eleven finalists were all that remained from a much larger field of 53 potential participants, the rest having been eliminated in pre-trial evaluations, site visits, and qualification tests.

The race took place at an abandoned airfield in Victorville, CA. DARPA constructed a small set of roads resembling a chunk of an artificial town. While it was quite simple, it still cost an estimated $21M to build. This is probably because the site included extensive video surveillance of the whole area so the vehicles' performance could be evaluated. It also included seating for spectators, an information tent, and a media tent.

The event was interesting both for the technical details and for the chance to observe the infrastructure and social context, and to congratulate some friends who were involved. Lastly, I can say I was there when the robot uprising was seeded.

Georgia Tech computing infrastructure in the trunk of the car.
Me and Dave Meger (former student of mine, now at UBC) with all the vehicles in the background.
Indoor area for viewing the event. This room was pretty full most of the time, and showed footage from inside the course that was not directly viewable.
Interior of the Velodyne HDL-64E Lidar (laser) range sensor used on many of the vehicles.
Half the Stanford team, with Sebastian (blue shirt), the overall team leader, in the foreground. They had just been instructed to cheer (which I think came very naturally).
More of the Stanford team, with Mike Montemerlo (blue hat), software lead, near the front.
Side view of the Stanford vehicle. Several different laser systems can be seen. The Velodyne is on top.
MIT vehicle just finished and being parked.


Dave and I were attending the IROS conference in San Diego, and only found out we could attend the Grand Challenge at the last minute. We rented a nice Mustang convertible and zipped up to Victorville, about two and a half hours northeast of San Diego.
Main observation stands. The crowd had thinned a bit at this point.
Here's Dave standing in front of the huge TerraMax vehicle from Oshkosh. They were eliminated during the finals because they hit a concrete barrier. This is one serious vehicle.
A couple of teams on the grounds just as the finals ended. Each team had a color-coded shirt. Note how big the teams were. I didn't count, but the smallest of teams looked like 30 people or more, with large teams being much bigger.
This is the Victorville base area just beyond the region where the event was. Clearly the space was available.
