Autonomous robotic plane flies indoors at MIT



For decades, academic and industry researchers have been working on control algorithms for autonomous helicopters — robotic helicopters that pilot themselves, rather than requiring remote human guidance. Dozens of research teams have competed in a series of autonomous-helicopter challenges posed by the Association for Unmanned Vehicle Systems International (AUVSI); progress has been so rapid that the last two challenges have involved indoor navigation without the use of GPS.

But MIT’s Robust Robotics Group — which fielded the team that won the last AUVSI contest — has set itself an even tougher challenge: developing autonomous-control algorithms for the indoor flight of GPS-denied airplanes. At the 2011 International Conference on Robotics and Automation (ICRA), a team of researchers from the group described an algorithm for calculating a plane’s trajectory; in 2012, at the same conference, they presented an algorithm for determining its “state” — its location, physical orientation, velocity and acceleration. Now, the MIT researchers have completed a series of flight tests in which an autonomous robotic plane running their state-estimation algorithm successfully threaded its way among pillars in the parking garage under MIT’s Stata Center.
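
To make the notion of "state" concrete, here is a minimal sketch of the quantities such an estimator tracks, with a constant-acceleration prediction step. It is an illustration only, not the group's algorithm, and every name in it is hypothetical.

    # Sketch of an aircraft "state": position, orientation, velocity,
    # acceleration. NOT the MIT estimator; just a constant-acceleration
    # prediction step to show what "propagating the state" means.
    import numpy as np

    class PlaneState:
        def __init__(self):
            self.position = np.zeros(3)      # x, y, z (meters)
            self.rpy = np.zeros(3)           # roll, pitch, yaw (radians)
            self.velocity = np.zeros(3)      # m/s
            self.acceleration = np.zeros(3)  # m/s^2

        def predict(self, dt):
            """Advance the state by dt seconds under constant acceleration."""
            self.position += self.velocity * dt + 0.5 * self.acceleration * dt**2
            self.velocity += self.acceleration * dt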

Read more: http://web.mit.edu/newsoffice/2012/autonomous-robotic-plane-flies-indoors-0810.html

Video: Melanie Gonick, MIT News

Additional footage courtesy of: Adam Bry, Nicholas Roy, Abraham Bachrach of the Robust Robotics Group, Computer Science and Artificial Intelligence Laboratory, Department of Aeronautics and Astronautics at Massachusetts Institute of Technology.

Special thanks to the Office of Naval Research under MURI N00014-09-1-1052 and the Army Research Office under the Micro Autonomous System Technologies program.

41 thoughts on “Autonomous robotic plane flies indoors at MIT”

  1. Amol Khade

    With all due respect, I didn't find it that great… The algorithm isn't all that complex either; I also work in the AI (Artificial Intelligence) and NN (Neural Networks) field. I expected a lot more from MIT. 🙂 Cheers!!!

    Reply
  2. Ryukachoo

    You bet. Bootstrap like crazy.
    What he said in the video was an extremely high-level description of something much more finicky, mostly the state estimation. Marrying the LIDAR with the accelerometers and gyroscopes sounds like a huge pain; however, a system like this would be a huge boon to something like APM, the Arduino-based open-source autopilot project. Roughly the idea, in its simplest form:
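    (A toy complementary filter for pitch, the simplest way to "marry" a gyro and an accelerometer. This is not APM's or MIT's actual code, and the function name is made up.)

        # Toy complementary filter: integrate the gyro for fast motion,
        # lean on the accelerometer's gravity vector to cancel drift.
        import math

        def complementary_pitch(pitch, gyro_rate, accel, dt, alpha=0.98):
            """pitch/gyro_rate in rad and rad/s; accel = (ax, ay, az) in m/s^2."""
            ax, ay, az = accel
            accel_pitch = math.atan2(-ax, math.sqrt(ay**2 + az**2))
            return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch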

    Reply
  3. Ryukachoo

    I've been meaning to look at ROS; I didn't realize it already had such a varied feature set. That's only half the story, though: the other half is the state estimation using that simplified aerodynamics model, although that might be possible with a rudimentary physics engine built for games.
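    (As a sketch of what such a stripped-down physics step might look like, here is a coordinated-turn kinematic update. Illustrative only; it is not the paper's model, and the names are made up.)

        # Cheap fixed-wing physics step: advance 2D position and heading
        # from airspeed and bank angle via the coordinated-turn relation
        # turn_rate = g * tan(bank) / airspeed. Not a real aero model.
        import math

        G = 9.81  # gravity, m/s^2

        def coordinated_turn_step(x, y, heading, airspeed, bank, dt):
            heading += G * math.tan(bank) / airspeed * dt
            x += airspeed * math.cos(heading) * dt
            y += airspeed * math.sin(heading) * dt
            return x, y, heading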

    Reply
  4. prajñā prajñā

    Fermat's Last Theorem is the transformer bumblebee Robot.
    Original equation:
    z^n = x^n + y^n.
    Meaning:
    z^(n-3)*z^3 = x^(n-3)*x^3 + y^(n-3)*y^3.
    Use the formula z^3 = [z(z+1)/2]^2 – [z(z-1)/2]^2 to convert z^3 to exponent 2. Then use the formula [z(z+1)/2]^2 = 1^3 + 2^3 + … + z^3 to convert exponent 2 back to exponent 3.
    Repeat several times with the same method.
    The transformer bumblebee Robot is created according to its own structure.

    Simplest form of the transformer bumblebee Robot.
    Use two formulas:
    z^3 = [z(z+1)/2]^2 – [z(z-1)/2]^2
    and define x < x+a < y.
    x^3 + y^3 = [y(y+1)/2]^2 – [x(x-1)/2]^2 – [(x+1)^3 + (x+2)^3 + … + (x+a-1)^3 + (x+a)^3 + (x+a+1)^3 + … + (y-1)^3]
    Because:
    (x+a)^3 = [(x+a)(x+a+1)/2]^2 – [(x+a)(x+a-1)/2]^2
    so also:
    x^3 + y^3 = [y(y+1)/2]^2 – [x(x-1)/2]^2 – [(x+a)(x+a+1)/2]^2 + [(x+a)(x+a-1)/2]^2 – [(x+1)^3 + (x+2)^3 + … + (x+a-1)^3 + (x+a+1)^3 + … + (y-1)^3]

    Original equation:
    z^3 = x^3 + y^3.
    According to the above method, the transformer bumblebee Robot system is created:
    [z(z+1)/2]^2 – [z(z-1)/2]^2 = [y(y+1)/2]^2 – [x(x-1)/2]^2 – [(x+a)(x+a+1)/2]^2 + [(x+a)(x+a-1)/2]^2 – [(x+1)^3 + (x+2)^3 + … + (x+a-1)^3 + (x+a+1)^3 + … + (y-1)^3]

    [z(z+1)/2]^2 – [z(z-1)/2]^2 = [y(y+1)/2]^2 – [x(x-1)/2]^2 – [(x+b)(x+b+1)/2]^2 + [(x+b)(x+b-1)/2]^2 – [(x+1)^3 + (x+2)^3 + … + (x+b-1)^3 + (x+b+1)^3 + … + (y-1)^3]

    [z(z+1)/2]^2 – [z(z-1)/2]^2 = [y(y+1)/2]^2 – [x(x-1)/2]^2 – [(x+c)(x+c+1)/2]^2 + [(x+c)(x+c-1)/2]^2 – [(x+1)^3 + (x+2)^3 + … + (x+c-1)^3 + (x+c+1)^3 + … + (y-1)^3]

    [z(z+1)/2]^2 – [z(z-1)/2]^2 = [y(y+1)/2]^2 – [x(x-1)/2]^2 – [(x+d)(x+d+1)/2]^2 + [(x+d)(x+d-1)/2]^2 – [(x+1)^3 + (x+2)^3 + … + (x+d-1)^3 + (x+d+1)^3 + … + (y-1)^3].

    …
    Robots flood the planets of the Orion galaxy looking for integers to eat.
    There are certainly not enough integers for the large number of transformer bumblebee Robots.

    ADIEU.

    Reply
  5. Oehcs Inc.

    This is only possible with today's fast processors, lightweight lasers, and advanced batteries. Only more proof that self-driving cars are in our future.

    Reply
  6. Robert Lopez

    Can the problem with dynamic navigation be fixed by adding some kind of motor that gives the tail more pivoting motion, or would that make the plane lose balance?

    Reply
