Peter Corke

Brisbane, Queensland, Australia
8K followers 500+ connections

About

I am a researcher, educator and writer in robotics and robotic vision. My research interests include visual control of robots and the application of robots to problems such as large-scale environmental monitoring and agriculture.

My education interests include internet-based teaching at scale (https://www.robotacademy.net), open-source software (https://github.com/petercorke), and writing books (Robotics, Vision & Control: Fundamental Algorithms in Python|MATLAB, currently in its 3rd edition). I received the Australian University Teacher of the Year award (2017), and the QS-Wharton Reimagine Education Engineering & IT gold award and Teaching Delivery silver award (2015).

Previous roles include founder and director of the QUT Centre for Robotics (2020-3), director of the Australian Centre of Excellence for Robotic Vision (2014-20), founder and leader of a cross-CSIRO research theme in wireless sensor networks (2006-8), and before that founding Research Director of the Autonomous Systems Laboratory (2004-2007). Member of the Administrative Committee (AdCom, or board of governors) of the IEEE Robotics & Automation Society for the terms 2008-2010 (at large), 2011-2013, and 2016-18 (at large).

Editorial activities include editor-in-chief of the IEEE Robotics & Automation Magazine (2010-13); founding multimedia editor and member of the editorial board, International Journal of Robotics Research (2004-2019); founding editor and associate editor, Journal of Field Robotics, Wiley (2006); and member of the editorial advisory board for the Springer Tracts in Advanced Robotics (STAR) book series, Springer (2004-).

Fellow of the Australian Academy of Science (2019), Fellow of the Australian Academy of Technology and Engineering (2017), Senior Fellow of the Higher Education Academy, UK (2017), Fellow of the IEEE (2007). Other awards include: IEEE George Saridis Award for Research Leadership in Robotics and Automation (2020); Eureka Prize Finalist (research and innovation in environmental science), COTSBot team (2016).

Biographical video (2 hours): Australian Robotic Vision https://www.youtube.com/watch?v=zI95yjzgjRs

Specialties: research, wireless sensor networks, robotics, aerial robots (UAVs), underwater robots (AUVs)

Experience

  • Lyro Robotics

    Brisbane, Queensland, Australia

Education

  • PhD thesis "Vision-based Robot Control". Supervisors: Prof Malcolm Good and Dr Paul Dunn.

  • Thesis "Computer-aided control system design". Supervisor: Prof John Anderson.

  • Activities and Societies: electrical engineering

    B.Eng. in Electrical Engineering, M.Eng.Sc. in Electrical Engineering and PhD in Mechanical and Manufacturing Engineering

Publications

  • Automatic image scaling for place recognition in changing environments

    Proceedings of the IEEE International Conference on Robotics and Automation

    Robustness to variations in environmental conditions and camera viewpoint is essential for long-term place recognition, navigation and SLAM. Existing systems typically solve either of these problems, but invariance to both remains a challenge. This paper presents a training-free approach to lateral viewpoint- and condition-invariant, vision-based place recognition. Our successive frame patch-tracking technique infers average scene depth along traverses and automatically rescales views of the same place at different depths to increase their similarity. We combine our system with the condition-invariant SMART algorithm and demonstrate place recognition between day and night, across entire 4-lane-plus-median-strip roads, where current algorithms fail.

  • Towards Vision-Based Pose- and Condition-Invariant Place Recognition along Routes

    Proceedings of the Australasian Conference on Robotics and Automation 2014

    Vision-based place recognition involves recognising familiar places despite changes in environmental conditions or camera viewpoint (pose). Existing training-free methods exhibit excellent invariance to either of these challenges, but not both simultaneously. In this paper, we present a technique for condition-invariant place recognition across large lateral platform pose variance for vehicles or robots travelling along routes. Our approach combines sideways facing cameras with a new multi-scale image comparison technique that generates synthetic views for input into the condition-invariant Sequence Matching Across Route Traversals (SMART) algorithm. We evaluate the system’s performance on multi-lane roads in two different environments across day-night cycles. In the extreme case of day-night place recognition across the entire width of a four-lane-plus-median-strip highway, we demonstrate performance of up to 44% recall at 100% precision, where current state-of-the-art fails.

  • A tutorial on visual servo control

    IEEE Transactions on Robotics and Automation

    This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.

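The two place-recognition papers above hinge on rescaling views so that traverses at different lateral offsets, and hence different average scene depths, can be compared directly. The fragment below is only a toy sketch of that rescaling idea under a pinhole-camera assumption, not the authors' patch-tracking pipeline; the function name rescale_to_reference and the use of OpenCV are illustrative choices.

    import cv2  # any image library with resizing would serve equally well

    def rescale_to_reference(image, est_depth, ref_depth):
        # Under a pinhole model, apparent image scale is inversely proportional
        # to scene depth, so a view whose estimated average depth is est_depth
        # is resized by est_depth / ref_depth to roughly match the reference
        # view before the two images are compared.
        scale = est_depth / ref_depth
        return cv2.resize(image, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_LINEAR)

    # Hypothetical usage: the current traverse is estimated to be twice as far
    # from the roadside scene as the reference traverse, so the view is
    # magnified by 2 before matching.
    # normalised = rescale_to_reference(current_view, est_depth=20.0, ref_depth=10.0)
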
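The visual servoing tutorial above distinguishes position-based from image-based control. As a minimal textbook-style illustration of the image-based case for a single point feature (not code from the tutorial itself), the sketch below computes the classical control law v = -lambda * pinv(L) * (s - s*) from an assumed feature depth.

    import numpy as np

    def ibvs_velocity(s, s_star, Z, gain=0.5):
        # s, s_star: observed and desired normalised image coordinates (x, y).
        # Z: assumed depth of the point; gain: the proportional gain lambda.
        x, y = s
        # Interaction (image Jacobian) matrix for a point feature, relating
        # image-plane velocity to camera spatial velocity (vx, vy, vz, wx, wy, wz).
        L = np.array([
            [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x * x),  y],
            [0.0,      -1.0 / Z, y / Z, 1 + y * y, -x * y,       -x],
        ])
        e = np.asarray(s, float) - np.asarray(s_star, float)
        # Classical image-based visual servo law: v = -lambda * pinv(L) * e.
        return -gain * np.linalg.pinv(L) @ e

    # Example: drive a feature observed at (0.10, 0.05) towards the principal point.
    # v_camera = ibvs_velocity(s=(0.10, 0.05), s_star=(0.0, 0.0), Z=1.0)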

Projects

  • Introduction to Robotics MOOC

    The first open (free) online undergraduate-level robotics course.

  • Australian Research Council Centre of Excellence for Robotic Vision

    Grant Amount: $19,000,000
    Duration: 2014-2020

    Centre Overview: The Centre’s research will allow robots to see, to understand their environment using the sense of vision. This is the missing capability that currently prevents robots from performing useful tasks in the complex, unstructured and dynamically changing environments in which we live and work.

    The entire team of investigators comprises: Peter Corke, Ian Reid, Tom Drummond, Robert Mahony, Gordon Wyeth, Michael Milford, Ben Upcroft, Anton van den Hengel, Chunhua Shen, Richard Hartley, Hongdong Li, Stephen Gould, Gustavo Carneiro, Paul Newman, Philip Torr, Francois Chaumette, Frank Dellaert, Andrew Davison and Marc Pollefeys.

    Participating organisations include: Queensland University of Technology, The University of Adelaide, Monash University, the Australian National University, University of Oxford, INRIA Rennes Bretagne, Georgia Institute of Technology, Imperial College London, Swiss Federal Institute of Technology Zurich, and National ICT Australia.

  • Robotics, Vision & Control (the book)

    A MATLAB-based introduction to mobile and arm robots, machine vision and vision-based control. Leverages the Robotics and Machine Vision Toolboxes for MATLAB. Aimed squarely at advanced undergraduates and beginning graduate students.

    Second edition currently underway, lots more content: Lie algebras, product of exponentials, twists, inertial navigation, differential and omnidirectional robots, lattice planners, pose graph SLAM, series elastic actuators, Lab color space, light field cameras, structured light, bundle adjustment and photometric visual servoing!

  • Robotics Toolbox

    Two toolboxes, one for robotics (kinematics, dynamics, mobile navigation, localisation and planning) and one for vision (image processing, filters, feature extraction and multiview geometry).

  • Load Haul Dump

    An LHD is a mid-sized (up to 60 tonne) underground mining vehicle that loads, hauls and dumps (hence its name) metalliferous ore from an open stope (where there is broken rock) to a crusher or a waiting truck, to be transported to the surface. Since the roof of the tunnel in open-stope areas is unstable, this type of operation presents a number of safety issues and provides a perfect opportunity for automation. Our aim in this project was to automate the haulage and dumping cycle for an LHD.

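The Robotics, Vision & Control entry above lists Lie algebras, twists and products of exponentials among the second-edition topics. A minimal worked example of that machinery, using plain NumPy and SciPy rather than the book's Toolboxes, is sketched below: a rotation matrix is obtained by exponentiating an element of so(3).

    import numpy as np
    from scipy.linalg import expm

    def skew(w):
        # Map a 3-vector to the corresponding skew-symmetric matrix in so(3).
        wx, wy, wz = w
        return np.array([[0.0, -wz,  wy],
                         [wz,   0.0, -wx],
                         [-wy,  wx,  0.0]])

    # Exponential map exp: so(3) -> SO(3). A rotation of 0.3 rad about the
    # x-axis is the matrix exponential of the scaled generator, reproducing
    # the familiar rotation matrix Rx(0.3).
    R = expm(skew([0.3, 0.0, 0.0]))
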
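The Robotics Toolbox entry above covers arm kinematics among much else. A short usage sketch follows; it assumes the Python incarnation of the Toolbox (the roboticstoolbox-python package and its bundled Puma 560 model), and model or method names may differ between Toolbox versions.

    import roboticstoolbox as rtb  # assumes the roboticstoolbox-python package

    # Load the bundled Puma 560 model, a classic 6-axis arm described by
    # Denavit-Hartenberg parameters.
    puma = rtb.models.DH.Puma560()

    # Forward kinematics: the end-effector pose (an SE(3) transform) at the
    # model's predefined zero joint configuration.
    T = puma.fkine(puma.qz)
    print(T)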

Honors & Awards

  • Fellow

    Australian Academy of Science

  • Australian Award for Teaching Excellence

    Australian Government Department of Education and Training

  • Australian University Teacher of the Year

    Australian Government Department of Education and Training

  • Fellow

    Australian Academy of Technology and Engineering

  • Senior Fellow

    Higher Education Academy (UK)

  • Citation for Outstanding Contributions to Student Learning

    Australian Office for Learning and Teaching

  • Engineering & IT award (gold)

    QS-Wharton Reimagine Education

  • Fellow

    IEEE

    for contributions to vision-based robot control

  • Australian Engineering Excellence award

    Engineers Australia

    Starbug project team
