
Complete scene example for obstacle avoidance and machine learning #42

Open
wbadry opened this issue Jul 23, 2020 · 8 comments
Labels
enhancement New feature or request good first issue Good for newcomers

Comments

@wbadry

wbadry commented Jul 23, 2020

Hello,
I was wondering if it is possible to share some complete scenes like indoor places or any scene for obstacle avoidance and potential machine learning. Is there any way to add objects, textures and so on to the scene? Thanks

@issue-label-bot

Issue-Label Bot is automatically applying the label question to this issue, with a confidence of 0.65.

@issue-label-bot issue-label-bot bot added the question Further information is requested label Jul 23, 2020
@mbusy
Member

mbusy commented Jul 24, 2020

Hi,
We might eventually share more complex environments in the future, but that's not a priority.

You can easily create your own environments; a simple way to spawn objects in the simulation is to use the pybullet API and the objects in pybullet_data (check #36 for more details). You can also check this repository, which uses qibullet for machine learning.

@wagenaartje

Hi @wbadry, I would like to add that what you want to do is entirely possible with this library. We have, for example, created a scenario where Pepper has to detect and identify objects, and then perform an action based on the object. This is all done with qibullet and tensorflow only. I will ask whether it is possible to make this public, and will update you in this thread if so :)

@wbadry
Author

wbadry commented Sep 2, 2020

@wagenaartje I hope to get it soon, as I badly need such an example this month. Thank you so much.

@mbusy
Member

mbusy commented Sep 21, 2020

@wbadry, @wagenaartje any updates on this issue?

@wbadry
Author

wbadry commented Sep 22, 2020

I haven't received any feedback since my last message. It would be amazing to have such an example.

@mbusy
Member

mbusy commented Sep 30, 2020

I'll modify the issue to explicitly request an example implementing a complex scene. Since we don't want the repository to grow too large, we won't store additional meshes; the example will use meshes from the pybullet_data package. We could also cite projects that use qiBullet for machine learning applications in the wiki, providing extra examples and pointers.

@mbusy mbusy added enhancement New feature or request good first issue Good for newcomers and removed question Further information is requested labels Sep 30, 2020
@mbusy mbusy changed the title [request] complete scenes for obstacle avoidance and machine learning Complete scene example for obstacle avoidance and machine learning Sep 30, 2020
@wagenaartje

wagenaartje commented Oct 3, 2020

Sorry for not getting back to you sooner, I have uploaded the example here. There are four files; I will quickly explain what each of them does:

  • generation.py: creates training data by rendering images with the pybullet camera at random angles around the object. The camera height is roughly that of Pepper's front camera. You can add different objects as you wish, but different objects might require a different neural network architecture.

  • training.py: uses the generated images to train a convolutional neural network (with keras). It consists of 3 convolutional layers and one dense layer. Training stops once the validation accuracy exceeds 90% (typically you need 95% for good detection).

  • simulator.py: sets up the simulation environment.

  • control.py: defines Pepper's control class. Object detection basically works as follows: if the laser sensors detect an object, Pepper starts classifying it at every timestep. Once Pepper is very close, all previous classifications are grouped into one final classification. Afterwards, Pepper turns to avoid the object. It's very basic, but you could, for example, program different actions based on the object.
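The camera-placement idea in generation.py can be sketched as follows. This is a hedged reconstruction, not the repository's actual code; the 1.2 m camera height and 1.5 m orbit radius are illustrative guesses:

```python
import math
import random

def sample_camera_pose(target, radius=1.5, height=1.2):
    """Place a camera at a random angle on a circle around `target`,
    at roughly the height of Pepper's front camera, looking at it."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    eye = (target[0] + radius * math.cos(angle),
           target[1] + radius * math.sin(angle),
           height)
    # In pybullet, this pose would feed p.computeViewMatrix(eye, target, up)
    # and p.getCameraImage() to render one training picture.
    return eye

random.seed(0)
poses = [sample_camera_pose((0.0, 0.0, 0.5)) for _ in range(5)]
```

Rendering one image per sampled pose, over many random seeds, yields the kind of varied training set the script describes.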

It is quite basic and I haven't looked at it for a while. It is really important that you reach 95% validation accuracy; a decently working model is included in the model folder. If you have any questions let me know, and I will get back to you when I can!
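The "group all previous classifications" step in control.py can be illustrated with a simple majority vote. This is an illustrative sketch under that assumption, not the repository's actual code:

```python
from collections import Counter

def final_classification(per_step_labels):
    """Combine the per-timestep classifier outputs gathered while
    approaching the object into one final label by majority vote."""
    if not per_step_labels:
        return None
    return Counter(per_step_labels).most_common(1)[0][0]

# Labels predicted at successive timesteps as Pepper approaches:
history = ["duck", "mug", "duck", "duck", "mug"]
label = final_classification(history)  # -> "duck"
```

Aggregating over the whole approach makes the final decision far more robust than trusting any single frame's prediction.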
