Inspiration
Imagine the future of robotics, where you can hand a robot an object, tell the robot what that object is, and it will always be able to remember and accurately find that object. RenderMan is the backbone of this task.
What it does
Machine learning models trained on large, general-purpose datasets such as ImageNet and COCO are heavyweight and generalized, which leads to slow runtimes and misclassifications on narrow tasks. RenderMan allows for hyper-specific, optimized image segmentation models that can be stood up on remote edge computing platforms.
How we built it
Using a combination of Blender scripting and standalone Python scripts, we developed an efficient, start-to-finish pipeline that can be deployed on any device once the required Python packages are installed. We eventually plan to package it as a Docker container for ease of use.
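As a rough sketch of the Blender half of the pipeline (the object name, camera sweep, and output path below are hypothetical placeholders, not our exact script), a headless Blender script can orbit a camera around a target object to generate training images:

```python
# Run headless: blender scene.blend --background --python render_views.py
# Renders the target object from several camera angles to build a training set.
import math
import bpy

obj = bpy.data.objects["Target"]   # hypothetical object name; match your scene
cam = bpy.context.scene.camera     # assumes the scene already has a camera
scene = bpy.context.scene
scene.render.image_settings.file_format = "PNG"

radius, height, num_views = 3.0, 1.5, 16
for i in range(num_views):
    angle = 2 * math.pi * i / num_views
    # Orbit the camera around the object at a fixed radius and height.
    cam.location = (radius * math.cos(angle), radius * math.sin(angle), height)
    # Aim the camera at the object by tracking its -Z axis toward it.
    direction = obj.location - cam.location
    cam.rotation_euler = direction.to_track_quat("-Z", "Y").to_euler()
    scene.render.filepath = f"//renders/view_{i:03d}.png"
    bpy.ops.render.render(write_still=True)
```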
For our demo, we sourced previously developed renders; however, we plan to expand the pipeline to accept assets generated by 3D photogrammetry programs.
Challenges we ran into
Developing a U-Net model from scratch to handle the output of our renders proved time-consuming.
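Our actual implementation isn't reproduced here, but a compact U-Net of the kind described looks roughly like this minimal PyTorch sketch (channel sizes, depth, and class count are illustrative assumptions):

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)   # 64 skip channels + 64 upsampled
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)    # 32 skip channels + 32 upsampled
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                     # full resolution
        e2 = self.enc2(self.pool(e1))         # 1/2 resolution
        b = self.bottleneck(self.pool(e2))    # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # per-pixel class logits

# e.g. MiniUNet()(torch.randn(1, 3, 256, 256)) -> logits of shape (1, 2, 256, 256)
```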
Accomplishments that we're proud of
Developing an efficient model that runs end to end in roughly 30 minutes.
What we learned
The importance of planning model development around the expected training data and inputs before writing code.
What's next for RenderMan
We will continue to build and optimize our code for the most efficient and accurate results. Standing up preprocessing on a GPU should significantly increase speed.
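As a hypothetical sketch of what GPU-side preprocessing could look like (assuming PyTorch and torchvision's v2 transforms, which operate directly on CUDA tensors; not our current code), the batch is moved to the device before resizing and normalizing:

```python
import torch
from torchvision.transforms import v2  # requires torchvision >= 0.15

device = "cuda" if torch.cuda.is_available() else "cpu"

# v2 transforms run on whatever device the tensor lives on,
# so resize/normalize execute on the GPU instead of the CPU.
preprocess = v2.Compose([
    v2.Resize((256, 256)),
    v2.ToDtype(torch.float32, scale=True),  # uint8 [0,255] -> float [0,1]
    v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

raw = torch.randint(0, 256, (8, 3, 512, 512), dtype=torch.uint8)  # stand-in batch
batch = preprocess(raw.to(device))  # all preprocessing happens on the GPU
```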
