We had problems with MagicLeap SDK integration, but the overall algorithm is supposed to work as follows:

  1. The app segments and recognizes objects using computer vision.
  2. Using a custom API, the app recognizes foods, their components, and so on.
  3. From the depth buffer and point cloud, we try to reconstruct the bounding box of the dish or packaged food. We accumulate samples over multiple frames and run the reduction in a compute shader (see the first sketch after this list).
  4. We spawn rigid bodies as GPU container elements and simulate them with GPU acceleration: one pass computes velocities and new positions, another pass handles collisions. We use NVIDIA Flex; from the physics system's point of view, every object is just a combination of several sphere particles. For collisions, the app refreshes the set of detected world planes (see the second sketch below).
  5. The app shows additional text information about the food.

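Step 3 essentially boils down to a min/max reduction over the point-cloud samples that fall inside the segmented object. Below is a minimal CPU-side sketch of that reduction, assuming the real version runs per frame in a compute shader; all type and function names here are illustrative, not part of the actual app.

```cpp
#include <algorithm>
#include <cstdio>
#include <limits>
#include <vector>

// Illustrative CPU-side stand-in for the compute-shader reduction:
// accumulate point-cloud samples that project into the object's mask
// and keep a running axis-aligned bounding box across frames.
struct Vec3 { float x, y, z; };

struct BoundingBox {
    Vec3 min{ std::numeric_limits<float>::max(),
              std::numeric_limits<float>::max(),
              std::numeric_limits<float>::max() };
    Vec3 max{ std::numeric_limits<float>::lowest(),
              std::numeric_limits<float>::lowest(),
              std::numeric_limits<float>::lowest() };

    void Accumulate(const Vec3& p) {
        min.x = std::min(min.x, p.x); max.x = std::max(max.x, p.x);
        min.y = std::min(min.y, p.y); max.y = std::max(max.y, p.y);
        min.z = std::min(min.z, p.z); max.z = std::max(max.z, p.z);
    }
};

// Called once per frame with the samples that hit the segmented object.
BoundingBox EstimateBox(const std::vector<Vec3>& maskedSamples, BoundingBox box) {
    for (const Vec3& p : maskedSamples)
        box.Accumulate(p);
    return box;  // carries the accumulated extents to the next frame
}

int main() {
    BoundingBox box;
    std::vector<Vec3> frameSamples = { {0.10f, 0.00f, 0.30f}, {0.20f, 0.05f, 0.25f} };
    box = EstimateBox(frameSamples, box);
    std::printf("extent x: %.2f m\n", box.max.x - box.min.x);
    return 0;
}
```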
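For step 4, the simulation loop is structured roughly as in the sketch below. This is not the NVIDIA Flex API, just a simplified two-pass stand-in (integration, then collision against world planes) with hypothetical types, to show the structure we aimed for.

```cpp
#include <cstdio>
#include <vector>

// Simplified stand-in for the two GPU passes (NOT the NVIDIA Flex API):
// every rigid body is a set of sphere particles, and collisions are
// resolved against the world planes reported by the headset's plane detection.
struct Particle { float pos[3]; float vel[3]; float radius; };
struct Plane    { float normal[3]; float distance; };  // plane: n . x = d

// Pass 1: integrate velocities and predict new positions (gravity only here).
void IntegratePass(std::vector<Particle>& particles, float dt) {
    for (auto& p : particles) {
        p.vel[1] += -9.81f * dt;           // gravity along y
        for (int i = 0; i < 3; ++i)
            p.pos[i] += p.vel[i] * dt;
    }
}

// Pass 2: resolve collisions against the current set of world planes,
// which the app refreshes whenever plane detection updates.
void CollisionPass(std::vector<Particle>& particles, const std::vector<Plane>& planes) {
    for (auto& p : particles) {
        for (const auto& pl : planes) {
            float dist = pl.normal[0] * p.pos[0] + pl.normal[1] * p.pos[1] +
                         pl.normal[2] * p.pos[2] - pl.distance;
            float penetration = p.radius - dist;
            if (penetration > 0.0f) {
                // Push the particle out along the plane normal and remove
                // the velocity component pointing into the plane.
                float vn = pl.normal[0] * p.vel[0] + pl.normal[1] * p.vel[1] +
                           pl.normal[2] * p.vel[2];
                for (int i = 0; i < 3; ++i) {
                    p.pos[i] += pl.normal[i] * penetration;
                    if (vn < 0.0f) p.vel[i] -= pl.normal[i] * vn;
                }
            }
        }
    }
}

int main() {
    std::vector<Particle> body  = { { {0.0f, 0.5f, 0.0f}, {0.0f, 0.0f, 0.0f}, 0.05f } };
    std::vector<Plane>    floor = { { {0.0f, 1.0f, 0.0f}, 0.0f } };  // y = 0 plane
    for (int step = 0; step < 60; ++step) {   // one second at 60 Hz
        IntegratePass(body, 1.0f / 60.0f);
        CollisionPass(body, floor);
    }
    std::printf("resting height: %.2f m\n", body[0].pos[1]);
    return 0;
}
```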