We had problems integrating with the Magic Leap SDK, but the overall algorithm is supposed to work as follows:
- The app segments and recognizes objects using computer vision.
- Using a custom API, the app recognizes foods, their components, etc.
- From the depth buffer and point cloud, we estimate the bounding box of the dish or packaged food. Samples are accumulated over time, and the computation runs in a compute shader.
- Rigid bodies are spawned as GPU container elements and simulated with GPU acceleration: one pass computes velocities and new positions, and another pass resolves collisions. We use NVIDIA Flex, in which every object is just a combination of several spherical particles from the physics system's point of view. For collisions, the app refreshes the set of detected world planes.
- Finally, the app shows additional text information about the food.
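The bounding-box step above can be sketched on the CPU in Python (the real version accumulates samples in a compute shader; the `BoundingBoxEstimator` class and the sample points here are illustrative, not part of the actual app):

```python
import numpy as np

class BoundingBoxEstimator:
    """Accumulates world-space point samples for a segmented object
    and maintains a running axis-aligned bounding box."""

    def __init__(self):
        self.min_pt = np.full(3, np.inf)
        self.max_pt = np.full(3, -np.inf)

    def accumulate(self, points: np.ndarray) -> None:
        # points: (N, 3) samples unprojected from the depth buffer /
        # point cloud for the segmented dish or package
        self.min_pt = np.minimum(self.min_pt, points.min(axis=0))
        self.max_pt = np.maximum(self.max_pt, points.max(axis=0))

    @property
    def size(self) -> np.ndarray:
        # extents of the accumulated box along each axis
        return self.max_pt - self.min_pt

# Accumulate two frames of samples, then read off the box extents
est = BoundingBoxEstimator()
est.accumulate(np.array([[0.0, 0.0, 0.0], [0.1, 0.2, 0.1]]))
est.accumulate(np.array([[0.05, -0.1, 0.3]]))
print(est.size)
```

Accumulating a running min/max over many frames is what makes the estimate stable against noisy individual depth samples.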
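The two simulation passes (velocity/position update, then collision against world planes) can be illustrated with a minimal CPU analogue in Python. This is not NVIDIA Flex itself, just a sketch of the same structure; the function names, the plane representation `(normal, offset)`, and all parameter values are assumptions:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])

def integrate(pos, vel, dt):
    """Pass 1: compute new velocities and candidate positions
    for all particles (pos, vel are (N, 3) arrays)."""
    vel = vel + GRAVITY * dt
    return pos + vel * dt, vel

def collide_with_planes(pos, vel, planes, radius=0.01, restitution=0.3):
    """Pass 2: resolve collisions against world planes.
    Each plane is (n, d) with the constraint n·x + d >= radius."""
    for n, d in planes:
        dist = pos @ n + d                 # signed distance per particle
        pen = radius - dist                # penetration depth
        hit = pen > 0
        pos[hit] += np.outer(pen[hit], n)  # project particles out of the plane
        vn = vel[hit] @ n                  # velocity component into the plane
        # reflect only the inward-moving part, damped by restitution
        vel[hit] -= np.outer((1 + restitution) * np.minimum(vn, 0.0), n)
    return pos, vel

# One step: a single particle falls onto the ground plane y = 0
pos = np.array([[0.0, 0.05, 0.0]])
vel = np.zeros((1, 3))
planes = [(np.array([0.0, 1.0, 0.0]), 0.0)]
pos, vel = integrate(pos, vel, 0.1)
pos, vel = collide_with_planes(pos, vel, planes)
```

In the real system both passes run on the GPU over all particles in parallel, and `planes` is refreshed each frame from the headset's detected world planes.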