Photocopiers have been around for almost a century. Press a button and you've got as many copies as you want. What if you could do the same with an object? What if you could scan and replicate a 3D object in any size, colour or material using a device you already own?
The global 3D printing market is expected to grow to $7.2 billion by 2021, and one of the driving factors behind that growth is the falling price of personal 3D printers. Consumer 3D printing is still held back, however, by the technical skill required to operate the machines and the time-consuming process of preparing a print.
We set out to make 3D printing easier, faster and more versatile, letting you scan and print an object with a Face ID-enabled smartphone.
What it does
ImPrimo is an application that lets a user scan an object with their Face ID-enabled iOS device and print it wirelessly with impressive detail. Users can replicate hand-made models, artwork, and machine parts. 3D scanning sidesteps the long hours required to build 3D models from scratch in modeling software, and users are free to scan their own models or choose from a large database of premade 3D scans. The wireless, automated conversion and slicing replace steps normally performed by hand, cutting preparation time by roughly 85% (calculated from average prep times).
READY. SCAN. PRINT.
- Once you’ve installed the app, you can choose to create a new scan.
- You scan the item by rotating the phone around the subject.
- ImPrimo uses the TrueDepth sensor to create a point-cloud map of the object.
- After you view the scan you can choose to rescan or send it to the printer.
- Once you tap Print, the app automatically converts the point-cloud scan into an STL file, slices it and sends the file to the printer. The printer converts it to G-code, adjusts its settings and begins printing.
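The conversion step above ends in the STL format. As a rough illustration of what that output looks like (this is a minimal sketch, not our production converter, and the triangle data is made up for the example), a binary STL file is an 80-byte header, a triangle count, then 50 bytes per facet:

```python
import struct

def write_binary_stl(path, triangles):
    """Write triangles (each three (x, y, z) vertices) as binary STL:
    80-byte header, uint32 facet count, then 50 bytes per facet."""
    with open(path, "wb") as f:
        f.write(b"ImPrimo sketch".ljust(80, b" "))   # 80-byte header
        f.write(struct.pack("<I", len(triangles)))   # facet count
        for v0, v1, v2 in triangles:
            # Facet normal from the cross product of two edges.
            ax, ay, az = (v1[i] - v0[i] for i in range(3))
            bx, by, bz = (v2[i] - v0[i] for i in range(3))
            n = (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)
            f.write(struct.pack("<3f", *n))
            for v in (v0, v1, v2):
                f.write(struct.pack("<3f", *v))
            f.write(struct.pack("<H", 0))            # attribute byte count

# Example: a single triangle in the XY plane.
write_binary_stl("demo.stl", [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]])
```

A real scan produces thousands of such facets; the slicer then turns them into G-code toolpaths.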
How I built it
Our app was built in Python and Swift, and we drew on nearly every library we could think of to build the database server, the application, and the 3D-printing interface. The application communicates with a server over HTTP, sending encoded files to be processed into 3D-printable .gcode files and passed on to the printer. We built the client app in Swift and used a recently developed library, driven by AI computer vision, to process depth information from the TrueDepth sensor on the iPhone X. The client's request is then handled by a Django server running on a VM, which processes the file into a 3D-printable .stl file using 3D matrix-transformation algorithms. Finally, the file is sent to the 3D printer.
Growth, Challenges, Accomplishments
Executing this project meant working through an extensive list of things we had to learn before we could reach our end goal.
Swift Moya Framework
To get our app talking to our server, we needed to learn the Moya networking framework for Swift. This was quite a challenge, but we figured out how to send a POST request from the iPhone and have our server act on it accordingly, learning to use Django and host a server along the way.
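Our real stack pairs Moya on the phone with a Django view on the server, but the shape of that POST exchange can be sketched with the Python standard library alone (the endpoint path, the loopback address, and the sample bytes are invented for the example):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = {}

class UploadHandler(BaseHTTPRequestHandler):
    # Hypothetical handler standing in for our Django upload view.
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        received["body"] = self.rfile.read(length)  # the encoded scan bytes
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), UploadHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The phone-side POST, played here by urllib instead of Moya.
url = f"http://127.0.0.1:{server.server_port}/upload_scan"
reply = urlopen(Request(url, data=b"fake-ply-bytes", method="POST")).read()
server.shutdown()
```

In the real app, Moya wraps the request construction on the Swift side and Django routes `/upload_scan`-style URLs to a view function.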
Wireless File Transfer
We also had to learn about the file conversion necessary for sending our scan from the phone to a server. Our 3D models needed to be converted to a binary array, so we created functions to encode and decode PLY and STL files. We also built in the flexibility to convert to USDZ, enabling viewing in augmented reality.
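The exact encoding our functions use isn't shown here, but one common way to make raw PLY/STL bytes safe to carry inside an HTTP/JSON payload is base64; this sketch (with made-up sample bytes) illustrates the encode/decode round trip:

```python
import base64

def encode_model(raw_bytes: bytes) -> str:
    """Encode raw PLY/STL bytes as a base64 string safe for JSON/HTTP transport."""
    return base64.b64encode(raw_bytes).decode("ascii")

def decode_model(encoded: str) -> bytes:
    """Recover the original model bytes on the server side."""
    return base64.b64decode(encoded)

sample = b"solid imprimo\n\x00\xff"   # pretend model bytes, binary included
payload = encode_model(sample)        # a plain ASCII string
restored = decode_model(payload)      # identical to the original bytes
```

Base64 inflates the payload by about a third, which is the usual trade-off for transport-safe text.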
PointCloud to 3D model Conversion
We also learned how the conversion of a point-cloud scan into a 3D-printable model actually takes place. To get our scan working, we had to learn how to leverage the depth sensing in the latest iPhones to capture an object. Once an object was scanned, it needed to go through a process called Poisson reconstruction and be further refined to yield a polished STL file. So we learned about the software involved in the process, and how to automate and optimize it so that our server's back end can run it through in a matter of seconds.
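Before the scan can be handed to a reconstruction tool, the server has to read the point cloud out of the PLY file. This is a minimal reader for the ASCII flavour of PLY (our real pipeline's parsing may differ, and the sample file below is invented); it assumes the vertex properties begin with x, y, z:

```python
def read_ascii_ply_points(text: str):
    """Parse (x, y, z) vertex coordinates from a minimal ASCII PLY file."""
    lines = iter(text.splitlines())
    assert next(lines).strip() == "ply"
    n_vertices = 0
    for line in lines:
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])   # vertex count from the header
        elif line.strip() == "end_header":
            break
    points = []
    for _ in range(n_vertices):
        x, y, z = map(float, next(lines).split()[:3])
        points.append((x, y, z))
    return points

sample = """ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
0.0 0.0 0.0
1.0 2.0 3.0
"""
pts = read_ascii_ply_points(sample)
```

Poisson reconstruction itself then fits a watertight surface to these points (plus per-point normals), which is what makes the result printable.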
Backend Integration - Raspberry Pi and server
With our back-end server completed and our app finished, we just needed a way to wirelessly send the converted STL file to a 3D printer. To accomplish this we used a Raspberry Pi running OctoPrint, connected to the 3D printer, which hosts a service for remotely slicing and printing jobs. So we needed to learn how to send an STL file from our Django server to OctoPrint whenever a mobile-app user requests a print.
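OctoPrint exposes a REST API that accepts file uploads as a multipart POST to `/api/files/local`, authenticated with an `X-Api-Key` header. This sketch builds such a request with the standard library only; it doesn't actually send anything, and the hostname, filename, and API key are placeholders:

```python
import uuid

def octoprint_upload_request(stl_bytes: bytes, filename: str, api_key: str):
    """Build headers and a multipart/form-data body for OctoPrint's
    'POST /api/files/local' upload endpoint (no network call here)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + stl_bytes + f"\r\n--{boundary}--\r\n".encode()
    headers = {
        "X-Api-Key": api_key,  # issued by the OctoPrint instance
        "Content-Type": f"multipart/form-data; boundary={boundary}",
    }
    return headers, body

headers, body = octoprint_upload_request(b"solid part...", "scan.stl", "SECRET")
# urllib.request.urlopen(Request("http://octopi.local/api/files/local",
#                                data=body, headers=headers)) would send it.
```

OctoPrint's upload endpoint also accepts extra form fields (such as ones to select and start the job immediately), which is what lets a single request from our server kick off the print.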
Even though this whole process had a steep learning curve, we all gained a lot from the experience, and it was gratifying to end up with a fully functional system that can scan an object, 3D-print it, and view it in augmented reality.
What's next for ImPrimo
As the depth-sensing hardware behind facial recognition on smartphones improves, so will the accuracy of our 3D scans. Beyond its early use by students, artists and hobbyists, ImPrimo can be used in the medical industry and archaeology. Accurate scanning of an object lets specialists, researchers, and doctors dimensionally analyze components and reverse-engineer them for modification and manufacture to set standards.
In archaeology, scanning artifacts to create reproducible 3D-printed replicas, or for archiving and curation, is one of the field's major applications. 3D scanning enables every form of cultural heritage to be classified, measured, analyzed and even shared among the research community.
ImPrimo's technology is affordable and broadly applicable, making it adaptable to many industries moving forward.