Augmented reality (AR) is an interactive experience of a real-world environment in which the objects of the real world are enhanced by computer-generated perceptual information. We have already seen its varied applications in tours, medical training, modeling, and more. The use of AR in education is still limited, but it is expanding at a fast pace. This is the age of learning more by visualization and less by reading: educational institutions have upgraded their teaching methods to employ more audio-visual aids in the classroom. But these aids are in 2D and do not give students the freedom to visualize a concept in the way they each understand it, which differs from student to student.


One of the other fields where the usage of AR is growing is classroom education. But what about learning beyond the classroom? The prime objective of this project is to help students learn and understand concepts in a much better and more streamlined manner (beyond what is taught using conventional classroom methods), quashing any doubts that linger in students' minds due to a lack of visualizing capacity.


As noted above, what if a student wants to instantly visualize a particular chemical structure or phenomenon in 3D, from all possible angles, enlarging or rotating it as needed? We achieve this using augmented reality, and what better platform to deploy it on than smartphones, which are all but universal these days? With the app, students can visualize any model in 3D, in any orientation they wish.


Some may argue that a student can refer to videos on YouTube or similar websites, or simply search the Internet. But note that one cannot rotate or resize the 3D model shown in a YouTube video at will; one must scrub the video back and forth, which can be frustrating. In this app, all a student needs to do is use his/her fingers to zoom (pinch) or rotate (swipe). While there may be many AR apps that can achieve this, the feature that makes our app stand out is that it identifies which model you need for a particular concept. All you need to do is point the phone at the book; the app identifies the concept and displays the relevant AR model. No searches needed, no video scrubbing needed…all that stands between you and your 3D model is a single scan.


  1. The user scans the paragraph/title/image caption related to the concept he/she is studying.
  2. The app captures the image and identifies the text in it.
  3. Using machine learning algorithms, the app detects the keywords present in the detected text and thus predicts the concept that the user is studying.
  4. The system then retrieves, from the wide range of models available, the 3D model matching the concept predicted by the ML algorithm.
  5. The user then holds his/her device so that the app can detect a surface.
  6. And it's done! The 3D model appears on the detected surface. The user can manually rotate or resize it, or walk around it with the phone to view it from every angle (rather than rotating the model), and thus analyze and understand the concept in depth, for good.
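Steps 3–4 above can be sketched as a minimal keyword-to-model match. This is an illustrative sketch only: the class, keywords, and model paths are hypothetical, and the real app would use a trained ML model rather than simple substring counting.

```java
import java.util.*;

public class ConceptMatcher {
    // Hypothetical mapping from concept keywords to 3D model identifiers.
    private static final Map<String, String> MODEL_FOR_CONCEPT = Map.of(
            "benzene", "models/benzene_ring.glb",
            "photosynthesis", "models/chloroplast.glb",
            "dna", "models/dna_double_helix.glb");

    // Scan the recognized text for known keywords and return the model
    // for the best-matching concept, if any keyword is found at all.
    public static Optional<String> findModel(String recognizedText) {
        String lower = recognizedText.toLowerCase(Locale.ROOT);
        String bestConcept = null;
        int bestCount = 0;
        for (String keyword : MODEL_FOR_CONCEPT.keySet()) {
            int count = countOccurrences(lower, keyword);
            if (count > bestCount) {
                bestCount = count;
                bestConcept = keyword;
            }
        }
        return Optional.ofNullable(bestConcept).map(MODEL_FOR_CONCEPT::get);
    }

    private static int countOccurrences(String text, String word) {
        int count = 0, idx = 0;
        while ((idx = text.indexOf(word, idx)) != -1) {
            count++;
            idx += word.length();
        }
        return count;
    }
}
```

Returning `Optional.empty()` when no keyword matches lets the app fall back gracefully (e.g. to a manual model picker) instead of showing a wrong model.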


  1. Creating the UI of the app for the user to have a smooth experience whilst using it.
  2. Adding ARCore support to the app so that it can use the camera and required sensors of the phone to display the 3D model.
  3. Adding the text identification feature using ML Kit or open-source libraries.
  4. Training an ML model so that it can extract the keyword(s) from the identified text.
  5. Storing 3D models in the cloud so that they can be retrieved by the user anytime, anywhere over an active internet connection.
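For item 4, a stopword-filtered frequency count is one simple baseline for keyword extraction before a trained ML model replaces it. Everything below is a sketch under that assumption; the stopword list and class name are illustrative, not the app's actual code.

```java
import java.util.*;
import java.util.stream.*;

public class KeywordExtractor {
    // A tiny stopword list for illustration; the real app would train a model instead.
    private static final Set<String> STOPWORDS = Set.of(
            "the", "a", "an", "is", "are", "of", "and", "in", "to", "it", "that");

    // Return the top-k most frequent non-stopword tokens in the recognized text.
    public static List<String> topKeywords(String text, int k) {
        Map<String, Long> counts = Arrays.stream(text.toLowerCase(Locale.ROOT).split("\\W+"))
                .filter(w -> !w.isEmpty() && !STOPWORDS.contains(w))
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
        return counts.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```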

Future scope

The app, for now, runs on a cloud-based system, i.e., one has to be connected to the internet to view the 3D models. In the future, we plan to group the models subject/course-wise and let the user download the models for the subjects they wish to view. This would facilitate offline learning, removing the barrier of a good and active internet connection between the user and learning. The app can be very useful for those following self-learning practices or those who cannot afford the big coaching institutes (where advanced teaching techniques are applied).
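The planned download-for-offline flow could be sketched as a subject-wise cache check: serve a model from local storage when present, download it otherwise. The file layout, names, and placeholder download below are assumptions for illustration, not the app's actual storage scheme.

```java
import java.io.IOException;
import java.nio.file.*;

public class ModelCache {
    private final Path cacheDir;

    public ModelCache(Path cacheDir) throws IOException {
        this.cacheDir = Files.createDirectories(cacheDir);
    }

    // Where a downloaded model for a subject would live, e.g. cache/chemistry/benzene.glb.
    public Path pathFor(String subject, String modelName) {
        return cacheDir.resolve(subject).resolve(modelName + ".glb");
    }

    // True if the model is already on the device, so no internet connection is needed.
    public boolean isAvailableOffline(String subject, String modelName) {
        return Files.exists(pathFor(subject, modelName));
    }

    // Stand-in for the real download: here we just write the given bytes to the cache.
    public void download(String subject, String modelName, byte[] data) throws IOException {
        Path target = pathFor(subject, modelName);
        Files.createDirectories(target.getParent());
        Files.write(target, data);
    }
}
```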

Tech stack

  1. Android Studio
  2. Google ML Kit
  3. ARCore
  4. echoAR

What has been implemented

  1. Text recognition (not shown to the user in the final app; a temporary button displays it for now)
  2. 3D/AR model finding and viewing

Work in progress

  1. Keyword-finding algorithm (until it is ready, the app includes 3 sample choices)
  2. Overall UI (homepage) and app optimization
  3. Fixing a bug where text is recognized only when the camera is in upright landscape mode
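One plausible cause of the orientation bug: ML Kit's `InputImage` takes a rotation-in-degrees argument, and if a fixed rotation is always passed, text is only recognized in one orientation. The ML Kit documentation gives a rotation-compensation formula combining the device rotation with the camera sensor orientation; the helper below implements that formula as a pure function (the class name and wiring into the camera code are assumptions).

```java
public class RotationHelper {
    // Compute the rotation (in degrees) to pass to InputImage so ML Kit sees
    // upright text in every device orientation. deviceRotation is the
    // Surface.ROTATION_* constant (0..3); sensorOrientation comes from
    // CameraCharacteristics.SENSOR_ORIENTATION. Follows the compensation
    // formula in the ML Kit text-recognition documentation.
    public static int rotationDegrees(int deviceRotation, int sensorOrientation, boolean frontFacing) {
        int deviceDegrees = deviceRotation * 90; // ROTATION_0..ROTATION_270 map to 0..270
        if (frontFacing) {
            return (sensorOrientation + deviceDegrees) % 360;
        }
        return (sensorOrientation - deviceDegrees + 360) % 360;
    }
}
```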
