(DRAFT v0.1)


Reading is a complex process that starts when light reflected off words enters the eye. I want to simulate and understand this process up to the point where letter recognition occurs in the brain.

What it does

In this initial stage, the plan is to use an eye simulator and make it read words. The simulator's output will be letters read through eye saccades, so it will carry both temporal and motor information. Feeding ordinary letter images into the simulator should yield retinal images after cone absorption.
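The temporal-plus-motor output described above can be sketched as a stream of (motor command, shifted image) pairs. This is only an illustrative toy in plain Python, assuming small random (dx, dy) saccadic jumps over a binary letter bitmap; the real pipeline would use a proper eye simulator producing cone-absorption images.

```python
import random

def saccade_stream(letter, n_fixations=4, max_jump=1, seed=0):
    """Simulate viewing a letter bitmap through saccadic eye movements.

    `letter` is a 2D binary bitmap (list of lists of 0/1). Each fixation
    shifts the image by a small (dx, dy) jump, mimicking the retinal image
    moving under the eye. Returns a list of (motor_command, shifted_image)
    pairs, so temporal and motor information travel with each frame.
    """
    rng = random.Random(seed)
    rows, cols = len(letter), len(letter[0])
    stream = []
    for _ in range(n_fixations):
        dx = rng.randint(-max_jump, max_jump)  # horizontal saccade component
        dy = rng.randint(-max_jump, max_jump)  # vertical saccade component
        # Shift the bitmap by (dx, dy); pixels moved off the edge become 0.
        shifted = [[letter[r - dy][c - dx]
                    if 0 <= r - dy < rows and 0 <= c - dx < cols else 0
                    for c in range(cols)]
                   for r in range(rows)]
        stream.append(((dx, dy), shifted))
    return stream

# A tiny 5x5 bitmap of the letter "I" (hypothetical test input)
I = [[0, 1, 1, 1, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 1, 1, 1, 0]]
frames = saccade_stream(I)
```

Each frame pairs the motor command with the retinal image it produced, which is the shape of input the later stages would consume.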

How I built it

The idea is to increase complexity step by step. First, I will start coding with the MNIST examples in nupic.vision as a guide. Next, I plan to include the motor commands associated with the eye saccades, so that instead of static images the system receives a stream of letters affected by saccadic eye movements. Finally, I plan to use grayscale images instead of binary representations, and I will try to add topology to the spatial pooler.
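For the grayscale step, each pixel intensity has to become binary input the spatial pooler can consume. A minimal sketch, assuming a crude thermometer code where each pixel expands into a fixed number of bits proportional to its intensity; nupic.vision's own encoders may work differently.

```python
def grayscale_to_sdr(image, levels=4):
    """Encode a grayscale image (values 0..255) as a flat binary vector.

    Each pixel becomes `levels` bits, with the number of active bits
    proportional to its intensity (a thermometer code), so the spatial
    pooler can consume grey levels instead of a plain binary bitmap.
    """
    bits = []
    for row in image:
        for pixel in row:
            active = (pixel * levels) // 256  # 0..levels active bits
            bits.extend([1] * active + [0] * (levels - active))
    return bits

# Hypothetical 2x2 grayscale patch
patch = [[0, 128],
         [255, 64]]
sdr = grayscale_to_sdr(patch)  # 16-bit binary vector
```

A thermometer code keeps nearby intensities overlapping in their bit patterns, which matches the spatial pooler's preference for inputs whose similarity is reflected in shared active bits.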

Challenges I ran into

To be done.

Accomplishments that I'm proud of

To be done.

What I learned

To be done.

What's next for ReadingEye

If all of the above works (hopefully by November 14th), I plan to continue working with the eye simulator and nupic.vision on reading-related problems, simulations, and research. I would like to start working on hierarchy-related problems along the visual stream, from the eye to the brain's vOT area, where letter and word recognition is thought to be decoded. I will also try to integrate the simulations with real-life neuroimaging data.

Built With
