With an ageing population, social care faces continuous challenges in providing health, housing, welfare and leisure services.

For the scope of this project, we aimed to improve the process by which patients receive mental health support. This is currently a manual process, and the resulting delays to therapy can be detrimental to a patient's health.

What it does

Chronicle simplifies the process of collecting information from patients and opens up a communication channel between social care workers and patients.

The patient is prompted to fill in a questionnaire on their phone at a regular interval set by the social care worker (e.g. weekly, bi-weekly). The results are sent to the web version of Chronicle, which allows doctors to visualise the patient's progress over time.

The doctor can also access vital-signs data from the phone's health app, such as step count and heart rate. This data can be used to track the effects of specific medications and is analysed by an Echo State Network (ESN) to predict whether the patient may be depressed.

How we built it

Mobile App

To make the service universal and available to as many people as possible, we used the flexibility of the web: the mobile app was developed as a Progressive Web App with the Vue.js framework and deployed to Google Firebase.

Web App

Like the mobile app, the web app was built with the Vue.js framework and deployed to Google Firebase. Its aim is to visualise the data received from the patient, helping doctors make informed decisions.

ESN (Neural Network) and CNN

In this experiment, an ESN is used to detect depression in patients from smartwatch activity data (the Depresjon dataset). This served as a benchmark to demonstrate how quantitative data can indicate symptoms of mental illness, improving doctors' and social workers' understanding of a client without relying on the client to accurately report their own mental health.
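As a rough sketch of how such actigraphy data can be prepared for a classifier, minute-level activity counts can be sliced into fixed-length labelled windows. The window and stride sizes below are illustrative assumptions, not the exact values from our experiment:

```python
import numpy as np

def make_windows(activity, label, window=24 * 60, step=12 * 60):
    """Slice a minute-level activity series into overlapping fixed-length
    windows, each carrying the participant's condition/control label."""
    X, y = [], []
    for start in range(0, len(activity) - window + 1, step):
        X.append(activity[start:start + window])
        y.append(label)
    return np.array(X), np.array(y)

# Toy stand-in for one participant's recording: three days of minute counts.
series = np.random.default_rng(0).poisson(120, size=3 * 24 * 60)
X, y = make_windows(series, label=1)
print(X.shape, y.shape)  # one-day windows at a half-day stride: (5, 1440) (5,)
```

Each window then becomes one training example, so participants with recordings of different lengths simply contribute different numbers of windows.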

What's next for Chronicle

In the future, we believe that data from Chronicle could help better understand the side effects of medication by linking patients' data to medication descriptions. This would help pharmaceutical companies see how their drugs positively or negatively affect patients and provide a better quality of service. A large user base of patients and their health data would also provide an ideal framework for pharmaceutical research.

Appendix - Echo State Network Theory

Feed-forward neural networks (FFNs) are a key part of machine learning. They are well defined and exhibit static, non-dynamic behaviour that is well suited to many tasks. However, they often struggle with temporal data, due to difficulties in preparing the input vector and their lack of long-term memory.

FFNs traditionally require a fixed-size input vector, so some pre-processing of the input signal is almost always required. Basic methods include re-sampling and feature extraction. Both lose information from the original signal, and in both cases the temporal significance of the input is lost.
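The information loss is easy to see in a minimal re-sampling sketch: recordings of any length are interpolated onto the same fixed grid, as a feed-forward network requires, at the cost of fine temporal detail (the 64-sample target length is an arbitrary choice for illustration):

```python
import numpy as np

def resample_to_fixed_length(signal, n_out=64):
    """Linearly interpolate a variable-length signal onto a fixed grid,
    so it fits a feed-forward network's fixed-size input vector."""
    x_old = np.linspace(0.0, 1.0, len(signal))
    x_new = np.linspace(0.0, 1.0, n_out)
    return np.interp(x_new, x_old, signal)

# Two recordings of very different lengths become identical-length vectors,
# but any fine temporal detail in the longer one is discarded.
short = np.sin(np.linspace(0, 2 * np.pi, 100))
long = np.sin(np.linspace(0, 2 * np.pi, 10_000))
print(resample_to_fixed_length(short).shape, resample_to_fixed_length(long).shape)  # (64,) (64,)
```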

To address the problems FFNs have with temporal data, various other models have been introduced. In 1982, Hopfield introduced a network architecture to model associative memory that some believe has similarities to human memory. In a Hopfield network, the input signal is fed into the system sequentially and mapped via a non-linear function to individual attractors of the network's state, which contain 'memories'. The non-linearity of Hopfield networks comes from the cyclic nature of the network, where neurons feed their outputs back into earlier neurons at each step. This feedback gives the network a form of memory, allowing previous inputs to retain temporal significance. A generalisation of Hopfield networks is now studied as recurrent neural networks (RNNs).
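A minimal Hopfield sketch (Hebbian storage plus synchronous sign updates) shows how a corrupted input falls back into a stored attractor:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: store bipolar (+1/-1) patterns in a weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Synchronous updates: the state falls into the nearest stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Store one 8-bit pattern and recover it from a corrupted copy.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1  # flip one bit
print(np.array_equal(recall(W, noisy), pattern))  # True
```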

Although RNNs have a lot of theory supporting them, they are very hard to train in practice, due to various training problems, most notably the vanishing gradient problem.

ESNs are a lesser-known class of networks that retain some of the benefits of RNNs but are significantly quicker to train, as they have only a single trainable readout layer. Internally, an ESN is a sparsely connected RNN in which each node can be described by a random non-linear function. The theory is that a linear combination of these functions can act as a universal approximator. This linear readout is trained while the internal weights of the ESN are kept fixed.
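The idea can be sketched in a few dozen lines of NumPy; the reservoir size, spectral radius and sine-prediction task below are illustrative choices, not the configuration used in Chronicle:

```python
import numpy as np

rng = np.random.default_rng(0)

class ESN:
    """Minimal echo state network: fixed random reservoir, trained linear readout."""
    def __init__(self, n_in, n_res=100, spectral_radius=0.9, sparsity=0.1, ridge=1e-6):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W[rng.random((n_res, n_res)) > sparsity] = 0.0  # sparse connectivity
        # Rescale so the largest eigenvalue magnitude is < 1 (echo state property).
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W, self.ridge = W, ridge

    def _states(self, u):
        x = np.zeros(self.W.shape[0])
        states = []
        for u_t in u:  # tanh reservoir update driven by the input signal
            x = np.tanh(self.W_in @ u_t + self.W @ x)
            states.append(x)
        return np.array(states)

    def fit(self, u, y):
        X = self._states(u)
        # Ridge regression on the readout only; reservoir weights stay fixed.
        self.W_out = np.linalg.solve(X.T @ X + self.ridge * np.eye(X.shape[1]), X.T @ y)

    def predict(self, u):
        return self._states(u) @ self.W_out

# Teach the ESN one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t)[:-1, None], np.sin(t)[1:]
esn = ESN(n_in=1)
esn.fit(u[:1500], y[:1500])
print(np.mean((esn.predict(u[1500:]) - y[1500:]) ** 2))  # small held-out error
```

Note that only `W_out` is learned; everything else stays as it was randomly initialised, which is why training reduces to a single linear solve.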
