Inspiration

SISU is a social construct in Finnish culture. It means many things to many people, so we would love to use technology to understand it better, and to use the results of our challenge to create an experience that inspires and helps people develop sisu in their lives.

What’s this all about?

Every culture around the world develops a common language: complex sets of region- and people-specific terminology that form both part of its self-identity and how others perceive it.

For Finnish people, few terms sum this up better than sisu.

The thing with terms such as this is that they remain purely subjective, with every individual holding a potentially different definition. And yet, sisu is ‘visible’ everywhere and very pervasive. It’s a social construct.

But what would a Finn say Sisu ‘looks’ like? What does it sound like? How does it fuel the actions of people? And does it play a role in overcoming adversity, or taking on challenges? Can it be ‘bottled’ and utilised to fuel social sustainability and empower the next wave of young entrepreneurs? Can it be ‘summoned’ or controlled, like a superpower?

Before we can control it, we should understand it better.


What it does

How is AI (Machine learning) relevant here?

The best way to get to the core of sisu is not by asking for a definition, but by paying attention to how it forms a part of people’s stories and how it is represented out in the world. And this is where data science can help.

By applying different types of machine learning models, we can aim to understand how sisu is present in everyday living, business, sports, or overcoming adversity, starting from personal stories that are meaningful to their protagonists.

How I built it

Gathering themes from interviews and Q&A sessions on “What does SISU mean to you?”, we collected keyword badges that describe SISU factors and used them to build a quotes dataset. After identifying which SISU factors to look for, we started to practise on an example dataset of 500 quotes. The ideas of this experiment were to:

  • categorize the “SISU factors” present in quotes from the dataset, and
  • classify and predict the context of SISU in a training dataset.

Then we started our experiment with Step 1: train the machine to understand language, using sentiment analysis with an LSTM in PyTorch. Sentiment analysis is the task of classifying the polarity of a given text. In this task, we train the model to recognise which quotes contain SISU factors, based on the categories they were given.
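As a rough sketch of this step, an LSTM quote classifier in PyTorch could look like the following. The class name, vocabulary size, and layer dimensions are illustrative assumptions, not the actual implementation:

```python
import torch
import torch.nn as nn

class SisuQuoteClassifier(nn.Module):
    """Binary LSTM classifier: does a quote express a SISU factor?"""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)       # (batch, seq, embed)
        _, (hidden, _) = self.lstm(embedded)       # hidden: (1, batch, hidden)
        return torch.sigmoid(self.fc(hidden[-1]))  # (batch, 1) probability

model = SisuQuoteClassifier(vocab_size=5000)
batch = torch.randint(1, 5000, (4, 20))  # 4 tokenized quotes, 20 tokens each
probs = model(batch)
print(probs.shape)  # torch.Size([4, 1])
```

Training would then minimise binary cross-entropy between these probabilities and the “contains a SISU factor” labels from the categorized quotes.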

Challenges I ran into

There were a few main challenges I met during the Phase 1 experiment:

  • The data and the context of SISU factors. There is very little training data to be found: "SISU" is brand new and undefined anywhere if you haven't encountered it before. So although quotes, social media posts, articles, and videos can be crawled from the internet, the first step is to identify the SISU factors based on prior research. Our keyword badges are based on the interviews and research provided by the Virsity Challenge, and we used them to categorize the content we found.
  • The sentiment-analysis LSTM gave us 56% accuracy (compared to only about 49% of the training dataset being categorized). Although the model can output whether a given quote or tweet is "SISU" or not, it does not yet perform well on other forms of content, such as articles and videos, without categorizing them first.
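The keyword-badge categorization used to label the crawled content can be sketched as a simple matcher. The factor names and badge keywords below are hypothetical placeholders, not the actual badges from the Virsity interviews:

```python
# Hypothetical keyword badges per SISU factor (illustrative only).
SISU_BADGES = {
    "perseverance": {"endure", "never give up", "persist"},
    "courage": {"dare", "brave", "fearless"},
    "resilience": {"overcome", "adversity", "bounce back"},
}

def tag_sisu_factors(quote: str) -> list[str]:
    """Return the SISU factors whose badge keywords appear in the quote.

    Naive substring matching; a real pipeline would use tokenization
    or word-boundary matching to avoid partial-word hits.
    """
    text = quote.lower()
    return [factor for factor, keywords in SISU_BADGES.items()
            if any(kw in text for kw in keywords)]

print(tag_sisu_factors("Dare to endure and you will overcome adversity."))
# ['perseverance', 'courage', 'resilience']
```

Quotes tagged this way become the labeled categories that the sentiment-analysis model is trained against.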

Accomplishments that I'm proud of

As it performed well in Phase 1 (the SISU factors experiment), I'm proud of my team's commitment: we worked hard to categorize the quote data and train the model.

What I learned

I learned what SISU is and how it shows up in our lives.

What's next for Virsity Challenge

From this experiment, the model is able to understand language context based on a given labeled category. So for now, the model can help to identify the SISU factors in a simple quote when given one.

We checked Kaggle for references on style generation using the LSTM method; however, the results did not read well or make sense. We think this method needs more work before it can evaluate the SISU factor in the content it generates.

Therefore, before we can really work on style generation, our next step is to test the accuracy of the model on more diversified content, such as long articles, stories, and videos with accompanying articles, which would contain SISU keyword badges and their meaning.

Our next steps to try will be:

Deep LSTM Reader (https://paperswithcode.com/method/deep-lstm-reader): the Deep LSTM Reader is a neural network for reading comprehension. We feed a document one word at a time into a deep LSTM encoder; after a delimiter, we then also feed the query into the encoder. The model therefore processes each document-query pair as a single long sequence. Given the embedded document and query, the network predicts which token in the document answers the query.
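A minimal sketch of this idea in PyTorch follows. The dimensions and vocabulary size are assumptions, and for simplicity the head scores answer tokens over the whole vocabulary rather than restricting predictions to tokens in the document as the original model does:

```python
import torch
import torch.nn as nn

class DeepLSTMReader(nn.Module):
    """Sketch of the Deep LSTM Reader: document and query are concatenated,
    separated by a delimiter token, and read as one long sequence; the
    final hidden state scores candidate answer tokens."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True)
        self.answer = nn.Linear(hidden_dim, vocab_size)

    def forward(self, doc_ids, query_ids, delim_id=1):
        batch = doc_ids.size(0)
        delim = torch.full((batch, 1), delim_id, dtype=torch.long)
        # Document, delimiter, then query, processed as one long sequence.
        seq = torch.cat([doc_ids, delim, query_ids], dim=1)
        _, (hidden, _) = self.lstm(self.embedding(seq))
        return self.answer(hidden[-1])  # (batch, vocab) answer-token scores

reader = DeepLSTMReader(vocab_size=1000)
doc = torch.randint(2, 1000, (2, 50))    # 2 documents, 50 tokens each
query = torch.randint(2, 1000, (2, 10))  # 2 queries, 10 tokens each
scores = reader(doc, query)
print(scores.shape)  # torch.Size([2, 1000])
```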

To study SISU, we can collect multiple document sources about SISU in real life and SISU stories, then ask the trained model questions about SISU factors. This allows us to develop a class of attention-based deep neural networks that learn to read real SISU documents and answer complex questions with minimal prior knowledge of language structure.

Another way to improve, rather than training a network from scratch, is to reuse one or more networks already trained on sentiment analysis and perform "fine-tuning". As sentiment analysis has been quite widely studied, there is probably a lot of training data available, so the corresponding models (like the LSTM and Deep LSTM mentioned above) should be quite stable. The idea of fine-tuning is to start with these models and adapt only the last layers, training on the (small) dataset you can find and process. This is a common technique for new problems that don't have much available training data but are related to older, well-studied ones. To teach machines to understand an abstract concept like SISU, we first have the model categorize the resources, then build up the abstract concept by repeatedly giving it Q&A to read.
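In PyTorch, this fine-tuning amounts to freezing the pretrained body and training only a new head. The stand-in "pretrained" model, class count, and dimensions below are assumptions for illustration; in practice you would load real pretrained sentiment-model weights:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained sentiment model; in practice,
# load real pretrained weights here.
pretrained = nn.Sequential(
    nn.Embedding(5000, 100),
    nn.LSTM(100, 128, batch_first=True),
)
head = nn.Linear(128, 4)  # new head: 4 SISU-factor classes (assumed)

# Freeze the pretrained body; only the new head will receive gradients.
for param in pretrained.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

tokens = torch.randint(0, 5000, (8, 20))  # 8 quotes, 20 tokens each
with torch.no_grad():
    _, (hidden, _) = pretrained(tokens)   # frozen feature extraction
logits = head(hidden[-1])                 # (8, 4) class scores, trainable
print(logits.shape)  # torch.Size([8, 4])
```

Only the head's parameters are passed to the optimizer, so a training loop over the small SISU-labelled quotes updates just the last layer.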

We're excited to work on Phase 2!

Built With
