Inspiration

The main inspiration: the Historical European Martial Arts (HEMA) (https://en.wikipedia.org/wiki/Historical_European_martial_arts) longsword teachings of Johannes Liechtenauer, Joachim Meyer and other medieval German fencing masters (http://wiktenauer.com/wiki/Masters).

And of course... SWORDS! (https://en.wikipedia.org/wiki/Classification_of_swords)

What it does

The Longsword Deep Learning trainer is a:

  • Internet of Things
  • Real-Time
  • Voice-Interacting
  • Medieval Longsword martial arts trainer

...using Cloud Computing, Big Data and Deep Learning.

Interacts:

  • Prompts the trainee, via a Voice User Interface, to perform martial arts techniques:
    • Guards
    • Strikes

Streams:

  • Real-time inertial sensor data from a training longsword:
    • Accelerometers (acceleration & force)
    • Gyroscopes (angular velocity)
    • Magnetometer (orientation)
  • Dumps of processed data, enriched with feature engineering

Learns:

  • From supervised data
  • From training session data with offline learning

Predicts:

  • The Guard/Strike intent, using the real-time sword sensor stream

Measures:

  • Technique Accuracy %.
  • Classification Confidence %.

Feedback:

  • Announces: Final Accuracy %.
  • Recommends: Technique to improve.
  • Stores: Session data for future learning.

How I built it

Guard/Strike training and classification with Deep Learning:

Repositories

There are 5 public repositories:

  1. Longsword Data MQTT Publisher: The main IoT app. It runs on an NVIDIA Jetson TX2 (embedded supercomputer), connects to the sensors via BLE and publishes all the real-time IMU data to the AWS IoT stream via MQTT. It also interacts with the user via AWS Polly text-to-speech (https://bitbucket.org/aviagistemp3rr0r/longsword-data-mqtt-publisher).
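The publisher's core job can be sketched in Python. The helper below is a hypothetical illustration (field names, topic and helper name are assumptions, not the repository's actual schema): it packs one 25 Hz IMU sample into the JSON message published over MQTT.

```python
import json
import time

# Hypothetical helper: packs one 25 Hz IMU sample into the JSON payload
# published to an AWS IoT topic. Field names are illustrative, not the
# exact schema used by the repository.
def build_imu_payload(accel, gyro, mag, topic="longsword/imu"):
    """accel, gyro, mag: (x, y, z) tuples read from the BLE characteristics."""
    return {
        "topic": topic,
        "payload": json.dumps({
            "timestamp": time.time(),
            "ax": accel[0], "ay": accel[1], "az": accel[2],
            "gx": gyro[0], "gy": gyro[1], "gz": gyro[2],
            "mx": mag[0], "my": mag[1], "mz": mag[2],
        }),
    }

# Publishing with the AWS IoT Python SDK would then look roughly like:
#   from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient
#   client = AWSIoTMQTTClient("longsword-jetson")
#   client.configureEndpoint(endpoint, 8883)
#   client.configureCredentials(root_ca, private_key, certificate)
#   client.connect()
#   msg = build_imu_payload((0.1, 0.0, 9.8), (1.2, 0.3, 0.0), (22.0, -5.0, 40.0))
#   client.publish(msg["topic"], msg["payload"], 1)  # QoS 1
```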

  2. Longsword IMU BLE sensors: The Arduino (.ino) source code for the Genuino 101, which acquires raw IMU data from I2C sensors and sends it via Bluetooth Low Energy to the NVIDIA Jetson TX2. The sampling frequency is 25 Hz. It publishes 3-axis data to the BLE characteristics from: the internal accelerometer, gyroscope & step counter, an MPU-6050 accelerometer + gyroscope and an HMC5883L magnetometer (https://bitbucket.org/aviagistemp3rr0r/longsword-imu-ble-sensors).
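On the Jetson side, each BLE notification is a small byte packet that has to be decoded back into sensor axes. A minimal sketch, assuming the common MPU-6050-style layout of three little-endian int16 values per characteristic (the repository's actual packet format may differ):

```python
import struct

# Hypothetical decoder for one BLE notification. Assumption: each IMU
# characteristic packs its 3-axis reading as three little-endian int16
# values (6 bytes); the actual packet layout in the repo may differ.
def decode_axes(packet: bytes, scale: float = 1.0):
    """Return (x, y, z) floats from a 6-byte little-endian int16 packet."""
    x, y, z = struct.unpack("<hhh", packet)
    return (x * scale, y * scale, z * scale)

# Example: a raw accelerometer notification, scaled to g using the
# MPU-6050 default +/-2 g range (16384 LSB per g).
raw = struct.pack("<hhh", 16384, 0, -16384)
print(decode_axes(raw, scale=1.0 / 16384))  # -> (1.0, 0.0, -1.0)
```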

  3. Longsword Stance Model Training: Deep Learning model & training Python scripts. The model is generated with Keras, as a multivariate Bidirectional Long Short-Term Memory (LSTM) network, for classification of longsword movement gestures. Softmax is used as the activation function on the final dense layer, with sparse categorical cross-entropy as the loss. The trained model weights & model structure are stored as json and hdf5 files. They can later be restored for real-time predictions, with minimal execution time: ~3-5 milliseconds for 1-4 rows x 12 features (https://bitbucket.org/aviagistemp3rr0r/longsword-stance-model-training).
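The architecture described above can be sketched in a few lines of Keras. This is a minimal illustration, assuming windows of up to 4 time steps x 12 engineered features and a hypothetical set of 8 guard/strike classes; the layer size and class count are assumptions, not the repository's exact hyperparameters.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_FEATURES = 12  # engineered IMU features per row
NUM_CLASSES = 8    # hypothetical number of guard/strike classes

model = keras.Sequential([
    # None allows variable-length windows (1-4 rows at predict time).
    keras.Input(shape=(None, NUM_FEATURES)),
    # Bidirectional LSTM over the (time steps, features) sequence.
    layers.Bidirectional(layers.LSTM(32)),
    # Final dense layer with softmax, paired with sparse categorical
    # cross-entropy so integer class labels can be used directly.
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Weights and architecture can then be persisted as in the repo:
#   model.save_weights("longsword_stance.h5")         # hdf5
#   open("longsword_stance.json", "w").write(model.to_json())
```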

  4. Longsword Stance RESTful Service: Serves prediction results for real-time multivariate time-series data. Using Flask and Python, the pre-trained bidirectional LSTM deep learning model is loaded into memory. RESTful POST requests containing real-time rows of IMU data can be used to classify the longsword movement stance. Information on the classification confidence and the execution time in milliseconds is also provided (https://bitbucket.org/aviagistemp3rr0r/longsword-stance-restful-service).
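The service's shape can be sketched with a small Flask app. The route, request format and the `classify_stance` stub below are hypothetical (the stub stands in for the real restored LSTM, and "vom_tag" is just an example guard label); they illustrate the pattern, not the repository's exact API.

```python
import time

from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the pre-trained bidirectional LSTM loaded at startup.
# A hypothetical stub so this sketch runs without the real weights; the
# repository would instead restore the model from its json/hdf5 files.
def classify_stance(rows):
    return {"stance": "vom_tag", "confidence": 0.0}

@app.route("/predict", methods=["POST"])
def predict():
    rows = request.get_json()["rows"]  # real-time rows of IMU feature data
    start = time.time()
    result = classify_stance(rows)
    # Report execution time in milliseconds, as the service does.
    result["execution_ms"] = (time.time() - start) * 1000.0
    return jsonify(result)
```

A client would POST JSON like `{"rows": [[...12 feature values...], ...]}` and read the stance, confidence and execution time back from the response.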

  5. Longsword Web Socket Server: Provides a live sensor data feed, a technique classification view (via image) & scores/confidence %. The web page is served by Node.js. An active WebSocket connection is maintained with the RESTful deep learning model service. The messages push real-time data and classification results to the web page's graphs (https://bitbucket.org/aviagistemp3rr0r/longsword-web-socket-server).

And 1 private repository:

  • Lambda Python scripts: DynamoDB, Kinesis Stream & Kinesis Firehose. They perform real-time feature engineering, by adding extra features and by converting predictor values to a positive sub-space (important for the sequential Keras LSTM model input: the hidden layers generate positive states only from input values). DynamoDB stores the raw json sensor data. Kinesis Firehose dumps ready-to-use training session data to S3 buckets (this data can be used directly for offline model training).
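The "positive sub-space" conversion can be sketched as a small pure function. The offset value and the extra magnitude feature below are illustrative assumptions, not the exact logic of the private Lambda scripts:

```python
# Hypothetical feature-engineering step: shift signed IMU readings by a
# per-sensor offset so every predictor fed to the LSTM is non-negative,
# and add an extra engineered feature. Offset and feature choice are
# illustrative, not the private repository's exact logic.
def engineer_features(record, offset=32768.0):
    """record: dict of signed sensor values, e.g. raw int16 readings."""
    shifted = {k: v + offset for k, v in record.items()}
    # Extra engineered feature: overall acceleration magnitude
    # (already non-negative by construction).
    shifted["accel_magnitude"] = (
        record["ax"] ** 2 + record["ay"] ** 2 + record["az"] ** 2
    ) ** 0.5
    return shifted

row = engineer_features({"ax": -16384, "ay": 0, "az": 16384})
# Every value is now non-negative, as the LSTM input expects.
assert all(v >= 0 for v in row.values())
```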

Technologies

  • Real-time (second to sub-second) data streams
  • Multivariate Time-series Classification
  • Deep Learning
    • Bidirectional Recurrent Neural Networks (RNN)
    • Long Short-Term Memory (LSTM)
  • Message Queue Telemetry Transport (MQTT)
  • Publisher-Subscriber pattern
  • Web Sockets
  • Representational State Transfer Application Programming Interface (RESTful API)
  • AWS
    • Internet Of Things (IoT)
    • DynamoDB (NoSQL storage & streams)
    • Lambda
    • S3
    • Kinesis Stream
    • Kinesis Firehose
    • Text-to-speech (Polly)
  • Inertial Measurement Units (IMU): Accelerometer, Magnetometer, Gyroscope, Step counter
  • Bluetooth Low Energy (BLE) byte stream

Languages, SDKs & Libraries

  • Javascript
    • Google Charts
    • jquery
  • Node.js
    • express
    • socket.io
    • realtime
    • node
    • websocket
  • Python
    • csv, json, re
    • requests
    • base64
    • boto3 (AWS)
    • Pygame Mixer (Audio playback)
    • gattlib (BLE data transfer)
    • AWS IoT MQTT Python SDK
    • keras
    • numpy
    • h5py
    • Flask
    • socket
  • Arduino
    • Wire
    • Imu
    • BLE
  • Speech Synthesis Markup Language (SSML)
  • Hyper Text Markup Language (HTML)
