Inspiration
Building on aelf’s powerful yet simple video file handling over decentralized infrastructure, we bring a unique capability to videos uploaded to aelf: analytics exposed through APIs, in addition to the existing playable link.
People across the globe are embracing new ways of working, in varied industries and with varying degrees of adoption across people, process and technology.
Common elements to address: productivity, innovative models in different sectors, and attentiveness (individual-level feedback, capturing emotions, proactive actions and so on).
What it does
aelf’s decentralized infrastructure is a powerful platform for addressing many challenges around video in remote ways of working, be it productivity, collaboration and more. What gets missed is something very common in face-to-face mode: facial expressions, which directly affect the emotional experience.
We are excited to bring this innovative video analytics platform to the hackathon; it is useful across many sectors.
Use cases:
- Corporates – meeting videos
- Education – teachers delivering courses through videos
- Bankers – talking to customers through any platform that generates videos
- Doctors – basic treatment through any platform that generates videos
- Buy & sell – for example LIC agents, and many more
- Connected enterprise – bring more effectiveness, be sentient and responsive
How we built it
Built with aelf, Web3.storage, Moralis, Node.js, SendGrid, app services from a hyperscaler platform, and Video Indexer APIs. The Web3.storage and Moralis IPFS storage APIs were extended to power the features below.
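As a rough illustration of the storage step (not our full pipeline), a minimal Node.js sketch using the Web3.storage JavaScript client to pin an uploaded video to IPFS and derive a gateway link could look like this; the token environment variable and file names are placeholders, and the Moralis and aelf wiring is omitted:

```js
// Minimal sketch: pin a video to IPFS via the Web3.storage client and build a gateway link.
// WEB3_STORAGE_TOKEN and the file names below are placeholders for illustration.
import { Web3Storage, getFilesFromPath } from 'web3.storage';

const client = new Web3Storage({ token: process.env.WEB3_STORAGE_TOKEN });

async function storeVideo(path, fileName) {
  const files = await getFilesFromPath(path);   // wrap the local file for upload
  const cid = await client.put(files);          // content-addressed root CID of the upload
  // client.put wraps uploads in a directory, so the gateway link includes the file name
  return `https://${cid}.ipfs.w3s.link/${fileName}`;
}

console.log(await storeVideo('./meeting-recording.mp4', 'meeting-recording.mp4'));
```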
- Attentiveness analytics – highlighting multiple attributes retrieved through the Video Indexer capabilities
- Transcript – generated from the video uploaded on aelf infrastructure
- Word cloud – a visual representation of the text captured from the video, where the importance of each tag is shown through font size or color
- Top keywords – another, simpler way to present the word cloud (refer to the app pictures)
- Topics covered in the video – insights on the important aspects derived from the video
- Statistics based on participant appearance and voice – prepared from the voices of the speakers in the video, including each speaker's talk-to-listen ratio, longest monologue, word count and number of fragments, calculated with advanced voice analytics
- About the participants – the faces that appeared in the video
- Named entities – a list of entity details detected from the speakers' voices
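A minimal sketch of how these insights could be pulled from a Video Indexer-style "Get Video Index" call and shaped for the app is shown below. The location, account ID, video ID and access token are placeholders, and the field names follow the publicly documented Video Indexer insights schema, so treat them as assumptions to verify against the actual response:

```js
// Sketch: fetch insights for an indexed video and shape them for the analytics views.
const LOCATION = 'trial';                           // placeholder region
const ACCOUNT_ID = process.env.VI_ACCOUNT_ID;       // placeholder account id
const ACCESS_TOKEN = process.env.VI_ACCESS_TOKEN;   // placeholder access token

async function getVideoInsights(videoId) {
  const url = `https://api.videoindexer.ai/${LOCATION}/Accounts/${ACCOUNT_ID}` +
              `/Videos/${videoId}/Index?accessToken=${ACCESS_TOKEN}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Video Indexer returned ${res.status}`);
  const insights = (await res.json()).videos[0].insights;

  return {
    // Full transcript text, which also feeds the word cloud / top keywords views
    transcript: (insights.transcript || []).map(line => line.text).join(' '),
    keywords: (insights.keywords || []).map(k => k.text),
    topics: (insights.topics || []).map(t => t.name),
    // Faces that appeared in the video ("About the participants")
    participants: (insights.faces || []).map(f => f.name),
    // Voice statistics: talk-to-listen ratio, longest monologue, word counts, fragments
    voiceStats: insights.statistics,
    namedEntities: [
      ...(insights.namedPeople || []).map(p => p.name),
      ...(insights.namedLocations || []).map(l => l.name),
    ],
  };
}
```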
Challenges we ran into
- Classifying videos that are suitable for such analytics
- Ensuring data privacy is enabled in the design and while processing videos to generate analytics
Accomplishments that we're proud of
- Use of aelf blockchain
- Deepening video analytics on diverse constructs
- Applying the power of aelf blockchain storage and a surrounding tech stack that is primarily open source
- Validating the problem with stakeholders through empathy discussions
What we learned
Deriving intelligence from video content. Just as we have many NLP algorithms for text, this space can be explored in a similar way.
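As a toy example of that kind of lightweight, NLP-style step (not our actual implementation), a simple term-frequency pass over the transcript is enough to drive the top-keywords view and size the tags in the word cloud:

```js
// Illustrative only: count word frequencies in the transcript text.
const STOP_WORDS = new Set(['the', 'a', 'an', 'and', 'or', 'to', 'of', 'in', 'is', 'it', 'we', 'that']);

function topKeywords(transcript, limit = 10) {
  const counts = new Map();
  for (const word of transcript.toLowerCase().split(/[^a-z']+/)) {
    if (!word || STOP_WORDS.has(word)) continue;
    counts.set(word, (counts.get(word) || 0) + 1);
  }
  // Sort by frequency; the counts can also drive font size in the word cloud
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit);
}
```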
What's next for aelf ALT
- Reference models that can act as patterns for users to select and apply to varied types of videos
- Cross-platform interoperability
Built With
- aelf
- blockchain
- css3
- dotnet
- html5
- javascript