

We use Microsoft Azure's speech-to-text API to transcribe audio captured by the device's microphone. Our various NLP tasks are handled by a custom back-end API written in Python, which builds on Azure's speech-to-text output. To build Textify, we used React Native as our front-end framework, coupled with Firebase as our database.
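As a rough illustration of the transcription step, here is a minimal Python sketch using Azure's Speech SDK. It assumes the voice message is already available as a WAV file; the key, region, and function name are placeholders rather than our actual back-end code.

```python
# Minimal sketch of the transcription step with Azure's Speech SDK.
# The subscription key, region, and file path are placeholders.
import azure.cognitiveservices.speech as speechsdk

def transcribe(audio_path: str) -> str:
    speech_config = speechsdk.SpeechConfig(
        subscription="YOUR_AZURE_SPEECH_KEY",  # placeholder
        region="YOUR_REGION",                  # placeholder, e.g. "westus"
    )
    audio_config = speechsdk.audio.AudioConfig(filename=audio_path)
    recognizer = speechsdk.SpeechRecognizer(
        speech_config=speech_config, audio_config=audio_config
    )

    # recognize_once() handles a single short utterance; a long voice
    # message would use continuous recognition instead.
    result = recognizer.recognize_once()
    if result.reason == speechsdk.ResultReason.RecognizedSpeech:
        return result.text
    return ""
```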

Textify is an audio messaging app that automatically shortens long voice messages to a more manageable length and provides a transcription of the message for those of us who are too busy to listen to the entire thing! 🤯 It uses topic detection to give the recipient a summarized view of the message, and tapping on any of the detected keywords jumps straight to that timestamp in the audio. Using sentiment analysis, Textify also gives visual feedback on the sender's emotions in an audio message. We wanted to explore and enable new ways of communicating that are only now possible because of recent developments in computation and machine learning.
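For the NLP side, here is a rough sketch of how sentiment and keywords could be pulled from a transcript using Azure's Text Analytics client. Our write-up only commits to a custom Python back-end, so this particular client, the endpoint, and the helper name are illustrative assumptions, not our exact pipeline.

```python
# Illustrative sketch: sentiment analysis and keyword extraction on a
# transcript via Azure Text Analytics. The client choice and all
# credentials here are assumptions, not our actual implementation.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://YOUR_RESOURCE.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("YOUR_TEXT_ANALYTICS_KEY"),       # placeholder
)

def analyze_transcript(transcript: str) -> dict:
    # Sentiment drives the emotion indicator shown to the recipient.
    sentiment_doc = client.analyze_sentiment([transcript])[0]
    # Key phrases become the tappable keywords and the summary chips.
    phrases_doc = client.extract_key_phrases([transcript])[0]
    return {
        "sentiment": sentiment_doc.sentiment,   # "positive" / "neutral" / "negative"
        "scores": sentiment_doc.confidence_scores,
        "keywords": phrases_doc.key_phrases,
    }
```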

I'm sure we all regularly face the problem of receiving uncomfortably long and unclear voice messages from friends who are too lazy to type them out! Even with today's robust technologies, communication remains one of the bottlenecks of human interaction.
