Web App Development, Full-stack Web Development, NodeJS, MongoDB, AWS S3, AWS Transcribe, Docker, DigitalOcean
- It records the entire stream into one large audio file.
- It splits the audio file by reply using FFMPEG. Every time someone speaks, a new audio file is created; when there is a pause, the current file is closed and the next one begins.
- It records each file’s name, parent folder name, and date in a database.
- It uploads the audio file to Amazon S3.
- The file is then sent for processing to Amazon’s AWS Transcribe service, which is based on artificial intelligence.
- After a couple of seconds, AWS Transcribe generates a response containing what it thinks was said.
- The system takes the generated response from AWS Transcribe and inserts it into the database, correlated with the audio file that was used as input for the transcription.
- Finally, all the responses in the database can be accessed through a web page. The messages are displayed in reverse chronological order and can be filtered by certain words.
The system records the “radio” stream using streamripper. The recording is split into files with FFMPEG, based on silence periods. The generated files are processed with NodeJS: each file’s name, parent folder, and date are stored in MongoDB, and the file itself is uploaded to Amazon AWS S3. After the upload, the file is sent for processing to Transcribe, another Amazon AWS tool. The response is then inserted into the same MongoDB database. The results from the database are displayed by a basic NodeJS web server; the web pages have basic styling, making use of the free version of Material Bootstrap. Everything is wrapped in a Docker container, and the container “lives” in a DigitalOcean droplet.
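The containerized deployment described above could be sketched as a Dockerfile along these lines. The base image, file names, and port are assumptions for illustration, not the project's actual configuration.

```dockerfile
# Sketch only: the image also installs ffmpeg and streamripper,
# which the pipeline shells out to.
FROM node:14

RUN apt-get update && apt-get install -y ffmpeg streamripper \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Hypothetical port for the NodeJS web server
EXPOSE 3000
CMD ["node", "server.js"]
```

An image like this can be built once and run unchanged on the DigitalOcean droplet, which is the point of wrapping the whole pipeline in Docker.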