Inspiration
The inspiration behind this project came from our love of guitars and music. We wanted to create an application that lets us analyze music on a deeper level and, in the process of building it, deepen our knowledge of music theory, artificial intelligence, and software development.
What it does
Our web app includes a highly capable guitar tuner that analyzes frequency and pitch to determine which note is being played, on which string and octave of the guitar. On top of that, we included an application that lets you input your choice of string instrument and chord to generate detailed imagery of the tablature for playing that chord. Both of these applications cater primarily to guitarists and significantly deepened our understanding of music theory, as many complex computations go into deriving the correct notes from frequency and pitch alone. But we didn't want to stop there: our team was very interested in artificial intelligence, so we implemented two more applications. One uses OpenAI to suggest songs with similar lyrics based on audio input, letting guitarists and non-guitarists alike find music they love based on their favorite songs. The other uses two models from Hugging Face to identify the chords and genre of a song uploaded as an mp3 file.
How we built it
Song Finder: We built the Song Finder by combining OpenAI's API and the Web Audio API in a JavaScript program. The program starts when the user presses the "start" button, which sets up audio processing, accesses the user's microphone, and records audio input until the "stop" button is pressed. The audio is saved as a WAV file, a format OpenAI accepts, and passed to OpenAI's audio-to-text endpoint. Once the audio is converted, the text is saved in a transcribedText variable and sent to a "gpt-3.5-turbo" model prompted to return songs with lyrics similar to the input. The back end of this program was written in JavaScript, and the front end in HTML and CSS.
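The two OpenAI calls described above can be sketched as request builders (this is an illustration, not the exact project code; the endpoint URLs and the "whisper-1" audio-to-text model name are our assumptions about OpenAI's current API, and error handling is omitted):

```javascript
// Sketch of the Song Finder's two OpenAI requests.
// Assumes the recorded audio has already been encoded as a WAV Blob.

// Build the speech-to-text request. "whisper-1" is OpenAI's
// audio-transcription model name at the time of writing (assumption).
function buildTranscriptionRequest(apiKey, wavBlob) {
  const form = new FormData();
  form.append('file', wavBlob, 'recording.wav');
  form.append('model', 'whisper-1');
  return {
    url: 'https://api.openai.com/v1/audio/transcriptions',
    init: {
      method: 'POST',
      headers: { Authorization: `Bearer ${apiKey}` },
      body: form,
    },
  };
}

// Build the follow-up chat request that asks gpt-3.5-turbo for songs
// with lyrics similar to the transcribed text.
function buildSongSearchRequest(apiKey, transcribedText) {
  return {
    url: 'https://api.openai.com/v1/chat/completions',
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-3.5-turbo',
        messages: [
          { role: 'user', content: `List songs with lyrics similar to: "${transcribedText}"` },
        ],
      }),
    },
  };
}
```

In the browser, each request is then sent with `fetch(url, init)` and the transcription's `text` field is fed into the second call.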
Tuner: We built the Tuner application using four key components that work together to create a musical tuner interface. The Tuner class uses the Web Audio API (also used in Song Finder) and the aubio library for real-time pitch detection; it sets up audio processing, connects to the user's microphone, and triggers events when musical notes are detected. The Notes class focuses on the user interface and creates a dynamic set of clickable musical notes using the DOM; it lets users interact with the notes, plays or stops the corresponding pitch, and updates the UI based on the detected frequency. The FrequencyBars class contributes a visually engaging representation of frequency data as bars on an HTML canvas; it resizes itself on window resize events and updates the bars with incoming frequency data. Finally, the Application class serves as the orchestrator, initializing the A4 frequency, creating instances of the Tuner and Notes classes, and managing the overall application flow. Together these components integrate pitch detection, user interaction, visual feedback, and application logic to deliver a comprehensive, interactive tuner experience.

How the conversion works: we use three logarithmic formulas to convert frequencies to notes. In getNote we use note = 12 * (Math.log(frequency / middleA) / Math.log(2)) + semitone, which calculates the musical note from a given frequency, the middleA frequency (the standard reference pitch of 440 Hz used to tune musical instruments), and the semitone value. Then in getStandardFrequency we use standardFrequency = middleA * Math.pow(2, (note - semitone) / 12), which computes the standard frequency for a given note. Lastly, in getCents we use cents = Math.floor((1200 * Math.log(frequency / getStandardFrequency(note))) / Math.log(2)) to get the cents difference between a given frequency and the standard frequency of a note, giving an accurate measure of the pitch difference between two notes. These formulas establish the relationship between frequencies and the 12-note equal temperament system.
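The three conversions can be written as small pure functions. This sketch uses A4 = 440 Hz with MIDI note number 69 as the semitone offset; rounding to the nearest semitone in getNote is our addition:

```javascript
const middleA = 440;  // A4 reference pitch in Hz
const semitone = 69;  // MIDI note number of A4

// Frequency -> nearest equal-temperament note number.
function getNote(frequency) {
  const note = 12 * (Math.log(frequency / middleA) / Math.log(2)) + semitone;
  return Math.round(note);
}

// Note number -> its standard frequency in Hz.
function getStandardFrequency(note) {
  return middleA * Math.pow(2, (note - semitone) / 12);
}

// Cents between a measured frequency and a note's standard frequency.
function getCents(frequency, note) {
  return Math.floor(
    (1200 * Math.log(frequency / getStandardFrequency(note))) / Math.log(2)
  );
}

// getNote(440) === 69 (A4); getCents(442, 69) === 7 (slightly sharp)
```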
Song Genres: For the song chord and genre finder, we used two models available on Hugging Face: one specializes in identifying the chords of an audio file, and the other identifies the song's genres. We pass the user's input to the models using the API token Hugging Face provides. JavaScript powers the backend of this interface, and the front end is written in HTML and CSS. The outputs of both models are displayed on a results page where the user can see precise predictions of chords and genres. This section of the project aims to help users get the chords for a song, which they can then play on their guitar, using our tuner page to make sure it sounds right.
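Calling both models through the Hugging Face Inference API can be sketched as follows (the model IDs below are placeholders, not the models the project actually used, and the base URL reflects the Inference API at the time of writing):

```javascript
// Sketch of querying the Hugging Face Inference API with an uploaded file.
const HF_API_BASE = 'https://api-inference.huggingface.co/models/';

function buildInferenceRequest(modelId, apiToken, audioBytes) {
  return {
    url: HF_API_BASE + modelId,
    init: {
      method: 'POST',
      headers: { Authorization: `Bearer ${apiToken}` },
      body: audioBytes, // raw mp3 bytes for audio models
    },
  };
}

// Query both models with the same audio and collect their outputs.
async function analyzeSong(apiToken, audioBytes) {
  const models = {
    chords: 'some-user/chord-recognition-model', // placeholder ID
    genre: 'some-user/music-genre-classifier',   // placeholder ID
  };
  const results = {};
  for (const [task, modelId] of Object.entries(models)) {
    const { url, init } = buildInferenceRequest(modelId, apiToken, audioBytes);
    results[task] = await (await fetch(url, init)).json();
  }
  return results; // e.g. { chords: [...], genre: [...] } to render on the results page
}
```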
Chord Library: The Chord Library initializes by populating dropdown menus for instrument selection, musical keys, and chord types once the document's content has fully loaded. Users then select their desired instrument, key, and chord type, and the application dynamically imports the specific chord data using an asynchronous function that constructs the module path from those selections. On successfully fetching the chord data, it generates an SVG chord diagram, adjusting for the number of strings on the selected instrument and visually representing finger positions, open strings, and barre chords according to the imported chord data. The application handles user interactions and errors gracefully, providing a responsive, interactive experience for exploring musical chords.
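The dynamic-import step can be sketched like this (the `./chords/<instrument>/<key>-<type>.js` directory layout and the shape of the exported chord data are assumptions for illustration, not the project's actual file structure):

```javascript
// Build the module path from the user's dropdown selections.
function buildChordModulePath(instrument, key, chordType) {
  return `./chords/${instrument}/${key}-${chordType}.js`;
}

// Dynamically import the chord data, returning null if it doesn't exist.
async function loadChordData(instrument, key, chordType) {
  const path = buildChordModulePath(instrument, key, chordType);
  try {
    const module = await import(path);
    // Assumed shape: { frets: [...], fingers: [...], barres: [...] }
    return module.default;
  } catch (err) {
    console.error(`No chord data found for ${key} ${chordType} on ${instrument}`);
    return null;
  }
}
```

The SVG diagram is then drawn from the returned fret and finger positions.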
Challenges we ran into
- The model we used for chord classification output the chords in Spanish, so we had to convert the chord names to English.
- The tuner application involved a lot of mathematical calculations, and we had to do extensive research to convert standard frequencies to notes. Luckily, our knowledge of music theory helped us quite a bit.
- Displaying the notes dynamically, and the frequency bars alongside them, while making sure they resize correctly was very difficult on the front-end side.
- We had many commit issues with GitHub, as we used Codespaces and had syncing conflicts that would mess up our code.
- Getting the OpenAI model to return the correct information, and figuring out how to transcribe speech into text and then generate more information from that text, took a lot of trial and error.
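The Spanish-to-English chord conversion boils down to mapping solfège note names (Do, Re, Mi, ...) to letter names (C, D, E, ...). A minimal sketch of that mapping, our illustration rather than the project's exact code:

```javascript
// Map solfège (Spanish) note names to English letter names.
// Suffixes such as "m" (minor), "7", or "#" are carried over unchanged.
const SOLFEGE_TO_LETTER = {
  Do: 'C', Re: 'D', Mi: 'E', Fa: 'F', Sol: 'G', La: 'A', Si: 'B',
};

function translateChord(spanishChord) {
  for (const [name, letter] of Object.entries(SOLFEGE_TO_LETTER)) {
    if (spanishChord.startsWith(name)) {
      return letter + spanishChord.slice(name.length);
    }
  }
  return spanishChord; // already in English notation (or unrecognized)
}

// translateChord('Solm7') === 'Gm7'; translateChord('Do#') === 'C#'
```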
Accomplishments that we're proud of
- We are proud to have used our music theory knowledge, and to have learned even more of it, in order to create all of our applications
- We are proud to have successfully implemented AI into a music application
- We are proud that we were able to pool our knowledge and create one cohesive product
- We are proud to have worked through all of our problems and find solutions
- We are proud of making an application that performs genre classification and chord detection, as these features are genuinely helpful for musicians who like to analyze songs and practice playing them on different instruments. Our application focuses on guitar and makes sure to provide correct output to the user.
What we learned
- We learned a lot about music theory and all of the math that goes on behind the scenes
- We learned how to integrate AI into JavaScript and then embed it in websites
- We learned how to measure frequency and use it to compute notes through some fairly advanced math
- We learned how to use open-source ML models and integrate them into our application, all via models on Hugging Face through the Inference API. We also learned how to pick the right model based on its performance results, as choosing the correct model is crucial for the application to return the right results.
What's next for Guitar Hub
We would like to explore two different directions for Guitar Hub:
We would like to take in frequencies from guitar playing, convert them to notes, and then dynamically generate tablature, so that people can create their own sheet music for solos and other playing while they play.
Next, we will work on creating our own model that can process audio and display information that matters in the songmaking process. Say a musician is making a song and would like to use a reference track: our model would suggest chords based on the reference track and also help the user understand different properties of the song, such as its loudness and frequency distribution. These properties are important when making a song that meets the standards of streaming platforms.
Built With
- ai
- artificial-intelligence
- chatgpt
- css3
- express.js
- html5
- javascript
- json
- machine-learning
- node.js
- openai