Inspiration:

In this project, we were inspired by our experiences in a class on music technology and its advancements from the 1950s onward. In the class, we covered the first computer-made music, which repurposed the warning sounds computers emitted upon encountering errors. From there, computers opened a new avenue for music creation, and synthesizers have been a groundbreaking advancement in that field.

Synthesizers create notes from complex sound waves. Each wave can be shaped to mimic acoustic instruments, reuse existing electronic sounds, or make something entirely new in any genre or style. Users must work note by note and instrument by instrument to build a song. This is a complicated process requiring knowledge of sine waves, sawtooth waves, square waves, and more; users must understand how amplitude, frequency, and resonance work to make the simplest of notes a reality. Existing software is not beginner-friendly, often involving far too many parameters and dials, so people looking to create casually or start from scratch are essentially barred from the field unless they have an in-depth education in sound physics.

Our application aims to change that. With a short AI prompt, the user can get waves that mimic their desired instrument or instruments. They can play their new sound with the tap of a button and watch the wave oscillate in real time. With a single follow-up prompt, the user can have the AI refine the sounds and adapt them to their needs, and with the tap of a button or push of a slider, they can easily change the volume, frequency, and more. Music is now a tool for all. With this app, users can learn how synthesizers work in a safe, easy setting made for them and with their dream songs in mind.
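The wave shapes described above come down to very little math. As a hypothetical illustration (not the app's actual synthesis code), here is how sine, sawtooth, and square samples could be generated in TypeScript from a frequency, amplitude, and sample rate:

```typescript
// Hypothetical sketch of the basic oscillator shapes discussed above
// (sine, sawtooth, square); not Synth♪ny's actual synthesis code.
type WaveShape = "sine" | "sawtooth" | "square";

// Returns `length` samples of the chosen wave, each in [-amplitude, amplitude].
function generateWave(
  shape: WaveShape,
  frequency: number,
  amplitude: number,
  sampleRate: number,
  length: number
): number[] {
  const samples: number[] = [];
  for (let i = 0; i < length; i++) {
    const phase = (i * frequency) / sampleRate; // cycles elapsed so far
    const t = phase - Math.floor(phase);        // position within cycle, [0, 1)
    let value: number;
    switch (shape) {
      case "sine":
        value = Math.sin(2 * Math.PI * t);
        break;
      case "sawtooth":
        value = 2 * t - 1; // ramps from -1 up toward 1 each cycle
        break;
      case "square":
        value = t < 0.5 ? 1 : -1; // flips halfway through each cycle
        break;
    }
    samples.push(amplitude * value);
  }
  return samples;
}

// One cycle of a 441 Hz square wave at 44.1 kHz spans exactly 100 samples.
const square = generateWave("square", 441, 1, 44100, 100);
```

Changing `frequency` shifts the pitch and changing `amplitude` shifts the volume, which is exactly what the app's sliders expose without the user touching this math.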

Learned:

Our team learned how to use TypeScript for the front end of the app to make the sections and buttons more intuitive for the user. We also learned how to use and implement Vertex AI as an aid in writing code.

How it was built:

We built the project starting from a bare-bones base that did one thing we wanted, such as creating a single sound. We then expanded the code to let the user refine that one wave continuously, and kept adding and altering features as we went, such as playing multiple sounds at once, a piano, and recording. We utilized Vertex AI, Gemini, HTML, TypeScript, and other AI tools.
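The "multiple sounds at once" step can be reduced to summing sample buffers. A minimal sketch of that idea follows; this is an illustration of the concept, and the real app presumably relies on the Web Audio API's node graph to mix sources natively:

```typescript
// Hypothetical sketch of mixing several sounds into one buffer by
// summing their samples and hard-clipping to [-1, 1]. In the browser,
// the Web Audio API node graph performs this mixing natively.
function mixBuffers(buffers: number[][]): number[] {
  const length = Math.max(...buffers.map((b) => b.length));
  const mixed: number[] = new Array(length).fill(0);
  for (const buffer of buffers) {
    for (let i = 0; i < buffer.length; i++) {
      mixed[i] += buffer[i];
    }
  }
  // Clamp so the combined signal stays within the valid sample range.
  return mixed.map((s) => Math.max(-1, Math.min(1, s)));
}

// Two overlapping sounds: 0.5 + 0.6 clips to 1.0, 0.5 + (-0.5) cancels to 0.
const mix = mixBuffers([
  [0.5, 0.5, -0.2],
  [0.6, -0.5],
]);
```

Starting from a one-sound base, each new feature (refinement, polyphony, the piano) layered onto this same buffer-in, buffer-out pipeline.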

Challenges:

One challenge we faced was getting the graphs to display correctly without breaking everything else. Another was getting the recording to capture audio from the piano and from the composition feature at the same time, and exporting it all as one .WAV file.
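Once the piano and composition audio are mixed into a single buffer, the remaining step is encoding it as a .WAV file. The sketch below shows one standard way to do that for 16-bit mono PCM, writing the RIFF/WAVE header by hand; it is an assumption-laden illustration, and the app's actual export pipeline may differ:

```typescript
// Hypothetical sketch of encoding a mono float buffer (samples in [-1, 1])
// as a 16-bit PCM .WAV file; the app's actual export code may differ.
function encodeWav(samples: number[], sampleRate: number): ArrayBuffer {
  const dataSize = samples.length * 2; // 16-bit = 2 bytes per sample
  const buffer = new ArrayBuffer(44 + dataSize); // 44-byte RIFF/WAVE header
  const view = new DataView(buffer);
  const writeString = (offset: number, s: string) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };
  writeString(0, "RIFF");
  view.setUint32(4, 36 + dataSize, true);   // remaining chunk size
  writeString(8, "WAVE");
  writeString(12, "fmt ");
  view.setUint32(16, 16, true);             // fmt sub-chunk size
  view.setUint16(20, 1, true);              // audio format: PCM
  view.setUint16(22, 1, true);              // channels: mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeString(36, "data");
  view.setUint32(40, dataSize, true);
  samples.forEach((s, i) => {
    const clamped = Math.max(-1, Math.min(1, s));
    view.setInt16(44 + i * 2, Math.round(clamped * 32767), true);
  });
  return buffer;
}

const wav = encodeWav([0, 0.25, -0.25, 1], 44100);
```

In the browser, the resulting `ArrayBuffer` can be wrapped in a `Blob` with type `audio/wav` and offered as a download.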

The Future:

We would like to add a feature that lets the user download the sound waves in other file formats, such as .mp3, and to add more sophisticated soundtracking systems. Other features could include replaying a recorded sound, saving it as another sound wave, and editing it within the app before the final download.

Accomplishments that we’re proud of:

The main accomplishments that we’re proud of are that we not only fully utilized Gemini but also achieved our main goal: Synth♪ny can produce a synth wave sound resembling any instrument, along with recording and downloading separate and mixed sounds. In addition, artificial intelligence was not only used to create Synth♪ny; the app itself uses AI to generate the sounds and help adjust them. We are especially proud because more than half our team is new to hackathons and to seriously applying AI in real-world applications.

Built With

Vertex AI, Gemini, HTML, TypeScript
