Intelligenza: AI-Powered Python to Assembly Compiler
Huimin Sun (huiminsun20@g.ucla.edu), Hadi Ibrahim (Hadiibrahim@ucla.edu), Caleb Alberto (calberto3418@gmail.com), Yinglin Wu (yinglin127@g.ucla.edu)
Inspiration
We wanted to build a tool that accelerates Python execution by using AI to translate Python code directly into optimized assembly. Traditional compilers work in a fixed pipeline: they scan the code into tokens, parse those tokens into syntax trees, and then generate machine code using predefined rules and optimization passes. Intelligenza takes a different approach, one with the potential to improve both the performance and the design of the final machine code. Instead of just applying rule-based optimizations, it uses a large language model to understand the intent and logic of the Python code. It can reimagine the structure of the program, replace inefficient algorithms, and output low-level assembly that is optimized not just in form, but in thinking.
What It Does
Intelligenza is a command-line compiler that takes Python files and generates x86-64 assembly. Unlike traditional compilers, which rely on rule-based optimization techniques, Intelligenza uses OpenAI's large language models (LLMs) to interpret and transform Python code into low-level assembly.

The user interface is designed to be familiar and flexible for developers. Users can select from different OpenAI models, such as o4-mini and GPT-4.1, depending on how much reasoning power or context handling they need. They can also choose a "thinking level" (low, medium, or high) that affects how deeply the model analyzes the code's logic; higher thinking levels may allow the model to restructure inefficient loops or replace algorithms outright.

To mirror traditional compiler behavior, we implemented multiple optimization levels. The -o0, -o1, and -o2 flags control how much transformation is applied: -o0 is a straightforward line-by-line translation, -o1 applies standard optimizations, and -o2 includes deeper algorithmic or structural changes that go beyond surface-level edits.

Once the code is compiled, users can optionally run the resulting binary and delete the intermediate .s assembly file, depending on whether they want to inspect the assembly or just execute the result.

Intelligenza fits within the Tech Innovation & Evolving Workplaces track: it reimagines how LLMs can integrate directly into a developer's workflow by assisting with the compilation pipeline itself.
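To make the option surface concrete, here is a minimal sketch of what the CLI described above could look like using Python's argparse. The flag names -o0/-o1/-o2 and the model and thinking choices come from the description; everything else (function name, defaults, --run/--keep-asm spellings) is our own assumption, not the project's actual interface.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical sketch of the Intelligenza CLI surface."""
    parser = argparse.ArgumentParser(prog="intelligenza")
    parser.add_argument("source", help="input .py file")
    parser.add_argument("--model", choices=["o4-mini", "gpt-4.1"], default="gpt-4.1",
                        help="which OpenAI model translates the code")
    parser.add_argument("--thinking", choices=["low", "medium", "high"], default="low",
                        help="how deeply the model analyzes the code logic")
    # -o0/-o1/-o2 are mutually exclusive, like a traditional compiler's -O levels
    level = parser.add_mutually_exclusive_group()
    level.add_argument("-o0", dest="opt", action="store_const", const=0, default=0)
    level.add_argument("-o1", dest="opt", action="store_const", const=1)
    level.add_argument("-o2", dest="opt", action="store_const", const=2)
    parser.add_argument("--run", action="store_true", help="execute the compiled binary")
    parser.add_argument("--keep-asm", action="store_true", help="keep the intermediate .s file")
    return parser
```

A call like `intelligenza fib.py -o2 --thinking high --run` would then request the deepest transformation level and immediately execute the result.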
How We Built It
We began by creating a basic engine that loads a .py file and sends its contents to OpenAI's API along with a model choice and an instruction prompt.

Next, we implemented optimization-level flags so users can control how much transformation the model performs. This was handled through customized prompt instructions; for instance, the prompt for -o2 encourages the model to use aggressive and intelligent code-restructuring techniques.

One of the major improvements was making the save system dynamic. Instead of generating generic filenames, the output .s file now matches the input Python file's basename. This meant carefully stripping the .py extension and ensuring the filename flowed correctly across saving, compiling, running, and optional deletion.

We also added a model selection system, allowing the user to pick between multiple OpenAI models. This includes selecting a reasoning level, which we simulate by altering how the prompt guides the model's depth of analysis.

To improve usability, we added options to execute the compiled binary and delete the .s file after use. This meant writing clean, error-proof logic for calling GCC, setting executable permissions, running the output, and handling file deletion at the right stage.

Finally, we began work on a benchmarking script to compare how long Python takes to execute versus the compiled assembly. This is still in progress and will help measure whether AI-generated assembly offers meaningful performance benefits.

Team Roles
Huimin: Focused on implementing the model selection logic and creating the system for adjusting reasoning levels.
Yinglin: Designed the save/delete logic for .s files and handled safe file operations after execution.
Hadi: Built the naming system for saving output files and refined the command-line interface for usability and consistency.
Caleb: Worked on the benchmarking system and runtime performance comparisons between Python and compiled binaries.
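The prompt-per-optimization-level idea can be sketched as follows. The exact prompt wording and function names here are ours, not the project's, and the commented-out request shows one plausible way to call the OpenAI Python client.

```python
# Hypothetical system prompts, one per -oN level (wording is ours, not the project's).
OPT_PROMPTS = {
    0: "Translate this Python to x86-64 assembly line by line; do not restructure.",
    1: "Translate this Python to x86-64 assembly, applying standard optimizations.",
    2: ("Translate this Python to x86-64 assembly; aggressively restructure the "
        "program and replace inefficient algorithms where possible."),
}

def build_messages(source_code: str, opt_level: int) -> list:
    """Build the chat messages for a given optimization level."""
    return [
        {"role": "system", "content": OPT_PROMPTS[opt_level]},
        {"role": "user", "content": source_code},
    ]

# The actual request might look like this (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4.1",
#                                        messages=build_messages(code, 2))
# asm = reply.choices[0].message.content
```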
Challenges We Ran Into
Model selection and reasoning simulation: OpenAI models don't have a built-in "thinking level" option, so we had to simulate it through smart prompt design and control over the input structure.
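One simple way to simulate a thinking level is to append a depth-of-analysis instruction to the system prompt. This is our own approximation of the technique described above; the hint wording and names are illustrative.

```python
# Illustrative depth hints; the project's actual prompt text may differ.
THINKING_HINTS = {
    "low": "Answer directly with the assembly.",
    "medium": "Briefly analyze the code's hot paths before emitting assembly.",
    "high": "Reason step by step about algorithmic complexity before emitting assembly.",
}

def with_thinking(system_prompt: str, level: str) -> str:
    """Simulate a 'thinking level' by extending the system prompt."""
    return f"{system_prompt}\n{THINKING_HINTS[level]}"
```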
Filename handling: Ensuring that .py extensions were stripped properly and reused across all stages (saving, compiling, execution, deletion) required careful coordination.
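The basename handling described above is straightforward with pathlib; this is a minimal sketch (the helper name is ours), showing how the .s file and the binary can both be derived from the input path.

```python
from pathlib import Path

def derive_paths(py_file: str):
    """Strip the .py suffix and reuse the basename for the .s file and the binary."""
    src = Path(py_file)
    asm = src.with_suffix(".s")   # e.g. fib.py -> fib.s
    binary = src.with_suffix("")  # e.g. fib.py -> fib
    return asm, binary
```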
Execution and deletion timing: We needed to make sure that the .s file wasn’t deleted too early, especially when the user wanted to run the compiled binary first.
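The ordering constraint can be made explicit by planning the pipeline as a command sequence in which deletion is always appended last. This is a hedged sketch of the idea, not the project's code; the function name and command spellings are ours.

```python
def plan_pipeline(asm_name: str, run_binary: bool, delete_asm: bool) -> list:
    """Plan the compile/run/cleanup steps so the .s file is removed only at the end."""
    binary = asm_name.removesuffix(".s")
    steps = [
        ["gcc", asm_name, "-o", binary],  # assemble and link with GCC
        ["chmod", "+x", binary],          # ensure the output is executable
    ]
    if run_binary:
        steps.append([f"./{binary}"])
    if delete_asm:
        # Appended last, so deletion can never race ahead of compilation or execution.
        steps.append(["rm", asm_name])
    return steps
```

Each planned step could then be handed to `subprocess.run(..., check=True)` in order, so any failure stops the pipeline before the cleanup step.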
Accomplishments That We're Proud Of
We successfully allowed users to pick both the model and the depth of reasoning, exposing these options the same way OpenAI does: as distinct model choices.
We created a clean, intuitive CLI that feels like using a real system-level compiler.
We made the assembly generation process intelligent, not just mechanical, letting the model make optimization decisions beyond fixed rules.
We enabled complete compile-and-run functionality, making it possible to use AI-generated machine code in a real execution environment.
What We Learned
Each team member gained hands-on experience with AI integration, prompt engineering, and system-level programming. We also learned the importance of clean file handling and error-proof CLI design. Working with large models taught us how to communicate clearly through prompts and how to simulate parameters not directly supported by the API.
What's Next for Our Project
In the future, we hope to create a web version where users can upload Python files and download optimized assembly. We are also considering deploying Intelligenza as a cloud service with GitHub integration, usage tracking, and collaborative features. This could make AI-assisted compilation accessible to a much wider audience.
Built With
- openai
- python