Inspiration

Machine learning and quantum machine learning are often gated behind programming knowledge of PyTorch, PennyLane, and several other libraries. We wanted to build a clean, visual platform where students and researchers can develop models, learn about ML and QML architectures, and quickly prototype, test, and reuse the models they create.

What it does

Lasagna is a full-stack, drag-and-drop visual editor for building, training, and testing hybrid neural networks.

  • Users can drag classical (PyTorch) and quantum (PennyLane) layers from a library onto a canvas to create a sequential model, much like snapping together code blocks in Scratch.
  • The editor performs real-time validation of input/output types, ensures that quantum circuits are terminated with measurement nodes, and provides a simple interface to change layer parameters.
  • An error-free model can be saved as a JSON schema and "transpiled" (translated + compiled) into working PyTorch and PennyLane code that can be copied into development projects.
  • After transpiling, the model can be trained directly on the platform using datasets, with a variety of user-adjustable training hyperparameters. Lasagna also performs a preflight check to ensure that layers are correctly structured.
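The shape validation and measurement-termination checks described above can be sketched roughly like this. The node fields (`type`, `domain`, `in_shape`, `out_shape`) and rules are illustrative, not Lasagna's actual schema:

```python
# Illustrative sketch of the kind of preflight checks Lasagna performs.
# Node fields and rules here are hypothetical, not the real schema.

def preflight(node_list):
    """Return a list of error strings for a sequential node list."""
    errors = []
    # Check that adjacent layers have compatible shapes.
    for prev, node in zip(node_list, node_list[1:]):
        if prev["out_shape"] != node["in_shape"]:
            errors.append(
                f"{prev['type']} outputs {prev['out_shape']} but "
                f"{node['type']} expects {node['in_shape']}"
            )
    # Check that every quantum segment ends with a measurement node.
    in_quantum = False
    for node in node_list + [{"type": "end", "domain": "classical"}]:
        if node["domain"] == "quantum":
            in_quantum = node["type"] != "measure"
        elif in_quantum:
            errors.append("quantum segment not terminated by a measurement")
            in_quantum = False
    return errors
```

A well-formed model returns an empty error list; a quantum node followed directly by a classical one (with no measurement in between) is flagged.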

How we built it (architecture flow)

  • Frontend (Node + React): Built with React via Vite (entry point at lasagna/src/main.jsx), the frontend lets users construct hybrid model architectures visually. The model graph forms a LasagnaModel object that encapsulates the layers and node parameters in JSON.
  • Backend (FastAPI): The backend API (lasagna_backend/main.py) handles incoming LasagnaModel objects.
  • Model Encoding: A LasagnaModel contains metadata (model name, input/output shape, device targets) and a node_list: a sequential list of classical and quantum model nodes, along with their parameters and weight shapes.
  • Model Transpilation: The converter (lasagna_backend/transpiler/model_converter.py) unpacks the JSON layers into PyTorch and PennyLane code. Classical layers map to torch.nn modules, while quantum nodes are converted into PennyLane qml.qnode circuits bound into qml.qnn.TorchLayer modules. The transpiler guarantees that quantum segments end with appropriate measurement nodes.
  • Execution, Training, and Inference: Transpiled model code is saved into lasagna_backend/lasagna_models/. The backend then provides endpoints (/train/{model_id} and /inference/{model_id}) in main.py to instantiate the generated PyTorch module, start background training loops, query training status, and evaluate the model via inference.

More info in the README.
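The classical half of the transpilation step can be sketched as a lookup from node types to torch.nn constructor strings. The node names and templates below are illustrative, not the actual mapping in model_converter.py:

```python
# Illustrative node-to-code mapping; templates are hypothetical,
# not the real model_converter.py tables.
TORCH_TEMPLATES = {
    "linear": "nn.Linear({in_features}, {out_features})",
    "relu": "nn.ReLU()",
    "dropout": "nn.Dropout(p={p})",
}

def emit_classical(node_list):
    """Emit nn.Sequential source code for a list of classical nodes."""
    layers = [
        TORCH_TEMPLATES[node["type"]].format(**node.get("params", {}))
        for node in node_list
    ]
    return "nn.Sequential(\n    " + ",\n    ".join(layers) + "\n)"
```

The quantum half works analogously, except that emitted qml template calls are grouped into a qnode and wrapped in a qml.qnn.TorchLayer rather than placed directly in the Sequential.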

Challenges We Ran Into

  • PyTorch and PennyLane differ in a few structural details; for example, a PyTorch forward function does not preserve state between layers, while PennyLane circuits do. Creating the transpiler was therefore more challenging than simply translating layers one-to-one. Referencing the documentation and generating code examples helped us work out exactly how the translation needed to work.
  • The frontend UI design was challenging at times, especially linking nodes together with React Flow and implementing dynamic type checking between nodes.

What we learned

  • Quantum-Classical Interop: We gained an understanding of how PennyLane's TorchLayer acts as the intermediary between quantum circuits and standard gradient-based optimization.
  • Quantum Literacy: PennyLane abstracts away the low-level workings of qubits and wires, but we nonetheless built real intuition about quantum gates, how linear algebra lets matrices and their eigenpairs represent state transformations or encode the energies of quantum systems, and much more!
  • Transpilation Logic: To map between React objects and Python code, we effectively built a domain-specific language for torch.nn modules, qml templates, and a few miscellaneous neural-network utilities. This DSL could easily be extended with new features or even new libraries.
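The "DSL" observation above suggests a natural extension point: a registry where new layer emitters can be added without touching the converter core. A minimal sketch, assuming hypothetical emitter names (this is not Lasagna's actual code):

```python
# Hypothetical registry-based extension point for the transpiler DSL.
TEMPLATE_REGISTRY = {}

def register(node_type):
    """Decorator registering a code-emitting function for a node type."""
    def wrap(fn):
        TEMPLATE_REGISTRY[node_type] = fn
        return fn
    return wrap

@register("linear")
def emit_linear(params):
    return f"nn.Linear({params['in_features']}, {params['out_features']})"

@register("basic_entangler")
def emit_entangler(params):
    # Emits a PennyLane template call as a string; illustrative only.
    return f"qml.BasicEntanglerLayers(weights, wires=range({params['wires']}))"

def emit(node):
    """Dispatch a JSON node to its registered emitter."""
    return TEMPLATE_REGISTRY[node["type"]](node.get("params", {}))
```

Adding support for a new layer, or even a new backend library, then reduces to registering one more emitter function.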

What's next for Lasagna

  • Branching Architectures: Moving beyond sequential chains to support more complex branching paths, e.g. "residual" or "U-Net" style connections.
  • Live Visualization: Integrating a dashboard for real-time loss curves and quantum state-sphere visualizations during training.
  • Multi-Backend Export: Adding support for exporting models to JAX or directly to cloud quantum hardware providers like IBM Quantum or IonQ.
  • DSL Formalization: Formalizing the transpiler DSL and improving it with dynamic updating.
