Inspiration
Generative AI has achieved tremendous success across many use cases, but it is often energy-intensive. In line with GCP's commitment to energy awareness and carbon-footprint reduction, I sought to harness large language models (LLMs) to audit machine energy efficiency and provide actionable insights backed by concrete statistics. 🌍💡
What it does
- Audits machine components such as memory, CPU, disk, GPU, processes, battery, and all available sensors using Python's psutil package.
- Offers a chatbot experience to discuss and identify effective actions based on these metrics. 🤖
- Utilizes the latest Gemini 1.5 models and a straightforward prompt to converse about reducing energy consumption.
- Can be used as a Python package on a local machine or on servers.
- Includes a demonstration on CloudRun services with Docker, so the server's energy consumption can be audited as well.
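The audit described above can be sketched with psutil alone. This is a minimal illustration, not Energemin's actual implementation; the function name and the exact metrics collected are assumptions.

```python
import psutil


def audit_snapshot() -> dict:
    """Collect a basic machine audit with psutil (a minimal sketch)."""
    snapshot = {
        # Percentage of CPU used, sampled over a short interval.
        "cpu_percent": psutil.cpu_percent(interval=0.1),
        # Percentage of physical memory in use.
        "memory_percent": psutil.virtual_memory().percent,
        # Percentage of the root filesystem in use.
        "disk_percent": psutil.disk_usage("/").percent,
        # Number of running processes.
        "process_count": len(psutil.pids()),
    }
    battery = psutil.sensors_battery()  # None on machines without a battery
    if battery is not None:
        snapshot["battery_percent"] = battery.percent
    return snapshot
```

A dictionary like this can then be injected into the chatbot's prompt so the LLM reasons over real numbers rather than generalities.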
How we built it
- Developed with Python 3.11.
- Integrated Chainlit for the chatbot interface.
- Employed FastAPI for backend services.
- Deployed on GCP CloudRun or locally.
- Leveraged Gemini with VertexAI.
- Used Langchain to simplify prompt engineering.
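The "straightforward prompt" used to discuss energy consumption might look something like the sketch below. The wording and field names are hypothetical; the real project wires the prompt through Langchain and Gemini on Vertex AI, which is omitted here to keep the example self-contained.

```python
from string import Template

# Hypothetical system prompt; Energemin's actual wording may differ.
ENERGY_PROMPT = Template(
    "You are an energy-efficiency auditor for a machine.\n"
    "Current metrics:\n"
    "- CPU usage: $cpu_percent%\n"
    "- Memory usage: $memory_percent%\n"
    "- Disk usage: $disk_percent%\n"
    "Suggest concrete actions to reduce this machine's energy consumption."
)


def build_prompt(metrics: dict) -> str:
    """Fill the prompt template with live audit metrics."""
    return ENERGY_PROMPT.substitute(metrics)
```

The resulting string would be sent as the user or system message of a Gemini chat call, with Chainlit rendering the conversation.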
Challenges we ran into
- Difficulty in accessing comprehensive and accurate metrics.
- Limited access to GCP service metrics.
- Limitations of Python's psutil package in measuring machine energy consumption.
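The psutil limitation is easy to demonstrate: the library has no direct power-draw API, and its closest proxies (battery and temperature sensors) are platform-dependent. The probe below is a sketch; the function name is an assumption.

```python
import psutil


def available_energy_sensors() -> dict:
    """Probe which energy-related psutil sensors exist on this platform.

    psutil exposes no direct power-draw reading; battery and temperature
    sensors are the closest proxies, and even those vary by platform.
    """
    return {
        # sensors_battery() returns None on desktops and servers.
        "battery": psutil.sensors_battery() is not None,
        # sensors_temperatures() is only defined on some platforms (e.g. Linux),
        # and may return an empty dict inside containers.
        "temperatures": bool(
            getattr(psutil, "sensors_temperatures", lambda: {})()
        ),
    }
```

On a Cloud Run container both probes typically come back False, which is why the project fell back on CPU, memory, and disk utilization as energy proxies.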
Accomplishments that we're proud of
- Successfully created a functional chatbot experience.
- Achieved impressive results in utilizing metrics on both laptops and GCP CloudRun services.
- Implemented live recommendations from Energemin.
- Simplified access to real measurements, making the tool user-friendly and practical.
What we learned
- GCP CloudRun and Vertex AI with Python
- Available metrics on energy consumption are hard to access.
- Limited availability of tools for assessing carbon emissions and energy consumption in IT
What's next for Energemin
- Integrate Nvidia GPU metrics to monitor energy consumption and raise awareness of GPU usage in training and deploying machine learning models.
- Develop continuous monitoring capabilities.
- Enable access to LLM metrics within Vertex AI.
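The planned Nvidia GPU integration could build on NVML, which does report instantaneous power draw. The sketch below assumes the nvidia-ml-py package (`pynvml`) and an Nvidia driver are installed; it degrades gracefully to None elsewhere.

```python
def gpu_power_watts(index: int = 0):
    """Read instantaneous GPU power draw via NVML, or None if unavailable.

    A sketch of the planned Nvidia integration, not shipped code; assumes
    the nvidia-ml-py package (pynvml) and an Nvidia driver are present.
    """
    try:
        import pynvml

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        milliwatts = pynvml.nvmlDeviceGetPowerUsage(handle)  # reported in mW
        pynvml.nvmlShutdown()
        return milliwatts / 1000.0
    except Exception:
        return None  # no pynvml, no driver, or no Nvidia GPU on this machine
```

Feeding this reading into the same chatbot prompt would extend the audit to ML training and inference workloads.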