Hey folks! 👋
Super excited to share something I've been working on for a while:
DoCoreAI, a dynamic temperature profiler for LLMs that auto-optimizes your prompts without manual tuning.
When working with AI models like GPT or Mixtral via OpenAI or Groq, I noticed a lot of devs (including me) were stuck tweaking prompts manually: changing the temperature, adding random system messages, and guessing what might help the output.
Fine-tuning? Too expensive and time-consuming.
Prompt engineering? Trial and error.
DoCoreAI takes your raw prompt input and dynamically adjusts:
- Precision
- Creativity
- Reasoning style
- Temperature
All based on the context of your input, giving you more accurate, useful, and optimized responses on the fly.
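To make the idea concrete, here's a minimal sketch of what "context-aware temperature" can look like. This is a hypothetical illustration, not DoCoreAI's actual implementation: the function names, keyword heuristics, and the creativity-to-temperature mapping are all assumptions for demonstration.

```python
# Hypothetical sketch (NOT DoCoreAI's real code): infer a rough
# intent profile from the prompt text, then map it to a sampling
# temperature so creative asks run hotter and factual asks cooler.
def profile_prompt(prompt: str) -> dict:
    text = prompt.lower()
    if any(w in text for w in ("brainstorm", "story", "imagine")):
        profile = {"creativity": 0.9, "precision": 0.3}
    elif any(w in text for w in ("calculate", "extract", "exact")):
        profile = {"creativity": 0.2, "precision": 0.9}
    else:
        profile = {"creativity": 0.5, "precision": 0.5}
    # Map creativity onto a temperature in roughly [0.2, 1.0].
    profile["temperature"] = round(0.2 + profile["creativity"] * 0.8, 2)
    return profile
```

The real library reasons over the full context rather than keyword matching, but the shape is the same: prompt in, tuned generation parameters out.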
It works with both OpenAI and Groq, and it's completely open source.
This started as a side project while experimenting with customer support AI.
I wanted AI to adapt intelligently to user intent, without hardcoding tons of prompt templates or tweaking settings manually.
Now it's evolved into something I believe can help:
- Developers
- Prompt engineers
- AI researchers
- Anyone building AI-powered tools
Tech stack:
- Python + FastAPI
- OpenAI & Groq support
- Swagger Docs for easy testing
- Cosine similarity to reuse past "intelligence profiles"
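That last point works roughly like an embedding cache: if a new prompt's embedding is close enough to one seen before, the previously computed profile can be reused instead of recomputed. A minimal sketch, assuming a simple list of cached entries (the function names, cache shape, and threshold are my assumptions, not the project's API):

```python
# Sketch of profile reuse via cosine similarity (assumed design,
# not DoCoreAI's actual code or cache format).
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_reusable_profile(query_vec, cached, threshold=0.9):
    """Return a cached profile whose embedding clears the threshold, else None."""
    best, best_sim = None, -1.0
    for entry in cached:
        sim = cosine_similarity(query_vec, entry["embedding"])
        if sim > best_sim:
            best, best_sim = entry, sim
    if best is not None and best_sim >= threshold:
        return best["profile"]
    return None  # no close match: compute a fresh profile
```

The threshold trades freshness for speed: higher values recompute more often but keep profiles tightly matched to the prompt.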
Check it out on GitHub: https://github.com/SajiJohnMiranda/DoCoreAI
Or explore the idea behind it: Reddit
Would love thoughts, issues, stars ⭐, and feedback from fellow builders!
Thanks, Huzzler fam!
Let's build smarter AI tools without overengineering the hard parts.