Will there be a follow-up toolkit for Artificial General Intelligence called Agitator?
Dumb jokes aside, I took a look at your GitHub page, and this is exactly what I've been looking for when I do local LLM work. Cogitator seems like a nice, pythonic approach vs. using the raw `ollama run` command, esp. given the focus on chain of thought. I think I'll start using this tool. Nice work!
habedi0 4 days ago [-]
Thanks.
tomaytotomato 5 days ago [-]
Very nice, will there be support for other models in the future?
This technology is not heretical, praise the Omnissiah!
habedi0 5 days ago [-]
Lol. No. Not heretical at all.
I might add support for other model providers (like Google and Azure) in the future, although I'm trying to keep the scope of the project small because that makes it easier for me to maintain.
Anyway, I think adding a new LLM provider is pretty straightforward if you want to do it yourself: you just need to implement the BaseLLM API (see the `cogitator/model/base.py` file) for your provider. After that, you use it the same way you'd use OllamaLLM or OpenAILLM.
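Roughly, a custom provider might look something like the sketch below. This is only an illustration: the actual abstract methods are whatever `cogitator/model/base.py` declares, and the `generate` signature and the `EchoLLM` class here are assumptions for the example, not the real interface.

```python
# Hypothetical sketch of a custom provider. Check cogitator/model/base.py
# for the real abstract methods BaseLLM requires; `generate` is assumed here.
from cogitator.model.base import BaseLLM


class EchoLLM(BaseLLM):
    """Toy provider that just echoes the prompt back.

    A real provider (e.g. Google or Azure) would call the provider's
    SDK or HTTP API inside generate() instead.
    """

    def __init__(self, model_name: str = "echo-1"):
        self.model_name = model_name

    def generate(self, prompt: str, **kwargs) -> str:
        # Replace this with an actual API call to your provider.
        return f"[{self.model_name}] {prompt}"


# Then pass an instance of it wherever you'd pass OllamaLLM or OpenAILLM,
# e.g. as the `llm` argument of one of the CoT strategies.
```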
nico 5 days ago [-]
Very interesting. I saw the examples and would have loved to see the results, maybe some text showing the whole process or a little gif/video
Great work
habedi0 5 days ago [-]
Sounds like a great idea. In the next release, I'll add a visualization of how things work and relate to each other, and possibly also include some benchmark results.
nico 5 days ago [-]
Cool, yes benchmark results are great to show
Also, are you using this tool as part of another project? It’d be interesting to see what the main applications of CoT prompting are (the examples are great but a little basic)
habedi0 5 days ago [-]
I'm not using it in a larger project at the moment. The examples right now are mainly included to help people get started quickly. About the applications, they are somewhat context-dependent, but I might add one or two larger examples later if I have the time.
I guess CoT prompting could be used for ARC Prize (https://arcprize.org)