Tough to answer. It depends on the LLM, on the API provider or hosting method, and on the prompt.
But you could get a simple estimate by running a CoT prompt, copying the completion text, and then pasting the chain-of-thought portion into an online token counter to get its length in tokens.
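For a quick back-of-the-envelope version of that, here's a sketch. It assumes the completion ends with a recognizable answer marker (the `"Final answer:"` string here is hypothetical, pick whatever your prompt actually uses) and applies the rough ~4-characters-per-token heuristic for English text; for an exact count you'd run the text through the model's own tokenizer instead.

```python
def estimate_cot_tokens(completion: str, answer_marker: str = "Final answer:") -> int:
    """Rough token estimate for the chain-of-thought part of a completion.

    Splits on a hypothetical answer marker, then applies the common
    ~4 characters-per-token heuristic for English text. This is only
    an estimate; use the model's actual tokenizer for exact counts.
    """
    cot_part = completion.split(answer_marker)[0]
    return max(1, round(len(cot_part) / 4))


completion = (
    "First, note that 12 * 7 = 84. Then subtract 4 to get 80. "
    "Final answer: 80"
)
print(estimate_cot_tokens(completion))
```

The heuristic is crude, but it's usually within a factor of two for English prose, which is enough to compare CoT overhead across prompts.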
The issue is that I can give it a try on a set of toy problems, but finding a realistic, real-world-scale problem is a totally different story IMHO. That is exactly why I asked about people's "experiences".
Anyway, thanks for the response.
> real-world-class problem
The issue is that this notion is nonsense. There are just problems; nothing about a problem makes it "world class", whatever that means.
Fair enough, it's "nonsense". Curious to hear about YOUR experience, then. Does it still sound like "nonsense"?