One reason some observers are skeptical of DeepSeek's cost claims (for both training and inference) is that its architecture, including the use of "chain of thought" reasoning for inference, appears to require more processing, not less, than the "simple inference" models used by ChatGPT, for example.
Granted, chain-of-thought models break a problem down into smaller steps, showing the reasoning at each step, so one might argue that each individual step requires less processing. On the other hand, more steps must occur, and the model must generate the tokens for every one of them.
The net result is that chain-of-thought models require more processing power and time than simple inference approaches, though CoT is generally viewed as better suited to more complex problems.
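As a rough illustration of why, here is a back-of-envelope sketch in Python. It relies on the common ~2N FLOPs-per-generated-token rule of thumb for transformer decoding; the parameter count and token counts below are hypothetical assumptions for illustration, not DeepSeek's actual figures.

```python
# Back-of-envelope sketch: per-query decode compute scales roughly
# linearly with the number of tokens generated. All numbers below
# are illustrative assumptions, not measured DeepSeek values.

ACTIVE_PARAMS = 37e9                   # assumed active parameters per token
FLOPS_PER_TOKEN = 2 * ACTIVE_PARAMS    # standard ~2N FLOPs-per-token estimate

direct_tokens = 50     # hypothetical short, direct answer
cot_tokens = 1500      # hypothetical answer plus chain-of-thought trace

direct_flops = direct_tokens * FLOPS_PER_TOKEN
cot_flops = cot_tokens * FLOPS_PER_TOKEN

print(f"direct answer : {direct_flops:.2e} FLOPs")
print(f"CoT answer    : {cot_flops:.2e} FLOPs")
print(f"CoT overhead  : {cot_flops / direct_flops:.0f}x")
```

On these toy numbers the CoT answer costs about 30 times the direct one; whatever the exact figures, per-query cost grows with the number of tokens emitted.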
Still, the CoT "more processing" profile is hard to square with claims that DeepSeek requires less compute.