Perplexity has released its Deep Research feature, using several techniques to improve results for complex queries.
Deep Research combines chain-of-thought (CoT) prompting, dynamic computation allocation, and hybrid neural-symbolic processing. In short, Deep Research reasons about how to solve a problem and dynamically adjusts the abstraction level of its reasoning steps to match the problem's complexity.
Answers take longer than with the (currently) more common linear reasoning, since the model performs more reasoning steps and therefore more computation. For that reason I would not recommend using it for simple queries: you'll get answers faster with the legacy approach.
Chain-of-thought prompting guides language models to break down complex problems into a series of logical steps, sometimes improving answer accuracy.
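As a rough illustration of the idea (not Perplexity's actual implementation, which is not public), a minimal chain-of-thought prompt can be built by wrapping the user's question in a step-by-step instruction before sending it to a model. The `build_cot_prompt` helper and the example question below are hypothetical:

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a simple chain-of-thought instruction.

    A hypothetical sketch: the exact wording production systems use varies.
    """
    return (
        f"Question: {question}\n"
        "Let's think step by step, writing out each intermediate "
        "reasoning step before stating the final answer."
    )

# Example usage with a toy arithmetic question
prompt = build_cot_prompt(
    "If a train travels 120 km in 1.5 hours, what is its average speed?"
)
print(prompt)
```

The resulting prompt nudges the model to emit its intermediate reasoning, which is the mechanism behind the accuracy gains mentioned above.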