Friday, February 21, 2025

Perplexity Activates "Deep Research" Using Chain of Thought

Perplexity has released its Deep Research feature, using several techniques to improve results for complex queries. 


Deep Research combines chain-of-thought (CoT) prompting, dynamic computation allocation, and hybrid neural-symbolic processing. In short, Deep Research reasons about how to solve a problem and dynamically adjusts the abstraction level of its reasoning steps based on the problem's complexity.
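
To give a feel for what "dynamic computation allocation" means, here is a toy sketch in Python. This is purely illustrative and not Perplexity's actual implementation; the complexity heuristic and the run_reasoning_step stub are assumptions standing in for a real model call.

```python
# Illustrative sketch only -- not Perplexity's implementation.
# run_reasoning_step() is a stub standing in for a real model call.

def estimate_complexity(query: str) -> float:
    """Crude heuristic: longer, multi-part questions score higher (0..1)."""
    score = len(query.split()) / 50.0 + query.count("?") * 0.2
    return min(score, 1.0)

def run_reasoning_step(query: str, trace: list[str]) -> str:
    """Stub for a model call that extends the reasoning trace by one step."""
    return f"step {len(trace) + 1}: refine answer to {query!r}"

def deep_research(query: str, max_steps: int = 8) -> list[str]:
    """Spend more reasoning steps on queries estimated to be more complex."""
    budget = max(1, round(estimate_complexity(query) * max_steps))
    trace: list[str] = []
    for _ in range(budget):
        trace.append(run_reasoning_step(query, trace))
    return trace

print(deep_research(
    "Compare the long-term economic effects of carbon taxes versus "
    "cap-and-trade, and explain which suits small open economies better?"))
```

The point of the sketch is just the shape of the idea: harder queries get a larger reasoning budget, so compute scales with problem difficulty instead of being fixed per request.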


Answers take longer than with the (currently more common) linear approach, because the model does more reasoning and therefore more computing, so I would not recommend Deep Research for simple queries. You’ll get answers faster with the standard mode.


Chain-of-thought prompting guides language models to break down complex problems into a series of logical steps, sometimes improving answer accuracy. 
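
To make that concrete, here is a minimal chain-of-thought prompt sketch, assuming an OpenAI-compatible chat API; the model name and client setup are assumptions for illustration, not part of Perplexity's product.

```python
# Minimal chain-of-thought prompt sketch, assuming an OpenAI-compatible API.
# Model name and API key setup are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

cot_prompt = (
    "A train leaves at 9:40 and arrives at 12:05. How long is the trip?\n"
    "Let's think step by step, then state the final answer on its own line."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works
    messages=[{"role": "user", "content": cot_prompt}],
)

print(response.choices[0].message.content)
```

The only change from a plain prompt is the instruction to reason step by step before answering, which is what nudges the model to lay out intermediate steps instead of jumping straight to a final answer.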



