Published on: April 19, 2025 / Updated on: April 19, 2025 - Author: Konrad Wolfenstein

AI open-source alternative: Together AI publishes the open-source "Open Deep Research" for detailed web research - Image: Xpert.digital
Structured, source-based, powerful: Together AI takes Deep Research to a new level
Together AI introduces "Open Deep Research": an open-source alternative to OpenAI's Deep Research
On April 16, 2025, Together AI released "Open Deep Research", an open-source system for structured web research designed as an alternative to OpenAI's Deep Research. The tool can answer complex questions through multi-step web research and produce comprehensive, source-based reports. In contrast to proprietary solutions, Together AI publicly provides the complete code, datasets, and system architecture in order to encourage community-driven further development.
Suitable for:
- OpenAI Deep Research: a hybrid approach is recommended for users, with AI deep research as an initial screening tool
The architecture of Open Deep Research
Open Deep Research uses a four-stage workflow that imitates the human research process. The process begins with a planning step in which an AI model creates a list of relevant search queries. Matching content is then collected from the web via the Tavily search API. An evaluation model then checks whether any knowledge gaps remain before a writing model finally produces the final report.
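As a rough illustration of this control flow, the following Python sketch chains the four stages together. The helper names, the single generic model, and the stubbed search step are assumptions made for readability, not the project's actual implementation (the real system assigns a dedicated model to each stage, as described below).

```python
# Illustrative sketch of the four-stage loop: plan -> search -> evaluate -> write.
# Helper names, model choice, and the stubbed search are assumptions, not the real code.
import os
from together import Together  # pip install together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])
MODEL = "Qwen/Qwen2.5-72B-Instruct-Turbo"  # assumed model ID; the real system uses several models


def ask(prompt: str) -> str:
    """Send one prompt to a Together-hosted model and return the generated text."""
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


def search_web(query: str) -> str:
    """Placeholder for the Tavily search step (see the Tavily example further below)."""
    return f"[web content for: {query}]"


def research(question: str, max_rounds: int = 2) -> str:
    # Stage 1: planning - the model drafts a list of search queries.
    queries = ask(f"List web search queries, one per line, to answer: {question}").splitlines()
    notes: list[str] = []
    for _ in range(max_rounds):
        # Stage 2: collect content for each query.
        notes += [search_web(q) for q in queries if q.strip()]
        # Stage 3: evaluation - check whether knowledge gaps remain.
        gaps = ask("Name open knowledge gaps, or reply DONE:\n" + "\n".join(notes))
        if "DONE" in gaps:
            break
        queries = gaps.splitlines()  # turn the gaps into follow-up queries
    # Stage 4: writing - produce the final, source-based report.
    return ask(f"Write a sourced report answering '{question}' using:\n" + "\n".join(notes))


if __name__ == "__main__":
    print(research("How does Together AI's Open Deep Research work?"))
```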
Together AI's distinctive approach lies in using different specialized models for different tasks in the workflow, a so-called mixture-of-agents (MoA) approach. The following AI models are used in the implementation:
- Planner: Qwen2.5-72B-Instruct-Turbo from Alibaba for planning and reasoning
- Summarizer: Llama 3.3-70B-Instruct-Turbo from Meta for summarizing long web content
- JSON extractor: Llama 3.1-70B-Instruct-Turbo from Meta for structured information extraction
- Report writer: DeepSeek-V3 for aggregating information and producing high-quality research reports
To handle longer texts, the summarization model compresses the content and assesses its relevance. This prevents the language models' context windows from overflowing.
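A minimal sketch of this mixture-of-agents division of labor could look as follows. The exact Together AI model identifiers and the chunk size are assumptions; the actual repository wires these roles together with considerably more prompt engineering.

```python
# Sketch of the mixture-of-agents (MoA) role assignment and the compacting summarizer.
# Model identifiers and chunk size are assumptions; consult the repository for the real values.
import os
from together import Together  # pip install together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

# One specialized model per workflow role, as described above.
MODELS = {
    "planner": "Qwen/Qwen2.5-72B-Instruct-Turbo",
    "summarizer": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
    "json_extractor": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
    "writer": "deepseek-ai/DeepSeek-V3",
}


def ask(role: str, prompt: str) -> str:
    """Route a prompt to the model assigned to the given workflow role."""
    resp = client.chat.completions.create(
        model=MODELS[role], messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


def compact_summary(question: str, page_text: str, chunk_chars: int = 8000) -> str:
    """Summarize long web content chunk by chunk, so downstream models only ever
    see a compact, relevance-filtered digest instead of the full page text."""
    chunks = [page_text[i:i + chunk_chars] for i in range(0, len(page_text), chunk_chars)]
    partial = [
        ask("summarizer", f"Summarize only the parts relevant to '{question}':\n{chunk}")
        for chunk in chunks
    ]
    return "\n".join(partial)
```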
Technical stack and integration
Technically, the models are served via Together AI's own cloud platform. Web search and content retrieval are handled by Tavily; a particular advantage is that both the search and the retrieval of website content can be performed in a single API call.
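The snippet below sketches this combined call using the official tavily-python client; treat the parameter choices (search depth, number of results, raw-content flag) as assumptions rather than the exact settings used by Open Deep Research.

```python
# Sketch: one Tavily API call returns both the search hits and the extracted page content.
# Parameter values are assumptions, not the exact settings used by Open Deep Research.
import os
from tavily import TavilyClient  # pip install tavily-python

tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])


def search_web(query: str) -> list[dict]:
    """Run a web search and fetch page content in a single request."""
    response = tavily.search(
        query=query,
        search_depth="advanced",   # deeper crawl for research-style queries
        max_results=5,
        include_raw_content=True,  # also return the extracted page text
    )
    return [
        {
            "url": hit["url"],
            "snippet": hit["content"],
            "page_text": hit.get("raw_content") or "",
        }
        for hit in response["results"]
    ]


if __name__ == "__main__":
    for hit in search_web("Together AI Open Deep Research architecture"):
        print(hit["url"], len(hit["page_text"]), "characters of page text")
```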
The processing time for a typical request is between 2 and 5 minutes, depending on the complexity of the request and the number of evaluation and reflection loops.
Multimodal outputs and extended functions
Open Deep Research is not limited to text output; it also offers a number of multimodal functions:
- HTML output: results are presented in a structured HTML format that combines text and visual elements
- Diagrams: automatic creation of diagrams via the JavaScript library Mermaid.js (a sketch follows below)
- Cover images: generation of thematically suitable images using Black Forest Labs' Flux models
- Podcast function: automatic creation of a compact audio podcast that summarizes the report's main points using the Sonic speech models from Cartesia
These multimodal output formats enable a more comprehensive and appealing presentation of the researched information.
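To illustrate the diagram step, the following sketch asks a model for Mermaid code and embeds it in a small HTML page. The prompt, the model choice, and the HTML scaffold are assumptions, not the report template actually shipped with Open Deep Research.

```python
# Sketch: generate a Mermaid diagram from report text and embed it in an HTML page.
# Prompt, model choice, and HTML scaffold are assumptions, not the project's real template.
import os
from together import Together  # pip install together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])


def mermaid_from_report(report: str) -> str:
    """Ask a model to turn the report's key structure into a Mermaid flowchart."""
    resp = client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct-Turbo",  # assumed model ID
        messages=[{
            "role": "user",
            "content": "Return only Mermaid flowchart code (no prose) that "
                       "visualizes the main findings of this report:\n" + report,
        }],
    )
    return resp.choices[0].message.content


def render_html(report_html: str, mermaid_code: str) -> str:
    """Embed the report body and the diagram; Mermaid.js renders it in the browser."""
    return f"""<!doctype html>
<html>
  <body>
    {report_html}
    <pre class="mermaid">{mermaid_code}</pre>
    <script type="module">
      import mermaid from "https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.esm.min.mjs";
      mermaid.initialize({{ startOnLoad: true }});
    </script>
  </body>
</html>"""
```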
Performance evaluation and benchmarks
Together AI evaluated the performance of Open Deep Research using three popular benchmarks:
- FRAMES: tests multi-step logical reasoning
- SimpleQA: examines factual knowledge
- HotpotQA: evaluates multi-hop questions that require several reasoning steps
On all three benchmarks, Open Deep Research performed considerably better than base models without search tools. Compared with similar open systems such as LangChain's Open Deep Research (LDR) and Hugging Face's smolagents (search CodeAgent), the system also usually achieved higher answer quality.
A particularly important finding of the evaluation was that several consecutive research steps significantly improve answer quality. When the system was limited to a single search pass, accuracy dropped noticeably.
Known limitations and challenges
Despite this progress, Together AI points out various limitations of its system:
- Error propagation: errors in early steps of the workflow can propagate through the entire pipeline and lead to incorrect final results
- Hallucinations: hallucinations can occur when interpreting sources, especially with ambiguous or contradictory information
- Structural bias: bias in training data or search indices can influence the results
- Timeliness: topics that require very current information or have little web coverage pose a particular challenge
- Caching problems: the implemented caching reduces costs, but without an appropriate expiry time it leads to the delivery of outdated information (a sketch follows below)
These limitations are typical of current AI research tools and represent important challenges for future improvements.
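To illustrate the caching point, here is a minimal sketch of a search cache with an expiry time (TTL), so that stale entries expire instead of being served indefinitely. The TTL value and the in-memory design are assumptions; the project's actual caching layer may look different.

```python
# Sketch: a search-result cache with a time-to-live (TTL). Entries older than the TTL
# are refetched, which avoids serving outdated information while still saving API costs.
import time
from typing import Callable


class TTLCache:
    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def get_or_fetch(self, key: str, fetch: Callable[[], object]) -> object:
        now = time.monotonic()
        entry = self._store.get(key)
        if entry and now - entry[0] < self.ttl:
            return entry[1]              # still fresh: reuse the cached result
        value = fetch()                  # expired or missing: fetch again
        self._store[key] = (now, value)
        return value


if __name__ == "__main__":
    cache = TTLCache(ttl_seconds=3600)
    # In practice, fetch would wrap the web-search call for a given query.
    result = cache.get_or_fetch("example query", lambda: {"hits": ["..."]})
    print(result)
```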
Suitable for:
- Gemini Deep Research 2.0 - Google AI model upgrade: information about Gemini 2.0 Flash, Flash Thinking and Pro (Experimental)
Open Deep Research compared to other offerings
The development of deep research functions is currently a trend among AI providers. OpenAI originally introduced the concept, but Google, Grok, and Perplexity now offer similar functions as well. Anthropic also recently presented an agent-based research function for its Claude model.
Hugging Face had already presented an open-source alternative shortly after OpenAI's release but did not develop it further. Perplexity, as an AI search engine, offers a free alternative to ChatGPT's Deep Research, allowing users to run up to five "Deep Research" searches per day.
In contrast to closed, paid systems such as OpenAI's Deep Research (part of the ChatGPT Pro subscription at about $200 per month), Together AI offers a completely open, open-source alternative.
Community focus and extensibility
Together AI deliberately designed Open Deep Research as an open platform that the community can extend and improve. The architecture was built to be easily extensible: developers can integrate their own models, adjust data sources, or add new output formats.
The complete code and documentation were published on GitHub, together with an evaluation dataset and detailed explanations on the company blog. Together AI sees its system as a basis for further experiments and improvements from the open-source community.
This openness contrasts with the closed approaches of other large AI companies and reflects Together AI's broader commitment to open-source AI, which was also evident in earlier projects such as the recent release of an open-source coding model at the level of o3-mini, but with significantly fewer parameters than its closed competitors.
Significance for the AI research landscape
The release of Open Deep Research by Together AI marks an important step in the democratization of advanced AI research tools. By combining powerful AI models, structured multi-stage web research, and multimodal output formats, the system offers a promising alternative to proprietary solutions.
The open approach enables developers and researchers to adapt, extend, and improve the system to suit their needs. In the long term, this could lead to more innovative and diverse applications than would be possible with closed systems.
Although challenges remain, especially with regard to hallucinations, bias, and timeliness, Together AI's Open Deep Research shows that powerful AI research tools do not have to be limited to proprietary platforms. The initiative not only promotes open access to advanced AI technology but also contributes to transparency and traceability, important factors for trust in AI-supported research results.