
How to Create a Multi-Agent System Using CAMEL with Web-Augmented reasoning, Criticism, and Persistent memory

Tech · By Gavin Wallace · 30/12/2025 · 5 Mins Read
This tutorial shows you how to build an advanced, end-to-end multi-agent workflow with the CAMEL framework. We build a coordinated team of agents (Planner, Researcher, Writer, Critic, and Finalizer) that collaborate to turn a high-level topic into an evidence-based research report. We securely integrate the OpenAI API, orchestrate agent interactions programmatically, and add lightweight persistent memory to maintain knowledge between runs. By structuring the system around roles and JSON contracts, we demonstrate that CAMEL is a powerful tool for building reliable, scalable pipelines. Check out the FULL CODES here.

!pip -q install "camel-ai[all]" "python-dotenv" "rich"


import os
import json
import time
from typing import Any, Dict
from rich import print as rprint


def load_openai_key() -> str:
    key = None
    try:
        # Prefer Colab Secrets when running in Google Colab.
        from google.colab import userdata
        key = userdata.get("OPENAI_API_KEY")
    except Exception:
        key = None
    if not key:
        # Fall back to a hidden interactive prompt.
        import getpass
        key = getpass.getpass("Enter OPENAI_API_KEY (hidden): ").strip()
    if not key:
        raise ValueError("OPENAI_API_KEY is required.")
    return key


os.environ["OPENAI_API_KEY"] = load_openai_key()

We set up the execution environment and securely load the OpenAI API key, using Colab Secrets when available or a hidden getpass prompt otherwise. Installing dependencies and configuring authentication this way makes the runtime ready to go without exposing credentials. See the FULL CODES here.
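Outside Colab, the same idea can be sketched with a plain environment-variable lookup; `load_key_from_env` below is a hypothetical helper for illustration, not part of CAMEL or the tutorial's code:

```python
import os


def load_key_from_env(var: str = "OPENAI_API_KEY") -> str:
    # Read an already-set environment variable and fail loudly otherwise,
    # so the pipeline never runs with missing credentials.
    key = os.environ.get(var, "").strip()
    if not key:
        raise ValueError(f"{var} is required.")
    return key


os.environ["OPENAI_API_KEY"] = "sk-demo"  # placeholder value for demonstration only
print(load_key_from_env())
```

The key never appears in the notebook's source or output history, which is the same property the Colab-Secrets path provides.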

from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType
from camel.agents import ChatAgent
from camel.toolkits import SearchToolkit


MODEL_CFG = {"temperature": 0.2}


model = ModelFactory.create(
   model_platform=ModelPlatformType.OPENAI,
   model_type=ModelType.GPT_4O,
   model_config_dict=MODEL_CFG,
)

We initialize the CAMEL configuration and create a shared language-model instance through the ModelFactory abstraction. Standardizing model settings across all agents ensures consistent reasoning throughout the multi-agent pipeline. See the FULL CODES here.
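If you later want role-specific settings while keeping the shared baseline, one simple pattern is to merge per-agent overrides over the common config. The `agent_config` helper below is illustrative, not a CAMEL API:

```python
# Shared baseline config, mirroring MODEL_CFG from the tutorial.
MODEL_CFG = {"temperature": 0.2}


def agent_config(overrides=None) -> dict:
    # Start from the shared baseline so every agent behaves consistently,
    # then layer role-specific overrides on top.
    cfg = dict(MODEL_CFG)
    cfg.update(overrides or {})
    return cfg


print(agent_config())                      # baseline used by most agents
print(agent_config({"temperature": 0.7}))  # e.g. a more exploratory Writer
```

Each per-role dict could then be passed as `model_config_dict` to its own `ModelFactory.create` call.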

MEM_PATH = "camel_memory.json"


def mem_load() -> Dict[str, Any]:
    if not os.path.exists(MEM_PATH):
        return {"runs": []}
    with open(MEM_PATH, "r", encoding="utf-8") as f:
        return json.load(f)


def mem_save(mem: Dict[str, Any]) -> None:
    with open(MEM_PATH, "w", encoding="utf-8") as f:
        json.dump(mem, f, ensure_ascii=False, indent=2)


def mem_add_run(topic: str, artifacts: Dict[str, str]) -> None:
    mem = mem_load()
    mem["runs"].append({"ts": int(time.time()), "topic": topic, "artifacts": artifacts})
    mem_save(mem)


def mem_last_summaries(n: int = 3) -> str:
    mem = mem_load()
    runs = mem.get("runs", [])[-n:]
    if not runs:
        return "No past runs."
    return "\n".join([f"{i+1}. topic={r['topic']} | ts={r['ts']}" for i, r in enumerate(runs)])

We implement a lightweight persistent memory layer backed by a JSON file. Each run's artifacts are stored, and summaries of previous sessions can be retrieved, which lets us maintain continuity across runs. Check out the FULL CODES here.
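The load-append-save pattern can be seen in isolation with a self-contained sketch that writes to a temporary file (all names here are local to the demo, not the tutorial's `mem_*` functions):

```python
import json
import os
import tempfile
import time

# Demo-local path so the sketch runs anywhere without touching camel_memory.json.
mem_path = os.path.join(tempfile.gettempdir(), "camel_memory_demo.json")
if os.path.exists(mem_path):
    os.remove(mem_path)


def demo_add_run(topic: str) -> None:
    # Load existing memory (or start fresh), append one run, write it back.
    mem = {"runs": []}
    if os.path.exists(mem_path):
        with open(mem_path, "r", encoding="utf-8") as f:
            mem = json.load(f)
    mem["runs"].append({"ts": int(time.time()), "topic": topic})
    with open(mem_path, "w", encoding="utf-8") as f:
        json.dump(mem, f, ensure_ascii=False, indent=2)


demo_add_run("agentic workflows")
demo_add_run("quality control")
with open(mem_path, encoding="utf-8") as f:
    runs = json.load(f)["runs"]
print([r["topic"] for r in runs])  # → ['agentic workflows', 'quality control']
```

Because each call round-trips through the file, the history survives process restarts, which is exactly what gives the pipeline continuity between sessions.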

def make_agent(role: str, goal: str, extra_rules: str = "") -> ChatAgent:
    system = (
        f"You are {role}.\n"
        f"Goal: {goal}\n"
        f"{extra_rules}\n"
        "Output must be crisp, structured, and directly usable by the next agent."
    )
    return ChatAgent(model=model, system_message=system)


planner = make_agent(
    "Planner",
    "Create a compact plan and research questions with acceptance criteria.",
    "Return JSON with keys: plan, questions, acceptance_criteria."
)


researcher = make_agent(
    "Researcher",
    "Answer questions using web search results.",
    "Return JSON with keys: findings, sources, open_questions."
)


writer = make_agent(
    "Writer",
    "Draft a structured research brief.",
    "Return Markdown only."
)


critic = make_agent(
    "Critic",
    "Identify weaknesses and suggest fixes.",
    "Return JSON with keys: issues, fixes, rewrite_instructions."
)


finalizer = make_agent(
    "Finalizer",
    "Produce the final improved brief.",
    "Return Markdown only."
)


search_tool = SearchToolkit().search_duckduckgo
researcher = ChatAgent(
    model=model,
    system_message=researcher.system_message,
    tools=[search_tool],
)

We define each agent's role and responsibilities within the workflow, building specialized agents with specific goals and clear output contracts. We also attach a web-search tool to the Researcher so its answers are evidence-based. Check out the FULL CODES here.
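Because each agent's "Return JSON with keys: ..." rule is a contract, it can be checked mechanically before the next stage consumes the output. The `validate_contract` helper below is a hypothetical addition, not part of CAMEL:

```python
def validate_contract(payload: dict, required_keys: set) -> list:
    # Return the sorted list of missing keys so the caller can
    # retry the agent or fall back gracefully.
    return sorted(required_keys - payload.keys())


planner_keys = {"plan", "questions", "acceptance_criteria"}
good = {"plan": "...", "questions": ["q1"], "acceptance_criteria": []}
bad = {"plan": "..."}

print(validate_contract(good, planner_keys))  # → []
print(validate_contract(bad, planner_keys))   # → ['acceptance_criteria', 'questions']
```

An empty result means the contract is satisfied; anything else names exactly what the downstream agent would be missing.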

def step_json(agent: ChatAgent, prompt: str) -> Dict[str, Any]:
    res = agent.step(prompt)
    txt = res.msgs[0].content.strip()
    try:
        return json.loads(txt)
    except Exception:
        return {"raw": txt}


def step_text(agent: ChatAgent, prompt: str) -> str:
    res = agent.step(prompt)
    return res.msgs[0].content

Our helper functions abstract agent interactions and enforce either JSON or free-text outputs. Centralizing the parsing and fallback logic simplifies orchestration and makes the pipeline more resilient to format variations. See the FULL CODES here.
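One common format variation worth handling: models often wrap JSON in Markdown code fences, which would send `step_json` down its raw-text fallback. A hedged sketch of a fence-stripping parser (hypothetical helper, same fallback shape as `step_json`):

```python
import json


def parse_json_reply(txt: str) -> dict:
    # Strip a Markdown code fence (with optional "json" language tag)
    # before parsing; fall back to {"raw": ...} just like step_json.
    cleaned = txt.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.split("\n", 1)[-1]       # drop opening fence line
        cleaned = cleaned.rsplit("```", 1)[0].strip()  # drop closing fence
    try:
        return json.loads(cleaned)
    except Exception:
        return {"raw": txt}


print(parse_json_reply('```json\n{"plan": "ok"}\n```'))  # → {'plan': 'ok'}
print(parse_json_reply("not json"))                      # → {'raw': 'not json'}
```

Swapping this in for the bare `json.loads` in `step_json` would make the JSON contract tolerant of fenced replies without changing the function's interface.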

def run_workflow(topic: str) -> Dict[str, str]:
    rprint(mem_last_summaries(3))

    plan = step_json(
        planner,
        f"Topic: {topic}\nCreate a tight plan and research questions."
    )

    research = step_json(
        researcher,
        f"Research the topic using web search.\n{json.dumps(plan)}"
    )

    draft = step_text(
        writer,
        f"Write a research brief using:\n{json.dumps(research)}"
    )

    critique = step_json(
        critic,
        f"Critique the draft:\n{draft}"
    )

    final = step_text(
        finalizer,
        f"Rewrite using critique:\n{json.dumps(critique)}\nDraft:\n{draft}"
    )

    artifacts = {
        "plan_json": json.dumps(plan, indent=2),
        "research_json": json.dumps(research, indent=2),
        "draft_md": draft,
        "critique_json": json.dumps(critique, indent=2),
        "final_md": final,
    }

    mem_add_run(topic, artifacts)
    return artifacts


TOPIC = "Agentic multi-agent research workflow with quality control"
artifacts = run_workflow(TOPIC)
print(artifacts["final_md"])

We orchestrate the multi-agent workflow from planning through completion, passing artifacts sequentially between agents, applying critique-driven refinement, storing results in memory, and producing a finished research brief.
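The plan → research → draft → critique → final flow can be exercised without any API calls by substituting stub functions for the agents; every name below is illustrative and exists only to show how artifacts move between stages:

```python
# Plain functions standing in for the Planner, Researcher, Writer,
# Critic, and Finalizer agents.
def stub_plan(topic):     return {"plan": f"outline for {topic}"}
def stub_research(plan):  return {"findings": [f"evidence for {plan['plan']}"]}
def stub_draft(research): return "# Brief\n" + research["findings"][0]
def stub_critique(draft): return {"fixes": ["tighten intro"]}
def stub_final(draft, critique):
    return draft + f"\n(revised: {critique['fixes'][0]})"


def stub_workflow(topic: str) -> dict:
    # Same artifact hand-off order as run_workflow, minus model calls.
    plan = stub_plan(topic)
    research = stub_research(plan)
    draft = stub_draft(research)
    critique = stub_critique(draft)
    final = stub_final(draft, critique)
    return {"plan": plan, "draft_md": draft, "final_md": final}


artifacts = stub_workflow("agentic QA")
print(artifacts["final_md"])
```

A dry run like this is a cheap way to verify the orchestration wiring (ordering, artifact keys, data shapes) before spending tokens on the real agents.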

In conclusion, we developed a practical multi-agent CAMEL system that mimics real-world research and review workflows. We showed that clearly defined agent roles, tool-augmented reasoning, and critique-driven refinement lead to higher-quality outputs while reducing the likelihood of hallucinations. By persisting artifacts and enabling reuse, we laid a solid foundation for extensibility. This approach moves beyond simple single-agent interactions toward robust systems that can be adapted at scale for tasks such as research, analytics, reporting, and decision support.


Check out the FULL CODES here.


Michal Sutter is a data scientist with a master's degree in Data Science from the University of Padova. He excels at transforming large datasets into actionable insights and has a strong foundation in statistics, machine learning, and data engineering.
