How can we make every node in a graph its own intelligent agent—capable of personalized reasoning, adaptive retrieval, and autonomous decision-making? Researchers from Rutgers University explored this question and introduced ReaGAN, a Retrieval-augmented Graph Agentic Network that reimagines each node as an independent reasoning agent.
Why Traditional GNNs Struggle
Graph Neural Networks power many tasks, including citation analysis, scientific classification, recommendation systems, and categorization. Traditionally, GNNs operate via message passing that is static and homogeneous: each node aggregates data from its immediate neighbors using the same fixed rules.
Two challenges persist:
- Unbalanced Node Information: Not all nodes are created equal. Some carry rich, useful information while others are noisy. Treating them identically means useful signals can be lost or noise can overwhelm the context.
- Locality Restrictions: GNNs focus on local structure, i.e., information from nearby nodes, often missing meaningful, semantically similar nodes that sit far away in the larger graph.
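To make the locality restriction concrete, here is a minimal sketch (not from the paper) of classic message passing, where every node aggregates its immediate neighbors with one uniform rule. The toy features and graph are made up for illustration.

```python
# Minimal sketch of static, homogeneous message passing: every node applies
# the same fixed aggregation rule to its one-hop neighbors only.

def mean_aggregate(features, neighbors):
    """One round of uniform neighbor averaging (GCN-style, simplified)."""
    updated = {}
    for node, feat in features.items():
        # Each node's messages: its neighbors' features plus its own.
        msgs = [features[n] for n in neighbors.get(node, [])] + [feat]
        updated[node] = [sum(col) / len(msgs) for col in zip(*msgs)]
    return updated

# Toy graph: node 0 is linked to 1 and 2; node 3 is distant but semantically
# similar to node 0 -- one-hop message passing never connects them.
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.0, 1.0], 3: [1.0, 0.0]}
neighbors = {0: [1, 2], 1: [0], 2: [0], 3: []}

out = mean_aggregate(features, neighbors)
```

Node 3 keeps only its own signal: with no local edges, the fixed rule gives it no way to reach the semantically related node 0, which is exactly the locality restriction described above.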
The ReaGAN Approach: Nodes as Autonomous Agents
ReaGAN flips the script: nodes are active rather than passive. Each node is an agent that plans its next move based on its own memory and context. Here's how:
- Agentic Planning: Each node uses a large language model, such as Qwen2-14B, to dynamically decide its next action ("Should I gather more info? Predict my label? Pause?").
- Flexible Actions:
  - Local Aggregation: gather information from immediate neighbors.
  - Global Aggregation: retrieve relevant insights from anywhere in the graph using retrieval-augmented generation (RAG).
  - NoOp ("do nothing"): sometimes the best move is to wait, pausing to avoid information overload or noise.
- Memory Matters: Every agent maintains its own private buffer containing its raw text, aggregated context, and labeled examples, so prompts and reasoning can be tailored at each stage.
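The action space and private memory described above can be sketched as follows. This is our own illustrative structure, not the authors' code; the class and field names are assumptions, and a trivial callable stands in for the RAG retriever.

```python
# Illustrative sketch of a ReaGAN-style node agent: a discrete action space
# plus a private memory buffer that records what each action gathered.
from dataclasses import dataclass, field

ACTIONS = ("LocalAggregation", "GlobalAggregation", "Predict", "NoOp")

@dataclass
class NodeAgent:
    node_id: int
    raw_text: str
    memory: list = field(default_factory=list)  # aggregated context, labeled examples

    def act(self, action, neighbor_texts=None, retriever=None):
        if action == "LocalAggregation":
            # Gather text from immediate neighbors into private memory.
            self.memory.append(("local", list(neighbor_texts or [])))
        elif action == "GlobalAggregation":
            # Retrieve semantically similar context from anywhere in the graph.
            self.memory.append(("global", retriever(self.raw_text) if retriever else []))
        elif action == "NoOp":
            pass  # Deliberately wait: avoid piling noise into memory.
        return self.memory

agent = NodeAgent(0, "A paper about graph neural networks")
agent.act("LocalAggregation", neighbor_texts=["neighbor text 1", "neighbor text 2"])
agent.act("NoOp")
```

Because the buffer is per-agent, two nodes that take different action sequences end up with different memories, and hence different prompts, even in the same graph.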

How Does ReaGAN Work?
The ReaGAN workflow breaks down into the following steps:
- Perception: The node reads its current state and memory buffer to build an up-to-date picture of its context.
- Planning: The node constructs a prompt summarizing its memory, neighbor information, and features; an LLM then recommends the action to take.
- Acting: The node aggregates locally, retrieves globally, predicts its label, or does nothing. Results are written back to memory.
- Iterating: The reasoning loop repeats several times, allowing information to be refined and integrated.
- Predicting: In the final stage, the node makes a label prediction supported by the blended local and global evidence it has gathered.
What makes this system novel is that each node makes its own decisions independently and asynchronously: no global clock, and no shared parameters forcing uniformity.
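The perception, planning, and acting steps above can be sketched as a single loop. This is a hedged sketch under our own assumptions: `plan_with_llm` stands in for prompting a frozen LLM (e.g., Qwen2-14B), replaced here by a trivial rule so the sketch runs end to end.

```python
# Sketch of the per-node perception -> planning -> acting loop.

def plan_with_llm(prompt):
    # Stand-in policy for the frozen LLM planner: gather local context first,
    # then global context, then predict. (A real system would parse the LLM's
    # chosen action from its generated text.)
    if "local" not in prompt:
        return "LocalAggregation"
    if "global" not in prompt:
        return "GlobalAggregation"
    return "Predict"

def reasoning_loop(node_text, neighbor_texts, retrieve, max_steps=4):
    memory = []
    for _ in range(max_steps):
        # Perception: summarize memory and node features into a prompt.
        prompt = f"node: {node_text} | memory: {' '.join(kind for kind, _ in memory)}"
        action = plan_with_llm(prompt)            # Planning
        if action == "LocalAggregation":          # Acting: write results to memory
            memory.append(("local", neighbor_texts))
        elif action == "GlobalAggregation":
            memory.append(("global", retrieve(node_text)))
        elif action == "Predict":
            return "predicted-label", memory      # Placeholder prediction
    return None, memory

label, mem = reasoning_loop(
    "neural message passing survey",
    ["graph convolution paper", "attention paper"],
    retrieve=lambda query: ["semantically similar distant node"],
)
```

The loop terminates when the planner chooses Predict, at which point the node's answer rests on whatever mix of local and global evidence it chose to gather.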
Results: Surprisingly Strong—Even Without Training
ReaGAN delivers on its promise. On classic benchmarks (Cora, Citeseer, Chameleon), it achieves competitive accuracy, in some settings matching or outperforming baseline GNNs, without any supervision or fine-tuning.
Example Results
| Model | Cora (%) | Citeseer (%) | Chameleon (%) |
|---|---|---|---|
| GCN | 84.71 | 72.56 | 28.18 |
| GraphSAGE | 84.35 | 78.24 | 62.15 |
| ReaGAN | 84.95 | 60.25 | 43.80 |
ReaGAN achieves this with a frozen LLM for planning and context gathering, highlighting the power of prompt engineering and semantic retrieval.
The Key Takeaways
- Prompt Engineering Matters: How well nodes integrate local and global memory into their prompts directly affects accuracy, and the best strategy depends on label semantics and graph locality.
- Label Semantics: Anonymizing label names works better than exposing them, since real names can bias the LLM's predictions.
- Flexible Agents: ReaGAN's node-level reasoning is especially effective on graphs that are sparse or have noisy regions.
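The prompt-engineering and label-anonymization takeaways can be illustrated with a small sketch. The prompt format and helper below are our own assumptions, not the paper's actual templates: local and global memory are blended into one prompt, and label names are replaced by neutral placeholders to avoid biasing the LLM.

```python
# Sketch: build a planning prompt that blends local and global context and
# anonymizes label names (e.g. "Class A" instead of "Reinforcement_Learning").

def build_prompt(node_text, local_ctx, global_ctx, label_names):
    # Map each real label name to a neutral placeholder.
    anonymized = {name: f"Class {chr(65 + i)}" for i, name in enumerate(label_names)}
    lines = [
        f"Target node: {node_text}",
        "Local neighbors: " + "; ".join(local_ctx),
        "Globally retrieved: " + "; ".join(global_ctx),
        "Candidate labels: " + ", ".join(anonymized.values()),
        "Choose an action: LocalAggregation, GlobalAggregation, Predict, NoOp.",
    ]
    return "\n".join(lines), anonymized

prompt, mapping = build_prompt(
    "paper on reinforcement learning",
    ["RL survey"],
    ["distant RL paper"],
    ["Reinforcement_Learning", "Theory"],
)
```

Keeping the real-name-to-placeholder mapping lets the system translate the LLM's anonymized prediction back into an actual label after inference.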
Conclusion
ReaGAN marks a milestone in agentic graph learning. As LLMs and retrieval-augmented architectures grow more sophisticated, we may soon see graphs where every node is not just a number or an embedding, but an adaptive, contextually aware reasoning agent, ready to tackle the challenges of tomorrow's data networks.
Check out the Paper and the GitHub page for tutorials, code, and notebooks.
Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. With a background in Material Science, he is passionate about AI/ML and continually researches its applications in fields such as biomaterials and biomedical science, driven by a desire to explore and contribute to new advances.

