AI-trends.today
How to build a secure local-first agent runtime using OpenClaw gateway, skills, and controlled tool execution

Tech | By Gavin Wallace | 11/04/2026 | 7 Mins Read

In this tutorial, we build and use a local, schema-valid OpenClaw runtime. We configure the OpenClaw Gateway with a strict loopback bind, authenticate model access using environment variables, and define a secured execution environment through the integrated exec tool. We then create a custom structured skill that the OpenClaw agent can discover deterministically and execute. Instead of running Python manually, OpenClaw orchestrates model reasoning and skill selection through its runtime. Our focus is on OpenClaw's architecture and gateway control plane, along with agent defaults, the skill abstraction, and model routing.

import os, json, textwrap, subprocess, time, re, pathlib, shlex
from getpass import getpass


def sh(cmd, check=True, capture=False, env=None):
    p = subprocess.run(
        ["bash", "-lc", cmd],
        check=check,
        text=True,
        capture_output=capture,
        env=env or os.environ.copy(),
    )
    return p.stdout if capture else ""


def require_secret_env(var="OPENAI_API_KEY"):
    if os.environ.get(var, "").strip():
        return
    key = getpass(f"Enter {var} (hidden): ").strip()
    if not key:
        raise RuntimeError(f"{var} is required.")
    os.environ[var] = key


def install_node_22_and_openclaw():
   sh("sudo apt-get update -y")
   sh("sudo apt-get install -y ca-certificates curl gnupg")
   sh("curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -")
   sh("sudo apt-get install -y nodejs")
   sh("node -v && npm -v")
   sh("npm install -g openclaw@latest")
   sh("openclaw --version", check=False)

These core utility functions let us securely capture environment variables and install OpenClaw along with the Node.js runtime it requires. Here, we create the control interface that connects Python to the OpenClaw CLI, preparing the environment for OpenClaw to function as the main agent runtime within Colab.
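To make the contract of the `sh()` helper concrete, here is a minimal, self-contained restatement (assuming a POSIX system with bash on the PATH): it returns captured stdout only when `capture=True`, and an empty string otherwise.

```python
import os
import subprocess

def sh(cmd, check=True, capture=False, env=None):
    # Route the command through bash so pipes, &&, and redirects work
    # exactly as in the tutorial's install and gateway commands.
    p = subprocess.run(
        ["bash", "-lc", cmd],
        check=check,
        text=True,
        capture_output=capture,
        env=env or os.environ.copy(),
    )
    # Captured stdout only when requested; empty string otherwise.
    return p.stdout if capture else ""

print(sh("echo ok", capture=True).strip())
```

With `check=False` (as used for best-effort commands like `openclaw --version`), a failing command does not raise, which is why the tutorial tolerates the CLI being absent mid-install.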

def write_openclaw_config_valid():
    home = pathlib.Path.home()
    base = home / ".openclaw"
    workspace = base / "workspace"
   (workspace / "skills").mkdir(parents=True, exist_ok=True)


   cfg = {
       "gateway": {
           "mode": "local",
           "port": 18789,
           "bind": "loopback",
           "auth": {"mode": "none"},
           "controlUi": {
               "enabled"True,
               "basePath": "/openclaw",
               "dangerouslyDisableDeviceAuth": True
           }
       },
       "agents": {
           "defaults": {
               "workspace": str(workspace),
               "model": {"primary": "openai/gpt-4o-mini"}
           }
       },
       "tools": {
           "exec": {
               "backgroundMs": 10000,
               "timeoutSec": 1800,
               "cleanupMs": 1800000,
               "notifyOnExit": True,
               "notifyOnExitEmptySuccess": False,
               "applyPatch": {"enabled": False, "allowModels": ["openai/gpt-5.2"]}
           }
       }
   }


   base.mkdir(parents=True, exist_ok=True)
   (base / "openclaw.json").write_text(json.dumps(cfg, indent=2))
    return str(base / "openclaw.json")


def start_gateway_background():
   sh("rm -f /tmp/openclaw_gateway.log /tmp/openclaw_gateway.pid", check=False)
   sh("nohup openclaw gateway --port 18789 --bind loopback --verbose > /tmp/openclaw_gateway.log 2>&1 & echo $! > /tmp/openclaw_gateway.pid")


    for _ in range(60):
        time.sleep(1)
        log = sh("tail -n 120 /tmp/openclaw_gateway.log || true", capture=True, check=False)
        if re.search(r"(listening|ready|ws|http).*18789|18789.*listening", log, re.IGNORECASE):
            return True


   print("Gateway log tail:n", sh("tail -n 220 /tmp/openclaw_gateway.log || true", capture=True, check=False))
   raise RuntimeError("OpenClaw gateway did not start cleanly.")

We write a schema-valid OpenClaw configuration and initialize the local gateway settings. Following OpenClaw's official configuration schema, we define the workspace, model routing, and tool-execution behavior. The OpenClaw gateway is then started in loopback mode to make sure the agent launches correctly.
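As a quick sanity check (a sketch of my own, not part of the OpenClaw tooling), you can confirm that the gateway port is reachable on loopback once `start_gateway_background()` returns; port 18789 matches the config above.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    # Attempt a plain TCP connect; any OSError means closed or unreachable.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After the gateway starts, this should report True:
# print(port_open("127.0.0.1", 18789))
```

Because the config binds to loopback only, the same check against the machine's external interface should fail, which is an easy way to verify the isolation the tutorial relies on.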

def pick_model_from_openclaw():
    out = sh("openclaw models list --json", capture=True, check=False)
    refs = []
    try:
        data = json.loads(out)
        if isinstance(data, dict):
            for k in ["models", "items", "list"]:
                if isinstance(data.get(k), list):
                    data = data[k]
                    break
        if isinstance(data, list):
            for it in data:
                if isinstance(it, str) and "/" in it:
                    refs.append(it)
                elif isinstance(it, dict):
                    for key in ["ref", "id", "model", "name"]:
                        v = it.get(key)
                        if isinstance(v, str) and "/" in v:
                            refs.append(v)
                            break
    except Exception:
        pass


    refs = [r for r in refs if r.startswith("openai/")]
    preferred = ["openai/gpt-4o-mini", "openai/gpt-4.1-mini", "openai/gpt-4o", "openai/gpt-5.2-mini", "openai/gpt-5.2"]
    for p in preferred:
        if p in refs:
            return p
    return refs[0] if refs else "openai/gpt-4o-mini"


def set_default_model(model_ref):
   sh(f'openclaw config set agents.defaults.model.primary "{model_ref}"', check=False)

We dynamically query OpenClaw for the available OpenAI models and select an appropriate one. We set the agent defaults programmatically so that OpenClaw routes all reasoning requests through the model we choose, letting OpenClaw manage model abstraction and provider authentication seamlessly.
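The selection rule reduces to a small pure function, restated here in isolation so it is easy to test on its own (the preference list mirrors the one used above):

```python
def pick(refs, preferred, default="openai/gpt-4o-mini"):
    # Keep only OpenAI-routed refs, then honor the preference order.
    candidates = [r for r in refs if r.startswith("openai/")]
    for p in preferred:
        if p in candidates:
            return p
    # Fall back to the first discovered ref, then to a hard default.
    return candidates[0] if candidates else default

PREFERRED = ["openai/gpt-4o-mini", "openai/gpt-4.1-mini", "openai/gpt-4o"]
print(pick(["openai/gpt-4o", "anthropic/claude"], PREFERRED))  # openai/gpt-4o
```

The hard default at the end means the tutorial proceeds even when `openclaw models list` returns nothing parseable, which is why the lookup above is wrapped in a broad try/except.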

def create_custom_skill_rag():
    home = pathlib.Path.home()
    skill_dir = home / ".openclaw" / "workspace" / "skills" / "colab_rag_lab"
   skill_dir.mkdir(parents=True, exist_ok=True)


   tool_py = skill_dir / "rag_tool.py"
   tool_py.write_text(textwrap.dedent(r"""
        import sys, re, subprocess
       def pip(*args): subprocess.check_call([sys.executable, "-m", "pip", "-q", "install", *args])


       q = " ".join(sys.argv[1:]).strip()
        if not q:
            print("Usage: python3 rag_tool.py <question>", file=sys.stderr)
            raise SystemExit(2)


       try:
            import numpy as np
        except Exception:
            pip("numpy"); import numpy as np


       try:
           import faiss
        except Exception:
           pip("faiss-cpu"); import faiss


       try:
           from sentence_transformers import SentenceTransformer
        except Exception:
           pip("sentence-transformers"); from sentence_transformers import SentenceTransformer


        CORPUS = [
           ("OpenClaw basics", "OpenClaw runs an agent runtime behind a local gateway and can execute tools and skills in a controlled way."),
           ("Strict config schema", "OpenClaw gateway refuses to start if openclaw.json has unknown keys; use openclaw doctor to diagnose issues."),
           ("Exec tool config", "tools.exec config sets timeouts and behavior; it does not use an enabled flag in the config schema."),
           ("Gateway auth", "Even on localhost, gateway auth exists; auth.mode can be none for trusted loopback-only setups."),
           ("Skills", "Skills define repeatable tool-use patterns; agents can select a skill and then call exec with a fixed command template.")
       ]


        docs = []
        for title, text in CORPUS:
            # Split each corpus entry into sentence-level snippets.
            sents = [s.strip() for s in re.split(r"(?<=[.;])\s+", text) if s.strip()]
            for s in sents:
                docs.append((title, s))

        # Embed snippets and build an inner-product FAISS index; the model
        # choice here is an assumption (any small sentence-transformer works).
        model = SentenceTransformer("all-MiniLM-L6-v2")
        vecs = np.asarray(model.encode([t for _, t in docs], normalize_embeddings=True), dtype="float32")
        index = faiss.IndexFlatIP(vecs.shape[1])
        index.add(vecs)

        qvec = np.asarray(model.encode([q], normalize_embeddings=True), dtype="float32")
        scores, idxs = index.search(qvec, 3)

        hits = []
        for score, idx in zip(scores[0], idxs[0]):
            ref, txt = docs[idx]
            hits.append((float(score), ref, txt))

        print("Answer (grounded to retrieved snippets):\n")
        print("Question:", q, "\n")
        print("Key points:")
        for score, ref, txt in hits:
            print(f"- ({score:.3f}) {txt} [{ref}]")
        print("\nCitations:")
        for _, ref, _ in hits:
            print(f"- {ref}")
    """).strip() + "\n")
   sh(f"chmod +x {shlex.quote(str(tool_py))}")


   skill_md = skill_dir / "SKILL.md"
   skill_md.write_text(textwrap.dedent(f"""
       ---
       name: colab_rag_lab
        description: A deterministic local RAG tool invoked via a fixed exec command.
       ---


       # Colab RAG Lab


       ## Tooling rule (strict)
        Always run exactly:
        `python3 {tool_py} "<question>"`


       ## Output rule
        Return the tool's output verbatim.
   """).strip() + "n")

We create a custom OpenClaw skill in the designated workspace directory. In SKILL.md we define a deterministic invocation pattern and pair it with a RAG tool script the agent can invoke. OpenClaw's skill-loading mechanism then registers and operationalizes the tool in our agent's runtime.
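The retrieval core inside rag_tool.py can be sketched with plain numpy instead of FAISS and sentence-transformers: with L2-normalized vectors, inner product equals cosine similarity, so a top-k search is just an argsort. This is a simplification for illustration, not a drop-in replacement for the tool above.

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=3):
    # Normalize both sides so the dot product is cosine similarity,
    # matching what normalize_embeddings=True + IndexFlatIP computes.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    order = np.argsort(-scores)[:k]  # highest similarity first
    return [(float(scores[i]), int(i)) for i in order]

docs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(top_k(np.array([1.0, 0.1]), docs, k=2))
```

A real index only becomes worthwhile at scale; for a five-entry corpus like the one in this skill, brute-force scoring is effectively identical.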

def refresh_skills():
    sh('openclaw agent --message "refresh skills" --thinking low', check=False)


def run_openclaw_agent_demo():
   prompt = (
        'Use the skill colab_rag_lab to answer: '
        'Why did my gateway refuse to start when I used agents.defaults.thinking and tools.exec.enabled, '
        'and what are the proper configuration knobs?'
   )
   out = sh(f'openclaw agent --message {shlex.quote(prompt)} --thinking high', capture=True, check=False)
   print(out)


require_secret_env("OPENAI_API_KEY")
install_node_22_and_openclaw()


cfg_path = write_openclaw_config_valid()
print("Wrote schema-valid config:", cfg_path)


print("n--- openclaw doctor ---n")
print(sh("openclaw doctor", capture=True, check=False))


start_gateway_background()


model = pick_model_from_openclaw()
set_default_model(model)
print("Selected model:", model)


create_custom_skill_rag()
refresh_skills()


print("n--- OpenClaw agent run (skill-driven) ---n")
run_openclaw_agent_demo()


print("n--- Gateway log tail ---n")
print(sh("tail -n 180 /tmp/openclaw_gateway.log || true", capture=True, check=False))

We refresh the OpenClaw skills registry and invoke the OpenClaw agent with structured instructions. OpenClaw reasons over the request to select a skill, runs the executable tool, and returns the grounded output. Here we demonstrate the entire OpenClaw orchestration cycle, from configuration through autonomous agent execution.

We deployed an advanced OpenClaw workflow in a controlled Colab setting. After validating the configuration against OpenClaw's schema, we started the gateway and dynamically chose a model provider. We registered the skill through OpenClaw's agent interface and executed the task, letting OpenClaw manage authentication, skill loading, tool execution, and runtime governance. This demonstrates OpenClaw's ability to enforce structured execution while enabling autonomous reasoning, and it can serve as a solid foundation for building extensible, secure agent systems for production environments.

