AI-trends.today

OpenAI GPT-5 Modeling Capabilities: A Developer Guide

Tech | By Gavin Wallace | 08/08/2025 | 5 Mins Read
This tutorial explores some of the latest features introduced in OpenAI's GPT-5 model: free-form function calling, context-free grammar (CFG) constraints, minimal reasoning, and the verbosity parameter. We'll walk through how to use each of them.

Installing the Libraries

Install openai and pandas using !pip:

!pip install openai pandas

To get your OpenAI API key, visit https://platform.openai.com/settings/organization/api-keys and create a new API key. New users may need to add billing information and make a minimum $5 payment.

import os
from getpass import getpass

os.environ['OPENAI_API_KEY'] = getpass('Enter OpenAI API Key: ')

The Verbosity Parameter

You can control the level of detail in your model’s responses by using the Verbosity Parameter.

  • low → Short and concise, minimal extra text.
  • medium (default) → Balanced detail and clarity.
  • high → Very detailed, ideal for explanations, audits, or teaching.
from openai import OpenAI
import pandas as pd
from IPython.display import display

client = OpenAI()

question = "Write a poem about a detective and his first solve"

data = []

for verbosity in ["low", "medium", "high"]:
    response = client.responses.create(
        model="gpt-5-mini",
        input=question,
        text={"verbosity": verbosity}
    )

    # Extract the generated text
    output_text = ""
    for item in response.output:
        if hasattr(item, "content"):
            for content in item.content:
                if hasattr(content, "text"):
                    output_text += content.text

    usage = response.usage
    data.append({
        "Verbosity": verbosity,
        "Sample Output": output_text,
        "Output Tokens": usage.output_tokens
    })

# Create DataFrame
df = pd.DataFrame(data)

# Style the table so it displays nicely
pd.set_option('display.max_colwidth', None)
styled_df = df.style.set_table_styles(
    [
        {'selector': 'th', 'props': [('text-align', 'center')]},  # Center column headers
        {'selector': 'td', 'props': [('text-align', 'left')]}     # Left-align table cells
    ]
)

display(styled_df)

The output tokens scale roughly linearly with verbosity: low (731) → medium (1017) → high (1263).
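As a quick sanity check on "roughly linear", the jump in token count between adjacent verbosity levels in this run is similar in size:

```python
# Output-token counts reported above (one run per verbosity level)
counts = {"low": 731, "medium": 1017, "high": 1263}

print(counts["medium"] - counts["low"])   # 286 tokens added from low to medium
print(counts["high"] - counts["medium"])  # 246 tokens added from medium to high
```

These are single-run numbers, so exact counts will vary between runs.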

Free-Form Function Calling

Free-form function calling lets GPT-5 send raw text payloads (such as Python scripts, SQL queries, or shell commands) directly to your tool, without the JSON formatting used in GPT-4.

This makes it easier to integrate GPT-5 with external runtimes, such as:

  • Code sandboxes (Python, C++, Java, etc.)
  • SQL databases (outputs SQL in raw form)
  • Shell Environments (ready-to-run bash outputs)
  • Config generators
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5-mini",
    input="Please use the code_exec tool to calculate the cube of the number of vowels in the word 'pineapple'",
    text={"format": {"type": "text"}},
    tools=[
        {
            "type": "custom",
            "name": "code_exec",
            "description": "Executes arbitrary python code",
        }
    ]
)
print(response.output[1].input)

GPT-5 generates raw Python code that counts the vowels in the word "pineapple", cubes the count, and prints both values. Unlike GPT-4, which wraps most tool calls in a JSON object, GPT-5 returns plain executable Python, so the result can be fed directly into a Python runtime without additional parsing.
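Since the payload is plain Python, it can be passed straight to an interpreter. A minimal sketch, where `generated_code` is a hypothetical stand-in for the model's actual payload (`response.output[1].input`):

```python
# Hypothetical stand-in for the raw Python the model returns via code_exec
generated_code = (
    "vowels = sum(ch in 'aeiou' for ch in 'pineapple')\n"
    "cube = vowels ** 3\n"
    "print(vowels, cube)"
)

# Run the payload in a fresh namespace -- no JSON unwrapping required
namespace = {}
exec(generated_code, namespace)  # prints: 4 64
```

In real use, model-generated code is untrusted input; run it in a proper sandbox rather than bare `exec`.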

Context-Free Grammar

A context-free grammar (CFG) is a set of production rules that defines the valid strings of a language. Each rule rewrites a nonterminal symbol into a sequence of terminals and/or nonterminals, regardless of the surrounding context.

CFGs are useful when you want to strictly constrain the model’s output so it always follows the syntax of a programming language, data format, or other structured text — for example, ensuring generated SQL, JSON, or code is always syntactically correct.
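As a concrete illustration (a toy grammar, not one from this tutorial), the rule set S -> "a" S "b" | "a" "b" generates exactly the strings aⁿbⁿ, and expanding S never depends on the symbols around it:

```python
import random

# Toy CFG: S -> "a" S "b" | "a" "b"
rules = {"S": [["a", "S", "b"], ["a", "b"]]}

def generate(symbol="S", depth=3):
    """Expand nonterminals recursively; terminal symbols return themselves.
    When depth runs out, only the non-recursive rule is allowed, so
    expansion always terminates."""
    if symbol not in rules:
        return symbol
    options = rules[symbol] if depth > 0 else [rules[symbol][-1]]
    return "".join(generate(s, depth - 1) for s in random.choice(options))

print(generate())  # e.g. "aabb" -- always n a's followed by n b's
```

Constraining a model with a grammar works the same way in reverse: every output must be derivable from the start symbol, so malformed strings are impossible.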

We'll run GPT-4 and GPT-5 against the identical grammar constraint to show how the models differ in accuracy and speed.

from openai import OpenAI
import re

client = OpenAI()

email_regex = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"

prompt = "Give me a valid email address for John Doe. It can be a dummy email"

# Without a grammar constraint, the model may produce prose or an invalid format
response = client.responses.create(
    model="gpt-4o",
    input=prompt
)

output = response.output_text.strip()
print("GPT Output:", output)
print("Valid?", bool(re.match(email_regex, output)))
from openai import OpenAI

client = OpenAI()

email_regex = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"

prompt = "Give me a valid email address for John Doe. It can be a dummy email"

response = client.responses.create(
    model="gpt-5",  # grammar-constrained model
    input=prompt,
    text={"format": {"type": "text"}},
    tools=[
        {
            "type": "custom",
            "name": "email_grammar",
            "description": "Outputs a valid email address.",
            "format": {
                "type": "grammar",
                "syntax": "regex",
                "definition": email_regex
            }
        }
    ],
    parallel_tool_calls=False
)

print("GPT-5 Output:", response.output[1].input)

This example shows that GPT-5 adheres much more closely to the specified format when constrained by a context-free grammar.

Given the same grammar rules, GPT-4 wrapped the email address in extra text ("Sure! Here's an email that you can send John Doe: [email protected]"), which fails the strict format check.

GPT-5, by contrast, output exactly [email protected], showing that it follows CFG constraints far more precisely.

Minimal Reasoning

Minimal reasoning mode runs GPT-5 with few or no reasoning tokens, which reduces latency.

This mode is well suited to lightweight, deterministic tasks such as:

  • Data extraction
  • Formatting
  • Short-form rewrites
  • Simple classification

Responses are concise and quick because the model skips most intermediate reasoning steps. If not specified, the default reasoning effort is medium.

import time
from openai import OpenAI

client = OpenAI()

prompt = "Classify the given number as odd or even. Return one word only."

start_time = time.time()  # Start timer

response = client.responses.create(
    model="gpt-5",
    input=[
        { "role": "developer", "content": prompt },
        { "role": "user", "content": "57" }
    ],
    reasoning={
        "effort": "minimal"  # Faster time-to-first-token
    },
)

latency = time.time() - start_time  # End timer

# Extract the model's output
output_text = ""
for item in response.output:
    if hasattr(item, "content"):
        for content in item.content:
            if hasattr(content, "text"):
                output_text += content.text

print("--------------------------------")
print("Output:", output_text)
print(f"Latency: {latency:.3f} seconds")


I'm a Civil Engineering graduate (2022) from Jamia Millia Islamia, New Delhi. I'm interested in Data Science, particularly neural networks and their applications across different domains.
