This tutorial explores some of the latest features introduced by OpenAI in its GPT-5 model: free-form function calling, context-free grammar (CFG) constraints, minimal reasoning, and the verbosity parameter. We will walk through how to use each of these features.
Installing the Libraries
Install the required libraries using pip:

!pip install openai pandas
Visit https://platform.openai.com/settings/organization/api-keys to get your OpenAI API key and create a new one. New users may need to add billing details and make a minimum $5 payment.
import os
from getpass import getpass

os.environ['OPENAI_API_KEY'] = getpass('Enter OpenAI API Key: ')
Verbosity Parameter
You can control the level of detail in your model’s responses by using the Verbosity Parameter.
- low → Short and concise, minimal extra text.
- medium (default) → Balanced detail and clarity.
- high → Very detailed, ideal for explanations, audits, or teaching.
from openai import OpenAI
import pandas as pd
from IPython.display import display

client = OpenAI()

question = "Write a poem about a detective and his first solve"

data = []

for verbosity in ["low", "medium", "high"]:
    response = client.responses.create(
        model="gpt-5-mini",
        input=question,
        text={"verbosity": verbosity}
    )

    # Extract the output text
    output_text = ""
    for item in response.output:
        if hasattr(item, "content"):
            for content in item.content:
                if hasattr(content, "text"):
                    output_text += content.text

    usage = response.usage
    data.append({
        "Verbosity": verbosity,
        "Sample Output": output_text,
        "Output Tokens": usage.output_tokens
    })

# Create DataFrame
df = pd.DataFrame(data)

# Display the DataFrame nicely with centered headers
pd.set_option('display.max_colwidth', None)
styled_df = df.style.set_table_styles(
    [
        {'selector': 'th', 'props': [('text-align', 'center')]},  # Center column headers
        {'selector': 'td', 'props': [('text-align', 'left')]}     # Left-align table cells
    ]
)
display(styled_df)
The output tokens scale roughly linearly with verbosity: low (731) → medium (1017) → high (1263).
Free-Form Function Calling
Free-form function calling lets GPT-5 send raw text payloads—like Python scripts, SQL queries, or shell commands—directly to your tool, without the JSON formatting used in GPT-4.
It is now easier to integrate GPT-5 with external runtimes, such as:
- Code sandboxes (Python, C++, Java, etc.)
- SQL databases (outputs SQL in raw form)
- Shell environments (ready-to-run bash outputs)
- Config generators
from openai import OpenAI

client = OpenAI()
response = client.responses.create(
model="gpt-5-mini",
input="Please use the code_exec tool to calculate the cube of the number of vowels in the word 'pineapple'",
text={"format": {"type": "text"}},
tools=[
{
"type": "custom",
"name": "code_exec",
"description": "Executes arbitrary python code",
}
]
)
print(response.output[1].input)
GPT-5 generates raw Python code that counts the vowels in the word “pineapple”, calculates the cube of that count, and prints both values. Unlike GPT-4, which wraps most tool calls in a JSON object, GPT-5 returns plain executable Python, so the result can be fed directly into a Python runtime without additional parsing.
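Since the tool payload is plain executable Python, one way to close the loop locally is to run it directly. Below is a minimal sketch using a hardcoded payload standing in for `response.output[1].input` (the exact code GPT-5 emits will vary from run to run):

```python
# Hypothetical payload; the actual text GPT-5 returns will differ between runs.
payload = """
word = 'pineapple'
vowel_count = sum(ch in 'aeiou' for ch in word)
print(vowel_count, vowel_count ** 3)
"""

sandbox = {}            # isolated namespace for the generated code
exec(payload, sandbox)  # prints: 4 64
```

Note that exec-ing untrusted model output is only acceptable in a demo; a real code_exec tool should run the payload in a proper sandboxed runtime before returning results to the model.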
Context-Free Grammar
A context-free grammar (CFG) is a set of production rules that defines the valid strings of a language. Each rule expands a nonterminal symbol into a sequence of terminals and/or nonterminals, independent of the surrounding context.
CFGs are useful when you want to strictly constrain the model’s output so it always follows the syntax of a programming language, data format, or other structured text — for example, ensuring generated SQL, JSON, or code is always syntactically correct.
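To make the definition concrete before calling the API, here is a toy example (independent of OpenAI) of a two-rule CFG for integer addition expressions, with a small hand-written membership check:

```python
import re

# Toy CFG:
#   expr -> term ('+' term)*
#   term -> [0-9]+
# Each rule expands a nonterminal (expr, term) into terminals and/or
# nonterminals, regardless of the surrounding context.

def matches_expr(s: str) -> bool:
    """Return True if s is derivable from the start symbol 'expr'."""
    terms = s.split('+')
    return all(re.fullmatch(r'[0-9]+', t) for t in terms)

print(matches_expr("12+7+3"))  # True: expr -> term '+' term '+' term
print(matches_expr("12+"))     # False: the trailing '+' has no term
```

Constraining a model with a CFG works the same way in spirit: the decoder may only emit strings derivable from the grammar's start symbol.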
We’ll compare the output of GPT-5 and GPT-4 given the same CFG constraint, to show how the models differ in accuracy and speed.
from openai import OpenAI
import re

client = OpenAI()

email_regex = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"
prompt = "Give me a valid email address for John Doe. It can be a dummy email"
Without a grammar constraint, a model may produce surrounding prose or an invalid format rather than adhering to the strict pattern.
response = client.responses.create(
    model="gpt-4o",  # Not grammar-constrained
    input=prompt
)
output = response.output_text.strip()
print("GPT Output:", output)
print("Valid?", bool(re.match(email_regex, output)))

from openai import OpenAI

client = OpenAI()

email_regex = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"

prompt = "Give me a valid email address for John Doe. It can be a dummy email"

response = client.responses.create(
    model="gpt-5",  # Grammar-constrained model
input=prompt,
text={"format": {"type": "text"}},
tools=[
{
"type": "custom",
"name": "email_grammar",
"description": "Outputs a valid email address.",
"format": {
"type": "grammar",
"syntax": "regex",
"definition": email_regex
}
}
],
parallel_tool_calls=False
)
print("GPT-5 Output:", response.output[1].input)

The outputs show that GPT-5 adheres far more closely to the specified format when constrained by a context-free grammar.
Given the same grammar definition, GPT-4 wrapped the address in extra text (“Sure! Here’s an email you can use for John Doe: [email protected]”), which fails the strict format check.
GPT-5, by contrast, returned exactly [email protected], demonstrating that it follows CFG constraints precisely.
Minimal Reasoning
GPT-5 can run in minimal reasoning mode, using few or no reasoning tokens to reduce latency.
This mode is ideal for lightweight, deterministic tasks such as:
- Data extraction
- Formatting
- Short-form rewriting
- Simple classification
Responses are concise and quick because the model bypasses most intermediate reasoning steps. If not specified, the reasoning effort defaults to medium.
import time
from openai import OpenAI

client = OpenAI()
prompt = "Classify the given number as odd or even. Return one word only."
start_time = time.time() # Start timer
response = client.responses.create(
model="gpt-5",
input=[
{ "role": "developer", "content": prompt },
{ "role": "user", "content": "57" }
],
reasoning={
"effort": "minimal" # Faster time-to-first-token
},
)
latency = time.time() - start_time  # End timer
# Extract the model's output
output_text = ""
for item in response.output:
    if hasattr(item, "content"):
        for content in item.content:
            if hasattr(content, "text"):
                output_text += content.text
print("--------------------------------")
print("Output:", output_text)
print(f"Latency: {latency:.3f} seconds")


