
Advanced PyTest: Building Customized and Automated Testing with Plugins, Fixtures, and JSON Reporting

Tech | By Gavin Wallace | 14/10/2025 | 6 Mins Read

In this tutorial, we explore the advanced capabilities of PyTest. We build a complete mini-project from scratch that demonstrates fixtures, markers, plugins, parameterization, and custom configuration. Our focus is on how PyTest evolves from a simple test runner into a robust, extensible system fit for real-world applications. We learn not only how to write tests but also how to customize PyTest to suit any project. Check out the FULL CODES here.

import sys, subprocess, os, textwrap, pathlib, json, shutil


subprocess.run([sys.executable, "-m", "pip", "install", "-q", "pytest>=8.0"], check=True)


# start from a clean slate: remove any project tree left by a previous run
root = pathlib.Path("pytest_advanced_tutorial").absolute()
if root.exists():
    shutil.rmtree(root)
(root / "calc").mkdir(parents=True)
(root / "app").mkdir()
(root / "tests").mkdir()

We set up our environment by importing the essential Python libraries for handling files and running subprocesses. We install PyTest to ensure compatibility and create a fresh project structure with folders for the core calculation code, our application modules, and the tests. With the structure in place, we are ready to define test logic before writing any code. Check out the FULL CODES here.
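The clean-slate scaffolding pattern above can be exercised on its own. This minimal sketch repeats it inside a temporary directory (the `demo_project` name is only for illustration):

```python
import pathlib, shutil, tempfile

# Recreate the project tree from scratch, as the tutorial script does.
base = pathlib.Path(tempfile.mkdtemp()) / "demo_project"
if base.exists():
    shutil.rmtree(base)          # drop leftovers from a previous run
for sub in ("calc", "app", "tests"):
    # parents=True also creates `base` itself on the first mkdir
    (base / sub).mkdir(parents=True)

print(sorted(p.name for p in base.iterdir()))  # ['app', 'calc', 'tests']
```

Deleting before recreating makes the setup idempotent, so the script can be rerun without stale files leaking between runs.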

(root / "pytest.ini").write_text(textwrap.dedent("""
[pytest]
addopts = -q --maxfail=1 -m "not slow"
testpaths = tests
markers =
    slow: slow tests (run with --runslow)
    io: file system tests
    api: tests patching external calls
""").strip()+"\n")


(root / "conftest.py").write_text(textwrap.dedent(r'''
import os, time, pytest, json

# module-level summary shared between hooks (TestReport has no .config attribute)
_SUMMARY = {"passed":0,"failed":0,"skipped":0,"slow_ran":0}

def pytest_addoption(parser):
    parser.addoption("--runslow", action="store_true", help="run slow tests")

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: slow tests")
    config._summary = _SUMMARY

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return
    skip = pytest.mark.skip(reason="need --runslow to run")
    for item in items:
        if "slow" in item.keywords: item.add_marker(skip)

def pytest_runtest_logreport(report):
    if report.when == "call":
        if report.passed:
            _SUMMARY["passed"] += 1
            if "slow" in report.keywords: _SUMMARY["slow_ran"] += 1
        elif report.failed:
            _SUMMARY["failed"] += 1
    elif report.skipped:
        _SUMMARY["skipped"] += 1

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    s = config._summary
    terminalreporter.write_sep("=", "SESSION SUMMARY (custom plugin)")
    terminalreporter.write_line(f"Passed: {s['passed']} | Failed: {s['failed']} | Skipped: {s['skipped']}")
    terminalreporter.write_line(f"Slow tests run: {s['slow_ran']}")
    terminalreporter.write_line("PyTest finished successfully ✅" if s["failed"]==0 else "Some tests failed ❌")


@pytest.fixture(scope="session")
def settings(): return {"env":"prod","max_retries":2}

@pytest.fixture(scope="function")
def event_log():
    logs = []
    yield logs
    print("\nEVENT LOG:", logs)

@pytest.fixture
def temp_json_file(tmp_path):
    p = tmp_path/"data.json"; p.write_text('{"msg":"hi"}'); return p

@pytest.fixture
def fake_clock(monkeypatch):
    t = {"now": 1000.0}
    monkeypatch.setattr(time, "time", lambda: t["now"])
    return t
'''))

Now we create the PyTest configuration and plugin files. In pytest.ini we define default options and test paths that determine how tests are discovered and filtered. In conftest.py we implement a custom plugin that tracks passed, failed, and skipped tests, adds a --runslow option, and provides fixtures for reusable test resources. This lets us layer on new functionality without touching core PyTest behavior. Check out the FULL CODES here.
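The counting logic inside the logreport hook can be sketched standalone. Here `Report` is a hypothetical stand-in for PyTest's TestReport, used only to show how results are tallied per phase:

```python
summary = {"passed": 0, "failed": 0, "skipped": 0}

class Report:
    # minimal stand-in for pytest's TestReport (illustrative, not the real class)
    def __init__(self, when, outcome):
        self.when, self.outcome = when, outcome
    @property
    def passed(self): return self.outcome == "passed"
    @property
    def failed(self): return self.outcome == "failed"

def logreport(report):
    # count only the "call" phase, so setup/teardown reports are not double-counted
    if report.when != "call":
        return
    if report.passed: summary["passed"] += 1
    elif report.failed: summary["failed"] += 1

for r in [Report("call", "passed"), Report("call", "failed"), Report("setup", "passed")]:
    logreport(r)
print(summary)  # {'passed': 1, 'failed': 1, 'skipped': 0}
```

Filtering on `when == "call"` is the key detail: every test emits three reports (setup, call, teardown), and only the call phase reflects the test body's outcome.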

(root/"calc"/"__init__.py").write_text(textwrap.dedent('''
from .vector import Vector

def add(a,b): return a+b

def div(a,b):
    if b==0: raise ZeroDivisionError("division by zero")
    return a/b

def moving_avg(xs,k):
    if k<=0 or k>len(xs): raise ValueError("bad window")
    out=[]; s=sum(xs[:k]); out.append(s/k)
    for i in range(k,len(xs)):
        s+=xs[i]-xs[i-k]; out.append(s/k)
    return out
'''))


(root/"calc"/"vector.py").write_text(textwrap.dedent('''
class Vector:
    __slots__=("x","y","z")
    def __init__(self,x=0,y=0,z=0): self.x,self.y,self.z=float(x),float(y),float(z)
    def __add__(self,o): return Vector(self.x+o.x,self.y+o.y,self.z+o.z)
    def __sub__(self,o): return Vector(self.x-o.x,self.y-o.y,self.z-o.z)
    def __mul__(self,s): return Vector(self.x*s,self.y*s,self.z*s)
    __rmul__=__mul__
    def norm(self): return (self.x**2+self.y**2+self.z**2)**0.5
    def __eq__(self,o): return abs(self.x-o.x)<1e-9 and abs(self.y-o.y)<1e-9 and abs(self.z-o.z)<1e-9
'''))

Next we build our core calculation module. The calc package defines simple mathematical utilities to exercise logic tests: addition, division with error handling, and a moving-average function. We also create a Vector class for testing custom objects, operator overloading, and equality checks. Check out the FULL CODES here.
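The moving average is worth a closer look, since it uses a sliding window rather than re-summing each slice. The same logic, run standalone for illustration:

```python
def moving_avg(xs, k):
    if k <= 0 or k > len(xs):
        raise ValueError("bad window")
    s = sum(xs[:k])              # sum of the first window
    out = [s / k]
    for i in range(k, len(xs)):
        s += xs[i] - xs[i - k]   # slide: add the new element, drop the oldest
        out.append(s / k)
    return out

print(moving_avg([1, 2, 3, 4, 5], 3))  # [2.0, 3.0, 4.0]
```

Maintaining the running sum keeps the whole computation O(n), instead of the O(n·k) cost of summing every window from scratch, which is exactly what `test_avg` asserts against later.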

(root/"app"/"io_utils.py").write_text(textwrap.dedent('''
import pathlib, json, time

def save_json(path, obj):
    path=pathlib.Path(path); path.write_text(json.dumps(obj)); return path

def load_json(path): return json.loads(pathlib.Path(path).read_text())

def timed_operation(fn,*a,**kw):
    t0=time.time(); out=fn(*a,**kw); t1=time.time(); return out,t1-t0
'''))
(root/"app"/"api.py").write_text(textwrap.dedent('''
import os, time, random

def fetch_username(uid):
    if os.environ.get("API_MODE")=="offline": return f"cached_{uid}"
    time.sleep(0.001); return f"user_{uid}_{random.randint(100,999)}"
'''))


(root/"tests"/"test_calc.py").write_text(textwrap.dedent('''
import pytest, math
from calc import add, div, moving_avg
from calc.vector import Vector

@pytest.mark.parametrize("a,b,exp",[(1,2,3),(0,0,0),(-1,1,0)])
def test_add(a,b,exp): assert add(a,b)==exp

@pytest.mark.parametrize("a,b,exp",[(6,3,2),(8,2,4)])
def test_div(a,b,exp): assert div(a,b)==exp

@pytest.mark.xfail(raises=ZeroDivisionError)
def test_div_zero(): div(1,0)

def test_avg(): assert moving_avg([1,2,3,4,5],3)==[2,3,4]

def test_vector_ops(): v=Vector(1,2,3)+Vector(4,5,6); assert v==Vector(5,7,9)
'''))


(root/"tests"/"test_io_api.py").write_text(textwrap.dedent('''
import pytest, os
from app.io_utils import save_json, load_json, timed_operation
from app.api import fetch_username

@pytest.mark.io
def test_io(temp_json_file,tmp_path):
    d={"x":5}; p=tmp_path/"a.json"; save_json(p,d); assert load_json(p)==d
    assert load_json(temp_json_file)=={"msg":"hi"}

def test_timed(capsys):
    val,dt=timed_operation(lambda x:x*3,7); print("dt=",dt); out=capsys.readouterr().out
    assert "dt=" in out and val==21

@pytest.mark.api
def test_api(monkeypatch):
    monkeypatch.setenv("API_MODE","offline")
    assert fetch_username(9)=="cached_9"
'''))


(root/"tests"/"test_slow.py").write_text(textwrap.dedent('''
import time, pytest
@pytest.mark.slow
def test_slow(event_log,fake_clock):
   event_log.append(f"start@{fake_clock['now']}")
   fake_clock["now"]+=3.0
   event_log.append(f"end@{fake_clock['now']}")
   assert len(event_log)==2
'''))

To simulate the real world, we add lightweight JSON I/O utilities and a mocked API service. We write tests that validate logic and side effects using markers, tmp_path, monkeypatch, capsys, xfail, and parametrization. Wiring the slow test to the event_log and fake_clock fixtures lets us control timing and demonstrate session-wide state. Check out the FULL CODES here.
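What monkeypatch.setenv does for `test_api` can be shown inline. This sketch re-implements the offline branch our tests exercise (the live branch's random suffix is replaced by a fixed `_000` here, purely for determinism):

```python
import os

def fetch_username(uid):
    # offline mode returns a deterministic cached value, as in app/api.py
    if os.environ.get("API_MODE") == "offline":
        return f"cached_{uid}"
    return f"user_{uid}_000"

# monkeypatch.setenv("API_MODE", "offline") does this, then undoes it at teardown
os.environ["API_MODE"] = "offline"
print(fetch_username(9))  # cached_9
```

The point of the fixture is the automatic cleanup: monkeypatch restores the environment after the test, so no other test ever sees the override.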

print("📦 Project created at:", root)
print("\n▶️ RUN #1 (default, skips @slow)\n")
r1=subprocess.run([sys.executable,"-m","pytest",str(root)],text=True)
print("\n▶️ RUN #2 (--runslow)\n")
r2=subprocess.run([sys.executable,"-m","pytest",str(root),"--runslow"],text=True)


summary_file=root/"summary.json"
summary={
    "total_tests":sum(p.read_text().count("def test_") for p in root.rglob("test_*.py")),
    "runs": ["default","--runslow"],
    "results": ["success" if r1.returncode==0 else "fail",
                "success" if r2.returncode==0 else "fail"],
    "contains_slow_tests": True,
    "example_event_log":["start@1000.0","end@1003.0"]
}
summary_file.write_text(json.dumps(summary,indent=2))
print("\n📊 FINAL SUMMARY")
print(json.dumps(summary,indent=2))
print("\n✅ Tutorial completed — all tests & summary generated successfully.")

We now run our test suite twice: first with the default configuration that skips slow tests, then again with the --runslow flag to include them. The JSON summary records the results of both runs along with the number of tests discovered. It provides a snapshot of the project's testing status and confirms that every component works as expected.
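Because the summary is plain JSON, any downstream tool can consume it. A small sketch of how a CI step might parse a summary like the one above (the values here are illustrative, not real run output):

```python
import json

# a summary in the same shape the script writes to summary.json
text = json.dumps({
    "total_tests": 9,
    "runs": ["default", "--runslow"],
    "results": ["success", "success"],
}, indent=2)

summary = json.loads(text)
failed_runs = [run for run, res in zip(summary["runs"], summary["results"])
               if res != "success"]
print("all green" if not failed_runs else f"failed: {failed_runs}")  # all green
```

A CI job could exit nonzero when `failed_runs` is non-empty, turning the summary file into a machine-readable gate.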

In conclusion, PyTest helps us test faster and smarter. Our plugin tracks results, our fixtures manage state, and slow tests can be toggled on demand, all within a clean workflow. The JSON summary illustrates how PyTest integrates with modern CI pipelines and analytics. With this foundation in place, we can confidently extend it with coverage, benchmarking, and even parallel execution for professional-grade, large-scale testing.




Asif Razzaq is the CEO of Marktechpost Media Inc. An entrepreneur with a passion for harnessing Artificial Intelligence's potential to benefit society, his most recent venture is Marktechpost, a platform covering machine learning and deep learning news that is known for being both technically sound and accessible to a broad audience, drawing over 2 million monthly views.
