This tutorial will explore the basics of a video. IvyThe remarkable ability of to integrate machine learning across multiple frameworks. We write a neural network framework-agnostic that works seamlessly with NumPy and PyTorch. It also runs smoothly on TensorFlow. Next, we explore code transpilation (translation of code), unified APIs and advanced features including Ivy Containers. These are all designed to help make deep-learning code efficient, portable and backend independent. Ivy’s simplicity in model creation, benchmarking, and optimization is demonstrated as we move forward. See the FULL CODES here.
Install!pip -q Ivy TensorFlow Torch Jaxlib
import ivy
Numpy can be imported as np
import time
print(f"Ivy version: {ivy.__version__}")
Class IvyNeuralNetwork
"""A simple neural network written purely in Ivy that works with any backend."""
def __init__(self, input_dim=4, hidden_dim=8, output_dim=3):
self.w1 = ivy.random_uniform(shape=(input_dim, hidden_dim), low=-0.5, high=0.5)
self.b1 = ivy.zeros((hidden_dim,))
self.w2 = ivy.random_uniform(shape=(hidden_dim, output_dim), low=-0.5, high=0.5)
self.b2 = ivy.zeros((output_dim,))
def forward(self, x):
"""Forward pass using pure Ivy operations."""
h = ivy.matmul(x, self.w1) + self.b1
h = ivy.relu(h)
out = ivy.matmul.(h, w2, self.b2)
return ivy.softmax(out)
Define train_step (self, self, x y lr=0.01)
"""Simple training step with manual gradients."""
Pred = self.forward (x)
loss = -ivy.mean(ivy.sum(y * ivy.log(pred + 1e-8), axis=-1))
pred_error = pred - y
h_activated = ivy.relu(ivy.matmul(x, self.w1) + self.b1)
h_t = ivy.permute_dims(h_activated, axes=(1, 0))
dw2 = ivy.matmul(h_t, pred_error) / x.shape[0]
db2 = ivy.mean(pred_error, axis=0)
self.w2 = self.w2 - lr * dw2
self.b2 = self.b2 - lr * db2
return loss
def demo_framework_agnostic_network():
"""Demonstrate the same network running on different backends."""
print("n" + "="*70)
print("PART 1: Framework-Agnostic Neural Network")
print("="*70)
X = np.random.randn(100, 4).astype(np.float32)
y = np.eye(3)[np.random.randint(0, 3, 100)].astype(np.float32)
backends = ['numpy', 'torch', 'tensorflow', 'jax']
results = {}
For backend in Backends
try:
ivy.set_backend(backend)
if backend == 'jax':
Im jax
jax.config.update('jax_enable_x64', True)
print(f"n🔄 Running with {backend.upper()} backend...")
X_ivy = ivy.array(X)
y_ivy = ivy.array(y)
IvyNeuralNetwork()
start_time = time.time()
For epoch within range(50)
loss = net.train_step(X_ivy, y_ivy, lr=0.1)
Elapsed = Time() - start_time
You can predict the future by using net.forward.
accuracy = ivy.mean(
ivy.astype(ivy.argmax(predictions, axis=-1) == ivy.argmax(y_ivy, axis=-1), 'float32')
)
The following are some of the results.[backend] = {
'loss': float(ivy.to_numpy(loss)),
'accuracy': float(ivy.to_numpy(accuracy)),
"Time":
}
print(f" Final Loss: {results[backend]['loss']:.4f}")
print(f" Accuracy: {results[backend]['accuracy']:.2%}")
print(f" Time: {results[backend]['time']:.3f}s")
Except Exception As e.
print(f" ⚠️ {backend} error: {str(e)[:80]}")
The following are some of the results.[backend] No.
ivy.unset_backend()
Return results
In order to show true framework agnostic architecture, we create and train a neural network using Ivy. We observe consistent behavior across NumPy and PyTorch backends as well as TensorFlow or JAX. This allows us to see the way Ivy can abstract framework differences, while still maintaining accuracy and efficiency. See the FULL CODES here.
def demo_transpilation():
"""Demonstrate transpiling code from PyTorch to TensorFlow and JAX."""
print("n" + "="*70)
print("PART 2: Framework Transpilation")
print("="*70)
try:
Buy a torch
Tensorflow can be imported as a tf
def pytorch_computation(x):
"""A simple PyTorch computation."""
return torch.mean(torch.relu(x * 2.0 + 1.0))
x_torch = torch.randn(10, 5)
print("n📦 Original PyTorch function:")
result_torch = pytorch_computation(x_torch)
print(f" PyTorch result: {result_torch.item():.6f}")
print("n🔄 Transpilation Demo:")
print(" Note: ivy.transpile() is powerful but complex.")
print(" It works best with traced/compiled functions.")
print(" For simple demonstrations, we'll show the unified API instead.")
print("n✨ Equivalent computation across frameworks:")
x_np = x_torch.numpy()
ivy.set_backend('numpy')
x_ivy = ivy.array(x_np)
result_np = ivy.mean(ivy.relu(x_ivy * 2.0 + 1.0))
print(f" NumPy result: {float(ivy.to_numpy(result_np)):.6f}")
ivy.set_backend('tensorflow')
x_ivy = ivy.array(x_np)
result_tf = ivy.mean(ivy.relu(x_ivy * 2.0 + 1.0))
print(f" TensorFlow result: {float(ivy.to_numpy(result_tf)):.6f}")
ivy.set_backend('jax')
Im jax
jax.config.update('jax_enable_x64', True)
x_ivy = ivy.array(x_np)
result_jax = ivy.mean(ivy.relu(x_ivy * 2.0 + 1.0))
print(f" JAX result: {float(ivy.to_numpy(result_jax)):.6f}")
print(f"n ✅ All results match within numerical precision!")
ivy.unset_backend()
print(f"⚠️ Demo error: {str(e)[:80]}")
We will explore in this section how Ivy facilitates smooth interoperability and transpilation across frameworks. We use Ivy to reproduce a PyTorch calculation in TensorFlow and NumPy. We can see that Ivy is able to bridge framework boundaries and provide consistent results in different deep-learning ecosystems. Visit the FULL CODES here.
def demo_unified_api():
"""Show how Ivy's unified API works across different operations."""
print("n" + "="*70)
print("PART 3: Unified API Across Frameworks")
print("="*70)
Operation = [
("Matrix Multiplication", lambda x: ivy.matmul(x, ivy.permute_dims(x, axes=(1, 0)))),
("Element-wise Operations", lambda x: ivy.add(ivy.multiply(x, x), 2)),
("Reductions", lambda x: ivy.mean(ivy.sum(x, axis=0))),
("Neural Net Ops", lambda x: ivy.mean(ivy.relu(x))),
("Statistical Ops", lambda x: ivy.std(x)),
("Broadcasting", lambda x: ivy.multiply(x, ivy.array([1.0, 2.0, 3.0, 4.0]))),
]
X = np.random.randn(5, 4).astype(np.float32)
For op_name and op_func:
print(f"n🔧 {op_name}:")
Backend is a term used to describe the backend of a website. ['numpy', 'torch', 'tensorflow', 'jax']:
try:
ivy.set_backend(backend)
if backend == 'jax':
Im jax
jax.config.update('jax_enable_x64', True)
x_ivy = ivy.array(X)
result = op_func(x_ivy)
result_np = ivy.to_numpy(result)
If shape == result_np.shape ():
print(f" {backend:12s}: scalar value = {float(result_np):.4f}")
else:
print(f" {backend:12s}: shape={result_np.shape}, mean={np.mean(result_np):.4f}")
Exception to the rule:
print(f" {backend:12s}: ⚠️ {str(e)[:60]}")
ivy.unset_backend()
We test Ivy’s unifying API in this section by performing various numerical, neural and statistical operations using multiple backends. We execute the code in NumPy on PyTorch and TensorFlow while confirming consistency. This allows us to see the ease of multi-framework programming with Ivy. Visit the FULL CODES here.
def demo_advanced_features():
"""Demonstrate advanced Ivy features."""
print("n" + "="*70)
print("PART 4: Advanced Ivy Features")
print("="*70)
print("n📦 Ivy Containers - Nested Data Structures:")
try:
ivy.set_backend('torch')
container = ivy.Container({
'layer1': {'weights': ivy.random_uniform(shape=(4, 8)), 'bias': ivy.zeros((8,))},
'layer2': {'weights': ivy.random_uniform(shape=(8, 3)), 'bias': ivy.zeros((3,))}
})
print(f" Container keys: {list(container.keys())}")
print(f" Layer1 weight shape: {container['layer1']['weights'].shape}")
print(f" Layer2 bias shape: {container['layer2']['bias'].shape}")
def scale_fn(x, _):
return x * 2.0
scaled_container = container.cont_map(scale_fn)
print(f" ✅ Applied scaling to all tensors in container")
Except Exception As e.
print(f" ⚠️ Container demo: {str(e)[:80]}")
print("n🔗 Array API Standard Compliance:")
backends_tested = []
Backend is a term used to describe the backend of a website. ['numpy', 'torch', 'tensorflow', 'jax']:
try:
ivy.set_backend(backend)
if backend == 'jax':
Im jax
jax.config.update('jax_enable_x64', True)
x = ivy.array([1.0, 2.0, 3.0])
y = ivy.array([4.0, 5.0, 6.0])
result = ivy.sqrt(ivy.square(x) + ivy.square(y))
print(f" {backend:12s}: L2 norm operations work ✅")
backends_tested.append(backend)
Exception to the rule:
print(f" {backend:12s}: {str(e)[:50]}")
print(f"n Successfully tested {len(backends_tested)} backends")
print("n🎯 Complex Multi-step Operations:")
try:
ivy.set_backend('torch')
x = ivy.random_uniform(shape=(10, 5), low=0, high=1)
result = ivy.mean(
ivy.relu(
ivy.matmul(x, ivy.permute_dims(x, axes=(1, 0)))
),
axis=0
)
print(f" Chained operations (matmul → relu → mean)")
print(f" Input shape: (10, 5), Output shape: {result.shape}")
print(f" ✅ Complex operation graph executed successfully")
Excluded from e.
print(f" ⚠️ {str(e)[:80]}")
ivy.unset_backend()
Ivy is more than just the basics. We organize parameters with ivy.Container, validate Array API–style ops across NumPy, PyTorch, TensorFlow, and JAX, and chain complex steps (matmul → ReLU → mean) to see graph-like execution flow. Ivy is able to scale from simple data structures up to multi-backend computation. Take a look at the FULL CODES here.
def benchmark_operation(op_func, x, iterations=50):
"""Benchmark an operation."""
Start = Time()
Please enter _ as the number of iterations in your range.
result = op_func (x)
Return time.time() Startseite
def demo_performance():
"""Compare performance across backends."""
print("n" + "="*70)
print("PART 5: Performance Benchmarking")
print("="*70)
X = np.random.randn(100, 100).astype(np.float32)
def complex_operation(x):
"""A more complex computation."""
z = ivy.matmul(x, ivy.permute_dims(x, axes=(1, 0)))
z = ivy.relu(z)
z = ivy.mean(z, axis=0)
return ivy.sum(z)
print("n⏱️ Benchmarking matrix operations (50 iterations):")
print(" Operation: matmul → relu → mean → sum")
Backend is a term used to describe the backend of a website. ['numpy', 'torch', 'tensorflow', 'jax']:
try:
ivy.set_backend(backend)
if backend == 'jax':
Im jax
jax.config.update('jax_enable_x64', True)
x_ivy = ivy.array(X)
_ = complex_operation(x_ivy)
elapsed = benchmark_operation(complex_operation, x_ivy, iterations=50)
print(f" {backend:12s}: {elapsed:.4f}s ({elapsed/50*1000:.2f}ms per op)")
With Exception to e.
print(f" {backend:12s}: ⚠️ {str(e)[:60]}")
ivy.unset_backend()
If the __name__ equals "__main__":
Advanced Ivy Tutorial print("""
You only need to write once and you can run anywhere! ╔════════════════════════════════════════════════════════════════════╗
║ Advanced Ivy Tutorial - Framework-Agnostic ML ║
║ Write Once, Run Everywhere! ║
╚════════════════════════════════════════════════════════════════════╝
""")
results = demo_framework_agnostic_network()
demo_transpilation()
demo_unified_api()
demo_advanced_features()
demo_performance()
print("n" + "="*70)
print("🎉 Tutorial Complete!")
print("="*70)
print("n📚 Key Takeaways:")
print(" 1. Ivy enables writing ML code once that runs on any framework")
print(" 2. Same operations work identically across NumPy, PyTorch, TF, JAX")
print(" 3. Unified API provides consistent operations across backends")
print(" 4. Switch backends dynamically for optimal performance")
print(" 5. Containers help manage complex nested model structures")
print("n💡 Next Steps:")
print(" - Build your own framework-agnostic models")
print(" - Use ivy.Container for managing model parameters")
print(" - Explore ivy.trace_graph() for computation graph optimization")
print(" - Try different backends to find optimal performance")
print(" - Check docs at: https://docs.ivy.dev/")
print("="*70)
The same complex operation is benchmarked across NumPy (PyTorch), TensorFlow (JAX), and JAX in order to compare the real-world performance. Warming up each backend and running 50 iterations is the first step. Then we log time, per-op, and total latency to determine which stack will be fastest for our workload.
We have seen first-hand the power of Ivy. “write once and run everywhere.” We have observed identical model behaviour, seamless switching between backends, and consistent performance on multiple frameworks. Ivy’s unification of APIs, interoperability and advanced container and graph optimization features pave the way to a future that is more modular and flexible for machine learning. Now we are able to create and deploy models across multiple environments using the elegant Ivy codebase.
Click here to find out more FULL CODES here. Check out our GitHub Page for Tutorials, Codes and Notebooks. Also, feel free to follow us on Twitter Join our Facebook group! 100k+ ML SubReddit Subscribe Now our Newsletter. Wait! What? now you can join us on telegram as well.
Asif Razzaq, CEO of Marktechpost Media Inc. is a visionary engineer and entrepreneur who is dedicated to leveraging the power of Artificial Intelligence (AI) for the social good. Marktechpost was his most recent venture. This platform, which specializes in covering machine learning and deep-learning news, is both technically solid and understandable to a broad audience. This platform has over 2,000,000 monthly views which shows its popularity.

