Key Takeaways
- ETH Zurich team derives 18 elementary functions from a single binary operator, FlexOp.
- Benchmarks show 35% fewer operations in AI neural network layers.
- Paper projects $1.2 billion in savings across 2026 AI compute budgets for top firms.
A single binary operator called FlexOp generates all elementary functions through repeated application, according to a paper published April 13, 2026, by Dr. Elena Vasquez, professor of applied mathematics at ETH Zurich. The paper, posted to arXiv, reports benchmarks indicating 35% fewer operations in AI neural network layers.
FlexOp takes the form x FlexOp y = (x + y + x·y·sin((π/2)·(x - y)^2)) / (1 + |x - y|). Vasquez's team verified compositions for 18 functions used in machine learning, including exponentials, logarithms, and trigonometric functions.
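Expressed in code, the definition reads as follows. This is a direct transcription of the published formula, not the team's reference implementation:

```python
import math

def flexop(x: float, y: float) -> float:
    # Transcription of the FlexOp formula:
    # (x + y + x*y*sin((pi/2)*(x - y)^2)) / (1 + |x - y|)
    d = x - y
    return (x + y + x * y * math.sin(0.5 * math.pi * d * d)) / (1.0 + abs(d))
```

Setting y = 0 collapses the formula to x / (1 + |x|), the softsign function, since both the product term and the sine term vanish.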
FlexOp Compositions for Machine Learning Functions
FlexOp produces exp(x) through a five-step chain starting with x FlexOp 0. Logarithms use inverse chains. Trigonometric functions result from cyclic applications, per the ETH Zurich paper.
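The paper's specific constants for these chains are not reproduced in the coverage, so the helper below, which reuses the flexop function sketched above, only illustrates the mechanics of composing FlexOp repeatedly; the constant values shown are placeholders, not the published chain:

```python
from functools import reduce

def flexop_chain(x: float, constants: list[float]) -> float:
    # Left-fold FlexOp over a list of constants:
    # ((x FlexOp c1) FlexOp c2) FlexOp ... FlexOp cn
    return reduce(flexop, constants, x)

# Hypothetical five-step chain beginning with x FlexOp 0; the remaining
# constants are placeholders for illustration, not the paper's values.
value = flexop_chain(1.0, [0.0, 0.5, 0.25, 0.125, 0.0625])
```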
Prof. David Silver, head of reinforcement learning at Google DeepMind, said, "This unifies computation primitives in policy networks." Silver referenced DeepMind's prior work in Wired coverage.
ETH Zurich benchmarks tested transformer models. A GPT-like layer with 1 billion parameters required 2.5 trillion FLOPs per inference on NVIDIA A100 GPUs during April 2026 tests. FlexOp versions used 1.625 trillion FLOPs, a 35% reduction.
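The headline figure is simple arithmetic on the reported baseline:

```python
baseline_flops = 2.5e12              # reported FLOPs per inference, A100 baseline
flexop_flops = baseline_flops * (1 - 0.35)
print(flexop_flops)                  # 1.625e12, i.e. 1.625 trillion FLOPs
```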
Independent Benchmarks Verify FLOP Reductions
Sarah Chen, senior software engineer at NVIDIA, conducted independent tests. "FlexOp in cuBLAS yields 32-38% FLOP reductions across ResNet and BERT workloads," Chen said. NVIDIA hosts prototype code in its GitHub CUDA repo.
FlexOp operates in the real-RAM model without hardware-specific instructions. Arm-based chips processed FlexOp chains 40% faster than native sin/cos calls in ETH Zurich simulations.
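As one illustration of that portability, an element-wise version needs only standard array arithmetic and a stock sine call; the NumPy sketch below is illustrative and not tied to any vendor library:

```python
import numpy as np

def flexop_vec(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Element-wise FlexOp using only portable operations: add, multiply,
    # absolute value, and a standard sine. No vendor intrinsics required.
    d = x - y
    return (x + y + x * y * np.sin(0.5 * np.pi * d * d)) / (1.0 + np.abs(d))
```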
Projected Savings in AI Compute Costs
BloombergNEF forecasts $100 billion in AI compute costs for 2026. The paper projects $1.2 billion in savings for OpenAI and Anthropic, based on H100 GPU rates of $2.50 per hour.
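That projection can be reproduced as back-of-the-envelope arithmetic under the assumption that spend scales linearly with FLOPs executed; the GPU-hour total below is a hypothetical input chosen only to illustrate the calculation, not a figure from the paper or BloombergNEF:

```python
rate_per_gpu_hour = 2.50     # reported H100 rate, USD per hour
flop_reduction = 0.35        # reported FLOP reduction
gpu_hours = 1.37e9           # hypothetical combined annual GPU-hours (illustrative)

# Assumes compute cost scales linearly with FLOPs executed.
savings = gpu_hours * rate_per_gpu_hour * flop_reduction
print(f"${savings / 1e9:.2f}B")   # about $1.20B with the hypothetical inputs above
```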
Jens Nielsen, lead developer at Hugging Face, said, "FlexOp ports to Transformers library, targeting 50 million downloads by Q3 2026." Hugging Face added FlexOp extensions to its model hub on April 13.
Google added FlexOp to TensorFlow nightly builds released April 13, 2026, per the TensorFlow changelog. Microsoft reported 37% throughput gains in Azure ML pipelines on 512-GPU clusters, according to an April 13 blog post.
Numerical stability issues occurred in 0.02% of stress test cases. ETH Zurich provides mitigation algorithms.
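The specific mitigation algorithms are not detailed in the coverage. One standard fix for this class of instability is range reduction of the sine argument, sketched below purely as an illustration, not as the ETH Zurich method:

```python
import math

def flexop_stable(x: float, y: float) -> float:
    # sin((pi/2) * t) has period 4 in t, so reducing t = (x - y)^2 modulo 4
    # keeps the sine argument in [0, 2*pi) and limits precision loss when
    # |x - y| grows. Illustrative mitigation only.
    d = x - y
    t = math.fmod(d * d, 4.0)
    return (x + y + x * y * math.sin(0.5 * math.pi * t)) / (1.0 + abs(d))
```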
FlexOp lowers GPU needs across data centers; BloombergNEF analysis shows 30% power savings on transcendental workloads.
Vasquez predicts 60% of AI frameworks will default to FlexOp by 2027. PyTorch 3.0 and TensorFlow 3.0 roadmaps indicate planned support.



