2.7.0 Release Contributors (Chronological Order)
The following lists all contributors to the NVFLARE 2.7.0 release (from 2.6.2 to 2.7.0), ordered by when they first contributed.
- Zhijin - 4 commits
- Holger Roth - 36 commits
- Zhihong Zhang - 19 commits
- Sean Yang - 8 commits
- Ziyue Xu - 26 commits
- Yuan-Ting Hsieh (謝沅廷) - 82 commits
- Yan Cheng - 43 commits
- Isaac Yang - 10 commits
- Chester Chen - 59 commits
- Kevin Lu - 17 commits
- Emmanuel Ferdman - 1 commit
- Georg Slamanig - 1 commit
- Ruben Bagan Benavides - 1 commit
- Peixin - 1 commit
- Yuanyuan Chen - 5 commits
- Francesco Farina - 1 commit
- Suizhi Huang - 1 commit
Welcome First-Time Contributors!
We would like to extend a special thank you to the following contributors who made their first commits to NVFLARE in this release:
- Emmanuel Ferdman (@emmanuel-ferdman) - 1 commit - PR #3469
- Georg Slamanig (@gslama12) - 1 commit - PR #3495
- Ruben Bagan Benavides (@rbagan) - 1 commit - PR #3506
- Yuanyuan Chen (@cyyever) - 5 commits - PR #3573
- Suizhi Huang (@JeanDiable) - 1 commit - PR #3629
Thank you for your contributions to the NVFLARE community!
Highlights of This Release
For the complete list of updates and features, please see the full release notes.
Below are some of the key highlights in this version:
Confidential Federated AI
Read more in the FLARE Confidential Federated AI Guide.
First-of-its-Kind End-to-End IP Protection
This release introduces a first-of-its-kind, end-to-end intellectual property (IP) protection solution for federated learning using confidential computing. The solution supports on-premise deployments on bare metal with AMD CPUs and NVIDIA GPUs running inside Confidential VMs (CVMs).
End-to-End Protection
End-to-end protection means safeguarding runtime IP (models and code) while ensuring deployment integrity against CVM tampering or unauthorized modification.
Key Capabilities
- Secure Aggregation (Server-Side): Prevents privacy leaks through shared model parameters (see the sketch after this list).
- Model Theft Protection (Client-Side): Protects proprietary model IP during collaboration.
- Data Leak Prevention (Client-Side): Only pre-approved, certified code runs inside the CVM; no one can alter the code inside it.
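For intuition, the server-side guarantee can be pictured as follows: aggregation executes inside the server's CVM, so individual client updates are never visible to the host, and only the combined result crosses the trust boundary. This is a conceptual sketch only, not FLARE's implementation; the function name and the use of plain NumPy arrays are illustrative assumptions.

import numpy as np

def aggregate_inside_cvm(client_updates: list[np.ndarray]) -> np.ndarray:
    # Conceptually, this body runs inside the Confidential VM: per-client
    # updates exist only in encrypted memory, invisible to the host OS.
    aggregate = np.mean(client_updates, axis=0)
    # Only the aggregate leaves the CVM boundary back to participants.
    return aggregate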
Job Recipe
Introducing the new Flare Job Recipe: a lightweight way to capture the code needed to specify the client training logic and the server-side algorithm. The same Job Recipe can run seamlessly in SimEnv, PoCEnv, or ProdEnv—from local experiments to production deployments.
With the Flare Job Recipe, the federated learning workflow becomes dramatically simpler for data scientists. In most cases, constructing a complete federated learning job requires only about six lines of Python code; combined with the Client API (typically around four lines), building and running federated learning experiments becomes almost effortless.
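For reference, the training script a recipe points at (client.py in the example below) typically follows the Client API pattern sketched here. This is a minimal sketch of the documented pattern, with the local-training step elided; the flare.FLModel re-export and exact loop structure may vary slightly by version.

import nvflare.client as flare

flare.init()                           # register this process with FLARE
while flare.is_running():
    input_model = flare.receive()      # current global model from the server
    params = input_model.params        # plain dict of model weights
    # ... run normal local training on `params` here ...
    flare.send(flare.FLModel(params=params))  # return the updated weights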
Example: FedAvg Job Recipe
import argparse

# FedAvgRecipe, SimEnv, and add_experiment_tracking come from NVFLARE's
# recipe API, and SimpleNetwork is the example's PyTorch model; their
# imports are omitted here (see the hello-pt example for the exact paths).

# Command-line arguments used below (defaults are example values).
parser = argparse.ArgumentParser()
parser.add_argument("--n_clients", type=int, default=2)
parser.add_argument("--num_rounds", type=int, default=5)
parser.add_argument("--batch_size", type=int, default=32)
args = parser.parse_args()

n_clients = args.n_clients
num_rounds = args.num_rounds
batch_size = args.batch_size

recipe = FedAvgRecipe(
    name="hello-pt",
    min_clients=n_clients,
    num_rounds=num_rounds,
    initial_model=SimpleNetwork(),
    train_script="client.py",
    train_args=f"--batch_size {batch_size}",
)
add_experiment_tracking(recipe, tracking_type="tensorboard")

# Swap SimEnv for the PoC or production environment to reuse the same recipe.
env = SimEnv(num_clients=n_clients)
run = recipe.execute(env)

print()
print("Result can be found in:", run.get_result())
print("Job Status is:", run.get_status())
print()
Enhanced Communication: Port Consolidation and New HTTP Driver
Port Consolidation
Previously, FLARE's server required two separate ports: one for FL client/server communication and another for Admin client/server communication. In 2.7, these are merged into a single configurable port, reducing network configuration complexity. Dual-port mode remains available for environments with stricter network policies.
New HTTP Driver
The HTTP driver has been rewritten using aiohttp to address prior performance limitations. It now matches gRPC performance, while maintaining the same API, TLS support, and backward compatibility with existing deployments.
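To illustrate the kind of pattern aiohttp enables, the sketch below shows connection reuse and concurrent requests on one event loop. This is a generic illustration of the library's usage, not FLARE's internal driver code; the URL, function name, and payload handling are assumptions for the example.

import asyncio
import aiohttp

async def send_payloads(url: str, payloads: list[bytes]) -> list[int]:
    # One ClientSession pools and reuses TCP/TLS connections, avoiding a
    # fresh handshake per request; this is a key source of aiohttp's speed.
    async with aiohttp.ClientSession() as session:

        async def post_one(payload: bytes) -> int:
            async with session.post(url, data=payload) as resp:
                return resp.status

        # All posts run concurrently on a single event-loop thread.
        return await asyncio.gather(*(post_one(p) for p in payloads))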
Key Benefits
- Consolidated Port: Reduced from two ports to a single port, simplifying deployment.
- Standard Port Compatibility: Use standard HTTPS port 443 - no need for IT to open additional ports.
- High Performance: New HTTP driver matches gRPC in speed and reliability.
Develop Edge Applications with FLARE
FLARE 2.7 extends federated learning to edge devices with features that directly address the unique challenges of edge environments:
Key Features
- Scalability: Hierarchical Federated Architecture. Hierarchical FLARE allows millions of edge devices to participate efficiently without connecting each directly to the server.
- Intermittent Device Participation: Asynchronous FL based on FedBuff. FLARE handles devices that may join, leave, or fail to return local training results due to network or power interruptions (see the sketch after this list).
- Cross-Platform, No Device Programming Required: Data scientists can deploy models to iOS and Android with FLARE Mobile Development without writing Swift, Objective-C, Java, or Kotlin. FLARE handles PyTorch → ExecuTorch conversion and device training code automatically.
- Simulation Tools: Device Simulator for Large-Scale Testing. Test and validate edge deployments at scale before production deployment.
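The FedBuff idea behind the asynchronous workflow can be sketched as follows: the server never waits for a fixed cohort, but updates the global model as soon as any fixed number of client deltas arrive, so dropped devices simply never contribute. This is an illustrative outline only, not FLARE's implementation; the function names, buffer size, learning rate, and plain NumPy arrays are assumptions.

import numpy as np

def make_fedbuff_server(global_model: np.ndarray, buffer_size: int = 10, lr: float = 1.0):
    # Buffer of client deltas received since the last global update.
    buffer: list[np.ndarray] = []

    def on_client_update(delta: np.ndarray) -> np.ndarray:
        nonlocal global_model
        buffer.append(delta)
        # Aggregate whichever `buffer_size` deltas arrived first, regardless
        # of which devices sent them or which round they started from.
        if len(buffer) >= buffer_size:
            global_model = global_model + lr * np.mean(buffer, axis=0)
            buffer.clear()
        return global_model

    return on_client_update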
Complete Self-Paced Training Tutorials
NVIDIA FLARE's five-part, self-paced course on federated learning covers everything from the fundamentals to advanced applications, system deployment, privacy, security, and real-world industry use cases.
This comprehensive tutorial is now complete with 100+ notebooks and 80 instructional videos, all fully recorded and ready for learning. See details in Self-Paced Training Tutorials.
Full Changelog: 2.6.0...2.7.0