2.7.0: Major Release

2.7.0 Release Contributors (Chronological Order)

The following lists all contributors to the NVFLARE 2.7.0 release (from 2.6.2 to 2.7.0), ordered by when they first contributed.

  1. Zhijin - 4 commits
  2. Holger Roth - 36 commits
  3. Zhihong Zhang - 19 commits
  4. Sean Yang - 8 commits
  5. Ziyue Xu - 26 commits
  6. Yuan-Ting Hsieh (謝沅廷) - 82 commits
  7. Yan Cheng - 43 commits
  8. Isaac Yang - 10 commits
  9. Chester Chen - 59 commits
  10. Kevin Lu - 17 commits
  11. Emmanuel Ferdman - 1 commit
  12. Georg Slamanig - 1 commit
  13. Ruben Bagan Benavides - 1 commit
  14. Peixin - 1 commit
  15. Yuanyuan Chen - 5 commits
  16. Francesco Farina - 1 commit
  17. Suizhi Huang - 1 commit

Welcome First-Time Contributors!

We would like to extend a special thank you to the following contributors who made their first commits to NVFLARE in this release:

Thank you for your contributions to the NVFLARE community!


Highlights of This Release

For the complete list of updates and features, please see the full release notes. Below are some of the key highlights of this version:

Confidential Federated AI

Read more in the FLARE Confidential Federated AI Guide.

First-of-its-Kind End-to-End IP Protection

This release introduces a first-of-its-kind, end-to-end intellectual property (IP) protection solution for federated learning using confidential computing. The solution supports on-premise deployments on bare metal with AMD CPUs and NVIDIA GPUs running inside Confidential VMs (CVMs).

End-to-End Protection

End-to-end protection means safeguarding runtime IP (models and code) while ensuring deployment integrity against CVM tampering or unauthorized modification.

Key Capabilities

  • Secure Aggregation (Server-Side): Prevents privacy leaks through shared model parameters.
  • Model Theft Protection (Client-Side): Protects proprietary model IP during collaboration.
  • Data Leak Prevention (Client-Side): Only pre-approved, certified code runs inside the CVM; no one can alter that code to exfiltrate data.

Job Recipe

Introducing the new FLARE Job Recipe: a lightweight way to capture the code needed to specify the client training logic and the server-side algorithm. The same Job Recipe runs seamlessly in SimEnv, PoCEnv, or ProdEnv, from local experiments to production deployments.

With the FLARE Job Recipe, we are making the federated learning workflow dramatically simpler for data scientists. In most cases, constructing a complete federated learning job takes only around six lines of Python code. Combined with the Client API (typically just a few lines on the client side), building and running federated learning experiments becomes almost effortless.

Example: FedAvg Job Recipe

import argparse

from model import SimpleNetwork  # the example's local PyTorch model

# Import paths follow the 2.7 recipe examples; adjust if your install differs.
from nvflare.app_opt.pt.recipes.fedavg import FedAvgRecipe
from nvflare.recipe.sim_env import SimEnv
from nvflare.recipe.utils import add_experiment_tracking

parser = argparse.ArgumentParser()
parser.add_argument("--n_clients", type=int, default=2)
parser.add_argument("--num_rounds", type=int, default=5)
parser.add_argument("--batch_size", type=int, default=32)
args = parser.parse_args()

n_clients = args.n_clients
num_rounds = args.num_rounds
batch_size = args.batch_size

# Define the FedAvg job: server algorithm, initial model, client script.
recipe = FedAvgRecipe(
    name="hello-pt",
    min_clients=n_clients,
    num_rounds=num_rounds,
    initial_model=SimpleNetwork(),
    train_script="client.py",
    train_args=f"--batch_size {batch_size}",
)
add_experiment_tracking(recipe, tracking_type="tensorboard")

# Execute in simulation; swap in PoCEnv or ProdEnv for other environments.
env = SimEnv(num_clients=n_clients)
run = recipe.execute(env)
print()
print("Result can be found in:", run.get_result())
print("Job Status is:", run.get_status())
print()
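
The train_script referenced above, client.py, uses the FLARE Client API. A minimal sketch of such a script is shown below; the real hello-pt client runs actual PyTorch training where the comment indicates.

# client.py - minimal Client API loop (a sketch; real training code
# would run where indicated).
import nvflare.client as flare
from nvflare.app_common.abstract.fl_model import FLModel

flare.init()  # register this process with the FLARE client

while flare.is_running():
    input_model = flare.receive()       # global model from the server
    params = input_model.params         # dict of model weights

    # ... local training on the site's data goes here ...

    flare.send(FLModel(params=params))  # return the updated weights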

Enhanced Communication: Port Consolidation and New HTTP Driver

Port Consolidation

Previously, FLARE's server required two separate ports: one for FL client/server communication and another for Admin client/server communication. In 2.7, these are merged into a single configurable port, reducing network configuration complexity. Dual-port mode remains available for environments with stricter network policies.

New HTTP Driver

The HTTP driver has been rewritten using aiohttp to address prior performance limitations. It now matches gRPC performance, while maintaining the same API, TLS support, and backward compatibility with existing deployments.
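
For readers unfamiliar with aiohttp, the sketch below shows the asynchronous I/O style the rewritten driver builds on. This is a generic example, not FLARE's driver code; the endpoint and port are hypothetical.

# Generic aiohttp example: a minimal asynchronous echo endpoint.
from aiohttp import web

async def echo(request: web.Request) -> web.Response:
    payload = await request.read()  # read the body without blocking the loop
    return web.Response(body=payload)

app = web.Application()
app.add_routes([web.post("/echo", echo)])

if __name__ == "__main__":
    web.run_app(app, port=8443)  # pass an SSLContext here to serve HTTPS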

Key Benefits
  • Consolidated Port: Reduced from two ports to a single port, simplifying deployment.
  • Standard Port Compatibility: The server can listen on the standard HTTPS port 443, so IT does not need to open additional ports.
  • High Performance: New HTTP driver matches gRPC in speed and reliability.

Develop Edge Applications with FLARE

FLARE 2.7 extends federated learning to edge devices with features that directly address the unique challenges of edge environments:

Key Features

  • Scalability: Hierarchical Federated Architecture
    Hierarchical FLARE allows millions of edge devices to participate efficiently without connecting each directly to the server.

  • Intermittent Device Participation: Asynchronous FL based on FedBuff
    FLARE handles devices that may join, leave, or fail to return local training results due to network or power interruptions; a conceptual sketch of buffered aggregation follows this list.

  • Cross-Platform & No Device Programming Required
    Data scientists can deploy models to iOS and Android with FLARE Mobile Development without writing Swift, Objective-C, Java, or Kotlin. FLARE handles the PyTorch → ExecuTorch conversion and the on-device training code automatically.

  • Simulation Tools: Device Simulator for Large Scale Testing
    Test and validate edge deployments at scale before production deployment.
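
To make the FedBuff idea concrete, here is a toy sketch of buffered asynchronous aggregation. It is an illustration only, not FLARE's implementation: it omits FedBuff's staleness weighting, and the names below are hypothetical.

import numpy as np

BUFFER_SIZE = 10              # K: client updates to buffer per server step
buffer = []
global_model = np.zeros(100)  # toy 100-parameter model

def on_client_update(delta, server_lr=1.0):
    # Called whenever any device reports an update; devices may join,
    # leave, or drop out freely since there is no synchronization barrier.
    global global_model
    buffer.append(delta)
    if len(buffer) >= BUFFER_SIZE:
        # Apply the average of the buffered updates to the global model.
        global_model += server_lr * np.mean(buffer, axis=0)
        buffer.clear()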


Complete Self-Paced Training Tutorials

NVIDIA FLARE's five-part self-paced course on federated learning covers everything from the fundamentals to advanced applications, system deployment, privacy, security, and real-world industry use cases.

This comprehensive tutorial is now complete with 100+ notebooks and 80 instructional videos, all fully recorded and ready for learning. See details in Self-Paced Training Tutorials.


Full Changelog: 2.6.0...2.7.0