Senior Software Engineer, NVIDIA
Yuan-Ting is currently focused on NVFlare, an application runtime environment designed for NVIDIA's federated learning initiatives. Before his work on NVFlare, Yuan-Ting was an integral part of the team that developed the Clara Train SDK and AIAA (Artificial Intelligence Assisted Annotation), which has since been integrated into MONAI (Medical Open Network for AI). Yuan-Ting holds a Master of Science in Computer Science from the University of Wisconsin-Madison and a Bachelor of Science in Electrical Engineering from National Taiwan University. His professional interests center on the intersection of Machine Learning and Distributed Systems.
NVIDIA FLARE: Federated Learning from simulation to production.
Federated Learning represents a paradigm shift from centralized, data-lake-focused machine learning. Instead of gathering data in one location, federated learning allows models to be trained directly on the devices or servers where the data resides, such as smartphones, edge devices, and local machines. This not only preserves data privacy and security but also significantly reduces data transfer requirements, making it ideal for scenarios with sensitive data, such as healthcare or finance. Federated Learning enables collaborative model training across a vast network of distributed machines or devices, leading to more personalized and efficient AI solutions while respecting user privacy.
NVIDIA FLARE (NVFLARE) is an open-source project dedicated to bringing privacy-preserving computation and machine learning to the federated setting while maintaining simplicity and production readiness. In this presentation, we will demonstrate how NVFLARE seamlessly transforms deep learning training code into federated learning code.
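To make that conversion concrete, here is a minimal sketch of what it can look like with the Client API available in recent NVFLARE releases. The model, data loaders, and train/evaluate helpers are hypothetical placeholders for existing local training code; treat the snippet as illustrative rather than the exact demo code.

    # Sketch: wrapping an existing training loop with the NVFLARE Client API.
    # model, train, evaluate, and the data loaders are placeholders for the
    # user's original (unchanged) deep learning code.
    import nvflare.client as flare

    flare.init()                       # register this process as an FL client

    while flare.is_running():
        input_model = flare.receive()  # receive the current global model
        model.load_state_dict(input_model.params)

        train(model, train_loader)     # original local training step
        accuracy = evaluate(model, val_loader)

        # send the locally updated weights (plus a metric) back for aggregation
        flare.send(flare.FLModel(params=model.state_dict(),
                                 metrics={"accuracy": accuracy}))

The local training and evaluation code stays essentially untouched; the federation logic is confined to the receive and send calls around it.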
We will walk through a few of the many examples and tutorials in the NVFLARE repository, including:
Federated Statistics
Federated XGBoost, Linear and Logistic Regression, KMeans, SVM, Random Forest
LLM Prompt Tuning, Supervised Fine-Tuning (SFT) and Parameter-Efficient Fine-Tuning (PEFT) with NeMo
Training Protein Classifiers with Graph Neural Networks (GNN)
Vertical Federated Learning (Vertical XGBoost and Split Learning)
Enabling Cyclic and Swarm Learning Workflows
Experiment Tracking with MLflow and Weights & Biases
Production deployment of an FL system on Azure and AWS
Interactive notebook experience with the FLARE API (see the sketch after this list)
CLI commands and interactive admin console
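To give a flavor of the FLARE API mentioned in the list above, a federated job can be submitted and monitored from a notebook roughly as follows. This is a minimal sketch assuming the flare_api module from recent NVFLARE releases; the admin user name, startup kit location, and job folder are placeholders.

    # Sketch: driving a running FL system from a notebook via the FLARE API.
    # The user name, startup kit path, and job folder below are hypothetical.
    from nvflare.fuel.flare_api.flare_api import new_secure_session

    sess = new_secure_session(
        username="admin@nvidia.com",
        startup_kit_location="/path/to/admin/startup_kit",
    )

    job_id = sess.submit_job("/path/to/job_folder")  # returns the new job id
    print(sess.get_job_meta(job_id))                 # inspect job metadata/status
    sess.monitor_job(job_id)                         # block until the job completes
    sess.close()

The same session object exposes operations comparable to the interactive admin console, such as listing jobs and downloading results, which is what makes the notebook workflow convenient.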
We will also touch on NVFLARE's component-based, layered architecture and discuss use cases in autonomous driving and healthcare.
Furthermore, if time permits, we will conduct a live demonstration showing how to simulate multiple clients on your local host to run federated learning jobs. Join us on this journey towards unlocking the potential of federated learning with NVFLARE!
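For reference, the kind of local simulation used in the demo can be reproduced with the FL Simulator. The sketch below assumes the SimulatorRunner class documented in recent NVFLARE releases; the job folder and workspace paths are placeholders, and a roughly equivalent CLI invocation is noted in a comment.

    # Sketch: running an FL job locally with the FL Simulator.
    # job_folder and workspace are hypothetical paths.
    from nvflare.private.fed.app.simulator.simulator_runner import SimulatorRunner

    simulator = SimulatorRunner(
        job_folder="/path/to/job_folder",
        workspace="/tmp/nvflare/simulator_workspace",
        n_clients=2,   # simulate two clients on this machine
        threads=2,     # run both clients in parallel threads
    )
    run_status = simulator.run()
    print("Simulator finished with status:", run_status)

    # Roughly equivalent CLI (flag names may vary by release):
    # nvflare simulator -w /tmp/nvflare/simulator_workspace -n 2 -t 2 /path/to/job_folder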