FLaaS: Federated Learning as a Service

Innovating Federated Learning models through a common effort
Thematic area: Projects, Tech
Financing: ICSC Innovation Grants
Enabling Technology: Advanced Modeling and Simulation, Machine Learning

Federated Learning (FL) is a Machine Learning paradigm attracting particular research interest thanks to its ability to train a model collaboratively across multiple partners without sharing or exchanging the actual data. Unfortunately, current frameworks require expertise to be handled correctly, and most of them create “disposable” federations, which are destroyed once their specific job is done.

Federated Learning as a Service (FLaaS) will bring an innovative vision to the FL community, allowing research centers and industries to better handle and exploit their data for training distributed, privacy-preserving ML models. On top of that, FLaaS opens the door to new business models based on the parties’ ability to grant federated access to their local data and computational facilities. On the one hand, such a system would improve the accessibility, scalability and widespread adoption of FL; on the other, it would require careful design to handle distributed data curation, programming and security issues.
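The collaborative training described above can be illustrated with a minimal federated-averaging sketch: each peer trains on its own private shard, and only model weights (never raw data) are sent back and averaged. This is a simplified illustration of the general FL idea, not the actual FLaaS implementation; all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One peer trains on its private data; only weights leave the node."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Three peers, each holding a private shard drawn from the same model.
true_w = np.array([2.0, -1.0])
shards = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    shards.append((X, y))

# Federated averaging: broadcast the global model, train locally,
# then average the returned weights.
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in shards]
    global_w = np.mean(updates, axis=0)

print(global_w)  # converges toward [2.0, -1.0] with no raw data exchanged
```

The key property is that the shards never leave their peers: the coordinator only ever sees weight vectors.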

Italian Research Center on High Performance Computing Big Data and Quantum Computing (ICSC), project funded by European Union – NextGenerationEU – and National Recovery and Resilience Plan (NRRP) – Mission 4 Component 2. 

The goal

FLaaS aims to offer a federated model training service that allows users to submit training tasks through a simple interface. These tasks are executed on a distributed learning platform comprising a community of peers who contribute their local data.

This platform aims to demonstrate the technological feasibility and key benefits of a model where organizations can collaborate on training machine learning models without directly sharing their data. This will make it possible to construct a variety of datasets without endangering the privacy of the partners’ data.

Moreover, the system will be accessible and easy to use, so that users can set up and start training models efficiently and securely.

The initial challenge

Large machine learning models are being implemented by various organizations, but there are concerns around the data needed to train those models. There are two main problems: on one hand, a single organization may not have enough data to train large models or lack data on critical cases. On the other hand, even within the same organization, data may be sensitive to business activity, preventing it from being brought together in a central repository for model training.

The solution

The FLaaS architecture requires each data provider to run a peer service that registers with a coordination layer at startup and describes the data available for training. The coordination layer manages the availability of each peer’s data and resources, provides an interface for users to submit training tasks, and matches task requirements to a list of compatible peers. All of this will be done in compliance with current data policies and the limitations on data use.
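The registration-and-matching flow above can be sketched as follows. All class and field names here (`Peer`, `Coordinator`, `data_domains`, `allowed_tasks`) are hypothetical illustrations of the described architecture, not the actual FLaaS API.

```python
from dataclasses import dataclass, field

@dataclass
class Peer:
    peer_id: str
    data_domains: set   # kinds of data the provider holds, e.g. {"radiology"}
    allowed_tasks: set  # usage policies declared by the provider

@dataclass
class Coordinator:
    peers: dict = field(default_factory=dict)

    def register(self, peer: Peer):
        """Called by a peer service at startup to announce its data."""
        self.peers[peer.peer_id] = peer

    def match(self, domain: str, task: str):
        """Return peers whose data and declared policies fit a submitted task."""
        return [p.peer_id for p in self.peers.values()
                if domain in p.data_domains and task in p.allowed_tasks]

coord = Coordinator()
coord.register(Peer("hospital-a", {"radiology"}, {"classification"}))
coord.register(Peer("lab-b", {"radiology", "tabular"},
                    {"classification", "regression"}))

print(coord.match("radiology", "classification"))  # ['hospital-a', 'lab-b']
print(coord.match("tabular", "regression"))        # ['lab-b']
```

Note that policy constraints are enforced at matching time: a peer is only ever offered tasks it has explicitly allowed.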


The framework resulting from this project will allow us to overcome the main challenges of federated learning: there will be no need to implement a new federation from scratch every time a new model is trained. Moreover, the architecture will make larger training datasets possible thanks to collaboration between partners, while remaining compliant with data policies and making the use of data safer.


Participating Spoke

Spoke 1


For further information, please contact: barbara.vecchi@ifabfoundation.org
