TY - GEN
T1 - A Game Theory Reward Model for Federated Learning with Probabilistic Verification
AU - Auricchio, Gennaro
AU - Clough, Harry J.
AU - Ho, Christopher
AU - Bian, Kaigui
AU - Dong, Changyu
AU - Yang, Kan
AU - Zhang, Jie
PY - 2025/9/16
Y1 - 2025/9/16
N2 - In Federated Learning (FL), a Central Node (CN) coordinates a group of agents to collectively train a shared neural network. However, due to the inherent information asymmetry, some agents may behave as free riders and exploit the system by reaping rewards or by passively benefiting from the common model without contributing to the training process. Proof-of-Training (PoT) allows the CN to verify that an agent has completed training honestly and correctly. However, this method incurs high costs, including proof generation by the agent, communication expenses, and proof verification by the CN; conducting PoT in every FL round is therefore impractical. To improve verification efficiency, a feasible strategy is probabilistic verification, in which only a subset of agents is sampled for verification in each FL round. This paper designs a new incentive mechanism that motivates agents to behave honestly and mitigates free riding. Our model hinges on two parameters: (i) the reward allocated to the local trainers, denoted R, and (ii) a probability vector indicating the likelihood of subjecting each agent to PoT scrutiny. We show that it is possible to characterize a choice of reward and probability vector that minimizes the total CN cost while making the routine Individually Rational and Incentive Compatible, so that every agent actively trains their local model. Finally, we validate our model through extensive experiments. Our findings confirm that the characterized reward and verification scheme minimizes the cost of the training routine without compromising convergence speed. All experiments are conducted on various datasets, demonstrating the wide applicability of our results.
AB - In Federated Learning (FL), a Central Node (CN) coordinates a group of agents to collectively train a shared neural network. However, due to the inherent information asymmetry, some agents may behave as free riders and exploit the system by reaping rewards or by passively benefiting from the common model without contributing to the training process. Proof-of-Training (PoT) allows the CN to verify that an agent has completed training honestly and correctly. However, this method incurs high costs, including proof generation by the agent, communication expenses, and proof verification by the CN; conducting PoT in every FL round is therefore impractical. To improve verification efficiency, a feasible strategy is probabilistic verification, in which only a subset of agents is sampled for verification in each FL round. This paper designs a new incentive mechanism that motivates agents to behave honestly and mitigates free riding. Our model hinges on two parameters: (i) the reward allocated to the local trainers, denoted R, and (ii) a probability vector indicating the likelihood of subjecting each agent to PoT scrutiny. We show that it is possible to characterize a choice of reward and probability vector that minimizes the total CN cost while making the routine Individually Rational and Incentive Compatible, so that every agent actively trains their local model. Finally, we validate our model through extensive experiments. Our findings confirm that the characterized reward and verification scheme minimizes the cost of the training routine without compromising convergence speed. All experiments are conducted on various datasets, demonstrating the wide applicability of our results.
KW - Federated Learning
KW - Free-rider Problem
KW - Incentive Schemes
UR - https://www.scopus.com/pages/publications/105020016223
U2 - 10.1145/3719545.3721106
DO - 10.1145/3719545.3721106
M3 - Chapter in a published conference proceeding
AN - SCOPUS:105020016223
T3 - ACM International Conference Proceeding Series
SP - 13
EP - 21
BT - DAI '24: Proceedings of the 2024 6th International Conference on Distributed Artificial Intelligence
PB - Association for Computing Machinery
CY - New York, NY, USA
T2 - 6th International Conference on Distributed Artificial Intelligence, DAI 2024
Y2 - 18 December 2024 through 22 December 2024
ER -