Conference papers

Training Many Neural Networks in Parallel via Back-Propagation

Abstract: This paper presents two parallel implementations of the back-propagation algorithm, a widely used approach for training Artificial Neural Networks (ANNs). These implementations allow many ANNs to be trained simultaneously, taking advantage of the massive thread-level parallelism of GPUs and the multi-core architecture of modern CPUs, respectively. Computational experiments are carried out on time series taken from the product demand of a Mexican brewery company, with the goal of optimizing product delivery. We also consider time series from the M3-competition benchmark. The results show the benefits of training several ANNs in parallel compared to the other forecasting methods used in the competition. Indeed, training several ANNs in parallel yields a better fit of the network weights and makes it possible to train many ANNs for different time series in a short time.
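The idea of training many ANNs simultaneously can be sketched by stacking the weights of several independent networks along a leading axis and running forward and back-propagation passes as batched matrix products, so that a data-parallel backend (GPU or multi-core BLAS) processes all networks at once. This is a minimal illustrative sketch under assumed network shapes and names; it is not the paper's implementation.

```python
import numpy as np

def train_many_anns(X, y, n_nets=8, n_hidden=16, lr=0.05, epochs=300, seed=0):
    """Train `n_nets` independent one-hidden-layer ANNs at once by stacking
    their weights along a leading axis and vectorizing the forward and
    back-propagation passes.  (Hypothetical sketch, not the paper's code.)"""
    rng = np.random.default_rng(seed)
    n_samples, n_in = X.shape
    # One weight slice per network, each with its own random initialisation.
    W1 = rng.normal(0.0, 0.5, (n_nets, n_in, n_hidden))
    b1 = np.zeros((n_nets, 1, n_hidden))
    W2 = rng.normal(0.0, 0.5, (n_nets, n_hidden, 1))
    b2 = np.zeros((n_nets, 1, 1))
    Y = y.reshape(1, -1, 1)                      # broadcast targets over networks
    for _ in range(epochs):
        # Forward pass for all networks at once (batched matmul).
        H = np.tanh(X @ W1 + b1)                 # (n_nets, n_samples, n_hidden)
        out = H @ W2 + b2                        # (n_nets, n_samples, 1)
        # Back-propagate the mean-squared error, still batched.
        d_out = 2.0 * (out - Y) / n_samples
        dW2 = H.transpose(0, 2, 1) @ d_out
        db2 = d_out.sum(axis=1, keepdims=True)
        d_h = (d_out @ W2.transpose(0, 2, 1)) * (1.0 - H ** 2)
        dW1 = X.T @ d_h                          # X broadcasts over the network axis
        db1 = d_h.sum(axis=1, keepdims=True)
        # Gradient-descent update for every network in one step.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return ((out - Y) ** 2).mean(axis=(1, 2))    # per-network final MSE
```

Because each network occupies one slice of the stacked tensors, the different random initialisations evolve independently, and the best-fitting network can simply be selected from the returned per-network errors.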

Cited literature: 26 references

https://hal.laas.fr/hal-02115612
Contributor: Didier El Baz
Submitted on: Tuesday, April 30, 2019 - 1:36:19 PM
Last modification on: Thursday, June 10, 2021 - 3:06:16 AM

File: Paper.pdf (files produced by the author(s))

Citation

Javier Cruz-López, Vincent Boyer, Didier El Baz. Training Many Neural Networks in Parallel via Back-Propagation. IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2017), May 2017, Orlando, United States. pp.501-509, ⟨10.1109/IPDPSW.2017.72⟩. ⟨hal-02115612⟩
