CHAOS: A Parallelization Scheme for Training Convolutional Neural Networks on Intel Xeon Phi
Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). (Parallel Computing;Ctr Data Intens Sci & Applicat, DISA;HPCC)
Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). (Parallel Computing;Ctr Data Intens Sci & Applicat, DISA;HPCC)
Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). (Parallel Computing;Ctr Data Intens Sci & Applicat, DISA;HPCC)
Machine Intelligence Research Labs (MIR Labs), USA.
2019 (English). In: Journal of Supercomputing, ISSN 0920-8542, E-ISSN 1573-0484, Vol. 75, no. 1, p. 197-227. Article in journal (Refereed). Published.
Abstract [en]

Deep learning is an important component of big-data analytics tools and intelligent applications, such as self-driving cars, computer vision, speech recognition, or precision medicine. However, the training process is computationally intensive and often requires a large amount of time if performed sequentially. Modern parallel computing systems provide the capability to reduce the required training time of deep neural networks. In this paper, we present our parallelization scheme for training convolutional neural networks (CNNs), named Controlled Hogwild with Arbitrary Order of Synchronization (CHAOS). Major features of CHAOS include support for thread and vector parallelism, non-instant updates of weight parameters during back-propagation without a significant delay, and implicit synchronization in arbitrary order. CHAOS is tailored for parallel computing systems that are accelerated with the Intel Xeon Phi. We evaluate our parallelization approach empirically using measurement techniques and performance modeling for various numbers of threads and CNN architectures. Experimental results for the MNIST dataset of handwritten digits, using the total number of threads on the Xeon Phi, show speedups of up to 103x compared to the execution on one thread of the Xeon Phi, 14x compared to the sequential execution on an Intel Xeon E5, and 58x compared to the sequential execution on an Intel Core i5.
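The "controlled Hogwild" idea the abstract describes — worker threads updating shared weights asynchronously, without waiting on a global lock — can be illustrated with a minimal sketch. This is not the paper's CHAOS implementation (which targets Xeon Phi thread/vector parallelism over CNN layers); it is only the general Hogwild-style asynchronous SGD pattern, shown here on a toy linear model with hypothetical names throughout.

```python
import threading

# Illustrative sketch of Hogwild-style asynchronous SGD (the general idea
# behind CHAOS's lock-free weight updates). All names are hypothetical;
# this is NOT the paper's implementation.

weights = [0.0] * 4  # shared model parameters, read and written without locks

# Tiny toy dataset: (sparse feature vector, target). Each sample touches a
# disjoint coordinate, so concurrent updates do not conflict here.
data = [([1.0, 0.0, 0.0, 0.0], 1.0),
        ([0.0, 1.0, 0.0, 0.0], 2.0),
        ([0.0, 0.0, 1.0, 0.0], 3.0),
        ([0.0, 0.0, 0.0, 1.0], 4.0)]

def worker(samples, lr=0.1, epochs=200):
    for _ in range(epochs):
        for x, y in samples:
            # Predict with possibly stale weights (no lock is taken).
            pred = sum(w * xi for w, xi in zip(weights, x))
            err = pred - y
            # In-place gradient step; other threads may interleave freely.
            for i, xi in enumerate(x):
                if xi != 0.0:
                    weights[i] -= lr * err * xi

# One worker thread per sample, all sharing `weights` asynchronously.
threads = [threading.Thread(target=worker, args=([s],)) for s in data]
for t in threads:
    t.start()
for t in threads:
    t.join()

print([round(w, 2) for w in weights])  # each weight converges near its target
```

Because the toy features are sparse and disjoint, the lock-free updates never collide; the Hogwild result is that even with occasional collisions on sparse problems, convergence is largely preserved, which is the property CHAOS exploits and "controls" during back-propagation.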

Place, publisher, year, edition, pages
Springer, 2019. Vol. 75, no 1, p. 197-227
National Category
Computer Systems
Research subject
Computer and Information Sciences, Computer Science
Identifiers
URN: urn:nbn:se:lnu:diva-60938
DOI: 10.1007/s11227-017-1994-x
ISI: 000456629400014
Scopus ID: 2-s2.0-85014542478
OAI: oai:DiVA.org:lnu-60938
DiVA id: diva2:1077130
Available from: 2017-02-25. Created: 2017-02-25. Last updated: 2019-08-29. Bibliographically approved.

Open Access in DiVA

fulltext (810 kB), 462 downloads
File information
File name: FULLTEXT01.pdf
File size: 810 kB
Checksum (SHA-512):
3f4a70098440d8a5d19dcc5984c46ecefe4c981737e74ddfbc0ed86482d7bac23a7af13450dcadcc51b2240708ac78e884b0444bdec9bd9c7156836b52d92d05
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Memeti, Suejb; Pllana, Sabri

Search in DiVA

By author/editor
Viebke, Andre; Memeti, Suejb; Pllana, Sabri
By organisation
Department of computer science and media technology (CM)
In the same journal
Journal of Supercomputing
Computer Systems
