Block-Free Concurrent GC: Stack Scanning and Copying
Linnaeus University, Faculty of Technology, Department of Computer Science and Media Technology (CM), Department of Computer Science, Växjö, Sweden.
Linnaeus University, Faculty of Technology, Department of Computer Science and Media Technology (CM), Department of Computer Science, Växjö, Sweden. ORCID iD: 0000-0002-7565-3714
2016 (English). In: SIGPLAN Notices, ISSN 0362-1340, E-ISSN 1558-1160, Vol. 51, no. 11, p. 1-12. Article in journal (Refereed). Published.
Abstract [en]

On-the-fly Garbage Collectors (GCs) are the state-of-the-art concurrent GC algorithms today. Everything is done concurrently, but phases are separated by blocking handshakes. Hence, progress relies on the scheduler to let application threads (mutators) run into GC checkpoints to reply to the handshakes. For a non-blocking GC, these blocking handshakes need to be addressed. Therefore, we propose a new non-blocking handshake to replace the previous blocking handshakes. It guarantees scheduling-independent, operation-level progress without blocking. While scheduling independent, it requires some OS support. It allows bounded waiting for threads that are currently running on a processor, regardless of threads that are not running on a processor. We discuss this non-blocking handshake in two GC algorithms for stack scanning and copying objects. They pave the way for a future completely non-blocking GC by solving hard open theory problems when OS support is permitted. The GC algorithms were integrated into the G1 GC of OpenJDK for Java. GC pause times were reduced to 12.5% of those of the original G1, on average, in the DaCapo benchmarks. For a memory-intensive benchmark, latencies were reduced from 174 ms to 0.67 ms at the 99.99th percentile. The improved latency comes at a cost of 15% lower throughput.
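To illustrate the problem the abstract refers to, the following is a minimal, hypothetical Java sketch of a conventional blocking handshake: a GC thread raises a request, and every mutator must poll a checkpoint and acknowledge before the GC can proceed, so GC progress depends on the scheduler running each mutator. All names and the structure are illustrative assumptions; this is not the paper's non-blocking algorithm, which instead bounds the wait for threads that are running on a processor without waiting for descheduled ones.

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch (not the paper's algorithm): a classic blocking handshake.
public class HandshakeSketch {
    static final int MUTATORS = 4;
    static final AtomicBoolean running = new AtomicBoolean(true);
    static final AtomicBoolean handshakeRequested = new AtomicBoolean(false);
    static final AtomicInteger acks = new AtomicInteger(0);

    // Mutator loop: between units of application work, poll the GC checkpoint.
    static void mutator() {
        while (running.get()) {
            // ... application work would go here ...
            if (handshakeRequested.get()) {
                acks.incrementAndGet();              // e.g. report own stack roots
                while (handshakeRequested.get()) {   // wait until the GC releases us
                    Thread.onSpinWait();
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[MUTATORS];
        for (int i = 0; i < MUTATORS; i++) {
            threads[i] = new Thread(HandshakeSketch::mutator);
            threads[i].start();
        }

        // GC thread: raise the request, then block until every mutator has replied.
        handshakeRequested.set(true);
        while (acks.get() < MUTATORS) {
            Thread.onSpinWait();                     // the blocking step: progress depends
        }                                            // on the scheduler running each mutator
        System.out.println("all mutators reached the checkpoint");
        handshakeRequested.set(false);

        running.set(false);
        for (Thread t : threads) {
            t.join();
        }
    }
}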

Place, publisher, year, edition, pages
ACM Publications, 2016. Vol. 51, no 11, p. 1-12
Keywords [en]
non-blocking, block-free, compaction, stack scanning, garbage collection
National Category
Computer Systems
Research subject
Computer and Information Sciences, Computer Science
Identifiers
URN: urn:nbn:se:lnu:diva-94748
DOI: 10.1145/3241624.2926701
ISI: 000439639900002
OAI: oai:DiVA.org:lnu-94748
DiVA, id: diva2:1430254
Conference
15th ACM SIGPLAN International Symposium on Memory Management (ISMM 2016), JUN 14, 2016, Santa Barbara, CA
Available from: 2020-05-14. Created: 2020-05-14. Last updated: 2022-11-22. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Österlund, Erik; Löwe, Welf
