Compare commits


137 Commits

Author SHA1 Message Date
Chris Plummer
2ca1a22319 Merge branch 'master' into 8375477_coreutils
Merge
2026-01-26 11:22:48 -08:00
Phil Race
c69275ddfe 8376232: Remove AppContext from Swing synth related classes
Reviewed-by: serb, azvegint
2026-01-26 18:53:39 +00:00
Chen Liang
3220c4cb43 8372696: Allow boot classes to explicitly opt-in for final field trusting
Reviewed-by: jvernee, jrose, alanb
2026-01-26 18:32:15 +00:00
Henry Jen
b42861a2aa 8373699: JLink: ModuleReader should be closed in JlinkTask.getReleaseInfo(mref)
Reviewed-by: alanb
2026-01-26 17:19:44 +00:00
Henry Jen
67beb9cd81 8373924: Remove unreferenced ImageDecompressor::image_decompressor_close
Reviewed-by: alanb
2026-01-26 16:38:12 +00:00
Christian Hagedorn
bbae38e510 8375272: [IR Framework] Miscellaneous clean-ups
Reviewed-by: mchevalier, dfenacci, thartmann
2026-01-26 16:23:30 +00:00
Axel Boldt-Christmas
f4607ed0a7 8374684: ZGC: Convert zMark to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 15:35:59 +00:00
Axel Boldt-Christmas
6648567574 8374683: ZGC: Convert zLock to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 15:14:42 +00:00
Axel Boldt-Christmas
99b4e05d50 8374682: ZGC: Convert zLiveMap to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 15:05:24 +00:00
Axel Boldt-Christmas
61b722d59a 8374681: ZGC: Convert zJNICritical to use Atomic<T>
Reviewed-by: tschatzl, stefank
2026-01-26 14:45:24 +00:00
Axel Boldt-Christmas
b59f49a1c3 8374680: ZGC: Convert zGeneration to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 14:28:39 +00:00
Axel Boldt-Christmas
fef85ff932 8374679: ZGC: Convert zForwardingAllocator to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 14:13:48 +00:00
Axel Boldt-Christmas
512f95cf26 8374678: ZGC: Convert zForwarding to use Atomic<T>
Reviewed-by: stefank, eosterlund
2026-01-26 13:53:12 +00:00
Axel Boldt-Christmas
319e21e9b4 8374677: ZGC: Convert zArray to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 13:44:06 +00:00
Hannes Wallnöfer
37cb22826a 8373679: Link color accessibility issue in dark theme
Reviewed-by: liach, nbenalla
2026-01-26 13:28:04 +00:00
Daniel Fuchs
8a9127fc2d 8376118: java/net/httpclient/StreamingBody.java fails intermittently on Windows
Reviewed-by: vyazici, jpai
2026-01-26 12:57:23 +00:00
Axel Boldt-Christmas
de5c7a9e86 8374676: ZGC: Convert zAbort to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 12:16:05 +00:00
Matthias Baesken
0f1b96a50a 8375684: Avoid leak in KeystoreImpl.m when using CFArrayCreateMutable
Reviewed-by: clanger
2026-01-26 11:38:05 +00:00
Quan Anh Mai
30675faa67 8375653: C2: CmpUNode::sub is not monotonic
Reviewed-by: chagedorn, mchevalier
2026-01-26 11:18:21 +00:00
Thomas Schatzl
48d636872f 8376293: Bad copyright header in g1ConcurrentRefineStats.inline.hpp breaks the build
Reviewed-by: mhaessig, chagedorn
2026-01-26 10:15:57 +00:00
Thomas Schatzl
42c0126fb2 8376119: G1: Convert volatiles in G1CMMarkStack to Atomic<T>
Reviewed-by: kbarrett, iwalulya
2026-01-26 09:47:52 +00:00
Jan Lahoda
90d065e677 8375712: Convert java/lang/runtime tests to use JUnit
Reviewed-by: liach
2026-01-26 09:42:49 +00:00
Thomas Schatzl
0bc2dc3401 8375971: G1: Convert G1EvacStats to use Atomic<T>
Reviewed-by: iwalulya, kbarrett
2026-01-26 09:17:22 +00:00
Thomas Schatzl
c3360ff511 8375983: G1: Convert G1ConcurrentRefineStats to use Atomic<T>
Reviewed-by: kbarrett, iwalulya
2026-01-26 09:17:01 +00:00
Thomas Schatzl
a49986c62f 8375964: G1: Convert G1BuildCandidateRegionsTask to use Atomic<T>
Reviewed-by: shade, iwalulya
2026-01-26 09:16:41 +00:00
Thomas Schatzl
4597046984 8375974: G1: Convert G1FullCollector to use Atomic<T>
Reviewed-by: kbarrett, iwalulya
2026-01-26 09:16:11 +00:00
Thomas Schatzl
e7cadd90b2 8375981: G1: Convert G1RemSet helper classes to use Atomic<T>
Reviewed-by: shade, iwalulya
2026-01-26 09:15:32 +00:00
Thomas Schatzl
2af271e5e6 8375436: G1: Convert G1CardSet classes to use Atomic<T>
Reviewed-by: kbarrett, iwalulya
2026-01-26 09:12:39 +00:00
Arno Zeller
90b5469253 8375999: com/sun/jndi/ldap/LdapPoolTimeoutTest.java fails sporadically on Windows
Reviewed-by: jpai, mbaesken
2026-01-26 08:34:56 +00:00
Xiaohong Gong
38b66b1258 8374043: C2: assert(_base >= VectorMask && _base <= VectorZ) failed: Not a Vector
Reviewed-by: qamai, vlivanov
2026-01-26 01:50:57 +00:00
SendaoYan
932556026d 8375683: Add notes for sctp tests
Reviewed-by: erikj, vyazici
2026-01-25 01:08:31 +00:00
Lei Zhu
a40dbce495 8374293: Jshell throws an error and crashes when using keyword Public
Reviewed-by: jlahoda
2026-01-24 14:19:40 +00:00
Yasumasa Suenaga
a3b1aa9f7d 8374482: SA does not handle signal handler frame in mixed jstack
Reviewed-by: cjplummer, kevinw
2026-01-24 08:43:37 +00:00
Phil Race
44b74e165e 8375351: Remove usage of AppContext from print implementation
Reviewed-by: serb, tr
2026-01-23 20:20:22 +00:00
Valerie Peng
e55124041e 8375549: ConcurrentModificationException if jdk.crypto.disabledAlgorithms has multiple entries with known oid
Reviewed-by: mullan, coffeys
2026-01-23 19:46:40 +00:00
Phil Race
e617ccd529 8375480: Remove usage of AppContext from javax/swing/text
Reviewed-by: serb, psadhukhan
2026-01-23 19:12:54 +00:00
Phil Race
e88edd0bc6 8375338: sun/awt/image/ImageRepresentation/LUTCompareTest.java fails with -Xcheck:jni
Reviewed-by: aivanov, serb, krk
2026-01-23 18:53:48 +00:00
Phil Race
e08fb3a914 8375221: Update code to get PrinterResolution from CUPS/IPP print service
Reviewed-by: serb, psadhukhan
2026-01-23 18:19:23 +00:00
Cesar Soares Lucas
2c3ad0f425 8373021: aarch64: MacroAssembler::arrays_equals reads out of bounds
Reviewed-by: rkennke, aph
2026-01-23 17:56:04 +00:00
Chen Liang
40f7a18b2d 8373935: Migrate java/lang/invoke tests away from TestNG
Reviewed-by: jvernee, alanb
2026-01-23 17:32:53 +00:00
Severin Gehwolf
3fb118a29e 8375692: Hotspot container tests assert with non-ascii vendor name
Reviewed-by: naoto, dholmes, syan
2026-01-23 16:55:38 +00:00
Guanqiang Han
6f6966b28b 8374862: assert(false) failed: Attempting to acquire lock MDOExtraData_lock/nosafepoint-1 out of order with lock tty_lock/tty -- possible deadlock (running with -XX:+Verbose -XX:+WizardMode -XX:+PrintDeoptimizationDetails)
Reviewed-by: dholmes, dlong
2026-01-23 11:37:30 +00:00
Thomas Schatzl
fa20391e73 8375966: G1: Convert G1UpdateRegionLivenessAndSelectForRebuildTask to use Atomic<T>
Reviewed-by: kbarrett, shade
2026-01-23 08:31:31 +00:00
Volkan Yazici
ca37dba4d4 8376089: Increase QUIC idle timeout in H3FixedThreadPoolTest to collect more diagnostic
Reviewed-by: dfuchs, jpai
2026-01-23 08:27:27 +00:00
Jan Lahoda
315bf07b23 8375119: SwitchBoostraps.enumSwitch does not throw an NPE when lookup is null in some cases
Reviewed-by: liach
2026-01-23 07:40:52 +00:00
Julian Waters
39f0e6d6f9 8375241: Simplify --with-native-debug-symbols-level option implementation
Reviewed-by: erikj, shade
2026-01-23 07:07:51 +00:00
Ioi Lam
7f2aa59f82 8375654: Exclude all array classes from dynamic CDS archive
Reviewed-by: kvn, vlivanov
2026-01-23 06:24:47 +00:00
SendaoYan
0f087a7fef 8376051: gc/stress/TestStressG1Uncommit.java fails assertLessThan: expected that xxx < xxx
Reviewed-by: tschatzl, shade
2026-01-23 00:57:25 +00:00
Daniel Jeliński
25d2b52ab9 8328046: Need to keep leading zeros in TlsPremasterSecret of TLS1.3 DHKeyAgreement
Reviewed-by: hchao
2026-01-22 21:48:28 +00:00
Kelvin Nilsen
d6ebcf8a4f 8357471: GenShen: Share collector reserves between young and old
Reviewed-by: wkemper
2026-01-22 21:28:57 +00:00
Phil Race
f3121d1023 8373931: Test javax/sound/sampled/Clip/AutoCloseTimeCheck.java timed out
Reviewed-by: dholmes, dnguyen, kizune
2026-01-22 20:16:44 +00:00
Hai-May Chao
96a2649e29 8373408: SHA1withECDSA is not required for ECDHE and ECDSA
Reviewed-by: djelinski, ascarpino
2026-01-22 17:41:00 +00:00
Henry Jen
5dfda66e13 8373928: 4 Dangling pointer defect groups in java.c
Reviewed-by: bpb, alanb, jpai, jwaters
2026-01-22 17:21:44 +00:00
Alexander Zuev
8c82b58db9 8286258: [Accessibility,macOS,VoiceOver] VoiceOver reads the spinner value wrong and sometime partially
Reviewed-by: psadhukhan, asemenov
2026-01-22 16:36:24 +00:00
Brian Burkhalter
07f6617e0b 8367284: (fs) Support current working directory target in SecureDirectoryStream.move
Reviewed-by: alanb
2026-01-22 16:11:33 +00:00
Patricio Chilano Mateo
26aab3cccd 8373120: Virtual thread stuck in BLOCKED state
Co-authored-by: Alan Bateman <alanb@openjdk.org>
Reviewed-by: alanb
2026-01-22 14:56:23 +00:00
Artur Barashev
025041ba04 8370885: Default namedGroups values are not being filtered against algorithm constraints
Reviewed-by: hchao
2026-01-22 13:11:42 +00:00
Weijun Wang
eda15aa19c 8277489: Rewrite JAAS UnixLoginModule with FFM
Co-authored-by: Martin Doerr <mdoerr@openjdk.org>
Reviewed-by: mdoerr, ascarpino, erikj
2026-01-22 12:16:09 +00:00
Roland Westrelin
0d1d4d07b9 8374725: C2: assert(x_ctrl == get_late_ctrl_with_anti_dep(x->as_Load(), early_ctrl, x_ctrl)) failed: anti-dependences were already checked
Reviewed-by: chagedorn, qamai, dfenacci
2026-01-22 12:09:11 +00:00
Thomas Schatzl
5e0ed3f408 8375982: G1: Convert G1YoungCollector helper classes to use Atomic<T>
Reviewed-by: kbarrett, shade
2026-01-22 11:51:37 +00:00
Ivan Walulya
66e950e9b6 8340470: G1: Adopt PartialArrayState to consolidate marking stack in Full GC
Co-authored-by: Stefan Johansson <sjohanss@openjdk.org>
Reviewed-by: sjohanss, tschatzl
2026-01-22 11:07:42 +00:00
Thomas Schatzl
0ad81fbd16 8375541: G1: Race in G1BarrierSet::write_ref_field_post()
Reviewed-by: iwalulya, sjohanss, shade
2026-01-22 11:04:09 +00:00
Roland Westrelin
6e9256cb61 8373343: C2: verify AddP base input only set for heap addresses
Reviewed-by: dlong, chagedorn, qamai
2026-01-22 10:37:26 +00:00
Liam Miller-Cushon
e8eb218ca2 8374643: Fix reference to implMethodKind in LambdaToMethod debug printf statement
Reviewed-by: vromero, liach
2026-01-22 10:05:05 +00:00
Casper Norrbin
ddbd4617a6 8303470: containers/docker/TestMemoryAwareness.java failed with "'memory_limit_in_bytes:.*512000 k' missing from stdout/stderr"
Reviewed-by: sgehwolf, dholmes
2026-01-22 09:45:40 +00:00
Matthias Baesken
6165daf03c 8375458: Check legal folder of JDK image for unwanted files
Reviewed-by: erikj
2026-01-22 08:50:11 +00:00
Thomas Schatzl
03038d802c 8375978: G1: Convert G1Policy to use Atomic<T>
Reviewed-by: kbarrett
2026-01-22 08:35:32 +00:00
Thomas Schatzl
63be87d7f3 8375977: G1: Convert JVMCICleaningTask to use Atomic<T>
Reviewed-by: kbarrett
2026-01-22 08:35:03 +00:00
Quan Anh Mai
92236ead1d 8375618: Incorrect assert in CastLLNode::Ideal
Reviewed-by: chagedorn, dlong
2026-01-22 08:32:01 +00:00
Thomas Schatzl
e50bf1f2a4 8375616: G1: Convert G1BatchedTask to use Atomic<T>
Reviewed-by: sjohanss, kbarrett
2026-01-22 08:29:27 +00:00
Thomas Schatzl
f3381f0ffe 8375314: Parallel: Crash iterating over unloaded classes for ObjectCountAfterGC event
Reviewed-by: rkennke, sjohanss, iwalulya
2026-01-22 08:29:05 +00:00
Tobias Hartmann
0f4d775085 8375534: Debug method 'pp' should support compressed oops
Reviewed-by: vlivanov, phubner
2026-01-22 06:56:51 +00:00
Ivan Walulya
38a8309b3f 8341630: G1: Adopt PartialArrayState to consolidate marking stack in concurrent marking
Co-authored-by: Stefan Johansson <sjohanss@openjdk.org>
Reviewed-by: tschatzl, sjohanss
2026-01-22 05:38:32 +00:00
Serguei Spitsyn
3d919ad43a 8373366: HandshakeState should disallow suspend ops for disabler threads
8375362: Deadlock with unmount of suspended virtual thread interrupting another virtual thread

Reviewed-by: lmesnik, pchilanomate
2026-01-22 01:53:42 +00:00
Damon Nguyen
a0ac5b34a7 8375775: JDK 26 RDP2 L10n resource files update
Reviewed-by: naoto, jlu, liach
2026-01-21 18:47:39 +00:00
Maurizio Cimadamore
17086d3119 8375646: Some parser flags seem unused
Reviewed-by: jlahoda, vromero
2026-01-21 16:14:35 +00:00
Kim Barrett
3033e6f421 8375544: JfrSet::clear should not use memset
Reviewed-by: mgronlun
2026-01-21 14:55:26 +00:00
Matthias Baesken
4c9103f7b6 8374998: Failing os::write - remove bad file
Reviewed-by: mdoerr, lucy
2026-01-21 14:14:33 +00:00
Jatin Bhateja
983ae96f60 8375498: [VectorAPI] Dump primary vector IR details with -XX:+TraceNewVectors
Reviewed-by: epeter
2026-01-21 11:20:18 +00:00
Francesco Andreuzzi
5c7c2f093b 8375717: Outdated link in jdk.jfr.internal.JVM javadoc
Reviewed-by: egahlin
2026-01-21 10:42:05 +00:00
Ivan Walulya
b1340305c8 8238686: G1 may waste lots of space or fail to uncommit when observing MinHeapFreeRatio during sizing after full gc
Reviewed-by: tschatzl, sjohanss
2026-01-21 09:51:01 +00:00
Thomas Schatzl
4f87fb53ee 8375622: G1: Convert G1CodeRootSet to use Atomic<T>
Reviewed-by: shade, sjohanss
2026-01-21 09:01:00 +00:00
Jie Fu
560a92a632 8375787: compiler/vectorapi/TestCastShapeBadOpc.java fails with release VMs
Reviewed-by: syan, lmesnik, fyang, epeter
2026-01-21 06:33:54 +00:00
Kim Barrett
b5727d2762 8375738: Fix -Wzero-as-null-pointer-constant warnings in MacOSX/bsd code
Reviewed-by: erikj, dholmes
2026-01-21 06:04:09 +00:00
Kim Barrett
34d6e5e07b 8375737: Fix -Wzero-as-null-pointer-constant warnings in arm32 code
Reviewed-by: dholmes
2026-01-21 05:56:19 +00:00
SendaoYan
a448f0b9f4 8375668: Compiler warning implicit-const-int-float-conversion by clang23
Reviewed-by: dholmes, cnorrbin
2026-01-21 03:39:26 +00:00
SendaoYan
599ed0bb5f 8375485: Tests in vmTestbase/nsk are failing due to missing class unloading after 8373945
Reviewed-by: lmesnik, cjplummer
2026-01-21 03:39:02 +00:00
Jayathirth D V
a2e749572e 8375063: Update Libpng to 1.6.54
Reviewed-by: serb, prr
2026-01-21 03:12:18 +00:00
Brent Christian
e25a5a4821 Merge
Reviewed-by: kcr, prr, smarks
2026-01-21 01:28:38 +00:00
Dingli Zhang
ca3e6236a2 8375657: RISC-V: Need to check size in SharedRuntime::is_wide_vector
Reviewed-by: fjiang, fyang
2026-01-20 23:48:42 +00:00
Naoto Sato
4fd7595f1b 8374905: Clarify ZonedDateTime#toString() documentation regarding omitted zero seconds
Reviewed-by: rriggs, bpb
2026-01-20 22:45:39 +00:00
Chen Liang
aaca0a2c1f 8375742: Test java/lang/invoke/MethodHandleProxies/Driver.java does not run Unnamed.java
Reviewed-by: jvernee
2026-01-20 21:54:56 +00:00
Emanuel Peter
42439eb60c 8374889: C2 VectorAPI: must handle impossible combination of signed cast from float
Reviewed-by: dlong, qamai
2026-01-20 18:30:42 +00:00
Thomas Schatzl
5f8cb30fc0 8375626: G1: Convert G1CollectionSetChooser to use Atomic<T>
Reviewed-by: kbarrett, shade
2026-01-20 18:16:39 +00:00
Kelvin Nilsen
72bf0bb6f6 8353115: GenShen: mixed evacuation candidate regions need accurate live_data
Reviewed-by: wkemper
2026-01-20 16:49:02 +00:00
Christian Stein
b2b4729ba2 8375015: CompletionAPITest::testDocumentation failed - AssertionFailedError: expected: <null> but was: <jshelltest.JShellTest>
Reviewed-by: jlahoda
2026-01-20 16:28:23 +00:00
Hai-May Chao
21dc41f744 8314323: Implement JEP 527: TLS 1.3 Hybrid Key Exchange
Co-authored-by: Jamil Nimeh <jnimeh@openjdk.org>
Co-authored-by: Weijun Wang <weijun@openjdk.org>
Reviewed-by: wetmore, mullan
2026-01-20 16:16:38 +00:00
Christian Heilmann
5ba91fed34 8297191: [macos] Printing a page range with starting page > 1 results in missing pages
Reviewed-by: aivanov, prr
2026-01-20 15:00:14 +00:00
Thomas Schatzl
037040129e 8375643: G1: Convert G1RegionMarkStatsCache to use Atomic<T>
Reviewed-by: shade, kbarrett
2026-01-20 13:22:25 +00:00
Jonas Norlinder
3cc713fa29 8374945: Avoid fstat in os::open
Reviewed-by: dholmes, jsjolen, redestad
2026-01-20 11:40:19 +00:00
Thomas Schatzl
fe102918dd 8375630: G1: Convert G1ConcurrentMark to use Atomic<T>
Reviewed-by: kbarrett, shade
2026-01-20 10:34:16 +00:00
Thomas Schatzl
8c615190e6 8375624: G1: Convert G1JavaThreadsListClaimer to use Atomic<T>
Reviewed-by: kbarrett, shade
2026-01-20 10:34:00 +00:00
Thomas Schatzl
afbb3a0415 8375620: G1: Convert G1CardTableClaimTable to use Atomic<T>
Reviewed-by: kbarrett, shade
2026-01-20 10:31:22 +00:00
Leo Korinth
c5f288e2ae 8373253: Re-work InjectGCWorkerCreationFailure for future changes
Reviewed-by: stefank, tschatzl, iwalulya, sjohanss
2026-01-20 09:30:12 +00:00
Thomas Schatzl
d9db4fb36e 8373894: G1: Count evacuation-failed garbage collections in gc cpu usage
Reviewed-by: iwalulya, kbarrett
2026-01-20 08:01:54 +00:00
Prasanta Sadhukhan
e45f5656bc 8373650: Test "javax/swing/JMenuItem/6458123/ManualBug6458123.java" fails because the check icons are not aligned properly as expected
Reviewed-by: tr, dnguyen
2026-01-20 07:10:46 +00:00
David Holmes
ca6925ec6b 8370112: Remove VM_Version::supports_fast_class_init_checks() in platform-specific code
Reviewed-by: shade, fyang
2026-01-20 06:18:07 +00:00
Xiaohong Gong
303de9a3f2 8370666: VectorAPI: Add clear comments for vector relative code in c2
Reviewed-by: epeter, jbhateja, qamai
2026-01-20 01:43:40 +00:00
Kim Barrett
496af3cf47 8375093: Convert GlobalCounter to use Atomic<T>
Reviewed-by: dholmes, iwalulya
2026-01-19 18:05:22 +00:00
Casper Norrbin
f2d5290c29 8367319: Add os interfaces to get machine and container values separately
Reviewed-by: eosterlund, sgehwolf
2026-01-19 14:44:37 +00:00
Quan Anh Mai
c44a99a758 8374180: C2 crash in PhaseCCP::verify_type - fatal error: Not monotonic
Reviewed-by: hgreule, bmaillard, epeter
2026-01-19 14:20:18 +00:00
Christian Hagedorn
e7f1f16a88 8375271: [IR Framework] Rename IREncoding to ApplicableIRRules and driver/flag/test VM to Driver/Flag/Test VM
Reviewed-by: dfenacci, thartmann, mhaessig
2026-01-19 14:02:02 +00:00
Andreas Steiner
6942bb2b31 8374802: java/net/DatagramSocket/SendReceiveMaxSize.java fails on AIX due to small default RCVBUF size
Reviewed-by: alanb
2026-01-19 13:54:06 +00:00
Thomas Schatzl
e0edc65624 8375463: G1: Remove AtomicAccess include from files that do not use it
Reviewed-by: stefank, iwalulya
2026-01-19 12:57:44 +00:00
Thomas Schatzl
3e18148570 8375439: G1: Convert G1MonotonicArena class to use Atomic<T>
Reviewed-by: stefank, iwalulya
2026-01-19 09:02:33 +00:00
David Briemann
30f39d88e5 8375530: PPC64: incorrect quick verify_method_data_pointer check causes poor performance in debug build
Reviewed-by: mdoerr, shade
2026-01-19 08:54:18 +00:00
Thomas Schatzl
9d7ecd51d7 8375437: G1: Convert G1EvacFailureRegions to use Atomic<T>
Reviewed-by: stefank, iwalulya
2026-01-19 08:32:03 +00:00
Per Minborg
75172e0658 8374717: Unclear wording in docs for recursion for List, Map and LazyConstant
Reviewed-by: rriggs
2026-01-19 07:45:21 +00:00
Jamil Nimeh
07f981f6b0 8368032: Enhance Certificate Checking
Reviewed-by: ahgross, coffeys, rhalade, mullan, abarashev
2026-01-18 20:22:55 -08:00
Prasanta Sadhukhan
82e5771b0b 8365280: Enhance JOptionPane
Reviewed-by: rhalade, prr, tr, aivanov
2026-01-18 20:22:55 -08:00
Harshitha Onkar
eddbd35965 8359501: Enhance Handling of URIs
Reviewed-by: rhalade, ahgross, azvegint, prr
2026-01-18 20:22:55 -08:00
Michael McMahon
f24fadc624 8362632: Improve HttpServer Request handling
Reviewed-by: djelinski, dfuchs
2026-01-18 20:22:55 -08:00
Stuart Marks
7e3e35abef 8367277: Fix copyright header in JMXInterfaceBindingTest.java
Reviewed-by: dfuchs, rhalade, iris, coffeys
2026-01-18 20:22:55 -08:00
Renjith Kannath Pariyangad
84ee4f976b 8366446: Test java/awt/geom/ConcurrentDrawPolygonTest.java fails intermittently
Reviewed-by: jdv, aivanov, prr, rhalade
2026-01-18 20:22:55 -08:00
Stuart Marks
3afb831ae4 8341496: Improve JMX connections
Co-authored-by: Daniel Fuchs <dfuchs@openjdk.org>
Reviewed-by: skoivu, rhalade, coffeys, dfuchs, kevinw, jnimeh
2026-01-18 20:22:55 -08:00
Justin Lu
dc46a17f1e 8365058: Enhance CopyOnWriteArraySet
Reviewed-by: rhalade, skoivu, vklang, rriggs
2026-01-18 20:22:55 -08:00
Prasanta Sadhukhan
97bd445841 8365271: Improve Swing supports
Reviewed-by: tr, prr, rhalade, aivanov
2026-01-18 20:22:55 -08:00
Jayathirth D V
3b6ac2af9c 8362308: Enhance Bitmap operations
Reviewed-by: mschoene, rhalade, psadhukhan, prr
2026-01-18 20:22:54 -08:00
Jayathirth D V
9f3f960b36 8364214: Enhance polygon data support
Reviewed-by: rhalade, psadhukhan, mschoene, prr
2026-01-18 20:22:54 -08:00
Valerie Peng
f8fb780426 8265429: Improve GCM encryption
Co-authored-by: Daniel Jelinski <daniel.jelinski@oracle.com>
Reviewed-by: rhalade, pkumaraswamy, ahgross, jnimeh, djelinski
2026-01-18 20:22:54 -08:00
Guanqiang Han
a67979c4e6 8375125: assert(false) failed: "Attempting to acquire lock NativeHeapTrimmer_lock/nosafepoint out of order with lock ConcurrentHashTableResize_lock/nosafepoint-2 -- possible deadlock" when using native heap trimmer
Reviewed-by: dholmes, stuefe
2026-01-19 02:33:18 +00:00
Yasumasa Suenaga
1cdb817422 8375575: AttachNotSupportedException constructor missing @since 27
Reviewed-by: liach
2026-01-18 07:35:12 +00:00
Shawn M Emery
a0e6f028a8 8360934: Add AVX-512 intrinsics for ML-KEM - enhancement on AVX512_VBMI
Co-authored-by: Sandhya Viswanathan <sviswanathan@openjdk.org>
Reviewed-by: jbhateja, vpaprotski
2026-01-17 11:08:30 +00:00
Yasumasa Suenaga
436c62afd2 8373867: Improve robustness of Attach API for finding tmp directory
Reviewed-by: sspitsyn, amenkov
2026-01-17 06:24:31 +00:00
SendaoYan
0dd5b59194 8375370: XRBackendNative.c reported variable uninitialized by clang23
Reviewed-by: prr
2026-01-17 04:30:02 +00:00
Alexey Semenyuk
9b47c23b4b 8375242: [macos] Improve jpackage signing coverage
Reviewed-by: almatvee
2026-01-16 23:16:43 +00:00
Alexey Semenyuk
e7432d5745 8375323: Improve handling of the "--app-content" and "--input" options in jpackage
Reviewed-by: almatvee
2026-01-16 20:03:00 +00:00
662 changed files with 20239 additions and 12832 deletions


@@ -72,6 +72,7 @@ id="toc-notes-for-specific-tests">Notes for Specific Tests</a>
<li><a href="#non-us-locale" id="toc-non-us-locale">Non-US
locale</a></li>
<li><a href="#pkcs11-tests" id="toc-pkcs11-tests">PKCS11 Tests</a></li>
<li><a href="#sctp-tests" id="toc-sctp-tests">SCTP Tests</a></li>
<li><a href="#testing-ahead-of-time-optimizations"
id="toc-testing-ahead-of-time-optimizations">Testing Ahead-of-time
Optimizations</a></li>
@@ -621,6 +622,21 @@ element of the appropriate <code>@Artifact</code> class. (See
JTREG=&quot;JAVA_OPTIONS=-Djdk.test.lib.artifacts.nsslib-linux_aarch64=/path/to/NSS-libs&quot;</code></pre>
<p>For more notes about the PKCS11 tests, please refer to
test/jdk/sun/security/pkcs11/README.</p>
<h3 id="sctp-tests">SCTP Tests</h3>
<p>The SCTP tests require the SCTP runtime library, which is often not
installed by default in popular Linux distributions. Without this
library, the SCTP tests will be skipped. If you want to enable the SCTP
tests, you should install the SCTP library before running the tests.</p>
<p>For distributions using the .deb packaging format and the apt tool
(such as Debian, Ubuntu, etc.), try this:</p>
<pre><code>sudo apt install libsctp1
sudo modprobe sctp
lsmod | grep sctp</code></pre>
<p>For distributions using the .rpm packaging format and the dnf tool
(such as Fedora, Red Hat, etc.), try this:</p>
<pre><code>sudo dnf install -y lksctp-tools
sudo modprobe sctp
lsmod | grep sctp</code></pre>
<h3 id="testing-ahead-of-time-optimizations">Testing Ahead-of-time
Optimizations</h3>
<p>One way to improve test coverage of ahead-of-time (AOT) optimizations


@@ -640,6 +640,32 @@ $ make test TEST="jtreg:sun/security/pkcs11/Secmod/AddTrustedCert.java" \
For more notes about the PKCS11 tests, please refer to
test/jdk/sun/security/pkcs11/README.
### SCTP Tests
The SCTP tests require the SCTP runtime library, which is often not installed
by default in popular Linux distributions. Without this library, the SCTP tests
will be skipped. If you want to enable the SCTP tests, you should install the
SCTP library before running the tests.
For distributions using the .deb packaging format and the apt tool
(such as Debian, Ubuntu, etc.), try this:
```
sudo apt install libsctp1
sudo modprobe sctp
lsmod | grep sctp
```
For distributions using the .rpm packaging format and the dnf tool
(such as Fedora, Red Hat, etc.), try this:
```
sudo dnf install -y lksctp-tools
sudo modprobe sctp
lsmod | grep sctp
```
### Testing Ahead-of-time Optimizations
One way to improve test coverage of ahead-of-time (AOT) optimizations in


@@ -69,22 +69,18 @@ AC_DEFUN([FLAGS_SETUP_DEBUG_SYMBOLS],
# Debug prefix mapping if supported by compiler
DEBUG_PREFIX_CFLAGS=
UTIL_ARG_WITH(NAME: native-debug-symbols-level, TYPE: string,
DEFAULT: "",
RESULT: DEBUG_SYMBOLS_LEVEL,
UTIL_ARG_WITH(NAME: native-debug-symbols-level, TYPE: literal,
DEFAULT: [auto], VALID_VALUES: [auto 1 2 3],
CHECK_AVAILABLE: [
if test x$TOOLCHAIN_TYPE = xmicrosoft; then
AVAILABLE=false
fi
],
DESC: [set the native debug symbol level (GCC and Clang only)],
DEFAULT_DESC: [toolchain default])
AC_SUBST(DEBUG_SYMBOLS_LEVEL)
if test "x${TOOLCHAIN_TYPE}" = xgcc || \
test "x${TOOLCHAIN_TYPE}" = xclang; then
DEBUG_SYMBOLS_LEVEL_FLAGS="-g"
if test "x${DEBUG_SYMBOLS_LEVEL}" != "x"; then
DEBUG_SYMBOLS_LEVEL_FLAGS="-g${DEBUG_SYMBOLS_LEVEL}"
FLAGS_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [${DEBUG_SYMBOLS_LEVEL_FLAGS}],
IF_FALSE: AC_MSG_ERROR("Debug info level ${DEBUG_SYMBOLS_LEVEL} is not supported"))
fi
fi
DEFAULT_DESC: [toolchain default],
IF_AUTO: [
RESULT=""
])
# Debug symbols
if test "x$TOOLCHAIN_TYPE" = xgcc; then
@@ -111,8 +107,8 @@ AC_DEFUN([FLAGS_SETUP_DEBUG_SYMBOLS],
fi
# Debug info level should follow the debug format to be effective.
CFLAGS_DEBUG_SYMBOLS="-gdwarf-4 ${DEBUG_SYMBOLS_LEVEL_FLAGS}"
ASFLAGS_DEBUG_SYMBOLS="${DEBUG_SYMBOLS_LEVEL_FLAGS}"
CFLAGS_DEBUG_SYMBOLS="-gdwarf-4 -g${NATIVE_DEBUG_SYMBOLS_LEVEL}"
ASFLAGS_DEBUG_SYMBOLS="-g${NATIVE_DEBUG_SYMBOLS_LEVEL}"
elif test "x$TOOLCHAIN_TYPE" = xclang; then
if test "x$ALLOW_ABSOLUTE_PATHS_IN_OUTPUT" = "xfalse"; then
# Check if compiler supports -fdebug-prefix-map. If so, use that to make
@@ -132,8 +128,8 @@ AC_DEFUN([FLAGS_SETUP_DEBUG_SYMBOLS],
IF_FALSE: [GDWARF_FLAGS=""])
# Debug info level should follow the debug format to be effective.
CFLAGS_DEBUG_SYMBOLS="${GDWARF_FLAGS} ${DEBUG_SYMBOLS_LEVEL_FLAGS}"
ASFLAGS_DEBUG_SYMBOLS="${DEBUG_SYMBOLS_LEVEL_FLAGS}"
CFLAGS_DEBUG_SYMBOLS="${GDWARF_FLAGS} -g${NATIVE_DEBUG_SYMBOLS_LEVEL}"
ASFLAGS_DEBUG_SYMBOLS="-g${NATIVE_DEBUG_SYMBOLS_LEVEL}"
elif test "x$TOOLCHAIN_TYPE" = xmicrosoft; then
CFLAGS_DEBUG_SYMBOLS="-Z7"
fi


@@ -1,5 +1,5 @@
#
# Copyright (c) 2016, 2025, Oracle and/or its affiliates. All rights reserved.
# Copyright (c) 2016, 2026, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -61,7 +61,8 @@ $(eval $(call SetupJdkLibrary, BUILD_GTEST_LIBGTEST, \
INCLUDE_FILES := gtest-all.cc gmock-all.cc, \
DISABLED_WARNINGS_gcc := format-nonliteral maybe-uninitialized undef \
unused-result zero-as-null-pointer-constant, \
DISABLED_WARNINGS_clang := format-nonliteral undef unused-result, \
DISABLED_WARNINGS_clang := format-nonliteral undef unused-result \
zero-as-null-pointer-constant, \
DISABLED_WARNINGS_microsoft := 4530, \
DEFAULT_CFLAGS := false, \
CFLAGS := $(JVM_CFLAGS) \


@@ -31,13 +31,14 @@ include LibCommon.gmk
## Build libjaas
################################################################################
$(eval $(call SetupJdkLibrary, BUILD_LIBJAAS, \
NAME := jaas, \
OPTIMIZATION := LOW, \
EXTRA_HEADER_DIRS := java.base:libjava, \
LIBS_windows := advapi32.lib mpr.lib netapi32.lib user32.lib, \
))
TARGETS += $(BUILD_LIBJAAS)
ifeq ($(call isTargetOs, windows), true)
$(eval $(call SetupJdkLibrary, BUILD_LIBJAAS, \
NAME := jaas, \
OPTIMIZATION := LOW, \
EXTRA_HEADER_DIRS := java.base:libjava, \
LIBS_windows := advapi32.lib mpr.lib netapi32.lib user32.lib, \
))
TARGETS += $(BUILD_LIBJAAS)
endif
################################################################################


@@ -5782,6 +5782,9 @@ address MacroAssembler::arrays_equals(Register a1, Register a2, Register tmp3,
// return false;
bind(A_IS_NOT_NULL);
ldrw(cnt1, Address(a1, length_offset));
ldrw(tmp5, Address(a2, length_offset));
cmp(cnt1, tmp5);
br(NE, DONE); // If lengths differ, return false
// Increase loop counter by diff between base- and actual start-offset.
addw(cnt1, cnt1, extra_length);
lea(a1, Address(a1, start_offset));
@@ -5848,6 +5851,9 @@ address MacroAssembler::arrays_equals(Register a1, Register a2, Register tmp3,
cbz(a1, DONE);
ldrw(cnt1, Address(a1, length_offset));
cbz(a2, DONE);
ldrw(tmp5, Address(a2, length_offset));
cmp(cnt1, tmp5);
br(NE, DONE); // If lengths differ, return false
// Increase loop counter by diff between base- and actual start-offset.
addw(cnt1, cnt1, extra_length);
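
The two hunks above are the whole of the 8373021 fix: the stub now loads both lengths and compares them before touching any element data, so a length mismatch can no longer trigger an out-of-bounds read. A minimal scalar sketch of the resulting semantics, with a hypothetical helper standing in for the generated stub:

```
#include <cstring>

// Scalar model of MacroAssembler::arrays_equals after the fix: lengths are
// compared first, so no element bytes are read when they differ (this is
// what the new ldrw/cmp/br(NE, DONE) sequence guarantees).
static bool arrays_equal(const unsigned char* a1, int len1,
                         const unsigned char* a2, int len2) {
  if (len1 != len2) {
    return false;
  }
  return std::memcmp(a1, a2, static_cast<std::size_t>(len1)) == 0;
}
```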


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2003, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2003, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2021, Red Hat Inc. All rights reserved.
* Copyright (c) 2021, Azul Systems, Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
@@ -722,22 +722,20 @@ void SharedRuntime::generate_i2c2i_adapters(MacroAssembler *masm,
// Class initialization barrier for static methods
entry_address[AdapterBlob::C2I_No_Clinit_Check] = nullptr;
if (VM_Version::supports_fast_class_init_checks()) {
Label L_skip_barrier;
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
Label L_skip_barrier;
{ // Bypass the barrier for non-static methods
__ ldrh(rscratch1, Address(rmethod, Method::access_flags_offset()));
__ andsw(zr, rscratch1, JVM_ACC_STATIC);
__ br(Assembler::EQ, L_skip_barrier); // non-static
}
// Bypass the barrier for non-static methods
__ ldrh(rscratch1, Address(rmethod, Method::access_flags_offset()));
__ andsw(zr, rscratch1, JVM_ACC_STATIC);
__ br(Assembler::EQ, L_skip_barrier); // non-static
__ load_method_holder(rscratch2, rmethod);
__ clinit_barrier(rscratch2, rscratch1, &L_skip_barrier);
__ far_jump(RuntimeAddress(SharedRuntime::get_handle_wrong_method_stub()));
__ load_method_holder(rscratch2, rmethod);
__ clinit_barrier(rscratch2, rscratch1, &L_skip_barrier);
__ far_jump(RuntimeAddress(SharedRuntime::get_handle_wrong_method_stub()));
__ bind(L_skip_barrier);
entry_address[AdapterBlob::C2I_No_Clinit_Check] = __ pc();
}
__ bind(L_skip_barrier);
entry_address[AdapterBlob::C2I_No_Clinit_Check] = __ pc();
BarrierSetAssembler* bs = BarrierSet::barrier_set()->barrier_set_assembler();
bs->c2i_entry_barrier(masm);
@@ -1508,7 +1506,8 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
// SVC, HVC, or SMC. Make it a NOP.
__ nop();
if (VM_Version::supports_fast_class_init_checks() && method->needs_clinit_barrier()) {
if (method->needs_clinit_barrier()) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
Label L_skip_barrier;
__ mov_metadata(rscratch2, method->method_holder()); // InstanceKlass*
__ clinit_barrier(rscratch2, rscratch1, &L_skip_barrier);
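
This hunk and the matching ppc, riscv, s390 and x86 hunks below are the same 8370112 cleanup: every platform now supports fast class-init checks, so the `VM_Version::supports_fast_class_init_checks()` guard is dropped from the conditions and retained only as a sanity assert. Condensed before/after shape of the change, with a hypothetical `emit_clinit_barrier()` standing in for the masm sequence:

```
// Before: barrier emission was guarded by a per-platform capability query.
if (VM_Version::supports_fast_class_init_checks() && method->needs_clinit_barrier()) {
  emit_clinit_barrier();
}

// After: the capability is taken for granted and only asserted.
if (method->needs_clinit_barrier()) {
  assert(VM_Version::supports_fast_class_init_checks(), "sanity");
  emit_clinit_barrier();
}
```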


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2003, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2003, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, Red Hat Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -2290,7 +2290,8 @@ void TemplateTable::resolve_cache_and_index_for_method(int byte_no,
__ subs(zr, temp, (int) code); // have we resolved this bytecode?
// Class initialization barrier for static methods
if (VM_Version::supports_fast_class_init_checks() && bytecode() == Bytecodes::_invokestatic) {
if (bytecode() == Bytecodes::_invokestatic) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
__ br(Assembler::NE, L_clinit_barrier_slow);
__ ldr(temp, Address(Rcache, in_bytes(ResolvedMethodEntry::method_offset())));
__ load_method_holder(temp, temp);
@@ -2340,8 +2341,8 @@ void TemplateTable::resolve_cache_and_index_for_field(int byte_no,
__ subs(zr, temp, (int) code); // have we resolved this bytecode?
// Class initialization barrier for static fields
if (VM_Version::supports_fast_class_init_checks() &&
(bytecode() == Bytecodes::_getstatic || bytecode() == Bytecodes::_putstatic)) {
if (bytecode() == Bytecodes::_getstatic || bytecode() == Bytecodes::_putstatic) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
const Register field_holder = temp;
__ br(Assembler::NE, L_clinit_barrier_slow);


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2008, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2008, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -356,10 +356,10 @@ frame frame::sender_for_interpreter_frame(RegisterMap* map) const {
bool frame::is_interpreted_frame_valid(JavaThread* thread) const {
assert(is_interpreted_frame(), "Not an interpreted frame");
// These are reasonable sanity checks
if (fp() == 0 || (intptr_t(fp()) & (wordSize-1)) != 0) {
if (fp() == nullptr || (intptr_t(fp()) & (wordSize-1)) != 0) {
return false;
}
if (sp() == 0 || (intptr_t(sp()) & (wordSize-1)) != 0) {
if (sp() == nullptr || (intptr_t(sp()) & (wordSize-1)) != 0) {
return false;
}
if (fp() + interpreter_frame_initial_sp_offset < sp()) {


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2008, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2008, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -172,7 +172,7 @@ void NativeMovConstReg::set_data(intptr_t x, address pc) {
address addr = oop_addr != nullptr ? (address)oop_addr : (address)metadata_addr;
if(pc == 0) {
if (pc == nullptr) {
offset = addr - instruction_address() - 8;
} else {
offset = addr - pc - 8;
@@ -228,7 +228,7 @@ void NativeMovConstReg::set_data(intptr_t x, address pc) {
void NativeMovConstReg::set_pc_relative_offset(address addr, address pc) {
int offset;
if (pc == 0) {
if (pc == nullptr) {
offset = addr - instruction_address() - 8;
} else {
offset = addr - pc - 8;


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2008, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2008, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -371,7 +371,7 @@ class NativeMovConstReg: public NativeInstruction {
public:
intptr_t data() const;
void set_data(intptr_t x, address pc = 0);
void set_data(intptr_t x, address pc = nullptr);
bool is_pc_relative() {
return !is_movw();
}
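
The arm32 hunks above are mechanical -Wzero-as-null-pointer-constant fixes (8375737), replacing a literal `0` in pointer contexts with `nullptr`; the gtest makefile change earlier enables the same warning for clang. A minimal standalone reproduction of what the warning flags, assuming only the flag itself:

```
// Compile with: g++ -Wzero-as-null-pointer-constant -c demo.cpp
using address = unsigned char*;

void set_data(long x, address pc = 0);             // warned: zero as null pointer
void set_data_fixed(long x, address pc = nullptr); // clean

bool is_unset(address pc)       { return pc == 0; }       // warned
bool is_unset_fixed(address pc) { return pc == nullptr; } // clean
```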


@@ -1109,11 +1109,11 @@ void InterpreterMacroAssembler::verify_method_data_pointer() {
lhz(R11_scratch1, in_bytes(DataLayout::bci_offset()), R28_mdx);
ld(R12_scratch2, in_bytes(Method::const_offset()), R19_method);
addi(R11_scratch1, R11_scratch1, in_bytes(ConstMethod::codes_offset()));
add(R11_scratch1, R12_scratch2, R12_scratch2);
add(R11_scratch1, R11_scratch1, R12_scratch2);
cmpd(CR0, R11_scratch1, R14_bcp);
beq(CR0, verify_continue);
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::verify_mdp ), R19_method, R14_bcp, R28_mdx);
call_VM_leaf(CAST_FROM_FN_PTR(address, InterpreterRuntime::verify_mdp), R19_method, R14_bcp, R28_mdx);
bind(verify_continue);
#endif
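
The one-register fix above is the substance of 8375530: the old `add` summed R12_scratch2 with itself, discarding the bci-plus-codes offset already accumulated in R11_scratch1, so the quick mdp comparison against R14_bcp essentially always failed and debug builds took the slow `InterpreterRuntime::verify_mdp` call on every check. A scalar model of the intended comparison (hypothetical helper):

```
#include <cstdint>

// The quick path wants: bcp == const_method + codes_offset + bci.
static bool mdp_quick_check(uintptr_t const_method, uintptr_t codes_offset,
                            uintptr_t bci, uintptr_t bcp) {
  uintptr_t expected = bci + codes_offset;  // R11_scratch1 after lhz/addi
  expected += const_method;                 // fixed add: R11 = R11 + R12
  // The broken add computed const_method + const_method instead,
  // which never matches bcp, forcing the expensive slow path.
  return expected == bcp;
}
```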


@@ -1,5 +1,5 @@
/*
* Copyright (c) 1997, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1997, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2025 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -1237,26 +1237,24 @@ void SharedRuntime::generate_i2c2i_adapters(MacroAssembler *masm,
// Class initialization barrier for static methods
entry_address[AdapterBlob::C2I_No_Clinit_Check] = nullptr;
if (VM_Version::supports_fast_class_init_checks()) {
Label L_skip_barrier;
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
Label L_skip_barrier;
{ // Bypass the barrier for non-static methods
__ lhz(R0, in_bytes(Method::access_flags_offset()), R19_method);
__ andi_(R0, R0, JVM_ACC_STATIC);
__ beq(CR0, L_skip_barrier); // non-static
}
// Bypass the barrier for non-static methods
__ lhz(R0, in_bytes(Method::access_flags_offset()), R19_method);
__ andi_(R0, R0, JVM_ACC_STATIC);
__ beq(CR0, L_skip_barrier); // non-static
Register klass = R11_scratch1;
__ load_method_holder(klass, R19_method);
__ clinit_barrier(klass, R16_thread, &L_skip_barrier /*L_fast_path*/);
Register klass = R11_scratch1;
__ load_method_holder(klass, R19_method);
__ clinit_barrier(klass, R16_thread, &L_skip_barrier /*L_fast_path*/);
__ load_const_optimized(klass, SharedRuntime::get_handle_wrong_method_stub(), R0);
__ mtctr(klass);
__ bctr();
__ load_const_optimized(klass, SharedRuntime::get_handle_wrong_method_stub(), R0);
__ mtctr(klass);
__ bctr();
__ bind(L_skip_barrier);
entry_address[AdapterBlob::C2I_No_Clinit_Check] = __ pc();
}
__ bind(L_skip_barrier);
entry_address[AdapterBlob::C2I_No_Clinit_Check] = __ pc();
BarrierSetAssembler* bs = BarrierSet::barrier_set()->barrier_set_assembler();
bs->c2i_entry_barrier(masm, /* tmp register*/ ic_klass, /* tmp register*/ receiver_klass, /* tmp register*/ code);
@@ -2210,7 +2208,8 @@ nmethod *SharedRuntime::generate_native_wrapper(MacroAssembler *masm,
// --------------------------------------------------------------------------
vep_start_pc = (intptr_t)__ pc();
if (VM_Version::supports_fast_class_init_checks() && method->needs_clinit_barrier()) {
if (method->needs_clinit_barrier()) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
Label L_skip_barrier;
Register klass = r_temp_1;
// Notify OOP recorder (don't need the relocation)


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2014, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2013, 2025 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -2199,7 +2199,8 @@ void TemplateTable::resolve_cache_and_index_for_method(int byte_no, Register Rca
__ isync(); // Order load wrt. succeeding loads.
// Class initialization barrier for static methods
if (VM_Version::supports_fast_class_init_checks() && bytecode() == Bytecodes::_invokestatic) {
if (bytecode() == Bytecodes::_invokestatic) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
const Register method = Rscratch;
const Register klass = Rscratch;
@@ -2244,8 +2245,8 @@ void TemplateTable::resolve_cache_and_index_for_field(int byte_no, Register Rcac
__ isync(); // Order load wrt. succeeding loads.
// Class initialization barrier for static fields
if (VM_Version::supports_fast_class_init_checks() &&
(bytecode() == Bytecodes::_getstatic || bytecode() == Bytecodes::_putstatic)) {
if (bytecode() == Bytecodes::_getstatic || bytecode() == Bytecodes::_putstatic) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
const Register field_holder = R4_ARG2;
// InterpreterRuntime::resolve_get_put sets field_holder and finally release-stores put_code.


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2003, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2003, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2020, Red Hat Inc. All rights reserved.
* Copyright (c) 2020, 2023, Huawei Technologies Co., Ltd. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
@@ -213,7 +213,7 @@ void RegisterSaver::restore_live_registers(MacroAssembler* masm) {
// Is vector's size (in bytes) bigger than a size saved by default?
// riscv does not overlay the floating-point registers on vector registers like aarch64.
bool SharedRuntime::is_wide_vector(int size) {
return UseRVV;
return UseRVV && size > 0;
}
// ---------------------------------------------------------------------------
@@ -637,22 +637,20 @@ void SharedRuntime::generate_i2c2i_adapters(MacroAssembler *masm,
// Class initialization barrier for static methods
entry_address[AdapterBlob::C2I_No_Clinit_Check] = nullptr;
if (VM_Version::supports_fast_class_init_checks()) {
Label L_skip_barrier;
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
Label L_skip_barrier;
{ // Bypass the barrier for non-static methods
__ load_unsigned_short(t0, Address(xmethod, Method::access_flags_offset()));
__ test_bit(t1, t0, exact_log2(JVM_ACC_STATIC));
__ beqz(t1, L_skip_barrier); // non-static
}
// Bypass the barrier for non-static methods
__ load_unsigned_short(t0, Address(xmethod, Method::access_flags_offset()));
__ test_bit(t1, t0, exact_log2(JVM_ACC_STATIC));
__ beqz(t1, L_skip_barrier); // non-static
__ load_method_holder(t1, xmethod);
__ clinit_barrier(t1, t0, &L_skip_barrier);
__ far_jump(RuntimeAddress(SharedRuntime::get_handle_wrong_method_stub()));
__ load_method_holder(t1, xmethod);
__ clinit_barrier(t1, t0, &L_skip_barrier);
__ far_jump(RuntimeAddress(SharedRuntime::get_handle_wrong_method_stub()));
__ bind(L_skip_barrier);
entry_address[AdapterBlob::C2I_No_Clinit_Check] = __ pc();
}
__ bind(L_skip_barrier);
entry_address[AdapterBlob::C2I_No_Clinit_Check] = __ pc();
BarrierSetAssembler* bs = BarrierSet::barrier_set()->barrier_set_assembler();
bs->c2i_entry_barrier(masm);
@@ -1443,7 +1441,8 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
__ nop(); // 4 bytes
}
if (VM_Version::supports_fast_class_init_checks() && method->needs_clinit_barrier()) {
if (method->needs_clinit_barrier()) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
Label L_skip_barrier;
__ mov_metadata(t1, method->method_holder()); // InstanceKlass*
__ clinit_barrier(t1, t0, &L_skip_barrier);


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2003, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2003, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, Red Hat Inc. All rights reserved.
* Copyright (c) 2020, 2023, Huawei Technologies Co., Ltd. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
@@ -2192,7 +2192,8 @@ void TemplateTable::resolve_cache_and_index_for_method(int byte_no,
__ mv(t0, (int) code);
// Class initialization barrier for static methods
if (VM_Version::supports_fast_class_init_checks() && bytecode() == Bytecodes::_invokestatic) {
if (bytecode() == Bytecodes::_invokestatic) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
__ bne(temp, t0, L_clinit_barrier_slow); // have we resolved this bytecode?
__ ld(temp, Address(Rcache, in_bytes(ResolvedMethodEntry::method_offset())));
__ load_method_holder(temp, temp);
@@ -2243,8 +2244,8 @@ void TemplateTable::resolve_cache_and_index_for_field(int byte_no,
__ mv(t0, (int) code); // have we resolved this bytecode?
// Class initialization barrier for static fields
if (VM_Version::supports_fast_class_init_checks() &&
(bytecode() == Bytecodes::_getstatic || bytecode() == Bytecodes::_putstatic)) {
if (bytecode() == Bytecodes::_getstatic || bytecode() == Bytecodes::_putstatic) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
const Register field_holder = temp;
__ bne(temp, t0, L_clinit_barrier_slow);


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2016, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2024 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -1567,7 +1567,8 @@ nmethod *SharedRuntime::generate_native_wrapper(MacroAssembler *masm,
//---------------------------------------------------------------------
wrapper_VEPStart = __ offset();
if (VM_Version::supports_fast_class_init_checks() && method->needs_clinit_barrier()) {
if (method->needs_clinit_barrier()) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
Label L_skip_barrier;
Register klass = Z_R1_scratch;
// Notify OOP recorder (don't need the relocation)
@@ -2378,24 +2379,22 @@ void SharedRuntime::generate_i2c2i_adapters(MacroAssembler *masm,
// Class initialization barrier for static methods
entry_address[AdapterBlob::C2I_No_Clinit_Check] = nullptr;
if (VM_Version::supports_fast_class_init_checks()) {
Label L_skip_barrier;
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
Label L_skip_barrier;
{ // Bypass the barrier for non-static methods
__ testbit_ushort(Address(Z_method, Method::access_flags_offset()), JVM_ACC_STATIC_BIT);
__ z_bfalse(L_skip_barrier); // non-static
}
// Bypass the barrier for non-static methods
__ testbit_ushort(Address(Z_method, Method::access_flags_offset()), JVM_ACC_STATIC_BIT);
__ z_bfalse(L_skip_barrier); // non-static
Register klass = Z_R11;
__ load_method_holder(klass, Z_method);
__ clinit_barrier(klass, Z_thread, &L_skip_barrier /*L_fast_path*/);
Register klass = Z_R11;
__ load_method_holder(klass, Z_method);
__ clinit_barrier(klass, Z_thread, &L_skip_barrier /*L_fast_path*/);
__ load_const_optimized(klass, SharedRuntime::get_handle_wrong_method_stub());
__ z_br(klass);
__ load_const_optimized(klass, SharedRuntime::get_handle_wrong_method_stub());
__ z_br(klass);
__ bind(L_skip_barrier);
entry_address[AdapterBlob::C2I_No_Clinit_Check] = __ pc();
}
__ bind(L_skip_barrier);
entry_address[AdapterBlob::C2I_No_Clinit_Check] = __ pc();
gen_c2i_adapter(masm, total_args_passed, comp_args_on_stack, sig_bt, regs, skip_fixup);
return;


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2016, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2024 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -2377,7 +2377,8 @@ void TemplateTable::resolve_cache_and_index_for_method(int byte_no,
__ z_cli(Address(Rcache, bc_offset), code);
// Class initialization barrier for static methods
if (VM_Version::supports_fast_class_init_checks() && bytecode() == Bytecodes::_invokestatic) {
if (bytecode() == Bytecodes::_invokestatic) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
const Register method = Z_R1_scratch;
const Register klass = Z_R1_scratch;
__ z_brne(L_clinit_barrier_slow);
@@ -2427,8 +2428,8 @@ void TemplateTable::resolve_cache_and_index_for_field(int byte_no,
__ z_cli(Address(cache, code_offset), code);
// Class initialization barrier for static fields
if (VM_Version::supports_fast_class_init_checks() &&
(bytecode() == Bytecodes::_getstatic || bytecode() == Bytecodes::_putstatic)) {
if (bytecode() == Bytecodes::_getstatic || bytecode() == Bytecodes::_putstatic) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
const Register field_holder = index;
__ z_brne(L_clinit_barrier_slow);


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2003, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2003, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -1043,26 +1043,24 @@ void SharedRuntime::generate_i2c2i_adapters(MacroAssembler *masm,
// Class initialization barrier for static methods
entry_address[AdapterBlob::C2I_No_Clinit_Check] = nullptr;
if (VM_Version::supports_fast_class_init_checks()) {
Label L_skip_barrier;
Register method = rbx;
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
Label L_skip_barrier;
Register method = rbx;
{ // Bypass the barrier for non-static methods
Register flags = rscratch1;
__ load_unsigned_short(flags, Address(method, Method::access_flags_offset()));
__ testl(flags, JVM_ACC_STATIC);
__ jcc(Assembler::zero, L_skip_barrier); // non-static
}
// Bypass the barrier for non-static methods
Register flags = rscratch1;
__ load_unsigned_short(flags, Address(method, Method::access_flags_offset()));
__ testl(flags, JVM_ACC_STATIC);
__ jcc(Assembler::zero, L_skip_barrier); // non-static
Register klass = rscratch1;
__ load_method_holder(klass, method);
__ clinit_barrier(klass, &L_skip_barrier /*L_fast_path*/);
Register klass = rscratch1;
__ load_method_holder(klass, method);
__ clinit_barrier(klass, &L_skip_barrier /*L_fast_path*/);
__ jump(RuntimeAddress(SharedRuntime::get_handle_wrong_method_stub())); // slow path
__ jump(RuntimeAddress(SharedRuntime::get_handle_wrong_method_stub())); // slow path
__ bind(L_skip_barrier);
entry_address[AdapterBlob::C2I_No_Clinit_Check] = __ pc();
}
__ bind(L_skip_barrier);
entry_address[AdapterBlob::C2I_No_Clinit_Check] = __ pc();
BarrierSetAssembler* bs = BarrierSet::barrier_set()->barrier_set_assembler();
bs->c2i_entry_barrier(masm);
@@ -1904,7 +1902,8 @@ nmethod* SharedRuntime::generate_native_wrapper(MacroAssembler* masm,
int vep_offset = ((intptr_t)__ pc()) - start;
if (VM_Version::supports_fast_class_init_checks() && method->needs_clinit_barrier()) {
if (method->needs_clinit_barrier()) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
Label L_skip_barrier;
Register klass = r10;
__ mov_metadata(klass, method->method_holder()); // InstanceKlass*
@@ -3602,4 +3601,3 @@ RuntimeStub* SharedRuntime::generate_jfr_return_lease() {
}
#endif // INCLUDE_JFR


@@ -1,5 +1,5 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -64,6 +64,39 @@ static address kyberAvx512ConstsAddr(int offset) {
const Register scratch = r10;
ATTRIBUTE_ALIGNED(64) static const uint8_t kyberAvx512_12To16Dup[] = {
// 0 - 63
0, 1, 1, 2, 3, 4, 4, 5, 6, 7, 7, 8, 9, 10, 10, 11, 12, 13, 13, 14, 15, 16,
16, 17, 18, 19, 19, 20, 21, 22, 22, 23, 24, 25, 25, 26, 27, 28, 28, 29, 30,
31, 31, 32, 33, 34, 34, 35, 36, 37, 37, 38, 39, 40, 40, 41, 42, 43, 43, 44,
45, 46, 46, 47
};
static address kyberAvx512_12To16DupAddr() {
return (address) kyberAvx512_12To16Dup;
}
ATTRIBUTE_ALIGNED(64) static const uint16_t kyberAvx512_12To16Shift[] = {
// 0 - 31
0, 4, 0, 4, 0, 4, 0, 4, 0, 4, 0, 4, 0, 4, 0, 4, 0, 4, 0, 4, 0, 4, 0, 4, 0,
4, 0, 4, 0, 4, 0, 4
};
static address kyberAvx512_12To16ShiftAddr() {
return (address) kyberAvx512_12To16Shift;
}
ATTRIBUTE_ALIGNED(64) static const uint64_t kyberAvx512_12To16And[] = {
// 0 - 7
0x0FFF0FFF0FFF0FFF, 0x0FFF0FFF0FFF0FFF, 0x0FFF0FFF0FFF0FFF,
0x0FFF0FFF0FFF0FFF, 0x0FFF0FFF0FFF0FFF, 0x0FFF0FFF0FFF0FFF,
0x0FFF0FFF0FFF0FFF, 0x0FFF0FFF0FFF0FFF
};
static address kyberAvx512_12To16AndAddr() {
return (address) kyberAvx512_12To16And;
}
ATTRIBUTE_ALIGNED(64) static const uint16_t kyberAvx512NttPerms[] = {
// 0
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
@@ -822,10 +855,65 @@ address generate_kyber12To16_avx512(StubGenerator *stubgen,
const Register perms = r11;
Label Loop;
Label Loop, VBMILoop;
__ addptr(condensed, condensedOffs);
if (VM_Version::supports_avx512_vbmi()) {
// mask load for the first 48 bytes of each vector
__ mov64(rax, 0x0000FFFFFFFFFFFF);
__ kmovql(k1, rax);
__ lea(perms, ExternalAddress(kyberAvx512_12To16DupAddr()));
__ evmovdqub(xmm20, Address(perms), Assembler::AVX_512bit);
__ lea(perms, ExternalAddress(kyberAvx512_12To16ShiftAddr()));
__ evmovdquw(xmm21, Address(perms), Assembler::AVX_512bit);
__ lea(perms, ExternalAddress(kyberAvx512_12To16AndAddr()));
__ evmovdquq(xmm22, Address(perms), Assembler::AVX_512bit);
__ align(OptoLoopAlignment);
__ BIND(VBMILoop);
__ evmovdqub(xmm0, k1, Address(condensed, 0), false,
Assembler::AVX_512bit);
__ evmovdqub(xmm1, k1, Address(condensed, 48), false,
Assembler::AVX_512bit);
__ evmovdqub(xmm2, k1, Address(condensed, 96), false,
Assembler::AVX_512bit);
__ evmovdqub(xmm3, k1, Address(condensed, 144), false,
Assembler::AVX_512bit);
__ evpermb(xmm4, k0, xmm20, xmm0, false, Assembler::AVX_512bit);
__ evpermb(xmm5, k0, xmm20, xmm1, false, Assembler::AVX_512bit);
__ evpermb(xmm6, k0, xmm20, xmm2, false, Assembler::AVX_512bit);
__ evpermb(xmm7, k0, xmm20, xmm3, false, Assembler::AVX_512bit);
__ evpsrlvw(xmm4, xmm4, xmm21, Assembler::AVX_512bit);
__ evpsrlvw(xmm5, xmm5, xmm21, Assembler::AVX_512bit);
__ evpsrlvw(xmm6, xmm6, xmm21, Assembler::AVX_512bit);
__ evpsrlvw(xmm7, xmm7, xmm21, Assembler::AVX_512bit);
__ evpandq(xmm0, xmm22, xmm4, Assembler::AVX_512bit);
__ evpandq(xmm1, xmm22, xmm5, Assembler::AVX_512bit);
__ evpandq(xmm2, xmm22, xmm6, Assembler::AVX_512bit);
__ evpandq(xmm3, xmm22, xmm7, Assembler::AVX_512bit);
store4regs(parsed, 0, xmm0_3, _masm);
__ addptr(condensed, 192);
__ addptr(parsed, 256);
__ subl(parsedLength, 128);
__ jcc(Assembler::greater, VBMILoop);
__ leave(); // required for proper stackwalking of RuntimeStub frame
__ mov64(rax, 0); // return 0
__ ret(0);
return start;
}
__ lea(perms, ExternalAddress(kyberAvx512_12To16PermsAddr()));
load4regs(xmm24_27, perms, 0, _masm);
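
For orientation: ML-KEM stores polynomial coefficients packed as 12-bit values, three bytes per coefficient pair, and this stub widens them to 16-bit lanes. In the new VBMI path, `evpermb` duplicates the shared middle byte into adjacent lanes, `evpsrlvw` shifts the odd lanes right by 4, and `evpandq` masks to 12 bits. A scalar model of that unpacking (hypothetical helper, not the stub's interface):

```
#include <cstdint>
#include <cstddef>

// Scalar model of the 12-to-16-bit unpack the VBMI loop vectorizes
// (the stub handles 128 coefficients per iteration).
static void kyber12To16(const uint8_t* condensed, uint16_t* parsed, size_t pairs) {
  for (size_t i = 0; i < pairs; i++) {
    uint8_t b0 = condensed[3 * i];
    uint8_t b1 = condensed[3 * i + 1];
    uint8_t b2 = condensed[3 * i + 2];
    // Lane (b0,b1) shifted by 0, lane (b1,b2) shifted by 4, masked to 12 bits:
    parsed[2 * i]     = (uint16_t)((b0 | (b1 << 8)) & 0x0FFF);
    parsed[2 * i + 1] = (uint16_t)((b1 >> 4) | (b2 << 4)); // already <= 0x0FFF
  }
}
```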


@@ -1,5 +1,5 @@
/*
* Copyright (c) 1997, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1997, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -2216,7 +2216,8 @@ void TemplateTable::resolve_cache_and_index_for_method(int byte_no,
__ cmpl(temp, code); // have we resolved this bytecode?
// Class initialization barrier for static methods
if (VM_Version::supports_fast_class_init_checks() && bytecode() == Bytecodes::_invokestatic) {
if (bytecode() == Bytecodes::_invokestatic) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
const Register method = temp;
const Register klass = temp;
@@ -2264,8 +2265,8 @@ void TemplateTable::resolve_cache_and_index_for_field(int byte_no,
__ cmpl(temp, code); // have we resolved this bytecode?
// Class initialization barrier for static fields
if (VM_Version::supports_fast_class_init_checks() &&
(bytecode() == Bytecodes::_getstatic || bytecode() == Bytecodes::_putstatic)) {
if (bytecode() == Bytecodes::_getstatic || bytecode() == Bytecodes::_putstatic) {
assert(VM_Version::supports_fast_class_init_checks(), "sanity");
const Register field_holder = temp;
__ jcc(Assembler::notEqual, L_clinit_barrier_slow);


@@ -1,5 +1,5 @@
/*
* Copyright (c) 1999, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1999, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2025 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -258,10 +258,18 @@ bool os::free_memory(physical_memory_size_type& value) {
return Aix::available_memory(value);
}
bool os::Machine::free_memory(physical_memory_size_type& value) {
return Aix::available_memory(value);
}
bool os::available_memory(physical_memory_size_type& value) {
return Aix::available_memory(value);
}
bool os::Machine::available_memory(physical_memory_size_type& value) {
return Aix::available_memory(value);
}
bool os::Aix::available_memory(physical_memory_size_type& value) {
os::Aix::meminfo_t mi;
if (os::Aix::get_meminfo(&mi)) {
@@ -273,6 +281,10 @@ bool os::Aix::available_memory(physical_memory_size_type& value) {
}
bool os::total_swap_space(physical_memory_size_type& value) {
return Machine::total_swap_space(value);
}
bool os::Machine::total_swap_space(physical_memory_size_type& value) {
perfstat_memory_total_t memory_info;
if (libperfstat::perfstat_memory_total(nullptr, &memory_info, sizeof(perfstat_memory_total_t), 1) == -1) {
return false;
@@ -282,6 +294,10 @@ bool os::total_swap_space(physical_memory_size_type& value) {
}
bool os::free_swap_space(physical_memory_size_type& value) {
return Machine::free_swap_space(value);
}
bool os::Machine::free_swap_space(physical_memory_size_type& value) {
perfstat_memory_total_t memory_info;
if (libperfstat::perfstat_memory_total(nullptr, &memory_info, sizeof(perfstat_memory_total_t), 1) == -1) {
return false;
@@ -294,6 +310,10 @@ physical_memory_size_type os::physical_memory() {
return Aix::physical_memory();
}
physical_memory_size_type os::Machine::physical_memory() {
return Aix::physical_memory();
}
size_t os::rss() { return (size_t)0; }
// Cpu architecture string
@@ -2264,6 +2284,10 @@ int os::active_processor_count() {
return ActiveProcessorCount;
}
return Machine::active_processor_count();
}
int os::Machine::active_processor_count() {
int online_cpus = ::sysconf(_SC_NPROCESSORS_ONLN);
assert(online_cpus > 0 && online_cpus <= processor_count(), "sanity check");
return online_cpus;
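
These os_aix.cpp additions, and the matching os_bsd.cpp hunks below, implement 8367319: each os:: memory and CPU query gains an os::Machine:: twin that always reports the host's values, while the plain os:: entry point remains the one that may honor container limits (relevant on Linux with cgroups; on AIX and BSD the two coincide). Condensed shape of the split, simplified from the diff:

```
// os::X           -> value the JVM should act on (container-aware on Linux)
// os::Machine::X  -> raw host value, never limited by a container
bool os::available_memory(physical_memory_size_type& value) {
  return Aix::available_memory(value);
}
bool os::Machine::available_memory(physical_memory_size_type& value) {
  return Aix::available_memory(value);  // same backend here: no containers on AIX
}
```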


@@ -132,7 +132,7 @@ public:
static const char* tagToStr(uint32_t user_tag) {
switch (user_tag) {
case 0:
return 0;
return nullptr;
X1(MALLOC, malloc);
X1(MALLOC_SMALL, malloc_small);
X1(MALLOC_LARGE, malloc_large);


@@ -1,5 +1,5 @@
/*
* Copyright (c) 1999, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1999, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -137,10 +137,18 @@ bool os::available_memory(physical_memory_size_type& value) {
return Bsd::available_memory(value);
}
bool os::Machine::available_memory(physical_memory_size_type& value) {
return Bsd::available_memory(value);
}
bool os::free_memory(physical_memory_size_type& value) {
return Bsd::available_memory(value);
}
bool os::Machine::free_memory(physical_memory_size_type& value) {
return Bsd::available_memory(value);
}
// Available here means free. Note that this number is of little use. As an estimate
// of future memory pressure it is far too conservative, since macOS will use a lot
// of unused memory for caches, and return it willingly when needed.
@ -181,6 +189,10 @@ void os::Bsd::print_uptime_info(outputStream* st) {
}
bool os::total_swap_space(physical_memory_size_type& value) {
return Machine::total_swap_space(value);
}
bool os::Machine::total_swap_space(physical_memory_size_type& value) {
#if defined(__APPLE__)
struct xsw_usage vmusage;
size_t size = sizeof(vmusage);
@ -195,6 +207,10 @@ bool os::total_swap_space(physical_memory_size_type& value) {
}
bool os::free_swap_space(physical_memory_size_type& value) {
return Machine::free_swap_space(value);
}
bool os::Machine::free_swap_space(physical_memory_size_type& value) {
#if defined(__APPLE__)
struct xsw_usage vmusage;
size_t size = sizeof(vmusage);
@ -212,6 +228,10 @@ physical_memory_size_type os::physical_memory() {
return Bsd::physical_memory();
}
physical_memory_size_type os::Machine::physical_memory() {
return Bsd::physical_memory();
}
size_t os::rss() {
size_t rss = 0;
#ifdef __APPLE__
@ -608,7 +628,7 @@ static void *thread_native_entry(Thread *thread) {
log_info(os, thread)("Thread finished (tid: %zu, pthread id: %zu).",
os::current_thread_id(), (uintx) pthread_self());
return 0;
return nullptr;
}
bool os::create_thread(Thread* thread, ThreadType thr_type,
@ -1400,7 +1420,7 @@ int os::get_loaded_modules_info(os::LoadedModulesCallbackFunc callback, void *pa
#elif defined(__APPLE__)
for (uint32_t i = 1; i < _dyld_image_count(); i++) {
// Value for top_address is passed as nullptr since we don't have any information about module size
if (callback(_dyld_get_image_name(i), (address)_dyld_get_image_header(i), (address)0, param)) {
if (callback(_dyld_get_image_name(i), (address)_dyld_get_image_header(i), nullptr, param)) {
return 1;
}
}
@ -2189,6 +2209,10 @@ int os::active_processor_count() {
return ActiveProcessorCount;
}
return Machine::active_processor_count();
}
int os::Machine::active_processor_count() {
return _processor_count;
}

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2019, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2019, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -631,22 +631,20 @@ void CgroupSubsystemFactory::cleanup(CgroupInfo* cg_infos) {
* return:
* true if there were no errors. false otherwise.
*/
bool CgroupSubsystem::active_processor_count(int& value) {
int cpu_count;
int result = -1;
bool CgroupSubsystem::active_processor_count(double& value) {
// We use a cache with a timeout to avoid performing expensive
// computations in the event this function is called frequently.
// [See 8227006].
CachingCgroupController<CgroupCpuController>* contrl = cpu_controller();
CachedMetric* cpu_limit = contrl->metrics_cache();
CachingCgroupController<CgroupCpuController, double>* contrl = cpu_controller();
CachedMetric<double>* cpu_limit = contrl->metrics_cache();
if (!cpu_limit->should_check_metric()) {
value = (int)cpu_limit->value();
log_trace(os, container)("CgroupSubsystem::active_processor_count (cached): %d", value);
value = cpu_limit->value();
log_trace(os, container)("CgroupSubsystem::active_processor_count (cached): %.2f", value);
return true;
}
cpu_count = os::Linux::active_processor_count();
int cpu_count = os::Linux::active_processor_count();
double result = -1;
if (!CgroupUtil::processor_count(contrl->controller(), cpu_count, result)) {
return false;
}
@ -671,8 +669,8 @@ bool CgroupSubsystem::active_processor_count(int& value) {
*/
bool CgroupSubsystem::memory_limit_in_bytes(physical_memory_size_type upper_bound,
physical_memory_size_type& value) {
CachingCgroupController<CgroupMemoryController>* contrl = memory_controller();
CachedMetric* memory_limit = contrl->metrics_cache();
CachingCgroupController<CgroupMemoryController, physical_memory_size_type>* contrl = memory_controller();
CachedMetric<physical_memory_size_type>* memory_limit = contrl->metrics_cache();
if (!memory_limit->should_check_metric()) {
value = memory_limit->value();
return true;

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2019, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2019, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -181,20 +181,21 @@ class CgroupController: public CHeapObj<mtInternal> {
static bool limit_from_str(char* limit_str, physical_memory_size_type& value);
};
template <typename MetricType>
class CachedMetric : public CHeapObj<mtInternal>{
private:
volatile physical_memory_size_type _metric;
volatile MetricType _metric;
volatile jlong _next_check_counter;
public:
CachedMetric() {
_metric = value_unlimited;
_metric = static_cast<MetricType>(value_unlimited);
_next_check_counter = min_jlong;
}
bool should_check_metric() {
return os::elapsed_counter() > _next_check_counter;
}
physical_memory_size_type value() { return _metric; }
void set_value(physical_memory_size_type value, jlong timeout) {
MetricType value() { return _metric; }
void set_value(MetricType value, jlong timeout) {
_metric = value;
// Metric is unlikely to change, but we want to remain
// responsive to configuration changes. A very short grace time
@ -205,19 +206,19 @@ class CachedMetric : public CHeapObj<mtInternal>{
}
};
template <class T>
template <class T, typename MetricType>
class CachingCgroupController : public CHeapObj<mtInternal> {
private:
T* _controller;
CachedMetric* _metrics_cache;
CachedMetric<MetricType>* _metrics_cache;
public:
CachingCgroupController(T* cont) {
_controller = cont;
_metrics_cache = new CachedMetric();
_metrics_cache = new CachedMetric<MetricType>();
}
CachedMetric* metrics_cache() { return _metrics_cache; }
CachedMetric<MetricType>* metrics_cache() { return _metrics_cache; }
T* controller() { return _controller; }
};
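The templated cache above is the core of the JDK-8227006 fix referenced earlier: expensive cgroup reads are memoized and only refreshed after a short timeout. A minimal standalone sketch of the same pattern, using std::atomic and std::chrono in place of HotSpot's volatile fields and os::elapsed_counter() (all names here are illustrative, not the JDK's):

#include <atomic>
#include <chrono>

template <typename MetricType>
class TimedCache {
  std::atomic<MetricType> _metric;
  std::atomic<long long> _next_check_ns; // next time a re-read is allowed
  static long long now_ns() {
    using namespace std::chrono;
    return duration_cast<nanoseconds>(steady_clock::now().time_since_epoch()).count();
  }
public:
  explicit TimedCache(MetricType unlimited) : _metric(unlimited), _next_check_ns(0) {}
  bool should_check_metric() const { return now_ns() > _next_check_ns.load(); }
  MetricType value() const { return _metric.load(); }
  void set_value(MetricType v, long long timeout_ns) {
    _metric.store(v);
    _next_check_ns.store(now_ns() + timeout_ns); // short grace period before re-reading
  }
};

Callers mirror active_processor_count() above: consult should_check_metric() first, and only on expiry recompute the expensive value and refresh it via set_value().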
@ -277,7 +278,7 @@ class CgroupMemoryController: public CHeapObj<mtInternal> {
class CgroupSubsystem: public CHeapObj<mtInternal> {
public:
bool memory_limit_in_bytes(physical_memory_size_type upper_bound, physical_memory_size_type& value);
bool active_processor_count(int& value);
bool active_processor_count(double& value);
virtual bool pids_max(uint64_t& value) = 0;
virtual bool pids_current(uint64_t& value) = 0;
@ -286,8 +287,8 @@ class CgroupSubsystem: public CHeapObj<mtInternal> {
virtual char * cpu_cpuset_cpus() = 0;
virtual char * cpu_cpuset_memory_nodes() = 0;
virtual const char * container_type() = 0;
virtual CachingCgroupController<CgroupMemoryController>* memory_controller() = 0;
virtual CachingCgroupController<CgroupCpuController>* cpu_controller() = 0;
virtual CachingCgroupController<CgroupMemoryController, physical_memory_size_type>* memory_controller() = 0;
virtual CachingCgroupController<CgroupCpuController, double>* cpu_controller() = 0;
virtual CgroupCpuacctController* cpuacct_controller() = 0;
bool cpu_quota(int& value);

View File

@ -1,5 +1,6 @@
/*
* Copyright (c) 2024, 2025, Red Hat, Inc.
* Copyright (c) 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -25,9 +26,8 @@
#include "cgroupUtil_linux.hpp"
#include "os_linux.hpp"
bool CgroupUtil::processor_count(CgroupCpuController* cpu_ctrl, int upper_bound, int& value) {
bool CgroupUtil::processor_count(CgroupCpuController* cpu_ctrl, int upper_bound, double& value) {
assert(upper_bound > 0, "upper bound of cpus must be positive");
int limit_count = upper_bound;
int quota = -1;
int period = -1;
if (!cpu_ctrl->cpu_quota(quota)) {
@ -37,20 +37,15 @@ bool CgroupUtil::processor_count(CgroupCpuController* cpu_ctrl, int upper_bound,
return false;
}
int quota_count = 0;
int result = upper_bound;
double result = upper_bound;
if (quota > -1 && period > 0) {
quota_count = ceilf((float)quota / (float)period);
log_trace(os, container)("CPU Quota count based on quota/period: %d", quota_count);
if (quota > 0 && period > 0) { // Use quotas
double cpu_quota = static_cast<double>(quota) / period;
log_trace(os, container)("CPU Quota based on quota/period: %.2f", cpu_quota);
result = MIN2(result, cpu_quota);
}
// Use quotas
if (quota_count != 0) {
limit_count = quota_count;
}
result = MIN2(upper_bound, limit_count);
log_trace(os, container)("OSContainer::active_processor_count: %d", result);
log_trace(os, container)("OSContainer::active_processor_count: %.2f", result);
value = result;
return true;
}
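The fractional limit is simply quota divided by period, clamped by the host CPU count. A worked example with illustrative cgroup values:

// cpu quota 50000 us over a 100000 us period -> 0.5 CPUs (values illustrative)
#include <algorithm>
#include <cstdio>

int main() {
  int quota = 50000, period = 100000, upper_bound = 8; // host has 8 CPUs
  double result = upper_bound;
  if (quota > 0 && period > 0) {
    result = std::min(result, static_cast<double>(quota) / period);
  }
  std::printf("active_processor_count: %.2f\n", result); // prints 0.50
  return 0;
}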
@ -73,11 +68,11 @@ physical_memory_size_type CgroupUtil::get_updated_mem_limit(CgroupMemoryControll
// Get an updated cpu limit. The return value is strictly less than or equal to the
// passed in 'lowest' value.
int CgroupUtil::get_updated_cpu_limit(CgroupCpuController* cpu,
double CgroupUtil::get_updated_cpu_limit(CgroupCpuController* cpu,
int lowest,
int upper_bound) {
assert(lowest > 0 && lowest <= upper_bound, "invariant");
int cpu_limit_val = -1;
double cpu_limit_val = -1;
if (CgroupUtil::processor_count(cpu, upper_bound, cpu_limit_val) && cpu_limit_val != upper_bound) {
assert(cpu_limit_val <= upper_bound, "invariant");
if (lowest > cpu_limit_val) {
@ -172,7 +167,7 @@ void CgroupUtil::adjust_controller(CgroupCpuController* cpu) {
assert(cg_path[0] == '/', "cgroup path must start with '/'");
int host_cpus = os::Linux::active_processor_count();
int lowest_limit = host_cpus;
int cpus = get_updated_cpu_limit(cpu, lowest_limit, host_cpus);
double cpus = get_updated_cpu_limit(cpu, lowest_limit, host_cpus);
int orig_limit = lowest_limit != host_cpus ? lowest_limit : host_cpus;
char* limit_cg_path = nullptr;
while ((last_slash = strrchr(cg_path, '/')) != cg_path) {

View File

@ -1,5 +1,6 @@
/*
* Copyright (c) 2024, Red Hat, Inc.
* Copyright (c) 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -31,7 +32,7 @@
class CgroupUtil: AllStatic {
public:
static bool processor_count(CgroupCpuController* cpu, int upper_bound, int& value);
static bool processor_count(CgroupCpuController* cpu, int upper_bound, double& value);
// Given a memory controller, adjust its path to a point in the hierarchy
// that represents the closest memory limit.
static void adjust_controller(CgroupMemoryController* m);
@ -42,9 +43,7 @@ class CgroupUtil: AllStatic {
static physical_memory_size_type get_updated_mem_limit(CgroupMemoryController* m,
physical_memory_size_type lowest,
physical_memory_size_type upper_bound);
static int get_updated_cpu_limit(CgroupCpuController* c,
int lowest,
int upper_bound);
static double get_updated_cpu_limit(CgroupCpuController* c, int lowest, int upper_bound);
};
#endif // CGROUP_UTIL_LINUX_HPP

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2019, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2019, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -328,8 +328,8 @@ CgroupV1Subsystem::CgroupV1Subsystem(CgroupV1Controller* cpuset,
_pids(pids) {
CgroupUtil::adjust_controller(memory);
CgroupUtil::adjust_controller(cpu);
_memory = new CachingCgroupController<CgroupMemoryController>(memory);
_cpu = new CachingCgroupController<CgroupCpuController>(cpu);
_memory = new CachingCgroupController<CgroupMemoryController, physical_memory_size_type>(memory);
_cpu = new CachingCgroupController<CgroupCpuController, double>(cpu);
}
bool CgroupV1Subsystem::is_containerized() {

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2019, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2019, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -214,15 +214,15 @@ class CgroupV1Subsystem: public CgroupSubsystem {
const char * container_type() override {
return "cgroupv1";
}
CachingCgroupController<CgroupMemoryController>* memory_controller() override { return _memory; }
CachingCgroupController<CgroupCpuController>* cpu_controller() override { return _cpu; }
CachingCgroupController<CgroupMemoryController, physical_memory_size_type>* memory_controller() override { return _memory; }
CachingCgroupController<CgroupCpuController, double>* cpu_controller() override { return _cpu; }
CgroupCpuacctController* cpuacct_controller() override { return _cpuacct; }
private:
/* controllers */
CachingCgroupController<CgroupMemoryController>* _memory = nullptr;
CachingCgroupController<CgroupMemoryController, physical_memory_size_type>* _memory = nullptr;
CgroupV1Controller* _cpuset = nullptr;
CachingCgroupController<CgroupCpuController>* _cpu = nullptr;
CachingCgroupController<CgroupCpuController, double>* _cpu = nullptr;
CgroupV1CpuacctController* _cpuacct = nullptr;
CgroupV1Controller* _pids = nullptr;

View File

@ -1,6 +1,6 @@
/*
* Copyright (c) 2020, 2025, Red Hat Inc.
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -156,8 +156,8 @@ CgroupV2Subsystem::CgroupV2Subsystem(CgroupV2MemoryController * memory,
_unified(unified) {
CgroupUtil::adjust_controller(memory);
CgroupUtil::adjust_controller(cpu);
_memory = new CachingCgroupController<CgroupMemoryController>(memory);
_cpu = new CachingCgroupController<CgroupCpuController>(cpu);
_memory = new CachingCgroupController<CgroupMemoryController, physical_memory_size_type>(memory);
_cpu = new CachingCgroupController<CgroupCpuController, double>(cpu);
_cpuacct = cpuacct;
}

View File

@ -1,6 +1,6 @@
/*
* Copyright (c) 2020, 2024, Red Hat Inc.
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -152,8 +152,8 @@ class CgroupV2Subsystem: public CgroupSubsystem {
/* One unified controller */
CgroupV2Controller _unified;
/* Caching wrappers for cpu/memory metrics */
CachingCgroupController<CgroupMemoryController>* _memory = nullptr;
CachingCgroupController<CgroupCpuController>* _cpu = nullptr;
CachingCgroupController<CgroupMemoryController, physical_memory_size_type>* _memory = nullptr;
CachingCgroupController<CgroupCpuController, double>* _cpu = nullptr;
CgroupCpuacctController* _cpuacct = nullptr;
@ -175,8 +175,8 @@ class CgroupV2Subsystem: public CgroupSubsystem {
const char * container_type() override {
return "cgroupv2";
}
CachingCgroupController<CgroupMemoryController>* memory_controller() override { return _memory; }
CachingCgroupController<CgroupCpuController>* cpu_controller() override { return _cpu; }
CachingCgroupController<CgroupMemoryController, physical_memory_size_type>* memory_controller() override { return _memory; }
CachingCgroupController<CgroupCpuController, double>* cpu_controller() override { return _cpu; }
CgroupCpuacctController* cpuacct_controller() override { return _cpuacct; };
};

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -86,8 +86,8 @@ void OSContainer::init() {
// 2.) On a physical Linux system with a limit enforced by other means (like systemd slice)
physical_memory_size_type mem_limit_val = value_unlimited;
(void)memory_limit_in_bytes(mem_limit_val); // discard error and use default
int host_cpus = os::Linux::active_processor_count();
int cpus = host_cpus;
double host_cpus = os::Linux::active_processor_count();
double cpus = host_cpus;
(void)active_processor_count(cpus); // discard error and use default
any_mem_cpu_limit_present = mem_limit_val != value_unlimited || host_cpus != cpus;
if (any_mem_cpu_limit_present) {
@ -127,8 +127,7 @@ bool OSContainer::available_memory_in_bytes(physical_memory_size_type& value) {
return false;
}
bool OSContainer::available_swap_in_bytes(physical_memory_size_type host_free_swap,
physical_memory_size_type& value) {
bool OSContainer::available_swap_in_bytes(physical_memory_size_type& value) {
physical_memory_size_type mem_limit = 0;
physical_memory_size_type mem_swap_limit = 0;
if (memory_limit_in_bytes(mem_limit) &&
@ -179,8 +178,7 @@ bool OSContainer::available_swap_in_bytes(physical_memory_size_type host_free_sw
assert(num < 25, "buffer too small");
mem_limit_buf[num] = '\0';
log_trace(os,container)("OSContainer::available_swap_in_bytes: container_swap_limit=%s"
" container_mem_limit=%s, host_free_swap: " PHYS_MEM_TYPE_FORMAT,
mem_swap_buf, mem_limit_buf, host_free_swap);
" container_mem_limit=%s", mem_swap_buf, mem_limit_buf);
}
return false;
}
@ -252,7 +250,7 @@ char * OSContainer::cpu_cpuset_memory_nodes() {
return cgroup_subsystem->cpu_cpuset_memory_nodes();
}
bool OSContainer::active_processor_count(int& value) {
bool OSContainer::active_processor_count(double& value) {
assert(cgroup_subsystem != nullptr, "cgroup subsystem not available");
return cgroup_subsystem->active_processor_count(value);
}
@ -291,11 +289,13 @@ template<typename T> struct metric_fmt;
template<> struct metric_fmt<unsigned long long int> { static constexpr const char* fmt = "%llu"; };
template<> struct metric_fmt<unsigned long int> { static constexpr const char* fmt = "%lu"; };
template<> struct metric_fmt<int> { static constexpr const char* fmt = "%d"; };
template<> struct metric_fmt<double> { static constexpr const char* fmt = "%.2f"; };
template<> struct metric_fmt<const char*> { static constexpr const char* fmt = "%s"; };
template void OSContainer::print_container_metric<unsigned long long int>(outputStream*, const char*, unsigned long long int, const char*);
template void OSContainer::print_container_metric<unsigned long int>(outputStream*, const char*, unsigned long int, const char*);
template void OSContainer::print_container_metric<int>(outputStream*, const char*, int, const char*);
template void OSContainer::print_container_metric<double>(outputStream*, const char*, double, const char*);
template void OSContainer::print_container_metric<const char*>(outputStream*, const char*, const char*, const char*);
template <typename T>
@ -304,12 +304,13 @@ void OSContainer::print_container_metric(outputStream* st, const char* metrics,
constexpr int longest_value = max_length - 11; // Max length - shortest "metric: " string ("cpu_quota: ")
char value_str[longest_value + 1] = {};
os::snprintf_checked(value_str, longest_value, metric_fmt<T>::fmt, value);
st->print("%s: %*s", metrics, max_length - static_cast<int>(strlen(metrics)) - 2, value_str); // -2 for the ": "
if (unit[0] != '\0') {
st->print_cr(" %s", unit);
} else {
st->print_cr("");
}
const int pad_width = max_length - static_cast<int>(strlen(metrics)) - 2; // -2 for the ": "
const char* unit_prefix = unit[0] != '\0' ? " " : "";
char line[128] = {};
os::snprintf_checked(line, sizeof(line), "%s: %*s%s%s", metrics, pad_width, value_str, unit_prefix, unit);
st->print_cr("%s", line);
}
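The rewrite composes the whole line with a single snprintf, so the value is right-aligned to a fixed column and the unit, when present, trails after one space. A small sketch of the same %*s padding (max_length and the metric name are illustrative):

#include <cstdio>
#include <cstring>

int main() {
  const int max_length = 28;
  const char* metrics = "cpu_quota";
  const char* value_str = "100000";
  const char* unit = "usec";
  int pad_width = max_length - (int)std::strlen(metrics) - 2; // -2 for the ": "
  const char* unit_prefix = unit[0] != '\0' ? " " : "";
  std::printf("%s: %*s%s%s\n", metrics, pad_width, value_str, unit_prefix, unit);
  return 0;
}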
void OSContainer::print_container_helper(outputStream* st, MetricResult& res, const char* metrics) {

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -72,8 +72,7 @@ class OSContainer: AllStatic {
static const char * container_type();
static bool available_memory_in_bytes(physical_memory_size_type& value);
static bool available_swap_in_bytes(physical_memory_size_type host_free_swap,
physical_memory_size_type& value);
static bool available_swap_in_bytes(physical_memory_size_type& value);
static bool memory_limit_in_bytes(physical_memory_size_type& value);
static bool memory_and_swap_limit_in_bytes(physical_memory_size_type& value);
static bool memory_and_swap_usage_in_bytes(physical_memory_size_type& value);
@ -84,7 +83,7 @@ class OSContainer: AllStatic {
static bool rss_usage_in_bytes(physical_memory_size_type& value);
static bool cache_usage_in_bytes(physical_memory_size_type& value);
static bool active_processor_count(int& value);
static bool active_processor_count(double& value);
static char * cpu_cpuset_cpus();
static char * cpu_cpuset_memory_nodes();

View File

@ -211,15 +211,58 @@ static bool suppress_primordial_thread_resolution = false;
// utility functions
bool os::is_containerized() {
return OSContainer::is_containerized();
}
bool os::Container::memory_limit(physical_memory_size_type& value) {
physical_memory_size_type result = 0;
if (OSContainer::memory_limit_in_bytes(result) && result != value_unlimited) {
value = result;
return true;
}
return false;
}
bool os::Container::memory_soft_limit(physical_memory_size_type& value) {
physical_memory_size_type result = 0;
if (OSContainer::memory_soft_limit_in_bytes(result) && result != 0 && result != value_unlimited) {
value = result;
return true;
}
return false;
}
bool os::Container::memory_throttle_limit(physical_memory_size_type& value) {
physical_memory_size_type result = 0;
if (OSContainer::memory_throttle_limit_in_bytes(result) && result != value_unlimited) {
value = result;
return true;
}
return false;
}
bool os::Container::used_memory(physical_memory_size_type& value) {
return OSContainer::memory_usage_in_bytes(value);
}
bool os::available_memory(physical_memory_size_type& value) {
if (OSContainer::is_containerized() && OSContainer::available_memory_in_bytes(value)) {
if (is_containerized() && Container::available_memory(value)) {
log_trace(os)("available container memory: " PHYS_MEM_TYPE_FORMAT, value);
return true;
}
return Machine::available_memory(value);
}
bool os::Machine::available_memory(physical_memory_size_type& value) {
return Linux::available_memory(value);
}
bool os::Container::available_memory(physical_memory_size_type& value) {
return OSContainer::available_memory_in_bytes(value);
}
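The refactor splits every query into three layers: the generic os:: entry point prefers the container-scoped value when containerized, and otherwise delegates to os::Machine::, which forwards to the platform class (Linux, Bsd, Aix, win32). A standalone sketch of that dispatch shape (the helper names are hypothetical, not the HotSpot declarations):

#include <cstdint>
#include <cstdio>

using phys_t = uint64_t;

static bool containerized = true;                                                  // pretend we run in a container
static bool container_available_memory(phys_t& v) { v = 512u << 20; return true; } // 512 MiB cgroup limit (assumed)
static bool machine_available_memory(phys_t& v)   { v = 8ull << 30; return true; } // 8 GiB host value (assumed)

// os::available_memory in the refactor: prefer the container value,
// otherwise fall through to the Machine (host) layer.
static bool available_memory(phys_t& value) {
  if (containerized && container_available_memory(value)) return true;
  return machine_available_memory(value);
}

int main() {
  phys_t v = 0;
  if (available_memory(v)) std::printf("available: %llu bytes\n", (unsigned long long)v);
  return 0;
}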
bool os::Linux::available_memory(physical_memory_size_type& value) {
physical_memory_size_type avail_mem = 0;
@ -251,11 +294,15 @@ bool os::Linux::available_memory(physical_memory_size_type& value) {
}
bool os::free_memory(physical_memory_size_type& value) {
if (OSContainer::is_containerized() && OSContainer::available_memory_in_bytes(value)) {
if (is_containerized() && Container::available_memory(value)) {
log_trace(os)("free container memory: " PHYS_MEM_TYPE_FORMAT, value);
return true;
}
return Machine::free_memory(value);
}
bool os::Machine::free_memory(physical_memory_size_type& value) {
return Linux::free_memory(value);
}
@ -274,21 +321,30 @@ bool os::Linux::free_memory(physical_memory_size_type& value) {
}
bool os::total_swap_space(physical_memory_size_type& value) {
if (OSContainer::is_containerized()) {
physical_memory_size_type mem_swap_limit = value_unlimited;
physical_memory_size_type memory_limit = value_unlimited;
if (OSContainer::memory_and_swap_limit_in_bytes(mem_swap_limit) &&
OSContainer::memory_limit_in_bytes(memory_limit)) {
if (memory_limit != value_unlimited && mem_swap_limit != value_unlimited &&
mem_swap_limit >= memory_limit /* ensure swap is >= 0 */) {
value = mem_swap_limit - memory_limit;
return true;
}
}
} // fallback to the host swap space if the container returned unlimited
if (is_containerized() && Container::total_swap_space(value)) {
return true;
} // fall back to the host swap space if the container query fails
return Machine::total_swap_space(value);
}
bool os::Machine::total_swap_space(physical_memory_size_type& value) {
return Linux::host_swap(value);
}
bool os::Container::total_swap_space(physical_memory_size_type& value) {
physical_memory_size_type mem_swap_limit = value_unlimited;
physical_memory_size_type memory_limit = value_unlimited;
if (OSContainer::memory_and_swap_limit_in_bytes(mem_swap_limit) &&
OSContainer::memory_limit_in_bytes(memory_limit)) {
if (memory_limit != value_unlimited && mem_swap_limit != value_unlimited &&
mem_swap_limit >= memory_limit /* ensure swap is >= 0 */) {
value = mem_swap_limit - memory_limit;
return true;
}
}
return false;
}
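Container swap is derived rather than read directly: it is the memory+swap limit minus the memory limit, reported only when both limits are set and consistent. A worked example with illustrative limits:

// memsw limit 3 GiB, memory limit 2 GiB -> 1 GiB of swap (values illustrative)
#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t unlimited = UINT64_MAX;
  uint64_t mem_swap_limit = 3ull << 30; // memory+swap limit
  uint64_t memory_limit   = 2ull << 30; // memory-only limit
  if (memory_limit != unlimited && mem_swap_limit != unlimited &&
      mem_swap_limit >= memory_limit /* ensure swap is >= 0 */) {
    std::printf("swap: %llu bytes\n",
                (unsigned long long)(mem_swap_limit - memory_limit)); // 1 GiB
  }
  return 0;
}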
static bool host_free_swap_f(physical_memory_size_type& value) {
struct sysinfo si;
int ret = sysinfo(&si);
@ -309,32 +365,45 @@ bool os::free_swap_space(physical_memory_size_type& value) {
return false;
}
physical_memory_size_type host_free_swap_val = MIN2(total_swap_space, host_free_swap);
if (OSContainer::is_containerized()) {
if (OSContainer::available_swap_in_bytes(host_free_swap_val, value)) {
if (is_containerized()) {
if (Container::free_swap_space(value)) {
return true;
}
// Fall through to use host value
log_trace(os,container)("os::free_swap_space: containerized value unavailable"
" returning host value: " PHYS_MEM_TYPE_FORMAT, host_free_swap_val);
}
value = host_free_swap_val;
return true;
}
bool os::Machine::free_swap_space(physical_memory_size_type& value) {
return host_free_swap_f(value);
}
bool os::Container::free_swap_space(physical_memory_size_type& value) {
return OSContainer::available_swap_in_bytes(value);
}
physical_memory_size_type os::physical_memory() {
if (OSContainer::is_containerized()) {
if (is_containerized()) {
physical_memory_size_type mem_limit = value_unlimited;
if (OSContainer::memory_limit_in_bytes(mem_limit) && mem_limit != value_unlimited) {
if (Container::memory_limit(mem_limit) && mem_limit != value_unlimited) {
log_trace(os)("total container memory: " PHYS_MEM_TYPE_FORMAT, mem_limit);
return mem_limit;
}
}
physical_memory_size_type phys_mem = Linux::physical_memory();
physical_memory_size_type phys_mem = Machine::physical_memory();
log_trace(os)("total system memory: " PHYS_MEM_TYPE_FORMAT, phys_mem);
return phys_mem;
}
physical_memory_size_type os::Machine::physical_memory() {
return Linux::physical_memory();
}
// Returns the resident set size (RSS) of the process.
// Falls back to using VmRSS from /proc/self/status if /proc/self/smaps_rollup is unavailable.
// Note: On kernels with memory cgroups or shared memory, VmRSS may underreport RSS.
@ -2439,20 +2508,21 @@ bool os::Linux::print_container_info(outputStream* st) {
OSContainer::print_container_metric(st, "cpu_memory_nodes", p != nullptr ? p : "not supported");
free(p);
int i = -1;
bool supported = OSContainer::active_processor_count(i);
double cpus = -1;
bool supported = OSContainer::active_processor_count(cpus);
if (supported) {
assert(i > 0, "must be");
assert(cpus > 0, "must be");
if (ActiveProcessorCount > 0) {
OSContainer::print_container_metric(st, "active_processor_count", ActiveProcessorCount, "(from -XX:ActiveProcessorCount)");
} else {
OSContainer::print_container_metric(st, "active_processor_count", i);
OSContainer::print_container_metric(st, "active_processor_count", cpus);
}
} else {
OSContainer::print_container_metric(st, "active_processor_count", "not supported");
}
int i = -1;
supported = OSContainer::cpu_quota(i);
if (supported && i > 0) {
OSContainer::print_container_metric(st, "cpu_quota", i);
@ -4737,15 +4807,26 @@ int os::active_processor_count() {
return ActiveProcessorCount;
}
int active_cpus = -1;
if (OSContainer::is_containerized() && OSContainer::active_processor_count(active_cpus)) {
log_trace(os)("active_processor_count: determined by OSContainer: %d",
active_cpus);
} else {
active_cpus = os::Linux::active_processor_count();
if (is_containerized()) {
double cpu_quota;
if (Container::processor_count(cpu_quota)) {
int active_cpus = ceilf(cpu_quota); // Round fractional CPU quota up.
assert(active_cpus <= Machine::active_processor_count(), "must be");
log_trace(os)("active_processor_count: determined by OSContainer: %d",
active_cpus);
return active_cpus;
}
}
return active_cpus;
return Machine::active_processor_count();
}
int os::Machine::active_processor_count() {
return os::Linux::active_processor_count();
}
bool os::Container::processor_count(double& value) {
return OSContainer::active_processor_count(value);
}
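The double-valued quota only becomes an int at the os:: boundary, rounded up so a container entitled to, say, 0.5 or 2.25 CPUs still gets 1 or 3 active processors respectively. A sketch of that rounding (values are illustrative):

#include <cmath>
#include <cstdio>

int main() {
  double quotas[] = {0.5, 1.0, 2.25};
  for (double q : quotas) {
    std::printf("quota %.2f -> %d active processors\n", q, (int)std::ceil(q));
  }
  return 0; // 0.50 -> 1, 1.00 -> 1, 2.25 -> 3
}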
static bool should_warn_invalid_processor_id() {
@ -4882,9 +4963,14 @@ int os::open(const char *path, int oflag, int mode) {
oflag |= O_CLOEXEC;
int fd = ::open(path, oflag, mode);
if (fd == -1) return -1;
// No further checking is needed if open() returned an error or
// the access mode is not read-only.
if (fd == -1 || (oflag & O_ACCMODE) != O_RDONLY) {
return fd;
}
//If the open succeeded, the file might still be a directory
// If the open succeeded and the access mode is read-only, the file might be a directory
// which the JVM doesn't allow to be read.
{
struct stat buf;
int ret = ::fstat(fd, &buf);

View File

@ -112,6 +112,10 @@ static void save_memory_to_file(char* addr, size_t size) {
result = ::close(fd);
if (result == OS_ERR) {
warning("Could not close %s: %s\n", destfile, os::strerror(errno));
} else {
if (!successful_write) {
remove(destfile);
}
}
}
FREE_C_HEAP_ARRAY(char, destfile);
@ -949,6 +953,7 @@ static int create_sharedmem_file(const char* dirname, const char* filename, size
warning("Insufficient space for shared memory file: %s/%s\n", dirname, filename);
}
result = OS_ERR;
remove(filename);
break;
}
}

View File

@ -839,10 +839,18 @@ bool os::available_memory(physical_memory_size_type& value) {
return win32::available_memory(value);
}
bool os::Machine::available_memory(physical_memory_size_type& value) {
return win32::available_memory(value);
}
bool os::free_memory(physical_memory_size_type& value) {
return win32::available_memory(value);
}
bool os::Machine::free_memory(physical_memory_size_type& value) {
return win32::available_memory(value);
}
bool os::win32::available_memory(physical_memory_size_type& value) {
// Use GlobalMemoryStatusEx() because GlobalMemoryStatus() may return incorrect
// value if total memory is larger than 4GB
@ -858,7 +866,11 @@ bool os::win32::available_memory(physical_memory_size_type& value) {
}
}
bool os::total_swap_space(physical_memory_size_type& value) {
return Machine::total_swap_space(value);
}
bool os::Machine::total_swap_space(physical_memory_size_type& value) {
MEMORYSTATUSEX ms;
ms.dwLength = sizeof(ms);
BOOL res = GlobalMemoryStatusEx(&ms);
@ -872,6 +884,10 @@ bool os::total_swap_space(physical_memory_size_type& value) {
}
bool os::free_swap_space(physical_memory_size_type& value) {
return Machine::free_swap_space(value);
}
bool os::Machine::free_swap_space(physical_memory_size_type& value) {
MEMORYSTATUSEX ms;
ms.dwLength = sizeof(ms);
BOOL res = GlobalMemoryStatusEx(&ms);
@ -888,6 +904,10 @@ physical_memory_size_type os::physical_memory() {
return win32::physical_memory();
}
physical_memory_size_type os::Machine::physical_memory() {
return win32::physical_memory();
}
size_t os::rss() {
size_t rss = 0;
PROCESS_MEMORY_COUNTERS_EX pmex;
@ -911,6 +931,10 @@ int os::active_processor_count() {
return ActiveProcessorCount;
}
return Machine::active_processor_count();
}
int os::Machine::active_processor_count() {
bool schedules_all_processor_groups = win32::is_windows_11_or_greater() || win32::is_windows_server_2022_or_greater();
if (UseAllWindowsProcessorGroups && !schedules_all_processor_groups && !win32::processor_group_warning_displayed()) {
win32::set_processor_group_warning_displayed(true);

View File

@ -571,7 +571,12 @@ ArchiveBuilder::FollowMode ArchiveBuilder::get_follow_mode(MetaspaceClosure::Ref
}
if (is_excluded(klass)) {
ResourceMark rm;
log_debug(cds, dynamic)("Skipping class (excluded): %s", klass->external_name());
aot_log_trace(aot)("pointer set to null: class (excluded): %s", klass->external_name());
return set_to_null;
}
if (klass->is_array_klass() && CDSConfig::is_dumping_dynamic_archive()) {
ResourceMark rm;
aot_log_trace(aot)("pointer set to null: array class not supported in dynamic region: %s", klass->external_name());
return set_to_null;
}
}

View File

@ -216,6 +216,10 @@ ciField::ciField(fieldDescriptor *fd) :
static bool trust_final_non_static_fields(ciInstanceKlass* holder) {
if (holder == nullptr)
return false;
if (holder->trust_final_fields()) {
// Explicit opt-in from system classes
return true;
}
// Even if general trusting is disabled, trust system-built closures in these packages.
if (holder->is_in_package("java/lang/invoke") || holder->is_in_package("sun/invoke") ||
holder->is_in_package("java/lang/reflect") || holder->is_in_package("jdk/internal/reflect") ||
@ -230,14 +234,6 @@ static bool trust_final_non_static_fields(ciInstanceKlass* holder) {
// Trust final fields in records
if (holder->is_record())
return true;
// Trust Atomic*FieldUpdaters: they are very important for performance, and make up one
// more reason not to use Unsafe, if their final fields are trusted. See more in JDK-8140483.
if (holder->name() == ciSymbols::java_util_concurrent_atomic_AtomicIntegerFieldUpdater_Impl() ||
holder->name() == ciSymbols::java_util_concurrent_atomic_AtomicLongFieldUpdater_CASUpdater() ||
holder->name() == ciSymbols::java_util_concurrent_atomic_AtomicLongFieldUpdater_LockedUpdater() ||
holder->name() == ciSymbols::java_util_concurrent_atomic_AtomicReferenceFieldUpdater_Impl()) {
return true;
}
return TrustFinalNonStaticFields;
}
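With the explicit opt-in checked first, the removed Atomic*FieldUpdater name comparisons become unnecessary; those classes are expected to carry the annotation instead. A simplified, hypothetical sketch of the resulting decision order (not the real ci* API):

#include <cstdio>
#include <string>

struct Holder {
  bool trust_final_fields;   // set from the class-level annotation during parsing
  std::string package;
  bool is_hidden, is_record;
};

static bool trust_final_non_static_fields(const Holder* h, bool global_flag) {
  if (h == nullptr) return false;
  if (h->trust_final_fields) return true;            // explicit opt-in wins first
  if (h->package == "java/lang/invoke") return true; // (one of several trusted packages)
  if (h->is_hidden || h->is_record) return true;
  return global_flag;                                // TrustFinalNonStaticFields
}

int main() {
  Holder updater{true, "java/util/concurrent/atomic", false, false};
  std::printf("%d\n", (int)trust_final_non_static_fields(&updater, false)); // 1
}

On the Java side the opt-in corresponds to the jdk.internal.vm.annotation.TrustFinalFields class annotation wired up in the ClassFileParser hunks below.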

View File

@ -65,6 +65,7 @@ ciInstanceKlass::ciInstanceKlass(Klass* k) :
_has_nonstatic_concrete_methods = ik->has_nonstatic_concrete_methods();
_is_hidden = ik->is_hidden();
_is_record = ik->is_record();
_trust_final_fields = ik->trust_final_fields();
_nonstatic_fields = nullptr; // initialized lazily by compute_nonstatic_fields:
_has_injected_fields = -1;
_implementor = nullptr; // we will fill these lazily

View File

@ -59,6 +59,7 @@ private:
bool _has_nonstatic_concrete_methods;
bool _is_hidden;
bool _is_record;
bool _trust_final_fields;
bool _has_trusted_loader;
ciFlags _flags;
@ -207,6 +208,10 @@ public:
return _is_record;
}
bool trust_final_fields() const {
return _trust_final_fields;
}
ciInstanceKlass* get_canonical_holder(int offset);
ciField* get_field_by_offset(int field_offset, bool is_static);
ciField* get_field_by_name(ciSymbol* name, ciSymbol* signature, bool is_static);

View File

@ -943,6 +943,7 @@ public:
_java_lang_Deprecated_for_removal,
_jdk_internal_vm_annotation_AOTSafeClassInitializer,
_method_AOTRuntimeSetup,
_jdk_internal_vm_annotation_TrustFinalFields,
_annotation_LIMIT
};
const Location _location;
@ -1878,6 +1879,11 @@ AnnotationCollector::annotation_index(const ClassLoaderData* loader_data,
if (!privileged) break; // only allow in privileged code
return _field_Stable;
}
case VM_SYMBOL_ENUM_NAME(jdk_internal_vm_annotation_TrustFinalFields_signature): {
if (_location != _in_class) break; // only allow for classes
if (!privileged) break; // only allow in privileged code
return _jdk_internal_vm_annotation_TrustFinalFields;
}
case VM_SYMBOL_ENUM_NAME(jdk_internal_vm_annotation_Contended_signature): {
if (_location != _in_field && _location != _in_class) {
break; // only allow for fields and classes
@ -1992,6 +1998,9 @@ void ClassFileParser::ClassAnnotationCollector::apply_to(InstanceKlass* ik) {
if (has_annotation(_jdk_internal_vm_annotation_AOTSafeClassInitializer)) {
ik->set_has_aot_safe_initializer();
}
if (has_annotation(_jdk_internal_vm_annotation_TrustFinalFields)) {
ik->set_trust_final_fields(true);
}
}
#define MAX_ARGS_SIZE 255

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 1997, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1997, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -614,6 +614,10 @@ struct StringTableDeleteCheck : StackObj {
};
void StringTable::clean_dead_entries(JavaThread* jt) {
// BulkDeleteTask::prepare() may take ConcurrentHashTableResize_lock (nosafepoint-2).
// When NativeHeapTrimmer is enabled, SuspendMark may take NativeHeapTrimmer::_lock (nosafepoint).
// Take SuspendMark first to keep lock order and avoid deadlock.
NativeHeapTrimmer::SuspendMark sm("stringtable");
StringTableHash::BulkDeleteTask bdt(_local_table);
if (!bdt.prepare(jt)) {
return;
@ -621,7 +625,6 @@ void StringTable::clean_dead_entries(JavaThread* jt) {
StringTableDeleteCheck stdc;
StringTableDoDelete stdd;
NativeHeapTrimmer::SuspendMark sm("stringtable");
{
TraceTime timer("Clean", TRACETIME_LOG(Debug, stringtable, perf));
while(bdt.do_task(jt, stdc, stdd)) {

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 1997, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1997, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -763,6 +763,10 @@ struct SymbolTableDeleteCheck : StackObj {
};
void SymbolTable::clean_dead_entries(JavaThread* jt) {
// BulkDeleteTask::prepare() may take ConcurrentHashTableResize_lock (nosafepoint-2).
// When NativeHeapTrimmer is enabled, SuspendMark may take NativeHeapTrimmer::_lock (nosafepoint).
// Take SuspendMark first to keep lock order and avoid deadlock.
NativeHeapTrimmer::SuspendMark sm("symboltable");
SymbolTableHash::BulkDeleteTask bdt(_local_table);
if (!bdt.prepare(jt)) {
return;
@ -770,7 +774,6 @@ void SymbolTable::clean_dead_entries(JavaThread* jt) {
SymbolTableDeleteCheck stdc;
SymbolTableDoDelete stdd;
NativeHeapTrimmer::SuspendMark sm("symboltable");
{
TraceTime timer("Clean", TRACETIME_LOG(Debug, symboltable, perf));
while (bdt.do_task(jt, stdc, stdd)) {
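Both tables now take the trimmer SuspendMark before BulkDeleteTask::prepare(), so the two nosafepoint locks are always acquired in one global order; two threads acquiring them in opposite orders is the classic deadlock. The discipline, illustrated with std::mutex stand-ins for NativeHeapTrimmer::_lock and ConcurrentHashTableResize_lock (illustrative only):

#include <mutex>

std::mutex trimmer_lock; // always taken first
std::mutex resize_lock;  // always taken second

void clean_dead_entries() {
  std::scoped_lock sm(trimmer_lock);  // SuspendMark equivalent
  std::scoped_lock bdt(resize_lock);  // BulkDeleteTask::prepare() equivalent
  // ... delete dead entries ...
}
// Any other path that needs both must also take trimmer_lock before
// resize_lock; the opposite order on another thread can deadlock.

int main() { clean_dead_entries(); }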

View File

@ -245,10 +245,6 @@ class SerializeClosure;
\
/* Concurrency support */ \
template(java_util_concurrent_locks_AbstractOwnableSynchronizer, "java/util/concurrent/locks/AbstractOwnableSynchronizer") \
template(java_util_concurrent_atomic_AtomicIntegerFieldUpdater_Impl, "java/util/concurrent/atomic/AtomicIntegerFieldUpdater$AtomicIntegerFieldUpdaterImpl") \
template(java_util_concurrent_atomic_AtomicLongFieldUpdater_CASUpdater, "java/util/concurrent/atomic/AtomicLongFieldUpdater$CASUpdater") \
template(java_util_concurrent_atomic_AtomicLongFieldUpdater_LockedUpdater, "java/util/concurrent/atomic/AtomicLongFieldUpdater$LockedUpdater") \
template(java_util_concurrent_atomic_AtomicReferenceFieldUpdater_Impl, "java/util/concurrent/atomic/AtomicReferenceFieldUpdater$AtomicReferenceFieldUpdaterImpl") \
template(jdk_internal_vm_annotation_Contended_signature, "Ljdk/internal/vm/annotation/Contended;") \
template(jdk_internal_vm_annotation_ReservedStackAccess_signature, "Ljdk/internal/vm/annotation/ReservedStackAccess;") \
template(jdk_internal_ValueBased_signature, "Ljdk/internal/ValueBased;") \
@ -302,6 +298,7 @@ class SerializeClosure;
template(jdk_internal_misc_Scoped_signature, "Ljdk/internal/misc/ScopedMemoryAccess$Scoped;") \
template(jdk_internal_vm_annotation_IntrinsicCandidate_signature, "Ljdk/internal/vm/annotation/IntrinsicCandidate;") \
template(jdk_internal_vm_annotation_Stable_signature, "Ljdk/internal/vm/annotation/Stable;") \
template(jdk_internal_vm_annotation_TrustFinalFields_signature, "Ljdk/internal/vm/annotation/TrustFinalFields;") \
\
template(jdk_internal_vm_annotation_ChangesCurrentThread_signature, "Ljdk/internal/vm/annotation/ChangesCurrentThread;") \
template(jdk_internal_vm_annotation_JvmtiHideEvents_signature, "Ljdk/internal/vm/annotation/JvmtiHideEvents;") \

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2018, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2018, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, Red Hat, Inc. and/or its affiliates.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@ -209,6 +209,17 @@ void G1Arguments::initialize() {
FLAG_SET_DEFAULT(GCTimeRatio, 24);
}
// Do not interfere with GC-Pressure driven heap resizing unless the user
// explicitly sets otherwise. G1 heap sizing should be free to grow or shrink
// the heap based on GC pressure, rather than being forced to satisfy
// MinHeapFreeRatio or MaxHeapFreeRatio defaults that the user did not set.
if (FLAG_IS_DEFAULT(MinHeapFreeRatio)) {
FLAG_SET_DEFAULT(MinHeapFreeRatio, 0);
}
if (FLAG_IS_DEFAULT(MaxHeapFreeRatio)) {
FLAG_SET_DEFAULT(MaxHeapFreeRatio, 100);
}
// Below, we might need to calculate the pause time interval based on
// the pause target. When we do so we are going to give G1 maximum
// flexibility and allow it to do pauses when it needs to. So, we'll

View File

@ -70,7 +70,11 @@ inline void G1BarrierSet::write_ref_field_pre(T* field) {
template <DecoratorSet decorators, typename T>
inline void G1BarrierSet::write_ref_field_post(T* field) {
volatile CardValue* byte = _card_table->byte_for(field);
// Make sure that the card table reference is read only once. Otherwise the compiler
// might reload that value between the two accesses below, which could cause writes to
// the wrong card table.
CardTable* card_table = AtomicAccess::load(&_card_table);
CardValue* byte = card_table->byte_for(field);
if (*byte == G1CardTable::clean_card_val()) {
*byte = G1CardTable::dirty_card_val();
}
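Loading the field once into a local guarantees that byte_for() and the card write below it consult the same table even if _card_table is swapped concurrently, and the atomic load keeps the compiler from re-reading it. A simplified illustration (CardValue and byte_for are heavily reduced here):

#include <atomic>
#include <cstdint>

using CardValue = unsigned char;
struct CardTable {
  CardValue cards[1024];
  CardValue* byte_for(const void* p) { return &cards[((uintptr_t)p >> 9) % 1024]; }
};

std::atomic<CardTable*> g_card_table; // may be swapped by a concurrent phase

void write_ref_field_post(void* field) {
  CardTable* ct = g_card_table.load(); // read the table pointer exactly once
  CardValue* byte = ct->byte_for(field);
  if (*byte == 0) {                    // clean_card_val() stand-in
    *byte = 1;                         // dirty_card_val() stand-in
  }
  // Re-reading g_card_table between the two accesses could mix two tables.
}

int main() {
  static CardTable t{};
  g_card_table.store(&t);
  int x;
  write_ref_field_post(&x);
}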

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2021, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -26,7 +26,6 @@
#include "gc/g1/g1BatchedTask.hpp"
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1GCParPhaseTimesTracker.hpp"
#include "runtime/atomicAccess.hpp"
#include "utilities/growableArray.hpp"
void G1AbstractSubTask::record_work_item(uint worker_id, uint index, size_t count) {
@ -40,7 +39,7 @@ const char* G1AbstractSubTask::name() const {
}
bool G1BatchedTask::try_claim_serial_task(int& task) {
task = AtomicAccess::fetch_then_add(&_num_serial_tasks_done, 1);
task = _num_serial_tasks_done.fetch_then_add(1);
return task < _serial_tasks.length();
}
@ -96,8 +95,8 @@ void G1BatchedTask::work(uint worker_id) {
}
G1BatchedTask::~G1BatchedTask() {
assert(AtomicAccess::load(&_num_serial_tasks_done) >= _serial_tasks.length(),
"Only %d tasks of %d claimed", AtomicAccess::load(&_num_serial_tasks_done), _serial_tasks.length());
assert(_num_serial_tasks_done.load_relaxed() >= _serial_tasks.length(),
"Only %d tasks of %d claimed", _num_serial_tasks_done.load_relaxed(), _serial_tasks.length());
for (G1AbstractSubTask* task : _parallel_tasks) {
delete task;
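try_claim_serial_task() above hands out serial-task indices exactly once via an atomic post-increment; indices at or past the array length mean nothing is left. A standalone sketch with std::atomic in place of Atomic<int>:

#include <atomic>
#include <cstdio>

std::atomic<int> num_serial_tasks_done{0};
const int num_serial_tasks = 3;

bool try_claim_serial_task(int& task) {
  task = num_serial_tasks_done.fetch_add(1); // fetch_then_add equivalent
  return task < num_serial_tasks;            // past-the-end means "none left"
}

int main() {
  int t;
  while (try_claim_serial_task(t)) std::printf("claimed %d\n", t); // 0, 1, 2
}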

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2021, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -28,6 +28,7 @@
#include "gc/g1/g1GCPhaseTimes.hpp"
#include "gc/shared/workerThread.hpp"
#include "memory/allocation.hpp"
#include "runtime/atomic.hpp"
template <typename E, MemTag MT>
class GrowableArrayCHeap;
@ -120,7 +121,7 @@ public:
// 5) ~T()
//
class G1BatchedTask : public WorkerTask {
volatile int _num_serial_tasks_done;
Atomic<int> _num_serial_tasks_done;
G1GCPhaseTimes* _phase_times;
bool try_claim_serial_task(int& task);

View File

@ -29,7 +29,6 @@
#include "gc/shared/gcLogPrecious.hpp"
#include "gc/shared/gcTraceTime.inline.hpp"
#include "memory/allocation.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/globals_extension.hpp"
#include "runtime/java.hpp"
#include "utilities/bitMap.inline.hpp"
@ -192,32 +191,32 @@ const char* G1CardSetConfiguration::mem_object_type_name_str(uint index) {
void G1CardSetCoarsenStats::reset() {
STATIC_ASSERT(ARRAY_SIZE(_coarsen_from) == ARRAY_SIZE(_coarsen_collision));
for (uint i = 0; i < ARRAY_SIZE(_coarsen_from); i++) {
_coarsen_from[i] = 0;
_coarsen_collision[i] = 0;
_coarsen_from[i].store_relaxed(0);
_coarsen_collision[i].store_relaxed(0);
}
}
void G1CardSetCoarsenStats::set(G1CardSetCoarsenStats& other) {
STATIC_ASSERT(ARRAY_SIZE(_coarsen_from) == ARRAY_SIZE(_coarsen_collision));
for (uint i = 0; i < ARRAY_SIZE(_coarsen_from); i++) {
_coarsen_from[i] = other._coarsen_from[i];
_coarsen_collision[i] = other._coarsen_collision[i];
_coarsen_from[i].store_relaxed(other._coarsen_from[i].load_relaxed());
_coarsen_collision[i].store_relaxed(other._coarsen_collision[i].load_relaxed());
}
}
void G1CardSetCoarsenStats::subtract_from(G1CardSetCoarsenStats& other) {
STATIC_ASSERT(ARRAY_SIZE(_coarsen_from) == ARRAY_SIZE(_coarsen_collision));
for (uint i = 0; i < ARRAY_SIZE(_coarsen_from); i++) {
_coarsen_from[i] = other._coarsen_from[i] - _coarsen_from[i];
_coarsen_collision[i] = other._coarsen_collision[i] - _coarsen_collision[i];
_coarsen_from[i].store_relaxed(other._coarsen_from[i].load_relaxed() - _coarsen_from[i].load_relaxed());
_coarsen_collision[i].store_relaxed(other._coarsen_collision[i].load_relaxed() - _coarsen_collision[i].load_relaxed());
}
}
void G1CardSetCoarsenStats::record_coarsening(uint tag, bool collision) {
assert(tag < ARRAY_SIZE(_coarsen_from), "tag %u out of bounds", tag);
AtomicAccess::inc(&_coarsen_from[tag], memory_order_relaxed);
_coarsen_from[tag].add_then_fetch(1u, memory_order_relaxed);
if (collision) {
AtomicAccess::inc(&_coarsen_collision[tag], memory_order_relaxed);
_coarsen_collision[tag].add_then_fetch(1u, memory_order_relaxed);
}
}
@ -228,13 +227,13 @@ void G1CardSetCoarsenStats::print_on(outputStream* out) {
"Inline->AoC %zu (%zu) "
"AoC->BitMap %zu (%zu) "
"BitMap->Full %zu (%zu) ",
_coarsen_from[0], _coarsen_collision[0],
_coarsen_from[1], _coarsen_collision[1],
_coarsen_from[0].load_relaxed(), _coarsen_collision[0].load_relaxed(),
_coarsen_from[1].load_relaxed(), _coarsen_collision[1].load_relaxed(),
// There is no BitMap at the first level so we can't coarsen from it (index 2 is unused).
_coarsen_from[3], _coarsen_collision[3],
_coarsen_from[4], _coarsen_collision[4],
_coarsen_from[5], _coarsen_collision[5],
_coarsen_from[6], _coarsen_collision[6]
_coarsen_from[3].load_relaxed(), _coarsen_collision[3].load_relaxed(),
_coarsen_from[4].load_relaxed(), _coarsen_collision[4].load_relaxed(),
_coarsen_from[5].load_relaxed(), _coarsen_collision[5].load_relaxed(),
_coarsen_from[6].load_relaxed(), _coarsen_collision[6].load_relaxed()
);
}
@ -248,7 +247,7 @@ class G1CardSetHashTable : public CHeapObj<mtGCCardSet> {
// the per region cardsets.
const static uint GroupBucketClaimSize = 4;
// Did we insert at least one card in the table?
bool volatile _inserted_card;
Atomic<bool> _inserted_card;
G1CardSetMemoryManager* _mm;
CardSetHash _table;
@ -311,10 +310,10 @@ public:
G1CardSetHashTableValue value(region_idx, G1CardSetInlinePtr());
bool inserted = _table.insert_get(Thread::current(), lookup, value, found, should_grow);
if (!_inserted_card && inserted) {
if (!_inserted_card.load_relaxed() && inserted) {
// It does not matter to us who is setting the flag so a regular atomic store
// is sufficient.
AtomicAccess::store(&_inserted_card, true);
_inserted_card.store_relaxed(true);
}
return found.value();
@ -343,9 +342,9 @@ public:
}
void reset() {
if (AtomicAccess::load(&_inserted_card)) {
if (_inserted_card.load_relaxed()) {
_table.unsafe_reset(InitialLogTableSize);
AtomicAccess::store(&_inserted_card, false);
_inserted_card.store_relaxed(false);
}
}
@ -455,14 +454,14 @@ void G1CardSet::free_mem_object(ContainerPtr container) {
_mm->free(container_type_to_mem_object_type(type), value);
}
G1CardSet::ContainerPtr G1CardSet::acquire_container(ContainerPtr volatile* container_addr) {
G1CardSet::ContainerPtr G1CardSet::acquire_container(Atomic<ContainerPtr>* container_addr) {
// Update reference counts under RCU critical section to avoid a
// use-after-cleanup bug where we increment a reference count for
// an object whose memory has already been cleaned up and reused.
GlobalCounter::CriticalSection cs(Thread::current());
while (true) {
// Get ContainerPtr and increment refcount atomically wrt memory reuse.
ContainerPtr container = AtomicAccess::load_acquire(container_addr);
ContainerPtr container = container_addr->load_acquire();
uint cs_type = container_type(container);
if (container == FullCardSet || cs_type == ContainerInlinePtr) {
return container;
@ -503,15 +502,15 @@ class G1ReleaseCardsets : public StackObj {
G1CardSet* _card_set;
using ContainerPtr = G1CardSet::ContainerPtr;
void coarsen_to_full(ContainerPtr* container_addr) {
void coarsen_to_full(Atomic<ContainerPtr>* container_addr) {
while (true) {
ContainerPtr cur_container = AtomicAccess::load_acquire(container_addr);
ContainerPtr cur_container = container_addr->load_acquire();
uint cs_type = G1CardSet::container_type(cur_container);
if (cur_container == G1CardSet::FullCardSet) {
return;
}
ContainerPtr old_value = AtomicAccess::cmpxchg(container_addr, cur_container, G1CardSet::FullCardSet);
ContainerPtr old_value = container_addr->compare_exchange(cur_container, G1CardSet::FullCardSet);
if (old_value == cur_container) {
_card_set->release_and_maybe_free_container(cur_container);
@ -523,7 +522,7 @@ class G1ReleaseCardsets : public StackObj {
public:
explicit G1ReleaseCardsets(G1CardSet* card_set) : _card_set(card_set) { }
void operator ()(ContainerPtr* container_addr) {
void operator ()(Atomic<ContainerPtr>* container_addr) {
coarsen_to_full(container_addr);
}
};
@ -544,10 +543,10 @@ G1AddCardResult G1CardSet::add_to_howl(ContainerPtr parent_container,
ContainerPtr container;
uint bucket = _config->howl_bucket_index(card_in_region);
ContainerPtr volatile* bucket_entry = howl->container_addr(bucket);
Atomic<ContainerPtr>* bucket_entry = howl->container_addr(bucket);
while (true) {
if (AtomicAccess::load(&howl->_num_entries) >= _config->cards_in_howl_threshold()) {
if (howl->_num_entries.load_relaxed() >= _config->cards_in_howl_threshold()) {
return Overflow;
}
@ -571,7 +570,7 @@ G1AddCardResult G1CardSet::add_to_howl(ContainerPtr parent_container,
}
if (increment_total && add_result == Added) {
AtomicAccess::inc(&howl->_num_entries, memory_order_relaxed);
howl->_num_entries.add_then_fetch(1u, memory_order_relaxed);
}
if (to_transfer != nullptr) {
@ -588,7 +587,7 @@ G1AddCardResult G1CardSet::add_to_bitmap(ContainerPtr container, uint card_in_re
return bitmap->add(card_offset, _config->cards_in_howl_bitmap_threshold(), _config->max_cards_in_howl_bitmap());
}
G1AddCardResult G1CardSet::add_to_inline_ptr(ContainerPtr volatile* container_addr, ContainerPtr container, uint card_in_region) {
G1AddCardResult G1CardSet::add_to_inline_ptr(Atomic<ContainerPtr>* container_addr, ContainerPtr container, uint card_in_region) {
G1CardSetInlinePtr value(container_addr, container);
return value.add(card_in_region, _config->inline_ptr_bits_per_card(), _config->max_cards_in_inline_ptr());
}
@ -610,7 +609,7 @@ G1CardSet::ContainerPtr G1CardSet::create_coarsened_array_of_cards(uint card_in_
return new_container;
}
bool G1CardSet::coarsen_container(ContainerPtr volatile* container_addr,
bool G1CardSet::coarsen_container(Atomic<ContainerPtr>* container_addr,
ContainerPtr cur_container,
uint card_in_region,
bool within_howl) {
@ -640,7 +639,7 @@ bool G1CardSet::coarsen_container(ContainerPtr volatile* container_addr,
ShouldNotReachHere();
}
ContainerPtr old_value = AtomicAccess::cmpxchg(container_addr, cur_container, new_container); // Memory order?
ContainerPtr old_value = container_addr->compare_exchange(cur_container, new_container); // Memory order?
if (old_value == cur_container) {
// Success. Indicate that the cards from the current card set must be transferred
// by this caller.
@ -687,7 +686,7 @@ void G1CardSet::transfer_cards(G1CardSetHashTableValue* table_entry, ContainerPt
assert(container_type(source_container) == ContainerHowl, "must be");
// Need to correct for the fact that the Full remembered set occupies more cards than the
// AoCS did before.
AtomicAccess::add(&_num_occupied, _config->max_cards_in_region() - table_entry->_num_occupied, memory_order_relaxed);
_num_occupied.add_then_fetch(_config->max_cards_in_region() - table_entry->_num_occupied.load_relaxed(), memory_order_relaxed);
}
}
@ -713,18 +712,18 @@ void G1CardSet::transfer_cards_in_howl(ContainerPtr parent_container,
diff -= 1;
G1CardSetHowl* howling_array = container_ptr<G1CardSetHowl>(parent_container);
AtomicAccess::add(&howling_array->_num_entries, diff, memory_order_relaxed);
howling_array->_num_entries.add_then_fetch(diff, memory_order_relaxed);
G1CardSetHashTableValue* table_entry = get_container(card_region);
assert(table_entry != nullptr, "Table entry not found for transferred cards");
AtomicAccess::add(&table_entry->_num_occupied, diff, memory_order_relaxed);
table_entry->_num_occupied.add_then_fetch(diff, memory_order_relaxed);
AtomicAccess::add(&_num_occupied, diff, memory_order_relaxed);
_num_occupied.add_then_fetch(diff, memory_order_relaxed);
}
}
G1AddCardResult G1CardSet::add_to_container(ContainerPtr volatile* container_addr,
G1AddCardResult G1CardSet::add_to_container(Atomic<ContainerPtr>* container_addr,
ContainerPtr container,
uint card_region,
uint card_in_region,
@ -827,8 +826,8 @@ G1AddCardResult G1CardSet::add_card(uint card_region, uint card_in_region, bool
}
if (increment_total && add_result == Added) {
AtomicAccess::inc(&table_entry->_num_occupied, memory_order_relaxed);
AtomicAccess::inc(&_num_occupied, memory_order_relaxed);
table_entry->_num_occupied.add_then_fetch(1u, memory_order_relaxed);
_num_occupied.add_then_fetch(1u, memory_order_relaxed);
}
if (should_grow_table) {
_table->grow();
@ -853,7 +852,7 @@ bool G1CardSet::contains_card(uint card_region, uint card_in_region) {
return false;
}
ContainerPtr container = table_entry->_container;
ContainerPtr container = table_entry->_container.load_relaxed();
if (container == FullCardSet) {
// contains_card() is not a performance critical method so we do not hide that
// case in the switch below.
@ -889,7 +888,7 @@ void G1CardSet::print_info(outputStream* st, uintptr_t card) {
return;
}
ContainerPtr container = table_entry->_container;
ContainerPtr container = table_entry->_container.load_relaxed();
if (container == FullCardSet) {
st->print("FULL card set)");
return;
@ -940,7 +939,7 @@ void G1CardSet::iterate_cards_during_transfer(ContainerPtr const container, Card
void G1CardSet::iterate_containers(ContainerPtrClosure* cl, bool at_safepoint) {
auto do_value =
[&] (G1CardSetHashTableValue* value) {
cl->do_containerptr(value->_region_idx, value->_num_occupied, value->_container);
cl->do_containerptr(value->_region_idx, value->_num_occupied.load_relaxed(), value->_container.load_relaxed());
return true;
};
@ -1001,11 +1000,11 @@ bool G1CardSet::occupancy_less_or_equal_to(size_t limit) const {
}
bool G1CardSet::is_empty() const {
return _num_occupied == 0;
return _num_occupied.load_relaxed() == 0;
}
size_t G1CardSet::occupied() const {
return _num_occupied;
return _num_occupied.load_relaxed();
}
size_t G1CardSet::num_containers() {
@ -1051,7 +1050,7 @@ size_t G1CardSet::static_mem_size() {
void G1CardSet::clear() {
_table->reset();
_num_occupied = 0;
_num_occupied.store_relaxed(0);
_mm->flush();
}
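The conversion throughout this file is mechanical: free AtomicAccess calls on volatile fields become member calls on Atomic<T>, and previously implicit volatile reads and writes become explicit load_relaxed()/store_relaxed(). A before/after sketch, with std::atomic standing in for HotSpot's Atomic<T> (member names differ slightly):

#include <atomic>
#include <cstddef>

std::atomic<size_t> _num_occupied{0};

void increment_occupied() {
  // Old: AtomicAccess::inc(&_num_occupied, memory_order_relaxed);
  // New: _num_occupied.add_then_fetch(1u, memory_order_relaxed);
  _num_occupied.fetch_add(1, std::memory_order_relaxed);
}

size_t occupied() {
  // Old: return _num_occupied;                (plain read of a volatile field)
  // New: return _num_occupied.load_relaxed();
  return _num_occupied.load(std::memory_order_relaxed);
}

int main() {
  increment_occupied();
  return occupied() == 1 ? 0 : 1;
}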

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2021, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,6 +27,7 @@
#include "memory/allocation.hpp"
#include "memory/memRegion.hpp"
#include "runtime/atomic.hpp"
#include "utilities/concurrentHashTable.hpp"
class G1CardSetAllocOptions;
@ -154,8 +155,8 @@ public:
private:
// Indices are "from" indices.
size_t _coarsen_from[NumCoarsenCategories];
size_t _coarsen_collision[NumCoarsenCategories];
Atomic<size_t> _coarsen_from[NumCoarsenCategories];
Atomic<size_t> _coarsen_collision[NumCoarsenCategories];
public:
G1CardSetCoarsenStats() { reset(); }
@ -271,11 +272,11 @@ private:
// Total number of cards in this card set. This is a best-effort value, i.e. there may
// be (slightly) more cards in the card set than this value in reality.
size_t _num_occupied;
Atomic<size_t> _num_occupied;
ContainerPtr make_container_ptr(void* value, uintptr_t type);
ContainerPtr acquire_container(ContainerPtr volatile* container_addr);
ContainerPtr acquire_container(Atomic<ContainerPtr>* container_addr);
// Returns true if the card set container should be released
bool release_container(ContainerPtr container);
// Release card set and free if needed.
@ -288,7 +289,7 @@ private:
// coarsen_container does not transfer cards from cur_container
// to the new container. Transfer is achieved by transfer_cards.
// Returns true if this was the thread that coarsened the container (and added the card).
bool coarsen_container(ContainerPtr volatile* container_addr,
bool coarsen_container(Atomic<ContainerPtr>* container_addr,
ContainerPtr cur_container,
uint card_in_region, bool within_howl = false);
@ -300,9 +301,9 @@ private:
void transfer_cards(G1CardSetHashTableValue* table_entry, ContainerPtr source_container, uint card_region);
void transfer_cards_in_howl(ContainerPtr parent_container, ContainerPtr source_container, uint card_region);
G1AddCardResult add_to_container(ContainerPtr volatile* container_addr, ContainerPtr container, uint card_region, uint card, bool increment_total = true);
G1AddCardResult add_to_container(Atomic<ContainerPtr>* container_addr, ContainerPtr container, uint card_region, uint card, bool increment_total = true);
G1AddCardResult add_to_inline_ptr(ContainerPtr volatile* container_addr, ContainerPtr container, uint card_in_region);
G1AddCardResult add_to_inline_ptr(Atomic<ContainerPtr>* container_addr, ContainerPtr container, uint card_in_region);
G1AddCardResult add_to_array(ContainerPtr container, uint card_in_region);
G1AddCardResult add_to_bitmap(ContainerPtr container, uint card_in_region);
G1AddCardResult add_to_howl(ContainerPtr parent_container, uint card_region, uint card_in_region, bool increment_total = true);
@ -366,7 +367,6 @@ public:
size_t num_containers();
static G1CardSetCoarsenStats coarsen_stats();
static void print_coarsen_stats(outputStream* out);
// Returns size of the actual remembered set containers in bytes.
@ -412,8 +412,15 @@ public:
using ContainerPtr = G1CardSet::ContainerPtr;
const uint _region_idx;
uint volatile _num_occupied;
ContainerPtr volatile _container;
Atomic<uint> _num_occupied;
Atomic<ContainerPtr> _container;
// Copy constructor needed for use in ConcurrentHashTable.
G1CardSetHashTableValue(const G1CardSetHashTableValue& other) :
_region_idx(other._region_idx),
_num_occupied(other._num_occupied.load_relaxed()),
_container(other._container.load_relaxed())
{ }
G1CardSetHashTableValue(uint region_idx, ContainerPtr container) : _region_idx(region_idx), _num_occupied(0), _container(container) { }
};
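
Atomic wrapper types are not copyable, which is why the hunk above adds an explicit copy constructor that snapshots each field with a relaxed load before the value can be copied inside ConcurrentHashTable. A minimal sketch of the idiom, using std::atomic (also non-copyable) as a stand-in:

// Sketch: the compiler cannot generate a copy constructor for a struct with
// an atomic member, so one is written by hand that snapshots the fields.
#include <atomic>
#include <cstdint>

struct TableValue {
  const uint32_t _region_idx;
  std::atomic<uint32_t> _num_occupied;

  explicit TableValue(uint32_t idx) : _region_idx(idx), _num_occupied(0) {}
  TableValue(const TableValue& other)
    : _region_idx(other._region_idx),
      _num_occupied(other._num_occupied.load(std::memory_order_relaxed)) {}
};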

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2023, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2023, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,7 +27,7 @@
#include "gc/g1/g1CardSet.hpp"
#include "memory/allocation.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/atomic.hpp"
#include "utilities/bitMap.hpp"
#include "utilities/globalDefinitions.hpp"
@ -67,7 +67,7 @@ class G1CardSetInlinePtr : public StackObj {
using ContainerPtr = G1CardSet::ContainerPtr;
ContainerPtr volatile * _value_addr;
Atomic<ContainerPtr>* _value_addr;
ContainerPtr _value;
static const uint SizeFieldLen = 3;
@ -103,7 +103,7 @@ public:
explicit G1CardSetInlinePtr(ContainerPtr value) :
G1CardSetInlinePtr(nullptr, value) {}
G1CardSetInlinePtr(ContainerPtr volatile* value_addr, ContainerPtr value) : _value_addr(value_addr), _value(value) {
G1CardSetInlinePtr(Atomic<ContainerPtr>* value_addr, ContainerPtr value) : _value_addr(value_addr), _value(value) {
assert(G1CardSet::container_type(_value) == G1CardSet::ContainerInlinePtr, "Value " PTR_FORMAT " is not a valid G1CardSetInlinePtr.", p2i(_value));
}
@ -145,13 +145,13 @@ public:
// All but inline pointers are of this kind. For those, card entries are stored
// directly in the ContainerPtr of the ConcurrentHashTable node.
class G1CardSetContainer {
uintptr_t _ref_count;
Atomic<uintptr_t> _ref_count;
protected:
~G1CardSetContainer() = default;
public:
G1CardSetContainer() : _ref_count(3) { }
uintptr_t refcount() const { return AtomicAccess::load_acquire(&_ref_count); }
uintptr_t refcount() const { return _ref_count.load_acquire(); }
bool try_increment_refcount();
@ -172,7 +172,7 @@ public:
using ContainerPtr = G1CardSet::ContainerPtr;
private:
EntryCountType _size;
EntryCountType volatile _num_entries;
Atomic<EntryCountType> _num_entries;
// VLA implementation.
EntryDataType _data[1];
@ -180,10 +180,10 @@ private:
static const EntryCountType EntryMask = LockBitMask - 1;
class G1CardSetArrayLocker : public StackObj {
EntryCountType volatile* _num_entries_addr;
Atomic<EntryCountType>* _num_entries_addr;
EntryCountType _local_num_entries;
public:
G1CardSetArrayLocker(EntryCountType volatile* value);
G1CardSetArrayLocker(Atomic<EntryCountType>* value);
EntryCountType num_entries() const { return _local_num_entries; }
void inc_num_entries() {
@ -192,7 +192,7 @@ private:
}
~G1CardSetArrayLocker() {
AtomicAccess::release_store(_num_entries_addr, _local_num_entries);
_num_entries_addr->release_store(_local_num_entries);
}
};
@ -213,7 +213,7 @@ public:
template <class CardVisitor>
void iterate(CardVisitor& found);
size_t num_entries() const { return _num_entries & EntryMask; }
size_t num_entries() const { return _num_entries.load_relaxed() & EntryMask; }
static size_t header_size_in_bytes();
@ -223,7 +223,7 @@ public:
};
class G1CardSetBitMap : public G1CardSetContainer {
size_t _num_bits_set;
Atomic<size_t> _num_bits_set;
BitMap::bm_word_t _bits[1];
public:
@ -236,7 +236,7 @@ public:
return bm.at(card_idx);
}
uint num_bits_set() const { return (uint)_num_bits_set; }
uint num_bits_set() const { return (uint)_num_bits_set.load_relaxed(); }
template <class CardVisitor>
void iterate(CardVisitor& found, size_t const size_in_bits, uint offset);
@ -255,10 +255,10 @@ class G1CardSetHowl : public G1CardSetContainer {
public:
typedef uint EntryCountType;
using ContainerPtr = G1CardSet::ContainerPtr;
EntryCountType volatile _num_entries;
Atomic<EntryCountType> _num_entries;
private:
// VLA implementation.
ContainerPtr _buckets[1];
Atomic<ContainerPtr> _buckets[1];
// Do not add class member variables beyond this point.
// Iterates over the ContainerPtr at the given index in this Howl card set,
@ -268,14 +268,14 @@ private:
ContainerPtr at(EntryCountType index) const;
ContainerPtr const* buckets() const;
Atomic<ContainerPtr> const* buckets() const;
public:
G1CardSetHowl(EntryCountType card_in_region, G1CardSetConfiguration* config);
ContainerPtr const* container_addr(EntryCountType index) const;
Atomic<ContainerPtr> const* container_addr(EntryCountType index) const;
ContainerPtr* container_addr(EntryCountType index);
Atomic<ContainerPtr>* container_addr(EntryCountType index);
bool contains(uint card_idx, G1CardSetConfiguration* config);
// Iterates over all ContainerPtrs in this Howl card set, applying a CardOrRangeVisitor

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2021, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -67,7 +67,7 @@ inline G1AddCardResult G1CardSetInlinePtr::add(uint card_idx, uint bits_per_card
return Overflow;
}
ContainerPtr new_value = merge(_value, card_idx, num_cards, bits_per_card);
ContainerPtr old_value = AtomicAccess::cmpxchg(_value_addr, _value, new_value, memory_order_relaxed);
ContainerPtr old_value = _value_addr->compare_exchange(_value, new_value, memory_order_relaxed);
if (_value == old_value) {
return Added;
}
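
The inline-pointer add() above merges the new card into a local copy of the packed word and publishes it with a single relaxed compare_exchange; losing the race just means a concurrent writer updated the word first. A standalone sketch of CAS-publishing a packed word, under the same std::atomic assumption:

// Sketch: publish a locally merged, packed value with one CAS. A failed CAS
// means a racing writer won; the caller re-reads and retries or bails out.
#include <atomic>
#include <cstdint>

enum AddResult { Added, Lost };

AddResult cas_publish(std::atomic<uintptr_t>& slot,
                      uintptr_t expected, uintptr_t merged) {
  return slot.compare_exchange_strong(expected, merged,
                                      std::memory_order_relaxed)
         ? Added : Lost;
}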
@ -126,7 +126,7 @@ inline bool G1CardSetContainer::try_increment_refcount() {
}
uintptr_t new_value = old_value + 2;
uintptr_t ref_count = AtomicAccess::cmpxchg(&_ref_count, old_value, new_value);
uintptr_t ref_count = _ref_count.compare_exchange(old_value, new_value);
if (ref_count == old_value) {
return true;
}
@ -137,7 +137,7 @@ inline bool G1CardSetContainer::try_increment_refcount() {
inline uintptr_t G1CardSetContainer::decrement_refcount() {
uintptr_t old_value = refcount();
assert((old_value & 0x1) != 0 && old_value >= 3, "precondition");
return AtomicAccess::sub(&_ref_count, 2u);
return _ref_count.sub_then_fetch(2u);
}
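
The refcount above packs a liveness tag into bit 0: the value stays odd while the container is live (3 on construction = tag bit plus one reference), so references are counted in steps of 2 and the CAS loop retries on contention. A standalone sketch of the scheme, assuming std::atomic in place of Atomic<uintptr_t>:

// Standalone sketch of the tagged refcount: bit 0 set means "live", the
// remaining bits count references in steps of 2.
#include <atomic>
#include <cstdint>

struct Container {
  std::atomic<uintptr_t> _ref_count{3};   // live bit + one reference

  bool try_increment_refcount() {
    uintptr_t old_value = _ref_count.load(std::memory_order_acquire);
    while ((old_value & 0x1) != 0) {      // still live?
      if (_ref_count.compare_exchange_weak(old_value, old_value + 2)) {
        return true;                      // gained a reference
      }
      // compare_exchange_weak reloaded old_value on failure; retry.
    }
    return false;                         // container is being freed
  }

  uintptr_t decrement_refcount() {
    return _ref_count.fetch_sub(2) - 2;   // sub_then_fetch: new value
  }
};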
inline G1CardSetArray::G1CardSetArray(uint card_in_region, EntryCountType num_cards) :
@ -149,14 +149,13 @@ inline G1CardSetArray::G1CardSetArray(uint card_in_region, EntryCountType num_ca
*entry_addr(0) = checked_cast<EntryDataType>(card_in_region);
}
inline G1CardSetArray::G1CardSetArrayLocker::G1CardSetArrayLocker(EntryCountType volatile* num_entries_addr) :
inline G1CardSetArray::G1CardSetArrayLocker::G1CardSetArrayLocker(Atomic<EntryCountType>* num_entries_addr) :
_num_entries_addr(num_entries_addr) {
SpinYield s;
EntryCountType num_entries = AtomicAccess::load(_num_entries_addr) & EntryMask;
EntryCountType num_entries = _num_entries_addr->load_relaxed() & EntryMask;
while (true) {
EntryCountType old_value = AtomicAccess::cmpxchg(_num_entries_addr,
num_entries,
(EntryCountType)(num_entries | LockBitMask));
EntryCountType old_value = _num_entries_addr->compare_exchange(num_entries,
(EntryCountType)(num_entries | LockBitMask));
if (old_value == num_entries) {
// Succeeded locking the array.
_local_num_entries = num_entries;
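
G1CardSetArrayLocker folds a lock bit into the entry counter itself: locking CASes from the unlocked count to count | LockBitMask, and the destructor's release store publishes the (possibly incremented) count with the bit cleared in one step. A standalone sketch under the same assumptions; the 16-bit type and bit position are illustrative, and the SpinYield back-off is omitted:

// Sketch of a lock bit folded into a counter, as in G1CardSetArrayLocker.
#include <atomic>
#include <cstdint>

using EntryCountType = uint16_t;
constexpr EntryCountType LockBitMask = 1u << 15;
constexpr EntryCountType EntryMask   = LockBitMask - 1;

class CounterLocker {
  std::atomic<EntryCountType>* _num_entries_addr;
  EntryCountType _local_num_entries;
public:
  explicit CounterLocker(std::atomic<EntryCountType>* addr)
    : _num_entries_addr(addr) {
    EntryCountType num =
      EntryCountType(addr->load(std::memory_order_relaxed) & EntryMask);
    while (true) {
      EntryCountType expected = num;  // only succeeds if lock bit is clear
      if (addr->compare_exchange_strong(expected,
                                        EntryCountType(num | LockBitMask))) {
        _local_num_entries = num;     // locked; remember the current count
        return;
      }
      num = EntryCountType(expected & EntryMask);  // spin until unlocked
    }
  }
  void inc_num_entries() { ++_local_num_entries; }
  ~CounterLocker() {
    // Publishes the new count and clears the lock bit in a single store.
    _num_entries_addr->store(_local_num_entries, std::memory_order_release);
  }
};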
@ -174,7 +173,7 @@ inline G1CardSetArray::EntryDataType const* G1CardSetArray::base_addr() const {
}
inline G1CardSetArray::EntryDataType const* G1CardSetArray::entry_addr(EntryCountType index) const {
assert(index < _num_entries, "precondition");
assert(index < _num_entries.load_relaxed(), "precondition");
return base_addr() + index;
}
@ -189,7 +188,7 @@ inline G1CardSetArray::EntryDataType G1CardSetArray::at(EntryCountType index) co
inline G1AddCardResult G1CardSetArray::add(uint card_idx) {
assert(card_idx < (1u << (sizeof(EntryDataType) * BitsPerByte)),
"Card index %u does not fit allowed card value range.", card_idx);
EntryCountType num_entries = AtomicAccess::load_acquire(&_num_entries) & EntryMask;
EntryCountType num_entries = _num_entries.load_acquire() & EntryMask;
EntryCountType idx = 0;
for (; idx < num_entries; idx++) {
if (at(idx) == card_idx) {
@ -223,7 +222,7 @@ inline G1AddCardResult G1CardSetArray::add(uint card_idx) {
}
inline bool G1CardSetArray::contains(uint card_idx) {
EntryCountType num_entries = AtomicAccess::load_acquire(&_num_entries) & EntryMask;
EntryCountType num_entries = _num_entries.load_acquire() & EntryMask;
for (EntryCountType idx = 0; idx < num_entries; idx++) {
if (at(idx) == card_idx) {
@ -235,7 +234,7 @@ inline bool G1CardSetArray::contains(uint card_idx) {
template <class CardVisitor>
void G1CardSetArray::iterate(CardVisitor& found) {
EntryCountType num_entries = AtomicAccess::load_acquire(&_num_entries) & EntryMask;
EntryCountType num_entries = _num_entries.load_acquire() & EntryMask;
for (EntryCountType idx = 0; idx < num_entries; idx++) {
found(at(idx));
}
@ -256,11 +255,11 @@ inline G1CardSetBitMap::G1CardSetBitMap(uint card_in_region, uint size_in_bits)
inline G1AddCardResult G1CardSetBitMap::add(uint card_idx, size_t threshold, size_t size_in_bits) {
BitMapView bm(_bits, size_in_bits);
if (_num_bits_set >= threshold) {
if (_num_bits_set.load_relaxed() >= threshold) {
return bm.at(card_idx) ? Found : Overflow;
}
if (bm.par_set_bit(card_idx)) {
AtomicAccess::inc(&_num_bits_set, memory_order_relaxed);
_num_bits_set.add_then_fetch(1u, memory_order_relaxed);
return Added;
}
return Found;
@ -276,22 +275,22 @@ inline size_t G1CardSetBitMap::header_size_in_bytes() {
return offset_of(G1CardSetBitMap, _bits);
}
inline G1CardSetHowl::ContainerPtr const* G1CardSetHowl::container_addr(EntryCountType index) const {
assert(index < _num_entries, "precondition");
inline Atomic<G1CardSetHowl::ContainerPtr> const* G1CardSetHowl::container_addr(EntryCountType index) const {
assert(index < _num_entries.load_relaxed(), "precondition");
return buckets() + index;
}
inline G1CardSetHowl::ContainerPtr* G1CardSetHowl::container_addr(EntryCountType index) {
return const_cast<ContainerPtr*>(const_cast<const G1CardSetHowl*>(this)->container_addr(index));
inline Atomic<G1CardSetHowl::ContainerPtr>* G1CardSetHowl::container_addr(EntryCountType index) {
return const_cast<Atomic<ContainerPtr>*>(const_cast<const G1CardSetHowl*>(this)->container_addr(index));
}
inline G1CardSetHowl::ContainerPtr G1CardSetHowl::at(EntryCountType index) const {
return *container_addr(index);
return (*container_addr(index)).load_relaxed();
}
inline G1CardSetHowl::ContainerPtr const* G1CardSetHowl::buckets() const {
inline Atomic<G1CardSetHowl::ContainerPtr> const* G1CardSetHowl::buckets() const {
const void* ptr = reinterpret_cast<const char*>(this) + header_size_in_bytes();
return reinterpret_cast<ContainerPtr const*>(ptr);
return reinterpret_cast<Atomic<ContainerPtr> const*>(ptr);
}
inline G1CardSetHowl::G1CardSetHowl(EntryCountType card_in_region, G1CardSetConfiguration* config) :
@ -300,7 +299,7 @@ inline G1CardSetHowl::G1CardSetHowl(EntryCountType card_in_region, G1CardSetConf
EntryCountType num_buckets = config->num_buckets_in_howl();
EntryCountType bucket = config->howl_bucket_index(card_in_region);
for (uint i = 0; i < num_buckets; ++i) {
*container_addr(i) = G1CardSetInlinePtr();
container_addr(i)->store_relaxed(G1CardSetInlinePtr());
if (i == bucket) {
G1CardSetInlinePtr value(container_addr(i), at(i));
value.add(card_in_region, config->inline_ptr_bits_per_card(), config->max_cards_in_inline_ptr());
@ -310,8 +309,8 @@ inline G1CardSetHowl::G1CardSetHowl(EntryCountType card_in_region, G1CardSetConf
inline bool G1CardSetHowl::contains(uint card_idx, G1CardSetConfiguration* config) {
EntryCountType bucket = config->howl_bucket_index(card_idx);
ContainerPtr* array_entry = container_addr(bucket);
ContainerPtr container = AtomicAccess::load_acquire(array_entry);
Atomic<ContainerPtr>* array_entry = container_addr(bucket);
ContainerPtr container = array_entry->load_acquire();
switch (G1CardSet::container_type(container)) {
case G1CardSet::ContainerArrayOfCards: {

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2021, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -26,7 +26,6 @@
#include "gc/g1/g1CardSetContainers.inline.hpp"
#include "gc/g1/g1CardSetMemory.inline.hpp"
#include "gc/g1/g1MonotonicArena.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "utilities/ostream.hpp"
G1CardSetAllocator::G1CardSetAllocator(const char* name,

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -44,20 +44,20 @@ G1CardTableClaimTable::~G1CardTableClaimTable() {
void G1CardTableClaimTable::initialize(uint max_reserved_regions) {
assert(_card_claims == nullptr, "Must not be initialized twice");
_card_claims = NEW_C_HEAP_ARRAY(uint, max_reserved_regions, mtGC);
_card_claims = NEW_C_HEAP_ARRAY(Atomic<uint>, max_reserved_regions, mtGC);
_max_reserved_regions = max_reserved_regions;
reset_all_to_unclaimed();
}
void G1CardTableClaimTable::reset_all_to_unclaimed() {
for (uint i = 0; i < _max_reserved_regions; i++) {
_card_claims[i] = 0;
_card_claims[i].store_relaxed(0);
}
}
void G1CardTableClaimTable::reset_all_to_claimed() {
for (uint i = 0; i < _max_reserved_regions; i++) {
_card_claims[i] = (uint)G1HeapRegion::CardsPerRegion;
_card_claims[i].store_relaxed((uint)G1HeapRegion::CardsPerRegion);
}
}

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,6 +27,7 @@
#include "gc/g1/g1CardTable.hpp"
#include "memory/allocation.hpp"
#include "runtime/atomic.hpp"
class G1HeapRegionClosure;
@ -45,7 +46,7 @@ class G1CardTableClaimTable : public CHeapObj<mtGC> {
// Card table iteration claim values for every heap region, from 0 (completely unclaimed)
// to (>=) G1HeapRegion::CardsPerRegion (completely claimed).
uint volatile* _card_claims;
Atomic<uint>* _card_claims;
uint _cards_per_chunk; // For conversion between card index and chunk index.

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -29,26 +29,25 @@
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
#include "runtime/atomicAccess.hpp"
bool G1CardTableClaimTable::has_unclaimed_cards(uint region) {
assert(region < _max_reserved_regions, "Tried to access invalid region %u", region);
return AtomicAccess::load(&_card_claims[region]) < G1HeapRegion::CardsPerRegion;
return _card_claims[region].load_relaxed() < G1HeapRegion::CardsPerRegion;
}
void G1CardTableClaimTable::reset_to_unclaimed(uint region) {
assert(region < _max_reserved_regions, "Tried to access invalid region %u", region);
AtomicAccess::store(&_card_claims[region], 0u);
_card_claims[region].store_relaxed(0u);
}
uint G1CardTableClaimTable::claim_cards(uint region, uint increment) {
assert(region < _max_reserved_regions, "Tried to access invalid region %u", region);
return AtomicAccess::fetch_then_add(&_card_claims[region], increment, memory_order_relaxed);
return _card_claims[region].fetch_then_add(increment, memory_order_relaxed);
}
uint G1CardTableClaimTable::claim_chunk(uint region) {
assert(region < _max_reserved_regions, "Tried to access invalid region %u", region);
return AtomicAccess::fetch_then_add(&_card_claims[region], cards_per_chunk(), memory_order_relaxed);
return _card_claims[region].fetch_then_add(cards_per_chunk(), memory_order_relaxed);
}
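
claim_cards() and claim_chunk() hand out disjoint card ranges per region: fetch_then_add returns the pre-increment value, so each caller owns [old, old + increment), and a start at or beyond CardsPerRegion means the region is fully claimed. A standalone sketch with std::atomic and an illustrative region size:

// Sketch: disjoint range claiming with fetch_add. CardsPerRegion is an
// illustrative constant, not G1's actual value.
#include <atomic>
#include <cstdint>

constexpr uint32_t CardsPerRegion = 2048;

uint32_t claim_cards(std::atomic<uint32_t>& claim, uint32_t increment) {
  // Returns the start of the claimed range [start, start + increment).
  return claim.fetch_add(increment, std::memory_order_relaxed);
}

bool has_unclaimed_cards(const std::atomic<uint32_t>& claim) {
  return claim.load(std::memory_order_relaxed) < CardsPerRegion;
}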
uint G1CardTableClaimTable::claim_all_cards(uint region) {

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2014, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -28,7 +28,7 @@
#include "gc/g1/g1HeapRegion.hpp"
#include "memory/allocation.hpp"
#include "oops/oop.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/atomic.hpp"
#include "utilities/concurrentHashTable.inline.hpp"
#include "utilities/concurrentHashTableTasks.inline.hpp"
@ -60,7 +60,7 @@ class G1CodeRootSetHashTable : public CHeapObj<mtGC> {
HashTable _table;
HashTableScanTask _table_scanner;
size_t volatile _num_entries;
Atomic<size_t> _num_entries;
bool is_empty() const { return number_of_entries() == 0; }
@ -120,7 +120,7 @@ public:
bool grow_hint = false;
bool inserted = _table.insert(Thread::current(), lookup, method, &grow_hint);
if (inserted) {
AtomicAccess::inc(&_num_entries);
_num_entries.add_then_fetch(1u);
}
if (grow_hint) {
_table.grow(Thread::current());
@ -131,7 +131,7 @@ public:
HashTableLookUp lookup(method);
bool removed = _table.remove(Thread::current(), lookup);
if (removed) {
AtomicAccess::dec(&_num_entries);
_num_entries.sub_then_fetch(1u);
}
return removed;
}
@ -182,7 +182,7 @@ public:
guarantee(succeeded, "unable to clean table");
if (num_deleted != 0) {
size_t current_size = AtomicAccess::sub(&_num_entries, num_deleted);
size_t current_size = _num_entries.sub_then_fetch(num_deleted);
shrink_to_match(current_size);
}
}
@ -226,7 +226,7 @@ public:
size_t mem_size() { return sizeof(*this) + _table.get_mem_size(Thread::current()); }
size_t number_of_entries() const { return AtomicAccess::load(&_num_entries); }
size_t number_of_entries() const { return _num_entries.load_relaxed(); }
};
uintx G1CodeRootSetHashTable::HashTableLookUp::get_hash() const {

View File

@ -103,7 +103,6 @@
#include "oops/access.inline.hpp"
#include "oops/compressedOops.inline.hpp"
#include "oops/oop.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/cpuTimeCounters.hpp"
#include "runtime/handles.inline.hpp"
#include "runtime/init.hpp"

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2001, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2001, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -54,6 +54,7 @@
#include "memory/allocation.hpp"
#include "memory/iterator.hpp"
#include "memory/memRegion.hpp"
#include "runtime/atomic.hpp"
#include "runtime/mutexLocker.hpp"
#include "runtime/threadSMR.hpp"
#include "utilities/bitMap.hpp"
@ -124,7 +125,7 @@ class G1JavaThreadsListClaimer : public StackObj {
ThreadsListHandle _list;
uint _claim_step;
volatile uint _cur_claim;
Atomic<uint> _cur_claim;
// Attempts to claim _claim_step JavaThreads, returning an array of claimed
// JavaThread* with count elements. Returns null (and a zero count) if there
@ -1267,7 +1268,6 @@ public:
bool is_marked(oop obj) const;
inline static bool is_obj_filler(const oop obj);
// Determine if an object is dead, given the object and also
// the region to which the object belongs.
inline bool is_obj_dead(const oop obj, const G1HeapRegion* hr) const;

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2001, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2001, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -31,6 +31,7 @@
#include "gc/g1/g1CollectorState.hpp"
#include "gc/g1/g1ConcurrentMark.inline.hpp"
#include "gc/g1/g1EvacFailureRegions.hpp"
#include "gc/g1/g1EvacStats.inline.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
#include "gc/g1/g1HeapRegionManager.inline.hpp"
#include "gc/g1/g1HeapRegionRemSet.hpp"
@ -38,10 +39,10 @@
#include "gc/g1/g1Policy.hpp"
#include "gc/g1/g1RegionPinCache.inline.hpp"
#include "gc/g1/g1RemSet.hpp"
#include "gc/shared/collectedHeap.inline.hpp"
#include "gc/shared/markBitMap.inline.hpp"
#include "gc/shared/taskqueue.inline.hpp"
#include "oops/stackChunkOop.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/threadSMR.inline.hpp"
#include "utilities/bitMap.inline.hpp"
@ -53,10 +54,10 @@ inline bool G1STWIsAliveClosure::do_object_b(oop p) {
inline JavaThread* const* G1JavaThreadsListClaimer::claim(uint& count) {
count = 0;
if (AtomicAccess::load(&_cur_claim) >= _list.length()) {
if (_cur_claim.load_relaxed() >= _list.length()) {
return nullptr;
}
uint claim = AtomicAccess::fetch_then_add(&_cur_claim, _claim_step);
uint claim = _cur_claim.fetch_then_add(_claim_step);
if (claim >= _list.length()) {
return nullptr;
}
@ -230,16 +231,11 @@ inline bool G1CollectedHeap::requires_barriers(stackChunkOop obj) const {
return !heap_region_containing(obj)->is_young(); // is_in_young does an unnecessary null check
}
inline bool G1CollectedHeap::is_obj_filler(const oop obj) {
Klass* k = obj->klass_without_asserts();
return k == Universe::fillerArrayKlass() || k == vmClasses::FillerObject_klass();
}
inline bool G1CollectedHeap::is_obj_dead(const oop obj, const G1HeapRegion* hr) const {
assert(!hr->is_free(), "looking up obj " PTR_FORMAT " in Free region %u", p2i(obj), hr->hrm_index());
if (hr->is_in_parsable_area(obj)) {
// This object is in the parsable part of the heap, live unless scrubbed.
return is_obj_filler(obj);
return is_filler_object(obj);
} else {
// From Remark until a region has been concurrently scrubbed, parts of the
// region is not guaranteed to be parsable. Use the bitmap for liveness.

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2001, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2001, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,7 +27,7 @@
#include "gc/g1/g1CollectionSetChooser.hpp"
#include "gc/g1/g1HeapRegionRemSet.inline.hpp"
#include "gc/shared/space.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/atomic.hpp"
#include "utilities/quickSort.hpp"
// Determine collection set candidates (from marking): For all regions determine
@ -50,7 +50,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
G1HeapRegion** _data;
uint volatile _cur_claim_idx;
Atomic<uint> _cur_claim_idx;
static int compare_region_gc_efficiency(G1HeapRegion** rr1, G1HeapRegion** rr2) {
G1HeapRegion* r1 = *rr1;
@ -105,7 +105,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
// Claim a new chunk, returning its bounds [from, to[.
void claim_chunk(uint& from, uint& to) {
uint result = AtomicAccess::add(&_cur_claim_idx, _chunk_size);
uint result = _cur_claim_idx.add_then_fetch(_chunk_size);
assert(_max_size > result - 1,
"Array too small, is %u should be %u with chunk size %u.",
_max_size, result, _chunk_size);
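
Note the contrast with the claim table above: add_then_fetch returns the post-increment value, so here the claimed slots are [result - _chunk_size, result) and the assert bounds-checks the end of the range. A small sketch of that variant:

// Sketch of the add_then_fetch variant: the return value is the end of the
// claimed chunk rather than its start.
#include <atomic>
#include <cstdint>

void claim_chunk(std::atomic<uint32_t>& idx, uint32_t chunk_size,
                 uint32_t& from, uint32_t& to) {
  uint32_t result =
    idx.fetch_add(chunk_size, std::memory_order_relaxed) + chunk_size;
  from = result - chunk_size;
  to   = result;
}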
@ -121,14 +121,15 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
}
void sort_by_gc_efficiency() {
if (_cur_claim_idx == 0) {
uint length = _cur_claim_idx.load_relaxed();
if (length == 0) {
return;
}
for (uint i = _cur_claim_idx; i < _max_size; i++) {
for (uint i = length; i < _max_size; i++) {
assert(_data[i] == nullptr, "must be");
}
qsort(_data, _cur_claim_idx, sizeof(_data[0]), (_sort_Fn)compare_region_gc_efficiency);
for (uint i = _cur_claim_idx; i < _max_size; i++) {
qsort(_data, length, sizeof(_data[0]), (_sort_Fn)compare_region_gc_efficiency);
for (uint i = length; i < _max_size; i++) {
assert(_data[i] == nullptr, "must be");
}
}
@ -202,13 +203,13 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
G1CollectedHeap* _g1h;
G1HeapRegionClaimer _hrclaimer;
uint volatile _num_regions_added;
Atomic<uint> _num_regions_added;
G1BuildCandidateArray _result;
void update_totals(uint num_regions) {
if (num_regions > 0) {
AtomicAccess::add(&_num_regions_added, num_regions);
_num_regions_added.add_then_fetch(num_regions);
}
}
@ -220,7 +221,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
void prune(G1HeapRegion** data) {
G1Policy* p = G1CollectedHeap::heap()->policy();
uint num_candidates = AtomicAccess::load(&_num_regions_added);
uint num_candidates = _num_regions_added.load_relaxed();
uint min_old_cset_length = p->calc_min_old_cset_length(num_candidates);
uint num_pruned = 0;
@ -253,7 +254,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
wasted_bytes,
allowed_waste);
AtomicAccess::sub(&_num_regions_added, num_pruned, memory_order_relaxed);
_num_regions_added.sub_then_fetch(num_pruned, memory_order_relaxed);
}
public:
@ -274,7 +275,7 @@ public:
_result.sort_by_gc_efficiency();
prune(_result.array());
candidates->set_candidates_from_marking(_result.array(),
_num_regions_added);
_num_regions_added.load_relaxed());
}
};

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2001, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2001, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -51,6 +51,9 @@
#include "gc/shared/gcTimer.hpp"
#include "gc/shared/gcTraceTime.inline.hpp"
#include "gc/shared/gcVMOperations.hpp"
#include "gc/shared/partialArraySplitter.inline.hpp"
#include "gc/shared/partialArrayState.hpp"
#include "gc/shared/partialArrayTaskStats.hpp"
#include "gc/shared/referencePolicy.hpp"
#include "gc/shared/suspendibleThreadSet.hpp"
#include "gc/shared/taskqueue.inline.hpp"
@ -67,7 +70,6 @@
#include "nmt/memTracker.hpp"
#include "oops/access.inline.hpp"
#include "oops/oop.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/globals_extension.hpp"
#include "runtime/handles.inline.hpp"
#include "runtime/java.hpp"
@ -76,6 +78,7 @@
#include "runtime/prefetch.inline.hpp"
#include "runtime/threads.hpp"
#include "utilities/align.hpp"
#include "utilities/checkedCast.hpp"
#include "utilities/formatBuffer.hpp"
#include "utilities/growableArray.hpp"
#include "utilities/powerOfTwo.hpp"
@ -99,7 +102,7 @@ bool G1CMBitMapClosure::do_addr(HeapWord* const addr) {
// We move that task's local finger along.
_task->move_finger_to(addr);
_task->scan_task_entry(G1TaskQueueEntry::from_oop(cast_to_oop(addr)));
_task->process_entry(G1TaskQueueEntry(cast_to_oop(addr)), false /* stolen */);
// we only partially drain the local queue and global stack
_task->drain_local_queue(true);
_task->drain_global_stack(true);
@ -148,25 +151,25 @@ bool G1CMMarkStack::initialize() {
}
G1CMMarkStack::TaskQueueEntryChunk* G1CMMarkStack::ChunkAllocator::allocate_new_chunk() {
if (_size >= _max_capacity) {
if (_size.load_relaxed() >= _max_capacity) {
return nullptr;
}
size_t cur_idx = AtomicAccess::fetch_then_add(&_size, 1u);
size_t cur_idx = _size.fetch_then_add(1u);
if (cur_idx >= _max_capacity) {
return nullptr;
}
size_t bucket = get_bucket(cur_idx);
if (AtomicAccess::load_acquire(&_buckets[bucket]) == nullptr) {
if (_buckets[bucket].load_acquire() == nullptr) {
if (!_should_grow) {
// Prefer to restart the CM.
return nullptr;
}
MutexLocker x(G1MarkStackChunkList_lock, Mutex::_no_safepoint_check_flag);
if (AtomicAccess::load_acquire(&_buckets[bucket]) == nullptr) {
if (_buckets[bucket].load_acquire() == nullptr) {
size_t desired_capacity = bucket_size(bucket) * 2;
if (!try_expand_to(desired_capacity)) {
return nullptr;
@ -175,7 +178,7 @@ G1CMMarkStack::TaskQueueEntryChunk* G1CMMarkStack::ChunkAllocator::allocate_new_
}
size_t bucket_idx = get_bucket_index(cur_idx);
TaskQueueEntryChunk* result = ::new (&_buckets[bucket][bucket_idx]) TaskQueueEntryChunk;
TaskQueueEntryChunk* result = ::new (&_buckets[bucket].load_relaxed()[bucket_idx]) TaskQueueEntryChunk;
result->next = nullptr;
return result;
}
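
allocate_new_chunk() combines a relaxed index bump with double-checked publication of lazily allocated buckets: a load_acquire on the fast path, then a re-check under G1MarkStackChunkList_lock before expanding. A standalone sketch of the double-checked pattern, with a std::mutex and plain new standing in for the HotSpot lock and the mmap-backed allocation:

// Sketch of double-checked bucket publication (allocation simplified).
#include <atomic>
#include <cstddef>
#include <mutex>

struct Chunk { Chunk* next; };

std::atomic<Chunk*> bucket{nullptr};
std::mutex expand_lock;

Chunk* get_or_create_bucket(size_t bucket_size) {
  Chunk* b = bucket.load(std::memory_order_acquire);  // fast path
  if (b == nullptr) {
    std::lock_guard<std::mutex> g(expand_lock);
    b = bucket.load(std::memory_order_acquire);       // re-check under lock
    if (b == nullptr) {
      b = new Chunk[bucket_size];                     // stands in for mmap
      bucket.store(b, std::memory_order_release);     // publish
    }
  }
  return b;
}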
@ -197,10 +200,10 @@ bool G1CMMarkStack::ChunkAllocator::initialize(size_t initial_capacity, size_t m
_max_capacity = max_capacity;
_num_buckets = get_bucket(_max_capacity) + 1;
_buckets = NEW_C_HEAP_ARRAY(TaskQueueEntryChunk*, _num_buckets, mtGC);
_buckets = NEW_C_HEAP_ARRAY(Atomic<TaskQueueEntryChunk*>, _num_buckets, mtGC);
for (size_t i = 0; i < _num_buckets; i++) {
_buckets[i] = nullptr;
_buckets[i].store_relaxed(nullptr);
}
size_t new_capacity = bucket_size(0);
@ -240,9 +243,9 @@ G1CMMarkStack::ChunkAllocator::~ChunkAllocator() {
}
for (size_t i = 0; i < _num_buckets; i++) {
if (_buckets[i] != nullptr) {
MmapArrayAllocator<TaskQueueEntryChunk>::free(_buckets[i], bucket_size(i));
_buckets[i] = nullptr;
if (_buckets[i].load_relaxed() != nullptr) {
MmapArrayAllocator<TaskQueueEntryChunk>::free(_buckets[i].load_relaxed(), bucket_size(i));
_buckets[i].store_relaxed(nullptr);
}
}
@ -259,7 +262,7 @@ bool G1CMMarkStack::ChunkAllocator::reserve(size_t new_capacity) {
// and the new capacity (new_capacity). This step ensures that there are no gaps in the
// array and that the capacity accurately reflects the reserved memory.
for (; i <= highest_bucket; i++) {
if (AtomicAccess::load_acquire(&_buckets[i]) != nullptr) {
if (_buckets[i].load_acquire() != nullptr) {
continue; // Skip over already allocated buckets.
}
@ -279,7 +282,7 @@ bool G1CMMarkStack::ChunkAllocator::reserve(size_t new_capacity) {
return false;
}
_capacity += bucket_capacity;
AtomicAccess::release_store(&_buckets[i], bucket_base);
_buckets[i].release_store(bucket_base);
}
return true;
}
@ -288,9 +291,9 @@ void G1CMMarkStack::expand() {
_chunk_allocator.try_expand();
}
void G1CMMarkStack::add_chunk_to_list(TaskQueueEntryChunk* volatile* list, TaskQueueEntryChunk* elem) {
elem->next = *list;
*list = elem;
void G1CMMarkStack::add_chunk_to_list(Atomic<TaskQueueEntryChunk*>* list, TaskQueueEntryChunk* elem) {
elem->next = list->load_relaxed();
list->store_relaxed(elem);
}
void G1CMMarkStack::add_chunk_to_chunk_list(TaskQueueEntryChunk* elem) {
@ -304,10 +307,10 @@ void G1CMMarkStack::add_chunk_to_free_list(TaskQueueEntryChunk* elem) {
add_chunk_to_list(&_free_list, elem);
}
G1CMMarkStack::TaskQueueEntryChunk* G1CMMarkStack::remove_chunk_from_list(TaskQueueEntryChunk* volatile* list) {
TaskQueueEntryChunk* result = *list;
G1CMMarkStack::TaskQueueEntryChunk* G1CMMarkStack::remove_chunk_from_list(Atomic<TaskQueueEntryChunk*>* list) {
TaskQueueEntryChunk* result = list->load_relaxed();
if (result != nullptr) {
*list = (*list)->next;
list->store_relaxed(list->load_relaxed()->next);
}
return result;
}
@ -361,8 +364,8 @@ bool G1CMMarkStack::par_pop_chunk(G1TaskQueueEntry* ptr_arr) {
void G1CMMarkStack::set_empty() {
_chunks_in_chunk_list = 0;
_chunk_list = nullptr;
_free_list = nullptr;
_chunk_list.store_relaxed(nullptr);
_free_list.store_relaxed(nullptr);
_chunk_allocator.reset();
}
@ -490,6 +493,7 @@ G1ConcurrentMark::G1ConcurrentMark(G1CollectedHeap* g1h,
_task_queues(new G1CMTaskQueueSet(_max_num_tasks)),
_terminator(_max_num_tasks, _task_queues),
_partial_array_state_manager(new PartialArrayStateManager(_max_num_tasks)),
_first_overflow_barrier_sync(),
_second_overflow_barrier_sync(),
@ -556,6 +560,10 @@ G1ConcurrentMark::G1ConcurrentMark(G1CollectedHeap* g1h,
reset_at_marking_complete();
}
PartialArrayStateManager* G1ConcurrentMark::partial_array_state_manager() const {
return _partial_array_state_manager;
}
void G1ConcurrentMark::reset() {
_has_aborted = false;
@ -650,7 +658,26 @@ void G1ConcurrentMark::set_concurrency_and_phase(uint active_tasks, bool concurr
}
}
#if TASKQUEUE_STATS
void G1ConcurrentMark::print_and_reset_taskqueue_stats() {
_task_queues->print_and_reset_taskqueue_stats("G1ConcurrentMark Oop Queue");
auto get_pa_stats = [&](uint i) {
return _tasks[i]->partial_array_task_stats();
};
PartialArrayTaskStats::log_set(_max_num_tasks, get_pa_stats,
"G1ConcurrentMark Partial Array Task Stats");
for (uint i = 0; i < _max_num_tasks; ++i) {
get_pa_stats(i)->reset();
}
}
#endif
void G1ConcurrentMark::reset_at_marking_complete() {
TASKQUEUE_STATS_ONLY(print_and_reset_taskqueue_stats());
// We set the global marking state to some default values when we're
// not doing marking.
reset_marking_for_restart();
@ -804,11 +831,25 @@ void G1ConcurrentMark::cleanup_for_next_mark() {
clear_bitmap(_concurrent_workers, true);
reset_partial_array_state_manager();
// Repeat the asserts from above.
guarantee(cm_thread()->in_progress(), "invariant");
guarantee(!_g1h->collector_state()->mark_or_rebuild_in_progress(), "invariant");
}
void G1ConcurrentMark::reset_partial_array_state_manager() {
for (uint i = 0; i < _max_num_tasks; ++i) {
_tasks[i]->unregister_partial_array_splitter();
}
partial_array_state_manager()->reset();
for (uint i = 0; i < _max_num_tasks; ++i) {
_tasks[i]->register_partial_array_splitter();
}
}
void G1ConcurrentMark::clear_bitmap(WorkerThreads* workers) {
assert_at_safepoint_on_vm_thread();
// To avoid fragmentation the full collection requesting to clear the bitmap
@ -1789,17 +1830,18 @@ public:
{ }
void operator()(G1TaskQueueEntry task_entry) const {
if (task_entry.is_array_slice()) {
guarantee(_g1h->is_in_reserved(task_entry.slice()), "Slice " PTR_FORMAT " must be in heap.", p2i(task_entry.slice()));
if (task_entry.is_partial_array_state()) {
oop obj = task_entry.to_partial_array_state()->source();
guarantee(_g1h->is_in_reserved(obj), "Partial Array " PTR_FORMAT " must be in heap.", p2i(obj));
return;
}
guarantee(oopDesc::is_oop(task_entry.obj()),
guarantee(oopDesc::is_oop(task_entry.to_oop()),
"Non-oop " PTR_FORMAT ", phase: %s, info: %d",
p2i(task_entry.obj()), _phase, _info);
G1HeapRegion* r = _g1h->heap_region_containing(task_entry.obj());
p2i(task_entry.to_oop()), _phase, _info);
G1HeapRegion* r = _g1h->heap_region_containing(task_entry.to_oop());
guarantee(!(r->in_collection_set() || r->has_index_in_opt_cset()),
"obj " PTR_FORMAT " from %s (%d) in region %u in (optional) collection set",
p2i(task_entry.obj()), _phase, _info, r->hrm_index());
p2i(task_entry.to_oop()), _phase, _info, r->hrm_index());
}
};
@ -2055,6 +2097,17 @@ void G1CMTask::reset(G1CMBitMap* mark_bitmap) {
_mark_stats_cache.reset();
}
void G1CMTask::register_partial_array_splitter() {
::new (&_partial_array_splitter) PartialArraySplitter(_cm->partial_array_state_manager(),
_cm->max_num_tasks(),
ObjArrayMarkingStride);
}
void G1CMTask::unregister_partial_array_splitter() {
_partial_array_splitter.~PartialArraySplitter();
}
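
register/unregister re-initialize the splitter member in place: an explicit destructor call releases its registration, and placement new reconstructs it against the reset manager. A minimal sketch of the idiom; the arena pointer is a placeholder and the member must be reconstructed before the owner is destroyed:

// Sketch: destroy a member explicitly, then rebuild it with placement new.
#include <new>

struct Splitter {
  int* _arena;                       // placeholder for the real resources
  explicit Splitter(int* arena) : _arena(arena) {}
  ~Splitter() {}                     // releases per-splitter state in reality
};

struct Task {
  Splitter _splitter;
  explicit Task(int* arena) : _splitter(arena) {}

  void unregister() { _splitter.~Splitter(); }
  void reregister(int* arena) { ::new (&_splitter) Splitter(arena); }
};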
bool G1CMTask::should_exit_termination() {
if (!regular_clock_call()) {
return true;
@ -2185,7 +2238,7 @@ bool G1CMTask::get_entries_from_global_stack() {
if (task_entry.is_null()) {
break;
}
assert(task_entry.is_array_slice() || oopDesc::is_oop(task_entry.obj()), "Element " PTR_FORMAT " must be an array slice or oop", p2i(task_entry.obj()));
assert(task_entry.is_partial_array_state() || oopDesc::is_oop(task_entry.to_oop()), "Element " PTR_FORMAT " must be an array slice or oop", p2i(task_entry.to_oop()));
bool success = _task_queue->push(task_entry);
// We only call this when the local queue is empty or under a
// given target limit. So, we do not expect this push to fail.
@ -2216,7 +2269,7 @@ void G1CMTask::drain_local_queue(bool partially) {
G1TaskQueueEntry entry;
bool ret = _task_queue->pop_local(entry);
while (ret) {
scan_task_entry(entry);
process_entry(entry, false /* stolen */);
if (_task_queue->size() <= target_size || has_aborted()) {
ret = false;
} else {
@ -2226,6 +2279,37 @@ void G1CMTask::drain_local_queue(bool partially) {
}
}
size_t G1CMTask::start_partial_array_processing(oop obj) {
assert(should_be_sliced(obj), "Must be an array object %d and large %zu", obj->is_objArray(), obj->size());
objArrayOop obj_array = objArrayOop(obj);
size_t array_length = obj_array->length();
size_t initial_chunk_size = _partial_array_splitter.start(_task_queue, obj_array, nullptr, array_length);
// Mark objArray klass metadata
if (_cm_oop_closure->do_metadata()) {
_cm_oop_closure->do_klass(obj_array->klass());
}
process_array_chunk(obj_array, 0, initial_chunk_size);
// Include object header size
return objArrayOopDesc::object_size(checked_cast<int>(initial_chunk_size));
}
size_t G1CMTask::process_partial_array(const G1TaskQueueEntry& task, bool stolen) {
PartialArrayState* state = task.to_partial_array_state();
// Access state before release by claim().
objArrayOop obj = objArrayOop(state->source());
PartialArraySplitter::Claim claim =
_partial_array_splitter.claim(state, _task_queue, stolen);
process_array_chunk(obj, claim._start, claim._end);
return heap_word_size((claim._end - claim._start) * heapOopSize);
}
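
Marking now funnels large objArrays through the shared PartialArraySplitter/PartialArrayState machinery: the discovering task scans an initial chunk and publishes a state, and any task (including stealers) claims disjoint element ranges from it. The following is a conceptual sketch only; the real claiming protocol in gc/shared tracks pending chunks and enqueues follow-up tasks, which this single atomic cursor deliberately simplifies:

// Conceptual sketch: workers carve disjoint element ranges out of one
// shared state. The cursor may run past _length; claims are clamped.
#include <algorithm>
#include <atomic>
#include <cstddef>

struct PartialArrayStateSketch {
  size_t _length;                 // total number of elements
  std::atomic<size_t> _cursor;    // next unclaimed element
};

// Yields a disjoint [start, end) range of at most stride elements, or an
// empty range once the array is exhausted.
void claim(PartialArrayStateSketch& s, size_t stride,
           size_t& start, size_t& end) {
  start = s._cursor.fetch_add(stride, std::memory_order_relaxed);
  start = std::min(start, s._length);
  end   = std::min(start + stride, s._length);
}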
void G1CMTask::drain_global_stack(bool partially) {
if (has_aborted()) {
return;
@ -2430,7 +2514,7 @@ void G1CMTask::attempt_stealing() {
while (!has_aborted()) {
G1TaskQueueEntry entry;
if (_cm->try_stealing(_worker_id, entry)) {
scan_task_entry(entry);
process_entry(entry, true /* stolen */);
// And since we're towards the end, let's totally drain the
// local queue and global stack.
@ -2759,12 +2843,12 @@ G1CMTask::G1CMTask(uint worker_id,
G1ConcurrentMark* cm,
G1CMTaskQueue* task_queue,
G1RegionMarkStats* mark_stats) :
_objArray_processor(this),
_worker_id(worker_id),
_g1h(G1CollectedHeap::heap()),
_cm(cm),
_mark_bitmap(nullptr),
_task_queue(task_queue),
_partial_array_splitter(_cm->partial_array_state_manager(), _cm->max_num_tasks(), ObjArrayMarkingStride),
_mark_stats_cache(mark_stats, G1RegionMarkStatsCache::RegionMarkStatsCacheSize),
_calls(0),
_time_target_ms(0.0),

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2001, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2001, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -26,17 +26,20 @@
#define SHARE_GC_G1_G1CONCURRENTMARK_HPP
#include "gc/g1/g1ConcurrentMarkBitMap.hpp"
#include "gc/g1/g1ConcurrentMarkObjArrayProcessor.hpp"
#include "gc/g1/g1HeapRegionSet.hpp"
#include "gc/g1/g1HeapVerifier.hpp"
#include "gc/g1/g1RegionMarkStatsCache.hpp"
#include "gc/shared/gcCause.hpp"
#include "gc/shared/partialArraySplitter.hpp"
#include "gc/shared/partialArrayState.hpp"
#include "gc/shared/partialArrayTaskStats.hpp"
#include "gc/shared/taskqueue.hpp"
#include "gc/shared/taskTerminator.hpp"
#include "gc/shared/verifyOption.hpp"
#include "gc/shared/workerThread.hpp"
#include "gc/shared/workerUtils.hpp"
#include "memory/allocation.hpp"
#include "runtime/atomic.hpp"
#include "utilities/compilerWarnings.hpp"
#include "utilities/numberSeq.hpp"
@ -53,41 +56,7 @@ class G1RegionToSpaceMapper;
class G1SurvivorRegions;
class ThreadClosure;
// This is a container class for either an oop or a continuation address for
// mark stack entries. Both are pushed onto the mark stack.
class G1TaskQueueEntry {
private:
void* _holder;
static const uintptr_t ArraySliceBit = 1;
G1TaskQueueEntry(oop obj) : _holder(obj) {
assert(_holder != nullptr, "Not allowed to set null task queue element");
}
G1TaskQueueEntry(HeapWord* addr) : _holder((void*)((uintptr_t)addr | ArraySliceBit)) { }
public:
G1TaskQueueEntry() : _holder(nullptr) { }
// Trivially copyable, for use in GenericTaskQueue.
static G1TaskQueueEntry from_slice(HeapWord* what) { return G1TaskQueueEntry(what); }
static G1TaskQueueEntry from_oop(oop obj) { return G1TaskQueueEntry(obj); }
oop obj() const {
assert(!is_array_slice(), "Trying to read array slice " PTR_FORMAT " as oop", p2i(_holder));
return cast_to_oop(_holder);
}
HeapWord* slice() const {
assert(is_array_slice(), "Trying to read oop " PTR_FORMAT " as array slice", p2i(_holder));
return (HeapWord*)((uintptr_t)_holder & ~ArraySliceBit);
}
bool is_oop() const { return !is_array_slice(); }
bool is_array_slice() const { return ((uintptr_t)_holder & ArraySliceBit) != 0; }
bool is_null() const { return _holder == nullptr; }
};
typedef ScannerTask G1TaskQueueEntry;
typedef GenericTaskQueue<G1TaskQueueEntry, mtGC> G1CMTaskQueue;
typedef GenericTaskQueueSet<G1CMTaskQueue, mtGC> G1CMTaskQueueSet;
@ -172,9 +141,9 @@ private:
size_t _capacity;
size_t _num_buckets;
bool _should_grow;
TaskQueueEntryChunk* volatile* _buckets;
Atomic<TaskQueueEntryChunk*>* _buckets;
char _pad0[DEFAULT_PADDING_SIZE];
volatile size_t _size;
Atomic<size_t> _size;
char _pad4[DEFAULT_PADDING_SIZE - sizeof(size_t)];
size_t bucket_size(size_t bucket) {
@ -212,7 +181,7 @@ private:
bool initialize(size_t initial_capacity, size_t max_capacity);
void reset() {
_size = 0;
_size.store_relaxed(0);
_should_grow = false;
}
@ -241,17 +210,17 @@ private:
ChunkAllocator _chunk_allocator;
char _pad0[DEFAULT_PADDING_SIZE];
TaskQueueEntryChunk* volatile _free_list; // Linked list of free chunks that can be allocated by users.
Atomic<TaskQueueEntryChunk*> _free_list; // Linked list of free chunks that can be allocated by users.
char _pad1[DEFAULT_PADDING_SIZE - sizeof(TaskQueueEntryChunk*)];
TaskQueueEntryChunk* volatile _chunk_list; // List of chunks currently containing data.
Atomic<TaskQueueEntryChunk*> _chunk_list; // List of chunks currently containing data.
volatile size_t _chunks_in_chunk_list;
char _pad2[DEFAULT_PADDING_SIZE - sizeof(TaskQueueEntryChunk*) - sizeof(size_t)];
// Atomically add the given chunk to the list.
void add_chunk_to_list(TaskQueueEntryChunk* volatile* list, TaskQueueEntryChunk* elem);
void add_chunk_to_list(Atomic<TaskQueueEntryChunk*>* list, TaskQueueEntryChunk* elem);
// Atomically remove and return a chunk from the given list. Returns null if the
// list is empty.
TaskQueueEntryChunk* remove_chunk_from_list(TaskQueueEntryChunk* volatile* list);
TaskQueueEntryChunk* remove_chunk_from_list(Atomic<TaskQueueEntryChunk*>* list);
void add_chunk_to_chunk_list(TaskQueueEntryChunk* elem);
void add_chunk_to_free_list(TaskQueueEntryChunk* elem);
@ -283,7 +252,7 @@ private:
// Return whether the chunk list is empty. Racy due to unsynchronized access to
// _chunk_list.
bool is_empty() const { return _chunk_list == nullptr; }
bool is_empty() const { return _chunk_list.load_relaxed() == nullptr; }
size_t capacity() const { return _chunk_allocator.capacity(); }
@ -411,6 +380,8 @@ class G1ConcurrentMark : public CHeapObj<mtGC> {
G1CMTaskQueueSet* _task_queues; // Task queue set
TaskTerminator _terminator; // For termination
PartialArrayStateManager* _partial_array_state_manager;
// Two sync barriers that are used to synchronize tasks when an
// overflow occurs. The algorithm is the following. All tasks enter
// the first one to ensure that they have all stopped manipulating
@ -488,6 +459,8 @@ class G1ConcurrentMark : public CHeapObj<mtGC> {
// Prints all gathered CM-related statistics
void print_stats();
void print_and_reset_taskqueue_stats();
HeapWord* finger() { return _finger; }
bool concurrent() { return _concurrent; }
uint active_tasks() { return _num_active_tasks; }
@ -556,14 +529,14 @@ public:
// mark_in_bitmap call. Updates various statistics data.
void add_to_liveness(uint worker_id, oop const obj, size_t size);
// Did the last marking find a live object between bottom and TAMS?
bool contains_live_object(uint region) const { return _region_mark_stats[region]._live_words != 0; }
bool contains_live_object(uint region) const { return _region_mark_stats[region].live_words() != 0; }
// Live bytes in the given region as determined by concurrent marking, i.e. the amount of
// live bytes between bottom and TAMS.
size_t live_bytes(uint region) const { return _region_mark_stats[region]._live_words * HeapWordSize; }
size_t live_bytes(uint region) const { return _region_mark_stats[region].live_words() * HeapWordSize; }
// Set live bytes for concurrent marking.
void set_live_bytes(uint region, size_t live_bytes) { _region_mark_stats[region]._live_words = live_bytes / HeapWordSize; }
void set_live_bytes(uint region, size_t live_bytes) { _region_mark_stats[region]._live_words.store_relaxed(live_bytes / HeapWordSize); }
// Approximate number of incoming references found during marking.
size_t incoming_refs(uint region) const { return _region_mark_stats[region]._incoming_refs; }
size_t incoming_refs(uint region) const { return _region_mark_stats[region].incoming_refs(); }
// Update the TAMS for the given region to the current top.
inline void update_top_at_mark_start(G1HeapRegion* r);
@ -582,6 +555,8 @@ public:
uint worker_id_offset() const { return _worker_id_offset; }
uint max_num_tasks() const {return _max_num_tasks; }
// Clear statistics gathered during the concurrent cycle for the given region after
// it has been reclaimed.
void clear_statistics(G1HeapRegion* r);
@ -631,6 +606,8 @@ public:
// Calculates the number of concurrent GC threads to be used in the marking phase.
uint calc_active_marking_workers();
PartialArrayStateManager* partial_array_state_manager() const;
// Resets the global marking data structures, as well as the
// task local ones; should be called during concurrent start.
void reset();
@ -642,6 +619,10 @@ public:
// to be called concurrently to the mutator. It will yield to safepoint requests.
void cleanup_for_next_mark();
// Recycle the memory that has been requested by allocators associated with
// this manager.
void reset_partial_array_state_manager();
// Clear the next marking bitmap during safepoint.
void clear_bitmap(WorkerThreads* workers);
@ -732,14 +713,13 @@ private:
refs_reached_period = 1024,
};
G1CMObjArrayProcessor _objArray_processor;
uint _worker_id;
G1CollectedHeap* _g1h;
G1ConcurrentMark* _cm;
G1CMBitMap* _mark_bitmap;
// the task queue of this task
G1CMTaskQueue* _task_queue;
PartialArraySplitter _partial_array_splitter;
G1RegionMarkStatsCache _mark_stats_cache;
// Number of calls to this task
@ -850,13 +830,24 @@ private:
// mark bitmap scan, and so needs to be pushed onto the mark stack.
bool is_below_finger(oop obj, HeapWord* global_finger) const;
template<bool scan> void process_grey_task_entry(G1TaskQueueEntry task_entry);
template<bool scan> void process_grey_task_entry(G1TaskQueueEntry task_entry, bool stolen);
static bool should_be_sliced(oop obj);
// Start processing the given objArrayOop by first pushing its continuations and
// then scanning the first chunk including the header.
size_t start_partial_array_processing(oop obj);
// Process the given continuation. Returns the number of words scanned.
size_t process_partial_array(const G1TaskQueueEntry& task, bool stolen);
// Apply the closure to the given range of elements in the objArray.
inline void process_array_chunk(objArrayOop obj, size_t start, size_t end);
public:
// Apply the closure on the given area of the objArray. Return the number of words
// scanned.
inline size_t scan_objArray(objArrayOop obj, MemRegion mr);
// Resets the task; should be called right at the beginning of a marking phase.
void reset(G1CMBitMap* mark_bitmap);
// Register/unregister Partial Array Splitter Allocator with the PartialArrayStateManager.
// This allows us to discard memory arenas used for partial object array states at the end
// of a concurrent mark cycle.
void register_partial_array_splitter();
void unregister_partial_array_splitter();
// Clears all the fields that correspond to a claimed region.
void clear_region_fields();
@ -912,7 +903,7 @@ public:
inline bool deal_with_reference(T* p);
// Scans an object and visits its children.
inline void scan_task_entry(G1TaskQueueEntry task_entry);
inline void process_entry(G1TaskQueueEntry task_entry, bool stolen);
// Pushes an object on the local queue.
inline void push(G1TaskQueueEntry task_entry);
@ -957,6 +948,11 @@ public:
Pair<size_t, size_t> flush_mark_stats_cache();
// Prints statistics associated with this task
void print_stats();
#if TASKQUEUE_STATS
PartialArrayTaskStats* partial_array_task_stats() {
return _partial_array_splitter.stats();
}
#endif
};
// Class that's used to print out per-region liveness

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2001, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2001, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -29,7 +29,6 @@
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1ConcurrentMarkBitMap.inline.hpp"
#include "gc/g1/g1ConcurrentMarkObjArrayProcessor.inline.hpp"
#include "gc/g1/g1HeapRegion.hpp"
#include "gc/g1/g1HeapRegionRemSet.inline.hpp"
#include "gc/g1/g1OopClosures.inline.hpp"
@ -39,6 +38,7 @@
#include "gc/shared/suspendibleThreadSet.hpp"
#include "gc/shared/taskqueue.inline.hpp"
#include "utilities/bitMap.inline.hpp"
#include "utilities/checkedCast.hpp"
inline bool G1CMIsAliveClosure::do_object_b(oop obj) {
// Check whether the passed in object is null. During discovery the referent
@ -90,7 +90,7 @@ inline void G1CMMarkStack::iterate(Fn fn) const {
size_t num_chunks = 0;
TaskQueueEntryChunk* cur = _chunk_list;
TaskQueueEntryChunk* cur = _chunk_list.load_relaxed();
while (cur != nullptr) {
guarantee(num_chunks <= _chunks_in_chunk_list, "Found %zu oop chunks which is more than there should be", num_chunks);
@ -107,13 +107,15 @@ inline void G1CMMarkStack::iterate(Fn fn) const {
#endif
// It scans an object and visits its children.
inline void G1CMTask::scan_task_entry(G1TaskQueueEntry task_entry) { process_grey_task_entry<true>(task_entry); }
inline void G1CMTask::process_entry(G1TaskQueueEntry task_entry, bool stolen) {
process_grey_task_entry<true>(task_entry, stolen);
}
inline void G1CMTask::push(G1TaskQueueEntry task_entry) {
assert(task_entry.is_array_slice() || _g1h->is_in_reserved(task_entry.obj()), "invariant");
assert(task_entry.is_array_slice() || !_g1h->is_on_master_free_list(
_g1h->heap_region_containing(task_entry.obj())), "invariant");
assert(task_entry.is_array_slice() || _mark_bitmap->is_marked(cast_from_oop<HeapWord*>(task_entry.obj())), "invariant");
assert(task_entry.is_partial_array_state() || _g1h->is_in_reserved(task_entry.to_oop()), "invariant");
assert(task_entry.is_partial_array_state() || !_g1h->is_on_master_free_list(
_g1h->heap_region_containing(task_entry.to_oop())), "invariant");
assert(task_entry.is_partial_array_state() || _mark_bitmap->is_marked(cast_from_oop<HeapWord*>(task_entry.to_oop())), "invariant");
if (!_task_queue->push(task_entry)) {
// The local task queue looks full. We need to push some entries
@ -159,29 +161,34 @@ inline bool G1CMTask::is_below_finger(oop obj, HeapWord* global_finger) const {
}
template<bool scan>
inline void G1CMTask::process_grey_task_entry(G1TaskQueueEntry task_entry) {
assert(scan || (task_entry.is_oop() && task_entry.obj()->is_typeArray()), "Skipping scan of grey non-typeArray");
assert(task_entry.is_array_slice() || _mark_bitmap->is_marked(cast_from_oop<HeapWord*>(task_entry.obj())),
inline void G1CMTask::process_grey_task_entry(G1TaskQueueEntry task_entry, bool stolen) {
assert(scan || (!task_entry.is_partial_array_state() && task_entry.to_oop()->is_typeArray()), "Skipping scan of grey non-typeArray");
assert(task_entry.is_partial_array_state() || _mark_bitmap->is_marked(cast_from_oop<HeapWord*>(task_entry.to_oop())),
"Any stolen object should be a slice or marked");
if (scan) {
if (task_entry.is_array_slice()) {
_words_scanned += _objArray_processor.process_slice(task_entry.slice());
if (task_entry.is_partial_array_state()) {
_words_scanned += process_partial_array(task_entry, stolen);
} else {
oop obj = task_entry.obj();
if (G1CMObjArrayProcessor::should_be_sliced(obj)) {
_words_scanned += _objArray_processor.process_obj(obj);
oop obj = task_entry.to_oop();
if (should_be_sliced(obj)) {
_words_scanned += start_partial_array_processing(obj);
} else {
_words_scanned += obj->oop_iterate_size(_cm_oop_closure);;
_words_scanned += obj->oop_iterate_size(_cm_oop_closure);
}
}
}
check_limits();
}
inline size_t G1CMTask::scan_objArray(objArrayOop obj, MemRegion mr) {
obj->oop_iterate(_cm_oop_closure, mr);
return mr.word_size();
inline bool G1CMTask::should_be_sliced(oop obj) {
return obj->is_objArray() && ((objArrayOop)obj)->length() >= (int)ObjArrayMarkingStride;
}
inline void G1CMTask::process_array_chunk(objArrayOop obj, size_t start, size_t end) {
obj->oop_iterate_elements_range(_cm_oop_closure,
checked_cast<int>(start),
checked_cast<int>(end));
}
inline void G1ConcurrentMark::update_top_at_mark_start(G1HeapRegion* r) {
@ -265,7 +272,7 @@ inline bool G1CMTask::make_reference_grey(oop obj) {
// be pushed on the stack. So, some duplicate work, but no
// correctness problems.
if (is_below_finger(obj, global_finger)) {
G1TaskQueueEntry entry = G1TaskQueueEntry::from_oop(obj);
G1TaskQueueEntry entry(obj);
if (obj->is_typeArray()) {
// Immediately process arrays of primitive types, rather
// than pushing on the mark stack. This keeps us from
@ -277,7 +284,7 @@ inline bool G1CMTask::make_reference_grey(oop obj) {
// by only doing a bookkeeping update and avoiding the
// actual scan of the object - a typeArray contains no
// references, and the metadata is built-in.
process_grey_task_entry<false>(entry);
process_grey_task_entry<false>(entry, false /* stolen */);
} else {
push(entry);
}

View File

@ -1,80 +0,0 @@
/*
* Copyright (c) 2016, 2025, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1ConcurrentMark.inline.hpp"
#include "gc/g1/g1ConcurrentMarkObjArrayProcessor.inline.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
#include "gc/shared/gc_globals.hpp"
#include "memory/memRegion.hpp"
#include "utilities/globalDefinitions.hpp"
void G1CMObjArrayProcessor::push_array_slice(HeapWord* what) {
_task->push(G1TaskQueueEntry::from_slice(what));
}
size_t G1CMObjArrayProcessor::process_array_slice(objArrayOop obj, HeapWord* start_from, size_t remaining) {
size_t words_to_scan = MIN2(remaining, (size_t)ObjArrayMarkingStride);
if (remaining > ObjArrayMarkingStride) {
push_array_slice(start_from + ObjArrayMarkingStride);
}
// Then process current area.
MemRegion mr(start_from, words_to_scan);
return _task->scan_objArray(obj, mr);
}
size_t G1CMObjArrayProcessor::process_obj(oop obj) {
assert(should_be_sliced(obj), "Must be an array object %d and large %zu", obj->is_objArray(), obj->size());
return process_array_slice(objArrayOop(obj), cast_from_oop<HeapWord*>(obj), objArrayOop(obj)->size());
}
size_t G1CMObjArrayProcessor::process_slice(HeapWord* slice) {
// Find the start address of the objArrayOop.
// Shortcut the BOT access if the given address is from a humongous object. The BOT
// slide is fast enough for "smaller" objects in non-humongous regions, but is slower
// than directly using heap region table.
G1CollectedHeap* g1h = G1CollectedHeap::heap();
G1HeapRegion* r = g1h->heap_region_containing(slice);
HeapWord* const start_address = r->is_humongous() ?
r->humongous_start_region()->bottom() :
r->block_start(slice);
assert(cast_to_oop(start_address)->is_objArray(), "Address " PTR_FORMAT " does not refer to an object array ", p2i(start_address));
assert(start_address < slice,
"Object start address " PTR_FORMAT " must be smaller than decoded address " PTR_FORMAT,
p2i(start_address),
p2i(slice));
objArrayOop objArray = objArrayOop(cast_to_oop(start_address));
size_t already_scanned = pointer_delta(slice, start_address);
size_t remaining = objArray->size() - already_scanned;
return process_array_slice(objArray, slice, remaining);
}
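The file removed above implemented slice-based chunking: scan a fixed stride of a large objArray, and push the start of the remainder first so other workers can steal it. A minimal standalone sketch of that scheme, with a std::stack and plain indices standing in for the real task queue and heap addresses (illustrative, not HotSpot code):

#include <algorithm>
#include <cstddef>
#include <functional>
#include <stack>
#include <vector>

// kStride plays the role of ObjArrayMarkingStride.
constexpr std::size_t kStride = 128;

std::size_t process_slice(const std::vector<int>& array, std::size_t from,
                          std::stack<std::size_t>& continuations,
                          const std::function<void(int)>& visit) {
  std::size_t remaining = array.size() - from;
  std::size_t to_scan = std::min(remaining, kStride);
  if (remaining > kStride) {
    continuations.push(from + kStride);  // expose the remainder for stealing
  }
  for (std::size_t i = from; i < from + to_scan; i++) {
    visit(array[i]);  // stands in for applying the marking closure
  }
  return to_scan;     // mirrors the "words scanned" accounting above
}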


@ -1,59 +0,0 @@
/*
* Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_GC_G1_G1CONCURRENTMARKOBJARRAYPROCESSOR_HPP
#define SHARE_GC_G1_G1CONCURRENTMARKOBJARRAYPROCESSOR_HPP
#include "oops/oopsHierarchy.hpp"
class G1CMTask;
// Helper class to mark through large objArrays during marking in an efficient way.
// Instead of pushing large object arrays, we push continuations onto the
// mark stack. These continuations are identified by having their LSB set.
// This allows incremental processing of large objects.
class G1CMObjArrayProcessor {
private:
// Reference to the task for doing the actual work.
G1CMTask* _task;
// Push the continuation at the given address onto the mark stack.
void push_array_slice(HeapWord* addr);
// Process (apply the closure) on the given continuation of the given objArray.
size_t process_array_slice(objArrayOop const obj, HeapWord* start_from, size_t remaining);
public:
static bool should_be_sliced(oop obj);
G1CMObjArrayProcessor(G1CMTask* task) : _task(task) {
}
// Process the given continuation. Returns the number of words scanned.
size_t process_slice(HeapWord* slice);
// Start processing the given objArrayOop by scanning the header and pushing its
// continuation.
size_t process_obj(oop obj);
};
#endif // SHARE_GC_G1_G1CONCURRENTMARKOBJARRAYPROCESSOR_HPP
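The removed header's comment notes that slice continuations were distinguished from ordinary oops by a set least-significant bit, which works because object addresses are at least word aligned. A minimal sketch of that tagging idea (illustrative only, not HotSpot's actual encoding):

#include <cassert>
#include <cstdint>

// Tag a word-aligned address by setting its LSB; real object addresses
// always have a clear LSB, so tagged and plain values never collide.
inline void* tag_slice(void* addr) {
  assert((reinterpret_cast<uintptr_t>(addr) & 1) == 0);
  return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(addr) | 1);
}

inline bool is_slice(void* entry) {
  return (reinterpret_cast<uintptr_t>(entry) & 1) != 0;
}

inline void* untag_slice(void* entry) {
  return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(entry) & ~uintptr_t(1));
}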


@ -1,5 +1,5 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -30,7 +30,6 @@
#include "gc/g1/g1HeapRegionPrinter.hpp"
#include "gc/g1/g1RemSetTrackingPolicy.hpp"
#include "logging/log.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/mutexLocker.hpp"
struct G1UpdateRegionLivenessAndSelectForRebuildTask::G1OnRegionClosure : public G1HeapRegionClosure {
@ -154,7 +153,7 @@ void G1UpdateRegionLivenessAndSelectForRebuildTask::work(uint worker_id) {
G1OnRegionClosure on_region_cl(_g1h, _cm, &local_cleanup_list);
_g1h->heap_region_par_iterate_from_worker_offset(&on_region_cl, &_hrclaimer, worker_id);
AtomicAccess::add(&_total_selected_for_rebuild, on_region_cl._num_selected_for_rebuild);
_total_selected_for_rebuild.add_then_fetch(on_region_cl._num_selected_for_rebuild);
// Update the old/humongous region sets
_g1h->remove_from_old_gen_sets(on_region_cl._num_old_regions_removed,


@ -1,5 +1,5 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -29,6 +29,7 @@
#include "gc/g1/g1HeapRegionManager.hpp"
#include "gc/g1/g1HeapRegionSet.hpp"
#include "gc/shared/workerThread.hpp"
#include "runtime/atomic.hpp"
class G1CollectedHeap;
class G1ConcurrentMark;
@ -41,7 +42,7 @@ class G1UpdateRegionLivenessAndSelectForRebuildTask : public WorkerTask {
G1ConcurrentMark* _cm;
G1HeapRegionClaimer _hrclaimer;
uint volatile _total_selected_for_rebuild;
Atomic<uint> _total_selected_for_rebuild;
// Reclaimed empty regions
G1FreeRegionList _cleanup_list;
@ -57,7 +58,9 @@ public:
void work(uint worker_id) override;
uint total_selected_for_rebuild() const { return _total_selected_for_rebuild; }
uint total_selected_for_rebuild() const {
return _total_selected_for_rebuild.load_relaxed();
}
static uint desired_num_workers(uint num_regions);
};
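The change above is representative of the whole series: a plain volatile field plus AtomicAccess free functions becomes an Atomic<T> member whose operations are spelled out on the type. A rough behavioral sketch of such a wrapper, inferred only from the calls visible in these hunks (load_relaxed, store_relaxed, add_then_fetch, and friends); the real HotSpot Atomic<T> will differ in detail and is not std::atomic-based:

#include <atomic>

// Behavioral sketch of an Atomic<T> wrapper, inferred from this diff.
template <typename T>
class Atomic {
  std::atomic<T> _value{};
public:
  T load_relaxed() const { return _value.load(std::memory_order_relaxed); }
  T load_acquire() const { return _value.load(std::memory_order_acquire); }
  void store_relaxed(T v) { _value.store(v, std::memory_order_relaxed); }
  T add_then_fetch(T v, std::memory_order mo = std::memory_order_seq_cst) {
    return _value.fetch_add(v, mo) + v;   // returns the updated value
  }
  T sub_then_fetch(T v, std::memory_order mo = std::memory_order_seq_cst) {
    return _value.fetch_sub(v, mo) - v;
  }
  T fetch_then_add(T v) { return _value.fetch_add(v); }
  T compare_exchange(T expected, T desired) {
    _value.compare_exchange_strong(expected, desired);
    return expected;   // observed value, like the old AtomicAccess::cmpxchg
  }
};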


@ -28,6 +28,7 @@
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1CollectionSet.hpp"
#include "gc/g1/g1ConcurrentRefine.hpp"
#include "gc/g1/g1ConcurrentRefineStats.inline.hpp"
#include "gc/g1/g1ConcurrentRefineSweepTask.hpp"
#include "gc/g1/g1ConcurrentRefineThread.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"


@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -22,7 +22,7 @@
*
*/
#include "gc/g1/g1ConcurrentRefineStats.hpp"
#include "gc/g1/g1ConcurrentRefineStats.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/timer.hpp"
@ -39,19 +39,27 @@ G1ConcurrentRefineStats::G1ConcurrentRefineStats() :
{}
void G1ConcurrentRefineStats::add_atomic(G1ConcurrentRefineStats* other) {
AtomicAccess::add(&_sweep_duration, other->_sweep_duration, memory_order_relaxed);
AtomicAccess::add(&_yield_during_sweep_duration, other->_yield_during_sweep_duration, memory_order_relaxed);
_sweep_duration.add_then_fetch(other->_sweep_duration.load_relaxed(), memory_order_relaxed);
_yield_during_sweep_duration.add_then_fetch(other->yield_during_sweep_duration(), memory_order_relaxed);
AtomicAccess::add(&_cards_scanned, other->_cards_scanned, memory_order_relaxed);
AtomicAccess::add(&_cards_clean, other->_cards_clean, memory_order_relaxed);
AtomicAccess::add(&_cards_not_parsable, other->_cards_not_parsable, memory_order_relaxed);
AtomicAccess::add(&_cards_already_refer_to_cset, other->_cards_already_refer_to_cset, memory_order_relaxed);
AtomicAccess::add(&_cards_refer_to_cset, other->_cards_refer_to_cset, memory_order_relaxed);
AtomicAccess::add(&_cards_no_cross_region, other->_cards_no_cross_region, memory_order_relaxed);
_cards_scanned.add_then_fetch(other->cards_scanned(), memory_order_relaxed);
_cards_clean.add_then_fetch(other->cards_clean(), memory_order_relaxed);
_cards_not_parsable.add_then_fetch(other->cards_not_parsable(), memory_order_relaxed);
_cards_already_refer_to_cset.add_then_fetch(other->cards_already_refer_to_cset(), memory_order_relaxed);
_cards_refer_to_cset.add_then_fetch(other->cards_refer_to_cset(), memory_order_relaxed);
_cards_no_cross_region.add_then_fetch(other->cards_no_cross_region(), memory_order_relaxed);
AtomicAccess::add(&_refine_duration, other->_refine_duration, memory_order_relaxed);
_refine_duration.add_then_fetch(other->refine_duration(), memory_order_relaxed);
}
void G1ConcurrentRefineStats::reset() {
*this = G1ConcurrentRefineStats();
_sweep_duration.store_relaxed(0);
_yield_during_sweep_duration.store_relaxed(0);
_cards_scanned.store_relaxed(0);
_cards_clean.store_relaxed(0);
_cards_not_parsable.store_relaxed(0);
_cards_already_refer_to_cset.store_relaxed(0);
_cards_refer_to_cset.store_relaxed(0);
_cards_no_cross_region.store_relaxed(0);
_refine_duration.store_relaxed(0);
}


@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -26,61 +26,61 @@
#define SHARE_GC_G1_G1CONCURRENTREFINESTATS_HPP
#include "memory/allocation.hpp"
#include "runtime/atomic.hpp"
#include "utilities/globalDefinitions.hpp"
#include "utilities/ticks.hpp"
// Collection of statistics for concurrent refinement processing.
// Used for collecting per-thread statistics and for summaries over a
// collection of threads.
class G1ConcurrentRefineStats : public CHeapObj<mtGC> {
jlong _sweep_duration; // Time spent sweeping the table finding non-clean cards
// and refining them.
jlong _yield_during_sweep_duration; // Time spent yielding during the sweep (not doing the sweep).
Atomic<jlong> _sweep_duration; // Time spent sweeping the table finding non-clean cards
// and refining them.
Atomic<jlong> _yield_during_sweep_duration; // Time spent yielding during the sweep (not doing the sweep).
size_t _cards_scanned; // Total number of cards scanned.
size_t _cards_clean; // Number of cards found clean.
size_t _cards_not_parsable; // Number of cards we could not parse and left unrefined.
size_t _cards_already_refer_to_cset;// Number of cards marked found to be already young.
size_t _cards_refer_to_cset; // Number of dirty cards that were recently found to contain a to-cset reference.
size_t _cards_no_cross_region; // Number of dirty cards that were dirtied, but then cleaned again by the mutator.
Atomic<size_t> _cards_scanned; // Total number of cards scanned.
Atomic<size_t> _cards_clean; // Number of cards found clean.
Atomic<size_t> _cards_not_parsable; // Number of cards we could not parse and left unrefined.
Atomic<size_t> _cards_already_refer_to_cset; // Number of cards found to be already young.
Atomic<size_t> _cards_refer_to_cset; // Number of dirty cards that were recently found to contain a to-cset reference.
Atomic<size_t> _cards_no_cross_region; // Number of cards that were dirtied, but then cleaned again by the mutator.
jlong _refine_duration; // Time spent during actual refinement.
Atomic<jlong> _refine_duration; // Time spent during actual refinement.
public:
G1ConcurrentRefineStats();
// Time spent sweeping the refinement table (includes actual refinement,
// but not yield time).
jlong sweep_duration() const { return _sweep_duration - _yield_during_sweep_duration; }
jlong yield_during_sweep_duration() const { return _yield_during_sweep_duration; }
jlong refine_duration() const { return _refine_duration; }
inline jlong sweep_duration() const;
inline jlong yield_during_sweep_duration() const;
inline jlong refine_duration() const;
// Number of refined cards.
size_t refined_cards() const { return cards_not_clean(); }
inline size_t refined_cards() const;
size_t cards_scanned() const { return _cards_scanned; }
size_t cards_clean() const { return _cards_clean; }
size_t cards_not_clean() const { return _cards_scanned - _cards_clean; }
size_t cards_not_parsable() const { return _cards_not_parsable; }
size_t cards_already_refer_to_cset() const { return _cards_already_refer_to_cset; }
size_t cards_refer_to_cset() const { return _cards_refer_to_cset; }
size_t cards_no_cross_region() const { return _cards_no_cross_region; }
inline size_t cards_scanned() const;
inline size_t cards_clean() const;
inline size_t cards_not_clean() const;
inline size_t cards_not_parsable() const;
inline size_t cards_already_refer_to_cset() const;
inline size_t cards_refer_to_cset() const;
inline size_t cards_no_cross_region() const;
// Number of cards that were marked dirty and in need of refinement. This includes cards recently
// found to refer to the collection set as they originally were dirty.
size_t cards_pending() const { return cards_not_clean() - _cards_already_refer_to_cset; }
inline size_t cards_pending() const;
size_t cards_to_cset() const { return _cards_already_refer_to_cset + _cards_refer_to_cset; }
inline size_t cards_to_cset() const;
void inc_sweep_time(jlong t) { _sweep_duration += t; }
void inc_yield_during_sweep_duration(jlong t) { _yield_during_sweep_duration += t; }
void inc_refine_duration(jlong t) { _refine_duration += t; }
inline void inc_sweep_time(jlong t);
inline void inc_yield_during_sweep_duration(jlong t);
inline void inc_refine_duration(jlong t);
void inc_cards_scanned(size_t increment) { _cards_scanned += increment; }
void inc_cards_clean(size_t increment) { _cards_clean += increment; }
void inc_cards_not_parsable() { _cards_not_parsable++; }
void inc_cards_already_refer_to_cset() { _cards_already_refer_to_cset++; }
void inc_cards_refer_to_cset() { _cards_refer_to_cset++; }
void inc_cards_no_cross_region() { _cards_no_cross_region++; }
inline void inc_cards_scanned(size_t increment);
inline void inc_cards_clean(size_t increment);
inline void inc_cards_not_parsable();
inline void inc_cards_already_refer_to_cset();
inline void inc_cards_refer_to_cset();
inline void inc_cards_no_cross_region();
void add_atomic(G1ConcurrentRefineStats* other);


@ -0,0 +1,118 @@
/*
* Copyright (c) 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_GC_G1_G1CONCURRENTREFINESTATS_INLINE_HPP
#define SHARE_GC_G1_G1CONCURRENTREFINESTATS_INLINE_HPP
#include "gc/g1/g1ConcurrentRefineStats.hpp"
inline jlong G1ConcurrentRefineStats::sweep_duration() const {
return _sweep_duration.load_relaxed() - yield_during_sweep_duration();
}
inline jlong G1ConcurrentRefineStats::yield_during_sweep_duration() const {
return _yield_during_sweep_duration.load_relaxed();
}
inline jlong G1ConcurrentRefineStats::refine_duration() const {
return _refine_duration.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::refined_cards() const {
return cards_not_clean();
}
inline size_t G1ConcurrentRefineStats::cards_scanned() const {
return _cards_scanned.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_clean() const {
return _cards_clean.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_not_clean() const {
return cards_scanned() - cards_clean();
}
inline size_t G1ConcurrentRefineStats::cards_not_parsable() const {
return _cards_not_parsable.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_already_refer_to_cset() const {
return _cards_already_refer_to_cset.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_refer_to_cset() const {
return _cards_refer_to_cset.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_no_cross_region() const {
return _cards_no_cross_region.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_pending() const {
return cards_not_clean() - cards_already_refer_to_cset();
}
inline size_t G1ConcurrentRefineStats::cards_to_cset() const {
return cards_already_refer_to_cset() + cards_refer_to_cset();
}
inline void G1ConcurrentRefineStats::inc_sweep_time(jlong t) {
_sweep_duration.store_relaxed(_sweep_duration.load_relaxed() + t);
}
inline void G1ConcurrentRefineStats::inc_yield_during_sweep_duration(jlong t) {
_yield_during_sweep_duration.store_relaxed(yield_during_sweep_duration() + t);
}
inline void G1ConcurrentRefineStats::inc_refine_duration(jlong t) {
_refine_duration.store_relaxed(refine_duration() + t);
}
inline void G1ConcurrentRefineStats::inc_cards_scanned(size_t increment) {
_cards_scanned.store_relaxed(cards_scanned() + increment);
}
inline void G1ConcurrentRefineStats::inc_cards_clean(size_t increment) {
_cards_clean.store_relaxed(cards_clean() + increment);
}
inline void G1ConcurrentRefineStats::inc_cards_not_parsable() {
_cards_not_parsable.store_relaxed(cards_not_parsable() + 1);
}
inline void G1ConcurrentRefineStats::inc_cards_already_refer_to_cset() {
_cards_already_refer_to_cset.store_relaxed(cards_already_refer_to_cset() + 1);
}
inline void G1ConcurrentRefineStats::inc_cards_refer_to_cset() {
_cards_refer_to_cset.store_relaxed(cards_refer_to_cset() + 1);
}
inline void G1ConcurrentRefineStats::inc_cards_no_cross_region() {
_cards_no_cross_region.store_relaxed(cards_no_cross_region() + 1);
}
#endif // SHARE_GC_G1_G1CONCURRENTREFINESTATS_INLINE_HPP
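Note that the inc_* helpers above deliberately use a relaxed load followed by a relaxed store rather than an atomic read-modify-write. That is only correct under a single-writer assumption: each stats object is updated by one thread, while cross-thread accumulation goes through add_atomic(), which does use read-modify-write operations. The difference, distilled with std::atomic:

#include <atomic>
#include <cstddef>

std::atomic<std::size_t> counter{0};

// Single-writer increment: relaxed load/store pair. Cheap, but two
// threads running this concurrently could lose updates.
void inc_single_writer() {
  counter.store(counter.load(std::memory_order_relaxed) + 1,
                std::memory_order_relaxed);
}

// Multi-writer increment: atomic read-modify-write, the pattern used by
// add_atomic() when summing per-thread stats into a shared total.
void inc_any_writer() {
  counter.fetch_add(1, std::memory_order_relaxed);
}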


@ -24,6 +24,7 @@
#include "gc/g1/g1CardTableClaimTable.inline.hpp"
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1ConcurrentRefineStats.inline.hpp"
#include "gc/g1/g1ConcurrentRefineSweepTask.hpp"
class G1RefineRegionClosure : public G1HeapRegionClosure {


@ -25,10 +25,10 @@
#ifndef SHARE_GC_G1_G1CONCURRENTREFINESWEEPTASK_HPP
#define SHARE_GC_G1_G1CONCURRENTREFINESWEEPTASK_HPP
#include "gc/g1/g1ConcurrentRefineStats.hpp"
#include "gc/shared/workerThread.hpp"
class G1CardTableClaimTable;
class G1ConcurrentRefineStats;
class G1ConcurrentRefineSweepTask : public WorkerTask {
G1CardTableClaimTable* _scan_state;


@ -26,7 +26,7 @@
#include "gc/g1/g1CardTableClaimTable.inline.hpp"
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1ConcurrentRefine.hpp"
#include "gc/g1/g1ConcurrentRefineStats.hpp"
#include "gc/g1/g1ConcurrentRefineStats.inline.hpp"
#include "gc/g1/g1ConcurrentRefineSweepTask.hpp"
#include "gc/g1/g1ConcurrentRefineThread.hpp"
#include "gc/shared/gcTraceTime.inline.hpp"


@ -25,7 +25,6 @@
#ifndef SHARE_GC_G1_G1CONCURRENTREFINETHREAD_HPP
#define SHARE_GC_G1_G1CONCURRENTREFINETHREAD_HPP
#include "gc/g1/g1ConcurrentRefineStats.hpp"
#include "gc/shared/concurrentGCThread.hpp"
#include "runtime/mutex.hpp"
#include "utilities/globalDefinitions.hpp"


@ -1,5 +1,6 @@
/*
* Copyright (c) 2021, 2022, Huawei Technologies Co., Ltd. All rights reserved.
* Copyright (c) 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -28,7 +29,6 @@
#include "gc/g1/g1EvacFailureRegions.inline.hpp"
#include "gc/g1/g1HeapRegion.hpp"
#include "memory/allocation.hpp"
#include "runtime/atomicAccess.hpp"
#include "utilities/bitMap.inline.hpp"
G1EvacFailureRegions::G1EvacFailureRegions() :
@ -43,7 +43,7 @@ G1EvacFailureRegions::~G1EvacFailureRegions() {
}
void G1EvacFailureRegions::pre_collection(uint max_regions) {
AtomicAccess::store(&_num_regions_evac_failed, 0u);
_num_regions_evac_failed.store_relaxed(0u);
_regions_evac_failed.resize(max_regions);
_regions_pinned.resize(max_regions);
_regions_alloc_failed.resize(max_regions);
@ -69,6 +69,6 @@ void G1EvacFailureRegions::par_iterate(G1HeapRegionClosure* closure,
G1CollectedHeap::heap()->par_iterate_regions_array(closure,
hrclaimer,
_evac_failed_regions,
AtomicAccess::load(&_num_regions_evac_failed),
num_regions_evac_failed(),
worker_id);
}


@ -1,5 +1,6 @@
/*
* Copyright (c) 2021, 2022, Huawei Technologies Co., Ltd. All rights reserved.
* Copyright (c) 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -25,6 +26,7 @@
#ifndef SHARE_GC_G1_G1EVACFAILUREREGIONS_HPP
#define SHARE_GC_G1_G1EVACFAILUREREGIONS_HPP
#include "runtime/atomic.hpp"
#include "utilities/bitMap.hpp"
class G1AbstractSubTask;
@ -53,14 +55,14 @@ class G1EvacFailureRegions {
// Evacuation failed regions (indexes) in the current collection.
uint* _evac_failed_regions;
// Number of regions evacuation failed in the current collection.
volatile uint _num_regions_evac_failed;
Atomic<uint> _num_regions_evac_failed;
public:
G1EvacFailureRegions();
~G1EvacFailureRegions();
uint get_region_idx(uint idx) const {
assert(idx < _num_regions_evac_failed, "precondition");
assert(idx < _num_regions_evac_failed.load_relaxed(), "precondition");
return _evac_failed_regions[idx];
}


@ -1,5 +1,6 @@
/*
* Copyright (c) 2021, 2022, Huawei Technologies Co., Ltd. All rights reserved.
* Copyright (c) 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -29,10 +30,9 @@
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1GCPhaseTimes.hpp"
#include "runtime/atomicAccess.hpp"
uint G1EvacFailureRegions::num_regions_evac_failed() const {
return AtomicAccess::load(&_num_regions_evac_failed);
return _num_regions_evac_failed.load_relaxed();
}
bool G1EvacFailureRegions::has_regions_evac_failed() const {
@ -57,7 +57,7 @@ bool G1EvacFailureRegions::record(uint worker_id, uint region_idx, bool cause_pi
bool success = _regions_evac_failed.par_set_bit(region_idx,
memory_order_relaxed);
if (success) {
size_t offset = AtomicAccess::fetch_then_add(&_num_regions_evac_failed, 1u);
size_t offset = _num_regions_evac_failed.fetch_then_add(1u);
_evac_failed_regions[offset] = region_idx;
G1CollectedHeap* g1h = G1CollectedHeap::heap();
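The record() path above is a two-step lock-free claim: par_set_bit guarantees at most one worker records a given region, and the winner then reserves a unique slot in the packed index array via fetch_then_add. A condensed sketch of the pattern with std::atomic stand-ins (all names are illustrative):

#include <atomic>
#include <vector>

// Claim-then-append: each region index is recorded at most once, and
// recorded indices are packed densely without a lock.
struct FailedRegions {
  std::vector<std::atomic<bool>> claimed;   // one flag per region (value-initialized to false)
  std::vector<unsigned> recorded;           // densely packed indices
  std::atomic<unsigned> count{0};

  explicit FailedRegions(unsigned max_regions)
    : claimed(max_regions), recorded(max_regions) {}

  bool record(unsigned region_idx) {
    bool expected = false;
    // Plays the role of par_set_bit: only the first caller wins.
    if (!claimed[region_idx].compare_exchange_strong(expected, true)) {
      return false;                         // already recorded by another worker
    }
    // Plays the role of fetch_then_add: reserve a unique slot.
    unsigned slot = count.fetch_add(1, std::memory_order_relaxed);
    recorded[slot] = region_idx;
    return true;
  }
};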


@ -1,5 +1,5 @@
/*
* Copyright (c) 2015, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2015, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -22,13 +22,24 @@
*
*/
#include "gc/g1/g1EvacStats.hpp"
#include "gc/g1/g1EvacStats.inline.hpp"
#include "gc/shared/gc_globals.hpp"
#include "gc/shared/gcId.hpp"
#include "logging/log.hpp"
#include "memory/allocation.inline.hpp"
#include "runtime/globals.hpp"
void G1EvacStats::reset() {
PLABStats::reset();
_region_end_waste.store_relaxed(0);
_regions_filled.store_relaxed(0);
_num_plab_filled.store_relaxed(0);
_direct_allocated.store_relaxed(0);
_num_direct_allocated.store_relaxed(0);
_failure_used.store_relaxed(0);
_failure_waste.store_relaxed(0);
}
void G1EvacStats::log_plab_allocation() {
log_debug(gc, plab)("%s PLAB allocation: "
"allocated: %zuB, "
@ -51,13 +62,13 @@ void G1EvacStats::log_plab_allocation() {
"failure used: %zuB, "
"failure wasted: %zuB",
_description,
_region_end_waste * HeapWordSize,
_regions_filled,
_num_plab_filled,
_direct_allocated * HeapWordSize,
_num_direct_allocated,
_failure_used * HeapWordSize,
_failure_waste * HeapWordSize);
region_end_waste() * HeapWordSize,
regions_filled(),
num_plab_filled(),
direct_allocated() * HeapWordSize,
num_direct_allocated(),
failure_used() * HeapWordSize,
failure_waste() * HeapWordSize);
}
void G1EvacStats::log_sizing(size_t calculated_words, size_t net_desired_words) {
@ -109,7 +120,7 @@ size_t G1EvacStats::compute_desired_plab_size() const {
// threads do not allocate anything but a few rather large objects. In this
// degenerate case the PLAB size would simply quickly tend to minimum PLAB size,
// which is an okay reaction.
size_t const used_for_waste_calculation = used() > _region_end_waste ? used() - _region_end_waste : 0;
size_t const used_for_waste_calculation = used() > region_end_waste() ? used() - region_end_waste() : 0;
size_t const total_waste_allowed = used_for_waste_calculation * TargetPLABWastePct;
return (size_t)((double)total_waste_allowed / (100 - G1LastPLABAverageOccupancy));
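To make the sizing formula concrete with illustrative numbers (not necessarily the current defaults): if used_for_waste_calculation is 100000 words, TargetPLABWastePct is 10 and G1LastPLABAverageOccupancy is 50, then total_waste_allowed = 100000 * 10 = 1000000, and the desired PLAB size evaluates to 1000000 / (100 - 50) = 20000 words.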


@ -1,5 +1,5 @@
/*
* Copyright (c) 2015, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2015, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,6 +27,7 @@
#include "gc/shared/gcUtil.hpp"
#include "gc/shared/plab.hpp"
#include "runtime/atomic.hpp"
// Records various memory allocation statistics gathered during evacuation. All sizes
// are in HeapWords.
@ -36,30 +37,21 @@ class G1EvacStats : public PLABStats {
AdaptiveWeightedAverage
_net_plab_size_filter; // Integrator with decay
size_t _region_end_waste; // Number of words wasted due to skipping to the next region.
uint _regions_filled; // Number of regions filled completely.
size_t _num_plab_filled; // Number of PLABs filled and retired.
size_t _direct_allocated; // Number of words allocated directly into the regions.
size_t _num_direct_allocated; // Number of direct allocation attempts.
Atomic<size_t> _region_end_waste; // Number of words wasted due to skipping to the next region.
Atomic<uint> _regions_filled; // Number of regions filled completely.
Atomic<size_t> _num_plab_filled; // Number of PLABs filled and retired.
Atomic<size_t> _direct_allocated; // Number of words allocated directly into the regions.
Atomic<size_t> _num_direct_allocated; // Number of direct allocation attempts.
// Number of words in live objects remaining in regions that ultimately suffered an
// evacuation failure. This is used in the regions when the regions are made old regions.
size_t _failure_used;
Atomic<size_t> _failure_used;
// Number of words wasted in regions which failed evacuation. This is the sum of space
// for objects successfully copied out of the regions (now dead space) plus waste at the
// end of regions.
size_t _failure_waste;
Atomic<size_t> _failure_waste;
virtual void reset() {
PLABStats::reset();
_region_end_waste = 0;
_regions_filled = 0;
_num_plab_filled = 0;
_direct_allocated = 0;
_num_direct_allocated = 0;
_failure_used = 0;
_failure_waste = 0;
}
virtual void reset();
void log_plab_allocation();
void log_sizing(size_t calculated_words, size_t net_desired_words);
@ -77,16 +69,16 @@ public:
// Should be called at the end of a GC pause.
void adjust_desired_plab_size();
uint regions_filled() const { return _regions_filled; }
size_t num_plab_filled() const { return _num_plab_filled; }
size_t region_end_waste() const { return _region_end_waste; }
size_t direct_allocated() const { return _direct_allocated; }
size_t num_direct_allocated() const { return _num_direct_allocated; }
uint regions_filled() const;
size_t num_plab_filled() const;
size_t region_end_waste() const;
size_t direct_allocated() const;
size_t num_direct_allocated() const;
// Amount of space in heapwords used in the failing regions when an evacuation failure happens.
size_t failure_used() const { return _failure_used; }
size_t failure_used() const;
// Amount of space in heapwords wasted (unused) in the failing regions when an evacuation failure happens.
size_t failure_waste() const { return _failure_waste; }
size_t failure_waste() const;
inline void add_num_plab_filled(size_t value);
inline void add_direct_allocated(size_t value);


@ -1,5 +1,5 @@
/*
* Copyright (c) 2015, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2015, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,28 +27,54 @@
#include "gc/g1/g1EvacStats.hpp"
#include "runtime/atomicAccess.hpp"
inline uint G1EvacStats::regions_filled() const {
return _regions_filled.load_relaxed();
}
inline size_t G1EvacStats::num_plab_filled() const {
return _num_plab_filled.load_relaxed();
}
inline size_t G1EvacStats::region_end_waste() const {
return _region_end_waste.load_relaxed();
}
inline size_t G1EvacStats::direct_allocated() const {
return _direct_allocated.load_relaxed();
}
inline size_t G1EvacStats::num_direct_allocated() const {
return _num_direct_allocated.load_relaxed();
}
inline size_t G1EvacStats::failure_used() const {
return _failure_used.load_relaxed();
}
inline size_t G1EvacStats::failure_waste() const {
return _failure_waste.load_relaxed();
}
inline void G1EvacStats::add_direct_allocated(size_t value) {
AtomicAccess::add(&_direct_allocated, value, memory_order_relaxed);
_direct_allocated.add_then_fetch(value, memory_order_relaxed);
}
inline void G1EvacStats::add_num_plab_filled(size_t value) {
AtomicAccess::add(&_num_plab_filled, value, memory_order_relaxed);
_num_plab_filled.add_then_fetch(value, memory_order_relaxed);
}
inline void G1EvacStats::add_num_direct_allocated(size_t value) {
AtomicAccess::add(&_num_direct_allocated, value, memory_order_relaxed);
_num_direct_allocated.add_then_fetch(value, memory_order_relaxed);
}
inline void G1EvacStats::add_region_end_waste(size_t value) {
AtomicAccess::add(&_region_end_waste, value, memory_order_relaxed);
AtomicAccess::inc(&_regions_filled, memory_order_relaxed);
_region_end_waste.add_then_fetch(value, memory_order_relaxed);
_regions_filled.add_then_fetch(1u, memory_order_relaxed);
}
inline void G1EvacStats::add_failure_used_and_waste(size_t used, size_t waste) {
AtomicAccess::add(&_failure_used, used, memory_order_relaxed);
AtomicAccess::add(&_failure_waste, waste, memory_order_relaxed);
_failure_used.add_then_fetch(used, memory_order_relaxed);
_failure_waste.add_then_fetch(waste, memory_order_relaxed);
}
#endif // SHARE_GC_G1_G1EVACSTATS_INLINE_HPP


@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -116,8 +116,8 @@ G1FullCollector::G1FullCollector(G1CollectedHeap* heap,
_num_workers(calc_active_workers()),
_has_compaction_targets(false),
_has_humongous(false),
_oop_queue_set(_num_workers),
_array_queue_set(_num_workers),
_marking_task_queues(_num_workers),
_partial_array_state_manager(nullptr),
_preserved_marks_set(true),
_serial_compaction_point(this, nullptr),
_humongous_compaction_point(this, nullptr),
@ -134,32 +134,40 @@ G1FullCollector::G1FullCollector(G1CollectedHeap* heap,
_compaction_points = NEW_C_HEAP_ARRAY(G1FullGCCompactionPoint*, _num_workers, mtGC);
_live_stats = NEW_C_HEAP_ARRAY(G1RegionMarkStats, _heap->max_num_regions(), mtGC);
_compaction_tops = NEW_C_HEAP_ARRAY(HeapWord*, _heap->max_num_regions(), mtGC);
_compaction_tops = NEW_C_HEAP_ARRAY(Atomic<HeapWord*>, _heap->max_num_regions(), mtGC);
for (uint j = 0; j < heap->max_num_regions(); j++) {
_live_stats[j].clear();
_compaction_tops[j] = nullptr;
::new (&_compaction_tops[j]) Atomic<HeapWord*>{};
}
_partial_array_state_manager = new PartialArrayStateManager(_num_workers);
for (uint i = 0; i < _num_workers; i++) {
_markers[i] = new G1FullGCMarker(this, i, _live_stats);
_compaction_points[i] = new G1FullGCCompactionPoint(this, _preserved_marks_set.get(i));
_oop_queue_set.register_queue(i, marker(i)->oop_stack());
_array_queue_set.register_queue(i, marker(i)->objarray_stack());
_marking_task_queues.register_queue(i, marker(i)->task_queue());
}
_serial_compaction_point.set_preserved_stack(_preserved_marks_set.get(0));
_humongous_compaction_point.set_preserved_stack(_preserved_marks_set.get(0));
_region_attr_table.initialize(heap->reserved(), G1HeapRegion::GrainBytes);
}
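Because NEW_C_HEAP_ARRAY above hands back raw storage, the Atomic<HeapWord*> elements cannot simply be assigned; the loop constructs each one in place with placement new. The general pattern, sketched with a stand-in element type:

#include <cstddef>
#include <cstdlib>
#include <new>

// Raw allocation plus placement-new initialization, needed whenever the
// element type is not trivially constructible. Cell is a stand-in.
struct Cell { void* value = nullptr; };

Cell* make_cells(std::size_t n) {
  void* raw = std::malloc(n * sizeof(Cell));  // plays NEW_C_HEAP_ARRAY's role
  Cell* cells = static_cast<Cell*>(raw);
  for (std::size_t i = 0; i < n; i++) {
    ::new (&cells[i]) Cell{};                 // construct in place
  }
  return cells;
}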
PartialArrayStateManager* G1FullCollector::partial_array_state_manager() const {
return _partial_array_state_manager;
}
G1FullCollector::~G1FullCollector() {
for (uint i = 0; i < _num_workers; i++) {
delete _markers[i];
delete _compaction_points[i];
}
delete _partial_array_state_manager;
FREE_C_HEAP_ARRAY(G1FullGCMarker*, _markers);
FREE_C_HEAP_ARRAY(G1FullGCCompactionPoint*, _compaction_points);
FREE_C_HEAP_ARRAY(HeapWord*, _compaction_tops);
FREE_C_HEAP_ARRAY(Atomic<HeapWord*>, _compaction_tops);
FREE_C_HEAP_ARRAY(G1RegionMarkStats, _live_stats);
}
@ -279,8 +287,8 @@ public:
uint index = (_tm == RefProcThreadModel::Single) ? 0 : worker_id;
G1FullKeepAliveClosure keep_alive(_collector.marker(index));
BarrierEnqueueDiscoveredFieldClosure enqueue;
G1FollowStackClosure* complete_gc = _collector.marker(index)->stack_closure();
_rp_task->rp_work(worker_id, &is_alive, &keep_alive, &enqueue, complete_gc);
G1MarkStackClosure* complete_marking = _collector.marker(index)->stack_closure();
_rp_task->rp_work(worker_id, &is_alive, &keep_alive, &enqueue, complete_marking);
}
};
@ -302,7 +310,7 @@ void G1FullCollector::phase1_mark_live_objects() {
const ReferenceProcessorStats& stats = reference_processor()->process_discovered_references(task, _heap->workers(), pt);
scope()->tracer()->report_gc_reference_stats(stats);
pt.print_all_references();
assert(marker(0)->oop_stack()->is_empty(), "Should be no oops on the stack");
assert(marker(0)->task_queue()->is_empty(), "Should be no oops on the stack");
}
{
@ -328,8 +336,7 @@ void G1FullCollector::phase1_mark_live_objects() {
scope()->tracer()->report_object_count_after_gc(&_is_alive, _heap->workers());
}
#if TASKQUEUE_STATS
oop_queue_set()->print_and_reset_taskqueue_stats("Oop Queue");
array_queue_set()->print_and_reset_taskqueue_stats("ObjArrayOop Queue");
marking_task_queues()->print_and_reset_taskqueue_stats("Marking Task Queue");
#endif
}


@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -79,8 +79,8 @@ class G1FullCollector : StackObj {
bool _has_humongous;
G1FullGCMarker** _markers;
G1FullGCCompactionPoint** _compaction_points;
OopQueueSet _oop_queue_set;
ObjArrayTaskQueueSet _array_queue_set;
G1MarkTasksQueueSet _marking_task_queues;
PartialArrayStateManager* _partial_array_state_manager;
PreservedMarksSet _preserved_marks_set;
G1FullGCCompactionPoint _serial_compaction_point;
G1FullGCCompactionPoint _humongous_compaction_point;
@ -96,7 +96,7 @@ class G1FullCollector : StackObj {
G1FullGCHeapRegionAttr _region_attr_table;
HeapWord* volatile* _compaction_tops;
Atomic<HeapWord*>* _compaction_tops;
public:
G1FullCollector(G1CollectedHeap* heap,
@ -113,8 +113,7 @@ public:
uint workers() { return _num_workers; }
G1FullGCMarker* marker(uint id) { return _markers[id]; }
G1FullGCCompactionPoint* compaction_point(uint id) { return _compaction_points[id]; }
OopQueueSet* oop_queue_set() { return &_oop_queue_set; }
ObjArrayTaskQueueSet* array_queue_set() { return &_array_queue_set; }
G1MarkTasksQueueSet* marking_task_queues() { return &_marking_task_queues; }
PreservedMarksSet* preserved_mark_set() { return &_preserved_marks_set; }
G1FullGCCompactionPoint* serial_compaction_point() { return &_serial_compaction_point; }
G1FullGCCompactionPoint* humongous_compaction_point() { return &_humongous_compaction_point; }
@ -122,9 +121,11 @@ public:
ReferenceProcessor* reference_processor();
size_t live_words(uint region_index) const {
assert(region_index < _heap->max_num_regions(), "sanity");
return _live_stats[region_index]._live_words;
return _live_stats[region_index].live_words();
}
PartialArrayStateManager* partial_array_state_manager() const;
void before_marking_update_attribute_table(G1HeapRegion* hr);
inline bool is_compacting(oop obj) const;


@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -63,11 +63,11 @@ void G1FullCollector::update_from_skip_compacting_to_compacting(uint region_idx)
}
void G1FullCollector::set_compaction_top(G1HeapRegion* r, HeapWord* value) {
AtomicAccess::store(&_compaction_tops[r->hrm_index()], value);
_compaction_tops[r->hrm_index()].store_relaxed(value);
}
HeapWord* G1FullCollector::compaction_top(G1HeapRegion* r) const {
return AtomicAccess::load(&_compaction_tops[r->hrm_index()]);
return _compaction_tops[r->hrm_index()].load_relaxed();
}
void G1FullCollector::set_has_compaction_targets() {


@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -37,7 +37,6 @@
#include "gc/shared/weakProcessor.inline.hpp"
#include "logging/log.hpp"
#include "memory/iterator.inline.hpp"
#include "runtime/atomicAccess.hpp"
class G1AdjustLiveClosure : public StackObj {
G1AdjustClosure* _adjust_closure;


@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -34,7 +34,7 @@
G1FullGCMarkTask::G1FullGCMarkTask(G1FullCollector* collector) :
G1FullGCTask("G1 Parallel Marking Task", collector),
_root_processor(G1CollectedHeap::heap(), collector->workers()),
_terminator(collector->workers(), collector->array_queue_set()) {
_terminator(collector->workers(), collector->marking_task_queues()) {
}
void G1FullGCMarkTask::work(uint worker_id) {
@ -54,10 +54,9 @@ void G1FullGCMarkTask::work(uint worker_id) {
}
// Mark stack is populated, now process and drain it.
marker->complete_marking(collector()->oop_queue_set(), collector()->array_queue_set(), &_terminator);
marker->complete_marking(collector()->marking_task_queues(), &_terminator);
// This is the point where the entire marking should have completed.
assert(marker->oop_stack()->is_empty(), "Marking should have completed");
assert(marker->objarray_stack()->is_empty(), "Array marking should have completed");
assert(marker->task_queue()->is_empty(), "Marking should have completed");
log_task("Marking task", worker_id, start);
}


@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -25,6 +25,8 @@
#include "classfile/classLoaderData.hpp"
#include "classfile/classLoaderDataGraph.hpp"
#include "gc/g1/g1FullGCMarker.inline.hpp"
#include "gc/shared/partialArraySplitter.inline.hpp"
#include "gc/shared/partialArrayState.hpp"
#include "gc/shared/referenceProcessor.hpp"
#include "gc/shared/taskTerminator.hpp"
#include "gc/shared/verifyOption.hpp"
@ -36,8 +38,8 @@ G1FullGCMarker::G1FullGCMarker(G1FullCollector* collector,
_collector(collector),
_worker_id(worker_id),
_bitmap(collector->mark_bitmap()),
_oop_stack(),
_objarray_stack(),
_task_queue(),
_partial_array_splitter(collector->partial_array_state_manager(), collector->workers(), ObjArrayMarkingStride),
_mark_closure(worker_id, this, ClassLoaderData::_claim_stw_fullgc_mark, G1CollectedHeap::heap()->ref_processor_stw()),
_stack_closure(this),
_cld_closure(mark_closure(), ClassLoaderData::_claim_stw_fullgc_mark),
@ -47,24 +49,36 @@ G1FullGCMarker::G1FullGCMarker(G1FullCollector* collector,
}
G1FullGCMarker::~G1FullGCMarker() {
assert(is_empty(), "Must be empty at this point");
assert(is_task_queue_empty(), "Must be empty at this point");
}
void G1FullGCMarker::complete_marking(OopQueueSet* oop_stacks,
ObjArrayTaskQueueSet* array_stacks,
void G1FullGCMarker::process_partial_array(PartialArrayState* state, bool stolen) {
// Access state before release by claim().
objArrayOop obj_array = objArrayOop(state->source());
PartialArraySplitter::Claim claim =
_partial_array_splitter.claim(state, task_queue(), stolen);
process_array_chunk(obj_array, claim._start, claim._end);
}
void G1FullGCMarker::start_partial_array_processing(objArrayOop obj) {
mark_closure()->do_klass(obj->klass());
// Don't push empty arrays to avoid unnecessary work.
size_t array_length = obj->length();
if (array_length > 0) {
size_t initial_chunk_size = _partial_array_splitter.start(task_queue(), obj, nullptr, array_length);
process_array_chunk(obj, 0, initial_chunk_size);
}
}
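start_partial_array_processing and process_partial_array above lean on the shared partial-array machinery: start() pushes tasks covering the tail of the array and returns the size of the initial chunk for the caller to scan itself, and each later claim() yields a disjoint range. A runnable analogue of the claiming idea, using an atomic cursor (HotSpot's PartialArrayState does more, including queue management and lifetime tracking; names here are illustrative):

#include <atomic>
#include <cstddef>

// Shared per-array state: an atomic cursor over [0, length) handed out
// in stride-sized, non-overlapping chunks.
struct ArrayChunkState {
  std::atomic<std::size_t> cursor{0};
  std::size_t length = 0;
  std::size_t stride = 128;   // plays ObjArrayMarkingStride's role
};

struct Claim { std::size_t start, end; bool valid; };

Claim claim_chunk(ArrayChunkState& s) {
  std::size_t from = s.cursor.fetch_add(s.stride, std::memory_order_relaxed);
  if (from >= s.length) {
    return {0, 0, false};                 // array fully claimed
  }
  std::size_t to = from + s.stride;
  if (to > s.length) to = s.length;
  return {from, to, true};                // disjoint [start, end) per worker
}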
void G1FullGCMarker::complete_marking(G1ScannerTasksQueueSet* task_queues,
TaskTerminator* terminator) {
do {
follow_marking_stacks();
ObjArrayTask steal_array;
if (array_stacks->steal(_worker_id, steal_array)) {
follow_array_chunk(objArrayOop(steal_array.obj()), steal_array.index());
} else {
oop steal_oop;
if (oop_stacks->steal(_worker_id, steal_oop)) {
follow_object(steal_oop);
}
process_marking_stacks();
ScannerTask stolen_task;
if (task_queues->steal(_worker_id, stolen_task)) {
dispatch_task(stolen_task, true);
}
} while (!is_empty() || !terminator->offer_termination());
} while (!is_task_queue_empty() || !terminator->offer_termination());
}
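The rewritten complete_marking above is the standard drain/steal/terminate loop: exhaust the local queue, try to steal one task, and only exit once the local queue is empty and the terminator confirms every worker is idle. Schematically (the template parameters stand in for the HotSpot task-queue types):

// Schematic of the drain/steal/terminate loop; Queue, StealFn and
// Terminator are stand-ins for the HotSpot task-queue machinery.
template <typename Task, typename Queue, typename StealFn,
          typename Terminator, typename Dispatch>
void complete_marking_sketch(Queue& local, StealFn steal,
                             Terminator& terminator, Dispatch dispatch) {
  do {
    Task task;
    while (local.pop(task)) {
      dispatch(task, /*stolen=*/false);   // drain our own queue first
    }
    if (steal(task)) {
      dispatch(task, /*stolen=*/true);    // then try other workers' queues
    }
    // Stolen or dispatched work may have refilled the local queue, so
    // re-check before offering termination.
  } while (!local.empty() || !terminator.offer_termination());
}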
void G1FullGCMarker::flush_mark_stats_cache() {


@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -28,6 +28,8 @@
#include "gc/g1/g1FullGCOopClosures.hpp"
#include "gc/g1/g1OopClosures.hpp"
#include "gc/g1/g1RegionMarkStatsCache.hpp"
#include "gc/shared/partialArraySplitter.hpp"
#include "gc/shared/partialArrayState.hpp"
#include "gc/shared/stringdedup/stringDedup.hpp"
#include "gc/shared/taskqueue.hpp"
#include "memory/iterator.hpp"
@ -38,16 +40,15 @@
#include "utilities/growableArray.hpp"
#include "utilities/stack.hpp"
typedef OverflowTaskQueue<oop, mtGC> OopQueue;
typedef OverflowTaskQueue<ObjArrayTask, mtGC> ObjArrayTaskQueue;
typedef GenericTaskQueueSet<OopQueue, mtGC> OopQueueSet;
typedef GenericTaskQueueSet<ObjArrayTaskQueue, mtGC> ObjArrayTaskQueueSet;
class G1CMBitMap;
class G1FullCollector;
class TaskTerminator;
typedef OverflowTaskQueue<ScannerTask, mtGC> G1MarkTasksQueue;
typedef GenericTaskQueueSet<G1MarkTasksQueue, mtGC> G1MarkTasksQueueSet;
class G1FullGCMarker : public CHeapObj<mtGC> {
G1FullCollector* _collector;
@ -56,56 +57,50 @@ class G1FullGCMarker : public CHeapObj<mtGC> {
G1CMBitMap* _bitmap;
// Mark stack
OopQueue _oop_stack;
ObjArrayTaskQueue _objarray_stack;
G1MarkTasksQueue _task_queue;
PartialArraySplitter _partial_array_splitter;
// Marking closures
G1MarkAndPushClosure _mark_closure;
G1FollowStackClosure _stack_closure;
G1MarkStackClosure _stack_closure;
CLDToOopClosure _cld_closure;
StringDedup::Requests _string_dedup_requests;
G1RegionMarkStatsCache _mark_stats_cache;
inline bool is_empty();
inline void push_objarray(oop obj, size_t index);
inline bool is_task_queue_empty();
inline bool mark_object(oop obj);
// Marking helpers
inline void follow_object(oop obj);
inline void follow_array(objArrayOop array);
inline void follow_array_chunk(objArrayOop array, int index);
inline void process_array_chunk(objArrayOop obj, size_t start, size_t end);
inline void dispatch_task(const ScannerTask& task, bool stolen);
// Start processing the given objArrayOop by first pushing its continuations and
// then scanning the first chunk.
void start_partial_array_processing(objArrayOop obj);
// Process the given continuation.
void process_partial_array(PartialArrayState* state, bool stolen);
inline void publish_and_drain_oop_tasks();
// Try to publish all contents from the objArray task queue overflow stack to
// the shared objArray stack.
// Returns true and a valid task if there has not been enough space in the shared
// objArray stack, otherwise returns false and the task is invalid.
inline bool publish_or_pop_objarray_tasks(ObjArrayTask& task);
public:
G1FullGCMarker(G1FullCollector* collector,
uint worker_id,
G1RegionMarkStats* mark_stats);
~G1FullGCMarker();
// Stack getters
OopQueue* oop_stack() { return &_oop_stack; }
ObjArrayTaskQueue* objarray_stack() { return &_objarray_stack; }
G1MarkTasksQueue* task_queue() { return &_task_queue; }
// Marking entry points
template <class T> inline void mark_and_push(T* p);
inline void follow_marking_stacks();
void complete_marking(OopQueueSet* oop_stacks,
ObjArrayTaskQueueSet* array_stacks,
inline void process_marking_stacks();
void complete_marking(G1MarkTasksQueueSet* task_queues,
TaskTerminator* terminator);
// Closure getters
CLDToOopClosure* cld_closure() { return &_cld_closure; }
G1MarkAndPushClosure* mark_closure() { return &_mark_closure; }
G1FollowStackClosure* stack_closure() { return &_stack_closure; }
G1MarkStackClosure* stack_closure() { return &_stack_closure; }
// Flush live bytes to regions
void flush_mark_stats_cache();


@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -42,6 +42,7 @@
#include "oops/access.inline.hpp"
#include "oops/compressedOops.inline.hpp"
#include "oops/oop.inline.hpp"
#include "utilities/checkedCast.hpp"
#include "utilities/debug.hpp"
inline bool G1FullGCMarker::mark_object(oop obj) {
@ -71,94 +72,55 @@ template <class T> inline void G1FullGCMarker::mark_and_push(T* p) {
if (!CompressedOops::is_null(heap_oop)) {
oop obj = CompressedOops::decode_not_null(heap_oop);
if (mark_object(obj)) {
_oop_stack.push(obj);
_task_queue.push(ScannerTask(obj));
}
assert(_bitmap->is_marked(obj), "Must be marked");
}
}
inline bool G1FullGCMarker::is_empty() {
return _oop_stack.is_empty() && _objarray_stack.is_empty();
inline bool G1FullGCMarker::is_task_queue_empty() {
return _task_queue.is_empty();
}
inline void G1FullGCMarker::push_objarray(oop obj, size_t index) {
ObjArrayTask task(obj, index);
assert(task.is_valid(), "bad ObjArrayTask");
_objarray_stack.push(task);
inline void G1FullGCMarker::process_array_chunk(objArrayOop obj, size_t start, size_t end) {
obj->oop_iterate_elements_range(mark_closure(),
checked_cast<int>(start),
checked_cast<int>(end));
}
inline void G1FullGCMarker::follow_array(objArrayOop array) {
mark_closure()->do_klass(array->klass());
// Don't push empty arrays to avoid unnecessary work.
if (array->length() > 0) {
push_objarray(array, 0);
}
}
void G1FullGCMarker::follow_array_chunk(objArrayOop array, int index) {
const int len = array->length();
const int beg_index = index;
assert(beg_index < len || len == 0, "index too large");
const int stride = MIN2(len - beg_index, (int) ObjArrayMarkingStride);
const int end_index = beg_index + stride;
// Push the continuation first to allow more efficient work stealing.
if (end_index < len) {
push_objarray(array, end_index);
}
array->oop_iterate_elements_range(mark_closure(), beg_index, end_index);
}
inline void G1FullGCMarker::follow_object(oop obj) {
assert(_bitmap->is_marked(obj), "should be marked");
if (obj->is_objArray()) {
// Handle object arrays explicitly to allow them to
// be split into chunks if needed.
follow_array((objArrayOop)obj);
inline void G1FullGCMarker::dispatch_task(const ScannerTask& task, bool stolen) {
if (task.is_partial_array_state()) {
assert(_bitmap->is_marked(task.to_partial_array_state()->source()), "should be marked");
process_partial_array(task.to_partial_array_state(), stolen);
} else {
obj->oop_iterate(mark_closure());
oop obj = task.to_oop();
assert(_bitmap->is_marked(obj), "should be marked");
if (obj->is_objArray()) {
// Handle object arrays explicitly to allow them to
// be split into chunks if needed.
start_partial_array_processing((objArrayOop)obj);
} else {
obj->oop_iterate(mark_closure());
}
}
}
inline void G1FullGCMarker::publish_and_drain_oop_tasks() {
oop obj;
while (_oop_stack.pop_overflow(obj)) {
if (!_oop_stack.try_push_to_taskqueue(obj)) {
assert(_bitmap->is_marked(obj), "must be marked");
follow_object(obj);
ScannerTask task;
while (_task_queue.pop_overflow(task)) {
if (!_task_queue.try_push_to_taskqueue(task)) {
dispatch_task(task, false);
}
}
while (_oop_stack.pop_local(obj)) {
assert(_bitmap->is_marked(obj), "must be marked");
follow_object(obj);
while (_task_queue.pop_local(task)) {
dispatch_task(task, false);
}
}
inline bool G1FullGCMarker::publish_or_pop_objarray_tasks(ObjArrayTask& task) {
// It is desirable to move as much as possible work from the overflow queue to
// the shared queue as quickly as possible.
while (_objarray_stack.pop_overflow(task)) {
if (!_objarray_stack.try_push_to_taskqueue(task)) {
return true;
}
}
return false;
}
void G1FullGCMarker::follow_marking_stacks() {
void G1FullGCMarker::process_marking_stacks() {
do {
// First, drain regular oop stack.
publish_and_drain_oop_tasks();
// Then process ObjArrays one at a time to avoid marking stack bloat.
ObjArrayTask task;
if (publish_or_pop_objarray_tasks(task) ||
_objarray_stack.pop_local(task)) {
follow_array_chunk(objArrayOop(task.obj()), task.index());
}
} while (!is_empty());
} while (!is_task_queue_empty());
}
#endif // SHARE_GC_G1_G1FULLGCMARKER_INLINE_HPP
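publish_and_drain_oop_tasks above prefers returning overflowed entries to the fixed-size shared deque, where they become stealable again, and only processes an entry itself when the deque is full. The same policy, distilled (BoundedQueue is an assumed stand-in for the shared task deque):

#include <cstddef>
#include <vector>

// Stand-in for the fixed-capacity shared deque.
template <typename Task>
struct BoundedQueue {
  std::vector<Task> slots;
  std::size_t count = 0;
  explicit BoundedQueue(std::size_t cap) : slots(cap) {}
  bool try_push(const Task& t) {
    if (count == slots.size()) return false;  // shared deque is full
    slots[count++] = t;
    return true;
  }
};

// Publish overflowed tasks back to the shared deque when possible;
// otherwise do the work locally.
template <typename Task, typename Process>
void publish_and_drain(std::vector<Task>& overflow,
                       BoundedQueue<Task>& shared, Process process) {
  while (!overflow.empty()) {
    Task t = overflow.back();
    overflow.pop_back();
    if (!shared.try_push(t)) {
      process(t);   // no room to publish, so process it ourselves
    }
  }
}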


@ -35,7 +35,7 @@
G1IsAliveClosure::G1IsAliveClosure(G1FullCollector* collector) :
G1IsAliveClosure(collector, collector->mark_bitmap()) { }
void G1FollowStackClosure::do_void() { _marker->follow_marking_stacks(); }
void G1MarkStackClosure::do_void() { _marker->process_marking_stacks(); }
void G1FullKeepAliveClosure::do_oop(oop* p) { do_oop_work(p); }
void G1FullKeepAliveClosure::do_oop(narrowOop* p) { do_oop_work(p); }


@ -86,11 +86,11 @@ public:
virtual ReferenceIterationMode reference_iteration_mode() { return DO_FIELDS; }
};
class G1FollowStackClosure: public VoidClosure {
class G1MarkStackClosure: public VoidClosure {
G1FullGCMarker* _marker;
public:
G1FollowStackClosure(G1FullGCMarker* marker) : _marker(marker) {}
G1MarkStackClosure(G1FullGCMarker* marker) : _marker(marker) {}
virtual void do_void();
};


@ -1,5 +1,5 @@
/*
* Copyright (c) 2001, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2001, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -31,7 +31,6 @@
#include "memory/allocation.hpp"
#include "memory/padded.inline.hpp"
#include "oops/oop.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/globals_extension.hpp"
#include "runtime/java.hpp"
#include "runtime/mutexLocker.hpp"


@ -30,7 +30,6 @@
#include "gc/g1/g1CodeRootSet.hpp"
#include "gc/g1/g1CollectionSetCandidates.hpp"
#include "gc/g1/g1FromCardCache.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/mutexLocker.hpp"
#include "runtime/safepoint.hpp"
#include "utilities/bitMap.hpp"


@ -1,5 +1,5 @@
/*
* Copyright (c) 1997, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1997, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -30,7 +30,6 @@
#include "gc/g1/g1CardSet.inline.hpp"
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "utilities/bitMap.inline.hpp"
void G1HeapRegionRemSet::set_state_untracked() {


@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2022, 2025, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2022, 2026, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -24,7 +24,6 @@
 #include "gc/g1/g1MonotonicArena.inline.hpp"
 #include "memory/allocation.hpp"
-#include "runtime/atomicAccess.hpp"
 #include "runtime/vmOperations.hpp"
 #include "utilities/globalCounter.inline.hpp"
@@ -61,13 +60,13 @@ void G1MonotonicArena::SegmentFreeList::bulk_add(Segment& first,
                                                  size_t num,
                                                  size_t mem_size) {
   _list.prepend(first, last);
-  AtomicAccess::add(&_num_segments, num, memory_order_relaxed);
-  AtomicAccess::add(&_mem_size, mem_size, memory_order_relaxed);
+  _num_segments.add_then_fetch(num, memory_order_relaxed);
+  _mem_size.add_then_fetch(mem_size, memory_order_relaxed);
 }
 void G1MonotonicArena::SegmentFreeList::print_on(outputStream* out, const char* prefix) {
   out->print_cr("%s: segments %zu size %zu",
-                prefix, AtomicAccess::load(&_num_segments), AtomicAccess::load(&_mem_size));
+                prefix, _num_segments.load_relaxed(), _mem_size.load_relaxed());
 }
 G1MonotonicArena::Segment* G1MonotonicArena::SegmentFreeList::get_all(size_t& num_segments,
@@ -75,12 +74,12 @@ G1MonotonicArena::Segment* G1MonotonicArena::SegmentFreeList::get_all(size_t& num_segments,
   GlobalCounter::CriticalSection cs(Thread::current());
   Segment* result = _list.pop_all();
-  num_segments = AtomicAccess::load(&_num_segments);
-  mem_size = AtomicAccess::load(&_mem_size);
+  num_segments = _num_segments.load_relaxed();
+  mem_size = _mem_size.load_relaxed();
   if (result != nullptr) {
-    AtomicAccess::sub(&_num_segments, num_segments, memory_order_relaxed);
-    AtomicAccess::sub(&_mem_size, mem_size, memory_order_relaxed);
+    _num_segments.sub_then_fetch(num_segments, memory_order_relaxed);
+    _mem_size.sub_then_fetch(mem_size, memory_order_relaxed);
   }
   return result;
 }
@@ -96,8 +95,8 @@ void G1MonotonicArena::SegmentFreeList::free_all() {
     Segment::delete_segment(cur);
   }
-  AtomicAccess::sub(&_num_segments, num_freed, memory_order_relaxed);
-  AtomicAccess::sub(&_mem_size, mem_size_freed, memory_order_relaxed);
+  _num_segments.sub_then_fetch(num_freed, memory_order_relaxed);
+  _mem_size.sub_then_fetch(mem_size_freed, memory_order_relaxed);
 }
 G1MonotonicArena::Segment* G1MonotonicArena::new_segment(Segment* const prev) {
@@ -115,7 +114,7 @@ G1MonotonicArena::Segment* G1MonotonicArena::new_segment(Segment* const prev) {
   }
   // Install it as current allocation segment.
-  Segment* old = AtomicAccess::cmpxchg(&_first, prev, next);
+  Segment* old = _first.compare_exchange(prev, next);
   if (old != prev) {
     // Somebody else installed the segment, use that one.
     Segment::delete_segment(next);
@@ -126,9 +125,9 @@ G1MonotonicArena::Segment* G1MonotonicArena::new_segment(Segment* const prev) {
       _last = next;
     }
     // Successfully installed the segment into the list.
-    AtomicAccess::inc(&_num_segments, memory_order_relaxed);
-    AtomicAccess::add(&_mem_size, next->mem_size(), memory_order_relaxed);
-    AtomicAccess::add(&_num_total_slots, next->num_slots(), memory_order_relaxed);
+    _num_segments.add_then_fetch(1u, memory_order_relaxed);
+    _mem_size.add_then_fetch(next->mem_size(), memory_order_relaxed);
+    _num_total_slots.add_then_fetch(next->num_slots(), memory_order_relaxed);
     return next;
   }
 }
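The compare_exchange in new_segment() above implements a race to publish a speculatively created segment: whichever thread installs its segment first wins, and a loser frees its own copy and continues with the winner's. A condensed standalone rendering of that control flow, using std::atomic directly (Node stands in for Segment; the names here are illustrative, not from the patch):

#include <atomic>

struct Node { Node* next; };

std::atomic<Node*> g_first{nullptr};

// Try to install 'fresh' while the head is still 'prev'; otherwise discard it.
Node* install_or_reuse(Node* prev) {
  Node* fresh = new Node{prev};                // speculatively allocated
  Node* observed = prev;
  if (g_first.compare_exchange_strong(observed, fresh)) {
    return fresh;                              // won the race; fresh is the head
  }
  delete fresh;                                // lost: someone else installed one
  return observed;                             // continue with the winner's node
}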
@@ -155,7 +154,7 @@ uint G1MonotonicArena::slot_size() const {
 }
 void G1MonotonicArena::drop_all() {
-  Segment* cur = AtomicAccess::load_acquire(&_first);
+  Segment* cur = _first.load_acquire();
   if (cur != nullptr) {
     assert(_last != nullptr, "If there is at least one segment, there must be a last one.");
@@ -175,25 +174,25 @@ void G1MonotonicArena::drop_all() {
       cur = next;
     }
 #endif
-    assert(num_segments == _num_segments, "Segment count inconsistent %u %u", num_segments, _num_segments);
-    assert(mem_size == _mem_size, "Memory size inconsistent");
+    assert(num_segments == _num_segments.load_relaxed(), "Segment count inconsistent %u %u", num_segments, _num_segments.load_relaxed());
+    assert(mem_size == _mem_size.load_relaxed(), "Memory size inconsistent");
     assert(last == _last, "Inconsistent last segment");
-    _segment_free_list->bulk_add(*first, *_last, _num_segments, _mem_size);
+    _segment_free_list->bulk_add(*first, *_last, _num_segments.load_relaxed(), _mem_size.load_relaxed());
   }
-  _first = nullptr;
+  _first.store_relaxed(nullptr);
   _last = nullptr;
-  _num_segments = 0;
-  _mem_size = 0;
-  _num_total_slots = 0;
-  _num_allocated_slots = 0;
+  _num_segments.store_relaxed(0);
+  _mem_size.store_relaxed(0);
+  _num_total_slots.store_relaxed(0);
+  _num_allocated_slots.store_relaxed(0);
 }
 void* G1MonotonicArena::allocate() {
   assert(slot_size() > 0, "instance size not set.");
-  Segment* cur = AtomicAccess::load_acquire(&_first);
+  Segment* cur = _first.load_acquire();
   if (cur == nullptr) {
     cur = new_segment(cur);
   }
@@ -201,7 +200,7 @@ void* G1MonotonicArena::allocate() {
   while (true) {
     void* slot = cur->allocate_slot();
     if (slot != nullptr) {
-      AtomicAccess::inc(&_num_allocated_slots, memory_order_relaxed);
+      _num_allocated_slots.add_then_fetch(1u, memory_order_relaxed);
       guarantee(is_aligned(slot, _alloc_options->slot_alignment()),
                 "result " PTR_FORMAT " not aligned at %u", p2i(slot), _alloc_options->slot_alignment());
       return slot;
@@ -213,7 +212,7 @@ void* G1MonotonicArena::allocate() {
 }
 uint G1MonotonicArena::num_segments() const {
-  return AtomicAccess::load(&_num_segments);
+  return _num_segments.load_relaxed();
 }
 #ifdef ASSERT
@@ -238,7 +237,7 @@ uint G1MonotonicArena::calculate_length() const {
 template <typename SegmentClosure>
 void G1MonotonicArena::iterate_segments(SegmentClosure& closure) const {
-  Segment* cur = AtomicAccess::load_acquire(&_first);
+  Segment* cur = _first.load_acquire();
   assert((cur != nullptr) == (_last != nullptr),
          "If there is at least one segment, there must be a last one");

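drop_all(), allocate() and iterate_segments() all start from _first.load_acquire(). That acquire load pairs with the ordering of the publishing compare_exchange in new_segment() — assuming, as is typical for such wrappers, that the CAS is at least as strong as release — so a thread that observes a segment pointer also observes the initialization performed before the segment was installed. The pairing in standard C++ terms, as a minimal sketch:

#include <atomic>

struct Segment { int payload[64]; Segment* next; };

std::atomic<Segment*> first{nullptr};

void publish(Segment* s) {
  // All initialization of *s happens-before this release store...
  first.store(s, std::memory_order_release);
}

Segment* snapshot() {
  // ...and becomes visible to whoever loads the pointer with acquire.
  return first.load(std::memory_order_acquire);
}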

@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2021, 2025, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
  * Copyright (c) 2021, 2022, Huawei Technologies Co., Ltd. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
@@ -28,6 +28,7 @@
 #include "gc/shared/freeListAllocator.hpp"
 #include "nmt/memTag.hpp"
+#include "runtime/atomic.hpp"
 #include "utilities/globalDefinitions.hpp"
 #include "utilities/lockFreeStack.hpp"
@@ -65,27 +66,27 @@ private:
   // AllocOptions provides parameters for Segment sizing and expansion.
   const AllocOptions* _alloc_options;
-  Segment* volatile _first;           // The (start of the) list of all segments.
-  Segment* _last;                     // The last segment of the list of all segments.
-  volatile uint _num_segments;        // Number of assigned segments to this allocator.
-  volatile size_t _mem_size;          // Memory used by all segments.
+  Atomic<Segment*> _first;            // The (start of the) list of all segments.
+  Segment* _last;                     // The last segment of the list of all segments.
+  Atomic<uint> _num_segments;         // Number of assigned segments to this allocator.
+  Atomic<size_t> _mem_size;           // Memory used by all segments.
   SegmentFreeList* _segment_free_list; // The global free segment list to preferentially
                                        // get new segments from.
-  volatile uint _num_total_slots;     // Number of slots available in all segments (allocated + not yet used).
-  volatile uint _num_allocated_slots; // Number of total slots allocated ever (including free and pending).
+  Atomic<uint> _num_total_slots;      // Number of slots available in all segments (allocated + not yet used).
+  Atomic<uint> _num_allocated_slots;  // Number of total slots allocated ever (including free and pending).
   inline Segment* new_segment(Segment* const prev);
   DEBUG_ONLY(uint calculate_length() const;)
 public:
-  const Segment* first_segment() const { return AtomicAccess::load(&_first); }
+  const Segment* first_segment() const { return _first.load_relaxed(); }
-  uint num_total_slots() const { return AtomicAccess::load(&_num_total_slots); }
+  uint num_total_slots() const { return _num_total_slots.load_relaxed(); }
   uint num_allocated_slots() const {
-    uint allocated = AtomicAccess::load(&_num_allocated_slots);
+    uint allocated = _num_allocated_slots.load_relaxed();
     assert(calculate_length() == allocated, "Must be");
     return allocated;
   }
@@ -116,11 +117,11 @@ static constexpr uint SegmentPayloadMaxAlignment = 8;
 class alignas(SegmentPayloadMaxAlignment) G1MonotonicArena::Segment {
   const uint _slot_size;
   const uint _num_slots;
-  Segment* volatile _next;
+  Atomic<Segment*> _next;
   // Index into the next free slot to allocate into. Full if equal (or larger)
   // to _num_slots (can be larger because we atomically increment this value and
   // check only afterwards if the allocation has been successful).
-  uint volatile _next_allocate;
+  Atomic<uint> _next_allocate;
   const MemTag _mem_tag;
   static size_t header_size() { return align_up(sizeof(Segment), SegmentPayloadMaxAlignment); }
@@ -139,21 +140,21 @@ class alignas(SegmentPayloadMaxAlignment) G1MonotonicArena::Segment {
   Segment(uint slot_size, uint num_slots, Segment* next, MemTag mem_tag);
   ~Segment() = default;
 public:
-  Segment* volatile* next_addr() { return &_next; }
+  Atomic<Segment*>* next_addr() { return &_next; }
   void* allocate_slot();
   uint num_slots() const { return _num_slots; }
-  Segment* next() const { return _next; }
+  Segment* next() const { return _next.load_relaxed(); }
   void set_next(Segment* next) {
     assert(next != this, " loop condition");
-    _next = next;
+    _next.store_relaxed(next);
   }
   void reset(Segment* next) {
-    _next_allocate = 0;
+    _next_allocate.store_relaxed(0);
     assert(next != this, " loop condition");
     set_next(next);
     memset(payload(0), 0, payload_size());
@@ -166,7 +167,7 @@ public:
   uint length() const {
     // _next_allocate might grow larger than _num_slots in multi-thread environments
     // due to races.
-    return MIN2(_next_allocate, _num_slots);
+    return MIN2(_next_allocate.load_relaxed(), _num_slots);
   }
   static size_t size_in_bytes(uint slot_size, uint num_slots) {
@@ -176,7 +177,7 @@
   static Segment* create_segment(uint slot_size, uint num_slots, Segment* next, MemTag mem_tag);
   static void delete_segment(Segment* segment);
-  bool is_full() const { return _next_allocate >= _num_slots; }
+  bool is_full() const { return _next_allocate.load_relaxed() >= _num_slots; }
 };
 static_assert(alignof(G1MonotonicArena::Segment) >= SegmentPayloadMaxAlignment, "assert alignment of Segment (and indirectly its payload)");
@@ -186,15 +187,15 @@ static_assert(alignof(G1MonotonicArena::Segment) >= SegmentPayloadMaxAlignment,
 // performed by multiple threads concurrently.
 // Counts and memory usage are current on a best-effort basis if accessed concurrently.
 class G1MonotonicArena::SegmentFreeList {
-  static Segment* volatile* next_ptr(Segment& segment) {
+  static Atomic<Segment*>* next_ptr(Segment& segment) {
     return segment.next_addr();
   }
   using SegmentStack = LockFreeStack<Segment, &SegmentFreeList::next_ptr>;
   SegmentStack _list;
-  volatile size_t _num_segments;
-  volatile size_t _mem_size;
+  Atomic<size_t> _num_segments;
+  Atomic<size_t> _mem_size;
 public:
   SegmentFreeList() : _list(), _num_segments(0), _mem_size(0) { }
@@ -210,8 +211,8 @@ public:
   void print_on(outputStream* out, const char* prefix = "");
-  size_t num_segments() const { return AtomicAccess::load(&_num_segments); }
-  size_t mem_size() const { return AtomicAccess::load(&_mem_size); }
+  size_t num_segments() const { return _num_segments.load_relaxed(); }
+  size_t mem_size() const { return _mem_size.load_relaxed(); }
 };
 // Configuration for G1MonotonicArena, e.g slot size, slot number of next Segment.

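The header's own comment is the key to the relaxed counters: "Counts and memory usage are current on a best-effort basis if accessed concurrently." Nothing orders _num_segments and _mem_size against the LockFreeStack operations they describe, so the counters may transiently disagree with the list itself. A minimal sketch of that arrangement, assuming a Treiber-style push (Node, top and num_nodes are illustrative names, not from the patch):

#include <atomic>
#include <cstddef>

struct Node { std::atomic<Node*> next{nullptr}; };

std::atomic<Node*> top{nullptr};
std::atomic<std::size_t> num_nodes{0};  // advisory, like _num_segments

void push(Node* n) {
  Node* old = top.load(std::memory_order_relaxed);
  do {
    n->next.store(old, std::memory_order_relaxed);
  } while (!top.compare_exchange_weak(old, n,
                                      std::memory_order_release,
                                      std::memory_order_relaxed));
  // The counter update is not atomic with the push: a reader can briefly see
  // the new node but the old count, which is the documented trade-off.
  num_nodes.fetch_add(1, std::memory_order_relaxed);
}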

@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2021, 2025, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
  * Copyright (c) 2021, 2022, Huawei Technologies Co., Ltd. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
@@ -28,14 +28,13 @@
 #include "gc/g1/g1MonotonicArena.hpp"
-#include "runtime/atomicAccess.hpp"
 #include "utilities/globalCounter.inline.hpp"
 inline void* G1MonotonicArena::Segment::allocate_slot() {
-  if (_next_allocate >= _num_slots) {
+  if (_next_allocate.load_relaxed() >= _num_slots) {
     return nullptr;
   }
-  uint result = AtomicAccess::fetch_then_add(&_next_allocate, 1u, memory_order_relaxed);
+  uint result = _next_allocate.fetch_then_add(1u, memory_order_relaxed);
   if (result >= _num_slots) {
     return nullptr;
   }
@@ -48,8 +47,8 @@ inline G1MonotonicArena::Segment* G1MonotonicArena::SegmentFreeList::get() {
   Segment* result = _list.pop();
   if (result != nullptr) {
-    AtomicAccess::dec(&_num_segments, memory_order_relaxed);
-    AtomicAccess::sub(&_mem_size, result->mem_size(), memory_order_relaxed);
+    _num_segments.sub_then_fetch(1u, memory_order_relaxed);
+    _mem_size.sub_then_fetch(result->mem_size(), memory_order_relaxed);
   }
   return result;
 }

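allocate_slot() above shows why _next_allocate is allowed to overshoot _num_slots: the index is bumped first and validated afterwards, so under contention several threads can increment past the limit and all observe "full". Readers clamp accordingly (the MIN2 in length()). The same scheme in a self-contained form, with illustrative names:

#include <atomic>
#include <algorithm>

struct Bump {
  std::atomic<unsigned> next_allocate{0};
  unsigned num_slots;

  explicit Bump(unsigned n) : num_slots(n) {}

  // Returns a valid slot index, or num_slots if the segment is full.
  unsigned allocate_index() {
    if (next_allocate.load(std::memory_order_relaxed) >= num_slots) {
      return num_slots;                      // cheap full check, no RMW needed
    }
    unsigned i = next_allocate.fetch_add(1, std::memory_order_relaxed);
    return i < num_slots ? i : num_slots;    // overshoot means "full"
  }

  unsigned length() const {                  // clamp, since the index can overshoot
    return std::min(next_allocate.load(std::memory_order_relaxed), num_slots);
  }
};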

@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2014, 2025, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2026, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -28,7 +28,6 @@
 #include "nmt/memTracker.hpp"
 #include "oops/markWord.hpp"
 #include "oops/oop.inline.hpp"
-#include "runtime/atomicAccess.hpp"
 #include "runtime/os.hpp"
 #include "utilities/align.hpp"
 #include "utilities/bitMap.inline.hpp"

Some files were not shown because too many files have changed in this diff.