Compare commits

...

92 Commits

Author SHA1 Message Date
Yasumasa Suenaga
127bfc9b0d 8374926: EnableX86ECoreOpts was not enabled on some hybrid CPU
Reviewed-by: vpaprotski, dholmes
2026-01-28 11:11:07 +00:00
Saranya Natarajan
6afc0d8f39 8366861: Phase AFTER_LOOP_OPTS printed even though the method has no loops
Reviewed-by: chagedorn, dfenacci
2026-01-28 09:38:20 +00:00
Chad Rakoczy
4ae4ffd5a3 8374513: AArch64: Improve receiver type profiling reliability
Reviewed-by: shade, aph
2026-01-28 08:08:36 +00:00
Roland Westrelin
b2cd3b0d48 8350330: C2: PhaseIdealLoop::add_parse_predicate() should mirror GraphKit::add_parse_predicate()
Reviewed-by: chagedorn, qamai
2026-01-28 08:00:11 +00:00
Aleksey Shipilev
88c8a55a43 8373266: Strengthen constant CardTable base accesses
Reviewed-by: tschatzl, xpeng
2026-01-28 07:44:31 +00:00
Prasanta Sadhukhan
1161a640ab 8373239: Test java/awt/print/PrinterJob/PageRanges.java fails with incorrect selection of printed pages
Reviewed-by: prr, aivanov
2026-01-28 06:58:50 +00:00
Chris Plummer
fa1b1d677a 8375477: CoreUtils support for SA tests should attempt to locate and unzip core files when they have been zipped
Reviewed-by: lmesnik, kevinw
2026-01-27 20:39:35 +00:00
Nizar Benalla
eb6e74b1fa 8374176: Update --release 26 symbol information for JDK 26 build 32
Reviewed-by: liach, iris, darcy
2026-01-27 17:14:40 +00:00
Roger Riggs
e8048c87bc 8376509: [process] Problemlist Test java/lang/ProcessBuilder/PipelineLeaksFD.java
Reviewed-by: jpai
2026-01-27 16:07:45 +00:00
Chen Liang
a5d0b05136 8376274: JSpec preview support and output enhancement
Reviewed-by: hannesw
2026-01-27 15:04:26 +00:00
Chen Liang
bbb4b0d498 8376277: Migrate java/lang/reflect tests away from TestNG
Reviewed-by: alanb
2026-01-27 14:51:04 +00:00
Wang Haomin
64b0ae6be8 8376276: Add javafx to allowed-list of CheckFiles
Reviewed-by: erikj, kcr
2026-01-27 14:21:44 +00:00
Matthias Baesken
479ac8b2fd 8376281: Remove USE_XLC_BUILTINS macro usage in AIX code
Reviewed-by: mdoerr, clanger
2026-01-27 13:30:14 +00:00
Daniel Gredler
992a8ef46b 8376226: CharsetEncoder.canEncode(CharSequence) is much slower than necessary
Reviewed-by: alanb, naoto
2026-01-27 13:20:26 +00:00
Thomas Schatzl
40d1b642a4 8376191: Remove AtomicAccess include from files that do not use it in gc/shared
Reviewed-by: iwalulya, stefank
2026-01-27 12:51:20 +00:00
Casper Norrbin
528bbe7919 8376302: os::Machine::used_memory reports container used memory when running containerized
Reviewed-by: eosterlund, sgehwolf
2026-01-27 12:33:43 +00:00
Afshin Zafari
5990165d82 8358957: [ubsan]: The assert in layout_helper_boolean_diffbit() in klass.hpp needs UB to fail
Reviewed-by: dlong, jsjolen
2026-01-27 11:55:25 +00:00
Eirik Bjørsnøs
4ff5f3a8c0 8376271: ZipFile comment confusingly refers to "native" ZIP file implementation
Reviewed-by: jpai
2026-01-27 10:28:54 +00:00
Axel Boldt-Christmas
b1aea55205 8374695: ZGC: Convert zTLABUsage to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-27 10:26:29 +00:00
Eirik Bjørsnøs
e0445c09f7 8376294: ZipFile.Source.Key should not hold on to its BasicFileAttributes instance
Reviewed-by: jpai
2026-01-27 10:25:58 +00:00
Varada M
ee2deaded8 8371187: [BigEndian Platforms] Vector lane reversal error
Reviewed-by: mdoerr, amitkumar
2026-01-27 10:01:02 +00:00
Axel Boldt-Christmas
6fda44172e 8374690: ZGC: Convert zRelocate to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-27 08:42:44 +00:00
Axel Boldt-Christmas
bd92c68ef0 8374687: ZGC: Convert zNMethodTableIteration to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-27 08:36:41 +00:00
Axel Boldt-Christmas
5c05d6f230 8374686: ZGC: Convert zMarkTerminate to use Atomic<T>
Reviewed-by: stefank, kbarrett
2026-01-27 08:26:00 +00:00
Ioi Lam
cba7d88ca4 8374549: Extend MetaspaceClosure to cover non-MetaspaceObj types
Reviewed-by: kvn, asmehra
2026-01-27 03:16:43 +00:00
Chen Liang
fdcc122a9d 8376422: Run compiler/corelibs/OptionalFold.java with tiered compilation
Reviewed-by: dholmes
2026-01-27 00:15:13 +00:00
Damon Nguyen
12570be64a 8376151: Test javax/swing/JFileChooser/4966171/bug4966171.java is failing with OOME
Reviewed-by: prr, azvegint, aivanov
2026-01-26 21:13:01 +00:00
Hannes Greule
82bd3831b0 8374538: Wrong specification of MethodHandles.constant(...)
Reviewed-by: liach, jvernee
2026-01-26 20:13:03 +00:00
Phil Race
c69275ddfe 8376232: Remove AppContext from Swing synth related classes
Reviewed-by: serb, azvegint
2026-01-26 18:53:39 +00:00
Chen Liang
3220c4cb43 8372696: Allow boot classes to explicitly opt-in for final field trusting
Reviewed-by: jvernee, jrose, alanb
2026-01-26 18:32:15 +00:00
Henry Jen
b42861a2aa 8373699: JLink: ModuleReader should be closed in JlinkTask.getReleaseInfo(mref)
Reviewed-by: alanb
2026-01-26 17:19:44 +00:00
Henry Jen
67beb9cd81 8373924: Remove unreferenced ImageDecompressor::image_decompressor_close
Reviewed-by: alanb
2026-01-26 16:38:12 +00:00
Christian Hagedorn
bbae38e510 8375272: [IR Framework] Miscellaneous clean-ups
Reviewed-by: mchevalier, dfenacci, thartmann
2026-01-26 16:23:30 +00:00
Axel Boldt-Christmas
f4607ed0a7 8374684: ZGC: Convert zMark to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 15:35:59 +00:00
Axel Boldt-Christmas
6648567574 8374683: ZGC: Convert zLock to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 15:14:42 +00:00
Axel Boldt-Christmas
99b4e05d50 8374682: ZGC: Convert zLiveMap to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 15:05:24 +00:00
Axel Boldt-Christmas
61b722d59a 8374681: ZGC: Convert zJNICritical to use Atomic<T>
Reviewed-by: tschatzl, stefank
2026-01-26 14:45:24 +00:00
Axel Boldt-Christmas
b59f49a1c3 8374680: ZGC: Convert zGeneration to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 14:28:39 +00:00
Axel Boldt-Christmas
fef85ff932 8374679: ZGC: Convert zForwardingAllocator to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 14:13:48 +00:00
Axel Boldt-Christmas
512f95cf26 8374678: ZGC: Convert zForwarding to use Atomic<T>
Reviewed-by: stefank, eosterlund
2026-01-26 13:53:12 +00:00
Axel Boldt-Christmas
319e21e9b4 8374677: ZGC: Convert zArray to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 13:44:06 +00:00
Hannes Wallnöfer
37cb22826a 8373679: Link color accessibility issue in dark theme
Reviewed-by: liach, nbenalla
2026-01-26 13:28:04 +00:00
Daniel Fuchs
8a9127fc2d 8376118: java/net/httpclient/StreamingBody.java fails intermittently on Windows
Reviewed-by: vyazici, jpai
2026-01-26 12:57:23 +00:00
Axel Boldt-Christmas
de5c7a9e86 8374676: ZGC: Convert zAbort to use Atomic<T>
Reviewed-by: stefank, tschatzl
2026-01-26 12:16:05 +00:00
Matthias Baesken
0f1b96a50a 8375684: Avoid leak in KeystoreImpl.m when using CFArrayCreateMutable
Reviewed-by: clanger
2026-01-26 11:38:05 +00:00
Quan Anh Mai
30675faa67 8375653: C2: CmpUNode::sub is not monotonic
Reviewed-by: chagedorn, mchevalier
2026-01-26 11:18:21 +00:00
Thomas Schatzl
48d636872f 8376293: Bad copyright header in g1ConcurrentRefineStats.inline.hpp breaks the build
Reviewed-by: mhaessig, chagedorn
2026-01-26 10:15:57 +00:00
Thomas Schatzl
42c0126fb2 8376119: G1: Convert volatiles in G1CMMarkStack to Atomic<T>
Reviewed-by: kbarrett, iwalulya
2026-01-26 09:47:52 +00:00
Jan Lahoda
90d065e677 8375712: Convert java/lang/runtime tests to use JUnit
Reviewed-by: liach
2026-01-26 09:42:49 +00:00
Thomas Schatzl
0bc2dc3401 8375971: G1: Convert G1EvacStats to use Atomic<T>
Reviewed-by: iwalulya, kbarrett
2026-01-26 09:17:22 +00:00
Thomas Schatzl
c3360ff511 8375983: G1: Convert G1ConcurrentRefineStats to use Atomic<T>
Reviewed-by: kbarrett, iwalulya
2026-01-26 09:17:01 +00:00
Thomas Schatzl
a49986c62f 8375964: G1: Convert G1BuildCandidateRegionsTask to use Atomic<T>
Reviewed-by: shade, iwalulya
2026-01-26 09:16:41 +00:00
Thomas Schatzl
4597046984 8375974: G1: Convert G1FullCollector to use Atomic<T>
Reviewed-by: kbarrett, iwalulya
2026-01-26 09:16:11 +00:00
Thomas Schatzl
e7cadd90b2 8375981: G1: Convert G1RemSet helper classes to use Atomic<T>
Reviewed-by: shade, iwalulya
2026-01-26 09:15:32 +00:00
Thomas Schatzl
2af271e5e6 8375436: G1: Convert G1CardSet classes to use Atomic<T>
Reviewed-by: kbarrett, iwalulya
2026-01-26 09:12:39 +00:00
Arno Zeller
90b5469253 8375999: com/sun/jndi/ldap/LdapPoolTimeoutTest.java fails sporadically on Windows
Reviewed-by: jpai, mbaesken
2026-01-26 08:34:56 +00:00
Xiaohong Gong
38b66b1258 8374043: C2: assert(_base >= VectorMask && _base <= VectorZ) failed: Not a Vector
Reviewed-by: qamai, vlivanov
2026-01-26 01:50:57 +00:00
SendaoYan
932556026d 8375683: Add notes for sctp tests
Reviewed-by: erikj, vyazici
2026-01-25 01:08:31 +00:00
Lei Zhu
a40dbce495 8374293: Jshell throws an error and crashes when using keyword Public
Reviewed-by: jlahoda
2026-01-24 14:19:40 +00:00
Yasumasa Suenaga
a3b1aa9f7d 8374482: SA does not handle signal handler frame in mixed jstack
Reviewed-by: cjplummer, kevinw
2026-01-24 08:43:37 +00:00
Phil Race
44b74e165e 8375351: Remove usage of AppContext from print implementation
Reviewed-by: serb, tr
2026-01-23 20:20:22 +00:00
Valerie Peng
e55124041e 8375549: ConcurrentModificationException if jdk.crypto.disabledAlgorithms has multiple entries with known oid
Reviewed-by: mullan, coffeys
2026-01-23 19:46:40 +00:00
Phil Race
e617ccd529 8375480: Remove usage of AppContext from javax/swing/text
Reviewed-by: serb, psadhukhan
2026-01-23 19:12:54 +00:00
Phil Race
e88edd0bc6 8375338: sun/awt/image/ImageRepresentation/LUTCompareTest.java fails with -Xcheck:jni
Reviewed-by: aivanov, serb, krk
2026-01-23 18:53:48 +00:00
Phil Race
e08fb3a914 8375221: Update code to get PrinterResolution from CUPS/IPP print service
Reviewed-by: serb, psadhukhan
2026-01-23 18:19:23 +00:00
Cesar Soares Lucas
2c3ad0f425 8373021: aarch64: MacroAssembler::arrays_equals reads out of bounds
Reviewed-by: rkennke, aph
2026-01-23 17:56:04 +00:00
Chen Liang
40f7a18b2d 8373935: Migrate java/lang/invoke tests away from TestNG
Reviewed-by: jvernee, alanb
2026-01-23 17:32:53 +00:00
Severin Gehwolf
3fb118a29e 8375692: Hotspot container tests assert with non-ascii vendor name
Reviewed-by: naoto, dholmes, syan
2026-01-23 16:55:38 +00:00
Guanqiang Han
6f6966b28b 8374862: assert(false) failed: Attempting to acquire lock MDOExtraData_lock/nosafepoint-1 out of order with lock tty_lock/tty -- possible deadlock (running with -XX:+Verbose -XX:+WizardMode -XX:+PrintDeoptimizationDetails)
Reviewed-by: dholmes, dlong
2026-01-23 11:37:30 +00:00
Thomas Schatzl
fa20391e73 8375966: G1: Convert G1UpdateRegionLivenessAndSelectForRebuildTask to use Atomic<T>
Reviewed-by: kbarrett, shade
2026-01-23 08:31:31 +00:00
Volkan Yazici
ca37dba4d4 8376089: Increase QUIC idle timeout in H3FixedThreadPoolTest to collect more diagnostic
Reviewed-by: dfuchs, jpai
2026-01-23 08:27:27 +00:00
Jan Lahoda
315bf07b23 8375119: SwitchBoostraps.enumSwitch does not throw an NPE when lookup is null in some cases
Reviewed-by: liach
2026-01-23 07:40:52 +00:00
Julian Waters
39f0e6d6f9 8375241: Simplify --with-native-debug-symbols-level option implementation
Reviewed-by: erikj, shade
2026-01-23 07:07:51 +00:00
Ioi Lam
7f2aa59f82 8375654: Exclude all array classes from dynamic CDS archive
Reviewed-by: kvn, vlivanov
2026-01-23 06:24:47 +00:00
SendaoYan
0f087a7fef 8376051: gc/stress/TestStressG1Uncommit.java fails assertLessThan: expected that xxx < xxx
Reviewed-by: tschatzl, shade
2026-01-23 00:57:25 +00:00
Daniel Jeliński
25d2b52ab9 8328046: Need to keep leading zeros in TlsPremasterSecret of TLS1.3 DHKeyAgreement
Reviewed-by: hchao
2026-01-22 21:48:28 +00:00
Kelvin Nilsen
d6ebcf8a4f 8357471: GenShen: Share collector reserves between young and old
Reviewed-by: wkemper
2026-01-22 21:28:57 +00:00
Phil Race
f3121d1023 8373931: Test javax/sound/sampled/Clip/AutoCloseTimeCheck.java timed out
Reviewed-by: dholmes, dnguyen, kizune
2026-01-22 20:16:44 +00:00
Hai-May Chao
96a2649e29 8373408: SHA1withECDSA is not required for ECDHE and ECDSA
Reviewed-by: djelinski, ascarpino
2026-01-22 17:41:00 +00:00
Henry Jen
5dfda66e13 8373928: 4 Dangling pointer defect groups in java.c
Reviewed-by: bpb, alanb, jpai, jwaters
2026-01-22 17:21:44 +00:00
Alexander Zuev
8c82b58db9 8286258: [Accessibility,macOS,VoiceOver] VoiceOver reads the spinner value wrong and sometime partially
Reviewed-by: psadhukhan, asemenov
2026-01-22 16:36:24 +00:00
Brian Burkhalter
07f6617e0b 8367284: (fs) Support current working directory target in SecureDirectoryStream.move
Reviewed-by: alanb
2026-01-22 16:11:33 +00:00
Patricio Chilano Mateo
26aab3cccd 8373120: Virtual thread stuck in BLOCKED state
Co-authored-by: Alan Bateman <alanb@openjdk.org>
Reviewed-by: alanb
2026-01-22 14:56:23 +00:00
Artur Barashev
025041ba04 8370885: Default namedGroups values are not being filtered against algorithm constraints
Reviewed-by: hchao
2026-01-22 13:11:42 +00:00
Weijun Wang
eda15aa19c 8277489: Rewrite JAAS UnixLoginModule with FFM
Co-authored-by: Martin Doerr <mdoerr@openjdk.org>
Reviewed-by: mdoerr, ascarpino, erikj
2026-01-22 12:16:09 +00:00
Roland Westrelin
0d1d4d07b9 8374725: C2: assert(x_ctrl == get_late_ctrl_with_anti_dep(x->as_Load(), early_ctrl, x_ctrl)) failed: anti-dependences were already checked
Reviewed-by: chagedorn, qamai, dfenacci
2026-01-22 12:09:11 +00:00
Thomas Schatzl
5e0ed3f408 8375982: G1: Convert G1YoungCollector helper classes to use Atomic<T>
Reviewed-by: kbarrett, shade
2026-01-22 11:51:37 +00:00
Ivan Walulya
66e950e9b6 8340470: G1: Adopt PartialArrayState to consolidate marking stack in Full GC
Co-authored-by: Stefan Johansson <sjohanss@openjdk.org>
Reviewed-by: sjohanss, tschatzl
2026-01-22 11:07:42 +00:00
Thomas Schatzl
0ad81fbd16 8375541: G1: Race in G1BarrierSet::write_ref_field_post()
Reviewed-by: iwalulya, sjohanss, shade
2026-01-22 11:04:09 +00:00
Roland Westrelin
6e9256cb61 8373343: C2: verify AddP base input only set for heap addresses
Reviewed-by: dlong, chagedorn, qamai
2026-01-22 10:37:26 +00:00
Liam Miller-Cushon
e8eb218ca2 8374643: Fix reference to implMethodKind in LambdaToMethod debug printf statement
Reviewed-by: vromero, liach
2026-01-22 10:05:05 +00:00
Casper Norrbin
ddbd4617a6 8303470: containers/docker/TestMemoryAwareness.java failed with "'memory_limit_in_bytes:.*512000 k' missing from stdout/stderr"
Reviewed-by: sgehwolf, dholmes
2026-01-22 09:45:40 +00:00
500 changed files with 12213 additions and 10117 deletions

View File

@ -72,6 +72,7 @@ id="toc-notes-for-specific-tests">Notes for Specific Tests</a>
<li><a href="#non-us-locale" id="toc-non-us-locale">Non-US
locale</a></li>
<li><a href="#pkcs11-tests" id="toc-pkcs11-tests">PKCS11 Tests</a></li>
<li><a href="#sctp-tests" id="toc-sctp-tests">SCTP Tests</a></li>
<li><a href="#testing-ahead-of-time-optimizations"
id="toc-testing-ahead-of-time-optimizations">Testing Ahead-of-time
Optimizations</a></li>
@ -621,6 +622,21 @@ element of the appropriate <code>@Artifact</code> class. (See
JTREG=&quot;JAVA_OPTIONS=-Djdk.test.lib.artifacts.nsslib-linux_aarch64=/path/to/NSS-libs&quot;</code></pre>
<p>For more notes about the PKCS11 tests, please refer to
test/jdk/sun/security/pkcs11/README.</p>
<h3 id="sctp-tests">SCTP Tests</h3>
<p>The SCTP tests require the SCTP runtime library, which is often not
installed by default in popular Linux distributions. Without this
library, the SCTP tests will be skipped. If you want to enable the SCTP
tests, you should install the SCTP library before running the tests.</p>
<p>For distributions using the .deb packaging format and the apt tool
(such as Debian, Ubuntu, etc.), try this:</p>
<pre><code>sudo apt install libsctp1
sudo modprobe sctp
lsmod | grep sctp</code></pre>
<p>For distributions using the .rpm packaging format and the dnf tool
(such as Fedora, Red Hat, etc.), try this:</p>
<pre><code>sudo dnf install -y lksctp-tools
sudo modprobe sctp
lsmod | grep sctp</code></pre>
<h3 id="testing-ahead-of-time-optimizations">Testing Ahead-of-time
Optimizations</h3>
<p>One way to improve test coverage of ahead-of-time (AOT) optimizations

View File

@ -640,6 +640,32 @@ $ make test TEST="jtreg:sun/security/pkcs11/Secmod/AddTrustedCert.java" \
For more notes about the PKCS11 tests, please refer to
test/jdk/sun/security/pkcs11/README.
### SCTP Tests
The SCTP tests require the SCTP runtime library, which is often not installed
by default in popular Linux distributions. Without this library, the SCTP tests
will be skipped. If you want to enable the SCTP tests, you should install the
SCTP library before running the tests.
For distributions using the .deb packaging format and the apt tool
(such as Debian, Ubuntu, etc.), try this:
```
sudo apt install libsctp1
sudo modprobe sctp
lsmod | grep sctp
```
For distributions using the .rpm packaging format and the dnf tool
(such as Fedora, Red Hat, etc.), try this:
```
sudo dnf install -y lksctp-tools
sudo modprobe sctp
lsmod | grep sctp
```
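Once the library is installed and the `sctp` kernel module is loaded, the SCTP tests
can be run like any other jtreg tests. As an illustrative check (the path below assumes
the SCTP tests live under `com/sun/nio/sctp` in the `jdk` test suite), run:

```
$ make test TEST="jtreg:com/sun/nio/sctp"
```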
### Testing Ahead-of-time Optimizations
One way to improve test coverage of ahead-of-time (AOT) optimizations in

View File

@ -69,22 +69,18 @@ AC_DEFUN([FLAGS_SETUP_DEBUG_SYMBOLS],
# Debug prefix mapping if supported by compiler
DEBUG_PREFIX_CFLAGS=
UTIL_ARG_WITH(NAME: native-debug-symbols-level, TYPE: string,
DEFAULT: "",
RESULT: DEBUG_SYMBOLS_LEVEL,
UTIL_ARG_WITH(NAME: native-debug-symbols-level, TYPE: literal,
DEFAULT: [auto], VALID_VALUES: [auto 1 2 3],
CHECK_AVAILABLE: [
if test x$TOOLCHAIN_TYPE = xmicrosoft; then
AVAILABLE=false
fi
],
DESC: [set the native debug symbol level (GCC and Clang only)],
DEFAULT_DESC: [toolchain default])
AC_SUBST(DEBUG_SYMBOLS_LEVEL)
if test "x${TOOLCHAIN_TYPE}" = xgcc || \
test "x${TOOLCHAIN_TYPE}" = xclang; then
DEBUG_SYMBOLS_LEVEL_FLAGS="-g"
if test "x${DEBUG_SYMBOLS_LEVEL}" != "x"; then
DEBUG_SYMBOLS_LEVEL_FLAGS="-g${DEBUG_SYMBOLS_LEVEL}"
FLAGS_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [${DEBUG_SYMBOLS_LEVEL_FLAGS}],
IF_FALSE: AC_MSG_ERROR("Debug info level ${DEBUG_SYMBOLS_LEVEL} is not supported"))
fi
fi
DEFAULT_DESC: [toolchain default],
IF_AUTO: [
RESULT=""
])
# Debug symbols
if test "x$TOOLCHAIN_TYPE" = xgcc; then
@ -111,8 +107,8 @@ AC_DEFUN([FLAGS_SETUP_DEBUG_SYMBOLS],
fi
# Debug info level should follow the debug format to be effective.
CFLAGS_DEBUG_SYMBOLS="-gdwarf-4 ${DEBUG_SYMBOLS_LEVEL_FLAGS}"
ASFLAGS_DEBUG_SYMBOLS="${DEBUG_SYMBOLS_LEVEL_FLAGS}"
CFLAGS_DEBUG_SYMBOLS="-gdwarf-4 -g${NATIVE_DEBUG_SYMBOLS_LEVEL}"
ASFLAGS_DEBUG_SYMBOLS="-g${NATIVE_DEBUG_SYMBOLS_LEVEL}"
elif test "x$TOOLCHAIN_TYPE" = xclang; then
if test "x$ALLOW_ABSOLUTE_PATHS_IN_OUTPUT" = "xfalse"; then
# Check if compiler supports -fdebug-prefix-map. If so, use that to make
@ -132,8 +128,8 @@ AC_DEFUN([FLAGS_SETUP_DEBUG_SYMBOLS],
IF_FALSE: [GDWARF_FLAGS=""])
# Debug info level should follow the debug format to be effective.
CFLAGS_DEBUG_SYMBOLS="${GDWARF_FLAGS} ${DEBUG_SYMBOLS_LEVEL_FLAGS}"
ASFLAGS_DEBUG_SYMBOLS="${DEBUG_SYMBOLS_LEVEL_FLAGS}"
CFLAGS_DEBUG_SYMBOLS="${GDWARF_FLAGS} -g${NATIVE_DEBUG_SYMBOLS_LEVEL}"
ASFLAGS_DEBUG_SYMBOLS="-g${NATIVE_DEBUG_SYMBOLS_LEVEL}"
elif test "x$TOOLCHAIN_TYPE" = xmicrosoft; then
CFLAGS_DEBUG_SYMBOLS="-Z7"
fi
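For reference, the reworked option takes one of the values listed in VALID_VALUES above
(auto, 1, 2, or 3). A hedged usage sketch, using the standard OpenJDK configure
invocation (the exact value shown is illustrative, not part of this change):

```
# Request native debug symbol level 3 (emitted as -g3) for GCC/Clang builds;
# "auto" keeps the toolchain default.
bash configure --with-native-debug-symbols-level=3
```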

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -49,13 +49,15 @@ import static com.sun.source.doctree.DocTree.Kind.*;
* The tags can be used as follows:
*
* <pre>
* &commat;jls section-number description
* &commat;jls chapter.section description
* &commat;jls preview-feature-chapter.section description
* </pre>
*
* For example:
*
* <pre>
* &commat;jls 3.4 Line Terminators
* &commat;jls primitive-types-in-patterns-instanceof-switch-5.7.1 Exact Testing Conversions
* </pre>
*
* will produce the following HTML, depending on the file containing
@ -64,10 +66,24 @@ import static com.sun.source.doctree.DocTree.Kind.*;
* <pre>{@code
* <dt>See <i>Java Language Specification</i>:
* <dd><a href="../../specs/jls/jls-3.html#jls-3.4">3.4 Line terminators</a>
* <dd><a href="../../specs/primitive-types-in-patterns-instanceof-switch-jls.html#jls-5.7.1">
* 5.7.1 Exact Testing Conversions</a><sup class="preview-mark">
* <a href="../../specs/jls/jls-1.html#jls-1.5.1">PREVIEW</a></sup>
* }</pre>
*
* Copies of JLS and JVMS are expected to have been placed in the {@code specs}
* folder. These documents are not included in open-source repositories.
* In inline tags (note that you need to write the JLS/JVMS prefix manually):
* <pre>
* JLS {&commat;jls 3.4}
* </pre>
*
* produces (note the section sign and no trailing dot):
* <pre>
* JLS <a href="../../specs/jls/jls-3.html#jls-3.4">§3.4</a>
* </pre>
*
* Copies of JLS, JVMS, and preview JLS and JVMS changes are expected to have
* been placed in the {@code specs} folder. These documents are not included
* in open-source repositories.
*/
public class JSpec implements Taglet {
@ -87,9 +103,9 @@ public class JSpec implements Taglet {
}
}
private String tagName;
private String specTitle;
private String idPrefix;
private final String tagName;
private final String specTitle;
private final String idPrefix;
JSpec(String tagName, String specTitle, String idPrefix) {
this.tagName = tagName;
@ -98,7 +114,7 @@ public class JSpec implements Taglet {
}
// Note: Matches special cases like @jvms 6.5.checkcast
private static final Pattern TAG_PATTERN = Pattern.compile("(?s)(.+ )?(?<chapter>[1-9][0-9]*)(?<section>[0-9a-z_.]*)( .*)?$");
private static final Pattern TAG_PATTERN = Pattern.compile("(?s)(.+ )?(?<preview>([a-z0-9]+-)+)?(?<chapter>[1-9][0-9]*)(?<section>[0-9a-z_.]*)( .*)?$");
/**
* Returns the set of locations in which the tag may be used.
@ -157,19 +173,50 @@ public class JSpec implements Taglet {
.trim();
Matcher m = TAG_PATTERN.matcher(tagText);
if (m.find()) {
// preview-feature-4.6 is preview-feature-, 4, .6
String preview = m.group("preview"); // null if no preview feature
String chapter = m.group("chapter");
String section = m.group("section");
String rootParent = currentPath().replaceAll("[^/]+", "..");
String url = String.format("%1$s/specs/%2$s/%2$s-%3$s.html#%2$s-%3$s%4$s",
rootParent, idPrefix, chapter, section);
String url = preview == null ?
String.format("%1$s/specs/%2$s/%2$s-%3$s.html#%2$s-%3$s%4$s",
rootParent, idPrefix, chapter, section) :
String.format("%1$s/specs/%5$s%2$s.html#%2$s-%3$s%4$s",
rootParent, idPrefix, chapter, section, preview);
var literal = expand(contents).trim();
var prefix = (preview == null ? "" : preview) + chapter + section;
if (literal.startsWith(prefix)) {
var hasFullTitle = literal.length() > prefix.length();
if (hasFullTitle) {
// Drop the preview identifier
literal = chapter + section + literal.substring(prefix.length());
} else {
// No section sign if the tag refers to a chapter, like {@jvms 4}
String sectionSign = section.isEmpty() ? "" : "§";
// Change whole text to "§chapter.x" in inline tags.
literal = sectionSign + chapter + section;
}
}
sb.append("<a href=\"")
.append(url)
.append("\">")
.append(expand(contents))
.append(literal)
.append("</a>");
if (preview != null) {
// Add PREVIEW superscript that links to JLS/JVMS 1.5.1
// "Restrictions on the Use of Preview Features"
// Similar to how APIs link to the Preview info box warning
var sectionLink = String.format("%1$s/specs/%2$s/%2$s-%3$s.html#%2$s-%3$s%4$s",
rootParent, idPrefix, "1", ".5.1");
sb.append("<sup class=\"preview-mark\"><a href=\"")
.append(sectionLink)
.append("\">PREVIEW</a></sup>");
}
if (tag.getKind() == DocTree.Kind.UNKNOWN_BLOCK_TAG) {
sb.append("<br>");
}

View File

@ -31,13 +31,14 @@ include LibCommon.gmk
## Build libjaas
################################################################################
$(eval $(call SetupJdkLibrary, BUILD_LIBJAAS, \
NAME := jaas, \
OPTIMIZATION := LOW, \
EXTRA_HEADER_DIRS := java.base:libjava, \
LIBS_windows := advapi32.lib mpr.lib netapi32.lib user32.lib, \
))
TARGETS += $(BUILD_LIBJAAS)
ifeq ($(call isTargetOs, windows), true)
$(eval $(call SetupJdkLibrary, BUILD_LIBJAAS, \
NAME := jaas, \
OPTIMIZATION := LOW, \
EXTRA_HEADER_DIRS := java.base:libjava, \
LIBS_windows := advapi32.lib mpr.lib netapi32.lib user32.lib, \
))
TARGETS += $(BUILD_LIBJAAS)
endif
################################################################################

View File

@ -1218,43 +1218,11 @@ void LIR_Assembler::emit_alloc_array(LIR_OpAllocArray* op) {
__ bind(*op->stub()->continuation());
}
void LIR_Assembler::type_profile_helper(Register mdo,
ciMethodData *md, ciProfileData *data,
Register recv, Label* update_done) {
void LIR_Assembler::type_profile_helper(Register mdo, ciMethodData *md,
ciProfileData *data, Register recv) {
// Given a profile data offset, generate an Address which points to
// the corresponding slot in mdo->data().
// Clobbers rscratch2.
auto slot_at = [=](ByteSize offset) -> Address {
return __ form_address(rscratch2, mdo,
md->byte_offset_of_slot(data, offset),
LogBytesPerWord);
};
for (uint i = 0; i < ReceiverTypeData::row_limit(); i++) {
Label next_test;
// See if the receiver is receiver[n].
__ ldr(rscratch1, slot_at(ReceiverTypeData::receiver_offset(i)));
__ cmp(recv, rscratch1);
__ br(Assembler::NE, next_test);
__ addptr(slot_at(ReceiverTypeData::receiver_count_offset(i)),
DataLayout::counter_increment);
__ b(*update_done);
__ bind(next_test);
}
// Didn't find receiver; find next empty slot and fill it in
for (uint i = 0; i < ReceiverTypeData::row_limit(); i++) {
Label next_test;
Address recv_addr(slot_at(ReceiverTypeData::receiver_offset(i)));
__ ldr(rscratch1, recv_addr);
__ cbnz(rscratch1, next_test);
__ str(recv, recv_addr);
__ mov(rscratch1, DataLayout::counter_increment);
__ str(rscratch1, slot_at(ReceiverTypeData::receiver_count_offset(i)));
__ b(*update_done);
__ bind(next_test);
}
int mdp_offset = md->byte_offset_of_slot(data, in_ByteSize(0));
__ profile_receiver_type(recv, mdo, mdp_offset);
}
void LIR_Assembler::emit_typecheck_helper(LIR_OpTypeCheck *op, Label* success, Label* failure, Label* obj_is_null) {
@ -1316,14 +1284,9 @@ void LIR_Assembler::emit_typecheck_helper(LIR_OpTypeCheck *op, Label* success, L
__ b(*obj_is_null);
__ bind(not_null);
Label update_done;
Register recv = k_RInfo;
__ load_klass(recv, obj);
type_profile_helper(mdo, md, data, recv, &update_done);
Address counter_addr(mdo, md->byte_offset_of_slot(data, CounterData::count_offset()));
__ addptr(counter_addr, DataLayout::counter_increment);
__ bind(update_done);
type_profile_helper(mdo, md, data, recv);
} else {
__ cbz(obj, *obj_is_null);
}
@ -1430,13 +1393,9 @@ void LIR_Assembler::emit_opTypeCheck(LIR_OpTypeCheck* op) {
__ b(done);
__ bind(not_null);
Label update_done;
Register recv = k_RInfo;
__ load_klass(recv, value);
type_profile_helper(mdo, md, data, recv, &update_done);
Address counter_addr(mdo, md->byte_offset_of_slot(data, CounterData::count_offset()));
__ addptr(counter_addr, DataLayout::counter_increment);
__ bind(update_done);
type_profile_helper(mdo, md, data, recv);
} else {
__ cbz(value, done);
}
@ -2540,13 +2499,9 @@ void LIR_Assembler::emit_profile_call(LIR_OpProfileCall* op) {
if (C1OptimizeVirtualCallProfiling && known_klass != nullptr) {
// We know the type that will be seen at this call site; we can
// statically update the MethodData* rather than needing to do
// dynamic tests on the receiver type
// NOTE: we should probably put a lock around this search to
// avoid collisions by concurrent compilations
// dynamic tests on the receiver type.
ciVirtualCallData* vc_data = (ciVirtualCallData*) data;
uint i;
for (i = 0; i < VirtualCallData::row_limit(); i++) {
for (uint i = 0; i < VirtualCallData::row_limit(); i++) {
ciKlass* receiver = vc_data->receiver(i);
if (known_klass->equals(receiver)) {
Address data_addr(mdo, md->byte_offset_of_slot(data, VirtualCallData::receiver_count_offset(i)));
@ -2554,36 +2509,13 @@ void LIR_Assembler::emit_profile_call(LIR_OpProfileCall* op) {
return;
}
}
// Receiver type not found in profile data; select an empty slot
// Note that this is less efficient than it should be because it
// always does a write to the receiver part of the
// VirtualCallData rather than just the first time
for (i = 0; i < VirtualCallData::row_limit(); i++) {
ciKlass* receiver = vc_data->receiver(i);
if (receiver == nullptr) {
__ mov_metadata(rscratch1, known_klass->constant_encoding());
Address recv_addr =
__ form_address(rscratch2, mdo,
md->byte_offset_of_slot(data, VirtualCallData::receiver_offset(i)),
LogBytesPerWord);
__ str(rscratch1, recv_addr);
Address data_addr(mdo, md->byte_offset_of_slot(data, VirtualCallData::receiver_count_offset(i)));
__ addptr(data_addr, DataLayout::counter_increment);
return;
}
}
// Receiver type is not found in profile data.
// Fall back to runtime helper to handle the rest at runtime.
__ mov_metadata(recv, known_klass->constant_encoding());
} else {
__ load_klass(recv, recv);
Label update_done;
type_profile_helper(mdo, md, data, recv, &update_done);
// Receiver did not match any saved receiver and there is no empty row for it.
// Increment total counter to indicate polymorphic case.
__ addptr(counter_addr, DataLayout::counter_increment);
__ bind(update_done);
}
type_profile_helper(mdo, md, data, recv);
} else {
// Static call
__ addptr(counter_addr, DataLayout::counter_increment);

View File

@ -50,9 +50,8 @@ friend class ArrayCopyStub;
Address stack_slot_address(int index, uint shift, Register tmp, int adjust = 0);
// Record the type of the receiver in ReceiverTypeData
void type_profile_helper(Register mdo,
ciMethodData *md, ciProfileData *data,
Register recv, Label* update_done);
void type_profile_helper(Register mdo, ciMethodData *md,
ciProfileData *data, Register recv);
void add_debug_info_for_branch(address adr, CodeEmitInfo* info);
void casw(Register addr, Register newval, Register cmpval);

View File

@ -240,15 +240,14 @@ void InterpreterMacroAssembler::load_resolved_klass_at_offset(
// Rsub_klass: subklass
//
// Kills:
// r2, r5
// r2
void InterpreterMacroAssembler::gen_subtype_check(Register Rsub_klass,
Label& ok_is_subtype) {
assert(Rsub_klass != r0, "r0 holds superklass");
assert(Rsub_klass != r2, "r2 holds 2ndary super array length");
assert(Rsub_klass != r5, "r5 holds 2ndary super array scan ptr");
// Profile the not-null value's klass.
profile_typecheck(r2, Rsub_klass, r5); // blows r2, reloads r5
profile_typecheck(r2, Rsub_klass); // blows r2
// Do the check.
check_klass_subtype(Rsub_klass, r0, r2, ok_is_subtype); // blows r2
@ -991,7 +990,6 @@ void InterpreterMacroAssembler::profile_final_call(Register mdp) {
void InterpreterMacroAssembler::profile_virtual_call(Register receiver,
Register mdp,
Register reg2,
bool receiver_can_be_null) {
if (ProfileInterpreter) {
Label profile_continue;
@ -1009,7 +1007,7 @@ void InterpreterMacroAssembler::profile_virtual_call(Register receiver,
}
// Record the receiver type.
record_klass_in_profile(receiver, mdp, reg2);
profile_receiver_type(receiver, mdp, 0);
bind(skip_receiver_profile);
// The method data pointer needs to be updated to reflect the new target.
@ -1018,131 +1016,6 @@ void InterpreterMacroAssembler::profile_virtual_call(Register receiver,
}
}
// This routine creates a state machine for updating the multi-row
// type profile at a virtual call site (or other type-sensitive bytecode).
// The machine visits each row (of receiver/count) until the receiver type
// is found, or until it runs out of rows. At the same time, it remembers
// the location of the first empty row. (An empty row records null for its
// receiver, and can be allocated for a newly-observed receiver type.)
// Because there are two degrees of freedom in the state, a simple linear
// search will not work; it must be a decision tree. Hence this helper
// function is recursive, to generate the required tree structured code.
// It's the interpreter, so we are trading off code space for speed.
// See below for example code.
void InterpreterMacroAssembler::record_klass_in_profile_helper(
Register receiver, Register mdp,
Register reg2, int start_row,
Label& done) {
if (TypeProfileWidth == 0) {
increment_mdp_data_at(mdp, in_bytes(CounterData::count_offset()));
} else {
record_item_in_profile_helper(receiver, mdp, reg2, 0, done, TypeProfileWidth,
&VirtualCallData::receiver_offset, &VirtualCallData::receiver_count_offset);
}
}
void InterpreterMacroAssembler::record_item_in_profile_helper(Register item, Register mdp,
Register reg2, int start_row, Label& done, int total_rows,
OffsetFunction item_offset_fn, OffsetFunction item_count_offset_fn) {
int last_row = total_rows - 1;
assert(start_row <= last_row, "must be work left to do");
// Test this row for both the item and for null.
// Take any of three different outcomes:
// 1. found item => increment count and goto done
// 2. found null => keep looking for case 1, maybe allocate this cell
// 3. found something else => keep looking for cases 1 and 2
// Case 3 is handled by a recursive call.
for (int row = start_row; row <= last_row; row++) {
Label next_test;
bool test_for_null_also = (row == start_row);
// See if the item is item[n].
int item_offset = in_bytes(item_offset_fn(row));
test_mdp_data_at(mdp, item_offset, item,
(test_for_null_also ? reg2 : noreg),
next_test);
// (Reg2 now contains the item from the CallData.)
// The item is item[n]. Increment count[n].
int count_offset = in_bytes(item_count_offset_fn(row));
increment_mdp_data_at(mdp, count_offset);
b(done);
bind(next_test);
if (test_for_null_also) {
Label found_null;
// Failed the equality check on item[n]... Test for null.
if (start_row == last_row) {
// The only thing left to do is handle the null case.
cbz(reg2, found_null);
// Item did not match any saved item and there is no empty row for it.
// Increment total counter to indicate polymorphic case.
increment_mdp_data_at(mdp, in_bytes(CounterData::count_offset()));
b(done);
bind(found_null);
break;
}
// Since null is rare, make it be the branch-taken case.
cbz(reg2, found_null);
// Put all the "Case 3" tests here.
record_item_in_profile_helper(item, mdp, reg2, start_row + 1, done, total_rows,
item_offset_fn, item_count_offset_fn);
// Found a null. Keep searching for a matching item,
// but remember that this is an empty (unused) slot.
bind(found_null);
}
}
// In the fall-through case, we found no matching item, but we
// observed the item[start_row] is null.
// Fill in the item field and increment the count.
int item_offset = in_bytes(item_offset_fn(start_row));
set_mdp_data_at(mdp, item_offset, item);
int count_offset = in_bytes(item_count_offset_fn(start_row));
mov(reg2, DataLayout::counter_increment);
set_mdp_data_at(mdp, count_offset, reg2);
if (start_row > 0) {
b(done);
}
}
// Example state machine code for three profile rows:
// // main copy of decision tree, rooted at row[1]
// if (row[0].rec == rec) { row[0].incr(); goto done; }
// if (row[0].rec != nullptr) {
// // inner copy of decision tree, rooted at row[1]
// if (row[1].rec == rec) { row[1].incr(); goto done; }
// if (row[1].rec != nullptr) {
// // degenerate decision tree, rooted at row[2]
// if (row[2].rec == rec) { row[2].incr(); goto done; }
// if (row[2].rec != nullptr) { count.incr(); goto done; } // overflow
// row[2].init(rec); goto done;
// } else {
// // remember row[1] is empty
// if (row[2].rec == rec) { row[2].incr(); goto done; }
// row[1].init(rec); goto done;
// }
// } else {
// // remember row[0] is empty
// if (row[1].rec == rec) { row[1].incr(); goto done; }
// if (row[2].rec == rec) { row[2].incr(); goto done; }
// row[0].init(rec); goto done;
// }
// done:
void InterpreterMacroAssembler::record_klass_in_profile(Register receiver,
Register mdp, Register reg2) {
assert(ProfileInterpreter, "must be profiling");
Label done;
record_klass_in_profile_helper(receiver, mdp, reg2, 0, done);
bind (done);
}
void InterpreterMacroAssembler::profile_ret(Register return_bci,
Register mdp) {
if (ProfileInterpreter) {
@ -1200,7 +1073,7 @@ void InterpreterMacroAssembler::profile_null_seen(Register mdp) {
}
}
void InterpreterMacroAssembler::profile_typecheck(Register mdp, Register klass, Register reg2) {
void InterpreterMacroAssembler::profile_typecheck(Register mdp, Register klass) {
if (ProfileInterpreter) {
Label profile_continue;
@ -1213,7 +1086,7 @@ void InterpreterMacroAssembler::profile_typecheck(Register mdp, Register klass,
mdp_delta = in_bytes(VirtualCallData::virtual_call_data_size());
// Record the object type.
record_klass_in_profile(klass, mdp, reg2);
profile_receiver_type(klass, mdp, 0);
}
update_mdp_by_constant(mdp, mdp_delta);

View File

@ -273,15 +273,6 @@ class InterpreterMacroAssembler: public MacroAssembler {
Register test_value_out,
Label& not_equal_continue);
void record_klass_in_profile(Register receiver, Register mdp,
Register reg2);
void record_klass_in_profile_helper(Register receiver, Register mdp,
Register reg2, int start_row,
Label& done);
void record_item_in_profile_helper(Register item, Register mdp,
Register reg2, int start_row, Label& done, int total_rows,
OffsetFunction item_offset_fn, OffsetFunction item_count_offset_fn);
void update_mdp_by_offset(Register mdp_in, int offset_of_offset);
void update_mdp_by_offset(Register mdp_in, Register reg, int offset_of_disp);
void update_mdp_by_constant(Register mdp_in, int constant);
@ -295,11 +286,10 @@ class InterpreterMacroAssembler: public MacroAssembler {
void profile_call(Register mdp);
void profile_final_call(Register mdp);
void profile_virtual_call(Register receiver, Register mdp,
Register scratch2,
bool receiver_can_be_null = false);
void profile_ret(Register return_bci, Register mdp);
void profile_null_seen(Register mdp);
void profile_typecheck(Register mdp, Register klass, Register scratch);
void profile_typecheck(Register mdp, Register klass);
void profile_typecheck_failed(Register mdp);
void profile_switch_default(Register mdp);
void profile_switch_case(Register index_in_scratch, Register mdp,

View File

@ -2118,6 +2118,161 @@ Address MacroAssembler::argument_address(RegisterOrConstant arg_slot,
}
}
// Handle the receiver type profile update given the "recv" klass.
//
// Normally updates the ReceiverTypeData (RD) that starts at "mdp" + "mdp_offset".
// If there are no matching or claimable receiver entries in RD, updates
// the polymorphic counter.
//
// This code is expected to run in either the interpreter or JIT-ed code, without
// extra synchronization. For safety, receiver cells are claimed atomically, which
// avoids grossly misrepresenting the profiles under concurrent updates. For speed,
// counter updates are not atomic.
//
void MacroAssembler::profile_receiver_type(Register recv, Register mdp, int mdp_offset) {
assert_different_registers(recv, mdp, rscratch1, rscratch2);
int base_receiver_offset = in_bytes(ReceiverTypeData::receiver_offset(0));
int end_receiver_offset = in_bytes(ReceiverTypeData::receiver_offset(ReceiverTypeData::row_limit()));
int poly_count_offset = in_bytes(CounterData::count_offset());
int receiver_step = in_bytes(ReceiverTypeData::receiver_offset(1)) - base_receiver_offset;
int receiver_to_count_step = in_bytes(ReceiverTypeData::receiver_count_offset(0)) - base_receiver_offset;
// Adjust for MDP offsets.
base_receiver_offset += mdp_offset;
end_receiver_offset += mdp_offset;
poly_count_offset += mdp_offset;
#ifdef ASSERT
// We are about to walk the MDO slots without asking for offsets.
// Check that our math hits all the right spots.
for (uint c = 0; c < ReceiverTypeData::row_limit(); c++) {
int real_recv_offset = mdp_offset + in_bytes(ReceiverTypeData::receiver_offset(c));
int real_count_offset = mdp_offset + in_bytes(ReceiverTypeData::receiver_count_offset(c));
int offset = base_receiver_offset + receiver_step*c;
int count_offset = offset + receiver_to_count_step;
assert(offset == real_recv_offset, "receiver slot math");
assert(count_offset == real_count_offset, "receiver count math");
}
int real_poly_count_offset = mdp_offset + in_bytes(CounterData::count_offset());
assert(poly_count_offset == real_poly_count_offset, "poly counter math");
#endif
// Corner case: no profile table. Increment poly counter and exit.
if (ReceiverTypeData::row_limit() == 0) {
increment(Address(mdp, poly_count_offset), DataLayout::counter_increment);
return;
}
Register offset = rscratch2;
Label L_loop_search_receiver, L_loop_search_empty;
Label L_restart, L_found_recv, L_found_empty, L_polymorphic, L_count_update;
// The code here recognizes three major cases:
// A. Fastest: receiver found in the table
// B. Fast: no receiver in the table, and the table is full
// C. Slow: no receiver in the table, free slots in the table
//
// Case A performance matters most, as perfectly-behaved code ends up there,
// especially with larger TypeProfileWidth. Case B performance is important as
// well: this is where the bulk of code lands for normally megamorphic cases.
// Case C performance is not essential; its job is to deal with installation
// races, so we optimize for code density instead. Case C needs to make sure that
// receiver rows are claimed only once. This guarantees we never overwrite a row for
// another receiver and never duplicate receivers in the list, keeping the profile
// type-accurate.
//
// It is very tempting to handle these cases in a single loop and claim the first
// empty slot without checking the rest of the table. However, profiling code should
// tolerate free slots in the table, as class unloading can clear them. After such a
// cleanup, the receiver we need might be _after_ the free slot. Therefore, we need to
// let at least a full scan complete before trying to install new slots. Splitting the
// code into several tight loops also helpfully optimizes cases A and B.
//
// This code is effectively:
//
// restart:
// // Fastest: receiver is already installed
// for (i = 0; i < receiver_count(); i++) {
// if (receiver(i) == recv) goto found_recv(i);
// }
//
// // Fast: no receiver, but profile is full
// for (i = 0; i < receiver_count(); i++) {
// if (receiver(i) == null) goto found_null(i);
// }
// goto polymorphic
//
// // Slow: try to install receiver
// found_null(i):
// CAS(&receiver(i), null, recv);
// goto restart
//
// polymorphic:
// count++;
// return
//
// found_recv(i):
// *receiver_count(i)++
//
bind(L_restart);
// Fastest: receiver is already installed
mov(offset, base_receiver_offset);
bind(L_loop_search_receiver);
ldr(rscratch1, Address(mdp, offset));
cmp(rscratch1, recv);
br(Assembler::EQ, L_found_recv);
add(offset, offset, receiver_step);
sub(rscratch1, offset, end_receiver_offset);
cbnz(rscratch1, L_loop_search_receiver);
// Fast: no receiver, but profile is full
mov(offset, base_receiver_offset);
bind(L_loop_search_empty);
ldr(rscratch1, Address(mdp, offset));
cbz(rscratch1, L_found_empty);
add(offset, offset, receiver_step);
sub(rscratch1, offset, end_receiver_offset);
cbnz(rscratch1, L_loop_search_empty);
b(L_polymorphic);
// Slow: try to install receiver
bind(L_found_empty);
// Atomically swing receiver slot: null -> recv.
//
// The update uses CAS, which clobbers rscratch1. Therefore, rscratch2
// is used to hold the destination address. This is safe because the
// offset is no longer needed after the address is computed.
lea(rscratch2, Address(mdp, offset));
cmpxchg(/*addr*/ rscratch2, /*expected*/ zr, /*new*/ recv, Assembler::xword,
/*acquire*/ false, /*release*/ false, /*weak*/ true, noreg);
// CAS success means the slot now has the receiver we want. CAS failure means
// something had claimed the slot concurrently: it can be the same receiver we want,
// or something else. Since this is a slow path, we can optimize for code density,
// and just restart the search from the beginning.
b(L_restart);
// Counter updates:
// Increment polymorphic counter instead of receiver slot.
bind(L_polymorphic);
mov(offset, poly_count_offset);
b(L_count_update);
// Found a receiver, convert its slot offset to corresponding count offset.
bind(L_found_recv);
add(offset, offset, receiver_to_count_step);
bind(L_count_update);
increment(Address(mdp, offset), DataLayout::counter_increment);
}
void MacroAssembler::call_VM_leaf_base(address entry_point,
int number_of_arguments,
Label *retaddr) {
@ -5606,12 +5761,11 @@ void MacroAssembler::adrp(Register reg1, const Address &dest, uint64_t &byte_off
}
void MacroAssembler::load_byte_map_base(Register reg) {
CardTable::CardValue* byte_map_base =
((CardTableBarrierSet*)(BarrierSet::barrier_set()))->card_table()->byte_map_base();
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
// Strictly speaking the byte_map_base isn't an address at all, and it might
// Strictly speaking the card table base isn't an address at all, and it might
// even be negative. It is thus materialised as a constant.
mov(reg, (uint64_t)byte_map_base);
mov(reg, (uint64_t)ctbs->card_table_base_const());
}
void MacroAssembler::build_frame(int framesize) {
@ -5782,6 +5936,9 @@ address MacroAssembler::arrays_equals(Register a1, Register a2, Register tmp3,
// return false;
bind(A_IS_NOT_NULL);
ldrw(cnt1, Address(a1, length_offset));
ldrw(tmp5, Address(a2, length_offset));
cmp(cnt1, tmp5);
br(NE, DONE); // If lengths differ, return false
// Increase loop counter by diff between base- and actual start-offset.
addw(cnt1, cnt1, extra_length);
lea(a1, Address(a1, start_offset));
@ -5848,6 +6005,9 @@ address MacroAssembler::arrays_equals(Register a1, Register a2, Register tmp3,
cbz(a1, DONE);
ldrw(cnt1, Address(a1, length_offset));
cbz(a2, DONE);
ldrw(tmp5, Address(a2, length_offset));
cmp(cnt1, tmp5);
br(NE, DONE); // If lengths differ, return false
// Increase loop counter by diff between base- and actual start-offset.
addw(cnt1, cnt1, extra_length);

View File

@ -1122,6 +1122,8 @@ public:
Address argument_address(RegisterOrConstant arg_slot, int extra_slot_offset = 0);
void profile_receiver_type(Register recv, Register mdp, int mdp_offset);
void verify_sve_vector_length(Register tmp = rscratch1);
void reinitialize_ptrue() {
if (UseSVE > 0) {

View File

@ -3370,7 +3370,7 @@ void TemplateTable::invokevirtual_helper(Register index,
__ load_klass(r0, recv);
// profile this call
__ profile_virtual_call(r0, rlocals, r3);
__ profile_virtual_call(r0, rlocals);
// get target Method & entry point
__ lookup_virtual_method(r0, index, method);
@ -3500,7 +3500,7 @@ void TemplateTable::invokeinterface(int byte_no) {
/*return_method=*/false);
// profile this call
__ profile_virtual_call(r3, r13, r19);
__ profile_virtual_call(r3, r13);
// Get declaring interface class from method, and itable index

View File

@ -67,9 +67,7 @@ void CardTableBarrierSetAssembler::store_at(MacroAssembler* masm, DecoratorSet d
void CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembler* masm, DecoratorSet decorators,
Register addr, Register count, Register tmp) {
BLOCK_COMMENT("CardTablePostBarrier");
BarrierSet* bs = BarrierSet::barrier_set();
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(bs);
CardTable* ct = ctbs->card_table();
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
Label L_cardtable_loop, L_done;
@ -83,7 +81,7 @@ void CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembl
__ sub(count, count, addr); // nb of cards
// warning: Rthread has not been preserved
__ mov_address(tmp, (address) ct->byte_map_base());
__ mov_address(tmp, (address)ctbs->card_table_base_const());
__ add(addr,tmp, addr);
Register zero = __ zero_register(tmp);
@ -122,8 +120,7 @@ void CardTableBarrierSetAssembler::store_check_part1(MacroAssembler* masm, Regis
assert(bs->kind() == BarrierSet::CardTableBarrierSet,
"Wrong barrier set kind");
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(bs);
CardTable* ct = ctbs->card_table();
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
// Load card table base address.
@ -140,7 +137,7 @@ void CardTableBarrierSetAssembler::store_check_part1(MacroAssembler* masm, Regis
Possible cause is a cache miss (card table base address resides in a
rarely accessed area of thread descriptor).
*/
__ mov_address(card_table_base, (address)ct->byte_map_base());
__ mov_address(card_table_base, (address)ctbs->card_table_base_const());
}
// The 2nd part of the store check.
@ -170,8 +167,8 @@ void CardTableBarrierSetAssembler::store_check_part2(MacroAssembler* masm, Regis
void CardTableBarrierSetAssembler::set_card(MacroAssembler* masm, Register card_table_base, Address card_table_addr, Register tmp) {
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(BarrierSet::barrier_set());
CardTable* ct = ctbs->card_table();
if ((((uintptr_t)ct->byte_map_base() & 0xff) == 0)) {
if ((((uintptr_t)ctbs->card_table_base_const() & 0xff) == 0)) {
// Card table is aligned so the lowest byte of the table address base is zero.
// This works only if the code is not saved for later use, possibly
// in a context where the base would no longer be aligned.

View File

@ -103,8 +103,7 @@ void CardTableBarrierSetAssembler::resolve_jobject(MacroAssembler* masm, Registe
void CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembler* masm, DecoratorSet decorators, Register addr,
Register count, Register preserve) {
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(BarrierSet::barrier_set());
CardTable* ct = ctbs->card_table();
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
assert_different_registers(addr, count, R0);
Label Lskip_loop, Lstore_loop;
@ -117,7 +116,7 @@ void CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembl
__ srdi(addr, addr, CardTable::card_shift());
__ srdi(count, count, CardTable::card_shift());
__ subf(count, addr, count);
__ add_const_optimized(addr, addr, (address)ct->byte_map_base(), R0);
__ add_const_optimized(addr, addr, (address)ctbs->card_table_base_const(), R0);
__ addi(count, count, 1);
__ li(R0, 0);
__ mtctr(count);
@ -140,8 +139,8 @@ void CardTableBarrierSetAssembler::card_table_write(MacroAssembler* masm,
}
void CardTableBarrierSetAssembler::card_write_barrier_post(MacroAssembler* masm, Register store_addr, Register tmp) {
CardTableBarrierSet* bs = barrier_set_cast<CardTableBarrierSet>(BarrierSet::barrier_set());
card_table_write(masm, bs->card_table()->byte_map_base(), tmp, store_addr);
CardTableBarrierSet* bs = CardTableBarrierSet::barrier_set();
card_table_write(masm, bs->card_table_base_const(), tmp, store_addr);
}
void CardTableBarrierSetAssembler::oop_store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,

View File

@ -771,9 +771,6 @@ void ShenandoahBarrierSetAssembler::cmpxchg_oop(MacroAssembler *masm, Register b
void ShenandoahBarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembler* masm, DecoratorSet decorators,
Register addr, Register count, Register preserve) {
assert(ShenandoahCardBarrier, "Should have been checked by caller");
ShenandoahBarrierSet* bs = ShenandoahBarrierSet::barrier_set();
CardTable* ct = bs->card_table();
assert_different_registers(addr, count, R0);
Label L_skip_loop, L_store_loop;

View File

@ -5110,9 +5110,8 @@ void MacroAssembler::get_thread(Register thread) {
}
void MacroAssembler::load_byte_map_base(Register reg) {
CardTable::CardValue* byte_map_base =
((CardTableBarrierSet*)(BarrierSet::barrier_set()))->card_table()->byte_map_base();
mv(reg, (uint64_t)byte_map_base);
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
mv(reg, (uint64_t)ctbs->card_table_base_const());
}
void MacroAssembler::build_frame(int framesize) {

View File

@ -83,8 +83,7 @@ void CardTableBarrierSetAssembler::resolve_jobject(MacroAssembler* masm, Registe
void CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembler* masm, DecoratorSet decorators, Register addr, Register count,
bool do_return) {
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(BarrierSet::barrier_set());
CardTable* ct = ctbs->card_table();
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
NearLabel doXC, done;
assert_different_registers(Z_R0, Z_R1, addr, count);
@ -105,7 +104,7 @@ void CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembl
__ add2reg_with_index(count, -BytesPerHeapOop, count, addr);
// Get base address of card table.
__ load_const_optimized(Z_R1, (address)ct->byte_map_base());
__ load_const_optimized(Z_R1, (address)ctbs->card_table_base_const());
// count = (count>>shift) - (addr>>shift)
__ z_srlg(addr, addr, CardTable::card_shift());
@ -179,13 +178,12 @@ void CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembl
void CardTableBarrierSetAssembler::store_check(MacroAssembler* masm, Register store_addr, Register tmp) {
// Does a store check for the oop in register obj. The content of
// register obj is destroyed afterwards.
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(BarrierSet::barrier_set());
CardTable* ct = ctbs->card_table();
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
assert_different_registers(store_addr, tmp);
__ z_srlg(store_addr, store_addr, CardTable::card_shift());
__ load_absolute_address(tmp, (address)ct->byte_map_base());
__ load_absolute_address(tmp, (address)ctbs->card_table_base_const());
__ z_agr(store_addr, tmp);
__ z_mvi(0, store_addr, CardTable::dirty_card_val());
}

View File

@ -95,11 +95,7 @@ void CardTableBarrierSetAssembler::store_at(MacroAssembler* masm, DecoratorSet d
void CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembler* masm, DecoratorSet decorators,
Register addr, Register count, Register tmp) {
BarrierSet *bs = BarrierSet::barrier_set();
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(bs);
CardTable* ct = ctbs->card_table();
intptr_t disp = (intptr_t) ct->byte_map_base();
SHENANDOAHGC_ONLY(assert(!UseShenandoahGC, "Shenandoah byte_map_base is not constant.");)
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
Label L_loop, L_done;
const Register end = count;
@ -115,7 +111,7 @@ void CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembl
__ shrptr(end, CardTable::card_shift());
__ subptr(end, addr); // end --> cards count
__ mov64(tmp, disp);
__ mov64(tmp, (intptr_t)ctbs->card_table_base_const());
__ addptr(addr, tmp);
__ BIND(L_loop);
__ movb(Address(addr, count, Address::times_1), 0);
@ -128,10 +124,7 @@ __ BIND(L_done);
void CardTableBarrierSetAssembler::store_check(MacroAssembler* masm, Register obj, Address dst) {
// Does a store check for the oop in register obj. The content of
// register obj is destroyed afterwards.
BarrierSet* bs = BarrierSet::barrier_set();
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(bs);
CardTable* ct = ctbs->card_table();
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
__ shrptr(obj, CardTable::card_shift());
@ -142,7 +135,7 @@ void CardTableBarrierSetAssembler::store_check(MacroAssembler* masm, Register ob
// So this essentially converts an address to a displacement and it will
// never need to be relocated. On 64bit however the value may be too
// large for a 32bit displacement.
intptr_t byte_map_base = (intptr_t)ct->byte_map_base();
intptr_t byte_map_base = (intptr_t)ctbs->card_table_base_const();
if (__ is_simm32(byte_map_base)) {
card_addr = Address(noreg, obj, Address::times_1, byte_map_base);
} else {

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 1997, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1997, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -921,8 +921,9 @@ void VM_Version::get_processor_features() {
// Check if processor has Intel Ecore
if (FLAG_IS_DEFAULT(EnableX86ECoreOpts) && is_intel() && is_intel_server_family() &&
(_model == 0x97 || _model == 0xAA || _model == 0xAC || _model == 0xAF ||
_model == 0xCC || _model == 0xDD)) {
(supports_hybrid() ||
_model == 0xAF /* Xeon 6 E-cores (Sierra Forest) */ ||
_model == 0xDD /* Xeon 6+ E-cores (Clearwater Forest) */ )) {
FLAG_SET_DEFAULT(EnableX86ECoreOpts, true);
}

View File

@ -304,12 +304,13 @@ void OSContainer::print_container_metric(outputStream* st, const char* metrics,
constexpr int longest_value = max_length - 11; // Max length - shortest "metric: " string ("cpu_quota: ")
char value_str[longest_value + 1] = {};
os::snprintf_checked(value_str, longest_value, metric_fmt<T>::fmt, value);
st->print("%s: %*s", metrics, max_length - static_cast<int>(strlen(metrics)) - 2, value_str); // -2 for the ": "
if (unit[0] != '\0') {
st->print_cr(" %s", unit);
} else {
st->print_cr("");
}
const int pad_width = max_length - static_cast<int>(strlen(metrics)) - 2; // -2 for the ": "
const char* unit_prefix = unit[0] != '\0' ? " " : "";
char line[128] = {};
os::snprintf_checked(line, sizeof(line), "%s: %*s%s%s", metrics, pad_width, value_str, unit_prefix, unit);
st->print_cr("%s", line);
}
void OSContainer::print_container_helper(outputStream* st, MetricResult& res, const char* metrics) {

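The rewritten print_container_metric assembles the whole line in one buffer so the value stays right-aligned and the optional unit follows after a single space. A self-contained illustration of the same "%*s" padding trick, using plain snprintf and a hypothetical max_length of 25 (both assumptions of this demo):

#include <cstdio>
#include <cstring>

int main() {
  const int   max_length = 25;          // hypothetical column width for this demo
  const char* metrics    = "cpu_quota";
  const char* value_str  = "100000";
  const char* unit       = "usec";

  const int pad_width = max_length - (int)strlen(metrics) - 2;   // -2 for the ": "
  const char* unit_prefix = unit[0] != '\0' ? " " : "";

  char line[128] = {};
  // Right-align value_str within pad_width, then append the unit (if any).
  snprintf(line, sizeof(line), "%s: %*s%s%s", metrics, pad_width, value_str, unit_prefix, unit);
  puts(line);
  return 0;
}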
View File

@ -1,6 +1,6 @@
/*
* Copyright (c) 1997, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2025 SAP SE. All rights reserved.
* Copyright (c) 1997, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2026 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -412,12 +412,8 @@ run_stub:
}
void os::Aix::init_thread_fpu_state(void) {
#if !defined(USE_XLC_BUILTINS)
// Disable FP exceptions.
__asm__ __volatile__ ("mtfsfi 6,0");
#else
__mtfsfi(6, 0);
#endif
}
////////////////////////////////////////////////////////////////////////////////

View File

@ -1,6 +1,6 @@
/*
* Copyright (c) 1997, 2026, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2013 SAP SE. All rights reserved.
* Copyright (c) 2012, 2026 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -29,29 +29,21 @@
// Included in runtime/prefetch.inline.hpp
inline void Prefetch::read(const void *loc, intx interval) {
#if !defined(USE_XLC_BUILTINS)
__asm__ __volatile__ (
" dcbt 0, %0 \n"
:
: /*%0*/"r" ( ((address)loc) +((long)interval) )
//:
);
#else
__dcbt(((address)loc) +((long)interval));
#endif
}
inline void Prefetch::write(void *loc, intx interval) {
#if !defined(USE_XLC_BUILTINS)
__asm__ __volatile__ (
" dcbtst 0, %0 \n"
:
: /*%0*/"r" ( ((address)loc) +((long)interval) )
//:
);
#else
__dcbtst( ((address)loc) +((long)interval) );
#endif
}
#endif // OS_CPU_AIX_PPC_PREFETCH_AIX_PPC_INLINE_HPP
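With the XLC builtin path gone, both helpers above are plain inline assembly. As a point of reference (not part of the change), the same effect can be sketched portably with the GCC/Clang prefetch builtin; the function names here are illustrative:

// Portable sketch of Prefetch::read / Prefetch::write: hint that the cache line
// at loc + interval will soon be read or written. dcbt / dcbtst above are the
// PPC instructions behind the same idea.
inline void prefetch_read_sketch(const void* loc, long interval) {
  __builtin_prefetch((const char*)loc + interval, /*rw=*/0, /*locality=*/3);
}

inline void prefetch_write_sketch(void* loc, long interval) {
  __builtin_prefetch((char*)loc + interval, /*rw=*/1, /*locality=*/3);
}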

View File

@ -43,7 +43,8 @@ void JavaThread::cache_global_variables() {
BarrierSet* bs = BarrierSet::barrier_set();
if (bs->is_a(BarrierSet::CardTableBarrierSet)) {
_card_table_base = (address) (barrier_set_cast<CardTableBarrierSet>(bs)->card_table()->byte_map_base());
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
_card_table_base = (address)ctbs->card_table_base_const();
} else {
_card_table_base = nullptr;
}

View File

@ -0,0 +1,34 @@
/*
* Copyright (c) 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "cds/aotGrowableArray.hpp"
#include "cds/aotMetaspace.hpp"
#include "memory/allocation.inline.hpp"
#include "utilities/growableArray.hpp"
void AOTGrowableArrayHelper::deallocate(void* mem) {
if (!AOTMetaspace::in_aot_cache(mem)) {
GrowableArrayCHeapAllocator::deallocate(mem);
}
}

View File

@ -0,0 +1,76 @@
/*
* Copyright (c) 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_AOT_AOTGROWABLEARRAY_HPP
#define SHARE_AOT_AOTGROWABLEARRAY_HPP
#include <memory/metaspaceClosureType.hpp>
#include <utilities/growableArray.hpp>
class AOTGrowableArrayHelper {
public:
static void deallocate(void* mem);
};
// An AOTGrowableArray<T> provides the same functionality as a GrowableArray<T> that
// uses the C heap allocator. In addition, AOTGrowableArray<T> can be iterated with
// MetaspaceClosure. This type should be used for growable arrays that need to be
// stored in the AOT cache. See ModuleEntry::_reads for an example.
template <typename E>
class AOTGrowableArray : public GrowableArrayWithAllocator<E, AOTGrowableArray<E>> {
friend class VMStructs;
friend class GrowableArrayWithAllocator<E, AOTGrowableArray>;
static E* allocate(int max, MemTag mem_tag) {
return (E*)GrowableArrayCHeapAllocator::allocate(max, sizeof(E), mem_tag);
}
E* allocate() {
return allocate(this->_capacity, mtClass);
}
void deallocate(E* mem) {
#if INCLUDE_CDS
AOTGrowableArrayHelper::deallocate(mem);
#else
GrowableArrayCHeapAllocator::deallocate(mem);
#endif
}
public:
AOTGrowableArray(int initial_capacity, MemTag mem_tag) :
GrowableArrayWithAllocator<E, AOTGrowableArray>(
allocate(initial_capacity, mem_tag),
initial_capacity) {}
AOTGrowableArray() : AOTGrowableArray(0, mtClassShared) {}
// methods required by MetaspaceClosure
void metaspace_pointers_do(MetaspaceClosure* it);
int size_in_heapwords() const { return (int)heap_word_size(sizeof(*this)); }
MetaspaceClosureType type() const { return MetaspaceClosureType::GrowableArrayType; }
static bool is_read_only_by_default() { return false; }
};
#endif // SHARE_AOT_AOTGROWABLEARRAY_HPP
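A short usage sketch for the new container, patterned after the ModuleEntry::_reads changes later in this comparison. It is not standalone (it assumes HotSpot's mtModule tag and the ModuleEntry type), and the helper name is illustrative only:

// Illustrative only: how a lazily-created reads list looks with the new type.
// Mirrors ModuleEntry::add_read() / pack_reads() further down in this comparison.
AOTGrowableArray<ModuleEntry*>* make_reads_list(ModuleEntry* dependency) {
  AOTGrowableArray<ModuleEntry*>* reads =
      new (mtModule) AOTGrowableArray<ModuleEntry*>(4 /* initial capacity */, mtModule);
  reads->append(dependency);   // grows on the C heap, exactly like GrowableArray
  reads->shrink_to_fit();      // what pack_reads() does before the array is archived
  return reads;                // reachable by MetaspaceClosure via metaspace_pointers_do()
}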

View File

@ -0,0 +1,37 @@
/*
* Copyright (c) 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_CDS_AOTGROWABLEARRAY_INLINE_HPP
#define SHARE_CDS_AOTGROWABLEARRAY_INLINE_HPP
#include "cds/aotGrowableArray.hpp"
#include "memory/metaspaceClosure.hpp"
template <typename E>
void AOTGrowableArray<E>::metaspace_pointers_do(MetaspaceClosure* it) {
it->push_c_array(AOTGrowableArray<E>::data_addr(), AOTGrowableArray<E>::capacity());
}
#endif // SHARE_CDS_AOTGROWABLEARRAY_INLINE_HPP

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -29,6 +29,8 @@
#include "cds/aotStreamedHeapWriter.hpp"
#include "cds/cdsConfig.hpp"
#include "cds/filemap.hpp"
#include "classfile/moduleEntry.hpp"
#include "classfile/packageEntry.hpp"
#include "classfile/systemDictionaryShared.hpp"
#include "classfile/vmClasses.hpp"
#include "logging/log.hpp"
@ -141,7 +143,7 @@ public:
info._buffered_addr = ref->obj();
info._requested_addr = ref->obj();
info._bytes = ref->size() * BytesPerWord;
info._type = ref->msotype();
info._type = ref->type();
_objs.append(info);
}
@ -214,7 +216,7 @@ void AOTMapLogger::dumptime_log_metaspace_region(const char* name, DumpRegion* r
info._buffered_addr = src_info->buffered_addr();
info._requested_addr = info._buffered_addr + _buffer_to_requested_delta;
info._bytes = src_info->size_in_bytes();
info._type = src_info->msotype();
info._type = src_info->type();
objs.append(info);
}
@ -332,43 +334,52 @@ void AOTMapLogger::log_metaspace_objects_impl(address region_base, address regio
address buffered_addr = info._buffered_addr;
address requested_addr = info._requested_addr;
int bytes = info._bytes;
MetaspaceObj::Type type = info._type;
const char* type_name = MetaspaceObj::type_name(type);
MetaspaceClosureType type = info._type;
const char* type_name = MetaspaceClosure::type_name(type);
log_as_hex(last_obj_base, buffered_addr, last_obj_base + _buffer_to_requested_delta);
switch (type) {
case MetaspaceObj::ClassType:
case MetaspaceClosureType::ClassType:
log_klass((Klass*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceObj::ConstantPoolType:
case MetaspaceClosureType::ConstantPoolType:
log_constant_pool((ConstantPool*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceObj::ConstantPoolCacheType:
case MetaspaceClosureType::ConstantPoolCacheType:
log_constant_pool_cache((ConstantPoolCache*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceObj::ConstMethodType:
case MetaspaceClosureType::ConstMethodType:
log_const_method((ConstMethod*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceObj::MethodType:
case MetaspaceClosureType::MethodType:
log_method((Method*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceObj::MethodCountersType:
case MetaspaceClosureType::MethodCountersType:
log_method_counters((MethodCounters*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceObj::MethodDataType:
case MetaspaceClosureType::MethodDataType:
log_method_data((MethodData*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceObj::SymbolType:
case MetaspaceClosureType::ModuleEntryType:
log_module_entry((ModuleEntry*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceClosureType::PackageEntryType:
log_package_entry((PackageEntry*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceClosureType::GrowableArrayType:
log_growable_array((GrowableArrayBase*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceClosureType::SymbolType:
log_symbol((Symbol*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceObj::KlassTrainingDataType:
case MetaspaceClosureType::KlassTrainingDataType:
log_klass_training_data((KlassTrainingData*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceObj::MethodTrainingDataType:
case MetaspaceClosureType::MethodTrainingDataType:
log_method_training_data((MethodTrainingData*)src, requested_addr, type_name, bytes, current);
break;
case MetaspaceObj::CompileTrainingDataType:
case MetaspaceClosureType::CompileTrainingDataType:
log_compile_training_data((CompileTrainingData*)src, requested_addr, type_name, bytes, current);
break;
default:
@ -421,6 +432,27 @@ void AOTMapLogger::log_method_data(MethodData* md, address requested_addr, const
log_debug(aot, map)(_LOG_PREFIX " %s", p2i(requested_addr), type_name, bytes, md->method()->external_name());
}
void AOTMapLogger::log_module_entry(ModuleEntry* mod, address requested_addr, const char* type_name,
int bytes, Thread* current) {
ResourceMark rm(current);
log_debug(aot, map)(_LOG_PREFIX " %s", p2i(requested_addr), type_name, bytes,
mod->name_as_C_string());
}
void AOTMapLogger::log_package_entry(PackageEntry* pkg, address requested_addr, const char* type_name,
int bytes, Thread* current) {
ResourceMark rm(current);
log_debug(aot, map)(_LOG_PREFIX " %s - %s", p2i(requested_addr), type_name, bytes,
pkg->module()->name_as_C_string(), pkg->name_as_C_string());
}
void AOTMapLogger::log_growable_array(GrowableArrayBase* arr, address requested_addr, const char* type_name,
int bytes, Thread* current) {
ResourceMark rm(current);
log_debug(aot, map)(_LOG_PREFIX " %d (%d)", p2i(requested_addr), type_name, bytes,
arr->length(), arr->capacity());
}
void AOTMapLogger::log_klass(Klass* k, address requested_addr, const char* type_name,
int bytes, Thread* current) {
ResourceMark rm(current);

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -28,6 +28,7 @@
#include "cds/archiveBuilder.hpp"
#include "memory/allocation.hpp"
#include "memory/allStatic.hpp"
#include "memory/metaspaceClosureType.hpp"
#include "oops/oopsHierarchy.hpp"
#include "utilities/globalDefinitions.hpp"
#include "utilities/growableArray.hpp"
@ -37,9 +38,13 @@ class ArchiveStreamedHeapInfo;
class CompileTrainingData;
class DumpRegion;
class FileMapInfo;
class GrowableArrayBase;
class KlassTrainingData;
class MethodCounters;
class MethodTrainingData;
class ModuleEntry;
class outputStream;
class PackageEntry;
// Write detailed info to a mapfile to analyze contents of the AOT cache/CDS archive.
// -Xlog:aot+map* can be used both when creating an AOT cache, or when using an AOT cache.
@ -62,7 +67,7 @@ class AOTMapLogger : AllStatic {
address _buffered_addr;
address _requested_addr;
int _bytes;
MetaspaceObj::Type _type;
MetaspaceClosureType _type;
};
public:
@ -142,6 +147,9 @@ private:
Thread* current);
static void log_klass(Klass* k, address requested_addr, const char* type_name, int bytes, Thread* current);
static void log_method(Method* m, address requested_addr, const char* type_name, int bytes, Thread* current);
static void log_module_entry(ModuleEntry* mod, address requested_addr, const char* type_name, int bytes, Thread* current);
static void log_package_entry(PackageEntry* pkg, address requested_addr, const char* type_name, int bytes, Thread* current);
static void log_growable_array(GrowableArrayBase* arr, address requested_addr, const char* type_name, int bytes, Thread* current);
static void log_symbol(Symbol* s, address requested_addr, const char* type_name, int bytes, Thread* current);
static void log_klass_training_data(KlassTrainingData* ktd, address requested_addr, const char* type_name, int bytes, Thread* current);
static void log_method_training_data(MethodTrainingData* mtd, address requested_addr, const char* type_name, int bytes, Thread* current);

View File

@ -698,6 +698,9 @@ public:
Universe::metaspace_pointers_do(it);
vmSymbols::metaspace_pointers_do(it);
TrainingData::iterate_roots(it);
if (CDSConfig::is_dumping_full_module_graph()) {
ClassLoaderDataShared::iterate_roots(it);
}
// The above code should find all the symbols that are referenced by the
// archived classes. We just need to add the extra symbols which
@ -795,6 +798,10 @@ void VM_PopulateDumpSharedSpace::doit() {
_builder.make_klasses_shareable();
AOTMetaspace::make_method_handle_intrinsics_shareable();
if (CDSConfig::is_dumping_full_module_graph()) {
ClassLoaderDataShared::remove_unshareable_info();
}
dump_java_heap_objects();
dump_shared_symbol_table(_builder.symbols());
@ -1135,6 +1142,7 @@ void AOTMetaspace::dump_static_archive_impl(StaticArchiveBuilder& builder, TRAPS
HeapShared::init_heap_writer();
if (CDSConfig::is_dumping_full_module_graph()) {
ClassLoaderDataShared::ensure_module_entry_tables_exist();
ClassLoaderDataShared::build_tables(CHECK);
HeapShared::reset_archived_object_states(CHECK);
}
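Read together with the ClassLoaderDataShared, ModuleEntry and ArchiveBuilder hunks further down, the full-module-graph work is now concentrated in three phases of the static dump. A condensed sketch of that ordering, using only calls that appear in this comparison (unrelated calls in the same blocks are omitted):

// 1. Before writing: build plain Array<...> tables from the dumptime hashtables.
if (CDSConfig::is_dumping_full_module_graph()) {
  ClassLoaderDataShared::ensure_module_entry_tables_exist();
  ClassLoaderDataShared::build_tables(CHECK);       // ModuleEntryTable / PackageEntryTable -> build_aot_table()
}

// 2. While gathering metaspace objects: expose those tables as ordinary roots,
//    so ArchiveBuilder copies them (and the AOTGrowableArray reads lists) itself.
if (CDSConfig::is_dumping_full_module_graph()) {
  ClassLoaderDataShared::iterate_roots(it);
}

// 3. After the shallow copies exist: strip runtime-only state from the buffered
//    ModuleEntry / PackageEntry copies.
if (CDSConfig::is_dumping_full_module_graph()) {
  ClassLoaderDataShared::remove_unshareable_info();
}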

View File

@ -243,7 +243,7 @@ bool ArchiveBuilder::gather_klass_and_symbol(MetaspaceClosure::Ref* ref, bool re
if (get_follow_mode(ref) != make_a_copy) {
return false;
}
if (ref->msotype() == MetaspaceObj::ClassType) {
if (ref->type() == MetaspaceClosureType::ClassType) {
Klass* klass = (Klass*)ref->obj();
assert(klass->is_klass(), "must be");
if (!is_excluded(klass)) {
@ -252,7 +252,7 @@ bool ArchiveBuilder::gather_klass_and_symbol(MetaspaceClosure::Ref* ref, bool re
assert(klass->is_instance_klass(), "must be");
}
}
} else if (ref->msotype() == MetaspaceObj::SymbolType) {
} else if (ref->type() == MetaspaceClosureType::SymbolType) {
// Make sure the symbol won't be GC'ed while we are dumping the archive.
Symbol* sym = (Symbol*)ref->obj();
sym->increment_refcount();
@ -271,11 +271,6 @@ void ArchiveBuilder::gather_klasses_and_symbols() {
aot_log_info(aot)("Gathering classes and symbols ... ");
GatherKlassesAndSymbols doit(this);
iterate_roots(&doit);
#if INCLUDE_CDS_JAVA_HEAP
if (CDSConfig::is_dumping_full_module_graph()) {
ClassLoaderDataShared::iterate_symbols(&doit);
}
#endif
doit.finish();
if (CDSConfig::is_dumping_static_archive()) {
@ -446,14 +441,14 @@ bool ArchiveBuilder::gather_one_source_obj(MetaspaceClosure::Ref* ref, bool read
}
#ifdef ASSERT
if (ref->msotype() == MetaspaceObj::MethodType) {
if (ref->type() == MetaspaceClosureType::MethodType) {
Method* m = (Method*)ref->obj();
assert(!RegeneratedClasses::has_been_regenerated((address)m->method_holder()),
"Should not archive methods in a class that has been regenerated");
}
#endif
if (ref->msotype() == MetaspaceObj::MethodDataType) {
if (ref->type() == MetaspaceClosureType::MethodDataType) {
MethodData* md = (MethodData*)ref->obj();
md->clean_method_data(false /* always_clean */);
}
@ -554,16 +549,16 @@ ArchiveBuilder::FollowMode ArchiveBuilder::get_follow_mode(MetaspaceClosure::Ref
if (CDSConfig::is_dumping_dynamic_archive() && AOTMetaspace::in_aot_cache(obj)) {
// Don't dump existing shared metadata again.
return point_to_it;
} else if (ref->msotype() == MetaspaceObj::MethodDataType ||
ref->msotype() == MetaspaceObj::MethodCountersType ||
ref->msotype() == MetaspaceObj::KlassTrainingDataType ||
ref->msotype() == MetaspaceObj::MethodTrainingDataType ||
ref->msotype() == MetaspaceObj::CompileTrainingDataType) {
} else if (ref->type() == MetaspaceClosureType::MethodDataType ||
ref->type() == MetaspaceClosureType::MethodCountersType ||
ref->type() == MetaspaceClosureType::KlassTrainingDataType ||
ref->type() == MetaspaceClosureType::MethodTrainingDataType ||
ref->type() == MetaspaceClosureType::CompileTrainingDataType) {
return (TrainingData::need_data() || TrainingData::assembling_data()) ? make_a_copy : set_to_null;
} else if (ref->msotype() == MetaspaceObj::AdapterHandlerEntryType) {
} else if (ref->type() == MetaspaceClosureType::AdapterHandlerEntryType) {
return CDSConfig::is_dumping_adapters() ? make_a_copy : set_to_null;
} else {
if (ref->msotype() == MetaspaceObj::ClassType) {
if (ref->type() == MetaspaceClosureType::ClassType) {
Klass* klass = (Klass*)ref->obj();
assert(klass->is_klass(), "must be");
if (RegeneratedClasses::has_been_regenerated(klass)) {
@ -571,7 +566,12 @@ ArchiveBuilder::FollowMode ArchiveBuilder::get_follow_mode(MetaspaceClosure::Ref
}
if (is_excluded(klass)) {
ResourceMark rm;
log_debug(cds, dynamic)("Skipping class (excluded): %s", klass->external_name());
aot_log_trace(aot)("pointer set to null: class (excluded): %s", klass->external_name());
return set_to_null;
}
if (klass->is_array_klass() && CDSConfig::is_dumping_dynamic_archive()) {
ResourceMark rm;
aot_log_trace(aot)("pointer set to null: array class not supported in dynamic region: %s", klass->external_name());
return set_to_null;
}
}
@ -615,15 +615,6 @@ void ArchiveBuilder::dump_rw_metadata() {
ResourceMark rm;
aot_log_info(aot)("Allocating RW objects ... ");
make_shallow_copies(&_rw_region, &_rw_src_objs);
#if INCLUDE_CDS_JAVA_HEAP
if (CDSConfig::is_dumping_full_module_graph()) {
// Archive the ModuleEntry's and PackageEntry's of the 3 built-in loaders
char* start = rw_region()->top();
ClassLoaderDataShared::allocate_archived_tables();
alloc_stats()->record_modules(rw_region()->top() - start, /*read_only*/false);
}
#endif
}
void ArchiveBuilder::dump_ro_metadata() {
@ -632,15 +623,6 @@ void ArchiveBuilder::dump_ro_metadata() {
start_dump_region(&_ro_region);
make_shallow_copies(&_ro_region, &_ro_src_objs);
#if INCLUDE_CDS_JAVA_HEAP
if (CDSConfig::is_dumping_full_module_graph()) {
char* start = ro_region()->top();
ClassLoaderDataShared::init_archived_tables();
alloc_stats()->record_modules(ro_region()->top() - start, /*read_only*/true);
}
#endif
RegeneratedClasses::record_regenerated_objects();
}
@ -658,7 +640,7 @@ void ArchiveBuilder::make_shallow_copy(DumpRegion *dump_region, SourceObjInfo* s
size_t alignment = SharedSpaceObjectAlignment; // alignment for the dest pointer
char* oldtop = dump_region->top();
if (src_info->msotype() == MetaspaceObj::ClassType) {
if (src_info->type() == MetaspaceClosureType::ClassType) {
// Allocate space for a pointer directly in front of the future InstanceKlass, so
// we can do a quick lookup from InstanceKlass* -> RunTimeClassInfo*
// without building another hashtable. See RunTimeClassInfo::get_for()
@ -674,7 +656,7 @@ void ArchiveBuilder::make_shallow_copy(DumpRegion *dump_region, SourceObjInfo* s
alignment = nth_bit(ArchiveBuilder::precomputed_narrow_klass_shift());
}
#endif
} else if (src_info->msotype() == MetaspaceObj::SymbolType) {
} else if (src_info->type() == MetaspaceClosureType::SymbolType) {
// Symbols may be allocated by using AllocateHeap, so their sizes
// may be less than size_in_bytes() indicates.
bytes = ((Symbol*)src)->byte_size();
@ -684,7 +666,7 @@ void ArchiveBuilder::make_shallow_copy(DumpRegion *dump_region, SourceObjInfo* s
memcpy(dest, src, bytes);
// Update the hash of buffered sorted symbols for static dump so that the symbols have deterministic contents
if (CDSConfig::is_dumping_static_archive() && (src_info->msotype() == MetaspaceObj::SymbolType)) {
if (CDSConfig::is_dumping_static_archive() && (src_info->type() == MetaspaceClosureType::SymbolType)) {
Symbol* buffered_symbol = (Symbol*)dest;
assert(((Symbol*)src)->is_permanent(), "archived symbols must be permanent");
buffered_symbol->update_identity_hash();
@ -699,7 +681,7 @@ void ArchiveBuilder::make_shallow_copy(DumpRegion *dump_region, SourceObjInfo* s
}
}
intptr_t* archived_vtable = CppVtables::get_archived_vtable(src_info->msotype(), (address)dest);
intptr_t* archived_vtable = CppVtables::get_archived_vtable(src_info->type(), (address)dest);
if (archived_vtable != nullptr) {
*(address*)dest = (address)archived_vtable;
ArchivePtrMarker::mark_pointer((address*)dest);
@ -709,7 +691,7 @@ void ArchiveBuilder::make_shallow_copy(DumpRegion *dump_region, SourceObjInfo* s
src_info->set_buffered_addr((address)dest);
char* newtop = dump_region->top();
_alloc_stats.record(src_info->msotype(), int(newtop - oldtop), src_info->read_only());
_alloc_stats.record(src_info->type(), int(newtop - oldtop), src_info->read_only());
DEBUG_ONLY(_alloc_stats.verify((int)dump_region->used(), src_info->read_only()));
}
@ -992,15 +974,15 @@ void ArchiveBuilder::make_training_data_shareable() {
return;
}
if (info.msotype() == MetaspaceObj::KlassTrainingDataType ||
info.msotype() == MetaspaceObj::MethodTrainingDataType ||
info.msotype() == MetaspaceObj::CompileTrainingDataType) {
if (info.type() == MetaspaceClosureType::KlassTrainingDataType ||
info.type() == MetaspaceClosureType::MethodTrainingDataType ||
info.type() == MetaspaceClosureType::CompileTrainingDataType) {
TrainingData* buffered_td = (TrainingData*)info.buffered_addr();
buffered_td->remove_unshareable_info();
} else if (info.msotype() == MetaspaceObj::MethodDataType) {
} else if (info.type() == MetaspaceClosureType::MethodDataType) {
MethodData* buffered_mdo = (MethodData*)info.buffered_addr();
buffered_mdo->remove_unshareable_info();
} else if (info.msotype() == MetaspaceObj::MethodCountersType) {
} else if (info.type() == MetaspaceClosureType::MethodCountersType) {
MethodCounters* buffered_mc = (MethodCounters*)info.buffered_addr();
buffered_mc->remove_unshareable_info();
}

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -134,13 +134,13 @@ private:
int _size_in_bytes;
int _id; // Each object has a unique serial ID, starting from zero. The ID is assigned
// when the object is added into _source_objs.
MetaspaceObj::Type _msotype;
MetaspaceClosureType _type;
address _source_addr; // The source object to be copied.
address _buffered_addr; // The copy of this object insider the buffer.
public:
SourceObjInfo(MetaspaceClosure::Ref* ref, bool read_only, FollowMode follow_mode) :
_ptrmap_start(0), _ptrmap_end(0), _read_only(read_only), _has_embedded_pointer(false), _follow_mode(follow_mode),
_size_in_bytes(ref->size() * BytesPerWord), _id(0), _msotype(ref->msotype()),
_size_in_bytes(ref->size() * BytesPerWord), _id(0), _type(ref->type()),
_source_addr(ref->obj()) {
if (follow_mode == point_to_it) {
_buffered_addr = ref->obj();
@ -155,7 +155,7 @@ private:
SourceObjInfo(address src, SourceObjInfo* renegerated_obj_info) :
_ptrmap_start(0), _ptrmap_end(0), _read_only(false),
_follow_mode(renegerated_obj_info->_follow_mode),
_size_in_bytes(0), _msotype(renegerated_obj_info->_msotype),
_size_in_bytes(0), _type(renegerated_obj_info->_type),
_source_addr(src), _buffered_addr(renegerated_obj_info->_buffered_addr) {}
bool should_copy() const { return _follow_mode == make_a_copy; }
@ -182,7 +182,7 @@ private:
}
return _buffered_addr;
}
MetaspaceObj::Type msotype() const { return _msotype; }
MetaspaceClosureType type() const { return _type; }
FollowMode follow_mode() const { return _follow_mode; }
};

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -22,12 +22,14 @@
*
*/
#include "cds/aotGrowableArray.hpp"
#include "cds/aotMetaspace.hpp"
#include "cds/archiveBuilder.hpp"
#include "cds/archiveUtils.hpp"
#include "cds/cdsConfig.hpp"
#include "cds/cppVtables.hpp"
#include "logging/log.hpp"
#include "memory/resourceArea.hpp"
#include "oops/instanceClassLoaderKlass.hpp"
#include "oops/instanceMirrorKlass.hpp"
#include "oops/instanceRefKlass.hpp"
@ -53,6 +55,19 @@
// + at run time: we clone the actual contents of the vtables from libjvm.so
// into our own tables.
#ifndef PRODUCT
// AOTGrowableArray has a vtable only when in non-product builds (due to
// the virtual printing functions in AnyObj).
using GrowableArray_ModuleEntry_ptr = AOTGrowableArray<ModuleEntry*>;
#define DEBUG_CPP_VTABLE_TYPES_DO(f) \
f(GrowableArray_ModuleEntry_ptr) \
#endif
// Currently, the archive contains ONLY the following types of objects that have C++ vtables.
#define CPP_VTABLE_TYPES_DO(f) \
f(ConstantPool) \
@ -68,7 +83,8 @@
f(TypeArrayKlass) \
f(KlassTrainingData) \
f(MethodTrainingData) \
f(CompileTrainingData)
f(CompileTrainingData) \
NOT_PRODUCT(DEBUG_CPP_VTABLE_TYPES_DO(f))
class CppVtableInfo {
intptr_t _vtable_size;
@ -86,7 +102,7 @@ public:
}
};
static inline intptr_t* vtable_of(const Metadata* m) {
static inline intptr_t* vtable_of(const void* m) {
return *((intptr_t**)m);
}
@ -116,6 +132,7 @@ CppVtableInfo* CppVtableCloner<T>::allocate_and_initialize(const char* name) {
template <class T>
void CppVtableCloner<T>::initialize(const char* name, CppVtableInfo* info) {
ResourceMark rm;
T tmp; // Allocate temporary dummy metadata object to get to the original vtable.
int n = info->vtable_size();
intptr_t* srcvtable = vtable_of(&tmp);
@ -268,7 +285,7 @@ void CppVtables::serialize(SerializeClosure* soc) {
}
}
intptr_t* CppVtables::get_archived_vtable(MetaspaceObj::Type msotype, address obj) {
intptr_t* CppVtables::get_archived_vtable(MetaspaceClosureType type, address obj) {
if (!_orig_cpp_vtptrs_inited) {
CPP_VTABLE_TYPES_DO(INIT_ORIG_CPP_VTPTRS);
_orig_cpp_vtptrs_inited = true;
@ -276,19 +293,23 @@ intptr_t* CppVtables::get_archived_vtable(MetaspaceObj::Type msotype, address ob
assert(CDSConfig::is_dumping_archive(), "sanity");
int kind = -1;
switch (msotype) {
case MetaspaceObj::SymbolType:
case MetaspaceObj::TypeArrayU1Type:
case MetaspaceObj::TypeArrayU2Type:
case MetaspaceObj::TypeArrayU4Type:
case MetaspaceObj::TypeArrayU8Type:
case MetaspaceObj::TypeArrayOtherType:
case MetaspaceObj::ConstMethodType:
case MetaspaceObj::ConstantPoolCacheType:
case MetaspaceObj::AnnotationsType:
case MetaspaceObj::RecordComponentType:
case MetaspaceObj::AdapterHandlerEntryType:
case MetaspaceObj::AdapterFingerPrintType:
switch (type) {
case MetaspaceClosureType::SymbolType:
case MetaspaceClosureType::TypeArrayU1Type:
case MetaspaceClosureType::TypeArrayU2Type:
case MetaspaceClosureType::TypeArrayU4Type:
case MetaspaceClosureType::TypeArrayU8Type:
case MetaspaceClosureType::TypeArrayOtherType:
case MetaspaceClosureType::CArrayType:
case MetaspaceClosureType::ConstMethodType:
case MetaspaceClosureType::ConstantPoolCacheType:
case MetaspaceClosureType::AnnotationsType:
case MetaspaceClosureType::ModuleEntryType:
case MetaspaceClosureType::PackageEntryType:
case MetaspaceClosureType::RecordComponentType:
case MetaspaceClosureType::AdapterHandlerEntryType:
case MetaspaceClosureType::AdapterFingerPrintType:
PRODUCT_ONLY(case MetaspaceClosureType::GrowableArrayType:)
// These have no vtables.
break;
default:

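vtable_of() now takes const void* so it can also be applied to the non-Metadata AOTGrowableArray dummy. It relies on the usual ABI layout in which the first word of a polymorphic object points at its vtable, which is also why a single default-constructed dummy object ("T tmp" in CppVtableCloner::initialize) is enough to locate the vtable for a whole class. A small standalone illustration of that assumption (illustrative code, not part of the change):

#include <cstdint>
#include <cstdio>

struct Poly {
  virtual void f() {}
  virtual ~Poly() = default;
};

// Same shape as CppVtables' vtable_of(): read the vtable pointer out of the
// object's first word. This is ABI-specific behavior, which is exactly the
// property the cloner depends on.
static inline intptr_t* vtable_of(const void* m) {
  return *((intptr_t**)m);
}

int main() {
  Poly a, b;
  // Objects of the same dynamic type share one vtable, so one dummy instance
  // suffices to find it for the whole class.
  printf("same vtable: %s\n", vtable_of(&a) == vtable_of(&b) ? "yes" : "no");
  return 0;
}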
View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,6 +27,7 @@
#include "memory/allocation.hpp"
#include "memory/allStatic.hpp"
#include "memory/metaspaceClosureType.hpp"
#include "utilities/globalDefinitions.hpp"
class ArchiveBuilder;
@ -40,7 +41,7 @@ class CppVtables : AllStatic {
public:
static void dumptime_init(ArchiveBuilder* builder);
static void zero_archived_vtables();
static intptr_t* get_archived_vtable(MetaspaceObj::Type msotype, address obj);
static intptr_t* get_archived_vtable(MetaspaceClosureType type, address obj);
static void serialize(SerializeClosure* sc);
static bool is_valid_shared_method(const Method* m) NOT_CDS_RETURN_(false);
static char* vtables_serialized_base() { return _vtables_serialized_base; }

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,32 +27,34 @@
#include "classfile/compactHashtable.hpp"
#include "memory/allocation.hpp"
#include "memory/metaspaceClosureType.hpp"
// This is for dumping detailed statistics for the allocations
// in the shared spaces.
class DumpAllocStats : public StackObj {
public:
// Here's poor man's enum inheritance
#define SHAREDSPACE_OBJ_TYPES_DO(f) \
METASPACE_OBJ_TYPES_DO(f) \
#define DUMPED_OBJ_TYPES_DO(f) \
METASPACE_CLOSURE_TYPES_DO(f) \
f(SymbolHashentry) \
f(SymbolBucket) \
f(StringHashentry) \
f(StringBucket) \
f(ModulesNatives) \
f(CppVTables) \
f(Other)
#define DUMPED_TYPE_DECLARE(name) name ## Type,
#define DUMPED_TYPE_NAME_CASE(name) case name ## Type: return #name;
enum Type {
// Types are MetaspaceObj::ClassType, MetaspaceObj::SymbolType, etc
SHAREDSPACE_OBJ_TYPES_DO(METASPACE_OBJ_TYPE_DECLARE)
DUMPED_OBJ_TYPES_DO(DUMPED_TYPE_DECLARE)
_number_of_types
};
static const char* type_name(Type type) {
switch(type) {
SHAREDSPACE_OBJ_TYPES_DO(METASPACE_OBJ_TYPE_NAME_CASE)
DUMPED_OBJ_TYPES_DO(DUMPED_TYPE_NAME_CASE)
default:
ShouldNotReachHere();
return nullptr;
@ -101,16 +103,12 @@ public:
CompactHashtableStats* symbol_stats() { return &_symbol_stats; }
CompactHashtableStats* string_stats() { return &_string_stats; }
void record(MetaspaceObj::Type type, int byte_size, bool read_only) {
assert(int(type) >= 0 && type < MetaspaceObj::_number_of_types, "sanity");
void record(MetaspaceClosureType type, int byte_size, bool read_only) {
int t = (int)type;
assert(t >= 0 && t < (int)MetaspaceClosureType::_number_of_types, "sanity");
int which = (read_only) ? RO : RW;
_counts[which][type] ++;
_bytes [which][type] += byte_size;
}
void record_modules(int byte_size, bool read_only) {
int which = (read_only) ? RO : RW;
_bytes [which][ModulesNativesType] += byte_size;
_counts[which][t] ++;
_bytes [which][t] += byte_size;
}
void record_other_type(int byte_size, bool read_only) {

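The explicit int conversion in the new record() suggests MetaspaceClosureType is a scoped enum, which (unlike the old MetaspaceObj::Type) no longer indexes the counters implicitly. A tiny illustration of that difference, with purely hypothetical names:

// Why the cast is needed, assuming MetaspaceClosureType is an enum class.
enum class Kind { A, B, _number_of_kinds };
int counts[(int)Kind::_number_of_kinds] = {};

void record(Kind k) {
  counts[(int)k]++;   // explicit conversion required; counts[k] would not compile
}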
View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2018, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2018, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -948,10 +948,6 @@ void HeapShared::archive_subgraphs() {
true /* is_full_module_graph */);
}
}
if (CDSConfig::is_dumping_full_module_graph()) {
Modules::verify_archived_modules();
}
}
//

View File

@ -216,6 +216,10 @@ ciField::ciField(fieldDescriptor *fd) :
static bool trust_final_non_static_fields(ciInstanceKlass* holder) {
if (holder == nullptr)
return false;
if (holder->trust_final_fields()) {
// Explicit opt-in from system classes
return true;
}
// Even if general trusting is disabled, trust system-built closures in these packages.
if (holder->is_in_package("java/lang/invoke") || holder->is_in_package("sun/invoke") ||
holder->is_in_package("java/lang/reflect") || holder->is_in_package("jdk/internal/reflect") ||
@ -230,14 +234,6 @@ static bool trust_final_non_static_fields(ciInstanceKlass* holder) {
// Trust final fields in records
if (holder->is_record())
return true;
// Trust Atomic*FieldUpdaters: they are very important for performance, and make up one
// more reason not to use Unsafe, if their final fields are trusted. See more in JDK-8140483.
if (holder->name() == ciSymbols::java_util_concurrent_atomic_AtomicIntegerFieldUpdater_Impl() ||
holder->name() == ciSymbols::java_util_concurrent_atomic_AtomicLongFieldUpdater_CASUpdater() ||
holder->name() == ciSymbols::java_util_concurrent_atomic_AtomicLongFieldUpdater_LockedUpdater() ||
holder->name() == ciSymbols::java_util_concurrent_atomic_AtomicReferenceFieldUpdater_Impl()) {
return true;
}
return TrustFinalNonStaticFields;
}

View File

@ -65,6 +65,7 @@ ciInstanceKlass::ciInstanceKlass(Klass* k) :
_has_nonstatic_concrete_methods = ik->has_nonstatic_concrete_methods();
_is_hidden = ik->is_hidden();
_is_record = ik->is_record();
_trust_final_fields = ik->trust_final_fields();
_nonstatic_fields = nullptr; // initialized lazily by compute_nonstatic_fields:
_has_injected_fields = -1;
_implementor = nullptr; // we will fill these lazily

View File

@ -59,6 +59,7 @@ private:
bool _has_nonstatic_concrete_methods;
bool _is_hidden;
bool _is_record;
bool _trust_final_fields;
bool _has_trusted_loader;
ciFlags _flags;
@ -207,6 +208,10 @@ public:
return _is_record;
}
bool trust_final_fields() const {
return _trust_final_fields;
}
ciInstanceKlass* get_canonical_holder(int offset);
ciField* get_field_by_offset(int field_offset, bool is_static);
ciField* get_field_by_name(ciSymbol* name, ciSymbol* signature, bool is_static);

View File

@ -42,10 +42,7 @@ const char* basictype_to_str(BasicType t) {
// ------------------------------------------------------------------
// card_table_base
CardTable::CardValue* ci_card_table_address() {
BarrierSet* bs = BarrierSet::barrier_set();
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(bs);
CardTable* ct = ctbs->card_table();
assert(!UseShenandoahGC, "Shenandoah byte_map_base is not constant.");
return ct->byte_map_base();
CardTable::CardValue* ci_card_table_address_const() {
CardTableBarrierSet* ctbs = CardTableBarrierSet::barrier_set();
return ctbs->card_table_base_const();
}

View File

@ -51,9 +51,9 @@ inline const char* bool_to_str(bool b) {
const char* basictype_to_str(BasicType t);
CardTable::CardValue* ci_card_table_address();
CardTable::CardValue* ci_card_table_address_const();
template <typename T> T ci_card_table_address_as() {
return reinterpret_cast<T>(ci_card_table_address());
return reinterpret_cast<T>(ci_card_table_address_const());
}
#endif // SHARE_CI_CIUTILITIES_HPP
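Call sites keep using the templated wrapper; only the underlying accessor changed to the strengthened constant base. A hypothetical call-site shape (variable names illustrative):

// Illustrative only: fetch the constant card table base in whatever shape a
// compiler backend wants to embed into generated code.
intptr_t card_base_bits = ci_card_table_address_as<intptr_t>();
CardTable::CardValue* card_base = ci_card_table_address_const();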

View File

@ -943,6 +943,7 @@ public:
_java_lang_Deprecated_for_removal,
_jdk_internal_vm_annotation_AOTSafeClassInitializer,
_method_AOTRuntimeSetup,
_jdk_internal_vm_annotation_TrustFinalFields,
_annotation_LIMIT
};
const Location _location;
@ -1878,6 +1879,11 @@ AnnotationCollector::annotation_index(const ClassLoaderData* loader_data,
if (!privileged) break; // only allow in privileged code
return _field_Stable;
}
case VM_SYMBOL_ENUM_NAME(jdk_internal_vm_annotation_TrustFinalFields_signature): {
if (_location != _in_class) break; // only allow for classes
if (!privileged) break; // only allow in privileged code
return _jdk_internal_vm_annotation_TrustFinalFields;
}
case VM_SYMBOL_ENUM_NAME(jdk_internal_vm_annotation_Contended_signature): {
if (_location != _in_field && _location != _in_class) {
break; // only allow for fields and classes
@ -1992,6 +1998,9 @@ void ClassFileParser::ClassAnnotationCollector::apply_to(InstanceKlass* ik) {
if (has_annotation(_jdk_internal_vm_annotation_AOTSafeClassInitializer)) {
ik->set_has_aot_safe_initializer();
}
if (has_annotation(_jdk_internal_vm_annotation_TrustFinalFields)) {
ik->set_trust_final_fields(true);
}
}
#define MAX_ARGS_SIZE 255

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -33,6 +33,7 @@
#include "classfile/packageEntry.hpp"
#include "classfile/systemDictionary.hpp"
#include "logging/log.hpp"
#include "memory/metaspaceClosure.hpp"
#include "runtime/handles.inline.hpp"
#include "runtime/safepoint.hpp"
@ -56,9 +57,9 @@ class ArchivedClassLoaderData {
public:
ArchivedClassLoaderData() : _packages(nullptr), _modules(nullptr), _unnamed_module(nullptr) {}
void iterate_symbols(ClassLoaderData* loader_data, MetaspaceClosure* closure);
void allocate(ClassLoaderData* loader_data);
void init_archived_entries(ClassLoaderData* loader_data);
void iterate_roots(MetaspaceClosure* closure);
void build_tables(ClassLoaderData* loader_data, TRAPS);
void remove_unshareable_info();
ModuleEntry* unnamed_module() {
return _unnamed_module;
}
@ -80,17 +81,14 @@ static ModuleEntry* _archived_javabase_moduleEntry = nullptr;
static int _platform_loader_root_index = -1;
static int _system_loader_root_index = -1;
void ArchivedClassLoaderData::iterate_symbols(ClassLoaderData* loader_data, MetaspaceClosure* closure) {
void ArchivedClassLoaderData::iterate_roots(MetaspaceClosure* it) {
assert(CDSConfig::is_dumping_full_module_graph(), "must be");
assert_valid(loader_data);
if (loader_data != nullptr) {
loader_data->packages()->iterate_symbols(closure);
loader_data->modules() ->iterate_symbols(closure);
loader_data->unnamed_module()->iterate_symbols(closure);
}
it->push(&_packages);
it->push(&_modules);
it->push(&_unnamed_module);
}
void ArchivedClassLoaderData::allocate(ClassLoaderData* loader_data) {
void ArchivedClassLoaderData::build_tables(ClassLoaderData* loader_data, TRAPS) {
assert(CDSConfig::is_dumping_full_module_graph(), "must be");
assert_valid(loader_data);
if (loader_data != nullptr) {
@ -98,19 +96,28 @@ void ArchivedClassLoaderData::allocate(ClassLoaderData* loader_data) {
// address of the Symbols, which may be relocated at runtime due to ASLR.
// So we store the packages/modules in Arrays. At runtime, we create
// the hashtables using these arrays.
_packages = loader_data->packages()->allocate_archived_entries();
_modules = loader_data->modules() ->allocate_archived_entries();
_unnamed_module = loader_data->unnamed_module()->allocate_archived_entry();
_packages = loader_data->packages()->build_aot_table(loader_data, CHECK);
_modules = loader_data->modules()->build_aot_table(loader_data, CHECK);
_unnamed_module = loader_data->unnamed_module();
}
}
void ArchivedClassLoaderData::init_archived_entries(ClassLoaderData* loader_data) {
assert(CDSConfig::is_dumping_full_module_graph(), "must be");
assert_valid(loader_data);
if (loader_data != nullptr) {
loader_data->packages()->init_archived_entries(_packages);
loader_data->modules() ->init_archived_entries(_modules);
_unnamed_module->init_as_archived_entry();
void ArchivedClassLoaderData::remove_unshareable_info() {
if (_packages != nullptr) {
_packages = ArchiveBuilder::current()->get_buffered_addr(_packages);
for (int i = 0; i < _packages->length(); i++) {
_packages->at(i)->remove_unshareable_info();
}
}
if (_modules != nullptr) {
_modules = ArchiveBuilder::current()->get_buffered_addr(_modules);
for (int i = 0; i < _modules->length(); i++) {
_modules->at(i)->remove_unshareable_info();
}
}
if (_unnamed_module != nullptr) {
_unnamed_module = ArchiveBuilder::current()->get_buffered_addr(_unnamed_module);
_unnamed_module->remove_unshareable_info();
}
}
@ -153,7 +160,6 @@ void ArchivedClassLoaderData::clear_archived_oops() {
// ------------------------------
void ClassLoaderDataShared::load_archived_platform_and_system_class_loaders() {
#if INCLUDE_CDS_JAVA_HEAP
// The streaming object loader prefers loading the class loader related objects before
// the CLD constructor which has a NoSafepointVerifier.
if (!HeapShared::is_loading_streaming_mode()) {
@ -178,7 +184,6 @@ void ClassLoaderDataShared::load_archived_platform_and_system_class_loaders() {
if (system_loader_module_entry != nullptr) {
system_loader_module_entry->preload_archived_oops();
}
#endif
}
static ClassLoaderData* null_class_loader_data() {
@ -210,28 +215,27 @@ void ClassLoaderDataShared::ensure_module_entry_table_exists(oop class_loader) {
assert(met != nullptr, "sanity");
}
void ClassLoaderDataShared::iterate_symbols(MetaspaceClosure* closure) {
void ClassLoaderDataShared::build_tables(TRAPS) {
assert(CDSConfig::is_dumping_full_module_graph(), "must be");
_archived_boot_loader_data.iterate_symbols (null_class_loader_data(), closure);
_archived_platform_loader_data.iterate_symbols(java_platform_loader_data_or_null(), closure);
_archived_system_loader_data.iterate_symbols (java_system_loader_data_or_null(), closure);
_archived_boot_loader_data.build_tables(null_class_loader_data(), CHECK);
_archived_platform_loader_data.build_tables(java_platform_loader_data_or_null(), CHECK);
_archived_system_loader_data.build_tables(java_system_loader_data_or_null(), CHECK);
}
void ClassLoaderDataShared::allocate_archived_tables() {
void ClassLoaderDataShared::iterate_roots(MetaspaceClosure* it) {
assert(CDSConfig::is_dumping_full_module_graph(), "must be");
_archived_boot_loader_data.allocate (null_class_loader_data());
_archived_platform_loader_data.allocate(java_platform_loader_data_or_null());
_archived_system_loader_data.allocate (java_system_loader_data_or_null());
_archived_boot_loader_data.iterate_roots(it);
_archived_platform_loader_data.iterate_roots(it);
_archived_system_loader_data.iterate_roots(it);
}
void ClassLoaderDataShared::init_archived_tables() {
void ClassLoaderDataShared::remove_unshareable_info() {
assert(CDSConfig::is_dumping_full_module_graph(), "must be");
_archived_boot_loader_data.remove_unshareable_info();
_archived_platform_loader_data.remove_unshareable_info();
_archived_system_loader_data.remove_unshareable_info();
_archived_boot_loader_data.init_archived_entries (null_class_loader_data());
_archived_platform_loader_data.init_archived_entries(java_platform_loader_data_or_null());
_archived_system_loader_data.init_archived_entries (java_system_loader_data_or_null());
_archived_javabase_moduleEntry = ModuleEntry::get_archived_entry(ModuleEntryTable::javabase_moduleEntry());
_archived_javabase_moduleEntry = ArchiveBuilder::current()->get_buffered_addr(ModuleEntryTable::javabase_moduleEntry());
_platform_loader_root_index = HeapShared::append_root(SystemDictionary::java_platform_loader());
_system_loader_root_index = HeapShared::append_root(SystemDictionary::java_system_loader());
@ -271,7 +275,6 @@ ModuleEntry* ClassLoaderDataShared::archived_unnamed_module(ClassLoaderData* loa
return archived_module;
}
void ClassLoaderDataShared::clear_archived_oops() {
assert(!CDSConfig::is_using_full_module_graph(), "must be");
_archived_boot_loader_data.clear_archived_oops();

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -40,11 +40,11 @@ class ClassLoaderDataShared : AllStatic {
public:
static void load_archived_platform_and_system_class_loaders() NOT_CDS_JAVA_HEAP_RETURN;
static void restore_archived_modules_for_preloading_classes(JavaThread* current) NOT_CDS_JAVA_HEAP_RETURN;
static void build_tables(TRAPS) NOT_CDS_JAVA_HEAP_RETURN;
static void iterate_roots(MetaspaceClosure* closure) NOT_CDS_JAVA_HEAP_RETURN;
static void remove_unshareable_info() NOT_CDS_JAVA_HEAP_RETURN;
#if INCLUDE_CDS_JAVA_HEAP
static void ensure_module_entry_tables_exist();
static void allocate_archived_tables();
static void iterate_symbols(MetaspaceClosure* closure);
static void init_archived_tables();
static void serialize(SerializeClosure* f);
static void clear_archived_oops();
static void restore_archived_entries_for_null_class_loader_data();

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2016, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -23,6 +23,7 @@
*/
#include "cds/aotClassLocation.hpp"
#include "cds/aotGrowableArray.inline.hpp"
#include "cds/archiveBuilder.hpp"
#include "cds/archiveUtils.hpp"
#include "cds/cdsConfig.hpp"
@ -37,6 +38,7 @@
#include "jni.h"
#include "logging/log.hpp"
#include "logging/logStream.hpp"
#include "memory/metadataFactory.hpp"
#include "memory/resourceArea.hpp"
#include "memory/universe.hpp"
#include "oops/oopHandle.inline.hpp"
@ -44,7 +46,6 @@
#include "runtime/handles.inline.hpp"
#include "runtime/safepoint.hpp"
#include "utilities/events.hpp"
#include "utilities/growableArray.hpp"
#include "utilities/hashTable.hpp"
#include "utilities/ostream.hpp"
#include "utilities/quickSort.hpp"
@ -167,7 +168,7 @@ void ModuleEntry::add_read(ModuleEntry* m) {
} else {
if (reads() == nullptr) {
// Lazily create a module's reads list
GrowableArray<ModuleEntry*>* new_reads = new (mtModule) GrowableArray<ModuleEntry*>(MODULE_READS_SIZE, mtModule);
AOTGrowableArray<ModuleEntry*>* new_reads = new (mtModule) AOTGrowableArray<ModuleEntry*>(MODULE_READS_SIZE, mtModule);
set_reads(new_reads);
}
@ -274,8 +275,7 @@ ModuleEntry::ModuleEntry(Handle module_handle,
_has_default_read_edges(false),
_must_walk_reads(false),
_is_open(is_open),
_is_patched(false)
DEBUG_ONLY(COMMA _reads_is_archived(false)) {
_is_patched(false) {
// Initialize fields specific to a ModuleEntry
if (_name == nullptr) {
@ -394,7 +394,6 @@ ModuleEntryTable::~ModuleEntryTable() {
ModuleEntryTableDeleter deleter;
_table.unlink(&deleter);
assert(_table.number_of_entries() == 0, "should have removed all entries");
}
void ModuleEntry::set_loader_data(ClassLoaderData* cld) {
@ -402,147 +401,51 @@ void ModuleEntry::set_loader_data(ClassLoaderData* cld) {
_loader_data = cld;
}
void ModuleEntry::metaspace_pointers_do(MetaspaceClosure* it) {
it->push(&_name);
it->push(&_reads);
it->push(&_version);
it->push(&_location);
}
#if INCLUDE_CDS_JAVA_HEAP
typedef HashTable<
const ModuleEntry*,
ModuleEntry*,
557, // prime number
AnyObj::C_HEAP> ArchivedModuleEntries;
static ArchivedModuleEntries* _archive_modules_entries = nullptr;
#ifndef PRODUCT
static int _num_archived_module_entries = 0;
static int _num_inited_module_entries = 0;
#endif
bool ModuleEntry::should_be_archived() const {
return SystemDictionaryShared::is_builtin_loader(loader_data());
}
ModuleEntry* ModuleEntry::allocate_archived_entry() const {
precond(should_be_archived());
precond(CDSConfig::is_dumping_full_module_graph());
ModuleEntry* archived_entry = (ModuleEntry*)ArchiveBuilder::rw_region_alloc(sizeof(ModuleEntry));
memcpy((void*)archived_entry, (void*)this, sizeof(ModuleEntry));
void ModuleEntry::remove_unshareable_info() {
_archived_module_index = HeapShared::append_root(module_oop());
archived_entry->_archived_module_index = HeapShared::append_root(module_oop());
if (_archive_modules_entries == nullptr) {
_archive_modules_entries = new (mtClass)ArchivedModuleEntries();
}
assert(_archive_modules_entries->get(this) == nullptr, "Each ModuleEntry must not be shared across ModuleEntryTables");
_archive_modules_entries->put(this, archived_entry);
DEBUG_ONLY(_num_archived_module_entries++);
if (CDSConfig::is_dumping_final_static_archive()) {
OopHandle null_handle;
archived_entry->_shared_pd = null_handle;
} else {
assert(archived_entry->shared_protection_domain() == nullptr, "never set during -Xshare:dump");
if (_reads != nullptr) {
_reads->set_in_aot_cache();
}
// Clear handles and restore at run time. Handles cannot be archived.
if (CDSConfig::is_dumping_final_static_archive()) {
OopHandle null_handle;
_shared_pd = null_handle;
} else {
assert(shared_protection_domain() == nullptr, "never set during -Xshare:dump");
}
OopHandle null_handle;
archived_entry->_module_handle = null_handle;
// For verify_archived_module_entries()
DEBUG_ONLY(_num_inited_module_entries++);
if (log_is_enabled(Info, aot, module)) {
ResourceMark rm;
LogStream ls(Log(aot, module)::info());
ls.print("Stored in archive: ");
archived_entry->print(&ls);
}
return archived_entry;
}
bool ModuleEntry::has_been_archived() {
assert(!ArchiveBuilder::current()->is_in_buffer_space(this), "must be called on original ModuleEntry");
return _archive_modules_entries->contains(this);
}
ModuleEntry* ModuleEntry::get_archived_entry(ModuleEntry* orig_entry) {
ModuleEntry** ptr = _archive_modules_entries->get(orig_entry);
assert(ptr != nullptr && *ptr != nullptr, "must have been allocated");
return *ptr;
}
// This function is used to archive ModuleEntry::_reads and PackageEntry::_qualified_exports.
// GrowableArray cannot be directly archived, as it needs to be expandable at runtime.
// Write it out as an Array, and convert it back to GrowableArray at runtime.
Array<ModuleEntry*>* ModuleEntry::write_growable_array(GrowableArray<ModuleEntry*>* array) {
Array<ModuleEntry*>* archived_array = nullptr;
int length = (array == nullptr) ? 0 : array->length();
if (length > 0) {
archived_array = ArchiveBuilder::new_ro_array<ModuleEntry*>(length);
for (int i = 0; i < length; i++) {
ModuleEntry* archived_entry = get_archived_entry(array->at(i));
archived_array->at_put(i, archived_entry);
ArchivePtrMarker::mark_pointer((address*)archived_array->adr_at(i));
}
}
return archived_array;
}
GrowableArray<ModuleEntry*>* ModuleEntry::restore_growable_array(Array<ModuleEntry*>* archived_array) {
GrowableArray<ModuleEntry*>* array = nullptr;
int length = (archived_array == nullptr) ? 0 : archived_array->length();
if (length > 0) {
array = new (mtModule) GrowableArray<ModuleEntry*>(length, mtModule);
for (int i = 0; i < length; i++) {
ModuleEntry* archived_entry = archived_array->at(i);
array->append(archived_entry);
}
}
return array;
}
void ModuleEntry::iterate_symbols(MetaspaceClosure* closure) {
closure->push(&_name);
closure->push(&_version);
closure->push(&_location);
}
void ModuleEntry::init_as_archived_entry() {
set_archived_reads(write_growable_array(reads()));
_module_handle = null_handle;
_loader_data = nullptr; // re-init at runtime
if (name() != nullptr) {
_shared_path_index = AOTClassLocationConfig::dumptime()->get_module_shared_path_index(_location);
_name = ArchiveBuilder::get_buffered_symbol(_name);
ArchivePtrMarker::mark_pointer((address*)&_name);
Symbol* src_location = ArchiveBuilder::current()->get_source_addr(_location);
_shared_path_index = AOTClassLocationConfig::dumptime()->get_module_shared_path_index(src_location);
} else {
// _shared_path_index is used only by SystemDictionary::is_shared_class_visible_impl()
// for checking classes in named modules.
_shared_path_index = -1;
}
if (_version != nullptr) {
_version = ArchiveBuilder::get_buffered_symbol(_version);
}
if (_location != nullptr) {
_location = ArchiveBuilder::get_buffered_symbol(_location);
}
JFR_ONLY(set_trace_id(0);) // re-init at runtime
ArchivePtrMarker::mark_pointer((address*)&_reads);
ArchivePtrMarker::mark_pointer((address*)&_version);
ArchivePtrMarker::mark_pointer((address*)&_location);
}
#ifndef PRODUCT
void ModuleEntry::verify_archived_module_entries() {
assert(_num_archived_module_entries == _num_inited_module_entries,
"%d ModuleEntries have been archived but %d of them have been properly initialized with archived java.lang.Module objects",
_num_archived_module_entries, _num_inited_module_entries);
}
#endif // PRODUCT
void ModuleEntry::load_from_archive(ClassLoaderData* loader_data) {
assert(CDSConfig::is_using_archive(), "runtime only");
set_loader_data(loader_data);
set_reads(restore_growable_array(archived_reads()));
JFR_ONLY(INIT_ID(this);)
}
@ -581,38 +484,28 @@ static int compare_module_by_name(ModuleEntry* a, ModuleEntry* b) {
return a->name()->fast_compare(b->name());
}
void ModuleEntryTable::iterate_symbols(MetaspaceClosure* closure) {
auto syms = [&] (const SymbolHandle& key, ModuleEntry*& m) {
m->iterate_symbols(closure);
};
_table.iterate_all(syms);
}
Array<ModuleEntry*>* ModuleEntryTable::allocate_archived_entries() {
Array<ModuleEntry*>* archived_modules = ArchiveBuilder::new_rw_array<ModuleEntry*>(_table.number_of_entries());
Array<ModuleEntry*>* ModuleEntryTable::build_aot_table(ClassLoaderData* loader_data, TRAPS) {
Array<ModuleEntry*>* aot_table =
MetadataFactory::new_array<ModuleEntry*>(loader_data, _table.number_of_entries(), nullptr, CHECK_NULL);
int n = 0;
auto grab = [&] (const SymbolHandle& key, ModuleEntry*& m) {
archived_modules->at_put(n++, m);
m->pack_reads();
aot_table->at_put(n++, m);
if (log_is_enabled(Info, aot, module)) {
ResourceMark rm;
LogStream ls(Log(aot, module)::info());
ls.print("Stored in archive: ");
m->print(&ls);
}
};
_table.iterate_all(grab);
if (n > 1) {
// Always allocate in the same order to produce deterministic archive.
QuickSort::sort(archived_modules->data(), n, compare_module_by_name);
QuickSort::sort(aot_table->data(), n, compare_module_by_name);
}
for (int i = 0; i < n; i++) {
archived_modules->at_put(i, archived_modules->at(i)->allocate_archived_entry());
ArchivePtrMarker::mark_pointer((address*)archived_modules->adr_at(i));
}
return archived_modules;
}
void ModuleEntryTable::init_archived_entries(Array<ModuleEntry*>* archived_modules) {
assert(CDSConfig::is_dumping_full_module_graph(), "sanity");
for (int i = 0; i < archived_modules->length(); i++) {
ModuleEntry* archived_entry = archived_modules->at(i);
archived_entry->init_as_archived_entry();
}
return aot_table;
}
void ModuleEntryTable::load_archived_entries(ClassLoaderData* loader_data,

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2016, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -25,7 +25,9 @@
#ifndef SHARE_CLASSFILE_MODULEENTRY_HPP
#define SHARE_CLASSFILE_MODULEENTRY_HPP
#include "cds/aotGrowableArray.hpp"
#include "jni.h"
#include "memory/metaspaceClosureType.hpp"
#include "oops/oopHandle.hpp"
#include "oops/symbol.hpp"
#include "oops/symbolHandle.hpp"
@ -68,11 +70,8 @@ private:
// for shared classes from this module
Symbol* _name; // name of this module
ClassLoaderData* _loader_data;
AOTGrowableArray<ModuleEntry*>* _reads; // list of modules that are readable by this module
union {
GrowableArray<ModuleEntry*>* _reads; // list of modules that are readable by this module
Array<ModuleEntry*>* _archived_reads; // List of readable modules stored in the CDS archive
};
Symbol* _version; // module version number
Symbol* _location; // module location
CDS_ONLY(int _shared_path_index;) // >=0 if classes in this module are in CDS archive
@ -81,7 +80,6 @@ private:
bool _must_walk_reads; // walk module's reads list at GC safepoints to purge out dead modules
bool _is_open; // whether the packages in the module are all unqualifiedly exported
bool _is_patched; // whether the module is patched via --patch-module
DEBUG_ONLY(bool _reads_is_archived);
CDS_JAVA_HEAP_ONLY(int _archived_module_index;)
JFR_ONLY(DEFINE_TRACE_ID_FIELD;)
@ -120,22 +118,18 @@ public:
bool can_read(ModuleEntry* m) const;
bool has_reads_list() const;
GrowableArray<ModuleEntry*>* reads() const {
assert(!_reads_is_archived, "sanity");
AOTGrowableArray<ModuleEntry*>* reads() const {
return _reads;
}
void set_reads(GrowableArray<ModuleEntry*>* r) {
void set_reads(AOTGrowableArray<ModuleEntry*>* r) {
_reads = r;
DEBUG_ONLY(_reads_is_archived = false);
}
Array<ModuleEntry*>* archived_reads() const {
assert(_reads_is_archived, "sanity");
return _archived_reads;
}
void set_archived_reads(Array<ModuleEntry*>* r) {
_archived_reads = r;
DEBUG_ONLY(_reads_is_archived = true);
void pack_reads() {
if (_reads != nullptr) {
_reads->shrink_to_fit();
}
}
void add_read(ModuleEntry* m);
void set_read_walk_required(ClassLoaderData* m_loader_data);
@ -189,6 +183,13 @@ public:
const char* name_as_C_string() const {
return is_named() ? name()->as_C_string() : UNNAMED_MODULE;
}
// methods required by MetaspaceClosure
void metaspace_pointers_do(MetaspaceClosure* it);
int size_in_heapwords() const { return (int)heap_word_size(sizeof(ModuleEntry)); }
MetaspaceClosureType type() const { return MetaspaceClosureType::ModuleEntryType; }
static bool is_read_only_by_default() { return false; }
void print(outputStream* st = tty) const;
void verify();
@ -198,18 +199,11 @@ public:
#if INCLUDE_CDS_JAVA_HEAP
bool should_be_archived() const;
void iterate_symbols(MetaspaceClosure* closure);
ModuleEntry* allocate_archived_entry() const;
void init_as_archived_entry();
static ModuleEntry* get_archived_entry(ModuleEntry* orig_entry);
bool has_been_archived();
static Array<ModuleEntry*>* write_growable_array(GrowableArray<ModuleEntry*>* array);
static GrowableArray<ModuleEntry*>* restore_growable_array(Array<ModuleEntry*>* archived_array);
void remove_unshareable_info();
void load_from_archive(ClassLoaderData* loader_data);
void preload_archived_oops();
void restore_archived_oops(ClassLoaderData* loader_data);
void clear_archived_oops();
static void verify_archived_module_entries() PRODUCT_RETURN;
#endif
};
@ -275,9 +269,7 @@ public:
void verify();
#if INCLUDE_CDS_JAVA_HEAP
void iterate_symbols(MetaspaceClosure* closure);
Array<ModuleEntry*>* allocate_archived_entries();
void init_archived_entries(Array<ModuleEntry*>* archived_modules);
Array<ModuleEntry*>* build_aot_table(ClassLoaderData* loader_data, TRAPS);
void load_archived_entries(ClassLoaderData* loader_data,
Array<ModuleEntry*>* archived_modules);
void restore_archived_oops(ClassLoaderData* loader_data,

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2016, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -505,13 +505,10 @@ void Modules::check_archived_module_oop(oop orig_module_obj) {
ClassLoaderData* loader_data = orig_module_ent->loader_data();
assert(loader_data->is_builtin_class_loader_data(), "must be");
if (orig_module_ent->name() != nullptr) {
// For each named module, we archive both the java.lang.Module oop and the ModuleEntry.
assert(orig_module_ent->has_been_archived(), "sanity");
} else {
precond(ArchiveBuilder::current()->has_been_archived(orig_module_ent));
if (orig_module_ent->name() == nullptr) {
// We always archive unnamed module oop for boot, platform, and system loaders.
precond(orig_module_ent->should_be_archived());
precond(orig_module_ent->has_been_archived());
if (loader_data->is_boot_class_loader_data()) {
assert(!_seen_boot_unnamed_module, "only once");
@ -529,10 +526,6 @@ void Modules::check_archived_module_oop(oop orig_module_obj) {
}
}
void Modules::verify_archived_modules() {
ModuleEntry::verify_archived_module_entries();
}
class Modules::ArchivedProperty {
const char* _prop;
const bool _numbered;

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2016, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -59,7 +59,6 @@ public:
TRAPS) NOT_CDS_JAVA_HEAP_RETURN;
static void init_archived_modules(JavaThread* current, Handle h_platform_loader, Handle h_system_loader)
NOT_CDS_JAVA_HEAP_RETURN;
static void verify_archived_modules() NOT_CDS_JAVA_HEAP_RETURN;
static void dump_archived_module_info() NOT_CDS_JAVA_HEAP_RETURN;
static void serialize_archived_module_info(SerializeClosure* soc) NOT_CDS_JAVA_HEAP_RETURN;

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2016, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -22,6 +22,8 @@
*
*/
#include "cds/aotGrowableArray.inline.hpp"
#include "cds/aotMetaspace.hpp"
#include "cds/archiveBuilder.hpp"
#include "cds/archiveUtils.hpp"
#include "cds/cdsConfig.hpp"
@ -31,13 +33,13 @@
#include "classfile/vmSymbols.hpp"
#include "logging/log.hpp"
#include "logging/logStream.hpp"
#include "memory/metadataFactory.hpp"
#include "memory/resourceArea.hpp"
#include "oops/array.hpp"
#include "oops/symbol.hpp"
#include "runtime/handles.inline.hpp"
#include "runtime/java.hpp"
#include "utilities/events.hpp"
#include "utilities/growableArray.hpp"
#include "utilities/hashTable.hpp"
#include "utilities/ostream.hpp"
#include "utilities/quickSort.hpp"
@ -51,7 +53,7 @@ PackageEntry::PackageEntry(Symbol* name, ModuleEntry* module) :
_qualified_exports(nullptr),
_defined_by_cds_in_class_path(0)
{
// name can't be null
// name can't be null -- a class in the default package has a null PackageEntry rather than an entry with a null name.
_name->increment_refcount();
JFR_ONLY(INIT_ID(this);)
@ -81,7 +83,7 @@ void PackageEntry::add_qexport(ModuleEntry* m) {
if (!has_qual_exports_list()) {
// Lazily create a package's qualified exports list.
// Initial size is small, do not anticipate export lists to be large.
_qualified_exports = new (mtModule) GrowableArray<ModuleEntry*>(QUAL_EXP_SIZE, mtModule);
_qualified_exports = new (mtModule) AOTGrowableArray<ModuleEntry*>(QUAL_EXP_SIZE, mtModule);
}
// Determine, based on this newly established export to module m,
@ -183,12 +185,24 @@ void PackageEntry::purge_qualified_exports() {
}
void PackageEntry::delete_qualified_exports() {
if (_qualified_exports != nullptr) {
if (_qualified_exports != nullptr && !AOTMetaspace::in_aot_cache(_qualified_exports)) {
delete _qualified_exports;
}
_qualified_exports = nullptr;
}
void PackageEntry::pack_qualified_exports() {
if (_qualified_exports != nullptr) {
_qualified_exports->shrink_to_fit();
}
}
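Taken together, these hunks replace the old write_growable_array()/restore_growable_array() copying scheme with AOTGrowableArray, whose backing storage can live directly in the AOT cache. Below is a minimal sketch of the assumed lifecycle, using only calls that appear in this diff (shrink_to_fit, set_in_aot_cache, AOTMetaspace::in_aot_cache); the two helper functions are hypothetical wrappers for illustration, not code from this change.

// Dump time: compact the list and mark its storage as archived.
// (Hypothetical helper; mirrors pack_qualified_exports()/pack_reads()
// followed by remove_unshareable_info().)
static void prepare_for_archive(AOTGrowableArray<ModuleEntry*>* exports) {
  if (exports != nullptr) {
    exports->shrink_to_fit();     // drop unused capacity before writing out
    exports->set_in_aot_cache();  // mark the backing storage as archived
  }
}

// Runtime teardown: never free storage that is mapped from the AOT cache.
// (Hypothetical helper; mirrors the guard in delete_qualified_exports().)
static void release(AOTGrowableArray<ModuleEntry*>*& exports) {
  if (exports != nullptr && !AOTMetaspace::in_aot_cache(exports)) {
    delete exports;               // only heap-allocated lists are deleted
  }
  exports = nullptr;
}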
void PackageEntry::metaspace_pointers_do(MetaspaceClosure* it) {
it->push(&_name);
it->push(&_module);
it->push(&_qualified_exports);
}
PackageEntryTable::PackageEntryTable() { }
PackageEntryTable::~PackageEntryTable() {
@ -212,66 +226,19 @@ PackageEntryTable::~PackageEntryTable() {
}
#if INCLUDE_CDS_JAVA_HEAP
typedef HashTable<
const PackageEntry*,
PackageEntry*,
557, // prime number
AnyObj::C_HEAP> ArchivedPackageEntries;
static ArchivedPackageEntries* _archived_packages_entries = nullptr;
bool PackageEntry::should_be_archived() const {
return module()->should_be_archived();
}
PackageEntry* PackageEntry::allocate_archived_entry() const {
precond(should_be_archived());
PackageEntry* archived_entry = (PackageEntry*)ArchiveBuilder::rw_region_alloc(sizeof(PackageEntry));
memcpy((void*)archived_entry, (void*)this, sizeof(PackageEntry));
if (_archived_packages_entries == nullptr) {
_archived_packages_entries = new (mtClass)ArchivedPackageEntries();
void PackageEntry::remove_unshareable_info() {
if (_qualified_exports != nullptr) {
_qualified_exports->set_in_aot_cache();
}
assert(_archived_packages_entries->get(this) == nullptr, "Each PackageEntry must not be shared across PackageEntryTables");
_archived_packages_entries->put(this, archived_entry);
return archived_entry;
}
PackageEntry* PackageEntry::get_archived_entry(PackageEntry* orig_entry) {
PackageEntry** ptr = _archived_packages_entries->get(orig_entry);
if (ptr != nullptr) {
return *ptr;
} else {
return nullptr;
}
}
void PackageEntry::iterate_symbols(MetaspaceClosure* closure) {
closure->push(&_name);
}
void PackageEntry::init_as_archived_entry() {
Array<ModuleEntry*>* archived_qualified_exports = ModuleEntry::write_growable_array(_qualified_exports);
_name = ArchiveBuilder::get_buffered_symbol(_name);
_module = ModuleEntry::get_archived_entry(_module);
_qualified_exports = (GrowableArray<ModuleEntry*>*)archived_qualified_exports;
_defined_by_cds_in_class_path = 0;
JFR_ONLY(set_trace_id(0);) // re-init at runtime
ArchivePtrMarker::mark_pointer((address*)&_name);
ArchivePtrMarker::mark_pointer((address*)&_module);
ArchivePtrMarker::mark_pointer((address*)&_qualified_exports);
LogStreamHandle(Info, aot, package) st;
if (st.is_enabled()) {
st.print("archived ");
print(&st);
}
}
void PackageEntry::load_from_archive() {
_qualified_exports = ModuleEntry::restore_growable_array((Array<ModuleEntry*>*)_qualified_exports);
JFR_ONLY(INIT_ID(this);)
}
@ -280,14 +247,7 @@ static int compare_package_by_name(PackageEntry* a, PackageEntry* b) {
return a->name()->fast_compare(b->name());
}
void PackageEntryTable::iterate_symbols(MetaspaceClosure* closure) {
auto syms = [&] (const SymbolHandle& key, PackageEntry*& p) {
p->iterate_symbols(closure);
};
_table.iterate_all(syms);
}
Array<PackageEntry*>* PackageEntryTable::allocate_archived_entries() {
Array<PackageEntry*>* PackageEntryTable::build_aot_table(ClassLoaderData* loader_data, TRAPS) {
// First count the packages in named modules
int n = 0;
auto count = [&] (const SymbolHandle& key, PackageEntry*& p) {
@ -297,12 +257,19 @@ Array<PackageEntry*>* PackageEntryTable::allocate_archived_entries() {
};
_table.iterate_all(count);
Array<PackageEntry*>* archived_packages = ArchiveBuilder::new_rw_array<PackageEntry*>(n);
Array<PackageEntry*>* archived_packages = MetadataFactory::new_array<PackageEntry*>(loader_data, n, nullptr, CHECK_NULL);
// reset n
n = 0;
auto grab = [&] (const SymbolHandle& key, PackageEntry*& p) {
if (p->should_be_archived()) {
p->pack_qualified_exports();
archived_packages->at_put(n++, p);
LogStreamHandle(Info, aot, package) st;
if (st.is_enabled()) {
st.print("archived ");
p->print(&st);
}
}
};
_table.iterate_all(grab);
@ -311,18 +278,8 @@ Array<PackageEntry*>* PackageEntryTable::allocate_archived_entries() {
// Always allocate in the same order to produce deterministic archive.
QuickSort::sort(archived_packages->data(), n, compare_package_by_name);
}
for (int i = 0; i < n; i++) {
archived_packages->at_put(i, archived_packages->at(i)->allocate_archived_entry());
ArchivePtrMarker::mark_pointer((address*)archived_packages->adr_at(i));
}
return archived_packages;
}
void PackageEntryTable::init_archived_entries(Array<PackageEntry*>* archived_packages) {
for (int i = 0; i < archived_packages->length(); i++) {
PackageEntry* archived_entry = archived_packages->at(i);
archived_entry->init_as_archived_entry();
}
return archived_packages;
}
void PackageEntryTable::load_archived_entries(Array<PackageEntry*>* archived_packages) {

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2016, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -25,7 +25,9 @@
#ifndef SHARE_CLASSFILE_PACKAGEENTRY_HPP
#define SHARE_CLASSFILE_PACKAGEENTRY_HPP
#include "cds/aotGrowableArray.hpp"
#include "classfile/moduleEntry.hpp"
#include "memory/metaspaceClosureType.hpp"
#include "oops/symbol.hpp"
#include "oops/symbolHandle.hpp"
#include "runtime/atomicAccess.hpp"
@ -114,7 +116,7 @@ private:
bool _must_walk_exports;
// Contains list of modules this package is qualifiedly exported to. Access
// to this list is protected by the Module_lock.
GrowableArray<ModuleEntry*>* _qualified_exports;
AOTGrowableArray<ModuleEntry*>* _qualified_exports;
JFR_ONLY(DEFINE_TRACE_ID_FIELD;)
// Initial size of a package entry's list of qualified exports.
@ -205,14 +207,24 @@ public:
void purge_qualified_exports();
void delete_qualified_exports();
void pack_qualified_exports(); // used by AOT
// methods required by MetaspaceClosure
void metaspace_pointers_do(MetaspaceClosure* it);
int size_in_heapwords() const { return (int)heap_word_size(sizeof(PackageEntry)); }
MetaspaceClosureType type() const { return MetaspaceClosureType::PackageEntryType; }
static bool is_read_only_by_default() { return false; }
void print(outputStream* st = tty);
char* name_as_C_string() const {
assert(_name != nullptr, "name can't be null");
return name()->as_C_string();
}
#if INCLUDE_CDS_JAVA_HEAP
bool should_be_archived() const;
void iterate_symbols(MetaspaceClosure* closure);
PackageEntry* allocate_archived_entry() const;
void init_as_archived_entry();
static PackageEntry* get_archived_entry(PackageEntry* orig_entry);
void remove_unshareable_info();
void load_from_archive();
#endif
@ -271,9 +283,7 @@ public:
void print(outputStream* st = tty);
#if INCLUDE_CDS_JAVA_HEAP
void iterate_symbols(MetaspaceClosure* closure);
Array<PackageEntry*>* allocate_archived_entries();
void init_archived_entries(Array<PackageEntry*>* archived_packages);
Array<PackageEntry*>* build_aot_table(ClassLoaderData* loader_data, TRAPS);
void load_archived_entries(Array<PackageEntry*>* archived_packages);
#endif
};

View File

@ -245,10 +245,6 @@ class SerializeClosure;
\
/* Concurrency support */ \
template(java_util_concurrent_locks_AbstractOwnableSynchronizer, "java/util/concurrent/locks/AbstractOwnableSynchronizer") \
template(java_util_concurrent_atomic_AtomicIntegerFieldUpdater_Impl, "java/util/concurrent/atomic/AtomicIntegerFieldUpdater$AtomicIntegerFieldUpdaterImpl") \
template(java_util_concurrent_atomic_AtomicLongFieldUpdater_CASUpdater, "java/util/concurrent/atomic/AtomicLongFieldUpdater$CASUpdater") \
template(java_util_concurrent_atomic_AtomicLongFieldUpdater_LockedUpdater, "java/util/concurrent/atomic/AtomicLongFieldUpdater$LockedUpdater") \
template(java_util_concurrent_atomic_AtomicReferenceFieldUpdater_Impl, "java/util/concurrent/atomic/AtomicReferenceFieldUpdater$AtomicReferenceFieldUpdaterImpl") \
template(jdk_internal_vm_annotation_Contended_signature, "Ljdk/internal/vm/annotation/Contended;") \
template(jdk_internal_vm_annotation_ReservedStackAccess_signature, "Ljdk/internal/vm/annotation/ReservedStackAccess;") \
template(jdk_internal_ValueBased_signature, "Ljdk/internal/ValueBased;") \
@ -302,6 +298,7 @@ class SerializeClosure;
template(jdk_internal_misc_Scoped_signature, "Ljdk/internal/misc/ScopedMemoryAccess$Scoped;") \
template(jdk_internal_vm_annotation_IntrinsicCandidate_signature, "Ljdk/internal/vm/annotation/IntrinsicCandidate;") \
template(jdk_internal_vm_annotation_Stable_signature, "Ljdk/internal/vm/annotation/Stable;") \
template(jdk_internal_vm_annotation_TrustFinalFields_signature, "Ljdk/internal/vm/annotation/TrustFinalFields;") \
\
template(jdk_internal_vm_annotation_ChangesCurrentThread_signature, "Ljdk/internal/vm/annotation/ChangesCurrentThread;") \
template(jdk_internal_vm_annotation_JvmtiHideEvents_signature, "Ljdk/internal/vm/annotation/JvmtiHideEvents;") \

View File

@ -607,10 +607,27 @@ void decode_env::print_address(address adr) {
return;
}
address card_table_base = nullptr;
BarrierSet* bs = BarrierSet::barrier_set();
if (bs->is_a(BarrierSet::CardTableBarrierSet) &&
adr == ci_card_table_address_as<address>()) {
st->print("word_map_base");
#if INCLUDE_G1GC
if (bs->is_a(BarrierSet::G1BarrierSet)) {
G1BarrierSet* g1bs = barrier_set_cast<G1BarrierSet>(bs);
card_table_base = g1bs->card_table()->byte_map_base();
} else
#endif
#if INCLUDE_SHENANDOAHGC
if (bs->is_a(BarrierSet::ShenandoahBarrierSet)) {
ShenandoahBarrierSet* sbs = barrier_set_cast<ShenandoahBarrierSet>(bs);
if (sbs->card_table() != nullptr) {
card_table_base = sbs->card_table()->byte_map_base();
}
} else
#endif
if (bs->is_a(BarrierSet::CardTableBarrierSet)) {
card_table_base = ci_card_table_address_as<address>();
}
if (card_table_base != nullptr && adr == card_table_base) {
st->print("card_table_base");
if (WizardMode) st->print(" " INTPTR_FORMAT, p2i(adr));
return;
}

View File

@ -70,7 +70,11 @@ inline void G1BarrierSet::write_ref_field_pre(T* field) {
template <DecoratorSet decorators, typename T>
inline void G1BarrierSet::write_ref_field_post(T* field) {
volatile CardValue* byte = _card_table->byte_for(field);
// Make sure that the card table reference is read only once. Otherwise the compiler
// might reload that value in the two accesses below, which could cause writes to
// the wrong card table.
CardTable* card_table = AtomicAccess::load(&_card_table);
CardValue* byte = card_table->byte_for(field);
if (*byte == G1CardTable::clean_card_val()) {
*byte = G1CardTable::dirty_card_val();
}
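The comment above describes a classic compiler-reload hazard: if the card table pointer is re-read between computing the card address and dirtying it, a concurrent table swap can direct the write into the wrong table. A standalone illustration of the same idea using std::atomic follows; it is a generic sketch, not the HotSpot barrier code.

#include <atomic>
#include <cstddef>
#include <cstdint>

struct Table { uint8_t bytes[1024]; };

std::atomic<Table*> g_table;   // may be swapped by another thread

void mark_dirty(size_t index) {
  // Load the pointer exactly once; the address computation and the store
  // below then refer to the same table, even if g_table is swapped in
  // between. Re-reading g_table for each use would reintroduce the race.
  Table* t = g_table.load(std::memory_order_relaxed);
  uint8_t* byte = &t->bytes[index];
  if (*byte == 0) {
    *byte = 1;
  }
}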

View File

@ -29,7 +29,6 @@
#include "gc/shared/gcLogPrecious.hpp"
#include "gc/shared/gcTraceTime.inline.hpp"
#include "memory/allocation.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/globals_extension.hpp"
#include "runtime/java.hpp"
#include "utilities/bitMap.inline.hpp"
@ -192,32 +191,32 @@ const char* G1CardSetConfiguration::mem_object_type_name_str(uint index) {
void G1CardSetCoarsenStats::reset() {
STATIC_ASSERT(ARRAY_SIZE(_coarsen_from) == ARRAY_SIZE(_coarsen_collision));
for (uint i = 0; i < ARRAY_SIZE(_coarsen_from); i++) {
_coarsen_from[i] = 0;
_coarsen_collision[i] = 0;
_coarsen_from[i].store_relaxed(0);
_coarsen_collision[i].store_relaxed(0);
}
}
void G1CardSetCoarsenStats::set(G1CardSetCoarsenStats& other) {
STATIC_ASSERT(ARRAY_SIZE(_coarsen_from) == ARRAY_SIZE(_coarsen_collision));
for (uint i = 0; i < ARRAY_SIZE(_coarsen_from); i++) {
_coarsen_from[i] = other._coarsen_from[i];
_coarsen_collision[i] = other._coarsen_collision[i];
_coarsen_from[i].store_relaxed(other._coarsen_from[i].load_relaxed());
_coarsen_collision[i].store_relaxed(other._coarsen_collision[i].load_relaxed());
}
}
void G1CardSetCoarsenStats::subtract_from(G1CardSetCoarsenStats& other) {
STATIC_ASSERT(ARRAY_SIZE(_coarsen_from) == ARRAY_SIZE(_coarsen_collision));
for (uint i = 0; i < ARRAY_SIZE(_coarsen_from); i++) {
_coarsen_from[i] = other._coarsen_from[i] - _coarsen_from[i];
_coarsen_collision[i] = other._coarsen_collision[i] - _coarsen_collision[i];
_coarsen_from[i].store_relaxed(other._coarsen_from[i].load_relaxed() - _coarsen_from[i].load_relaxed());
_coarsen_collision[i].store_relaxed(other._coarsen_collision[i].load_relaxed() - _coarsen_collision[i].load_relaxed());
}
}
void G1CardSetCoarsenStats::record_coarsening(uint tag, bool collision) {
assert(tag < ARRAY_SIZE(_coarsen_from), "tag %u out of bounds", tag);
AtomicAccess::inc(&_coarsen_from[tag], memory_order_relaxed);
_coarsen_from[tag].add_then_fetch(1u, memory_order_relaxed);
if (collision) {
AtomicAccess::inc(&_coarsen_collision[tag], memory_order_relaxed);
_coarsen_collision[tag].add_then_fetch(1u, memory_order_relaxed);
}
}
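These hunks follow the conversion pattern used throughout this series: a plain volatile counter updated through free-standing AtomicAccess calls becomes an Atomic<T> member used through its own methods. A condensed before/after sketch, assuming only the Atomic<T> operations visible in this diff (load_relaxed, store_relaxed, add_then_fetch); it relies on the HotSpot atomic headers, so it is illustrative rather than standalone.

// Before: a plain field, atomicity expressed at every call site.
struct CounterOld {
  size_t volatile _count;
  void bump()    { AtomicAccess::inc(&_count, memory_order_relaxed); }
  size_t value() { return AtomicAccess::load(&_count); }
};

// After: the field's type carries the atomic protocol.
struct CounterNew {
  Atomic<size_t> _count;
  void bump()    { _count.add_then_fetch(1u, memory_order_relaxed); }
  size_t value() { return _count.load_relaxed(); }
  void reset()   { _count.store_relaxed(0); }
};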
@ -228,13 +227,13 @@ void G1CardSetCoarsenStats::print_on(outputStream* out) {
"Inline->AoC %zu (%zu) "
"AoC->BitMap %zu (%zu) "
"BitMap->Full %zu (%zu) ",
_coarsen_from[0], _coarsen_collision[0],
_coarsen_from[1], _coarsen_collision[1],
_coarsen_from[0].load_relaxed(), _coarsen_collision[0].load_relaxed(),
_coarsen_from[1].load_relaxed(), _coarsen_collision[1].load_relaxed(),
// There is no BitMap at the first level, so index 2 is skipped.
_coarsen_from[3], _coarsen_collision[3],
_coarsen_from[4], _coarsen_collision[4],
_coarsen_from[5], _coarsen_collision[5],
_coarsen_from[6], _coarsen_collision[6]
_coarsen_from[3].load_relaxed(), _coarsen_collision[3].load_relaxed(),
_coarsen_from[4].load_relaxed(), _coarsen_collision[4].load_relaxed(),
_coarsen_from[5].load_relaxed(), _coarsen_collision[5].load_relaxed(),
_coarsen_from[6].load_relaxed(), _coarsen_collision[6].load_relaxed()
);
}
@ -248,7 +247,7 @@ class G1CardSetHashTable : public CHeapObj<mtGCCardSet> {
// the per region cardsets.
const static uint GroupBucketClaimSize = 4;
// Did we insert at least one card in the table?
bool volatile _inserted_card;
Atomic<bool> _inserted_card;
G1CardSetMemoryManager* _mm;
CardSetHash _table;
@ -311,10 +310,10 @@ public:
G1CardSetHashTableValue value(region_idx, G1CardSetInlinePtr());
bool inserted = _table.insert_get(Thread::current(), lookup, value, found, should_grow);
if (!_inserted_card && inserted) {
if (!_inserted_card.load_relaxed() && inserted) {
// It does not matter to us who is setting the flag so a regular atomic store
// is sufficient.
AtomicAccess::store(&_inserted_card, true);
_inserted_card.store_relaxed(true);
}
return found.value();
@ -343,9 +342,9 @@ public:
}
void reset() {
if (AtomicAccess::load(&_inserted_card)) {
if (_inserted_card.load_relaxed()) {
_table.unsafe_reset(InitialLogTableSize);
AtomicAccess::store(&_inserted_card, false);
_inserted_card.store_relaxed(false);
}
}
@ -455,14 +454,14 @@ void G1CardSet::free_mem_object(ContainerPtr container) {
_mm->free(container_type_to_mem_object_type(type), value);
}
G1CardSet::ContainerPtr G1CardSet::acquire_container(ContainerPtr volatile* container_addr) {
G1CardSet::ContainerPtr G1CardSet::acquire_container(Atomic<ContainerPtr>* container_addr) {
// Update reference counts under RCU critical section to avoid a
// use-after-cleanup bug where we increment a reference count for
// an object whose memory has already been cleaned up and reused.
GlobalCounter::CriticalSection cs(Thread::current());
while (true) {
// Get ContainerPtr and increment refcount atomically wrt memory reuse.
ContainerPtr container = AtomicAccess::load_acquire(container_addr);
ContainerPtr container = container_addr->load_acquire();
uint cs_type = container_type(container);
if (container == FullCardSet || cs_type == ContainerInlinePtr) {
return container;
@ -503,15 +502,15 @@ class G1ReleaseCardsets : public StackObj {
G1CardSet* _card_set;
using ContainerPtr = G1CardSet::ContainerPtr;
void coarsen_to_full(ContainerPtr* container_addr) {
void coarsen_to_full(Atomic<ContainerPtr>* container_addr) {
while (true) {
ContainerPtr cur_container = AtomicAccess::load_acquire(container_addr);
ContainerPtr cur_container = container_addr->load_acquire();
uint cs_type = G1CardSet::container_type(cur_container);
if (cur_container == G1CardSet::FullCardSet) {
return;
}
ContainerPtr old_value = AtomicAccess::cmpxchg(container_addr, cur_container, G1CardSet::FullCardSet);
ContainerPtr old_value = container_addr->compare_exchange(cur_container, G1CardSet::FullCardSet);
if (old_value == cur_container) {
_card_set->release_and_maybe_free_container(cur_container);
@ -523,7 +522,7 @@ class G1ReleaseCardsets : public StackObj {
public:
explicit G1ReleaseCardsets(G1CardSet* card_set) : _card_set(card_set) { }
void operator ()(ContainerPtr* container_addr) {
void operator ()(Atomic<ContainerPtr>* container_addr) {
coarsen_to_full(container_addr);
}
};
@ -544,10 +543,10 @@ G1AddCardResult G1CardSet::add_to_howl(ContainerPtr parent_container,
ContainerPtr container;
uint bucket = _config->howl_bucket_index(card_in_region);
ContainerPtr volatile* bucket_entry = howl->container_addr(bucket);
Atomic<ContainerPtr>* bucket_entry = howl->container_addr(bucket);
while (true) {
if (AtomicAccess::load(&howl->_num_entries) >= _config->cards_in_howl_threshold()) {
if (howl->_num_entries.load_relaxed() >= _config->cards_in_howl_threshold()) {
return Overflow;
}
@ -571,7 +570,7 @@ G1AddCardResult G1CardSet::add_to_howl(ContainerPtr parent_container,
}
if (increment_total && add_result == Added) {
AtomicAccess::inc(&howl->_num_entries, memory_order_relaxed);
howl->_num_entries.add_then_fetch(1u, memory_order_relaxed);
}
if (to_transfer != nullptr) {
@ -588,7 +587,7 @@ G1AddCardResult G1CardSet::add_to_bitmap(ContainerPtr container, uint card_in_re
return bitmap->add(card_offset, _config->cards_in_howl_bitmap_threshold(), _config->max_cards_in_howl_bitmap());
}
G1AddCardResult G1CardSet::add_to_inline_ptr(ContainerPtr volatile* container_addr, ContainerPtr container, uint card_in_region) {
G1AddCardResult G1CardSet::add_to_inline_ptr(Atomic<ContainerPtr>* container_addr, ContainerPtr container, uint card_in_region) {
G1CardSetInlinePtr value(container_addr, container);
return value.add(card_in_region, _config->inline_ptr_bits_per_card(), _config->max_cards_in_inline_ptr());
}
@ -610,7 +609,7 @@ G1CardSet::ContainerPtr G1CardSet::create_coarsened_array_of_cards(uint card_in_
return new_container;
}
bool G1CardSet::coarsen_container(ContainerPtr volatile* container_addr,
bool G1CardSet::coarsen_container(Atomic<ContainerPtr>* container_addr,
ContainerPtr cur_container,
uint card_in_region,
bool within_howl) {
@ -640,7 +639,7 @@ bool G1CardSet::coarsen_container(ContainerPtr volatile* container_addr,
ShouldNotReachHere();
}
ContainerPtr old_value = AtomicAccess::cmpxchg(container_addr, cur_container, new_container); // Memory order?
ContainerPtr old_value = container_addr->compare_exchange(cur_container, new_container); // Memory order?
if (old_value == cur_container) {
// Success. Indicate that the cards from the current card set must be transferred
// by this caller.
@ -687,7 +686,7 @@ void G1CardSet::transfer_cards(G1CardSetHashTableValue* table_entry, ContainerPt
assert(container_type(source_container) == ContainerHowl, "must be");
// Need to correct for the fact that the Full remembered set occupies more cards than the
// AoCS did before.
AtomicAccess::add(&_num_occupied, _config->max_cards_in_region() - table_entry->_num_occupied, memory_order_relaxed);
_num_occupied.add_then_fetch(_config->max_cards_in_region() - table_entry->_num_occupied.load_relaxed(), memory_order_relaxed);
}
}
@ -713,18 +712,18 @@ void G1CardSet::transfer_cards_in_howl(ContainerPtr parent_container,
diff -= 1;
G1CardSetHowl* howling_array = container_ptr<G1CardSetHowl>(parent_container);
AtomicAccess::add(&howling_array->_num_entries, diff, memory_order_relaxed);
howling_array->_num_entries.add_then_fetch(diff, memory_order_relaxed);
G1CardSetHashTableValue* table_entry = get_container(card_region);
assert(table_entry != nullptr, "Table entry not found for transferred cards");
AtomicAccess::add(&table_entry->_num_occupied, diff, memory_order_relaxed);
table_entry->_num_occupied.add_then_fetch(diff, memory_order_relaxed);
AtomicAccess::add(&_num_occupied, diff, memory_order_relaxed);
_num_occupied.add_then_fetch(diff, memory_order_relaxed);
}
}
G1AddCardResult G1CardSet::add_to_container(ContainerPtr volatile* container_addr,
G1AddCardResult G1CardSet::add_to_container(Atomic<ContainerPtr>* container_addr,
ContainerPtr container,
uint card_region,
uint card_in_region,
@ -827,8 +826,8 @@ G1AddCardResult G1CardSet::add_card(uint card_region, uint card_in_region, bool
}
if (increment_total && add_result == Added) {
AtomicAccess::inc(&table_entry->_num_occupied, memory_order_relaxed);
AtomicAccess::inc(&_num_occupied, memory_order_relaxed);
table_entry->_num_occupied.add_then_fetch(1u, memory_order_relaxed);
_num_occupied.add_then_fetch(1u, memory_order_relaxed);
}
if (should_grow_table) {
_table->grow();
@ -853,7 +852,7 @@ bool G1CardSet::contains_card(uint card_region, uint card_in_region) {
return false;
}
ContainerPtr container = table_entry->_container;
ContainerPtr container = table_entry->_container.load_relaxed();
if (container == FullCardSet) {
// contains_card() is not a performance critical method so we do not hide that
// case in the switch below.
@ -889,7 +888,7 @@ void G1CardSet::print_info(outputStream* st, uintptr_t card) {
return;
}
ContainerPtr container = table_entry->_container;
ContainerPtr container = table_entry->_container.load_relaxed();
if (container == FullCardSet) {
st->print("FULL card set)");
return;
@ -940,7 +939,7 @@ void G1CardSet::iterate_cards_during_transfer(ContainerPtr const container, Card
void G1CardSet::iterate_containers(ContainerPtrClosure* cl, bool at_safepoint) {
auto do_value =
[&] (G1CardSetHashTableValue* value) {
cl->do_containerptr(value->_region_idx, value->_num_occupied, value->_container);
cl->do_containerptr(value->_region_idx, value->_num_occupied.load_relaxed(), value->_container.load_relaxed());
return true;
};
@ -1001,11 +1000,11 @@ bool G1CardSet::occupancy_less_or_equal_to(size_t limit) const {
}
bool G1CardSet::is_empty() const {
return _num_occupied == 0;
return _num_occupied.load_relaxed() == 0;
}
size_t G1CardSet::occupied() const {
return _num_occupied;
return _num_occupied.load_relaxed();
}
size_t G1CardSet::num_containers() {
@ -1051,7 +1050,7 @@ size_t G1CardSet::static_mem_size() {
void G1CardSet::clear() {
_table->reset();
_num_occupied = 0;
_num_occupied.store_relaxed(0);
_mm->flush();
}

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2021, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,6 +27,7 @@
#include "memory/allocation.hpp"
#include "memory/memRegion.hpp"
#include "runtime/atomic.hpp"
#include "utilities/concurrentHashTable.hpp"
class G1CardSetAllocOptions;
@ -154,8 +155,8 @@ public:
private:
// Indices are "from" indices.
size_t _coarsen_from[NumCoarsenCategories];
size_t _coarsen_collision[NumCoarsenCategories];
Atomic<size_t> _coarsen_from[NumCoarsenCategories];
Atomic<size_t> _coarsen_collision[NumCoarsenCategories];
public:
G1CardSetCoarsenStats() { reset(); }
@ -271,11 +272,11 @@ private:
// Total number of cards in this card set. This is a best-effort value, i.e. there may
// be (slightly) more cards in the card set than this value in reality.
size_t _num_occupied;
Atomic<size_t> _num_occupied;
ContainerPtr make_container_ptr(void* value, uintptr_t type);
ContainerPtr acquire_container(ContainerPtr volatile* container_addr);
ContainerPtr acquire_container(Atomic<ContainerPtr>* container_addr);
// Returns true if the card set container should be released
bool release_container(ContainerPtr container);
// Release card set and free if needed.
@ -288,7 +289,7 @@ private:
// coarsen_container does not transfer cards from cur_container
// to the new container. Transfer is achieved by transfer_cards.
// Returns true if this was the thread that coarsened the container (and added the card).
bool coarsen_container(ContainerPtr volatile* container_addr,
bool coarsen_container(Atomic<ContainerPtr>* container_addr,
ContainerPtr cur_container,
uint card_in_region, bool within_howl = false);
@ -300,9 +301,9 @@ private:
void transfer_cards(G1CardSetHashTableValue* table_entry, ContainerPtr source_container, uint card_region);
void transfer_cards_in_howl(ContainerPtr parent_container, ContainerPtr source_container, uint card_region);
G1AddCardResult add_to_container(ContainerPtr volatile* container_addr, ContainerPtr container, uint card_region, uint card, bool increment_total = true);
G1AddCardResult add_to_container(Atomic<ContainerPtr>* container_addr, ContainerPtr container, uint card_region, uint card, bool increment_total = true);
G1AddCardResult add_to_inline_ptr(ContainerPtr volatile* container_addr, ContainerPtr container, uint card_in_region);
G1AddCardResult add_to_inline_ptr(Atomic<ContainerPtr>* container_addr, ContainerPtr container, uint card_in_region);
G1AddCardResult add_to_array(ContainerPtr container, uint card_in_region);
G1AddCardResult add_to_bitmap(ContainerPtr container, uint card_in_region);
G1AddCardResult add_to_howl(ContainerPtr parent_container, uint card_region, uint card_in_region, bool increment_total = true);
@ -366,7 +367,6 @@ public:
size_t num_containers();
static G1CardSetCoarsenStats coarsen_stats();
static void print_coarsen_stats(outputStream* out);
// Returns size of the actual remembered set containers in bytes.
@ -412,8 +412,15 @@ public:
using ContainerPtr = G1CardSet::ContainerPtr;
const uint _region_idx;
uint volatile _num_occupied;
ContainerPtr volatile _container;
Atomic<uint> _num_occupied;
Atomic<ContainerPtr> _container;
// Copy constructor needed for use in ConcurrentHashTable.
G1CardSetHashTableValue(const G1CardSetHashTableValue& other) :
_region_idx(other._region_idx),
_num_occupied(other._num_occupied.load_relaxed()),
_container(other._container.load_relaxed())
{ }
G1CardSetHashTableValue(uint region_idx, ContainerPtr container) : _region_idx(region_idx), _num_occupied(0), _container(container) { }
};
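The explicit copy constructor above is needed because ConcurrentHashTable copies its value type, and once the fields become Atomic<...> the implicit member-wise copy is presumably no longer available (assuming HotSpot's Atomic<T>, like std::atomic, is not copyable). A self-contained std::atomic analogue of that design choice:

#include <atomic>

struct Value {
  const unsigned _region_idx;
  std::atomic<unsigned> _num_occupied;

  // Without this, Value is not copyable: std::atomic deletes its copy
  // constructor, so the implicitly generated copy would be deleted too.
  // A relaxed load is enough for snapshotting the counter.
  Value(const Value& other)
    : _region_idx(other._region_idx),
      _num_occupied(other._num_occupied.load(std::memory_order_relaxed)) {}

  Value(unsigned region_idx)
    : _region_idx(region_idx), _num_occupied(0) {}
};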

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2023, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2023, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,7 +27,7 @@
#include "gc/g1/g1CardSet.hpp"
#include "memory/allocation.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/atomic.hpp"
#include "utilities/bitMap.hpp"
#include "utilities/globalDefinitions.hpp"
@ -67,7 +67,7 @@ class G1CardSetInlinePtr : public StackObj {
using ContainerPtr = G1CardSet::ContainerPtr;
ContainerPtr volatile * _value_addr;
Atomic<ContainerPtr>* _value_addr;
ContainerPtr _value;
static const uint SizeFieldLen = 3;
@ -103,7 +103,7 @@ public:
explicit G1CardSetInlinePtr(ContainerPtr value) :
G1CardSetInlinePtr(nullptr, value) {}
G1CardSetInlinePtr(ContainerPtr volatile* value_addr, ContainerPtr value) : _value_addr(value_addr), _value(value) {
G1CardSetInlinePtr(Atomic<ContainerPtr>* value_addr, ContainerPtr value) : _value_addr(value_addr), _value(value) {
assert(G1CardSet::container_type(_value) == G1CardSet::ContainerInlinePtr, "Value " PTR_FORMAT " is not a valid G1CardSetInlinePtr.", p2i(_value));
}
@ -145,13 +145,13 @@ public:
// All but inline pointers are of this kind. For those, card entries are stored
// directly in the ContainerPtr of the ConcurrentHashTable node.
class G1CardSetContainer {
uintptr_t _ref_count;
Atomic<uintptr_t> _ref_count;
protected:
~G1CardSetContainer() = default;
public:
G1CardSetContainer() : _ref_count(3) { }
uintptr_t refcount() const { return AtomicAccess::load_acquire(&_ref_count); }
uintptr_t refcount() const { return _ref_count.load_acquire(); }
bool try_increment_refcount();
@ -172,7 +172,7 @@ public:
using ContainerPtr = G1CardSet::ContainerPtr;
private:
EntryCountType _size;
EntryCountType volatile _num_entries;
Atomic<EntryCountType> _num_entries;
// VLA implementation.
EntryDataType _data[1];
@ -180,10 +180,10 @@ private:
static const EntryCountType EntryMask = LockBitMask - 1;
class G1CardSetArrayLocker : public StackObj {
EntryCountType volatile* _num_entries_addr;
Atomic<EntryCountType>* _num_entries_addr;
EntryCountType _local_num_entries;
public:
G1CardSetArrayLocker(EntryCountType volatile* value);
G1CardSetArrayLocker(Atomic<EntryCountType>* value);
EntryCountType num_entries() const { return _local_num_entries; }
void inc_num_entries() {
@ -192,7 +192,7 @@ private:
}
~G1CardSetArrayLocker() {
AtomicAccess::release_store(_num_entries_addr, _local_num_entries);
_num_entries_addr->release_store(_local_num_entries);
}
};
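G1CardSetArrayLocker packs a spin lock into a bit of the entry counter: the constructor CAS-loops until it installs LockBitMask, and the destructor publishes the (possibly incremented) count with a releasing store, which also clears the bit. A generic, self-contained sketch of that lock-bit-in-a-counter pattern using std::atomic (illustration only; the real locker's types, mask, and SpinYield back-off are shown elsewhere in this diff):

#include <atomic>
#include <cstdint>

class BitLockedCounter {
  std::atomic<uint32_t> _word{0};
  static constexpr uint32_t LockBit   = 1u << 31;
  static constexpr uint32_t CountMask = LockBit - 1;

public:
  // Spin until the lock bit is set; returns the count observed at lock time.
  // A production version would back off (e.g. spin/yield) instead of busy-waiting.
  uint32_t lock() {
    uint32_t count = _word.load(std::memory_order_relaxed) & CountMask;
    while (!_word.compare_exchange_weak(count, count | LockBit,
                                        std::memory_order_acquire,
                                        std::memory_order_relaxed)) {
      count &= CountMask;  // strip a possibly-set lock bit before retrying
    }
    return count;
  }

  // Publish the new count and drop the lock bit in one release store.
  void unlock(uint32_t new_count) {
    _word.store(new_count & CountMask, std::memory_order_release);
  }
};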
@ -213,7 +213,7 @@ public:
template <class CardVisitor>
void iterate(CardVisitor& found);
size_t num_entries() const { return _num_entries & EntryMask; }
size_t num_entries() const { return _num_entries.load_relaxed() & EntryMask; }
static size_t header_size_in_bytes();
@ -223,7 +223,7 @@ public:
};
class G1CardSetBitMap : public G1CardSetContainer {
size_t _num_bits_set;
Atomic<size_t> _num_bits_set;
BitMap::bm_word_t _bits[1];
public:
@ -236,7 +236,7 @@ public:
return bm.at(card_idx);
}
uint num_bits_set() const { return (uint)_num_bits_set; }
uint num_bits_set() const { return (uint)_num_bits_set.load_relaxed(); }
template <class CardVisitor>
void iterate(CardVisitor& found, size_t const size_in_bits, uint offset);
@ -255,10 +255,10 @@ class G1CardSetHowl : public G1CardSetContainer {
public:
typedef uint EntryCountType;
using ContainerPtr = G1CardSet::ContainerPtr;
EntryCountType volatile _num_entries;
Atomic<EntryCountType> _num_entries;
private:
// VLA implementation.
ContainerPtr _buckets[1];
Atomic<ContainerPtr> _buckets[1];
// Do not add class member variables beyond this point.
// Iterates over the ContainerPtr at the given index in this Howl card set,
@ -268,14 +268,14 @@ private:
ContainerPtr at(EntryCountType index) const;
ContainerPtr const* buckets() const;
Atomic<ContainerPtr> const* buckets() const;
public:
G1CardSetHowl(EntryCountType card_in_region, G1CardSetConfiguration* config);
ContainerPtr const* container_addr(EntryCountType index) const;
Atomic<ContainerPtr> const* container_addr(EntryCountType index) const;
ContainerPtr* container_addr(EntryCountType index);
Atomic<ContainerPtr>* container_addr(EntryCountType index);
bool contains(uint card_idx, G1CardSetConfiguration* config);
// Iterates over all ContainerPtrs in this Howl card set, applying a CardOrRangeVisitor

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2021, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -67,7 +67,7 @@ inline G1AddCardResult G1CardSetInlinePtr::add(uint card_idx, uint bits_per_card
return Overflow;
}
ContainerPtr new_value = merge(_value, card_idx, num_cards, bits_per_card);
ContainerPtr old_value = AtomicAccess::cmpxchg(_value_addr, _value, new_value, memory_order_relaxed);
ContainerPtr old_value = _value_addr->compare_exchange(_value, new_value, memory_order_relaxed);
if (_value == old_value) {
return Added;
}
@ -126,7 +126,7 @@ inline bool G1CardSetContainer::try_increment_refcount() {
}
uintptr_t new_value = old_value + 2;
uintptr_t ref_count = AtomicAccess::cmpxchg(&_ref_count, old_value, new_value);
uintptr_t ref_count = _ref_count.compare_exchange(old_value, new_value);
if (ref_count == old_value) {
return true;
}
@ -137,7 +137,7 @@ inline bool G1CardSetContainer::try_increment_refcount() {
inline uintptr_t G1CardSetContainer::decrement_refcount() {
uintptr_t old_value = refcount();
assert((old_value & 0x1) != 0 && old_value >= 3, "precondition");
return AtomicAccess::sub(&_ref_count, 2u);
return _ref_count.sub_then_fetch(2u);
}
inline G1CardSetArray::G1CardSetArray(uint card_in_region, EntryCountType num_cards) :
@ -149,14 +149,13 @@ inline G1CardSetArray::G1CardSetArray(uint card_in_region, EntryCountType num_ca
*entry_addr(0) = checked_cast<EntryDataType>(card_in_region);
}
inline G1CardSetArray::G1CardSetArrayLocker::G1CardSetArrayLocker(EntryCountType volatile* num_entries_addr) :
inline G1CardSetArray::G1CardSetArrayLocker::G1CardSetArrayLocker(Atomic<EntryCountType>* num_entries_addr) :
_num_entries_addr(num_entries_addr) {
SpinYield s;
EntryCountType num_entries = AtomicAccess::load(_num_entries_addr) & EntryMask;
EntryCountType num_entries = _num_entries_addr->load_relaxed() & EntryMask;
while (true) {
EntryCountType old_value = AtomicAccess::cmpxchg(_num_entries_addr,
num_entries,
(EntryCountType)(num_entries | LockBitMask));
EntryCountType old_value = _num_entries_addr->compare_exchange(num_entries,
(EntryCountType)(num_entries | LockBitMask));
if (old_value == num_entries) {
// Succeeded locking the array.
_local_num_entries = num_entries;
@ -174,7 +173,7 @@ inline G1CardSetArray::EntryDataType const* G1CardSetArray::base_addr() const {
}
inline G1CardSetArray::EntryDataType const* G1CardSetArray::entry_addr(EntryCountType index) const {
assert(index < _num_entries, "precondition");
assert(index < _num_entries.load_relaxed(), "precondition");
return base_addr() + index;
}
@ -189,7 +188,7 @@ inline G1CardSetArray::EntryDataType G1CardSetArray::at(EntryCountType index) co
inline G1AddCardResult G1CardSetArray::add(uint card_idx) {
assert(card_idx < (1u << (sizeof(EntryDataType) * BitsPerByte)),
"Card index %u does not fit allowed card value range.", card_idx);
EntryCountType num_entries = AtomicAccess::load_acquire(&_num_entries) & EntryMask;
EntryCountType num_entries = _num_entries.load_acquire() & EntryMask;
EntryCountType idx = 0;
for (; idx < num_entries; idx++) {
if (at(idx) == card_idx) {
@ -223,7 +222,7 @@ inline G1AddCardResult G1CardSetArray::add(uint card_idx) {
}
inline bool G1CardSetArray::contains(uint card_idx) {
EntryCountType num_entries = AtomicAccess::load_acquire(&_num_entries) & EntryMask;
EntryCountType num_entries = _num_entries.load_acquire() & EntryMask;
for (EntryCountType idx = 0; idx < num_entries; idx++) {
if (at(idx) == card_idx) {
@ -235,7 +234,7 @@ inline bool G1CardSetArray::contains(uint card_idx) {
template <class CardVisitor>
void G1CardSetArray::iterate(CardVisitor& found) {
EntryCountType num_entries = AtomicAccess::load_acquire(&_num_entries) & EntryMask;
EntryCountType num_entries = _num_entries.load_acquire() & EntryMask;
for (EntryCountType idx = 0; idx < num_entries; idx++) {
found(at(idx));
}
@ -256,11 +255,11 @@ inline G1CardSetBitMap::G1CardSetBitMap(uint card_in_region, uint size_in_bits)
inline G1AddCardResult G1CardSetBitMap::add(uint card_idx, size_t threshold, size_t size_in_bits) {
BitMapView bm(_bits, size_in_bits);
if (_num_bits_set >= threshold) {
if (_num_bits_set.load_relaxed() >= threshold) {
return bm.at(card_idx) ? Found : Overflow;
}
if (bm.par_set_bit(card_idx)) {
AtomicAccess::inc(&_num_bits_set, memory_order_relaxed);
_num_bits_set.add_then_fetch(1u, memory_order_relaxed);
return Added;
}
return Found;
@ -276,22 +275,22 @@ inline size_t G1CardSetBitMap::header_size_in_bytes() {
return offset_of(G1CardSetBitMap, _bits);
}
inline G1CardSetHowl::ContainerPtr const* G1CardSetHowl::container_addr(EntryCountType index) const {
assert(index < _num_entries, "precondition");
inline Atomic<G1CardSetHowl::ContainerPtr> const* G1CardSetHowl::container_addr(EntryCountType index) const {
assert(index < _num_entries.load_relaxed(), "precondition");
return buckets() + index;
}
inline G1CardSetHowl::ContainerPtr* G1CardSetHowl::container_addr(EntryCountType index) {
return const_cast<ContainerPtr*>(const_cast<const G1CardSetHowl*>(this)->container_addr(index));
inline Atomic<G1CardSetHowl::ContainerPtr>* G1CardSetHowl::container_addr(EntryCountType index) {
return const_cast<Atomic<ContainerPtr>*>(const_cast<const G1CardSetHowl*>(this)->container_addr(index));
}
inline G1CardSetHowl::ContainerPtr G1CardSetHowl::at(EntryCountType index) const {
return *container_addr(index);
return (*container_addr(index)).load_relaxed();
}
inline G1CardSetHowl::ContainerPtr const* G1CardSetHowl::buckets() const {
inline Atomic<G1CardSetHowl::ContainerPtr> const* G1CardSetHowl::buckets() const {
const void* ptr = reinterpret_cast<const char*>(this) + header_size_in_bytes();
return reinterpret_cast<ContainerPtr const*>(ptr);
return reinterpret_cast<Atomic<ContainerPtr> const*>(ptr);
}
inline G1CardSetHowl::G1CardSetHowl(EntryCountType card_in_region, G1CardSetConfiguration* config) :
@ -300,7 +299,7 @@ inline G1CardSetHowl::G1CardSetHowl(EntryCountType card_in_region, G1CardSetConf
EntryCountType num_buckets = config->num_buckets_in_howl();
EntryCountType bucket = config->howl_bucket_index(card_in_region);
for (uint i = 0; i < num_buckets; ++i) {
*container_addr(i) = G1CardSetInlinePtr();
container_addr(i)->store_relaxed(G1CardSetInlinePtr());
if (i == bucket) {
G1CardSetInlinePtr value(container_addr(i), at(i));
value.add(card_in_region, config->inline_ptr_bits_per_card(), config->max_cards_in_inline_ptr());
@ -310,8 +309,8 @@ inline G1CardSetHowl::G1CardSetHowl(EntryCountType card_in_region, G1CardSetConf
inline bool G1CardSetHowl::contains(uint card_idx, G1CardSetConfiguration* config) {
EntryCountType bucket = config->howl_bucket_index(card_idx);
ContainerPtr* array_entry = container_addr(bucket);
ContainerPtr container = AtomicAccess::load_acquire(array_entry);
Atomic<ContainerPtr>* array_entry = container_addr(bucket);
ContainerPtr container = array_entry->load_acquire();
switch (G1CardSet::container_type(container)) {
case G1CardSet::ContainerArrayOfCards: {

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2021, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -26,7 +26,6 @@
#include "gc/g1/g1CardSetContainers.inline.hpp"
#include "gc/g1/g1CardSetMemory.inline.hpp"
#include "gc/g1/g1MonotonicArena.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "utilities/ostream.hpp"
G1CardSetAllocator::G1CardSetAllocator(const char* name,

View File

@ -31,6 +31,7 @@
#include "gc/g1/g1CollectorState.hpp"
#include "gc/g1/g1ConcurrentMark.inline.hpp"
#include "gc/g1/g1EvacFailureRegions.hpp"
#include "gc/g1/g1EvacStats.inline.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
#include "gc/g1/g1HeapRegionManager.inline.hpp"
#include "gc/g1/g1HeapRegionRemSet.hpp"

View File

@ -203,13 +203,13 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
G1CollectedHeap* _g1h;
G1HeapRegionClaimer _hrclaimer;
uint volatile _num_regions_added;
Atomic<uint> _num_regions_added;
G1BuildCandidateArray _result;
void update_totals(uint num_regions) {
if (num_regions > 0) {
AtomicAccess::add(&_num_regions_added, num_regions);
_num_regions_added.add_then_fetch(num_regions);
}
}
@ -221,7 +221,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
void prune(G1HeapRegion** data) {
G1Policy* p = G1CollectedHeap::heap()->policy();
uint num_candidates = AtomicAccess::load(&_num_regions_added);
uint num_candidates = _num_regions_added.load_relaxed();
uint min_old_cset_length = p->calc_min_old_cset_length(num_candidates);
uint num_pruned = 0;
@ -254,7 +254,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
wasted_bytes,
allowed_waste);
AtomicAccess::sub(&_num_regions_added, num_pruned, memory_order_relaxed);
_num_regions_added.sub_then_fetch(num_pruned, memory_order_relaxed);
}
public:
@ -275,7 +275,7 @@ public:
_result.sort_by_gc_efficiency();
prune(_result.array());
candidates->set_candidates_from_marking(_result.array(),
_num_regions_added);
_num_regions_added.load_relaxed());
}
};

View File

@ -291,9 +291,9 @@ void G1CMMarkStack::expand() {
_chunk_allocator.try_expand();
}
void G1CMMarkStack::add_chunk_to_list(TaskQueueEntryChunk* volatile* list, TaskQueueEntryChunk* elem) {
elem->next = *list;
*list = elem;
void G1CMMarkStack::add_chunk_to_list(Atomic<TaskQueueEntryChunk*>* list, TaskQueueEntryChunk* elem) {
elem->next = list->load_relaxed();
list->store_relaxed(elem);
}
void G1CMMarkStack::add_chunk_to_chunk_list(TaskQueueEntryChunk* elem) {
@ -307,10 +307,10 @@ void G1CMMarkStack::add_chunk_to_free_list(TaskQueueEntryChunk* elem) {
add_chunk_to_list(&_free_list, elem);
}
G1CMMarkStack::TaskQueueEntryChunk* G1CMMarkStack::remove_chunk_from_list(TaskQueueEntryChunk* volatile* list) {
TaskQueueEntryChunk* result = *list;
G1CMMarkStack::TaskQueueEntryChunk* G1CMMarkStack::remove_chunk_from_list(Atomic<TaskQueueEntryChunk*>* list) {
TaskQueueEntryChunk* result = list->load_relaxed();
if (result != nullptr) {
*list = (*list)->next;
list->store_relaxed(list->load_relaxed()->next);
}
return result;
}
@ -364,8 +364,8 @@ bool G1CMMarkStack::par_pop_chunk(G1TaskQueueEntry* ptr_arr) {
void G1CMMarkStack::set_empty() {
_chunks_in_chunk_list = 0;
_chunk_list = nullptr;
_free_list = nullptr;
_chunk_list.store_relaxed(nullptr);
_free_list.store_relaxed(nullptr);
_chunk_allocator.reset();
}

View File

@ -210,17 +210,17 @@ private:
ChunkAllocator _chunk_allocator;
char _pad0[DEFAULT_PADDING_SIZE];
TaskQueueEntryChunk* volatile _free_list; // Linked list of free chunks that can be allocated by users.
Atomic<TaskQueueEntryChunk*> _free_list; // Linked list of free chunks that can be allocated by users.
char _pad1[DEFAULT_PADDING_SIZE - sizeof(TaskQueueEntryChunk*)];
TaskQueueEntryChunk* volatile _chunk_list; // List of chunks currently containing data.
Atomic<TaskQueueEntryChunk*> _chunk_list; // List of chunks currently containing data.
volatile size_t _chunks_in_chunk_list;
char _pad2[DEFAULT_PADDING_SIZE - sizeof(TaskQueueEntryChunk*) - sizeof(size_t)];
// Atomically add the given chunk to the list.
void add_chunk_to_list(TaskQueueEntryChunk* volatile* list, TaskQueueEntryChunk* elem);
void add_chunk_to_list(Atomic<TaskQueueEntryChunk*>* list, TaskQueueEntryChunk* elem);
// Atomically remove and return a chunk from the given list. Returns null if the
// list is empty.
TaskQueueEntryChunk* remove_chunk_from_list(TaskQueueEntryChunk* volatile* list);
TaskQueueEntryChunk* remove_chunk_from_list(Atomic<TaskQueueEntryChunk*>* list);
void add_chunk_to_chunk_list(TaskQueueEntryChunk* elem);
void add_chunk_to_free_list(TaskQueueEntryChunk* elem);
@ -252,7 +252,7 @@ private:
// Return whether the chunk list is empty. Racy due to unsynchronized access to
// _chunk_list.
bool is_empty() const { return _chunk_list == nullptr; }
bool is_empty() const { return _chunk_list.load_relaxed() == nullptr; }
size_t capacity() const { return _chunk_allocator.capacity(); }

View File

@ -90,7 +90,7 @@ inline void G1CMMarkStack::iterate(Fn fn) const {
size_t num_chunks = 0;
TaskQueueEntryChunk* cur = _chunk_list;
TaskQueueEntryChunk* cur = _chunk_list.load_relaxed();
while (cur != nullptr) {
guarantee(num_chunks <= _chunks_in_chunk_list, "Found %zu oop chunks which is more than there should be", num_chunks);

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -30,7 +30,6 @@
#include "gc/g1/g1HeapRegionPrinter.hpp"
#include "gc/g1/g1RemSetTrackingPolicy.hpp"
#include "logging/log.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/mutexLocker.hpp"
struct G1UpdateRegionLivenessAndSelectForRebuildTask::G1OnRegionClosure : public G1HeapRegionClosure {
@ -154,7 +153,7 @@ void G1UpdateRegionLivenessAndSelectForRebuildTask::work(uint worker_id) {
G1OnRegionClosure on_region_cl(_g1h, _cm, &local_cleanup_list);
_g1h->heap_region_par_iterate_from_worker_offset(&on_region_cl, &_hrclaimer, worker_id);
AtomicAccess::add(&_total_selected_for_rebuild, on_region_cl._num_selected_for_rebuild);
_total_selected_for_rebuild.add_then_fetch(on_region_cl._num_selected_for_rebuild);
// Update the old/humongous region sets
_g1h->remove_from_old_gen_sets(on_region_cl._num_old_regions_removed,

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2025, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -29,6 +29,7 @@
#include "gc/g1/g1HeapRegionManager.hpp"
#include "gc/g1/g1HeapRegionSet.hpp"
#include "gc/shared/workerThread.hpp"
#include "runtime/atomic.hpp"
class G1CollectedHeap;
class G1ConcurrentMark;
@ -41,7 +42,7 @@ class G1UpdateRegionLivenessAndSelectForRebuildTask : public WorkerTask {
G1ConcurrentMark* _cm;
G1HeapRegionClaimer _hrclaimer;
uint volatile _total_selected_for_rebuild;
Atomic<uint> _total_selected_for_rebuild;
// Reclaimed empty regions
G1FreeRegionList _cleanup_list;
@ -57,7 +58,9 @@ public:
void work(uint worker_id) override;
uint total_selected_for_rebuild() const { return _total_selected_for_rebuild; }
uint total_selected_for_rebuild() const {
return _total_selected_for_rebuild.load_relaxed();
}
static uint desired_num_workers(uint num_regions);
};

View File

@ -28,6 +28,7 @@
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1CollectionSet.hpp"
#include "gc/g1/g1ConcurrentRefine.hpp"
#include "gc/g1/g1ConcurrentRefineStats.inline.hpp"
#include "gc/g1/g1ConcurrentRefineSweepTask.hpp"
#include "gc/g1/g1ConcurrentRefineThread.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -22,7 +22,7 @@
*
*/
#include "gc/g1/g1ConcurrentRefineStats.hpp"
#include "gc/g1/g1ConcurrentRefineStats.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/timer.hpp"
@ -39,19 +39,27 @@ G1ConcurrentRefineStats::G1ConcurrentRefineStats() :
{}
void G1ConcurrentRefineStats::add_atomic(G1ConcurrentRefineStats* other) {
AtomicAccess::add(&_sweep_duration, other->_sweep_duration, memory_order_relaxed);
AtomicAccess::add(&_yield_during_sweep_duration, other->_yield_during_sweep_duration, memory_order_relaxed);
_sweep_duration.add_then_fetch(other->_sweep_duration.load_relaxed(), memory_order_relaxed);
_yield_during_sweep_duration.add_then_fetch(other->yield_during_sweep_duration(), memory_order_relaxed);
AtomicAccess::add(&_cards_scanned, other->_cards_scanned, memory_order_relaxed);
AtomicAccess::add(&_cards_clean, other->_cards_clean, memory_order_relaxed);
AtomicAccess::add(&_cards_not_parsable, other->_cards_not_parsable, memory_order_relaxed);
AtomicAccess::add(&_cards_already_refer_to_cset, other->_cards_already_refer_to_cset, memory_order_relaxed);
AtomicAccess::add(&_cards_refer_to_cset, other->_cards_refer_to_cset, memory_order_relaxed);
AtomicAccess::add(&_cards_no_cross_region, other->_cards_no_cross_region, memory_order_relaxed);
_cards_scanned.add_then_fetch(other->cards_scanned(), memory_order_relaxed);
_cards_clean.add_then_fetch(other->cards_clean(), memory_order_relaxed);
_cards_not_parsable.add_then_fetch(other->cards_not_parsable(), memory_order_relaxed);
_cards_already_refer_to_cset.add_then_fetch(other->cards_already_refer_to_cset(), memory_order_relaxed);
_cards_refer_to_cset.add_then_fetch(other->cards_refer_to_cset(), memory_order_relaxed);
_cards_no_cross_region.add_then_fetch(other->cards_no_cross_region(), memory_order_relaxed);
AtomicAccess::add(&_refine_duration, other->_refine_duration, memory_order_relaxed);
_refine_duration.add_then_fetch(other->refine_duration(), memory_order_relaxed);
}
void G1ConcurrentRefineStats::reset() {
*this = G1ConcurrentRefineStats();
_sweep_duration.store_relaxed(0);
_yield_during_sweep_duration.store_relaxed(0);
_cards_scanned.store_relaxed(0);
_cards_clean.store_relaxed(0);
_cards_not_parsable.store_relaxed(0);
_cards_already_refer_to_cset.store_relaxed(0);
_cards_refer_to_cset.store_relaxed(0);
_cards_no_cross_region.store_relaxed(0);
_refine_duration.store_relaxed(0);
}
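The getters and incrementers that used to be defined inline in the header now only have declarations there (see the header hunk below) and presumably move to the new g1ConcurrentRefineStats.inline.hpp included at the top of this file. That file is not part of this excerpt; the following is a hypothetical sketch of what two of those definitions might look like, following the load_relaxed/add_then_fetch pattern used elsewhere in this change.

// Hypothetical contents of g1ConcurrentRefineStats.inline.hpp (not shown in
// the diff); the exact definitions are an assumption.
inline size_t G1ConcurrentRefineStats::cards_scanned() const {
  return _cards_scanned.load_relaxed();
}

inline void G1ConcurrentRefineStats::inc_cards_scanned(size_t increment) {
  _cards_scanned.add_then_fetch(increment, memory_order_relaxed);
}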

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -26,61 +26,61 @@
#define SHARE_GC_G1_G1CONCURRENTREFINESTATS_HPP
#include "memory/allocation.hpp"
#include "runtime/atomic.hpp"
#include "utilities/globalDefinitions.hpp"
#include "utilities/ticks.hpp"
// Collection of statistics for concurrent refinement processing.
// Used for collecting per-thread statistics and for summaries over a
// collection of threads.
class G1ConcurrentRefineStats : public CHeapObj<mtGC> {
jlong _sweep_duration; // Time spent sweeping the table finding non-clean cards
// and refining them.
jlong _yield_during_sweep_duration; // Time spent yielding during the sweep (not doing the sweep).
Atomic<jlong> _sweep_duration; // Time spent sweeping the table finding non-clean cards
// and refining them.
Atomic<jlong> _yield_during_sweep_duration; // Time spent yielding during the sweep (not doing the sweep).
size_t _cards_scanned; // Total number of cards scanned.
size_t _cards_clean; // Number of cards found clean.
size_t _cards_not_parsable; // Number of cards we could not parse and left unrefined.
size_t _cards_already_refer_to_cset;// Number of cards marked found to be already young.
size_t _cards_refer_to_cset; // Number of dirty cards that were recently found to contain a to-cset reference.
size_t _cards_no_cross_region; // Number of dirty cards that were dirtied, but then cleaned again by the mutator.
Atomic<size_t> _cards_scanned; // Total number of cards scanned.
Atomic<size_t> _cards_clean; // Number of cards found clean.
Atomic<size_t> _cards_not_parsable; // Number of cards we could not parse and left unrefined.
Atomic<size_t> _cards_already_refer_to_cset;// Number of cards marked found to be already young.
Atomic<size_t> _cards_refer_to_cset; // Number of dirty cards that were recently found to contain a to-cset reference.
Atomic<size_t> _cards_no_cross_region; // Number of dirty cards that were dirtied, but then cleaned again by the mutator.
jlong _refine_duration; // Time spent during actual refinement.
Atomic<jlong> _refine_duration; // Time spent during actual refinement.
public:
G1ConcurrentRefineStats();
// Time spent performing sweeping the refinement table (includes actual refinement,
// but not yield time).
jlong sweep_duration() const { return _sweep_duration - _yield_during_sweep_duration; }
jlong yield_during_sweep_duration() const { return _yield_during_sweep_duration; }
jlong refine_duration() const { return _refine_duration; }
inline jlong sweep_duration() const;
inline jlong yield_during_sweep_duration() const;
inline jlong refine_duration() const;
// Number of refined cards.
size_t refined_cards() const { return cards_not_clean(); }
inline size_t refined_cards() const;
size_t cards_scanned() const { return _cards_scanned; }
size_t cards_clean() const { return _cards_clean; }
size_t cards_not_clean() const { return _cards_scanned - _cards_clean; }
size_t cards_not_parsable() const { return _cards_not_parsable; }
size_t cards_already_refer_to_cset() const { return _cards_already_refer_to_cset; }
size_t cards_refer_to_cset() const { return _cards_refer_to_cset; }
size_t cards_no_cross_region() const { return _cards_no_cross_region; }
inline size_t cards_scanned() const;
inline size_t cards_clean() const;
inline size_t cards_not_clean() const;
inline size_t cards_not_parsable() const;
inline size_t cards_already_refer_to_cset() const;
inline size_t cards_refer_to_cset() const;
inline size_t cards_no_cross_region() const;
// Number of cards that were marked dirty and in need of refinement. This includes cards recently
// found to refer to the collection set as they originally were dirty.
size_t cards_pending() const { return cards_not_clean() - _cards_already_refer_to_cset; }
inline size_t cards_pending() const;
size_t cards_to_cset() const { return _cards_already_refer_to_cset + _cards_refer_to_cset; }
inline size_t cards_to_cset() const;
void inc_sweep_time(jlong t) { _sweep_duration += t; }
void inc_yield_during_sweep_duration(jlong t) { _yield_during_sweep_duration += t; }
void inc_refine_duration(jlong t) { _refine_duration += t; }
inline void inc_sweep_time(jlong t);
inline void inc_yield_during_sweep_duration(jlong t);
inline void inc_refine_duration(jlong t);
void inc_cards_scanned(size_t increment) { _cards_scanned += increment; }
void inc_cards_clean(size_t increment) { _cards_clean += increment; }
void inc_cards_not_parsable() { _cards_not_parsable++; }
void inc_cards_already_refer_to_cset() { _cards_already_refer_to_cset++; }
void inc_cards_refer_to_cset() { _cards_refer_to_cset++; }
void inc_cards_no_cross_region() { _cards_no_cross_region++; }
inline void inc_cards_scanned(size_t increment);
inline void inc_cards_clean(size_t increment);
inline void inc_cards_not_parsable();
inline void inc_cards_already_refer_to_cset();
inline void inc_cards_refer_to_cset();
inline void inc_cards_no_cross_region();
void add_atomic(G1ConcurrentRefineStats* other);

View File

@ -0,0 +1,118 @@
/*
* Copyright (c) 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#ifndef SHARE_GC_G1_G1CONCURRENTREFINESTATS_INLINE_HPP
#define SHARE_GC_G1_G1CONCURRENTREFINESTATS_INLINE_HPP
#include "gc/g1/g1ConcurrentRefineStats.hpp"
inline jlong G1ConcurrentRefineStats::sweep_duration() const {
return _sweep_duration.load_relaxed() - yield_during_sweep_duration();
}
inline jlong G1ConcurrentRefineStats::yield_during_sweep_duration() const {
return _yield_during_sweep_duration.load_relaxed();
}
inline jlong G1ConcurrentRefineStats::refine_duration() const {
return _refine_duration.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::refined_cards() const {
return cards_not_clean();
}
inline size_t G1ConcurrentRefineStats::cards_scanned() const {
return _cards_scanned.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_clean() const {
return _cards_clean.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_not_clean() const {
return cards_scanned() - cards_clean();
}
inline size_t G1ConcurrentRefineStats::cards_not_parsable() const {
return _cards_not_parsable.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_already_refer_to_cset() const {
return _cards_already_refer_to_cset.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_refer_to_cset() const {
return _cards_refer_to_cset.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_no_cross_region() const {
return _cards_no_cross_region.load_relaxed();
}
inline size_t G1ConcurrentRefineStats::cards_pending() const {
return cards_not_clean() - cards_already_refer_to_cset();
}
inline size_t G1ConcurrentRefineStats::cards_to_cset() const {
return cards_already_refer_to_cset() + cards_refer_to_cset();
}
inline void G1ConcurrentRefineStats::inc_sweep_time(jlong t) {
_sweep_duration.store_relaxed(_sweep_duration.load_relaxed() + t);
}
inline void G1ConcurrentRefineStats::inc_yield_during_sweep_duration(jlong t) {
_yield_during_sweep_duration.store_relaxed(yield_during_sweep_duration() + t);
}
inline void G1ConcurrentRefineStats::inc_refine_duration(jlong t) {
_refine_duration.store_relaxed(refine_duration() + t);
}
inline void G1ConcurrentRefineStats::inc_cards_scanned(size_t increment) {
_cards_scanned.store_relaxed(cards_scanned() + increment);
}
inline void G1ConcurrentRefineStats::inc_cards_clean(size_t increment) {
_cards_clean.store_relaxed(cards_clean() + increment);
}
inline void G1ConcurrentRefineStats::inc_cards_not_parsable() {
_cards_not_parsable.store_relaxed(cards_not_parsable() + 1);
}
inline void G1ConcurrentRefineStats::inc_cards_already_refer_to_cset() {
_cards_already_refer_to_cset.store_relaxed(cards_already_refer_to_cset() + 1);
}
inline void G1ConcurrentRefineStats::inc_cards_refer_to_cset() {
_cards_refer_to_cset.store_relaxed(cards_refer_to_cset() + 1);
}
inline void G1ConcurrentRefineStats::inc_cards_no_cross_region() {
_cards_no_cross_region.store_relaxed(cards_no_cross_region() + 1);
}
#endif // SHARE_GC_G1_G1CONCURRENTREFINESTATS_INLINE_HPP

View File

@ -24,6 +24,7 @@
#include "gc/g1/g1CardTableClaimTable.inline.hpp"
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1ConcurrentRefineStats.inline.hpp"
#include "gc/g1/g1ConcurrentRefineSweepTask.hpp"
class G1RefineRegionClosure : public G1HeapRegionClosure {

View File

@ -25,10 +25,10 @@
#ifndef SHARE_GC_G1_G1CONCURRENTREFINESWEEPTASK_HPP
#define SHARE_GC_G1_G1CONCURRENTREFINESWEEPTASK_HPP
#include "gc/g1/g1ConcurrentRefineStats.hpp"
#include "gc/shared/workerThread.hpp"
class G1CardTableClaimTable;
class G1ConcurrentRefineStats;
class G1ConcurrentRefineSweepTask : public WorkerTask {
G1CardTableClaimTable* _scan_state;

View File

@ -26,7 +26,7 @@
#include "gc/g1/g1CardTableClaimTable.inline.hpp"
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1ConcurrentRefine.hpp"
#include "gc/g1/g1ConcurrentRefineStats.hpp"
#include "gc/g1/g1ConcurrentRefineStats.inline.hpp"
#include "gc/g1/g1ConcurrentRefineSweepTask.hpp"
#include "gc/g1/g1ConcurrentRefineThread.hpp"
#include "gc/shared/gcTraceTime.inline.hpp"

View File

@ -25,7 +25,6 @@
#ifndef SHARE_GC_G1_G1CONCURRENTREFINETHREAD_HPP
#define SHARE_GC_G1_G1CONCURRENTREFINETHREAD_HPP
#include "gc/g1/g1ConcurrentRefineStats.hpp"
#include "gc/shared/concurrentGCThread.hpp"
#include "runtime/mutex.hpp"
#include "utilities/globalDefinitions.hpp"

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2015, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2015, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -22,13 +22,24 @@
*
*/
#include "gc/g1/g1EvacStats.hpp"
#include "gc/g1/g1EvacStats.inline.hpp"
#include "gc/shared/gc_globals.hpp"
#include "gc/shared/gcId.hpp"
#include "logging/log.hpp"
#include "memory/allocation.inline.hpp"
#include "runtime/globals.hpp"
void G1EvacStats::reset() {
PLABStats::reset();
_region_end_waste.store_relaxed(0);
_regions_filled.store_relaxed(0);
_num_plab_filled.store_relaxed(0);
_direct_allocated.store_relaxed(0);
_num_direct_allocated.store_relaxed(0);
_failure_used.store_relaxed(0);
_failure_waste.store_relaxed(0);
}
void G1EvacStats::log_plab_allocation() {
log_debug(gc, plab)("%s PLAB allocation: "
"allocated: %zuB, "
@ -51,13 +62,13 @@ void G1EvacStats::log_plab_allocation() {
"failure used: %zuB, "
"failure wasted: %zuB",
_description,
_region_end_waste * HeapWordSize,
_regions_filled,
_num_plab_filled,
_direct_allocated * HeapWordSize,
_num_direct_allocated,
_failure_used * HeapWordSize,
_failure_waste * HeapWordSize);
region_end_waste() * HeapWordSize,
regions_filled(),
num_plab_filled(),
direct_allocated() * HeapWordSize,
num_direct_allocated(),
failure_used() * HeapWordSize,
failure_waste() * HeapWordSize);
}
void G1EvacStats::log_sizing(size_t calculated_words, size_t net_desired_words) {
@ -109,7 +120,7 @@ size_t G1EvacStats::compute_desired_plab_size() const {
// threads do not allocate anything but a few rather large objects. In this
// degenerate case the PLAB size would simply quickly tend to minimum PLAB size,
// which is an okay reaction.
size_t const used_for_waste_calculation = used() > _region_end_waste ? used() - _region_end_waste : 0;
size_t const used_for_waste_calculation = used() > region_end_waste() ? used() - region_end_waste() : 0;
size_t const total_waste_allowed = used_for_waste_calculation * TargetPLABWastePct;
return (size_t)((double)total_waste_allowed / (100 - G1LastPLABAverageOccupancy));
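For the sizing expression above, a worked example with purely illustrative values (not claimed to be the JDK defaults):

// used()                     = 1,000,000 words
// region_end_waste()         =    20,000 words
// TargetPLABWastePct         = 10
// G1LastPLABAverageOccupancy = 50
// used_for_waste_calculation = 1,000,000 - 20,000     =   980,000 words
// total_waste_allowed        =   980,000 * 10         = 9,800,000
// desired PLAB size          = 9,800,000 / (100 - 50) =   196,000 words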

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2015, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2015, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,6 +27,7 @@
#include "gc/shared/gcUtil.hpp"
#include "gc/shared/plab.hpp"
#include "runtime/atomic.hpp"
// Records various memory allocation statistics gathered during evacuation. All sizes
// are in HeapWords.
@ -36,30 +37,21 @@ class G1EvacStats : public PLABStats {
AdaptiveWeightedAverage
_net_plab_size_filter; // Integrator with decay
size_t _region_end_waste; // Number of words wasted due to skipping to the next region.
uint _regions_filled; // Number of regions filled completely.
size_t _num_plab_filled; // Number of PLABs filled and retired.
size_t _direct_allocated; // Number of words allocated directly into the regions.
size_t _num_direct_allocated; // Number of direct allocation attempts.
Atomic<size_t> _region_end_waste; // Number of words wasted due to skipping to the next region.
Atomic<uint> _regions_filled; // Number of regions filled completely.
Atomic<size_t> _num_plab_filled; // Number of PLABs filled and retired.
Atomic<size_t> _direct_allocated; // Number of words allocated directly into the regions.
Atomic<size_t> _num_direct_allocated; // Number of direct allocation attempts.
// Number of words in live objects remaining in regions that ultimately suffered an
// evacuation failure. This is used in the regions when the regions are made old regions.
size_t _failure_used;
Atomic<size_t> _failure_used;
// Number of words wasted in regions which failed evacuation. This is the sum of space
// for objects successfully copied out of the regions (now dead space) plus waste at the
// end of regions.
size_t _failure_waste;
Atomic<size_t> _failure_waste;
virtual void reset() {
PLABStats::reset();
_region_end_waste = 0;
_regions_filled = 0;
_num_plab_filled = 0;
_direct_allocated = 0;
_num_direct_allocated = 0;
_failure_used = 0;
_failure_waste = 0;
}
virtual void reset();
void log_plab_allocation();
void log_sizing(size_t calculated_words, size_t net_desired_words);
@ -77,16 +69,16 @@ public:
// Should be called at the end of a GC pause.
void adjust_desired_plab_size();
uint regions_filled() const { return _regions_filled; }
size_t num_plab_filled() const { return _num_plab_filled; }
size_t region_end_waste() const { return _region_end_waste; }
size_t direct_allocated() const { return _direct_allocated; }
size_t num_direct_allocated() const { return _num_direct_allocated; }
uint regions_filled() const;
size_t num_plab_filled() const;
size_t region_end_waste() const;
size_t direct_allocated() const;
size_t num_direct_allocated() const;
// Amount of space in heapwords used in the failing regions when an evacuation failure happens.
size_t failure_used() const { return _failure_used; }
size_t failure_used() const;
// Amount of space in heapwords wasted (unused) in the failing regions when an evacuation failure happens.
size_t failure_waste() const { return _failure_waste; }
size_t failure_waste() const;
inline void add_num_plab_filled(size_t value);
inline void add_direct_allocated(size_t value);

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2015, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2015, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -27,28 +27,54 @@
#include "gc/g1/g1EvacStats.hpp"
#include "runtime/atomicAccess.hpp"
inline uint G1EvacStats::regions_filled() const {
return _regions_filled.load_relaxed();
}
inline size_t G1EvacStats::num_plab_filled() const {
return _num_plab_filled.load_relaxed();
}
inline size_t G1EvacStats::region_end_waste() const {
return _region_end_waste.load_relaxed();
}
inline size_t G1EvacStats::direct_allocated() const {
return _direct_allocated.load_relaxed();
}
inline size_t G1EvacStats::num_direct_allocated() const {
return _num_direct_allocated.load_relaxed();
}
inline size_t G1EvacStats::failure_used() const {
return _failure_used.load_relaxed();
}
inline size_t G1EvacStats::failure_waste() const {
return _failure_waste.load_relaxed();
}
inline void G1EvacStats::add_direct_allocated(size_t value) {
AtomicAccess::add(&_direct_allocated, value, memory_order_relaxed);
_direct_allocated.add_then_fetch(value, memory_order_relaxed);
}
inline void G1EvacStats::add_num_plab_filled(size_t value) {
AtomicAccess::add(&_num_plab_filled, value, memory_order_relaxed);
_num_plab_filled.add_then_fetch(value, memory_order_relaxed);
}
inline void G1EvacStats::add_num_direct_allocated(size_t value) {
AtomicAccess::add(&_num_direct_allocated, value, memory_order_relaxed);
_num_direct_allocated.add_then_fetch(value, memory_order_relaxed);
}
inline void G1EvacStats::add_region_end_waste(size_t value) {
AtomicAccess::add(&_region_end_waste, value, memory_order_relaxed);
AtomicAccess::inc(&_regions_filled, memory_order_relaxed);
_region_end_waste.add_then_fetch(value, memory_order_relaxed);
_regions_filled.add_then_fetch(1u, memory_order_relaxed);
}
inline void G1EvacStats::add_failure_used_and_waste(size_t used, size_t waste) {
AtomicAccess::add(&_failure_used, used, memory_order_relaxed);
AtomicAccess::add(&_failure_waste, waste, memory_order_relaxed);
_failure_used.add_then_fetch(used, memory_order_relaxed);
_failure_waste.add_then_fetch(waste, memory_order_relaxed);
}
#endif // SHARE_GC_G1_G1EVACSTATS_INLINE_HPP

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -116,8 +116,8 @@ G1FullCollector::G1FullCollector(G1CollectedHeap* heap,
_num_workers(calc_active_workers()),
_has_compaction_targets(false),
_has_humongous(false),
_oop_queue_set(_num_workers),
_array_queue_set(_num_workers),
_marking_task_queues(_num_workers),
_partial_array_state_manager(nullptr),
_preserved_marks_set(true),
_serial_compaction_point(this, nullptr),
_humongous_compaction_point(this, nullptr),
@ -134,32 +134,40 @@ G1FullCollector::G1FullCollector(G1CollectedHeap* heap,
_compaction_points = NEW_C_HEAP_ARRAY(G1FullGCCompactionPoint*, _num_workers, mtGC);
_live_stats = NEW_C_HEAP_ARRAY(G1RegionMarkStats, _heap->max_num_regions(), mtGC);
_compaction_tops = NEW_C_HEAP_ARRAY(HeapWord*, _heap->max_num_regions(), mtGC);
_compaction_tops = NEW_C_HEAP_ARRAY(Atomic<HeapWord*>, _heap->max_num_regions(), mtGC);
for (uint j = 0; j < heap->max_num_regions(); j++) {
_live_stats[j].clear();
_compaction_tops[j] = nullptr;
::new (&_compaction_tops[j]) Atomic<HeapWord*>{};
}
_partial_array_state_manager = new PartialArrayStateManager(_num_workers);
for (uint i = 0; i < _num_workers; i++) {
_markers[i] = new G1FullGCMarker(this, i, _live_stats);
_compaction_points[i] = new G1FullGCCompactionPoint(this, _preserved_marks_set.get(i));
_oop_queue_set.register_queue(i, marker(i)->oop_stack());
_array_queue_set.register_queue(i, marker(i)->objarray_stack());
_marking_task_queues.register_queue(i, marker(i)->task_queue());
}
_serial_compaction_point.set_preserved_stack(_preserved_marks_set.get(0));
_humongous_compaction_point.set_preserved_stack(_preserved_marks_set.get(0));
_region_attr_table.initialize(heap->reserved(), G1HeapRegion::GrainBytes);
}
PartialArrayStateManager* G1FullCollector::partial_array_state_manager() const {
return _partial_array_state_manager;
}
G1FullCollector::~G1FullCollector() {
for (uint i = 0; i < _num_workers; i++) {
delete _markers[i];
delete _compaction_points[i];
}
delete _partial_array_state_manager;
FREE_C_HEAP_ARRAY(G1FullGCMarker*, _markers);
FREE_C_HEAP_ARRAY(G1FullGCCompactionPoint*, _compaction_points);
FREE_C_HEAP_ARRAY(HeapWord*, _compaction_tops);
FREE_C_HEAP_ARRAY(Atomic<HeapWord*>, _compaction_tops);
FREE_C_HEAP_ARRAY(G1RegionMarkStats, _live_stats);
}
@ -279,8 +287,8 @@ public:
uint index = (_tm == RefProcThreadModel::Single) ? 0 : worker_id;
G1FullKeepAliveClosure keep_alive(_collector.marker(index));
BarrierEnqueueDiscoveredFieldClosure enqueue;
G1FollowStackClosure* complete_gc = _collector.marker(index)->stack_closure();
_rp_task->rp_work(worker_id, &is_alive, &keep_alive, &enqueue, complete_gc);
G1MarkStackClosure* complete_marking = _collector.marker(index)->stack_closure();
_rp_task->rp_work(worker_id, &is_alive, &keep_alive, &enqueue, complete_marking);
}
};
@ -302,7 +310,7 @@ void G1FullCollector::phase1_mark_live_objects() {
const ReferenceProcessorStats& stats = reference_processor()->process_discovered_references(task, _heap->workers(), pt);
scope()->tracer()->report_gc_reference_stats(stats);
pt.print_all_references();
assert(marker(0)->oop_stack()->is_empty(), "Should be no oops on the stack");
assert(marker(0)->task_queue()->is_empty(), "Should be no oops on the stack");
}
{
@ -328,8 +336,7 @@ void G1FullCollector::phase1_mark_live_objects() {
scope()->tracer()->report_object_count_after_gc(&_is_alive, _heap->workers());
}
#if TASKQUEUE_STATS
oop_queue_set()->print_and_reset_taskqueue_stats("Oop Queue");
array_queue_set()->print_and_reset_taskqueue_stats("ObjArrayOop Queue");
marking_task_queues()->print_and_reset_taskqueue_stats("Marking Task Queue");
#endif
}
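The constructor change above allocates raw C-heap storage for Atomic<HeapWord*> slots and then placement-constructs each element, since the array macro hands back uninitialized memory and the wrapper object needs to be constructed before first use. A minimal standalone sketch of that idiom, with std::atomic and malloc standing in for Atomic<T> and NEW_C_HEAP_ARRAY (all names here are illustrative):

#include <atomic>
#include <cstdlib>
#include <new>

int main() {
  const std::size_t n = 16;
  // Raw, uninitialized storage, as the C-heap array macro would hand back.
  void* raw = std::malloc(n * sizeof(std::atomic<int*>));
  auto* slots = static_cast<std::atomic<int*>*>(raw);
  // Placement-new each slot so the atomic object is properly constructed
  // (here initialized to nullptr) before anyone loads or stores it.
  for (std::size_t i = 0; i < n; i++) {
    ::new (&slots[i]) std::atomic<int*>{nullptr};
  }
  slots[3].store(nullptr, std::memory_order_relaxed);
  std::free(raw);  // std::atomic<T*> is trivially destructible
  return 0;
}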

View File

@ -79,8 +79,8 @@ class G1FullCollector : StackObj {
bool _has_humongous;
G1FullGCMarker** _markers;
G1FullGCCompactionPoint** _compaction_points;
OopQueueSet _oop_queue_set;
ObjArrayTaskQueueSet _array_queue_set;
G1MarkTasksQueueSet _marking_task_queues;
PartialArrayStateManager* _partial_array_state_manager;
PreservedMarksSet _preserved_marks_set;
G1FullGCCompactionPoint _serial_compaction_point;
G1FullGCCompactionPoint _humongous_compaction_point;
@ -96,7 +96,7 @@ class G1FullCollector : StackObj {
G1FullGCHeapRegionAttr _region_attr_table;
HeapWord* volatile* _compaction_tops;
Atomic<HeapWord*>* _compaction_tops;
public:
G1FullCollector(G1CollectedHeap* heap,
@ -113,8 +113,7 @@ public:
uint workers() { return _num_workers; }
G1FullGCMarker* marker(uint id) { return _markers[id]; }
G1FullGCCompactionPoint* compaction_point(uint id) { return _compaction_points[id]; }
OopQueueSet* oop_queue_set() { return &_oop_queue_set; }
ObjArrayTaskQueueSet* array_queue_set() { return &_array_queue_set; }
G1MarkTasksQueueSet* marking_task_queues() { return &_marking_task_queues; }
PreservedMarksSet* preserved_mark_set() { return &_preserved_marks_set; }
G1FullGCCompactionPoint* serial_compaction_point() { return &_serial_compaction_point; }
G1FullGCCompactionPoint* humongous_compaction_point() { return &_humongous_compaction_point; }
@ -125,6 +124,8 @@ public:
return _live_stats[region_index].live_words();
}
PartialArrayStateManager* partial_array_state_manager() const;
void before_marking_update_attribute_table(G1HeapRegion* hr);
inline bool is_compacting(oop obj) const;

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2020, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2020, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -63,11 +63,11 @@ void G1FullCollector::update_from_skip_compacting_to_compacting(uint region_idx)
}
void G1FullCollector::set_compaction_top(G1HeapRegion* r, HeapWord* value) {
AtomicAccess::store(&_compaction_tops[r->hrm_index()], value);
_compaction_tops[r->hrm_index()].store_relaxed(value);
}
HeapWord* G1FullCollector::compaction_top(G1HeapRegion* r) const {
return AtomicAccess::load(&_compaction_tops[r->hrm_index()]);
return _compaction_tops[r->hrm_index()].load_relaxed();
}
void G1FullCollector::set_has_compaction_targets() {

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -34,7 +34,7 @@
G1FullGCMarkTask::G1FullGCMarkTask(G1FullCollector* collector) :
G1FullGCTask("G1 Parallel Marking Task", collector),
_root_processor(G1CollectedHeap::heap(), collector->workers()),
_terminator(collector->workers(), collector->array_queue_set()) {
_terminator(collector->workers(), collector->marking_task_queues()) {
}
void G1FullGCMarkTask::work(uint worker_id) {
@ -54,10 +54,9 @@ void G1FullGCMarkTask::work(uint worker_id) {
}
// Mark stack is populated, now process and drain it.
marker->complete_marking(collector()->oop_queue_set(), collector()->array_queue_set(), &_terminator);
marker->complete_marking(collector()->marking_task_queues(), &_terminator);
// This is the point where the entire marking should have completed.
assert(marker->oop_stack()->is_empty(), "Marking should have completed");
assert(marker->objarray_stack()->is_empty(), "Array marking should have completed");
assert(marker->task_queue()->is_empty(), "Marking should have completed");
log_task("Marking task", worker_id, start);
}

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -25,6 +25,8 @@
#include "classfile/classLoaderData.hpp"
#include "classfile/classLoaderDataGraph.hpp"
#include "gc/g1/g1FullGCMarker.inline.hpp"
#include "gc/shared/partialArraySplitter.inline.hpp"
#include "gc/shared/partialArrayState.hpp"
#include "gc/shared/referenceProcessor.hpp"
#include "gc/shared/taskTerminator.hpp"
#include "gc/shared/verifyOption.hpp"
@ -36,8 +38,8 @@ G1FullGCMarker::G1FullGCMarker(G1FullCollector* collector,
_collector(collector),
_worker_id(worker_id),
_bitmap(collector->mark_bitmap()),
_oop_stack(),
_objarray_stack(),
_task_queue(),
_partial_array_splitter(collector->partial_array_state_manager(), collector->workers(), ObjArrayMarkingStride),
_mark_closure(worker_id, this, ClassLoaderData::_claim_stw_fullgc_mark, G1CollectedHeap::heap()->ref_processor_stw()),
_stack_closure(this),
_cld_closure(mark_closure(), ClassLoaderData::_claim_stw_fullgc_mark),
@ -47,24 +49,36 @@ G1FullGCMarker::G1FullGCMarker(G1FullCollector* collector,
}
G1FullGCMarker::~G1FullGCMarker() {
assert(is_empty(), "Must be empty at this point");
assert(is_task_queue_empty(), "Must be empty at this point");
}
void G1FullGCMarker::complete_marking(OopQueueSet* oop_stacks,
ObjArrayTaskQueueSet* array_stacks,
void G1FullGCMarker::process_partial_array(PartialArrayState* state, bool stolen) {
// Access state before release by claim().
objArrayOop obj_array = objArrayOop(state->source());
PartialArraySplitter::Claim claim =
_partial_array_splitter.claim(state, task_queue(), stolen);
process_array_chunk(obj_array, claim._start, claim._end);
}
void G1FullGCMarker::start_partial_array_processing(objArrayOop obj) {
mark_closure()->do_klass(obj->klass());
// Don't push empty arrays to avoid unnecessary work.
size_t array_length = obj->length();
if (array_length > 0) {
size_t initial_chunk_size = _partial_array_splitter.start(task_queue(), obj, nullptr, array_length);
process_array_chunk(obj, 0, initial_chunk_size);
}
}
void G1FullGCMarker::complete_marking(G1ScannerTasksQueueSet* task_queues,
TaskTerminator* terminator) {
do {
follow_marking_stacks();
ObjArrayTask steal_array;
if (array_stacks->steal(_worker_id, steal_array)) {
follow_array_chunk(objArrayOop(steal_array.obj()), steal_array.index());
} else {
oop steal_oop;
if (oop_stacks->steal(_worker_id, steal_oop)) {
follow_object(steal_oop);
}
process_marking_stacks();
ScannerTask stolen_task;
if (task_queues->steal(_worker_id, stolen_task)) {
dispatch_task(stolen_task, true);
}
} while (!is_empty() || !terminator->offer_termination());
} while (!is_task_queue_empty() || !terminator->offer_termination());
}
void G1FullGCMarker::flush_mark_stats_cache() {

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -28,6 +28,8 @@
#include "gc/g1/g1FullGCOopClosures.hpp"
#include "gc/g1/g1OopClosures.hpp"
#include "gc/g1/g1RegionMarkStatsCache.hpp"
#include "gc/shared/partialArraySplitter.hpp"
#include "gc/shared/partialArrayState.hpp"
#include "gc/shared/stringdedup/stringDedup.hpp"
#include "gc/shared/taskqueue.hpp"
#include "memory/iterator.hpp"
@ -38,16 +40,15 @@
#include "utilities/growableArray.hpp"
#include "utilities/stack.hpp"
typedef OverflowTaskQueue<oop, mtGC> OopQueue;
typedef OverflowTaskQueue<ObjArrayTask, mtGC> ObjArrayTaskQueue;
typedef GenericTaskQueueSet<OopQueue, mtGC> OopQueueSet;
typedef GenericTaskQueueSet<ObjArrayTaskQueue, mtGC> ObjArrayTaskQueueSet;
class G1CMBitMap;
class G1FullCollector;
class TaskTerminator;
typedef OverflowTaskQueue<ScannerTask, mtGC> G1MarkTasksQueue;
typedef GenericTaskQueueSet<G1MarkTasksQueue, mtGC> G1MarkTasksQueueSet;
class G1FullGCMarker : public CHeapObj<mtGC> {
G1FullCollector* _collector;
@ -56,56 +57,50 @@ class G1FullGCMarker : public CHeapObj<mtGC> {
G1CMBitMap* _bitmap;
// Mark stack
OopQueue _oop_stack;
ObjArrayTaskQueue _objarray_stack;
G1MarkTasksQueue _task_queue;
PartialArraySplitter _partial_array_splitter;
// Marking closures
G1MarkAndPushClosure _mark_closure;
G1FollowStackClosure _stack_closure;
G1MarkStackClosure _stack_closure;
CLDToOopClosure _cld_closure;
StringDedup::Requests _string_dedup_requests;
G1RegionMarkStatsCache _mark_stats_cache;
inline bool is_empty();
inline void push_objarray(oop obj, size_t index);
inline bool is_task_queue_empty();
inline bool mark_object(oop obj);
// Marking helpers
inline void follow_object(oop obj);
inline void follow_array(objArrayOop array);
inline void follow_array_chunk(objArrayOop array, int index);
inline void process_array_chunk(objArrayOop obj, size_t start, size_t end);
inline void dispatch_task(const ScannerTask& task, bool stolen);
// Start processing the given objArrayOop by first pushing its continuations and
// then scanning the first chunk.
void start_partial_array_processing(objArrayOop obj);
// Process the given continuation.
void process_partial_array(PartialArrayState* state, bool stolen);
inline void publish_and_drain_oop_tasks();
// Try to publish all contents from the objArray task queue overflow stack to
// the shared objArray stack.
// Returns true and a valid task if there has not been enough space in the shared
// objArray stack, otherwise returns false and the task is invalid.
inline bool publish_or_pop_objarray_tasks(ObjArrayTask& task);
public:
G1FullGCMarker(G1FullCollector* collector,
uint worker_id,
G1RegionMarkStats* mark_stats);
~G1FullGCMarker();
// Stack getters
OopQueue* oop_stack() { return &_oop_stack; }
ObjArrayTaskQueue* objarray_stack() { return &_objarray_stack; }
G1MarkTasksQueue* task_queue() { return &_task_queue; }
// Marking entry points
template <class T> inline void mark_and_push(T* p);
inline void follow_marking_stacks();
void complete_marking(OopQueueSet* oop_stacks,
ObjArrayTaskQueueSet* array_stacks,
inline void process_marking_stacks();
void complete_marking(G1MarkTasksQueueSet* task_queues,
TaskTerminator* terminator);
// Closure getters
CLDToOopClosure* cld_closure() { return &_cld_closure; }
G1MarkAndPushClosure* mark_closure() { return &_mark_closure; }
G1FollowStackClosure* stack_closure() { return &_stack_closure; }
G1MarkStackClosure* stack_closure() { return &_stack_closure; }
// Flush live bytes to regions
void flush_mark_stats_cache();
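The separate oop and objArray queues are replaced by one queue of ScannerTask entries that can describe either a plain object or a partial-array continuation, so a single steal loop serves both kinds of work. The sketch below shows the general shape of such a tagged task; it is a hypothetical layout, not HotSpot's actual ScannerTask encoding, and it assumes pointers are at least 2-byte aligned:

#include <cstdint>

// Hypothetical tagged work item: the low pointer bit says whether this is a
// plain object to scan or a partial-array continuation, so both kinds can
// share one work-stealing queue.
struct MarkTaskSketch {
  std::uintptr_t bits;

  static MarkTaskSketch for_object(void* obj) {
    return {reinterpret_cast<std::uintptr_t>(obj)};
  }
  static MarkTaskSketch for_partial_array(void* state) {
    return {reinterpret_cast<std::uintptr_t>(state) | 1u};
  }
  bool is_partial_array() const { return (bits & 1u) != 0; }
  void* pointer() const {
    return reinterpret_cast<void*>(bits & ~static_cast<std::uintptr_t>(1));
  }
};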

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -42,6 +42,7 @@
#include "oops/access.inline.hpp"
#include "oops/compressedOops.inline.hpp"
#include "oops/oop.inline.hpp"
#include "utilities/checkedCast.hpp"
#include "utilities/debug.hpp"
inline bool G1FullGCMarker::mark_object(oop obj) {
@ -71,94 +72,55 @@ template <class T> inline void G1FullGCMarker::mark_and_push(T* p) {
if (!CompressedOops::is_null(heap_oop)) {
oop obj = CompressedOops::decode_not_null(heap_oop);
if (mark_object(obj)) {
_oop_stack.push(obj);
_task_queue.push(ScannerTask(obj));
}
assert(_bitmap->is_marked(obj), "Must be marked");
}
}
inline bool G1FullGCMarker::is_empty() {
return _oop_stack.is_empty() && _objarray_stack.is_empty();
inline bool G1FullGCMarker::is_task_queue_empty() {
return _task_queue.is_empty();
}
inline void G1FullGCMarker::push_objarray(oop obj, size_t index) {
ObjArrayTask task(obj, index);
assert(task.is_valid(), "bad ObjArrayTask");
_objarray_stack.push(task);
inline void G1FullGCMarker::process_array_chunk(objArrayOop obj, size_t start, size_t end) {
obj->oop_iterate_elements_range(mark_closure(),
checked_cast<int>(start),
checked_cast<int>(end));
}
inline void G1FullGCMarker::follow_array(objArrayOop array) {
mark_closure()->do_klass(array->klass());
// Don't push empty arrays to avoid unnecessary work.
if (array->length() > 0) {
push_objarray(array, 0);
}
}
void G1FullGCMarker::follow_array_chunk(objArrayOop array, int index) {
const int len = array->length();
const int beg_index = index;
assert(beg_index < len || len == 0, "index too large");
const int stride = MIN2(len - beg_index, (int) ObjArrayMarkingStride);
const int end_index = beg_index + stride;
// Push the continuation first to allow more efficient work stealing.
if (end_index < len) {
push_objarray(array, end_index);
}
array->oop_iterate_elements_range(mark_closure(), beg_index, end_index);
}
inline void G1FullGCMarker::follow_object(oop obj) {
assert(_bitmap->is_marked(obj), "should be marked");
if (obj->is_objArray()) {
// Handle object arrays explicitly to allow them to
// be split into chunks if needed.
follow_array((objArrayOop)obj);
inline void G1FullGCMarker::dispatch_task(const ScannerTask& task, bool stolen) {
if (task.is_partial_array_state()) {
assert(_bitmap->is_marked(task.to_partial_array_state()->source()), "should be marked");
process_partial_array(task.to_partial_array_state(), stolen);
} else {
obj->oop_iterate(mark_closure());
oop obj = task.to_oop();
assert(_bitmap->is_marked(obj), "should be marked");
if (obj->is_objArray()) {
// Handle object arrays explicitly to allow them to
// be split into chunks if needed.
start_partial_array_processing((objArrayOop)obj);
} else {
obj->oop_iterate(mark_closure());
}
}
}
inline void G1FullGCMarker::publish_and_drain_oop_tasks() {
oop obj;
while (_oop_stack.pop_overflow(obj)) {
if (!_oop_stack.try_push_to_taskqueue(obj)) {
assert(_bitmap->is_marked(obj), "must be marked");
follow_object(obj);
ScannerTask task;
while (_task_queue.pop_overflow(task)) {
if (!_task_queue.try_push_to_taskqueue(task)) {
dispatch_task(task, false);
}
}
while (_oop_stack.pop_local(obj)) {
assert(_bitmap->is_marked(obj), "must be marked");
follow_object(obj);
while (_task_queue.pop_local(task)) {
dispatch_task(task, false);
}
}
inline bool G1FullGCMarker::publish_or_pop_objarray_tasks(ObjArrayTask& task) {
// It is desirable to move as much as possible work from the overflow queue to
// the shared queue as quickly as possible.
while (_objarray_stack.pop_overflow(task)) {
if (!_objarray_stack.try_push_to_taskqueue(task)) {
return true;
}
}
return false;
}
void G1FullGCMarker::follow_marking_stacks() {
void G1FullGCMarker::process_marking_stacks() {
do {
// First, drain regular oop stack.
publish_and_drain_oop_tasks();
// Then process ObjArrays one at a time to avoid marking stack bloat.
ObjArrayTask task;
if (publish_or_pop_objarray_tasks(task) ||
_objarray_stack.pop_local(task)) {
follow_array_chunk(objArrayOop(task.obj()), task.index());
}
} while (!is_empty());
} while (!is_task_queue_empty());
}
#endif // SHARE_GC_G1_G1FULLGCMARKER_INLINE_HPP
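start_partial_array_processing() and process_partial_array() above delegate chunking of large object arrays to PartialArraySplitter, so any worker can claim a bounded [start, end) slice instead of scanning the whole array alone. A rough standalone sketch of that chunk-claiming idea with an atomic cursor (not the PartialArraySplitter API, just the general shape; the stride value is illustrative):

#include <algorithm>
#include <atomic>
#include <cstddef>

// Several workers claim disjoint [start, end) chunks of one large array.
struct ChunkCursorSketch {
  std::atomic<std::size_t> next{0};
  std::size_t length = 0;
  std::size_t stride = 128;   // analogous in spirit to ObjArrayMarkingStride

  // Returns false once the whole array has been claimed.
  bool claim(std::size_t& start, std::size_t& end) {
    start = next.fetch_add(stride, std::memory_order_relaxed);
    if (start >= length) {
      return false;
    }
    end = std::min(start + stride, length);
    return true;
  }
};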

View File

@ -35,7 +35,7 @@
G1IsAliveClosure::G1IsAliveClosure(G1FullCollector* collector) :
G1IsAliveClosure(collector, collector->mark_bitmap()) { }
void G1FollowStackClosure::do_void() { _marker->follow_marking_stacks(); }
void G1MarkStackClosure::do_void() { _marker->process_marking_stacks(); }
void G1FullKeepAliveClosure::do_oop(oop* p) { do_oop_work(p); }
void G1FullKeepAliveClosure::do_oop(narrowOop* p) { do_oop_work(p); }

View File

@ -86,11 +86,11 @@ public:
virtual ReferenceIterationMode reference_iteration_mode() { return DO_FIELDS; }
};
class G1FollowStackClosure: public VoidClosure {
class G1MarkStackClosure: public VoidClosure {
G1FullGCMarker* _marker;
public:
G1FollowStackClosure(G1FullGCMarker* marker) : _marker(marker) {}
G1MarkStackClosure(G1FullGCMarker* marker) : _marker(marker) {}
virtual void do_void();
};

View File

@ -32,7 +32,7 @@
#include "gc/g1/g1ConcurrentMark.hpp"
#include "gc/g1/g1ConcurrentMarkThread.inline.hpp"
#include "gc/g1/g1ConcurrentRefine.hpp"
#include "gc/g1/g1ConcurrentRefineStats.hpp"
#include "gc/g1/g1ConcurrentRefineStats.inline.hpp"
#include "gc/g1/g1GCPhaseTimes.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
#include "gc/g1/g1HeapRegionRemSet.inline.hpp"

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2001, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2001, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -50,7 +50,7 @@
#include "memory/resourceArea.hpp"
#include "oops/access.inline.hpp"
#include "oops/oop.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/atomic.hpp"
#include "runtime/os.hpp"
#include "utilities/align.hpp"
#include "utilities/globalDefinitions.hpp"
@ -107,46 +107,48 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
// Set of (unique) regions that can be added to concurrently.
class G1DirtyRegions : public CHeapObj<mtGC> {
uint* _buffer;
uint _cur_idx;
Atomic<uint> _cur_idx;
size_t _max_reserved_regions;
bool* _contains;
Atomic<bool>* _contains;
public:
G1DirtyRegions(size_t max_reserved_regions) :
_buffer(NEW_C_HEAP_ARRAY(uint, max_reserved_regions, mtGC)),
_cur_idx(0),
_max_reserved_regions(max_reserved_regions),
_contains(NEW_C_HEAP_ARRAY(bool, max_reserved_regions, mtGC)) {
_contains(NEW_C_HEAP_ARRAY(Atomic<bool>, max_reserved_regions, mtGC)) {
reset();
}
~G1DirtyRegions() {
FREE_C_HEAP_ARRAY(uint, _buffer);
FREE_C_HEAP_ARRAY(bool, _contains);
FREE_C_HEAP_ARRAY(Atomic<bool>, _contains);
}
void reset() {
_cur_idx = 0;
::memset(_contains, false, _max_reserved_regions * sizeof(bool));
_cur_idx.store_relaxed(0);
for (uint i = 0; i < _max_reserved_regions; i++) {
_contains[i].store_relaxed(false);
}
}
uint size() const { return _cur_idx; }
uint size() const { return _cur_idx.load_relaxed(); }
uint at(uint idx) const {
assert(idx < _cur_idx, "Index %u beyond valid regions", idx);
assert(idx < size(), "Index %u beyond valid regions", idx);
return _buffer[idx];
}
void add_dirty_region(uint region) {
if (_contains[region]) {
if (_contains[region].load_relaxed()) {
return;
}
bool marked_as_dirty = AtomicAccess::cmpxchg(&_contains[region], false, true) == false;
bool marked_as_dirty = _contains[region].compare_set(false, true);
if (marked_as_dirty) {
uint allocated = AtomicAccess::fetch_then_add(&_cur_idx, 1u);
uint allocated = _cur_idx.fetch_then_add(1u);
_buffer[allocated] = region;
}
}
@ -155,9 +157,11 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
void merge(const G1DirtyRegions* other) {
for (uint i = 0; i < other->size(); i++) {
uint region = other->at(i);
if (!_contains[region]) {
_buffer[_cur_idx++] = region;
_contains[region] = true;
if (!_contains[region].load_relaxed()) {
uint cur = _cur_idx.load_relaxed();
_buffer[cur] = region;
_cur_idx.store_relaxed(cur + 1);
_contains[region].store_relaxed(true);
}
}
}
@ -173,7 +177,7 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
class G1ClearCardTableTask : public G1AbstractSubTask {
G1CollectedHeap* _g1h;
G1DirtyRegions* _regions;
uint volatile _cur_dirty_regions;
Atomic<uint> _cur_dirty_regions;
G1RemSetScanState* _scan_state;
@ -210,8 +214,9 @@ class G1ClearCardTableTask : public G1AbstractSubTask {
void do_work(uint worker_id) override {
const uint num_regions_per_worker = num_cards_per_worker / (uint)G1HeapRegion::CardsPerRegion;
while (_cur_dirty_regions < _regions->size()) {
uint next = AtomicAccess::fetch_then_add(&_cur_dirty_regions, num_regions_per_worker);
uint cur = _cur_dirty_regions.load_relaxed();
while (cur < _regions->size()) {
uint next = _cur_dirty_regions.fetch_then_add(num_regions_per_worker);
uint max = MIN2(next + num_regions_per_worker, _regions->size());
for (uint i = next; i < max; i++) {
@ -226,6 +231,7 @@ class G1ClearCardTableTask : public G1AbstractSubTask {
// old regions use it for old->collection set candidates, so they should not be cleared
// either.
}
cur = max;
}
}
};
@ -1115,7 +1121,7 @@ class G1MergeHeapRootsTask : public WorkerTask {
bool _initial_evacuation;
volatile bool _fast_reclaim_handled;
Atomic<bool> _fast_reclaim_handled;
public:
G1MergeHeapRootsTask(G1RemSetScanState* scan_state, uint num_workers, bool initial_evacuation) :
@ -1143,8 +1149,8 @@ public:
// 1. eager-reclaim candidates
if (_initial_evacuation &&
g1h->has_humongous_reclaim_candidates() &&
!_fast_reclaim_handled &&
!AtomicAccess::cmpxchg(&_fast_reclaim_handled, false, true)) {
!_fast_reclaim_handled.load_relaxed() &&
_fast_reclaim_handled.compare_set(false, true)) {
G1GCParPhaseTimesTracker subphase_x(p, G1GCPhaseTimes::MergeER, worker_id);
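add_dirty_region() and the one-shot fast-reclaim claim above follow the same shape after this change: a cheap relaxed pre-check, a compare_set to claim, and a fetch_then_add to append (or to take the work exactly once). A standalone analogue with std::atomic (names are illustrative, not taken from the change):

#include <atomic>
#include <cstddef>
#include <vector>

struct DirtySetSketch {
  std::vector<std::atomic<bool>> contains;
  std::vector<unsigned> buffer;
  std::atomic<unsigned> cur{0};

  explicit DirtySetSketch(std::size_t n) : contains(n), buffer(n) {}

  void add(unsigned region) {
    // Cheap relaxed pre-check to skip the CAS on the common duplicate case.
    if (contains[region].load(std::memory_order_relaxed)) {
      return;
    }
    bool expected = false;
    // Only the thread whose CAS succeeds appends the region.
    if (contains[region].compare_exchange_strong(expected, true,
                                                 std::memory_order_relaxed)) {
      unsigned slot = cur.fetch_add(1, std::memory_order_relaxed);
      buffer[slot] = region;
    }
  }
};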

View File

@ -1,5 +1,5 @@
/*
* Copyright (c) 2021, 2025, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2026, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -58,6 +58,7 @@
#include "gc/shared/workerThread.hpp"
#include "jfr/jfrEvents.hpp"
#include "memory/resourceArea.hpp"
#include "runtime/atomic.hpp"
#include "runtime/threads.hpp"
#include "utilities/ticks.hpp"
@ -459,8 +460,8 @@ class G1PrepareEvacuationTask : public WorkerTask {
G1CollectedHeap* _g1h;
G1HeapRegionClaimer _claimer;
volatile uint _humongous_total;
volatile uint _humongous_candidates;
Atomic<uint> _humongous_total;
Atomic<uint> _humongous_candidates;
G1MonotonicArenaMemoryStats _all_card_set_stats;
@ -481,19 +482,19 @@ public:
}
void add_humongous_candidates(uint candidates) {
AtomicAccess::add(&_humongous_candidates, candidates);
_humongous_candidates.add_then_fetch(candidates);
}
void add_humongous_total(uint total) {
AtomicAccess::add(&_humongous_total, total);
_humongous_total.add_then_fetch(total);
}
uint humongous_candidates() {
return _humongous_candidates;
return _humongous_candidates.load_relaxed();
}
uint humongous_total() {
return _humongous_total;
return _humongous_total.load_relaxed();
}
const G1MonotonicArenaMemoryStats all_card_set_stats() const {
@ -698,7 +699,7 @@ protected:
virtual void evacuate_live_objects(G1ParScanThreadState* pss, uint worker_id) = 0;
private:
volatile bool _pinned_regions_recorded;
Atomic<bool> _pinned_regions_recorded;
public:
G1EvacuateRegionsBaseTask(const char* name,
@ -722,7 +723,7 @@ public:
G1ParScanThreadState* pss = _per_thread_states->state_for_worker(worker_id);
pss->set_ref_discoverer(_g1h->ref_processor_stw());
if (!AtomicAccess::cmpxchg(&_pinned_regions_recorded, false, true)) {
if (_pinned_regions_recorded.compare_set(false, true)) {
record_pinned_regions(pss, worker_id);
}
scan_roots(pss, worker_id);

View File

@ -46,6 +46,7 @@
#include "oops/access.inline.hpp"
#include "oops/compressedOops.inline.hpp"
#include "oops/oop.inline.hpp"
#include "runtime/atomic.hpp"
#include "runtime/prefetch.inline.hpp"
#include "runtime/threads.hpp"
#include "runtime/threadSMR.hpp"
@ -759,7 +760,7 @@ class G1PostEvacuateCollectionSetCleanupTask2::FreeCollectionSetTask : public G1
const size_t* _surviving_young_words;
uint _active_workers;
G1EvacFailureRegions* _evac_failure_regions;
volatile uint _num_retained_regions;
Atomic<uint> _num_retained_regions;
FreeCSetStats* worker_stats(uint worker) {
return &_worker_stats[worker];
@ -794,7 +795,7 @@ public:
virtual ~FreeCollectionSetTask() {
Ticks serial_time = Ticks::now();
bool has_new_retained_regions = AtomicAccess::load(&_num_retained_regions) != 0;
bool has_new_retained_regions = _num_retained_regions.load_relaxed() != 0;
if (has_new_retained_regions) {
G1CollectionSetCandidates* candidates = _g1h->collection_set()->candidates();
candidates->sort_by_efficiency();
@ -829,7 +830,7 @@ public:
// Report per-region type timings.
cl.report_timing();
AtomicAccess::add(&_num_retained_regions, cl.num_retained_regions(), memory_order_relaxed);
_num_retained_regions.add_then_fetch(cl.num_retained_regions(), memory_order_relaxed);
}
};

View File

@ -23,7 +23,6 @@
*/
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1ConcurrentRefineStats.hpp"
#include "gc/g1/g1RegionPinCache.inline.hpp"
#include "gc/g1/g1ThreadLocalData.hpp"
#include "gc/g1/g1YoungGCPreEvacuateTasks.hpp"

View File

@ -104,11 +104,8 @@ void CardTableBarrierSetC1::post_barrier(LIRAccess& access, LIR_Opr addr, LIR_Op
return;
}
BarrierSet* bs = BarrierSet::barrier_set();
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(bs);
CardTable* ct = ctbs->card_table();
LIR_Const* card_table_base = new LIR_Const(ct->byte_map_base());
SHENANDOAHGC_ONLY(assert(!UseShenandoahGC, "Shenandoah byte_map_base is not constant.");)
CardTableBarrierSet* ctbs = barrier_set_cast<CardTableBarrierSet>(BarrierSet::barrier_set());
LIR_Const* card_table_base = new LIR_Const(ctbs->card_table_base_const());
if (addr->is_address()) {
LIR_Address* address = addr->as_address_ptr();

View File

@ -771,7 +771,7 @@ Node* BarrierSetC2::obj_allocate(PhaseMacroExpand* macro, Node* mem, Node* toobi
// this will require extensive changes to the loop optimization in order to
// prevent a degradation of the optimization.
// See comment in memnode.hpp, around line 227 in class LoadPNode.
Node* tlab_end = macro->make_load(toobig_false, mem, tlab_end_adr, 0, TypeRawPtr::BOTTOM, T_ADDRESS);
Node* tlab_end = macro->make_load_raw(toobig_false, mem, tlab_end_adr, 0, TypeRawPtr::BOTTOM, T_ADDRESS);
// Load the TLAB top.
Node* old_tlab_top = new LoadPNode(toobig_false, mem, tlab_top_adr, TypeRawPtr::BOTTOM, TypeRawPtr::BOTTOM, MemNode::unordered);

View File

@ -116,7 +116,7 @@ Node* CardTableBarrierSetC2::atomic_xchg_at_resolved(C2AtomicParseAccess& access
Node* CardTableBarrierSetC2::byte_map_base_node(GraphKit* kit) const {
// Get base of card map
CardTable::CardValue* card_table_base = ci_card_table_address();
CardTable::CardValue* card_table_base = ci_card_table_address_const();
if (card_table_base != nullptr) {
return kit->makecon(TypeRawPtr::make((address)card_table_base));
} else {

View File

@ -27,6 +27,7 @@
#include "gc/shared/barrierSet.hpp"
#include "gc/shared/cardTable.hpp"
#include "gc/shared/gc_globals.hpp"
#include "memory/memRegion.hpp"
#include "utilities/align.hpp"
@ -60,6 +61,10 @@ public:
CardTableBarrierSet(CardTable* card_table);
virtual ~CardTableBarrierSet();
inline static CardTableBarrierSet* barrier_set() {
return barrier_set_cast<CardTableBarrierSet>(BarrierSet::barrier_set());
}
template <DecoratorSet decorators, typename T>
inline void write_ref_field_pre(T* addr) {}
@ -86,6 +91,10 @@ public:
inline void write_ref_array(HeapWord* start, size_t count);
CardTable* card_table() const { return _card_table; }
CardValue* card_table_base_const() const {
assert(UseSerialGC || UseParallelGC, "Only these GCs have constant card table base");
return _card_table->byte_map_base();
}
virtual void on_slowpath_allocation_exit(JavaThread* thread, oop new_obj);

View File

@ -31,7 +31,6 @@
#include "gc/shared/oopStorageSet.hpp"
#include "memory/iterator.hpp"
#include "oops/access.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "utilities/debug.hpp"
template <bool concurrent, bool is_const>

View File

@ -28,7 +28,6 @@
#include "memory/arena.hpp"
#include "nmt/memTag.hpp"
#include "oops/oopsHierarchy.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/orderAccess.hpp"
#include "utilities/debug.hpp"
#include "utilities/globalDefinitions.hpp"

View File

@ -28,7 +28,6 @@
#include "gc/shared/partialArrayTaskStepper.hpp"
#include "gc/shared/partialArrayState.hpp"
#include "runtime/atomicAccess.hpp"
#include "utilities/checkedCast.hpp"
#include "utilities/debug.hpp"

View File

@ -25,7 +25,6 @@
#include "gc/shared/taskqueue.hpp"
#include "logging/log.hpp"
#include "oops/oop.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/javaThread.hpp"
#include "runtime/os.hpp"
#include "utilities/debug.hpp"

View File

@ -32,7 +32,6 @@
#include "memory/allocation.inline.hpp"
#include "memory/resourceArea.hpp"
#include "oops/oop.inline.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/orderAccess.hpp"
#include "utilities/debug.hpp"
#include "utilities/ostream.hpp"

View File

@ -26,7 +26,6 @@
#include "gc/shared/workerThread.hpp"
#include "logging/log.hpp"
#include "memory/iterator.hpp"
#include "runtime/atomicAccess.hpp"
#include "runtime/init.hpp"
#include "runtime/java.hpp"
#include "runtime/os.hpp"

View File

@ -68,9 +68,9 @@ ShenandoahAdaptiveHeuristics::ShenandoahAdaptiveHeuristics(ShenandoahSpaceInfo*
ShenandoahAdaptiveHeuristics::~ShenandoahAdaptiveHeuristics() {}
void ShenandoahAdaptiveHeuristics::choose_collection_set_from_regiondata(ShenandoahCollectionSet* cset,
RegionData* data, size_t size,
size_t actual_free) {
size_t ShenandoahAdaptiveHeuristics::choose_collection_set_from_regiondata(ShenandoahCollectionSet* cset,
RegionData* data, size_t size,
size_t actual_free) {
size_t garbage_threshold = ShenandoahHeapRegion::region_size_bytes() * ShenandoahGarbageThreshold / 100;
// The logic for cset selection in adaptive is as follows:
@ -124,6 +124,7 @@ void ShenandoahAdaptiveHeuristics::choose_collection_set_from_regiondata(Shenand
cur_garbage = new_garbage;
}
}
return 0;
}
void ShenandoahAdaptiveHeuristics::record_cycle_start() {

Some files were not shown because too many files have changed in this diff.