This commit extends the use of the new qExecAndArgs packet (added in
the previous commit) so that GDB now understands when it is connected
to a remote server that doesn't have a default executable set. We
don't do much with this information right now, other than produce more
useful text for 'show remote exec-file'.
Here, with this patch in place, I've connected to a gdbserver that
has no default executable set:
(gdb) target extended-remote | gdbserver --multi --once -
(gdb) show remote exec-file
The remote exec-file is unset, the remote has no default executable set.
(gdb) file /tmp/hello.x
Reading symbols from /tmp/hello.x...
(gdb) run
Starting program: /tmp/hello.x
Running the default executable on the remote target failed; try "set remote exec-file"?
(gdb)
The important line is this one:
The remote exec-file is unset, the remote has no default executable set.
Without this patch we'd get:
The remote exec-file is unset, the default remote executable will be used.
The new message is clearer that there is no default executable set on
the remote.
In the future I plan to make use of this additional information,
coupled with an understanding (via 'set sysroot') of when gdb and
gdbserver share the same filesystem, to allow GDB to automatically use
the current executable (e.g. loaded with the 'file' command) as the
remote exec-file. But this is not part of this patch, or this patch
series, just future planned work.
Approved-By: Tom Tromey <tom@tromey.com>
This commit adds a new remote protocol packet qExecAndArgs, and
updates GDB to use it.
When gdbserver is started, a user can provide an executable and
arguments; these are used (by the remote target) to start an initial
inferior, which is the inferior to which GDB first connects.
When GDB is connected in extended-remote mode, if the user does a
'run' without specifying a new 'remote exec-file' then the executable
given on the gdbserver command line is reused to start the new
inferior.
Interestingly, the arguments given on the gdbserver command line are
only used when starting the first inferior; subsequent inferiors will
be passed an empty argument string by GDB. This might catch a user
out, causing a rerun to behave differently from the first run.
In this commit I will add a new qExecAndArgs packet, which I think
will improve the experience in this area.
The new qExecAndArgs packet is sent from GDB, and gdbserver replies
with a packet that includes the executable filename and the arguments
string that were used for starting the initial inferior.
On the GDB side this information can be used to update GDB's state:
'show remote exec-file' will reflect how gdbserver was started, and
'show args' will reflect the arguments used for starting the
inferior.
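To make this concrete, an exchange might look like this (the hex
strings here are illustrative):
-> qExecAndArgs
<- S;2f746d702f68656c6c6f2e78;6120622063;
where '2f746d702f68656c6c6f2e78' is the hex encoding of '/tmp/hello.x'
and '6120622063' is the hex encoding of 'a b c'. The exact reply
format is described further below.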
As a result of updating the args, if the user restarts the inferior,
then this same argument string will be passed back to the remote
target, and used for the new inferior. Thus, rerunning the inferior
will behave just like the initial inferior, which I think is a good
improvement.
Finally, GDB will warn if the user has 'set remote exec-file' and
then connects to a gdbserver that was started with some alternative
filename, like this:
(gdb) set remote exec-file /tmp/foo
(gdb) target remote | gdbserver --once - /tmp/bar
... snip ...
warning: updating 'remote exec-file' to '/tmp/bar' to match remote target
... snip ...
I made the choice to have GDB update the remote exec-file setting to
match the remote, as, after the 'target remote', we are connected to
an inferior that is running /tmp/bar (in this case), so trying to hang
onto the non-matching user supplied setting doesn't seem helpful.
There is one case where I can see this choice being a problem: if a
user does:
(gdb) set remote exec-file /tmp/foo
(gdb) target extended-remote | gdbserver --multi --once - /tmp/bar
... snip ...
warning: updating 'remote exec-file' to '/tmp/bar' to match remote target
... snip ...
(gdb) run
In this case, prior to this patch, they would 'run' /tmp/foo, while
after this patch, they will run /tmp/bar. I think it is unfortunate
that I'm breaking this use case, but I'm not _that_ sorry -- just
start gdbserver with the correct executable, or even no executable,
and the problem goes away.
This last point is important: in extended-remote mode, it is possible
to start gdbserver without specifying an executable, like this:
$ gdbserver --multi --once :54321
In this case gdbserver doesn't start an initial inferior. When GDB
connects, the qExecAndArgs reply from gdbserver indicates that no
information (executable or arguments) was set, and any existing
information is retained, as in this session:
(gdb) set sysroot
(gdb) set remote exec-file /tmp/foo
(gdb) set args a b c
(gdb) target extended-remote | ./gdbserver/gdbserver --multi --once -
Remote debugging using | ./gdbserver/gdbserver --multi --once -
Remote debugging using stdio
(gdb) show remote exec-file
The remote exec-file is "/tmp/foo".
(gdb) show args
Argument list to give program being debugged when it is started is "a b c".
(gdb)
This is the second time proposing this new packet. The first attempt
can be found here:
https://inbox.sourceware.org/gdb-patches/80d8b37d757033976b1a8ddd370c294c7aae8f8c.1692200989.git.aburgess@redhat.com
The review feedback on this patch was that the inferior arguments
should be passed back as a vector of individual strings. This made
sense: at the time that feedback was given, GDB would pass arguments
to gdbserver as a vector of individual arguments, so it would seem
sensible that gdbserver should adopt the same approach for passing
arguments back to GDB.
However, since then I have been working on how GDB passes the inferior
arguments to gdbserver, fixing a lot of broken corner cases, which
culminated in this patch:
commit 8e28eef6cd
Date: Thu Nov 23 18:46:54 2023 +0000
gdb/gdbserver: pass inferior arguments as a single string
Though we do retain the vector of individual arguments behaviour for
backward compatibility with old remote targets, the preferred approach
now is for GDB to pass arguments to gdbserver as a single string.
This removes the need for GDB/gdbserver to try to figure out the
correct escaping to apply to the arguments, and fixes some
argument passing corner cases.
And so, now, I think it makes sense that gdbserver should also pass
the arguments back to GDB as a single string. I've updated the
documentation a little to (I hope) explain how gdbserver should escape
things before passing them back to GDB (TLDR: no additional escaping
should be added just for sending to GDB. The argument string should
be sent to GDB as if it were being sent to the 'set args' GDB
command).
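To illustrate with a hypothetical example: if gdbserver was started
as
$ gdbserver :54321 /tmp/hello.x a b c
then the argument string sent back to GDB should be exactly 'a b c'
(hex encoded in the packet), just as a user would type it after
'set args'; gdbserver must not add another layer of escaping on top.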
The main test for this new functionality is
gdb.server/fetch-exec-and-args.exp, but I've also added a test
gdb.replay/fetch-exec-and-args.exp, which allows me to test a corner
case that isn't currently exercised by gdbserver: the case of
sending back inferior arguments, but no executable.
The qExecAndArgs reply format is 'S;exec;args;' where 'exec' and
'args' are hex encoded strings. If 'args' is empty then this is
perfectly valid, this just means there were no command line
arguments. But what if 'exec' is empty? I needed to decide what to
do in this case. The easiest choice is to treat an empty 'exec' as
meaning the executable is not set. But currently, due to how
gdbserver works, it
is not possible to hit this case, so I used the gdbreplay testing
framework to exercise this instead. There were a few supporting
changes needed to write this test though.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
While testing another patch I'm working on I discovered that passing
an empty program name to gdbserver would trigger an assertion, like
this:
$ gdbserver --multi :54321 ""
../../gdb/gdbserver/../gdb/nat/fork-inferior.c:240: A problem internal to GDBserver has been detected.
fork_inferior: Assertion `exec_file != nullptr' failed.
User input, no matter how weird, shouldn't be triggering an assertion,
so let's fix that.
In extended mode, it is valid to start gdbserver without an executable
name, like this:
$ gdbserver --multi :54321
Here gdbserver doesn't start an inferior, and it is up to GDB to
connect, and tell gdbserver what to run, and to then start it running.
I did wonder if the empty string case should be handled like the no
executable name case, but then you get into the situation where the
user can specify command line arguments without an inferior, like:
$ gdbserver --multi :54321 "" a b c
And while there's nothing really wrong with this, and I'm sure someone
could come up with a use case for it, I'd like to propose that, for
now at least, we take the simple approach of not allowing an empty
executable name; instead we should give an error, like this:
$ gdbserver --multi :54321 ""
No program to debug
Exiting
We can always relax this requirement in the future, and allow the
empty executable with or without inferior arguments, if we decide
there's a compelling reason for it. It would be simple enough to add
support later, but once a feature is added it's much harder to
remove, so let's start simple.
The non-extended remote case works much the same. It too triggers the
assertion currently, and after this patch exits with the same error.
Of course, the non-extended remote case never supported not having an
inferior; if you did:
$ gdbserver :54321
You'd be shown the usage text and gdbserver would exit.
Approved-By: Tom Tromey <tom@tromey.com>
When I ran the GDB testsuite, I noticed that process record tests are
not currently supported on RISC-V. This patch fixes that.
Approved-By: Guinevere Larsen <guinevere@redhat.com>
Fix PR libsframe/33437 - libsframe test names are not unique
The TEST () macro definition, originally in plt-findfre-2.c, was being
used to differentiate between multiple runs of the testcases. Adapt
that definition a bit to allow for a variable number of arguments
following the test condition: a test name format string may be used by
macro users, so that the names of the tests are unique.
Move the new variadic TEST macro definition into the testsuite's common
header sframe-test.h, and use it throughout the testsuite.
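For reference, the shape of the variadic macro is roughly this (an
illustrative sketch; the actual definition lives in sframe-test.h):
#include <stdio.h>
/* Report a pass/fail line; the varargs are a printf-style test name
   format string plus its arguments, so each run can get a unique
   name.  */
#define TEST(cond, ...)				\
  do						\
    {						\
      if (cond)					\
	printf ("PASS: " __VA_ARGS__);		\
      else					\
	printf ("FAIL: " __VA_ARGS__);		\
      printf ("\n");				\
    }						\
  while (0)
/* Example use, differentiating two runs of the same check:
   TEST (err == 0, "plt-findfre-2: run %d: find FRE", run);  */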
Reviewed-by: Jens Remus <jremus@linux.ibm.com>
libsframe/testsuite/
PR libsframe/33437
* libsframe.decode/be-flipping.c: Use new TEST macro with
suffix.
* libsframe.decode/frecnt-1.c: Likewise.
* libsframe.decode/frecnt-2.c: Likewise.
* libsframe.encode/encode-1.c: Likewise.
* libsframe.find/findfre-1.c: Likewise.
* libsframe.find/findfunc-1.c: Likewise.
* libsframe.find/plt-findfre-1.c: Likewise.
* libsframe.find/plt-findfre-2.c: Likewise.
* sframe-test.h: Move the TEST macro definition to this
testsuite header.
Since the x86 .eh_frame section may reference _GLOBAL_OFFSET_TABLE_,
keep _GLOBAL_OFFSET_TABLE_ if there is a dynamic section and the
output .eh_frame section is non-empty.
PR ld/33499
* elfxx-x86.c (_bfd_x86_elf_late_size_sections): Keep
_GLOBAL_OFFSET_TABLE_ if there is dynamic section and the
output .eh_frame section is non-empty.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Use uint64_t for common symbol alignment warning to avoid
elflink.c:5548:12: runtime error: shift exponent 37 is too large for 32-bit type 'int'
with invalid input in PR ld/33500. Now ld issues:
ld: warning: alignment 137438953472 of common symbol `__afl_global_area_ptr' in pr33500.o is greater than the alignment (8) of its section *COM*
instead of
ld: warning: alignment 32 of common symbol `__afl_global_area_ptr' in pr33500.o is greater than the alignment (8) of its section *COM*
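A sketch of the arithmetic behind the fix (illustrative; the variable
names here are not the actual elflink.c code):
#include <stdint.h>
/* The invalid input has an alignment power of 37.  */
int align_power = 37;
unsigned int bad = 1u << align_power;         /* UB on 32-bit int; wrapped to 1 << 5 == 32 */
uint64_t good = (uint64_t) 1 << align_power;  /* 2^37 == 137438953472 */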
PR ld/33511
* elflink.c (elf_link_add_object_symbols): Use uint64_t for
common symbol alignment warning.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
My editor "accidentally" removed all trailing whitespaces from
gdb.texinfo while doing a change. That was mostly just an annoyance
but to avoid it happening again, I suggest removing them for good.
I looked at the difference in the output of "make html". The new output
has some trailing whitespaces removed, but none of them seems to cause a
semantic difference. Not sure about other formats like info or pdf
though.
Change-Id: I3f349b28c581af69703365fea07e7b93614c987c
Approved-By: Eli Zaretskii <eliz@gnu.org>
This commit builds on the previous commit. In the future I am
proposing to move the core file BFD from the program_space into the
core_target. In the last commit I updated 'maint info program-spaces'
to remove the core file name from the output.
In this commit I'm adding the core file name to the 'info inferiors'
output.
My proposal is to add the core file as auxiliary information beneath
an inferior's line in the 'info inferiors' output. We already do
this for vfork parent/child information.
The alternative would be to add the core file as an additional column
in the 'info inferiors' output, indeed, I did initially propose this:
https://inbox.sourceware.org/gdb-patches/e3e040272a0f8f5fd826298331da4c19d01f3a5e.1757615333.git.aburgess@redhat.com
But the problem with this is that the 'info inferiors' output can
easily become very long, and the line wrapping gets very messy, making
the output much harder to parse. The feedback on this initial
approach wasn't super strong, so I'm trying the auxiliary information
approach to see if this is liked more.
The new output looks like this:
(gdb) info inferiors
Num Description Connection Executable
* 1 process 54313 1 (core) /tmp/executable
core file /tmp/core.54313
The only other option I can think of, if this approach is not liked,
would be to add an entirely new command, 'info core-files', with
output like:
Num Core File
* 1 /tmp/corefile.core
The 'Num' column here would just be the inferior number again. In
effect this new command is just splitting the 'info inferiors' into
two commands.
I extended gdb.base/corefile.exp to check the current output style,
and updated the gdb.multi/multi-target-info-inferiors.exp test to take
the new output into account.
Approved-By: Tom Tromey <tom@tromey.com>
I'm currently working towards a goal of moving the core file BFD out
of program_space and into core_target. I believe this is a good
change to make as the core_target already holds a lot of state that is
parsed from the core file BFD, so storing the parsed, structured
information in a different location from the original core file BFD
doesn't make sense to me.
In preparation for this change, the 'maint info program-spaces'
command needs updating. Currently this command lists the name of the
core file BFD that is loaded into each program space.
Once the core file moves into core_target then the core file really
becomes a property of the inferior.
We could try to retain the existing output by looking up which
inferior is active in a given program space, and find the core file
that way; however, I don't like this plan because GDB does support
shared program spaces: in theory, a target could exist where every
inferior shares a single program space. Even on more common POSIX
targets, after a vfork the parent and child share a program space.
Now the vfork case clearly doesn't impact the core file case, and I
don't know if GDB _actually_ supports any shared program space targets
anymore... but still, I don't think we should try to retain the
existing behaviour.
So, this commit removes the core file name from the 'maint info
program-spaces' output. The next commit will add the core file name
back in a new home.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
This introduces a new file, gdbsupport/cxx-thread.h, which provides
stubs for the C++ threading functionality on systems that don't
support it.
On fully-working ports, this header just supplies a number of aliases
in the gdb namespace. So, for instance, gdb::mutex is just an alias
for std::mutex.
For non-working ports, compatibility stubs are provided for the subset
of threading functionality that's used in gdb. These generally do
nothing and assume single-threaded operation.
The idea behind this is to reduce the number of checks of
CXX_STD_THREAD, making the code cleaner.
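The idea, in rough outline (an illustrative sketch, not the exact
contents of gdbsupport/cxx-thread.h):
#if CXX_STD_THREAD
#include <mutex>
namespace gdb
{
/* On working ports, just alias the std:: types.  */
using mutex = std::mutex;
template<typename T> using lock_guard = std::lock_guard<T>;
}
#else
namespace gdb
{
/* On non-working ports, provide do-nothing stubs that assume
   single-threaded operation.  */
struct mutex
{
  void lock () {}
  void unlock () {}
};
template<typename T>
struct lock_guard
{
  explicit lock_guard (T &) {}
};
}
#endif
With this, code can say gdb::mutex unconditionally instead of
checking CXX_STD_THREAD at each use.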
Not all spots using CXX_STD_THREAD could readily be converted.
In particular:
* Unit tests
* --config output
* Code manipulating threads themselves
* The extension interrupt handling code
These all seem fine to me.
Note there's also a check in py-dap.c. This one is perhaps slightly
subtle: DAP starts threads on the Python side, but it relies on gdb
itself being thread-savvy, for instance in gdb.post_event.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
This changes one spot in run-on-main-thread.c to use an explicit
template argument, rather than relying on deduction. The deduction
would otherwise fail with the next patch.
dwarf2/read.c no longer uses gdb::task_group, so the include isn't
needed. Simon pointed out that the thread-pool.h include isn't needed
either.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
Perform int to bool conversion for the find_memory_region_ftype
function type. This function type is used in the find_memory_regions
API, e.g. target_find_memory_regions.
There should be no user visible changes after this commit.
Approved-By: Tom Tromey <tom@tromey.com>
Replace
if test x${COMPILER_FOR_TARGET} = x"\$(CC)"; then
with
if test x"${COMPILER_FOR_TARGET}" = x"\$(CC)"; then
since COMPILER_FOR_TARGET may contain spaces when configuring GCC.
* configure: Regenerated.
config/
* clang-plugin.m4 (CLANG_PLUGIN_FILE_FOR_TARGET): Quote
${COMPILER_FOR_TARGET}.
* gcc-plugin.m4 (GCC_PLUGIN_OPTION_FOR_TARGET): Likewise.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
commit bab1b2488e2a01b311d584bbecbc6834194e30ed
Author: Nicolas Boulenguez <nicolas@debian.org>
Date: Sun Jun 22 19:23:11 2025 +0200
Ada: Introduce GNATMAKE_FOR_BUILD Makefile variable
This gets rid of the hardcoded 'gnatmake' command used during the build.
commit 79091220da796a4b60561a7bf2e9e8f5e5276bc4
Author: Kugan Vivekanandarajah <kvivekananda@nvidia.com>
Date: Tue Jun 10 09:19:37 2025 +1000
[AutoFDO] Fix profile bootstrap for x86_64
This patch fixes profile bootstrap for x86_64 by special-casing
cpu_type for x86_64 as it shares AUTO_PROFILE
from i386.
commit fcb60292984fa7181ec91d7f81fd18549d1aaf39
Author: Kugan Vivekanandarajah <kvivekananda@nvidia.com>
Date: Thu May 29 08:47:19 2025 +1000
[AUTOFDO] Fix autogen remake issue
Fix autogen issue introduced by commit
commit 86dc974cf30f926a014438a5fccdc9d41e26282b
commit 86dc974cf30f926a014438a5fccdc9d41e26282b
Author: Kugan Vivekanandarajah <kvivekananda@nvidia.com>
Date: Mon May 26 11:41:59 2025 +1000
[AUTOFDO][AARCH64] Add support for profilebootstrap
Add support for autoprofiledbootstrap in aarch64.
This is similar to what is done for i386. Added
gcc/config/aarch64/gcc-auto-profile for aarch64 profile
creation.
How to run:
configure --with-build-config=bootstrap-lto
make autoprofiledbootstrap
commit dff727b2c28c52e90e0bd61957d15f907494b245
Author: Stephanos Ioannidis <root@stephanos.io>
Date: Wed May 21 17:28:36 2025 -0600
[PATCH] configure: Always add pre-installed header directories to search path
The configure script was adding the target directory flags, including the
'-B' flags for the executable prefix and the '-isystem' flags for the
pre-installed header directories, to the target flags only for
non-Canadian builds under the premise that the host binaries under the
executable prefix will not be able to execute on the build system for
Canadian builds.
While that is true for the '-B' flags specifying the executable prefix,
the '-isystem' flags specifying the pre-installed header directories are
not affected by this and do not need special handling.
This patch updates the configure script to always add the 'include' and
'sys-include' pre-installed header directories to the target search
path, in order to ensure that the availability of the pre-installed
header directories in the search path is consistent across non-Canadian
and Canadian builds.
When '--with-headers' flag is specified, this effectively ensures that
the libc headers, that are copied from the specified header directory to
the sys-include directory, are used by libstdc++.
commit 6390fc86995fbd5239497cb9e1797a3af51d3936
Author: Rainer Orth <ro@CeBiTec.Uni-Bielefeld.DE>
Date: Tue Apr 22 13:47:17 2025 +0200
cobol: Restrict COBOL to supported Linux arches [PR119217]
The COBOL frontend is currently built on all x86_64 and aarch64 hosts
although the code contains some Linux/glibc specifics that break the build
e.g. on Solaris/amd64.
Tested on Linux/x86_64 and Solaris/amd64.
commit 17ed44c96f6e5c0cc02d8cb29ff5943dd30ab3c1
Author: Iain Sandoe <iain@sandoe.co.uk>
Date: Mon Mar 31 07:02:54 2025 +0100
config, toplevel, Darwin: Pass -B instead of -L to C++ commands.
Darwin from 10.11 needs embedded rpaths to find the correct libraries at
runtime. Recent increases in hardening have made it such that the dynamic
loader will no longer fall back to using an installed libstdc++ when the
(new) linked one is not found. This means we fail configure tests (that
should pass) for runtimes that use C++.
We can resolve this by passing '-B' to the C++ command lines instead of '-L'
(-B implies -L on Darwin, but also causes a corresponding embedded rpath).
commit dcb7009efc5358207d1b0612732a0608915a3ef7
Author: Richard Biener <rguenther@suse.de>
Date: Fri Mar 28 13:48:36 2025 +0100
bootstrap/119513 - fix cobol bootstrap with --enable-generated-files-in-srcdir
This adds gcc/cobol/parse.o to compare_exclusions and makes sure to
ignore errors when copying generated files, like it's done when
copying gengtype-lex.cc.
commit 0fb10aca02852b2e8d78a78c07aa2f62aec6a07e
Author: Iain Sandoe <iain@sandoe.co.uk>
Date: Tue Mar 25 16:20:58 2025 +0000
toplevel, libcobol: Add dependency on libquadmath build [PR119244].
For the configuration of libgcobol to be correct for targets that need
to use libquadmath for 128b FP support, we must be able to find the
quadmath library (or not, for targets that have the support in libc).
commit 70bc553e1b565d2e162894ea29a223b44e9133e3
Author: Iain Sandoe <iain@sandoe.co.uk>
Date: Sun Mar 23 11:45:17 2025 +0000
toplevel, Makefile: Add missing CXX_FOR_TARGET export [PR88319].
Actually, the issue is not local to the libitm case; it currently affects
any 'cxx=true' top-level configured target library.
The issue is a missing export of CXX_FOR_TARGET.
commit c650b557cb01f97bebb894aa68e5e74c2147c395
Author: Thomas Schwinge <thomas@codesourcery.com>
Date: Mon Jul 11 22:36:39 2022 +0200
GCN, nvptx: Don't default-disable libstdc++ build
In addition to making libstdc++ itself available, this, via enabling
'build-gcc/*/libstdc++-v3/scripts/testsuite_flags', in particular also makes
the standard C++ headers available to 'make check-gcc-c++'. With that, there
are a lot of FAIL/UNRESOLVED -> PASS progressions, where we previously ran
into, for example:
FAIL: g++.dg/coroutines/co-await-syntax-00-needs-expr.C (test for errors, line 6)
FAIL: g++.dg/coroutines/co-await-syntax-00-needs-expr.C (test for excess errors)
Excess errors:
[...]/gcc/testsuite/g++.dg/coroutines/coro.h:132:10: fatal error: cstdlib: No such file or directory
Similarly, there are a lot of FAIL/UNRESOLVED -> UNSUPPORTED "progressions" due
to 'sorry, unimplemented: exception handling not supported'.
The 'make check-target-libstdc++-v3' results don't look too bad, either.
This also reverts Subversion r221362
(Git commit d94fae044da071381b73a2ee8afa874b14fa3820) "No libstdc++ for nvptx",
and commit 2f4f3c0e9345805160ecacd6de527b519a8c9206 "No libstdc++ for GCN".
With libstdc++ now available, libgrust gets enabled, which we in turn again
have to disable, for 'sorry, unimplemented: exception handling not supported'
reasons.
commit 09c2a0ab94e1e731433eb2687ad16a9c79617e14
Author: Jakub Jelinek <jakub@redhat.com>
Date: Tue Mar 11 14:34:01 2025 +0100
cobol: Fix up libgcobol configure [PR119216]
Sorry, seems I've screwed up the earlier libgcobol/configure.tgt change.
Looking in more detail, the way e.g. libsanitizer/configure.tgt works is
that it is sourced twice, once at toplevel and there it just sets
UNSUPPORTED=1 for fully unsupported triplets, and then inside of
libsanitizer/configure where it decides to include or not include the
various sublibraries depending on the *_SUPPORTED flags.
So, the following patch attempts to do the same for libgcobol as well.
The BIULD_LIBGCOBOL automake conditional was unused; this patch guards it
on LIBGCOBOL_SUPPORTED as well and guards with it
toolexeclib_LTLIBRARIES = libgcobol.la
Also, AM_CFLAGS has been changed to AM_CXXFLAGS as there are just C++
sources in the library.
commit 6a3f9f30d93c376a8a5e98be888da14923b85e63
Author: Iain Sandoe <iain@sandoe.co.uk>
Date: Tue Mar 11 09:56:18 2025 +0000
configure, Darwin: Require explicit selection of COBOL.
By default, Darwin does not have sufficient tools to build COBOL
so we do not want to include it in --enable-languages=all since
this will break regular testing of all supported languages.
However, we do want to be able to build it on demand (where the
build system has sufficiently new tools) and so do not want to
disable it permanently.
commit 45c281deb7a2e24a21f2f68a2a3652e30f27f953
Author: James K. Lowden <jklowden@symas.com>
Date: Mon Mar 10 16:04:49 2025 +0100
COBOL: config and build machinery
commit ab35fc0d897011c6de075e000d1e0388e6359d4e
Author: Thomas Schwinge <tschwinge@baylibre.com>
Date: Wed Feb 19 09:30:45 2025 +0100
GCN, nvptx: Support '--enable-languages=all'
..., where "support" means that the build doesn't fail, but it doesn't mean
that all target libraries get built and we get pretty test results for the
additional languages.
commit bc3597635a708cd91d742c91c6050829cfb4062a
Author: David Malcolm <dmalcolm@redhat.com>
Date: Fri Nov 29 18:13:22 2024 -0500
Rename "libdiagnostics" to "libgdiagnostics"
"libdiagnostics" clashes with an existing soname in Debian, as
per:
https://gcc.gnu.org/pipermail/gcc/2024-November/245175.html
Rename it to "libgdiagnostics" for uniqueness.
I am being deliberately vague about what the "g" stands for:
it could be "gnu", "gcc", or "gpl-licensed" as the reader desires.
commit fc59a3995cb46c190c0efb0431ad204e399975c4
Author: Pierre-Emmanuel Patry <pierre-emmanuel.patry@embecosm.com>
Date: Wed May 3 18:43:10 2023 +0200
gccrs: Fix bootstrap build
This commit fixes bootstrapping for future additions to libgrust/
commit 7a6906c8d80e437a97c780370a8fec4e00561c7b
Author: Pierre-Emmanuel Patry <pierre-emmanuel.patry@embecosm.com>
Date: Mon Jun 12 10:51:49 2023 +0200
gccrs: Fix missing build dependency
Fix the missing dependency between the gcc and libgrust.
* Makefile.def: Synced from gcc.
* Makefile.tpl: Likewise.
* configure.ac: Likewise.
* Makefile.in: Regenerated.
* configure: Likewise.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Both GCC and GDB/binutils now have root editorconfig files. It would
make sense to unify them as this sets the general tone for these
projects.
ChangeLog:
* .editorconfig: Unify the GCC and GDB/binutils root config.
The numeric check was always false, and correcting it to match the
comment causes lots of testsuite failures. "tic4x" is a valid string.
* cpu-tic4x.c (tic4x_scan): Remove always false condition.
Fix comment.
Since catch-syscall support was added, files containing syscalls in
XML format have been added. As of now riscv-canonicalize-syscall-gen.py
uses glibc for generation, which may not be so convenient. I changed
this script to reuse the newly generated riscv-linux.xml file. Also, I
renamed riscv64_canonicalize_syscall to riscv_linux_canonicalize_syscall,
as only the 64-bit system is supported on Linux. This will simplify
possible further generalization of this script to other architectures.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
Use AC_TRY_COMPILE to check for the working target clang and gcc when
configuring for cross tools.
PR binutils/33503
* configure: Regenerated.
config/
PR binutils/33503
* clang-plugin.m4 (CLANG_PLUGIN_FILE_FOR_TARGET): Use
AC_TRY_COMPILE to check the target clang and replace
clang_cv_is_clang with clang_target_cv_working.
* gcc-plugin.m4 (GCC_PLUGIN_OPTION_FOR_TARGET): Use
AC_TRY_COMPILE to check the target gcc.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
The DWARF indexer splits the work statically based on the unit sizes,
attempting to give each worker thread about the same number of bytes to
process. This works relatively well with standard compilation. But
when compiling with DWO files (-gsplit-dwarf), it's not as good. I see
this when loading a relatively big program (telegram-desktop, which
includes a lot of static dependencies) compiled with -gsplit-dwarf:
Time for "DWARF indexing worker": wall 0.000, user 0.000, sys 0.000, user+sys 0.000, -nan % CPU
Time for "DWARF indexing worker": wall 0.001, user 0.000, sys 0.000, user+sys 0.000, 0.0 % CPU
Time for "DWARF indexing worker": wall 0.001, user 0.001, sys 0.000, user+sys 0.001, 100.0 % CPU
Time for "DWARF indexing worker": wall 0.748, user 0.284, sys 0.297, user+sys 0.581, 77.7 % CPU
Time for "DWARF indexing worker": wall 0.818, user 0.408, sys 0.262, user+sys 0.670, 81.9 % CPU
Time for "DWARF indexing worker": wall 1.196, user 0.580, sys 0.402, user+sys 0.982, 82.1 % CPU
Time for "DWARF indexing worker": wall 1.250, user 0.511, sys 0.500, user+sys 1.011, 80.9 % CPU
Time for "DWARF indexing worker": wall 7.730, user 5.891, sys 1.729, user+sys 7.620, 98.6 % CPU
Note how the wall times vary from 0 to 7.7 seconds. This is
undesirable, because the indexing step as a whole takes as long as
the slowest worker thread.
The imbalance in this step also causes imbalance in the following
"finalize" step:
Time for "DWARF finalize worker": wall 0.007, user 0.004, sys 0.002, user+sys 0.006, 85.7 % CPU
Time for "DWARF finalize worker": wall 0.012, user 0.005, sys 0.005, user+sys 0.010, 83.3 % CPU
Time for "DWARF finalize worker": wall 0.015, user 0.010, sys 0.004, user+sys 0.014, 93.3 % CPU
Time for "DWARF finalize worker": wall 0.389, user 0.359, sys 0.029, user+sys 0.388, 99.7 % CPU
Time for "DWARF finalize worker": wall 0.680, user 0.644, sys 0.035, user+sys 0.679, 99.9 % CPU
Time for "DWARF finalize worker": wall 0.929, user 0.907, sys 0.020, user+sys 0.927, 99.8 % CPU
Time for "DWARF finalize worker": wall 1.093, user 1.055, sys 0.037, user+sys 1.092, 99.9 % CPU
Time for "DWARF finalize worker": wall 2.016, user 1.934, sys 0.082, user+sys 2.016, 100.0 % CPU
Time for "DWARF finalize worker": wall 25.882, user 25.471, sys 0.404, user+sys 25.875, 100.0 % CPU
With DWO files, the split of the workload by size doesn't work, because
it is done using the size of the skeleton units in the main file, which
is not representative of how much DWARF is contained in each DWO file.
I haven't tried it, but a similar problem could occur with cross-unit
imports, which can happen with dwz or LTO. You could have a small unit
that imports a lot from other units, in which case the size of the units
is not representative of the work to accomplish.
To try to improve this situation, change the DWARF indexer to use
dynamic partitioning, using gdb::parallel_for_each_async. With this,
each worker thread pops one unit at a time from a shared work queue to
process it, until the queue is empty. That should result in a more
balanced workload split. I chose 1 as the minimum batch size here,
because I judged that indexing one CU was a big enough piece of work
compared to the synchronization overhead of the queue. That can always
be tweaked later if someone wants to do more tests.
As a result, the timings are much more balanced:
Time for "DWARF indexing worker": wall 2.325, user 1.033, sys 0.573, user+sys 1.606, 69.1 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.028, sys 0.568, user+sys 1.596, 68.6 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.068, sys 0.513, user+sys 1.581, 68.0 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.005, sys 0.579, user+sys 1.584, 68.1 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.070, sys 0.516, user+sys 1.586, 68.2 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.063, sys 0.584, user+sys 1.647, 70.8 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.049, sys 0.550, user+sys 1.599, 68.7 % CPU
Time for "DWARF indexing worker": wall 2.328, user 1.058, sys 0.541, user+sys 1.599, 68.7 % CPU
...
Time for "DWARF finalize worker": wall 2.833, user 2.791, sys 0.040, user+sys 2.831, 99.9 % CPU
Time for "DWARF finalize worker": wall 2.939, user 2.896, sys 0.043, user+sys 2.939, 100.0 % CPU
Time for "DWARF finalize worker": wall 3.016, user 2.969, sys 0.046, user+sys 3.015, 100.0 % CPU
Time for "DWARF finalize worker": wall 3.076, user 2.957, sys 0.118, user+sys 3.075, 100.0 % CPU
Time for "DWARF finalize worker": wall 3.159, user 3.054, sys 0.104, user+sys 3.158, 100.0 % CPU
Time for "DWARF finalize worker": wall 3.198, user 3.082, sys 0.114, user+sys 3.196, 99.9 % CPU
Time for "DWARF finalize worker": wall 3.197, user 3.076, sys 0.121, user+sys 3.197, 100.0 % CPU
Time for "DWARF finalize worker": wall 3.268, user 3.136, sys 0.131, user+sys 3.267, 100.0 % CPU
Time for "DWARF finalize worker": wall 1.907, user 1.810, sys 0.096, user+sys 1.906, 99.9 % CPU
In absolute terms, the total time for GDB to load the file and exit goes
down from about 42 seconds to 17 seconds.
Some implementation notes:
- The state previously kept in as local variables in
cooked_index_worker_debug_info::process_units becomes fields of the
new parallel worker object.
- The work previously done for each unit in
cooked_index_worker_debug_info::process_units becomes the operator()
of the new parallel worker object.
- The work previously done at the end of
cooked_index_worker_debug_info::process_units (including calling
bfd_thread_cleanup) becomes the destructor of the new parallel worker
object.
- The "done" callback of gdb::task_group becomes the "done" callback of
gdb::parallel_for_each_async.
- I placed the parallel_indexing_worker struct inside
cooked_index_worker_debug_info, so that it has access to
cooked_index_worker_debug_info's private fields (e.g. m_results, to push
the results). It will also be possible to re-use it for skeletonless
type units in a later patch.
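To make the notes above concrete, the per-thread loop conceptually
looks like this (a simplified sketch; the names here are illustrative,
and the real mechanics live in gdb::parallel_for_each_async and the
shared work queue):
/* Each worker thread pops one unit at a time until the shared queue
   is empty.  */
void
indexing_thread (unit_queue &queue, parallel_indexing_worker &worker)
{
  while (dwarf2_unit *unit = queue.pop ())
    worker (*unit);
  /* The worker's destructor then flushes its results and calls
     bfd_thread_cleanup, as described above.  */
}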
Change-Id: I5dc5cf8793abe9ebe2659e78da38ffc94289e5f2
Approved-By: Tom Tromey <tom@tromey.com>
I would like to use gdb::parallel_for_each to implement the parallelism
of the DWARF unit indexing. However, the existing implementation of
gdb::parallel_for_each is blocking, which doesn't work with the model
used by the DWARF indexer, which is asynchronous and callback-based.
Add an asynchronous version of gdb::parallel_for_each that will be
suitable for this task.
This new version accepts a callback that is invoked when the parallel
for each is complete.
This function uses the same strategy as gdb::task_group to invoke the
"done" callback: worker threads have a shared_ptr reference to some
object. The last worker thread to drop its reference causes the object
to be deleted, which invokes the callback.
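The mechanism can be sketched like this (illustrative only, not the
actual GDB code):
#include <cstdio>
#include <functional>
#include <memory>
#include <thread>
#include <vector>
/* The last worker to drop its shared_ptr destroys this object, whose
   destructor invokes the "done" callback.  */
struct completion_state
{
  explicit completion_state (std::function<void ()> done)
    : m_done (std::move (done)) {}
  ~completion_state () { m_done (); }
  std::function<void ()> m_done;
};
int
main ()
{
  auto state = std::make_shared<completion_state> (
      [] () { std::puts ("all workers done"); });
  std::vector<std::thread> workers;
  for (int i = 0; i < 4; i++)
    workers.emplace_back ([state] () { /* do some work */ });
  /* The calling thread drops its reference right away; it does not
     block waiting for the workers.  */
  state.reset ();
  for (auto &t : workers)
    t.join ();
  /* "all workers done" was printed by whichever worker finished
     last.  */
}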
Unlike for the sync version of gdb::parallel_for_each, it's not possible
to keep any state in the calling thread's stack, because that disappears
immediately after starting the workers. So all the state is kept in
that same shared object.
There is a limitation that the sync version doesn't have, regarding the
arguments you can pass to the worker objects: it's not possible to rely
on references. There are more details in a comment in the code.
It would be possible to implement the sync version of
gdb::parallel_for_each on top of the async version, but I decided not to
do it to avoid the unnecessary dynamic allocation of the shared object,
and to avoid adding the limitations on passing references I mentioned
just above. But if we judge that it would be an acceptable cost to
avoid the duplication, we could do it.
Add a self test for the new function.
Change-Id: I6173defb1e09856d137c1aa05ad51cbf521ea0b0
Approved-By: Tom Tromey <tom@tromey.com>
In preparation for a following patch that will re-use the shared work
queue algorithm, move it to a separate class.
Change-Id: Id05cf8898a5d162048fa8fa056fbf7e0441bfb78
Approved-By: Tom Tromey <tom@tromey.com>
I think it would be convenient for parallel_for_each to pass an
iterator_range to the worker function, instead of separate begin and end
parameters. This allows using a ranged for loop directly.
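For example (a sketch, assuming a worker over a range of ints):
/* Before, the worker took (RandomIt first, RandomIt last) and looped
   manually; now it can do this.  */
void
process_items (iterator_range<int *> range)
{
  for (int item : range)
    consume (item);  /* `consume' is a hypothetical per-item step.  */
}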
Change-Id: I8f9681da65b0eb00b738379dfd2f4dc6fb1ee612
Approved-By: Tom Tromey <tom@tromey.com>
Add iterator_range::empty, indicating if the range is empty. This is
used in the following patch.
Change-Id: I1e6c873e635c2bb0ce5aaea2a176470970f6d7ac
Approved-By: Tom Tromey <tom@tromey.com>
gdb::parallel_for_each uses static partitioning of the workload, meaning
that each worker thread receives a similar number of work items. Change
it to use dynamic partitioning, where worker threads pull work items
from a shared work queue when they need to.
Note that gdb::parallel_for_each is currently only used for processing
minimal symbols in GDB. I am looking at improving the startup
performance of GDB, where minimal symbol processing is one step.
With static partitioning, there is a risk of workload imbalance if some
threads receive "easier" work than others. Some threads sit still while
others finish working on their share of the work. This is not
desirable, because the gdb::parallel_for_each takes as long as the
slowest thread takes.
When loading a file with a lot of minimal symbols (~600k) in GDB, with
"maint set per-command time on", I observe some imbalance:
Time for "minsyms install worker": wall 0.732, user 0.550, sys 0.041, user+sys 0.591, 80.7 % CPU
Time for "minsyms install worker": wall 0.881, user 0.722, sys 0.071, user+sys 0.793, 90.0 % CPU
Time for "minsyms install worker": wall 2.107, user 1.804, sys 0.147, user+sys 1.951, 92.6 % CPU
Time for "minsyms install worker": wall 2.351, user 2.003, sys 0.151, user+sys 2.154, 91.6 % CPU
Time for "minsyms install worker": wall 2.611, user 2.322, sys 0.235, user+sys 2.557, 97.9 % CPU
Time for "minsyms install worker": wall 3.074, user 2.729, sys 0.203, user+sys 2.932, 95.4 % CPU
Time for "minsyms install worker": wall 3.486, user 3.074, sys 0.260, user+sys 3.334, 95.6 % CPU
Time for "minsyms install worker": wall 3.927, user 3.475, sys 0.336, user+sys 3.811, 97.0 % CPU
^
----´
The fastest thread took 0.732 seconds to complete its work (and then sat
still), while the slowest took 3.927 seconds. This means the
parallel_for_each took a bit less than 4 seconds.
Even if the number of minimal symbols assigned to each worker is the
same, I suppose that some symbols (e.g. those that need demangling) take
longer to process, which could explain the imbalance.
With this patch, things are much more balanced:
Time for "minsym install worker": wall 2.807, user 2.222, sys 0.144, user+sys 2.366, 84.3 % CPU
Time for "minsym install worker": wall 2.808, user 2.073, sys 0.131, user+sys 2.204, 78.5 % CPU
Time for "minsym install worker": wall 2.804, user 1.994, sys 0.151, user+sys 2.145, 76.5 % CPU
Time for "minsym install worker": wall 2.808, user 1.977, sys 0.135, user+sys 2.112, 75.2 % CPU
Time for "minsym install worker": wall 2.808, user 2.061, sys 0.142, user+sys 2.203, 78.5 % CPU
Time for "minsym install worker": wall 2.809, user 2.012, sys 0.146, user+sys 2.158, 76.8 % CPU
Time for "minsym install worker": wall 2.809, user 2.178, sys 0.137, user+sys 2.315, 82.4 % CPU
Time for "minsym install worker": wall 2.820, user 2.141, sys 0.170, user+sys 2.311, 82.0 % CPU
^
----´
In this version, the parallel_for_each took about 2.8 seconds,
representing a reduction of ~1.2 seconds for this step. Not
life-changing, but it's still good I think.
Note that this patch helps when loading big programs. My go-to test
program for this is telegram-desktop that I built from source. For
small programs (including loading gdb itself), it makes no perceptible
difference.
Now the technical bits:
- One impact that this change has on the minimal symbol processing
specifically is that not all calls to compute_and_set_names (a
critical region guarded by a mutex) are done at the end of each
worker thread's task anymore.
Before this patch, each thread would compute the names and hash values for
all the minimal symbols it has been assigned, and then would call
compute_and_set_names for all of them, while holding the mutex (thus
preventing other threads from doing this same step).
With the shared work queue approach, each thread grabs a batch of
minimal symbols, computes the names and hash values for them, and
then calls compute_and_set_names (with the mutex held) for this batch
only. It then repeats that until the work queue is empty.
There are therefore more small and spread out compute_and_set_names
critical sections, instead of just one per worker thread at the end.
Given that before this patch the work was not well balanced among worker
threads, I guess that threads would enter that critical region at
roughly different times, causing little contention.
In the "with this patch" results, the CPU utilization numbers are not
as good, suggesting that there is some contention. But I don't know
if it's contention due to the compute_and_set_names critical section
or the shared work queue critical section. That can be investigated
later. In any case, what ultimately counts is the wall time, which
improves.
- One choice I had to make was to decide how many work items (in this
case minimal symbols) each worker should pop when getting work from
the shared queue. The general wisdom is that:
- popping too few items, and the synchronization overhead becomes
significant, and the total processing time increases
- popping too many items, and we get some imbalance back, and the
total processing time increases again
I experimented using a dynamic batch size proportional to the number
of remaining work items. It worked well in some cases but not
always. So I decided to keep it simple, with a fixed batch size.
That can always be tweaked later.
- I want to still be able to use scoped_time_it to measure the time
that each worker thread spent working on the task. I find it really
handy when measuring the performance impact of changes.
Unfortunately, the current interface of gdb::parallel_for_each, which
receives a simple callback, is not well-suited for that, once I
introduce the dynamic partitioning. The callback would get called
once for each work item batch (multiple time for each worker thread),
so it's not possible to maintain a per-worker thread object for the
duration of the parallel for.
To allow this, I changed gdb::parallel_for_each to receive a worker
type as a template parameter. Each worker thread creates one local
instance of that type, and calls operator() on it for each work item
batch. By having a scoped_time_it object as a field of that worker,
we can get the timings per worker thread.
The drawback of this approach is that we must now define the
parallel task in a separate class and manually capture any context we
need as fields of that class.
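The resulting pattern looks roughly like this, as a sketch (the names
below are illustrative, not the exact GDB code):
/* One instance is created per worker thread; operator() runs once per
   batch popped from the shared queue; the scoped_time_it member
   reports this thread's timing when the worker is destroyed.  */
struct minsym_install_worker
{
  explicit minsym_install_worker (minimal_symbol_reader &reader)
    : m_reader (reader),
      m_time_it ("minsym install worker")
  {}
  void operator() (iterator_range<msym_item *> batch)
  {
    /* Compute names and hash values for BATCH, then take the mutex
       and call compute_and_set_names for just this batch.  */
  }
  minimal_symbol_reader &m_reader;
  scoped_time_it m_time_it;
};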
Change-Id: Ibf1fea65c91f76a95b9ed8f706fd6fa5ef52d9cf
Approved-By: Tom Tromey <tom@tromey.com>
I started working on this patch because I noticed that this
parallel_for_each test:
/* Check that if there are fewer tasks than threads, then we won't
end up with a null result. */
is not really checking anything. And then, this patch ended up with
several changes, leading to a general refactor of the whole file.
This test verifies, using std::all_of, that no entry in the intresults
vector is nullptr. However, nothing ever adds anything to intresults.
Since the vector is always empty, std::all_of is always true. This
state probably dates back to afdd136635 ("Back out some
parallel_for_each features"), which removed the ability for
parallel_for_each to return a vector of results. That commit removed
some tests, but left this one in, I'm guessing as an oversight.
One good idea in this test is to check that the worker never receives
empty ranges. I think we should always test for that. I think it's
also a good idea to test with exactly one item; that's a good edge case.
To achieve this without adding some more code duplication, factor out
the core functionality of the test into yet another test_one function (I'm
running out of ideas for names). In there, check that the range
received by the worker is not empty. Doing this pointed out that the
worker is actually called with empty ranges in some cases, necessitating
some minor changes in parallel-for.h.
Then, instead of only checking that the sum of the ranges received by
worker functions is the right count, save the elements received as part
of those ranges (in a vector), and check that this vector contains each
expected element exactly once. This should make the test a bit more
robust (otherwise we could have the right number of calls, but on the
wrong items).
Then, a subsequent patch in this series changes the interface of
parallel_for_each to use iterator_range. The only hiccup is that it
doesn't really work if the "RandomIt" type of the parallel_for_each is
"int". iterator_range<int>::size wouldn't work, as std::distance
doesn't work on two ints. Fix this in the test right away by building
an std::vector<int> to use as input.
Finally, run the test with the default thread pool thread count in
addition to the currently tested counts of 0, 1, and 3. I'm thinking that it
doesn't hurt to test parallel_for_each in the configuration that it's
actually used with.
Change-Id: I5adf3d61e6ffe3bc249996660f0a34b281490d54
Approved-By: Tom Tromey <tom@tromey.com>
The currently stable tclint (v6.0.1), as used in pre-commit, doesn't check
code that is passed as arguments to commands specific to the gdb
testsuite [1], like
with_test_prefix:
...
with_test_prefix foo {
...
}
...
I wrote a rudimentary tclint patch handling this, skipping the dwarf assembler
procs.
Fix the additional issues found.
[1] https://github.com/nmoroze/tclint/issues/121
I realized I was seeing the newly added tclint check twice:
...
$ touch gdb/testsuite/gdb.base/foo.exp
$ git add gdb/testsuite/gdb.base/foo.exp
$ git commit -a -m foo 2>&1 | grep tclint
tclint..................................................................Passed
tclint..............................................(no files to check)Skipped
$
...
The hook is run once for stage pre-commit, and once for stage commit-msg.
Since the hook doesn't specify a stage at which it's supposed to be run, it
takes its default from default_stages, which defaults to all stages.
Fix this by setting default_stages to pre-commit:
...
$ git commit -a -m foo 2>&1 | grep tclint
tclint..................................................................Passed
$
...
The only hook so far that needs a different stage than pre-commit is
codespell-log, and it's not affected by this change because it has an explicit
"stages: [commit-msg]" setting.
Approved-By: Tom Tromey <tom@tromey.com>
gdb.lookup_type accepts a 'block' argument, but in some cases does not
use it. This can cause the wrong type to be returned.
This patch fixes the problem by simply passing the block through. I
have no idea why it worked the way it did, and there weren't any tests
for the 'block' parameter. (I didn't look at git blame out of fear
that it was my patch back in the day.)
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=16942
On aarch64-linux, I run into:
...
FAIL: gdb.tui/pr30056.exp: arrow right
...
because while the intention is to observe the change from:
...
| 20 main (void) |
...
into:
...
| 20 ain (void) |
...
we're actually looking at another line.
Fix this by looking at the contents of the entire source window.
Tested on aarch64-linux and x86_64-linux.
PR testsuite/33506
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33506
When multi-target support was added to GDB, an assumption was made
that all process_stratum_target sub-classes could be shared by
multiple inferiors.
For things like the Linux and FreeBSD native targets, this is
absolutely true (or was made true). But some targets either were not
updated or, due to design restrictions, cannot be shared.
This patch adds a target_ops::is_shareable member function. When this
returns false then this indicates that an instance of a particular
target should only appear on a single target stack. It is fine for
difference instances of a single target type to appear on different
target stacks though.
This is my second attempt at this patch. The first attempt can be
found here:
https://inbox.sourceware.org/gdb-patches/577f2c47793acb501c2611c0e6c7ea379f774830.1668789658.git.aburgess@redhat.com
The initial approach was to have all targets be shareable by default,
and to then mark those targets which I knew were problematic.
Specifically, the only target I was interested in was core_target,
which cannot be shared (details below). During review Tom pointed out:
I think there are a lot of other targets that can't be
shared... remote-sim, all the trace targets, even I think windows-nat,
since it isn't multi-inferior-capable yet.
The suggestion was that the default should be that targets were NOT
shareable, and we should then mark those targets which we know can be
shared. That is the big change in this iteration of the patch.
The core_target is still non-shareable. This target stores state
relating to the open core file in the core_target and in the current
inferior's program_space. After an 'add-inferior' command, if we
share the core_target, the new inferior will have its own
program_space, but will share the core_target with the original
inferior. This leaves the new inferior in an unexpected state where
the core BFD (from the new program_space) is NULL. Attempting to make
use of the second inferior, e.g. to load a new executable, will (on
x86 at least) cause GDB to crash as it is not expecting the core BFD
to be NULL.
I am working to move the core file BFD into core_target, at which
point it might be possible to share the core_target, though I'm still
not entirely sure this makes sense; loading a core file will, in many
cases, automatically set the executable in the program_space; creating
a new inferior would share the core_target, but the new inferior's
program space would not have the executable loaded. But I figure we
can worry about this another day because ....
As Tom pointed out in his review of V1, there are other targets that
should be non-shareable (see quote above). In addition, I believe
that the remote target can only be shared in some specific situations,
the 'add-inferior' case is one where the 'remote' target should NOT be
shared.
The 'remote' (not 'extended-remote') target doesn't allow new
inferiors to be started; you need to connect to an already running
target. As such, it doesn't really make sense to allow a 'remote'
target to be shared over an 'add-inferior' call, what would the user
do with the new inferior? They cannot start a new process. They're
not debugging the same process as the original inferior. This just
leaves GDB in a weird state.
However, 'remote' targets are a little weird in that, if the remote
inferior forks, and GDB is set to follow both the parent and the
child, then it does make sense to allow sharing. In this case the new
inferior is automatically connected to the already running child
process.
So when we consider 'add-inferior' there are two things we need to
consider:
1. Can the target be shared at all? The new target_ops::is_shareable
function tells us this.
2. Can the target be used to start a new inferior? The existing
target_ops::can_create_inferior function tells us this.
If the process_stratum_target returns true for both of these functions
then it is OK to share it across an 'add-inferior' call. If either
returns false then the target should not be shared.
When pushing a target onto an inferior's target stack, we only need to
consider target_ops::is_shareable, only shareable targets should be
pushed onto multiple target stacks.
The new target_ops::is_shareable function returns true as its default;
all the immediate sub-classes are shareable.
However, this is overridden in process_stratum_target::is_shareable, to
return false. All process_stratum_target sub-classes are non-shareable
by default.
Finally, linux_nat_target, fbsd_nat_target, and remote_target, are all
marked as shareable. This leaves all other process_stratum_target
sub-classes non-shareable.
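In outline, the new method looks like this (a simplified sketch of
the declarations, not the exact GDB code):
struct target_ops
{
  /* Return true if this target instance may appear on more than one
     target stack at the same time.  Immediate sub-classes default to
     shareable.  */
  virtual bool is_shareable () { return true; }
};
struct process_stratum_target : public target_ops
{
  /* Process targets are non-shareable unless they opt back in, as
     linux_nat_target, fbsd_nat_target and remote_target do.  */
  bool is_shareable () override { return false; }
};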
I did some very light testing on Windows, and I don't believe that this
target supports multiple inferiors, but I could easily be wrong here.
My windows testing setup is really iffy, and I'm not 100% sure if I did
this right.
But for the Windows target, and any of the others, if this commit breaks
existing multi-inferior support, then the fix is as simple as adding an
is_shareable member function that returns true.
If the user tries to 'add-inferior' from an inferior with a
non-shareable target, or the 'remote' target as it cannot start new
inferiors, then they will get a warning, and the new inferior will be
created without a connection.
If the user explicitly asks for the new inferior to be created without
a connection, then no warning will be given.
At this point the user is free to setup the new inferior connection as
they see fit.
I've updated the docs, and added a NEWS entry for the new warning. In
the docs for clone-inferior I've added reference to -no-connection,
which was previously missing.
Some tests needed fixing with this change; these were
gdb.base/quit-live.exp, gdb.mi/mi-add-inferior.exp,
gdb.mi/new-ui-mi-sync.exp, and gdb.python/py-connection-removed.exp. In
all cases, when using the native-gdbserver board, these tests tried to
create a new inferior, and expected the new inferior to start sharing
the connection with the original inferior. None of these tests actually
tried to run anything in the new inferior, if they did then they would
have discovered that the new inferior wasn't really sharing a
connection. All the tests have been updated to understand that for
'remote' connections (not 'extended-remote') the connection will not be
shared. These fixes are all pretty simple.
Approved-By: Tom Tromey <tom@tromey.com>
While compilers default to v8plus on 32-bit Solaris/SPARC (gcc at least
since 4.4 in 2009, cc since at least Studio 9 in 2010), gas still uses a
sparclite default. While this doesn't cause issues for gcc (it passes
-Av8plus as necessary), it repeatedly led to problems in the testsuite
which has to be sprinkled with setting ASFLAGS accordingly since gas cannot
assemble the gcc output by default.
This patch switches the default to v8plus on Solaris to match gcc.
I had to introduce a new arch value, v8plus-32, matching v9-64, to allow
for this.
I cannot reliably tell if other SPARC targets are similarly affected, so
this patch restricts the change to Solaris.
Tested on sparc-sun-solaris2.11 and sparcv9-sun-solaris2.11.
2025-09-25 Rainer Orth <ro@CeBiTec.Uni-Bielefeld.DE>
gas:
* config/tc-sparc.c (sparc_arch_table): Introduce v8plus-32.
* configure.tgt (generic_target) <sparc-*-solaris*>: Set arch to
v8plus-32 for 32-bit sparc.
* testsuite/gas/sparc/v8plus.rd, testsuite/gas/sparc/v8plus.s: New
test.
* testsuite/gas/sparc/sparc.exp: Run it on sparc*-*-solaris2*.