binutils-gdb/gdb/amd64-obsd-tdep.c
Simon Marchi a2e3cce344 gdb/solib: C++ify solib_ops
Convert solib_ops into an abstract base class (with abstract methods,
some of them with default implementations) and convert all the existing
solib_ops instances to solib_ops derived classes / implementations.

Prior to this patch, solib_ops is a structure holding function pointers,
of which there are only a handful of global instances (in the
`solib-*.c` files).  When passing an `solib_ops *` around, it's a
pointer to one of these instances.  After this patch, there are no more
global solib_ops instances.  Instances are created as needed and stored
in struct program_space.  These instances could eventually be made to
contain the program space-specific data, which is currently kept in
per-program space registries (I have some pending patches for that).

Prior to this patch, `gdbarch_so_ops` is a gdbarch method that returns a
pointer to the appropriate solib_ops implementation for the gdbarch.
This is replaced with the `gdbarch_make_solib_ops` method, which returns
a new instance of the appropriate solib_ops implementation for this
gdbarch.  This requires introducing some factory functions for the
various solib_ops implementations, to be used as `gdbarch_make_solib_ops`
callbacks.  For instance:

    solib_ops_up
    make_linux_ilp32_svr4_solib_ops ()
    {
      return std::make_unique<linux_ilp32_svr4_solib_ops> ();
    }

The previous code is full of cases of tdep files copying some base
solib_ops implementation, and overriding one or more function pointer
(see ppc_linux_init_abi, for instance).  I tried to convert all of this
into a class hierarchy.  I like that it's now possible to get a good
static view of all the existing solib_ops variants.  The hierarchy looks
like this:

    solib_ops
    ├── aix_solib_ops
    ├── darwin_solib_ops
    ├── dsbt_solib_ops
    ├── frv_solib_ops
    ├── rocm_solib_ops
    ├── svr4_solib_ops
    │   ├── ilp32_svr4_solib_ops
    │   ├── lp64_svr4_solib_ops
    │   ├── linux_ilp32_svr4_solib_ops
    │   │   ├── mips_linux_ilp32_svr4_solib_ops
    │   │   └── ppc_linux_ilp32_svr4_solib_ops
    │   ├── linux_lp64_svr4_solib_ops
    │   │   └── mips_linux_lp64_svr4_solib_ops
    │   ├── mips_nbsd_ilp32_svr4_solib_ops
    │   ├── mips_nbsd_lp64_svr4_solib_ops
    │   ├── mips_fbsd_ilp32_svr4_solib_ops
    │   └── mips_fbsd_lp64_svr4_solib_ops
    └── target_solib_ops
        └── windows_solib_ops

The solib-svr4 code has per-arch specialization to provide a
link_map_offsets, containing the offsets of the interesting fields in
`struct link_map` on that particular architecture.  Prior to this patch,
arches would set a callback returning the appropriate link_map_offsets
by calling `set_solib_svr4_fetch_link_map_offsets`, which also happened
to set the gdbarch's so_ops to `&svr4_so_ops`.  I converted this to an
abstract virtual method of `struct svr4_solib_ops`, meaning that all
classes deriving from svr4_solib_ops must provide a method returning the
appropriate link_map_offsets for the architecture.  I renamed
`set_solib_svr4_fetch_link_map_offsets` to `set_solib_svr4_ops`.  This
function is still necessary because it also calls
set_gdbarch_iterate_over_objfiles_in_search_order, but if it was not for
that, we could get rid of it.
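As a hedged illustration of that design (simplified names and field set,
not GDB's actual declarations), the abstract method forces every SVR4
subclass to describe its architecture's `struct link_map` layout:

```cpp
// Sketch only: stand-ins for GDB's link_map_offsets and svr4_solib_ops.
// The offsets are byte positions of fields within the target's
// `struct link_map`.
struct link_map_offsets
{
  int l_addr_offset;
  int l_name_offset;
};

// Base class: every SVR4 variant must say where the link_map fields
// live for its architecture.
struct svr4_solib_ops
{
  virtual ~svr4_solib_ops () = default;
  virtual link_map_offsets fetch_link_map_offsets () const = 0;
};

// An ILP32 variant supplies a 32-bit layout (illustrative values only).
struct ilp32_svr4_solib_ops : public svr4_solib_ops
{
  link_map_offsets fetch_link_map_offsets () const override
  { return { 0, 4 }; }
};
```

Any class deriving from the SVR4 base that forgets to implement the
method now fails to compile, instead of silently falling back to a
shared function pointer.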

There is an instance of CRTP in mips-linux-tdep.c, because both
mips_linux_ilp32_svr4_solib_ops and mips_linux_lp64_svr4_solib_ops need
to derive from different SVR4 base classes (linux_ilp32_svr4_solib_ops
and linux_lp64_svr4_solib_ops), but they both want to override the
in_dynsym_resolve_code method with the same implementation.
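A minimal sketch of that pattern (hypothetical method signature and
placeholder logic, not the real mips-linux-tdep.c code): a class
template parameterized on its base class, so the shared MIPS override
is written once and instantiated against both SVR4 variants.

```cpp
// Stripped-down stand-ins for the two Linux SVR4 base classes.
struct linux_ilp32_svr4_solib_ops
{
  virtual ~linux_ilp32_svr4_solib_ops () = default;
  virtual bool in_dynsym_resolve_code (unsigned long pc) const
  { return false; }
};

struct linux_lp64_svr4_solib_ops
{
  virtual ~linux_lp64_svr4_solib_ops () = default;
  virtual bool in_dynsym_resolve_code (unsigned long pc) const
  { return false; }
};

// Template parameterized on the base: the single MIPS-specific
// override is grafted onto whichever base it is instantiated with.
template<typename Base>
struct mips_linux_svr4_solib_ops : public Base
{
  bool in_dynsym_resolve_code (unsigned long pc) const override
  { return pc != 0; }  // placeholder for the real MIPS check
};

using mips_linux_ilp32_svr4_solib_ops
  = mips_linux_svr4_solib_ops<linux_ilp32_svr4_solib_ops>;
using mips_linux_lp64_svr4_solib_ops
  = mips_linux_svr4_solib_ops<linux_lp64_svr4_solib_ops>;
```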

The solib_ops::supports_namespaces method is new: support for
namespaces was previously inferred from the presence or absence of a
find_solib_ns method.  It now has to be stated explicitly.
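In other words (a sketch with assumed signatures, not the exact GDB
declarations): instead of callers checking whether a callback pointer
is non-null, they query an explicit virtual method that defaults to
"no" and that namespace-aware implementations override:

```cpp
// Sketch: explicit capability query replacing a null-pointer check.
struct solib_ops
{
  virtual ~solib_ops () = default;

  // Default: no namespace support.
  virtual bool supports_namespaces () const
  { return false; }
};

// A namespace-aware implementation opts in explicitly.
struct svr4_like_solib_ops : public solib_ops
{
  bool supports_namespaces () const override
  { return true; }
};
```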

There is a new progspace::release_solib_ops method, which is only
needed for rocm_solib_ops.  For the moment, rocm_solib_ops replaces and
wraps the existing svr4_solib_ops instance, in order to combine the
results of the two.  A subsequent patch will allow program spaces to
have multiple solib_ops, removing the need for release_solib_ops.

Speaking of rocm_solib_ops: it previously overrode only a few methods by
copying svr4_solib_ops and overwriting some function pointers.  Now, it
needs to implement all the methods that svr4_solib_ops implements, in
order to forward the call.  Otherwise, the default solib_ops method
would be called, hiding the svr4_solib_ops implementation.  Again, this
can be removed once we have support for multiple solib_ops in a
program_space.
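The wrapping described above is essentially a decorator: rocm_solib_ops
holds the svr4_solib_ops it displaced and must forward every virtual
method, or the solib_ops default would kick in and hide the SVR4
behavior.  A hedged sketch with a single made-up method:

```cpp
#include <memory>
#include <string>
#include <vector>

// Sketch: simplified stand-in for the solib_ops interface.
struct solib_ops
{
  virtual ~solib_ops () = default;
  virtual std::vector<std::string> current_sos () const
  { return {}; }  // default: no shared libraries
};

struct svr4_solib_ops : public solib_ops
{
  std::vector<std::string> current_sos () const override
  { return { "libc.so" }; }
};

// Wraps the displaced SVR4 instance and combines the results.  Every
// method svr4_solib_ops implements must be forwarded like this;
// otherwise the solib_ops default above would be called instead.
struct rocm_solib_ops : public solib_ops
{
  explicit rocm_solib_ops (std::unique_ptr<solib_ops> wrapped)
    : m_wrapped (std::move (wrapped))
  {}

  std::vector<std::string> current_sos () const override
  {
    std::vector<std::string> sos = m_wrapped->current_sos ();
    sos.push_back ("gpu-code-object");  // illustrative GPU entry
    return sos;
  }

  std::unique_ptr<solib_ops> m_wrapped;
};
```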

There is also a small change in how rocm_solib_ops is activated.  Prior
to this patch, activation happened at the end of rocm_update_solib_list:
the code overwrote function pointers in the static svr4_solib_ops and
then replaced the host gdbarch's so_ops field, so it only needed to
happen once.  After the patch though, we need to set rocm_solib_ops in
all
the program spaces that appear.  We do this in
rocm_solib_target_inferior_created and in the new
rocm_solib_target_inferior_execd.  After this, I will explore doing a
change where rocm_solib_ops is only set when we detect the ROCm runtime
is loaded.

Change-Id: I5896b5bcbf8bdb024d67980380feba1ffefaa4c9
Approved-By: Pedro Alves <pedro@palves.net>
2025-06-26 14:08:31 -04:00

/* Target-dependent code for OpenBSD/amd64.

   Copyright (C) 2003-2025 Free Software Foundation, Inc.

   This file is part of GDB.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
#include "extract-store-integer.h"
#include "frame.h"
#include "frame-unwind.h"
#include "gdbcore.h"
#include "symtab.h"
#include "objfiles.h"
#include "osabi.h"
#include "regcache.h"
#include "regset.h"
#include "target.h"
#include "trad-frame.h"
#include "obsd-tdep.h"
#include "amd64-tdep.h"
#include "i387-tdep.h"
#include "gdbsupport/x86-xstate.h"
#include "solib-svr4.h"
#include "bsd-uthread.h"
/* Support for signal handlers.  */

/* Default page size.  */
static const int amd64obsd_page_size = 4096;

/* Return whether THIS_FRAME corresponds to an OpenBSD sigtramp
   routine.  */

static int
amd64obsd_sigtramp_p (const frame_info_ptr &this_frame)
{
  CORE_ADDR pc = get_frame_pc (this_frame);
  CORE_ADDR start_pc = (pc & ~(amd64obsd_page_size - 1));
  const gdb_byte osigreturn[] =
  {
    0x48, 0xc7, 0xc0,
    0x67, 0x00, 0x00, 0x00,   /* movq $SYS_sigreturn, %rax */
    0xcd, 0x80                /* int $0x80 */
  };
  const gdb_byte sigreturn[] =
  {
    0x48, 0xc7, 0xc0,
    0x67, 0x00, 0x00, 0x00,   /* movq $SYS_sigreturn, %rax */
    0x0f, 0x05                /* syscall */
  };
  size_t buflen = (sizeof sigreturn) + 1;
  gdb_byte *buf;
  const char *name;

  /* If the function has a valid symbol name, it isn't a
     trampoline.  */
  find_pc_partial_function (pc, &name, NULL, NULL);
  if (name != NULL)
    return 0;

  /* If the function lives in a valid section (even without a starting
     point) it isn't a trampoline.  */
  if (find_pc_section (pc) != NULL)
    return 0;

  /* If we can't read the instructions at START_PC, return zero.  */
  buf = (gdb_byte *) alloca ((sizeof sigreturn) + 1);
  if (!safe_frame_unwind_memory (this_frame, start_pc + 6, {buf, buflen}))
    return 0;

  /* Check for sigreturn(2).  Depending on how the assembler encoded
     the `movq %rsp, %rdi' instruction, the code starts at offset 6 or
     7.  OpenBSD 5.0 and later use the `syscall' instruction.  Older
     versions use `int $0x80'.  Check for both.  */
  if (memcmp (buf, sigreturn, sizeof sigreturn)
      && memcmp (buf + 1, sigreturn, sizeof sigreturn)
      && memcmp (buf, osigreturn, sizeof osigreturn)
      && memcmp (buf + 1, osigreturn, sizeof osigreturn))
    return 0;

  return 1;
}
/* Assuming THIS_FRAME is for a BSD sigtramp routine, return the
   address of the associated sigcontext structure.  */

static CORE_ADDR
amd64obsd_sigcontext_addr (const frame_info_ptr &this_frame)
{
  CORE_ADDR pc = get_frame_pc (this_frame);
  ULONGEST offset = (pc & (amd64obsd_page_size - 1));

  /* The %rsp register points at `struct sigcontext' upon entry of a
     signal trampoline.  The relevant part of the trampoline is

        call *%rax
        movq %rsp, %rdi
        pushq %rdi
        movq $SYS_sigreturn,%rax
        int $0x80

     (see /usr/src/sys/arch/amd64/amd64/locore.S).  The `pushq'
     instruction clobbers %rsp, but its value is saved in `%rdi'.  */
  if (offset > 5)
    return get_frame_register_unsigned (this_frame, AMD64_RDI_REGNUM);
  else
    return get_frame_register_unsigned (this_frame, AMD64_RSP_REGNUM);
}
/* OpenBSD 3.5 or later.  */

/* Mapping between the general-purpose registers in `struct reg'
   format and GDB's register cache layout.  */

/* From <machine/reg.h>.  */
int amd64obsd_r_reg_offset[] =
{
  14 * 8,   /* %rax */
  13 * 8,   /* %rbx */
  3 * 8,    /* %rcx */
  2 * 8,    /* %rdx */
  1 * 8,    /* %rsi */
  0 * 8,    /* %rdi */
  12 * 8,   /* %rbp */
  15 * 8,   /* %rsp */
  4 * 8,    /* %r8 ...  */
  5 * 8,
  6 * 8,
  7 * 8,
  8 * 8,
  9 * 8,
  10 * 8,
  11 * 8,   /* ... %r15 */
  16 * 8,   /* %rip */
  17 * 8,   /* %eflags */
  18 * 8,   /* %cs */
  19 * 8,   /* %ss */
  20 * 8,   /* %ds */
  21 * 8,   /* %es */
  22 * 8,   /* %fs */
  23 * 8    /* %gs */
};

/* From <machine/signal.h>.  */
static int amd64obsd_sc_reg_offset[] =
{
  14 * 8,   /* %rax */
  13 * 8,   /* %rbx */
  3 * 8,    /* %rcx */
  2 * 8,    /* %rdx */
  1 * 8,    /* %rsi */
  0 * 8,    /* %rdi */
  12 * 8,   /* %rbp */
  24 * 8,   /* %rsp */
  4 * 8,    /* %r8 ...  */
  5 * 8,
  6 * 8,
  7 * 8,
  8 * 8,
  9 * 8,
  10 * 8,
  11 * 8,   /* ... %r15 */
  21 * 8,   /* %rip */
  23 * 8,   /* %eflags */
  22 * 8,   /* %cs */
  25 * 8,   /* %ss */
  18 * 8,   /* %ds */
  17 * 8,   /* %es */
  16 * 8,   /* %fs */
  15 * 8    /* %gs */
};

/* From /usr/src/lib/libpthread/arch/amd64/uthread_machdep.c.  */
static int amd64obsd_uthread_reg_offset[] =
{
  19 * 8,   /* %rax */
  16 * 8,   /* %rbx */
  18 * 8,   /* %rcx */
  17 * 8,   /* %rdx */
  14 * 8,   /* %rsi */
  13 * 8,   /* %rdi */
  15 * 8,   /* %rbp */
  -1,       /* %rsp */
  12 * 8,   /* %r8 ...  */
  11 * 8,
  10 * 8,
  9 * 8,
  8 * 8,
  7 * 8,
  6 * 8,
  5 * 8,    /* ... %r15 */
  20 * 8,   /* %rip */
  4 * 8,    /* %eflags */
  21 * 8,   /* %cs */
  -1,       /* %ss */
  3 * 8,    /* %ds */
  2 * 8,    /* %es */
  1 * 8,    /* %fs */
  0 * 8     /* %gs */
};
/* Offset within the thread structure where we can find the saved
   stack pointer (%esp).  */
#define AMD64OBSD_UTHREAD_RSP_OFFSET 400

static void
amd64obsd_supply_uthread (struct regcache *regcache,
                          int regnum, CORE_ADDR addr)
{
  struct gdbarch *gdbarch = regcache->arch ();
  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
  CORE_ADDR sp_addr = addr + AMD64OBSD_UTHREAD_RSP_OFFSET;
  CORE_ADDR sp = 0;
  gdb_byte buf[8];
  int i;

  gdb_assert (regnum >= -1);

  if (regnum == -1 || regnum == AMD64_RSP_REGNUM)
    {
      int offset;

      /* Fetch stack pointer from thread structure.  */
      sp = read_memory_unsigned_integer (sp_addr, 8, byte_order);

      /* Adjust the stack pointer such that it looks as if we just
         returned from _thread_machdep_switch.  */
      offset = amd64obsd_uthread_reg_offset[AMD64_RIP_REGNUM] + 8;
      store_unsigned_integer (buf, 8, byte_order, sp + offset);
      regcache->raw_supply (AMD64_RSP_REGNUM, buf);
    }

  for (i = 0; i < ARRAY_SIZE (amd64obsd_uthread_reg_offset); i++)
    {
      if (amd64obsd_uthread_reg_offset[i] != -1
          && (regnum == -1 || regnum == i))
        {
          /* Fetch stack pointer from thread structure (if we didn't
             do so already).  */
          if (sp == 0)
            sp = read_memory_unsigned_integer (sp_addr, 8, byte_order);

          /* Read the saved register from the stack frame.  */
          read_memory (sp + amd64obsd_uthread_reg_offset[i], buf, 8);
          regcache->raw_supply (i, buf);
        }
    }
}
static void
amd64obsd_collect_uthread (const struct regcache *regcache,
                           int regnum, CORE_ADDR addr)
{
  struct gdbarch *gdbarch = regcache->arch ();
  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
  CORE_ADDR sp_addr = addr + AMD64OBSD_UTHREAD_RSP_OFFSET;
  CORE_ADDR sp = 0;
  gdb_byte buf[8];
  int i;

  gdb_assert (regnum >= -1);

  if (regnum == -1 || regnum == AMD64_RSP_REGNUM)
    {
      int offset;

      /* Calculate the stack pointer (frame pointer) that will be
         stored into the thread structure.  */
      offset = amd64obsd_uthread_reg_offset[AMD64_RIP_REGNUM] + 8;
      regcache->raw_collect (AMD64_RSP_REGNUM, buf);
      sp = extract_unsigned_integer (buf, 8, byte_order) - offset;

      /* Store the stack pointer.  */
      write_memory_unsigned_integer (sp_addr, 8, byte_order, sp);

      /* The stack pointer was (potentially) modified.  Make sure we
         build a proper stack frame.  */
      regnum = -1;
    }

  for (i = 0; i < ARRAY_SIZE (amd64obsd_uthread_reg_offset); i++)
    {
      if (amd64obsd_uthread_reg_offset[i] != -1
          && (regnum == -1 || regnum == i))
        {
          /* Fetch stack pointer from thread structure (if we didn't
             calculate it already).  */
          if (sp == 0)
            sp = read_memory_unsigned_integer (sp_addr, 8, byte_order);

          /* Write the register into the stack frame.  */
          regcache->raw_collect (i, buf);
          write_memory (sp + amd64obsd_uthread_reg_offset[i], buf, 8);
        }
    }
}
/* Kernel debugging support.  */

/* From <machine/frame.h>.  Easy since `struct trapframe' matches
   `struct sigcontext'.  */
#define amd64obsd_tf_reg_offset amd64obsd_sc_reg_offset

static struct trad_frame_cache *
amd64obsd_trapframe_cache (const frame_info_ptr &this_frame, void **this_cache)
{
  struct gdbarch *gdbarch = get_frame_arch (this_frame);
  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
  struct trad_frame_cache *cache;
  CORE_ADDR func, sp, addr;
  ULONGEST cs;
  const char *name;
  int i;

  if (*this_cache)
    return (struct trad_frame_cache *) *this_cache;

  cache = trad_frame_cache_zalloc (this_frame);
  *this_cache = cache;

  func = get_frame_func (this_frame);
  sp = get_frame_register_unsigned (this_frame, AMD64_RSP_REGNUM);

  find_pc_partial_function (func, &name, NULL, NULL);
  if (name && startswith (name, "Xintr"))
    addr = sp + 8;   /* It's an interrupt frame.  */
  else
    addr = sp;

  for (i = 0; i < ARRAY_SIZE (amd64obsd_tf_reg_offset); i++)
    if (amd64obsd_tf_reg_offset[i] != -1)
      trad_frame_set_reg_addr (cache, i, addr + amd64obsd_tf_reg_offset[i]);

  /* Read %cs from trap frame.  */
  addr += amd64obsd_tf_reg_offset[AMD64_CS_REGNUM];
  cs = read_memory_unsigned_integer (addr, 8, byte_order);
  if ((cs & I386_SEL_RPL) == I386_SEL_UPL)
    {
      /* Trap from user space; terminate backtrace.  */
      trad_frame_set_id (cache, outer_frame_id);
    }
  else
    {
      /* Construct the frame ID using the function start.  */
      trad_frame_set_id (cache, frame_id_build (sp + 16, func));
    }

  return cache;
}
static void
amd64obsd_trapframe_this_id (const frame_info_ptr &this_frame,
                             void **this_cache, struct frame_id *this_id)
{
  struct trad_frame_cache *cache =
    amd64obsd_trapframe_cache (this_frame, this_cache);

  trad_frame_get_id (cache, this_id);
}

static struct value *
amd64obsd_trapframe_prev_register (const frame_info_ptr &this_frame,
                                   void **this_cache, int regnum)
{
  struct trad_frame_cache *cache =
    amd64obsd_trapframe_cache (this_frame, this_cache);

  return trad_frame_get_register (cache, this_frame, regnum);
}

static int
amd64obsd_trapframe_sniffer (const struct frame_unwind *self,
                             const frame_info_ptr &this_frame,
                             void **this_prologue_cache)
{
  ULONGEST cs;
  const char *name;

  /* Check Current Privilege Level and bail out if we're not executing
     in kernel space.  */
  cs = get_frame_register_unsigned (this_frame, AMD64_CS_REGNUM);
  if ((cs & I386_SEL_RPL) == I386_SEL_UPL)
    return 0;

  find_pc_partial_function (get_frame_pc (this_frame), &name, NULL, NULL);
  return (name && ((strcmp (name, "calltrap") == 0)
                   || (strcmp (name, "osyscall1") == 0)
                   || (strcmp (name, "Xsyscall") == 0)
                   || (startswith (name, "Xintr"))));
}

static const struct frame_unwind_legacy amd64obsd_trapframe_unwind (
  /* FIXME: kettenis/20051219: This really is more like an interrupt
     frame, but SIGTRAMP_FRAME would print <signal handler called>,
     which really is not what we want here.  */
  "amd64 openbsd trap",
  NORMAL_FRAME,
  FRAME_UNWIND_ARCH,
  default_frame_unwind_stop_reason,
  amd64obsd_trapframe_this_id,
  amd64obsd_trapframe_prev_register,
  NULL,
  amd64obsd_trapframe_sniffer
);
static void
amd64obsd_init_abi (struct gdbarch_info info, struct gdbarch *gdbarch)
{
  i386_gdbarch_tdep *tdep = gdbarch_tdep<i386_gdbarch_tdep> (gdbarch);

  amd64_init_abi (info, gdbarch,
                  amd64_target_description (X86_XSTATE_SSE_MASK, true));

  obsd_init_abi (info, gdbarch);

  /* Initialize general-purpose register set details.  */
  tdep->gregset_reg_offset = amd64obsd_r_reg_offset;
  tdep->gregset_num_regs = ARRAY_SIZE (amd64obsd_r_reg_offset);
  tdep->sizeof_gregset = 24 * 8;

  tdep->jb_pc_offset = 7 * 8;

  tdep->sigtramp_p = amd64obsd_sigtramp_p;
  tdep->sigcontext_addr = amd64obsd_sigcontext_addr;
  tdep->sc_reg_offset = amd64obsd_sc_reg_offset;
  tdep->sc_num_regs = ARRAY_SIZE (amd64obsd_sc_reg_offset);

  /* OpenBSD provides a user-level threads implementation.  */
  bsd_uthread_set_supply_uthread (gdbarch, amd64obsd_supply_uthread);
  bsd_uthread_set_collect_uthread (gdbarch, amd64obsd_collect_uthread);

  /* OpenBSD uses SVR4-style shared libraries.  */
  set_solib_svr4_ops (gdbarch, make_svr4_lp64_solib_ops);

  /* Unwind kernel trap frames correctly.  */
  frame_unwind_prepend_unwinder (gdbarch, &amd64obsd_trapframe_unwind);
}

INIT_GDB_FILE (amd64obsd_tdep)
{
  /* The OpenBSD/amd64 native dependent code makes this assumption.  */
  gdb_assert (ARRAY_SIZE (amd64obsd_r_reg_offset) == AMD64_NUM_GREGS);

  gdbarch_register_osabi (bfd_arch_i386, bfd_mach_x86_64,
                          GDB_OSABI_OPENBSD, amd64obsd_init_abi);
}