Step over clone syscall w/ breakpoint, TARGET_WAITKIND_THREAD_CLONED

(A good chunk of the problem statement in the commit log below is
Andrew's, adjusted for a different solution, and for covering
displaced stepping too.)

This commit addresses bugs gdb/19675 and gdb/27830, both about
stepping over a breakpoint set at a clone syscall instruction; one is
about displaced stepping, and the other about in-line stepping.

Currently, when a new thread is created through a clone syscall, GDB
sets the new thread running.  With 'continue' this makes sense
(assuming no schedlock):

 - all-stop mode, user issues 'continue', all threads are set running,
   a newly created thread should also be set running.

 - non-stop mode, user issues 'continue', other pre-existing threads
   are not affected, but as the new thread is (sort-of) a child of the
   thread the user asked to run, it makes sense that the new thread
   should be created in the running state.

Similarly, if we are stopped at the clone syscall, and there's no
software breakpoint at this address, then the current behaviour is
fine:

 - all-stop mode, user issues 'stepi', stepping will be done in place
   (as there's no breakpoint to step over).  While stepping the thread
   of interest all the other threads will be allowed to continue.  A
   newly created thread will be set running, and then stopped once the
   thread of interest has completed its step.

 - non-stop mode, user issues 'stepi', stepping will be done in place
   (as there's no breakpoint to step over).  Other threads might be
   running or stopped, but as with the continue case above, the new
   thread will be created running.  The only possible issue here is
   that the new thread will be left running after the initial thread
   has completed its stepi.  The user would need to manually select
   the thread and interrupt it; this might not be what the user
   expects.  However, this is not something this commit tries to
   change.

The problem then is what happens when we try to step over a clone
syscall if there is a breakpoint at the syscall address.

- For both all-stop and non-stop modes, with in-line stepping:

   + user issues 'stepi',
   + [non-stop mode only] GDB stops all threads.  In all-stop mode all
     threads are already stopped.
   + GDB removes s/w breakpoint at syscall address,
   + GDB single steps just the thread of interest, all other threads
     are left stopped,
   + New thread is created running,
   + Initial thread completes its step,
   + [non-stop mode only] GDB resumes all threads that it previously
     stopped.

There are two problems in the in-line stepping scenario above:

  1. The new thread might pass through the same code that the initial
     thread is in (i.e. the clone syscall code), in which case it will
     fail to hit the breakpoint in clone as this was removed so the
     first thread can single step,

  2. The new thread might trigger some other stop event before the
     initial thread reports its step completion.  If this happens we
     end up triggering an assertion as GDB assumes that only the
     thread being stepped should stop.  The assert looks like this:

     infrun.c:5899: internal-error: int finish_step_over(execution_control_state*): Assertion `ecs->event_thread->control.trap_expected' failed.

- For both all-stop and non-stop modes, with displaced stepping:

   + user issues 'stepi',
   + GDB starts the displaced step, moves thread's PC to the
     out-of-line scratch pad, maybe adjusts registers,
   + GDB single steps the thread of interest, [non-stop mode only] all
     other threads are left as they were, either running or stopped.
     In all-stop, all other threads are left stopped.
   + New thread is created running,
   + Initial thread completes its step, GDB re-adjusts its PC,
     restores/releases scratchpad,
   + [non-stop mode only] GDB resumes the thread, now past its
     breakpoint.
   + [all-stop mode only] GDB resumes all threads.

There is one problem with the displaced stepping scenario above:

  3. When the parent thread completed its step, GDB adjusted its PC,
     but did not adjust the child's PC, thus that new child thread
     will continue execution in the scratch pad, invoking undefined
     behavior.  If you're lucky, you see a crash.  If unlucky, the
     inferior gets silently corrupted.

What is needed is for GDB to have more control over whether the new
thread is created running or not.  Issue #1 above requires that the
new thread not be allowed to run until the breakpoint has been
reinserted.  The only way to guarantee this is if the new thread is
held in a stopped state until the single step has completed.  Issue #3
above requires that GDB is informed of when a thread clones itself,
and of what is the child's ptid, so that GDB can fixup both the parent
and the child.

When looking for solutions to this problem I considered how GDB
handles fork/vfork as these have some of the same issues.  The main
difference between fork/vfork and clone is that the clone events are
not reported back to core GDB.  Instead, the clone event is handled
automatically in the target code and the child thread is immediately
set running.

Note we have support for requesting thread creation events out of the
target (TARGET_WAITKIND_THREAD_CREATED).  However, those are reported
for the new/child thread.  That would be sufficient to address in-line
stepping (issue #1), but not for displaced-stepping (issue #3).  To
handle displaced-stepping, we need an event that is reported to the
_parent_ of the clone, as the information about the displaced step is
associated with the clone parent.  TARGET_WAITKIND_THREAD_CREATED
includes no indication of which thread is the parent that spawned the
new child.  In fact, for some targets, e.g. Windows, it would be
impossible to know which thread that was, as thread creation there
doesn't work by "cloning".

The solution implemented here is to model clone on fork/vfork, and
introduce a new TARGET_WAITKIND_THREAD_CLONED event.  This event is
similar to TARGET_WAITKIND_FORKED and TARGET_WAITKIND_VFORKED, except
that we end up with a new thread in the same process, instead of a new
thread of a new process.  Like FORKED and VFORKED, THREAD_CLONED
waitstatuses have a child_ptid property, and the child is held stopped
until GDB explicitly resumes it.  This addresses the in-line stepping
case (issues #1 and #2).

The infrun code that handles displaced stepping fixup for the child
after a fork/vfork event is thus reused for THREAD_CLONE, with some
minimal conditions added, addressing the displaced stepping case
(issue #3).

The native Linux backend is adjusted to unconditionally report
TARGET_WAITKIND_THREAD_CLONED events to the core.

Following the follow_fork model in core GDB, we introduce a
target_follow_clone target method, which is responsible for making the
new clone child visible to the rest of GDB.

Subsequent patches will add clone events support to the remote
protocol and gdbserver.

A testcase will be added by a later patch.

displaced_step_in_progress_thread becomes unused with this patch, but
a new use will reappear later in the series.  To avoid deleting it and
re-adding it, this patch marks it with the unused attribute, and the
later patch removes the attribute again.  We need to do this because
the function is static, and with no callers the compiler would warn
(an error with -Werror), breaking the build.

Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=19675
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=27830

Change-Id: I474e9a7015dd3d33469e322a5764ae83f8a32787
Author: Pedro Alves
Date:   2021-11-12 20:50:29 +00:00
Parent: 6bf0ade3d4
Commit: 3d5fce9fb6

8 changed files with 299 additions and 201 deletions


@@ -1584,6 +1584,7 @@ step_over_info_valid_p (void)
 /* Return true if THREAD is doing a displaced step.  */
 
 static bool
+ATTRIBUTE_UNUSED
 displaced_step_in_progress_thread (thread_info *thread)
 {
   gdb_assert (thread != nullptr);
@@ -1898,6 +1899,31 @@ static displaced_step_finish_status
 displaced_step_finish (thread_info *event_thread,
		       const target_waitstatus &event_status)
 {
+  /* Check whether the parent is displaced stepping.  */
+  struct regcache *regcache = get_thread_regcache (event_thread);
+  struct gdbarch *gdbarch = regcache->arch ();
+  inferior *parent_inf = event_thread->inf;
+
+  /* If this was a fork/vfork/clone, this event indicates that the
+     displaced stepping of the syscall instruction has been done, so
+     we perform cleanup for parent here.  Also note that this
+     operation also cleans up the child for vfork, because their pages
+     are shared.  */
+
+  /* If this is a fork (child gets its own address space copy) and
+     some displaced step buffers were in use at the time of the fork,
+     restore the displaced step buffer bytes in the child process.
+
+     Architectures which support displaced stepping and fork events
+     must supply an implementation of
+     gdbarch_displaced_step_restore_all_in_ptid.  This is not enforced
+     during gdbarch validation to support architectures which support
+     displaced stepping but not forks.  */
+  if (event_status.kind () == TARGET_WAITKIND_FORKED
+      && gdbarch_supports_displaced_stepping (gdbarch))
+    gdbarch_displaced_step_restore_all_in_ptid
+      (gdbarch, parent_inf, event_status.child_ptid ());
+
   displaced_step_thread_state *displaced = &event_thread->displaced_step_state;
 
   /* Was this thread performing a displaced step?  */
@@ -1917,8 +1943,39 @@ displaced_step_finish (thread_info *event_thread,
 
   /* Do the fixup, and release the resources acquired to do the displaced
      step. */
-  return gdbarch_displaced_step_finish (displaced->get_original_gdbarch (),
-					event_thread, event_status);
+  displaced_step_finish_status status
+    = gdbarch_displaced_step_finish (displaced->get_original_gdbarch (),
+				     event_thread, event_status);
+
+  if (event_status.kind () == TARGET_WAITKIND_FORKED
+      || event_status.kind () == TARGET_WAITKIND_VFORKED
+      || event_status.kind () == TARGET_WAITKIND_THREAD_CLONED)
+    {
+      /* Since the vfork/fork/clone syscall instruction was executed
+	 in the scratchpad, the child's PC is also within the
+	 scratchpad.  Set the child's PC to the parent's PC value,
+	 which has already been fixed up.  Note: we use the parent's
+	 aspace here, although we're touching the child, because the
+	 child hasn't been added to the inferior list yet at this
+	 point.  */
+      struct regcache *child_regcache
+	= get_thread_arch_aspace_regcache (parent_inf->process_target (),
+					   event_status.child_ptid (),
+					   gdbarch,
+					   parent_inf->aspace);
+
+      /* Read PC value of parent.  */
+      CORE_ADDR parent_pc = regcache_read_pc (regcache);
+
+      displaced_debug_printf ("write child pc from %s to %s",
+			      paddress (gdbarch,
+					regcache_read_pc (child_regcache)),
+			      paddress (gdbarch, parent_pc));
+
+      regcache_write_pc (child_regcache, parent_pc);
+    }
+
+  return status;
 }
 
 /* Data to be passed around while handling an event.  This data is
@@ -5717,67 +5774,13 @@ handle_inferior_event (struct execution_control_state *ecs)
 
     case TARGET_WAITKIND_FORKED:
     case TARGET_WAITKIND_VFORKED:
-      /* Check whether the inferior is displaced stepping.  */
-      {
-	struct regcache *regcache = get_thread_regcache (ecs->event_thread);
-	struct gdbarch *gdbarch = regcache->arch ();
-	inferior *parent_inf = find_inferior_ptid (ecs->target, ecs->ptid);
-
-	/* If this is a fork (child gets its own address space copy)
-	   and some displaced step buffers were in use at the time of
-	   the fork, restore the displaced step buffer bytes in the
-	   child process.
-
-	   Architectures which support displaced stepping and fork
-	   events must supply an implementation of
-	   gdbarch_displaced_step_restore_all_in_ptid.  This is not
-	   enforced during gdbarch validation to support architectures
-	   which support displaced stepping but not forks.  */
-	if (ecs->ws.kind () == TARGET_WAITKIND_FORKED
-	    && gdbarch_supports_displaced_stepping (gdbarch))
-	  gdbarch_displaced_step_restore_all_in_ptid
-	    (gdbarch, parent_inf, ecs->ws.child_ptid ());
-
-	/* If displaced stepping is supported, and thread ecs->ptid is
-	   displaced stepping.  */
-	if (displaced_step_in_progress_thread (ecs->event_thread))
-	  {
-	    struct regcache *child_regcache;
-	    CORE_ADDR parent_pc;
-
-	    /* GDB has got TARGET_WAITKIND_FORKED or TARGET_WAITKIND_VFORKED,
-	       indicating that the displaced stepping of syscall instruction
-	       has been done.  Perform cleanup for parent process here.  Note
-	       that this operation also cleans up the child process for vfork,
-	       because their pages are shared.  */
-	    displaced_step_finish (ecs->event_thread, ecs->ws);
-
-	    /* Start a new step-over in another thread if there's one
-	       that needs it.  */
-	    start_step_over ();
-
-	    /* Since the vfork/fork syscall instruction was executed in the scratchpad,
-	       the child's PC is also within the scratchpad.  Set the child's PC
-	       to the parent's PC value, which has already been fixed up.
-	       FIXME: we use the parent's aspace here, although we're touching
-	       the child, because the child hasn't been added to the inferior
-	       list yet at this point.  */
-	    child_regcache
-	      = get_thread_arch_aspace_regcache (parent_inf->process_target (),
-						 ecs->ws.child_ptid (),
-						 gdbarch,
-						 parent_inf->aspace);
-	    /* Read PC value of parent process.  */
-	    parent_pc = regcache_read_pc (regcache);
-
-	    displaced_debug_printf ("write child pc from %s to %s",
-				    paddress (gdbarch,
-					      regcache_read_pc (child_regcache)),
-				    paddress (gdbarch, parent_pc));
-
-	    regcache_write_pc (child_regcache, parent_pc);
-	  }
-      }
+    case TARGET_WAITKIND_THREAD_CLONED:
+
+      displaced_step_finish (ecs->event_thread, ecs->ws);
+
+      /* Start a new step-over in another thread if there's one that
+	 needs it.  */
+      start_step_over ();
 
       context_switch (ecs);
@@ -5793,7 +5796,7 @@ handle_inferior_event (struct execution_control_state *ecs)
	 need to unpatch at follow/detach time instead to be certain
	 that new breakpoints added between catchpoint hit time and
	 vfork follow are detached.  */
-      if (ecs->ws.kind () != TARGET_WAITKIND_VFORKED)
+      if (ecs->ws.kind () == TARGET_WAITKIND_FORKED)
	{
	  /* This won't actually modify the breakpoint list, but will
	     physically remove the breakpoints from the child.  */
@@ -5825,14 +5828,24 @@ handle_inferior_event (struct execution_control_state *ecs)
       if (!bpstat_causes_stop (ecs->event_thread->control.stop_bpstat))
	{
	  bool follow_child
-	    = (follow_fork_mode_string == follow_fork_mode_child);
+	    = (ecs->ws.kind () != TARGET_WAITKIND_THREAD_CLONED
+	       && follow_fork_mode_string == follow_fork_mode_child);
 
	  ecs->event_thread->set_stop_signal (GDB_SIGNAL_0);
 
	  process_stratum_target *targ
	    = ecs->event_thread->inf->process_target ();
 
-	  bool should_resume = follow_fork ();
+	  bool should_resume;
+	  if (ecs->ws.kind () != TARGET_WAITKIND_THREAD_CLONED)
+	    should_resume = follow_fork ();
+	  else
+	    {
+	      should_resume = true;
+	      inferior *inf = ecs->event_thread->inf;
+	      inf->top_target ()->follow_clone (ecs->ws.child_ptid ());
+	      ecs->event_thread->pending_follow.set_spurious ();
+	    }
 
	  /* Note that one of these may be an invalid pointer,
	     depending on detach_fork.  */
@@ -5843,16 +5856,21 @@ handle_inferior_event (struct execution_control_state *ecs)
	     child is marked stopped.  */
 
	  /* If not resuming the parent, mark it stopped.  */
-	  if (follow_child && !detach_fork && !non_stop && !sched_multi)
+	  if (ecs->ws.kind () != TARGET_WAITKIND_THREAD_CLONED
+	      && follow_child && !detach_fork && !non_stop && !sched_multi)
	    parent->set_running (false);
 
	  /* If resuming the child, mark it running.  */
-	  if (follow_child || (!detach_fork && (non_stop || sched_multi)))
+	  if (ecs->ws.kind () == TARGET_WAITKIND_THREAD_CLONED
+	      || (follow_child || (!detach_fork && (non_stop || sched_multi))))
	    child->set_running (true);
 
	  /* In non-stop mode, also resume the other branch.  */
-	  if (!detach_fork && (non_stop
-			       || (sched_multi && target_is_non_stop_p ())))
+	  if ((ecs->ws.kind () == TARGET_WAITKIND_THREAD_CLONED
+	       && target_is_non_stop_p ())
+	      || (!detach_fork && (non_stop
+				   || (sched_multi
+				       && target_is_non_stop_p ()))))
	    {
	      if (follow_child)
		switch_to_thread (parent);


@@ -1286,6 +1286,63 @@ get_detach_signal (struct lwp_info *lp)
   return 0;
 }
 
+/* If LP has a pending fork/vfork/clone status, return it.  */
+
+static gdb::optional<target_waitstatus>
+get_pending_child_status (lwp_info *lp)
+{
+  /* Check in lwp_info::status.  */
+  if (WIFSTOPPED (lp->status) && linux_is_extended_waitstatus (lp->status))
+    {
+      int event = linux_ptrace_get_extended_event (lp->status);
+
+      if (event == PTRACE_EVENT_FORK
+	  || event == PTRACE_EVENT_VFORK
+	  || event == PTRACE_EVENT_CLONE)
+	{
+	  unsigned long child_pid;
+	  int ret = ptrace (PTRACE_GETEVENTMSG, lp->ptid.lwp (), 0, &child_pid);
+	  if (ret == 0)
+	    {
+	      target_waitstatus ws;
+
+	      if (event == PTRACE_EVENT_FORK)
+		ws.set_forked (ptid_t (child_pid, child_pid));
+	      else if (event == PTRACE_EVENT_VFORK)
+		ws.set_vforked (ptid_t (child_pid, child_pid));
+	      else if (event == PTRACE_EVENT_CLONE)
+		ws.set_thread_cloned (ptid_t (lp->ptid.pid (), child_pid));
+	      else
+		gdb_assert_not_reached ("unhandled");
+
+	      return ws;
+	    }
+	  else
+	    {
+	      perror_warning_with_name (_("Failed to retrieve event msg"));
+	      return {};
+	    }
+	}
+    }
+
+  /* Check in lwp_info::waitstatus.  */
+  if (is_new_child_status (lp->waitstatus.kind ()))
+    return lp->waitstatus;
+
+  thread_info *tp = find_thread_ptid (linux_target, lp->ptid);
+
+  /* Check in thread_info::pending_waitstatus.  */
+  if (tp->has_pending_waitstatus ()
+      && is_new_child_status (tp->pending_waitstatus ().kind ()))
+    return tp->pending_waitstatus ();
+
+  /* Check in thread_info::pending_follow.  */
+  if (is_new_child_status (tp->pending_follow.kind ()))
+    return tp->pending_follow;
+
+  return {};
+}
+
 /* Detach from LP.  If SIGNO_P is non-NULL, then it points to the
    signal number that should be passed to the LWP when detaching.
    Otherwise pass any pending signal the LWP may have, if any.  */
@@ -1296,54 +1353,13 @@ detach_one_lwp (struct lwp_info *lp, int *signo_p)
   int lwpid = lp->ptid.lwp ();
   int signo;
 
-  gdb_assert (lp->status == 0 || WIFSTOPPED (lp->status));
-
-  /* If the lwp/thread we are about to detach has a pending fork event,
-     there is a process GDB is attached to that the core of GDB doesn't know
-     about.  Detach from it.  */
-
-  /* Check in lwp_info::status.  */
-  if (WIFSTOPPED (lp->status) && linux_is_extended_waitstatus (lp->status))
-    {
-      int event = linux_ptrace_get_extended_event (lp->status);
-
-      if (event == PTRACE_EVENT_FORK || event == PTRACE_EVENT_VFORK)
-	{
-	  unsigned long child_pid;
-	  int ret = ptrace (PTRACE_GETEVENTMSG, lp->ptid.lwp (), 0, &child_pid);
-	  if (ret == 0)
-	    detach_one_pid (child_pid, 0);
-	  else
-	    perror_warning_with_name (_("Failed to detach fork child"));
-	}
-    }
-
-  /* Check in lwp_info::waitstatus.  */
-  if (lp->waitstatus.kind () == TARGET_WAITKIND_VFORKED
-      || lp->waitstatus.kind () == TARGET_WAITKIND_FORKED)
-    detach_one_pid (lp->waitstatus.child_ptid ().pid (), 0);
-
-  /* Check in thread_info::pending_waitstatus.  */
-  thread_info *tp = find_thread_ptid (linux_target, lp->ptid);
-  if (tp->has_pending_waitstatus ())
-    {
-      const target_waitstatus &ws = tp->pending_waitstatus ();
-
-      if (ws.kind () == TARGET_WAITKIND_VFORKED
-	  || ws.kind () == TARGET_WAITKIND_FORKED)
-	detach_one_pid (ws.child_ptid ().pid (), 0);
-    }
-
-  /* Check in thread_info::pending_follow.  */
-  if (tp->pending_follow.kind () == TARGET_WAITKIND_VFORKED
-      || tp->pending_follow.kind () == TARGET_WAITKIND_FORKED)
-    detach_one_pid (tp->pending_follow.child_ptid ().pid (), 0);
-
-  if (lp->status != 0)
-    linux_nat_debug_printf ("Pending %s for %s on detach.",
-			    strsignal (WSTOPSIG (lp->status)),
-			    lp->ptid.to_string ().c_str ());
+  /* If the lwp/thread we are about to detach has a pending fork/clone
+     event, there is a process/thread GDB is attached to that the core
+     of GDB doesn't know about.  Detach from it.  */
+
+  gdb::optional<target_waitstatus> ws = get_pending_child_status (lp);
+  if (ws.has_value ())
+    detach_one_pid (ws->child_ptid ().lwp (), 0);
 
   /* If there is a pending SIGSTOP, get rid of it.  */
   if (lp->signalled)
@@ -1821,6 +1837,55 @@ linux_handle_syscall_trap (struct lwp_info *lp, int stopping)
   return 1;
 }
 
+/* See target.h.  */
+
+void
+linux_nat_target::follow_clone (ptid_t child_ptid)
+{
+  lwp_info *new_lp = add_lwp (child_ptid);
+  new_lp->stopped = 1;
+
+  /* If the thread_db layer is active, let it record the user
+     level thread id and status, and add the thread to GDB's
+     list.  */
+  if (!thread_db_notice_clone (inferior_ptid, new_lp->ptid))
+    {
+      /* The process is not using thread_db.  Add the LWP to
+	 GDB's list.  */
+      add_thread (linux_target, new_lp->ptid);
+    }
+
+  /* We just created NEW_LP so it cannot yet contain STATUS.  */
+  gdb_assert (new_lp->status == 0);
+
+  if (!pull_pid_from_list (&stopped_pids, child_ptid.lwp (), &new_lp->status))
+    internal_error (_("no saved status for clone lwp"));
+
+  if (WSTOPSIG (new_lp->status) != SIGSTOP)
+    {
+      /* This can happen if someone starts sending signals to
+	 the new thread before it gets a chance to run, which
+	 have a lower number than SIGSTOP (e.g. SIGUSR1).
+	 This is an unlikely case, and harder to handle for
+	 fork / vfork than for clone, so we do not try - but
+	 we handle it for clone events here.  */
+      new_lp->signalled = 1;
+
+      /* Save the wait status to report later.  */
+      linux_nat_debug_printf
+	("waitpid of new LWP %ld, saving status %s",
+	 (long) new_lp->ptid.lwp (), status_to_str (new_lp->status).c_str ());
+    }
+  else
+    {
+      new_lp->status = 0;
+
+      if (report_thread_events)
+	new_lp->waitstatus.set_thread_created ();
+    }
+}
+
 /* Handle a GNU/Linux extended wait response.  If we see a clone
    event, we need to add the new LWP to our list (and not report the
    trap to higher layers).  This function returns non-zero if the
@@ -1861,11 +1926,9 @@ linux_handle_extended_wait (struct lwp_info *lp, int status)
	internal_error (_("wait returned unexpected status 0x%x"), status);
     }
 
-  ptid_t child_ptid (new_pid, new_pid);
-
   if (event == PTRACE_EVENT_FORK || event == PTRACE_EVENT_VFORK)
     {
-      open_proc_mem_file (child_ptid);
+      open_proc_mem_file (ptid_t (new_pid, new_pid));
 
       /* The arch-specific native code may need to know about new
	 forks even if those end up never mapped to an
@@ -1902,66 +1965,18 @@ linux_handle_extended_wait (struct lwp_info *lp, int status)
	}
 
       if (event == PTRACE_EVENT_FORK)
-	ourstatus->set_forked (child_ptid);
+	ourstatus->set_forked (ptid_t (new_pid, new_pid));
       else if (event == PTRACE_EVENT_VFORK)
-	ourstatus->set_vforked (child_ptid);
+	ourstatus->set_vforked (ptid_t (new_pid, new_pid));
       else if (event == PTRACE_EVENT_CLONE)
	{
-	  struct lwp_info *new_lp;
-
-	  ourstatus->set_ignore ();
-
	  linux_nat_debug_printf
	    ("Got clone event from LWP %d, new child is LWP %ld", pid, new_pid);
 
-	  new_lp = add_lwp (ptid_t (lp->ptid.pid (), new_pid));
-	  new_lp->stopped = 1;
-	  new_lp->resumed = 1;
-
-	  /* If the thread_db layer is active, let it record the user
-	     level thread id and status, and add the thread to GDB's
-	     list.  */
-	  if (!thread_db_notice_clone (lp->ptid, new_lp->ptid))
-	    {
-	      /* The process is not using thread_db.  Add the LWP to
-		 GDB's list.  */
-	      add_thread (linux_target, new_lp->ptid);
-	    }
-
-	  /* Even if we're stopping the thread for some reason
-	     internal to this module, from the perspective of infrun
-	     and the user/frontend, this new thread is running until
-	     it next reports a stop.  */
-	  set_running (linux_target, new_lp->ptid, true);
-	  set_executing (linux_target, new_lp->ptid, true);
-
-	  if (WSTOPSIG (status) != SIGSTOP)
-	    {
-	      /* This can happen if someone starts sending signals to
-		 the new thread before it gets a chance to run, which
-		 have a lower number than SIGSTOP (e.g. SIGUSR1).
-		 This is an unlikely case, and harder to handle for
-		 fork / vfork than for clone, so we do not try - but
-		 we handle it for clone events here.  */
-	      new_lp->signalled = 1;
-
-	      /* We created NEW_LP so it cannot yet contain STATUS.  */
-	      gdb_assert (new_lp->status == 0);
-
-	      /* Save the wait status to report later.  */
-	      linux_nat_debug_printf
-		("waitpid of new LWP %ld, saving status %s",
-		 (long) new_lp->ptid.lwp (), status_to_str (status).c_str ());
-	      new_lp->status = status;
-	    }
-	  else if (report_thread_events)
-	    {
-	      new_lp->waitstatus.set_thread_created ();
-	      new_lp->status = status;
-	    }
-
-	  return 1;
+	  /* Save the status again, we'll use it in follow_clone.  */
+	  add_to_pid_list (&stopped_pids, new_pid, status);
+
+	  ourstatus->set_thread_cloned (ptid_t (lp->ptid.pid (), new_pid));
	}
 
       return 0;
@@ -3536,59 +3551,56 @@ kill_wait_callback (struct lwp_info *lp)
   return 0;
 }
 
-/* Kill the fork children of any threads of inferior INF that are
-   stopped at a fork event.  */
+/* Kill the fork/clone child of LP if it has an unfollowed child.  */
 
-static void
-kill_unfollowed_fork_children (struct inferior *inf)
+static int
+kill_unfollowed_child_callback (lwp_info *lp)
 {
-  for (thread_info *thread : inf->non_exited_threads ())
+  gdb::optional<target_waitstatus> ws = get_pending_child_status (lp);
+  if (ws.has_value ())
     {
-      struct target_waitstatus *ws = &thread->pending_follow;
+      ptid_t child_ptid = ws->child_ptid ();
+      int child_pid = child_ptid.pid ();
+      int child_lwp = child_ptid.lwp ();
 
-      if (ws->kind () == TARGET_WAITKIND_FORKED
-	  || ws->kind () == TARGET_WAITKIND_VFORKED)
-	{
-	  ptid_t child_ptid = ws->child_ptid ();
-	  int child_pid = child_ptid.pid ();
-	  int child_lwp = child_ptid.lwp ();
+      kill_one_lwp (child_lwp);
+      kill_wait_one_lwp (child_lwp);
 
-	  kill_one_lwp (child_lwp);
-	  kill_wait_one_lwp (child_lwp);
-
-	  /* Let the arch-specific native code know this process is
-	     gone.  */
-	  linux_target->low_forget_process (child_pid);
-	}
+      /* Let the arch-specific native code know this process is
+	 gone.  */
+      if (ws->kind () != TARGET_WAITKIND_THREAD_CLONED)
+	linux_target->low_forget_process (child_pid);
     }
+
+  return 0;
 }
 
 void
 linux_nat_target::kill ()
 {
-  /* If we're stopped while forking and we haven't followed yet,
-     kill the other task.  We need to do this first because the
+  ptid_t pid_ptid (inferior_ptid.pid ());
+
+  /* If we're stopped while forking/cloning and we haven't followed
+     yet, kill the child task.  We need to do this first because the
      parent will be sleeping if this is a vfork.  */
-  kill_unfollowed_fork_children (current_inferior ());
+  iterate_over_lwps (pid_ptid, kill_unfollowed_child_callback);
 
   if (forks_exist_p ())
     linux_fork_killall ();
   else
     {
-      ptid_t ptid = ptid_t (inferior_ptid.pid ());
-
       /* Stop all threads before killing them, since ptrace requires
	 that the thread is stopped to successfully PTRACE_KILL.  */
-      iterate_over_lwps (ptid, stop_callback);
+      iterate_over_lwps (pid_ptid, stop_callback);
       /* ... and wait until all of them have reported back that
	 they're no longer running.  */
-      iterate_over_lwps (ptid, stop_wait_callback);
+      iterate_over_lwps (pid_ptid, stop_wait_callback);
 
       /* Kill all LWP's ...  */
-      iterate_over_lwps (ptid, kill_callback);
+      iterate_over_lwps (pid_ptid, kill_callback);
 
       /* ... and wait until we've flushed all events.  */
-      iterate_over_lwps (ptid, kill_wait_callback);
+      iterate_over_lwps (pid_ptid, kill_wait_callback);
     }
 
   target_mourn_inferior (inferior_ptid);


@@ -129,6 +129,8 @@ public:
   void follow_fork (inferior *, ptid_t, target_waitkind, bool, bool) override;
 
+  void follow_clone (ptid_t) override;
+
   std::vector<static_tracepoint_marker>
     static_tracepoint_markers_by_strid (const char *id) override;


@@ -76,6 +76,7 @@ struct dummy_target : public target_ops
   int insert_vfork_catchpoint (int arg0) override;
   int remove_vfork_catchpoint (int arg0) override;
   void follow_fork (inferior *arg0, ptid_t arg1, target_waitkind arg2, bool arg3, bool arg4) override;
+  void follow_clone (ptid_t arg0) override;
   int insert_exec_catchpoint (int arg0) override;
   int remove_exec_catchpoint (int arg0) override;
   void follow_exec (inferior *arg0, ptid_t arg1, const char *arg2) override;
@@ -250,6 +251,7 @@ struct debug_target : public target_ops
   int insert_vfork_catchpoint (int arg0) override;
   int remove_vfork_catchpoint (int arg0) override;
   void follow_fork (inferior *arg0, ptid_t arg1, target_waitkind arg2, bool arg3, bool arg4) override;
+  void follow_clone (ptid_t arg0) override;
   int insert_exec_catchpoint (int arg0) override;
   int remove_exec_catchpoint (int arg0) override;
   void follow_exec (inferior *arg0, ptid_t arg1, const char *arg2) override;
@@ -1545,6 +1547,28 @@ debug_target::follow_fork (inferior *arg0, ptid_t arg1, target_waitkind arg2, bo
   gdb_puts (")\n", gdb_stdlog);
 }
 
+void
+target_ops::follow_clone (ptid_t arg0)
+{
+  this->beneath ()->follow_clone (arg0);
+}
+
+void
+dummy_target::follow_clone (ptid_t arg0)
+{
+  default_follow_clone (this, arg0);
+}
+
+void
+debug_target::follow_clone (ptid_t arg0)
+{
+  gdb_printf (gdb_stdlog, "-> %s->follow_clone (...)\n", this->beneath ()->shortname ());
+  this->beneath ()->follow_clone (arg0);
+  gdb_printf (gdb_stdlog, "<- %s->follow_clone (", this->beneath ()->shortname ());
+  target_debug_print_ptid_t (arg0);
+  gdb_puts (")\n", gdb_stdlog);
+}
+
 int
 target_ops::insert_exec_catchpoint (int arg0)
 {


@@ -2701,6 +2701,13 @@ default_follow_fork (struct target_ops *self, inferior *child_inf,
   internal_error (_("could not find a target to follow fork"));
 }
 
+static void
+default_follow_clone (struct target_ops *self, ptid_t child_ptid)
+{
+  /* Some target returned a clone event, but did not know how to follow it.  */
+  internal_error (_("could not find a target to follow clone"));
+}
+
 /* See target.h.  */
 
 void


@@ -637,6 +637,13 @@ struct target_ops
       TARGET_DEFAULT_RETURN (1);
     virtual void follow_fork (inferior *, ptid_t, target_waitkind, bool, bool)
       TARGET_DEFAULT_FUNC (default_follow_fork);
+
+    /* Add CHILD_PTID to the thread list, after handling a
+       TARGET_WAITKIND_THREAD_CLONED event for the clone parent.  The
+       parent is inferior_ptid.  */
+    virtual void follow_clone (ptid_t child_ptid)
+      TARGET_DEFAULT_FUNC (default_follow_clone);
+
     virtual int insert_exec_catchpoint (int)
       TARGET_DEFAULT_RETURN (1);
     virtual int remove_exec_catchpoint (int)


@@ -45,6 +45,7 @@ DIAGNOSTIC_ERROR_SWITCH
 
     case TARGET_WAITKIND_FORKED:
     case TARGET_WAITKIND_VFORKED:
+    case TARGET_WAITKIND_THREAD_CLONED:
       return string_appendf (str, ", child_ptid = %s",
			     this->child_ptid ().to_string ().c_str ());


@@ -95,6 +95,13 @@ enum target_waitkind
   /* There are no resumed children left in the program.  */
   TARGET_WAITKIND_NO_RESUMED,
 
+  /* The thread was cloned.  The event's ptid corresponds to the
+     cloned parent.  The cloned child is held stopped at its entry
+     point, and its ptid is in the event's m_child_ptid.  The target
+     must not add the cloned child to GDB's thread list until
+     target_ops::follow_clone() is called.  */
+  TARGET_WAITKIND_THREAD_CLONED,
+
   /* The thread was created.  */
   TARGET_WAITKIND_THREAD_CREATED,
@@ -102,6 +109,17 @@ enum target_waitkind
   TARGET_WAITKIND_THREAD_EXITED,
 };
 
+/* Determine if KIND represents an event with a new child - a fork,
+   vfork, or clone.  */
+
+static inline bool
+is_new_child_status (target_waitkind kind)
+{
+  return (kind == TARGET_WAITKIND_FORKED
+	  || kind == TARGET_WAITKIND_VFORKED
+	  || kind == TARGET_WAITKIND_THREAD_CLONED);
+}
+
 /* Return KIND as a string.  */
 
 static inline const char *
@@ -125,6 +143,8 @@ DIAGNOSTIC_ERROR_SWITCH
       return "FORKED";
     case TARGET_WAITKIND_VFORKED:
       return "VFORKED";
+    case TARGET_WAITKIND_THREAD_CLONED:
+      return "THREAD_CLONED";
     case TARGET_WAITKIND_EXECD:
       return "EXECD";
    case TARGET_WAITKIND_VFORK_DONE:
@@ -325,6 +345,14 @@ struct target_waitstatus
     return *this;
   }
 
+  target_waitstatus &set_thread_cloned (ptid_t child_ptid)
+  {
+    this->reset ();
+    m_kind = TARGET_WAITKIND_THREAD_CLONED;
+    m_value.child_ptid = child_ptid;
+    return *this;
+  }
+
   target_waitstatus &set_thread_created ()
   {
     this->reset ();
@@ -369,8 +397,7 @@ struct target_waitstatus
 
   ptid_t child_ptid () const
   {
-    gdb_assert (m_kind == TARGET_WAITKIND_FORKED
-		|| m_kind == TARGET_WAITKIND_VFORKED);
+    gdb_assert (is_new_child_status (m_kind));
 
     return m_value.child_ptid;
  }