Implement TARGET_OBJECT_STACK_MEMORY.

gdb/
* NEWS: Add note on new "set stack-cache" option.
* corefile.c (read_stack): New function.
* dcache.c (dcache_struct): New member ptid.
(dcache_enable_p): Mark as obsolete.
(show_dcache_enabled_p): Flag option as deprecated.
(dcache_invalidate): Update ptid.
(dcache_invalidate_line): New function.
(dcache_read_line): No longer check cacheable attribute, stack
accesses get cached despite attribute.
(dcache_init): Set ptid.
(dcache_xfer_memory): Flush cache if from different ptid than before.
Update cache after write.
(dcache_update): New function.
(dcache_info): Report ptid.
(_initialize_dcache): Update text for `remotecache' to indicate it
is obsolete.
* dcache.h (dcache_update): Declare.
* dwarf2loc.c (dwarf2_evaluate_loc_desc): Mark values on stack with
set_value_stack.
* frame-unwind.c (frame_unwind_got_memory): Ditto.
* gdbcore.h (read_stack): Declare.
* memattr.c (mem_enable_command): Call target_dcache_invalidate
instead of dcache_invalidate.
(mem_disable_command, mem_delete_command): Ditto.
* target.c (stack_cache_enabled_p_1): New static global.
(stack_cache_enabled_p): New static global.
(set_stack_cache_enabled_p): New function.
(show_stack_cache_enabled_p): New function.
(target_dcache): Make static.
(target_dcache_invalidate): New function.
(target_load, target_resume): Call target_dcache_invalidate
instead of dcache_invalidate.
(memory_xfer_partial): New arg object, all callers updated.
Check for existing inferior before calling dcache routines.
When writing non-TARGET_OBJECT_STACK_MEMORY, notify dcache.
(target_xfer_partial): Call memory_xfer_partial for
TARGET_OBJECT_STACK_MEMORY.
(target_read_stack): New function.
(initialize_targets): Install new option `stack-cache'.
* target.h: Remove #include of dcache.h.
(enum target_object): New value TARGET_OBJECT_STACK_MEMORY.
(target_dcache): Delete.
(target_dcache_invalidate): Declare.
(target_read_stack): Declare.
* top.c (prepare_execute_command): New function.
(execute_command): Call prepare_execute_command instead of
free_all_values.
* top.h (prepare_execute_command): Declare.
* valops.c (get_value_at): New function.
(value_at): Guts moved to get_value_at.
(value_at_lazy): Similarly.
(value_fetch_lazy): Call read_stack for stack values.
* value.c (struct value): New member `stack'.
(value_stack, set_value_stack): New functions.
* value.h (value_stack, set_value_stack): Declare.
* mi/mi-main.c (mi_cmd_execute): Call prepare_execute_command
instead of free_all_values.

doc/
* gdb.texinfo (Caching Data of Remote Targets): Update text.
Mark `set/show remotecache' options as obsolete.
Document new `set/show stack-cache' option.
Update text for `info dcache'.
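
The user-visible surface of the change is small: a new `set/show stack-cache' pair, an obsoleted `remotecache', and a slightly richer `info dcache'. An illustrative session follows; the reply strings are taken from the show callbacks added in the diff below, and the defaults match the patch (stack-cache on, remotecache off):

(gdb) show stack-cache
Cache use for stack accesses is on.
(gdb) set stack-cache off
(gdb) show stack-cache
Cache use for stack accesses is off.
(gdb) show remotecache
Deprecated remotecache flag is off.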
--- a/gdb/ChangeLog
+++ b/gdb/ChangeLog
@@ -1,3 +1,65 @@
+2009-08-31 Jacob Potter <jdpotter@google.com>
+Doug Evans <dje@google.com>
+
+Implement TARGET_OBJECT_STACK_MEMORY.
+* NEWS: Add note on new "set stack-cache" option.
+* corefile.c (read_stack): New function.
+* dcache.c (dcache_struct): New member ptid.
+(dcache_enable_p): Mark as obsolete.
+(show_dcache_enabled_p): Flag option as deprecated.
+(dcache_invalidate): Update ptid.
+(dcache_invalidate_line): New function.
+(dcache_read_line): No longer check cacheable attribute, stack
+accesses get cached despite attribute.
+(dcache_init): Set ptid.
+(dcache_xfer_memory): Flush cache if from different ptid than before.
+Update cache after write.
+(dcache_update): New function.
+(dcache_info): Report ptid.
+(_initialize_dcache): Update text for `remotecache' to indicate it
+is obsolete.
+* dcache.h (dcache_update): Declare.
+* dwarf2loc.c (dwarf2_evaluate_loc_desc): Mark values on stack with
+set_value_stack.
+* frame-unwind.c (frame_unwind_got_memory): Ditto.
+* gdbcore.h (read_stack): Declare.
+* memattr.c (mem_enable_command): Call target_dcache_invalidate
+instead of dcache_invalidate.
+(mem_disable_command, mem_delete_command): Ditto.
+* target.c (stack_cache_enabled_p_1): New static global.
+(stack_cache_enabled_p): New static global.
+(set_stack_cache_enabled_p): New function.
+(show_stack_cache_enabled_p): New function.
+(target_dcache): Make static.
+(target_dcache_invalidate): New function.
+(target_load, target_resume): Call target_dcache_invalidate
+instead of dcache_invalidate.
+(memory_xfer_partial): New arg object, all callers updated.
+Check for existing inferior before calling dcache routines.
+When writing non-TARGET_OBJECT_STACK_MEMORY, notify dcache.
+(target_xfer_partial): Call memory_xfer_partial for
+TARGET_OBJECT_STACK_MEMORY.
+(target_read_stack): New function.
+(initialize_targets): Install new option `stack-cache'.
+* target.h: Remove #include of dcache.h.
+(enum target_object): New value TARGET_OBJECT_STACK_MEMORY.
+(target_dcache): Delete.
+(target_dcache_invalidate): Declare.
+(target_read_stack): Declare.
+* top.c (prepare_execute_command): New function.
+(execute_command): Call prepare_execute_command
+instead of free_all_values.
+* top.h (prepare_execute_command): Declare.
+* valops.c (get_value_at): New function.
+(value_at): Guts moved to get_value_at.
+(value_at_lazy): Similarly.
+(value_fetch_lazy): Call read_stack for stack values.
+* value.c (struct value): New member `stack'.
+(value_stack, set_value_stack): New functions.
+* value.h (value_stack, set_value_stack): Declare.
+* mi/mi-main.c (mi_cmd_execute): Call prepare_execute_command
+instead of free_all_values.
+
 2009-08-29 Hui Zhu <teawater@gmail.com>
 
 * i386-tdep.c (i386_process_record): Fix the error of string
--- a/gdb/NEWS  (6 lines changed)
+++ b/gdb/NEWS
@@ -394,6 +394,12 @@ show schedule-multiple
 Allow GDB to resume all threads of all processes or only threads of
 the current process.
 
+set stack-cache
+show stack-cache
+Use more aggressive caching for accesses to the stack.  This improves
+performance of remote debugging (particularly backtraces) without
+affecting correctness.
+
 * Removed commands
 
 info forks
--- a/gdb/corefile.c
+++ b/gdb/corefile.c
@@ -228,6 +228,7 @@ memory_error (int status, CORE_ADDR memaddr)
 }
 
 /* Same as target_read_memory, but report an error if can't read.  */
+
 void
 read_memory (CORE_ADDR memaddr, gdb_byte *myaddr, int len)
 {
@@ -237,6 +238,17 @@ read_memory (CORE_ADDR memaddr, gdb_byte *myaddr, int len)
     memory_error (status, memaddr);
 }
 
+/* Same as target_read_stack, but report an error if can't read.  */
+
+void
+read_stack (CORE_ADDR memaddr, gdb_byte *myaddr, int len)
+{
+  int status;
+  status = target_read_stack (memaddr, myaddr, len);
+  if (status != 0)
+    memory_error (status, memaddr);
+}
+
 /* Argument / return result struct for use with
    do_captured_read_memory_integer().  MEMADDR and LEN are filled in
    by gdb_read_memory_integer().  RESULT is the contents that were
--- a/gdb/dcache.c  (88 lines changed)
+++ b/gdb/dcache.c
@@ -24,6 +24,7 @@
 #include "gdb_string.h"
 #include "gdbcore.h"
 #include "target.h"
+#include "inferior.h"
 #include "splay-tree.h"
 
 /* The data cache could lead to incorrect results because it doesn't
@@ -103,6 +104,9 @@ struct dcache_struct
 
   /* The number of in-use lines in the cache.  */
   int size;
+
+  /* The ptid of last inferior to use cache or null_ptid.  */
+  ptid_t ptid;
 };
 
 static struct dcache_block *dcache_hit (DCACHE *dcache, CORE_ADDR addr);
@@ -117,16 +121,15 @@ static void dcache_info (char *exp, int tty);
 
 void _initialize_dcache (void);
 
-static int dcache_enabled_p = 0;
+static int dcache_enabled_p = 0; /* OBSOLETE */
 
 static void
 show_dcache_enabled_p (struct ui_file *file, int from_tty,
                        struct cmd_list_element *c, const char *value)
 {
-  fprintf_filtered (file, _("Cache use for remote targets is %s.\n"), value);
+  fprintf_filtered (file, _("Deprecated remotecache flag is %s.\n"), value);
 }
 
-
 static DCACHE *last_cache; /* Used by info dcache */
 
 /* Free all the data cache blocks, thus discarding all cached data.  */
@@ -152,6 +155,23 @@ dcache_invalidate (DCACHE *dcache)
   dcache->oldest = NULL;
   dcache->newest = NULL;
   dcache->size = 0;
+  dcache->ptid = null_ptid;
+}
+
+/* Invalidate the line associated with ADDR.  */
+
+static void
+dcache_invalidate_line (DCACHE *dcache, CORE_ADDR addr)
+{
+  struct dcache_block *db = dcache_hit (dcache, addr);
+
+  if (db)
+    {
+      splay_tree_remove (dcache->tree, (splay_tree_key) db->addr);
+      db->newer = dcache->freelist;
+      dcache->freelist = db;
+      --dcache->size;
+    }
 }
 
 /* If addr is present in the dcache, return the address of the block
@@ -198,8 +218,9 @@ dcache_read_line (DCACHE *dcache, struct dcache_block *db)
       else
         reg_len = region->hi - memaddr;
 
-      /* Skip non-cacheable/non-readable regions.  */
-      if (!region->attrib.cache || region->attrib.mode == MEM_WO)
+      /* Skip non-readable regions.  The cache attribute can be ignored,
+         since we may be loading this for a stack access.  */
+      if (region->attrib.mode == MEM_WO)
         {
           memaddr += reg_len;
           myaddr += reg_len;
@@ -296,7 +317,7 @@ dcache_peek_byte (DCACHE *dcache, CORE_ADDR addr, gdb_byte *ptr)
    an area of memory which wasn't present in the cache doesn't cause
    it to be loaded in.
 
-   Always return 1 to simplify dcache_xfer_memory.  */
+   Always return 1 (meaning success) to simplify dcache_xfer_memory.  */
 
 static int
 dcache_poke_byte (DCACHE *dcache, CORE_ADDR addr, gdb_byte *ptr)
@@ -338,6 +359,7 @@ dcache_init (void)
   dcache->newest = NULL;
   dcache->freelist = NULL;
   dcache->size = 0;
+  dcache->ptid = null_ptid;
   last_cache = dcache;
 
   return dcache;
@@ -366,7 +388,7 @@ dcache_free (DCACHE *dcache)
    to or from debugger address MYADDR.  Write to inferior if SHOULD_WRITE is
    nonzero.
 
-   Returns length of data written or read; 0 for error.  */
+   The meaning of the result is the same as for target_write.  */
 
 int
 dcache_xfer_memory (struct target_ops *ops, DCACHE *dcache,
@@ -378,6 +400,15 @@ dcache_xfer_memory (struct target_ops *ops, DCACHE *dcache,
   int (*xfunc) (DCACHE *dcache, CORE_ADDR addr, gdb_byte *ptr);
   xfunc = should_write ? dcache_poke_byte : dcache_peek_byte;
 
+  /* If this is a different inferior from what we've recorded,
+     flush the cache.  */
+
+  if (! ptid_equal (inferior_ptid, dcache->ptid))
+    {
+      dcache_invalidate (dcache);
+      dcache->ptid = inferior_ptid;
+    }
+
   /* Do write-through first, so that if it fails, we don't write to
      the cache at all.  */
 
@@ -385,14 +416,25 @@ dcache_xfer_memory (struct target_ops *ops, DCACHE *dcache,
     {
       res = target_write (ops, TARGET_OBJECT_RAW_MEMORY,
                           NULL, myaddr, memaddr, len);
-      if (res < len)
-        return 0;
+      if (res <= 0)
+        return res;
+      /* Update LEN to what was actually written.  */
+      len = res;
     }
 
   for (i = 0; i < len; i++)
     {
       if (!xfunc (dcache, memaddr + i, myaddr + i))
-        return 0;
+        {
+          /* That failed.  Discard its cache line so we don't have a
+             partially read line.  */
+          dcache_invalidate_line (dcache, memaddr + i);
+          /* If we're writing, we still wrote LEN bytes.  */
+          if (should_write)
+            return len;
+          else
+            return i;
+        }
     }
 
   return len;
@@ -407,6 +449,18 @@ dcache_xfer_memory (struct target_ops *ops, DCACHE *dcache,
    "logically" connected but not actually a single call to one of the
    memory transfer functions.  */
 
+/* Just update any cache lines which are already present.  This is called
+   by memory_xfer_partial in cases where the access would otherwise not go
+   through the cache.  */
+
+void
+dcache_update (DCACHE *dcache, CORE_ADDR memaddr, gdb_byte *myaddr, int len)
+{
+  int i;
+  for (i = 0; i < len; i++)
+    dcache_poke_byte (dcache, memaddr + i, myaddr + i);
+}
+
 static void
 dcache_print_line (int index)
 {
@@ -474,12 +528,15 @@ dcache_info (char *exp, int tty)
   printf_filtered (_("Dcache line width %d, maximum size %d\n"),
                    LINE_SIZE, DCACHE_SIZE);
 
-  if (!last_cache)
+  if (!last_cache || ptid_equal (last_cache->ptid, null_ptid))
     {
       printf_filtered (_("No data cache available.\n"));
       return;
     }
 
+  printf_filtered (_("Contains data for %s\n"),
+                   target_pid_to_str (last_cache->ptid));
+
   refcount = 0;
 
   n = splay_tree_min (last_cache->tree);
@@ -507,11 +564,10 @@ _initialize_dcache (void)
                            &dcache_enabled_p, _("\
 Set cache use for remote targets."), _("\
 Show cache use for remote targets."), _("\
-When on, use data caching for remote targets.  For many remote targets\n\
-this option can offer better throughput for reading target memory.\n\
-Unfortunately, gdb does not currently know anything about volatile\n\
-registers and thus data caching will produce incorrect results with\n\
-volatile registers are in use.  By default, this option is off."),
+This used to enable the data cache for remote targets.  The cache\n\
+functionality is now controlled by the memory region system and the\n\
+\"stack-cache\" flag; \"remotecache\" now does nothing and\n\
+exists only for compatibility reasons."),
                            NULL,
                            show_dcache_enabled_p,
                            &setlist, &showlist);
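
The caching policy dcache.c ends up with can be summed up in three rules: flush everything when the debugged process (ptid) changes, write through to the target before touching the cache, and, for dcache_update-style notifications, refresh only lines that are already cached. The following standalone C sketch illustrates those rules; it is not GDB code, and the names, sizes, and direct-mapped layout are invented for the demo:

#include <stdio.h>
#include <string.h>

#define LINES     4
#define LINE_SIZE 16

struct line { int valid; unsigned addr; unsigned char data[LINE_SIZE]; };

static struct line cache[LINES];
static unsigned char target_mem[256];   /* stand-in for the inferior's memory */
static int cache_owner = -1;            /* stand-in for dcache->ptid */

static void cache_flush (void) { memset (cache, 0, sizeof cache); }

static struct line *cache_lookup (unsigned addr)
{
  struct line *l = &cache[(addr / LINE_SIZE) % LINES];
  return (l->valid && l->addr == addr - addr % LINE_SIZE) ? l : NULL;
}

static struct line *cache_fill (unsigned addr)
{
  struct line *l = &cache[(addr / LINE_SIZE) % LINES];
  l->valid = 1;
  l->addr = addr - addr % LINE_SIZE;
  memcpy (l->data, target_mem + l->addr, LINE_SIZE);
  return l;
}

/* Mirrors the ptid check: a different "inferior" invalidates everything
   before the access proceeds.  */
static void cache_check_owner (int owner)
{
  if (owner != cache_owner)
    {
      cache_flush ();
      cache_owner = owner;
    }
}

/* Write-through write: target memory first, then the cached copy.  */
static void cached_write (int owner, unsigned addr, unsigned char byte)
{
  struct line *l;

  cache_check_owner (owner);
  target_mem[addr] = byte;
  l = cache_lookup (addr);
  if (l == NULL)
    l = cache_fill (addr);
  l->data[addr % LINE_SIZE] = byte;
}

/* Mirrors dcache_update: refresh only lines already present, never load
   new ones.  Used for writes that bypass the cached path.  */
static void cache_update_only (unsigned addr, unsigned char byte)
{
  struct line *l = cache_lookup (addr);
  if (l != NULL)
    l->data[addr % LINE_SIZE] = byte;
}

int main (void)
{
  cached_write (1, 0x10, 0xaa);      /* loads the line, writes through */
  cache_update_only (0x10, 0xbb);    /* line present -> updated        */
  cache_update_only (0x40, 0xcc);    /* line absent  -> not loaded     */
  printf ("line for 0x10 cached: %d\n", cache_lookup (0x10) != NULL);
  printf ("line for 0x40 cached: %d\n", cache_lookup (0x40) != NULL);
  cached_write (2, 0x20, 0x01);      /* new owner -> cache flushed     */
  printf ("line for 0x10 after owner switch: %d\n", cache_lookup (0x10) != NULL);
  return 0;
}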
--- a/gdb/dcache.h
+++ b/gdb/dcache.h
@@ -38,4 +38,7 @@ void dcache_free (DCACHE *);
 int dcache_xfer_memory (struct target_ops *ops, DCACHE *cache, CORE_ADDR mem,
                         gdb_byte *my, int len, int should_write);
 
+void dcache_update (DCACHE *dcache, CORE_ADDR memaddr, gdb_byte *myaddr,
+                    int len);
+
 #endif /* DCACHE_H */
--- a/gdb/doc/ChangeLog
+++ b/gdb/doc/ChangeLog
@@ -1,3 +1,11 @@
+2009-08-31 Jacob Potter <jdpotter@google.com>
+Doug Evans <dje@google.com>
+
+* gdb.texinfo (Caching Data of Remote Targets): Update text.
+Mark `set/show remotecache' options as obsolete.
+Document new `set/show stack-cache' option.
+Update text for `info dcache'.
+
 2009-08-27 Doug Evans <dje@google.com>
 
 * gdb.texinfo (Symbols): Delete `set print symbol-loading'.
--- a/gdb/doc/gdb.texinfo
+++ b/gdb/doc/gdb.texinfo
@@ -8421,32 +8421,47 @@ character.
 @section Caching Data of Remote Targets
 @cindex caching data of remote targets
 
-@value{GDBN} can cache data exchanged between the debugger and a
+@value{GDBN} caches data exchanged between the debugger and a
 remote target (@pxref{Remote Debugging}).  Such caching generally improves
 performance, because it reduces the overhead of the remote protocol by
-bundling memory reads and writes into large chunks.  Unfortunately,
-@value{GDBN} does not currently know anything about volatile
-registers, and thus data caching will produce incorrect results when
-volatile registers are in use.
+bundling memory reads and writes into large chunks.  Unfortunately, simply
+caching everything would lead to incorrect results, since @value{GDBN}
+does not necessarily know anything about volatile values, memory-mapped I/O
+addresses, etc.  Therefore, by default, @value{GDBN} only caches data
+known to be on the stack.  Other regions of memory can be explicitly marked
+cacheable; see @pxref{Memory Region Attributes}.
 
 @table @code
 @kindex set remotecache
 @item set remotecache on
 @itemx set remotecache off
-Set caching state for remote targets.  When @code{ON}, use data
-caching.  By default, this option is @code{OFF}.
+This option no longer does anything; it exists for compatibility
+with old scripts.
 
 @kindex show remotecache
 @item show remotecache
-Show the current state of data caching for remote targets.
+Show the current state of the obsolete remotecache flag.
+
+@kindex set stack-cache
+@item set stack-cache on
+@itemx set stack-cache off
+Enable or disable caching of stack accesses.  When @code{ON}, use
+caching.  By default, this option is @code{ON}.
+
+@kindex show stack-cache
+@item show stack-cache
+Show the current state of data caching for memory accesses.
 
 @kindex info dcache
-@item info dcache
+@item info dcache @r{[}line@r{]}
 Print the information about the data cache performance.  The
-information displayed includes: the dcache width and depth; and for
-each cache line, how many times it was referenced, and its data and
-state (invalid, dirty, valid).  This command is useful for debugging
-the data cache operation.
+information displayed includes the dcache width and depth, and for
+each cache line, its number, address, and how many times it was
+referenced.  This command is useful for debugging the data cache
+operation.
+
+If a line number is specified, the contents of that line will be
+printed in hex.
 @end table
 
 @node Searching Memory
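
The manual text above defers everything that is not stack memory to Memory Region Attributes. A minimal session combining the two mechanisms might look like the following (the addresses and the region number are made up for the example):

(gdb) set stack-cache on
(gdb) mem 0x20000000 0x20001fff rw cache
(gdb) disable mem 1

Disabling, enabling, or deleting a region flushes the shared data cache, which is why memattr.c below now calls target_dcache_invalidate rather than poking the cache directly.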
--- a/gdb/dwarf2loc.c
+++ b/gdb/dwarf2loc.c
@@ -280,6 +280,7 @@ dwarf2_evaluate_loc_desc (struct symbol *var, struct frame_info *frame,
       retval = allocate_value (SYMBOL_TYPE (var));
       VALUE_LVAL (retval) = lval_memory;
       set_value_lazy (retval, 1);
+      set_value_stack (retval, 1);
       set_value_address (retval, address);
     }
 
--- a/gdb/frame-unwind.c
+++ b/gdb/frame-unwind.c
@@ -153,8 +153,10 @@ struct value *
 frame_unwind_got_memory (struct frame_info *frame, int regnum, CORE_ADDR addr)
 {
   struct gdbarch *gdbarch = frame_unwind_arch (frame);
+  struct value *v = value_at_lazy (register_type (gdbarch, regnum), addr);
 
-  return value_at_lazy (register_type (gdbarch, regnum), addr);
+  set_value_stack (v, 1);
+  return v;
 }
 
 /* Return a value which indicates that FRAME's saved version of
--- a/gdb/gdbcore.h
+++ b/gdb/gdbcore.h
@@ -47,6 +47,10 @@ extern void memory_error (int status, CORE_ADDR memaddr);
 
 extern void read_memory (CORE_ADDR memaddr, gdb_byte *myaddr, int len);
 
+/* Like target_read_stack, but report an error if can't read.  */
+
+extern void read_stack (CORE_ADDR memaddr, gdb_byte *myaddr, int len);
+
 /* Read an integer from debugged memory, given address and number of
    bytes.  */
 
--- a/gdb/memattr.c
+++ b/gdb/memattr.c
@@ -571,7 +571,7 @@ mem_enable_command (char *args, int from_tty)
 
   require_user_regions (from_tty);
 
-  dcache_invalidate (target_dcache);
+  target_dcache_invalidate ();
 
   if (p == 0)
     {
@@ -625,7 +625,7 @@ mem_disable_command (char *args, int from_tty)
 
   require_user_regions (from_tty);
 
-  dcache_invalidate (target_dcache);
+  target_dcache_invalidate ();
 
   if (p == 0)
     {
@@ -686,7 +686,7 @@ mem_delete_command (char *args, int from_tty)
 
   require_user_regions (from_tty);
 
-  dcache_invalidate (target_dcache);
+  target_dcache_invalidate ();
 
   if (p == 0)
     {
--- a/gdb/mi/mi-main.c
+++ b/gdb/mi/mi-main.c
@@ -1353,7 +1353,8 @@ mi_cmd_execute (struct mi_parse *parse)
   struct cleanup *cleanup;
   int i;
 
-  free_all_values ();
+  prepare_execute_command ();
+
   cleanup = make_cleanup (null_cleanup, NULL);
 
   if (parse->frame != -1 && parse->thread == -1)
--- a/gdb/target.c  (102 lines changed)
+++ b/gdb/target.c
@@ -210,7 +210,45 @@ show_targetdebug (struct ui_file *file, int from_tty,
 
 static void setup_target_debug (void);
 
-DCACHE *target_dcache;
+/* The option sets this.  */
+static int stack_cache_enabled_p_1 = 1;
+/* And set_stack_cache_enabled_p updates this.
+   The reason for the separation is so that we don't flush the cache for
+   on->on transitions.  */
+static int stack_cache_enabled_p = 1;
+
+/* This is called *after* the stack-cache has been set.
+   Flush the cache for off->on and on->off transitions.
+   There's no real need to flush the cache for on->off transitions,
+   except cleanliness.  */
+
+static void
+set_stack_cache_enabled_p (char *args, int from_tty,
+                           struct cmd_list_element *c)
+{
+  if (stack_cache_enabled_p != stack_cache_enabled_p_1)
+    target_dcache_invalidate ();
+
+  stack_cache_enabled_p = stack_cache_enabled_p_1;
+}
+
+static void
+show_stack_cache_enabled_p (struct ui_file *file, int from_tty,
+                            struct cmd_list_element *c, const char *value)
+{
+  fprintf_filtered (file, _("Cache use for stack accesses is %s.\n"), value);
+}
+
+/* Cache of memory operations, to speed up remote access.  */
+static DCACHE *target_dcache;
+
+/* Invalidate the target dcache.  */
+
+void
+target_dcache_invalidate (void)
+{
+  dcache_invalidate (target_dcache);
+}
 
 /* The user just typed 'target' without the name of a target.  */
 
@@ -413,7 +451,7 @@ target_kill (void)
 void
 target_load (char *arg, int from_tty)
 {
-  dcache_invalidate (target_dcache);
+  target_dcache_invalidate ();
   (*current_target.to_load) (arg, from_tty);
 }
 
@@ -1143,12 +1181,14 @@ target_section_by_addr (struct target_ops *target, CORE_ADDR addr)
    value are just as for target_xfer_partial.  */
 
 static LONGEST
-memory_xfer_partial (struct target_ops *ops, void *readbuf, const void *writebuf,
-                     ULONGEST memaddr, LONGEST len)
+memory_xfer_partial (struct target_ops *ops, enum target_object object,
+                     void *readbuf, const void *writebuf, ULONGEST memaddr,
+                     LONGEST len)
 {
   LONGEST res;
   int reg_len;
   struct mem_region *region;
+  struct inferior *inf;
 
   /* Zero length requests are ok and require no work.  */
   if (len == 0)
@@ -1223,7 +1263,11 @@ memory_xfer_partial (struct target_ops *ops, void *readbuf, const void *writebuf
       return -1;
     }
 
-  if (region->attrib.cache)
+  inf = find_inferior_pid (ptid_get_pid (inferior_ptid));
+
+  if (inf != NULL
+      && (region->attrib.cache
+          || (stack_cache_enabled_p && object == TARGET_OBJECT_STACK_MEMORY)))
     {
       if (readbuf != NULL)
         res = dcache_xfer_memory (ops, target_dcache, memaddr, readbuf,
@@ -1245,6 +1289,19 @@ memory_xfer_partial (struct target_ops *ops, void *readbuf, const void *writebuf
         }
     }
 
+  /* Make sure the cache gets updated no matter what - if we are writing
+     to the stack, even if this write is not tagged as such, we still need
+     to update the cache.  */
+
+  if (inf != NULL
+      && readbuf == NULL
+      && !region->attrib.cache
+      && stack_cache_enabled_p
+      && object != TARGET_OBJECT_STACK_MEMORY)
+    {
+      dcache_update (target_dcache, memaddr, (void *) writebuf, reg_len);
+    }
+
   /* If none of those methods found the memory we wanted, fall back
      to a target partial transfer.  Normally a single call to
      to_xfer_partial is enough; if it doesn't recognize an object
@@ -1308,8 +1365,9 @@ target_xfer_partial (struct target_ops *ops,
   /* If this is a memory transfer, let the memory-specific code
      have a look at it instead.  Memory transfers are more
      complicated.  */
-  if (object == TARGET_OBJECT_MEMORY)
-    retval = memory_xfer_partial (ops, readbuf, writebuf, offset, len);
+  if (object == TARGET_OBJECT_MEMORY || object == TARGET_OBJECT_STACK_MEMORY)
+    retval = memory_xfer_partial (ops, object, readbuf,
+                                  writebuf, offset, len);
   else
     {
       enum target_object raw_object = object;
@@ -1391,6 +1449,23 @@ target_read_memory (CORE_ADDR memaddr, gdb_byte *myaddr, int len)
     return EIO;
 }
 
+/* Like target_read_memory, but specify explicitly that this is a read from
+   the target's stack.  This may trigger different cache behavior.  */
+
+int
+target_read_stack (CORE_ADDR memaddr, gdb_byte *myaddr, int len)
+{
+  /* Dispatch to the topmost target, not the flattened current_target.
+     Memory accesses check target->to_has_(all_)memory, and the
+     flattened target doesn't inherit those.  */
+
+  if (target_read (current_target.beneath, TARGET_OBJECT_STACK_MEMORY, NULL,
+                   myaddr, memaddr, len) == len)
+    return 0;
+  else
+    return EIO;
+}
+
 int
 target_write_memory (CORE_ADDR memaddr, const gdb_byte *myaddr, int len)
 {
@@ -2055,7 +2130,7 @@ target_resume (ptid_t ptid, int step, enum target_signal signal)
 {
   struct target_ops *t;
 
-  dcache_invalidate (target_dcache);
+  target_dcache_invalidate ();
 
   for (t = current_target.beneath; t != NULL; t = t->beneath)
     {
@@ -3479,5 +3554,16 @@ Tells gdb whether to control the inferior in asynchronous mode."),
                            &setlist,
                            &showlist);
 
+  add_setshow_boolean_cmd ("stack-cache", class_support,
+                           &stack_cache_enabled_p, _("\
+Set cache use for stack access."), _("\
+Show cache use for stack access."), _("\
+When on, use the data cache for all stack access, regardless of any\n\
+configured memory regions.  This improves remote performance significantly.\n\
+By default, caching for stack access is on."),
+                           set_stack_cache_enabled_p,
+                           show_stack_cache_enabled_p,
+                           &setlist, &showlist);
+
   target_dcache = dcache_init ();
 }
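
The decision logic memory_xfer_partial implements above boils down to two small predicates: whether the access itself goes through the dcache, and whether a write that bypasses the dcache must still notify it. The sketch below is standalone illustrative C, not GDB code; the enum and function names are invented, and only the boolean conditions mirror the patch:

#include <assert.h>

enum obj { OBJ_MEMORY, OBJ_STACK_MEMORY };

/* Use the dcache for the access itself?  Mirrors:
   inf != NULL && (region cacheable
                   || (stack-cache enabled && object is stack memory)).  */
static int use_dcache (int have_inferior, int region_cacheable,
                       int stack_cache_enabled, enum obj object)
{
  return have_inferior
         && (region_cacheable
             || (stack_cache_enabled && object == OBJ_STACK_MEMORY));
}

/* For a write that bypasses the dcache, still refresh lines already cached
   (dcache_update), so a later stack-tagged read does not see stale data.  */
static int notify_dcache_on_write (int have_inferior, int region_cacheable,
                                   int stack_cache_enabled, enum obj object)
{
  return have_inferior
         && !region_cacheable
         && stack_cache_enabled
         && object != OBJ_STACK_MEMORY;
}

int main (void)
{
  /* Stack-tagged read with stack-cache on: cached.  */
  assert (use_dcache (1, 0, 1, OBJ_STACK_MEMORY));
  /* Plain read of an uncached region: not cached...  */
  assert (!use_dcache (1, 0, 1, OBJ_MEMORY));
  /* ...but a plain write must still be pushed into existing cache lines.  */
  assert (notify_dcache_on_write (1, 0, 1, OBJ_MEMORY));
  /* No live inferior: never touch the cache.  */
  assert (!use_dcache (0, 1, 1, OBJ_STACK_MEMORY));
  return 0;
}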
--- a/gdb/target.h  (10 lines changed)
+++ b/gdb/target.h
@@ -53,7 +53,6 @@ struct target_section_table;
 
 #include "bfd.h"
 #include "symtab.h"
-#include "dcache.h"
 #include "memattr.h"
 #include "vec.h"
 #include "gdb_signals.h"
@@ -203,6 +202,10 @@ enum target_object
      Target implementations of to_xfer_partial never need to handle
      this object, and most callers should not use it.  */
   TARGET_OBJECT_RAW_MEMORY,
+  /* Memory known to be part of the target's stack.  This is cached even
+     if it is not in a region marked as such, since it is known to be
+     "normal" RAM.  */
+  TARGET_OBJECT_STACK_MEMORY,
   /* Kernel Unwind Table.  See "ia64-tdep.c".  */
   TARGET_OBJECT_UNWIND_TABLE,
   /* Transfer auxilliary vector.  */
@@ -671,12 +674,15 @@ extern void target_store_registers (struct regcache *regcache, int regs);
 #define target_supports_multi_process() \
      (*current_target.to_supports_multi_process) ()
 
-extern DCACHE *target_dcache;
+/* Invalidate all target dcaches.  */
+extern void target_dcache_invalidate (void);
 
 extern int target_read_string (CORE_ADDR, char **, int, int *);
 
 extern int target_read_memory (CORE_ADDR memaddr, gdb_byte *myaddr, int len);
 
+extern int target_read_stack (CORE_ADDR memaddr, gdb_byte *myaddr, int len);
+
 extern int target_write_memory (CORE_ADDR memaddr, const gdb_byte *myaddr,
                                 int len);
 
--- a/gdb/top.c  (17 lines changed)
+++ b/gdb/top.c
@@ -345,6 +345,19 @@ do_chdir_cleanup (void *old_dir)
 }
 #endif
 
+void
+prepare_execute_command (void)
+{
+  free_all_values ();
+
+  /* With multiple threads running while the one we're examining is stopped,
+     the dcache can get stale without us being able to detect it.
+     For the duration of the command, though, use the dcache to help
+     things like backtrace.  */
+  if (non_stop)
+    target_dcache_invalidate ();
+}
+
 /* Execute the line P as a command, in the current user context.
    Pass FROM_TTY as second argument to the defining function.  */
 
@@ -374,8 +387,8 @@ execute_command (char *p, int from_tty)
 #endif
         }
     }
 
-  free_all_values ();
+  prepare_execute_command ();
 
   /* Force cleanup of any alloca areas if using C alloca instead of
      a builtin alloca.  */
--- a/gdb/top.h
+++ b/gdb/top.h
@@ -49,6 +49,10 @@ extern void quit_command (char *, int);
 extern int quit_cover (void *);
 extern void execute_command (char *, int);
 
+/* Prepare for execution of a command.
+   Call this before every command, CLI or MI.  */
+extern void prepare_execute_command (void);
+
 /* This function returns a pointer to the string that is used
    by gdb for its command prompt.  */
 extern char *get_prompt (void);
--- a/gdb/valops.c  (59 lines changed)
+++ b/gdb/valops.c
@@ -565,6 +565,32 @@ value_one (struct type *type, enum lval_type lv)
   return val;
 }
 
+/* Helper function for value_at, value_at_lazy, and value_at_lazy_stack.  */
+
+static struct value *
+get_value_at (struct type *type, CORE_ADDR addr, int lazy)
+{
+  struct value *val;
+
+  if (TYPE_CODE (check_typedef (type)) == TYPE_CODE_VOID)
+    error (_("Attempt to dereference a generic pointer."));
+
+  if (lazy)
+    {
+      val = allocate_value_lazy (type);
+    }
+  else
+    {
+      val = allocate_value (type);
+      read_memory (addr, value_contents_all_raw (val), TYPE_LENGTH (type));
+    }
+
+  VALUE_LVAL (val) = lval_memory;
+  set_value_address (val, addr);
+
+  return val;
+}
+
 /* Return a value with type TYPE located at ADDR.
 
    Call value_at only if the data needs to be fetched immediately;
@@ -580,19 +606,7 @@ value_one (struct type *type, enum lval_type lv)
 struct value *
 value_at (struct type *type, CORE_ADDR addr)
 {
-  struct value *val;
-
-  if (TYPE_CODE (check_typedef (type)) == TYPE_CODE_VOID)
-    error (_("Attempt to dereference a generic pointer."));
-
-  val = allocate_value (type);
-
-  read_memory (addr, value_contents_all_raw (val), TYPE_LENGTH (type));
-
-  VALUE_LVAL (val) = lval_memory;
-  set_value_address (val, addr);
-
-  return val;
+  return get_value_at (type, addr, 0);
 }
 
 /* Return a lazy value with type TYPE located at ADDR (cf. value_at).  */
@@ -600,17 +614,7 @@ value_at (struct type *type, CORE_ADDR addr)
 struct value *
 value_at_lazy (struct type *type, CORE_ADDR addr)
 {
-  struct value *val;
-
-  if (TYPE_CODE (check_typedef (type)) == TYPE_CODE_VOID)
-    error (_("Attempt to dereference a generic pointer."));
-
-  val = allocate_value_lazy (type);
-
-  VALUE_LVAL (val) = lval_memory;
-  set_value_address (val, addr);
-
-  return val;
+  return get_value_at (type, addr, 1);
 }
 
 /* Called only from the value_contents and value_contents_all()
@@ -656,7 +660,12 @@ value_fetch_lazy (struct value *val)
       int length = TYPE_LENGTH (check_typedef (value_enclosing_type (val)));
 
       if (length)
-        read_memory (addr, value_contents_all_raw (val), length);
+        {
+          if (value_stack (val))
+            read_stack (addr, value_contents_all_raw (val), length);
+          else
+            read_memory (addr, value_contents_all_raw (val), length);
+        }
     }
   else if (VALUE_LVAL (val) == lval_register)
     {
--- a/gdb/value.c  (16 lines changed)
+++ b/gdb/value.c
@@ -196,6 +196,10 @@ struct value
   /* If value is a variable, is it initialized or not.  */
   int initialized;
 
+  /* If value is from the stack.  If this is set, read_stack will be
+     used instead of read_memory to enable extra caching.  */
+  int stack;
+
   /* Actual contents of the value.  Target byte-order.  NULL or not
      valid if lazy is nonzero.  */
   gdb_byte *contents;
@@ -424,6 +428,18 @@ set_value_lazy (struct value *value, int val)
   value->lazy = val;
 }
 
+int
+value_stack (struct value *value)
+{
+  return value->stack;
+}
+
+void
+set_value_stack (struct value *value, int val)
+{
+  value->stack = val;
+}
+
 const gdb_byte *
 value_contents (struct value *value)
 {
--- a/gdb/value.h
+++ b/gdb/value.h
@@ -215,6 +215,9 @@ extern void *value_computed_closure (struct value *value);
 extern int value_lazy (struct value *);
 extern void set_value_lazy (struct value *value, int val);
 
+extern int value_stack (struct value *);
+extern void set_value_stack (struct value *value, int val);
+
 /* value_contents() and value_contents_raw() both return the address
    of the gdb buffer used to hold a copy of the contents of the lval.
    value_contents() is used when the contents of the buffer are needed