Added an "ignore" argument to the circular_includes script. This
allows the caller to specify files for the script to ignore when
it parses the source file. Rather than creating a special
ignore case for "kernel_all.c" in the script itself, the user
passes the file as an argument (plus others if needed). Updated
the kernel's cmake file to reflect the change.
This patch moves the 'outer' chunk of lockTLBEntry into C rather
than handwritten assembly. The outer chunk accesses a global
counter and does arithmetic. The inner chunk (lockTLBEntryCritical)
writes to the registers, must be specially aligned, and is otherwise
special enough to remain in handwritten assembly.
The change reduces unnecessary handwritten assembly, and also avoids
a special case that was problematic for binary verification.
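The shape of the split can be sketched in C; the bodies below are hypothetical stand-ins, not the actual seL4 code, and the "inner" routine here merely records its arguments instead of touching hardware:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t word_t;

/* Hypothetical stand-in for the aligned, handwritten inner routine
 * (lockTLBEntryCritical); here it only records what it was asked to lock. */
static word_t locked_vaddrs[8];

static void lockTLBEntryCritical(word_t vaddr, word_t idx)
{
    locked_vaddrs[idx] = vaddr;
}

/* The 'outer' chunk, now plain C: bump a global counter, do the
 * arithmetic, then hand over to the special inner routine. */
static word_t tlbLockCount = 0;

static void lockTLBEntry(word_t vaddr)
{
    word_t idx = tlbLockCount;
    tlbLockCount++;
    lockTLBEntryCritical(vaddr, idx);
}
```

Keeping only the register writes in assembly leaves the compiler (and the binary verification tooling) to handle the ordinary counter arithmetic.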
It has become clear that the 'packed' GCC attribute affects the
memory semantics of C in a way that the verification tools do not
understand. The bootinfo types are used by kernel boot code (not
currently verified, but covered by binary verification) and should
not use this attribute.
This is a source-compatible but not binary-compatible change.
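To illustrate why the attribute matters, a minimal sketch with a hypothetical bootinfo-like struct (not the real seL4 types): on a typical ABI, packing removes padding, changing both the field offsets (hence the binary incompatibility) and the access patterns the compiler may emit for misaligned fields.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A bootinfo-like type with hypothetical fields. Without 'packed', the
 * compiler inserts padding so each field sits at its natural alignment
 * and ordinary aligned loads/stores can be used. */
typedef struct {
    uint8_t  type;
    uint32_t paddr;
} bi_region_t;

/* With 'packed', paddr may be misaligned; GCC then emits byte-wise
 * accesses, a memory semantics the verification tools do not model. */
typedef struct __attribute__((packed)) {
    uint8_t  type;
    uint32_t paddr;
} bi_region_packed_t;
```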
Added small bash scripts to run astyle, pylint and xmllint
checks over the kernel source. These style checks were ported
from the old Make build system.
Leaves the last entry in the top level page table free so that it can be used for mapping
devices in the future. This moves the kernel image down to the second last entry in the
top level page table. Leaving the last entry in the top level page table also matches the
rv64 design.
Only a single level 2 page table is now used for mapping the kernel image, so the
state data is simplified to allocate a single PT, and the now out-of-date description
is removed.
Makes it more explicit that the extra window at KERNEL_BASE for the kernel image
is only 1GiB, and that the next GiB is reserved for the future, when RISC-V platforms
have devices that need to be memory mapped.
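The layout can be sketched with hypothetical numbers (a 512-entry top-level table with 1GiB per entry; the real constants depend on the configuration):

```c
#include <assert.h>
#include <stdint.h>

#define PT_ENTRIES 512
#define LVL1_SHIFT 30  /* each top-level entry covers 1 GiB */

/* Hypothetical slot assignment: kernel image in the second-last
 * top-level entry, last entry kept free for future device mappings. */
#define KERNEL_IMAGE_SLOT (PT_ENTRIES - 2)
#define DEVICE_SLOT       (PT_ENTRIES - 1)

/* Index into the top-level table for a given virtual address. */
static inline unsigned lvl1_index(uint64_t vaddr)
{
    return (vaddr >> LVL1_SHIFT) & (PT_ENTRIES - 1);
}
```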
This instruction is required when more than one thread exists with
different ASIDs; without it, the system locks up after the first
context switch when running on hardware.
Issue first noticed and fixed by Hesham Almatary
<Hesham.Almatary@cl.cam.ac.uk>
Change-Id: I6eb64df6b584ff7de79c8af30b28bbc7bb234643
Updated the map_kernel_window function to aid in mapping kernel
memory in 2MiB page tables when the memory addresses aren't
aligned to 1GiB boundaries.
This is needed for platforms with less than 1GiB of memory or
for memory regions that aren't aligned to 1GiB boundaries.
Co-authored-by: Chris Guikema <chris.guikema@dornerworks.com>
Change-Id: I084f82c69f05928dc4fd602d053955e51fd02a4d
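The alignment handling can be sketched as follows; the function name is hypothetical and counters stand in for the real PTE writes:

```c
#include <assert.h>
#include <stdint.h>

#define GIB  (1ULL << 30)
#define MIB2 (1ULL << 21)

/* Hypothetical counters standing in for actual PTE writes. */
static unsigned n_1g_mappings, n_2m_mappings;

/* Sketch of the logic: use a 1GiB entry while both the virtual address
 * and the remaining size allow it, otherwise fall back to 2MiB entries
 * in a second-level table. Assumes vaddr and size are 2MiB-aligned. */
static void map_kernel_region(uint64_t vaddr, uint64_t size)
{
    uint64_t end = vaddr + size;
    while (vaddr < end) {
        if ((vaddr % GIB) == 0 && end - vaddr >= GIB) {
            n_1g_mappings++;   /* would write a 1GiB top-level entry */
            vaddr += GIB;
        } else {
            n_2m_mappings++;   /* would write a 2MiB second-level entry */
            vaddr += MIB2;
        }
    }
}
```

A 256MiB region, for example, is too small for any 1GiB entry and is covered entirely by 128 second-level mappings.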
The isPTEPageTable function was moved to the top of the vspace source
file so that it could be used in the map_kernel_window function.
Change-Id: If9741f8d546a6e102d0f52466a6361178500f71a
This uses a one dimensional page table for the first level
and a two dimensional array for the second level such that
in a worst case scenario, the entire kernel region could
be mapped using second level tables.
Co-authored-by: Chris Guikema <chris.guikema@dornerworks.com>
Change-Id: Iad62303a0d7c2321d6038ca718888100614f91db
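The array shape can be sketched like this (names and sizes hypothetical; in the worst case, every first-level slot covering the kernel region needs its own second-level table):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t pte_t;

#define PT_ENTRIES      512
/* Hypothetical kernel-region size in first-level slots. */
#define KERNEL_L1_SLOTS 4

/* One-dimensional first level... */
pte_t kernel_root_pt[PT_ENTRIES];

/* ...and a two-dimensional second level: one 512-entry table per
 * first-level slot, enough to map the whole region with second-level
 * entries if no first-level entry can be used. */
pte_t kernel_l2_pts[KERNEL_L1_SLOTS][PT_ENTRIES];
```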
This change is required because the zedboard rocket-chip only has
256MiB of memory. Therefore the load address needs to be lowered
to fit in the available range.
This change will also require the kernel to be mapped with 2MiB
granularity so everything is properly page aligned.
Change-Id: I75ddec0be1bb2fd05d0a947ea19bce46e2cd9f96
These registers are part of the 'regular' TCB state and are saved and restored as part
of normal thread switching. As such, keeping a duplicate copy of the value of these
registers is contradictory, especially as that copy is not kept in sync with the version
in the TCB, which is what is actually loaded into the hardware.
Threads that have a VCPU, and hence might be running in supervisor mode, probably don't
care about the IPC buffer and would rather their registers contain the values they expect.
This register can be modified by the supervisor mode thread attached to a VCPU and we
should be saving and restoring it. The necessity of doing this was revealed when the
kernel started allowing TPIDRURO to be used for TLS_BASE, causing the register to be
overwritten if we switch away from a VCPU and then back to it.
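A minimal sketch of the save/restore, with a simulated variable standing in for the real register access (all names hypothetical):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t word_t;

/* Stand-in for the hardware register; real code would use mrc/mcr. */
static word_t hw_tpidruro;

typedef struct {
    word_t tpidruro;
} vcpu_t;

/* Save the guest's TPIDRURO when switching away from a VCPU... */
static void vcpu_save(vcpu_t *vcpu)
{
    vcpu->tpidruro = hw_tpidruro;
}

/* ...and restore it when switching back, so an intervening native
 * thread using TPIDRURO as its TLS_BASE cannot clobber it. */
static void vcpu_restore(vcpu_t *vcpu)
{
    hw_tpidruro = vcpu->tpidruro;
}
```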
Defines TLS_BASE to be the TP register. Currently the TP register already holds the
location of the IPC buffer, so a user thread should not set a value for TLS_BASE
unless it has its own way to find its IPC buffer.
This provides a common invocation for all architectures for setting their respective
TLS_BASE virtual register. As you frequently want to modify your *own* TLS_BASE, and
using the read/write-registers invocations to modify your own registers is tricky to
impossible, depending on which register it is and how the registers are ordered in
seL4_UserContext, this is a separate invocation.
This commit provides a universal TLS_BASE virtual register on ARM, similar to the one
that exists on x86. Depending on the precise configuration, this virtual register maps
to a different hardware register:
* aarch64: TPIDRURW is used for the TLS_BASE and is already declared and being saved
and restored on context switches, so this just adds TLS_BASE as an alias of it
* armv6: Has no hardware register available for a TLS_BASE, so the virtual register
gets stored in the globals frame
* armv7+: TPIDRURO is used for TLS_BASE, so the restore paths are modified to load
TLS_BASE into it
This takes the logic in the unmap decode function that decides which perform
function to call, treats that logic as part of the perform step, and places it in a
wrapping perform function. As a result, the case where no detailed perform function
applies still results in decode invoking a perform step, instead of doing nothing.
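A sketch of the restructuring, with hypothetical names and counters in place of the real perform functions:

```c
#include <assert.h>

typedef enum { CAP_SMALL_PAGE, CAP_LARGE_PAGE, CAP_UNMAPPED } cap_type_t;

static int small_unmaps, large_unmaps;

static void performSmallPageUnmap(void) { small_unmaps++; }
static void performLargePageUnmap(void) { large_unmaps++; }

/* Hypothetical wrapping perform function: the dispatch that used to
 * live in decode is now part of the perform step, so decode always
 * invokes exactly one perform, even in the nothing-to-do case. */
static void performPageUnmap(cap_type_t t)
{
    switch (t) {
    case CAP_SMALL_PAGE:
        performSmallPageUnmap();
        break;
    case CAP_LARGE_PAGE:
        performLargePageUnmap();
        break;
    default:
        /* not mapped: still a (trivial) perform step */
        break;
    }
}
```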
Prior to this commit the kernel would incorrectly not save the contents
of tpidrurw, as CONFIG_IPC_BUF_TPIDRURW was incorrectly set. This change
fixes the cmake config to only allow this option on aarch32 builds.
Previously arm code assumed that either CONFIG_IPC_BUF_TPIDRURW or
CONFIG_IPC_BUF_GLOBALS_FRAME needed to be set. Given that neither of
these options are required for aarch64, remove this assumption and only
guard code with #ifdefs where required.
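The resulting guard pattern can be sketched like so; the config macro name is taken from the text above, but the surrounding function is a hypothetical stand-in:

```c
#include <assert.h>

/* Hypothetical model of an aarch64 build, where neither
 * CONFIG_IPC_BUF_TPIDRURW nor CONFIG_IPC_BUF_GLOBALS_FRAME is set.
 * Uncomment to model an aarch32 build using TPIDRURW instead. */
/* #define CONFIG_IPC_BUF_TPIDRURW 1 */

/* Guard only where required: there is no #else falling through to the
 * globals frame, since that would reintroduce the old either/or
 * assumption that does not hold on aarch64. */
static int ipc_buf_in_tpidrurw(void)
{
#ifdef CONFIG_IPC_BUF_TPIDRURW
    return 1;
#else
    return 0;
#endif
}
```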