This alters the AArch64 page table generation and mapping code and the MMU
configuration to use page table level 0 in addition to levels 1, 2, and
3. This allows mapping up to 48 bits of address space, which is the
maximum that can be mapped without relying on additional processor
extensions. Mappings are restricted based on the number of physical
address bits that the CPU supports.
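For illustration only (not the code in this change), the number of supported
physical address bits can be derived from ID_AA64MMFR0_EL1.PARange roughly as
follows; the helper name is made up for the example:

    #include <stdint.h>

    /* Map ID_AA64MMFR0_EL1.PARange (bits [3:0]) to physical address bits */
    static inline unsigned example_max_physical_address_bits( void )
    {
      static const uint8_t pa_bits[] = { 32, 36, 40, 42, 44, 48, 52 };
      uint64_t mmfr0;
      uint64_t parange;

      __asm__ volatile ( "mrs %0, ID_AA64MMFR0_EL1" : "=r" ( mmfr0 ) );
      parange = mmfr0 & 0xfU;

      if ( parange >= sizeof( pa_bits ) ) {
        return 32; /* Reserved encoding: fall back to a conservative value */
      }

      return pa_bits[ parange ];
    }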
Sort the .noinit* input sections by name first, then by alignment if two
sections have the same name. This allows begin/end symbols to be placed around
specific areas so that they can be initialized with a special value.
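As a sketch of the intended use, a BSP could delimit one of these areas with
begin/end symbols in its linker script and fill it during early
initialization; the symbol names below are invented for the example:

    #include <string.h>

    extern char example_noinit_area_begin[];
    extern char example_noinit_area_end[];

    static void example_fill_noinit_area( void )
    {
      /* Fill the delimited .noinit area with a recognizable pattern */
      memset(
        example_noinit_area_begin,
        0xa5,
        (size_t) ( example_noinit_area_end - example_noinit_area_begin )
      );
    }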
Update #4678.
Always start the executive in Exception Level 1, Non-Secure mode.
If we boot in EL3 Secure with a GICv3, then we have to initialize
the distributor and redistributor to set up Group 1 Non-Secure (G1NS)
interrupts early in the boot sequence before stepping down from EL3S to EL1NS.
With this change there is no need to distinguish between secure and
non-secure world execution after the primary core boots, so remove the
AARCH64_IS_NONSECURE configuration option.
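For reference, the decision of whether the GICv3 G1NS setup is needed comes
down to the exception level at entry, which can be read from CurrentEL. The
actual start code does this in assembly; the helper below is only a sketch:

    #include <stdbool.h>
    #include <stdint.h>

    static inline bool example_booted_in_el3( void )
    {
      uint64_t current_el;

      __asm__ volatile ( "mrs %0, CurrentEL" : "=r" ( current_el ) );

      /* CurrentEL holds the exception level in bits [3:2] */
      return ( ( current_el >> 2 ) & 0x3U ) == 3U;
    }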
The AArch64 cache implementation does not define
rtems_cache_disable_data(), but claims that it does via
CPU_CACHE_SUPPORT_PROVIDES_DISABLE_DATA. The existing implementation of
_CPU_cache_disable_data() is sufficient to provide this functionality
once the erroneous cache feature flag is removed.
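With the flag removed, the generic cache manager supplies
rtems_cache_disable_data() on top of _CPU_cache_disable_data(), so callers can
use the standard cache manager API. A minimal usage sketch:

    #include <rtems/rtems/cache.h>

    /* Run an operation with the data cache temporarily disabled */
    void example_run_with_data_cache_disabled( void ( *operation )( void ) )
    {
      rtems_cache_disable_data();
      ( *operation )();
      rtems_cache_enable_data();
    }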
Closes #4569.
Debug events should be masked at least until after the first context
switch and should usually be masked until a debugger is attached for
application debugging.
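For reference, debug exceptions are masked and unmasked through the PSTATE D
bit; a minimal sketch, with illustrative helper names:

    /* Mask debug exceptions: set PSTATE.D (DAIFSet bit 3) */
    static inline void example_mask_debug_exceptions( void )
    {
      __asm__ volatile ( "msr DAIFSet, #0x8" : : : "memory" );
    }

    /* Unmask debug exceptions once a debugger has installed its handlers */
    static inline void example_unmask_debug_exceptions( void )
    {
      __asm__ volatile ( "msr DAIFClr, #0x8" : : : "memory" );
    }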
This moves the AArch64 MMU memory type definitions into cpukit for use
by libdebugger, since remapping memory is required to insert software
breakpoints.
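As a sketch of why the remapping is needed: planting a software breakpoint
means making the text page writable, storing a BRK instruction, and
synchronizing the caches. The remap call below is a placeholder, not the
actual MMU interface:

    #include <stdint.h>
    #include <rtems/rtems/cache.h>

    #define EXAMPLE_BRK_0 UINT32_C( 0xd4200000 ) /* BRK #0 */

    static void example_insert_breakpoint( uint32_t *instruction )
    {
      /* Placeholder: remap the page containing the instruction read/write */
      /* example_remap_rw( instruction, sizeof( *instruction ) ); */

      *instruction = EXAMPLE_BRK_0;

      /* Clean the store to memory and drop any stale instruction fetches */
      rtems_cache_flush_multiple_data_lines( instruction, sizeof( *instruction ) );
      rtems_cache_invalidate_multiple_instruction_lines(
        instruction,
        sizeof( *instruction )
      );
    }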
This fixes a bug where addresses used in the cache functions were not
aligned correctly. They are now aligned consistently using
RTEMS_ALIGN_DOWN.
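The pattern now used is roughly the following; the line size and the
maintenance operation are illustrative only:

    #include <stdint.h>
    #include <rtems/score/basedefs.h>

    #define EXAMPLE_CACHE_LINE_SIZE 64

    static void example_clean_data_range( uintptr_t begin, uintptr_t end )
    {
      /* Round the start address down to a cache line boundary */
      uintptr_t line = RTEMS_ALIGN_DOWN( begin, EXAMPLE_CACHE_LINE_SIZE );

      while ( line < end ) {
        __asm__ volatile ( "dc cvac, %0" : : "r" ( line ) : "memory" );
        line += EXAMPLE_CACHE_LINE_SIZE;
      }
    }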
Run with stack alignment faults enabled under RTEMS_DEBUG to catch any
stack misalignments early. This makes it easier to track them down
should they ever occur.
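Stack alignment faults correspond to SCTLR_EL1.SA; a sketch of enabling them
only in RTEMS_DEBUG builds follows (the real setup lives in the start code):

    #include <stdint.h>

    static inline void example_enable_stack_alignment_faults( void )
    {
    #ifdef RTEMS_DEBUG
      uint64_t sctlr;

      __asm__ volatile ( "mrs %0, SCTLR_EL1" : "=r" ( sctlr ) );
      sctlr |= UINT64_C( 1 ) << 3; /* SA: fault on misaligned SP use at EL1 */
      __asm__ volatile ( "msr SCTLR_EL1, %0" : : "r" ( sctlr ) : "memory" );
      __asm__ volatile ( "isb" : : : "memory" );
    #endif
    }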
Remove the usage of SUBALIGN() in the aarch64 linkcmds, which worked around a
difference in behavior on AArch64 platforms. This is no longer necessary
since the alignment is now enforced explicitly.
Closes #4178.
The SUBALIGN(4) required on rtemsroset and rtemsrwset for ILP32 builds
was previously also present on LP64 builds, where it causes no issues within
RTEMS but causes relocation/alignment issues when building libbsd. This
change restricts those alignment directives to ILP32 builds.