The following cache operations should now work on most cores:
rtems_cache_flush_entire_data()
rtems_cache_invalidate_entire_data()
rtems_cache_invalidate_entire_instruction()
Instruction cache invalidation currently works only on the first level.
Data cache operations are extended to ensure that flush/invalidate
work on all cache levels.
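A minimal usage sketch of these operations follows; the function names are
the RTEMS Cache Manager API listed above, the surrounding scenario is
illustrative only.

#include <rtems.h>

/* Illustrative scenario: make sure all dirty data reaches memory and stale
 * instruction lines are discarded, e.g. before jumping into a freshly
 * loaded code image. */
void sync_caches_before_jump( void )
{
  rtems_cache_flush_entire_data();
  rtems_cache_invalidate_entire_instruction();
}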
The CP15 arm_cp15_data_cache_clean_all_levels() function is extended
to continue through unified cache levels as well (ctype = 4).
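For context, a sketch of the level walk this refers to; the arm_cp15_*
helper names follow the arm-cp15.h style but should be treated as
illustrative rather than the exact implementation.

#include <stdint.h>
#include <libcpu/arm-cp15.h>

/* Walk the cache hierarchy via CLIDR: each level has a 3-bit Ctype field
 * (1 = instruction only, 2 = data only, 3 = separate I+D, 4 = unified).
 * Cleaning must not stop at the first unified level but continue until
 * Ctype reads back as 0 (no cache at that level). */
static void data_cache_clean_all_levels_sketch( void )
{
  uint32_t clidr = arm_cp15_get_cache_level_id();
  uint32_t level;

  for ( level = 0; level < 7; ++level ) {
    uint32_t ctype = ( clidr >> ( 3 * level ) ) & 0x7;

    if ( ctype == 0 ) {
      break;                 /* no further cache levels */
    }

    if ( ctype >= 2 ) {      /* data, separate I+D, or unified (4) */
      arm_cp15_data_cache_clean_level( level );
    }
  }
}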
Disabling the MMU requires complex cache flush and invalidate
operations. On an SMP system there is almost no way to do that
correctly without stopping all other CPUs. On the other hand, the ARM
manual documents a sequence of operations which should be used, and for
the last generation of Cortex-A cores with the multiprocessor extension
it guarantees that the maintenance operations are distributed to the
other cores.
This change may require adding an appropriate entry to
arm_cp15_start_mmu_config_table for some BSPs to ensure that the MMU
table stays accessible after the MMU is enabled:
{
.begin = (uint32_t) bsp_translation_table_base,
.end = (uint32_t) bsp_translation_table_base + 0x4000,
.flags = ARMV7_MMU_DATA_READ_WRITE_CACHED
}
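A sketch of such an entry in context; the other table contents, the
section config type and the surrounding BSP start file are only indicated
here and are not taken from a particular BSP.

/* Fragment of a hypothetical BSP start file.  The section config type and
 * the ARMV7_MMU_* flag come from the usual ARM CP15 start-up support; map
 * the translation table region read-write cached so it stays accessible
 * once the MMU is on. */
const arm_cp15_start_section_config arm_cp15_start_mmu_config_table[] = {
  /* ... other BSP specific section mappings ... */
  {
    .begin = (uint32_t) bsp_translation_table_base,
    .end = (uint32_t) bsp_translation_table_base + 0x4000,
    .flags = ARMV7_MMU_DATA_READ_WRITE_CACHED
  }
};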
This patch adapts the previously added Beaglebone PWM code from BBBIO to RTEMS.
This work was done in the context of the Google Summer of Code 2016, and further
patches will follow to improve the code quality and documentation.
The basic data and instruction range functions should be compatible
with all ARMv4, ARMv5, ARMv6 and ARMv7 cores. On the other hand, some
functions, for example arm_cp15_data_cache_test_and_clean() and
arm_cp15_data_cache_invalidate(), are not portable across all versions,
and specialized versions are required for newer cores.
arm_cache_l1_properties_for_level() uses CCSIDR, which is not present
on older chips.
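For context, a sketch of why CCSIDR matters here: on ARMv7 the per-level
cache geometry is read from CCSIDR after selecting the level through
CSSELR, which pre-ARMv7 cores simply do not have. The accessor names below
follow the arm-cp15.h style but are illustrative.

#include <stdint.h>
#include <libcpu/arm-cp15.h>

/* Select the data/unified cache at the given level through CSSELR, then
 * read CCSIDR.  The LineSize field (bits [2:0]) encodes
 * log2(words per line) - 2, so the line length in bytes is
 * 1 << (LineSize + 4). */
static uint32_t data_cache_line_size_for_level( uint32_t level )
{
  uint32_t ccsidr;

  arm_cp15_set_cache_size_selection( level << 1 );
  arm_cp15_instruction_synchronization_barrier();
  ccsidr = arm_cp15_get_cache_size_id();

  return 1U << ( ( ccsidr & 0x7 ) + 4 );
}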
The current version is only experimental and needs more changes. A
problem has been found on the RPi1 with dlopen, so there seems to be a
real issue.
The original ARM architecture-wide cache_.h is changed to a dummy
version for targets that do not implement/enable a cache at all. ARM
targets equipped with a cache should include the appropriate
implementation.
The following options are available for now:
c/src/lib/libbsp/arm/shared/armv467ar-basic-cache/cache_.h
basic ARM cache integrated directly on the CPU core,
which requires only CP15 operations
c/src/lib/libbsp/arm/shared/arm-l2c-310/cache_.h
support for the case where the ARM L2C-310 cache controller
is used; it is accessible as a memory-mapped peripheral
c/src/lib/libbsp/arm/shared/armv7m/include/cache_.h
Cortex-M specific cache support
There is a need for a cache function with an unambiguous name and
defined semantics which should be called when code is updated, loaded,
or is self-modifying. There should also be a function to obtain the
maximal cache line length. This function can and should be used for
allocations that hold data and/or code, to ensure that no partial cache
line overlaps the start or end of the allocated region.
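A sketch of how such an API could look in use; the two rtems_cache_*
names below are assumed for illustration and reflect the intent described
above, not necessarily a final API.

#include <stdlib.h>
#include <string.h>
#include <rtems.h>

/* Allocate a buffer that begins and ends on a maximal cache line boundary,
 * copy code into it and make the instruction stream coherent before the
 * code is executed. */
void *load_code( const void *image, size_t size )
{
  size_t line = rtems_cache_get_maximal_line_size();
  size_t rounded = ( size + line - 1 ) & ~( line - 1 );
  void *code = NULL;

  if ( posix_memalign( &code, line, rounded ) != 0 ) {
    return NULL;
  }

  memcpy( code, image, size );
  rtems_cache_instruction_sync_after_code_change( code, rounded );
  return code;
}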
The main reason for including the minimum hypervisor-related defines
is that current ARM board firmware and loaders (U-Boot, for example)
start the loaded operating system kernel in HYP mode to allow it to
take control of virtualization (Linux/KVM, for example).
Rename _ISR_Disable() into _ISR_Local_disable(). Rename _ISR_Enable()
into _ISR_Local_enable(). Remove _Debug_Is_owner_of_giant().
This is a preparation to remove the Giant lock.
Update #2555.
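For callers the rename is mechanical; a sketch assuming the usual
ISR_Level pattern and header location for these macros:

#include <rtems/score/isrlevel.h>

/* Sketch of a caller after the rename. */
void update_shared_state( void )
{
  ISR_Level level;

  _ISR_Local_disable( level );   /* was: _ISR_Disable( level ) */
  /* ... short critical section with local interrupts disabled ... */
  _ISR_Local_enable( level );    /* was: _ISR_Enable( level ) */
}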
Rename _ISR_Disable_without_giant() into _ISR_Local_disable(). Rename
_ISR_Enable_without_giant() into _ISR_Local_enable().
This is a preparation to remove the Giant lock.
Update #2555.
Avoid _Thread_Dispatch_increment_disable_level() and
_Thread_Dispatch_decrement_disable_level() and thus the Giant
lock.
This is a preparation to remove the Giant lock.
Update #2555.
The window underflow trap handler used %i5 which destroyed the %o5 of
the calling context. Bug introduced by
0d3b5d4742.
Go back to the pre 0d3b5d4742 behaviour
and use the two unused instructions in the trap vector to optimize a
bit.
Update #2651.
This patch makes the following changes to the Beaglebone IRQ handling code:
- Disable support for nested interrupts.
- Detect spurious IRQs using the SPURIOUSIRQ field of the INTC_SIR_IRQ register.
- Acknowledge spurious IRQs by setting the NewIRQAgr bit of the INTC_CONTROL
  register. This clears the SPURIOUSIRQ field and allows new interrupts
  to be generated.
- Improve the get_mir_reg function a bit.
Closes #2580.
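A sketch of the spurious IRQ check and acknowledge described above; the
INTC register offsets and bit fields follow the AM335x TRM, while the base
address constant and the read/write helpers are illustrative rather than
the BSP's actual accessors.

#include <stdint.h>
#include <stdbool.h>

#define AM335X_INTC_BASE  0x48200000u
#define INTC_SIR_IRQ      0x40u   /* ActiveIRQ [6:0], SPURIOUSIRQ [31:7] */
#define INTC_CONTROL      0x48u   /* NewIRQAgr is bit 0 */

static inline uint32_t intc_read( uint32_t off )
{
  return *(volatile uint32_t *) ( AM335X_INTC_BASE + off );
}

static inline void intc_write( uint32_t off, uint32_t val )
{
  *(volatile uint32_t *) ( AM335X_INTC_BASE + off ) = val;
}

bool irq_is_spurious_and_ack( uint32_t *vector )
{
  uint32_t sir = intc_read( INTC_SIR_IRQ );

  if ( ( sir >> 7 ) != 0 ) {
    /* Spurious: acknowledge so new interrupts can be generated again. */
    intc_write( INTC_CONTROL, 0x1 );
    return true;
  }

  *vector = sir & 0x7f;
  return false;
}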
According to the C11 and C++11 memory models, only a read-modify-write
operation guarantees that we read the last value written in
modification order. Avoid the sequentially consistent thread fence and
instead use the inter-processor interrupt to set the thread dispatch
necessary indicator.
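A small illustration of the C11 rule in question (not the actual RTEMS
code): a plain load, even next to a sequentially consistent fence, may
legally return an older value in the object's modification order, while a
read-modify-write always observes the latest one.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool dispatch_necessary;

/* May still return a stale value in modification order. */
bool read_may_be_stale( void )
{
  atomic_thread_fence( memory_order_seq_cst );
  return atomic_load_explicit( &dispatch_necessary, memory_order_relaxed );
}

/* Read-modify-write: guaranteed to read the most recent value, and
 * clears the indicator in the same operation. */
bool read_is_latest_and_clear( void )
{
  return atomic_exchange_explicit(
    &dispatch_necessary, false, memory_order_seq_cst );
}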
Store the floating-point unit property in the thread control block
regardless of the CPU_HARDWARE_FP and CPU_SOFTWARE_FP settings. Make
sure the floating-point unit is only enabled for the corresponding
multilibs. This helps targets which have a volatile-only floating-point
context, for example SPARC.