53 Commits

Author SHA1 Message Date
Xiuwen Cai
39f3c86c61 Add error handling in lock initialization in the Xtensa port (#340)
* Add error handling in lock initialization.

* Update release date and version.
2023-12-28 13:17:58 +08:00
Xiuwen Cai
9f3e35d3dc Add check for overflow in queue size calculation in RTOS compatibility layer. (#339)
* Add check for overflow in queue size calculation.

* Update release date and version.
2023-12-28 13:17:40 +08:00
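The overflow check referenced in #339 guards the multiplication that turns a queue length and message size into a byte count. A minimal sketch of the pattern, using hypothetical names (queue_length, message_size) rather than the actual compatibility-layer code:

#include <limits.h>

typedef unsigned long ULONG;

/* Return 1 if queue_length * message_size fits in a ULONG, 0 on overflow. */
static int queue_size_fits(ULONG queue_length, ULONG message_size)
{
    if ((message_size != 0UL) && (queue_length > (ULONG_MAX / message_size)))
    {
        return 0;   /* the byte count would wrap; refuse to create the queue */
    }
    return 1;
}
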
TiejunZhou
d9ffb0f97d Update release date and version (#338)
* Update version number in API header

* Update release date and version
2023-12-28 10:51:29 +08:00
Yajun Xia
e73843f6d4 Added thumb mode support for threadX GNU ports on armv7a platforms. (#333)
* Added thumb mode support for threadX GNU ports on armv7a platforms.

https://msazure.visualstudio.com/One/_workitems/edit/26105175/

* move the swi interrupt to tx_initialize_low_level.S.

* update the test log.
2023-12-28 09:37:39 +08:00
Bo Chen
dbfad5d126 Merge pull request #336 from wenhui-xie/add_sudo
Add sudo to move coverage folder created by root user.
2023-12-22 10:28:34 +08:00
Wenhui Xie
23aa67c948 Add sudo to move coverage folder created by root user. 2023-12-22 01:36:38 +00:00
Ting Zhu
776ea213ce Correct condition syntax in "Prepare Coverage GitHub Pages" step. (#329)
* Correct syntax.

* Update regression_template.yml

* Update regression_template.yml
2023-11-30 10:52:29 +08:00
Bo Chen
55673c2410 Merge pull request #328 from ting-ms/master
Add additional condition for "Prepare Coverage GitHub Pages" step.
2023-11-29 15:48:44 +08:00
Ting Zhu
ebe373b1f3 Add additional condition for "Prepare Coverage GitHub Pages" step. 2023-11-29 14:17:16 +08:00
CQ Xiao
a8e5d0946c Added inputs skip_coverage and coverage_name for customizing. (#327)
* Update regression_template.yml

* Update regression_template.yml

* Update regression_template.yml

* Update regression_template.yml

* Update regression_template.yml

* Update regression_template.yml

Coverage name of all -> default_build_coverage

* Update regression_template.yml

* Update regression_template.yml

Check inputs.skip_coverage to gate steps.

* Update regression_template.yml

* Update regression_template.yml

* Update regression_template.yml

* Update regression_template.yml

Enable coverage upload when manually triggered.

* Update regression_template.yml

* Update regression_template.yml

* Update regression_template.yml

* Update .github/workflows/regression_template.yml

Fix comments.

Co-authored-by: TiejunZhou <50469179+TiejunMS@users.noreply.github.com>

---------

Co-authored-by: TiejunZhou <50469179+TiejunMS@users.noreply.github.com>
2023-11-27 11:50:52 +08:00
TiejunZhou
cad6c42ecc Fix action to run on fork repo for cortex-m builds and unify them (#326) 2023-11-24 16:23:21 +08:00
TiejunZhou
e420e2fa02 Restrict deploy run condition (#325)
* Restrict deploy run condition

* Fail the job for testing purpose

* Revert "Fail the job for testing purpose"

This reverts commit 6ae18cafe2.
2023-11-24 15:41:34 +08:00
TiejunZhou
5f9c713c48 Split artifacts for multiple jobs (#324)
* Test multiple code coverage pages

* Add affix to artifacts

* Test uploading code coverage as artifact

* Deploy GitHub pages at last for multiple jobs

* Test using unified upload pages

* Disable test cases to accelerate experiment

* Fix escape character $

* Revert "Test using unified upload pages"

This reverts commit 3668d9f672.

* Set destination for downloaded artifact

* Use a different artifact name

* Fix escape value

* Revert "Disable test cases to accelerate experiment"

This reverts commit 8468f17d02.

* Override duplicated github-pages in artifact

* Revert "Override duplicated github-pages in artifact"

This reverts commit 17a83aa97d.

* Delete Duplicate Code Coverage Artifact
2023-11-24 13:59:26 +08:00
TiejunZhou
11a7db22b4 Convert ADO pipelines to GitHub actions (#321)
* Convert ADO pipelines to GitHub actions

* Remove version in uses as not valid for local workflows

* Fix cmake path and add deploy url affix

* Add SMP build job

* Fix code coverage URL

* Add affix to titles of steps

* Remove ADO pipelines

* Add affix to titles of code coverage

* separate PR results for multiple jobs

* Revert "separate PR results for multiple jobs"

This reverts commit 6da13540fd.

* separate PR results for multiple jobs
2023-11-23 13:17:52 +08:00
Bo Chen
d17d7bdcbd Merge pull request #314 from ting-ms/master
Update test script to generate JUnit format test report.
2023-11-16 13:31:34 +08:00
tinzhu
2362271d4b Upgrade CMake to the latest. 2023-11-10 15:50:36 +08:00
tinzhu
57c251cd39 Run ctest with additional option "--output-junit" to generate JUnit format test result. 2023-11-10 15:37:49 +08:00
Yajun Xia
cd87763dbd Removed redundant sample_threadX project from Cortex A7 ports_module IAR example_build. (#312)
https://msazure.visualstudio.com/One/_workitems/edit/25784627
2023-11-10 10:31:03 +08:00
TiejunZhou
13b700fd3e Update release version to 6.3.0 and date to 10-31-2023 (#308) 2023-10-23 15:31:03 +08:00
TiejunZhou
9ee2738aec Improved the logic to validate objects from the application in ThreadX Module (#307) 2023-10-23 14:33:24 +08:00
Yajun Xia
bc4bd804d5 Fixed the issue where the data/bss section cannot be read from the ARM FVP debug tool in the cortex-A5 GNU port (#306)
https://msazure.visualstudio.com/One/_workitems/edit/25153813/
2023-09-26 09:51:47 +08:00
Yajun Xia
d43cba10b2 Fixed the issue where the data/bss section cannot be read from the ARM FVP debug tool in the cortex-A9 GNU port. (#303)
https://msazure.visualstudio.com/One/_workitems/edit/25153785/
2023-09-18 16:36:36 +08:00
Yajun Xia
a0a0ef9385 Fixed the issue where the data/bss section cannot be read from the ARM FVP debug tool in the cortex-A8 GNU port. (#302)
https://msazure.visualstudio.com/One/_workitems/edit/25139203/
2023-09-18 10:32:07 +08:00
Yajun Xia
6aeefea8e6 Fixed the issue where the data/bss section cannot be read from ARM FVP d… (#301)
* Fixed the issue where the data/bss section cannot be read from the ARM FVP debug tool in the cortex-A7 GNU port.

https://msazure.visualstudio.com/One/_workitems/edit/24597276/

* remove untracked files.
2023-09-15 10:46:20 +08:00
Yajun Xia
cd9007712b Fixed the issue where ports_arch_check failed at the "Copy ports arch" step on ARMv8 ports. (#300)
https://msazure.visualstudio.com/One/_workitems/edit/25154735/
2023-09-14 09:43:51 +08:00
yajunxiaMS
bc8bed494d Added thumb mode support under IAR for module manager on Cortex-A7 pl… (#289)
* Added thumb mode support under IAR for module manager on Cortex-A7 platform.

* update code for comments.
2023-08-07 17:35:31 +08:00
yajunxiaMS
7fa087d061 Added thumb mode support under GNU for module manager on Cortex-A7 pl… (#287)
* Added thumb mode support under GNU for module manager on Cortex-A7 platform.

* update code for comment.
2023-07-21 09:26:22 +08:00
TiejunZhou
1ffd7c2cde Allow manual trigger for CodeQL action (#286) 2023-07-13 13:24:31 +08:00
TiejunZhou
fd2bf7c19a Enable CodeQL (#285)
* Enable CodeQL

* Build cortex-m0 in CodeQL

* Trigger the CodeQL by cron only
2023-07-13 13:09:47 +08:00
TiejunZhou
8ff9910ddc Added memory barrier before thread scheduling for ARMv8-A ThreadX SMP. (#280) 2023-06-26 09:21:06 +08:00
TiejunZhou
08380caa77 Unify ThreadX and SMP for ARMv8-A. (#275)
* Unify ThreadX and SMP for ARMv8-A.

* Fix path in pipeline to check ports arch.

* Add ignore folders for ARM DS

* Generate ThreadX and SMP ports for ARMv8-A.

* Ignore untracked files for ports_arch check.

* Use arch instead of CPU to simplify the project management.
2023-06-21 18:23:36 +08:00
Yanwu Cai
1b2995cea8 Fix compile warnings in Linux port. (#276) 2023-06-19 17:45:16 +08:00
TiejunZhou
25a8fa2362 Add a pull request template (#272) 2023-06-06 14:03:35 +08:00
TiejunZhou
71cc95eaed Include tx_user.h in cortex_m33/55/85 IAR port (#267) 2023-05-24 13:31:02 +08:00
Xiuwen Cai
361590dc40 Export _tx_handler_svc_unrecognized as weak symbol. (#264) 2023-05-19 11:06:50 +08:00
TiejunZhou
d66a519685 Fix MISRA issues for ThreadX SMP (#263)
* Fixed MISRA2012 rule 10.4_a

The operands `pool_ptr->tx_byte_pool_fragments' and `2' have essential type categories unsigned 32-bit int and signed 8-bit int, which do not match.

* Fixed MISRA2012 rule 10.4_a

The operands `next_priority' and `TX_MAX_PRIORITIES' have essential type categories unsigned 32-bit int and signed 8-bit int, which do not match.

* Fixed MISRA2012 rule 8.3

Declaration/definition of `_tx_thread_smp_preemptable_threads_get' is inconsistent with previous declaration/definition in types and/or type qualifiers
2023-05-18 15:57:53 +08:00
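A condensed sketch of the rule 10.4_a pattern these commits apply (the actual hunks appear further down in the diff): give both operands of an operator the same essential type, either with an unsigned literal suffix or an explicit cast. The helper names and the TX_MAX_PRIORITIES value below are illustrative only.

typedef unsigned long ULONG;
typedef unsigned int  UINT;
#define TX_MAX_PRIORITIES 32

/* Unsigned operand vs. signed literal: write 2U instead of 2. */
static ULONG fragment_overhead(ULONG fragments, ULONG per_block_overhead)
{
    return (fragments - 2U) * per_block_overhead;
}

/* Unsigned variable vs. signed macro: cast the macro to ULONG. */
static UINT priority_search_done(ULONG next_priority)
{
    return (next_priority == (ULONG) TX_MAX_PRIORITIES) ? 1U : 0U;
}

The rule 8.3 fix is purely declarative: the prototype and the definition of _tx_thread_smp_preemptable_threads_get now both spell the second parameter as TX_THREAD *possible_preemption_list[TX_THREAD_SMP_MAX_CORES].
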
Xiuwen Cai
6b8ece0ff2 Add random number stack filling option. (#257)
Co-authored-by: TiejunZhou <50469179+TiejunMS@users.noreply.github.com>
2023-05-12 10:13:42 +08:00
Stefan Wick
6d9f25fac9 Update LICENSE.txt (#261) 2023-05-12 09:57:13 +08:00
TiejunZhou
e2a8334f96 Include tx_user.h in cortex_m3/4/7 IAR and AC5 port (#255)
* Include tx_user.h in ARMv7-M IAR port

* Include tx_user.h in ARMv7-M AC5 port

* Include tx_user.h in cortex_m3/4/7 IAR and AC5 port
2023-04-24 09:33:00 +08:00
TiejunZhou
7a3bb8311b Release scripts to validate ThreadX port (#254) 2023-04-23 10:58:21 +08:00
TiejunZhou
b11d1be6ac Update devcontainer to Ubuntu 22.04 (#253) 2023-04-21 09:41:18 +08:00
TiejunZhou
390c5ce1b7 Update CFS usage (#252) 2023-04-20 17:20:15 +08:00
TiejunZhou
672c5e953e Release ARMv7-A architecture ports and add tx_user.h to GNU port assembly files (#250)
* Release ARMv7-A architecture ports

* Add tx_user.h to GNU port assembly files

* Update GitHub action to perform check for Cortex-A ports
2023-04-19 17:56:09 +08:00
TiejunZhou
23680f5e5f Release ARMv7-M and ARMv8-M architecture ports (#249)
* Release ARMv7-M and ARMv8-M architecture ports

* Add a pipeline to check ports_arch
2023-04-18 18:11:20 +08:00
TiejunZhou
d64ef2ab06 Filter the path for PR trigger and add codeowners (#248)
* Filter the path for PR trigger

* Add codeowners

* Fix syntax in pipeline
2023-04-17 13:16:14 +08:00
TiejunZhou
4c4547d5d5 Fix path to test reports in pipeline (#247)
* Fix path to test reports in pipeline

* Fix test case: when the CPU starves, thread 2 can run 14 rounds.
2023-04-17 09:40:59 +08:00
TiejunZhou
0d308c7ae6 Fix random failure in test case threadx_event_flag_suspension_timeout_test.c (#246)
Depending on the starting time, thread 1 can run either 32 or 33 rounds.
2023-04-14 14:55:04 +08:00
TiejunZhou
487ca45752 Merge pull request #244 from azure-rtos/tizho/test
Release ThreadX regression system
2023-04-13 16:57:35 +08:00
Tiejun Zhou
5f430f22e2 Add Azure DevOps pipelines for ThreadX test 2023-04-12 09:40:17 +00:00
Tiejun Zhou
ebeb02b958 Release ThreadX regression system 2023-04-04 09:40:54 +00:00
Tiejun Zhou
ac3b6b326c Update on 31 Mar 2023. Expand to see details.
af5702cbf Include tx_user.h only when TX_INCLUDE_USER_DEFINE_FILE is defined for assembly files
2023-03-31 07:34:47 +00:00
TiejunZhou
dac41f6015 Merge pull request #236 from wickste/patch-1
Update LICENSED-HARDWARE.txt
2023-03-22 09:18:28 +08:00
Stefan Wick
f4d6b638de Update LICENSED-HARDWARE.txt
Update the supported MPU families per Renesas.
2023-03-21 11:31:09 -07:00
2419 changed files with 198447 additions and 8912 deletions

View File

@@ -0,0 +1,13 @@
{
"image": "ghcr.io/tiejunms/azure_rtos_docker",
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-vscode.cpptools",
"ms-vscode.cmake-tools"
],
"remoteUser": "vscode",
"runArgs": [ "--cap-add=NET_ADMIN"]
}

.github/CODEOWNERS (new file)

@@ -0,0 +1 @@
@azure-rtos/admins

.github/PULL_REQUEST_TEMPLATE.md (new file)

@@ -0,0 +1,5 @@
## PR checklist
<!--- Put an `x` in all the boxes that apply. -->
- [ ] Updated function header with a short description and version number
- [ ] Added test case for bug fix or new feature
- [ ] Validated on real hardware <!-- hardware - toolchain -->

View File

@@ -1,28 +0,0 @@
name: cache-update
on:
schedule:
- cron: '0 0 */3 * *' # every 30m for testing
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps: # Cache location for arm tools
- name: Cache arm-none-eabi-gcc tools
id: cache-arm-gcc
uses: actions/cache@v1
with:
path: $HOME/arm-none-eabi-gcc-9-2019-q4
key: ${{ runner.os }}-arm-gcc-9-2019-q4
# Get the arm-non-eabi-gcc toolchain
- name: Install arm-none-eabi-gcc
uses: fiam/arm-none-eabi-gcc@v1
with:
release: '9-2019-q4' # The arm-none-eabi-gcc release to use.
directory: $HOME/arm-none-eabi-gcc-9-2019-q4

View File

@@ -1,6 +1,6 @@
# This is a basic workflow to help you get started with Actions
name: cortex_m7
name: cortex_m
# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
@@ -9,6 +9,14 @@ on:
branches: [ master ]
pull_request:
branches: [ master ]
paths:
- ".github/workflows/ci_cortex_m.yml"
- 'common/**'
- 'utility/**'
- 'ports/cortex_m0/gnu/**'
- 'ports/cortex_m3/gnu/**'
- 'ports/cortex_m4/gnu/**'
- 'ports/cortex_m7/gnu/**'
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
@@ -17,13 +25,18 @@ jobs:
# The type of runner that the job will run on
runs-on: ubuntu-latest
strategy:
matrix:
port: [0, 3, 4, 7]
name: Cortex M${{ matrix.port }} build
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- name: Checkout sources recursively
uses: actions/checkout@v2
- name: Check out the repository
uses: actions/checkout@v4
with:
token: ${{ secrets.REPO_SCOPED_TOKEN }}
submodules: true
# Store the arm compilers in the cache to speed up builds
@@ -52,7 +65,7 @@ jobs:
# Prepare the build system
- name: Prepare build system
run: cmake -Bbuild -DCMAKE_TOOLCHAIN_FILE=./cmake/cortex_m7.cmake -GNinja .
run: cmake -Bbuild -DCMAKE_TOOLCHAIN_FILE=./cmake/cortex_m${{ matrix.port }}.cmake -GNinja .
env:
PATH: "$HOME/arm-none-eabi-gcc-9-2019-q4/bin:$PATH"

View File

@@ -1,64 +0,0 @@
# This is a basic workflow to help you get started with Actions
name: cortex_m0
# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- name: Checkout sources recursively
uses: actions/checkout@v2
with:
token: ${{ secrets.REPO_SCOPED_TOKEN }}
submodules: true
# Store the arm compilers in the cache to speed up builds
- name: Cache arm-none-eabi-gcc tools
id: cache-arm-gcc
uses: actions/cache@v1
with:
path: $HOME/arm-none-eabi-gcc-9-2019-q4
key: ${{ runner.os }}-arm-gcc-9-2019-q4
# Get the arm-non-eabi-gcc toolchain
- name: Install arm-none-eabi-gcc
uses: fiam/arm-none-eabi-gcc@v1
if: steps.cache-arm-gcc.outputs.cache-hit != 'true'
with:
release: '9-2019-q4' # The arm-none-eabi-gcc release to use.
directory: $HOME/arm-none-eabi-gcc-9-2019-q4
# Get CMake into the environment
- name: Install cmake 3.19.1
uses: lukka/get-cmake@v3.19.1
# Get Ninja into the environment
- name: Install ninja-build
uses: seanmiddleditch/gha-setup-ninja@v3
# Prepare the build system
- name: Prepare build system
run: cmake -Bbuild -DCMAKE_TOOLCHAIN_FILE=./cmake/cortex_m0.cmake -GNinja .
env:
PATH: "$HOME/arm-none-eabi-gcc-9-2019-q4/bin:$PATH"
- name: Compile and link
run: cmake --build ./build
env:
PATH: "$HOME/arm-none-eabi-gcc-9-2019-q4/bin:$PATH"

View File

@@ -1,64 +0,0 @@
# This is a basic workflow to help you get started with Actions
name: cortex_m3
# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- name: Checkout sources recursively
uses: actions/checkout@v2
with:
token: ${{ secrets.REPO_SCOPED_TOKEN }}
submodules: true
# Store the arm compilers in the cache to speed up builds
- name: Cache arm-none-eabi-gcc tools
id: cache-arm-gcc
uses: actions/cache@v1
with:
path: $HOME/arm-none-eabi-gcc-9-2019-q4
key: ${{ runner.os }}-arm-gcc-9-2019-q4
# Get the arm-non-eabi-gcc toolchain
- name: Install arm-none-eabi-gcc
uses: fiam/arm-none-eabi-gcc@v1
if: steps.cache-arm-gcc.outputs.cache-hit != 'true'
with:
release: '9-2019-q4' # The arm-none-eabi-gcc release to use.
directory: $HOME/arm-none-eabi-gcc-9-2019-q4
# Get CMake into the environment
- name: Install cmake 3.19.1
uses: lukka/get-cmake@v3.19.1
# Get Ninja into the environment
- name: Install ninja-build
uses: seanmiddleditch/gha-setup-ninja@v3
# Prepare the build system
- name: Prepare build system
run: cmake -Bbuild -DCMAKE_TOOLCHAIN_FILE=./cmake/cortex_m3.cmake -GNinja .
env:
PATH: "$HOME/arm-none-eabi-gcc-9-2019-q4/bin:$PATH"
- name: Compile and link
run: cmake --build ./build
env:
PATH: "$HOME/arm-none-eabi-gcc-9-2019-q4/bin:$PATH"

View File

@@ -1,64 +0,0 @@
# This is a basic workflow to help you get started with Actions
name: cortex_m4
# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- name: Checkout sources recursively
uses: actions/checkout@v2
with:
token: ${{ secrets.REPO_SCOPED_TOKEN }}
submodules: true
# Store the arm compilers in the cache to speed up builds
- name: Cache arm-none-eabi-gcc tools
id: cache-arm-gcc
uses: actions/cache@v1
with:
path: $HOME/arm-none-eabi-gcc-9-2019-q4
key: ${{ runner.os }}-arm-gcc-9-2019-q4
# Get the arm-non-eabi-gcc toolchain
- name: Install arm-none-eabi-gcc
uses: fiam/arm-none-eabi-gcc@v1
if: steps.cache-arm-gcc.outputs.cache-hit != 'true'
with:
release: '9-2019-q4' # The arm-none-eabi-gcc release to use.
directory: $HOME/arm-none-eabi-gcc-9-2019-q4
# Get CMake into the environment
- name: Install cmake 3.19.1
uses: lukka/get-cmake@v3.19.1
# Get Ninja into the environment
- name: Install ninja-build
uses: seanmiddleditch/gha-setup-ninja@v3
# Prepare the build system
- name: Prepare build system
run: cmake -Bbuild -DCMAKE_TOOLCHAIN_FILE=./cmake/cortex_m4.cmake -GNinja .
env:
PATH: "$HOME/arm-none-eabi-gcc-9-2019-q4/bin:$PATH"
- name: Compile and link
run: cmake --build ./build
env:
PATH: "$HOME/arm-none-eabi-gcc-9-2019-q4/bin:$PATH"

.github/workflows/codeql.yml (new file)

@@ -0,0 +1,110 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
workflow_dispatch:
schedule:
- cron: '33 1 * * 6'
jobs:
analyze:
name: Analyze
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
timeout-minutes: ${{ (matrix.language == 'swift' && 120) || 360 }}
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'cpp' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby', 'swift' ]
# Use only 'java' to analyze code written in Java, Kotlin or both
# Use only 'javascript' to analyze code written in JavaScript, TypeScript or both
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@v3
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# Autobuild attempts to build any compiled languages (C/C++, C#, Go, Java, or Swift).
# If this step fails, then you should remove it and run the build manually (see below)
#- name: Autobuild
# uses: github/codeql-action/autobuild@v2
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
# If the Autobuild fails above, remove it and uncomment the following three lines.
# modify them (or add more) to build your code if your project, please refer to the EXAMPLE below for guidance.
#- run: |
# echo "Run, Build Application using script"
# ./scripts/install.sh
# ./test/tx/cmake/run.sh build
# Store the arm compilers in the cache to speed up builds
- name: Cache arm-none-eabi-gcc tools
id: cache-arm-gcc
uses: actions/cache@v1
with:
path: $HOME/arm-none-eabi-gcc-9-2019-q4
key: ${{ runner.os }}-arm-gcc-9-2019-q4
# Get the arm-non-eabi-gcc toolchain
- name: Install arm-none-eabi-gcc
uses: fiam/arm-none-eabi-gcc@v1
if: steps.cache-arm-gcc.outputs.cache-hit != 'true'
with:
release: '9-2019-q4' # The arm-none-eabi-gcc release to use.
directory: $HOME/arm-none-eabi-gcc-9-2019-q4
# Get CMake into the environment
- name: Install cmake 3.19.1
uses: lukka/get-cmake@v3.19.1
# Get Ninja into the environment
- name: Install ninja-build
uses: seanmiddleditch/gha-setup-ninja@v3
# Prepare the build system
- name: Prepare build system
run: cmake -Bbuild -DCMAKE_TOOLCHAIN_FILE=./cmake/cortex_m0.cmake -GNinja .
env:
PATH: "$HOME/arm-none-eabi-gcc-9-2019-q4/bin:$PATH"
- name: Compile and link
run: cmake --build ./build
env:
PATH: "$HOME/arm-none-eabi-gcc-9-2019-q4/bin:$PATH"
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
with:
category: "/language:${{matrix.language}}"

.github/workflows/ports_arch_check.yml (new file)

@@ -0,0 +1,73 @@
# This is a basic workflow to help you get started with Actions
name: ports_arch_check
# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
pull_request:
branches: [ master ]
paths:
- ".github/workflows/ports_arch_check.yml"
- 'common/**'
- 'common_modules/**'
- 'common_smp/**'
- 'ports/**'
- 'ports_modules/**'
- 'ports_smp/**'
- 'ports_arch/**'
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# Check ports for cortex-m
cortex-m:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- name: Checkout sources recursively
uses: actions/checkout@v2
with:
token: ${{ secrets.REPO_SCOPED_TOKEN }}
submodules: true
# Copy ports arch
- name: Copy ports arch
run: |
scripts/copy_armv7_m.sh && scripts/copy_armv8_m.sh && scripts/copy_module_armv7_m.sh
if [[ -n $(git status --porcelain -uno) ]]; then
echo "Ports for ARM architecture is not updated"
git status
exit 1
fi
cortex-a:
# Check ports for cortex-a
runs-on: windows-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- name: Checkout sources recursively
uses: actions/checkout@v2
with:
token: ${{ secrets.REPO_SCOPED_TOKEN }}
submodules: true
# Copy ports arch
- name: Copy ports arch
run: |
cd ports_arch/ARMv7-A
pwsh -Command ./update.ps1 -PortSets tx -CopyCommonFiles -CopyPortFiles -CopyExample -PatchFiles
cd ../../ports_arch/ARMv8-A
pwsh -Command ./update.ps1 -PortSets tx,tx_smp -CopyCommonFiles -CopyPortFiles -CopyExample -PatchFiles
if ((git status --porcelain -uno) -ne $null) {
Write-Host "Ports for ARM architecture is not updated"
git status
Exit 1
}

View File

@@ -0,0 +1,197 @@
# This is a basic workflow that is manually triggered
name: regression_template
on:
workflow_call:
inputs:
install_script:
default: './scripts/install.sh'
required: false
type: string
build_script:
default: './scripts/build.sh'
required: false
type: string
test_script:
default: './scripts/test.sh'
required: false
type: string
cmake_path:
default: './test/cmake'
required: false
type: string
skip_test:
default: false
required: false
type: boolean
skip_coverage:
default: false
required: false
type: boolean
coverage_name:
default: 'default_build_coverage'
required: false
type: string
skip_deploy:
default: false
required: false
type: boolean
deploy_list:
default: ''
required: false
type: string
result_affix:
default: ''
required: false
type: string
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "linux_job"
run_tests:
if: ${{ !inputs.skip_test}}
permissions:
contents: read
issues: read
checks: write
pull-requests: write
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
- name: Check out the repository
uses: actions/checkout@v4
with:
submodules: true
- name: Install softwares
run: ${{ inputs.install_script }}
- name: Build
run: ${{ inputs.build_script }}
- name: Test
run: ${{ inputs.test_script }}
- name: Publish Test Results
uses: EnricoMi/publish-unit-test-result-action@v2.11.0
if: always()
with:
check_name: Test Results ${{ inputs.result_affix }}
files: |
${{ inputs.cmake_path }}/build/*/*.xml
- name: Upload Test Results
if: success() || failure()
uses: actions/upload-artifact@v3.1.3
with:
name: test_reports ${{ inputs.result_affix }}
path: |
${{ inputs.cmake_path }}/build/*.txt
${{ inputs.cmake_path }}/build/*/Testing/**/*.xml
${{ inputs.cmake_path }}/build/**/regression/output_files/*.bin
- name: Configure GitHub Pages
uses: actions/configure-pages@v3.0.6
- name: Generate Code Coverage Results Summary
if: (!inputs.skip_coverage)
uses: irongut/CodeCoverageSummary@v1.3.0
with:
filename: ${{ inputs.cmake_path }}/coverage_report/${{ inputs.coverage_name }}.xml
format: markdown
badge: true
hide_complexity: true
output: file
- name: Write Code Coverage Summary
if: (!inputs.skip_coverage)
run: |
echo "## Coverage Report ${{ inputs.result_affix }}" >> $GITHUB_STEP_SUMMARY
cat code-coverage-results.md >> $GITHUB_STEP_SUMMARY
- name: Create CheckRun for Code Coverage
if: ((github.event_name == 'push') || (github.event_name == 'workflow_dispatch') || (github.event.pull_request.head.repo.full_name == github.repository)) && (!inputs.skip_coverage)
uses: LouisBrunner/checks-action@v1.6.2
with:
token: ${{ secrets.GITHUB_TOKEN }}
name: Code Coverage ${{ inputs.result_affix }}
conclusion: ${{ job.status }}
output: |
{"summary":"Coverage Report"}
output_text_description_file: code-coverage-results.md
- name: Add Code Coverage PR Comment
if: ((github.event_name == 'push') || (github.event.pull_request.head.repo.full_name == github.repository)) && (!inputs.skip_coverage)
uses: marocchino/sticky-pull-request-comment@v2
with:
header: Code Coverage ${{ inputs.result_affix }}
path: code-coverage-results.md
# Add sudo to move coverage folder created by root user
- name: Prepare Coverage GitHub Pages
if: (!inputs.skip_coverage)
run: >-
if [ "${{ inputs.result_affix }}" != "" ] && ${{ inputs.skip_deploy }}; then
sudo mv ${{ inputs.cmake_path }}/coverage_report/${{ inputs.coverage_name }} \
${{ inputs.cmake_path }}/coverage_report/${{ inputs.result_affix }}
fi
- name: Upload Code Coverage Artifacts
uses: actions/upload-artifact@v3.1.3
if: (inputs.skip_deploy && !inputs.skip_coverage)
with:
name: coverage_report
path: ${{ inputs.cmake_path }}/coverage_report
retention-days: 1
- name: Upload Code Coverage Pages
uses: actions/upload-pages-artifact@v2.0.0
if: (!inputs.skip_deploy && !inputs.skip_coverage)
with:
path: ${{ inputs.cmake_path }}/coverage_report/${{ inputs.coverage_name }}
deploy_code_coverage:
runs-on: ubuntu-latest
if: ((github.event_name == 'push') || (github.event_name == 'workflow_dispatch')) && !inputs.skip_coverage && !inputs.skip_deploy && !failure() && !cancelled()
needs: run_tests
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
permissions:
pages: write
id-token: write
steps:
- uses: actions/download-artifact@v3
if: ${{ inputs.skip_test }}
with:
name: coverage_report
- name: Upload Code Coverage Pages
uses: actions/upload-pages-artifact@v2.0.0
if: ${{ inputs.skip_test }}
with:
path: .
- name: Delete Duplicate Code Coverage Artifact
uses: geekyeggo/delete-artifact@v2
with:
name: coverage_report
- name: Deploy GitHub Pages site
id: deployment
uses: actions/deploy-pages@v1.2.9
- name: Write Code Coverage Report URL
run: >-
if [ "${{ inputs.deploy_list }}" != "" ]; then
for i in ${{ inputs.deploy_list }}; do
echo 'Coverage report for ' $i ':${{ steps.deployment.outputs.page_url }}'$i >> $GITHUB_STEP_SUMMARY
done
else
echo 'Coverage report: ${{ steps.deployment.outputs.page_url }}' >> $GITHUB_STEP_SUMMARY
fi

.github/workflows/regression_test.yml (new file)

@@ -0,0 +1,35 @@
name: regression_test
# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
workflow_dispatch:
push:
branches: [ master ]
pull_request:
branches: [ master ]
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
tx:
uses: ./.github/workflows/regression_template.yml
with:
build_script: ./scripts/build_tx.sh
test_script: ./scripts/test_tx.sh
cmake_path: ./test/tx/cmake
result_affix: ThreadX
skip_deploy: true
smp:
uses: ./.github/workflows/regression_template.yml
with:
build_script: ./scripts/build_smp.sh
test_script: ./scripts/test_smp.sh
cmake_path: ./test/smp/cmake
result_affix: SMP
skip_deploy: true
deploy:
needs: [tx, smp]
uses: ./.github/workflows/regression_template.yml
with:
skip_test: true
deploy_list: "ThreadX SMP"

.gitignore

@@ -1,6 +1,9 @@
.vscode/
.settings/
.metadata/
_deps/
build/
Debug/
CMakeFiles/
CMakeScripts/
CMakeLists.txt.user
@@ -11,4 +14,10 @@ cmake_install.cmake
install_manifest.txt
compile_commands.json
CTestTestfile.cmake
*.dep
*.o
*.axf
*.map
*.a
*.htm

View File

@@ -2,7 +2,6 @@ MICROSOFT SOFTWARE LICENSE TERMS
MICROSOFT AZURE RTOS
Shape
These license terms are an agreement between you and Microsoft Corporation (or
one of its affiliates). They apply to the software named above and any Microsoft
@@ -14,10 +13,11 @@ HAVE THE RIGHTS BELOW. BY USING THE SOFTWARE, YOU ACCEPT THESE TERMS.
1. INSTALLATION AND USE RIGHTS.
a) General. You may install and use the software and the included Microsoft
applications solely for internal development, testing and evaluation purposes.
Any distribution or production use requires a separate license as set forth in
Section 2.
a) General. You may (I) install, use and modify the software and (ii) install and use the included Microsoft
Applications (if any), each solely for internal development, testing and evaluation purposes.
Distribution or production use is governed by the license terms set forth in
Section 2. You may also obtain distribution or production use rights through a separate agreement with
Microsoft.
b) Contributions. Microsoft welcomes contributions to this software. In the event
that you make a contribution to this software you will be required to agree to a
@@ -25,7 +25,7 @@ Contributor License Agreement (CLA) declaring that you have the right to, and
actually do, grant Microsoft the rights to use your contribution. For details,
visit https://cla.microsoft.com.
c) Included Microsoft Applications. The software includes other Microsoft
c) Included Microsoft Applications. The software may include other Microsoft
applications which are governed by the licenses embedded in or made available
with those applications.
@@ -57,7 +57,6 @@ i. You may use the software in production (e.g. program the modified or unmodifi
software to devices you own or control) and distribute (i.e. make available to
third parties) the modified or unmodified binary image produced from this code.
ii. You may permit your device distributors or developers to copy and distribute the
binary image as programmed or to be programmed to your devices.
@@ -70,17 +69,12 @@ b) Requirements. For any code you distribute, you must:
i. when distributed in binary form, except as embedded in a device, include with
such distribution the terms of this agreement;
ii. when distributed in source code form to distributors or developers of your
devices, include with such distribution the terms of this agreement; and
iii. indemnify, defend and hold harmless Microsoft from any claims, including
attorneys fees, related to the distribution or use of your devices, except to
the extent that any claim is based solely on the unmodified software.
iii. indemnify, defend and hold harmless Microsoft from any claims, including claims arising from any High Risk Uses, and inclusive of attorneys fees, related to the distribution or use of your devices that include the software, except to the extent that any intellectual property claim is based solely on the unmodified software.
c) Restrictions. You may not:
i. use or modify the software to create a competing real time operating system
i. use or modify the software to create competing real time operating system
software;
ii. remove any copyright notices or licenses contained in the software;
@@ -179,12 +173,13 @@ breach of which would endanger the purpose of this agreement and the compliance
with which a party may constantly trust in (so-called "cardinal obligations").
In other cases of slight negligence, Microsoft will not be liable for slight
negligence.
12. DISCLAIMER OF WARRANTY. THE SOFTWARE IS LICENSED “AS IS.” YOU BEAR THE RISK OF
12. DISCLAIMER OF WARRANTY.
a) THE SOFTWARE IS LICENSED “AS IS.” YOU BEAR THE RISK OF
USING IT. MICROSOFT GIVES NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. TO
THE EXTENT PERMITTED UNDER APPLICABLE LAWS, MICROSOFT EXCLUDES ALL IMPLIED
WARRANTIES, INCLUDING MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND
NON-INFRINGEMENT.
b) HIGH RISK USE DISCLAIMER. WARNING: THE SOFTWARE IS NOT DESIGNED OR INTENDED FOR USE WHERE FAILURE OR FAULT OF ANY KIND OF THE SOFTWARE COULD RESULT IN DEATH OR SERIOUS BODILY INJURY, OR IN PHYSICAL OR ENVIRONMENTAL DAMAGE (“collectively High Risk Use”). Accordingly, You must design and implement your hardware and software such that, in the event of any interruption, defect, error, or other failure of the software, the safety of people, property, and the environment are not reduced below a level that is reasonable, appropriate, and legal, whether in general or for a specific industry. Your High Risk Use of the software is at Your own risk.
13. LIMITATION ON AND EXCLUSION OF DAMAGES. IF YOU HAVE ANY BASIS FOR RECOVERING
DAMAGES DESPITE THE PRECEDING DISCLAIMER OF WARRANTY, YOU CAN RECOVER FROM
@@ -203,21 +198,29 @@ possibility of the damages. The above limitation or exclusion may not apply to
you because your state, province, or country may not allow the exclusion or
limitation of incidental, consequential, or other damages.
Please note: As this software is distributed in Canada, some of the clauses in
Please note: As this software is distributed in Canada, some of the clauses in
this agreement are provided below in French.
Remarque: Ce logiciel étant distribué au Canada, certaines des clauses dans ce
contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le logiciel visé par une licence est offert « tel quel
EXONÉRATION DE GARANTIE.
a) Le logiciel visé par une licence est offert « tel quel
». Toute utilisation de ce logiciel est à votre seule risque et péril. Microsoft
n'accorde aucune autre garantie expresse. Vous pouvez bénéficier de droits
additionnels en vertu du droit local sur la protection des consommateurs, que ce
contrat ne peut modifier. La ou elles sont permises par le droit locale, les
garanties implicites de qualité marchande, d'adéquation à un usage particulier
et d'absence de contrefaçon sont exclues.
b) CLAUSE D'EXCLUSION DE RESPONSABILITÉ RELATIVE À L'UTILISATION À HAUT RISQUE.
AVERTISSEMENT: LE LOGICIEL N'EST PAS CONÇU OU DESTINÉ À ÊTRE UTILISÉ LORSQU'UNE
DÉFAILLANCE OU UN DÉFAUT DE QUELQUE NATURE QUE CE SOIT POURRAIT ENTRAÎNER LA
MORT OU DES BLESSURES CORPORELLES GRAVES, OU DES DOMMAGES PHYSIQUES OU
ENVIRONNEMENTAUX (« Utilisation à haut risque »). Par conséquent, vous devez concevoir et mettre en
œuvre votre équipement et votre logiciel de manière à ce que, en cas d'interruption, de défaut, d'erreur
ou de toute autre défaillance du logiciel, la sécurité des personnes, des biens et de l'environnement ne
soit pas réduite en dessous d'un niveau raisonnable, approprié et légal, que ce soit en général ou pour
un secteur spécifique. Votre utilisation à haut risque du logiciel est à vos propres risques.
LIMITATION DES DOMMAGES-INTÉRÊTS ET EXCLUSION DE RESPONSABILITÉ POUR LES
DOMMAGES. Vous pouvez obtenir de Microsoft et de ses fournisseurs une
@@ -243,4 +246,4 @@ ci-dessus ne s'appliquera pas à votre égard.
EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous
pourriez avoir d'autres droits prévus par les lois de votre pays. Le présent
contrat ne modifie pas les droits que vous confèrent les lois de votre pays si
celles-ci ne le permettent pas.
celles-ci ne le permettent pas.

View File

@@ -38,7 +38,7 @@ Renesas:
Synergy Platform
RX Family of MCUs
RA Family of MCUs
RZ/A, RZ/N and RZ/T Family of MPUs
RZ Family of MPUs
--------------------------------------------------------------------------------

View File

@@ -26,7 +26,7 @@
/* APPLICATION INTERFACE DEFINITION RELEASE */
/* */
/* tx_api.h PORTABLE C */
/* 6.2.1 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -97,6 +97,13 @@
/* 03-08-2023 Tiejun Zhou Modified comment(s), */
/* update patch number, */
/* resulting in version 6.2.1 */
/* 10-31-2023 Xiuwen Cai Modified comment(s), */
/* added option for random */
/* number stack filling, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Tiejun Zhou Modified comment(s), */
/* update version number, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
@@ -135,8 +142,8 @@ extern "C" {
#define AZURE_RTOS_THREADX
#define THREADX_MAJOR_VERSION 6
#define THREADX_MINOR_VERSION 2
#define THREADX_PATCH_VERSION 1
#define THREADX_MINOR_VERSION 4
#define THREADX_PATCH_VERSION 0
/* Define the following symbol for backward compatibility */
#define EL_PRODUCT_THREADX
@@ -171,7 +178,11 @@ extern "C" {
#define TX_NO_MESSAGES ((UINT) 0)
#define TX_EMPTY ((ULONG) 0)
#define TX_CLEAR_ID ((ULONG) 0)
#if defined(TX_ENABLE_RANDOM_NUMBER_STACK_FILLING) && defined(TX_ENABLE_STACK_CHECKING)
#define TX_STACK_FILL (thread_ptr -> tx_thread_stack_fill_value)
#else
#define TX_STACK_FILL ((ULONG) 0xEFEFEFEFUL)
#endif
/* Thread execution state values. */
@@ -618,6 +629,12 @@ typedef struct TX_THREAD_STRUCT
cleanup routine executes. */
ULONG tx_thread_suspension_sequence;
#if defined(TX_ENABLE_RANDOM_NUMBER_STACK_FILLING) && defined(TX_ENABLE_STACK_CHECKING)
/* Define the random stack fill number. This can be used to detect stack overflow. */
ULONG tx_thread_stack_fill_value;
#endif
/* Define the user extension field. This typically is defined
to white space, but some ports of ThreadX may need to have
additional fields in the thread control block. This is
@@ -1892,6 +1909,21 @@ UINT _tx_trace_interrupt_control(UINT new_posture);
#endif
/* Add a default macro that can be re-defined in tx_port.h to add processing to the initialize random number generator.
By default, this is simply defined as whitespace. */
#ifndef TX_INITIALIZE_RANDOM_GENERATOR_INITIALIZATION
#define TX_INITIALIZE_RANDOM_GENERATOR_INITIALIZATION
#endif
/* Define the TX_RAND macro to the standard library function, if not already defined. */
#ifndef TX_RAND
#define TX_RAND() rand()
#endif
/* Check for MISRA compliance requirements. */
#ifdef TX_MISRA_ENABLE
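
These hooks default to whitespace and rand() and, per the comment above, can be re-defined in tx_port.h (or tx_user.h). A sketch of an override, where hw_trng_init() and hw_trng_read32() stand in for a hypothetical hardware TRNG driver and are not part of ThreadX:

#define TX_INITIALIZE_RANDOM_GENERATOR_INITIALIZATION   hw_trng_init();
#define TX_RAND()                                       ((int) hw_trng_read32())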

View File

@@ -26,7 +26,7 @@
/* PORT SPECIFIC C INFORMATION RELEASE */
/* */
/* tx_user.h PORTABLE C */
/* 6.1.11 */
/* 6.3.0 */
/* */
/* AUTHOR */
/* */
@@ -62,6 +62,10 @@
/* optimized the definition of */
/* TX_TIMER_TICKS_PER_SECOND, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Xiuwen Cai Modified comment(s), */
/* added option for random */
/* number stack filling, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
@@ -170,6 +174,14 @@
#define TX_ENABLE_STACK_CHECKING
*/
/* Determine if random number is used for stack filling. By default, ThreadX uses a fixed
pattern for stack filling. When the following is defined, ThreadX uses a random number
for stack filling. This is effective only when TX_ENABLE_STACK_CHECKING is defined. */
/*
#define TX_ENABLE_RANDOM_NUMBER_STACK_FILLING
*/
/* Determine if preemption-threshold should be disabled. By default, preemption-threshold is
enabled. If the application does not use preemption-threshold, it may be disabled to reduce
code size and improve performance. */
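
A minimal tx_user.h sketch for enabling the option described above; both symbols are required, and tx_user.h itself is only picked up when TX_INCLUDE_USER_DEFINE_FILE is defined at build time.

#define TX_ENABLE_STACK_CHECKING
#define TX_ENABLE_RANDOM_NUMBER_STACK_FILLING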

View File

@@ -49,7 +49,7 @@ TX_SAFETY_CRITICAL_EXCEPTION_HANDLER
/* FUNCTION RELEASE */
/* */
/* _tx_initialize_kernel_enter PORTABLE C */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -93,6 +93,10 @@ TX_SAFETY_CRITICAL_EXCEPTION_HANDLER
/* 04-25-2022 Scott Larson Modified comment(s), */
/* added EPK initialization, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Xiuwen Cai Modified comment(s), */
/* added random generator */
/* initialization, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
VOID _tx_initialize_kernel_enter(VOID)
@@ -133,6 +137,9 @@ VOID _tx_initialize_kernel_enter(VOID)
later used to represent interrupt nesting. */
_tx_thread_system_state = TX_INITIALIZE_IN_PROGRESS;
/* Optional random number generator initialization. */
TX_INITIALIZE_RANDOM_GENERATOR_INITIALIZATION
/* Call the application provided initialization function. Pass the
first available memory address to it. */
tx_application_define(_tx_initialize_unused_memory);

View File

@@ -36,7 +36,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_create PORTABLE C */
/* 6.1.8 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -88,6 +88,10 @@
/* supported TX_MISRA_ENABLE, */
/* 08-02-2021 Scott Larson Removed unneeded cast, */
/* resulting in version 6.1.8 */
/* 10-31-2023 Xiuwen Cai Modified comment(s), */
/* added option for random */
/* number stack filling, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
UINT _tx_thread_create(TX_THREAD *thread_ptr, CHAR *name_ptr, VOID (*entry_function)(ULONG id), ULONG entry_input,
@@ -109,6 +113,17 @@ ALIGN_TYPE updated_stack_start;
#endif
#ifndef TX_DISABLE_STACK_FILLING
#if defined(TX_ENABLE_RANDOM_NUMBER_STACK_FILLING) && defined(TX_ENABLE_STACK_CHECKING)
/* Initialize the stack fill value to a 8-bit random value. */
thread_ptr -> tx_thread_stack_fill_value = ((ULONG) TX_RAND()) & 0xFFUL;
/* Duplicate the random value in each of the 4 bytes of the stack fill value. */
thread_ptr -> tx_thread_stack_fill_value = thread_ptr -> tx_thread_stack_fill_value |
(thread_ptr -> tx_thread_stack_fill_value << 8) |
(thread_ptr -> tx_thread_stack_fill_value << 16) |
(thread_ptr -> tx_thread_stack_fill_value << 24);
#endif
/* Set the thread stack to a pattern prior to creating the initial
stack frame. This pattern is used by the stack checking routines
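
A worked example of the replication above, assuming TX_RAND() happened to return a byte of 0xA5; the resulting pattern replaces the fixed 0xEFEFEFEF fill when the option is enabled:

#include <stdio.h>

int main(void)
{
    unsigned long fill = 0xA5UL;                     /* ((ULONG) TX_RAND()) & 0xFFUL */
    fill = fill | (fill << 8) | (fill << 16) | (fill << 24);
    printf("stack fill pattern = 0x%08lX\n", fill);  /* prints 0xA5A5A5A5 */
    return 0;
}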

View File

@@ -26,7 +26,7 @@
/* COMPONENT DEFINITION RELEASE */
/* */
/* txm_module_manager_util.h PORTABLE C */
/* 6.1.6 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* Scott Larson, Microsoft Corporation */
@@ -44,6 +44,9 @@
/* 04-02-2021 Scott Larson Modified comment(s) and */
/* optimized object checks, */
/* resulting in version 6.1.6 */
/* 10-31-2023 Tiejun Zhou Modified comment(s) and */
/* improved object check, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
@@ -100,13 +103,15 @@
/* Kernel objects should be outside the module at the very least. */
#define TXM_MODULE_MANAGER_PARAM_CHECK_OBJECT_FOR_USE(module_instance, obj_ptr, obj_size) \
((TXM_MODULE_MANAGER_ENSURE_OUTSIDE_MODULE(module_instance, obj_ptr, obj_size)) || \
(TXM_MODULE_MANAGER_ENSURE_OUTSIDE_MODULE(module_instance, obj_ptr, obj_size) || \
(_txm_module_manager_created_object_check(module_instance, (void *)obj_ptr) == TX_FALSE) || \
((void *) (obj_ptr) == TX_NULL))
/* When creating an object, the object must be inside the object pool. */
#define TXM_MODULE_MANAGER_PARAM_CHECK_OBJECT_FOR_CREATION(module_instance, obj_ptr, obj_size) \
((TXM_MODULE_MANAGER_ENSURE_INSIDE_OBJ_POOL(module_instance, obj_ptr, obj_size) && \
(_txm_module_manager_object_size_check(obj_ptr, obj_size) == TX_SUCCESS)) || \
(_txm_module_manager_created_object_check(module_instance, (void *)obj_ptr) == TX_FALSE) || \
((void *) (obj_ptr) == TX_NULL))
/* Strings we dereference can be in RW/RO/Shared areas. */

View File

@@ -39,7 +39,7 @@
/* FUNCTION RELEASE */
/* */
/* _txm_module_manager_thread_create PORTABLE C */
/* 6.2.1 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* Scott Larson, Microsoft Corporation */
@@ -94,6 +94,12 @@
/* 03-08-2023 Scott Larson Check module stack for */
/* overlap, */
/* resulting in version 6.2.1 */
/* 10-31-2023 Xiuwen Cai, Yajun xia Modified comment(s), */
/* added option for random */
/* number stack filling, */
/* fixed the kernel stack */
/* allocation issue, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
UINT _txm_module_manager_thread_create(TX_THREAD *thread_ptr, CHAR *name_ptr,
@@ -272,6 +278,17 @@ ULONG i;
}
#ifndef TX_DISABLE_STACK_FILLING
#if defined(TX_ENABLE_RANDOM_NUMBER_STACK_FILLING) && defined(TX_ENABLE_STACK_CHECKING)
/* Initialize the stack fill value to a 8-bit random value. */
thread_ptr -> tx_thread_stack_fill_value = ((ULONG) TX_RAND()) & 0xFFUL;
/* Duplicate the random value in each of the 4 bytes of the stack fill value. */
thread_ptr -> tx_thread_stack_fill_value = thread_ptr -> tx_thread_stack_fill_value |
(thread_ptr -> tx_thread_stack_fill_value << 8) |
(thread_ptr -> tx_thread_stack_fill_value << 16) |
(thread_ptr -> tx_thread_stack_fill_value << 24);
#endif
/* Set the thread stack to a pattern prior to creating the initial
stack frame. This pattern is used by the stack checking routines
@@ -312,9 +329,8 @@ ULONG i;
/* Initialize thread control block to all zeros. */
TX_MEMSET(thread_ptr, 0, sizeof(TX_THREAD));
#if TXM_MODULE_MEMORY_PROTECTION
/* If this is a memory protected module, allocate a kernel stack. */
if((module_instance -> txm_module_instance_property_flags) & TXM_MODULE_MEMORY_PROTECTION)
/* If the thread runs on user mode, allocate the kernel stack for syscall. */
if((module_instance -> txm_module_instance_property_flags) & TXM_MODULE_USER_MODE)
{
ULONG status;
@@ -339,6 +355,7 @@ ULONG i;
thread_ptr -> tx_thread_module_kernel_stack_size = TXM_MODULE_KERNEL_STACK_SIZE;
}
#if TXM_MODULE_MEMORY_PROTECTION
/* Place the stack parameters into the thread's control block. */
thread_ptr -> tx_thread_module_stack_start = stack_start;
thread_ptr -> tx_thread_module_stack_size = stack_size;

View File

@@ -26,7 +26,7 @@
/* APPLICATION INTERFACE DEFINITION RELEASE */
/* */
/* tx_api.h PORTABLE SMP */
/* 6.2.1 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -85,6 +85,13 @@
/* 03-08-2023 Tiejun Zhou Modified comment(s), */
/* update patch number, */
/* resulting in version 6.2.1 */
/* 10-31-2023 Xiuwen Cai Modified comment(s), */
/* added option for random */
/* number stack filling, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Tiejun Zhou Modified comment(s), */
/* update version number, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
@@ -136,8 +143,8 @@ extern "C" {
#define AZURE_RTOS_THREADX
#define THREADX_MAJOR_VERSION 6
#define THREADX_MINOR_VERSION 2
#define THREADX_PATCH_VERSION 1
#define THREADX_MINOR_VERSION 4
#define THREADX_PATCH_VERSION 0
/* Define the following symbol for backward compatibility */
#define EL_PRODUCT_THREADX
@@ -172,7 +179,11 @@ extern "C" {
#define TX_NO_MESSAGES ((UINT) 0)
#define TX_EMPTY ((ULONG) 0)
#define TX_CLEAR_ID ((ULONG) 0)
#if defined(TX_ENABLE_RANDOM_NUMBER_STACK_FILLING) && defined(TX_ENABLE_STACK_CHECKING)
#define TX_STACK_FILL (thread_ptr -> tx_thread_stack_fill_value)
#else
#define TX_STACK_FILL ((ULONG) 0xEFEFEFEFUL)
#endif
/* Thread execution state values. */
@@ -639,6 +650,12 @@ typedef struct TX_THREAD_STRUCT
cleanup routine executes. */
ULONG tx_thread_suspension_sequence;
#if defined(TX_ENABLE_RANDOM_NUMBER_STACK_FILLING) && defined(TX_ENABLE_STACK_CHECKING)
/* Define the random stack fill number. This can be used to detect stack overflow. */
ULONG tx_thread_stack_fill_value;
#endif
/* Define the user extension field. This typically is defined
to white space, but some ports of ThreadX may need to have
additional fields in the thread control block. This is
@@ -1886,6 +1903,21 @@ UINT _tx_trace_interrupt_control(UINT new_posture);
#endif
/* Add a default macro that can be re-defined in tx_port.h to add processing to the initialize random number generator.
By default, this is simply defined as whitespace. */
#ifndef TX_INITIALIZE_RANDOM_GENERATOR_INITIALIZATION
#define TX_INITIALIZE_RANDOM_GENERATOR_INITIALIZATION
#endif
/* Define the TX_RAND macro to the standard library function, if not already defined. */
#ifndef TX_RAND
#define TX_RAND() rand()
#endif
/* Check for MISRA compliance requirements. */
#ifdef TX_MISRA_ENABLE

View File

@@ -26,7 +26,7 @@
/* COMPONENT DEFINITION RELEASE */
/* */
/* tx_thread.h PORTABLE SMP */
/* 6.1 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -42,6 +42,8 @@
/* DATE NAME DESCRIPTION */
/* */
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 10-31-2023 Tiejun Zhou Fixed MISRA2012 rule 8.3, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
@@ -1349,7 +1351,7 @@ TX_THREAD *thread_remap_list[TX_THREAD_SMP_MAX_CORES];
}
static INLINE_DECLARE ULONG _tx_thread_smp_preemptable_threads_get(UINT priority, TX_THREAD *possible_preemption_list[])
static INLINE_DECLARE ULONG _tx_thread_smp_preemptable_threads_get(UINT priority, TX_THREAD *possible_preemption_list[TX_THREAD_SMP_MAX_CORES])
{
UINT i, j, k;
@@ -1668,7 +1670,7 @@ ULONG _tx_thread_smp_available_cores_get(void);
ULONG _tx_thread_smp_possible_cores_get(void);
UINT _tx_thread_smp_lowest_priority_get(void);
UINT _tx_thread_smp_remap_solution_find(TX_THREAD *schedule_thread, ULONG available_cores, ULONG thread_possible_cores, ULONG test_possible_cores);
ULONG _tx_thread_smp_preemptable_threads_get(UINT priority, TX_THREAD *possible_preemption_list[]);
ULONG _tx_thread_smp_preemptable_threads_get(UINT priority, TX_THREAD *possible_preemption_list[TX_THREAD_SMP_MAX_CORES]);
VOID _tx_thread_smp_simple_priority_change(TX_THREAD *thread_ptr, UINT new_priority);
#endif

View File

@@ -26,7 +26,7 @@
/* PORT SPECIFIC C INFORMATION RELEASE */
/* */
/* tx_user.h PORTABLE C */
/* 6.1.11 */
/* 6.3.0 */
/* */
/* AUTHOR */
/* */
@@ -62,6 +62,10 @@
/* optimized the definition of */
/* TX_TIMER_TICKS_PER_SECOND, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Xiuwen Cai Modified comment(s), */
/* added option for random */
/* number stack filling, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
@@ -170,6 +174,14 @@
#define TX_ENABLE_STACK_CHECKING
*/
/* Determine if random number is used for stack filling. By default, ThreadX uses a fixed
pattern for stack filling. When the following is defined, ThreadX uses a random number
for stack filling. This is effective only when TX_ENABLE_STACK_CHECKING is defined. */
/*
#define TX_ENABLE_RANDOM_NUMBER_STACK_FILLING
*/
/* Determine if preemption-threshold should be disabled. By default, preemption-threshold is
enabled. If the application does not use preemption-threshold, it may be disabled to reduce
code size and improve performance. */

View File

@@ -35,7 +35,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_byte_pool_search PORTABLE SMP */
/* 6.1.7 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -81,6 +81,8 @@
/* calculation, and reduced */
/* number of search resets, */
/* resulting in version 6.1.7 */
/* 10-31-2023 Tiejun Zhou Fixed MISRA2012 rule 10.4_a, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
UCHAR *_tx_byte_pool_search(TX_BYTE_POOL *pool_ptr, ULONG memory_size)
@@ -110,7 +112,7 @@ UINT blocks_searched = ((UINT) 0);
/* First, determine if there are enough bytes in the pool. */
/* Theoretical bytes available = free bytes + ((fragments-2) * overhead of each block) */
total_theoretical_available = pool_ptr -> tx_byte_pool_available + ((pool_ptr -> tx_byte_pool_fragments - 2) * ((sizeof(UCHAR *)) + (sizeof(ALIGN_TYPE))));
total_theoretical_available = pool_ptr -> tx_byte_pool_available + ((pool_ptr -> tx_byte_pool_fragments - 2U) * ((sizeof(UCHAR *)) + (sizeof(ALIGN_TYPE))));
if (memory_size >= total_theoretical_available)
{

View File

@@ -47,7 +47,7 @@ TX_SAFETY_CRITICAL_EXCEPTION_HANDLER
/* FUNCTION RELEASE */
/* */
/* _tx_initialize_kernel_enter PORTABLE SMP */
/* 6.1 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -87,7 +87,11 @@ TX_SAFETY_CRITICAL_EXCEPTION_HANDLER
/* */
/* DATE NAME DESCRIPTION */
/* */
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 10-31-2023 Xiuwen Cai Modified comment(s), */
/* added random generator */
/* initialization, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
VOID _tx_initialize_kernel_enter(VOID)
@@ -134,6 +138,9 @@ ULONG other_core_status, i;
later used to represent interrupt nesting. */
_tx_thread_system_state[0] = TX_INITIALIZE_IN_PROGRESS;
/* Optional random number generator initialization. */
TX_INITIALIZE_RANDOM_GENERATOR_INITIALIZATION
/* Call the application provided initialization function. Pass the
first available memory address to it. */
tx_application_define(_tx_initialize_unused_memory);

View File

@@ -37,7 +37,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_create PORTABLE SMP */
/* 6.2.0 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -89,6 +89,10 @@
/* restore interrupts at end */
/* of if block, */
/* resulting in version 6.2.0 */
/* 10-31-2023 Xiuwen Cai Modified comment(s), */
/* added option for random */
/* number stack filling, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
UINT _tx_thread_create(TX_THREAD *thread_ptr, CHAR *name_ptr,
@@ -110,6 +114,17 @@ ALIGN_TYPE updated_stack_start;
#ifndef TX_DISABLE_STACK_FILLING
#if defined(TX_ENABLE_RANDOM_NUMBER_STACK_FILLING) && defined(TX_ENABLE_STACK_CHECKING)
/* Initialize the stack fill value to an 8-bit random value. */
thread_ptr -> tx_thread_stack_fill_value = ((ULONG) TX_RAND()) & 0xFFUL;
/* Duplicate the random value in each of the 4 bytes of the stack fill value. */
thread_ptr -> tx_thread_stack_fill_value = thread_ptr -> tx_thread_stack_fill_value |
(thread_ptr -> tx_thread_stack_fill_value << 8) |
(thread_ptr -> tx_thread_stack_fill_value << 16) |
(thread_ptr -> tx_thread_stack_fill_value << 24);
#endif
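The shift-and-OR sequence above simply replicates one random byte into all four bytes of the 32-bit fill word; for example (illustration only), a draw of 0x5A produces the pattern 0x5A5A5A5A:

#include "tx_api.h"

/* Illustration of the replication above: one 8-bit random value becomes
   a repeating 32-bit stack fill pattern. */
static ULONG example_fill_pattern(void)
{
    ULONG fill = 0x5AUL;                 /* as if (TX_RAND() & 0xFF) returned 0x5A */

    fill = fill | (fill << 8) | (fill << 16) | (fill << 24);
    return fill;                         /* 0x5A5A5A5AUL */
}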
/* Set the thread stack to a pattern prior to creating the initial
stack frame. This pattern is used by the stack checking routines

View File

@@ -826,7 +826,7 @@ TX_THREAD *thread_remap_list[TX_THREAD_SMP_MAX_CORES];
}
ULONG _tx_thread_smp_preemptable_threads_get(UINT priority, TX_THREAD *possible_preemption_list[])
ULONG _tx_thread_smp_preemptable_threads_get(UINT priority, TX_THREAD *possible_preemption_list[TX_THREAD_SMP_MAX_CORES])
{
UINT i, j, k;

View File

@@ -38,7 +38,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_system_suspend PORTABLE SMP */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -91,6 +91,8 @@
/* 04-25-2022 Scott Larson Modified comments and fixed */
/* loop to find next thread, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Fixed MISRA2012 rule 10.4_a, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
VOID _tx_thread_system_suspend(TX_THREAD *thread_ptr)
@@ -671,7 +673,7 @@ UINT processing_complete;
complex_path_possible = possible_cores & available_cores;
/* Check if we need to loop to find the next highest priority thread. */
if (next_priority == TX_MAX_PRIORITIES)
if (next_priority == (ULONG)TX_MAX_PRIORITIES)
{
loop_finished = TX_TRUE;
}

View File

@@ -322,7 +322,7 @@ void _tx_initialize_start_interrupts(void);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARCv2_EM/MetaWare Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARCv2_EM/MetaWare Version 6.4.0 *";
#else
#ifdef TX_MISRA_ENABLE
extern CHAR _tx_version_id[100];

View File

@@ -336,7 +336,7 @@ VOID tx_thread_register_bank_assign(VOID *thread_ptr, UINT register_bank);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARC_HS/MetaWare Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARC_HS/MetaWare Version 6.4.0 *";
#else
#ifdef TX_MISRA_ENABLE
extern CHAR _tx_version_id[100];

View File

@@ -320,7 +320,7 @@ unsigned int _tx_thread_interrupt_restore(UINT old_posture);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM11/AC5 Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM11/AC5 Version 6.4.0 *";
#else
extern CHAR _tx_version_id[];
#endif

View File

@@ -309,7 +309,7 @@ unsigned int _tx_thread_interrupt_restore(UINT old_posture);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM11/GNU Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM11/GNU Version 6.4.0 *";
#else
extern CHAR _tx_version_id[];
#endif

View File

@@ -375,7 +375,7 @@ void _tx_thread_interrupt_restore(UINT old_posture);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM11/IAR Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM11/IAR Version 6.4.0 *";
#else
#ifdef TX_MISRA_ENABLE
extern CHAR _tx_version_id[100];

View File

@@ -322,7 +322,7 @@ unsigned int _tx_thread_interrupt_restore(UINT old_posture);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM9/AC5 Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM9/AC5 Version 6.4.0 *";
#else
extern CHAR _tx_version_id[];
#endif

View File

@@ -309,7 +309,7 @@ unsigned int _tx_thread_interrupt_restore(UINT old_posture);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM9/GNU Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM9/GNU Version 6.4.0 *";
#else
extern CHAR _tx_version_id[];
#endif

View File

@@ -375,7 +375,7 @@ void _tx_thread_interrupt_restore(UINT old_posture);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM9/IAR Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARM9/IAR Version 6.4.0 *";
#else
#ifdef TX_MISRA_ENABLE
extern CHAR _tx_version_id[100];

View File

@@ -271,7 +271,7 @@ unsigned int _tx_thread_interrupt_control(unsigned int);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX C667x/TI Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX C667x/TI Version 6.4.0 *";
#else
extern CHAR _tx_version_id[];
#endif

View File

@@ -321,7 +321,7 @@ void tx_thread_vfp_disable(void);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARMv7-A Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARMv7-A Version 6.4.0 *";
#else
extern CHAR _tx_version_id[];
#endif

View File

@@ -19,17 +19,20 @@
/** */
/**************************************************************************/
/**************************************************************************/
.arm
#ifdef TX_ENABLE_FIQ_SUPPORT
SVC_MODE = 0xD3 // Disable IRQ/FIQ, SVC mode
IRQ_MODE = 0xD2 // Disable IRQ/FIQ, IRQ mode
#else
SVC_MODE = 0x93 // Disable IRQ, SVC mode
IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
SVC_MODE = 0x13 // SVC mode
IRQ_MODE = 0x12 // IRQ mode
.global _tx_thread_system_state
.global _tx_thread_current_ptr
.global _tx_thread_execute_ptr
@@ -42,7 +45,6 @@ IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_context_restore
since it will never be called in 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -50,7 +52,7 @@ IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
/* FUNCTION RELEASE */
/* */
/* _tx_thread_context_restore ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -88,6 +90,12 @@ IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
.global _tx_thread_context_restore
@@ -123,9 +131,9 @@ _tx_thread_context_restore:
/* Just recover the saved registers and return to the point of
interrupt. */
LDMIA sp!, {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
POP {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
MSR SPSR_cxsf, r0 // Put SPSR back
LDMIA sp!, {r0-r3} // Recover r0-r3
POP {r0-r3} // Recover r0-r3
MOVS pc, lr // Return to point of interrupt
__tx_thread_not_nested_restore:
@@ -154,26 +162,23 @@ __tx_thread_no_preempt_restore:
/* Pickup the saved stack pointer. */
/* Recover the saved context and return to the point of interrupt. */
LDMIA sp!, {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
POP {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
MSR SPSR_cxsf, r0 // Put SPSR back
LDMIA sp!, {r0-r3} // Recover r0-r3
POP {r0-r3} // Recover r0-r3
MOVS pc, lr // Return to point of interrupt
__tx_thread_preempt_restore:
LDMIA sp!, {r3, r10, r12, lr} // Recover temporarily saved registers
POP {r3, r10, r12, lr} // Recover temporarily saved registers
MOV r1, lr // Save lr (point of interrupt)
MOV r2, #SVC_MODE // Build SVC mode CPSR
MSR CPSR_c, r2 // Enter SVC mode
CPS #SVC_MODE // Enter SVC mode
STR r1, [sp, #-4]! // Save point of interrupt
STMDB sp!, {r4-r12, lr} // Save upper half of registers
PUSH {r4-r12, lr} // Save upper half of registers
MOV r4, r3 // Save SPSR in r4
MOV r2, #IRQ_MODE // Build IRQ mode CPSR
MSR CPSR_c, r2 // Enter IRQ mode
LDMIA sp!, {r0-r3} // Recover r0-r3
MOV r5, #SVC_MODE // Build SVC mode CPSR
MSR CPSR_c, r5 // Enter SVC mode
STMDB sp!, {r0-r3} // Save r0-r3 on thread's stack
CPS #IRQ_MODE // Enter IRQ mode
POP {r0-r3} // Recover r0-r3
CPS #SVC_MODE // Enter SVC mode
PUSH {r0-r3} // Save r0-r3 on thread's stack
LDR r1, =_tx_thread_current_ptr // Pickup address of current thread ptr
LDR r0, [r1] // Pickup current thread pointer
@@ -186,13 +191,11 @@ __tx_thread_preempt_restore:
STR r2, [sp, #-4]! // Save FPSCR
VSTMDB sp!, {D16-D31} // Save D16-D31
VSTMDB sp!, {D0-D15} // Save D0-D15
_tx_skip_irq_vfp_save:
#endif
MOV r3, #1 // Build interrupt stack type
STMDB sp!, {r3, r4} // Save interrupt stack type and SPSR
PUSH {r3, r4} // Save interrupt stack type and SPSR
STR sp, [r0, #8] // Save stack pointer in thread control
// block
@@ -217,6 +220,5 @@ __tx_thread_dont_save_ts:
__tx_thread_idle_system_restore:
/* Just return back to the scheduler! */
MOV r0, #SVC_MODE // Build SVC mode CPSR
MSR CPSR_c, r0 // Enter SVC mode
CPS #SVC_MODE // Enter SVC mode
B _tx_thread_schedule // Return to scheduler

View File

@@ -19,6 +19,16 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
.global _tx_thread_system_state
.global _tx_thread_current_ptr
@@ -28,7 +38,6 @@
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_context_save
since it will never be called in 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -36,7 +45,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_context_save ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -73,6 +82,12 @@
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
.global _tx_thread_context_save
@@ -84,7 +99,7 @@ _tx_thread_context_save:
/* Check for a nested interrupt condition. */
STMDB sp!, {r0-r3} // Save some working registers
PUSH {r0-r3} // Save some working registers
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Disable FIQ interrupts
#endif
@@ -95,15 +110,15 @@ _tx_thread_context_save:
/* Nested interrupt condition. */
ADD r2, r2, #1 // Increment the interrupt counter
ADD r2, #1 // Increment the interrupt counter
STR r2, [r3] // Store it back in the variable
/* Save the rest of the scratch registers on the stack and return to the
calling ISR. */
MRS r0, SPSR // Pickup saved SPSR
SUB lr, lr, #4 // Adjust point of interrupt
STMDB sp!, {r0, r10, r12, lr} // Store other registers
SUB lr, #4 // Adjust point of interrupt
PUSH {r0, r10, r12, lr} // Store other registers
/* Return to the ISR. */
@@ -123,7 +138,7 @@ _tx_thread_context_save:
__tx_thread_not_nested_save:
/* Otherwise, not nested, check to see if a thread was running. */
ADD r2, r2, #1 // Increment the interrupt counter
ADD r2, #1 // Increment the interrupt counter
STR r2, [r3] // Store it back in the variable
LDR r1, =_tx_thread_current_ptr // Pickup address of current thread ptr
LDR r0, [r1] // Pickup current thread pointer
@@ -134,8 +149,8 @@ __tx_thread_not_nested_save:
/* Save minimal context of interrupted thread. */
MRS r2, SPSR // Pickup saved SPSR
SUB lr, lr, #4 // Adjust point of interrupt
STMDB sp!, {r2, r10, r12, lr} // Store other registers
SUB lr, #4 // Adjust point of interrupt
PUSH {r2, r10, r12, lr} // Store other registers
MOV r10, #0 // Clear stack limit
@@ -168,5 +183,5 @@ __tx_thread_idle_system_save:
POP {lr} // Recover ISR lr
#endif
ADD sp, sp, #16 // Recover saved registers
ADD sp, #16 // Recover saved registers
B __tx_irq_processing_return // Continue IRQ processing

View File

@@ -19,6 +19,9 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
SVC_MODE = 0xD3 // SVC mode
FIQ_MODE = 0xD1 // FIQ mode
@@ -48,7 +51,7 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_context_restore ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -86,6 +89,9 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_thread_fiq_context_restore

View File

@@ -19,6 +19,9 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.global _tx_thread_system_state
.global _tx_thread_current_ptr
@@ -37,7 +40,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_context_save ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -74,6 +77,9 @@
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_thread_fiq_context_save

View File

@@ -19,6 +19,16 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
#ifdef TX_ENABLE_FIQ_SUPPORT
DISABLE_INTS = 0xC0 // Disable IRQ/FIQ interrupts
@@ -28,11 +38,6 @@ DISABLE_INTS = 0x80 // Disable IRQ interrupts
MODE_MASK = 0x1F // Mode mask
FIQ_MODE_BITS = 0x11 // FIQ mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_fiq_nesting_end
since it will never be called in 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -40,7 +45,7 @@ FIQ_MODE_BITS = 0x11 // FIQ mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_nesting_end ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -82,8 +87,17 @@ FIQ_MODE_BITS = 0x11 // FIQ mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_fiq_nesting_end
.type _tx_thread_fiq_nesting_end,function
_tx_thread_fiq_nesting_end:
@@ -97,8 +111,4 @@ _tx_thread_fiq_nesting_end:
ORR r0, r0, #FIQ_MODE_BITS // Build IRQ mode CPSR
MSR CPSR_c, r0 // Reenter IRQ mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -19,16 +19,21 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
FIQ_DISABLE = 0x40 // FIQ disable bit
MODE_MASK = 0x1F // Mode mask
SYS_MODE_BITS = 0x1F // System mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_fiq_nesting_start
since it will never be called in 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -36,7 +41,7 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_nesting_start ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -75,8 +80,17 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_fiq_nesting_start
.type _tx_thread_fiq_nesting_start,function
_tx_thread_fiq_nesting_start:
@@ -89,8 +103,4 @@ _tx_thread_fiq_nesting_start:
// and push r1 just to keep 8-byte alignment
BIC r0, r0, #FIQ_DISABLE // Build enable FIQ CPSR
MSR CPSR_c, r0 // Enter system mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -19,26 +19,22 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
INT_MASK = 0x03F
/* Define the 16-bit Thumb mode veneer for _tx_thread_interrupt_control for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_interrupt_control
$_tx_thread_interrupt_control:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_interrupt_control // Call _tx_thread_interrupt_control function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
INT_MASK = 0x0C0
IRQ_MASK = 0x080
#ifdef TX_ENABLE_FIQ_SUPPORT
FIQ_MASK = 0x040
#endif
.text
.align 2
@@ -47,7 +43,7 @@ $_tx_thread_interrupt_control:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_interrupt_control ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -80,25 +76,38 @@ $_tx_thread_interrupt_control:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_interrupt_control
.type _tx_thread_interrupt_control,function
_tx_thread_interrupt_control:
MRS r1, CPSR // Pickup current CPSR
/* Pickup current interrupt lockout posture. */
MRS r3, CPSR // Pickup current CPSR
MOV r2, #INT_MASK // Build interrupt mask
AND r1, r3, r2 // Clear interrupt lockout bits
ORR r1, r1, r0 // Or-in new interrupt lockout bits
/* Apply the new interrupt posture. */
MSR CPSR_c, r1 // Setup new CPSR
BIC r0, r3, r2 // Return previous interrupt mask
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Disable IRQ and FIQ
#else
MOV pc, lr // Return to caller
CPSID i // Disable IRQ
#endif
TST r0, #IRQ_MASK
BNE no_irq
CPSIE i
no_irq:
#ifdef TX_ENABLE_FIQ_SUPPORT
TST r0, #FIQ_MASK
BNE no_fiq
CPSIE f
no_fiq:
#endif
AND r0, r1, #INT_MASK
BX lr
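The assembly above underpins the tx_interrupt_control service. From C, application or driver code usually changes the interrupt posture through that service (or the port's TX_DISABLE/TX_RESTORE macros); a minimal sketch, assuming the documented TX_INT_DISABLE/TX_INT_ENABLE posture values:

#include "tx_api.h"

/* Sketch: guard a short critical section by raising the interrupt posture. */
static VOID increment_shared_counter(volatile ULONG *counter)
{
    UINT old_posture;

    old_posture = tx_interrupt_control(TX_INT_DISABLE);   /* lock out interrupts, keep old posture */
    (*counter)++;                                          /* protected update                      */
    tx_interrupt_control(old_posture);                     /* restore the previous posture          */
}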

View File

@@ -19,23 +19,16 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
/* Define the 16-bit Thumb mode veneer for _tx_thread_interrupt_disable for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_interrupt_disable
$_tx_thread_interrupt_disable:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_interrupt_disable // Call _tx_thread_interrupt_disable function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
.text
.align 2
@@ -44,7 +37,7 @@ $_tx_thread_interrupt_disable:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_interrupt_disable ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -76,8 +69,17 @@ $_tx_thread_interrupt_disable:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_interrupt_disable
.type _tx_thread_interrupt_disable,function
_tx_thread_interrupt_disable:
@@ -94,8 +96,4 @@ _tx_thread_interrupt_disable:
CPSID i // Disable IRQ
#endif
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif

View File

@@ -19,23 +19,21 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
/* Define the 16-bit Thumb mode veneer for _tx_thread_interrupt_restore for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_interrupt_restore
$_tx_thread_interrupt_restore:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_interrupt_restore // Call _tx_thread_interrupt_restore function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
IRQ_MASK = 0x080
#ifdef TX_ENABLE_FIQ_SUPPORT
FIQ_MASK = 0x040
#endif
.text
.align 2
@@ -44,7 +42,7 @@ $_tx_thread_interrupt_restore:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_interrupt_restore ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -77,17 +75,30 @@ $_tx_thread_interrupt_restore:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_interrupt_restore
.type _tx_thread_interrupt_restore,function
_tx_thread_interrupt_restore:
/* Apply the new interrupt posture. */
MSR CPSR_c, r0 // Setup new CPSR
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
TST r0, #IRQ_MASK
BNE no_irq
CPSIE i
no_irq:
#ifdef TX_ENABLE_FIQ_SUPPORT
TST r0, #FIQ_MASK
BNE no_fiq
CPSIE f
no_fiq:
#endif
BX lr // Return to caller

View File

@@ -19,6 +19,16 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
#ifdef TX_ENABLE_FIQ_SUPPORT
DISABLE_INTS = 0xC0 // Disable IRQ/FIQ interrupts
@@ -28,11 +38,6 @@ DISABLE_INTS = 0x80 // Disable IRQ interrupts
MODE_MASK = 0x1F // Mode mask
IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_irq_nesting_end
since it will never be called in 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -40,7 +45,7 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_irq_nesting_end ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -82,8 +87,17 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_irq_nesting_end
.type _tx_thread_irq_nesting_end,function
_tx_thread_irq_nesting_end:
@@ -96,8 +110,4 @@ _tx_thread_irq_nesting_end:
BIC r0, r0, #MODE_MASK // Clear mode bits
ORR r0, r0, #IRQ_MODE_BITS // Build IRQ mode CPSR
MSR CPSR_c, r0 // Reenter IRQ mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -19,16 +19,21 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
IRQ_DISABLE = 0x80 // IRQ disable bit
MODE_MASK = 0x1F // Mode mask
SYS_MODE_BITS = 0x1F // System mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_irq_nesting_start
since it will never be called in 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -36,7 +41,7 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_irq_nesting_start ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -75,8 +80,17 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_irq_nesting_start
.type _tx_thread_irq_nesting_start,function
_tx_thread_irq_nesting_start:
@@ -89,8 +103,4 @@ _tx_thread_irq_nesting_start:
// and push r1 just to keep 8-byte alignment
BIC r0, r0, #IRQ_DISABLE // Build enable IRQ CPSR
MSR CPSR_c, r0 // Enter system mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -19,38 +19,33 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
.global _tx_thread_execute_ptr
.global _tx_thread_current_ptr
.global _tx_timer_time_slice
/* Define the 16-bit Thumb mode veneer for _tx_thread_schedule for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_schedule
.type $_tx_thread_schedule,function
$_tx_thread_schedule:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_schedule // Call _tx_thread_schedule function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
.text
.align 2
#define IRQ_MODE 0x12 // IRQ mode
#define SVC_MODE 0x13 // SVC mode
/**************************************************************************/
/* */
/* FUNCTION RELEASE */
/* */
/* _tx_thread_schedule ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -89,8 +84,17 @@ $_tx_thread_schedule:
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_schedule
.type _tx_thread_schedule,function
_tx_thread_schedule:
@@ -134,30 +138,39 @@ __tx_thread_schedule_loop:
/* Setup time-slice, if present. */
LDR r2, =_tx_timer_time_slice // Pickup address of time-slice
// variable
LDR sp, [r0, #8] // Switch stack pointers
LDR r2, =_tx_timer_time_slice // Pickup address of time-slice variable
STR r3, [r2] // Setup time-slice
LDR sp, [r0, #8] // Switch stack pointers
#if (defined(TX_ENABLE_EXECUTION_CHANGE_NOTIFY) || defined(TX_EXECUTION_PROFILE_ENABLE))
/* Call the thread entry function to indicate the thread is executing. */
/* Call the thread entry function to indicate the thread is executing. */
MOV r5, r0 // Save r0
BL _tx_execution_thread_enter // Call the thread execution enter function
MOV r0, r5 // Restore r0
#endif
/* Determine if an interrupt frame or a synchronous task suspension frame
is present. */
/* Determine if an interrupt frame or a synchronous task suspension frame is present. */
LDMIA sp!, {r4, r5} // Pickup the stack type and saved CPSR
POP {r4, r5} // Pickup the stack type and saved CPSR
CMP r4, #0 // Check for synchronous context switch
BEQ _tx_solicited_return
#if !defined(THUMB_MODE)
MSR SPSR_cxsf, r5 // Setup SPSR for return
#else
CPS #IRQ_MODE // Enter IRQ mode
MSR SPSR_cxsf, r5 // Setup SPSR for return
LDR r1, [r0, #8] // Get thread SP
LDR lr, [r1, #0x40] // Get thread PC
CPS #SVC_MODE // Enter SVC mode
#endif
#ifdef TX_ENABLE_VFP_SUPPORT
LDR r1, [r0, #144] // Pickup the VFP enabled flag
CMP r1, #0 // Is the VFP enabled?
LDR r2, [r0, #144] // Pickup the VFP enabled flag
CMP r2, #0 // Is the VFP enabled?
BEQ _tx_skip_interrupt_vfp_restore // No, skip VFP interrupt restore
VLDMIA sp!, {D0-D15} // Recover D0-D15
VLDMIA sp!, {D16-D31} // Recover D16-D31
@@ -165,7 +178,15 @@ __tx_thread_schedule_loop:
VMSR FPSCR, r4 // Restore FPSCR
_tx_skip_interrupt_vfp_restore:
#endif
#if !defined(THUMB_MODE)
LDMIA sp!, {r0-r12, lr, pc}^ // Return to point of thread interrupt
#else
POP {r0-r12, lr} // Restore registers
ADD sp, #4 // Fix stack pointer (skip PC saved on stack)
CPS #IRQ_MODE // Enter IRQ mode
SUBS pc, lr, #0 // Return to point of thread interrupt
#endif
_tx_solicited_return:
@@ -179,52 +200,63 @@ _tx_solicited_return:
VMSR FPSCR, r4 // Restore FPSCR
_tx_skip_solicited_vfp_restore:
#endif
MSR CPSR_cxsf, r5 // Recover CPSR
LDMIA sp!, {r4-r11, lr} // Return to thread synchronously
#ifdef __THUMB_INTERWORK
POP {r4-r11, lr} // Restore registers
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif
#ifdef TX_ENABLE_VFP_SUPPORT
#if defined(THUMB_MODE)
.thumb_func
#endif
.global tx_thread_vfp_enable
.type tx_thread_vfp_enable,function
tx_thread_vfp_enable:
MRS r2, CPSR // Pickup the CPSR
MRS r0, CPSR // Pickup current CPSR
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Enable IRQ and FIQ interrupts
CPSID if // Disable IRQ and FIQ
#else
CPSID i // Enable IRQ interrupts
CPSID i // Disable IRQ
#endif
LDR r0, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r0] // Pickup current thread pointer
LDR r2, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r2] // Pickup current thread pointer
CMP r1, #0 // Check for NULL thread pointer
BEQ __tx_no_thread_to_enable // If NULL, skip VFP enable
MOV r0, #1 // Build enable value
STR r0, [r1, #144] // Set the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
__tx_no_thread_to_enable:
MSR CPSR_cxsf, r2 // Recover CPSR
BX LR // Return to caller
BEQ restore_ints // If NULL, skip VFP enable
MOV r2, #1 // Build enable value
STR r2, [r1, #144] // Set the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
B restore_ints
#if defined(THUMB_MODE)
.thumb_func
#endif
.global tx_thread_vfp_disable
.type tx_thread_vfp_disable,function
tx_thread_vfp_disable:
MRS r2, CPSR // Pickup the CPSR
MRS r0, CPSR // Pickup current CPSR
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Enable IRQ and FIQ interrupts
CPSID if // Disable IRQ and FIQ
#else
CPSID i // Enable IRQ interrupts
CPSID i // Disable IRQ
#endif
LDR r0, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r0] // Pickup current thread pointer
LDR r2, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r2] // Pickup current thread pointer
CMP r1, #0 // Check for NULL thread pointer
BEQ __tx_no_thread_to_disable // If NULL, skip VFP disable
MOV r0, #0 // Build disable value
STR r0, [r1, #144] // Clear the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
__tx_no_thread_to_disable:
MSR CPSR_cxsf, r2 // Recover CPSR
BX LR // Return to caller
BEQ restore_ints // If NULL, skip VFP disable
MOV r2, #0 // Build disable value
STR r2, [r1, #144] // Clear the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
restore_ints:
TST r0, #IRQ_MASK
BNE no_irq
CPSIE i
no_irq:
#ifdef TX_ENABLE_FIQ_SUPPORT
TST r0, #FIQ_MASK
BNE no_fiq
CPSIE f
no_fiq:
#endif
BX lr
#endif
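In this port a thread that performs floating-point work is expected to call tx_thread_vfp_enable() first so the scheduler saves and restores the VFP registers for it (the flag written at offset 144 above); a minimal usage sketch, assuming the prototypes from the port header:

#include "tx_api.h"

void tx_thread_vfp_enable(void);    /* provided by this port (see assembly above) */
void tx_thread_vfp_disable(void);

static VOID math_thread_entry(ULONG input)
{
    volatile float scaled;

    tx_thread_vfp_enable();            /* request VFP context save/restore for this thread */
    scaled = (float)input * 0.5f;      /* ...floating-point work...                         */
    (void)scaled;
    tx_thread_vfp_disable();           /* optionally drop the VFP context again             */
}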

View File

@@ -19,33 +19,26 @@
/** */
/**************************************************************************/
/**************************************************************************/
.arm
SVC_MODE = 0x13 // SVC mode
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSR_MASK = 0xDF // Mask initial CPSR, IRQ & FIQ interrupts enabled
#else
CPSR_MASK = 0x9F // Mask initial CPSR, IRQ interrupts enabled
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
/* Define the 16-bit Thumb mode veneer for _tx_thread_stack_build for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.syntax unified
#if defined(THUMB_MODE)
.thumb
.global $_tx_thread_stack_build
.type $_tx_thread_stack_build,function
$_tx_thread_stack_build:
BX pc // Switch to 32-bit mode
NOP //
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_stack_build // Call _tx_thread_stack_build function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
SVC_MODE = 0x13 // SVC mode
THUMB_MASK = 0x20 // Thumb bit mask
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSR_MASK = 0xFF // Mask initial CPSR, T, IRQ & FIQ interrupts enabled
#else
CPSR_MASK = 0xBF // Mask initial CPSR, T, IRQ interrupts enabled
#endif
.text
.align 2
@@ -54,7 +47,7 @@ $_tx_thread_stack_build:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_stack_build ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -89,8 +82,17 @@ $_tx_thread_stack_build:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_stack_build
.type _tx_thread_stack_build,function
_tx_thread_stack_build:
@@ -128,6 +130,15 @@ _tx_thread_stack_build:
MOV r3, #1 // Build interrupt stack type
STR r3, [r2, #0] // Store stack type
MRS r3, CPSR // Pickup CPSR
BIC r3, #CPSR_MASK // Mask mode bits of CPSR
ORR r3, #SVC_MODE // Build CPSR, SYS mode, interrupts enabled
TST r1, #1 // Check if the initial PC is a Thumb function
IT NE
ORRNE r3, #THUMB_MASK // If the initial PC is a thumb function, CPSR must reflect this
STR r3, [r2, #4] // Store initial CPSR
MOV r3, #0 // Build initial register value
STR r3, [r2, #8] // Store initial r0
STR r3, [r2, #12] // Store initial r1
@@ -139,26 +150,20 @@ _tx_thread_stack_build:
STR r3, [r2, #36] // Store initial r7
STR r3, [r2, #40] // Store initial r8
STR r3, [r2, #44] // Store initial r9
LDR r3, [r0, #12] // Pickup stack starting address
STR r3, [r2, #48] // Store initial r10 (sl)
LDR r3,=_tx_thread_schedule // Pickup address of _tx_thread_schedule for GDB backtrace
STR r3, [r2, #60] // Store initial r14 (lr)
MOV r3, #0 // Build initial register value
STR r3, [r2, #52] // Store initial r11
STR r3, [r2, #56] // Store initial r12
STR r1, [r2, #64] // Store initial pc
STR r3, [r2, #68] // 0 for back-trace
MRS r1, CPSR // Pickup CPSR
BIC r1, r1, #CPSR_MASK // Mask mode bits of CPSR
ORR r3, r1, #SVC_MODE // Build CPSR, SVC mode, interrupts enabled
STR r3, [r2, #4] // Store initial CPSR
LDR r3, [r0, #12] // Pickup stack starting address
STR r3, [r2, #48] // Store initial r10 (sl)
LDR r3,=_tx_thread_schedule // Pickup address of _tx_thread_schedule for GDB backtrace
STR r3, [r2, #60] // Store initial r14 (lr)
STR r1, [r2, #64] // Store initial pc
/* Setup stack pointer. */
STR r2, [r0, #8] // Save stack pointer in thread's
// control block
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif
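The new sequence above folds the Thumb decision into the initial CPSR: bit 0 of the thread entry address selects the T bit, so a Thumb-compiled entry function starts in the correct state. A C rendering of the same logic (a sketch, using the 0xFF mask from the TX_ENABLE_FIQ_SUPPORT build; the non-FIQ build masks 0xBF instead):

#include "tx_api.h"

/* Sketch of the CPSR construction above (constants mirror SVC_MODE, THUMB_MASK, CPSR_MASK). */
static ULONG build_initial_cpsr(ULONG current_cpsr, ULONG entry_pc)
{
    ULONG cpsr = (current_cpsr & ~0xFFUL) | 0x13UL;   /* clear masked bits, select SVC mode */

    if ((entry_pc & 1UL) != 0UL)                      /* Thumb entry point?                 */
    {
        cpsr |= 0x20UL;                               /* set the T bit                      */
    }
    return cpsr;
}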

View File

@@ -19,34 +19,21 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
.global _tx_thread_current_ptr
.global _tx_timer_time_slice
.global _tx_thread_schedule
/* Define the 16-bit Thumb mode veneer for _tx_thread_system_return for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_system_return
.type $_tx_thread_system_return,function
$_tx_thread_system_return:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_system_return // Call _tx_thread_system_return function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
.text
.align 2
/**************************************************************************/
@@ -54,7 +41,7 @@ $_tx_thread_system_return:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_system_return ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -92,15 +79,24 @@ $_tx_thread_system_return:
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_system_return
.type _tx_thread_system_return,function
_tx_thread_system_return:
/* Save minimal context on the stack. */
STMDB sp!, {r4-r11, lr} // Save minimal context
PUSH {r4-r11, lr} // Save minimal context
LDR r4, =_tx_thread_current_ptr // Pickup address of current ptr
LDR r5, [r4] // Pickup current thread pointer
@@ -117,8 +113,11 @@ _tx_skip_solicited_vfp_save:
#endif
MOV r0, #0 // Build a solicited stack type
MRS r1, CPSR // Pickup the CPSR
STMDB sp!, {r0-r1} // Save type and CPSR
MRS r1, CPSR // Pickup the CPSR, T bit is always cleared by hardware
TST lr, #1 // Check if calling function is in Thumb mode
IT NE
ORRNE r1, #0x20 // Set the T bit so that the correct mode is set on return
PUSH {r0-r1} // Save type and CPSR
/* Lockout interrupts. */

View File

@@ -19,6 +19,9 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.global _tx_thread_system_state
.global _tx_thread_current_ptr
@@ -37,7 +40,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_vectored_context_save ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -74,6 +77,9 @@
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_thread_vectored_context_save

View File

@@ -19,9 +19,16 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
/* Define Assembly language external references... */
@@ -34,26 +41,6 @@
.global _tx_timer_expired
.global _tx_thread_time_slice
/* Define the 16-bit Thumb mode veneer for _tx_timer_interrupt for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.thumb
.global $_tx_timer_interrupt
.type $_tx_timer_interrupt,function
$_tx_timer_interrupt:
BX pc // Switch to 32-bit mode
NOP //
.arm
STMFD sp!, {lr} // Save return address
BL _tx_timer_interrupt // Call _tx_timer_interrupt function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
.text
.align 2
/**************************************************************************/
@@ -61,7 +48,7 @@ $_tx_timer_interrupt:
/* FUNCTION RELEASE */
/* */
/* _tx_timer_interrupt ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -98,8 +85,17 @@ $_tx_timer_interrupt:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_timer_interrupt
.type _tx_timer_interrupt,function
_tx_timer_interrupt:
@@ -191,7 +187,7 @@ __tx_timer_done:
__tx_something_expired:
STMDB sp!, {r0, lr} // Save the lr register on the stack
PUSH {r0, lr} // Save the lr register on the stack
// and save r0 just to keep 8-byte alignment
/* Did a timer expire? */
@@ -219,13 +215,9 @@ __tx_timer_dont_activate:
__tx_timer_not_ts_expiration:
LDMIA sp!, {r0, lr} // Recover lr register (r0 is just there for
POP {r0, lr} // Recover lr register (r0 is just there for
// the 8-byte stack alignment
__tx_timer_nothing_expired:
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif

View File

@@ -1,15 +1,30 @@
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
/* .text is used instead of .section .text so it works with arm-aout too. */
.text
.code 32
.align 0
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _mainCRTStartup
_mainCRTStartup:
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _start
_start:
#if defined(THUMB_MODE)
.thumb_func
#endif
.global start
start:
_start:
_mainCRTStartup:
/* Start by setting up a stack */
/* Set up the stack pointer to a fixed value */
@@ -69,16 +84,12 @@ _mainCRTStartup:
.word _fini
#endif */
/* Return ... */
#ifdef __APCS_26__
movs pc, lr
#else
#ifdef __THUMB_INTERWORK
bx lr
#else
mov pc, lr
#endif
#endif
.global _fini
.type _fini,function
_fini:
BX lr // Return to caller
/* Workspace for Angel calls. */
.data

View File

@@ -109,7 +109,7 @@ SECTIONS
.eh_frame_hdr : { *(.eh_frame_hdr) }
/* Adjust the address for the data segment. We want to adjust up to
the same address within the page on the next page up. */
. = ALIGN(256) + (. & (256 - 1));
. = 0x2E000000;
.data :
{
*(.data)

View File

@@ -19,6 +19,9 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.arm
@@ -64,7 +67,7 @@ $_tx_initialize_low_level:
/* FUNCTION RELEASE */
/* */
/* _tx_initialize_low_level ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -100,6 +103,9 @@ $_tx_initialize_low_level:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_initialize_low_level

View File

@@ -0,0 +1,155 @@
// ------------------------------------------------------------
// v7-A Cache, TLB and Branch Prediction Maintenance Operations
// Header File
//
// Copyright (c) 2011-2016 Arm Limited (or its affiliates). All rights reserved.
// Use, modification and redistribution of this file is subject to your possession of a
// valid End User License Agreement for the Arm Product of which these examples are part of
// and your compliance with all applicable terms and conditions of such licence agreement.
// ------------------------------------------------------------
#ifndef _ARMV7A_GENERIC_H
#define _ARMV7A_GENERIC_H
// ------------------------------------------------------------
// Memory barrier mnemonics
enum MemBarOpt {
RESERVED_0 = 0, RESERVED_1 = 1, OSHST = 2, OSH = 3,
RESERVED_4 = 4, RESERVED_5 = 5, NSHST = 6, NSH = 7,
RESERVED_8 = 8, RESERVED_9 = 9, ISHST = 10, ISH = 11,
RESERVED_12 = 12, RESERVED_13 = 13, ST = 14, SY = 15
};
//
// Note:
// *_IS() stands for "inner shareable"
// DO NOT USE THESE FUNCTIONS ON A CORTEX-A8
//
// ------------------------------------------------------------
// Interrupts
// Enable/disables IRQs (not FIQs)
void enableInterrupts(void);
void disableInterrupts(void);
// ------------------------------------------------------------
// Caches
void invalidateCaches_IS(void);
void cleanInvalidateDCache(void);
void invalidateCaches_IS(void);
void enableCaches(void);
void disableCaches(void);
void invalidateCaches(void);
void cleanDCache(void);
// ------------------------------------------------------------
// TLBs
void invalidateUnifiedTLB(void);
void invalidateUnifiedTLB_IS(void);
// ------------------------------------------------------------
// Branch prediction
void flushBranchTargetCache(void);
void flushBranchTargetCache_IS(void);
// ------------------------------------------------------------
// High Vecs
void enableHighVecs(void);
void disableHighVecs(void);
// ------------------------------------------------------------
// ID Registers
unsigned int getMIDR(void);
#define MIDR_IMPL_SHIFT 24
#define MIDR_IMPL_MASK 0xFF
#define MIDR_VAR_SHIFT 20
#define MIDR_VAR_MASK 0xF
#define MIDR_ARCH_SHIFT 16
#define MIDR_ARCH_MASK 0xF
#define MIDR_PART_SHIFT 4
#define MIDR_PART_MASK 0xFFF
#define MIDR_REV_SHIFT 0
#define MIDR_REV_MASK 0xF
// tmp = get_MIDR();
// implementor = (tmp >> MIDR_IMPL_SHIFT) & MIDR_IMPL_MASK;
// variant = (tmp >> MIDR_VAR_SHIFT) & MIDR_VAR_MASK;
// architecture= (tmp >> MIDR_ARCH_SHIFT) & MIDR_ARCH_MASK;
// part_number = (tmp >> MIDR_PART_SHIFT) & MIDR_PART_MASK;
// revision = tmp & MIDR_REV_MASK;
#define MIDR_PART_CA5 0xC05
#define MIDR_PART_CA8 0xC08
#define MIDR_PART_CA9 0xC09
unsigned int getMPIDR(void);
#define MPIDR_FORMAT_SHIFT 31
#define MPIDR_FORMAT_MASK 0x1
#define MPIDR_UBIT_SHIFT 30
#define MPIDR_UBIT_MASK 0x1
#define MPIDR_CLUSTER_SHIFT 7
#define MPIDR_CLUSTER_MASK 0xF
#define MPIDR_CPUID_SHIFT 0
#define MPIDR_CPUID_MASK 0x3
#define MPIDR_CPUID_CPU0 0x0
#define MPIDR_CPUID_CPU1 0x1
#define MPIDR_CPUID_CPU2 0x2
#define MPIDR_CPUID_CPU3 0x3
#define MPIDR_UNIPROCESSPR 0x1
#define MPDIR_NEW_FORMAT 0x1
// ------------------------------------------------------------
// Context ID
unsigned int getContextID(void);
void setContextID(unsigned int);
#define CONTEXTID_ASID_SHIFT 0
#define CONTEXTID_ASID_MASK 0xFF
#define CONTEXTID_PROCID_SHIFT 8
#define CONTEXTID_PROCID_MASK 0x00FFFFFF
// tmp = getContextID();
// ASID = tmp & CONTEXTID_ASID_MASK;
// PROCID = (tmp >> CONTEXTID_PROCID_SHIFT) & CONTEXTID_PROCID_MASK;
// ------------------------------------------------------------
// SMP related for Armv7-A MPCore processors
//
// DO NOT CALL THESE FUNCTIONS ON A CORTEX-A8
// Returns the base address of the private peripheral memory space
unsigned int getBaseAddr(void);
// Returns the CPU ID (0 to 3) of the CPU executed on
#define MP_CPU0 (0)
#define MP_CPU1 (1)
#define MP_CPU2 (2)
#define MP_CPU3 (3)
unsigned int getCPUID(void);
// Set this core as participating in SMP
void joinSMP(void);
// Set this core as NOT participating in SMP
void leaveSMP(void);
// Go to sleep, never returns
void goToSleep(void);
#endif
// ------------------------------------------------------------
// End of v7.h
// ------------------------------------------------------------
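The MIDR accessors and masks above follow the decode recipe given in the header's comments; a short usage sketch that identifies a Cortex-A9:

#include "v7.h"

/* Sketch: use the MIDR part-number field to detect a Cortex-A9. */
static int running_on_cortex_a9(void)
{
    unsigned int midr = getMIDR();
    unsigned int part = (midr >> MIDR_PART_SHIFT) & MIDR_PART_MASK;

    return (part == MIDR_PART_CA9);
}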

View File

@@ -0,0 +1,476 @@
// ------------------------------------------------------------
// v7-A Cache and Branch Prediction Maintenance Operations
//
// Copyright (c) 2011-2018 Arm Limited (or its affiliates). All rights reserved.
// Use, modification and redistribution of this file is subject to your possession of a
// valid End User License Agreement for the Arm Product of which these examples are part of
// and your compliance with all applicable terms and conditions of such licence agreement.
// ------------------------------------------------------------
.arm
// ------------------------------------------------------------
// Interrupt enable/disable
// ------------------------------------------------------------
// Could use intrinsic instead of these
.global enableInterrupts
.type enableInterrupts,function
// void enableInterrupts(void)//
enableInterrupts:
CPSIE i
BX lr
.global disableInterrupts
.type disableInterrupts,function
// void disableInterrupts(void)//
disableInterrupts:
CPSID i
BX lr
// ------------------------------------------------------------
// Cache Maintenance
// ------------------------------------------------------------
.global enableCaches
.type enableCaches,function
// void enableCaches(void)//
enableCaches:
MRC p15, 0, r0, c1, c0, 0 // Read System Control Register
ORR r0, r0, #(1 << 2) // Set C bit
ORR r0, r0, #(1 << 12) // Set I bit
MCR p15, 0, r0, c1, c0, 0 // Write System Control Register
ISB
BX lr
.global disableCaches
.type disableCaches,function
// void disableCaches(void)
disableCaches:
MRC p15, 0, r0, c1, c0, 0 // Read System Control Register
BIC r0, r0, #(1 << 2) // Clear C bit
BIC r0, r0, #(1 << 12) // Clear I bit
MCR p15, 0, r0, c1, c0, 0 // Write System Control Register
ISB
BX lr
.global cleanDCache
.type cleanDCache,function
// void cleanDCache(void)//
cleanDCache:
PUSH {r4-r12}
//
// Based on code example given in section 11.2.4 of Armv7-A/R Architecture Reference Manual (DDI 0406B)
//
MRC p15, 1, r0, c0, c0, 1 // Read CLIDR
ANDS r3, r0, #0x7000000
MOV r3, r3, LSR #23 // Cache level value (naturally aligned)
BEQ clean_dcache_finished
MOV r10, #0
clean_dcache_loop1:
ADD r2, r10, r10, LSR #1 // Work out 3xcachelevel
MOV r1, r0, LSR r2 // bottom 3 bits are the Cache type for this level
AND r1, r1, #7 // get those 3 bits alone
CMP r1, #2
BLT clean_dcache_skip // no cache or only instruction cache at this level
MCR p15, 2, r10, c0, c0, 0 // write the Cache Size selection register
ISB // ISB to sync the change to the CacheSizeID reg
MRC p15, 1, r1, c0, c0, 0 // reads current Cache Size ID register
AND r2, r1, #7 // extract the line length field
ADD r2, r2, #4 // add 4 for the line length offset (log2 16 bytes)
LDR r4, =0x3FF
ANDS r4, r4, r1, LSR #3 // R4 is the max number on the way size (right aligned)
CLZ r5, r4 // R5 is the bit position of the way size increment
LDR r7, =0x00007FFF
ANDS r7, r7, r1, LSR #13 // R7 is the max number of the index size (right aligned)
clean_dcache_loop2:
MOV r9, R4 // R9 working copy of the max way size (right aligned)
clean_dcache_loop3:
ORR r11, r10, r9, LSL r5 // factor in the way number and cache number into R11
ORR r11, r11, r7, LSL r2 // factor in the index number
MCR p15, 0, r11, c7, c10, 2 // DCCSW - clean by set/way
SUBS r9, r9, #1 // decrement the way number
BGE clean_dcache_loop3
SUBS r7, r7, #1 // decrement the index
BGE clean_dcache_loop2
clean_dcache_skip:
ADD r10, r10, #2 // increment the cache number
CMP r3, r10
BGT clean_dcache_loop1
clean_dcache_finished:
POP {r4-r12}
BX lr
.global cleanInvalidateDCache
.type cleanInvalidateDCache,function
// void cleanInvalidateDCache(void)//
cleanInvalidateDCache:
PUSH {r4-r12}
//
// Based on code example given in section 11.2.4 of Armv7-A/R Architecture Reference Manual (DDI 0406B)
//
MRC p15, 1, r0, c0, c0, 1 // Read CLIDR
ANDS r3, r0, #0x7000000
MOV r3, r3, LSR #23 // Cache level value (naturally aligned)
BEQ clean_invalidate_dcache_finished
MOV r10, #0
clean_invalidate_dcache_loop1:
ADD r2, r10, r10, LSR #1 // Work out 3xcachelevel
MOV r1, r0, LSR r2 // bottom 3 bits are the Cache type for this level
AND r1, r1, #7 // get those 3 bits alone
CMP r1, #2
BLT clean_invalidate_dcache_skip // no cache or only instruction cache at this level
MCR p15, 2, r10, c0, c0, 0 // write the Cache Size selection register
ISB // ISB to sync the change to the CacheSizeID reg
MRC p15, 1, r1, c0, c0, 0 // reads current Cache Size ID register
AND r2, r1, #7 // extract the line length field
ADD r2, r2, #4 // add 4 for the line length offset (log2 16 bytes)
LDR r4, =0x3FF
ANDS r4, r4, r1, LSR #3 // R4 is the max number on the way size (right aligned)
CLZ r5, r4 // R5 is the bit position of the way size increment
LDR r7, =0x00007FFF
ANDS r7, r7, r1, LSR #13 // R7 is the max number of the index size (right aligned)
clean_invalidate_dcache_loop2:
MOV r9, R4 // R9 working copy of the max way size (right aligned)
clean_invalidate_dcache_loop3:
ORR r11, r10, r9, LSL r5 // factor in the way number and cache number into R11
ORR r11, r11, r7, LSL r2 // factor in the index number
MCR p15, 0, r11, c7, c14, 2 // DCCISW - clean and invalidate by set/way
SUBS r9, r9, #1 // decrement the way number
BGE clean_invalidate_dcache_loop3
SUBS r7, r7, #1 // decrement the index
BGE clean_invalidate_dcache_loop2
clean_invalidate_dcache_skip:
ADD r10, r10, #2 // increment the cache number
CMP r3, r10
BGT clean_invalidate_dcache_loop1
clean_invalidate_dcache_finished:
POP {r4-r12}
BX lr
.global invalidateCaches
.type invalidateCaches,function
// void invalidateCaches(void)//
invalidateCaches:
PUSH {r4-r12}
//
// Based on code example given in section B2.2.4/11.2.4 of Armv7-A/R Architecture Reference Manual (DDI 0406B)
//
MOV r0, #0
MCR p15, 0, r0, c7, c5, 0 // ICIALLU - Invalidate entire I Cache, and flushes branch target cache
MRC p15, 1, r0, c0, c0, 1 // Read CLIDR
ANDS r3, r0, #0x7000000
MOV r3, r3, LSR #23 // Cache level value (naturally aligned)
BEQ invalidate_caches_finished
MOV r10, #0
invalidate_caches_loop1:
ADD r2, r10, r10, LSR #1 // Work out 3xcachelevel
MOV r1, r0, LSR r2 // bottom 3 bits are the Cache type for this level
AND r1, r1, #7 // get those 3 bits alone
CMP r1, #2
BLT invalidate_caches_skip // no cache or only instruction cache at this level
MCR p15, 2, r10, c0, c0, 0 // write the Cache Size selection register
ISB // ISB to sync the change to the CacheSizeID reg
MRC p15, 1, r1, c0, c0, 0 // reads current Cache Size ID register
AND r2, r1, #7 // extract the line length field
ADD r2, r2, #4 // add 4 for the line length offset (log2 16 bytes)
LDR r4, =0x3FF
ANDS r4, r4, r1, LSR #3 // R4 is the max number of the way size (right aligned)
CLZ r5, r4 // R5 is the bit position of the way size increment
LDR r7, =0x00007FFF
ANDS r7, r7, r1, LSR #13 // R7 is the max number of the index size (right aligned)
invalidate_caches_loop2:
MOV r9, R4 // R9 working copy of the max way size (right aligned)
invalidate_caches_loop3:
ORR r11, r10, r9, LSL r5 // factor in the way number and cache number into R11
ORR r11, r11, r7, LSL r2 // factor in the index number
MCR p15, 0, r11, c7, c6, 2 // DCISW - invalidate by set/way
SUBS r9, r9, #1 // decrement the way number
BGE invalidate_caches_loop3
SUBS r7, r7, #1 // decrement the index
BGE invalidate_caches_loop2
invalidate_caches_skip:
ADD r10, r10, #2 // increment the cache number
CMP r3, r10
BGT invalidate_caches_loop1
invalidate_caches_finished:
POP {r4-r12}
BX lr
.global invalidateCaches_IS
.type invalidateCaches_IS,function
// void invalidateCaches_IS(void)//
invalidateCaches_IS:
PUSH {r4-r12}
MOV r0, #0
MCR p15, 0, r0, c7, c1, 0 // ICIALLUIS - Invalidate entire I Cache inner shareable
MRC p15, 1, r0, c0, c0, 1 // Read CLIDR
ANDS r3, r0, #0x7000000
MOV r3, r3, LSR #23 // Cache level value (naturally aligned)
BEQ invalidate_caches_is_finished
MOV r10, #0
invalidate_caches_is_loop1:
ADD r2, r10, r10, LSR #1 // Work out 3xcachelevel
MOV r1, r0, LSR r2 // bottom 3 bits are the Cache type for this level
AND r1, r1, #7 // get those 3 bits alone
CMP r1, #2
BLT invalidate_caches_is_skip // no cache or only instruction cache at this level
MCR p15, 2, r10, c0, c0, 0 // write the Cache Size selection register
ISB // ISB to sync the change to the CacheSizeID reg
MRC p15, 1, r1, c0, c0, 0 // reads current Cache Size ID register
AND r2, r1, #7 // extract the line length field
ADD r2, r2, #4 // add 4 for the line length offset (log2 16 bytes)
LDR r4, =0x3FF
ANDS r4, r4, r1, LSR #3 // R4 is the max number of the way size (right aligned)
CLZ r5, r4 // R5 is the bit position of the way size increment
LDR r7, =0x00007FFF
ANDS r7, r7, r1, LSR #13 // R7 is the max number of the index size (right aligned)
invalidate_caches_is_loop2:
MOV r9, R4 // R9 working copy of the max way size (right aligned)
invalidate_caches_is_loop3:
ORR r11, r10, r9, LSL r5 // factor in the way number and cache number into R11
ORR r11, r11, r7, LSL r2 // factor in the index number
MCR p15, 0, r11, c7, c6, 2 // DCISW - invalidate by set/way
SUBS r9, r9, #1 // decrement the way number
BGE invalidate_caches_is_loop3
SUBS r7, r7, #1 // decrement the index
BGE invalidate_caches_is_loop2
invalidate_caches_is_skip:
ADD r10, r10, #2 // increment the cache number
CMP r3, r10
BGT invalidate_caches_is_loop1
invalidate_caches_is_finished:
POP {r4-r12}
BX lr
// ------------------------------------------------------------
// TLB
// ------------------------------------------------------------
.global invalidateUnifiedTLB
.type invalidateUnifiedTLB,function
// void invalidateUnifiedTLB(void)//
invalidateUnifiedTLB:
MOV r0, #0
MCR p15, 0, r0, c8, c7, 0 // TLBIALL - Invalidate entire unified TLB
BX lr
.global invalidateUnifiedTLB_IS
.type invalidateUnifiedTLB_IS,function
// void invalidateUnifiedTLB_IS(void)//
invalidateUnifiedTLB_IS:
MOV r0, #1
MCR p15, 0, r0, c8, c3, 0 // TLBIALLIS - Invalidate entire unified TLB Inner Shareable
BX lr
// ------------------------------------------------------------
// Branch Prediction
// ------------------------------------------------------------
.global flushBranchTargetCache
.type flushBranchTargetCache,function
// void flushBranchTargetCache(void)
flushBranchTargetCache:
MOV r0, #0
MCR p15, 0, r0, c7, c5, 6 // BPIALL - Invalidate entire branch predictor array
BX lr
.global flushBranchTargetCache_IS
.type flushBranchTargetCache_IS,function
// void flushBranchTargetCache_IS(void)
flushBranchTargetCache_IS:
MOV r0, #0
MCR p15, 0, r0, c7, c1, 6 // BPIALLIS - Invalidate entire branch predictor array Inner Shareable
BX lr
// ------------------------------------------------------------
// High Vecs
// ------------------------------------------------------------
.global enableHighVecs
.type enableHighVecs,function
// void enableHighVecs(void)//
enableHighVecs:
MRC p15, 0, r0, c1, c0, 0 // Read Control Register
ORR r0, r0, #(1 << 13) // Set the V bit (bit 13)
MCR p15, 0, r0, c1, c0, 0 // Write Control Register
ISB
BX lr
.global disableHighVecs
.type disableHighVecs,function
// void disableHighVecs(void)//
disableHighVecs:
MRC p15, 0, r0, c1, c0, 0 // Read Control Register
BIC r0, r0, #(1 << 13) // Clear the V bit (bit 13)
MCR p15, 0, r0, c1, c0, 0 // Write Control Register
ISB
BX lr
// ------------------------------------------------------------
// Context ID
// ------------------------------------------------------------
.global getContextID
.type getContextID,function
// uint32_t getContextID(void)//
getContextID:
MRC p15, 0, r0, c13, c0, 1 // Read Context ID Register
BX lr
.global setContextID
.type setContextID,function
// void setContextID(uint32_t)//
setContextID:
MCR p15, 0, r0, c13, c0, 1 // Write Context ID Register
BX lr
// ------------------------------------------------------------
// ID registers
// ------------------------------------------------------------
.global getMIDR
.type getMIDR,function
// uint32_t getMIDR(void)//
getMIDR:
MRC p15, 0, r0, c0, c0, 0 // Read Main ID Register (MIDR)
BX lr
.global getMPIDR
.type getMPIDR,function
// uint32_t getMPIDR(void)//
getMPIDR:
MRC p15, 0, r0, c0, c0, 5 // Read Multiprocessor ID register (MPIDR)
BX lr
// ------------------------------------------------------------
// CP15 SMP related
// ------------------------------------------------------------
.global getBaseAddr
.type getBaseAddr,function
// uint32_t getBaseAddr(void)
// Returns the value of CBAR (base address of the private peripheral memory space)
getBaseAddr:
MRC p15, 4, r0, c15, c0, 0 // Read peripheral base address
BX lr
// ------------------------------------------------------------
.global getCPUID
.type getCPUID,function
// uint32_t getCPUID(void)
// Returns the ID (0 to 3) of the CPU it is executing on
getCPUID:
MRC p15, 0, r0, c0, c0, 5 // Read CPU ID register
AND r0, r0, #0x03 // Mask off, leaving the CPU ID field
BX lr
// ------------------------------------------------------------
.global goToSleep
.type goToSleep,function
// void goToSleep(void)
goToSleep:
DSB // Clear all pending data accesses
WFI // Go into standby
B goToSleep // Catch in case of rogue events
BX lr
// ------------------------------------------------------------
.global joinSMP
.type joinSMP,function
// void joinSMP(void)
// Sets the ACTRL.SMP bit
joinSMP:
// SMP status is controlled by bit 6 of the CP15 Aux Ctrl Reg
MRC p15, 0, r0, c1, c0, 1 // Read ACTLR
MOV r1, r0
ORR r0, r0, #0x040 // Set bit 6
CMP r0, r1
MCRNE p15, 0, r0, c1, c0, 1 // Write ACTLR
ISB
BX lr
// ------------------------------------------------------------
.global leaveSMP
.type leaveSMP,function
// void leaveSMP(void)
// Clear the ACTRL.SMP bit
leaveSMP:
// SMP status is controlled by bit 6 of the CP15 Aux Ctrl Reg
MRC p15, 0, r0, c1, c0, 1 // Read ACTLR
BIC r0, r0, #0x040 // Clear bit 6
MCR p15, 0, r0, c1, c0, 1 // Write ACTLR
ISB
BX lr
// ------------------------------------------------------------
// End of v7.s
// ------------------------------------------------------------
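For reference, each set/way loop above follows the same recipe from the Armv7-A Architecture Reference Manual: read CLIDR to find which levels hold a data or unified cache, select the level in CSSELR, decode the line length, way count and set count from CCSIDR, then issue the DCCSW/DCCISW/DCISW operation for every set/way pair. Below is a minimal C sketch of that decoding (clean only), assuming a privileged ARMv7-A target built with GCC; the helper names are illustrative and are not part of v7.s.

    #include <stdint.h>

    /* Illustrative helpers mirroring the MRC/MCR encodings used above. */
    static inline uint32_t read_clidr(void)
    {
        uint32_t v;
        __asm__ volatile("mrc p15, 1, %0, c0, c0, 1" : "=r"(v));       /* CLIDR  */
        return v;
    }

    static inline uint32_t read_ccsidr(uint32_t csselr)
    {
        uint32_t v;
        __asm__ volatile("mcr p15, 2, %0, c0, c0, 0" :: "r"(csselr));  /* CSSELR */
        __asm__ volatile("isb");
        __asm__ volatile("mrc p15, 1, %0, c0, c0, 0" : "=r"(v));       /* CCSIDR */
        return v;
    }

    static inline void dccsw(uint32_t setway)
    {
        __asm__ volatile("mcr p15, 0, %0, c7, c10, 2" :: "r"(setway)); /* DCCSW  */
    }

    /* Clean the data/unified caches by set/way, level by level (as cleanDCache does). */
    static void clean_dcache_by_set_way(void)
    {
        uint32_t clidr  = read_clidr();
        uint32_t levels = (clidr >> 24) & 0x7;                  /* Level of Coherence */

        for (uint32_t level = 0; level < levels; level++)
        {
            uint32_t ctype = (clidr >> (3u * level)) & 0x7;
            if (ctype < 2)                                      /* no cache or I-cache only */
                continue;

            uint32_t ccsidr    = read_ccsidr(level << 1);       /* select data/unified cache */
            uint32_t line_log2 = (ccsidr & 0x7) + 4;            /* log2(line length in bytes) */
            uint32_t max_way   = (ccsidr >> 3) & 0x3FF;         /* highest way number */
            uint32_t max_set   = (ccsidr >> 13) & 0x7FFF;       /* highest set (index) number */
            uint32_t way_shift = (max_way != 0) ?
                                 (uint32_t)__builtin_clz(max_way) : 0;  /* CLZ, as in the asm */

            for (uint32_t set = 0; set <= max_set; set++)
                for (uint32_t way = 0; way <= max_way; way++)
                    dccsw((way << way_shift) | (set << line_log2) | (level << 1));
        }
    }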

View File

@@ -321,7 +321,7 @@ void tx_thread_vfp_disable(void);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARMv7-A Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARMv7-A Version 6.4.0 *";
#else
extern CHAR _tx_version_id[];
#endif

View File

@@ -23,16 +23,16 @@
#include "tx_user.h"
#endif
.arm
#ifdef TX_ENABLE_FIQ_SUPPORT
SVC_MODE = 0xD3 // Disable IRQ/FIQ, SVC mode
IRQ_MODE = 0xD2 // Disable IRQ/FIQ, IRQ mode
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
SVC_MODE = 0x93 // Disable IRQ, SVC mode
IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
.arm
#endif
SVC_MODE = 0x13 // SVC mode
IRQ_MODE = 0x12 // IRQ mode
.global _tx_thread_system_state
.global _tx_thread_current_ptr
.global _tx_thread_execute_ptr
@@ -45,7 +45,6 @@ IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_context_restore
since it will never be called 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -53,7 +52,7 @@ IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
/* FUNCTION RELEASE */
/* */
/* _tx_thread_context_restore ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -91,9 +90,12 @@ IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
.global _tx_thread_context_restore
@@ -129,9 +131,9 @@ _tx_thread_context_restore:
/* Just recover the saved registers and return to the point of
interrupt. */
LDMIA sp!, {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
POP {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
MSR SPSR_cxsf, r0 // Put SPSR back
LDMIA sp!, {r0-r3} // Recover r0-r3
POP {r0-r3} // Recover r0-r3
MOVS pc, lr // Return to point of interrupt
__tx_thread_not_nested_restore:
@@ -160,26 +162,23 @@ __tx_thread_no_preempt_restore:
/* Pickup the saved stack pointer. */
/* Recover the saved context and return to the point of interrupt. */
LDMIA sp!, {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
POP {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
MSR SPSR_cxsf, r0 // Put SPSR back
LDMIA sp!, {r0-r3} // Recover r0-r3
POP {r0-r3} // Recover r0-r3
MOVS pc, lr // Return to point of interrupt
__tx_thread_preempt_restore:
LDMIA sp!, {r3, r10, r12, lr} // Recover temporarily saved registers
POP {r3, r10, r12, lr} // Recover temporarily saved registers
MOV r1, lr // Save lr (point of interrupt)
MOV r2, #SVC_MODE // Build SVC mode CPSR
MSR CPSR_c, r2 // Enter SVC mode
CPS #SVC_MODE // Enter SVC mode
STR r1, [sp, #-4]! // Save point of interrupt
STMDB sp!, {r4-r12, lr} // Save upper half of registers
PUSH {r4-r12, lr} // Save upper half of registers
MOV r4, r3 // Save SPSR in r4
MOV r2, #IRQ_MODE // Build IRQ mode CPSR
MSR CPSR_c, r2 // Enter IRQ mode
LDMIA sp!, {r0-r3} // Recover r0-r3
MOV r5, #SVC_MODE // Build SVC mode CPSR
MSR CPSR_c, r5 // Enter SVC mode
STMDB sp!, {r0-r3} // Save r0-r3 on thread's stack
CPS #IRQ_MODE // Enter IRQ mode
POP {r0-r3} // Recover r0-r3
CPS #SVC_MODE // Enter SVC mode
PUSH {r0-r3} // Save r0-r3 on thread's stack
LDR r1, =_tx_thread_current_ptr // Pickup address of current thread ptr
LDR r0, [r1] // Pickup current thread pointer
@@ -192,13 +191,11 @@ __tx_thread_preempt_restore:
STR r2, [sp, #-4]! // Save FPSCR
VSTMDB sp!, {D16-D31} // Save D16-D31
VSTMDB sp!, {D0-D15} // Save D0-D15
_tx_skip_irq_vfp_save:
#endif
MOV r3, #1 // Build interrupt stack type
STMDB sp!, {r3, r4} // Save interrupt stack type and SPSR
PUSH {r3, r4} // Save interrupt stack type and SPSR
STR sp, [r0, #8] // Save stack pointer in thread control
// block
@@ -223,6 +220,5 @@ __tx_thread_dont_save_ts:
__tx_thread_idle_system_restore:
/* Just return back to the scheduler! */
MOV r0, #SVC_MODE // Build SVC mode CPSR
MSR CPSR_c, r0 // Enter SVC mode
CPS #SVC_MODE // Enter SVC mode
B _tx_thread_schedule // Return to scheduler

View File

@@ -23,6 +23,13 @@
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
.global _tx_thread_system_state
.global _tx_thread_current_ptr
.global __tx_irq_processing_return
@@ -31,7 +38,6 @@
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_context_save
since it will never be called 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -39,7 +45,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_context_save ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -76,9 +82,12 @@
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
.global _tx_thread_context_save
@@ -90,7 +99,7 @@ _tx_thread_context_save:
/* Check for a nested interrupt condition. */
STMDB sp!, {r0-r3} // Save some working registers
PUSH {r0-r3} // Save some working registers
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Disable FIQ interrupts
#endif
@@ -101,15 +110,15 @@ _tx_thread_context_save:
/* Nested interrupt condition. */
ADD r2, r2, #1 // Increment the interrupt counter
ADD r2, #1 // Increment the interrupt counter
STR r2, [r3] // Store it back in the variable
/* Save the rest of the scratch registers on the stack and return to the
calling ISR. */
MRS r0, SPSR // Pickup saved SPSR
SUB lr, lr, #4 // Adjust point of interrupt
STMDB sp!, {r0, r10, r12, lr} // Store other registers
SUB lr, #4 // Adjust point of interrupt
PUSH {r0, r10, r12, lr} // Store other registers
/* Return to the ISR. */
@@ -129,7 +138,7 @@ _tx_thread_context_save:
__tx_thread_not_nested_save:
/* Otherwise, not nested, check to see if a thread was running. */
ADD r2, r2, #1 // Increment the interrupt counter
ADD r2, #1 // Increment the interrupt counter
STR r2, [r3] // Store it back in the variable
LDR r1, =_tx_thread_current_ptr // Pickup address of current thread ptr
LDR r0, [r1] // Pickup current thread pointer
@@ -140,8 +149,8 @@ __tx_thread_not_nested_save:
/* Save minimal context of interrupted thread. */
MRS r2, SPSR // Pickup saved SPSR
SUB lr, lr, #4 // Adjust point of interrupt
STMDB sp!, {r2, r10, r12, lr} // Store other registers
SUB lr, #4 // Adjust point of interrupt
PUSH {r2, r10, r12, lr} // Store other registers
MOV r10, #0 // Clear stack limit
@@ -174,5 +183,5 @@ __tx_thread_idle_system_save:
POP {lr} // Recover ISR lr
#endif
ADD sp, sp, #16 // Recover saved registers
ADD sp, #16 // Recover saved registers
B __tx_irq_processing_return // Continue IRQ processing

View File

@@ -51,7 +51,7 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_context_restore ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -89,9 +89,9 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_thread_fiq_context_restore

View File

@@ -40,7 +40,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_context_save ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -77,9 +77,9 @@
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_thread_fiq_context_save

View File

@@ -23,6 +23,13 @@
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
#ifdef TX_ENABLE_FIQ_SUPPORT
DISABLE_INTS = 0xC0 // Disable IRQ/FIQ interrupts
#else
@@ -31,11 +38,6 @@ DISABLE_INTS = 0x80 // Disable IRQ interrupts
MODE_MASK = 0x1F // Mode mask
FIQ_MODE_BITS = 0x11 // FIQ mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_fiq_nesting_end
since it will never be called 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -43,7 +45,7 @@ FIQ_MODE_BITS = 0x11 // FIQ mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_nesting_end ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -85,11 +87,17 @@ FIQ_MODE_BITS = 0x11 // FIQ mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_fiq_nesting_end
.type _tx_thread_fiq_nesting_end,function
_tx_thread_fiq_nesting_end:
@@ -103,8 +111,4 @@ _tx_thread_fiq_nesting_end:
ORR r0, r0, #FIQ_MODE_BITS // Build IRQ mode CPSR
MSR CPSR_c, r0 // Reenter IRQ mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -23,15 +23,17 @@
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
FIQ_DISABLE = 0x40 // FIQ disable bit
MODE_MASK = 0x1F // Mode mask
SYS_MODE_BITS = 0x1F // System mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_fiq_nesting_start
since it will never be called 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -39,7 +41,7 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_nesting_start ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -78,11 +80,17 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_fiq_nesting_start
.type _tx_thread_fiq_nesting_start,function
_tx_thread_fiq_nesting_start:
@@ -95,8 +103,4 @@ _tx_thread_fiq_nesting_start:
// and push r1 just to keep 8-byte alignment
BIC r0, r0, #FIQ_DISABLE // Build enable FIQ CPSR
MSR CPSR_c, r0 // Enter system mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -23,25 +23,18 @@
#include "tx_user.h"
#endif
INT_MASK = 0x03F
/* Define the 16-bit Thumb mode veneer for _tx_thread_interrupt_control for
applications calling this function from to 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_interrupt_control
$_tx_thread_interrupt_control:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_interrupt_control // Call _tx_thread_interrupt_control function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
INT_MASK = 0x0C0
IRQ_MASK = 0x080
#ifdef TX_ENABLE_FIQ_SUPPORT
FIQ_MASK = 0x040
#endif
.text
.align 2
@@ -50,7 +43,7 @@ $_tx_thread_interrupt_control:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_interrupt_control ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -83,28 +76,38 @@ $_tx_thread_interrupt_control:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_interrupt_control
.type _tx_thread_interrupt_control,function
_tx_thread_interrupt_control:
MRS r1, CPSR // Pickup current CPSR
/* Pickup current interrupt lockout posture. */
MRS r3, CPSR // Pickup current CPSR
MOV r2, #INT_MASK // Build interrupt mask
AND r1, r3, r2 // Clear interrupt lockout bits
ORR r1, r1, r0 // Or-in new interrupt lockout bits
/* Apply the new interrupt posture. */
MSR CPSR_c, r1 // Setup new CPSR
BIC r0, r3, r2 // Return previous interrupt mask
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Disable IRQ and FIQ
#else
MOV pc, lr // Return to caller
CPSID i // Disable IRQ
#endif
TST r0, #IRQ_MASK
BNE no_irq
CPSIE i
no_irq:
#ifdef TX_ENABLE_FIQ_SUPPORT
TST r0, #FIQ_MASK
BNE no_fiq
CPSIE f
no_fiq:
#endif
AND r0, r1, #INT_MASK
BX lr
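At the API level this routine implements tx_interrupt_control(), which applies the requested interrupt posture and returns the previous one. A short usage sketch for context, using only the standard ThreadX API (TX_INT_DISABLE/TX_INT_ENABLE come from tx_api.h); this is an illustration, not code from this port.

    #include "tx_api.h"

    static VOID update_shared_state(VOID)
    {
        UINT old_posture;

        /* Lock out interrupts and remember the caller's posture. */
        old_posture = tx_interrupt_control(TX_INT_DISABLE);

        /* ... touch data that is also modified from ISRs ... */

        /* Restore whatever posture was in effect before. */
        tx_interrupt_control(old_posture);
    }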

View File

@@ -23,22 +23,12 @@
#include "tx_user.h"
#endif
/* Define the 16-bit Thumb mode veneer for _tx_thread_interrupt_disable for
applications calling this function from to 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_interrupt_disable
$_tx_thread_interrupt_disable:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_interrupt_disable // Call _tx_thread_interrupt_disable function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
.text
.align 2
@@ -47,7 +37,7 @@ $_tx_thread_interrupt_disable:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_interrupt_disable ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -79,11 +69,17 @@ $_tx_thread_interrupt_disable:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_interrupt_disable
.type _tx_thread_interrupt_disable,function
_tx_thread_interrupt_disable:
@@ -100,8 +96,4 @@ _tx_thread_interrupt_disable:
CPSID i // Disable IRQ
#endif
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif

View File

@@ -23,22 +23,17 @@
#include "tx_user.h"
#endif
/* Define the 16-bit Thumb mode veneer for _tx_thread_interrupt_restore for
applications calling this function from to 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_interrupt_restore
$_tx_thread_interrupt_restore:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_interrupt_restore // Call _tx_thread_interrupt_restore function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
IRQ_MASK = 0x080
#ifdef TX_ENABLE_FIQ_SUPPORT
FIQ_MASK = 0x040
#endif
.text
.align 2
@@ -47,7 +42,7 @@ $_tx_thread_interrupt_restore:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_interrupt_restore ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -80,20 +75,30 @@ $_tx_thread_interrupt_restore:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_interrupt_restore
.type _tx_thread_interrupt_restore,function
_tx_thread_interrupt_restore:
/* Apply the new interrupt posture. */
MSR CPSR_c, r0 // Setup new CPSR
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
TST r0, #IRQ_MASK
BNE no_irq
CPSIE i
no_irq:
#ifdef TX_ENABLE_FIQ_SUPPORT
TST r0, #FIQ_MASK
BNE no_fiq
CPSIE f
no_fiq:
#endif
BX lr // Return to caller
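Inside the kernel, _tx_thread_interrupt_disable and _tx_thread_interrupt_restore are normally reached through the port's TX_DISABLE/TX_RESTORE macros rather than called directly. A hedged sketch of that pattern, assuming the usual tx_port.h definitions (TX_INTERRUPT_SAVE_AREA declares the local variable the macros use):

    #include "tx_api.h"

    static VOID kernel_style_critical_section(VOID)
    {
    TX_INTERRUPT_SAVE_AREA

        TX_DISABLE                 /* -> _tx_thread_interrupt_disable(), old posture saved */

        /* ... work that must not be preempted by interrupts ... */

        TX_RESTORE                 /* -> _tx_thread_interrupt_restore(saved posture) */
    }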

View File

@@ -23,6 +23,13 @@
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
#ifdef TX_ENABLE_FIQ_SUPPORT
DISABLE_INTS = 0xC0 // Disable IRQ/FIQ interrupts
#else
@@ -31,11 +38,6 @@ DISABLE_INTS = 0x80 // Disable IRQ interrupts
MODE_MASK = 0x1F // Mode mask
IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_irq_nesting_end
since it will never be called 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -43,7 +45,7 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_irq_nesting_end ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -85,11 +87,17 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_irq_nesting_end
.type _tx_thread_irq_nesting_end,function
_tx_thread_irq_nesting_end:
@@ -102,8 +110,4 @@ _tx_thread_irq_nesting_end:
BIC r0, r0, #MODE_MASK // Clear mode bits
ORR r0, r0, #IRQ_MODE_BITS // Build IRQ mode CPSR
MSR CPSR_c, r0 // Reenter IRQ mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -23,15 +23,17 @@
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
IRQ_DISABLE = 0x80 // IRQ disable bit
MODE_MASK = 0x1F // Mode mask
SYS_MODE_BITS = 0x1F // System mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_irq_nesting_start
since it will never be called 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -39,7 +41,7 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_irq_nesting_start ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -78,11 +80,17 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_irq_nesting_start
.type _tx_thread_irq_nesting_start,function
_tx_thread_irq_nesting_start:
@@ -95,8 +103,4 @@ _tx_thread_irq_nesting_start:
// and push r1 just to keep 8-byte alignment
BIC r0, r0, #IRQ_DISABLE // Build enable IRQ CPSR
MSR CPSR_c, r0 // Enter system mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -23,37 +23,29 @@
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
.global _tx_thread_execute_ptr
.global _tx_thread_current_ptr
.global _tx_timer_time_slice
/* Define the 16-bit Thumb mode veneer for _tx_thread_schedule for
applications calling this function from to 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_schedule
.type $_tx_thread_schedule,function
$_tx_thread_schedule:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_schedule // Call _tx_thread_schedule function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
.text
.align 2
#define IRQ_MODE 0x12 // IRQ mode
#define SVC_MODE 0x13 // SVC mode
/**************************************************************************/
/* */
/* FUNCTION RELEASE */
/* */
/* _tx_thread_schedule ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -92,11 +84,17 @@ $_tx_thread_schedule:
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_schedule
.type _tx_thread_schedule,function
_tx_thread_schedule:
@@ -140,30 +138,39 @@ __tx_thread_schedule_loop:
/* Setup time-slice, if present. */
LDR r2, =_tx_timer_time_slice // Pickup address of time-slice
// variable
LDR sp, [r0, #8] // Switch stack pointers
LDR r2, =_tx_timer_time_slice // Pickup address of time-slice variable
STR r3, [r2] // Setup time-slice
LDR sp, [r0, #8] // Switch stack pointers
#if (defined(TX_ENABLE_EXECUTION_CHANGE_NOTIFY) || defined(TX_EXECUTION_PROFILE_ENABLE))
/* Call the thread entry function to indicate the thread is executing. */
/* Call the thread entry function to indicate the thread is executing. */
MOV r5, r0 // Save r0
BL _tx_execution_thread_enter // Call the thread execution enter function
MOV r0, r5 // Restore r0
#endif
/* Determine if an interrupt frame or a synchronous task suspension frame
is present. */
/* Determine if an interrupt frame or a synchronous task suspension frame is present. */
LDMIA sp!, {r4, r5} // Pickup the stack type and saved CPSR
POP {r4, r5} // Pickup the stack type and saved CPSR
CMP r4, #0 // Check for synchronous context switch
BEQ _tx_solicited_return
#if !defined(THUMB_MODE)
MSR SPSR_cxsf, r5 // Setup SPSR for return
#else
CPS #IRQ_MODE // Enter IRQ mode
MSR SPSR_cxsf, r5 // Setup SPSR for return
LDR r1, [r0, #8] // Get thread SP
LDR lr, [r1, #0x40] // Get thread PC
CPS #SVC_MODE // Enter SVC mode
#endif
#ifdef TX_ENABLE_VFP_SUPPORT
LDR r1, [r0, #144] // Pickup the VFP enabled flag
CMP r1, #0 // Is the VFP enabled?
LDR r2, [r0, #144] // Pickup the VFP enabled flag
CMP r2, #0 // Is the VFP enabled?
BEQ _tx_skip_interrupt_vfp_restore // No, skip VFP interrupt restore
VLDMIA sp!, {D0-D15} // Recover D0-D15
VLDMIA sp!, {D16-D31} // Recover D16-D31
@@ -171,7 +178,15 @@ __tx_thread_schedule_loop:
VMSR FPSCR, r4 // Restore FPSCR
_tx_skip_interrupt_vfp_restore:
#endif
#if !defined(THUMB_MODE)
LDMIA sp!, {r0-r12, lr, pc}^ // Return to point of thread interrupt
#else
POP {r0-r12, lr} // Restore registers
ADD sp, #4 // Fix stack pointer (skip PC saved on stack)
CPS #IRQ_MODE // Enter IRQ mode
SUBS pc, lr, #0 // Return to point of thread interrupt
#endif
_tx_solicited_return:
@@ -185,52 +200,63 @@ _tx_solicited_return:
VMSR FPSCR, r4 // Restore FPSCR
_tx_skip_solicited_vfp_restore:
#endif
MSR CPSR_cxsf, r5 // Recover CPSR
LDMIA sp!, {r4-r11, lr} // Return to thread synchronously
#ifdef __THUMB_INTERWORK
POP {r4-r11, lr} // Restore registers
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif
#ifdef TX_ENABLE_VFP_SUPPORT
#if defined(THUMB_MODE)
.thumb_func
#endif
.global tx_thread_vfp_enable
.type tx_thread_vfp_enable,function
tx_thread_vfp_enable:
MRS r2, CPSR // Pickup the CPSR
MRS r0, CPSR // Pickup current CPSR
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Enable IRQ and FIQ interrupts
CPSID if // Disable IRQ and FIQ
#else
CPSID i // Enable IRQ interrupts
CPSID i // Disable IRQ
#endif
LDR r0, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r0] // Pickup current thread pointer
LDR r2, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r2] // Pickup current thread pointer
CMP r1, #0 // Check for NULL thread pointer
BEQ __tx_no_thread_to_enable // If NULL, skip VFP enable
MOV r0, #1 // Build enable value
STR r0, [r1, #144] // Set the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
__tx_no_thread_to_enable:
MSR CPSR_cxsf, r2 // Recover CPSR
BX LR // Return to caller
BEQ restore_ints // If NULL, skip VFP enable
MOV r2, #1 // Build enable value
STR r2, [r1, #144] // Set the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
B restore_ints
#if defined(THUMB_MODE)
.thumb_func
#endif
.global tx_thread_vfp_disable
.type tx_thread_vfp_disable,function
tx_thread_vfp_disable:
MRS r2, CPSR // Pickup the CPSR
MRS r0, CPSR // Pickup current CPSR
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Enable IRQ and FIQ interrupts
CPSID if // Disable IRQ and FIQ
#else
CPSID i // Enable IRQ interrupts
CPSID i // Disable IRQ
#endif
LDR r0, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r0] // Pickup current thread pointer
LDR r2, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r2] // Pickup current thread pointer
CMP r1, #0 // Check for NULL thread pointer
BEQ __tx_no_thread_to_disable // If NULL, skip VFP disable
MOV r0, #0 // Build disable value
STR r0, [r1, #144] // Clear the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
__tx_no_thread_to_disable:
MSR CPSR_cxsf, r2 // Recover CPSR
BX LR // Return to caller
BEQ restore_ints // If NULL, skip VFP disable
MOV r2, #0 // Build disable value
STR r2, [r1, #144] // Clear the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
restore_ints:
TST r0, #IRQ_MASK
BNE no_irq
CPSIE i
no_irq:
#ifdef TX_ENABLE_FIQ_SUPPORT
TST r0, #FIQ_MASK
BNE no_fiq
CPSIE f
no_fiq:
#endif
BX lr
#endif
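These two entry points are what a VFP-using application thread calls around its floating-point work so the scheduler knows to save and restore D0-D31 and FPSCR for that thread. A usage sketch, assuming the port is built with TX_ENABLE_VFP_SUPPORT; the prototypes are repeated here only so the sketch stands alone (they are declared by this port's tx_port.h).

    #include "tx_api.h"

    void tx_thread_vfp_enable(void);    /* provided by this port */
    void tx_thread_vfp_disable(void);

    static VOID fp_worker_entry(ULONG input)
    {
        (void)input;

        tx_thread_vfp_enable();         /* mark this thread's context as VFP-bearing  */

        /* ... floating-point heavy processing ... */

        tx_thread_vfp_disable();        /* drop the extra VFP save/restore afterwards */
    }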

View File

@@ -23,33 +23,22 @@
#include "tx_user.h"
#endif
.arm
SVC_MODE = 0x13 // SVC mode
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSR_MASK = 0xDF // Mask initial CPSR, IRQ & FIQ interrupts enabled
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
CPSR_MASK = 0x9F // Mask initial CPSR, IRQ interrupts enabled
.arm
#endif
SVC_MODE = 0x13 // SVC mode
/* Define the 16-bit Thumb mode veneer for _tx_thread_stack_build for
applications calling this function from to 16-bit Thumb mode. */
.text
.align 2
.thumb
.global $_tx_thread_stack_build
.type $_tx_thread_stack_build,function
$_tx_thread_stack_build:
BX pc // Switch to 32-bit mode
NOP //
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_stack_build // Call _tx_thread_stack_build function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
THUMB_MASK = 0x20 // Thumb bit mask
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSR_MASK = 0xFF // Mask initial CPSR, T, IRQ & FIQ interrupts enabled
#else
CPSR_MASK = 0xBF // Mask initial CPSR, T, IRQ interrupts enabled
#endif
.text
.align 2
@@ -58,7 +47,7 @@ $_tx_thread_stack_build:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_stack_build ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -93,11 +82,17 @@ $_tx_thread_stack_build:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_stack_build
.type _tx_thread_stack_build,function
_tx_thread_stack_build:
@@ -135,6 +130,15 @@ _tx_thread_stack_build:
MOV r3, #1 // Build interrupt stack type
STR r3, [r2, #0] // Store stack type
MRS r3, CPSR // Pickup CPSR
BIC r3, #CPSR_MASK // Mask mode bits of CPSR
ORR r3, #SVC_MODE // Build CPSR, SYS mode, interrupts enabled
TST r1, #1 // Check if the initial PC is a Thumb function
IT NE
ORRNE r3, #THUMB_MASK // If the initial PC is a thumb function, CPSR must reflect this
STR r3, [r2, #4] // Store initial CPSR
MOV r3, #0 // Build initial register value
STR r3, [r2, #8] // Store initial r0
STR r3, [r2, #12] // Store initial r1
@@ -146,26 +150,20 @@ _tx_thread_stack_build:
STR r3, [r2, #36] // Store initial r7
STR r3, [r2, #40] // Store initial r8
STR r3, [r2, #44] // Store initial r9
LDR r3, [r0, #12] // Pickup stack starting address
STR r3, [r2, #48] // Store initial r10 (sl)
LDR r3,=_tx_thread_schedule // Pickup address of _tx_thread_schedule for GDB backtrace
STR r3, [r2, #60] // Store initial r14 (lr)
MOV r3, #0 // Build initial register value
STR r3, [r2, #52] // Store initial r11
STR r3, [r2, #56] // Store initial r12
STR r1, [r2, #64] // Store initial pc
STR r3, [r2, #68] // 0 for back-trace
MRS r1, CPSR // Pickup CPSR
BIC r1, r1, #CPSR_MASK // Mask mode bits of CPSR
ORR r3, r1, #SVC_MODE // Build CPSR, SVC mode, interrupts enabled
STR r3, [r2, #4] // Store initial CPSR
LDR r3, [r0, #12] // Pickup stack starting address
STR r3, [r2, #48] // Store initial r10 (sl)
LDR r3,=_tx_thread_schedule // Pickup address of _tx_thread_schedule for GDB backtrace
STR r3, [r2, #60] // Store initial r14 (lr)
STR r1, [r2, #64] // Store initial pc
/* Setup stack pointer. */
STR r2, [r0, #8] // Save stack pointer in thread's
// control block
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif
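The offsets written above define the interrupt-style frame that _tx_thread_schedule later unwinds when it starts the thread for the first time. For readability, here is a hedged C view of that initial frame, lowest address first; the field names are illustrative and the authoritative layout is the list of STR offsets in the assembly.

    #include <stdint.h>

    /* Initial "interrupt" (stack type 1) frame built by _tx_thread_stack_build. */
    typedef struct
    {
        uint32_t stack_type;     /* +0   always 1 (interrupt frame)                  */
        uint32_t cpsr;           /* +4   SVC mode, T bit set for a Thumb entry point */
        uint32_t r0_r9[10];      /* +8   r0-r9, zero initialised                     */
        uint32_t r10_sl;         /* +48  stack starting address                      */
        uint32_t r11;            /* +52  zero                                        */
        uint32_t r12;            /* +56  zero                                        */
        uint32_t lr;             /* +60  _tx_thread_schedule, for GDB backtrace      */
        uint32_t pc;             /* +64  thread entry point                          */
        uint32_t backtrace_end;  /* +68  zero terminator for the back-trace          */
    } initial_interrupt_frame_t;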

View File

@@ -23,33 +23,17 @@
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
.global _tx_thread_current_ptr
.global _tx_timer_time_slice
.global _tx_thread_schedule
/* Define the 16-bit Thumb mode veneer for _tx_thread_system_return for
applications calling this function from to 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_system_return
.type $_tx_thread_system_return,function
$_tx_thread_system_return:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_system_return // Call _tx_thread_system_return function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
.text
.align 2
/**************************************************************************/
@@ -57,7 +41,7 @@ $_tx_thread_system_return:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_system_return ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -95,18 +79,24 @@ $_tx_thread_system_return:
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_system_return
.type _tx_thread_system_return,function
_tx_thread_system_return:
/* Save minimal context on the stack. */
STMDB sp!, {r4-r11, lr} // Save minimal context
PUSH {r4-r11, lr} // Save minimal context
LDR r4, =_tx_thread_current_ptr // Pickup address of current ptr
LDR r5, [r4] // Pickup current thread pointer
@@ -123,8 +113,11 @@ _tx_skip_solicited_vfp_save:
#endif
MOV r0, #0 // Build a solicited stack type
MRS r1, CPSR // Pickup the CPSR
STMDB sp!, {r0-r1} // Save type and CPSR
MRS r1, CPSR // Pickup the CPSR, T bit is always cleared by hardware
TST lr, #1 // Check if calling function is in Thumb mode
IT NE
ORRNE r1, #0x20 // Set the T bit so that the correct mode is set on return
PUSH {r0-r1} // Save type and CPSR
/* Lockout interrupts. */

View File

@@ -40,7 +40,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_vectored_context_save ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -77,9 +77,9 @@
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_thread_vectored_context_save

View File

@@ -23,8 +23,12 @@
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
/* Define Assembly language external references... */
@@ -37,26 +41,6 @@
.global _tx_timer_expired
.global _tx_thread_time_slice
/* Define the 16-bit Thumb mode veneer for _tx_timer_interrupt for
applications calling this function from to 16-bit Thumb mode. */
.text
.align 2
.thumb
.global $_tx_timer_interrupt
.type $_tx_timer_interrupt,function
$_tx_timer_interrupt:
BX pc // Switch to 32-bit mode
NOP //
.arm
STMFD sp!, {lr} // Save return address
BL _tx_timer_interrupt // Call _tx_timer_interrupt function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
.text
.align 2
/**************************************************************************/
@@ -64,7 +48,7 @@ $_tx_timer_interrupt:
/* FUNCTION RELEASE */
/* */
/* _tx_timer_interrupt ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -101,11 +85,17 @@ $_tx_timer_interrupt:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 03-08-2023 Cindy Deng Modified comment(s), added */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.2.1 */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_timer_interrupt
.type _tx_timer_interrupt,function
_tx_timer_interrupt:
@@ -197,7 +187,7 @@ __tx_timer_done:
__tx_something_expired:
STMDB sp!, {r0, lr} // Save the lr register on the stack
PUSH {r0, lr} // Save the lr register on the stack
// and save r0 just to keep 8-byte alignment
/* Did a timer expire? */
@@ -225,13 +215,9 @@ __tx_timer_dont_activate:
__tx_timer_not_ts_expiration:
LDMIA sp!, {r0, lr} // Recover lr register (r0 is just there for
POP {r0, lr} // Recover lr register (r0 is just there for
// the 8-byte stack alignment
__tx_timer_nothing_expired:
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif

View File

@@ -321,7 +321,7 @@ void tx_thread_vfp_disable(void);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARMv7-A Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARMv7-A Version 6.4.0 *";
#else
extern CHAR _tx_version_id[];
#endif

View File

@@ -19,17 +19,20 @@
/** */
/**************************************************************************/
/**************************************************************************/
.arm
#ifdef TX_ENABLE_FIQ_SUPPORT
SVC_MODE = 0xD3 // Disable IRQ/FIQ, SVC mode
IRQ_MODE = 0xD2 // Disable IRQ/FIQ, IRQ mode
#else
SVC_MODE = 0x93 // Disable IRQ, SVC mode
IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
SVC_MODE = 0x13 // SVC mode
IRQ_MODE = 0x12 // IRQ mode
.global _tx_thread_system_state
.global _tx_thread_current_ptr
.global _tx_thread_execute_ptr
@@ -42,7 +45,6 @@ IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_context_restore
since it will never be called 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -50,7 +52,7 @@ IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
/* FUNCTION RELEASE */
/* */
/* _tx_thread_context_restore ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -88,6 +90,12 @@ IRQ_MODE = 0x92 // Disable IRQ, IRQ mode
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
.global _tx_thread_context_restore
@@ -123,9 +131,9 @@ _tx_thread_context_restore:
/* Just recover the saved registers and return to the point of
interrupt. */
LDMIA sp!, {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
POP {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
MSR SPSR_cxsf, r0 // Put SPSR back
LDMIA sp!, {r0-r3} // Recover r0-r3
POP {r0-r3} // Recover r0-r3
MOVS pc, lr // Return to point of interrupt
__tx_thread_not_nested_restore:
@@ -154,26 +162,23 @@ __tx_thread_no_preempt_restore:
/* Pickup the saved stack pointer. */
/* Recover the saved context and return to the point of interrupt. */
LDMIA sp!, {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
POP {r0, r10, r12, lr} // Recover SPSR, POI, and scratch regs
MSR SPSR_cxsf, r0 // Put SPSR back
LDMIA sp!, {r0-r3} // Recover r0-r3
POP {r0-r3} // Recover r0-r3
MOVS pc, lr // Return to point of interrupt
__tx_thread_preempt_restore:
LDMIA sp!, {r3, r10, r12, lr} // Recover temporarily saved registers
POP {r3, r10, r12, lr} // Recover temporarily saved registers
MOV r1, lr // Save lr (point of interrupt)
MOV r2, #SVC_MODE // Build SVC mode CPSR
MSR CPSR_c, r2 // Enter SVC mode
CPS #SVC_MODE // Enter SVC mode
STR r1, [sp, #-4]! // Save point of interrupt
STMDB sp!, {r4-r12, lr} // Save upper half of registers
PUSH {r4-r12, lr} // Save upper half of registers
MOV r4, r3 // Save SPSR in r4
MOV r2, #IRQ_MODE // Build IRQ mode CPSR
MSR CPSR_c, r2 // Enter IRQ mode
LDMIA sp!, {r0-r3} // Recover r0-r3
MOV r5, #SVC_MODE // Build SVC mode CPSR
MSR CPSR_c, r5 // Enter SVC mode
STMDB sp!, {r0-r3} // Save r0-r3 on thread's stack
CPS #IRQ_MODE // Enter IRQ mode
POP {r0-r3} // Recover r0-r3
CPS #SVC_MODE // Enter SVC mode
PUSH {r0-r3} // Save r0-r3 on thread's stack
LDR r1, =_tx_thread_current_ptr // Pickup address of current thread ptr
LDR r0, [r1] // Pickup current thread pointer
@@ -186,13 +191,11 @@ __tx_thread_preempt_restore:
STR r2, [sp, #-4]! // Save FPSCR
VSTMDB sp!, {D16-D31} // Save D16-D31
VSTMDB sp!, {D0-D15} // Save D0-D15
_tx_skip_irq_vfp_save:
#endif
MOV r3, #1 // Build interrupt stack type
STMDB sp!, {r3, r4} // Save interrupt stack type and SPSR
PUSH {r3, r4} // Save interrupt stack type and SPSR
STR sp, [r0, #8] // Save stack pointer in thread control
// block
@@ -217,6 +220,5 @@ __tx_thread_dont_save_ts:
__tx_thread_idle_system_restore:
/* Just return back to the scheduler! */
MOV r0, #SVC_MODE // Build SVC mode CPSR
MSR CPSR_c, r0 // Enter SVC mode
CPS #SVC_MODE // Enter SVC mode
B _tx_thread_schedule // Return to scheduler

View File

@@ -19,6 +19,16 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
.global _tx_thread_system_state
.global _tx_thread_current_ptr
@@ -28,7 +38,6 @@
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_context_save
since it will never be called 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -36,7 +45,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_context_save ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -73,6 +82,12 @@
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
.global _tx_thread_context_save
@@ -84,7 +99,7 @@ _tx_thread_context_save:
/* Check for a nested interrupt condition. */
STMDB sp!, {r0-r3} // Save some working registers
PUSH {r0-r3} // Save some working registers
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Disable FIQ interrupts
#endif
@@ -95,15 +110,15 @@ _tx_thread_context_save:
/* Nested interrupt condition. */
ADD r2, r2, #1 // Increment the interrupt counter
ADD r2, #1 // Increment the interrupt counter
STR r2, [r3] // Store it back in the variable
/* Save the rest of the scratch registers on the stack and return to the
calling ISR. */
MRS r0, SPSR // Pickup saved SPSR
SUB lr, lr, #4 // Adjust point of interrupt
STMDB sp!, {r0, r10, r12, lr} // Store other registers
SUB lr, #4 // Adjust point of interrupt
PUSH {r0, r10, r12, lr} // Store other registers
/* Return to the ISR. */
@@ -123,7 +138,7 @@ _tx_thread_context_save:
__tx_thread_not_nested_save:
/* Otherwise, not nested, check to see if a thread was running. */
ADD r2, r2, #1 // Increment the interrupt counter
ADD r2, #1 // Increment the interrupt counter
STR r2, [r3] // Store it back in the variable
LDR r1, =_tx_thread_current_ptr // Pickup address of current thread ptr
LDR r0, [r1] // Pickup current thread pointer
@@ -134,8 +149,8 @@ __tx_thread_not_nested_save:
/* Save minimal context of interrupted thread. */
MRS r2, SPSR // Pickup saved SPSR
SUB lr, lr, #4 // Adjust point of interrupt
STMDB sp!, {r2, r10, r12, lr} // Store other registers
SUB lr, #4 // Adjust point of interrupt
PUSH {r2, r10, r12, lr} // Store other registers
MOV r10, #0 // Clear stack limit
@@ -168,5 +183,5 @@ __tx_thread_idle_system_save:
POP {lr} // Recover ISR lr
#endif
ADD sp, sp, #16 // Recover saved registers
ADD sp, #16 // Recover saved registers
B __tx_irq_processing_return // Continue IRQ processing

View File

@@ -19,6 +19,9 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
SVC_MODE = 0xD3 // SVC mode
FIQ_MODE = 0xD1 // FIQ mode
@@ -48,7 +51,7 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_context_restore ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -86,6 +89,9 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_thread_fiq_context_restore

View File

@@ -19,6 +19,9 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.global _tx_thread_system_state
.global _tx_thread_current_ptr
@@ -37,7 +40,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_context_save ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -74,6 +77,9 @@
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_thread_fiq_context_save

View File

@@ -19,6 +19,16 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
#ifdef TX_ENABLE_FIQ_SUPPORT
DISABLE_INTS = 0xC0 // Disable IRQ/FIQ interrupts
@@ -28,11 +38,6 @@ DISABLE_INTS = 0x80 // Disable IRQ interrupts
MODE_MASK = 0x1F // Mode mask
FIQ_MODE_BITS = 0x11 // FIQ mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_fiq_nesting_end
since it will never be called 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -40,7 +45,7 @@ FIQ_MODE_BITS = 0x11 // FIQ mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_nesting_end ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -82,8 +87,17 @@ FIQ_MODE_BITS = 0x11 // FIQ mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_fiq_nesting_end
.type _tx_thread_fiq_nesting_end,function
_tx_thread_fiq_nesting_end:
@@ -97,8 +111,4 @@ _tx_thread_fiq_nesting_end:
ORR r0, r0, #FIQ_MODE_BITS // Build IRQ mode CPSR
MSR CPSR_c, r0 // Reenter IRQ mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -19,16 +19,21 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
FIQ_DISABLE = 0x40 // FIQ disable bit
MODE_MASK = 0x1F // Mode mask
SYS_MODE_BITS = 0x1F // System mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_fiq_nesting_start
since it will never be called in 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -36,7 +41,7 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_fiq_nesting_start ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -75,8 +80,17 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_fiq_nesting_start
.type _tx_thread_fiq_nesting_start,function
_tx_thread_fiq_nesting_start:
@@ -89,8 +103,4 @@ _tx_thread_fiq_nesting_start:
// and push r1 just to keep 8-byte alignment
BIC r0, r0, #FIQ_DISABLE // Build enable FIQ CPSR
MSR CPSR_c, r0 // Enter system mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -19,26 +19,22 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
INT_MASK = 0x03F
/* Define the 16-bit Thumb mode veneer for _tx_thread_interrupt_control for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_interrupt_control
$_tx_thread_interrupt_control:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_interrupt_control // Call _tx_thread_interrupt_control function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
INT_MASK = 0x0C0
IRQ_MASK = 0x080
#ifdef TX_ENABLE_FIQ_SUPPORT
FIQ_MASK = 0x040
#endif
.text
.align 2
@@ -47,7 +43,7 @@ $_tx_thread_interrupt_control:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_interrupt_control ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -80,25 +76,38 @@ $_tx_thread_interrupt_control:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_interrupt_control
.type _tx_thread_interrupt_control,function
_tx_thread_interrupt_control:
MRS r1, CPSR // Pickup current CPSR
/* Pickup current interrupt lockout posture. */
MRS r3, CPSR // Pickup current CPSR
MOV r2, #INT_MASK // Build interrupt mask
AND r1, r3, r2 // Clear interrupt lockout bits
ORR r1, r1, r0 // Or-in new interrupt lockout bits
/* Apply the new interrupt posture. */
MSR CPSR_c, r1 // Setup new CPSR
BIC r0, r3, r2 // Return previous interrupt mask
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Disable IRQ and FIQ
#else
MOV pc, lr // Return to caller
CPSID i // Disable IRQ
#endif
TST r0, #IRQ_MASK
BNE no_irq
CPSIE i
no_irq:
#ifdef TX_ENABLE_FIQ_SUPPORT
TST r0, #FIQ_MASK
BNE no_fiq
CPSIE f
no_fiq:
#endif
AND r0, r1, #INT_MASK
BX lr
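Under the AAPCS the rewritten routine is an ordinary C-callable function: the new lockout posture arrives in r0, interrupts are first disabled with CPSID and then selectively re-enabled with CPSIE, and the previous posture (the old CPSR masked with INT_MASK) is returned in r0. A hedged usage sketch; the extern prototype is inferred from that register usage and the wrapper is illustrative only:
/* Sketch: a critical section built directly on _tx_thread_interrupt_control. */
extern unsigned int _tx_thread_interrupt_control(unsigned int new_posture);
#define INT_MASK 0x0C0u                    /* IRQ + FIQ lockout bits, as in the port file */
void critical_update(volatile unsigned long *counter)
{
    unsigned int old_posture;
    old_posture = _tx_thread_interrupt_control(INT_MASK);   /* lock out IRQ (and FIQ)   */
    (*counter)++;                                            /* protected work           */
    _tx_thread_interrupt_control(old_posture);               /* restore previous posture */
}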

View File

@@ -19,23 +19,16 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
/* Define the 16-bit Thumb mode veneer for _tx_thread_interrupt_disable for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_interrupt_disable
$_tx_thread_interrupt_disable:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_interrupt_disable // Call _tx_thread_interrupt_disable function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
.text
.align 2
@@ -44,7 +37,7 @@ $_tx_thread_interrupt_disable:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_interrupt_disable ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -76,8 +69,17 @@ $_tx_thread_interrupt_disable:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_interrupt_disable
.type _tx_thread_interrupt_disable,function
_tx_thread_interrupt_disable:
@@ -94,8 +96,4 @@ _tx_thread_interrupt_disable:
CPSID i // Disable IRQ
#endif
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif

View File

@@ -19,23 +19,21 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
/* Define the 16-bit Thumb mode veneer for _tx_thread_interrupt_restore for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_interrupt_restore
$_tx_thread_interrupt_restore:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_interrupt_restore // Call _tx_thread_interrupt_restore function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
IRQ_MASK = 0x080
#ifdef TX_ENABLE_FIQ_SUPPORT
FIQ_MASK = 0x040
#endif
.text
.align 2
@@ -44,7 +42,7 @@ $_tx_thread_interrupt_restore:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_interrupt_restore ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -77,17 +75,30 @@ $_tx_thread_interrupt_restore:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_interrupt_restore
.type _tx_thread_interrupt_restore,function
_tx_thread_interrupt_restore:
/* Apply the new interrupt posture. */
MSR CPSR_c, r0 // Setup new CPSR
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
TST r0, #IRQ_MASK
BNE no_irq
CPSIE i
no_irq:
#ifdef TX_ENABLE_FIQ_SUPPORT
TST r0, #FIQ_MASK
BNE no_fiq
CPSIE f
no_fiq:
#endif
BX lr // Return to caller

View File

@@ -19,6 +19,16 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
#ifdef TX_ENABLE_FIQ_SUPPORT
DISABLE_INTS = 0xC0 // Disable IRQ/FIQ interrupts
@@ -28,11 +38,6 @@ DISABLE_INTS = 0x80 // Disable IRQ interrupts
MODE_MASK = 0x1F // Mode mask
IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_irq_nesting_end
since it will never be called in 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -40,7 +45,7 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_irq_nesting_end ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -82,8 +87,17 @@ IRQ_MODE_BITS = 0x12 // IRQ mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_irq_nesting_end
.type _tx_thread_irq_nesting_end,function
_tx_thread_irq_nesting_end:
@@ -96,8 +110,4 @@ _tx_thread_irq_nesting_end:
BIC r0, r0, #MODE_MASK // Clear mode bits
ORR r0, r0, #IRQ_MODE_BITS // Build IRQ mode CPSR
MSR CPSR_c, r0 // Reenter IRQ mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -19,16 +19,21 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
IRQ_DISABLE = 0x80 // IRQ disable bit
MODE_MASK = 0x1F // Mode mask
SYS_MODE_BITS = 0x1F // System mode bits
/* No 16-bit Thumb mode veneer code is needed for _tx_thread_irq_nesting_start
since it will never be called in 16-bit mode. */
.arm
.text
.align 2
/**************************************************************************/
@@ -36,7 +41,7 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* FUNCTION RELEASE */
/* */
/* _tx_thread_irq_nesting_start ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -75,8 +80,17 @@ SYS_MODE_BITS = 0x1F // System mode bits
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_irq_nesting_start
.type _tx_thread_irq_nesting_start,function
_tx_thread_irq_nesting_start:
@@ -89,8 +103,4 @@ _tx_thread_irq_nesting_start:
// and push r1 just to keep 8-byte alignment
BIC r0, r0, #IRQ_DISABLE // Build enable IRQ CPSR
MSR CPSR_c, r0 // Enter system mode
#ifdef __THUMB_INTERWORK
BX r3 // Return to caller
#else
MOV pc, r3 // Return to caller
#endif

View File

@@ -19,38 +19,33 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
.global _tx_thread_execute_ptr
.global _tx_thread_current_ptr
.global _tx_timer_time_slice
/* Define the 16-bit Thumb mode veneer for _tx_thread_schedule for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_schedule
.type $_tx_thread_schedule,function
$_tx_thread_schedule:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_schedule // Call _tx_thread_schedule function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
.text
.align 2
#define IRQ_MODE 0x12 // IRQ mode
#define SVC_MODE 0x13 // SVC mode
/**************************************************************************/
/* */
/* FUNCTION RELEASE */
/* */
/* _tx_thread_schedule ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -89,8 +84,17 @@ $_tx_thread_schedule:
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_schedule
.type _tx_thread_schedule,function
_tx_thread_schedule:
@@ -134,30 +138,39 @@ __tx_thread_schedule_loop:
/* Setup time-slice, if present. */
LDR r2, =_tx_timer_time_slice // Pickup address of time-slice
// variable
LDR sp, [r0, #8] // Switch stack pointers
LDR r2, =_tx_timer_time_slice // Pickup address of time-slice variable
STR r3, [r2] // Setup time-slice
LDR sp, [r0, #8] // Switch stack pointers
#if (defined(TX_ENABLE_EXECUTION_CHANGE_NOTIFY) || defined(TX_EXECUTION_PROFILE_ENABLE))
/* Call the thread entry function to indicate the thread is executing. */
/* Call the thread entry function to indicate the thread is executing. */
MOV r5, r0 // Save r0
BL _tx_execution_thread_enter // Call the thread execution enter function
MOV r0, r5 // Restore r0
#endif
/* Determine if an interrupt frame or a synchronous task suspension frame
is present. */
/* Determine if an interrupt frame or a synchronous task suspension frame is present. */
LDMIA sp!, {r4, r5} // Pickup the stack type and saved CPSR
POP {r4, r5} // Pickup the stack type and saved CPSR
CMP r4, #0 // Check for synchronous context switch
BEQ _tx_solicited_return
#if !defined(THUMB_MODE)
MSR SPSR_cxsf, r5 // Setup SPSR for return
#else
CPS #IRQ_MODE // Enter IRQ mode
MSR SPSR_cxsf, r5 // Setup SPSR for return
LDR r1, [r0, #8] // Get thread SP
LDR lr, [r1, #0x40] // Get thread PC
CPS #SVC_MODE // Enter SVC mode
#endif
#ifdef TX_ENABLE_VFP_SUPPORT
LDR r1, [r0, #144] // Pickup the VFP enabled flag
CMP r1, #0 // Is the VFP enabled?
LDR r2, [r0, #144] // Pickup the VFP enabled flag
CMP r2, #0 // Is the VFP enabled?
BEQ _tx_skip_interrupt_vfp_restore // No, skip VFP interrupt restore
VLDMIA sp!, {D0-D15} // Recover D0-D15
VLDMIA sp!, {D16-D31} // Recover D16-D31
@@ -165,7 +178,15 @@ __tx_thread_schedule_loop:
VMSR FPSCR, r4 // Restore FPSCR
_tx_skip_interrupt_vfp_restore:
#endif
#if !defined(THUMB_MODE)
LDMIA sp!, {r0-r12, lr, pc}^ // Return to point of thread interrupt
#else
POP {r0-r12, lr} // Restore registers
ADD sp, #4 // Fix stack pointer (skip PC saved on stack)
CPS #IRQ_MODE // Enter IRQ mode
SUBS pc, lr, #0 // Return to point of thread interrupt
#endif
_tx_solicited_return:
@@ -179,52 +200,63 @@ _tx_solicited_return:
VMSR FPSCR, r4 // Restore FPSCR
_tx_skip_solicited_vfp_restore:
#endif
MSR CPSR_cxsf, r5 // Recover CPSR
LDMIA sp!, {r4-r11, lr} // Return to thread synchronously
#ifdef __THUMB_INTERWORK
POP {r4-r11, lr} // Restore registers
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif
#ifdef TX_ENABLE_VFP_SUPPORT
#if defined(THUMB_MODE)
.thumb_func
#endif
.global tx_thread_vfp_enable
.type tx_thread_vfp_enable,function
tx_thread_vfp_enable:
MRS r2, CPSR // Pickup the CPSR
MRS r0, CPSR // Pickup current CPSR
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Enable IRQ and FIQ interrupts
CPSID if // Disable IRQ and FIQ
#else
CPSID i // Enable IRQ interrupts
CPSID i // Disable IRQ
#endif
LDR r0, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r0] // Pickup current thread pointer
LDR r2, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r2] // Pickup current thread pointer
CMP r1, #0 // Check for NULL thread pointer
BEQ __tx_no_thread_to_enable // If NULL, skip VFP enable
MOV r0, #1 // Build enable value
STR r0, [r1, #144] // Set the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
__tx_no_thread_to_enable:
MSR CPSR_cxsf, r2 // Recover CPSR
BX LR // Return to caller
BEQ restore_ints // If NULL, skip VFP enable
MOV r2, #1 // Build enable value
STR r2, [r1, #144] // Set the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
B restore_ints
#if defined(THUMB_MODE)
.thumb_func
#endif
.global tx_thread_vfp_disable
.type tx_thread_vfp_disable,function
tx_thread_vfp_disable:
MRS r2, CPSR // Pickup the CPSR
MRS r0, CPSR // Pickup current CPSR
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSID if // Enable IRQ and FIQ interrupts
CPSID if // Disable IRQ and FIQ
#else
CPSID i // Enable IRQ interrupts
CPSID i // Disable IRQ
#endif
LDR r0, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r0] // Pickup current thread pointer
LDR r2, =_tx_thread_current_ptr // Build current thread pointer address
LDR r1, [r2] // Pickup current thread pointer
CMP r1, #0 // Check for NULL thread pointer
BEQ __tx_no_thread_to_disable // If NULL, skip VFP disable
MOV r0, #0 // Build disable value
STR r0, [r1, #144] // Clear the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
__tx_no_thread_to_disable:
MSR CPSR_cxsf, r2 // Recover CPSR
BX LR // Return to caller
BEQ restore_ints // If NULL, skip VFP disable
MOV r2, #0 // Build disable value
STR r2, [r1, #144] // Clear the VFP enable flag (tx_thread_vfp_enable field in TX_THREAD)
restore_ints:
TST r0, #IRQ_MASK
BNE no_irq
CPSIE i
no_irq:
#ifdef TX_ENABLE_FIQ_SUPPORT
TST r0, #FIQ_MASK
BNE no_fiq
CPSIE f
no_fiq:
#endif
BX lr
#endif
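tx_thread_vfp_enable and tx_thread_vfp_disable now simply set or clear the per-thread flag at offset 144 with interrupts locked out; the scheduler reads that flag to decide whether D0-D31 and FPSCR are saved and restored around context switches. A hedged sketch of how a thread might use the pair (the entry function and its loop are illustrative only):
/* Sketch: opting a thread into VFP context preservation.        */
/* The prototypes match the global labels in the assembly above. */
extern void tx_thread_vfp_enable(void);
extern void tx_thread_vfp_disable(void);
void my_fp_thread_entry(unsigned long input)     /* illustrative thread entry */
{
    tx_thread_vfp_enable();                      /* scheduler now preserves VFP state */
    volatile float accumulator = 0.0f;
    for (unsigned long i = 0; i < input; i++)
    {
        accumulator += (float)i * 0.5f;          /* floating-point work */
    }
    tx_thread_vfp_disable();                     /* drop the extra context when done */
}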

View File

@@ -19,33 +19,26 @@
/** */
/**************************************************************************/
/**************************************************************************/
.arm
SVC_MODE = 0x13 // SVC mode
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSR_MASK = 0xDF // Mask initial CPSR, IRQ & FIQ interrupts enabled
#else
CPSR_MASK = 0x9F // Mask initial CPSR, IRQ interrupts enabled
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
/* Define the 16-bit Thumb mode veneer for _tx_thread_stack_build for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.syntax unified
#if defined(THUMB_MODE)
.thumb
.global $_tx_thread_stack_build
.type $_tx_thread_stack_build,function
$_tx_thread_stack_build:
BX pc // Switch to 32-bit mode
NOP //
#else
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_stack_build // Call _tx_thread_stack_build function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
#endif
SVC_MODE = 0x13 // SVC mode
THUMB_MASK = 0x20 // Thumb bit mask
#ifdef TX_ENABLE_FIQ_SUPPORT
CPSR_MASK = 0xFF // Mask initial CPSR, T, IRQ & FIQ interrupts enabled
#else
CPSR_MASK = 0xBF // Mask initial CPSR, T, IRQ interrupts enabled
#endif
.text
.align 2
@@ -54,7 +47,7 @@ $_tx_thread_stack_build:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_stack_build ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -89,8 +82,17 @@ $_tx_thread_stack_build:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_stack_build
.type _tx_thread_stack_build,function
_tx_thread_stack_build:
@@ -128,6 +130,15 @@ _tx_thread_stack_build:
MOV r3, #1 // Build interrupt stack type
STR r3, [r2, #0] // Store stack type
MRS r3, CPSR // Pickup CPSR
BIC r3, #CPSR_MASK // Mask mode bits of CPSR
ORR r3, #SVC_MODE // Build CPSR, SVC mode, interrupts enabled
TST r1, #1 // Check if the initial PC is a Thumb function
IT NE
ORRNE r3, #THUMB_MASK // If the initial PC is a thumb function, CPSR must reflect this
STR r3, [r2, #4] // Store initial CPSR
MOV r3, #0 // Build initial register value
STR r3, [r2, #8] // Store initial r0
STR r3, [r2, #12] // Store initial r1
@@ -139,26 +150,20 @@ _tx_thread_stack_build:
STR r3, [r2, #36] // Store initial r7
STR r3, [r2, #40] // Store initial r8
STR r3, [r2, #44] // Store initial r9
LDR r3, [r0, #12] // Pickup stack starting address
STR r3, [r2, #48] // Store initial r10 (sl)
LDR r3,=_tx_thread_schedule // Pickup address of _tx_thread_schedule for GDB backtrace
STR r3, [r2, #60] // Store initial r14 (lr)
MOV r3, #0 // Build initial register value
STR r3, [r2, #52] // Store initial r11
STR r3, [r2, #56] // Store initial r12
STR r1, [r2, #64] // Store initial pc
STR r3, [r2, #68] // 0 for back-trace
MRS r1, CPSR // Pickup CPSR
BIC r1, r1, #CPSR_MASK // Mask mode bits of CPSR
ORR r3, r1, #SVC_MODE // Build CPSR, SVC mode, interrupts enabled
STR r3, [r2, #4] // Store initial CPSR
LDR r3, [r0, #12] // Pickup stack starting address
STR r3, [r2, #48] // Store initial r10 (sl)
LDR r3,=_tx_thread_schedule // Pickup address of _tx_thread_schedule for GDB backtrace
STR r3, [r2, #60] // Store initial r14 (lr)
STR r1, [r2, #64] // Store initial pc
/* Setup stack pointer. */
STR r2, [r0, #8] // Save stack pointer in thread's
// control block
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif
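The new stack-build logic derives the thread's initial CPSR from the entry point address: bit 0 of the initial PC selects Thumb, so the T bit is set in the saved CPSR before execution starts, while the mode bits are forced to SVC and the interrupt disable bits are left clear. A small C sketch of the same computation, with the constant values copied from the definitions above (illustrative only):
/* Sketch: composing the initial CPSR stored in a new thread's stack frame. */
#define SVC_MODE   0x13u
#define THUMB_MASK 0x20u
#define CPSR_MASK  0xFFu        /* 0xBF when TX_ENABLE_FIQ_SUPPORT is not defined */
static unsigned int build_initial_cpsr(unsigned int current_cpsr, unsigned int entry_pc)
{
    unsigned int cpsr = current_cpsr & ~CPSR_MASK;   /* clear mode, T, I and F bits   */
    cpsr |= SVC_MODE;                                /* threads start in SVC mode     */
    if (entry_pc & 1u)                               /* Thumb entry point?            */
    {
        cpsr |= THUMB_MASK;                          /* reflect it in the T bit       */
    }
    return cpsr;                                     /* interrupts remain enabled     */
}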

View File

@@ -19,34 +19,21 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
.global _tx_thread_current_ptr
.global _tx_timer_time_slice
.global _tx_thread_schedule
/* Define the 16-bit Thumb mode veneer for _tx_thread_system_return for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.global $_tx_thread_system_return
.type $_tx_thread_system_return,function
$_tx_thread_system_return:
.thumb
BX pc // Switch to 32-bit mode
NOP //
.arm
STMFD sp!, {lr} // Save return address
BL _tx_thread_system_return // Call _tx_thread_system_return function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
.text
.align 2
/**************************************************************************/
@@ -54,7 +41,7 @@ $_tx_thread_system_return:
/* FUNCTION RELEASE */
/* */
/* _tx_thread_system_return ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -92,15 +79,24 @@ $_tx_thread_system_return:
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_thread_system_return
.type _tx_thread_system_return,function
_tx_thread_system_return:
/* Save minimal context on the stack. */
STMDB sp!, {r4-r11, lr} // Save minimal context
PUSH {r4-r11, lr} // Save minimal context
LDR r4, =_tx_thread_current_ptr // Pickup address of current ptr
LDR r5, [r4] // Pickup current thread pointer
@@ -117,8 +113,11 @@ _tx_skip_solicited_vfp_save:
#endif
MOV r0, #0 // Build a solicited stack type
MRS r1, CPSR // Pickup the CPSR
STMDB sp!, {r0-r1} // Save type and CPSR
MRS r1, CPSR // Pickup the CPSR, T bit is always cleared by hardware
TST lr, #1 // Check if calling function is in Thumb mode
IT NE
ORRNE r1, #0x20 // Set the T bit so that the correct mode is set on return
PUSH {r0-r1} // Save type and CPSR
/* Lockout interrupts. */

View File

@@ -19,6 +19,9 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.global _tx_thread_system_state
.global _tx_thread_current_ptr
@@ -37,7 +40,7 @@
/* FUNCTION RELEASE */
/* */
/* _tx_thread_vectored_context_save ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -74,6 +77,9 @@
/* resulting in version 6.1.9 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_thread_vectored_context_save

View File

@@ -19,9 +19,16 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
/* Define Assembly language external references... */
@@ -34,26 +41,6 @@
.global _tx_timer_expired
.global _tx_thread_time_slice
/* Define the 16-bit Thumb mode veneer for _tx_timer_interrupt for
applications calling this function from 16-bit Thumb mode. */
.text
.align 2
.thumb
.global $_tx_timer_interrupt
.type $_tx_timer_interrupt,function
$_tx_timer_interrupt:
BX pc // Switch to 32-bit mode
NOP //
.arm
STMFD sp!, {lr} // Save return address
BL _tx_timer_interrupt // Call _tx_timer_interrupt function
LDMFD sp!, {lr} // Recover saved return address
BX lr // Return to 16-bit caller
.text
.align 2
/**************************************************************************/
@@ -61,7 +48,7 @@ $_tx_timer_interrupt:
/* FUNCTION RELEASE */
/* */
/* _tx_timer_interrupt ARMv7-A */
/* 6.1.11 */
/* 6.4.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -98,8 +85,17 @@ $_tx_timer_interrupt:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* 12-31-2023 Yajun Xia Modified comment(s), */
/* Added thumb mode support, */
/* resulting in version 6.4.0 */
/* */
/**************************************************************************/
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _tx_timer_interrupt
.type _tx_timer_interrupt,function
_tx_timer_interrupt:
@@ -191,7 +187,7 @@ __tx_timer_done:
__tx_something_expired:
STMDB sp!, {r0, lr} // Save the lr register on the stack
PUSH {r0, lr} // Save the lr register on the stack
// and save r0 just to keep 8-byte alignment
/* Did a timer expire? */
@@ -219,13 +215,9 @@ __tx_timer_dont_activate:
__tx_timer_not_ts_expiration:
LDMIA sp!, {r0, lr} // Recover lr register (r0 is just there for
POP {r0, lr} // Recover lr register (r0 is just there for
// the 8-byte stack alignment
__tx_timer_nothing_expired:
#ifdef __THUMB_INTERWORK
BX lr // Return to caller
#else
MOV pc, lr // Return to caller
#endif

View File

@@ -1,15 +1,30 @@
.syntax unified
#if defined(THUMB_MODE)
.thumb
#else
.arm
#endif
/* .text is used instead of .section .text so it works with arm-aout too. */
.text
.code 32
.align 0
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _mainCRTStartup
_mainCRTStartup:
#if defined(THUMB_MODE)
.thumb_func
#endif
.global _start
_start:
#if defined(THUMB_MODE)
.thumb_func
#endif
.global start
start:
_start:
_mainCRTStartup:
/* Start by setting up a stack */
/* Set up the stack pointer to a fixed value */
@@ -69,16 +84,12 @@ _mainCRTStartup:
.word _fini
#endif */
/* Return ... */
#ifdef __APCS_26__
movs pc, lr
#else
#ifdef __THUMB_INTERWORK
bx lr
#else
mov pc, lr
#endif
#endif
.global _fini
.type _fini,function
_fini:
BX lr // Return to caller
/* Workspace for Angel calls. */
.data

View File

@@ -109,7 +109,7 @@ SECTIONS
.eh_frame_hdr : { *(.eh_frame_hdr) }
/* Adjust the address for the data segment. We want to adjust up to
the same address within the page on the next page up. */
. = ALIGN(256) + (. & (256 - 1));
. = 0x2E000000;
.data :
{
*(.data)

View File

@@ -19,6 +19,9 @@
/** */
/**************************************************************************/
/**************************************************************************/
#ifdef TX_INCLUDE_USER_DEFINE_FILE
#include "tx_user.h"
#endif
.arm
@@ -64,7 +67,7 @@ $_tx_initialize_low_level:
/* FUNCTION RELEASE */
/* */
/* _tx_initialize_low_level ARMv7-A */
/* 6.1.11 */
/* 6.3.0 */
/* AUTHOR */
/* */
/* William E. Lamie, Microsoft Corporation */
@@ -100,6 +103,9 @@ $_tx_initialize_low_level:
/* 09-30-2020 William E. Lamie Initial Version 6.1 */
/* 04-25-2022 Zhen Kong Updated comments, */
/* resulting in version 6.1.11 */
/* 10-31-2023 Tiejun Zhou Modified comment(s), added */
/* #include tx_user.h, */
/* resulting in version 6.3.0 */
/* */
/**************************************************************************/
.global _tx_initialize_low_level

View File

@@ -0,0 +1,155 @@
// ------------------------------------------------------------
// v7-A Cache, TLB and Branch Prediction Maintenance Operations
// Header File
//
// Copyright (c) 2011-2016 Arm Limited (or its affiliates). All rights reserved.
// Use, modification and redistribution of this file is subject to your possession of a
// valid End User License Agreement for the Arm Product of which these examples are part of
// and your compliance with all applicable terms and conditions of such licence agreement.
// ------------------------------------------------------------
#ifndef _ARMV7A_GENERIC_H
#define _ARMV7A_GENERIC_H
// ------------------------------------------------------------
// Memory barrier mnemonics
enum MemBarOpt {
RESERVED_0 = 0, RESERVED_1 = 1, OSHST = 2, OSH = 3,
RESERVED_4 = 4, RESERVED_5 = 5, NSHST = 6, NSH = 7,
RESERVED_8 = 8, RESERVED_9 = 9, ISHST = 10, ISH = 11,
RESERVED_12 = 12, RESERVED_13 = 13, ST = 14, SY = 15
};
//
// Note:
// *_IS() stands for "inner shareable"
// DO NOT USE THESE FUNCTIONS ON A CORTEX-A8
//
// ------------------------------------------------------------
// Interrupts
// Enable/disables IRQs (not FIQs)
void enableInterrupts(void);
void disableInterrupts(void);
// ------------------------------------------------------------
// Caches
void invalidateCaches_IS(void);
void cleanInvalidateDCache(void);
void invalidateCaches_IS(void);
void enableCaches(void);
void disableCaches(void);
void invalidateCaches(void);
void cleanDCache(void);
// ------------------------------------------------------------
// TLBs
void invalidateUnifiedTLB(void);
void invalidateUnifiedTLB_IS(void);
// ------------------------------------------------------------
// Branch prediction
void flushBranchTargetCache(void);
void flushBranchTargetCache_IS(void);
// ------------------------------------------------------------
// High Vecs
void enableHighVecs(void);
void disableHighVecs(void);
// ------------------------------------------------------------
// ID Registers
unsigned int getMIDR(void);
#define MIDR_IMPL_SHIFT 24
#define MIDR_IMPL_MASK 0xFF
#define MIDR_VAR_SHIFT 20
#define MIDR_VAR_MASK 0xF
#define MIDR_ARCH_SHIFT 16
#define MIDR_ARCH_MASK 0xF
#define MIDR_PART_SHIFT 4
#define MIDR_PART_MASK 0xFFF
#define MIDR_REV_SHIFT 0
#define MIDR_REV_MASK 0xF
// tmp = getMIDR();
// implementor = (tmp >> MIDR_IMPL_SHIFT) & MIDR_IMPL_MASK;
// variant = (tmp >> MIDR_VAR_SHIFT) & MIDR_VAR_MASK;
// architecture= (tmp >> MIDR_ARCH_SHIFT) & MIDR_ARCH_MASK;
// part_number = (tmp >> MIDR_PART_SHIFT) & MIDR_PART_MASK;
// revision = tmp & MIDR_REV_MASK;
#define MIDR_PART_CA5 0xC05
#define MIDR_PART_CA8 0xC08
#define MIDR_PART_CA9 0xC09
unsigned int getMPIDR(void);
#define MPIDR_FORMAT_SHIFT 31
#define MPIDR_FORMAT_MASK 0x1
#define MPIDR_UBIT_SHIFT 30
#define MPIDR_UBIT_MASK 0x1
#define MPIDR_CLUSTER_SHIFT 7
#define MPIDR_CLUSTER_MASK 0xF
#define MPIDR_CPUID_SHIFT 0
#define MPIDR_CPUID_MASK 0x3
#define MPIDR_CPUID_CPU0 0x0
#define MPIDR_CPUID_CPU1 0x1
#define MPIDR_CPUID_CPU2 0x2
#define MPIDR_CPUID_CPU3 0x3
#define MPIDR_UNIPROCESSPR 0x1
#define MPDIR_NEW_FORMAT 0x1
// ------------------------------------------------------------
// Context ID
unsigned int getContextID(void);
void setContextID(unsigned int);
#define CONTEXTID_ASID_SHIFT 0
#define CONTEXTID_ASID_MASK 0xFF
#define CONTEXTID_PROCID_SHIFT 8
#define CONTEXTID_PROCID_MASK 0x00FFFFFF
// tmp = getContextID();
// ASID = tmp & CONTEXTID_ASID_MASK;
// PROCID = (tmp >> CONTEXTID_PROCID_SHIFT) & CONTEXTID_PROCID_MASK;
// ------------------------------------------------------------
// SMP related for Armv7-A MPCore processors
//
// DO NOT CALL THESE FUNCTIONS ON A CORTEX-A8
// Returns the base address of the private peripheral memory space
unsigned int getBaseAddr(void);
// Returns the CPU ID (0 to 3) of the CPU executed on
#define MP_CPU0 (0)
#define MP_CPU1 (1)
#define MP_CPU2 (2)
#define MP_CPU3 (3)
unsigned int getCPUID(void);
// Set this core as participating in SMP
void joinSMP(void);
// Set this core as NOT participating in SMP
void leaveSMP(void);
// Go to sleep, never returns
void goToSleep(void);
#endif
// ------------------------------------------------------------
// End of v7.h
// ------------------------------------------------------------
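The header already documents the MIDR field layout in its comments; wrapped into a compilable helper the decode looks like the following (a sketch, with the field names taken straight from the macros above):
/* Sketch: decoding the Main ID Register with the macros from v7.h. */
#include "v7.h"
typedef struct
{
    unsigned int implementor;
    unsigned int variant;
    unsigned int architecture;
    unsigned int part_number;
    unsigned int revision;
} midr_fields_t;
static midr_fields_t decode_midr(void)
{
    unsigned int tmp = getMIDR();
    midr_fields_t f;
    f.implementor  = (tmp >> MIDR_IMPL_SHIFT) & MIDR_IMPL_MASK;
    f.variant      = (tmp >> MIDR_VAR_SHIFT)  & MIDR_VAR_MASK;
    f.architecture = (tmp >> MIDR_ARCH_SHIFT) & MIDR_ARCH_MASK;
    f.part_number  = (tmp >> MIDR_PART_SHIFT) & MIDR_PART_MASK;
    f.revision     = (tmp >> MIDR_REV_SHIFT)  & MIDR_REV_MASK;
    return f;
}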

View File

@@ -0,0 +1,476 @@
// ------------------------------------------------------------
// v7-A Cache and Branch Prediction Maintenance Operations
//
// Copyright (c) 2011-2018 Arm Limited (or its affiliates). All rights reserved.
// Use, modification and redistribution of this file is subject to your possession of a
// valid End User License Agreement for the Arm Product of which these examples are part of
// and your compliance with all applicable terms and conditions of such licence agreement.
// ------------------------------------------------------------
.arm
// ------------------------------------------------------------
// Interrupt enable/disable
// ------------------------------------------------------------
// Could use intrinsic instead of these
.global enableInterrupts
.type enableInterrupts,function
// void enableInterrupts(void)//
enableInterrupts:
CPSIE i
BX lr
.global disableInterrupts
.type disableInterrupts,function
// void disableInterrupts(void)//
disableInterrupts:
CPSID i
BX lr
// ------------------------------------------------------------
// Cache Maintenance
// ------------------------------------------------------------
.global enableCaches
.type enableCaches,function
// void enableCaches(void)//
enableCaches:
MRC p15, 0, r0, c1, c0, 0 // Read System Control Register
ORR r0, r0, #(1 << 2) // Set C bit
ORR r0, r0, #(1 << 12) // Set I bit
MCR p15, 0, r0, c1, c0, 0 // Write System Control Register
ISB
BX lr
.global disableCaches
.type disableCaches,function
// void disableCaches(void)
disableCaches:
MRC p15, 0, r0, c1, c0, 0 // Read System Control Register
BIC r0, r0, #(1 << 2) // Clear C bit
BIC r0, r0, #(1 << 12) // Clear I bit
MCR p15, 0, r0, c1, c0, 0 // Write System Control Register
ISB
BX lr
.global cleanDCache
.type cleanDCache,function
// void cleanDCache(void)//
cleanDCache:
PUSH {r4-r12}
//
// Based on code example given in section 11.2.4 of Armv7-A/R Architecture Reference Manual (DDI 0406B)
//
MRC p15, 1, r0, c0, c0, 1 // Read CLIDR
ANDS r3, r0, #0x7000000
MOV r3, r3, LSR #23 // Cache level value (naturally aligned)
BEQ clean_dcache_finished
MOV r10, #0
clean_dcache_loop1:
ADD r2, r10, r10, LSR #1 // Work out 3xcachelevel
MOV r1, r0, LSR r2 // bottom 3 bits are the Cache type for this level
AND r1, r1, #7 // get those 3 bits alone
CMP r1, #2
BLT clean_dcache_skip // no cache or only instruction cache at this level
MCR p15, 2, r10, c0, c0, 0 // write the Cache Size selection register
ISB // ISB to sync the change to the CacheSizeID reg
MRC p15, 1, r1, c0, c0, 0 // reads current Cache Size ID register
AND r2, r1, #7 // extract the line length field
ADD r2, r2, #4 // add 4 for the line length offset (log2 16 bytes)
LDR r4, =0x3FF
ANDS r4, r4, r1, LSR #3 // R4 is the max number on the way size (right aligned)
CLZ r5, r4 // R5 is the bit position of the way size increment
LDR r7, =0x00007FFF
ANDS r7, r7, r1, LSR #13 // R7 is the max number of the index size (right aligned)
clean_dcache_loop2:
MOV r9, R4 // R9 working copy of the max way size (right aligned)
clean_dcache_loop3:
ORR r11, r10, r9, LSL r5 // factor in the way number and cache number into R11
ORR r11, r11, r7, LSL r2 // factor in the index number
MCR p15, 0, r11, c7, c10, 2 // DCCSW - clean by set/way
SUBS r9, r9, #1 // decrement the way number
BGE clean_dcache_loop3
SUBS r7, r7, #1 // decrement the index
BGE clean_dcache_loop2
clean_dcache_skip:
ADD r10, r10, #2 // increment the cache number
CMP r3, r10
BGT clean_dcache_loop1
clean_dcache_finished:
POP {r4-r12}
BX lr
.global cleanInvalidateDCache
.type cleanInvalidateDCache,function
// void cleanInvalidateDCache(void)//
cleanInvalidateDCache:
PUSH {r4-r12}
//
// Based on code example given in section 11.2.4 of Armv7-A/R Architecture Reference Manual (DDI 0406B)
//
MRC p15, 1, r0, c0, c0, 1 // Read CLIDR
ANDS r3, r0, #0x7000000
MOV r3, r3, LSR #23 // Cache level value (naturally aligned)
BEQ clean_invalidate_dcache_finished
MOV r10, #0
clean_invalidate_dcache_loop1:
ADD r2, r10, r10, LSR #1 // Work out 3xcachelevel
MOV r1, r0, LSR r2 // bottom 3 bits are the Cache type for this level
AND r1, r1, #7 // get those 3 bits alone
CMP r1, #2
BLT clean_invalidate_dcache_skip // no cache or only instruction cache at this level
MCR p15, 2, r10, c0, c0, 0 // write the Cache Size selection register
ISB // ISB to sync the change to the CacheSizeID reg
MRC p15, 1, r1, c0, c0, 0 // reads current Cache Size ID register
AND r2, r1, #7 // extract the line length field
ADD r2, r2, #4 // add 4 for the line length offset (log2 16 bytes)
LDR r4, =0x3FF
ANDS r4, r4, r1, LSR #3 // R4 is the max number on the way size (right aligned)
CLZ r5, r4 // R5 is the bit position of the way size increment
LDR r7, =0x00007FFF
ANDS r7, r7, r1, LSR #13 // R7 is the max number of the index size (right aligned)
clean_invalidate_dcache_loop2:
MOV r9, R4 // R9 working copy of the max way size (right aligned)
clean_invalidate_dcache_loop3:
ORR r11, r10, r9, LSL r5 // factor in the way number and cache number into R11
ORR r11, r11, r7, LSL r2 // factor in the index number
MCR p15, 0, r11, c7, c14, 2 // DCCISW - clean and invalidate by set/way
SUBS r9, r9, #1 // decrement the way number
BGE clean_invalidate_dcache_loop3
SUBS r7, r7, #1 // decrement the index
BGE clean_invalidate_dcache_loop2
clean_invalidate_dcache_skip:
ADD r10, r10, #2 // increment the cache number
CMP r3, r10
BGT clean_invalidate_dcache_loop1
clean_invalidate_dcache_finished:
POP {r4-r12}
BX lr
.global invalidateCaches
.type invalidateCaches,function
// void invalidateCaches(void)//
invalidateCaches:
PUSH {r4-r12}
//
// Based on code example given in section B2.2.4/11.2.4 of Armv7-A/R Architecture Reference Manual (DDI 0406B)
//
MOV r0, #0
MCR p15, 0, r0, c7, c5, 0 // ICIALLU - Invalidate entire I Cache, and flushes branch target cache
MRC p15, 1, r0, c0, c0, 1 // Read CLIDR
ANDS r3, r0, #0x7000000
MOV r3, r3, LSR #23 // Cache level value (naturally aligned)
BEQ invalidate_caches_finished
MOV r10, #0
invalidate_caches_loop1:
ADD r2, r10, r10, LSR #1 // Work out 3xcachelevel
MOV r1, r0, LSR r2 // bottom 3 bits are the Cache type for this level
AND r1, r1, #7 // get those 3 bits alone
CMP r1, #2
BLT invalidate_caches_skip // no cache or only instruction cache at this level
MCR p15, 2, r10, c0, c0, 0 // write the Cache Size selection register
ISB // ISB to sync the change to the CacheSizeID reg
MRC p15, 1, r1, c0, c0, 0 // reads current Cache Size ID register
AND r2, r1, #7 // extract the line length field
ADD r2, r2, #4 // add 4 for the line length offset (log2 16 bytes)
LDR r4, =0x3FF
ANDS r4, r4, r1, LSR #3 // R4 is the max number on the way size (right aligned)
CLZ r5, r4 // R5 is the bit position of the way size increment
LDR r7, =0x00007FFF
ANDS r7, r7, r1, LSR #13 // R7 is the max number of the index size (right aligned)
invalidate_caches_loop2:
MOV r9, R4 // R9 working copy of the max way size (right aligned)
invalidate_caches_loop3:
ORR r11, r10, r9, LSL r5 // factor in the way number and cache number into R11
ORR r11, r11, r7, LSL r2 // factor in the index number
MCR p15, 0, r11, c7, c6, 2 // DCISW - invalidate by set/way
SUBS r9, r9, #1 // decrement the way number
BGE invalidate_caches_loop3
SUBS r7, r7, #1 // decrement the index
BGE invalidate_caches_loop2
invalidate_caches_skip:
ADD r10, r10, #2 // increment the cache number
CMP r3, r10
BGT invalidate_caches_loop1
invalidate_caches_finished:
POP {r4-r12}
BX lr
.global invalidateCaches_IS
.type invalidateCaches_IS,function
// void invalidateCaches_IS(void)//
invalidateCaches_IS:
PUSH {r4-r12}
MOV r0, #0
MCR p15, 0, r0, c7, c1, 0 // ICIALLUIS - Invalidate entire I Cache inner shareable
MRC p15, 1, r0, c0, c0, 1 // Read CLIDR
ANDS r3, r0, #0x7000000
MOV r3, r3, LSR #23 // Cache level value (naturally aligned)
BEQ invalidate_caches_is_finished
MOV r10, #0
invalidate_caches_is_loop1:
ADD r2, r10, r10, LSR #1 // Work out 3xcachelevel
MOV r1, r0, LSR r2 // bottom 3 bits are the Cache type for this level
AND r1, r1, #7 // get those 3 bits alone
CMP r1, #2
BLT invalidate_caches_is_skip // no cache or only instruction cache at this level
MCR p15, 2, r10, c0, c0, 0 // write the Cache Size selection register
ISB // ISB to sync the change to the CacheSizeID reg
MRC p15, 1, r1, c0, c0, 0 // reads current Cache Size ID register
AND r2, r1, #7 // extract the line length field
ADD r2, r2, #4 // add 4 for the line length offset (log2 16 bytes)
LDR r4, =0x3FF
ANDS r4, r4, r1, LSR #3 // R4 is the max number on the way size (right aligned)
CLZ r5, r4 // R5 is the bit position of the way size increment
LDR r7, =0x00007FFF
ANDS r7, r7, r1, LSR #13 // R7 is the max number of the index size (right aligned)
invalidate_caches_is_loop2:
MOV r9, R4 // R9 working copy of the max way size (right aligned)
invalidate_caches_is_loop3:
ORR r11, r10, r9, LSL r5 // factor in the way number and cache number into R11
ORR r11, r11, r7, LSL r2 // factor in the index number
MCR p15, 0, r11, c7, c6, 2 // DCISW - invalidate by set/way
SUBS r9, r9, #1 // decrement the way number
BGE invalidate_caches_is_loop3
SUBS r7, r7, #1 // decrement the index
BGE invalidate_caches_is_loop2
invalidate_caches_is_skip:
ADD r10, r10, #2 // increment the cache number
CMP r3, r10
BGT invalidate_caches_is_loop1
invalidate_caches_is_finished:
POP {r4-r12}
BX lr
// ------------------------------------------------------------
// TLB
// ------------------------------------------------------------
.global invalidateUnifiedTLB
.type invalidateUnifiedTLB,function
// void invalidateUnifiedTLB(void)//
invalidateUnifiedTLB:
MOV r0, #0
MCR p15, 0, r0, c8, c7, 0 // TLBIALL - Invalidate entire unified TLB
BX lr
.global invalidateUnifiedTLB_IS
.type invalidateUnifiedTLB_IS,function
// void invalidateUnifiedTLB_IS(void)//
invalidateUnifiedTLB_IS:
MOV r0, #1
MCR p15, 0, r0, c8, c3, 0 // TLBIALLIS - Invalidate entire unified TLB Inner Shareable
BX lr
// ------------------------------------------------------------
// Branch Prediction
// ------------------------------------------------------------
.global flushBranchTargetCache
.type flushBranchTargetCache,function
// void flushBranchTargetCache(void)
flushBranchTargetCache:
MOV r0, #0
MCR p15, 0, r0, c7, c5, 6 // BPIALL - Invalidate entire branch predictor array
BX lr
.global flushBranchTargetCache_IS
.type flushBranchTargetCache_IS,function
// void flushBranchTargetCache_IS(void)
flushBranchTargetCache_IS:
MOV r0, #0
MCR p15, 0, r0, c7, c1, 6 // BPIALLIS - Invalidate entire branch predictor array Inner Shareable
BX lr
// ------------------------------------------------------------
// High Vecs
// ------------------------------------------------------------
.global enableHighVecs
.type enableHighVecs,function
// void enableHighVecs(void)//
enableHighVecs:
MRC p15, 0, r0, c1, c0, 0 // Read Control Register
ORR r0, r0, #(1 << 13) // Set the V bit (bit 13)
MCR p15, 0, r0, c1, c0, 0 // Write Control Register
ISB
BX lr
.global disableHighVecs
.type disableHighVecs,function
// void disableHighVecs(void)//
disableHighVecs:
MRC p15, 0, r0, c1, c0, 0 // Read Control Register
BIC r0, r0, #(1 << 13) // Clear the V bit (bit 13)
MCR p15, 0, r0, c1, c0, 0 // Write Control Register
ISB
BX lr
// ------------------------------------------------------------
// Context ID
// ------------------------------------------------------------
.global getContextID
.type getContextID,function
// uint32_t getContextID(void)//
getContextID:
MRC p15, 0, r0, c13, c0, 1 // Read Context ID Register
BX lr
.global setContextID
.type setContextID,function
// void setContextID(uint32_t)//
setContextID:
MCR p15, 0, r0, c13, c0, 1 // Write Context ID Register
BX lr
// ------------------------------------------------------------
// ID registers
// ------------------------------------------------------------
.global getMIDR
.type getMIDR,function
// uint32_t getMIDR(void)//
getMIDR:
MRC p15, 0, r0, c0, c0, 0 // Read Main ID Register (MIDR)
BX lr
.global getMPIDR
.type getMPIDR,function
// uint32_t getMPIDR(void)//
getMPIDR:
MRC p15, 0, r0, c0, c0, 5 // Read Multiprocessor ID register (MPIDR)
BX lr
// ------------------------------------------------------------
// CP15 SMP related
// ------------------------------------------------------------
.global getBaseAddr
.type getBaseAddr,function
// uint32_t getBaseAddr(void)
// Returns the value CBAR (base address of the private peripheral memory space)
getBaseAddr:
MRC p15, 4, r0, c15, c0, 0 // Read peripheral base address
BX lr
// ------------------------------------------------------------
.global getCPUID
.type getCPUID,function
// uint32_t getCPUID(void)
// Returns the CPU ID (0 to 3) of the CPU executed on
getCPUID:
MRC p15, 0, r0, c0, c0, 5 // Read CPU ID register
AND r0, r0, #0x03 // Mask off, leaving the CPU ID field
BX lr
// ------------------------------------------------------------
.global goToSleep
.type goToSleep,function
// void goToSleep(void)
goToSleep:
DSB // Clear all pending data accesses
WFI // Go into standby
B goToSleep // Catch in case of rogue events
BX lr
// ------------------------------------------------------------
.global joinSMP
.type joinSMP,function
// void joinSMP(void)
// Sets the ACTRL.SMP bit
joinSMP:
// SMP status is controlled by bit 6 of the CP15 Aux Ctrl Reg
MRC p15, 0, r0, c1, c0, 1 // Read ACTLR
MOV r1, r0
ORR r0, r0, #0x040 // Set bit 6
CMP r0, r1
MCRNE p15, 0, r0, c1, c0, 1 // Write ACTLR
ISB
BX lr
// ------------------------------------------------------------
.global leaveSMP
.type leaveSMP,function
// void leaveSMP(void)
// Clear the ACTRL.SMP bit
leaveSMP:
// SMP status is controlled by bit 6 of the CP15 Aux Ctrl Reg
MRC p15, 0, r0, c1, c0, 1 // Read ACTLR
BIC r0, r0, #0x040 // Clear bit 6
MCR p15, 0, r0, c1, c0, 1 // Write ACTLR
ISB
BX lr
// ------------------------------------------------------------
// End of v7.s
// ------------------------------------------------------------
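All of these routines are C-callable through the prototypes in v7.h. A hedged sketch of an early-boot sequence that uses them; the ordering shown (invalidate before enable) is the usual pattern, but treat it as illustrative rather than the port's required sequence, and note that enabling the data cache also assumes the MMU and translation tables are set up elsewhere:
/* Sketch: early cache/TLB housekeeping using the maintenance routines above. */
#include "v7.h"
static void cpu_early_init(void)
{
    disableInterrupts();          /* keep IRQs out while cache state changes      */
    invalidateCaches();           /* caches may hold stale data after reset       */
    invalidateUnifiedTLB();       /* likewise the unified TLB                     */
    flushBranchTargetCache();     /* and the branch predictor                     */
    enableCaches();               /* set the C and I bits in the control register */
    enableInterrupts();
}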

View File

@@ -321,7 +321,7 @@ void tx_thread_vfp_disable(void);
#ifdef TX_THREAD_INIT
CHAR _tx_version_id[] =
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARMv7-A Version 6.2.1 *";
"Copyright (c) Microsoft Corporation. All rights reserved. * ThreadX ARMv7-A Version 6.4.0 *";
#else
extern CHAR _tx_version_id[];
#endif

Some files were not shown because too many files have changed in this diff.