Bug report by Oleg Kravtsov:
The rtems_bdbuf_swapout_processing() function contains the following
lines:
if (bdbuf_cache.sync_active && !transfered_buffers)
{
  rtems_id sync_requester;
  rtems_bdbuf_lock_cache ();
  ...
}
Here the access to bdbuf_cache.sync_active is not protected by any
lock. Consider the following scenario:
1. Task1 releases one or more buffers with
rtems_bdbuf_release_modified() calls;
2. After a while, the swapout task starts and flushes all buffers;
3. At the end of that swapout flush, execution reaches the code above;
assume a task switch happens just before "if (bdbuf_cache.sync_active
&& !transfered_buffers)";
4. Some other task (with higher priority) calls
rtems_bdbuf_release_modified() and rtems_bdbuf_syncdev(). This task
successfully acquires both the sync lock and the pool lock (in
rtems_bdbuf_syncdev()), sets sync_active to true and starts waiting for
the RTEMS_BDBUF_TRANSFER_SYNC event while holding only the sync lock;
5. A task switch happens again and execution is once more just before
"if (bdbuf_cache.sync_active && !transfered_buffers)". Now sync_active
is checked, found true, and the body of the "if" statement is entered;
6. The result is that the RTEMS_BDBUF_TRANSFER_SYNC event is sent even
though none of that task's modified buffers have been flushed yet! A
sketch of the fix direction follows this report.
Close #1485.
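A hedged sketch of the fix direction implied by the report, reusing the
identifiers from the fragment above: the flag is evaluated only while
the cache lock is held, so a concurrent rtems_bdbuf_syncdev() cannot
set sync_active between the check and the event send. This is
illustrative, not the actual commit:

rtems_bdbuf_lock_cache ();

/* With the cache lock held, sync_active cannot change under us. */
if (bdbuf_cache.sync_active && !transfered_buffers)
{
  rtems_id sync_requester;
  ...
}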
Use the PTHREAD mutexes and condition variables if available. On SMP
configurations this helps to avoid the home-grown condition variables
implemented via disabled preemption.
Enabling and disabling preemption, as done in the single-core code,
will not work on SMP. In the bdbuf initialization, preemption handling
can be avoided entirely by using pthread_once().
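A minimal sketch of the pattern, assuming plain POSIX names; the
bdbuf_* identifiers below are illustrative, not the actual bdbuf code:

#include <pthread.h>
#include <stdbool.h>

static pthread_once_t bdbuf_once = PTHREAD_ONCE_INIT;
static pthread_mutex_t bdbuf_mutex;
static pthread_cond_t bdbuf_cond;
static bool bdbuf_sync_done;

/* Runs exactly once, even if several tasks race into the
   initialization on SMP; no preemption disabling is needed. */
static void bdbuf_do_init (void)
{
  pthread_mutex_init (&bdbuf_mutex, NULL);
  pthread_cond_init (&bdbuf_cond, NULL);
}

static void bdbuf_init_once (void)
{
  pthread_once (&bdbuf_once, bdbuf_do_init);
}

/* Waiting on a real condition variable under the mutex replaces the
   home-grown condition variables via disabled preemption. */
static void bdbuf_wait_sync (void)
{
  pthread_mutex_lock (&bdbuf_mutex);
  while (!bdbuf_sync_done)
    pthread_cond_wait (&bdbuf_cond, &bdbuf_mutex);
  pthread_mutex_unlock (&bdbuf_mutex);
}

static void bdbuf_signal_sync (void)
{
  pthread_mutex_lock (&bdbuf_mutex);
  bdbuf_sync_done = true;
  pthread_cond_broadcast (&bdbuf_cond);
  pthread_mutex_unlock (&bdbuf_mutex);
}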
Add a local context structure to the SMP lock API for acquire and
release pairs. This context can be used to store the ISR level and
profiling information. It may later be used to enable more
sophisticated lock algorithms, e.g. MCS locks.
There is only one lock that cannot be used with a local context: the
per-CPU lock, since there the local context would have to be
transferred across a context switch, which is very complicated.
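A hedged sketch of the acquire/release pattern described above; the
type and function names are placeholders modeled on the description,
not the actual RTEMS SMP lock API, and the simple test-and-set lock
stands in for the real lock algorithm:

#include <stdatomic.h>

typedef struct {
  atomic_flag busy;
} smp_lock;

typedef struct {
  unsigned long isr_level;  /* saved ISR level for this acquire */
  /* profiling data or per-acquire queue nodes (e.g. for MCS
     locks) would live here as well */
} smp_lock_context;

static void smp_lock_acquire (smp_lock *lock, smp_lock_context *context)
{
  context->isr_level = 0;  /* placeholder: RTEMS would disable interrupts */
  while (atomic_flag_test_and_set_explicit (&lock->busy,
                                            memory_order_acquire)) {
    /* busy wait */
  }
}

static void smp_lock_release (smp_lock *lock, smp_lock_context *context)
{
  atomic_flag_clear_explicit (&lock->busy, memory_order_release);
  (void) context;  /* placeholder: RTEMS would restore the ISR level */
}

static smp_lock example_lock = { ATOMIC_FLAG_INIT };

static void smp_lock_example (void)
{
  /* The context lives on the caller's stack, so every
     acquire/release pair has its own local state. */
  smp_lock_context context;

  smp_lock_acquire (&example_lock, &context);
  /* ... critical section ... */
  smp_lock_release (&example_lock, &context);
}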
The readv() and writev() support was implemented in terms of multiple
calls to the read and write handlers. This poses a problem for device
files which use an I/O vector as a single request entity. For example,
a low-level network device (e.g. BPF(4)) may use an I/O vector to
create one frame from multiple protocol layers, each with its own I/O
vector entry.
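A hedged sketch of why the whole I/O vector matters as one request; the
handler shape, frame_send() and FRAME_MAX are assumptions for
illustration, not the RTEMS handler interface:

#include <stddef.h>
#include <string.h>
#include <sys/uio.h>

#define FRAME_MAX 1514  /* assumed maximum frame size */

extern int frame_send (const void *frame, size_t len);  /* assumed */

static int device_writev (const struct iovec *iov, int iovcnt)
{
  char frame[FRAME_MAX];
  size_t len = 0;
  int i;

  for (i = 0; i < iovcnt; ++i) {
    if (len + iov[i].iov_len > sizeof (frame))
      return -1;

    /* Each entry is one protocol layer; concatenate them into a
       single frame. */
    memcpy (frame + len, iov[i].iov_base, iov[i].iov_len);
    len += iov[i].iov_len;
  }

  /* The device sees the complete frame as one request, which is
     impossible if writev() is split into one write per entry. */
  return frame_send (frame, len);
}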
Show the correct index of a partition's last block (the partition end).
The documentation of struct rtems_bdpart_partition (P) says that the
member 'end' is the "Block index for partition end (this block is not a
part of the partition)". Therefore, fdisk's partition table dump should
print ((P)->end - 1).
Currently, one might think that the last block of a partition P
overlaps the beginning of the partition (P + 1). Example (a sketch of
the corrected dump loop follows the tables):
----------------------------------------
PARTITION TABLE
------------+------------+--------------
BEGIN | END | TYPE
------------+------------+--------------
2048 | 133120 | FAT 32
133120 | 15628032 | FAT 32
------------+------------+--------------
With the proposed patch, it would be:
----------------------------------------
PARTITION TABLE
------------+------------+--------------
BEGIN | END | TYPE
------------+------------+--------------
2048 | 133119 | FAT 32
133120 | 15628031 | FAT 32
------------+------------+--------------
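A minimal sketch of the corrected dump loop; the stand-in partition
type and the dump() helper are illustrative, not the fdisk code:

#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>

typedef struct {
  uint64_t begin;
  uint64_t end;  /* first block NOT in the partition */
} partition;  /* stand-in for struct rtems_bdpart_partition */

static void dump (const partition *pt, size_t count)
{
  size_t i;

  for (i = 0; i < count; ++i) {
    /* Print end - 1 so the END column shows the partition's last
       block instead of the first block of the next partition. */
    printf ("%12" PRIu64 " |%12" PRIu64 "\n",
            pt[i].begin, pt[i].end - 1);
  }
}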
All resource allocations now take place in rtems_bdbuf_init(). After
rtems_bdbuf_init() returns successfully, no fatal errors can happen due
to configuration errors or resource limits. This makes it easier for
users to detect configuration errors.
Add rtems_bdbuf_fatal_code as a replacement for the previous fatal
error codes. Remove unused error codes and add new ones. Use
rtems_fatal() with RTEMS_FATAL_SOURCE_BDBUF as the source.
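A minimal sketch of the resulting reporting pattern; the wrapper is
illustrative, and the error value passed in would be one of the
rtems_bdbuf_fatal_code enumerators:

#include <rtems/fatal.h>

/* Terminate the system with the bdbuf fatal source; 'error' would be
   a value of the rtems_bdbuf_fatal_code enumeration. */
static void bdbuf_fatal (rtems_fatal_code error)
{
  rtems_fatal (RTEMS_FATAL_SOURCE_BDBUF, error);
}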