Real-Time Embedded Multithreading--Using ThreadX & ARM-MEMORY MANAGEMENT: BYTE POOLS

2019-08-19 15:03:37



These are study notes on Real-Time Embedded Multithreading--Using ThreadX & ARM, covering the chapter on memory management with byte pools.
Introduction
Recall that we used arrays for the thread stacks in the previous chapter. Although this
approach has the advantage of simplicity, it is frequently undesirable and is quite inflexible.
This chapter focuses on two ThreadX memory management resources that provide
a good deal of flexibility: memory byte pools and memory block pools.
A memory byte pool is a contiguous block of bytes. Within such a pool, byte groups
of any size (subject to the total size of the pool) may be used and reused. Memory byte
pools are flexible and can be used for thread stacks and other resources that require
memory. However, this flexibility leads to some problems, such as fragmentation of the
memory byte pool as groups of bytes of varying sizes are used.
A memory block pool is also a contiguous block of bytes, but it is organized into a
collection of fixed-size memory blocks. Thus, the amount of memory used or reused
from a memory block pool is always the same—the size of one fixed-size memory block.
There is no fragmentation problem, and allocating and releasing memory blocks is fast.
In general, the use of memory block pools is preferred over memory byte pools.
We will study and compare both types of memory management resources in this
chapter. We will consider the features, capabilities, pitfalls, and services for each type.
We will also create illustrative sample systems using these resources.

Summary of Memory Byte Pools
A memory byte pool is similar to a standard C heap. In contrast to the C heap, a
ThreadX application may use multiple memory byte pools. In addition, threads can suspend
on a memory byte pool until the requested memory becomes available.
Allocations from memory byte pools resemble traditional malloc calls, which include
the amount of memory desired (in bytes). ThreadX allocates memory from the memory
byte pool in a first-fit manner, i.e., it uses the first free memory block that is large enough
to satisfy the request. ThreadX converts excess memory from this block into a new
block and places it back in the free memory list. This process is called fragmentation.
When ThreadX performs a subsequent allocation search for a large-enough block of
free memory, it merges adjacent free memory blocks together. This process is called
defragmentation.
Each memory byte pool is a public resource; ThreadX imposes no constraints on
how memory byte pools may be used. Applications may create memory byte pools
either during initialization or during run-time. There are no explicit limits on the number
of memory byte pools an application may use.
The number of allocatable bytes in a memory byte pool is slightly less than what was
specified during creation. This is because management of the free memory area introduces
some overhead. Each free memory block in the pool requires the equivalent of two
C pointers of overhead. In addition, when the pool is created, ThreadX automatically
divides it into two blocks, a large free block and a small permanently allocated block at
the end of the memory area. This allocated end block is used to improve performance of
the allocation algorithm. It eliminates the need to continuously check for the end of the
pool area during merging. During run-time, the amount of overhead in the pool typically
increases. This is partly because when an odd number of bytes is allocated, ThreadX
pads out the block to ensure proper alignment of the next memory block. In addition,
overhead increases as the pool becomes more fragmented.
The memory area for a memory byte pool is specified during creation. Like other
memory areas, it can be located anywhere in the target’s address space. This is an important
feature because of the considerable flexibility it gives the application. For example,
if the target hardware has a high-speed memory area and a low-speed memory area, the
user can manage memory allocation for both areas by creating a pool in each of them.
Application threads can suspend while waiting for memory bytes from a pool. When
sufficient contiguous memory becomes available, the suspended threads receive their
requested memory and are resumed. If multiple threads have suspended on the same
memory byte pool, ThreadX gives them memory and resumes them in the order they
occur on the Suspended Thread List (usually FIFO). However, an application can cause
priority resumption of suspended threads by calling tx_byte_pool_prioritize prior to
the byte release call that lifts thread suspension. The byte pool prioritize service places
the highest-priority thread at the front of the suspension list, while leaving all other
suspended threads in the same FIFO order.

Memory Byte Pool Control Block
The characteristics of each memory byte pool are found in its Control Block. It contains
useful information such as the number of available bytes in the pool. Memory Byte
Pool Control Blocks can be located anywhere in memory, but it is most common to
make the Control Block a global structure by defining it outside the scope of any function.
Table Memory Byte Pool Control Block contains many of the fields that comprise this Control Block.

In most cases, the developer can ignore the contents of the Memory Byte Pool Control
Block. However, there are several fields that may be useful during debugging, such as the
number of available bytes, the number of fragments, and the number of threads suspended
on this memory byte pool.

Pitfalls of Memory Byte Pools
Although memory byte pools provide the most flexible memory allocation, they also suffer
from somewhat nondeterministic behavior. For example, a memory byte pool may have 2,000 bytes of memory available but not be able to satisfy an allocation request of even
1,000 bytes. This is because there is no guarantee on how many of the free bytes are contiguous.
Even if a 1,000-byte free block exists, there is no guarantee on how long it
might take to find the block. The allocation service may well have to search the entire
memory pool to find the 1,000-byte block. Because of this problem, it is generally good
practice to avoid using memory byte services in areas where deterministic, real-time
behavior is required. Many such applications pre-allocate their required memory during
initialization or run-time configuration. Another option is to use a memory block pool.
Users of byte pool allocated memory must not write outside its boundaries. If this
happens, corruption occurs in an adjacent (usually subsequent) memory area. The
results are unpredictable and quite often catastrophic.

Summary of Memory Byte Pool Services
Table Services of the memory byte pool contains a listing of all available memory byte pool services. In the subsequent sections of this chapter, we will investigate each of these services.
We will first consider the tx_byte_pool_create service because it must be invoked
before any of the other services.

Creating a Memory Byte Pool
A memory byte pool is declared with the TX_BYTE_POOL data type and is defined
with the tx_byte_pool_create service. When defining a memory byte pool, you need
to specify its Control Block, the name of the memory byte pool, the address of the memory
byte pool, and the number of bytes available.

Creating a memory byte pool.
UINT status;
TX_BYTE_POOL my_pool;
/* Create a memory pool whose total size is 2000 bytes,
   starting at address 0x500000. */
status = tx_byte_pool_create(&my_pool, "my_pool",
                             (VOID *) 0x500000, 2000);
/* If status equals TX_SUCCESS, my_pool is available
for allocating memory. */

If variable status contains the return value TX_SUCCESS, then a memory byte pool
called my_pool that contains 2,000 bytes, and which begins at location 0x500000 has
been created successfully.

Allocating from a Memory Byte Pool
After a memory byte pool has been declared and defined, we can start using it in a variety
of applications. The tx_byte_allocate service is the method by which bytes of memory
are allocated from the memory byte pool. To use this service, we must indicate how many
bytes are needed, and what to do if enough memory is not available from this byte pool.
Figure 8.5 shows a sample allocation, which will “wait forever” if adequate memory is
not available. If the allocation succeeds, the pointer memory_ptr contains the starting
location of the allocated bytes.

Allocating bytes from a memory byte pool.
TX_BYTE_POOL my_pool;
unsigned char *memory_ptr;
UINT status;
/* Allocate a 112-byte memory area from my_pool. Assume
   that the byte pool has already been created with a call
   to tx_byte_pool_create. */
status = tx_byte_allocate(&my_pool, (VOID **) &memory_ptr,
                          112, TX_WAIT_FOREVER);
/* If status equals TX_SUCCESS, memory_ptr contains the
address of the allocated memory area. */

If variable status contains the return value TX_SUCCESS, then a block of 112 bytes,
pointed to by memory_ptr, has been allocated successfully.
Note that the time required by this service depends on the block size and the amount
of fragmentation in the memory byte pool. Therefore, you should not use this service
during time-critical threads of execution.

Deleting a Memory Byte Pool
A memory byte pool can be deleted with the tx_byte_pool_delete service. All threads
that are suspended because they are waiting for memory from this byte pool are resumed
and receive a TX_DELETED return status.

Deleting a memory byte pool.
TX_BYTE_POOL my_pool;
UINT status;
...
/* Delete entire memory pool. Assume that the pool has already
been created with a call to tx_byte_pool_create. */
status = tx_byte_pool_delete(&my_pool);
/* If status equals TX_SUCCESS, the memory pool is deleted. */

If variable status contains the return value TX_SUCCESS, then the memory byte
pool has been deleted successfully.

Retrieving Memory Byte Pool Information
The tx_byte_pool_info_get service retrieves a variety of information about a memory
byte pool. The information that is retrieved includes the byte pool name, the number of
bytes available, the number of memory fragments, the location of the thread that is first
on the suspension list for this byte pool, the number of threads currently suspended on
this byte pool, and the location of the next created memory byte pool.

Retrieving Information about a memory byte pool.
TX_BYTE_POOL my_pool;
CHAR *name;
ULONG available;
ULONG fragments;
TX_THREAD *first_suspended;
ULONG suspended_count;
TX_BYTE_POOL *next_pool;
UINT status;
...
/* Retrieve information about the previously created
   byte pool "my_pool". */
status = tx_byte_pool_info_get(&my_pool, &name,
                               &available, &fragments,
                               &first_suspended, &suspended_count,
                               &next_pool);
/* If status equals TX_SUCCESS, the information requested is valid. */

If variable status contains the return value TX_SUCCESS, then valid information
about the memory byte pool has been obtained successfully.

Prioritizing a Memory Byte Pool Suspension List
When a thread is suspended because it is waiting for a memory byte pool, it is placed in
the suspension list in a FIFO manner. When a memory byte pool regains an adequate
amount of memory, the first thread in the suspension list (regardless of priority) receives
an opportunity to allocate bytes from that memory byte pool. The tx_byte_pool_
prioritize service places the highest-priority thread suspended for ownership of a specific
memory byte pool at the front of the suspension list. All other threads remain in the
same FIFO order in which they were suspended. Figure 8.8 shows how this service can
be used.

Prioritizing the memory byte pool suspension list.
TX_BYTE_POOL my_pool;
UINT status;
...
/* Ensure that the highest-priority thread will receive
   the next free memory from this pool. */
status = tx_byte_pool_prioritize(&my_pool);
/* If status equals TX_SUCCESS, the highest-priority
   suspended thread is at the front of the list. The
   next tx_byte_release call will wake up this thread,
   if there is enough memory to satisfy its request. */

If the variable status contains the value TX_SUCCESS, then the operation succeeded:
the highest-priority thread in the suspension list has been placed at the front of the suspension
list. The service also returns TX_SUCCESS if no thread was suspended on this
memory byte pool. In this case the suspension list remains unchanged.

Releasing Memory to a Byte Pool
The tx_byte_release service releases a previously allocated memory area back to its
associated pool. If one or more threads are suspended on this pool, each suspended
thread receives the memory it requested and is resumed—until the pool’s memory is
exhausted or until there are no more suspended threads. This process of allocating
memory to suspended threads always begins with the first thread on the suspension list.

Releasing bytes back to the memory byte pool.
unsigned char *memory_ptr;
UINT status;
...
/* Release a memory area back to my_pool. Assume that the
   memory area was previously allocated from my_pool. */
status = tx_byte_release((VOID *) memory_ptr);
/* If status equals TX_SUCCESS, the memory pointed to by
   memory_ptr has been returned to the pool. */

If the variable status contains the value TX_SUCCESS, then the memory block
pointed to by memory_ptr has been returned to the memory byte pool.

Source: https://blog.csdn.net/u014100559/article/details/99723109
