Friday, January 16, 2015

why mempool and calloc are better than malloc


memory allocation cost:

Every memory allocation has a performance cost. That cost includes the time it takes to allocate the memory in your program’s logical address space and the time it takes to assign that address space to physical memory.
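A rough sketch of how you could see that cost yourself (the block size and loop count below are arbitrary picks for illustration; the real numbers depend on the libc allocator and the platform):

/* Minimal sketch: measure the rough cost of many small heap allocations.
   The iteration count and block size are arbitrary choices. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    enum { N = 1000000, SZ = 64 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        char *p = malloc(SZ);
        if (!p)
            return 1;
        p[0] = 1;            /* touch the block so the page is really mapped */
        free(p);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("%d malloc/free pairs of %d bytes: %.2f ms\n", N, SZ, ms);
    return 0;
}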


best point:

1. it avoids the internal fragmentation introduced by malloc
2. it is faster and more predictable than the system's dynamic heap-based allocation algorithm

http://www.embeddedlinux.org.cn/RTConforEmbSys/5107final/LiB0081.html
http://www.embeddedlinux.org.cn/RTConforEmbSys/5107final/LiB0082.html

internal vs external fragmentation:

https://techdifferences.com/difference-between-internal-and-external-fragmentation.html
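Small sketch showing internal fragmentation with malloc (glibc-specific, since malloc_usable_size() is a glibc extension declared in <malloc.h>; other allocators round requests up by different amounts):

/* malloc rounds each request up to its bucket size, so the usable block
   is often larger than what was asked for; the difference is internal
   fragmentation. */
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void)
{
    size_t requests[] = { 1, 13, 100, 1000 };
    for (size_t i = 0; i < sizeof requests / sizeof requests[0]; i++) {
        void *p = malloc(requests[i]);
        if (!p)
            return 1;
        printf("requested %4zu bytes, got a usable block of %4zu bytes\n",
               requests[i], malloc_usable_size(p));
        free(p);
    }
    return 0;
}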


point:

Initialize Memory Blocks Efficiently

Small blocks of memory, allocated using the malloc function, are not guaranteed to be initialized with zeroes. Although you could use the memset function to initialize the memory, a better choice is to use the calloc routine to allocate the memory in the first place. The calloc function reserves the required virtual address space for the memory but waits until the memory is actually used before initializing it. This approach is much more efficient than using memset, which forces the virtual memory system to map the corresponding pages into physical memory in order to zero-initialize them. Another advantage of using the calloc function is that it lets the system initialize pages as they’re used, as opposed to all at once.
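Small sketch contrasting the two approaches (the 64 MiB size is an arbitrary pick; the lazy zeroing applies mainly to large blocks that the allocator maps directly from the kernel):

/* Two ways to get zeroed memory. For large blocks, calloc can hand back
   copy-on-write zero pages, so the zeroing cost is paid page by page,
   only when the memory is actually touched. */
#include <stdlib.h>
#include <string.h>

#define NBYTES (64 * 1024 * 1024)   /* arbitrary size for illustration */

int main(void)
{
    /* malloc + memset: every page is written (and mapped) immediately */
    char *a = malloc(NBYTES);
    if (!a)
        return 1;
    memset(a, 0, NBYTES);

    /* calloc: the block is guaranteed zeroed, but pages are typically
       only materialized when the program first uses them */
    char *b = calloc(NBYTES, 1);
    if (!b)
        return 1;

    a[0] = b[0] = 1;   /* touch the first page of each block */

    free(a);
    free(b);
    return 0;
}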


link:
https://developer.apple.com/library/mac/documentation/Performance/Conceptual/ManagingMemory/Articles/MemoryAlloc.html


why custom allocator best:

  • Speed: return &pool[last++] is faster than malloc. (A real pool would usually be slower than that, but still faster than malloc; especially since your "free", ready-to-be-allocated objects in the pool could have a lot of state initialized already since the last time they were used, unlike a malloc'd buffer – in OO terms, you don't need to call the constructor after allocating). See the pool sketch after this list.
  • Predictability: people usually refer to "the pool advantage" as "lower fragmentation" and hence less chances of running out of memory due to "sudden" fragmentation in unexpected circumstances. Actually, fragmentation is higher with pools: a pool of 100 objects of type A can not be used to allocate objects of type B, even if you're using just one (or zero) A objects right now – so your memory is very much fragmented. However, it's fragmented predictably, leading to predictable allocation times.
  • Stability: Another thing that higher fragmentation buys. Pools let you allocate B objects after running out of A objects from the predictably available "B fragment" (pool). This means you can actually handle out-of-memory conditions if you can live without another A object. A malloc-based program "runs out of everything" when it runs out of memory, so it's very unlikely to survive.
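A minimal fixed-size pool sketch along the lines of the points above (the object type, pool size and function names are made up for illustration; a real pool would also deal with alignment, thread safety and growth):

/* Fixed-size object pool built on a free list: O(1) allocate and free,
   no call into malloc after start-up, and running out of this pool does
   not affect any other pool. */
#include <stddef.h>
#include <stdio.h>

#define POOL_SIZE 100

typedef struct A {
    struct A *next_free;   /* reused as the free-list link while the slot is free */
    int       payload;
} A;

static A pool[POOL_SIZE];
static A *free_list;

static void pool_init(void)
{
    for (size_t i = 0; i < POOL_SIZE - 1; i++)
        pool[i].next_free = &pool[i + 1];
    pool[POOL_SIZE - 1].next_free = NULL;
    free_list = &pool[0];
}

static A *pool_alloc(void)           /* O(1), no heap call */
{
    A *obj = free_list;
    if (obj)
        free_list = obj->next_free;
    return obj;                      /* NULL means this pool is exhausted */
}

static void pool_free(A *obj)        /* O(1), memory returns to this pool only */
{
    obj->next_free = free_list;
    free_list = obj;
}

int main(void)
{
    pool_init();
    A *x = pool_alloc();
    if (!x)
        return 1;
    x->payload = 42;
    printf("allocated slot %td from the pool\n", x - pool);
    pool_free(x);
    return 0;
}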


 Ref:
http://yosefk.com/blog/why-custom-allocatorspools-are-hard.html






 point:
http://voices.canonical.com/jussi.pakkanen/2011/09/27/is-malloc-slow/



lazy memory allocation:

If you do not plan to use a particular block of memory right away, deferring the allocation until the time when you actually need it is the best course of action. For example, to avoid the appearance of your app launching slowly, minimize the amount of memory you allocate at launch time. Instead, focus your initial memory allocations on the objects needed to display your user interface and respond to input from the user. Defer other allocations until the user starts interacting with your application and issuing commands. This lazy allocation of memory saves time right away and ensures that any memory that is allocated is actually used.
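Small sketch of the idea (the names and the 1 MiB size are made up; the point is that the allocation happens on first use, not at launch):

/* Lazy allocation: the buffer is only created the first time it is
   actually needed, so launch time does not pay for it. */
#include <stdio.h>
#include <stdlib.h>

#define CACHE_BYTES (1024 * 1024)

static char *cache;   /* stays NULL until someone really asks for it */

static char *get_cache(void)
{
    if (!cache) {
        /* calloc, so the lazily created buffer also starts out zeroed */
        cache = calloc(CACHE_BYTES, 1);
    }
    return cache;      /* NULL if the allocation failed */
}

int main(void)
{
    /* ... launch-time work happens here without paying for the cache ... */

    char *buf = get_cache();   /* first real use: allocation happens now */
    if (!buf)
        return 1;
    buf[0] = 'x';

    free(cache);
    return 0;
}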

