Subject: Re: [PATCH 1/3] Slab infrastructure for array operations

On Tue, 17 Feb 2015 10:03:51 -0600 (CST)
Christoph Lameter <cl@linux.com> wrote:

> On Tue, 17 Feb 2015, Joonsoo Kim wrote:
>
[...]
> > If we allocate objects from the local cache as much as possible, we can
> > keep temporal locality and return objects as fast as possible, since
> > returning objects from the local cache only needs a memcpy from the local
> > array cache to the destination array.
>
> I thought the point was that this is used to allocate very large numbers
> of objects. The hotness is not that big of an issue.
>

(My use case is in the range of 32-64 elements.)
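
Just to illustrate the fast path Joonsoo describes above, for that size
range: a minimal sketch, assuming a SLAB-style per-cpu array cache. The
struct and function names below are invented for this mail, not taken
from the patch.

#include <linux/string.h>       /* memcpy */

/* Invented for illustration; not the patch's data structure. */
struct array_cache_sketch {
        unsigned int avail;     /* number of cached object pointers */
        void *entry[64];        /* most recently freed objects sit at the tail */
};

/*
 * Satisfy as much of an array allocation as possible from the local
 * array cache: a single memcpy of the newest cached pointers into the
 * destination array.  Returns how many objects were copied; the caller
 * fills the remaining nr - n objects from slower paths.
 */
static unsigned int ac_alloc_array(struct array_cache_sketch *ac,
                                   void **p, unsigned int nr)
{
        unsigned int n = nr < ac->avail ? nr : ac->avail;

        memcpy(p, &ac->entry[ac->avail - n], n * sizeof(void *));
        ac->avail -= n;
        return n;
}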

[...]
>
> It's not that detailed. It is just laying out the basic strategy for the
> array allocs. First go to the partial lists to decrease fragmentation.
> Then bypass the allocator layers completely and go direct to the page
> allocator if all the objects that the page will accommodate can be put into
> the array. Lastly use the cpu hot objects to fill in the leftover (which
> would in any case be fewer than the objects in a page).
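
To make sure I read the above correctly, the order you describe would be
roughly as below. This is only a sketch of the ordering;
fill_from_partial_slabs(), fill_from_new_page(), fill_from_cpu_cache()
and objs_per_page() are hypothetical helpers, not functions from the
patch.

#include <linux/slab.h>

/*
 * Hypothetical helpers; each returns how many object pointers it stored.
 * fill_from_new_page() stores at most one page worth of objects.
 */
unsigned int fill_from_partial_slabs(struct kmem_cache *s, void **p,
                                     unsigned int nr);
unsigned int fill_from_new_page(struct kmem_cache *s, gfp_t gfp, void **p);
unsigned int fill_from_cpu_cache(struct kmem_cache *s, void **p,
                                 unsigned int nr);
unsigned int objs_per_page(struct kmem_cache *s);

unsigned int alloc_array_sketch(struct kmem_cache *s, gfp_t gfp,
                                unsigned int nr, void **p)
{
        unsigned int filled = 0;

        /* 1) drain partial slabs first, to decrease fragmentation */
        filled += fill_from_partial_slabs(s, p, nr);

        /* 2) bypass the allocator layers: take whole pages straight from
         *    the page allocator while a full page worth of objects still
         *    fits into what is left of the request */
        while (nr - filled >= objs_per_page(s)) {
                unsigned int got = fill_from_new_page(s, gfp, p + filled);

                if (!got)
                        break;
                filled += got;
        }

        /* 3) the leftover (less than a page worth) from cpu hot objects */
        filled += fill_from_cpu_cache(s, p + filled, nr - filled);

        return filled;
}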

IMHO this strategy is a bit off from what I was looking for.

I would prefer the first elements to be cache-hot, and the later/rest of
the elements can be more cache-cold. The reasoning behind this is that the
subsystem calling this alloc_array has likely run out of elements (from
its local store/previous call) and needs to hand out one element
immediately after this call returns.
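
With the same hypothetical helpers as in the sketch above, the ordering I
have in mind would look more like this (again only a sketch, not a
proposed implementation):

unsigned int alloc_array_hot_first(struct kmem_cache *s, gfp_t gfp,
                                   unsigned int nr, void **p)
{
        /* 1) cache-hot per-cpu objects go to the front of the array,
         *    because the caller hands out p[0], p[1], ... right away */
        unsigned int filled = fill_from_cpu_cache(s, p, nr);

        /* 2) the tail can be cache-cold: partial slabs, then whole pages
         *    direct from the page allocator while a full page worth of
         *    objects still fits */
        filled += fill_from_partial_slabs(s, p + filled, nr - filled);
        while (nr - filled >= objs_per_page(s)) {
                unsigned int got = fill_from_new_page(s, gfp, p + filled);

                if (!got)
                        break;
                filled += got;
        }

        return filled;  /* may be short of nr; the caller handles that */
}

The point is only the ordering of steps 1 and 2, not the helpers
themselves.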

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Sr. Network Kernel Developer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer

