From: Tobin C. Harding <tobin@kernel.org>
Subject: [PATCH v4 3/7] slob: Use slab_list instead of lru

Currently we use the page->lru list for maintaining lists of slabs.  We
have a list_head in the page structure (slab_list) that can be used for
this purpose. Doing so makes the code cleaner since we are not
overloading the lru list.
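
For background (not part of the patch): these lists are intrusive,
i.e. the list_head lives inside the object being linked. A minimal
userspace sketch of the pattern, using hypothetical stand-ins for
struct page and the <linux/list.h> helpers, looks like this:

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct fake_page {
	int units;			/* free units in this slab page */
	struct list_head slab_list;	/* links the page into a slob list */
};

/* Same linking logic as the kernel's list_add(). */
static void list_add_stub(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

int main(void)
{
	struct list_head free_slob_small = LIST_HEAD_INIT(free_slob_small);
	struct fake_page p = { .units = 32 };

	list_add_stub(&p.slab_list, &free_slob_small);

	/* Walk the list the way list_for_each_entry() does. */
	for (struct list_head *pos = free_slob_small.next;
	     pos != &free_slob_small; pos = pos->next) {
		struct fake_page *sp =
			container_of(pos, struct fake_page, slab_list);
		printf("page with %d free units\n", sp->units);
	}
	return 0;
}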

The slab_list is part of a union within the page struct (included here
stripped down):

union {
	struct {	/* Page cache and anonymous pages */
		struct list_head lru;
		...
	};
	struct {
		dma_addr_t dma_addr;
	};
	struct {	/* slab, slob and slub */
		union {
			struct list_head slab_list;
			struct {	/* Partial pages */
				struct page *next;
				int pages;	/* Nr of pages left */
				int pobjects;	/* Approximate count */
			};
		};
		...

Here we see that slab_list and lru occupy the same bits. We can verify
that this change is safe by examining the object file produced from
slob.c before and after this patch is applied.
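
To see why this is expected to hold, consider a compile-time check
against a simplified stand-in for the union above (a hypothetical
userspace sketch, not the real struct page):

#include <assert.h>
#include <stddef.h>

struct list_head_stub { void *next, *prev; };

/* Simplified stand-in for the union in struct page shown above. */
struct fake_page {
	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head_stub lru;
		};
		struct {	/* slab, slob and slub */
			struct list_head_stub slab_list;
		};
	};
};

int main(void)
{
	/*
	 * lru and slab_list occupy the same storage, so switching the
	 * accessors cannot change the generated code; the objdump diff
	 * in the steps below is therefore expected to be empty.
	 */
	static_assert(offsetof(struct fake_page, lru) ==
		      offsetof(struct fake_page, slab_list),
		      "lru and slab_list must alias");
	return 0;
}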

Steps taken to verify:

1. check out the current tip of Linus' tree

commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")

2. configure and build (select SLOB allocator)

CONFIG_SLOB=y
CONFIG_SLAB_MERGE_DEFAULT=y

3. disassemble the object file: `objdump -dr mm/slob.o > before.s`
4. apply patch
5. build
6. disassemble the object file: `objdump -dr mm/slob.o > after.s`
7. diff before.s after.s (an empty diff shows the object code is
   unchanged)

Use slab_list list_head instead of the lru list_head for maintaining
lists of slabs.

Reviewed-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
mm/slob.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index 39ad9217ffea..21af3fdb457a 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -112,13 +112,13 @@ static inline int slob_page_free(struct page *sp)

static void set_slob_page_free(struct page *sp, struct list_head *list)
{
- list_add(&sp->lru, list);
+ list_add(&sp->slab_list, list);
__SetPageSlobFree(sp);
}

static inline void clear_slob_page_free(struct page *sp)
{
- list_del(&sp->lru);
+ list_del(&sp->slab_list);
__ClearPageSlobFree(sp);
}

@@ -282,7 +282,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)

spin_lock_irqsave(&slob_lock, flags);
/* Iterate through each partially free page, try to find room */
- list_for_each_entry(sp, slob_list, lru) {
+ list_for_each_entry(sp, slob_list, slab_list) {
#ifdef CONFIG_NUMA
/*
* If there's a node specification, search for a partial
@@ -299,22 +299,22 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
* Cache previous entry because slob_page_alloc() may
* remove sp from slob_list.
*/
- prev = list_prev_entry(sp, lru);
+ prev = list_prev_entry(sp, slab_list);

/* Attempt to alloc */
b = slob_page_alloc(sp, size, align);
if (!b)
continue;

- next = list_next_entry(prev, lru); /* This may or may not be sp */
+ next = list_next_entry(prev, slab_list); /* This may or may not be sp */

/*
* Improve fragment distribution and reduce our average
* search time by starting our next search here. (see
* Knuth vol 1, sec 2.5, pg 449)
*/
- if (!list_is_first(&next->lru, slob_list))
- list_rotate_to_front(&next->lru, slob_list);
+ if (!list_is_first(&next->slab_list, slob_list))
+ list_rotate_to_front(&next->slab_list, slob_list);

break;
}
@@ -331,7 +331,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
spin_lock_irqsave(&slob_lock, flags);
sp->units = SLOB_UNITS(PAGE_SIZE);
sp->freelist = b;
- INIT_LIST_HEAD(&sp->lru);
+ INIT_LIST_HEAD(&sp->slab_list);
set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
set_slob_page_free(sp, slob_list);
b = slob_page_alloc(sp, size, align);
--
2.21.0