Subject: Re: [PATCH] fs: inode: Reduce volatile inode wraparound risk when ino_t is 64 bit
On Fri, Dec 20, 2019 at 2:16 PM Chris Down <> wrote:
> Hi Amir,
> Thanks for getting back, I appreciate it.
> Amir Goldstein writes:
> >How about something like this:
> >
> >/* just to explain - use an existing macro */
> >shmem_ino_shift = ilog2(sizeof(void *));
> >inode->i_ino = (__u64)inode >> shmem_ino_shift;
> >
> >This should solve the reported problem with little complexity,
> >but it exposes internal kernel address to userspace.
> One problem I can see with that approach is that get_next_ino doesn't
> discriminate based on the context (for example, when it is called for a
> particular tmpfs mount) which means that eventually wraparound risk is still
> pushed to the limit on such machines for other users of get_next_ino (like
> named pipes, sockets, procfs, etc). Granted, collisions between them are
> less likely given the magnitude of inodes they have at any one time compared
> to some tmpfs workloads, but still.

If you ask me, trying to solve all the problems that may or may not exist
is not the best way to solve "your" problem. I am not saying you shouldn't
look around to see if you can improve something for more cases, but every
case is different, so I am not sure there is one solution that fits all.

If you had come forward with a suggestion to improve get_next_ino() because
it solves a microbenchmark, I suspect you wouldn't have gotten far.
Instead, you came forward with a report of a real-life problem:
"In Facebook production we are seeing heavy inode number wraparounds
on tmpfs..." and Hugh confessed that Google is facing the same problem
and carries a private patch (per-sb ino counter).

There is no doubt that tmpfs is being used at much larger scales than it
was in the past. I do doubt that the other filesystems that use
get_next_ino(), like pseudo filesystems, really have a wraparound problem.
If there is such a real world problem, let someone come forward with a
report for that use case.

IMO, tmpfs should be taken out of the list of get_next_ino() (ab)users and
then the rest of the cases should be fine. When I say "tmpfs" I mean
every filesystem with a usage pattern similar to tmpfs: one that uses an
inode cache pool and has the potential to recycle a very large number of
inodes.
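
Just to illustrate what a per-sb counter could look like (a sketch only, not
Hugh's actual patch; the next_ino/ino_lock fields and the shmem_next_ino()
helper are made-up names):

/*
 * Sketch: per-superblock inode number allocation for tmpfs.
 * The counter lives in the tmpfs sb info instead of the global
 * get_next_ino() pool, so each mount gets its own 64bit space.
 */
struct shmem_sb_info {
	/* ... existing fields ... */
	unsigned long next_ino;		/* hypothetical per-sb counter */
	spinlock_t ino_lock;		/* protects next_ino */
};

static unsigned long shmem_next_ino(struct super_block *sb)
{
	struct shmem_sb_info *sbinfo = SHMEM_SB(sb);
	unsigned long ino;

	spin_lock(&sbinfo->ino_lock);
	ino = ++sbinfo->next_ino;
	spin_unlock(&sbinfo->ino_lock);

	return ino;
}

With something like that in place, one busy tmpfs mount no longer pushes
every other get_next_ino() user closer to wraparound.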

> >Can we do anything to mitigate this risk?
> >
> >For example, instead of trying to maintain a unique map of
> >ino_t to struct shmem_inode_info * in the system
> >it would be enough (and less expensive) to maintain a unique map of
> >shmem_ino_range_t to slab.
> >The ino_range id can then be mixed with the relative object index in
> >slab to compose i_ino.
> >
> >The big win here is not having to allocate an id every bunch of inodes
> >instead of every inode, but the fact that recycled (i.e. delete/create)
> >shmem_inode_info objects get the same i_ino without having to
> >allocate any id.
> >
> >This mimics a standard behavior of blockdev filesystem like ext4/xfs
> >where inode number is determined by logical offset on disk and is
> >quite often recycled on delete/create.
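
(Spelling out that composition with made-up names: if every slab page gets a
range id and every object knows its index within its page, then

	i_ino = ((u64)range_id * objs_per_slab) + obj_index_in_slab;

yields a number that is naturally reused when the same slot is recycled on
delete/create.)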
> >
> >I realize that the method I described with slab is crossing module layers
> >and would probably be NACKED.
> Yeah, that's more or less my concern with that approach as well, hence why I
> went for something that seemed less intrusive and keeps with the current inode
> allocation strategy :-)
> >Similar result could be achieved by shmem keeping a small stash of
> >recycled inode objects, which are not returned to slab right away and
> >retain their allocated i_ino. This at least should significantly reduce the
> >rate of burning get_next_ino allocation.
> While this issue happens to present itself currently on tmpfs, I'm worried that
> future users of get_next_ino based on historic precedent might end up hitting
> this as well. That's the main reason why I'm inclined to try and improve
> get_next_ino's strategy itself.

I am not going to stop you from trying to improve get_next_ino().
I just think there is a MUCH simpler solution to your problem (see below).

> >Anyway, to add another consideration to the mix, overlayfs uses
> >the high ino bits to multiplex several layers into a single ino domain
> >(mount option xino=on).
> >
> >tmpfs is a very commonly used filesystem as overlayfs upper layer,
> >so many users are going to benefit from keeping the highest bits
> >of tmpfs inode numbers unused.
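
(For reference, xino packs the layer's fsid into the topmost bits, roughly
along the lines of

	ovl_ino = real_ino | ((u64)fsid << (64 - xinobits));

which is only safe if the underlying filesystem never sets those high bits
itself. This is a simplified illustration, not the actual overlayfs code.)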
> >
> >For this reason, I dislike the current "grow forever" approach of
> >get_next_ino() and prefer that we use a smarter scheme when
> >switching over to 64bit values.
> By "a smarter scheme when switching over to 64bit values", you mean keeping
> i_ino as low magnitude as possible while still avoiding simultaneous reuse,
> right?


> To that extent, if we can reliably and expediently recycle inode numbers, I'm
> not against sticking to the existing typing scheme in get_next_ino. It's just a
> matter of agreeing by what method and at what level of the stack that should
> take place :-)

1. Extend the kmem_cache API to let the ctor() know if it is initializing an
   object for the first time (new page) or recycling an object.
2. Let shmem_init_inode retain the value of i_ino of recycled shmem_inode_info
3. i_ino is initialized with get_next_ino() only in case it is zero
   (sketched below)

Alternatively to 1., if simpler to implement and acceptable to slab developers:
1.b. remove the assertion from cache_grow_begin()/new_slab_objects():
WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO));
and pass __GFP_ZERO in shmem_alloc_inode()
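
In shmem terms, 2. and 3. could look roughly like this (a sketch only; the
zero check is the change, shmem_get_inode() currently assigns i_ino
unconditionally):

/*
 * shmem_init_inode() is the slab ctor for shmem inodes.  A ctor runs
 * when the slab populates a fresh page of objects, not on every
 * allocation, so whatever it leaves in place survives free/alloc
 * recycling of the same object.
 */
static void shmem_init_inode(void *foo)
{
	struct shmem_inode_info *info = foo;

	inode_init_once(&info->vfs_inode);	/* memsets the inode, i_ino = 0 */
	/* deliberately not touched again, so a recycled object keeps its i_ino */
}

/* ...and in shmem_get_inode(), only burn a new number the first time: */
	if (inode->i_ino == 0)
		inode->i_ino = get_next_ino();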

You see, when you look at the big picture, all the smarts of an id allocator
that you could possibly need are already there in the slab allocator for shmem
inode objects. You just need a way to access that "id" information for recycled
objects without having to write any performance-sensitive code.

> I'd appreciate your thoughts on approaches forward. One potential option is to
> reimplement get_next_ino using an IDA, as mentioned in my patch message. Other
> than the potential to upset microbenchmarks, do you have concerns with that as
> a patch?
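
(For context, the IDA variant floated in the patch description would look
roughly like this; a sketch only, and it leaves out the part where inode
teardown has to give the number back with ida_free():)

#include <linux/idr.h>

static DEFINE_IDA(ino_ida);

unsigned int get_next_ino(void)
{
	/* lowest free id >= 1, so numbers are recycled after ida_free() */
	int ino = ida_alloc_min(&ino_ida, 1, GFP_KERNEL);

	return ino < 0 ? 0 : ino;
}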

Only that it will be subject to performance regression reports from hardware
and workloads that you do not have access to. It's going to be hard for you to
prove that you did not hurt any workload, so it's not an easy way forward.

