Date: Wed, 5 Oct 2016 14:17:41 -0400
From: Benjamin LaHaise <>
Subject: Re: aio: questions with ioctx_alloc() and large num_possible_cpus()
On Wed, Oct 05, 2016 at 02:58:12PM -0300, Mauricio Faria de Oliveira wrote:
> Hi Benjamin,
>
> On 10/05/2016 02:41 PM, Benjamin LaHaise wrote:
> > I'd suggest increasing the default limit by changing how it is calculated.
> > The current number came about 13 years ago when machines had orders of
> > magnitude less RAM than they do today.
>
> Thanks for the suggestion.
>
> Does the default also have implications other than memory usage?
> For example, concurrency/performance with that many aio contexts running,
> or whether userspace could try to exploit something with a larger number?
Anything's possible when a local user can run code. It's the same problem as determining how much memory can be mlock()ed, or how much I/O a process should be allowed to do. Nothing prevents an app from issuing a huge number of readahead() calls to make the system prefetch gigabytes of data. That said, local users tend not to DoS themselves.
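For context, the global limit shows up to applications as io_setup() failing with EAGAIN. Below is a minimal standalone illustration, not taken from this thread; it assumes Linux with the raw io_setup(2)/io_destroy(2) syscalls, and the queue depth of 128 is arbitrary.

/* Minimal illustration: io_setup() returns EAGAIN once the system-wide
 * aio budget (fs.aio-max-nr vs. fs.aio-nr) is exhausted. */
#include <linux/aio_abi.h>   /* aio_context_t */
#include <sys/syscall.h>     /* SYS_io_setup, SYS_io_destroy */
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
	aio_context_t ctx = 0;   /* must be zero before io_setup() */

	if (syscall(SYS_io_setup, 128, &ctx) < 0) {
		/* EAGAIN here typically means the events requested, plus
		 * what the kernel reserves internally, no longer fit under
		 * fs.aio-max-nr. */
		fprintf(stderr, "io_setup: %s\n", strerror(errno));
		return 1;
	}
	printf("io_setup succeeded, ctx=%#lx\n", (unsigned long)ctx);
	syscall(SYS_io_destroy, ctx);
	return 0;
}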
> Wondering about it because it can be set based on num_possible_cpus(),
> but that might be really large on high-end systems.
Today's high-end systems are tomorrow's desktops... It probably makes sense to implement per-user limits rather than the current global limit, and maybe even to convert them to an rlimit so they fit in with the existing frameworks for managing these things.
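The num_possible_cpus() dependence comes from how ioctx_alloc() sizes the ring buffer before charging it to the global counter. The following is a paraphrased sketch of the fs/aio.c logic of that era, not a verbatim copy; names follow the kernel source, and the exact checks should be confirmed against fs/aio.c.

unsigned long aio_nr;                /* current system-wide aio requests */
unsigned long aio_max_nr = 0x10000;  /* default system-wide maximum */

static struct kioctx *ioctx_alloc(unsigned nr_events)
{
	...
	/*
	 * Up to half the ring slots can be parked in other CPUs' percpu
	 * counters, so the ring is sized to at least 4 slots per possible
	 * CPU and then doubled.  On a machine with thousands of possible
	 * CPUs this dwarfs whatever userspace actually asked for.
	 */
	nr_events = max(nr_events, num_possible_cpus() * 4);
	nr_events *= 2;
	...
	/*
	 * The inflated count is what gets charged against the global
	 * limit, so a handful of io_setup() calls can exhaust the
	 * default on a large box.
	 */
	spin_lock(&aio_nr_lock);
	if (aio_nr + nr_events > (aio_max_nr * 2UL) ||
	    aio_nr + nr_events < aio_nr) {
		spin_unlock(&aio_nr_lock);
		return ERR_PTR(-EAGAIN);
	}
	aio_nr += nr_events;
	spin_unlock(&aio_nr_lock);
	...
}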
-ben
> Regards,
>
> --
> Mauricio Faria de Oliveira
> IBM Linux Technology Center
-- "Thought is the essence of where you are now."