    Subject: Re: [PATCH v8 4/5] locking/qspinlock: Introduce starvation avoidance into CNA
    On 1/21/20 2:50 PM, Peter Zijlstra wrote:
    > On Tue, Jan 21, 2020 at 02:29:49PM +0100, Peter Zijlstra wrote:
    >> On Mon, Dec 30, 2019 at 02:40:41PM -0500, Alex Kogan wrote:
    >>
    >>> +/*
    >>> + * Controls the threshold for the number of intra-node lock hand-offs before
    >>> + * the NUMA-aware variant of spinlock is forced to be passed to a thread on
    >>> + * another NUMA node. By default, the chosen value provides reasonable
    >>> + * long-term fairness without sacrificing performance compared to a lock
    >>> + * that does not have any fairness guarantees. The default setting can
    >>> + * be changed with the "numa_spinlock_threshold" boot option.
    >>> + */
    >>> +int intra_node_handoff_threshold __ro_after_init = 1 << 16;
    >> There is a distinct lack of quantitative data to back up that
    >> 'reasonable' claim there.
    >>
    >> Where is the table of inter-node latencies observed for the various
    >> values tested, and on what criteria is this number deemed reasonable?
    >>
    >> To me, 64k lock hold times seems like a giant number, entirely outside
    >> of reasonable.
    > Daniel, IIRC you just did a paper on constructing worst case latencies
    > from measuring pieces. Do you have data on average lock hold times?
    >

    I am still writing the paper, but I do not have the (avg) lock hold times yet.
    It is in the TODO list, though!

    -- Daniel
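
    For reference, the threshold logic being discussed can be illustrated with a
    minimal user-space sketch. This is not the kernel CNA code: the names
    handoff_count, pick_next_waiter and the two-waiter model are hypothetical,
    only the 1 << 16 default comes from the patch. The idea is that after
    intra_node_handoff_threshold consecutive intra-node hand-offs, the next
    hand-off is forced to a waiter on another NUMA node and the counter resets.

    /*
     * Minimal user-space sketch (not the kernel implementation) of the
     * intra-node hand-off threshold described in the quoted comment.
     * handoff_count and pick_next_waiter are hypothetical names.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct waiter {
            int numa_node;  /* NUMA node of the waiting thread (illustrative) */
    };

    static int handoff_count;                       /* consecutive intra-node hand-offs */
    static const int handoff_threshold = 1 << 16;   /* default from the patch */

    /* Return true once the fairness threshold has been reached. */
    static bool must_hand_off_remote(void)
    {
            return handoff_count >= handoff_threshold;
    }

    /* Choose between a same-node waiter and a remote-node waiter. */
    static struct waiter *pick_next_waiter(struct waiter *local, struct waiter *remote)
    {
            if (local && !must_hand_off_remote()) {
                    handoff_count++;        /* one more intra-node hand-off */
                    return local;
            }
            handoff_count = 0;              /* crossed nodes: reset the counter */
            return remote ? remote : local;
    }

    int main(void)
    {
            struct waiter local  = { .numa_node = 0 };
            struct waiter remote = { .numa_node = 1 };
            long crossings = 0;

            for (long i = 0; i < 3L * handoff_threshold; i++)
                    if (pick_next_waiter(&local, &remote) == &remote)
                            crossings++;

            /* Expect roughly one remote hand-off per handoff_threshold local ones. */
            printf("remote hand-offs: %ld\n", crossings);
            return 0;
    }

    With the default of 1 << 16, roughly one in every 64K hand-offs crosses
    nodes, which is the long-term fairness bound whose justification is being
    questioned above.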
