Date: Wed, 24 Jan 2018 18:19:21 +0000
From: Mel Gorman <>
Subject: Re: [PATCH 2/2] free_pcppages_bulk: prefetch buddy while not holding lock
On Wed, Jan 24, 2018 at 08:57:43AM -0800, Dave Hansen wrote:
> On 01/24/2018 08:43 AM, Mel Gorman wrote:
> > I'm less convinced by this for a microbenchmark. Prefetch has not been a
> > universal win in the past and we cannot be sure that it's a good idea on
> > all architectures or doesn't have other side-effects such as consuming
> > memory bandwidth for data we don't need or evicting cache hot data for
> > buddy information that is not used.
>
> I had the same reaction.
>
> But, I think this case is special. We *always* do buddy merging (well,
> before the next patch in the series is applied) and check an order-0
> page's buddy to try to merge it when it goes into the main allocator.
> So, the cacheline will always come in.
>
> IOW, I don't think this has the same downsides normally associated with
> prefetch() since the data is always used.
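[Editor's note: for readers without the patch in front of them, the idea
being debated is to compute the order-0 buddy of each page as it is pulled
off the per-cpu list and prefetch its struct page before zone->lock is
taken, so the later merge finds the cacheline warm. A minimal sketch of
that idea, using the existing kernel helpers page_to_pfn(),
__find_buddy_pfn() and prefetch(); the exact hunk in the posted patch may
differ:

	static inline void prefetch_buddy(struct page *page)
	{
		unsigned long pfn = page_to_pfn(page);
		unsigned long buddy_pfn = __find_buddy_pfn(pfn, 0);
		struct page *buddy = page + (buddy_pfn - pfn);

		/* Warm the buddy's struct page before zone->lock is taken. */
		prefetch(buddy);
	}
]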
That doesn't side-step the fact that the calculations are done twice in the
free_pcppages_bulk path, and there is no guarantee that one prefetch in the
list of pages being freed will not evict a previous prefetch due to
collisions. At least on the machine I'm writing this from, the prefetches
necessary for a standard drain are 1/16th of the L1D cache, so some
collisions/evictions are possible. We're doing definite work in one path on
the chance it'll still be cache-resident when it's recalculated. I suspect
that only a microbenchmark doing very large amounts of frees (or a large
munmap or exit) will notice, and the costs of a large munmap/exit are so
high that the prefetch will be a negligible saving.
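[Editor's note: to make the "work done twice" point concrete, the merge
that eventually runs under zone->lock recomputes the same buddy pfn, so the
prefetch is extra work spent on the chance the line is still resident by
then. A simplified sketch of the order-0 step of that merge, plus the
footprint arithmetic; the batch size and cache size below are illustrative
assumptions, not figures from the mail:

	/* Simplified from the first iteration of the merge loop in
	 * __free_one_page(), which runs with zone->lock held and redoes
	 * the same buddy calculation the prefetch already did. */
	static void merge_order0(struct page *page, unsigned long pfn)
	{
		unsigned long buddy_pfn = __find_buddy_pfn(pfn, 0); /* computed again */
		struct page *buddy = page + (buddy_pfn - pfn);

		if (page_is_buddy(page, buddy, 0)) {
			/* ... coalesce and continue at order 1 ... */
		}
	}

	/*
	 * Footprint estimate (illustrative numbers): draining pcp->batch = 31
	 * pages prefetches up to 31 struct page cachelines, roughly
	 * 31 * 64B ~= 2KB, i.e. about 1/16th of a 32KB L1D, which is why
	 * collisions/evictions among the prefetches are plausible.
	 */
]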
--
Mel Gorman
SUSE Labs