Subject: Re: [PATCH 07/10] mm/hmm: add a helper function that faults pages and maps them to a device
On Mon, Mar 18, 2019 at 01:21:00PM -0700, Dan Williams wrote:
> On Tue, Jan 29, 2019 at 8:55 AM <jglisse@redhat.com> wrote:
> >
> > From: Jérôme Glisse <jglisse@redhat.com>
> >
> > This is an all-in-one helper that faults pages in a range and maps
> > them to a device, so that every single device driver does not have to
> > re-implement this common pattern.
>
> Ok, correct me if I am wrong, but these seem to effectively be the
> typical "get_user_pages() + dma_map_page()" pattern that non-HMM
> drivers would follow. Could we just teach get_user_pages() to take an
> HMM shortcut based on the range?
>
> I'm interested in being able to share code across drivers and not have
> to worry about the HMM special case at the API level.
>
> And to be clear this isn't an anti-HMM critique this is a "yes, let's
> do this, but how about a more fundamental change".

It is a yes and no. HMM has the synchronization with mmu notifiers,
which is not common to all device drivers, ie there are device drivers
that do not synchronize with mmu notifiers and just use GUP. For
instance, see the range->valid test in the code below: it is HMM
specific and would not apply to a GUP user.
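
For reference, the GUP pattern Dan refers to looks roughly like this
(a hypothetical sketch with simplified error handling, not lifted from
any particular driver). Note the absence of any mmu notifier
synchronization: the pages stay pinned until the driver releases them,
so there is no equivalent of the range->valid check:

/* Hypothetical sketch of the classic non-HMM pattern: pin pages with
 * GUP, then dma map each one. Pages remain pinned for the lifetime of
 * the mapping; nothing can invalidate them under us. */
static long pin_and_map(struct device *dev, unsigned long start,
			unsigned long npages, struct page **pages,
			dma_addr_t *daddrs)
{
	long i, ret;

	ret = get_user_pages(start, npages, FOLL_WRITE, pages, NULL);
	if (ret <= 0)
		return ret ? ret : -EFAULT;

	for (i = 0; i < ret; i++) {
		daddrs[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
					 DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, daddrs[i]))
			goto unmap;
	}
	return ret;

unmap:
	/* Unmap what was mapped, then unpin everything GUP pinned. */
	while (i--)
		dma_unmap_page(dev, daddrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
	while (ret--)
		put_page(pages[ret]);
	return -EFAULT;
}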

Nonetheless I want to remove more HMM code and grow GUP to do some of
this too, so that HMM and non-HMM drivers can share the common part
(under GUP). But right now updating GUP is too big an endeavor. I need
to make progress on more drivers with HMM before thinking of messing
with the GUP code. Making that code HMM-only for now will make the GUP
factorization easier and smaller down the road (we should only need to
update the HMM helpers and not each individual driver that uses HMM).
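
For the HMM side, a rough sketch of how a driver might call the new
helper (my illustration only, following the return value contract
documented in the patch below; range setup and registration elided):

/* Hypothetical caller sketch: retry on -EAGAIN (mmap_sem was dropped
 * for us) and on -EBUSY (the range was invalidated mid-way). */
static long driver_dma_map(struct hmm_range *range, struct device *dev,
			   dma_addr_t *daddrs)
{
	long ret;

	do {
		down_read(&current->mm->mmap_sem);
		ret = hmm_range_dma_map(range, dev, daddrs, true);
		/* On -EAGAIN the mmap_sem was already dropped for us. */
		if (ret != -EAGAIN)
			up_read(&current->mm->mmap_sem);
	} while (ret == -EAGAIN || ret == -EBUSY);

	return ret; /* number of pages mapped, or a negative error */
}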

FYI here is my todo list:
- this patchset
- HMM ODP
- mmu notifier changes for optimization and device range binding
- device range binding (amdgpu/nouveau/...)
- factor out some nouveau deep inner-layer code to outer-layer for
more code sharing
- page->mapping endeavor for generic page protection, for instance
KSM with file-backed pages
- grow GUP so that the HMM code can be removed and consolidated with
the GUP code
...

Cheers,
Jérôme

>
> >
> > Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Ralph Campbell <rcampbell@nvidia.com>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > ---
> > include/linux/hmm.h | 9 +++
> > mm/hmm.c | 152 ++++++++++++++++++++++++++++++++++++++++++++
> > 2 files changed, 161 insertions(+)
> >
> > diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> > index 4263f8fb32e5..fc3630d0bbfd 100644
> > --- a/include/linux/hmm.h
> > +++ b/include/linux/hmm.h
> > @@ -502,6 +502,15 @@ int hmm_range_register(struct hmm_range *range,
> > void hmm_range_unregister(struct hmm_range *range);
> > long hmm_range_snapshot(struct hmm_range *range);
> > long hmm_range_fault(struct hmm_range *range, bool block);
> > +long hmm_range_dma_map(struct hmm_range *range,
> > + struct device *device,
> > + dma_addr_t *daddrs,
> > + bool block);
> > +long hmm_range_dma_unmap(struct hmm_range *range,
> > + struct vm_area_struct *vma,
> > + struct device *device,
> > + dma_addr_t *daddrs,
> > + bool dirty);
> >
> > /*
> > * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range
> > diff --git a/mm/hmm.c b/mm/hmm.c
> > index 0a4ff31e9d7a..9cd68334a759 100644
> > --- a/mm/hmm.c
> > +++ b/mm/hmm.c
> > @@ -30,6 +30,7 @@
> > #include <linux/hugetlb.h>
> > #include <linux/memremap.h>
> > #include <linux/jump_label.h>
> > +#include <linux/dma-mapping.h>
> > #include <linux/mmu_notifier.h>
> > #include <linux/memory_hotplug.h>
> >
> > @@ -985,6 +986,157 @@ long hmm_range_fault(struct hmm_range *range, bool block)
> > return (hmm_vma_walk.last - range->start) >> PAGE_SHIFT;
> > }
> > EXPORT_SYMBOL(hmm_range_fault);
> > +
> > +/*
> > + * hmm_range_dma_map() - hmm_range_fault() and dma map page all in one.
> > + * @range: range being faulted
> > + * @device: device to dma map the pages against
> > + * @daddrs: dma address of mapped pages
> > + * @block: allow blocking on fault (if true it sleeps and does not drop mmap_sem)
> > + * Returns: number of pages mapped on success, -EAGAIN if mmap_sem has been
> > + * dropped and you need to try again, some other error value otherwise
> > + *
> > + * Note: same usage pattern as hmm_range_fault().
> > + */
> > +long hmm_range_dma_map(struct hmm_range *range,
> > + struct device *device,
> > + dma_addr_t *daddrs,
> > + bool block)
> > +{
> > + unsigned long i, npages, mapped;
> > + long ret;
> > +
> > + ret = hmm_range_fault(range, block);
> > + if (ret <= 0)
> > + return ret ? ret : -EBUSY;
> > +
> > + npages = (range->end - range->start) >> PAGE_SHIFT;
> > + for (i = 0, mapped = 0; i < npages; ++i) {
> > + enum dma_data_direction dir = DMA_FROM_DEVICE;
> > + struct page *page;
> > +
> > + /*
> > + * FIXME need to update the DMA API to provide an invalid DMA address
> > + * value instead of a function to test the dma address value. This
> > + * would remove a lot of dumb code duplicated across many archs.
> > + *
> > + * For now setting it to 0 here is good enough as the pfns[]
> > + * value is what is used to check what is valid and what isn't.
> > + */
> > + daddrs[i] = 0;
> > +
> > + page = hmm_pfn_to_page(range, range->pfns[i]);
> > + if (page == NULL)
> > + continue;
> > +
> > + /* Check if range is being invalidated */
> > + if (!range->valid) {
> > + ret = -EBUSY;
> > + goto unmap;
> > + }
> > +
> > + /* If it is read and write then map bi-directional. */
> > + if (range->pfns[i] & range->values[HMM_PFN_WRITE])
> > + dir = DMA_BIDIRECTIONAL;
> > +
> > + daddrs[i] = dma_map_page(device, page, 0, PAGE_SIZE, dir);
> > + if (dma_mapping_error(device, daddrs[i])) {
> > + ret = -EFAULT;
> > + goto unmap;
> > + }
> > +
> > + mapped++;
> > + }
> > +
> > + return mapped;
> > +
> > +unmap:
> > + for (npages = i, i = 0; (i < npages) && mapped; ++i) {
> > + enum dma_data_direction dir = DMA_FROM_DEVICE;
> > + struct page *page;
> > +
> > + page = hmm_pfn_to_page(range, range->pfns[i]);
> > + if (page == NULL)
> > + continue;
> > +
> > + if (dma_mapping_error(device, daddrs[i]))
> > + continue;
> > +
> > + /* If it is read and write then map bi-directional. */
> > + if (range->pfns[i] & range->values[HMM_PFN_WRITE])
> > + dir = DMA_BIDIRECTIONAL;
> > +
> > + dma_unmap_page(device, daddrs[i], PAGE_SIZE, dir);
> > + mapped--;
> > + }
> > +
> > + return ret;
> > +}
> > +EXPORT_SYMBOL(hmm_range_dma_map);
> > +
> > +/*
> > + * hmm_range_dma_unmap() - unmap a range that was mapped with hmm_range_dma_map()
> > + * @range: range being unmapped
> > + * @vma: the vma against which the range is mapped (optional)
> > + * @device: device against which dma map was done
> > + * @daddrs: dma address of mapped pages
> > + * @dirty: dirty pages if they had the write flag set
> > + * Returns: number of pages unmapped on success, -EINVAL otherwise
> > + *
> > + * Note that the caller MUST abide by mmu notifiers, or use HMM mirror and
> > + * abide by the sync_cpu_device_pagetables() callback, so that it is safe here
> > + * to call set_page_dirty(). The caller must also take appropriate locks to
> > + * prevent a concurrent mmu notifier or sync_cpu_device_pagetables() from
> > + * making progress.
> > + */
> > +long hmm_range_dma_unmap(struct hmm_range *range,
> > + struct vm_area_struct *vma,
> > + struct device *device,
> > + dma_addr_t *daddrs,
> > + bool dirty)
> > +{
> > + unsigned long i, npages;
> > + long cpages = 0;
> > +
> > + /* Sanity check. */
> > + if (range->end <= range->start)
> > + return -EINVAL;
> > + if (!daddrs)
> > + return -EINVAL;
> > + if (!range->pfns)
> > + return -EINVAL;
> > +
> > + npages = (range->end - range->start) >> PAGE_SHIFT;
> > + for (i = 0; i < npages; ++i) {
> > + enum dma_data_direction dir = DMA_FROM_DEVICE;
> > + struct page *page;
> > +
> > + page = hmm_pfn_to_page(range, range->pfns[i]);
> > + if (page == NULL)
> > + continue;
> > +
> > + /* If it is read and write then map bi-directional. */
> > + if (range->pfns[i] & range->values[HMM_PFN_WRITE]) {
> > + dir = DMA_BIDIRECTIONAL;
> > +
> > + /*
> > + * See the comments in the function description on why
> > + * it is safe here to call set_page_dirty().
> > + */
> > + if (dirty)
> > + set_page_dirty(page);
> > + }
> > +
> > + /* Unmap and clear pfns/dma address */
> > + dma_unmap_page(device, daddrs[i], PAGE_SIZE, dir);
> > + range->pfns[i] = range->values[HMM_PFN_NONE];
> > + /* FIXME see comments in hmm_range_dma_map() */
> > + daddrs[i] = 0;
> > + cpages++;
> > + }
> > +
> > + return cpages;
> > +}
> > +EXPORT_SYMBOL(hmm_range_dma_unmap);
> > #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
> >
> >
> > --
> > 2.17.2
> >
