Subject: Re: [v4,1/3] drm/prime: use dma length macro when mapping sg
From: Marek Szyprowski <>
Date: Fri, 27 Mar 2020 08:54:53 +0100
Hi All,
On 2020-03-25 10:07, Shane Francis wrote:
> As dma_map_sg can reorganize scatter-gather lists in a
> way that can cause some later segments to be empty we should
> always use the sg_dma_len macro to fetch the actual length.
>
> This could now be 0 and not need to be mapped to a page or
> address array
>
> Signed-off-by: Shane Francis <bigbeeshane@gmail.com>
> Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>

This patch landed in linux-next 20200326 and it causes a kernel panic on various Exynos SoC based boards.

> ---
>  drivers/gpu/drm/drm_prime.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 86d9b0e45c8c..1de2cde2277c 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -967,7 +967,7 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
>
>  	index = 0;
>  	for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> -		len = sg->length;
> +		len = sg_dma_len(sg);
>  		page = sg_page(sg);
>  		addr = sg_dma_address(sg);
>
Sorry, but this code is wrong :(
The scatterlist elements (sg) describe memory chunks both in physical memory and in the DMA (IO virtual) address space. However, in general you cannot assume a 1:1 mapping between them. If you access sg_page(sg) (basically sg->page), you must pair it with sg->length. When you access sg_dma_address(sg) (again, in most cases this is sg->dma_address), you must pair it with sg_dma_len(sg). sg->dma_address might not be the DMA address of sg->page.
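To make the pairing rule concrete, here is a minimal sketch (the walk_both_sides() helper is made up for illustration, it is not code from any driver): each side of an sg_table has to be walked with its own length accessor and its own entry count, yet both describe the same buffer.

#include <linux/bug.h>
#include <linux/scatterlist.h>

static void walk_both_sides(struct sg_table *sgt)
{
	struct scatterlist *sg;
	unsigned int i;
	size_t cpu_total = 0, dma_total = 0;

	/* CPU side: sg_page() pairs with sg->length,
	 * iterated over all orig_nents entries */
	for_each_sg(sgt->sgl, sg, sgt->orig_nents, i)
		cpu_total += sg->length;

	/* DMA side: sg_dma_address() pairs with sg_dma_len(),
	 * iterated over the nents returned by dma_map_sg() */
	for_each_sg(sgt->sgl, sg, sgt->nents, i)
		dma_total += sg_dma_len(sg);

	/* both sides cover the same buffer, so the totals match */
	WARN_ON(cpu_total != dma_total);
}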
In some cases (when an IOMMU is available, it performs aggregation of the scatterlist chunks, and a few other minor requirements are met), the whole scatterlist might be mapped into a contiguous DMA address space, with only the first sg element filled with a DMA address and length.
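For example (a hypothetical mapping sequence, not taken from any driver), mapping a multi-chunk table behind an IOMMU can look like this:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int map_buffer(struct device *dev, struct sg_table *sgt)
{
	int nents;

	/* map all orig_nents chunks; with an IOMMU the chunks may be
	 * joined into one contiguous IOVA range */
	nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents,
			   DMA_BIDIRECTIONAL);
	if (nents <= 0)
		return -ENOMEM;
	sgt->nents = nents;

	/* nents may now be 1 even if orig_nents was, say, 4: the first
	 * element holds the whole range in sg_dma_address()/sg_dma_len(),
	 * later elements may report sg_dma_len() == 0, while sg_page()
	 * and sg->length still describe the CPU pages of every element */
	return 0;
}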
The proper way to iterate over a scatterlist to get both the pages and the DMA addresses assigned to them is:
int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
				     dma_addr_t *addrs, int max_entries)
{
	unsigned count;
	struct scatterlist *sg;
	struct page *page;
	u32 page_len, page_index;
	dma_addr_t addr;
	u32 dma_len, dma_index;

	page_index = 0;
	dma_index = 0;
	for_each_sg(sgt->sgl, sg, sgt->nents, count) {
		page_len = sg->length;
		page = sg_page(sg);
		dma_len = sg_dma_len(sg);
		addr = sg_dma_address(sg);

		/* CPU side: fill pages[] from sg_page()/sg->length */
		while (pages && page_len > 0) {
			if (WARN_ON(page_index >= max_entries))
				return -1;
			pages[page_index] = page;
			page++;
			page_len -= PAGE_SIZE;
			page_index++;
		}

		/* DMA side: fill addrs[] from sg_dma_address()/sg_dma_len(),
		 * advanced with its own, independent index */
		while (addrs && dma_len > 0) {
			if (WARN_ON(dma_index >= max_entries))
				return -1;
			addrs[dma_index] = addr;
			addr += PAGE_SIZE;
			dma_len -= PAGE_SIZE;
			dma_index++;
		}
	}

	return 0;
}
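For reference, a caller would use it roughly like this (fill_arrays() and its error handling are only a sketch; a real driver would keep the arrays for the lifetime of the buffer instead of freeing them right away):

#include <drm/drm_gem.h>
#include <drm/drm_prime.h>
#include <linux/mm.h>

static int fill_arrays(struct drm_gem_object *obj, struct sg_table *sgt)
{
	int npages = obj->size >> PAGE_SHIFT;
	struct page **pages;
	dma_addr_t *addrs;
	int ret = -ENOMEM;

	pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
	addrs = kvmalloc_array(npages, sizeof(*addrs), GFP_KERNEL);
	if (!pages || !addrs)
		goto out;

	/* pages[] gets filled from the CPU side of the scatterlist,
	 * addrs[] from the DMA side, each with its own index */
	ret = drm_prime_sg_to_page_addr_arrays(sgt, pages, addrs, npages);
out:
	kvfree(pages);
	kvfree(addrs);
	return ret;
}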
I will send a patch in a few minutes with the above fixed code.
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland