Changes since v1:
* Rebased onto next-20170424
* Removed the _offset version of these functions per Christoph's suggestion
* Added an SG_MAP_MUST_NOT_FAIL flag which will BUG_ON in future cases that can't gracefully fail. This removes a bunch of the noise added in v1 to a couple of the drivers. (Per David Laight's suggestion.) This flag is only meant for old code.
* Split the libiscsi patch into two (per Christoph's suggestion); the prep patch (patch 2 in this series) has already been sent separately
* Fixed a locking mistake in the target patch (pointed out by a bot)
* Dropped the nvmet patch and handled it with a different patch that has been sent separately
* Dropped the chcr patch as they have already removed the code that needed to be changed
I'm still hoping to only get Patch 1 in the series merged. (Any volunteers?) I'm willing to chase down the maintainers for the remaining patches separately after the first patch is in.
The patchset is based on next-20170424 and can be found in the sg_map_v2 branch from this git tree:
https://github.com/sbates130272/linux-p2pmem.git
--
Hi Everyone,
As part of my effort to enable P2P DMA transactions with PCI cards, we've identified the need to be able to safely put IO memory into scatterlists (and eventually other spots). This probably involves a conversion from struct page to pfn_t but that migration is a ways off and those decisions are yet to be made.
As an initial step in that direction, I've started cleaning up some of the scatterlist code by trying to carve out a better-defined layer between it and its users. The longer-term goal would be to remove sg_page or replace it with something that can potentially fail.
This patchset is the first step in that effort. I've introduced a common function to map scatterlist memory and converted all the common kmap(sg_page()) cases. This removes about 66 sg_page calls (of ~331).
Seeing as this is a fairly large cleanup set that touches a wide swath of the kernel, I have limited the people I've sent this to. I'd suggest we look toward merging the first patch, and then I can send the individual subsystem patches on to their respective maintainers and get them merged independently. (This is to avoid the conflicts I created with my last cleanup set... Sorry.) Though, I'm certainly open to other suggestions to get it merged.
Logan Gunthorpe (21):
  scatterlist: Introduce sg_map helper functions
  libiscsi: Add an internal error code
  libiscsi: Make use of the new sg_map helper function
  target: Make use of the new sg_map function at 16 call sites
  drm/i915: Make use of the new sg_map helper function
  crypto: hifn_795x: Make use of the new sg_map helper function
  crypto: shash, caam: Make use of the new sg_map helper function
  dm-crypt: Make use of the new sg_map helper in 4 call sites
  staging: unisys: visorbus: Make use of the new sg_map helper function
  RDS: Make use of the new sg_map helper function
  scsi: ipr, pmcraid, isci: Make use of the new sg_map helper
  scsi: hisi_sas, mvsas, gdth: Make use of the new sg_map helper function
  scsi: arcmsr, ips, megaraid: Make use of the new sg_map helper function
  scsi: libfc, csiostor: Change to sg_copy_buffer in two drivers
  xen-blkfront: Make use of the new sg_map helper function
  mmc: sdhci: Make use of the new sg_map helper function
  mmc: spi: Make use of the new sg_map helper function
  mmc: tmio: Make use of the new sg_map helper function
  mmc: sdricoh_cs: Make use of the new sg_map helper function
  mmc: tifm_sd: Make use of the new sg_map helper function
  memstick: Make use of the new sg_map helper function
 crypto/shash.c                                  |   9 ++-
 drivers/block/xen-blkfront.c                    |  20 ++---
 drivers/crypto/caam/caamalg.c                   |   8 +-
 drivers/crypto/hifn_795x.c                      |  32 +++++---
 drivers/gpu/drm/i915/i915_gem.c                 |  27 ++++---
 drivers/md/dm-crypt.c                           |  39 ++++++---
 drivers/memstick/host/jmb38x_ms.c               |  11 +--
 drivers/memstick/host/tifm_ms.c                 |  11 +--
 drivers/mmc/host/mmc_spi.c                      |  26 ++++--
 drivers/mmc/host/sdhci.c                        |  14 ++--
 drivers/mmc/host/sdricoh_cs.c                   |  14 ++--
 drivers/mmc/host/tifm_sd.c                      |  50 +++++++-----
 drivers/mmc/host/tmio_mmc.h                     |   7 +-
 drivers/mmc/host/tmio_mmc_pio.c                 |  12 +++
 drivers/scsi/arcmsr/arcmsr_hba.c                |  16 +++-
 drivers/scsi/csiostor/csio_scsi.c               |  54 +------------
 drivers/scsi/cxgbi/libcxgbi.c                   |   5 ++
 drivers/scsi/gdth.c                             |   9 ++-
 drivers/scsi/hisi_sas/hisi_sas_v1_hw.c          |  14 ++--
 drivers/scsi/hisi_sas/hisi_sas_v2_hw.c          |  13 ++-
 drivers/scsi/ipr.c                              |  27 ++++---
 drivers/scsi/ips.c                              |   8 +-
 drivers/scsi/isci/request.c                     |  42 ++++++----
 drivers/scsi/libfc/fc_libfc.c                   |  49 +++--------
 drivers/scsi/libiscsi_tcp.c                     |  32 +++++---
 drivers/scsi/megaraid.c                         |   9 ++-
 drivers/scsi/mvsas/mv_sas.c                     |  10 +--
 drivers/scsi/pmcraid.c                          |  19 +++--
 drivers/staging/unisys/visorhba/visorhba_main.c |  12 +--
 drivers/target/iscsi/iscsi_target.c             |  29 ++++---
 drivers/target/target_core_rd.c                 |   3 +-
 drivers/target/target_core_sbc.c                | 103 +++++++++++++++--------
 drivers/target/target_core_transport.c          |  18 +++--
 drivers/target/target_core_user.c               |  45 ++++++++---
 include/linux/scatterlist.h                     |  85 +++++++++++++++++++
 include/scsi/libiscsi_tcp.h                     |   3 +-
 include/target/target_core_backend.h            |   4 +-
 net/rds/ib_recv.c                               |   8 +-
 38 files changed, 553 insertions(+), 344 deletions(-)
--
2.1.4
This patch introduces functions which kmap the pages inside an sgl. These functions replace a common pattern of kmap(sg_page(sg)) that is used in more than 50 places within the kernel.
The motivation for this work is to eventually safely support sgls that contain io memory. In order for that to work, any access to the contents of an iomem SGL will need to be done with iomemcpy or hit some warning. (The exact details of how this will work have yet to be worked out.) Having all the kmaps in one place is just a first step in that direction. Additionally, seeing as this helps cut down the users of sg_page, it should make any effort to go to struct-page-less DMAs a little easier (should that idea ever swing back into favour again).
A flags option is added to select between a regular or atomic mapping, so these functions can replace kmap(sg_page(...)) or kmap_atomic(sg_page(...)) calls. Future work may expand this to have flags for using page_address or vmap. We include a flag to require that the function not fail, to support legacy code that has no easy error path. Much further in the future, there may be a flag to allocate memory and copy the data from/to iomem.
We also add the semantic that sg_map can fail to create a mapping, despite the fact that the current code this is replacing is assumed to never fail and the current version of these functions cannot fail. This is to support iomem which may either have to fail to create the mapping or allocate memory as a bounce buffer which itself can fail.
Also, in terms of cleanup, a few of the existing kmap(sg_page) users play things a bit loose in terms of whether they apply sg->offset, so using these helper functions should help avoid such issues.
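As a rough sketch of the intended calling convention (the surrounding driver code here is hypothetical; only sg_map(), sg_unmap() and the flags come from this patch):

    void *addr;

    addr = sg_map(sg, 0, SG_KMAP_ATOMIC);
    if (IS_ERR(addr))
        return PTR_ERR(addr);

    memcpy(buf, addr, sg->length);
    sg_unmap(sg, addr, 0, SG_KMAP_ATOMIC);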
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 include/linux/scatterlist.h | 85 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 85 insertions(+)
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index cb3c8fe..fad170b 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -5,6 +5,7 @@
 #include <linux/types.h>
 #include <linux/bug.h>
 #include <linux/mm.h>
+#include <linux/highmem.h>
 #include <asm/io.h>
 
 struct scatterlist {
@@ -126,6 +127,90 @@ static inline struct page *sg_page(struct scatterlist *sg)
 	return (struct page *)((sg)->page_link & ~0x3);
 }
 
+#define SG_KMAP			(1 << 0)	/* create a mapping with kmap */
+#define SG_KMAP_ATOMIC		(1 << 1)	/* create a mapping with kmap_atomic */
+#define SG_MAP_MUST_NOT_FAIL	(1 << 2)	/* indicate sg_map should not fail */
+
+/**
+ * sg_map - kmap a page inside an sgl
+ * @sg:		SG entry
+ * @offset:	Offset into entry
+ * @flags:	Flags for creating the mapping
+ *
+ * Description:
+ *   Use this function to map a page in the scatterlist at the specified
+ *   offset. sg->offset is already added for you. Note: the semantics of
+ *   this function are that it may fail. Thus, its output should be checked
+ *   with IS_ERR and PTR_ERR. Otherwise, a pointer to the specified offset
+ *   in the mapped page is returned.
+ *
+ *   Flags can be any of:
+ *	* SG_KMAP		- Use kmap to create the mapping
+ *	* SG_KMAP_ATOMIC	- Use kmap_atomic to map the page atomically.
+ *				  Thus, the rules of that function apply: the
+ *				  CPU may not sleep until it is unmapped.
+ *	* SG_MAP_MUST_NOT_FAIL	- Indicate that sg_map must not fail.
+ *				  If it does, it will issue a BUG_ON instead.
+ *				  This is intended for legacy code only, it
+ *				  is not to be used in new code.
+ *
+ *   Also, consider carefully whether this function is appropriate. It is
+ *   largely not recommended for new code and if the sgl came from another
+ *   subsystem and you don't know what kind of memory might be in the list
+ *   then you definitely should not call it. Non-mappable memory may be in
+ *   the sgl and thus this function may fail unexpectedly. Consider using
+ *   sg_copy_to_buffer instead.
+ **/
+static inline void *sg_map(struct scatterlist *sg, size_t offset, int flags)
+{
+	struct page *pg;
+	unsigned int pg_off;
+	void *ret;
+
+	offset += sg->offset;
+	pg = nth_page(sg_page(sg), offset >> PAGE_SHIFT);
+	pg_off = offset_in_page(offset);
+
+	if (flags & SG_KMAP_ATOMIC)
+		ret = kmap_atomic(pg) + pg_off;
+	else if (flags & SG_KMAP)
+		ret = kmap(pg) + pg_off;
+	else
+		ret = ERR_PTR(-EINVAL);
+
+	/*
+	 * In theory, this can't happen yet. Once we start adding
+	 * unmappable memory, it also shouldn't happen unless developers
+	 * start putting unmappable struct pages in sgls and passing
+	 * them to code that doesn't support it.
+	 */
+	BUG_ON(flags & SG_MAP_MUST_NOT_FAIL && IS_ERR(ret));
+
+	return ret;
+}
+
+/**
+ * sg_unmap - unmap a page that was mapped with sg_map
+ * @sg:		SG entry
+ * @addr:	address returned by sg_map
+ * @offset:	Offset into entry (same as specified for sg_map)
+ * @flags:	Flags, which are the same specified for sg_map
+ *
+ * Description:
+ *   Unmap the page that was mapped with sg_map.
+ **/
+static inline void sg_unmap(struct scatterlist *sg, void *addr,
+			    size_t offset, int flags)
+{
+	struct page *pg = nth_page(sg_page(sg), offset >> PAGE_SHIFT);
+	unsigned int pg_off = offset_in_page(offset);
+
+	if (flags & SG_KMAP_ATOMIC)
+		kunmap_atomic(addr - sg->offset - pg_off);
+	else if (flags & SG_KMAP)
+		kunmap(pg);
+}
+
 /**
  * sg_set_buf - Set sg entry to point at given data
  * @sg:	 SG entry
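For legacy call sites with no error path, the intended pattern is the must-not-fail variant, e.g. (a sketch; the target patch later in the series uses exactly this form):

    daddr = sg_map(dsg, 0, SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
    ...
    sg_unmap(dsg, daddr, 0, SG_KMAP_ATOMIC);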
On Tue, Apr 25, 2017 at 12:20:48PM -0600, Logan Gunthorpe wrote:
This patch introduces functions which kmap the pages inside an sgl. These functions replace a common pattern of kmap(sg_page(sg)) that is used in more than 50 places within the kernel.
The motivation for this work is to eventually safely support sgls that contain io memory. In order for that to work, any access to the contents of an iomem SGL will need to be done with iomemcpy or hit some warning. (The exact details of how this will work have yet to be worked out.)
I think we'll at least need a draft of those to make sense of these patches. Otherwise they just look very clumsy.
 * Use this function to map a page in the scatterlist at the specified
 * offset. sg->offset is already added for you. Note: the semantics of
 * this function are that it may fail. Thus, its output should be checked
 * with IS_ERR and PTR_ERR. Otherwise, a pointer to the specified offset
 * in the mapped page is returned.
 *
 * Flags can be any of:
 *	* SG_KMAP		- Use kmap to create the mapping
 *	* SG_KMAP_ATOMIC	- Use kmap_atomic to map the page atomically.
 *				  Thus, the rules of that function apply: the
 *				  CPU may not sleep until it is unmapped.
 *	* SG_MAP_MUST_NOT_FAIL	- Indicate that sg_map must not fail.
 *				  If it does, it will issue a BUG_ON instead.
 *				  This is intended for legacy code only, it
 *				  is not to be used in new code.
I'm sorry but this API is just a trainwreck. Right now we have the nice little kmap_atomic API, which never fails and has a very nice calling convention where we just pass back the return address, but does not support sleeping inside the critical section.
And kmap, which may fail and requires the original page to be passed back. Anything that mixes these two concepts up is simply a non-starter.
On 26/04/17 01:44 AM, Christoph Hellwig wrote:
I think we'll at least need a draft of those to make sense of these patches. Otherwise they just look very clumsy.
Ok, I'll work up a draft proposal and send it in a couple days. But without a lot of cleanup such as this series it's not going to even be able to compile.
I'm sorry but this API is just a trainwreck. Right now we have the nice little kmap_atomic API, which never fails and has a very nice calling convention where we just pass back the return address, but does not support sleeping inside the critical section.
And kmap, which may fail and requires the original page to be passed back. Anything that mixes these two concepts up is simply a non-starter.
Ok, well for starters I think you are mistaken about kmap being able to fail. I'm having a hard time finding many users of that function that bother to check for an error when calling it. The main difficulty we have now is that neither of those functions are expected to fail and we need them to be able to in cases where the page doesn't map to system RAM. This patch series is trying to address it for users of scatterlist. I'm certainly open to other suggestions.
I also have to disagree that kmap and kmap_atomic are all that "nice". Except for the sleeping restriction and performance, they effectively do the same thing. And it was necessary to write a macro wrapper around kunmap_atomic to ensure that users of that function don't screw it up. (See 597781f3e5.) I'd say the kmap/kmap_atomic functions are the trainwreck and I'm trying to do my best to cleanup a few cases.
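For reference, that commit turned kunmap_atomic() into a macro along these lines (paraphrased from include/linux/highmem.h; the exact definition may differ by kernel version):

    #define kunmap_atomic(addr)                                     \
    do {                                                            \
            BUILD_BUG_ON(__same_type((addr), struct page *));       \
            __kunmap_atomic(addr);                                  \
    } while (0)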
There are a fair number of cases in the kernel that do something like:
    if (something)
        x = kmap(page);
    else
        x = kmap_atomic(page);
    ...
    if (something)
        kunmap(page);
    else
        kunmap_atomic(x);
Which just seems cumbersome to me.
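With a flags argument, the same call site collapses to something like (a sketch using the API from patch 1):

    int flags = something ? SG_KMAP : SG_KMAP_ATOMIC;

    x = sg_map(sg, 0, flags);
    ...
    sg_unmap(sg, x, 0, flags);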
In any case, if you can accept an sg_kmap and sg_kmap_atomic api just say so and I'll make the change. But I'll still need a flags variable for SG_MAP_MUST_NOT_FAIL to support legacy cases that have no fail path and both of those functions will need to be pretty nearly replicas of each other.
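(For concreteness, the two wrappers I have in mind would just be thin shims over the existing function, sketched here as hypothetical code:

    static inline void *sg_kmap(struct scatterlist *sg, size_t offset,
                                int flags)
    {
            return sg_map(sg, offset, SG_KMAP | flags);
    }

    static inline void *sg_kmap_atomic(struct scatterlist *sg,
                                       size_t offset, int flags)
    {
            return sg_map(sg, offset, SG_KMAP_ATOMIC | flags);
    }

where flags would carry SG_MAP_MUST_NOT_FAIL for the legacy cases.)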
Logan
On Wed, Apr 26, 2017 at 12:11:33PM -0600, Logan Gunthorpe wrote:
Ok, well for starters I think you are mistaken about kmap being able to fail. I'm having a hard time finding many users of that function that bother to check for an error when calling it.
A quick audit of the arch code shows you're right - kmap can't fail anywhere anymore.
The main difficulty we have now is that neither of those functions are expected to fail and we need them to be able to in cases where the page doesn't map to system RAM. This patch series is trying to address it for users of scatterlist. I'm certainly open to other suggestions.
I think you'll need to follow the existing kmap semantics and never fail the iomem version either. Otherwise you'll have a special case that's almost never used that has a different error path.
There are a fair number of cases in the kernel that do something like:
    if (something)
        x = kmap(page);
    else
        x = kmap_atomic(page);
    ...
    if (something)
        kunmap(page);
    else
        kunmap_atomic(x);
Which just seems cumbersome to me.
Passing a different flag based on something isn't really much better.
In any case, if you can accept an sg_kmap and sg_kmap_atomic api just say so and I'll make the change. But I'll still need a flags variable for SG_MAP_MUST_NOT_FAIL to support legacy cases that have no fail path and both of those functions will need to be pretty nearly replicas of each other.
Again, wrong way. Suddenly making things fail for your special case that normally don't fail is a recipe for bugs.
On Thu, Apr 27, 2017 at 08:53:38AM +0200, Christoph Hellwig wrote:
The main difficulty we have now is that neither of those functions are expected to fail and we need them to be able to in cases where the page doesn't map to system RAM. This patch series is trying to address it for users of scatterlist. I'm certainly open to other suggestions.
I think you'll need to follow the existing kmap semantics and never fail the iomem version either. Otherwise you'll have a special case that's almost never used that has a different error path.
How about first switching as many call sites as possible to use sg_copy_X_buffer instead of kmap?
A random audit of Logan's series suggests this is actually a fairly common thing.
eg drivers/mmc/host/sdhci.c is only doing this:
    buffer = sdhci_kmap_atomic(sg, &flags);
    memcpy(buffer, align, size);
    sdhci_kunmap_atomic(buffer, &flags);
drivers/scsi/mvsas/mv_sas.c is this:
+	to = sg_map(sg_resp, 0, SG_KMAP_ATOMIC);
+	memcpy(to,
+	       slot->response + sizeof(struct mvs_err_info),
+	       sg_dma_len(sg_resp));
+	sg_unmap(sg_resp, to, 0, SG_KMAP_ATOMIC);
etc.
Lots of other places seem similar, if not sometimes a little bit more convoluted..
Switching all the trivial cases to use copy might bring more clarity as to what is actually required for the remaining few users? If there are only a few then it may no longer matter if the API is not idyllic.
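For instance, the mvsas hunk above could presumably collapse to a single call (a sketch only; nents is assumed to be 1 here, matching the original code):

    sg_copy_from_buffer(sg_resp, 1,
                        slot->response + sizeof(struct mvs_err_info),
                        sg_dma_len(sg_resp));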
Jason
On 27/04/17 09:27 AM, Jason Gunthorpe wrote:
On Thu, Apr 27, 2017 at 08:53:38AM +0200, Christoph Hellwig wrote:

How about first switching as many call sites as possible to use sg_copy_X_buffer instead of kmap?
Yeah, I could look at doing that first.
One problem is that we might get more NAKs of the form of Herbert Xu's, from maintainers who might be concerned about the performance implications.
These are definitely a bit more invasive changes than thin wrappers around kmap calls.
A random audit of Logan's series suggests this is actually a fairly common thing.
It's not _that_ common, but a significant fraction of the call sites fit the pattern. One of my patches actually did this to two places that seemed to be reimplementing the sg_copy_X_buffer logic.
Thanks,
Logan
On 27/04/17 12:53 AM, Christoph Hellwig wrote:
I think you'll need to follow the existing kmap semantics and never fail the iomem version either. Otherwise you'll have a special case that's almost never used that has a different error path.
Again, wrong way. Suddenly making things fail for your special case that normally don't fail is a recipe for bugs.
I don't disagree, but don't these restrictions make the problem impossible to solve? If there is iomem behind a page in an SGL and someone tries to map it, we either have to fail or we break iomem safety, which was your original concern.
Logan
On 26/04/17 01:44 AM, Christoph Hellwig wrote:
I think we'll at least need a draft of those to make sense of these patches. Otherwise they just look very clumsy.
Ok, what follows is a draft patch attempting to show where I'm thinking of going with this. Obviously it will not compile because it assumes the users throughout the kernel are a bit different than they are today. Notably, there is no sg_page anymore.
There's also likely a ton of issues and arguments to have over a bunch of the specifics below and I'd expect the concept to evolve more as cleanup occurs. This itself is an evolution of the draft I posted replying to you in my last RFC thread.
Also, before any of this is truly useful to us, pfn_t would have to infect a few other places in the kernel.
Thanks,
Logan
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index fad170b..85ef928 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -6,13 +6,14 @@
 #include <linux/bug.h>
 #include <linux/mm.h>
 #include <linux/highmem.h>
+#include <linux/pfn_t.h>
 #include <asm/io.h>
 
 struct scatterlist {
 #ifdef CONFIG_DEBUG_SG
 	unsigned long sg_magic;
 #endif
-	unsigned long page_link;
+	pfn_t pfn;
 	unsigned int offset;
 	unsigned int length;
 	dma_addr_t dma_address;
@@ -60,15 +61,68 @@ struct sg_table {
 
 #define SG_MAGIC 0x87654321
 
-/*
- * We overload the LSB of the page pointer to indicate whether it's
- * a valid sg entry, or whether it points to the start of a new scatterlist.
- * Those low bits are there for everyone! (thanks mason :-)
- */
-#define sg_is_chain(sg)	((sg)->page_link & 0x01)
-#define sg_is_last(sg)	((sg)->page_link & 0x02)
-#define sg_chain_ptr(sg)	\
-	((struct scatterlist *) ((sg)->page_link & ~0x03))
+static inline bool sg_is_chain(struct scatterlist *sg)
+{
+	return sg->pfn.val & PFN_SG_CHAIN;
+}
+
+static inline bool sg_is_last(struct scatterlist *sg)
+{
+	return sg->pfn.val & PFN_SG_LAST;
+}
+
+static inline struct scatterlist *sg_chain_ptr(struct scatterlist *sg)
+{
+	unsigned long sgl = pfn_t_to_pfn(sg->pfn);
+	return (struct scatterlist *)(sgl << PAGE_SHIFT);
+}
+
+static inline bool sg_is_iomem(struct scatterlist *sg)
+{
+	return pfn_t_is_iomem(sg->pfn);
+}
+
+/**
+ * sg_assign_pfn - Assign a given pfn_t to an SG entry
+ * @sg:		SG entry
+ * @pfn:	The pfn
+ *
+ * Description:
+ *   Assign a pfn to an sg entry. Also see sg_set_pfn(), the most commonly
+ *   used variant.
+ **/
+static inline void sg_assign_pfn(struct scatterlist *sg, pfn_t pfn)
+{
+#ifdef CONFIG_DEBUG_SG
+	BUG_ON(sg->sg_magic != SG_MAGIC);
+	BUG_ON(sg_is_chain(sg));
+	BUG_ON(pfn.val & (PFN_SG_CHAIN | PFN_SG_LAST));
+#endif
+
+	sg->pfn = pfn;
+}
+
+/**
+ * sg_set_pfn - Set sg entry to point at given pfn
+ * @sg:		SG entry
+ * @pfn:	The page
+ * @len:	Length of data
+ * @offset:	Offset into page
+ *
+ * Description:
+ *   Use this function to set an sg entry pointing at a pfn, never assign
+ *   the page directly. We encode sg table information in the lower bits
+ *   of the page pointer. See sg_pfn_t() for looking up the pfn_t belonging
+ *   to an sg entry.
+ **/
+static inline void sg_set_pfn(struct scatterlist *sg, pfn_t pfn,
+			      unsigned int len, unsigned int offset)
+{
+	sg_assign_pfn(sg, pfn);
+	sg->offset = offset;
+	sg->length = len;
+}
 
 /**
  * sg_assign_page - Assign a given page to an SG entry
@@ -82,18 +136,13 @@
  **/
 static inline void sg_assign_page(struct scatterlist *sg, struct page *page)
 {
-	unsigned long page_link = sg->page_link & 0x3;
+	if (!page) {
+		pfn_t null_pfn = {0};
+		sg_assign_pfn(sg, null_pfn);
+		return;
+	}
 
-	/*
-	 * In order for the low bit stealing approach to work, pages
-	 * must be aligned at a 32-bit boundary as a minimum.
-	 */
-	BUG_ON((unsigned long) page & 0x03);
-#ifdef CONFIG_DEBUG_SG
-	BUG_ON(sg->sg_magic != SG_MAGIC);
-	BUG_ON(sg_is_chain(sg));
-#endif
-	sg->page_link = page_link | (unsigned long) page;
+	sg_assign_pfn(sg, page_to_pfn_t(page));
 }
 
 /**
@@ -106,8 +155,7 @@ static inline void sg_assign_page(struct scatterlist *sg, struct page *page)
  * Description:
  *   Use this function to set an sg entry pointing at a page, never assign
  *   the page directly. We encode sg table information in the lower bits
- *   of the page pointer. See sg_page() for looking up the page belonging
- *   to an sg entry.
+ *   of the page pointer.
  *
  **/
 static inline void sg_set_page(struct scatterlist *sg, struct page *page,
@@ -118,13 +166,53 @@ static inline void sg_set_page(struct scatterlist *sg, struct page *page,
 	sg->length = len;
 }
 
-static inline struct page *sg_page(struct scatterlist *sg)
+/**
+ * sg_pfn_t - Return the pfn_t for the sg
+ * @sg:		SG entry
+ **/
+static inline pfn_t sg_pfn_t(struct scatterlist *sg)
 {
 #ifdef CONFIG_DEBUG_SG
 	BUG_ON(sg->sg_magic != SG_MAGIC);
 	BUG_ON(sg_is_chain(sg));
 #endif
-	return (struct page *)((sg)->page_link & ~0x3);
+
+	return sg->pfn;
+}
+
+/**
+ * sg_to_mappable_page - Try to return a struct page safe for general
+ *	use in the kernel
+ * @sg:		SG entry
+ * @page:	A pointer to the returned page
+ *
+ * Description:
+ *   If possible, return a mappable page that's safe for use around the
+ *   kernel. Should only be used in legacy situations. sg_pfn_t() is a
+ *   better choice for new code. This is deliberately more awkward than
+ *   the old sg_page to enforce the __must_check rule and discourage future
+ *   use.
+ *
+ *   An example where this is required is in nvme-fabrics: a page from an
+ *   sgl is placed into a bio. This function would be required until we can
+ *   convert bios to use pfn_t as well. Similar issues exist with skbs, etc.
+ **/
+static inline __must_check int sg_to_mappable_page(struct scatterlist *sg,
+						   struct page **ret)
+{
+	struct page *pg;
+
+	if (unlikely(sg_is_iomem(sg)))
+		return -EFAULT;
+
+	pg = pfn_t_to_page(sg->pfn);
+	if (unlikely(!pg))
+		return -EFAULT;
+
+	*ret = pg;
+
+	return 0;
 }
 
 #define SG_KMAP		(1 << 0)	/* create a mapping with kmap */
@@ -167,8 +255,19 @@ static inline void *sg_map(struct scatterlist *sg, size_t offset, int flags)
 	unsigned int pg_off;
 	void *ret;
 
+	if (unlikely(sg_is_iomem(sg))) {
+		ret = ERR_PTR(-EFAULT);
+		goto out;
+	}
+
+	pg = pfn_t_to_page(sg->pfn);
+	if (unlikely(!pg)) {
+		ret = ERR_PTR(-EFAULT);
+		goto out;
+	}
+
 	offset += sg->offset;
-	pg = nth_page(sg_page(sg), offset >> PAGE_SHIFT);
+	pg = nth_page(pg, offset >> PAGE_SHIFT);
 	pg_off = offset_in_page(offset);
 
 	if (flags & SG_KMAP_ATOMIC)
@@ -178,12 +277,7 @@ static inline void *sg_map(struct scatterlist *sg, size_t offset, int flags)
 	else
 		ret = ERR_PTR(-EINVAL);
 
-	/*
-	 * In theory, this can't happen yet. Once we start adding
-	 * unmappable memory, it also shouldn't happen unless developers
-	 * start putting unmappable struct pages in sgls and passing
-	 * them to code that doesn't support it.
-	 */
+out:
 	BUG_ON(flags & SG_MAP_MUST_NOT_FAIL && IS_ERR(ret));
 
 	return ret;
@@ -202,9 +296,15 @@ static inline void *sg_map(struct scatterlist *sg, size_t offset, int flags)
 static inline void sg_unmap(struct scatterlist *sg, void *addr,
 			    size_t offset, int flags)
 {
-	struct page *pg = nth_page(sg_page(sg), offset >> PAGE_SHIFT);
+	struct page *pg;
 	unsigned int pg_off = offset_in_page(offset);
 
+	pg = pfn_t_to_page(sg->pfn);
+	if (unlikely(!pg))
+		return;
+
+	pg = nth_page(pg, offset >> PAGE_SHIFT);
+
 	if (flags & SG_KMAP_ATOMIC)
 		kunmap_atomic(addr - sg->offset - pg_off);
 	else if (flags & SG_KMAP)
@@ -246,17 +346,18 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
 static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
 			    struct scatterlist *sgl)
 {
+	pfn_t pfn;
+	unsigned long _sgl = (unsigned long) sgl;
+
 	/*
 	 * offset and length are unused for chain entry. Clear them.
 	 */
 	prv[prv_nents - 1].offset = 0;
 	prv[prv_nents - 1].length = 0;
 
-	/*
-	 * Set lowest bit to indicate a link pointer, and make sure to clear
-	 * the termination bit if it happens to be set.
-	 */
-	prv[prv_nents - 1].page_link = ((unsigned long) sgl | 0x01) & ~0x02;
+	BUG_ON(_sgl & ~PAGE_MASK);
+	pfn = __pfn_to_pfn_t(_sgl >> PAGE_SHIFT, PFN_SG_CHAIN);
+	prv[prv_nents - 1].pfn = pfn;
 }
 
 /**
@@ -276,8 +377,8 @@ static inline void sg_mark_end(struct scatterlist *sg)
 	/*
 	 * Set termination bit, clear potential chain bit
 	 */
-	sg->page_link |= 0x02;
-	sg->page_link &= ~0x01;
+	sg->pfn.val |= PFN_SG_LAST;
+	sg->pfn.val &= ~PFN_SG_CHAIN;
 }
 
 /**
@@ -293,7 +394,7 @@ static inline void sg_unmark_end(struct scatterlist *sg)
 #ifdef CONFIG_DEBUG_SG
 	BUG_ON(sg->sg_magic != SG_MAGIC);
 #endif
-	sg->page_link &= ~0x02;
+	sg->pfn.val &= ~PFN_SG_LAST;
 }
 
 /**
@@ -301,14 +402,13 @@ static inline void sg_unmark_end(struct scatterlist *sg)
  * @sg:	   SG entry
  *
  * Description:
- *   This calls page_to_phys() on the page in this sg entry, and adds the
- *   sg offset. The caller must know that it is legal to call page_to_phys()
- *   on the sg page.
+ *   This calls pfn_t_to_phys() on the pfn in this sg entry, and adds the
+ *   sg offset.
  *
  **/
 static inline dma_addr_t sg_phys(struct scatterlist *sg)
 {
-	return page_to_phys(sg_page(sg)) + sg->offset;
+	return pfn_t_to_phys(sg->pfn) + sg->offset;
 }
 
 /**
@@ -323,7 +423,12 @@ static inline dma_addr_t sg_phys(struct scatterlist *sg)
  **/
 static inline void *sg_virt(struct scatterlist *sg)
 {
-	return page_address(sg_page(sg)) + sg->offset;
+	struct page *pg = pfn_t_to_page(sg->pfn);
+
+	BUG_ON(sg_is_iomem(sg));
+	BUG_ON(!pg);
+
+	return page_address(pg) + sg->offset;
 }
 
 int sg_nents(struct scatterlist *sg);
@@ -422,10 +527,18 @@ void __sg_page_iter_start(struct sg_page_iter *piter,
 /**
  * sg_page_iter_page - get the current page held by the page iterator
  * @piter:	page iterator holding the page
+ *
+ * This function will require some cleanup. Some users simply mark
+ * attributes of the pages, which is fine; others actually map it and
+ * will require some safety there.
 */
 static inline struct page *sg_page_iter_page(struct sg_page_iter *piter)
 {
-	return nth_page(sg_page(piter->sg), piter->sg_pgoffset);
+	struct page *pg = pfn_t_to_page(piter->sg->pfn);
+
+	if (!pg)
+		return NULL;
+
+	return nth_page(pg, piter->sg_pgoffset);
 }
 
 /**
@@ -468,11 +581,13 @@ static inline dma_addr_t sg_page_iter_dma_address(struct sg_page_iter *piter)
 #define SG_MITER_ATOMIC		(1 << 0)	/* use kmap_atomic */
 #define SG_MITER_TO_SG		(1 << 1)	/* flush back to phys on unmap */
 #define SG_MITER_FROM_SG	(1 << 2)	/* nop */
+#define SG_MITER_SUPPORTS_IOMEM	(1 << 3)	/* iteratee supports iomem */
 
 struct sg_mapping_iter {
 	/* the following three fields can be accessed directly */
 	struct page		*page;		/* currently mapped page */
 	void			*addr;		/* pointer to the mapped area */
+	void __iomem		*ioaddr;	/* pointer to iomem */
 	size_t			length;		/* length of the mapped area */
 	size_t			consumed;	/* number of consumed bytes */
 	struct sg_page_iter	piter;		/* page iterator */
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index c6cf822..2d1c58c 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -571,6 +571,8 @@ EXPORT_SYMBOL(sg_miter_skip);
  */
 bool sg_miter_next(struct sg_mapping_iter *miter)
 {
+	void *addr;
+
 	sg_miter_stop(miter);
 
 	/*
@@ -580,13 +582,25 @@ bool sg_miter_next(struct sg_mapping_iter *miter)
 	if (!sg_miter_get_next_page(miter))
 		return false;
 
+	if (sg_is_iomem(miter->piter.sg) &&
+	    !(miter->__flags & SG_MITER_SUPPORTS_IOMEM))
+		return false;
+
 	miter->page = sg_page_iter_page(&miter->piter);
 	miter->consumed = miter->length = miter->__remaining;
 
 	if (miter->__flags & SG_MITER_ATOMIC)
-		miter->addr = kmap_atomic(miter->page) + miter->__offset;
+		addr = kmap_atomic(miter->page) + miter->__offset;
 	else
-		miter->addr = kmap(miter->page) + miter->__offset;
+		addr = kmap(miter->page) + miter->__offset;
+
+	if (sg_is_iomem(miter->piter.sg)) {
+		miter->addr = NULL;
+		miter->ioaddr = (void __iomem *) addr;
+	} else {
+		miter->addr = addr;
+		miter->ioaddr = NULL;
+	}
 
 	return true;
 }
@@ -651,7 +665,7 @@ size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
 {
 	unsigned int offset = 0;
 	struct sg_mapping_iter miter;
-	unsigned int sg_flags = SG_MITER_ATOMIC;
+	unsigned int sg_flags = SG_MITER_ATOMIC | SG_MITER_SUPPORTS_IOMEM;
 
 	if (to_buffer)
 		sg_flags |= SG_MITER_FROM_SG;
@@ -668,10 +682,17 @@ size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
 
 		len = min(miter.length, buflen - offset);
 
-		if (to_buffer)
-			memcpy(buf + offset, miter.addr, len);
-		else
-			memcpy(miter.addr, buf + offset, len);
+		if (miter.addr) {
+			if (to_buffer)
+				memcpy(buf + offset, miter.addr, len);
+			else
+				memcpy(miter.addr, buf + offset, len);
+		} else if (miter.ioaddr) {
+			if (to_buffer)
+				memcpy_fromio(buf + offset, miter.ioaddr, len);
+			else
+				memcpy_toio(miter.ioaddr, buf + offset, len);
+		}
 
 		offset += len;
 	}
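As a rough idea of how a legacy consumer would interact with this draft (hypothetical caller code; sg_is_iomem() and sg_to_mappable_page() are from the draft above):

    struct page *pg;

    if (sg_to_mappable_page(sg, &pg)) {
            /* iomem or pageless pfn: fail or take an iomem-aware path */
            return -EFAULT;
    }

    bio_add_page(bio, pg, sg->length, sg->offset);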
On 25.04.2017 at 20:20, Logan Gunthorpe wrote:
This patch introduces functions which kmap the pages inside an sgl. These functions replace a common pattern of kmap(sg_page(sg)) that is used in more than 50 places within the kernel.
The motivation for this work is to eventually safely support sgls that contain io memory. In order for that to work, any access to the contents of an iomem SGL will need to be done with iomemcpy or hit some warning. (The exact details of how this will work have yet to be worked out.) Having all the kmaps in one place is just a first step in that direction. Additionally, seeing as this helps cut down the users of sg_page, it should make any effort to go to struct-page-less DMAs a little easier (should that idea ever swing back into favour again).
A flags option is added to select between a regular or atomic mapping, so these functions can replace kmap(sg_page(...)) or kmap_atomic(sg_page(...)) calls. Future work may expand this to have flags for using page_address or vmap. We include a flag to require that the function not fail, to support legacy code that has no easy error path. Much further in the future, there may be a flag to allocate memory and copy the data from/to iomem.
We also add the semantic that sg_map can fail to create a mapping, despite the fact that the current code this is replacing is assumed to never fail and the current version of these functions cannot fail. This is to support iomem which may either have to fail to create the mapping or allocate memory as a bounce buffer which itself can fail.
Also, in terms of cleanup, a few of the existing kmap(sg_page) users play things a bit loose in terms of whether they apply sg->offset, so using these helper functions should help avoid such issues.
Signed-off-by: Logan Gunthorpe logang@deltatee.com
Good to know that somebody is working on this. Those problems troubled us as well.
Patch is Acked-by: Christian König christian.koenig@amd.com.
Regards, Christian.
On 26/04/17 02:59 AM, Christian König wrote:
Good to know that somebody is working on this. Those problems troubled us as well.
Thanks Christian. It's a daunting problem and there's a lot of work to do before we will ever be where we need to be, so any help, even an ack, is greatly appreciated.
Logan
This is a prep patch to add a new error code to libiscsi. We want to rework some kmap calls to be able to fail. When we do, we'd like to use this error code.
This patch simply introduces ISCSI_TCP_INTERNAL_ERR and prints "Internal Error." when it gets hit.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/scsi/cxgbi/libcxgbi.c | 5 +++++
 include/scsi/libiscsi_tcp.h   | 1 +
 2 files changed, 6 insertions(+)
diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
index bd7d39e..e38d0c1 100644
--- a/drivers/scsi/cxgbi/libcxgbi.c
+++ b/drivers/scsi/cxgbi/libcxgbi.c
@@ -1556,6 +1556,11 @@ static inline int read_pdu_skb(struct iscsi_conn *conn,
 		 */
 		iscsi_conn_printk(KERN_ERR, conn, "Invalid pdu or skb.");
 		return -EFAULT;
+	case ISCSI_TCP_INTERNAL_ERR:
+		pr_info("skb 0x%p, off %u, %d, TCP_INTERNAL_ERR.\n",
+			skb, offset, offloaded);
+		iscsi_conn_printk(KERN_ERR, conn, "Internal error.");
+		return -EFAULT;
 	case ISCSI_TCP_SEGMENT_DONE:
 		log_debug(1 << CXGBI_DBG_PDU_RX,
 			  "skb 0x%p, off %u, %d, TCP_SEG_DONE, rc %d.\n",
diff --git a/include/scsi/libiscsi_tcp.h b/include/scsi/libiscsi_tcp.h
index 30520d5..90691ad 100644
--- a/include/scsi/libiscsi_tcp.h
+++ b/include/scsi/libiscsi_tcp.h
@@ -92,6 +92,7 @@ enum {
 	ISCSI_TCP_SKB_DONE,		/* skb is out of data */
 	ISCSI_TCP_CONN_ERR,		/* iscsi layer has fired a conn err */
 	ISCSI_TCP_SUSPENDED,		/* conn is suspended */
+	ISCSI_TCP_INTERNAL_ERR,		/* an internal error occurred */
 };
extern void iscsi_tcp_hdr_recv_prep(struct iscsi_tcp_conn *tcp_conn);
On Tue, Apr 25, 2017 at 12:20:49PM -0600, Logan Gunthorpe wrote:
This is a prep patch to add a new error code to libiscsi. We want to rework some kmap calls to be able to fail. When we do, we'd like to use this error code.
The kmap case in iscsi_tcp_segment_map can already fail. Please add handling of that failure to this patch, and we should get it merged ASAP.
Convert the kmap and kmap_atomic uses to the sg_map function. We now store the flags for the kmap instead of a boolean to indicate atomicity. We also use the ISCSI_TCP_INTERNAL_ERR error type that was prepared earlier for this.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Lee Duncan <lduncan@suse.com>
Cc: Chris Leech <cleech@redhat.com>
---
 drivers/scsi/libiscsi_tcp.c | 32 ++++++++++++++++++++------------
 include/scsi/libiscsi_tcp.h |  2 +-
 2 files changed, 21 insertions(+), 13 deletions(-)
diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
index 63a1d69..a34e25c 100644
--- a/drivers/scsi/libiscsi_tcp.c
+++ b/drivers/scsi/libiscsi_tcp.c
@@ -133,25 +133,23 @@ static void iscsi_tcp_segment_map(struct iscsi_segment *segment, int recv)
 	if (page_count(sg_page(sg)) >= 1 && !recv)
 		return;
 
-	if (recv) {
-		segment->atomic_mapped = true;
-		segment->sg_mapped = kmap_atomic(sg_page(sg));
-	} else {
-		segment->atomic_mapped = false;
-		/* the xmit path can sleep with the page mapped so use kmap */
-		segment->sg_mapped = kmap(sg_page(sg));
+	/* the xmit path can sleep with the page mapped so don't use atomic */
+	segment->sg_map_flags = recv ? SG_KMAP_ATOMIC : SG_KMAP;
+	segment->sg_mapped = sg_map(sg, 0, segment->sg_map_flags);
+
+	if (IS_ERR(segment->sg_mapped)) {
+		segment->sg_mapped = NULL;
+		return;
 	}
 
-	segment->data = segment->sg_mapped + sg->offset + segment->sg_offset;
+	segment->data = segment->sg_mapped + segment->sg_offset;
 }
 
 void iscsi_tcp_segment_unmap(struct iscsi_segment *segment)
 {
 	if (segment->sg_mapped) {
-		if (segment->atomic_mapped)
-			kunmap_atomic(segment->sg_mapped);
-		else
-			kunmap(sg_page(segment->sg));
+		sg_unmap(segment->sg, segment->sg_mapped, 0,
+			 segment->sg_map_flags);
 		segment->sg_mapped = NULL;
 		segment->data = NULL;
 	}
@@ -304,6 +302,9 @@ iscsi_tcp_segment_recv(struct iscsi_tcp_conn *tcp_conn,
 		break;
 	}
 
+	if (!segment->data)
+		return -EFAULT;
+
 	copy = min(len - copied, segment->size - segment->copied);
 	ISCSI_DBG_TCP(tcp_conn->iscsi_conn, "copying %d\n", copy);
 	memcpy(segment->data + segment->copied, ptr + copied, copy);
@@ -927,6 +928,13 @@ int iscsi_tcp_recv_skb(struct iscsi_conn *conn, struct sk_buff *skb,
 			  avail);
 		rc = iscsi_tcp_segment_recv(tcp_conn, segment, ptr, avail);
 		BUG_ON(rc == 0);
+		if (rc < 0) {
+			ISCSI_DBG_TCP(conn, "memory fault. Consumed %d\n",
+				      consumed);
+			*status = ISCSI_TCP_INTERNAL_ERR;
+			goto skb_done;
+		}
+
 		consumed += rc;
 
 		if (segment->total_copied >= segment->total_size) {
diff --git a/include/scsi/libiscsi_tcp.h b/include/scsi/libiscsi_tcp.h
index 90691ad..58c79af 100644
--- a/include/scsi/libiscsi_tcp.h
+++ b/include/scsi/libiscsi_tcp.h
@@ -47,7 +47,7 @@ struct iscsi_segment {
 	struct scatterlist	*sg;
 	void			*sg_mapped;
 	unsigned int		sg_offset;
-	bool			atomic_mapped;
+	int			sg_map_flags;
 
 	iscsi_segment_done_fn_t	*done;
 };
Fairly straightforward conversions in all spots. In a couple of cases any error gets propagated up should sg_map fail. In other cases a warning is issued if the kmap fails, seeing as there's no clear error path. This should not be an issue until someone tries to use unmappable memory in the sgl with this driver.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
---
 drivers/target/iscsi/iscsi_target.c    |  29 +++++++---
 drivers/target/target_core_rd.c        |   3 +-
 drivers/target/target_core_sbc.c       | 103 +++++++++++++++------------
 drivers/target/target_core_transport.c |  18 ++++--
 drivers/target/target_core_user.c      |  45 +++++++-----
 include/target/target_core_backend.h   |   4 +-
 6 files changed, 134 insertions(+), 68 deletions(-)
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index e3f9ed3..3ab8d21 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -578,7 +578,7 @@ iscsit_xmit_nondatain_pdu(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
 }
 
 static int iscsit_map_iovec(struct iscsi_cmd *, struct kvec *, u32, u32);
-static void iscsit_unmap_iovec(struct iscsi_cmd *);
+static void iscsit_unmap_iovec(struct iscsi_cmd *, struct kvec *);
 static u32 iscsit_do_crypto_hash_sg(struct ahash_request *, struct iscsi_cmd *,
 				    u32, u32, u32, u8 *);
 static int
@@ -645,7 +645,7 @@ iscsit_xmit_datain_pdu(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
 
 	ret = iscsit_fe_sendpage_sg(cmd, conn);
 
-	iscsit_unmap_iovec(cmd);
+	iscsit_unmap_iovec(cmd, &cmd->iov_data[1]);
 
 	if (ret < 0) {
 		iscsit_tx_thread_wait_for_tcp(conn);
@@ -924,7 +924,10 @@ static int iscsit_map_iovec(
 	while (data_length) {
 		u32 cur_len = min_t(u32, data_length, sg->length - page_off);
 
-		iov[i].iov_base = kmap(sg_page(sg)) + sg->offset + page_off;
+		iov[i].iov_base = sg_map(sg, page_off, SG_KMAP);
+		if (IS_ERR(iov[i].iov_base))
+			goto map_err;
+
 		iov[i].iov_len = cur_len;
 
 		data_length -= cur_len;
@@ -936,17 +939,25 @@ static int iscsit_map_iovec(
 	cmd->kmapped_nents = i;
 
 	return i;
+
+map_err:
+	cmd->kmapped_nents = i;
+	iscsit_unmap_iovec(cmd, iov);
+	return -1;
 }
 
-static void iscsit_unmap_iovec(struct iscsi_cmd *cmd)
+static void iscsit_unmap_iovec(struct iscsi_cmd *cmd, struct kvec *iov)
 {
 	u32 i;
 	struct scatterlist *sg;
+	unsigned int page_off = cmd->first_data_sg_off;
 
 	sg = cmd->first_data_sg;
 
-	for (i = 0; i < cmd->kmapped_nents; i++)
-		kunmap(sg_page(&sg[i]));
+	for (i = 0; i < cmd->kmapped_nents; i++) {
+		sg_unmap(&sg[i], iov[i].iov_base, page_off, SG_KMAP);
+		page_off = 0;
+	}
 }
 
 static void iscsit_ack_from_expstatsn(struct iscsi_conn *conn, u32 exp_statsn)
@@ -1609,7 +1620,7 @@ iscsit_get_dataout(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
 
 	rx_got = rx_data(conn, &cmd->iov_data[0], iov_count, rx_size);
 
-	iscsit_unmap_iovec(cmd);
+	iscsit_unmap_iovec(cmd, iov);
 
 	if (rx_got != rx_size)
 		return -1;
@@ -1710,7 +1721,7 @@ int iscsit_setup_nop_out(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
 	if (!cmd)
 		return iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
 					 (unsigned char *)hdr);
-
+
 	return iscsit_reject_cmd(cmd, ISCSI_REASON_PROTOCOL_ERROR,
 				 (unsigned char *)hdr);
 }
@@ -2625,7 +2636,7 @@ static int iscsit_handle_immediate_data(
 
 	rx_got = rx_data(conn, &cmd->iov_data[0], iov_count, rx_size);
 
-	iscsit_unmap_iovec(cmd);
+	iscsit_unmap_iovec(cmd, cmd->iov_data);
 
 	if (rx_got != rx_size) {
 		iscsit_rx_thread_wait_for_tcp(conn);
diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index 5f23f34..348211c 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -432,7 +432,8 @@ static sense_reason_t rd_do_prot_rw(struct se_cmd *cmd, bool is_read)
 					  cmd->t_prot_sg, 0);
 	}
 	if (!rc)
-		sbc_dif_copy_prot(cmd, sectors, is_read, prot_sg, prot_offset);
+		rc = sbc_dif_copy_prot(cmd, sectors, is_read, prot_sg,
+				       prot_offset);
 
 	return rc;
 }
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index ee35c90..8ac07c6 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -420,17 +420,17 @@ static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success,
 
 	offset = 0;
 	for_each_sg(cmd->t_bidi_data_sg, sg, cmd->t_bidi_data_nents, count) {
-		addr = kmap_atomic(sg_page(sg));
-		if (!addr) {
+		addr = sg_map(sg, 0, SG_KMAP_ATOMIC);
+		if (IS_ERR(addr)) {
 			ret = TCM_OUT_OF_RESOURCES;
 			goto out;
 		}
 
 		for (i = 0; i < sg->length; i++)
-			*(addr + sg->offset + i) ^= *(buf + offset + i);
+			*(addr + i) ^= *(buf + offset + i);
 
 		offset += sg->length;
-		kunmap_atomic(addr);
+		sg_unmap(sg, addr, 0, SG_KMAP_ATOMIC);
 	}
 
 out:
@@ -541,8 +541,8 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success,
 	 * Compare against SCSI READ payload against verify payload
 	 */
 	for_each_sg(cmd->t_bidi_data_sg, sg, cmd->t_bidi_data_nents, i) {
-		addr = (unsigned char *)kmap_atomic(sg_page(sg));
-		if (!addr) {
+		addr = sg_map(sg, 0, SG_KMAP_ATOMIC);
+		if (IS_ERR(addr)) {
 			ret = TCM_OUT_OF_RESOURCES;
 			goto out;
 		}
@@ -552,10 +552,10 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success,
 		if (memcmp(addr, buf + offset, len)) {
 			pr_warn("Detected MISCOMPARE for addr: %p buf: %p\n",
 				addr, buf + offset);
-			kunmap_atomic(addr);
+			sg_unmap(sg, addr, 0, SG_KMAP_ATOMIC);
 			goto miscompare;
 		}
-		kunmap_atomic(addr);
+		sg_unmap(sg, addr, 0, SG_KMAP_ATOMIC);
 
 		offset += len;
 		compare_len -= len;
@@ -1315,8 +1315,8 @@ sbc_dif_generate(struct se_cmd *cmd)
 	unsigned int block_size = dev->dev_attrib.block_size;
 
 	for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
-		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
-		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+		paddr = sg_map(psg, 0, SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
+		daddr = sg_map(dsg, 0, SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
 
 		for (j = 0; j < psg->length;
 		     j += sizeof(*sdt)) {
@@ -1325,26 +1325,30 @@ sbc_dif_generate(struct se_cmd *cmd)
 
 			if (offset >= dsg->length) {
 				offset -= dsg->length;
-				kunmap_atomic(daddr - dsg->offset);
+				sg_unmap(dsg, daddr, 0, SG_KMAP_ATOMIC);
 				dsg = sg_next(dsg);
 				if (!dsg) {
-					kunmap_atomic(paddr - psg->offset);
+					sg_unmap(psg, paddr, 0, SG_KMAP_ATOMIC);
 					return;
 				}
-				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+				daddr = sg_map(dsg, 0, SG_KMAP_ATOMIC |
+					       SG_MAP_MUST_NOT_FAIL);
 			}
 
 			sdt = paddr + j;
 			avail = min(block_size, dsg->length - offset);
 			crc = crc_t10dif(daddr + offset, avail);
 			if (avail < block_size) {
-				kunmap_atomic(daddr - dsg->offset);
+				sg_unmap(dsg, daddr, 0, SG_KMAP_ATOMIC);
 				dsg = sg_next(dsg);
 				if (!dsg) {
-					kunmap_atomic(paddr - psg->offset);
+					sg_unmap(psg, paddr, 0, SG_KMAP_ATOMIC);
 					return;
 				}
-				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+
+				daddr = sg_map(dsg, 0, SG_KMAP_ATOMIC |
+					       SG_MAP_MUST_NOT_FAIL);
+
 				offset = block_size - avail;
 				crc = crc_t10dif_update(crc, daddr, offset);
 			} else {
@@ -1366,8 +1370,8 @@ sbc_dif_generate(struct se_cmd *cmd)
 			sector++;
 		}
 
-		kunmap_atomic(daddr - dsg->offset);
-		kunmap_atomic(paddr - psg->offset);
+		sg_unmap(dsg, daddr, 0, SG_KMAP_ATOMIC);
+		sg_unmap(psg, paddr, 0, SG_KMAP_ATOMIC);
 	}
 }
 
@@ -1412,8 +1416,8 @@ sbc_dif_v1_verify(struct se_cmd *cmd, struct t10_pi_tuple *sdt,
 	return 0;
 }
 
-void sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
-		       struct scatterlist *sg, int sg_off)
+int sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
+		      struct scatterlist *sg, int sg_off)
 {
 	struct se_device *dev = cmd->se_dev;
 	struct scatterlist *psg;
@@ -1422,18 +1426,25 @@ void sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
 	unsigned int offset = sg_off;
 
 	if (!sg)
-		return;
+		return 0;
 
 	left = sectors * dev->prot_length;
 
 	for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
 		unsigned int psg_len, copied = 0;
 
-		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
+		paddr = sg_map(psg, 0, SG_KMAP_ATOMIC);
+		if (IS_ERR(paddr))
+			return TCM_OUT_OF_RESOURCES;
+
 		psg_len = min(left, psg->length);
 		while (psg_len) {
 			len = min(psg_len, sg->length - offset);
-			addr = kmap_atomic(sg_page(sg)) + sg->offset + offset;
+			addr = sg_map(sg, offset, SG_KMAP_ATOMIC);
+			if (IS_ERR(addr)) {
+				sg_unmap(psg, paddr, 0, SG_KMAP_ATOMIC);
+				return TCM_OUT_OF_RESOURCES;
+			}
 
 			if (read)
 				memcpy(paddr + copied, addr, len);
@@ -1445,15 +1456,17 @@ void sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
 			copied += len;
 			psg_len -= len;
 
-			kunmap_atomic(addr - sg->offset - offset);
+			sg_unmap(sg, addr, offset, SG_KMAP_ATOMIC);
 
 			if (offset >= sg->length) {
 				sg = sg_next(sg);
 				offset = 0;
 			}
 		}
-		kunmap_atomic(paddr - psg->offset);
+		sg_unmap(psg, paddr, 0, SG_KMAP_ATOMIC);
 	}
+
+	return 0;
 }
 EXPORT_SYMBOL(sbc_dif_copy_prot);
 
@@ -1472,8 +1485,13 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 	unsigned int block_size = dev->dev_attrib.block_size;
 
 	for (; psg && sector < start + sectors; psg = sg_next(psg)) {
-		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
-		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+		paddr = sg_map(psg, 0, SG_KMAP_ATOMIC);
+		if (IS_ERR(paddr))
+			goto sg_map_err;
+
+		daddr = sg_map(dsg, 0, SG_KMAP_ATOMIC);
+		if (IS_ERR(daddr))
+			goto sg_map_err;
 
 		for (i = psg_off; i < psg->length &&
 		     sector < start + sectors;
@@ -1483,13 +1501,15 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 
 			if (dsg_off >= dsg->length) {
 				dsg_off -= dsg->length;
-				kunmap_atomic(daddr - dsg->offset);
+				sg_unmap(dsg, daddr, 0, SG_KMAP_ATOMIC);
 				dsg = sg_next(dsg);
 				if (!dsg) {
-					kunmap_atomic(paddr - psg->offset);
+					sg_unmap(psg, paddr, 0, SG_KMAP_ATOMIC);
 					return 0;
 				}
-				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+				daddr = sg_map(dsg, 0, SG_KMAP_ATOMIC);
+				if (IS_ERR(daddr))
+					goto sg_map_err;
 			}
 
 			sdt = paddr + i;
@@ -1507,13 +1527,16 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 			avail = min(block_size, dsg->length - dsg_off);
 			crc = crc_t10dif(daddr + dsg_off, avail);
 			if (avail < block_size) {
-				kunmap_atomic(daddr - dsg->offset);
+				sg_unmap(dsg, daddr, 0, SG_KMAP_ATOMIC);
 				dsg = sg_next(dsg);
 				if (!dsg) {
-					kunmap_atomic(paddr - psg->offset);
+					sg_unmap(psg, paddr, 0, SG_KMAP_ATOMIC);
 					return 0;
 				}
-				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+				daddr = sg_map(dsg, 0, SG_KMAP_ATOMIC);
+				if (IS_ERR(daddr))
+					goto sg_map_err;
+
 				dsg_off = block_size - avail;
 				crc = crc_t10dif_update(crc, daddr, dsg_off);
 			} else {
@@ -1522,8 +1545,8 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 
 			rc = sbc_dif_v1_verify(cmd, sdt, crc, sector, ei_lba);
 			if (rc) {
-				kunmap_atomic(daddr - dsg->offset);
-				kunmap_atomic(paddr - psg->offset);
+				sg_unmap(dsg, daddr, 0, SG_KMAP_ATOMIC);
+				sg_unmap(psg, paddr, 0, SG_KMAP_ATOMIC);
 				cmd->bad_sector = sector;
 				return rc;
 			}
@@ -1533,10 +1556,16 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 		}
 
 		psg_off = 0;
-		kunmap_atomic(daddr - dsg->offset);
-		kunmap_atomic(paddr - psg->offset);
+		sg_unmap(dsg, daddr, 0, SG_KMAP_ATOMIC);
+		sg_unmap(psg, paddr, 0, SG_KMAP_ATOMIC);
 	}
 
 	return 0;
+
+sg_map_err:
+	if (!IS_ERR_OR_NULL(paddr))
+		sg_unmap(psg, paddr, 0, SG_KMAP_ATOMIC);
+
+	return TCM_OUT_OF_RESOURCES;
 }
 EXPORT_SYMBOL(sbc_dif_verify);
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index a0cd56e..345e547 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1506,11 +1506,11 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess
 		unsigned char *buf = NULL;
 
 		if (sgl)
-			buf = kmap(sg_page(sgl)) + sgl->offset;
+			buf = sg_map(sgl, 0, SG_KMAP);
 
-		if (buf) {
+		if (buf && !IS_ERR(buf)) {
 			memset(buf, 0, sgl->length);
-			kunmap(sg_page(sgl));
+			sg_unmap(sgl, buf, 0, SG_KMAP);
 		}
 	}
 
@@ -2307,8 +2307,14 @@ void *transport_kmap_data_sg(struct se_cmd *cmd)
 		return NULL;
 
 	BUG_ON(!sg);
-	if (cmd->t_data_nents == 1)
-		return kmap(sg_page(sg)) + sg->offset;
+	if (cmd->t_data_nents == 1) {
+		cmd->t_data_vmap = sg_map(sg, 0, SG_KMAP);
+		if (IS_ERR(cmd->t_data_vmap)) {
+			cmd->t_data_vmap = NULL;
+			return NULL;
+		}
+		return cmd->t_data_vmap;
+	}
 
 	/* >1 page. use vmap */
 	pages = kmalloc(sizeof(*pages) * cmd->t_data_nents, GFP_KERNEL);
@@ -2334,7 +2340,7 @@ void transport_kunmap_data_sg(struct se_cmd *cmd)
 	if (!cmd->t_data_nents) {
 		return;
 	} else if (cmd->t_data_nents == 1) {
-		kunmap(sg_page(cmd->t_data_sg));
+		sg_unmap(cmd->t_data_sg, cmd->t_data_vmap, 0, SG_KMAP);
 		return;
 	}
 
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index f615c3b..b55f7e2 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -260,7 +260,7 @@ static inline size_t iov_tail(struct tcmu_dev *udev, struct iovec *iov)
 	return (size_t)iov->iov_base + iov->iov_len;
 }
 
-static void alloc_and_scatter_data_area(struct tcmu_dev *udev,
+static int alloc_and_scatter_data_area(struct tcmu_dev *udev,
 	struct scatterlist *data_sg, unsigned int data_nents,
 	struct iovec **iov, int *iov_cnt, bool copy_data)
 {
@@ -272,7 +272,10 @@ static void alloc_and_scatter_data_area(struct tcmu_dev *udev,
 
 	for_each_sg(data_sg, sg, data_nents, i) {
 		int sg_remaining = sg->length;
-		from = kmap_atomic(sg_page(sg)) + sg->offset;
+		from = sg_map(sg, 0, SG_KMAP_ATOMIC);
+		if (IS_ERR(from))
+			return PTR_ERR(from);
+
 		while (sg_remaining > 0) {
 			if (block_remaining == 0) {
 				block = find_first_zero_bit(udev->data_bitmap,
@@ -301,8 +304,10 @@ static void alloc_and_scatter_data_area(struct tcmu_dev *udev,
 			sg_remaining -= copy_bytes;
 			block_remaining -= copy_bytes;
 		}
-		kunmap_atomic(from - sg->offset);
+		sg_unmap(sg, from, 0, SG_KMAP_ATOMIC);
 	}
+
+	return 0;
 }
 
 static void free_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd)
@@ -311,8 +316,8 @@ static void free_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd)
 		   DATA_BLOCK_BITS);
 }
 
-static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
-	bool bidi)
+static int gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
+	bool bidi)
 {
 	struct se_cmd *se_cmd = cmd->se_cmd;
 	int i, block;
@@ -348,7 +353,10 @@ static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
 
 	for_each_sg(data_sg, sg, data_nents, i) {
 		int sg_remaining = sg->length;
-		to = kmap_atomic(sg_page(sg)) + sg->offset;
+		to = sg_map(sg, 0, SG_KMAP_ATOMIC);
+		if (IS_ERR(to))
+			return PTR_ERR(to);
+
 		while (sg_remaining > 0) {
 			if (block_remaining == 0) {
 				block = find_first_bit(bitmap,
@@ -368,8 +376,10 @@ static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
 			sg_remaining -= copy_bytes;
 			block_remaining -= copy_bytes;
 		}
-		kunmap_atomic(to - sg->offset);
+		sg_unmap(sg, to, 0, SG_KMAP_ATOMIC);
 	}
+
+	return 0;
 }
 
 static inline size_t spc_bitmap_free(unsigned long *bitmap)
@@ -546,8 +556,12 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
 	iov_cnt = 0;
 	copy_to_data_area = (se_cmd->data_direction == DMA_TO_DEVICE
 		|| se_cmd->se_cmd_flags & SCF_BIDI);
-	alloc_and_scatter_data_area(udev, se_cmd->t_data_sg,
-		se_cmd->t_data_nents, &iov, &iov_cnt, copy_to_data_area);
+	if (alloc_and_scatter_data_area(udev, se_cmd->t_data_sg,
+		se_cmd->t_data_nents, &iov, &iov_cnt, copy_to_data_area)) {
+		spin_unlock_irq(&udev->cmdr_lock);
+		return TCM_OUT_OF_RESOURCES;
+	}
+
 	entry->req.iov_cnt = iov_cnt;
 	entry->req.iov_dif_cnt = 0;
 
@@ -555,9 +569,12 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
 	if (se_cmd->se_cmd_flags & SCF_BIDI) {
 		iov_cnt = 0;
 		iov++;
-		alloc_and_scatter_data_area(udev, se_cmd->t_bidi_data_sg,
+		if (alloc_and_scatter_data_area(udev, se_cmd->t_bidi_data_sg,
 			se_cmd->t_bidi_data_nents, &iov, &iov_cnt,
-			false);
+			false)) {
+			spin_unlock_irq(&udev->cmdr_lock);
+			return TCM_OUT_OF_RESOURCES;
+		}
 		entry->req.iov_bidi_cnt = iov_cnt;
 	}
 	/* cmd's data_bitmap is what changed in process */
@@ -637,10 +654,12 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *
 		free_data_area(udev, cmd);
 	} else if (se_cmd->se_cmd_flags & SCF_BIDI) {
 		/* Get Data-In buffer before clean up */
-		gather_data_area(udev, cmd, true);
+		if (gather_data_area(udev, cmd, true))
+			entry->rsp.scsi_status = SAM_STAT_CHECK_CONDITION;
 		free_data_area(udev, cmd);
 	} else if (se_cmd->data_direction == DMA_FROM_DEVICE) {
-		gather_data_area(udev, cmd, false);
+		if (gather_data_area(udev, cmd, false))
+			entry->rsp.scsi_status = SAM_STAT_CHECK_CONDITION;
 		free_data_area(udev, cmd);
	} else if (se_cmd->data_direction == DMA_TO_DEVICE) {
 		free_data_area(udev, cmd);
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 1b0f447..c39ecd9 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -82,8 +82,8 @@ sector_t sbc_get_write_same_sectors(struct se_cmd *cmd);
 void	sbc_dif_generate(struct se_cmd *);
 sense_reason_t	sbc_dif_verify(struct se_cmd *, sector_t, unsigned int,
 				     unsigned int, struct scatterlist *, int);
-void sbc_dif_copy_prot(struct se_cmd *, unsigned int, bool,
-		       struct scatterlist *, int);
+int sbc_dif_copy_prot(struct se_cmd *, unsigned int, bool,
+		      struct scatterlist *, int);
 void	transport_set_vpd_proto_id(struct t10_vpd *, unsigned char *);
 int	transport_set_vpd_assoc(struct t10_vpd *, unsigned char *);
 int	transport_set_vpd_ident_type(struct t10_vpd *, unsigned char *);
This is a single straightforward conversion from kmap to sg_map.
We also create the i915_gem_object_unmap function to common up the unmap code.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Acked-by: Daniel Vetter daniel.vetter@ffwll.ch --- drivers/gpu/drm/i915/i915_gem.c | 27 ++++++++++++++++----------- 1 file changed, 16 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 07e9b27..2c33000 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -2202,6 +2202,15 @@ static void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj) radix_tree_delete(&obj->mm.get_page.radix, iter.index); }
+static void i915_gem_object_unmap(const struct drm_i915_gem_object *obj, + void *ptr) +{ + if (is_vmalloc_addr(ptr)) + vunmap(ptr); + else + sg_unmap(obj->mm.pages->sgl, ptr, 0, SG_KMAP); +} + void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj, enum i915_mm_subclass subclass) { @@ -2229,10 +2238,7 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj, void *ptr;
ptr = ptr_mask_bits(obj->mm.mapping); - if (is_vmalloc_addr(ptr)) - vunmap(ptr); - else - kunmap(kmap_to_page(ptr)); + i915_gem_object_unmap(obj, ptr);
obj->mm.mapping = NULL; } @@ -2499,8 +2505,11 @@ static void *i915_gem_object_map(const struct drm_i915_gem_object *obj, void *addr;
/* A single page can always be kmapped */ - if (n_pages == 1 && type == I915_MAP_WB) - return kmap(sg_page(sgt->sgl)); + if (n_pages == 1 && type == I915_MAP_WB) { + addr = sg_map(sgt->sgl, 0, SG_KMAP); + if (IS_ERR(addr)) + return NULL; + + return addr; + }
if (n_pages > ARRAY_SIZE(stack_pages)) { /* Too big for stack -- allocate temporary array instead */ @@ -2567,11 +2576,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj, goto err_unpin; }
- if (is_vmalloc_addr(ptr)) - vunmap(ptr); - else - kunmap(kmap_to_page(ptr)); - + i915_gem_object_unmap(obj, ptr); ptr = obj->mm.mapping = NULL; }
Conversion of a couple kmap_atomic instances to the sg_map helper function.
However, it looks like there was a bug in the original code: the source scatterlist's offset (t->offset) was passed to ablkcipher_get, which added it to the destination address. This doesn't make a lot of sense, but t->offset is likely always zero anyway. So, this patch cleans that brokenness up.
Also, a change to the error path: if ablkcipher_get failed, everything seemed to proceed as if it hadn't. Setting 'error' should hopefully clear that up.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Herbert Xu herbert@gondor.apana.org.au Cc: "David S. Miller" davem@davemloft.net --- drivers/crypto/hifn_795x.c | 32 +++++++++++++++++++++----------- 1 file changed, 21 insertions(+), 11 deletions(-)
diff --git a/drivers/crypto/hifn_795x.c b/drivers/crypto/hifn_795x.c index e09d405..34b1870 100644 --- a/drivers/crypto/hifn_795x.c +++ b/drivers/crypto/hifn_795x.c @@ -1619,7 +1619,7 @@ static int hifn_start_device(struct hifn_device *dev) return 0; }
-static int ablkcipher_get(void *saddr, unsigned int *srestp, unsigned int offset, +static int ablkcipher_get(void *saddr, unsigned int *srestp, struct scatterlist *dst, unsigned int size, unsigned int *nbytesp) { unsigned int srest = *srestp, nbytes = *nbytesp, copy; @@ -1632,15 +1632,17 @@ static int ablkcipher_get(void *saddr, unsigned int *srestp, unsigned int offset while (size) { copy = min3(srest, dst->length, size);
- daddr = kmap_atomic(sg_page(dst)); - memcpy(daddr + dst->offset + offset, saddr, copy); - kunmap_atomic(daddr); + daddr = sg_map(dst, 0, SG_KMAP_ATOMIC); + if (IS_ERR(daddr)) + return PTR_ERR(daddr); + + memcpy(daddr, saddr, copy); + sg_unmap(dst, daddr, 0, SG_KMAP_ATOMIC);
nbytes -= copy; size -= copy; srest -= copy; saddr += copy; - offset = 0;
pr_debug("%s: copy: %u, size: %u, srest: %u, nbytes: %u.\n", __func__, copy, size, srest, nbytes); @@ -1671,11 +1673,12 @@ static inline void hifn_complete_sa(struct hifn_device *dev, int i)
static void hifn_process_ready(struct ablkcipher_request *req, int error) { + int err; struct hifn_request_context *rctx = ablkcipher_request_ctx(req);
if (rctx->walk.flags & ASYNC_FLAGS_MISALIGNED) { unsigned int nbytes = req->nbytes; - int idx = 0, err; + int idx = 0; struct scatterlist *dst, *t; void *saddr;
@@ -1695,17 +1698,24 @@ static void hifn_process_ready(struct ablkcipher_request *req, int error) continue; }
- saddr = kmap_atomic(sg_page(t)); + saddr = sg_map(t, 0, SG_KMAP_ATOMIC); + if (IS_ERR(saddr)) { + if (!error) + error = PTR_ERR(saddr); + break; + } + + err = ablkcipher_get(saddr, &t->length, + dst, nbytes, &nbytes); + sg_unmap(t, saddr, 0, SG_KMAP_ATOMIC);
- err = ablkcipher_get(saddr, &t->length, t->offset, - dst, nbytes, &nbytes); if (err < 0) { - kunmap_atomic(saddr); + if (!error) + error = err; break; }
idx += err; - kunmap_atomic(saddr); }
hifn_cipher_walk_exit(&rctx->walk);
Very straightforward conversion to the new function in the caam driver and shash library.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Herbert Xu herbert@gondor.apana.org.au Cc: "David S. Miller" davem@davemloft.net --- crypto/shash.c | 9 ++++++--- drivers/crypto/caam/caamalg.c | 8 +++----- 2 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/crypto/shash.c b/crypto/shash.c index 5e31c8d..5914881 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -283,10 +283,13 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc) if (nbytes < min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset)) { void *data;
- data = kmap_atomic(sg_page(sg)); - err = crypto_shash_digest(desc, data + offset, nbytes, + data = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(data)) + return PTR_ERR(data); + + err = crypto_shash_digest(desc, data, nbytes, req->result); - kunmap_atomic(data); + sg_unmap(sg, data, 0, SG_KMAP_ATOMIC); crypto_yield(desc->flags); } else err = crypto_shash_init(desc) ?: diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index 398807d..62d2f5d 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -89,7 +89,6 @@ static void dbg_dump_sg(const char *level, const char *prefix_str, struct scatterlist *sg, size_t tlen, bool ascii) { struct scatterlist *it; - void *it_page; size_t len; void *buf;
@@ -98,19 +97,18 @@ static void dbg_dump_sg(const char *level, const char *prefix_str, * make sure the scatterlist's page * has a valid virtual memory mapping */ - it_page = kmap_atomic(sg_page(it)); - if (unlikely(!it_page)) { + buf = sg_map(it, 0, SG_KMAP_ATOMIC); + if (IS_ERR(buf)) { printk(KERN_ERR "dbg_dump_sg: kmap failed\n"); return; }
- buf = it_page + it->offset; len = min_t(size_t, tlen, it->length); print_hex_dump(level, prefix_str, prefix_type, rowsize, groupsize, buf, len, ascii); tlen -= len;
- kunmap_atomic(it_page); + sg_unmap(it, buf, 0, SG_KMAP_ATOMIC); } } #endif
On Tue, Apr 25, 2017 at 12:20:54PM -0600, Logan Gunthorpe wrote:
Very straightforward conversion to the new function in the caam driver and shash library.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Herbert Xu herbert@gondor.apana.org.au Cc: "David S. Miller" davem@davemloft.net
crypto/shash.c | 9 ++++++--- drivers/crypto/caam/caamalg.c | 8 +++----- 2 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/crypto/shash.c b/crypto/shash.c index 5e31c8d..5914881 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -283,10 +283,13 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc) if (nbytes < min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset)) { void *data;
data = kmap_atomic(sg_page(sg));
err = crypto_shash_digest(desc, data + offset, nbytes,
data = sg_map(sg, 0, SG_KMAP_ATOMIC);
if (IS_ERR(data))
return PTR_ERR(data);
err = crypto_shash_digest(desc, data, nbytes, req->result);
kunmap_atomic(data);
sg_unmap(sg, data, 0, SG_KMAP_ATOMIC);
crypto_yield(desc->flags); } else err = crypto_shash_init(desc) ?:
Nack. This is an optimisation for the special case of a single SG list entry. In fact in the common case the kmap_atomic should disappear altogether in the no-highmem case. So replacing it with sg_map is not acceptable.
Cheers,
On 26/04/17 09:56 PM, Herbert Xu wrote:
On Tue, Apr 25, 2017 at 12:20:54PM -0600, Logan Gunthorpe wrote:
Very straightforward conversion to the new function in the caam driver and shash library.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Herbert Xu herbert@gondor.apana.org.au Cc: "David S. Miller" davem@davemloft.net
crypto/shash.c | 9 ++++++--- drivers/crypto/caam/caamalg.c | 8 +++----- 2 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/crypto/shash.c b/crypto/shash.c index 5e31c8d..5914881 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -283,10 +283,13 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc) if (nbytes < min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset)) { void *data;
data = kmap_atomic(sg_page(sg));
err = crypto_shash_digest(desc, data + offset, nbytes,
data = sg_map(sg, 0, SG_KMAP_ATOMIC);
if (IS_ERR(data))
return PTR_ERR(data);
err = crypto_shash_digest(desc, data, nbytes, req->result);
kunmap_atomic(data);
sg_unmap(sg, data, 0, SG_KMAP_ATOMIC);
crypto_yield(desc->flags); } else err = crypto_shash_init(desc) ?:
Nack. This is an optimisation for the special case of a single SG list entry. In fact in the common case the kmap_atomic should disappear altogether in the no-highmem case. So replacing it with sg_map is not acceptable.
What you seem to have missed is that sg_map is just a thin wrapper around kmap_atomic. Perhaps with a future check for a mappable page. This change should have zero impact on performance.
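In other words, ignoring the flags and the future mappability check, it boils down to something like this (a sketch inferred from the call sites, not the verbatim patch 1 implementation):

static inline void *sg_map(struct scatterlist *sg, size_t offset, int flags)
{
	unsigned int off = sg->offset + offset;
	struct page *pg = nth_page(sg_page(sg), off >> PAGE_SHIFT);

	off &= ~PAGE_MASK;

	if (flags & SG_KMAP_ATOMIC)
		return kmap_atomic(pg) + off;

	return kmap(pg) + off;
}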
Logan
On Thu, Apr 27, 2017 at 09:45:57AM -0600, Logan Gunthorpe wrote:
On 26/04/17 09:56 PM, Herbert Xu wrote:
On Tue, Apr 25, 2017 at 12:20:54PM -0600, Logan Gunthorpe wrote:
Very straightforward conversion to the new function in the caam driver and shash library.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Herbert Xu herbert@gondor.apana.org.au Cc: "David S. Miller" davem@davemloft.net
crypto/shash.c | 9 ++++++--- drivers/crypto/caam/caamalg.c | 8 +++----- 2 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/crypto/shash.c b/crypto/shash.c index 5e31c8d..5914881 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -283,10 +283,13 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc) if (nbytes < min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset)) { void *data;
data = kmap_atomic(sg_page(sg));
err = crypto_shash_digest(desc, data + offset, nbytes,
data = sg_map(sg, 0, SG_KMAP_ATOMIC);
if (IS_ERR(data))
return PTR_ERR(data);
err = crypto_shash_digest(desc, data, nbytes, req->result);
kunmap_atomic(data);
sg_unmap(sg, data, 0, SG_KMAP_ATOMIC);
crypto_yield(desc->flags); } else err = crypto_shash_init(desc) ?:
Nack. This is an optimisation for the special case of a single SG list entry. In fact in the common case the kmap_atomic should disappear altogether in the no-highmem case. So replacing it with sg_map is not acceptable.
What you seem to have missed is that sg_map is just a thin wrapper around kmap_atomic. Perhaps with a future check for a mappable page. This change should have zero impact on performance.
You are right. Indeed the existing code looks buggy as they don't take sg->offset into account when doing the kmap. Could you send me some patches that fix these problems first so that they can be easily backported?
Thanks,
On 28/04/17 12:30 AM, Herbert Xu wrote:
You are right. Indeed the existing code looks buggy as they don't take sg->offset into account when doing the kmap. Could you send me some patches that fix these problems first so that they can be easily backported?
Ok, I think the only buggy one in crypto is hifn_795x. Shash and caam both do have the sg->offset accounted for. I'll send a patch for the buggy one shortly.
Logan
On Fri, Apr 28, 2017 at 10:53:45AM -0600, Logan Gunthorpe wrote:
On 28/04/17 12:30 AM, Herbert Xu wrote:
You are right. Indeed the existing code looks buggy as they don't take sg->offset into account when doing the kmap. Could you send me some patches that fix these problems first so that they can be easily backported?
Ok, I think the only buggy one in crypto is hifn_795x. Shash and caam both do have the sg->offset accounted for. I'll send a patch for the buggy one shortly.
I think they're all buggy when sg->offset is greater than PAGE_SIZE.
Thanks,
On 28/04/17 11:51 AM, Herbert Xu wrote:
On Fri, Apr 28, 2017 at 10:53:45AM -0600, Logan Gunthorpe wrote:
On 28/04/17 12:30 AM, Herbert Xu wrote:
You are right. Indeed the existing code looks buggy as they don't take sg->offset into account when doing the kmap. Could you send me some patches that fix these problems first so that they can be easily backported?
Ok, I think the only buggy one in crypto is hifn_795x. Shash and caam both do have the sg->offset accounted for. I'll send a patch for the buggy one shortly.
I think they're all buggy when sg->offset is greater than PAGE_SIZE.
Yes, technically. But that's a _very_ common mistake. Nearly every case I looked at did not take that into account. I don't think sgs that point to more than one contiguous page are all that common.
Fixing all those cases without making a common function is a waste of time IMO.
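To illustrate, here's the pattern that keeps getting open-coded (not taken from any one driver) next to the offset-safe version every call site would otherwise need:

	/* the common pattern; broken whenever sg->offset >= PAGE_SIZE */
	addr = kmap_atomic(sg_page(sg)) + sg->offset;

	/* the offset-safe equivalent the helper encapsulates */
	addr = kmap_atomic(nth_page(sg_page(sg), sg->offset >> PAGE_SHIFT))
		+ (sg->offset & ~PAGE_MASK);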
Logan
Very straightforward conversion to the new function in all four spots.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Alasdair Kergon agk@redhat.com Cc: Mike Snitzer snitzer@redhat.com --- drivers/md/dm-crypt.c | 39 ++++++++++++++++++++++++++------------- 1 file changed, 26 insertions(+), 13 deletions(-)
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index 8dbecf1..841f1fc 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -635,9 +635,12 @@ static int crypt_iv_lmk_gen(struct crypt_config *cc, u8 *iv,
if (bio_data_dir(dmreq->ctx->bio_in) == WRITE) { sg = crypt_get_sg_data(cc, dmreq->sg_in); - src = kmap_atomic(sg_page(sg)); - r = crypt_iv_lmk_one(cc, iv, dmreq, src + sg->offset); - kunmap_atomic(src); + src = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(src)) + return PTR_ERR(src); + + r = crypt_iv_lmk_one(cc, iv, dmreq, src); + sg_unmap(sg, src, 0, SG_KMAP_ATOMIC); } else memset(iv, 0, cc->iv_size);
@@ -655,14 +658,18 @@ static int crypt_iv_lmk_post(struct crypt_config *cc, u8 *iv, return 0;
sg = crypt_get_sg_data(cc, dmreq->sg_out); - dst = kmap_atomic(sg_page(sg)); - r = crypt_iv_lmk_one(cc, iv, dmreq, dst + sg->offset); + dst = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(dst)) + return PTR_ERR(dst); + + r = crypt_iv_lmk_one(cc, iv, dmreq, dst);
/* Tweak the first block of plaintext sector */ if (!r) - crypto_xor(dst + sg->offset, iv, cc->iv_size); + crypto_xor(dst, iv, cc->iv_size); + + sg_unmap(sg, dst, 0, SG_KMAP_ATOMIC);
- kunmap_atomic(dst); return r; }
@@ -786,9 +793,12 @@ static int crypt_iv_tcw_gen(struct crypt_config *cc, u8 *iv, /* Remove whitening from ciphertext */ if (bio_data_dir(dmreq->ctx->bio_in) != WRITE) { sg = crypt_get_sg_data(cc, dmreq->sg_in); - src = kmap_atomic(sg_page(sg)); - r = crypt_iv_tcw_whitening(cc, dmreq, src + sg->offset); - kunmap_atomic(src); + src = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(src)) + return PTR_ERR(src); + + r = crypt_iv_tcw_whitening(cc, dmreq, src); + sg_unmap(sg, src, 0, SG_KMAP_ATOMIC); }
/* Calculate IV */ @@ -812,9 +822,12 @@ static int crypt_iv_tcw_post(struct crypt_config *cc, u8 *iv,
/* Apply whitening on ciphertext */ sg = crypt_get_sg_data(cc, dmreq->sg_out); - dst = kmap_atomic(sg_page(sg)); - r = crypt_iv_tcw_whitening(cc, dmreq, dst + sg->offset); - kunmap_atomic(dst); + dst = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(dst)) + return PTR_ERR(dst); + + r = crypt_iv_tcw_whitening(cc, dmreq, dst); + sg_unmap(sg, dst, 0, SG_KMAP_ATOMIC);
return r; }
Straightforward conversion to the new function.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Acked-by: David Kershner david.kershner@unisys.com --- drivers/staging/unisys/visorhba/visorhba_main.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/staging/unisys/visorhba/visorhba_main.c b/drivers/staging/unisys/visorhba/visorhba_main.c index d372115..c77426c 100644 --- a/drivers/staging/unisys/visorhba/visorhba_main.c +++ b/drivers/staging/unisys/visorhba/visorhba_main.c @@ -843,7 +843,6 @@ do_scsi_nolinuxstat(struct uiscmdrsp *cmdrsp, struct scsi_cmnd *scsicmd) struct scatterlist *sg; unsigned int i; char *this_page; - char *this_page_orig; int bufind = 0; struct visordisk_info *vdisk; struct visorhba_devdata *devdata; @@ -870,11 +869,14 @@ do_scsi_nolinuxstat(struct uiscmdrsp *cmdrsp, struct scsi_cmnd *scsicmd)
sg = scsi_sglist(scsicmd); for (i = 0; i < scsi_sg_count(scsicmd); i++) { - this_page_orig = kmap_atomic(sg_page(sg + i)); - this_page = (void *)((unsigned long)this_page_orig | - sg[i].offset); + this_page = sg_map(sg + i, 0, SG_KMAP_ATOMIC); + if (IS_ERR(this_page)) { + scsicmd->result = DID_ERROR << 16; + return; + } + memcpy(this_page, buf + bufind, sg[i].length); - kunmap_atomic(this_page_orig); + sg_unmap(sg + i, this_page, 0, SG_KMAP_ATOMIC); } } else { devdata = (struct visorhba_devdata *)scsidev->host->hostdata;
Straightforward conversion, except there's no error path, so we make use of SG_MAP_MUST_NOT_FAIL, which may BUG_ON in certain cases in the future.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Santosh Shilimkar santosh.shilimkar@oracle.com Cc: "David S. Miller" davem@davemloft.net --- net/rds/ib_recv.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c index e10624a..c665689 100644 --- a/net/rds/ib_recv.c +++ b/net/rds/ib_recv.c @@ -800,10 +800,10 @@ static void rds_ib_cong_recv(struct rds_connection *conn,
to_copy = min(RDS_FRAG_SIZE - frag_off, PAGE_SIZE - map_off); BUG_ON(to_copy & 7); /* Must be 64bit aligned. */ + addr = sg_map(&frag->f_sg, 0, + SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
- addr = kmap_atomic(sg_page(&frag->f_sg)); - - src = addr + frag->f_sg.offset + frag_off; + src = addr + frag_off; dst = (void *)map->m_page_addrs[map_page] + map_off; for (k = 0; k < to_copy; k += 8) { /* Record ports that became uncongested, ie @@ -811,7 +811,7 @@ static void rds_ib_cong_recv(struct rds_connection *conn, uncongested |= ~(*src) & *dst; *dst++ = *src++; } - kunmap_atomic(addr); + sg_unmap(&frag->f_sg, addr, 0, SG_KMAP_ATOMIC);
copied += to_copy;
Very straightforward conversion of three scsi drivers.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Brian King brking@us.ibm.com Cc: Artur Paszkiewicz artur.paszkiewicz@intel.com --- drivers/scsi/ipr.c | 27 ++++++++++++++------------- drivers/scsi/isci/request.c | 42 +++++++++++++++++++++++++----------------- drivers/scsi/pmcraid.c | 19 ++++++++++++------- 3 files changed, 51 insertions(+), 37 deletions(-)
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c index b0c68d2..b2324e1 100644 --- a/drivers/scsi/ipr.c +++ b/drivers/scsi/ipr.c @@ -3895,7 +3895,7 @@ static void ipr_free_ucode_buffer(struct ipr_sglist *sglist) static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist, u8 *buffer, u32 len) { - int bsize_elem, i, result = 0; + int bsize_elem, i; struct scatterlist *scatterlist; void *kaddr;
@@ -3905,32 +3905,33 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist, scatterlist = sglist->scatterlist;
for (i = 0; i < (len / bsize_elem); i++, buffer += bsize_elem) { - struct page *page = sg_page(&scatterlist[i]); + kaddr = sg_map(&scatterlist[i], 0, SG_KMAP); + if (IS_ERR(kaddr)) { + ipr_trace; + return PTR_ERR(kaddr); + }
- kaddr = kmap(page); memcpy(kaddr, buffer, bsize_elem); - kunmap(page); + sg_unmap(&scatterlist[i], kaddr, 0, SG_KMAP);
scatterlist[i].length = bsize_elem; - - if (result != 0) { - ipr_trace; - return result; - } }
if (len % bsize_elem) { - struct page *page = sg_page(&scatterlist[i]); + kaddr = sg_map(&scatterlist[i], 0, SG_KMAP); + if (IS_ERR(kaddr)) { + ipr_trace; + return PTR_ERR(kaddr); + }
- kaddr = kmap(page); memcpy(kaddr, buffer, len % bsize_elem); - kunmap(page); + sg_unmap(&scatterlist[i], kaddr, 0, SG_KMAP);
scatterlist[i].length = len % bsize_elem; }
sglist->buffer_len = len; - return result; + return 0; }
/** diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c index 47f66e9..6f5521b 100644 --- a/drivers/scsi/isci/request.c +++ b/drivers/scsi/isci/request.c @@ -1424,12 +1424,14 @@ sci_stp_request_pio_data_in_copy_data_buffer(struct isci_stp_request *stp_req, sg = task->scatter;
while (total_len > 0) { - struct page *page = sg_page(sg); copy_len = min_t(int, total_len, sg_dma_len(sg)); - kaddr = kmap_atomic(page); - memcpy(kaddr + sg->offset, src_addr, copy_len); - kunmap_atomic(kaddr); + kaddr = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(kaddr)) + return SCI_FAILURE; + + memcpy(kaddr, src_addr, copy_len); + sg_unmap(sg, kaddr, 0, SG_KMAP_ATOMIC); + total_len -= copy_len; src_addr += copy_len; sg = sg_next(sg); @@ -1771,14 +1773,16 @@ sci_io_request_frame_handler(struct isci_request *ireq, case SCI_REQ_SMP_WAIT_RESP: { struct sas_task *task = isci_request_access_task(ireq); struct scatterlist *sg = &task->smp_task.smp_resp; - void *frame_header, *kaddr; + void *frame_header; u8 *rsp;
sci_unsolicited_frame_control_get_header(&ihost->uf_control, frame_index, &frame_header); - kaddr = kmap_atomic(sg_page(sg)); - rsp = kaddr + sg->offset; + rsp = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(rsp)) + return SCI_FAILURE; + sci_swab32_cpy(rsp, frame_header, 1);
if (rsp[0] == SMP_RESPONSE) { @@ -1814,7 +1818,7 @@ sci_io_request_frame_handler(struct isci_request *ireq, ireq->sci_status = SCI_FAILURE_CONTROLLER_SPECIFIC_IO_ERR; sci_change_state(&ireq->sm, SCI_REQ_COMPLETED); } - kunmap_atomic(kaddr); + sg_unmap(sg, rsp, 0, SG_KMAP_ATOMIC);
sci_controller_release_frame(ihost, frame_index);
@@ -2919,15 +2923,18 @@ static void isci_request_io_request_complete(struct isci_host *ihost, case SAS_PROTOCOL_SMP: { struct scatterlist *sg = &task->smp_task.smp_req; struct smp_req *smp_req; - void *kaddr;
dma_unmap_sg(&ihost->pdev->dev, sg, 1, DMA_TO_DEVICE);
/* need to swab it back in case the command buffer is re-used */ - kaddr = kmap_atomic(sg_page(sg)); - smp_req = kaddr + sg->offset; + smp_req = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(smp_req)) { + status = SAS_ABORTED_TASK; + break; + } + sci_swab32_cpy(smp_req, smp_req, sg->length / sizeof(u32)); - kunmap_atomic(kaddr); + sg_unmap(sg, smp_req, 0, SG_KMAP_ATOMIC); break; } default: @@ -3190,12 +3197,13 @@ sci_io_request_construct_smp(struct device *dev, struct scu_task_context *task_context; struct isci_port *iport; struct smp_req *smp_req; - void *kaddr; u8 req_len; u32 cmd;
- kaddr = kmap_atomic(sg_page(sg)); - smp_req = kaddr + sg->offset; + smp_req = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(smp_req)) + return SCI_FAILURE; + /* * Look at the SMP requests' header fields; for certain SAS 1.x SMP * functions under SAS 2.0, a zero request length really indicates @@ -3220,7 +3228,7 @@ sci_io_request_construct_smp(struct device *dev, req_len = smp_req->req_len; sci_swab32_cpy(smp_req, smp_req, sg->length / sizeof(u32)); cmd = *(u32 *) smp_req; - kunmap_atomic(kaddr); + sg_unmap(sg, smp_req, 0, SG_KMAP_ATOMIC);
if (!dma_map_sg(dev, sg, 1, DMA_TO_DEVICE)) return SCI_FAILURE; diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c index 49e70a3..e0d041a 100644 --- a/drivers/scsi/pmcraid.c +++ b/drivers/scsi/pmcraid.c @@ -3342,9 +3342,12 @@ static int pmcraid_copy_sglist( scatterlist = sglist->scatterlist;
for (i = 0; i < (len / bsize_elem); i++, buffer += bsize_elem) { - struct page *page = sg_page(&scatterlist[i]); + kaddr = sg_map(&scatterlist[i], 0, SG_KMAP); + if (IS_ERR(kaddr)) { + pmcraid_err("failed to copy user data into sg list\n"); + return PTR_ERR(kaddr); + }
- kaddr = kmap(page); if (direction == DMA_TO_DEVICE) rc = __copy_from_user(kaddr, (void *)buffer, @@ -3352,7 +3355,7 @@ static int pmcraid_copy_sglist( else rc = __copy_to_user((void *)buffer, kaddr, bsize_elem);
- kunmap(page); + sg_unmap(&scatterlist[i], kaddr, 0, SG_KMAP);
if (rc) { pmcraid_err("failed to copy user data into sg list\n"); @@ -3363,9 +3366,11 @@ static int pmcraid_copy_sglist( }
if (len % bsize_elem) { - struct page *page = sg_page(&scatterlist[i]); - - kaddr = kmap(page); + kaddr = sg_map(&scatterlist[i], 0, SG_KMAP); + if (IS_ERR(kaddr)) { + pmcraid_err("failed to copy user data into sg list\n"); + return PTR_ERR(kaddr); + }
if (direction == DMA_TO_DEVICE) rc = __copy_from_user(kaddr, @@ -3376,7 +3381,7 @@ static int pmcraid_copy_sglist( kaddr, len % bsize_elem);
- kunmap(page); + sg_unmap(&scatterlist[i], kaddr, 0, SG_KMAP);
scatterlist[i].length = len % bsize_elem; }
Very straightforward conversion of three scsi drivers.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Achim Leubner achim_leubner@adaptec.com Cc: John Garry john.garry@huawei.com --- drivers/scsi/gdth.c | 9 +++++++-- drivers/scsi/hisi_sas/hisi_sas_v1_hw.c | 14 +++++++++----- drivers/scsi/hisi_sas/hisi_sas_v2_hw.c | 13 +++++++++---- drivers/scsi/mvsas/mv_sas.c | 10 +++++----- 4 files changed, 30 insertions(+), 16 deletions(-)
diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c index d020a13..c70248a2 100644 --- a/drivers/scsi/gdth.c +++ b/drivers/scsi/gdth.c @@ -2301,10 +2301,15 @@ static void gdth_copy_internal_data(gdth_ha_str *ha, Scsi_Cmnd *scp, return; } local_irq_save(flags); - address = kmap_atomic(sg_page(sl)) + sl->offset; + address = sg_map(sl, 0, SG_KMAP_ATOMIC); + if (IS_ERR(address)) { + scp->result = DID_ERROR << 16; + return; + } + memcpy(address, buffer, cpnow); flush_dcache_page(sg_page(sl)); - kunmap_atomic(address); + sg_unmap(sl, address, 0, SG_KMAP_ATOMIC); local_irq_restore(flags); if (cpsum == cpcount) break; diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c index fc1c1b2..b3953e3 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c +++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c @@ -1381,18 +1381,22 @@ static int slot_complete_v1_hw(struct hisi_hba *hisi_hba, void *to; struct scatterlist *sg_resp = &task->smp_task.smp_resp;
- ts->stat = SAM_STAT_GOOD; - to = kmap_atomic(sg_page(sg_resp)); + to = sg_map(sg_resp, 0, SG_KMAP_ATOMIC); + if (IS_ERR(to)) { + dev_err(dev, "slot complete: error mapping memory"); + ts->stat = SAS_SG_ERR; + break; + }
+ ts->stat = SAM_STAT_GOOD; dma_unmap_sg(dev, &task->smp_task.smp_resp, 1, DMA_FROM_DEVICE); dma_unmap_sg(dev, &task->smp_task.smp_req, 1, DMA_TO_DEVICE); - memcpy(to + sg_resp->offset, - slot->status_buffer + + memcpy(to, slot->status_buffer + sizeof(struct hisi_sas_err_record), sg_dma_len(sg_resp)); - kunmap_atomic(to); + sg_unmap(sg_resp, to, 0, SG_KMAP_ATOMIC); break; } case SAS_PROTOCOL_SATA: diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c index e241921..3e674a4 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c +++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c @@ -2307,18 +2307,23 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot) struct scatterlist *sg_resp = &task->smp_task.smp_resp; void *to;
+ to = sg_map(sg_resp, 0, SG_KMAP_ATOMIC); + if (IS_ERR(to)) { + dev_err(dev, "slot complete: error mapping memory"); + ts->stat = SAS_SG_ERR; + break; + } + ts->stat = SAM_STAT_GOOD; - to = kmap_atomic(sg_page(sg_resp));
dma_unmap_sg(dev, &task->smp_task.smp_resp, 1, DMA_FROM_DEVICE); dma_unmap_sg(dev, &task->smp_task.smp_req, 1, DMA_TO_DEVICE); - memcpy(to + sg_resp->offset, - slot->status_buffer + + memcpy(to, slot->status_buffer + sizeof(struct hisi_sas_err_record), sg_dma_len(sg_resp)); - kunmap_atomic(to); + sg_unmap(sg_resp, to, 0, SG_KMAP_ATOMIC); break; } case SAS_PROTOCOL_SATA: diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c index c7cc803..a72e0ce 100644 --- a/drivers/scsi/mvsas/mv_sas.c +++ b/drivers/scsi/mvsas/mv_sas.c @@ -1798,11 +1798,15 @@ int mvs_slot_complete(struct mvs_info *mvi, u32 rx_desc, u32 flags) case SAS_PROTOCOL_SMP: { struct scatterlist *sg_resp = &task->smp_task.smp_resp; tstat->stat = SAM_STAT_GOOD; - to = kmap_atomic(sg_page(sg_resp)); - memcpy(to + sg_resp->offset, - slot->response + sizeof(struct mvs_err_info), - sg_dma_len(sg_resp)); - kunmap_atomic(to); + to = sg_map(sg_resp, 0, SG_KMAP_ATOMIC); + if (IS_ERR(to)) { + tstat->stat = SAS_SG_ERR; + break; + } + + memcpy(to, + slot->response + sizeof(struct mvs_err_info), + sg_dma_len(sg_resp)); + sg_unmap(sg_resp, to, 0, SG_KMAP_ATOMIC); break; }
Very straightforward conversion of three scsi drivers.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Adaptec OEM Raid Solutions aacraid@adaptec.com Cc: Kashyap Desai kashyap.desai@broadcom.com Cc: Sumit Saxena sumit.saxena@broadcom.com Cc: Shivasharan S shivasharan.srikanteshwara@broadcom.com --- drivers/scsi/arcmsr/arcmsr_hba.c | 16 ++++++++++++---- drivers/scsi/ips.c | 8 ++++---- drivers/scsi/megaraid.c | 9 +++++++-- 3 files changed, 23 insertions(+), 10 deletions(-)
diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c index af032c4..8c2de17 100644 --- a/drivers/scsi/arcmsr/arcmsr_hba.c +++ b/drivers/scsi/arcmsr/arcmsr_hba.c @@ -2306,7 +2306,10 @@ static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb,
use_sg = scsi_sg_count(cmd); sg = scsi_sglist(cmd); - buffer = kmap_atomic(sg_page(sg)) + sg->offset; + buffer = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(buffer)) + return ARCMSR_MESSAGE_FAIL; + if (use_sg > 1) { retvalue = ARCMSR_MESSAGE_FAIL; goto message_out; @@ -2539,7 +2542,7 @@ static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb, message_out: if (use_sg) { struct scatterlist *sg = scsi_sglist(cmd); - kunmap_atomic(buffer - sg->offset); + sg_unmap(sg, buffer, 0, SG_KMAP_ATOMIC); } return retvalue; } @@ -2590,11 +2593,16 @@ static void arcmsr_handle_virtual_command(struct AdapterControlBlock *acb, strncpy(&inqdata[32], "R001", 4); /* Product Revision */
sg = scsi_sglist(cmd); - buffer = kmap_atomic(sg_page(sg)) + sg->offset; + buffer = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(buffer)) { + cmd->result = (DID_ERROR << 16); + cmd->scsi_done(cmd); + return; + }
memcpy(buffer, inqdata, sizeof(inqdata)); sg = scsi_sglist(cmd); - kunmap_atomic(buffer - sg->offset); + sg_unmap(sg, buffer, 0, SG_KMAP_ATOMIC);
cmd->scsi_done(cmd); } diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c index 3419e1b..6e91729 100644 --- a/drivers/scsi/ips.c +++ b/drivers/scsi/ips.c @@ -1506,14 +1506,14 @@ static int ips_is_passthru(struct scsi_cmnd *SC) /* kmap_atomic() ensures addressability of the user buffer.*/ /* local_irq_save() protects the KM_IRQ0 address slot. */ local_irq_save(flags); - buffer = kmap_atomic(sg_page(sg)) + sg->offset; - if (buffer && buffer[0] == 'C' && buffer[1] == 'O' && + buffer = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (!IS_ERR(buffer) && buffer[0] == 'C' && buffer[1] == 'O' && buffer[2] == 'P' && buffer[3] == 'P') { - kunmap_atomic(buffer - sg->offset); + sg_unmap(sg, buffer, 0, SG_KMAP_ATOMIC); local_irq_restore(flags); return 1; } - kunmap_atomic(buffer - sg->offset); + sg_unmap(sg, buffer, 0, SG_KMAP_ATOMIC); local_irq_restore(flags); } return 0; diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c index 3c63c29..f8aee59 100644 --- a/drivers/scsi/megaraid.c +++ b/drivers/scsi/megaraid.c @@ -663,10 +663,15 @@ mega_build_cmd(adapter_t *adapter, Scsi_Cmnd *cmd, int *busy) struct scatterlist *sg;
sg = scsi_sglist(cmd); - buf = kmap_atomic(sg_page(sg)) + sg->offset; + buf = sg_map(sg, 0, SG_KMAP_ATOMIC); + if (IS_ERR(buf)) { + cmd->result = (DID_ERROR << 16); + cmd->scsi_done(cmd); + return NULL; + }
memset(buf, 0, cmd->cmnd[4]); - kunmap_atomic(buf - sg->offset); + sg_unmap(sg, buf, 0, SG_KMAP_ATOMIC);
cmd->result = (DID_OK << 16); cmd->scsi_done(cmd);
These two drivers appear to duplicate the functionality of sg_copy_buffer. So we clean them up to use the common code.
This helps us remove a couple of instances that would otherwise be slightly tricky sg_map usages.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Johannes Thumshirn jth@kernel.org --- drivers/scsi/csiostor/csio_scsi.c | 54 +++------------------------------------ drivers/scsi/libfc/fc_libfc.c | 49 ++++++++--------------------------- 2 files changed, 14 insertions(+), 89 deletions(-)
diff --git a/drivers/scsi/csiostor/csio_scsi.c b/drivers/scsi/csiostor/csio_scsi.c index a1ff75f..bd9d062 100644 --- a/drivers/scsi/csiostor/csio_scsi.c +++ b/drivers/scsi/csiostor/csio_scsi.c @@ -1489,60 +1489,14 @@ static inline uint32_t csio_scsi_copy_to_sgl(struct csio_hw *hw, struct csio_ioreq *req) { struct scsi_cmnd *scmnd = (struct scsi_cmnd *)csio_scsi_cmnd(req); - struct scatterlist *sg; - uint32_t bytes_left; - uint32_t bytes_copy; - uint32_t buf_off = 0; - uint32_t start_off = 0; - uint32_t sg_off = 0; - void *sg_addr; - void *buf_addr; struct csio_dma_buf *dma_buf; + size_t copied;
- bytes_left = scsi_bufflen(scmnd); - sg = scsi_sglist(scmnd); dma_buf = (struct csio_dma_buf *)csio_list_next(&req->gen_list); + copied = sg_copy_from_buffer(scsi_sglist(scmnd), scsi_sg_count(scmnd), + dma_buf->vaddr, scsi_bufflen(scmnd));
- /* Copy data from driver buffer to SGs of SCSI CMD */ - while (bytes_left > 0 && sg && dma_buf) { - if (buf_off >= dma_buf->len) { - buf_off = 0; - dma_buf = (struct csio_dma_buf *) - csio_list_next(dma_buf); - continue; - } - - if (start_off >= sg->length) { - start_off -= sg->length; - sg = sg_next(sg); - continue; - } - - buf_addr = dma_buf->vaddr + buf_off; - sg_off = sg->offset + start_off; - bytes_copy = min((dma_buf->len - buf_off), - sg->length - start_off); - bytes_copy = min((uint32_t)(PAGE_SIZE - (sg_off & ~PAGE_MASK)), - bytes_copy); - - sg_addr = kmap_atomic(sg_page(sg) + (sg_off >> PAGE_SHIFT)); - if (!sg_addr) { - csio_err(hw, "failed to kmap sg:%p of ioreq:%p\n", - sg, req); - break; - } - - csio_dbg(hw, "copy_to_sgl:sg_addr %p sg_off %d buf %p len %d\n", - sg_addr, sg_off, buf_addr, bytes_copy); - memcpy(sg_addr + (sg_off & ~PAGE_MASK), buf_addr, bytes_copy); - kunmap_atomic(sg_addr); - - start_off += bytes_copy; - buf_off += bytes_copy; - bytes_left -= bytes_copy; - } - - if (bytes_left > 0) + if (copied != scsi_bufflen(scmnd)) return DID_ERROR; else return DID_OK; diff --git a/drivers/scsi/libfc/fc_libfc.c b/drivers/scsi/libfc/fc_libfc.c index d623d08..ce0805a 100644 --- a/drivers/scsi/libfc/fc_libfc.c +++ b/drivers/scsi/libfc/fc_libfc.c @@ -113,45 +113,16 @@ u32 fc_copy_buffer_to_sglist(void *buf, size_t len, u32 *nents, size_t *offset, u32 *crc) { - size_t remaining = len; - u32 copy_len = 0; - - while (remaining > 0 && sg) { - size_t off, sg_bytes; - void *page_addr; - - if (*offset >= sg->length) { - /* - * Check for end and drop resources - * from the last iteration. - */ - if (!(*nents)) - break; - --(*nents); - *offset -= sg->length; - sg = sg_next(sg); - continue; - } - sg_bytes = min(remaining, sg->length - *offset); - - /* - * The scatterlist item may be bigger than PAGE_SIZE, - * but we are limited to mapping PAGE_SIZE at a time. - */ - off = *offset + sg->offset; - sg_bytes = min(sg_bytes, - (size_t)(PAGE_SIZE - (off & ~PAGE_MASK))); - page_addr = kmap_atomic(sg_page(sg) + (off >> PAGE_SHIFT)); - if (crc) - *crc = crc32(*crc, buf, sg_bytes); - memcpy((char *)page_addr + (off & ~PAGE_MASK), buf, sg_bytes); - kunmap_atomic(page_addr); - buf += sg_bytes; - *offset += sg_bytes; - remaining -= sg_bytes; - copy_len += sg_bytes; - } - return copy_len; + size_t copied; + + copied = sg_pcopy_from_buffer(sg, sg_nents(sg), + buf, len, *offset); + + *offset += copied; + if (crc) + *crc = crc32(*crc, buf, copied); + + return copied; }
/**
Straightforward conversion to the new helper, except due to the lack of an error path, we have to use SG_MAP_MUST_NOT_FAIL, which may BUG_ON in certain cases in the future.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Boris Ostrovsky boris.ostrovsky@oracle.com Cc: Juergen Gross jgross@suse.com Cc: Konrad Rzeszutek Wilk konrad.wilk@oracle.com Cc: "Roger Pau Monné" roger.pau@citrix.com --- drivers/block/xen-blkfront.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index 3945963..ed62175 100644 --- a/drivers/block/xen-blkfront.c +++ b/drivers/block/xen-blkfront.c @@ -816,8 +816,9 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri BUG_ON(sg->offset + sg->length > PAGE_SIZE);
if (setup.need_copy) { - setup.bvec_off = sg->offset; - setup.bvec_data = kmap_atomic(sg_page(sg)); + setup.bvec_off = 0; + setup.bvec_data = sg_map(sg, 0, SG_KMAP_ATOMIC | + SG_MAP_MUST_NOT_FAIL); }
gnttab_foreach_grant_in_range(sg_page(sg), @@ -827,7 +828,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri &setup);
if (setup.need_copy) - kunmap_atomic(setup.bvec_data); + sg_unmap(sg, setup.bvec_data, 0, SG_KMAP_ATOMIC); } if (setup.segments) kunmap_atomic(setup.segments); @@ -1053,7 +1054,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset) case XEN_SCSI_DISK5_MAJOR: case XEN_SCSI_DISK6_MAJOR: case XEN_SCSI_DISK7_MAJOR: - *offset = (*minor / PARTS_PER_DISK) + + *offset = (*minor / PARTS_PER_DISK) + ((major - XEN_SCSI_DISK1_MAJOR + 1) * 16) + EMULATED_SD_DISK_NAME_OFFSET; *minor = *minor + @@ -1068,7 +1069,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset) case XEN_SCSI_DISK13_MAJOR: case XEN_SCSI_DISK14_MAJOR: case XEN_SCSI_DISK15_MAJOR: - *offset = (*minor / PARTS_PER_DISK) + + *offset = (*minor / PARTS_PER_DISK) + ((major - XEN_SCSI_DISK8_MAJOR + 8) * 16) + EMULATED_SD_DISK_NAME_OFFSET; *minor = *minor + @@ -1119,7 +1120,7 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity, if (!VDEV_IS_EXTENDED(info->vdevice)) { err = xen_translate_vdev(info->vdevice, &minor, &offset); if (err) - return err; + return err; nr_parts = PARTS_PER_DISK; } else { minor = BLKIF_MINOR_EXT(info->vdevice); @@ -1483,8 +1484,9 @@ static bool blkif_completion(unsigned long *id, for_each_sg(s->sg, sg, num_sg, i) { BUG_ON(sg->offset + sg->length > PAGE_SIZE);
- data.bvec_offset = sg->offset; - data.bvec_data = kmap_atomic(sg_page(sg)); + data.bvec_offset = 0; + data.bvec_data = sg_map(sg, 0, SG_KMAP_ATOMIC | + SG_MAP_MUST_NOT_FAIL);
gnttab_foreach_grant_in_range(sg_page(sg), sg->offset, @@ -1492,7 +1494,7 @@ static bool blkif_completion(unsigned long *id, blkif_copy_from_grant, &data);
- kunmap_atomic(data.bvec_data); + sg_unmap(sg, data.bvec_data, 0, SG_KMAP_ATOMIC); } } /* Add the persistent grant into the list of free grants */
On Tue, Apr 25, 2017 at 12:21:02PM -0600, Logan Gunthorpe wrote:
Straightforward conversion to the new helper, except due to the lack of an error path, we have to use SG_MAP_MUST_NOT_FAIL, which may BUG_ON in certain cases in the future.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Boris Ostrovsky boris.ostrovsky@oracle.com Cc: Juergen Gross jgross@suse.com Cc: Konrad Rzeszutek Wilk konrad.wilk@oracle.com Cc: "Roger Pau Monné" roger.pau@citrix.com
drivers/block/xen-blkfront.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index 3945963..ed62175 100644 --- a/drivers/block/xen-blkfront.c +++ b/drivers/block/xen-blkfront.c @@ -816,8 +816,9 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri BUG_ON(sg->offset + sg->length > PAGE_SIZE);
if (setup.need_copy) {
setup.bvec_off = sg->offset;
setup.bvec_data = kmap_atomic(sg_page(sg));
setup.bvec_off = 0;
setup.bvec_data = sg_map(sg, 0, SG_KMAP_ATOMIC |
SG_MAP_MUST_NOT_FAIL);
I assume that sg_map already adds sg->offset to the address?
Also wondering whether we can get rid of bvec_off and just increment bvec_data, adding Julien who IIRC added this code.
} gnttab_foreach_grant_in_range(sg_page(sg),
@@ -827,7 +828,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri &setup);
if (setup.need_copy)
kunmap_atomic(setup.bvec_data);
sg_unmap(sg, setup.bvec_data, 0, SG_KMAP_ATOMIC);
} if (setup.segments) kunmap_atomic(setup.segments);
@@ -1053,7 +1054,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset) case XEN_SCSI_DISK5_MAJOR: case XEN_SCSI_DISK6_MAJOR: case XEN_SCSI_DISK7_MAJOR:
*offset = (*minor / PARTS_PER_DISK) +
*offset = (*minor / PARTS_PER_DISK) + ((major - XEN_SCSI_DISK1_MAJOR + 1) * 16) + EMULATED_SD_DISK_NAME_OFFSET; *minor = *minor +
@@ -1068,7 +1069,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset) case XEN_SCSI_DISK13_MAJOR: case XEN_SCSI_DISK14_MAJOR: case XEN_SCSI_DISK15_MAJOR:
*offset = (*minor / PARTS_PER_DISK) +
*offset = (*minor / PARTS_PER_DISK) + ((major - XEN_SCSI_DISK8_MAJOR + 8) * 16) + EMULATED_SD_DISK_NAME_OFFSET; *minor = *minor +
@@ -1119,7 +1120,7 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity, if (!VDEV_IS_EXTENDED(info->vdevice)) { err = xen_translate_vdev(info->vdevice, &minor, &offset); if (err)
return err;
return err;
Cosmetic changes should go in a separate patch please.
Roger.
On 26/04/17 01:37 AM, Roger Pau Monné wrote:
On Tue, Apr 25, 2017 at 12:21:02PM -0600, Logan Gunthorpe wrote:
Straightforward conversion to the new helper, except due to the lack of an error path, we have to use SG_MAP_MUST_NOT_FAIL, which may BUG_ON in certain cases in the future.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Boris Ostrovsky boris.ostrovsky@oracle.com Cc: Juergen Gross jgross@suse.com Cc: Konrad Rzeszutek Wilk konrad.wilk@oracle.com Cc: "Roger Pau Monné" roger.pau@citrix.com
drivers/block/xen-blkfront.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index 3945963..ed62175 100644 --- a/drivers/block/xen-blkfront.c +++ b/drivers/block/xen-blkfront.c @@ -816,8 +816,9 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri BUG_ON(sg->offset + sg->length > PAGE_SIZE);
if (setup.need_copy) {
setup.bvec_off = sg->offset;
setup.bvec_data = kmap_atomic(sg_page(sg));
setup.bvec_off = 0;
setup.bvec_data = sg_map(sg, 0, SG_KMAP_ATOMIC |
SG_MAP_MUST_NOT_FAIL);
I assume that sg_map already adds sg->offset to the address?
Correct.
Also wondering whether we can get rid of bvec_off and just increment bvec_data, adding Julien who IIRC added this code.
bvec_off is used to keep track of the offset within the current mapping so it's not a great idea given that you'd want to kunmap_atomic the original address and not something with an offset. It would be nice if this could be converted to use the sg_miter interface but that's a much more invasive change that would require someone who knows this code and can properly test it. I'd be very grateful if someone actually took that on.
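Roughly, a miter-based version of that copy loop would look like the sketch below (untested, and eliding the per-grant accounting blkfront still has to do; blkif_copy_chunk_from_grant is a stand-in name):

	struct sg_mapping_iter miter;

	sg_miter_start(&miter, s->sg, num_sg, SG_MITER_ATOMIC | SG_MITER_TO_SG);
	while (sg_miter_next(&miter)) {
		/* miter.addr is already offset into the page and
		 * miter.length never crosses a page boundary, so no
		 * bvec_off bookkeeping is needed */
		blkif_copy_chunk_from_grant(&data, miter.addr, miter.length);
	}
	sg_miter_stop(&miter);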
Logan
On Thu, Apr 27, 2017 at 02:19:24PM -0600, Logan Gunthorpe wrote:
On 26/04/17 01:37 AM, Roger Pau Monné wrote:
On Tue, Apr 25, 2017 at 12:21:02PM -0600, Logan Gunthorpe wrote:
Straightforward conversion to the new helper, except due to the lack of an error path, we have to use SG_MAP_MUST_NOT_FAIL, which may BUG_ON in certain cases in the future.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Boris Ostrovsky boris.ostrovsky@oracle.com Cc: Juergen Gross jgross@suse.com Cc: Konrad Rzeszutek Wilk konrad.wilk@oracle.com Cc: "Roger Pau Monné" roger.pau@citrix.com drivers/block/xen-blkfront.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index 3945963..ed62175 100644 +++ b/drivers/block/xen-blkfront.c @@ -816,8 +816,9 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri BUG_ON(sg->offset + sg->length > PAGE_SIZE);
if (setup.need_copy) {
setup.bvec_off = sg->offset;
setup.bvec_data = kmap_atomic(sg_page(sg));
setup.bvec_off = 0;
setup.bvec_data = sg_map(sg, 0, SG_KMAP_ATOMIC |
SG_MAP_MUST_NOT_FAIL);
I assume that sg_map already adds sg->offset to the address?
Correct.
Also wondering whether we can get rid of bvec_off and just increment bvec_data, adding Julien who IIRC added this code.
bvec_off is used to keep track of the offset within the current mapping so it's not a great idea given that you'd want to kunmap_atomic the original address and not something with an offset. It would be nice if this could be converted to use the sg_miter interface but that's a much more invasive change that would require someone who knows this code and can properly test it. I'd be very grateful if someone actually took that on.
blkfront is one of the drivers I looked at, and it appears to only be memcpying with the bvec_data pointer, so I wonder why it does not use sg_copy_X_buffer instead..
Jason
On 27/04/17 02:53 PM, Jason Gunthorpe wrote:
blkfront is one of the drivers I looked at, and it appears to only be memcpying with the bvec_data pointer, so I wonder why it does not use sg_copy_X_buffer instead..
Yes, sort of...
But you'd potentially end up calling sg_copy_to_buffer multiple times per page within the sg (given that gnttab_foreach_grant_in_range might call blkif_copy_from_grant/blkif_setup_rw_req_grant multiple times). Even calling sg_copy_to_buffer once per page seems rather inefficient as it uses sg_miter internally.
Switching the for_each_sg to sg_miter is probably the nicer solution as it takes care of the mapping and the offset/length accounting for you and will have similar performance.
But, yes, if performance is not an issue, switching it to sg_copy_to_buffer would be a less invasive change than sg_miter. The same might be said about a lot of these cases.
Unfortunately, changing from kmap_atomic (which is a null operation in a lot of cases) to sg_copy_X_buffer is a pretty big performance hit.
Logan
On Thu, Apr 27, 2017 at 03:53:37PM -0600, Logan Gunthorpe wrote:
On 27/04/17 02:53 PM, Jason Gunthorpe wrote:
blkfront is one of the drivers I looked at, and it appears to only be memcpying with the bvec_data pointer, so I wonder why it does not use sg_copy_X_buffer instead..
But you'd potentially end up calling sg_copy_to_buffer multiple times per page within the sg (given that gnttab_foreach_grant_in_range might call blkif_copy_from_grant/blkif_setup_rw_req_grant multiple times). Even calling sg_copy_to_buffer once per page seems rather inefficient as it uses sg_miter internally.
Well, that is in the current form, with more users it would make sense to optimize for the single page case, eg by providing the existing call, providing a faster single-page-only variant of the copy, perhaps even one that is inlined.
Switching the for_each_sg to sg_miter is probably the nicer solution as it takes care of the mapping and the offset/length accounting for you and will have similar performance.
sg_miter will still fail when the sg contains __iomem, however I would expect that the sg_copy will work with iomem, by using the __iomem memcpy variant.
So, sg_copy should always be preferred in this new world with mixed __iomem since it is the only primitive that can transparently handle it.
Jason
On 27/04/17 04:11 PM, Jason Gunthorpe wrote:
On Thu, Apr 27, 2017 at 03:53:37PM -0600, Logan Gunthorpe wrote: Well, that is in the current form, with more users it would make sense to optimize for the single page case, eg by providing the existing call, providing a faster single-page-only variant of the copy, perhaps even one that is inlined.
Ok, does it make sense then to have an sg_copy_page_to_buffer (or some such... I'm having trouble thinking of a sane name that isn't too long). That just does k(un)map_atomic and memcpy? I could try that if it makes sense to people.
Switching the for_each_sg to sg_miter is probably the nicer solution as it takes care of the mapping and the offset/length accounting for you and will have similar performance.
sg_miter will still fail when the sg contains __iomem, however I would expect that the sg_copy will work with iomem, by using the __iomem memcpy variant.
Yes, that's true. Any sg_miters that ever see iomem will need to be converted to support it. This isn't much different than the other kmap(sg_page()) users I was converting that will also fail if they see iomem. Though, I suspect an sg_miter user would be easier to convert to iomem than a random kmap user.
Logan
On Thu, Apr 27, 2017 at 05:03:45PM -0600, Logan Gunthorpe wrote:
On 27/04/17 04:11 PM, Jason Gunthorpe wrote:
On Thu, Apr 27, 2017 at 03:53:37PM -0600, Logan Gunthorpe wrote: Well, that is in the current form, with more users it would make sense to optimize for the single page case, eg by providing the existing call, providing a faster single-page-only variant of the copy, perhaps even one that is inlined.
Ok, does it make sense then to have an sg_copy_page_to_buffer (or some such... I'm having trouble thinking of a sane name that isn't too long). That just does k(un)map_atomic and memcpy? I could try that if it makes sense to people.
It seems the most robust: test for iomem, and jump to a slow path copy, otherwise inline the kmap and memcpy
Every place doing memcpy from sgl will need that pattern to be correct.
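Something like this, say. It's only a sketch of the pattern; sg_copy_page_to_buffer, sg_is_iomem and the assumption that the entry fits within a single page are all hypothetical here:

	static size_t sg_copy_page_to_buffer(struct scatterlist *sg,
					     void *buf, size_t buflen)
	{
		size_t len = min_t(size_t, buflen, sg->length);
		void *addr;

		if (unlikely(sg_is_iomem(sg)))
			return sg_copy_to_buffer(sg, 1, buf, buflen);

		addr = kmap_atomic(sg_page(sg));
		memcpy(buf, addr + sg->offset, len);
		kunmap_atomic(addr);

		return len;
	}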
sg_miter will still fail when the sg contains __iomem, however I would expect that the sg_copy will work with iomem, by using the __iomem memcpy variant.
Yes, that's true. Any sg_miters that ever see iomem will need to be converted to support it. This isn't much different than the other kmap(sg_page()) users I was converting that will also fail if they see iomem. Though, I suspect an sg_miter user would be easier to convert to iomem than a random kmap user.
How? sg_miter seems like the next nightmare down this path, what is sg_miter_next supposed to do when something hits an iomem sgl?
miter.addr is supposed to be a kernel pointer that must not be __iomem..
Jason
On 27/04/17 05:20 PM, Jason Gunthorpe wrote:
It seems the most robust: test for iomem, and jump to a slow path copy, otherwise inline the kmap and memcpy
Every place doing memcpy from sgl will need that pattern to be correct.
Ok, sounds like a good place to start to me. I'll see what I can do for a v3 of this set. Though, I probably won't send anything until after the merge window.
sg_miter will still fail when the sg contains __iomem, however I would expect that the sg_copy will work with iomem, by using the __iomem memcpy variant.
Yes, that's true. Any sg_miters that ever see iomem will need to be converted to support it. This isn't much different than the other kmap(sg_page()) users I was converting that will also fail if they see iomem. Though, I suspect an sg_miter user would be easier to convert to iomem than a random kmap user.
How? sg_miter seems like the next nightmare down this path, what is sg_miter_next supposed to do when something hits an iomem sgl?
My proposal is roughly included in the draft I sent upthread. We add an sg_miter flag indicating the iteratee supports iomem and if miter finds iomem (with the support flag set) it sets ioaddr which is __iomem. The iteratee then just needs to null check addr and ioaddr and perform the appropriate action.
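An iteratee would then end up looking something like this (sketch only; SG_MITER_IOMEM and miter.ioaddr are the proposed additions and don't exist today):

	sg_miter_start(&miter, sgl, nents, SG_MITER_FROM_SG | SG_MITER_IOMEM);
	while (sg_miter_next(&miter)) {
		if (miter.addr)
			memcpy(buf, miter.addr, miter.length);
		else
			memcpy_fromio(buf, miter.ioaddr, miter.length);
		buf += miter.length;
	}
	sg_miter_stop(&miter);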
Logan
Straightforward conversion, except due to the lack of an error path we have to use SG_MAP_MUST_NOT_FAIL which may BUG_ON in certain cases in the future.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Adrian Hunter adrian.hunter@intel.com Cc: Ulf Hansson ulf.hansson@linaro.org --- drivers/mmc/host/sdhci.c | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index ecd0d43..239507f 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -513,15 +513,19 @@ static int sdhci_pre_dma_transfer(struct sdhci_host *host, return sg_count; }
+/* + * Note: this cannot fail; SG_MAP_MUST_NOT_FAIL may BUG_ON in certain + * cases in the future rather than return an error. + */ static char *sdhci_kmap_atomic(struct scatterlist *sg, unsigned long *flags) { local_irq_save(*flags); - return kmap_atomic(sg_page(sg)) + sg->offset; + return sg_map(sg, 0, SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL); }
-static void sdhci_kunmap_atomic(void *buffer, unsigned long *flags) +static void sdhci_kunmap_atomic(struct scatterlist *sg, void *buffer, + unsigned long *flags) { - kunmap_atomic(buffer); + sg_unmap(sg, buffer, 0, SG_KMAP_ATOMIC); local_irq_restore(*flags); }
@@ -585,7 +589,7 @@ static void sdhci_adma_table_pre(struct sdhci_host *host, if (data->flags & MMC_DATA_WRITE) { buffer = sdhci_kmap_atomic(sg, &flags); memcpy(align, buffer, offset); - sdhci_kunmap_atomic(buffer, &flags); + sdhci_kunmap_atomic(sg, buffer, &flags); }
/* tran, valid */ @@ -663,7 +667,7 @@ static void sdhci_adma_table_post(struct sdhci_host *host,
buffer = sdhci_kmap_atomic(sg, &flags); memcpy(buffer, align, size); - sdhci_kunmap_atomic(buffer, &flags); + sdhci_kunmap_atomic(sg, buffer, &flags);
align += SDHCI_ADMA2_ALIGN; }
We use the sg_map helper, but it's slightly more complicated as we only check for the error when the mapping actually gets used, such that if the mapping failed but wasn't needed, no error occurs.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Ulf Hansson ulf.hansson@linaro.org --- drivers/mmc/host/mmc_spi.c | 26 +++++++++++++++++++------- 1 file changed, 19 insertions(+), 7 deletions(-)
diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c index 476e53d..d614f36 100644 --- a/drivers/mmc/host/mmc_spi.c +++ b/drivers/mmc/host/mmc_spi.c @@ -676,9 +676,15 @@ mmc_spi_writeblock(struct mmc_spi_host *host, struct spi_transfer *t, struct scratch *scratch = host->data; u32 pattern;
- if (host->mmc->use_spi_crc) + if (host->mmc->use_spi_crc) { + if (IS_ERR(t->tx_buf)) + return PTR_ERR(t->tx_buf); + scratch->crc_val = cpu_to_be16( crc_itu_t(0, t->tx_buf, t->len)); + t->tx_buf += t->len; + } + if (host->dma_dev) dma_sync_single_for_device(host->dma_dev, host->data_dma, sizeof(*scratch), @@ -743,7 +749,6 @@ mmc_spi_writeblock(struct mmc_spi_host *host, struct spi_transfer *t, return status; }
- t->tx_buf += t->len; if (host->dma_dev) t->tx_dma += t->len;
@@ -809,6 +814,11 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t, } leftover = status << 1;
+ if (bitshift || host->mmc->use_spi_crc) { + if (IS_ERR(t->rx_buf)) + return PTR_ERR(t->rx_buf); + } + if (host->dma_dev) { dma_sync_single_for_device(host->dma_dev, host->data_dma, sizeof(*scratch), @@ -860,9 +870,10 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t, scratch->crc_val, crc, t->len); return -EILSEQ; } + + t->rx_buf += t->len; }
- t->rx_buf += t->len; if (host->dma_dev) t->rx_dma += t->len;
@@ -933,11 +944,11 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd, }
/* allow pio too; we don't allow highmem */ - kmap_addr = kmap(sg_page(sg)); + kmap_addr = sg_map(sg, 0, SG_KMAP); if (direction == DMA_TO_DEVICE) - t->tx_buf = kmap_addr + sg->offset; + t->tx_buf = kmap_addr; else - t->rx_buf = kmap_addr + sg->offset; + t->rx_buf = kmap_addr;
/* transfer each block, and update request status */ while (length) { @@ -967,7 +978,8 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd, /* discard mappings */ if (direction == DMA_FROM_DEVICE) flush_kernel_dcache_page(sg_page(sg)); - kunmap(sg_page(sg)); + if (!IS_ERR(kmap_addr)) + sg_unmap(sg, kmap_addr, 0, SG_KMAP); if (dma_dev) dma_unmap_page(dma_dev, dma_addr, PAGE_SIZE, dir);
Straightforward conversion to the sg_map helper. Seeing there is no clear error path, we use SG_MAP_MUST_NOT_FAIL, which may BUG_ON in certain cases in the future.
Signed-off-by: Logan Gunthorpe logang@deltatee.com Cc: Wolfram Sang wsa+renesas@sang-engineering.com Cc: Ulf Hansson ulf.hansson@linaro.org --- drivers/mmc/host/tmio_mmc.h | 7 +++++-- drivers/mmc/host/tmio_mmc_pio.c | 12 ++++++++++++ 2 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h index d0edb57..bc43eb0 100644 --- a/drivers/mmc/host/tmio_mmc.h +++ b/drivers/mmc/host/tmio_mmc.h @@ -202,17 +202,20 @@ void tmio_mmc_enable_mmc_irqs(struct tmio_mmc_host *host, u32 i); void tmio_mmc_disable_mmc_irqs(struct tmio_mmc_host *host, u32 i); irqreturn_t tmio_mmc_irq(int irq, void *devid);
+/* Note: this function may return PTR_ERR and must be checked! */ static inline char *tmio_mmc_kmap_atomic(struct scatterlist *sg, unsigned long *flags) { local_irq_save(*flags); - return kmap_atomic(sg_page(sg)) + sg->offset; + return sg_map(sg, 0, SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL); }
static inline void tmio_mmc_kunmap_atomic(struct scatterlist *sg, unsigned long *flags, void *virt) { - kunmap_atomic(virt - sg->offset); + sg_unmap(sg, virt, 0, SG_KMAP_ATOMIC); local_irq_restore(*flags); }
diff --git a/drivers/mmc/host/tmio_mmc_pio.c b/drivers/mmc/host/tmio_mmc_pio.c index a2d92f1..bbb4f19 100644 --- a/drivers/mmc/host/tmio_mmc_pio.c +++ b/drivers/mmc/host/tmio_mmc_pio.c @@ -506,6 +506,18 @@ static void tmio_mmc_check_bounce_buffer(struct tmio_mmc_host *host) if (host->sg_ptr == &host->bounce_sg) { unsigned long flags; void *sg_vaddr = tmio_mmc_kmap_atomic(host->sg_orig, &flags); + if (IS_ERR(sg_vaddr)) { + /* + * This should really never happen unless + * the code is changed to use memory that is + * not mappable in the sg. Seeing there doesn't + * seem to be any error path out of here, + * we can only WARN. + */ + WARN(1, "Non-mappable memory used in sg!"); + return; + } + memcpy(sg_vaddr, host->bounce_buf, host->bounce_sg.length); tmio_mmc_kunmap_atomic(host->sg_orig, &flags, sg_vaddr); }
This is a straightforward conversion to the new function.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Sascha Sommer <saschasommer@freenet.de>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/mmc/host/sdricoh_cs.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/drivers/mmc/host/sdricoh_cs.c b/drivers/mmc/host/sdricoh_cs.c
index 5ff26ab..03225c3 100644
--- a/drivers/mmc/host/sdricoh_cs.c
+++ b/drivers/mmc/host/sdricoh_cs.c
@@ -319,16 +319,20 @@ static void sdricoh_request(struct mmc_host *mmc, struct mmc_request *mrq)
 		for (i = 0; i < data->blocks; i++) {
 			size_t len = data->blksz;
 			u8 *buf;
-			struct page *page;
 			int result;
-			page = sg_page(data->sg);
 
-			buf = kmap(page) + data->sg->offset + (len * i);
+			buf = sg_map(data->sg, (len * i), SG_KMAP);
+			if (IS_ERR(buf)) {
+				cmd->error = PTR_ERR(buf);
+				break;
+			}
+
 			result = sdricoh_blockio(host,
 				data->flags & MMC_DATA_READ, buf, len);
-			kunmap(page);
-			flush_dcache_page(page);
+			sg_unmap(data->sg, buf, (len * i), SG_KMAP);
+
+			flush_dcache_page(sg_page(data->sg));
 			if (result) {
 				dev_err(dev, "sdricoh_request: cmd %i "
 					"block transfer failed\n", cmd->opcode);
This conversion is a bit complicated. We modify the read_fifo, write_fifo and copy_page functions to take a scatterlist instead of a page, which lets us use sg_map instead of kmap_atomic. A bit of offset accounting is needed to make this work: sg_map() applies the sg entry's offset itself, but the offset has already been added and used earlier in this code, so we subtract sg->offset before the call.
There's also no error path, so we use SG_MAP_MUST_NOT_FAIL which may BUG_ON in certain cases in the future.
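To illustrate the offset math with a sketch (not code from the patch): if an sg entry has sg->offset == 512 and the driver's running offset is off == sg->offset + 256, the old code mapped the page and added off directly, while sg_map() adds sg->offset internally, so the call site must subtract it back out:

	/* old: buf = kmap_atomic(sg_page(sg)) + off; */
	buf = sg_map(sg, off - sg->offset,
		     SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
	/*
	 * buf == page_address + sg->offset + (off - sg->offset)
	 *     == page_address + off, the same byte as before
	 */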
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/mmc/host/tifm_sd.c | 50 +++++++++++++++++++++++++++-------------------
 1 file changed, 29 insertions(+), 21 deletions(-)
diff --git a/drivers/mmc/host/tifm_sd.c b/drivers/mmc/host/tifm_sd.c
index 93c4b40..e64345a 100644
--- a/drivers/mmc/host/tifm_sd.c
+++ b/drivers/mmc/host/tifm_sd.c
@@ -111,14 +111,16 @@ struct tifm_sd {
 };
 
 /* for some reason, host won't respond correctly to readw/writew */
-static void tifm_sd_read_fifo(struct tifm_sd *host, struct page *pg,
+static void tifm_sd_read_fifo(struct tifm_sd *host, struct scatterlist *sg,
 			      unsigned int off, unsigned int cnt)
 {
 	struct tifm_dev *sock = host->dev;
 	unsigned char *buf;
 	unsigned int pos = 0, val;
 
-	buf = kmap_atomic(pg) + off;
+	buf = sg_map(sg, off - sg->offset,
+		     SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
+
 	if (host->cmd_flags & DATA_CARRY) {
 		buf[pos++] = host->bounce_buf_data[0];
 		host->cmd_flags &= ~DATA_CARRY;
@@ -134,17 +136,19 @@ static void tifm_sd_read_fifo(struct tifm_sd *host, struct page *pg,
 		}
 		buf[pos++] = (val >> 8) & 0xff;
 	}
-	kunmap_atomic(buf - off);
+	sg_unmap(sg, buf, off - sg->offset, SG_KMAP_ATOMIC);
 }
 
-static void tifm_sd_write_fifo(struct tifm_sd *host, struct page *pg,
+static void tifm_sd_write_fifo(struct tifm_sd *host, struct scatterlist *sg,
 			       unsigned int off, unsigned int cnt)
 {
 	struct tifm_dev *sock = host->dev;
 	unsigned char *buf;
 	unsigned int pos = 0, val;
 
-	buf = kmap_atomic(pg) + off;
+	buf = sg_map(sg, off - sg->offset,
+		     SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
+
 	if (host->cmd_flags & DATA_CARRY) {
 		val = host->bounce_buf_data[0] | ((buf[pos++] << 8) & 0xff00);
 		writel(val, sock->addr + SOCK_MMCSD_DATA);
@@ -161,7 +165,7 @@ static void tifm_sd_write_fifo(struct tifm_sd *host, struct page *pg,
 		val |= (buf[pos++] << 8) & 0xff00;
 		writel(val, sock->addr + SOCK_MMCSD_DATA);
 	}
-	kunmap_atomic(buf - off);
+	sg_unmap(sg, buf, off - sg->offset, SG_KMAP_ATOMIC);
 }
 
 static void tifm_sd_transfer_data(struct tifm_sd *host)
@@ -170,7 +174,6 @@ static void tifm_sd_transfer_data(struct tifm_sd *host)
 	struct scatterlist *sg = r_data->sg;
 	unsigned int off, cnt, t_size = TIFM_MMCSD_FIFO_SIZE * 2;
 	unsigned int p_off, p_cnt;
-	struct page *pg;
 
 	if (host->sg_pos == host->sg_len)
 		return;
@@ -192,33 +195,39 @@ static void tifm_sd_transfer_data(struct tifm_sd *host)
 		}
 		off = sg[host->sg_pos].offset + host->block_pos;
 
-		pg = nth_page(sg_page(&sg[host->sg_pos]), off >> PAGE_SHIFT);
 		p_off = offset_in_page(off);
 		p_cnt = PAGE_SIZE - p_off;
 		p_cnt = min(p_cnt, cnt);
 		p_cnt = min(p_cnt, t_size);
 
 		if (r_data->flags & MMC_DATA_READ)
-			tifm_sd_read_fifo(host, pg, p_off, p_cnt);
+			tifm_sd_read_fifo(host, &sg[host->sg_pos], p_off,
+					  p_cnt);
 		else if (r_data->flags & MMC_DATA_WRITE)
-			tifm_sd_write_fifo(host, pg, p_off, p_cnt);
+			tifm_sd_write_fifo(host, &sg[host->sg_pos], p_off,
+					   p_cnt);
 
 		t_size -= p_cnt;
 		host->block_pos += p_cnt;
 	}
 }
 
-static void tifm_sd_copy_page(struct page *dst, unsigned int dst_off,
-			      struct page *src, unsigned int src_off,
+static void tifm_sd_copy_page(struct scatterlist *dst, unsigned int dst_off,
+			      struct scatterlist *src, unsigned int src_off,
 			      unsigned int count)
 {
-	unsigned char *src_buf = kmap_atomic(src) + src_off;
-	unsigned char *dst_buf = kmap_atomic(dst) + dst_off;
+	unsigned char *src_buf, *dst_buf;
+
+	src_off -= src->offset;
+	dst_off -= dst->offset;
+
+	src_buf = sg_map(src, src_off, SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
+	dst_buf = sg_map(dst, dst_off, SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
 
 	memcpy(dst_buf, src_buf, count);
 
-	kunmap_atomic(dst_buf - dst_off);
-	kunmap_atomic(src_buf - src_off);
+	sg_unmap(dst, dst_buf, dst_off, SG_KMAP_ATOMIC);
+	sg_unmap(src, src_buf, src_off, SG_KMAP_ATOMIC);
 }
 
 static void tifm_sd_bounce_block(struct tifm_sd *host, struct mmc_data *r_data)
@@ -227,7 +236,6 @@ static void tifm_sd_bounce_block(struct tifm_sd *host, struct mmc_data *r_data)
 	unsigned int t_size = r_data->blksz;
 	unsigned int off, cnt;
 	unsigned int p_off, p_cnt;
-	struct page *pg;
 
 	dev_dbg(&host->dev->dev, "bouncing block\n");
 	while (t_size) {
@@ -241,18 +249,18 @@ static void tifm_sd_bounce_block(struct tifm_sd *host, struct mmc_data *r_data)
 		}
 		off = sg[host->sg_pos].offset + host->block_pos;
 
-		pg = nth_page(sg_page(&sg[host->sg_pos]), off >> PAGE_SHIFT);
 		p_off = offset_in_page(off);
 		p_cnt = PAGE_SIZE - p_off;
 		p_cnt = min(p_cnt, cnt);
 		p_cnt = min(p_cnt, t_size);
 
 		if (r_data->flags & MMC_DATA_WRITE)
-			tifm_sd_copy_page(sg_page(&host->bounce_buf),
+			tifm_sd_copy_page(&host->bounce_buf,
 					  r_data->blksz - t_size,
-					  pg, p_off, p_cnt);
+					  &sg[host->sg_pos], p_off, p_cnt);
 		else if (r_data->flags & MMC_DATA_READ)
-			tifm_sd_copy_page(pg, p_off, sg_page(&host->bounce_buf),
+			tifm_sd_copy_page(&sg[host->sg_pos], p_off,
+					  &host->bounce_buf,
 					  r_data->blksz - t_size, p_cnt);
 
 		t_size -= p_cnt;
Straightforward conversion, but we have to make use of SG_MAP_MUST_NOT_FAIL, which may BUG_ON in certain cases in the future.
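Both drivers do the mapping with interrupts disabled, so the conversion keeps the local_irq_save()/local_irq_restore() bracketing unchanged around the new calls; schematically (a sketch using the names from the drivers below):

	local_irq_save(flags);
	buf = sg_map(&host->req->sg, off - host->req->sg.offset,
		     SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);

	/* ... register-based PIO to/from buf ... */

	sg_unmap(&host->req->sg, buf, off - host->req->sg.offset,
		 SG_KMAP_ATOMIC);
	local_irq_restore(flags);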
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Alex Dubov <oakad@yahoo.com>
---
 drivers/memstick/host/jmb38x_ms.c | 11 ++++++-----
 drivers/memstick/host/tifm_ms.c   | 11 ++++++-----
 2 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/memstick/host/jmb38x_ms.c b/drivers/memstick/host/jmb38x_ms.c
index 48db922..9019e37 100644
--- a/drivers/memstick/host/jmb38x_ms.c
+++ b/drivers/memstick/host/jmb38x_ms.c
@@ -303,7 +303,6 @@ static int jmb38x_ms_transfer_data(struct jmb38x_ms_host *host)
 	unsigned int off;
 	unsigned int t_size, p_cnt;
 	unsigned char *buf;
-	struct page *pg;
 	unsigned long flags = 0;
 
 	if (host->req->long_data) {
@@ -318,14 +317,14 @@ static int jmb38x_ms_transfer_data(struct jmb38x_ms_host *host)
 		unsigned int uninitialized_var(p_off);
 
 		if (host->req->long_data) {
-			pg = nth_page(sg_page(&host->req->sg),
-				      off >> PAGE_SHIFT);
 			p_off = offset_in_page(off);
 			p_cnt = PAGE_SIZE - p_off;
 			p_cnt = min(p_cnt, length);
 
 			local_irq_save(flags);
-			buf = kmap_atomic(pg) + p_off;
+			buf = sg_map(&host->req->sg,
+				     off - host->req->sg.offset,
+				     SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
 		} else {
 			buf = host->req->data + host->block_pos;
 			p_cnt = host->req->data_len - host->block_pos;
@@ -341,7 +340,9 @@ static int jmb38x_ms_transfer_data(struct jmb38x_ms_host *host)
 			: jmb38x_ms_read_reg_data(host, buf, p_cnt);
 
 		if (host->req->long_data) {
-			kunmap_atomic(buf - p_off);
+			sg_unmap(&host->req->sg, buf,
+				 off - host->req->sg.offset,
+				 SG_KMAP_ATOMIC);
 			local_irq_restore(flags);
 		}
diff --git a/drivers/memstick/host/tifm_ms.c b/drivers/memstick/host/tifm_ms.c
index 7bafa72..304985d 100644
--- a/drivers/memstick/host/tifm_ms.c
+++ b/drivers/memstick/host/tifm_ms.c
@@ -186,7 +186,6 @@ static unsigned int tifm_ms_transfer_data(struct tifm_ms *host)
 	unsigned int off;
 	unsigned int t_size, p_cnt;
 	unsigned char *buf;
-	struct page *pg;
 	unsigned long flags = 0;
 
 	if (host->req->long_data) {
@@ -203,14 +202,14 @@ static unsigned int tifm_ms_transfer_data(struct tifm_ms *host)
 		unsigned int uninitialized_var(p_off);
 
 		if (host->req->long_data) {
-			pg = nth_page(sg_page(&host->req->sg),
-				      off >> PAGE_SHIFT);
 			p_off = offset_in_page(off);
 			p_cnt = PAGE_SIZE - p_off;
 			p_cnt = min(p_cnt, length);
 
 			local_irq_save(flags);
-			buf = kmap_atomic(pg) + p_off;
+			buf = sg_map(&host->req->sg,
+				     off - host->req->sg.offset,
+				     SG_KMAP_ATOMIC | SG_MAP_MUST_NOT_FAIL);
 		} else {
 			buf = host->req->data + host->block_pos;
 			p_cnt = host->req->data_len - host->block_pos;
@@ -221,7 +220,9 @@ static unsigned int tifm_ms_transfer_data(struct tifm_ms *host)
 			: tifm_ms_read_data(host, buf, p_cnt);
 
 		if (host->req->long_data) {
-			kunmap_atomic(buf - p_off);
+			sg_unmap(&host->req->sg, buf,
+				 off - host->req->sg.offset,
+				 SG_KMAP_ATOMIC);
 			local_irq_restore(flags);
 		}