If clwb is available on the system, use it in clflush_cache_range. If clwb is not available, fall back to clflushopt if you can. If clflushopt is not supported, fall all the way back to clflush.
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: H Peter Anvin <h.peter.anvin@intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: David Airlie <airlied@linux.ie>
Cc: dri-devel@lists.freedesktop.org
Cc: x86@kernel.org
---
 arch/x86/mm/pageattr.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 36de293..5229d45 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -126,8 +126,8 @@ within(unsigned long addr, unsigned long start, unsigned long end)
  * @vaddr:	virtual start address
  * @size:	number of bytes to flush
  *
- * clflushopt is an unordered instruction which needs fencing with mfence or
- * sfence to avoid ordering issues.
+ * clflushopt and clwb are unordered instructions which need fencing with
+ * mfence or sfence to avoid ordering issues.
  */
 void clflush_cache_range(void *vaddr, unsigned int size)
 {
@@ -136,11 +136,11 @@ void clflush_cache_range(void *vaddr, unsigned int size)
 	mb();
 
 	for (; vaddr < vend; vaddr += boot_cpu_data.x86_clflush_size)
-		clflushopt(vaddr);
+		clwb(vaddr);
 	/*
 	 * Flush any possible final partial cacheline:
 	 */
-	clflushopt(vend);
+	clwb(vend);
 
 	mb();
 }