This patch series provides a generic helper function, prot_guest_has(), to replace the sme_active(), sev_active(), sev_es_active() and mem_encrypt_active() functions.
It is expected that as new protected virtualization technologies are added to the kernel, they can all be covered by a single function call instead of a collection of specific function calls all called from the same locations.
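To illustrate the intent (the call site and remap_decrypted() below are purely hypothetical, and tdx_active() is the same placeholder used in the patch descriptions), per-technology checks collapse into a single attribute query:

    /* Before: one check per protection technology */
    if (sev_active() || tdx_active())
        remap_decrypted();

    /* After: a single, technology-agnostic query */
    if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
        remap_decrypted();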
The powerpc and s390 patches have been compile-tested only. Could the folks copied on this series verify that nothing breaks for them?
Cc: Andi Kleen ak@linux.intel.com Cc: Andy Lutomirski luto@kernel.org Cc: Ard Biesheuvel ardb@kernel.org Cc: Baoquan He bhe@redhat.com Cc: Benjamin Herrenschmidt benh@kernel.crashing.org Cc: Borislav Petkov bp@alien8.de Cc: Christian Borntraeger borntraeger@de.ibm.com Cc: Daniel Vetter daniel@ffwll.ch Cc: Dave Hansen dave.hansen@linux.intel.com Cc: Dave Young dyoung@redhat.com Cc: David Airlie airlied@linux.ie Cc: Heiko Carstens hca@linux.ibm.com Cc: Ingo Molnar mingo@redhat.com Cc: Joerg Roedel joro@8bytes.org Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com Cc: Maxime Ripard mripard@kernel.org Cc: Michael Ellerman mpe@ellerman.id.au Cc: Paul Mackerras paulus@samba.org Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Vasily Gorbik gor@linux.ibm.com Cc: VMware Graphics linux-graphics-maintainer@vmware.com Cc: Will Deacon will@kernel.org
---
Patches based on: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master 0b52902cd2d9 ("Merge branch 'efi/urgent'")
Changes since v1:
- Move some arch ioremap functions within #ifdef CONFIG_AMD_MEM_ENCRYPT in prep for use of prot_guest_has() by TDX.
- Add type includes to the protected_guest.h header file to prevent build errors outside of x86.
- Make amd_prot_guest_has() EXPORT_SYMBOL_GPL.
- Use amd_prot_guest_has() in place of checking sme_me_mask in the arch/x86/mm/mem_encrypt.c file.
Tom Lendacky (12):
  x86/ioremap: Selectively build arch override encryption functions
  mm: Introduce a function to check for virtualization protection features
  x86/sev: Add an x86 version of prot_guest_has()
  powerpc/pseries/svm: Add a powerpc version of prot_guest_has()
  x86/sme: Replace occurrences of sme_active() with prot_guest_has()
  x86/sev: Replace occurrences of sev_active() with prot_guest_has()
  x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()
  treewide: Replace the use of mem_encrypt_active() with prot_guest_has()
  mm: Remove the now unused mem_encrypt_active() function
  x86/sev: Remove the now unused mem_encrypt_active() function
  powerpc/pseries/svm: Remove the now unused mem_encrypt_active() function
  s390/mm: Remove the now unused mem_encrypt_active() function
arch/Kconfig | 3 ++ arch/powerpc/include/asm/mem_encrypt.h | 5 -- arch/powerpc/include/asm/protected_guest.h | 30 +++++++++++ arch/powerpc/platforms/pseries/Kconfig | 1 + arch/s390/include/asm/mem_encrypt.h | 2 - arch/x86/Kconfig | 1 + arch/x86/include/asm/io.h | 8 +++ arch/x86/include/asm/kexec.h | 2 +- arch/x86/include/asm/mem_encrypt.h | 13 +---- arch/x86/include/asm/protected_guest.h | 29 +++++++++++ arch/x86/kernel/crash_dump_64.c | 4 +- arch/x86/kernel/head64.c | 4 +- arch/x86/kernel/kvm.c | 3 +- arch/x86/kernel/kvmclock.c | 4 +- arch/x86/kernel/machine_kexec_64.c | 19 +++---- arch/x86/kernel/pci-swiotlb.c | 9 ++-- arch/x86/kernel/relocate_kernel_64.S | 2 +- arch/x86/kernel/sev.c | 6 +-- arch/x86/kvm/svm/svm.c | 3 +- arch/x86/mm/ioremap.c | 18 +++---- arch/x86/mm/mem_encrypt.c | 60 +++++++++++++++------- arch/x86/mm/mem_encrypt_identity.c | 3 +- arch/x86/mm/pat/set_memory.c | 3 +- arch/x86/platform/efi/efi_64.c | 9 ++-- arch/x86/realmode/init.c | 8 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +- drivers/gpu/drm/drm_cache.c | 4 +- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 +- drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +-- drivers/iommu/amd/init.c | 7 +-- drivers/iommu/amd/iommu.c | 3 +- drivers/iommu/amd/iommu_v2.c | 3 +- drivers/iommu/iommu.c | 3 +- fs/proc/vmcore.c | 6 +-- include/linux/mem_encrypt.h | 4 -- include/linux/protected_guest.h | 40 +++++++++++++++ kernel/dma/swiotlb.c | 4 +- 37 files changed, 232 insertions(+), 105 deletions(-) create mode 100644 arch/powerpc/include/asm/protected_guest.h create mode 100644 arch/x86/include/asm/protected_guest.h create mode 100644 include/linux/protected_guest.h
In prep for other uses of the prot_guest_has() function besides AMD's memory encryption support, selectively build the AMD memory encryption architecture override functions only when CONFIG_AMD_MEM_ENCRYPT=y. These functions are:
- early_memremap_pgprot_adjust()
- arch_memremap_can_ram_remap()

Additionally, routines that are only invoked by these architecture override functions can also be conditionally built. These functions are:
- memremap_should_map_decrypted()
- memremap_is_efi_data()
- memremap_is_setup_data()
- early_memremap_is_setup_data()
And finally, phys_mem_access_encrypted() is conditionally built as well, but requires a static inline version of it when CONFIG_AMD_MEM_ENCRYPT is not set.
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Cc: Dave Hansen dave.hansen@linux.intel.com Cc: Andy Lutomirski luto@kernel.org Cc: Peter Zijlstra peterz@infradead.org Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/x86/include/asm/io.h | 8 ++++++++ arch/x86/mm/ioremap.c | 2 +- 2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h index 841a5d104afa..5c6a4af0b911 100644 --- a/arch/x86/include/asm/io.h +++ b/arch/x86/include/asm/io.h @@ -391,6 +391,7 @@ extern void arch_io_free_memtype_wc(resource_size_t start, resource_size_t size) #define arch_io_reserve_memtype_wc arch_io_reserve_memtype_wc #endif
+#ifdef CONFIG_AMD_MEM_ENCRYPT extern bool arch_memremap_can_ram_remap(resource_size_t offset, unsigned long size, unsigned long flags); @@ -398,6 +399,13 @@ extern bool arch_memremap_can_ram_remap(resource_size_t offset,
extern bool phys_mem_access_encrypted(unsigned long phys_addr, unsigned long size); +#else +static inline bool phys_mem_access_encrypted(unsigned long phys_addr, + unsigned long size) +{ + return true; +} +#endif
/** * iosubmit_cmds512 - copy data to single MMIO location, in 512-bit units diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 60ade7dd71bd..ccff76cedd8f 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -508,6 +508,7 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr) memunmap((void *)((unsigned long)addr & PAGE_MASK)); }
+#ifdef CONFIG_AMD_MEM_ENCRYPT /* * Examine the physical address to determine if it is an area of memory * that should be mapped decrypted. If the memory is not part of the @@ -746,7 +747,6 @@ bool phys_mem_access_encrypted(unsigned long phys_addr, unsigned long size) return arch_memremap_can_ram_remap(phys_addr, size, 0); }
-#ifdef CONFIG_AMD_MEM_ENCRYPT /* Remap memory with encryption */ void __init *early_memremap_encrypted(resource_size_t phys_addr, unsigned long size)
On Fri, Aug 13, 2021 at 11:59:20AM -0500, Tom Lendacky wrote:
In prep for other uses of the prot_guest_has() function besides AMD's memory encryption support, selectively build the AMD memory encryption architecture override functions only when CONFIG_AMD_MEM_ENCRYPT=y. These functions are:
- early_memremap_pgprot_adjust()
- arch_memremap_can_ram_remap()
Additionally, routines that are only invoked by these architecture override functions can also be conditionally built. These functions are:
- memremap_should_map_decrypted()
- memremap_is_efi_data()
- memremap_is_setup_data()
- early_memremap_is_setup_data()
And finally, phys_mem_access_encrypted() is conditionally built as well, but requires a static inline version of it when CONFIG_AMD_MEM_ENCRYPT is not set.
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Cc: Dave Hansen dave.hansen@linux.intel.com Cc: Andy Lutomirski luto@kernel.org Cc: Peter Zijlstra peterz@infradead.org Signed-off-by: Tom Lendacky thomas.lendacky@amd.com
arch/x86/include/asm/io.h | 8 ++++++++ arch/x86/mm/ioremap.c | 2 +- 2 files changed, 9 insertions(+), 1 deletion(-)
LGTM.
In prep for other protected virtualization technologies, introduce a generic helper function, prot_guest_has(), that can be used to check for specific protection attributes, like memory encryption. This is intended to eliminate having to add multiple technology-specific checks to the code (e.g. if (sev_active() || tdx_active())).
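A minimal usage sketch (the helper below is illustrative and not part of this patch): common code includes <linux/protected_guest.h> and can call prot_guest_has() unconditionally, because architectures that do not select ARCH_HAS_PROTECTED_GUEST get the static inline stub that returns false:

    #include <linux/protected_guest.h>

    static bool need_unencrypted_dma(void)
    {
        /* Arch implementation if ARCH_HAS_PROTECTED_GUEST=y, otherwise the
         * generic stub that always returns false.
         */
        return prot_guest_has(PATTR_MEM_ENCRYPT);
    }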
Reviewed-by: Joerg Roedel jroedel@suse.de Co-developed-by: Andi Kleen ak@linux.intel.com Signed-off-by: Andi Kleen ak@linux.intel.com Co-developed-by: Kuppuswamy Sathyanarayanan sathyanarayanan.kuppuswamy@linux.intel.com Signed-off-by: Kuppuswamy Sathyanarayanan sathyanarayanan.kuppuswamy@linux.intel.com Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/Kconfig | 3 +++ include/linux/protected_guest.h | 35 +++++++++++++++++++++++++++++++++ 2 files changed, 38 insertions(+) create mode 100644 include/linux/protected_guest.h
diff --git a/arch/Kconfig b/arch/Kconfig index 98db63496bab..bd4f60c581f1 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -1231,6 +1231,9 @@ config RELR config ARCH_HAS_MEM_ENCRYPT bool
+config ARCH_HAS_PROTECTED_GUEST + bool + config HAVE_SPARSE_SYSCALL_NR bool help diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h new file mode 100644 index 000000000000..43d4dde94793 --- /dev/null +++ b/include/linux/protected_guest.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Protected Guest (and Host) Capability checks + * + * Copyright (C) 2021 Advanced Micro Devices, Inc. + * + * Author: Tom Lendacky thomas.lendacky@amd.com + */ + +#ifndef _PROTECTED_GUEST_H +#define _PROTECTED_GUEST_H + +#ifndef __ASSEMBLY__ + +#include <linux/types.h> +#include <linux/stddef.h> + +#define PATTR_MEM_ENCRYPT 0 /* Encrypted memory */ +#define PATTR_HOST_MEM_ENCRYPT 1 /* Host encrypted memory */ +#define PATTR_GUEST_MEM_ENCRYPT 2 /* Guest encrypted memory */ +#define PATTR_GUEST_PROT_STATE 3 /* Guest encrypted state */ + +#ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST + +#include <asm/protected_guest.h> + +#else /* !CONFIG_ARCH_HAS_PROTECTED_GUEST */ + +static inline bool prot_guest_has(unsigned int attr) { return false; } + +#endif /* CONFIG_ARCH_HAS_PROTECTED_GUEST */ + +#endif /* __ASSEMBLY__ */ + +#endif /* _PROTECTED_GUEST_H */
On 8/13/21 9:59 AM, Tom Lendacky wrote:
In prep for other protected virtualization technologies, introduce a generic helper function, prot_guest_has(), that can be used to check for specific protection attributes, like memory encryption. This is intended to eliminate having to add multiple technology-specific checks to the code (e.g. if (sev_active() || tdx_active())).
Reviewed-by: Joerg Roedeljroedel@suse.de Co-developed-by: Andi Kleenak@linux.intel.com Signed-off-by: Andi Kleenak@linux.intel.com Co-developed-by: Kuppuswamy Sathyanarayanansathyanarayanan.kuppuswamy@linux.intel.com Signed-off-by: Kuppuswamy Sathyanarayanansathyanarayanan.kuppuswamy@linux.intel.com Signed-off-by: Tom Lendackythomas.lendacky@amd.com
arch/Kconfig | 3 +++ include/linux/protected_guest.h | 35 +++++++++++++++++++++++++++++++++ 2 files changed, 38 insertions(+) create mode 100644 include/linux/protected_guest.h
Reviewed-by: Kuppuswamy Sathyanarayanan sathyanarayanan.kuppuswamy@linux.intel.com
On Fri, Aug 13, 2021 at 11:59:21AM -0500, Tom Lendacky wrote:
In prep for other protected virtualization technologies, introduce a generic helper function, prot_guest_has(), that can be used to check for specific protection attributes, like memory encryption. This is intended to eliminate having to add multiple technology-specific checks to the code (e.g. if (sev_active() || tdx_active())).
Reviewed-by: Joerg Roedel jroedel@suse.de Co-developed-by: Andi Kleen ak@linux.intel.com Signed-off-by: Andi Kleen ak@linux.intel.com Co-developed-by: Kuppuswamy Sathyanarayanan sathyanarayanan.kuppuswamy@linux.intel.com Signed-off-by: Kuppuswamy Sathyanarayanan sathyanarayanan.kuppuswamy@linux.intel.com Signed-off-by: Tom Lendacky thomas.lendacky@amd.com
arch/Kconfig | 3 +++ include/linux/protected_guest.h | 35 +++++++++++++++++++++++++++++++++ 2 files changed, 38 insertions(+) create mode 100644 include/linux/protected_guest.h
diff --git a/arch/Kconfig b/arch/Kconfig index 98db63496bab..bd4f60c581f1 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -1231,6 +1231,9 @@ config RELR config ARCH_HAS_MEM_ENCRYPT bool
+config ARCH_HAS_PROTECTED_GUEST
+	bool
config HAVE_SPARSE_SYSCALL_NR bool help diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h new file mode 100644 index 000000000000..43d4dde94793 --- /dev/null +++ b/include/linux/protected_guest.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky thomas.lendacky@amd.com
+ */
+#ifndef _PROTECTED_GUEST_H +#define _PROTECTED_GUEST_H
+#ifndef __ASSEMBLY__
^^^^^^^^^^^^^
Do you really need that guard? It builds fine without it too. Or does something coming later need it?
On 8/14/21 1:32 PM, Borislav Petkov wrote:
On Fri, Aug 13, 2021 at 11:59:21AM -0500, Tom Lendacky wrote:
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h new file mode 100644 index 000000000000..43d4dde94793 --- /dev/null +++ b/include/linux/protected_guest.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky thomas.lendacky@amd.com
+ */
+#ifndef _PROTECTED_GUEST_H +#define _PROTECTED_GUEST_H
+#ifndef __ASSEMBLY__
^^^^^^^^^^^^^
Do you really need that guard? It builds fine without it too. Or does something coming later need it?
No, I probably did it out of habit. I can remove it in the next version.
Thanks, Tom
Introduce an x86 version of the prot_guest_has() function. This will be used in the more generic x86 code to replace vendor specific calls like sev_active(), etc.
While the name suggests this is intended mainly for guests, it will also be used for host memory encryption checks in place of sme_active().
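As a sketch of a host-side call site (mirroring the pci-swiotlb change later in this series), the translation looks like:

    /* Old, AMD-specific check */
    if (sme_active())
        swiotlb = 1;

    /* New, technology-agnostic check */
    if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
        swiotlb = 1;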
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Cc: Dave Hansen dave.hansen@linux.intel.com Cc: Andy Lutomirski luto@kernel.org Cc: Peter Zijlstra peterz@infradead.org Reviewed-by: Joerg Roedel jroedel@suse.de Co-developed-by: Andi Kleen ak@linux.intel.com Signed-off-by: Andi Kleen ak@linux.intel.com Co-developed-by: Kuppuswamy Sathyanarayanan sathyanarayanan.kuppuswamy@linux.intel.com Signed-off-by: Kuppuswamy Sathyanarayanan sathyanarayanan.kuppuswamy@linux.intel.com Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/x86/Kconfig | 1 + arch/x86/include/asm/mem_encrypt.h | 2 ++ arch/x86/include/asm/protected_guest.h | 29 ++++++++++++++++++++++++++ arch/x86/mm/mem_encrypt.c | 25 ++++++++++++++++++++++ include/linux/protected_guest.h | 5 +++++ 5 files changed, 62 insertions(+) create mode 100644 arch/x86/include/asm/protected_guest.h
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 421fa9e38c60..82e5fb713261 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1514,6 +1514,7 @@ config AMD_MEM_ENCRYPT select ARCH_HAS_FORCE_DMA_UNENCRYPTED select INSTRUCTION_DECODER select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS + select ARCH_HAS_PROTECTED_GUEST help Say yes to enable support for the encryption of system memory. This requires an AMD processor that supports Secure Memory diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 9c80c68d75b5..a46d47662772 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -53,6 +53,7 @@ void __init sev_es_init_vc_handling(void); bool sme_active(void); bool sev_active(void); bool sev_es_active(void); +bool amd_prot_guest_has(unsigned int attr);
#define __bss_decrypted __section(".bss..decrypted")
@@ -78,6 +79,7 @@ static inline void sev_es_init_vc_handling(void) { } static inline bool sme_active(void) { return false; } static inline bool sev_active(void) { return false; } static inline bool sev_es_active(void) { return false; } +static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
static inline int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; } diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h new file mode 100644 index 000000000000..51e4eefd9542 --- /dev/null +++ b/arch/x86/include/asm/protected_guest.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Protected Guest (and Host) Capability checks + * + * Copyright (C) 2021 Advanced Micro Devices, Inc. + * + * Author: Tom Lendacky thomas.lendacky@amd.com + */ + +#ifndef _X86_PROTECTED_GUEST_H +#define _X86_PROTECTED_GUEST_H + +#include <linux/mem_encrypt.h> + +#ifndef __ASSEMBLY__ + +static inline bool prot_guest_has(unsigned int attr) +{ +#ifdef CONFIG_AMD_MEM_ENCRYPT + if (sme_me_mask) + return amd_prot_guest_has(attr); +#endif + + return false; +} + +#endif /* __ASSEMBLY__ */ + +#endif /* _X86_PROTECTED_GUEST_H */ diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index ff08dc463634..edc67ddf065d 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -20,6 +20,7 @@ #include <linux/bitops.h> #include <linux/dma-mapping.h> #include <linux/virtio_config.h> +#include <linux/protected_guest.h>
#include <asm/tlbflush.h> #include <asm/fixmap.h> @@ -389,6 +390,30 @@ bool noinstr sev_es_active(void) return sev_status & MSR_AMD64_SEV_ES_ENABLED; }
+bool amd_prot_guest_has(unsigned int attr) +{ + switch (attr) { + case PATTR_MEM_ENCRYPT: + return sme_me_mask != 0; + + case PATTR_SME: + case PATTR_HOST_MEM_ENCRYPT: + return sme_active(); + + case PATTR_SEV: + case PATTR_GUEST_MEM_ENCRYPT: + return sev_active(); + + case PATTR_SEV_ES: + case PATTR_GUEST_PROT_STATE: + return sev_es_active(); + + default: + return false; + } +} +EXPORT_SYMBOL_GPL(amd_prot_guest_has); + /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */ bool force_dma_unencrypted(struct device *dev) { diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h index 43d4dde94793..5ddef1b6a2ea 100644 --- a/include/linux/protected_guest.h +++ b/include/linux/protected_guest.h @@ -20,6 +20,11 @@ #define PATTR_GUEST_MEM_ENCRYPT 2 /* Guest encrypted memory */ #define PATTR_GUEST_PROT_STATE 3 /* Guest encrypted state */
+/* 0x800 - 0x8ff reserved for AMD */ +#define PATTR_SME 0x800 +#define PATTR_SEV 0x801 +#define PATTR_SEV_ES 0x802 + #ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST
#include <asm/protected_guest.h>
On Fri, Aug 13, 2021 at 11:59:22AM -0500, Tom Lendacky wrote:
diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h new file mode 100644 index 000000000000..51e4eefd9542 --- /dev/null +++ b/arch/x86/include/asm/protected_guest.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky thomas.lendacky@amd.com
+ */
+#ifndef _X86_PROTECTED_GUEST_H +#define _X86_PROTECTED_GUEST_H
+#include <linux/mem_encrypt.h>
+#ifndef __ASSEMBLY__
+static inline bool prot_guest_has(unsigned int attr) +{ +#ifdef CONFIG_AMD_MEM_ENCRYPT
+	if (sme_me_mask)
+		return amd_prot_guest_has(attr);
+#endif
+
+	return false;
+}
+#endif /* __ASSEMBLY__ */
+#endif /* _X86_PROTECTED_GUEST_H */
I think this can be simplified more, diff ontop below:
- no need for the ifdeffery as amd_prot_guest_has() has versions for both when CONFIG_AMD_MEM_ENCRYPT is set or not.
- the sme_me_mask check is pushed there too.
- and since this is vendor-specific, I'm checking the vendor bit. Yeah, yeah, cross-vendor but I don't really believe that.
--- diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h index 51e4eefd9542..8541c76d5da4 100644 --- a/arch/x86/include/asm/protected_guest.h +++ b/arch/x86/include/asm/protected_guest.h @@ -12,18 +12,13 @@
#include <linux/mem_encrypt.h>
-#ifndef __ASSEMBLY__ - static inline bool prot_guest_has(unsigned int attr) { -#ifdef CONFIG_AMD_MEM_ENCRYPT - if (sme_me_mask) + if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || + boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) return amd_prot_guest_has(attr); -#endif
return false; }
-#endif /* __ASSEMBLY__ */ - #endif /* _X86_PROTECTED_GUEST_H */ diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index edc67ddf065d..5a0442a6f072 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -392,6 +392,9 @@ bool noinstr sev_es_active(void)
bool amd_prot_guest_has(unsigned int attr) { + if (!sme_me_mask) + return false; + switch (attr) { case PATTR_MEM_ENCRYPT: return sme_me_mask != 0;
On 8/14/21 2:08 PM, Borislav Petkov wrote:
On Fri, Aug 13, 2021 at 11:59:22AM -0500, Tom Lendacky wrote:
diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h new file mode 100644 index 000000000000..51e4eefd9542 --- /dev/null +++ b/arch/x86/include/asm/protected_guest.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky thomas.lendacky@amd.com
+ */
+#ifndef _X86_PROTECTED_GUEST_H +#define _X86_PROTECTED_GUEST_H
+#include <linux/mem_encrypt.h>
+#ifndef __ASSEMBLY__
+static inline bool prot_guest_has(unsigned int attr) +{ +#ifdef CONFIG_AMD_MEM_ENCRYPT
+	if (sme_me_mask)
+		return amd_prot_guest_has(attr);
+#endif
+
+	return false;
+}
+#endif /* __ASSEMBLY__ */
+#endif /* _X86_PROTECTED_GUEST_H */
I think this can be simplified more, diff ontop below:
- no need for the ifdeffery as amd_prot_guest_has() has versions for
both when CONFIG_AMD_MEM_ENCRYPT is set or not.
Ugh, yeah, not sure why I put that in for this version since I have the static inline for when CONFIG_AMD_MEM_ENCRYPT is not set.
- the sme_me_mask check is pushed there too.
- and since this is vendor-specific, I'm checking the vendor bit. Yeah, yeah, cross-vendor but I don't really believe that.
It's not so much a cross-vendor thing as a KVM or other hypervisor thing, where the family doesn't have to be reported as AMD or HYGON. That's why I made the if check use sme_me_mask. I think that is the safer way to go.
Thanks, Tom
diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h index 51e4eefd9542..8541c76d5da4 100644 --- a/arch/x86/include/asm/protected_guest.h +++ b/arch/x86/include/asm/protected_guest.h @@ -12,18 +12,13 @@
#include <linux/mem_encrypt.h>
-#ifndef __ASSEMBLY__
- static inline bool prot_guest_has(unsigned int attr) {
-#ifdef CONFIG_AMD_MEM_ENCRYPT
- if (sme_me_mask)
+	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
		return amd_prot_guest_has(attr);
-#endif
return false; }
-#endif /* __ASSEMBLY__ */
- #endif /* _X86_PROTECTED_GUEST_H */
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index edc67ddf065d..5a0442a6f072 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -392,6 +392,9 @@ bool noinstr sev_es_active(void)
bool amd_prot_guest_has(unsigned int attr) {
+	if (!sme_me_mask)
+		return false;
+
	switch (attr) { case PATTR_MEM_ENCRYPT: return sme_me_mask != 0;
On Sun, Aug 15, 2021 at 08:53:31AM -0500, Tom Lendacky wrote:
It's not so much a cross-vendor thing as a KVM or other hypervisor thing, where the family doesn't have to be reported as AMD or HYGON.
What would be the use case? A HV starts a guest which is supposed to be encrypted using AMD's confidential guest technology, but the HV tells the guest that it is not running on an AMD SVM HV but on something else?
Is that even an actual use case?
Or am I way off?
I know we have talked about this in the past but this still sounds insane.
On 8/15/21 9:39 AM, Borislav Petkov wrote:
On Sun, Aug 15, 2021 at 08:53:31AM -0500, Tom Lendacky wrote:
It's not so much a cross-vendor thing as a KVM or other hypervisor thing, where the family doesn't have to be reported as AMD or HYGON.
What would be the use case? A HV starts a guest which is supposed to be encrypted using AMD's confidential guest technology, but the HV tells the guest that it is not running on an AMD SVM HV but on something else?
Is that even an actual use case?
Or am I way off?
I know we have talked about this in the past but this still sounds insane.
Maybe the KVM folks have a better understanding of it...
I can change it to be an AMD/HYGON check... although I'll have to check to see if any (very) early use of the function will work with that.
At a minimum, the check in arch/x86/kernel/head64.c will have to be changed or removed. I'll take a closer look.
Thanks, Tom
On Tue, Aug 17, 2021 at 10:22:52AM -0500, Tom Lendacky wrote:
I can change it to be an AMD/HYGON check... although I'll have to check to see if any (very) early use of the function will work with that.
We can always change it later if really needed. It is just that I'm not a fan of such "preemptive" changes.
At a minimum, the check in arch/x86/kernel/head64.c will have to be changed or removed. I'll take a closer look.
Yeah, sme_me_mask, already discussed on IRC.
Thx.
Introduce a powerpc version of the prot_guest_has() function. This will be used to replace the powerpc mem_encrypt_active() implementation, so the implementation will initially only support the PATTR_MEM_ENCRYPT attribute.
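Sketch of the intended effect for a generic caller (the caller is illustrative only; force_bounce_buffers() is a placeholder): a mem_encrypt_active() check becomes prot_guest_has(PATTR_MEM_ENCRYPT), which on a pseries secure guest resolves to is_secure_guest():

    /* Before this series */
    if (mem_encrypt_active())
        force_bounce_buffers();

    /* After this series; resolves to is_secure_guest() on pseries SVM */
    if (prot_guest_has(PATTR_MEM_ENCRYPT))
        force_bounce_buffers();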
Cc: Michael Ellerman mpe@ellerman.id.au Cc: Benjamin Herrenschmidt benh@kernel.crashing.org Cc: Paul Mackerras paulus@samba.org Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/powerpc/include/asm/protected_guest.h | 30 ++++++++++++++++++++++ arch/powerpc/platforms/pseries/Kconfig | 1 + 2 files changed, 31 insertions(+) create mode 100644 arch/powerpc/include/asm/protected_guest.h
diff --git a/arch/powerpc/include/asm/protected_guest.h b/arch/powerpc/include/asm/protected_guest.h new file mode 100644 index 000000000000..ce55c2c7e534 --- /dev/null +++ b/arch/powerpc/include/asm/protected_guest.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Protected Guest (and Host) Capability checks + * + * Copyright (C) 2021 Advanced Micro Devices, Inc. + * + * Author: Tom Lendacky thomas.lendacky@amd.com + */ + +#ifndef _POWERPC_PROTECTED_GUEST_H +#define _POWERPC_PROTECTED_GUEST_H + +#include <asm/svm.h> + +#ifndef __ASSEMBLY__ + +static inline bool prot_guest_has(unsigned int attr) +{ + switch (attr) { + case PATTR_MEM_ENCRYPT: + return is_secure_guest(); + + default: + return false; + } +} + +#endif /* __ASSEMBLY__ */ + +#endif /* _POWERPC_PROTECTED_GUEST_H */ diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig index 5e037df2a3a1..8ce5417d6feb 100644 --- a/arch/powerpc/platforms/pseries/Kconfig +++ b/arch/powerpc/platforms/pseries/Kconfig @@ -159,6 +159,7 @@ config PPC_SVM select SWIOTLB select ARCH_HAS_MEM_ENCRYPT select ARCH_HAS_FORCE_DMA_UNENCRYPTED + select ARCH_HAS_PROTECTED_GUEST help There are certain POWER platforms which support secure guests using the Protected Execution Facility, with the help of an Ultravisor
On Fri, Aug 13, 2021 at 11:59:23AM -0500, Tom Lendacky wrote:
Introduce a powerpc version of the prot_guest_has() function. This will be used to replace the powerpc mem_encrypt_active() implementation, so the implementation will initially only support the PATTR_MEM_ENCRYPT attribute.
Cc: Michael Ellerman mpe@ellerman.id.au Cc: Benjamin Herrenschmidt benh@kernel.crashing.org Cc: Paul Mackerras paulus@samba.org Signed-off-by: Tom Lendacky thomas.lendacky@amd.com
arch/powerpc/include/asm/protected_guest.h | 30 ++++++++++++++++++++++ arch/powerpc/platforms/pseries/Kconfig | 1 + 2 files changed, 31 insertions(+) create mode 100644 arch/powerpc/include/asm/protected_guest.h
diff --git a/arch/powerpc/include/asm/protected_guest.h b/arch/powerpc/include/asm/protected_guest.h new file mode 100644 index 000000000000..ce55c2c7e534 --- /dev/null +++ b/arch/powerpc/include/asm/protected_guest.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky thomas.lendacky@amd.com
+ */
+#ifndef _POWERPC_PROTECTED_GUEST_H +#define _POWERPC_PROTECTED_GUEST_H
+#include <asm/svm.h>
+#ifndef __ASSEMBLY__
Same thing here. Pls audit the whole set whether those __ASSEMBLY__ guards are really needed and remove them if not.
Thx.
On 8/17/21 3:35 AM, Borislav Petkov wrote:
On Fri, Aug 13, 2021 at 11:59:23AM -0500, Tom Lendacky wrote:
Introduce a powerpc version of the prot_guest_has() function. This will be used to replace the powerpc mem_encrypt_active() implementation, so the implementation will initially only support the PATTR_MEM_ENCRYPT attribute.
Cc: Michael Ellerman mpe@ellerman.id.au Cc: Benjamin Herrenschmidt benh@kernel.crashing.org Cc: Paul Mackerras paulus@samba.org Signed-off-by: Tom Lendacky thomas.lendacky@amd.com
arch/powerpc/include/asm/protected_guest.h | 30 ++++++++++++++++++++++ arch/powerpc/platforms/pseries/Kconfig | 1 + 2 files changed, 31 insertions(+) create mode 100644 arch/powerpc/include/asm/protected_guest.h
diff --git a/arch/powerpc/include/asm/protected_guest.h b/arch/powerpc/include/asm/protected_guest.h new file mode 100644 index 000000000000..ce55c2c7e534 --- /dev/null +++ b/arch/powerpc/include/asm/protected_guest.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky thomas.lendacky@amd.com
+ */
+#ifndef _POWERPC_PROTECTED_GUEST_H +#define _POWERPC_PROTECTED_GUEST_H
+#include <asm/svm.h>
+#ifndef __ASSEMBLY__
Same thing here. Pls audit the whole set whether those __ASSEMBLY__ guards are really needed and remove them if not.
Will do.
Thanks, Tom
Thx.
Tom Lendacky thomas.lendacky@amd.com writes:
Introduce a powerpc version of the prot_guest_has() function. This will be used to replace the powerpc mem_encrypt_active() implementation, so the implementation will initially only support the PATTR_MEM_ENCRYPT attribute.
Cc: Michael Ellerman mpe@ellerman.id.au Cc: Benjamin Herrenschmidt benh@kernel.crashing.org Cc: Paul Mackerras paulus@samba.org Signed-off-by: Tom Lendacky thomas.lendacky@amd.com
arch/powerpc/include/asm/protected_guest.h | 30 ++++++++++++++++++++++ arch/powerpc/platforms/pseries/Kconfig | 1 + 2 files changed, 31 insertions(+) create mode 100644 arch/powerpc/include/asm/protected_guest.h
diff --git a/arch/powerpc/include/asm/protected_guest.h b/arch/powerpc/include/asm/protected_guest.h new file mode 100644 index 000000000000..ce55c2c7e534 --- /dev/null +++ b/arch/powerpc/include/asm/protected_guest.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky thomas.lendacky@amd.com
+ */
+#ifndef _POWERPC_PROTECTED_GUEST_H +#define _POWERPC_PROTECTED_GUEST_H
Minor nit: we would usually use _ASM_POWERPC_PROTECTED_GUEST_H.
Otherwise looks OK to me.
Acked-by: Michael Ellerman mpe@ellerman.id.au
cheers
Replace occurrences of sme_active() with the more generic prot_guest_has() using PATTR_HOST_MEM_ENCRYPT, except in arch/x86/mm/mem_encrypt*.c, where PATTR_SME will be used. If future support is added for other memory encryption technologies, the use of PATTR_HOST_MEM_ENCRYPT can be updated, as required, to use PATTR_SME.
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Cc: Dave Hansen dave.hansen@linux.intel.com Cc: Andy Lutomirski luto@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Joerg Roedel joro@8bytes.org Cc: Will Deacon will@kernel.org Reviewed-by: Joerg Roedel jroedel@suse.de Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/x86/include/asm/kexec.h | 2 +- arch/x86/include/asm/mem_encrypt.h | 2 -- arch/x86/kernel/machine_kexec_64.c | 3 ++- arch/x86/kernel/pci-swiotlb.c | 9 ++++----- arch/x86/kernel/relocate_kernel_64.S | 2 +- arch/x86/mm/ioremap.c | 6 +++--- arch/x86/mm/mem_encrypt.c | 10 +++++----- arch/x86/mm/mem_encrypt_identity.c | 3 ++- arch/x86/realmode/init.c | 5 +++-- drivers/iommu/amd/init.c | 7 ++++--- 10 files changed, 25 insertions(+), 24 deletions(-)
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h index 0a6e34b07017..11b7c06e2828 100644 --- a/arch/x86/include/asm/kexec.h +++ b/arch/x86/include/asm/kexec.h @@ -129,7 +129,7 @@ relocate_kernel(unsigned long indirection_page, unsigned long page_list, unsigned long start_address, unsigned int preserve_context, - unsigned int sme_active); + unsigned int host_mem_enc_active); #endif
#define ARCH_HAS_KIMAGE_ARCH diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index a46d47662772..956338406cec 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void); void __init mem_encrypt_init(void);
void __init sev_es_init_vc_handling(void); -bool sme_active(void); bool sev_active(void); bool sev_es_active(void); bool amd_prot_guest_has(unsigned int attr); @@ -76,7 +75,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { } static inline void __init sme_enable(struct boot_params *bp) { }
static inline void sev_es_init_vc_handling(void) { } -static inline bool sme_active(void) { return false; } static inline bool sev_active(void) { return false; } static inline bool sev_es_active(void) { return false; } static inline bool amd_prot_guest_has(unsigned int attr) { return false; } diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c index 131f30fdcfbd..8e7b517ad738 100644 --- a/arch/x86/kernel/machine_kexec_64.c +++ b/arch/x86/kernel/machine_kexec_64.c @@ -17,6 +17,7 @@ #include <linux/suspend.h> #include <linux/vmalloc.h> #include <linux/efi.h> +#include <linux/protected_guest.h>
#include <asm/init.h> #include <asm/tlbflush.h> @@ -358,7 +359,7 @@ void machine_kexec(struct kimage *image) (unsigned long)page_list, image->start, image->preserve_context, - sme_active()); + prot_guest_has(PATTR_HOST_MEM_ENCRYPT));
#ifdef CONFIG_KEXEC_JUMP if (image->preserve_context) diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c index c2cfa5e7c152..bd9a9cfbc9a2 100644 --- a/arch/x86/kernel/pci-swiotlb.c +++ b/arch/x86/kernel/pci-swiotlb.c @@ -6,7 +6,7 @@ #include <linux/swiotlb.h> #include <linux/memblock.h> #include <linux/dma-direct.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h>
#include <asm/iommu.h> #include <asm/swiotlb.h> @@ -45,11 +45,10 @@ int __init pci_swiotlb_detect_4gb(void) swiotlb = 1;
/* - * If SME is active then swiotlb will be set to 1 so that bounce - * buffers are allocated and used for devices that do not support - * the addressing range required for the encryption mask. + * Set swiotlb to 1 so that bounce buffers are allocated and used for + * devices that can't support DMA to encrypted memory. */ - if (sme_active()) + if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT)) swiotlb = 1;
return swiotlb; diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S index c53271aebb64..c8fe74a28143 100644 --- a/arch/x86/kernel/relocate_kernel_64.S +++ b/arch/x86/kernel/relocate_kernel_64.S @@ -47,7 +47,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel) * %rsi page_list * %rdx start address * %rcx preserve_context - * %r8 sme_active + * %r8 host_mem_enc_active */
/* Save the CPU context, used for jumping back */ diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index ccff76cedd8f..583afd54c7e1 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -14,7 +14,7 @@ #include <linux/slab.h> #include <linux/vmalloc.h> #include <linux/mmiotrace.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <linux/efi.h> #include <linux/pgtable.h>
@@ -703,7 +703,7 @@ bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size, if (flags & MEMREMAP_DEC) return false;
- if (sme_active()) { + if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT)) { if (memremap_is_setup_data(phys_addr, size) || memremap_is_efi_data(phys_addr, size)) return false; @@ -729,7 +729,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
encrypted_prot = true;
- if (sme_active()) { + if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT)) { if (early_memremap_is_setup_data(phys_addr, size) || memremap_is_efi_data(phys_addr, size)) encrypted_prot = false; diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index edc67ddf065d..5635ca9a1fbe 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -144,7 +144,7 @@ void __init sme_unmap_bootdata(char *real_mode_data) struct boot_params *boot_data; unsigned long cmdline_paddr;
- if (!sme_active()) + if (!amd_prot_guest_has(PATTR_SME)) return;
/* Get the command line address before unmapping the real_mode_data */ @@ -164,7 +164,7 @@ void __init sme_map_bootdata(char *real_mode_data) struct boot_params *boot_data; unsigned long cmdline_paddr;
- if (!sme_active()) + if (!amd_prot_guest_has(PATTR_SME)) return;
__sme_early_map_unmap_mem(real_mode_data, sizeof(boot_params), true); @@ -378,7 +378,7 @@ bool sev_active(void) return sev_status & MSR_AMD64_SEV_ENABLED; }
-bool sme_active(void) +static bool sme_active(void) { return sme_me_mask && !sev_active(); } @@ -428,7 +428,7 @@ bool force_dma_unencrypted(struct device *dev) * device does not support DMA to addresses that include the * encryption mask. */ - if (sme_active()) { + if (amd_prot_guest_has(PATTR_SME)) { u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask)); u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit); @@ -469,7 +469,7 @@ static void print_mem_encrypt_feature_info(void) pr_info("AMD Memory Encryption Features active:");
/* Secure Memory Encryption */ - if (sme_active()) { + if (amd_prot_guest_has(PATTR_SME)) { /* * SME is mutually exclusive with any of the SEV * features below. diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c index 470b20208430..088c8ab7dcc1 100644 --- a/arch/x86/mm/mem_encrypt_identity.c +++ b/arch/x86/mm/mem_encrypt_identity.c @@ -30,6 +30,7 @@ #include <linux/kernel.h> #include <linux/mm.h> #include <linux/mem_encrypt.h> +#include <linux/protected_guest.h>
#include <asm/setup.h> #include <asm/sections.h> @@ -287,7 +288,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp) unsigned long pgtable_area_len; unsigned long decrypted_base;
- if (!sme_active()) + if (!prot_guest_has(PATTR_SME)) return;
/* diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c index 6534c92d0f83..2109ae569c67 100644 --- a/arch/x86/realmode/init.c +++ b/arch/x86/realmode/init.c @@ -3,6 +3,7 @@ #include <linux/slab.h> #include <linux/memblock.h> #include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <linux/pgtable.h>
#include <asm/set_memory.h> @@ -44,7 +45,7 @@ void __init reserve_real_mode(void) static void sme_sev_setup_real_mode(struct trampoline_header *th) { #ifdef CONFIG_AMD_MEM_ENCRYPT - if (sme_active()) + if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT)) th->flags |= TH_FLAGS_SME_ACTIVE;
if (sev_es_active()) { @@ -81,7 +82,7 @@ static void __init setup_real_mode(void) * decrypted memory in order to bring up other processors * successfully. This is not needed for SEV. */ - if (sme_active()) + if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT)) set_memory_decrypted((unsigned long)base, size >> PAGE_SHIFT);
memcpy(base, real_mode_blob, size); diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c index 46280e6e1535..05e770e3e631 100644 --- a/drivers/iommu/amd/init.c +++ b/drivers/iommu/amd/init.c @@ -20,7 +20,7 @@ #include <linux/amd-iommu.h> #include <linux/export.h> #include <linux/kmemleak.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <asm/pci-direct.h> #include <asm/iommu.h> #include <asm/apic.h> @@ -965,7 +965,7 @@ static bool copy_device_table(void) pr_err("The address of old device table is above 4G, not trustworthy!\n"); return false; } - old_devtb = (sme_active() && is_kdump_kernel()) + old_devtb = (prot_guest_has(PATTR_HOST_MEM_ENCRYPT) && is_kdump_kernel()) ? (__force void *)ioremap_encrypted(old_devtb_phys, dev_table_size) : memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB); @@ -3022,7 +3022,8 @@ static int __init amd_iommu_init(void)
static bool amd_iommu_sme_check(void) { - if (!sme_active() || (boot_cpu_data.x86 != 0x17)) + if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT) || + (boot_cpu_data.x86 != 0x17)) return true;
/* For Fam17h, a specific level of support is required */
On Fri, Aug 13, 2021 at 11:59:24AM -0500, Tom Lendacky wrote:
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index edc67ddf065d..5635ca9a1fbe 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -144,7 +144,7 @@ void __init sme_unmap_bootdata(char *real_mode_data) struct boot_params *boot_data; unsigned long cmdline_paddr;
- if (!sme_active())
+	if (!amd_prot_guest_has(PATTR_SME))
		return;
/* Get the command line address before unmapping the real_mode_data */
@@ -164,7 +164,7 @@ void __init sme_map_bootdata(char *real_mode_data) struct boot_params *boot_data; unsigned long cmdline_paddr;
- if (!sme_active())
+	if (!amd_prot_guest_has(PATTR_SME))
		return;
__sme_early_map_unmap_mem(real_mode_data, sizeof(boot_params), true);
@@ -378,7 +378,7 @@ bool sev_active(void) return sev_status & MSR_AMD64_SEV_ENABLED; }
-bool sme_active(void) +static bool sme_active(void)
Just get rid of it altogether. Also, there's an
EXPORT_SYMBOL_GPL(sev_active);
which needs to go under the actual function. Here's a diff ontop:
--- diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index 5635ca9a1fbe..a3a2396362a5 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -364,8 +364,9 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size) /* * SME and SEV are very similar but they are not the same, so there are * times that the kernel will need to distinguish between SME and SEV. The - * sme_active() and sev_active() functions are used for this. When a - * distinction isn't needed, the mem_encrypt_active() function can be used. + * PATTR_HOST_MEM_ENCRYPT and PATTR_GUEST_MEM_ENCRYPT flags to + * amd_prot_guest_has() are used for this. When a distinction isn't needed, + * the mem_encrypt_active() function can be used. * * The trampoline code is a good example for this requirement. Before * paging is activated, SME will access all memory as decrypted, but SEV @@ -377,11 +378,6 @@ bool sev_active(void) { return sev_status & MSR_AMD64_SEV_ENABLED; } - -static bool sme_active(void) -{ - return sme_me_mask && !sev_active(); -} EXPORT_SYMBOL_GPL(sev_active);
/* Needs to be called from non-instrumentable code */ @@ -398,7 +394,7 @@ bool amd_prot_guest_has(unsigned int attr)
case PATTR_SME: case PATTR_HOST_MEM_ENCRYPT: - return sme_active(); + return sme_me_mask && !sev_active();
case PATTR_SEV: case PATTR_GUEST_MEM_ENCRYPT:
{ return sme_me_mask && !sev_active(); } @@ -428,7 +428,7 @@ bool force_dma_unencrypted(struct device *dev) * device does not support DMA to addresses that include the * encryption mask. */
- if (sme_active()) {
+	if (amd_prot_guest_has(PATTR_SME)) {
So I'm not sure: you add PATTR_SME which you call with amd_prot_guest_has() and PATTR_HOST_MEM_ENCRYPT which you call with prot_guest_has() and they both end up being the same thing on AMD.
So why even bother with PATTR_SME?
This is only going to cause confusion later and I'd say let's simply use prot_guest_has(PATTR_HOST_MEM_ENCRYPT) everywhere...
On 8/17/21 4:00 AM, Borislav Petkov wrote:
On Fri, Aug 13, 2021 at 11:59:24AM -0500, Tom Lendacky wrote:
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index edc67ddf065d..5635ca9a1fbe 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -144,7 +144,7 @@ void __init sme_unmap_bootdata(char *real_mode_data) struct boot_params *boot_data; unsigned long cmdline_paddr;
- if (!sme_active())
+	if (!amd_prot_guest_has(PATTR_SME))
		return;
/* Get the command line address before unmapping the real_mode_data */
@@ -164,7 +164,7 @@ void __init sme_map_bootdata(char *real_mode_data) struct boot_params *boot_data; unsigned long cmdline_paddr;
- if (!sme_active())
+	if (!amd_prot_guest_has(PATTR_SME))
		return;
__sme_early_map_unmap_mem(real_mode_data, sizeof(boot_params), true);
@@ -378,7 +378,7 @@ bool sev_active(void) return sev_status & MSR_AMD64_SEV_ENABLED; }
-bool sme_active(void) +static bool sme_active(void)
Just get rid of it altogether. Also, there's an
EXPORT_SYMBOL_GPL(sev_active);
which needs to go under the actual function. Here's a diff ontop:
Will do.
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index 5635ca9a1fbe..a3a2396362a5 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -364,8 +364,9 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size) /*
 * SME and SEV are very similar but they are not the same, so there are
 * times that the kernel will need to distinguish between SME and SEV. The
- * sme_active() and sev_active() functions are used for this. When a
- * distinction isn't needed, the mem_encrypt_active() function can be used.
+ * PATTR_HOST_MEM_ENCRYPT and PATTR_GUEST_MEM_ENCRYPT flags to
+ * amd_prot_guest_has() are used for this. When a distinction isn't needed,
+ * the mem_encrypt_active() function can be used.
 *
 * The trampoline code is a good example for this requirement. Before
 * paging is activated, SME will access all memory as decrypted, but SEV
@@ -377,11 +378,6 @@ bool sev_active(void) { return sev_status & MSR_AMD64_SEV_ENABLED; }
-static bool sme_active(void) -{
- return sme_me_mask && !sev_active();
-} EXPORT_SYMBOL_GPL(sev_active);
/* Needs to be called from non-instrumentable code */ @@ -398,7 +394,7 @@ bool amd_prot_guest_has(unsigned int attr)
case PATTR_SME: case PATTR_HOST_MEM_ENCRYPT:
-		return sme_active();
+		return sme_me_mask && !sev_active();
case PATTR_SEV: case PATTR_GUEST_MEM_ENCRYPT:
{ return sme_me_mask && !sev_active(); } @@ -428,7 +428,7 @@ bool force_dma_unencrypted(struct device *dev) * device does not support DMA to addresses that include the * encryption mask. */
- if (sme_active()) {
+	if (amd_prot_guest_has(PATTR_SME)) {
So I'm not sure: you add PATTR_SME which you call with amd_prot_guest_has() and PATTR_HOST_MEM_ENCRYPT which you call with prot_guest_has() and they both end up being the same thing on AMD.
So why even bother with PATTR_SME?
This is only going to cause confusion later and I'd say let's simply use prot_guest_has(PATTR_HOST_MEM_ENCRYPT) everywhere...
Ok, I can do that. I was trying to ensure that anything that is truly SME- or SEV-specific would be called out now.
I'm ok with letting the TDX folks make changes to these calls to be SME or SEV specific, if necessary, later.
Thanks, Tom
On Tue, Aug 17, 2021 at 09:46:58AM -0500, Tom Lendacky wrote:
I'm ok with letting the TDX folks make changes to these calls to be SME or SEV specific, if necessary, later.
Yap, exactly. Let's add the specific stuff only when really needed.
Thx.
Replace occurrences of sev_active() with the more generic prot_guest_has() using PATTR_GUEST_MEM_ENCRYPT, except in arch/x86/mm/mem_encrypt*.c, where PATTR_SEV will be used. If future support is added for other memory encryption technologies, the use of PATTR_GUEST_MEM_ENCRYPT can be updated, as required, to use PATTR_SEV.
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Cc: Dave Hansen dave.hansen@linux.intel.com Cc: Andy Lutomirski luto@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Ard Biesheuvel ardb@kernel.org Reviewed-by: Joerg Roedel jroedel@suse.de Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/x86/include/asm/mem_encrypt.h | 2 -- arch/x86/kernel/crash_dump_64.c | 4 +++- arch/x86/kernel/kvm.c | 3 ++- arch/x86/kernel/kvmclock.c | 4 ++-- arch/x86/kernel/machine_kexec_64.c | 16 ++++++++-------- arch/x86/kvm/svm/svm.c | 3 ++- arch/x86/mm/ioremap.c | 6 +++--- arch/x86/mm/mem_encrypt.c | 15 +++++++-------- arch/x86/platform/efi/efi_64.c | 9 +++++---- 9 files changed, 32 insertions(+), 30 deletions(-)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 956338406cec..7e25de37c148 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void); void __init mem_encrypt_init(void);
void __init sev_es_init_vc_handling(void); -bool sev_active(void); bool sev_es_active(void); bool amd_prot_guest_has(unsigned int attr);
@@ -75,7 +74,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { } static inline void __init sme_enable(struct boot_params *bp) { }
static inline void sev_es_init_vc_handling(void) { } -static inline bool sev_active(void) { return false; } static inline bool sev_es_active(void) { return false; } static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c index 045e82e8945b..0cfe35f03e67 100644 --- a/arch/x86/kernel/crash_dump_64.c +++ b/arch/x86/kernel/crash_dump_64.c @@ -10,6 +10,7 @@ #include <linux/crash_dump.h> #include <linux/uaccess.h> #include <linux/io.h> +#include <linux/protected_guest.h>
static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize, unsigned long offset, int userbuf, @@ -73,5 +74,6 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos) { - return read_from_oldmem(buf, count, ppos, 0, sev_active()); + return read_from_oldmem(buf, count, ppos, 0, + prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)); } diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index a26643dc6bd6..9d08ad2f3faa 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -27,6 +27,7 @@ #include <linux/nmi.h> #include <linux/swait.h> #include <linux/syscore_ops.h> +#include <linux/protected_guest.h> #include <asm/timer.h> #include <asm/cpu.h> #include <asm/traps.h> @@ -418,7 +419,7 @@ static void __init sev_map_percpu_data(void) { int cpu;
- if (!sev_active()) + if (!prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) return;
for_each_possible_cpu(cpu) { diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c index ad273e5861c1..f7ba78a23dcd 100644 --- a/arch/x86/kernel/kvmclock.c +++ b/arch/x86/kernel/kvmclock.c @@ -16,9 +16,9 @@ #include <linux/mm.h> #include <linux/slab.h> #include <linux/set_memory.h> +#include <linux/protected_guest.h>
#include <asm/hypervisor.h> -#include <asm/mem_encrypt.h> #include <asm/x86_init.h> #include <asm/kvmclock.h>
@@ -232,7 +232,7 @@ static void __init kvmclock_init_mem(void) * hvclock is shared between the guest and the hypervisor, must * be mapped decrypted. */ - if (sev_active()) { + if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) { r = set_memory_decrypted((unsigned long) hvclock_mem, 1UL << order); if (r) { diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c index 8e7b517ad738..66ff788b79c9 100644 --- a/arch/x86/kernel/machine_kexec_64.c +++ b/arch/x86/kernel/machine_kexec_64.c @@ -167,7 +167,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd) } pte = pte_offset_kernel(pmd, vaddr);
- if (sev_active()) + if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) prot = PAGE_KERNEL_EXEC;
set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot)); @@ -207,7 +207,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable) level4p = (pgd_t *)__va(start_pgtable); clear_page(level4p);
- if (sev_active()) { + if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) { info.page_flag |= _PAGE_ENC; info.kernpg_flag |= _PAGE_ENC; } @@ -570,12 +570,12 @@ void arch_kexec_unprotect_crashkres(void) */ int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp) { - if (sev_active()) + if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT)) return 0;
/* - * If SME is active we need to be sure that kexec pages are - * not encrypted because when we boot to the new kernel the + * If host memory encryption is active we need to be sure that kexec + * pages are not encrypted because when we boot to the new kernel the * pages won't be accessed encrypted (initially). */ return set_memory_decrypted((unsigned long)vaddr, pages); @@ -583,12 +583,12 @@ int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages) { - if (sev_active()) + if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT)) return;
/* - * If SME is active we need to reset the pages back to being - * an encrypted mapping before freeing them. + * If host memory encryption is active we need to reset the pages back + * to being an encrypted mapping before freeing them. */ set_memory_encrypted((unsigned long)vaddr, pages); } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index e8ccab50ebf6..b69f5ac622d5 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -25,6 +25,7 @@ #include <linux/pagemap.h> #include <linux/swap.h> #include <linux/rwsem.h> +#include <linux/protected_guest.h>
#include <asm/apic.h> #include <asm/perf_event.h> @@ -457,7 +458,7 @@ static int has_svm(void) return 0; }
- if (sev_active()) { + if (prot_guest_has(PATTR_SEV)) { pr_info("KVM is unsupported when running as an SEV guest\n"); return 0; } diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 583afd54c7e1..3ed0f28f12af 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -92,7 +92,7 @@ static unsigned int __ioremap_check_ram(struct resource *res) */ static unsigned int __ioremap_check_encrypted(struct resource *res) { - if (!sev_active()) + if (!prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) return 0;
switch (res->desc) { @@ -112,7 +112,7 @@ static unsigned int __ioremap_check_encrypted(struct resource *res) */ static void __ioremap_check_other(resource_size_t addr, struct ioremap_desc *desc) { - if (!sev_active()) + if (!prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) return;
if (!IS_ENABLED(CONFIG_EFI)) @@ -556,7 +556,7 @@ static bool memremap_should_map_decrypted(resource_size_t phys_addr, case E820_TYPE_NVS: case E820_TYPE_UNUSABLE: /* For SEV, these areas are encrypted */ - if (sev_active()) + if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) break; fallthrough;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index 5635ca9a1fbe..83bc928f529e 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -194,7 +194,7 @@ void __init sme_early_init(void) for (i = 0; i < ARRAY_SIZE(protection_map); i++) protection_map[i] = pgprot_encrypted(protection_map[i]);
- if (sev_active()) + if (amd_prot_guest_has(PATTR_SEV)) swiotlb_force = SWIOTLB_FORCE; }
@@ -203,7 +203,7 @@ void __init sev_setup_arch(void) phys_addr_t total_mem = memblock_phys_mem_size(); unsigned long size;
- if (!sev_active()) + if (!amd_prot_guest_has(PATTR_SEV)) return;
/* @@ -373,7 +373,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size) * up under SME the trampoline area cannot be encrypted, whereas under SEV * the trampoline area must be encrypted. */ -bool sev_active(void) +static bool sev_active(void) { return sev_status & MSR_AMD64_SEV_ENABLED; } @@ -382,7 +382,6 @@ static bool sme_active(void) { return sme_me_mask && !sev_active(); } -EXPORT_SYMBOL_GPL(sev_active);
/* Needs to be called from non-instrumentable code */ bool noinstr sev_es_active(void) @@ -420,7 +419,7 @@ bool force_dma_unencrypted(struct device *dev) /* * For SEV, all DMA must be to unencrypted addresses. */ - if (sev_active()) + if (amd_prot_guest_has(PATTR_SEV)) return true;
/* @@ -479,7 +478,7 @@ static void print_mem_encrypt_feature_info(void) }
/* Secure Encrypted Virtualization */ - if (sev_active()) + if (amd_prot_guest_has(PATTR_SEV)) pr_cont(" SEV");
/* Encrypted Register State */ @@ -502,7 +501,7 @@ void __init mem_encrypt_init(void) * With SEV, we need to unroll the rep string I/O instructions, * but SEV-ES supports them through the #VC handler. */ - if (sev_active() && !sev_es_active()) + if (amd_prot_guest_has(PATTR_SEV) && !sev_es_active()) static_branch_enable(&sev_enable_key);
print_mem_encrypt_feature_info(); @@ -510,6 +509,6 @@ void __init mem_encrypt_init(void)
int arch_has_restricted_virtio_memory_access(void) { - return sev_active(); + return amd_prot_guest_has(PATTR_SEV); } EXPORT_SYMBOL_GPL(arch_has_restricted_virtio_memory_access); diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c index 7515e78ef898..94737fcc1e21 100644 --- a/arch/x86/platform/efi/efi_64.c +++ b/arch/x86/platform/efi/efi_64.c @@ -33,7 +33,7 @@ #include <linux/reboot.h> #include <linux/slab.h> #include <linux/ucs2_string.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <linux/sched/task.h>
#include <asm/setup.h> @@ -284,7 +284,8 @@ static void __init __map_region(efi_memory_desc_t *md, u64 va) if (!(md->attribute & EFI_MEMORY_WB)) flags |= _PAGE_PCD;
- if (sev_active() && md->type != EFI_MEMORY_MAPPED_IO) + if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT) && + md->type != EFI_MEMORY_MAPPED_IO) flags |= _PAGE_ENC;
pfn = md->phys_addr >> PAGE_SHIFT; @@ -390,7 +391,7 @@ static int __init efi_update_mem_attr(struct mm_struct *mm, efi_memory_desc_t *m if (!(md->attribute & EFI_MEMORY_RO)) pf |= _PAGE_RW;
- if (sev_active()) + if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) pf |= _PAGE_ENC;
return efi_update_mappings(md, pf); @@ -438,7 +439,7 @@ void __init efi_runtime_update_mappings(void) (md->type != EFI_RUNTIME_SERVICES_CODE)) pf |= _PAGE_RW;
- if (sev_active()) + if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) pf |= _PAGE_ENC;
efi_update_mappings(md, pf);
On Fri, Aug 13, 2021 at 11:59:25AM -0500, Tom Lendacky wrote:
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c index 8e7b517ad738..66ff788b79c9 100644 --- a/arch/x86/kernel/machine_kexec_64.c +++ b/arch/x86/kernel/machine_kexec_64.c @@ -167,7 +167,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd) } pte = pte_offset_kernel(pmd, vaddr);
-	if (sev_active())
+	if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
 		prot = PAGE_KERNEL_EXEC;
set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
@@ -207,7 +207,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable) level4p = (pgd_t *)__va(start_pgtable); clear_page(level4p);
-	if (sev_active()) {
+	if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
 		info.page_flag   |= _PAGE_ENC;
 		info.kernpg_flag |= _PAGE_ENC;
 	}
@@ -570,12 +570,12 @@ void arch_kexec_unprotect_crashkres(void) */ int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp) {
-	if (sev_active())
+	if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		return 0;

 	/*
-	 * If SME is active we need to be sure that kexec pages are
-	 * not encrypted because when we boot to the new kernel the
+	 * If host memory encryption is active we need to be sure that kexec
+	 * pages are not encrypted because when we boot to the new kernel the
 	 * pages won't be accessed encrypted (initially).
 	 */
That hunk belongs logically into the previous patch which removes sme_active().
return set_memory_decrypted((unsigned long)vaddr, pages); @@ -583,12 +583,12 @@ int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages) {
-	if (sev_active())
+	if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		return;

 	/*
-	 * If SME is active we need to reset the pages back to being
-	 * an encrypted mapping before freeing them.
+	 * If host memory encryption is active we need to reset the pages back
+	 * to being an encrypted mapping before freeing them.
 	 */
 	set_memory_encrypted((unsigned long)vaddr, pages);
} diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index e8ccab50ebf6..b69f5ac622d5 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -25,6 +25,7 @@ #include <linux/pagemap.h> #include <linux/swap.h> #include <linux/rwsem.h> +#include <linux/protected_guest.h>
#include <asm/apic.h> #include <asm/perf_event.h> @@ -457,7 +458,7 @@ static int has_svm(void) return 0; }
-	if (sev_active()) {
+	if (prot_guest_has(PATTR_SEV)) {
 		pr_info("KVM is unsupported when running as an SEV guest\n");
 		return 0;
Same question as for PATTR_SME. PATTR_GUEST_MEM_ENCRYPT should be enough.
@@ -373,7 +373,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
 * up under SME the trampoline area cannot be encrypted, whereas under SEV
 * the trampoline area must be encrypted.
*/ -bool sev_active(void) +static bool sev_active(void) { return sev_status & MSR_AMD64_SEV_ENABLED; } @@ -382,7 +382,6 @@ static bool sme_active(void) { return sme_me_mask && !sev_active(); } -EXPORT_SYMBOL_GPL(sev_active);
Just get rid of it altogether.
Thx.
On 8/17/21 5:02 AM, Borislav Petkov wrote:
On Fri, Aug 13, 2021 at 11:59:25AM -0500, Tom Lendacky wrote:
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c index 8e7b517ad738..66ff788b79c9 100644 --- a/arch/x86/kernel/machine_kexec_64.c +++ b/arch/x86/kernel/machine_kexec_64.c @@ -167,7 +167,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd) } pte = pte_offset_kernel(pmd, vaddr);
-	if (sev_active())
+	if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
 		prot = PAGE_KERNEL_EXEC;
set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
@@ -207,7 +207,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable) level4p = (pgd_t *)__va(start_pgtable); clear_page(level4p);
-	if (sev_active()) {
+	if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
 		info.page_flag   |= _PAGE_ENC;
 		info.kernpg_flag |= _PAGE_ENC;
 	}
@@ -570,12 +570,12 @@ void arch_kexec_unprotect_crashkres(void) */ int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp) {
-	if (sev_active())
+	if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		return 0;

 	/*
-	 * If SME is active we need to be sure that kexec pages are
-	 * not encrypted because when we boot to the new kernel the
+	 * If host memory encryption is active we need to be sure that kexec
+	 * pages are not encrypted because when we boot to the new kernel the
 	 * pages won't be accessed encrypted (initially).
 	 */
That hunk belongs logically into the previous patch which removes sme_active().
I was trying to keep the sev_active() changes separate... so even though it's an SME thing, I kept it here. But I can move it to the previous patch, it just might look strange.
return set_memory_decrypted((unsigned long)vaddr, pages); @@ -583,12 +583,12 @@ int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages) {
-	if (sev_active())
+	if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		return;

 	/*
-	 * If SME is active we need to reset the pages back to being
-	 * an encrypted mapping before freeing them.
+	 * If host memory encryption is active we need to reset the pages back
+	 * to being an encrypted mapping before freeing them.
 	 */
 	set_memory_encrypted((unsigned long)vaddr, pages);
} diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index e8ccab50ebf6..b69f5ac622d5 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -25,6 +25,7 @@ #include <linux/pagemap.h> #include <linux/swap.h> #include <linux/rwsem.h> +#include <linux/protected_guest.h>
#include <asm/apic.h> #include <asm/perf_event.h> @@ -457,7 +458,7 @@ static int has_svm(void) return 0; }
-	if (sev_active()) {
+	if (prot_guest_has(PATTR_SEV)) {
 		pr_info("KVM is unsupported when running as an SEV guest\n");
 		return 0;
Same question as for PATTR_SME. PATTR_GUEST_MEM_ENCRYPT should be enough.
Yup, I'll change them all.
@@ -373,7 +373,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
 * up under SME the trampoline area cannot be encrypted, whereas under SEV
 * the trampoline area must be encrypted.
*/ -bool sev_active(void) +static bool sev_active(void) { return sev_status & MSR_AMD64_SEV_ENABLED; } @@ -382,7 +382,6 @@ static bool sme_active(void) { return sme_me_mask && !sev_active(); } -EXPORT_SYMBOL_GPL(sev_active);
Just get rid of it altogether.
Ok.
Thanks, Tom
Thx.
On Tue, Aug 17, 2021 at 10:26:18AM -0500, Tom Lendacky wrote:
 	/*
-	 * If SME is active we need to be sure that kexec pages are
-	 * not encrypted because when we boot to the new kernel the
+	 * If host memory encryption is active we need to be sure that kexec
+	 * pages are not encrypted because when we boot to the new kernel the
 	 * pages won't be accessed encrypted (initially).
 	 */
That hunk belongs logically into the previous patch which removes sme_active().
I was trying to keep the sev_active() changes separate... so even though it's an SME thing, I kept it here. But I can move it to the previous patch, it just might look strange.
Oh I meant only the comment because it is a SME-related change. But not too important so whatever.
Replace occurrences of sev_es_active() with the more generic prot_guest_has() using PATTR_GUEST_PROT_STATE, except in arch/x86/kernel/sev*.c and arch/x86/mm/mem_encrypt*.c, where PATTR_SEV_ES will be used. If future support is added for other memory encryption technologies, the use of PATTR_GUEST_PROT_STATE can be updated, as required, to specifically use PATTR_SEV_ES.
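For reference, a rough sketch of how the new attribute is expected to resolve on x86. The real dispatch lives in amd_prot_guest_has() added earlier in this series, so the helper below is only an illustration of the assumed mapping, not the actual implementation:

	/*
	 * Sketch only: assumes the AMD helper maps both the generic and the
	 * AMD-specific attribute onto the SEV-ES status bit, mirroring the
	 * sev_es_active() check shown in the mem_encrypt.c hunk below.
	 */
	static bool sev_es_attr_sketch(unsigned int attr)
	{
		if (attr == PATTR_SEV_ES || attr == PATTR_GUEST_PROT_STATE)
			return sev_status & MSR_AMD64_SEV_ES_ENABLED;

		return false;
	}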
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/x86/include/asm/mem_encrypt.h | 2 -- arch/x86/kernel/sev.c | 6 +++--- arch/x86/mm/mem_encrypt.c | 7 +++---- arch/x86/realmode/init.c | 3 +-- 4 files changed, 7 insertions(+), 11 deletions(-)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 7e25de37c148..797146e0cd6b 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void); void __init mem_encrypt_init(void);
void __init sev_es_init_vc_handling(void); -bool sev_es_active(void); bool amd_prot_guest_has(unsigned int attr);
#define __bss_decrypted __section(".bss..decrypted") @@ -74,7 +73,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { } static inline void __init sme_enable(struct boot_params *bp) { }
static inline void sev_es_init_vc_handling(void) { } -static inline bool sev_es_active(void) { return false; } static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
static inline int __init diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index a6895e440bc3..66a4ab9d95d7 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -11,7 +11,7 @@
#include <linux/sched/debug.h> /* For show_regs() */ #include <linux/percpu-defs.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <linux/printk.h> #include <linux/mm_types.h> #include <linux/set_memory.h> @@ -615,7 +615,7 @@ int __init sev_es_efi_map_ghcbs(pgd_t *pgd) int cpu; u64 pfn;
- if (!sev_es_active()) + if (!prot_guest_has(PATTR_SEV_ES)) return 0;
pflags = _PAGE_NX | _PAGE_RW; @@ -774,7 +774,7 @@ void __init sev_es_init_vc_handling(void)
BUILD_BUG_ON(offsetof(struct sev_es_runtime_data, ghcb_page) % PAGE_SIZE);
- if (!sev_es_active()) + if (!prot_guest_has(PATTR_SEV_ES)) return;
if (!sev_es_check_cpu_features()) diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index 83bc928f529e..38dfa84b77a1 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -383,8 +383,7 @@ static bool sme_active(void) return sme_me_mask && !sev_active(); }
-/* Needs to be called from non-instrumentable code */ -bool noinstr sev_es_active(void) +static bool sev_es_active(void) { return sev_status & MSR_AMD64_SEV_ES_ENABLED; } @@ -482,7 +481,7 @@ static void print_mem_encrypt_feature_info(void) pr_cont(" SEV");
/* Encrypted Register State */ - if (sev_es_active()) + if (amd_prot_guest_has(PATTR_SEV_ES)) pr_cont(" SEV-ES");
pr_cont("\n"); @@ -501,7 +500,7 @@ void __init mem_encrypt_init(void) * With SEV, we need to unroll the rep string I/O instructions, * but SEV-ES supports them through the #VC handler. */ - if (amd_prot_guest_has(PATTR_SEV) && !sev_es_active()) + if (amd_prot_guest_has(PATTR_SEV) && !amd_prot_guest_has(PATTR_SEV_ES)) static_branch_enable(&sev_enable_key);
print_mem_encrypt_feature_info(); diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c index 2109ae569c67..7711d0071f41 100644 --- a/arch/x86/realmode/init.c +++ b/arch/x86/realmode/init.c @@ -2,7 +2,6 @@ #include <linux/io.h> #include <linux/slab.h> #include <linux/memblock.h> -#include <linux/mem_encrypt.h> #include <linux/protected_guest.h> #include <linux/pgtable.h>
@@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct trampoline_header *th) if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT)) th->flags |= TH_FLAGS_SME_ACTIVE;
- if (sev_es_active()) { + if (prot_guest_has(PATTR_GUEST_PROT_STATE)) { /* * Skip the call to verify_cpu() in secondary_startup_64 as it * will cause #VC exceptions when the AP can't handle them yet.
On Fri, Aug 13, 2021 at 11:59:26AM -0500, Tom Lendacky wrote:
Replace occurrences of sev_es_active() with the more generic prot_guest_has() using PATTR_GUEST_PROT_STATE, except in arch/x86/kernel/sev*.c and arch/x86/mm/mem_encrypt*.c, where PATTR_SEV_ES will be used. If future support is added for other memory encryption technologies, the use of PATTR_GUEST_PROT_STATE can be updated, as required, to specifically use PATTR_SEV_ES.
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Signed-off-by: Tom Lendacky thomas.lendacky@amd.com
arch/x86/include/asm/mem_encrypt.h | 2 -- arch/x86/kernel/sev.c | 6 +++--- arch/x86/mm/mem_encrypt.c | 7 +++---- arch/x86/realmode/init.c | 3 +-- 4 files changed, 7 insertions(+), 11 deletions(-)
Same comments to this one as for the previous two.
Replace occurrences of mem_encrypt_active() with calls to prot_guest_has() with the PATTR_MEM_ENCRYPT attribute.
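As a hedged illustration of the resulting call-site pattern (only prot_guest_has() and PATTR_MEM_ENCRYPT come from this series; the helper name below is made up for the example):

	#include <linux/protected_guest.h>

	/* Illustrative only: a driver-style check replacing mem_encrypt_active(). */
	static bool example_needs_bounce_buffers(void)
	{
		/*
		 * Any active memory encryption (host- or guest-side) means
		 * device DMA may not see cleartext, so route it through
		 * SWIOTLB bounce buffers.
		 */
		return prot_guest_has(PATTR_MEM_ENCRYPT);
	}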
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Cc: Dave Hansen dave.hansen@linux.intel.com Cc: Andy Lutomirski luto@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: David Airlie airlied@linux.ie Cc: Daniel Vetter daniel@ffwll.ch Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com Cc: Maxime Ripard mripard@kernel.org Cc: Thomas Zimmermann tzimmermann@suse.de Cc: VMware Graphics linux-graphics-maintainer@vmware.com Cc: Joerg Roedel joro@8bytes.org Cc: Will Deacon will@kernel.org Cc: Dave Young dyoung@redhat.com Cc: Baoquan He bhe@redhat.com Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/x86/kernel/head64.c | 4 ++-- arch/x86/mm/ioremap.c | 4 ++-- arch/x86/mm/mem_encrypt.c | 5 ++--- arch/x86/mm/pat/set_memory.c | 3 ++- drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +++- drivers/gpu/drm/drm_cache.c | 4 ++-- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 ++-- drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +++--- drivers/iommu/amd/iommu.c | 3 ++- drivers/iommu/amd/iommu_v2.c | 3 ++- drivers/iommu/iommu.c | 3 ++- fs/proc/vmcore.c | 6 +++--- kernel/dma/swiotlb.c | 4 ++-- 13 files changed, 29 insertions(+), 24 deletions(-)
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index de01903c3735..cafed6456d45 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -19,7 +19,7 @@ #include <linux/start_kernel.h> #include <linux/io.h> #include <linux/memblock.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <linux/pgtable.h>
#include <asm/processor.h> @@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long physaddr, * there is no need to zero it after changing the memory encryption * attribute. */ - if (mem_encrypt_active()) { + if (prot_guest_has(PATTR_MEM_ENCRYPT)) { vaddr = (unsigned long)__start_bss_decrypted; vaddr_end = (unsigned long)__end_bss_decrypted; for (; vaddr < vaddr_end; vaddr += PMD_SIZE) { diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 3ed0f28f12af..7f012fc1b600 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -694,7 +694,7 @@ static bool __init early_memremap_is_setup_data(resource_size_t phys_addr, bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size, unsigned long flags) { - if (!mem_encrypt_active()) + if (!prot_guest_has(PATTR_MEM_ENCRYPT)) return true;
if (flags & MEMREMAP_ENC) @@ -724,7 +724,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr, { bool encrypted_prot;
- if (!mem_encrypt_active()) + if (!prot_guest_has(PATTR_MEM_ENCRYPT)) return prot;
encrypted_prot = true; diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index 38dfa84b77a1..69aed9935b5e 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -364,8 +364,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size) /* * SME and SEV are very similar but they are not the same, so there are * times that the kernel will need to distinguish between SME and SEV. The - * sme_active() and sev_active() functions are used for this. When a - * distinction isn't needed, the mem_encrypt_active() function can be used. + * sme_active() and sev_active() functions are used for this. * * The trampoline code is a good example for this requirement. Before * paging is activated, SME will access all memory as decrypted, but SEV @@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void) * The unused memory range was mapped decrypted, change the encryption * attribute from decrypted to encrypted before freeing it. */ - if (mem_encrypt_active()) { + if (amd_prot_guest_has(PATTR_MEM_ENCRYPT)) { r = set_memory_encrypted(vaddr, npages); if (r) { pr_warn("failed to free unused decrypted pages\n"); diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index ad8a5c586a35..6925f2bb4be1 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -18,6 +18,7 @@ #include <linux/libnvdimm.h> #include <linux/vmstat.h> #include <linux/kernel.h> +#include <linux/protected_guest.h>
#include <asm/e820/api.h> #include <asm/processor.h> @@ -1986,7 +1987,7 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) int ret;
/* Nothing to do if memory encryption is not active */ - if (!mem_encrypt_active()) + if (!prot_guest_has(PATTR_MEM_ENCRYPT)) return 0;
/* Should not be working on unaligned addresses */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c index 971c5b8e75dc..21c1e3056070 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c @@ -38,6 +38,7 @@ #include <drm/drm_probe_helper.h> #include <linux/mmu_notifier.h> #include <linux/suspend.h> +#include <linux/protected_guest.h>
#include "amdgpu.h" #include "amdgpu_irq.h" @@ -1250,7 +1251,8 @@ static int amdgpu_pci_probe(struct pci_dev *pdev, * however, SME requires an indirect IOMMU mapping because the encryption * bit is beyond the DMA mask of the chip. */ - if (mem_encrypt_active() && ((flags & AMD_ASIC_MASK) == CHIP_RAVEN)) { + if (prot_guest_has(PATTR_MEM_ENCRYPT) && + ((flags & AMD_ASIC_MASK) == CHIP_RAVEN)) { dev_info(&pdev->dev, "SME is not compatible with RAVEN\n"); return -ENOTSUPP; diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c index 546599f19a93..4d01d44012fd 100644 --- a/drivers/gpu/drm/drm_cache.c +++ b/drivers/gpu/drm/drm_cache.c @@ -31,7 +31,7 @@ #include <linux/dma-buf-map.h> #include <linux/export.h> #include <linux/highmem.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <xen/xen.h>
#include <drm/drm_cache.h> @@ -204,7 +204,7 @@ bool drm_need_swiotlb(int dma_bits) * Enforce dma_alloc_coherent when memory encryption is active as well * for the same reasons as for Xen paravirtual hosts. */ - if (mem_encrypt_active()) + if (prot_guest_has(PATTR_MEM_ENCRYPT)) return true;
for (tmp = iomem_resource.child; tmp; tmp = tmp->sibling) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index 45aeeca9b8f6..498f52ba08ea 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -29,7 +29,7 @@ #include <linux/dma-mapping.h> #include <linux/module.h> #include <linux/pci.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h>
#include <drm/drm_aperture.h> #include <drm/drm_drv.h> @@ -633,7 +633,7 @@ static int vmw_dma_select_mode(struct vmw_private *dev_priv) [vmw_dma_map_bind] = "Giving up DMA mappings early."};
/* TTM currently doesn't fully support SEV encryption. */ - if (mem_encrypt_active()) + if (prot_guest_has(PATTR_MEM_ENCRYPT)) return -EINVAL;
if (vmw_force_coherent) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c index 3d08f5700bdb..0c70573d3dce 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c @@ -28,7 +28,7 @@ #include <linux/kernel.h> #include <linux/module.h> #include <linux/slab.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h>
#include <asm/hypervisor.h>
@@ -153,7 +153,7 @@ static unsigned long vmw_port_hb_out(struct rpc_channel *channel, unsigned long msg_len = strlen(msg);
/* HB port can't access encrypted memory. */ - if (hb && !mem_encrypt_active()) { + if (hb && !prot_guest_has(PATTR_MEM_ENCRYPT)) { unsigned long bp = channel->cookie_high;
si = (uintptr_t) msg; @@ -208,7 +208,7 @@ static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply, unsigned long si, di, eax, ebx, ecx, edx;
/* HB port can't access encrypted memory */ - if (hb && !mem_encrypt_active()) { + if (hb && !prot_guest_has(PATTR_MEM_ENCRYPT)) { unsigned long bp = channel->cookie_low;
si = channel->cookie_high; diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c index 811a49a95d04..def63a8deab4 100644 --- a/drivers/iommu/amd/iommu.c +++ b/drivers/iommu/amd/iommu.c @@ -31,6 +31,7 @@ #include <linux/irqdomain.h> #include <linux/percpu.h> #include <linux/io-pgtable.h> +#include <linux/protected_guest.h> #include <asm/irq_remapping.h> #include <asm/io_apic.h> #include <asm/apic.h> @@ -2178,7 +2179,7 @@ static int amd_iommu_def_domain_type(struct device *dev) * active, because some of those devices (AMD GPUs) don't have the * encryption bit in their DMA-mask and require remapping. */ - if (!mem_encrypt_active() && dev_data->iommu_v2) + if (!prot_guest_has(PATTR_MEM_ENCRYPT) && dev_data->iommu_v2) return IOMMU_DOMAIN_IDENTITY;
return 0; diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c index f8d4ad421e07..ac359bc98523 100644 --- a/drivers/iommu/amd/iommu_v2.c +++ b/drivers/iommu/amd/iommu_v2.c @@ -16,6 +16,7 @@ #include <linux/wait.h> #include <linux/pci.h> #include <linux/gfp.h> +#include <linux/protected_guest.h>
#include "amd_iommu.h"
@@ -741,7 +742,7 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids) * When memory encryption is active the device is likely not in a * direct-mapped domain. Forbid using IOMMUv2 functionality for now. */ - if (mem_encrypt_active()) + if (prot_guest_has(PATTR_MEM_ENCRYPT)) return -ENODEV;
if (!amd_iommu_v2_supported()) diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 5419c4b9f27a..ddbedb1b5b6b 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -23,6 +23,7 @@ #include <linux/property.h> #include <linux/fsl/mc.h> #include <linux/module.h> +#include <linux/protected_guest.h> #include <trace/events/iommu.h>
static struct kset *iommu_group_kset; @@ -127,7 +128,7 @@ static int __init iommu_subsys_init(void) else iommu_set_default_translated(false);
- if (iommu_default_passthrough() && mem_encrypt_active()) { + if (iommu_default_passthrough() && prot_guest_has(PATTR_MEM_ENCRYPT)) { pr_info("Memory encryption detected - Disabling default IOMMU Passthrough\n"); iommu_set_default_translated(false); } diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index 9a15334da208..b466f543dc00 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -26,7 +26,7 @@ #include <linux/vmalloc.h> #include <linux/pagemap.h> #include <linux/uaccess.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <asm/io.h> #include "internal.h"
@@ -177,7 +177,7 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos) */ ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos) { - return read_from_oldmem(buf, count, ppos, 0, mem_encrypt_active()); + return read_from_oldmem(buf, count, ppos, 0, prot_guest_has(PATTR_MEM_ENCRYPT)); }
/* @@ -378,7 +378,7 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos, buflen); start = m->paddr + *fpos - m->offset; tmp = read_from_oldmem(buffer, tsz, &start, - userbuf, mem_encrypt_active()); + userbuf, prot_guest_has(PATTR_MEM_ENCRYPT)); if (tmp < 0) return tmp; buflen -= tsz; diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index e50df8d8f87e..2e8dee23a624 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -34,7 +34,7 @@ #include <linux/highmem.h> #include <linux/gfp.h> #include <linux/scatterlist.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <linux/set_memory.h> #ifdef CONFIG_DEBUG_FS #include <linux/debugfs.h> @@ -515,7 +515,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, if (!mem) panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
- if (mem_encrypt_active()) + if (prot_guest_has(PATTR_MEM_ENCRYPT)) pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
if (mapping_size > alloc_size) {
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation.
Reviewed-by: Joerg Roedel jroedel@suse.de Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- include/linux/mem_encrypt.h | 4 ---- 1 file changed, 4 deletions(-)
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h index 5c4a18a91f89..ae4526389261 100644 --- a/include/linux/mem_encrypt.h +++ b/include/linux/mem_encrypt.h @@ -16,10 +16,6 @@
#include <asm/mem_encrypt.h>
-#else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */ - -static inline bool mem_encrypt_active(void) { return false; } - #endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
#ifdef CONFIG_AMD_MEM_ENCRYPT
On Fri, Aug 13, 2021 at 11:59:28AM -0500, Tom Lendacky wrote:
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation.
Reviewed-by: Joerg Roedel jroedel@suse.de Signed-off-by: Tom Lendacky thomas.lendacky@amd.com
include/linux/mem_encrypt.h | 4 ---- 1 file changed, 4 deletions(-)
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h index 5c4a18a91f89..ae4526389261 100644 --- a/include/linux/mem_encrypt.h +++ b/include/linux/mem_encrypt.h @@ -16,10 +16,6 @@
#include <asm/mem_encrypt.h>
-#else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
-static inline bool mem_encrypt_active(void) { return false; }
#endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
#ifdef CONFIG_AMD_MEM_ENCRYPT
This one wants to be part of the previous patch.
On Tue, Aug 17, 2021 at 12:22:33PM +0200, Borislav Petkov wrote:
This one wants to be part of the previous patch.
... and the three following patches too - the treewide patch does a single atomic :) replacement and that's it.
On 8/17/21 5:24 AM, Borislav Petkov wrote:
On Tue, Aug 17, 2021 at 12:22:33PM +0200, Borislav Petkov wrote:
This one wants to be part of the previous patch.
... and the three following patches too - the treewide patch does a single atomic :) replacement and that's it.
Ok, I'll squash those all together.
Thanks, Tom
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation.
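A sketch of why this is a no-op on x86, assuming PATTR_MEM_ENCRYPT resolves through amd_prot_guest_has() to the same sme_me_mask value the removed helper returned (the earlier patches in this series are authoritative for the actual mapping):

	/* Sketch only: assumed equivalence of the old and new checks on x86. */
	static bool mem_encrypt_attr_sketch(unsigned int attr)
	{
		if (attr == PATTR_MEM_ENCRYPT)
			return !!sme_me_mask;	/* what mem_encrypt_active() returned */

		return false;
	}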
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Reviewed-by: Joerg Roedel jroedel@suse.de Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/x86/include/asm/mem_encrypt.h | 5 ----- 1 file changed, 5 deletions(-)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 797146e0cd6b..94c089e9ea69 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -97,11 +97,6 @@ static inline void mem_encrypt_free_decrypted_mem(void) { }
extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];
-static inline bool mem_encrypt_active(void) -{ - return sme_me_mask; -} - static inline u64 sme_get_me_mask(void) { return sme_me_mask;
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation.
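As a rough sketch of why the helper can go, the powerpc prot_guest_has() added earlier in this series is assumed to key the memory-encryption attributes off is_secure_guest(), just as the removed function did:

	/* Assumed shape of the powerpc helper; the actual patch is authoritative. */
	static inline bool prot_guest_has(unsigned int attr)
	{
		switch (attr) {
		case PATTR_MEM_ENCRYPT:
		case PATTR_GUEST_MEM_ENCRYPT:
			return is_secure_guest();
		default:
			return false;
		}
	}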
Cc: Michael Ellerman mpe@ellerman.id.au Cc: Benjamin Herrenschmidt benh@kernel.crashing.org Cc: Paul Mackerras paulus@samba.org Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/powerpc/include/asm/mem_encrypt.h | 5 ----- 1 file changed, 5 deletions(-)
diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h index ba9dab07c1be..2f26b8fc8d29 100644 --- a/arch/powerpc/include/asm/mem_encrypt.h +++ b/arch/powerpc/include/asm/mem_encrypt.h @@ -10,11 +10,6 @@
#include <asm/svm.h>
-static inline bool mem_encrypt_active(void) -{ - return is_secure_guest(); -} - static inline bool force_dma_unencrypted(struct device *dev) { return is_secure_guest();
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation. Since the default implementation of prot_guest_has() matches the s390 implementation of mem_encrypt_active(), prot_guest_has() does not need to be implemented on s390 (the config option ARCH_HAS_PROTECTED_GUEST is not set).
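A sketch of the generic fallback this relies on, assuming include/linux/protected_guest.h stubs the function out when ARCH_HAS_PROTECTED_GUEST is not selected:

	#ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST
	#include <asm/protected_guest.h>
	#else
	static inline bool prot_guest_has(unsigned int attr) { return false; }
	#endif

which gives s390 the same always-false answer the removed mem_encrypt_active() stub provided.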
Cc: Heiko Carstens hca@linux.ibm.com Cc: Vasily Gorbik gor@linux.ibm.com Cc: Christian Borntraeger borntraeger@de.ibm.com Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/s390/include/asm/mem_encrypt.h | 2 -- 1 file changed, 2 deletions(-)
diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h index 2542cbf7e2d1..08a8b96606d7 100644 --- a/arch/s390/include/asm/mem_encrypt.h +++ b/arch/s390/include/asm/mem_encrypt.h @@ -4,8 +4,6 @@
#ifndef __ASSEMBLY__
-static inline bool mem_encrypt_active(void) { return false; } - int set_memory_encrypted(unsigned long addr, int numpages); int set_memory_decrypted(unsigned long addr, int numpages);
On 8/13/21 11:59 AM, Tom Lendacky wrote:
This patch series provides a generic helper function, prot_guest_has(), to replace the sme_active(), sev_active(), sev_es_active() and mem_encrypt_active() functions.
It is expected that as new protected virtualization technologies are added to the kernel, they can all be covered by a single function call instead of a collection of specific function calls all called from the same locations.
The powerpc and s390 patches have been compile tested only. Can the folks copied on this series verify that nothing breaks for them.
There are some patches related to PPC that added new calls to the mem_encrypt_active() function that are not yet in the tip tree. After the merge window, I'll need to send a v3 with those additional changes before this series can be applied.
Thanks, Tom