This patch series provides a generic helper function, prot_guest_has(), to replace the sme_active(), sev_active(), sev_es_active() and mem_encrypt_active() functions.
It is expected that as new protected virtualization technologies are added to the kernel, they can all be covered by this single function call, instead of requiring a growing collection of technology-specific checks at the same call sites.
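For orientation, the replacement mapping that the series implements is, roughly (condensed from the patches below, for illustration only):

  /*
   * Technology-specific helper       Generic attribute-based check
   *
   * sme_active()            ->  prot_guest_has(PATTR_HOST_MEM_ENCRYPT)
   * sev_active()            ->  prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)
   * sev_es_active()         ->  prot_guest_has(PATTR_GUEST_PROT_STATE)
   * mem_encrypt_active()    ->  prot_guest_has(PATTR_MEM_ENCRYPT)
   */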
The powerpc and s390 patches have been compile tested only. Can the folks copied on this series verify that nothing breaks for them?
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
Cc: Will Deacon <will@kernel.org>
---
Patches based on: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master commit 79e920060fa7 ("Merge branch 'WIP/fixes'")
Tom Lendacky (11):
  mm: Introduce a function to check for virtualization protection features
  x86/sev: Add an x86 version of prot_guest_has()
  powerpc/pseries/svm: Add a powerpc version of prot_guest_has()
  x86/sme: Replace occurrences of sme_active() with prot_guest_has()
  x86/sev: Replace occurrences of sev_active() with prot_guest_has()
  x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()
  treewide: Replace the use of mem_encrypt_active() with prot_guest_has()
  mm: Remove the now unused mem_encrypt_active() function
  x86/sev: Remove the now unused mem_encrypt_active() function
  powerpc/pseries/svm: Remove the now unused mem_encrypt_active() function
  s390/mm: Remove the now unused mem_encrypt_active() function
 arch/Kconfig                               |  3 ++
 arch/powerpc/include/asm/mem_encrypt.h     |  5 --
 arch/powerpc/include/asm/protected_guest.h | 30 +++++++++++
 arch/powerpc/platforms/pseries/Kconfig     |  1 +
 arch/s390/include/asm/mem_encrypt.h        |  2 -
 arch/x86/Kconfig                           |  1 +
 arch/x86/include/asm/kexec.h               |  2 +-
 arch/x86/include/asm/mem_encrypt.h         | 13 +----
 arch/x86/include/asm/protected_guest.h     | 27 ++++++++++
 arch/x86/kernel/crash_dump_64.c            |  4 +-
 arch/x86/kernel/head64.c                   |  4 +-
 arch/x86/kernel/kvm.c                      |  3 +-
 arch/x86/kernel/kvmclock.c                 |  4 +-
 arch/x86/kernel/machine_kexec_64.c         | 19 +++----
 arch/x86/kernel/pci-swiotlb.c              |  9 ++--
 arch/x86/kernel/relocate_kernel_64.S       |  2 +-
 arch/x86/kernel/sev.c                      |  6 +--
 arch/x86/kvm/svm/svm.c                     |  3 +-
 arch/x86/mm/ioremap.c                      | 16 +++---
 arch/x86/mm/mem_encrypt.c                  | 60 +++++++++++++++-------
 arch/x86/mm/mem_encrypt_identity.c         |  3 +-
 arch/x86/mm/pat/set_memory.c               |  3 +-
 arch/x86/platform/efi/efi_64.c             |  9 ++--
 arch/x86/realmode/init.c                   |  8 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c    |  4 +-
 drivers/gpu/drm/drm_cache.c                |  4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c        |  4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c        |  6 +--
 drivers/iommu/amd/init.c                   |  7 +--
 drivers/iommu/amd/iommu.c                  |  3 +-
 drivers/iommu/amd/iommu_v2.c               |  3 +-
 drivers/iommu/iommu.c                      |  3 +-
 fs/proc/vmcore.c                           |  6 +--
 include/linux/mem_encrypt.h                |  4 --
 include/linux/protected_guest.h            | 37 +++++++++++++
 kernel/dma/swiotlb.c                       |  4 +-
 36 files changed, 218 insertions(+), 104 deletions(-)
 create mode 100644 arch/powerpc/include/asm/protected_guest.h
 create mode 100644 arch/x86/include/asm/protected_guest.h
 create mode 100644 include/linux/protected_guest.h
In prep for other protected virtualization technologies, introduce a generic helper function, prot_guest_has(), that can be used to check for specific protection attributes, like memory encryption. This is intended to eliminate having to add multiple technology-specific checks to the code (e.g. if (sev_active() || tdx_active())).
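As a minimal sketch of the intended call-site shape (the wrapper function here is hypothetical, not a call site taken from the series):

  #include <linux/protected_guest.h>

  /*
   * A check that previously chained vendor-specific helpers, e.g.
   * if (sev_active() || tdx_active()), collapses into a single
   * attribute query that any current or future technology can satisfy.
   */
  static bool example_guest_mem_encrypted(void)
  {
          return prot_guest_has(PATTR_GUEST_MEM_ENCRYPT);
  }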
Co-developed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Co-developed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/Kconfig                    |  3 +++
 include/linux/protected_guest.h | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)
 create mode 100644 include/linux/protected_guest.h
diff --git a/arch/Kconfig b/arch/Kconfig
index 129df498a8e1..a47cf283f2ff 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1231,6 +1231,9 @@ config RELR
 config ARCH_HAS_MEM_ENCRYPT
 	bool
+config ARCH_HAS_PROTECTED_GUEST
+	bool
+
 config HAVE_SPARSE_SYSCALL_NR
 	bool
 	help
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
new file mode 100644
index 000000000000..f8ed7b72967b
--- /dev/null
+++ b/include/linux/protected_guest.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ */
+
+#ifndef _PROTECTED_GUEST_H
+#define _PROTECTED_GUEST_H
+
+#ifndef __ASSEMBLY__
+
+#define PATTR_MEM_ENCRYPT		0	/* Encrypted memory */
+#define PATTR_HOST_MEM_ENCRYPT		1	/* Host encrypted memory */
+#define PATTR_GUEST_MEM_ENCRYPT		2	/* Guest encrypted memory */
+#define PATTR_GUEST_PROT_STATE		3	/* Guest encrypted state */
+
+#ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST
+
+#include <asm/protected_guest.h>
+
+#else	/* !CONFIG_ARCH_HAS_PROTECTED_GUEST */
+
+static inline bool prot_guest_has(unsigned int attr) { return false; }
+
+#endif	/* CONFIG_ARCH_HAS_PROTECTED_GUEST */
+
+#endif	/* __ASSEMBLY__ */
+
+#endif	/* _PROTECTED_GUEST_H */
On Tue, Jul 27, 2021 at 05:26:04PM -0500, Tom Lendacky wrote:
In prep for other protected virtualization technologies, introduce a generic helper function, prot_guest_has(), that can be used to check for specific protection attributes, like memory encryption. This is intended to eliminate having to add multiple technology-specific checks to the code (e.g. if (sev_active() || tdx_active())).
Co-developed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Co-developed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Joerg Roedel jroedel@suse.de
On 7/27/21 3:26 PM, Tom Lendacky wrote:
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
new file mode 100644
index 000000000000..f8ed7b72967b
--- /dev/null
+++ b/include/linux/protected_guest.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ */
+
+#ifndef _PROTECTED_GUEST_H
+#define _PROTECTED_GUEST_H
+
+#ifndef __ASSEMBLY__
Can you include headers for bool type and false definition?
--- a/include/linux/protected_guest.h
+++ b/include/linux/protected_guest.h
@@ -12,6 +12,9 @@

 #ifndef __ASSEMBLY__

+#include <linux/types.h>
+#include <linux/stddef.h>
Otherwise, I see the following errors in multi-config auto testing.
include/linux/protected_guest.h:40:15: error: unknown type name 'bool'
include/linux/protected_guest.h:40:63: error: 'false' undeclared (first use in this functi
+#define PATTR_MEM_ENCRYPT		0	/* Encrypted memory */
+#define PATTR_HOST_MEM_ENCRYPT		1	/* Host encrypted memory */
+#define PATTR_GUEST_MEM_ENCRYPT		2	/* Guest encrypted memory */
+#define PATTR_GUEST_PROT_STATE		3	/* Guest encrypted state */
+#ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST
+#include <asm/protected_guest.h>
+#else /* !CONFIG_ARCH_HAS_PROTECTED_GUEST */
+static inline bool prot_guest_has(unsigned int attr) { return false; }
+#endif /* CONFIG_ARCH_HAS_PROTECTED_GUEST */
+#endif /* __ASSEMBLY__ */
+#endif /* _PROTECTED_GUEST_H */
On 8/11/21 9:53 AM, Kuppuswamy, Sathyanarayanan wrote:
On 7/27/21 3:26 PM, Tom Lendacky wrote:
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
new file mode 100644
index 000000000000..f8ed7b72967b
--- /dev/null
+++ b/include/linux/protected_guest.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ */
+
+#ifndef _PROTECTED_GUEST_H
+#define _PROTECTED_GUEST_H
+
+#ifndef __ASSEMBLY__
Can you include headers for bool type and false definition?
Can do.
Thanks, Tom
--- a/include/linux/protected_guest.h
+++ b/include/linux/protected_guest.h
@@ -12,6 +12,9 @@

 #ifndef __ASSEMBLY__

+#include <linux/types.h>
+#include <linux/stddef.h>
Otherwise, I see the following errors in multi-config auto testing.
include/linux/protected_guest.h:40:15: error: unknown type name 'bool'
include/linux/protected_guest.h:40:63: error: 'false' undeclared (first use in this functi
Introduce an x86 version of the prot_guest_has() function. This will be used in the more generic x86 code to replace vendor specific calls like sev_active(), etc.
While the name suggests this is intended mainly for guests, it will also be used for host memory encryption checks in place of sme_active().
The amd_prot_guest_has() function does not use EXPORT_SYMBOL_GPL for the same reasons previously stated when changing sme_active(), sev_active() and sme_me_mask to EXPORT_SYMBOL:
  commit 87df26175e67 ("x86/mm: Unbreak modules that rely on external PAGE_KERNEL availability")
  commit 9d5f38ba6c82 ("x86/mm: Unbreak modules that use the DMA API")
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Co-developed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Co-developed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/Kconfig                       |  1 +
 arch/x86/include/asm/mem_encrypt.h     |  2 ++
 arch/x86/include/asm/protected_guest.h | 27 ++++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt.c              | 25 ++++++++++++++++++++++++
 include/linux/protected_guest.h        |  5 +++++
 5 files changed, 60 insertions(+)
 create mode 100644 arch/x86/include/asm/protected_guest.h
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 49270655e827..e47213cbfc55 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1514,6 +1514,7 @@ config AMD_MEM_ENCRYPT
 	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
 	select INSTRUCTION_DECODER
 	select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
+	select ARCH_HAS_PROTECTED_GUEST
 	help
 	  Say yes to enable support for the encryption of system memory.
 	  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 9c80c68d75b5..a46d47662772 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -53,6 +53,7 @@ void __init sev_es_init_vc_handling(void);
 bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
+bool amd_prot_guest_has(unsigned int attr);

 #define __bss_decrypted __section(".bss..decrypted")

@@ -78,6 +79,7 @@ static inline void sev_es_init_vc_handling(void) { }
 static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
+static inline bool amd_prot_guest_has(unsigned int attr) { return false; }

 static inline int __init
 early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h
new file mode 100644
index 000000000000..b4a267dddf93
--- /dev/null
+++ b/arch/x86/include/asm/protected_guest.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ */
+
+#ifndef _X86_PROTECTED_GUEST_H
+#define _X86_PROTECTED_GUEST_H
+
+#include <linux/mem_encrypt.h>
+
+#ifndef __ASSEMBLY__
+
+static inline bool prot_guest_has(unsigned int attr)
+{
+	if (sme_me_mask)
+		return amd_prot_guest_has(attr);
+
+	return false;
+}
+
+#endif	/* __ASSEMBLY__ */
+
+#endif	/* _X86_PROTECTED_GUEST_H */
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ff08dc463634..7d3b2c6f5f88 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -20,6 +20,7 @@
 #include <linux/bitops.h>
 #include <linux/dma-mapping.h>
 #include <linux/virtio_config.h>
+#include <linux/protected_guest.h>

 #include <asm/tlbflush.h>
 #include <asm/fixmap.h>
@@ -389,6 +390,30 @@ bool noinstr sev_es_active(void)
 	return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }

+bool amd_prot_guest_has(unsigned int attr)
+{
+	switch (attr) {
+	case PATTR_MEM_ENCRYPT:
+		return sme_me_mask != 0;
+
+	case PATTR_SME:
+	case PATTR_HOST_MEM_ENCRYPT:
+		return sme_active();
+
+	case PATTR_SEV:
+	case PATTR_GUEST_MEM_ENCRYPT:
+		return sev_active();
+
+	case PATTR_SEV_ES:
+	case PATTR_GUEST_PROT_STATE:
+		return sev_es_active();
+
+	default:
+		return false;
+	}
+}
+EXPORT_SYMBOL(amd_prot_guest_has);
+
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
 {
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
index f8ed7b72967b..7a7120abbb62 100644
--- a/include/linux/protected_guest.h
+++ b/include/linux/protected_guest.h
@@ -17,6 +17,11 @@
 #define PATTR_GUEST_MEM_ENCRYPT		2	/* Guest encrypted memory */
 #define PATTR_GUEST_PROT_STATE		3	/* Guest encrypted state */

+/* 0x800 - 0x8ff reserved for AMD */
+#define PATTR_SME			0x800
+#define PATTR_SEV			0x801
+#define PATTR_SEV_ES			0x802
+
 #ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST

 #include <asm/protected_guest.h>
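The sme_me_mask gate in prot_guest_has() above keeps the common path cheap and leaves room for additional vendor back ends. A purely hypothetical sketch of how a second back end could slot in later; neither the detection hook nor intel_prot_guest_has() exists in this series:

  static inline bool prot_guest_has(unsigned int attr)
  {
          if (sme_me_mask)                        /* AMD SME/SEV in use */
                  return amd_prot_guest_has(attr);
          if (example_tdx_guest())                /* hypothetical detection hook */
                  return intel_prot_guest_has(attr);  /* hypothetical back end */

          return false;
  }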
On Tue, Jul 27, 2021 at 05:26:05PM -0500, Tom Lendacky wrote:
Introduce an x86 version of the prot_guest_has() function. This will be used in the more generic x86 code to replace vendor specific calls like sev_active(), etc.
While the name suggests this is intended mainly for guests, it will also be used for host memory encryption checks in place of sme_active().
The amd_prot_guest_has() function does not use EXPORT_SYMBOL_GPL for the same reasons previously stated when changing sme_active(), sev_active() and sme_me_mask to EXPORT_SYMBOL:
  commit 87df26175e67 ("x86/mm: Unbreak modules that rely on external PAGE_KERNEL availability")
  commit 9d5f38ba6c82 ("x86/mm: Unbreak modules that use the DMA API")
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Co-developed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Co-developed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Joerg Roedel jroedel@suse.de
Introduce a powerpc version of the prot_guest_has() function. This will be used to replace the powerpc mem_encrypt_active() implementation, so the implementation will initially only support the PATTR_MEM_ENCRYPT attribute.
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/powerpc/include/asm/protected_guest.h | 30 ++++++++++++++++++++++
 arch/powerpc/platforms/pseries/Kconfig     |  1 +
 2 files changed, 31 insertions(+)
 create mode 100644 arch/powerpc/include/asm/protected_guest.h
diff --git a/arch/powerpc/include/asm/protected_guest.h b/arch/powerpc/include/asm/protected_guest.h
new file mode 100644
index 000000000000..ce55c2c7e534
--- /dev/null
+++ b/arch/powerpc/include/asm/protected_guest.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ */
+
+#ifndef _POWERPC_PROTECTED_GUEST_H
+#define _POWERPC_PROTECTED_GUEST_H
+
+#include <asm/svm.h>
+
+#ifndef __ASSEMBLY__
+
+static inline bool prot_guest_has(unsigned int attr)
+{
+	switch (attr) {
+	case PATTR_MEM_ENCRYPT:
+		return is_secure_guest();
+
+	default:
+		return false;
+	}
+}
+
+#endif	/* __ASSEMBLY__ */
+
+#endif	/* _POWERPC_PROTECTED_GUEST_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 5e037df2a3a1..8ce5417d6feb 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -159,6 +159,7 @@ config PPC_SVM
 	select SWIOTLB
 	select ARCH_HAS_MEM_ENCRYPT
 	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+	select ARCH_HAS_PROTECTED_GUEST
 	help
 	 There are certain POWER platforms which support secure guests using
 	 the Protected Execution Facility, with the help of an Ultravisor
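For context, the pseries is_secure_guest() helper that the switch statement dispatches to amounts to a single MSR test; a paraphrased sketch of its shape at the time of this series (arch/powerpc/include/asm/svm.h is the authoritative source):

  /* Paraphrased from arch/powerpc/include/asm/svm.h */
  static inline bool is_secure_guest(void)
  {
          return mfmsr() & MSR_S;  /* secure mode, entered under an Ultravisor */
  }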
Replace occurrences of sme_active() with the more generic prot_guest_has() using PATTR_HOST_MEM_ENCRYPT, except in arch/x86/mm/mem_encrypt*.c, where PATTR_SME will be used. If future support is added for other memory encryption technologies, the use of PATTR_HOST_MEM_ENCRYPT can be updated, as required, to use PATTR_SME. See the sketch below for how the two attributes divide the call sites.
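Put concretely, the split looks like this; a sketch condensed from the diff below, with the two checks wrapped in hypothetical functions so they stand alone:

  #include <linux/protected_guest.h>
  #include <asm/mem_encrypt.h>

  /* Generic x86 or driver code asks the technology-neutral question... */
  static bool example_generic_check(void)
  {
          return prot_guest_has(PATTR_HOST_MEM_ENCRYPT);
  }

  /* ...while arch/x86/mm/mem_encrypt*.c pins the check to AMD SME. */
  static bool example_sme_internal_check(void)
  {
          return amd_prot_guest_has(PATTR_SME);
  }

The same pattern repeats for SEV (PATTR_GUEST_MEM_ENCRYPT/PATTR_SEV) and SEV-ES (PATTR_GUEST_PROT_STATE/PATTR_SEV_ES) in the following patches.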
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/kexec.h         |  2 +-
 arch/x86/include/asm/mem_encrypt.h   |  2 --
 arch/x86/kernel/machine_kexec_64.c   |  3 ++-
 arch/x86/kernel/pci-swiotlb.c        |  9 ++++-----
 arch/x86/kernel/relocate_kernel_64.S |  2 +-
 arch/x86/mm/ioremap.c                |  6 +++---
 arch/x86/mm/mem_encrypt.c            | 10 +++++-----
 arch/x86/mm/mem_encrypt_identity.c   |  3 ++-
 arch/x86/realmode/init.c             |  5 +++--
 drivers/iommu/amd/init.c             |  7 ++++---
 10 files changed, 25 insertions(+), 24 deletions(-)
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 0a6e34b07017..11b7c06e2828 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -129,7 +129,7 @@ relocate_kernel(unsigned long indirection_page,
 		unsigned long page_list,
 		unsigned long start_address,
 		unsigned int preserve_context,
-		unsigned int sme_active);
+		unsigned int host_mem_enc_active);
 #endif

 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index a46d47662772..956338406cec 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);

 void __init sev_es_init_vc_handling(void);
-bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);
@@ -76,7 +75,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }

 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 131f30fdcfbd..8e7b517ad738 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -17,6 +17,7 @@
 #include <linux/suspend.h>
 #include <linux/vmalloc.h>
 #include <linux/efi.h>
+#include <linux/protected_guest.h>

 #include <asm/init.h>
 #include <asm/tlbflush.h>
@@ -358,7 +359,7 @@ void machine_kexec(struct kimage *image)
 				       (unsigned long)page_list,
 				       image->start,
 				       image->preserve_context,
-				       sme_active());
+				       prot_guest_has(PATTR_HOST_MEM_ENCRYPT));

 #ifdef CONFIG_KEXEC_JUMP
 	if (image->preserve_context)
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..bd9a9cfbc9a2 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -6,7 +6,7 @@
 #include <linux/swiotlb.h>
 #include <linux/memblock.h>
 #include <linux/dma-direct.h>
-#include <linux/mem_encrypt.h>
+#include <linux/protected_guest.h>

 #include <asm/iommu.h>
 #include <asm/swiotlb.h>
@@ -45,11 +45,10 @@ int __init pci_swiotlb_detect_4gb(void)
 		swiotlb = 1;

 	/*
-	 * If SME is active then swiotlb will be set to 1 so that bounce
-	 * buffers are allocated and used for devices that do not support
-	 * the addressing range required for the encryption mask.
+	 * Set swiotlb to 1 so that bounce buffers are allocated and used for
+	 * devices that can't support DMA to encrypted memory.
	 */
-	if (sme_active())
+	if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		swiotlb = 1;

 	return swiotlb;
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index c53271aebb64..c8fe74a28143 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -47,7 +47,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
	 * %rsi page_list
	 * %rdx start address
	 * %rcx preserve_context
-	 * %r8  sme_active
+	 * %r8  host_mem_enc_active
	 */

	/* Save the CPU context, used for jumping back */
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 60ade7dd71bd..f899f02c0241 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -14,7 +14,7 @@
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
 #include <linux/mmiotrace.h>
-#include <linux/mem_encrypt.h>
+#include <linux/protected_guest.h>
 #include <linux/efi.h>
 #include <linux/pgtable.h>

@@ -702,7 +702,7 @@ bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
 	if (flags & MEMREMAP_DEC)
 		return false;

-	if (sme_active()) {
+	if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT)) {
 		if (memremap_is_setup_data(phys_addr, size) ||
 		    memremap_is_efi_data(phys_addr, size))
 			return false;
@@ -728,7 +728,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,

 	encrypted_prot = true;

-	if (sme_active()) {
+	if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT)) {
 		if (early_memremap_is_setup_data(phys_addr, size) ||
 		    memremap_is_efi_data(phys_addr, size))
 			encrypted_prot = false;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 7d3b2c6f5f88..d246a630feb9 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -144,7 +144,7 @@ void __init sme_unmap_bootdata(char *real_mode_data)
 	struct boot_params *boot_data;
 	unsigned long cmdline_paddr;

-	if (!sme_active())
+	if (!amd_prot_guest_has(PATTR_SME))
 		return;

 	/* Get the command line address before unmapping the real_mode_data */
@@ -164,7 +164,7 @@ void __init sme_map_bootdata(char *real_mode_data)
 	struct boot_params *boot_data;
 	unsigned long cmdline_paddr;

-	if (!sme_active())
+	if (!amd_prot_guest_has(PATTR_SME))
 		return;

 	__sme_early_map_unmap_mem(real_mode_data, sizeof(boot_params), true);
@@ -378,7 +378,7 @@ bool sev_active(void)
 	return sev_status & MSR_AMD64_SEV_ENABLED;
 }

-bool sme_active(void)
+static bool sme_active(void)
 {
 	return sme_me_mask && !sev_active();
 }
@@ -428,7 +428,7 @@ bool force_dma_unencrypted(struct device *dev)
	 * device does not support DMA to addresses that include the
	 * encryption mask.
	 */
-	if (sme_active()) {
+	if (amd_prot_guest_has(PATTR_SME)) {
 		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
 		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
						dev->bus_dma_limit);
@@ -469,7 +469,7 @@ static void print_mem_encrypt_feature_info(void)
 	pr_info("AMD Memory Encryption Features active:");

 	/* Secure Memory Encryption */
-	if (sme_active()) {
+	if (amd_prot_guest_has(PATTR_SME)) {
		/*
		 * SME is mutually exclusive with any of the SEV
		 * features below.
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 470b20208430..088c8ab7dcc1 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -30,6 +30,7 @@
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/mem_encrypt.h>
+#include <linux/protected_guest.h>

 #include <asm/setup.h>
 #include <asm/sections.h>
@@ -287,7 +288,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	unsigned long pgtable_area_len;
 	unsigned long decrypted_base;

-	if (!sme_active())
+	if (!prot_guest_has(PATTR_SME))
 		return;

	/*
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 6534c92d0f83..2109ae569c67 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -3,6 +3,7 @@
 #include <linux/slab.h>
 #include <linux/memblock.h>
 #include <linux/mem_encrypt.h>
+#include <linux/protected_guest.h>
 #include <linux/pgtable.h>

 #include <asm/set_memory.h>
@@ -44,7 +45,7 @@ void __init reserve_real_mode(void)
 static void sme_sev_setup_real_mode(struct trampoline_header *th)
 {
 #ifdef CONFIG_AMD_MEM_ENCRYPT
-	if (sme_active())
+	if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		th->flags |= TH_FLAGS_SME_ACTIVE;

 	if (sev_es_active()) {
@@ -81,7 +82,7 @@ static void __init setup_real_mode(void)
	 * decrypted memory in order to bring up other processors
	 * successfully. This is not needed for SEV.
	 */
-	if (sme_active())
+	if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		set_memory_decrypted((unsigned long)base, size >> PAGE_SHIFT);

 	memcpy(base, real_mode_blob, size);
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index 46280e6e1535..05e770e3e631 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -20,7 +20,7 @@
 #include <linux/amd-iommu.h>
 #include <linux/export.h>
 #include <linux/kmemleak.h>
-#include <linux/mem_encrypt.h>
+#include <linux/protected_guest.h>
 #include <asm/pci-direct.h>
 #include <asm/iommu.h>
 #include <asm/apic.h>
@@ -965,7 +965,7 @@ static bool copy_device_table(void)
 		pr_err("The address of old device table is above 4G, not trustworthy!\n");
 		return false;
 	}
-	old_devtb = (sme_active() && is_kdump_kernel())
+	old_devtb = (prot_guest_has(PATTR_HOST_MEM_ENCRYPT) && is_kdump_kernel())
 		    ? (__force void *)ioremap_encrypted(old_devtb_phys,
 							dev_table_size)
 		    : memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
@@ -3022,7 +3022,8 @@ static int __init amd_iommu_init(void)

 static bool amd_iommu_sme_check(void)
 {
-	if (!sme_active() || (boot_cpu_data.x86 != 0x17))
+	if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT) ||
+	    (boot_cpu_data.x86 != 0x17))
 		return true;

	/* For Fam17h, a specific level of support is required */
On Tue, Jul 27, 2021 at 05:26:07PM -0500, Tom Lendacky wrote:
Replace occurrences of sme_active() with the more generic prot_guest_has() using PATTR_HOST_MEM_ENCRYPT, except in arch/x86/mm/mem_encrypt*.c, where PATTR_SME will be used. If future support is added for other memory encryption technologies, the use of PATTR_HOST_MEM_ENCRYPT can be updated, as required, to use PATTR_SME.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Joerg Roedel jroedel@suse.de
Replace occurrences of sev_active() with the more generic prot_guest_has() using PATTR_GUEST_MEM_ENCRYPT, except in arch/x86/mm/mem_encrypt*.c, where PATTR_SEV will be used. If future support is added for other memory encryption technologies, the use of PATTR_GUEST_MEM_ENCRYPT can be updated, as required, to use PATTR_SEV.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/mem_encrypt.h |  2 --
 arch/x86/kernel/crash_dump_64.c    |  4 +++-
 arch/x86/kernel/kvm.c              |  3 ++-
 arch/x86/kernel/kvmclock.c         |  4 ++--
 arch/x86/kernel/machine_kexec_64.c | 16 ++++++++--------
 arch/x86/kvm/svm/svm.c             |  3 ++-
 arch/x86/mm/ioremap.c              |  6 +++---
 arch/x86/mm/mem_encrypt.c          | 15 +++++++--------
 arch/x86/platform/efi/efi_64.c     |  9 +++++----
 9 files changed, 32 insertions(+), 30 deletions(-)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 956338406cec..7e25de37c148 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);

 void __init sev_es_init_vc_handling(void);
-bool sev_active(void);
 bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);

@@ -75,7 +74,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }

 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }

diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 045e82e8945b..0cfe35f03e67 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -10,6 +10,7 @@
 #include <linux/crash_dump.h>
 #include <linux/uaccess.h>
 #include <linux/io.h>
+#include <linux/protected_guest.h>

 static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
				  unsigned long offset, int userbuf,
@@ -73,5 +74,6 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,

 ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-	return read_from_oldmem(buf, count, ppos, 0, sev_active());
+	return read_from_oldmem(buf, count, ppos, 0,
+				prot_guest_has(PATTR_GUEST_MEM_ENCRYPT));
 }
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a26643dc6bd6..9d08ad2f3faa 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -27,6 +27,7 @@
 #include <linux/nmi.h>
 #include <linux/swait.h>
 #include <linux/syscore_ops.h>
+#include <linux/protected_guest.h>
 #include <asm/timer.h>
 #include <asm/cpu.h>
 #include <asm/traps.h>
@@ -418,7 +419,7 @@ static void __init sev_map_percpu_data(void)
 {
 	int cpu;

-	if (!sev_active())
+	if (!prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
 		return;

 	for_each_possible_cpu(cpu) {
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index ad273e5861c1..f7ba78a23dcd 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -16,9 +16,9 @@
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/set_memory.h>
+#include <linux/protected_guest.h>

 #include <asm/hypervisor.h>
-#include <asm/mem_encrypt.h>
 #include <asm/x86_init.h>
 #include <asm/kvmclock.h>

@@ -232,7 +232,7 @@ static void __init kvmclock_init_mem(void)
	 * hvclock is shared between the guest and the hypervisor, must
	 * be mapped decrypted.
	 */
-	if (sev_active()) {
+	if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
 		r = set_memory_decrypted((unsigned long) hvclock_mem,
					 1UL << order);
 		if (r) {
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 8e7b517ad738..66ff788b79c9 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -167,7 +167,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
 	}
 	pte = pte_offset_kernel(pmd, vaddr);

-	if (sev_active())
+	if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
 		prot = PAGE_KERNEL_EXEC;

 	set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
@@ -207,7 +207,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
 	level4p = (pgd_t *)__va(start_pgtable);
 	clear_page(level4p);

-	if (sev_active()) {
+	if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
 		info.page_flag   |= _PAGE_ENC;
 		info.kernpg_flag |= _PAGE_ENC;
 	}
@@ -570,12 +570,12 @@ void arch_kexec_unprotect_crashkres(void)
 */
int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
{
-	if (sev_active())
+	if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		return 0;

	/*
-	 * If SME is active we need to be sure that kexec pages are
-	 * not encrypted because when we boot to the new kernel the
+	 * If host memory encryption is active we need to be sure that kexec
+	 * pages are not encrypted because when we boot to the new kernel the
	 * pages won't be accessed encrypted (initially).
	 */
	return set_memory_decrypted((unsigned long)vaddr, pages);
@@ -583,12 +583,12 @@ int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)

 void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
 {
-	if (sev_active())
+	if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		return;

	/*
-	 * If SME is active we need to reset the pages back to being
-	 * an encrypted mapping before freeing them.
+	 * If host memory encryption is active we need to reset the pages back
+	 * to being an encrypted mapping before freeing them.
	 */
 	set_memory_encrypted((unsigned long)vaddr, pages);
 }
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 664d20f0689c..48c906f6593a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -25,6 +25,7 @@
 #include <linux/pagemap.h>
 #include <linux/swap.h>
 #include <linux/rwsem.h>
+#include <linux/protected_guest.h>

 #include <asm/apic.h>
 #include <asm/perf_event.h>
@@ -457,7 +458,7 @@ static int has_svm(void)
 		return 0;
 	}

-	if (sev_active()) {
+	if (prot_guest_has(PATTR_SEV)) {
 		pr_info("KVM is unsupported when running as an SEV guest\n");
 		return 0;
 	}
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index f899f02c0241..0f2d5ace5986 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -92,7 +92,7 @@ static unsigned int __ioremap_check_ram(struct resource *res)
 */
static unsigned int __ioremap_check_encrypted(struct resource *res)
{
-	if (!sev_active())
+	if (!prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
 		return 0;

 	switch (res->desc) {
@@ -112,7 +112,7 @@ static unsigned int __ioremap_check_encrypted(struct resource *res)
 */
static void __ioremap_check_other(resource_size_t addr, struct ioremap_desc *desc)
{
-	if (!sev_active())
+	if (!prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
 		return;

 	if (!IS_ENABLED(CONFIG_EFI))
@@ -555,7 +555,7 @@ static bool memremap_should_map_decrypted(resource_size_t phys_addr,
 	case E820_TYPE_NVS:
 	case E820_TYPE_UNUSABLE:
 		/* For SEV, these areas are encrypted */
-		if (sev_active())
+		if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
 			break;
 		fallthrough;

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index d246a630feb9..eb5cae93b238 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -194,7 +194,7 @@ void __init sme_early_init(void)
 	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
 		protection_map[i] = pgprot_encrypted(protection_map[i]);

-	if (sev_active())
+	if (amd_prot_guest_has(PATTR_SEV))
 		swiotlb_force = SWIOTLB_FORCE;
 }

@@ -203,7 +203,7 @@ void __init sev_setup_arch(void)
 	phys_addr_t total_mem = memblock_phys_mem_size();
 	unsigned long size;

-	if (!sev_active())
+	if (!amd_prot_guest_has(PATTR_SEV))
 		return;

	/*
@@ -373,7 +373,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
 * up under SME the trampoline area cannot be encrypted, whereas under SEV
 * the trampoline area must be encrypted.
 */
-bool sev_active(void)
+static bool sev_active(void)
{
	return sev_status & MSR_AMD64_SEV_ENABLED;
}
@@ -382,7 +382,6 @@ static bool sme_active(void)
{
	return sme_me_mask && !sev_active();
}
-EXPORT_SYMBOL_GPL(sev_active);

 /* Needs to be called from non-instrumentable code */
 bool noinstr sev_es_active(void)
@@ -420,7 +419,7 @@ bool force_dma_unencrypted(struct device *dev)
	/*
	 * For SEV, all DMA must be to unencrypted addresses.
	 */
-	if (sev_active())
+	if (amd_prot_guest_has(PATTR_SEV))
 		return true;

	/*
@@ -479,7 +478,7 @@ static void print_mem_encrypt_feature_info(void)
 	}

 	/* Secure Encrypted Virtualization */
-	if (sev_active())
+	if (amd_prot_guest_has(PATTR_SEV))
 		pr_cont(" SEV");

 	/* Encrypted Register State */
@@ -502,7 +501,7 @@ void __init mem_encrypt_init(void)
	 * With SEV, we need to unroll the rep string I/O instructions,
	 * but SEV-ES supports them through the #VC handler.
	 */
-	if (sev_active() && !sev_es_active())
+	if (amd_prot_guest_has(PATTR_SEV) && !sev_es_active())
 		static_branch_enable(&sev_enable_key);

 	print_mem_encrypt_feature_info();
@@ -510,6 +509,6 @@ void __init mem_encrypt_init(void)

 int arch_has_restricted_virtio_memory_access(void)
 {
-	return sev_active();
+	return amd_prot_guest_has(PATTR_SEV);
 }
 EXPORT_SYMBOL_GPL(arch_has_restricted_virtio_memory_access);
diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index 7515e78ef898..94737fcc1e21 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -33,7 +33,7 @@
 #include <linux/reboot.h>
 #include <linux/slab.h>
 #include <linux/ucs2_string.h>
-#include <linux/mem_encrypt.h>
+#include <linux/protected_guest.h>
 #include <linux/sched/task.h>

 #include <asm/setup.h>
@@ -284,7 +284,8 @@ static void __init __map_region(efi_memory_desc_t *md, u64 va)
 	if (!(md->attribute & EFI_MEMORY_WB))
 		flags |= _PAGE_PCD;

-	if (sev_active() && md->type != EFI_MEMORY_MAPPED_IO)
+	if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT) &&
+	    md->type != EFI_MEMORY_MAPPED_IO)
 		flags |= _PAGE_ENC;

 	pfn = md->phys_addr >> PAGE_SHIFT;
@@ -390,7 +391,7 @@ static int __init efi_update_mem_attr(struct mm_struct *mm, efi_memory_desc_t *m
 	if (!(md->attribute & EFI_MEMORY_RO))
 		pf |= _PAGE_RW;

-	if (sev_active())
+	if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
 		pf |= _PAGE_ENC;

 	return efi_update_mappings(md, pf);
@@ -438,7 +439,7 @@ void __init efi_runtime_update_mappings(void)
 		    (md->type != EFI_RUNTIME_SERVICES_CODE))
 			pf |= _PAGE_RW;

-		if (sev_active())
+		if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
 			pf |= _PAGE_ENC;

 		efi_update_mappings(md, pf);
On Tue, Jul 27, 2021 at 05:26:08PM -0500, Tom Lendacky wrote:
Replace occurrences of sev_active() with the more generic prot_guest_has() using PATTR_GUEST_MEM_ENCRYPT, except in arch/x86/mm/mem_encrypt*.c, where PATTR_SEV will be used. If future support is added for other memory encryption technologies, the use of PATTR_GUEST_MEM_ENCRYPT can be updated, as required, to use PATTR_SEV.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Joerg Roedel jroedel@suse.de
Replace occurrences of sev_es_active() with the more generic prot_guest_has() using PATTR_GUEST_PROT_STATE, except in arch/x86/kernel/sev*.c and arch/x86/mm/mem_encrypt*.c, where PATTR_SEV_ES will be used. If future support is added for other memory encryption technologies, the use of PATTR_GUEST_PROT_STATE can be updated, as required, to specifically use PATTR_SEV_ES.
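The generic PATTR_GUEST_PROT_STATE attribute is used at the realmode call site below because, in principle, any technology that protects guest register state needs the same AP startup accommodation; a hypothetical sketch, with both helpers invented for illustration:

  #include <linux/protected_guest.h>

  static void example_start_ap(void)
  {
          /*
           * With protected register state (SEV-ES today), the AP cannot
           * take #VC exceptions yet, so the usual startup path that calls
           * verify_cpu() must be avoided.
           */
          if (prot_guest_has(PATTR_GUEST_PROT_STATE))
                  example_use_ap_jump_table();    /* hypothetical helper */
          else
                  example_boot_ap_directly();     /* hypothetical helper */
  }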
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/mem_encrypt.h | 2 --
 arch/x86/kernel/sev.c              | 6 +++---
 arch/x86/mm/mem_encrypt.c          | 7 +++----
 arch/x86/realmode/init.c           | 3 +--
 4 files changed, 7 insertions(+), 11 deletions(-)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 7e25de37c148..797146e0cd6b 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);

 void __init sev_es_init_vc_handling(void);
-bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);

 #define __bss_decrypted __section(".bss..decrypted")
@@ -74,7 +73,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }

 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }

 static inline int __init
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..66a4ab9d95d7 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -11,7 +11,7 @@

 #include <linux/sched/debug.h>	/* For show_regs() */
 #include <linux/percpu-defs.h>
-#include <linux/mem_encrypt.h>
+#include <linux/protected_guest.h>
 #include <linux/printk.h>
 #include <linux/mm_types.h>
 #include <linux/set_memory.h>
@@ -615,7 +615,7 @@ int __init sev_es_efi_map_ghcbs(pgd_t *pgd)
 	int cpu;
 	u64 pfn;

-	if (!sev_es_active())
+	if (!prot_guest_has(PATTR_SEV_ES))
 		return 0;

 	pflags = _PAGE_NX | _PAGE_RW;
@@ -774,7 +774,7 @@ void __init sev_es_init_vc_handling(void)

 	BUILD_BUG_ON(offsetof(struct sev_es_runtime_data, ghcb_page) % PAGE_SIZE);

-	if (!sev_es_active())
+	if (!prot_guest_has(PATTR_SEV_ES))
 		return;

 	if (!sev_es_check_cpu_features())
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index eb5cae93b238..451de8e84fce 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -383,8 +383,7 @@ static bool sme_active(void)
 	return sme_me_mask && !sev_active();
 }

-/* Needs to be called from non-instrumentable code */
-bool noinstr sev_es_active(void)
+static bool sev_es_active(void)
 {
 	return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }
@@ -482,7 +481,7 @@ static void print_mem_encrypt_feature_info(void)
 		pr_cont(" SEV");

 	/* Encrypted Register State */
-	if (sev_es_active())
+	if (amd_prot_guest_has(PATTR_SEV_ES))
 		pr_cont(" SEV-ES");

 	pr_cont("\n");
@@ -501,7 +500,7 @@ void __init mem_encrypt_init(void)
	 * With SEV, we need to unroll the rep string I/O instructions,
	 * but SEV-ES supports them through the #VC handler.
	 */
-	if (amd_prot_guest_has(PATTR_SEV) && !sev_es_active())
+	if (amd_prot_guest_has(PATTR_SEV) && !amd_prot_guest_has(PATTR_SEV_ES))
 		static_branch_enable(&sev_enable_key);

 	print_mem_encrypt_feature_info();
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 2109ae569c67..7711d0071f41 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -2,7 +2,6 @@
 #include <linux/io.h>
 #include <linux/slab.h>
 #include <linux/memblock.h>
-#include <linux/mem_encrypt.h>
 #include <linux/protected_guest.h>
 #include <linux/pgtable.h>

@@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct trampoline_header *th)
 	if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		th->flags |= TH_FLAGS_SME_ACTIVE;

-	if (sev_es_active()) {
+	if (prot_guest_has(PATTR_GUEST_PROT_STATE)) {
		/*
		 * Skip the call to verify_cpu() in secondary_startup_64 as it
		 * will cause #VC exceptions when the AP can't handle them yet.
On Tue, Jul 27, 2021 at 05:26:09PM -0500, Tom Lendacky wrote:
@@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct trampoline_header *th)
 	if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		th->flags |= TH_FLAGS_SME_ACTIVE;

-	if (sev_es_active()) {
+	if (prot_guest_has(PATTR_GUEST_PROT_STATE)) {
		/*
		 * Skip the call to verify_cpu() in secondary_startup_64 as it
		 * will cause #VC exceptions when the AP can't handle them yet.
Not sure how TDX will handle AP booting, are you sure it needs this special setup as well? Otherwise a check for SEV-ES would be better instead of the generic PATTR_GUEST_PROT_STATE.
Regards,
Joerg
On 8/2/21 5:45 AM, Joerg Roedel wrote:
On Tue, Jul 27, 2021 at 05:26:09PM -0500, Tom Lendacky wrote:
@@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct trampoline_header *th)
 	if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
 		th->flags |= TH_FLAGS_SME_ACTIVE;

-	if (sev_es_active()) {
+	if (prot_guest_has(PATTR_GUEST_PROT_STATE)) {
		/*
		 * Skip the call to verify_cpu() in secondary_startup_64 as it
		 * will cause #VC exceptions when the AP can't handle them yet.
Not sure how TDX will handle AP booting, are you sure it needs this special setup as well? Otherwise a check for SEV-ES would be better instead of the generic PATTR_GUEST_PROT_STATE.
Yes, I'm not sure either. I figure that change can be made, if needed, as part of the TDX support.
Thanks, Tom
Regards,
Joerg
On 8/9/21 2:59 PM, Tom Lendacky wrote:
Not sure how TDX will handle AP booting, are you sure it needs this special setup as well? Otherwise a check for SEV-ES would be better instead of the generic PATTR_GUEST_PROT_STATE.
Yes, I'm not sure either. I figure that change can be made, if needed, as part of the TDX support.
We don't plan to set PROT_STATE, so it does not affect TDX. For SMP, we use the MADT ACPI table for AP booting.
Replace occurrences of mem_encrypt_active() with calls to prot_guest_has() using the PATTR_MEM_ENCRYPT attribute.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Dave Young <dyoung@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kernel/head64.c                |  4 ++--
 arch/x86/mm/ioremap.c                   |  4 ++--
 arch/x86/mm/mem_encrypt.c               |  5 ++---
 arch/x86/mm/pat/set_memory.c            |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |  4 +++-
 drivers/gpu/drm/drm_cache.c             |  4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c     |  4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c     |  6 +++---
 drivers/iommu/amd/iommu.c               |  3 ++-
 drivers/iommu/amd/iommu_v2.c            |  3 ++-
 drivers/iommu/iommu.c                   |  3 ++-
 fs/proc/vmcore.c                        |  6 +++---
 kernel/dma/swiotlb.c                    |  4 ++--
 13 files changed, 29 insertions(+), 24 deletions(-)
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index de01903c3735..cafed6456d45 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -19,7 +19,7 @@ #include <linux/start_kernel.h> #include <linux/io.h> #include <linux/memblock.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <linux/pgtable.h>
#include <asm/processor.h> @@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long physaddr, * there is no need to zero it after changing the memory encryption * attribute. */ - if (mem_encrypt_active()) { + if (prot_guest_has(PATTR_MEM_ENCRYPT)) { vaddr = (unsigned long)__start_bss_decrypted; vaddr_end = (unsigned long)__end_bss_decrypted; for (; vaddr < vaddr_end; vaddr += PMD_SIZE) { diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 0f2d5ace5986..5e1c1f5cbbe8 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -693,7 +693,7 @@ static bool __init early_memremap_is_setup_data(resource_size_t phys_addr, bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size, unsigned long flags) { - if (!mem_encrypt_active()) + if (!prot_guest_has(PATTR_MEM_ENCRYPT)) return true;
if (flags & MEMREMAP_ENC) @@ -723,7 +723,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr, { bool encrypted_prot;
- if (!mem_encrypt_active()) + if (!prot_guest_has(PATTR_MEM_ENCRYPT)) return prot;
encrypted_prot = true; diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c index 451de8e84fce..0f1533dbe81c 100644 --- a/arch/x86/mm/mem_encrypt.c +++ b/arch/x86/mm/mem_encrypt.c @@ -364,8 +364,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size) /* * SME and SEV are very similar but they are not the same, so there are * times that the kernel will need to distinguish between SME and SEV. The - * sme_active() and sev_active() functions are used for this. When a - * distinction isn't needed, the mem_encrypt_active() function can be used. + * sme_active() and sev_active() functions are used for this. * * The trampoline code is a good example for this requirement. Before * paging is activated, SME will access all memory as decrypted, but SEV @@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void) * The unused memory range was mapped decrypted, change the encryption * attribute from decrypted to encrypted before freeing it. */ - if (mem_encrypt_active()) { + if (sme_me_mask) { r = set_memory_encrypted(vaddr, npages); if (r) { pr_warn("failed to free unused decrypted pages\n"); diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index ad8a5c586a35..6925f2bb4be1 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -18,6 +18,7 @@ #include <linux/libnvdimm.h> #include <linux/vmstat.h> #include <linux/kernel.h> +#include <linux/protected_guest.h>
#include <asm/e820/api.h> #include <asm/processor.h> @@ -1986,7 +1987,7 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) int ret;
/* Nothing to do if memory encryption is not active */ - if (!mem_encrypt_active()) + if (!prot_guest_has(PATTR_MEM_ENCRYPT)) return 0;
/* Should not be working on unaligned addresses */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c index abb928894eac..8407224717df 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c @@ -38,6 +38,7 @@ #include <drm/drm_probe_helper.h> #include <linux/mmu_notifier.h> #include <linux/suspend.h> +#include <linux/protected_guest.h>
#include "amdgpu.h" #include "amdgpu_irq.h" @@ -1239,7 +1240,8 @@ static int amdgpu_pci_probe(struct pci_dev *pdev, * however, SME requires an indirect IOMMU mapping because the encryption * bit is beyond the DMA mask of the chip. */ - if (mem_encrypt_active() && ((flags & AMD_ASIC_MASK) == CHIP_RAVEN)) { + if (prot_guest_has(PATTR_MEM_ENCRYPT) && + ((flags & AMD_ASIC_MASK) == CHIP_RAVEN)) { dev_info(&pdev->dev, "SME is not compatible with RAVEN\n"); return -ENOTSUPP; diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c index 546599f19a93..4d01d44012fd 100644 --- a/drivers/gpu/drm/drm_cache.c +++ b/drivers/gpu/drm/drm_cache.c @@ -31,7 +31,7 @@ #include <linux/dma-buf-map.h> #include <linux/export.h> #include <linux/highmem.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <xen/xen.h>
#include <drm/drm_cache.h> @@ -204,7 +204,7 @@ bool drm_need_swiotlb(int dma_bits) * Enforce dma_alloc_coherent when memory encryption is active as well * for the same reasons as for Xen paravirtual hosts. */ - if (mem_encrypt_active()) + if (prot_guest_has(PATTR_MEM_ENCRYPT)) return true;
for (tmp = iomem_resource.child; tmp; tmp = tmp->sibling) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index dde8b35bb950..06ec95a650ba 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -29,7 +29,7 @@ #include <linux/dma-mapping.h> #include <linux/module.h> #include <linux/pci.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h>
#include <drm/ttm/ttm_range_manager.h> #include <drm/drm_aperture.h> @@ -634,7 +634,7 @@ static int vmw_dma_select_mode(struct vmw_private *dev_priv) [vmw_dma_map_bind] = "Giving up DMA mappings early."};
/* TTM currently doesn't fully support SEV encryption. */ - if (mem_encrypt_active()) + if (prot_guest_has(PATTR_MEM_ENCRYPT)) return -EINVAL;
if (vmw_force_coherent) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c index 3d08f5700bdb..0c70573d3dce 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c @@ -28,7 +28,7 @@ #include <linux/kernel.h> #include <linux/module.h> #include <linux/slab.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h>
#include <asm/hypervisor.h>
@@ -153,7 +153,7 @@ static unsigned long vmw_port_hb_out(struct rpc_channel *channel, unsigned long msg_len = strlen(msg);
/* HB port can't access encrypted memory. */ - if (hb && !mem_encrypt_active()) { + if (hb && !prot_guest_has(PATTR_MEM_ENCRYPT)) { unsigned long bp = channel->cookie_high;
si = (uintptr_t) msg; @@ -208,7 +208,7 @@ static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply, unsigned long si, di, eax, ebx, ecx, edx;
/* HB port can't access encrypted memory */ - if (hb && !mem_encrypt_active()) { + if (hb && !prot_guest_has(PATTR_MEM_ENCRYPT)) { unsigned long bp = channel->cookie_low;
si = channel->cookie_high; diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c index 811a49a95d04..def63a8deab4 100644 --- a/drivers/iommu/amd/iommu.c +++ b/drivers/iommu/amd/iommu.c @@ -31,6 +31,7 @@ #include <linux/irqdomain.h> #include <linux/percpu.h> #include <linux/io-pgtable.h> +#include <linux/protected_guest.h> #include <asm/irq_remapping.h> #include <asm/io_apic.h> #include <asm/apic.h> @@ -2178,7 +2179,7 @@ static int amd_iommu_def_domain_type(struct device *dev) * active, because some of those devices (AMD GPUs) don't have the * encryption bit in their DMA-mask and require remapping. */ - if (!mem_encrypt_active() && dev_data->iommu_v2) + if (!prot_guest_has(PATTR_MEM_ENCRYPT) && dev_data->iommu_v2) return IOMMU_DOMAIN_IDENTITY;
return 0; diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c index f8d4ad421e07..ac359bc98523 100644 --- a/drivers/iommu/amd/iommu_v2.c +++ b/drivers/iommu/amd/iommu_v2.c @@ -16,6 +16,7 @@ #include <linux/wait.h> #include <linux/pci.h> #include <linux/gfp.h> +#include <linux/protected_guest.h>
#include "amd_iommu.h"
@@ -741,7 +742,7 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids) * When memory encryption is active the device is likely not in a * direct-mapped domain. Forbid using IOMMUv2 functionality for now. */ - if (mem_encrypt_active()) + if (prot_guest_has(PATTR_MEM_ENCRYPT)) return -ENODEV;
if (!amd_iommu_v2_supported()) diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 5419c4b9f27a..ddbedb1b5b6b 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -23,6 +23,7 @@ #include <linux/property.h> #include <linux/fsl/mc.h> #include <linux/module.h> +#include <linux/protected_guest.h> #include <trace/events/iommu.h>
static struct kset *iommu_group_kset; @@ -127,7 +128,7 @@ static int __init iommu_subsys_init(void) else iommu_set_default_translated(false);
- if (iommu_default_passthrough() && mem_encrypt_active()) { + if (iommu_default_passthrough() && prot_guest_has(PATTR_MEM_ENCRYPT)) { pr_info("Memory encryption detected - Disabling default IOMMU Passthrough\n"); iommu_set_default_translated(false); } diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index 9a15334da208..b466f543dc00 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -26,7 +26,7 @@ #include <linux/vmalloc.h> #include <linux/pagemap.h> #include <linux/uaccess.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <asm/io.h> #include "internal.h"
@@ -177,7 +177,7 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos) */ ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos) { - return read_from_oldmem(buf, count, ppos, 0, mem_encrypt_active()); + return read_from_oldmem(buf, count, ppos, 0, prot_guest_has(PATTR_MEM_ENCRYPT)); }
/* @@ -378,7 +378,7 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos, buflen); start = m->paddr + *fpos - m->offset; tmp = read_from_oldmem(buffer, tsz, &start, - userbuf, mem_encrypt_active()); + userbuf, prot_guest_has(PATTR_MEM_ENCRYPT)); if (tmp < 0) return tmp; buflen -= tsz; diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index e50df8d8f87e..2e8dee23a624 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -34,7 +34,7 @@ #include <linux/highmem.h> #include <linux/gfp.h> #include <linux/scatterlist.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <linux/set_memory.h> #ifdef CONFIG_DEBUG_FS #include <linux/debugfs.h> @@ -515,7 +515,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, if (!mem) panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
- if (mem_encrypt_active()) + if (prot_guest_has(PATTR_MEM_ENCRYPT)) pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
if (mapping_size > alloc_size) {
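For reference, on x86 the new check resolves to roughly the following. This is only a sketch inferred from mem_encrypt_active()'s definition (return sme_me_mask;) shown later in the thread, not the literal contents of arch/x86/include/asm/protected_guest.h, and the amd_ prefix here is illustrative:

	/*
	 * Sketch only: PATTR_MEM_ENCRYPT is the attribute used in the
	 * hunks above; the body is inferred from mem_encrypt_active()
	 * being a test of sme_me_mask on x86.
	 */
	static inline bool amd_prot_guest_has(unsigned int attr)
	{
		switch (attr) {
		case PATTR_MEM_ENCRYPT:
			/* True whenever SME or SEV encryption is active. */
			return !!sme_me_mask;
		default:
			return false;
		}
	}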
On Tue, Jul 27, 2021, Tom Lendacky wrote:
@@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void) * The unused memory range was mapped decrypted, change the encryption * attribute from decrypted to encrypted before freeing it. */
- if (mem_encrypt_active()) {
+ if (sme_me_mask) {
Any reason this uses sme_me_mask? The helper it calls, __set_memory_enc_dec(), uses prot_guest_has(PATTR_MEM_ENCRYPT), so I assume it's available?
r = set_memory_encrypted(vaddr, npages); if (r) { pr_warn("failed to free unused decrypted pages\n");
On 7/30/21 5:34 PM, Sean Christopherson wrote:
On Tue, Jul 27, 2021, Tom Lendacky wrote:
@@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void) * The unused memory range was mapped decrypted, change the encryption * attribute from decrypted to encrypted before freeing it. */
- if (mem_encrypt_active()) {
+ if (sme_me_mask) {
Any reason this uses sme_me_mask? The helper it calls, __set_memory_enc_dec(), uses prot_guest_has(PATTR_MEM_ENCRYPT), so I assume it's available?
Probably just a slip on my part. I was debating at one point calling the helper vs. referencing the variables/functions directly in the mem_encrypt.c file.
Thanks, Tom
On 28/07/2021 at 00:26, Tom Lendacky wrote:
Replace occurrences of mem_encrypt_active() with calls to prot_guest_has() with the PATTR_MEM_ENCRYPT attribute.
What about https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20210730114231.23445... ?
Christophe
On 8/2/21 7:42 AM, Christophe Leroy wrote:
On 28/07/2021 at 00:26, Tom Lendacky wrote:
Replace occurrences of mem_encrypt_active() with calls to prot_guest_has() with the PATTR_MEM_ENCRYPT attribute.
What about https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20210730114231.23445... ?
Ah, looks like that just went into the PPC tree and isn't part of the tip tree. I'll have to look into how to handle that one.
Thanks, Tom
Christophe
On 7/27/21 3:26 PM, Tom Lendacky wrote:
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index de01903c3735..cafed6456d45 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -19,7 +19,7 @@ #include <linux/start_kernel.h> #include <linux/io.h> #include <linux/memblock.h> -#include <linux/mem_encrypt.h> +#include <linux/protected_guest.h> #include <linux/pgtable.h>
#include <asm/processor.h> @@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long physaddr, * there is no need to zero it after changing the memory encryption * attribute. */
- if (mem_encrypt_active()) {
+ if (prot_guest_has(PATTR_MEM_ENCRYPT)) { vaddr = (unsigned long)__start_bss_decrypted; vaddr_end = (unsigned long)__end_bss_decrypted;
Since this change is specific to AMD, can you replace PATTR_MEM_ENCRYPT with prot_guest_has(PATTR_SME) || prot_guest_has(PATTR_SEV)? It is not used in TDX.
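Concretely, the suggested form of the check would be something like this (a sketch of the suggestion only; PATTR_SME and PATTR_SEV are the attribute names used in this series):

	/* Sketch of the suggested narrowing: AMD-only, skipped on TDX. */
	if (prot_guest_has(PATTR_SME) || prot_guest_has(PATTR_SEV)) {
		vaddr = (unsigned long)__start_bss_decrypted;
		vaddr_end = (unsigned long)__end_bss_decrypted;
		/* ... remap the decrypted BSS range as before ... */
	}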
On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:
On 7/27/21 3:26 PM, Tom Lendacky wrote:
...
Since this change is specific to AMD, can you replace PATTR_MEM_ENCRYPT with prot_guest_has(PATTR_SME) || prot_guest_has(PATTR_SEV)? It is not used in TDX.
This is a direct replacement for now. I think the change you're requesting should be done as part of the TDX support patches so it's clear why it is being changed.
But wouldn't TDX still need to do something with this shared/unencrypted area? Or, since it is shared, is there actually nothing you need to do (the bss decrypted section exists even if CONFIG_AMD_MEM_ENCRYPT is not configured)?
Thanks, Tom
On 8/10/21 12:48 PM, Tom Lendacky wrote:
On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:
On 7/27/21 3:26 PM, Tom Lendacky wrote:
...
Since this change is specific to AMD, can you replace PATTR_MEM_ENCRYPT with prot_guest_has(PATTR_SME) || prot_guest_has(PATTR_SEV)? It is not used in TDX.
This is a direct replacement for now. I think the change you're requesting should be done as part of the TDX support patches so it's clear why it is being changed.
Ok. I will include it as part of the TDX changes.
But wouldn't TDX still need to do something with this shared/unencrypted area? Or, since it is shared, is there actually nothing you need to do (the bss decrypted section exists even if CONFIG_AMD_MEM_ENCRYPT is not configured)?
Kirill had a requirement to turn on CONFIG_AMD_MEM_ENCRYPT for adding lazy accept support in the TDX guest kernel. Kirill, can you add details here?
On Tue, Aug 10, 2021 at 02:48:54PM -0500, Tom Lendacky wrote:
On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:
On 7/27/21 3:26 PM, Tom Lendacky wrote:
...
Since this change is specific to AMD, can you replace PATTR_MEM_ENCRYPT with prot_guest_has(PATTR_SME) || prot_guest_has(PATTR_SEV)? It is not used in TDX.
This is a direct replacement for now.
With the current implementation of prot_guest_has() for TDX, it breaks boot for me.
Looking at the code again, now I *think* the reason is accessing a global variable from __startup_64() inside the TDX version of prot_guest_has().
__startup_64() is special. If you access any global variable you need to use fixup_pointer(). See comment before __startup_64().
I'm not sure how you get away with accessing sme_me_mask directly from there. Any clues? Maybe it's just luck and the compiler generates code just right for your case, I don't know.
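The pattern being referred to looks roughly like this; fixup_pointer() is the existing helper in arch/x86/kernel/head64.c, and the usage below is illustrative:

	/*
	 * Translate a link-time address to the physical address the
	 * kernel is actually running at; __startup_64() executes before
	 * relocations are applied, so globals must go through this.
	 */
	static void __head *fixup_pointer(void *ptr, unsigned long physaddr)
	{
		return ptr - (void *)_text + (void *)physaddr;
	}

	/* Illustrative use inside __startup_64(physaddr, ...): */
	u64 *me_mask = fixup_pointer(&sme_me_mask, physaddr);

	if (*me_mask) {
		/* safe: the global was read through a fixed-up pointer */
	}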
A separate point is that the TDX version of prot_guest_has() relies on cpu_feature_enabled(), which is not ready at this point.
I think the __bss_decrypted fixup has to be done if sme_me_mask is non-zero. Or just do it unconditionally because it's a NOP for sme_me_mask == 0.
I think the change you're requesting should be done as part of the TDX support patches so it's clear why it is being changed.
But wouldn't TDX still need to do something with this shared/unencrypted area? Or, since it is shared, is there actually nothing you need to do (the bss decrypted section exists even if CONFIG_AMD_MEM_ENCRYPT is not configured)?
AFAICS, only kvmclock uses __bss_decrypted. We don't enable kvmclock in TDX at the moment. It may change in the future.
On 8/11/21 7:19 AM, Kirill A. Shutemov wrote:
On Tue, Aug 10, 2021 at 02:48:54PM -0500, Tom Lendacky wrote:
On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:
On 7/27/21 3:26 PM, Tom Lendacky wrote:
...
Since this change is specific to AMD, can you replace PATTR_MEM_ENCRYPT with prot_guest_has(PATTR_SME) || prot_guest_has(PATTR_SEV)? It is not used in TDX.
This is a direct replacement for now.
With the current implementation of prot_guest_has() for TDX, it breaks boot for me.
Looking at the code again, now I *think* the reason is accessing a global variable from __startup_64() inside the TDX version of prot_guest_has().
__startup_64() is special. If you access any global variable you need to use fixup_pointer(). See comment before __startup_64().
I'm not sure how you get away with accessing sme_me_mask directly from there. Any clues? Maybe it's just luck and the compiler generates code just right for your case, I don't know.
Hmm... yeah, could be that the compiler is using rip-relative addressing for it because it lives in the .data section?
For the static variables in mem_encrypt_identity.c I did an assembler rip-relative LEA, but probably could have passed physaddr to sme_enable() and used a fixup_pointer()-style function instead.
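That approach looks roughly like the following, modeled on sme_enable() in arch/x86/mm/mem_encrypt_identity.c (the variable shown is illustrative):

	const char *cmdline_arg;

	/*
	 * Force rip-relative addressing for the symbol so the access
	 * works before relocations are applied, independent of how the
	 * compiler would otherwise address it.
	 */
	asm ("lea sme_cmdline_arg(%%rip), %0"
	     : "=r" (cmdline_arg)
	     : "p" (sme_cmdline_arg));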
A separate point is that the TDX version of prot_guest_has() relies on cpu_feature_enabled(), which is not ready at this point.
Does TDX have to do anything special to make memory able to be shared with the hypervisor? You might have to use something that is available earlier than cpu_feature_enabled() in that case (should you eventually support kvmclock).
I think the __bss_decrypted fixup has to be done if sme_me_mask is non-zero. Or just do it unconditionally because it's a NOP for sme_me_mask == 0.
For SNP, we'll have to additionally call the HV to update the RMP to make the memory shared. But that could also be done unconditionally since the early_snp_set_memory_shared() routine will check for SNP before doing anything.
Thanks, Tom
On Wed, Aug 11, 2021 at 10:52:55AM -0500, Tom Lendacky wrote:
On 8/11/21 7:19 AM, Kirill A. Shutemov wrote:
On Tue, Aug 10, 2021 at 02:48:54PM -0500, Tom Lendacky wrote:
On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:
On 7/27/21 3:26 PM, Tom Lendacky wrote:
...
Since this change is specific to AMD, can you replace PATTR_MEM_ENCRYPT with prot_guest_has(PATTR_SME) || prot_guest_has(PATTR_SEV)? It is not used in TDX.
This is a direct replacement for now.
With the current implementation of prot_guest_has() for TDX, it breaks boot for me.
Looking at the code again, now I *think* the reason is accessing a global variable from __startup_64() inside the TDX version of prot_guest_has().
__startup_64() is special. If you access any global variable you need to use fixup_pointer(). See comment before __startup_64().
I'm not sure how you get away with accessing sme_me_mask directly from there. Any clues? Maybe it's just luck and the compiler generates code just right for your case, I don't know.
Hmm... yeah, could be that the compiler is using rip-relative addressing for it because it lives in the .data section?
I guess. It has to be fixed. It may break with a compiler upgrade or any random change around the code.
BTW, does it work with clang for you?
For the static variables in mem_encrypt_identity.c I did an assembler rip-relative LEA, but probably could have passed physaddr to sme_enable() and used a fixup_pointer()-style function instead.
Sounds like a plan.
A separate point is that the TDX version of prot_guest_has() relies on cpu_feature_enabled(), which is not ready at this point.
Does TDX have to do anything special to make memory able to be shared with the hypervisor?
Yes. But there's nothing that required any changes in early boot. It's handled in ioremap/set_memory.
You might have to use something that is available earlier than cpu_feature_enabled() in that case (should you eventually support kvmclock).
Maybe.
On 8/12/21 5:07 AM, Kirill A. Shutemov wrote:
On Wed, Aug 11, 2021 at 10:52:55AM -0500, Tom Lendacky wrote:
On 8/11/21 7:19 AM, Kirill A. Shutemov wrote:
On Tue, Aug 10, 2021 at 02:48:54PM -0500, Tom Lendacky wrote:
On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:
...
Looking at the code again, now I *think* the reason is accessing a global variable from __startup_64() inside the TDX version of prot_guest_has().
__startup_64() is special. If you access any global variable you need to use fixup_pointer(). See comment before __startup_64().
I'm not sure how you get away with accessing sme_me_mask directly from there. Any clues? Maybe it's just luck and the compiler generates code just right for your case, I don't know.
Hmm... yeah, could be that the compiler is using rip-relative addressing for it because it lives in the .data section?
I guess. It has to be fixed. It may break with a compiler upgrade or any random change around the code.
I'll look at doing that separate from this series.
BTW, does it work with clang for you?
I haven't tried with clang, I'll check on that.
Thanks, Tom
On 8/13/21 12:08 PM, Tom Lendacky wrote:
On 8/12/21 5:07 AM, Kirill A. Shutemov wrote:
On Wed, Aug 11, 2021 at 10:52:55AM -0500, Tom Lendacky wrote:
On 8/11/21 7:19 AM, Kirill A. Shutemov wrote:
On Tue, Aug 10, 2021 at 02:48:54PM -0500, Tom Lendacky wrote:
On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:
...
Looking at the code again, now I *think* the reason is accessing a global variable from __startup_64() inside the TDX version of prot_guest_has().
__startup_64() is special. If you access any global variable you need to use fixup_pointer(). See comment before __startup_64().
I'm not sure how you get away with accessing sme_me_mask directly from there. Any clues? Maybe it's just luck and the compiler generates code just right for your case, I don't know.
Hmm... yeah, could be that the compiler is using rip-relative addressing for it because it lives in the .data section?
I guess. It has to be fixed. It may break with a compiler upgrade or any random change around the code.
I'll look at doing that separate from this series.
BTW, does it work with clang for you?
I haven't tried with clang, I'll check on that.
Just as an FYI, clang also uses rip-relative addressing for those variables. No issues booting SME and SEV guests built with clang.
Thanks, Tom
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation.
Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- include/linux/mem_encrypt.h | 4 ---- 1 file changed, 4 deletions(-)
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h index 5c4a18a91f89..ae4526389261 100644 --- a/include/linux/mem_encrypt.h +++ b/include/linux/mem_encrypt.h @@ -16,10 +16,6 @@
#include <asm/mem_encrypt.h>
-#else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */ - -static inline bool mem_encrypt_active(void) { return false; } - #endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
#ifdef CONFIG_AMD_MEM_ENCRYPT
On Tue, Jul 27, 2021 at 05:26:11PM -0500, Tom Lendacky wrote:
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation.
Signed-off-by: Tom Lendacky thomas.lendacky@amd.com
Reviewed-by: Joerg Roedel jroedel@suse.de
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation.
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/x86/include/asm/mem_encrypt.h | 5 ----- 1 file changed, 5 deletions(-)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 797146e0cd6b..94c089e9ea69 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -97,11 +97,6 @@ static inline void mem_encrypt_free_decrypted_mem(void) { }
extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];
-static inline bool mem_encrypt_active(void) -{ - return sme_me_mask; -} - static inline u64 sme_get_me_mask(void) { return sme_me_mask;
On Tue, Jul 27, 2021 at 05:26:12PM -0500, Tom Lendacky wrote:
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation.
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Signed-off-by: Tom Lendacky thomas.lendacky@amd.com
Reviewed-by: Joerg Roedel jroedel@suse.de
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation.
Cc: Michael Ellerman mpe@ellerman.id.au Cc: Benjamin Herrenschmidt benh@kernel.crashing.org Cc: Paul Mackerras paulus@samba.org Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/powerpc/include/asm/mem_encrypt.h | 5 ----- 1 file changed, 5 deletions(-)
diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h index ba9dab07c1be..2f26b8fc8d29 100644 --- a/arch/powerpc/include/asm/mem_encrypt.h +++ b/arch/powerpc/include/asm/mem_encrypt.h @@ -10,11 +10,6 @@
#include <asm/svm.h>
-static inline bool mem_encrypt_active(void) -{ - return is_secure_guest(); -} - static inline bool force_dma_unencrypted(struct device *dev) { return is_secure_guest();
The mem_encrypt_active() function has been replaced by prot_guest_has(), so remove the implementation. Since the default implementation of prot_guest_has() matches the s390 implementation of mem_encrypt_active(), prot_guest_has() does not need to be implemented on s390 (the config option ARCH_HAS_PROTECTED_GUEST is not set).
Cc: Heiko Carstens hca@linux.ibm.com Cc: Vasily Gorbik gor@linux.ibm.com Cc: Christian Borntraeger borntraeger@de.ibm.com Signed-off-by: Tom Lendacky thomas.lendacky@amd.com --- arch/s390/include/asm/mem_encrypt.h | 2 -- 1 file changed, 2 deletions(-)
diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h index 2542cbf7e2d1..08a8b96606d7 100644 --- a/arch/s390/include/asm/mem_encrypt.h +++ b/arch/s390/include/asm/mem_encrypt.h @@ -4,8 +4,6 @@
#ifndef __ASSEMBLY__
-static inline bool mem_encrypt_active(void) { return false; } - int set_memory_encrypted(unsigned long addr, int numpages); int set_memory_decrypted(unsigned long addr, int numpages);
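The generic fallback being relied on here is presumably of this shape (a sketch of the !CONFIG_ARCH_HAS_PROTECTED_GUEST case in include/linux/protected_guest.h, inferred from the commit message above):

	#ifndef CONFIG_ARCH_HAS_PROTECTED_GUEST
	/*
	 * No protected-guest support: every attribute query answers
	 * false, matching the s390 mem_encrypt_active() stub removed
	 * above.
	 */
	static inline bool prot_guest_has(unsigned int attr)
	{
		return false;
	}
	#endif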
On 7/27/21 5:26 PM, Tom Lendacky wrote:
This patch series provides a generic helper function, prot_guest_has(), to replace the sme_active(), sev_active(), sev_es_active() and mem_encrypt_active() functions.
It is expected that as new protected virtualization technologies are added to the kernel, they can all be covered by a single function call instead of a collection of specific function calls all called from the same locations.
The powerpc and s390 patches have been compile tested only. Can the folks copied on this series verify that nothing breaks for them.
I wanted to get this out before I head out on vacation at the end of the week. I'll only be out for a week, but I won't be able to respond to any feedback until I get back.
I'm still not a fan of the name prot_guest_has() because it is used for some baremetal checks, but I really haven't been able to come up with anything better. So take it with a grain of salt where the sme_active() calls are replaced by prot_guest_has().
Also, let me know if the treewide changes in patch #7 need to be further split out by tree.
Thanks, Tom
On 28.07.21 at 00:26, Tom Lendacky wrote:
This patch series provides a generic helper function, prot_guest_has(), to replace the sme_active(), sev_active(), sev_es_active() and mem_encrypt_active() functions.
It is expected that as new protected virtualization technologies are added to the kernel, they can all be covered by a single function call instead of a collection of specific function calls all called from the same locations.
The powerpc and s390 patches have been compile tested only. Can the folks copied on this series verify that nothing breaks for them.
As a GPU driver dev I'm only one end user of this, but at least from the high-level point of view this makes total sense to me.
Feel free to add an Acked-by: Christian König christian.koenig@amd.com.
We could run that through the AMD GPU unit tests, but I fear we actually don't test on a system with SEV/SME active.
Going to raise that on our team call today.
Regards, Christian.
Hi Tom,
On 7/27/21 3:26 PM, Tom Lendacky wrote:
This patch series provides a generic helper function, prot_guest_has(), to replace the sme_active(), sev_active(), sev_es_active() and mem_encrypt_active() functions.
It is expected that as new protected virtualization technologies are added to the kernel, they can all be covered by a single function call instead of a collection of specific function calls all called from the same locations.
The powerpc and s390 patches have been compile tested only. Can the folks copied on this series verify that nothing breaks for them.
With this patch set, selecting ARCH_HAS_PROTECTED_GUEST and setting CONFIG_AMD_MEM_ENCRYPT=n creates the following error.
ld: arch/x86/mm/ioremap.o: in function `early_memremap_is_setup_data': arch/x86/mm/ioremap.c:672: undefined reference to `early_memremap_decrypted'
It looks like early_memremap_is_setup_data() is not protected with the appropriate config.
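One possible shape of the fix would be a fallback stub so callers need no #ifdef of their own; this is a sketch of the idea only, not the eventual fix:

	/* Sketch: fall back to a normal early mapping when memory
	 * encryption support is not built in. */
	#ifndef CONFIG_AMD_MEM_ENCRYPT
	static inline void *early_memremap_decrypted(resource_size_t phys_addr,
						     unsigned long size)
	{
		return early_memremap(phys_addr, size);
	}
	#endif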
On 8/8/21 8:41 PM, Kuppuswamy, Sathyanarayanan wrote:
Hi Tom,
On 7/27/21 3:26 PM, Tom Lendacky wrote:
This patch series provides a generic helper function, prot_guest_has(), to replace the sme_active(), sev_active(), sev_es_active() and mem_encrypt_active() functions.
It is expected that as new protected virtualization technologies are added to the kernel, they can all be covered by a single function call instead of a collection of specific function calls all called from the same locations.
The powerpc and s390 patches have been compile tested only. Can the folks copied on this series verify that nothing breaks for them.
With this patch set, selecting ARCH_HAS_PROTECTED_GUEST while setting CONFIG_AMD_MEM_ENCRYPT=n creates the following error:
ld: arch/x86/mm/ioremap.o: in function `early_memremap_is_setup_data': arch/x86/mm/ioremap.c:672: undefined reference to `early_memremap_decrypted'
It looks like early_memremap_is_setup_data() is not protected by the appropriate config option.
Ok, thanks for finding that. I'll fix that.
Thanks, Tom
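For readers following along: early_memremap_decrypted() is defined in arch/x86/mm/mem_encrypt.c, which is only built when CONFIG_AMD_MEM_ENCRYPT=y, so an unguarded caller in arch/x86/mm/ioremap.c fails to link in the configuration Sathya describes. Below is a minimal sketch of one way such a link failure can be avoided; it is illustrative only and not necessarily the fix Tom applied in a later revision. It assumes a stub in the x86 headers is acceptable and that, with memory encryption compiled out, there is no encryption mask to clear:

#ifdef CONFIG_AMD_MEM_ENCRYPT
void __init *early_memremap_decrypted(resource_size_t phys_addr,
				      unsigned long size);
#else
/*
 * Hypothetical stub: with CONFIG_AMD_MEM_ENCRYPT=n no pages are mapped
 * encrypted in the first place, so a plain early_memremap() gives the
 * same (unencrypted) mapping the caller asked for.
 */
static inline void *early_memremap_decrypted(resource_size_t phys_addr,
					     unsigned long size)
{
	return early_memremap(phys_addr, size);
}
#endif

The alternative, of course, is to keep the call site in early_memremap_is_setup_data() under the same config guard as the definition.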