From: Jian J Wang <jian.j.wang@intel.com>
To: edk2-devel@lists.01.org
Date: Mon, 4 Dec 2017 16:35:56 +0800
Message-Id: <20171204083556.19416-5-jian.j.wang@intel.com>
In-Reply-To: <20171204083556.19416-1-jian.j.wang@intel.com>
References: <20171204083556.19416-1-jian.j.wang@intel.com>
Subject: [edk2] [PATCH v2 4/4] UefiCpuPkg/CpuDxe: Enable protection for newly
added page table
Cc: Ruiyu Ni, Laszlo Ersek, Jiewen Yao, Eric Dong

> v2:
>   Use the page table pool to allocate new page tables and save code
>   to enable protection for them separately.

One of the functionalities of CpuDxe is to update memory paging attributes.
If page table protection is applied, it must be disabled temporarily before
any attribute update and enabled again afterwards.

The other job of this patch is to reuse the page table pool reserved in
DxeIpl, if there are still free pages in it. Otherwise, the driver reserves
another block of memory (whose size and alignment are specified by
PcdPageTablePoolUnitSize and PcdPageTablePoolAlignment) as a new page table
pool. Protection is applied to the whole pool rather than to individual
table pages. As in DxeIpl, this helps to reduce potential page "split"
operations and recursive calls of SetMemorySpaceAttributes().
Cc: Jiewen Yao
Cc: Eric Dong
Cc: Laszlo Ersek
Cc: Ruiyu Ni
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang
---
 UefiCpuPkg/CpuDxe/CpuDxe.c       |  17 +-
 UefiCpuPkg/CpuDxe/CpuDxe.h       |   2 +
 UefiCpuPkg/CpuDxe/CpuDxe.inf     |   3 +
 UefiCpuPkg/CpuDxe/CpuPageTable.c | 329 ++++++++++++++++++++++++++++++++++++++-
 UefiCpuPkg/CpuDxe/CpuPageTable.h |  22 +++
 5 files changed, 364 insertions(+), 9 deletions(-)

diff --git a/UefiCpuPkg/CpuDxe/CpuDxe.c b/UefiCpuPkg/CpuDxe/CpuDxe.c
index 8ddebabd02..6ae2dcd1c7 100644
--- a/UefiCpuPkg/CpuDxe/CpuDxe.c
+++ b/UefiCpuPkg/CpuDxe/CpuDxe.c
@@ -25,6 +25,7 @@
 BOOLEAN                   InterruptState = FALSE;
 EFI_HANDLE                mCpuHandle = NULL;
 BOOLEAN                   mIsFlushingGCD;
+BOOLEAN                   mIsAllocatingPageTable = FALSE;
 UINT64                    mValidMtrrAddressMask;
 UINT64                    mValidMtrrBitsMask;
 UINT64                    mTimerPeriod = 0;
@@ -407,6 +408,20 @@ CpuSetMemoryAttributes (
     return EFI_SUCCESS;
   }

+  //
+  // During memory attribute updates, new pages may be allocated to set up a
+  // smaller page table granularity. Such a page allocation can then trigger
+  // a recursive call of CpuSetMemoryAttributes(), due to the configured
+  // memory protection policy (such as PcdDxeNxMemoryProtectionPolicy).
+  // Since this driver always protects memory used as page tables by itself,
+  // there is no need to apply the protection policy requested by the memory
+  // service. So it is safe to just return EFI_SUCCESS if this invocation was
+  // caused by page table memory allocation.
+  //
+  if (mIsAllocatingPageTable) {
+    DEBUG ((DEBUG_VERBOSE, "  Allocating page table memory\n"));
+    return EFI_SUCCESS;
+  }

   CacheAttributes = Attributes & CACHE_ATTRIBUTE_MASK;
   MemoryAttributes = Attributes & MEMORY_ATTRIBUTE_MASK;
@@ -487,7 +502,7 @@ CpuSetMemoryAttributes (
   //
   // Set memory attribute by page table
   //
-  return AssignMemoryPageAttributes (NULL, BaseAddress, Length, MemoryAttributes, AllocatePages);
+  return AssignMemoryPageAttributes (NULL, BaseAddress, Length, MemoryAttributes, NULL);
 }

 /**
diff --git a/UefiCpuPkg/CpuDxe/CpuDxe.h b/UefiCpuPkg/CpuDxe/CpuDxe.h
index 9c0d22359d..540f5f2dbf 100644
--- a/UefiCpuPkg/CpuDxe/CpuDxe.h
+++ b/UefiCpuPkg/CpuDxe/CpuDxe.h
@@ -273,5 +273,7 @@ RefreshGcdMemoryAttributesFromPaging (
   VOID
   );

+extern BOOLEAN mIsAllocatingPageTable;
+
 #endif

diff --git a/UefiCpuPkg/CpuDxe/CpuDxe.inf b/UefiCpuPkg/CpuDxe/CpuDxe.inf
index 3e8d196739..0a45285427 100644
--- a/UefiCpuPkg/CpuDxe/CpuDxe.inf
+++ b/UefiCpuPkg/CpuDxe/CpuDxe.inf
@@ -74,6 +74,7 @@ [Guids]
   gIdleLoopEventGuid                     ## CONSUMES ## Event
   gEfiVectorHandoffTableGuid             ## SOMETIMES_CONSUMES ## SystemTable
+  gPageTablePoolGuid                     ## CONSUMES

 [Ppis]
   gEfiSecPlatformInformation2PpiGuid     ## UNDEFINED # HOB
@@ -81,6 +82,8 @@

 [Pcd]
   gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask  ## CONSUMES
+  gEfiMdeModulePkgTokenSpaceGuid.PcdPageTablePoolUnitSize             ## CONSUMES
+  gEfiMdeModulePkgTokenSpaceGuid.PcdPageTablePoolAlignment            ## CONSUMES

 [Depex]
   TRUE
diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c
index 9658ed74c5..03b54a2111 100644
--- a/UefiCpuPkg/CpuDxe/CpuPageTable.c
+++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c
@@ -62,6 +62,9 @@
 #define PAGING_2M_ADDRESS_MASK_64  0x000FFFFFFFE00000ull
 #define PAGING_1G_ADDRESS_MASK_64  0x000FFFFFC0000000ull

+#define PAGE_TABLE_POOL_ALIGN_MASK \
+  (~(EFI_PHYSICAL_ADDRESS)(FixedPcdGet32 (PcdPageTablePoolAlignment) - 1))
+
 typedef enum {
   PageNone,
   Page4K,
@@ -87,6 +90,8 @@ PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
   {Page1G,  SIZE_1GB, PAGING_1G_ADDRESS_MASK_64},
 };

+PAGE_TABLE_POOL_HEADER  *mPageTablePool = NULL;
+
 /**
   Enable write protection function for AP.

@@ -172,10 +177,6 @@ GetCurrentPagingContext (
   }
   if ((AsmReadCr0 () & BIT31) != 0) {
     PagingContext->ContextData.X64.PageTableBase = (AsmReadCr3 () & PAGING_4K_ADDRESS_MASK_64);
-    if ((AsmReadCr0 () & BIT16) == 0) {
-      AsmWriteCr0 (AsmReadCr0 () | BIT16);
-      SyncMemoryPageAttributesAp (SyncCpuEnableWriteProtection);
-    }
   } else {
     PagingContext->ContextData.X64.PageTableBase = 0;
   }
@@ -561,6 +562,59 @@ SplitPage (
   }
 }

+/**
+  Check the WP status in the CR0 register. This bit is used to lock or unlock
+  write access to pages marked as read-only.
+
+  @retval TRUE   Write protection is enabled.
+  @retval FALSE  Write protection is disabled.
+**/
+BOOLEAN
+IsReadOnlyPageWriteProtected (
+  VOID
+  )
+{
+  return ((AsmReadCr0 () & BIT16) != 0);
+}
+
+/**
+  Disable write protection function for AP.
+
+  @param[in,out] Buffer  The pointer to private data buffer.
+**/
+VOID
+EFIAPI
+SyncCpuDisableWriteProtection (
+  IN OUT VOID  *Buffer
+  )
+{
+  AsmWriteCr0 (AsmReadCr0 () & ~BIT16);
+}
+
+/**
+  Disable Write Protect on pages marked as read-only.
+**/
+VOID
+DisableReadOnlyPageWriteProtect (
+  VOID
+  )
+{
+  AsmWriteCr0 (AsmReadCr0 () & ~BIT16);
+  SyncMemoryPageAttributesAp (SyncCpuDisableWriteProtection);
+}
+
+/**
+  Enable Write Protect on pages marked as read-only.
+**/
+VOID
+EnableReadOnlyPageWriteProtect (
+  VOID
+  )
+{
+  AsmWriteCr0 (AsmReadCr0 () | BIT16);
+  SyncMemoryPageAttributesAp (SyncCpuEnableWriteProtection);
+}
+
 /**
   This function modifies the page attributes for the memory region specified
   by BaseAddress and Length from their current attributes to the attributes
   specified by Attributes.
@@ -609,6 +663,7 @@ ConvertMemoryPageAttributes (
   PAGE_ATTRIBUTE                    SplitAttribute;
   RETURN_STATUS                     Status;
   BOOLEAN                           IsEntryModified;
+  BOOLEAN                           IsWpEnabled;

   if ((BaseAddress & (SIZE_4KB - 1)) != 0) {
     DEBUG ((DEBUG_ERROR, "BaseAddress(0x%lx) is not aligned!\n", BaseAddress));
@@ -665,14 +720,27 @@ ConvertMemoryPageAttributes (
   if (IsModified != NULL) {
     *IsModified = FALSE;
   }
+  if (AllocatePagesFunc == NULL) {
+    AllocatePagesFunc = AllocatePageTableMemory;
+  }
+
+  //
+  // Make sure that the page table is changeable.
+  //
+  IsWpEnabled = IsReadOnlyPageWriteProtected ();
+  if (IsWpEnabled) {
+    DisableReadOnlyPageWriteProtect ();
+  }

   //
   // Below logic is to check 2M/4K page to make sure we do not waste memory.
   //
+  Status = EFI_SUCCESS;
   while (Length != 0) {
     PageEntry = GetPageTableEntry (&CurrentPagingContext, BaseAddress, &PageAttribute);
     if (PageEntry == NULL) {
-      return RETURN_UNSUPPORTED;
+      Status = RETURN_UNSUPPORTED;
+      goto Done;
     }
     PageEntryLength = PageAttributeToLength (PageAttribute);
     SplitAttribute = NeedSplitPage (BaseAddress, Length, PageEntry, PageAttribute);
@@ -690,11 +758,13 @@ ConvertMemoryPageAttributes (
       Length -= PageEntryLength;
     } else {
       if (AllocatePagesFunc == NULL) {
-        return RETURN_UNSUPPORTED;
+        Status = RETURN_UNSUPPORTED;
+        goto Done;
       }
       Status = SplitPage (PageEntry, PageAttribute, SplitAttribute, AllocatePagesFunc);
       if (RETURN_ERROR (Status)) {
-        return RETURN_UNSUPPORTED;
+        Status = RETURN_UNSUPPORTED;
+        goto Done;
       }
       if (IsSplitted != NULL) {
         *IsSplitted = TRUE;
@@ -709,7 +779,14 @@ ConvertMemoryPageAttributes (
     }
   }

-  return RETURN_SUCCESS;
+Done:
+  //
+  // Restore page table write protection, if any.
+  //
+  if (IsWpEnabled) {
+    EnableReadOnlyPageWriteProtect ();
+  }
+  return Status;
 }

 /**
@@ -922,6 +999,230 @@ RefreshGcdMemoryAttributesFromPaging (
   FreePool (MemorySpaceMap);
 }

+/**
+  Try to find the page table pool reserved before.
+
+  Since the page table pool is always allocated at the boundary specified by
+  PcdPageTablePoolAlignment, we can always find the header address of the
+  pool containing the page table given by CR3, by walking down through the
+  aligned addresses below it.
+
+  @param[in] PagingContext  The paging context.
+
+  @retval Address  Address of the page table pool reserved before.
+  @retval NULL     The page table pool was not found.
+**/
+PAGE_TABLE_POOL_HEADER *
+FindPageTablePool (
+  IN PAGE_TABLE_LIB_PAGING_CONTEXT  *PagingContext
+  )
+{
+  PAGE_TABLE_POOL_HEADER  *Pool;
+  VOID                    *BaseAddress;
+  UINTN                   Index;
+
+  BaseAddress = (VOID *)(UINTN)PagingContext->ContextData.X64.PageTableBase;
+  for (Index = 0; Index < EFI_PAGE_SIZE / sizeof (UINT64); ++Index) {
+    //
+    // Because the pool header occupies one page, always check the address
+    // one page before.
+    //
+    Pool = (VOID *)(UINTN)(((UINTN)BaseAddress - EFI_PAGE_SIZE) &
+                           PAGE_TABLE_POOL_ALIGN_MASK);
+
+    //
+    // Check the signature.
+    //
+    if (!CompareMem (&Pool->Signature, &gPageTablePoolGuid, sizeof (EFI_GUID))) {
+      return Pool;
+    }
+
+    //
+    // Check the address at the previous alignment boundary.
+    //
+    BaseAddress = Pool;
+  }
+
+  return NULL;
+}
+
+/**
+  Initialize a buffer pool for page table use only.
+
+  To reduce the potential split operation on page tables, the pages reserved
+  for them should be allocated in multiples of 512 (= SIZE_2MB) and at the
+  boundary of SIZE_2MB. So the page pool is always initialized with a number
+  of pages greater than or equal to the given PoolPages.
+
+  Once the pages in the pool are used up, this method should be called again
+  to reserve at least another 512 pages. Usually this won't happen often in
+  practice.
+
+  The first time this method is called, however, it searches for the page
+  table pool reserved before (in DxeIpl) and tries to reuse it if there are
+  still free pages in it.
+
+  @param[in] PagingContext  The paging context.
+  @param[in] PoolPages      The least number of pages of the pool to be created.
+
+  @retval TRUE   The pool is initialized successfully.
+  @retval FALSE  The memory is out of resource.
+**/
+BOOLEAN
+InitializePageTablePool (
+  IN PAGE_TABLE_LIB_PAGING_CONTEXT  *PagingContext,
+  IN UINTN                          PoolPages
+  )
+{
+  VOID                    *Buffer;
+  BOOLEAN                 IsModified;
+  PAGE_TABLE_POOL_HEADER  *HeadPool;
+  PAGE_TABLE_POOL_HEADER  *Pool;
+  UINTN                   PoolUnitPages;
+
+  //
+  // There must be a page table pool already reserved in IPL or PEI. Reuse
+  // it if there are still pages available.
+  //
+  if (mPageTablePool == NULL) {
+    HeadPool = FindPageTablePool (PagingContext);
+
+    if (HeadPool != NULL) {
+      Pool = HeadPool;
+      do {
+        ASSERT (
+          !CompareMem (&Pool->Signature, &gPageTablePoolGuid, sizeof (EFI_GUID))
+          );
+
+        if (mPageTablePool == NULL ||
+            mPageTablePool->FreePages < Pool->FreePages) {
+          mPageTablePool = Pool;
+        }
+
+        Pool = (VOID *)(UINTN)Pool->NextPool;
+      } while (Pool != HeadPool);
+
+      //
+      // Enough pages available?
+      //
+      if (PoolPages <= mPageTablePool->FreePages) {
+        return TRUE;
+      }
+    }
+  }
+
+  //
+  // Reserve at least PcdPageTablePoolUnitSize, rounded up to a multiple of it.
+  //
+  PoolUnitPages = EFI_SIZE_TO_PAGES (PcdGet32 (PcdPageTablePoolUnitSize));
+  if (PoolPages <= PoolUnitPages) {
+    PoolPages = PoolUnitPages;
+  } else {
+    PoolPages = ((PoolPages - 1) / PoolUnitPages + 1) * PoolUnitPages;
+  }
+
+  //
+  // Set the guard flag to avoid a recursive call of SetMemoryAttributes.
+  // Protection will be applied in this method instead, before returning.
+  //
+  mIsAllocatingPageTable = TRUE;
+  Buffer = AllocateAlignedPages (
+             PoolPages,
+             FixedPcdGet32 (PcdPageTablePoolAlignment)
+             );
+  mIsAllocatingPageTable = FALSE;
+
+  if (Buffer == NULL) {
+    DEBUG ((DEBUG_ERROR, "ERROR: Out of pages aligned at 0x%x\n",
+            FixedPcdGet32 (PcdPageTablePoolAlignment)));
+    return FALSE;
+  }
+
+  if (mPageTablePool == NULL) {
+    mPageTablePool = Buffer;
+    mPageTablePool->NextPool = (EFI_PHYSICAL_ADDRESS)(UINTN)Buffer;
+  }
+
+  //
+  // Link all pools into a list.
+  //
+  ((PAGE_TABLE_POOL_HEADER *)Buffer)->NextPool = mPageTablePool->NextPool;
+  mPageTablePool->NextPool = (EFI_PHYSICAL_ADDRESS)(UINTN)Buffer;
+
+  //
+  // Reserve one page for the pool header.
+  //
+  mPageTablePool = Buffer;
+  CopyMem (&mPageTablePool->Signature, &gPageTablePoolGuid, sizeof (EFI_GUID));
+  mPageTablePool->FreePages = PoolPages - 1;
+  mPageTablePool->Offset    = EFI_PAGES_TO_SIZE (1);
+
+  //
+  // Mark the whole pool pages as read-only.
+  //
+  ConvertMemoryPageAttributes (
+    NULL,
+    (PHYSICAL_ADDRESS)(UINTN)Buffer,
+    EFI_PAGES_TO_SIZE (PoolPages),
+    EFI_MEMORY_RO,
+    PageActionSet,
+    AllocatePageTableMemory,
+    NULL,
+    &IsModified
+    );
+  ASSERT (IsModified == TRUE);
+
+  return TRUE;
+}
+
+/**
+  This API provides a way to allocate memory for page tables.
+
+  This API can be called more than once to allocate memory for page tables.
+
+  Allocates the number of 4KB pages and returns a pointer to the allocated
+  buffer. The buffer returned is aligned on a 4KB boundary.
+
+  If Pages is 0, then NULL is returned.
+  If there is not enough memory remaining to satisfy the request, then NULL
+  is returned.
+
+  @param Pages  The number of 4 KB pages to allocate.
+
+  @return A pointer to the allocated buffer or NULL if the allocation fails.
+
+**/
+VOID *
+EFIAPI
+AllocatePageTableMemory (
+  IN UINTN  Pages
+  )
+{
+  PAGE_TABLE_LIB_PAGING_CONTEXT  PagingContext;
+  VOID                           *Buffer;
+
+  if (Pages == 0) {
+    return NULL;
+  }
+
+  //
+  // Renew the pool if necessary.
+  //
+  if (mPageTablePool == NULL ||
+      Pages > mPageTablePool->FreePages) {
+    GetCurrentPagingContext (&PagingContext);
+    if (!InitializePageTablePool (&PagingContext, Pages)) {
+      return NULL;
+    }
+  }
+
+  Buffer = (UINT8 *)mPageTablePool + mPageTablePool->Offset;
+
+  mPageTablePool->Offset    += EFI_PAGES_TO_SIZE (Pages);
+  mPageTablePool->FreePages -= Pages;
+
+  return Buffer;
+}
+
 /**
   Initialize the Page Table lib.
 **/
@@ -933,6 +1234,18 @@ InitializePageTableLib (
   PAGE_TABLE_LIB_PAGING_CONTEXT  CurrentPagingContext;

   GetCurrentPagingContext (&CurrentPagingContext);
+
+  //
+  // Reserve memory for page tables for future use, if paging is enabled.
+  //
+  if (CurrentPagingContext.ContextData.X64.PageTableBase != 0 &&
+      (CurrentPagingContext.ContextData.Ia32.Attributes &
+       PAGE_TABLE_LIB_PAGING_CONTEXT_IA32_X64_ATTRIBUTES_PAE) != 0) {
+    DisableReadOnlyPageWriteProtect ();
+    InitializePageTablePool (&CurrentPagingContext, 0);
+    EnableReadOnlyPageWriteProtect ();
+  }
+
   DEBUG ((DEBUG_INFO, "CurrentPagingContext:\n", CurrentPagingContext.MachineType));
   DEBUG ((DEBUG_INFO, "  MachineType   - 0x%x\n", CurrentPagingContext.MachineType));
   DEBUG ((DEBUG_INFO, "  PageTableBase - 0x%x\n", CurrentPagingContext.ContextData.X64.PageTableBase));
diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.h b/UefiCpuPkg/CpuDxe/CpuPageTable.h
index eaff595b4c..0faa11830d 100644
--- a/UefiCpuPkg/CpuDxe/CpuPageTable.h
+++ b/UefiCpuPkg/CpuDxe/CpuPageTable.h
@@ -16,6 +16,7 @@
 #define _PAGE_TABLE_LIB_H_

 #include
+#include

 #define PAGE_TABLE_LIB_PAGING_CONTEXT_IA32_X64_ATTRIBUTES_PSE  BIT0
 #define PAGE_TABLE_LIB_PAGING_CONTEXT_IA32_X64_ATTRIBUTES_PAE  BIT1
@@ -110,4 +111,25 @@ InitializePageTableLib (
   VOID
   );

+/**
+  This API provides a way to allocate memory for page tables.
+
+  This API can be called more than once to allocate memory for page tables.
+
+  Allocates the number of 4KB pages of type EfiRuntimeServicesData and
+  returns a pointer to the allocated buffer. The buffer returned is aligned
+  on a 4KB boundary. If Pages is 0, then NULL is returned. If there is not
+  enough memory remaining to satisfy the request, then NULL is returned.
+
+  @param Pages  The number of 4 KB pages to allocate.
+
+  @return A pointer to the allocated buffer or NULL if the allocation fails.
+
+**/
+VOID *
+EFIAPI
+AllocatePageTableMemory (
+  IN UINTN  Pages
+  );
+
 #endif
-- 
2.14.1.windows.1

_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel