12.1 本篇概述
前面的所有篇章都是基於PC端的延遲渲染管線闡述UE的渲染體系的,特別是剖析虛幻渲染體系(04)- 延遲渲染管線詳盡地闡述了在PC端的延遲渲染管線的流程和步驟。
此篇主要針對UE的移動端渲染管線進行闡述,最後還會對比移動端和PC端的渲染差異,以及特殊的優化措施。本篇主要闡述UE渲染體系的以下內容:
- FMobileSceneRenderer的主要流程和步驟。
- 移動端的前向和延遲渲染管線。
- 移動端的光影和陰影。
- 移動端和PC端的異同,以及涉及的特殊優化技巧。
特別要指出的是,本篇分析的UE原始碼升級到了4.27.1,需要同步看原始碼的同學注意更新了。
如果要在PC的UE編輯器開啟移動端渲染管線,可以選擇如下所示的選單:
等待Shader編譯完成,UE編輯器的視口內便是移動端的預覽效果。
12.1.1 移動裝置的特點
相比PC桌面平臺,移動端在尺寸、電量、硬體效能等諸多方面都存在顯著的差異,具體表現在:
- 更小的尺寸。移動端的便攜性就要求整機裝置必須輕巧,可置於掌中或口袋內,所以整機只能限制在非常小的體積之內。
- 有限的能量和功率。受限於電池儲存技術,目前主流的鋰電池普遍在1萬毫安時以內,但移動裝置的解析度、畫質卻越來越高;為了滿足足夠長的續航並受散熱限制,必須嚴格控制移動裝置的整機功率,通常在5W以內。
- 散熱方式受限。PC裝置通常可以安裝散熱風扇、甚至水冷系統,而移動裝置不具備這些主動散熱方式,只能靠熱傳導散熱。如果散熱不當,CPU和GPU都會主動降頻,以非常有限的效能執行,以免裝置元器件因過熱而損毀。
- 有限的硬體效能。移動裝置的各類元件(CPU、頻寬、記憶體、GPU等)的效能都只是PC裝置的數十分之一。
2018年,主流PC裝置(NV GV100-400-A1 Titan V)和主流移動裝置(Samsung Exynos 9 8895)的效能對比圖。移動裝置的很多硬體效能只是PC裝置的幾十分之一,但解析度卻接近PC的一半,更加突顯了移動裝置的挑戰和窘境。
到了2020年,主流的移動裝置效能如下所示:
- 特殊的硬體架構。如CPU和GPU共享記憶體儲存裝置,被稱為耦合式架構,還有GPU的TB(D)R架構,目的都是為了在低功耗內完成儘可能多的操作。
PC裝置的解耦硬體架構和移動裝置的耦合式硬體架構對比圖。
除此之外,不同於PC端的CPU和GPU純粹地追求計算效能,衡量移動端的效能有三個指標:效能(Performance)、能量(Power)、面積(Area),俗稱PPA。(下圖)
衡量移動裝置的三個基本引數:Performance、Area、Power,其中Compute Density(計算密度)涉及效能和面積,能耗比涉及效能和能量消耗,這兩個衍生指標都是越大越好。
與移動裝置一起崛起的還有XR裝置,它是移動裝置的一個重要發展分支。目前存在著各種不同大小、功能、應用場景的XR裝置:
各種形式的XR裝置。
隨著近來元宇宙(Metaverse)的爆火,以及Facebook改名為Meta,加之Apple、Microsoft、NVIDIA、Google等科技巨頭都在加緊佈局面向未來的沉浸式體驗,XR裝置作為最接近元宇宙暢想的載體和入口,自然成為一條未來非常有潛力誕生巨無霸的全新賽道。
12.2 UE移動端渲染特性
本章闡述一下UE4.27在移動端的渲染特性。
12.2.1 Feature Level
UE在移動端支援以下圖形API:
Feature Level | 說明 |
---|---|
OpenGL ES 3.1 | 安卓系統的預設特性等級,可以在工程設定(Project Settings > Platforms > Android Material Quality - ES31)配置具體的材質引數。 |
Android Vulkan | 可用於某些特定Android裝置的高階渲染器,支援Vulkan 1.2 API;得益於Vulkan輕量級的設計理念,多數情況下會比OpenGL ES更高效。 |
Metal 2.0 | 專用於iOS裝置的特性等級。可以在Project Settings > Platforms > iOS Material Quality配置材質引數。 |
在目前的主流安卓裝置,使用Vulkan能獲得更好的效能,原因在於Vulkan輕量級的設計理念,使得UE等應用程式能夠更加精準地執行優化。下面是Vulkan和OpenGL的對照表:
Vulkan | OpenGL |
---|---|
基於物件的狀態,沒有全域性狀態。 | 單一的全域性狀態機。 |
所有的狀態概念都放置到命令緩衝區中。 | 狀態被繫結到單個上下文。 |
可以多執行緒編碼。 | 渲染操作只能被順序執行。 |
可以精確、顯式地操控GPU的記憶體和同步。 | GPU的記憶體和同步細節通常被驅動程式隱藏起來。 |
驅動程式沒有執行時錯誤檢測,但存在針對開發人員的驗證層。 | 廣泛的執行時錯誤檢測。 |
如果在Windows平臺,UE編輯器也可以啟動OpenGL、Vulkan、Metal的模擬器,以便在編輯器內預覽移動端效果,但可能跟實際執行裝置的畫面有差異,不可完全依賴此功能。
開啟Vulkan前需要在工程中配置一些引數,具體參看官方文件Android Vulkan Mobile Renderer。
另外,UE在早前幾個版本就移除了Windows下的OpenGL支援,雖然目前UE編輯器還存在OpenGL的模擬選項,但實際上底層是用D3D渲染的。
12.2.2 Deferred Shading
UE的Deferred Shading(延遲著色)是在4.26才加入的功能,使得開發者能夠在移動端實現較複雜的光影效果,諸如高質量反射、多動態光照、貼花、高階光照特性。
上:前向渲染;下:延遲渲染。
如果要在移動端開啟延遲渲染,需要在工程配置目錄下的DefaultEngine.ini新增`r.Mobile.ShadingPath=1`欄位,然後重啟編輯器。
12.2.3 Ground Truth Ambient Occlusion
Ground Truth Ambient Occlusion (GTAO)是接近現實世界的環境遮擋技術,是陰影的一種補償,能夠遮蔽一部分非直接光照,從而獲得良好的軟陰影效果。
開啟了GTAO的效果,注意機器人靠近牆面時,會在牆面留下具有漸變的軟陰影效果。
為了開啟GTAO,需要勾選以下所示的選項:
此外,GTAO依賴Mobile HDR選項;為了在對應的目標裝置上開啟,還需要在[Platform]Scalability.ini的配置中新增`r.Mobile.AmbientOcclusionQuality`欄位,並且值需要大於0,否則GTAO將被禁用。
值得注意的是,GTAO在Mali裝置上存在效能問題,因為它們的最大Compute Shader執行緒數量少於1024個。
12.2.4 Dynamic Lighting and Shadow
UE在移動端實現的光源特性有:
- 線性空間的HDR光照。
- 帶方向的光照圖(考慮了法線)。
- 太陽(平行光)支援距離場陰影 + 解析的鏡面高光。
- IBL光照:每個物體取樣了最近的一個反射捕捉器,而沒有視差校正。
- 動態物體能正確地接受光照,也可以投射陰影。
UE移動端支援的動態光源的型別、數量、陰影等資訊如下:
光源型別 | 最大數量 | 陰影 | 描述 |
---|---|---|---|
平行光 | 1 | CSM | CSM預設是2級,最多支援4級。 |
點光源 | 4 | 不支援 | 點光源陰影需要立方體陰影圖,而單Pass渲染立方體陰影(OnePassPointLightShadow)的技術需要GS(SM5才有)才支援。 |
聚光燈 | 4 | 支援 | 預設禁用,需要在工程中開啟。 |
區域光 | 0 | 不支援 | 目前不支援動態區域光照效果。 |
動態的聚光燈需要在工程配置中顯式開啟:
在移動BasePass的畫素著色器中,聚光燈陰影圖與CSM共享相同的紋理取樣器,並且聚光燈陰影和CSM使用相同的陰影圖圖集。CSM能夠保證有足夠的空間,而聚光燈將按陰影解析度排序。
預設情況下,可見陰影的最大數量被限制為8個,但可以通過改變`r.Mobile.MaxVisibleMovableSpotLightsShadow`的值來改變上限。聚光燈陰影的解析度則基於螢幕大小和`r.Shadow.TexelsPerPixelSpotlight`來計算。
在前向渲染路徑中,區域性光源(點光源和聚光燈)的總數不能超過4個。
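作為輔助理解,下面給出一個與引擎實現無關的簡化示意(獨立的C++程式碼,結構體與函式均為假設的示例,並非UE介面),演示這類"每個物體最多受4盞區域性光源影響"的限制通常如何處理——按距離排序後截斷;UE實際的光源收集與裁剪邏輯要複雜得多:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// 僅作示意的光源與物體包圍盒描述(非UE介面).
struct FLocalLight { float X, Y, Z, Radius; };
struct FObjectBounds { float X, Y, Z; };

// 挑選距離物體最近的至多MaxLights盞區域性光源, 其餘光源被直接丟棄.
std::vector<FLocalLight> SelectClosestLights(const FObjectBounds& Object,
                                             std::vector<FLocalLight> Lights,
                                             std::size_t MaxLights = 4)
{
    const auto DistSq = [&Object](const FLocalLight& L)
    {
        const float Dx = L.X - Object.X, Dy = L.Y - Object.Y, Dz = L.Z - Object.Z;
        return Dx * Dx + Dy * Dy + Dz * Dz;
    };
    std::sort(Lights.begin(), Lights.end(),
              [&](const FLocalLight& A, const FLocalLight& B) { return DistSq(A) < DistSq(B); });
    if (Lights.size() > MaxLights)
    {
        Lights.resize(MaxLights); // 超出上限的光源不參與著色
    }
    return Lights;
}
```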
移動端還支援一種特殊的陰影模式,那就是調製陰影(Modulated Shadows),只能用於固定(Stationary)的平行光。開啟了調製陰影的效果圖如下:
調製陰影還支援改變陰影顏色和混合比例:
左:動態陰影;右:調製陰影。
移動端的陰影同樣支援自陰影、陰影質量等級(r.shadowquality)、深度偏移等引數的設定。
此外,移動端預設使用了GGX的高光反射,如果想切換到傳統的高光著色模型,可以在以下配置裡修改:
12.2.5 Pixel Projected Reflection
UE針對移動端做了一個優化版的SSR,被稱為Pixel Projected Reflection(PPR),其核心思想同樣是複用螢幕空間的畫素資訊。
PPR效果圖。
為了開啟PPR效果,需要滿足以下條件:
- 開啟MobileHDR選項。
- `r.Mobile.PixelProjectedReflectionQuality`的值大於0。
- 在Project Settings > Mobile中將Planar Reflection Mode設定成正確的模式:
Planar Reflection Mode有3個選項:
- Usual:平面反射Actor在所有平臺上的作用都是相同的。
- MobilePPR:平面反射Actor在PC/主機平臺上正常工作,但在移動平臺上使用PPR渲染。
- MobilePPRExclusive:平面反射Actor將只用於移動平臺上的PPR,為PC和Console專案留下了使用傳統SSR的空間。
預設只有高階移動裝置才會在[Project]Scalability.ini中開啟`r.Mobile.PixelProjectedReflectionQuality`。
12.2.6 Mesh Auto-Instancing
PC端的網格繪製管線已經支援了網格的自動例項化(Auto-Instancing)和合並繪製,這個特性可以極大提升渲染效能,4.27已經在移動端支援了這一特性。
若想開啟,則需要開啟工程配置目錄下的DefaultEngine.ini,新增以下欄位:
r.Mobile.SupportGPUScene=1
r.Mobile.UseGPUSceneTexture=1
重啟編輯器,等待Shader編譯完即可預覽效果。
由於需要GPUSceneTexture的支援,而Mali裝置的Uniform Buffer最大只有64KB,無法提供足夠大的空間,所以Mali裝置會使用紋理而非緩衝區來儲存GPUScene資料。
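根據上文描述的規則,可以用一小段獨立的C++示意這個儲存方式的決策(其中的函式與判斷條件僅是依據上文文字做的假設性簡化,並非引擎原始碼):

```cpp
#include <string>

// GPUScene資料的兩種儲存方式.
enum class EGPUSceneStorage { Buffer, Texture2D };

// Mali裝置的Uniform Buffer上限只有64KB, 裝不下GPUScene, 因此只能用紋理儲存;
// 其它廠商的裝置則可按r.Mobile.UseGPUSceneTexture的取值自行選擇.
EGPUSceneStorage ChooseGPUSceneStorage(const std::string& GpuVendorName, bool bUseGPUSceneTexture)
{
    const bool bIsMali = GpuVendorName.find("Mali") != std::string::npos;
    return (bIsMali || bUseGPUSceneTexture) ? EGPUSceneStorage::Texture2D
                                            : EGPUSceneStorage::Buffer;
}
```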
但也存在一些限制:
- 移動裝置上的自動例項化主要有利於CPU密集型專案,而不是GPU密集型專案。雖然啟用自動例項化不太可能對GPU密集型的專案造成損害,但也很難看到它帶來顯著的效能改進。
- 如果一款遊戲或應用需要大量記憶體,那麼關閉`r.Mobile.UseGPUSceneTexture`並改用緩衝區可能會更有好處,但這種方式無法在Mali裝置上正常執行。因此也可以通過裝置描述(Device Profile)只對非Mali裝置關閉`r.Mobile.UseGPUSceneTexture`,而Mali裝置仍然使用紋理。
自動例項化的有效性很大程度上取決於專案的具體規格和定位,建議建立一個啟用了自動例項化的構建並對其進行效能分析(Profiling),以確定是否能獲得實質性的效能提升。
12.2.7 Post Processing
由於移動裝置存在更慢的依賴紋理讀取(dependent texture read)、有限的硬體特性、特殊的硬體架構、額外的渲染目標解析、有限的頻寬等限制性因素,故而後處理在移動裝置上執行起來會比較耗效能,有些極端情況會卡住渲染管線。
儘管如此,某些畫質要求高的遊戲或應用依然非常依賴後處理的強勁表現力,以讓畫質再上幾個臺階,因此UE不會限制開發者使用後處理。
為了開啟後處理,必須先開啟MobileHDR選項:
開啟後處理之後,就可以在後處理體積(Post Process Volume)設定各種後處理效果。
在移動端可以支援的後處理有Mobile Tonemapper、Color Grading、Lens、Bloom、Dirt Mask、Auto Exposure、Lens Flares、Depth of Field等等。
為了獲得更好的效能,官方給出的建議是在移動端只開啟Bloom和TAA。
12.2.8 其它特性和限制
- Reflection Capture Compression
移動端支援反射捕捉器元件(Reflection Capture Component)的壓縮,可以減少Reflection Capture執行時的記憶體和頻寬,提升渲染效率。需要在工程配置中開啟:
開啟之後,預設使用ETC2進行壓縮。另外,也可以針對每個Reflection Capture Component進行調整:
- 材質特性
移動平臺上的材質(特性級別OpenGL ES 3.1)使用與其他平臺相同的基於節點的建立流程,並且絕大多數節點在移動端都支援。
移動平臺支援的材質屬性有:BaseColor、Roughness、Metallic、Specular、Normal、Emissive、Refraction,但不支援Scene Color表示式、Tessellation輸入、次表面散射著色模型。
移動平臺支援的材質存在一些限制:
- 由於硬體限制,只能使用16個紋理取樣器。
- 只有DefaultLit和Unlit著色模型可用。
- 應使用自定義UV(Customized UV)來避免依賴紋理讀取(即不在畫素著色器中對紋理UV做數學運算)。
- 半透明和Masked材質是極其耗效能的,建議儘量使用不透明材質。
- 深度淡出(Depth Fade)可以在iOS平臺的半透明材質中使用;但在硬體不支援從深度緩衝區獲取資料的平臺上,它不受支援,且會導致不可接受的效能成本。
材質屬性皮膚存在一些針對移動端的特殊選項:
這些屬性的說明如下:
- Mobile Separate Translucency:是否在移動端開啟單獨的半透明渲染紋理。
- Use Full Precision:是否使用全精度。若不勾選(即使用半精度),可以減少頻寬佔用和能耗、提升效能,但遠處的物體可能會出現精度不足的瑕疵:
左:全精度材質;右:半精度材質,遠處的太陽出現了瑕疵。
- Use Lightmap Directionality:是否開啟光照圖的方向性。若勾選,會同時考慮光照圖的方向和畫素法線,但會增加效能消耗。
- Use Alpha to Coverage:是否為Masked材質開啟Alpha to Coverage。若勾選,材質的不透明遮罩會被轉換為MSAA的覆蓋率,從而獲得更平滑的鏤空邊緣(需要開啟MSAA)。
- Fully Rough:是否強制材質完全粗糙。如果勾選,將省去部分高光相關的計算,極大提升此材質的渲染效率。
此外,移動端支援的網格型別有:
- Skeletal Mesh
- Static Mesh
- Landscape
- CPU particle sprites, particle meshes
除上述型別以外的其它網格型別都不被支援。其它限制還有(列表後附一個簡單的算術示意):
- 單個網格最多只能有65536個頂點,因為頂點索引只有16位。
- 單個Skeletal Mesh的骨骼數量必須在75個以內,因為受硬體效能的限制。
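這兩條限制背後是簡單的算術約束,下面用一段獨立的C++程式碼加以示意(結構體與函式均為示例,並非引擎介面):

```cpp
#include <cstdint>
#include <cstdio>

// 僅作示意的網格描述(非UE介面).
struct FMeshDesc
{
    uint32_t NumVertices;
    uint32_t NumBones;
};

// 16位索引最多能定址2^16 = 65536個頂點; 移動端骨骼數量上限為75.
bool IsMobileCompatible(const FMeshDesc& Mesh)
{
    const uint32_t MaxVerticesFor16BitIndex = 1u << 16; // 65536
    const uint32_t MaxMobileBones = 75;
    return Mesh.NumVertices <= MaxVerticesFor16BitIndex && Mesh.NumBones <= MaxMobileBones;
}

int main()
{
    const FMeshDesc Mesh{ 70000, 60 };
    std::printf("Mobile compatible: %s\n", IsMobileCompatible(Mesh) ? "yes" : "no");
    return 0;
}
```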
12.3 FMobileSceneRenderer
FMobileSceneRenderer繼承自FSceneRenderer,它負責移動端的場景渲染流程,而PC端是同樣繼承自FSceneRenderer的FDeferredShadingSceneRenderer。它們的繼承關係圖如下:
前述多篇文章已經提及了FDeferredShadingSceneRenderer,它的渲染流程尤為複雜,包含了複雜的光影和渲染步驟。相比之下,FMobileSceneRenderer的邏輯和步驟會簡單許多,下面是RenderDoc的截幀:
以上主要包含了InitViews、ShadowDepths、PrePass、BasePass、OcclusionTest、ShadowProjectionOnOpaque、Translucency、PostProcessing等步驟。這些步驟在PC端也都存在,但實現過程可能會有所不同,詳見後續章節的剖析。
12.3.1 渲染器主流程
移動端的場景渲染器的主流程也發生在`FMobileSceneRenderer::Render`中,程式碼和解析如下:
// Engine\Source\Runtime\Renderer\Private\MobileShadingRenderer.cpp
void FMobileSceneRenderer::Render(FRHICommandListImmediate& RHICmdList)
{
// 更新圖元場景資訊。
Scene->UpdateAllPrimitiveSceneInfos(RHICmdList);
// 準備檢視的渲染區域.
PrepareViewRectsForRendering(RHICmdList);
// 準備天空大氣的資料
if (ShouldRenderSkyAtmosphere(Scene, ViewFamily.EngineShowFlags))
{
for (int32 LightIndex = 0; LightIndex < NUM_ATMOSPHERE_LIGHTS; ++LightIndex)
{
if (Scene->AtmosphereLights[LightIndex])
{
PrepareSunLightProxy(*Scene->GetSkyAtmosphereSceneInfo(), LightIndex, *Scene->AtmosphereLights[LightIndex]);
}
}
}
else
{
Scene->ResetAtmosphereLightsProperties();
}
if(!ViewFamily.EngineShowFlags.Rendering)
{
return;
}
// 等待遮擋剔除測試.
WaitOcclusionTests(RHICmdList);
FRHICommandListExecutor::GetImmediateCommandList().PollOcclusionQueries();
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
// 初始化檢視, 查詢可見圖元, 準備渲染所需的RT和緩衝區等資料.
InitViews(RHICmdList);
if (GRHINeedsExtraDeletionLatency || !GRHICommandList.Bypass())
{
QUICK_SCOPE_CYCLE_COUNTER(STAT_FMobileSceneRenderer_PostInitViewsFlushDel);
// 可能會暫停遮擋查詢,所以最好在等待時讓RHI執行緒和GPU工作. 此外,當執行RHI執行緒時,這是唯一將處理掛起刪除的位置.
FRHICommandListExecutor::GetImmediateCommandList().PollOcclusionQueries();
FRHICommandListExecutor::GetImmediateCommandList().ImmediateFlush(EImmediateFlushType::FlushRHIThreadFlushResources);
}
GEngine->GetPreRenderDelegate().Broadcast();
// 在渲染開始前提交全域性動態緩衝.
DynamicIndexBuffer.Commit();
DynamicVertexBuffer.Commit();
DynamicReadBuffer.Commit();
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_SceneSim));
if (ViewFamily.bLateLatchingEnabled)
{
BeginLateLatching(RHICmdList);
}
FSceneRenderTargets& SceneContext = FSceneRenderTargets::Get(RHICmdList);
// 處理虛擬紋理
if (bUseVirtualTexturing)
{
SCOPED_GPU_STAT(RHICmdList, VirtualTextureUpdate);
FVirtualTextureSystem::Get().Update(RHICmdList, FeatureLevel, Scene);
// Clear virtual texture feedback to default value
FUnorderedAccessViewRHIRef FeedbackUAV = SceneContext.GetVirtualTextureFeedbackUAV();
RHICmdList.Transition(FRHITransitionInfo(FeedbackUAV, ERHIAccess::SRVMask, ERHIAccess::UAVMask));
RHICmdList.ClearUAVUint(FeedbackUAV, FUintVector4(~0u, ~0u, ~0u, ~0u));
RHICmdList.Transition(FRHITransitionInfo(FeedbackUAV, ERHIAccess::UAVMask, ERHIAccess::UAVMask));
RHICmdList.BeginUAVOverlap(FeedbackUAV);
}
// 已排序的光源資訊.
FSortedLightSetSceneInfo SortedLightSet;
// 延遲渲染.
if (bDeferredShading)
{
// 收集和排序光源.
GatherAndSortLights(SortedLightSet);
int32 NumReflectionCaptures = Views[0].NumBoxReflectionCaptures + Views[0].NumSphereReflectionCaptures;
bool bCullLightsToGrid = (NumReflectionCaptures > 0 || GMobileUseClusteredDeferredShading != 0);
FRDGBuilder GraphBuilder(RHICmdList);
// 計算光源格子.
ComputeLightGrid(GraphBuilder, bCullLightsToGrid, SortedLightSet);
GraphBuilder.Execute();
}
// 生成天空/大氣LUT.
const bool bShouldRenderSkyAtmosphere = ShouldRenderSkyAtmosphere(Scene, ViewFamily.EngineShowFlags);
if (bShouldRenderSkyAtmosphere)
{
FRDGBuilder GraphBuilder(RHICmdList);
RenderSkyAtmosphereLookUpTables(GraphBuilder);
GraphBuilder.Execute();
}
// 通知特效系統場景準備渲染.
if (FXSystem && ViewFamily.EngineShowFlags.Particles)
{
FXSystem->PreRender(RHICmdList, NULL, !Views[0].bIsPlanarReflection);
if (FGPUSortManager* GPUSortManager = FXSystem->GetGPUSortManager())
{
GPUSortManager->OnPreRender(RHICmdList);
}
}
// 輪詢遮擋剔除請求.
FRHICommandListExecutor::GetImmediateCommandList().PollOcclusionQueries();
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_Shadows));
// 渲染陰影.
RenderShadowDepthMaps(RHICmdList);
FRHICommandListExecutor::GetImmediateCommandList().PollOcclusionQueries();
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
// 收集檢視列表.
TArray<const FViewInfo*> ViewList;
for (int32 ViewIndex = 0; ViewIndex < Views.Num(); ViewIndex++)
{
ViewList.Add(&Views[ViewIndex]);
}
// 渲染自定義深度.
if (bShouldRenderCustomDepth)
{
FRDGBuilder GraphBuilder(RHICmdList);
FSceneTextureShaderParameters SceneTextures = CreateSceneTextureShaderParameters(GraphBuilder, Views[0].GetFeatureLevel(), ESceneTextureSetupMode::None);
RenderCustomDepthPass(GraphBuilder, SceneTextures);
GraphBuilder.Execute();
}
// 渲染深度PrePass.
if (bIsFullPrepassEnabled)
{
// SDF和AO需要完整的PrePass深度.
FRHIRenderPassInfo DepthPrePassRenderPassInfo(
SceneContext.GetSceneDepthSurface(),
EDepthStencilTargetActions::ClearDepthStencil_StoreDepthStencil);
DepthPrePassRenderPassInfo.NumOcclusionQueries = ComputeNumOcclusionQueriesToBatch();
DepthPrePassRenderPassInfo.bOcclusionQueries = DepthPrePassRenderPassInfo.NumOcclusionQueries != 0;
RHICmdList.BeginRenderPass(DepthPrePassRenderPassInfo, TEXT("DepthPrepass"));
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLM_MobilePrePass));
// 渲染完整的深度PrePass.
RenderPrePass(RHICmdList);
// 提交遮擋剔除.
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_Occlusion));
RenderOcclusion(RHICmdList);
RHICmdList.EndRenderPass();
// SDF陰影
if (bRequiresDistanceFieldShadowingPass)
{
CSV_SCOPED_TIMING_STAT_EXCLUSIVE(RenderSDFShadowing);
RenderSDFShadowing(RHICmdList);
}
// HZB.
if (bShouldRenderHZB)
{
RenderHZB(RHICmdList, SceneContext.SceneDepthZ);
}
// AO.
if (bRequiresAmbientOcclusionPass)
{
RenderAmbientOcclusion(RHICmdList, SceneContext.SceneDepthZ);
}
}
FRHITexture* SceneColor = nullptr;
// 延遲渲染.
if (bDeferredShading)
{
SceneColor = RenderDeferred(RHICmdList, ViewList, SortedLightSet);
}
// 前向渲染.
else
{
SceneColor = RenderForward(RHICmdList, ViewList);
}
// 渲染速度緩衝.
if (bShouldRenderVelocities)
{
FRDGBuilder GraphBuilder(RHICmdList);
FRDGTextureMSAA SceneDepthTexture = RegisterExternalTextureMSAA(GraphBuilder, SceneContext.SceneDepthZ);
FRDGTextureRef VelocityTexture = TryRegisterExternalTexture(GraphBuilder, SceneContext.SceneVelocity);
if (VelocityTexture != nullptr)
{
AddClearRenderTargetPass(GraphBuilder, VelocityTexture);
}
// 渲染可移動物體的速度緩衝.
AddSetCurrentStatPass(GraphBuilder, GET_STATID(STAT_CLMM_Velocity));
RenderVelocities(GraphBuilder, SceneDepthTexture.Resolve, VelocityTexture, FSceneTextureShaderParameters(), EVelocityPass::Opaque, false);
AddSetCurrentStatPass(GraphBuilder, GET_STATID(STAT_CLMM_AfterVelocity));
// 渲染透明物體的速度緩衝.
AddSetCurrentStatPass(GraphBuilder, GET_STATID(STAT_CLMM_TranslucentVelocity));
RenderVelocities(GraphBuilder, SceneDepthTexture.Resolve, VelocityTexture, GetSceneTextureShaderParameters(CreateMobileSceneTextureUniformBuffer(GraphBuilder, EMobileSceneTextureSetupMode::SceneColor)), EVelocityPass::Translucent, false);
GraphBuilder.Execute();
}
// 處理場景渲染後的邏輯.
{
FRendererModule& RendererModule = static_cast<FRendererModule&>(GetRendererModule());
FRDGBuilder GraphBuilder(RHICmdList);
RendererModule.RenderPostOpaqueExtensions(GraphBuilder, Views, SceneContext);
if (FXSystem && Views.IsValidIndex(0))
{
AddUntrackedAccessPass(GraphBuilder, [this](FRHICommandListImmediate& RHICmdList)
{
check(RHICmdList.IsOutsideRenderPass());
FXSystem->PostRenderOpaque(
RHICmdList,
Views[0].ViewUniformBuffer,
nullptr,
nullptr,
Views[0].AllowGPUParticleUpdate()
);
if (FGPUSortManager* GPUSortManager = FXSystem->GetGPUSortManager())
{
GPUSortManager->OnPostRenderOpaque(RHICmdList);
}
});
}
GraphBuilder.Execute();
}
// 重新整理/提交命令緩衝.
if (bSubmitOffscreenRendering)
{
RHICmdList.SubmitCommandsHint();
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
}
// 轉換場景顏色成SRV, 以供後續步驟讀取.
if (!bGammaSpace || bRenderToSceneColor)
{
RHICmdList.Transition(FRHITransitionInfo(SceneColor, ERHIAccess::Unknown, ERHIAccess::SRVMask));
}
if (bDeferredShading)
{
// 釋放場景渲染目標上的原始引用.
SceneContext.AdjustGBufferRefCount(RHICmdList, -1);
}
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_Post));
// 處理虛擬紋理.
if (bUseVirtualTexturing)
{
SCOPED_GPU_STAT(RHICmdList, VirtualTextureUpdate);
// No pass after this should make VT page requests
RHICmdList.EndUAVOverlap(SceneContext.VirtualTextureFeedbackUAV);
RHICmdList.Transition(FRHITransitionInfo(SceneContext.VirtualTextureFeedbackUAV, ERHIAccess::UAVMask, ERHIAccess::SRVMask));
TArray<FIntRect, TInlineAllocator<4>> ViewRects;
ViewRects.AddUninitialized(Views.Num());
for (int32 ViewIndex = 0; ViewIndex < Views.Num(); ++ViewIndex)
{
ViewRects[ViewIndex] = Views[ViewIndex].ViewRect;
}
FVirtualTextureFeedbackBufferDesc Desc;
Desc.Init2D(SceneContext.GetBufferSizeXY(), ViewRects, SceneContext.GetVirtualTextureFeedbackScale());
SubmitVirtualTextureFeedbackBuffer(RHICmdList, SceneContext.VirtualTextureFeedback, Desc);
}
FMemMark Mark(FMemStack::Get());
FRDGBuilder GraphBuilder(RHICmdList);
FRDGTextureRef ViewFamilyTexture = TryCreateViewFamilyTexture(GraphBuilder, ViewFamily);
// 解析場景
if (ViewFamily.bResolveScene)
{
if (!bGammaSpace || bRenderToSceneColor)
{
// 完成每個檢視的渲染, 或者(若啟用)完整的立體(Stereo)緩衝區
{
RDG_EVENT_SCOPE(GraphBuilder, "PostProcessing");
SCOPE_CYCLE_COUNTER(STAT_FinishRenderViewTargetTime);
TArray<TRDGUniformBufferRef<FMobileSceneTextureUniformParameters>, TInlineAllocator<1, SceneRenderingAllocator>> MobileSceneTexturesPerView;
MobileSceneTexturesPerView.SetNumZeroed(Views.Num());
const auto SetupMobileSceneTexturesPerView = [&]()
{
for (int32 ViewIndex = 0; ViewIndex < Views.Num(); ++ViewIndex)
{
EMobileSceneTextureSetupMode SetupMode = EMobileSceneTextureSetupMode::SceneColor;
if (Views[ViewIndex].bCustomDepthStencilValid)
{
SetupMode |= EMobileSceneTextureSetupMode::CustomDepth;
}
if (bShouldRenderVelocities)
{
SetupMode |= EMobileSceneTextureSetupMode::SceneVelocity;
}
MobileSceneTexturesPerView[ViewIndex] = CreateMobileSceneTextureUniformBuffer(GraphBuilder, SetupMode);
}
};
SetupMobileSceneTexturesPerView();
FMobilePostProcessingInputs PostProcessingInputs;
PostProcessingInputs.ViewFamilyTexture = ViewFamilyTexture;
// 渲染後處理效果.
for (int32 ViewIndex = 0; ViewIndex < Views.Num(); ViewIndex++)
{
RDG_EVENT_SCOPE_CONDITIONAL(GraphBuilder, Views.Num() > 1, "View%d", ViewIndex);
PostProcessingInputs.SceneTextures = MobileSceneTexturesPerView[ViewIndex];
AddMobilePostProcessingPasses(GraphBuilder, Views[ViewIndex], PostProcessingInputs, NumMSAASamples > 1);
}
}
}
}
GEngine->GetPostRenderDelegate().Broadcast();
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_SceneEnd));
if (bShouldRenderVelocities)
{
SceneContext.SceneVelocity.SafeRelease();
}
if (ViewFamily.bLateLatchingEnabled)
{
EndLateLatching(RHICmdList, Views[0]);
}
RenderFinish(GraphBuilder, ViewFamilyTexture);
GraphBuilder.Execute();
// 輪詢遮擋剔除請求.
FRHICommandListExecutor::GetImmediateCommandList().PollOcclusionQueries();
FRHICommandListExecutor::GetImmediateCommandList().ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
}
看過剖析虛幻渲染體系(04)- 延遲渲染管線篇章的同學應該都知道,移動端的場景渲染過程精簡了很多很多步驟,相當於是PC端場景渲染器的一個子集。當然,為了適應移動端特有的GPU硬體架構,移動端的場景渲染也有區別於PC端的地方。後面會詳細剖析。移動端場景的主要步驟和流程如下所示:
關於上面的流程圖,有以下幾點需要加以說明:
- 流程圖節點`bDeferredShading`和`bDeferredShading2`是同一個變數,這裡區分開主要是為了防止mermaid語法繪圖錯誤。
- 帶*的節點是有條件的,非必然執行的步驟。
UE4.26便加入了移動端的延遲渲染管線,所以上述程式碼中有前向渲染分支`RenderForward`和延遲渲染分支`RenderDeferred`,它們返回的都是渲染結果SceneColor。
移動端也支援了圖元GPU場景、SDF陰影、AO、天空大氣、虛擬紋理、遮擋剔除等渲染特性。
自UE4.26開始,渲染體系廣泛地使用了RDG系統,移動端的場景渲染器也不例外。上述程式碼中總共宣告了數個FRDGBuilder例項,用於計算光源格子,以及渲染天空大氣LUT、自定義深度、速度緩衝、渲染後置事件、後處理等,它們都是相對獨立的功能模組或渲染階段。
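把上述程式碼中反覆出現的RDG用法抽取出來,其基本模式大致如下(僅保留上文程式碼中已出現的呼叫,省略具體的Pass引數,只作結構示意):

```cpp
// 1. 以立即命令列表建立RDG構建器.
FRDGBuilder GraphBuilder(RHICmdList);
// 2. 把外部(非RDG管理)的渲染目標註冊進圖中.
FRDGTextureRef VelocityTexture = TryRegisterExternalTexture(GraphBuilder, SceneContext.SceneVelocity);
// 3. 向圖中新增若干Pass(此處以清屏Pass為例).
AddClearRenderTargetPass(GraphBuilder, VelocityTexture);
// 4. 編譯並執行整個渲染圖.
GraphBuilder.Execute();
```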
12.3.2 RenderForward
`RenderForward`是移動端場景渲染器中負責前向渲染的分支,它的程式碼和解析如下:
FRHITexture* FMobileSceneRenderer::RenderForward(FRHICommandListImmediate& RHICmdList, const TArrayView<const FViewInfo*> ViewList)
{
const FViewInfo& View = *ViewList[0];
FSceneRenderTargets& SceneContext = FSceneRenderTargets::Get(RHICmdList);
FRHITexture* SceneColor = nullptr;
FRHITexture* SceneColorResolve = nullptr;
FRHITexture* SceneDepth = nullptr;
ERenderTargetActions ColorTargetAction = ERenderTargetActions::Clear_Store;
EDepthStencilTargetActions DepthTargetAction = EDepthStencilTargetActions::ClearDepthStencil_DontStoreDepthStencil;
// 是否啟用移動端MSAA.
bool bMobileMSAA = NumMSAASamples > 1 && SceneContext.GetSceneColorSurface()->GetNumSamples() > 1;
// 是否啟用移動端多檢視(Multi-View)模式.
static const auto CVarMobileMultiView = IConsoleManager::Get().FindTConsoleVariableDataInt(TEXT("vr.MobileMultiView"));
const bool bIsMultiViewApplication = (CVarMobileMultiView && CVarMobileMultiView->GetValueOnAnyThread() != 0);
// gamma空間的渲染分支.
if (bGammaSpace && !bRenderToSceneColor)
{
// 如果開啟MSAA, 則從SceneContext獲取渲染紋理(包含場景顏色和解析紋理)
if (bMobileMSAA)
{
SceneColor = SceneContext.GetSceneColorSurface();
SceneColorResolve = ViewFamily.RenderTarget->GetRenderTargetTexture();
ColorTargetAction = ERenderTargetActions::Clear_Resolve;
RHICmdList.Transition(FRHITransitionInfo(SceneColorResolve, ERHIAccess::Unknown, ERHIAccess::RTV | ERHIAccess::ResolveDst));
}
// 非MSAA,從檢視家族獲取渲染紋理.
else
{
SceneColor = ViewFamily.RenderTarget->GetRenderTargetTexture();
RHICmdList.Transition(FRHITransitionInfo(SceneColor, ERHIAccess::Unknown, ERHIAccess::RTV));
}
SceneDepth = SceneContext.GetSceneDepthSurface();
}
// 線性空間或渲染到場景紋理.
else
{
SceneColor = SceneContext.GetSceneColorSurface();
if (bMobileMSAA)
{
SceneColorResolve = SceneContext.GetSceneColorTexture();
ColorTargetAction = ERenderTargetActions::Clear_Resolve;
RHICmdList.Transition(FRHITransitionInfo(SceneColorResolve, ERHIAccess::Unknown, ERHIAccess::RTV | ERHIAccess::ResolveDst));
}
else
{
SceneColorResolve = nullptr;
ColorTargetAction = ERenderTargetActions::Clear_Store;
}
SceneDepth = SceneContext.GetSceneDepthSurface();
if (bRequiresMultiPass)
{
// store targets after opaque so translucency render pass can be restarted
ColorTargetAction = ERenderTargetActions::Clear_Store;
DepthTargetAction = EDepthStencilTargetActions::ClearDepthStencil_StoreDepthStencil;
}
if (bKeepDepthContent)
{
// store depth if post-processing/capture needs it
DepthTargetAction = EDepthStencilTargetActions::ClearDepthStencil_StoreDepthStencil;
}
}
// prepass的深度紋理狀態.
if (bIsFullPrepassEnabled)
{
ERenderTargetActions DepthTarget = MakeRenderTargetActions(ERenderTargetLoadAction::ELoad, GetStoreAction(GetDepthActions(DepthTargetAction)));
ERenderTargetActions StencilTarget = MakeRenderTargetActions(ERenderTargetLoadAction::ELoad, GetStoreAction(GetStencilActions(DepthTargetAction)));
DepthTargetAction = MakeDepthStencilTargetActions(DepthTarget, StencilTarget);
}
FRHITexture* ShadingRateTexture = nullptr;
if (!View.bIsSceneCapture && !View.bIsReflectionCapture)
{
TRefCountPtr<IPooledRenderTarget> ShadingRateTarget = GVRSImageManager.GetMobileVariableRateShadingImage(ViewFamily);
if (ShadingRateTarget.IsValid())
{
ShadingRateTexture = ShadingRateTarget->GetRenderTargetItem().ShaderResourceTexture;
}
}
// 場景顏色渲染Pass資訊.
FRHIRenderPassInfo SceneColorRenderPassInfo(
SceneColor,
ColorTargetAction,
SceneColorResolve,
SceneDepth,
DepthTargetAction,
nullptr, // we never resolve scene depth on mobile
ShadingRateTexture,
VRSRB_Sum,
FExclusiveDepthStencil::DepthWrite_StencilWrite
);
SceneColorRenderPassInfo.SubpassHint = ESubpassHint::DepthReadSubpass;
if (!bIsFullPrepassEnabled)
{
SceneColorRenderPassInfo.NumOcclusionQueries = ComputeNumOcclusionQueriesToBatch();
SceneColorRenderPassInfo.bOcclusionQueries = SceneColorRenderPassInfo.NumOcclusionQueries != 0;
}
// 如果場景顏色不是多檢視, 但應用程式開啟了多檢視, 則需要以單檢視的多檢視方式渲染, 以滿足著色器的要求.
SceneColorRenderPassInfo.MultiViewCount = View.bIsMobileMultiViewEnabled ? 2 : (bIsMultiViewApplication ? 1 : 0);
// 開始渲染場景顏色.
RHICmdList.BeginRenderPass(SceneColorRenderPassInfo, TEXT("SceneColorRendering"));
if (GIsEditor && !View.bIsSceneCapture)
{
DrawClearQuad(RHICmdList, Views[0].BackgroundColor);
}
if (!bIsFullPrepassEnabled)
{
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLM_MobilePrePass));
// 渲染深度pre-pass
RenderPrePass(RHICmdList);
}
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_Opaque));
// 渲染BasePass: 不透明和masked物體.
RenderMobileBasePass(RHICmdList, ViewList);
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
//渲染除錯模式.
#if !(UE_BUILD_SHIPPING || UE_BUILD_TEST)
if (ViewFamily.UseDebugViewPS())
{
// Here we use the base pass depth result to get z culling for opaque and masque.
// The color needs to be cleared at this point since shader complexity renders in additive.
DrawClearQuad(RHICmdList, FLinearColor::Black);
RenderMobileDebugView(RHICmdList, ViewList);
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
}
#endif // !(UE_BUILD_SHIPPING || UE_BUILD_TEST)
const bool bAdrenoOcclusionMode = CVarMobileAdrenoOcclusionMode.GetValueOnRenderThread() != 0;
if (!bIsFullPrepassEnabled)
{
// 遮擋剔除
if (!bAdrenoOcclusionMode)
{
// 提交遮擋剔除
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_Occlusion));
RenderOcclusion(RHICmdList);
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
}
}
// 後置事件, 處理外掛渲染.
{
CSV_SCOPED_TIMING_STAT_EXCLUSIVE(ViewExtensionPostRenderBasePass);
QUICK_SCOPE_CYCLE_COUNTER(STAT_FMobileSceneRenderer_ViewExtensionPostRenderBasePass);
for (int32 ViewExt = 0; ViewExt < ViewFamily.ViewExtensions.Num(); ++ViewExt)
{
for (int32 ViewIndex = 0; ViewIndex < ViewFamily.Views.Num(); ++ViewIndex)
{
ViewFamily.ViewExtensions[ViewExt]->PostRenderBasePass_RenderThread(RHICmdList, Views[ViewIndex]);
}
}
}
// 如果需要渲染透明物體或畫素投影的反射, 則需要拆分pass.
if (bRequiresMultiPass || bRequiresPixelProjectedPlanarRelfectionPass)
{
RHICmdList.EndRenderPass();
}
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_Translucency));
// 如果需要, 則重新開啟透明渲染通道.
if (bRequiresMultiPass || bRequiresPixelProjectedPlanarRelfectionPass)
{
check(RHICmdList.IsOutsideRenderPass());
// 如果當前硬體不支援讀寫相同的深度緩衝區,則複製場景深度.
ConditionalResolveSceneDepth(RHICmdList, View);
if (bRequiresPixelProjectedPlanarRelfectionPass)
{
const FPlanarReflectionSceneProxy* PlanarReflectionSceneProxy = Scene ? Scene->GetForwardPassGlobalPlanarReflection() : nullptr;
RenderPixelProjectedReflection(RHICmdList, SceneContext, PlanarReflectionSceneProxy);
FRHITransitionInfo TranslucentRenderPassTransitions[] = {
FRHITransitionInfo(SceneColor, ERHIAccess::SRVMask, ERHIAccess::RTV),
FRHITransitionInfo(SceneDepth, ERHIAccess::SRVMask, ERHIAccess::DSVWrite)
};
RHICmdList.Transition(MakeArrayView(TranslucentRenderPassTransitions, UE_ARRAY_COUNT(TranslucentRenderPassTransitions)));
}
DepthTargetAction = EDepthStencilTargetActions::LoadDepthStencil_DontStoreDepthStencil;
FExclusiveDepthStencil::Type ExclusiveDepthStencil = FExclusiveDepthStencil::DepthRead_StencilRead;
if (bModulatedShadowsInUse)
{
ExclusiveDepthStencil = FExclusiveDepthStencil::DepthRead_StencilWrite;
}
// 用於移動端畫素投影反射的不透明網格必須將深度寫入深度RT, 因為只渲染一次網格(如果質量水平低於或等於BestPerformance).
if (IsMobilePixelProjectedReflectionEnabled(View.GetShaderPlatform())
&& GetMobilePixelProjectedReflectionQuality() == EMobilePixelProjectedReflectionQuality::BestPerformance)
{
ExclusiveDepthStencil = FExclusiveDepthStencil::DepthWrite_StencilWrite;
}
if (bKeepDepthContent && !bMobileMSAA)
{
DepthTargetAction = EDepthStencilTargetActions::LoadDepthStencil_StoreDepthStencil;
}
#if PLATFORM_HOLOLENS
if (bShouldRenderDepthToTranslucency)
{
ExclusiveDepthStencil = FExclusiveDepthStencil::DepthWrite_StencilWrite;
}
#endif
// 透明物體渲染Pass.
FRHIRenderPassInfo TranslucentRenderPassInfo(
SceneColor,
SceneColorResolve ? ERenderTargetActions::Load_Resolve : ERenderTargetActions::Load_Store,
SceneColorResolve,
SceneDepth,
DepthTargetAction,
nullptr,
ShadingRateTexture,
VRSRB_Sum,
ExclusiveDepthStencil
);
TranslucentRenderPassInfo.NumOcclusionQueries = 0;
TranslucentRenderPassInfo.bOcclusionQueries = false;
TranslucentRenderPassInfo.SubpassHint = ESubpassHint::DepthReadSubpass;
// 開始渲染半透明物體.
RHICmdList.BeginRenderPass(TranslucentRenderPassInfo, TEXT("SceneColorTranslucencyRendering"));
}
// 下一個子Pass: 場景深度變為只讀, 可供取樣獲取.
RHICmdList.NextSubpass();
if (!View.bIsPlanarReflection)
{
// 渲染貼花.
if (ViewFamily.EngineShowFlags.Decals)
{
CSV_SCOPED_TIMING_STAT_EXCLUSIVE(RenderDecals);
RenderDecals(RHICmdList);
}
// 渲染調製陰影投射.
if (ViewFamily.EngineShowFlags.DynamicShadows)
{
CSV_SCOPED_TIMING_STAT_EXCLUSIVE(RenderShadowProjections);
RenderModulatedShadowProjections(RHICmdList);
}
}
// 繪製半透明.
if (ViewFamily.EngineShowFlags.Translucency)
{
CSV_SCOPED_TIMING_STAT_EXCLUSIVE(RenderTranslucency);
SCOPE_CYCLE_COUNTER(STAT_TranslucencyDrawTime);
RenderTranslucency(RHICmdList, ViewList);
FRHICommandListExecutor::GetImmediateCommandList().PollOcclusionQueries();
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
}
if (!bIsFullPrepassEnabled)
{
// Adreno遮擋剔除模式.
if (bAdrenoOcclusionMode)
{
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_Occlusion));
// flush
RHICmdList.SubmitCommandsHint();
bSubmitOffscreenRendering = false; // submit once
// Issue occlusion queries
RenderOcclusion(RHICmdList);
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
}
}
// 在MSAA被解析前預計算色調對映(只在iOS有效)
if (!bGammaSpace)
{
PreTonemapMSAA(RHICmdList);
}
// 結束場景顏色渲染.
RHICmdList.EndRenderPass();
// 優化返回場景顏色的解析紋理(開啟了MSAA才有).
return SceneColorResolve ? SceneColorResolve : SceneColor;
}
移動端前向渲染主要步驟跟PC端類似,依次渲染PrePass、BasePass、特殊渲染(貼花、AO、遮擋剔除等)、半透明物體。它們的流程圖如下:
其中遮擋剔除和GPU廠商相關,比如高通Adreno系列GPU晶片要求遮擋查詢在Flush渲染指令和切換FBO之間提交:
Render Opaque -> Render Translucent -> Flush -> Render Queries -> Switch FBO
因此UE也遵循了Adreno系列晶片的這一特殊要求,對其遮擋剔除做了特殊的處理。
Adreno系列晶片支援TBDR架構的Bin和普通的Direct兩種混合模式的渲染,會在遮擋查詢時自動切換到Direct模式,以降低遮擋查詢的開銷。如果不在Flush渲染指令和Switch FBO之間提交查詢,會卡住整個渲染管線,引發渲染效能下降。
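對照上面RenderForward的程式碼,Adreno路徑(bAdrenoOcclusionMode為真)的關鍵呼叫順序可以濃縮成如下骨架(只保留上文已出現的呼叫,省略其餘邏輯):

```cpp
RenderMobileBasePass(RHICmdList, ViewList);   // Render Opaque
RenderTranslucency(RHICmdList, ViewList);     // Render Translucent
RHICmdList.SubmitCommandsHint();              // Flush: 先把已錄制的渲染指令提交出去
RenderOcclusion(RHICmdList);                  // Render Queries: 在Flush之後才發起遮擋查詢
RHICmdList.EndRenderPass();                   // 之後才結束RenderPass並切換FBO
```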
MSAA由於天然硬體支援且效果和效率達到很好的平衡,是UE在移動端前向渲染的首選抗鋸齒。因此,上述程式碼中出現了不少處理MSAA的邏輯,包含顏色和深度紋理及其資源狀態。如果開啟了MSAA,預設情況下是在`RHICmdList.EndRenderPass()`時解析場景顏色(同時將晶片分塊上的資料寫回到系統視訊記憶體中),由此獲得抗鋸齒的紋理。移動端的MSAA預設不開啟,但可在以下介面中設定:
前向渲染支援Gamma空間和HDR(線性空間)兩種顏色空間模式。如果是線性空間,則渲染後期需要色調對映等步驟。預設是HDR,可在專案配置中更改:
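作為補充,下面用幾行獨立的C++示意兩種顏色空間之間的基本換算(採用近似的2.2冪次Gamma曲線;引擎實際的sRGB轉換與色調對映要複雜得多):

```cpp
#include <cmath>

// 線性空間 -> Gamma空間(近似sRGB, 指數1/2.2), 以及反向換算.
float LinearToGamma(float LinearColor) { return std::pow(LinearColor, 1.0f / 2.2f); }
float GammaToLinear(float GammaColor)  { return std::pow(GammaColor, 2.2f); }
```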
上述程式碼的bRequiresMultiPass標明了是否需要專用的渲染Pass來繪製半透明物體,它的值由以下程式碼決定:
// Engine\Source\Runtime\Renderer\Private\MobileShadingRenderer.cpp
bool FMobileSceneRenderer::RequiresMultiPass(FRHICommandListImmediate& RHICmdList, const FViewInfo& View) const
{
// Vulkan uses subpasses
if (IsVulkanPlatform(ShaderPlatform))
{
return false;
}
// All iOS support frame_buffer_fetch
if (IsMetalMobilePlatform(ShaderPlatform))
{
return false;
}
if (IsMobileDeferredShadingEnabled(ShaderPlatform))
{
// TODO: add GL support
return true;
}
// Some Androids support frame_buffer_fetch
if (IsAndroidOpenGLESPlatform(ShaderPlatform) && (GSupportsShaderFramebufferFetch || GSupportsShaderDepthStencilFetch))
{
return false;
}
// Always render reflection capture in single pass
if (View.bIsPlanarReflection || View.bIsSceneCapture)
{
return false;
}
// Always render LDR in single pass
if (!IsMobileHDR())
{
return false;
}
// MSAA depth can't be sampled or resolved, unless we are on PC (no vulkan)
if (NumMSAASamples > 1 && !IsSimulatedPlatform(ShaderPlatform))
{
return false;
}
return true;
}
與之類似但意義不同的是bIsMultiViewApplication和bIsMobileMultiViewEnabled標記,它們標明是否開啟多檢視渲染以及多檢視的個數,只用於VR,由控制檯變數`vr.MobileMultiView`及圖形API等因素決定。MultiView用於XR裝置,可以優化同一場景需要渲染兩次的情形,它存在Basic和Advanced兩種模式:
用於優化VR等渲染的MultiView對比圖。上:未採用MultiView模式的渲染,兩個眼睛各自提交繪製指令;中:基礎MultiView模式,複用提交指令,在GPU層複製多一份Command List;下:高階MultiView模式,可以複用DC、Command List、幾何資訊。
bKeepDepthContent標明是否要保留深度內容,它的值由以下程式碼決定:
bKeepDepthContent =
bRequiresMultiPass ||
bForceDepthResolve ||
bRequiresPixelProjectedPlanarRelfectionPass ||
bSeparateTranslucencyActive ||
Views[0].bIsReflectionCapture ||
(bDeferredShading && bPostProcessUsesSceneDepth) ||
bShouldRenderVelocities ||
bIsFullPrepassEnabled;
// 帶MSAA的深度從不保留.
bKeepDepthContent = (NumMSAASamples > 1 ? false : bKeepDepthContent);
上述程式碼還揭示了平面反射在移動端的一種特殊渲染方式:Pixel Projected Reflection(PPR)。它的實現原理類似於SSR,但需要的資料更少,只需要場景顏色、深度緩衝和反射區域。它的核心步驟如下(列表和效果圖之後附有一個簡化的映象計算示意):
- 計算場景顏色的所有畫素在反射平面的映象位置。
- 測試畫素的反射是否在反射區域內。
- 光線投射到映象畫素位置。
- 測試交點是否在反射區域內。
- 如果找到相交點,計算畫素在螢幕的映象位置。
- 在交點處寫入映象畫素的顏色。
- 如果反射區域內的交點被其它物體遮擋,則剔除此位置的反射。
PPR效果一覽。
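下面給出上述第一步"計算畫素在反射平面的映象位置"的簡化幾何示意(獨立的C++程式碼,向量與平面均為自定義的簡化結構,並非引擎實現):

```cpp
#include <cstdio>

// 簡化的三維向量表示(僅作示意).
struct FVec3 { float X, Y, Z; };

static FVec3 operator-(const FVec3& A, const FVec3& B) { return { A.X - B.X, A.Y - B.Y, A.Z - B.Z }; }
static FVec3 operator*(const FVec3& V, float S)        { return { V.X * S, V.Y * S, V.Z * S }; }
static float Dot(const FVec3& A, const FVec3& B)       { return A.X * B.X + A.Y * B.Y + A.Z * B.Z; }

// 把世界空間位置P沿反射平面(單位法線N, 平面上一點O)做鏡像: P' = P - 2 * dot(N, P - O) * N
FVec3 MirrorAcrossPlane(const FVec3& P, const FVec3& PlaneNormal, const FVec3& PlanePoint)
{
    const float SignedDistance = Dot(PlaneNormal, P - PlanePoint);
    return P - PlaneNormal * (2.0f * SignedDistance);
}

int main()
{
    const FVec3 PlaneN{ 0.0f, 0.0f, 1.0f }; // 假設反射平面為Z=0的地面
    const FVec3 PlaneO{ 0.0f, 0.0f, 0.0f };
    const FVec3 Mirrored = MirrorAcrossPlane({ 1.0f, 2.0f, 3.0f }, PlaneN, PlaneO);
    std::printf("Mirrored: (%.1f, %.1f, %.1f)\n", Mirrored.X, Mirrored.Y, Mirrored.Z);
    return 0;
}
```

得到映象位置後,再將其投影回螢幕空間,即可得到後續步驟所需的映象畫素位置。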
PPR可以在工程配置中設定:
12.3.3 RenderDeferred
UE在4.26的移動端渲染管線中增加了延遲渲染分支,並在4.27做了改進和優化。移動端是否開啟延遲著色的特性由以下程式碼決定:
// Engine\Source\Runtime\RenderCore\Private\RenderUtils.cpp
bool IsMobileDeferredShadingEnabled(const FStaticShaderPlatform Platform)
{
// 禁用OpenGL的延遲著色.
if (IsOpenGLPlatform(Platform))
{
// needs MRT framebuffer fetch or PLS
return false;
}
// 控制檯變數"r.Mobile.ShadingPath"要為1.
static auto* MobileShadingPathCvar = IConsoleManager::Get().FindTConsoleVariableDataInt(TEXT("r.Mobile.ShadingPath"));
return MobileShadingPathCvar->GetValueOnAnyThread() == 1;
}
簡單地說,就是非OpenGL圖形API且控制檯變數`r.Mobile.ShadingPath`設為1。
`r.Mobile.ShadingPath`不可在編輯器中動態設定,只能在專案工程根目錄/Config/DefaultEngine.ini增加以下欄位來開啟:
[/Script/Engine.RendererSettings]
r.Mobile.ShadingPath=1
新增以上欄位後,重啟UE編輯器,等待shader編譯完成即可預覽移動端延遲著色效果。
以下是延遲渲染分支`FMobileSceneRenderer::RenderDeferred`的程式碼和解析:
FRHITexture* FMobileSceneRenderer::RenderDeferred(FRHICommandListImmediate& RHICmdList, const TArrayView<const FViewInfo*> ViewList, const FSortedLightSetSceneInfo& SortedLightSet)
{
FSceneRenderTargets& SceneContext = FSceneRenderTargets::Get(RHICmdList);
// 準備GBuffer.
FRHITexture* ColorTargets[4] = {
SceneContext.GetSceneColorSurface(),
SceneContext.GetGBufferATexture().GetReference(),
SceneContext.GetGBufferBTexture().GetReference(),
SceneContext.GetGBufferCTexture().GetReference()
};
// RHI是否需要將GBuffer儲存到GPU的系統記憶體中,並在單獨的渲染通道中進行著色.
ERenderTargetActions GBufferAction = bRequiresMultiPass ? ERenderTargetActions::Clear_Store : ERenderTargetActions::Clear_DontStore;
EDepthStencilTargetActions DepthAction = bKeepDepthContent ? EDepthStencilTargetActions::ClearDepthStencil_StoreDepthStencil : EDepthStencilTargetActions::ClearDepthStencil_DontStoreDepthStencil;
// RT的load/store動作.
ERenderTargetActions ColorTargetsAction[4] = {ERenderTargetActions::Clear_Store, GBufferAction, GBufferAction, GBufferAction};
if (bIsFullPrepassEnabled)
{
ERenderTargetActions DepthTarget = MakeRenderTargetActions(ERenderTargetLoadAction::ELoad, GetStoreAction(GetDepthActions(DepthAction)));
ERenderTargetActions StencilTarget = MakeRenderTargetActions(ERenderTargetLoadAction::ELoad, GetStoreAction(GetStencilActions(DepthAction)));
DepthAction = MakeDepthStencilTargetActions(DepthTarget, StencilTarget);
}
FRHIRenderPassInfo BasePassInfo = FRHIRenderPassInfo();
int32 ColorTargetIndex = 0;
for (; ColorTargetIndex < UE_ARRAY_COUNT(ColorTargets); ++ColorTargetIndex)
{
BasePassInfo.ColorRenderTargets[ColorTargetIndex].RenderTarget = ColorTargets[ColorTargetIndex];
BasePassInfo.ColorRenderTargets[ColorTargetIndex].ResolveTarget = nullptr;
BasePassInfo.ColorRenderTargets[ColorTargetIndex].ArraySlice = -1;
BasePassInfo.ColorRenderTargets[ColorTargetIndex].MipIndex = 0;
BasePassInfo.ColorRenderTargets[ColorTargetIndex].Action = ColorTargetsAction[ColorTargetIndex];
}
if (MobileRequiresSceneDepthAux(ShaderPlatform))
{
BasePassInfo.ColorRenderTargets[ColorTargetIndex].RenderTarget = SceneContext.SceneDepthAux->GetRenderTargetItem().ShaderResourceTexture.GetReference();
BasePassInfo.ColorRenderTargets[ColorTargetIndex].ResolveTarget = nullptr;
BasePassInfo.ColorRenderTargets[ColorTargetIndex].ArraySlice = -1;
BasePassInfo.ColorRenderTargets[ColorTargetIndex].MipIndex = 0;
BasePassInfo.ColorRenderTargets[ColorTargetIndex].Action = GBufferAction;
ColorTargetIndex++;
}
BasePassInfo.DepthStencilRenderTarget.DepthStencilTarget = SceneContext.GetSceneDepthSurface();
BasePassInfo.DepthStencilRenderTarget.ResolveTarget = nullptr;
BasePassInfo.DepthStencilRenderTarget.Action = DepthAction;
BasePassInfo.DepthStencilRenderTarget.ExclusiveDepthStencil = FExclusiveDepthStencil::DepthWrite_StencilWrite;
BasePassInfo.SubpassHint = ESubpassHint::DeferredShadingSubpass;
if (!bIsFullPrepassEnabled)
{
BasePassInfo.NumOcclusionQueries = ComputeNumOcclusionQueriesToBatch();
BasePassInfo.bOcclusionQueries = BasePassInfo.NumOcclusionQueries != 0;
}
BasePassInfo.ShadingRateTexture = nullptr;
BasePassInfo.bIsMSAA = false;
BasePassInfo.MultiViewCount = 0;
RHICmdList.BeginRenderPass(BasePassInfo, TEXT("BasePassRendering"));
if (GIsEditor && !Views[0].bIsSceneCapture)
{
DrawClearQuad(RHICmdList, Views[0].BackgroundColor);
}
// 深度PrePass
if (!bIsFullPrepassEnabled)
{
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLM_MobilePrePass));
// Depth pre-pass
RenderPrePass(RHICmdList);
}
// BasePass: 不透明和鏤空物體.
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_Opaque));
RenderMobileBasePass(RHICmdList, ViewList);
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
// 遮擋剔除.
if (!bIsFullPrepassEnabled)
{
// Issue occlusion queries
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_Occlusion));
RenderOcclusion(RHICmdList);
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
}
// 非多Pass模式
if (!bRequiresMultiPass)
{
// 下個子Pass: SceneColor + GBuffer寫入, SceneDepth只讀.
RHICmdList.NextSubpass();
// 渲染貼花.
if (ViewFamily.EngineShowFlags.Decals)
{
CSV_SCOPED_TIMING_STAT_EXCLUSIVE(RenderDecals);
RenderDecals(RHICmdList);
}
// 下個子Pass: SceneColor寫入, SceneDepth只讀
RHICmdList.NextSubpass();
// 延遲光照著色.
MobileDeferredShadingPass(RHICmdList, *Scene, ViewList, SortedLightSet);
// 繪製半透明.
if (ViewFamily.EngineShowFlags.Translucency)
{
CSV_SCOPED_TIMING_STAT_EXCLUSIVE(RenderTranslucency);
SCOPE_CYCLE_COUNTER(STAT_TranslucencyDrawTime);
RenderTranslucency(RHICmdList, ViewList);
FRHICommandListExecutor::GetImmediateCommandList().PollOcclusionQueries();
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
}
// 結束渲染Pass.
RHICmdList.EndRenderPass();
}
// 多Pass模式(PC裝置模擬的移動端).
else
{
// 結束子pass.
RHICmdList.NextSubpass();
RHICmdList.NextSubpass();
RHICmdList.EndRenderPass();
// SceneColor + GBuffer write, SceneDepth is read only
{
for (int32 Index = 0; Index < UE_ARRAY_COUNT(ColorTargets); ++Index)
{
BasePassInfo.ColorRenderTargets[Index].Action = ERenderTargetActions::Load_Store;
}
BasePassInfo.DepthStencilRenderTarget.Action = EDepthStencilTargetActions::LoadDepthStencil_StoreDepthStencil;
BasePassInfo.DepthStencilRenderTarget.ExclusiveDepthStencil = FExclusiveDepthStencil::DepthRead_StencilRead;
BasePassInfo.SubpassHint = ESubpassHint::None;
BasePassInfo.NumOcclusionQueries = 0;
BasePassInfo.bOcclusionQueries = false;
RHICmdList.BeginRenderPass(BasePassInfo, TEXT("AfterBasePass"));
// 渲染貼花.
if (ViewFamily.EngineShowFlags.Decals)
{
CSV_SCOPED_TIMING_STAT_EXCLUSIVE(RenderDecals);
RenderDecals(RHICmdList);
}
RHICmdList.EndRenderPass();
}
// SceneColor write, SceneDepth is read only
{
FRHIRenderPassInfo ShadingPassInfo(
SceneContext.GetSceneColorSurface(),
ERenderTargetActions::Load_Store,
nullptr,
SceneContext.GetSceneDepthSurface(),
EDepthStencilTargetActions::LoadDepthStencil_StoreDepthStencil,
nullptr,
nullptr,
VRSRB_Passthrough,
FExclusiveDepthStencil::DepthRead_StencilWrite
);
ShadingPassInfo.NumOcclusionQueries = 0;
ShadingPassInfo.bOcclusionQueries = false;
RHICmdList.BeginRenderPass(ShadingPassInfo, TEXT("MobileShadingPass"));
// 延遲光照著色.
MobileDeferredShadingPass(RHICmdList, *Scene, ViewList, SortedLightSet);
// 繪製半透明.
if (ViewFamily.EngineShowFlags.Translucency)
{
CSV_SCOPED_TIMING_STAT_EXCLUSIVE(RenderTranslucency);
SCOPE_CYCLE_COUNTER(STAT_TranslucencyDrawTime);
RenderTranslucency(RHICmdList, ViewList);
FRHICommandListExecutor::GetImmediateCommandList().PollOcclusionQueries();
RHICmdList.ImmediateFlush(EImmediateFlushType::DispatchToRHIThread);
}
RHICmdList.EndRenderPass();
}
}
return ColorTargets[0];
}
由上面可知,移動端的延遲渲染管線和PC比較類似,先渲染BasePass,獲得GBuffer幾何資訊,再執行光照計算。它們的流程圖如下:
當然,也有和PC不一樣的地方,最明顯的是移動端使用了適配TB(D)R架構的SubPass渲染,使得移動端在渲染PrePass深度、BasePass、光照計算時,讓場景顏色、深度、GBuffer等資訊一直在On-Chip的緩衝區中,提升渲染效率,降低裝置能耗。
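把上面RenderDeferred中單Pass(非bRequiresMultiPass)分支的子Pass結構抽取成一個極簡骨架,便於對照理解(僅保留上文程式碼中已出現的呼叫,省略所有引數與分支細節):

```cpp
// 子Pass 0: BasePass, 寫入SceneColor + GBuffer + 深度(全部駐留在On-Chip記憶體中).
RHICmdList.BeginRenderPass(BasePassInfo, TEXT("BasePassRendering"));
RenderMobileBasePass(RHICmdList, ViewList);

// 子Pass 1: 貼花, SceneColor/GBuffer可寫, SceneDepth只讀.
RHICmdList.NextSubpass();
RenderDecals(RHICmdList);

// 子Pass 2: 延遲光照與半透明, 只寫SceneColor, GBuffer以子Pass輸入的方式直接讀取.
RHICmdList.NextSubpass();
MobileDeferredShadingPass(RHICmdList, *Scene, ViewList, SortedLightSet);
RenderTranslucency(RHICmdList, ViewList);

// 結束整個RenderPass, 此時才把最終的SceneColor寫回系統視訊記憶體.
RHICmdList.EndRenderPass();
```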
12.3.3.1 MobileDeferredShadingPass
延遲渲染光照的過程由`MobileDeferredShadingPass`擔當:
void MobileDeferredShadingPass(
FRHICommandListImmediate& RHICmdList,
const FScene& Scene,
const TArrayView<const FViewInfo*> PassViews,
const FSortedLightSetSceneInfo &SortedLightSet)
{
SCOPED_DRAW_EVENT(RHICmdList, MobileDeferredShading);
const FViewInfo& View0 = *PassViews[0];
FSceneRenderTargets& SceneContext = FSceneRenderTargets::Get(RHICmdList);
// 建立Uniform Buffer.
FUniformBufferRHIRef PassUniformBuffer = CreateMobileSceneTextureUniformBuffer(RHICmdList);
FUniformBufferStaticBindings GlobalUniformBuffers(PassUniformBuffer);
SCOPED_UNIFORM_BUFFER_GLOBAL_BINDINGS(RHICmdList, GlobalUniformBuffers);
// 設定視口.
RHICmdList.SetViewport(View0.ViewRect.Min.X, View0.ViewRect.Min.Y, 0.0f, View0.ViewRect.Max.X, View0.ViewRect.Max.Y, 1.0f);
// 光照的預設材質.
FCachedLightMaterial DefaultMaterial;
DefaultMaterial.MaterialProxy = UMaterial::GetDefaultMaterial(MD_LightFunction)->GetRenderProxy();
DefaultMaterial.Material = DefaultMaterial.MaterialProxy->GetMaterialNoFallback(ERHIFeatureLevel::ES3_1);
check(DefaultMaterial.Material);
// 繪製平行光.
RenderDirectLight(RHICmdList, Scene, View0, DefaultMaterial);
if (GMobileUseClusteredDeferredShading == 0)
{
// 渲染非分簇的簡單光源.
RenderSimpleLights(RHICmdList, Scene, PassViews, SortedLightSet, DefaultMaterial);
}
// 渲染非分簇的區域性光源.
int32 NumLights = SortedLightSet.SortedLights.Num();
int32 StandardDeferredStart = SortedLightSet.SimpleLightsEnd;
if (GMobileUseClusteredDeferredShading != 0)
{
StandardDeferredStart = SortedLightSet.ClusteredSupportedEnd;
}
// 渲染區域性光源.
for (int32 LightIdx = StandardDeferredStart; LightIdx < NumLights; ++LightIdx)
{
const FSortedLightSceneInfo& SortedLight = SortedLightSet.SortedLights[LightIdx];
const FLightSceneInfo& LightSceneInfo = *SortedLight.LightSceneInfo;
RenderLocalLight(RHICmdList, Scene, View0, LightSceneInfo, DefaultMaterial);
}
}
下面繼續分析渲染不同型別光源的介面:
// Engine\Source\Runtime\Renderer\Private\MobileDeferredShadingPass.cpp
// 渲染平行光
static void RenderDirectLight(FRHICommandListImmediate& RHICmdList, const FScene& Scene, const FViewInfo& View, const FCachedLightMaterial& DefaultLightMaterial)
{
FSceneRenderTargets& SceneContext = FSceneRenderTargets::Get(RHICmdList);
// 查詢第一個平行光.
FLightSceneInfo* DirectionalLight = nullptr;
for (int32 ChannelIdx = 0; ChannelIdx < UE_ARRAY_COUNT(Scene.MobileDirectionalLights) && !DirectionalLight; ChannelIdx++)
{
DirectionalLight = Scene.MobileDirectionalLights[ChannelIdx];
}
// 渲染狀態.
FGraphicsPipelineStateInitializer GraphicsPSOInit;
RHICmdList.ApplyCachedRenderTargets(GraphicsPSOInit);
// 增加自發光到SceneColor.
GraphicsPSOInit.BlendState = TStaticBlendState<CW_RGB, BO_Add, BF_One, BF_One>::GetRHI();
GraphicsPSOInit.RasterizerState = TStaticRasterizerState<>::GetRHI();
// 只繪製預設光照模型(MSM_DefaultLit)的畫素.
uint8 StencilRef = GET_STENCIL_MOBILE_SM_MASK(MSM_DefaultLit);
GraphicsPSOInit.DepthStencilState = TStaticDepthStencilState<
false, CF_Always,
true, CF_Equal, SO_Keep, SO_Keep, SO_Keep,
false, CF_Always, SO_Keep, SO_Keep, SO_Keep,
GET_STENCIL_MOBILE_SM_MASK(0x7), 0x00>::GetRHI(); // 4 bits for shading models
// 處理VS.
TShaderMapRef<FPostProcessVS> VertexShader(View.ShaderMap);
const FMaterialRenderProxy* LightFunctionMaterialProxy = nullptr;
if (View.Family->EngineShowFlags.LightFunctions && DirectionalLight)
{
LightFunctionMaterialProxy = DirectionalLight->Proxy->GetLightFunctionMaterial();
}
FMobileDirectLightFunctionPS::FPermutationDomain PermutationVector = FMobileDirectLightFunctionPS::BuildPermutationVector(View, DirectionalLight != nullptr);
FCachedLightMaterial LightMaterial;
TShaderRef<FMobileDirectLightFunctionPS> PixelShader;
GetLightMaterial(DefaultLightMaterial, LightFunctionMaterialProxy, PermutationVector.ToDimensionValueId(), LightMaterial, PixelShader);
GraphicsPSOInit.BoundShaderState.VertexDeclarationRHI = GFilterVertexDeclaration.VertexDeclarationRHI;
GraphicsPSOInit.BoundShaderState.VertexShaderRHI = VertexShader.GetVertexShader();
GraphicsPSOInit.BoundShaderState.PixelShaderRHI = PixelShader.GetPixelShader();
GraphicsPSOInit.PrimitiveType = PT_TriangleList;
SetGraphicsPipelineState(RHICmdList, GraphicsPSOInit);
// 處理PS.
FMobileDirectLightFunctionPS::FParameters PassParameters;
PassParameters.Forward = View.ForwardLightingResources->ForwardLightDataUniformBuffer;
PassParameters.MobileDirectionalLight = Scene.UniformBuffers.MobileDirectionalLightUniformBuffers[1];
PassParameters.ReflectionCaptureData = Scene.UniformBuffers.ReflectionCaptureUniformBuffer;
FReflectionUniformParameters ReflectionUniformParameters;
SetupReflectionUniformParameters(View, ReflectionUniformParameters);
PassParameters.ReflectionsParameters = CreateUniformBufferImmediate(ReflectionUniformParameters, UniformBuffer_SingleDraw);
PassParameters.LightFunctionParameters = FVector4(1.0f, 1.0f, 0.0f, 0.0f);
if (DirectionalLight)
{
const bool bUseMovableLight = DirectionalLight && !DirectionalLight->Proxy->HasStaticShadowing();
PassParameters.LightFunctionParameters2 = FVector(DirectionalLight->Proxy->GetLightFunctionFadeDistance(), DirectionalLight->Proxy->GetLightFunctionDisabledBrightness(), bUseMovableLight ? 1.0f : 0.0f);
const FVector Scale = DirectionalLight->Proxy->GetLightFunctionScale();
// Switch x and z so that z of the user specified scale affects the distance along the light direction
const FVector InverseScale = FVector(1.f / Scale.Z, 1.f / Scale.Y, 1.f / Scale.X);
PassParameters.WorldToLight = DirectionalLight->Proxy->GetWorldToLight() * FScaleMatrix(FVector(InverseScale));
}
FMobileDirectLightFunctionPS::SetParameters(RHICmdList, PixelShader, View, LightMaterial.MaterialProxy, *LightMaterial.Material, PassParameters);
RHICmdList.SetStencilRef(StencilRef);
const FIntPoint TargetSize = SceneContext.GetBufferSizeXY();
// 用全螢幕的矩形繪製.
DrawRectangle(
RHICmdList,
0, 0,
View.ViewRect.Width(), View.ViewRect.Height(),
View.ViewRect.Min.X, View.ViewRect.Min.Y,
View.ViewRect.Width(), View.ViewRect.Height(),
FIntPoint(View.ViewRect.Width(), View.ViewRect.Height()),
TargetSize,
VertexShader);
}
// 渲染非分簇模式的簡單光源.
static void RenderSimpleLights(
FRHICommandListImmediate& RHICmdList,
const FScene& Scene,
const TArrayView<const FViewInfo*> PassViews,
const FSortedLightSetSceneInfo &SortedLightSet,
const FCachedLightMaterial& DefaultMaterial)
{
const FSimpleLightArray& SimpleLights = SortedLightSet.SimpleLights;
const int32 NumViews = PassViews.Num();
const FViewInfo& View0 = *PassViews[0];
// 處理VS.
TShaderMapRef<TDeferredLightVS<true>> VertexShader(View0.ShaderMap);
TShaderRef<FMobileRadialLightFunctionPS> PixelShaders[2];
{
const FMaterialShaderMap* MaterialShaderMap = DefaultMaterial.Material->GetRenderingThreadShaderMap();
FMobileRadialLightFunctionPS::FPermutationDomain PermutationVector;
PermutationVector.Set<FMobileRadialLightFunctionPS::FSpotLightDim>(false);
PermutationVector.Set<FMobileRadialLightFunctionPS::FIESProfileDim>(false);
PermutationVector.Set<FMobileRadialLightFunctionPS::FInverseSquaredDim>(false);
PixelShaders[0] = MaterialShaderMap->GetShader<FMobileRadialLightFunctionPS>(PermutationVector);
PermutationVector.Set<FMobileRadialLightFunctionPS::FInverseSquaredDim>(true);
PixelShaders[1] = MaterialShaderMap->GetShader<FMobileRadialLightFunctionPS>(PermutationVector);
}
// 設定PSO.
FGraphicsPipelineStateInitializer GraphicsPSOLight[2];
{
SetupSimpleLightPSO(RHICmdList, View0, VertexShader, PixelShaders[0], GraphicsPSOLight[0]);
SetupSimpleLightPSO(RHICmdList, View0, VertexShader, PixelShaders[1], GraphicsPSOLight[1]);
}
// 設定模板緩衝.
FGraphicsPipelineStateInitializer GraphicsPSOLightMask;
{
RHICmdList.ApplyCachedRenderTargets(GraphicsPSOLightMask);
GraphicsPSOLightMask.PrimitiveType = PT_TriangleList;
GraphicsPSOLightMask.BlendState = TStaticBlendStateWriteMask<CW_NONE, CW_NONE, CW_NONE, CW_NONE, CW_NONE, CW_NONE, CW_NONE, CW_NONE>::GetRHI();
GraphicsPSOLightMask.RasterizerState = View0.bReverseCulling ? TStaticRasterizerState<FM_Solid, CM_CCW>::GetRHI() : TStaticRasterizerState<FM_Solid, CM_CW>::GetRHI();
// set stencil to 1 where depth test fails
GraphicsPSOLightMask.DepthStencilState = TStaticDepthStencilState<
false, CF_DepthNearOrEqual,
true, CF_Always, SO_Keep, SO_Replace, SO_Keep,
false, CF_Always, SO_Keep, SO_Keep, SO_Keep,
0x00, STENCIL_SANDBOX_MASK>::GetRHI();
GraphicsPSOLightMask.BoundShaderState.VertexDeclarationRHI = GetVertexDeclarationFVector4();
GraphicsPSOLightMask.BoundShaderState.VertexShaderRHI = VertexShader.GetVertexShader();
GraphicsPSOLightMask.BoundShaderState.PixelShaderRHI = nullptr;
}
// 遍歷所有簡單光源列表, 執行著色計算.
for (int32 LightIndex = 0; LightIndex < SimpleLights.InstanceData.Num(); LightIndex++)
{
const FSimpleLightEntry& SimpleLight = SimpleLights.InstanceData[LightIndex];
for (int32 ViewIndex = 0; ViewIndex < NumViews; ViewIndex++)
{
const FViewInfo& View = *PassViews[ViewIndex];
const FSimpleLightPerViewEntry& SimpleLightPerViewData = SimpleLights.GetViewDependentData(LightIndex, ViewIndex, NumViews);
const FSphere LightBounds(SimpleLightPerViewData.Position, SimpleLight.Radius);
if (NumViews > 1)
{
// set viewports only we we have more than one
// otherwise it is set at the start of the pass
RHICmdList.SetViewport(View.ViewRect.Min.X, View.ViewRect.Min.Y, 0.0f, View.ViewRect.Max.X, View.ViewRect.Max.Y, 1.0f);
}
// 渲染光源遮罩.
SetGraphicsPipelineState(RHICmdList, GraphicsPSOLightMask);
VertexShader->SetSimpleLightParameters(RHICmdList, View, LightBounds);
RHICmdList.SetStencilRef(1);
StencilingGeometry::DrawSphere(RHICmdList);
// 渲染光源.
FMobileRadialLightFunctionPS::FParameters PassParameters;
FDeferredLightUniformStruct DeferredLightUniformsValue;
SetupSimpleDeferredLightParameters(SimpleLight, SimpleLightPerViewData, DeferredLightUniformsValue);
PassParameters.DeferredLightUniforms = TUniformBufferRef<FDeferredLightUniformStruct>::CreateUniformBufferImmediate(DeferredLightUniformsValue, EUniformBufferUsage::UniformBuffer_SingleFrame);
PassParameters.IESTexture = GWhiteTexture->TextureRHI;
PassParameters.IESTextureSampler = GWhiteTexture->SamplerStateRHI;
if (SimpleLight.Exponent == 0)
{
SetGraphicsPipelineState(RHICmdList, GraphicsPSOLight[1]);
FMobileRadialLightFunctionPS::SetParameters(RHICmdList, PixelShaders[1], View, DefaultMaterial.MaterialProxy, *DefaultMaterial.Material, PassParameters);
}
else
{
SetGraphicsPipelineState(RHICmdList, GraphicsPSOLight[0]);
FMobileRadialLightFunctionPS::SetParameters(RHICmdList, PixelShaders[0], View, DefaultMaterial.MaterialProxy, *DefaultMaterial.Material, PassParameters);
}
VertexShader->SetSimpleLightParameters(RHICmdList, View, LightBounds);
// 只繪製預設光照模型(MSM_DefaultLit)的畫素.
uint8 StencilRef = GET_STENCIL_MOBILE_SM_MASK(MSM_DefaultLit);
RHICmdList.SetStencilRef(StencilRef);
// 用球體渲染光源(點光源和聚光燈), 以快速剔除光源影響之外的畫素.
StencilingGeometry::DrawSphere(RHICmdList);
}
}
}
// 渲染區域性光源.
static void RenderLocalLight(
FRHICommandListImmediate& RHICmdList,
const FScene& Scene,
const FViewInfo& View,
const FLightSceneInfo& LightSceneInfo,
const FCachedLightMaterial& DefaultLightMaterial)
{
if (!LightSceneInfo.ShouldRenderLight(View))
{
return;
}
// 忽略非區域性光源(即點光源和聚光燈之外的光源).
const uint8 LightType = LightSceneInfo.Proxy->GetLightType();
const bool bIsSpotLight = LightType == LightType_Spot;
const bool bIsPointLight = LightType == LightType_Point;
if (!bIsSpotLight && !bIsPointLight)
{
return;
}
// 繪製光源模板.
if (GMobileUseLightStencilCulling != 0)
{
RenderLocalLight_StencilMask(RHICmdList, Scene, View, LightSceneInfo);
}
// 處理IES光照.
bool bUseIESTexture = false;
FTexture* IESTextureResource = GWhiteTexture;
if (View.Family->EngineShowFlags.TexturedLightProfiles && LightSceneInfo.Proxy->GetIESTextureResource())
{
IESTextureResource = LightSceneInfo.Proxy->GetIESTextureResource();
bUseIESTexture = true;
}
FGraphicsPipelineStateInitializer GraphicsPSOInit;
RHICmdList.ApplyCachedRenderTargets(GraphicsPSOInit);
GraphicsPSOInit.BlendState = TStaticBlendState<CW_RGBA, BO_Add, BF_One, BF_One, BO_Add, BF_One, BF_One>::GetRHI();
GraphicsPSOInit.PrimitiveType = PT_TriangleList;
const FSphere LightBounds = LightSceneInfo.Proxy->GetBoundingSphere();
// 設定光源光柵化和深度狀態.
if (GMobileUseLightStencilCulling != 0)
{
SetLocalLightRasterizerAndDepthState_StencilMask(GraphicsPSOInit, View);
}
else
{
SetLocalLightRasterizerAndDepthState(GraphicsPSOInit, View, LightBounds);
}
// 設定VS
TShaderMapRef<TDeferredLightVS<true>> VertexShader(View.ShaderMap);
const FMaterialRenderProxy* LightFunctionMaterialProxy = nullptr;
if (View.Family->EngineShowFlags.LightFunctions)
{
LightFunctionMaterialProxy = LightSceneInfo.Proxy->GetLightFunctionMaterial();
}
FMobileRadialLightFunctionPS::FPermutationDomain PermutationVector;
PermutationVector.Set<FMobileRadialLightFunctionPS::FSpotLightDim>(bIsSpotLight);
PermutationVector.Set<FMobileRadialLightFunctionPS::FInverseSquaredDim>(LightSceneInfo.Proxy->IsInverseSquared());
PermutationVector.Set<FMobileRadialLightFunctionPS::FIESProfileDim>(bUseIESTexture);
FCachedLightMaterial LightMaterial;
TShaderRef<FMobileRadialLightFunctionPS> PixelShader;
GetLightMaterial(DefaultLightMaterial, LightFunctionMaterialProxy, PermutationVector.ToDimensionValueId(), LightMaterial, PixelShader);
GraphicsPSOInit.BoundShaderState.VertexDeclarationRHI = GetVertexDeclarationFVector4();
GraphicsPSOInit.BoundShaderState.VertexShaderRHI = VertexShader.GetVertexShader();
GraphicsPSOInit.BoundShaderState.PixelShaderRHI = PixelShader.GetPixelShader();
SetGraphicsPipelineState(RHICmdList, GraphicsPSOInit);
VertexShader->SetParameters(RHICmdList, View, &LightSceneInfo);
// 設定PS.
FMobileRadialLightFunctionPS::FParameters PassParameters;
PassParameters.DeferredLightUniforms = TUniformBufferRef<FDeferredLightUniformStruct>::CreateUniformBufferImmediate(GetDeferredLightParameters(View, LightSceneInfo), EUniformBufferUsage::UniformBuffer_SingleFrame);
PassParameters.IESTexture = IESTextureResource->TextureRHI;
PassParameters.IESTextureSampler = IESTextureResource->SamplerStateRHI;
const float TanOuterAngle = bIsSpotLight ? FMath::Tan(LightSceneInfo.Proxy->GetOuterConeAngle()) : 1.0f;
PassParameters.LightFunctionParameters = FVector4(TanOuterAngle, 1.0f /*ShadowFadeFraction*/, bIsSpotLight ? 1.0f : 0.0f, bIsPointLight ? 1.0f : 0.0f);
PassParameters.LightFunctionParameters2 = FVector(LightSceneInfo.Proxy->GetLightFunctionFadeDistance(), LightSceneInfo.Proxy->GetLightFunctionDisabledBrightness(), 0.0f);
const FVector Scale = LightSceneInfo.Proxy->GetLightFunctionScale();
// Switch x and z so that z of the user specified scale affects the distance along the light direction
const FVector InverseScale = FVector(1.f / Scale.Z, 1.f / Scale.Y, 1.f / Scale.X);
PassParameters.WorldToLight = LightSceneInfo.Proxy->GetWorldToLight() * FScaleMatrix(FVector(InverseScale));
FMobileRadialLightFunctionPS::SetParameters(RHICmdList, PixelShader, View, LightMaterial.MaterialProxy, *LightMaterial.Material, PassParameters);
// 只繪製預設光照模型(MSM_DefaultLit)的畫素.
uint8 StencilRef = GET_STENCIL_MOBILE_SM_MASK(MSM_DefaultLit);
RHICmdList.SetStencilRef(StencilRef);
// 點光源用球體繪製.
if (LightType == LightType_Point)
{
StencilingGeometry::DrawSphere(RHICmdList);
}
// 聚光燈用錐體繪製.
else // LightType_Spot
{
StencilingGeometry::DrawCone(RHICmdList);
}
}
繪製光源時,按光源型別劃分為三個步驟:平行光、非分簇簡單光源、區域性光源(點光源和聚光燈)。需要注意的是,移動端只支援預設光照模型(MSM_DefaultLit)的計算,其它高階光照模型(頭髮、次表面散射、清漆、眼睛、布料等)暫不支援。
繪製平行光時,最多隻能繪製1個,採用的是全螢幕矩形繪製,支援若干級CSM陰影。
繪製非分簇簡單光源時,無論是點光源還是聚光燈,都採用球體繪製,不支援陰影。
繪製區域性光源時,會複雜許多,先繪製區域性光源模板緩衝,再設定光柵化和深度狀態,最後才繪製光源。其中點光源採用球體繪製,不支援陰影;聚光燈採用錐體繪製,可以支援陰影,預設情況下,聚光燈不支援動態光影計算,需要在工程配置中開啟:
此外,是否開啟模板剔除光源不相交的畫素由GMobileUseLightStencilCulling決定,而GMobileUseLightStencilCulling又由控制檯變數`r.Mobile.UseLightStencilCulling`決定,預設為1(即開啟狀態)。渲染光源的模板緩衝程式碼如下:
static void RenderLocalLight_StencilMask(FRHICommandListImmediate& RHICmdList, const FScene& Scene, const FViewInfo& View, const FLightSceneInfo& LightSceneInfo)
{
const uint8 LightType = LightSceneInfo.Proxy->GetLightType();
FGraphicsPipelineStateInitializer GraphicsPSOInit;
// 應用快取好的RT(顏色/深度等).
RHICmdList.ApplyCachedRenderTargets(GraphicsPSOInit);
GraphicsPSOInit.PrimitiveType = PT_TriangleList;
// 禁用所有RT的寫操作.
GraphicsPSOInit.BlendState = TStaticBlendStateWriteMask<CW_NONE, CW_NONE, CW_NONE, CW_NONE, CW_NONE, CW_NONE, CW_NONE, CW_NONE>::GetRHI();
GraphicsPSOInit.RasterizerState = View.bReverseCulling ? TStaticRasterizerState<FM_Solid, CM_CCW>::GetRHI() : TStaticRasterizerState<FM_Solid, CM_CW>::GetRHI();
// 如果深度測試失敗, 則寫入模板緩衝值為1.
GraphicsPSOInit.DepthStencilState = TStaticDepthStencilState<
false, CF_DepthNearOrEqual,
true, CF_Always, SO_Keep, SO_Replace, SO_Keep,
false, CF_Always, SO_Keep, SO_Keep, SO_Keep,
0x00,
// 注意只寫入Pass專用的沙盒(SANDBOX)位, 即模板緩衝的索引為0的位.
STENCIL_SANDBOX_MASK>::GetRHI();
// 繪製光源模板的VS是TDeferredLightVS.
TShaderMapRef<TDeferredLightVS<true> > VertexShader(View.ShaderMap);
GraphicsPSOInit.BoundShaderState.VertexDeclarationRHI = GetVertexDeclarationFVector4();
GraphicsPSOInit.BoundShaderState.VertexShaderRHI = VertexShader.GetVertexShader();
// PS為空.
GraphicsPSOInit.BoundShaderState.PixelShaderRHI = nullptr;
SetGraphicsPipelineState(RHICmdList, GraphicsPSOInit);
VertexShader->SetParameters(RHICmdList, View, &LightSceneInfo);
// 模板值為1.
RHICmdList.SetStencilRef(1);
// 根據不同光源用不同形狀繪製.
if (LightType == LightType_Point)
{
StencilingGeometry::DrawSphere(RHICmdList);
}
else // LightType_Spot
{
StencilingGeometry::DrawCone(RHICmdList);
}
}
每個區域性光源首先繪製光源範圍內的Mask,再計算通過了Stencil測試(Early-Z)的畫素的光照。具體的剖析過程以下圖的聚光燈為例:
上:場景中一盞等待渲染的聚光燈;中:利用模板Pass繪製出的模板Mask(白色區域),標記了螢幕空間中和聚光燈形狀重疊且深度更近的畫素 ;下:對有效畫素進行光照計算後的效果。
對有效畫素進行光照計算時,使用的DepthStencil狀態如下:
翻譯成文字就是,執行光照的畫素必須在光源形狀體之內,光源形狀之外的畫素會被剔除。模板Pass標記的是比光源形狀深度更近的畫素(光源形狀體之外的畫素),光源繪製Pass通過模板測試剔除模板Pass標記的畫素,然後再通過深度測試找出在光源形狀體內的畫素,從而提升光照計算效率。
移動端的這種光源模板裁剪(Light Stencil Culling)技術和SIGGRAPH 2020的Unity演講Deferred Shading in Unity URP提及的基於模板的光照計算相似(思想一致,但做法可能不完全一樣)。該演講還提出了更加契合光源形狀的幾何體模擬:
以及對比了各種光源計算方法在PC和移動端的效能,下面是Mali GPU的對比圖:
Mali GPU在使用不同光照渲染技術時的效能對比。可見在移動端,基於模板裁剪的光照演算法要優於常規和分塊演算法。
值得一提的是,光源模板裁剪技術結合GPU的Early-Z技術,將極大提升光照渲染效能。而當前主流的移動端GPU都支援Early-Z技術,也為光源模板裁剪的應用奠定了基礎。
UE目前實現的光源裁剪演算法興許還有改進的空間,比如背向光源的畫素(下圖紅框所示)其實也是可以不計算的。(但如何快速有效地找到背向光源的畫素又是一個問題)
12.3.3.2 MobileBasePassShader
本節主要闡述移動端BasePass涉及的shader,包括VS和PS。先看VS:
// Engine\Shaders\Private\MobileBasePassVertexShader.usf
(......)
struct FMobileShadingBasePassVSToPS
{
FVertexFactoryInterpolantsVSToPS FactoryInterpolants;
FMobileBasePassInterpolantsVSToPS BasePassInterpolants;
float4 Position : SV_POSITION;
};
#define FMobileShadingBasePassVSOutput FMobileShadingBasePassVSToPS
#define VertexFactoryGetInterpolants VertexFactoryGetInterpolantsVSToPS
// VS主入口.
void Main(
FVertexFactoryInput Input
, out FMobileShadingBasePassVSOutput Output
#if INSTANCED_STEREO
, uint InstanceId : SV_InstanceID
, out uint LayerIndex : SV_RenderTargetArrayIndex
#elif MOBILE_MULTI_VIEW
, in uint ViewId : SV_ViewID
#endif
)
{
// 立體檢視模式.
#if INSTANCED_STEREO
const uint EyeIndex = GetEyeIndex(InstanceId);
ResolvedView = ResolveView(EyeIndex);
LayerIndex = EyeIndex;
Output.BasePassInterpolants.MultiViewId = float(EyeIndex);
// 多檢視模式.
#elif MOBILE_MULTI_VIEW
#if COMPILER_GLSL_ES3_1
const int MultiViewId = int(ViewId);
ResolvedView = ResolveView(uint(MultiViewId));
Output.BasePassInterpolants.MultiViewId = float(MultiViewId);
#else
ResolvedView = ResolveView(ViewId);
Output.BasePassInterpolants.MultiViewId = float(ViewId);
#endif
#else
ResolvedView = ResolveView();
#endif
// 初始化打包的插值資料.
#if PACK_INTERPOLANTS
float4 PackedInterps[NUM_VF_PACKED_INTERPOLANTS];
UNROLL
for(int i = 0; i < NUM_VF_PACKED_INTERPOLANTS; ++i)
{
PackedInterps[i] = 0;
}
#endif
// 處理頂點工廠資料.
FVertexFactoryIntermediates VFIntermediates = GetVertexFactoryIntermediates(Input);
float4 WorldPositionExcludingWPO = VertexFactoryGetWorldPosition(Input, VFIntermediates);
float4 WorldPosition = WorldPositionExcludingWPO;
// 獲取材質的頂點資料, 處理座標等.
half3x3 TangentToLocal = VertexFactoryGetTangentToLocal(Input, VFIntermediates);
FMaterialVertexParameters VertexParameters = GetMaterialVertexParameters(Input, VFIntermediates, WorldPosition.xyz, TangentToLocal);
half3 WorldPositionOffset = GetMaterialWorldPositionOffset(VertexParameters);
WorldPosition.xyz += WorldPositionOffset;
float4 RasterizedWorldPosition = VertexFactoryGetRasterizedWorldPosition(Input, VFIntermediates, WorldPosition);
Output.Position = mul(RasterizedWorldPosition, ResolvedView.TranslatedWorldToClip);
Output.BasePassInterpolants.PixelPosition = WorldPosition;
#if USE_WORLD_POSITION_EXCLUDING_SHADER_OFFSETS
Output.BasePassInterpolants.PixelPositionExcludingWPO = WorldPositionExcludingWPO.xyz;
#endif
// 裁剪面.
#if USE_PS_CLIP_PLANE
Output.BasePassInterpolants.OutClipDistance = dot(ResolvedView.GlobalClippingPlane, float4(WorldPosition.xyz - ResolvedView.PreViewTranslation.xyz, 1));
#endif
// 頂點霧.
#if USE_VERTEX_FOG
float4 VertexFog = CalculateHeightFog(WorldPosition.xyz - ResolvedView.TranslatedWorldCameraOrigin);
#if PROJECT_SUPPORT_SKY_ATMOSPHERE && MATERIAL_IS_SKY==0 // Do not apply aerial perpsective on sky materials
if (ResolvedView.SkyAtmosphereApplyCameraAerialPerspectiveVolume > 0.0f)
{
const float OneOverPreExposure = USE_PREEXPOSURE ? ResolvedView.OneOverPreExposure : 1.0f;
// Sample the aerial perspective (AP). It is also blended under the VertexFog parameter.
VertexFog = GetAerialPerspectiveLuminanceTransmittanceWithFogOver(
ResolvedView.RealTimeReflectionCapture, ResolvedView.SkyAtmosphereCameraAerialPerspectiveVolumeSizeAndInvSize,
Output.Position, WorldPosition.xyz*CM_TO_SKY_UNIT, ResolvedView.TranslatedWorldCameraOrigin*CM_TO_SKY_UNIT,
View.CameraAerialPerspectiveVolume, View.CameraAerialPerspectiveVolumeSampler,
ResolvedView.SkyAtmosphereCameraAerialPerspectiveVolumeDepthResolutionInv,
ResolvedView.SkyAtmosphereCameraAerialPerspectiveVolumeDepthResolution,
ResolvedView.SkyAtmosphereAerialPerspectiveStartDepthKm,
ResolvedView.SkyAtmosphereCameraAerialPerspectiveVolumeDepthSliceLengthKm,
ResolvedView.SkyAtmosphereCameraAerialPerspectiveVolumeDepthSliceLengthKmInv,
OneOverPreExposure, VertexFog);
}
#endif
#if PACK_INTERPOLANTS
PackedInterps[0] = VertexFog;
#else
Output.BasePassInterpolants.VertexFog = VertexFog;
#endif // PACK_INTERPOLANTS
#endif // USE_VERTEX_FOG
(......)
// 獲取待插值的資料.
Output.FactoryInterpolants = VertexFactoryGetInterpolants(Input, VFIntermediates, VertexParameters);
Output.BasePassInterpolants.PixelPosition.w = Output.Position.w;
// 打包插值資料.
#if PACK_INTERPOLANTS
VertexFactoryPackInterpolants(Output.FactoryInterpolants, PackedInterps);
#endif // PACK_INTERPOLANTS
#if !OUTPUT_MOBILE_HDR && COMPILER_GLSL_ES3_1
Output.Position.y *= -1;
#endif
}
以上可知,檢視例項會根據立體繪製(INSTANCED_STEREO)、多檢視(MOBILE_MULTI_VIEW)和普通模式的不同而作不同處理。VS支援頂點霧,但預設是關閉的,需要在工程配置內開啟。
存在打包插值模式,為了壓縮VS到PS之間的插值消耗和頻寬。是否開啟由巨集PACK_INTERPOLANTS
決定,它的定義如下:
// Engine\Shaders\Private\MobileBasePassCommon.ush
#define PACK_INTERPOLANTS (USE_VERTEX_FOG && NUM_VF_PACKED_INTERPOLANTS > 0 && (ES3_1_PROFILE))
也就是說,只有在開啟頂點霧、頂點工廠存在打包插值資料且著色平臺為OpenGL ES 3.1時,才會開啟打包插值特性。相比PC端BasePass的VS,移動端的VS做了大量簡化,可以簡單地認為只是PC端的一個很小的子集。下面繼續分析PS:
// Engine\Shaders\Private\MobileBasePassPixelShader.usf
#include "Common.ush"
// 各類巨集定義.
#define MobileSceneTextures MobileBasePass.SceneTextures
#define EyeAdaptationStruct MobileBasePass
(......)
// 最接近被渲染物體的場景反射捕捉(已預歸一化), 完全粗糙(FULLY_ROUGH)材質不支援.
#if !FULLY_ROUGH
#if HQ_REFLECTIONS
#define MAX_HQ_REFLECTIONS 3
TextureCube ReflectionCubemap0;
SamplerState ReflectionCubemapSampler0;
TextureCube ReflectionCubemap1;
SamplerState ReflectionCubemapSampler1;
TextureCube ReflectionCubemap2;
SamplerState ReflectionCubemapSampler2;
// x,y,z - inverted average brightness for 0, 1, 2; w - sky cube texture max mips.
float4 ReflectionAverageBrigtness;
float4 ReflectanceMaxValueRGBM;
float4 ReflectionPositionsAndRadii[MAX_HQ_REFLECTIONS];
#if ALLOW_CUBE_REFLECTIONS
float4x4 CaptureBoxTransformArray[MAX_HQ_REFLECTIONS];
float4 CaptureBoxScalesArray[MAX_HQ_REFLECTIONS];
#endif
#endif
#endif
// 反射球/IBL等介面.
half4 GetPlanarReflection(float3 WorldPosition, half3 WorldNormal, half Roughness);
half MobileComputeMixingWeight(half IndirectIrradiance, half AverageBrightness, half Roughness);
half3 GetLookupVectorForBoxCaptureMobile(half3 ReflectionVector, ...);
half3 GetLookupVectorForSphereCaptureMobile(half3 ReflectionVector, ...);
void GatherSpecularIBL(FMaterialPixelParameters MaterialParameters, ...);
void BlendReflectionCaptures(FMaterialPixelParameters MaterialParameters, ...);
half3 GetImageBasedReflectionLighting(FMaterialPixelParameters MaterialParameters, ...);
// 其它介面.
half3 FrameBufferBlendOp(half4 Source);
bool UseCSM();
void ApplyPixelDepthOffsetForMobileBasePass(inout FMaterialPixelParameters MaterialParameters, FPixelMaterialInputs PixelMaterialInputs, out float OutDepth);
// 累積動態點光源.
#if MAX_DYNAMIC_POINT_LIGHTS > 0
void AccumulateLightingOfDynamicPointLight(
FMaterialPixelParameters MaterialParameters,
FMobileShadingModelContext ShadingModelContext,
FGBufferData GBuffer,
float4 LightPositionAndInvRadius,
float4 LightColorAndFalloffExponent,
float4 SpotLightDirectionAndSpecularScale,
float4 SpotLightAnglesAndSoftTransitionScaleAndLightShadowType,
#if SUPPORT_SPOTLIGHTS_SHADOW
FPCFSamplerSettings Settings,
float4 SpotLightShadowSharpenAndShadowFadeFraction,
float4 SpotLightShadowmapMinMax,
float4x4 SpotLightShadowWorldToShadowMatrix,
#endif
inout half3 Color)
{
uint LightShadowType = SpotLightAnglesAndSoftTransitionScaleAndLightShadowType.w;
float FadedShadow = 1.0f;
// 計算聚光燈陰影.
#if SUPPORT_SPOTLIGHTS_SHADOW
if ((LightShadowType & LightShadowType_Shadow) == LightShadowType_Shadow)
{
float4 HomogeneousShadowPosition = mul(float4(MaterialParameters.AbsoluteWorldPosition, 1), SpotLightShadowWorldToShadowMatrix);
float2 ShadowUVs = HomogeneousShadowPosition.xy / HomogeneousShadowPosition.w;
if (all(ShadowUVs >= SpotLightShadowmapMinMax.xy && ShadowUVs <= SpotLightShadowmapMinMax.zw))
{
// Clamp pixel depth in light space for shadowing opaque, because areas of the shadow depth buffer that weren't rendered to will have been cleared to 1
// We want to force the shadow comparison to result in 'unshadowed' in that case, regardless of whether the pixel being shaded is in front or behind that plane
float LightSpacePixelDepthForOpaque = min(HomogeneousShadowPosition.z, 0.99999f);
Settings.SceneDepth = LightSpacePixelDepthForOpaque;
Settings.TransitionScale = SpotLightAnglesAndSoftTransitionScaleAndLightShadowType.z;
half Shadow = MobileShadowPCF(ShadowUVs, Settings);
Shadow = saturate((Shadow - 0.5) * SpotLightShadowSharpenAndShadowFadeFraction.x + 0.5);
FadedShadow = lerp(1.0f, Square(Shadow), SpotLightShadowSharpenAndShadowFadeFraction.y);
}
}
#endif
// 計算光照.
if ((LightShadowType & ValidLightType) != 0)
{
float3 ToLight = LightPositionAndInvRadius.xyz - MaterialParameters.AbsoluteWorldPosition;
float DistanceSqr = dot(ToLight, ToLight);
float3 L = ToLight * rsqrt(DistanceSqr);
half3 PointH = normalize(MaterialParameters.CameraVector + L);
half PointNoL = max(0, dot(MaterialParameters.WorldNormal, L));
half PointNoH = max(0, dot(MaterialParameters.WorldNormal, PointH));
// 計算光源的衰減.
float Attenuation;
if (LightColorAndFalloffExponent.w == 0)
{
// Sphere falloff (technically just 1/d2 but this avoids inf)
Attenuation = 1 / (DistanceSqr + 1);
float LightRadiusMask = Square(saturate(1 - Square(DistanceSqr * (LightPositionAndInvRadius.w * LightPositionAndInvRadius.w))));
Attenuation *= LightRadiusMask;
}
else
{
Attenuation = RadialAttenuation(ToLight * LightPositionAndInvRadius.w, LightColorAndFalloffExponent.w);
}
#if PROJECT_MOBILE_ENABLE_MOVABLE_SPOTLIGHTS
if ((LightShadowType & LightShadowType_SpotLight) == LightShadowType_SpotLight)
{
Attenuation *= SpotAttenuation(L, -SpotLightDirectionAndSpecularScale.xyz, SpotLightAnglesAndSoftTransitionScaleAndLightShadowType.xy) * FadedShadow;
}
#endif
// 累加光照結果.
#if !FULLY_ROUGH
FMobileDirectLighting Lighting = MobileIntegrateBxDF(ShadingModelContext, GBuffer, PointNoL, MaterialParameters.CameraVector, PointH, PointNoH);
Color += min(65000.0, (Attenuation) * LightColorAndFalloffExponent.rgb * (1.0 / PI) * (Lighting.Diffuse + Lighting.Specular * SpotLightDirectionAndSpecularScale.w));
#else
Color += (Attenuation * PointNoL) * LightColorAndFalloffExponent.rgb * (1.0 / PI) * ShadingModelContext.DiffuseColor;
#endif
}
}
#endif
(......)
// 計算非直接光照.
half ComputeIndirect(VTPageTableResult LightmapVTPageTableResult, FVertexFactoryInterpolantsVSToPS Interpolants, float3 DiffuseDir, FMobileShadingModelContext ShadingModelContext, out half IndirectIrradiance, out half3 Color)
{
//To keep IndirectLightingCache conherence with PC, initialize the IndirectIrradiance to zero.
IndirectIrradiance = 0;
Color = 0;
// 非直接漫反射.
#if LQ_TEXTURE_LIGHTMAP
float2 LightmapUV0, LightmapUV1;
uint LightmapDataIndex;
GetLightMapCoordinates(Interpolants, LightmapUV0, LightmapUV1, LightmapDataIndex);
half4 LightmapColor = GetLightMapColorLQ(LightmapVTPageTableResult, LightmapUV0, LightmapUV1, LightmapDataIndex, DiffuseDir);
Color += LightmapColor.rgb * ShadingModelContext.DiffuseColor * View.IndirectLightingColorScale;
IndirectIrradiance = LightmapColor.a;
#elif CACHED_POINT_INDIRECT_LIGHTING
#if MATERIALBLENDING_MASKED || MATERIALBLENDING_SOLID
// 將法線應用到半透明物體.
FThreeBandSHVectorRGB PointIndirectLighting;
PointIndirectLighting.R.V0 = IndirectLightingCache.IndirectLightingSHCoefficients0[0];
PointIndirectLighting.R.V1 = IndirectLightingCache.IndirectLightingSHCoefficients1[0];
PointIndirectLighting.R.V2 = IndirectLightingCache.IndirectLightingSHCoefficients2[0];
PointIndirectLighting.G.V0 = IndirectLightingCache.IndirectLightingSHCoefficients0[1];
PointIndirectLighting.G.V1 = IndirectLightingCache.IndirectLightingSHCoefficients1[1];
PointIndirectLighting.G.V2 = IndirectLightingCache.IndirectLightingSHCoefficients2[1];
PointIndirectLighting.B.V0 = IndirectLightingCache.IndirectLightingSHCoefficients0[2];
PointIndirectLighting.B.V1 = IndirectLightingCache.IndirectLightingSHCoefficients1[2];
PointIndirectLighting.B.V2 = IndirectLightingCache.IndirectLightingSHCoefficients2[2];
FThreeBandSHVector DiffuseTransferSH = CalcDiffuseTransferSH3(DiffuseDir, 1);
// 計算加入了法線影響的漫反射光照.
half3 DiffuseGI = max(half3(0, 0, 0), DotSH3(PointIndirectLighting, DiffuseTransferSH));
IndirectIrradiance = Luminance(DiffuseGI);
Color += ShadingModelContext.DiffuseColor * DiffuseGI * View.IndirectLightingColorScale;
#else
        // 半透明使用無方向(Non-directional)的光照: 漫反射打包在xyz, 且已在CPU端除以PI並應用了SH漫反射卷積.
half3 PointIndirectLighting = IndirectLightingCache.IndirectLightingSHSingleCoefficient.rgb;
half3 DiffuseGI = PointIndirectLighting;
IndirectIrradiance = Luminance(DiffuseGI);
Color += ShadingModelContext.DiffuseColor * DiffuseGI * View.IndirectLightingColorScale;
#endif
#endif
return IndirectIrradiance;
}
// PS主入口.
PIXELSHADER_EARLYDEPTHSTENCIL
void Main(
FVertexFactoryInterpolantsVSToPS Interpolants
, FMobileBasePassInterpolantsVSToPS BasePassInterpolants
, in float4 SvPosition : SV_Position
OPTIONAL_IsFrontFace
, out half4 OutColor : SV_Target0
#if DEFERRED_SHADING_PATH
, out half4 OutGBufferA : SV_Target1
, out half4 OutGBufferB : SV_Target2
, out half4 OutGBufferC : SV_Target3
#endif
#if USE_SCENE_DEPTH_AUX
, out float OutSceneDepthAux : SV_Target4
#endif
#if OUTPUT_PIXEL_DEPTH_OFFSET
, out float OutDepth : SV_Depth
#endif
)
{
#if MOBILE_MULTI_VIEW
ResolvedView = ResolveView(uint(BasePassInterpolants.MultiViewId));
#else
ResolvedView = ResolveView();
#endif
#if USE_PS_CLIP_PLANE
clip(BasePassInterpolants.OutClipDistance);
#endif
// 解壓打包的插值資料.
#if PACK_INTERPOLANTS
float4 PackedInterpolants[NUM_VF_PACKED_INTERPOLANTS];
VertexFactoryUnpackInterpolants(Interpolants, PackedInterpolants);
#endif
#if COMPILER_GLSL_ES3_1 && !OUTPUT_MOBILE_HDR && !MOBILE_EMULATION
// LDR Mobile needs screen vertical flipped
SvPosition.y = ResolvedView.BufferSizeAndInvSize.y - SvPosition.y - 1;
#endif
// 獲取材質的畫素屬性.
FMaterialPixelParameters MaterialParameters = GetMaterialPixelParameters(Interpolants, SvPosition);
FPixelMaterialInputs PixelMaterialInputs;
{
float4 ScreenPosition = SvPositionToResolvedScreenPosition(SvPosition);
float3 WorldPosition = BasePassInterpolants.PixelPosition.xyz;
float3 WorldPositionExcludingWPO = BasePassInterpolants.PixelPosition.xyz;
#if USE_WORLD_POSITION_EXCLUDING_SHADER_OFFSETS
WorldPositionExcludingWPO = BasePassInterpolants.PixelPositionExcludingWPO;
#endif
CalcMaterialParametersEx(MaterialParameters, PixelMaterialInputs, SvPosition, ScreenPosition, bIsFrontFace, WorldPosition, WorldPositionExcludingWPO);
#if FORCE_VERTEX_NORMAL
// Quality level override of material's normal calculation, can be used to avoid normal map reads etc.
MaterialParameters.WorldNormal = MaterialParameters.TangentToWorld[2];
MaterialParameters.ReflectionVector = ReflectionAboutCustomWorldNormal(MaterialParameters, MaterialParameters.WorldNormal, false);
#endif
}
// 畫素深度偏移.
#if OUTPUT_PIXEL_DEPTH_OFFSET
ApplyPixelDepthOffsetForMobileBasePass(MaterialParameters, PixelMaterialInputs, OutDepth);
#endif
// Mask材質.
#if !EARLY_Z_PASS_ONLY_MATERIAL_MASKING
//Clip if the blend mode requires it.
GetMaterialCoverageAndClipping(MaterialParameters, PixelMaterialInputs);
#endif
    // 計算並快取GBuffer資料, 防止後續多次取樣紋理.
FGBufferData GBuffer = (FGBufferData)0;
GBuffer.WorldNormal = MaterialParameters.WorldNormal;
GBuffer.BaseColor = GetMaterialBaseColor(PixelMaterialInputs);
GBuffer.Metallic = GetMaterialMetallic(PixelMaterialInputs);
GBuffer.Specular = GetMaterialSpecular(PixelMaterialInputs);
GBuffer.Roughness = GetMaterialRoughness(PixelMaterialInputs);
GBuffer.ShadingModelID = GetMaterialShadingModel(PixelMaterialInputs);
half MaterialAO = GetMaterialAmbientOcclusion(PixelMaterialInputs);
// 應用AO.
#if APPLY_AO
half4 GatheredAmbientOcclusion = Texture2DSample(AmbientOcclusionTexture, AmbientOcclusionSampler, SvPositionToBufferUV(SvPosition));
MaterialAO *= GatheredAmbientOcclusion.r;
#endif
GBuffer.GBufferAO = MaterialAO;
    // 由於IEEE 754半精度浮點數(FP16)可表示的最小正值(非規格化數)是2^-24 = 5.96e-8, 而後面涉及到1.0 / Roughness^4的計算, 所以為了防止除零錯誤, 需保證Roughness^4 >= 5.96e-8, 此處直接把粗糙度的下限Clamp到0.015625(0.015625^4 = 5.96e-8).
    // 另外, 為了匹配PC端的延遲渲染(粗糙度以8位儲存), 粗糙度也會被Clamp到不超過1.0.
GBuffer.Roughness = max(0.015625, GetMaterialRoughness(PixelMaterialInputs));
// 初始化移動端著色模型上下文FMobileShadingModelContext.
FMobileShadingModelContext ShadingModelContext = (FMobileShadingModelContext)0;
ShadingModelContext.Opacity = GetMaterialOpacity(PixelMaterialInputs);
    // 薄層透明(Thin Translucent)材質.
#if MATERIAL_SHADINGMODEL_THIN_TRANSLUCENT
(......)
#endif
half3 Color = 0;
// 自定義資料.
half CustomData0 = GetMaterialCustomData0(MaterialParameters);
half CustomData1 = GetMaterialCustomData1(MaterialParameters);
InitShadingModelContext(ShadingModelContext, GBuffer, MaterialParameters.SvPosition, MaterialParameters.CameraVector, CustomData0, CustomData1);
float3 DiffuseDir = MaterialParameters.WorldNormal;
// 頭髮模型.
#if MATERIAL_SHADINGMODEL_HAIR
(......)
#endif
// 光照圖虛擬紋理.
VTPageTableResult LightmapVTPageTableResult = (VTPageTableResult)0.0f;
#if LIGHTMAP_VT_ENABLED
{
float2 LightmapUV0, LightmapUV1;
uint LightmapDataIndex;
GetLightMapCoordinates(Interpolants, LightmapUV0, LightmapUV1, LightmapDataIndex);
LightmapVTPageTableResult = LightmapGetVTSampleInfo(LightmapUV0, LightmapDataIndex, SvPosition.xy);
}
#endif
#if LIGHTMAP_VT_ENABLED
// This must occur after CalcMaterialParameters(), which is required to initialize the VT feedback mechanism
// Lightmap request is always the first VT sample in the shader
StoreVirtualTextureFeedback(MaterialParameters.VirtualTextureFeedback, 0, LightmapVTPageTableResult.PackedRequest);
#endif
// 計算非直接光.
half IndirectIrradiance;
half3 IndirectColor;
ComputeIndirect(LightmapVTPageTableResult, Interpolants, DiffuseDir, ShadingModelContext, IndirectIrradiance, IndirectColor);
Color += IndirectColor;
// 預計算的陰影圖.
half Shadow = GetPrimaryPrecomputedShadowMask(LightmapVTPageTableResult, Interpolants).r;
#if DEFERRED_SHADING_PATH
float4 OutGBufferD;
float4 OutGBufferE;
float4 OutGBufferF;
float4 OutGBufferVelocity = 0;
GBuffer.IndirectIrradiance = IndirectIrradiance;
GBuffer.PrecomputedShadowFactors.r = Shadow;
// 編碼GBuffer資料.
EncodeGBuffer(GBuffer, OutGBufferA, OutGBufferB, OutGBufferC, OutGBufferD, OutGBufferE, OutGBufferF, OutGBufferVelocity);
#else
#if !MATERIAL_SHADINGMODEL_UNLIT
// 天光.
#if ENABLE_SKY_LIGHT
half3 SkyDiffuseLighting = GetSkySHDiffuseSimple(MaterialParameters.WorldNormal);
half3 DiffuseLookup = SkyDiffuseLighting * ResolvedView.SkyLightColor.rgb;
IndirectIrradiance += Luminance(DiffuseLookup);
#endif
Color *= MaterialAO;
IndirectIrradiance *= MaterialAO;
float ShadowPositionZ = 0;
#if DIRECTIONAL_LIGHT_CSM && !MATERIAL_SHADINGMODEL_SINGLELAYERWATER
// CSM陰影.
if (UseCSM())
{
half ShadowMap = MobileDirectionalLightCSM(MaterialParameters.ScreenPosition.xy, MaterialParameters.ScreenPosition.w, ShadowPositionZ);
#if ALLOW_STATIC_LIGHTING
Shadow = min(ShadowMap, Shadow);
#else
Shadow = ShadowMap;
#endif
}
#endif /* DIRECTIONAL_LIGHT_CSM */
// 距離場陰影.
#if APPLY_DISTANCE_FIELD
if (ShadowPositionZ == 0)
{
Shadow = Texture2DSample(MobileBasePass.ScreenSpaceShadowMaskTexture, MobileBasePass.ScreenSpaceShadowMaskSampler, SvPositionToBufferUV(SvPosition)).x;
}
#endif
half NoL = max(0, dot(MaterialParameters.WorldNormal, MobileDirectionalLight.DirectionalLightDirectionAndShadowTransition.xyz));
half3 H = normalize(MaterialParameters.CameraVector + MobileDirectionalLight.DirectionalLightDirectionAndShadowTransition.xyz);
half NoH = max(0, dot(MaterialParameters.WorldNormal, H));
// 平行光 + IBL
#if FULLY_ROUGH
Color += (Shadow * NoL) * MobileDirectionalLight.DirectionalLightColor.rgb * ShadingModelContext.DiffuseColor;
#else
FMobileDirectLighting Lighting = MobileIntegrateBxDF(ShadingModelContext, GBuffer, NoL, MaterialParameters.CameraVector, H, NoH);
// MobileDirectionalLight.DirectionalLightDistanceFadeMADAndSpecularScale.z儲存了平行光的SpecularScale.
Color += (Shadow) * MobileDirectionalLight.DirectionalLightColor.rgb * (Lighting.Diffuse + Lighting.Specular * MobileDirectionalLight.DirectionalLightDistanceFadeMADAndSpecularScale.z);
// 頭髮著色.
#if !(MATERIAL_SINGLE_SHADINGMODEL && MATERIAL_SHADINGMODEL_HAIR)
(......)
#endif
#endif /* FULLY_ROUGH */
// 區域性光源, 最多4個.
#if MAX_DYNAMIC_POINT_LIGHTS > 0 && !MATERIAL_SHADINGMODEL_SINGLELAYERWATER
if(NumDynamicPointLights > 0)
{
#if SUPPORT_SPOTLIGHTS_SHADOW
FPCFSamplerSettings Settings;
Settings.ShadowDepthTexture = DynamicSpotLightShadowTexture;
Settings.ShadowDepthTextureSampler = DynamicSpotLightShadowSampler;
Settings.ShadowBufferSize = DynamicSpotLightShadowBufferSize;
Settings.bSubsurface = false;
Settings.bTreatMaxDepthUnshadowed = false;
Settings.DensityMulConstant = 0;
Settings.ProjectionDepthBiasParameters = 0;
#endif
AccumulateLightingOfDynamicPointLight(MaterialParameters, ...);
if (MAX_DYNAMIC_POINT_LIGHTS > 1 && NumDynamicPointLights > 1)
{
AccumulateLightingOfDynamicPointLight(MaterialParameters, ...);
if (MAX_DYNAMIC_POINT_LIGHTS > 2 && NumDynamicPointLights > 2)
{
AccumulateLightingOfDynamicPointLight(MaterialParameters, ...);
if (MAX_DYNAMIC_POINT_LIGHTS > 3 && NumDynamicPointLights > 3)
{
AccumulateLightingOfDynamicPointLight(MaterialParameters, ...);
}
}
}
}
#endif
// 天空光.
#if ENABLE_SKY_LIGHT
#if MATERIAL_TWOSIDED && LQ_TEXTURE_LIGHTMAP
if (NoL == 0)
{
#endif
#if MATERIAL_SHADINGMODEL_SINGLELAYERWATER
ShadingModelContext.WaterDiffuseIndirectLuminance += SkyDiffuseLighting;
#endif
Color += SkyDiffuseLighting * half3(ResolvedView.SkyLightColor.rgb) * ShadingModelContext.DiffuseColor * MaterialAO;
#if MATERIAL_TWOSIDED && LQ_TEXTURE_LIGHTMAP
}
#endif
#endif
#endif /* !MATERIAL_SHADINGMODEL_UNLIT */
#if MATERIAL_SHADINGMODEL_SINGLELAYERWATER
(......)
#endif // MATERIAL_SHADINGMODEL_SINGLELAYERWATER
#endif// DEFERRED_SHADING_PATH
// 處理頂點霧.
half4 VertexFog = half4(0, 0, 0, 1);
#if USE_VERTEX_FOG
#if PACK_INTERPOLANTS
VertexFog = PackedInterpolants[0];
#else
VertexFog = BasePassInterpolants.VertexFog;
#endif
#endif
// 自發光.
half3 Emissive = GetMaterialEmissive(PixelMaterialInputs);
#if MATERIAL_SHADINGMODEL_THIN_TRANSLUCENT
Emissive *= TopMaterialCoverage;
#endif
Color += Emissive;
#if !MATERIAL_SHADINGMODEL_UNLIT && MOBILE_EMULATION
Color = lerp(Color, ShadingModelContext.DiffuseColor, ResolvedView.UnlitViewmodeMask);
#endif
// 組合霧顏色到輸出顏色.
#if MATERIALBLENDING_ALPHACOMPOSITE || MATERIAL_SHADINGMODEL_SINGLELAYERWATER
OutColor = half4(Color * VertexFog.a + VertexFog.rgb * ShadingModelContext.Opacity, ShadingModelContext.Opacity);
#elif MATERIALBLENDING_ALPHAHOLDOUT
// not implemented for holdout
OutColor = half4(Color * VertexFog.a + VertexFog.rgb * ShadingModelContext.Opacity, ShadingModelContext.Opacity);
#elif MATERIALBLENDING_TRANSLUCENT
OutColor = half4(Color * VertexFog.a + VertexFog.rgb, ShadingModelContext.Opacity);
#elif MATERIALBLENDING_ADDITIVE
OutColor = half4(Color * (VertexFog.a * ShadingModelContext.Opacity.x), 0.0f);
#elif MATERIALBLENDING_MODULATE
half3 FoggedColor = lerp(half3(1, 1, 1), Color, VertexFog.aaa * VertexFog.aaa);
OutColor = half4(FoggedColor, ShadingModelContext.Opacity);
#else
OutColor.rgb = Color * VertexFog.a + VertexFog.rgb;
#if !MATERIAL_USE_ALPHA_TO_COVERAGE
// Scene color alpha is not used yet so we set it to 1
OutColor.a = 1.0;
#if OUTPUT_MOBILE_HDR
// Store depth in FP16 alpha. This depth value can be fetched during translucency or sampled in post-processing
OutColor.a = SvPosition.z;
#endif
#else
half MaterialOpacityMask = GetMaterialMaskInputRaw(PixelMaterialInputs);
OutColor.a = GetMaterialMask(PixelMaterialInputs) / max(abs(ddx(MaterialOpacityMask)) + abs(ddy(MaterialOpacityMask)), 0.0001f) + 0.5f;
#endif
#endif
#if !MATERIALBLENDING_MODULATE && USE_PREEXPOSURE
OutColor.rgb *= ResolvedView.PreExposure;
#endif
#if MATERIAL_IS_SKY
OutColor.rgb = min(OutColor.rgb, Max10BitsFloat.xxx * 0.5f);
#endif
#if USE_SCENE_DEPTH_AUX
OutSceneDepthAux = SvPosition.z;
#endif
// 處理顏色的alpha.
#if USE_EDITOR_COMPOSITING && (MOBILE_EMULATION)
// Editor primitive depth testing
OutColor.a = 1.0;
#if MATERIALBLENDING_MASKED
// some material might have an opacity value
OutColor.a = GetMaterialMaskInputRaw(PixelMaterialInputs);
#endif
clip(OutColor.a - GetMaterialOpacityMaskClipValue());
#else
#if OUTPUT_GAMMA_SPACE
OutColor.rgb = sqrt(OutColor.rgb);
#endif
#endif
#if NUM_VIRTUALTEXTURE_SAMPLES || LIGHTMAP_VT_ENABLED
FinalizeVirtualTextureFeedback(
MaterialParameters.VirtualTextureFeedback,
MaterialParameters.SvPosition,
ShadingModelContext.Opacity,
View.FrameNumber,
View.VTFeedbackBuffer
);
#endif
}
移動端BasePass PS的處理過程比較複雜,步驟繁多,主要有:解壓打包的插值資料,獲取並計算材質屬性,計算並快取GBuffer資料,處理或調整GBuffer資料,計算前向渲染分支的光照(平行光、區域性光源),計算距離場、CSM等陰影,計算天空光,處理靜態光照、非直接光和IBL,計算霧效,以及處理水體、頭髮、薄層透明等特殊著色模型。
由於IEEE 754半精度浮點數(FP16)可表示的最小正值(非規格化數)是\(\cfrac{1.0}{2^{24}} = 5.96 \cdot 10^{-8}\),而後續的光照計算涉及粗糙度4次方的倒數(\(\cfrac{1.0}{\text{Roughness}^4}\)),為了防止除零錯誤,需要把粗糙度的下限擷取到\(0.015625\)(\(0.015625^4 = 5.96 \cdot 10^{-8}\))。
GBuffer.Roughness = max(0.015625, GetMaterialRoughness(PixelMaterialInputs));
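這個下限值可以快速驗證:\(0.015625 = \cfrac{1}{64} = 2^{-6}\),而\((2^{-6})^4 = 2^{-24} \approx 5.96 \cdot 10^{-8}\),恰好落在FP16可表示的最小正值上,因此粗糙度的4次方不會下溢為0。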
這也警示我們在開發移動端的渲染特性時,需要格外注意和把控資料精度,否則在低端裝置經常由於資料精度不足而出現各種奇葩的畫面異常。
雖然上面的程式碼較多,但由很多巨集控制著,實際渲染單個材質所需的程式碼可能只是其中很小的一個子集。比如說,預設支援4個區域性光源,但如果在工程配置(下圖)中將其設為2或更少,則實際執行的光源相關指令會少很多。
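上述工程設定對應的控制檯變數是`r.MobileNumDynamicPointLights`(若與讀者所用的引擎版本不符,請以實際的工程設定為準),也可以直接在DefaultEngine.ini的`[/Script/Engine.RendererSettings]`段中新增類似`r.MobileNumDynamicPointLights=2`的欄位來調整上限。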
如果是前向渲染分支,則GBuffer的很多處理將被忽略;如果是延遲渲染分支,則平行光、區域性光源的計算將被忽略,由延遲渲染Pass的shader執行。
下面對重要介面EncodeGBuffer
做剖析:
void EncodeGBuffer(
FGBufferData GBuffer,
out float4 OutGBufferA,
out float4 OutGBufferB,
out float4 OutGBufferC,
out float4 OutGBufferD,
out float4 OutGBufferE,
out float4 OutGBufferVelocity,
float QuantizationBias = 0 // -0.5 to 0.5 random float. Used to bias quantization.
)
{
if (GBuffer.ShadingModelID == SHADINGMODELID_UNLIT)
{
OutGBufferA = 0;
SetGBufferForUnlit(OutGBufferB);
OutGBufferC = 0;
OutGBufferD = 0;
OutGBufferE = 0;
}
else
{
// GBufferA: 八面體壓縮後的法線, 預計算陰影因子, 逐物體資料.
#if MOBILE_DEFERRED_SHADING
OutGBufferA.rg = UnitVectorToOctahedron( normalize(GBuffer.WorldNormal) ) * 0.5f + 0.5f;
OutGBufferA.b = GBuffer.PrecomputedShadowFactors.x;
OutGBufferA.a = GBuffer.PerObjectGBufferData;
#else
(......)
#endif
// GBufferB: 金屬度, 高光度, 粗糙度, 著色模型, 其它Mask.
OutGBufferB.r = GBuffer.Metallic;
OutGBufferB.g = GBuffer.Specular;
OutGBufferB.b = GBuffer.Roughness;
OutGBufferB.a = EncodeShadingModelIdAndSelectiveOutputMask(GBuffer.ShadingModelID, GBuffer.SelectiveOutputMask);
// GBufferC: 基礎色, AO或非直接光.
OutGBufferC.rgb = EncodeBaseColor( GBuffer.BaseColor );
#if ALLOW_STATIC_LIGHTING
// No space for AO. Multiply IndirectIrradiance by AO instead of storing.
OutGBufferC.a = EncodeIndirectIrradiance(GBuffer.IndirectIrradiance * GBuffer.GBufferAO) + QuantizationBias * (1.0 / 255.0);
#else
OutGBufferC.a = GBuffer.GBufferAO;
#endif
OutGBufferD = GBuffer.CustomData;
OutGBufferE = GBuffer.PrecomputedShadowFactors;
}
#if WRITES_VELOCITY_TO_GBUFFER
OutGBufferVelocity = GBuffer.Velocity;
#else
OutGBufferVelocity = 0;
#endif
}
在預設光照模型(DefaultLit)下,BasePass輸出的結果有以下幾種紋理:
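根據上面PS的輸出簽名和EncodeGBuffer的實現,可以把移動端延遲渲染的MRT佈局大致歸納如下(僅作參考,具體的通道分配和紋理格式以實際配置與原始碼為準):

| 渲染目標 | 主要內容 |
| ---- | ---- |
| SceneColor (SV_Target0) | 自發光、非直接光照等在BasePass階段就能確定的顏色。 |
| GBufferA (SV_Target1) | 八面體編碼的世界法線(RG)、預計算陰影因子(B)、逐物體資料(A)。 |
| GBufferB (SV_Target2) | 金屬度(R)、高光度(G)、粗糙度(B)、著色模型ID與SelectiveOutputMask(A)。 |
| GBufferC (SV_Target3) | 編碼後的基礎色(RGB)、AO或與AO相乘後的非直接光輻照度(A)。 |
| SceneDepthAux (SV_Target4) | 輔助的場景深度(僅在USE_SCENE_DEPTH_AUX開啟時輸出)。 |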
12.3.3.3 MobileDeferredShading
移動端延遲光照的VS和PC端是一樣的,都是DeferredLightVertexShaders.usf,但PS不一樣,用的是MobileDeferredShading.usf。由於VS和PC端一致且沒有特殊操作,此處不再贅述,有興趣的同學可以參看第五篇的小節5.5.3.1 DeferredLightVertexShader。
下面直接分析PS程式碼:
// Engine\Shaders\Private\MobileDeferredShading.usf
(......)
// 移動端光源資料結構體.
struct FMobileLightData
{
float3 Position;
float InvRadius;
float3 Color;
float FalloffExponent;
float3 Direction;
float2 SpotAngles;
float SourceRadius;
float SpecularScale;
bool bInverseSquared;
bool bSpotLight;
};
// 獲取GBuffer資料.
void FetchGBuffer(in float2 UV, out float4 GBufferA, out float4 GBufferB, out float4 GBufferC, out float4 GBufferD, out float SceneDepth)
{
// Vulkan的子pass獲取資料.
#if VULKAN_PROFILE
GBufferA = VulkanSubpassFetch1();
GBufferB = VulkanSubpassFetch2();
GBufferC = VulkanSubpassFetch3();
GBufferD = 0;
SceneDepth = ConvertFromDeviceZ(VulkanSubpassDepthFetch());
// Metal的子pass獲取資料.
#elif METAL_PROFILE
GBufferA = SubpassFetchRGBA_1();
GBufferB = SubpassFetchRGBA_2();
GBufferC = SubpassFetchRGBA_3();
GBufferD = 0;
SceneDepth = ConvertFromDeviceZ(SubpassFetchR_4());
// 其它平臺(DX, OpenGL)的子pass獲取資料.
#else
GBufferA = Texture2DSampleLevel(MobileSceneTextures.GBufferATexture, MobileSceneTextures.GBufferATextureSampler, UV, 0);
GBufferB = Texture2DSampleLevel(MobileSceneTextures.GBufferBTexture, MobileSceneTextures.GBufferBTextureSampler, UV, 0);
GBufferC = Texture2DSampleLevel(MobileSceneTextures.GBufferCTexture, MobileSceneTextures.GBufferCTextureSampler, UV, 0);
GBufferD = 0;
SceneDepth = ConvertFromDeviceZ(Texture2DSampleLevel(MobileSceneTextures.SceneDepthTexture, MobileSceneTextures.SceneDepthTextureSampler, UV, 0).r);
#endif
}
// 解壓GBuffer資料.
FGBufferData DecodeGBufferMobile(
float4 InGBufferA,
float4 InGBufferB,
float4 InGBufferC,
float4 InGBufferD)
{
FGBufferData GBuffer;
GBuffer.WorldNormal = OctahedronToUnitVector( InGBufferA.xy * 2.0f - 1.0f );
GBuffer.PrecomputedShadowFactors = InGBufferA.z;
GBuffer.PerObjectGBufferData = InGBufferA.a;
GBuffer.Metallic = InGBufferB.r;
GBuffer.Specular = InGBufferB.g;
GBuffer.Roughness = max(0.015625, InGBufferB.b);
// Note: must match GetShadingModelId standalone function logic
// Also Note: SimpleElementPixelShader directly sets SV_Target2 ( GBufferB ) to indicate unlit.
// An update there will be required if this layout changes.
GBuffer.ShadingModelID = DecodeShadingModelId(InGBufferB.a);
GBuffer.SelectiveOutputMask = DecodeSelectiveOutputMask(InGBufferB.a);
GBuffer.BaseColor = DecodeBaseColor(InGBufferC.rgb);
#if ALLOW_STATIC_LIGHTING
GBuffer.GBufferAO = 1;
GBuffer.IndirectIrradiance = DecodeIndirectIrradiance(InGBufferC.a);
#else
GBuffer.GBufferAO = InGBufferC.a;
GBuffer.IndirectIrradiance = 1;
#endif
GBuffer.CustomData = HasCustomGBufferData(GBuffer.ShadingModelID) ? InGBufferD : 0;
return GBuffer;
}
// 直接光照.
half3 GetDirectLighting(
FMobileLightData LightData,
FMobileShadingModelContext ShadingModelContext,
FGBufferData GBuffer,
float3 WorldPosition,
half3 CameraVector)
{
half3 DirectLighting = 0;
float3 ToLight = LightData.Position - WorldPosition;
float DistanceSqr = dot(ToLight, ToLight);
float3 L = ToLight * rsqrt(DistanceSqr);
// 光源衰減.
float Attenuation = 0.0;
if (LightData.bInverseSquared)
{
// Sphere falloff (technically just 1/d2 but this avoids inf)
Attenuation = 1.0f / (DistanceSqr + 1.0f);
Attenuation *= Square(saturate(1 - Square(DistanceSqr * Square(LightData.InvRadius))));
}
else
{
Attenuation = RadialAttenuation(ToLight * LightData.InvRadius, LightData.FalloffExponent);
}
// 聚光燈衰減.
if (LightData.bSpotLight)
{
Attenuation *= SpotAttenuation(L, -LightData.Direction, LightData.SpotAngles);
}
// 如果衰減不為0, 則計算直接光照.
if (Attenuation > 0.0)
{
half3 H = normalize(CameraVector + L);
half NoL = max(0.0, dot(GBuffer.WorldNormal, L));
half NoH = max(0.0, dot(GBuffer.WorldNormal, H));
FMobileDirectLighting Lighting = MobileIntegrateBxDF(ShadingModelContext, GBuffer, NoL, CameraVector, H, NoH);
DirectLighting = (Lighting.Diffuse + Lighting.Specular * LightData.SpecularScale) * (LightData.Color * (1.0 / PI) * Attenuation);
}
return DirectLighting;
}
// 光照函式.
half ComputeLightFunctionMultiplier(float3 WorldPosition);
// 使用光網格新增區域性光照, 不支援動態陰影, 因為需要逐光源陰影圖.
half3 GetLightGridLocalLighting(const FCulledLightsGridData InLightGridData, ...);
// 平行光的PS主入口.
void MobileDirectLightPS(
noperspective float4 UVAndScreenPos : TEXCOORD0,
float4 SvPosition : SV_POSITION,
out half4 OutColor : SV_Target0)
{
// 恢復(讀取)GBuffer資料.
FGBufferData GBuffer = (FGBufferData)0;
float SceneDepth = 0;
{
float4 GBufferA = 0;
float4 GBufferB = 0;
float4 GBufferC = 0;
float4 GBufferD = 0;
FetchGBuffer(UVAndScreenPos.xy, GBufferA, GBufferB, GBufferC, GBufferD, SceneDepth);
GBuffer = DecodeGBufferMobile(GBufferA, GBufferB, GBufferC, GBufferD);
}
// 計算基礎向量.
float2 ScreenPos = UVAndScreenPos.zw;
float3 WorldPosition = mul(float4(ScreenPos * SceneDepth, SceneDepth, 1), View.ScreenToWorld).xyz;
half3 CameraVector = normalize(View.WorldCameraOrigin - WorldPosition);
half NoV = max(0, dot(GBuffer.WorldNormal, CameraVector));
half3 ReflectionVector = GBuffer.WorldNormal * (NoV * 2.0) - CameraVector;
half3 Color = 0;
// Check movable light param to determine if we should be using precomputed shadows
half Shadow = LightFunctionParameters2.z > 0.0f ? 1.0f : GBuffer.PrecomputedShadowFactors.r;
// CSM陰影.
#if APPLY_CSM
float ShadowPositionZ = 0;
float4 ScreenPosition = SvPositionToScreenPosition(float4(SvPosition.xyz,SceneDepth));
float ShadowMap = MobileDirectionalLightCSM(ScreenPosition.xy, SceneDepth, ShadowPositionZ);
Shadow = min(ShadowMap, Shadow);
#endif
// 著色模型上下文.
FMobileShadingModelContext ShadingModelContext = (FMobileShadingModelContext)0;
{
half DielectricSpecular = 0.08 * GBuffer.Specular;
ShadingModelContext.DiffuseColor = GBuffer.BaseColor - GBuffer.BaseColor * GBuffer.Metallic; // 1 mad
ShadingModelContext.SpecularColor = (DielectricSpecular - DielectricSpecular * GBuffer.Metallic) + GBuffer.BaseColor * GBuffer.Metallic; // 2 mad
// 計算環境的BRDF.
ShadingModelContext.SpecularColor = GetEnvBRDF(ShadingModelContext.SpecularColor, GBuffer.Roughness, NoV);
}
// 區域性光源.
float2 LocalPosition = SvPosition.xy - View.ViewRectMin.xy;
uint GridIndex = ComputeLightGridCellIndex(uint2(LocalPosition.x, LocalPosition.y), SceneDepth);
// 分簇光源
#if USE_CLUSTERED
{
const uint EyeIndex = 0;
const FCulledLightsGridData CulledLightGridData = GetCulledLightsGrid(GridIndex, EyeIndex);
Color += GetLightGridLocalLighting(CulledLightGridData, ShadingModelContext, GBuffer, WorldPosition, CameraVector, EyeIndex, 0);
}
#endif
// 計算平行光.
half NoL = max(0, dot(GBuffer.WorldNormal, MobileDirectionalLight.DirectionalLightDirectionAndShadowTransition.xyz));
half3 H = normalize(CameraVector + MobileDirectionalLight.DirectionalLightDirectionAndShadowTransition.xyz);
half NoH = max(0, dot(GBuffer.WorldNormal, H));
FMobileDirectLighting Lighting;
Lighting.Specular = ShadingModelContext.SpecularColor * CalcSpecular(GBuffer.Roughness, NoH);
Lighting.Diffuse = ShadingModelContext.DiffuseColor;
Color += (Shadow * NoL) * MobileDirectionalLight.DirectionalLightColor.rgb * (Lighting.Diffuse + Lighting.Specular * MobileDirectionalLight.DirectionalLightDistanceFadeMADAndSpecularScale.z);
// 處理反射(IBL, 反射捕捉器).
#if APPLY_REFLECTION
uint NumCulledEntryIndex = (ForwardLightData.NumGridCells + GridIndex) * NUM_CULLED_LIGHTS_GRID_STRIDE;
uint NumLocalReflectionCaptures = min(ForwardLightData.NumCulledLightsGrid[NumCulledEntryIndex + 0], ForwardLightData.NumReflectionCaptures);
uint DataStartIndex = ForwardLightData.NumCulledLightsGrid[NumCulledEntryIndex + 1];
float3 SpecularIBL = CompositeReflectionCapturesAndSkylight(
1.0f,
WorldPosition,
ReflectionVector,//RayDirection,
GBuffer.Roughness,
GBuffer.IndirectIrradiance,
1.0f,
0.0f,
NumLocalReflectionCaptures,
DataStartIndex,
0,
true);
Color += SpecularIBL * ShadingModelContext.SpecularColor;
#elif APPLY_SKY_REFLECTION
float SkyAverageBrightness = 1.0f;
float3 SpecularIBL = GetSkyLightReflection(ReflectionVector, GBuffer.Roughness, SkyAverageBrightness);
SpecularIBL *= ComputeMixingWeight(GBuffer.IndirectIrradiance, SkyAverageBrightness, GBuffer.Roughness);
Color += SpecularIBL * ShadingModelContext.SpecularColor;
#endif
// 天空光漫反射.
half3 SkyDiffuseLighting = GetSkySHDiffuseSimple(GBuffer.WorldNormal);
Color+= SkyDiffuseLighting * half3(View.SkyLightColor.rgb) * ShadingModelContext.DiffuseColor * GBuffer.GBufferAO;
half LightAttenuation = ComputeLightFunctionMultiplier(WorldPosition);
#if USE_PREEXPOSURE
// MobileHDR applies PreExposure in tonemapper
LightAttenuation *= View.PreExposure;
#endif
OutColor.rgb = Color.rgb * LightAttenuation;
OutColor.a = 1;
}
// 區域性光源的PS主入口.
void MobileRadialLightPS(
float4 InScreenPosition : TEXCOORD0,
float4 SVPos : SV_POSITION,
out half4 OutColor : SV_Target0
)
{
FGBufferData GBuffer = (FGBufferData)0;
float SceneDepth = 0;
{
float2 ScreenUV = InScreenPosition.xy / InScreenPosition.w * View.ScreenPositionScaleBias.xy + View.ScreenPositionScaleBias.wz;
float4 GBufferA = 0;
float4 GBufferB = 0;
float4 GBufferC = 0;
float4 GBufferD = 0;
FetchGBuffer(ScreenUV, GBufferA, GBufferB, GBufferC, GBufferD, SceneDepth);
GBuffer = DecodeGBufferMobile(GBufferA, GBufferB, GBufferC, GBufferD);
}
// With a perspective projection, the clip space position is NDC * Clip.w
// With an orthographic projection, clip space is the same as NDC
float2 ClipPosition = InScreenPosition.xy / InScreenPosition.w * (View.ViewToClip[3][3] < 1.0f ? SceneDepth : 1.0f);
float3 WorldPosition = mul(float4(ClipPosition, SceneDepth, 1), View.ScreenToWorld).xyz;
half3 CameraVector = normalize(View.WorldCameraOrigin - WorldPosition);
half NoV = max(0, dot(GBuffer.WorldNormal, CameraVector));
// 組裝光源資料結構體.
FMobileLightData LightData = (FMobileLightData)0;
{
LightData.Position = DeferredLightUniforms.Position;
LightData.InvRadius = DeferredLightUniforms.InvRadius;
LightData.Color = DeferredLightUniforms.Color;
LightData.FalloffExponent = DeferredLightUniforms.FalloffExponent;
LightData.Direction = DeferredLightUniforms.Direction;
LightData.SpotAngles = DeferredLightUniforms.SpotAngles;
LightData.SpecularScale = 1.0;
LightData.bInverseSquared = INVERSE_SQUARED_FALLOFF;
LightData.bSpotLight = IS_SPOT_LIGHT;
}
FMobileShadingModelContext ShadingModelContext = (FMobileShadingModelContext)0;
{
half DielectricSpecular = 0.08 * GBuffer.Specular;
ShadingModelContext.DiffuseColor = GBuffer.BaseColor - GBuffer.BaseColor * GBuffer.Metallic; // 1 mad
ShadingModelContext.SpecularColor = (DielectricSpecular - DielectricSpecular * GBuffer.Metallic) + GBuffer.BaseColor * GBuffer.Metallic; // 2 mad
// 計算環境BRDF.
ShadingModelContext.SpecularColor = GetEnvBRDF(ShadingModelContext.SpecularColor, GBuffer.Roughness, NoV);
}
// 計算直接光.
half3 Color = GetDirectLighting(LightData, ShadingModelContext, GBuffer, WorldPosition, CameraVector);
// IES, 光照函式.
half LightAttenuation = ComputeLightProfileMultiplier(WorldPosition, DeferredLightUniforms.Position, -DeferredLightUniforms.Direction, DeferredLightUniforms.Tangent);
LightAttenuation*= ComputeLightFunctionMultiplier(WorldPosition);
#if USE_PREEXPOSURE
// MobileHDR applies PreExposure in tonemapper
LightAttenuation*= View.PreExposure;
#endif
OutColor.rgb = Color * LightAttenuation;
OutColor.a = 1;
}
以上可知,平行光和區域性光源的PS是不同的入口,主要是因為兩者的區別較大,平行光直接在主入口計算光照,附帶計算了反射(IBL、捕捉器)、天空光漫反射;而區域性光源會構建一個光源結構體,進入直接光計算函式,最後處理區域性光源特有的IES和光照函式。
另外,獲取GBuffer時採用了Subpass特有的讀取模式:在TBDR架構的移動GPU上,這可以讓延遲光照直接讀取仍駐留在片上Tile Memory中的GBuffer資料,避免先寫回系統記憶體再取樣,從而顯著節省頻寬。不同著色平臺的寫法有所不同:
// Vulkan
[[vk::input_attachment_index(1)]]
SubpassInput<float4> GENERATED_SubpassFetchAttachment0;
#define VulkanSubpassFetch0() GENERATED_SubpassFetchAttachment0.SubpassLoad()
// Metal
Texture2D<float4> gl_LastFragDataRGBA_1;
#define SubpassFetchRGBA_1() gl_LastFragDataRGBA_1.Load(uint3(0, 0, 0), 0)
// DX / OpenGL
Texture2DSampleLevel(GBufferATexture, GBufferATextureSampler, UV, 0);
團隊招員
博主所在的團隊正在用UE4開發一種全新的沉浸式體驗產品,急需各路豪士一同加入,共謀巨集圖大業。目前急招以下職位:
- UE邏輯開發。
- UE引擎程式。
- UE圖形渲染。
- TA(技術向、美術向)。
要求:對技術有熱情,紮實的技術基礎,良好的溝通和合作能力,有UE使用經驗或移動端開發經驗者更佳。
有意向或想了解更多的請新增博主微信:81079389(註明部落格園求職),或者發簡歷到博主郵箱:81079389#qq.com(#換成@)。
靜待各路英雄豪傑相會。
特別說明
- Part 1結束,Part 2的內容有:
- 移動端渲染技術
- 移動端優化技巧
- 感謝所有參考文獻的作者,部分圖片來自參考文獻和網路,侵刪。
- 本系列文章為筆者原創,只發表在部落格園上,歡迎分享本文連結,但未經同意,不允許轉載!
- 系列文章,未完待續,完整目錄請戳內容綱目。
參考文獻
- Unreal Engine Source
- Rendering and Graphics
- Materials
- Graphics Programming
- Mobile Rendering
- Qualcomm® Adreno™ GPU
- PowerVR Developer Documentation
- Arm Mali GPU Best Practices Developer Guide
- Arm Mali GPU Graphics and Gaming Development
- Moving Mobile Graphics
- GDC Vault
- Siggraph Conference Content
- GameDev Best Practices
- Accelerating Mobile XR
- Frequently Asked Questions
- Google Developer Contributes Universal Bandwidth Compression To Freedreno Driver
- Using pipeline barriers efficiently
- Optimized pixel-projected reflections for planar reflectors
- UE4畫面表現移動端較PC端差異及最小化差異的分享
- Deferred Shading in Unity URP
- 移動遊戲效能優化通用技法
- 深入GPU硬體架構及執行機制
- Adaptive Performance in Call of Duty Mobile
- Jet Set Vulkan : Reflecting on the move to Vulkan
- Vulkan Best Practices - Memory limits with Vulkan on Mali GPUs
- A Year in a Fortnite
- The Challenges of Porting Traha to Vulkan
- L2M - Binding and Format Optimization
- Adreno Best Practices
- 移動裝置GPU架構知識彙總
- Mali GPU Architectures
- Cyclic Redundancy Check
- Arm Guide for Unreal Engine
- Arm Virtual Reality
- Best Practices for VR on Unreal Engine
- Optimizing Assets for Mobile VR
- Arm® Guide for Unreal Engine 4 Optimizing Mobile Gaming Graphics
- Adaptive Scalable Texture Compression
- Tile-Based Rendering
- Understanding Render Passes
- Intro to Moving Mobile Graphics
- Mobile Graphics 101
- Vulkan API
- Best Practices for Shaders