# 1: The RenderPipelineManager Class

- The RenderPipelineManager class exposes four events: beginFrameRendering, beginCameraRendering, endCameraRendering, and endFrameRendering.
  - You can subscribe to these events to execute code at specific points in the render pipeline.
  - Note that if you write your own custom SRP, you must make sure the SRP raises these events at the appropriate times.

- beginCameraRendering lets you run custom code before Unity renders each individual camera.
- When Unity calls RenderPipeline.BeginCameraRendering, it executes the methods in this delegate's invocation list.
- In the Universal Render Pipeline (URP) and the High Definition Render Pipeline (HDRP), Unity calls RenderPipeline.BeginCameraRendering automatically. If you are writing a custom Scriptable Render Pipeline and want to use this delegate, you must add a call to RenderPipeline.BeginCameraRendering yourself.
- The following code example demonstrates how to add a method to this delegate's invocation list, and then remove it again.
```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class ExampleClass : MonoBehaviour
{
    void Start()
    {
        RenderPipelineManager.beginCameraRendering += OnBeginCameraRendering;
    }

    void OnBeginCameraRendering(ScriptableRenderContext context, Camera camera)
    {
        // Put the code you want to execute before the camera renders here.
        // If you are using URP or HDRP, Unity calls this method automatically.
        // If you are writing a custom SRP, you must call RenderPipeline.BeginCameraRendering yourself.
    }

    void OnDestroy()
    {
        RenderPipelineManager.beginCameraRendering -= OnBeginCameraRendering;
    }
}
```
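The frame-level events follow the same subscribe/unsubscribe pattern. A minimal sketch (the component name is illustrative) that brackets a whole frame:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class FrameEventLogger : MonoBehaviour
{
    void OnEnable()
    {
        RenderPipelineManager.beginFrameRendering += OnBeginFrame;
        RenderPipelineManager.endFrameRendering += OnEndFrame;
    }

    void OnDisable()
    {
        RenderPipelineManager.beginFrameRendering -= OnBeginFrame;
        RenderPipelineManager.endFrameRendering -= OnEndFrame;
    }

    // Called once per frame before any camera renders (SRP only).
    void OnBeginFrame(ScriptableRenderContext context, Camera[] cameras)
    {
        Debug.Log($"Frame begins: rendering {cameras.Length} camera(s)");
    }

    // Called once per frame after all cameras have rendered.
    void OnEndFrame(ScriptableRenderContext context, Camera[] cameras)
    {
        Debug.Log("Frame ends");
    }
}
```

In a custom SRP these callbacks only fire if your pipeline's Render method raises them via RenderPipeline.BeginFrameRendering and RenderPipeline.EndFrameRendering.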

# 2: UniversalAdditionalCameraData
The Universal Additional Camera Data component is a component that the Universal Render Pipeline (URP) uses for internal data storage. It allows URP to extend and override the functionality and appearance of Unity's standard Camera component.

In URP, a GameObject that has a Camera component must also have a Universal Additional Camera Data component. If your project uses URP, Unity automatically adds this component when you create a Camera GameObject, and you cannot remove it from the Camera GameObject.

If you don't use scripts to control and customize URP, you don't need to do anything with the Universal Additional Camera Data component.

If you do use scripts to control and customize URP, you can access a camera's Universal Additional Camera Data component in a script like this:
```csharp
var cameraData = camera.GetUniversalAdditionalCameraData();
// Alternative used in this project:
var camData = cam.GetComponent<UniversalAdditionalCameraData>();
```
For more information, see the UniversalAdditionalCameraData API documentation. If you need to access the Universal Additional Camera Data component frequently in a script, you should cache a reference to it to avoid unnecessary CPU work.
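As a sketch of that caching advice (the component name and key binding are just for illustration):

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class CameraDataCache : MonoBehaviour
{
    private UniversalAdditionalCameraData cameraData;

    void Awake()
    {
        // Cache the reference once instead of calling GetComponent every frame.
        cameraData = GetComponent<Camera>().GetUniversalAdditionalCameraData();
    }

    void Update()
    {
        // Example: toggle post-processing at runtime via the cached reference.
        if (Input.GetKeyDown(KeyCode.P))
            cameraData.renderPostProcessing = !cameraData.renderPostProcessing;
    }
}
```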

# 3: The RenderTexture.RenderTexture Constructor

Description:

Creates a new RenderTexture object.

The render texture is created with the given width by height size, with a depth buffer of depth bits (depth can be 0, 16, 24, or 32), in format format, and with sRGB read/write on or off.

Note that constructing a RenderTexture object does not immediately create the hardware representation. The actual render texture is created on first use, or when Create is called manually. So after constructing the render texture, it is still possible to set additional variables such as format, dimensions, and so on.

```csharp
using UnityEngine;

public class Example : MonoBehaviour
{
    public RenderTexture rt;

    void Start()
    {
        rt = new RenderTexture(256, 256, 16, RenderTextureFormat.ARGB32);
    }
}
```
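Building on the note above that the hardware texture is created lazily, a small sketch (my own illustration, not from the Unity docs) showing properties being changed after construction and before an explicit Create() call:

```csharp
using UnityEngine;

public class DeferredCreateExample : MonoBehaviour
{
    void Start()
    {
        var rt = new RenderTexture(256, 256, 16);
        // No GPU resource exists yet, so these can still be changed freely.
        rt.format = RenderTextureFormat.ARGBHalf;
        rt.useMipMap = true;
        rt.Create();                // the hardware texture is allocated here
        Debug.Log(rt.IsCreated());  // true once the GPU resource exists
    }
}
```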

Declaration:

```csharp
public RenderTexture(int width, int height, int depth, RenderTextureFormat format = RenderTextureFormat.Default, RenderTextureReadWrite readWrite = RenderTextureReadWrite.Default);
```

| Parameter | Description |
| --- | --- |
| width | Texture width in pixels. |
| height | Texture height in pixels. |
| depth | Number of bits in the depth buffer (0, 16, 24, or 32). Note that only 24- and 32-bit depth buffers have stencil support. |
| format | Texture color format. |
| colorFormat | The color format for the RenderTexture. |
| depthStencilFormat | The depth stencil format for the RenderTexture. |
| mipCount | Number of mip levels to allocate for the RenderTexture. |
| readWrite | How, or whether, color space conversions should be done on texture read/write. |
| desc | Create the RenderTexture with the settings in the RenderTextureDescriptor. |
| textureToCopy | Copy the settings from another RenderTexture. |

# 4: RenderTextureDescriptor

Description:

This struct contains all the information required to create a RenderTexture. It can be copied, cached, and reused to easily create RenderTextures that all share the same properties. Avoid using the default constructor, as it does not initialize some flags to the recommended values.

A self-written example:

```csharp
// Create an RT descriptor based on the camera target
RenderTextureDescriptor SSdesc = renderingData.cameraData.cameraTargetDescriptor;
SSdesc.depthBufferBits = 0;
// CommandBufferPool is a pool of command buffers; the buffer must be released at the end
CommandBuffer cmd = CommandBufferPool.Get(name);
cmd.GetTemporaryRT(GrabTexID, SSdesc);
```
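Since the descriptor is a plain struct, one copy can be tweaked and reused for several render textures. A standalone sketch (names are illustrative, not from the project code above):

```csharp
using UnityEngine;

public class DescriptorReuseExample : MonoBehaviour
{
    void Start()
    {
        // Build one descriptor and reuse it with small tweaks.
        var desc = new RenderTextureDescriptor(512, 512, RenderTextureFormat.ARGB32, 16);
        desc.msaaSamples = 1;

        var colorRT = new RenderTexture(desc);    // persistent RT from the descriptor

        desc.depthBufferBits = 0;                 // same settings, but no depth buffer
        var grabRT = RenderTexture.GetTemporary(desc);

        RenderTexture.ReleaseTemporary(grabRT);   // temporaries must be released
        colorRT.Release();
    }
}
```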

Properties:

autoGenerateMips Mipmap levels are generated automatically when this flag is set.
bindMS If true and msaaSamples is greater than 1, the render texture will not be resolved by default. Use this if the render texture needs to be bound as a multisampled texture in a shader.
colorFormat The format of the RenderTarget is expressed as a RenderTextureFormat. Internally, this format is stored as a GraphicsFormat compatible with the current system (see SystemInfo.GetCompatibleFormat). Therefore, if you set a format and immediately get it again, it may return a different result from the one just set.
depthBufferBits The precision of the render texture's depth buffer in bits (0, 16, 24 and 32 are supported).
depthStencilFormat The desired format of the depth/stencil buffer.
dimension Dimensionality (type) of the render texture. See Also: RenderTexture.dimension.
enableRandomWrite Enable random access write into this render texture on Shader Model 5.0 level shaders. See Also: RenderTexture.enableRandomWrite.
flags A set of RenderTextureCreationFlags that control how the texture is created.
graphicsFormat The color format for the RenderTexture. You can set this format to None to achieve depth-only rendering.
height The height of the render texture in pixels.
memoryless The render texture memoryless mode property.
mipCount User-defined mipmap count.
msaaSamples The multisample antialiasing level for the RenderTexture. See Also: RenderTexture.antiAliasing.
shadowSamplingMode Determines how the RenderTexture is sampled if it is used as a shadow map. See Also: ShadowSamplingMode for more details.
sRGB This flag causes the render texture to use sRGB read/write conversions.
stencilFormat The format of the stencil data that you can encapsulate within a RenderTexture. Specifying this property creates a stencil element for the RenderTexture and sets its format. This allows stencil data to be bound as a Texture to all shader types on the platforms that support it. This property does not specify the format of the stencil buffer, which is constrained by the depth buffer format specified in RenderTexture.depth. Currently, most platforms only support R8_UInt (DirectX11, DirectX12), while PS4 also supports R8_UNorm.
useDynamicScale Set to true to enable dynamic resolution scaling on this render texture. See Also: RenderTexture.useDynamicScale.
useMipMap Render texture has mipmaps when this flag is set. See Also: RenderTexture.useMipMap.
volumeDepth Volume extent of a 3D render texture.
vrUsage If this RenderTexture is a VR eye texture used in stereoscopic rendering, this property decides what special rendering occurs, if any. Instead of setting this manually, use the value returned by eyeTextureDesc or other VR functions returning a RenderTextureDescriptor.
width The width of the render texture in pixels.

# 5: Using Command Buffers

Preface:

- Command Buffers come up all the time in various post-processing and shader effects, but I've never understood them particularly well (or I keep forgetting), so I'm taking notes here to review the topic.

1: What is a Command Buffer?

- Command Buffer is an extremely, extremely powerful feature added in Unity 5. First, the official introduction and documentation. When rendering, what gets handed to OpenGL or DirectX is a series of commands, such as glDrawElements, glClear, and so on. These are currently issued by the engine, but Unity also wraps them in a higher-level API for us, the CommandBuffer, which lets us implement effects more conveniently and flexibly. The main capability of CommandBuffer is that it lets us pre-define a series of rendering commands and then execute those commands at whatever point we choose. This article gives a brief introduction to using CommandBuffer: first a simple camera-monitor effect, then a re-implementation of two effects done before with Command Buffers: depth of field and outlining.

- Official documentation and links:

https://docs.unity3d.com/ScriptReference/Rendering.CommandBuffer.html

2: Basic usage of CommandBuffer

- Let's first look at the simplest example: drawing a character directly onto an RT, rather like a surveillance-camera effect. We use the current camera to see the objects it would normally see, and then render the character once more onto a screen surface (simply put, a... uh, quad). It could also be rendered directly onto UI.

```csharp
// Command Buffer test
// by: puppet_master
// 2017.5.26

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public class CommandBufferTest : MonoBehaviour
{
    private CommandBuffer commandBuffer = null;
    private RenderTexture renderTexture = null;
    private Renderer targetRenderer = null;
    public GameObject targetObject = null;
    public Material replaceMaterial = null;

    void OnEnable()
    {
        targetRenderer = targetObject.GetComponentInChildren<Renderer>();
        // Request a temporary RT
        renderTexture = RenderTexture.GetTemporary(512, 512, 16, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Default, 4);
        commandBuffer = new CommandBuffer();
        // Set the Command Buffer's render target to the RT we requested
        commandBuffer.SetRenderTarget(renderTexture);
        // Clear the initial color to gray
        commandBuffer.ClearRenderTarget(true, true, Color.gray);
        // Draw the target object; if no replacement material is given, use its own material
        Material mat = replaceMaterial == null ? targetRenderer.sharedMaterial : replaceMaterial;
        commandBuffer.DrawRenderer(targetRenderer, mat);
        // Then let the receiving object's material use this RT as its main texture
        this.GetComponent<Renderer>().sharedMaterial.mainTexture = renderTexture;
        // Add the buffer directly to the camera's CommandBuffer event queue
        Camera.main.AddCommandBuffer(CameraEvent.AfterForwardOpaque, commandBuffer);
    }

    void OnDisable()
    {
        // Remove the event and clean up resources
        Camera.main.RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, commandBuffer);
        commandBuffer.Clear();
        // RTs obtained via GetTemporary should be returned with ReleaseTemporary
        RenderTexture.ReleaseTemporary(renderTexture);
    }

    // You can also execute the Command Buffer directly via Graphics in OnPreRender,
    // but OnPreRender and OnPostRender only work on scripts attached to the camera!!!
    //void OnPreRender()
    //{
    //    // Execute the Command Buffer before the actual rendering
    //    Graphics.ExecuteCommandBuffer(commandBuffer);
    //}
}
```

- Then we can attach this script to an object, drag in the target we want to render, and that's it. Testing it: (screenshot omitted)
- When rendering a target, a Command Buffer lets us use a custom material. We can swap in a different one: if we build a camera and it has no beautify filter, it's bound to get bad reviews. So we give the rendered object a custom material, such as a rim-light effect, and simply assign the tuned rim-light material to Replace Material:
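Command buffers are not limited to DrawRenderer; a full-screen pass can be built the same way. A minimal sketch of my own (not part of the original tutorial), assuming effectMaterial holds some full-screen effect shader:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class BlitCommandBufferExample : MonoBehaviour
{
    public Material effectMaterial;   // assumed: any full-screen effect shader
    private CommandBuffer cmd;

    void OnEnable()
    {
        cmd = new CommandBuffer { name = "Fullscreen Effect" };
        int tempID = Shader.PropertyToID("_TempEffectRT");
        cmd.GetTemporaryRT(tempID, -1, -1);                       // -1 = camera pixel size
        cmd.Blit(BuiltinRenderTextureType.CameraTarget, tempID);  // copy the screen
        cmd.Blit(tempID, BuiltinRenderTextureType.CameraTarget, effectMaterial);
        cmd.ReleaseTemporaryRT(tempID);
        Camera.main.AddCommandBuffer(CameraEvent.BeforeImageEffects, cmd);
    }

    void OnDisable()
    {
        Camera.main.RemoveCommandBuffer(CameraEvent.BeforeImageEffects, cmd);
        cmd.Release();
    }
}
```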

# 6: Camera

Description:

- A Camera is the device through which the player views the world.

- Screen-space points are defined in pixels. The bottom-left of the screen is (0,0); the top-right is (pixelWidth, pixelHeight). The z position is in world units from the camera.
- Viewport space is normalized and relative to the camera. The bottom-left of the camera view is (0,0); the top-right is (1,1). The z position is in world units from the camera.
- World-space points are defined in global coordinates (for example, Transform.position).
- Note that classes must not inherit directly from Camera. If you need to inherit from Camera, see SparthableCamera.
- (Static variables belong to the class, not to instances. What exactly is the difference between a class and an object?)

Static Properties:

| Property | Description |
| --- | --- |
| allCameras | Returns all enabled cameras in the Scene. |
| allCamerasCount | The number of cameras in the current Scene. |
| current | The camera we are currently rendering with, for low-level render control only (Read Only). |
| main | The first enabled Camera component that is tagged "MainCamera" (Read Only). |
| onPostRender | Delegate that you can use to execute custom code after a Camera renders the scene. |
| onPreCull | Delegate that you can use to execute custom code before a Camera culls the scene. |
| onPreRender | Delegate that you can use to execute custom code before a Camera renders the scene. |
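As a sketch of the onPreRender delegate above (built-in render pipeline only; under an SRP, use the RenderPipelineManager events from section 1 instead):

```csharp
using UnityEngine;

public class CameraDelegateExample : MonoBehaviour
{
    void OnEnable()
    {
        Camera.onPreRender += MyPreRender;
    }

    void OnDisable()
    {
        Camera.onPreRender -= MyPreRender;
    }

    // Runs before every enabled camera renders the scene.
    void MyPreRender(Camera cam)
    {
        Debug.Log($"About to render with {cam.name}");
    }
}
```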

Properties:

activeTexture Gets the temporary RenderTexture target for this Camera.
actualRenderingPath The rendering path that is currently being used (Read Only).
allowDynamicResolution Dynamic Resolution Scaling.
allowHDR High dynamic range rendering.
allowMSAA MSAA rendering.
areVRStereoViewMatricesWithinSingleCullTolerance Determines whether the stereo view matrices are suitable to allow for a single pass cull.
aspect The aspect ratio (width divided by height).
backgroundColor The color with which the screen will be cleared.
cameraToWorldMatrix Matrix that transforms from camera space to world space (Read Only).
cameraType Identifies what kind of camera this is, using the CameraType enum.
clearFlags How the camera clears the background.
clearStencilAfterLightingPass Should the camera clear the stencil buffer after the deferred light pass?
commandBufferCount Number of command buffers set up on this camera (Read Only).
cullingMask This is used to render parts of the Scene selectively.
cullingMatrix Sets a custom matrix for the camera to use for all culling queries.
depth Camera’s depth in the camera rendering order.
depthTextureMode How and if camera generates a depth texture.
eventMask Mask to select which layers can trigger events on the camera.
farClipPlane The distance of the far clipping plane from the Camera, in world units.
fieldOfView The vertical field of view of the Camera, in degrees.
focalLength The camera focal length, expressed in millimeters. To use this property, enable UsePhysicalProperties.
forceIntoRenderTexture Should camera rendering be forced into a RenderTexture.
gateFit There are two gates for a camera, the sensor gate and the resolution gate. The physical camera sensor gate is defined by the sensorSize property, the resolution gate is defined by the render target area.
layerCullDistances Per-layer culling distances.
layerCullSpherical How to perform per-layer culling for a Camera.
lensShift The lens offset of the camera. The lens shift is relative to the sensor size. For example, a lens shift of 0.5 offsets the sensor by half its horizontal size.
nearClipPlane The distance of the near clipping plane from the Camera, in world units.
nonJitteredProjectionMatrix Get or set the raw projection matrix with no camera offset (no jittering).
opaqueSortMode Opaque object sorting mode.
orthographic Is the camera orthographic (true) or perspective (false)?
orthographicSize Camera’s half-size when in orthographic mode.
overrideSceneCullingMask Sets the culling mask used to determine which objects from which Scenes to draw. See EditorSceneManager.SetSceneCullingMask.
pixelHeight How tall is the camera in pixels (not accounting for dynamic resolution scaling) (Read Only).
pixelRect Where on the screen is the camera rendered in pixel coordinates.
pixelWidth How wide is the camera in pixels (not accounting for dynamic resolution scaling) (Read Only).
previousViewProjectionMatrix Get the view projection matrix used on the last frame.
projectionMatrix Set a custom projection matrix.
rect Where on the screen is the camera rendered in normalized coordinates.
renderingPath The rendering path that should be used, if possible.
scaledPixelHeight How tall is the camera in pixels (accounting for dynamic resolution scaling) (Read Only).
scaledPixelWidth How wide is the camera in pixels (accounting for dynamic resolution scaling) (Read Only).
scene If not null, the camera will only render the contents of the specified Scene.
sensorSize The size of the camera sensor, expressed in millimeters.
stereoActiveEye Returns the eye that is currently rendering. If called when stereo is not enabled it will return Camera.MonoOrStereoscopicEye.Mono. If called during a camera rendering callback such as OnRenderImage it will return the currently rendering eye. If called outside of a rendering callback and stereo is enabled, it will return the default eye which is Camera.MonoOrStereoscopicEye.Left.
stereoConvergence Distance to a point where virtual eyes converge.
stereoEnabled Stereoscopic rendering.
stereoSeparation The distance between the virtual eyes. Use this to query or set the current eye separation. Note that most VR devices provide this value, in which case setting the value will have no effect.
stereoTargetEye Defines which eye of a VR display the Camera renders into.
targetDisplay Set the target display for this Camera.
targetTexture Destination render texture.
transparencySortAxis An axis that describes the direction along which the distances of objects are measured for the purpose of sorting.
transparencySortMode Transparent object sorting mode.
useJitteredProjectionMatrixForTransparentRendering Should the jittered matrix be used for transparency rendering?
useOcclusionCulling Whether or not the Camera will use occlusion culling during rendering.
usePhysicalProperties Enable [UsePhysicalProperties] to use physical camera properties to compute the field of view and the frustum.
velocity Get the world-space speed of the camera (Read Only).
worldToCameraMatrix Matrix that transforms from world to camera space.

Public Methods:

AddCommandBuffer Add a command buffer to be executed at a specified place.
AddCommandBufferAsync Adds a command buffer to the GPU’s async compute queues and executes that command buffer when graphics processing reaches a given point.
CalculateFrustumCorners Given viewport coordinates, calculates the view space vectors pointing to the four frustum corners at the specified camera depth.
CalculateObliqueMatrix Calculates and returns oblique near-plane projection matrix.
CopyFrom Makes this camera’s settings match other camera.
CopyStereoDeviceProjectionMatrixToNonJittered Sets the non-jittered projection matrix, sourced from the VR SDK.
GetCommandBuffers Get command buffers to be executed at a specified place.
GetGateFittedFieldOfView Retrieves the effective vertical field of view of the camera, including GateFit. Fitting the sensor gate and the resolution gate has an impact on the final field of view. If the sensor gate aspect ratio is the same as the resolution gate aspect ratio or if the camera is not in physical mode, then this method returns the same value as the fieldOfView property.
GetGateFittedLensShift Retrieves the effective lens offset of the camera, including GateFit. Fitting the sensor gate and the resolution gate has an impact on the final obliqueness of the projection. If the sensor gate aspect ratio is the same as the resolution gate aspect ratio, then this method returns the same value as the lensShift property. If the camera is not in physical mode, then this method returns Vector2.zero.
GetStereoNonJitteredProjectionMatrix Gets the non-jittered projection matrix of a specific left or right stereoscopic eye.
GetStereoProjectionMatrix Gets the projection matrix of a specific left or right stereoscopic eye.
GetStereoViewMatrix Gets the left or right view matrix of a specific stereoscopic eye.
RemoveAllCommandBuffers Remove all command buffers set on this camera.
RemoveCommandBuffer Remove command buffer from execution at a specified place.
RemoveCommandBuffers Remove command buffers from execution at a specified place.
Render Render the camera manually.
RenderToCubemap Render into a static cubemap from this camera.
RenderWithShader Render the camera with shader replacement.
Reset Revert all camera parameters to default.
ResetAspect Revert the aspect ratio to the screen’s aspect ratio.
ResetCullingMatrix Make culling queries reflect the camera’s built in parameters.
ResetProjectionMatrix Make the projection reflect normal camera’s parameters.
ResetReplacementShader Remove shader replacement from camera.
ResetStereoProjectionMatrices Reset the camera to using the Unity computed projection matrices for all stereoscopic eyes.
ResetStereoViewMatrices Reset the camera to using the Unity computed view matrices for all stereoscopic eyes.
ResetTransparencySortSettings Resets this Camera’s transparency sort settings to the default. Default transparency settings are taken from GraphicsSettings instead of directly from this Camera.
ResetWorldToCameraMatrix Make the rendering position reflect the camera’s position in the Scene.
ScreenPointToRay Returns a ray going from camera through a screen point.
ScreenToViewportPoint Transforms position from screen space into viewport space.
ScreenToWorldPoint Transforms a point from screen space into world space, where world space is defined as the coordinate system at the very top of your game’s hierarchy.
SetReplacementShader Make the camera render with shader replacement.
SetStereoProjectionMatrix Sets a custom projection matrix for a specific stereoscopic eye.
SetStereoViewMatrix Sets a custom view matrix for a specific stereoscopic eye.
SetTargetBuffers Sets the Camera to render to the chosen buffers of one or more RenderTextures.
SubmitRenderRequests Submit a number of Camera.RenderRequests.
TryGetCullingParameters Get culling parameters for a camera.
ViewportPointToRay Returns a ray going from camera through a viewport point.
ViewportToScreenPoint Transforms position from viewport space into screen space.
ViewportToWorldPoint Transforms position from viewport space into world space.
WorldToScreenPoint Transforms position from world space into screen space.
WorldToViewportPoint Transforms position from world space into viewport space.
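A quick sketch of the coordinate-transform methods above, using ScreenPointToRay for mouse picking and WorldToScreenPoint for the reverse direction:

```csharp
using UnityEngine;

public class ScreenRayExample : MonoBehaviour
{
    void Update()
    {
        Camera cam = Camera.main;

        // Screen point -> ray into the world (e.g. for mouse picking).
        Ray ray = cam.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit))
            Debug.Log($"Hovering over {hit.collider.name}");

        // World point -> screen pixels, e.g. to place a UI label over an object.
        Vector3 screenPos = cam.WorldToScreenPoint(transform.position);
        // screenPos.z < 0 means the point is behind the camera.
    }
}
```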

Static Methods:

CalculateProjectionMatrixFromPhysicalProperties Calculates the projection matrix from focal length, sensor size, lens shift, near plane distance, far plane distance, and Gate fit parameters. To calculate the projection matrix without taking Gate fit into account, use Camera.GateFitMode.None . See Also: GateFitParameters
FieldOfViewToFocalLength Converts field of view to focal length. Use either sensor height and vertical field of view or sensor width and horizontal field of view.
FocalLengthToFieldOfView Converts focal length to field of view.
GetAllCameras Fills an array of Camera with the current cameras in the Scene, without allocating a new array.
HorizontalToVerticalFieldOfView Converts the horizontal field of view (FOV) to the vertical FOV, based on the value of the aspect ratio parameter.
VerticalToHorizontalFieldOfView Converts the vertical field of view (FOV) to the horizontal FOV, based on the value of the aspect ratio parameter.
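The FOV conversion helpers can be sketched like this (my own illustration):

```csharp
using UnityEngine;

public class FovConversionExample : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;
        // Convert the camera's vertical FOV to the equivalent horizontal FOV
        // for its current aspect ratio.
        float hFov = Camera.VerticalToHorizontalFieldOfView(cam.fieldOfView, cam.aspect);
        Debug.Log($"Vertical {cam.fieldOfView} deg = horizontal {hFov} deg at aspect {cam.aspect}");
    }
}
```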

There are many more tables; see the full reference:
https://docs.unity3d.com/ScriptReference/Camera.html