DXVA2_VideoProcessorCaps structure (dxva2api.h)
Describes the capabilities of a DirectX Video Acceleration (DXVA) video processor mode.
Syntax
typedef struct _DXVA2_VideoProcessorCaps {
  UINT    DeviceCaps;
  D3DPOOL InputPool;
  UINT    NumForwardRefSamples;
  UINT    NumBackwardRefSamples;
  UINT    Reserved;
  UINT    DeinterlaceTechnology;
  UINT    ProcAmpControlCaps;
  UINT    VideoProcessorOperations;
  UINT    NoiseFilterTechnology;
  UINT    DetailFilterTechnology;
} DXVA2_VideoProcessorCaps;
Members
DeviceCaps
Identifies the type of device. The following values are defined.

Value | Meaning |
---|---|
DXVA2_VPDev_EmulatedDXVA1 | The device emulates a DXVA 1.0 device. |
DXVA2_VPDev_HardwareDevice | The device performs hardware-accelerated video processing. |
DXVA2_VPDev_SoftwareDevice | The device performs video processing in software. |
InputPool
The Direct3D memory pool used by the device.
NumForwardRefSamples
Number of forward reference samples the device needs to perform deinterlacing. For the bob, progressive scan, and software devices, the value is zero.
NumBackwardRefSamples
Number of backward reference samples the device needs to perform deinterlacing. For the bob, progressive scan, and software devices, the value is zero.
Reserved
Reserved. Must be zero.
DeinterlaceTechnology
Identifies the deinterlacing technique used by the device. This value is a bitwise OR of one or more of the DXVA2_DeinterlaceTech_* flags defined in dxva2api.h.
ProcAmpControlCaps
Specifies the available video processor (ProcAmp) operations. The value is a bitwise OR of ProcAmp Settings constants.
VideoProcessorOperations
Specifies operations that the device can perform concurrently with the IDirectXVideoProcessor::VideoProcessBlt operation. The value is a bitwise OR of the following flags.
Value | Meaning |
---|---|
DXVA2_VideoProcess_YUV2RGB | The device can convert the video from YUV color space to RGB color space, with at least 8 bits of precision for each RGB component. |
DXVA2_VideoProcess_StretchX | The device can stretch or shrink the video horizontally. If this capability is present, aspect ratio correction can be performed at the same time as deinterlacing. |
DXVA2_VideoProcess_StretchY | The device can stretch or shrink the video vertically. If this capability is present, image resizing and aspect ratio correction can be performed at the same time. |
DXVA2_VideoProcess_AlphaBlend | The device can alpha blend the video. |
DXVA2_VideoProcess_SubRects | The device can operate on a subrectangle of the video frame. If this capability is present, source images can be cropped before further processing occurs. |
DXVA2_VideoProcess_SubStreams | The device can accept substreams in addition to the primary video stream, and can composite them. |
DXVA2_VideoProcess_SubStreamsExtended | The device can perform color adjustments on the primary video stream and substreams, at the same time that it deinterlaces the video and composites the substreams. The destination color space is defined in the DestFormat member of the DXVA2_VideoProcessBltParams structure. The source color space for each stream is defined in the SampleFormat member of the DXVA2_VideoSample structure. |
DXVA2_VideoProcess_YUV2RGBExtended | The device can convert the video from YUV to RGB color space when it writes the deinterlaced and composited pixels to the destination surface. An RGB destination surface could be an off-screen surface, texture, Direct3D render target, or combined texture/render target surface. An RGB destination surface must use at least 8 bits for each color channel. |
DXVA2_VideoProcess_AlphaBlendExtended | The device can perform an alpha blend operation with the destination surface when it writes the deinterlaced and composited pixels to the destination surface. |
DXVA2_VideoProcess_Constriction | The device can downsample the output frame, as specified by the ConstrictionSize member of the DXVA2_VideoProcessBltParams structure. |
DXVA2_VideoProcess_NoiseFilter | The device can perform noise filtering. |
DXVA2_VideoProcess_DetailFilter | The device can perform detail filtering. |
DXVA2_VideoProcess_PlanarAlpha | The device can perform a constant alpha blend to the entire video stream when it composites the video stream and substreams. |
DXVA2_VideoProcess_LinearScaling | The device can perform accurate linear RGB scaling, rather than performing it in nonlinear gamma space. |
DXVA2_VideoProcess_GammaCompensated | The device can correct the image to compensate for artifacts introduced when performing scaling in nonlinear gamma space. |
DXVA2_VideoProcess_MaintainsOriginalFieldData | The deinterlacing algorithm preserves the original field lines from the interlaced field picture, unless scaling is also applied. For example, in deinterlacing algorithms such as bob and median filtering, the device copies the original field into every other scan line and then applies a filter to reconstruct the missing scan lines. As a result, the original field can be recovered by discarding the scan lines that were interpolated. If the image is scaled vertically, however, the original field lines cannot be recovered. If the image is scaled horizontally (but not vertically), the resulting field lines will be equivalent to scaling the original field picture. (In other words, discarding the interpolated scan lines will yield the same result as stretching the original picture without deinterlacing.) |
NoiseFilterTechnology
Specifies the supported noise filters. The value is a bitwise OR of the DXVA2_NoiseFilterTech_* flags defined in dxva2api.h.
DetailFilterTechnology
Specifies the supported detail filters. The value is a bitwise OR of the DXVA2_DetailFilterTech_* flags defined in dxva2api.h.
Requirements
Requirement | Value |
---|---|
Minimum supported client | Windows Vista [desktop apps only] |
Minimum supported server | Windows Server 2008 [desktop apps only] |
Header | dxva2api.h |
See also
IDirectXVideoProcessor::GetVideoProcessorCaps
IDirectXVideoProcessorService::GetVideoProcessorCaps