Saving depth images from the TrueDepth camera
I am trying to save depth images from the iPhoneX TrueDepth camera. Using the AVCamPhotoFilter sample code, I am able to view the depth, converted to grayscale format, on the screen of the phone in real-time. I cannot figure out how to save the sequence of depth images in the raw (16 bits or more) format.
I have depthData, which is an instance of AVDepthData. One of its members is depthDataMap, which is an instance of CVPixelBuffer with image format type kCVPixelFormatType_DisparityFloat16. Is there a way to save it on the phone so it can be transferred for offline manipulation?
There's no standard video format for "raw" depth/disparity maps, which might have something to do with AVCapture not really offering a way to record it.
You have a couple of options worth investigating here:
- Convert depth maps to grayscale textures (which you can do using the code in the AVCamPhotoFilter sample code), then pass those textures to AVAssetWriter to produce a grayscale video. Depending on the video format and grayscale conversion method you choose, other software you write for reading the video might be able to recover depth/disparity info from the grayscale frames with sufficient precision for your purposes.
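A rough sketch of that approach, assuming you already have grayscale pixel buffers from the AVCamPhotoFilter conversion. The function name, output URL, dimensions, and codec choice below are placeholders, not part of the sample code; a lossless or high-bitrate codec would preserve more depth precision than the H.264 shown here.

```swift
import AVFoundation

// Hypothetical sketch: set up an AVAssetWriter that accepts grayscale
// pixel buffers and writes them out as a video file.
func makeDepthVideoWriter(outputURL: URL, width: Int, height: Int) throws
    -> (writer: AVAssetWriter, input: AVAssetWriterInput, adaptor: AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,  // lossy; precision trade-off
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ])
    input.expectsMediaDataInRealTime = true
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: nil)
    writer.add(input)
    return (writer, input, adaptor)
}

// Per frame, after writer.startWriting() and
// writer.startSession(atSourceTime:) have been called:
//     if input.isReadyForMoreMediaData {
//         adaptor.append(grayscalePixelBuffer, withPresentationTime: timestamp)
//     }
```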
- Anytime you have a CVPixelBuffer, you can get at the data yourself and do whatever you want with it. Use CVPixelBufferLockBaseAddress (with the readOnly flag) to make sure the content won't change while you read it, then copy data from the pointer CVPixelBufferGetBaseAddress provides to wherever you want. (Use the other pixel buffer functions to see how many bytes to copy, and unlock the buffer when you're done.)
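A minimal sketch of that copy, assuming a single-plane buffer like the disparity map. The function name is illustrative; the row-by-row copy matters because bytesPerRow can include padding beyond width × bytes-per-pixel.

```swift
import CoreVideo
import Foundation

// Illustrative sketch: copy a single-plane CVPixelBuffer's pixel bytes
// into a Data value, skipping any per-row padding.
func copyPixelData(from buffer: CVPixelBuffer) -> Data {
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return Data() }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    // kCVPixelFormatType_DisparityFloat16 is 2 bytes per pixel.
    let payloadBytesPerRow = CVPixelBufferGetWidth(buffer) * 2

    var data = Data(capacity: payloadBytesPerRow * height)
    for row in 0..<height {
        data.append(Data(bytes: base + row * bytesPerRow,
                         count: payloadBytesPerRow))
    }
    return data
}
```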
Watch out, though: if you spend too much time copying from buffers, or otherwise retain them, they won't get deallocated as new buffers come in from the capture system, and your capture session will hang. (All told, it's unclear without testing whether a device has the memory & I/O bandwidth for much recording this way.)
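One way to stay inside that constraint, sketched under assumed names (the queue label, callback signature, and output path are not from the sample code): copy the bytes while the buffer is locked, unlock immediately so the capture system can reuse it, and push the file I/O onto a separate queue.

```swift
import AVFoundation

// Illustrative pattern: minimize the time the capture buffer is held.
let fileQueue = DispatchQueue(label: "depth.writer", qos: .utility)

func didOutput(depthData: AVDepthData, frameIndex: Int) {
    let buffer = depthData.depthDataMap
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    // Copy while locked, then unlock right away.
    let copied: Data
    if let base = CVPixelBufferGetBaseAddress(buffer) {
        copied = Data(bytes: base, count: CVPixelBufferGetDataSize(buffer))
    } else {
        copied = Data()
    }
    CVPixelBufferUnlockBaseAddress(buffer, .readOnly)

    fileQueue.async {
        // Hypothetical destination; the original buffer is no longer retained.
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("depth_\(frameIndex).bin")
        try? copied.write(to: url)
    }
}
```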