Why CVPixelBufferLockBaseAddress? Capturing still images with AVFoundation
I'm writing an iPhone app that creates still images from the camera using AVFoundation.
Reading the programming guide, I found code that does almost exactly what I need, so I'm trying to "reverse engineer" it and understand it.
I'm having some difficulty understanding the part that converts a CMSampleBuffer into an image.
Here is what I understood, followed by the code.
A CMSampleBuffer represents a buffer in memory where the image is stored along with additional data. Later I call the function CMSampleBufferGetImageBuffer() to get back a CVImageBuffer containing just the image data.
Now there is a function that I don't understand and whose purpose I can only guess at: CVPixelBufferLockBaseAddress(imageBuffer, 0);. I can't tell whether it is a "thread lock" to prevent multiple concurrent operations on the buffer, or a lock on the buffer's address to prevent it from changing during the operation (and why would it change? Another frame? Isn't the data copied to another location?). The rest of the code is clear to me.
I tried searching on Google but still didn't find anything helpful.
Can someone shed some light on this?
-(UIImage *)getUIImageFromBuffer:(CMSampleBufferRef)sampleBuffer {
    // Get the CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
Thanks, Andrea
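(Side note on the snippet above: the CGBitmapContextCreate flags, little-endian byte order with premultiplied alpha first, only match a 32-bit BGRA pixel buffer. That format isn't guaranteed by default; it is typically requested on the capture output. A minimal sketch, assuming an AVCaptureVideoDataOutput-style capture pipeline:)

```objective-c
#import <AVFoundation/AVFoundation.h>

// Sketch, not part of the original question: ask the capture output for
// 32-bit BGRA frames so the bitmap context's byte order assumptions hold.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
```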
The header file says that CVPixelBufferLockBaseAddress makes the memory "accessible". I'm not sure what that means exactly, but if you don't do it, CVPixelBufferGetBaseAddress fails, so you'd better do it.
EDIT
Just do it is the short answer. As for why: consider that the image may not live in main memory; it may live in a texture on some GPU somewhere (Core Video works on the Mac too), or it may even be in a different format from the one you expect, so the pixels you get are actually a copy. Without a Lock/Unlock or some kind of Begin/End pair, the implementation has no way to know when you've finished with the duplicated pixels, so they would effectively be leaked. CVPixelBufferLockBaseAddress simply gives Core Video scope information; I wouldn't get too hung up on it.
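The resulting pattern can be sketched as follows (the kCVPixelBufferLock_ReadOnly flag and the return-value check are additions not in the question's snippet; the flag tells Core Video you won't modify the pixels, so it can skip writing anything back on unlock):

```objective-c
#import <CoreVideo/CoreVideo.h>

// Sketch: only touch the pixel data between the lock/unlock pair.
static void readPixels(CVPixelBufferRef pixelBuffer) {
    // Lock read-only; bail out if the base address can't be made accessible.
    if (CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly)
            != kCVReturnSuccess) {
        return;
    }

    void *baseAddress  = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t height      = CVPixelBufferGetHeight(pixelBuffer);

    // ... read from baseAddress here; it is only guaranteed valid
    // while the buffer is locked ...
    (void)baseAddress; (void)bytesPerRow; (void)height;

    // Unlock with the same flags used to lock.
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}
```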
Yes, they could have simply returned the pixels from CVPixelBufferGetBaseAddress and eliminated CVPixelBufferLockBaseAddress altogether. I don't know why they didn't do that.