android – rendering the camera to multiple surfaces – on and off screen
I want to render the camera output to a view and occasionally save a camera output frame to a file, with the constraint that the saved frame must be at the camera's configured resolution, while the view is smaller than the camera output (preserving the aspect ratio).
Based on the ContinuousCaptureActivity example in grafika, I figured the best approach would be to send the camera to a SurfaceTexture, normally render the output scaled down into a SurfaceView, and, when needed, render the full frame into a separate Surface that has no view, retrieving a byte buffer from it in parallel with the regular SurfaceView rendering.
That example is very close to my situation – the preview is rendered into a smaller view, while the full-resolution video can be recorded and saved via the VideoEncoder.
I replaced the VideoEncoder logic with my own and got stuck trying to provide a Surface, the way the encoder does, for the full-resolution rendering. How do I create such a Surface? Am I on the right track?
Some code based on the example:
Inside the surfaceCreated(SurfaceHolder holder) method (line 350):
@Override   // SurfaceHolder.Callback
public void surfaceCreated(SurfaceHolder holder) {
    Log.d(TAG, "surfaceCreated holder=" + holder);

    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    mDisplaySurface = new WindowSurface(mEglCore, holder.getSurface(), false);
    mDisplaySurface.makeCurrent();

    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    mCameraTexture.setOnFrameAvailableListener(this);

    Log.d(TAG, "starting camera preview");
    try {
        mCamera.setPreviewTexture(mCameraTexture);
    } catch (IOException ioe) {
        throw new RuntimeException(ioe);
    }
    mCamera.startPreview();

    // *** MY EDIT START ***
    // Encoder creation no longer needed
    //try {
    //    mCircEncoder = new CircularEncoder(VIDEO_WIDTH, VIDEO_HEIGHT, 6000000,
    //            mCameraPreviewThousandFps / 1000, 7, mHandler);
    //} catch (IOException ioe) {
    //    throw new RuntimeException(ioe);
    //}
    mEncoderSurface = new WindowSurface(mEglCore, mCameraTexture); // <-- Crashes with EGL error 0x3003
    // *** MY EDIT END ***

    updateControls();
}
The drawFrame() method (line 420):
private void drawFrame() {
    //Log.d(TAG, "drawFrame");
    if (mEglCore == null) {
        Log.d(TAG, "Skipping drawFrame after shutdown");
        return;
    }

    // Latch the next frame from the camera.
    mDisplaySurface.makeCurrent();
    mCameraTexture.updateTexImage();
    mCameraTexture.getTransformMatrix(mTmpMatrix);

    // Fill the SurfaceView with it.
    SurfaceView sv = (SurfaceView) findViewById(R.id.continuousCapture_surfaceView);
    int viewWidth = sv.getWidth();
    int viewHeight = sv.getHeight();
    GLES20.glViewport(0, 0, viewWidth, viewHeight);
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    mDisplaySurface.swapBuffers();

    // *** MY EDIT START ***
    // Render the full-resolution frame and save it to a file when needed.
    if (someCondition) {
        mEncoderSurface.makeCurrent();
        GLES20.glViewport(0, 0, VIDEO_WIDTH, VIDEO_HEIGHT);
        mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
        mEncoderSurface.swapBuffers();
        try {
            mEncoderSurface.saveFrame(new File(getExternalFilesDir(null),
                    System.currentTimeMillis() + ".png"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    // *** MY EDIT END ***
}
Solution:
You're on the right track. The SurfaceTexture just wraps the raw YUV frame from the camera, so the "external" texture holds the original image, unmodified. You can't read pixels directly from an external texture, so you have to render it somewhere first.
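For context, the reason the external texture can't be read directly is that it must be sampled through the GL_OES_EGL_image_external extension rather than as a plain 2D texture. Grafika's Texture2dProgram (ProgramType.TEXTURE_EXT) uses a fragment shader along these lines (a sketch from memory, not the exact source):

```java
// Fragment shader for an external (camera) texture. The sampler type must
// be samplerExternalOES, not sampler2D, and the extension must be declared.
// glReadPixels() cannot read from a texture bound this way; the texture has
// to be drawn into a regular EGL surface or framebuffer first.
private static final String FRAGMENT_SHADER_EXT =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTextureCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
        "}\n";
```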
The easiest way to do that is to create an off-screen pbuffer surface. Grafika's gles/OffscreenSurface class does exactly this (by calling eglCreatePbufferSurface()). Make that EGLSurface current, render the texture with FullFrameRect, then read the framebuffer with glReadPixels() (see EglSurfaceBase#saveFrame() for the code). Don't call eglSwapBuffers().
Note that you're not creating an Android Surface for the output, just an EGLSurface. (They're different.)
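Putting the above together with the asker's code, the fix could look roughly like this. This is a sketch under the assumption that grafika's EglCore, OffscreenSurface, and FullFrameRect classes are available, and that VIDEO_WIDTH/VIDEO_HEIGHT, someCondition, and the m-prefixed fields are the ones from the question:

```java
// In surfaceCreated(), replace the WindowSurface that crashed with an
// off-screen pbuffer surface at the camera's full resolution:
mOffscreenSurface = new OffscreenSurface(mEglCore, VIDEO_WIDTH, VIDEO_HEIGHT);

// In drawFrame(), after the on-screen pass:
if (someCondition) {
    mOffscreenSurface.makeCurrent();
    GLES20.glViewport(0, 0, VIDEO_WIDTH, VIDEO_HEIGHT);
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    try {
        // saveFrame() (inherited from EglSurfaceBase) calls glReadPixels()
        // and writes a PNG. No swapBuffers(): pbuffers are single-buffered.
        mOffscreenSurface.saveFrame(new File(getExternalFilesDir(null),
                System.currentTimeMillis() + ".png"));
    } catch (IOException e) {
        e.printStackTrace();
    }
    mDisplaySurface.makeCurrent();  // restore the display surface's context
}
```

The OffscreenSurface only needs to be created once (in surfaceCreated()) and released alongside the other EGL objects in surfaceDestroyed().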
Tags: android, android-camera, surfaceview, glsurfaceview  Source: https://codeday.me/bug/20190824/1705669.html