Android Tricks: Learning NDK from Google's Official Samples (4)

Posted by 我是綠色大米呀 on 2018-01-08

If you've had your fill of NDK tutorials that do nothing but print "helloWorld" from C++, take a look at this series, which teaches the NDK through real, working examples.

If you're interested, check out the first three installments:

Android Tricks: Learning NDK from Google's Official Samples (1): mainly covers using multiple threads in the NDK, plus basic mutual calls between Java and C++.

Android Tricks: Learning NDK from Google's Official Samples (2): shows how to write an activity in C++ without any Java code.

Android Tricks: Learning NDK from Google's Official Samples (3): an OpenGL example.

The fourth example demonstrates video decoding, with some OpenGL on top.

The code is here.

Its main function is to play a video stored in the assets folder. Playback can be paused and resumed.

Figure 1
The example plays the video on two SurfaceViews: Android's native SurfaceView, and MyGLSurfaceView, implemented in C++. MyGLSurfaceView adds a 3D rotation effect (the lower view in the screenshot). This article focuses on the playback pipeline itself.

The most valuable part of this example is a message queue implemented in C++, i.e. a Looper. I got a lot out of reading it.

The project structure:

Figure 2

Start with the Activity, in NativeCodec.java:

public void onCreate(Bundle icicle) {
        super.onCreate(icicle);
        setContentView(R.layout.main);

        mGLView1 = (MyGLSurfaceView) findViewById(R.id.glsurfaceview1);

        // set up the Surface 1 video sink
        mSurfaceView1 = (SurfaceView) findViewById(R.id.surfaceview1);
        mSurfaceHolder1 = mSurfaceView1.getHolder();

        mSurfaceHolder1.addCallback(new SurfaceHolder.Callback() {

            @Override
            public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
                Log.v(TAG, "surfaceChanged format=" + format + ", width=" + width + ", height="
                        + height);
            }

            @Override
            public void surfaceCreated(SurfaceHolder holder) {
                Log.v(TAG, "surfaceCreated");
                if (mRadio1.isChecked()) {
                    setSurface(holder.getSurface());
                }
            }

            @Override
            public void surfaceDestroyed(SurfaceHolder holder) {
                Log.v(TAG, "surfaceDestroyed");
            }

        });

        // initialize content source spinner
        Spinner sourceSpinner = (Spinner) findViewById(R.id.source_spinner);
        ArrayAdapter<CharSequence> sourceAdapter = ArrayAdapter.createFromResource(
                this, R.array.source_array, android.R.layout.simple_spinner_item);
        sourceAdapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
        sourceSpinner.setAdapter(sourceAdapter);
        sourceSpinner.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {

            @Override
            public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) {
                mSourceString = parent.getItemAtPosition(pos).toString();
                Log.v(TAG, "onItemSelected " + mSourceString);
            }

            @Override
            public void onNothingSelected(AdapterView parent) {
                Log.v(TAG, "onNothingSelected");
                mSourceString = null;
            }

        });

        mRadio1 = (RadioButton) findViewById(R.id.radio1);
        mRadio2 = (RadioButton) findViewById(R.id.radio2);

        OnCheckedChangeListener checklistener = new CompoundButton.OnCheckedChangeListener() {

          @Override
          public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
              ···
              if (isChecked) {
                  if (mRadio1.isChecked()) {
                      if (mSurfaceHolder1VideoSink == null) {
                          mSurfaceHolder1VideoSink = new SurfaceHolderVideoSink(mSurfaceHolder1);
                      }
                      mSelectedVideoSink = mSurfaceHolder1VideoSink;
                      mGLView1.onPause();
                      Log.i("@@@@", "glview pause");
                  } else {
                      mGLView1.onResume();
                      if (mGLView1VideoSink == null) {
                          mGLView1VideoSink = new GLViewVideoSink(mGLView1);
                      }
                      mSelectedVideoSink = mGLView1VideoSink;
                  }
                  switchSurface();
              }
          }
        };
        ···
        // native MediaPlayer start/pause
        ((Button) findViewById(R.id.start_native)).setOnClickListener(new View.OnClickListener() {

            @Override
            public void onClick(View view) {
                if (!mCreated) {
                    if (mNativeCodecPlayerVideoSink == null) {
                        if (mSelectedVideoSink == null) {
                            return;
                        }
                        mSelectedVideoSink.useAsSinkForNative();
                        mNativeCodecPlayerVideoSink = mSelectedVideoSink;
                    }
                    if (mSourceString != null) {
                        mCreated = createStreamingMediaPlayer(getResources().getAssets(),
                                mSourceString);
                    }
                }
                if (mCreated) {
                    mIsPlaying = !mIsPlaying;
                    setPlayingStreamingMediaPlayer(mIsPlaying);
                }
            }

        });


        // native MediaPlayer rewind
        ((Button) findViewById(R.id.rewind_native)).setOnClickListener(new View.OnClickListener() {

            @Override
            public void onClick(View view) {
                if (mNativeCodecPlayerVideoSink != null) {
                    rewindStreamingMediaPlayer();
                }
            }

        });
    }

These are mostly event handlers for the controls: pausing/resuming playback, loading the file, switching the target SurfaceView, rewinding, and so on.

Pause/resume playback: setPlayingStreamingMediaPlayer(mIsPlaying);

Load the file: mCreated = createStreamingMediaPlayer(getResources().getAssets(), mSourceString);

Switch the target SurfaceView: the current sink is kept in mNativeCodecPlayerVideoSink; useAsSinkForNative() calls setSurface(s) to hand the surface down to the C++ code.

Rewind: rewindStreamingMediaPlayer()

Shut down: shutdown()

So there are five JNI methods in total:

    public static native boolean createStreamingMediaPlayer(AssetManager assetMgr, String filename);
    public static native void setPlayingStreamingMediaPlayer(boolean isPlaying);
    public static native void shutdown();
    public static native void setSurface(Surface surface);
    public static native void rewindStreamingMediaPlayer();
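These Java declarations resolve to the C++ functions below via JNI's default name mangling: `Java_`, then the package with dots turned into underscores, then the class and the method. A quick check of that convention (the helper is illustrative only; the full rule in the JNI spec also escapes underscores and non-ASCII characters, which these names don't contain):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <cstring>

// Build the symbol name JNI looks up for a native method (simplified sketch).
static void jniSymbol(char *out, size_t n,
                      const char *pkg, const char *cls, const char *method) {
    snprintf(out, n, "Java_%s_%s_%s", pkg, cls, method);
    for (char *p = out; *p; ++p) {
        if (*p == '.') *p = '_';  // dots in the package name become underscores
    }
}
```

For example, setSurface in com.example.nativecodec.NativeCodec maps to Java_com_example_nativecodec_NativeCodec_setSurface, the function we will meet in native-codec-jni.cpp.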

All five are implemented in native-codec-jni.cpp. Start with the file-loading method:

typedef struct {
    int fd;
    ANativeWindow* window;
    AMediaExtractor* ex;
    AMediaCodec *codec;
    int64_t renderstart;
    bool sawInputEOS;
    bool sawOutputEOS;
    bool isPlaying;
    bool renderonce;
} workerdata;

workerdata data = {-1, NULL, NULL, NULL, 0, false, false, false, false};

jboolean Java_com_example_nativecodec_NativeCodec_createStreamingMediaPlayer(JNIEnv* env,
        jclass clazz, jobject assetMgr, jstring filename)
{
    LOGV("@@@ create");

    // convert Java string to UTF-8
    const char *utf8 = env->GetStringUTFChars(filename, NULL);
    LOGV("opening %s", utf8);

    off_t outStart, outLen;
    int fd = AAsset_openFileDescriptor(AAssetManager_open(AAssetManager_fromJava(env, assetMgr), utf8, 0),
                                       &outStart, &outLen);//open the asset

    if (fd < 0) {
        LOGE("failed to open file: %s %d (%s)", utf8, fd, strerror(errno));
        env->ReleaseStringUTFChars(filename, utf8);
        return JNI_FALSE;
    }
    env->ReleaseStringUTFChars(filename, utf8);//release only after utf8 is no longer used

    data.fd = fd;

    workerdata *d = &data;

    AMediaExtractor *ex = AMediaExtractor_new();//finds the tracks in the media file and feeds their samples into MediaCodec's input buffers
    media_status_t err = AMediaExtractor_setDataSourceFd(ex, d->fd,
                                                         static_cast<off64_t>(outStart),
                                                         static_cast<off64_t>(outLen));
    close(d->fd);
    if (err != AMEDIA_OK) {
        LOGV("setDataSource error: %d", err);
        return JNI_FALSE;
    }

    int numtracks = AMediaExtractor_getTrackCount(ex);//number of tracks

    AMediaCodec *codec = NULL;//handles the actual encoding/decoding

    //log:input has 2 tracks
    LOGV("input has %d tracks", numtracks);
    for (int i = 0; i < numtracks; i++) {
        AMediaFormat *format = AMediaExtractor_getTrackFormat(ex, i);
        const char *s = AMediaFormat_toString(format);
        //track 0 format: mime: string(video/avc), durationUs: int64(10000000), width: int32(480), height: int32(360), max-input-size: int32(55147), csd-0: data, csd-1: data}
        //track 1 format: mime: string(audio/mp4a-latm), durationUs: int64(9914920), channel-count: int32(2), sample-rate: int32(44100), aac-profile: int32(2), bit-width: int32(16), pcm-type: int32(1), max-input-size: int32(694), csd-0: data}
        LOGV("track %d format: %s", i, s);
        const char *mime;
        if (!AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime)) {//read the mime field from the format
            LOGV("no mime type");
            return JNI_FALSE;
        } else if (!strncmp(mime, "video/", 6)) {//find the video track; strncmp returns 0 when the first 6 chars match, so the ! makes this true for "video/..."
            // Omitting most error handling for clarity.
            // Production code should check for errors.
            AMediaExtractor_selectTrack(ex, i);//select this track
            codec = AMediaCodec_createDecoderByType(mime);//create a decoder for this mime type
            //configure the decoder, binding it to d->window
            AMediaCodec_configure(codec, format, d->window, NULL, 0);
            d->ex = ex;
            d->codec = codec;
            d->renderstart = -1;
            d->sawInputEOS = false;
            d->sawOutputEOS = false;
            d->isPlaying = false;
            d->renderonce = true;
            //start the decoder
            AMediaCodec_start(codec);
        }
        AMediaFormat_delete(format);
    }

    mlooper = new mylooper();
    mlooper->post(kMsgCodecBuffer, d);

    return JNI_TRUE;
}

This method finds the video track, configures the decoder, and binds d->window to it. d->window is set earlier by the setSurface method.
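A quick aside on the strncmp test used for track selection above: strncmp compares the first six characters and returns 0 on a match, so the `!` makes the branch fire exactly for "video/..." mime strings. A standalone sketch:

```cpp
#include <cassert>
#include <cstring>

// The same prefix test as in createStreamingMediaPlayer().
static bool isVideoMime(const char *mime) {
    return strncmp(mime, "video/", 6) == 0;  // 0 means the first 6 chars match
}
```

With the two mime strings from the log output above, isVideoMime("video/avc") is true and isVideoMime("audio/mp4a-latm") is false, which is why only track 0 is selected.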

// set the surface
void Java_com_example_nativecodec_NativeCodec_setSurface(JNIEnv *env, jclass clazz, jobject surface)
{
    // obtain a native window from a Java surface
    if (data.window) {
        ANativeWindow_release(data.window);
        data.window = NULL;
    }
    data.window = ANativeWindow_fromSurface(env, surface);
    LOGV("@@@ setsurface %p", data.window);
}

workerdata *d = &data holds the state for the current playback: d->ex = ex is the extractor for the video track, d->codec = codec the decoder, and d->renderstart = -1 the render start time.

At the end of the method, mlooper = new mylooper(); mlooper->post(kMsgCodecBuffer, d); creates a looper and posts an event. Everything in this example, whether pausing, resuming, shutting down, or rendering each individual frame, goes through mlooper->post(msg). So let's see how this looper is implemented.

The code is in looper.cpp; I've annotated the key parts:

struct loopermessage;
typedef struct loopermessage loopermessage;

struct loopermessage {
    int what;
    void *obj;
    loopermessage *next;
    bool quit;
};



void* looper::trampoline(void* p) {
    ((looper*)p)->loop();
    return NULL;
}

looper::looper() {
    sem_init(&headdataavailable, 0, 0);//2nd arg 0: shared between threads of this process only (not across processes); 3rd arg is the initial value
    sem_init(&headwriteprotect, 0, 1);
    pthread_attr_t attr;
    pthread_attr_init(&attr);//default thread attributes
    //start a worker thread running trampoline with this as the argument; in effect it just calls loop()
    pthread_create(&worker, &attr, trampoline, this);
    running = true;
}


looper::~looper() {
    if (running) {
        LOGV("Looper deleted while still running. Some messages will not be processed");
        quit();
    }
}

//enqueue a message
void looper::post(int what, void *data, bool flush) {
    //assemble the message
    loopermessage *msg = new loopermessage();
    msg->what = what;
    msg->obj = data;
    msg->next = NULL;
    msg->quit = false;
    addmsg(msg, flush);
}

void looper::addmsg(loopermessage *msg, bool flush) {
    sem_wait(&headwriteprotect);//take the lock protecting the list head
    loopermessage *h = head;

    if (flush) {
        //flush: empty the queue first
        while(h) {
            loopermessage *next = h->next;
            delete h;
            h = next;
        }
        h = NULL;
    }
    if (h) {
        //the queue is non-empty:
        //walk to the tail
        while (h->next) {
            h = h->next;
        }
        //append the message at the tail
        h->next = msg;
    } else {
        //the queue is empty: the message becomes the head
        head = msg;
    }
    LOGV("post msg %d", msg->what);
    sem_post(&headwriteprotect);
    sem_post(&headdataavailable);
}

void looper::loop() {
    while(true) {
        // wait for available message
        //block this thread until headdataavailable > 0, then decrement it
        sem_wait(&headdataavailable);

        // get next available message
        sem_wait(&headwriteprotect);
        loopermessage *msg = head;//a message is available; take the first one
        if (msg == NULL) {
            LOGV("no msg");
            sem_post(&headwriteprotect);//+1: writers may touch the queue again
            continue;
        }
        head = msg->next;//advance the head pointer to the next message
        sem_post(&headwriteprotect);//+1: writers may touch the queue again

        if (msg->quit) {//a quit message
            LOGV("quitting");
            delete msg;
            return;
        }
        LOGV("processing msg %d", msg->what);
        handle(msg->what, msg->obj);//dispatch the message
        delete msg;
    }
}

void looper::quit() {
    LOGV("quit");
    loopermessage *msg = new loopermessage();
    msg->what = 0;
    msg->obj = NULL;
    msg->next = NULL;
    msg->quit = true;//post a quit message
    addmsg(msg, false);
    void *retval;
    pthread_join(worker, &retval);
    sem_destroy(&headdataavailable); //release the semaphores
    sem_destroy(&headwriteprotect);
    running = false;
}

void looper::handle(int what, void* obj) {
    LOGV("dropping msg %d %p", what, obj);
}

The flow: a dedicated thread watches the message queue. Messages are enqueued via looper::addmsg(loopermessage *msg, bool flush) and consumed in looper::loop(), whose while loop keeps watching the queue; whenever it takes a message, it dispatches it to looper::handle(int what, void* obj). On exit, looper::quit() stops the consuming thread.
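The semaphore pairing above can be sketched with standard C++ threading primitives instead (std::mutex and std::condition_variable standing in for the two sem_t objects; every name in this sketch is mine, not from the sample):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Simplified analogue of looper.cpp: a worker thread blocks until a message
// arrives (the role of sem_wait(&headdataavailable)) while a mutex guards the
// queue (the role of headwriteprotect). kQuit plays the role of msg->quit.
class MiniLooper {
public:
    static const int kQuit = -1;

    MiniLooper() : worker_([this] { loop(); }) {}
    ~MiniLooper() { quit(); }

    void post(int what) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(what); }
        cv_.notify_one();  // like sem_post(&headdataavailable)
    }

    void quit() {  // like looper::quit(): post the quit message, then join
        if (worker_.joinable()) {
            post(kQuit);
            worker_.join();
        }
    }

    std::vector<int> handled;  // what handle() saw; safe to read after quit()

private:
    void loop() {  // like looper::loop()
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !q_.empty(); });  // like sem_wait
            int what = q_.front();
            q_.pop();
            lk.unlock();
            if (what == kQuit) return;  // the quit message ends the worker thread
            handled.push_back(what);    // stand-in for looper::handle()
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<int> q_;
    std::thread worker_;
};
```

Posting 7 then 8 then calling quit() leaves handled == {7, 8}: messages are processed in FIFO order on the worker thread, exactly like the linked list in looper.cpp.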

Back in native-codec-jni.cpp, the message handler is overridden:

class mylooper: public looper {
    virtual void handle(int what, void* obj);
};

static mylooper *mlooper = NULL;
void mylooper::handle(int what, void* obj) {
    switch (what) {
        case kMsgCodecBuffer:
            doCodecWork((workerdata*)obj);
            break;

        case kMsgDecodeDone:
        {
            workerdata *d = (workerdata*)obj;
            AMediaCodec_stop(d->codec);
            AMediaCodec_delete(d->codec);
            AMediaExtractor_delete(d->ex);
            d->sawInputEOS = true;
            d->sawOutputEOS = true;
        }
        break;

        case kMsgSeek:
        {
            workerdata *d = (workerdata*)obj;
            AMediaExtractor_seekTo(d->ex, 0, AMEDIAEXTRACTOR_SEEK_NEXT_SYNC);
            AMediaCodec_flush(d->codec);
            d->renderstart = -1;
            d->sawInputEOS = false;
            d->sawOutputEOS = false;
            if (!d->isPlaying) {
                d->renderonce = true;
                post(kMsgCodecBuffer, d);
            }
            LOGV("seeked");
        }
        break;

        case kMsgPause:
        {
            workerdata *d = (workerdata*)obj;
            if (d->isPlaying) {
                // flush all outstanding codecbuffer messages with a no-op message
                d->isPlaying = false;
                post(kMsgPauseAck, NULL, true);//flush the queue
            }
        }
        break;

        case kMsgResume:
        {
            workerdata *d = (workerdata*)obj;
            if (!d->isPlaying) {
                d->renderstart = -1;
                d->isPlaying = true;
                post(kMsgCodecBuffer, d);
            }
        }
        break;
    }
}

After the file is loaded, a kMsgCodecBuffer message is posted, and its handler calls doCodecWork((workerdata*)obj). Here is doCodecWork:

//https://www.cnblogs.com/jiy-for-you/p/7282033.html
//https://www.cnblogs.com/Xiegg/p/3428529.html
void doCodecWork(workerdata *d) {

    ssize_t bufidx = -1;
    if (!d->sawInputEOS) {
        //dequeue an input buffer; the 2000 timeout is in microseconds, not milliseconds
        bufidx = AMediaCodec_dequeueInputBuffer(d->codec, 2000);
        LOGV("input buffer %zd", bufidx);
        if (bufidx >= 0) {
            size_t bufsize;
            //get a pointer into that input buffer
            auto buf = AMediaCodec_getInputBuffer(d->codec, bufidx, &bufsize);
            //read the next sample
            auto sampleSize = AMediaExtractor_readSampleData(d->ex, buf, bufsize);//d->ex already has the video track selected
            if (sampleSize < 0) {
                //reached the end of the input
                sampleSize = 0;
                d->sawInputEOS = true;
                LOGV("EOS");
            }
            //the presentation time of the current sample, in microseconds
            auto presentationTimeUs = AMediaExtractor_getSampleTime(d->ex);//d->ex already has the video track selected

            //hand the buffer to the decoder
            AMediaCodec_queueInputBuffer(d->codec, bufidx, 0, sampleSize, presentationTimeUs,
                    d->sawInputEOS ? AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM : 0);
            //advance to the next sample
            AMediaExtractor_advance(d->ex);
        }
    }

    if (!d->sawOutputEOS) {
        AMediaCodecBufferInfo info;
        //step 1: dequeue an output buffer
        auto status = AMediaCodec_dequeueOutputBuffer(d->codec, &info, 0);
        if (status >= 0) {
            if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
                LOGV("output EOS");
                d->sawOutputEOS = true;
            }
            int64_t presentationNano = info.presentationTimeUs * 1000;
            if (d->renderstart < 0) {
                d->renderstart = systemnanotime() - presentationNano;
            }
            int64_t delay = (d->renderstart + presentationNano) - systemnanotime();
            if (delay > 0) {
                //if the frame's presentation time is still ahead of the playback clock, sleep until it's due
                usleep(delay / 1000);
            }
            //step 2: render; when info.size != 0 evaluates true, the frame is rendered to the surface
            AMediaCodec_releaseOutputBuffer(d->codec, status, info.size != 0);
            if (d->renderonce) {
                d->renderonce = false;
                return;
            }
        } else if (status == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED) {
            LOGV("output buffers changed");
        } else if (status == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED) {
            auto format = AMediaCodec_getOutputFormat(d->codec);
            LOGV("format changed to: %s", AMediaFormat_toString(format));
            AMediaFormat_delete(format);
        } else if (status == AMEDIACODEC_INFO_TRY_AGAIN_LATER) {
            //no output frame available yet (the dequeue timed out)
            LOGV("no output buffer right now");
        } else {
            LOGV("unexpected info code: %zd", status);
        }
    }

    if (!d->sawInputEOS || !d->sawOutputEOS) {
        mlooper->post(kMsgCodecBuffer, d);//if either input or output hasn't hit EOS, re-post ourselves
    }
}

This is the decoder's input/output loop; if you spot any mistakes in my annotations, please point them out.
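The pacing in the output half deserves a closer look: on the first frame, renderstart is anchored so that frame's delay is zero; every later frame then waits until its presentation time relative to that anchor. A standalone model of the arithmetic (times in nanoseconds; the sample values below are made up for illustration):

```cpp
#include <cassert>
#include <cstdint>

// The same math as in doCodecWork: anchor the render clock on the first frame,
// then compute how long to wait before showing each subsequent frame.
int64_t computeDelay(int64_t &renderstart, int64_t presentationNano, int64_t nowNano) {
    if (renderstart < 0) {
        renderstart = nowNano - presentationNano;  // first frame anchors the clock
    }
    return (renderstart + presentationNano) - nowNano;  // >0 means "too early, wait"
}
```

A frame whose delay comes out negative is late and is rendered immediately; only positive delays lead to the usleep(delay / 1000) in the sample.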

As long as playback isn't finished, it keeps calling mlooper->post(kMsgCodecBuffer, d) to re-post itself.

When the user taps pause, a kMsgPause message is posted and the message queue is flushed, so doCodecWork((workerdata*)obj) is no longer re-posted and playback stops. The playback position survives in workerdata *d = &data (in d->ex and d->codec), so resuming simply posts post(kMsgCodecBuffer, d) again.
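The pause trick hinges on post(..., true) flushing the queue: every pending kMsgCodecBuffer is dropped, so the decode loop never gets to re-post itself. A minimal model of addmsg's flush branch (a std::deque stands in for the hand-rolled linked list):

```cpp
#include <cassert>
#include <deque>

// Model of looper::addmsg(): flush=true empties the queue before enqueueing,
// which is how kMsgPauseAck cancels the outstanding kMsgCodecBuffer messages.
static void post(std::deque<int> &queue, int what, bool flush) {
    if (flush) {
        queue.clear();  // addmsg() walks the list and deletes every node
    }
    queue.push_back(what);
}
```

After two ordinary posts followed by one flushing post, only the flushing message remains in the queue.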

Tapping the replay button posts mlooper->post(kMsgSeek, &data):

    case kMsgSeek:
        {
            workerdata *d = (workerdata*)obj;
            AMediaExtractor_seekTo(d->ex, 0, AMEDIAEXTRACTOR_SEEK_NEXT_SYNC);
            AMediaCodec_flush(d->codec);
            d->renderstart = -1;
            d->sawInputEOS = false;
            d->sawOutputEOS = false;
            if (!d->isPlaying) {
                d->renderonce = true;
                post(kMsgCodecBuffer, d);
            }
            LOGV("seeked");
        }

This resets the position with AMediaExtractor_seekTo(d->ex, 0, AMEDIAEXTRACTOR_SEEK_NEXT_SYNC); and AMediaCodec_flush(d->codec);, then posts post(kMsgCodecBuffer, d) to start playback.

Pausing and shutting down likewise just post messages:

// set the playing state for the streaming media player
void Java_com_example_nativecodec_NativeCodec_setPlayingStreamingMediaPlayer(JNIEnv* env,
        jclass clazz, jboolean isPlaying)
{
    LOGV("@@@ playpause: %d", isPlaying);
    if (mlooper) {
        if (isPlaying) {
            mlooper->post(kMsgResume, &data);
        } else {
            mlooper->post(kMsgPause, &data);
        }
    }
}


// shut down the native media system
void Java_com_example_nativecodec_NativeCodec_shutdown(JNIEnv* env, jclass clazz)
{
    LOGV("@@@ shutdown");
    if (mlooper) {
        mlooper->post(kMsgDecodeDone, &data, true /* flush */);
        mlooper->quit();
        delete mlooper;
        mlooper = NULL;
    }
    if (data.window) {
        ANativeWindow_release(data.window);
        data.window = NULL;
    }
}

All the real work happens in mylooper::handle(int what, void* obj).

That wraps up the video-playback part.
