A Source-Code Look at CPU-Related Logs

Posted by CheapTalks on 2019-03-04

Introduction

(This article was originally published on my blog, CheapTalks; feel free to drop by.)

On Android, the problem ordinary developers run into most often is the ANR (Application Not Responding), meaning the application's main thread has stopped responding. The root cause lies in a deliberate design of the Android framework: the UI-related work that users can immediately perceive is confined to one dedicated thread, the main thread. Once that thread fails to finish a given task within its deadline, such as a broadcast onReceive, an Activity transition, or bindApplication, the framework raises an ANR and shows a dialog letting the user choose between waiting longer or killing the unresponsive application.

The textbook explanation is that the developer performed a time-consuming operation on the main thread, so the task simply took too long. Such problems are usually easy to locate and fix: capture a bugreport and the root cause of the ANR reveals itself, and failing that we can still pin down the slow spot through the logs.

Today we look at another common situation, one usually caused by weak CPU hardware or an ill-tuned low-level CPU governing policy. These problems are maddening: code paths that by any normal reasoning should never be slow can still produce ANRs and jank, giving the user a terrible experience.

In this article I will first walk through an example in which the ANR log reflects the state of the system, and then look at the Android framework source to see how these logs are produced.

Example

Below is a log excerpt I captured. The system keeps printing timing logs for BIND_APPLICATION; they appear very frequently, roughly once every two seconds on average, and a single bindApplication call can sometimes take more than 10 seconds.

This is a serious situation. When the system dispatches a foreground broadcast, the task must complete within 10 seconds, otherwise an ANR is raised. If that foreground broadcast has to run in a process that has not been started yet, the system first starts the process and then calls bindApplication, which triggers Application.onCreate. During this sequence BIND_APPLICATION and RECEIVER are enqueued, in that order, onto the ActivityThread$H main-thread queue; if handling BIND_APPLICATION takes too long, the RECEIVER message is indirectly starved and the broadcast eventually ANRs. By the same mechanism, even input events may not be handled in time, producing jank the user can feel.
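To make the queuing effect concrete, here is a minimal sketch (plain android.os.Handler usage, not framework code; the class and method names are invented for illustration): one slow message on the main-thread Handler delays every message queued behind it, no matter how cheap those later messages are.

    import android.os.Handler;
    import android.os.Looper;
    import android.os.SystemClock;

    public class QueueStarvationSketch {
        // Two runnables posted to the same main-thread Handler. If the first one
        // (think BIND_APPLICATION) runs for 12s because the CPU is starved, the
        // second one (think RECEIVER) cannot even start until it finishes, so a
        // perfectly cheap broadcast still misses the 10s foreground timeout.
        public static void enqueueBoth() {
            Handler h = new Handler(Looper.getMainLooper());
            h.post(() -> SystemClock.sleep(12_000)); // stands in for a slow handleBindApplication()
            h.post(() -> { /* stands in for handleReceiver(); by now it is already too late */ });
        }
    }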

08-28 20:35:58.737  4635  4635 I tag_activity_manager: [0,com.android.providers.calendar,110,3120]
08-28 20:35:58.757  4653  4653 I tag_activity_manager: [0,com.xiaomi.metoknlp,110,3073]
08-28 20:35:58.863  4601  4601 I tag_activity_manager: [0,android.process.acore,110,3392]
08-28 20:36:00.320  5040  5040 I tag_activity_manager: [0,com.lbe.security.miui,110,3045]
08-28 20:36:00.911  4233  4233 I tag_activity_manager: [0,com.miui.securitycenter.remote,110,8653]
08-28 20:36:03.254  4808  4808 I tag_activity_manager: [0,com.android.phone,110,7059]
08-28 20:36:05.538  5246  5246 I tag_activity_manager: [0,com.xiaomi.market,110,3406]
08-28 20:36:09.006  5153  5153 I tag_activity_manager: [0,com.miui.klo.bugreport,110,10166]
08-28 20:36:09.070  5118  5118 I tag_activity_manager: [0,com.android.settings,110,10680]
08-28 20:36:11.259  5570  5570 I tag_activity_manager: [0,com.miui.core,110,4895]

ActivityManagerService makes a binder call into the application's ActivityThread, which then enqueues the task onto the main-thread message queue:

        // The binder call from AMS lands in this method
        public final void bindApplication(String processName, ApplicationInfo appInfo,
                List<ProviderInfo> providers, ComponentName instrumentationName,
                ProfilerInfo profilerInfo, Bundle instrumentationArgs,
                IInstrumentationWatcher instrumentationWatcher,
                IUiAutomationConnection instrumentationUiConnection, int debugMode,
                boolean enableOpenGlTrace, boolean isRestrictedBackupMode, boolean persistent,
                Configuration config, CompatibilityInfo compatInfo, Map<String, IBinder> services,
                Bundle coreSettings) {

...

            // Package the arguments into an AppBindData object
            AppBindData data = new AppBindData();
...
            sendMessage(H.BIND_APPLICATION, data);
        }
...

    private class H extends Handler {
...
        public static final int BIND_APPLICATION = 110;
...

        public void handleMessage(Message msg) {
            if (DEBUG_MESSAGES) Slog.v(TAG, ">>> handling: " + codeToString(msg.what));
            switch (msg.what) {
            ...
            case BIND_APPLICATION:
                // Executed on the main thread when the message is dequeued
                Trace.traceBegin(Trace.TRACE_TAG_ACTIVITY_MANAGER, "bindApplication");
                AppBindData data = (AppBindData)msg.obj;
                handleBindApplication(data);
                Trace.traceEnd(Trace.TRACE_TAG_ACTIVITY_MANAGER);
                break;
            ...
            }
...

In the example above, the system soon ANRed, as expected, and the problem was not caused by slow work inside the app but by excessive system-wide CPU load. The CPU load log is shown below:

// bindApplication has been blocking the main thread for more than 57s
Running message is { when=-57s68ms what=110 obj=AppBindData{appInfo=ApplicationInfo{be09873 com.android.settings}} target=android.app.ActivityThread$H planTime=1504652580856 dispatchTime=1504652580897 finishTime=0 }
Message 0: { when=-57s35ms what=140 arg1=5 target=android.app.ActivityThread$H planTime=1504652580890 dispatchTime=0 finishTime=0 }

// System state at the moment the ANR occurred
08-28 20:36:13.000 2692 2759 E ActivityManager: ANR in com.android.settings 
08-28 20:36:13.000 2692 2759 E ActivityManager: PID: 5118 
// From left to right: the CPU load averages over the last 1, 5 and 15 minutes; here anything above 11 is considered overloaded
08-28 20:36:13.000 2692 2759 E ActivityManager: Load: 20.12 / 13.05 / 6.96 
// CPU usage around the time of the ANR
08-28 20:36:13.000 2692 2759 E ActivityManager: CPU usage from 2967ms to -4440ms ago:  
// system_server is far too busy
08-28 20:36:13.000 2692 2759 E ActivityManager: 73% 2692/system_server: 57% user + 15% kernel / faults: 16217 minor 11 major 

08-28 20:36:13.000 2692 2759 E ActivityManager: 61% 4840/com.miui.home: 55% user + 5.4% kernel / faults: 26648 minor 17 major 
08-28 20:36:13.000 2692 2759 E ActivityManager: 19% 330/mediaserver: 17% user + 2.1% kernel / faults: 5180 minor 18 major 
08-28 20:36:13.000 2692 2759 E ActivityManager: 18% 4096/com.android.systemui: 14% user + 4% kernel / faults: 12965 minor 30 major 
...
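For scale: the Linux load average counts tasks that are runnable or stuck in uninterruptible (IO) sleep. If the device has, say, four CPU cores, a 1-minute load of 20.12 means roughly five runnable tasks per core on average, so every process, including system_server and the freshly forked app, is fighting for time slices.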

Of course, a CPU load log alone is not enough to prove that the system caused the ANR; you also need to rule out slow operations in the app itself, expensive binder calls, lock contention, and so on. Once those are excluded, you can conclude that CPU time was simply too scarce: the app never got enough time slices to finish bindApplication, so the task overran, and the broadcast or service sitting further back in the queue eventually ANRed.

Diving into the source

AMS.appNotResponding


    private static final String TAG = TAG_WITH_CLASS_NAME ? "ActivityManagerService" : TAG_AM;

    final void appNotResponding(ProcessRecord app, ActivityRecord activity,
                                ActivityRecord parent, boolean aboveSystem, final String annotation) {
...

        long anrTime = SystemClock.uptimeMillis();
        // If CPU monitoring is enabled, refresh the CPU usage stats first
        if (MONITOR_CPU_USAGE) {
            updateCpuStatsNow();
        }

...
        // Create a new ProcessCpuTracker; passing true means per-thread
        // CPU usage will be tracked and printed as well
        final ProcessCpuTracker processCpuTracker = new ProcessCpuTracker(true);

...

        String cpuInfo = null;
        if (MONITOR_CPU_USAGE) {
            updateCpuStatsNow();
            synchronized (mProcessCpuTracker) {
                cpuInfo = mProcessCpuTracker.printCurrentState(anrTime);
            }
            info.append(processCpuTracker.printCurrentLoad());
            info.append(cpuInfo);
        }

        info.append(processCpuTracker.printCurrentState(anrTime));

        // Print the collected info straight to the log
        Slog.e(TAG, info.toString());
...
    }
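The printCurrentLoad() call is what emits the "Load: 20.12 / 13.05 / 6.96" line seen in the ANR log above, and printCurrentState(anrTime) emits the "CPU usage from ... ago" block with the per-process breakdown.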

AMS.updateCpuStatsNow

    void updateCpuStatsNow() {
        synchronized (mProcessCpuTracker) {
            mProcessCpuMutexFree.set(false);
            final long now = SystemClock.uptimeMillis();
            boolean haveNewCpuStats = false;

            // Refresh the CPU stats at most once every MONITOR_CPU_MIN_TIME (5s)
            if (MONITOR_CPU_USAGE &&
                    mLastCpuTime.get() < (now - MONITOR_CPU_MIN_TIME)) {
                mLastCpuTime.set(now);
                mProcessCpuTracker.update();
                if (mProcessCpuTracker.hasGoodLastStats()) {
                    haveNewCpuStats = true;
                    //Slog.i(TAG, mProcessCpu.printCurrentState());
                    //Slog.i(TAG, "Total CPU usage: "
                    //        + mProcessCpu.getTotalCpuPercent() + "%");

                    // Slog the cpu usage if the property is set.
                    if ("true".equals(SystemProperties.get("events.cpu"))) {
                        // user-mode time
                        int user = mProcessCpuTracker.getLastUserTime();
                        // kernel-mode (system) time
                        int system = mProcessCpuTracker.getLastSystemTime();
                        // IO wait time
                        int iowait = mProcessCpuTracker.getLastIoWaitTime();
                        // hardware interrupt time
                        int irq = mProcessCpuTracker.getLastIrqTime();
                        // soft interrupt time
                        int softIrq = mProcessCpuTracker.getLastSoftIrqTime();
                        // idle time
                        int idle = mProcessCpuTracker.getLastIdleTime();

                        int total = user + system + iowait + irq + softIrq + idle;
                        if (total == 0) total = 1;

                        // Write the values to the event log as percentages
                        EventLog.writeEvent(EventLogTags.CPU,
                                ((user + system + iowait + irq + softIrq) * 100) / total,
                                (user * 100) / total,
                                (system * 100) / total,
                                (iowait * 100) / total,
                                (irq * 100) / total,
                                (softIrq * 100) / total);
                    }
                }
            }

            // Attribute the CPU times to processes and update battery stats
            final BatteryStatsImpl bstats = mBatteryStatsService.getActiveStatistics();
            synchronized (bstats) {
                synchronized (mPidsSelfLocked) {
                    if (haveNewCpuStats) {
                        if (bstats.startAddingCpuLocked()) {
                            int totalUTime = 0;
                            int totalSTime = 0;
                            // Walk every stats entry held by mProcessCpuTracker
                            final int N = mProcessCpuTracker.countStats();
                            for (int i = 0; i < N; i++) {
                                ProcessCpuTracker.Stats st = mProcessCpuTracker.getStats(i);
                                if (!st.working) {
                                    continue;
                                }
                                // Update CPU times on the matching ProcessRecord
                                ProcessRecord pr = mPidsSelfLocked.get(st.pid);
                                totalUTime += st.rel_utime;
                                totalSTime += st.rel_stime;
                                if (pr != null) {
                                    BatteryStatsImpl.Uid.Proc ps = pr.curProcBatteryStats;
                                    if (ps == null || !ps.isActive()) {
                                        pr.curProcBatteryStats = ps = bstats.getProcessStatsLocked(
                                                pr.info.uid, pr.processName);
                                    }
                                    ps.addCpuTimeLocked(st.rel_utime, st.rel_stime);
                                    pr.curCpuTime += st.rel_utime + st.rel_stime;
                                } else {
                                    BatteryStatsImpl.Uid.Proc ps = st.batteryStats;
                                    if (ps == null || !ps.isActive()) {
                                        st.batteryStats = ps = bstats.getProcessStatsLocked(
                                                bstats.mapUid(st.uid), st.name);
                                    }
                                    ps.addCpuTimeLocked(st.rel_utime, st.rel_stime);
                                }
                            }
                            // Push the totals into BatteryStatsImpl
                            final int userTime = mProcessCpuTracker.getLastUserTime();
                            final int systemTime = mProcessCpuTracker.getLastSystemTime();
                            final int iowaitTime = mProcessCpuTracker.getLastIoWaitTime();
                            final int irqTime = mProcessCpuTracker.getLastIrqTime();
                            final int softIrqTime = mProcessCpuTracker.getLastSoftIrqTime();
                            final int idleTime = mProcessCpuTracker.getLastIdleTime();
                            bstats.finishAddingCpuLocked(totalUTime, totalSTime, userTime,
                                    systemTime, iowaitTime, irqTime, softIrqTime, idleTime);
                        }
                    }
                }

                // Persist battery stats to disk at most every 30 minutes
                if (mLastWriteTime < (now - BATTERY_STATS_TIME)) {
                    mLastWriteTime = now;
                    mBatteryStatsService.scheduleWriteToDisk();
                }
            }
        }
    }

BatteryStatsImpl.finishAddingCpuLocked

    public void finishAddingCpuLocked(int totalUTime, int totalSTime, int statUserTime,
                                      int statSystemTime, int statIOWaitTime, int statIrqTime,
                                      int statSoftIrqTime, int statIdleTime) {
        if (DEBUG) Slog.d(TAG, "Adding cpu: tuser=" + totalUTime + " tsys=" + totalSTime
                + " user=" + statUserTime + " sys=" + statSystemTime
                + " io=" + statIOWaitTime + " irq=" + statIrqTime
                + " sirq=" + statSoftIrqTime + " idle=" + statIdleTime);
        mCurStepCpuUserTime += totalUTime;
        mCurStepCpuSystemTime += totalSTime;
        mCurStepStatUserTime += statUserTime;
        mCurStepStatSystemTime += statSystemTime;
        mCurStepStatIOWaitTime += statIOWaitTime;
        mCurStepStatIrqTime += statIrqTime;
        mCurStepStatSoftIrqTime += statSoftIrqTime;
        mCurStepStatIdleTime += statIdleTime;
    }

ProcessCpuTracker.update

The update method mainly reads /proc/stat and /proc/loadavg to refresh the current CPU times. The load callback onLoadChanged is used by LoadAverageService, which overrides it to drive a floating view on screen showing the CPU load in real time.

Sample contents of these two nodes, together with a brief explanation, are listed at the end of this article.

The /proc directory is a virtual filesystem: its subdirectories and files are generated on the fly, occupy no real storage, and expose live system information that can be read at any time.
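As a quick illustration (a plain-Java sketch, not framework code; it runs on any Linux machine and reads the same file paths the framework does), these nodes can be read like ordinary text files:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ProcReadSketch {
        public static void main(String[] args) throws IOException {
            // The kernel synthesizes the content at read time, so every read
            // returns a fresh snapshot of the current system state.
            System.out.println(Files.readAllLines(Paths.get("/proc/loadavg")).get(0));
            System.out.println(Files.readAllLines(Paths.get("/proc/stat")).get(0));
        }
    }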

    public void update() {
        if (DEBUG) Slog.v(TAG, "Update: " + this);

        final long nowUptime = SystemClock.uptimeMillis();
        final long nowRealtime = SystemClock.elapsedRealtime();

        // Reuse the preallocated size-7 long array
        final long[] sysCpu = mSystemCpuData;
        // Read the /proc/stat file
        if (Process.readProcFile("/proc/stat", SYSTEM_CPU_FORMAT,
                null, sysCpu, null)) {
            // Total user time is user + nice time.
            final long usertime = (sysCpu[0]+sysCpu[1]) * mJiffyMillis;
            // Total system time is simply system time.
            final long systemtime = sysCpu[2] * mJiffyMillis;
            // Total idle time is simply idle time.
            final long idletime = sysCpu[3] * mJiffyMillis;
            // Total irq time is iowait + irq + softirq time.
            final long iowaittime = sysCpu[4] * mJiffyMillis;
            final long irqtime = sysCpu[5] * mJiffyMillis;
            final long softirqtime = sysCpu[6] * mJiffyMillis;

            // This code is trying to avoid issues with idle time going backwards,
            // but currently it gets into situations where it triggers most of the time. :(
            if (true || (usertime >= mBaseUserTime && systemtime >= mBaseSystemTime
                    && iowaittime >= mBaseIoWaitTime && irqtime >= mBaseIrqTime
                    && softirqtime >= mBaseSoftIrqTime && idletime >= mBaseIdleTime)) {
                mRelUserTime = (int)(usertime - mBaseUserTime);
                mRelSystemTime = (int)(systemtime - mBaseSystemTime);
                mRelIoWaitTime = (int)(iowaittime - mBaseIoWaitTime);
                mRelIrqTime = (int)(irqtime - mBaseIrqTime);
                mRelSoftIrqTime = (int)(softirqtime - mBaseSoftIrqTime);
                mRelIdleTime = (int)(idletime - mBaseIdleTime);
                mRelStatsAreGood = true;
...

                mBaseUserTime = usertime;
                mBaseSystemTime = systemtime;
                mBaseIoWaitTime = iowaittime;
                mBaseIrqTime = irqtime;
                mBaseSoftIrqTime = softirqtime;
                mBaseIdleTime = idletime;

            } else {
                mRelUserTime = 0;
                mRelSystemTime = 0;
                mRelIoWaitTime = 0;
                mRelIrqTime = 0;
                mRelSoftIrqTime = 0;
                mRelIdleTime = 0;
                mRelStatsAreGood = false;
                Slog.w(TAG, "/proc/stats has gone backwards; skipping CPU update");
                return;
            }
        }

        mLastSampleTime = mCurrentSampleTime;
        mCurrentSampleTime = nowUptime;
        mLastSampleRealTime = mCurrentSampleRealTime;
        mCurrentSampleRealTime = nowRealtime;

        final StrictMode.ThreadPolicy savedPolicy = StrictMode.allowThreadDiskReads();
        try {
            // Collect per-process stats from the /proc file nodes
            mCurPids = collectStats("/proc", -1, mFirst, mCurPids, mProcStats);
        } finally {
            StrictMode.setThreadPolicy(savedPolicy);
        }

        final float[] loadAverages = mLoadAverageData;
        // Read /proc/loadavg,
        // i.e. the CPU load averages over the last 1, 5 and 15 minutes
        if (Process.readProcFile("/proc/loadavg", LOAD_AVERAGE_FORMAT,
                null, null, loadAverages)) {
            float load1 = loadAverages[0];
            float load5 = loadAverages[1];
            float load15 = loadAverages[2];
            if (load1 != mLoad1 || load5 != mLoad5 || load15 != mLoad15) {
                mLoad1 = load1;
                mLoad5 = load5;
                mLoad15 = load15;
                // onLoadChanged is an empty implementation here; an inner class in LoadAverageService overrides it to refresh the displayed CPU load data
                onLoadChanged(load1, load5, load15);
            }
        }
...
    }

ProcessCpuTracker.collectStats

    private int[] collectStats(String statsFile, int parentPid, boolean first,
            int[] curPids, ArrayList<Stats> allProcs) {
        // Fetch the pids of interest under statsFile
        int[] pids = Process.getPids(statsFile, curPids);
        int NP = (pids == null) ? 0 : pids.length;
        int NS = allProcs.size();
        int curStatsIndex = 0;
        for (int i=0; i<NP; i++) {
            int pid = pids[i];
            if (pid < 0) {
                NP = pid;
                break;
            }
            Stats st = curStatsIndex < NS ? allProcs.get(curStatsIndex) : null;

            if (st != null && st.pid == pid) {
                // Update an existing process...
                st.added = false;
                st.working = false;
                curStatsIndex++;
                if (DEBUG) Slog.v(TAG, "Existing "
                        + (parentPid < 0 ? "process" : "thread")
                        + " pid " + pid + ": " + st);

                if (st.interesting) {
                    final long uptime = SystemClock.uptimeMillis();

                    // Reusable buffer for the process stat fields
                    final long[] procStats = mProcessStatsData;
                    if (!Process.readProcFile(st.statFile.toString(),
                            PROCESS_STATS_FORMAT, null, procStats, null)) {
                        continue;
                    }

                    final long minfaults = procStats[PROCESS_STAT_MINOR_FAULTS];
                    final long majfaults = procStats[PROCESS_STAT_MAJOR_FAULTS];
                    final long utime = procStats[PROCESS_STAT_UTIME] * mJiffyMillis;
                    final long stime = procStats[PROCESS_STAT_STIME] * mJiffyMillis;

                    if (utime == st.base_utime && stime == st.base_stime) {
                        st.rel_utime = 0;
                        st.rel_stime = 0;
                        st.rel_minfaults = 0;
                        st.rel_majfaults = 0;
                        if (st.active) {
                            st.active = false;
                        }
                        continue;
                    }

                    if (!st.active) {
                        st.active = true;
                    }
...

                    st.rel_uptime = uptime - st.base_uptime;
                    st.base_uptime = uptime;
                    st.rel_utime = (int)(utime - st.base_utime);
                    st.rel_stime = (int)(stime - st.base_stime);
                    st.base_utime = utime;
                    st.base_stime = stime;
                    st.rel_minfaults = (int)(minfaults - st.base_minfaults);
                    st.rel_majfaults = (int)(majfaults - st.base_majfaults);
                    st.base_minfaults = minfaults;
                    st.base_majfaults = majfaults;
                    st.working = true;
                }

                continue;
            }

            if (st == null || st.pid > pid) {
                // We have a new process!
                st = new Stats(pid, parentPid, mIncludeThreads);
                allProcs.add(curStatsIndex, st);
                curStatsIndex++;
                NS++;
...

                final String[] procStatsString = mProcessFullStatsStringData;
                final long[] procStats = mProcessFullStatsData;
                st.base_uptime = SystemClock.uptimeMillis();
                String path = st.statFile.toString();
                //Slog.d(TAG, "Reading proc file: " + path);
                if (Process.readProcFile(path, PROCESS_FULL_STATS_FORMAT, procStatsString,
                        procStats, null)) {
                    // This is a possible way to filter out processes that
                    // are actually kernel threads...  do we want to?  Some
                    // of them do use CPU, but there can be a *lot* that are
                    // not doing anything.
                    st.vsize = procStats[PROCESS_FULL_STAT_VSIZE];
                    if (true || procStats[PROCESS_FULL_STAT_VSIZE] != 0) {
                        st.interesting = true;
                        st.baseName = procStatsString[0];
                        st.base_minfaults = procStats[PROCESS_FULL_STAT_MINOR_FAULTS];
                        st.base_majfaults = procStats[PROCESS_FULL_STAT_MAJOR_FAULTS];
                        st.base_utime = procStats[PROCESS_FULL_STAT_UTIME] * mJiffyMillis;
                        st.base_stime = procStats[PROCESS_FULL_STAT_STIME] * mJiffyMillis;
                    } else {
                        Slog.i(TAG, "Skipping kernel process pid " + pid
                                + " name " + procStatsString[0]);
                        st.baseName = procStatsString[0];
                    }
                } else {
                    Slog.w(TAG, "Skipping unknown process pid " + pid);
                    st.baseName = "<unknown>";
                    st.base_utime = st.base_stime = 0;
                    st.base_minfaults = st.base_majfaults = 0;
                }

                if (parentPid < 0) {
                    getName(st, st.cmdlineFile);
                    if (st.threadStats != null) {
                        mCurThreadPids = collectStats(st.threadsDir, pid, true,
                                mCurThreadPids, st.threadStats);
                    }
                } else if (st.interesting) {
                    st.name = st.baseName;
                    st.nameWidth = onMeasureProcessName(st.name);
                }

                if (DEBUG) Slog.v("Load", "Stats added " + st.name + " pid=" + st.pid
                        + " utime=" + st.base_utime + " stime=" + st.base_stime
                        + " minfaults=" + st.base_minfaults + " majfaults=" + st.base_majfaults);

                st.rel_utime = 0;
                st.rel_stime = 0;
                st.rel_minfaults = 0;
                st.rel_majfaults = 0;
                st.added = true;
                if (!first && st.interesting) {
                    st.working = true;
                }
                continue;
            }

            // This process has gone away!
            st.rel_utime = 0;
            st.rel_stime = 0;
            st.rel_minfaults = 0;
            st.rel_majfaults = 0;
            st.removed = true;
            st.working = true;
            allProcs.remove(curStatsIndex);
            NS--;
            if (DEBUG) Slog.v(TAG, "Removed "
                    + (parentPid < 0 ? "process" : "thread")
                    + " pid " + pid + ": " + st);
            // Decrement the loop counter so that we process the current pid
            // again the next time through the loop.
            i--;
            continue;
        }

        while (curStatsIndex < NS) {
            // This process has gone away!
            final Stats st = allProcs.get(curStatsIndex);
            st.rel_utime = 0;
            st.rel_stime = 0;
            st.rel_minfaults = 0;
            st.rel_majfaults = 0;
            st.removed = true;
            st.working = true;
            allProcs.remove(curStatsIndex);
            NS--;
            if (localLOGV) Slog.v(TAG, "Removed pid " + st.pid + ": " + st);
        }

        return pids;
    }

Process.readProcFile

jboolean android_os_Process_readProcFile(JNIEnv* env, jobject clazz,
        jstring file, jintArray format, jobjectArray outStrings,
        jlongArray outLongs, jfloatArray outFloats)
{
...
    int fd = open(file8, O_RDONLY);
...
    env->ReleaseStringUTFChars(file, file8);

    // Read the file contents into the buffer
    char buffer[256];
    const int len = read(fd, buffer, sizeof(buffer)-1);
    close(fd);
...
    buffer[len] = 0;

    return android_os_Process_parseProcLineArray(env, clazz, buffer, 0, len,
            format, outStrings, outLongs, outFloats);

}

Process.cpp parseProcLineArray

jboolean android_os_Process_parseProcLineArray(JNIEnv* env, jobject clazz,
        char* buffer, jint startIndex, jint endIndex, jintArray format,
        jobjectArray outStrings, jlongArray outLongs, jfloatArray outFloats)
{
    // First get the lengths of the format spec and of the output arrays
    const jsize NF = env->GetArrayLength(format);
    const jsize NS = outStrings ? env->GetArrayLength(outStrings) : 0;
    const jsize NL = outLongs ? env->GetArrayLength(outLongs) : 0;
    const jsize NR = outFloats ? env->GetArrayLength(outFloats) : 0;

    jint* formatData = env->GetIntArrayElements(format, 0);
    jlong* longsData = outLongs ?
        env->GetLongArrayElements(outLongs, 0) : NULL;
    jfloat* floatsData = outFloats ?
        env->GetFloatArrayElements(outFloats, 0) : NULL;
...

    jsize i = startIndex;
    jsize di = 0;

    jboolean res = JNI_TRUE;

    // Loop over the format spec, parsing each field from the buffer into the *Data arrays
    for (jsize fi=0; fi<NF; fi++) {
        jint mode = formatData[fi];
        if ((mode&PROC_PARENS) != 0) {
            i++;
        } else if ((mode&PROC_QUOTES) != 0) {
            if (buffer[i] == '"') {
                i++;
            } else {
                mode &= ~PROC_QUOTES;
            }
        }
        const char term = (char)(mode&PROC_TERM_MASK);
        const jsize start = i;
...

        jsize end = -1;
        if ((mode&PROC_PARENS) != 0) {
            while (i < endIndex && buffer[i] != ')') {
                i++;
            }
            end = i;
            i++;
        } else if ((mode&PROC_QUOTES) != 0) {
            while (buffer[i] != '"' && i < endIndex) {
                i++;
            }
            end = i;
            i++;
        }
        while (i < endIndex && buffer[i] != term) {
            i++;
        }
        if (end < 0) {
            end = i;
        }

        if (i < endIndex) {
            i++;
            if ((mode&PROC_COMBINE) != 0) {
                while (i < endIndex && buffer[i] == term) {
                    i++;
                }
            }
        }

        if ((mode&(PROC_OUT_FLOAT|PROC_OUT_LONG|PROC_OUT_STRING)) != 0) {
            char c = buffer[end];
            buffer[end] = 0;
            if ((mode&PROC_OUT_FLOAT) != 0 && di < NR) {
                char* end;
                floatsData[di] = strtof(buffer+start, &end);
            }
            if ((mode&PROC_OUT_LONG) != 0 && di < NL) {
                char* end;
                longsData[di] = strtoll(buffer+start, &end, 10);
            }
            if ((mode&PROC_OUT_STRING) != 0 && di < NS) {
                jstring str = env->NewStringUTF(buffer+start);
                env->SetObjectArrayElement(outStrings, di, str);
            }
            buffer[end] = c;
            di++;
        }
    }

    // Release the JNI arrays so the parsed *Data values are written back to the out* parameters
    env->ReleaseIntArrayElements(format, formatData, 0);
    if (longsData != NULL) {
        env->ReleaseLongArrayElements(outLongs, longsData, 0);
    }
    if (floatsData != NULL) {
        env->ReleaseFloatArrayElements(outFloats, floatsData, 0);
    }

    return res;
}

Sample data

/proc/stat

// [1]user, [2]nice, [3]system, [4]idle, [5]iowait, [6]irq, [7]softirq
// 1. user-mode CPU time accumulated from boot up to now
// 2. CPU time of processes running at low priority (positive nice value)
// 3. kernel-mode (system) CPU time
// 4. idle time, excluding time spent waiting for IO
// 5. time spent waiting for IO to complete
// 6. time spent servicing hardware interrupts
// 7. time spent servicing soft interrupts
cpu  76704 76700 81879 262824 17071 10 15879 0 0 0
cpu0 19778 22586 34375 106542 7682 7 10185 0 0 0
cpu1 11460 6197 7973 18043 2151 0 1884 0 0 0
cpu2 17438 20917 13339 24945 2845 1 1822 0 0 0
cpu3 28028 27000 26192 113294 4393 2 1988 0 0 0
intr 4942220 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 602630 0 0 0 0 0 0 0 0 0 0 0 0 0 15460 0 0 0 0 0 0 67118 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1854 8 5 0 10 0 0 0 6328 0 0 0 0 0 0 0 0 0 0 892 0 0 0 0 2 106 2 0 2 0 0 0 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 7949 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10256 3838 0 0 0 0 0 0 0 499 69081 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 725052 0 14911 0 0 0 0 0 1054 0 0 0 0 0 0 2073 0 0 0 1371 5 0 659329 654662 0 0 0 0 0 0 0 0 0 6874 0 7 0 0 0 0 913 312 0 0 0 245372 0 0 2637 0 0 0 0 0 0 0 0 0 0 0 0 96 0 0 0 0 0 13906 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8804 0 0 0 0 0 0 0 0 0 0 0 0 2294 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 13860 0 0 5 5 0 0 0 0 1380 362 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7069 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 // 中斷資訊
ctxt 11866606 // number of context switches since boot
btime 1507554066 // time the system booted, in seconds since the Unix epoch
processes 38582    // number of tasks (processes and threads) created since boot
procs_running 1 // number of tasks currently in the run queue
procs_blocked 0 // number of tasks currently blocked
softirq 2359224 2436 298396 2839 517350 2436 2436 496108 329805 2067 705351
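As a rough worked example using the cpu line above: the jiffy counts sum to 76704 + 76700 + 81879 + 262824 + 17071 + 10 + 15879 = 531067, of which 262824 are idle, so this device has been busy for roughly (531067 - 262824) / 531067 ≈ 50% of the time since boot. ProcessCpuTracker performs essentially the same computation, except that it multiplies by mJiffyMillis and subtracts the previous sample's baseline first, so the percentages it logs describe only the most recent sampling window.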

/proc/loadavg

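// The first three fields are the 1-, 5- and 15-minute load averages (the values
// LOAD_AVERAGE_FORMAT reads above); "2/2082" is runnable tasks / total tasks,
// and the last field is the pid of the most recently created task.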
10.55 19.87 25.93 2/2082 7475

/proc/1/stat

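// Per proc(5): field 2 is the command name in parentheses, fields 10 and 12 are
// the minor/major page fault counts, and fields 14 and 15 are utime and stime in
// jiffies; these are the fields PROCESS_STATS_FORMAT extracts in collectStats above.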
1 (init) S 0 0 0 0 -1 4194560 2206 161131 0 62 175 635 175 244 20 0 1 0 0 2547712 313 4294967295 32768 669624 3196243632 3196242928 464108 0 0 0 65536 3224056068 0 0 17 3 0 0 0 0 0 676368 693804 712704
