Key kernel parameters under /proc/sys/vm on Linux
Category: LINUX
[wuyaalan@localhost vm]$ ls
block_dump hugepages_treat_as_movable oom_kill_allocating_task
compact_memory hugetlb_shm_group overcommit_memory
dirty_background_bytes laptop_mode overcommit_ratio
dirty_background_ratio legacy_va_layout page-cluster
dirty_bytes lowmem_reserve_ratio panic_on_oom
dirty_expire_centisecs max_map_count percpu_pagelist_fraction
dirty_ratio min_free_kbytes scan_unevictable_pages
dirty_writeback_centisecs mmap_min_addr stat_interval
drop_caches nr_hugepages swappiness
extfrag_threshold nr_overcommit_hugepages vdso_enabled
extra_free_kbytes nr_pdflush_threads vfs_cache_pressure
highmem_is_dirtyable oom_dump_tasks would_have_oomkilled
As the listing above shows, the proc filesystem exposes a great deal of kernel information, and users can tune these kernel parameters to improve system performance.
The sections below explain the meaning of some of the parameters listed above.
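Each entry is an ordinary file: read it with cat, write it with echo (as root), or use the sysctl command; to make a setting survive reboots, add it to /etc/sysctl.conf. A minimal sketch of the mechanics, using vm.swappiness purely as an illustration:
- cat /proc/sys/vm/swappiness
- echo 60 > /proc/sys/vm/swappiness
- sysctl -w vm.swappiness=60
- sysctl -p    (reload /etc/sysctl.conf)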
1. block_dump: block_dump enables block I/O debugging when set to a nonzero value. If you want to find out which process caused the disk to spin up (see /proc/sys/vm/laptop_mode), you can gather information by setting the flag. When this flag is set, Linux reports all disk read and write operations that take place, and all block dirtyings done to files. This makes it possible to debug why a disk needs to spin up, and to increase battery life even more. The output of block_dump is written to the kernel output, and it can be retrieved using "dmesg". When you use block_dump and your kernel logging level also includes kernel debugging messages, you probably want to turn off klogd, otherwise the output of block_dump will be logged, causing disk activity that is not normally there.
In short: with block_dump enabled, the kernel reports every disk read, write and block dirtying, so you can work out which process caused a disk spin-up. Read the output with dmesg, and consider stopping klogd first; otherwise logging the block_dump output will itself generate disk activity that is not normally there.
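A minimal usage sketch (the grep pattern assumes the usual "READ/WRITE/dirtied" wording of these log messages):
- echo 1 > /proc/sys/vm/block_dump
- dmesg | grep -E "READ|WRITE|dirtied"
- echo 0 > /proc/sys/vm/block_dump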
2. dirty_background_ratio: Contains, as a percentage of total system memory, the number of pages at which the pdflush background writeback daemon will start writing out dirty data.
That is, once the total size of modified pages exceeds this percentage of working memory, pdflush starts writing them back. Users can raise the ratio to let pages stay resident in memory longer.
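For example (the value 10 is purely illustrative, not a recommendation):
- cat /proc/sys/vm/dirty_background_ratio
- echo 10 > /proc/sys/vm/dirty_background_ratio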
3. dirty_expire_centisecs: This tunable is used to define when dirty data is old enough to be eligible for writeout by the pdflush daemons. It is expressed in 100'ths of a second. Data which has been dirty in memory for longer than this interval will be written out next time a pdflush daemon wakes up.
In other words, dirty_expire_centisecs controls how long a modified page may age before it is considered expired and must be written back.
4. dirty_ratio: Contains, as a percentage of total system memory, the number of pages at which a process which is generating disk writes will itself start writing out dirty data.
5. dirty_writeback_centisecs: The pdflush writeback daemons will periodically wake up and write "old" data out to disk. This tunable expresses the interval between those wakeups, in 100'ths of a second. Setting this to zero disables periodic writeback altogether.
That is, dirty_writeback_centisecs is the interval at which the pdflush threads wake up; each time it elapses, pdflush writes modified data back to disk.
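Both intervals are in hundredths of a second. As a sketch, the following makes dirty data eligible for writeback after 15 seconds and wakes pdflush every 5 seconds (values are illustrative only):
- echo 1500 > /proc/sys/vm/dirty_expire_centisecs
- echo 500 > /proc/sys/vm/dirty_writeback_centisecs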
6. drop_caches: Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache:
- echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
- echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
- echo 3 > /proc/sys/vm/drop_caches
As this is a non-destructive operation, and dirty objects are not freeable, the user should run "sync" first in order to make sure all cached objects are freed. This tunable was added in 2.6.16.
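A typical sequence, flushing dirty data first so that as much as possible can actually be dropped:
- sync
- echo 3 > /proc/sys/vm/drop_caches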
7. hugepages_treat_as_movable: When a non-zero value is written to this tunable, future allocations for the huge page pool will use ZONE_MOVABLE. Despite huge pages being non-movable, we do not introduce additional external fragmentation of note as huge pages are always the largest contiguous block we care about. Huge pages are not movable so are not allocated from ZONE_MOVABLE by default. However, as ZONE_MOVABLE will always have pages that can be migrated or reclaimed, it can be used to satisfy hugepage allocations even when the system has been running a long time. This allows an administrator to resize the hugepage pool at runtime depending on the size of ZONE_MOVABLE.
8. hugetlb_shm_group: hugetlb_shm_group contains the group ID that is allowed to create SysV shared memory segments using hugetlb pages.
9. laptop_mode: laptop_mode is a knob that controls "laptop mode". When the knob is set, any physical disk I/O (that might have caused the hard disk to spin up, see /proc/sys/vm/block_dump) causes Linux to flush all dirty blocks. The result of this is that after a disk has spun down, it will not be spun up anymore to write dirty blocks, because those blocks had already been written immediately after the most recent read operation. The value of the laptop_mode knob determines the time between the occurrence of disk I/O and when the flush is triggered. A sensible value for the knob is 5 seconds. Setting the knob to 0 disables laptop mode.
In laptop mode, the kernel uses the I/O subsystem more intelligently and tries to keep the disk in a low-power state as much as possible. It batches many I/O operations together and completes them in one burst; between bursts the disk can stay inactive for up to ten minutes by default, which greatly reduces the number of disk spin-ups. To make such long inactivity periods possible, the kernel has to complete as much I/O as it can during each active period: it performs a large amount of read-ahead and then syncs all buffers.
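As a sketch, laptop mode can be switched on with the 5-second flush delay suggested above, and off again with 0:
- echo 5 > /proc/sys/vm/laptop_mode
- echo 0 > /proc/sys/vm/laptop_mode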
10. legacy_va_layout: If non-zero, this sysctl disables the new 32-bit mmap map layout; the kernel will use the legacy (2.4) layout for all processes.
11. lowmem_reserve_ratio: Ratio of total pages to free pages for each memory zone.
12. max_map_count: This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries. While most applications need less than a thousand maps, certain programs, particularly malloc debuggers, may consume lots of them, e.g., up to one or two maps per allocation. The default value is 65536.
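If a program (say, a malloc debugger) needs more maps than the default allows, the limit can be raised; the value below is purely illustrative:
- sysctl -w vm.max_map_count=262144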
13. min_free_kbytes: This is used to force the Linux VM to keep a minimum number of kilobytes free. The VM uses this number to compute a pages_min value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size.
14. mmap_min_addr: This file indicates the amount of address space which a user process will be restricted from mmaping. Since kernel null dereference bugs could accidentally operate based on the information in the first couple of pages of memory, userspace processes should not be allowed to write to them. By default this value is set to 0 and no protections will be enforced by the security module. Setting this value to something like 64k will allow the vast majority of applications to work correctly and provide defense in depth against future potential kernel bugs.
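Following the 64k suggestion above (64 * 1024 = 65536 bytes):
- echo 65536 > /proc/sys/vm/mmap_min_addr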
15. nr_hugepages: nr_hugepages configures the number of hugetlb pages reserved for the system.
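A sketch of reserving a pool of huge pages and checking the result (128 is illustrative):
- echo 128 > /proc/sys/vm/nr_hugepages
- grep -i huge /proc/meminfo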
16. nr_pdflush_threads: The count of currently-running pdflush threads. This is a read-only value.
17. numa_zonelist_order: This sysctl is only for NUMA. 'Where the memory is allocated from' is controlled by zonelists. In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows: ZONE_NORMAL -> ZONE_DMA. This means that a memory allocation request for GFP_KERNEL will get memory from ZONE_DMA only when ZONE_NORMAL is not available. In the NUMA case, you can think of the following two types of order. Assume a 2-node NUMA system; below is the zonelist for Node(0)'s GFP_KERNEL:
(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA. Type (A) offers the best locality for processes on Node(0), but ZONE_DMA will be used before ZONE_NORMAL exhaustion. This increases the possibility of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends to be small. Type (B) cannot offer the best locality but is more robust against OOM of the DMA zone. Type (A) is called "Node" order; Type (B) is "Zone" order. "Node" order orders the zonelists by node, then by zone within each node; specify "[Nn]ode" for node order. "Zone" order orders the zonelists by zone type, then by node within each zone; specify "[Zz]one" for zone order. Specify "[Dd]efault" to request automatic configuration. Autoconfiguration will select "node" order in the following cases:
(1) if the DMA zone does not exist, or
(2) if the DMA zone comprises greater than 50% of the available memory, or
(3) if any node's DMA zone comprises greater than 60% of its local memory and the amount of local memory is big enough.
Otherwise, "zone" order will be selected. Default order is recommended unless this is causing problems for your system/application.
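As a sketch, the current order can be inspected and changed like this:
- cat /proc/sys/vm/numa_zonelist_order
- echo node > /proc/sys/vm/numa_zonelist_order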
18. overcommit_memory: Controls overcommit of system memory, possibly allowing processes to allocate (but not use) more memory than is actually available.
- 0 - Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.
- 1 - Always overcommit. Appropriate for some scientific applications.
- 2 - Don't overcommit. The total address space commit for the system is not permitted to exceed swap plus a configurable percentage (default is 50) of physical RAM. Depending on the percentage you use, in most situations this means a process will not be killed while attempting to use already-allocated memory but will receive errors on memory allocation as appropriate.
19. overcommit_ratio: The percentage of physical memory used, together with swap, to compute the commit limit enforced in mode 2 above. The total committed address space is then limited to:
swap + (physmem * overcommit_ratio / 100)
where:
swap = total amount of swap space in system
physmem = size of physical memory in system
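A sketch of strict accounting with an 80% ratio (values illustrative); the CommitLimit line in /proc/meminfo reflects the result:
- echo 2 > /proc/sys/vm/overcommit_memory
- echo 80 > /proc/sys/vm/overcommit_ratio
- grep CommitLimit /proc/meminfo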
20. page-cluster: page-cluster controls the number of pages which are written to swap in a single attempt; this is the swap I/O size. It is a logarithmic value - setting it to zero means "1 page", setting it to 1 means "2 pages", setting it to 2 means "4 pages", etc. The default value is three (eight pages at a time). There may be some small benefits in tuning this to a different value if your workload is swap-intensive.
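Since the value is a power-of-two exponent, setting it to 0 makes swap I/O proceed one page at a time:
- echo 0 > /proc/sys/vm/page-cluster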
21. panic_on_oom: This enables or disables the panic-on-out-of-memory feature. If this is set to 1, the kernel panics when out-of-memory happens. If this is set to 0, the kernel will kill some rogue process by calling oom_kill(). Usually, oom_killer can kill rogue processes and the system will survive. If you want to panic the system rather than killing rogue processes, set this to 1. The default value is 0.
22. percpu_pagelist_fraction: This is the fraction of pages at most (high mark pcp->high) in each zone that are allocated for each per-cpu page list. The minimum value for this is 8, meaning that we do not allow more than 1/8th of the pages in each zone to be allocated in any single per_cpu_pagelist. This entry only changes the value of hot per-cpu pagelists. The user can specify a number like 100 to allocate 1/100th of each zone to each per-cpu page list. The batch value of each per-cpu pagelist is also updated as a result: it is set to pcp->high / 4. The upper limit of batch is (PAGE_SHIFT * 8). The initial value is zero; the kernel does not use this value at boot time to set the high water marks for each per-cpu page list.
23. stat_interval: With this tunable you can configure the VM statistics update interval. The default value is 1. This tunable first appeared in the 2.6.22 kernel.
24. swap_token_timeout: This file contains the valid hold time of the swap-out protection token. The Linux VM has a token-based thrashing control mechanism and uses the token to prevent unnecessary page faults in thrashing situations. The unit of the value is seconds. The value would be useful to tune thrashing behavior. This tunable was removed in 2.6.20 when the algorithm got improved.
25. swappiness: swappiness is a parameter which sets the kernel's balance between reclaiming pages from the page cache and swapping process memory. The default value is 60. If you want the kernel to swap out more process memory and thus cache more file contents, increase the value. Otherwise, if you would like the kernel to swap less, decrease it.
26. vdso_enabled: When this flag is set, the kernel maps a vDSO page into newly created processes and passes its address down to glibc upon exec(). This feature is enabled by default. vDSO is a virtual DSO (dynamic shared object) exposed by the kernel at some address in every process' memory. Its purpose is to speed up system calls. The mapping address used to be fixed (0xffffe000), but starting with 2.6.18 it is randomized (besides the security implications, this also helps debuggers).
27. vfs_cache_pressure: Controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects. At the default value of vfs_cache_pressure = 100 the kernel will attempt to reclaim dentries and inodes at a "fair" rate with respect to pagecache and swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.
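For example, to make the kernel more reluctant to discard dentry and inode caches (50 is illustrative):
- echo 50 > /proc/sys/vm/vfs_cache_pressure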
Source: ITPUB blog, http://blog.itpub.net/26250550/viewspace-1200400/