CSAPP English Learning Series: OSTEP

Posted by xing393939 on 2021-04-07
26 Concurrency: An Introduction
You might also notice how this ruins our beautiful address space layout. Before, the stack and heap could grow independently and trouble only arose when you ran out of room in the address space. Here, we no longer have such a nice situation. Fortunately, this is usually OK, as stacks do not generally have to be very large (the exception being in programs that make heavy use of recursion).

ruin
US [ˈruːɪn]
v. to ruin; to destroy

run out
US [ˈrʌn aʊt]
to use up; to exhaust

recursion
US [rɪˈkɜrʃn]
n. recursion
26.1 Why Use Threads?
Of course, in either of the cases mentioned above, you could use multiple processes instead of threads. However, threads share an address space and thus make it easy to share data, and hence are a natural choice when constructing these types of programs. Processes are a more sound choice for logically separate tasks where little sharing of data structures in memory is needed.

hence
US [hens]
adv. hence; therefore

construct
US [kənˈstrʌkt]
v. to construct; to build

separate
US [ˈsepəreɪt]
v. to separate; to divide
26.2 An Example: Thread Creation
As you might be able to see, one way to think about thread creation is that it is a bit like making a function call; however, instead of first executing the function and then returning to the caller, the system instead creates a new thread of execution for the routine that is being called, and it runs independently of the caller, perhaps before returning from the create, but perhaps much later. What runs next is determined by the OS scheduler, and although the scheduler likely implements some sensible algorithm, it is hard to know what will run at any given moment in time.

determine
US [dɪˈtɜːrmɪn]
v. to determine; to ascertain

sensible
US [ˈsensəbl]
adj. sensible; reasonable
26.3 Why It Gets Worse: Shared Data
The simple thread example we showed above was useful in showing how threads are created and how they can run in different orders depending on how the scheduler decides to run them. What it doesn’t show you, though, is how threads interact when they access shared data.

interact
US [ˌɪntərˈækt]
v. to interact; to communicate
26.4 The Heart Of The Problem: Uncontrolled Scheduling
What we have demonstrated here is called a race condition (or, more specifically, a data race): the results depend on the timing of the code’s execution. With some bad luck (i.e., context switches that occur at untimely points in the execution), we get the wrong result. In fact, we may get a different result each time; thus, instead of a nice deterministic computation (which we are used to from computers), we call this result indeterminate, where it is not known what the output will be and it is indeed likely to be different across runs.

demonstrate
US [ˈdemənstreɪt]
v. to demonstrate; to illustrate

untimely
US [ʌnˈtaɪmli]
adj. untimely; ill-timed

deterministic
US [dɪˌtɜmɪ'nɪstɪk]
adj. deterministic

indeterminate
US [ˌɪndɪˈtɜrmɪnət]
adj. indeterminate; uncertain

indeed
US [ɪnˈdiːd]
adv. indeed; in fact
26.5 The Wish For Atomicity
In our theme of exploring concurrency, we’ll be using synchronization primitives to turn short sequences of instructions into atomic blocks of execution, but the idea of atomicity is much bigger than that, as we will see. For example, file systems use techniques such as journaling or copy-on-write in order to atomically transition their on-disk state, critical for operating correctly in the face of system failures. If that doesn’t make sense, don’t worry — it will, in some future chapter.

synchronization
US [ˌsɪŋkrənaɪ'zeɪʃn]
n. synchronization

primitive
US [ˈprɪmətɪv]
n. primitive

journaling
US ['dʒɜnlɪŋ]
n. journaling

critical
US [ˈkrɪtɪkl]
adj. critical; crucial
26.6 One More Problem: Waiting For Another
This chapter has set up the problem of concurrency as if only one type of interaction occurs between threads, that of accessing shared variables and the need to support atomicity for critical sections. As it turns out, there is another common interaction that arises, where one thread must wait for another to complete some action before it continues. This interaction arises, for example, when a process performs a disk I/O and is put to sleep; when the I/O completes, the process needs to be roused from its slumber so it can continue.

interaction
US [ˌɪntəˈrækʃən]
n. interaction; mutual influence

rouse
US [raʊz]
v. to rouse; to wake up

slumber
US [ˈslʌmbɚ]
n. slumber; deep sleep
26.7 Summary: Why in OS Class?
For example, imagine the case where there are two processes running. Assume they both call write() to write to the file, and both wish to append the data to the file (i.e., add the data to the end of the file, thus increasing its length). To do so, both must allocate a new block, record in the inode of the file where this block lives, and change the size of the file to reflect the new larger size (among other things; we’ll learn more about files in the third part of the book). Because an interrupt may occur at any time, the code that updates these shared structures (e.g., a bitmap for allocation, or the file’s inode) is a critical section; thus, OS designers, from the very beginning of the introduction of the interrupt, had to worry about how the OS updates internal structures. An untimely interrupt causes all of the problems described above. Not surprisingly, page tables, process lists, file system structures, and virtually every kernel data structure has to be carefully accessed, with the proper synchronization primitives, to work correctly.

allocate
US [ˈæləkeɪt]
v. to allocate

internal
US [ɪnˈtɜːrnl]
adj. internal
27 Interlude: Thread API
This chapter briefly covers the main portions of the thread API. Each part will be explained further in the subsequent chapters, as we show how to use the API. More details can be found in various books and online sources [B89, B97, B+96, K+96]. We should note that the subsequent chapters introduce the concepts of locks and condition variables more slowly, with many examples; this chapter is thus better used as a reference.

Interlude
US [ˈɪntərluːd]
n. interlude (in film: an intermission)

briefly
US [ˈbriːfli]
adv. briefly

portion
US [ˈpɔːrʃn]
n. a portion; a part

subsequent
US [ˈsʌbsɪkwənt]
adj. subsequent; later

various
US [ˈveriəs]
adj. various
27.1 Thread Creation
The second argument, attr, is used to specify any attributes this thread might have. Some examples include setting the stack size or perhaps information about the scheduling priority of the thread. An attribute is initialized with a separate call to pthread_attr_init(); see the manual page for details. However, in most cases, the defaults will be fine; in this case, we will simply pass the value NULL in.

priority
US [praɪˈɔːrəti]
n. priority; precedence
27.2 Thread Completion
We should note that not all code that is multi-threaded uses the join routine. For example, a multi-threaded web server might create a number of worker threads, and then use the main thread to accept requests and pass them to the workers, indefinitely. Such long-lived programs thus may not need to join. However, a parallel program that creates threads to execute a particular task (in parallel) will likely use join to make sure all such work completes before exiting or moving onto the next stage of computation.

indefinitely
US [ɪnˈdefɪnətli]
adv. indefinitely

parallel
US [ˈpærəlel]
adj. parallel

particular
US [pərˈtɪkjələr]
adj. particular; specific

computation
US [ˌkɑmpjuˈteɪʃn]
n. computation
27.3 Locks
// Initialization method 1: static initializer
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
// Initialization method 2: dynamic initialization
int rc = pthread_mutex_init(&lock, NULL);
assert(rc == 0);
// Acquire, release, and destroy a lock
int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);
int pthread_mutex_destroy(pthread_mutex_t *mutex);
// These two calls are also used in lock acquisition: the trylock version
// returns failure if the lock is already held; the timedlock version
// returns after a timeout or after acquiring the lock.
int pthread_mutex_trylock(pthread_mutex_t *mutex);
int pthread_mutex_timedlock(pthread_mutex_t *mutex, struct timespec *abs_timeout);

acquisition
US [ˌækwɪˈzɪʃn]
n. acquisition; obtaining
27.4 Condition Variables
// releases the lock while the caller sleeps; reacquires it before returning
int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
// wakes up a sleeping thread associated with this condition variable
int pthread_cond_signal(pthread_cond_t *cond);
27.5 Compiling and Running
All of the code examples in this chapter are relatively easy to get up and running. To compile them, you must include the header pthread.h in your code. On the link line, you must also explicitly link with the pthreads library, by adding the -pthread flag.

relatively
US [ˈrelətɪvli]
adv. relatively

explicitly
US [ɪk'splɪsɪtlɪ]
adv. explicitly; clearly
27.6 Summary
// show pthread interfaces
man -k pthread
To share data between threads, the values must be in the heap or otherwise some locale that is globally accessible. 
Always use condition variables to signal between threads. While it is often tempting to use a simple flag, don’t do it.

locale
US [loʊˈkæl]
n. a locale; a place
This work is licensed under a CC license; reposts must credit the author and link to this article.
