Thread safety

In multi-threaded computer programming, a function is thread-safe when it can be invoked or accessed concurrently by multiple threads without causing unexpected behavior, race conditions, or data corruption.[1][2] In a multi-threaded program, several threads execute simultaneously in a shared address space, and each thread has access to every other thread's memory; thread-safe functions must therefore ensure that all of those threads behave properly and fulfill their design specifications without unintended interaction.[3] There are various strategies for making thread-safe data structures.[3]

Levels of thread safety

Different vendors use slightly different terminology for thread-safety,[4] but the most commonly used thread-safety terms are:[2]
Thread-safety guarantees usually also include design steps to prevent or limit the risk of different forms of deadlock, as well as optimizations to maximize concurrent performance. However, deadlock-free guarantees cannot always be given, since deadlocks can be caused by callbacks and by violations of architectural layering independent of the library itself.

Software libraries can provide certain thread-safety guarantees.[5] For example, concurrent reads might be guaranteed to be thread-safe, but concurrent writes might not be. Whether a program using such a library is thread-safe depends on whether it uses the library in a manner consistent with those guarantees.

Implementation approaches

There are two classes of approaches for avoiding race conditions to achieve thread safety. The first class of approaches focuses on avoiding shared state and includes:
The second class of approaches is synchronization-related and is used in situations where shared state cannot be avoided; the examples later in this article rely on this class, while the sketch below illustrates the first.
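As an illustration of the first, shared-state-avoidance class, the following Java sketch (using a hypothetical RequestCounter class written for this article, not taken from any cited library) gives each thread its own counter through thread-local storage, so no locking is needed because the threads never share data:

import java.util.stream.IntStream;

class RequestCounter {
    // Each thread gets its own copy of the counter, so there is no shared state.
    private static final ThreadLocal<Integer> perThreadCount =
            ThreadLocal.withInitial(() -> 0);

    static int increment() {
        int next = perThreadCount.get() + 1; // read this thread's private copy
        perThreadCount.set(next);            // update only this thread's copy
        return next;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> IntStream.range(0, 1000).forEach(n -> increment());
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Each thread has counted to 1000 independently; no race condition can
        // occur because the threads never touch the same variable.
    }
}

This approach scales without any locking, but it only applies when the threads do not need to observe one combined total.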
Examples

In the following piece of Java code, the Java keyword synchronized makes the method thread-safe:

class Counter {
    private int i = 0;

    public synchronized void inc() {
        i++;
    }
}
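A short usage sketch (a hypothetical CounterDemo class added for illustration) shows the guarantee in action: two threads call inc() concurrently, and because the method is synchronized, no increment is lost:

class CounterDemo {
    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Runnable task = () -> {
            for (int n = 0; n < 100000; n++) {
                counter.inc(); // synchronized: only one thread increments at a time
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // The field i now holds exactly 200000. Without synchronized, the
        // overlapping read-modify-write steps of i++ could lose updates.
    }
}

If the synchronized keyword were removed, the final count would typically fall short of 200000, because i++ compiles to a non-atomic read, add, and write sequence.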
In the C programming language, each thread has its own stack. However, a static variable is not kept on the stack; all threads share access to it. If multiple threads overlap while running the same function, a static variable might be changed by one thread while another is midway through checking it. This difficult-to-diagnose logic error, which may compile and run properly most of the time, is called a race condition. One common way to avoid it is to use another shared variable as a "lock" or "mutex" (from mutual exclusion). In the following piece of C code, the function is thread-safe, but not reentrant:
#include <pthread.h>

int increment_counter()
{
    static int counter = 0;
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    // only allow one thread to increment at a time
    pthread_mutex_lock(&mutex);

    ++counter;

    // store value before any other threads increment it further
    int result = counter;

    pthread_mutex_unlock(&mutex);

    return result;
}
The function above is not reentrant: if it were re-entered on the same thread, for example from a signal handler invoked while the mutex was already held, the nested call would try to lock a mutex the thread already holds and would typically deadlock. The same function can be implemented to be both thread-safe and reentrant using the lock-free atomics in C++11:

#include <atomic>
int increment_counter()
{
    static std::atomic<int> counter(0);

    // increment is guaranteed to be done atomically
    int result = ++counter;

    return result;
}
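For comparison, an analogous lock-free counter can be written in Java using java.util.concurrent.atomic.AtomicInteger; this sketch is illustrative and is not part of the original example:

import java.util.concurrent.atomic.AtomicInteger;

class AtomicCounter {
    private static final AtomicInteger counter = new AtomicInteger(0);

    static int incrementCounter() {
        // incrementAndGet performs the read-modify-write as one atomic step,
        // so no mutex is needed and there is no lock to deadlock on
        return counter.incrementAndGet();
    }
}

In both the C++ and the Java versions, the absence of a lock is what removes the deadlock risk that made the mutex-based version non-reentrant.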