Using Basic Synchronization Primitives - C# .NET

The following sections describe the basic .NET synchronization primitives. Each of these classes can be used to ensure that only one Task is able to enter a critical region.

Synchronization Primitives

Locking and Monitoring
The simplest way to apply synchronization in C# is with the lock keyword, which involves two steps. First, you must create a lock object that is visible to all of your Tasks. Second, you must wrap the critical region in a lock block that refers to that lock object, as follows:

The following listing shows the application of the lock keyword to the critical region of the bank account example.

Applying the lock Keyword
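
The original code listing is not reproduced here, so the following is a minimal sketch of the idea. The BankAccount class, the member names, and the Task counts are illustrative assumptions rather than the original code:

using System;
using System.Threading.Tasks;

class BankAccount
{
    public int Balance { get; set; }
}

class Program
{
    static void Main()
    {
        BankAccount account = new BankAccount();

        // The lock object must be visible to every Task that enters the critical region.
        object lockObj = new object();

        Task[] tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Factory.StartNew(() =>
            {
                for (int j = 0; j < 1000; j++)
                {
                    // Only one Task at a time may execute this block.
                    lock (lockObj)
                    {
                        account.Balance++;
                    }
                }
            });
        }

        Task.WaitAll(tasks);
        Console.WriteLine("Expected balance: 10000, actual balance: {0}",
            account.Balance);
    }
}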

The lock keyword is a C# shortcut for using the System.Threading.Monitor class, which is a heavyweight primitive.

The members of the Monitor class are static, which is why you must provide a lock object—this tells the Monitor class which critical region a Task is trying to enter.

Tip: It is important to ensure that all of your Tasks use the same lock object when entering a given critical region. See the discussion of the Isolated Lock References antipattern in this chapter for more details.

The lock keyword automatically takes care of acquiring and releasing the lock for the critical region by calling Monitor.Enter() and Monitor.Exit() for you. If you decide to use the Monitor class directly, you should ensure that you call Monitor.Exit() within a finally block, as shown in the fragment below.

One overload of Monitor.Enter() takes a lock object and a pass-by-reference bool as arguments. The bool is set to true when the lock is acquired and should be checked before releasing the lock with Monitor.Exit(), because there are some conditions under which you risk trying to release a lock that you have not acquired.
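
A fragment along these lines, reusing the assumed BankAccount and lockObj names from the earlier sketch, illustrates the pattern (the method would sit alongside Main() and needs using System.Threading):

static void Deposit(BankAccount account, object lockObj, int amount)
{
    bool lockTaken = false;
    try
    {
        // lockTaken is set to true only if the lock is actually acquired.
        Monitor.Enter(lockObj, ref lockTaken);
        account.Balance += amount;
    }
    finally
    {
        // Release the lock only if it was acquired; otherwise the critical
        // region was never entered and Monitor.Exit() must not be called.
        if (lockTaken)
        {
            Monitor.Exit(lockObj);
        }
    }
}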

When one Task has acquired the lock, no other Task can enter the critical region. Calls to Monitor.Enter() will block until the first Task releases the lock by calling Monitor.Exit(). If there are Tasks waiting when the lock is released, Monitor selects one of them and allows it to acquire the lock. Tasks may acquire the lock in any sequence; the order in which Tasks arrive at the critical region doesn’t guarantee anything about the order in which they will acquire the lock.

You can try to acquire the lock by calling one of the overloads of the Monitor.TryEnter() method, which lets your Task attempt to acquire the lock without waiting indefinitely for it to become available. The overloads are listed in the following table.

Overloads of the System.Threading.Monitor.TryEnter Method
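
The table itself is not reproduced here; broadly, the overloads accept the lock object plus an optional timeout (an int or a TimeSpan) and/or a pass-by-reference bool that reports whether the lock was taken. The following sketch uses the timeout-based overload with the assumed names from the earlier fragments:

static void TryDeposit(BankAccount account, object lockObj, int amount)
{
    bool lockTaken = false;
    try
    {
        // Wait up to 500 milliseconds for the lock; lockTaken reports the outcome.
        Monitor.TryEnter(lockObj, 500, ref lockTaken);
        if (lockTaken)
        {
            account.Balance += amount;
        }
        else
        {
            Console.WriteLine("Could not acquire the lock within 500ms");
        }
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(lockObj);
        }
    }
}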

The same lock object, as I have said, must always be used for a given critical region. If you wish to protect two related critical regions (perhaps because they update the same shared data), the same object should be used to enter either region.

The following listing extends our simple bank account example to serialize access to two critical regions. There are two groups of Tasks, one of which wants to increment the balance while the other wants to decrement it. By using the same lock object, we ensure that there is at most one Task working in the pair of critical regions.

Using a Single Lock Object to Serialize Access to Two Critical Regions
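
A sketch of such a listing, with illustrative names, might look like this:

using System;
using System.Threading.Tasks;

class BankAccount
{
    public int Balance { get; set; }
}

class Program
{
    static void Main()
    {
        BankAccount account = new BankAccount();
        object lockObj = new object();

        // One group of Tasks increments the balance...
        Task[] incrementTasks = new Task[5];
        for (int i = 0; i < incrementTasks.Length; i++)
        {
            incrementTasks[i] = Task.Factory.StartNew(() =>
            {
                for (int j = 0; j < 1000; j++)
                {
                    lock (lockObj)
                    {
                        account.Balance++;
                    }
                }
            });
        }

        // ...and a second group decrements it. Both critical regions share
        // lockObj, so at most one Task is ever inside either region.
        Task[] decrementTasks = new Task[5];
        for (int i = 0; i < decrementTasks.Length; i++)
        {
            decrementTasks[i] = Task.Factory.StartNew(() =>
            {
                for (int j = 0; j < 1000; j++)
                {
                    lock (lockObj)
                    {
                        account.Balance--;
                    }
                }
            });
        }

        Task.WaitAll(incrementTasks);
        Task.WaitAll(decrementTasks);
        Console.WriteLine("Expected balance: 0, actual balance: {0}", account.Balance);
    }
}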

Using Interlocked Operations
The System.Threading.Interlocked class provides a set of static methods that use special features of the operating system and hardware to provide high-performance synchronized operations. All of the methods in Interlocked are static and synchronized. The following table provides a summary of the key members.

Selected Members of the System.Threading.Interlocked Class

The Interlocked.Exchange() method sets the value of a variable. The following statements are functionally equivalent, but manage synchronization using different techniques:
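
As a sketch of that equivalence (the Counter class and its members are illustrative):

using System.Threading;

class Counter
{
    private int sharedValue;
    private object lockObj = new object();

    public void SetWithLock(int value)
    {
        // Classic approach: serialize access to the assignment with a lock.
        lock (lockObj)
        {
            sharedValue = value;
        }
    }

    public void SetWithInterlocked(int value)
    {
        // Equivalent effect: an atomic exchange, with no explicit lock required.
        Interlocked.Exchange(ref sharedValue, value);
    }
}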

The Add(), Increment(), and Decrement() methods are convenient shortcuts when using integers and work the way that you would expect. The following listing shows how we can use Interlocked.Increment() to fix the data race from the earlier example. Notice that we have had to change the BankAccount class to expose the balance as a public integer field, because Interlocked methods require arguments passed with the ref keyword, and property values cannot be passed by ref.

Using Interlocked.Increment()
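
The original listing is not shown here; a minimal sketch, assuming the balance is exposed as a public int field named Balance, looks like this:

using System;
using System.Threading;
using System.Threading.Tasks;

// The balance is a public field rather than a property, because Interlocked
// methods take their target by ref and properties cannot be passed by ref.
class BankAccount
{
    public int Balance;
}

class Program
{
    static void Main()
    {
        BankAccount account = new BankAccount();

        Task[] tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Factory.StartNew(() =>
            {
                for (int j = 0; j < 1000; j++)
                {
                    // Atomically increment the shared balance - no explicit lock required.
                    Interlocked.Increment(ref account.Balance);
                }
            });
        }

        Task.WaitAll(tasks);
        Console.WriteLine("Expected balance: 10000, actual balance: {0}", account.Balance);
    }
}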

The CompareExchange() method checks to see if a variable has a given value and, if it does, changes the value of the variable. This is not as obtuse as it sounds, because this method allows you to tell whether another Task has updated a shared variable and to act accordingly. Using CompareExchange() allows you to work with isolated data and then merge the isolated values back into the shared data.

The following listing updates the previous example so that individual Tasks make a note of the starting balance and work with isolated balances to perform their updates. When they have calculated their local balances, they use CompareExchange() to update the shared value. If the shared data has not changed, the account balance is updated; otherwise, a message is printed out. In a real program, instead of simply noting that the shared data has changed, you could repeat the Task calculation or try a different method to update the shared data. For example, in the listing, we could have tried to add the local balance to the shared value.

Convergent Isolation with Interlocked.CompareExchange()
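
A sketch of this convergent-isolation approach, using the same assumed BankAccount field, might look like the following. CompareExchange() returns the value that was in the shared variable before the call, so comparing that result with the starting balance tells us whether another Task got there first:

using System;
using System.Threading;
using System.Threading.Tasks;

class BankAccount
{
    public int Balance;
}

class Program
{
    static void Main()
    {
        BankAccount account = new BankAccount();

        Task[] tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Factory.StartNew(() =>
            {
                // Note the shared balance at the start and work on an isolated copy.
                int startBalance = account.Balance;
                int localBalance = startBalance;
                for (int j = 0; j < 1000; j++)
                {
                    localBalance++;
                }

                // Write the local balance back only if the shared balance is unchanged.
                int sharedValue = Interlocked.CompareExchange(
                    ref account.Balance, localBalance, startBalance);
                if (sharedValue != startBalance)
                {
                    Console.WriteLine(
                        "Shared balance changed from {0} to {1} - update skipped",
                        startBalance, sharedValue);
                }
            });
        }

        Task.WaitAll(tasks);
        Console.WriteLine("Final balance: {0}", account.Balance);
    }
}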

Using Spin Locking
Typically, when waiting to acquire a regular synchronization primitive, your Task is taken out of the schedule and does not run again until it has acquired the primitive. Spinning takes a different approach: the Task enters a tight execution loop, periodically trying to acquire the primitive.

Spinning avoids the overhead of rescheduling the Task because it never stops running, but it doesn’t allow another Task to take its place. Spinning is useful if you expect the wait to acquire the primitive to be very short.

The System.Threading.SpinLock class is a lightweight, spin-based synchronization primitive. It has a similar structure to other primitives in that it relies on Enter(), TryEnter(), and Exit() methods to acquire and release the lock. The following listing shows the bank account example implemented using SpinLock.

Using the SpinLock Primitive
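
A sketch of the SpinLock version, again with illustrative names, might look like this:

using System;
using System.Threading;
using System.Threading.Tasks;

class BankAccount
{
    public int Balance { get; set; }
}

class Program
{
    // SpinLock is a struct, so keep a single shared instance and never copy it.
    static SpinLock spinLock = new SpinLock();

    static void Main()
    {
        BankAccount account = new BankAccount();

        Task[] tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Factory.StartNew(() =>
            {
                for (int j = 0; j < 1000; j++)
                {
                    bool lockTaken = false;
                    try
                    {
                        spinLock.Enter(ref lockTaken);
                        account.Balance++;
                    }
                    finally
                    {
                        // Release the lock only if it was acquired.
                        if (lockTaken)
                        {
                            spinLock.Exit();
                        }
                    }
                }
            });
        }

        Task.WaitAll(tasks);
        Console.WriteLine("Expected balance: 10000, actual balance: {0}", account.Balance);
    }
}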

The constructor for SpinLock has an overload that enables or disables owner tracking, which simply means that the primitive keeps a record of which Task has acquired the lock. SpinLock doesn't support recursive locking, so if you have already acquired the lock, you must not try to acquire it again. If you have enabled owner tracking, attempting recursive locking will cause a System.Threading.LockRecursionException to be thrown. If you have disabled owner tracking and try to lock recursively, a deadlock will occur. SpinLock has three properties that can help you avoid inadvertent recursive lock attempts, described in the following table.

System.Threading.SpinLock Properties

Using Wait Handles and the Mutex Class
Wait handles are wrappers around a Windows feature called synchronization handles. Several .NET synchronization primitives are based on wait handles, and they all derive from the System.Threading.WaitHandle class. Each class has slightly different characteristics.

The wait handle class with the most relevance to avoiding data races is System.Threading.Mutex. The following listing shows the basic use of the Mutex class to solve the bank account data race problem. You acquire the lock on a Mutex by calling the WaitOne() method and release the lock by calling ReleaseMutex().

Basic Use of the Mutex Class
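
A sketch of the Mutex version might look like the following (names are illustrative):

using System;
using System.Threading;
using System.Threading.Tasks;

class BankAccount
{
    public int Balance { get; set; }
}

class Program
{
    static void Main()
    {
        BankAccount account = new BankAccount();

        // A local (unnamed) Mutex shared by all of the Tasks in this process.
        Mutex mutex = new Mutex();

        Task[] tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Factory.StartNew(() =>
            {
                for (int j = 0; j < 1000; j++)
                {
                    // Acquire the lock, update the balance, and release the lock.
                    mutex.WaitOne();
                    account.Balance++;
                    mutex.ReleaseMutex();
                }
            });
        }

        Task.WaitAll(tasks);
        Console.WriteLine("Expected balance: 10000, actual balance: {0}", account.Balance);
    }
}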

Acquiring Multiple Locks
All classes that derive from WaitHandle inherit three methods that can be used to acquire the lock. You have seen the WaitOne() instance method. In addition, the static WaitAll() and WaitAny() methods allow you to acquire multiple locks with one call. The following listing demonstrates the WaitAll() method, which causes the Task to block until all of the locks can be acquired.

The listing creates two BankAccounts and two Mutexes. Two Tasks are created that modify the balance of one of the two accounts, and each acquires the lock from the Mutex for the account it is working with. The third Task changes the balance of both accounts and, therefore, needs to acquire the lock from both Mutexes to avoid starting a data race with one of the other Tasks.

Acquiring Multiple Locks with Mutex.WaitAll()
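
A sketch of this arrangement might look like the following; the account and Mutex names, as well as the amounts, are illustrative:

using System;
using System.Threading;
using System.Threading.Tasks;

class BankAccount
{
    public int Balance { get; set; }
}

class Program
{
    static void Main()
    {
        BankAccount account1 = new BankAccount();
        BankAccount account2 = new BankAccount();
        Mutex mutex1 = new Mutex();
        Mutex mutex2 = new Mutex();

        // This Task only touches account1, so it only needs mutex1.
        Task task1 = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 1000; i++)
            {
                mutex1.WaitOne();
                account1.Balance++;
                mutex1.ReleaseMutex();
            }
        });

        // This Task only touches account2, so it only needs mutex2.
        Task task2 = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 1000; i++)
            {
                mutex2.WaitOne();
                account2.Balance += 2;
                mutex2.ReleaseMutex();
            }
        });

        // This Task changes both accounts, so it must hold both locks.
        Task task3 = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 1000; i++)
            {
                // WaitAll() is inherited from WaitHandle and acquires both locks in one call.
                WaitHandle.WaitAll(new WaitHandle[] { mutex1, mutex2 });
                account1.Balance--;
                account2.Balance--;
                // The locks were acquired together but must be released individually.
                mutex1.ReleaseMutex();
                mutex2.ReleaseMutex();
            }
        });

        Task.WaitAll(task1, task2, task3);
        Console.WriteLine("Account1: {0}, Account2: {1}", account1.Balance, account2.Balance);
    }
}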

The WaitAll() method is inherited from the WaitHandle class and takes an array of WaitHandles as the set of locks to acquire. Notice that although you can acquire multiple locks in a single step, you must release them individually using the Mutex.ReleaseMutex() method. The WaitAny() method returns when any of the locks have been acquired, and it returns an int that tells you the position of the acquired lock in the WaitHandle array passed in as a parameter.

The WaitOne(), WaitAll(), and WaitAny() methods all have overloads that let you attempt to acquire a lock or set of locks for a given period of time; see the .NET Framework documentation for details.

Configuring Interprocess Synchronization
Wait handles can be shared between processes. The Mutexes in the previous two listings were local, meaning that they were only usable in one process; a local Mutex is created when you use the default constructor.

You can also create a named system Mutex, which is the kind that can be shared between processes. You do this by using the overloaded constructors that take a name argument. When using a named Mutex, it is important to see if the Mutex you are looking for has already been created, because it is possible to create several Mutexes with the same name that exist independently of one another.

You can test whether a Mutex exists by using the static Mutex.OpenExisting() method, which takes a string argument as the name of the Mutex you wish to open. If a Mutex with the name you have provided exists, it is returned by the OpenExisting() method. A System.Threading.WaitHandleCannotBeOpenedException is thrown if a Mutex has not already been created with that name.

The following listing shows how to use the OpenExisting() method and the overloaded constructor to test for, create, and use a shared Mutex. To test this listing, you must run two or more instances of the compiled program. Control of the Mutex will pass from process to process each time you press the Enter key. If you compile and run the code in this listing, the program will loop forever, so you can safely close the console window when you have had enough.

Interprocess Mutex Use
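
A sketch of such a program might look like the following; the Mutex name here is purely illustrative, and you should pick your own distinctive name:

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // An illustrative name - choose something distinctive for your own programs.
        string mutexName = "myApp_interprocessMutexExample";
        Mutex mutex;

        try
        {
            // Try to open a Mutex that another process may already have created.
            mutex = Mutex.OpenExisting(mutexName);
            Console.WriteLine("Opened existing Mutex");
        }
        catch (WaitHandleCannotBeOpenedException)
        {
            // No Mutex with that name exists yet, so create a named system Mutex.
            mutex = new Mutex(false, mutexName);
            Console.WriteLine("Created new Mutex");
        }

        while (true)
        {
            Console.WriteLine("Waiting to acquire Mutex");
            mutex.WaitOne();
            Console.WriteLine("Acquired Mutex - press Enter to release it");
            Console.ReadLine();
            mutex.ReleaseMutex();
        }
    }
}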

Tip: You must be careful to pick a distinctive name for your Mutex to avoid conflicting with other programs running on the same machine. You will get some very odd behavior if you share a Mutex with someone else’s application.

Using Declarative Synchronization
So far, we have seen how to selectively apply synchronization to critical regions. An alternative is to declaratively synchronize all of the fields and methods in a class by applying the Synchronization attribute. Your class must extend System.ContextBoundObject and import the System.Runtime.Remoting.Contexts namespace in order to use the Synchronization attribute.

To demonstrate declarative synchronization with our bank account example, let's change the BankAccount class so that the balance can be read with the GetBalance() method and incremented with the IncrementBalance() method, as shown in the following listing. Now, all of the code statements are contained in a single class and can be synchronized by applying the Synchronization attribute and having the BankAccount class extend ContextBoundObject.

Using Declarative Synchronization
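
A sketch of the declarative version might look like this. Note that the Synchronization attribute and ContextBoundObject rely on .NET remoting contexts, which are available in the .NET Framework but not in .NET Core or modern .NET; the method and Task counts are illustrative:

using System;
using System.Runtime.Remoting.Contexts;
using System.Threading.Tasks;

// Applying the Synchronization attribute and extending ContextBoundObject means
// that every member of this class is protected by a single lock.
[Synchronization]
class BankAccount : ContextBoundObject
{
    private int balance;

    public int GetBalance()
    {
        return balance;
    }

    public void IncrementBalance()
    {
        balance++;
    }
}

class Program
{
    static void Main()
    {
        BankAccount account = new BankAccount();

        Task[] tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Factory.StartNew(() =>
            {
                for (int j = 0; j < 1000; j++)
                {
                    // No explicit locking - the context handles synchronization.
                    account.IncrementBalance();
                }
            });
        }

        Task.WaitAll(tasks);
        Console.WriteLine("Expected balance: 10000, actual balance: {0}", account.GetBalance());
    }
}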

The problem with using the Synchronization attribute is that every field and method of your class, even if they don’t modify shared data, becomes synchronized using a single lock, and this can cause a performance problem. Declarative synchronization is a heavy-handed approach to avoiding data races and should be used with caution.

Using Reader-Writer Locks
The synchronization primitives discussed so far consider all Tasks as equally likely to cause a data race. That idea is reasonable, but in many situations, it is not true. Often, there will be many Tasks that only need to read shared data and only a few that need to modify it. Lots of Tasks can read a data value concurrently without causing a data race—only changing data causes problems.

A reader-writer lock is a common performance optimization. It contains two locks, one for reading data and one for writing data, and allows multiple reader Tasks to acquire the read lock simultaneously. When a writer requests the write lock, it is made to wait for any active readers to release the read lock before being allowed to proceed, at which point the writer acquires both the read and write locks and has exclusive access to the critical region. This means that any requests by readers or writers to acquire either lock are made to wait until the active writer has finished with the critical region and releases the locks.

Using the ReaderWriterLockSlim Class

The System.Threading.ReaderWriterLockSlim class provides a convenient implementation of reader-writer locking that takes care of managing the locks. This class is a lightweight alternative to the heavyweight System.Threading.ReaderWriterLock class, which Microsoft no longer recommends using. The lightweight version is simpler to use, offers better performance, and avoids some potential deadlocks.

You acquire and release the ReaderWriterLockSlim read lock by calling the EnterReadLock() and ExitReadLock() methods. Similarly, you acquire and release the write lock by calling EnterWriteLock() and ExitWriteLock(). The ReaderWriterLockSlim class only provides the synchronization primitives; it does not enforce the separation between read and write operations in your code. You must be careful to avoid modifying shared data in a Task that has only acquired the read lock. The following listing demonstrates the use of the ReaderWriterLockSlim class.

Using the ReaderWriterLockSlim Class
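
The original listing is not reproduced here; the following sketch reconstructs the behavior described below (five reader Tasks plus a writer on the main thread) and is consistent with the sample output, although the exact details are assumptions:

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
        CancellationTokenSource tokenSource = new CancellationTokenSource();

        // Five reader Tasks repeatedly acquire and release the read lock.
        for (int i = 0; i < 5; i++)
        {
            Task.Factory.StartNew(() =>
            {
                while (!tokenSource.Token.IsCancellationRequested)
                {
                    rwLock.EnterReadLock();
                    Console.WriteLine("Read lock acquired - count: {0}", rwLock.CurrentReadCount);
                    Thread.Sleep(1000);
                    rwLock.ExitReadLock();
                    Console.WriteLine("Read lock released - count {0}", rwLock.CurrentReadCount);
                }
            }, tokenSource.Token);
        }

        // When the user presses Enter, request the write lock on the main thread.
        Console.ReadLine();
        Console.WriteLine("Requesting write lock");
        rwLock.EnterWriteLock();
        Console.WriteLine("Write lock acquired");
        Console.WriteLine("Press enter to release write lock");
        Console.ReadLine();
        rwLock.ExitWriteLock();

        // Press Enter one more time to cancel the reader Tasks and exit.
        Console.ReadLine();
        tokenSource.Cancel();
    }
}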

The example creates five Tasks that acquire the read lock, wait for one second, and then release the read lock, repeating this sequence until they are cancelled. As the read lock is acquired and released, a message is printed to the console, and this message shows the number of holders of the read lock, which is available by reading the CurrentReadCount property.

When you press the Enter key, the main application thread acquires the write lock, which it holds until you press Enter again. You can see from the following results that once the write lock has been requested, the number of Tasks holding the read lock starts to drop. This is because calls to EnterReadLock() will now block until the pending write lock has been acquired and released, which ensures that the writer gets exclusive access.

...

Read lock released - count 4

Read lock acquired - count: 5

Requesting write lock

Read lock released - count 4

Read lock released - count 3

Read lock released - count 2

Read lock released - count 1

Read lock released - count 0

Write lock acquired

Press enter to release write lock

Read lock acquired - count: 1

Read lock acquired - count: 3

Read lock acquired - count: 2

Read lock acquired - count: 4

Read lock acquired - count: 5

Read lock released - count 4

...

If you press Enter again, the main application thread releases the write lock, which allows the Tasks to continue their acquire/release sequence once more.

Using Recursion and Upgradable Read Locks
The previous listing separates the code that reads the shared data from the code that modifies it. Often, you will want to read data and make a change only if some condition is met. You could acquire the write lock to do this, but that requires exclusive access. Because you don't know in advance whether you actually need to make changes, that would be a potential performance problem.

But you are thinking, "Aha! I can acquire the (nonexclusive) read lock, perform the test, and then acquire the (exclusive) write lock if I need to make modifications." In that case, you would produce some code similar to the following fragment:
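
A sketch of that fragment (rwLock, sharedData, and the test condition are illustrative):

// The tempting - but broken - pattern: upgrading a read lock directly.
static void BrokenUpgrade(ReaderWriterLockSlim rwLock, ref int sharedData)
{
    rwLock.EnterReadLock();
    if (sharedData == 0)
    {
        // Throws LockRecursionException: the read lock is still held.
        rwLock.EnterWriteLock();
        sharedData = 1;
        rwLock.ExitWriteLock();
    }
    rwLock.ExitReadLock();
}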

Unfortunately, when you came to run this code, you would get the following exception:

Unhandled Exception: System.Threading.LockRecursionException: Write lock may not be acquired with read lock held. This pattern is prone to deadlocks. Please ensure that read locks are released before taking a write lock. If an upgrade is necessary, use an upgrade lock in place of the read lock.
   at System.Threading.ReaderWriterLockSlim.TryEnterWriteLockCore(Int32 millisecondsTimeout)
...

Acquiring the lock on a primitive when you already have a lock is called lock recursion. The ReaderWriterLockSlim class doesn’t support lock recursion by default, because lock recursion has the potential to create deadlocks. Instead, you should use an upgradable read lock, which allows you to read the shared data, perform your test, and safely acquire exclusive write access if you need it. You acquire and release an upgradable read lock by calling the EnterUpgradeableReadLock() and ExitUpgradeableReadLock() methods and then acquire and release the write lock (if needed) by calling the EnterWriteLock() and ExitWriteLock() as before.

Once the upgradable read lock is acquired, requests for the write lock and further requests for the upgradable read lock will block until ExitUpgradeableReadLock() is called, but multiple holders of the read lock are allowed. Upgrading the lock by calling EnterWriteLock() waits for all of the current holders of the read lock to call ExitReadLock() before the write lock is granted. The following listing demonstrates the use of the upgradable read lock by having five Tasks that read shared data and two that use the upgradable read lock to make changes.

Avoiding Lock Recursion by Using an Upgradable Read Lock
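
A sketch of such a listing might look like the following; the shared data and the update condition are illustrative:

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static int sharedData = 0;

    static void Main()
    {
        ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
        CancellationTokenSource tokenSource = new CancellationTokenSource();

        // Five Tasks that only read the shared data.
        for (int i = 0; i < 5; i++)
        {
            Task.Factory.StartNew(() =>
            {
                while (!tokenSource.Token.IsCancellationRequested)
                {
                    rwLock.EnterReadLock();
                    Console.WriteLine("Shared data value: {0}", sharedData);
                    Thread.Sleep(500);
                    rwLock.ExitReadLock();
                }
            }, tokenSource.Token);
        }

        // Two Tasks that read the data and modify it only if a condition is met.
        for (int i = 0; i < 2; i++)
        {
            Task.Factory.StartNew(() =>
            {
                while (!tokenSource.Token.IsCancellationRequested)
                {
                    // The upgradable read lock allows other readers but only one upgrader.
                    rwLock.EnterUpgradeableReadLock();
                    if (sharedData % 2 == 0)
                    {
                        // Upgrade to the write lock for exclusive access.
                        rwLock.EnterWriteLock();
                        sharedData++;
                        Console.WriteLine("Shared data changed to: {0}", sharedData);
                        rwLock.ExitWriteLock();
                    }
                    rwLock.ExitUpgradeableReadLock();
                    Thread.Sleep(1000);
                }
            }, tokenSource.Token);
        }

        Console.WriteLine("Press enter to cancel and exit");
        Console.ReadLine();
        tokenSource.Cancel();
    }
}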

Only one holder of the upgradable lock is allowed at a time, which means you should partition your requests for locks carefully to have as few requests as possible for upgradable and write locks. You may be tempted to separate your read and write requests so that you release the read lock and then try to acquire the write lock only if you need to make a change, as shown in the following fragment:
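
A sketch of that fragment (again with illustrative names):

// The unsafe release-then-reacquire pattern.
static void RacyUpdate(ReaderWriterLockSlim rwLock, ref int sharedData)
{
    rwLock.EnterReadLock();
    bool changeNeeded = (sharedData == 0);
    rwLock.ExitReadLock();

    // Another Task may change sharedData here, invalidating the test above.
    if (changeNeeded)
    {
        rwLock.EnterWriteLock();
        sharedData = 1;
        rwLock.ExitWriteLock();
    }
}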

This creates a data race, because between the point at which you release the read lock and acquire the write lock, another Task could have modified the shared data and changed the condition that you were looking for. The only way this approach works is if there is only one Task that can change the shared data. If that is the case, there is no performance impact in using an upgradable lock, because there will be no other upgrade requests to contend with.
