Threading in C#, by Joseph Albahari
The onus is on the developer to superimpose thread safety, typically with exclusive locks. (The collections in System.Collections.Concurrent are an exception.) Another way to cheat is to minimize thread interaction by minimizing shared data. Since multiple client requests can arrive simultaneously, the server methods they call must be thread-safe. A stateless design, popular for reasons of scalability, intrinsically limits the possibility of interaction, since classes do not persist data between requests.
Thread interaction is then limited just to the static fields one may choose to create, for such purposes as caching commonly used data in memory and providing infrastructure services such as authentication and auditing.
The final approach to implementing thread safety is to use an automatic locking regime: whenever a method or property on such a synchronized object is called, an object-wide lock is automatically taken for the whole execution of the method or property.
Although this reduces the thread-safety burden, it creates problems of its own. For these reasons, manual locking is generally a better option, at least until a less simplistic automatic locking regime becomes available. If we had two interrelated lists, we would have to choose a common object upon which to lock (we could nominate one of the lists, or better, use an independent field dedicated to the purpose). Enumerating .NET collections is also thread-unsafe in the sense that an exception is thrown if the list is modified during enumeration.
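The copy-then-enumerate approach can be sketched as follows; the _list and _locker fields are my own illustrative names:

```csharp
using System;
using System.Collections.Generic;

class SafeEnumeration
{
    static readonly List<int> _list = new List<int> { 1, 2, 3 };
    static readonly object _locker = new object();

    public static int[] Snapshot()
    {
        lock (_locker)               // hold the lock only long enough to copy
            return _list.ToArray();  // List<T>.ToArray copies the elements
    }

    static void Main()
    {
        foreach (int i in Snapshot())   // enumerate the copy; other threads can
            Console.Write(i + " ");     // now modify _list without breaking us
    }
}
```

Because the lock is held only for the copy, other threads are blocked only briefly, and the subsequent enumeration cannot throw even if _list is modified concurrently.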
Rather than locking for the duration of enumeration, we can first copy the items to an array under the lock, and then enumerate the copy. Locking around thread-safe objects: sometimes you also need to lock around accessing thread-safe objects. Suppose we want to add an item only if it is not already present: if (!list.Contains(newItem)) list.Add(newItem); Whether or not the list is thread-safe, this statement certainly is not! The whole if statement would have to be wrapped in a lock to prevent preemption in between testing for containership and adding the new item.
This same lock would then need to be used everywhere we modified that list. For instance, the statement list.Clear(); would also need to be wrapped in the identical lock, to ensure that it did not preempt the former statement.
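A sketch of both operations sharing one lock (the class and field names are mine):

```csharp
using System;
using System.Collections.Generic;

class SharedLockDemo
{
    static readonly List<string> _list = new List<string>();
    static readonly object _locker = new object();   // one lock guards every access

    public static void AddIfAbsent(string newItem)
    {
        lock (_locker)                      // test-and-add as one atomic unit
            if (!_list.Contains(newItem))
                _list.Add(newItem);
    }

    public static void ClearAll()
    {
        lock (_locker) _list.Clear();       // same lock, so Clear can't preempt
    }                                       // an in-flight test-and-add

    public static int Count
    {
        get { lock (_locker) return _list.Count; }
    }
}
```

Any other mutation of the list would need to acquire the same _locker for the guarantee to hold.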
Locking around accessing a collection can cause excessive blocking in highly concurrent environments. To this end, Framework 4.0 provides thread-safe collections in the System.Collections.Concurrent namespace. Static members: wrapping access to an object around a custom lock works only if all concurrent threads are aware of, and use, the lock.
This may not be the case if the object is widely scoped. The worst case is with static members in a public type. For instance, imagine that the static property DateTime.Now was not thread-safe, and that two concurrent calls could result in garbled output or an exception. The only way to remedy this with external locking might be to lock the type itself, lock (typeof (DateTime)), before calling DateTime.Now.
This would work only if all programmers agreed to do this (which is unlikely). Furthermore, locking a type creates problems of its own. For this reason, static members on the DateTime struct have been carefully programmed to be thread-safe. This is a common pattern throughout the .NET Framework: static members are thread-safe, while instance members are not. Following this pattern also makes sense when writing types for public consumption, so as not to create impossible thread-safety conundrums.
Thread safety in static methods is something that you must explicitly code; it doesn't happen automatically by virtue of a method being static. Read-only thread safety: making types thread-safe for concurrent read-only access (where possible) is advantageous because it means that consumers can avoid excessive locking.
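Returning to static methods for a moment, here's a minimal sketch of explicitly coding thread safety into one (the Counter class is my own example):

```csharp
using System;

class Counter
{
    static int _value;
    static readonly object _locker = new object();

    // Being static doesn't make this thread-safe by itself;
    // the explicit lock is what makes concurrent calls safe.
    public static int Increment()
    {
        lock (_locker) return ++_value;
    }

    static void Main() { Console.WriteLine(Increment()); }
}
```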
Many .NET Framework types follow this principle. Following it yourself is simple: if you document a type as thread-safe for concurrent read-only access, don't write to fields within methods that a consumer would expect to be read-only (or lock around doing so); otherwise, the type becomes thread-unsafe for consumers that expected it to be read-only. In the absence of documentation, it pays to be cautious in assuming whether a method is read-only in nature.
A good example is the Random class: you cannot call Random.Next concurrently, because its internal implementation requires that it update private seed values. Therefore, you must either lock around using the Random class, or maintain a separate instance per thread. Thread safety in application servers: application servers need to be multithreaded to handle simultaneous client requests.
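One way to realize the separate-instance-per-thread remedy for Random is ThreadLocal&lt;T&gt;; the seeding scheme below is one common choice, not the only one:

```csharp
using System;
using System.Threading;

class PerThreadRandom
{
    // One Random per thread: no locking, and no risk of two threads
    // corrupting the shared internal seed state.
    static readonly ThreadLocal<Random> _rand = new ThreadLocal<Random>(
        () => new Random(Guid.NewGuid().GetHashCode()));

    public static int NextNumber()
    {
        return _rand.Value.Next();   // Next() returns a non-negative int
    }

    static void Main() { Console.WriteLine(NextNumber()); }
}
```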
Fortunately, such a possibility is rare; a typical server class is either stateless (no fields) or has an activation model that creates a separate object instance for each client or each request. Interaction usually arises only through static fields, sometimes used for caching parts of a database in memory to improve performance. For example, suppose you have a RetrieveUser method that queries a database and caches results in a static dictionary. In this example, we choose a practical compromise between simplicity and performance in locking.
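Here's one way such a RetrieveUser method might look; the User type and the QueryDatabase stand-in are hypothetical:

```csharp
using System;
using System.Collections.Generic;

class User { public int ID; }

class UserServer
{
    static readonly Dictionary<int, User> _users = new Dictionary<int, User>();

    public static User RetrieveUser(int id)
    {
        User u;
        lock (_users)
            if (_users.TryGetValue(id, out u)) return u;

        // The (simulated) database query runs outside the lock, so other
        // requests aren't blocked; two threads may occasionally both query
        // the same id, which is the small inefficiency discussed here.
        u = QueryDatabase(id);

        lock (_users) _users[id] = u;   // last writer wins; both results match
        return u;
    }

    static User QueryDatabase(int id)
    {
        return new User { ID = id };    // stand-in for real data access
    }
}
```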
Our design actually creates a very small potential for inefficiency: two threads could occasionally query the database for the same user at the same time. Locking once across the whole method would prevent this, but would create a worse inefficiency: the entire cache would be locked for the duration of the database query. Rich client applications and thread affinity: both the WPF and Windows Forms libraries follow models based on thread affinity. Although each has a separate implementation, they are both very similar in how they function. The UI objects in each library have thread affinity, which means that only the thread that instantiates them can subsequently access their members.
Violating this causes either unpredictable behavior, or an exception to be thrown. On the negative side, if you want to call a member on object X created on another thread Y, you must marshal the request to thread Y. You can do this explicitly via Control.Invoke or Control.BeginInvoke in Windows Forms, or Dispatcher.Invoke or Dispatcher.BeginInvoke in WPF. Invoke and BeginInvoke both accept a delegate, which references the method on the target control that you want to run. Suppose we have a window that contains a text box called txtMessage, whose content we wish a worker thread to update.
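A sketch of that marshaling in WPF; txtMessage, StartWork, and DoLongRunningTask are illustrative names, and the fragment assumes it lives inside a Window subclass (so Dispatcher refers to that window's dispatcher):

```csharp
// Inside a WPF Window subclass; txtMessage is a TextBox created on the UI thread.
void StartWork()
{
    new System.Threading.Thread(Work).Start();   // spawn the worker
}

void Work()
{
    string result = DoLongRunningTask();         // runs on the worker thread

    // Marshal the UI update back to the thread that owns txtMessage:
    Dispatcher.Invoke(new Action(() => txtMessage.Text = result));
}
```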
Worker threads typically execute long-running tasks such as fetching data. Most rich client applications have a single UI thread which is also the main application thread and periodically spawn worker threads — either directly or using BackgroundWorker.
These workers then marshal back to the main UI thread in order to update controls or report on progress. So, when would an application have multiple UI threads? The main scenario is when you have an application with multiple top-level windows, often called a Single Document Interface (SDI) application, such as Microsoft Word. By giving each such window its own UI thread, the application can be made more responsive. Immutable objects: an immutable object is one whose state cannot be altered, externally or internally.
The fields in an immutable object are typically declared read-only and are fully initialized during construction. Immutability is a hallmark of functional programming — where instead of mutating an object, you create a new object with different properties. LINQ follows this paradigm.
Immutability is also valuable in multithreading in that it avoids the problem of shared writable state, by eliminating (or minimizing) the writable state. One pattern is to use immutable objects to encapsulate a group of related fields, to minimize lock durations.
To take a very simple example, suppose we had two fields that must always be read and written as a unit, such as a progress percentage and a status message. Rather than locking around those fields everywhere, we could define an immutable class that holds both, and swap in a whole new instance on each update; then we can read its values without needing to hold on to the lock. Technically, the final reads are thread-safe by virtue of the preceding lock performing an implicit memory barrier (see part 4).
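A sketch of the pattern, with an illustrative ProgressStatus class (the names here are mine):

```csharp
using System;

class ProgressStatus     // immutable: state can't change after construction
{
    public readonly int PercentComplete;
    public readonly string StatusMessage;

    public ProgressStatus(int percentComplete, string statusMessage)
    {
        PercentComplete = percentComplete;
        StatusMessage = statusMessage;
    }
}

class Worker
{
    readonly object _statusLocker = new object();
    ProgressStatus _status = new ProgressStatus(0, "Starting");

    public void Report(int percent, string message)
    {
        var newStatus = new ProgressStatus(percent, message);   // built outside any lock
        lock (_statusLocker) _status = newStatus;               // brief lock: one reference swap
    }

    public ProgressStatus Read()
    {
        ProgressStatus s;
        lock (_statusLocker) s = _status;   // copy the reference under the lock
        return s;                           // then read s's fields lock-free
    }
}
```

Updating is just a brief lock around a single reference assignment; readers copy the reference under the lock and then read the fields of the immutable snapshot without holding any lock.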
Note that this lock-free approach prevents inconsistency within a group of related fields. But it doesn't prevent data from changing while you subsequently act on it; for this, you usually need a lock. In fact, we can do this without using a single lock, through the use of explicit memory barriers and Interlocked operations. This is an advanced technique which we describe later, in the parallel programming section.
Signaling with event wait handles: event wait handles are used for signaling. Signaling is when one thread waits until it receives notification from another. Event wait handles are the simplest of the signaling constructs, and they are unrelated to C# events. They come in three flavors: AutoResetEvent, ManualResetEvent, and CountdownEvent. The former two are based on the common EventWaitHandle class, from which they derive all their functionality.
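A minimal AutoResetEvent sketch (names mine): the worker blocks in WaitOne until the main thread calls Set.

```csharp
using System;
using System.Threading;

class Signaling
{
    static readonly EventWaitHandle _signal = new AutoResetEvent(false);
    static string _message;

    public static string Run()
    {
        var worker = new Thread(() =>
        {
            _signal.WaitOne();            // blocks until another thread calls Set
            _message = "Notified";
        });
        worker.Start();
        Thread.Sleep(100);                // pretend to do other work first
        _signal.Set();                    // wake the waiting thread
        worker.Join();
        return _message;
    }

    static void Main() { Console.WriteLine(Run()); }
}
```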
To parallelize this, we could replace the foreach statement with a call to Parallel.ForEach, but locking around accessing a shared array would all but kill the potential for parallelization. PLINQ's Aggregate offers a tidy solution. The accumulator, in this case, is an array, just like the letterFrequencies array in our preceding example.
Notice that the local accumulation function mutates the localFrequencies array. The ability to perform this optimization is important, and is legitimate because localFrequencies is local to each thread. PFX provides a basic form of structured parallelism via three static methods in the Parallel class:
- Parallel.Invoke executes an array of delegates in parallel
- Parallel.For performs the parallel equivalent of a C# for loop
- Parallel.ForEach performs the parallel equivalent of a C# foreach loop
All three methods block until all work is complete. As with PLINQ, after an unhandled exception, remaining workers are stopped after their current iteration and the exception (or exceptions) are thrown back to the caller, wrapped in an AggregateException.
Parallel.Invoke executes an array of Action delegates in parallel, and then waits for them to complete. The simplest version of the method is defined as follows: public static void Invoke (params Action[] actions); You could use Parallel.Invoke, for instance, to download two web pages at once. Parallel.Invoke still works efficiently if you pass in an array of a million delegates, because it partitions large numbers of elements into batches which it assigns to a handful of underlying Tasks, rather than creating a separate Task for each delegate.
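A runnable sketch of Parallel.Invoke with two delegates; real code might call something like new WebClient().DownloadString(url) in each, but here the downloads are simulated so the example is self-contained:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class InvokeDemo
{
    public static bool DoneA, DoneB;

    public static void Run()
    {
        Parallel.Invoke(
            () => { Thread.Sleep(100); DoneA = true; },   // simulated download 1
            () => { Thread.Sleep(100); DoneB = true; });  // simulated download 2
        // Parallel.Invoke blocks until both delegates complete.
    }

    static void Main()
    {
        Run();
        Console.WriteLine(DoneA && DoneB);
    }
}
```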
This means you need to keep thread safety in mind; adding items to a shared unprotected collection from the delegates, for instance, is thread-unsafe. A better solution is to use a thread-safe collection; ConcurrentBag would be ideal in this case. Parallel.Invoke is also overloaded to accept a ParallelOptions object, through which you can cancel the operation. Any already-executing delegates will, however, continue to completion. See Cancellation for an example of how to use cancellation tokens.
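A small sketch of collecting results safely with ConcurrentBag:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class SafeCollect
{
    public static int Collect()
    {
        var bag = new ConcurrentBag<int>();    // thread-safe: no lock needed
        Parallel.Invoke(
            () => bag.Add(1),
            () => bag.Add(2),
            () => bag.Add(3));
        return bag.Count;
    }

    static void Main() { Console.WriteLine(Collect()); }   // 3
}
```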
Parallel.For and Parallel.ForEach perform the equivalent of a C# for and foreach loop, but with each iteration executing in parallel instead of sequentially. Here are their simplest signatures: public static ParallelLoopResult For (int fromInclusive, int toExclusive, Action&lt;int&gt; body); and public static ParallelLoopResult ForEach&lt;TSource&gt; (IEnumerable&lt;TSource&gt; source, Action&lt;TSource&gt; body); As with Parallel.Invoke, we can feed these methods a large number of work items and they will be efficiently partitioned onto a handful of underlying tasks. Outer versus inner loops: Parallel.For and Parallel.ForEach usually work best on outer rather than inner loops. Parallelizing both inner and outer loops is usually unnecessary.
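Minimal sketches of both loops; counting iterations with Interlocked is my own demonstration device:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ParallelLoops
{
    static int _count;

    public static int Run()
    {
        _count = 0;

        // Parallel equivalent of: for (int i = 0; i < 100; i++) ...
        Parallel.For(0, 100, i => Interlocked.Increment(ref _count));

        // Parallel equivalent of: foreach (char c in "Hello") ...
        Parallel.ForEach("Hello", c => Interlocked.Increment(ref _count));

        return _count;    // 105, though the iteration *order* was not guaranteed
    }

    static void Main() { Console.WriteLine(Run()); }
}
```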
Sometimes it's useful to know the loop iteration index, and in a parallel loop you can't simply maintain a shared counter. You must instead use a version of ForEach whose body delegate also accepts the index. Suppose we load up a dictionary of valid words, along with an array of a million words to test: we can then perform the spellcheck on our wordsToTest array using the indexed version of Parallel.ForEach.
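The indexed overload in miniature, using a three-element array of my own rather than the million-word test set:

```csharp
using System;
using System.Threading.Tasks;

class IndexedForEach
{
    public static string Run()
    {
        var letters = new[] { "a", "b", "c" };
        var slots = new string[letters.Length];

        // The third lambda parameter (long i) is the iteration index, so each
        // result can be written to its own slot without any locking.
        Parallel.ForEach(letters, (item, loopState, i) => slots[i] = i + ":" + item);

        return string.Join(" ", slots);
    }

    static void Main() { Console.WriteLine(Run()); }   // 0:a 1:b 2:c
}
```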
Suppose we enumerate the characters of "Hello, world", breaking out of the loop when we reach the comma. To parallelize this, instead of the C# break statement we call Break on the ParallelLoopState passed into the body delegate. Aside from this difference, calling Break yields at least the same elements as executing the loop sequentially. In contrast, calling Stop instead of Break forces all threads to finish right after their current iteration; in our example, calling Stop could give us a subset of the letters H, e, l, l, and o if another thread was lagging behind. The For and ForEach methods return a ParallelLoopResult, whose properties tell you whether the loop ran to completion, and if not, at what cycle the loop was broken.
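A sketch of Break in action, following the "Hello, world" example (the comma sits at index 5):

```csharp
using System;
using System.Threading.Tasks;

class BreakDemo
{
    public static ParallelLoopResult Run()
    {
        return Parallel.ForEach("Hello, world", (c, loopState) =>
        {
            if (c == ',') loopState.Break();   // instead of a C# break statement
        });
    }

    static void Main()
    {
        ParallelLoopResult result = Run();
        Console.WriteLine(result.IsCompleted);            // False
        Console.WriteLine(result.LowestBreakIteration);   // 5 (index of the comma)
    }
}
```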
If your loop body is long, you might want other threads to break partway through the method body in case of an early Break or Stop.
You can do this by polling the ShouldExitCurrentIteration property at various places in your code; this property becomes true immediately after a Stop — or soon after a Break.
ShouldExitCurrentIteration also becomes true after a cancellation request, or if an exception is thrown in the loop. IsExceptional lets you know whether an exception has occurred on another thread. Optimization with local values: Parallel.For and Parallel.ForEach each offer a set of overloads that feature a generic type argument called TLocal.
These overloads are designed to help you optimize the collation of data with iteration-intensive loops. Essentially, the problem is this: calculating 10 million square roots is easily parallelizable, but summing their values is troublesome because we must lock around updating the running total. Imagine a team of volunteers picking up a large volume of litter.
If all workers shared a single trash can, the travel and contention would make the process extremely inefficient. The obvious solution is for each worker to have a private trash can that is emptied into the main bin only occasionally. In this analogy, the volunteers are internal worker threads, and the local value represents a local trash can.
In order for Parallel to do this job, you must feed it two additional delegates that indicate:
1. How to initialize a new local value
2. How to combine a local aggregation with the master value
Additionally, instead of the body delegate returning void, it should return the new aggregate for the local value.
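Putting it together, here's a sketch of the square-root sum using the TLocal overload of Parallel.For; the delegate roles are marked in comments:

```csharp
using System;
using System.Threading.Tasks;

class LocalSum
{
    public static double SumOfRoots(int n)
    {
        object locker = new object();
        double grandTotal = 0;

        Parallel.For(1, n,
            () => 0.0,                          // 1. initialize each local value
            (i, state, localTotal) =>           //    body returns the new local value
                localTotal + Math.Sqrt(i),
            localTotal =>                       // 2. combine local into the master value
            { lock (locker) grandTotal += localTotal; });

        return grandTotal;
    }

    static void Main()
    {
        Console.WriteLine(SumOfRoots(10000000));
    }
}
```

Each thread accumulates into its own localTotal ("private trash can") lock-free; the lock is taken only once per thread, when the local total is folded into grandTotal.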