Most designs deal with process data by collecting it from field I/O device drivers, control tasks, and user interface tasks into a common data area, or database.
The individual data items within this database are associated, implicitly or explicitly, with process I/O tags. In some cases, more commonly with shared memory, these items may be grouped into larger structures encapsulating the data for a particular process area. Access methods to this database normally fall into four broad categories:
Passive data access through message passing. In this instance, the data is sent to and retrieved from the database using inter-process messaging. The database synchronizes access to the data by processing one message at a time. Client tasks must query the database synchronously to determine if new values have arrived. This is often called a "polling" model. The behaviour of the database is largely independent of the API (Application Programming Interface) used to access it.
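A minimal sketch of the polling model, using Python threads and queues as stand-ins for separate tasks and kernel message passing (the tag name and the message layout are invented for illustration):

```python
import queue
import threading

requests = queue.Queue()     # messages sent to the database task
db = {"FIC101.PV": 0.0}      # the common data area, keyed by tag

def database_task():
    # the database synchronizes access by processing one message at a time
    while True:
        msg = requests.get()
        if msg is None:          # shutdown sentinel
            break
        op, tag, payload, reply = msg
        if op == "write":
            db[tag] = payload
            reply.put("ok")
        elif op == "read":
            reply.put(db.get(tag))

t = threading.Thread(target=database_task)
t.start()

# a control task writes a value; a client then polls for the current value
reply = queue.Queue()
requests.put(("write", "FIC101.PV", 42.5, reply))
reply.get()                                   # wait for the write to complete
requests.put(("read", "FIC101.PV", None, reply))
value = reply.get()                           # synchronous query for the value
requests.put(None)
t.join()
```

Note that the client cannot know a new value has arrived except by issuing another read request: that repeated querying is what makes the model "polling".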
Passive data access through shared memory. Typical shared memory approaches treat the database simply as a common memory area where all clients have read and write access. The clients themselves synchronize access to the data through semaphores, allowing only a single writer and multiple readers of the data at any given time. The behaviour of the database is entirely encapsulated in the API used by the clients to access the data. Each client may periodically poll the shared memory for changes in value, and act upon those changes.
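The single-writer/multiple-reader discipline can be sketched with a readers-writer lock built from ordinary mutexes; a Python dict stands in for the shared memory area, and the tag name is hypothetical:

```python
import threading

class ReadWriteLock:
    # admits many concurrent readers, but only one writer at a time
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()    # protects the reader count
        self._writer = threading.Lock()   # held while anyone is reading/writing

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._writer.acquire()    # first reader locks out writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._writer.release()    # last reader admits writers again

    def acquire_write(self):
        self._writer.acquire()

    def release_write(self):
        self._writer.release()

shared = {"FIC101.PV": 0.0}   # stands in for the shared memory area
lock = ReadWriteLock()

def write_tag(tag, value):
    lock.acquire_write()
    try:
        shared[tag] = value
    finally:
        lock.release_write()

def read_tag(tag):
    lock.acquire_read()
    try:
        return shared[tag]
    finally:
        lock.release_read()

write_tag("FIC101.PV", 42.5)
```

All of the database's behaviour lives in `write_tag`/`read_tag`: the shared area itself is passive, which is the point of this category.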
Active data access through message passing. In this case, the data is sent to and retrieved from the database using inter-process messaging. When a value in the database changes, all client tasks that have registered an interest in that value are notified asynchronously of the change. This is often called an "event-driven" or "publish/subscribe" model. The database synchronizes write access to the data by processing one message at a time.
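A publish/subscribe sketch, assuming an in-process database that invokes registered callbacks on change (the function names and tag are illustrative, not any particular product's API):

```python
subscribers = {}   # tag -> list of notification callbacks
db = {}            # the common data area

def subscribe(tag, callback):
    # a client task registers its interest in one tag
    subscribers.setdefault(tag, []).append(callback)

def write(tag, value):
    # writes are serialized by the database; a genuine change
    # triggers asynchronous notification of every registrant
    if db.get(tag) != value:
        db[tag] = value
        for cb in subscribers.get(tag, []):
            cb(tag, value)

events = []
subscribe("FIC101.PV", lambda tag, v: events.append((tag, v)))
write("FIC101.PV", 42.5)
write("FIC101.PV", 42.5)   # unchanged value: no second notification
```

In a real system each callback would be a message delivered to a separate client task; here the clients never poll, they are told.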
Active data access through shared memory. In this approach, the database is offered as a shared memory area where all clients have read and write access. In addition, all clients register a mechanism (process ID and signal, or a proxy mechanism) that allows a writer to inform potential readers of a data change. The reader, upon receipt of the signal or proxy, can poll some or all of the database for changed values. This results in a hybrid event-driven and polled mechanism.
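The hybrid mechanism can be sketched with a `threading.Event` standing in for the registered signal or proxy; hypothetical per-tag change counters let the reader scan only for what actually changed:

```python
import threading

shared = {"FIC101.PV": 0.0, "TIC200.PV": 0.0}  # shared memory stand-in
versions = {tag: 0 for tag in shared}          # change counter per tag
notify = threading.Event()                     # stand-in for signal/proxy

def write_tag(tag, value):
    shared[tag] = value
    versions[tag] += 1
    notify.set()        # event half: kick the registered readers

def reader_scan(seen):
    # polled half: on notification, scan the database for values
    # that changed since this reader's last scan
    notify.wait(timeout=1.0)
    notify.clear()
    changed = {t: shared[t] for t, v in versions.items()
               if v > seen.get(t, 0)}
    seen.update(versions)
    return changed

seen = dict(versions)
write_tag("FIC101.PV", 42.5)
changed = reader_scan(seen)
```

The notification carries no data itself; it only tells the reader that a scan of the shared area is worthwhile, which is the hybrid nature described above.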
In virtually all cases, access to shared memory will be faster than data access through message passing in terms of simple read times. Message passing incurs more operating-system overhead in both system calls and context switches. This comparison, however, trivializes the real-world situation, making shared memory seem more attractive than it actually is. In reality, the choice of message passing vs. shared memory depends on the type of application being designed, the size of the data set, and the characteristics of the clients of that data.
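The simple-read-time difference can be illustrated roughly in Python, with blocking queues standing in for kernel message passing (the absolute numbers are meaningless, but the per-read cost of a request/reply round trip between tasks shows up clearly against a plain memory access):

```python
import queue
import threading
import time

N = 5000
shared = {"FIC101.PV": 42.5}   # shared-memory read: a plain memory access

t0 = time.perf_counter()
for _ in range(N):
    v_shm = shared["FIC101.PV"]
t_shm = time.perf_counter() - t0

# message-passing read: every access is a request/reply exchange
# with a server task, paying for synchronization and context switches
req, rep = queue.Queue(), queue.Queue()

def server():
    for _ in range(N):
        tag = req.get()
        rep.put(shared[tag])

srv = threading.Thread(target=server)
srv.start()
t0 = time.perf_counter()
for _ in range(N):
    req.put("FIC101.PV")
    v_msg = rep.get()
t_msg = time.perf_counter() - t0
srv.join()

print(f"shared memory: {t_shm:.4f}s, message passing: {t_msg:.4f}s")
```

Both paths return the same value; only the overhead differs, and as the text notes, raw read time is only one factor in the real design choice.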