What is Distributed Data Processing (DDP)?
An arrangement of networked computers in which data processing capabilities are spread across the network. In DDP, specific jobs are performed by specialized computers that may be far removed from the user and/or from other such computers. This arrangement is in contrast to 'centralized' computing, in which several client computers share the same server (usually a minicomputer or mainframe) or a cluster of servers. DDP provides greater scalability, but also requires more network administration resources.
Understanding Distributed Data Processing (DDP)
Distributed database system technology is the union of what appear to be two diametrically opposed approaches to data processing: database system technology and computer network technology. Database systems have taken us from a paradigm of data processing in which each application defined and maintained its own data to one in which data is defined and administered centrally. This new orientation results in data independence, whereby application programs are immune to changes in the logical or physical organization of the data. One of the major motivations behind the use of database systems is the desire to integrate the operational data of an enterprise and to provide centralized, and thus controlled, access to that data. The technology of computer networks, on the other hand, promotes a mode of work that goes against all centralization efforts. At first glance, it might be difficult to understand how these two contrasting approaches can possibly be synthesized to produce a technology that is more powerful and more promising than either one alone. The key to this understanding is the realization that the most important objective of database technology is integration, not centralization. It is important to realize that neither of these terms necessarily implies the other: it is possible to achieve integration without centralization, and that is exactly what distributed database technology attempts to achieve.
The term distributed processing has been one of the most used terms in computer science over the last couple of years. It has been used to refer to such diverse systems as multiprocessing systems, distributed data processing, and computer networks. Some of the other terms that have been used synonymously with distributed processing are distributed computers/multi-computers, satellite processing/satellite computers, back-end processing, dedicated/special-purpose computers, time-shared systems, and functionally modular systems.
Obviously, some degree of distributed processing goes on in any computer system, even on single-processor computers: starting with second-generation computers, the central processing unit and input/output functions have been separated and handled by distinct units. However, it should be quite clear that what we would like to refer to as distributed processing, or distributed computing, has nothing to do with this distribution of function within a single-processor computer system.
A term that has caused so much confusion is obviously quite difficult to define precisely. The working definition we use for a distributed computing system states that it is a number of autonomous processing elements that are interconnected by a computer network and that cooperate in performing their assigned tasks. A processing element, in this definition, is a computing device that can execute a program on its own.
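The working definition above can be sketched in a few lines of code. Here, threads stand in for autonomous processing elements and a queue stands in for the interconnecting network: each element executes its own program on its share of the task, then cooperates by reporting a partial result. This is an illustrative analogy only, not an implementation of a real networked system.

```python
import queue
import threading

def processing_element(task_part, results):
    # Each element computes its share of the task autonomously...
    partial = sum(task_part)
    # ...and cooperates by sending its result over the "network" (a queue).
    results.put(partial)

results = queue.Queue()
task = list(range(100))                  # the assigned task: sum 0..99
shares = [task[i::4] for i in range(4)]  # divided among 4 elements
workers = [threading.Thread(target=processing_element, args=(s, results))
           for s in shares]
for w in workers:
    w.start()
for w in workers:
    w.join()

total = sum(results.get() for _ in workers)
print(total)  # prints 4950
```

The key properties of the definition are visible even in this toy: each element could run on its own, and cooperation happens only through explicit message passing.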
One fundamental question that needs to be asked is: what is being distributed? One thing that might be distributed is the processing logic. In fact, the definition of a distributed computing system given above implicitly assumes that the processing logic or processing elements are distributed. Another possible distribution is according to function: various functions of a computer system could be delegated to various pieces of hardware or software. Finally, control can be distributed: the control of the execution of various tasks might be distributed instead of being performed by one computer system. From the viewpoint of distributed systems, these modes of distribution are all necessary and important.
A distributed computing system can be classified with respect to a number of criteria, among them the degree of coupling, the interconnection structure, the interdependence of components, and the synchronization between components. The degree of coupling is a measure of how closely the processing elements are connected; it can be quantified as the ratio of the amount of data exchanged to the amount of local processing performed in executing a task. If communication takes place over a computer network, there exists weak coupling among the processing elements. However, if components are shared, we talk about strong coupling; the shared components can be either primary memory or secondary storage devices. As for the interconnection structure, one can distinguish between cases that have a point-to-point interconnection between processing elements and those that use a common interconnection channel. The processing elements might depend on each other quite strongly in the execution of a task, or this interdependence might be as minimal as passing messages at the beginning of execution and reporting results at the end. Synchronization between processing elements might be maintained by synchronous or by asynchronous means. Note that some of these criteria are not entirely independent: if synchronization is maintained by synchronous means, for example, one would expect the processing elements to be strongly interdependent and possibly to work in a strongly coupled fashion.
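The degree-of-coupling measure described above can be expressed as a simple ratio. The figures below are invented purely for illustration; in practice the quantities would be measured for a specific task on a specific system.

```python
def coupling_ratio(bytes_exchanged, bytes_processed_locally):
    """Ratio of data exchanged between processing elements to the
    amount of local processing performed in executing a task."""
    return bytes_exchanged / bytes_processed_locally

# Weakly coupled: elements communicate over a network, exchanging
# little data relative to the local work each performs.
weak = coupling_ratio(bytes_exchanged=1_000,
                      bytes_processed_locally=1_000_000)

# Strongly coupled: elements share components such as primary memory,
# so data flows through shared state in large volume relative to
# local work.
strong = coupling_ratio(bytes_exchanged=800_000,
                        bytes_processed_locally=1_000_000)

print(weak, strong)  # prints 0.001 0.8
```

A small ratio thus indicates weak coupling (network-based systems), while a ratio approaching or exceeding one indicates strong coupling (shared-component systems).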