It is a “given” that client/server systems need to be designed to be used across networks. It often comes as a shock to newcomers to discover that the “lightning-fast” task they used to run through a Paradox or Access application on a desktop takes all day after they convert the database to a client/server RDBMS.
“It’s gotta be something wrong with Firebird,” they say. “It can't be my application code—because I didn’t change a thing! It can’t be my database design—because the same design has been perfect for years!” Famous last words.
The focus of application design for clients with desktop-based back-ends is vastly different from that for remote clients and client/servers. The obligatory desktop browsing interface that displays “200,000 records at a glance” has generated a major industry in RAD data-aware grid components. The developer never needed to ponder how many human beings might be capable of browsing 200,000 records in a day, let alone at a glance!
Those RAD grid components that did such a perfect job of presenting unlimited volumes of desktop data in small containers for random browsing are not a friendly interface for remote clients. Behind the screen, the characteristic client looping operation (“start at record 1 and for every record DO repeat”) that seemed so perfect for processing data that were memory images of local tables has the remote client users now demanding the developer’s head on a plate.
It is very common, indeed, for the design of the database itself to have been influenced more strongly by the perception of the client interface—“I want a table just like this spreadsheet!”—than by the elegant wisdom of a sound abstract data model.
When the interface is physically separated from the data by transaction isolation and a network of busy wire, much pondering is required. There is much more to migrating from the desktop than mere data conversion. The ultimate benefits from a critical design review to users, database integrity, and performance will amply justify the effort.
Abstraction of Stored Data
Even in established client/server environments, all too many poorly performing, corruption-prone systems are found to have been “designed” using reports and spreadsheets as the blueprint for both database and user interface design. In summary, it is all too common for desktop database conversions to come through to Firebird carrying many client/server-unfriendly “features” of this kind.
That is not to say that the old desktop system was no good. If it worked well in its environment, it was good for its purpose. Client/server is simply a very different technology from desktop “databasing.” It changes the scale of information management from “file and retrieve” to “store, manage, and manipulate.” It shifts the client application away from its desktop role as principal actor to that of message bearer. Effective client interfaces are lightweight and very elegant in the ways they capture what users want and deliver what users need.
Shedding the Spreadsheet Mind-set
A common characteristic of desktop database applications is that they provide an interface of grids: data arranged in rows and columns, with scrollbars and other navigation devices for browsing from the first row in a table to the last. Often, the grids deliver a visual structure that exactly mimics the metadata structure of the source tables. It is a common trap to import such tables into a client/server system and consider that the migration task is done.
Moving these legacy databases to client/server usually takes more than a data conversion program. Do your conversion and be prepared to treat your new database objects as holding places. Plan to reanalyze and reengineer to abstract this style of database into structures that work well in the new environment. In Firebird it is very easy to create new tables and move data into them. For storage, think thin—using simple keys; abstracting large table structures into families of linked, normalized relations; massaging calendar-style groups of repeating columns into separate tables; eliminating key structures that compound down dependency levels; eliminating duplicated data; and so on.
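As an illustration of “thinking thin,” the sketch below refactors a hypothetical spreadsheet-style import table—one wide row per customer, with a repeating column for each month—into two normalized relations linked by a simple key. All table and column names here are invented for the example, not taken from any real conversion.

```sql
/* Hypothetical spreadsheet-style staging table as it might arrive
   from a desktop conversion: duplicated customer details and a
   repeating column per calendar month.

   SALES_IMPORT (CUST_NAME, CUST_PHONE,
                 JAN_TOTAL, FEB_TOTAL, ... DEC_TOTAL)          */

/* Abstracted into thin, normalized relations with a simple key: */
CREATE TABLE CUSTOMER (
  CUST_ID   INTEGER NOT NULL PRIMARY KEY,  /* simple surrogate key */
  CUST_NAME VARCHAR(60) NOT NULL,
  PHONE     VARCHAR(20)
);

CREATE TABLE SALES_TOTAL (
  CUST_ID     INTEGER NOT NULL REFERENCES CUSTOMER (CUST_ID),
  SALES_MONTH DATE NOT NULL,   /* one row per month, not one column */
  TOTAL       NUMERIC(12,2),
  PRIMARY KEY (CUST_ID, SALES_MONTH)
);
```

The monthly repeating group becomes rows keyed by date, so a new month means a new row, not a schema change, and the duplicated customer details collapse into a single row referenced by key.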
If you are all at sea about normalization and abstraction, study some of the excellent literature available in books and websites. Get yourself started using a small data model—a subset of five or six of your primary tables is ideal—rather than hitting a 200-table database as if it were a single problem that you have to solve in a day. This way, the conversion becomes a proactive exercise in self-instruction and solving the tough bits quickly becomes more intuitive. For example, learn about stored procedures and triggers and test what you know by writing a data conversion module.
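A data conversion module of the kind suggested above might start as a stored procedure like this sketch, which pumps distinct customer rows out of an assumed staging table into a normalized target. The staging table `SALES_IMPORT`, the target `CUSTOMER`, and the generator name are all assumptions for the example.

```sql
/* Generator supplying surrogate key values for the target table */
CREATE GENERATOR GEN_CUST_ID;

SET TERM ^ ;

CREATE PROCEDURE CONVERT_CUSTOMERS
AS
  DECLARE VARIABLE V_NAME  VARCHAR(60);
  DECLARE VARIABLE V_PHONE VARCHAR(20);
BEGIN
  /* One pass over the staging data; DISTINCT collapses the
     duplicated customer details typical of spreadsheet imports */
  FOR SELECT DISTINCT CUST_NAME, CUST_PHONE
      FROM SALES_IMPORT
      INTO :V_NAME, :V_PHONE
  DO
    INSERT INTO CUSTOMER (CUST_ID, CUST_NAME, PHONE)
      VALUES (GEN_ID(GEN_CUST_ID, 1), :V_NAME, :V_PHONE);
END^

SET TERM ; ^
```

Writing such a procedure exercises exactly the skills—PSQL loops, variables, generators—that the new environment rewards, while keeping the row-by-row processing on the server where it belongs.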
It is essential to part with the notion that your starting point in designing a relational database is to represent everybody’s favorite reports, spreadsheets, and most-used reference displays as tables in the database. These things are output, and output is retrieved by queries and stored procedures.
The Human Interface
Client applications in a system where enterprise information services have a back-end that is a full-blooded DBMS with powerful data-processing capabilities do not transform user input, beyond parsing it at the source and packaging it into the prepared structures required by the API transport functions. FOR loops through hundreds or thousands of rows in a client dataset buffer do not have a place in the front-ends of client/server systems.
The application designer needs to think constantly about the cost of redundant throughput. Pulling huge sets across the wire for browsing bottles up the network and makes the user’s experience frustrating. The human face of the system has to concentrate on efficient ways to show users what they need to see and to collect input from them—their instructions and any new data that humans want to add. User interface design should focus on quick and intuitive techniques to capture inputs raw and pass them quickly to the server for any cooking that is required.
Client/server developers can learn a lot from studying successful web form interfaces, even if the applications are not being designed for the Internet, because a web browser is the extreme thin client.
Short, quick queries keep users in touch with the state of the database and dilute the load on the network. Effective database clients present drill-down, searching interfaces, rather than table browsers, and limit sets to no more than 200 rows.
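A drill-down query of the kind described might look like the following sketch, which returns at most 200 rows matching a user-supplied search prefix. The table and parameter names are invented; `FIRST` is Firebird's row-limiting syntax (later versions also accept the `ROWS` clause).

```sql
/* Return no more than 200 customers whose names match the
   prefix the user typed; :SEARCH_PREFIX is a query parameter
   supplied by the client application. */
SELECT FIRST 200 CUST_ID, CUST_NAME, PHONE
FROM CUSTOMER
WHERE CUST_NAME STARTING WITH :SEARCH_PREFIX
ORDER BY CUST_NAME;
```

The user refines the prefix to narrow the set further, rather than scrolling a grid of the whole table across the wire.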
The Relational Data Storage Model
Relational databases depend on robust, highly abstract structures to perform effectively and produce predictable, correct results from operations. Complete analysis of the entities and processes in your system is essential, so that you arrive at a logical model that is free of redundancy and represents every relationship.
The Primary Key
During logical analysis, a primary key is established for each grouping of data by determining which item or items are capable of identifying a group of related data uniquely. The physical design of tables will reflect the logical groupings and uniqueness characteristics of the data model, although the table structures and key columns actually drafted are often not exactly like the model. For example, in an Employee table, the unique key involves first and last name fields, along with others. Because the composite unique key of the data model involves several large items that are vulnerable to human error, a special column would be added to the table to act as a surrogate primary key.
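The Employee example might be drafted as follows: the logical unique key survives as a named constraint, while a small surrogate column carries the physical primary key. Column sizes and names here are illustrative assumptions, not a prescribed layout.

```sql
CREATE TABLE EMPLOYEE (
  EMP_ID     INTEGER NOT NULL PRIMARY KEY,  /* surrogate primary key */
  FIRST_NAME VARCHAR(35) NOT NULL,
  LAST_NAME  VARCHAR(35) NOT NULL,
  BIRTH_DATE DATE NOT NULL,
  /* The logical unique key from the data model: large, human-entered
     items, preserved as a uniqueness rule but not used for linking */
  CONSTRAINT UQ_EMPLOYEE UNIQUE (FIRST_NAME, LAST_NAME, BIRTH_DATE)
);
```

Foreign keys in other tables then reference the compact `EMP_ID`, keeping indexes small and immune to spelling corrections in the name fields.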
RDBMSs rely on each table having a unique column, or combination of columns, in order to target and locate a specific row unambiguously, for matching search conditions and for linking data items or streams.
Relationships in the model map to keys in the tables. Theoretically, every relationship in the model would be realized as a pair of linked keys. When keys are linked to one another by foreign key constraints, the tables become bound into a network of dependencies that reflect how groups of data should interact, regardless of context. Underlying rules in server logic refer to these dependencies in order to protect the referential integrity of the database. The SQL standards formulate rules describing how referential dependencies should work. It is up to the vendor of an individual RDBMS to decide how to implement and enforce the dependencies internally.
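Declaring such a dependency in Firebird is a one-statement affair. The sketch below assumes hypothetical `SALES_TOTAL` and `CUSTOMER` tables, with `CUST_ID` as the linking key; the cascade rule is just one of the referential actions the standard provides.

```sql
/* Bind the detail table to its master: the server will now refuse
   orphan rows and, here, remove details when the master row goes. */
ALTER TABLE SALES_TOTAL
  ADD CONSTRAINT FK_SALES_CUST
  FOREIGN KEY (CUST_ID) REFERENCES CUSTOMER (CUST_ID)
  ON DELETE CASCADE;
```

Once declared, the dependency is enforced in server logic for every client, regardless of which application submits the request.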
In individual server engine implementations, there can be technical reasons for certain key constraints to be left without a formal declaration and implemented in an alternative way. For example, most RDBMSs require mandatory, non-unique indexes on the column elements of foreign keys. Under some data distribution conditions, it may be undesirable to index such columns, if another way can be used to protect the integrity of the relationship concerned.
An RDBMS can also realize relationships that do not involve keys. For example, it can extract sets of data on the basis of comparing values, or expressions involving values, in different columns of a single table or between columns in different tables. The SQL query language, the structures of stored datasets, and the logical skills of the application developer interact to ensure that client/server traffic is kept down and user requests return results that are precise and appropriate.
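A relationship realized without keys might be a join on a comparison of values, as in this sketch matching orders to price bands by range rather than by a linked key column. Both tables and all columns are invented for the illustration.

```sql
/* No foreign key links these tables: the relationship exists only
   in the value comparison expressed by the join condition. */
SELECT O.ORDER_ID, O.ORDER_TOTAL, P.BAND_NAME
FROM ORDERS O
JOIN PRICE_BAND P
  ON O.ORDER_TOTAL BETWEEN P.MIN_TOTAL AND P.MAX_TOTAL;
```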
"Hands-Off" Data Access
RDBMSs that are designed for client/server use do not make data directly accessible to users. When a user application wants an operation performed on a set of data, it tells the client module what it wants and the client negotiates with the server to satisfy the request. If the request is denied for some reason, it is the client that bears the bad news back to the application.
If the application requests to read a set of data, it is the client that fetches the output from the server’s operation and carries it back to the application. The data seen by the application are images of the state of the original data in the database at the moment the conversation between the client and the server began. The images that users see are disconnected—or isolated—from the database. The “moment of isolation” may not be the same moment that the request was received by the server. In a client/server environment, where it is assumed that more than one user is reading and writing data, every request has a context.
Multiple Users and Concurrency
A DBMS that is designed to allow multiple users to work with images of stored data and to request changes that may impact the work that others are doing needs some way to manage concurrency. Concurrency is the set of conditions that is anticipated when two or more users request to change the same row in a table at the same time (i.e., concurrently). Mature DBMSs like Firebird implement some kind of scheme whereby each request is made in a concurrency context. The standard SQL term for this concurrency context is transaction—not to be confused with the “business transactions” that database applications frequently implement!
For the ex-user of desktop databases, the transaction is one of the most confusing abstractions in the client/server and RDBMS environment. With desktop databases and spreadsheet programs, it is taken for granted that, once the user clicks the Save button and the drive light goes out, the operation is done, for better or for worse. It is also a fact that, once the penny drops about transactions, developers tend to spiral rapidly away from the “spreadsheet mind-set” that has preoccupied them during those years when the old database model seemed perfect.
In Firebird, all communications between the client and the server occur during transactions. Even reading a few rows from a table cannot occur if a transaction has not started. A transaction starts when the application asks the client to start it. From the point when the transaction begins until it ends—again, at the request of the application—the client/server conversation is open and the application can ask the client to make requests. During this period, operations that change database state are executed and may even be written to disk. However, they do not alter the permanent state of the database and they remain reversible until the transaction is committed.
Transactions end when the application asks the client to request the server to commit all of the work since the transaction began (even if the work was nothing but reads) or, in the event of errors, to roll back the work. The rules of atomicity apply: If one pending change of database state fails, requiring it to be rolled back because it cannot be committed, then all pending changes in that transaction are rolled back too. The rollback includes any work that was executed by triggers and stored procedures during the course of the transaction.
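In the interactive isql tool, the whole cycle can be seen explicitly. The transaction parameters and the table below are illustrative; the point is that nothing the `UPDATE` does becomes permanent until `COMMIT`, and `ROLLBACK` undoes all pending work at once.

```sql
/* Explicitly start a transaction with chosen parameters */
SET TRANSACTION READ WRITE WAIT ISOLATION LEVEL SNAPSHOT;

/* Pending change: executed, but reversible and invisible to
   other transactions until committed */
UPDATE CUSTOMER
  SET PHONE = '555-0199'
  WHERE CUST_ID = 42;

/* End the transaction: make all pending work permanent ... */
COMMIT;

/* ... or, had anything failed, undo every pending change,
   including work done by triggers and stored procedures: */
/* ROLLBACK; */
```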
Transactions matter so much in a client/server system that this guide devotes three chapters to them: Overview of Firebird Transactions, Configuring Transactions, and Programming with Transactions.