Looking for a career opportunity in Sybase? Need assistance in your job search? Here is a solution built by the Wisdom Jobs interview questions page to create more opportunities for job candidates. Sybase was an enterprise software and services company that developed software to manage and analyze information in relational databases; it was acquired by SAP in 2010. Search and apply for Sybase jobs including Sybase DBA, Sybase developer, Sybase database administrator, SAP SUP consultant, Java developer (multi-threading), Sybase SQL developer, Sybase application support engineer (Unix), mainframe developer, technical lead, AS/400 development, DB2 DBA on mainframes, SAP ABAP consultant, and mainframe expert roles. The following Sybase job interview questions and answers are useful for job seekers preparing for interviews.
You can tell if a table has identities one of two ways:
1. sp_help [tablename]: the sp_help output includes a column named "Identity" that is set to 1 for identity columns and 0 otherwise.
2. Within a database, execute this query:
This will list every table, the column within each table that serves as its identity, and the precision of that identity column.
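The query itself is not shown above; a common way to find identity columns (an assumption, based on the convention in ASE that bit 0x80 of syscolumns.status marks an identity column) is:

```sql
-- List each table's identity column and its precision.
-- Assumes the ASE convention that status bit 0x80 in syscolumns
-- marks an identity column.
select object_name(id) as table_name,
       name            as identity_column,
       prec            as identity_precision
from   syscolumns
where  status & 0x80 = 0x80
```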
Sybase exclusively focuses on mobilizing and managing information. It is known as the third largest database management company after Oracle and IBM. Sybase products for investment banking are well known the world over, and it offers many customized solutions for business intelligence.
Sybase has some of the world's leading products, including Adaptive Server Enterprise (ASE), Sybase IQ, Replication Server, PowerDesigner, and OpenSwitch.
Triggers can be nested up to 16 levels deep in Sybase.
Sybase IQ's storage is optimized for data warehousing applications, where data is mostly searched and read rather than modified. Its column-based organization gives high performance for searches, but at the cost of write performance. Row-based relational storage, by contrast, performs worse for such searches but is better suited to updating and modifying data.
A trigger is a stored procedure bound to a table, which is implicitly executed when a user attempts to modify data with an insert, delete, or update command.
Uses:
i. Validation
ii. Enforcing referential integrity
Types of trigger: insert, update, and delete triggers.
Two tables are available at the time of trigger execution:
1. Inserted
2. Deleted
These are called the magic tables.
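As a sketch of the idea (the table and column names here are hypothetical), a delete trigger that uses the deleted magic table for validation might look like:

```sql
-- Hypothetical example: block deletion of orders that are still open.
create trigger trg_orders_del
on orders
for delete
as
if exists (select 1 from deleted where status = 'OPEN')
begin
    -- Undo the delete and report an error to the client.
    rollback transaction
    raiserror 30001 "Cannot delete open orders."
end
```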
A stored procedure is a database object that is a collection of SQL statements and control-of-flow constructs. Parameters can be passed in and returned, and error codes can be checked. After their first execution, stored procedures become memory resident and do not need to be reparsed, reoptimized, or recompiled, so they are faster.
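A minimal sketch of a stored procedure with a parameter and a return status (the procedure name is hypothetical; the authors table is from the pubs2 sample database):

```sql
-- Hypothetical procedure: look up an author, returning 0 on success
-- and 1 when no matching row is found.
create procedure usp_get_author
    @au_id varchar(11)
as
begin
    select au_lname, au_fname
    from   authors
    where  au_id = @au_id

    if @@rowcount = 0
        return 1   -- not found
    return 0       -- success
end
```

It would then be run with, for example, `exec usp_get_author '172-32-1176'`.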
The update statistics command helps the server make the best decisions about which indexes to use when it processes a query, by providing information about the distribution of the key values in the indexes. The update statistics commands create statistics, if there are no statistics for a particular column, or replaces existing statistics if they already exist. The statistics are stored in the system tables systabstats and sysstatistics.
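For example, statistics on a table are commonly refreshed and dependent plans invalidated like this (titles is a pubs2 sample table):

```sql
-- Refresh distribution statistics for the indexes on titles, then
-- mark dependent stored procedures for recompilation so they pick
-- up plans based on the new statistics.
update statistics titles
exec sp_recompile titles
```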
An ASE installation comprises one data server, which hosts several databases. System metadata is kept in the system tables, and user data is stored as tables in user databases. Security and information privacy can be protected by setting permissions.
Sybase IQ stores a database in the form of columns of data rather than rows. This storage model improves the accessibility of data and the performance of searches, because a query reads only the columns it needs rather than entire rows. It is also space-efficient.
These are the benefits you can get from Mainframe Connect:
1) Client applications can be enabled, and external software can be connected through Mainframe Connect for higher performance.
2) Mainframe applications can connect to external data through the LAN.
3) New applications can be created that let you connect to external data or sources of information.
This connectivity tool gives access to high-performance tools that connect you very quickly to mainframe data sources. It gives you read and write access to databases, and it also lets you connect mainframe data to secondary data storage facilities.
PowerDesigner has two unique tools, known as the Link and Sync technologies. They let database designers build efficient data models, and designers can share resources among themselves, which improves productivity and performance.
The Data Integration Suite offers advanced development and management tools. Integration of data is easy and efficient, as you can route your data to a specified destination without much difficulty. Its five important features are ETL, data federation, replication, real-time events, and search.
The numeric and decimal types are identical in all respects but one:
Only numeric types with a scale of 0 can be used for the IDENTITY column
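For example (a hypothetical table):

```sql
-- numeric(10,0) is allowed for the identity column because its
-- scale is 0; a type such as numeric(10,2) would be rejected.
create table orders_demo (
    order_id numeric(10,0) identity,
    amount   numeric(10,2) not null
)
```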
Enterprise scalability provides increased capacity without the addition of new hardware, by increasing availability in the enterprise data layer.
ROI: by streamlining application projects and reducing data management costs, data federation increases ROI. Reduced complexity and superior decision-making choices make its ROI best in class.
Some of the benefits you can get from the Data Integration Suite are:
Flexibility: the suite gives you greater flexibility to incorporate data from various sources, such as the web and file systems, and it can transfer this data to third-party applications such as SQL databases and production systems.
The procedure cache is the memory area where compiled query plans (for example, stored procedures and batch queries) are held and run. The data cache is the memory area into which data required by currently running queries is brought from disk so that result sets can be built.
This software works with storage replication systems and replicates the database to an available ASE or to database backup systems such as Oracle. It increases the security of applications and also reduces failover time for data applications.
OpenSwitch business software helps you work on the backup system without any hassle, keeping your primary database server secure. It maintains connectivity by redirecting clients from the primary database server to the secondary server, even in the middle of an application.
Replication Server software lets you manage multiple data platforms very easily. Among the features it provides are heterogeneous replication and bidirectional synchronization across various applications. It provides services from client to server and from server to client.
Coalesce is also available in Sybase. This function evaluates the listed expressions and returns the first non-null value; if all the expressions are null, coalesce returns null.
USAGE: coalesce(expression, expression [, expression]…)
It returns the first occurrence of a non-null value among initialqty, middleqty, and finalqty.
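Using the column names from the description above (the table name stock_levels is a hypothetical stand-in):

```sql
-- Returns initialqty if it is non-null, otherwise middleqty,
-- otherwise finalqty; null only if all three are null.
select coalesce(initialqty, middleqty, finalqty) as first_qty
from   stock_levels
```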
JDBC is an application programming interface that helps Java developers interact with databases such as Sybase via DML statements (insert, update, delete, and so on).
The syb_quit(0) command can be used to exit from the Sybase database.
Below is an example of the various files (on Irix) that are needed to start/stop an ASE. The information can easily be extended to any UNIX platform. The idea is to allow as much flexibility to the two classes of administrators who manage the machine:
* The System Administrator
* The Database Administrator
Any errors introduced by the DBA will not interfere with the System Administrator’s job.
With that in mind, the system startup/shutdown file /etc/init.d/sybase invokes a script defined by the DBA.
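The original files are not reproduced here; a minimal sketch of such an /etc/init.d/sybase wrapper (the DBA script path and user name are assumptions, not the original values) might look like:

```sh
#!/bin/sh
# /etc/init.d/sybase -- owned by the System Administrator.
# It only delegates to a script the DBA maintains, so errors the DBA
# introduces cannot interfere with the rest of system startup.
DBA_SCRIPT=/usr/local/sybase/dba/rc.sybase   # maintained by the DBA (assumed path)

case "$1" in
    start|stop)
        # Run the DBA's script as the sybase user; ignore its exit
        # status so a DBA-side failure never aborts system boot/shutdown.
        [ -x "$DBA_SCRIPT" ] && su sybase -c "$DBA_SCRIPT $1" || true
        ;;
    *)
        echo "usage: $0 {start|stop}" >&2
        ;;
esac
```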
Sybase Open Server is a vital component of Sybase, analogous to Microsoft Open Data Services. It is a major component for handling distributed database systems and thus forms a vital and powerful server-side component of Sybase systems. One of its major features is interoperability.
No. By design, each table is intrinsically assigned to one segment, called the default segment. When a table is partitioned, any partitions on that table are distributed among the devices assigned to the default segment. In the example under "How Do I Create a Partitioned Table That Spans Multiple Devices?", the table sits on a user-defined segment that spans three devices.
You should partition heap tables that have large amounts of concurrent insert activity. (A heap table is a table with no clustered index.) Here are some examples:
1. An "append-only" table to which every transaction must write.
2. Tables that provide a history or audit list of activities.
3. A new table into which you load data with bcp in. Once the data is loaded, you can unpartition the table. This enables you to create a clustered index on the table, or issue other commands not permitted on a partitioned table.
No. You cannot partition the following kinds of tables:
1. Tables with clustered indexes (as of release 11.5 it is possible to have a clustered index on a partitioned table).
2. ASE system tables.
3. Work tables.
4. Temporary tables.
5. Tables that are already partitioned. However, you can unpartition and then repartition tables to change the number of partitions.
If the columns that comprise the clustered index are monotonically increasing (that is, new row key values are greater than those previously inserted), the following System 11 dbcc tune setting will stop ASE from splitting a page when it is half full. Rather, it will let the page fill and then allocate another page:
dbcc tune(ascinserts, 1, "my_table")
By the way, SyBooks is wrong when it states that the above needs to be reset when ASE is rebooted. This is a permanent setting.
To undo it:
dbcc tune(ascinserts, 0, "my_table")
Create a temporary table with the same structure, and then create an index on it with the duplicate-ignoring option (in ASE, ignore_dup_key applies to unique indexes and ignore_dup_row to non-unique clustered indexes). Now insert the data into the temp table from the table in which duplicate records exist. While inserting into the temp table, duplicate rows are ignored, so the temp table ends up with unique records only.
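A sketch of this approach (the table and column names are hypothetical; note that ASE pairs ignore_dup_key with unique indexes):

```sql
-- Copy the structure only: the 1 = 2 predicate yields zero rows.
select * into #dedup from sales_raw where 1 = 2

-- Rows with duplicate keys are silently discarded on insert.
create unique clustered index ix_dedup
    on #dedup (store_id, order_no)
    with ignore_dup_key

insert into #dedup select * from sales_raw
-- #dedup now holds one row per (store_id, order_no).
```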
select distinct * into temp_table_1 from table_1
truncate table table_1
insert into table_1
select * from temp_table_1
This is one of the ways to eliminate duplicates.
No. Once you have partitioned a table, you cannot use any of the following Transact-SQL commands on the table until you unpartition it:
1. drop table
2. truncate table
3. alter table table_name partition n
On releases of ASE prior to 11.5 it was not possible to create a clustered index on a partitioned table either.
An index is used to provide faster access to the data in a table than scanning every page. Sometimes an index can also be used as a mechanism for enforcing uniqueness.
There are two types of indexes: clustered and non-clustered.
Clustered Index: – Data is physically sorted. The bottom or leaf level of a clustered index contains the actual data pages of the table. So there can be only one clustered index per table.
Non-Clustered Index: – With a non-clustered index, the physical order of the rows is not the same as their indexed order. The leaf level of a non-clustered index contains pointers to rows on data pages. More precisely, each leaf page contains an indexed value and a pointer to the row with that value. Up to 249 non-clustered indexes are allowed in a table.
When all the keys of a non-clustered index are used in the select list and in the where clause, the index does not need to descend to the data pages; it picks the data directly from its leaf level, so the query takes less time to execute. This is called index covering.
Just write the index name or index number in parentheses after the table name to force the index.
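For example (the table and index names are assumed, following the pubs2 sample):

```sql
-- Force the optimizer to use the index named titles_ix
select title, price
from   titles (index titles_ix)
where  price > 20
```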
Shared Lock: – used by process that is reading pages. Multiple shared locks can be held on one page; a single shared lock prevents any exclusive lock from being acquired. Shared locks typically are held only for the duration of the read on a particular page. If holdlock is specified, shared locks are held until the completion of the command or the transaction it is in.
Update lock: – used by a process that intends to update or delete data but has not yet done so. Update locks are acquired as the set of rows to be updated or deleted is being determined. Update locks are read-compatible with shared locks during the pre-modification phase, but no other update lock or exclusive lock can be acquired on the page. Update locks are automatically promoted to exclusive locks when the data change occurs.
Exclusive Lock: – used by a process that is currently adding, changing, or deleting information on data pages. Exclusive locks prevent any other type of lock (exclusive, update, or shared) from being acquired. Exclusive locks are held on all affected pages until the explicit transaction, or the command in an implicit transaction, completes.
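As an illustration of shared-lock behavior (the titles table name follows the pubs2 sample):

```sql
begin transaction
    -- holdlock keeps the shared locks until the transaction ends,
    -- instead of releasing them after each page is read
    select price from titles holdlock where title_id = 'BU1032'
commit transaction
```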
In ASE, a trigger fires once per insert, update, or delete statement, no matter how many rows that statement affects; the inserted and deleted tables contain all of the affected rows.
To maintain the referential integrity.
By creating appropriate indexes on tables, and writing queries that can use those indexes. Use SARGs (search arguments) in the where clause, and check the query plan with set showplan on. If the query is not using the proper index, force the correct index to make the query run faster.
By killing a process you can force a transaction to fail.
Constraints enable the RDBMS to enforce the integrity of the database automatically, without needing you to create triggers, rules, or defaults.
Types of constraints: NOT NULL, CHECK, UNIQUE, PRIMARY KEY, FOREIGN KEY
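A hypothetical table showing each kind of constraint (the departments table referenced by the foreign key is assumed to exist):

```sql
create table employees_demo (
    emp_id  int           not null primary key,              -- PRIMARY KEY (implies NOT NULL, UNIQUE)
    email   varchar(80)   not null unique,                   -- UNIQUE + NOT NULL
    salary  numeric(10,2) not null check (salary > 0),       -- CHECK
    dept_id int           null references departments (dept_id)  -- FOREIGN KEY
)
```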
Whenever there are many insertions, updates, and deletions in a table, the table becomes fragmented, and we defragment it (for example, with reorg rebuild).
select "Server Start Time" = crdate from master..sysdatabases where name = "tempdb" or
select * from sysengines
When there are relations between tables/objects across different databases, there is indeed a disadvantage: if you restore a dump of only one of the databases, those relations may no longer be consistent. This means you should always treat the consistent set of databases as the unit of backup/restore. Therefore, when making this kind of design decision, backup/restore issues should be considered (and the DBA should be consulted).
The main advantage of an identity column is that it can generate unique, sequential numbers very efficiently, requiring only a minimal amount of I/O. The disadvantage is that the generated values themselves are not transactional, and that the identity values may jump enormously when the server is shut down the rough way (resulting in "identity gaps").
If you’re in ASE 11.5 or later, create a view for those rows and BCP out from the view. In earlier ASE versions, you’ll have to select those rows into a separate table first and BCP out from that table. In both cases, the speed of copying the data depends on whether there is a suitable index for retrieving the rows.
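A sketch of the ASE 11.5+ approach (the view name and filter are hypothetical, using the pubs2 authors table):

```sql
-- Define a view that selects just the rows to be copied out.
create view v_ca_authors as
    select * from authors where state = 'CA'
```

The view is then copied out from the operating system prompt with bcp, for example `bcp pubs2..v_ca_authors out ca_authors.dat -c -Usa -S SYBASE` (server and login names are assumptions).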