Friday, April 15, 2011

How to Change schema name for all Tables in SQL Server 2005


Step 1. Check existing schema name before changing



Existing schema name is OldSchema
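For reference, the schemas currently in use can be listed with a quick query (a minimal sketch using the same INFORMATION_SCHEMA view as Step 2):

SELECT DISTINCT TABLE_SCHEMA FROM INFORMATION_SCHEMA.TABLES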

Step 2. Run the script below on the database in which you want to change the schema.

SELECT 'ALTER SCHEMA NewSchema TRANSFER ' + TABLE_SCHEMA + '.' + TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'OldSchema'

Replace NewSchema and OldSchema in the script with your new and old schema names.



Step 3. After executing the above script you will get output containing another script that changes the schema for each table, like the following (here the new schema is dbo):

ALTER SCHEMA dbo TRANSFER OldSchema.Table1
ALTER SCHEMA dbo TRANSFER OldSchema.Table2
Copy the entire output script and execute it on the database in which you want to change the schema for all tables.

Once the script has executed successfully you will get a confirmation message.
Step 4. Check whether the schema name has changed by re-running the query from Step 1.

Conclusion: before the change the schema name was OldSchema; now it has been changed to dbo.

Wednesday, April 13, 2011

Single Script to Backup All User Databases in SQL Server

DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name

SET @path = 'C:\Backup\'
SELECT @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)

DECLARE db_cursor CURSOR FOR
SELECT name FROM master.dbo.sysdatabases
WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
       SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
       BACKUP DATABASE @name TO DISK = @fileName
       FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor
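After the script runs, each backup file can be sanity-checked with RESTORE VERIFYONLY (a hedged example; the file name shown is hypothetical):

RESTORE VERIFYONLY FROM DISK = 'C:\Backup\MyDatabase_20110413.BAK'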

Friday, April 1, 2011

SQL DBA Hand Book - This will help in day-to-day activities & interviews

SQL DBA Hand Book
1. What is collation?
Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case-sensitivity, accent marks, kana character types and character width.
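For illustration, the COLLATE clause makes the case-sensitivity rules visible (a minimal sketch; SQL_Latin1_General_CP1_CI_AS and Latin1_General_CS_AS are standard SQL Server collations):

SELECT CASE WHEN 'abc' COLLATE SQL_Latin1_General_CP1_CI_AS = 'ABC' THEN 'equal' ELSE 'not equal' END -- 'equal' (case-insensitive)
SELECT CASE WHEN 'abc' COLLATE Latin1_General_CS_AS = 'ABC' THEN 'equal' ELSE 'not equal' END -- 'not equal' (case-sensitive)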
2. What is statistics?
Statistics in SQL Server refers specifically to information that the server collects about the distribution of data in columns and indexes. This data in turn is used by the query optimizer to determine the plan of attack for returning results when you run a query.
3. What is a clustered index?
A clustered index stores the actual data rows at the leaf level of the index. For example, the entire row of data associated with a primary key value of 123 would be stored in that leaf node. An important characteristic of the clustered index is that the indexed values are sorted in either ascending or descending order. As a result, there can be only one clustered index on a table or view. In addition, data in a table is sorted only if a clustered index has been defined on the table.
4. What is a nonclustered index?
Unlike a clustered index, the leaf nodes of a nonclustered index contain only the values from the indexed columns and row locators that point to the actual data rows, rather than containing the data rows themselves. This means that the query engine must take an additional step in order to locate the actual data.
5. Isolation levels
Isolation levels come into play when you need to isolate a resource for a transaction and protect that resource from other transactions. The protection is done by obtaining locks. What locks need to be set and how it has to be established for the transaction is determined by SQL Server referring to the Isolation Level that has been set. Lower Isolation Levels allow multiple users to access the resource simultaneously (concurrency) but they may introduce concurrency related problems such as dirty-reads and data inaccuracy. Higher Isolation Levels eliminate concurrency related problems and increase the data accuracy but they may introduce blocking.
6. Read Uncommitted Isolation Level
This is the lowest level. It provides the highest concurrency but introduces all of the concurrency problems: dirty reads, lost updates, non-repeatable reads (inconsistent analysis) and phantom reads.
7. Read Committed Isolation Level
This is the default isolation level of SQL Server. It eliminates dirty reads, but all the other concurrency-related problems remain possible.
8. Repeatable Read Isolation Level
This isolation level addresses all concurrency-related problems except phantom reads. Unlike Read Committed, it does not release the shared lock once the record is read; it obtains the shared lock for reading and keeps it until the transaction is over. This stops other transactions from modifying the resource, avoiding lost updates and non-repeatable reads.
9. Serializable Isolation Level
This is the highest isolation level and it avoids all of the concurrency-related problems. Its behavior is just like Repeatable Read with one additional feature: it obtains key-range locks based on the filters that have been used, locking not only the current records that satisfy the filter but also new records that would fall into the same filter.
10. Snapshot Isolation Level
The Snapshot isolation level works with row versioning technology. Whenever a transaction modifies a record, SQL Server first stores the consistent version of the record in tempdb. If another transaction that runs under the Snapshot isolation level requires the same record, it can be taken from the version store. This isolation level prevents all the concurrency-related problems just like the Serializable isolation level; in addition, it allows multiple updates for the same resource by different transactions concurrently.
11. Read Committed Snapshot Isolation Level
This is a new implementation of the Read Committed isolation level. It is set not at the session/connection level but at the database level. The only difference between Read Committed and Read Committed Snapshot is that Read Committed Snapshot is optimistic whereas Read Committed is pessimistic. Read Committed Snapshot differs from Snapshot in two ways: unlike Snapshot, it always returns the latest consistent version, and it performs no conflict detection. A usage sketch for these levels follows below.
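A minimal usage sketch for the levels above (the Orders table and MyDb database are hypothetical; note that Read Committed Snapshot is enabled per database, not per session):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
SELECT * FROM Orders WHERE OrderID = 1 -- shared lock now held until the transaction ends
COMMIT TRANSACTION

ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON -- database-level setting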
12. Lock types
  1. Shared locks are used for operations that do not change or update data, such as a SELECT statement. Shared locks are compatible with other Shared locks or Update locks.
  2. Update locks are used when SQL Server intends to modify a page, and later promotes the update page lock to an exclusive page lock before actually making the changes. Update locks are compatible with Shared locks only.
  3. Exclusive locks are used for the data modification operations, such as UPDATE, INSERT, or DELETE. Exclusive locks are not compatible with other lock types.
13. What is blocking?
Blocking in SQL Server is a scenario where one connection to SQL Server locks one or more records, and a second connection to SQL Server requires a conflicting lock type on the record or records locked by the first connection. This causes the second connection to wait until the first connection releases its locks. By default, a connection will wait an unlimited amount of time for the blocking lock to go away.
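On SQL Server 2005, blocked sessions and their blockers can be listed with a DMV query (a minimal sketch):

SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0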
14. What is a deadlock?
Deadlocking occurs when two user processes have locks on separate objects and each process is trying to acquire a lock on the object that the other process has. When this happens, SQL Server identifies the problem and ends the deadlock by automatically choosing one process to abort, allowing the other process to continue. The aborted transaction is rolled back and an error message is sent to the user of the aborted process. Generally, the transaction that requires the least amount of overhead to roll back is the one that is aborted.
15. What's the difference between a primary key and a unique key?
Both a primary key and a unique key enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.
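A small sketch showing both constraints on a hypothetical table:

CREATE TABLE employees (
       emp_id INT NOT NULL PRIMARY KEY,   -- clustered index by default, NULLs not allowed
       email VARCHAR(100) NULL UNIQUE     -- nonclustered index by default, one NULL allowed
)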

16. Define candidate key, alternate key, composite key.
A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys. A key formed by combining two or more columns is called a composite key.
17. What is a transaction and what are ACID properties?
A transaction is a logical unit of work in which all the steps must be performed or none. ACID stands for Atomicity, Consistency, Isolation, and Durability; these are the properties of a transaction. For more information and explanation of these properties, see SQL Server Books Online or any RDBMS fundamentals textbook.

An isolation level determines the degree of isolation of data between concurrent transactions. The default SQL Server isolation level is Read Committed. In ascending order of isolation, the levels are: Read Uncommitted, Read Committed, Repeatable Read, and Serializable. Be sure to read about SET TRANSACTION ISOLATION LEVEL, which lets you customize the isolation level at the connection level.
• Read Uncommitted: a transaction operating at this level can see uncommitted changes made by other transactions. Dirty reads, non-repeatable reads, and phantoms are all possible.
• Read Committed: a transaction operating at this level cannot see changes made by other transactions until those transactions are committed. Dirty reads are not possible, but non-repeatable reads and phantoms are possible.
• Repeatable Read: a transaction operating at this level is guaranteed not to see any changes made by other transactions in values it has already read. Dirty reads and non-repeatable reads are not possible, but phantoms are possible.
• Serializable: a transaction operating at this level guarantees that all concurrent transactions interact only in ways that produce the same effect as if each transaction were entirely executed one after the other. Dirty reads, non-repeatable reads, and phantoms are not possible.
18. What's the difference between DELETE TABLE and TRUNCATE TABLE commands?
DELETE TABLE is a logged operation, so the deletion of each row gets logged in the transaction log, which makes it slow. TRUNCATE TABLE also deletes all the rows in a table, but it won't log the deletion of each row; instead it logs the deallocation of the data pages of the table, which makes it faster. TRUNCATE TABLE can still be rolled back if it is issued inside a transaction.

TRUNCATE TABLE is functionally identical to a DELETE statement with no WHERE clause: both remove all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE. The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log.

TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes and so on remain. The counter used by an identity column for new rows is reset to the seed for the column. If you want to retain the identity counter, use DELETE instead. If you want to remove the table definition and its data, use the DROP TABLE statement.

You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use a DELETE statement without a WHERE clause. Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view.
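A quick sketch of the behavioral difference on a hypothetical orders table with an identity column:

DELETE FROM orders    -- row-by-row, fully logged; the identity counter keeps its current value
TRUNCATE TABLE orders -- page deallocation; the identity counter is reset to the seed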

19. What steps will you take to improve the performance of a poorly performing query?
This is a very open-ended question and there could be a lot of reasons behind the poor performance of a query. Some general issues to look for: no indexes; table scans; missing or out-of-date statistics; blocking; excess recompilations of stored procedures; procedures and triggers without SET NOCOUNT ON; poorly written queries with unnecessarily complicated joins; too much normalization; excess usage of cursors and temporary tables. Some of the tools/ways that help you troubleshoot performance problems: SET SHOWPLAN_ALL ON, SET SHOWPLAN_TEXT ON, SET STATISTICS IO ON, SQL Server Profiler, Windows NT/2000 Performance Monitor, and the graphical execution plan in Query Analyzer. A usage sketch follows below.
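As a usage sketch of the session options mentioned above (the query itself is hypothetical):

SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT * FROM orders WHERE customer_id = 42 -- the Messages tab now shows logical reads and CPU time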
20. What is a deadlock and what is a livelock? How will you go about resolving deadlocks?
A deadlock is a situation when two processes, each having a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release the lock, unless one of the user processes is terminated. SQL Server detects deadlocks and terminates one user's process. A livelock is one where a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely. Check out SET DEADLOCK_PRIORITY (sketched below) and "Minimizing Deadlocks" in SQL Server Books Online.
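SET DEADLOCK_PRIORITY, mentioned above, lets a session volunteer to be the deadlock victim (a one-line sketch):

SET DEADLOCK_PRIORITY LOW -- this session is preferred as the victim if a deadlock occurs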
21. What are statistics, under what circumstances do they go out of date, and how do you update them?
Statistics determine the selectivity of the indexes. If an indexed column has unique values, then the selectivity of that index is higher, as opposed to an index with non-unique values. The query optimizer uses these statistics in determining whether to choose an index while executing a query. Some situations under which you should update statistics: 1) there is a significant change in the key values in the index; 2) a large amount of data in an indexed column has been added, changed, or removed (that is, the distribution of key values has changed), or the table has been truncated using the TRUNCATE TABLE statement and then repopulated; 3) the database is upgraded from a previous version. Look up SQL Server Books Online for the following commands: UPDATE STATISTICS, STATS_DATE, DBCC SHOW_STATISTICS, CREATE STATISTICS, DROP STATISTICS, sp_autostats, sp_createstats, sp_updatestats.
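A minimal sketch of the update commands listed above (table and index names hypothetical):

UPDATE STATISTICS dbo.orders                     -- refresh all statistics on one table
UPDATE STATISTICS dbo.orders IX_orders_customer  -- refresh the statistics of one index
EXEC sp_updatestats                              -- refresh statistics database-wide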
22. Index optimization tips
• Every index increases the time it takes to perform INSERTs, UPDATEs and DELETEs, so the number of indexes should be kept small. Try to use a maximum of 4-5 indexes on one table, not more. If you have a read-only table, then the number of indexes may be increased.
• Keep your indexes as narrow as possible. This reduces the size of the index and reduces the number of reads required to read the index.
• Try to create indexes on columns that have integer values rather than character values.
• If you create a composite (multi-column) index, the order of the columns in the key is very important. Try to order the columns in the key so as to enhance selectivity, with the most selective columns leftmost in the key.
• If you want to join several tables, try to create surrogate integer keys for this purpose and create indexes on their columns.
• Create a surrogate integer primary key (an identity column, for example) if your table will not have many insert operations.
• Clustered indexes are preferable to nonclustered ones if you need to select by a range of values or you need to sort the result set with GROUP BY or ORDER BY.
• If your application will be performing the same query over and over on the same table, consider creating a covering index on the table.
• You can use the SQL Server Profiler Create Trace Wizard with “Identify Scans of Large Tables” trace to determine which tables in your database may need indexes. This trace will show which tables are being scanned by queries instead of using an index.
• You can use sp_MSforeachtable undocumented stored procedure to rebuild all indexes in your database. Try to schedule it to execute during CPU idle time and slow production periods.
sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?')"
23. Explain clustered and nonclustered indexes. How do you choose between a clustered index and a nonclustered index?
There are clustered and nonclustered indexes. A clustered index is a special type of index that reorders the way records in the table are physically stored; therefore, a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.
Consider using a clustered index for:
o Columns that contain a large number of distinct values.
o Queries that return a range of values using operators such as BETWEEN, >, >=, <, and <=.
o Columns that are accessed sequentially.
o Queries that return large result sets.
Non-clustered indexes have the same B-tree structure as clustered indexes, with two significant differences:
o The data rows are not sorted and stored in order based on their non-clustered keys.
o The leaf layer of a non-clustered index does not consist of the data pages. Instead, the leaf nodes contain index rows. Each index row contains the non-clustered key value and one or more row locators that point to the data row (or rows if the index is not unique) having the key value.
o A table can have at most 249 nonclustered indexes. A creation sketch for both kinds of index follows below.
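A minimal creation sketch for both kinds of index (table and column names hypothetical):

CREATE CLUSTERED INDEX IX_orders_id ON dbo.orders (order_id)
CREATE NONCLUSTERED INDEX IX_orders_customer ON dbo.orders (customer_id)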
24. How many types of joins are there?
Joins can be categorized as:
• Inner joins (the typical join operation, which uses some comparison operator like = or <>). These include equi-joins and natural joins.
Inner joins use a comparison operator to match rows from two tables based on the values in common columns from each table. For example, retrieving all rows where the student identification number is the same in both the students and courses tables.
• Outer joins. Outer joins can be a left, a right, or full outer join.
Outer joins are specified with one of the following sets of keywords when they are specified in the FROM clause:
• LEFT JOIN or LEFT OUTER JOIN -The result set of a left outer join includes all the rows from the left table specified in the LEFT OUTER clause, not just the ones in which the joined columns match. When a row in the left table has no matching rows in the right table, the associated result set row contains null values for all select list columns coming from the right table.
• RIGHT JOIN or RIGHT OUTER JOIN – A right outer join is the reverse of a left outer join. All rows from the right table are returned. Null values are returned for the left table any time a right table row has no matching row in the left table.
• FULL JOIN or FULL OUTER JOIN – A full outer join returns all rows in both the left and right tables. Any time a row has no match in the other table, the select list columns from the other table contain null values. When there is a match between the tables, the entire result set row contains data values from the base tables.
• Cross joins: a cross join returns all rows from the left table combined with all rows from the right table; each row from the left table is paired with every row from the right table. Cross joins are also called Cartesian products, since joining every row of one table to every row of another table (or to every row of itself) yields a Cartesian product.
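Brief sketches of the main join types, using hypothetical students and courses tables:

SELECT s.name, c.title FROM students s INNER JOIN courses c ON s.student_id = c.student_id
SELECT s.name, c.title FROM students s LEFT OUTER JOIN courses c ON s.student_id = c.student_id -- unmatched students appear with NULL titles
SELECT s.name, c.title FROM students s CROSS JOIN courses c -- Cartesian product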
25. What are the differences between a function and a stored procedure?
  Functions can be used in a SELECT statement, whereas procedures cannot.
  Procedures take both input and output parameters, but functions take only input parameters.
  Functions cannot return values of type text, ntext, image or timestamp, whereas procedures can.
  Functions can be used in column definitions in CREATE TABLE, but procedures cannot.
Eg: CREATE TABLE employee (name VARCHAR(10), salary AS dbo.getsal(name))
Here getsal is a user-defined function that returns a salary value. When the table is created, no storage is allocated for the salary column and the getsal function is not executed; when rows are fetched from the table, getsal is executed and its return value is included in the result set.
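For completeness, a minimal sketch of such a function (the dbo.salaries lookup table is hypothetical; replace it with real salary logic):

CREATE FUNCTION dbo.getsal (@name VARCHAR(10))
RETURNS INT
AS
BEGIN
       -- hypothetical lookup; substitute the actual salary calculation
       RETURN (SELECT salary FROM dbo.salaries WHERE name = @name)
END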
26. What are the basic functions of the master, model, msdb, and tempdb databases?
Microsoft® SQL Server 2000 systems have four system databases:
• Master – The master database records all of the system level information for a SQL Server system. It records all login accounts and all system configuration settings. Master is the database that records the existence of all other databases, including the location of the database files.
• Tempdb – tempdb holds all temporary tables and temporary stored procedures. It also fills any other temporary storage needs such as work tables generated by SQL Server. tempdb is re-created every time SQL Server is started so the system starts with a clean copy of the database.
By default, tempdb auto-grows as needed while SQL Server is running. If the size defined for tempdb is small, part of your system processing load may be taken up by auto-growing tempdb to the size needed to support your workload each time you restart SQL Server. You can avoid this overhead by using ALTER DATABASE to increase the size of tempdb (a sketch follows after this list).
• Model – The model database is used as the template for all databases created on a system. When a CREATE DATABASE statement is issued, the first part of the database is created by copying in the contents of the model database, and then the remainder of the new database is filled with empty pages. Because tempdb is created every time SQL Server is started, the model database must always exist on a SQL Server system.
• Msdb – The msdb database is used by SQL Server Agent for scheduling alerts and jobs, and recording operators.
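A hedged sketch of the ALTER DATABASE approach mentioned under tempdb (tempdev is the default logical name of tempdb's data file; verify yours with sp_helpfile):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 500MB)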
27. 1st Normal Form (1NF)
Definition: A table (relation) is in 1NF if
1. There are no duplicated rows in the table.
2. Each cell is single-valued (i.e., there are no repeating groups or arrays).
3. Entries in a column (attribute, field) are of the same kind.
Note: The order of the rows is immaterial; the order of the columns is immaterial.
Note: The requirement that there be no duplicated rows in the table means that the table has a key (although the key might be made up of more than one column—even, possibly, of all the columns).
Rule 1: Eliminate Repeating Groups. Make a separate table for each set of related attributes, and give each table a primary key.
28. 2nd Normal Form (2NF)
Definition: A table is in 2NF if it is in 1NF and if all non-key attributes are dependent on all of the key.
Note: Since a partial dependency occurs when a non-key attribute is dependent on only a part of the (composite) key, the definition of 2NF is sometimes phrased as, “A table is in 2NF if it is in 1NF and if it has no partial dependencies.”
Rule 2: Eliminate Redundant Data. If an attribute depends on only part of a multi-valued key, remove it to a separate table.
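As a before/after illustration of Rule 2 (hypothetical schema, not meant to be run as one script): if OrderItems has the composite key (OrderID, ProductID) and ProductName depends only on ProductID, move ProductName to its own table.

-- Before (violates 2NF): ProductName depends on only part of the key
CREATE TABLE OrderItems (OrderID INT, ProductID INT, ProductName VARCHAR(50), Qty INT,
       PRIMARY KEY (OrderID, ProductID))
-- After (2NF): the partial dependency is removed
CREATE TABLE Products (ProductID INT PRIMARY KEY, ProductName VARCHAR(50))
CREATE TABLE OrderItems (OrderID INT, ProductID INT, Qty INT,
       PRIMARY KEY (OrderID, ProductID))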
29. 3rd Normal Form (3NF)
Definition: A table is in 3NF if it is in 2NF and if it has no transitive dependencies.
Rule 3: Eliminate Columns Not Dependent on Key. If attributes do not contribute to a description of the key, remove them to a separate table.
Boyce-Codd Normal Form (BCNF)
Definition: A table is in BCNF if it is in 3NF and if every determinant is a candidate key.
30. 4th Normal Form (4NF)
Definition: A table is in 4NF if it is in BCNF and if it has no multi-valued dependencies.
Rule 4: Isolate Independent Multiple Relationships. No table may contain two or more l:n or n:m relationships that are not directly related.
31. 5th Normal Form (5NF)
Definition: A table is in 5NF, also called “Projection-Join Normal Form” (PJNF), if it is in 4NF and if every join dependency in the table is a consequence of the candidate keys of the table.
Rule 5: Isolate Semantically Related Multiple Relationships. There may be practical constraints on information that justify separating logically related many-to-many relationships.
Domain-Key Normal Form (DKNF)
Definition: A table is in DKNF if every constraint on the table is a logical consequence of the definition of keys and domains.
32. Difference between SQL Server 2005 & SQL Server 2008

Feature | SQL 2005 | SQL 2008
Compress backup files | No | Yes
Backup duration (sample) | 5 mins (no compression available) | 3 mins (with compression)
CPU is used to compress the data before it is written to disk | No | Yes
Central Management Server | No | Yes
Policy-Based Management | No | Yes
Reporting Services no longer requires IIS | No | Yes


33. How does snapshot replication work?
Snapshot replication is implemented by the Snapshot Agent and the Distribution Agent. The Snapshot Agent prepares snapshot files containing schema and data of published tables and database objects, stores the files in the snapshot folder, and records synchronization jobs in the distribution database on the Distributor. By default, the snapshot folder is located on the Distributor, but you can specify an alternate location instead of or in addition to the default. For more information, see Alternate Snapshot Locations.

The Distribution Agent moves the snapshot held in the distribution database tables to the destination tables at the Subscribers. The distribution database is used only by replication and does not contain any user tables.
34. How does transactional replication work?
Transactional replication is implemented by the Snapshot Agent, Log Reader Agent, and Distribution Agent. The Snapshot Agent prepares snapshot files containing schema and data of published tables and database objects, stores the files in the snapshot folder, and records synchronization jobs in the distribution database on the Distributor.
The Log Reader Agent monitors the transaction log of each database configured for transactional replication and copies the transactions marked for replication from the transaction log into the distribution database. The Distribution Agent moves the initial snapshot jobs and the transactions held in the distribution database tables to Subscribers.
35. How does merge replication work?
Merge replication is implemented by the Snapshot Agent and Merge Agent. The Snapshot Agent prepares snapshot files containing schema and data of published tables, stores the files in the snapshot folder, and inserts synchronization jobs in the publication database. The Snapshot Agent also creates replication-specific stored procedures, triggers, and system tables.
The Merge Agent applies the initial snapshot jobs held in the publication database tables to the Subscriber. It also merges incremental data changes that occurred at the Publisher or Subscribers after the initial snapshot was created, and reconciles conflicts according to rules you configure or a custom resolver you create.
The role of the Distributor is very limited in merge replication, so implementing the Distributor locally (on the same server as the Publisher) is very common. The Distribution Agent is not used at all during merge replication, and the distribution database on the Distributor stores history and miscellaneous information about merge replication.
36. How does log shipping work?
Log shipping implementation is straightforward:
a. A full backup of the database is taken on the primary server and copied to the standby server.
b. The standby server maintains a copy of the database.
c. The database is not operational; it stays in either read-only mode or no-recovery mode.
d. The transaction log for the "log-shipped" database is backed up on the primary server periodically. Note that only databases in FULL recovery mode can be log-shipped. Transaction log backups are placed on a shared drive; the standby server's SQL Server Agent account must have access to this shared drive.
e. Transaction log backups are copied to the standby server.
f. Transaction log backups are applied to the database on the standby server in the order in which they were taken on the primary server.
g. Either the primary server, the standby server, or a separate server can be used to monitor log shipping. If you use a separate server for monitoring, it does NOT have to have the Enterprise Edition of SQL Server; any edition (other than MSDE) will do.
If the primary server becomes unavailable due to disk failure or some other reason, the DBA can take the following steps to fail the database over to the standby server (a restore sketch follows after these steps):
a. Perform one last backup of the transaction log on the primary server (if possible).
b. Copy all transaction log backups to the standby server and apply them in the same order they were taken on the primary server. The last transaction log backup should be restored by using the WITH RECOVERY clause so that the standby database becomes operational.
c. Transfer any logins that exist on the primary server to the standby server. Only the logins that need access to the log-shipped database must be transferred. This step might be further complicated if logins with the same name exist on both servers; in such cases, the DBA needs to ensure that appropriate mappings exist between SQL Server logins and database users on the standby server.
During the initial configuration of log shipping, you can allow the standby database to assume the primary role. That means you can ship transaction logs from the standby server to the primary server after you have failed the primary database over. So if the primary server (server A) fails over to the standby server (server B), the servers can switch roles so that you can fail server B back to server A if needed.
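A minimal sketch of the restore sequence on the standby server (database and file names hypothetical):

RESTORE LOG MyDatabase FROM DISK = 'D:\LogShip\MyDatabase_log1.trn' WITH NORECOVERY
RESTORE LOG MyDatabase FROM DISK = 'D:\LogShip\MyDatabase_log2.trn' WITH NORECOVERY
RESTORE LOG MyDatabase FROM DISK = 'D:\LogShip\MyDatabase_log3.trn' WITH RECOVERY -- last log brings the database online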
37. Advantages of log shipping
Although log shipping requires the DBA's help to work, it does offer some advantages over other forms of failover:
a. Easy setup. Log shipping can be configured by using the Database Maintenance Plan Wizard. With a few mouse clicks and some careful planning, you can have a dependable failover solution.
b. Standby databases can be available for read-only queries. In some environments, standby servers are used for reporting. Note, however, that a log-shipped database will NOT be available while transaction logs are applied to it. Therefore, if you take transaction log backups every hour, and if applying the backup to the standby server takes 10 minutes, the log-shipped database will be available for queries only 50 minutes out of every hour.
c. Low maintenance. Log shipping usually works well; if one of the steps fails, SQL Server picks up exactly where it left off, and very little troubleshooting is required.
d. Multiple standby servers can be configured. You can ship the transaction logs from the primary server to multiple standby servers. This way, you further reduce the chances of downtime. In addition, each standby server can be used for a different purpose: one server can be used for reporting and another one for providing high availability.
38. Disadvantages of log shipping
Unfortunately, log shipping does have a few issues that the DBA should be aware of prior to relying on this method of failover:
a. Possible data loss when the primary server fails. If the primary server becomes completely unusable, transactions that occurred after the last transaction log backup that was copied to the standby server are lost. For example, suppose that server A fails at 5 a.m. and you cannot get to it at all. If the last backup copied to server B was taken at 4:45 a.m., all transactions that occurred between 4:45 a.m. and 5 a.m. are lost forever.
b. Some manual DBA work is required to bring the standby server online, as discussed in this article.
c. Log shipping setup cannot be scripted. This means that you cannot mimic the production environment for testing purposes without going through the wizard screens.
d. The Enterprise Edition of SQL Server 2000 is required on the primary and standby servers. If you run any other version/edition of SQL Server, you're out of luck. The Developer Edition can be used to learn how to set up log shipping, but it cannot be used in a production environment. Note, however, that log shipping is merely an automated way of copying transaction log backups, so a savvy DBA can easily set up jobs to accomplish the same functionality.
e. Difficult troubleshooting. Log shipping usually works very well, but if there are problems they are difficult to troubleshoot; documentation is sparse and typically not helpful for solving a particular problem. Fortunately, Microsoft's Knowledge Base articles have good information for troubleshooting log-shipping issues.
f. Each database that needs to be log-shipped must be set up through a separate maintenance plan.
39. Log Shipping vs. Replication

Sl no | Log shipping | Replication
1 | Log shipping can back up once every minute, and the copy and load frequency can also be every minute | If you use transactional replication or merge replication, the latency can be as low as a few seconds
2 | Log shipping and snapshot replication do not alter the publisher's schema | Updating subscribers (transactional and snapshot) and merge replication will add a GUID column (with the rowguid property) if there isn't one there already
3 | Log shipping makes no alterations to the schema | Snapshot and transactional replication may make subtle schema changes
4 | There are no schema requirements for log shipping | Transactional replication requires primary keys on table articles
5 | Log shipping takes the whole set of tables, stored procedures, triggers etc. existing in the published database | Replication has the granularity to select articles individually
6 | Log shipping restores the logs to the standby server with No Recovery or with Standby; both options disallow edits to the data | Replication doesn't enforce such a restriction, so explicit permissions would be required to prevent changes to subscriber data

  40. Question: Is the use of software fault-tolerant disk sets for cluster storage supported?

    Answer: No.
  41. Question: Does SQL Server 2000 clustering support load balancing?
    Answer: SQL Server 2000 clustering does not provide load balancing; it provides failover support. To achieve load balancing, you need software that balances the load between clusters, not between servers within a cluster.
  42. Question: Does SQL Server 2000 Full-Text Search support clustering?
    Answer: Yes.
  43. Question: Is it necessary to configure MSDTC as a clustered resource for SQL Server clustering to work?
    Answer: No. MSDTC is required only if you need the ability to perform distributed transactions on your cluster. If you intend to use linked servers or perform replication (snapshot/transactional with immediate/queued updating subscribers), MSDTC is required. For more information, see "Distributed Transactions architecture" in SQL Server 2000 Books Online.
  44. Question: How do I upgrade Microsoft SQL Server 6.5 or 7.0 to SQL Server 2000 when the version 6.5 or 7.0 server is part of a cluster?
    Answer: If your SQL Server 6.5 or 7.0 server is part of a cluster, you must perform the following steps to upgrade the server to SQL Server 2000:
    1. Remove clustering of SQL Server 6.5 or 7.0 on both nodes.
    2. Cluster a group for use with Microsoft Distributed Transaction Coordinator (MS DTC) as described in the "Failover Clustering Dependencies" section of SQL Server 2000 Books Online.
    3. Install SQL Server 2000 on the primary node in the clustered configuration.
    4. For more information, see the following topics in SQL Server Books Online:
       • How to upgrade from a SQL Server 6.5 active/passive failover cluster (Setup)
       • How to upgrade from a SQL Server 6.5 active/active failover cluster (Setup)
       • How to upgrade from a SQL Server 7.0 active/passive failover cluster (Setup)
       • How to upgrade from a SQL Server 7.0 active/active failover cluster (Setup)
  45. Question: What hardware is required to run SQL Server 2000 in a clustering environment?
    Answer: Refer to the Microsoft Cluster Server Administrator's Guide for a list of supported hardware configurations and hardware configuration information.

    Note: Search by using the word "cluster" because individual components cannot be combined to create a supported system.
  46. Question: Can SQL Server 6.5 or 7.0 be installed on one node of a cluster and SQL Server 2000 be installed on the other node?
    Answer: No. For two SQL Server servers to participate in a cluster, they must be the same version.
  47. Question: Where do I place the SQL Server 2000 files to provide failover support?
    Answer: SQL Server setup installs a new instance of SQL Server binaries on the local disk of each computer in the cluster and installs the system databases on the specified shared cluster disk. The binaries are installed in exactly the same path on each cluster node, so you must ensure that each node has a local drive letter in common with all of the other nodes in the cluster.
  48. Question: How do I shut down SQL Server from the command line without the Cluster Service interpreting the shutdown as a failure?
    Answer: The proper way to shut down SQL Server from the command line is to use the Cluster.exe application that comes with Microsoft Windows NT Enterprise Edition, Windows 2000 Advanced Server, or Microsoft Windows 2000 Data Center. Cluster.exe is installed as part of the normal Cluster setup, which can also be run on a Microsoft Windows NT Workstation, Microsoft Windows 2000 Professional, or Microsoft Windows 2000 Server member server computer to install just the Cluster Administrator and the other administrative applications. The basic syntax for this command is as follows.
cluster [cluster name] RESOURCE [resource name] /option

In the preceding command syntax, the /option switch controls this functionality. The specific options to be used are "/online" and "/offline". These two options are equivalent to the commands "net start mssqlserver" (the method to start SQL Server from the command line) and "net stop mssqlserver" (the method to shut down SQL Server from the command line) for a non-virtualized server, respectively. You can perform this procedure on the Generic Service, the SQL Server Agent, and the SQL Server resources.

Following are some examples of this command syntax. To take the SQL Server 2000 resource offline if the cluster name is "SQLCluster" and the resource is named "VirtualSQL" (where VirtualSQL is the name of the SQL Server 2000 resource, not the virtual network name resource):

cluster "SQLCluster" resource "VirtualSQL" /offline

To bring the SQL Server 2000 resource back online:

cluster "SQLCluster" resource "VirtualSQL" /online

  49. Question: Can a clustered instance of a SQL Server 2000 virtual server and a stand-alone instance of SQL Server 2000 coexist on the same computer?
    Answer: Yes. Several SQL Server 2000 virtual servers and stand-alone instances can coexist on the same computer. However, only the SQL Server 2000 virtual servers will fail over to other nodes in the cluster. The issues to consider are the same as those when multiple instances of SQL Server 2000 reside on the same computer. Consider the following issues:
    1. A SQL Server instance name must be unique on any particular node. This includes the default instance. Even if the node does not currently own an instance, the node has registry keys and directories whose names include the instance name if it is a configured owner of an instance.
    2. Also, resources on the computer are shared among the instances that currently are active on the computer. Make sure that you understand these resources.

50. List of Threshold Values for SQL Server Performance Counters

Monitoring database performance is a proactive task that a DBA should do. Possible bottlenecks for SQL Server are memory, processor, the I/O subsystem, the tempdb database, and locking. The list below is a minimum set of counters that should be used for monitoring SQL Server performance daily. The preferred values given are guidelines only; each database environment is unique, so treat them as a reference for your initial baseline rather than as hard limits. A query for reading these counters from inside SQL Server appears after the list.
1. SQL Server: Access Methods with counter: Forwarded Records/sec
Preferred value: less than 10 per 100 Batch Requests/sec

2. SQL Server: Access Methods with counter: Full Scans/sec
Preferred value: (Index Searches/sec)/(Full Scans/sec) should be greater than 1000

3. SQL Server: Access Methods with counter: Index Searches/sec
Preferred value: same as point 2

4. SQL Server: Access Methods with counter: Page Splits/sec
Preferred value: less than 20 per 100 Batch Requests/sec

5. SQL Server: Buffer Manager with counter: Buffer Cache Hit Ratio
Preferred value: greater than 90%

6. SQL Server: Buffer Manager with counter: Free List Stalls/sec
Preferred value: less than 2

7. SQL Server: Buffer Manager with counter: Free Pages
Preferred value: greater than 640

8. SQL Server: Buffer Manager with counter: Lazy Writes/sec
Preferred value: less than 20

9. SQL Server: Buffer Manager with counter: Page Life Expectancy
Preferred value: greater than 600 seconds

10. SQL Server: Buffer Manager with counter: Page Lookups/sec
Preferred value: (Page Lookups/sec)/(Batch Requests/sec) should be less than 100

11. SQL Server: Buffer Manager with counter: Page Reads/sec
Preferred value: less than 90

12. SQL Server: Buffer Manager with counter: Page Writes/sec
Preferred value: same as point 11

13. SQL Server: General Statistics with counter: Logins/sec
Preferred value: less than 2

14. SQL Server: General Statistics with counter: Logouts/sec
Preferred value: less than 2

15. SQL Server: Latches with counter: Latch Waits/sec
Preferred value: (Total Latch Wait Time)/(Latch Waits/sec) less than 10

16. SQL Server: Latches with counter: Total Latch Wait Time (ms)
Preferred value: same as point 15

17. SQL Server: Locks with counter: Lock Wait Time (ms)
Preferred value: should not exceed 60 seconds

18. SQL Server: Locks with counter: Lock Waits/sec
Preferred value: should be zero

19. SQL Server: Locks with counter: Number of Deadlocks/sec
Preferred value: less than 1

20. SQL Server: SQL Statistics with counter: SQL Compilations/sec
Preferred value: less than 10% of the number of Batch Requests/sec

21. SQL Server: SQL Statistics with counter: SQL Re-Compilations/sec
Preferred value: less than 10% of the number of SQL Compilations/sec

22. Physical Disk: % Disk Time
Preferred value: Disk Time counter below 2.0

23. Physical Disk: Current Disk Queue Length
Preferred value: a Disk Queue Length above 3.0 indicates a possible disk I/O bottleneck

24. Processor: % Processor Time
Preferred value: any processor with a consistent load above 80% is maxed out and needs upgrading

25. % Usage of page file
Preferred value: below 70% is fine
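On SQL Server 2005 and later, many of the SQL Server counters above can also be read from inside the engine via a DMV (a minimal sketch; ratio counters such as Buffer Cache Hit Ratio must be divided by their base counter to get a percentage):

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Lock Waits/sec', 'SQL Compilations/sec')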