Showing posts with label SQL.

Thursday, September 27, 2012

Various object existence checks in Sql Server

Here are the basic scripts you need to check for the existence of various objects in a Sql Server database (tables, columns, foreign keys, etc.).

Check if Database exists:

IF EXISTS (SELECT name FROM sys.databases WHERE name = N'DataBaseName')
BEGIN
    -- Database exists
END

Check if table exists:

IF  EXISTS (SELECT * FROM sys.objects 
WHERE object_id = OBJECT_ID(N'[schema].[tableName]')
AND type in (N'U'))
BEGIN
    --Table exists
END

Check if foreign key exists:

IF  EXISTS (SELECT * FROM sys.foreign_keys
WHERE object_id = OBJECT_ID(N'[schema].[foreignKeyName]')
AND parent_object_id = OBJECT_ID(N'[schema].[tableName]'))
BEGIN
    -- Foreign key exists
END

Check if index exists:

IF  EXISTS (SELECT * FROM sys.indexes
WHERE object_id = OBJECT_ID(N'[schema].[tableName]')
AND name = N'IndexName')
BEGIN
    -- Index exists
END

Check if view exists:

IF  EXISTS (SELECT * FROM sys.views 
WHERE object_id = OBJECT_ID(N'[schema].[viewName]'))
BEGIN
    -- View exists
END

Check if column exists:

IF EXISTS (SELECT * FROM sys.columns
WHERE name = N'columnName'
AND object_id = OBJECT_ID(N'tableName'))
BEGIN
    -- Column exists
END

Check if Stored Proc exists:

IF EXISTS (SELECT * FROM sys.objects
WHERE object_id = OBJECT_ID(N'[Schema].[StoredProcName]')
AND type IN (N'P', N'PC'))
BEGIN
    -- Stored Proc exists
END

Check if Function exists:

IF EXISTS (SELECT * FROM sys.objects
WHERE object_id = OBJECT_ID(N'[Schema].[FunctionName]')
AND type IN (N'FN', N'IF', N'TF', N'FS', N'FT'))
BEGIN
    -- Function exists
END

Wednesday, August 08, 2012

Migrate Reporting Services to another machine–Reporting Services Scripter

Came across this nifty little tool called “Reporting Services Scripter”, which can be used to migrate RDLs to a different machine. It can also move other settings, such as schedules. Another cool feature is that you can use it to migrate RDLs from a 2005 machine to a 2008 machine.

Download it from: http://www.sqldbatips.com/showarticle.asp?ID=62

Wednesday, July 11, 2012

Sql Server–Convert UTC time to Local Time

Here is a quick expression that you can plug into your sql query to convert a field that has UTC time to local time:

DATEADD(hour, DATEDIFF(hour, GETUTCDATE(), GETDATE()), [Timestamp]) AS ltime

Where Timestamp should be replaced by the field in your table that contains the date-time in UTC. Note that this approach does not account for daylight saving time.
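For example, assuming a hypothetical table dbo.EventLog with a UTC column named CreatedOn (names are placeholders, not from the original post):

```sql
-- Hypothetical table/column names for illustration
SELECT EventId,
       CreatedOn AS UtcTime,
       DATEADD(hour, DATEDIFF(hour, GETUTCDATE(), GETDATE()), CreatedOn) AS LocalTime
FROM dbo.EventLog;
```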

Wednesday, June 06, 2012

Tip for determining connection strings

Here is a nice little cheat for setting up a connection string:

  1. Create a text file and rename it so that it has a file extension of “udl”.
  2. Double-click the file: the “Data Link Properties” dialog will open.
  3. Set up your connection parameters by going through all the tabs in the dialog.
  4. Click OK to save the connection settings.
  5. Open the UDL file in Notepad. Voila, you have your connection string!
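For illustration, the saved .udl file will contain something along these lines (the provider, server, and database names below are just placeholders):

```
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyDatabase;Data Source=MyServer
```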

Sunday, February 26, 2012

Powershell & SMO–Copy and attach database

Mission: Copy mdf and ldf files from a source location to a destination computer and then attach them to the Sql Server instance on that computer:

Script:

#### Variables that need to be specified....

##variables needed for copy: source
$mdfSourcePath = "\\SourceServer\backup\myDb.mdf";
$ldfSourcePath = "\\SourceServer\backup\myDb.ldf";

##variables needed for copy: destination
$mdfCopyToDestinationPath ="\\DestinationServer\d$\MSSQLData";
$ldfCopyToDestinationPath ="\\DestinationServer\l$\MSSQLLog";

##variables regarding the destination database
$databaseServer = "DestinationSqlServer";
$databaseName = "databaseName";
$mdfDestinationFolderForAttach = "d:\MsSqlData\";
$ldfDestinationFolderForAttach = "l:\MsSqlLog\"

##### end of variables that need to be specified


$mdfFileName = [System.IO.Path]::GetFileName($mdfSourcePath);
$ldfFileName = [System.IO.Path]::GetFileName($ldfSourcePath);

cls;


####### Drop the destination database if it already exists

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
$sqlServerSmo = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server ($databaseServer)

Write-Host "Checking to see if $databaseName needs to be dropped....";
if ($sqlServerSmo.databases[$databaseName] -ne $null)
{
  Write-Host "Dropping database $databaseName on server $databaseServer"
  $sqlServerSmo.KillAllProcesses($databaseName)
  $sqlServerSmo.databases[$databaseName].drop()
  Write-Host "Database $databaseName on server $databaseServer has been dropped"
}
else
{
    Write-Host "Database $databaseName does not exist on server $databaseServer"
}
Write-Host "";
######### Copy files from source to destination

Write-Host "Copying mdf $mdfSourcePath to $mdfCopyToDestinationPath...."
Copy-Item $mdfSourcePath $mdfCopyToDestinationPath -Force
Write-Host "Copy complete!"

Write-Host "Copying ldf $ldfSourcePath to $ldfCopyToDestinationPath...."
Copy-Item $ldfSourcePath $ldfCopyToDestinationPath -Force
Write-Host "Copy complete!"
Write-Host "";
#######  Attach the database to destination server


$datafile = [System.IO.Path]::Combine($mdfDestinationFolderForAttach, $mdfFileName);
$logfile = [System.IO.Path]::Combine($ldfDestinationFolderForAttach, $ldfFileName);

$sc = new-object System.Collections.Specialized.StringCollection;
$sc.Add($datafile) | Out-Null;
$sc.Add($logfile) | Out-Null;

Write-Host "Attaching $datafile and $logfile to $databaseServer....";
$Error.Clear();
try
{
    $sqlServerSmo.AttachDatabase($databaseName, $sc);
}
catch
{
    Write-Host $_.Exception;
    if ($Error.Count -gt 0)
    {
        Write-Host "Error Information" -BackgroundColor Red;
        $error[0] | fl -force ;
    }
}

Write-Host "Completed!!!" -BackgroundColor DarkGreen;

Read-Host "Press Enter to continue....";

Monday, November 14, 2011

SqlBulkCopy error: given value cannot be converted to type xxxx

If you get the following error when using SqlBulkCopy: “The given value of type String from the data source cannot be converted to type int of the specified target column.”

Then check for two things:

Make sure that you are not setting more characters than the field allows (i.e., the column allows only 5 characters and you set it to 6 characters).

If you are not setting all the columns, make sure you use “SqlBulkCopyColumnMapping” to set up the mapping (especially when you are inserting rows without setting the primary key value because it is set up as an identity column).
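As a sketch of the second point, the mapping can be set up like this (the table and column names here are hypothetical, not from the original post):

```csharp
// Map only the columns being supplied; the identity primary key
// column is deliberately left unmapped so Sql Server generates it.
// Without explicit mappings, SqlBulkCopy maps columns by ordinal
// position, which can misalign columns and trigger the
// "given value cannot be converted" error.
using (var bulkCopy = new SqlBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "dbo.MyTable";
    bulkCopy.ColumnMappings.Add("Name", "Name");
    bulkCopy.ColumnMappings.Add("Quantity", "Quantity");
    bulkCopy.WriteToServer(dataTable);
}
```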

Tuesday, August 16, 2011

Time oriented databases

Came across this old PDF; despite its age, it has lots of ideas regarding creating databases that are time-oriented:

Developing Time-Oriented Database Applications in SQL: http://www.cs.arizona.edu/~rts/tdbbook.pdf

More info:

http://www.simple-talk.com/sql/t-sql-programming/temporal-data-techniques-in-sql-/

Monday, August 01, 2011

PowerShell 64 bit and Sql Cmdlets

I was getting the following error when I tried to execute a script that tried to load the Sql Powershell cmdlets.

Add-PSSnapin : No snap-ins have been registered for Windows PowerShell version 2.
This error occurred only on 64-bit machines and only when I ran the 64-bit version of Powershell.
To get around it, I had to go through the following two steps:

Step 1: update the registry to insert an entry in the Wow6432Node hive.
After installing Sql Management Studio on the machine where you are seeing this error, run the following registry script to set up the registry correctly.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\PowerShell\1\ShellIds\Microsoft.SqlServer.Management.PowerShell.sqlps]
"Path"="C:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\SQLPS.exe"
"ExecutionPolicy"="RemoteSigned"

Step 2: Install and add the powershell cmdlets.

Next run the following Powershell script to install the Sql powershell cmdlets.

#
# Add the SQL Server Provider.
#
if ($SqlServerMaximumChildItems -ne $Null -and $SqlServerMaximumChildItems -ge 0)
{
    write-host '$SqlServerMaximumChildItems is defined - sql provider is already loaded. Exiting load routines....'
    exit
}
else
{
    write-host '$SqlServerMaximumChildItems is not defined - continuing to check if sql provider needs to be loaded'
}
$ErrorActionPreference = "Stop"
$sqlpsreg="HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.SqlServer.Management.PowerShell.sqlps"
if (Get-ChildItem $sqlpsreg -ErrorAction "SilentlyContinue")
{
    throw "SQL Server Provider for Windows PowerShell is not installed."
}
else
{
    $item = Get-ItemProperty $sqlpsreg
    $sqlpsPath = [System.IO.Path]::GetDirectoryName($item.Path)
}

#
# Set mandatory variables for the SQL Server provider
#
Set-Variable -scope Global -name SqlServerMaximumChildItems -Value 0
Set-Variable -scope Global -name SqlServerConnectionTimeout -Value 30
Set-Variable -scope Global -name SqlServerIncludeSystemObjects -Value $false
Set-Variable -scope Global -name SqlServerMaximumTabCompletion -Value 1000

Push-Location
cd $sqlpsPath
$framework=$([System.Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory())
Set-Alias installutil "$($framework)installutil.exe"
#the following needs to be done only once on any machine - the script must be run
#in a Powershell window that has been spawned with admin privileges

installutil Microsoft.SqlServer.Management.PSSnapins.dll
installutil Microsoft.SqlServer.Management.PSProvider.dll

# Load SqlServerProviderSnapin100
if (!(Get-PSSnapin | ?{$_.name -eq 'SqlServerProviderSnapin100'}))
{
    if(Get-PSSnapin -registered | ?{$_.name -eq 'SqlServerProviderSnapin100'})
    {
       Add-PSSnapin SqlServerProviderSnapin100
       write-host "Loading SqlServerProviderSnapin100 in session"
    }
    else
    {
       write-host "SqlServerProviderSnapin100 is not registered with the system." -Backgroundcolor Red -Foregroundcolor White
       break
    }
}
else
{
  write-host "SqlServerProviderSnapin100 is already loaded"
}

# Load SqlServerCmdletSnapin100
if (!(Get-PSSnapin | ?{$_.name -eq 'SqlServerCmdletSnapin100'}))
{
    if(Get-PSSnapin -registered | ?{$_.name -eq 'SqlServerCmdletSnapin100'})
    {
       Add-PSSnapin SqlServerCmdletSnapin100
       write-host "Loading SqlServerCmdletSnapin100 in session"
    }
    else
    {
       write-host "SqlServerCmdletSnapin100 is not registered with the system."
       break
    }
}
else
{
  write-host "SqlServerCmdletSnapin100 is already loaded"
}

Pop-Location
write-host 'initializeSqlServerProvider.Ps1 has completed'

Thursday, July 28, 2011

Sql scripts: SetVar, :r and spaces in file paths

Had some issues getting a SqlCmd variable working with the :r SqlCmd command.

Here is how I got it working finally:

:setvar databaseScriptFolder "C:\My Folder\"
:r $(databaseScriptFolder)"test script.sql"

Thursday, June 02, 2011

Limit maximum memory available to Sql-Server

Here is a script that can be used to limit the maximum memory available to Sql-Server

EXEC sys.sp_configure N'max server memory (MB)', N'8192'
GO
RECONFIGURE WITH OVERRIDE
GO

Where 8192 MB is 8 GB in the above example.
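To verify the change afterwards, you can query sys.configurations (note: 'max server memory (MB)' is an advanced option, so 'show advanced options' may need to be enabled before running sp_configure):

```sql
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';
```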

Tuesday, April 19, 2011

Determining the progress of a database restore process

A useful script for determining the progress of a database restore process.
SELECT  r.session_id,r.command,
	r.start_time,
	CONVERT(NUMERIC(6,2),r.percent_complete) AS [Percent Complete],
	CONVERT(VARCHAR(20),DATEADD(ms,r.estimated_completion_time,GetDate()),20) AS [ETA Completion Time],
	CONVERT(NUMERIC(10,2),r.total_elapsed_time/1000.0/60.0) AS [Elapsed Min],
	CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0) AS [ETA Min],
	CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0/60.0) AS [ETA Hours],
	CONVERT(VARCHAR(1000),
	(
		SELECT SUBSTRING(text,r.statement_start_offset/2,
			CASE  WHEN r.statement_end_offset = -1 THEN 1000  ELSE (r.statement_end_offset-r.statement_start_offset)/2  END)
		FROM sys.dm_exec_sql_text(sql_handle))
	)
	FROM sys.dm_exec_requests r WHERE command IN ('RESTORE DATABASE','BACKUP DATABASE')

Getting Restore FileListOnly data into a table

The Restore FileListOnly command is useful for determining the logical name, physical name, etc., of a backed-up database (*.bak) file.

Running “Restore FileListOnly” outputs a result set, and I needed to get that data into a table so as to be able to use it in an actual restore script.

Here is how you do it:

--First create a table variable with all the fields that Restore FileListOnly returns:
--list of fields and their definitions pulled for Sql Server 2008 from (http://msdn.microsoft.com/en-us/library/ms173778.aspx)

DECLARE @restoreFileListData table (
    LogicalName nvarchar(128),
    PhysicalName nvarchar(260),
    [Type] char(1),
    FileGroupName nvarchar(128),
    Size numeric(20,0),
    MaxSize numeric(20,0),
    FileId bigint,
    CreateLSN numeric(25,0),
    DropLSN numeric(25,0) null,
    UniqueID UniqueIdentifier,
    ReadOnlyLSN numeric(25,0) null,
    ReadWriteLSN numeric(25,0) null,
    BackupSizeInBytes bigint,
    SourceBlockSize int,
    FileGroupID int,
    LogGroupGUID uniqueIdentifier null,
    DifferentialBaseLSN numeric(25,0)NULL,
    DifferentialBaseGUID uniqueIdentifier,
    IsReadOnly bit,
    IsPresent bit,
    TDEThumbPrint varbinary(32));

--Now execute the command and put the data into the table variable:

declare @cmd varchar(max)
set @cmd = 'RESTORE FILELISTONLY FROM  DISK = ''complete path to the bak file'''
insert into @restoreFileListData exec (@cmd);
select *  from @restoreFileListData t;
-- data is now available in the table variable

Thursday, April 14, 2011

select @@SERVERNAME not returning the correct server name

On my machine select @@SERVERNAME was returning “Windows7” instead of the actual server name.

I wanted to use @@ServerName for some script validation and hence needed it to return the correct machine name.

Here is what I had to do:

EXEC  Sp_dropserver 'Windows7'
EXEC  Sp_dropserver 'machineName' --this is not necessary, but on my machine I had a weird conflict and hence had to do it
EXEC  Sp_addserver 'machineName', 'local' --the local argument is REQUIRED

Finally restart your SqlServer instance.

Now run the following command to make sure everything worked correctly.

SELECT @@SERVERNAME AS SqlServerThinksTheServerIs, Serverproperty('Servername') AS theActualServerNameIs

The @@ServerName is especially important for the database deployment scripts created by Visual Studio’s database project as it uses it to validate that the script is being run on the correct machine.

Tuesday, March 29, 2011

Setting up a role with execute permissions using SSMS

I have always created roles with execute permissions only through SQL scripts, and the reason for this is that I could never figure out how to do it via Sql Management Studio. Well, today I figured it out, so here are the steps for setting up a role with execute permissions using Sql Server Management Studio (SSMS).

Why? If you want to control what permissions your users have to the database, and want to control it through roles. I used it to define a separate role for users who need execute permissions on all the stored procs in the database (instead of per stored proc).

  1. Right click on the Roles node under Security for your database and select New Database Role.
  2. Set up the General properties page.
  3. On the Securables page, click Search, select all objects of a type, and click OK.
  4. In the next window, select Databases and click OK.
  5. Back on the Securables page, select the database row that just got added and, in the “Permissions for {DatabaseName}” list, select Grant for the Execute permission.

When you click OK, your role will be created; the steps are equivalent to the following script.

USE [databaseName]
GO
CREATE ROLE [db_execute_role] AUTHORIZATION [dbo]
GO
use [databaseName]
GO
GRANT EXECUTE TO [db_execute_role]
GO

Note: this may not be the most secure option for your purposes, but the steps can easily be modified to reduce the level at which the execute permissions are granted (these steps define it at the database level).

Saturday, February 05, 2011

Sql Server Exception: Database name is not allowed with a table-valued parameter

While working on persisting some business entities to a SQL Server database using “user-defined table-types” (UDTs), I was getting the following cryptic error.

“The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Table-valued parameter 1 ("@MyTable"), row 0, column 0: Data type 0xF3 (user-defined table type) has a non-zero length database name specified. Database name is not allowed with a table-valued parameter, only schema name and type name are valid.”

In my case, I was using the Enterprise Library to call a stored procedure and was passing a DataTable to it. The problem seems to be that EntLib does not correctly handle DataTable parameters passed to a stored proc that expects a UDT (especially when you use one of the Execute overloads that take an array of objects as parameters).

Instead, I had to manually create the SqlParameters that needed to be sent to my stored procedure and set the SqlDbType to Structured for the DataTable being passed to the stored proc that expected a UDT. Once that was done, the above exception went away.

Here is some sample code: (where database is of type EnterpriseLibrary.Data.Database)


            DbCommand dbCommand = database.GetStoredProcCommand(procName);
            try
            {
                dbCommand.Parameters.Add(new SqlParameter("UDTParam1", dataTable) { SqlDbType = SqlDbType.Structured });
                DataSet result = database.ExecuteDataSet(dbCommand);
                …..
            }
            catch (SqlException ex)
            {
                …
            }

Monday, January 31, 2011

Sql Server–Predefined Profiler Templates

Sql Server Profiler comes with some predefined templates that provide a good starting point for diagnosing issues. These templates can be found in the following folder: <Program Files (x86)>\Microsoft SQL Server\100\Tools\Profiler\Templates\Microsoft SQL Server\100

The predefined templates can be viewed/edited from within SqlProfiler tool via the “trace template properties” dialog: (File->Templates->Edit)

Here is what each of the templates can be used for: (from http://sqlserverpedia.com/wiki/Profiler_Trace_Templates)

Stored procedure counts template (SP_Counts.tdf)

This contains a single event of SP:Starting and data columns of EventClass, ServerName, DatabaseID, ObjectID and SPID. This trace is very simple: it records each time a stored procedure is executed along with object identifier of the stored procedure found in the sysobjects system table. Data is ordered by the object identifier of the stored procedure, so it is easy to count the number of executions of each procedure. This trace is useful for identifying stored procedures that are executed most often; these would be good candidates for optimization.

Standard template (Standard.tdf)

This contains the following events:
  • Audit Login
  • Audit Logout
  • Existing Connection
  • RPC: Completed
  • SQL: Batch Completed

With these events, the Standard template can be used for auditing or for tuning stored procedures and SQL statements.

This template tracks the login name, NT user name, start time, CPU time used, duration, reads and writes performed by each event, application name and the text of the event. Since this trace covers such a wide range of events and data columns it's a good starting point for beginners. Most traces that start out with the Standard template will have to be customized to fit particular needs.

TSQL template (TSQL.tdf)

This collects statements in the way they were submitted. This trace is a fine way to view the system activity. The events are almost identical to the Standard template, with the exception of RPC: Completed, which is replaced by RPC: Starting. If you anticipate other events happening on your system you should modify this template accordingly. The only data columns collected by TSQL template (other than required SPID and EventClass) are text data and start time.

TSQL by duration template (TSQL_Duration.tdf)

This shows the SQL statements issued and the number of milliseconds each statement took. Data is ordered by the duration column starting from the least to the greatest. The only two events traced by this template by default are RPC: Completed and SQL: Batch Completed. The collected data columns include text data and duration. This trace can be helpful in tracking down the statements that take longest to complete and are therefore good candidates for tuning.

Grouped TSQL template (TSQL_Grouped.tdf)

This is almost identical to the TSQL template since it collects the same events and data columns. The difference is that the trace records are grouped by application name, NT user name, login name and client process ID. This template can be useful when troubleshooting issues encountered by a particular user or group of users. You could use Grouped TSQL template, for example, when you're troubleshooting blocking locks or slow performance as reported only by a few users.

TSQL for Replay template (TSQL_Replay.tdf)

This collects a wealth of detailed information about SQL statements executed against SQL Server. This trace contains all the events necessary to replay the trace later on the same or a different server. You could replay an existing trace for a number of reasons - for testing new functionality for performance, or to see if blocking locks occur again when the same set of statements is executed after changing indexes, and so forth.

TSQL within Stored Procedures template (TSQL_SPs.tdf)

This shows you SQL statements executed by each stored procedure. This template is great for debugging poorly performing stored procedures; you might wish to add the duration data column to this template since it is important to know which statement took a long time to execute. This template is also useful for debugging nested stored procedures (that is, procedures that call other procedures). Events collected by TSQL within Stored Procedures are nearly identical to those of TSQL template, with the addition of SP: Stmt Starting. Data is ordered by the start time of each statement.

Tuning template (Tuning.tdf)

This tracks stored procedures and SQL statements executed against SQL Server. This template includes the duration column by default, allowing you to quickly pinpoint long running queries.

More info:

MSDN: http://msdn.microsoft.com/en-us/library/ms190176.aspx

Thursday, January 20, 2011

Sql Server: Convert UTC date-time to Local date-time

This is a quick and DIRTY way to calculate the local date-time for a UTC date-time value (SQL-Server doesn’t seem to have an in-built method to calculate this).

DATEADD(hh,DATEDIFF(hh,GETUTCDATE(), GETDATE()),<TimeValue>)

Important: this should not be used in production, as it does not take daylight saving time into account (for example, when the time value is from a date in DST but the current date-time is not in DST).

Tuesday, November 30, 2010

Sql Server–Execute Permissions

One way to provide users execute permissions on a database: (works in Sql Server 2005 and above)
Create a role and grant it execute permissions. Then add your user to that role.
/* CREATE A NEW ROLE */
CREATE ROLE db_executor

/* GRANT EXECUTE TO THE ROLE */
GRANT EXECUTE TO db_executor
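
Then add the user to the role (UserName below is a placeholder; sp_addrolemember works on Sql Server 2005 and above):

```sql
/* ADD A USER TO THE ROLE */
EXEC sp_addrolemember N'db_executor', N'UserName'
```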

Monday, October 25, 2010

Sql Server–Determining version of client drivers

via: http://blogs.msdn.com/b/sqlcat/archive/2010/10/26/how-to-tell-which-version-of-sql-server-data-access-driver-is-used-by-an-application-client.aspx
T-Sql script to determine the version of SqlServer drivers being used by the client:
SELECT session_id, protocol_type,
driver_version = CASE CONVERT(CHAR(4), CAST(protocol_version AS BINARY(4)), 1)
WHEN '0x70' THEN 'SQL Server 7.0'
WHEN '0x71' THEN 'SQL Server 2000'
WHEN '0x72' THEN 'SQL Server 2005'
WHEN '0x73' THEN 'SQL Server 2008'
ELSE 'Unknown driver'
END
FROM sys.dm_exec_connections


Another useful script (this one provides you a string formatted like this: "SQL Server 10.0.2531.0 - SP1 (Developer Edition (64-bit))")

SELECT 'SQL Server '
    + CAST(SERVERPROPERTY('productversion') AS VARCHAR) + ' - '
    + CAST(SERVERPROPERTY('productlevel') AS VARCHAR) + ' ('
    + CAST(SERVERPROPERTY('edition') AS VARCHAR) + ')'

Wednesday, October 13, 2010

SQL Server–Performance when using indexes

Just some rules of thumb: (left is better than right)

Seek < Scan

Index < Table

Clustered Index < Index (non-clustered)

Clustered Index Seek < Index Seek < Index Scan < Clustered Index Scan < Table Scan

  1. CLUSTERED INDEX SEEK - Very fast and efficient: the table is physically ordered according to the specified column(s), so SQL can go straight to the rows AND, since all of the data is physically ordered, pull it sequentially.
  2. INDEX SEEK - Fast and efficient. SQL Server knows pretty much where the data is and can go directly to it and seek out the rows it needs. The data isn't ordered in the DB by the fields in the index, so it's likely not pulling the data sequentially like it does in a CLUSTERED INDEX SEEK, but it can still translate to a massive improvement in query execution cost/speed.
  3. INDEX SCAN - Still uses the index, but SQL doesn't know exactly where the data is, so it may scan the whole index or a partial range of the index to find its data. Can be slow and costly, but it still scans the index as opposed to scanning the physical table.
  4. CLUSTERED INDEX SCAN - Pretty much the same as a TABLE SCAN for performance, because SQL has to physically search through every single row in the clustered index (which really IS the table).
  5. TABLE SCAN - Slow and inefficient. SQL has to physically look at every single row in the table. You shouldn't see this often; if you do, you may need to rewrite something or add indexes.
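
As a rough sketch of how the shape of the predicate affects this (assuming a hypothetical Orders table with a nonclustered index on CustomerId; names are placeholders):

```sql
-- Likely an INDEX SEEK: the predicate can use the index directly
SELECT OrderId FROM dbo.Orders WHERE CustomerId = 42;

-- Likely a SCAN: wrapping the indexed column in a function
-- prevents Sql Server from seeking on it
SELECT OrderId FROM dbo.Orders WHERE ABS(CustomerId) = 42;
```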