
# Safely Manage Customer Records with T-SQL
In this Code Lab, you will learn to manage data safely and effectively in SQL Server using T-SQL. You will begin by performing fundamental data modification tasks using INSERT, UPDATE, and DELETE. You will then progress to implementing professional-grade safety protocols by first previewing changes with SELECT statements to prevent common errors. Next, you will learn to audit modifications in real-time by capturing row-level changes with the OUTPUT clause. Finally, you will synthesize these skills by wrapping your logic in atomic transactions and building a resilient stored procedure with integrated error handling to ensure data integrity.

## Step 1: Inserting Data into a Table
Inserting new records into a database is one of the most common tasks for any SQL practitioner. Whether you’re adding customers, products, or transactions, it’s important to understand the syntax options available to you and how to structure your queries safely and efficiently.
In this step, you’ll learn how to:
- Insert a single row with explicit column targeting
- Add multiple rows using the table value constructor
- Transfer data from a temporary staging table into a permanent one
These techniques form the foundation for adding data into SQL Server tables — a skill you'll use frequently when managing or transforming relational data.
### What You’ll Learn in This Step

You’ll learn how to:
- Use `INSERT INTO ... VALUES` to add a single row
- Insert multiple rows in one query using `VALUES (...), (...)`
- Use `INSERT INTO ... SELECT` to migrate data from one table to another
info> You will write all of your T-SQL answers directly in the `1-step-one.sql` file provided in this environment. When you are ready to run the query, simply press Run in the Terminal tab. This will automatically execute the contents of `1-step-one.sql` against the database and display the results. You do not need to type any commands in the terminal manually.
### Optional: How to Log Into the Database Manually

You don’t need to do this for this lab, and manual sessions will not be validated. However, if you’d like to explore connecting to the database directly, here’s how to do it using the `sqlcmd` tool.

#### Database Login and Basics

To connect to the SQL Server manually, run this command in the terminal:

```shell
sqlcmd -S localhost -U SA -P 'P@ssword1'
```
#### What Happens Next

Once connected, you’ll see a prompt that looks like this:

```
1>
```

This means SQL Server is ready to accept commands.

Every query you type must end with `GO` on its own line. Example:

```sql
SELECT @@VERSION;
GO
```

To exit, type:

```
QUIT
```

Again, this is purely optional and not needed to complete the tasks; it’s just here if you’d like to explore running SQL queries directly.
Database Schema
dbo.Customers Table
| Column Name | Data Type |
| ----------- | --------- |
| CustomerID  | INT       |
| FirstName   | NVARCHAR  |
| LastName    | NVARCHAR  |
| Email       | NVARCHAR  |
| Phone       | NVARCHAR  |
dbo.ChangeLog Table
| Column Name | Data Type |
| ----------- | --------- |
| LogID       | INT       |
| CustomerID  | INT       |
| ChangeType  | NVARCHAR  |
| OldValue    | NVARCHAR  |
| NewValue    | NVARCHAR  |
| ChangeDate  | DATE      |
### Understanding Explicit INSERT Syntax in Real-World Contexts

Adding new records into a production database requires precision and predictability. The `INSERT INTO ... VALUES` statement is the standard method for adding new data in SQL Server. It explicitly defines which columns receive values and in what order.

This approach is required in professional environments where:
- Certain columns do not allow `NULL` values
- Some columns contain default constraints
- Table schemas can change over time

By declaring the target columns explicitly, you ensure that each value maps to the correct field regardless of schema modifications.
Base syntax:
```sql
INSERT INTO TableName (Column1, Column2, ...)
VALUES (Value1, Value2, ...);
GO
```
Applied to the `Customers` table, which contains columns such as `CustomerID`, `FirstName`, `LastName`, `Email`, and `Phone`, an example insert statement would be:

```sql
INSERT INTO Customers (CustomerID, FirstName, LastName, Email, Phone)
VALUES (10001, 'First', 'Last', '[email protected]', '555-0101');
GO
```
This pattern is used whenever a single record must be inserted directly into a table. It is standard practice during administrative updates, migrations, or controlled data entry operations.
Note: SQL Server allows omitting the column list if all table columns are provided in order. However, the explicit form is required in this lab and is considered best practice for maintainability.

### Using Table Value Constructors to Insert Multiple Rows
SQL Server allows you to insert multiple records into a table using a single `INSERT INTO ... VALUES` statement. This approach is known as a table value constructor. It is useful for minimizing code duplication and reducing the number of statements executed on the server.

Instead of writing multiple `INSERT` statements, you can provide a single column list followed by a group of value sets:

```sql
INSERT INTO TableName (Column1, Column2)
VALUES
    (ValueSet1_Col1, ValueSet1_Col2),
    (ValueSet2_Col1, ValueSet2_Col2);
GO
```
This format is commonly used to:
- Seed a small dataset into a table
- Onboard multiple new records in a controlled deployment
- Reduce network round-trips and improve readability
In the context of this lab, the `Customers` table contains the columns `CustomerID`, `FirstName`, `LastName`, `Email`, and `Phone`. To insert more than one row at a time into this table, your syntax should match the following example:

```sql
INSERT INTO Customers (CustomerID, FirstName, LastName, Email, Phone)
VALUES
    (10001, 'First', 'Last', '[email protected]', '555-0101'),
    (10002, 'Second', 'User', '[email protected]', '555-0102');
GO
```
Note: Although the SQL Server engine supports inserting each row in a separate `INSERT` statement, using a table value constructor is more efficient and preferred when the data is available up front. The lab tasks that follow will expect this pattern.

### Moving Data Between Tables Using INSERT INTO ... SELECT

In enterprise workflows, new data is often staged temporarily before it's written into production tables. SQL Server allows you to implement this pattern using `INSERT INTO ... SELECT`.

This technique copies rows from a source table (often a temporary table like `#StagingCustomers`) into a destination table like `Customers`. It's frequently used in:
- ETL (Extract, Transform, Load) pipelines
- Data correction workflows
- Integration from temporary APIs or upload tools
Here’s the general syntax:

```sql
INSERT INTO TargetTable (Col1, Col2, ...)
SELECT Col1, Col2, ...
FROM SourceTable;
GO
```
This pattern requires that:
- The column list in the INSERT matches the number and order of the selected columns
- The data types are compatible between source and target tables
For example, to copy one record from a staging table into the `Customers` table:

```sql
INSERT INTO Customers (CustomerID, FirstName, LastName, Email, Phone)
SELECT CustomerID, FirstName, LastName, Email, Phone
FROM #StagingCustomers;
GO
```
If your staging table contains multiple rows, this query will insert all of them. If you want to filter specific rows, you can add a `WHERE` clause to the `SELECT`.

Note: This pattern has multiple valid forms. For instance, you can omit the column list if the source and target tables have the same columns in the same order, but explicitly listing columns is considered best practice in most professional environments.
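For illustration, a filtered transfer might look like the following sketch; the `WHERE` condition is hypothetical and not part of the lab tasks:

```sql
-- Copy only staged rows above a threshold (hypothetical filter)
INSERT INTO Customers (CustomerID, FirstName, LastName, Email, Phone)
SELECT CustomerID, FirstName, LastName, Email, Phone
FROM #StagingCustomers
WHERE CustomerID > 10000;
GO
```

Any condition valid in a standalone `SELECT` can be used here, so the same preview-then-apply habit from later steps applies to filtered inserts as well.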
## Step 2: Safely Modifying and Deleting Data
Updating or deleting records in a database can introduce risk if the query affects more rows than intended. A missing `WHERE` clause or incorrect condition can unintentionally modify the entire table.

This step introduces a defensive workflow that helps prevent unintended changes:
- First, preview the rows you plan to modify or delete using a `SELECT` statement.
- Once verified, convert that `SELECT` into an `UPDATE` or `DELETE`, ensuring it targets only the expected rows.

This pattern is especially important when working in systems that do not support easy rollback or when manipulating production data.
### What You’ll Learn in This Step

You’ll learn how to:
- Use `SELECT` to validate a target record before modification
- Write safe, scoped `UPDATE` and `DELETE` queries using `WHERE`
- Confirm the behavior of destructive operations through previews
info> You will write all of your T-SQL answers directly in the `2-step-two.sql` file provided in this environment. When you are ready to run the query, simply press Run in the Terminal tab. This will automatically execute the contents of `2-step-two.sql` against the database and display the results. You do not need to type any commands in the terminal manually.
Database Schema
dbo.Customers Table
| Column Name | Data Type |
| ----------- | --------- |
| CustomerID  | INT       |
| FirstName   | NVARCHAR  |
| LastName    | NVARCHAR  |
| Email       | NVARCHAR  |
| Phone       | NVARCHAR  |
dbo.ChangeLog Table
| Column Name | Data Type |
| ----------- | --------- |
| LogID       | INT       |
| CustomerID  | INT       |
| ChangeType  | NVARCHAR  |
| OldValue    | NVARCHAR  |
| NewValue    | NVARCHAR  |
| ChangeDate  | DATE      |
### Using SELECT to Preview Rows Before Modification

When performing data modifications in SQL Server, it is critical to confirm that your `WHERE` clause isolates only the intended rows. A common professional practice is to write a `SELECT` statement first to preview the data that would be affected by a later `UPDATE` or `DELETE`.

This technique reduces risk in environments where:
- Data recovery is limited or unavailable
- Unintended changes could lead to compliance or reporting issues
- Modifications must be peer-reviewed before execution

Example:

```sql
SELECT *
FROM Table_name
WHERE Column_name = '###';
GO
```
Once the query returns the correct row(s), you can use the same `WHERE` clause in a subsequent `UPDATE` or `DELETE`.

This preview-first approach supports traceability, validation, and safer workflows. In this task, you will practice using `SELECT` to confirm the scope of a potential update operation.

Note: While multiple variations of filtering logic could return the desired row, your solution should use a precise and unambiguous condition to minimize risk.

### Converting a Verified SELECT Into an UPDATE Statement
Once you have confirmed the scope of your target row using a `SELECT` query, the next step is to use that exact filter condition in an `UPDATE` statement. This ensures that only the intended row is modified.

The `UPDATE` statement uses the following syntax:

```sql
UPDATE TableName
SET Column1 = Value1, Column2 = Value2
WHERE FilterCondition;
GO
```
It is critical that the `WHERE` clause remains unchanged from the verified `SELECT`, unless the goal of the update has changed. Failing to include a `WHERE` clause will update every row in the table, which is a common and serious error in SQL workflows.

For example, to update the `Email` of a specific customer:

```sql
UPDATE Customers
SET Email = '[email protected]'
WHERE CustomerID = 10001;
GO
```
This practice is especially important in customer-facing applications where contact data must be accurate and traceable. Maintaining consistency between preview and update ensures the safety of the modification.

### Validating a Deletion Target Before Execution
Before performing a `DELETE` in SQL Server, you should always confirm the record you intend to remove. This prevents accidental data loss, especially in production environments where deleted data cannot easily be recovered.

The standard practice is to first run a `SELECT` query with the exact `WHERE` clause you plan to use in the `DELETE`. This ensures that the query affects only the correct record.

For example:

```sql
SELECT * FROM Customers WHERE CustomerID = 10002;
GO
```

Once the row is verified, it can safely be deleted with:

```sql
DELETE FROM Customers WHERE CustomerID = 10002;
GO
```
In this task, you'll stage a record into the `Customers` table and confirm that your `SELECT` correctly isolates it for deletion.

Alternative approaches exist for identifying rows to delete:
- You can use multiple filters, such as combining `FirstName` and `Email`
- You can also use `TOP(1)` with a defined ordering in limited scenarios to constrain scope

Why filter by multiple columns or use TOP?
Filtering by multiple columns can add safety when IDs are unknown or when rows are inserted manually for testing. `TOP(1)` is sometimes used in cleanup scripts but is risky if the sort order is not guaranteed. In general, a direct and unique identifier (like `CustomerID`) is preferred for deletions.

The `DELETE` statement permanently removes one or more rows from a table. Because of this, it must always include a `WHERE` clause that filters the target rows precisely.

If no filter is provided, SQL Server removes all rows from the table. This occurs because the `DELETE` statement operates on the full result set produced by the filter condition. Understanding this behavior is essential for controlling query scope.

Standard syntax:

```sql
DELETE FROM TableName
WHERE FilterCondition;
GO
```
The reliability of a `DELETE` depends entirely on the accuracy of the filter logic. The filter should first be tested with a `SELECT` statement to confirm that it isolates only the intended rows. After verification, the same filter can be reused in the `DELETE` statement.

Example:

```sql
DELETE FROM Customers
WHERE CustomerID = 10005;
GO
```

This removes exactly one record if `CustomerID` is unique in the table.

In addition to filtering by a unique key, SQL Server supports other valid approaches for row deletion.

Alternative delete strategies explained
A compound filter such as `WHERE FirstName = 'Sam' AND Email = '[email protected]'` can be used when a unique identifier is unavailable. Another option is `DELETE TOP (1)` combined with a defined ordering, which limits deletion to one row when duplicates are possible. Both methods are valid but rely on the data model and the consistency of field values across rows.
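One wrinkle worth knowing: `DELETE` does not accept an `ORDER BY` clause directly, so the ordered-`TOP` strategy is usually written with a common table expression. The following is a sketch, assuming duplicate rows exist for a customer named Sam; the filter values are illustrative:

```sql
-- Delete only the lowest-ID matching row, leaving any duplicates in place
WITH TargetRow AS (
    SELECT TOP (1) *
    FROM Customers
    WHERE FirstName = 'Sam'
    ORDER BY CustomerID
)
DELETE FROM TargetRow;
GO
```

Deleting through the CTE removes exactly the row the `TOP (1) ... ORDER BY` selected, which makes the scope deterministic even when the filter alone matches several rows.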
## Step 3: Auditing Changes with the OUTPUT Clause
Data modifications in SQL Server occur silently by default. When an `UPDATE`, `INSERT`, or `DELETE` executes, the change happens within the engine without directly showing which rows were affected.

The `OUTPUT` clause provides visibility into these changes by exposing two logical tables that represent the row states before and after a data modification:
- `DELETED`: Contains the column values before the change
- `INSERTED`: Contains the column values after the change

By combining these tables, you can observe how a record transitions during an operation, or even store those transitions in an audit table for long-term tracking.
### What You’ll Learn in This Step

You’ll learn how to:
- Use the `OUTPUT` clause to view row-level changes as they occur
- Access both pre-update and post-update values through `DELETED` and `INSERTED`
- Write those changes to an audit table for long-term storage
- Verify logged entries using a `SELECT` query
info> You will write all of your T-SQL answers directly in the `3-step-three.sql` file provided in this environment. When you are ready to run the query, simply press Run in the Terminal tab. This will automatically execute the contents of `3-step-three.sql` against the database and display the results. You do not need to type any commands in the terminal manually.
Database Schema
dbo.Customers Table
| Column Name | Data Type |
| ----------- | --------- |
| CustomerID  | INT       |
| FirstName   | NVARCHAR  |
| LastName    | NVARCHAR  |
| Email       | NVARCHAR  |
| Phone       | NVARCHAR  |
dbo.ChangeLog Table
| Column Name | Data Type |
| ----------- | --------- |
| LogID       | INT       |
| CustomerID  | INT       |
| ChangeType  | NVARCHAR  |
| OldValue    | NVARCHAR  |
| NewValue    | NVARCHAR  |
| ChangeDate  | DATE      |
### Understanding the OUTPUT Clause and Row Version Tracking

When a data modification occurs in SQL Server, the engine keeps temporary versions of each affected row. These are exposed through two logical tables that exist only during the statement’s execution:
- `DELETED`: The version of the row before the modification
- `INSERTED`: The version of the row after the modification

The `OUTPUT` clause reads from these tables to show what changed as the command runs. It can either display the results in the query output or send them into another table.

Generic syntax pattern:

```sql
UPDATE table_name
SET column_name = new_value
OUTPUT DELETED.column_name, INSERTED.column_name
WHERE filter_condition;
GO
```
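As an aside, `OUTPUT` is not limited to `UPDATE`; a `DELETE` can expose the removed values through `DELETED` as well. A brief sketch (the customer ID is illustrative, and this is not one of the lab tasks):

```sql
-- Show which row was removed as the DELETE executes
DELETE FROM Customers
OUTPUT DELETED.CustomerID, DELETED.Email
WHERE CustomerID = 10002;
GO
```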
Each row modified by the `UPDATE` produces one result row containing its previous and new values. This helps confirm what data was changed and provides a foundation for auditing.

How SQL Server Generates INSERTED and DELETED
When an `UPDATE` runs, SQL Server writes both the old and new row versions to the transaction log.
- The **old version** is captured in `DELETED`.
- The **new version** is captured in `INSERTED`.

These logical tables exist only for the duration of the statement and are automatically cleared when it finishes.

The `OUTPUT` clause can be extended with `INTO` to store affected-row data in another table.
This approach allows you to create a permanent audit trail for each change without requiring a separate insert operation. When you use `OUTPUT INTO`, the results from the `DELETED` and `INSERTED` logical tables are written directly into the destination table as part of the same transaction.

Generic syntax pattern:

```sql
UPDATE table_name
SET column_name = new_value
OUTPUT INSERTED.key_column, 'UPDATE', DELETED.column_name, INSERTED.column_name, GETDATE()
INTO audit_table (KeyColumn, ChangeType, OldValue, NewValue, ChangeDate)
WHERE filter_condition;
GO
```
Every updated row produces one record in the audit table. Because both operations occur within the same transaction, if the `UPDATE` fails or is rolled back, the corresponding audit entry is also removed.

How SQL Server Executes OUTPUT INTO
During execution, SQL Server writes each affected row into the audit table as the modification occurs. Both the `UPDATE` and the `OUTPUT INTO` actions share the same transaction context, ensuring consistency. This guarantees that only committed updates appear in the audit log.

After recording changes with the `OUTPUT INTO` clause, you should confirm that those entries were successfully written to the audit table. This verification step ensures that both the data update and the audit log entry occurred as part of the same transaction.

Verification uses a simple `SELECT` statement to read from the audit table. Filtering by a specific identifier allows you to confirm that the correct record was captured.

Generic syntax pattern:

```sql
SELECT *
FROM audit_table_name
WHERE key_column = value;
GO
```
When you run this query, it returns the log entries associated with that record, showing what changed and when. A complete audit table typically contains columns such as `CustomerID`, `ChangeType`, `OldValue`, `NewValue`, and `ChangeDate`.

How Audit Verification Confirms Transactional Consistency
The `OUTPUT INTO` clause operates within the same transaction as the data modification. If an update is rolled back, its corresponding audit entry is also removed. Running a verification query immediately after an update confirms that both changes were committed successfully and remain synchronized.
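Applied to this lab's schema, the audit-and-verify workflow could be sketched as follows. The customer ID and new email value are illustrative, and the email literal mirrors the redacted placeholders used elsewhere in this lab:

```sql
-- Update a customer and capture old/new emails in ChangeLog in one statement
UPDATE Customers
SET Email = '[email protected]'
OUTPUT INSERTED.CustomerID, 'UPDATE', DELETED.Email, INSERTED.Email, GETDATE()
INTO ChangeLog (CustomerID, ChangeType, OldValue, NewValue, ChangeDate)
WHERE CustomerID = 10001;
GO

-- Confirm the audit entry was written
SELECT * FROM ChangeLog WHERE CustomerID = 10001;
GO
```

Because the `OUTPUT INTO` rides on the same statement as the `UPDATE`, the second query should show exactly one new `ChangeLog` row per customer row the update touched.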
## Step 4: Ensuring Data Integrity with Transactions
In SQL Server, a transaction is a logical unit of work that groups multiple statements together so they either all succeed or all fail. This ensures that data remains consistent even if an error occurs partway through a process.

Transactions follow the ACID principles (Atomicity, Consistency, Isolation, and Durability), ensuring that the database never ends up in a partial or invalid state. By using `BEGIN TRANSACTION`, `COMMIT`, and `ROLLBACK`, you can explicitly control when a group of statements is finalized or undone.

In this step, you will perform and test transactions that modify both the `Customers` and `ChangeLog` tables.

### What You’ll Learn in This Step
You’ll learn how to:
- Use `BEGIN TRANSACTION` and `COMMIT TRANSACTION` to group multiple changes into one atomic operation
- Use `ROLLBACK TRANSACTION` to undo pending modifications before they are committed
- Verify that uncommitted changes are not saved to the database after a rollback
info> You will write all of your T-SQL answers directly in the `4-step-four.sql` file provided in this environment. When you are ready to run the query, simply press Run in the Terminal tab. This will automatically execute the contents of `4-step-four.sql` against the database and display the results. You do not need to type any commands in the terminal manually.
Database Schema
dbo.Customers Table
| Column Name | Data Type |
| ----------- | --------- |
| CustomerID  | INT       |
| FirstName   | NVARCHAR  |
| LastName    | NVARCHAR  |
| Email       | NVARCHAR  |
| Phone       | NVARCHAR  |
dbo.ChangeLog Table
| Column Name | Data Type |
| ----------- | --------- |
| LogID       | INT       |
| CustomerID  | INT       |
| ChangeType  | NVARCHAR  |
| OldValue    | NVARCHAR  |
| NewValue    | NVARCHAR  |
| ChangeDate  | DATE      |
### Grouping Changes with BEGIN TRANSACTION and COMMIT

A transaction ensures that multiple related operations are treated as a single, atomic unit of work. If all statements within the transaction succeed, the changes are committed together. If any statement fails, the transaction can be rolled back, leaving the database unchanged.

In SQL Server, transactions are explicitly started with `BEGIN TRANSACTION` and finalized with either `COMMIT TRANSACTION` or `ROLLBACK TRANSACTION`.

Generic syntax pattern:

```sql
BEGIN TRANSACTION;

-- First statement
UPDATE table_name
SET column_name = new_value
WHERE filter_condition;

-- Second statement
INSERT INTO audit_table (KeyColumn, ChangeType, OldValue, NewValue, ChangeDate)
VALUES (value1, 'UPDATE', old_value, new_value, GETDATE());

COMMIT TRANSACTION;
GO
```
When the transaction is committed, all changes become permanent in the database. If a failure occurs before `COMMIT TRANSACTION`, nothing is saved.

How SQL Server Manages Transactional Consistency
SQL Server assigns a unique transaction ID to every open transaction. Changes are written to the transaction log but are not applied to data pages permanently until a `COMMIT` is issued. This ensures that incomplete or failed transactions cannot leave the database in an inconsistent state.

A rollback cancels all operations performed within an open transaction, restoring the database to its previous state. This feature is critical for maintaining data integrity when a logic error, constraint violation, or unexpected condition occurs during a series of operations.

A transaction remains open until explicitly committed or rolled back. If you issue a `ROLLBACK TRANSACTION` before `COMMIT`, all pending changes are discarded.

Generic syntax pattern:

```sql
BEGIN TRANSACTION;

UPDATE table_name
SET column_name = new_value
WHERE filter_condition;

-- Optional audit insert or additional operations
INSERT INTO audit_table (KeyColumn, ChangeType, OldValue, NewValue, ChangeDate)
VALUES (value1, 'UPDATE', old_value, new_value, GETDATE());

ROLLBACK TRANSACTION;
GO
```
When the `ROLLBACK TRANSACTION` command runs, none of the changes take effect, even if multiple statements were executed within the same transaction.

How SQL Server Performs a Rollback
SQL Server uses the transaction log to record every operation performed during a transaction. When a rollback occurs, the engine reads those entries in reverse order and restores each modified page to its original state. This process ensures complete reversal of all uncommitted work.

After performing a rollback, no data modifications should remain in the database. A rollback cancels every change made during the active transaction, restoring all affected rows to their previous state. Verification ensures that the data integrity principles of atomicity and consistency are preserved.

To confirm that a rollback succeeded, you can query the table that was modified and compare the current values to their pre-transaction state.

Generic syntax pattern:

```sql
SELECT column_name_1, column_name_2, ...
FROM table_name
WHERE key_column = value;
GO
```

If the rollback was successful, the values in the result set should match their original state before the transaction began.

How SQL Server Ensures Data Reversal After Rollback
SQL Server maintains before-images of all modified pages in the transaction log. During a rollback, it replays those log entries in reverse order to restore the previous committed version of each affected record. This guarantees that no partial or uncommitted updates remain in the database.
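This behavior can be observed with a small experiment like the sketch below; it assumes a customer with ID 10001 exists from earlier steps and that '555-0199' is not its current phone number:

```sql
BEGIN TRANSACTION;

UPDATE Customers
SET Phone = '555-0199'
WHERE CustomerID = 10001;

-- Inside the open transaction, this session sees the new value
SELECT Phone FROM Customers WHERE CustomerID = 10001;

ROLLBACK TRANSACTION;

-- After the rollback, the original value is restored
SELECT Phone FROM Customers WHERE CustomerID = 10001;
GO
```

Comparing the two `SELECT` results side by side makes the atomicity guarantee concrete: the change was real while the transaction was open, and gone once it was rolled back.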
## Step 5: Creating a Resilient Data Management Procedure
A stored procedure allows you to encapsulate multiple SQL statements into a reusable, secure, and consistent operation. When combined with error handling and transactions, it provides a reliable way to modify data safely and maintain a complete audit trail.

This step combines everything learned so far:
- Using `UPDATE` to modify customer records
- Logging changes to the `ChangeLog` table
- Managing transactions with `BEGIN TRANSACTION` and `COMMIT`
- Handling runtime errors with `TRY...CATCH`

By the end of this step, you will have created a stored procedure that updates a customer’s contact information, records the change in the audit log, and ensures that any failure results in a rollback.
### What You’ll Learn in This Step

You’ll learn how to:
- Define and create a stored procedure using parameters
- Handle runtime errors with `TRY...CATCH` blocks
- Use transactions within stored procedures for data integrity
- Combine update and audit logic in one atomic operation
- Test both successful and failed execution paths
info> You will write all of your T-SQL answers directly in the `5-step-five.sql` file provided in this environment. When you are ready to run the query, simply press Run in the Terminal tab. This will automatically execute the contents of `5-step-five.sql` against the database and display the results. You do not need to type any commands in the terminal manually.
Database Schema
dbo.Customers Table
| Column Name | Data Type |
| ----------- | --------- |
| CustomerID  | INT       |
| FirstName   | NVARCHAR  |
| LastName    | NVARCHAR  |
| Email       | NVARCHAR  |
| Phone       | NVARCHAR  |
dbo.ChangeLog Table
| Column Name | Data Type |
| ----------- | --------- |
| LogID       | INT       |
| CustomerID  | INT       |
| ChangeType  | NVARCHAR  |
| OldValue    | NVARCHAR  |
| NewValue    | NVARCHAR  |
| ChangeDate  | DATE      |
### Defining and Creating a Stored Procedure

A stored procedure is a precompiled SQL program that performs a defined operation. It can accept parameters, execute one or more statements, and return results or status codes. Stored procedures promote consistency, security, and maintainability by encapsulating logic that would otherwise be repeated across multiple scripts.

Generic syntax pattern:

```sql
CREATE PROCEDURE procedure_name
    @parameter1 data_type,
    @parameter2 data_type
AS
BEGIN
    -- Statement or block of logic
    UPDATE table_name
    SET column_name = @parameter2
    WHERE key_column = @parameter1;
END;
GO
```
When executed, parameters are passed in using the `EXEC` command:

```sql
EXEC procedure_name @parameter1 = value1, @parameter2 = value2;
GO
```

How SQL Server Stores and Executes Procedures
When a stored procedure is created, SQL Server compiles the SQL code into an execution plan and stores it in the database. When executed, it uses that plan directly, improving performance and consistency. Parameters allow input values to be validated and passed dynamically at runtime.

In SQL Server, you can use the `TRY...CATCH` construct to gracefully handle errors that occur during execution. When an error is raised inside the `TRY` block, control immediately transfers to the `CATCH` block, where you can log, roll back, or report the failure.

Generic syntax pattern:
```sql
BEGIN TRY
    -- Statements that might fail
    UPDATE table_name
    SET column_name = value
    WHERE key_column = condition;
END TRY
BEGIN CATCH
    -- Error handling logic
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO
```
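To watch control transfer to the `CATCH` block, you can force a runtime error; division by zero is a convenient trigger. This is a standalone demo, not one of the lab tasks:

```sql
BEGIN TRY
    -- Division by zero raises a runtime error, so control jumps to CATCH
    SELECT 1 / 0 AS WillNeverReturn;
END TRY
BEGIN CATCH
    -- The error functions report what went wrong
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO
```

Running this returns a single result set from the `CATCH` block rather than terminating the batch, which is exactly the behavior the lab relies on for graceful error handling.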
This structure allows controlled error management without interrupting execution flow. The `CATCH` block can contain recovery actions, audit inserts, or notifications.

Common Use Cases for TRY...CATCH in Data Operations
1. Detecting violations such as duplicate keys or null constraint failures.
2. Logging error messages and timestamps into an error tracking table.
3. Ensuring that uncommitted transactions are rolled back when an error occurs.

Transactions inside stored procedures ensure that all included statements succeed or fail as one unit of work. By combining `BEGIN TRANSACTION` with `TRY...CATCH`, you can ensure that successful operations are committed, while errors trigger a rollback.

Generic syntax pattern:
```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- Statements that modify data
    UPDATE table_name
    SET column_name = value
    WHERE key_column = condition;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO
```
In this structure:
- `BEGIN TRANSACTION` starts a new unit of work.
- `COMMIT TRANSACTION` finalizes changes if no error occurs.
- If an exception is raised, control jumps to the `CATCH` block, where `ROLLBACK TRANSACTION` reverts any changes.

Why Use Transactions in Stored Procedures
Transactions ensure **atomicity**: either every operation in a group succeeds, or none do. This prevents partial updates that could lead to data inconsistencies, particularly when modifying multiple related tables like `Customers` and `ChangeLog`.

You can extend the transactional procedure to log every change automatically as it occurs. The `OUTPUT INTO` clause can write both the old and new values of each modified record into an audit table while the transaction runs. When combined with `TRY...CATCH` and transaction control, this ensures that both the data modification and its audit entry succeed or fail together.

Generic syntax pattern:
```sql
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE table_name
    SET column_name = new_value
    OUTPUT INSERTED.key_column, 'UPDATE', DELETED.column_name, INSERTED.column_name, GETDATE()
    INTO audit_table (KeyColumn, ChangeType, OldValue, NewValue, ChangeDate)
    WHERE key_column = @parameter;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO
```
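Putting these pieces together, a complete procedure for this lab's schema might look like the sketch below. The procedure name `usp_UpdateCustomerEmail`, the parameter names, and the `NVARCHAR(100)` length are assumptions for illustration, not values prescribed by the lab:

```sql
CREATE PROCEDURE usp_UpdateCustomerEmail  -- hypothetical name
    @CustomerID INT,
    @NewEmail NVARCHAR(100)               -- length assumed for illustration
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        -- Update the customer and write the audit row in the same statement
        UPDATE Customers
        SET Email = @NewEmail
        OUTPUT INSERTED.CustomerID, 'UPDATE', DELETED.Email, INSERTED.Email, GETDATE()
        INTO ChangeLog (CustomerID, ChangeType, OldValue, NewValue, ChangeDate)
        WHERE CustomerID = @CustomerID;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
    END CATCH;
END;
GO
```

The `IF @@TRANCOUNT > 0` guard prevents a secondary error if the failure occurs before the transaction actually opens.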
This approach provides full data traceability: the modification is immediately captured in the `audit_table`, and if any step fails, both operations roll back automatically.

Why Combine Update and Audit in One Procedure
Combining these operations ensures that no audit record is written without a corresponding change, and vice versa. It enforces **consistency** and **completeness** in systems where audit data must mirror transactional activity exactly.

Testing ensures that your stored procedure behaves predictably under both normal and exceptional conditions. You should confirm that:
- Valid input parameters produce the expected updates and audit entries.
- Invalid or failing operations trigger the `CATCH` block and roll back the transaction correctly.

A well-structured procedure should never leave partial changes in the database, even if an error occurs.

Generic syntax pattern for testing:

```sql
-- Successful execution
EXEC procedure_name @parameter1 = valid_value, @parameter2 = valid_value, @parameter3 = valid_value;
GO

-- Failing execution
EXEC procedure_name @parameter1 = invalid_value, @parameter2 = valid_value, @parameter3 = valid_value;
GO
```
How to Interpret Test Results
- In the **success** case, both the target table and the audit table should reflect the change.
- In the **failure** case, the `CATCH` block should return an error number and message, and neither table should be modified.

This validates that your `TRY...CATCH` and transactional logic are functioning correctly.

In this lab, you worked through the complete process of managing customer data safely and reliably in SQL Server. You learned how to:
- Insert new records using explicit `INSERT` statements and table value constructors.
- Modify and remove data while validating targets with `SELECT` previews.
- Audit row-level changes using the `OUTPUT` and `OUTPUT INTO` clauses.
- Protect data integrity through `BEGIN TRANSACTION`, `COMMIT`, and `ROLLBACK`.
- Build a resilient stored procedure that combines updates, auditing, transactions, and error handling.

Each of these concepts contributes to a professional workflow for maintaining data accuracy, traceability, and consistency in production environments.