SQL Stored Procedures
The Problem: Repeated SQL in Applications
An application processes orders: check inventory, deduct stock, create order, log transaction. The same 120-line SQL block is duplicated in every API endpoint, batch job, and mobile app, and every change must be applied to each copy — the copies inevitably drift out of sync.
Stored procedures solve this by storing SQL logic inside the database. Applications call a single name, the database executes the pre-stored logic.
What is a Stored Procedure?
A stored procedure is a named block of SQL code stored in the database, executable on demand. It accepts input/output parameters and contains control-flow logic. Key characteristics:
- Pre-compiled — execution plan is cached and reused
- Parameterized — accepts input and output parameters
- Encapsulated — hides implementation from callers
- Transactional — can wrap multiple statements in a transaction
Basic Syntax
Examples use SQL Server syntax. Key differences for PostgreSQL/MySQL are noted.
Creating a stored procedure
-- SQL Server
CREATE PROCEDURE GetEmployeesByDepartment
@DeptId INT
AS
BEGIN
SELECT id, name, salary
FROM employees
WHERE dept_id = @DeptId
ORDER BY name;
END;
-- PostgreSQL (has CREATE PROCEDURE since v11, but only a FUNCTION can return a result set)
CREATE OR REPLACE FUNCTION GetEmployeesByDepartment(dept_id INT)
RETURNS TABLE(id INT, name VARCHAR, salary DECIMAL)
LANGUAGE sql AS $$
SELECT id, name, salary FROM employees
WHERE dept_id = $1 ORDER BY name;
$$;
-- MySQL (parameter renamed: with IN dept_id, the condition dept_id = dept_id
-- compares the parameter to itself and matches every row)
DELIMITER //
CREATE PROCEDURE GetEmployeesByDepartment(IN p_dept_id INT)
BEGIN
SELECT id, name, salary FROM employees
WHERE dept_id = p_dept_id ORDER BY name;
END //
DELIMITER ;
Executing, modifying, and dropping
-- Execute
EXEC GetEmployeesByDepartment @DeptId = 10; -- SQL Server
SELECT * FROM GetEmployeesByDepartment(10); -- PostgreSQL
CALL GetEmployeesByDepartment(10); -- MySQL
-- Alter
ALTER PROCEDURE GetEmployeesByDepartment
@DeptId INT
AS
BEGIN
SELECT id, name, salary, hire_date
FROM employees WHERE dept_id = @DeptId
ORDER BY hire_date DESC;
END;
-- Drop
DROP PROCEDURE GetEmployeesByDepartment; -- SQL Server / MySQL
DROP FUNCTION GetEmployeesByDepartment; -- PostgreSQL
Parameters
Input parameters with optional filters
CREATE PROCEDURE SearchEmployees
@NameFilter NVARCHAR(100) = NULL,
@MinSalary DECIMAL(18,2) = 0,
@DeptId INT = NULL
AS
BEGIN
SELECT id, name, salary, dept_id
FROM employees
WHERE (@NameFilter IS NULL OR name LIKE '%' + @NameFilter + '%')
AND salary >= @MinSalary
AND (@DeptId IS NULL OR dept_id = @DeptId)
ORDER BY salary DESC;
END;
EXEC SearchEmployees @NameFilter = 'Alice', @MinSalary = 50000, @DeptId = 10;
EXEC SearchEmployees @DeptId = 10; -- defaults: NameFilter=NULL, MinSalary=0
EXEC SearchEmployees; -- returns all employees
Use NULL defaults for optional parameters. The WHERE (@Param IS NULL OR column = @Param) pattern makes each filter optional — when NULL is passed, that condition is skipped.
Output parameters
CREATE PROCEDURE GetDepartmentStats
@DeptId INT,
@EmployeeCount INT OUTPUT,
@AvgSalary DECIMAL(18,2) OUTPUT
AS
BEGIN
SELECT @EmployeeCount = COUNT(*),
@AvgSalary = AVG(salary)
FROM employees WHERE dept_id = @DeptId;
END;
DECLARE @Count INT, @Avg DECIMAL(18,2);
EXEC GetDepartmentStats
@DeptId = 10,
@EmployeeCount = @Count OUTPUT,
@AvgSalary = @Avg OUTPUT;
SELECT @Count AS employee_count, @Avg AS avg_salary;
Table-valued parameters (SQL Server)
Pass multiple rows as a single parameter for batch operations:
-- 1. Create a table type
CREATE TYPE EmployeeTableType AS TABLE (
name NVARCHAR(100),
dept_id INT,
salary DECIMAL(18,2)
);
-- 2. Use it as a parameter
CREATE PROCEDURE BulkInsertEmployees
@Employees EmployeeTableType READONLY
AS
BEGIN
INSERT INTO employees (name, dept_id, salary)
SELECT name, dept_id, salary FROM @Employees;
END;
-- 3. Call it
DECLARE @NewEmployees AS EmployeeTableType;
INSERT INTO @NewEmployees VALUES
('Alice', 10, 75000), ('Bob', 20, 65000), ('Carol', 30, 80000);
EXEC BulkInsertEmployees @Employees = @NewEmployees;
Control Flow
Stored procedures support procedural logic inside the database.
Variables and conditionals
CREATE PROCEDURE ApplyRaise
@EmployeeId INT,
@RaisePercent DECIMAL(5,2)
AS
BEGIN
DECLARE @CurrentSalary DECIMAL(18,2);
DECLARE @NewSalary DECIMAL(18,2);
DECLARE @MaxSalary DECIMAL(18,2) = 200000;
SELECT @CurrentSalary = salary FROM employees WHERE id = @EmployeeId;
SET @NewSalary = @CurrentSalary * (1 + @RaisePercent / 100);
IF @NewSalary > @MaxSalary
BEGIN
SET @NewSalary = @MaxSalary;
PRINT 'Salary capped at maximum';
END
UPDATE employees SET salary = @NewSalary WHERE id = @EmployeeId;
SELECT @CurrentSalary AS old_salary, @NewSalary AS new_salary;
END;
Loops
CREATE PROCEDURE GenerateMonthlyReport @Year INT
AS
BEGIN
DECLARE @Month INT = 1;
CREATE TABLE #MonthlyStats (
month INT, order_count INT, total_revenue DECIMAL(18,2)
);
WHILE @Month <= 12
BEGIN
INSERT INTO #MonthlyStats (month, order_count, total_revenue)
SELECT @Month, COUNT(*), ISNULL(SUM(amount), 0)
FROM orders
WHERE YEAR(order_date) = @Year AND MONTH(order_date) = @Month;
SET @Month = @Month + 1;
END
SELECT * FROM #MonthlyStats ORDER BY month;
DROP TABLE #MonthlyStats;
END;
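The WHILE loop above works, but it scans the orders table twelve times. Assuming the same orders table, a set-based sketch does the same aggregation in one pass — though note it omits months with zero orders, which the loop version includes:

```sql
-- Set-based alternative: one pass with GROUP BY instead of 12 loop iterations
DECLARE @Year INT = 2024;  -- example value

SELECT MONTH(order_date) AS month,
       COUNT(*) AS order_count,
       ISNULL(SUM(amount), 0) AS total_revenue
FROM orders
WHERE YEAR(order_date) = @Year
GROUP BY MONTH(order_date)
ORDER BY month;
-- note: months with no orders are absent here, unlike the loop version
```

Prefer set-based rewrites where the semantics allow it; loops in T-SQL are rarely the fastest option.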
Error Handling
TRY...CATCH (SQL Server)
CREATE PROCEDURE TransferFunds
@FromAccountId INT,
@ToAccountId INT,
@Amount DECIMAL(18,2)
AS
BEGIN
SET NOCOUNT ON;
BEGIN TRY
BEGIN TRANSACTION;
IF @Amount <= 0
RAISERROR('Transfer amount must be positive', 16, 1);
-- RAISERROR substitution arguments must be variables or constants, not expressions
DECLARE @Balance DECIMAL(18,2), @BalanceText VARCHAR(20);
SELECT @Balance = balance FROM accounts WHERE id = @FromAccountId;
IF @Balance < @Amount
BEGIN
SET @BalanceText = CAST(@Balance AS VARCHAR(20));
RAISERROR('Insufficient funds: balance is %s', 16, 1, @BalanceText);
END
UPDATE accounts SET balance = balance - @Amount WHERE id = @FromAccountId;
UPDATE accounts SET balance = balance + @Amount WHERE id = @ToAccountId;
INSERT INTO transfer_log (from_account, to_account, amount, transfer_date)
VALUES (@FromAccountId, @ToAccountId, @Amount, GETDATE());
COMMIT TRANSACTION;
SELECT 'Transfer successful' AS result;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
SELECT
ERROR_NUMBER() AS error_number,
ERROR_MESSAGE() AS error_message,
ERROR_SEVERITY() AS error_severity,
ERROR_PROCEDURE() AS error_procedure,
ERROR_LINE() AS error_line;
END CATCH
END;
EXCEPTION handling (PostgreSQL)
CREATE OR REPLACE FUNCTION TransferFunds(
from_account_id INT, to_account_id INT, amount DECIMAL
)
RETURNS TEXT LANGUAGE plpgsql AS $$
DECLARE v_balance DECIMAL; -- renamed: a variable called "balance" would collide with the accounts.balance column
BEGIN
IF amount <= 0 THEN
RAISE EXCEPTION 'Transfer amount must be positive';
END IF;
SELECT balance INTO v_balance FROM accounts WHERE id = from_account_id;
IF v_balance < amount THEN
RAISE EXCEPTION 'Insufficient funds: balance is %', v_balance;
END IF;
UPDATE accounts SET balance = balance - amount WHERE id = from_account_id;
UPDATE accounts SET balance = balance + amount WHERE id = to_account_id;
INSERT INTO transfer_log (from_account, to_account, amount, transfer_date)
VALUES (from_account_id, to_account_id, amount, NOW());
RETURN 'Transfer successful';
EXCEPTION
WHEN OTHERS THEN
RETURN 'Transfer failed: ' || SQLERRM;
END;
$$;
Stored Procedures vs User-Defined Functions
Both are named database objects, but behave very differently:
| | Stored Procedure | Scalar Function | Table-Valued Function |
|---|---|---|---|
| Return value | Optional (0+ result sets) | Single scalar value | A table |
| Usage | EXEC / CALL only | In SELECT, WHERE | In FROM clause |
| Modifies data | Yes (INSERT, UPDATE, DELETE) | No — read-only | No — read-only |
| Transactions | Can begin/commit/rollback | Cannot | Cannot |
| Error handling | Full TRY/CATCH | Limited | Limited |
| Side effects | Allowed | Not allowed | Not allowed |
Scalar functions in SELECT or WHERE execute once per row. On a 1M-row table, the function is called 1M times — a major performance killer. Use inline table-valued functions or plain SQL instead.
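To make the per-row penalty concrete, here is a hedged sketch (the `dbo.GetDeptName` function and `departments` table are hypothetical, not from the examples above) of a scalar UDF and its set-based rewrite:

```sql
-- Anti-pattern: scalar UDF, invoked once per row of employees
CREATE FUNCTION dbo.GetDeptName (@DeptId INT)
RETURNS NVARCHAR(100)
AS
BEGIN
    RETURN (SELECT name FROM departments WHERE id = @DeptId);
END;
GO

-- On a 1M-row table this hides 1M individual lookups inside the SELECT
SELECT id, name, dbo.GetDeptName(dept_id) AS dept_name
FROM employees;

-- Set-based rewrite: a single join the optimizer can plan as one operation
SELECT e.id, e.name, d.name AS dept_name
FROM employees e
JOIN departments d ON d.id = e.dept_id;
```

The join version exposes the whole operation to the optimizer, which can choose hash or merge strategies instead of a nested per-row call.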
Performance Considerations
Plan caching
On first execution, the query optimizer creates an execution plan and caches it. Subsequent calls reuse the cached plan — no re-parsing or re-compilation.
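One way to verify that plans are actually being reused — assuming SQL Server's `sys.dm_exec_procedure_stats` DMV (available since SQL Server 2008) — is to query the plan cache directly:

```sql
-- SQL Server: inspect cached procedure plans and how often they are reused
SELECT OBJECT_NAME(ps.object_id) AS procedure_name,
       ps.cached_time,                                    -- when the plan entered the cache
       ps.execution_count,                                -- executions since it was cached
       ps.total_worker_time / ps.execution_count AS avg_cpu_microseconds
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
ORDER BY ps.execution_count DESC;
```

A high `execution_count` against a single `cached_time` confirms the compile-once, run-many behavior described above.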
The "parameter sniffing" problem
The optimizer creates the plan based on the first parameter values it sees. If the first call uses a highly selective filter (5 rows), the plan is optimized for small results. A later call with a non-selective filter (500K rows) performs poorly with the cached plan.
-- Recompile per execution (SQL Server)
CREATE PROCEDURE SearchOrders @Status VARCHAR(20)
WITH RECOMPILE AS ...
-- Recompile only a specific statement
SELECT * FROM orders WHERE status = @Status
OPTION (RECOMPILE);
-- Use average statistics instead of sniffing
CREATE PROCEDURE SearchOrders @Status VARCHAR(20)
WITH OPTIMIZE FOR UNKNOWN AS ...
Network traffic
| Approach | Sent over network |
|---|---|
| Ad-hoc SQL | Entire SQL text (hundreds of lines) |
| Stored procedure | Just EXEC ProcessOrder @id = 1001 |
Security: permission control
Stored procedures enable granular access control without exposing underlying tables. This follows the principle of least privilege.
-- Application needs only EXECUTE permission — no direct table access
GRANT EXECUTE ON ProcessOrder TO web_app_role;
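A fuller least-privilege sketch (role and object names are illustrative) combines the EXECUTE grant with an explicit denial of direct table access:

```sql
-- Hypothetical setup: the app role can run the procedure but cannot touch the table
CREATE ROLE web_app_role;
GRANT EXECUTE ON OBJECT::dbo.ProcessOrder TO web_app_role;
DENY SELECT, INSERT, UPDATE, DELETE ON OBJECT::dbo.orders TO web_app_role;
```

Simply never granting table permissions is usually sufficient; the explicit DENY guards against the role inheriting access through group membership.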
Dynamic SQL in Stored Procedures
Build SQL statements dynamically at runtime — dynamic ORDER BY, WHERE, or table names.
-- SQL Server: sp_executesql with parameters
CREATE PROCEDURE SearchEmployeesDynamic
@NameFilter NVARCHAR(100) = NULL,
@SortColumn NVARCHAR(50) = 'name',
@SortDirection NVARCHAR(4) = 'ASC'
AS
BEGIN
DECLARE @SQL NVARCHAR(MAX);
DECLARE @Params NVARCHAR(MAX) = N'@NameFilter NVARCHAR(100)';
SET @SQL = N'
SELECT id, name, salary, hire_date FROM employees
WHERE (@NameFilter IS NULL OR name LIKE ''%'' + @NameFilter + ''%'')
ORDER BY ' +
CASE @SortColumn
WHEN 'name' THEN 'name'
WHEN 'salary' THEN 'salary'
WHEN 'hire_date' THEN 'hire_date'
ELSE 'name'
END + ' ' +
CASE @SortDirection WHEN 'DESC' THEN 'DESC' ELSE 'ASC' END;
EXEC sp_executesql @SQL, @Params, @NameFilter = @NameFilter;
END;
-- PostgreSQL: EXECUTE ... USING with format()
CREATE OR REPLACE FUNCTION SearchEmployeesDynamic(
name_filter TEXT DEFAULT NULL,
sort_column TEXT DEFAULT 'name',
sort_direction TEXT DEFAULT 'ASC'
)
RETURNS TABLE(id INT, name VARCHAR, salary DECIMAL, hire_date DATE)
LANGUAGE plpgsql AS $$
BEGIN
IF sort_column NOT IN ('name', 'salary', 'hire_date') THEN
sort_column := 'name';
END IF;
IF sort_direction NOT IN ('ASC', 'DESC') THEN
sort_direction := 'ASC';
END IF;
RETURN QUERY EXECUTE format(
'SELECT id, name, salary, hire_date FROM employees
WHERE ($1::text IS NULL OR name LIKE ''%%'' || $1 || ''%%'')
ORDER BY %I %s',
sort_column, sort_direction
) USING name_filter;
END;
$$;
Never concatenate user input directly into dynamic SQL — this creates a SQL injection vulnerability. Always use parameterized queries (sp_executesql / EXECUTE ... USING).
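To illustrate the difference (T-SQL fragment; `@UserInput` stands in for any value arriving from the application):

```sql
DECLARE @UserInput NVARCHAR(100) = N'Alice';
DECLARE @SQL NVARCHAR(MAX);

-- UNSAFE (shown only as an anti-pattern): input becomes part of the SQL text,
-- so input like  '; DROP TABLE employees; --  executes as SQL
SET @SQL = N'SELECT id, name FROM employees WHERE name = ''' + @UserInput + '''';
EXEC (@SQL);

-- SAFE: the input travels as a typed parameter, never as SQL text
SET @SQL = N'SELECT id, name FROM employees WHERE name = @Name';
EXEC sp_executesql @SQL, N'@Name NVARCHAR(100)', @Name = @UserInput;
```

In the safe version the statement text is constant, so the plan is also cached once rather than once per distinct input string.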
Stored Procedures vs Application Code
| | Stored Procedures | Application Code |
|---|---|---|
| Performance | Closer to data, fewer round-trips | Network overhead per query |
| Testability | Hard to unit test | Easy to unit test |
| Version control | Needs migration scripts | Natural Git workflow |
| Debugging | Limited tooling | Full IDE debugging |
| Portability | Database-specific syntax | Database-agnostic (with ORM) |
| Scalability | Scales with database server | Scales independently |
Modern applications use a hybrid approach: application code handles business logic, stored procedures handle data-heavy operations and security encapsulation.
Common Pitfalls
- Overusing stored procedures — putting all business logic in the database makes it hard to test, version, and scale. Use for data-centric operations only.
- Ignoring parameter sniffing — a cached plan optimized for one parameter value may perform poorly for others. Monitor and use RECOMPILE when needed.
- Not handling errors — unhandled errors leave transactions open and cause blocking. Always use TRY...CATCH.
- Scalar functions in WHERE — execute per-row, destroying performance on large tables. Rewrite as inline TVFs or plain SQL.
- Dynamic SQL injection — concatenating user input into SQL strings is a critical vulnerability. Always use parameterized dynamic SQL.
- Missing SET NOCOUNT ON — SQL Server sends DONE_IN_PROC messages after each statement, adding unnecessary network traffic.
Best Practices
- Keep procedures focused — each procedure does one thing well. Avoid "god procedures."
- Use SET NOCOUNT ON — prevents row-count messages, reducing network traffic.
- Handle errors explicitly — wrap data modifications in TRY...CATCH, always roll back failed transactions.
- Schema-qualify objects — use dbo.GetEmployees, not just GetEmployees, for performance and clarity.
- Parameterize everything — never hardcode values. Use parameters with sensible defaults.
- Validate inputs — check parameter values at the start and raise clear errors for invalid input.
- Version control your DDL — store CREATE PROCEDURE scripts in your migration system.
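The validation practice can be sketched with THROW (SQL Server 2012+); the procedure name and the 50-percent cap here are illustrative assumptions:

```sql
-- Validate parameters up front and fail fast with a clear, numbered error
CREATE PROCEDURE dbo.ApplyRaiseChecked
    @EmployeeId INT,
    @RaisePercent DECIMAL(5,2)
AS
BEGIN
    SET NOCOUNT ON;
    IF @EmployeeId IS NULL OR @EmployeeId <= 0
        THROW 50001, 'EmployeeId must be a positive integer', 1;
    IF @RaisePercent IS NULL OR @RaisePercent <= 0 OR @RaisePercent > 50
        THROW 50002, 'RaisePercent must be between 0 and 50', 1;
    UPDATE employees
    SET salary = salary * (1 + @RaisePercent / 100)
    WHERE id = @EmployeeId;
END;
```

User-defined THROW error numbers must be 50000 or higher; distinct numbers per check make failures easy to distinguish in application logs.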
Interview Questions
1. What is a stored procedure and what are its advantages?
A named set of SQL statements stored in the database, executable on demand.
| Advantage | Description |
|---|---|
| Performance | Execution plans cached and reused, avoiding recompilation |
| Reduced network traffic | Only the procedure call is sent, not full SQL text |
| Security | Grant EXECUTE without direct table access |
| Reusability | Write once, call from any application |
| Encapsulation | Logic centralized and hidden from callers |
2. Stored procedure vs function?
| | Stored Procedure | Function |
|---|---|---|
| Return value | Optional (0+ result sets) | Must return a value or table |
| Usage | EXEC / CALL | Inside SELECT, WHERE, FROM |
| Data modification | Can INSERT, UPDATE, DELETE | Read-only |
| Transactions | Can BEGIN/COMMIT/ROLLBACK | Cannot |
| Error handling | Full TRY/CATCH | Limited |
3. What is parameter sniffing and how do you handle it?
The optimizer creates an execution plan based on the first parameter values it sees. The cached plan may be suboptimal for different values.
Example: First call uses status = 'shipped' (99% of rows) → full table scan. Second call uses status = 'cancelled' (0.1% of rows) → cached scan plan is inefficient.
Solutions: WITH RECOMPILE, OPTION (RECOMPILE), OPTIMIZE FOR UNKNOWN, or copy parameters to local variables.
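The local-variable workaround mentioned above can be sketched as follows (procedure name is hypothetical):

```sql
-- Copying the parameter into a local variable blinds the optimizer to the
-- sniffed value, so it builds a plan from average column statistics instead
CREATE PROCEDURE dbo.SearchOrdersStable @Status VARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @LocalStatus VARCHAR(20) = @Status;
    SELECT id, order_date, amount
    FROM orders
    WHERE status = @LocalStatus;  -- planned for the average case, not the first call
END;
```

This trades the best-case plan for a stable middle-of-the-road plan — the same effect as OPTIMIZE FOR UNKNOWN, expressed in code.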
4. How do you handle errors in stored procedures?
BEGIN TRY
BEGIN TRANSACTION;
-- SQL statements
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
SELECT ERROR_NUMBER(), ERROR_MESSAGE();
END CATCH
Key principles: wrap modifications in TRY...CATCH, check @@TRANCOUNT before rollback, return meaningful error info, log errors to an error table.
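The "log errors to an error table" principle can be sketched like this (the `error_log` table and `dbo.DoWorkLogged` procedure are hypothetical):

```sql
-- A minimal error table for CATCH-block logging
CREATE TABLE error_log (
    id INT IDENTITY PRIMARY KEY,
    error_number INT,
    error_message NVARCHAR(4000),
    error_procedure NVARCHAR(128),
    logged_at DATETIME2 DEFAULT SYSDATETIME()
);
GO

CREATE PROCEDURE dbo.DoWorkLogged
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- data modifications go here
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        INSERT INTO error_log (error_number, error_message, error_procedure)
        VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), ERROR_PROCEDURE());
        THROW;  -- re-raise so the caller still sees the failure
    END CATCH
END;
```

Re-raising with a bare THROW after logging keeps the error visible to the application instead of silently swallowing it.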
5. What is dynamic SQL and what are its risks?
SQL constructed as a string at runtime. Useful for dynamic ORDER BY/WHERE, dynamic table/column names, cross-database queries.
Risks: SQL injection (concatenating user input), harder to debug, plan cache pollution.
Prevention: Use sp_executesql with parameters (SQL Server) or EXECUTE ... USING (PostgreSQL), whitelist column/table names, use quotename() / format() for identifiers.
6. When should you use stored procedures vs application code?
Stored procedures for: data-intensive operations, security encapsulation, cross-application consistency, performance-critical paths.
Application code for: complex business rules needing unit tests, database portability, external service calls, rapidly changing requirements.
Learn More
- ACID Properties — transaction guarantees that protect stored procedure operations
- Database Locking — how locking affects concurrent stored procedure execution
- SQL Joins — joins are commonly used inside stored procedures