1. Missing or Incorrect Indexes

The most common performance killer by far. A missing index on a frequently queried column forces the database engine to scan every row in the table — a full table scan. On a table with millions of rows, this can turn a millisecond query into a multi-second ordeal.

How to find missing indexes in SQL Server

T-SQL — Top missing indexes by potential impact
SELECT TOP 20
    ROUND(s.avg_total_user_cost * s.avg_user_impact * (s.user_seeks + s.user_scans), 0)
        AS estimated_improvement,
    d.statement                 AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns
FROM sys.dm_db_missing_index_groups g
JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
JOIN sys.dm_db_missing_index_details d    ON d.index_handle  = g.index_handle
WHERE d.database_id = DB_ID()
ORDER BY estimated_improvement DESC;

How to find missing indexes in PostgreSQL

PostgreSQL — Tables with sequential scans (candidates for indexing)
SELECT
    schemaname,
    relname AS tablename,
    seq_scan,
    seq_tup_read,
    idx_scan,
    seq_scan - COALESCE(idx_scan, 0) AS too_much_seq,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||relname)) AS size
FROM pg_stat_user_tables
WHERE seq_scan > COALESCE(idx_scan, 0)    -- idx_scan is NULL for tables with no indexes at all
    AND pg_total_relation_size(schemaname||'.'||relname) > 100000   -- ignore tiny tables
ORDER BY too_much_seq DESC
LIMIT 20;
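
Once a candidate table surfaces, EXPLAIN confirms the sequential scan and a targeted index removes it. A minimal sketch, assuming a hypothetical orders table that is usually filtered by customer_id:

PostgreSQL — Confirm the scan, then index the filtered column (hypothetical table and column)
EXPLAIN (ANALYZE, BUFFERS)
SELECT order_id, order_date, total
FROM orders
WHERE customer_id = 42;   -- look for "Seq Scan on orders" in the plan output

-- CONCURRENTLY builds the index without blocking writes (cannot run inside a transaction block)
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);

Re-running the EXPLAIN afterwards should show an Index Scan or Bitmap Index Scan instead.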

Fix: Use the missing-index DMVs (SQL Server) and pg_stat_user_tables (PostgreSQL) as a starting point, but don't blindly create every suggested index. Over-indexing slows down writes. Focus on high-impact indexes for your most frequent queries, and always test before deploying to production.
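
As a sketch of what acting on a suggestion looks like (the table and column names here are hypothetical), the equality and inequality columns become the index key and the suggested included columns go into INCLUDE:

T-SQL — Turning a DMV suggestion into an index (hypothetical names)
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON dbo.Orders (CustomerID, OrderDate)   -- equality columns first, then inequality columns
    INCLUDE (Total, Status);                -- included columns cover the query and avoid key lookups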

2. Implicit Type Conversions

This mistake is subtle and devastatingly effective at destroying index performance. It happens when the data type of a column doesn't match the data type of the value you're comparing it against — causing the database to convert every value in the column before comparing, which prevents index usage.

T-SQL — The problem: NationalIDNumber is NVARCHAR, but we compare it to a higher-precedence type
-- BAD: the INT literal outranks NVARCHAR, so every row's NationalIDNumber is converted and the index is not used
SELECT * FROM Person.Person
WHERE NationalIDNumber = 295847284;      -- integer literal

-- GOOD: the literal matches the column's type, so the index is used
SELECT * FROM Person.Person
WHERE NationalIDNumber = N'295847284';   -- nvarchar literal

The same problem occurs in application code when an ORM passes an integer to a character column, or an NVARCHAR parameter (the default for .NET strings) to a VARCHAR column. SQL Server converts the lower-precedence side, and when that side is the column, the index can't be used. Always check the execution plan — an implicit conversion warning (yellow exclamation mark) is the giveaway.

Fix: Always match the data type in your WHERE clause to the column's actual data type. Use CAST or CONVERT explicitly when needed, and check for implicit conversion warnings in execution plans regularly.
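
One rough way to hunt for implicit conversions across the whole plan cache is to search cached plans for CONVERT_IMPLICIT. This is a blunt instrument (casting the plan XML is expensive on a large cache), but it works as a starting sketch:

T-SQL — Find cached plans that contain implicit conversions (can be slow on large plan caches)
SELECT TOP 20
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%'
ORDER BY avg_cpu_time DESC;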

3. SELECT * in Production Queries

SELECT * seems harmless in development, but in production it causes real problems. It retrieves every column — including large TEXT, NVARCHAR(MAX), and VARBINARY columns — even when your application only needs two or three fields. This wastes I/O, network bandwidth, and memory.

Worse, SELECT * breaks covering indexes. A covering index includes exactly the columns a query needs, allowing the database to satisfy the query entirely from the index without touching the base table. With SELECT *, this optimisation is impossible.
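
A short contrast, assuming a hypothetical Orders table: the covering index can answer the narrow query on its own, while SELECT * still has to visit the base table.

T-SQL — Covering index versus SELECT * (hypothetical table)
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, Total);

-- Served entirely from the index: no key lookup
SELECT OrderDate, Total
FROM dbo.Orders
WHERE CustomerID = 42;

-- Needs every column, so the index alone is not enough: key lookup or clustered index scan
SELECT *
FROM dbo.Orders
WHERE CustomerID = 42;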

T-SQL — Find queries using SELECT * in SQL Server
SELECT TOP 50
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)+1) AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE '%SELECT%*%'    -- crude pattern: also matches COUNT(*) and comments, so treat hits as candidates
    AND st.text NOT LIKE '%sys.%'
ORDER BY avg_logical_reads DESC;

Fix: Always list the columns you need explicitly. If you genuinely need all columns, list them anyway — it makes the code self-documenting and allows the query optimiser to use covering indexes. Set up a linting rule in your CI pipeline to catch SELECT * before it reaches production.

4. Parameter Sniffing Gone Wrong

Parameter sniffing is a feature, not a bug — but it can behave like one. When SQL Server compiles a stored procedure or parameterised query for the first time, it inspects the parameter values and creates an execution plan optimised for those specific values. This plan is then cached and reused for all subsequent executions, regardless of the actual parameter values.

The problem: if the first execution uses a highly selective parameter (returns 10 rows), the plan uses an Index Seek. When the same plan is later used with a non-selective parameter (returns 5 million rows), it becomes catastrophically slow — but SQL Server reuses the cached plan anyway.
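
In miniature, assuming the dbo.GetOrders procedure from the fixes below and a table where order counts per customer are heavily skewed (the IDs here are made up):

T-SQL — How the cached plan goes stale (hypothetical, skewed data)
-- First call: a customer with a handful of orders, so the plan is compiled around an Index Seek
EXEC dbo.GetOrders @CustomerID = 101;

-- Later call: a customer with millions of orders reuses the same seek-based plan and crawls
EXEC dbo.GetOrders @CustomerID = 202;

-- While testing, mark the procedure so its next execution gets a fresh plan
EXEC sp_recompile N'dbo.GetOrders';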

T-SQL — Diagnose parameter sniffing problems
-- Find queries with high variance in execution time (sniffing candidate)
SELECT TOP 20
    qs.sql_handle,
    qs.execution_count,
    qs.min_elapsed_time / 1000   AS min_ms,
    qs.max_elapsed_time / 1000   AS max_ms,
    qs.max_elapsed_time / qs.min_elapsed_time AS ratio,
    SUBSTRING(st.text, 1, 200)   AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE qs.min_elapsed_time > 0
    AND qs.max_elapsed_time / qs.min_elapsed_time > 100  -- 100x variance
ORDER BY ratio DESC;
T-SQL — Common fixes
-- Option 1: OPTIMIZE FOR UNKNOWN (plan is built from average statistics, not the sniffed value)
CREATE OR ALTER PROCEDURE dbo.GetOrders @CustomerID INT
AS
    SELECT OrderID, OrderDate, Total
    FROM Orders
    WHERE CustomerID = @CustomerID
    OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN));
GO

-- Option 2: local variable trick (the optimiser cannot sniff a local variable, so it uses average density)
CREATE OR ALTER PROCEDURE dbo.GetOrders @CustomerID INT
AS
    DECLARE @LocalCustomerID INT = @CustomerID;
    SELECT OrderID, OrderDate, Total
    FROM Orders
    WHERE CustomerID = @LocalCustomerID;

Avoid WITH RECOMPILE as a blanket fix — it forces plan recompilation on every execution, which adds CPU overhead. Use it only for stored procedures that are called infrequently but with very different parameter distributions.
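
When recompilation really is justified, for example one problem statement inside a larger procedure, statement-level OPTION (RECOMPILE) is the narrower tool. A sketch with hypothetical names:

T-SQL — Statement-level recompilation instead of proc-level WITH RECOMPILE (hypothetical procedure)
CREATE OR ALTER PROCEDURE dbo.GetOrdersByStatus @Status TINYINT
AS
    -- Only this statement is recompiled on each execution, using the actual parameter value
    SELECT OrderID, OrderDate, Total
    FROM Orders
    WHERE OrderStatus = @Status
    OPTION (RECOMPILE);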

5. Functions on Columns in WHERE Clauses

Wrapping a column in a function inside a WHERE clause prevents the query optimiser from using any index on that column. The function must be evaluated for every row before the filter can be applied, resulting in a full table scan.

T-SQL / PostgreSQL — The problem and the fix
-- BAD: YEAR() function prevents index use on OrderDate
SELECT * FROM Orders
WHERE YEAR(OrderDate) = 2025;

-- GOOD: range condition allows index seek on OrderDate
SELECT * FROM Orders
WHERE OrderDate >= '2025-01-01'
  AND OrderDate  < '2026-01-01';

-- BAD: LOWER() function prevents index use on Email
SELECT * FROM Users
WHERE LOWER(Email) = 'joao@empresa.pt';

-- GOOD: store emails lowercase, or use a functional index (PostgreSQL)
CREATE INDEX idx_users_email_lower ON Users (LOWER(Email));
SELECT * FROM Users
WHERE LOWER(Email) = 'joao@empresa.pt';  -- now uses the functional index

This pattern is extremely common with date functions, string manipulation (UPPER, LOWER, SUBSTRING), and mathematical operations on numeric columns. The fix is to either rewrite the condition to avoid applying a function to the column, or in PostgreSQL, create a functional index that matches the expression exactly.

Fix: Review every WHERE clause that applies a function to a column. Rewrite using range conditions where possible. In PostgreSQL, use functional indexes for cases where you can't avoid the function. In SQL Server, consider computed columns with indexes for frequently used expressions.
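
For the SQL Server side of that advice, a sketch of the computed-column approach, assuming a hypothetical Users table; LOWER(Email) is deterministic, so the computed column can be indexed directly:

T-SQL — Indexed computed column as the SQL Server counterpart of a functional index (hypothetical table)
ALTER TABLE dbo.Users ADD EmailLower AS LOWER(Email);
CREATE INDEX IX_Users_EmailLower ON dbo.Users (EmailLower);

-- Query the computed column so the new index can be used
SELECT UserID, Email
FROM dbo.Users
WHERE EmailLower = 'joao@empresa.pt';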


Putting It All Together

These five mistakes account for the majority of query performance problems we encounter across SQL Server, PostgreSQL, and MongoDB deployments. The good news is that they're all diagnosable with the right tools — execution plans, DMVs, and pg_stat_* views — and fixable without hardware upgrades.

If you're experiencing unexplained slowdowns or need a systematic performance review of your database environment, contact our team. A focused performance audit typically delivers measurable improvements within days.

João Rocha

Database Administrator and Software Consultant at Bytedivision with 20+ years of experience in SQL Server, PostgreSQL, MongoDB and Azure cloud solutions.