WitDatabase provides a full ADO.NET provider implementing the System.Data.Common abstractions. This comprehensive guide covers everything from basic concepts to advanced patterns.
What is ADO.NET?
ADO.NET (ActiveX Data Objects for .NET) is Microsoft's core data access technology in the .NET Framework. It provides a consistent way to interact with various data sources — relational databases, XML files, and application data — through a unified set of classes and interfaces.
WitDatabase implements a full ADO.NET provider, which means you can use familiar patterns and classes like DbConnection, DbCommand, and DbDataReader to work with WitDatabase. If you've used SQL Server, PostgreSQL, MySQL, or SQLite through ADO.NET, you already know how to use WitDatabase.
Why ADO.NET?
ADO.NET offers several advantages:
Familiarity — Millions of .NET developers already know ADO.NET patterns. No learning curve for basic operations.
Interoperability — Tools, libraries, and frameworks that work with ADO.NET (Dapper, reporting tools, data grids) automatically work with WitDatabase.
Control — Direct SQL execution gives you full control over queries, unlike ORMs that generate SQL for you.
Performance — Minimal abstraction layer means less overhead compared to higher-level ORMs.
Flexibility — Mix raw SQL with parameterized queries, transactions, and batch operations as needed.
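The interoperability point is concrete: anything built on the DbConnection abstraction works unchanged. As a sketch (assuming a Products table; the Product record is a hypothetical example type), Dapper's Query&lt;T&gt; extension can map WitDatabase rows directly:

```csharp
using System.Collections.Generic;
using Dapper;
using OutWit.Database.AdoNet;

public record Product(long Id, string Name, decimal Price);

public static class DapperInterop
{
    public static IEnumerable<Product> GetExpensiveProducts(string connectionString)
    {
        using var connection = new WitDbConnection(connectionString);
        connection.Open();
        // Dapper handles command creation, parameter binding, and column-to-property mapping;
        // results are buffered by default, so disposing the connection afterward is safe
        return connection.Query<Product>(
            "SELECT Id, Name, Price FROM Products WHERE Price > @min",
            new { min = 100m });
    }
}
```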
Installation
Install the ADO.NET provider package:
dotnet add package OutWit.Database.AdoNet
This package includes all dependencies:
- OutWit.Database.Core — storage engine
- OutWit.Database — SQL engine
Add the namespace to your code:
using OutWit.Database.AdoNet;
For explicit ADO.NET types, also add:
using System.Data;
using System.Data.Common;
Architecture
Provider Components
The WitDatabase ADO.NET provider follows the standard provider model:
[[Svg Src="./witdatabase-adonet-architecture.svg" Alt="witdatabase-adonet-architecture"]]
When you execute a SQL command through ADO.NET:
- WitDbCommand receives your SQL text and parameters
- SQL Parser validates syntax and builds an execution plan
- Query Executor runs the plan against the storage engine
- Result Mapper converts internal data to .NET types
- WitDbDataReader provides access to results row by row
Connected vs Disconnected Models
ADO.NET supports two data access models:
Connected Model — You maintain an open connection while reading data. Data is processed row by row as it's fetched. This is memory-efficient for large result sets.
// Connected: Reader requires open connection
using var connection = new WitDbConnection(connectionString);
connection.Open();
using var cmd = connection.CreateCommand();
cmd.CommandText = "SELECT * FROM LargeTable";
using var reader = cmd.ExecuteReader();
while (reader.Read()) // Fetches one row at a time
{
ProcessRow(reader);
}
// Connection closed after processing
Disconnected Model — You fetch all data into memory (DataTable/DataSet), then close the connection. You can work with data offline, modify it, and sync changes back later.
// Disconnected: Data loaded into memory, connection can close
var table = new DataTable();
using (var connection = new WitDbConnection(connectionString))
{
connection.Open();
using var adapter = new WitDbDataAdapter("SELECT * FROM Users", connection);
adapter.Fill(table); // All data loaded
} // Connection closed
// Work with data offline
foreach (DataRow row in table.Rows)
{
ProcessRow(row);
}
When to use each:
| Scenario | Model | Reason |
|---|---|---|
| Large result sets | Connected | Memory efficient |
| Quick read and close | Connected | Simpler code |
| Data binding (WinForms/WPF) | Disconnected | UI needs in-memory data |
| Offline editing | Disconnected | No connection needed |
| Batch updates | Disconnected | Change tracking built-in |
| Web API responses | Connected | Stream directly to response |
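The "Web API responses" row deserves a sketch: with the connected model you can stream rows to the caller as they are read, keeping memory flat regardless of result size (the User record and table are illustrative):

```csharp
using System.Collections.Generic;
using OutWit.Database.AdoNet;

public record User(long Id, string Name);

public static class UserStream
{
    // Yields one row at a time; the connection stays open only while enumeration runs
    public static async IAsyncEnumerable<User> GetUsersAsync(string connectionString)
    {
        await using var connection = new WitDbConnection(connectionString);
        await connection.OpenAsync();
        await using var cmd = connection.CreateCommand();
        cmd.CommandText = "SELECT Id, Name FROM Users";
        await using var reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
            yield return new User(reader.GetInt64(0), reader.GetString(1));
    }
}
```

In ASP.NET Core, returning this IAsyncEnumerable from a controller action serializes rows to the response as they arrive.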
Core Classes
WitDatabase implements all standard ADO.NET abstract classes:
| ADO.NET Base | WitDatabase | Purpose |
|---|---|---|
| DbConnection | WitDbConnection | Manages database connection lifecycle |
| DbCommand | WitDbCommand | Executes SQL statements |
| DbParameter | WitDbParameter | Represents a query parameter |
| DbParameterCollection | WitDbParameterCollection | Collection of parameters |
| DbDataReader | WitDbDataReader | Reads query results row by row |
| DbTransaction | WitDbTransaction | Manages ACID transactions |
| DbConnectionStringBuilder | WitDbConnectionStringBuilder | Builds connection strings programmatically |
| DbProviderFactory | WitDbProviderFactory | Creates provider objects (factory pattern) |
| DbDataAdapter | WitDbDataAdapter | Bridges connected and disconnected models |
| DbCommandBuilder | WitDbCommandBuilder | Auto-generates INSERT/UPDATE/DELETE |
| DbException | WitDbException | Database-specific exceptions |
Class Relationships
[[Svg Src="./witdatabase-adonet-classes.svg" Alt="witdatabase-adonet-classes"]]
When to Use ADO.NET
ADO.NET vs Entity Framework Core
| Aspect | ADO.NET | EF Core |
|---|---|---|
| Learning curve | Low (if you know SQL) | Medium (LINQ, conventions) |
| Control | Full SQL control | ORM generates SQL |
| Performance | Fastest | Slight overhead |
| Productivity | Lower (more code) | Higher (less boilerplate) |
| Complex queries | Easy (write SQL) | Can be tricky |
| Migrations | Manual | Built-in |
| Change tracking | Manual | Automatic |
Choose ADO.NET when:
- You need maximum performance
- You have complex SQL that's hard to express in LINQ
- You're migrating existing ADO.NET code
- You want full control over every query
- You're building a thin data layer or micro-ORM
Choose EF Core when:
- You want rapid development
- You prefer LINQ over SQL
- You need automatic change tracking
- You want built-in migrations
- You're building a typical business application
ADO.NET vs Core API
| Aspect | ADO.NET | Core API |
|---|---|---|
| Data model | Relational (tables, SQL) | Key-Value |
| Query language | SQL | Direct key lookup |
| Flexibility | High (any query) | Low (key-based only) |
| Performance | Very fast | Fastest possible |
| Use case | General purpose | Caching, simple storage |
Choose Core API when:
- You only need key-value access
- Maximum performance is critical
- You're building a cache layer
- Your data model is simple (no relations)
Quick Start
Here's a complete example showing the main ADO.NET operations:
using OutWit.Database.AdoNet;
// 1. Create and open connection
using var connection = new WitDbConnection("Data Source=quickstart.witdb");
connection.Open();
// 2. Create a table
using var cmd = connection.CreateCommand();
cmd.CommandText = """
CREATE TABLE IF NOT EXISTS Products (
Id BIGINT PRIMARY KEY AUTOINCREMENT,
Name VARCHAR(100) NOT NULL,
Price DECIMAL(10,2) NOT NULL,
Stock INT DEFAULT 0,
CreatedAt DATETIME DEFAULT DATETIME('now')
)
""";
cmd.ExecuteNonQuery();
// 3. Insert data with parameters
cmd.CommandText = "INSERT INTO Products (Name, Price, Stock) VALUES (@name, @price, @stock)";
cmd.Parameters.AddWithValue("@name", "Laptop");
cmd.Parameters.AddWithValue("@price", 999.99m);
cmd.Parameters.AddWithValue("@stock", 50);
cmd.ExecuteNonQuery();
// 4. Insert and get generated ID
cmd.Parameters.Clear();
cmd.CommandText = "INSERT INTO Products (Name, Price) VALUES (@name, @price) RETURNING Id";
cmd.Parameters.AddWithValue("@name", "Mouse");
cmd.Parameters.AddWithValue("@price", 29.99m);
long newId = (long)cmd.ExecuteScalar()!;
Console.WriteLine($"Inserted product with ID: {newId}");
// 5. Query data
cmd.Parameters.Clear();
cmd.CommandText = "SELECT Id, Name, Price, Stock FROM Products ORDER BY Name";
using var reader = cmd.ExecuteReader();
Console.WriteLine("\nProducts:");
while (reader.Read())
{
long id = reader.GetInt64(0);
string name = reader.GetString(1);
decimal price = reader.GetDecimal(2);
int stock = reader.IsDBNull(3) ? 0 : reader.GetInt32(3);
Console.WriteLine($"  [{id}] {name}: ${price:F2} ({stock} in stock)");
}
// 6. Update with transaction
reader.Close();
using var transaction = connection.BeginTransaction();
cmd.Transaction = transaction;
try
{
cmd.CommandText = "UPDATE Products SET Stock = Stock - 1 WHERE Name = @name AND Stock > 0";
cmd.Parameters.Clear();
cmd.Parameters.AddWithValue("@name", "Laptop");
int affected = cmd.ExecuteNonQuery();
if (affected == 0)
{
throw new InvalidOperationException("Product out of stock!");
}
transaction.Commit();
Console.WriteLine("\nSale completed!");
}
catch
{
transaction.Rollback();
throw;
}
Output:
Inserted product with ID: 2
Products:
[1] Laptop: $999.99 (50 in stock)
[2] Mouse: $29.99 (0 in stock)
Sale completed!
This document covers WitDbConnection in depth: lifecycle management, connection states, pooling, and best practices.
What is a Connection?
A database connection represents an active communication channel between your application and the database. In WitDatabase, a connection:
- Opens the database file (or creates it if needed)
- Acquires necessary file locks
- Initializes internal caches and buffers
- Maintains transaction state
- Provides a context for executing commands
Connections are expensive to create — they involve file I/O, memory allocation, and initialization. This is why proper connection management matters.
Connection String
Every connection needs a connection string that specifies where the database is and how to access it:
// Minimal connection string
"Data Source=myapp.witdb"
// With options
"Data Source=myapp.witdb;Encryption=aes-gcm;Password=secret;Cache Size=5000"
Connection Lifecycle
A connection goes through distinct phases during its lifetime:
[[Svg Src="./witdatabase-connection-states.svg" Alt="witdatabase-connection-states"]]
Phase 1: Created
When you create a connection, it exists but isn't connected to anything:
var connection = new WitDbConnection();
// State: Closed
// No file is open, no resources allocated
connection.ConnectionString = "Data Source=app.witdb";
// Still closed, just configured
At this point you can freely modify ConnectionString. The connection is lightweight — just an object in memory.
Phase 2: Open
Calling Open() establishes the actual connection:
connection.Open();
// State: Open
// File is opened, locks acquired, caches initialized
During Open():
- Connection string is parsed and validated
- Database file is opened (or created if Mode=ReadWriteCreate)
- File locks are acquired (if File Locking=true)
- Page cache is initialized
- Connection becomes ready for commands
Important: You cannot change ConnectionString while open. This throws an exception:
connection.Open();
connection.ConnectionString = "Data Source=other.witdb"; // InvalidOperationException!
Phase 3: Closed
Closing releases resources:
connection.Close();
// State: Closed
// File closed, locks released, caches flushed
A closed connection can be reopened:
connection.Close();
connection.ConnectionString = "Data Source=different.witdb"; // OK now
connection.Open(); // Opens the new database
Phase 4: Disposed
Disposing permanently releases the connection:
connection.Dispose();
// Connection cannot be reused
After disposal, any operation throws ObjectDisposedException.
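A minimal sketch of that behavior, assuming the provider follows the standard DbConnection disposal contract:

```csharp
using System;
using OutWit.Database.AdoNet;

var connection = new WitDbConnection("Data Source=app.witdb");
connection.Open();
connection.Dispose();

try
{
    connection.Open(); // a disposed connection cannot be reopened
}
catch (ObjectDisposedException ex)
{
    Console.WriteLine($"Rejected as expected: {ex.ObjectName}");
}
```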
Connection States
The State property tells you the current connection status:
Console.WriteLine(connection.State); // ConnectionState enum
| State | Meaning |
|---|---|
| Closed | Not connected (initial state, or after Close) |
| Open | Connected and ready for commands |
| Connecting | Open() is in progress (async only) |
| Broken | Was open but connection was lost |
Checking State
Always check state before operations in code that might be called at different times:
public void DoSomething(WitDbConnection connection)
{
if (connection.State != ConnectionState.Open)
{
throw new InvalidOperationException("Connection must be open");
}
// Proceed with operation
}
Or open if needed:
public void EnsureOpen(WitDbConnection connection)
{
if (connection.State == ConnectionState.Closed)
{
connection.Open();
}
else if (connection.State == ConnectionState.Broken)
{
connection.Close();
connection.Open();
}
}
Creating Connections
Constructor Options
// Option 1: Empty constructor, set ConnectionString later
var connection = new WitDbConnection();
connection.ConnectionString = "Data Source=app.witdb";
connection.Open();
// Option 2: Pass connection string to constructor
var connection = new WitDbConnection("Data Source=app.witdb");
connection.Open();
// Option 3: Using connection string builder
var builder = new WitDbConnectionStringBuilder
{
DataSource = "app.witdb",
Encryption = "aes-gcm",
Password = GetPasswordFromSecureStorage()
};
var connection = new WitDbConnection(builder.ConnectionString);
connection.Open();
The using Statement
The most important pattern in ADO.NET is the using statement, which ensures connections are always closed:
// Modern C# (using declaration)
using var connection = new WitDbConnection("Data Source=app.witdb");
connection.Open();
// Use connection...
// Automatically disposed at end of scope
// Traditional (using statement)
using (var connection = new WitDbConnection("Data Source=app.witdb"))
{
connection.Open();
// Use connection...
} // Disposed here, even if exception occurs
// Async
await using var connection = new WitDbConnection("Data Source=app.witdb");
await connection.OpenAsync();
Why this matters: Without using, an exception could leave the connection open, which:
- Holds file locks (blocking other processes)
- Consumes memory for caches
- May leave transactions uncommitted
- Eventually exhausts connection pool
Async Connection Opening
For applications where responsiveness matters (UI apps, web servers), open connections asynchronously:
await using var connection = new WitDbConnection("Data Source=app.witdb");
await connection.OpenAsync();
// With cancellation
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
await connection.OpenAsync(cts.Token);
Async opening is especially important for:
- Encrypted databases (key derivation takes time)
- Remote file systems (network latency)
- Large databases (initial cache warm-up)
Connection Pooling
Creating connections is expensive. Connection pooling solves this by maintaining a pool of open connections that can be reused.
How Pooling Works
[[Svg Src="./witdatabase-connection-pool.svg" Alt="witdatabase-connection-pool"]]
When pooling is enabled:
- First Open() — Creates a new physical connection
- Close()/Dispose() — Returns connection to pool (doesn't actually close)
- Next Open() — Reuses connection from pool (fast!)
- Pool full — Waits for available connection or timeout
Enabling Pooling
Pooling is controlled via connection string:
// Enable pooling (disabled by default)
"Data Source=app.witdb;Pooling=true"
// Configure pool size
"Data Source=app.witdb;Pooling=true;Min Pool Size=5;Max Pool Size=50"
| Parameter | Default | Description |
|---|---|---|
| Pooling | false | Enable/disable pooling |
| Min Pool Size | 0 | Minimum connections to maintain |
| Max Pool Size | 100 | Maximum connections allowed |
| Connection Lifetime | 0 | Max age in seconds (0 = unlimited) |
| Idle Timeout | 300 | Close idle connections after N seconds |
When to Use Pooling
| Scenario | Use Pooling? | Why |
|---|---|---|
| Web API / ASP.NET Core | ✅ Yes | Many short-lived requests |
| Blazor Server | ✅ Yes | Many concurrent users |
| Background service | ⚠️ Maybe | Depends on access pattern |
| Desktop app (single user) | ❌ No | Single long-lived connection works |
| Blazor WebAssembly | ❌ No | Single user in browser |
| Unit tests | ❌ No | Fresh connections preferred |
| Console app (one-shot) | ❌ No | Overhead not worth it |
Pool Management
Sometimes you need to manually manage pools:
// Clear all connections for a specific connection string
// Useful before deleting database file
WitDbConnection.ClearPool("Data Source=app.witdb;Pooling=true");
// Clear ALL pools (all connection strings)
// Useful for application shutdown
WitDbConnection.ClearAllPools();
Pooling Pitfalls
Problem: Connection state leaks between uses
If you change connection state (e.g., set a session variable), that state persists when the connection returns to the pool:
using (var conn = new WitDbConnection(pooledConnStr))
{
conn.Open();
ExecuteSql(conn, "SET timezone = 'UTC'"); // This persists!
}
using (var conn = new WitDbConnection(pooledConnStr))
{
conn.Open();
// This connection might have timezone = 'UTC' from before!
}
Solution: Reset state at the beginning or end of each use, or use Connection Lifetime to periodically recycle connections.
Problem: Pool exhaustion
If connections aren't properly closed, the pool runs out:
// BAD: Connections never returned to pool
for (int i = 0; i < 1000; i++)
{
var conn = new WitDbConnection(pooledConnStr);
conn.Open();
// No Close() or Dispose() — pool exhausted after Max Pool Size iterations!
}
Solution: Always use using:
// GOOD: Connections properly returned
for (int i = 0; i < 1000; i++)
{
using var conn = new WitDbConnection(pooledConnStr);
conn.Open();
// Automatically returned to pool
}
Connection Properties
Available Properties
| Property | Type | Description |
|---|---|---|
| ConnectionString | string | The connection string (read-write when closed) |
| Database | string | Database name (filename without extension) |
| DataSource | string | Full path to database file |
| State | ConnectionState | Current connection state |
| ServerVersion | string | Returns "WitDatabase X.Y" |
| ConnectionTimeout | int | Timeout for Open() in seconds |
Reading Properties
var connection = new WitDbConnection("Data Source=/data/myapp.witdb");
connection.Open();
Console.WriteLine($"Database: {connection.Database}");       // "myapp"
Console.WriteLine($"DataSource: {connection.DataSource}");   // "/data/myapp.witdb"
Console.WriteLine($"State: {connection.State}");             // "Open"
Console.WriteLine($"Version: {connection.ServerVersion}");   // "WitDatabase 1.0"
Connection Methods
Core Methods
| Method | Description |
|---|---|
| Open() | Opens the connection |
| OpenAsync(CancellationToken) | Opens asynchronously |
| Close() | Closes the connection |
| CloseAsync() | Closes asynchronously |
| Dispose() | Releases all resources |
| DisposeAsync() | Releases resources asynchronously |
Factory Methods
| Method | Returns | Description |
|---|---|---|
| CreateCommand() | WitDbCommand | Creates a command associated with this connection |
| BeginTransaction() | WitDbTransaction | Starts a transaction |
| BeginTransaction(IsolationLevel) | WitDbTransaction | Starts with specific isolation |
| BeginTransactionAsync(...) | Task&lt;WitDbTransaction&gt; | Async transaction start |
Schema Methods
| Method | Returns | Description |
|---|---|---|
| GetSchema() | DataTable | Gets metadata collections list |
| GetSchema(string) | DataTable | Gets specific schema collection |
| GetSchema(string, string[]) | DataTable | Gets schema with restrictions |
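The schema methods follow the standard DbConnection.GetSchema contract. A minimal sketch of how they might be used — the "Tables" collection name and TABLE_NAME column follow ADO.NET provider convention, so check the collections list WitDatabase actually returns:

```csharp
using System;
using System.Data;
using OutWit.Database.AdoNet;

using var connection = new WitDbConnection("Data Source=app.witdb");
connection.Open();

// List the metadata collections this provider exposes
DataTable collections = connection.GetSchema();
foreach (DataRow row in collections.Rows)
    Console.WriteLine(row["CollectionName"]);

// Fetch a specific collection ("Tables" is the conventional name)
DataTable tables = connection.GetSchema("Tables");
foreach (DataRow row in tables.Rows)
    Console.WriteLine(row["TABLE_NAME"]);
```

This is how generic tools (designers, scaffolders) discover your database structure without provider-specific code.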
Other Methods
| Method | Description |
|---|---|
| ChangeDatabase(string) | Switches to different database file |
| EnlistTransaction(Transaction) | Enlists in ambient transaction (System.Transactions) |
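EnlistTransaction ties a connection into a System.Transactions ambient transaction, so the connection's work commits or rolls back with the scope. A hedged sketch (the AuditLog table is illustrative, and explicit enlistment is shown even though some providers auto-enlist on Open):

```csharp
using System.Transactions;
using OutWit.Database.AdoNet;

using (var scope = new TransactionScope())
{
    using var connection = new WitDbConnection("Data Source=app.witdb");
    connection.Open();
    connection.EnlistTransaction(Transaction.Current!); // join the ambient transaction

    using var cmd = connection.CreateCommand();
    cmd.CommandText = "INSERT INTO AuditLog (Message) VALUES ('scoped write')";
    cmd.ExecuteNonQuery();

    scope.Complete(); // without this call, the write rolls back when the scope ends
}
```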
Best Practices
1. Always Use using
This is the single most important rule:
// ✅ ALWAYS do this
using var connection = new WitDbConnection(connectionString);
connection.Open();
// ❌ NEVER do this
var connection = new WitDbConnection(connectionString);
connection.Open();
// If exception occurs before Close(), connection leaks!
connection.Close();
2. Open Late, Close Early
Minimize the time a connection is open:
// ✅ Good: Open only when needed
var data = PrepareData(); // No connection needed yet
using var connection = new WitDbConnection(connectionString);
connection.Open();
SaveData(connection, data); // Quick operation
// Connection closed immediately after
// ❌ Bad: Connection open too long
using var connection = new WitDbConnection(connectionString);
connection.Open();
var data = PrepareData(); // Connection open but unused!
Thread.Sleep(1000); // Connection still open!
SaveData(connection, data);
3. Don't Hold Connections Across User Interactions
// ❌ Bad: Connection open while waiting for user
using var connection = new WitDbConnection(connectionString);
connection.Open();
Console.Write("Enter name: ");
var name = Console.ReadLine(); // Connection open, doing nothing!
InsertUser(connection, name);
// ✅ Good: Open only for database work
Console.Write("Enter name: ");
var name = Console.ReadLine();
using var connection = new WitDbConnection(connectionString);
connection.Open();
InsertUser(connection, name);
4. Use Async in Web Applications
// ✅ Good: Doesn't block thread while connecting
public async Task<List<User>> GetUsersAsync()
{
await using var connection = new WitDbConnection(connectionString);
await connection.OpenAsync();
// ... async operations
}
// ❌ Avoid in web apps: Blocks thread
public List<User> GetUsers()
{
using var connection = new WitDbConnection(connectionString);
connection.Open(); // Blocks!
// ...
}
5. Configure Pooling for Web Apps
// In ASP.NET Core, configure appropriate pool size
services.AddScoped(sp =>
{
var connection = new WitDbConnection(
"Data Source=app.witdb;Pooling=true;Min Pool Size=5;Max Pool Size=50");
connection.Open();
return connection;
});
6. Handle Connection Errors Gracefully
try
{
using var connection = new WitDbConnection(connectionString);
connection.Open();
// ...
}
catch (WitDbException ex) when (ex.ErrorCode == WitDbErrorCode.IoError)
{
logger.LogError("Cannot access database file: {Message}", ex.Message);
throw new ServiceUnavailableException("Database unavailable");
}
catch (WitDbException ex) when (ex.ErrorCode == WitDbErrorCode.DatabaseLocked)
{
logger.LogWarning("Database locked, retrying...");
// Implement retry logic
}
This document covers WitDbCommand and WitDbParameter: executing SQL statements, parameterization for security and performance, and batch operations.
What is a Command?
A command represents a SQL statement to be executed against the database. It's the bridge between your C# code and the SQL engine. Every database operation — whether reading data, inserting rows, or creating tables — goes through a command.
using var cmd = connection.CreateCommand();
cmd.CommandText = "SELECT * FROM Users WHERE Age > 18";
using var reader = cmd.ExecuteReader();
Command Components
A command consists of:
| Component | Purpose |
|---|---|
| CommandText | The SQL statement to execute |
| Connection | The database connection to use |
| Parameters | Values to safely inject into the SQL |
| Transaction | Optional transaction context |
| CommandTimeout | How long to wait before giving up |
Types of SQL Statements
SQL statements fall into categories, each executed differently:
DDL — Data Definition Language
DDL statements modify database structure: creating tables, indexes, views. They don't return data rows.
cmd.CommandText = """
CREATE TABLE Products (
Id BIGINT PRIMARY KEY AUTOINCREMENT,
Name VARCHAR(100) NOT NULL,
Price DECIMAL(10,2)
)
""";
int result = cmd.ExecuteNonQuery(); // Returns 0 for DDL
Common DDL statements:
- CREATE TABLE, ALTER TABLE, DROP TABLE
- CREATE INDEX, DROP INDEX
- CREATE VIEW, DROP VIEW
DML — Data Manipulation Language
DML statements modify data: inserting, updating, deleting rows. They return the number of affected rows.
// INSERT
cmd.CommandText = "INSERT INTO Products (Name, Price) VALUES ('Laptop', 999.99)";
int inserted = cmd.ExecuteNonQuery(); // Returns 1
// UPDATE
cmd.CommandText = "UPDATE Products SET Price = 899.99 WHERE Name = 'Laptop'";
int updated = cmd.ExecuteNonQuery(); // Returns number of updated rows
// DELETE
cmd.CommandText = "DELETE FROM Products WHERE Price < 10";
int deleted = cmd.ExecuteNonQuery(); // Returns number of deleted rows
DQL — Data Query Language
DQL statements retrieve data: SELECT queries. They return result sets that you read through a DataReader.
cmd.CommandText = "SELECT Id, Name, Price FROM Products WHERE Price > 100";
using var reader = cmd.ExecuteReader(); // Returns WitDbDataReader
while (reader.Read())
{
Console.WriteLine($"{reader["Name"]}: ${reader["Price"]}");
}
Choosing the Right Execute Method
| Method | Use For | Returns |
|---|---|---|
| ExecuteNonQuery() | DDL, INSERT, UPDATE, DELETE | Affected row count (int) |
| ExecuteScalar() | Single value queries | First column of first row (object) |
| ExecuteReader() | SELECT queries | DataReader for row-by-row access |
Creating and Executing Commands
Basic Command Creation
// Method 1: CreateCommand from connection
using var cmd = connection.CreateCommand();
cmd.CommandText = "SELECT * FROM Users";
// Method 2: Constructor with connection
using var cmd = new WitDbCommand("SELECT * FROM Users", connection);
// Method 3: Constructor with connection and transaction
using var cmd = new WitDbCommand("UPDATE Users SET Active = 1", connection, transaction);
ExecuteNonQuery
Use for statements that don't return rows:
using var cmd = connection.CreateCommand();
// CREATE TABLE
cmd.CommandText = "CREATE TABLE IF NOT EXISTS Logs (Id BIGINT, Message TEXT)";
cmd.ExecuteNonQuery();
// INSERT
cmd.CommandText = "INSERT INTO Logs (Id, Message) VALUES (1, 'Started')";
int rowsInserted = cmd.ExecuteNonQuery();
Console.WriteLine($"Inserted {rowsInserted} row(s)"); // "Inserted 1 row(s)"
// UPDATE multiple rows
cmd.CommandText = "UPDATE Users SET LastSeen = DATETIME('now') WHERE Active = 1";
int rowsUpdated = cmd.ExecuteNonQuery();
Console.WriteLine($"Updated {rowsUpdated} row(s)");
// DELETE
cmd.CommandText = "DELETE FROM Sessions WHERE ExpiresAt < DATETIME('now')";
int rowsDeleted = cmd.ExecuteNonQuery();
Console.WriteLine($"Deleted {rowsDeleted} expired session(s)");
ExecuteScalar
Use when you need a single value — more efficient than ExecuteReader for simple lookups:
using var cmd = connection.CreateCommand();
// COUNT
cmd.CommandText = "SELECT COUNT(*) FROM Users";
long userCount = (long)cmd.ExecuteScalar()!;
// MAX
cmd.CommandText = "SELECT MAX(Price) FROM Products";
object? result = cmd.ExecuteScalar();
decimal? maxPrice = result == DBNull.Value ? null : (decimal)result;
// EXISTS check
cmd.CommandText = "SELECT 1 FROM Users WHERE Email = 'admin@example.com' LIMIT 1";
bool exists = cmd.ExecuteScalar() != null;
// INSERT with RETURNING (get generated ID)
cmd.CommandText = "INSERT INTO Users (Name) VALUES ('Alice') RETURNING Id";
long newUserId = (long)cmd.ExecuteScalar()!;
Console.WriteLine($"Created user with ID: {newUserId}");
Important: ExecuteScalar() returns null if the result set is empty, or DBNull.Value if the first column is NULL. Handle both cases:
object? result = cmd.ExecuteScalar();
if (result == null)
{
Console.WriteLine("No rows returned");
}
else if (result == DBNull.Value)
{
Console.WriteLine("Value is NULL");
}
else
{
Console.WriteLine($"Value: {result}");
}
ExecuteReader
Use for queries returning multiple rows or columns:
cmd.CommandText = "SELECT Id, Name, Email FROM Users ORDER BY Name";
using var reader = cmd.ExecuteReader();
while (reader.Read())
{
long id = reader.GetInt64(0);
string name = reader.GetString(1);
string? email = reader.IsDBNull(2) ? null : reader.GetString(2);
Console.WriteLine($"[{id}] {name} <{email ?? "no email"}>");
}
Parameters: The Safe Way to Include Values
The SQL Injection Problem
Never concatenate user input directly into SQL. This creates SQL injection vulnerabilities:
// ❌ DANGEROUS — SQL INJECTION VULNERABILITY!
string userInput = "'; DROP TABLE Users; --";
cmd.CommandText = $"SELECT * FROM Users WHERE Name = '{userInput}'";
// Executes: SELECT * FROM Users WHERE Name = ''; DROP TABLE Users; --'
// Your Users table is now deleted!
SQL injection is one of the most common and dangerous security vulnerabilities. Attackers can:
- Delete or modify data
- Access unauthorized information
- Bypass authentication
- Take control of the database server
Parameters: The Solution
Parameters separate SQL structure from data. The database knows which parts are SQL and which are values:
// ✅ SAFE — Parameterized query
string userInput = "'; DROP TABLE Users; --";
cmd.CommandText = "SELECT * FROM Users WHERE Name = @name";
cmd.Parameters.AddWithValue("@name", userInput);
// The entire input is treated as a literal string value
// No SQL injection possible!
Parameters are always safe because:
- The SQL is parsed first, structure is fixed
- Parameter values are bound separately
- Values can never change the SQL structure
- Special characters are automatically handled
Adding Parameters
Several ways to add parameters:
// Method 1: AddWithValue — quickest for simple cases
cmd.Parameters.AddWithValue("@name", "John");
cmd.Parameters.AddWithValue("@age", 30);
cmd.Parameters.AddWithValue("@active", true);
// Method 2: Add with explicit type — better for performance
cmd.Parameters.Add(new WitDbParameter("@name", DbType.String) { Value = "John" });
cmd.Parameters.Add(new WitDbParameter("@price", DbType.Decimal) { Value = 99.99m });
// Method 3: CreateParameter — most explicit
var param = cmd.CreateParameter();
param.ParameterName = "@email";
param.DbType = DbType.String;
param.Size = 255;
param.Value = "john@example.com";
cmd.Parameters.Add(param);
Parameter Name Formats
WitDatabase accepts two parameter prefix formats:
// @ prefix (SQL Server style) — recommended
cmd.CommandText = "SELECT * FROM Users WHERE Id = @id AND Name = @name";
cmd.Parameters.AddWithValue("@id", 1);
cmd.Parameters.AddWithValue("@name", "John");
// : prefix (Oracle style)
cmd.CommandText = "SELECT * FROM Users WHERE Id = :id AND Name = :name";
cmd.Parameters.AddWithValue(":id", 1);
cmd.Parameters.AddWithValue(":name", "John");
Be consistent within a single command — don't mix @ and : prefixes.
Handling NULL Values
Use DBNull.Value for NULL parameters:
// Explicit DBNull
cmd.Parameters.AddWithValue("@middleName", DBNull.Value);
// Converting nullable types
string? middleName = GetMiddleName(); // might be null
cmd.Parameters.AddWithValue("@middleName", (object?)middleName ?? DBNull.Value);
// Helper method for cleaner code
static object ToDbValue(object? value) => value ?? DBNull.Value;
cmd.Parameters.AddWithValue("@notes", ToDbValue(user.Notes));
cmd.Parameters.AddWithValue("@nickname", ToDbValue(user.Nickname));
Parameter Properties
| Property | Type | Description |
|---|---|---|
| ParameterName | string | Name including prefix (@name or :name) |
| Value | object? | The parameter value |
| DbType | DbType | Data type hint |
| Size | int | Size for variable-length types |
| Direction | ParameterDirection | Always Input (output not supported) |
| IsNullable | bool | Whether NULL is allowed |
| Precision | byte | Numeric precision |
| Scale | byte | Numeric scale |
Working with Parameter Collections
// Check if parameter exists
if (cmd.Parameters.Contains("@name"))
{
cmd.Parameters["@name"].Value = "New Name";
}
// Get by index
var firstParam = cmd.Parameters[0];
// Get by name
var nameParam = cmd.Parameters["@name"];
// Remove a parameter
cmd.Parameters.Remove(nameParam);
cmd.Parameters.RemoveAt("@name");
cmd.Parameters.RemoveAt(0);
// Clear all parameters (important when reusing command)
cmd.Parameters.Clear();
// Count
Console.WriteLine($"Parameter count: {cmd.Parameters.Count}");
Command Reuse and Performance
Reusing Commands in Loops
Creating a command once and reusing it is much faster than creating new commands:
// ✅ GOOD: Create once, reuse many times
cmd.CommandText = "INSERT INTO Logs (Level, Message, Timestamp) VALUES (@level, @msg, @ts)";
var levelParam = cmd.Parameters.Add(new WitDbParameter("@level", DbType.String));
var msgParam = cmd.Parameters.Add(new WitDbParameter("@msg", DbType.String));
var tsParam = cmd.Parameters.Add(new WitDbParameter("@ts", DbType.DateTime));
foreach (var logEntry in logEntries)
{
levelParam.Value = logEntry.Level;
msgParam.Value = logEntry.Message;
tsParam.Value = logEntry.Timestamp;
cmd.ExecuteNonQuery();
}
// ❌ BAD: Creating new command each iteration
foreach (var logEntry in logEntries)
{
using var cmd = connection.CreateCommand(); // Overhead!
cmd.CommandText = "INSERT INTO Logs (Level, Message, Timestamp) VALUES (@level, @msg, @ts)";
cmd.Parameters.AddWithValue("@level", logEntry.Level); // New param objects!
cmd.Parameters.AddWithValue("@msg", logEntry.Message);
cmd.Parameters.AddWithValue("@ts", logEntry.Timestamp);
cmd.ExecuteNonQuery();
}
Performance difference can be 2-10x for many iterations.
Prepared Statements
Preparing a command pre-compiles the SQL, improving performance for repeated execution:
cmd.CommandText = "INSERT INTO Events (Type, Data) VALUES (@type, @data)";
cmd.Parameters.Add(new WitDbParameter("@type", DbType.String));
cmd.Parameters.Add(new WitDbParameter("@data", DbType.String));
cmd.Prepare(); // Pre-compile the SQL
// Now execute many times with different values
foreach (var evt in events)
{
cmd.Parameters["@type"].Value = evt.Type;
cmd.Parameters["@data"].Value = evt.Data;
cmd.ExecuteNonQuery(); // Uses pre-compiled plan
}
When to use Prepare():
- Executing the same SQL many times (>10-20 executions)
- Complex queries with joins or subqueries
- High-performance bulk operations
When NOT to use Prepare():
- One-time queries
- Simple queries (overhead may exceed benefit)
- Dynamic SQL that changes each time
Batch Operations
Single Transaction for Multiple Operations
Wrapping multiple operations in a transaction improves performance dramatically:
// ✅ GOOD: Single transaction (fast)
using var transaction = connection.BeginTransaction();
cmd.Transaction = transaction;
try
{
cmd.CommandText = "INSERT INTO Users (Name) VALUES (@name)";
var nameParam = cmd.Parameters.Add(new WitDbParameter("@name", DbType.String));
foreach (var name in names)
{
nameParam.Value = name;
cmd.ExecuteNonQuery();
}
transaction.Commit(); // Single commit for all inserts
}
catch
{
transaction.Rollback();
throw;
}
// ❌ BAD: Auto-commit each insert (slow)
cmd.CommandText = "INSERT INTO Users (Name) VALUES (@name)";
foreach (var name in names)
{
cmd.Parameters.Clear();
cmd.Parameters.AddWithValue("@name", name);
cmd.ExecuteNonQuery(); // Each insert is its own transaction!
}
Why transactions help:
- Without an explicit transaction, each statement auto-commits as its own transaction
- Each commit requires a disk sync (fsync), which is the slow part
- Wrapping 1000 inserts in one transaction: ~1 second, versus ~30+ seconds with per-row auto-commit
Bulk Insert Pattern
Optimal pattern for inserting many rows:
public int BulkInsert(WitDbConnection connection, IEnumerable<Product> products)
{
using var transaction = connection.BeginTransaction();
using var cmd = connection.CreateCommand();
cmd.Transaction = transaction;
cmd.CommandText = @"
INSERT INTO Products (Name, Price, Category, Stock)
VALUES (@name, @price, @category, @stock)";
// Create parameters once
var nameParam = cmd.Parameters.Add(new WitDbParameter("@name", DbType.String));
var priceParam = cmd.Parameters.Add(new WitDbParameter("@price", DbType.Decimal));
var categoryParam = cmd.Parameters.Add(new WitDbParameter("@category", DbType.String));
var stockParam = cmd.Parameters.Add(new WitDbParameter("@stock", DbType.Int32));
// Optional: Prepare for better performance
cmd.Prepare();
int count = 0;
foreach (var product in products)
{
nameParam.Value = product.Name;
priceParam.Value = product.Price;
categoryParam.Value = (object?)product.Category ?? DBNull.Value;
stockParam.Value = product.Stock;
cmd.ExecuteNonQuery();
count++;
}
transaction.Commit();
return count;
}
Batch Size Considerations
For very large batches (millions of rows), commit in chunks to keep each transaction's memory and WAL footprint bounded:
const int BatchSize = 10000;
int count = 0;
WitDbTransaction? transaction = null;
try
{
foreach (var item in largeDataset)
{
// Start new transaction every BatchSize rows
if (count % BatchSize == 0)
{
transaction?.Commit();
transaction?.Dispose();
transaction = connection.BeginTransaction();
cmd.Transaction = transaction;
}
// Set parameters and execute
SetParameters(cmd, item);
cmd.ExecuteNonQuery();
count++;
}
// Commit remaining rows
transaction?.Commit();
}
catch
{
transaction?.Rollback();
throw;
}
finally
{
transaction?.Dispose();
}
Async Execution
When to Use Async
Use async methods in:
- Web applications (ASP.NET Core)
- UI applications (WPF, WinForms, MAUI)
- High-concurrency scenarios
// Async non-query
int affected = await cmd.ExecuteNonQueryAsync();
// Async scalar
object? result = await cmd.ExecuteScalarAsync();
// Async reader
await using var reader = await cmd.ExecuteReaderAsync();
while (await reader.ReadAsync())
{
// Process row
}
// With cancellation token
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
await cmd.ExecuteNonQueryAsync(cts.Token);
Async Best Practices
public async Task<List<User>> GetActiveUsersAsync(CancellationToken cancellationToken = default)
{
await using var connection = new WitDbConnection(connectionString);
await connection.OpenAsync(cancellationToken);
await using var cmd = connection.CreateCommand();
cmd.CommandText = "SELECT Id, Name, Email FROM Users WHERE Active = 1";
var users = new List<User>();
await using var reader = await cmd.ExecuteReaderAsync(cancellationToken);
while (await reader.ReadAsync(cancellationToken))
{
users.Add(new User
{
Id = reader.GetInt64(0),
Name = reader.GetString(1),
Email = reader.IsDBNull(2) ? null : reader.GetString(2)
});
}
return users;
}
Command Properties Reference
| Property | Type | Default | Description |
|---|---|---|---|
| CommandText | string | "" | SQL statement to execute |
| CommandType | CommandType | Text | Always Text (stored procedures not supported) |
| CommandTimeout | int | 30 | Timeout in seconds (0 = no timeout) |
| Connection | WitDbConnection? | null | Associated connection |
| Transaction | WitDbTransaction? | null | Associated transaction |
| Parameters | WitDbParameterCollection | empty | Query parameters |
| DesignTimeVisible | bool | true | For designer support |
| UpdatedRowSource | UpdateRowSource | Both | For DataAdapter updates |
Common Patterns
Exists Check
public bool UserExists(WitDbConnection connection, string email)
{
using var cmd = connection.CreateCommand();
cmd.CommandText = "SELECT 1 FROM Users WHERE Email = @email LIMIT 1";
cmd.Parameters.AddWithValue("@email", email);
return cmd.ExecuteScalar() != null;
}
Insert and Get ID
public long CreateUser(WitDbConnection connection, string name, string email)
{
using var cmd = connection.CreateCommand();
cmd.CommandText = @"
INSERT INTO Users (Name, Email, CreatedAt)
VALUES (@name, @email, DATETIME('now'))
RETURNING Id";
cmd.Parameters.AddWithValue("@name", name);
cmd.Parameters.AddWithValue("@email", email);
return (long)cmd.ExecuteScalar()!;
}
Update with Optimistic Concurrency
public bool UpdateProduct(WitDbConnection connection, Product product)
{
using var cmd = connection.CreateCommand();
cmd.CommandText = @"
UPDATE Products
SET Name = @name, Price = @price, Version = Version + 1
WHERE Id = @id AND Version = @expectedVersion";
cmd.Parameters.AddWithValue("@id", product.Id);
cmd.Parameters.AddWithValue("@name", product.Name);
cmd.Parameters.AddWithValue("@price", product.Price);
cmd.Parameters.AddWithValue("@expectedVersion", product.Version);
int affected = cmd.ExecuteNonQuery();
if (affected == 0)
{
throw new ConcurrencyException("Product was modified by another user");
}
product.Version++;
return true;
}
Upsert (Insert or Update)
public void UpsertSetting(WitDbConnection connection, string key, string value)
{
using var cmd = connection.CreateCommand();
cmd.CommandText = @"
INSERT INTO Settings (Key, Value, UpdatedAt)
VALUES (@key, @value, DATETIME('now'))
ON CONFLICT (Key) DO UPDATE SET
Value = @value,
UpdatedAt = DATETIME('now')";
cmd.Parameters.AddWithValue("@key", key);
cmd.Parameters.AddWithValue("@value", value);
cmd.ExecuteNonQuery();
}
This document covers WitDbDataReader: reading query results efficiently, understanding the streaming model, type mapping, and handling large datasets.
What is a DataReader?
A DataReader provides forward-only, read-only access to query results. Think of it as a cursor that moves through the result set one row at a time, like reading a book from beginning to end without jumping back.
[[Svg Src="./witdatabase-datareader-position.svg" Alt="witdatabase-datareader-position"]]
Why Forward-Only?
The forward-only design is intentional:
Memory Efficiency — Only one row is in memory at a time. Reading 1 million rows uses the same memory as reading 10 rows.
Streaming — Results stream from database to application as you read. No need to wait for entire result set.
Simplicity — No complex caching or positioning logic needed.
Performance — Minimal overhead, maximum speed.
If you need random access to rows (jumping around, reading backwards), load data into a DataTable instead. But for most scenarios, forward-only is what you want.
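The DataTable fallback mentioned above can be sketched without a database: DataTable.Load drains any forward-only data reader into an in-memory table that supports random access. The in-memory source table here is a stand-in for a real WitDatabase query result.

```csharp
using System;
using System.Data;

// Build an in-memory table as a stand-in for WitDatabase query results;
// in real code you would pass the WitDbDataReader from ExecuteReader().
var source = new DataTable("Users");
source.Columns.Add("Id", typeof(long));
source.Columns.Add("Name", typeof(string));
source.Rows.Add(1L, "Alice");
source.Rows.Add(2L, "Bob");

using IDataReader reader = source.CreateDataReader();

// DataTable.Load consumes the forward-only reader and buffers
// every row, giving random access afterwards.
var table = new DataTable();
table.Load(reader);

// Rows can now be visited in any order.
Console.WriteLine(table.Rows[1]["Name"]); // Bob
Console.WriteLine(table.Rows[0]["Name"]); // Alice
```

The trade-off is memory: the whole result set lives in the DataTable, so reserve this for result sets you know are small.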
Basic Reading Pattern
The Read Loop
Every DataReader follows the same pattern:
cmd.CommandText = "SELECT Id, Name, Email FROM Users";
using var reader = cmd.ExecuteReader();
while (reader.Read()) // Advances to next row, returns false when done
{
// Access current row's data
long id = reader.GetInt64(0); // By column index
string name = reader.GetString(1);
string email = reader.GetString(2);
Console.WriteLine($"{id}: {name} <{email}>");
}
// Reader automatically closed by 'using'
Important: You must call Read() before accessing any data. The reader starts before the first row.
using var reader = cmd.ExecuteReader();
// ❌ WRONG: No Read() called yet
string name = reader.GetString(0); // InvalidOperationException!
// ✅ CORRECT: Read() first
if (reader.Read())
{
string name = reader.GetString(0); // Works
}
Single Row Queries
For queries expected to return one row:
cmd.CommandText = "SELECT Name, Email FROM Users WHERE Id = @id";
cmd.Parameters.AddWithValue("@id", userId);
using var reader = cmd.ExecuteReader();
if (reader.Read())
{
var user = new User
{
Name = reader.GetString(0),
Email = reader.GetString(1)
};
return user;
}
else
{
return null; // No user found
}
Checking for Results
using var reader = cmd.ExecuteReader();
// Method 1: Check HasRows property (doesn't advance)
if (!reader.HasRows)
{
Console.WriteLine("No results found");
return;
}
// Method 2: Check Read() return value
if (!reader.Read())
{
Console.WriteLine("No results found");
return;
}
// Process first row...
Accessing Column Values
By Index vs By Name
Two ways to access column values:
// By index (faster)
long id = reader.GetInt64(0);
string name = reader.GetString(1);
string email = reader.GetString(2);
// By name (more readable, slightly slower)
long id = (long)reader["Id"];
string name = (string)reader["Name"];
string email = (string)reader["Email"];
Best practice: Get ordinals once, use indices in loop:
using var reader = cmd.ExecuteReader();
// Get ordinals before loop (one-time lookup)
int idOrdinal = reader.GetOrdinal("Id");
int nameOrdinal = reader.GetOrdinal("Name");
int emailOrdinal = reader.GetOrdinal("Email");
while (reader.Read())
{
// Use ordinals in loop (fast)
long id = reader.GetInt64(idOrdinal);
string name = reader.GetString(nameOrdinal);
string email = reader.GetString(emailOrdinal);
}
This combines readability (meaningful variable names) with performance (index-based access).
Typed Getter Methods
DataReader provides typed methods for each data type:
| Method | Returns | SQL Types |
|---|---|---|
| GetBoolean(int) | bool | BOOLEAN |
| GetByte(int) | byte | TINYINT |
| GetInt16(int) | short | SMALLINT |
| GetInt32(int) | int | INT |
| GetInt64(int) | long | BIGINT, INTEGER |
| GetFloat(int) | float | FLOAT |
| GetDouble(int) | double | DOUBLE, REAL |
| GetDecimal(int) | decimal | DECIMAL, NUMERIC |
| GetString(int) | string | VARCHAR, TEXT, CHAR |
| GetDateTime(int) | DateTime | DATETIME, DATE, TIMESTAMP |
| GetGuid(int) | Guid | GUID, UUID |
| GetValue(int) | object | Any (boxed) |
| GetFieldValue<T>(int) | T | Any (generic) |
Generic GetFieldValue
The generic method works for any type and supports nullables:
// Non-nullable types
long id = reader.GetFieldValue<long>(0);
string name = reader.GetFieldValue<string>(1);
DateTime created = reader.GetFieldValue<DateTime>(3);
// Nullable types (handles DBNull automatically)
int? age = reader.GetFieldValue<int?>(2);
DateTime? deletedAt = reader.GetFieldValue<DateTime?>(4);
GetValue and Indexers
These return object and require casting:
// GetValue returns object
object value = reader.GetValue(0);
long id = (long)value;
// Indexer by position
object value = reader[0];
long id = (long)value;
// Indexer by name
object value = reader["Id"];
long id = (long)value;
// GetValues fills an array
object[] values = new object[reader.FieldCount];
reader.GetValues(values);
Handling NULL Values
NULL values in SQL don't map directly to C# reference types. You must handle them explicitly.
Checking for NULL
// Method 1: IsDBNull check
if (reader.IsDBNull(2))
{
email = null;
}
else
{
email = reader.GetString(2);
}
// Method 2: Conditional expression
string? email = reader.IsDBNull(2) ? null : reader.GetString(2);
// Method 3: GetValue with as/cast
string? email = reader.GetValue(2) as string;
// Method 4: GetFieldValue with nullable type (cleanest)
string? email = reader.GetFieldValue<string?>(2);
int? age = reader.GetFieldValue<int?>(3);
NULL with Value Types
Value types (int, long, DateTime, etc.) cannot be null in C#. You must use nullable types:
// ❌ WRONG: Will throw if NULL
int age = reader.GetInt32(3); // InvalidCastException if NULL!
// ✅ CORRECT: Check first
int? age = reader.IsDBNull(3) ? null : reader.GetInt32(3);
// ✅ BETTER: Use nullable generic
int? age = reader.GetFieldValue<int?>(3);
Helper Extension Methods
Create helpers for cleaner code:
public static class DataReaderExtensions
{
public static T? GetNullable<T>(this DbDataReader reader, int ordinal) where T : struct
{
return reader.IsDBNull(ordinal) ? null : reader.GetFieldValue<T>(ordinal);
}
public static string? GetNullableString(this DbDataReader reader, int ordinal)
{
return reader.IsDBNull(ordinal) ? null : reader.GetString(ordinal);
}
}
// Usage
int? age = reader.GetNullable<int>(ageOrdinal);
DateTime? deletedAt = reader.GetNullable<DateTime>(deletedAtOrdinal);
string? nickname = reader.GetNullableString(nicknameOrdinal);
Column Metadata
Getting Column Information
using var reader = cmd.ExecuteReader();
// Number of columns
int columnCount = reader.FieldCount;
// Column names
for (int i = 0; i < reader.FieldCount; i++)
{
string name = reader.GetName(i);
Console.WriteLine($"Column {i}: {name}");
}
// Column index from name
int emailIndex = reader.GetOrdinal("Email"); // Throws if not found
// Column types
for (int i = 0; i < reader.FieldCount; i++)
{
string name = reader.GetName(i);
Type clrType = reader.GetFieldType(i);
string dbTypeName = reader.GetDataTypeName(i);
Console.WriteLine($"{name}: {dbTypeName} -> {clrType.Name}");
}
GetSchemaTable
For detailed schema information:
DataTable? schema = reader.GetSchemaTable();
if (schema != null)
{
foreach (DataRow row in schema.Rows)
{
string columnName = (string)row["ColumnName"];
int ordinal = (int)row["ColumnOrdinal"];
Type dataType = (Type)row["DataType"];
bool allowNull = (bool)row["AllowDBNull"];
int? size = row["ColumnSize"] as int?;
Console.WriteLine($"{ordinal}: {columnName} ({dataType.Name}), Nullable: {allowNull}");
}
}
Multiple Result Sets
A single command can return multiple result sets:
cmd.CommandText = @"
SELECT Id, Name FROM Users WHERE Active = 1;
SELECT Id, Title, Price FROM Products WHERE InStock = 1;
SELECT COUNT(*) FROM Orders WHERE Status = 'Pending';
";
using var reader = cmd.ExecuteReader();
// First result set: Users
Console.WriteLine("Active Users:");
while (reader.Read())
{
Console.WriteLine($" {reader.GetInt64(0)}: {reader.GetString(1)}");
}
// Move to second result set
if (reader.NextResult())
{
Console.WriteLine("\nIn-Stock Products:");
while (reader.Read())
{
Console.WriteLine($" {reader.GetString(1)}: ${reader.GetDecimal(2)}");
}
}
// Move to third result set
if (reader.NextResult())
{
if (reader.Read())
{
long pendingOrders = reader.GetInt64(0);
Console.WriteLine($"\nPending Orders: {pendingOrders}");
}
}
Use cases for multiple result sets:
- Dashboard queries (multiple stats in one round-trip)
- Master-detail data (orders and their items)
- Lookup tables (load multiple reference tables at once)
Reading Large Data
BLOB (Binary) Data
For binary data like images or files:
cmd.CommandText = "SELECT FileName, Content FROM Files WHERE Id = @id";
cmd.Parameters.AddWithValue("@id", fileId);
using var reader = cmd.ExecuteReader();
if (reader.Read())
{
string fileName = reader.GetString(0);
// Method 1: Get all bytes at once (for smaller files)
byte[] content = (byte[])reader["Content"];
File.WriteAllBytes(fileName, content);
// Method 2: Stream in chunks (for large files)
long dataLength = reader.GetBytes(1, 0, null!, 0, 0); // Get length
using var fileStream = File.Create(fileName);
byte[] buffer = new byte[8192];
long offset = 0;
while (offset < dataLength)
{
int bytesRead = (int)reader.GetBytes(1, offset, buffer, 0, buffer.Length);
fileStream.Write(buffer, 0, bytesRead);
offset += bytesRead;
}
}
Large Text Data
Similarly for large text (CLOB):
// Method 1: Get entire string
string content = reader.GetString(0);
// Method 2: Stream characters
long charLength = reader.GetChars(0, 0, null!, 0, 0);
char[] buffer = new char[4096];
var sb = new StringBuilder();
long offset = 0;
while (offset < charLength)
{
int charsRead = (int)reader.GetChars(0, offset, buffer, 0, buffer.Length);
sb.Append(buffer, 0, charsRead);
offset += charsRead;
}
string content = sb.ToString();
Memory Considerations
When reading large datasets:
// ❌ BAD: Loading everything into memory
var allUsers = new List<User>();
while (reader.Read())
{
allUsers.Add(MapUser(reader)); // Memory grows unbounded!
}
return allUsers; // Might be millions of objects
// ✅ GOOD: Process one at a time
while (reader.Read())
{
var user = MapUser(reader);
ProcessUser(user); // Process immediately, don't store
}
// ✅ GOOD: Yield return (streaming)
public IEnumerable<User> GetAllUsers()
{
using var connection = new WitDbConnection(connectionString);
connection.Open();
using var cmd = connection.CreateCommand();
cmd.CommandText = "SELECT * FROM Users";
using var reader = cmd.ExecuteReader();
while (reader.Read())
{
yield return MapUser(reader); // One at a time
}
}
// Caller processes without loading all into memory
foreach (var user in GetAllUsers())
{
ProcessUser(user);
}
CommandBehavior Flags
Control reader behavior with CommandBehavior:
// Close connection when reader closes
using var reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
// After reader.Close() or Dispose(), connection is also closed
// Optimize for single row
using var reader = cmd.ExecuteReader(CommandBehavior.SingleRow);
// Hint to database that only one row is expected
// Optimize for single result set
using var reader = cmd.ExecuteReader(CommandBehavior.SingleResult);
// Hint that NextResult() won't be called
// Get schema only, no data
using var reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly);
// Read() always returns false, but GetSchemaTable() works
// Get key information for updates
using var reader = cmd.ExecuteReader(CommandBehavior.KeyInfo);
// Schema includes primary key info (for DataAdapter)
// Combine flags
using var reader = cmd.ExecuteReader(
CommandBehavior.SingleRow | CommandBehavior.CloseConnection);
| Flag | Effect |
|---|---|
| Default | No special behavior |
| CloseConnection | Close connection when reader closes |
| SingleRow | Optimize for single row |
| SingleResult | Optimize for single result set |
| SchemaOnly | Return schema, no data |
| KeyInfo | Include key column info in schema |
| SequentialAccess | Access columns in order only (for streaming BLOBs) |
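With SequentialAccess, ADO.NET providers generally require that columns be read in ascending ordinal order, each at most once; jumping back typically throws. This sketch shows the pattern for the Files table used in the BLOB example above — it needs a live WitDatabase connection, so it is not runnable standalone.

```csharp
// Sketch: under SequentialAccess, read FileName (ordinal 0)
// before streaming Content (ordinal 1), and never revisit ordinal 0.
cmd.CommandText = "SELECT FileName, Content FROM Files WHERE Id = @id";
cmd.Parameters.AddWithValue("@id", fileId);

using var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
if (reader.Read())
{
    string fileName = reader.GetString(0); // ordinal 0 first

    using var fileStream = File.Create(fileName);
    byte[] buffer = new byte[8192];
    long offset = 0;
    int bytesRead;
    // Stream the BLOB in chunks; GetBytes returns 0 at end of data.
    while ((bytesRead = (int)reader.GetBytes(1, offset, buffer, 0, buffer.Length)) > 0)
    {
        fileStream.Write(buffer, 0, bytesRead);
        offset += bytesRead;
    }
    // Reading ordinal 0 again here would typically throw under SequentialAccess.
}
```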
Async Reading
Async Pattern
public async Task<List<User>> GetUsersAsync(CancellationToken cancellationToken = default)
{
await using var connection = new WitDbConnection(connectionString);
await connection.OpenAsync(cancellationToken);
await using var cmd = connection.CreateCommand();
cmd.CommandText = "SELECT Id, Name, Email FROM Users";
var users = new List<User>();
await using var reader = await cmd.ExecuteReaderAsync(cancellationToken);
while (await reader.ReadAsync(cancellationToken))
{
users.Add(new User
{
Id = reader.GetInt64(0),
Name = reader.GetString(1),
Email = reader.IsDBNull(2) ? null : reader.GetString(2)
});
}
return users;
}
Async Streaming with IAsyncEnumerable
public async IAsyncEnumerable<User> StreamUsersAsync(
[EnumeratorCancellation] CancellationToken cancellationToken = default)
{
await using var connection = new WitDbConnection(connectionString);
await connection.OpenAsync(cancellationToken);
await using var cmd = connection.CreateCommand();
cmd.CommandText = "SELECT Id, Name, Email FROM Users";
await using var reader = await cmd.ExecuteReaderAsync(cancellationToken);
while (await reader.ReadAsync(cancellationToken))
{
yield return new User
{
Id = reader.GetInt64(0),
Name = reader.GetString(1),
Email = reader.IsDBNull(2) ? null : reader.GetString(2)
};
}
}
// Usage
await foreach (var user in StreamUsersAsync())
{
Console.WriteLine(user.Name);
}
Data Type Mapping
SQL to CLR Types
| SQL Type | CLR Type | Notes |
|---|---|---|
| BIGINT | Int64 | Default integer type |
| INT, INTEGER | Int64 | Stored as Int64 internally |
| SMALLINT | Int64 | Stored as Int64 internally |
| TINYINT | Int64 | Stored as Int64 internally |
| BOOLEAN, BOOL | Boolean | |
| DOUBLE, REAL | Double | |
| FLOAT | Double | |
| DECIMAL(p,s) | Decimal | |
| VARCHAR(n) | String | |
| TEXT | String | |
| CHAR(n) | String | Fixed-length, padded |
| BLOB | Byte[] | |
| DATETIME | DateTime | |
| DATE | DateTime | Date part only |
| TIME | TimeSpan | |
| GUID, UUID | Guid | |
Integer Handling
WitDatabase stores all integers as 64-bit internally. Smaller types still work but read as Int64:
// Column defined as SMALLINT, but...
cmd.CommandText = "SELECT SmallIntColumn FROM Table1";
using var reader = cmd.ExecuteReader();
if (reader.Read())
{
// GetInt16 works if value fits
short value = reader.GetInt16(0);
// GetInt64 always works
long value64 = reader.GetInt64(0);
// GetFieldType returns actual storage type
Type type = reader.GetFieldType(0); // typeof(Int64)
}
Type Conversion Errors
// OverflowException if value doesn't fit
try
{
int value = reader.GetInt32(0); // Value is 9,223,372,036,854,775,807
}
catch (OverflowException)
{
// Value too large for Int32
}
// InvalidCastException for wrong type
try
{
string value = reader.GetString(0); // Column is actually INT
}
catch (InvalidCastException)
{
// Type mismatch
}
// InvalidCastException for NULL on non-nullable getter
try
{
int value = reader.GetInt32(0); // Value is NULL
}
catch (InvalidCastException)
{
// Should have checked IsDBNull first
}
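The wrong-type failure mode can be reproduced without a database by using an in-memory DataTableReader as a stand-in. Exact exception types vary by provider; the text above describes WitDatabase's behavior, while DataTableReader also throws InvalidCastException on a type mismatch.

```csharp
using System;
using System.Data;

// In-memory stand-in for a query result with a single BIGINT column.
var table = new DataTable();
table.Columns.Add("Value", typeof(long));
table.Rows.Add(long.MaxValue);

using var reader = table.CreateDataReader();
reader.Read();

long ok = reader.GetInt64(0); // matching getter works

bool threw = false;
try
{
    string s = reader.GetString(0); // wrong-type getter on an Int64 column
}
catch (InvalidCastException)
{
    threw = true; // type mismatch surfaces as InvalidCastException
}
Console.WriteLine($"{ok}, threw={threw}");
```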
Reader Properties Reference
| Property | Type | Description |
|---|---|---|
| FieldCount | int | Number of columns |
| HasRows | bool | Whether result has any rows |
| IsClosed | bool | Whether reader is closed |
| Depth | int | Nesting depth (always 0) |
| RecordsAffected | int | Rows affected (for UPDATE/DELETE with RETURNING) |
| this[int] | object | Get value by index |
| this[string] | object | Get value by name |
Common Patterns
Map to Objects
public User? GetUserById(long id)
{
using var cmd = connection.CreateCommand();
cmd.CommandText = "SELECT Id, Name, Email, CreatedAt FROM Users WHERE Id = @id";
cmd.Parameters.AddWithValue("@id", id);
using var reader = cmd.ExecuteReader();
return reader.Read() ? MapUser(reader) : null;
}
private User MapUser(WitDbDataReader reader)
{
return new User
{
Id = reader.GetInt64(0),
Name = reader.GetString(1),
Email = reader.IsDBNull(2) ? null : reader.GetString(2),
CreatedAt = reader.GetDateTime(3)
};
}
Dynamic Column Mapping
When columns aren't known at compile time:
public List<Dictionary<string, object?>> ExecuteDynamic(string sql)
{
using var cmd = connection.CreateCommand();
cmd.CommandText = sql;
var results = new List<Dictionary<string, object?>>();
using var reader = cmd.ExecuteReader();
while (reader.Read())
{
var row = new Dictionary<string, object?>();
for (int i = 0; i < reader.FieldCount; i++)
{
string name = reader.GetName(i);
object? value = reader.IsDBNull(i) ? null : reader.GetValue(i);
row[name] = value;
}
results.Add(row);
}
return results;
}
Export to CSV
public void ExportToCsv(string sql, string filePath)
{
using var cmd = connection.CreateCommand();
cmd.CommandText = sql;
using var reader = cmd.ExecuteReader();
using var writer = new StreamWriter(filePath);
// Write header
var headers = new string[reader.FieldCount];
for (int i = 0; i < reader.FieldCount; i++)
{
headers[i] = reader.GetName(i);
}
writer.WriteLine(string.Join(",", headers));
// Write data rows
while (reader.Read())
{
var values = new string[reader.FieldCount];
for (int i = 0; i < reader.FieldCount; i++)
{
object? value = reader.IsDBNull(i) ? null : reader.GetValue(i);
values[i] = EscapeCsvValue(value?.ToString() ?? "");
}
writer.WriteLine(string.Join(",", values));
}
}
private string EscapeCsvValue(string value)
{
if (value.Contains(',') || value.Contains('"') || value.Contains('\n'))
{
return $"\"{value.Replace("\"", "\"\"")}\"";
}
return value;
}
This document covers WitDbTransaction: understanding ACID properties, isolation levels, savepoints, and best practices for data integrity.
What is a Transaction?
A transaction is a unit of work that either completes entirely or doesn't happen at all. Think of it like an "all-or-nothing" contract with the database.
Real-world analogy: Transferring money between bank accounts:
1. Deduct $100 from Account A
2. Add $100 to Account B
If step 1 succeeds but step 2 fails (power outage, error, etc.), you've lost $100! A transaction ensures either both steps happen, or neither does.
using var transaction = connection.BeginTransaction();
try
{
// Both operations are part of one transaction
ExecuteSql(connection, "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1", transaction);
ExecuteSql(connection, "UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2", transaction);
transaction.Commit(); // Both changes become permanent
}
catch
{
transaction.Rollback(); // Neither change happens
throw;
}
ACID Properties
Transactions guarantee four properties, known by the acronym ACID:
Atomicity
"All or nothing" — Either all operations in a transaction succeed, or none do.
[[Svg Src="./witdatabase-atomicity-comparison.svg" Alt="witdatabase-atomicity-comparison"]]
If any operation fails, all previous operations in the transaction are undone (rolled back).
Consistency
"Rules are always enforced" — The database moves from one valid state to another. Constraints (foreign keys, unique, check) are never violated.
// This transaction will fail atomically if constraints are violated
using var transaction = connection.BeginTransaction();
try
{
// Insert order (requires valid CustomerId - foreign key)
ExecuteSql(connection,
"INSERT INTO Orders (CustomerId, Total) VALUES (999, 50.00)",
transaction);
// If CustomerId 999 doesn't exist, entire transaction fails
transaction.Commit();
}
catch (WitDbConstraintException)
{
transaction.Rollback();
// Database remains consistent - no orphan orders
}
Isolation
"Transactions don't interfere" — Concurrent transactions don't see each other's uncommitted changes (depending on isolation level).
[[Svg Src="./witdatabase-isolation-question.svg" Alt="witdatabase-isolation-question"]]
Isolation level determines what Transaction B sees. More on this below.
Durability
"Committed = Permanent" — Once a transaction commits, the changes survive system crashes, power failures, etc.
[[Svg Src="./witdatabase-durability-timeline.svg" Alt="witdatabase-durability-timeline"]]
WitDatabase ensures durability by:
- Writing to Write-Ahead Log (WAL) before modifying data
- Syncing the WAL to disk on commit (unless Sync Writes=false)
- Recovery from WAL on startup after crash
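The durability/throughput trade-off can be chosen per connection. This sketch assumes the Sync Writes connection-string keyword mentioned above; with it disabled, commits skip the per-commit disk sync, so a crash may lose the most recently committed transactions (the WAL recovery described above still protects against corruption).

```csharp
// Durable (default): the WAL is synced to disk on every commit.
using var durable = new WitDbConnection("Data Source=app.witdb");

// Faster, relaxed durability for bulk loads — assumes the
// "Sync Writes" keyword from the durability notes above.
using var relaxed = new WitDbConnection("Data Source=bulk.witdb;Sync Writes=false");
```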
Basic Transaction Usage
Starting a Transaction
using var connection = new WitDbConnection("Data Source=app.witdb");
connection.Open();
// Start transaction
using var transaction = connection.BeginTransaction();
// All commands must be associated with the transaction
using var cmd = connection.CreateCommand();
cmd.Transaction = transaction; // Important!
cmd.CommandText = "INSERT INTO Logs (Message) VALUES ('Started')";
cmd.ExecuteNonQuery();
// Commit or rollback
transaction.Commit();
The Standard Pattern
Always use try-catch to ensure rollback on errors:
using var transaction = connection.BeginTransaction();
try
{
using var cmd = connection.CreateCommand();
cmd.Transaction = transaction;
// Operation 1
cmd.CommandText = "UPDATE Inventory SET Quantity = Quantity - 1 WHERE ProductId = @id";
cmd.Parameters.AddWithValue("@id", productId);
cmd.ExecuteNonQuery();
// Operation 2
cmd.Parameters.Clear();
cmd.CommandText = "INSERT INTO OrderItems (OrderId, ProductId) VALUES (@orderId, @productId)";
cmd.Parameters.AddWithValue("@orderId", orderId);
cmd.Parameters.AddWithValue("@productId", productId);
cmd.ExecuteNonQuery();
transaction.Commit();
Console.WriteLine("Order placed successfully");
}
catch (Exception ex)
{
transaction.Rollback();
Console.WriteLine($"Order failed: {ex.Message}");
throw;
}
Async Transactions
await using var transaction = await connection.BeginTransactionAsync();
try
{
await using var cmd = connection.CreateCommand();
cmd.Transaction = transaction;
cmd.CommandText = "UPDATE Accounts SET Balance = Balance - @amount WHERE Id = @from";
cmd.Parameters.AddWithValue("@amount", amount);
cmd.Parameters.AddWithValue("@from", fromAccountId);
await cmd.ExecuteNonQueryAsync();
cmd.Parameters.Clear();
cmd.CommandText = "UPDATE Accounts SET Balance = Balance + @amount WHERE Id = @to";
cmd.Parameters.AddWithValue("@amount", amount);
cmd.Parameters.AddWithValue("@to", toAccountId);
await cmd.ExecuteNonQueryAsync();
await transaction.CommitAsync();
}
catch
{
await transaction.RollbackAsync();
throw;
}
Isolation Levels
Isolation levels control how transactions interact with each other. Higher isolation = more protection but less concurrency.
Concurrency Anomalies
Different isolation levels prevent different anomalies:
Dirty Read — Reading uncommitted changes from another transaction.
[[Svg Src="./witdatabase-dirty-read.svg" Alt="witdatabase-dirty-read"]]
Non-Repeatable Read — Same query returns different values within one transaction.
[[Svg Src="./witdatabase-nonrepeatable-read.svg" Alt="witdatabase-nonrepeatable-read"]]
Phantom Read — Same query returns different rows within one transaction.
[[Svg Src="./witdatabase-phantom-read.svg" Alt="witdatabase-phantom-read"]]
Isolation Level Comparison
| Level | Dirty Reads | Non-Repeatable | Phantoms | Performance |
|---|---|---|---|---|
| ReadUncommitted | Possible | Possible | Possible | Fastest |
| ReadCommitted | Prevented | Possible | Possible | Fast |
| RepeatableRead | Prevented | Prevented | Possible | Medium |
| Serializable | Prevented | Prevented | Prevented | Slowest |
| Snapshot | Prevented | Prevented | Prevented | Fast |
Setting Isolation Level
// Via BeginTransaction parameter
using var transaction = connection.BeginTransaction(IsolationLevel.Snapshot);
// Check current level
Console.WriteLine(transaction.IsolationLevel);
When to Use Each Level
ReadUncommitted — Almost never. Only for approximate counts where accuracy doesn't matter.
// OK: Dashboard showing "approximately 1.2M users"
using var tx = connection.BeginTransaction(IsolationLevel.ReadUncommitted);
var count = ExecuteScalar<long>("SELECT COUNT(*) FROM Users", tx);
// Might be slightly off, but fast
ReadCommitted — Default. Good for most applications.
// Good for: Single operations, simple queries
using var tx = connection.BeginTransaction(IsolationLevel.ReadCommitted);
RepeatableRead — When you need consistent reads within a transaction.
// Good for: Reports that query same data multiple times
using var tx = connection.BeginTransaction(IsolationLevel.RepeatableRead);
var total = ExecuteScalar<decimal>("SELECT SUM(Amount) FROM Orders", tx);
var count = ExecuteScalar<long>("SELECT COUNT(*) FROM Orders", tx);
var average = total / count; // Consistent: total and count from same snapshot
Serializable — When you need perfect consistency, can tolerate lower concurrency.
// Good for: Financial transactions, inventory management
using var tx = connection.BeginTransaction(IsolationLevel.Serializable);
// Check balance and withdraw atomically
var balance = ExecuteScalar<decimal>("SELECT Balance FROM Accounts WHERE Id = 1", tx);
if (balance >= 100)
{
ExecuteSql("UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1", tx);
}
// No other transaction can modify balance between SELECT and UPDATE
Snapshot — Best of both worlds. Consistent reads without blocking writers.
// Good for: Read-heavy workloads, reports alongside OLTP
using var tx = connection.BeginTransaction(IsolationLevel.Snapshot);
// Sees database as it was when transaction started
// Writers don't block, and we don't block writers
var data = ExecuteQuery("SELECT * FROM LargeTable", tx);
Snapshot Isolation Explained
Snapshot isolation deserves special attention because it's often the best choice:
[[Svg Src="./witdatabase-snapshot-isolation.svg" Alt="witdatabase-snapshot-isolation"]]
Transaction A sees a consistent snapshot from T1, regardless of other committed transactions. This provides:
- Consistent reads (no anomalies)
- No blocking (readers don't block writers)
- Good performance (uses MVCC - Multi-Version Concurrency Control)
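The guarantee above can be sketched with two connections. This is illustrative only, reusing the WitDbConnection API and the Users table from earlier examples:

```csharp
using var reader = new WitDbConnection("Data Source=app.witdb");
using var writer = new WitDbConnection("Data Source=app.witdb");
reader.Open();
writer.Open();

// Snapshot is taken when the transaction begins (T1)
using var tx = reader.BeginTransaction(IsolationLevel.Snapshot);

// A second connection commits a change after T1 - it is not blocked
using (var wcmd = writer.CreateCommand())
{
    wcmd.CommandText = "UPDATE Users SET Name = 'Changed' WHERE Id = 1";
    wcmd.ExecuteNonQuery(); // auto-committed
}

// The snapshot transaction still sees the value as of T1
using var cmd = reader.CreateCommand();
cmd.Transaction = tx;
cmd.CommandText = "SELECT Name FROM Users WHERE Id = 1";
Console.WriteLine(cmd.ExecuteScalar()); // pre-update name, not 'Changed'
tx.Commit();
```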
Savepoints
Savepoints create "checkpoints" within a transaction, allowing partial rollback.
Why Savepoints?
Sometimes you want to undo part of a transaction without aborting it entirely:
[[Svg Src="./witdatabase-savepoints.svg" Alt="witdatabase-savepoints"]]
Using Savepoints
using var transaction = connection.BeginTransaction();
using var cmd = connection.CreateCommand();
cmd.Transaction = transaction;
try
{
// Create order
cmd.CommandText = "INSERT INTO Orders (CustomerId, Total) VALUES (1, 0) RETURNING Id";
long orderId = (long)cmd.ExecuteScalar()!;
// Savepoint before items
transaction.Save("before_items");
decimal total = 0;
foreach (var item in items)
{
try
{
cmd.CommandText = "INSERT INTO OrderItems (OrderId, ProductId, Quantity, Price) VALUES (@o, @p, @q, @price)";
cmd.Parameters.Clear();
cmd.Parameters.AddWithValue("@o", orderId);
cmd.Parameters.AddWithValue("@p", item.ProductId);
cmd.Parameters.AddWithValue("@q", item.Quantity);
cmd.Parameters.AddWithValue("@price", item.Price);
cmd.ExecuteNonQuery();
total += item.Quantity * item.Price;
}
catch (WitDbConstraintException)
{
// Item failed (e.g., product doesn't exist)
// Rollback to savepoint and skip this item
transaction.Rollback("before_items");
transaction.Save("before_items"); // Create new savepoint
continue;
}
}
// Update order total
cmd.CommandText = "UPDATE Orders SET Total = @total WHERE Id = @id";
cmd.Parameters.Clear();
cmd.Parameters.AddWithValue("@total", total);
cmd.Parameters.AddWithValue("@id", orderId);
cmd.ExecuteNonQuery();
transaction.Commit();
}
catch
{
transaction.Rollback();
throw;
}
Savepoint Methods
Save(string name) — Create a savepoint
Rollback(string name) — Rollback to savepoint (keeps savepoint)
Release(string name) — Release savepoint (frees resources)
Nested Savepoints
Savepoints can be nested:
transaction.Save("sp1");
// ... operations ...
transaction.Save("sp2");
// ... more operations ...
transaction.Save("sp3");
// ... even more operations ...
// Rollback to sp2 (undoes sp3 operations, keeps sp1)
transaction.Rollback("sp2");
// Release sp1 when no longer needed
transaction.Release("sp1");
transaction.Commit();
Deadlocks
A deadlock occurs when two transactions are each waiting for the other to release a lock.
How Deadlocks Happen
[[Svg Src="./witdatabase-deadlock.svg" Alt="witdatabase-deadlock"]]
Deadlock Detection
WitDatabase detects deadlocks and aborts one transaction:
try
{
cmd.ExecuteNonQuery();
}
catch (WitDbLockedException ex) when (ex.ErrorCode == WitDbErrorCode.Deadlock)
{
// Deadlock detected - this transaction was chosen as victim
transaction.Rollback();
// Retry the entire transaction
}
Preventing Deadlocks
1. Lock in consistent order
// ❌ BAD: Random lock order causes deadlocks
void TransferA() {
Lock(account1);
Lock(account2);
}
void TransferB() {
Lock(account2); // Different order!
Lock(account1);
}
// ✅ GOOD: Always lock in same order (e.g., by ID)
void Transfer(int from, int to) {
int first = Math.Min(from, to);
int second = Math.Max(from, to);
Lock(first);
Lock(second);
}
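Applied to the account-transfer scenario, rule 1 might look like the sketch below. It assumes the Accounts table (Id, Balance) from earlier examples; the ordering trick, not the exact API, is the point:

```csharp
void Transfer(WitDbConnection connection, int from, int to, decimal amount)
{
    // Build the debit and credit, then apply them in ascending Id order so
    // concurrent A->B and B->A transfers lock rows in the same sequence.
    var steps = new[] { (Id: from, Delta: -amount), (Id: to, Delta: amount) }
        .OrderBy(s => s.Id);

    using var tx = connection.BeginTransaction();
    using var cmd = connection.CreateCommand();
    cmd.Transaction = tx;

    foreach (var (id, delta) in steps)
    {
        cmd.CommandText = "UPDATE Accounts SET Balance = Balance + @d WHERE Id = @id";
        cmd.Parameters.Clear();
        cmd.Parameters.AddWithValue("@d", delta);
        cmd.Parameters.AddWithValue("@id", id);
        cmd.ExecuteNonQuery();
    }

    tx.Commit();
}
```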
2. Keep transactions short
// ❌ BAD: Long transaction holds locks
using var tx = connection.BeginTransaction();
// ... lots of operations ...
Thread.Sleep(5000); // Holding locks!
// ... more operations ...
tx.Commit();
// ✅ GOOD: Quick transactions
using var tx = connection.BeginTransaction();
// Only essential operations
tx.Commit();
3. Use appropriate isolation level
Lower isolation levels hold fewer/shorter locks:
// Snapshot isolation is excellent for avoiding deadlocks
using var tx = connection.BeginTransaction(IsolationLevel.Snapshot);
// Readers don't block writers, writers don't block readers
4. Retry on deadlock
public void ExecuteWithRetry(Action<WitDbTransaction> action, int maxRetries = 3)
{
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
using var transaction = connection.BeginTransaction();
try
{
action(transaction);
transaction.Commit();
return;
}
catch (WitDbLockedException) when (attempt < maxRetries)
{
transaction.Rollback();
Thread.Sleep(100 * attempt); // Backoff
}
}
throw new Exception($"Failed after {maxRetries} attempts");
}
// Usage
ExecuteWithRetry(tx => {
TransferMoney(tx, fromAccount, toAccount, amount);
});
Transaction Best Practices
1. Keep Transactions Short
// ❌ BAD: User interaction inside transaction
using var tx = connection.BeginTransaction();
var product = GetProduct(tx, id);
Console.WriteLine($"Current price: {product.Price}");
Console.Write("Enter new price: ");
var newPrice = decimal.Parse(Console.ReadLine()!); // User thinking = locks held!
UpdatePrice(tx, id, newPrice);
tx.Commit();
// ✅ GOOD: Get input first, then quick transaction
var product = GetProduct(id);
Console.WriteLine($"Current price: {product.Price}");
Console.Write("Enter new price: ");
var newPrice = decimal.Parse(Console.ReadLine()!);
using var tx = connection.BeginTransaction();
UpdatePrice(tx, id, newPrice);
tx.Commit(); // Quick!
2. Always Handle Errors
// ✅ GOOD: Always rollback on error
using var tx = connection.BeginTransaction();
try
{
DoWork(tx);
tx.Commit();
}
catch
{
tx.Rollback();
throw;
}
// ✅ ALSO GOOD: Let Dispose handle it
using var tx = connection.BeginTransaction();
DoWork(tx);
tx.Commit();
// If exception before Commit, Dispose calls Rollback automatically
3. Don't Mix Transaction Contexts
// ❌ BAD: Command without transaction operates outside it
using var tx = connection.BeginTransaction();
using var cmd = connection.CreateCommand();
// cmd.Transaction = tx; // FORGOT THIS!
cmd.CommandText = "INSERT INTO Logs (Message) VALUES ('test')";
cmd.ExecuteNonQuery(); // This is auto-committed, not part of tx!
tx.Rollback(); // The log entry is still there!
// ✅ GOOD: Always assign transaction
using var tx = connection.BeginTransaction();
using var cmd = connection.CreateCommand();
cmd.Transaction = tx; // Now it's part of the transaction
4. Use Appropriate Isolation Level
// Consider what you actually need:
// Simple single-row updates → ReadCommitted (default)
using var tx = connection.BeginTransaction();
// Reports reading lots of data → Snapshot
using var tx = connection.BeginTransaction(IsolationLevel.Snapshot);
// Critical financial operations → Serializable
using var tx = connection.BeginTransaction(IsolationLevel.Serializable);
5. Avoid Nested Transactions
WitDatabase doesn't support true nested transactions. Use savepoints instead:
// ❌ Not supported: nested BeginTransaction
using var tx1 = connection.BeginTransaction();
using var tx2 = connection.BeginTransaction(); // Error!
// ✅ Use savepoints for partial rollback
using var tx = connection.BeginTransaction();
tx.Save("inner");
// ... operations ...
tx.Rollback("inner"); // Partial rollback
tx.Commit();
Transaction Properties Reference
Connection (WitDbConnection) — The associated connection
IsolationLevel (IsolationLevel) — Transaction isolation level
Transaction Methods Reference
Commit() — void — Commits the transaction
CommitAsync(CancellationToken) — Task — Async commit
Rollback() — void — Rolls back the transaction
RollbackAsync(CancellationToken) — Task — Async rollback
Save(string) — void — Creates a savepoint
Rollback(string) — void — Rolls back to savepoint
Release(string) — void — Releases a savepoint
This document covers advanced ADO.NET topics: error handling, schema discovery, DataAdapter for disconnected data access, and the provider factory pattern.
Error Handling
Proper error handling is essential for robust database applications. WitDatabase provides a hierarchy of exceptions that help you respond appropriately to different error conditions.
Exception Hierarchy
All WitDatabase exceptions inherit from WitDbException, which inherits from the standard DbException:
[[Svg Src="./witdatabase-exception-hierarchy.svg" Alt="witdatabase-exception-hierarchy"]]
This hierarchy lets you catch errors at the appropriate level of specificity:
try
{
ExecuteOperation();
}
catch (WitDbSyntaxException ex)
{
// Handle SQL syntax errors specifically
}
catch (WitDbException ex)
{
// Handle any other database errors
}
catch (Exception ex)
{
// Handle non-database errors
}
WitDbException Properties
ErrorCode (WitDbErrorCode) — Specific error code enum
Message (string) — Human-readable error description
SqlState (string) — SQL standard state code (when available)
Error Codes
The WitDbErrorCode enum provides fine-grained error identification:
Syntax:
SyntaxError — SQL syntax error
Not Found:
TableNotFound — Table doesn't exist
ColumnNotFound — Column doesn't exist
IndexNotFound — Index doesn't exist
Constraints:
PrimaryKeyViolation — Duplicate primary key
UniqueViolation — Unique constraint violated
ForeignKeyViolation — Foreign key reference invalid
NotNullViolation — NULL in NOT NULL column
CheckViolation — CHECK constraint failed
Locking:
DatabaseLocked — Database file is locked
LockTimeout — Timed out waiting for lock
Deadlock — Deadlock detected
Data:
TypeMismatch — Data type conversion error
DivisionByZero — Division by zero
Overflow — Numeric overflow
System:
IoError — File I/O error
OutOfMemory — Insufficient memory
DatabaseCorrupt — Database file corrupted
Unknown — Unclassified error
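One practical use of these codes is separating transient failures (worth retrying) from permanent ones. A minimal sketch, assuming the WitDbErrorCode values listed above; the classification is a suggestion, not part of the provider:

```csharp
// Transient errors may succeed on retry; everything else (syntax,
// constraint, data errors) will fail the same way every time.
static bool IsTransient(WitDbErrorCode code) => code switch
{
    WitDbErrorCode.DatabaseLocked => true,
    WitDbErrorCode.LockTimeout => true,
    WitDbErrorCode.Deadlock => true,
    _ => false
};

// Usage in a catch filter:
// catch (WitDbException ex) when (IsTransient(ex.ErrorCode)) { /* retry */ }
```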
Handling Specific Errors
Constraint Violations
Constraint errors are common and usually indicate invalid user input:
public bool CreateUser(string email, string name)
{
try
{
using var cmd = connection.CreateCommand();
cmd.CommandText = "INSERT INTO Users (Email, Name) VALUES (@email, @name)";
cmd.Parameters.AddWithValue("@email", email);
cmd.Parameters.AddWithValue("@name", name);
cmd.ExecuteNonQuery();
return true;
}
catch (WitDbConstraintException ex)
{
switch (ex.ErrorCode)
{
case WitDbErrorCode.UniqueViolation:
// Email already exists
throw new DuplicateEmailException($"Email '{email}' is already registered");
case WitDbErrorCode.NotNullViolation:
// Required field missing
throw new ValidationException("Email and name are required");
case WitDbErrorCode.CheckViolation:
// Business rule violated (e.g., email format)
throw new ValidationException("Invalid email format");
default:
throw; // Re-throw unexpected constraint errors
}
}
}
Lock Errors
Lock errors often benefit from retry logic:
public async Task<T> ExecuteWithRetryAsync<T>(
Func<Task<T>> operation,
int maxRetries = 3,
int baseDelayMs = 100)
{
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
try
{
return await operation();
}
catch (WitDbLockedException ex) when (attempt < maxRetries)
{
// Log the retry
logger.LogWarning(
"Database locked (attempt {Attempt}/{Max}): {Message}",
attempt, maxRetries, ex.Message);
// Exponential backoff with jitter
int delay = baseDelayMs * (int)Math.Pow(2, attempt - 1);
int jitter = Random.Shared.Next(0, delay / 2);
await Task.Delay(delay + jitter);
}
}
throw new Exception($"Operation failed after {maxRetries} attempts");
}
// Usage
var result = await ExecuteWithRetryAsync(async () =>
{
using var cmd = connection.CreateCommand();
cmd.CommandText = "UPDATE Counters SET Value = Value + 1 WHERE Name = 'visits'";
return await cmd.ExecuteNonQueryAsync();
});
Syntax Errors
Syntax errors indicate bugs in your SQL code:
try
{
cmd.ExecuteNonQuery();
}
catch (WitDbSyntaxException ex)
{
// Log full details for debugging
logger.LogError(
"SQL Syntax Error: {Message}\nSQL: {Sql}",
ex.Message,
cmd.CommandText);
// Don't expose SQL details to end users
throw new InternalServerError("An error occurred processing your request");
}
Database Corruption
Corruption is rare but serious:
try
{
connection.Open();
}
catch (WitDbCorruptException ex)
{
logger.LogCritical("Database corruption detected: {Message}", ex.Message);
// Alert operations team
alertService.SendCriticalAlert("Database corruption", ex.Message);
// Attempt recovery from backup
throw new ServiceUnavailableException(
"Database is temporarily unavailable. Please try again later.");
}
Logging Best Practices
public void ExecuteSafely(Action<WitDbCommand> operation)
{
using var cmd = connection.CreateCommand();
try
{
operation(cmd);
}
catch (WitDbException ex)
{
// Always log: error code, message, and SQL (sanitize sensitive data)
logger.LogError(
ex,
"Database error [{ErrorCode}]: {Message}\nSQL: {Sql}\nParameters: {Params}",
ex.ErrorCode,
ex.Message,
cmd.CommandText,
FormatParameters(cmd.Parameters));
throw;
}
}
private string FormatParameters(WitDbParameterCollection parameters)
{
var sb = new StringBuilder();
foreach (WitDbParameter p in parameters)
{
// Mask sensitive values
var value = p.ParameterName.Contains("password", StringComparison.OrdinalIgnoreCase)
? "***"
: p.Value?.ToString() ?? "NULL";
sb.Append($"{p.ParameterName}={value}, ");
}
return sb.ToString().TrimEnd(',', ' ');
}
Schema Discovery
Schema discovery lets you examine database structure at runtime — useful for tools, ORMs, code generators, and dynamic queries.
GetSchema Overview
The GetSchema() method returns metadata as DataTables:
// List available schema collections
DataTable collections = connection.GetSchema("MetaDataCollections");
foreach (DataRow row in collections.Rows)
{
Console.WriteLine(row["CollectionName"]);
}
Output:
MetaDataCollections
Tables
Columns
Indexes
IndexColumns
Views
ForeignKeys
Tables Collection
// All tables
DataTable tables = connection.GetSchema("Tables");
foreach (DataRow row in tables.Rows)
{
string name = (string)row["TABLE_NAME"];
string type = (string)row["TABLE_TYPE"]; // "TABLE" or "VIEW"
Console.WriteLine($"{name} ({type})");
}
// Specific table (with restriction)
DataTable userTable = connection.GetSchema("Tables", new[] { "Users" });
Columns Collection
// All columns in database
DataTable allColumns = connection.GetSchema("Columns");
// Columns for specific table
DataTable userColumns = connection.GetSchema("Columns", new[] { "Users" });
foreach (DataRow col in userColumns.Rows)
{
string table = (string)col["TABLE_NAME"];
string column = (string)col["COLUMN_NAME"];
string dataType = (string)col["DATA_TYPE"];
bool nullable = (bool)col["IS_NULLABLE"];
object? maxLength = col["CHARACTER_MAXIMUM_LENGTH"];
bool isPK = col["PRIMARY_KEY"] as bool? ?? false;
object? defaultValue = col["COLUMN_DEFAULT"];
Console.WriteLine(
$" {column}: {dataType}" +
$"{(nullable ? "" : " NOT NULL")}" +
$"{(isPK ? " PRIMARY KEY" : "")}" +
$"{(defaultValue != null ? $" DEFAULT {defaultValue}" : "")}");
}
Indexes Collection
DataTable indexes = connection.GetSchema("Indexes");
foreach (DataRow idx in indexes.Rows)
{
string table = (string)idx["TABLE_NAME"];
string index = (string)idx["INDEX_NAME"];
bool unique = (bool)idx["UNIQUE"];
bool pk = (bool)idx["PRIMARY_KEY"];
Console.WriteLine($"{table}.{index}: Unique={unique}, PrimaryKey={pk}");
}
// Get columns for a specific index
DataTable indexColumns = connection.GetSchema("IndexColumns", new[] { "Users", "IX_Users_Email" });
Foreign Keys Collection
DataTable foreignKeys = connection.GetSchema("ForeignKeys");
foreach (DataRow fk in foreignKeys.Rows)
{
string constraint = (string)fk["CONSTRAINT_NAME"];
string table = (string)fk["TABLE_NAME"];
string column = (string)fk["COLUMN_NAME"];
string refTable = (string)fk["REFERENCED_TABLE_NAME"];
string refColumn = (string)fk["REFERENCED_COLUMN_NAME"];
string onDelete = (string)fk["DELETE_RULE"];
string onUpdate = (string)fk["UPDATE_RULE"];
Console.WriteLine(
$"{table}.{column} -> {refTable}.{refColumn} " +
$"(ON DELETE {onDelete}, ON UPDATE {onUpdate})");
}
Building a Schema Browser
Here's a complete example that builds a database schema model:
public class DatabaseSchema
{
public List<TableSchema> Tables { get; set; } = new();
}
public class TableSchema
{
public string Name { get; set; } = "";
public List<ColumnSchema> Columns { get; set; } = new();
public List<IndexSchema> Indexes { get; set; } = new();
public List<ForeignKeySchema> ForeignKeys { get; set; } = new();
}
public class ColumnSchema
{
public string Name { get; set; } = "";
public string DataType { get; set; } = "";
public bool IsNullable { get; set; }
public bool IsPrimaryKey { get; set; }
public string? DefaultValue { get; set; }
}
public class IndexSchema
{
public string Name { get; set; } = "";
public bool IsUnique { get; set; }
public List<string> Columns { get; set; } = new();
}
public class ForeignKeySchema
{
public string Name { get; set; } = "";
public string Column { get; set; } = "";
public string ReferencedTable { get; set; } = "";
public string ReferencedColumn { get; set; } = "";
}
public DatabaseSchema GetDatabaseSchema(WitDbConnection connection)
{
var schema = new DatabaseSchema();
// Get all tables
var tables = connection.GetSchema("Tables");
foreach (DataRow tableRow in tables.Rows)
{
if ((string)tableRow["TABLE_TYPE"] != "TABLE") continue;
string tableName = (string)tableRow["TABLE_NAME"];
var table = new TableSchema { Name = tableName };
// Get columns for this table
var columns = connection.GetSchema("Columns", new[] { tableName });
foreach (DataRow colRow in columns.Rows)
{
table.Columns.Add(new ColumnSchema
{
Name = (string)colRow["COLUMN_NAME"],
DataType = (string)colRow["DATA_TYPE"],
IsNullable = (bool)colRow["IS_NULLABLE"],
IsPrimaryKey = colRow["PRIMARY_KEY"] as bool? ?? false,
DefaultValue = colRow["COLUMN_DEFAULT"]?.ToString()
});
}
// Get indexes for this table
var indexes = connection.GetSchema("Indexes", new[] { tableName });
foreach (DataRow idxRow in indexes.Rows)
{
var index = new IndexSchema
{
Name = (string)idxRow["INDEX_NAME"],
IsUnique = (bool)idxRow["UNIQUE"]
};
// Get columns for this index
var idxCols = connection.GetSchema("IndexColumns",
new[] { tableName, index.Name });
foreach (DataRow idxColRow in idxCols.Rows.Cast<DataRow>()
.OrderBy(r => (int)r["ORDINAL_POSITION"]))
{
index.Columns.Add((string)idxColRow["COLUMN_NAME"]);
}
table.Indexes.Add(index);
}
// Get foreign keys for this table
var fks = connection.GetSchema("ForeignKeys", new[] { tableName });
foreach (DataRow fkRow in fks.Rows)
{
table.ForeignKeys.Add(new ForeignKeySchema
{
Name = (string)fkRow["CONSTRAINT_NAME"],
Column = (string)fkRow["COLUMN_NAME"],
ReferencedTable = (string)fkRow["REFERENCED_TABLE_NAME"],
ReferencedColumn = (string)fkRow["REFERENCED_COLUMN_NAME"]
});
}
schema.Tables.Add(table);
}
return schema;
}
DataAdapter and Disconnected Data
The disconnected data model lets you load data into memory, work with it offline, and sync changes back to the database.
When to Use DataAdapter
WinForms/WPF data binding — ✅ Yes — DataTable binds directly to grids
Batch editing — ✅ Yes — Track changes, submit all at once
Offline scenarios — ✅ Yes — Work without connection
Simple CRUD apps — ✅ Yes — Less code than manual commands
High-performance reads — ❌ No — DataReader is faster
Streaming large data — ❌ No — DataTable loads all into memory
Complex transactions — ❌ No — Manual control is better
Filling a DataTable
using var adapter = new WitDbDataAdapter(
"SELECT Id, Name, Email, CreatedAt FROM Users WHERE Active = 1",
connection);
var table = new DataTable();
adapter.Fill(table);
// Data is now in memory - connection can close
Console.WriteLine($"Loaded {table.Rows.Count} users");
foreach (DataRow row in table.Rows)
{
Console.WriteLine($"{row["Id"]}: {row["Name"]} <{row["Email"]}>");
}
Filling a DataSet (Multiple Tables)
var dataSet = new DataSet();
// Load Users
using var userAdapter = new WitDbDataAdapter("SELECT * FROM Users", connection);
userAdapter.Fill(dataSet, "Users");
// Load Orders
using var orderAdapter = new WitDbDataAdapter("SELECT * FROM Orders", connection);
orderAdapter.Fill(dataSet, "Orders");
// Access tables
DataTable users = dataSet.Tables["Users"]!;
DataTable orders = dataSet.Tables["Orders"]!;
// Define relationships (optional, for navigation)
dataSet.Relations.Add("UserOrders",
users.Columns["Id"]!,
orders.Columns["UserId"]!);
// Navigate parent-child
foreach (DataRow user in users.Rows)
{
Console.WriteLine($"User: {user["Name"]}");
foreach (DataRow order in user.GetChildRows("UserOrders"))
{
Console.WriteLine($"  Order #{order["Id"]}: ${order["Total"]}");
}
}
Modifying Data
DataTable tracks changes automatically:
// Fill table
var table = new DataTable();
adapter.Fill(table);
// Modify existing row
DataRow user = table.Rows[0];
user["Name"] = "Updated Name";
// Add new row
DataRow newUser = table.NewRow();
newUser["Name"] = "New User";
newUser["Email"] = "new@example.com";
table.Rows.Add(newUser);
// Delete row
table.Rows[1].Delete();
// Check pending changes
Console.WriteLine($"Added: {table.GetChanges(DataRowState.Added)?.Rows.Count ?? 0}");
Console.WriteLine($"Modified: {table.GetChanges(DataRowState.Modified)?.Rows.Count ?? 0}");
Console.WriteLine($"Deleted: {table.GetChanges(DataRowState.Deleted)?.Rows.Count ?? 0}");
Saving Changes with CommandBuilder
CommandBuilder automatically generates INSERT, UPDATE, DELETE commands:
using var adapter = new WitDbDataAdapter(
"SELECT Id, Name, Email FROM Users",
connection);
// CommandBuilder generates commands based on SELECT
using var builder = new WitDbCommandBuilder(adapter);
// Fill and modify
var table = new DataTable();
adapter.Fill(table);
table.Rows[0]["Name"] = "Updated";
var newRow = table.NewRow();
newRow["Name"] = "New User";
table.Rows.Add(newRow);
table.Rows[1].Delete();
// Save all changes
int affected = adapter.Update(table);
Console.WriteLine($"Saved {affected} changes");
// Accept changes (clear tracking)
table.AcceptChanges();
CommandBuilder requirements:
- SELECT must include primary key column(s)
- Table must have a primary key
- Only single-table queries (no JOINs)
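When a requirement is violated, the failure surfaces when the adapter first asks the builder to generate a command, not at Fill time. A hedged sketch (the exact exception type and message may differ in WitDatabase):

```csharp
// JOIN query: the builder cannot tell which base table to update
using var adapter = new WitDbDataAdapter(
    "SELECT u.Id, u.Name, o.Total FROM Users u JOIN Orders o ON o.UserId = u.Id",
    connection);
using var builder = new WitDbCommandBuilder(adapter);

var table = new DataTable();
adapter.Fill(table);          // Works - filling is just a SELECT
table.Rows[0]["Name"] = "X";

try
{
    adapter.Update(table);    // Command generation happens here and fails
}
catch (InvalidOperationException ex)
{
    Console.WriteLine(ex.Message); // explains the single-table requirement
}
```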
Manual Commands (More Control)
For complex scenarios, configure commands manually:
using var adapter = new WitDbDataAdapter();
// SELECT
adapter.SelectCommand = new WitDbCommand(
"SELECT Id, Name, Email FROM Users WHERE Active = 1",
connection);
// INSERT
adapter.InsertCommand = new WitDbCommand(@"
INSERT INTO Users (Name, Email, Active, CreatedAt)
VALUES (@Name, @Email, 1, DATETIME('now'))
RETURNING Id",
connection);
adapter.InsertCommand.Parameters.Add(new WitDbParameter("@Name", DbType.String, 100, "Name"));
adapter.InsertCommand.Parameters.Add(new WitDbParameter("@Email", DbType.String, 255, "Email"));
// UPDATE
adapter.UpdateCommand = new WitDbCommand(@"
UPDATE Users SET Name = @Name, Email = @Email, Version = Version + 1
WHERE Id = @Id AND Version = @OriginalVersion",
connection);
adapter.UpdateCommand.Parameters.Add(new WitDbParameter("@Id", DbType.Int64, "Id"));
adapter.UpdateCommand.Parameters.Add(new WitDbParameter("@Name", DbType.String, 100, "Name"));
adapter.UpdateCommand.Parameters.Add(new WitDbParameter("@Email", DbType.String, 255, "Email"));
adapter.UpdateCommand.Parameters.Add(new WitDbParameter("@OriginalVersion", DbType.Int32, "Version")
{
SourceVersion = DataRowVersion.Original // Use original value for optimistic concurrency
});
// DELETE
adapter.DeleteCommand = new WitDbCommand(
"DELETE FROM Users WHERE Id = @Id",
connection);
adapter.DeleteCommand.Parameters.Add(new WitDbParameter("@Id", DbType.Int64, "Id")
{
SourceVersion = DataRowVersion.Original
});
Handling Update Conflicts
adapter.RowUpdated += (sender, e) =>
{
if (e.Status == UpdateStatus.ErrorsOccurred)
{
Console.WriteLine($"Error updating row: {e.Errors?.Message}");
e.Status = UpdateStatus.SkipCurrentRow; // Continue with other rows
}
else if (e.RecordsAffected == 0 && e.StatementType == StatementType.Update)
{
Console.WriteLine("Optimistic concurrency conflict - row was modified by another user");
e.Status = UpdateStatus.SkipCurrentRow;
}
};
adapter.ContinueUpdateOnError = true; // Don't stop on first error
adapter.Update(table);
DbProviderFactory
The provider factory pattern enables database-agnostic code — write once, run against any database.
Registering the Factory
// At application startup
DbProviderFactories.RegisterFactory(
"OutWit.Database",
WitDbProviderFactory.Instance);
Using the Factory
// Get factory by name (could come from configuration)
string providerName = configuration["Database:Provider"]; // "OutWit.Database"
string connectionString = configuration["Database:ConnectionString"];
var factory = DbProviderFactories.GetFactory(providerName);
// Create objects through factory
using var connection = factory.CreateConnection()!;
connection.ConnectionString = connectionString;
connection.Open();
using var command = factory.CreateCommand()!;
command.Connection = connection;
command.CommandText = "SELECT * FROM Users";
var parameter = factory.CreateParameter()!;
parameter.ParameterName = "@id";
parameter.Value = 1;
command.Parameters.Add(parameter);
Database-Agnostic Repository
public class GenericRepository : IDisposable
{
private readonly DbProviderFactory _factory;
private readonly string _connectionString;
private DbConnection? _connection;
public GenericRepository(string providerName, string connectionString)
{
_factory = DbProviderFactories.GetFactory(providerName);
_connectionString = connectionString;
}
private DbConnection GetConnection()
{
if (_connection == null)
{
_connection = _factory.CreateConnection()!;
_connection.ConnectionString = _connectionString;
_connection.Open();
}
return _connection;
}
public IEnumerable<T> Query<T>(string sql, Func<DbDataReader, T> mapper)
{
using var cmd = _factory.CreateCommand()!;
cmd.Connection = GetConnection();
cmd.CommandText = sql;
using var reader = cmd.ExecuteReader();
while (reader.Read())
{
yield return mapper(reader);
}
}
public int Execute(string sql, Action<DbParameterCollection>? addParameters = null)
{
using var cmd = _factory.CreateCommand()!;
cmd.Connection = GetConnection();
cmd.CommandText = sql;
addParameters?.Invoke(cmd.Parameters);
return cmd.ExecuteNonQuery();
}
public DataTable FillTable(string sql)
{
var adapter = _factory.CreateDataAdapter()!;
adapter.SelectCommand = _factory.CreateCommand()!;
adapter.SelectCommand.Connection = GetConnection();
adapter.SelectCommand.CommandText = sql;
var table = new DataTable();
adapter.Fill(table);
return table;
}
public void Dispose()
{
_connection?.Dispose();
}
}
// Usage - works with any database
var repo = new GenericRepository("OutWit.Database", "Data Source=app.witdb");
var users = repo.Query(
"SELECT Id, Name FROM Users",
reader => new User
{
Id = reader.GetInt64(0),
Name = reader.GetString(1)
});
foreach (var user in users)
{
Console.WriteLine(user.Name);
}
Factory Methods
CreateConnection() — DbConnection? — Creates a new connection
CreateCommand() — DbCommand? — Creates a new command
CreateParameter() — DbParameter? — Creates a new parameter
CreateDataAdapter() — DbDataAdapter? — Creates a new data adapter
CreateCommandBuilder() — DbCommandBuilder? — Creates a new command builder
CreateConnectionStringBuilder() — DbConnectionStringBuilder? — Creates a connection string builder
Ambient Transactions (System.Transactions)
WitDatabase supports TransactionScope for distributed transactions:
using System.Transactions;
// Ambient transaction - any enlisted connection participates
using (var scope = new TransactionScope())
{
using var conn1 = new WitDbConnection("Data Source=db1.witdb");
conn1.Open();
ExecuteSql(conn1, "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1");
using var conn2 = new WitDbConnection("Data Source=db2.witdb");
conn2.Open();
ExecuteSql(conn2, "UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2");
scope.Complete(); // Both databases commit
}
// If exception or no Complete(), both roll back
Async with TransactionScope
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
await using var connection = new WitDbConnection(connectionString);
await connection.OpenAsync();
// Connection automatically enlists in ambient transaction
await ExecuteSqlAsync(connection, "INSERT INTO Logs (Message) VALUES ('test')");
scope.Complete();
}