The SQL Engine (WitSqlEngine) provides direct access to WitDatabase's query execution capabilities without the ADO.NET abstraction layer. It offers the best performance for SQL operations while keeping the API simple and direct.
Overview
What is the SQL Engine?
The SQL Engine is the heart of WitDatabase's SQL processing. It parses SQL statements, plans query execution, and returns results. While ADO.NET wraps this engine with standard .NET interfaces (DbConnection, DbCommand, DbDataReader), you can also use the engine directly for maximum performance.
When you execute a query through ADO.NET, the path looks like this:
Your Code → WitDbCommand → WitSqlEngine → Storage
Using the SQL Engine directly eliminates the middle layer:
Your Code → WitSqlEngine → Storage
This direct access removes the overhead of ADO.NET's connection state management, parameter collection objects, and result set abstraction. For high-throughput scenarios or tight loops, this overhead reduction translates to measurably better performance.
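To make the two paths concrete, here is the same scalar query issued both ways. This is a sketch, not a definitive implementation: the `WitDbConnection` type name and the connection-string format are assumptions (only `WitDbCommand` is named in this document), and the direct call uses the `ExecuteScalar` method described later.

```csharp
// ADO.NET path: connection/command objects wrap the engine.
// WitDbConnection and the connection-string format are assumed names here.
using var connection = new WitDbConnection("Data Source=myapp.witdb");
connection.Open();
using var command = connection.CreateCommand();
command.CommandText = "SELECT COUNT(*) FROM Users";
var adoCount = (long)command.ExecuteScalar()!;

// Direct path: the engine runs the same SQL with no wrapper objects,
// assuming an engine instance created as shown in Getting Started below.
var engineCount = engine.ExecuteScalar("SELECT COUNT(*) FROM Users").AsInt64();
```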
When to Use the SQL Engine Directly
The SQL Engine is ideal for:
Maximum performance scenarios. When you're processing millions of rows or executing thousands of queries per second, eliminating ADO.NET overhead matters. Benchmarks show 10-20% improvement for simple queries.
Embedded database logic. When building a service or library that uses WitDatabase internally, the SQL Engine provides a cleaner API than creating fake connection strings and managing DbConnection lifecycle.
Bulk data operations. The engine's BulkInsert and prepared statement batch execution are optimized for high-throughput data loading.
Long-running processes. Background services, data pipelines, and ETL jobs that maintain a single database context benefit from the simpler lifecycle model.
Low-level data access. When you need direct table scans, index seeks, or schema introspection beyond what SQL provides.
When to Use ADO.NET Instead
Stick with ADO.NET when:
- You need compatibility with existing code or libraries expecting DbConnection
- You're using Entity Framework Core
- You want connection pooling managed automatically
- You need TransactionScope support
- Interoperability with other ADO.NET providers matters
The ADO.NET provider is built on top of the SQL Engine, so you're not losing any functionality — just gaining compatibility.
Architecture Overview
The SQL Engine sits between your code and the storage layer:
[[Svg Src="./witdatabase-sql-engine-architecture.svg" Alt="witdatabase-sql-engine-architecture"]]
Getting Started
Creating an Engine Instance
The SQL Engine requires a WitDatabase instance, which you create through the builder pattern:
using OutWit.Database.Core.Builder;
using OutWit.Database.Engine;
// Create database with desired configuration
var database = new WitDatabaseBuilder()
.WithFilePath("myapp.witdb")
.WithBTree()
.WithTransactions()
.Build();
// Create the SQL engine
var engine = new WitSqlEngine(database, ownsStore: true);
The ownsStore parameter determines disposal behavior:
- true — Engine disposes the database when the engine is disposed
- false — You manage the database lifecycle separately
For in-memory databases, use the convenience method:
var database = WitDatabase.CreateInMemory();
var engine = new WitSqlEngine(database, ownsStore: true);
Basic Query Execution
The engine provides several methods for different query patterns:
// Execute returns a WitSqlResult for any SQL
using var result = engine.Execute("SELECT * FROM Users");
// Iterate through rows
while (result.Read())
{
var id = result.CurrentRow["Id"].AsInt64();
var name = result.CurrentRow["Name"].AsString();
Console.WriteLine($"{id}: {name}");
}
For simpler access patterns, use the convenience methods:
// Get all rows as a list
List<WitSqlRow> users = engine.Query("SELECT * FROM Users WHERE IsActive = 1");
// Get first row or null
WitSqlRow? user = engine.QueryFirstOrDefault("SELECT * FROM Users WHERE Id = 1");
// Get a single scalar value
WitSqlValue count = engine.ExecuteScalar("SELECT COUNT(*) FROM Users");
Console.WriteLine($"User count: {count.AsInt64()}");
// Execute non-query (INSERT, UPDATE, DELETE) and get rows affected
int affected = engine.ExecuteNonQuery("UPDATE Users SET LastLogin = NOW() WHERE Id = 1");
Working with Parameters
Never concatenate user input into SQL strings. Use parameters:
var parameters = new Dictionary<string, object?>
{
["@name"] = "Alice",
["@email"] = "alice@example.com"
};
var user = engine.QueryFirstOrDefault(
"SELECT * FROM Users WHERE Name = @name AND Email = @email",
parameters);
Parameter names can use @, :, or $ prefix — all are equivalent:
// All equivalent
engine.Execute("SELECT * FROM Users WHERE Id = @id", new Dictionary<string, object?> { ["id"] = 1 });
engine.Execute("SELECT * FROM Users WHERE Id = :id", new Dictionary<string, object?> { ["id"] = 1 });
engine.Execute("SELECT * FROM Users WHERE Id = $id", new Dictionary<string, object?> { ["id"] = 1 });
Timeout and Cancellation
For long-running queries, specify a timeout or use cancellation tokens:
// With timeout
var timeout = TimeSpan.FromSeconds(30);
using var result = engine.Execute(
"SELECT * FROM LargeTable WHERE ComplexCondition(Column)",
parameters: null,
timeout: timeout);
// With CancellationToken
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
using var result = engine.Execute(
"SELECT * FROM LargeTable",
parameters: null,
timeout: null,
cancellationToken: cts.Token);
Cancellation is cooperative — the engine checks the token between row iterations. For very long-running single operations (like a massive JOIN), the check happens at natural boundaries in processing.
// Practical example: cancellable report generation
public async Task<List<ReportRow>> GenerateReportAsync(CancellationToken cancellationToken)
{
var results = new List<ReportRow>();
using var result = engine.Execute(
"SELECT * FROM Orders WHERE Date > @start",
new Dictionary<string, object?> { ["start"] = DateTime.Today.AddMonths(-12) },
timeout: TimeSpan.FromMinutes(5),
cancellationToken: cancellationToken);
while (result.Read())
{
cancellationToken.ThrowIfCancellationRequested();
results.Add(MapToReportRow(result.CurrentRow));
}
return results;
}
Creating Tables
Execute DDL statements the same way as queries:
engine.Execute(@"
CREATE TABLE Users (
Id INTEGER PRIMARY KEY AUTOINCREMENT,
Name VARCHAR(100) NOT NULL,
Email VARCHAR(255) UNIQUE,
CreatedAt DATETIME DEFAULT NOW(),
IsActive BOOLEAN DEFAULT true
)
");
engine.Execute(@"
CREATE INDEX IX_Users_Email ON Users(Email)
");
The WitSqlResult Class
Every query returns a WitSqlResult that provides access to results and metadata.
Result Properties
using var result = engine.Execute("UPDATE Users SET IsActive = 0 WHERE LastLogin < @date",
new Dictionary<string, object?> { ["date"] = DateTime.UtcNow.AddYears(-1) });
// Number of rows affected by INSERT/UPDATE/DELETE
Console.WriteLine($"Rows affected: {result.RowsAffected}");
// Column information
Console.WriteLine($"Columns: {result.Columns.Count}");
foreach (var col in result.Columns)
{
Console.WriteLine($"  {col.Name}: {col.Type}");
}
// For SELECT with RETURNING or INSERT...RETURNING
Console.WriteLine($"Has rows: {result.HasRows}");
Reading Rows
The result acts as a forward-only cursor:
using var result = engine.Execute("SELECT Id, Name, Email FROM Users");
while (result.Read())
{
// Access by column name
var name = result.CurrentRow["Name"].AsString();
// Access by index
var id = result.CurrentRow[0].AsInt64();
// The CurrentRow property returns a WitSqlRow
WitSqlRow row = result.CurrentRow;
}
For convenience, read all rows at once:
using var result = engine.Execute("SELECT * FROM Users");
List<WitSqlRow> allRows = result.ReadAll();
Working with WitSqlRow
Each row provides column access and type conversion:
WitSqlRow row = result.CurrentRow;
// Column access by name or index
WitSqlValue idValue = row["Id"];
WitSqlValue nameValue = row[1];
// Type conversion methods
long id = row["Id"].AsInt64();
string name = row["Name"].AsString();
decimal? price = row["Price"].AsDecimalOrNull();
DateTime created = row["CreatedAt"].AsDateTime();
bool isActive = row["IsActive"].AsBoolean();
// Check for NULL
if (row["MiddleName"].IsNull)
{
Console.WriteLine("No middle name");
}
// Get column names
IReadOnlyList<string> columns = row.ColumnNames;
WitSqlValue Type Conversions
The WitSqlValue struct provides type-safe access with null handling:
| Method | Returns | Null Behavior |
|---|---|---|
| AsInt64() | long | Throws if null |
| AsInt64OrNull() | long? | Returns null |
| AsInt64OrDefault(defaultValue) | long | Returns default |
| AsDouble() | double | Throws if null |
| AsDoubleOrNull() | double? | Returns null |
| AsDecimal() | decimal | Throws if null |
| AsDecimalOrNull() | decimal? | Returns null |
| AsString() | string | Throws if null |
| AsStringOrNull() | string? | Returns null |
| AsBoolean() | bool | Throws if null |
| AsBooleanOrNull() | bool? | Returns null |
| AsDateTime() | DateTime | Throws if null |
| AsDateTimeOrNull() | DateTime? | Returns null |
| AsGuid() | Guid | Throws if null |
| AsGuidOrNull() | Guid? | Returns null |
| AsBytes() | byte[] | Throws if null |
| AsBytesOrNull() | byte[]? | Returns null |
| IsNull | bool | Check for null |
Prepared Statements
Prepared statements parse SQL once and execute multiple times with different parameters. They're essential for high-performance batch operations.
Creating a Prepared Statement
using var stmt = engine.Prepare("SELECT * FROM Users WHERE Department = @dept");
// Set parameters
stmt.SetParameter("dept", "Engineering");
// Execute
using var result = stmt.Execute();
while (result.Read())
{
Console.WriteLine(result.CurrentRow["Name"].AsString());
}
Reusing with Different Parameters
using var stmt = engine.Prepare("SELECT * FROM Products WHERE Category = @cat AND Price < @maxPrice");
// First execution
stmt.SetParameter("cat", "Electronics");
stmt.SetParameter("maxPrice", 500.00m);
using var electronics = stmt.Execute();
// Process electronics...
// Clear and reuse
stmt.ClearParameters();
stmt.SetParameter("cat", "Clothing");
stmt.SetParameter("maxPrice", 100.00m);
using var clothing = stmt.Execute();
// Process clothing...
Fluent Parameter Setting
Chain parameter calls for cleaner code:
using var stmt = engine.Prepare("INSERT INTO Users (Name, Email, Age) VALUES (@name, @email, @age)");
using var result = stmt
.SetParameter("name", "Alice")
.SetParameter("email", "alice@example.com")
.SetParameter("age", 30)
.Execute();
Or use a dictionary:
stmt.SetParameters(new Dictionary<string, object?>
{
["name"] = "Bob",
["email"] = "bob@example.com",
["age"] = 25
});
using var result = stmt.Execute();
Execution with Timeout and Cancellation
Prepared statements also support timeout and cancellation:
using var stmt = engine.Prepare("SELECT * FROM LargeTable WHERE Category = @cat");
stmt.SetParameter("cat", "Reports");
// With CancellationToken
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
using var result = stmt.Execute(cts.Token);
// Process with cancellation checks
while (result.Read())
{
cts.Token.ThrowIfCancellationRequested();
ProcessRow(result.CurrentRow);
}
Batch Execution
For inserting or updating many rows, batch execution is dramatically faster:
using var stmt = engine.Prepare("INSERT INTO Users (Name, Email) VALUES (@name, @email)");
var users = new[]
{
new Dictionary<string, object?> { ["name"] = "Alice", ["email"] = "alice@test.com" },
new Dictionary<string, object?> { ["name"] = "Bob", ["email"] = "bob@test.com" },
new Dictionary<string, object?> { ["name"] = "Charlie", ["email"] = "charlie@test.com" },
};
int rowsAffected = stmt.ExecuteBatch(users);
Console.WriteLine($"Inserted {rowsAffected} users");
For large datasets, batch execution provides significant performance benefits:
- SQL is parsed once
- Execution context is reused
- Minimal object allocation per row
Batch execution also supports cancellation:
var cts = new CancellationTokenSource();
// Cancel after 1 minute
cts.CancelAfter(TimeSpan.FromMinutes(1));
try
{
int affected = stmt.ExecuteBatch(millionsOfRows, cts.Token);
Console.WriteLine($"Inserted {affected} rows");
}
catch (OperationCanceledException)
{
Console.WriteLine("Batch operation was cancelled");
}
Generic Batch Execution
Use objects directly with property-to-parameter mapping:
public record UserDto(string Name, string Email, int Age);
using var stmt = engine.Prepare("INSERT INTO Users (Name, Email, Age) VALUES (@Name, @Email, @Age)");
var users = new[]
{
new UserDto("Alice", "alice@test.com", 30),
new UserDto("Bob", "bob@test.com", 25),
new UserDto("Charlie", "charlie@test.com", 35),
};
int rowsAffected = stmt.ExecuteBatch(users); // Properties mapped to parameters
Bulk Operations
The SQL Engine provides optimized bulk operations for high-throughput data loading.
BulkInsert with Arrays
Insert many rows with minimal overhead:
var columns = new[] { "Name", "Email", "Age" };
var rows = new List<object?[]>
{
new object?[] { "Alice", "alice@test.com", 25 },
new object?[] { "Bob", "bob@test.com", 30 },
new object?[] { "Charlie", "charlie@test.com", 35 },
};
int inserted = engine.BulkInsert("Users", columns, rows);
Console.WriteLine($"Inserted {inserted} rows");
BulkInsert with Dictionaries
When column sets vary or for cleaner code:
var rows = new[]
{
new Dictionary<string, object?> { ["Name"] = "Alice", ["Email"] = "alice@test.com" },
new Dictionary<string, object?> { ["Name"] = "Bob", ["Email"] = "bob@test.com" },
};
int inserted = engine.BulkInsert("Users", rows);
BulkInsert with Cancellation
For very large bulk operations, use cancellation:
var cts = new CancellationTokenSource(TimeSpan.FromMinutes(10));
try
{
int inserted = engine.BulkInsert("Events", columns, millionsOfRows, cts.Token);
Console.WriteLine($"Inserted {inserted} rows");
}
catch (OperationCanceledException)
{
Console.WriteLine("Bulk insert was cancelled");
// Partial data may have been inserted if not in a transaction
}
Performance Comparison
For inserting 10,000 rows:
| Method | Time | Notes |
|---|---|---|
| Individual INSERTs (no transaction) | ~30s | Each INSERT auto-commits |
| Individual INSERTs (in transaction) | ~1s | Single commit at end |
| Prepared statement batch | ~0.5s | Parse once, reuse |
| BulkInsert | ~0.3s | Optimized internal path |
Always wrap bulk operations in a transaction for best performance:
engine.Execute("BEGIN TRANSACTION");
try
{
engine.BulkInsert("Users", columns, rows);
engine.Execute("COMMIT");
}
catch
{
engine.Execute("ROLLBACK");
throw;
}
Transactions
The SQL Engine supports full ACID transactions with multiple isolation levels.
Basic Transaction Pattern
engine.Execute("BEGIN TRANSACTION");
try
{
engine.Execute("UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1");
engine.Execute("UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2");
engine.Execute("COMMIT");
}
catch
{
engine.Execute("ROLLBACK");
throw;
}
Isolation Levels
Set the isolation level for a transaction:
engine.Execute("BEGIN TRANSACTION");
engine.Execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
// Your queries run with serializable isolation
// ...
engine.Execute("COMMIT");
Available isolation levels:
| Level | Description |
|---|---|
| READ UNCOMMITTED | Can see uncommitted changes from other transactions |
| READ COMMITTED | Only sees committed data (default) |
| REPEATABLE READ | Same reads return same data within transaction |
| SERIALIZABLE | Full isolation, as if transactions ran sequentially |
| SNAPSHOT | Each transaction sees a consistent snapshot |
Savepoints
Create savepoints for partial rollback:
engine.Execute("BEGIN TRANSACTION");
engine.Execute("INSERT INTO Orders (CustomerId, Total) VALUES (1, 100)");
engine.Execute("SAVEPOINT order_created");
try
{
engine.Execute("INSERT INTO OrderItems (OrderId, ProductId) VALUES (LAST_INSERT_ID(), 999)");
// This might fail if product 999 doesn't exist
}
catch
{
engine.Execute("ROLLBACK TO order_created");
// Order still exists, but without items
}
engine.Execute("COMMIT");
Transaction via ITransaction Interface
For more control, use the database's transaction interface:
using var transaction = database.BeginTransaction();
try
{
// Operations within transaction context...
engine.Execute("INSERT INTO Logs (Message) VALUES ('Test')");
transaction.Commit();
}
catch
{
transaction.Rollback();
throw;
}
Query Analysis with EXPLAIN
Understanding how the engine executes your queries helps you write efficient SQL. The EXPLAIN statement shows the query execution plan without actually running the query.
Basic EXPLAIN
using var result = engine.Execute("EXPLAIN SELECT * FROM Users WHERE Email = 'alice@test.com'");
while (result.Read())
{
Console.WriteLine(result.CurrentRow["Plan"].AsString());
}
Output might look like:
Scan: Users
Filter: Email = 'alice@test.com'
Estimated rows: 1
Understanding Execution Plans
The execution plan shows the operations the engine will perform:
| Operation | Description |
|---|---|
| Scan: TableName | Full table scan — reads all rows |
| IndexSeek: IndexName | Uses index for exact match lookup |
| IndexRangeScan: IndexName | Uses index for range of values |
| Filter: condition | Applies WHERE condition after reading |
| Sort: columns | Sorts results (for ORDER BY) |
| Aggregate: function | Computes COUNT, SUM, etc. |
| Join: type | Joins two tables (INNER, LEFT, etc.) |
| Limit: n | Returns only first n rows |
Index Usage Analysis
Use EXPLAIN to verify your indexes are being used:
// Without index
engine.Execute("EXPLAIN SELECT * FROM Orders WHERE CustomerId = 123");
// Output: Scan: Orders, Filter: CustomerId = 123
// Estimated rows: 10000
// Create index
engine.Execute("CREATE INDEX IX_Orders_CustomerId ON Orders(CustomerId)");
// With index
engine.Execute("EXPLAIN SELECT * FROM Orders WHERE CustomerId = 123");
// Output: IndexSeek: IX_Orders_CustomerId
// Estimated rows: 15
Analyzing Complex Queries
For JOINs and subqueries, EXPLAIN shows the full operation tree:
var sql = @"
EXPLAIN
SELECT o.Id, c.Name, o.Total
FROM Orders o
JOIN Customers c ON o.CustomerId = c.Id
WHERE o.OrderDate > '2024-01-01'
ORDER BY o.Total DESC
LIMIT 10";
using var result = engine.Execute(sql);
while (result.Read())
{
Console.WriteLine(result.CurrentRow["Plan"].AsString());
}
Output:
Limit: 10
Sort: Total DESC
Join: INNER
Left: IndexRangeScan: IX_Orders_OrderDate (> '2024-01-01')
Right: IndexSeek: PK_Customers
Performance Tips from EXPLAIN
Look for full table scans on large tables. If you see Scan: TableName with many estimated rows, consider adding an index.
Check estimated row counts. Large estimated rows at early stages can slow down the entire query.
Verify JOIN order. The engine processes joins left-to-right. Smaller result sets should be on the left when possible.
// Helper method to analyze query performance
public void AnalyzeQuery(string sql)
{
using var result = engine.Execute($"EXPLAIN {sql}");
Console.WriteLine("Execution Plan:");
while (result.Read())
{
var plan = result.CurrentRow["Plan"].AsString();
// Highlight potential issues
if (plan.Contains("Scan:") && !plan.Contains("Estimated rows: 0"))
{
Console.ForegroundColor = ConsoleColor.Yellow;
Console.WriteLine($"  ⚠ {plan}");
Console.ResetColor();
}
else
{
Console.WriteLine($"  {plan}");
}
}
}
Schema Introspection
The SQL Engine provides methods to inspect database schema programmatically, without executing SQL queries.
Getting Table Information
// Get table metadata
DefinitionTable? table = engine.GetTable("Users");
if (table != null)
{
Console.WriteLine($"Table: {table.Name}");
Console.WriteLine($"Columns: {table.Columns.Count}");
foreach (var column in table.Columns)
{
Console.WriteLine($"  {column.Name}: {column.Type}");
Console.WriteLine($"    Nullable: {column.IsNullable}");
Console.WriteLine($"    Primary Key: {column.IsPrimaryKey}");
Console.WriteLine($"    Default: {column.DefaultValue ?? "none"}");
}
}
Table Row Count
Get the row count without scanning the table:
// O(1) operation - uses cached metadata
long rowCount = engine.GetTableRowCount("Users");
Console.WriteLine($"Users table has {rowCount} rows");
// Compare with SQL COUNT (scans table)
// This is slower for large tables
var countResult = engine.ExecuteScalar("SELECT COUNT(*) FROM Users");
The GetTableRowCount method returns the cached count maintained by the engine. It's much faster than SELECT COUNT(*) but may be slightly out of date during concurrent writes.
Index Information
// Get a specific index
DefinitionIndex? index = engine.GetIndex("IX_Users_Email");
if (index != null)
{
Console.WriteLine($"Index: {index.Name}");
Console.WriteLine($"Table: {index.TableName}");
Console.WriteLine($"Unique: {index.IsUnique}");
Console.WriteLine($"Columns: {string.Join(", ", index.Columns)}");
}
// Get all indexes for a table
foreach (var idx in engine.GetTableIndexes("Orders"))
{
Console.WriteLine($"  {idx.Name} on ({string.Join(", ", idx.Columns)})");
}
Listing All Tables
// Using INFORMATION_SCHEMA (SQL standard)
var tables = engine.Query(@"
SELECT TABLE_NAME, TABLE_TYPE
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'public'");
foreach (var row in tables)
{
Console.WriteLine($"{row["TABLE_NAME"].AsString()} ({row["TABLE_TYPE"].AsString()})");
}
Column Details via INFORMATION_SCHEMA
var columns = engine.Query(@"
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE, COLUMN_DEFAULT
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Users'
ORDER BY ORDINAL_POSITION");
foreach (var col in columns)
{
var nullable = col["IS_NULLABLE"].AsString() == "YES" ? "NULL" : "NOT NULL";
Console.WriteLine($"  {col["COLUMN_NAME"].AsString()}: {col["DATA_TYPE"].AsString()} {nullable}");
}
Low-Level Table Operations
For advanced scenarios, the SQL Engine provides direct access to table data through iterators. These bypass SQL parsing entirely for maximum performance.
Table Scan Iterator
Iterate through all rows in a table:
// Create a table scan iterator
using var iterator = engine.CreateTableScan("Users");
while (iterator.MoveNext())
{
WitSqlRow row = iterator.Current;
Console.WriteLine($"User: {row["Name"].AsString()}");
}
Table scans are useful when:
- You need to process every row
- No index exists for your filter condition
- You're implementing custom filtering logic
Index Seek
Look up rows by exact index match:
// Exact match on indexed column
// Equivalent to: SELECT * FROM Users WHERE Email = 'alice@test.com'
var keyValues = new WitSqlValue[] { WitSqlValue.FromText("alice@test.com") };
using var iterator = engine.CreateIndexSeek("Users", "IX_Users_Email", keyValues);
while (iterator.MoveNext())
{
var row = iterator.Current;
Console.WriteLine($"Found: {row["Name"].AsString()}");
}
Index seek is the fastest lookup method when you know the exact key value.
Index Range Scan
Scan a range of index values:
// Range scan: Orders where Total between 100 and 500
// Equivalent to: SELECT * FROM Orders WHERE Total >= 100 AND Total <= 500
var startKey = WitSqlValue.FromDecimal(100m);
var endKey = WitSqlValue.FromDecimal(500m);
using var iterator = engine.CreateIndexRangeScan(
tableName: "Orders",
indexName: "IX_Orders_Total",
startKey: startKey,
startInclusive: true, // >= 100
endKey: endKey,
endInclusive: true); // <= 500
while (iterator.MoveNext())
{
var order = iterator.Current;
Console.WriteLine($"Order {order["Id"].AsInt64()}: ${order["Total"].AsDecimal()}");
}
Direct Row Access by ID
Fetch a single row by its internal row ID (fastest possible lookup):
// If you know the row ID (e.g., from a previous query or index lookup)
long rowId = 12345;
WitSqlRow? row = engine.GetRowById("Users", rowId);
if (row != null)
{
Console.WriteLine($"User: {row["Name"].AsString()}");
}
Row IDs are internal identifiers and may change after table modifications. Use primary keys for stable references.
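One way to follow that advice is to resolve a row through its primary-key index on each access instead of caching the internal row ID. A minimal sketch, assuming a primary-key index named `PK_Users` exists and that `WitSqlValue.FromInt64` is available (only `FromText` and `FromDecimal` appear in this document; both names are assumptions):

```csharp
// Hypothetical stable lookup: seek the primary-key index each time
// rather than storing a row ID that may change after modifications.
WitSqlRow? FindUserByPrimaryKey(WitSqlEngine engine, long userId)
{
    var key = new WitSqlValue[] { WitSqlValue.FromInt64(userId) }; // FromInt64 is assumed
    using var iterator = engine.CreateIndexSeek("Users", "PK_Users", key);
    return iterator.MoveNext() ? iterator.Current : null;
}
```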
Performance Comparison
For looking up a single user by email:
| Method | Time | Use Case |
|---|---|---|
| SELECT ... WHERE Email = @email | ~50µs | General use |
| CreateIndexSeek() | ~10µs | Hot paths, bypassing SQL |
| GetRowById() | ~2µs | When row ID is known |
The low-level methods skip SQL parsing and planning, providing significant speedup for simple operations in performance-critical code.
Practical Example: Custom Index Scan
Implement a custom filtered scan that uses an index:
public IEnumerable<WitSqlRow> GetOrdersInRange(decimal minTotal, decimal maxTotal, Func<WitSqlRow, bool> additionalFilter)
{
using var iterator = engine.CreateIndexRangeScan(
"Orders",
"IX_Orders_Total",
WitSqlValue.FromDecimal(minTotal), true,
WitSqlValue.FromDecimal(maxTotal), true);
while (iterator.MoveNext())
{
if (additionalFilter(iterator.Current))
{
yield return iterator.Current;
}
}
}
// Usage
var orders = GetOrdersInRange(100m, 500m,
row => row["Status"].AsString() == "Completed");
Query Plan Cache
The SQL Engine automatically caches parsed query plans for identical SQL strings, improving performance for repeated queries.
How Caching Works
When you execute SQL:
1. Engine checks whether this exact SQL string has been parsed before
2. If cached, it reuses the parsed AST (Abstract Syntax Tree)
3. If not cached, it parses the SQL and caches the plan for future use
This means the second execution of the same SQL skips parsing entirely:
// First call: parses SQL, caches plan
engine.Query("SELECT * FROM Users WHERE IsActive = 1");
// Second call: reuses cached plan (faster)
engine.Query("SELECT * FROM Users WHERE IsActive = 1");
Parameters and Caching
Use parameters to maximize cache hits:
// BAD: Different SQL strings, no cache benefit
engine.Query("SELECT * FROM Users WHERE Id = 1");
engine.Query("SELECT * FROM Users WHERE Id = 2");
engine.Query("SELECT * FROM Users WHERE Id = 3");
// GOOD: Same SQL string, parameters vary, cache hit every time
engine.Query("SELECT * FROM Users WHERE Id = @id", new Dictionary<string, object?> { ["id"] = 1 });
engine.Query("SELECT * FROM Users WHERE Id = @id", new Dictionary<string, object?> { ["id"] = 2 });
engine.Query("SELECT * FROM Users WHERE Id = @id", new Dictionary<string, object?> { ["id"] = 3 });
Prepared Statements vs Query Cache
Both improve performance, but differently:
| Feature | Query Plan Cache | Prepared Statement |
|---|---|---|
| Caches | Parsed AST | Parsed AST + execution context |
| Scope | Global (all queries) | Single statement instance |
| Best for | Ad-hoc repeated queries | Known hot paths, batches |
| Memory | Automatic LRU eviction | Manual disposal |
Use prepared statements for known high-frequency queries. The query cache handles the rest automatically.
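One way to apply this guidance is to hold a long-lived prepared statement for a known hot path and let ad-hoc queries ride the global plan cache. A sketch under those assumptions (the repository class and its method names are illustrative, not part of the engine API):

```csharp
// Illustrative pattern: prepared statement for the hot path,
// plan cache for everything else.
public sealed class UserLookup : IDisposable
{
    private readonly WitSqlEngine _engine;
    private readonly WitSqlEngineStatement _byId; // parsed once, reused many times

    public UserLookup(WitSqlEngine engine)
    {
        _engine = engine;
        _byId = engine.Prepare("SELECT * FROM Users WHERE Id = @id");
    }

    // High-frequency lookup: reuses the prepared statement
    public WitSqlRow? GetById(long id)
    {
        _byId.ClearParameters();
        _byId.SetParameter("id", id);
        using var result = _byId.Execute();
        return result.Read() ? result.CurrentRow : null;
    }

    // Ad-hoc query: repeated SQL strings hit the plan cache automatically
    public List<WitSqlRow> Search(string sql, IDictionary<string, object?>? parameters = null)
        => _engine.Query(sql, parameters);

    public void Dispose() => _byId.Dispose();
}
```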
Error Handling
SQL errors throw exceptions with detailed information.
Exception Types
try
{
engine.Execute("SELECT * FROM NonExistentTable");
}
catch (WitSqlException ex)
{
Console.WriteLine($"SQL Error: {ex.Message}");
// Output: SQL Error: Table 'NonExistentTable' does not exist
}
Common Exception Types
| Exception | Cause |
|---|---|
| WitSqlException | General SQL execution error (syntax, missing table, constraint violation) |
| InvalidOperationException | Invalid engine state (disposed, transaction in wrong state) |
| ArgumentException | Invalid parameters (null SQL, invalid parameter name) |
| OperationCanceledException | Query was cancelled via CancellationToken |
| TimeoutException | Query exceeded specified timeout |
Handling Specific Errors
try
{
engine.Execute("INSERT INTO Users (Email) VALUES (@email)",
new Dictionary<string, object?> { ["email"] = "duplicate@test.com" });
}
catch (WitSqlException ex) when (ex.Message.Contains("UNIQUE constraint"))
{
Console.WriteLine("Email already exists");
}
catch (WitSqlException ex) when (ex.Message.Contains("NOT NULL constraint"))
{
Console.WriteLine("Required field is missing");
}
catch (WitSqlException ex)
{
Console.WriteLine($"Database error: {ex.Message}");
}
Timeout Handling
try
{
using var result = engine.Execute(
"SELECT * FROM VeryLargeTable WHERE ComplexCondition(Column)",
parameters: null,
timeout: TimeSpan.FromSeconds(5));
// Process results...
}
catch (TimeoutException)
{
Console.WriteLine("Query took too long and was aborted");
}
Checking Results for Errors
Some operations indicate issues through results rather than exceptions:
using var result = engine.Execute("INSERT INTO Users (Email) VALUES ('duplicate@test.com')");
if (result.RowsAffected == 0)
{
// Insert might have failed due to constraint (with ON CONFLICT DO NOTHING)
Console.WriteLine("No rows inserted");
}
Resource Management
The SQL Engine and its results implement IDisposable. Proper disposal prevents resource leaks.
Disposing the Engine
// Option 1: using statement
using var engine = new WitSqlEngine(database, ownsStore: true);
// Engine disposed at end of scope
// Option 2: Manual disposal
var engine = new WitSqlEngine(database, ownsStore: true);
try
{
// Use engine...
}
finally
{
engine.Dispose();
}
Disposing Results
Always dispose query results:
// GOOD: using statement
using var result = engine.Execute("SELECT * FROM Users");
while (result.Read()) { /* ... */ }
// GOOD: Manual disposal
var result = engine.Execute("SELECT * FROM Users");
try
{
while (result.Read()) { /* ... */ }
}
finally
{
result.Dispose();
}
// Convenience methods that return collections handle disposal internally
List<WitSqlRow> rows = engine.Query("SELECT * FROM Users"); // No using needed
Disposing Prepared Statements
using var stmt = engine.Prepare("SELECT * FROM Users WHERE Id = @id");
// Statement disposed at end of scope
Disposing Iterators
Low-level iterators must be disposed:
using var iterator = engine.CreateTableScan("Users");
while (iterator.MoveNext())
{
// Process rows...
}
// Iterator disposed, resources released
Thread Safety
The SQL Engine is not thread-safe. Each thread should have its own engine instance, or you must synchronize access.
Recommended Pattern for Multi-threaded Applications
// Shared database, engine per thread
var database = new WitDatabaseBuilder()
.WithFilePath("shared.witdb")
.WithBTree()
.WithTransactions()
.Build();
Parallel.For(0, 10, i =>
{
// Each thread gets its own engine
using var engine = new WitSqlEngine(database, ownsStore: false);
engine.Execute("INSERT INTO Logs (Message) VALUES (@msg)",
new Dictionary<string, object?> { ["msg"] = $"Thread {i}" });
});
database.Dispose(); // Dispose database after all engines
Using with Dependency Injection
For web applications, create engines per request:
services.AddSingleton<WitDatabase>(sp =>
{
return new WitDatabaseBuilder()
.WithFilePath("app.witdb")
.WithBTree()
.WithTransactions()
.Build();
});
services.AddScoped<WitSqlEngine>(sp =>
{
var database = sp.GetRequiredService<WitDatabase>();
return new WitSqlEngine(database, ownsStore: false);
});
Thread-Safe Wrapper (If Needed)
For scenarios where you must share an engine:
public class ThreadSafeEngine : IDisposable
{
private readonly WitSqlEngine _engine;
private readonly Lock _lock = new();
public ThreadSafeEngine(WitDatabase database)
{
_engine = new WitSqlEngine(database, ownsStore: false);
}
public List<WitSqlRow> Query(string sql, IDictionary<string, object?>? parameters = null)
{
lock (_lock)
{
return _engine.Query(sql, parameters);
}
}
public int ExecuteNonQuery(string sql, IDictionary<string, object?>? parameters = null)
{
lock (_lock)
{
return _engine.ExecuteNonQuery(sql, parameters);
}
}
public void Dispose() => _engine.Dispose();
}
Note: This serializes all access, which may reduce throughput. Per-thread engines are usually better.
Quick Reference
Engine Creation
// File-based
var db = new WitDatabaseBuilder().WithFilePath("app.witdb").WithBTree().WithTransactions().Build();
var engine = new WitSqlEngine(db, ownsStore: true);
// In-memory
var db = WitDatabase.CreateInMemory();
var engine = new WitSqlEngine(db, ownsStore: true);
Query Methods
| Method | Returns | Use Case |
|---|---|---|
| Execute(sql, params?, timeout?, token?) | WitSqlResult | Any SQL, full control |
| Query(sql, params?) | List<WitSqlRow> | SELECT, get all rows |
| QueryFirstOrDefault(sql, params?) | WitSqlRow? | SELECT, first row or null |
| ExecuteScalar(sql, params?) | WitSqlValue | SELECT, single value |
| ExecuteNonQuery(sql, params?) | int | INSERT/UPDATE/DELETE, row count |
| Prepare(sql) | WitSqlEngineStatement | Reusable statement |
Schema Introspection
| Method | Returns | Description |
|---|---|---|
| GetTable(name) | DefinitionTable? | Table metadata |
| GetTableRowCount(name) | long | Cached row count (O(1)) |
| GetIndex(name) | DefinitionIndex? | Index metadata |
| GetTableIndexes(tableName) | IEnumerable<DefinitionIndex> | All indexes for table |
Low-Level Access
| Method | Returns | Description |
|---|---|---|
| CreateTableScan(table) | IResultIterator | Full table scan |
| CreateIndexSeek(table, index, keys) | IResultIterator | Exact index match |
| CreateIndexRangeScan(table, index, start, end) | IResultIterator | Index range scan |
| GetRowById(table, rowId) | WitSqlRow? | Direct row fetch |
Prepared Statement Methods
| Method | Description |
|---|---|
| SetParameter(name, value) | Set single parameter |
| SetParameters(dict) | Set multiple parameters |
| ClearParameters() | Clear all parameters |
| Execute(token?) | Execute with current parameters |
| Execute(params, token?) | Execute with provided parameters |
| ExecuteBatch(paramSets, token?) | Execute multiple times |
Transaction SQL
BEGIN TRANSACTION
COMMIT
ROLLBACK
SAVEPOINT name
ROLLBACK TO name
RELEASE SAVEPOINT name
SET TRANSACTION ISOLATION LEVEL level
Type Conversions
row["Column"].AsInt64() // long
row["Column"].AsInt64OrNull() // long?
row["Column"].AsDouble() // double
row["Column"].AsDecimal() // decimal
row["Column"].AsString() // string
row["Column"].AsStringOrNull() // string?
row["Column"].AsBoolean() // bool
row["Column"].AsDateTime() // DateTime
row["Column"].AsGuid() // Guid
row["Column"].AsBytes() // byte[]
row["Column"].IsNull // bool
EXPLAIN Output
EXPLAIN SELECT * FROM Orders WHERE CustomerId = 123