The Core API provides direct key-value access to WitDatabase's storage layer. It offers maximum performance for simple data patterns where SQL overhead isn't justified, and serves as the foundation upon which all higher-level APIs are built.


Overview

What is the Core API?

At its heart, WitDatabase is a key-value store. Every table row, index entry, and schema definition is ultimately stored as bytes indexed by a key. The Core API exposes this fundamental interface directly.

While SQL and EF Core provide rich query capabilities, they add overhead: parsing, planning, result materialization. For certain workloads, this overhead dominates actual data access time. The Core API eliminates these layers entirely.

SQL Path:      Your Code → EF Core → ADO.NET → SQL Engine → Core API → Storage
Direct Path:   Your Code → Core API → Storage

When to Use the Core API

The Core API excels in specific scenarios:

Caching and session storage. Store serialized objects by key. No schema, no queries — just fast get/put operations. Perfect for user sessions, API response caches, or computed results.

High-throughput key-value workloads. When you're doing millions of simple lookups per second, eliminating SQL parsing overhead matters. The Core API provides sub-microsecond access times.

Custom data structures. Build your own indexing schemes, time-series storage, or graph structures on top of the raw key-value interface.

Blazor WebAssembly. The Core API works in browser environments where ADO.NET and EF Core don't. Combined with IndexedDB storage, it enables rich client-side data persistence.

Embedded scenarios. When WitDatabase is a component inside a larger system that manages its own data model.
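The caching scenario boils down to a get-or-compute loop. A minimal sketch, assuming the string-key Get/Put overloads shown later in this chapter — the ResponseCache helper and its JSON encoding are illustrative, not part of WitDatabase:

```csharp
using System;
using System.Text.Json;

public static class ResponseCache
{
    // Serve repeat requests from the store; compute and cache on a miss.
    public static T GetOrCompute<T>(WitDatabase db, string key, Func<T> compute)
    {
        byte[]? cached = db.Get(key);               // fast point lookup
        if (cached != null)
            return JsonSerializer.Deserialize<T>(cached)!;

        T result = compute();                       // cache miss: compute once
        db.Put(key, JsonSerializer.SerializeToUtf8Bytes(result));
        return result;
    }
}
```

No schema or query layer is involved — the cache key is the whole access path.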

When to Use Higher-Level APIs

Choose SQL or EF Core when you need:

  • Queries with filtering, sorting, joining
  • Relational data with foreign keys
  • Schema evolution and migrations
  • Complex aggregations and reporting

The Core API stores bytes — it has no concept of types, columns, or relationships. If your access patterns go beyond "get value by key" and "scan range of keys," the SQL layer is worth its overhead.

Architecture

The Core API consists of two main components:

[[Svg Src="./witdatabase-core-api-layers.svg" Alt="witdatabase-core-api-layers"]]

WitDatabase is the high-level wrapper you typically interact with. It provides convenient factory methods, manages indexes and transactions, and wraps the raw storage interface.

IKeyValueStore is the underlying storage interface. Different implementations (B+Tree, LSM-Tree, In-Memory) provide the same interface with different performance characteristics.


Getting Started

Creating a Database

Use the builder pattern for full control:

using OutWit.Database.Core.Builder;

// File-based with B+Tree (best for general use)
using var database = new WitDatabaseBuilder()
    .WithFilePath("myapp.witdb")
    .WithBTree()
    .WithTransactions()
    .Build();

For common scenarios, use factory methods:

// In-memory database (fastest, data lost on disposal)
using var database = WitDatabase.CreateInMemory();

// In-memory with encryption
using var database = WitDatabase.CreateInMemory("my-secret-password");

Async Creation

For environments where synchronous I/O is restricted (like Blazor WebAssembly with IndexedDB):

// Async factory method
var database = await WitDatabase.CreateAsync(store, indexManager);

Basic Operations

The fundamental operations are get, put, and delete:

// Keys and values are byte arrays
byte[] key = "user:123"u8.ToArray();
byte[] value = Encoding.UTF8.GetBytes("{\"name\":\"Alice\"}");

// Store a value
database.Put(key, value);

// Retrieve a value
byte[]? retrieved = database.Get(key);
if (retrieved != null)
{
    string json = Encoding.UTF8.GetString(retrieved);
    Console.WriteLine(json);  // {"name":"Alice"}
}

// Delete a value
bool deleted = database.Delete(key);
Console.WriteLine($"Deleted: {deleted}");  // true

// Get returns null for missing keys
byte[]? missing = database.Get("nonexistent"u8.ToArray());
Console.WriteLine(missing == null);  // true

String Key Convenience

For string keys (the common case), use the overloads that accept strings:

// String keys are automatically UTF-8 encoded
database.Put("user:123", jsonBytes);
byte[]? value = database.Get("user:123");
bool deleted = database.Delete("user:123");

Span-Based Operations

For zero-allocation hot paths, use ReadOnlySpan<byte>:

// Avoid allocations in tight loops
ReadOnlySpan<byte> key = "user:123"u8;
database.Put(key, valueSpan);
byte[]? value = database.Get(key);

Key-Value Operations

Put (Insert or Update)

Put stores a value at a key, creating or replacing as needed:

// Insert new key
database.Put("config:theme", "dark"u8.ToArray());

// Update existing key (same operation)
database.Put("config:theme", "light"u8.ToArray());

Put is idempotent — calling it multiple times with the same key and value has the same effect as calling it once.

Get (Retrieve)

Get retrieves a value by key:

byte[]? value = database.Get("config:theme");

if (value != null)
{
    Console.WriteLine(Encoding.UTF8.GetString(value));
}
else
{
    Console.WriteLine("Key not found");
}

Get returns null for missing keys — it never throws for a non-existent key.

Delete (Remove)

Delete removes a key and returns whether it existed:

bool existed = database.Delete("config:theme");
Console.WriteLine($"Key existed: {existed}");

// Deleting a non-existent key returns false
bool notFound = database.Delete("nonexistent");
Console.WriteLine($"Found: {notFound}");  // false

Scan (Range Query)

Scan iterates over keys in sorted order:

// Scan all keys
foreach (var (key, value) in database.Scan())
{
    Console.WriteLine(Encoding.UTF8.GetString(key));
}

// Scan keys in a range
byte[] startKey = "user:"u8.ToArray();
byte[] endKey = "user:~"u8.ToArray();  // ~ is after all alphanumeric chars

foreach (var (key, value) in database.Scan(startKey, endKey))
{
    Console.WriteLine(Encoding.UTF8.GetString(key));
}

Keys are compared lexicographically by their byte values. Design your key scheme to take advantage of this for efficient range scans.
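A quick way to check how two keys will order in a scan is `MemoryExtensions.SequenceCompareTo`, which performs the same byte-wise comparison. A self-contained sketch, no WitDatabase API required:

```csharp
using System;
using System.Text;

class LexOrderDemo
{
    static void Main()
    {
        // Keys compare byte-by-byte, matching the store's scan order.
        byte[] a = Encoding.UTF8.GetBytes("user:123");
        byte[] b = Encoding.UTF8.GetBytes("user:124");
        byte[] c = Encoding.UTF8.GetBytes("user:~");  // '~' (0x7E) sorts after digits and letters

        Console.WriteLine(a.AsSpan().SequenceCompareTo(b) < 0);  // True
        Console.WriteLine(b.AsSpan().SequenceCompareTo(c) < 0);  // True
    }
}
```

This is why "user:~" works as an end key for prefix scans over alphanumeric suffixes.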

Flush (Force Write to Disk)

Ensure all pending writes are persisted:

// Write some data
database.Put("important:data", criticalBytes);

// Force immediate write to disk
database.Store.Flush();

// Now data is guaranteed to be on disk

By default, writes may be buffered for performance. Call Flush() when you need durability guarantees — for example, after completing a batch of related writes.


Key Design Patterns

How you structure keys significantly impacts performance and usability.

Hierarchical Keys

Use delimiters to create key hierarchies:

// Pattern: entity:id or entity:id:attribute
"user:123"              // User record
"user:123:profile"      // User's profile
"user:123:settings"     // User's settings
"order:456"             // Order record  
"order:456:items"       // Order's line items

This enables efficient prefix scans:

// Get all data for user 123
var startKey = "user:123:"u8.ToArray();
var endKey = "user:123:~"u8.ToArray();

foreach (var (key, value) in database.Scan(startKey, endKey))
{
    // Processes user:123:profile, user:123:settings, etc.
}

Composite Keys

Embed multiple values in keys for compound lookups:

// Pattern: entity:field1:field2
"user-by-email:alice@test.com"        // Secondary index
"order-by-customer:123:2024-01-15"    // Customer + date
"log:2024-01-15:error:001"            // Date + level + sequence

Numeric Keys with Padding

For numeric IDs that should sort correctly, use zero-padding:

// BAD: "user:9" sorts after "user:10" lexicographically
"user:1", "user:10", "user:2", "user:9"  // Wrong order!

// GOOD: Zero-padded numbers sort correctly
"user:0001", "user:0002", "user:0009", "user:0010"  // Correct order

// Helper method
string PaddedKey(string prefix, int id, int width = 10) 
    => $"{prefix}:{id.ToString().PadLeft(width, '0')}";

Time-Series Keys

For time-ordered data, put timestamps first:

// Pattern: prefix:timestamp:id
string timestamp = DateTime.UtcNow.ToString("yyyy-MM-dd-HH-mm-ss-fff");
string key = $"event:{timestamp}:{eventId}";

This makes time-range scans efficient:

// Events from today
var startKey = Encoding.UTF8.GetBytes($"event:{DateTime.Today:yyyy-MM-dd}");
var endKey = Encoding.UTF8.GetBytes($"event:{DateTime.Today.AddDays(1):yyyy-MM-dd}");

foreach (var (key, value) in database.Scan(startKey, endKey))
{
    // Process today's events in chronological order
}

Reverse Chronological Order

For "most recent first" access, use inverted timestamps:

// Invert timestamp so newer entries sort first
long invertedTicks = long.MaxValue - DateTime.UtcNow.Ticks;
string key = $"feed:{userId}:{invertedTicks:D19}";  // fixed width so string order matches numeric order

// Scan returns newest first
var startKey = Encoding.UTF8.GetBytes($"feed:{userId}:");
var endKey = Encoding.UTF8.GetBytes($"feed:{userId}:~");

foreach (var (entryKey, value) in database.Scan(startKey, endKey))
{
    // Most recent items come first
}

Secondary Indexes

Create secondary indexes by duplicating data with different key structures:

// Primary record
database.Put($"user:{userId}", userData);

// Secondary index: look up user by email
database.Put($"user-by-email:{email}", BitConverter.GetBytes(userId));

// Lookup by email
byte[]? userIdBytes = database.Get($"user-by-email:{email}");
if (userIdBytes != null)
{
    int foundId = BitConverter.ToInt32(userIdBytes);
    byte[]? foundData = database.Get($"user:{foundId}");
}

Remember to update secondary indexes when the primary data changes:

public void UpdateUserEmail(int userId, string oldEmail, string newEmail)
{
    using var tx = database.BeginTransaction();
    
    // Update primary record
    var user = GetUser(userId);
    user.Email = newEmail;
    database.Put($"user:{userId}", Serialize(user));
    
    // Update secondary index
    database.Delete($"user-by-email:{oldEmail}");
    database.Put($"user-by-email:{newEmail}", BitConverter.GetBytes(userId));
    
    tx.Commit();
}

Key Design Best Practices

Practice                            Reason
Use consistent delimiters (: or /)  Predictable parsing and scanning
Put high-cardinality fields first   Better distribution, efficient scans
Avoid special characters in keys    Prevents encoding issues
Keep keys short                     Reduces storage and comparison overhead
Use fixed-width numeric encoding    Ensures correct sort order
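The fixed-width advice extends to binary keys: big-endian encoding keeps numeric order and byte order in agreement, unlike BitConverter.GetBytes, which is little-endian on most platforms. A self-contained sketch:

```csharp
using System;
using System.Buffers.Binary;

class BigEndianKeyDemo
{
    static void Main()
    {
        // Big-endian bytes preserve numeric order under byte-wise comparison.
        byte[] k9 = new byte[4];
        byte[] k10 = new byte[4];
        BinaryPrimitives.WriteUInt32BigEndian(k9, 9);    // 00 00 00 09
        BinaryPrimitives.WriteUInt32BigEndian(k10, 10);  // 00 00 00 0A

        Console.WriteLine(k9.AsSpan().SequenceCompareTo(k10) < 0);  // True
    }
}
```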

Serialization

The Core API works with bytes. You choose how to serialize your data.

JSON Serialization

Simple and human-readable:

using System.Text.Json;

public static class JsonStore
{
    public static void Put<T>(WitDatabase db, string key, T value)
    {
        var bytes = JsonSerializer.SerializeToUtf8Bytes(value);
        db.Put(key, bytes);
    }
    
    public static T? Get<T>(WitDatabase db, string key)
    {
        var bytes = db.Get(key);
        return bytes == null ? default : JsonSerializer.Deserialize<T>(bytes);
    }
}

// Usage
var user = new User { Name = "Alice", Email = "alice@test.com" };
JsonStore.Put(database, $"user:{user.Id}", user);
var retrieved = JsonStore.Get<User>(database, $"user:{user.Id}");

MessagePack (Binary)

Faster and more compact:

using MessagePack;

[MessagePackObject]
public class User
{
    [Key(0)] public int Id { get; set; }
    [Key(1)] public string Name { get; set; }
    [Key(2)] public string Email { get; set; }
}

// Serialize
byte[] bytes = MessagePackSerializer.Serialize(user);
database.Put($"user:{user.Id}", bytes);

// Deserialize
byte[]? data = database.Get($"user:{user.Id}");
var loaded = MessagePackSerializer.Deserialize<User>(data);

Protocol Buffers

For maximum performance and schema evolution:

// Requires protobuf-net or Google.Protobuf
using ProtoBuf;

[ProtoContract]
public class User
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
    [ProtoMember(3)] public string Email { get; set; }
}

using var stream = new MemoryStream();
Serializer.Serialize(stream, user);
database.Put($"user:{user.Id}", stream.ToArray());

Raw Binary Encoding

For simple types or maximum control:

// Store an integer
int value = 42;
database.Put("counter", BitConverter.GetBytes(value));

// Retrieve
byte[]? data = database.Get("counter");
int retrieved = data != null ? BitConverter.ToInt32(data) : 0;

// Store multiple values with BinaryWriter
using var ms = new MemoryStream();
using var writer = new BinaryWriter(ms);
writer.Write(user.Id);
writer.Write(user.Name);
writer.Write(user.Email);
writer.Write(user.CreatedAt.ToBinary());
database.Put($"user:{user.Id}", ms.ToArray());

Choosing a Format

Format       Speed    Size      Schema Evolution   Human Readable
JSON         Slow     Large     Good (add fields)  Yes
MessagePack  Fast     Small     Good               No
Protobuf     Fastest  Smallest  Excellent          No
Raw Binary   Fastest  Smallest  Manual             No

For most applications, JSON is fine. Switch to binary formats when profiling shows serialization is a bottleneck.
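To see the size side of the trade-off without external packages, compare JSON against the raw BinaryWriter encoding from the previous section — the User record here is illustrative:

```csharp
using System;
using System.IO;
using System.Text.Json;

class SizeDemo
{
    record User(int Id, string Name, string Email);

    static void Main()
    {
        var user = new User(123, "Alice", "alice@test.com");

        // JSON is self-describing: field names are stored in every value
        byte[] json = JsonSerializer.SerializeToUtf8Bytes(user);

        // Raw binary stores only the field contents
        using var ms = new MemoryStream();
        using (var w = new BinaryWriter(ms))
        {
            w.Write(user.Id);
            w.Write(user.Name);
            w.Write(user.Email);
        }
        byte[] raw = ms.ToArray();

        Console.WriteLine(raw.Length < json.Length);  // True: binary is smaller
    }
}
```

The gap widens with more fields and longer property names; speed differences need actual profiling.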

Generic Repository Pattern

A reusable pattern for typed access:

public class KeyValueRepository<T> where T : class
{
    private readonly WitDatabase _db;
    private readonly string _prefix;
    private readonly JsonSerializerOptions _options;
    
    public KeyValueRepository(WitDatabase db, string prefix)
    {
        _db = db;
        _prefix = prefix;
        _options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
    }
    
    public void Set(string id, T value)
    {
        var key = $"{_prefix}:{id}";
        var bytes = JsonSerializer.SerializeToUtf8Bytes(value, _options);
        _db.Put(key, bytes);
    }
    
    public T? Get(string id)
    {
        var key = $"{_prefix}:{id}";
        var bytes = _db.Get(key);
        return bytes == null ? null : JsonSerializer.Deserialize<T>(bytes, _options);
    }
    
    public bool Delete(string id)
    {
        return _db.Delete($"{_prefix}:{id}");
    }
    
    public IEnumerable<T> GetAll()
    {
        var start = Encoding.UTF8.GetBytes($"{_prefix}:");
        var end = Encoding.UTF8.GetBytes($"{_prefix}:~");
        
        foreach (var (_, value) in _db.Scan(start, end))
        {
            var item = JsonSerializer.Deserialize<T>(value, _options);
            if (item != null)
                yield return item;
        }
    }
}

// Usage
var users = new KeyValueRepository<User>(database, "user");
users.Set("123", new User { Name = "Alice" });
var user = users.Get("123");

Transactions

The Core API supports ACID transactions when enabled.

Enabling Transactions

Transactions must be enabled when building the database:

var database = new WitDatabaseBuilder()
    .WithFilePath("app.witdb")
    .WithBTree()
    .WithTransactions()  // Required!
    .Build();

Basic Transaction Pattern

using var transaction = database.BeginTransaction();

try
{
    database.Put("account:1", newBalance1);
    database.Put("account:2", newBalance2);
    
    transaction.Commit();
}
catch
{
    transaction.Rollback();
    throw;
}

Isolation Levels

Specify isolation level when beginning a transaction:

using var tx = database.BeginTransaction(WitIsolationLevel.Serializable);

// Operations run with serializable isolation
database.Put("key", value);

tx.Commit();

Available isolation levels:

Level           Description
ReadCommitted   Only sees committed data (default)
RepeatableRead  Same reads return same data
Serializable    Full isolation
Snapshot        Consistent snapshot of data

Transaction Isolation in Practice

Transactions see a consistent snapshot of data:

using var tx = database.BeginTransaction();

// Read value
byte[]? value = database.Get("counter");
int count = value == null ? 0 : BitConverter.ToInt32(value);

// Another thread might modify "counter" here
// But our transaction sees the original value

// Increment and save
database.Put("counter", BitConverter.GetBytes(count + 1));

tx.Commit();

Atomic Multi-Key Operations

Multiple operations within a transaction are atomic:

using var tx = database.BeginTransaction();

// All succeed or all fail
database.Put("user:123", userData);
database.Put("user-by-email:alice@test.com", userIdBytes);
database.Put("user-count", BitConverter.GetBytes(newCount));

tx.Commit();  // Atomic commit of all three

Transaction Timeout

Avoid holding transactions open too long:

// BAD: Long-running transaction blocks other writers
using var tx = database.BeginTransaction();
database.Put("key1", value1);
await LongRunningOperationAsync();  // Don't do this!
database.Put("key2", value2);
tx.Commit();

// GOOD: Prepare data first, then quick transaction
var data1 = await PrepareData1Async();
var data2 = await PrepareData2Async();

using var tx = database.BeginTransaction();
database.Put("key1", data1);
database.Put("key2", data2);
tx.Commit();  // Quick commit

Bulk Operations

For high-throughput scenarios, bulk operations reduce overhead.

BulkPut

Insert or update many keys efficiently:

var items = new List<(byte[] Key, byte[] Value)>
{
    ("key1"u8.ToArray(), "value1"u8.ToArray()),
    ("key2"u8.ToArray(), "value2"u8.ToArray()),
    ("key3"u8.ToArray(), "value3"u8.ToArray()),
};

int count = database.Store.BulkPut(items);
Console.WriteLine($"Stored {count} items");

BulkDelete

Remove many keys efficiently:

var keys = new[]
{
    "key1"u8.ToArray(),
    "key2"u8.ToArray(),
    "key3"u8.ToArray(),
};

int deleted = database.Store.BulkDelete(keys);
Console.WriteLine($"Deleted {deleted} keys");

BulkPut with Flush

Ensure data is persisted immediately:

// Insert and flush in one operation
int count = database.Store.BulkPutAndFlush(items, flushAfter: true);

Streaming Insert

For very large datasets, stream data with periodic flushes:

var items = GenerateMillionsOfItems();

int total = database.Store.StreamingPut(
    items,
    batchSize: 10000,
    progress: count => Console.WriteLine($"Inserted {count} items"));

This prevents memory exhaustion and provides progress feedback.

Async Bulk Operations

All bulk operations have async variants:

// Async bulk put
int count = await database.Store.BulkPutAsync(items, cancellationToken);

// Async bulk delete
int deleted = await database.Store.BulkDeleteAsync(keys, cancellationToken);

// Async streaming with progress
int total = await database.Store.StreamingPutAsync(
    items,
    batchSize: 10000,
    progress: count => Console.WriteLine($"Inserted {count} items"),
    cancellationToken: cts.Token);

Performance Comparison

For inserting 100,000 key-value pairs:

Method                            Time   Notes
Individual Put (no flush)         ~2s    Buffered writes
Individual Put (with flush each)  ~30s   Disk sync per write
BulkPut                           ~0.5s  Batched internal processing
StreamingPut (batch 10K)          ~0.6s  Memory-efficient for huge datasets

Async Operations

All Core API operations have async variants for non-blocking I/O.

Basic Async Operations

// Async get
byte[]? value = await database.GetAsync(key);

// Async put
await database.PutAsync(key, value);

// Async delete
bool deleted = await database.DeleteAsync(key);

Async Scan

Use await foreach for async enumeration:

await foreach (var (key, value) in database.ScanAsync(startKey, endKey))
{
    // Process each item without blocking
    await ProcessItemAsync(key, value);
}

Async with Cancellation

All async methods support cancellation:

var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

try
{
    byte[]? value = await database.GetAsync(key, cts.Token);
    await database.PutAsync(key, newValue, cts.Token);
}
catch (OperationCanceledException)
{
    Console.WriteLine("Operation was cancelled");
}

When to Use Async

Use async methods in:

  • Web applications (ASP.NET Core) — Don't block request threads
  • UI applications — Keep UI responsive
  • High-concurrency scenarios — Better thread pool utilization
  • Blazor WebAssembly — Required for IndexedDB storage

For simple console apps or background services with dedicated threads, sync methods are fine and slightly faster.

Async Transaction Pattern

using var tx = database.BeginTransaction();

try
{
    var value1 = await database.GetAsync("key1");
    var value2 = await database.GetAsync("key2");
    
    // Process...
    
    await database.PutAsync("result", processedValue);
    
    tx.Commit();  // Commit is synchronous
}
catch
{
    tx.Rollback();
    throw;
}

Storage Engines

WitDatabase supports multiple storage engines with different characteristics.

B+Tree (Default)

Balanced tree structure, excellent for general use:

var database = new WitDatabaseBuilder()
    .WithFilePath("app.witdb")
    .WithBTree()
    .Build();

Characteristics:

  • Consistent read and write performance
  • Good for mixed workloads
  • Efficient range scans
  • Single file storage
  • In-place updates

Best for: General-purpose applications, read-heavy workloads, applications needing predictable performance.

LSM-Tree

Log-Structured Merge-Tree, optimized for writes:

var database = new WitDatabaseBuilder()
    .WithFilePath("app-lsm/")  // Directory, not file
    .WithLsmTree()
    .Build();

Characteristics:

  • Very fast writes (append-only)
  • Background compaction
  • Higher read amplification
  • Directory-based storage
  • Better write throughput

Best for: Write-heavy workloads, logging, event sourcing, time-series data.

LSM-Tree Compaction

LSM-Tree uses background compaction to maintain read performance:

var database = new WitDatabaseBuilder()
    .WithFilePath("app-lsm/")
    .WithLsmTree()
    .Build();

// Compaction happens automatically in the background
// You can monitor compaction status if needed

During compaction:

  • Multiple sorted files are merged into larger files
  • Deleted keys are physically removed
  • Read performance is restored

Compaction runs automatically. For write-heavy workloads, ensure sufficient disk space for temporary files during compaction.

In-Memory

RAM-only storage for maximum speed:

var database = new WitDatabaseBuilder()
    .WithMemoryStorage()
    .WithBTree()  // or WithLsmTree()
    .Build();

Characteristics:

  • Fastest possible operations
  • Data lost on disposal
  • Limited by available RAM
  • No disk I/O

Best for: Testing, caching, temporary data, session storage.

Choosing an Engine

Workload                    Recommended Engine
General purpose             B+Tree
Write-heavy (logs, events)  LSM-Tree
Testing, caching            In-Memory
Read-heavy, random access   B+Tree
Sequential writes           LSM-Tree
Limited disk space          B+Tree
SSD storage                 Either (LSM-Tree benefits less)

Cache Configuration

Configure the in-memory cache for optimal performance.

Setting Cache Size

var database = new WitDatabaseBuilder()
    .WithFilePath("app.witdb")
    .WithBTree()
    .WithCacheSize(128 * 1024 * 1024)  // 128 MB cache
    .Build();

Cache Size Guidelines

Working Set    Recommended Cache
< 100 MB       Match working set
100 MB - 1 GB  50-100% of working set
1 GB - 10 GB   10-25% of working set
> 10 GB        Fixed size based on available RAM

For best performance, the cache should hold your "hot" data — the keys you access most frequently.
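The guideline table can be folded into a small sizing helper; the bands come from the table, while the exact fractions and the 512 MB cap are illustrative choices:

```csharp
using System;

class CacheSizing
{
    // Suggest a cache size from an estimated working set,
    // following the guideline bands above.
    static long SuggestCacheBytes(long workingSetBytes, long maxBytes = 512L << 20)
    {
        const long MB = 1L << 20;
        const long GB = 1L << 30;

        long suggested = workingSetBytes switch
        {
            < 100 * MB => workingSetBytes,          // match working set
            < 1 * GB   => workingSetBytes * 3 / 4,  // ~75% of working set
            < 10 * GB  => workingSetBytes / 5,      // ~20% of working set
            _          => maxBytes                  // fixed cap for huge sets
        };
        return Math.Min(suggested, maxBytes);
    }

    static void Main()
    {
        Console.WriteLine(SuggestCacheBytes(50L << 20) == 50L << 20);  // True: small set cached whole
    }
}
```

The result would feed straight into .WithCacheSize(...) on the builder.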

Monitoring Cache Effectiveness

If you notice slow reads despite sufficient cache:

  • Check if your access pattern is cache-friendly (repeated access to same keys)
  • Consider increasing cache size
  • Review key design (scattered keys reduce cache hits)

Encryption

Protect data at rest with transparent encryption.

Enabling Encryption

var database = new WitDatabaseBuilder()
    .WithFilePath("secure.witdb")
    .WithBTree()
    .WithEncryption("my-secret-password")  // AES-GCM by default
    .Build();

Encryption Algorithms

// AES-GCM (default, fastest on modern CPUs with AES-NI)
.WithAesGcmEncryption("password")

// ChaCha20-Poly1305 (better for ARM, WASM, older CPUs)
.WithChaCha20Encryption("password")

Choosing an algorithm:

Algorithm  Best For              Hardware Acceleration
AES-GCM    x86/x64 with AES-NI   Yes (Intel/AMD)
ChaCha20   ARM, WASM, older x86  No (fast in software)

User-Based Key Derivation

Derive encryption keys from user credentials:

.WithEncryption("password", "username")  // Salt includes username

This ensures the same password produces different keys for different users, preventing rainbow table attacks.
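WitDatabase's internal derivation isn't shown here, but the idea can be illustrated with the standard library's PBKDF2 (Rfc2898DeriveBytes.Pbkdf2, .NET 6+); the salt scheme and iteration count are assumptions:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class KeyDerivationDemo
{
    // Mixing the username into the salt makes the same password yield
    // different keys per user (illustration only, not WitDatabase's internals).
    static byte[] DeriveKey(string password, string username)
    {
        byte[] salt = SHA256.HashData(Encoding.UTF8.GetBytes("app-salt:" + username));
        return Rfc2898DeriveBytes.Pbkdf2(
            password, salt, iterations: 100_000, HashAlgorithmName.SHA256, outputLength: 32);
    }

    static void Main()
    {
        byte[] a = DeriveKey("hunter2", "alice");
        byte[] b = DeriveKey("hunter2", "bob");
        Console.WriteLine(!a.AsSpan().SequenceEqual(b));  // True: same password, different keys
    }
}
```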

Fast Key Derivation (WASM)

For Blazor WebAssembly where PBKDF2 is slow:

.WithEncryption("password")
.WithFastKeyDerivation()  // Fewer PBKDF2 iterations

Note: This reduces security. Use only when performance in WASM is critical.

Opening Encrypted Databases

// Must provide same password to open
var database = new WitDatabaseBuilder()
    .WithFilePath("secure.witdb")
    .WithBTree()
    .WithEncryption("my-secret-password")  // Same password
    .Build();

// Wrong password throws exception
try
{
    var db = new WitDatabaseBuilder()
        .WithFilePath("secure.witdb")
        .WithEncryption("wrong-password")
        .Build();
}
catch (CryptographicException)
{
    Console.WriteLine("Invalid password");
}

Thread Safety

The Core API has specific thread safety guarantees.

WitDatabase Thread Safety

WitDatabase is thread-safe for reads but requires synchronization for writes:

// SAFE: Concurrent reads
Parallel.For(0, 100, i =>
{
    var value = database.Get($"key:{i}");  // OK
});

// UNSAFE: Concurrent writes without synchronization
Parallel.For(0, 100, i =>
{
    database.Put($"key:{i}", value);  // Race condition!
});

Thread-Safe Write Pattern

Use locking for concurrent writes:

private readonly Lock _writeLock = new();

public void SafePut(string key, byte[] value)
{
    lock (_writeLock)
    {
        database.Put(key, value);
    }
}

Or use one database per thread:

// Each thread opens its own connection
Parallel.For(0, 10, i =>
{
    using var db = new WitDatabaseBuilder()
        .WithFilePath("shared.witdb")
        .WithBTree()
        .Build();
    
    db.Put($"key:{i}", value);
});

Transaction Thread Safety

Transactions are bound to the thread that created them:

// WRONG: Using transaction across threads
var tx = database.BeginTransaction();
await Task.Run(() => database.Put("key", value));  // May fail!
tx.Commit();

// CORRECT: Keep transaction on same thread
var tx = database.BeginTransaction();
database.Put("key", value);
tx.Commit();

For web applications:

// Scoped database per request
services.AddScoped<WitDatabase>(sp =>
{
    return new WitDatabaseBuilder()
        .WithFilePath("app.witdb")
        .WithBTree()
        .Build();
});

For background services:

// Single database with write lock
public class DataService
{
    private readonly WitDatabase _db;
    private readonly Lock _lock = new();
    
    public byte[]? Get(string key) => _db.Get(key);
    
    public void Put(string key, byte[] value)
    {
        lock (_lock) { _db.Put(key, value); }
    }
}

Error Handling

Understanding and handling errors properly.

Common Exceptions

Exception                  Cause                           Recovery
IOException                File access issues              Check permissions, disk space
CryptographicException     Wrong encryption password       Prompt for correct password
InvalidOperationException  Invalid state (e.g., disposed)  Check object lifecycle
ObjectDisposedException    Using disposed database         Ensure proper scoping

File Access Errors

try
{
    var database = new WitDatabaseBuilder()
        .WithFilePath("/protected/path/app.witdb")
        .Build();
}
catch (UnauthorizedAccessException)
{
    Console.WriteLine("No permission to access database file");
}
catch (IOException ex)
{
    Console.WriteLine($"I/O error: {ex.Message}");
}

Transaction Errors

using var tx = database.BeginTransaction();

try
{
    database.Put("key", value);
    tx.Commit();
}
catch (Exception ex)
{
    // Transaction automatically rolls back on dispose if not committed
    // But explicit rollback is clearer
    tx.Rollback();
    
    _logger.LogError(ex, "Transaction failed");
    throw;
}

Defensive Coding

public byte[]? SafeGet(string key)
{
    try
    {
        return _database.Get(key);
    }
    catch (ObjectDisposedException)
    {
        _logger.LogWarning("Database was disposed");
        return null;
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "Error reading key {Key}", key);
        return null;
    }
}

Database Information

Inspect databases without fully opening them.

Getting Database Info

var info = WitDatabase.GetDatabaseInfo("app.witdb");

Console.WriteLine($"Store Type: {info.StoreType}");
Console.WriteLine($"Is Encrypted: {info.IsEncrypted}");
Console.WriteLine($"Encryption Type: {info.EncryptionType}");

Use Cases

Determining if password is needed:

var info = WitDatabase.GetDatabaseInfo(path);

if (info.IsEncrypted)
{
    var password = PromptForPassword();
    database = OpenEncrypted(path, password);
}
else
{
    database = Open(path);
}

Validating database before opening:

var info = WitDatabase.GetDatabaseInfo(path);

if (info.StoreType == StoreType.Unknown)
{
    throw new InvalidDataException("Not a valid WitDatabase file");
}

Migration detection:

var info = WitDatabase.GetDatabaseInfo(path);

if (info.StoreType == StoreType.BTree && needsLsmMigration)
{
    MigrateToLsm(path);
}

Performance Tips

Optimize your Core API usage for best performance.

Key Design

  • Keep keys short — Every byte adds comparison overhead
  • Use consistent prefixes — Enables efficient range scans
  • Avoid highly variable key lengths — Complicates buffer management

Batch Operations

// BAD: Individual puts
foreach (var item in items)
{
    database.Put(item.Key, item.Value);
}

// GOOD: Bulk put
database.Store.BulkPut(items.Select(i => (i.Key, i.Value)));

Transaction Batching

// BAD: Transaction per item
foreach (var item in items)
{
    using var tx = database.BeginTransaction();
    database.Put(item.Key, item.Value);
    tx.Commit();  // Disk sync each time!
}

// GOOD: Single transaction for batch
using var tx = database.BeginTransaction();
foreach (var item in items)
{
    database.Put(item.Key, item.Value);
}
tx.Commit();  // Single disk sync

Flush Strategy

// Frequent flushes = better durability, slower performance
database.Put("key1", value1);
database.Store.Flush();  // Disk sync
database.Put("key2", value2);
database.Store.Flush();  // Disk sync again

// Batch then flush = better performance
database.Put("key1", value1);
database.Put("key2", value2);
database.Put("key3", value3);
database.Store.Flush();  // Single disk sync for all

Memory Management

// Avoid allocations in hot paths
// BAD: Creates new string each time
for (int i = 0; i < 1000000; i++)
{
    database.Get($"key:{i}");  // String allocation + encoding
}

// GOOD: Reuse buffers
var keyBuffer = new byte[20];
for (int i = 0; i < 1000000; i++)
{
    var keyLength = FormatKey(keyBuffer, i);  // helper that writes the key bytes into the buffer
    database.Get(keyBuffer.AsSpan(0, keyLength));
}

Performance Checklist

Area           Optimization
Writes         Use transactions for batches
Reads          Size cache for working set
Keys           Keep short, use consistent structure
Serialization  Use binary format for hot paths
Flush          Batch writes before flush
Threads        Minimize lock contention

Quick Reference

Database Creation

// In-memory
var db = WitDatabase.CreateInMemory();

// File-based with B+Tree
var db = new WitDatabaseBuilder()
    .WithFilePath("app.witdb")
    .WithBTree()
    .WithTransactions()
    .Build();

// Encrypted
var db = new WitDatabaseBuilder()
    .WithFilePath("secure.witdb")
    .WithBTree()
    .WithEncryption("password")
    .Build();

// LSM-Tree for write-heavy workloads
var db = new WitDatabaseBuilder()
    .WithFilePath("logs/")
    .WithLsmTree()
    .Build();

// With cache configuration
var db = new WitDatabaseBuilder()
    .WithFilePath("app.witdb")
    .WithBTree()
    .WithCacheSize(64 * 1024 * 1024)  // 64 MB
    .Build();

Core Operations

Operation  Sync                   Async
Get        db.Get(key)            db.GetAsync(key)
Put        db.Put(key, value)     db.PutAsync(key, value)
Delete     db.Delete(key)         db.DeleteAsync(key)
Scan       db.Scan(start?, end?)  db.ScanAsync(start?, end?)
Flush      db.Store.Flush()       (sync only)

Bulk Operations

db.Store.BulkPut(items)
db.Store.BulkPutAsync(items, token)
db.Store.BulkDelete(keys)
db.Store.BulkDeleteAsync(keys, token)
db.Store.StreamingPut(items, batchSize, progress?)
db.Store.BulkPutAndFlush(items)

Transactions

using var tx = db.BeginTransaction();
// or with isolation level:
using var tx = db.BeginTransaction(WitIsolationLevel.Serializable);

try
{
    // operations...
    tx.Commit();
}
catch
{
    tx.Rollback();
    throw;
}

Storage Engines

Engine    Builder Method        Best For
B+Tree    .WithBTree()          General use, reads
LSM-Tree  .WithLsmTree()        Write-heavy
Memory    .WithMemoryStorage()  Testing, caching

Encryption

Algorithm  Builder Method                Best For
AES-GCM    .WithAesGcmEncryption(pwd)    x86/x64 with AES-NI
ChaCha20   .WithChaCha20Encryption(pwd)  ARM, WASM

Key Design Patterns

// Hierarchical
"user:123:profile"

// Time-series
$"event:{timestamp:yyyy-MM-dd-HH-mm-ss}:{id}"

// Secondary index
"user-by-email:alice@test.com" → userId bytes

// Padded numeric
$"item:{id:D10}"  // "item:0000000123"