# wdb-sdk

## Installation

```bash
yarn add wdb-sdk
```

## Instantiation
```js
import { DB } from 'wdb-sdk'
const db = new DB({ jwk, url, hb, id, mem })
```

- `jwk`: signer Arweave wallet JWK
- `url`: DB rollup server URL (default: `http://localhost:6364`)
- `hb`: HyperBEAM WAL node URL (default: `http://localhost:10001`)
- `id`: DB ID; don't specify it when spawning a new DB
- `mem`: use an in-memory DB from `wdb-core`
An in-memory DB is useful for lightning-fast testing without a rollup server or HyperBEAM.
```js
import { DB } from 'wdb-sdk'
import { mem } from 'wdb-core'

const { q, db, io, kv } = mem() // q (queue) wraps db to prevent conflicts
const owner_db = new DB({ jwk: owner.jwk, mem: q })

// q can be shared with multiple clients
const user1_db = new DB({ jwk: user1.jwk, mem: q })
const user2_db = new DB({ jwk: user2.jwk, mem: q })
```

## ready()
You can ensure the rollup server is available and ready when instantiating.
```js
const db = await new DB({ jwk, url, hb }).ready()
```

`ready()` will ensure `${url}/status` returns `status="ok"`.
If you are the HyperBEAM node operator, you can also start a rollup node by passing `true` to `ready()`.

```js
const db = await new DB({ jwk, url, hb }).ready(true)
```

This will ensure `${hb}/~weavedb@1.0/start` returns `status=true`.

## spawn()
Spawn a new DB instance with `db.spawn()`, which returns a DB ID.

It spawns a new process to record the WAL (Write-Ahead Log) on the HyperBEAM node, then uses the process ID to create a new DB instance on the rollup node. The rollup node automatically bundles up queries and asynchronously dumps them to the HyperBEAM process in the background, while serving users at in-memory speed with cloud-level performance.

```js
const id = await db.spawn()
```

## mkdir()
Create a dir (directory) with `schema`, `auth`, and `name` definitions.

`auth` defines custom query types and their rules, such as `set:user` and `del:user`.
```js
await db.mkdir({
  name: "users",
  schema: { type: "object", required: ["name", "age"] },
  auth: [["add:user,set:user,update:user,del:user", [["allow()"]]]],
})
```

## set()
`set()` executes write queries according to the `auth` rules and the schema set on the dir.

```js
const res = await db.set("add:user", { name: "Bob", age: 25 }, "users")
const { success, error, result, query } = res
```

### result
`result` comes with a variety of metadata.
- `hashpath`: hash to track verifiable compute steps (AO-Core protocol)
- `signer`: signer of the HTTP message
- `msg`: HTTP message (HTTP message signature)
- `nonce`: nonce to prevent replay attacks
- `op`: operation (`op` = `opcode` + `:` + `operand`)
- `opcode`: operation type
- `operand`: custom operation name
- `query`: query without `op`
- `dir`: directory to update
- `before`: data before the update
- `data`: data after the update
- `id`: DB ID
- `ts`: timestamp
- `result`: the transaction index and the updated keys and data in the underlying KV store
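For instance, `op`, `opcode`, and `operand` relate to each other in plain JavaScript terms like this (an illustration, not an SDK call):

```js
// op is just opcode + ":" + operand
const op = "set:user"
const [opcode, operand] = op.split(":")
// opcode => "set", operand => "user"
```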
## Query Types (opcode)

There are 5 opcode types you can specify in `auth` with `mkdir()`.

- `add`: add a doc with an auto-generated docid; always adds a new doc
- `set`: add a new doc with the specified docid, whether or not it exists
- `update`: update a doc with the specified docid if it exists; reject if it doesn't exist
- `upsert`: add a doc with the docid if it doesn't exist; update it if it exists
- `del`: delete a doc with the docid if it exists
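The five behaviors can be sketched against a plain `Map` (an illustration of the semantics only, not the actual wdb-core implementation; real docids are base64, not numeric):

```js
// illustrative semantics of the five opcodes over a plain Map
const store = new Map()
let autoid = 0

const ops = {
  add: data => store.set(`${++autoid}`, data), // always creates a new doc
  set: (data, id) => store.set(id, data), // create or overwrite
  update: (data, id) => {
    if (!store.has(id)) throw Error("doc doesn't exist") // reject if missing
    store.set(id, { ...store.get(id), ...data })
  },
  upsert: (data, id) => store.set(id, { ...(store.get(id) ?? {}), ...data }),
  del: id => store.delete(id), // delete if it exists
}

ops.add({ name: "anon" }) // stored under an auto-generated docid
ops.set({ name: "Bob", age: 25 }, "Bob")
ops.update({ age: 26 }, "Bob") // merges into the existing doc
ops.upsert({ name: "Alice" }, "Alice") // doesn't exist, so it's created
ops.del("1")
```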
## Special Modifiers (_$)

`_$` provides special modifiers for field updates.
```js
// delete the field; this is different from assigning null
await db.update({ name: { _$: "del" } }, "users", "Bob")

// timestamp
await db.update({ date: { _$: "ts" } }, "users", "Bob")

// message signer
await db.update({ signer: { _$: "signer" } }, "users", "Bob")
```

You can also execute advanced logic to modify the field by defining FPJSON in an array.
```js
// increment
await db.update({ age: { _$: ["inc"] } }, "users", "Bob")

// add
await db.update({ age: { _$: ["add", 5] } }, "users", "Bob")

// remove items
await db.update({ favs: { _$: ["without", ["apple"]] } }, "users", "Bob")
```

## batch()

Batch-execute multiple write queries.
```js
const Bob = { name: "Bob", age: 20, favs: [ "apple", "orange" ] }
const Alice = { name: "Alice", age: 40, favs: [ "orange", "peach" ] }
const Beth = { name: "Beth", age: 30, favs: [ "grapes", "orange" ] }
const Mike = { name: "Mike", age: 30, favs: [ "peach", "apple" ] }

await db.batch([
  [ "set:user", Bob, "users", "Bob" ],
  [ "set:user", Alice, "users", "Alice" ],
  [ "set:user", Beth, "users", "Beth" ],
  [ "set:user", Mike, "users", "Mike" ]
])
```

## get()
### single doc

```js
const Bob = await db.get("users", "Bob")
```

### multiple docs

```js
const users = await db.get("users")
```

### sort
```js
await db.get("users", ["age", "desc"])
// => [ Alice, Beth, Mike, Bob ]

await db.get("users", ["name", "desc"])
// => [ Mike, Bob, Beth, Alice ]

await db.get("users", ["age", "desc"], ["name", "desc"])
// => [ Alice, Mike, Beth, Bob ]
```

### limit
```js
await db.get("users", ["age", "desc"], 2)
// => [ Alice, Beth ]
```

### where

`==` | `!=` | `>` | `>=` | `<` | `<=` | `in` | `not-in` | `array-contains` | `array-contains-any`
```js
await db.get("users", ["age", "==", 30])
// => [ Beth, Mike ]

await db.get("users", ["age", "!=", 30])
// => [ Bob, Alice ]

await db.get("users", ["age", ">", 30])
// => [ Alice ]

await db.get("users", ["age", ">=", 30])
// => [ Beth, Mike, Alice ]

await db.get("users", ["age", "<", 30])
// => [ Bob ]

await db.get("users", ["age", "<=", 30])
// => [ Bob, Beth, Mike ]

await db.get("users", ["age", "in", [20, 30]])
// => [ Bob, Beth, Mike ]

await db.get("users", ["age", "not-in", [20, 30]])
// => [ Alice ]

await db.get("users", ["favs", "array-contains", "apple"])
// => [ Bob, Mike ]

await db.get("users", ["favs", "array-contains-any", ["apple", "peach"]])
// => [ Alice, Bob, Mike ]
```

### skip
`startAt` | `startAfter` | `endAt` | `endBefore`
```js
await db.get("users", ["age", "asc"], ["startAt", 30])
// => [ Beth, Mike, Alice ]

await db.get("users", ["age", "asc"], ["startAfter", 30])
// => [ Alice ]

await db.get("users", ["age", "asc"], ["endAt", 30])
// => [ Bob, Beth, Mike ]

await db.get("users", ["age", "asc"], ["endBefore", 30])
// => [ Bob ]
```

## cget()
`cget()` has the same interface as `get()` but returns a doc with metadata.

```js
const { __cursor__, dir, id, data: Bob } = await db.cget("users", "Bob")
```

You can also use the result from `cget()` as a cursor with skip operations.
```js
const cursor = await db.cget("users", "Bob")
await db.get("users", ["age", "asc"], ["startAfter", cursor])
// => [ Beth, Mike, Alice ]
```

## iter()
`iter()` internally handles `cget()` and makes pagination easier.

```js
let { docs, next, isNext } = await db.iter("users", ["age", "asc"], 2)
// docs => [ { data: Bob }, { data: Beth } ]

while (isNext) {
  ;({ docs, next, isNext } = await next())
  // docs => [ { data: Mike }, { data: Alice } ]
}
```

## nonce()
`nonce()` returns the current nonce of the assigned signer. If the client has a wrong nonce, it auto-syncs with the latest nonce and retries the failed query.
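The auto-retry flow is roughly the following (a hypothetical pure-JS sketch with made-up names, not the SDK internals):

```js
// a toy server that rejects any nonce other than the next expected one
const server = {
  nonce: 5,
  exec(query, nonce) {
    if (nonce !== this.nonce + 1) throw Error(`expected ${this.nonce + 1}`)
    this.nonce = nonce
    return { success: true }
  },
}

let localNonce = 3 // client is out of sync

function send(query) {
  try {
    return server.exec(query, ++localNonce)
  } catch (e) {
    localNonce = server.nonce // re-sync with the latest nonce
    return server.exec(query, ++localNonce) // retry the failed query
  }
}

const res = send({ op: "set:user" })
// res.success => true, localNonce => 6
```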
```js
const nonce = await db.nonce()
```

## stat()

`stat(dir)` returns dir info including schema, auth, indexes, and triggers.
- `index` is the leaf position of the dir in the zk sparse merkle tree.
- `auth` is the FPJSON rules for authentication and data transformations.
- `autoid` is the auto-increment id used by `add`; the actual docids are in base64 form.
```js
const stat = await db.stat("users")
// => { schema, auth, indexes, triggers, index, autoid }
```

## addIndex()
Multi-field sorting requires adding the index first.
```js
await db.addIndex([["age", "desc"], ["name", "desc"]], "users")
await db.get("users", ["age", "desc"], ["name", "desc"])
// => [ Alice, Mike, Beth, Bob ]
```

## removeIndex()
```js
await db.removeIndex([["age", "desc"], ["name", "desc"]], "users")
await db.get("users", ["age", "desc"], ["name", "desc"])
// => Error
```

## setSchema()
Update the JSON schema for the dir.
```js
await db.setSchema(schema, "users")
```

## setAuth()
Update the auth rules for the dir.
```js
await db.setAuth(auth, "users")
```

## addTrigger()
Add a trigger to the dir.
```js
const trigger = {
  key: "inc_user_count",
  on: "create",
  fn: [
    ["update()", [{ user_count: { _$: ["inc"] } }, "meta", "app_stats"]],
  ],
}

// increment user_count in meta/app_stats when a user is created
await db.addTrigger(trigger, "users")
```

## removeTrigger()
Remove a trigger from the dir by specifying its `key`.

```js
await db.removeTrigger({ key: "inc_user_count" }, "users")
```

## Utilities
### wdb23()

Convert an Arweave address to a WDB23 address.

```js
import { wdb23 } from "wdb-sdk"
const addr23 = wdb23(arweave_address)
```

### wdb160()
Generate a WDB160 hash from multiple inputs.

```js
import { wdb160 } from "wdb-sdk"
const hash = wdb160(["abc", "def"])
```