struct std

Types

Type Function ArrayHashMapWithAllocator

A hash table of keys and values, each stored sequentially.

Insertion order is preserved. In general, this data structure supports the same operations as std.ArrayList.

Deletion operations:

  • swapRemove - O(1)
  • orderedRemove - O(N)

Modifying the hash map while iterating is allowed; however, one must understand the (well-defined) behavior when mixing insertions and deletions with iteration.

See ArrayHashMapUnmanaged for a variant of this data structure that accepts an Allocator as a parameter when needed rather than storing it.
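
For orientation, here is a minimal usage sketch. It assumes std.array_hash_map.AutoContext, the standard library's ready-made context for primitive keys; everything else comes from the functions documented below.

const std = @import("std");

test "ArrayHashMapWithAllocator basics (sketch)" {
    const gpa = std.testing.allocator;
    const Map = std.ArrayHashMapWithAllocator(u32, []const u8, std.array_hash_map.AutoContext(u32), false);
    var map = Map.init(gpa);
    defer map.deinit();

    try map.put(1, "one");
    try map.put(2, "two");
    try map.put(3, "three");

    // Insertion order is preserved in the backing arrays.
    try std.testing.expectEqualSlices(u32, &.{ 1, 2, 3 }, map.keys());

    _ = map.orderedRemove(1); // O(N): keys() is now {2, 3}
    _ = map.swapRemove(2); // O(1): the last key (3) is swapped into index 0
    try std.testing.expectEqualSlices(u32, &.{3}, map.keys());
}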

Parameters

K: type
V: type
Context: type

A namespace that provides these two functions:

  • pub fn hash(self, K) u32
  • pub fn eql(self, K, K, usize) bool

The final usize in the eql function represents the index of the key that's already inside the map. A sketch of a conforming context appears after this parameter list.

store_hash: bool

When false, this data structure is biased towards cheap eql functions and avoids storing each key's hash in the table. Setting store_hash to true incurs more memory cost but limits eql to being called only once per insertion/deletion (provided there are no hash collisions).
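
As referenced above, here is a sketch of a conforming Context for []const u8 keys. The standard library ships an equivalent std.array_hash_map.StringContext; this hand-written version exists only to show the required shape.

const std = @import("std");

const StringContext = struct {
    pub fn hash(self: @This(), key: []const u8) u32 {
        _ = self;
        return @truncate(std.hash.Wyhash.hash(0, key));
    }
    pub fn eql(self: @This(), a: []const u8, b: []const u8, b_index: usize) bool {
        _ = self;
        _ = b_index; // index of the key already stored in the map
        return std.mem.eql(u8, a, b);
    }
};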

Types

Type Unmanaged

The ArrayHashMapUnmanaged type using the same settings as this managed map.

Source Code

pub const Unmanaged = ArrayHashMapUnmanaged(K, V, Context, store_hash)

Fields

unmanaged: Unmanaged
allocator: Allocator
ctx: Context

Values

Constant Entry

Pointers to a key and value in the backing store of this map. Modifying the key is allowed only if it does not change the hash. Modifying the value is allowed. Entry pointers become invalid whenever this ArrayHashMap is modified, unless ensureTotalCapacity/ensureUnusedCapacity was previously used.

Source Code

pub const Entry = Unmanaged.Entry

Constant KV

A KV pair which has been copied out of the backing store.

Source Code

pub const KV = Unmanaged.KV

Constant Data

The Data type used for the MultiArrayList backing this map.

Source Code

pub const Data = Unmanaged.Data

Constant DataList

The MultiArrayList type backing this map.

Source Code

pub const DataList = Unmanaged.DataList

Constant Hash

The stored hash type, either u32 or void.

Source Code

pub const Hash = Unmanaged.Hash

Constant GetOrPutResult

getOrPut variants return this structure, with pointers to the backing store and a flag to indicate whether an existing entry was found. Modifying the key is allowed only if it does not change the hash. Modifying the value is allowed. Entry pointers become invalid whenever this ArrayHashMap is modified, unless ensureTotalCapacity/ensureUnusedCapacity was previously used.

Source Code

pub const GetOrPutResult = Unmanaged.GetOrPutResult

Constant Iterator

An Iterator over Entry pointers.

Source Code

pub const Iterator = Unmanaged.Iterator

Functions

Function init

pub fn init(allocator: Allocator) Self

Create an ArrayHashMap instance which will use a specified allocator.

Parameters

allocator: Allocator

Source Code

pub fn init(allocator: Allocator) Self {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call initContext instead.");
    return initContext(allocator, undefined);
}

Function initContext

pub fn initContext(allocator: Allocator, ctx: Context) Self

Parameters

allocator: Allocator
ctx: Context

Source Code

pub fn initContext(allocator: Allocator, ctx: Context) Self {
    return .{
        .unmanaged = .empty,
        .allocator = allocator,
        .ctx = ctx,
    };
}

Function deinit

pub fn deinit(self: *Self) void

Frees the backing allocation and leaves the map in an undefined state. Note that this does not free keys or values. You must take care of that before calling this function, if it is needed.

Parameters

self: *Self

Source Code

pub fn deinit(self: *Self) void {
    self.unmanaged.deinit(self.allocator);
    self.* = undefined;
}

Function lockPointers

pub fn lockPointers(self: *Self) void

Puts the hash map into a state where any method call that would cause an existing key or value pointer to become invalidated will instead trigger an assertion.

An additional call to lockPointers in such state also triggers an assertion.

unlockPointers returns the hash map to the previous state.

Parameters

self: *Self

Source Code

pub fn lockPointers(self: *Self) void {
    self.unmanaged.lockPointers();
}
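
A sketch of the intended pattern, assuming an existing map with integer values; while the lock is held, in-place mutation is fine but anything that could reallocate asserts:

map.lockPointers();
defer map.unlockPointers();
// Mutating values in place cannot invalidate pointers.
for (map.values()) |*v| v.* += 1;
// A growing `put` here would trigger the assertion instead.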

Function unlockPointers

pub fn unlockPointers(self: *Self) void

Undoes a call to lockPointers.

Parameters

self: *Self

Source Code

pub fn unlockPointers(self: *Self) void {
    self.unmanaged.unlockPointers();
}

Function clearRetainingCapacity

pub fn clearRetainingCapacity(self: *Self) void

Clears the map but retains the backing allocation for future use.

Parameters

self: *Self

Source Code

pub fn clearRetainingCapacity(self: *Self) void {
    return self.unmanaged.clearRetainingCapacity();
}

Function clearAndFree

pub fn clearAndFree(self: *Self) void

Clears the map and releases the backing allocation.

Parameters

self: *Self

Source Code

pub fn clearAndFree(self: *Self) void {
    return self.unmanaged.clearAndFree(self.allocator);
}

Function count

pub fn count(self: Self) usize

Returns the number of KV pairs stored in this map.

Parameters

self: Self

Source Code

pub fn count(self: Self) usize {
    return self.unmanaged.count();
}

Function keys

pub fn keys(self: Self) []K

Returns the backing array of keys in this map. Modifying the map may invalidate this array. Modifying this array in a way that changes key hashes or key equality puts the map into an unusable state until reIndex is called.

Parameters

self: Self

Source Code

pub fn keys(self: Self) []K {
    return self.unmanaged.keys();
}

Function values

pub fn values(self: Self) []V

Returns the backing array of values in this map. Modifying the map may invalidate this array. It is permitted to modify the values in this array.

Parameters

self: Self

Source Code

pub fn values(self: Self) []V {
    return self.unmanaged.values();
}

Function iterator

pub fn iterator(self: *const Self) Iterator

Returns an iterator over the pairs in this map. Modifying the map may invalidate this iterator.

Parameters

self: *const Self

Source Code

pub fn iterator(self: *const Self) Iterator {
    return self.unmanaged.iterator();
}
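
A sketch of the iteration pattern, assuming a map with u32 keys and values built as in the earlier examples:

var it = map.iterator();
while (it.next()) |entry| {
    // Entries are visited in insertion order.
    std.debug.print("{} -> {}\n", .{ entry.key_ptr.*, entry.value_ptr.* });
}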

Function getOrPut

pub fn getOrPut(self: *Self, key: K) !GetOrPutResult

If key exists, this function cannot fail. If there is an existing item with key, then the result Entry pointer points to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointer points to it. Caller should then initialize the value (but not the key).

Parameters

self: *Self
key: K

Source Code

pub fn getOrPut(self: *Self, key: K) !GetOrPutResult {
    return self.unmanaged.getOrPutContext(self.allocator, key, self.ctx);
}
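
A sketch of the classic counting pattern this enables, again assuming std.array_hash_map.AutoContext:

const std = @import("std");

test "getOrPut counting (sketch)" {
    const gpa = std.testing.allocator;
    var counts = std.ArrayHashMapWithAllocator(u8, usize, std.array_hash_map.AutoContext(u8), false).init(gpa);
    defer counts.deinit();

    for ("hello") |c| {
        const gop = try counts.getOrPut(c);
        // A fresh entry has an undefined value; initialize it before use.
        if (!gop.found_existing) gop.value_ptr.* = 0;
        gop.value_ptr.* += 1;
    }
    try std.testing.expectEqual(@as(usize, 2), counts.get('l').?);
}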

Function getOrPutAdapted

pub fn getOrPutAdapted(self: *Self, key: anytype, ctx: anytype) !GetOrPutResult

Parameters

self: *Self

Source Code

pub fn getOrPutAdapted(self: *Self, key: anytype, ctx: anytype) !GetOrPutResult {
    return self.unmanaged.getOrPutContextAdapted(self.allocator, key, ctx, self.ctx);
}

Function getOrPutAssumeCapacity

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult

If there is an existing item with key, then the result Entry pointer points to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointer points to it. Caller should then initialize the value (but not the key). If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self
key: K

Source Code

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
    return self.unmanaged.getOrPutAssumeCapacityContext(key, self.ctx);
}

Function getOrPutAssumeCapacityAdapted

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult

Parameters

self: *Self

Source Code

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {
    return self.unmanaged.getOrPutAssumeCapacityAdapted(key, ctx);
}

Function getOrPutValue

pub fn getOrPutValue(self: *Self, key: K, value: V) !GetOrPutResult

Parameters

self: *Self
key: K
value: V

Source Code

pub fn getOrPutValue(self: *Self, key: K, value: V) !GetOrPutResult {
    return self.unmanaged.getOrPutValueContext(self.allocator, key, value, self.ctx);
}

Function ensureTotalCapacity

pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) !void

Increases capacity, guaranteeing that insertions up until new_capacity total entries will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
new_capacity: usize

Source Code

pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) !void {
    return self.unmanaged.ensureTotalCapacityContext(self.allocator, new_capacity, self.ctx);
}

Function ensureUnusedCapacity

pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) !void

Increases capacity, guaranteeing that insertions up until additional_count more items will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
additional_count: usize

Source Code

pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) !void {
    return self.unmanaged.ensureUnusedCapacityContext(self.allocator, additional_count, self.ctx);
}
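
A sketch of reserving space up front so that the inserts themselves cannot fail:

const std = @import("std");

test "reserve then insert (sketch)" {
    const gpa = std.testing.allocator;
    var map = std.ArrayHashMapWithAllocator(u32, u32, std.array_hash_map.AutoContext(u32), false).init(gpa);
    defer map.deinit();

    const new_keys = [_]u32{ 1, 2, 3, 4 };
    try map.ensureUnusedCapacity(new_keys.len);
    // After the reservation these insertions need no error handling.
    for (new_keys) |k| map.putAssumeCapacity(k, k * 10);
}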

Function capacity

pub fn capacity(self: Self) usize

Returns the total number of elements which may be present before it is no longer guaranteed that no allocations will be performed.

Parameters

self: Self

Source Code

pub fn capacity(self: Self) usize {
    return self.unmanaged.capacity();
}

Function put

pub fn put(self: *Self, key: K, value: V) !void

Clobbers any existing data. To detect if a put would clobber existing data, see getOrPut.

Parameters

self: *Self
key: K
value: V

Source Code

pub fn put(self: *Self, key: K, value: V) !void {
    return self.unmanaged.putContext(self.allocator, key, value, self.ctx);
}

Function putNoClobber

pub fn putNoClobber(self: *Self, key: K, value: V) !void

Inserts a key-value pair into the hash map, asserting that no previous entry with the same key is already present.

Parameters

self: *Self
key: K
value: V

Source Code

pub fn putNoClobber(self: *Self, key: K, value: V) !void {
    return self.unmanaged.putNoClobberContext(self.allocator, key, value, self.ctx);
}

Function putAssumeCapacity

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Clobbers any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
    return self.unmanaged.putAssumeCapacityContext(key, value, self.ctx);
}

Function putAssumeCapacityNoClobber

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Asserts that it does not clobber any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
    return self.unmanaged.putAssumeCapacityNoClobberContext(key, value, self.ctx);
}

Function fetchPut

pub fn fetchPut(self: *Self, key: K, value: V) !?KV

Inserts a new Entry into the hash map, returning the previous one, if any.

Parameters

self: *Self
key: K
value: V

Source Code

pub fn fetchPut(self: *Self, key: K, value: V) !?KV {
    return self.unmanaged.fetchPutContext(self.allocator, key, value, self.ctx);
}

Function fetchPutAssumeCapacity

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV

Inserts a new Entry into the hash map, returning the previous one, if any. If insertion happens, asserts there is enough capacity without allocating.

Parameters

self: *Self
key: K
value: V

Source Code

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
    return self.unmanaged.fetchPutAssumeCapacityContext(key, value, self.ctx);
}

Function getEntry

pub fn getEntry(self: Self, key: K) ?Entry

Finds pointers to the key and value storage associated with a key.

Parameters

self: Self
key: K

Source Code

pub fn getEntry(self: Self, key: K) ?Entry {
    return self.unmanaged.getEntryContext(key, self.ctx);
}

Function getEntryAdapted

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry

Parameters

self: Self

Source Code

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
    return self.unmanaged.getEntryAdapted(key, ctx);
}

Function getIndex

pub fn getIndex(self: Self, key: K) ?usize

Finds the index in the entries array where a key is stored.

Parameters

self: Self
key: K

Source Code

pub fn getIndex(self: Self, key: K) ?usize {
    return self.unmanaged.getIndexContext(key, self.ctx);
}

Function getIndexAdapted

pub fn getIndexAdapted(self: Self, key: anytype, ctx: anytype) ?usize

Parameters

self: Self

Source Code

pub fn getIndexAdapted(self: Self, key: anytype, ctx: anytype) ?usize {
    return self.unmanaged.getIndexAdapted(key, ctx);
}

Function get

pub fn get(self: Self, key: K) ?V

Finds the value associated with a key.

Parameters

self: Self
key: K

Source Code

pub fn get(self: Self, key: K) ?V {
    return self.unmanaged.getContext(key, self.ctx);
}

Function getAdapted

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V

Parameters

self: Self

Source Code

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
    return self.unmanaged.getAdapted(key, ctx);
}

Function getPtr

pub fn getPtr(self: Self, key: K) ?*V

Finds a pointer to the value associated with a key.

Parameters

self: Self
key: K

Source Code

pub fn getPtr(self: Self, key: K) ?*V {
    return self.unmanaged.getPtrContext(key, self.ctx);
}

Function getPtrAdapted

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V

Parameters

self: Self

Source Code

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
    return self.unmanaged.getPtrAdapted(key, ctx);
}

Function getKey

pub fn getKey(self: Self, key: K) ?K

Finds the actual key associated with an adapted key.

Parameters

self: Self
key: K

Source Code

pub fn getKey(self: Self, key: K) ?K {
    return self.unmanaged.getKeyContext(key, self.ctx);
}

Function getKeyAdapted

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K

Parameters

self: Self

Source Code

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
    return self.unmanaged.getKeyAdapted(key, ctx);
}

Function getKeyPtr

pub fn getKeyPtr(self: Self, key: K) ?*K

Finds a pointer to the actual key associated with an adapted key.

Parameters

self: Self
key: K

Source Code

pub fn getKeyPtr(self: Self, key: K) ?*K {
    return self.unmanaged.getKeyPtrContext(key, self.ctx);
}

Function getKeyPtrAdapted

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K

Parameters

self: Self

Source Code

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
    return self.unmanaged.getKeyPtrAdapted(key, ctx);
}

Function contains

pub fn contains(self: Self, key: K) bool

Checks whether a key is stored in the map.

Parameters

self: Self
key: K

Source Code

pub fn contains(self: Self, key: K) bool {
    return self.unmanaged.containsContext(key, self.ctx);
}

Function containsAdapted

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool

Parameters

self: Self

Source Code

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
    return self.unmanaged.containsAdapted(key, ctx);
}

Function fetchSwapRemove

pub fn fetchSwapRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function. The entry is removed from the underlying array by swapping it with the last element.

Parameters

self: *Self
key: K

Source Code

pub fn fetchSwapRemove(self: *Self, key: K) ?KV {
    return self.unmanaged.fetchSwapRemoveContext(key, self.ctx);
}

Function fetchSwapRemoveAdapted

pub fn fetchSwapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

pub fn fetchSwapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    return self.unmanaged.fetchSwapRemoveContextAdapted(key, ctx, self.ctx);
}

Function fetchOrderedRemove

pub fn fetchOrderedRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering.

Parameters

self: *Self
key: K

Source Code

pub fn fetchOrderedRemove(self: *Self, key: K) ?KV {
    return self.unmanaged.fetchOrderedRemoveContext(key, self.ctx);
}

Function fetchOrderedRemoveAdapted

pub fn fetchOrderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

pub fn fetchOrderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    return self.unmanaged.fetchOrderedRemoveContextAdapted(key, ctx, self.ctx);
}

Function swapRemove

pub fn swapRemove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map. The entry is removed from the underlying array by swapping it with the last element. Returns true if an entry was removed, false otherwise.

Parameters

self: *Self
key: K

Source Code

pub fn swapRemove(self: *Self, key: K) bool {
    return self.unmanaged.swapRemoveContext(key, self.ctx);
}

Function swapRemoveAdapted

pub fn swapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool

Parameters

self: *Self

Source Code

pub fn swapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    return self.unmanaged.swapRemoveContextAdapted(key, ctx, self.ctx);
}

Function orderedRemove

pub fn orderedRemove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering. Returns true if an entry was removed, false otherwise.

Parameters

self: *Self
key: K

Source Code

pub fn orderedRemove(self: *Self, key: K) bool {
    return self.unmanaged.orderedRemoveContext(key, self.ctx);
}
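
The two removal flavors differ only in what happens to the backing array order; a sketch using void values (a common way to build a set):

const std = @import("std");

test "swapRemove vs orderedRemove (sketch)" {
    const gpa = std.testing.allocator;
    var set = std.ArrayHashMapWithAllocator(u32, void, std.array_hash_map.AutoContext(u32), false).init(gpa);
    defer set.deinit();
    for ([_]u32{ 1, 2, 3, 4 }) |k| try set.put(k, {});

    _ = set.swapRemove(1); // O(1): the last key (4) moves into index 0
    try std.testing.expectEqualSlices(u32, &.{ 4, 2, 3 }, set.keys());

    _ = set.orderedRemove(2); // O(N): remaining order is preserved
    try std.testing.expectEqualSlices(u32, &.{ 4, 3 }, set.keys());
}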

Function orderedRemoveAdapted

pub fn orderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool

Parameters

self: *Self

Source Code

pub fn orderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    return self.unmanaged.orderedRemoveContextAdapted(key, ctx, self.ctx);
}

Function swapRemoveAt

pub fn swapRemoveAt(self: *Self, index: usize) void

Deletes the item at the specified index in entries from the hash map. The entry is removed from the underlying array by swapping it with the last element.

Parameters

self: *Self
index: usize

Source Code

pub fn swapRemoveAt(self: *Self, index: usize) void {
    self.unmanaged.swapRemoveAtContext(index, self.ctx);
}

Function orderedRemoveAt

pub fn orderedRemoveAt(self: *Self, index: usize) void

Deletes the item at the specified index in entries from the hash map. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering.

Parameters

self: *Self
index: usize

Source Code

pub fn orderedRemoveAt(self: *Self, index: usize) void {
    self.unmanaged.orderedRemoveAtContext(index, self.ctx);
}

Function clone

pub fn clone(self: Self) !Self

Create a copy of the hash map which can be modified separately. The copy uses the same context and allocator as this instance.

Parameters

self: Self

Source Code

pub fn clone(self: Self) !Self {
    var other = try self.unmanaged.cloneContext(self.allocator, self.ctx);
    return other.promoteContext(self.allocator, self.ctx);
}

Function cloneWithAllocator

pub fn cloneWithAllocator(self: Self, allocator: Allocator) !Self

Create a copy of the hash map which can be modified separately. The copy uses the same context as this instance, but the specified allocator.

Parameters

self: Self
allocator: Allocator

Source Code

pub fn cloneWithAllocator(self: Self, allocator: Allocator) !Self {
    var other = try self.unmanaged.cloneContext(allocator, self.ctx);
    return other.promoteContext(allocator, self.ctx);
}

Function cloneWithContext

pub fn cloneWithContext(self: Self, ctx: anytype) !ArrayHashMap(K, V, @TypeOf(ctx), store_hash)

Create a copy of the hash map which can be modified separately. The copy uses the same allocator as this instance, but the specified context.

Parameters

self: Self

Source Code

pub fn cloneWithContext(self: Self, ctx: anytype) !ArrayHashMap(K, V, @TypeOf(ctx), store_hash) {
    var other = try self.unmanaged.cloneContext(self.allocator, ctx);
    return other.promoteContext(self.allocator, ctx);
}

Function cloneWithAllocatorAndContext

pub fn cloneWithAllocatorAndContext(self: Self, allocator: Allocator, ctx: anytype) !ArrayHashMap(K, V, @TypeOf(ctx), store_hash)

Create a copy of the hash map which can be modified separately. The copy uses the specified allocator and context.

Parameters

self: Self
allocator: Allocator

Source Code

pub fn cloneWithAllocatorAndContext(self: Self, allocator: Allocator, ctx: anytype) !ArrayHashMap(K, V, @TypeOf(ctx), store_hash) {
    var other = try self.unmanaged.cloneContext(allocator, ctx);
    return other.promoteContext(allocator, ctx);
}

Function move

pub fn move(self: *Self) Self

Set the map to an empty state, making deinitialization a no-op, and returning a copy of the original.

Parameters

self: *Self

Source Code

pub fn move(self: *Self) Self {
    self.unmanaged.pointer_stability.assertUnlocked();
    const result = self.*;
    self.unmanaged = .empty;
    return result;
}

Function reIndex

pub fn reIndex(self: *Self) !void

Recomputes stored hashes and rebuilds the key indexes. If the underlying keys have been modified directly, call this method to recompute the denormalized metadata necessary for the operation of the methods of this map that look up entries by key.

One use case for this is directly calling entries.resize() to grow the underlying storage, and then setting the keys and values directly without going through the methods of this map.

The time complexity of this operation is O(n).

Parameters

self: *Self

Source Code

pub fn reIndex(self: *Self) !void {
    return self.unmanaged.reIndexContext(self.allocator, self.ctx);
}
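
A sketch of the direct-mutation use case this supports:

const std = @import("std");

test "reIndex after mutating keys directly (sketch)" {
    const gpa = std.testing.allocator;
    var map = std.ArrayHashMapWithAllocator(u32, u32, std.array_hash_map.AutoContext(u32), false).init(gpa);
    defer map.deinit();
    try map.put(1, 100);

    // Rewriting a key through the backing array changes its hash, so the
    // index must be rebuilt before any further lookup.
    map.keys()[0] = 2;
    try map.reIndex();
    try std.testing.expect(map.contains(2));
    try std.testing.expect(!map.contains(1));
}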

Function sort

pub fn sort(self: *Self, sort_ctx: anytype) void

Sorts the entries and then rebuilds the index. sort_ctx must have this method: fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool

Parameters

self: *Self

Source Code

pub fn sort(self: *Self, sort_ctx: anytype) void {
    return self.unmanaged.sortContext(sort_ctx, self.ctx);
}
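
A sketch of sorting entries by key; the sort context compares positions in the backing arrays:

const std = @import("std");

test "sort by key (sketch)" {
    const gpa = std.testing.allocator;
    var map = std.ArrayHashMapWithAllocator(u32, u32, std.array_hash_map.AutoContext(u32), false).init(gpa);
    defer map.deinit();
    for ([_]u32{ 3, 1, 2 }) |k| try map.put(k, k * 10);

    const SortCtx = struct {
        keys: []const u32,
        pub fn lessThan(ctx: @This(), a_index: usize, b_index: usize) bool {
            return ctx.keys[a_index] < ctx.keys[b_index];
        }
    };
    map.sort(SortCtx{ .keys = map.keys() });
    try std.testing.expectEqualSlices(u32, &.{ 1, 2, 3 }, map.keys());
}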

Function shrinkRetainingCapacity

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Keeps capacity the same.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. Any deinitialization of discarded entries must take place after calling this function.

Parameters

self: *Self
new_len: usize

Source Code

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
    return self.unmanaged.shrinkRetainingCapacityContext(new_len, self.ctx);
}

Function shrinkAndFree

pub fn shrinkAndFree(self: *Self, new_len: usize) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Reduces allocated capacity.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. It is a bug to call this function if the discarded entries require deinitialization. For that use case, shrinkRetainingCapacity can be used instead.

Parameters

self: *Self
new_len: usize

Source Code

pub fn shrinkAndFree(self: *Self, new_len: usize) void {
    return self.unmanaged.shrinkAndFreeContext(self.allocator, new_len, self.ctx);
}

Function pop

pub fn pop(self: *Self) ?KV

Removes the last inserted Entry in the hash map and returns it if count is nonzero. Otherwise returns null.

Parameters

self: *Self

Source Code

pub fn pop(self: *Self) ?KV {
    return self.unmanaged.popContext(self.ctx);
}

Source Code

pub fn ArrayHashMapWithAllocator(
    comptime K: type,
    comptime V: type,
    /// A namespace that provides these two functions:
    /// * `pub fn hash(self, K) u32`
    /// * `pub fn eql(self, K, K, usize) bool`
    ///
    /// The final `usize` in the `eql` function represents the index of the key
    /// that's already inside the map.
    comptime Context: type,
    /// When `false`, this data structure is biased towards cheap `eql`
    /// functions and avoids storing each key's hash in the table. Setting
    /// `store_hash` to `true` incurs more memory cost but limits `eql` to
    /// being called only once per insertion/deletion (provided there are no
    /// hash collisions).
    comptime store_hash: bool,
) type {
    return struct {
        unmanaged: Unmanaged,
        allocator: Allocator,
        ctx: Context,

        /// The ArrayHashMapUnmanaged type using the same settings as this managed map.
        pub const Unmanaged = ArrayHashMapUnmanaged(K, V, Context, store_hash);

        /// Pointers to a key and value in the backing store of this map.
        /// Modifying the key is allowed only if it does not change the hash.
        /// Modifying the value is allowed.
        /// Entry pointers become invalid whenever this ArrayHashMap is modified,
        /// unless `ensureTotalCapacity`/`ensureUnusedCapacity` was previously used.
        pub const Entry = Unmanaged.Entry;

        /// A KV pair which has been copied out of the backing store
        pub const KV = Unmanaged.KV;

        /// The Data type used for the MultiArrayList backing this map
        pub const Data = Unmanaged.Data;
        /// The MultiArrayList type backing this map
        pub const DataList = Unmanaged.DataList;

        /// The stored hash type, either u32 or void.
        pub const Hash = Unmanaged.Hash;

        /// getOrPut variants return this structure, with pointers
        /// to the backing store and a flag to indicate whether an
        /// existing entry was found.
        /// Modifying the key is allowed only if it does not change the hash.
        /// Modifying the value is allowed.
        /// Entry pointers become invalid whenever this ArrayHashMap is modified,
        /// unless `ensureTotalCapacity`/`ensureUnusedCapacity` was previously used.
        pub const GetOrPutResult = Unmanaged.GetOrPutResult;

        /// An Iterator over Entry pointers.
        pub const Iterator = Unmanaged.Iterator;

        const Self = @This();

        /// Create an ArrayHashMap instance which will use a specified allocator.
        pub fn init(allocator: Allocator) Self {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call initContext instead.");
            return initContext(allocator, undefined);
        }
        pub fn initContext(allocator: Allocator, ctx: Context) Self {
            return .{
                .unmanaged = .empty,
                .allocator = allocator,
                .ctx = ctx,
            };
        }

        /// Frees the backing allocation and leaves the map in an undefined state.
        /// Note that this does not free keys or values.  You must take care of that
        /// before calling this function, if it is needed.
        pub fn deinit(self: *Self) void {
            self.unmanaged.deinit(self.allocator);
            self.* = undefined;
        }

        /// Puts the hash map into a state where any method call that would
        /// cause an existing key or value pointer to become invalidated will
        /// instead trigger an assertion.
        ///
        /// An additional call to `lockPointers` in such state also triggers an
        /// assertion.
        ///
        /// `unlockPointers` returns the hash map to the previous state.
        pub fn lockPointers(self: *Self) void {
            self.unmanaged.lockPointers();
        }

        /// Undoes a call to `lockPointers`.
        pub fn unlockPointers(self: *Self) void {
            self.unmanaged.unlockPointers();
        }

        /// Clears the map but retains the backing allocation for future use.
        pub fn clearRetainingCapacity(self: *Self) void {
            return self.unmanaged.clearRetainingCapacity();
        }

        /// Clears the map and releases the backing allocation
        pub fn clearAndFree(self: *Self) void {
            return self.unmanaged.clearAndFree(self.allocator);
        }

        /// Returns the number of KV pairs stored in this map.
        pub fn count(self: Self) usize {
            return self.unmanaged.count();
        }

        /// Returns the backing array of keys in this map. Modifying the map may
        /// invalidate this array. Modifying this array in a way that changes
        /// key hashes or key equality puts the map into an unusable state until
        /// `reIndex` is called.
        pub fn keys(self: Self) []K {
            return self.unmanaged.keys();
        }
        /// Returns the backing array of values in this map. Modifying the map
        /// may invalidate this array. It is permitted to modify the values in
        /// this array.
        pub fn values(self: Self) []V {
            return self.unmanaged.values();
        }

        /// Returns an iterator over the pairs in this map.
        /// Modifying the map may invalidate this iterator.
        pub fn iterator(self: *const Self) Iterator {
            return self.unmanaged.iterator();
        }

        /// If key exists this function cannot fail.
        /// If there is an existing item with `key`, then the result
        /// `Entry` pointer points to it, and found_existing is true.
        /// Otherwise, puts a new item with undefined value, and
        /// the `Entry` pointer points to it. Caller should then initialize
        /// the value (but not the key).
        pub fn getOrPut(self: *Self, key: K) !GetOrPutResult {
            return self.unmanaged.getOrPutContext(self.allocator, key, self.ctx);
        }
        pub fn getOrPutAdapted(self: *Self, key: anytype, ctx: anytype) !GetOrPutResult {
            return self.unmanaged.getOrPutContextAdapted(self.allocator, key, ctx, self.ctx);
        }

        /// If there is an existing item with `key`, then the result
        /// `Entry` pointer points to it, and found_existing is true.
        /// Otherwise, puts a new item with undefined value, and
        /// the `Entry` pointer points to it. Caller should then initialize
        /// the value (but not the key).
        /// If a new entry needs to be stored, this function asserts there
        /// is enough capacity to store it.
        pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
            return self.unmanaged.getOrPutAssumeCapacityContext(key, self.ctx);
        }
        pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {
            return self.unmanaged.getOrPutAssumeCapacityAdapted(key, ctx);
        }
        pub fn getOrPutValue(self: *Self, key: K, value: V) !GetOrPutResult {
            return self.unmanaged.getOrPutValueContext(self.allocator, key, value, self.ctx);
        }

        /// Increases capacity, guaranteeing that insertions up until
        /// `new_capacity` total entries will not cause an allocation, and
        /// therefore cannot fail.
        pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) !void {
            return self.unmanaged.ensureTotalCapacityContext(self.allocator, new_capacity, self.ctx);
        }

        /// Increases capacity, guaranteeing that insertions up until
        /// `additional_count` **more** items will not cause an allocation, and
        /// therefore cannot fail.
        pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) !void {
            return self.unmanaged.ensureUnusedCapacityContext(self.allocator, additional_count, self.ctx);
        }

        /// Returns the number of total elements which may be present before it is
        /// no longer guaranteed that no allocations will be performed.
        pub fn capacity(self: Self) usize {
            return self.unmanaged.capacity();
        }

        /// Clobbers any existing data. To detect if a put would clobber
        /// existing data, see `getOrPut`.
        pub fn put(self: *Self, key: K, value: V) !void {
            return self.unmanaged.putContext(self.allocator, key, value, self.ctx);
        }

        /// Inserts a key-value pair into the hash map, asserting that no previous
        /// entry with the same key is already present
        pub fn putNoClobber(self: *Self, key: K, value: V) !void {
            return self.unmanaged.putNoClobberContext(self.allocator, key, value, self.ctx);
        }

        /// Asserts there is enough capacity to store the new key-value pair.
        /// Clobbers any existing data. To detect if a put would clobber
        /// existing data, see `getOrPutAssumeCapacity`.
        pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
            return self.unmanaged.putAssumeCapacityContext(key, value, self.ctx);
        }

        /// Asserts there is enough capacity to store the new key-value pair.
        /// Asserts that it does not clobber any existing data.
        /// To detect if a put would clobber existing data, see `getOrPutAssumeCapacity`.
        pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
            return self.unmanaged.putAssumeCapacityNoClobberContext(key, value, self.ctx);
        }

        /// Inserts a new `Entry` into the hash map, returning the previous one, if any.
        pub fn fetchPut(self: *Self, key: K, value: V) !?KV {
            return self.unmanaged.fetchPutContext(self.allocator, key, value, self.ctx);
        }

        /// Inserts a new `Entry` into the hash map, returning the previous one, if any.
        /// If insertion happens, asserts there is enough capacity without allocating.
        pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
            return self.unmanaged.fetchPutAssumeCapacityContext(key, value, self.ctx);
        }

        /// Finds pointers to the key and value storage associated with a key.
        pub fn getEntry(self: Self, key: K) ?Entry {
            return self.unmanaged.getEntryContext(key, self.ctx);
        }
        pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
            return self.unmanaged.getEntryAdapted(key, ctx);
        }

        /// Finds the index in the `entries` array where a key is stored
        pub fn getIndex(self: Self, key: K) ?usize {
            return self.unmanaged.getIndexContext(key, self.ctx);
        }
        pub fn getIndexAdapted(self: Self, key: anytype, ctx: anytype) ?usize {
            return self.unmanaged.getIndexAdapted(key, ctx);
        }

        /// Find the value associated with a key
        pub fn get(self: Self, key: K) ?V {
            return self.unmanaged.getContext(key, self.ctx);
        }
        pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
            return self.unmanaged.getAdapted(key, ctx);
        }

        /// Find a pointer to the value associated with a key
        pub fn getPtr(self: Self, key: K) ?*V {
            return self.unmanaged.getPtrContext(key, self.ctx);
        }
        pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
            return self.unmanaged.getPtrAdapted(key, ctx);
        }

        /// Find the actual key associated with an adapted key
        pub fn getKey(self: Self, key: K) ?K {
            return self.unmanaged.getKeyContext(key, self.ctx);
        }
        pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
            return self.unmanaged.getKeyAdapted(key, ctx);
        }

        /// Find a pointer to the actual key associated with an adapted key
        pub fn getKeyPtr(self: Self, key: K) ?*K {
            return self.unmanaged.getKeyPtrContext(key, self.ctx);
        }
        pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
            return self.unmanaged.getKeyPtrAdapted(key, ctx);
        }

        /// Check whether a key is stored in the map
        pub fn contains(self: Self, key: K) bool {
            return self.unmanaged.containsContext(key, self.ctx);
        }
        pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
            return self.unmanaged.containsAdapted(key, ctx);
        }

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map, and then returned from this function. The entry is
        /// removed from the underlying array by swapping it with the last
        /// element.
        pub fn fetchSwapRemove(self: *Self, key: K) ?KV {
            return self.unmanaged.fetchSwapRemoveContext(key, self.ctx);
        }
        pub fn fetchSwapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
            return self.unmanaged.fetchSwapRemoveContextAdapted(key, ctx, self.ctx);
        }

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map, and then returned from this function. The entry is
        /// removed from the underlying array by shifting all elements forward,
        /// thereby maintaining the current ordering.
        pub fn fetchOrderedRemove(self: *Self, key: K) ?KV {
            return self.unmanaged.fetchOrderedRemoveContext(key, self.ctx);
        }
        pub fn fetchOrderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
            return self.unmanaged.fetchOrderedRemoveContextAdapted(key, ctx, self.ctx);
        }

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map. The entry is removed from the underlying array
        /// by swapping it with the last element.  Returns true if an entry
        /// was removed, false otherwise.
        pub fn swapRemove(self: *Self, key: K) bool {
            return self.unmanaged.swapRemoveContext(key, self.ctx);
        }
        pub fn swapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
            return self.unmanaged.swapRemoveContextAdapted(key, ctx, self.ctx);
        }

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map. The entry is removed from the underlying array
        /// by shifting all elements forward, thereby maintaining the
        /// current ordering.  Returns true if an entry was removed, false otherwise.
        pub fn orderedRemove(self: *Self, key: K) bool {
            return self.unmanaged.orderedRemoveContext(key, self.ctx);
        }
        pub fn orderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
            return self.unmanaged.orderedRemoveContextAdapted(key, ctx, self.ctx);
        }

        /// Deletes the item at the specified index in `entries` from
        /// the hash map. The entry is removed from the underlying array
        /// by swapping it with the last element.
        pub fn swapRemoveAt(self: *Self, index: usize) void {
            self.unmanaged.swapRemoveAtContext(index, self.ctx);
        }

        /// Deletes the item at the specified index in `entries` from
        /// the hash map. The entry is removed from the underlying array
        /// by shifting all elements forward, thereby maintaining the
        /// current ordering.
        pub fn orderedRemoveAt(self: *Self, index: usize) void {
            self.unmanaged.orderedRemoveAtContext(index, self.ctx);
        }

        /// Create a copy of the hash map which can be modified separately.
        /// The copy uses the same context and allocator as this instance.
        pub fn clone(self: Self) !Self {
            var other = try self.unmanaged.cloneContext(self.allocator, self.ctx);
            return other.promoteContext(self.allocator, self.ctx);
        }
        /// Create a copy of the hash map which can be modified separately.
        /// The copy uses the same context as this instance, but the specified
        /// allocator.
        pub fn cloneWithAllocator(self: Self, allocator: Allocator) !Self {
            var other = try self.unmanaged.cloneContext(allocator, self.ctx);
            return other.promoteContext(allocator, self.ctx);
        }
        /// Create a copy of the hash map which can be modified separately.
        /// The copy uses the same allocator as this instance, but the
        /// specified context.
        pub fn cloneWithContext(self: Self, ctx: anytype) !ArrayHashMap(K, V, @TypeOf(ctx), store_hash) {
            var other = try self.unmanaged.cloneContext(self.allocator, ctx);
            return other.promoteContext(self.allocator, ctx);
        }
        /// Create a copy of the hash map which can be modified separately.
        /// The copy uses the specified allocator and context.
        pub fn cloneWithAllocatorAndContext(self: Self, allocator: Allocator, ctx: anytype) !ArrayHashMap(K, V, @TypeOf(ctx), store_hash) {
            var other = try self.unmanaged.cloneContext(allocator, ctx);
            return other.promoteContext(allocator, ctx);
        }

        /// Set the map to an empty state, making deinitialization a no-op, and
        /// returning a copy of the original.
        pub fn move(self: *Self) Self {
            self.unmanaged.pointer_stability.assertUnlocked();
            const result = self.*;
            self.unmanaged = .empty;
            return result;
        }

        /// Recomputes stored hashes and rebuilds the key indexes. If the
        /// underlying keys have been modified directly, call this method to
        /// recompute the denormalized metadata necessary for the operation of
        /// the methods of this map that look up entries by key.
        ///
        /// One use case for this is directly calling `entries.resize()` to grow
        /// the underlying storage, and then setting the `keys` and `values`
        /// directly without going through the methods of this map.
        ///
        /// The time complexity of this operation is O(n).
        pub fn reIndex(self: *Self) !void {
            return self.unmanaged.reIndexContext(self.allocator, self.ctx);
        }

        /// Sorts the entries and then rebuilds the index.
        /// `sort_ctx` must have this method:
        /// `fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool`
        pub fn sort(self: *Self, sort_ctx: anytype) void {
            return self.unmanaged.sortContext(sort_ctx, self.ctx);
        }

        /// Shrinks the underlying `Entry` array to `new_len` elements and
        /// discards any associated index entries. Keeps capacity the same.
        ///
        /// Asserts the discarded entries remain initialized and capable of
        /// performing hash and equality checks. Any deinitialization of
        /// discarded entries must take place *after* calling this function.
        pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
            return self.unmanaged.shrinkRetainingCapacityContext(new_len, self.ctx);
        }

        /// Shrinks the underlying `Entry` array to `new_len` elements and
        /// discards any associated index entries. Reduces allocated capacity.
        ///
        /// Asserts the discarded entries remain initialized and capable of
        /// performing hash and equality checks. It is a bug to call this
        /// function if the discarded entries require deinitialization. For
        /// that use case, `shrinkRetainingCapacity` can be used instead.
        pub fn shrinkAndFree(self: *Self, new_len: usize) void {
            return self.unmanaged.shrinkAndFreeContext(self.allocator, new_len, self.ctx);
        }

        /// Removes the last inserted `Entry` in the hash map and returns it if count is nonzero.
        /// Otherwise returns null.
        pub fn pop(self: *Self) ?KV {
            return self.unmanaged.popContext(self.ctx);
        }
    };
}

Type Function ArrayHashMapUnmanaged

A hash table of keys and values, each stored sequentially.

Insertion order is preserved. In general, this data structure supports the same operations as std.ArrayListUnmanaged.

Deletion operations:

  • swapRemove - O(1)
  • orderedRemove - O(N)

Modifying the hash map while iterating is allowed; however, one must understand the (well-defined) behavior when mixing insertions and deletions with iteration.

This type does not store an Allocator field - the Allocator must be passed in with each function call that requires it. See ArrayHashMap for a type that stores an Allocator field for convenience.

Can be initialized directly using the default field values.

This type is designed to have low overhead for small numbers of entries. When store_hash is false and the number of entries in the map is less than 9, the overhead cost of using ArrayHashMapUnmanaged rather than std.ArrayList is only a single pointer-sized integer.

Default initialization of this struct is deprecated; use .empty instead.
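
A minimal sketch of the unmanaged flavor, passing the allocator explicitly (again assuming std.array_hash_map.AutoContext):

const std = @import("std");

test "ArrayHashMapUnmanaged basics (sketch)" {
    const gpa = std.testing.allocator;
    var map: std.ArrayHashMapUnmanaged(u32, u32, std.array_hash_map.AutoContext(u32), false) = .empty;
    defer map.deinit(gpa);

    // Allocating methods take the Allocator as an explicit argument.
    try map.put(gpa, 1, 10);
    try std.testing.expectEqual(@as(?u32, 10), map.get(1));
}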

Parameters

K: type
V: type
Context: type

A namespace that provides these two functions:

  • pub fn hash(self, K) u32
  • pub fn eql(self, K, K, usize) bool

The final usize in the eql function represents the index of the key that's already inside the map.

store_hash: bool

When false, this data structure is biased towards cheap eql functions and avoids storing each key's hash in the table. Setting store_hash to true incurs more memory cost but limits eql to being called only once per insertion/deletion (provided there are no hash collisions).

Types

Type DataList

The MultiArrayList type backing this map.

Source Code

pub const DataList = std.MultiArrayList(Data)

Type Hash

The stored hash type, either u32 or void.

Source Code

pub const Hash = if (store_hash) u32 else void

Type Managed

The ArrayHashMap type using the same settings as this unmanaged map.

Source Code

pub const Managed = ArrayHashMap(K, V, Context, store_hash)

Fields

entries: DataList = .{}

It is permitted to access this field directly. After any modification to the keys, consider calling reIndex.

index_header: ?*IndexHeader = null

When entries length is less than linear_scan_max, this remains null. Once entries length grows big enough, this field is allocated. There is an IndexHeader followed by an array of Index(I) structs, where I is defined by how many total indexes there are.

pointer_stability: std.debug.SafetyLock = .{}

Used to detect memory safety violations.

Values

Constant empty

A map containing no keys or values.

Source Code

pub const empty: Self = .{
    .entries = .{},
    .index_header = null,
}

Functions

Function promote

pub fn promote(self: Self, gpa: Allocator) Managed

Convert from an unmanaged map to a managed map. After calling this, the original unmanaged map should no longer be used.

Parameters

self: Self

Source Code

pub fn promote(self: Self, gpa: Allocator) Managed {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call promoteContext instead.");
    return self.promoteContext(gpa, undefined);
}
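
A sketch of promoting an unmanaged map; the unmanaged value must not be touched afterwards:

const std = @import("std");

test "promote (sketch)" {
    const gpa = std.testing.allocator;
    var unmanaged: std.ArrayHashMapUnmanaged(u32, u32, std.array_hash_map.AutoContext(u32), false) = .empty;
    try unmanaged.put(gpa, 1, 10);

    var managed = unmanaged.promote(gpa);
    defer managed.deinit(); // frees the backing allocation, now owned here
    try managed.put(2, 20);
}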

Function promoteContext

pub fn promoteContext(self: Self, gpa: Allocator, ctx: Context) Managed

Parameters

self: Self
ctx: Context

Source Code

pub fn promoteContext(self: Self, gpa: Allocator, ctx: Context) Managed {
    return .{
        .unmanaged = self,
        .allocator = gpa,
        .ctx = ctx,
    };
}

Function init

pub fn init(gpa: Allocator, key_list: []const K, value_list: []const V) Oom!Self

Parameters

key_list: []const K
value_list: []const V

Source Code

pub fn init(gpa: Allocator, key_list: []const K, value_list: []const V) Oom!Self {
    var self: Self = .{};
    errdefer self.deinit(gpa);
    try self.reinit(gpa, key_list, value_list);
    return self;
}

Function reinit

pub fn reinit(self: *Self, gpa: Allocator, key_list: []const K, value_list: []const V) Oom!void

An empty value_list may be passed, in which case the values array becomes undefined.

Parameters

self: *Self
key_list: []const K
value_list: []const V

Source Code

pub fn reinit(self: *Self, gpa: Allocator, key_list: []const K, value_list: []const V) Oom!void {
    try self.entries.resize(gpa, key_list.len);
    @memcpy(self.keys(), key_list);
    if (value_list.len == 0) {
        @memset(self.values(), undefined);
    } else {
        assert(key_list.len == value_list.len);
        @memcpy(self.values(), value_list);
    }
    try self.reIndex(gpa);
}

Function deinit

pub fn deinit(self: *Self, gpa: Allocator) void

Frees the backing allocation and leaves the map in an undefined state. Note that this does not free keys or values. You must take care of that before calling this function, if it is needed.

Parameters

self: *Self

Source Code

pub fn deinit(self: *Self, gpa: Allocator) void {
    self.pointer_stability.assertUnlocked();
    self.entries.deinit(gpa);
    if (self.index_header) |header| {
        header.free(gpa);
    }
    self.* = undefined;
}

Function lockPointers

pub fn lockPointers(self: *Self) void

Puts the hash map into a state where any method call that would cause an existing key or value pointer to become invalidated will instead trigger an assertion.

An additional call to lockPointers in such state also triggers an assertion.

unlockPointers returns the hash map to the previous state.

Parameters

self: *Self

Source Code

pub fn lockPointers(self: *Self) void {
    self.pointer_stability.lock();
}

Function unlockPointers

pub fn unlockPointers(self: *Self) void

Undoes a call to lockPointers.

Parameters

self: *Self

Source Code

pub fn unlockPointers(self: *Self) void {
    self.pointer_stability.unlock();
}

Function clearRetainingCapacity

pub fn clearRetainingCapacity(self: *Self) void

Clears the map but retains the backing allocation for future use.

Parameters

self: *Self

Source Code

pub fn clearRetainingCapacity(self: *Self) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.entries.len = 0;
    if (self.index_header) |header| {
        switch (header.capacityIndexType()) {
            .u8 => @memset(header.indexes(u8), Index(u8).empty),
            .u16 => @memset(header.indexes(u16), Index(u16).empty),
            .u32 => @memset(header.indexes(u32), Index(u32).empty),
        }
    }
}

Function clearAndFree

pub fn clearAndFree(self: *Self, gpa: Allocator) void

Clears the map and releases the backing allocation.

Parameters

self: *Self

Source Code

pub fn clearAndFree(self: *Self, gpa: Allocator) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.entries.shrinkAndFree(gpa, 0);
    if (self.index_header) |header| {
        header.free(gpa);
        self.index_header = null;
    }
}

Function count

pub fn count(self: Self) usize

Returns the number of KV pairs stored in this map.

Parameters

self: Self

Source Code

pub fn count(self: Self) usize {
    return self.entries.len;
}

Function keys

pub fn keys(self: Self) []K

Returns the backing array of keys in this map. Modifying the map may invalidate this array. Modifying this array in a way that changes key hashes or key equality puts the map into an unusable state until reIndex is called.

Parameters

self: Self

Source Code

pub fn keys(self: Self) []K {
    return self.entries.items(.key);
}

Function values

pub fn values(self: Self) []V

Returns the backing array of values in this map. Modifying the map may invalidate this array. It is permitted to modify the values in this array.

Parameters

self: Self

Source Code

pub fn values(self: Self) []V {
    return self.entries.items(.value);
}

Function iterator

pub fn iterator(self: Self) Iterator

Returns an iterator over the pairs in this map. Modifying the map may invalidate this iterator.

Parameters

self: Self

Source Code

pub fn iterator(self: Self) Iterator {
    const slice = self.entries.slice();
    return .{
        .keys = slice.items(.key).ptr,
        .values = slice.items(.value).ptr,
        .len = @as(u32, @intCast(slice.len)),
    };
}

Function getOrPut

pub fn getOrPut(self: *Self, gpa: Allocator, key: K) Oom!GetOrPutResult

If key exists, this function cannot fail. If there is an existing item with key, then the result Entry pointer points to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointer points to it. Caller should then initialize the value (but not the key).

Parameters

self: *Self
key: K

Source Code

pub fn getOrPut(self: *Self, gpa: Allocator, key: K) Oom!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContext instead.");
    return self.getOrPutContext(gpa, key, undefined);
}

Function getOrPutContext

pub fn getOrPutContext(self: *Self, gpa: Allocator, key: K, ctx: Context) Oom!GetOrPutResult

Parameters

self: *Self
key: K
ctx: Context

Source Code

pub fn getOrPutContext(self: *Self, gpa: Allocator, key: K, ctx: Context) Oom!GetOrPutResult {
    const gop = try self.getOrPutContextAdapted(gpa, key, ctx, ctx);
    if (!gop.found_existing) {
        gop.key_ptr.* = key;
    }
    return gop;
}

Function getOrPutAdapted

pub fn getOrPutAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype) Oom!GetOrPutResult

Parameters

self: *Self

Source Code

pub fn getOrPutAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype) Oom!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContextAdapted instead.");
    return self.getOrPutContextAdapted(gpa, key, key_ctx, undefined);
}

Function getOrPutContextAdapted

pub fn getOrPutContextAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Oom!GetOrPutResult

Parameters

self: *Self
ctx: Context

Source Code

pub fn getOrPutContextAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Oom!GetOrPutResult {
    self.ensureTotalCapacityContext(gpa, self.entries.len + 1, ctx) catch |err| {
        // "If key exists this function cannot fail."
        const index = self.getIndexAdapted(key, key_ctx) orelse return err;
        const slice = self.entries.slice();
        return GetOrPutResult{
            .key_ptr = &slice.items(.key)[index],
            // workaround for #6974
            .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[index],
            .found_existing = true,
            .index = index,
        };
    };
    return self.getOrPutAssumeCapacityAdapted(key, key_ctx);
}

Function getOrPutAssumeCapacity

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult

If there is an existing item with key, then the result Entry pointer points to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointer points to it. Caller should then initialize the value (but not the key). If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutAssumeCapacityContext instead.");
    return self.getOrPutAssumeCapacityContext(key, undefined);
}
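
One way to use the AssumeCapacity variants is to make a single fallible reservation up front and then insert infallibly. A sketch under the same assumptions as the earlier examples (std.AutoArrayHashMapUnmanaged, std.testing.allocator); ensureUnusedCapacity is documented further below:

const std = @import("std");

test "reserve once, then insert without allocating" {
    // Illustrative sketch; the reservation is the only step that can fail.
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);

    try map.ensureUnusedCapacity(gpa, 3);
    map.putAssumeCapacity(1, 10);
    map.putAssumeCapacity(2, 20);
    const gop = map.getOrPutAssumeCapacity(3);
    if (!gop.found_existing) gop.value_ptr.* = 30;

    try std.testing.expectEqual(@as(usize, 3), map.count());
}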

FunctiongetOrPutAssumeCapacityContext[src]

pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult {
    const gop = self.getOrPutAssumeCapacityAdapted(key, ctx);
    if (!gop.found_existing) {
        gop.key_ptr.* = key;
    }
    return gop;
}

FunctiongetOrPutAssumeCapacityAdapted[src]

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult

If there is an existing item with key, then the result Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined key and value, and the Entry pointers point to it. Caller must then initialize both the key and the value. If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {
    const header = self.index_header orelse {
        // Linear scan.
        const h = if (store_hash) checkedHash(ctx, key) else {};
        const slice = self.entries.slice();
        const hashes_array = slice.items(.hash);
        const keys_array = slice.items(.key);
        for (keys_array, 0..) |*item_key, i| {
            if (hashes_array[i] == h and checkedEql(ctx, key, item_key.*, i)) {
                return GetOrPutResult{
                    .key_ptr = item_key,
                    // workaround for #6974
                    .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[i],
                    .found_existing = true,
                    .index = i,
                };
            }
        }

        const index = self.entries.addOneAssumeCapacity();
        // The slice length changed, so we directly index the pointer.
        if (store_hash) hashes_array.ptr[index] = h;

        return GetOrPutResult{
            .key_ptr = &keys_array.ptr[index],
            // workaround for #6974
            .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value).ptr[index],
            .found_existing = false,
            .index = index,
        };
    };

    switch (header.capacityIndexType()) {
        .u8 => return self.getOrPutInternal(key, ctx, header, u8),
        .u16 => return self.getOrPutInternal(key, ctx, header, u16),
        .u32 => return self.getOrPutInternal(key, ctx, header, u32),
    }
}

FunctiongetOrPutValue[src]

pub fn getOrPutValue(self: *Self, gpa: Allocator, key: K, value: V) Oom!GetOrPutResult

Parameters

self: *Self
gpa: Allocator
key: K
value: V

Source Code

Source code
pub fn getOrPutValue(self: *Self, gpa: Allocator, key: K, value: V) Oom!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutValueContext instead.");
    return self.getOrPutValueContext(gpa, key, value, undefined);
}
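
getOrPutValue is convenient for counters: the default value is written only when the key is new, so the increment below is unconditional. A sketch assuming the std.StringArrayHashMapUnmanaged wrapper and std.testing.allocator:

const std = @import("std");

test "count occurrences with getOrPutValue" {
    // Illustrative sketch; string keys are not copied by the map.
    const gpa = std.testing.allocator;
    var counts: std.StringArrayHashMapUnmanaged(u32) = .empty;
    defer counts.deinit(gpa);

    const words = [_][]const u8{ "a", "b", "a" };
    for (words) |w| {
        const gop = try counts.getOrPutValue(gpa, w, 0); // 0 only if newly inserted
        gop.value_ptr.* += 1;
    }
    try std.testing.expectEqual(@as(u32, 2), counts.get("a").?);
}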

FunctiongetOrPutValueContext[src]

pub fn getOrPutValueContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!GetOrPutResult

Parameters

self: *Self
gpa: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn getOrPutValueContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!GetOrPutResult {
    const res = try self.getOrPutContextAdapted(gpa, key, ctx, ctx);
    if (!res.found_existing) {
        res.key_ptr.* = key;
        res.value_ptr.* = value;
    }
    return res;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Oom!void

Increases capacity, guaranteeing that insertions up until new_capacity total entries will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
gpa: Allocator
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Oom!void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
    return self.ensureTotalCapacityContext(gpa, new_capacity, undefined);
}

FunctionensureTotalCapacityContext[src]

pub fn ensureTotalCapacityContext(self: *Self, gpa: Allocator, new_capacity: usize, ctx: Context) Oom!void

Parameters

self: *Self
gpa: Allocator
new_capacity: usize
ctx: Context

Source Code

Source code
pub fn ensureTotalCapacityContext(self: *Self, gpa: Allocator, new_capacity: usize, ctx: Context) Oom!void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    if (new_capacity <= linear_scan_max) {
        try self.entries.ensureTotalCapacity(gpa, new_capacity);
        return;
    }

    if (self.index_header) |header| {
        if (new_capacity <= header.capacity()) {
            try self.entries.ensureTotalCapacity(gpa, new_capacity);
            return;
        }
    }

    try self.entries.ensureTotalCapacity(gpa, new_capacity);
    const new_bit_index = try IndexHeader.findBitIndex(new_capacity);
    const new_header = try IndexHeader.alloc(gpa, new_bit_index);

    if (self.index_header) |old_header| old_header.free(gpa);
    self.insertAllEntriesIntoNewHeader(if (store_hash) {} else ctx, new_header);
    self.index_header = new_header;
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity( self: *Self, gpa: Allocator, additional_capacity: usize, ) Oom!void

Increases capacity, guaranteeing that insertions of up to additional_capacity more items will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
gpa: Allocator
additional_capacity: usize

Source Code

Source code
pub fn ensureUnusedCapacity(
    self: *Self,
    gpa: Allocator,
    additional_capacity: usize,
) Oom!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
    return self.ensureUnusedCapacityContext(gpa, additional_capacity, undefined);
}

FunctionensureUnusedCapacityContext[src]

pub fn ensureUnusedCapacityContext( self: *Self, gpa: Allocator, additional_capacity: usize, ctx: Context, ) Oom!void

Parameters

self: *Self
gpa: Allocator
additional_capacity: usize
ctx: Context

Source Code

Source code
pub fn ensureUnusedCapacityContext(
    self: *Self,
    gpa: Allocator,
    additional_capacity: usize,
    ctx: Context,
) Oom!void {
    return self.ensureTotalCapacityContext(gpa, self.count() + additional_capacity, ctx);
}

Functioncapacity[src]

pub fn capacity(self: Self) usize

Returns the number of total elements which may be present before it is no longer guaranteed that no allocations will be performed.

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) usize {
    const entry_cap = self.entries.capacity;
    const header = self.index_header orelse return @min(linear_scan_max, entry_cap);
    const indexes_cap = header.capacity();
    return @min(entry_cap, indexes_cap);
}

Functionput[src]

pub fn put(self: *Self, gpa: Allocator, key: K, value: V) Oom!void

Clobbers any existing data. To detect if a put would clobber existing data, see getOrPut.

Parameters

self: *Self
gpa: Allocator
key: K
value: V

Source Code

Source code
pub fn put(self: *Self, gpa: Allocator, key: K, value: V) Oom!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putContext instead.");
    return self.putContext(gpa, key, value, undefined);
}

FunctionputContext[src]

pub fn putContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void

Parameters

self: *Self
gpa: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void {
    const result = try self.getOrPutContext(gpa, key, ctx);
    result.value_ptr.* = value;
}

FunctionputNoClobber[src]

pub fn putNoClobber(self: *Self, gpa: Allocator, key: K, value: V) Oom!void

Inserts a key-value pair into the hash map, asserting that no entry with the same key is already present.

Parameters

self: *Self
gpa: Allocator
key: K
value: V

Source Code

Source code
pub fn putNoClobber(self: *Self, gpa: Allocator, key: K, value: V) Oom!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putNoClobberContext instead.");
    return self.putNoClobberContext(gpa, key, value, undefined);
}

FunctionputNoClobberContext[src]

pub fn putNoClobberContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void

Parameters

self: *Self
gpa: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putNoClobberContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void {
    const result = try self.getOrPutContext(gpa, key, ctx);
    assert(!result.found_existing);
    result.value_ptr.* = value;
}

FunctionputAssumeCapacity[src]

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Clobbers any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityContext instead.");
    return self.putAssumeCapacityContext(key, value, undefined);
}

FunctionputAssumeCapacityContext[src]

pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void {
    const result = self.getOrPutAssumeCapacityContext(key, ctx);
    result.value_ptr.* = value;
}

FunctionputAssumeCapacityNoClobber[src]

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Asserts that it does not clobber any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityNoClobberContext instead.");
    return self.putAssumeCapacityNoClobberContext(key, value, undefined);
}

FunctionputAssumeCapacityNoClobberContext[src]

pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void {
    const result = self.getOrPutAssumeCapacityContext(key, ctx);
    assert(!result.found_existing);
    result.value_ptr.* = value;
}

FunctionfetchPut[src]

pub fn fetchPut(self: *Self, gpa: Allocator, key: K, value: V) Oom!?KV

Inserts a new Entry into the hash map, returning the previous one, if any.

Parameters

self: *Self
gpa: Allocator
key: K
value: V

Source Code

Source code
pub fn fetchPut(self: *Self, gpa: Allocator, key: K, value: V) Oom!?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutContext instead.");
    return self.fetchPutContext(gpa, key, value, undefined);
}
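
A sketch of fetchPut returning the replaced pair (same assumptions as the earlier examples: std.AutoArrayHashMapUnmanaged, std.testing.allocator):

const std = @import("std");

test "fetchPut returns the replaced entry" {
    // Illustrative sketch; KV is the copied-out pair documented above.
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);

    const first = try map.fetchPut(gpa, 1, 10);
    try std.testing.expect(first == null); // nothing was replaced

    const replaced = (try map.fetchPut(gpa, 1, 11)).?;
    try std.testing.expectEqual(@as(u32, 10), replaced.value);
    try std.testing.expectEqual(@as(u32, 11), map.get(1).?);
}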

FunctionfetchPutContext[src]

pub fn fetchPutContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!?KV

Parameters

self: *Self
gpa: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!?KV {
    const gop = try self.getOrPutContext(gpa, key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctionfetchPutAssumeCapacity[src]

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV

Inserts a new Entry into the hash map, returning the previous one, if any. If insertion happens, asserts there is enough capacity without allocating.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutAssumeCapacityContext instead.");
    return self.fetchPutAssumeCapacityContext(key, value, undefined);
}

FunctionfetchPutAssumeCapacityContext[src]

pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV {
    const gop = self.getOrPutAssumeCapacityContext(key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctiongetEntry[src]

pub fn getEntry(self: Self, key: K) ?Entry

Finds pointers to the key and value storage associated with a key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getEntry(self: Self, key: K) ?Entry {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getEntryContext instead.");
    return self.getEntryContext(key, undefined);
}

FunctiongetEntryContext[src]

pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry {
    return self.getEntryAdapted(key, ctx);
}

FunctiongetEntryAdapted[src]

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry

Parameters

self: Self

Source Code

Source code
pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    const slice = self.entries.slice();
    return Entry{
        .key_ptr = &slice.items(.key)[index],
        // workaround for #6974
        .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[index],
    };
}

FunctiongetIndex[src]

pub fn getIndex(self: Self, key: K) ?usize

Finds the index in the entries array where a key is stored.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getIndex(self: Self, key: K) ?usize {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getIndexContext instead.");
    return self.getIndexContext(key, undefined);
}

FunctiongetIndexContext[src]

pub fn getIndexContext(self: Self, key: K, ctx: Context) ?usize

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getIndexContext(self: Self, key: K, ctx: Context) ?usize {
    return self.getIndexAdapted(key, ctx);
}

FunctiongetIndexAdapted[src]

pub fn getIndexAdapted(self: Self, key: anytype, ctx: anytype) ?usize

Parameters

self: Self

Source Code

Source code
pub fn getIndexAdapted(self: Self, key: anytype, ctx: anytype) ?usize {
    const header = self.index_header orelse {
        // Linear scan.
        const h = if (store_hash) checkedHash(ctx, key) else {};
        const slice = self.entries.slice();
        const hashes_array = slice.items(.hash);
        const keys_array = slice.items(.key);
        for (keys_array, 0..) |*item_key, i| {
            if (hashes_array[i] == h and checkedEql(ctx, key, item_key.*, i)) {
                return i;
            }
        }
        return null;
    };
    switch (header.capacityIndexType()) {
        .u8 => return self.getIndexWithHeaderGeneric(key, ctx, header, u8),
        .u16 => return self.getIndexWithHeaderGeneric(key, ctx, header, u16),
        .u32 => return self.getIndexWithHeaderGeneric(key, ctx, header, u32),
    }
}
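
The Adapted variants let a lookup key have a different type than the stored key, as long as the adapter context hashes it identically. The sketch below is hypothetical (the IndexContext and SliceAdapter types and the byte buffer are invented for illustration): stored keys are u32 offsets into a buffer of null-terminated strings, but lookups use plain slices, so no offset needs to exist just to query.

const std = @import("std");

// Context for the stored keys, matching the required shape:
// hash(self, K) u32 and eql(self, K, K, usize) bool.
const IndexContext = struct {
    bytes: []const u8,
    pub fn hash(self: IndexContext, key: u32) u32 {
        const str = std.mem.sliceTo(self.bytes[key..], 0);
        return @truncate(std.hash.Wyhash.hash(0, str));
    }
    pub fn eql(self: IndexContext, a: u32, b: u32, b_index: usize) bool {
        _ = self;
        _ = b_index;
        return a == b;
    }
};

// Adapter: hashes a slice exactly the way IndexContext hashes an offset.
const SliceAdapter = struct {
    bytes: []const u8,
    pub fn hash(self: SliceAdapter, key: []const u8) u32 {
        _ = self;
        return @truncate(std.hash.Wyhash.hash(0, key));
    }
    pub fn eql(self: SliceAdapter, a: []const u8, b: u32, b_index: usize) bool {
        _ = b_index;
        const b_str = std.mem.sliceTo(self.bytes[b..], 0);
        return std.mem.eql(u8, a, b_str);
    }
};

test "adapted lookup by slice" {
    const gpa = std.testing.allocator;
    const bytes = "alpha\x00beta\x00";
    var map: std.ArrayHashMapUnmanaged(u32, void, IndexContext, true) = .empty;
    defer map.deinit(gpa);

    const ctx = IndexContext{ .bytes = bytes };
    try map.putContext(gpa, 0, {}, ctx); // "alpha" starts at offset 0
    try map.putContext(gpa, 6, {}, ctx); // "beta" starts at offset 6

    const adapter = SliceAdapter{ .bytes = bytes };
    try std.testing.expect(map.containsAdapted("beta", adapter));
    try std.testing.expectEqual(@as(?usize, 1), map.getIndexAdapted("beta", adapter));
}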

Functionget[src]

pub fn get(self: Self, key: K) ?V

Finds the value associated with a key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn get(self: Self, key: K) ?V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getContext instead.");
    return self.getContext(key, undefined);
}

FunctiongetContext[src]

pub fn getContext(self: Self, key: K, ctx: Context) ?V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getContext(self: Self, key: K, ctx: Context) ?V {
    return self.getAdapted(key, ctx);
}

FunctiongetAdapted[src]

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V

Parameters

self: Self

Source Code

Source code
pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    return self.values()[index];
}

FunctiongetPtr[src]

pub fn getPtr(self: Self, key: K) ?*V

Finds a pointer to the value associated with a key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getPtr(self: Self, key: K) ?*V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getPtrContext instead.");
    return self.getPtrContext(key, undefined);
}
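
get returns a copy of the value, while getPtr returns a pointer into the backing store, so only the latter supports in-place mutation. A sketch (same assumptions as the earlier examples):

const std = @import("std");

test "get copies, getPtr mutates in place" {
    // Illustrative sketch; getPtr pointers are invalidated by map modification.
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 1, 10);

    var copy = map.get(1).?; // a copy; changing it leaves the map untouched
    copy += 1;
    try std.testing.expectEqual(@as(u32, 11), copy);
    try std.testing.expectEqual(@as(u32, 10), map.get(1).?);

    map.getPtr(1).?.* += 5; // a pointer into the backing store
    try std.testing.expectEqual(@as(u32, 15), map.get(1).?);
}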

FunctiongetPtrContext[src]

pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V {
    return self.getPtrAdapted(key, ctx);
}

FunctiongetPtrAdapted[src]

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V

Parameters

self: Self

Source Code

Source code
pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    // workaround for #6974
    return if (@sizeOf(*V) == 0) @as(*V, undefined) else &self.values()[index];
}

FunctiongetKey[src]

pub fn getKey(self: Self, key: K) ?K

Finds the actual key associated with an adapted key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKey(self: Self, key: K) ?K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyContext instead.");
    return self.getKeyContext(key, undefined);
}

FunctiongetKeyContext[src]

pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K {
    return self.getKeyAdapted(key, ctx);
}

FunctiongetKeyAdapted[src]

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K

Parameters

self: Self

Source Code

Source code
pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    return self.keys()[index];
}

FunctiongetKeyPtr[src]

pub fn getKeyPtr(self: Self, key: K) ?*K

Finds a pointer to the actual key associated with an adapted key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKeyPtr(self: Self, key: K) ?*K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyPtrContext instead.");
    return self.getKeyPtrContext(key, undefined);
}

FunctiongetKeyPtrContext[src]

pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K {
    return self.getKeyPtrAdapted(key, ctx);
}

FunctiongetKeyPtrAdapted[src]

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K

Parameters

self: Self

Source Code

Source code
pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    return &self.keys()[index];
}

Functioncontains[src]

pub fn contains(self: Self, key: K) bool

Checks whether a key is stored in the map.

Parameters

self: Self
key: K

Source Code

Source code
pub fn contains(self: Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call containsContext instead.");
    return self.containsContext(key, undefined);
}

FunctioncontainsContext[src]

pub fn containsContext(self: Self, key: K, ctx: Context) bool

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn containsContext(self: Self, key: K, ctx: Context) bool {
    return self.containsAdapted(key, ctx);
}

FunctioncontainsAdapted[src]

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool

Parameters

self: Self

Source Code

Source code
pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
    return self.getIndexAdapted(key, ctx) != null;
}

FunctionfetchSwapRemove[src]

pub fn fetchSwapRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function. The entry is removed from the underlying array by swapping it with the last element.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchSwapRemove(self: *Self, key: K) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchSwapRemoveContext instead.");
    return self.fetchSwapRemoveContext(key, undefined);
}

FunctionfetchSwapRemoveContext[src]

pub fn fetchSwapRemoveContext(self: *Self, key: K, ctx: Context) ?KV

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn fetchSwapRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
    return self.fetchSwapRemoveContextAdapted(key, ctx, ctx);
}

FunctionfetchSwapRemoveAdapted[src]

pub fn fetchSwapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchSwapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchSwapRemoveContextAdapted instead.");
    return self.fetchSwapRemoveContextAdapted(key, ctx, undefined);
}

FunctionfetchSwapRemoveContextAdapted[src]

pub fn fetchSwapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn fetchSwapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.fetchRemoveByKey(key, key_ctx, if (store_hash) {} else ctx, .swap);
}

FunctionfetchOrderedRemove[src]

pub fn fetchOrderedRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchOrderedRemove(self: *Self, key: K) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchOrderedRemoveContext instead.");
    return self.fetchOrderedRemoveContext(key, undefined);
}

FunctionfetchOrderedRemoveContext[src]

pub fn fetchOrderedRemoveContext(self: *Self, key: K, ctx: Context) ?KV

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn fetchOrderedRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
    return self.fetchOrderedRemoveContextAdapted(key, ctx, ctx);
}

FunctionfetchOrderedRemoveAdapted[src]

pub fn fetchOrderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchOrderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchOrderedRemoveContextAdapted instead.");
    return self.fetchOrderedRemoveContextAdapted(key, ctx, undefined);
}

FunctionfetchOrderedRemoveContextAdapted[src]

pub fn fetchOrderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn fetchOrderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.fetchRemoveByKey(key, key_ctx, if (store_hash) {} else ctx, .ordered);
}

FunctionswapRemove[src]

pub fn swapRemove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map. The entry is removed from the underlying array by swapping it with the last element. Returns true if an entry was removed, false otherwise.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn swapRemove(self: *Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveContext instead.");
    return self.swapRemoveContext(key, undefined);
}

FunctionswapRemoveContext[src]

pub fn swapRemoveContext(self: *Self, key: K, ctx: Context) bool

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn swapRemoveContext(self: *Self, key: K, ctx: Context) bool {
    return self.swapRemoveContextAdapted(key, ctx, ctx);
}

FunctionswapRemoveAdapted[src]

pub fn swapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool

Parameters

self: *Self

Source Code

Source code
pub fn swapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveContextAdapted instead.");
    return self.swapRemoveContextAdapted(key, ctx, undefined);
}

FunctionswapRemoveContextAdapted[src]

pub fn swapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn swapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.removeByKey(key, key_ctx, if (store_hash) {} else ctx, .swap);
}

FunctionorderedRemove[src]

pub fn orderedRemove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering. Returns true if an entry was removed, false otherwise.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn orderedRemove(self: *Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveContext instead.");
    return self.orderedRemoveContext(key, undefined);
}
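
The difference between the two removal strategies shows up directly in keys(). A sketch (same assumptions as the earlier examples):

const std = @import("std");

test "swapRemove reorders, orderedRemove preserves order" {
    // Illustrative sketch using void values, i.e. the map as a set.
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, void) = .empty;
    defer map.deinit(gpa);
    for ([_]u32{ 1, 2, 3, 4 }) |k| try map.put(gpa, k, {});

    _ = map.swapRemove(1); // O(1): the last key (4) moves into slot 0
    try std.testing.expectEqualSlices(u32, &.{ 4, 2, 3 }, map.keys());

    _ = map.orderedRemove(2); // O(N): remaining keys keep their relative order
    try std.testing.expectEqualSlices(u32, &.{ 4, 3 }, map.keys());
}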

FunctionorderedRemoveContext[src]

pub fn orderedRemoveContext(self: *Self, key: K, ctx: Context) bool

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn orderedRemoveContext(self: *Self, key: K, ctx: Context) bool {
    return self.orderedRemoveContextAdapted(key, ctx, ctx);
}

FunctionorderedRemoveAdapted[src]

pub fn orderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool

Parameters

self: *Self

Source Code

Source code
pub fn orderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveContextAdapted instead.");
    return self.orderedRemoveContextAdapted(key, ctx, undefined);
}

FunctionorderedRemoveContextAdapted[src]

pub fn orderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn orderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.removeByKey(key, key_ctx, if (store_hash) {} else ctx, .ordered);
}

FunctionswapRemoveAt[src]

pub fn swapRemoveAt(self: *Self, index: usize) void

Deletes the item at the specified index in entries from the hash map. The entry is removed from the underlying array by swapping it with the last element.

Parameters

self: *Self
index: usize

Source Code

Source code
pub fn swapRemoveAt(self: *Self, index: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveAtContext instead.");
    return self.swapRemoveAtContext(index, undefined);
}

FunctionswapRemoveAtContext[src]

pub fn swapRemoveAtContext(self: *Self, index: usize, ctx: Context) void

Parameters

self: *Self
index: usize
ctx: Context

Source Code

Source code
pub fn swapRemoveAtContext(self: *Self, index: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.removeByIndex(index, if (store_hash) {} else ctx, .swap);
}

FunctionorderedRemoveAt[src]

pub fn orderedRemoveAt(self: *Self, index: usize) void

Deletes the item at the specified index in entries from the hash map. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering.

Parameters

self: *Self
index: usize

Source Code

Source code
pub fn orderedRemoveAt(self: *Self, index: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveAtContext instead.");
    return self.orderedRemoveAtContext(index, undefined);
}

FunctionorderedRemoveAtContext[src]

pub fn orderedRemoveAtContext(self: *Self, index: usize, ctx: Context) void

Parameters

self: *Self
index: usize
ctx: Context

Source Code

Source code
pub fn orderedRemoveAtContext(self: *Self, index: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.removeByIndex(index, if (store_hash) {} else ctx, .ordered);
}

Functionclone[src]

pub fn clone(self: Self, gpa: Allocator) Oom!Self

Create a copy of the hash map which can be modified separately. The copy uses the same context as this instance, but is allocated with the provided allocator.

Parameters

self: Self
gpa: Allocator

Source Code

Source code
pub fn clone(self: Self, gpa: Allocator) Oom!Self {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call cloneContext instead.");
    return self.cloneContext(gpa, undefined);
}

FunctioncloneContext[src]

pub fn cloneContext(self: Self, gpa: Allocator, ctx: Context) Oom!Self

Parameters

self: Self
gpa: Allocator
ctx: Context

Source Code

Source code
pub fn cloneContext(self: Self, gpa: Allocator, ctx: Context) Oom!Self {
    var other: Self = .{};
    other.entries = try self.entries.clone(gpa);
    errdefer other.entries.deinit(gpa);

    if (self.index_header) |header| {
        // TODO: I'm pretty sure this could be memcpy'd instead of
        // doing all this work.
        const new_header = try IndexHeader.alloc(gpa, header.bit_index);
        other.insertAllEntriesIntoNewHeader(if (store_hash) {} else ctx, new_header);
        other.index_header = new_header;
    }
    return other;
}

Functionmove[src]

pub fn move(self: *Self) Self

Sets the map to an empty state, making deinitialization a no-op, and returns a copy of the original.

Parameters

self: *Self

Source Code

Source code
pub fn move(self: *Self) Self {
    self.pointer_stability.assertUnlocked();
    const result = self.*;
    self.* = .empty;
    return result;
}
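
A sketch of the ownership handoff (same assumptions as the earlier examples):

const std = @import("std");

test "move transfers ownership of the entries" {
    // Illustrative sketch; after move, deinit of the source is a no-op.
    const gpa = std.testing.allocator;
    var a: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer a.deinit(gpa);
    try a.put(gpa, 1, 10);

    var b = a.move(); // b takes the backing storage; a is reset to .empty
    defer b.deinit(gpa);

    try std.testing.expectEqual(@as(usize, 0), a.count());
    try std.testing.expectEqual(@as(usize, 1), b.count());
}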

FunctionreIndex[src]

pub fn reIndex(self: *Self, gpa: Allocator) Oom!void

Recomputes stored hashes and rebuilds the key indexes. If the underlying keys have been modified directly, call this method to recompute the denormalized metadata necessary for the operation of the methods of this map that lookup entries by key.

One use case for this is directly calling entries.resize() to grow the underlying storage, and then setting the keys and values directly without going through the methods of this map.

The time complexity of this operation is O(n).

Parameters

self: *Self
gpa: Allocator

Source Code

Source code
pub fn reIndex(self: *Self, gpa: Allocator) Oom!void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call reIndexContext instead.");
    return self.reIndexContext(gpa, undefined);
}
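
A sketch of the repair workflow after direct key mutation (same assumptions as the earlier examples):

const std = @import("std");

test "reIndex after mutating keys directly" {
    // Illustrative sketch; keys() exposes the mutable backing array.
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 1, 10);

    map.keys()[0] = 2; // bypasses the map's bookkeeping
    try map.reIndex(gpa); // recompute hashes and rebuild the key index

    try std.testing.expect(map.get(1) == null);
    try std.testing.expectEqual(@as(u32, 10), map.get(2).?);
}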

FunctionreIndexContext[src]

pub fn reIndexContext(self: *Self, gpa: Allocator, ctx: Context) Oom!void

Parameters

self: *Self
gpa: Allocator
ctx: Context

Source Code

Source code
pub fn reIndexContext(self: *Self, gpa: Allocator, ctx: Context) Oom!void {
    // Recompute all hashes.
    if (store_hash) {
        for (self.keys(), self.entries.items(.hash)) |key, *hash| {
            const h = checkedHash(ctx, key);
            hash.* = h;
        }
    }
    try rebuildIndex(self, gpa, ctx);
}

FunctionsetKey[src]

pub fn setKey(self: *Self, gpa: Allocator, index: usize, new_key: K) Oom!void

Modify an entry's key without reordering any entries.

Parameters

self: *Self
gpa: Allocator
index: usize
new_key: K

Source Code

Source code
pub fn setKey(self: *Self, gpa: Allocator, index: usize, new_key: K) Oom!void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call setKeyContext instead.");
    return setKeyContext(self, gpa, index, new_key, undefined);
}

FunctionsetKeyContext[src]

pub fn setKeyContext(self: *Self, gpa: Allocator, index: usize, new_key: K, ctx: Context) Oom!void

Parameters

self: *Self
gpa: Allocator
index: usize
new_key: K
ctx: Context

Source Code

Source code
pub fn setKeyContext(self: *Self, gpa: Allocator, index: usize, new_key: K, ctx: Context) Oom!void {
    const key_ptr = &self.entries.items(.key)[index];
    key_ptr.* = new_key;
    if (store_hash) self.entries.items(.hash)[index] = checkedHash(ctx, key_ptr.*);
    try rebuildIndex(self, gpa, undefined);
}

Functionsort[src]

pub inline fn sort(self: *Self, sort_ctx: anytype) void

Sorts the entries and then rebuilds the index. sort_ctx must have this method: fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool. Uses a stable sorting algorithm.

Parameters

self: *Self

Source Code

Source code
pub inline fn sort(self: *Self, sort_ctx: anytype) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call sortContext instead.");
    return sortContextInternal(self, .stable, sort_ctx, undefined);
}
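
A sketch of a sort context ordering entries by key; the SortCtx type is invented for illustration, and the other assumptions match the earlier examples. Values travel with their keys, since the entries are reordered as whole rows:

const std = @import("std");

test "sort entries by key" {
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 3, 30);
    try map.put(gpa, 1, 10);
    try map.put(gpa, 2, 20);

    // Hypothetical context: compares entries by their current key values.
    const SortCtx = struct {
        keys: []const u32,
        pub fn lessThan(ctx: @This(), a_index: usize, b_index: usize) bool {
            return ctx.keys[a_index] < ctx.keys[b_index];
        }
    };
    map.sort(SortCtx{ .keys = map.keys() });

    try std.testing.expectEqualSlices(u32, &.{ 1, 2, 3 }, map.keys());
    try std.testing.expectEqualSlices(u32, &.{ 10, 20, 30 }, map.values());
}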

FunctionsortUnstable[src]

pub inline fn sortUnstable(self: *Self, sort_ctx: anytype) void

Sorts the entries and then rebuilds the index. sort_ctx must have this method: fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool. Uses an unstable sorting algorithm.

Parameters

self: *Self

Source Code

Source code
pub inline fn sortUnstable(self: *Self, sort_ctx: anytype) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call sortUnstableContext instead.");
    return self.sortContextInternal(.unstable, sort_ctx, undefined);
}

FunctionsortContext[src]

pub inline fn sortContext(self: *Self, sort_ctx: anytype, ctx: Context) void

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub inline fn sortContext(self: *Self, sort_ctx: anytype, ctx: Context) void {
    return sortContextInternal(self, .stable, sort_ctx, ctx);
}

FunctionsortUnstableContext[src]

pub inline fn sortUnstableContext(self: *Self, sort_ctx: anytype, ctx: Context) void

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub inline fn sortUnstableContext(self: *Self, sort_ctx: anytype, ctx: Context) void {
    return sortContextInternal(self, .unstable, sort_ctx, ctx);
}

FunctionshrinkRetainingCapacity[src]

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Keeps capacity the same.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. Any deinitialization of discarded entries must take place after calling this function.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call shrinkRetainingCapacityContext instead.");
    return self.shrinkRetainingCapacityContext(new_len, undefined);
}

FunctionshrinkRetainingCapacityContext[src]

pub fn shrinkRetainingCapacityContext(self: *Self, new_len: usize, ctx: Context) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Keeps capacity the same.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. Any deinitialization of discarded entries must take place after calling this function.

Parameters

self: *Self
new_len: usize
ctx: Context

Source Code

Source code
pub fn shrinkRetainingCapacityContext(self: *Self, new_len: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    // Remove index entries from the new length onwards.
    // Explicitly choose to ONLY remove index entries and not the underlying array list
    // entries as we're going to remove them in the subsequent shrink call.
    if (self.index_header) |header| {
        var i: usize = new_len;
        while (i < self.entries.len) : (i += 1)
            self.removeFromIndexByIndex(i, if (store_hash) {} else ctx, header);
    }
    self.entries.shrinkRetainingCapacity(new_len);
}

FunctionshrinkAndFree[src]

pub fn shrinkAndFree(self: *Self, gpa: Allocator, new_len: usize) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Reduces allocated capacity.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. It is a bug to call this function if the discarded entries require deinitialization. For that use case, shrinkRetainingCapacity can be used instead.

Parameters

self: *Self
gpa: Allocator
new_len: usize

Source Code

Source code
pub fn shrinkAndFree(self: *Self, gpa: Allocator, new_len: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call shrinkAndFreeContext instead.");
    return self.shrinkAndFreeContext(gpa, new_len, undefined);
}

FunctionshrinkAndFreeContext[src]

pub fn shrinkAndFreeContext(self: *Self, gpa: Allocator, new_len: usize, ctx: Context) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Reduces allocated capacity.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. It is a bug to call this function if the discarded entries require deinitialization. For that use case, shrinkRetainingCapacityContext can be used instead.

Parameters

self: *Self
gpa: Allocator
new_len: usize
ctx: Context

Source Code

Source code
pub fn shrinkAndFreeContext(self: *Self, gpa: Allocator, new_len: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    // Remove index entries from the new length onwards.
    // Explicitly choose to ONLY remove index entries and not the underlying array list
    // entries as we're going to remove them in the subsequent shrink call.
    if (self.index_header) |header| {
        var i: usize = new_len;
        while (i < self.entries.len) : (i += 1)
            self.removeFromIndexByIndex(i, if (store_hash) {} else ctx, header);
    }
    self.entries.shrinkAndFree(gpa, new_len);
}

Functionpop[src]

pub fn pop(self: *Self) ?KV

Removes the most recently inserted entry from the hash map and returns it, or returns null if the map is empty.

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?KV {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call popContext instead.");
    return self.popContext(undefined);
}
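
Because insertion order is preserved, pop behaves like a stack operation on the entry list. A sketch (same assumptions as the earlier examples, including pop returning ?KV as documented above):

const std = @import("std");

test "pop removes entries in reverse insertion order" {
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 1, 10);
    try map.put(gpa, 2, 20);

    const last = map.pop().?;
    try std.testing.expectEqual(@as(u32, 2), last.key);

    while (map.pop()) |_| {} // drain the rest
    try std.testing.expectEqual(@as(usize, 0), map.count());
}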

FunctionpopContext[src]

pub fn popContext(self: *Self, ctx: Context) ?KV

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn popContext(self: *Self, ctx: Context) ?KV {
    if (self.entries.len == 0) return null;
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    const item = self.entries.get(self.entries.len - 1);
    if (self.index_header) |header|
        self.removeFromIndexByIndex(self.entries.len - 1, if (store_hash) {} else ctx, header);
    self.entries.len -= 1;
    return .{
        .key = item.key,
        .value = item.value,
    };
}

Source Code

Source code
pub fn ArrayHashMapUnmanaged(
    comptime K: type,
    comptime V: type,
    /// A namespace that provides these two functions:
    /// * `pub fn hash(self, K) u32`
    /// * `pub fn eql(self, K, K, usize) bool`
    ///
    /// The final `usize` in the `eql` function represents the index of the key
    /// that's already inside the map.
    comptime Context: type,
    /// When `false`, this data structure is biased towards cheap `eql`
    /// functions and avoids storing each key's hash in the table. Setting
    /// `store_hash` to `true` incurs more memory cost but limits `eql` to
    /// being called only once per insertion/deletion (provided there are no
    /// hash collisions).
    comptime store_hash: bool,
) type {
    return struct {
        /// It is permitted to access this field directly.
        /// After any modification to the keys, consider calling `reIndex`.
        entries: DataList = .{},

        /// When entries length is less than `linear_scan_max`, this remains `null`.
        /// Once entries length grows big enough, this field is allocated. There is
        /// an IndexHeader followed by an array of Index(I) structs, where I is defined
        /// by how many total indexes there are.
        index_header: ?*IndexHeader = null,

        /// Used to detect memory safety violations.
        pointer_stability: std.debug.SafetyLock = .{},

        /// A map containing no keys or values.
        pub const empty: Self = .{
            .entries = .{},
            .index_header = null,
        };

        /// Modifying the key is allowed only if it does not change the hash.
        /// Modifying the value is allowed.
        /// Entry pointers become invalid whenever this ArrayHashMap is modified,
        /// unless `ensureTotalCapacity`/`ensureUnusedCapacity` was previously used.
        pub const Entry = struct {
            key_ptr: *K,
            value_ptr: *V,
        };

        /// A KV pair which has been copied out of the backing store
        pub const KV = struct {
            key: K,
            value: V,
        };

        /// The Data type used for the MultiArrayList backing this map
        pub const Data = struct {
            hash: Hash,
            key: K,
            value: V,
        };

        /// The MultiArrayList type backing this map
        pub const DataList = std.MultiArrayList(Data);

        /// The stored hash type, either u32 or void.
        pub const Hash = if (store_hash) u32 else void;

        /// getOrPut variants return this structure, with pointers
        /// to the backing store and a flag to indicate whether an
        /// existing entry was found.
        /// Modifying the key is allowed only if it does not change the hash.
        /// Modifying the value is allowed.
        /// Entry pointers become invalid whenever this ArrayHashMap is modified,
        /// unless `ensureTotalCapacity`/`ensureUnusedCapacity` was previously used.
        pub const GetOrPutResult = struct {
            key_ptr: *K,
            value_ptr: *V,
            found_existing: bool,
            index: usize,
        };

        /// The ArrayHashMap type using the same settings as this managed map.
        pub const Managed = ArrayHashMap(K, V, Context, store_hash);

        /// Some functions require a context only if hashes are not stored.
        /// To keep the api simple, this type is only used internally.
        const ByIndexContext = if (store_hash) void else Context;

        const Self = @This();

        const linear_scan_max = @as(comptime_int, @max(1, @as(comptime_int, @min(
            std.atomic.cache_line / @as(comptime_int, @max(1, @sizeOf(Hash))),
            std.atomic.cache_line / @as(comptime_int, @max(1, @sizeOf(K))),
        ))));

        const RemovalType = enum {
            swap,
            ordered,
        };

        const Oom = Allocator.Error;

        /// Convert from an unmanaged map to a managed map.  After calling this,
        /// the promoted map should no longer be used.
        pub fn promote(self: Self, gpa: Allocator) Managed {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call promoteContext instead.");
            return self.promoteContext(gpa, undefined);
        }
        pub fn promoteContext(self: Self, gpa: Allocator, ctx: Context) Managed {
            return .{
                .unmanaged = self,
                .allocator = gpa,
                .ctx = ctx,
            };
        }

        pub fn init(gpa: Allocator, key_list: []const K, value_list: []const V) Oom!Self {
            var self: Self = .{};
            errdefer self.deinit(gpa);
            try self.reinit(gpa, key_list, value_list);
            return self;
        }

        /// An empty `value_list` may be passed, in which case the values array becomes `undefined`.
        pub fn reinit(self: *Self, gpa: Allocator, key_list: []const K, value_list: []const V) Oom!void {
            try self.entries.resize(gpa, key_list.len);
            @memcpy(self.keys(), key_list);
            if (value_list.len == 0) {
                @memset(self.values(), undefined);
            } else {
                assert(key_list.len == value_list.len);
                @memcpy(self.values(), value_list);
            }
            try self.reIndex(gpa);
        }

        /// Frees the backing allocation and leaves the map in an undefined state.
        /// Note that this does not free keys or values.  You must take care of that
        /// before calling this function, if it is needed.
        pub fn deinit(self: *Self, gpa: Allocator) void {
            self.pointer_stability.assertUnlocked();
            self.entries.deinit(gpa);
            if (self.index_header) |header| {
                header.free(gpa);
            }
            self.* = undefined;
        }

        /// Puts the hash map into a state where any method call that would
        /// cause an existing key or value pointer to become invalidated will
        /// instead trigger an assertion.
        ///
        /// An additional call to `lockPointers` in such state also triggers an
        /// assertion.
        ///
        /// `unlockPointers` returns the hash map to the previous state.
        pub fn lockPointers(self: *Self) void {
            self.pointer_stability.lock();
        }

        /// Undoes a call to `lockPointers`.
        pub fn unlockPointers(self: *Self) void {
            self.pointer_stability.unlock();
        }

        /// Clears the map but retains the backing allocation for future use.
        pub fn clearRetainingCapacity(self: *Self) void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            self.entries.len = 0;
            if (self.index_header) |header| {
                switch (header.capacityIndexType()) {
                    .u8 => @memset(header.indexes(u8), Index(u8).empty),
                    .u16 => @memset(header.indexes(u16), Index(u16).empty),
                    .u32 => @memset(header.indexes(u32), Index(u32).empty),
                }
            }
        }

        /// Clears the map and releases the backing allocation
        pub fn clearAndFree(self: *Self, gpa: Allocator) void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            self.entries.shrinkAndFree(gpa, 0);
            if (self.index_header) |header| {
                header.free(gpa);
                self.index_header = null;
            }
        }

        /// Returns the number of KV pairs stored in this map.
        pub fn count(self: Self) usize {
            return self.entries.len;
        }

        /// Returns the backing array of keys in this map. Modifying the map may
        /// invalidate this array. Modifying this array in a way that changes
        /// key hashes or key equality puts the map into an unusable state until
        /// `reIndex` is called.
        pub fn keys(self: Self) []K {
            return self.entries.items(.key);
        }
        /// Returns the backing array of values in this map. Modifying the map
        /// may invalidate this array. It is permitted to modify the values in
        /// this array.
        pub fn values(self: Self) []V {
            return self.entries.items(.value);
        }

        /// Returns an iterator over the pairs in this map.
        /// Modifying the map may invalidate this iterator.
        pub fn iterator(self: Self) Iterator {
            const slice = self.entries.slice();
            return .{
                .keys = slice.items(.key).ptr,
                .values = slice.items(.value).ptr,
                .len = @as(u32, @intCast(slice.len)),
            };
        }
        pub const Iterator = struct {
            keys: [*]K,
            values: [*]V,
            len: u32,
            index: u32 = 0,

            pub fn next(it: *Iterator) ?Entry {
                if (it.index >= it.len) return null;
                const result = Entry{
                    .key_ptr = &it.keys[it.index],
                    // workaround for #6974
                    .value_ptr = if (@sizeOf(*V) == 0) undefined else &it.values[it.index],
                };
                it.index += 1;
                return result;
            }

            /// Reset the iterator to the initial index
            pub fn reset(it: *Iterator) void {
                it.index = 0;
            }
        };

        /// If key exists this function cannot fail.
        /// If there is an existing item with `key`, then the result
        /// `Entry` pointer points to it, and found_existing is true.
        /// Otherwise, puts a new item with undefined value, and
        /// the `Entry` pointer points to it. Caller should then initialize
        /// the value (but not the key).
        pub fn getOrPut(self: *Self, gpa: Allocator, key: K) Oom!GetOrPutResult {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContext instead.");
            return self.getOrPutContext(gpa, key, undefined);
        }
        pub fn getOrPutContext(self: *Self, gpa: Allocator, key: K, ctx: Context) Oom!GetOrPutResult {
            const gop = try self.getOrPutContextAdapted(gpa, key, ctx, ctx);
            if (!gop.found_existing) {
                gop.key_ptr.* = key;
            }
            return gop;
        }
        pub fn getOrPutAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype) Oom!GetOrPutResult {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContextAdapted instead.");
            return self.getOrPutContextAdapted(gpa, key, key_ctx, undefined);
        }
        pub fn getOrPutContextAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Oom!GetOrPutResult {
            self.ensureTotalCapacityContext(gpa, self.entries.len + 1, ctx) catch |err| {
                // "If key exists this function cannot fail."
                const index = self.getIndexAdapted(key, key_ctx) orelse return err;
                const slice = self.entries.slice();
                return GetOrPutResult{
                    .key_ptr = &slice.items(.key)[index],
                    // workaround for #6974
                    .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[index],
                    .found_existing = true,
                    .index = index,
                };
            };
            return self.getOrPutAssumeCapacityAdapted(key, key_ctx);
        }

        /// If there is an existing item with `key`, then the result
        /// `Entry` pointer points to it, and found_existing is true.
        /// Otherwise, puts a new item with undefined value, and
        /// the `Entry` pointer points to it. Caller should then initialize
        /// the value (but not the key).
        /// If a new entry needs to be stored, this function asserts there
        /// is enough capacity to store it.
        pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutAssumeCapacityContext instead.");
            return self.getOrPutAssumeCapacityContext(key, undefined);
        }
        pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult {
            const gop = self.getOrPutAssumeCapacityAdapted(key, ctx);
            if (!gop.found_existing) {
                gop.key_ptr.* = key;
            }
            return gop;
        }
        /// If there is an existing item with `key`, then the result
        /// `Entry` pointers point to it, and found_existing is true.
        /// Otherwise, puts a new item with undefined key and value, and
        /// the `Entry` pointers point to it. Caller must then initialize
        /// both the key and the value.
        /// If a new entry needs to be stored, this function asserts there
        /// is enough capacity to store it.
        pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {
            const header = self.index_header orelse {
                // Linear scan.
                const h = if (store_hash) checkedHash(ctx, key) else {};
                const slice = self.entries.slice();
                const hashes_array = slice.items(.hash);
                const keys_array = slice.items(.key);
                for (keys_array, 0..) |*item_key, i| {
                    if (hashes_array[i] == h and checkedEql(ctx, key, item_key.*, i)) {
                        return GetOrPutResult{
                            .key_ptr = item_key,
                            // workaround for #6974
                            .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[i],
                            .found_existing = true,
                            .index = i,
                        };
                    }
                }

                const index = self.entries.addOneAssumeCapacity();
                // The slice length changed, so we directly index the pointer.
                if (store_hash) hashes_array.ptr[index] = h;

                return GetOrPutResult{
                    .key_ptr = &keys_array.ptr[index],
                    // workaround for #6974
                    .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value).ptr[index],
                    .found_existing = false,
                    .index = index,
                };
            };

            switch (header.capacityIndexType()) {
                .u8 => return self.getOrPutInternal(key, ctx, header, u8),
                .u16 => return self.getOrPutInternal(key, ctx, header, u16),
                .u32 => return self.getOrPutInternal(key, ctx, header, u32),
            }
        }

        pub fn getOrPutValue(self: *Self, gpa: Allocator, key: K, value: V) Oom!GetOrPutResult {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutValueContext instead.");
            return self.getOrPutValueContext(gpa, key, value, undefined);
        }
        pub fn getOrPutValueContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!GetOrPutResult {
            const res = try self.getOrPutContextAdapted(gpa, key, ctx, ctx);
            if (!res.found_existing) {
                res.key_ptr.* = key;
                res.value_ptr.* = value;
            }
            return res;
        }
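
        // Illustrative sketch: `getOrPutValue` stores `value` only when the
        // key is absent, which makes it convenient for counters. Assumes a
        // `counts: std.StringArrayHashMapUnmanaged(usize)`, an allocator
        // `gpa`, and a `word: []const u8`.
        //
        //     const gop = try counts.getOrPutValue(gpa, word, 0);
        //     gop.value_ptr.* += 1;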

        /// Increases capacity, guaranteeing that insertions up until
        /// `new_capacity` total items will not cause an allocation, and
        /// therefore cannot fail.
        pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Oom!void {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
            return self.ensureTotalCapacityContext(gpa, new_capacity, undefined);
        }
        pub fn ensureTotalCapacityContext(self: *Self, gpa: Allocator, new_capacity: usize, ctx: Context) Oom!void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            if (new_capacity <= linear_scan_max) {
                try self.entries.ensureTotalCapacity(gpa, new_capacity);
                return;
            }

            if (self.index_header) |header| {
                if (new_capacity <= header.capacity()) {
                    try self.entries.ensureTotalCapacity(gpa, new_capacity);
                    return;
                }
            }

            try self.entries.ensureTotalCapacity(gpa, new_capacity);
            const new_bit_index = try IndexHeader.findBitIndex(new_capacity);
            const new_header = try IndexHeader.alloc(gpa, new_bit_index);

            if (self.index_header) |old_header| old_header.free(gpa);
            self.insertAllEntriesIntoNewHeader(if (store_hash) {} else ctx, new_header);
            self.index_header = new_header;
        }

        /// Increases capacity, guaranteeing that insertions of up to
        /// `additional_capacity` **more** items will not cause an allocation,
        /// and therefore cannot fail.
        pub fn ensureUnusedCapacity(
            self: *Self,
            gpa: Allocator,
            additional_capacity: usize,
        ) Oom!void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
            return self.ensureUnusedCapacityContext(gpa, additional_capacity, undefined);
        }
        pub fn ensureUnusedCapacityContext(
            self: *Self,
            gpa: Allocator,
            additional_capacity: usize,
            ctx: Context,
        ) Oom!void {
            return self.ensureTotalCapacityContext(gpa, self.count() + additional_capacity, ctx);
        }
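
        // Illustrative sketch: guarantee room for a whole batch of insertions,
        // then insert without further error handling. `pairs` is a
        // hypothetical slice of key/value structs.
        //
        //     try map.ensureUnusedCapacity(gpa, pairs.len);
        //     for (pairs) |p| map.putAssumeCapacity(p.key, p.value);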

        /// Returns the total number of elements which may be present before
        /// it is no longer guaranteed that no allocations will be performed.
        pub fn capacity(self: Self) usize {
            const entry_cap = self.entries.capacity;
            const header = self.index_header orelse return @min(linear_scan_max, entry_cap);
            const indexes_cap = header.capacity();
            return @min(entry_cap, indexes_cap);
        }

        /// Clobbers any existing data. To detect if a put would clobber
        /// existing data, see `getOrPut`.
        pub fn put(self: *Self, gpa: Allocator, key: K, value: V) Oom!void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putContext instead.");
            return self.putContext(gpa, key, value, undefined);
        }
        pub fn putContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void {
            const result = try self.getOrPutContext(gpa, key, ctx);
            result.value_ptr.* = value;
        }
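
        // Illustrative sketch: `put` inserts or overwrites unconditionally.
        //
        //     try map.put(gpa, 42, 1);
        //     try map.put(gpa, 42, 2); // overwrites; no error, no duplicate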

        /// Inserts a key-value pair into the hash map, asserting that no
        /// entry with the same key is already present.
        pub fn putNoClobber(self: *Self, gpa: Allocator, key: K, value: V) Oom!void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putNoClobberContext instead.");
            return self.putNoClobberContext(gpa, key, value, undefined);
        }
        pub fn putNoClobberContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void {
            const result = try self.getOrPutContext(gpa, key, ctx);
            assert(!result.found_existing);
            result.value_ptr.* = value;
        }

        /// Asserts there is enough capacity to store the new key-value pair.
        /// Clobbers any existing data. To detect if a put would clobber
        /// existing data, see `getOrPutAssumeCapacity`.
        pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityContext instead.");
            return self.putAssumeCapacityContext(key, value, undefined);
        }
        pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void {
            const result = self.getOrPutAssumeCapacityContext(key, ctx);
            result.value_ptr.* = value;
        }

        /// Asserts there is enough capacity to store the new key-value pair.
        /// Asserts that it does not clobber any existing data.
        /// To detect if a put would clobber existing data, see `getOrPutAssumeCapacity`.
        pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityNoClobberContext instead.");
            return self.putAssumeCapacityNoClobberContext(key, value, undefined);
        }
        pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void {
            const result = self.getOrPutAssumeCapacityContext(key, ctx);
            assert(!result.found_existing);
            result.value_ptr.* = value;
        }

        /// Inserts a new `Entry` into the hash map, returning the previous one, if any.
        pub fn fetchPut(self: *Self, gpa: Allocator, key: K, value: V) Oom!?KV {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutContext instead.");
            return self.fetchPutContext(gpa, key, value, undefined);
        }
        pub fn fetchPutContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!?KV {
            const gop = try self.getOrPutContext(gpa, key, ctx);
            var result: ?KV = null;
            if (gop.found_existing) {
                result = KV{
                    .key = gop.key_ptr.*,
                    .value = gop.value_ptr.*,
                };
            }
            gop.value_ptr.* = value;
            return result;
        }

        /// Inserts a new `Entry` into the hash map, returning the previous one, if any.
        /// If insertion happens, asserts there is enough capacity without allocating.
        pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutAssumeCapacityContext instead.");
            return self.fetchPutAssumeCapacityContext(key, value, undefined);
        }
        pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV {
            const gop = self.getOrPutAssumeCapacityContext(key, ctx);
            var result: ?KV = null;
            if (gop.found_existing) {
                result = KV{
                    .key = gop.key_ptr.*,
                    .value = gop.value_ptr.*,
                };
            }
            gop.value_ptr.* = value;
            return result;
        }
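
        // Illustrative sketch: `fetchPut` reports what it displaced, if
        // anything.
        //
        //     if (try map.fetchPut(gpa, 42, 9)) |old| {
        //         std.debug.print("replaced value {d}\n", .{old.value});
        //     }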

        /// Finds pointers to the key and value storage associated with a key.
        pub fn getEntry(self: Self, key: K) ?Entry {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getEntryContext instead.");
            return self.getEntryContext(key, undefined);
        }
        pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry {
            return self.getEntryAdapted(key, ctx);
        }
        pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
            const index = self.getIndexAdapted(key, ctx) orelse return null;
            const slice = self.entries.slice();
            return Entry{
                .key_ptr = &slice.items(.key)[index],
                // workaround for #6974
                .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[index],
            };
        }

        /// Finds the index in the `entries` array where a key is stored
        pub fn getIndex(self: Self, key: K) ?usize {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getIndexContext instead.");
            return self.getIndexContext(key, undefined);
        }
        pub fn getIndexContext(self: Self, key: K, ctx: Context) ?usize {
            return self.getIndexAdapted(key, ctx);
        }
        pub fn getIndexAdapted(self: Self, key: anytype, ctx: anytype) ?usize {
            const header = self.index_header orelse {
                // Linear scan.
                const h = if (store_hash) checkedHash(ctx, key) else {};
                const slice = self.entries.slice();
                const hashes_array = slice.items(.hash);
                const keys_array = slice.items(.key);
                for (keys_array, 0..) |*item_key, i| {
                    if (hashes_array[i] == h and checkedEql(ctx, key, item_key.*, i)) {
                        return i;
                    }
                }
                return null;
            };
            switch (header.capacityIndexType()) {
                .u8 => return self.getIndexWithHeaderGeneric(key, ctx, header, u8),
                .u16 => return self.getIndexWithHeaderGeneric(key, ctx, header, u16),
                .u32 => return self.getIndexWithHeaderGeneric(key, ctx, header, u32),
            }
        }
        fn getIndexWithHeaderGeneric(self: Self, key: anytype, ctx: anytype, header: *IndexHeader, comptime I: type) ?usize {
            const indexes = header.indexes(I);
            const slot = self.getSlotByKey(key, ctx, header, I, indexes) orelse return null;
            return indexes[slot].entry_index;
        }

        /// Find the value associated with a key
        pub fn get(self: Self, key: K) ?V {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getContext instead.");
            return self.getContext(key, undefined);
        }
        pub fn getContext(self: Self, key: K, ctx: Context) ?V {
            return self.getAdapted(key, ctx);
        }
        pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
            const index = self.getIndexAdapted(key, ctx) orelse return null;
            return self.values()[index];
        }

        /// Find a pointer to the value associated with a key
        pub fn getPtr(self: Self, key: K) ?*V {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getPtrContext instead.");
            return self.getPtrContext(key, undefined);
        }
        pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V {
            return self.getPtrAdapted(key, ctx);
        }
        pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
            const index = self.getIndexAdapted(key, ctx) orelse return null;
            // workaround for #6974
            return if (@sizeOf(*V) == 0) @as(*V, undefined) else &self.values()[index];
        }

        /// Find the actual key associated with an adapted key
        pub fn getKey(self: Self, key: K) ?K {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyContext instead.");
            return self.getKeyContext(key, undefined);
        }
        pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K {
            return self.getKeyAdapted(key, ctx);
        }
        pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
            const index = self.getIndexAdapted(key, ctx) orelse return null;
            return self.keys()[index];
        }

        /// Find a pointer to the actual key associated with an adapted key
        pub fn getKeyPtr(self: Self, key: K) ?*K {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyPtrContext instead.");
            return self.getKeyPtrContext(key, undefined);
        }
        pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K {
            return self.getKeyPtrAdapted(key, ctx);
        }
        pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
            const index = self.getIndexAdapted(key, ctx) orelse return null;
            return &self.keys()[index];
        }

        /// Check whether a key is stored in the map
        pub fn contains(self: Self, key: K) bool {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call containsContext instead.");
            return self.containsContext(key, undefined);
        }
        pub fn containsContext(self: Self, key: K, ctx: Context) bool {
            return self.containsAdapted(key, ctx);
        }
        pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
            return self.getIndexAdapted(key, ctx) != null;
        }
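
        // Illustrative sketch of the lookup family on an existing `map`:
        //
        //     if (map.getPtr(42)) |v| v.* += 1;  // update in place
        //     const present = map.contains(42);  // membership only
        //     const idx = map.getIndex(42);      // position in insertion order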

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map, and then returned from this function. The entry is
        /// removed from the underlying array by swapping it with the last
        /// element.
        pub fn fetchSwapRemove(self: *Self, key: K) ?KV {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchSwapRemoveContext instead.");
            return self.fetchSwapRemoveContext(key, undefined);
        }
        pub fn fetchSwapRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
            return self.fetchSwapRemoveContextAdapted(key, ctx, ctx);
        }
        pub fn fetchSwapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchSwapRemoveContextAdapted instead.");
            return self.fetchSwapRemoveContextAdapted(key, ctx, undefined);
        }
        pub fn fetchSwapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            return self.fetchRemoveByKey(key, key_ctx, if (store_hash) {} else ctx, .swap);
        }

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map, and then returned from this function. The entry is
        /// removed from the underlying array by shifting all elements forward
        /// thereby maintaining the current ordering.
        pub fn fetchOrderedRemove(self: *Self, key: K) ?KV {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchOrderedRemoveContext instead.");
            return self.fetchOrderedRemoveContext(key, undefined);
        }
        pub fn fetchOrderedRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
            return self.fetchOrderedRemoveContextAdapted(key, ctx, ctx);
        }
        pub fn fetchOrderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchOrderedRemoveContextAdapted instead.");
            return self.fetchOrderedRemoveContextAdapted(key, ctx, undefined);
        }
        pub fn fetchOrderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            return self.fetchRemoveByKey(key, key_ctx, if (store_hash) {} else ctx, .ordered);
        }

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map. The entry is removed from the underlying array
        /// by swapping it with the last element.  Returns true if an entry
        /// was removed, false otherwise.
        pub fn swapRemove(self: *Self, key: K) bool {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveContext instead.");
            return self.swapRemoveContext(key, undefined);
        }
        pub fn swapRemoveContext(self: *Self, key: K, ctx: Context) bool {
            return self.swapRemoveContextAdapted(key, ctx, ctx);
        }
        pub fn swapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveContextAdapted instead.");
            return self.swapRemoveContextAdapted(key, ctx, undefined);
        }
        pub fn swapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            return self.removeByKey(key, key_ctx, if (store_hash) {} else ctx, .swap);
        }

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map. The entry is removed from the underlying array
        /// by shifting all elements forward, thereby maintaining the
        /// current ordering.  Returns true if an entry was removed, false otherwise.
        pub fn orderedRemove(self: *Self, key: K) bool {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveContext instead.");
            return self.orderedRemoveContext(key, undefined);
        }
        pub fn orderedRemoveContext(self: *Self, key: K, ctx: Context) bool {
            return self.orderedRemoveContextAdapted(key, ctx, ctx);
        }
        pub fn orderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveContextAdapted instead.");
            return self.orderedRemoveContextAdapted(key, ctx, undefined);
        }
        pub fn orderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            return self.removeByKey(key, key_ctx, if (store_hash) {} else ctx, .ordered);
        }
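
        // Illustrative sketch contrasting the two removal strategies, for
        // hypothetical keys `a` and `b`:
        //
        //     _ = map.swapRemove(a);    // O(1), but breaks insertion order
        //     _ = map.orderedRemove(b); // O(N), preserves insertion order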

        /// Deletes the item at the specified index in `entries` from
        /// the hash map. The entry is removed from the underlying array
        /// by swapping it with the last element.
        pub fn swapRemoveAt(self: *Self, index: usize) void {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveAtContext instead.");
            return self.swapRemoveAtContext(index, undefined);
        }
        pub fn swapRemoveAtContext(self: *Self, index: usize, ctx: Context) void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            self.removeByIndex(index, if (store_hash) {} else ctx, .swap);
        }

        /// Deletes the item at the specified index in `entries` from
        /// the hash map. The entry is removed from the underlying array
        /// by shifting all elements forward, thereby maintaining the
        /// current ordering.
        pub fn orderedRemoveAt(self: *Self, index: usize) void {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveAtContext instead.");
            return self.orderedRemoveAtContext(index, undefined);
        }
        pub fn orderedRemoveAtContext(self: *Self, index: usize, ctx: Context) void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            self.removeByIndex(index, if (store_hash) {} else ctx, .ordered);
        }

        /// Create a copy of the hash map which can be modified separately.
        /// The copy uses the same context as this instance, but is allocated
        /// with the provided allocator.
        pub fn clone(self: Self, gpa: Allocator) Oom!Self {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call cloneContext instead.");
            return self.cloneContext(gpa, undefined);
        }
        pub fn cloneContext(self: Self, gpa: Allocator, ctx: Context) Oom!Self {
            var other: Self = .{};
            other.entries = try self.entries.clone(gpa);
            errdefer other.entries.deinit(gpa);

            if (self.index_header) |header| {
                // TODO: I'm pretty sure this could be memcpy'd instead of
                // doing all this work.
                const new_header = try IndexHeader.alloc(gpa, header.bit_index);
                other.insertAllEntriesIntoNewHeader(if (store_hash) {} else ctx, new_header);
                other.index_header = new_header;
            }
            return other;
        }
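
        // Illustrative sketch: the clone is fully independent of the original.
        //
        //     var copy = try map.clone(gpa);
        //     defer copy.deinit(gpa);
        //     try copy.put(gpa, 1, 1); // does not affect `map`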

        /// Set the map to an empty state, making deinitialization a no-op, and
        /// returning a copy of the original.
        pub fn move(self: *Self) Self {
            self.pointer_stability.assertUnlocked();
            const result = self.*;
            self.* = .empty;
            return result;
        }
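
        // Illustrative sketch: transfer ownership, leaving the original empty
        // and safe to reuse or deinitialize.
        //
        //     var moved = map.move();
        //     defer moved.deinit(gpa);
        //     assert(map.count() == 0);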

        /// Recomputes stored hashes and rebuilds the key indexes. If the
        /// underlying keys have been modified directly, call this method to
        /// recompute the denormalized metadata necessary for the operation of
        /// the methods of this map that lookup entries by key.
        ///
        /// One use case for this is directly calling `entries.resize()` to grow
        /// the underlying storage, and then setting the `keys` and `values`
        /// directly without going through the methods of this map.
        ///
        /// The time complexity of this operation is O(n).
        pub fn reIndex(self: *Self, gpa: Allocator) Oom!void {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call reIndexContext instead.");
            return self.reIndexContext(gpa, undefined);
        }

        pub fn reIndexContext(self: *Self, gpa: Allocator, ctx: Context) Oom!void {
            // Recompute all hashes.
            if (store_hash) {
                for (self.keys(), self.entries.items(.hash)) |key, *hash| {
                    const h = checkedHash(ctx, key);
                    hash.* = h;
                }
            }
            try rebuildIndex(self, gpa, ctx);
        }
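
        // Illustrative sketch: mutate keys in place, then repair the
        // denormalized metadata (assumes a u32-keyed map, so the mutation
        // keeps keys distinct).
        //
        //     for (map.keys()) |*k| k.* +%= 1;
        //     try map.reIndex(gpa);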

        /// Modify an entry's key without reordering any entries.
        pub fn setKey(self: *Self, gpa: Allocator, index: usize, new_key: K) Oom!void {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call setKeyContext instead.");
            return setKeyContext(self, gpa, index, new_key, undefined);
        }

        pub fn setKeyContext(self: *Self, gpa: Allocator, index: usize, new_key: K, ctx: Context) Oom!void {
            const key_ptr = &self.entries.items(.key)[index];
            key_ptr.* = new_key;
            if (store_hash) self.entries.items(.hash)[index] = checkedHash(ctx, key_ptr.*);
            try rebuildIndex(self, gpa, ctx);
        }

        fn rebuildIndex(self: *Self, gpa: Allocator, ctx: Context) Oom!void {
            if (self.entries.capacity <= linear_scan_max) return;

            // We're going to rebuild the index header and replace the existing one (if any).
            // The indexes should be sized such that they will be at most 60% full.
            const bit_index = try IndexHeader.findBitIndex(self.entries.capacity);
            const new_header = try IndexHeader.alloc(gpa, bit_index);
            if (self.index_header) |header| header.free(gpa);
            self.insertAllEntriesIntoNewHeader(if (store_hash) {} else ctx, new_header);
            self.index_header = new_header;
        }

        /// Sorts the entries and then rebuilds the index.
        /// `sort_ctx` must have this method:
        /// `fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool`
        /// Uses a stable sorting algorithm.
        pub inline fn sort(self: *Self, sort_ctx: anytype) void {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call sortContext instead.");
            return sortContextInternal(self, .stable, sort_ctx, undefined);
        }

        /// Sorts the entries and then rebuilds the index.
        /// `sort_ctx` must have this method:
        /// `fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool`
        /// Uses an unstable sorting algorithm.
        pub inline fn sortUnstable(self: *Self, sort_ctx: anytype) void {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call sortUnstableContext instead.");
            return self.sortContextInternal(.unstable, sort_ctx, undefined);
        }

        pub inline fn sortContext(self: *Self, sort_ctx: anytype, ctx: Context) void {
            return sortContextInternal(self, .stable, sort_ctx, ctx);
        }

        pub inline fn sortUnstableContext(self: *Self, sort_ctx: anytype, ctx: Context) void {
            return sortContextInternal(self, .unstable, sort_ctx, ctx);
        }

        fn sortContextInternal(
            self: *Self,
            comptime mode: std.sort.Mode,
            sort_ctx: anytype,
            ctx: Context,
        ) void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            switch (mode) {
                .stable => self.entries.sort(sort_ctx),
                .unstable => self.entries.sortUnstable(sort_ctx),
            }
            const header = self.index_header orelse return;
            header.reset();
            self.insertAllEntriesIntoNewHeader(if (store_hash) {} else ctx, header);
        }
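
        // Illustrative sketch: sort entries by key using an index-based
        // `lessThan` context (assumes a u32-keyed map).
        //
        //     const SortCtx = struct {
        //         keys: []const u32,
        //         pub fn lessThan(ctx: @This(), a_index: usize, b_index: usize) bool {
        //             return ctx.keys[a_index] < ctx.keys[b_index];
        //         }
        //     };
        //     map.sort(SortCtx{ .keys = map.keys() });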

        /// Shrinks the underlying `Entry` array to `new_len` elements and
        /// discards any associated index entries. Keeps capacity the same.
        ///
        /// Asserts the discarded entries remain initialized and capable of
        /// performing hash and equality checks. Any deinitialization of
        /// discarded entries must take place *after* calling this function.
        pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call shrinkRetainingCapacityContext instead.");
            return self.shrinkRetainingCapacityContext(new_len, undefined);
        }

        /// Shrinks the underlying `Entry` array to `new_len` elements and
        /// discards any associated index entries. Keeps capacity the same.
        ///
        /// Asserts the discarded entries remain initialized and capable of
        /// performing hash and equality checks. Any deinitialization of
        /// discarded entries must take place *after* calling this function.
        pub fn shrinkRetainingCapacityContext(self: *Self, new_len: usize, ctx: Context) void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            // Remove index entries from the new length onwards.
            // Explicitly choose to ONLY remove index entries and not the underlying array list
            // entries as we're going to remove them in the subsequent shrink call.
            if (self.index_header) |header| {
                var i: usize = new_len;
                while (i < self.entries.len) : (i += 1)
                    self.removeFromIndexByIndex(i, if (store_hash) {} else ctx, header);
            }
            self.entries.shrinkRetainingCapacity(new_len);
        }

        /// Shrinks the underlying `Entry` array to `new_len` elements and
        /// discards any associated index entries. Reduces allocated capacity.
        ///
        /// Asserts the discarded entries remain initialized and capable of
        /// performing hash and equality checks. It is a bug to call this
        /// function if the discarded entries require deinitialization. For
        /// that use case, `shrinkRetainingCapacity` can be used instead.
        pub fn shrinkAndFree(self: *Self, gpa: Allocator, new_len: usize) void {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call shrinkAndFreeContext instead.");
            return self.shrinkAndFreeContext(gpa, new_len, undefined);
        }

        /// Shrinks the underlying `Entry` array to `new_len` elements and
        /// discards any associated index entries. Reduces allocated capacity.
        ///
        /// Asserts the discarded entries remain initialized and capable of
        /// performing hash and equality checks. It is a bug to call this
        /// function if the discarded entries require deinitialization. For
        /// that use case, `shrinkRetainingCapacityContext` can be used
        /// instead.
        pub fn shrinkAndFreeContext(self: *Self, gpa: Allocator, new_len: usize, ctx: Context) void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            // Remove index entries from the new length onwards.
            // Explicitly choose to ONLY remove index entries and not the underlying array list
            // entries as we're going to remove them in the subsequent shrink call.
            if (self.index_header) |header| {
                var i: usize = new_len;
                while (i < self.entries.len) : (i += 1)
                    self.removeFromIndexByIndex(i, if (store_hash) {} else ctx, header);
            }
            self.entries.shrinkAndFree(gpa, new_len);
        }

        /// Removes the last inserted `Entry` in the hash map and returns it,
        /// or null if the map is empty.
        pub fn pop(self: *Self) ?KV {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call popContext instead.");
            return self.popContext(undefined);
        }
        pub fn popContext(self: *Self, ctx: Context) ?KV {
            if (self.entries.len == 0) return null;
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();

            const item = self.entries.get(self.entries.len - 1);
            if (self.index_header) |header|
                self.removeFromIndexByIndex(self.entries.len - 1, if (store_hash) {} else ctx, header);
            self.entries.len -= 1;
            return .{
                .key = item.key,
                .value = item.value,
            };
        }
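
        // Illustrative sketch: drain the map in reverse insertion order.
        //
        //     while (map.pop()) |kv| {
        //         std.debug.print("{} -> {}\n", .{ kv.key, kv.value });
        //     }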

        fn fetchRemoveByKey(
            self: *Self,
            key: anytype,
            key_ctx: anytype,
            ctx: ByIndexContext,
            comptime removal_type: RemovalType,
        ) ?KV {
            const header = self.index_header orelse {
                // Linear scan.
                const key_hash = if (store_hash) key_ctx.hash(key) else {};
                const slice = self.entries.slice();
                const hashes_array = if (store_hash) slice.items(.hash) else {};
                const keys_array = slice.items(.key);
                for (keys_array, 0..) |*item_key, i| {
                    const hash_match = if (store_hash) hashes_array[i] == key_hash else true;
                    if (hash_match and key_ctx.eql(key, item_key.*, i)) {
                        const removed_entry: KV = .{
                            .key = keys_array[i],
                            .value = slice.items(.value)[i],
                        };
                        switch (removal_type) {
                            .swap => self.entries.swapRemove(i),
                            .ordered => self.entries.orderedRemove(i),
                        }
                        return removed_entry;
                    }
                }
                return null;
            };
            return switch (header.capacityIndexType()) {
                .u8 => self.fetchRemoveByKeyGeneric(key, key_ctx, ctx, header, u8, removal_type),
                .u16 => self.fetchRemoveByKeyGeneric(key, key_ctx, ctx, header, u16, removal_type),
                .u32 => self.fetchRemoveByKeyGeneric(key, key_ctx, ctx, header, u32, removal_type),
            };
        }
        fn fetchRemoveByKeyGeneric(
            self: *Self,
            key: anytype,
            key_ctx: anytype,
            ctx: ByIndexContext,
            header: *IndexHeader,
            comptime I: type,
            comptime removal_type: RemovalType,
        ) ?KV {
            const indexes = header.indexes(I);
            const entry_index = self.removeFromIndexByKey(key, key_ctx, header, I, indexes) orelse return null;
            const slice = self.entries.slice();
            const removed_entry: KV = .{
                .key = slice.items(.key)[entry_index],
                .value = slice.items(.value)[entry_index],
            };
            self.removeFromArrayAndUpdateIndex(entry_index, ctx, header, I, indexes, removal_type);
            return removed_entry;
        }

        fn removeByKey(
            self: *Self,
            key: anytype,
            key_ctx: anytype,
            ctx: ByIndexContext,
            comptime removal_type: RemovalType,
        ) bool {
            const header = self.index_header orelse {
                // Linear scan.
                const key_hash = if (store_hash) key_ctx.hash(key) else {};
                const slice = self.entries.slice();
                const hashes_array = if (store_hash) slice.items(.hash) else {};
                const keys_array = slice.items(.key);
                for (keys_array, 0..) |*item_key, i| {
                    const hash_match = if (store_hash) hashes_array[i] == key_hash else true;
                    if (hash_match and key_ctx.eql(key, item_key.*, i)) {
                        switch (removal_type) {
                            .swap => self.entries.swapRemove(i),
                            .ordered => self.entries.orderedRemove(i),
                        }
                        return true;
                    }
                }
                return false;
            };
            return switch (header.capacityIndexType()) {
                .u8 => self.removeByKeyGeneric(key, key_ctx, ctx, header, u8, removal_type),
                .u16 => self.removeByKeyGeneric(key, key_ctx, ctx, header, u16, removal_type),
                .u32 => self.removeByKeyGeneric(key, key_ctx, ctx, header, u32, removal_type),
            };
        }
        fn removeByKeyGeneric(self: *Self, key: anytype, key_ctx: anytype, ctx: ByIndexContext, header: *IndexHeader, comptime I: type, comptime removal_type: RemovalType) bool {
            const indexes = header.indexes(I);
            const entry_index = self.removeFromIndexByKey(key, key_ctx, header, I, indexes) orelse return false;
            self.removeFromArrayAndUpdateIndex(entry_index, ctx, header, I, indexes, removal_type);
            return true;
        }

        fn removeByIndex(self: *Self, entry_index: usize, ctx: ByIndexContext, comptime removal_type: RemovalType) void {
            assert(entry_index < self.entries.len);
            const header = self.index_header orelse {
                switch (removal_type) {
                    .swap => self.entries.swapRemove(entry_index),
                    .ordered => self.entries.orderedRemove(entry_index),
                }
                return;
            };
            switch (header.capacityIndexType()) {
                .u8 => self.removeByIndexGeneric(entry_index, ctx, header, u8, removal_type),
                .u16 => self.removeByIndexGeneric(entry_index, ctx, header, u16, removal_type),
                .u32 => self.removeByIndexGeneric(entry_index, ctx, header, u32, removal_type),
            }
        }
        fn removeByIndexGeneric(self: *Self, entry_index: usize, ctx: ByIndexContext, header: *IndexHeader, comptime I: type, comptime removal_type: RemovalType) void {
            const indexes = header.indexes(I);
            self.removeFromIndexByIndexGeneric(entry_index, ctx, header, I, indexes);
            self.removeFromArrayAndUpdateIndex(entry_index, ctx, header, I, indexes, removal_type);
        }

        fn removeFromArrayAndUpdateIndex(self: *Self, entry_index: usize, ctx: ByIndexContext, header: *IndexHeader, comptime I: type, indexes: []Index(I), comptime removal_type: RemovalType) void {
            const last_index = self.entries.len - 1; // overflow => remove from empty map
            switch (removal_type) {
                .swap => {
                    if (last_index != entry_index) {
                        // Because of the swap remove, now we need to update the index that was
                        // pointing to the last entry and is now pointing to this removed item slot.
                        self.updateEntryIndex(header, last_index, entry_index, ctx, I, indexes);
                    }
                    // updateEntryIndex reads from the old entry index,
                    // so it needs to run before removal.
                    self.entries.swapRemove(entry_index);
                },
                .ordered => {
                    var i: usize = entry_index;
                    while (i < last_index) : (i += 1) {
                        // Because of the ordered remove, everything from the entry index onwards has
                        // been shifted forward so we'll need to update the index entries.
                        self.updateEntryIndex(header, i + 1, i, ctx, I, indexes);
                    }
                    // updateEntryIndex reads from the old entry index,
                    // so it needs to run before removal.
                    self.entries.orderedRemove(entry_index);
                },
            }
        }

        fn updateEntryIndex(
            self: *Self,
            header: *IndexHeader,
            old_entry_index: usize,
            new_entry_index: usize,
            ctx: ByIndexContext,
            comptime I: type,
            indexes: []Index(I),
        ) void {
            const slot = self.getSlotByIndex(old_entry_index, ctx, header, I, indexes);
            indexes[slot].entry_index = @as(I, @intCast(new_entry_index));
        }

        fn removeFromIndexByIndex(self: *Self, entry_index: usize, ctx: ByIndexContext, header: *IndexHeader) void {
            switch (header.capacityIndexType()) {
                .u8 => self.removeFromIndexByIndexGeneric(entry_index, ctx, header, u8, header.indexes(u8)),
                .u16 => self.removeFromIndexByIndexGeneric(entry_index, ctx, header, u16, header.indexes(u16)),
                .u32 => self.removeFromIndexByIndexGeneric(entry_index, ctx, header, u32, header.indexes(u32)),
            }
        }
        fn removeFromIndexByIndexGeneric(self: *Self, entry_index: usize, ctx: ByIndexContext, header: *IndexHeader, comptime I: type, indexes: []Index(I)) void {
            const slot = self.getSlotByIndex(entry_index, ctx, header, I, indexes);
            removeSlot(slot, header, I, indexes);
        }

        fn removeFromIndexByKey(self: *Self, key: anytype, ctx: anytype, header: *IndexHeader, comptime I: type, indexes: []Index(I)) ?usize {
            const slot = self.getSlotByKey(key, ctx, header, I, indexes) orelse return null;
            const removed_entry_index = indexes[slot].entry_index;
            removeSlot(slot, header, I, indexes);
            return removed_entry_index;
        }

        fn removeSlot(removed_slot: usize, header: *IndexHeader, comptime I: type, indexes: []Index(I)) void {
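            // Backward-shift deletion: starting just past the removed slot,
            // pull each displaced entry in the probe run one slot back
            // (decrementing its recorded displacement) until reaching an empty
            // slot or an entry already at its ideal position (distance 0).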
            const start_index = removed_slot +% 1;
            const end_index = start_index +% indexes.len;

            var last_slot = removed_slot;
            var index: usize = start_index;
            while (index != end_index) : (index +%= 1) {
                const slot = header.constrainIndex(index);
                const slot_data = indexes[slot];
                if (slot_data.isEmpty() or slot_data.distance_from_start_index == 0) {
                    indexes[last_slot].setEmpty();
                    return;
                }
                indexes[last_slot] = .{
                    .entry_index = slot_data.entry_index,
                    .distance_from_start_index = slot_data.distance_from_start_index - 1,
                };
                last_slot = slot;
            }
            unreachable;
        }

        fn getSlotByIndex(self: *Self, entry_index: usize, ctx: ByIndexContext, header: *IndexHeader, comptime I: type, indexes: []Index(I)) usize {
            const slice = self.entries.slice();
            const h = if (store_hash) slice.items(.hash)[entry_index] else checkedHash(ctx, slice.items(.key)[entry_index]);
            const start_index = safeTruncate(usize, h);
            const end_index = start_index +% indexes.len;

            var index = start_index;
            var distance_from_start_index: I = 0;
            while (index != end_index) : ({
                index +%= 1;
                distance_from_start_index += 1;
            }) {
                const slot = header.constrainIndex(index);
                const slot_data = indexes[slot];

                // This is the fundamental property of the array hash map index.  If this
                // assert fails, it probably means that the entry was not in the index.
                assert(!slot_data.isEmpty());
                assert(slot_data.distance_from_start_index >= distance_from_start_index);

                if (slot_data.entry_index == entry_index) {
                    return slot;
                }
            }
            unreachable;
        }

        /// Must call `ensureTotalCapacity`/`ensureUnusedCapacity` before calling this.
        fn getOrPutInternal(self: *Self, key: anytype, ctx: anytype, header: *IndexHeader, comptime I: type) GetOrPutResult {
            const slice = self.entries.slice();
            const hashes_array = if (store_hash) slice.items(.hash) else {};
            const keys_array = slice.items(.key);
            const values_array = slice.items(.value);
            const indexes = header.indexes(I);

            const h = checkedHash(ctx, key);
            const start_index = safeTruncate(usize, h);
            const end_index = start_index +% indexes.len;

            var index = start_index;
            var distance_from_start_index: I = 0;
            while (index != end_index) : ({
                index +%= 1;
                distance_from_start_index += 1;
            }) {
                var slot = header.constrainIndex(index);
                var slot_data = indexes[slot];

                // If the slot is empty, there can be no more items in this run.
                // We didn't find a matching item, so this must be new.
                // Put it in the empty slot.
                if (slot_data.isEmpty()) {
                    const new_index = self.entries.addOneAssumeCapacity();
                    indexes[slot] = .{
                        .distance_from_start_index = distance_from_start_index,
                        .entry_index = @as(I, @intCast(new_index)),
                    };

                    // update the hash if applicable
                    if (store_hash) hashes_array.ptr[new_index] = h;

                    return .{
                        .found_existing = false,
                        .key_ptr = &keys_array.ptr[new_index],
                        // workaround for #6974
                        .value_ptr = if (@sizeOf(*V) == 0) undefined else &values_array.ptr[new_index],
                        .index = new_index,
                    };
                }

                // This pointer survives the following append because we call
                // entries.ensureTotalCapacity before getOrPutInternal.
                const i = slot_data.entry_index;
                const hash_match = if (store_hash) h == hashes_array[i] else true;
                if (hash_match and checkedEql(ctx, key, keys_array[i], i)) {
                    return .{
                        .found_existing = true,
                        .key_ptr = &keys_array[slot_data.entry_index],
                        // workaround for #6974
                        .value_ptr = if (@sizeOf(*V) == 0) undefined else &values_array[slot_data.entry_index],
                        .index = slot_data.entry_index,
                    };
                }

                // If the entry is closer to its target than our current distance,
                // the entry we are looking for does not exist.  It would be in
                // this slot instead if it was here.  So stop looking, and switch
                // to insert mode.
                if (slot_data.distance_from_start_index < distance_from_start_index) {
                    // In this case, we did not find the item. We will put a new entry.
                    // However, we will use this index for the new entry, and move
                    // the previous index down the line, to keep the max distance_from_start_index
                    // as small as possible.
                    const new_index = self.entries.addOneAssumeCapacity();
                    if (store_hash) hashes_array.ptr[new_index] = h;
                    indexes[slot] = .{
                        .entry_index = @as(I, @intCast(new_index)),
                        .distance_from_start_index = distance_from_start_index,
                    };
                    distance_from_start_index = slot_data.distance_from_start_index;
                    var displaced_index = slot_data.entry_index;

                    // Find somewhere to put the index we replaced by shifting
                    // following indexes backwards.
                    index +%= 1;
                    distance_from_start_index += 1;
                    while (index != end_index) : ({
                        index +%= 1;
                        distance_from_start_index += 1;
                    }) {
                        slot = header.constrainIndex(index);
                        slot_data = indexes[slot];
                        if (slot_data.isEmpty()) {
                            indexes[slot] = .{
                                .entry_index = displaced_index,
                                .distance_from_start_index = distance_from_start_index,
                            };
                            return .{
                                .found_existing = false,
                                .key_ptr = &keys_array.ptr[new_index],
                                // workaround for #6974
                                .value_ptr = if (@sizeOf(*V) == 0) undefined else &values_array.ptr[new_index],
                                .index = new_index,
                            };
                        }

                        if (slot_data.distance_from_start_index < distance_from_start_index) {
                            indexes[slot] = .{
                                .entry_index = displaced_index,
                                .distance_from_start_index = distance_from_start_index,
                            };
                            displaced_index = slot_data.entry_index;
                            distance_from_start_index = slot_data.distance_from_start_index;
                        }
                    }
                    unreachable;
                }
            }
            unreachable;
        }

        fn getSlotByKey(self: Self, key: anytype, ctx: anytype, header: *IndexHeader, comptime I: type, indexes: []Index(I)) ?usize {
            const slice = self.entries.slice();
            const hashes_array = if (store_hash) slice.items(.hash) else {};
            const keys_array = slice.items(.key);
            const h = checkedHash(ctx, key);

            const start_index = safeTruncate(usize, h);
            const end_index = start_index +% indexes.len;

            var index = start_index;
            var distance_from_start_index: I = 0;
            while (index != end_index) : ({
                index +%= 1;
                distance_from_start_index += 1;
            }) {
                const slot = header.constrainIndex(index);
                const slot_data = indexes[slot];
                if (slot_data.isEmpty() or slot_data.distance_from_start_index < distance_from_start_index)
                    return null;

                const i = slot_data.entry_index;
                const hash_match = if (store_hash) h == hashes_array[i] else true;
                if (hash_match and checkedEql(ctx, key, keys_array[i], i))
                    return slot;
            }
            unreachable;
        }

        fn insertAllEntriesIntoNewHeader(self: *Self, ctx: ByIndexContext, header: *IndexHeader) void {
            switch (header.capacityIndexType()) {
                .u8 => return self.insertAllEntriesIntoNewHeaderGeneric(ctx, header, u8),
                .u16 => return self.insertAllEntriesIntoNewHeaderGeneric(ctx, header, u16),
                .u32 => return self.insertAllEntriesIntoNewHeaderGeneric(ctx, header, u32),
            }
        }
        fn insertAllEntriesIntoNewHeaderGeneric(self: *Self, ctx: ByIndexContext, header: *IndexHeader, comptime I: type) void {
            const slice = self.entries.slice();
            const items = if (store_hash) slice.items(.hash) else slice.items(.key);
            const indexes = header.indexes(I);

            entry_loop: for (items, 0..) |key, i| {
                const h = if (store_hash) key else checkedHash(ctx, key);
                const start_index = safeTruncate(usize, h);
                const end_index = start_index +% indexes.len;
                var index = start_index;
                var entry_index = @as(I, @intCast(i));
                var distance_from_start_index: I = 0;
                while (index != end_index) : ({
                    index +%= 1;
                    distance_from_start_index += 1;
                }) {
                    const slot = header.constrainIndex(index);
                    const next_index = indexes[slot];
                    if (next_index.isEmpty()) {
                        indexes[slot] = .{
                            .distance_from_start_index = distance_from_start_index,
                            .entry_index = entry_index,
                        };
                        continue :entry_loop;
                    }
                    if (next_index.distance_from_start_index < distance_from_start_index) {
                        indexes[slot] = .{
                            .distance_from_start_index = distance_from_start_index,
                            .entry_index = entry_index,
                        };
                        distance_from_start_index = next_index.distance_from_start_index;
                        entry_index = next_index.entry_index;
                    }
                }
                unreachable;
            }
        }

        fn checkedHash(ctx: anytype, key: anytype) u32 {
            // If you get a compile error on the next line, it means that your
            // generic hash function doesn't accept your key.
            return ctx.hash(key);
        }

        fn checkedEql(ctx: anytype, a: anytype, b: K, b_index: usize) bool {
            // If you get a compile error on the next line, it means that your
            // generic eql function doesn't accept (self, adapt key, K, index).
            return ctx.eql(a, b, b_index);
        }

        fn dumpState(self: Self, comptime keyFmt: []const u8, comptime valueFmt: []const u8) void {
            if (@sizeOf(ByIndexContext) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call dumpStateContext instead.");
            self.dumpStateContext(keyFmt, valueFmt, undefined);
        }
        fn dumpStateContext(self: Self, comptime keyFmt: []const u8, comptime valueFmt: []const u8, ctx: Context) void {
            const p = std.debug.print;
            p("{s}:\n", .{@typeName(Self)});
            const slice = self.entries.slice();
            const hash_status = if (store_hash) "stored" else "computed";
            p("  len={} capacity={} hashes {s}\n", .{ slice.len, slice.capacity, hash_status });
            var i: usize = 0;
            const mask: u32 = if (self.index_header) |header| header.mask() else ~@as(u32, 0);
            while (i < slice.len) : (i += 1) {
                const hash = if (store_hash) slice.items(.hash)[i] else checkedHash(ctx, slice.items(.key)[i]);
                if (store_hash) {
                    p(
                        "  [{}]: key=" ++ keyFmt ++ " value=" ++ valueFmt ++ " hash=0x{x} slot=[0x{x}]\n",
                        .{ i, slice.items(.key)[i], slice.items(.value)[i], hash, hash & mask },
                    );
                } else {
                    p(
                        "  [{}]: key=" ++ keyFmt ++ " value=" ++ valueFmt ++ " slot=[0x{x}]\n",
                        .{ i, slice.items(.key)[i], slice.items(.value)[i], hash & mask },
                    );
                }
            }
            if (self.index_header) |header| {
                p("\n", .{});
                switch (header.capacityIndexType()) {
                    .u8 => dumpIndex(header, u8),
                    .u16 => dumpIndex(header, u16),
                    .u32 => dumpIndex(header, u32),
                }
            }
        }
        fn dumpIndex(header: *IndexHeader, comptime I: type) void {
            const p = std.debug.print;
            p("  index len=0x{x} type={}\n", .{ header.length(), header.capacityIndexType() });
            const indexes = header.indexes(I);
            if (indexes.len == 0) return;
            var is_empty = false;
            for (indexes, 0..) |idx, i| {
                if (idx.isEmpty()) {
                    is_empty = true;
                } else {
                    if (is_empty) {
                        is_empty = false;
                        p("  ...\n", .{});
                    }
                    p("  [0x{x}]: [{}] +{}\n", .{ i, idx.entry_index, idx.distance_from_start_index });
                }
            }
            if (is_empty) {
                p("  ...\n", .{});
            }
        }
    };
}

Type FunctionArrayList[src]

A contiguous, growable list of items in memory. This is a wrapper around an array of T values. Initialize with init.

This struct internally stores a std.mem.Allocator for memory management. To manually specify an allocator with each function call see ArrayListUnmanaged.
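
A minimal usage sketch (illustrative, not taken from the library source; assumes `const std = @import("std");`):

test "ArrayList basic usage" {
    var list = std.ArrayList(u32).init(std.testing.allocator);
    defer list.deinit();

    try list.append(42);
    try list.appendSlice(&.{ 1, 2, 3 });

    try std.testing.expectEqual(@as(usize, 4), list.items.len);
    try std.testing.expectEqual(@as(u32, 42), list.items[0]);
}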

Parameters

T: type

Types

TypeSlice[src]

Source Code

Source code
pub const Slice = if (alignment) |a| ([]align(a) T) else []T

Type FunctionSentinelSlice[src]

Parameters

s: T

Source Code

Source code
pub fn SentinelSlice(comptime s: T) type {
    return if (alignment) |a| ([:s]align(a) T) else [:s]T;
}

TypeWriter[src]

Source Code

Source code
pub const Writer = if (T != u8)
    @compileError("The Writer interface is only defined for ArrayList(u8) " ++
        "but the given type is ArrayList(" ++ @typeName(T) ++ ")")
else
    std.io.Writer(*Self, Allocator.Error, appendWrite)

TypeFixedWriter[src]

Source Code

Source code
pub const FixedWriter = std.io.Writer(*Self, Allocator.Error, appendWriteFixed)

Fields

items: Slice

Contents of the list. This field is intended to be accessed directly.

Pointers to elements in this slice are invalidated by various functions of this ArrayList in accordance with the respective documentation. In all cases, "invalidated" means that the memory has been passed to this allocator's resize or free function.

capacity: usize

How many T values this list can hold without allocating additional memory.

allocator: Allocator

Functions

Functioninit[src]

pub fn init(allocator: Allocator) Self

Deinitialize with deinit or use toOwnedSlice.

Parameters

allocator: Allocator

Source Code

Source code
pub fn init(allocator: Allocator) Self {
    return Self{
        .items = &[_]T{},
        .capacity = 0,
        .allocator = allocator,
    };
}
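
A minimal lifecycle sketch (not part of the source; it assumes std.testing.allocator and a Zig test block, but any Allocator works the same way):

const std = @import("std");

test "basic ArrayList usage" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();

    try list.append('a');
    try list.appendSlice("bc");
    try std.testing.expectEqualSlices(u8, "abc", list.items);
}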

FunctioninitCapacity[src]

pub fn initCapacity(allocator: Allocator, num: usize) Allocator.Error!Self

Initialize with capacity to hold num elements. The resulting capacity will equal num exactly. Deinitialize with deinit or use toOwnedSlice.

Parameters

allocator: Allocator
num: usize

Source Code

Source code
pub fn initCapacity(allocator: Allocator, num: usize) Allocator.Error!Self {
    var self = Self.init(allocator);
    try self.ensureTotalCapacityPrecise(num);
    return self;
}
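
As a brief sketch (hypothetical values, not from the source), initCapacity pairs naturally with the AssumeCapacity variants:

const std = @import("std");

test "initCapacity reserves exactly num elements" {
    var list = try std.ArrayList(u32).initCapacity(std.testing.allocator, 8);
    defer list.deinit();
    // Up to 8 appends need no further allocation and cannot fail.
    list.appendAssumeCapacity(42);
    try std.testing.expect(list.capacity == 8);
}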

Functiondeinit[src]

pub fn deinit(self: Self) void

Release all allocated memory.

Parameters

self: Self

Source Code

Source code
pub fn deinit(self: Self) void {
    if (@sizeOf(T) > 0) {
        self.allocator.free(self.allocatedSlice());
    }
}

FunctionfromOwnedSlice[src]

pub fn fromOwnedSlice(allocator: Allocator, slice: Slice) Self

ArrayList takes ownership of the passed in slice. The slice must have been allocated with allocator. Deinitialize with deinit or use toOwnedSlice.

Parameters

allocator: Allocator
slice: Slice

Source Code

Source code
pub fn fromOwnedSlice(allocator: Allocator, slice: Slice) Self {
    return Self{
        .items = slice,
        .capacity = slice.len,
        .allocator = allocator,
    };
}
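
A short ownership sketch (illustrative only; it assumes Allocator.dupe and std.testing.allocator):

const std = @import("std");

test "fromOwnedSlice takes ownership of the buffer" {
    const allocator = std.testing.allocator;
    const buf = try allocator.dupe(u8, "hello");
    var list = std.ArrayList(u8).fromOwnedSlice(allocator, buf);
    defer list.deinit(); // frees buf; do not free it separately
    try std.testing.expectEqualSlices(u8, "hello", list.items);
}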

FunctionfromOwnedSliceSentinel[src]

pub fn fromOwnedSliceSentinel(allocator: Allocator, comptime sentinel: T, slice: [:sentinel]T) Self

ArrayList takes ownership of the passed in slice. The slice must have been allocated with allocator. Deinitialize with deinit or use toOwnedSlice.

Parameters

allocator: Allocator
sentinel: T
slice: [:sentinel]T

Source Code

Source code
pub fn fromOwnedSliceSentinel(allocator: Allocator, comptime sentinel: T, slice: [:sentinel]T) Self {
    return Self{
        .items = slice,
        .capacity = slice.len + 1,
        .allocator = allocator,
    };
}

FunctionmoveToUnmanaged[src]

pub fn moveToUnmanaged(self: *Self) ArrayListAlignedUnmanaged(T, alignment)

Initializes an ArrayListUnmanaged with the items and capacity fields of this ArrayList. Empties this ArrayList.

Parameters

self: *Self

Source Code

Source code
pub fn moveToUnmanaged(self: *Self) ArrayListAlignedUnmanaged(T, alignment) {
    const allocator = self.allocator;
    const result: ArrayListAlignedUnmanaged(T, alignment) = .{ .items = self.items, .capacity = self.capacity };
    self.* = init(allocator);
    return result;
}

FunctiontoOwnedSlice[src]

pub fn toOwnedSlice(self: *Self) Allocator.Error!Slice

The caller owns the returned memory. Empties this ArrayList. Its capacity is cleared, making deinit safe but unnecessary to call.

Parameters

self: *Self

Source Code

Source code
pub fn toOwnedSlice(self: *Self) Allocator.Error!Slice {
    const allocator = self.allocator;

    const old_memory = self.allocatedSlice();
    if (allocator.remap(old_memory, self.items.len)) |new_items| {
        self.* = init(allocator);
        return new_items;
    }

    const new_memory = try allocator.alignedAlloc(T, alignment, self.items.len);
    @memcpy(new_memory, self.items);
    self.clearAndFree();
    return new_memory;
}
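
A sketch of the ownership hand-off (illustrative, not from the source):

const std = @import("std");

test "toOwnedSlice empties the list and transfers ownership" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit(); // safe: the list is emptied by toOwnedSlice
    try list.appendSlice("abc");
    const owned = try list.toOwnedSlice();
    defer std.testing.allocator.free(owned); // the caller now owns this memory
    try std.testing.expect(list.items.len == 0);
}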

FunctiontoOwnedSliceSentinel[src]

pub fn toOwnedSliceSentinel(self: *Self, comptime sentinel: T) Allocator.Error!SentinelSlice(sentinel)

The caller owns the returned memory. Empties this ArrayList.

Parameters

self: *Self
sentinel: T

Source Code

Source code
pub fn toOwnedSliceSentinel(self: *Self, comptime sentinel: T) Allocator.Error!SentinelSlice(sentinel) {
    // This addition can never overflow because `self.items` can never occupy the whole address space
    try self.ensureTotalCapacityPrecise(self.items.len + 1);
    self.appendAssumeCapacity(sentinel);
    const result = try self.toOwnedSlice();
    return result[0 .. result.len - 1 :sentinel];
}
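
For example (an illustrative sketch, assuming a zero sentinel as used for C-style strings):

const std = @import("std");

test "toOwnedSliceSentinel appends a terminator" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit(); // safe: the list is emptied by toOwnedSliceSentinel
    try list.appendSlice("abc");
    const owned = try list.toOwnedSliceSentinel(0); // type is [:0]u8
    defer std.testing.allocator.free(owned);
    // The sentinel lives one past len and may be read through the slice type.
    try std.testing.expect(owned.len == 3 and owned[3] == 0);
}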

Functionclone[src]

pub fn clone(self: Self) Allocator.Error!Self

Creates a copy of this ArrayList, using the same allocator.

Parameters

self: Self

Source Code

Source code
pub fn clone(self: Self) Allocator.Error!Self {
    var cloned = try Self.initCapacity(self.allocator, self.capacity);
    cloned.appendSliceAssumeCapacity(self.items);
    return cloned;
}

Functioninsert[src]

pub fn insert(self: *Self, i: usize, item: T) Allocator.Error!void

Insert item at index i. Moves list[i .. list.len] to higher indices to make room. If i is equal to the length of the list this operation is equivalent to append. This operation is O(N). Invalidates element pointers if additional memory is needed. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
i: usize
item: T

Source Code

Source code
pub fn insert(self: *Self, i: usize, item: T) Allocator.Error!void {
    const dst = try self.addManyAt(i, 1);
    dst[0] = item;
}

FunctioninsertAssumeCapacity[src]

pub fn insertAssumeCapacity(self: *Self, i: usize, item: T) void

Insert item at index i. Moves list[i .. list.len] to higher indices to make room. If i is equal to the length of the list this operation is equivalent to appendAssumeCapacity. This operation is O(N). Asserts that there is enough capacity for the new item. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
i: usize
item: T

Source Code

Source code
pub fn insertAssumeCapacity(self: *Self, i: usize, item: T) void {
    assert(self.items.len < self.capacity);
    self.items.len += 1;

    mem.copyBackwards(T, self.items[i + 1 .. self.items.len], self.items[i .. self.items.len - 1]);
    self.items[i] = item;
}

FunctionaddManyAt[src]

pub fn addManyAt(self: *Self, index: usize, count: usize) Allocator.Error![]T

Add count new elements at position index, which have undefined values. Returns a slice pointing to the newly allocated elements, which becomes invalid after various ArrayList operations. Invalidates pre-existing pointers to elements at and after index. Invalidates all pre-existing element pointers if capacity must be increased to accommodate the new elements. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
index: usize
count: usize

Source Code

Source code
pub fn addManyAt(self: *Self, index: usize, count: usize) Allocator.Error![]T {
    const new_len = try addOrOom(self.items.len, count);

    if (self.capacity >= new_len)
        return addManyAtAssumeCapacity(self, index, count);

    // Here we avoid copying allocated but unused bytes by
    // attempting a resize in place, and falling back to allocating
    // a new buffer and doing our own copy. With a realloc() call,
    // the allocator implementation would pointlessly copy our
    // extra capacity.
    const new_capacity = ArrayListAlignedUnmanaged(T, alignment).growCapacity(self.capacity, new_len);
    const old_memory = self.allocatedSlice();
    if (self.allocator.remap(old_memory, new_capacity)) |new_memory| {
        self.items.ptr = new_memory.ptr;
        self.capacity = new_memory.len;
        return addManyAtAssumeCapacity(self, index, count);
    }

    // Make a new allocation, avoiding `ensureTotalCapacity` in order
    // to avoid extra memory copies.
    const new_memory = try self.allocator.alignedAlloc(T, alignment, new_capacity);
    const to_move = self.items[index..];
    @memcpy(new_memory[0..index], self.items[0..index]);
    @memcpy(new_memory[index + count ..][0..to_move.len], to_move);
    self.allocator.free(old_memory);
    self.items = new_memory[0..new_len];
    self.capacity = new_memory.len;
    // The inserted elements at `new_memory[index..][0..count]` have
    // already been set to `undefined` by memory allocation.
    return new_memory[index..][0..count];
}

FunctionaddManyAtAssumeCapacity[src]

pub fn addManyAtAssumeCapacity(self: *Self, index: usize, count: usize) []T

Add count new elements at position index, which have undefined values. Returns a slice pointing to the newly allocated elements, which becomes invalid after various ArrayList operations. Asserts that there is enough capacity for the new elements. Invalidates pre-existing pointers to elements at and after index, but does not invalidate any before that. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
index: usize
count: usize

Source Code

Source code
pub fn addManyAtAssumeCapacity(self: *Self, index: usize, count: usize) []T {
    const new_len = self.items.len + count;
    assert(self.capacity >= new_len);
    const to_move = self.items[index..];
    self.items.len = new_len;
    mem.copyBackwards(T, self.items[index + count ..], to_move);
    const result = self.items[index..][0..count];
    @memset(result, undefined);
    return result;
}

FunctioninsertSlice[src]

pub fn insertSlice( self: *Self, index: usize, items: []const T, ) Allocator.Error!void

Insert slice items at index i by moving list[i .. list.len] to make room. This operation is O(N). Invalidates pre-existing pointers to elements at and after index. Invalidates all pre-existing element pointers if capacity must be increased to accommodate the new elements. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
index: usize
items: []const T

Source Code

Source code
pub fn insertSlice(
    self: *Self,
    index: usize,
    items: []const T,
) Allocator.Error!void {
    const dst = try self.addManyAt(index, items.len);
    @memcpy(dst, items);
}
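
A small sketch of insert and insertSlice together (illustrative values only):

const std = @import("std");

test "insert and insertSlice shift later elements up" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.appendSlice("ad");
    try list.insert(1, 'b'); // "abd"
    try list.insertSlice(2, "c"); // "abcd"
    try std.testing.expectEqualSlices(u8, "abcd", list.items);
}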

FunctionreplaceRange[src]

pub fn replaceRange(self: *Self, start: usize, len: usize, new_items: []const T) Allocator.Error!void

Grows or shrinks the list as necessary. Invalidates element pointers if additional capacity is allocated. Asserts that the range is in bounds.

Parameters

self: *Self
start: usize
len: usize
new_items: []const T

Source Code

Source code
pub fn replaceRange(self: *Self, start: usize, len: usize, new_items: []const T) Allocator.Error!void {
    var unmanaged = self.moveToUnmanaged();
    defer self.* = unmanaged.toManaged(self.allocator);
    return unmanaged.replaceRange(self.allocator, start, len, new_items);
}
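
A sketch showing that the replacement need not match the removed length (illustrative values):

const std = @import("std");

test "replaceRange can grow or shrink the list" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.appendSlice("axxd");
    try list.replaceRange(1, 2, "bc"); // same length: "abcd"
    try list.replaceRange(1, 2, "BCD"); // longer: "aBCDd"
    try std.testing.expectEqualSlices(u8, "aBCDd", list.items);
}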

FunctionreplaceRangeAssumeCapacity[src]

pub fn replaceRangeAssumeCapacity(self: *Self, start: usize, len: usize, new_items: []const T) void

Grows or shrinks the list as necessary. Never invalidates element pointers. Asserts the capacity is enough for additional items.

Parameters

self: *Self
start: usize
len: usize
new_items: []const T

Source Code

Source code
pub fn replaceRangeAssumeCapacity(self: *Self, start: usize, len: usize, new_items: []const T) void {
    var unmanaged = self.moveToUnmanaged();
    defer self.* = unmanaged.toManaged(self.allocator);
    return unmanaged.replaceRangeAssumeCapacity(start, len, new_items);
}

Functionappend[src]

pub fn append(self: *Self, item: T) Allocator.Error!void

Extends the list by 1 element. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
item: T

Source Code

Source code
pub fn append(self: *Self, item: T) Allocator.Error!void {
    const new_item_ptr = try self.addOne();
    new_item_ptr.* = item;
}

FunctionappendAssumeCapacity[src]

pub fn appendAssumeCapacity(self: *Self, item: T) void

Extends the list by 1 element. Never invalidates element pointers. Asserts that the list can hold one additional item.

Parameters

self: *Self
item: T

Source Code

Source code
pub fn appendAssumeCapacity(self: *Self, item: T) void {
    self.addOneAssumeCapacity().* = item;
}

FunctionorderedRemove[src]

pub fn orderedRemove(self: *Self, i: usize) T

Remove the element at index i, shift the elements after index i toward the front, and return the removed element. Invalidates element pointers from index i to the end of the list. This operation is O(N) and preserves item order. Use swapRemove if order preservation is not important. Asserts that the index is in bounds. Asserts that the list is not empty.

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn orderedRemove(self: *Self, i: usize) T {
    const old_item = self.items[i];
    self.replaceRangeAssumeCapacity(i, 1, &.{});
    return old_item;
}

FunctionswapRemove[src]

pub fn swapRemove(self: *Self, i: usize) T

Removes the element at the specified index and returns it. The empty slot is filled from the end of the list. This operation is O(1). This may not preserve item order. Use orderedRemove if you need to preserve order. Asserts that the list is not empty. Asserts that the index is in bounds.

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn swapRemove(self: *Self, i: usize) T {
    if (self.items.len - 1 == i) return self.pop().?;

    const old_item = self.items[i];
    self.items[i] = self.pop().?;
    return old_item;
}
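
A side-by-side sketch of the two removal strategies (illustrative values):

const std = @import("std");

test "orderedRemove keeps order, swapRemove trades order for O(1)" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.appendSlice("abcd");
    _ = list.orderedRemove(1); // "acd": O(N), order preserved
    _ = list.swapRemove(0); // "dc": O(1), last element fills the hole
    try std.testing.expectEqualSlices(u8, "dc", list.items);
}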

FunctionappendSlice[src]

pub fn appendSlice(self: *Self, items: []const T) Allocator.Error!void

Append the slice of items to the list. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn appendSlice(self: *Self, items: []const T) Allocator.Error!void {
    try self.ensureUnusedCapacity(items.len);
    self.appendSliceAssumeCapacity(items);
}

FunctionappendSliceAssumeCapacity[src]

pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void

Append the slice of items to the list. Never invalidates element pointers. Asserts that the list can hold the additional items.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void {
    const old_len = self.items.len;
    const new_len = old_len + items.len;
    assert(new_len <= self.capacity);
    self.items.len = new_len;
    @memcpy(self.items[old_len..][0..items.len], items);
}

FunctionappendUnalignedSlice[src]

pub fn appendUnalignedSlice(self: *Self, items: []align(1) const T) Allocator.Error!void

Append an unaligned slice of items to the list. Allocates more memory as necessary. Only call this function if calling appendSlice instead would be a compile error. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
items: []align(1) const T

Source Code

Source code
pub fn appendUnalignedSlice(self: *Self, items: []align(1) const T) Allocator.Error!void {
    try self.ensureUnusedCapacity(items.len);
    self.appendUnalignedSliceAssumeCapacity(items);
}

FunctionappendUnalignedSliceAssumeCapacity[src]

pub fn appendUnalignedSliceAssumeCapacity(self: *Self, items: []align(1) const T) void

Append the slice of items to the list. Never invalidates element pointers. This function is only needed when calling appendSliceAssumeCapacity instead would be a compile error due to the alignment of the items parameter. Asserts that the list can hold the additional items.

Parameters

self: *Self
items: []align(1) const T

Source Code

Source code
pub fn appendUnalignedSliceAssumeCapacity(self: *Self, items: []align(1) const T) void {
    const old_len = self.items.len;
    const new_len = old_len + items.len;
    assert(new_len <= self.capacity);
    self.items.len = new_len;
    @memcpy(self.items[old_len..][0..items.len], items);
}

Functionwriter[src]

pub fn writer(self: *Self) Writer

Initializes a Writer which will append to the list.

Parameters

self: *Self

Source Code

Source code
pub fn writer(self: *Self) Writer {
    return .{ .context = self };
}
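
A sketch of formatted appending through the Writer interface, which is only defined for ArrayList(u8):

const std = @import("std");

test "writer appends formatted output" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.writer().print("{d}+{d}={d}", .{ 2, 3, 5 });
    try std.testing.expectEqualSlices(u8, "2+3=5", list.items);
}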

FunctionfixedWriter[src]

pub fn fixedWriter(self: *Self) FixedWriter

Initializes a Writer which will append to the list but will return error.OutOfMemory rather than increasing capacity.

Parameters

self: *Self

Source Code

Source code
pub fn fixedWriter(self: *Self) FixedWriter {
    return .{ .context = self };
}

FunctionappendNTimes[src]

pub inline fn appendNTimes(self: *Self, value: T, n: usize) Allocator.Error!void

Append a value to the list n times. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed. The function is inline so that a comptime-known value parameter will have a more optimal memset codegen in case it has a repeated byte pattern.

Parameters

self: *Self
value: T
n: usize

Source Code

Source code
pub inline fn appendNTimes(self: *Self, value: T, n: usize) Allocator.Error!void {
    const old_len = self.items.len;
    try self.resize(try addOrOom(old_len, n));
    @memset(self.items[old_len..self.items.len], value);
}
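
For instance (illustrative byte value):

const std = @import("std");

test "appendNTimes repeats one value" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.appendNTimes(0xAA, 4);
    try std.testing.expectEqualSlices(u8, &[_]u8{ 0xAA, 0xAA, 0xAA, 0xAA }, list.items);
}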

FunctionappendNTimesAssumeCapacity[src]

pub inline fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void

Append a value to the list n times. Never invalidates element pointers. The function is inline so that a comptime-known value parameter will have a more optimal memset codegen in case it has a repeated byte pattern. Asserts that the list can hold the additional items.

Parameters

self: *Self
value: T
n: usize

Source Code

Source code
pub inline fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void {
    const new_len = self.items.len + n;
    assert(new_len <= self.capacity);
    @memset(self.items.ptr[self.items.len..new_len], value);
    self.items.len = new_len;
}

Functionresize[src]

pub fn resize(self: *Self, new_len: usize) Allocator.Error!void

Adjust the list length to new_len. Additional elements contain the value undefined. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn resize(self: *Self, new_len: usize) Allocator.Error!void {
    try self.ensureTotalCapacity(new_len);
    self.items.len = new_len;
}

FunctionshrinkAndFree[src]

pub fn shrinkAndFree(self: *Self, new_len: usize) void

Reduce allocated capacity to new_len. May invalidate element pointers. Asserts that the new length is less than or equal to the previous length.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkAndFree(self: *Self, new_len: usize) void {
    var unmanaged = self.moveToUnmanaged();
    unmanaged.shrinkAndFree(self.allocator, new_len);
    self.* = unmanaged.toManaged(self.allocator);
}

FunctionshrinkRetainingCapacity[src]

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void

Reduce length to new_len. Invalidates element pointers for the elements items[new_len..]. Asserts that the new length is less than or equal to the previous length.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
    assert(new_len <= self.items.len);
    self.items.len = new_len;
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Invalidates all element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.items.len = 0;
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self) void

Invalidates all element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn clearAndFree(self: *Self) void {
    self.allocator.free(self.allocatedSlice());
    self.items.len = 0;
    self.capacity = 0;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) Allocator.Error!void

If the current capacity is less than new_capacity, this function will modify the array so that it can hold at least new_capacity items. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) Allocator.Error!void {
    if (@sizeOf(T) == 0) {
        self.capacity = math.maxInt(usize);
        return;
    }

    if (self.capacity >= new_capacity) return;

    const better_capacity = ArrayListAlignedUnmanaged(T, alignment).growCapacity(self.capacity, new_capacity);
    return self.ensureTotalCapacityPrecise(better_capacity);
}

FunctionensureTotalCapacityPrecise[src]

pub fn ensureTotalCapacityPrecise(self: *Self, new_capacity: usize) Allocator.Error!void

If the current capacity is less than new_capacity, this function will modify the array so that it can hold exactly new_capacity items. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacityPrecise(self: *Self, new_capacity: usize) Allocator.Error!void {
    if (@sizeOf(T) == 0) {
        self.capacity = math.maxInt(usize);
        return;
    }

    if (self.capacity >= new_capacity) return;

    // Here we avoid copying allocated but unused bytes by
    // attempting a resize in place, and falling back to allocating
    // a new buffer and doing our own copy. With a realloc() call,
    // the allocator implementation would pointlessly copy our
    // extra capacity.
    const old_memory = self.allocatedSlice();
    if (self.allocator.remap(old_memory, new_capacity)) |new_memory| {
        self.items.ptr = new_memory.ptr;
        self.capacity = new_memory.len;
    } else {
        const new_memory = try self.allocator.alignedAlloc(T, alignment, new_capacity);
        @memcpy(new_memory[0..self.items.len], self.items);
        self.allocator.free(old_memory);
        self.items.ptr = new_memory.ptr;
        self.capacity = new_memory.len;
    }
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) Allocator.Error!void

Modify the array so that it can hold at least additional_count more items. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
additional_count: usize

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) Allocator.Error!void {
    return self.ensureTotalCapacity(try addOrOom(self.items.len, additional_count));
}
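
A sketch of the common reserve-then-fill pattern, which hoists the fallible allocation out of a loop (illustrative values):

const std = @import("std");

test "reserve once, then append infallibly" {
    var list = std.ArrayList(u32).init(std.testing.allocator);
    defer list.deinit();
    try list.ensureUnusedCapacity(3);
    // Inside the loop no allocation happens and no error union is returned.
    for ([_]u32{ 1, 2, 3 }) |x| list.appendAssumeCapacity(x);
    try std.testing.expectEqualSlices(u32, &[_]u32{ 1, 2, 3 }, list.items);
}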

FunctionexpandToCapacity[src]

pub fn expandToCapacity(self: *Self) void

Increases the array's length to match the full capacity that is already allocated. The new elements have undefined values. Never invalidates element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn expandToCapacity(self: *Self) void {
    self.items.len = self.capacity;
}

FunctionaddOne[src]

pub fn addOne(self: *Self) Allocator.Error!*T

Increase the length by 1, returning a pointer to the new item. The returned pointer becomes invalid when the list is resized.

Parameters

self: *Self

Source Code

Source code
pub fn addOne(self: *Self) Allocator.Error!*T {
    // This can never overflow because `self.items` can never occupy the whole address space
    const newlen = self.items.len + 1;
    try self.ensureTotalCapacity(newlen);
    return self.addOneAssumeCapacity();
}

FunctionaddOneAssumeCapacity[src]

pub fn addOneAssumeCapacity(self: *Self) *T

Increase length by 1, returning pointer to the new item. The returned pointer becomes invalid when the list is resized. Never invalidates element pointers. Asserts that the list can hold one additional item.

Parameters

self: *Self

Source Code

Source code
pub fn addOneAssumeCapacity(self: *Self) *T {
    assert(self.items.len < self.capacity);
    self.items.len += 1;
    return &self.items[self.items.len - 1];
}

FunctionaddManyAsArray[src]

pub fn addManyAsArray(self: *Self, comptime n: usize) Allocator.Error!*[n]T

Resize the array, adding n new elements, which have undefined values. The return value is a pointer to an array of the newly allocated elements. The returned pointer becomes invalid when the list is resized. Resizes the list if self.capacity is not large enough.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsArray(self: *Self, comptime n: usize) Allocator.Error!*[n]T {
    const prev_len = self.items.len;
    try self.resize(try addOrOom(self.items.len, n));
    return self.items[prev_len..][0..n];
}
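
A sketch using the comptime length to get a fixed-size destination (illustrative):

const std = @import("std");

test "addManyAsArray returns a pointer to [n]T" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    const dst = try list.addManyAsArray(4); // *[4]u8, contents undefined
    @memcpy(dst, "abcd");
    try std.testing.expectEqualSlices(u8, "abcd", list.items);
}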

FunctionaddManyAsArrayAssumeCapacity[src]

pub fn addManyAsArrayAssumeCapacity(self: *Self, comptime n: usize) *[n]T

Resize the array, adding n new elements, which have undefined values. The return value is a pointer to an array of the newly allocated elements. Never invalidates pre-existing element pointers, but the returned pointer becomes invalid when the list is resized. Asserts that the list can hold the additional items.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsArrayAssumeCapacity(self: *Self, comptime n: usize) *[n]T {
    assert(self.items.len + n <= self.capacity);
    const prev_len = self.items.len;
    self.items.len += n;
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsSlice[src]

pub fn addManyAsSlice(self: *Self, n: usize) Allocator.Error![]T

Resize the array, adding n new elements, which have undefined values. The return value is a slice pointing to the newly allocated elements. The returned pointer becomes invalid when the list is resized. Resizes list if self.capacity is not large enough.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsSlice(self: *Self, n: usize) Allocator.Error![]T {
    const prev_len = self.items.len;
    try self.resize(try addOrOom(self.items.len, n));
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsSliceAssumeCapacity[src]

pub fn addManyAsSliceAssumeCapacity(self: *Self, n: usize) []T

Resize the array, adding n new elements, which have undefined values. The return value is a slice pointing to the newly allocated elements. Never invalidates element pointers. The returned pointer becomes invalid when the list is resized. Asserts that the list can hold the additional items.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsSliceAssumeCapacity(self: *Self, n: usize) []T {
    assert(self.items.len + n <= self.capacity);
    const prev_len = self.items.len;
    self.items.len += n;
    return self.items[prev_len..][0..n];
}

Functionpop[src]

pub fn pop(self: *Self) ?T

Remove and return the last element from the list, or return null if list is empty. Invalidates element pointers to the removed element, if any.

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?T {
    if (self.items.len == 0) return null;
    const val = self.items[self.items.len - 1];
    self.items.len -= 1;
    return val;
}
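
For example (illustrative):

const std = @import("std");

test "pop returns null once the list is empty" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.append('x');
    try std.testing.expect(list.pop().? == 'x');
    try std.testing.expect(list.pop() == null);
}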

FunctionallocatedSlice[src]

pub fn allocatedSlice(self: Self) Slice

Returns a slice of all the items plus the extra capacity, whose memory contents are undefined.

Parameters

self: Self

Source Code

Source code
pub fn allocatedSlice(self: Self) Slice {
    // `items.len` is the length, not the capacity.
    return self.items.ptr[0..self.capacity];
}

FunctionunusedCapacitySlice[src]

pub fn unusedCapacitySlice(self: Self) []T

Returns a slice of only the extra capacity after items. This can be useful for writing directly into an ArrayList. Note that such an operation must be followed up with a direct modification of self.items.len.

Parameters

self: Self

Source Code

Source code
pub fn unusedCapacitySlice(self: Self) []T {
    return self.allocatedSlice()[self.items.len..];
}
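
A sketch of writing into spare capacity and then committing the length by hand (illustrative):

const std = @import("std");

test "write directly into unused capacity" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.ensureUnusedCapacity(3);
    const spare = list.unusedCapacitySlice();
    @memcpy(spare[0..3], "abc");
    list.items.len += 3; // commit the bytes written above
    try std.testing.expectEqualSlices(u8, "abc", list.items);
}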

FunctiongetLast[src]

pub fn getLast(self: Self) T

Returns the last element from the list. Asserts that the list is not empty.

Parameters

self: Self

Source Code

Source code
pub fn getLast(self: Self) T {
    const val = self.items[self.items.len - 1];
    return val;
}

FunctiongetLastOrNull[src]

pub fn getLastOrNull(self: Self) ?T

Returns the last element from the list, or null if list is empty.

Parameters

self: Self

Source Code

Source code
pub fn getLastOrNull(self: Self) ?T {
    if (self.items.len == 0) return null;
    return self.getLast();
}

Source Code

Source code
pub fn ArrayList(comptime T: type) type {
    return ArrayListAligned(T, null);
}

Type FunctionArrayListAligned[src]

A contiguous, growable list of arbitrarily aligned items in memory. This is a wrapper around an array of T values aligned to alignment-byte addresses. If the specified alignment is null, then @alignOf(T) is used. Initialize with init.

This struct internally stores a std.mem.Allocator for memory management. To manually specify an allocator with each function call see ArrayListAlignedUnmanaged.

Parameters

T: type
alignment: ?u29
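
A brief sketch of the alignment parameter (64 is a hypothetical choice; any power of two no smaller than @alignOf(T) is valid):

const std = @import("std");

test "over-aligned item storage" {
    var list = std.ArrayListAligned(u8, 64).init(std.testing.allocator);
    defer list.deinit();
    try list.append(1);
    // The backing buffer is allocated on a 64-byte boundary.
    try std.testing.expect(@intFromPtr(list.items.ptr) % 64 == 0);
}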

Types

TypeSlice[src]

Source Code

Source code
pub const Slice = if (alignment) |a| ([]align(a) T) else []T

Type FunctionSentinelSlice[src]

Parameters

s: T

Source Code

Source code
pub fn SentinelSlice(comptime s: T) type {
    return if (alignment) |a| ([:s]align(a) T) else [:s]T;
}

TypeWriter[src]

Source Code

Source code
pub const Writer = if (T != u8)
    @compileError("The Writer interface is only defined for ArrayList(u8) " ++
        "but the given type is ArrayList(" ++ @typeName(T) ++ ")")
else
    std.io.Writer(*Self, Allocator.Error, appendWrite)

TypeFixedWriter[src]

Source Code

Source code
pub const FixedWriter = std.io.Writer(*Self, Allocator.Error, appendWriteFixed)

Fields

items: Slice

Contents of the list. This field is intended to be accessed directly.

Pointers to elements in this slice are invalidated by various functions of this ArrayList in accordance with the respective documentation. In all cases, "invalidated" means that the memory has been passed to this allocator's resize or free function.

capacity: usize

How many T values this list can hold without allocating additional memory.

allocator: Allocator

Functions

Functioninit[src]

pub fn init(allocator: Allocator) Self

Deinitialize with deinit or use toOwnedSlice.

Parameters

allocator: Allocator

Source Code

Source code
pub fn init(allocator: Allocator) Self {
    return Self{
        .items = &[_]T{},
        .capacity = 0,
        .allocator = allocator,
    };
}

FunctioninitCapacity[src]

pub fn initCapacity(allocator: Allocator, num: usize) Allocator.Error!Self

Initialize with capacity to hold num elements. The resulting capacity will equal num exactly. Deinitialize with deinit or use toOwnedSlice.

Parameters

allocator: Allocator
num: usize

Source Code

Source code
pub fn initCapacity(allocator: Allocator, num: usize) Allocator.Error!Self {
    var self = Self.init(allocator);
    try self.ensureTotalCapacityPrecise(num);
    return self;
}

Functiondeinit[src]

pub fn deinit(self: Self) void

Release all allocated memory.

Parameters

self: Self

Source Code

Source code
pub fn deinit(self: Self) void {
    if (@sizeOf(T) > 0) {
        self.allocator.free(self.allocatedSlice());
    }
}

FunctionfromOwnedSlice[src]

pub fn fromOwnedSlice(allocator: Allocator, slice: Slice) Self

ArrayList takes ownership of the passed in slice. The slice must have been allocated with allocator. Deinitialize with deinit or use toOwnedSlice.

Parameters

allocator: Allocator
slice: Slice

Source Code

Source code
pub fn fromOwnedSlice(allocator: Allocator, slice: Slice) Self {
    return Self{
        .items = slice,
        .capacity = slice.len,
        .allocator = allocator,
    };
}

FunctionfromOwnedSliceSentinel[src]

pub fn fromOwnedSliceSentinel(allocator: Allocator, comptime sentinel: T, slice: [:sentinel]T) Self

ArrayList takes ownership of the passed in slice. The slice must have been allocated with allocator. Deinitialize with deinit or use toOwnedSlice.

Parameters

allocator: Allocator
sentinel: T
slice: [:sentinel]T

Source Code

Source code
pub fn fromOwnedSliceSentinel(allocator: Allocator, comptime sentinel: T, slice: [:sentinel]T) Self {
    return Self{
        .items = slice,
        .capacity = slice.len + 1,
        .allocator = allocator,
    };
}

FunctionmoveToUnmanaged[src]

pub fn moveToUnmanaged(self: *Self) ArrayListAlignedUnmanaged(T, alignment)

Initializes an ArrayListUnmanaged with the items and capacity fields of this ArrayList. Empties this ArrayList.

Parameters

self: *Self

Source Code

Source code
pub fn moveToUnmanaged(self: *Self) ArrayListAlignedUnmanaged(T, alignment) {
    const allocator = self.allocator;
    const result: ArrayListAlignedUnmanaged(T, alignment) = .{ .items = self.items, .capacity = self.capacity };
    self.* = init(allocator);
    return result;
}

FunctiontoOwnedSlice[src]

pub fn toOwnedSlice(self: *Self) Allocator.Error!Slice

The caller owns the returned memory. Empties this ArrayList. Its capacity is cleared, making deinit safe but unnecessary to call.

Parameters

self: *Self

Source Code

Source code
pub fn toOwnedSlice(self: *Self) Allocator.Error!Slice {
    const allocator = self.allocator;

    const old_memory = self.allocatedSlice();
    if (allocator.remap(old_memory, self.items.len)) |new_items| {
        self.* = init(allocator);
        return new_items;
    }

    const new_memory = try allocator.alignedAlloc(T, alignment, self.items.len);
    @memcpy(new_memory, self.items);
    self.clearAndFree();
    return new_memory;
}

FunctiontoOwnedSliceSentinel[src]

pub fn toOwnedSliceSentinel(self: *Self, comptime sentinel: T) Allocator.Error!SentinelSlice(sentinel)

The caller owns the returned memory. Empties this ArrayList.

Parameters

self: *Self
sentinel: T

Source Code

Source code
pub fn toOwnedSliceSentinel(self: *Self, comptime sentinel: T) Allocator.Error!SentinelSlice(sentinel) {
    // This addition can never overflow because `self.items` can never occupy the whole address space
    try self.ensureTotalCapacityPrecise(self.items.len + 1);
    self.appendAssumeCapacity(sentinel);
    const result = try self.toOwnedSlice();
    return result[0 .. result.len - 1 :sentinel];
}

Functionclone[src]

pub fn clone(self: Self) Allocator.Error!Self

Creates a copy of this ArrayList, using the same allocator.

Parameters

self: Self

Source Code

Source code
pub fn clone(self: Self) Allocator.Error!Self {
    var cloned = try Self.initCapacity(self.allocator, self.capacity);
    cloned.appendSliceAssumeCapacity(self.items);
    return cloned;
}

Functioninsert[src]

pub fn insert(self: *Self, i: usize, item: T) Allocator.Error!void

Insert item at index i. Moves list[i .. list.len] to higher indices to make room. If i is equal to the length of the list this operation is equivalent to append. This operation is O(N). Invalidates element pointers if additional memory is needed. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
i: usize
item: T

Source Code

Source code
pub fn insert(self: *Self, i: usize, item: T) Allocator.Error!void {
    const dst = try self.addManyAt(i, 1);
    dst[0] = item;
}

FunctioninsertAssumeCapacity[src]

pub fn insertAssumeCapacity(self: *Self, i: usize, item: T) void

Insert item at index i. Moves list[i .. list.len] to higher indices to make room. If i is equal to the length of the list this operation is equivalent to appendAssumeCapacity. This operation is O(N). Asserts that there is enough capacity for the new item. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
i: usize
item: T

Source Code

Source code
pub fn insertAssumeCapacity(self: *Self, i: usize, item: T) void {
    assert(self.items.len < self.capacity);
    self.items.len += 1;

    mem.copyBackwards(T, self.items[i + 1 .. self.items.len], self.items[i .. self.items.len - 1]);
    self.items[i] = item;
}

FunctionaddManyAt[src]

pub fn addManyAt(self: *Self, index: usize, count: usize) Allocator.Error![]T

Add count new elements at position index, which have undefined values. Returns a slice pointing to the newly allocated elements, which becomes invalid after various ArrayList operations. Invalidates pre-existing pointers to elements at and after index. Invalidates all pre-existing element pointers if capacity must be increased to accommodate the new elements. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
index: usize
count: usize

Source Code

Source code
pub fn addManyAt(self: *Self, index: usize, count: usize) Allocator.Error![]T {
    const new_len = try addOrOom(self.items.len, count);

    if (self.capacity >= new_len)
        return addManyAtAssumeCapacity(self, index, count);

    // Here we avoid copying allocated but unused bytes by
    // attempting a resize in place, and falling back to allocating
    // a new buffer and doing our own copy. With a realloc() call,
    // the allocator implementation would pointlessly copy our
    // extra capacity.
    const new_capacity = ArrayListAlignedUnmanaged(T, alignment).growCapacity(self.capacity, new_len);
    const old_memory = self.allocatedSlice();
    if (self.allocator.remap(old_memory, new_capacity)) |new_memory| {
        self.items.ptr = new_memory.ptr;
        self.capacity = new_memory.len;
        return addManyAtAssumeCapacity(self, index, count);
    }

    // Make a new allocation, avoiding `ensureTotalCapacity` in order
    // to avoid extra memory copies.
    const new_memory = try self.allocator.alignedAlloc(T, alignment, new_capacity);
    const to_move = self.items[index..];
    @memcpy(new_memory[0..index], self.items[0..index]);
    @memcpy(new_memory[index + count ..][0..to_move.len], to_move);
    self.allocator.free(old_memory);
    self.items = new_memory[0..new_len];
    self.capacity = new_memory.len;
    // The inserted elements at `new_memory[index..][0..count]` have
    // already been set to `undefined` by memory allocation.
    return new_memory[index..][0..count];
}

FunctionaddManyAtAssumeCapacity[src]

pub fn addManyAtAssumeCapacity(self: *Self, index: usize, count: usize) []T

Add count new elements at position index, which have undefined values. Returns a slice pointing to the newly allocated elements, which becomes invalid after various ArrayList operations. Asserts that there is enough capacity for the new elements. Invalidates pre-existing pointers to elements at and after index, but does not invalidate any before that. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
index: usize
count: usize

Source Code

Source code
pub fn addManyAtAssumeCapacity(self: *Self, index: usize, count: usize) []T {
    const new_len = self.items.len + count;
    assert(self.capacity >= new_len);
    const to_move = self.items[index..];
    self.items.len = new_len;
    mem.copyBackwards(T, self.items[index + count ..], to_move);
    const result = self.items[index..][0..count];
    @memset(result, undefined);
    return result;
}

FunctioninsertSlice[src]

pub fn insertSlice( self: *Self, index: usize, items: []const T, ) Allocator.Error!void

Insert slice items at index i by moving list[i .. list.len] to make room. This operation is O(N). Invalidates pre-existing pointers to elements at and after index. Invalidates all pre-existing element pointers if capacity must be increased to accommodate the new elements. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
index: usize
items: []const T

Source Code

Source code
pub fn insertSlice(
    self: *Self,
    index: usize,
    items: []const T,
) Allocator.Error!void {
    const dst = try self.addManyAt(index, items.len);
    @memcpy(dst, items);
}

FunctionreplaceRange[src]

pub fn replaceRange(self: *Self, start: usize, len: usize, new_items: []const T) Allocator.Error!void

Grows or shrinks the list as necessary. Invalidates element pointers if additional capacity is allocated. Asserts that the range is in bounds.

Parameters

self: *Self
start: usize
len: usize
new_items: []const T

Source Code

Source code
pub fn replaceRange(self: *Self, start: usize, len: usize, new_items: []const T) Allocator.Error!void {
    var unmanaged = self.moveToUnmanaged();
    defer self.* = unmanaged.toManaged(self.allocator);
    return unmanaged.replaceRange(self.allocator, start, len, new_items);
}

FunctionreplaceRangeAssumeCapacity[src]

pub fn replaceRangeAssumeCapacity(self: *Self, start: usize, len: usize, new_items: []const T) void

Grows or shrinks the list as necessary. Never invalidates element pointers. Asserts the capacity is enough for additional items.

Parameters

self: *Self
start: usize
len: usize
new_items: []const T

Source Code

Source code
pub fn replaceRangeAssumeCapacity(self: *Self, start: usize, len: usize, new_items: []const T) void {
    var unmanaged = self.moveToUnmanaged();
    defer self.* = unmanaged.toManaged(self.allocator);
    return unmanaged.replaceRangeAssumeCapacity(start, len, new_items);
}

Functionappend[src]

pub fn append(self: *Self, item: T) Allocator.Error!void

Extends the list by 1 element. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
item: T

Source Code

Source code
pub fn append(self: *Self, item: T) Allocator.Error!void {
    const new_item_ptr = try self.addOne();
    new_item_ptr.* = item;
}

FunctionappendAssumeCapacity[src]

pub fn appendAssumeCapacity(self: *Self, item: T) void

Extends the list by 1 element. Never invalidates element pointers. Asserts that the list can hold one additional item.

Parameters

self: *Self
item: T

Source Code

Source code
pub fn appendAssumeCapacity(self: *Self, item: T) void {
    self.addOneAssumeCapacity().* = item;
}

FunctionorderedRemove[src]

pub fn orderedRemove(self: *Self, i: usize) T

Remove the element at index i, shift the elements after index i toward the front, and return the removed element. Invalidates element pointers from index i to the end of the list. This operation is O(N) and preserves item order. Use swapRemove if order preservation is not important. Asserts that the index is in bounds. Asserts that the list is not empty.

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn orderedRemove(self: *Self, i: usize) T {
    const old_item = self.items[i];
    self.replaceRangeAssumeCapacity(i, 1, &.{});
    return old_item;
}

FunctionswapRemove[src]

pub fn swapRemove(self: *Self, i: usize) T

Removes the element at the specified index and returns it. The empty slot is filled from the end of the list. This operation is O(1). This may not preserve item order. Use orderedRemove if you need to preserve order. Asserts that the list is not empty. Asserts that the index is in bounds.

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn swapRemove(self: *Self, i: usize) T {
    if (self.items.len - 1 == i) return self.pop().?;

    const old_item = self.items[i];
    self.items[i] = self.pop().?;
    return old_item;
}

FunctionappendSlice[src]

pub fn appendSlice(self: *Self, items: []const T) Allocator.Error!void

Append the slice of items to the list. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn appendSlice(self: *Self, items: []const T) Allocator.Error!void {
    try self.ensureUnusedCapacity(items.len);
    self.appendSliceAssumeCapacity(items);
}

FunctionappendSliceAssumeCapacity[src]

pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void

Append the slice of items to the list. Never invalidates element pointers. Asserts that the list can hold the additional items.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void {
    const old_len = self.items.len;
    const new_len = old_len + items.len;
    assert(new_len <= self.capacity);
    self.items.len = new_len;
    @memcpy(self.items[old_len..][0..items.len], items);
}

FunctionappendUnalignedSlice[src]

pub fn appendUnalignedSlice(self: *Self, items: []align(1) const T) Allocator.Error!void

Append an unaligned slice of items to the list. Allocates more memory as necessary. Only call this function if calling appendSlice instead would be a compile error. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
items: []align(1) const T

Source Code

Source code
pub fn appendUnalignedSlice(self: *Self, items: []align(1) const T) Allocator.Error!void {
    try self.ensureUnusedCapacity(items.len);
    self.appendUnalignedSliceAssumeCapacity(items);
}

FunctionappendUnalignedSliceAssumeCapacity[src]

pub fn appendUnalignedSliceAssumeCapacity(self: *Self, items: []align(1) const T) void

Append the slice of items to the list. Never invalidates element pointers. This function is only needed when calling appendSliceAssumeCapacity instead would be a compile error due to the alignment of the items parameter. Asserts that the list can hold the additional items.

Parameters

self: *Self
items: []align(1) const T

Source Code

Source code
pub fn appendUnalignedSliceAssumeCapacity(self: *Self, items: []align(1) const T) void {
    const old_len = self.items.len;
    const new_len = old_len + items.len;
    assert(new_len <= self.capacity);
    self.items.len = new_len;
    @memcpy(self.items[old_len..][0..items.len], items);
}

Functionwriter[src]

pub fn writer(self: *Self) Writer

Initializes a Writer which will append to the list.

Parameters

self: *Self

Source Code

Source code
pub fn writer(self: *Self) Writer {
    return .{ .context = self };
}

FunctionfixedWriter[src]

pub fn fixedWriter(self: *Self) FixedWriter

Initializes a Writer which will append to the list but will return error.OutOfMemory rather than increasing capacity.

Parameters

self: *Self

Source Code

Source code
pub fn fixedWriter(self: *Self) FixedWriter {
    return .{ .context = self };
}

FunctionappendNTimes[src]

pub inline fn appendNTimes(self: *Self, value: T, n: usize) Allocator.Error!void

Append a value to the list n times. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed. The function is inline so that a comptime-known value parameter will have a more optimal memset codegen in case it has a repeated byte pattern.

Parameters

self: *Self
value: T
n: usize

Source Code

Source code
pub inline fn appendNTimes(self: *Self, value: T, n: usize) Allocator.Error!void {
    const old_len = self.items.len;
    try self.resize(try addOrOom(old_len, n));
    @memset(self.items[old_len..self.items.len], value);
}

FunctionappendNTimesAssumeCapacity[src]

pub inline fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void

Append a value to the list n times. Never invalidates element pointers. The function is inline so that a comptime-known value parameter will have a more optimal memset codegen in case it has a repeated byte pattern. Asserts that the list can hold the additional items.

Parameters

self: *Self
value: T
n: usize

Source Code

Source code
pub inline fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void {
    const new_len = self.items.len + n;
    assert(new_len <= self.capacity);
    @memset(self.items.ptr[self.items.len..new_len], value);
    self.items.len = new_len;
}

Functionresize[src]

pub fn resize(self: *Self, new_len: usize) Allocator.Error!void

Adjust the list length to new_len. Additional elements contain the value undefined. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn resize(self: *Self, new_len: usize) Allocator.Error!void {
    try self.ensureTotalCapacity(new_len);
    self.items.len = new_len;
}

FunctionshrinkAndFree[src]

pub fn shrinkAndFree(self: *Self, new_len: usize) void

Reduce allocated capacity to new_len. May invalidate element pointers. Asserts that the new length is less than or equal to the previous length.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkAndFree(self: *Self, new_len: usize) void {
    var unmanaged = self.moveToUnmanaged();
    unmanaged.shrinkAndFree(self.allocator, new_len);
    self.* = unmanaged.toManaged(self.allocator);
}

FunctionshrinkRetainingCapacity[src]

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void

Reduce length to new_len. Invalidates element pointers for the elements items[new_len..]. Asserts that the new length is less than or equal to the previous length.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
    assert(new_len <= self.items.len);
    self.items.len = new_len;
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Invalidates all element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.items.len = 0;
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self) void

Invalidates all element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn clearAndFree(self: *Self) void {
    self.allocator.free(self.allocatedSlice());
    self.items.len = 0;
    self.capacity = 0;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) Allocator.Error!void

If the current capacity is less than new_capacity, this function will modify the array so that it can hold at least new_capacity items. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) Allocator.Error!void {
    if (@sizeOf(T) == 0) {
        self.capacity = math.maxInt(usize);
        return;
    }

    if (self.capacity >= new_capacity) return;

    const better_capacity = ArrayListAlignedUnmanaged(T, alignment).growCapacity(self.capacity, new_capacity);
    return self.ensureTotalCapacityPrecise(better_capacity);
}

FunctionensureTotalCapacityPrecise[src]

pub fn ensureTotalCapacityPrecise(self: *Self, new_capacity: usize) Allocator.Error!void

If the current capacity is less than new_capacity, this function will modify the array so that it can hold exactly new_capacity items. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacityPrecise(self: *Self, new_capacity: usize) Allocator.Error!void {
    if (@sizeOf(T) == 0) {
        self.capacity = math.maxInt(usize);
        return;
    }

    if (self.capacity >= new_capacity) return;

    // Here we avoid copying allocated but unused bytes by
    // attempting a resize in place, and falling back to allocating
    // a new buffer and doing our own copy. With a realloc() call,
    // the allocator implementation would pointlessly copy our
    // extra capacity.
    const old_memory = self.allocatedSlice();
    if (self.allocator.remap(old_memory, new_capacity)) |new_memory| {
        self.items.ptr = new_memory.ptr;
        self.capacity = new_memory.len;
    } else {
        const new_memory = try self.allocator.alignedAlloc(T, alignment, new_capacity);
        @memcpy(new_memory[0..self.items.len], self.items);
        self.allocator.free(old_memory);
        self.items.ptr = new_memory.ptr;
        self.capacity = new_memory.len;
    }
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) Allocator.Error!void

Modify the array so that it can hold at least additional_count more items. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
additional_count: usize

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) Allocator.Error!void {
    return self.ensureTotalCapacity(try addOrOom(self.items.len, additional_count));
}

FunctionexpandToCapacity[src]

pub fn expandToCapacity(self: *Self) void

Increases the array's length to match the full capacity that is already allocated. The new elements have undefined values. Never invalidates element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn expandToCapacity(self: *Self) void {
    self.items.len = self.capacity;
}

FunctionaddOne[src]

pub fn addOne(self: *Self) Allocator.Error!*T

Increase the length by 1, returning a pointer to the new item. The returned pointer becomes invalid when the list is resized.

Parameters

self: *Self

Source Code

Source code
pub fn addOne(self: *Self) Allocator.Error!*T {
    // This can never overflow because `self.items` can never occupy the whole address space
    const newlen = self.items.len + 1;
    try self.ensureTotalCapacity(newlen);
    return self.addOneAssumeCapacity();
}

FunctionaddOneAssumeCapacity[src]

pub fn addOneAssumeCapacity(self: *Self) *T

Increase length by 1, returning pointer to the new item. The returned pointer becomes invalid when the list is resized. Never invalidates element pointers. Asserts that the list can hold one additional item.

Parameters

self: *Self

Source Code

Source code
pub fn addOneAssumeCapacity(self: *Self) *T {
    assert(self.items.len < self.capacity);
    self.items.len += 1;
    return &self.items[self.items.len - 1];
}

FunctionaddManyAsArray[src]

pub fn addManyAsArray(self: *Self, comptime n: usize) Allocator.Error!*[n]T

Resize the array, adding n new elements, which have undefined values. The return value is a pointer to an array of the newly allocated elements. The returned pointer becomes invalid when the list is resized. Resizes the list if self.capacity is not large enough.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsArray(self: *Self, comptime n: usize) Allocator.Error!*[n]T {
    const prev_len = self.items.len;
    try self.resize(try addOrOom(self.items.len, n));
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsArrayAssumeCapacity[src]

pub fn addManyAsArrayAssumeCapacity(self: *Self, comptime n: usize) *[n]T

Resize the array, adding n new elements, which have undefined values. The return value is a pointer to an array of the newly allocated elements. Never invalidates pre-existing element pointers, but the returned pointer becomes invalid when the list is resized. Asserts that the list can hold the additional items.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsArrayAssumeCapacity(self: *Self, comptime n: usize) *[n]T {
    assert(self.items.len + n <= self.capacity);
    const prev_len = self.items.len;
    self.items.len += n;
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsSlice[src]

pub fn addManyAsSlice(self: *Self, n: usize) Allocator.Error![]T

Resize the array, adding n new elements, which have undefined values. The return value is a slice pointing to the newly allocated elements. The returned pointer becomes invalid when the list is resized. Resizes list if self.capacity is not large enough.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsSlice(self: *Self, n: usize) Allocator.Error![]T {
    const prev_len = self.items.len;
    try self.resize(try addOrOom(self.items.len, n));
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsSliceAssumeCapacity[src]

pub fn addManyAsSliceAssumeCapacity(self: *Self, n: usize) []T

Resize the array, adding n new elements, which have undefined values. The return value is a slice pointing to the newly allocated elements. Never invalidates element pointers. The returned pointer becomes invalid when the list is resized. Asserts that the list can hold the additional items.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsSliceAssumeCapacity(self: *Self, n: usize) []T {
    assert(self.items.len + n <= self.capacity);
    const prev_len = self.items.len;
    self.items.len += n;
    return self.items[prev_len..][0..n];
}

Functionpop[src]

pub fn pop(self: *Self) ?T

Remove and return the last element from the list, or return null if list is empty. Invalidates element pointers to the removed element, if any.

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?T {
    if (self.items.len == 0) return null;
    const val = self.items[self.items.len - 1];
    self.items.len -= 1;
    return val;
}

FunctionallocatedSlice[src]

pub fn allocatedSlice(self: Self) Slice

Returns a slice of all the items plus the extra capacity, whose memory contents are undefined.

Parameters

self: Self

Source Code

Source code
pub fn allocatedSlice(self: Self) Slice {
    // `items.len` is the length, not the capacity.
    return self.items.ptr[0..self.capacity];
}

FunctionunusedCapacitySlice[src]

pub fn unusedCapacitySlice(self: Self) []T

Returns a slice of only the extra capacity after items. This can be useful for writing directly into an ArrayList. Note that such an operation must be followed up with a direct modification of self.items.len.

Parameters

self: Self

Source Code

Source code
pub fn unusedCapacitySlice(self: Self) []T {
    return self.allocatedSlice()[self.items.len..];
}
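
The write-then-commit pattern described above looks like this in practice (a sketch using std.testing.allocator, not library code):

const std = @import("std");

test "unusedCapacitySlice example" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();

    try list.ensureUnusedCapacity(4);
    const dest = list.unusedCapacitySlice();
    @memcpy(dest[0..4], "abcd");
    // Writing into spare capacity must be committed by bumping the length.
    list.items.len += 4;
    try std.testing.expectEqualSlices(u8, "abcd", list.items);
}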

FunctiongetLast[src]

pub fn getLast(self: Self) T

Returns the last element from the list. Asserts that the list is not empty.

Parameters

self: Self

Source Code

Source code
pub fn getLast(self: Self) T {
    const val = self.items[self.items.len - 1];
    return val;
}

FunctiongetLastOrNull[src]

pub fn getLastOrNull(self: Self) ?T

Returns the last element from the list, or null if the list is empty.

Parameters

self: Self

Source Code

Source code
pub fn getLastOrNull(self: Self) ?T {
    if (self.items.len == 0) return null;
    return self.getLast();
}

Source Code

Source code
pub fn ArrayListAligned(comptime T: type, comptime alignment: ?u29) type {
    if (alignment) |a| {
        if (a == @alignOf(T)) {
            return ArrayListAligned(T, null);
        }
    }
    return struct {
        const Self = @This();
        /// Contents of the list. This field is intended to be accessed
        /// directly.
        ///
        /// Pointers to elements in this slice are invalidated by various
        /// functions of this ArrayList in accordance with the respective
        /// documentation. In all cases, "invalidated" means that the memory
        /// has been passed to this allocator's resize or free function.
        items: Slice,
        /// How many T values this list can hold without allocating
        /// additional memory.
        capacity: usize,
        allocator: Allocator,

        pub const Slice = if (alignment) |a| ([]align(a) T) else []T;

        pub fn SentinelSlice(comptime s: T) type {
            return if (alignment) |a| ([:s]align(a) T) else [:s]T;
        }

        /// Deinitialize with `deinit` or use `toOwnedSlice`.
        pub fn init(allocator: Allocator) Self {
            return Self{
                .items = &[_]T{},
                .capacity = 0,
                .allocator = allocator,
            };
        }

        /// Initialize with capacity to hold `num` elements.
        /// The resulting capacity will equal `num` exactly.
        /// Deinitialize with `deinit` or use `toOwnedSlice`.
        pub fn initCapacity(allocator: Allocator, num: usize) Allocator.Error!Self {
            var self = Self.init(allocator);
            try self.ensureTotalCapacityPrecise(num);
            return self;
        }

        /// Release all allocated memory.
        pub fn deinit(self: Self) void {
            if (@sizeOf(T) > 0) {
                self.allocator.free(self.allocatedSlice());
            }
        }

        /// ArrayList takes ownership of the passed in slice. The slice must have been
        /// allocated with `allocator`.
        /// Deinitialize with `deinit` or use `toOwnedSlice`.
        pub fn fromOwnedSlice(allocator: Allocator, slice: Slice) Self {
            return Self{
                .items = slice,
                .capacity = slice.len,
                .allocator = allocator,
            };
        }

        /// ArrayList takes ownership of the passed in slice. The slice must have been
        /// allocated with `allocator`.
        /// Deinitialize with `deinit` or use `toOwnedSlice`.
        pub fn fromOwnedSliceSentinel(allocator: Allocator, comptime sentinel: T, slice: [:sentinel]T) Self {
            return Self{
                .items = slice,
                .capacity = slice.len + 1,
                .allocator = allocator,
            };
        }

        /// Initializes an ArrayListUnmanaged with the `items` and `capacity` fields
        /// of this ArrayList. Empties this ArrayList.
        pub fn moveToUnmanaged(self: *Self) ArrayListAlignedUnmanaged(T, alignment) {
            const allocator = self.allocator;
            const result: ArrayListAlignedUnmanaged(T, alignment) = .{ .items = self.items, .capacity = self.capacity };
            self.* = init(allocator);
            return result;
        }

        /// The caller owns the returned memory. Empties this ArrayList.
        /// Its capacity is cleared, making `deinit` safe but unnecessary to call.
        pub fn toOwnedSlice(self: *Self) Allocator.Error!Slice {
            const allocator = self.allocator;

            const old_memory = self.allocatedSlice();
            if (allocator.remap(old_memory, self.items.len)) |new_items| {
                self.* = init(allocator);
                return new_items;
            }

            const new_memory = try allocator.alignedAlloc(T, alignment, self.items.len);
            @memcpy(new_memory, self.items);
            self.clearAndFree();
            return new_memory;
        }

        /// The caller owns the returned memory. Empties this ArrayList.
        pub fn toOwnedSliceSentinel(self: *Self, comptime sentinel: T) Allocator.Error!SentinelSlice(sentinel) {
            // This addition can never overflow because `self.items` can never occupy the whole address space
            try self.ensureTotalCapacityPrecise(self.items.len + 1);
            self.appendAssumeCapacity(sentinel);
            const result = try self.toOwnedSlice();
            return result[0 .. result.len - 1 :sentinel];
        }

        /// Creates a copy of this ArrayList, using the same allocator.
        pub fn clone(self: Self) Allocator.Error!Self {
            var cloned = try Self.initCapacity(self.allocator, self.capacity);
            cloned.appendSliceAssumeCapacity(self.items);
            return cloned;
        }

        /// Insert `item` at index `i`. Moves `list[i .. list.len]` to higher indices to make room.
        /// If `i` is equal to the length of the list this operation is equivalent to append.
        /// This operation is O(N).
        /// Invalidates element pointers if additional memory is needed.
        /// Asserts that the index is in bounds or equal to the length.
        pub fn insert(self: *Self, i: usize, item: T) Allocator.Error!void {
            const dst = try self.addManyAt(i, 1);
            dst[0] = item;
        }

        /// Insert `item` at index `i`. Moves `list[i .. list.len]` to higher indices to make room.
        /// If `i` is equal to the length of the list this operation is
        /// equivalent to appendAssumeCapacity.
        /// This operation is O(N).
        /// Asserts that there is enough capacity for the new item.
        /// Asserts that the index is in bounds or equal to the length.
        pub fn insertAssumeCapacity(self: *Self, i: usize, item: T) void {
            assert(self.items.len < self.capacity);
            self.items.len += 1;

            mem.copyBackwards(T, self.items[i + 1 .. self.items.len], self.items[i .. self.items.len - 1]);
            self.items[i] = item;
        }

        /// Add `count` new elements at position `index`, which have
        /// `undefined` values. Returns a slice pointing to the newly allocated
        /// elements, which becomes invalid after various `ArrayList`
        /// operations.
        /// Invalidates pre-existing pointers to elements at and after `index`.
        /// Invalidates all pre-existing element pointers if capacity must be
        /// increased to accommodate the new elements.
        /// Asserts that the index is in bounds or equal to the length.
        pub fn addManyAt(self: *Self, index: usize, count: usize) Allocator.Error![]T {
            const new_len = try addOrOom(self.items.len, count);

            if (self.capacity >= new_len)
                return addManyAtAssumeCapacity(self, index, count);

            // Here we avoid copying allocated but unused bytes by
            // attempting a resize in place, and falling back to allocating
            // a new buffer and doing our own copy. With a realloc() call,
            // the allocator implementation would pointlessly copy our
            // extra capacity.
            const new_capacity = ArrayListAlignedUnmanaged(T, alignment).growCapacity(self.capacity, new_len);
            const old_memory = self.allocatedSlice();
            if (self.allocator.remap(old_memory, new_capacity)) |new_memory| {
                self.items.ptr = new_memory.ptr;
                self.capacity = new_memory.len;
                return addManyAtAssumeCapacity(self, index, count);
            }

            // Make a new allocation, avoiding `ensureTotalCapacity` in order
            // to avoid extra memory copies.
            const new_memory = try self.allocator.alignedAlloc(T, alignment, new_capacity);
            const to_move = self.items[index..];
            @memcpy(new_memory[0..index], self.items[0..index]);
            @memcpy(new_memory[index + count ..][0..to_move.len], to_move);
            self.allocator.free(old_memory);
            self.items = new_memory[0..new_len];
            self.capacity = new_memory.len;
            // The inserted elements at `new_memory[index..][0..count]` have
            // already been set to `undefined` by memory allocation.
            return new_memory[index..][0..count];
        }

        /// Add `count` new elements at position `index`, which have
        /// `undefined` values. Returns a slice pointing to the newly allocated
        /// elements, which becomes invalid after various `ArrayList`
        /// operations.
        /// Asserts that there is enough capacity for the new elements.
        /// Invalidates pre-existing pointers to elements at and after `index`, but
        /// does not invalidate any before that.
        /// Asserts that the index is in bounds or equal to the length.
        pub fn addManyAtAssumeCapacity(self: *Self, index: usize, count: usize) []T {
            const new_len = self.items.len + count;
            assert(self.capacity >= new_len);
            const to_move = self.items[index..];
            self.items.len = new_len;
            mem.copyBackwards(T, self.items[index + count ..], to_move);
            const result = self.items[index..][0..count];
            @memset(result, undefined);
            return result;
        }

        /// Insert slice `items` at index `i` by moving `list[i .. list.len]` to make room.
        /// This operation is O(N).
        /// Invalidates pre-existing pointers to elements at and after `index`.
        /// Invalidates all pre-existing element pointers if capacity must be
        /// increased to accommodate the new elements.
        /// Asserts that the index is in bounds or equal to the length.
        pub fn insertSlice(
            self: *Self,
            index: usize,
            items: []const T,
        ) Allocator.Error!void {
            const dst = try self.addManyAt(index, items.len);
            @memcpy(dst, items);
        }

        /// Grows or shrinks the list as necessary.
        /// Invalidates element pointers if additional capacity is allocated.
        /// Asserts that the range is in bounds.
        pub fn replaceRange(self: *Self, start: usize, len: usize, new_items: []const T) Allocator.Error!void {
            var unmanaged = self.moveToUnmanaged();
            defer self.* = unmanaged.toManaged(self.allocator);
            return unmanaged.replaceRange(self.allocator, start, len, new_items);
        }

        /// Grows or shrinks the list as necessary.
        /// Never invalidates element pointers.
        /// Asserts the capacity is enough for additional items.
        pub fn replaceRangeAssumeCapacity(self: *Self, start: usize, len: usize, new_items: []const T) void {
            var unmanaged = self.moveToUnmanaged();
            defer self.* = unmanaged.toManaged(self.allocator);
            return unmanaged.replaceRangeAssumeCapacity(start, len, new_items);
        }

        /// Extends the list by 1 element. Allocates more memory as necessary.
        /// Invalidates element pointers if additional memory is needed.
        pub fn append(self: *Self, item: T) Allocator.Error!void {
            const new_item_ptr = try self.addOne();
            new_item_ptr.* = item;
        }

        /// Extends the list by 1 element.
        /// Never invalidates element pointers.
        /// Asserts that the list can hold one additional item.
        pub fn appendAssumeCapacity(self: *Self, item: T) void {
            self.addOneAssumeCapacity().* = item;
        }

        /// Remove the element at index `i`, shift elements after index
        /// `i` forward, and return the removed element.
        /// Invalidates element pointers to end of list.
        /// This operation is O(N).
        /// This preserves item order. Use `swapRemove` if order preservation is not important.
        /// Asserts that the index is in bounds.
        /// Asserts that the list is not empty.
        pub fn orderedRemove(self: *Self, i: usize) T {
            const old_item = self.items[i];
            self.replaceRangeAssumeCapacity(i, 1, &.{});
            return old_item;
        }

        /// Removes the element at the specified index and returns it.
        /// The empty slot is filled from the end of the list.
        /// This operation is O(1).
        /// This may not preserve item order. Use `orderedRemove` if you need to preserve order.
        /// Asserts that the list is not empty.
        /// Asserts that the index is in bounds.
        pub fn swapRemove(self: *Self, i: usize) T {
            if (self.items.len - 1 == i) return self.pop().?;

            const old_item = self.items[i];
            self.items[i] = self.pop().?;
            return old_item;
        }

        /// Append the slice of items to the list. Allocates more
        /// memory as necessary.
        /// Invalidates element pointers if additional memory is needed.
        pub fn appendSlice(self: *Self, items: []const T) Allocator.Error!void {
            try self.ensureUnusedCapacity(items.len);
            self.appendSliceAssumeCapacity(items);
        }

        /// Append the slice of items to the list.
        /// Never invalidates element pointers.
        /// Asserts that the list can hold the additional items.
        pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void {
            const old_len = self.items.len;
            const new_len = old_len + items.len;
            assert(new_len <= self.capacity);
            self.items.len = new_len;
            @memcpy(self.items[old_len..][0..items.len], items);
        }

        /// Append an unaligned slice of items to the list. Allocates more
        /// memory as necessary. Only call this function if calling
        /// `appendSlice` instead would be a compile error.
        /// Invalidates element pointers if additional memory is needed.
        pub fn appendUnalignedSlice(self: *Self, items: []align(1) const T) Allocator.Error!void {
            try self.ensureUnusedCapacity(items.len);
            self.appendUnalignedSliceAssumeCapacity(items);
        }

        /// Append the slice of items to the list.
        /// Never invalidates element pointers.
        /// This function is only needed when calling
        /// `appendSliceAssumeCapacity` instead would be a compile error due to the
        /// alignment of the `items` parameter.
        /// Asserts that the list can hold the additional items.
        pub fn appendUnalignedSliceAssumeCapacity(self: *Self, items: []align(1) const T) void {
            const old_len = self.items.len;
            const new_len = old_len + items.len;
            assert(new_len <= self.capacity);
            self.items.len = new_len;
            @memcpy(self.items[old_len..][0..items.len], items);
        }

        pub const Writer = if (T != u8)
            @compileError("The Writer interface is only defined for ArrayList(u8) " ++
                "but the given type is ArrayList(" ++ @typeName(T) ++ ")")
        else
            std.io.Writer(*Self, Allocator.Error, appendWrite);

        /// Initializes a Writer which will append to the list.
        pub fn writer(self: *Self) Writer {
            return .{ .context = self };
        }

        /// Same as `append` except it returns the number of bytes written, which is always the same
        /// as `m.len`. The purpose of this function existing is to match `std.io.Writer` API.
        /// Invalidates element pointers if additional memory is needed.
        fn appendWrite(self: *Self, m: []const u8) Allocator.Error!usize {
            try self.appendSlice(m);
            return m.len;
        }

        pub const FixedWriter = std.io.Writer(*Self, Allocator.Error, appendWriteFixed);

        /// Initializes a Writer which will append to the list but will return
        /// `error.OutOfMemory` rather than increasing capacity.
        pub fn fixedWriter(self: *Self) FixedWriter {
            return .{ .context = self };
        }

        /// The purpose of this function existing is to match `std.io.Writer` API.
        fn appendWriteFixed(self: *Self, m: []const u8) error{OutOfMemory}!usize {
            const available_capacity = self.capacity - self.items.len;
            if (m.len > available_capacity)
                return error.OutOfMemory;

            self.appendSliceAssumeCapacity(m);
            return m.len;
        }

        /// Append a value to the list `n` times.
        /// Allocates more memory as necessary.
        /// Invalidates element pointers if additional memory is needed.
        /// The function is inline so that a comptime-known `value` parameter will
        /// have a more optimal memset codegen in case it has a repeated byte pattern.
        pub inline fn appendNTimes(self: *Self, value: T, n: usize) Allocator.Error!void {
            const old_len = self.items.len;
            try self.resize(try addOrOom(old_len, n));
            @memset(self.items[old_len..self.items.len], value);
        }

        /// Append a value to the list `n` times.
        /// Never invalidates element pointers.
        /// The function is inline so that a comptime-known `value` parameter will
        /// have a more optimal memset codegen in case it has a repeated byte pattern.
        /// Asserts that the list can hold the additional items.
        pub inline fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void {
            const new_len = self.items.len + n;
            assert(new_len <= self.capacity);
            @memset(self.items.ptr[self.items.len..new_len], value);
            self.items.len = new_len;
        }

        /// Adjust the list length to `new_len`.
        /// Additional elements contain the value `undefined`.
        /// Invalidates element pointers if additional memory is needed.
        pub fn resize(self: *Self, new_len: usize) Allocator.Error!void {
            try self.ensureTotalCapacity(new_len);
            self.items.len = new_len;
        }

        /// Reduce allocated capacity to `new_len`.
        /// May invalidate element pointers.
        /// Asserts that the new length is less than or equal to the previous length.
        pub fn shrinkAndFree(self: *Self, new_len: usize) void {
            var unmanaged = self.moveToUnmanaged();
            unmanaged.shrinkAndFree(self.allocator, new_len);
            self.* = unmanaged.toManaged(self.allocator);
        }

        /// Reduce length to `new_len`.
        /// Invalidates element pointers for the elements `items[new_len..]`.
        /// Asserts that the new length is less than or equal to the previous length.
        pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
            assert(new_len <= self.items.len);
            self.items.len = new_len;
        }

        /// Invalidates all element pointers.
        pub fn clearRetainingCapacity(self: *Self) void {
            self.items.len = 0;
        }

        /// Invalidates all element pointers.
        pub fn clearAndFree(self: *Self) void {
            self.allocator.free(self.allocatedSlice());
            self.items.len = 0;
            self.capacity = 0;
        }

        /// If the current capacity is less than `new_capacity`, this function will
        /// modify the array so that it can hold at least `new_capacity` items.
        /// Invalidates element pointers if additional memory is needed.
        pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) Allocator.Error!void {
            if (@sizeOf(T) == 0) {
                self.capacity = math.maxInt(usize);
                return;
            }

            if (self.capacity >= new_capacity) return;

            const better_capacity = ArrayListAlignedUnmanaged(T, alignment).growCapacity(self.capacity, new_capacity);
            return self.ensureTotalCapacityPrecise(better_capacity);
        }

        /// If the current capacity is less than `new_capacity`, this function will
        /// modify the array so that it can hold exactly `new_capacity` items.
        /// Invalidates element pointers if additional memory is needed.
        pub fn ensureTotalCapacityPrecise(self: *Self, new_capacity: usize) Allocator.Error!void {
            if (@sizeOf(T) == 0) {
                self.capacity = math.maxInt(usize);
                return;
            }

            if (self.capacity >= new_capacity) return;

            // Here we avoid copying allocated but unused bytes by
            // attempting a resize in place, and falling back to allocating
            // a new buffer and doing our own copy. With a realloc() call,
            // the allocator implementation would pointlessly copy our
            // extra capacity.
            const old_memory = self.allocatedSlice();
            if (self.allocator.remap(old_memory, new_capacity)) |new_memory| {
                self.items.ptr = new_memory.ptr;
                self.capacity = new_memory.len;
            } else {
                const new_memory = try self.allocator.alignedAlloc(T, alignment, new_capacity);
                @memcpy(new_memory[0..self.items.len], self.items);
                self.allocator.free(old_memory);
                self.items.ptr = new_memory.ptr;
                self.capacity = new_memory.len;
            }
        }

        /// Modify the array so that it can hold at least `additional_count` **more** items.
        /// Invalidates element pointers if additional memory is needed.
        pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) Allocator.Error!void {
            return self.ensureTotalCapacity(try addOrOom(self.items.len, additional_count));
        }

        /// Increases the array's length to match the full capacity that is already allocated.
        /// The new elements have `undefined` values.
        /// Never invalidates element pointers.
        pub fn expandToCapacity(self: *Self) void {
            self.items.len = self.capacity;
        }

        /// Increase length by 1, returning pointer to the new item.
        /// The returned pointer becomes invalid when the list is resized.
        pub fn addOne(self: *Self) Allocator.Error!*T {
            // This can never overflow because `self.items` can never occupy the whole address space
            const newlen = self.items.len + 1;
            try self.ensureTotalCapacity(newlen);
            return self.addOneAssumeCapacity();
        }

        /// Increase length by 1, returning pointer to the new item.
        /// The returned pointer becomes invalid when the list is resized.
        /// Never invalidates element pointers.
        /// Asserts that the list can hold one additional item.
        pub fn addOneAssumeCapacity(self: *Self) *T {
            assert(self.items.len < self.capacity);
            self.items.len += 1;
            return &self.items[self.items.len - 1];
        }

        /// Resize the array, adding `n` new elements, which have `undefined` values.
        /// The return value is an array pointing to the newly allocated elements.
        /// The returned pointer becomes invalid when the list is resized.
        /// Resizes list if `self.capacity` is not large enough.
        pub fn addManyAsArray(self: *Self, comptime n: usize) Allocator.Error!*[n]T {
            const prev_len = self.items.len;
            try self.resize(try addOrOom(self.items.len, n));
            return self.items[prev_len..][0..n];
        }

        /// Resize the array, adding `n` new elements, which have `undefined` values.
        /// The return value is an array pointing to the newly allocated elements.
        /// Never invalidates element pointers.
        /// The returned pointer becomes invalid when the list is resized.
        /// Asserts that the list can hold the additional items.
        pub fn addManyAsArrayAssumeCapacity(self: *Self, comptime n: usize) *[n]T {
            assert(self.items.len + n <= self.capacity);
            const prev_len = self.items.len;
            self.items.len += n;
            return self.items[prev_len..][0..n];
        }

        /// Resize the array, adding `n` new elements, which have `undefined` values.
        /// The return value is a slice pointing to the newly allocated elements.
        /// The returned pointer becomes invalid when the list is resized.
        /// Resizes list if `self.capacity` is not large enough.
        pub fn addManyAsSlice(self: *Self, n: usize) Allocator.Error![]T {
            const prev_len = self.items.len;
            try self.resize(try addOrOom(self.items.len, n));
            return self.items[prev_len..][0..n];
        }

        /// Resize the array, adding `n` new elements, which have `undefined` values.
        /// The return value is a slice pointing to the newly allocated elements.
        /// Never invalidates element pointers.
        /// The returned pointer becomes invalid when the list is resized.
        /// Asserts that the list can hold the additional items.
        pub fn addManyAsSliceAssumeCapacity(self: *Self, n: usize) []T {
            assert(self.items.len + n <= self.capacity);
            const prev_len = self.items.len;
            self.items.len += n;
            return self.items[prev_len..][0..n];
        }

        /// Remove and return the last element from the list, or return `null` if list is empty.
        /// Invalidates element pointers to the removed element, if any.
        pub fn pop(self: *Self) ?T {
            if (self.items.len == 0) return null;
            const val = self.items[self.items.len - 1];
            self.items.len -= 1;
            return val;
        }

        /// Returns a slice of all the items plus the extra capacity, whose memory
        /// contents are `undefined`.
        pub fn allocatedSlice(self: Self) Slice {
            // `items.len` is the length, not the capacity.
            return self.items.ptr[0..self.capacity];
        }

        /// Returns a slice of only the extra capacity after items.
        /// This can be useful for writing directly into an ArrayList.
        /// Note that such an operation must be followed up with a direct
        /// modification of `self.items.len`.
        pub fn unusedCapacitySlice(self: Self) []T {
            return self.allocatedSlice()[self.items.len..];
        }

        /// Returns the last element from the list.
        /// Asserts that the list is not empty.
        pub fn getLast(self: Self) T {
            const val = self.items[self.items.len - 1];
            return val;
        }

        /// Returns the last element from the list, or `null` if list is empty.
        pub fn getLastOrNull(self: Self) ?T {
            if (self.items.len == 0) return null;
            return self.getLast();
        }
    };
}
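
To see the alignment parameter in action, a brief sketch (16 is an arbitrary example alignment; passing null falls back to @alignOf(T)):

const std = @import("std");

test "ArrayListAligned example" {
    var list = std.ArrayListAligned(u8, 16).init(std.testing.allocator);
    defer list.deinit();

    try list.appendSlice("abc");
    // The backing allocation is 16-byte aligned.
    try std.testing.expect(@intFromPtr(list.items.ptr) % 16 == 0);
}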

Type FunctionArrayListAlignedUnmanaged[src]

A contiguous, growable list of arbitrarily aligned items in memory. This is a wrapper around an array of T values aligned to alignment-byte addresses. If the specified alignment is null, then @alignOf(T) is used.

Functions that potentially allocate memory accept an Allocator parameter. Initialize directly or with initCapacity, and deinitialize with deinit or use toOwnedSlice.

Default initialization of this struct is deprecated; use .empty instead.

Parameters

T: type
alignment: ?u29

Types

TypeSlice[src]

Source Code

Source code
pub const Slice = if (alignment) |a| ([]align(a) T) else []T

Type FunctionSentinelSlice[src]

Parameters

s: T

Source Code

Source code
pub fn SentinelSlice(comptime s: T) type {
    return if (alignment) |a| ([:s]align(a) T) else [:s]T;
}

TypeWriter[src]

Source Code

Source code
pub const Writer = if (T != u8)
    @compileError("The Writer interface is only defined for ArrayList(u8) " ++
        "but the given type is ArrayList(" ++ @typeName(T) ++ ")")
else
    std.io.Writer(WriterContext, Allocator.Error, appendWrite)

TypeFixedWriter[src]

Source Code

Source code
pub const FixedWriter = std.io.Writer(*Self, Allocator.Error, appendWriteFixed)

Fields

items: Slice = &[_]T{}

Contents of the list. This field is intended to be accessed directly.

Pointers to elements in this slice are invalidated by various functions of this ArrayList in accordance with the respective documentation. In all cases, "invalidated" means that the memory has been passed to an allocator's resize or free function.

capacity: usize = 0

How many T values this list can hold without allocating additional memory.

Values

Constantempty[src]

An ArrayList containing no elements.

Source Code

Source code
pub const empty: Self = .{
    .items = &.{},
    .capacity = 0,
}
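
Since default initialization of the struct is deprecated, a new list is typically declared via this constant; a minimal sketch using std.testing.allocator:

const std = @import("std");

test "empty example" {
    const gpa = std.testing.allocator;
    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(gpa);

    try list.append(gpa, 'x');
    try std.testing.expectEqual(@as(usize, 1), list.items.len);
}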

Functions

FunctioninitCapacity[src]

pub fn initCapacity(allocator: Allocator, num: usize) Allocator.Error!Self

Initialize with capacity to hold num elements. The resulting capacity will equal num exactly. Deinitialize with deinit or use toOwnedSlice.

Parameters

allocator: Allocator
num: usize

Source Code

Source code
pub fn initCapacity(allocator: Allocator, num: usize) Allocator.Error!Self {
    var self = Self{};
    try self.ensureTotalCapacityPrecise(allocator, num);
    return self;
}

FunctioninitBuffer[src]

pub fn initBuffer(buffer: Slice) Self

Initialize with externally-managed memory. The buffer determines the capacity, and the length is set to zero. When initialized this way, all functions that accept an Allocator argument cause illegal behavior.

Parameters

buffer: Slice

Source Code

Source code
pub fn initBuffer(buffer: Slice) Self {
    return .{
        .items = buffer[0..0],
        .capacity = buffer.len,
    };
}
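
An illustrative sketch of a fixed, stack-backed list (no Allocator may ever be passed to a list created this way):

const std = @import("std");

test "initBuffer example" {
    var buf: [8]u8 = undefined;
    var list = std.ArrayListUnmanaged(u8).initBuffer(&buf);

    // Only the non-allocating *AssumeCapacity operations are legal here.
    list.appendSliceAssumeCapacity("hi");
    try std.testing.expectEqualSlices(u8, "hi", list.items);
}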

Functiondeinit[src]

pub fn deinit(self: *Self, allocator: Allocator) void

Release all allocated memory.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn deinit(self: *Self, allocator: Allocator) void {
    allocator.free(self.allocatedSlice());
    self.* = undefined;
}

FunctiontoManaged[src]

pub fn toManaged(self: *Self, allocator: Allocator) ArrayListAligned(T, alignment)

Convert this list into an analogous memory-managed one. The returned list has ownership of the underlying memory.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn toManaged(self: *Self, allocator: Allocator) ArrayListAligned(T, alignment) {
    return .{ .items = self.items, .capacity = self.capacity, .allocator = allocator };
}

FunctionfromOwnedSlice[src]

pub fn fromOwnedSlice(slice: Slice) Self

ArrayListUnmanaged takes ownership of the passed in slice. The slice must have been allocated with the same allocator that is later passed to deinit and other allocating functions. Deinitialize with deinit or use toOwnedSlice.

Parameters

slice: Slice

Source Code

Source code
pub fn fromOwnedSlice(slice: Slice) Self {
    return Self{
        .items = slice,
        .capacity = slice.len,
    };
}
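
For example (a sketch; the slice is duplicated with the same allocator that is later used for the list):

const std = @import("std");

test "fromOwnedSlice example" {
    const gpa = std.testing.allocator;
    const slice = try gpa.dupe(u8, "abc");

    var list = std.ArrayListUnmanaged(u8).fromOwnedSlice(slice);
    defer list.deinit(gpa);

    try list.appendSlice(gpa, "def");
    try std.testing.expectEqualSlices(u8, "abcdef", list.items);
}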

FunctionfromOwnedSliceSentinel[src]

pub fn fromOwnedSliceSentinel(comptime sentinel: T, slice: [:sentinel]T) Self

ArrayListUnmanaged takes ownership of the passed in slice. The slice must have been allocated with the same allocator that is later passed to deinit and other allocating functions. Deinitialize with deinit or use toOwnedSlice.

Parameters

sentinel: T
slice: [:sentinel]T

Source Code

Source code
pub fn fromOwnedSliceSentinel(comptime sentinel: T, slice: [:sentinel]T) Self {
    return Self{
        .items = slice,
        .capacity = slice.len + 1,
    };
}

FunctiontoOwnedSlice[src]

pub fn toOwnedSlice(self: *Self, allocator: Allocator) Allocator.Error!Slice

The caller owns the returned memory. Empties this ArrayList. Its capacity is cleared, making deinit() safe but unnecessary to call.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn toOwnedSlice(self: *Self, allocator: Allocator) Allocator.Error!Slice {
    const old_memory = self.allocatedSlice();
    if (allocator.remap(old_memory, self.items.len)) |new_items| {
        self.* = .empty;
        return new_items;
    }

    const new_memory = try allocator.alignedAlloc(T, alignment, self.items.len);
    @memcpy(new_memory, self.items);
    self.clearAndFree(allocator);
    return new_memory;
}
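
A usage sketch showing the ownership transfer (illustrative only):

const std = @import("std");

test "toOwnedSlice example" {
    const gpa = std.testing.allocator;
    var list: std.ArrayListUnmanaged(u8) = .empty;
    try list.appendSlice(gpa, "abc");

    const owned = try list.toOwnedSlice(gpa);
    defer gpa.free(owned);

    try std.testing.expectEqualSlices(u8, "abc", owned);
    // The list is reset to .empty, so deinit would be a safe no-op.
    try std.testing.expectEqual(@as(usize, 0), list.capacity);
}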

FunctiontoOwnedSliceSentinel[src]

pub fn toOwnedSliceSentinel(self: *Self, allocator: Allocator, comptime sentinel: T) Allocator.Error!SentinelSlice(sentinel)

The caller owns the returned memory. ArrayList becomes empty.

Parameters

self: *Self
allocator: Allocator
sentinel: T

Source Code

Source code
pub fn toOwnedSliceSentinel(self: *Self, allocator: Allocator, comptime sentinel: T) Allocator.Error!SentinelSlice(sentinel) {
    // This addition can never overflow because `self.items` can never occupy the whole address space
    try self.ensureTotalCapacityPrecise(allocator, self.items.len + 1);
    self.appendAssumeCapacity(sentinel);
    const result = try self.toOwnedSlice(allocator);
    return result[0 .. result.len - 1 :sentinel];
}

Functionclone[src]

pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self

Creates a copy of this ArrayList.

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self {
    var cloned = try Self.initCapacity(allocator, self.capacity);
    cloned.appendSliceAssumeCapacity(self.items);
    return cloned;
}

Functioninsert[src]

pub fn insert(self: *Self, allocator: Allocator, i: usize, item: T) Allocator.Error!void

Insert item at index i. Moves list[i .. list.len] to higher indices to make room. If i is equal to the length of the list this operation is equivalent to append. This operation is O(N). Invalidates element pointers if additional memory is needed. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
allocator: Allocator
i: usize
item: T

Source Code

Source code
pub fn insert(self: *Self, allocator: Allocator, i: usize, item: T) Allocator.Error!void {
    const dst = try self.addManyAt(allocator, i, 1);
    dst[0] = item;
}

FunctioninsertAssumeCapacity[src]

pub fn insertAssumeCapacity(self: *Self, i: usize, item: T) void

Insert item at index i. Moves list[i .. list.len] to higher indices to make room. If i is equal to the length of the list this operation is equivalent to appendAssumeCapacity. This operation is O(N). Asserts that the list has capacity for one additional item. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
i: usize
item: T

Source Code

Source code
pub fn insertAssumeCapacity(self: *Self, i: usize, item: T) void {
    assert(self.items.len < self.capacity);
    self.items.len += 1;

    mem.copyBackwards(T, self.items[i + 1 .. self.items.len], self.items[i .. self.items.len - 1]);
    self.items[i] = item;
}

FunctionaddManyAt[src]

pub fn addManyAt( self: *Self, allocator: Allocator, index: usize, count: usize, ) Allocator.Error![]T

Add count new elements at position index, which have undefined values. Returns a slice pointing to the newly allocated elements, which becomes invalid after various ArrayList operations. Invalidates pre-existing pointers to elements at and after index. Invalidates all pre-existing element pointers if capacity must be increased to accommodate the new elements. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
allocator: Allocator
index: usize
count: usize

Source Code

Source code
pub fn addManyAt(
    self: *Self,
    allocator: Allocator,
    index: usize,
    count: usize,
) Allocator.Error![]T {
    var managed = self.toManaged(allocator);
    defer self.* = managed.moveToUnmanaged();
    return managed.addManyAt(index, count);
}

FunctionaddManyAtAssumeCapacity[src]

pub fn addManyAtAssumeCapacity(self: *Self, index: usize, count: usize) []T

Add count new elements at position index, which have undefined values. Returns a slice pointing to the newly allocated elements, which becomes invalid after various ArrayList operations. Invalidates pre-existing pointers to elements at and after index, but does not invalidate any before that. Asserts that the list has capacity for the additional items. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
index: usize
count: usize

Source Code

Source code
pub fn addManyAtAssumeCapacity(self: *Self, index: usize, count: usize) []T {
    const new_len = self.items.len + count;
    assert(self.capacity >= new_len);
    const to_move = self.items[index..];
    self.items.len = new_len;
    mem.copyBackwards(T, self.items[index + count ..], to_move);
    const result = self.items[index..][0..count];
    @memset(result, undefined);
    return result;
}

FunctioninsertSlice[src]

pub fn insertSlice( self: *Self, allocator: Allocator, index: usize, items: []const T, ) Allocator.Error!void

Insert slice items at index i by moving list[i .. list.len] to make room. This operation is O(N). Invalidates pre-existing pointers to elements at and after index. Invalidates all pre-existing element pointers if capacity must be increased to accommodate the new elements. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
allocator: Allocator
index: usize
items: []const T

Source Code

Source code
pub fn insertSlice(
    self: *Self,
    allocator: Allocator,
    index: usize,
    items: []const T,
) Allocator.Error!void {
    const dst = try self.addManyAt(
        allocator,
        index,
        items.len,
    );
    @memcpy(dst, items);
}

FunctionreplaceRange[src]

pub fn replaceRange( self: *Self, allocator: Allocator, start: usize, len: usize, new_items: []const T, ) Allocator.Error!void

Grows or shrinks the list as necessary. Invalidates element pointers if additional capacity is allocated. Asserts that the range is in bounds.

Parameters

self: *Self
allocator: Allocator
start: usize
len: usize
new_items: []const T

Source Code

Source code
pub fn replaceRange(
    self: *Self,
    allocator: Allocator,
    start: usize,
    len: usize,
    new_items: []const T,
) Allocator.Error!void {
    const after_range = start + len;
    const range = self.items[start..after_range];
    if (range.len < new_items.len) {
        const first = new_items[0..range.len];
        const rest = new_items[range.len..];
        @memcpy(range[0..first.len], first);
        try self.insertSlice(allocator, after_range, rest);
    } else {
        self.replaceRangeAssumeCapacity(start, len, new_items);
    }
}

FunctionreplaceRangeAssumeCapacity[src]

pub fn replaceRangeAssumeCapacity(self: *Self, start: usize, len: usize, new_items: []const T) void

Grows or shrinks the list as necessary. Never invalidates element pointers. Asserts the capacity is enough for additional items.

Parameters

self: *Self
start: usize
len: usize
new_items: []const T

Source Code

Source code
pub fn replaceRangeAssumeCapacity(self: *Self, start: usize, len: usize, new_items: []const T) void {
    const after_range = start + len;
    const range = self.items[start..after_range];

    if (range.len == new_items.len)
        @memcpy(range[0..new_items.len], new_items)
    else if (range.len < new_items.len) {
        const first = new_items[0..range.len];
        const rest = new_items[range.len..];
        @memcpy(range[0..first.len], first);
        const dst = self.addManyAtAssumeCapacity(after_range, rest.len);
        @memcpy(dst, rest);
    } else {
        const extra = range.len - new_items.len;
        @memcpy(range[0..new_items.len], new_items);
        std.mem.copyForwards(
            T,
            self.items[after_range - extra ..],
            self.items[after_range..],
        );
        @memset(self.items[self.items.len - extra ..], undefined);
        self.items.len -= extra;
    }
}
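
A sketch of both the growing and the shrinking case (illustrative, using std.testing.allocator):

const std = @import("std");

test "replaceRange example" {
    const gpa = std.testing.allocator;
    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(gpa);
    try list.appendSlice(gpa, "abcdef");

    // Replace the two elements "bc" with three new ones (grows).
    try list.replaceRange(gpa, 1, 2, "XYZ");
    try std.testing.expectEqualSlices(u8, "aXYZdef", list.items);

    // Replace the three elements "XYZ" with one (shrinks, no allocation).
    list.replaceRangeAssumeCapacity(1, 3, "b");
    try std.testing.expectEqualSlices(u8, "abdef", list.items);
}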

Functionappend[src]

pub fn append(self: *Self, allocator: Allocator, item: T) Allocator.Error!void

Extend the list by 1 element. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
item: T

Source Code

Source code
pub fn append(self: *Self, allocator: Allocator, item: T) Allocator.Error!void {
    const new_item_ptr = try self.addOne(allocator);
    new_item_ptr.* = item;
}

FunctionappendAssumeCapacity[src]

pub fn appendAssumeCapacity(self: *Self, item: T) void

Extend the list by 1 element. Never invalidates element pointers. Asserts that the list can hold one additional item.

Parameters

self: *Self
item: T

Source Code

Source code
pub fn appendAssumeCapacity(self: *Self, item: T) void {
    self.addOneAssumeCapacity().* = item;
}
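
Combined with ensureUnusedCapacity (documented below), this supports the common reserve-then-fill pattern; a sketch:

const std = @import("std");

test "appendAssumeCapacity example" {
    const gpa = std.testing.allocator;
    var list: std.ArrayListUnmanaged(u32) = .empty;
    defer list.deinit(gpa);

    try list.ensureUnusedCapacity(gpa, 3);
    // None of the appends below can fail, allocate, or invalidate pointers.
    list.appendAssumeCapacity(1);
    list.appendAssumeCapacity(2);
    list.appendAssumeCapacity(3);
    try std.testing.expectEqualSlices(u32, &.{ 1, 2, 3 }, list.items);
}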

FunctionorderedRemove[src]

pub fn orderedRemove(self: *Self, i: usize) T

Remove the element at index i, shift the elements after index i forward, and return the removed element. Invalidates element pointers from index i to the end of the list. This operation is O(N) and preserves item order; use swapRemove if order preservation is not important. Asserts that the list is not empty. Asserts that the index is in bounds.

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn orderedRemove(self: *Self, i: usize) T {
    const old_item = self.items[i];
    self.replaceRangeAssumeCapacity(i, 1, &.{});
    return old_item;
}

FunctionswapRemove[src]

pub fn swapRemove(self: *Self, i: usize) T

Removes the element at the specified index and returns it. The empty slot is filled from the end of the list. Invalidates pointers to the last element. This operation is O(1). This may not preserve item order; use orderedRemove if you need to preserve order. Asserts that the list is not empty. Asserts that the index is in bounds.

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn swapRemove(self: *Self, i: usize) T {
    if (self.items.len - 1 == i) return self.pop().?;

    const old_item = self.items[i];
    self.items[i] = self.pop().?;
    return old_item;
}
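
The trade-off between the two removal functions, sketched as a test:

const std = @import("std");

test "orderedRemove and swapRemove example" {
    const gpa = std.testing.allocator;
    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(gpa);
    try list.appendSlice(gpa, "abcd");

    // O(N), preserves order: "abcd" -> "acd".
    try std.testing.expectEqual(@as(u8, 'b'), list.orderedRemove(1));
    try std.testing.expectEqualSlices(u8, "acd", list.items);

    // O(1), fills the hole from the end: "acd" -> "dc".
    try std.testing.expectEqual(@as(u8, 'a'), list.swapRemove(0));
    try std.testing.expectEqualSlices(u8, "dc", list.items);
}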

FunctionappendSlice[src]

pub fn appendSlice(self: *Self, allocator: Allocator, items: []const T) Allocator.Error!void

Append the slice of items to the list. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
items: []const T

Source Code

Source code
pub fn appendSlice(self: *Self, allocator: Allocator, items: []const T) Allocator.Error!void {
    try self.ensureUnusedCapacity(allocator, items.len);
    self.appendSliceAssumeCapacity(items);
}

FunctionappendSliceAssumeCapacity[src]

pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void

Append the slice of items to the list. Never invalidates element pointers. Asserts that the list can hold the additional items.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void {
    const old_len = self.items.len;
    const new_len = old_len + items.len;
    assert(new_len <= self.capacity);
    self.items.len = new_len;
    @memcpy(self.items[old_len..][0..items.len], items);
}

FunctionappendUnalignedSlice[src]

pub fn appendUnalignedSlice(self: *Self, allocator: Allocator, items: []align(1) const T) Allocator.Error!void

Append the slice of items to the list. Allocates more memory as necessary. Only call this function if a call to appendSlice instead would be a compile error. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
items: []align(1) const T

Source Code

Source code
pub fn appendUnalignedSlice(self: *Self, allocator: Allocator, items: []align(1) const T) Allocator.Error!void {
    try self.ensureUnusedCapacity(allocator, items.len);
    self.appendUnalignedSliceAssumeCapacity(items);
}

FunctionappendUnalignedSliceAssumeCapacity[src]

pub fn appendUnalignedSliceAssumeCapacity(self: *Self, items: []align(1) const T) void

Append an unaligned slice of items to the list. Only call this function if a call to appendSliceAssumeCapacity instead would be a compile error. Asserts that the list can hold the additional items.

Parameters

self: *Self
items: []align(1) const T

Source Code

Source code
pub fn appendUnalignedSliceAssumeCapacity(self: *Self, items: []align(1) const T) void {
    const old_len = self.items.len;
    const new_len = old_len + items.len;
    assert(new_len <= self.capacity);
    self.items.len = new_len;
    @memcpy(self.items[old_len..][0..items.len], items);
}

Functionwriter[src]

pub fn writer(self: *Self, allocator: Allocator) Writer

Initializes a Writer which will append to the list.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn writer(self: *Self, allocator: Allocator) Writer {
    return .{ .context = .{ .self = self, .allocator = allocator } };
}
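
A sketch of formatted appending through the Writer interface (only available when T is u8):

const std = @import("std");

test "writer example" {
    const gpa = std.testing.allocator;
    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(gpa);

    const w = list.writer(gpa);
    try w.print("{d} + {d} = {d}", .{ 2, 2, 4 });
    try std.testing.expectEqualSlices(u8, "2 + 2 = 4", list.items);
}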

FunctionfixedWriter[src]

pub fn fixedWriter(self: *Self) FixedWriter

Initializes a Writer which will append to the list but will return error.OutOfMemory rather than increasing capacity.

Parameters

self: *Self

Source Code

Source code
pub fn fixedWriter(self: *Self) FixedWriter {
    return .{ .context = self };
}
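
A sketch contrasting fixedWriter with the growing writer above:

const std = @import("std");

test "fixedWriter example" {
    const gpa = std.testing.allocator;
    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(gpa);
    try list.ensureTotalCapacityPrecise(gpa, 4);

    const w = list.fixedWriter();
    try w.writeAll("abcd");
    // Capacity is exhausted; further writes fail instead of reallocating.
    try std.testing.expectError(error.OutOfMemory, w.writeAll("e"));
    try std.testing.expectEqualSlices(u8, "abcd", list.items);
}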

FunctionappendNTimes[src]

pub inline fn appendNTimes(self: *Self, allocator: Allocator, value: T, n: usize) Allocator.Error!void

Append a value to the list n times. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed. The function is inline so that a comptime-known value parameter will have a more optimal memset codegen in case it has a repeated byte pattern.

Parameters

self: *Self
allocator: Allocator
value: T
n: usize

Source Code

Source code
pub inline fn appendNTimes(self: *Self, allocator: Allocator, value: T, n: usize) Allocator.Error!void {
    const old_len = self.items.len;
    try self.resize(allocator, try addOrOom(old_len, n));
    @memset(self.items[old_len..self.items.len], value);
}

FunctionappendNTimesAssumeCapacity[src]

pub inline fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void

Append a value to the list n times. Never invalidates element pointers. The function is inline so that a comptime-known value parameter will have better memset codegen in case it has a repeated byte pattern. Asserts that the list can hold the additional items.

Parameters

self: *Self
value: T
n: usize

Source Code

Source code
pub inline fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void {
    const new_len = self.items.len + n;
    assert(new_len <= self.capacity);
    @memset(self.items.ptr[self.items.len..new_len], value);
    self.items.len = new_len;
}

Functionresize[src]

pub fn resize(self: *Self, allocator: Allocator, new_len: usize) Allocator.Error!void

Adjust the list length to new_len. Additional elements contain the value undefined. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
new_len: usize

Source Code

Source code
pub fn resize(self: *Self, allocator: Allocator, new_len: usize) Allocator.Error!void {
    try self.ensureTotalCapacity(allocator, new_len);
    self.items.len = new_len;
}

FunctionshrinkAndFree[src]

pub fn shrinkAndFree(self: *Self, allocator: Allocator, new_len: usize) void

Reduce allocated capacity to new_len. May invalidate element pointers. Asserts that the new length is less than or equal to the previous length.

Parameters

self: *Self
allocator: Allocator
new_len: usize

Source Code

Source code
pub fn shrinkAndFree(self: *Self, allocator: Allocator, new_len: usize) void {
    assert(new_len <= self.items.len);

    if (@sizeOf(T) == 0) {
        self.items.len = new_len;
        return;
    }

    const old_memory = self.allocatedSlice();
    if (allocator.remap(old_memory, new_len)) |new_items| {
        self.capacity = new_items.len;
        self.items = new_items;
        return;
    }

    const new_memory = allocator.alignedAlloc(T, alignment, new_len) catch |e| switch (e) {
        error.OutOfMemory => {
            // No problem, capacity is still correct then.
            self.items.len = new_len;
            return;
        },
    };

    @memcpy(new_memory, self.items[0..new_len]);
    allocator.free(old_memory);
    self.items = new_memory;
    self.capacity = new_memory.len;
}

FunctionshrinkRetainingCapacity[src]

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void

Reduce length to new_len. Invalidates pointers to elements items[new_len..]. Keeps capacity the same. Asserts that the new length is less than or equal to the previous length.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
    assert(new_len <= self.items.len);
    self.items.len = new_len;
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Invalidates all element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.items.len = 0;
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self, allocator: Allocator) void

Invalidates all element pointers.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn clearAndFree(self: *Self, allocator: Allocator) void {
    allocator.free(self.allocatedSlice());
    self.items.len = 0;
    self.capacity = 0;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Allocator.Error!void

Modify the array so that it can hold at least new_capacity items. Implements super-linear growth to achieve amortized O(1) append operations. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
gpa: Allocator
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Allocator.Error!void {
    if (self.capacity >= new_capacity) return;
    return self.ensureTotalCapacityPrecise(gpa, growCapacity(self.capacity, new_capacity));
}

FunctionensureTotalCapacityPrecise[src]

pub fn ensureTotalCapacityPrecise(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void

If the current capacity is less than new_capacity, this function will modify the array so that it can hold exactly new_capacity items. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacityPrecise(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void {
    if (@sizeOf(T) == 0) {
        self.capacity = math.maxInt(usize);
        return;
    }

    if (self.capacity >= new_capacity) return;

    // Here we avoid copying allocated but unused bytes by
    // attempting a resize in place, and falling back to allocating
    // a new buffer and doing our own copy. With a realloc() call,
    // the allocator implementation would pointlessly copy our
    // extra capacity.
    const old_memory = self.allocatedSlice();
    if (allocator.remap(old_memory, new_capacity)) |new_memory| {
        self.items.ptr = new_memory.ptr;
        self.capacity = new_memory.len;
    } else {
        const new_memory = try allocator.alignedAlloc(T, alignment, new_capacity);
        @memcpy(new_memory[0..self.items.len], self.items);
        allocator.free(old_memory);
        self.items.ptr = new_memory.ptr;
        self.capacity = new_memory.len;
    }
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity( self: *Self, allocator: Allocator, additional_count: usize, ) Allocator.Error!void

Modify the array so that it can hold at least additional_count more items. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
additional_count: usize

Source Code

Source code
pub fn ensureUnusedCapacity(
    self: *Self,
    allocator: Allocator,
    additional_count: usize,
) Allocator.Error!void {
    return self.ensureTotalCapacity(allocator, try addOrOom(self.items.len, additional_count));
}

FunctionexpandToCapacity[src]

pub fn expandToCapacity(self: *Self) void

Increases the array's length to match the full capacity that is already allocated. The new elements have undefined values. Never invalidates element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn expandToCapacity(self: *Self) void {
    self.items.len = self.capacity;
}

FunctionaddOne[src]

pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!*T

Increase the length by 1, returning a pointer to the new item. The returned element pointer becomes invalid when the list is resized.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!*T {
    // This can never overflow because `self.items` can never occupy the whole address space
    const newlen = self.items.len + 1;
    try self.ensureTotalCapacity(allocator, newlen);
    return self.addOneAssumeCapacity();
}

FunctionaddOneAssumeCapacity[src]

pub fn addOneAssumeCapacity(self: *Self) *T

Increase the length by 1, returning a pointer to the new item. This operation itself never invalidates existing element pointers, but the returned pointer becomes invalid when the list is resized. Asserts that the list can hold one additional item.

Parameters

self: *Self

Source Code

Source code
pub fn addOneAssumeCapacity(self: *Self) *T {
    assert(self.items.len < self.capacity);

    self.items.len += 1;
    return &self.items[self.items.len - 1];
}

FunctionaddManyAsArray[src]

pub fn addManyAsArray(self: *Self, allocator: Allocator, comptime n: usize) Allocator.Error!*[n]T

Resize the array, adding n new elements, which have undefined values. The return value is a pointer to an array of the newly allocated elements; it becomes invalid when the list is resized.

Parameters

self: *Self
allocator: Allocator
n: usize

Source Code

Source code
pub fn addManyAsArray(self: *Self, allocator: Allocator, comptime n: usize) Allocator.Error!*[n]T {
    const prev_len = self.items.len;
    try self.resize(allocator, try addOrOom(self.items.len, n));
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsArrayAssumeCapacity[src]

pub fn addManyAsArrayAssumeCapacity(self: *Self, comptime n: usize) *[n]T

Resize the array, adding n new elements, which have undefined values. The return value is a pointer to an array of the newly allocated elements; it becomes invalid when the list is resized. Never invalidates existing element pointers. Asserts that the list can hold the additional items.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsArrayAssumeCapacity(self: *Self, comptime n: usize) *[n]T {
    assert(self.items.len + n <= self.capacity);
    const prev_len = self.items.len;
    self.items.len += n;
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsSlice[src]

pub fn addManyAsSlice(self: *Self, allocator: Allocator, n: usize) Allocator.Error![]T

Resize the array, adding n new elements, which have undefined values. The return value is a slice pointing to the newly allocated elements; it becomes invalid when the list is resized. Resizes the list if self.capacity is not large enough.

Parameters

self: *Self
allocator: Allocator
n: usize

Source Code

Source code
pub fn addManyAsSlice(self: *Self, allocator: Allocator, n: usize) Allocator.Error![]T {
    const prev_len = self.items.len;
    try self.resize(allocator, try addOrOom(self.items.len, n));
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsSliceAssumeCapacity[src]

pub fn addManyAsSliceAssumeCapacity(self: *Self, n: usize) []T

Resize the array, adding n new elements, which have undefined values. The return value is a slice pointing to the newly allocated elements; it becomes invalid when the list is resized. Never invalidates existing element pointers. Asserts that the list can hold the additional items.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsSliceAssumeCapacity(self: *Self, n: usize) []T {
    assert(self.items.len + n <= self.capacity);
    const prev_len = self.items.len;
    self.items.len += n;
    return self.items[prev_len..][0..n];
}

Functionpop[src]

pub fn pop(self: *Self) ?T

Remove and return the last element from the list. If the list is empty, returns null. Invalidates pointers to last element.

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?T {
    if (self.items.len == 0) return null;
    const val = self.items[self.items.len - 1];
    self.items.len -= 1;
    return val;
}

FunctionallocatedSlice[src]

pub fn allocatedSlice(self: Self) Slice

Returns a slice of all the items plus the extra capacity, whose memory contents are undefined.

Parameters

self: Self

Source Code

Source code
pub fn allocatedSlice(self: Self) Slice {
    return self.items.ptr[0..self.capacity];
}

FunctionunusedCapacitySlice[src]

pub fn unusedCapacitySlice(self: Self) []T

Returns a slice of only the extra capacity after items. This can be useful for writing directly into an ArrayList. Note that such an operation must be followed up with a direct modification of self.items.len.

Parameters

self: Self

Source Code

Source code
pub fn unusedCapacitySlice(self: Self) []T {
    return self.allocatedSlice()[self.items.len..];
}
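
The typical pattern is to reserve capacity, write into the spare slice, and then publish the result by bumping items.len. A minimal sketch of that pattern, assuming std.testing.allocator and the ArrayListUnmanaged alias:

const std = @import("std");

test "write into spare capacity" {
    const allocator = std.testing.allocator;

    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(allocator);

    try list.ensureUnusedCapacity(allocator, 4);
    const spare = list.unusedCapacitySlice();
    @memcpy(spare[0..4], "abcd");
    list.items.len += 4; // publish the bytes written above

    try std.testing.expectEqualSlices(u8, "abcd", list.items);
}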

FunctiongetLast[src]

pub fn getLast(self: Self) T

Return the last element from the list. Asserts that the list is not empty.

Parameters

self: Self

Source Code

Source code
pub fn getLast(self: Self) T {
    const val = self.items[self.items.len - 1];
    return val;
}

FunctiongetLastOrNull[src]

pub fn getLastOrNull(self: Self) ?T

Return the last element from the list, or return null if list is empty.

Parameters

self: Self

Source Code

Source code
pub fn getLastOrNull(self: Self) ?T {
    if (self.items.len == 0) return null;
    return self.getLast();
}

Source Code

Source code
pub fn ArrayListAlignedUnmanaged(comptime T: type, comptime alignment: ?u29) type {
    if (alignment) |a| {
        if (a == @alignOf(T)) {
            return ArrayListAlignedUnmanaged(T, null);
        }
    }
    return struct {
        const Self = @This();
        /// Contents of the list. This field is intended to be accessed
        /// directly.
        ///
        /// Pointers to elements in this slice are invalidated by various
        /// functions of this ArrayList in accordance with the respective
        /// documentation. In all cases, "invalidated" means that the memory
        /// has been passed to an allocator's resize or free function.
        items: Slice = &[_]T{},
        /// How many T values this list can hold without allocating
        /// additional memory.
        capacity: usize = 0,

        /// An ArrayList containing no elements.
        pub const empty: Self = .{
            .items = &.{},
            .capacity = 0,
        };

        pub const Slice = if (alignment) |a| ([]align(a) T) else []T;

        pub fn SentinelSlice(comptime s: T) type {
            return if (alignment) |a| ([:s]align(a) T) else [:s]T;
        }

        /// Initialize with capacity to hold `num` elements.
        /// The resulting capacity will equal `num` exactly.
        /// Deinitialize with `deinit` or use `toOwnedSlice`.
        pub fn initCapacity(allocator: Allocator, num: usize) Allocator.Error!Self {
            var self = Self{};
            try self.ensureTotalCapacityPrecise(allocator, num);
            return self;
        }

        /// Initialize with externally-managed memory. The buffer determines the
        /// capacity, and the length is set to zero.
        /// When initialized this way, all functions that accept an Allocator
        /// argument cause illegal behavior.
        pub fn initBuffer(buffer: Slice) Self {
            return .{
                .items = buffer[0..0],
                .capacity = buffer.len,
            };
        }

        /// Release all allocated memory.
        pub fn deinit(self: *Self, allocator: Allocator) void {
            allocator.free(self.allocatedSlice());
            self.* = undefined;
        }

        /// Convert this list into an analogous memory-managed one.
        /// The returned list has ownership of the underlying memory.
        pub fn toManaged(self: *Self, allocator: Allocator) ArrayListAligned(T, alignment) {
            return .{ .items = self.items, .capacity = self.capacity, .allocator = allocator };
        }

        /// ArrayListUnmanaged takes ownership of the passed in slice. The slice must have been
        /// allocated with `allocator`.
        /// Deinitialize with `deinit` or use `toOwnedSlice`.
        pub fn fromOwnedSlice(slice: Slice) Self {
            return Self{
                .items = slice,
                .capacity = slice.len,
            };
        }

        /// ArrayListUnmanaged takes ownership of the passed in slice. The slice must have been
        /// allocated with `allocator`.
        /// Deinitialize with `deinit` or use `toOwnedSlice`.
        pub fn fromOwnedSliceSentinel(comptime sentinel: T, slice: [:sentinel]T) Self {
            return Self{
                .items = slice,
                .capacity = slice.len + 1,
            };
        }

        /// The caller owns the returned memory. Empties this ArrayList.
        /// Its capacity is cleared, making deinit() safe but unnecessary to call.
        pub fn toOwnedSlice(self: *Self, allocator: Allocator) Allocator.Error!Slice {
            const old_memory = self.allocatedSlice();
            if (allocator.remap(old_memory, self.items.len)) |new_items| {
                self.* = .empty;
                return new_items;
            }

            const new_memory = try allocator.alignedAlloc(T, alignment, self.items.len);
            @memcpy(new_memory, self.items);
            self.clearAndFree(allocator);
            return new_memory;
        }

        /// The caller owns the returned memory. ArrayList becomes empty.
        pub fn toOwnedSliceSentinel(self: *Self, allocator: Allocator, comptime sentinel: T) Allocator.Error!SentinelSlice(sentinel) {
            // This addition can never overflow because `self.items` can never occupy the whole address space
            try self.ensureTotalCapacityPrecise(allocator, self.items.len + 1);
            self.appendAssumeCapacity(sentinel);
            const result = try self.toOwnedSlice(allocator);
            return result[0 .. result.len - 1 :sentinel];
        }

        /// Creates a copy of this ArrayList.
        pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self {
            var cloned = try Self.initCapacity(allocator, self.capacity);
            cloned.appendSliceAssumeCapacity(self.items);
            return cloned;
        }

        /// Insert `item` at index `i`. Moves `list[i .. list.len]` to higher indices to make room.
        /// If `i` is equal to the length of the list this operation is equivalent to append.
        /// This operation is O(N).
        /// Invalidates element pointers if additional memory is needed.
        /// Asserts that the index is in bounds or equal to the length.
        pub fn insert(self: *Self, allocator: Allocator, i: usize, item: T) Allocator.Error!void {
            const dst = try self.addManyAt(allocator, i, 1);
            dst[0] = item;
        }

        /// Insert `item` at index `i`. Moves `list[i .. list.len]` to higher indices to make room.
        /// If `i` is equal to the length of the list this operation is equivalent to append.
        /// This operation is O(N).
        /// Asserts that the list has capacity for one additional item.
        /// Asserts that the index is in bounds or equal to the length.
        pub fn insertAssumeCapacity(self: *Self, i: usize, item: T) void {
            assert(self.items.len < self.capacity);
            self.items.len += 1;

            mem.copyBackwards(T, self.items[i + 1 .. self.items.len], self.items[i .. self.items.len - 1]);
            self.items[i] = item;
        }

        /// Add `count` new elements at position `index`, which have
        /// `undefined` values. Returns a slice pointing to the newly allocated
        /// elements, which becomes invalid after various `ArrayList`
        /// operations.
        /// Invalidates pre-existing pointers to elements at and after `index`.
        /// Invalidates all pre-existing element pointers if capacity must be
        /// increased to accommodate the new elements.
        /// Asserts that the index is in bounds or equal to the length.
        pub fn addManyAt(
            self: *Self,
            allocator: Allocator,
            index: usize,
            count: usize,
        ) Allocator.Error![]T {
            var managed = self.toManaged(allocator);
            defer self.* = managed.moveToUnmanaged();
            return managed.addManyAt(index, count);
        }

        /// Add `count` new elements at position `index`, which have
        /// `undefined` values. Returns a slice pointing to the newly allocated
        /// elements, which becomes invalid after various `ArrayList`
        /// operations.
        /// Invalidates pre-existing pointers to elements at and after `index`, but
        /// does not invalidate any before that.
        /// Asserts that the list has capacity for the additional items.
        /// Asserts that the index is in bounds or equal to the length.
        pub fn addManyAtAssumeCapacity(self: *Self, index: usize, count: usize) []T {
            const new_len = self.items.len + count;
            assert(self.capacity >= new_len);
            const to_move = self.items[index..];
            self.items.len = new_len;
            mem.copyBackwards(T, self.items[index + count ..], to_move);
            const result = self.items[index..][0..count];
            @memset(result, undefined);
            return result;
        }

        /// Insert slice `items` at position `index` by moving `list[index .. list.len]` to make room.
        /// This operation is O(N).
        /// Invalidates pre-existing pointers to elements at and after `index`.
        /// Invalidates all pre-existing element pointers if capacity must be
        /// increased to accommodate the new elements.
        /// Asserts that the index is in bounds or equal to the length.
        pub fn insertSlice(
            self: *Self,
            allocator: Allocator,
            index: usize,
            items: []const T,
        ) Allocator.Error!void {
            const dst = try self.addManyAt(
                allocator,
                index,
                items.len,
            );
            @memcpy(dst, items);
        }

        /// Grows or shrinks the list as necessary.
        /// Invalidates element pointers if additional capacity is allocated.
        /// Asserts that the range is in bounds.
        pub fn replaceRange(
            self: *Self,
            allocator: Allocator,
            start: usize,
            len: usize,
            new_items: []const T,
        ) Allocator.Error!void {
            const after_range = start + len;
            const range = self.items[start..after_range];
            if (range.len < new_items.len) {
                const first = new_items[0..range.len];
                const rest = new_items[range.len..];
                @memcpy(range[0..first.len], first);
                try self.insertSlice(allocator, after_range, rest);
            } else {
                self.replaceRangeAssumeCapacity(start, len, new_items);
            }
        }

        /// Grows or shrinks the list as necessary.
        /// Never invalidates element pointers.
        /// Asserts the capacity is enough for additional items.
        pub fn replaceRangeAssumeCapacity(self: *Self, start: usize, len: usize, new_items: []const T) void {
            const after_range = start + len;
            const range = self.items[start..after_range];

            if (range.len == new_items.len)
                @memcpy(range[0..new_items.len], new_items)
            else if (range.len < new_items.len) {
                const first = new_items[0..range.len];
                const rest = new_items[range.len..];
                @memcpy(range[0..first.len], first);
                const dst = self.addManyAtAssumeCapacity(after_range, rest.len);
                @memcpy(dst, rest);
            } else {
                const extra = range.len - new_items.len;
                @memcpy(range[0..new_items.len], new_items);
                std.mem.copyForwards(
                    T,
                    self.items[after_range - extra ..],
                    self.items[after_range..],
                );
                @memset(self.items[self.items.len - extra ..], undefined);
                self.items.len -= extra;
            }
        }

        /// Extend the list by 1 element. Allocates more memory as necessary.
        /// Invalidates element pointers if additional memory is needed.
        pub fn append(self: *Self, allocator: Allocator, item: T) Allocator.Error!void {
            const new_item_ptr = try self.addOne(allocator);
            new_item_ptr.* = item;
        }

        /// Extend the list by 1 element.
        /// Never invalidates element pointers.
        /// Asserts that the list can hold one additional item.
        pub fn appendAssumeCapacity(self: *Self, item: T) void {
            self.addOneAssumeCapacity().* = item;
        }

        /// Remove the element at index `i` from the list and return its value.
        /// Invalidates pointers to the last element.
        /// This operation is O(N).
        /// Asserts that the list is not empty.
        /// Asserts that the index is in bounds.
        pub fn orderedRemove(self: *Self, i: usize) T {
            const old_item = self.items[i];
            self.replaceRangeAssumeCapacity(i, 1, &.{});
            return old_item;
        }

        /// Removes the element at the specified index and returns it.
        /// The empty slot is filled from the end of the list.
        /// Invalidates pointers to last element.
        /// This operation is O(1).
        /// Asserts that the list is not empty.
        /// Asserts that the index is in bounds.
        pub fn swapRemove(self: *Self, i: usize) T {
            if (self.items.len - 1 == i) return self.pop().?;

            const old_item = self.items[i];
            self.items[i] = self.pop().?;
            return old_item;
        }

        /// Append the slice of items to the list. Allocates more
        /// memory as necessary.
        /// Invalidates element pointers if additional memory is needed.
        pub fn appendSlice(self: *Self, allocator: Allocator, items: []const T) Allocator.Error!void {
            try self.ensureUnusedCapacity(allocator, items.len);
            self.appendSliceAssumeCapacity(items);
        }

        /// Append the slice of items to the list.
        /// Asserts that the list can hold the additional items.
        pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void {
            const old_len = self.items.len;
            const new_len = old_len + items.len;
            assert(new_len <= self.capacity);
            self.items.len = new_len;
            @memcpy(self.items[old_len..][0..items.len], items);
        }

        /// Append the slice of items to the list. Allocates more
        /// memory as necessary. Only call this function if a call to `appendSlice` instead would
        /// be a compile error.
        /// Invalidates element pointers if additional memory is needed.
        pub fn appendUnalignedSlice(self: *Self, allocator: Allocator, items: []align(1) const T) Allocator.Error!void {
            try self.ensureUnusedCapacity(allocator, items.len);
            self.appendUnalignedSliceAssumeCapacity(items);
        }

        /// Append an unaligned slice of items to the list.
        /// Only call this function if a call to `appendSliceAssumeCapacity`
        /// instead would be a compile error.
        /// Asserts that the list can hold the additional items.
        pub fn appendUnalignedSliceAssumeCapacity(self: *Self, items: []align(1) const T) void {
            const old_len = self.items.len;
            const new_len = old_len + items.len;
            assert(new_len <= self.capacity);
            self.items.len = new_len;
            @memcpy(self.items[old_len..][0..items.len], items);
        }

        pub const WriterContext = struct {
            self: *Self,
            allocator: Allocator,
        };

        pub const Writer = if (T != u8)
            @compileError("The Writer interface is only defined for ArrayList(u8) " ++
                "but the given type is ArrayList(" ++ @typeName(T) ++ ")")
        else
            std.io.Writer(WriterContext, Allocator.Error, appendWrite);

        /// Initializes a Writer which will append to the list.
        pub fn writer(self: *Self, allocator: Allocator) Writer {
            return .{ .context = .{ .self = self, .allocator = allocator } };
        }

        /// Same as `append` except it returns the number of bytes written,
        /// which is always the same as `m.len`. The purpose of this function
        /// existing is to match `std.io.Writer` API.
        /// Invalidates element pointers if additional memory is needed.
        fn appendWrite(context: WriterContext, m: []const u8) Allocator.Error!usize {
            try context.self.appendSlice(context.allocator, m);
            return m.len;
        }

        pub const FixedWriter = std.io.Writer(*Self, Allocator.Error, appendWriteFixed);

        /// Initializes a Writer which will append to the list but will return
        /// `error.OutOfMemory` rather than increasing capacity.
        pub fn fixedWriter(self: *Self) FixedWriter {
            return .{ .context = self };
        }

        /// The purpose of this function existing is to match `std.io.Writer` API.
        fn appendWriteFixed(self: *Self, m: []const u8) error{OutOfMemory}!usize {
            const available_capacity = self.capacity - self.items.len;
            if (m.len > available_capacity)
                return error.OutOfMemory;

            self.appendSliceAssumeCapacity(m);
            return m.len;
        }

        /// Append a value to the list `n` times.
        /// Allocates more memory as necessary.
        /// Invalidates element pointers if additional memory is needed.
        /// The function is inline so that a comptime-known `value` parameter will
        /// have a more optimal memset codegen in case it has a repeated byte pattern.
        pub inline fn appendNTimes(self: *Self, allocator: Allocator, value: T, n: usize) Allocator.Error!void {
            const old_len = self.items.len;
            try self.resize(allocator, try addOrOom(old_len, n));
            @memset(self.items[old_len..self.items.len], value);
        }

        /// Append a value to the list `n` times.
        /// Never invalidates element pointers.
        /// The function is inline so that a comptime-known `value` parameter will
        /// have better memset codegen in case it has a repeated byte pattern.
        /// Asserts that the list can hold the additional items.
        pub inline fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void {
            const new_len = self.items.len + n;
            assert(new_len <= self.capacity);
            @memset(self.items.ptr[self.items.len..new_len], value);
            self.items.len = new_len;
        }

        /// Adjust the list length to `new_len`.
        /// Additional elements contain the value `undefined`.
        /// Invalidates element pointers if additional memory is needed.
        pub fn resize(self: *Self, allocator: Allocator, new_len: usize) Allocator.Error!void {
            try self.ensureTotalCapacity(allocator, new_len);
            self.items.len = new_len;
        }

        /// Reduce allocated capacity to `new_len`.
        /// May invalidate element pointers.
        /// Asserts that the new length is less than or equal to the previous length.
        pub fn shrinkAndFree(self: *Self, allocator: Allocator, new_len: usize) void {
            assert(new_len <= self.items.len);

            if (@sizeOf(T) == 0) {
                self.items.len = new_len;
                return;
            }

            const old_memory = self.allocatedSlice();
            if (allocator.remap(old_memory, new_len)) |new_items| {
                self.capacity = new_items.len;
                self.items = new_items;
                return;
            }

            const new_memory = allocator.alignedAlloc(T, alignment, new_len) catch |e| switch (e) {
                error.OutOfMemory => {
                    // No problem, capacity is still correct then.
                    self.items.len = new_len;
                    return;
                },
            };

            @memcpy(new_memory, self.items[0..new_len]);
            allocator.free(old_memory);
            self.items = new_memory;
            self.capacity = new_memory.len;
        }

        /// Reduce length to `new_len`.
        /// Invalidates pointers to elements `items[new_len..]`.
        /// Keeps capacity the same.
        /// Asserts that the new length is less than or equal to the previous length.
        pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
            assert(new_len <= self.items.len);
            self.items.len = new_len;
        }

        /// Invalidates all element pointers.
        pub fn clearRetainingCapacity(self: *Self) void {
            self.items.len = 0;
        }

        /// Invalidates all element pointers.
        pub fn clearAndFree(self: *Self, allocator: Allocator) void {
            allocator.free(self.allocatedSlice());
            self.items.len = 0;
            self.capacity = 0;
        }

        /// Modify the array so that it can hold at least `new_capacity` items.
        /// Implements super-linear growth to achieve amortized O(1) append operations.
        /// Invalidates element pointers if additional memory is needed.
        pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Allocator.Error!void {
            if (self.capacity >= new_capacity) return;
            return self.ensureTotalCapacityPrecise(gpa, growCapacity(self.capacity, new_capacity));
        }

        /// If the current capacity is less than `new_capacity`, this function will
        /// modify the array so that it can hold exactly `new_capacity` items.
        /// Invalidates element pointers if additional memory is needed.
        pub fn ensureTotalCapacityPrecise(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void {
            if (@sizeOf(T) == 0) {
                self.capacity = math.maxInt(usize);
                return;
            }

            if (self.capacity >= new_capacity) return;

            // Here we avoid copying allocated but unused bytes by
            // attempting a resize in place, and falling back to allocating
            // a new buffer and doing our own copy. With a realloc() call,
            // the allocator implementation would pointlessly copy our
            // extra capacity.
            const old_memory = self.allocatedSlice();
            if (allocator.remap(old_memory, new_capacity)) |new_memory| {
                self.items.ptr = new_memory.ptr;
                self.capacity = new_memory.len;
            } else {
                const new_memory = try allocator.alignedAlloc(T, alignment, new_capacity);
                @memcpy(new_memory[0..self.items.len], self.items);
                allocator.free(old_memory);
                self.items.ptr = new_memory.ptr;
                self.capacity = new_memory.len;
            }
        }

        /// Modify the array so that it can hold at least `additional_count` **more** items.
        /// Invalidates element pointers if additional memory is needed.
        pub fn ensureUnusedCapacity(
            self: *Self,
            allocator: Allocator,
            additional_count: usize,
        ) Allocator.Error!void {
            return self.ensureTotalCapacity(allocator, try addOrOom(self.items.len, additional_count));
        }

        /// Increases the array's length to match the full capacity that is already allocated.
        /// The new elements have `undefined` values.
        /// Never invalidates element pointers.
        pub fn expandToCapacity(self: *Self) void {
            self.items.len = self.capacity;
        }

        /// Increase length by 1, returning pointer to the new item.
        /// The returned element pointer becomes invalid when the list is resized.
        pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!*T {
            // This can never overflow because `self.items` can never occupy the whole address space
            const newlen = self.items.len + 1;
            try self.ensureTotalCapacity(allocator, newlen);
            return self.addOneAssumeCapacity();
        }

        /// Increase length by 1, returning pointer to the new item.
        /// Never invalidates element pointers.
        /// The returned element pointer becomes invalid when the list is resized.
        /// Asserts that the list can hold one additional item.
        pub fn addOneAssumeCapacity(self: *Self) *T {
            assert(self.items.len < self.capacity);

            self.items.len += 1;
            return &self.items[self.items.len - 1];
        }

        /// Resize the array, adding `n` new elements, which have `undefined` values.
        /// The return value is an array pointing to the newly allocated elements.
        /// The returned pointer becomes invalid when the list is resized.
        pub fn addManyAsArray(self: *Self, allocator: Allocator, comptime n: usize) Allocator.Error!*[n]T {
            const prev_len = self.items.len;
            try self.resize(allocator, try addOrOom(self.items.len, n));
            return self.items[prev_len..][0..n];
        }

        /// Resize the array, adding `n` new elements, which have `undefined` values.
        /// The return value is an array pointing to the newly allocated elements.
        /// Never invalidates element pointers.
        /// The returned pointer becomes invalid when the list is resized.
        /// Asserts that the list can hold the additional items.
        pub fn addManyAsArrayAssumeCapacity(self: *Self, comptime n: usize) *[n]T {
            assert(self.items.len + n <= self.capacity);
            const prev_len = self.items.len;
            self.items.len += n;
            return self.items[prev_len..][0..n];
        }

        /// Resize the array, adding `n` new elements, which have `undefined` values.
        /// The return value is a slice pointing to the newly allocated elements.
        /// The returned pointer becomes invalid when the list is resized.
        /// Resizes list if `self.capacity` is not large enough.
        pub fn addManyAsSlice(self: *Self, allocator: Allocator, n: usize) Allocator.Error![]T {
            const prev_len = self.items.len;
            try self.resize(allocator, try addOrOom(self.items.len, n));
            return self.items[prev_len..][0..n];
        }

        /// Resize the array, adding `n` new elements, which have `undefined` values.
        /// The return value is a slice pointing to the newly allocated elements.
        /// Never invalidates element pointers.
        /// The returned pointer becomes invalid when the list is resized.
        /// Asserts that the list can hold the additional items.
        pub fn addManyAsSliceAssumeCapacity(self: *Self, n: usize) []T {
            assert(self.items.len + n <= self.capacity);
            const prev_len = self.items.len;
            self.items.len += n;
            return self.items[prev_len..][0..n];
        }

        /// Remove and return the last element from the list.
        /// If the list is empty, returns `null`.
        /// Invalidates pointers to last element.
        pub fn pop(self: *Self) ?T {
            if (self.items.len == 0) return null;
            const val = self.items[self.items.len - 1];
            self.items.len -= 1;
            return val;
        }

        /// Returns a slice of all the items plus the extra capacity, whose memory
        /// contents are `undefined`.
        pub fn allocatedSlice(self: Self) Slice {
            return self.items.ptr[0..self.capacity];
        }

        /// Returns a slice of only the extra capacity after items.
        /// This can be useful for writing directly into an ArrayList.
        /// Note that such an operation must be followed up with a direct
        /// modification of `self.items.len`.
        pub fn unusedCapacitySlice(self: Self) []T {
            return self.allocatedSlice()[self.items.len..];
        }

        /// Return the last element from the list.
        /// Asserts that the list is not empty.
        pub fn getLast(self: Self) T {
            const val = self.items[self.items.len - 1];
            return val;
        }

        /// Return the last element from the list, or
        /// return `null` if list is empty.
        pub fn getLastOrNull(self: Self) ?T {
            if (self.items.len == 0) return null;
            return self.getLast();
        }

        const init_capacity = @as(comptime_int, @max(1, std.atomic.cache_line / @sizeOf(T)));

        /// Called when memory growth is necessary. Returns a capacity larger than
        /// minimum that grows super-linearly.
        fn growCapacity(current: usize, minimum: usize) usize {
            var new = current;
            while (true) {
                new +|= new / 2 + init_capacity;
                if (new >= minimum)
                    return new;
            }
        }
    };
}

Type FunctionArrayListUnmanaged[src]

An ArrayList, but the allocator is passed as a parameter to the relevant functions rather than stored in the struct itself. The same allocator must be used throughout the entire lifetime of an ArrayListUnmanaged. Initialize directly or with initCapacity, and deinitialize with deinit or use toOwnedSlice.
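
For orientation, a minimal end-to-end sketch, assuming std.testing.allocator and a Zig version that provides the .empty decl literal; note that the same allocator is passed to every fallible call and to deinit:

const std = @import("std");

test "basic ArrayListUnmanaged usage" {
    const allocator = std.testing.allocator;

    var list: std.ArrayListUnmanaged(i32) = .empty;
    defer list.deinit(allocator); // must be the same allocator throughout

    try list.append(allocator, 1);
    try list.appendSlice(allocator, &.{ 2, 3, 4 });
    _ = list.pop(); // removes and returns 4

    try std.testing.expectEqualSlices(i32, &.{ 1, 2, 3 }, list.items);
}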

Parameters

T: type

Types

TypeSlice[src]

Source Code

Source code
pub const Slice = if (alignment) |a| ([]align(a) T) else []T

Type FunctionSentinelSlice[src]

Parameters

s: T

Source Code

Source code
pub fn SentinelSlice(comptime s: T) type {
    return if (alignment) |a| ([:s]align(a) T) else [:s]T;
}

TypeWriter[src]

Source Code

Source code
pub const Writer = if (T != u8)
    @compileError("The Writer interface is only defined for ArrayList(u8) " ++
        "but the given type is ArrayList(" ++ @typeName(T) ++ ")")
else
    std.io.Writer(WriterContext, Allocator.Error, appendWrite)

TypeFixedWriter[src]

Source Code

Source code
pub const FixedWriter = std.io.Writer(*Self, Allocator.Error, appendWriteFixed)

Fields

items: Slice = &[_]T{}

Contents of the list. This field is intended to be accessed directly.

Pointers to elements in this slice are invalidated by various functions of this ArrayList in accordance with the respective documentation. In all cases, "invalidated" means that the memory has been passed to an allocator's resize or free function.

capacity: usize = 0

How many T values this list can hold without allocating additional memory.

Values

Constantempty[src]

An ArrayList containing no elements.

Source Code

Source code
pub const empty: Self = .{
    .items = &.{},
    .capacity = 0,
}

Functions

FunctioninitCapacity[src]

pub fn initCapacity(allocator: Allocator, num: usize) Allocator.Error!Self

Initialize with capacity to hold num elements. The resulting capacity will equal num exactly. Deinitialize with deinit or use toOwnedSlice.

Parameters

allocator: Allocator
num: usize

Source Code

Source code
pub fn initCapacity(allocator: Allocator, num: usize) Allocator.Error!Self {
    var self = Self{};
    try self.ensureTotalCapacityPrecise(allocator, num);
    return self;
}

FunctioninitBuffer[src]

pub fn initBuffer(buffer: Slice) Self

Initialize with externally-managed memory. The buffer determines the capacity, and the length is set to zero. When initialized this way, all functions that accept an Allocator argument cause illegal behavior.

Parameters

buffer: Slice

Source Code

Source code
pub fn initBuffer(buffer: Slice) Self {
    return .{
        .items = buffer[0..0],
        .capacity = buffer.len,
    };
}
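
A sketch of a stack-buffer-backed list; only the AssumeCapacity and fixed-writer variants may be used afterwards, since allocator-taking functions cause illegal behavior for a buffer-initialized list:

const std = @import("std");

test "stack-buffer-backed list" {
    var buf: [8]u8 = undefined;
    var list = std.ArrayListUnmanaged(u8).initBuffer(&buf);
    // No deinit: the list never owns heap memory.

    list.appendSliceAssumeCapacity("hi");
    list.appendAssumeCapacity('!');

    try std.testing.expectEqualSlices(u8, "hi!", list.items);
}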

Functiondeinit[src]

pub fn deinit(self: *Self, allocator: Allocator) void

Release all allocated memory.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn deinit(self: *Self, allocator: Allocator) void {
    allocator.free(self.allocatedSlice());
    self.* = undefined;
}

FunctiontoManaged[src]

pub fn toManaged(self: *Self, allocator: Allocator) ArrayListAligned(T, alignment)

Convert this list into an analogous memory-managed one. The returned list has ownership of the underlying memory.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn toManaged(self: *Self, allocator: Allocator) ArrayListAligned(T, alignment) {
    return .{ .items = self.items, .capacity = self.capacity, .allocator = allocator };
}

FunctionfromOwnedSlice[src]

pub fn fromOwnedSlice(slice: Slice) Self

ArrayListUnmanaged takes ownership of the passed-in slice. The slice must have been allocated with the same allocator that is subsequently passed to this list's other methods (deinit, for example). Deinitialize with deinit or use toOwnedSlice.

Parameters

slice: Slice

Source Code

Source code
pub fn fromOwnedSlice(slice: Slice) Self {
    return Self{
        .items = slice,
        .capacity = slice.len,
    };
}

FunctionfromOwnedSliceSentinel[src]

pub fn fromOwnedSliceSentinel(comptime sentinel: T, slice: [:sentinel]T) Self

ArrayListUnmanaged takes ownership of the passed-in slice. The slice must have been allocated with the same allocator that is subsequently passed to this list's other methods (deinit, for example). Deinitialize with deinit or use toOwnedSlice.

Parameters

sentinel: T
slice: [:sentinel]T

Source Code

Source code
pub fn fromOwnedSliceSentinel(comptime sentinel: T, slice: [:sentinel]T) Self {
    return Self{
        .items = slice,
        .capacity = slice.len + 1,
    };
}

FunctiontoOwnedSlice[src]

pub fn toOwnedSlice(self: *Self, allocator: Allocator) Allocator.Error!Slice

The caller owns the returned memory. Empties this ArrayList. Its capacity is cleared, making deinit() safe but unnecessary to call.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn toOwnedSlice(self: *Self, allocator: Allocator) Allocator.Error!Slice {
    const old_memory = self.allocatedSlice();
    if (allocator.remap(old_memory, self.items.len)) |new_items| {
        self.* = .empty;
        return new_items;
    }

    const new_memory = try allocator.alignedAlloc(T, alignment, self.items.len);
    @memcpy(new_memory, self.items);
    self.clearAndFree(allocator);
    return new_memory;
}
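
A sketch of the build-then-hand-off pattern, assuming std.testing.allocator:

const std = @import("std");

test "build a list, then take ownership of its memory" {
    const allocator = std.testing.allocator;

    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(allocator); // safe but unnecessary after toOwnedSlice

    try list.appendSlice(allocator, "hello");

    const owned = try list.toOwnedSlice(allocator);
    defer allocator.free(owned);

    try std.testing.expectEqualSlices(u8, "hello", owned);
}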

FunctiontoOwnedSliceSentinel[src]

pub fn toOwnedSliceSentinel(self: *Self, allocator: Allocator, comptime sentinel: T) Allocator.Error!SentinelSlice(sentinel)

The caller owns the returned memory. ArrayList becomes empty.

Parameters

self: *Self
allocator: Allocator
sentinel: T

Source Code

Source code
pub fn toOwnedSliceSentinel(self: *Self, allocator: Allocator, comptime sentinel: T) Allocator.Error!SentinelSlice(sentinel) {
    // This addition can never overflow because `self.items` can never occupy the whole address space
    try self.ensureTotalCapacityPrecise(allocator, self.items.len + 1);
    self.appendAssumeCapacity(sentinel);
    const result = try self.toOwnedSlice(allocator);
    return result[0 .. result.len - 1 :sentinel];
}
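
For example, producing a null-terminated string (sentinel 0); a sketch assuming std.testing.allocator:

const std = @import("std");

test "null-terminated owned slice" {
    const allocator = std.testing.allocator;

    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(allocator);

    try list.appendSlice(allocator, "abc");

    const z: [:0]u8 = try list.toOwnedSliceSentinel(allocator, 0);
    defer allocator.free(z);

    try std.testing.expectEqual(@as(usize, 3), z.len);
    try std.testing.expectEqual(@as(u8, 0), z[3]); // the sentinel sits past the length
}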

Functionclone[src]

pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self

Creates a copy of this ArrayList.

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self {
    var cloned = try Self.initCapacity(allocator, self.capacity);
    cloned.appendSliceAssumeCapacity(self.items);
    return cloned;
}

Functioninsert[src]

pub fn insert(self: *Self, allocator: Allocator, i: usize, item: T) Allocator.Error!void

Insert item at index i. Moves list[i .. list.len] to higher indices to make room. If i is equal to the length of the list this operation is equivalent to append. This operation is O(N). Invalidates element pointers if additional memory is needed. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
allocator: Allocator
i: usize
item: T

Source Code

Source code
pub fn insert(self: *Self, allocator: Allocator, i: usize, item: T) Allocator.Error!void {
    const dst = try self.addManyAt(allocator, i, 1);
    dst[0] = item;
}

FunctioninsertAssumeCapacity[src]

pub fn insertAssumeCapacity(self: *Self, i: usize, item: T) void

Insert item at index i. Moves list[i .. list.len] to higher indices to make room. If i is equal to the length of the list this operation is equivalent to append. This operation is O(N). Asserts that the list has capacity for one additional item. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
i: usize
item: T

Source Code

Source code
pub fn insertAssumeCapacity(self: *Self, i: usize, item: T) void {
    assert(self.items.len < self.capacity);
    self.items.len += 1;

    mem.copyBackwards(T, self.items[i + 1 .. self.items.len], self.items[i .. self.items.len - 1]);
    self.items[i] = item;
}

FunctionaddManyAt[src]

pub fn addManyAt(self: *Self, allocator: Allocator, index: usize, count: usize) Allocator.Error![]T

Add count new elements at position index, which have undefined values. Returns a slice pointing to the newly allocated elements, which becomes invalid after various ArrayList operations. Invalidates pre-existing pointers to elements at and after index. Invalidates all pre-existing element pointers if capacity must be increased to accommodate the new elements. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
allocator: Allocator
index: usize
count: usize

Source Code

Source code
pub fn addManyAt(
    self: *Self,
    allocator: Allocator,
    index: usize,
    count: usize,
) Allocator.Error![]T {
    var managed = self.toManaged(allocator);
    defer self.* = managed.moveToUnmanaged();
    return managed.addManyAt(index, count);
}

FunctionaddManyAtAssumeCapacity[src]

pub fn addManyAtAssumeCapacity(self: *Self, index: usize, count: usize) []T

Add count new elements at position index, which have undefined values. Returns a slice pointing to the newly allocated elements, which becomes invalid after various ArrayList operations. Invalidates pre-existing pointers to elements at and after index, but does not invalidate any before that. Asserts that the list has capacity for the additional items. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
index: usize
count: usize

Source Code

Source code
pub fn addManyAtAssumeCapacity(self: *Self, index: usize, count: usize) []T {
    const new_len = self.items.len + count;
    assert(self.capacity >= new_len);
    const to_move = self.items[index..];
    self.items.len = new_len;
    mem.copyBackwards(T, self.items[index + count ..], to_move);
    const result = self.items[index..][0..count];
    @memset(result, undefined);
    return result;
}

FunctioninsertSlice[src]

pub fn insertSlice(self: *Self, allocator: Allocator, index: usize, items: []const T) Allocator.Error!void

Insert slice items at position index by moving list[index .. list.len] to make room. This operation is O(N). Invalidates pre-existing pointers to elements at and after index. Invalidates all pre-existing element pointers if capacity must be increased to accommodate the new elements. Asserts that the index is in bounds or equal to the length.

Parameters

self: *Self
allocator: Allocator
index: usize
items: []const T

Source Code

Source code
pub fn insertSlice(
    self: *Self,
    allocator: Allocator,
    index: usize,
    items: []const T,
) Allocator.Error!void {
    const dst = try self.addManyAt(
        allocator,
        index,
        items.len,
    );
    @memcpy(dst, items);
}

FunctionreplaceRange[src]

pub fn replaceRange(self: *Self, allocator: Allocator, start: usize, len: usize, new_items: []const T) Allocator.Error!void

Grows or shrinks the list as necessary. Invalidates element pointers if additional capacity is allocated. Asserts that the range is in bounds.

Parameters

self: *Self
allocator: Allocator
start: usize
len: usize
new_items: []const T

Source Code

Source code
pub fn replaceRange(
    self: *Self,
    allocator: Allocator,
    start: usize,
    len: usize,
    new_items: []const T,
) Allocator.Error!void {
    const after_range = start + len;
    const range = self.items[start..after_range];
    if (range.len < new_items.len) {
        const first = new_items[0..range.len];
        const rest = new_items[range.len..];
        @memcpy(range[0..first.len], first);
        try self.insertSlice(allocator, after_range, rest);
    } else {
        self.replaceRangeAssumeCapacity(start, len, new_items);
    }
}
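
A sketch showing both directions, assuming std.testing.allocator: replacing a range with a longer slice grows the list, and with a shorter slice shrinks it.

const std = @import("std");

test "replaceRange grows and shrinks" {
    const allocator = std.testing.allocator;

    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(allocator);
    try list.appendSlice(allocator, "abcdef");

    // Replace "cd" with a longer run; the list grows.
    try list.replaceRange(allocator, 2, 2, "XYZ");
    try std.testing.expectEqualSlices(u8, "abXYZef", list.items);

    // Replace "XYZ" with a shorter run; the list shrinks.
    try list.replaceRange(allocator, 2, 3, "-");
    try std.testing.expectEqualSlices(u8, "ab-ef", list.items);
}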

FunctionreplaceRangeAssumeCapacity[src]

pub fn replaceRangeAssumeCapacity(self: *Self, start: usize, len: usize, new_items: []const T) void

Grows or shrinks the list as necessary. Never invalidates element pointers. Asserts the capacity is enough for additional items.

Parameters

self: *Self
start: usize
len: usize
new_items: []const T

Source Code

Source code
pub fn replaceRangeAssumeCapacity(self: *Self, start: usize, len: usize, new_items: []const T) void {
    const after_range = start + len;
    const range = self.items[start..after_range];

    if (range.len == new_items.len)
        @memcpy(range[0..new_items.len], new_items)
    else if (range.len < new_items.len) {
        const first = new_items[0..range.len];
        const rest = new_items[range.len..];
        @memcpy(range[0..first.len], first);
        const dst = self.addManyAtAssumeCapacity(after_range, rest.len);
        @memcpy(dst, rest);
    } else {
        const extra = range.len - new_items.len;
        @memcpy(range[0..new_items.len], new_items);
        std.mem.copyForwards(
            T,
            self.items[after_range - extra ..],
            self.items[after_range..],
        );
        @memset(self.items[self.items.len - extra ..], undefined);
        self.items.len -= extra;
    }
}

Functionappend[src]

pub fn append(self: *Self, allocator: Allocator, item: T) Allocator.Error!void

Extend the list by 1 element. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
item: T

Source Code

Source code
pub fn append(self: *Self, allocator: Allocator, item: T) Allocator.Error!void {
    const new_item_ptr = try self.addOne(allocator);
    new_item_ptr.* = item;
}

FunctionappendAssumeCapacity[src]

pub fn appendAssumeCapacity(self: *Self, item: T) void

Extend the list by 1 element. Never invalidates element pointers. Asserts that the list can hold one additional item.

Parameters

self: *Self
item: T

Source Code

Source code
pub fn appendAssumeCapacity(self: *Self, item: T) void {
    self.addOneAssumeCapacity().* = item;
}

FunctionorderedRemove[src]

pub fn orderedRemove(self: *Self, i: usize) T

Remove the element at index i from the list and return its value. Invalidates pointers to the last element. This operation is O(N). Asserts that the list is not empty. Asserts that the index is in bounds.

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn orderedRemove(self: *Self, i: usize) T {
    const old_item = self.items[i];
    self.replaceRangeAssumeCapacity(i, 1, &.{});
    return old_item;
}

FunctionswapRemove[src]

pub fn swapRemove(self: *Self, i: usize) T

Removes the element at the specified index and returns it. The empty slot is filled from the end of the list. Invalidates pointers to last element. This operation is O(1). Asserts that the list is not empty. Asserts that the index is in bounds.

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn swapRemove(self: *Self, i: usize) T {
    if (self.items.len - 1 == i) return self.pop().?;

    const old_item = self.items[i];
    self.items[i] = self.pop().?;
    return old_item;
}
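
A sketch contrasting the two removal strategies, assuming std.testing.allocator: orderedRemove preserves order at O(N) cost, while swapRemove is O(1) but moves the last element into the hole.

const std = @import("std");

test "orderedRemove vs swapRemove" {
    const allocator = std.testing.allocator;

    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(allocator);
    try list.appendSlice(allocator, "abcd");

    _ = list.orderedRemove(1); // removes 'b'; order preserved
    try std.testing.expectEqualSlices(u8, "acd", list.items);

    _ = list.swapRemove(0); // removes 'a'; last element fills the hole
    try std.testing.expectEqualSlices(u8, "dc", list.items);
}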

FunctionappendSlice[src]

pub fn appendSlice(self: *Self, allocator: Allocator, items: []const T) Allocator.Error!void

Append the slice of items to the list. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
items: []const T

Source Code

Source code
pub fn appendSlice(self: *Self, allocator: Allocator, items: []const T) Allocator.Error!void {
    try self.ensureUnusedCapacity(allocator, items.len);
    self.appendSliceAssumeCapacity(items);
}

FunctionappendSliceAssumeCapacity[src]

pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void

Append the slice of items to the list. Asserts that the list can hold the additional items.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void {
    const old_len = self.items.len;
    const new_len = old_len + items.len;
    assert(new_len <= self.capacity);
    self.items.len = new_len;
    @memcpy(self.items[old_len..][0..items.len], items);
}

FunctionappendUnalignedSlice[src]

pub fn appendUnalignedSlice(self: *Self, allocator: Allocator, items: []align(1) const T) Allocator.Error!void

Append the slice of items to the list. Allocates more memory as necessary. Only call this function if a call to appendSlice instead would be a compile error. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
items: []align(1) const T

Source Code

Source code
pub fn appendUnalignedSlice(self: *Self, allocator: Allocator, items: []align(1) const T) Allocator.Error!void {
    try self.ensureUnusedCapacity(allocator, items.len);
    self.appendUnalignedSliceAssumeCapacity(items);
}

FunctionappendUnalignedSliceAssumeCapacity[src]

pub fn appendUnalignedSliceAssumeCapacity(self: *Self, items: []align(1) const T) void

Append an unaligned slice of items to the list. Only call this function if a call to appendSliceAssumeCapacity instead would be a compile error. Asserts that the list can hold the additional items.

Parameters

self: *Self
items: []align(1) const T

Source Code

Source code
pub fn appendUnalignedSliceAssumeCapacity(self: *Self, items: []align(1) const T) void {
    const old_len = self.items.len;
    const new_len = old_len + items.len;
    assert(new_len <= self.capacity);
    self.items.len = new_len;
    @memcpy(self.items[old_len..][0..items.len], items);
}

Functionwriter[src]

pub fn writer(self: *Self, allocator: Allocator) Writer

Initializes a Writer which will append to the list.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn writer(self: *Self, allocator: Allocator) Writer {
    return .{ .context = .{ .self = self, .allocator = allocator } };
}
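
A sketch of formatted appending through the Writer interface (only available when T is u8), assuming std.testing.allocator:

const std = @import("std");

test "formatted append via writer" {
    const allocator = std.testing.allocator;

    var list: std.ArrayListUnmanaged(u8) = .empty;
    defer list.deinit(allocator);

    const w = list.writer(allocator);
    try w.print("{d} + {d} = {d}", .{ 2, 3, 5 });

    try std.testing.expectEqualSlices(u8, "2 + 3 = 5", list.items);
}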

FunctionfixedWriter[src]

pub fn fixedWriter(self: *Self) FixedWriter

Initializes a Writer which will append to the list but will return error.OutOfMemory rather than increasing capacity.

Parameters

self: *Self

Source Code

Source code
pub fn fixedWriter(self: *Self) FixedWriter {
    return .{ .context = self };
}
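
A sketch combining fixedWriter with a buffer-initialized list, so no allocation can ever occur; writes beyond the buffer report error.OutOfMemory:

const std = @import("std");

test "fixedWriter never grows the list" {
    var buf: [4]u8 = undefined;
    var list = std.ArrayListUnmanaged(u8).initBuffer(&buf);

    const w = list.fixedWriter();
    try w.writeAll("abcd"); // exactly fills the buffer
    try std.testing.expectError(error.OutOfMemory, w.writeAll("e"));
}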

FunctionappendNTimes[src]

pub inline fn appendNTimes(self: *Self, allocator: Allocator, value: T, n: usize) Allocator.Error!void

Append a value to the list n times. Allocates more memory as necessary. Invalidates element pointers if additional memory is needed. The function is inline so that a comptime-known value parameter will have a more optimal memset codegen in case it has a repeated byte pattern.

Parameters

self: *Self
allocator: Allocator
value: T
n: usize

Source Code

Source code
pub inline fn appendNTimes(self: *Self, allocator: Allocator, value: T, n: usize) Allocator.Error!void {
    const old_len = self.items.len;
    try self.resize(allocator, try addOrOom(old_len, n));
    @memset(self.items[old_len..self.items.len], value);
}

FunctionappendNTimesAssumeCapacity[src]

pub inline fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void

Append a value to the list n times. Never invalidates element pointers. The function is inline so that a comptime-known value parameter will have better memset codegen in case it has a repeated byte pattern. Asserts that the list can hold the additional items.

Parameters

self: *Self
value: T
n: usize

Source Code

Source code
pub inline fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void {
    const new_len = self.items.len + n;
    assert(new_len <= self.capacity);
    @memset(self.items.ptr[self.items.len..new_len], value);
    self.items.len = new_len;
}

Functionresize[src]

pub fn resize(self: *Self, allocator: Allocator, new_len: usize) Allocator.Error!void

Adjust the list length to new_len. Additional elements contain the value undefined. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
new_len: usize

Source Code

Source code
pub fn resize(self: *Self, allocator: Allocator, new_len: usize) Allocator.Error!void {
    try self.ensureTotalCapacity(allocator, new_len);
    self.items.len = new_len;
}

FunctionshrinkAndFree[src]

pub fn shrinkAndFree(self: *Self, allocator: Allocator, new_len: usize) void

Reduce allocated capacity to new_len. May invalidate element pointers. Asserts that the new length is less than or equal to the previous length.

Parameters

self: *Self
allocator: Allocator
new_len: usize

Source Code

Source code
pub fn shrinkAndFree(self: *Self, allocator: Allocator, new_len: usize) void {
    assert(new_len <= self.items.len);

    if (@sizeOf(T) == 0) {
        self.items.len = new_len;
        return;
    }

    const old_memory = self.allocatedSlice();
    if (allocator.remap(old_memory, new_len)) |new_items| {
        self.capacity = new_items.len;
        self.items = new_items;
        return;
    }

    const new_memory = allocator.alignedAlloc(T, alignment, new_len) catch |e| switch (e) {
        error.OutOfMemory => {
            // No problem, capacity is still correct then.
            self.items.len = new_len;
            return;
        },
    };

    @memcpy(new_memory, self.items[0..new_len]);
    allocator.free(old_memory);
    self.items = new_memory;
    self.capacity = new_memory.len;
}

FunctionshrinkRetainingCapacity[src]

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void

Reduce length to new_len. Invalidates pointers to elements items[new_len..]. Keeps capacity the same. Asserts that the new length is less than or equal to the previous length.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
    assert(new_len <= self.items.len);
    self.items.len = new_len;
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Invalidates all element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.items.len = 0;
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self, allocator: Allocator) void

Invalidates all element pointers.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn clearAndFree(self: *Self, allocator: Allocator) void {
    allocator.free(self.allocatedSlice());
    self.items.len = 0;
    self.capacity = 0;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Allocator.Error!void

Modify the array so that it can hold at least new_capacity items. Implements super-linear growth to achieve amortized O(1) append operations. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
gpa: Allocator
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Allocator.Error!void {
    if (self.capacity >= new_capacity) return;
    return self.ensureTotalCapacityPrecise(gpa, growCapacity(self.capacity, new_capacity));
}

FunctionensureTotalCapacityPrecise[src]

pub fn ensureTotalCapacityPrecise(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void

If the current capacity is less than new_capacity, this function will modify the array so that it can hold exactly new_capacity items. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacityPrecise(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void {
    if (@sizeOf(T) == 0) {
        self.capacity = math.maxInt(usize);
        return;
    }

    if (self.capacity >= new_capacity) return;

    // Here we avoid copying allocated but unused bytes by
    // attempting a resize in place, and falling back to allocating
    // a new buffer and doing our own copy. With a realloc() call,
    // the allocator implementation would pointlessly copy our
    // extra capacity.
    const old_memory = self.allocatedSlice();
    if (allocator.remap(old_memory, new_capacity)) |new_memory| {
        self.items.ptr = new_memory.ptr;
        self.capacity = new_memory.len;
    } else {
        const new_memory = try allocator.alignedAlloc(T, alignment, new_capacity);
        @memcpy(new_memory[0..self.items.len], self.items);
        allocator.free(old_memory);
        self.items.ptr = new_memory.ptr;
        self.capacity = new_memory.len;
    }
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, allocator: Allocator, additional_count: usize) Allocator.Error!void

Modify the array so that it can hold at least additional_count more items. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
allocator: Allocator
additional_count: usize

Source Code

Source code
pub fn ensureUnusedCapacity(
    self: *Self,
    allocator: Allocator,
    additional_count: usize,
) Allocator.Error!void {
    return self.ensureTotalCapacity(allocator, try addOrOom(self.items.len, additional_count));
}
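
A common pattern is to reserve once and then append with the infallible AssumeCapacity variants, which also guarantees pointer stability for the duration; a sketch assuming std.testing.allocator:

const std = @import("std");

test "reserve once, then append infallibly" {
    const allocator = std.testing.allocator;

    var list: std.ArrayListUnmanaged(u32) = .empty;
    defer list.deinit(allocator);

    try list.ensureUnusedCapacity(allocator, 100);
    for (0..100) |i| {
        // Cannot fail and never invalidates element pointers.
        list.appendAssumeCapacity(@intCast(i));
    }

    try std.testing.expectEqual(@as(usize, 100), list.items.len);
}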

FunctionexpandToCapacity[src]

pub fn expandToCapacity(self: *Self) void

Increases the array's length to match the full capacity that is already allocated. The new elements have undefined values. Never invalidates element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn expandToCapacity(self: *Self) void {
    self.items.len = self.capacity;
}

FunctionaddOne[src]

pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!*T

Increase length by 1, returning pointer to the new item. The returned element pointer becomes invalid when the list is resized.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!*T {
    // This can never overflow because `self.items` can never occupy the whole address space
    const newlen = self.items.len + 1;
    try self.ensureTotalCapacity(allocator, newlen);
    return self.addOneAssumeCapacity();
}

FunctionaddOneAssumeCapacity[src]

pub fn addOneAssumeCapacity(self: *Self) *T

Increase length by 1, returning pointer to the new item. Never invalidates element pointers. The returned element pointer becomes invalid when the list is resized. Asserts that the list can hold one additional item.

Parameters

self: *Self

Source Code

Source code
pub fn addOneAssumeCapacity(self: *Self) *T {
    assert(self.items.len < self.capacity);

    self.items.len += 1;
    return &self.items[self.items.len - 1];
}

FunctionaddManyAsArray[src]

pub fn addManyAsArray(self: *Self, allocator: Allocator, comptime n: usize) Allocator.Error!*[n]T

Resize the array, adding n new elements, which have undefined values. The return value is an array pointing to the newly allocated elements. The returned pointer becomes invalid when the list is resized.

Parameters

self: *Self
allocator: Allocator
n: usize

Source Code

Source code
pub fn addManyAsArray(self: *Self, allocator: Allocator, comptime n: usize) Allocator.Error!*[n]T {
    const prev_len = self.items.len;
    try self.resize(allocator, try addOrOom(self.items.len, n));
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsArrayAssumeCapacity[src]

pub fn addManyAsArrayAssumeCapacity(self: *Self, comptime n: usize) *[n]T

Resize the array, adding n new elements, which have undefined values. The return value is an array pointing to the newly allocated elements. Never invalidates element pointers. The returned pointer becomes invalid when the list is resized. Asserts that the list can hold the additional items.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsArrayAssumeCapacity(self: *Self, comptime n: usize) *[n]T {
    assert(self.items.len + n <= self.capacity);
    const prev_len = self.items.len;
    self.items.len += n;
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsSlice[src]

pub fn addManyAsSlice(self: *Self, allocator: Allocator, n: usize) Allocator.Error![]T

Resize the array, adding n new elements, which have undefined values. The return value is a slice of the newly added elements. The returned slice becomes invalid when the list is resized. Grows the allocation if self.capacity is not large enough.

Parameters

self: *Self
allocator: Allocator
n: usize

Source Code

Source code
pub fn addManyAsSlice(self: *Self, allocator: Allocator, n: usize) Allocator.Error![]T {
    const prev_len = self.items.len;
    try self.resize(allocator, try addOrOom(self.items.len, n));
    return self.items[prev_len..][0..n];
}

FunctionaddManyAsSliceAssumeCapacity[src]

pub fn addManyAsSliceAssumeCapacity(self: *Self, n: usize) []T

Resize the array, adding n new elements, which have undefined values. The return value is a slice of the newly added elements. This call itself never invalidates existing element pointers, but the returned slice becomes invalid once the list is resized. Asserts that the list can hold the additional items.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsSliceAssumeCapacity(self: *Self, n: usize) []T {
    assert(self.items.len + n <= self.capacity);
    const prev_len = self.items.len;
    self.items.len += n;
    return self.items[prev_len..][0..n];
}

Functionpop[src]

pub fn pop(self: *Self) ?T

Remove and return the last element from the list. If the list is empty, returns null. Invalidates pointers to last element.

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?T {
    if (self.items.len == 0) return null;
    const val = self.items[self.items.len - 1];
    self.items.len -= 1;
    return val;
}
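
Example

A small sketch (not from the original source) of draining a list with the optional-returning pop, assuming the appendSlice method provided elsewhere in this API:

const std = @import("std");

test "pop drains the list" {
    const gpa = std.testing.allocator;
    var list: std.ArrayListUnmanaged(u32) = .{};
    defer list.deinit(gpa);
    try list.appendSlice(gpa, &.{ 1, 2, 3 });

    var sum: u32 = 0;
    // pop returns null once the list is empty, ending the loop.
    while (list.pop()) |val| sum += val;
    try std.testing.expectEqual(@as(u32, 6), sum);
    try std.testing.expectEqual(@as(usize, 0), list.items.len);
}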

FunctionallocatedSlice[src]

pub fn allocatedSlice(self: Self) Slice

Returns a slice covering all the items plus the extra capacity; the memory contents of the extra-capacity region are undefined.

Parameters

self: Self

Source Code

Source code
pub fn allocatedSlice(self: Self) Slice {
    return self.items.ptr[0..self.capacity];
}

FunctionunusedCapacitySlice[src]

pub fn unusedCapacitySlice(self: Self) []T

Returns a slice of only the extra capacity after items. This can be useful for writing directly into an ArrayList. Note that such an operation must be followed up with a direct modification of self.items.len.

Parameters

self: Self

Source Code

Source code
pub fn unusedCapacitySlice(self: Self) []T {
    return self.allocatedSlice()[self.items.len..];
}
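
Example

A minimal sketch (not part of the original documentation) of writing directly into spare capacity and then committing the length, as the note above requires:

const std = @import("std");

test "write into unused capacity" {
    const gpa = std.testing.allocator;
    var list: std.ArrayListUnmanaged(u8) = .{};
    defer list.deinit(gpa);

    try list.ensureUnusedCapacity(gpa, 4);
    @memcpy(list.unusedCapacitySlice()[0..4], "abcd");
    list.items.len += 4; // commit the bytes written above
    try std.testing.expectEqualStrings("abcd", list.items);
}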

FunctiongetLast[src]

pub fn getLast(self: Self) T

Return the last element from the list. Asserts that the list is not empty.

Parameters

self: Self

Source Code

Source code
pub fn getLast(self: Self) T {
    const val = self.items[self.items.len - 1];
    return val;
}

FunctiongetLastOrNull[src]

pub fn getLastOrNull(self: Self) ?T

Return the last element from the list, or return null if list is empty.

Parameters

self: Self

Source Code

Source code
pub fn getLastOrNull(self: Self) ?T {
    if (self.items.len == 0) return null;
    return self.getLast();
}

Source Code

Source code
pub fn ArrayListUnmanaged(comptime T: type) type {
    return ArrayListAlignedUnmanaged(T, null);
}

Type FunctionAutoArrayHashMap[src]

An ArrayHashMap with default hash and equal functions.

See AutoContext for a description of the hash and equal implementations.

Parameters

K: type
V: type

Source Code

Source code
pub fn AutoArrayHashMap(comptime K: type, comptime V: type) type {
    return ArrayHashMap(K, V, AutoContext(K), !autoEqlIsCheap(K));
}
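
Example

A brief sketch (not from the original source) showing that iteration order follows insertion order, assuming the managed map's put, keys, and deinit methods:

const std = @import("std");

test "insertion order is preserved" {
    const gpa = std.testing.allocator;
    var map = std.AutoArrayHashMap(u32, u32).init(gpa);
    defer map.deinit();

    try map.put(3, 30);
    try map.put(1, 10);
    try map.put(2, 20);
    // keys() reflects insertion order, not hash order.
    try std.testing.expectEqualSlices(u32, &.{ 3, 1, 2 }, map.keys());
}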

Type FunctionAutoArrayHashMapUnmanaged[src]

An ArrayHashMapUnmanaged with default hash and equal functions.

See AutoContext for a description of the hash and equal implementations.

Parameters

K: type
V: type

Types

TypeDataList[src]

The MultiArrayList type backing this map

Source Code

Source code
pub const DataList = std.MultiArrayList(Data)

TypeHash[src]

The stored hash type, either u32 or void.

Source Code

Source code
pub const Hash = if (store_hash) u32 else void

TypeManaged[src]

The ArrayHashMap type using the same settings as this managed map.

Source Code

Source code
pub const Managed = ArrayHashMap(K, V, Context, store_hash)

Fields

entries: DataList = .{}

It is permitted to access this field directly. After any modification to the keys, consider calling reIndex.

index_header: ?*IndexHeader = null

When entries length is less than linear_scan_max, this remains null. Once entries length grows big enough, this field is allocated. There is an IndexHeader followed by an array of Index(I) structs, where I is defined by how many total indexes there are.

pointer_stability: std.debug.SafetyLock = .{}

Used to detect memory safety violations.

Values

Constantempty[src]

A map containing no keys or values.

Source Code

Source code
pub const empty: Self = .{
    .entries = .{},
    .index_header = null,
}

Functions

Functionpromote[src]

pub fn promote(self: Self, gpa: Allocator) Managed

Convert from an unmanaged map to a managed map. After calling this, the original unmanaged map should no longer be used.

Parameters

self: Self

Source Code

Source code
pub fn promote(self: Self, gpa: Allocator) Managed {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call promoteContext instead.");
    return self.promoteContext(gpa, undefined);
}
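
Example

A minimal sketch (not part of the original documentation); after the call, only the managed copy may be used:

const std = @import("std");

test "promote an unmanaged map" {
    const gpa = std.testing.allocator;
    var unmanaged: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    try unmanaged.put(gpa, 1, 10);

    var managed = unmanaged.promote(gpa);
    defer managed.deinit();
    // `unmanaged` must not be touched from here on.
    try std.testing.expectEqual(@as(u32, 10), managed.get(1).?);
}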

FunctionpromoteContext[src]

pub fn promoteContext(self: Self, gpa: Allocator, ctx: Context) Managed

Parameters

self: Self
ctx: Context

Source Code

Source code
pub fn promoteContext(self: Self, gpa: Allocator, ctx: Context) Managed {
    return .{
        .unmanaged = self,
        .allocator = gpa,
        .ctx = ctx,
    };
}

Functioninit[src]

pub fn init(gpa: Allocator, key_list: []const K, value_list: []const V) Oom!Self

Parameters

key_list: []const K
value_list: []const V

Source Code

Source code
pub fn init(gpa: Allocator, key_list: []const K, value_list: []const V) Oom!Self {
    var self: Self = .{};
    errdefer self.deinit(gpa);
    try self.reinit(gpa, key_list, value_list);
    return self;
}
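
Example

A short sketch (not from the original source) of building a map from parallel key and value lists:

const std = @import("std");

test "init from parallel lists" {
    const gpa = std.testing.allocator;
    var map = try std.AutoArrayHashMapUnmanaged(u32, u32).init(
        gpa,
        &.{ 1, 2, 3 },
        &.{ 10, 20, 30 },
    );
    defer map.deinit(gpa);
    try std.testing.expectEqual(@as(u32, 20), map.get(2).?);
}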

Functionreinit[src]

pub fn reinit(self: *Self, gpa: Allocator, key_list: []const K, value_list: []const V) Oom!void

An empty value_list may be passed, in which case the values array becomes undefined.

Parameters

self: *Self
key_list: []const K
value_list: []const V

Source Code

Source code
pub fn reinit(self: *Self, gpa: Allocator, key_list: []const K, value_list: []const V) Oom!void {
    try self.entries.resize(gpa, key_list.len);
    @memcpy(self.keys(), key_list);
    if (value_list.len == 0) {
        @memset(self.values(), undefined);
    } else {
        assert(key_list.len == value_list.len);
        @memcpy(self.values(), value_list);
    }
    try self.reIndex(gpa);
}

Functiondeinit[src]

pub fn deinit(self: *Self, gpa: Allocator) void

Frees the backing allocation and leaves the map in an undefined state. Note that this does not free keys or values. You must take care of that before calling this function, if it is needed.

Parameters

self: *Self

Source Code

Source code
pub fn deinit(self: *Self, gpa: Allocator) void {
    self.pointer_stability.assertUnlocked();
    self.entries.deinit(gpa);
    if (self.index_header) |header| {
        header.free(gpa);
    }
    self.* = undefined;
}

FunctionlockPointers[src]

pub fn lockPointers(self: *Self) void

Puts the hash map into a state where any method call that would cause an existing key or value pointer to become invalidated will instead trigger an assertion.

An additional call to lockPointers in such state also triggers an assertion.

unlockPointers returns the hash map to the previous state.

Parameters

self: *Self

Source Code

Source code
pub fn lockPointers(self: *Self) void {
    self.pointer_stability.lock();
}

FunctionunlockPointers[src]

pub fn unlockPointers(self: *Self) void

Undoes a call to lockPointers.

Parameters

self: *Self

Source Code

Source code
pub fn unlockPointers(self: *Self) void {
    self.pointer_stability.unlock();
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Clears the map but retains the backing allocation for future use.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.entries.len = 0;
    if (self.index_header) |header| {
        switch (header.capacityIndexType()) {
            .u8 => @memset(header.indexes(u8), Index(u8).empty),
            .u16 => @memset(header.indexes(u16), Index(u16).empty),
            .u32 => @memset(header.indexes(u32), Index(u32).empty),
        }
    }
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self, gpa: Allocator) void

Clears the map and releases the backing allocation

Parameters

self: *Self

Source Code

Source code
pub fn clearAndFree(self: *Self, gpa: Allocator) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.entries.shrinkAndFree(gpa, 0);
    if (self.index_header) |header| {
        header.free(gpa);
        self.index_header = null;
    }
}

Functioncount[src]

pub fn count(self: Self) usize

Returns the number of KV pairs stored in this map.

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) usize {
    return self.entries.len;
}

Functionkeys[src]

pub fn keys(self: Self) []K

Returns the backing array of keys in this map. Modifying the map may invalidate this array. Modifying this array in a way that changes key hashes or key equality puts the map into an unusable state until reIndex is called.

Parameters

self: Self

Source Code

Source code
pub fn keys(self: Self) []K {
    return self.entries.items(.key);
}

Functionvalues[src]

pub fn values(self: Self) []V

Returns the backing array of values in this map. Modifying the map may invalidate this array. It is permitted to modify the values in this array.

Parameters

self: Self

Source Code

Source code
pub fn values(self: Self) []V {
    return self.entries.items(.value);
}

Functioniterator[src]

pub fn iterator(self: Self) Iterator

Returns an iterator over the pairs in this map. Modifying the map may invalidate this iterator.

Parameters

self: Self

Source Code

Source code
pub fn iterator(self: Self) Iterator {
    const slice = self.entries.slice();
    return .{
        .keys = slice.items(.key).ptr,
        .values = slice.items(.value).ptr,
        .len = @as(u32, @intCast(slice.len)),
    };
}
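
Example

A minimal sketch (not part of the original documentation) of walking the entries through Entry pointers:

const std = @import("std");

test "iterate entries in insertion order" {
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 1, 10);
    try map.put(gpa, 2, 20);

    var sum: u32 = 0;
    var it = map.iterator();
    while (it.next()) |entry| {
        sum += entry.key_ptr.* + entry.value_ptr.*;
    }
    try std.testing.expectEqual(@as(u32, 33), sum);
}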

FunctiongetOrPut[src]

pub fn getOrPut(self: *Self, gpa: Allocator, key: K) Oom!GetOrPutResult

If key exists this function cannot fail. If there is an existing item with key, then the result Entry pointer points to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointer points to it. Caller should then initialize the value (but not the key).

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPut(self: *Self, gpa: Allocator, key: K) Oom!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContext instead.");
    return self.getOrPutContext(gpa, key, undefined);
}
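
Example

A typical counting sketch (not from the original source); the value is initialized only when found_existing is false:

const std = @import("std");

test "count occurrences with getOrPut" {
    const gpa = std.testing.allocator;
    var counts: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer counts.deinit(gpa);

    for ([_]u32{ 1, 2, 1, 3, 1 }) |n| {
        const gop = try counts.getOrPut(gpa, n);
        if (!gop.found_existing) gop.value_ptr.* = 0;
        gop.value_ptr.* += 1;
    }
    try std.testing.expectEqual(@as(u32, 3), counts.get(1).?);
}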

FunctiongetOrPutContext[src]

pub fn getOrPutContext(self: *Self, gpa: Allocator, key: K, ctx: Context) Oom!GetOrPutResult

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutContext(self: *Self, gpa: Allocator, key: K, ctx: Context) Oom!GetOrPutResult {
    const gop = try self.getOrPutContextAdapted(gpa, key, ctx, ctx);
    if (!gop.found_existing) {
        gop.key_ptr.* = key;
    }
    return gop;
}

FunctiongetOrPutAdapted[src]

pub fn getOrPutAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype) Oom!GetOrPutResult

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype) Oom!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContextAdapted instead.");
    return self.getOrPutContextAdapted(gpa, key, key_ctx, undefined);
}

FunctiongetOrPutContextAdapted[src]

pub fn getOrPutContextAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Oom!GetOrPutResult

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn getOrPutContextAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Oom!GetOrPutResult {
    self.ensureTotalCapacityContext(gpa, self.entries.len + 1, ctx) catch |err| {
        // "If key exists this function cannot fail."
        const index = self.getIndexAdapted(key, key_ctx) orelse return err;
        const slice = self.entries.slice();
        return GetOrPutResult{
            .key_ptr = &slice.items(.key)[index],
            // workaround for #6974
            .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[index],
            .found_existing = true,
            .index = index,
        };
    };
    return self.getOrPutAssumeCapacityAdapted(key, key_ctx);
}

FunctiongetOrPutAssumeCapacity[src]

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult

If there is an existing item with key, then the result Entry pointer points to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointer points to it. Caller should then initialize the value (but not the key). If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutAssumeCapacityContext instead.");
    return self.getOrPutAssumeCapacityContext(key, undefined);
}

FunctiongetOrPutAssumeCapacityContext[src]

pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult {
    const gop = self.getOrPutAssumeCapacityAdapted(key, ctx);
    if (!gop.found_existing) {
        gop.key_ptr.* = key;
    }
    return gop;
}

FunctiongetOrPutAssumeCapacityAdapted[src]

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult

If there is an existing item with key, then the result Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined key and value, and the Entry pointers point to it. Caller must then initialize both the key and the value. If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {
    const header = self.index_header orelse {
        // Linear scan.
        const h = if (store_hash) checkedHash(ctx, key) else {};
        const slice = self.entries.slice();
        const hashes_array = slice.items(.hash);
        const keys_array = slice.items(.key);
        for (keys_array, 0..) |*item_key, i| {
            if (hashes_array[i] == h and checkedEql(ctx, key, item_key.*, i)) {
                return GetOrPutResult{
                    .key_ptr = item_key,
                    // workaround for #6974
                    .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[i],
                    .found_existing = true,
                    .index = i,
                };
            }
        }

        const index = self.entries.addOneAssumeCapacity();
        // The slice length changed, so we directly index the pointer.
        if (store_hash) hashes_array.ptr[index] = h;

        return GetOrPutResult{
            .key_ptr = &keys_array.ptr[index],
            // workaround for #6974
            .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value).ptr[index],
            .found_existing = false,
            .index = index,
        };
    };

    switch (header.capacityIndexType()) {
        .u8 => return self.getOrPutInternal(key, ctx, header, u8),
        .u16 => return self.getOrPutInternal(key, ctx, header, u16),
        .u32 => return self.getOrPutInternal(key, ctx, header, u32),
    }
}

FunctiongetOrPutValue[src]

pub fn getOrPutValue(self: *Self, gpa: Allocator, key: K, value: V) Oom!GetOrPutResult

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn getOrPutValue(self: *Self, gpa: Allocator, key: K, value: V) Oom!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutValueContext instead.");
    return self.getOrPutValueContext(gpa, key, value, undefined);
}

FunctiongetOrPutValueContext[src]

pub fn getOrPutValueContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!GetOrPutResult

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn getOrPutValueContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!GetOrPutResult {
    const res = try self.getOrPutContextAdapted(gpa, key, ctx, ctx);
    if (!res.found_existing) {
        res.key_ptr.* = key;
        res.value_ptr.* = value;
    }
    return res;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Oom!void

Increases capacity, guaranteeing that insertions up until new_capacity will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Oom!void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
    return self.ensureTotalCapacityContext(gpa, new_capacity, undefined);
}

FunctionensureTotalCapacityContext[src]

pub fn ensureTotalCapacityContext(self: *Self, gpa: Allocator, new_capacity: usize, ctx: Context) Oom!void

Parameters

self: *Self
new_capacity: usize
ctx: Context

Source Code

Source code
pub fn ensureTotalCapacityContext(self: *Self, gpa: Allocator, new_capacity: usize, ctx: Context) Oom!void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    if (new_capacity <= linear_scan_max) {
        try self.entries.ensureTotalCapacity(gpa, new_capacity);
        return;
    }

    if (self.index_header) |header| {
        if (new_capacity <= header.capacity()) {
            try self.entries.ensureTotalCapacity(gpa, new_capacity);
            return;
        }
    }

    try self.entries.ensureTotalCapacity(gpa, new_capacity);
    const new_bit_index = try IndexHeader.findBitIndex(new_capacity);
    const new_header = try IndexHeader.alloc(gpa, new_bit_index);

    if (self.index_header) |old_header| old_header.free(gpa);
    self.insertAllEntriesIntoNewHeader(if (store_hash) {} else ctx, new_header);
    self.index_header = new_header;
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity( self: *Self, gpa: Allocator, additional_capacity: usize, ) Oom!void

Increases capacity, guaranteeing that additional_capacity more insertions will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
additional_capacity: usize

Source Code

Source code
pub fn ensureUnusedCapacity(
    self: *Self,
    gpa: Allocator,
    additional_capacity: usize,
) Oom!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
    return self.ensureUnusedCapacityContext(gpa, additional_capacity, undefined);
}

FunctionensureUnusedCapacityContext[src]

pub fn ensureUnusedCapacityContext( self: *Self, gpa: Allocator, additional_capacity: usize, ctx: Context, ) Oom!void

Parameters

self: *Self
additional_capacity: usize
ctx: Context

Source Code

Source code
pub fn ensureUnusedCapacityContext(
    self: *Self,
    gpa: Allocator,
    additional_capacity: usize,
    ctx: Context,
) Oom!void {
    return self.ensureTotalCapacityContext(gpa, self.count() + additional_capacity, ctx);
}

Functioncapacity[src]

pub fn capacity(self: Self) usize

Returns the total number of elements the map can hold before it is no longer guaranteed that no allocations will be performed.

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) usize {
    const entry_cap = self.entries.capacity;
    const header = self.index_header orelse return @min(linear_scan_max, entry_cap);
    const indexes_cap = header.capacity();
    return @min(entry_cap, indexes_cap);
}

Functionput[src]

pub fn put(self: *Self, gpa: Allocator, key: K, value: V) Oom!void

Clobbers any existing data. To detect if a put would clobber existing data, see getOrPut.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn put(self: *Self, gpa: Allocator, key: K, value: V) Oom!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putContext instead.");
    return self.putContext(gpa, key, value, undefined);
}

FunctionputContext[src]

pub fn putContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void {
    const result = try self.getOrPutContext(gpa, key, ctx);
    result.value_ptr.* = value;
}

FunctionputNoClobber[src]

pub fn putNoClobber(self: *Self, gpa: Allocator, key: K, value: V) Oom!void

Inserts a key-value pair into the hash map, asserting that no previous entry with the same key is already present

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putNoClobber(self: *Self, gpa: Allocator, key: K, value: V) Oom!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putNoClobberContext instead.");
    return self.putNoClobberContext(gpa, key, value, undefined);
}

FunctionputNoClobberContext[src]

pub fn putNoClobberContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putNoClobberContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void {
    const result = try self.getOrPutContext(gpa, key, ctx);
    assert(!result.found_existing);
    result.value_ptr.* = value;
}

FunctionputAssumeCapacity[src]

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Clobbers any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityContext instead.");
    return self.putAssumeCapacityContext(key, value, undefined);
}
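
Example

A minimal sketch (not part of the original documentation) of reserving capacity up front so the inserts themselves cannot fail:

const std = @import("std");

test "reserve, then insert without allocating" {
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);

    try map.ensureTotalCapacity(gpa, 10);
    var i: u32 = 0;
    while (i < 10) : (i += 1) {
        // Cannot fail: capacity for 10 entries was reserved above.
        map.putAssumeCapacity(i, i * i);
    }
    try std.testing.expectEqual(@as(usize, 10), map.count());
}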

FunctionputAssumeCapacityContext[src]

pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void {
    const result = self.getOrPutAssumeCapacityContext(key, ctx);
    result.value_ptr.* = value;
}

FunctionputAssumeCapacityNoClobber[src]

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Asserts that it does not clobber any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityNoClobberContext instead.");
    return self.putAssumeCapacityNoClobberContext(key, value, undefined);
}

FunctionputAssumeCapacityNoClobberContext[src]

pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void {
    const result = self.getOrPutAssumeCapacityContext(key, ctx);
    assert(!result.found_existing);
    result.value_ptr.* = value;
}

FunctionfetchPut[src]

pub fn fetchPut(self: *Self, gpa: Allocator, key: K, value: V) Oom!?KV

Inserts a new Entry into the hash map, returning the previous one, if any.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPut(self: *Self, gpa: Allocator, key: K, value: V) Oom!?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutContext instead.");
    return self.fetchPutContext(gpa, key, value, undefined);
}

FunctionfetchPutContext[src]

pub fn fetchPutContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!?KV

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!?KV {
    const gop = try self.getOrPutContext(gpa, key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctionfetchPutAssumeCapacity[src]

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV

Inserts a new Entry into the hash map, returning the previous one, if any. If insertion happens, asserts there is enough capacity without allocating.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutAssumeCapacityContext instead.");
    return self.fetchPutAssumeCapacityContext(key, value, undefined);
}

FunctionfetchPutAssumeCapacityContext[src]

pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV {
    const gop = self.getOrPutAssumeCapacityContext(key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctiongetEntry[src]

pub fn getEntry(self: Self, key: K) ?Entry

Finds pointers to the key and value storage associated with a key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getEntry(self: Self, key: K) ?Entry {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getEntryContext instead.");
    return self.getEntryContext(key, undefined);
}

FunctiongetEntryContext[src]

pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry {
    return self.getEntryAdapted(key, ctx);
}

FunctiongetEntryAdapted[src]

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry

Parameters

self: Self

Source Code

Source code
pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    const slice = self.entries.slice();
    return Entry{
        .key_ptr = &slice.items(.key)[index],
        // workaround for #6974
        .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[index],
    };
}

FunctiongetIndex[src]

pub fn getIndex(self: Self, key: K) ?usize

Finds the index in the entries array where a key is stored

Parameters

self: Self
key: K

Source Code

Source code
pub fn getIndex(self: Self, key: K) ?usize {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getIndexContext instead.");
    return self.getIndexContext(key, undefined);
}

FunctiongetIndexContext[src]

pub fn getIndexContext(self: Self, key: K, ctx: Context) ?usize

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getIndexContext(self: Self, key: K, ctx: Context) ?usize {
    return self.getIndexAdapted(key, ctx);
}

FunctiongetIndexAdapted[src]

pub fn getIndexAdapted(self: Self, key: anytype, ctx: anytype) ?usize

Parameters

self: Self

Source Code

Source code
pub fn getIndexAdapted(self: Self, key: anytype, ctx: anytype) ?usize {
    const header = self.index_header orelse {
        // Linear scan.
        const h = if (store_hash) checkedHash(ctx, key) else {};
        const slice = self.entries.slice();
        const hashes_array = slice.items(.hash);
        const keys_array = slice.items(.key);
        for (keys_array, 0..) |*item_key, i| {
            if (hashes_array[i] == h and checkedEql(ctx, key, item_key.*, i)) {
                return i;
            }
        }
        return null;
    };
    switch (header.capacityIndexType()) {
        .u8 => return self.getIndexWithHeaderGeneric(key, ctx, header, u8),
        .u16 => return self.getIndexWithHeaderGeneric(key, ctx, header, u16),
        .u32 => return self.getIndexWithHeaderGeneric(key, ctx, header, u32),
    }
}

Functionget[src]

pub fn get(self: Self, key: K) ?V

Find the value associated with a key

Parameters

self: Self
key: K

Source Code

Source code
pub fn get(self: Self, key: K) ?V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getContext instead.");
    return self.getContext(key, undefined);
}

FunctiongetContext[src]

pub fn getContext(self: Self, key: K, ctx: Context) ?V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getContext(self: Self, key: K, ctx: Context) ?V {
    return self.getAdapted(key, ctx);
}

FunctiongetAdapted[src]

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V

Parameters

self: Self

Source Code

Source code
pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    return self.values()[index];
}

FunctiongetPtr[src]

pub fn getPtr(self: Self, key: K) ?*V

Find a pointer to the value associated with a key

Parameters

self: Self
key: K

Source Code

Source code
pub fn getPtr(self: Self, key: K) ?*V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getPtrContext instead.");
    return self.getPtrContext(key, undefined);
}

FunctiongetPtrContext[src]

pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V {
    return self.getPtrAdapted(key, ctx);
}

FunctiongetPtrAdapted[src]

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V

Parameters

self: Self

Source Code

Source code
pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    // workaround for #6974
    return if (@sizeOf(*V) == 0) @as(*V, undefined) else &self.values()[index];
}

FunctiongetKey[src]

pub fn getKey(self: Self, key: K) ?K

Find the actual key associated with an adapted key

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKey(self: Self, key: K) ?K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyContext instead.");
    return self.getKeyContext(key, undefined);
}

FunctiongetKeyContext[src]

pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K {
    return self.getKeyAdapted(key, ctx);
}

FunctiongetKeyAdapted[src]

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K

Parameters

self: Self

Source Code

Source code
pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    return self.keys()[index];
}

FunctiongetKeyPtr[src]

pub fn getKeyPtr(self: Self, key: K) ?*K

Find a pointer to the actual key associated with an adapted key

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKeyPtr(self: Self, key: K) ?*K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyPtrContext instead.");
    return self.getKeyPtrContext(key, undefined);
}

FunctiongetKeyPtrContext[src]

pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K {
    return self.getKeyPtrAdapted(key, ctx);
}

FunctiongetKeyPtrAdapted[src]

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K

Parameters

self: Self

Source Code

Source code
pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    return &self.keys()[index];
}

Functioncontains[src]

pub fn contains(self: Self, key: K) bool

Check whether a key is stored in the map

Parameters

self: Self
key: K

Source Code

Source code
pub fn contains(self: Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call containsContext instead.");
    return self.containsContext(key, undefined);
}

FunctioncontainsContext[src]

pub fn containsContext(self: Self, key: K, ctx: Context) bool

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn containsContext(self: Self, key: K, ctx: Context) bool {
    return self.containsAdapted(key, ctx);
}

FunctioncontainsAdapted[src]

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool

Parameters

self: Self

Source Code

Source code
pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
    return self.getIndexAdapted(key, ctx) != null;
}

FunctionfetchSwapRemove[src]

pub fn fetchSwapRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function. The entry is removed from the underlying array by swapping it with the last element.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchSwapRemove(self: *Self, key: K) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchSwapRemoveContext instead.");
    return self.fetchSwapRemoveContext(key, undefined);
}

FunctionfetchSwapRemoveContext[src]

pub fn fetchSwapRemoveContext(self: *Self, key: K, ctx: Context) ?KV

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn fetchSwapRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
    return self.fetchSwapRemoveContextAdapted(key, ctx, ctx);
}

FunctionfetchSwapRemoveAdapted[src]

pub fn fetchSwapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchSwapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchSwapRemoveContextAdapted instead.");
    return self.fetchSwapRemoveContextAdapted(key, ctx, undefined);
}

FunctionfetchSwapRemoveContextAdapted[src]

pub fn fetchSwapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn fetchSwapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.fetchRemoveByKey(key, key_ctx, if (store_hash) {} else ctx, .swap);
}

FunctionfetchOrderedRemove[src]

pub fn fetchOrderedRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function. The entry is removed from the underlying array by shifting all elements forward thereby maintaining the current ordering.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchOrderedRemove(self: *Self, key: K) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchOrderedRemoveContext instead.");
    return self.fetchOrderedRemoveContext(key, undefined);
}

FunctionfetchOrderedRemoveContext[src]

pub fn fetchOrderedRemoveContext(self: *Self, key: K, ctx: Context) ?KV

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn fetchOrderedRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
    return self.fetchOrderedRemoveContextAdapted(key, ctx, ctx);
}

FunctionfetchOrderedRemoveAdapted[src]

pub fn fetchOrderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchOrderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchOrderedRemoveContextAdapted instead.");
    return self.fetchOrderedRemoveContextAdapted(key, ctx, undefined);
}

FunctionfetchOrderedRemoveContextAdapted[src]

pub fn fetchOrderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn fetchOrderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.fetchRemoveByKey(key, key_ctx, if (store_hash) {} else ctx, .ordered);
}

FunctionswapRemove[src]

pub fn swapRemove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map. The entry is removed from the underlying array by swapping it with the last element. Returns true if an entry was removed, false otherwise.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn swapRemove(self: *Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveContext instead.");
    return self.swapRemoveContext(key, undefined);
}

FunctionswapRemoveContext[src]

pub fn swapRemoveContext(self: *Self, key: K, ctx: Context) bool

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn swapRemoveContext(self: *Self, key: K, ctx: Context) bool {
    return self.swapRemoveContextAdapted(key, ctx, ctx);
}

FunctionswapRemoveAdapted[src]

pub fn swapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool

Parameters

self: *Self

Source Code

Source code
pub fn swapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveContextAdapted instead.");
    return self.swapRemoveContextAdapted(key, ctx, undefined);
}

FunctionswapRemoveContextAdapted[src]

pub fn swapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn swapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.removeByKey(key, key_ctx, if (store_hash) {} else ctx, .swap);
}

FunctionorderedRemove[src]

pub fn orderedRemove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering. Returns true if an entry was removed, false otherwise.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn orderedRemove(self: *Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveContext instead.");
    return self.orderedRemoveContext(key, undefined);
}

FunctionorderedRemoveContext[src]

pub fn orderedRemoveContext(self: *Self, key: K, ctx: Context) bool

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn orderedRemoveContext(self: *Self, key: K, ctx: Context) bool {
    return self.orderedRemoveContextAdapted(key, ctx, ctx);
}

FunctionorderedRemoveAdapted[src]

pub fn orderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool

Parameters

self: *Self

Source Code

Source code
pub fn orderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveContextAdapted instead.");
    return self.orderedRemoveContextAdapted(key, ctx, undefined);
}

FunctionorderedRemoveContextAdapted[src]

pub fn orderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn orderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.removeByKey(key, key_ctx, if (store_hash) {} else ctx, .ordered);
}
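
Example

A brief sketch (not from the original source) contrasting the two removal strategies described above:

const std = @import("std");

test "swapRemove reorders, orderedRemove does not" {
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    for ([_]u32{ 10, 20, 30, 40 }) |k| try map.put(gpa, k, k);

    _ = map.swapRemove(10); // last entry (40) is swapped into slot 0
    try std.testing.expectEqualSlices(u32, &.{ 40, 20, 30 }, map.keys());

    _ = map.orderedRemove(40); // remaining entries shift forward, order kept
    try std.testing.expectEqualSlices(u32, &.{ 20, 30 }, map.keys());
}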

FunctionswapRemoveAt[src]

pub fn swapRemoveAt(self: *Self, index: usize) void

Deletes the item at the specified index in entries from the hash map. The entry is removed from the underlying array by swapping it with the last element.

Parameters

self: *Self
index: usize

Source Code

Source code
pub fn swapRemoveAt(self: *Self, index: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveAtContext instead.");
    return self.swapRemoveAtContext(index, undefined);
}

FunctionswapRemoveAtContext[src]

pub fn swapRemoveAtContext(self: *Self, index: usize, ctx: Context) void

Parameters

self: *Self
index: usize
ctx: Context

Source Code

Source code
pub fn swapRemoveAtContext(self: *Self, index: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.removeByIndex(index, if (store_hash) {} else ctx, .swap);
}

FunctionorderedRemoveAt[src]

pub fn orderedRemoveAt(self: *Self, index: usize) void

Deletes the item at the specified index in entries from the hash map. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering.

Parameters

self: *Self
index: usize

Source Code

Source code
pub fn orderedRemoveAt(self: *Self, index: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveAtContext instead.");
    return self.orderedRemoveAtContext(index, undefined);
}

FunctionorderedRemoveAtContext[src]

pub fn orderedRemoveAtContext(self: *Self, index: usize, ctx: Context) void

Parameters

self: *Self
index: usize
ctx: Context

Source Code

Source code
pub fn orderedRemoveAtContext(self: *Self, index: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.removeByIndex(index, if (store_hash) {} else ctx, .ordered);
}

Functionclone[src]

pub fn clone(self: Self, gpa: Allocator) Oom!Self

Create a copy of the hash map which can be modified separately. The copy uses the same context as this instance, but is allocated with the provided allocator.

Parameters

self: Self

Source Code

Source code
pub fn clone(self: Self, gpa: Allocator) Oom!Self {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call cloneContext instead.");
    return self.cloneContext(gpa, undefined);
}

FunctioncloneContext[src]

pub fn cloneContext(self: Self, gpa: Allocator, ctx: Context) Oom!Self

Parameters

self: Self
ctx: Context

Source Code

Source code
pub fn cloneContext(self: Self, gpa: Allocator, ctx: Context) Oom!Self {
    var other: Self = .{};
    other.entries = try self.entries.clone(gpa);
    errdefer other.entries.deinit(gpa);

    if (self.index_header) |header| {
        // TODO: I'm pretty sure this could be memcpy'd instead of
        // doing all this work.
        const new_header = try IndexHeader.alloc(gpa, header.bit_index);
        other.insertAllEntriesIntoNewHeader(if (store_hash) {} else ctx, new_header);
        other.index_header = new_header;
    }
    return other;
}

Functionmove[src]

pub fn move(self: *Self) Self

Set the map to an empty state, making deinitialization a no-op, and returning a copy of the original.

Parameters

self: *Self

Source Code

Source code
pub fn move(self: *Self) Self {
    self.pointer_stability.assertUnlocked();
    const result = self.*;
    self.* = .empty;
    return result;
}

FunctionreIndex[src]

pub fn reIndex(self: *Self, gpa: Allocator) Oom!void

Recomputes stored hashes and rebuilds the key indexes. If the underlying keys have been modified directly, call this method to recompute the denormalized metadata necessary for the operation of the methods of this map that lookup entries by key.

One use case for this is directly calling entries.resize() to grow the underlying storage, and then setting the keys and values directly without going through the methods of this map.

The time complexity of this operation is O(n).

Parameters

self: *Self

Source Code

Source code
pub fn reIndex(self: *Self, gpa: Allocator) Oom!void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call reIndexContext instead.");
    return self.reIndexContext(gpa, undefined);
}
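
Example

A minimal sketch (not part of the original documentation); small maps without stored hashes may tolerate direct key edits, but reIndex is required in general before lookups can be trusted again:

const std = @import("std");

test "mutate keys directly, then reIndex" {
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 1, 100);

    map.keys()[0] = 2; // direct mutation bypasses the index
    try map.reIndex(gpa); // recompute hashes and rebuild the index
    try std.testing.expect(map.contains(2));
    try std.testing.expect(!map.contains(1));
}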

FunctionreIndexContext[src]

pub fn reIndexContext(self: *Self, gpa: Allocator, ctx: Context) Oom!void

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn reIndexContext(self: *Self, gpa: Allocator, ctx: Context) Oom!void {
    // Recompute all hashes.
    if (store_hash) {
        for (self.keys(), self.entries.items(.hash)) |key, *hash| {
            const h = checkedHash(ctx, key);
            hash.* = h;
        }
    }
    try rebuildIndex(self, gpa, ctx);
}

FunctionsetKey[src]

pub fn setKey(self: *Self, gpa: Allocator, index: usize, new_key: K) Oom!void

Modify an entry's key without reordering any entries.

Parameters

self: *Self
index: usize
new_key: K

Source Code

Source code
pub fn setKey(self: *Self, gpa: Allocator, index: usize, new_key: K) Oom!void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call setKeyContext instead.");
    return setKeyContext(self, gpa, index, new_key, undefined);
}

FunctionsetKeyContext[src]

pub fn setKeyContext(self: *Self, gpa: Allocator, index: usize, new_key: K, ctx: Context) Oom!void

Parameters

self: *Self
index: usize
new_key: K
ctx: Context

Source Code

Source code
pub fn setKeyContext(self: *Self, gpa: Allocator, index: usize, new_key: K, ctx: Context) Oom!void {
    const key_ptr = &self.entries.items(.key)[index];
    key_ptr.* = new_key;
    if (store_hash) self.entries.items(.hash)[index] = checkedHash(ctx, key_ptr.*);
    // Pass the caller's context so the rebuild can hash keys when needed.
    try rebuildIndex(self, gpa, ctx);
}

Functionsort[src]

pub inline fn sort(self: *Self, sort_ctx: anytype) void

Sorts the entries and then rebuilds the index. Uses a stable sorting algorithm. sort_ctx must have this method:

fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool

Parameters

self: *Self

Source Code

Source code
pub inline fn sort(self: *Self, sort_ctx: anytype) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call sortContext instead.");
    return sortContextInternal(self, .stable, sort_ctx, undefined);
}
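
Example

A short sketch (not from the original source) of a sort_ctx that orders entries by key; it captures the live keys slice, which the indices passed to lessThan refer into:

const std = @import("std");

test "sort entries by key" {
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 3, 30);
    try map.put(gpa, 1, 10);
    try map.put(gpa, 2, 20);

    const SortCtx = struct {
        keys: []const u32,
        pub fn lessThan(ctx: @This(), a_index: usize, b_index: usize) bool {
            return ctx.keys[a_index] < ctx.keys[b_index];
        }
    };
    map.sort(SortCtx{ .keys = map.keys() });
    try std.testing.expectEqualSlices(u32, &.{ 1, 2, 3 }, map.keys());
}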

FunctionsortUnstable[src]

pub inline fn sortUnstable(self: *Self, sort_ctx: anytype) void

Sorts the entries and then rebuilds the index. Uses an unstable sorting algorithm. sort_ctx must have this method:

fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool

Parameters

self: *Self

Source Code

Source code
pub inline fn sortUnstable(self: *Self, sort_ctx: anytype) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call sortUnstableContext instead.");
    return self.sortContextInternal(.unstable, sort_ctx, undefined);
}

FunctionsortContext[src]

pub inline fn sortContext(self: *Self, sort_ctx: anytype, ctx: Context) void

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub inline fn sortContext(self: *Self, sort_ctx: anytype, ctx: Context) void {
    return sortContextInternal(self, .stable, sort_ctx, ctx);
}

FunctionsortUnstableContext[src]

pub inline fn sortUnstableContext(self: *Self, sort_ctx: anytype, ctx: Context) void

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub inline fn sortUnstableContext(self: *Self, sort_ctx: anytype, ctx: Context) void {
    return sortContextInternal(self, .unstable, sort_ctx, ctx);
}

FunctionshrinkRetainingCapacity[src]

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Keeps capacity the same.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. Any deinitialization of discarded entries must take place after calling this function.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call shrinkRetainingCapacityContext instead.");
    return self.shrinkRetainingCapacityContext(new_len, undefined);
}

FunctionshrinkRetainingCapacityContext[src]

pub fn shrinkRetainingCapacityContext(self: *Self, new_len: usize, ctx: Context) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Keeps capacity the same.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. Any deinitialization of discarded entries must take place after calling this function.

Parameters

self: *Self
new_len: usize
ctx: Context

Source Code

Source code
pub fn shrinkRetainingCapacityContext(self: *Self, new_len: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    // Remove index entries from the new length onwards.
    // Explicitly choose to ONLY remove index entries and not the underlying array list
    // entries as we're going to remove them in the subsequent shrink call.
    if (self.index_header) |header| {
        var i: usize = new_len;
        while (i < self.entries.len) : (i += 1)
            self.removeFromIndexByIndex(i, if (store_hash) {} else ctx, header);
    }
    self.entries.shrinkRetainingCapacity(new_len);
}

FunctionshrinkAndFree[src]

pub fn shrinkAndFree(self: *Self, gpa: Allocator, new_len: usize) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Reduces allocated capacity.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. It is a bug to call this function if the discarded entries require deinitialization. For that use case, shrinkRetainingCapacity can be used instead.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkAndFree(self: *Self, gpa: Allocator, new_len: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call shrinkAndFreeContext instead.");
    return self.shrinkAndFreeContext(gpa, new_len, undefined);
}

FunctionshrinkAndFreeContext[src]

pub fn shrinkAndFreeContext(self: *Self, gpa: Allocator, new_len: usize, ctx: Context) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Reduces allocated capacity.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. It is a bug to call this function if the discarded entries require deinitialization. For that use case, shrinkRetainingCapacityContext can be used instead.

Parameters

self: *Self
new_len: usize
ctx: Context

Source Code

Source code
pub fn shrinkAndFreeContext(self: *Self, gpa: Allocator, new_len: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    // Remove index entries from the new length onwards.
    // Explicitly choose to ONLY remove index entries and not the underlying array list
    // entries as we're going to remove them in the subsequent shrink call.
    if (self.index_header) |header| {
        var i: usize = new_len;
        while (i < self.entries.len) : (i += 1)
            self.removeFromIndexByIndex(i, if (store_hash) {} else ctx, header);
    }
    self.entries.shrinkAndFree(gpa, new_len);
}

Functionpop[src]

pub fn pop(self: *Self) ?KV

If the map is not empty, removes the last inserted entry from the hash map and returns it; otherwise returns null.

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?KV {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call popContext instead.");
    return self.popContext(undefined);
}

FunctionpopContext[src]

pub fn popContext(self: *Self, ctx: Context) ?KV

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn popContext(self: *Self, ctx: Context) ?KV {
    if (self.entries.len == 0) return null;
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    const item = self.entries.get(self.entries.len - 1);
    if (self.index_header) |header|
        self.removeFromIndexByIndex(self.entries.len - 1, if (store_hash) {} else ctx, header);
    self.entries.len -= 1;
    return .{
        .key = item.key,
        .value = item.value,
    };
}
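
Example

A minimal sketch (not part of the original documentation):

const std = @import("std");

test "pop returns the last entry" {
    const gpa = std.testing.allocator;
    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 1, 10);
    try map.put(gpa, 2, 20);

    const kv = map.pop().?;
    try std.testing.expectEqual(@as(u32, 2), kv.key);
    try std.testing.expectEqual(@as(u32, 20), kv.value);
    try std.testing.expectEqual(@as(usize, 1), map.count());
}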

Source Code

Source code
pub fn AutoArrayHashMapUnmanaged(comptime K: type, comptime V: type) type {
    return ArrayHashMapUnmanaged(K, V, AutoContext(K), !autoEqlIsCheap(K));
}

Type FunctionAutoHashMap[src]

A HashMap with default hash and equal functions.

See AutoContext for a description of the hash and equal implementations.

Parameters

K: type
V: type

Types

TypeUnmanaged[src]

The type of the unmanaged hash map underlying this wrapper

Source Code

Source code
pub const Unmanaged = HashMapUnmanaged(K, V, Context, max_load_percentage)

Fields

unmanaged: Unmanaged
allocator: Allocator
ctx: Context

Values

ConstantEntry[src]

An entry, containing pointers to a key and value stored in the map

Source Code

Source code
pub const Entry = Unmanaged.Entry

ConstantKV[src]

A copy of a key and value which are no longer in the map

Source Code

Source code
pub const KV = Unmanaged.KV

ConstantHash[src]

The integer type that is the result of hashing

Source Code

Source code
pub const Hash = Unmanaged.Hash

ConstantIterator[src]

The iterator type returned by iterator()

Source Code

Source code
pub const Iterator = Unmanaged.Iterator

ConstantKeyIterator[src]

Source Code

Source code
pub const KeyIterator = Unmanaged.KeyIterator

ConstantValueIterator[src]

Source Code

Source code
pub const ValueIterator = Unmanaged.ValueIterator

ConstantSize[src]

The integer type used to store the size of the map

Source Code

Source code
pub const Size = Unmanaged.Size

ConstantGetOrPutResult[src]

The type returned from getOrPut and variants

Source Code

Source code
pub const GetOrPutResult = Unmanaged.GetOrPutResult

Functions

Functioninit[src]

pub fn init(allocator: Allocator) Self

Create a managed hash map with an empty context. If the context is not zero-sized, you must use initContext(allocator, ctx) instead.

Parameters

allocator: Allocator

Source Code

Source code
pub fn init(allocator: Allocator) Self {
    if (@sizeOf(Context) != 0) {
        @compileError("Context must be specified! Call initContext(allocator, ctx) instead.");
    }
    return .{
        .unmanaged = .empty,
        .allocator = allocator,
        .ctx = undefined, // ctx is zero-sized so this is safe.
    };
}

FunctioninitContext[src]

pub fn initContext(allocator: Allocator, ctx: Context) Self

Create a managed hash map with a context

Parameters

allocator: Allocator
ctx: Context

Source Code

Source code
pub fn initContext(allocator: Allocator, ctx: Context) Self {
    return .{
        .unmanaged = .empty,
        .allocator = allocator,
        .ctx = ctx,
    };
}
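
A sketch of why init rejects non-zero-sized contexts: a stateful context such as the hypothetical case-folding one below (not part of the source) carries runtime data the map cannot invent, so it must be supplied explicitly through initContext.

const std = @import("std");

const FoldingContext = struct {
    fold_case: bool,

    pub fn hash(self: @This(), s: []const u8) u64 {
        var h = std.hash.Wyhash.init(0);
        for (s) |c| {
            const b = if (self.fold_case) std.ascii.toLower(c) else c;
            h.update(&[_]u8{b});
        }
        return h.final();
    }

    pub fn eql(self: @This(), a: []const u8, b: []const u8) bool {
        if (self.fold_case) return std.ascii.eqlIgnoreCase(a, b);
        return std.mem.eql(u8, a, b);
    }
};

test "stateful contexts go through initContext" {
    var map = std.HashMap([]const u8, u32, FoldingContext, std.hash_map.default_max_load_percentage)
        .initContext(std.testing.allocator, .{ .fold_case = true });
    defer map.deinit();

    try map.put("Hello", 1);
    try std.testing.expectEqual(@as(?u32, 1), map.get("HELLO"));
}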

FunctionlockPointers[src]

pub fn lockPointers(self: *Self) void

Puts the hash map into a state where any method call that would cause an existing key or value pointer to become invalidated will instead trigger an assertion.

An additional call to lockPointers in such state also triggers an assertion.

unlockPointers returns the hash map to the previous state.

Parameters

self: *Self

Source Code

Source code
pub fn lockPointers(self: *Self) void {
    self.unmanaged.lockPointers();
}

FunctionunlockPointers[src]

pub fn unlockPointers(self: *Self) void

Undoes a call to lockPointers.

Parameters

self: *Self

Source Code

Source code
pub fn unlockPointers(self: *Self) void {
    self.unmanaged.unlockPointers();
}
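
A sketch of using the lock as a guard while holding entry pointers; this is illustrative, not from the source. Any growing call (such as put) made inside the locked region would trip the safety assertion instead of silently invalidating value_ptr.

const std = @import("std");

test "lockPointers guards a region that holds entry pointers" {
    var map = std.AutoHashMap(u32, u32).init(std.testing.allocator);
    defer map.deinit();
    try map.put(1, 10);

    map.lockPointers();
    defer map.unlockPointers();

    // Safe: reads do not move entries. A map.put(...) here would
    // assert rather than reallocate under our feet.
    const value_ptr = map.getPtr(1).?;
    value_ptr.* += 1;
}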

Functiondeinit[src]

pub fn deinit(self: *Self) void

Release the backing array and invalidate this map. This does not deinit keys, values, or the context! If your keys or values need to be released, do so before calling this function.

Parameters

self: *Self

Source Code

Source code
pub fn deinit(self: *Self) void {
    self.unmanaged.deinit(self.allocator);
    self.* = undefined;
}
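
Since deinit releases only the map's own storage, owned keys or values must be freed first. A minimal sketch (the heap-allocated string values are illustrative):

const std = @import("std");

test "free owned values before deinit" {
    const gpa = std.testing.allocator;
    var map = std.AutoHashMap(u32, []u8).init(gpa);
    defer map.deinit();
    defer {
        // Runs before map.deinit() above; the map frees only its
        // backing array, never the values it points at.
        var it = map.valueIterator();
        while (it.next()) |value_ptr| gpa.free(value_ptr.*);
    }

    try map.put(1, try gpa.dupe(u8, "owned"));
}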

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Empty the map, but keep the backing allocation for future use. This does not free keys or values! Be sure to release them if they need deinitialization before calling this function.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    return self.unmanaged.clearRetainingCapacity();
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self) void

Empty the map and release the backing allocation. This does not free keys or values! Be sure to release them if they need deinitialization before calling this function.

Parameters

self: *Self

Source Code

Source code
pub fn clearAndFree(self: *Self) void {
    return self.unmanaged.clearAndFree(self.allocator);
}

Functioncount[src]

pub fn count(self: Self) Size

Return the number of items in the map.

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) Size {
    return self.unmanaged.count();
}

Functioniterator[src]

pub fn iterator(self: *const Self) Iterator

Create an iterator over the entries in the map. The iterator is invalidated if the map is modified.

Parameters

self: *const Self

Source Code

Source code
pub fn iterator(self: *const Self) Iterator {
    return self.unmanaged.iterator();
}

FunctionkeyIterator[src]

pub fn keyIterator(self: Self) KeyIterator

Create an iterator over the keys in the map. The iterator is invalidated if the map is modified.

Parameters

self: Self

Source Code

Source code
pub fn keyIterator(self: Self) KeyIterator {
    return self.unmanaged.keyIterator();
}

FunctionvalueIterator[src]

pub fn valueIterator(self: Self) ValueIterator

Create an iterator over the values in the map. The iterator is invalidated if the map is modified.

Parameters

self: Self

Source Code

Source code
pub fn valueIterator(self: Self) ValueIterator {
    return self.unmanaged.valueIterator();
}

FunctiongetOrPut[src]

pub fn getOrPut(self: *Self, key: K) Allocator.Error!GetOrPutResult

If key exists this function cannot fail. If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointers point to it. Caller should then initialize the value (but not the key).

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPut(self: *Self, key: K) Allocator.Error!GetOrPutResult {
    return self.unmanaged.getOrPutContext(self.allocator, key, self.ctx);
}
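
This enables the classic single-lookup update pattern. An illustrative sketch counting byte occurrences:

const std = @import("std");

test "getOrPut updates a counter with one lookup per byte" {
    var counts = std.AutoHashMap(u8, u32).init(std.testing.allocator);
    defer counts.deinit();

    for ("abracadabra") |c| {
        const gop = try counts.getOrPut(c);
        // A newly inserted entry has an undefined value, so it must
        // be initialized before use.
        if (!gop.found_existing) gop.value_ptr.* = 0;
        gop.value_ptr.* += 1;
    }
    try std.testing.expectEqual(@as(?u32, 5), counts.get('a'));
}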

FunctiongetOrPutAdapted[src]

pub fn getOrPutAdapted(self: *Self, key: anytype, ctx: anytype) Allocator.Error!GetOrPutResult

If key exists this function cannot fail. If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined key and value, and the Entry pointers point to it. Caller must then initialize the key and value.

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAdapted(self: *Self, key: anytype, ctx: anytype) Allocator.Error!GetOrPutResult {
    return self.unmanaged.getOrPutContextAdapted(self.allocator, key, ctx, self.ctx);
}

FunctiongetOrPutAssumeCapacity[src]

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult

If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointers point to it. Caller should then initialize the value (but not the key). If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
    return self.unmanaged.getOrPutAssumeCapacityContext(key, self.ctx);
}

FunctiongetOrPutAssumeCapacityAdapted[src]

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult

If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined key and value, and the Entry pointers point to it. Caller must then initialize the key and value. If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {
    return self.unmanaged.getOrPutAssumeCapacityAdapted(key, ctx);
}

FunctiongetOrPutValue[src]

pub fn getOrPutValue(self: *Self, key: K, value: V) Allocator.Error!Entry

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn getOrPutValue(self: *Self, key: K, value: V) Allocator.Error!Entry {
    return self.unmanaged.getOrPutValueContext(self.allocator, key, value, self.ctx);
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, expected_count: Size) Allocator.Error!void

Increases capacity, guaranteeing that insertions up until the expected_count will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
expected_count: Size

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, expected_count: Size) Allocator.Error!void {
    return self.unmanaged.ensureTotalCapacityContext(self.allocator, expected_count, self.ctx);
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, additional_count: Size) Allocator.Error!void

Increases capacity, guaranteeing that insertions up until additional_count more items will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
additional_count: Size

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, additional_count: Size) Allocator.Error!void {
    return self.unmanaged.ensureUnusedCapacityContext(self.allocator, additional_count, self.ctx);
}

Functioncapacity[src]

pub fn capacity(self: Self) Size

Returns the total number of elements which may be present before it is no longer guaranteed that no allocations will be performed.

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) Size {
    return self.unmanaged.capacity();
}
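
Together these enable the reserve-then-insert pattern: one fallible call up front, then infallible assume-capacity inserts. An illustrative sketch:

const std = @import("std");

test "pre-reserved inserts cannot fail" {
    var map = std.AutoHashMap(u32, u32).init(std.testing.allocator);
    defer map.deinit();

    // The only point that can allocate (and therefore fail).
    try map.ensureUnusedCapacity(100);

    var i: u32 = 0;
    while (i < 100) : (i += 1) map.putAssumeCapacity(i, i * 2);
    try std.testing.expect(map.capacity() >= 100);
}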

Functionput[src]

pub fn put(self: *Self, key: K, value: V) Allocator.Error!void

Clobbers any existing data. To detect if a put would clobber existing data, see getOrPut.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn put(self: *Self, key: K, value: V) Allocator.Error!void {
    return self.unmanaged.putContext(self.allocator, key, value, self.ctx);
}

FunctionputNoClobber[src]

pub fn putNoClobber(self: *Self, key: K, value: V) Allocator.Error!void

Inserts a key-value pair into the hash map, asserting that no previous entry with the same key is already present

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putNoClobber(self: *Self, key: K, value: V) Allocator.Error!void {
    return self.unmanaged.putNoClobberContext(self.allocator, key, value, self.ctx);
}

FunctionputAssumeCapacity[src]

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Clobbers any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
    return self.unmanaged.putAssumeCapacityContext(key, value, self.ctx);
}

FunctionputAssumeCapacityNoClobber[src]

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Asserts that it does not clobber any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
    return self.unmanaged.putAssumeCapacityNoClobberContext(key, value, self.ctx);
}

FunctionfetchPut[src]

pub fn fetchPut(self: *Self, key: K, value: V) Allocator.Error!?KV

Inserts a new Entry into the hash map, returning the previous one, if any.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPut(self: *Self, key: K, value: V) Allocator.Error!?KV {
    return self.unmanaged.fetchPutContext(self.allocator, key, value, self.ctx);
}
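
A short illustrative sketch of the fetch semantics:

const std = @import("std");

test "fetchPut returns the clobbered pair, if any" {
    var map = std.AutoHashMap(u32, u32).init(std.testing.allocator);
    defer map.deinit();

    const first = try map.fetchPut(7, 1);
    try std.testing.expect(first == null); // nothing was replaced

    const prev = (try map.fetchPut(7, 2)).?;
    try std.testing.expectEqual(@as(u32, 1), prev.value);
}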

FunctionfetchPutAssumeCapacity[src]

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV

Inserts a new Entry into the hash map, returning the previous one, if any. If insertion happens, asserts there is enough capacity without allocating.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
    return self.unmanaged.fetchPutAssumeCapacityContext(key, value, self.ctx);
}

FunctionfetchRemove[src]

pub fn fetchRemove(self: *Self, key: K) ?KV

Removes a value from the map and returns the removed kv pair.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchRemove(self: *Self, key: K) ?KV {
    return self.unmanaged.fetchRemoveContext(key, self.ctx);
}

FunctionfetchRemoveAdapted[src]

pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    return self.unmanaged.fetchRemoveAdapted(key, ctx);
}

Functionget[src]

pub fn get(self: Self, key: K) ?V

Finds the value associated with a key in the map

Parameters

self: Self
key: K

Source Code

Source code
pub fn get(self: Self, key: K) ?V {
    return self.unmanaged.getContext(key, self.ctx);
}

FunctiongetAdapted[src]

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V

Parameters

self: Self

Source Code

Source code
pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
    return self.unmanaged.getAdapted(key, ctx);
}
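
A hedged sketch of the adapted API: the caller supplies a context whose hash and eql treat the adapted key exactly as the map's own context treats an equal stored key. Pair and PairAdapter below are hypothetical, and the sketch assumes StringHashMap hashes keys with seed-0 Wyhash whose streaming updates match one-shot hashing.

const std = @import("std");

const Pair = struct { first: []const u8, second: []const u8 };

const PairAdapter = struct {
    pub fn hash(_: @This(), k: Pair) u64 {
        // Must equal the map context's hash of "<first>.<second>".
        var h = std.hash.Wyhash.init(0);
        h.update(k.first);
        h.update(".");
        h.update(k.second);
        return h.final();
    }

    pub fn eql(_: @This(), k: Pair, stored: []const u8) bool {
        return stored.len == k.first.len + 1 + k.second.len and
            std.mem.eql(u8, stored[0..k.first.len], k.first) and
            stored[k.first.len] == '.' and
            std.mem.eql(u8, stored[k.first.len + 1 ..], k.second);
    }
};

test "getAdapted looks up a composite key without building it" {
    var map = std.StringHashMap(u32).init(std.testing.allocator);
    defer map.deinit();
    try map.put("foo.bar", 42);

    const v = map.getAdapted(Pair{ .first = "foo", .second = "bar" }, PairAdapter{});
    try std.testing.expectEqual(@as(?u32, 42), v);
}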

FunctiongetPtr[src]

pub fn getPtr(self: Self, key: K) ?*V

Parameters

self: Self
key: K

Source Code

Source code
pub fn getPtr(self: Self, key: K) ?*V {
    return self.unmanaged.getPtrContext(key, self.ctx);
}

FunctiongetPtrAdapted[src]

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V

Parameters

self: Self

Source Code

Source code
pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
    return self.unmanaged.getPtrAdapted(key, ctx);
}

FunctiongetKey[src]

pub fn getKey(self: Self, key: K) ?K

Finds the actual key associated with an adapted key in the map

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKey(self: Self, key: K) ?K {
    return self.unmanaged.getKeyContext(key, self.ctx);
}

FunctiongetKeyAdapted[src]

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K

Parameters

self: Self

Source Code

Source code
pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
    return self.unmanaged.getKeyAdapted(key, ctx);
}

FunctiongetKeyPtr[src]

pub fn getKeyPtr(self: Self, key: K) ?*K

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKeyPtr(self: Self, key: K) ?*K {
    return self.unmanaged.getKeyPtrContext(key, self.ctx);
}

FunctiongetKeyPtrAdapted[src]

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K

Parameters

self: Self

Source Code

Source code
pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
    return self.unmanaged.getKeyPtrAdapted(key, ctx);
}

FunctiongetEntry[src]

pub fn getEntry(self: Self, key: K) ?Entry

Finds the key and value associated with a key in the map

Parameters

self: Self
key: K

Source Code

Source code
pub fn getEntry(self: Self, key: K) ?Entry {
    return self.unmanaged.getEntryContext(key, self.ctx);
}

FunctiongetEntryAdapted[src]

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry

Parameters

self: Self

Source Code

Source code
pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
    return self.unmanaged.getEntryAdapted(key, ctx);
}

Functioncontains[src]

pub fn contains(self: Self, key: K) bool

Check if the map contains a key

Parameters

self: Self
key: K

Source Code

Source code
pub fn contains(self: Self, key: K) bool {
    return self.unmanaged.containsContext(key, self.ctx);
}

FunctioncontainsAdapted[src]

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool

Parameters

self: Self

Source Code

Source code
pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
    return self.unmanaged.containsAdapted(key, ctx);
}

Functionremove[src]

pub fn remove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map, and this function returns true. Otherwise this function returns false.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key: K

Source Code

Source code
pub fn remove(self: *Self, key: K) bool {
    return self.unmanaged.removeContext(key, self.ctx);
}

FunctionremoveAdapted[src]

pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self

Source Code

Source code
pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    return self.unmanaged.removeAdapted(key, ctx);
}

FunctionremoveByPtr[src]

pub fn removeByPtr(self: *Self, key_ptr: *K) void

Delete the entry with key pointed to by key_ptr from the hash map. key_ptr is assumed to be a valid pointer to a key that is present in the hash map.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key_ptr: *K

Source Code

Source code
pub fn removeByPtr(self: *Self, key_ptr: *K) void {
    self.unmanaged.removeByPtr(key_ptr);
}

Functionclone[src]

pub fn clone(self: Self) Allocator.Error!Self

Creates a copy of this map, using the same allocator

Parameters

self: Self

Source Code

Source code
pub fn clone(self: Self) Allocator.Error!Self {
    var other = try self.unmanaged.cloneContext(self.allocator, self.ctx);
    return other.promoteContext(self.allocator, self.ctx);
}

FunctioncloneWithAllocator[src]

pub fn cloneWithAllocator(self: Self, new_allocator: Allocator) Allocator.Error!Self

Creates a copy of this map, using a specified allocator

Parameters

self: Self
new_allocator: Allocator

Source Code

Source code
pub fn cloneWithAllocator(self: Self, new_allocator: Allocator) Allocator.Error!Self {
    var other = try self.unmanaged.cloneContext(new_allocator, self.ctx);
    return other.promoteContext(new_allocator, self.ctx);
}

FunctioncloneWithContext[src]

pub fn cloneWithContext(self: Self, new_ctx: anytype) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage)

Creates a copy of this map, using a specified context

Parameters

self: Self

Source Code

Source code
pub fn cloneWithContext(self: Self, new_ctx: anytype) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage) {
    var other = try self.unmanaged.cloneContext(self.allocator, new_ctx);
    return other.promoteContext(self.allocator, new_ctx);
}

FunctioncloneWithAllocatorAndContext[src]

pub fn cloneWithAllocatorAndContext( self: Self, new_allocator: Allocator, new_ctx: anytype, ) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage)

Creates a copy of this map, using a specified allocator and context.

Parameters

self: Self
new_allocator: Allocator

Source Code

Source code
pub fn cloneWithAllocatorAndContext(
    self: Self,
    new_allocator: Allocator,
    new_ctx: anytype,
) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage) {
    var other = try self.unmanaged.cloneContext(new_allocator, new_ctx);
    return other.promoteContext(new_allocator, new_ctx);
}

Functionmove[src]

pub fn move(self: *Self) Self

Sets the map to an empty state, making deinitialization a no-op, and returns a copy of the original.

Parameters

self: *Self

Source Code

Source code
pub fn move(self: *Self) Self {
    self.unmanaged.pointer_stability.assertUnlocked();
    const result = self.*;
    self.unmanaged = .empty;
    return result;
}

Functionrehash[src]

pub fn rehash(self: *Self) void

Rehash the map, in-place.

Over time, with the current tombstone-based implementation, a HashMap can become fragmented: tombstone entries build up and degrade performance through excessive probing. The kind of pattern that might cause this is a long-lived HashMap with repeated inserts and deletes.

After this function is called, there will be no tombstones in the HashMap: every entry has been rehashed, and any existing key/value pointers into the HashMap are invalidated.

Parameters

self: *Self

Source Code

Source code
pub fn rehash(self: *Self) void {
    self.unmanaged.rehash(self.ctx);
}
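
An illustrative churn-then-rehash sketch; the iteration count is arbitrary:

const std = @import("std");

test "rehash clears tombstone buildup" {
    var map = std.AutoHashMap(u32, u32).init(std.testing.allocator);
    defer map.deinit();

    // Repeated insert/delete churn leaves tombstones behind,
    // lengthening probe sequences over time.
    var i: u32 = 0;
    while (i < 1000) : (i += 1) {
        try map.put(i, i);
        _ = map.remove(i);
    }

    // In-place rehash removes the tombstones; any outstanding
    // key/value pointers are now invalid.
    map.rehash();
    try std.testing.expectEqual(@as(u32, 0), map.count());
}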

Source Code

Source code
pub fn AutoHashMap(comptime K: type, comptime V: type) type {
    return HashMap(K, V, AutoContext(K), default_max_load_percentage);
}

Type FunctionAutoHashMapUnmanaged[src]

Parameters

K: type
V: type

Types

TypeSize[src]

Source Code

Source code
pub const Size = u32

TypeHash[src]

Source Code

Source code
pub const Hash = u64

TypeKeyIterator[src]

Source Code

Source code
pub const KeyIterator = FieldIterator(K)

TypeValueIterator[src]

Source Code

Source code
pub const ValueIterator = FieldIterator(V)

TypeManaged[src]

Source Code

Source code
pub const Managed = HashMap(K, V, Context, max_load_percentage)

Fields

metadata: ?[*]Metadata = null

Pointer to the metadata.

size: Size = 0

Current number of elements in the hashmap.

available: Size = 0

Number of available slots before a grow is needed to satisfy the max_load_percentage.

pointer_stability: std.debug.SafetyLock = .{}

Used to detect memory safety violations.

Values

Constantempty[src]

A map containing no keys or values.

Source Code

Source code
pub const empty: Self = .{
    .metadata = null,
    .size = 0,
    .available = 0,
}
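
A minimal sketch of the unmanaged calling convention: start from empty and hand the allocator to each call that may allocate (gpa is a placeholder):

const std = @import("std");

test "unmanaged maps take an allocator per call" {
    const gpa = std.testing.allocator;
    var map: std.AutoHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);

    try map.put(gpa, 1, 10);
    try std.testing.expectEqual(@as(?u32, 10), map.get(1));
}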

Functions

Functionpromote[src]

pub fn promote(self: Self, allocator: Allocator) Managed

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn promote(self: Self, allocator: Allocator) Managed {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call promoteContext instead.");
    return promoteContext(self, allocator, undefined);
}

FunctionpromoteContext[src]

pub fn promoteContext(self: Self, allocator: Allocator, ctx: Context) Managed

Parameters

self: Self
allocator: Allocator
ctx: Context

Source Code

Source code
pub fn promoteContext(self: Self, allocator: Allocator, ctx: Context) Managed {
    return .{
        .unmanaged = self,
        .allocator = allocator,
        .ctx = ctx,
    };
}
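
A sketch of promoting an unmanaged map into the managed wrapper. After the call the managed copy owns the backing storage, so the original variable should no longer be used:

const std = @import("std");

test "promote wraps an unmanaged map with an allocator" {
    const gpa = std.testing.allocator;
    var unmanaged: std.AutoHashMapUnmanaged(u32, u32) = .empty;
    try unmanaged.put(gpa, 1, 10);

    // managed now refers to the same backing storage; deinit it,
    // not the stale unmanaged variable.
    var managed = unmanaged.promote(gpa);
    defer managed.deinit();

    try std.testing.expectEqual(@as(?u32, 10), managed.get(1));
}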

FunctionlockPointers[src]

pub fn lockPointers(self: *Self) void

Puts the hash map into a state where any method call that would cause an existing key or value pointer to become invalidated will instead trigger an assertion.

An additional call to lockPointers in such state also triggers an assertion.

unlockPointers returns the hash map to the previous state.

Parameters

self: *Self

Source Code

Source code
pub fn lockPointers(self: *Self) void {
    self.pointer_stability.lock();
}

FunctionunlockPointers[src]

pub fn unlockPointers(self: *Self) void

Undoes a call to lockPointers.

Parameters

self: *Self

Source Code

Source code
pub fn unlockPointers(self: *Self) void {
    self.pointer_stability.unlock();
}

Functiondeinit[src]

pub fn deinit(self: *Self, allocator: Allocator) void

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn deinit(self: *Self, allocator: Allocator) void {
    self.pointer_stability.assertUnlocked();
    self.deallocate(allocator);
    self.* = undefined;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, allocator: Allocator, new_size: Size) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
new_size: Size

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, allocator: Allocator, new_size: Size) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
    return ensureTotalCapacityContext(self, allocator, new_size, undefined);
}

FunctionensureTotalCapacityContext[src]

pub fn ensureTotalCapacityContext(self: *Self, allocator: Allocator, new_size: Size, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
new_size: Size
ctx: Context

Source Code

Source code
pub fn ensureTotalCapacityContext(self: *Self, allocator: Allocator, new_size: Size, ctx: Context) Allocator.Error!void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();
    if (new_size > self.size)
        try self.growIfNeeded(allocator, new_size - self.size, ctx);
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, allocator: Allocator, additional_size: Size) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
additional_size: Size

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, allocator: Allocator, additional_size: Size) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureUnusedCapacityContext instead.");
    return ensureUnusedCapacityContext(self, allocator, additional_size, undefined);
}

FunctionensureUnusedCapacityContext[src]

pub fn ensureUnusedCapacityContext(self: *Self, allocator: Allocator, additional_size: Size, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
additional_size: Size
ctx: Context

Source Code

Source code
pub fn ensureUnusedCapacityContext(self: *Self, allocator: Allocator, additional_size: Size, ctx: Context) Allocator.Error!void {
    return ensureTotalCapacityContext(self, allocator, self.count() + additional_size, ctx);
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();
    if (self.metadata) |_| {
        self.initMetadatas();
        self.size = 0;
        self.available = @truncate((self.capacity() * max_load_percentage) / 100);
    }
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self, allocator: Allocator) void

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn clearAndFree(self: *Self, allocator: Allocator) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();
    self.deallocate(allocator);
    self.size = 0;
    self.available = 0;
}

Functioncount[src]

pub fn count(self: Self) Size

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) Size {
    return self.size;
}

Functioncapacity[src]

pub fn capacity(self: Self) Size

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) Size {
    if (self.metadata == null) return 0;

    return self.header().capacity;
}

Functioniterator[src]

pub fn iterator(self: *const Self) Iterator

Parameters

self: *const Self

Source Code

Source code
pub fn iterator(self: *const Self) Iterator {
    return .{ .hm = self };
}

FunctionkeyIterator[src]

pub fn keyIterator(self: Self) KeyIterator

Parameters

self: Self

Source Code

Source code
pub fn keyIterator(self: Self) KeyIterator {
    if (self.metadata) |metadata| {
        return .{
            .len = self.capacity(),
            .metadata = metadata,
            .items = self.keys(),
        };
    } else {
        return .{
            .len = 0,
            .metadata = undefined,
            .items = undefined,
        };
    }
}

FunctionvalueIterator[src]

pub fn valueIterator(self: Self) ValueIterator

Parameters

self: Self

Source Code

Source code
pub fn valueIterator(self: Self) ValueIterator {
    if (self.metadata) |metadata| {
        return .{
            .len = self.capacity(),
            .metadata = metadata,
            .items = self.values(),
        };
    } else {
        return .{
            .len = 0,
            .metadata = undefined,
            .items = undefined,
        };
    }
}

FunctionputNoClobber[src]

pub fn putNoClobber(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void

Insert an entry in the map. Assumes it is not already present.

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn putNoClobber(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putNoClobberContext instead.");
    return self.putNoClobberContext(allocator, key, value, undefined);
}

FunctionputNoClobberContext[src]

pub fn putNoClobberContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putNoClobberContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void {
    {
        self.pointer_stability.lock();
        defer self.pointer_stability.unlock();
        try self.growIfNeeded(allocator, 1, ctx);
    }
    self.putAssumeCapacityNoClobberContext(key, value, ctx);
}

FunctionputAssumeCapacity[src]

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Clobbers any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityContext instead.");
    return self.putAssumeCapacityContext(key, value, undefined);
}

FunctionputAssumeCapacityContext[src]

pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void {
    const gop = self.getOrPutAssumeCapacityContext(key, ctx);
    gop.value_ptr.* = value;
}

FunctionputAssumeCapacityNoClobber[src]

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void

Insert an entry in the map. Assumes it is not already present, and that no allocation is needed.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityNoClobberContext instead.");
    return self.putAssumeCapacityNoClobberContext(key, value, undefined);
}

FunctionputAssumeCapacityNoClobberContext[src]

pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void {
    assert(!self.containsContext(key, ctx));

    const hash: Hash = ctx.hash(key);
    const mask = self.capacity() - 1;
    var idx: usize = @truncate(hash & mask);

    var metadata = self.metadata.? + idx;
    while (metadata[0].isUsed()) {
        idx = (idx + 1) & mask;
        metadata = self.metadata.? + idx;
    }

    assert(self.available > 0);
    self.available -= 1;

    const fingerprint = Metadata.takeFingerprint(hash);
    metadata[0].fill(fingerprint);
    self.keys()[idx] = key;
    self.values()[idx] = value;

    self.size += 1;
}

FunctionfetchPut[src]

pub fn fetchPut(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!?KV

Inserts a new Entry into the hash map, returning the previous one, if any.

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn fetchPut(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutContext instead.");
    return self.fetchPutContext(allocator, key, value, undefined);
}

FunctionfetchPutContext[src]

pub fn fetchPutContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!?KV

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!?KV {
    const gop = try self.getOrPutContext(allocator, key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctionfetchPutAssumeCapacity[src]

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV

Inserts a new Entry into the hash map, returning the previous one, if any. If insertion happens, asserts there is enough capacity without allocating.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutAssumeCapacityContext instead.");
    return self.fetchPutAssumeCapacityContext(key, value, undefined);
}

FunctionfetchPutAssumeCapacityContext[src]

pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV {
    const gop = self.getOrPutAssumeCapacityContext(key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctionfetchRemove[src]

pub fn fetchRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchRemove(self: *Self, key: K) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchRemoveContext instead.");
    return self.fetchRemoveContext(key, undefined);
}

FunctionfetchRemoveContext[src]

pub fn fetchRemoveContext(self: *Self, key: K, ctx: Context) ?KV

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn fetchRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
    return self.fetchRemoveAdapted(key, ctx);
}

FunctionfetchRemoveAdapted[src]

pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    if (self.getIndex(key, ctx)) |idx| {
        const old_key = &self.keys()[idx];
        const old_val = &self.values()[idx];
        const result = KV{
            .key = old_key.*,
            .value = old_val.*,
        };
        self.metadata.?[idx].remove();
        old_key.* = undefined;
        old_val.* = undefined;
        self.size -= 1;
        self.available += 1;
        return result;
    }

    return null;
}

FunctiongetEntry[src]

pub fn getEntry(self: Self, key: K) ?Entry

Parameters

self: Self
key: K

Source Code

Source code
pub fn getEntry(self: Self, key: K) ?Entry {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getEntryContext instead.");
    return self.getEntryContext(key, undefined);
}

FunctiongetEntryContext[src]

pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry {
    return self.getEntryAdapted(key, ctx);
}

FunctiongetEntryAdapted[src]

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry

Parameters

self: Self

Source Code

Source code
pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
    if (self.getIndex(key, ctx)) |idx| {
        return Entry{
            .key_ptr = &self.keys()[idx],
            .value_ptr = &self.values()[idx],
        };
    }
    return null;
}

Functionput[src]

pub fn put(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void

Insert an entry if the associated key is not already present, otherwise update preexisting value.

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn put(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putContext instead.");
    return self.putContext(allocator, key, value, undefined);
}

FunctionputContext[src]

pub fn putContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void {
    const result = try self.getOrPutContext(allocator, key, ctx);
    result.value_ptr.* = value;
}

FunctiongetKeyPtr[src]

pub fn getKeyPtr(self: Self, key: K) ?*K

Get an optional pointer to the actual key associated with adapted key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKeyPtr(self: Self, key: K) ?*K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyPtrContext instead.");
    return self.getKeyPtrContext(key, undefined);
}

FunctiongetKeyPtrContext[src]

pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K {
    return self.getKeyPtrAdapted(key, ctx);
}

FunctiongetKeyPtrAdapted[src]

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K

Parameters

self: Self

Source Code

Source code
pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
    if (self.getIndex(key, ctx)) |idx| {
        return &self.keys()[idx];
    }
    return null;
}

FunctiongetKey[src]

pub fn getKey(self: Self, key: K) ?K

Get a copy of the actual key associated with adapted key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKey(self: Self, key: K) ?K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyContext instead.");
    return self.getKeyContext(key, undefined);
}

FunctiongetKeyContext[src]

pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K {
    return self.getKeyAdapted(key, ctx);
}

FunctiongetKeyAdapted[src]

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K

Parameters

self: Self

Source Code

Source code
pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
    if (self.getIndex(key, ctx)) |idx| {
        return self.keys()[idx];
    }
    return null;
}

FunctiongetPtr[src]

pub fn getPtr(self: Self, key: K) ?*V

Get an optional pointer to the value associated with key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getPtr(self: Self, key: K) ?*V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getPtrContext instead.");
    return self.getPtrContext(key, undefined);
}

FunctiongetPtrContext[src]

pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V {
    return self.getPtrAdapted(key, ctx);
}

FunctiongetPtrAdapted[src]

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V

Parameters

self: Self

Source Code

Source code
pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
    if (self.getIndex(key, ctx)) |idx| {
        return &self.values()[idx];
    }
    return null;
}

Functionget[src]

pub fn get(self: Self, key: K) ?V

Get a copy of the value associated with key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn get(self: Self, key: K) ?V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getContext instead.");
    return self.getContext(key, undefined);
}

FunctiongetContext[src]

pub fn getContext(self: Self, key: K, ctx: Context) ?V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getContext(self: Self, key: K, ctx: Context) ?V {
    return self.getAdapted(key, ctx);
}

FunctiongetAdapted[src]

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V

Parameters

self: Self

Source Code

Source code
pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
    if (self.getIndex(key, ctx)) |idx| {
        return self.values()[idx];
    }
    return null;
}

FunctiongetOrPut[src]

pub fn getOrPut(self: *Self, allocator: Allocator, key: K) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator
key: K

Source Code

Source code
pub fn getOrPut(self: *Self, allocator: Allocator, key: K) Allocator.Error!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContext instead.");
    return self.getOrPutContext(allocator, key, undefined);
}

FunctiongetOrPutContext[src]

pub fn getOrPutContext(self: *Self, allocator: Allocator, key: K, ctx: Context) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutContext(self: *Self, allocator: Allocator, key: K, ctx: Context) Allocator.Error!GetOrPutResult {
    const gop = try self.getOrPutContextAdapted(allocator, key, ctx, ctx);
    if (!gop.found_existing) {
        gop.key_ptr.* = key;
    }
    return gop;
}

FunctiongetOrPutAdapted[src]

pub fn getOrPutAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn getOrPutAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype) Allocator.Error!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContextAdapted instead.");
    return self.getOrPutContextAdapted(allocator, key, key_ctx, undefined);
}

FunctiongetOrPutContextAdapted[src]

pub fn getOrPutContextAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator
ctx: Context

Source Code

Source code
pub fn getOrPutContextAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Allocator.Error!GetOrPutResult {
    {
        self.pointer_stability.lock();
        defer self.pointer_stability.unlock();
        self.growIfNeeded(allocator, 1, ctx) catch |err| {
            // If allocation fails, try to do the lookup anyway.
            // If we find an existing item, we can return it.
            // Otherwise return the error, we could not add another.
            const index = self.getIndex(key, key_ctx) orelse return err;
            return GetOrPutResult{
                .key_ptr = &self.keys()[index],
                .value_ptr = &self.values()[index],
                .found_existing = true,
            };
        };
    }
    return self.getOrPutAssumeCapacityAdapted(key, key_ctx);
}

FunctiongetOrPutAssumeCapacity[src]

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutAssumeCapacityContext instead.");
    return self.getOrPutAssumeCapacityContext(key, undefined);
}

FunctiongetOrPutAssumeCapacityContext[src]

pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult {
    const result = self.getOrPutAssumeCapacityAdapted(key, ctx);
    if (!result.found_existing) {
        result.key_ptr.* = key;
    }
    return result;
}

FunctiongetOrPutAssumeCapacityAdapted[src]

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {

    // If you get a compile error on this line, it means that your generic hash
    // function is invalid for these parameters.
    const hash: Hash = ctx.hash(key);

    const mask = self.capacity() - 1;
    const fingerprint = Metadata.takeFingerprint(hash);
    var limit = self.capacity();
    var idx = @as(usize, @truncate(hash & mask));

    var first_tombstone_idx: usize = self.capacity(); // invalid index
    var metadata = self.metadata.? + idx;
    while (!metadata[0].isFree() and limit != 0) {
        if (metadata[0].isUsed() and metadata[0].fingerprint == fingerprint) {
            const test_key = &self.keys()[idx];
            // If you get a compile error on this line, it means that your generic eql
            // function is invalid for these parameters.

            if (ctx.eql(key, test_key.*)) {
                return GetOrPutResult{
                    .key_ptr = test_key,
                    .value_ptr = &self.values()[idx],
                    .found_existing = true,
                };
            }
        } else if (first_tombstone_idx == self.capacity() and metadata[0].isTombstone()) {
            first_tombstone_idx = idx;
        }

        limit -= 1;
        idx = (idx + 1) & mask;
        metadata = self.metadata.? + idx;
    }

    if (first_tombstone_idx < self.capacity()) {
        // Cheap try to lower probing lengths after deletions. Recycle a tombstone.
        idx = first_tombstone_idx;
        metadata = self.metadata.? + idx;
    }
    // We're using a slot previously free or a tombstone.
    self.available -= 1;

    metadata[0].fill(fingerprint);
    const new_key = &self.keys()[idx];
    const new_value = &self.values()[idx];
    new_key.* = undefined;
    new_value.* = undefined;
    self.size += 1;

    return GetOrPutResult{
        .key_ptr = new_key,
        .value_ptr = new_value,
        .found_existing = false,
    };
}

FunctiongetOrPutValue[src]

pub fn getOrPutValue(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!Entry

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn getOrPutValue(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!Entry {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutValueContext instead.");
    return self.getOrPutValueContext(allocator, key, value, undefined);
}

FunctiongetOrPutValueContext[src]

pub fn getOrPutValueContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!Entry

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn getOrPutValueContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!Entry {
    const res = try self.getOrPutAdapted(allocator, key, ctx);
    if (!res.found_existing) {
        res.key_ptr.* = key;
        res.value_ptr.* = value;
    }
    return Entry{ .key_ptr = res.key_ptr, .value_ptr = res.value_ptr };
}

Functioncontains[src]

pub fn contains(self: Self, key: K) bool

Return true if there is a value associated with key in the map.

Parameters

self: Self
key: K

Source Code

Source code
pub fn contains(self: Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call containsContext instead.");
    return self.containsContext(key, undefined);
}

FunctioncontainsContext[src]

pub fn containsContext(self: Self, key: K, ctx: Context) bool

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn containsContext(self: Self, key: K, ctx: Context) bool {
    return self.containsAdapted(key, ctx);
}

FunctioncontainsAdapted[src]

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool

Parameters

self: Self

Source Code

Source code
pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
    return self.getIndex(key, ctx) != null;
}

Functionremove[src]

pub fn remove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map, and this function returns true. Otherwise this function returns false.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key: K

Source Code

Source code
pub fn remove(self: *Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call removeContext instead.");
    return self.removeContext(key, undefined);
}

FunctionremoveContext[src]

pub fn removeContext(self: *Self, key: K, ctx: Context) bool

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn removeContext(self: *Self, key: K, ctx: Context) bool {
    return self.removeAdapted(key, ctx);
}

FunctionremoveAdapted[src]

pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self

Source Code

Source code
pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    if (self.getIndex(key, ctx)) |idx| {
        self.removeByIndex(idx);
        return true;
    }

    return false;
}

FunctionremoveByPtr[src]

pub fn removeByPtr(self: *Self, key_ptr: *K) void

Delete the entry with key pointed to by key_ptr from the hash map. key_ptr is assumed to be a valid pointer to a key that is present in the hash map.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key_ptr: *K

Source Code

Source code
pub fn removeByPtr(self: *Self, key_ptr: *K) void {
    // TODO: replace with pointer subtraction once supported by zig
    // if @sizeOf(K) == 0 then there is at most one item in the hash
    // map, which is assumed to exist as key_ptr must be valid.  This
    // item must be at index 0.
    const idx = if (@sizeOf(K) > 0)
        (@intFromPtr(key_ptr) - @intFromPtr(self.keys())) / @sizeOf(K)
    else
        0;

    self.removeByIndex(idx);
}
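
Because the index is recovered by pointer arithmetic, this performs no further hash or eql calls. An illustrative pairing with getKeyPtr:

const std = @import("std");

test "removeByPtr deletes through a key pointer" {
    const gpa = std.testing.allocator;
    var map: std.AutoHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 5, 50);

    // One lookup to find the key, then removal with no re-hashing.
    if (map.getKeyPtr(5)) |key_ptr| map.removeByPtr(key_ptr);
    try std.testing.expect(!map.contains(5));
}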

Functionclone[src]

pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call cloneContext instead.");
    return self.cloneContext(allocator, @as(Context, undefined));
}

FunctioncloneContext[src]

pub fn cloneContext(self: Self, allocator: Allocator, new_ctx: anytype) Allocator.Error!HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage)

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn cloneContext(self: Self, allocator: Allocator, new_ctx: anytype) Allocator.Error!HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage) {
    var other: HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage) = .empty;
    if (self.size == 0)
        return other;

    const new_cap = capacityForSize(self.size);
    try other.allocate(allocator, new_cap);
    other.initMetadatas();
    other.available = @truncate((new_cap * max_load_percentage) / 100);

    var i: Size = 0;
    var metadata = self.metadata.?;
    const keys_ptr = self.keys();
    const values_ptr = self.values();
    while (i < self.capacity()) : (i += 1) {
        if (metadata[i].isUsed()) {
            other.putAssumeCapacityNoClobberContext(keys_ptr[i], values_ptr[i], new_ctx);
            if (other.size == self.size)
                break;
        }
    }

    return other;
}

Functionmove[src]

pub fn move(self: *Self) Self

Sets the map to an empty state, making deinitialization a no-op, and returns a copy of the original.

Parameters

self: *Self

Source Code

Source code
pub fn move(self: *Self) Self {
    self.pointer_stability.assertUnlocked();
    const result = self.*;
    self.* = .empty;
    return result;
}

Functionrehash[src]

pub fn rehash(self: *Self, ctx: anytype) void

Rehash the map, in-place.

Over time, with the current tombstone-based implementation, a HashMap can become fragmented: tombstone entries build up and degrade performance through excessive probing. The kind of pattern that might cause this is a long-lived HashMap with repeated inserts and deletes.

After this function is called, there will be no tombstones in the HashMap: every entry has been rehashed, and any existing key/value pointers into the HashMap are invalidated.

Parameters

self: *Self

Source Code

Source code
pub fn rehash(self: *Self, ctx: anytype) void {
    const mask = self.capacity() - 1;

    var metadata = self.metadata.?;
    var keys_ptr = self.keys();
    var values_ptr = self.values();
    var curr: Size = 0;

    // While we are re-hashing every slot, we will use the
    // fingerprint to mark used buckets as being used and either free
    // (needing to be rehashed) or tombstone (already rehashed).

    while (curr < self.capacity()) : (curr += 1) {
        metadata[curr].fingerprint = Metadata.free;
    }

    // Now iterate over all the buckets, rehashing them

    curr = 0;
    while (curr < self.capacity()) {
        if (!metadata[curr].isUsed()) {
            assert(metadata[curr].isFree());
            curr += 1;
            continue;
        }

        const hash = ctx.hash(keys_ptr[curr]);
        const fingerprint = Metadata.takeFingerprint(hash);
        var idx = @as(usize, @truncate(hash & mask));

        // For each bucket, rehash to an index:
        // 1) before the cursor, probed into a free slot, or
        // 2) equal to the cursor, no need to move, or
        // 3) ahead of the cursor, probing over already rehashed

        while ((idx < curr and metadata[idx].isUsed()) or
            (idx > curr and metadata[idx].fingerprint == Metadata.tombstone))
        {
            idx = (idx + 1) & mask;
        }

        if (idx < curr) {
            assert(metadata[idx].isFree());
            metadata[idx].fill(fingerprint);
            keys_ptr[idx] = keys_ptr[curr];
            values_ptr[idx] = values_ptr[curr];

            metadata[curr].used = 0;
            assert(metadata[curr].isFree());
            keys_ptr[curr] = undefined;
            values_ptr[curr] = undefined;

            curr += 1;
        } else if (idx == curr) {
            metadata[idx].fingerprint = fingerprint;
            curr += 1;
        } else {
            assert(metadata[idx].fingerprint != Metadata.tombstone);
            metadata[idx].fingerprint = Metadata.tombstone;
            if (metadata[idx].isUsed()) {
                std.mem.swap(K, &keys_ptr[curr], &keys_ptr[idx]);
                std.mem.swap(V, &values_ptr[curr], &values_ptr[idx]);
            } else {
                metadata[idx].used = 1;
                keys_ptr[idx] = keys_ptr[curr];
                values_ptr[idx] = values_ptr[curr];

                metadata[curr].fingerprint = Metadata.free;
                metadata[curr].used = 0;
                keys_ptr[curr] = undefined;
                values_ptr[curr] = undefined;

                curr += 1;
            }
        }
    }
}

Source Code

Source code
pub fn AutoHashMapUnmanaged(comptime K: type, comptime V: type) type {
    return HashMapUnmanaged(K, V, AutoContext(K), default_max_load_percentage);
}

Type FunctionBoundedArray[src]

A structure with an array and a length, that can be used as a slice.

Useful to pass around small arrays whose exact size is only known at runtime, but whose maximum size is known at comptime, without requiring an Allocator.

var actual_size: usize = 32;
var a = try BoundedArray(u8, 64).init(actual_size);
var slice = a.slice(); // a slice of the 64-byte array
var a_clone = a; // creates a copy - the structure doesn't use any internal pointers

Parameters

T: type
buffer_capacity: usize

Types

TypeWriter[src]

Source Code

Source code
pub const Writer = if (T != u8)
    @compileError("The Writer interface is only defined for BoundedArray(u8, ...) " ++
        "but the given type is BoundedArray(" ++ @typeName(T) ++ ", ...)")
else
    std.io.Writer(*Self, error{Overflow}, appendWrite)

Fields

buffer: [buffer_capacity]T align(alignment) = undefined
len: usize = 0

Functions

Functioninit[src]

pub fn init(len: usize) error{Overflow}!Self

Set the actual length of the slice. Returns error.Overflow if it exceeds the length of the backing array.

Parameters

len: usize

Source Code

Source code
pub fn init(len: usize) error{Overflow}!Self {
    if (len > buffer_capacity) return error.Overflow;
    return Self{ .len = len };
}

Functionslice[src]

pub fn slice(self: anytype) switch (@TypeOf(&self.buffer)) { *align(alignment) [buffer_capacity]T => []align(alignment) T, *align(alignment) const [buffer_capacity]T => []align(alignment) const T, else => unreachable, }

View the internal array as a slice whose size was previously set.

Source Code

Source code
pub fn slice(self: anytype) switch (@TypeOf(&self.buffer)) {
    *align(alignment) [buffer_capacity]T => []align(alignment) T,
    *align(alignment) const [buffer_capacity]T => []align(alignment) const T,
    else => unreachable,
} {
    return self.buffer[0..self.len];
}

FunctionconstSlice[src]

pub fn constSlice(self: *const Self) []align(alignment) const T

View the internal array as a constant slice whose size was previously set.

Parameters

self: *const Self

Source Code

Source code
pub fn constSlice(self: *const Self) []align(alignment) const T {
    return self.slice();
}

Functionresize[src]

pub fn resize(self: *Self, len: usize) error{Overflow}!void

Adjust the slice's length to len. Does not initialize any added items.

Parameters

self: *Self
len: usize

Source Code

Source code
pub fn resize(self: *Self, len: usize) error{Overflow}!void {
    if (len > buffer_capacity) return error.Overflow;
    self.len = len;
}

Functionclear[src]

pub fn clear(self: *Self) void

Remove all elements from the slice.

Parameters

self: *Self

Source Code

Source code
pub fn clear(self: *Self) void {
    self.len = 0;
}

FunctionfromSlice[src]

pub fn fromSlice(m: []const T) error{Overflow}!Self

Copy the content of an existing slice.

Parameters

m: []const T

Source Code

Source code
pub fn fromSlice(m: []const T) error{Overflow}!Self {
    var list = try init(m.len);
    @memcpy(list.slice(), m);
    return list;
}

Functionget[src]

pub fn get(self: Self, i: usize) T

Return the element at index i of the slice.

Parameters

self: Self
i: usize

Source Code

Source code
pub fn get(self: Self, i: usize) T {
    return self.constSlice()[i];
}

Functionset[src]

pub fn set(self: *Self, i: usize, item: T) void

Set the value of the element at index i of the slice.

Parameters

self: *Self
i: usize
item: T

Source Code

Source code
pub fn set(self: *Self, i: usize, item: T) void {
    self.slice()[i] = item;
}

Functioncapacity[src]

pub fn capacity(self: Self) usize

Return the maximum length of the slice.

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) usize {
    return self.buffer.len;
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: Self, additional_count: usize) error{Overflow}!void

Check that the slice can hold at least additional_count additional items, returning error.Overflow otherwise.

Parameters

self: Self
additional_count: usize

Source Code

Source code
pub fn ensureUnusedCapacity(self: Self, additional_count: usize) error{Overflow}!void {
    if (self.len + additional_count > buffer_capacity) {
        return error.Overflow;
    }
}

FunctionaddOne[src]

pub fn addOne(self: *Self) error{Overflow}!*T

Increase length by 1, returning a pointer to the new item.

Parameters

self: *Self

Source Code

Source code
pub fn addOne(self: *Self) error{Overflow}!*T {
    try self.ensureUnusedCapacity(1);
    return self.addOneAssumeCapacity();
}

FunctionaddOneAssumeCapacity[src]

pub fn addOneAssumeCapacity(self: *Self) *T

Increase length by 1, returning a pointer to the new item. Asserts that there is space for the new item.

Parameters

self: *Self

Source Code

Source code
pub fn addOneAssumeCapacity(self: *Self) *T {
    assert(self.len < buffer_capacity);
    self.len += 1;
    return &self.slice()[self.len - 1];
}

FunctionaddManyAsArray[src]

pub fn addManyAsArray(self: *Self, comptime n: usize) error{Overflow}!*align(alignment) [n]T

Resize the slice, adding n new elements, which have undefined values. The return value is a pointer to the array of uninitialized elements.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsArray(self: *Self, comptime n: usize) error{Overflow}!*align(alignment) [n]T {
    const prev_len = self.len;
    try self.resize(self.len + n);
    return self.slice()[prev_len..][0..n];
}

FunctionaddManyAsSlice[src]

pub fn addManyAsSlice(self: *Self, n: usize) error{Overflow}![]align(alignment) T

Resize the slice, adding n new elements, which have undefined values. The return value is a slice pointing to the uninitialized elements.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsSlice(self: *Self, n: usize) error{Overflow}![]align(alignment) T {
    const prev_len = self.len;
    try self.resize(self.len + n);
    return self.slice()[prev_len..][0..n];
}

Functionpop[src]

pub fn pop(self: *Self) ?T

Remove and return the last element from the slice, or return null if the slice is empty.

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?T {
    if (self.len == 0) return null;
    const item = self.get(self.len - 1);
    self.len -= 1;
    return item;
}

FunctionunusedCapacitySlice[src]

pub fn unusedCapacitySlice(self: *Self) []align(alignment) T

Return a slice of only the extra capacity after the current items. This can be useful for writing directly into it. Note that such an operation must be followed by a call to resize().

Parameters

self: *Self

Source Code

Source code
pub fn unusedCapacitySlice(self: *Self) []align(alignment) T {
    return self.buffer[self.len..];
}
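A short sketch of the write-then-resize pattern described above (illustrative values):

var a = try BoundedArray(u8, 8).init(0);
const spare = a.unusedCapacitySlice();
spare[0] = 42;
spare[1] = 43;
try a.resize(2); // commit the two bytes written above; a.slice() is now {42, 43}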

Functioninsert[src]

pub fn insert( self: *Self, i: usize, item: T, ) error{Overflow}!void

Insert item at index i by moving slice[i .. slice.len] to make room. This operation is O(N).

Parameters

self: *Self
i: usize
item: T

Source Code

Source code
pub fn insert(
    self: *Self,
    i: usize,
    item: T,
) error{Overflow}!void {
    if (i > self.len) {
        return error.Overflow;
    }
    _ = try self.addOne();
    var s = self.slice();
    mem.copyBackwards(T, s[i + 1 .. s.len], s[i .. s.len - 1]);
    self.buffer[i] = item;
}

FunctioninsertSlice[src]

pub fn insertSlice(self: *Self, i: usize, items: []const T) error{Overflow}!void

Insert slice items at index i by moving slice[i .. slice.len] to make room. This operation is O(N).

Parameters

self: *Self
i: usize
items: []const T

Source Code

Source code
pub fn insertSlice(self: *Self, i: usize, items: []const T) error{Overflow}!void {
    try self.ensureUnusedCapacity(items.len);
    self.len += items.len;
    mem.copyBackwards(T, self.slice()[i + items.len .. self.len], self.constSlice()[i .. self.len - items.len]);
    @memcpy(self.slice()[i..][0..items.len], items);
}

FunctionreplaceRange[src]

pub fn replaceRange( self: *Self, start: usize, len: usize, new_items: []const T, ) error{Overflow}!void

Replace the range of elements slice[start..][0..len] with new_items. Grows the slice if len < new_items.len; shrinks it if len > new_items.len.

Parameters

self: *Self
start: usize
len: usize
new_items: []const T

Source Code

Source code
pub fn replaceRange(
    self: *Self,
    start: usize,
    len: usize,
    new_items: []const T,
) error{Overflow}!void {
    const after_range = start + len;
    var range = self.slice()[start..after_range];

    if (range.len == new_items.len) {
        @memcpy(range[0..new_items.len], new_items);
    } else if (range.len < new_items.len) {
        const first = new_items[0..range.len];
        const rest = new_items[range.len..];
        @memcpy(range[0..first.len], first);
        try self.insertSlice(after_range, rest);
    } else {
        @memcpy(range[0..new_items.len], new_items);
        const after_subrange = start + new_items.len;
        for (self.constSlice()[after_range..], 0..) |item, i| {
            self.slice()[after_subrange..][i] = item;
        }
        self.len -= len - new_items.len;
    }
}
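A small worked example of the growing and shrinking cases (illustrative values):

var a = try BoundedArray(u8, 8).fromSlice(&.{ 1, 2, 3, 4 });
try a.replaceRange(1, 2, &.{9}); // shrinks: {1, 2, 3, 4} -> {1, 9, 4}
try a.replaceRange(1, 1, &.{ 7, 8 }); // grows: {1, 9, 4} -> {1, 7, 8, 4}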

Functionappend[src]

pub fn append(self: *Self, item: T) error{Overflow}!void

Extend the slice by 1 element.

Parameters

self: *Self
item: T

Source Code

Source code
pub fn append(self: *Self, item: T) error{Overflow}!void {
    const new_item_ptr = try self.addOne();
    new_item_ptr.* = item;
}

FunctionappendAssumeCapacity[src]

pub fn appendAssumeCapacity(self: *Self, item: T) void

Extend the slice by 1 element, asserting the capacity is already enough to store the new item.

Parameters

self: *Self
item: T

Source Code

Source code
pub fn appendAssumeCapacity(self: *Self, item: T) void {
    const new_item_ptr = self.addOneAssumeCapacity();
    new_item_ptr.* = item;
}

FunctionorderedRemove[src]

pub fn orderedRemove(self: *Self, i: usize) T

Remove the element at index i, shift elements after index i forward, and return the removed element. Asserts the slice has at least one item. This operation is O(N).

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn orderedRemove(self: *Self, i: usize) T {
    const newlen = self.len - 1;
    if (newlen == i) return self.pop().?;
    const old_item = self.get(i);
    for (self.slice()[i..newlen], 0..) |*b, j| b.* = self.get(i + 1 + j);
    self.set(newlen, undefined);
    self.len = newlen;
    return old_item;
}

FunctionswapRemove[src]

pub fn swapRemove(self: *Self, i: usize) T

Remove the element at the specified index and return it. The empty slot is filled from the end of the slice. This operation is O(1).

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn swapRemove(self: *Self, i: usize) T {
    if (self.len - 1 == i) return self.pop().?;
    const old_item = self.get(i);
    self.set(i, self.pop().?);
    return old_item;
}

FunctionappendSlice[src]

pub fn appendSlice(self: *Self, items: []const T) error{Overflow}!void

Append the slice of items to the slice.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn appendSlice(self: *Self, items: []const T) error{Overflow}!void {
    try self.ensureUnusedCapacity(items.len);
    self.appendSliceAssumeCapacity(items);
}

FunctionappendSliceAssumeCapacity[src]

pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void

Append the slice of items to the slice, asserting the capacity is already enough to store the new items.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void {
    const old_len = self.len;
    self.len += items.len;
    @memcpy(self.slice()[old_len..][0..items.len], items);
}

FunctionappendNTimes[src]

pub fn appendNTimes(self: *Self, value: T, n: usize) error{Overflow}!void

Append a value to the slice n times. Returns error.Overflow if the new length would exceed the capacity.

Parameters

self: *Self
value: T
n: usize

Source Code

Source code
pub fn appendNTimes(self: *Self, value: T, n: usize) error{Overflow}!void {
    const old_len = self.len;
    try self.resize(old_len + n);
    @memset(self.slice()[old_len..self.len], value);
}

FunctionappendNTimesAssumeCapacity[src]

pub fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void

Append a value to the slice n times. Asserts the capacity is enough.

Parameters

self: *Self
value: T
n: usize

Source Code

Source code
pub fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void {
    const old_len = self.len;
    self.len += n;
    assert(self.len <= buffer_capacity);
    @memset(self.slice()[old_len..self.len], value);
}

Functionwriter[src]

pub fn writer(self: *Self) Writer

Initializes a writer which will write into the array.

Parameters

self: *Self

Source Code

Source code
pub fn writer(self: *Self) Writer {
    return .{ .context = self };
}
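A sketch of formatted output through the writer; print comes from the generic std.io.Writer interface:

var a = try BoundedArray(u8, 32).init(0);
try a.writer().print("{d}+{d}={d}", .{ 2, 3, 5 });
// a.constSlice() now equals "2+3=5"; the writer returns error.Overflow
// if the formatted output would exceed the 32-byte capacity.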

Example Usage

test BoundedArray {
    var a = try BoundedArray(u8, 64).init(32);

    try testing.expectEqual(a.capacity(), 64);
    try testing.expectEqual(a.slice().len, 32);
    try testing.expectEqual(a.constSlice().len, 32);

    try a.resize(48);
    try testing.expectEqual(a.len, 48);

    const x = [_]u8{1} ** 10;
    a = try BoundedArray(u8, 64).fromSlice(&x);
    try testing.expectEqualSlices(u8, &x, a.constSlice());

    var a2 = a;
    try testing.expectEqualSlices(u8, a.constSlice(), a2.constSlice());
    a2.set(0, 0);
    try testing.expect(a.get(0) != a2.get(0));

    try testing.expectError(error.Overflow, a.resize(100));
    try testing.expectError(error.Overflow, BoundedArray(u8, x.len - 1).fromSlice(&x));

    try a.resize(0);
    try a.ensureUnusedCapacity(a.capacity());
    (try a.addOne()).* = 0;
    try a.ensureUnusedCapacity(a.capacity() - 1);
    try testing.expectEqual(a.len, 1);

    const uninitialized = try a.addManyAsArray(4);
    try testing.expectEqual(uninitialized.len, 4);
    try testing.expectEqual(a.len, 5);

    try a.append(0xff);
    try testing.expectEqual(a.len, 6);
    try testing.expectEqual(a.pop(), 0xff);

    a.appendAssumeCapacity(0xff);
    try testing.expectEqual(a.len, 6);
    try testing.expectEqual(a.pop(), 0xff);

    try a.resize(1);
    try testing.expectEqual(a.pop(), 0);
    try testing.expectEqual(a.pop(), null);
    const unused = a.unusedCapacitySlice();
    @memset(unused[0..8], 2);
    unused[8] = 3;
    unused[9] = 4;
    try testing.expectEqual(unused.len, a.capacity());
    try a.resize(10);

    try a.insert(5, 0xaa);
    try testing.expectEqual(a.len, 11);
    try testing.expectEqual(a.get(5), 0xaa);
    try testing.expectEqual(a.get(9), 3);
    try testing.expectEqual(a.get(10), 4);

    try a.insert(11, 0xbb);
    try testing.expectEqual(a.len, 12);
    try testing.expectEqual(a.pop(), 0xbb);

    try a.appendSlice(&x);
    try testing.expectEqual(a.len, 11 + x.len);

    try a.appendNTimes(0xbb, 5);
    try testing.expectEqual(a.len, 11 + x.len + 5);
    try testing.expectEqual(a.pop(), 0xbb);

    a.appendNTimesAssumeCapacity(0xcc, 5);
    try testing.expectEqual(a.len, 11 + x.len + 5 - 1 + 5);
    try testing.expectEqual(a.pop(), 0xcc);

    try testing.expectEqual(a.len, 29);
    try a.replaceRange(1, 20, &x);
    try testing.expectEqual(a.len, 29 + x.len - 20);

    try a.insertSlice(0, &x);
    try testing.expectEqual(a.len, 29 + x.len - 20 + x.len);

    try a.replaceRange(1, 5, &x);
    try testing.expectEqual(a.len, 29 + x.len - 20 + x.len + x.len - 5);

    try a.append(10);
    try testing.expectEqual(a.pop(), 10);

    try a.append(20);
    const removed = a.orderedRemove(5);
    try testing.expectEqual(removed, 1);
    try testing.expectEqual(a.len, 34);

    a.set(0, 0xdd);
    a.set(a.len - 1, 0xee);
    const swapped = a.swapRemove(0);
    try testing.expectEqual(swapped, 0xdd);
    try testing.expectEqual(a.get(0), 0xee);

    const added_slice = try a.addManyAsSlice(3);
    try testing.expectEqual(added_slice.len, 3);
    try testing.expectEqual(a.len, 36);

    while (a.pop()) |_| {}
    const w = a.writer();
    const s = "hello, this is a test string";
    try w.writeAll(s);
    try testing.expectEqualStrings(s, a.constSlice());
}

Source Code

Source code
pub fn BoundedArray(comptime T: type, comptime buffer_capacity: usize) type {
    return BoundedArrayAligned(T, @alignOf(T), buffer_capacity);
}

Type FunctionBoundedArrayAligned[src]

A structure with an array, a length, and an alignment that can be used as a slice.

Useful to pass around small explicitly-aligned arrays whose exact size is only known at runtime, but whose maximum size is known at comptime, without requiring an Allocator.
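For instance (a minimal sketch with an illustrative 16-byte alignment):

var a = try BoundedArrayAligned(u8, 16, 64).init(0);
try a.appendSlice("abc");
// The backing buffer, and therefore slice().ptr, is 16-byte aligned.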

Parameters

T: type
alignment: u29
buffer_capacity: usize

Types

TypeWriter[src]

Source Code

Source code
pub const Writer = if (T != u8)
    @compileError("The Writer interface is only defined for BoundedArray(u8, ...) " ++
        "but the given type is BoundedArray(" ++ @typeName(T) ++ ", ...)")
else
    std.io.Writer(*Self, error{Overflow}, appendWrite)

Fields

buffer: [buffer_capacity]T align(alignment) = undefined
len: usize = 0

Functions

Functioninit[src]

pub fn init(len: usize) error{Overflow}!Self

Set the actual length of the slice. Returns error.Overflow if it exceeds the length of the backing array.

Parameters

len: usize

Source Code

Source code
pub fn init(len: usize) error{Overflow}!Self {
    if (len > buffer_capacity) return error.Overflow;
    return Self{ .len = len };
}

Functionslice[src]

pub fn slice(self: anytype) switch (@TypeOf(&self.buffer)) { *align(alignment) [buffer_capacity]T => []align(alignment) T, *align(alignment) const [buffer_capacity]T => []align(alignment) const T, else => unreachable, }

View the internal array as a slice whose size was previously set.

Source Code

Source code
pub fn slice(self: anytype) switch (@TypeOf(&self.buffer)) {
    *align(alignment) [buffer_capacity]T => []align(alignment) T,
    *align(alignment) const [buffer_capacity]T => []align(alignment) const T,
    else => unreachable,
} {
    return self.buffer[0..self.len];
}

FunctionconstSlice[src]

pub fn constSlice(self: *const Self) []align(alignment) const T

View the internal array as a constant slice whose size was previously set.

Parameters

self: *const Self

Source Code

Source code
pub fn constSlice(self: *const Self) []align(alignment) const T {
    return self.slice();
}

Functionresize[src]

pub fn resize(self: *Self, len: usize) error{Overflow}!void

Adjust the slice's length to len. Does not initialize any added items.

Parameters

self: *Self
len: usize

Source Code

Source code
pub fn resize(self: *Self, len: usize) error{Overflow}!void {
    if (len > buffer_capacity) return error.Overflow;
    self.len = len;
}

Functionclear[src]

pub fn clear(self: *Self) void

Remove all elements from the slice.

Parameters

self: *Self

Source Code

Source code
pub fn clear(self: *Self) void {
    self.len = 0;
}

FunctionfromSlice[src]

pub fn fromSlice(m: []const T) error{Overflow}!Self

Copy the content of an existing slice.

Parameters

m: []const T

Source Code

Source code
pub fn fromSlice(m: []const T) error{Overflow}!Self {
    var list = try init(m.len);
    @memcpy(list.slice(), m);
    return list;
}

Functionget[src]

pub fn get(self: Self, i: usize) T

Return the element at index i of the slice.

Parameters

self: Self
i: usize

Source Code

Source code
pub fn get(self: Self, i: usize) T {
    return self.constSlice()[i];
}

Functionset[src]

pub fn set(self: *Self, i: usize, item: T) void

Set the value of the element at index i of the slice.

Parameters

self: *Self
i: usize
item: T

Source Code

Source code
pub fn set(self: *Self, i: usize, item: T) void {
    self.slice()[i] = item;
}

Functioncapacity[src]

pub fn capacity(self: Self) usize

Return the maximum length of the slice.

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) usize {
    return self.buffer.len;
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: Self, additional_count: usize) error{Overflow}!void

Check that the slice can hold at least additional_count additional items, returning error.Overflow otherwise.

Parameters

self: Self
additional_count: usize

Source Code

Source code
pub fn ensureUnusedCapacity(self: Self, additional_count: usize) error{Overflow}!void {
    if (self.len + additional_count > buffer_capacity) {
        return error.Overflow;
    }
}

FunctionaddOne[src]

pub fn addOne(self: *Self) error{Overflow}!*T

Increase length by 1, returning a pointer to the new item.

Parameters

self: *Self

Source Code

Source code
pub fn addOne(self: *Self) error{Overflow}!*T {
    try self.ensureUnusedCapacity(1);
    return self.addOneAssumeCapacity();
}

FunctionaddOneAssumeCapacity[src]

pub fn addOneAssumeCapacity(self: *Self) *T

Increase length by 1, returning a pointer to the new item. Asserts that there is space for the new item.

Parameters

self: *Self

Source Code

Source code
pub fn addOneAssumeCapacity(self: *Self) *T {
    assert(self.len < buffer_capacity);
    self.len += 1;
    return &self.slice()[self.len - 1];
}

FunctionaddManyAsArray[src]

pub fn addManyAsArray(self: *Self, comptime n: usize) error{Overflow}!*align(alignment) [n]T

Resize the slice, adding n new elements, which have undefined values. The return value is a pointer to the array of uninitialized elements.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsArray(self: *Self, comptime n: usize) error{Overflow}!*align(alignment) [n]T {
    const prev_len = self.len;
    try self.resize(self.len + n);
    return self.slice()[prev_len..][0..n];
}

FunctionaddManyAsSlice[src]

pub fn addManyAsSlice(self: *Self, n: usize) error{Overflow}![]align(alignment) T

Resize the slice, adding n new elements, which have undefined values. The return value is a slice pointing to the uninitialized elements.

Parameters

self: *Self
n: usize

Source Code

Source code
pub fn addManyAsSlice(self: *Self, n: usize) error{Overflow}![]align(alignment) T {
    const prev_len = self.len;
    try self.resize(self.len + n);
    return self.slice()[prev_len..][0..n];
}

Functionpop[src]

pub fn pop(self: *Self) ?T

Remove and return the last element from the slice, or return null if the slice is empty.

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?T {
    if (self.len == 0) return null;
    const item = self.get(self.len - 1);
    self.len -= 1;
    return item;
}

FunctionunusedCapacitySlice[src]

pub fn unusedCapacitySlice(self: *Self) []align(alignment) T

Return a slice of only the extra capacity after the current items. This can be useful for writing directly into it. Note that such an operation must be followed by a call to resize().

Parameters

self: *Self

Source Code

Source code
pub fn unusedCapacitySlice(self: *Self) []align(alignment) T {
    return self.buffer[self.len..];
}

Functioninsert[src]

pub fn insert( self: *Self, i: usize, item: T, ) error{Overflow}!void

Insert item at index i by moving slice[i .. slice.len] to make room. This operation is O(N).

Parameters

self: *Self
i: usize
item: T

Source Code

Source code
pub fn insert(
    self: *Self,
    i: usize,
    item: T,
) error{Overflow}!void {
    if (i > self.len) {
        return error.Overflow;
    }
    _ = try self.addOne();
    var s = self.slice();
    mem.copyBackwards(T, s[i + 1 .. s.len], s[i .. s.len - 1]);
    self.buffer[i] = item;
}

FunctioninsertSlice[src]

pub fn insertSlice(self: *Self, i: usize, items: []const T) error{Overflow}!void

Insert slice items at index i by moving slice[i .. slice.len] to make room. This operation is O(N).

Parameters

self: *Self
i: usize
items: []const T

Source Code

Source code
pub fn insertSlice(self: *Self, i: usize, items: []const T) error{Overflow}!void {
    try self.ensureUnusedCapacity(items.len);
    self.len += items.len;
    mem.copyBackwards(T, self.slice()[i + items.len .. self.len], self.constSlice()[i .. self.len - items.len]);
    @memcpy(self.slice()[i..][0..items.len], items);
}

FunctionreplaceRange[src]

pub fn replaceRange( self: *Self, start: usize, len: usize, new_items: []const T, ) error{Overflow}!void

Replace the range of elements slice[start..][0..len] with new_items. Grows the slice if len < new_items.len; shrinks it if len > new_items.len.

Parameters

self: *Self
start: usize
len: usize
new_items: []const T

Source Code

Source code
pub fn replaceRange(
    self: *Self,
    start: usize,
    len: usize,
    new_items: []const T,
) error{Overflow}!void {
    const after_range = start + len;
    var range = self.slice()[start..after_range];

    if (range.len == new_items.len) {
        @memcpy(range[0..new_items.len], new_items);
    } else if (range.len < new_items.len) {
        const first = new_items[0..range.len];
        const rest = new_items[range.len..];
        @memcpy(range[0..first.len], first);
        try self.insertSlice(after_range, rest);
    } else {
        @memcpy(range[0..new_items.len], new_items);
        const after_subrange = start + new_items.len;
        for (self.constSlice()[after_range..], 0..) |item, i| {
            self.slice()[after_subrange..][i] = item;
        }
        self.len -= len - new_items.len;
    }
}

Functionappend[src]

pub fn append(self: *Self, item: T) error{Overflow}!void

Extend the slice by 1 element.

Parameters

self: *Self
item: T

Source Code

Source code
pub fn append(self: *Self, item: T) error{Overflow}!void {
    const new_item_ptr = try self.addOne();
    new_item_ptr.* = item;
}

FunctionappendAssumeCapacity[src]

pub fn appendAssumeCapacity(self: *Self, item: T) void

Extend the slice by 1 element, asserting the capacity is already enough to store the new item.

Parameters

self: *Self
item: T

Source Code

Source code
pub fn appendAssumeCapacity(self: *Self, item: T) void {
    const new_item_ptr = self.addOneAssumeCapacity();
    new_item_ptr.* = item;
}

FunctionorderedRemove[src]

pub fn orderedRemove(self: *Self, i: usize) T

Remove the element at index i, shift elements after index i forward, and return the removed element. Asserts the slice has at least one item. This operation is O(N).

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn orderedRemove(self: *Self, i: usize) T {
    const newlen = self.len - 1;
    if (newlen == i) return self.pop().?;
    const old_item = self.get(i);
    for (self.slice()[i..newlen], 0..) |*b, j| b.* = self.get(i + 1 + j);
    self.set(newlen, undefined);
    self.len = newlen;
    return old_item;
}

FunctionswapRemove[src]

pub fn swapRemove(self: *Self, i: usize) T

Remove the element at the specified index and return it. The empty slot is filled from the end of the slice. This operation is O(1).

Parameters

self: *Self
i: usize

Source Code

Source code
pub fn swapRemove(self: *Self, i: usize) T {
    if (self.len - 1 == i) return self.pop().?;
    const old_item = self.get(i);
    self.set(i, self.pop().?);
    return old_item;
}

FunctionappendSlice[src]

pub fn appendSlice(self: *Self, items: []const T) error{Overflow}!void

Append the slice of items to the slice.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn appendSlice(self: *Self, items: []const T) error{Overflow}!void {
    try self.ensureUnusedCapacity(items.len);
    self.appendSliceAssumeCapacity(items);
}

FunctionappendSliceAssumeCapacity[src]

pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void

Append the slice of items to the slice, asserting the capacity is already enough to store the new items.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void {
    const old_len = self.len;
    self.len += items.len;
    @memcpy(self.slice()[old_len..][0..items.len], items);
}

FunctionappendNTimes[src]

pub fn appendNTimes(self: *Self, value: T, n: usize) error{Overflow}!void

Append a value to the slice n times. Returns error.Overflow if the new length would exceed the capacity.

Parameters

self: *Self
value: T
n: usize

Source Code

Source code
pub fn appendNTimes(self: *Self, value: T, n: usize) error{Overflow}!void {
    const old_len = self.len;
    try self.resize(old_len + n);
    @memset(self.slice()[old_len..self.len], value);
}

FunctionappendNTimesAssumeCapacity[src]

pub fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void

Append a value to the slice n times. Asserts the capacity is enough.

Parameters

self: *Self
value: T
n: usize

Source Code

Source code
pub fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void {
    const old_len = self.len;
    self.len += n;
    assert(self.len <= buffer_capacity);
    @memset(self.slice()[old_len..self.len], value);
}

Functionwriter[src]

pub fn writer(self: *Self) Writer

Initializes a writer which will write into the array.

Parameters

self: *Self

Source Code

Source code
pub fn writer(self: *Self) Writer {
    return .{ .context = self };
}

Source Code

Source code
pub fn BoundedArrayAligned(
    comptime T: type,
    comptime alignment: u29,
    comptime buffer_capacity: usize,
) type {
    return struct {
        const Self = @This();
        buffer: [buffer_capacity]T align(alignment) = undefined,
        len: usize = 0,

        /// Set the actual length of the slice.
        /// Returns error.Overflow if it exceeds the length of the backing array.
        pub fn init(len: usize) error{Overflow}!Self {
            if (len > buffer_capacity) return error.Overflow;
            return Self{ .len = len };
        }

        /// View the internal array as a slice whose size was previously set.
        pub fn slice(self: anytype) switch (@TypeOf(&self.buffer)) {
            *align(alignment) [buffer_capacity]T => []align(alignment) T,
            *align(alignment) const [buffer_capacity]T => []align(alignment) const T,
            else => unreachable,
        } {
            return self.buffer[0..self.len];
        }

        /// View the internal array as a constant slice whose size was previously set.
        pub fn constSlice(self: *const Self) []align(alignment) const T {
            return self.slice();
        }

        /// Adjust the slice's length to `len`.
        /// Does not initialize added items if any.
        pub fn resize(self: *Self, len: usize) error{Overflow}!void {
            if (len > buffer_capacity) return error.Overflow;
            self.len = len;
        }

        /// Remove all elements from the slice.
        pub fn clear(self: *Self) void {
            self.len = 0;
        }

        /// Copy the content of an existing slice.
        pub fn fromSlice(m: []const T) error{Overflow}!Self {
            var list = try init(m.len);
            @memcpy(list.slice(), m);
            return list;
        }

        /// Return the element at index `i` of the slice.
        pub fn get(self: Self, i: usize) T {
            return self.constSlice()[i];
        }

        /// Set the value of the element at index `i` of the slice.
        pub fn set(self: *Self, i: usize, item: T) void {
            self.slice()[i] = item;
        }

        /// Return the maximum length of a slice.
        pub fn capacity(self: Self) usize {
            return self.buffer.len;
        }

        /// Check that the slice can hold at least `additional_count` items.
        pub fn ensureUnusedCapacity(self: Self, additional_count: usize) error{Overflow}!void {
            if (self.len + additional_count > buffer_capacity) {
                return error.Overflow;
            }
        }

        /// Increase length by 1, returning a pointer to the new item.
        pub fn addOne(self: *Self) error{Overflow}!*T {
            try self.ensureUnusedCapacity(1);
            return self.addOneAssumeCapacity();
        }

        /// Increase length by 1, returning pointer to the new item.
        /// Asserts that there is space for the new item.
        pub fn addOneAssumeCapacity(self: *Self) *T {
            assert(self.len < buffer_capacity);
            self.len += 1;
            return &self.slice()[self.len - 1];
        }

        /// Resize the slice, adding `n` new elements, which have `undefined` values.
        /// The return value is a pointer to the array of uninitialized elements.
        pub fn addManyAsArray(self: *Self, comptime n: usize) error{Overflow}!*align(alignment) [n]T {
            const prev_len = self.len;
            try self.resize(self.len + n);
            return self.slice()[prev_len..][0..n];
        }

        /// Resize the slice, adding `n` new elements, which have `undefined` values.
        /// The return value is a slice pointing to the uninitialized elements.
        pub fn addManyAsSlice(self: *Self, n: usize) error{Overflow}![]align(alignment) T {
            const prev_len = self.len;
            try self.resize(self.len + n);
            return self.slice()[prev_len..][0..n];
        }

        /// Remove and return the last element from the slice, or return `null` if the slice is empty.
        pub fn pop(self: *Self) ?T {
            if (self.len == 0) return null;
            const item = self.get(self.len - 1);
            self.len -= 1;
            return item;
        }

        /// Return a slice of only the extra capacity after items.
        /// This can be useful for writing directly into it.
        /// Note that such an operation must be followed up with a
        /// call to `resize()`
        pub fn unusedCapacitySlice(self: *Self) []align(alignment) T {
            return self.buffer[self.len..];
        }

        /// Insert `item` at index `i` by moving `slice[i .. slice.len]` to make room.
        /// This operation is O(N).
        pub fn insert(
            self: *Self,
            i: usize,
            item: T,
        ) error{Overflow}!void {
            if (i > self.len) {
                return error.Overflow;
            }
            _ = try self.addOne();
            var s = self.slice();
            mem.copyBackwards(T, s[i + 1 .. s.len], s[i .. s.len - 1]);
            self.buffer[i] = item;
        }

        /// Insert slice `items` at index `i` by moving `slice[i .. slice.len]` to make room.
        /// This operation is O(N).
        pub fn insertSlice(self: *Self, i: usize, items: []const T) error{Overflow}!void {
            try self.ensureUnusedCapacity(items.len);
            self.len += items.len;
            mem.copyBackwards(T, self.slice()[i + items.len .. self.len], self.constSlice()[i .. self.len - items.len]);
            @memcpy(self.slice()[i..][0..items.len], items);
        }

        /// Replace range of elements `slice[start..][0..len]` with `new_items`.
        /// Grows slice if `len < new_items.len`.
        /// Shrinks slice if `len > new_items.len`.
        pub fn replaceRange(
            self: *Self,
            start: usize,
            len: usize,
            new_items: []const T,
        ) error{Overflow}!void {
            const after_range = start + len;
            var range = self.slice()[start..after_range];

            if (range.len == new_items.len) {
                @memcpy(range[0..new_items.len], new_items);
            } else if (range.len < new_items.len) {
                const first = new_items[0..range.len];
                const rest = new_items[range.len..];
                @memcpy(range[0..first.len], first);
                try self.insertSlice(after_range, rest);
            } else {
                @memcpy(range[0..new_items.len], new_items);
                const after_subrange = start + new_items.len;
                for (self.constSlice()[after_range..], 0..) |item, i| {
                    self.slice()[after_subrange..][i] = item;
                }
                self.len -= len - new_items.len;
            }
        }

        /// Extend the slice by 1 element.
        pub fn append(self: *Self, item: T) error{Overflow}!void {
            const new_item_ptr = try self.addOne();
            new_item_ptr.* = item;
        }

        /// Extend the slice by 1 element, asserting the capacity is already
        /// enough to store the new item.
        pub fn appendAssumeCapacity(self: *Self, item: T) void {
            const new_item_ptr = self.addOneAssumeCapacity();
            new_item_ptr.* = item;
        }

        /// Remove the element at index `i`, shift elements after index
        /// `i` forward, and return the removed element.
        /// Asserts the slice has at least one item.
        /// This operation is O(N).
        pub fn orderedRemove(self: *Self, i: usize) T {
            const newlen = self.len - 1;
            if (newlen == i) return self.pop().?;
            const old_item = self.get(i);
            for (self.slice()[i..newlen], 0..) |*b, j| b.* = self.get(i + 1 + j);
            self.set(newlen, undefined);
            self.len = newlen;
            return old_item;
        }

        /// Remove the element at the specified index and return it.
        /// The empty slot is filled from the end of the slice.
        /// This operation is O(1).
        pub fn swapRemove(self: *Self, i: usize) T {
            if (self.len - 1 == i) return self.pop().?;
            const old_item = self.get(i);
            self.set(i, self.pop().?);
            return old_item;
        }

        /// Append the slice of items to the slice.
        pub fn appendSlice(self: *Self, items: []const T) error{Overflow}!void {
            try self.ensureUnusedCapacity(items.len);
            self.appendSliceAssumeCapacity(items);
        }

        /// Append the slice of items to the slice, asserting the capacity is already
        /// enough to store the new items.
        pub fn appendSliceAssumeCapacity(self: *Self, items: []const T) void {
            const old_len = self.len;
            self.len += items.len;
            @memcpy(self.slice()[old_len..][0..items.len], items);
        }

        /// Append a value to the slice `n` times.
        /// Returns `error.Overflow` if the new length would exceed the capacity.
        pub fn appendNTimes(self: *Self, value: T, n: usize) error{Overflow}!void {
            const old_len = self.len;
            try self.resize(old_len + n);
            @memset(self.slice()[old_len..self.len], value);
        }

        /// Append a value to the slice `n` times.
        /// Asserts the capacity is enough.
        pub fn appendNTimesAssumeCapacity(self: *Self, value: T, n: usize) void {
            const old_len = self.len;
            self.len += n;
            assert(self.len <= buffer_capacity);
            @memset(self.slice()[old_len..self.len], value);
        }

        pub const Writer = if (T != u8)
            @compileError("The Writer interface is only defined for BoundedArray(u8, ...) " ++
                "but the given type is BoundedArray(" ++ @typeName(T) ++ ", ...)")
        else
            std.io.Writer(*Self, error{Overflow}, appendWrite);

        /// Initializes a writer which will write into the array.
        pub fn writer(self: *Self) Writer {
            return .{ .context = self };
        }

        /// Same as `appendSlice` except it returns the number of bytes written, which is always the same
        /// as `m.len`. The purpose of this function existing is to match `std.io.Writer` API.
        fn appendWrite(self: *Self, m: []const u8) error{Overflow}!usize {
            try self.appendSlice(m);
            return m.len;
        }
    };
}

Type FunctionStaticStringMap[src]

Static string map optimized for small sets of disparate string keys. Works by separating the keys by length at initialization and only checking strings of equal length at runtime.

Parameters

V: type

Fields

kvs: *const KVs = &empty_kvs
len_indexes: [*]const u32 = &empty_len_indexes
len_indexes_len: u32 = 0
min_len: u32 = std.math.maxInt(u32)
max_len: u32 = 0

Functions

FunctioninitComptime[src]

pub inline fn initComptime(comptime kvs_list: anytype) Self

Returns a map backed by static, comptime allocated memory.

kvs_list must be either a list of struct { []const u8, V } (key-value pair) tuples, or a list of struct { []const u8 } (only keys) tuples if V is void.

Source Code

Source code
pub inline fn initComptime(comptime kvs_list: anytype) Self {
    comptime {
        var self = Self{};
        if (kvs_list.len == 0)
            return self;

        // Since the KVs are sorted, a linearly-growing bound will never
        // be sufficient for extreme cases. So we grow proportional to
        // N*log2(N).
        @setEvalBranchQuota(10 * kvs_list.len * std.math.log2_int_ceil(usize, kvs_list.len));

        var sorted_keys: [kvs_list.len][]const u8 = undefined;
        var sorted_vals: [kvs_list.len]V = undefined;

        self.initSortedKVs(kvs_list, &sorted_keys, &sorted_vals);
        const final_keys = sorted_keys;
        const final_vals = sorted_vals;
        self.kvs = &.{
            .keys = &final_keys,
            .values = &final_vals,
            .len = @intCast(kvs_list.len),
        };

        var len_indexes: [self.max_len + 1]u32 = undefined;
        self.initLenIndexes(&len_indexes);
        const final_len_indexes = len_indexes;
        self.len_indexes = &final_len_indexes;
        self.len_indexes_len = @intCast(len_indexes.len);
        return self;
    }
}
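A minimal comptime usage sketch (illustrative keys and values; assumes testing = std.testing is in scope):

test StaticStringMap {
    const map = StaticStringMap(u8).initComptime(.{
        .{ "one", 1 },
        .{ "two", 2 },
    });
    // No deinit() is needed; the backing memory is comptime-allocated.
    try testing.expect(map.has("two"));
    try testing.expectEqual(@as(?u8, 1), map.get("one"));
}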

Functioninit[src]

pub fn init(kvs_list: anytype, allocator: mem.Allocator) !Self

Returns a map backed by memory allocated with allocator.

Handles kvs_list the same way as initComptime().

Parameters

allocator: mem.Allocator

Source Code

Source code
pub fn init(kvs_list: anytype, allocator: mem.Allocator) !Self {
    var self = Self{};
    if (kvs_list.len == 0)
        return self;

    const sorted_keys = try allocator.alloc([]const u8, kvs_list.len);
    errdefer allocator.free(sorted_keys);
    const sorted_vals = try allocator.alloc(V, kvs_list.len);
    errdefer allocator.free(sorted_vals);
    const kvs = try allocator.create(KVs);
    errdefer allocator.destroy(kvs);

    self.initSortedKVs(kvs_list, sorted_keys, sorted_vals);
    kvs.* = .{
        .keys = sorted_keys.ptr,
        .values = sorted_vals.ptr,
        .len = @intCast(kvs_list.len),
    };
    self.kvs = kvs;

    const len_indexes = try allocator.alloc(u32, self.max_len + 1);
    self.initLenIndexes(len_indexes);
    self.len_indexes = len_indexes.ptr;
    self.len_indexes_len = @intCast(len_indexes.len);
    return self;
}

Functiondeinit[src]

pub fn deinit(self: Self, allocator: mem.Allocator) void

This method should only be used with init(), not with initComptime().

Parameters

self: Self
allocator: mem.Allocator

Source Code

Source code
pub fn deinit(self: Self, allocator: mem.Allocator) void {
    allocator.free(self.len_indexes[0..self.len_indexes_len]);
    allocator.free(self.kvs.keys[0..self.kvs.len]);
    allocator.free(self.kvs.values[0..self.kvs.len]);
    allocator.destroy(self.kvs);
}
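A sketch of the allocated counterpart to initComptime() (assumes an allocator variable is in scope):

const kvs = [_]struct { []const u8, u32 }{
    .{ "one", 1 },
    .{ "two", 2 },
};
const map = try StaticStringMap(u32).init(&kvs, allocator);
// Unlike initComptime(), the allocated variant must be freed.
defer map.deinit(allocator);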

Functionhas[src]

pub fn has(self: Self, str: []const u8) bool

Checks if the map has a value for the key.

Parameters

self: Self
str: []const u8

Source Code

Source code
pub fn has(self: Self, str: []const u8) bool {
    return self.get(str) != null;
}

Functionget[src]

pub fn get(self: Self, str: []const u8) ?V

Returns the value for the key if any, else null.

Parameters

self: Self
str: []const u8

Source Code

Source code
pub fn get(self: Self, str: []const u8) ?V {
    if (self.kvs.len == 0)
        return null;

    return self.kvs.values[self.getIndex(str) orelse return null];
}

FunctiongetIndex[src]

pub fn getIndex(self: Self, str: []const u8) ?usize

Parameters

self: Self
str: []const u8

Source Code

Source code
pub fn getIndex(self: Self, str: []const u8) ?usize {
    const kvs = self.kvs.*;
    if (kvs.len == 0)
        return null;

    if (str.len < self.min_len or str.len > self.max_len)
        return null;

    var i = self.len_indexes[str.len];
    while (true) {
        const key = kvs.keys[i];
        if (key.len != str.len)
            return null;
        if (eql(key, str))
            return i;
        i += 1;
        if (i >= kvs.len)
            return null;
    }
}

FunctiongetLongestPrefix[src]

pub fn getLongestPrefix(self: Self, str: []const u8) ?KV

Returns the key-value pair whose key is the longest prefix of str, or null if no key is a prefix of str.

This is effectively an O(N) algorithm: it loops from max_len down to min_len, calling getIndex() to check all keys of each length.

Parameters

self: Self
str: []const u8

Source Code

Source code
pub fn getLongestPrefix(self: Self, str: []const u8) ?KV {
    if (self.kvs.len == 0)
        return null;
    const i = self.getLongestPrefixIndex(str) orelse return null;
    const kvs = self.kvs.*;
    return .{
        .key = kvs.keys[i],
        .value = kvs.values[i],
    };
}
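For example (illustrative keys):

const map = StaticStringMap(u8).initComptime(.{
    .{ "ab", 1 },
    .{ "abc", 2 },
});
const kv = map.getLongestPrefix("abcde").?;
// Lengths 3 down to 2 are tried; "abc" matches first, so
// kv.key is "abc" and kv.value is 2.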

FunctiongetLongestPrefixIndex[src]

pub fn getLongestPrefixIndex(self: Self, str: []const u8) ?usize

Parameters

self: Self
str: []const u8

Source Code

Source code
pub fn getLongestPrefixIndex(self: Self, str: []const u8) ?usize {
    if (self.kvs.len == 0)
        return null;

    if (str.len < self.min_len)
        return null;

    var len = @min(self.max_len, str.len);
    while (len >= self.min_len) : (len -= 1) {
        if (self.getIndex(str[0..len])) |i|
            return i;
    }
    return null;
}

Functionkeys[src]

pub fn keys(self: Self) []const []const u8

Parameters

self: Self

Source Code

Source code
pub fn keys(self: Self) []const []const u8 {
    const kvs = self.kvs.*;
    return kvs.keys[0..kvs.len];
}

Functionvalues[src]

pub fn values(self: Self) []const V

Parameters

self: Self

Source Code

Source code
pub fn values(self: Self) []const V {
    const kvs = self.kvs.*;
    return kvs.values[0..kvs.len];
}

Source Code

Source code
pub fn StaticStringMap(comptime V: type) type {
    return StaticStringMapWithEql(V, defaultEql);
}

Type FunctionStaticStringMapWithEql[src]

Like StaticStringMap, but accepts a custom equality function (eql). The eql function is only called to compare strings of equal length; strings of different lengths are never compared with it.
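A sketch of a case-insensitive map, passing std.ascii.eqlIgnoreCase as the eql function (it matches the required signature; keys and values are illustrative):

const Headers = StaticStringMapWithEql(u8, std.ascii.eqlIgnoreCase);
const map = Headers.initComptime(.{
    .{ "Content-Type", 1 },
    .{ "Host", 2 },
});
// map.get("content-type") returns 1: the lookup first narrows candidates
// by length, then compares with the case-insensitive eql.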

Parameters

V: type
eql: fn (a: []const u8, b: []const u8) bool

Fields

kvs: *const KVs = &empty_kvs
len_indexes: [*]const u32 = &empty_len_indexes
len_indexes_len: u32 = 0
min_len: u32 = std.math.maxInt(u32)
max_len: u32 = 0

Functions

FunctioninitComptime[src]

pub inline fn initComptime(comptime kvs_list: anytype) Self

Returns a map backed by static, comptime allocated memory.

kvs_list must be either a list of struct { []const u8, V } (key-value pair) tuples, or a list of struct { []const u8 } (only keys) tuples if V is void.

Source Code

Source code
pub inline fn initComptime(comptime kvs_list: anytype) Self {
    comptime {
        var self = Self{};
        if (kvs_list.len == 0)
            return self;

        // Since the KVs are sorted, a linearly-growing bound will never
        // be sufficient for extreme cases. So we grow proportional to
        // N*log2(N).
        @setEvalBranchQuota(10 * kvs_list.len * std.math.log2_int_ceil(usize, kvs_list.len));

        var sorted_keys: [kvs_list.len][]const u8 = undefined;
        var sorted_vals: [kvs_list.len]V = undefined;

        self.initSortedKVs(kvs_list, &sorted_keys, &sorted_vals);
        const final_keys = sorted_keys;
        const final_vals = sorted_vals;
        self.kvs = &.{
            .keys = &final_keys,
            .values = &final_vals,
            .len = @intCast(kvs_list.len),
        };

        var len_indexes: [self.max_len + 1]u32 = undefined;
        self.initLenIndexes(&len_indexes);
        const final_len_indexes = len_indexes;
        self.len_indexes = &final_len_indexes;
        self.len_indexes_len = @intCast(len_indexes.len);
        return self;
    }
}

Functioninit[src]

pub fn init(kvs_list: anytype, allocator: mem.Allocator) !Self

Returns a map backed by memory allocated with allocator.

Handles kvs_list the same way as initComptime().

Parameters

allocator: mem.Allocator

Source Code

Source code
pub fn init(kvs_list: anytype, allocator: mem.Allocator) !Self {
    var self = Self{};
    if (kvs_list.len == 0)
        return self;

    const sorted_keys = try allocator.alloc([]const u8, kvs_list.len);
    errdefer allocator.free(sorted_keys);
    const sorted_vals = try allocator.alloc(V, kvs_list.len);
    errdefer allocator.free(sorted_vals);
    const kvs = try allocator.create(KVs);
    errdefer allocator.destroy(kvs);

    self.initSortedKVs(kvs_list, sorted_keys, sorted_vals);
    kvs.* = .{
        .keys = sorted_keys.ptr,
        .values = sorted_vals.ptr,
        .len = @intCast(kvs_list.len),
    };
    self.kvs = kvs;

    const len_indexes = try allocator.alloc(u32, self.max_len + 1);
    self.initLenIndexes(len_indexes);
    self.len_indexes = len_indexes.ptr;
    self.len_indexes_len = @intCast(len_indexes.len);
    return self;
}

Functiondeinit[src]

pub fn deinit(self: Self, allocator: mem.Allocator) void

This method should only be used with init(), not with initComptime().

Parameters

self: Self
allocator: mem.Allocator

Source Code

Source code
pub fn deinit(self: Self, allocator: mem.Allocator) void {
    allocator.free(self.len_indexes[0..self.len_indexes_len]);
    allocator.free(self.kvs.keys[0..self.kvs.len]);
    allocator.free(self.kvs.values[0..self.kvs.len]);
    allocator.destroy(self.kvs);
}

Functionhas[src]

pub fn has(self: Self, str: []const u8) bool

Checks if the map has a value for the key.

Parameters

self: Self
str: []const u8

Source Code

Source code
pub fn has(self: Self, str: []const u8) bool {
    return self.get(str) != null;
}

Functionget[src]

pub fn get(self: Self, str: []const u8) ?V

Returns the value for the key if any, else null.

Parameters

self: Self
str: []const u8

Source Code

Source code
pub fn get(self: Self, str: []const u8) ?V {
    if (self.kvs.len == 0)
        return null;

    return self.kvs.values[self.getIndex(str) orelse return null];
}

FunctiongetIndex[src]

pub fn getIndex(self: Self, str: []const u8) ?usize

Parameters

self: Self
str: []const u8

Source Code

Source code
pub fn getIndex(self: Self, str: []const u8) ?usize {
    const kvs = self.kvs.*;
    if (kvs.len == 0)
        return null;

    if (str.len < self.min_len or str.len > self.max_len)
        return null;

    var i = self.len_indexes[str.len];
    while (true) {
        const key = kvs.keys[i];
        if (key.len != str.len)
            return null;
        if (eql(key, str))
            return i;
        i += 1;
        if (i >= kvs.len)
            return null;
    }
}

FunctiongetLongestPrefix[src]

pub fn getLongestPrefix(self: Self, str: []const u8) ?KV

Returns the key-value pair whose key is the longest prefix of str, or null if no key is a prefix of str.

This is effectively an O(N) algorithm: it loops from max_len down to min_len, calling getIndex() to check all keys of each length.

Parameters

self: Self
str: []const u8

Source Code

Source code
pub fn getLongestPrefix(self: Self, str: []const u8) ?KV {
    if (self.kvs.len == 0)
        return null;
    const i = self.getLongestPrefixIndex(str) orelse return null;
    const kvs = self.kvs.*;
    return .{
        .key = kvs.keys[i],
        .value = kvs.values[i],
    };
}

FunctiongetLongestPrefixIndex[src]

pub fn getLongestPrefixIndex(self: Self, str: []const u8) ?usize

Parameters

self: Self
str: []const u8

Source Code

Source code
pub fn getLongestPrefixIndex(self: Self, str: []const u8) ?usize {
    if (self.kvs.len == 0)
        return null;

    if (str.len < self.min_len)
        return null;

    var len = @min(self.max_len, str.len);
    while (len >= self.min_len) : (len -= 1) {
        if (self.getIndex(str[0..len])) |i|
            return i;
    }
    return null;
}

Functionkeys[src]

pub fn keys(self: Self) []const []const u8

Parameters

self: Self

Source Code

Source code
pub fn keys(self: Self) []const []const u8 {
    const kvs = self.kvs.*;
    return kvs.keys[0..kvs.len];
}

Functionvalues[src]

pub fn values(self: Self) []const V

Parameters

self: Self

Source Code

Source code
pub fn values(self: Self) []const V {
    const kvs = self.kvs.*;
    return kvs.values[0..kvs.len];
}

Source Code

Source code
pub fn StaticStringMapWithEql(
    comptime V: type,
    comptime eql: fn (a: []const u8, b: []const u8) bool,
) type {
    return struct {
        kvs: *const KVs = &empty_kvs,
        len_indexes: [*]const u32 = &empty_len_indexes,
        len_indexes_len: u32 = 0,
        min_len: u32 = std.math.maxInt(u32),
        max_len: u32 = 0,

        pub const KV = struct {
            key: []const u8,
            value: V,
        };

        const Self = @This();
        const KVs = struct {
            keys: [*]const []const u8,
            values: [*]const V,
            len: u32,
        };
        const empty_kvs = KVs{
            .keys = &empty_keys,
            .values = &empty_vals,
            .len = 0,
        };
        const empty_len_indexes = [0]u32{};
        const empty_keys = [0][]const u8{};
        const empty_vals = [0]V{};

        /// Returns a map backed by static, comptime allocated memory.
        ///
        /// `kvs_list` must be either a list of `struct { []const u8, V }`
        /// (key-value pair) tuples, or a list of `struct { []const u8 }`
        /// (only keys) tuples if `V` is `void`.
        pub inline fn initComptime(comptime kvs_list: anytype) Self {
            comptime {
                var self = Self{};
                if (kvs_list.len == 0)
                    return self;

                // Since the KVs are sorted, a linearly-growing bound will never
                // be sufficient for extreme cases. So we grow proportional to
                // N*log2(N).
                @setEvalBranchQuota(10 * kvs_list.len * std.math.log2_int_ceil(usize, kvs_list.len));

                var sorted_keys: [kvs_list.len][]const u8 = undefined;
                var sorted_vals: [kvs_list.len]V = undefined;

                self.initSortedKVs(kvs_list, &sorted_keys, &sorted_vals);
                const final_keys = sorted_keys;
                const final_vals = sorted_vals;
                self.kvs = &.{
                    .keys = &final_keys,
                    .values = &final_vals,
                    .len = @intCast(kvs_list.len),
                };

                var len_indexes: [self.max_len + 1]u32 = undefined;
                self.initLenIndexes(&len_indexes);
                const final_len_indexes = len_indexes;
                self.len_indexes = &final_len_indexes;
                self.len_indexes_len = @intCast(len_indexes.len);
                return self;
            }
        }

        /// Returns a map backed by memory allocated with `allocator`.
        ///
        /// Handles `kvs_list` the same way as `initComptime()`.
        pub fn init(kvs_list: anytype, allocator: mem.Allocator) !Self {
            var self = Self{};
            if (kvs_list.len == 0)
                return self;

            const sorted_keys = try allocator.alloc([]const u8, kvs_list.len);
            errdefer allocator.free(sorted_keys);
            const sorted_vals = try allocator.alloc(V, kvs_list.len);
            errdefer allocator.free(sorted_vals);
            const kvs = try allocator.create(KVs);
            errdefer allocator.destroy(kvs);

            self.initSortedKVs(kvs_list, sorted_keys, sorted_vals);
            kvs.* = .{
                .keys = sorted_keys.ptr,
                .values = sorted_vals.ptr,
                .len = @intCast(kvs_list.len),
            };
            self.kvs = kvs;

            const len_indexes = try allocator.alloc(u32, self.max_len + 1);
            self.initLenIndexes(len_indexes);
            self.len_indexes = len_indexes.ptr;
            self.len_indexes_len = @intCast(len_indexes.len);
            return self;
        }

        /// this method should only be used with init() and not with initComptime().
        pub fn deinit(self: Self, allocator: mem.Allocator) void {
            allocator.free(self.len_indexes[0..self.len_indexes_len]);
            allocator.free(self.kvs.keys[0..self.kvs.len]);
            allocator.free(self.kvs.values[0..self.kvs.len]);
            allocator.destroy(self.kvs);
        }

        const SortContext = struct {
            keys: [][]const u8,
            vals: []V,

            pub fn lessThan(ctx: @This(), a: usize, b: usize) bool {
                return ctx.keys[a].len < ctx.keys[b].len;
            }

            pub fn swap(ctx: @This(), a: usize, b: usize) void {
                std.mem.swap([]const u8, &ctx.keys[a], &ctx.keys[b]);
                std.mem.swap(V, &ctx.vals[a], &ctx.vals[b]);
            }
        };

        fn initSortedKVs(
            self: *Self,
            kvs_list: anytype,
            sorted_keys: [][]const u8,
            sorted_vals: []V,
        ) void {
            for (kvs_list, 0..) |kv, i| {
                sorted_keys[i] = kv.@"0";
                sorted_vals[i] = if (V == void) {} else kv.@"1";
                self.min_len = @intCast(@min(self.min_len, kv.@"0".len));
                self.max_len = @intCast(@max(self.max_len, kv.@"0".len));
            }
            mem.sortUnstableContext(0, sorted_keys.len, SortContext{
                .keys = sorted_keys,
                .vals = sorted_vals,
            });
        }

        fn initLenIndexes(self: Self, len_indexes: []u32) void {
            var len: usize = 0;
            var i: u32 = 0;
            while (len <= self.max_len) : (len += 1) {
                // find the first key whose len == len
                while (len > self.kvs.keys[i].len) {
                    i += 1;
                }
                len_indexes[len] = i;
            }
        }

        /// Checks if the map has a value for the key.
        pub fn has(self: Self, str: []const u8) bool {
            return self.get(str) != null;
        }

        /// Returns the value for the key if any, else null.
        pub fn get(self: Self, str: []const u8) ?V {
            if (self.kvs.len == 0)
                return null;

            return self.kvs.values[self.getIndex(str) orelse return null];
        }

        pub fn getIndex(self: Self, str: []const u8) ?usize {
            const kvs = self.kvs.*;
            if (kvs.len == 0)
                return null;

            if (str.len < self.min_len or str.len > self.max_len)
                return null;

            var i = self.len_indexes[str.len];
            while (true) {
                const key = kvs.keys[i];
                if (key.len != str.len)
                    return null;
                if (eql(key, str))
                    return i;
                i += 1;
                if (i >= kvs.len)
                    return null;
            }
        }

        /// Returns the key-value pair whose key is the longest prefix of `str`,
        /// or null if there is none.
        ///
        /// This is effectively an O(N) algorithm: it loops from `max_len` down
        /// to `min_len` and calls `getIndex()` to check all keys of each
        /// length.
        pub fn getLongestPrefix(self: Self, str: []const u8) ?KV {
            if (self.kvs.len == 0)
                return null;
            const i = self.getLongestPrefixIndex(str) orelse return null;
            const kvs = self.kvs.*;
            return .{
                .key = kvs.keys[i],
                .value = kvs.values[i],
            };
        }

        pub fn getLongestPrefixIndex(self: Self, str: []const u8) ?usize {
            if (self.kvs.len == 0)
                return null;

            if (str.len < self.min_len)
                return null;

            var len = @min(self.max_len, str.len);
            while (len >= self.min_len) : (len -= 1) {
                if (self.getIndex(str[0..len])) |i|
                    return i;
            }
            return null;
        }

        pub fn keys(self: Self) []const []const u8 {
            const kvs = self.kvs.*;
            return kvs.keys[0..kvs.len];
        }

        pub fn values(self: Self) []const V {
            const kvs = self.kvs.*;
            return kvs.values[0..kvs.len];
        }
    };
}
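
Example Usage

The test below is an illustrative sketch of prefix lookup with this map type. It assumes the type is exposed as std.StaticStringMap(V) with an initComptime constructor, and that testing is std.testing as in the other examples in this document; it is not part of the original source.

test "getLongestPrefix (illustrative)" {
    const map = std.StaticStringMap(u32).initComptime(.{
        .{ "get", 1 },
        .{ "getter", 2 },
    });

    try testing.expectEqual(2, map.get("getter").?);
    // "getters" is not a key, but "getter" is its longest stored prefix.
    try testing.expectEqual(2, map.getLongestPrefix("getters").?.value);
    // "ge" is shorter than any stored key, so lookup fails.
    try testing.expectEqual(null, map.get("ge"));
}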

Type FunctionDoublyLinkedList[src]

A doubly-linked list has a pair of pointers to both the head and tail of the list. List elements have pointers to both the previous and next elements in the sequence. The list can be traversed both forward and backward. Some operations that take linear O(n) time with a singly-linked list can be done without traversal in constant O(1) time with a doubly-linked list:

  • Removing an element.
  • Inserting a new element before an existing element.
  • Pushing or popping an element from the end of the list.

Parameters

T: type

Fields

first: ?*Node = null
last: ?*Node = null
len: usize = 0

Functions

FunctioninsertAfter[src]

pub fn insertAfter(list: *Self, node: *Node, new_node: *Node) void

Insert a new node after an existing one.

Arguments:

  • node: Pointer to a node in the list.
  • new_node: Pointer to the new node to insert.

Parameters

list: *Self
node: *Node
new_node: *Node

Source Code

Source code
pub fn insertAfter(list: *Self, node: *Node, new_node: *Node) void {
    new_node.prev = node;
    if (node.next) |next_node| {
        // Intermediate node.
        new_node.next = next_node;
        next_node.prev = new_node;
    } else {
        // Last element of the list.
        new_node.next = null;
        list.last = new_node;
    }
    node.next = new_node;

    list.len += 1;
}

FunctioninsertBefore[src]

pub fn insertBefore(list: *Self, node: *Node, new_node: *Node) void

Insert a new node before an existing one.

Arguments:

  • node: Pointer to a node in the list.
  • new_node: Pointer to the new node to insert.

Parameters

list: *Self
node: *Node
new_node: *Node

Source Code

Source code
pub fn insertBefore(list: *Self, node: *Node, new_node: *Node) void {
    new_node.next = node;
    if (node.prev) |prev_node| {
        // Intermediate node.
        new_node.prev = prev_node;
        prev_node.next = new_node;
    } else {
        // First element of the list.
        new_node.prev = null;
        list.first = new_node;
    }
    node.prev = new_node;

    list.len += 1;
}

FunctionconcatByMoving[src]

pub fn concatByMoving(list1: *Self, list2: *Self) void

Concatenate list2 onto the end of list1, removing all entries from list2.

Arguments:

  • list1: the list to concatenate onto
  • list2: the list to be concatenated

Parameters

list1: *Self
list2: *Self

Source Code

Source code
pub fn concatByMoving(list1: *Self, list2: *Self) void {
    const l2_first = list2.first orelse return;
    if (list1.last) |l1_last| {
        l1_last.next = list2.first;
        l2_first.prev = list1.last;
        list1.len += list2.len;
    } else {
        // list1 was empty
        list1.first = list2.first;
        list1.len = list2.len;
    }
    list1.last = list2.last;
    list2.first = null;
    list2.last = null;
    list2.len = 0;
}

Functionappend[src]

pub fn append(list: *Self, new_node: *Node) void

Insert a new node at the end of the list.

Arguments:

  • new_node: Pointer to the new node to insert.

Parameters

list: *Self
new_node: *Node

Source Code

Source code
pub fn append(list: *Self, new_node: *Node) void {
    if (list.last) |last| {
        // Insert after last.
        list.insertAfter(last, new_node);
    } else {
        // Empty list.
        list.prepend(new_node);
    }
}

Functionprepend[src]

pub fn prepend(list: *Self, new_node: *Node) void

Insert a new node at the beginning of the list.

Arguments:

  • new_node: Pointer to the new node to insert.

Parameters

list: *Self
new_node: *Node

Source Code

Source code
pub fn prepend(list: *Self, new_node: *Node) void {
    if (list.first) |first| {
        // Insert before first.
        list.insertBefore(first, new_node);
    } else {
        // Empty list.
        list.first = new_node;
        list.last = new_node;
        new_node.prev = null;
        new_node.next = null;

        list.len = 1;
    }
}

Functionremove[src]

pub fn remove(list: *Self, node: *Node) void

Remove a node from the list.

Arguments:

  • node: Pointer to the node to be removed.

Parameters

list: *Self
node: *Node

Source Code

Source code
pub fn remove(list: *Self, node: *Node) void {
    if (node.prev) |prev_node| {
        // Intermediate node.
        prev_node.next = node.next;
    } else {
        // First element of the list.
        list.first = node.next;
    }

    if (node.next) |next_node| {
        // Intermediate node.
        next_node.prev = node.prev;
    } else {
        // Last element of the list.
        list.last = node.prev;
    }

    list.len -= 1;
    assert(list.len == 0 or (list.first != null and list.last != null));
}

Functionpop[src]

pub fn pop(list: *Self) ?*Node

Remove and return the last node in the list.

Returns: A pointer to the last node in the list, or null if the list is empty.

Parameters

list: *Self

Source Code

Source code
pub fn pop(list: *Self) ?*Node {
    const last = list.last orelse return null;
    list.remove(last);
    return last;
}

FunctionpopFirst[src]

pub fn popFirst(list: *Self) ?*Node

Remove and return the first node in the list.

Returns: A pointer to the first node in the list, or null if the list is empty.

Parameters

list: *Self

Source Code

Source code
pub fn popFirst(list: *Self) ?*Node {
    const first = list.first orelse return null;
    list.remove(first);
    return first;
}
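
Example Usage

A minimal illustrative test of the operations above; it is not from the original documentation and assumes testing is std.testing, as in the other examples in this document.

test DoublyLinkedList {
    const L = DoublyLinkedList(u32);
    var list: L = .{};

    var one = L.Node{ .data = 1 };
    var two = L.Node{ .data = 2 };
    var three = L.Node{ .data = 3 };

    list.append(&two); // {2}
    list.prepend(&one); // {1, 2}
    list.insertAfter(&two, &three); // {1, 2, 3}

    try testing.expectEqual(3, list.len);
    try testing.expectEqual(3, list.pop().?.data); // {1, 2}
    try testing.expectEqual(1, list.popFirst().?.data); // {2}
    try testing.expectEqual(&two, list.first.?);
    try testing.expectEqual(&two, list.last.?);
}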

Source Code

Source code
pub fn DoublyLinkedList(comptime T: type) type {
    return struct {
        const Self = @This();

        /// Node inside the linked list wrapping the actual data.
        pub const Node = struct {
            prev: ?*Node = null,
            next: ?*Node = null,
            data: T,
        };

        first: ?*Node = null,
        last: ?*Node = null,
        len: usize = 0,

        /// Insert a new node after an existing one.
        ///
        /// Arguments:
        ///     node: Pointer to a node in the list.
        ///     new_node: Pointer to the new node to insert.
        pub fn insertAfter(list: *Self, node: *Node, new_node: *Node) void {
            new_node.prev = node;
            if (node.next) |next_node| {
                // Intermediate node.
                new_node.next = next_node;
                next_node.prev = new_node;
            } else {
                // Last element of the list.
                new_node.next = null;
                list.last = new_node;
            }
            node.next = new_node;

            list.len += 1;
        }

        /// Insert a new node before an existing one.
        ///
        /// Arguments:
        ///     node: Pointer to a node in the list.
        ///     new_node: Pointer to the new node to insert.
        pub fn insertBefore(list: *Self, node: *Node, new_node: *Node) void {
            new_node.next = node;
            if (node.prev) |prev_node| {
                // Intermediate node.
                new_node.prev = prev_node;
                prev_node.next = new_node;
            } else {
                // First element of the list.
                new_node.prev = null;
                list.first = new_node;
            }
            node.prev = new_node;

            list.len += 1;
        }

        /// Concatenate list2 onto the end of list1, removing all entries from list2.
        ///
        /// Arguments:
        ///     list1: the list to concatenate onto
        ///     list2: the list to be concatenated
        pub fn concatByMoving(list1: *Self, list2: *Self) void {
            const l2_first = list2.first orelse return;
            if (list1.last) |l1_last| {
                l1_last.next = list2.first;
                l2_first.prev = list1.last;
                list1.len += list2.len;
            } else {
                // list1 was empty
                list1.first = list2.first;
                list1.len = list2.len;
            }
            list1.last = list2.last;
            list2.first = null;
            list2.last = null;
            list2.len = 0;
        }

        /// Insert a new node at the end of the list.
        ///
        /// Arguments:
        ///     new_node: Pointer to the new node to insert.
        pub fn append(list: *Self, new_node: *Node) void {
            if (list.last) |last| {
                // Insert after last.
                list.insertAfter(last, new_node);
            } else {
                // Empty list.
                list.prepend(new_node);
            }
        }

        /// Insert a new node at the beginning of the list.
        ///
        /// Arguments:
        ///     new_node: Pointer to the new node to insert.
        pub fn prepend(list: *Self, new_node: *Node) void {
            if (list.first) |first| {
                // Insert before first.
                list.insertBefore(first, new_node);
            } else {
                // Empty list.
                list.first = new_node;
                list.last = new_node;
                new_node.prev = null;
                new_node.next = null;

                list.len = 1;
            }
        }

        /// Remove a node from the list.
        ///
        /// Arguments:
        ///     node: Pointer to the node to be removed.
        pub fn remove(list: *Self, node: *Node) void {
            if (node.prev) |prev_node| {
                // Intermediate node.
                prev_node.next = node.next;
            } else {
                // First element of the list.
                list.first = node.next;
            }

            if (node.next) |next_node| {
                // Intermediate node.
                next_node.prev = node.prev;
            } else {
                // Last element of the list.
                list.last = node.prev;
            }

            list.len -= 1;
            assert(list.len == 0 or (list.first != null and list.last != null));
        }

        /// Remove and return the last node in the list.
        ///
        /// Returns:
        ///     A pointer to the last node in the list, or null if the list is empty.
        pub fn pop(list: *Self) ?*Node {
            const last = list.last orelse return null;
            list.remove(last);
            return last;
        }

        /// Remove and return the first node in the list.
        ///
        /// Returns:
        ///     A pointer to the first node in the list, or null if the list is empty.
        pub fn popFirst(list: *Self) ?*Node {
            const first = list.first orelse return null;
            list.remove(first);
            return first;
        }
    };
}

Type FunctionEnumArray[src]

An array keyed by an enum, backed by a dense array. If the enum is not dense, a mapping will be constructed from enum values to dense indices. This type does no dynamic allocation and can be copied by value.

Parameters

E: type
V: type

Types

TypeIndexer[src]

The index mapping for this map

Source Code

Source code
pub const Indexer = EnumIndexer(E)

Fields

values: [Indexer.count]Value

Values

ConstantKey[src]

The key type used to index this map

Source Code

Source code
pub const Key = Indexer.Key

ConstantValue[src]

The value type stored in this map

Source Code

Source code
pub const Value = V

Constantlen[src]

The number of possible keys in the map

Source Code

Source code
pub const len = Indexer.count

Functions

Functioninit[src]

pub fn init(init_values: EnumFieldStruct(E, Value, null)) Self

Parameters

init_values: EnumFieldStruct(E, Value, null)

Source Code

Source code
pub fn init(init_values: EnumFieldStruct(E, Value, null)) Self {
    return initDefault(null, init_values);
}

FunctioninitDefault[src]

pub fn initDefault(comptime default: ?Value, init_values: EnumFieldStruct(E, Value, default)) Self

Initializes values in the enum array, with the specified default.

Parameters

default: ?Value
init_values: EnumFieldStruct(E, Value, default)

Source Code

Source code
pub fn initDefault(comptime default: ?Value, init_values: EnumFieldStruct(E, Value, default)) Self {
    @setEvalBranchQuota(2 * @typeInfo(E).@"enum".fields.len);
    var result: Self = .{ .values = undefined };
    inline for (0..Self.len) |i| {
        const key = comptime Indexer.keyForIndex(i);
        const tag = @tagName(key);
        result.values[i] = @field(init_values, tag);
    }
    return result;
}

FunctioninitUndefined[src]

pub fn initUndefined() Self

Source Code

Source code
pub fn initUndefined() Self {
    return Self{ .values = undefined };
}

FunctioninitFill[src]

pub fn initFill(v: Value) Self

Parameters

v: Value

Source Code

Source code
pub fn initFill(v: Value) Self {
    var self: Self = undefined;
    @memset(&self.values, v);
    return self;
}

Functionget[src]

pub fn get(self: Self, key: Key) Value

Returns the value in the array associated with a key.

Parameters

self: Self
key: Key

Source Code

Source code
pub fn get(self: Self, key: Key) Value {
    return self.values[Indexer.indexOf(key)];
}

FunctiongetPtr[src]

pub fn getPtr(self: *Self, key: Key) *Value

Returns a pointer to the slot in the array associated with a key.

Parameters

self: *Self
key: Key

Source Code

Source code
pub fn getPtr(self: *Self, key: Key) *Value {
    return &self.values[Indexer.indexOf(key)];
}

FunctiongetPtrConst[src]

pub fn getPtrConst(self: *const Self, key: Key) *const Value

Returns a const pointer to the slot in the array associated with a key.

Parameters

self: *const Self
key: Key

Source Code

Source code
pub fn getPtrConst(self: *const Self, key: Key) *const Value {
    return &self.values[Indexer.indexOf(key)];
}

Functionset[src]

pub fn set(self: *Self, key: Key, value: Value) void

Sets the value in the slot associated with a key.

Parameters

self: *Self
key: Key
value: Value

Source Code

Source code
pub fn set(self: *Self, key: Key, value: Value) void {
    self.values[Indexer.indexOf(key)] = value;
}

Functioniterator[src]

pub fn iterator(self: *Self) Iterator

Iterates over the items in the array, in index order.

Parameters

self: *Self

Source Code

Source code
pub fn iterator(self: *Self) Iterator {
    return .{
        .values = &self.values,
    };
}
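
Example Usage

An illustrative test (not from the original documentation) showing initialization and keyed access; it assumes testing is std.testing, as in the other examples in this document.

test EnumArray {
    const Color = enum { red, green, blue };

    var arr = EnumArray(Color, u8).initFill(0);
    arr.set(.green, 0xff);
    arr.getPtr(.blue).* = 0x80;

    try testing.expectEqual(0, arr.get(.red));
    try testing.expectEqual(0xff, arr.get(.green));
    try testing.expectEqual(0x80, arr.get(.blue));
}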

Source Code

Source code
pub fn EnumArray(comptime E: type, comptime V: type) type {
    return struct {
        const Self = @This();

        /// The index mapping for this map
        pub const Indexer = EnumIndexer(E);
        /// The key type used to index this map
        pub const Key = Indexer.Key;
        /// The value type stored in this map
        pub const Value = V;
        /// The number of possible keys in the map
        pub const len = Indexer.count;

        values: [Indexer.count]Value,

        pub fn init(init_values: EnumFieldStruct(E, Value, null)) Self {
            return initDefault(null, init_values);
        }

        /// Initializes values in the enum array, with the specified default.
        pub fn initDefault(comptime default: ?Value, init_values: EnumFieldStruct(E, Value, default)) Self {
            @setEvalBranchQuota(2 * @typeInfo(E).@"enum".fields.len);
            var result: Self = .{ .values = undefined };
            inline for (0..Self.len) |i| {
                const key = comptime Indexer.keyForIndex(i);
                const tag = @tagName(key);
                result.values[i] = @field(init_values, tag);
            }
            return result;
        }

        pub fn initUndefined() Self {
            return Self{ .values = undefined };
        }

        pub fn initFill(v: Value) Self {
            var self: Self = undefined;
            @memset(&self.values, v);
            return self;
        }

        /// Returns the value in the array associated with a key.
        pub fn get(self: Self, key: Key) Value {
            return self.values[Indexer.indexOf(key)];
        }

        /// Returns a pointer to the slot in the array associated with a key.
        pub fn getPtr(self: *Self, key: Key) *Value {
            return &self.values[Indexer.indexOf(key)];
        }

        /// Returns a const pointer to the slot in the array associated with a key.
        pub fn getPtrConst(self: *const Self, key: Key) *const Value {
            return &self.values[Indexer.indexOf(key)];
        }

        /// Sets the value in the slot associated with a key.
        pub fn set(self: *Self, key: Key, value: Value) void {
            self.values[Indexer.indexOf(key)] = value;
        }

        /// Iterates over the items in the array, in index order.
        pub fn iterator(self: *Self) Iterator {
            return .{
                .values = &self.values,
            };
        }

        /// An entry in the array.
        pub const Entry = struct {
            /// The key associated with this entry.
            /// Modifying this key will not change the array.
            key: Key,

            /// A pointer to the value in the array associated
            /// with this key.  Modifications through this
            /// pointer will modify the underlying data.
            value: *Value,
        };

        pub const Iterator = struct {
            index: usize = 0,
            values: *[Indexer.count]Value,

            pub fn next(self: *Iterator) ?Entry {
                const index = self.index;
                if (index < Indexer.count) {
                    self.index += 1;
                    return Entry{
                        .key = Indexer.keyForIndex(index),
                        .value = &self.values[index],
                    };
                }
                return null;
            }
        };
    };
}

Type FunctionEnumMap[src]

A map keyed by an enum, backed by a bitfield and a dense array. If the enum is exhaustive but not dense, a mapping will be constructed from enum values to dense indices. This type does no dynamic allocation and can be copied by value.

Parameters

E: type
V: type

Types

TypeIndexer[src]

The index mapping for this map

Source Code

Source code
pub const Indexer = EnumIndexer(E)

Fields

bits: BitSet = BitSet.initEmpty()

Bits determining whether items are in the map

values: [Indexer.count]Value = undefined

Values of items in the map. If the associated bit is zero, the value is undefined.

Values

ConstantKey[src]

The key type used to index this map

Source Code

Source code
pub const Key = Indexer.Key

ConstantValue[src]

The value type stored in this map

Source Code

Source code
pub const Value = V

Constantlen[src]

The number of possible keys in the map

Source Code

Source code
pub const len = Indexer.count

Functions

Functioninit[src]

pub fn init(init_values: EnumFieldStruct(E, ?Value, @as(?Value, null))) Self

Initializes the map using a sparse struct of optionals

Parameters

init_values: EnumFieldStruct(E, ?Value, @as(?Value, null))

Source Code

Source code
pub fn init(init_values: EnumFieldStruct(E, ?Value, @as(?Value, null))) Self {
    @setEvalBranchQuota(2 * @typeInfo(E).@"enum".fields.len);
    var result: Self = .{};
    if (@typeInfo(E).@"enum".is_exhaustive) {
        inline for (0..Self.len) |i| {
            const key = comptime Indexer.keyForIndex(i);
            const tag = @tagName(key);
            if (@field(init_values, tag)) |*v| {
                result.bits.set(i);
                result.values[i] = v.*;
            }
        }
    } else {
        inline for (std.meta.fields(E)) |field| {
            const key = @field(E, field.name);
            if (@field(init_values, field.name)) |*v| {
                const i = comptime Indexer.indexOf(key);
                result.bits.set(i);
                result.values[i] = v.*;
            }
        }
    }
    return result;
}

FunctioninitFull[src]

pub fn initFull(value: Value) Self

Initializes a full mapping with all keys set to value. Consider using EnumArray instead if the map will remain full.

Parameters

value: Value

Source Code

Source code
pub fn initFull(value: Value) Self {
    var result: Self = .{
        .bits = Self.BitSet.initFull(),
        .values = undefined,
    };
    @memset(&result.values, value);
    return result;
}

FunctioninitFullWith[src]

pub fn initFullWith(init_values: EnumFieldStruct(E, Value, null)) Self

Initializes a full mapping with supplied values. Consider using EnumArray instead if the map will remain full.

Parameters

init_values: EnumFieldStruct(E, Value, null)

Source Code

Source code
pub fn initFullWith(init_values: EnumFieldStruct(E, Value, null)) Self {
    return initFullWithDefault(null, init_values);
}

FunctioninitFullWithDefault[src]

pub fn initFullWithDefault(comptime default: ?Value, init_values: EnumFieldStruct(E, Value, default)) Self

Initializes a full mapping with a provided default. Consider using EnumArray instead if the map will remain full.

Parameters

default: ?Value
init_values: EnumFieldStruct(E, Value, default)

Source Code

Source code
pub fn initFullWithDefault(comptime default: ?Value, init_values: EnumFieldStruct(E, Value, default)) Self {
    @setEvalBranchQuota(2 * @typeInfo(E).@"enum".fields.len);
    var result: Self = .{
        .bits = Self.BitSet.initFull(),
        .values = undefined,
    };
    inline for (0..Self.len) |i| {
        const key = comptime Indexer.keyForIndex(i);
        const tag = @tagName(key);
        result.values[i] = @field(init_values, tag);
    }
    return result;
}

Functioncount[src]

pub fn count(self: Self) usize

The number of items in the map.

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) usize {
    return self.bits.count();
}

Functioncontains[src]

pub fn contains(self: Self, key: Key) bool

Checks if the map contains an item.

Parameters

self: Self
key: Key

Source Code

Source code
pub fn contains(self: Self, key: Key) bool {
    return self.bits.isSet(Indexer.indexOf(key));
}

Functionget[src]

pub fn get(self: Self, key: Key) ?Value

Gets the value associated with a key. If the key is not in the map, returns null.

Parameters

self: Self
key: Key

Source Code

Source code
pub fn get(self: Self, key: Key) ?Value {
    const index = Indexer.indexOf(key);
    return if (self.bits.isSet(index)) self.values[index] else null;
}

FunctiongetAssertContains[src]

pub fn getAssertContains(self: Self, key: Key) Value

Gets the value associated with a key, which must exist in the map.

Parameters

self: Self
key: Key

Source Code

Source code
pub fn getAssertContains(self: Self, key: Key) Value {
    const index = Indexer.indexOf(key);
    assert(self.bits.isSet(index));
    return self.values[index];
}

FunctiongetPtr[src]

pub fn getPtr(self: *Self, key: Key) ?*Value

Gets the address of the value associated with a key. If the key is not in the map, returns null.

Parameters

self: *Self
key: Key

Source Code

Source code
pub fn getPtr(self: *Self, key: Key) ?*Value {
    const index = Indexer.indexOf(key);
    return if (self.bits.isSet(index)) &self.values[index] else null;
}

FunctiongetPtrConst[src]

pub fn getPtrConst(self: *const Self, key: Key) ?*const Value

Gets the address of the const value associated with a key. If the key is not in the map, returns null.

Parameters

self: *const Self
key: Key

Source Code

Source code
pub fn getPtrConst(self: *const Self, key: Key) ?*const Value {
    const index = Indexer.indexOf(key);
    return if (self.bits.isSet(index)) &self.values[index] else null;
}

FunctiongetPtrAssertContains[src]

pub fn getPtrAssertContains(self: *Self, key: Key) *Value

Gets the address of the value associated with a key. The key must be present in the map.

Parameters

self: *Self
key: Key

Source Code

Source code
pub fn getPtrAssertContains(self: *Self, key: Key) *Value {
    const index = Indexer.indexOf(key);
    assert(self.bits.isSet(index));
    return &self.values[index];
}

FunctiongetPtrConstAssertContains[src]

pub fn getPtrConstAssertContains(self: *const Self, key: Key) *const Value

Gets the address of the const value associated with a key. The key must be present in the map.

Parameters

self: *const Self
key: Key

Source Code

Source code
pub fn getPtrConstAssertContains(self: *const Self, key: Key) *const Value {
    const index = Indexer.indexOf(key);
    assert(self.bits.isSet(index));
    return &self.values[index];
}

Functionput[src]

pub fn put(self: *Self, key: Key, value: Value) void

Adds the key to the map with the supplied value. If the key is already in the map, overwrites the value.

Parameters

self: *Self
key: Key
value: Value

Source Code

Source code
pub fn put(self: *Self, key: Key, value: Value) void {
    const index = Indexer.indexOf(key);
    self.bits.set(index);
    self.values[index] = value;
}

FunctionputUninitialized[src]

pub fn putUninitialized(self: *Self, key: Key) *Value

Adds the key to the map with an undefined value. If the key is already in the map, the value becomes undefined. A pointer to the value is returned, which should be used to initialize the value.

Parameters

self: *Self
key: Key

Source Code

Source code
pub fn putUninitialized(self: *Self, key: Key) *Value {
    const index = Indexer.indexOf(key);
    self.bits.set(index);
    self.values[index] = undefined;
    return &self.values[index];
}
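
A small illustrative sketch (not from the original documentation) of initializing the entry through the returned pointer:

test "putUninitialized (illustrative)" {
    const Ball = enum { red, green, blue };
    var map = EnumMap(Ball, u8){};

    // The value is undefined until written through the returned pointer.
    map.putUninitialized(.red).* = 7;
    try testing.expectEqual(7, map.get(.red).?);
}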

FunctionfetchPut[src]

pub fn fetchPut(self: *Self, key: Key, value: Value) ?Value

Sets the value associated with the key in the map, and returns the old value. If the key was not in the map, returns null.

Parameters

self: *Self
key: Key
value: Value

Source Code

Source code
pub fn fetchPut(self: *Self, key: Key, value: Value) ?Value {
    const index = Indexer.indexOf(key);
    const result: ?Value = if (self.bits.isSet(index)) self.values[index] else null;
    self.bits.set(index);
    self.values[index] = value;
    return result;
}

Functionremove[src]

pub fn remove(self: *Self, key: Key) void

Removes a key from the map. If the key was not in the map, does nothing.

Parameters

self: *Self
key: Key

Source Code

Source code
pub fn remove(self: *Self, key: Key) void {
    const index = Indexer.indexOf(key);
    self.bits.unset(index);
    self.values[index] = undefined;
}

FunctionfetchRemove[src]

pub fn fetchRemove(self: *Self, key: Key) ?Value

Removes a key from the map, and returns the old value. If the key was not in the map, returns null.

Parameters

self: *Self
key: Key

Source Code

Source code
pub fn fetchRemove(self: *Self, key: Key) ?Value {
    const index = Indexer.indexOf(key);
    const result: ?Value = if (self.bits.isSet(index)) self.values[index] else null;
    self.bits.unset(index);
    self.values[index] = undefined;
    return result;
}

Functioniterator[src]

pub fn iterator(self: *Self) Iterator

Returns an iterator over the map, which visits items in index order. Modifications to the underlying map may or may not be observed by the iterator, but will not invalidate it.

Parameters

self: *Self

Source Code

Source code
pub fn iterator(self: *Self) Iterator {
    return .{
        .inner = self.bits.iterator(.{}),
        .values = &self.values,
    };
}

Example Usage

test EnumMap {
    const Ball = enum { red, green, blue };

    const some = EnumMap(Ball, u8).init(.{
        .green = 0xff,
        .blue = 0x80,
    });
    try testing.expectEqual(2, some.count());
    try testing.expectEqual(null, some.get(.red));
    try testing.expectEqual(0xff, some.get(.green));
    try testing.expectEqual(0x80, some.get(.blue));
}

Source Code

Source code
pub fn EnumMap(comptime E: type, comptime V: type) type {
    return struct {
        const Self = @This();

        /// The index mapping for this map
        pub const Indexer = EnumIndexer(E);
        /// The key type used to index this map
        pub const Key = Indexer.Key;
        /// The value type stored in this map
        pub const Value = V;
        /// The number of possible keys in the map
        pub const len = Indexer.count;

        const BitSet = std.StaticBitSet(Indexer.count);

        /// Bits determining whether items are in the map
        bits: BitSet = BitSet.initEmpty(),
        /// Values of items in the map.  If the associated
        /// bit is zero, the value is undefined.
        values: [Indexer.count]Value = undefined,

        /// Initializes the map using a sparse struct of optionals
        pub fn init(init_values: EnumFieldStruct(E, ?Value, @as(?Value, null))) Self {
            @setEvalBranchQuota(2 * @typeInfo(E).@"enum".fields.len);
            var result: Self = .{};
            if (@typeInfo(E).@"enum".is_exhaustive) {
                inline for (0..Self.len) |i| {
                    const key = comptime Indexer.keyForIndex(i);
                    const tag = @tagName(key);
                    if (@field(init_values, tag)) |*v| {
                        result.bits.set(i);
                        result.values[i] = v.*;
                    }
                }
            } else {
                inline for (std.meta.fields(E)) |field| {
                    const key = @field(E, field.name);
                    if (@field(init_values, field.name)) |*v| {
                        const i = comptime Indexer.indexOf(key);
                        result.bits.set(i);
                        result.values[i] = v.*;
                    }
                }
            }
            return result;
        }

        /// Initializes a full mapping with all keys set to value.
        /// Consider using EnumArray instead if the map will remain full.
        pub fn initFull(value: Value) Self {
            var result: Self = .{
                .bits = Self.BitSet.initFull(),
                .values = undefined,
            };
            @memset(&result.values, value);
            return result;
        }

        /// Initializes a full mapping with supplied values.
        /// Consider using EnumArray instead if the map will remain full.
        pub fn initFullWith(init_values: EnumFieldStruct(E, Value, null)) Self {
            return initFullWithDefault(null, init_values);
        }

        /// Initializes a full mapping with a provided default.
        /// Consider using EnumArray instead if the map will remain full.
        pub fn initFullWithDefault(comptime default: ?Value, init_values: EnumFieldStruct(E, Value, default)) Self {
            @setEvalBranchQuota(2 * @typeInfo(E).@"enum".fields.len);
            var result: Self = .{
                .bits = Self.BitSet.initFull(),
                .values = undefined,
            };
            inline for (0..Self.len) |i| {
                const key = comptime Indexer.keyForIndex(i);
                const tag = @tagName(key);
                result.values[i] = @field(init_values, tag);
            }
            return result;
        }

        /// The number of items in the map.
        pub fn count(self: Self) usize {
            return self.bits.count();
        }

        /// Checks if the map contains an item.
        pub fn contains(self: Self, key: Key) bool {
            return self.bits.isSet(Indexer.indexOf(key));
        }

        /// Gets the value associated with a key.
        /// If the key is not in the map, returns null.
        pub fn get(self: Self, key: Key) ?Value {
            const index = Indexer.indexOf(key);
            return if (self.bits.isSet(index)) self.values[index] else null;
        }

        /// Gets the value associated with a key, which must
        /// exist in the map.
        pub fn getAssertContains(self: Self, key: Key) Value {
            const index = Indexer.indexOf(key);
            assert(self.bits.isSet(index));
            return self.values[index];
        }

        /// Gets the address of the value associated with a key.
        /// If the key is not in the map, returns null.
        pub fn getPtr(self: *Self, key: Key) ?*Value {
            const index = Indexer.indexOf(key);
            return if (self.bits.isSet(index)) &self.values[index] else null;
        }

        /// Gets the address of the const value associated with a key.
        /// If the key is not in the map, returns null.
        pub fn getPtrConst(self: *const Self, key: Key) ?*const Value {
            const index = Indexer.indexOf(key);
            return if (self.bits.isSet(index)) &self.values[index] else null;
        }

        /// Gets the address of the value associated with a key.
        /// The key must be present in the map.
        pub fn getPtrAssertContains(self: *Self, key: Key) *Value {
            const index = Indexer.indexOf(key);
            assert(self.bits.isSet(index));
            return &self.values[index];
        }

        /// Gets the address of the const value associated with a key.
        /// The key must be present in the map.
        pub fn getPtrConstAssertContains(self: *const Self, key: Key) *const Value {
            const index = Indexer.indexOf(key);
            assert(self.bits.isSet(index));
            return &self.values[index];
        }

        /// Adds the key to the map with the supplied value.
        /// If the key is already in the map, overwrites the value.
        pub fn put(self: *Self, key: Key, value: Value) void {
            const index = Indexer.indexOf(key);
            self.bits.set(index);
            self.values[index] = value;
        }

        /// Adds the key to the map with an undefined value.
        /// If the key is already in the map, the value becomes undefined.
        /// A pointer to the value is returned, which should be
        /// used to initialize the value.
        pub fn putUninitialized(self: *Self, key: Key) *Value {
            const index = Indexer.indexOf(key);
            self.bits.set(index);
            self.values[index] = undefined;
            return &self.values[index];
        }

        /// Sets the value associated with the key in the map,
        /// and returns the old value.  If the key was not in
        /// the map, returns null.
        pub fn fetchPut(self: *Self, key: Key, value: Value) ?Value {
            const index = Indexer.indexOf(key);
            const result: ?Value = if (self.bits.isSet(index)) self.values[index] else null;
            self.bits.set(index);
            self.values[index] = value;
            return result;
        }

        /// Removes a key from the map.  If the key was not in the map,
        /// does nothing.
        pub fn remove(self: *Self, key: Key) void {
            const index = Indexer.indexOf(key);
            self.bits.unset(index);
            self.values[index] = undefined;
        }

        /// Removes a key from the map, and returns the old value.
        /// If the key was not in the map, returns null.
        pub fn fetchRemove(self: *Self, key: Key) ?Value {
            const index = Indexer.indexOf(key);
            const result: ?Value = if (self.bits.isSet(index)) self.values[index] else null;
            self.bits.unset(index);
            self.values[index] = undefined;
            return result;
        }

        /// Returns an iterator over the map, which visits items in index order.
        /// Modifications to the underlying map may or may not be observed by
        /// the iterator, but will not invalidate it.
        pub fn iterator(self: *Self) Iterator {
            return .{
                .inner = self.bits.iterator(.{}),
                .values = &self.values,
            };
        }

        /// An entry in the map.
        pub const Entry = struct {
            /// The key associated with this entry.
            /// Modifying this key will not change the map.
            key: Key,

            /// A pointer to the value in the map associated
            /// with this key.  Modifications through this
            /// pointer will modify the underlying data.
            value: *Value,
        };

        pub const Iterator = struct {
            inner: BitSet.Iterator(.{}),
            values: *[Indexer.count]Value,

            pub fn next(self: *Iterator) ?Entry {
                return if (self.inner.next()) |index|
                    Entry{
                        .key = Indexer.keyForIndex(index),
                        .value = &self.values[index],
                    }
                else
                    null;
            }
        };
    };
}

Type FunctionEnumSet[src]

A set of enum elements, backed by a bitfield. If the enum is exhaustive but not dense, a mapping will be constructed from enum values to dense indices. This type does no dynamic allocation and can be copied by value.

Parameters

E: type

Types

TypeIndexer[src]

The indexing rules for converting between keys and indices.

Source Code

Source code
pub const Indexer = EnumIndexer(E)

Fields

bits: BitSet = BitSet.initEmpty()

Values

ConstantKey[src]

The element type for this set.

Source Code

Source code
pub const Key = Indexer.Key

Constantlen[src]

The maximum number of items in this set.

Source Code

Source code
pub const len = Indexer.count

Functions

Functioninit[src]

pub fn init(init_values: EnumFieldStruct(E, bool, false)) Self

Initializes the set using a struct of bools

Parameters

init_values: EnumFieldStruct(E, bool, false)

Source Code

Source code
pub fn init(init_values: EnumFieldStruct(E, bool, false)) Self {
    @setEvalBranchQuota(2 * @typeInfo(E).@"enum".fields.len);
    var result: Self = .{};
    if (@typeInfo(E).@"enum".is_exhaustive) {
        inline for (0..Self.len) |i| {
            const key = comptime Indexer.keyForIndex(i);
            const tag = @tagName(key);
            if (@field(init_values, tag)) {
                result.bits.set(i);
            }
        }
    } else {
        inline for (std.meta.fields(E)) |field| {
            const key = @field(E, field.name);
            if (@field(init_values, field.name)) {
                const i = comptime Indexer.indexOf(key);
                result.bits.set(i);
            }
        }
    }
    return result;
}

FunctioninitEmpty[src]

pub fn initEmpty() Self

Returns a set containing no keys.

Source Code

Source code
pub fn initEmpty() Self {
    return .{ .bits = BitSet.initEmpty() };
}

FunctioninitFull[src]

pub fn initFull() Self

Returns a set containing all possible keys.

Source Code

Source code
pub fn initFull() Self {
    return .{ .bits = BitSet.initFull() };
}

FunctioninitMany[src]

pub fn initMany(keys: []const Key) Self

Returns a set containing multiple keys.

Parameters

keys: []const Key

Source Code

Source code
pub fn initMany(keys: []const Key) Self {
    var set = initEmpty();
    for (keys) |key| set.insert(key);
    return set;
}

FunctioninitOne[src]

pub fn initOne(key: Key) Self

Returns a set containing a single key.

Parameters

key: Key

Source Code

Source code
pub fn initOne(key: Key) Self {
    return initMany(&[_]Key{key});
}

Functioncount[src]

pub fn count(self: Self) usize

Returns the number of keys in the set.

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) usize {
    return self.bits.count();
}

Functioncontains[src]

pub fn contains(self: Self, key: Key) bool

Checks if a key is in the set.

Parameters

self: Self
key: Key

Source Code

Source code
pub fn contains(self: Self, key: Key) bool {
    return self.bits.isSet(Indexer.indexOf(key));
}

Functioninsert[src]

pub fn insert(self: *Self, key: Key) void

Puts a key in the set.

Parameters

self: *Self
key: Key

Source Code

Source code
pub fn insert(self: *Self, key: Key) void {
    self.bits.set(Indexer.indexOf(key));
}

Functionremove[src]

pub fn remove(self: *Self, key: Key) void

Removes a key from the set.

Parameters

self: *Self
key: Key

Source Code

Source code
pub fn remove(self: *Self, key: Key) void {
    self.bits.unset(Indexer.indexOf(key));
}

FunctionsetPresent[src]

pub fn setPresent(self: *Self, key: Key, present: bool) void

Changes the presence of a key in the set to match the passed bool.

Parameters

self: *Self
key: Key
present: bool

Source Code

Source code
pub fn setPresent(self: *Self, key: Key, present: bool) void {
    self.bits.setValue(Indexer.indexOf(key), present);
}

Functiontoggle[src]

pub fn toggle(self: *Self, key: Key) void

Toggles the presence of a key in the set. If the key is in the set, removes it. Otherwise adds it.

Parameters

self: *Self
key: Key

Source Code

Source code
pub fn toggle(self: *Self, key: Key) void {
    self.bits.toggle(Indexer.indexOf(key));
}

FunctiontoggleSet[src]

pub fn toggleSet(self: *Self, other: Self) void

Toggles the presence of all keys in the passed set.

Parameters

self: *Self
other: Self

Source Code

Source code
pub fn toggleSet(self: *Self, other: Self) void {
    self.bits.toggleSet(other.bits);
}

FunctiontoggleAll[src]

pub fn toggleAll(self: *Self) void

Toggles all possible keys in the set.

Parameters

self: *Self

Source Code

Source code
pub fn toggleAll(self: *Self) void {
    self.bits.toggleAll();
}

FunctionsetUnion[src]

pub fn setUnion(self: *Self, other: Self) void

Adds all keys in the passed set to this set.

Parameters

self: *Self
other: Self

Source Code

Source code
pub fn setUnion(self: *Self, other: Self) void {
    self.bits.setUnion(other.bits);
}

FunctionsetIntersection[src]

pub fn setIntersection(self: *Self, other: Self) void

Removes all keys which are not in the passed set.

Parameters

self: *Self
other: Self

Source Code

Source code
pub fn setIntersection(self: *Self, other: Self) void {
    self.bits.setIntersection(other.bits);
}

Functioneql[src]

pub fn eql(self: Self, other: Self) bool

Returns true iff both sets have the same keys.

Parameters

self: Self
other: Self

Source Code

Source code
pub fn eql(self: Self, other: Self) bool {
    return self.bits.eql(other.bits);
}

FunctionsubsetOf[src]

pub fn subsetOf(self: Self, other: Self) bool

Returns true iff all the keys in this set are in the other set. The other set may have keys not found in this set.

Parameters

self: Self
other: Self

Source Code

Source code
pub fn subsetOf(self: Self, other: Self) bool {
    return self.bits.subsetOf(other.bits);
}

FunctionsupersetOf[src]

pub fn supersetOf(self: Self, other: Self) bool

Returns true iff this set contains all the keys in the other set. This set may have keys not found in the other set.

Parameters

self: Self
other: Self

Source Code

Source code
pub fn supersetOf(self: Self, other: Self) bool {
    return self.bits.supersetOf(other.bits);
}

Functioncomplement[src]

pub fn complement(self: Self) Self

Returns a set with all the keys not in this set.

Parameters

self: Self

Source Code

Source code
pub fn complement(self: Self) Self {
    return .{ .bits = self.bits.complement() };
}

FunctionunionWith[src]

pub fn unionWith(self: Self, other: Self) Self

Returns a set with keys that are in either this set or the other set.

Parameters

self: Self
other: Self

Source Code

Source code
pub fn unionWith(self: Self, other: Self) Self {
    return .{ .bits = self.bits.unionWith(other.bits) };
}

FunctionintersectWith[src]

pub fn intersectWith(self: Self, other: Self) Self

Returns a set with keys that are in both this set and the other set.

Parameters

self: Self
other: Self

Source Code

Source code
pub fn intersectWith(self: Self, other: Self) Self {
    return .{ .bits = self.bits.intersectWith(other.bits) };
}

FunctionxorWith[src]

pub fn xorWith(self: Self, other: Self) Self

Returns a set with keys that are in either this set or the other set, but not both.

Parameters

self: Self
other: Self

Source Code

Source code
pub fn xorWith(self: Self, other: Self) Self {
    return .{ .bits = self.bits.xorWith(other.bits) };
}

FunctiondifferenceWith[src]

pub fn differenceWith(self: Self, other: Self) Self

Returns a set with keys that are in this set except for keys in the other set.

Parameters

self: Self
other: Self

Source Code

Source code
pub fn differenceWith(self: Self, other: Self) Self {
    return .{ .bits = self.bits.differenceWith(other.bits) };
}

Functioniterator[src]

pub fn iterator(self: *const Self) Iterator

Returns an iterator over this set, which iterates in index order. Modifications to the set during iteration may or may not be observed by the iterator, but will not invalidate it.

Parameters

self: *const Self

Source Code

Source code
pub fn iterator(self: *const Self) Iterator {
    return .{ .inner = self.bits.iterator(.{}) };
}
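
Example Usage

An illustrative test (not from the original documentation) of set construction and the bulk set operations; it assumes testing is std.testing, as in the other examples in this document.

test EnumSet {
    const Suit = enum { clubs, diamonds, hearts, spades };

    const red = EnumSet(Suit).initMany(&.{ .diamonds, .hearts });
    try testing.expect(red.contains(.hearts));
    try testing.expect(!red.contains(.spades));

    const black = red.complement();
    try testing.expectEqual(2, black.count());
    try testing.expect(red.unionWith(black).eql(EnumSet(Suit).initFull()));
}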

Source Code

Source code
pub fn EnumSet(comptime E: type) type {
    return struct {
        const Self = @This();

        /// The indexing rules for converting between keys and indices.
        pub const Indexer = EnumIndexer(E);
        /// The element type for this set.
        pub const Key = Indexer.Key;

        const BitSet = std.StaticBitSet(Indexer.count);

        /// The maximum number of items in this set.
        pub const len = Indexer.count;

        bits: BitSet = BitSet.initEmpty(),

        /// Initializes the set using a struct of bools
        pub fn init(init_values: EnumFieldStruct(E, bool, false)) Self {
            @setEvalBranchQuota(2 * @typeInfo(E).@"enum".fields.len);
            var result: Self = .{};
            if (@typeInfo(E).@"enum".is_exhaustive) {
                inline for (0..Self.len) |i| {
                    const key = comptime Indexer.keyForIndex(i);
                    const tag = @tagName(key);
                    if (@field(init_values, tag)) {
                        result.bits.set(i);
                    }
                }
            } else {
                inline for (std.meta.fields(E)) |field| {
                    const key = @field(E, field.name);
                    if (@field(init_values, field.name)) {
                        const i = comptime Indexer.indexOf(key);
                        result.bits.set(i);
                    }
                }
            }
            return result;
        }

        /// Returns a set containing no keys.
        pub fn initEmpty() Self {
            return .{ .bits = BitSet.initEmpty() };
        }

        /// Returns a set containing all possible keys.
        pub fn initFull() Self {
            return .{ .bits = BitSet.initFull() };
        }

        /// Returns a set containing multiple keys.
        pub fn initMany(keys: []const Key) Self {
            var set = initEmpty();
            for (keys) |key| set.insert(key);
            return set;
        }

        /// Returns a set containing a single key.
        pub fn initOne(key: Key) Self {
            return initMany(&[_]Key{key});
        }

        /// Returns the number of keys in the set.
        pub fn count(self: Self) usize {
            return self.bits.count();
        }

        /// Checks if a key is in the set.
        pub fn contains(self: Self, key: Key) bool {
            return self.bits.isSet(Indexer.indexOf(key));
        }

        /// Puts a key in the set.
        pub fn insert(self: *Self, key: Key) void {
            self.bits.set(Indexer.indexOf(key));
        }

        /// Removes a key from the set.
        pub fn remove(self: *Self, key: Key) void {
            self.bits.unset(Indexer.indexOf(key));
        }

        /// Changes the presence of a key in the set to match the passed bool.
        pub fn setPresent(self: *Self, key: Key, present: bool) void {
            self.bits.setValue(Indexer.indexOf(key), present);
        }

        /// Toggles the presence of a key in the set.  If the key is in
        /// the set, removes it.  Otherwise adds it.
        pub fn toggle(self: *Self, key: Key) void {
            self.bits.toggle(Indexer.indexOf(key));
        }

        /// Toggles the presence of all keys in the passed set.
        pub fn toggleSet(self: *Self, other: Self) void {
            self.bits.toggleSet(other.bits);
        }

        /// Toggles all possible keys in the set.
        pub fn toggleAll(self: *Self) void {
            self.bits.toggleAll();
        }

        /// Adds all keys in the passed set to this set.
        pub fn setUnion(self: *Self, other: Self) void {
            self.bits.setUnion(other.bits);
        }

        /// Removes all keys which are not in the passed set.
        pub fn setIntersection(self: *Self, other: Self) void {
            self.bits.setIntersection(other.bits);
        }

        /// Returns true iff both sets have the same keys.
        pub fn eql(self: Self, other: Self) bool {
            return self.bits.eql(other.bits);
        }

        /// Returns true iff all the keys in this set are
        /// in the other set. The other set may have keys
        /// not found in this set.
        pub fn subsetOf(self: Self, other: Self) bool {
            return self.bits.subsetOf(other.bits);
        }

        /// Returns true iff this set contains all the keys
        /// in the other set. This set may have keys not
        /// found in the other set.
        pub fn supersetOf(self: Self, other: Self) bool {
            return self.bits.supersetOf(other.bits);
        }

        /// Returns a set with all the keys not in this set.
        pub fn complement(self: Self) Self {
            return .{ .bits = self.bits.complement() };
        }

        /// Returns a set with keys that are in either this
        /// set or the other set.
        pub fn unionWith(self: Self, other: Self) Self {
            return .{ .bits = self.bits.unionWith(other.bits) };
        }

        /// Returns a set with keys that are in both this
        /// set and the other set.
        pub fn intersectWith(self: Self, other: Self) Self {
            return .{ .bits = self.bits.intersectWith(other.bits) };
        }

        /// Returns a set with keys that are in either this
        /// set or the other set, but not both.
        pub fn xorWith(self: Self, other: Self) Self {
            return .{ .bits = self.bits.xorWith(other.bits) };
        }

        /// Returns a set with keys that are in this set
        /// except for keys in the other set.
        pub fn differenceWith(self: Self, other: Self) Self {
            return .{ .bits = self.bits.differenceWith(other.bits) };
        }

        /// Returns an iterator over this set, which iterates in
        /// index order.  Modifications to the set during iteration
        /// may or may not be observed by the iterator, but will
        /// not invalidate it.
        pub fn iterator(self: *const Self) Iterator {
            return .{ .inner = self.bits.iterator(.{}) };
        }

        pub const Iterator = struct {
            inner: BitSet.Iterator(.{}),

            pub fn next(self: *Iterator) ?Key {
                return if (self.inner.next()) |index|
                    Indexer.keyForIndex(index)
                else
                    null;
            }
        };
    };
}

Type FunctionHashMap[src]

General purpose hash table. No order is guaranteed and any modification invalidates live iterators. It provides fast operations (lookup, insertion, deletion) with quite high load factors (up to 80% by default) for low memory usage. For a hash map that can be initialized directly and that does not store an Allocator field, see HashMapUnmanaged. If iterating over the table entries is a strong use case and needs to be fast, prefer the alternative std.ArrayHashMap.

Context must be a struct type with two member functions:

  • hash(self, K) u64
  • eql(self, K, K) bool

Adapted variants of many functions are provided. These variants take a pseudo key instead of a key. Their context must have the functions:

  • hash(self, PseudoKey) u64
  • eql(self, PseudoKey, K) bool
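
The sketch below illustrates the required shape of a custom Context; it is an assumption-laden example, not from the original documentation, that uses std.hash.Wyhash and std.hash_map.default_max_load_percentage from the standard library.

const MyKeyContext = struct {
    pub fn hash(self: @This(), key: u32) u64 {
        _ = self;
        // Hash the key's bytes with a fixed seed.
        return std.hash.Wyhash.hash(0, std.mem.asBytes(&key));
    }

    pub fn eql(self: @This(), a: u32, b: u32) bool {
        _ = self;
        return a == b;
    }
};

// A map type built with the custom context:
const MyMap = HashMap(u32, []const u8, MyKeyContext, std.hash_map.default_max_load_percentage);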

Parameters

K: type
V: type
Context: type
max_load_percentage: u64

Types

TypeUnmanaged[src]

The type of the unmanaged hash map underlying this wrapper

Source Code

Source code
pub const Unmanaged = HashMapUnmanaged(K, V, Context, max_load_percentage)

Fields

unmanaged: Unmanaged
allocator: Allocator
ctx: Context

Values

ConstantEntry[src]

An entry, containing pointers to a key and value stored in the map

Source Code

Source code
pub const Entry = Unmanaged.Entry

ConstantKV[src]

A copy of a key and value which are no longer in the map

Source Code

Source code
pub const KV = Unmanaged.KV

ConstantHash[src]

The integer type that is the result of hashing

Source Code

Source code
pub const Hash = Unmanaged.Hash

ConstantIterator[src]

The iterator type returned by iterator()

Source Code

Source code
pub const Iterator = Unmanaged.Iterator

ConstantKeyIterator[src]

Source Code

Source code
pub const KeyIterator = Unmanaged.KeyIterator

ConstantValueIterator[src]

Source Code

Source code
pub const ValueIterator = Unmanaged.ValueIterator

ConstantSize[src]

The integer type used to store the size of the map

Source Code

Source code
pub const Size = Unmanaged.Size

ConstantGetOrPutResult[src]

The type returned from getOrPut and variants

Source Code

Source code
pub const GetOrPutResult = Unmanaged.GetOrPutResult

Functions

Functioninit[src]

pub fn init(allocator: Allocator) Self

Create a managed hash map with an empty context. If the context is not zero-sized, you must use initContext(allocator, ctx) instead.

Parameters

allocator: Allocator

Source Code

Source code
pub fn init(allocator: Allocator) Self {
    if (@sizeOf(Context) != 0) {
        @compileError("Context must be specified! Call initContext(allocator, ctx) instead.");
    }
    return .{
        .unmanaged = .empty,
        .allocator = allocator,
        .ctx = undefined, // ctx is zero-sized so this is safe.
    };
}

FunctioninitContext[src]

pub fn initContext(allocator: Allocator, ctx: Context) Self

Create a managed hash map with a context

Parameters

allocator: Allocator
ctx: Context

Source Code

Source code
pub fn initContext(allocator: Allocator, ctx: Context) Self {
    return .{
        .unmanaged = .empty,
        .allocator = allocator,
        .ctx = ctx,
    };
}

FunctionlockPointers[src]

pub fn lockPointers(self: *Self) void

Puts the hash map into a state where any method call that would cause an existing key or value pointer to become invalidated will instead trigger an assertion.

An additional call to lockPointers in such state also triggers an assertion.

unlockPointers returns the hash map to the previous state.

Parameters

self: *Self

Source Code

Source code
pub fn lockPointers(self: *Self) void {
    self.unmanaged.lockPointers();
}

FunctionunlockPointers[src]

pub fn unlockPointers(self: *Self) void

Undoes a call to lockPointers.

Parameters

self: *Self

Source Code

Source code
pub fn unlockPointers(self: *Self) void {
    self.unmanaged.unlockPointers();
}

Functiondeinit[src]

pub fn deinit(self: *Self) void

Release the backing array and invalidate this map. This does not deinit keys, values, or the context! If your keys or values need to be released, ensure that this is done before calling this function.

Parameters

self: *Self

Source Code

Source code
pub fn deinit(self: *Self) void {
    self.unmanaged.deinit(self.allocator);
    self.* = undefined;
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Empty the map, but keep the backing allocation for future use. This does not free keys or values! Be sure to release them if they need deinitialization before calling this function.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    return self.unmanaged.clearRetainingCapacity();
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self) void

Empty the map and release the backing allocation. This does not free keys or values! Be sure to release them if they need deinitialization before calling this function.

Parameters

self: *Self

Source Code

Source code
pub fn clearAndFree(self: *Self) void {
    return self.unmanaged.clearAndFree(self.allocator);
}

Functioncount[src]

pub fn count(self: Self) Size

Return the number of items in the map.

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) Size {
    return self.unmanaged.count();
}

Functioniterator[src]

pub fn iterator(self: *const Self) Iterator

Create an iterator over the entries in the map. The iterator is invalidated if the map is modified.

Parameters

self: *const Self

Source Code

Source code
pub fn iterator(self: *const Self) Iterator {
    return self.unmanaged.iterator();
}
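
A short iteration sketch; entry.key_ptr and entry.value_ptr point into the backing store, so the map must not be modified mid-loop:

const std = @import("std");

test "iterate over entries" {
    var map = std.AutoHashMap(u32, []const u8).init(std.testing.allocator);
    defer map.deinit();

    try map.put(1, "one");
    try map.put(2, "two");

    var key_sum: u32 = 0;
    var it = map.iterator();
    while (it.next()) |entry| {
        key_sum += entry.key_ptr.*;
    }
    try std.testing.expectEqual(@as(u32, 3), key_sum);
}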

FunctionkeyIterator[src]

pub fn keyIterator(self: Self) KeyIterator

Create an iterator over the keys in the map. The iterator is invalidated if the map is modified.

Parameters

self: Self

Source Code

Source code
pub fn keyIterator(self: Self) KeyIterator {
    return self.unmanaged.keyIterator();
}

FunctionvalueIterator[src]

pub fn valueIterator(self: Self) ValueIterator

Create an iterator over the values in the map. The iterator is invalidated if the map is modified.

Parameters

self: Self

Source Code

Source code
pub fn valueIterator(self: Self) ValueIterator {
    return self.unmanaged.valueIterator();
}

FunctiongetOrPut[src]

pub fn getOrPut(self: *Self, key: K) Allocator.Error!GetOrPutResult

If key exists, this function cannot fail. If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointers point to it. Caller should then initialize the value (but not the key).

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPut(self: *Self, key: K) Allocator.Error!GetOrPutResult {
    return self.unmanaged.getOrPutContext(self.allocator, key, self.ctx);
}
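
The classic use is an upsert, shown here as a word counter (an illustrative sketch using std.StringHashMap):

const std = @import("std");

test "getOrPut as an upsert" {
    var counts = std.StringHashMap(u32).init(std.testing.allocator);
    defer counts.deinit();

    const words = [_][]const u8{ "a", "b", "a" };
    for (words) |word| {
        const gop = try counts.getOrPut(word);
        // A fresh slot holds an undefined value; initialize it first.
        if (!gop.found_existing) gop.value_ptr.* = 0;
        gop.value_ptr.* += 1;
    }
    try std.testing.expectEqual(@as(u32, 2), counts.get("a").?);
}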

FunctiongetOrPutAdapted[src]

pub fn getOrPutAdapted(self: *Self, key: anytype, ctx: anytype) Allocator.Error!GetOrPutResult

If key exists, this function cannot fail. If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined key and value, and the Entry pointers point to it. Caller must then initialize the key and value.

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAdapted(self: *Self, key: anytype, ctx: anytype) Allocator.Error!GetOrPutResult {
    return self.unmanaged.getOrPutContextAdapted(self.allocator, key, ctx, self.ctx);
}
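
The *Adapted variants accept a pseudo-key with its own context, which must hash exactly as the map's context does for the stored keys. A hypothetical string-interning sketch (IndexContext and SliceAdapter are made up for this example; keys are indices into a name table):

const std = @import("std");

// Keys are indices into a name table; hashing and equality go through
// the referenced strings.
const IndexContext = struct {
    names: []const []const u8,
    pub fn hash(self: @This(), k: u32) u64 {
        return std.hash_map.hashString(self.names[k]);
    }
    pub fn eql(self: @This(), a: u32, b: u32) bool {
        return std.mem.eql(u8, self.names[a], self.names[b]);
    }
};

// Adapted context: looks up by slice, hashing compatibly with
// IndexContext above.
const SliceAdapter = struct {
    names: []const []const u8,
    pub fn hash(self: @This(), s: []const u8) u64 {
        _ = self;
        return std.hash_map.hashString(s);
    }
    pub fn eql(self: @This(), s: []const u8, k: u32) bool {
        return std.mem.eql(u8, s, self.names[k]);
    }
};

test "adapted lookup by slice" {
    const names = [_][]const u8{ "hello", "world" };
    const Map = std.HashMap(u32, void, IndexContext, std.hash_map.default_max_load_percentage);
    // The context is not zero-sized, so initContext is required.
    var map = Map.initContext(std.testing.allocator, .{ .names = &names });
    defer map.deinit();

    try map.put(0, {});
    try map.put(1, {});

    const adapter: SliceAdapter = .{ .names = &names };
    const gop = try map.getOrPutAdapted(@as([]const u8, "world"), adapter);
    // Had the key been missing, key and value would be undefined and the
    // caller would have to initialize both.
    try std.testing.expect(gop.found_existing);
}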

FunctiongetOrPutAssumeCapacity[src]

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult

If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointers point to it. Caller should then initialize the value (but not the key). If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
    return self.unmanaged.getOrPutAssumeCapacityContext(key, self.ctx);
}

FunctiongetOrPutAssumeCapacityAdapted[src]

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult

If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointers point to it. Caller must then initialize the key and value. If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {
    return self.unmanaged.getOrPutAssumeCapacityAdapted(key, ctx);
}

FunctiongetOrPutValue[src]

pub fn getOrPutValue(self: *Self, key: K, value: V) Allocator.Error!Entry

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn getOrPutValue(self: *Self, key: K, value: V) Allocator.Error!Entry {
    return self.unmanaged.getOrPutValueContext(self.allocator, key, value, self.ctx);
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, expected_count: Size) Allocator.Error!void

Increases capacity, guaranteeing that insertions up until the expected_count will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
expected_count: Size

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, expected_count: Size) Allocator.Error!void {
    return self.unmanaged.ensureTotalCapacityContext(self.allocator, expected_count, self.ctx);
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, additional_count: Size) Allocator.Error!void

Increases capacity, guaranteeing that insertions up until additional_count more items will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
additional_count: Size

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, additional_count: Size) Allocator.Error!void {
    return self.unmanaged.ensureUnusedCapacityContext(self.allocator, additional_count, self.ctx);
}
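
This pairs naturally with the AssumeCapacity variants: one fallible reservation up front, then infallible inserts (a minimal sketch):

const std = @import("std");

test "reserve once, then insert infallibly" {
    var map = std.AutoHashMap(u32, u32).init(std.testing.allocator);
    defer map.deinit();

    try map.ensureUnusedCapacity(3); // the only call that can fail
    map.putAssumeCapacity(1, 10);
    map.putAssumeCapacity(2, 20);
    map.putAssumeCapacity(3, 30);
    try std.testing.expectEqual(@as(u32, 3), map.count());
}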

Functioncapacity[src]

pub fn capacity(self: Self) Size

Returns the total number of elements which may be present before it is no longer guaranteed that no allocations will be performed.

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) Size {
    return self.unmanaged.capacity();
}

Functionput[src]

pub fn put(self: *Self, key: K, value: V) Allocator.Error!void

Clobbers any existing data. To detect if a put would clobber existing data, see getOrPut.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn put(self: *Self, key: K, value: V) Allocator.Error!void {
    return self.unmanaged.putContext(self.allocator, key, value, self.ctx);
}

FunctionputNoClobber[src]

pub fn putNoClobber(self: *Self, key: K, value: V) Allocator.Error!void

Inserts a key-value pair into the hash map, asserting that no previous entry with the same key is already present

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putNoClobber(self: *Self, key: K, value: V) Allocator.Error!void {
    return self.unmanaged.putNoClobberContext(self.allocator, key, value, self.ctx);
}

FunctionputAssumeCapacity[src]

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Clobbers any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
    return self.unmanaged.putAssumeCapacityContext(key, value, self.ctx);
}

FunctionputAssumeCapacityNoClobber[src]

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Asserts that it does not clobber any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
    return self.unmanaged.putAssumeCapacityNoClobberContext(key, value, self.ctx);
}

FunctionfetchPut[src]

pub fn fetchPut(self: *Self, key: K, value: V) Allocator.Error!?KV

Inserts a new Entry into the hash map, returning the previous one, if any.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPut(self: *Self, key: K, value: V) Allocator.Error!?KV {
    return self.unmanaged.fetchPutContext(self.allocator, key, value, self.ctx);
}
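
A sketch of the returned KV: null on first insert, the displaced pair on overwrite:

const std = @import("std");

test "fetchPut returns the replaced pair" {
    var map = std.AutoHashMap(u32, u32).init(std.testing.allocator);
    defer map.deinit();

    try std.testing.expect(try map.fetchPut(1, 10) == null);
    const old = (try map.fetchPut(1, 20)).?;
    try std.testing.expectEqual(@as(u32, 10), old.value);
    try std.testing.expectEqual(@as(u32, 20), map.get(1).?);
}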

FunctionfetchPutAssumeCapacity[src]

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV

Inserts a new Entry into the hash map, returning the previous one, if any. If insertion happens, asserts there is enough capacity without allocating.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
    return self.unmanaged.fetchPutAssumeCapacityContext(key, value, self.ctx);
}

FunctionfetchRemove[src]

pub fn fetchRemove(self: *Self, key: K) ?KV

Removes a value from the map and returns the removed kv pair.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchRemove(self: *Self, key: K) ?KV {
    return self.unmanaged.fetchRemoveContext(key, self.ctx);
}

FunctionfetchRemoveAdapted[src]

pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    return self.unmanaged.fetchRemoveAdapted(key, ctx);
}

Functionget[src]

pub fn get(self: Self, key: K) ?V

Finds the value associated with a key in the map

Parameters

self: Self
key: K

Source Code

Source code
pub fn get(self: Self, key: K) ?V {
    return self.unmanaged.getContext(key, self.ctx);
}

FunctiongetAdapted[src]

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V

Parameters

self: Self

Source Code

Source code
pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
    return self.unmanaged.getAdapted(key, ctx);
}

FunctiongetPtr[src]

pub fn getPtr(self: Self, key: K) ?*V

Get an optional pointer to the value associated with key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getPtr(self: Self, key: K) ?*V {
    return self.unmanaged.getPtrContext(key, self.ctx);
}

FunctiongetPtrAdapted[src]

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V

Parameters

self: Self

Source Code

Source code
pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
    return self.unmanaged.getPtrAdapted(key, ctx);
}

FunctiongetKey[src]

pub fn getKey(self: Self, key: K) ?K

Finds the actual key associated with an adapted key in the map

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKey(self: Self, key: K) ?K {
    return self.unmanaged.getKeyContext(key, self.ctx);
}

FunctiongetKeyAdapted[src]

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K

Parameters

self: Self

Source Code

Source code
pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
    return self.unmanaged.getKeyAdapted(key, ctx);
}

FunctiongetKeyPtr[src]

pub fn getKeyPtr(self: Self, key: K) ?*K

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKeyPtr(self: Self, key: K) ?*K {
    return self.unmanaged.getKeyPtrContext(key, self.ctx);
}

FunctiongetKeyPtrAdapted[src]

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K

Parameters

self: Self

Source Code

Source code
pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
    return self.unmanaged.getKeyPtrAdapted(key, ctx);
}

FunctiongetEntry[src]

pub fn getEntry(self: Self, key: K) ?Entry

Finds the key and value associated with a key in the map

Parameters

self: Self
key: K

Source Code

Source code
pub fn getEntry(self: Self, key: K) ?Entry {
    return self.unmanaged.getEntryContext(key, self.ctx);
}

FunctiongetEntryAdapted[src]

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry

Parameters

self: Self

Source Code

Source code
pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
    return self.unmanaged.getEntryAdapted(key, ctx);
}

Functioncontains[src]

pub fn contains(self: Self, key: K) bool

Check if the map contains a key

Parameters

self: Self
key: K

Source Code

Source code
pub fn contains(self: Self, key: K) bool {
    return self.unmanaged.containsContext(key, self.ctx);
}

FunctioncontainsAdapted[src]

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool

Parameters

self: Self

Source Code

Source code
pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
    return self.unmanaged.containsAdapted(key, ctx);
}

Functionremove[src]

pub fn remove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map, and this function returns true. Otherwise this function returns false.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key: K

Source Code

Source code
pub fn remove(self: *Self, key: K) bool {
    return self.unmanaged.removeContext(key, self.ctx);
}

FunctionremoveAdapted[src]

pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self

Source Code

Source code
pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    return self.unmanaged.removeAdapted(key, ctx);
}

FunctionremoveByPtr[src]

pub fn removeByPtr(self: *Self, key_ptr: *K) void

Delete the entry with key pointed to by key_ptr from the hash map. key_ptr is assumed to be a valid pointer to a key that is present in the hash map.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key_ptr: *K

Source Code

Source code
pub fn removeByPtr(self: *Self, key_ptr: *K) void {
    self.unmanaged.removeByPtr(key_ptr);
}

Functionclone[src]

pub fn clone(self: Self) Allocator.Error!Self

Creates a copy of this map, using the same allocator

Parameters

self: Self

Source Code

Source code
pub fn clone(self: Self) Allocator.Error!Self {
    var other = try self.unmanaged.cloneContext(self.allocator, self.ctx);
    return other.promoteContext(self.allocator, self.ctx);
}

FunctioncloneWithAllocator[src]

pub fn cloneWithAllocator(self: Self, new_allocator: Allocator) Allocator.Error!Self

Creates a copy of this map, using a specified allocator

Parameters

self: Self
new_allocator: Allocator

Source Code

Source code
pub fn cloneWithAllocator(self: Self, new_allocator: Allocator) Allocator.Error!Self {
    var other = try self.unmanaged.cloneContext(new_allocator, self.ctx);
    return other.promoteContext(new_allocator, self.ctx);
}

FunctioncloneWithContext[src]

pub fn cloneWithContext(self: Self, new_ctx: anytype) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage)

Creates a copy of this map, using a specified context

Parameters

self: Self

Source Code

Source code
pub fn cloneWithContext(self: Self, new_ctx: anytype) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage) {
    var other = try self.unmanaged.cloneContext(self.allocator, new_ctx);
    return other.promoteContext(self.allocator, new_ctx);
}

FunctioncloneWithAllocatorAndContext[src]

pub fn cloneWithAllocatorAndContext( self: Self, new_allocator: Allocator, new_ctx: anytype, ) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage)

Creates a copy of this map, using a specified allocator and context.

Parameters

self: Self
new_allocator: Allocator

Source Code

Source code
pub fn cloneWithAllocatorAndContext(
    self: Self,
    new_allocator: Allocator,
    new_ctx: anytype,
) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage) {
    var other = try self.unmanaged.cloneContext(new_allocator, new_ctx);
    return other.promoteContext(new_allocator, new_ctx);
}

Functionmove[src]

pub fn move(self: *Self) Self

Set the map to an empty state, making deinitialization a no-op, and returning a copy of the original.

Parameters

self: *Self

Source Code

Source code
pub fn move(self: *Self) Self {
    self.unmanaged.pointer_stability.assertUnlocked();
    const result = self.*;
    self.unmanaged = .empty;
    return result;
}
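
A sketch of the ownership transfer; after move the original is a valid empty map whose deinit is a no-op:

const std = @import("std");

test "move transfers ownership" {
    var map = std.AutoHashMap(u32, u32).init(std.testing.allocator);
    defer map.deinit(); // now deinitializes an empty map

    try map.put(1, 10);
    var moved = map.move();
    defer moved.deinit(); // the moved copy owns the allocation

    try std.testing.expectEqual(@as(u32, 0), map.count());
    try std.testing.expectEqual(@as(u32, 10), moved.get(1).?);
}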

Functionrehash[src]

pub fn rehash(self: *Self) void

Rehash the map, in-place.

Over time, due to the current tombstone-based implementation, a HashMap can become fragmented as tombstone entries build up, degrading performance through excessive probing. The kind of pattern that might cause this is a long-lived HashMap with repeated inserts and deletes.

After this function is called, there will be no tombstones in the HashMap, each of the entries is rehashed and any existing key/value pointers into the HashMap are invalidated.

Parameters

self: *Self

Source Code

Source code
pub fn rehash(self: *Self) void {
    self.unmanaged.rehash(self.ctx);
}
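
A sketch of the churn pattern described above; the exact iteration count is arbitrary:

const std = @import("std");

test "rehash after heavy churn" {
    var map = std.AutoHashMap(u32, u32).init(std.testing.allocator);
    defer map.deinit();

    // Repeated insert/delete cycles leave tombstones behind.
    var i: u32 = 0;
    while (i < 1000) : (i += 1) {
        try map.put(i, i);
        _ = map.remove(i);
    }
    try map.put(42, 1);

    map.rehash(); // compacts tombstones; all entry pointers are now invalid
    try std.testing.expectEqual(@as(u32, 1), map.get(42).?);
}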

Source Code

Source code
pub fn HashMap(
    comptime K: type,
    comptime V: type,
    comptime Context: type,
    comptime max_load_percentage: u64,
) type {
    return struct {
        unmanaged: Unmanaged,
        allocator: Allocator,
        ctx: Context,

        /// The type of the unmanaged hash map underlying this wrapper
        pub const Unmanaged = HashMapUnmanaged(K, V, Context, max_load_percentage);
        /// An entry, containing pointers to a key and value stored in the map
        pub const Entry = Unmanaged.Entry;
        /// A copy of a key and value which are no longer in the map
        pub const KV = Unmanaged.KV;
        /// The integer type that is the result of hashing
        pub const Hash = Unmanaged.Hash;
        /// The iterator type returned by iterator()
        pub const Iterator = Unmanaged.Iterator;

        pub const KeyIterator = Unmanaged.KeyIterator;
        pub const ValueIterator = Unmanaged.ValueIterator;

        /// The integer type used to store the size of the map
        pub const Size = Unmanaged.Size;
        /// The type returned from getOrPut and variants
        pub const GetOrPutResult = Unmanaged.GetOrPutResult;

        const Self = @This();

        /// Create a managed hash map with an empty context.
        /// If the context is not zero-sized, you must use
        /// initContext(allocator, ctx) instead.
        pub fn init(allocator: Allocator) Self {
            if (@sizeOf(Context) != 0) {
                @compileError("Context must be specified! Call initContext(allocator, ctx) instead.");
            }
            return .{
                .unmanaged = .empty,
                .allocator = allocator,
                .ctx = undefined, // ctx is zero-sized so this is safe.
            };
        }

        /// Create a managed hash map with a context
        pub fn initContext(allocator: Allocator, ctx: Context) Self {
            return .{
                .unmanaged = .empty,
                .allocator = allocator,
                .ctx = ctx,
            };
        }

        /// Puts the hash map into a state where any method call that would
        /// cause an existing key or value pointer to become invalidated will
        /// instead trigger an assertion.
        ///
        /// An additional call to `lockPointers` in such state also triggers an
        /// assertion.
        ///
        /// `unlockPointers` returns the hash map to the previous state.
        pub fn lockPointers(self: *Self) void {
            self.unmanaged.lockPointers();
        }

        /// Undoes a call to `lockPointers`.
        pub fn unlockPointers(self: *Self) void {
            self.unmanaged.unlockPointers();
        }

        /// Release the backing array and invalidate this map.
        /// This does *not* deinit keys, values, or the context!
        /// If your keys or values need to be released, ensure
        /// that that is done before calling this function.
        pub fn deinit(self: *Self) void {
            self.unmanaged.deinit(self.allocator);
            self.* = undefined;
        }

        /// Empty the map, but keep the backing allocation for future use.
        /// This does *not* free keys or values! Be sure to
        /// release them if they need deinitialization before
        /// calling this function.
        pub fn clearRetainingCapacity(self: *Self) void {
            return self.unmanaged.clearRetainingCapacity();
        }

        /// Empty the map and release the backing allocation.
        /// This does *not* free keys or values! Be sure to
        /// release them if they need deinitialization before
        /// calling this function.
        pub fn clearAndFree(self: *Self) void {
            return self.unmanaged.clearAndFree(self.allocator);
        }

        /// Return the number of items in the map.
        pub fn count(self: Self) Size {
            return self.unmanaged.count();
        }

        /// Create an iterator over the entries in the map.
        /// The iterator is invalidated if the map is modified.
        pub fn iterator(self: *const Self) Iterator {
            return self.unmanaged.iterator();
        }

        /// Create an iterator over the keys in the map.
        /// The iterator is invalidated if the map is modified.
        pub fn keyIterator(self: Self) KeyIterator {
            return self.unmanaged.keyIterator();
        }

        /// Create an iterator over the values in the map.
        /// The iterator is invalidated if the map is modified.
        pub fn valueIterator(self: Self) ValueIterator {
            return self.unmanaged.valueIterator();
        }

        /// If key exists this function cannot fail.
        /// If there is an existing item with `key`, then the result's
        /// `Entry` pointers point to it, and found_existing is true.
        /// Otherwise, puts a new item with undefined value, and
        /// the `Entry` pointers point to it. Caller should then initialize
        /// the value (but not the key).
        pub fn getOrPut(self: *Self, key: K) Allocator.Error!GetOrPutResult {
            return self.unmanaged.getOrPutContext(self.allocator, key, self.ctx);
        }

        /// If key exists this function cannot fail.
        /// If there is an existing item with `key`, then the result's
        /// `Entry` pointers point to it, and found_existing is true.
        /// Otherwise, puts a new item with undefined key and value, and
        /// the `Entry` pointers point to it. Caller must then initialize
        /// the key and value.
        pub fn getOrPutAdapted(self: *Self, key: anytype, ctx: anytype) Allocator.Error!GetOrPutResult {
            return self.unmanaged.getOrPutContextAdapted(self.allocator, key, ctx, self.ctx);
        }

        /// If there is an existing item with `key`, then the result's
        /// `Entry` pointers point to it, and found_existing is true.
        /// Otherwise, puts a new item with undefined value, and
        /// the `Entry` pointers point to it. Caller should then initialize
        /// the value (but not the key).
        /// If a new entry needs to be stored, this function asserts there
        /// is enough capacity to store it.
        pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
            return self.unmanaged.getOrPutAssumeCapacityContext(key, self.ctx);
        }

        /// If there is an existing item with `key`, then the result's
        /// `Entry` pointers point to it, and found_existing is true.
        /// Otherwise, puts a new item with undefined value, and
        /// the `Entry` pointers point to it. Caller must then initialize
        /// the key and value.
        /// If a new entry needs to be stored, this function asserts there
        /// is enough capacity to store it.
        pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {
            return self.unmanaged.getOrPutAssumeCapacityAdapted(key, ctx);
        }

        pub fn getOrPutValue(self: *Self, key: K, value: V) Allocator.Error!Entry {
            return self.unmanaged.getOrPutValueContext(self.allocator, key, value, self.ctx);
        }

        /// Increases capacity, guaranteeing that insertions up until the
        /// `expected_count` will not cause an allocation, and therefore cannot fail.
        pub fn ensureTotalCapacity(self: *Self, expected_count: Size) Allocator.Error!void {
            return self.unmanaged.ensureTotalCapacityContext(self.allocator, expected_count, self.ctx);
        }

        /// Increases capacity, guaranteeing that insertions up until
        /// `additional_count` **more** items will not cause an allocation, and
        /// therefore cannot fail.
        pub fn ensureUnusedCapacity(self: *Self, additional_count: Size) Allocator.Error!void {
            return self.unmanaged.ensureUnusedCapacityContext(self.allocator, additional_count, self.ctx);
        }

        /// Returns the number of total elements which may be present before it is
        /// no longer guaranteed that no allocations will be performed.
        pub fn capacity(self: Self) Size {
            return self.unmanaged.capacity();
        }

        /// Clobbers any existing data. To detect if a put would clobber
        /// existing data, see `getOrPut`.
        pub fn put(self: *Self, key: K, value: V) Allocator.Error!void {
            return self.unmanaged.putContext(self.allocator, key, value, self.ctx);
        }

        /// Inserts a key-value pair into the hash map, asserting that no previous
        /// entry with the same key is already present
        pub fn putNoClobber(self: *Self, key: K, value: V) Allocator.Error!void {
            return self.unmanaged.putNoClobberContext(self.allocator, key, value, self.ctx);
        }

        /// Asserts there is enough capacity to store the new key-value pair.
        /// Clobbers any existing data. To detect if a put would clobber
        /// existing data, see `getOrPutAssumeCapacity`.
        pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
            return self.unmanaged.putAssumeCapacityContext(key, value, self.ctx);
        }

        /// Asserts there is enough capacity to store the new key-value pair.
        /// Asserts that it does not clobber any existing data.
        /// To detect if a put would clobber existing data, see `getOrPutAssumeCapacity`.
        pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
            return self.unmanaged.putAssumeCapacityNoClobberContext(key, value, self.ctx);
        }

        /// Inserts a new `Entry` into the hash map, returning the previous one, if any.
        pub fn fetchPut(self: *Self, key: K, value: V) Allocator.Error!?KV {
            return self.unmanaged.fetchPutContext(self.allocator, key, value, self.ctx);
        }

        /// Inserts a new `Entry` into the hash map, returning the previous one, if any.
        /// If insertion happens, asserts there is enough capacity without allocating.
        pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
            return self.unmanaged.fetchPutAssumeCapacityContext(key, value, self.ctx);
        }

        /// Removes a value from the map and returns the removed kv pair.
        pub fn fetchRemove(self: *Self, key: K) ?KV {
            return self.unmanaged.fetchRemoveContext(key, self.ctx);
        }

        pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
            return self.unmanaged.fetchRemoveAdapted(key, ctx);
        }

        /// Finds the value associated with a key in the map
        pub fn get(self: Self, key: K) ?V {
            return self.unmanaged.getContext(key, self.ctx);
        }
        pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
            return self.unmanaged.getAdapted(key, ctx);
        }

        pub fn getPtr(self: Self, key: K) ?*V {
            return self.unmanaged.getPtrContext(key, self.ctx);
        }
        pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
            return self.unmanaged.getPtrAdapted(key, ctx);
        }

        /// Finds the actual key associated with an adapted key in the map
        pub fn getKey(self: Self, key: K) ?K {
            return self.unmanaged.getKeyContext(key, self.ctx);
        }
        pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
            return self.unmanaged.getKeyAdapted(key, ctx);
        }

        pub fn getKeyPtr(self: Self, key: K) ?*K {
            return self.unmanaged.getKeyPtrContext(key, self.ctx);
        }
        pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
            return self.unmanaged.getKeyPtrAdapted(key, ctx);
        }

        /// Finds the key and value associated with a key in the map
        pub fn getEntry(self: Self, key: K) ?Entry {
            return self.unmanaged.getEntryContext(key, self.ctx);
        }

        pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
            return self.unmanaged.getEntryAdapted(key, ctx);
        }

        /// Check if the map contains a key
        pub fn contains(self: Self, key: K) bool {
            return self.unmanaged.containsContext(key, self.ctx);
        }

        pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
            return self.unmanaged.containsAdapted(key, ctx);
        }

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map, and this function returns true.  Otherwise this
        /// function returns false.
        ///
        /// TODO: answer the question in these doc comments, does this
        /// increase the unused capacity by one?
        pub fn remove(self: *Self, key: K) bool {
            return self.unmanaged.removeContext(key, self.ctx);
        }

        /// TODO: answer the question in these doc comments, does this
        /// increase the unused capacity by one?
        pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool {
            return self.unmanaged.removeAdapted(key, ctx);
        }

        /// Delete the entry with key pointed to by key_ptr from the hash map.
        /// key_ptr is assumed to be a valid pointer to a key that is present
        /// in the hash map.
        ///
        /// TODO: answer the question in these doc comments, does this
        /// increase the unused capacity by one?
        pub fn removeByPtr(self: *Self, key_ptr: *K) void {
            self.unmanaged.removeByPtr(key_ptr);
        }

        /// Creates a copy of this map, using the same allocator
        pub fn clone(self: Self) Allocator.Error!Self {
            var other = try self.unmanaged.cloneContext(self.allocator, self.ctx);
            return other.promoteContext(self.allocator, self.ctx);
        }

        /// Creates a copy of this map, using a specified allocator
        pub fn cloneWithAllocator(self: Self, new_allocator: Allocator) Allocator.Error!Self {
            var other = try self.unmanaged.cloneContext(new_allocator, self.ctx);
            return other.promoteContext(new_allocator, self.ctx);
        }

        /// Creates a copy of this map, using a specified context
        pub fn cloneWithContext(self: Self, new_ctx: anytype) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage) {
            var other = try self.unmanaged.cloneContext(self.allocator, new_ctx);
            return other.promoteContext(self.allocator, new_ctx);
        }

        /// Creates a copy of this map, using a specified allocator and context.
        pub fn cloneWithAllocatorAndContext(
            self: Self,
            new_allocator: Allocator,
            new_ctx: anytype,
        ) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage) {
            var other = try self.unmanaged.cloneContext(new_allocator, new_ctx);
            return other.promoteContext(new_allocator, new_ctx);
        }

        /// Set the map to an empty state, making deinitialization a no-op, and
        /// returning a copy of the original.
        pub fn move(self: *Self) Self {
            self.unmanaged.pointer_stability.assertUnlocked();
            const result = self.*;
            self.unmanaged = .empty;
            return result;
        }

        /// Rehash the map, in-place.
        ///
        /// Over time, due to the current tombstone-based implementation, a
        /// HashMap could become fragmented due to the buildup of tombstone
        /// entries that causes a performance degradation due to excessive
        /// probing. The kind of pattern that might cause this is a long-lived
        /// HashMap with repeated inserts and deletes.
        ///
        /// After this function is called, there will be no tombstones in
        /// the HashMap, each of the entries is rehashed and any existing
        /// key/value pointers into the HashMap are invalidated.
        pub fn rehash(self: *Self) void {
            self.unmanaged.rehash(self.ctx);
        }
    };
}

Type FunctionHashMapUnmanaged[src]

A HashMap based on open addressing and linear probing. A lookup or modification typically incurs only 2 cache misses. No order is guaranteed and any modification invalidates live iterators. It achieves good performance with quite high load factors (by default, grow is triggered at 80% full) and only one byte of overhead per element. The struct itself is only 16 bytes for a small footprint. This comes at the price of handling size with u32, which should be reasonable enough for almost all uses. Deletions are achieved with tombstones.

Default initialization of this struct is deprecated; use .empty instead.
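
A minimal sketch of the unmanaged calling convention, using the std.AutoHashMapUnmanaged instantiation: the map starts as .empty and the allocator is threaded through each fallible call:

const std = @import("std");

test "unmanaged map with explicit allocator" {
    const gpa = std.testing.allocator;
    var map: std.AutoHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);

    try map.put(gpa, 1, 10);
    try std.testing.expectEqual(@as(u32, 10), map.get(1).?);
}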

Parameters

K: type
V: type
Context: type
max_load_percentage: u64

Types

TypeSize[src]

Source Code

Source code
pub const Size = u32

TypeHash[src]

Source Code

Source code
pub const Hash = u64

TypeKeyIterator[src]

Source Code

Source code
pub const KeyIterator = FieldIterator(K)

TypeValueIterator[src]

Source Code

Source code
pub const ValueIterator = FieldIterator(V)

TypeManaged[src]

Source Code

Source code
pub const Managed = HashMap(K, V, Context, max_load_percentage)

Fields

metadata: ?[*]Metadata = null

Pointer to the metadata.

size: Size = 0

Current number of elements in the hashmap.

available: Size = 0

Number of available slots before a grow is needed to satisfy the max_load_percentage.

pointer_stability: std.debug.SafetyLock = .{}

Used to detect memory safety violations.

Values

Constantempty[src]

A map containing no keys or values.

Source Code

Source code
pub const empty: Self = .{
    .metadata = null,
    .size = 0,
    .available = 0,
}

Functions

Functionpromote[src]

pub fn promote(self: Self, allocator: Allocator) Managed

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn promote(self: Self, allocator: Allocator) Managed {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call promoteContext instead.");
    return promoteContext(self, allocator, undefined);
}
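
A sketch of promotion; the managed wrapper reuses the same table, so only one of the two views should be deinitialized:

const std = @import("std");

test "promote an unmanaged map" {
    const gpa = std.testing.allocator;
    var unmanaged: std.AutoHashMapUnmanaged(u32, u32) = .empty;
    try unmanaged.put(gpa, 1, 10);

    // Wrap the same table in the managed API; no copying takes place.
    var managed = unmanaged.promote(gpa);
    defer managed.deinit();

    try std.testing.expectEqual(@as(u32, 10), managed.get(1).?);
}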

FunctionpromoteContext[src]

pub fn promoteContext(self: Self, allocator: Allocator, ctx: Context) Managed

Parameters

self: Self
allocator: Allocator
ctx: Context

Source Code

Source code
pub fn promoteContext(self: Self, allocator: Allocator, ctx: Context) Managed {
    return .{
        .unmanaged = self,
        .allocator = allocator,
        .ctx = ctx,
    };
}

FunctionlockPointers[src]

pub fn lockPointers(self: *Self) void

Puts the hash map into a state where any method call that would cause an existing key or value pointer to become invalidated will instead trigger an assertion.

An additional call to lockPointers in such state also triggers an assertion.

unlockPointers returns the hash map to the previous state.

Parameters

self: *Self

Source Code

Source code
pub fn lockPointers(self: *Self) void {
    self.pointer_stability.lock();
}

FunctionunlockPointers[src]

pub fn unlockPointers(self: *Self) void

Undoes a call to lockPointers.

Parameters

self: *Self

Source Code

Source code
pub fn unlockPointers(self: *Self) void {
    self.pointer_stability.unlock();
}

Functiondeinit[src]

pub fn deinit(self: *Self, allocator: Allocator) void

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn deinit(self: *Self, allocator: Allocator) void {
    self.pointer_stability.assertUnlocked();
    self.deallocate(allocator);
    self.* = undefined;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, allocator: Allocator, new_size: Size) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
new_size: Size

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, allocator: Allocator, new_size: Size) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
    return ensureTotalCapacityContext(self, allocator, new_size, undefined);
}

FunctionensureTotalCapacityContext[src]

pub fn ensureTotalCapacityContext(self: *Self, allocator: Allocator, new_size: Size, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
new_size: Size
ctx: Context

Source Code

Source code
pub fn ensureTotalCapacityContext(self: *Self, allocator: Allocator, new_size: Size, ctx: Context) Allocator.Error!void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();
    if (new_size > self.size)
        try self.growIfNeeded(allocator, new_size - self.size, ctx);
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, allocator: Allocator, additional_size: Size) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
additional_size: Size

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, allocator: Allocator, additional_size: Size) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureUnusedCapacityContext instead.");
    return ensureUnusedCapacityContext(self, allocator, additional_size, undefined);
}

FunctionensureUnusedCapacityContext[src]

pub fn ensureUnusedCapacityContext(self: *Self, allocator: Allocator, additional_size: Size, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
additional_size: Size
ctx: Context

Source Code

Source code
pub fn ensureUnusedCapacityContext(self: *Self, allocator: Allocator, additional_size: Size, ctx: Context) Allocator.Error!void {
    return ensureTotalCapacityContext(self, allocator, self.count() + additional_size, ctx);
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();
    if (self.metadata) |_| {
        self.initMetadatas();
        self.size = 0;
        self.available = @truncate((self.capacity() * max_load_percentage) / 100);
    }
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self, allocator: Allocator) void

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn clearAndFree(self: *Self, allocator: Allocator) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();
    self.deallocate(allocator);
    self.size = 0;
    self.available = 0;
}

Functioncount[src]

pub fn count(self: Self) Size

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) Size {
    return self.size;
}

Functioncapacity[src]

pub fn capacity(self: Self) Size

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) Size {
    if (self.metadata == null) return 0;

    return self.header().capacity;
}

Functioniterator[src]

pub fn iterator(self: *const Self) Iterator

Parameters

self: *const Self

Source Code

Source code
pub fn iterator(self: *const Self) Iterator {
    return .{ .hm = self };
}

FunctionkeyIterator[src]

pub fn keyIterator(self: Self) KeyIterator

Parameters

self: Self

Source Code

Source code
pub fn keyIterator(self: Self) KeyIterator {
    if (self.metadata) |metadata| {
        return .{
            .len = self.capacity(),
            .metadata = metadata,
            .items = self.keys(),
        };
    } else {
        return .{
            .len = 0,
            .metadata = undefined,
            .items = undefined,
        };
    }
}

FunctionvalueIterator[src]

pub fn valueIterator(self: Self) ValueIterator

Parameters

self: Self

Source Code

Source code
pub fn valueIterator(self: Self) ValueIterator {
    if (self.metadata) |metadata| {
        return .{
            .len = self.capacity(),
            .metadata = metadata,
            .items = self.values(),
        };
    } else {
        return .{
            .len = 0,
            .metadata = undefined,
            .items = undefined,
        };
    }
}

FunctionputNoClobber[src]

pub fn putNoClobber(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void

Insert an entry in the map. Assumes it is not already present.

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn putNoClobber(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putNoClobberContext instead.");
    return self.putNoClobberContext(allocator, key, value, undefined);
}

FunctionputNoClobberContext[src]

pub fn putNoClobberContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putNoClobberContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void {
    {
        self.pointer_stability.lock();
        defer self.pointer_stability.unlock();
        try self.growIfNeeded(allocator, 1, ctx);
    }
    self.putAssumeCapacityNoClobberContext(key, value, ctx);
}

FunctionputAssumeCapacity[src]

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Clobbers any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityContext instead.");
    return self.putAssumeCapacityContext(key, value, undefined);
}

FunctionputAssumeCapacityContext[src]

pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void {
    const gop = self.getOrPutAssumeCapacityContext(key, ctx);
    gop.value_ptr.* = value;
}

FunctionputAssumeCapacityNoClobber[src]

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void

Insert an entry in the map. Assumes it is not already present, and that no allocation is needed.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityNoClobberContext instead.");
    return self.putAssumeCapacityNoClobberContext(key, value, undefined);
}

FunctionputAssumeCapacityNoClobberContext[src]

pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void {
    assert(!self.containsContext(key, ctx));

    const hash: Hash = ctx.hash(key);
    const mask = self.capacity() - 1;
    var idx: usize = @truncate(hash & mask);

    var metadata = self.metadata.? + idx;
    while (metadata[0].isUsed()) {
        idx = (idx + 1) & mask;
        metadata = self.metadata.? + idx;
    }

    assert(self.available > 0);
    self.available -= 1;

    const fingerprint = Metadata.takeFingerprint(hash);
    metadata[0].fill(fingerprint);
    self.keys()[idx] = key;
    self.values()[idx] = value;

    self.size += 1;
}

FunctionfetchPut[src]

pub fn fetchPut(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!?KV

Inserts a new Entry into the hash map, returning the previous one, if any.

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn fetchPut(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutContext instead.");
    return self.fetchPutContext(allocator, key, value, undefined);
}

FunctionfetchPutContext[src]

pub fn fetchPutContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!?KV

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!?KV {
    const gop = try self.getOrPutContext(allocator, key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctionfetchPutAssumeCapacity[src]

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV

Inserts a new Entry into the hash map, returning the previous one, if any. If insertion happens, asserts there is enough capacity without allocating.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutAssumeCapacityContext instead.");
    return self.fetchPutAssumeCapacityContext(key, value, undefined);
}

FunctionfetchPutAssumeCapacityContext[src]

pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV {
    const gop = self.getOrPutAssumeCapacityContext(key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctionfetchRemove[src]

pub fn fetchRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchRemove(self: *Self, key: K) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchRemoveContext instead.");
    return self.fetchRemoveContext(key, undefined);
}

FunctionfetchRemoveContext[src]

pub fn fetchRemoveContext(self: *Self, key: K, ctx: Context) ?KV

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn fetchRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
    return self.fetchRemoveAdapted(key, ctx);
}

FunctionfetchRemoveAdapted[src]

pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    if (self.getIndex(key, ctx)) |idx| {
        const old_key = &self.keys()[idx];
        const old_val = &self.values()[idx];
        const result = KV{
            .key = old_key.*,
            .value = old_val.*,
        };
        self.metadata.?[idx].remove();
        old_key.* = undefined;
        old_val.* = undefined;
        self.size -= 1;
        self.available += 1;
        return result;
    }

    return null;
}

FunctiongetEntry[src]

pub fn getEntry(self: Self, key: K) ?Entry

Parameters

self: Self
key: K

Source Code

Source code
pub fn getEntry(self: Self, key: K) ?Entry {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getEntryContext instead.");
    return self.getEntryContext(key, undefined);
}

FunctiongetEntryContext[src]

pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry {
    return self.getEntryAdapted(key, ctx);
}

FunctiongetEntryAdapted[src]

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry

Parameters

self: Self

Source Code

Source code
pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
    if (self.getIndex(key, ctx)) |idx| {
        return Entry{
            .key_ptr = &self.keys()[idx],
            .value_ptr = &self.values()[idx],
        };
    }
    return null;
}

Functionput[src]

pub fn put(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void

Insert an entry if the associated key is not already present, otherwise update preexisting value.

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn put(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putContext instead.");
    return self.putContext(allocator, key, value, undefined);
}

FunctionputContext[src]

pub fn putContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void {
    const result = try self.getOrPutContext(allocator, key, ctx);
    result.value_ptr.* = value;
}

FunctiongetKeyPtr[src]

pub fn getKeyPtr(self: Self, key: K) ?*K

Get an optional pointer to the actual key associated with adapted key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKeyPtr(self: Self, key: K) ?*K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyPtrContext instead.");
    return self.getKeyPtrContext(key, undefined);
}

FunctiongetKeyPtrContext[src]

pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K {
    return self.getKeyPtrAdapted(key, ctx);
}

FunctiongetKeyPtrAdapted[src]

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K

Parameters

self: Self

Source Code

Source code
pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
    if (self.getIndex(key, ctx)) |idx| {
        return &self.keys()[idx];
    }
    return null;
}

FunctiongetKey[src]

pub fn getKey(self: Self, key: K) ?K

Get a copy of the actual key associated with adapted key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKey(self: Self, key: K) ?K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyContext instead.");
    return self.getKeyContext(key, undefined);
}

FunctiongetKeyContext[src]

pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K {
    return self.getKeyAdapted(key, ctx);
}

FunctiongetKeyAdapted[src]

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K

Parameters

self: Self

Source Code

Source code
pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
    if (self.getIndex(key, ctx)) |idx| {
        return self.keys()[idx];
    }
    return null;
}

FunctiongetPtr[src]

pub fn getPtr(self: Self, key: K) ?*V

Get an optional pointer to the value associated with key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getPtr(self: Self, key: K) ?*V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getPtrContext instead.");
    return self.getPtrContext(key, undefined);
}

FunctiongetPtrContext[src]

pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V {
    return self.getPtrAdapted(key, ctx);
}

FunctiongetPtrAdapted[src]

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V

Parameters

self: Self

Source Code

Source code
pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
    if (self.getIndex(key, ctx)) |idx| {
        return &self.values()[idx];
    }
    return null;
}

Functionget[src]

pub fn get(self: Self, key: K) ?V

Get a copy of the value associated with key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn get(self: Self, key: K) ?V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getContext instead.");
    return self.getContext(key, undefined);
}

FunctiongetContext[src]

pub fn getContext(self: Self, key: K, ctx: Context) ?V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getContext(self: Self, key: K, ctx: Context) ?V {
    return self.getAdapted(key, ctx);
}

FunctiongetAdapted[src]

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V

Parameters

self: Self

Source Code

Source code
pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
    if (self.getIndex(key, ctx)) |idx| {
        return self.values()[idx];
    }
    return null;
}

FunctiongetOrPut[src]

pub fn getOrPut(self: *Self, allocator: Allocator, key: K) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator
key: K

Source Code

Source code
pub fn getOrPut(self: *Self, allocator: Allocator, key: K) Allocator.Error!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContext instead.");
    return self.getOrPutContext(allocator, key, undefined);
}

FunctiongetOrPutContext[src]

pub fn getOrPutContext(self: *Self, allocator: Allocator, key: K, ctx: Context) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutContext(self: *Self, allocator: Allocator, key: K, ctx: Context) Allocator.Error!GetOrPutResult {
    const gop = try self.getOrPutContextAdapted(allocator, key, ctx, ctx);
    if (!gop.found_existing) {
        gop.key_ptr.* = key;
    }
    return gop;
}

FunctiongetOrPutAdapted[src]

pub fn getOrPutAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn getOrPutAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype) Allocator.Error!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContextAdapted instead.");
    return self.getOrPutContextAdapted(allocator, key, key_ctx, undefined);
}

FunctiongetOrPutContextAdapted[src]

pub fn getOrPutContextAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator
ctx: Context

Source Code

Source code
pub fn getOrPutContextAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Allocator.Error!GetOrPutResult {
    {
        self.pointer_stability.lock();
        defer self.pointer_stability.unlock();
        self.growIfNeeded(allocator, 1, ctx) catch |err| {
            // If allocation fails, try to do the lookup anyway.
            // If we find an existing item, we can return it.
            // Otherwise return the error, we could not add another.
            const index = self.getIndex(key, key_ctx) orelse return err;
            return GetOrPutResult{
                .key_ptr = &self.keys()[index],
                .value_ptr = &self.values()[index],
                .found_existing = true,
            };
        };
    }
    return self.getOrPutAssumeCapacityAdapted(key, key_ctx);
}

FunctiongetOrPutAssumeCapacity[src]

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutAssumeCapacityContext instead.");
    return self.getOrPutAssumeCapacityContext(key, undefined);
}
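
Once capacity has been reserved, for example via ensureUnusedCapacity (shown in the full source listing below), the AssumeCapacity variants take no allocator and cannot fail. A sketch; batch is a hypothetical slice of key/value structs:

// `batch` is a hypothetical []const struct { key: u32, value: []const u8 }.
try map.ensureUnusedCapacity(gpa, @intCast(batch.len));
for (batch) |kv| {
    // Cannot allocate: room for batch.len new entries was reserved above.
    const gop = map.getOrPutAssumeCapacity(kv.key);
    gop.value_ptr.* = kv.value;
}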

FunctiongetOrPutAssumeCapacityContext[src]

pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult {
    const result = self.getOrPutAssumeCapacityAdapted(key, ctx);
    if (!result.found_existing) {
        result.key_ptr.* = key;
    }
    return result;
}

FunctiongetOrPutAssumeCapacityAdapted[src]

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {

    // If you get a compile error on this line, it means that your generic hash
    // function is invalid for these parameters.
    const hash: Hash = ctx.hash(key);

    const mask = self.capacity() - 1;
    const fingerprint = Metadata.takeFingerprint(hash);
    var limit = self.capacity();
    var idx = @as(usize, @truncate(hash & mask));

    var first_tombstone_idx: usize = self.capacity(); // invalid index
    var metadata = self.metadata.? + idx;
    while (!metadata[0].isFree() and limit != 0) {
        if (metadata[0].isUsed() and metadata[0].fingerprint == fingerprint) {
            const test_key = &self.keys()[idx];
            // If you get a compile error on this line, it means that your generic eql
            // function is invalid for these parameters.

            if (ctx.eql(key, test_key.*)) {
                return GetOrPutResult{
                    .key_ptr = test_key,
                    .value_ptr = &self.values()[idx],
                    .found_existing = true,
                };
            }
        } else if (first_tombstone_idx == self.capacity() and metadata[0].isTombstone()) {
            first_tombstone_idx = idx;
        }

        limit -= 1;
        idx = (idx + 1) & mask;
        metadata = self.metadata.? + idx;
    }

    if (first_tombstone_idx < self.capacity()) {
    // A cheap attempt to lower probe lengths after deletions: recycle a tombstone.
        idx = first_tombstone_idx;
        metadata = self.metadata.? + idx;
    }
    // We're using a slot previously free or a tombstone.
    self.available -= 1;

    metadata[0].fill(fingerprint);
    const new_key = &self.keys()[idx];
    const new_value = &self.values()[idx];
    new_key.* = undefined;
    new_value.* = undefined;
    self.size += 1;

    return GetOrPutResult{
        .key_ptr = new_key,
        .value_ptr = new_value,
        .found_existing = false,
    };
}

FunctiongetOrPutValue[src]

pub fn getOrPutValue(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!Entry

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn getOrPutValue(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!Entry {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutValueContext instead.");
    return self.getOrPutValueContext(allocator, key, value, undefined);
}
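
getOrPutValue inserts the provided default only when the key is missing and always returns an Entry, which makes counter-style updates a one-liner. Continuing the counting sketch above:

const entry = try counts.getOrPutValue(gpa, 'z', 0);
entry.value_ptr.* += 1; // counts from the freshly inserted 0 if 'z' was absent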

FunctiongetOrPutValueContext[src]

pub fn getOrPutValueContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!Entry

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn getOrPutValueContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!Entry {
    const res = try self.getOrPutAdapted(allocator, key, ctx);
    if (!res.found_existing) {
        res.key_ptr.* = key;
        res.value_ptr.* = value;
    }
    return Entry{ .key_ptr = res.key_ptr, .value_ptr = res.value_ptr };
}

Functioncontains[src]

pub fn contains(self: Self, key: K) bool

Return true if there is a value associated with key in the map.

Parameters

self: Self
key: K

Source Code

Source code
pub fn contains(self: Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call containsContext instead.");
    return self.containsContext(key, undefined);
}

FunctioncontainsContext[src]

pub fn containsContext(self: Self, key: K, ctx: Context) bool

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn containsContext(self: Self, key: K, ctx: Context) bool {
    return self.containsAdapted(key, ctx);
}

FunctioncontainsAdapted[src]

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool

Parameters

self: Self

Source Code

Source code
pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
    return self.getIndex(key, ctx) != null;
}

Functionremove[src]

pub fn remove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map, and this function returns true. Otherwise this function returns false.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key: K

Source Code

Source code
pub fn remove(self: *Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call removeContext instead.");
    return self.removeContext(key, undefined);
}
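
A removal sketch, continuing the illustrative map from the earlier examples:

if (map.remove(1)) {
    // Key 1 was present and is now deleted; its slot became a tombstone.
} else {
    // Key 1 was not in the map; nothing changed.
}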

FunctionremoveContext[src]

pub fn removeContext(self: *Self, key: K, ctx: Context) bool

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn removeContext(self: *Self, key: K, ctx: Context) bool {
    return self.removeAdapted(key, ctx);
}

FunctionremoveAdapted[src]

pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self

Source Code

Source code
pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    if (self.getIndex(key, ctx)) |idx| {
        self.removeByIndex(idx);
        return true;
    }

    return false;
}

FunctionremoveByPtr[src]

pub fn removeByPtr(self: *Self, key_ptr: *K) void

Delete the entry with key pointed to by key_ptr from the hash map. key_ptr is assumed to be a valid pointer to a key that is present in the hash map.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key_ptr: *K

Source Code

Source code
pub fn removeByPtr(self: *Self, key_ptr: *K) void {
    // TODO: replace with pointer subtraction once supported by zig
    // if @sizeOf(K) == 0 then there is at most one item in the hash
    // map, which is assumed to exist as key_ptr must be valid.  This
    // item must be at index 0.
    const idx = if (@sizeOf(K) > 0)
        (@intFromPtr(key_ptr) - @intFromPtr(self.keys())) / @sizeOf(K)
    else
        0;

    self.removeByIndex(idx);
}
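
Because removeByPtr recomputes the slot index from the key pointer, it pairs naturally with getKeyPtr (shown in the full source listing below). A sketch:

if (map.getKeyPtr(1)) |key_ptr| {
    // key_ptr points into this map's key array, as removeByPtr requires.
    map.removeByPtr(key_ptr);
}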

Functionclone[src]

pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call cloneContext instead.");
    return self.cloneContext(allocator, @as(Context, undefined));
}

FunctioncloneContext[src]

pub fn cloneContext(self: Self, allocator: Allocator, new_ctx: anytype) Allocator.Error!HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage)

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn cloneContext(self: Self, allocator: Allocator, new_ctx: anytype) Allocator.Error!HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage) {
    var other: HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage) = .empty;
    if (self.size == 0)
        return other;

    const new_cap = capacityForSize(self.size);
    try other.allocate(allocator, new_cap);
    other.initMetadatas();
    other.available = @truncate((new_cap * max_load_percentage) / 100);

    var i: Size = 0;
    var metadata = self.metadata.?;
    const keys_ptr = self.keys();
    const values_ptr = self.values();
    while (i < self.capacity()) : (i += 1) {
        if (metadata[i].isUsed()) {
            other.putAssumeCapacityNoClobberContext(keys_ptr[i], values_ptr[i], new_ctx);
            if (other.size == self.size)
                break;
        }
    }

    return other;
}
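
clone copies every live entry into a freshly sized table; tombstones are not carried over. A sketch:

var snapshot = try map.clone(gpa);
defer snapshot.deinit(gpa);
// `snapshot` owns an independent allocation; later writes to `map`
// are not reflected in it.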

Functionmove[src]

pub fn move(self: *Self) Self

Set the map to an empty state, making deinitialization a no-op, and returning a copy of the original.

Parameters

self: *Self

Source Code

Source code
pub fn move(self: *Self) Self {
    self.pointer_stability.assertUnlocked();
    const result = self.*;
    self.* = .empty;
    return result;
}
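
A sketch of the ownership transfer:

var taken = map.move();
defer taken.deinit(gpa);
// `map` is reset to .empty: it no longer owns the buffer, so a later
// map.deinit(gpa) has nothing to free.
std.debug.assert(map.count() == 0);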

Functionrehash[src]

pub fn rehash(self: *Self, ctx: anytype) void

Rehash the map, in-place.

Over time, with the current tombstone-based implementation, a HashMap can become fragmented by the buildup of tombstone entries, which degrades performance through excessive probing. The kind of pattern that causes this is a long-lived HashMap with repeated inserts and deletes.

After this function is called, there will be no tombstones in the HashMap: each entry is rehashed, and any existing key/value pointers into the HashMap are invalidated.

Parameters

self: *Self

Source Code

Source code
pub fn rehash(self: *Self, ctx: anytype) void {
    const mask = self.capacity() - 1;

    var metadata = self.metadata.?;
    var keys_ptr = self.keys();
    var values_ptr = self.values();
    var curr: Size = 0;

    // While we are re-hashing every slot, we will use the
    // fingerprint to mark used buckets as being used and either free
    // (needing to be rehashed) or tombstone (already rehashed).

    while (curr < self.capacity()) : (curr += 1) {
        metadata[curr].fingerprint = Metadata.free;
    }

    // Now iterate over all the buckets, rehashing them

    curr = 0;
    while (curr < self.capacity()) {
        if (!metadata[curr].isUsed()) {
            assert(metadata[curr].isFree());
            curr += 1;
            continue;
        }

        const hash = ctx.hash(keys_ptr[curr]);
        const fingerprint = Metadata.takeFingerprint(hash);
        var idx = @as(usize, @truncate(hash & mask));

        // For each bucket, rehash to an index:
        // 1) before the cursor, probed into a free slot, or
        // 2) equal to the cursor, no need to move, or
        // 3) ahead of the cursor, probing over already rehashed

        while ((idx < curr and metadata[idx].isUsed()) or
            (idx > curr and metadata[idx].fingerprint == Metadata.tombstone))
        {
            idx = (idx + 1) & mask;
        }

        if (idx < curr) {
            assert(metadata[idx].isFree());
            metadata[idx].fill(fingerprint);
            keys_ptr[idx] = keys_ptr[curr];
            values_ptr[idx] = values_ptr[curr];

            metadata[curr].used = 0;
            assert(metadata[curr].isFree());
            keys_ptr[curr] = undefined;
            values_ptr[curr] = undefined;

            curr += 1;
        } else if (idx == curr) {
            metadata[idx].fingerprint = fingerprint;
            curr += 1;
        } else {
            assert(metadata[idx].fingerprint != Metadata.tombstone);
            metadata[idx].fingerprint = Metadata.tombstone;
            if (metadata[idx].isUsed()) {
                std.mem.swap(K, &keys_ptr[curr], &keys_ptr[idx]);
                std.mem.swap(V, &values_ptr[curr], &values_ptr[idx]);
            } else {
                metadata[idx].used = 1;
                keys_ptr[idx] = keys_ptr[curr];
                values_ptr[idx] = values_ptr[curr];

                metadata[curr].fingerprint = Metadata.free;
                metadata[curr].used = 0;
                keys_ptr[curr] = undefined;
                values_ptr[curr] = undefined;

                curr += 1;
            }
        }
    }
}
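
A sketch of an in-place rehash. For an auto-hashed map the context is the zero-sized std.hash_map.AutoContext matching the key type; that the map was declared with u32 keys is an assumption carried over from the earlier examples:

// After a long run of inserts and removals, drop accumulated tombstones:
map.rehash(std.hash_map.AutoContext(u32){});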

Source Code

Source code
pub fn HashMapUnmanaged(
    comptime K: type,
    comptime V: type,
    comptime Context: type,
    comptime max_load_percentage: u64,
) type {
    if (max_load_percentage <= 0 or max_load_percentage >= 100)
        @compileError("max_load_percentage must be between 0 and 100.");
    return struct {
        const Self = @This();

        // This is actually a midway pointer to the single buffer containing
        // a `Header` field, the `Metadata`s, and the key and value arrays.
        // At `-@sizeOf(Header)` is the Header field.
        // Past the metadata, pointed to by `self.header().keys` and
        // `self.header().values`, are the arrays of keys and values.
        // This means that the hashmap only holds one live allocation, to
        // reduce memory fragmentation and struct size.
        /// Pointer to the metadata.
        metadata: ?[*]Metadata = null,

        /// Current number of elements in the hashmap.
        size: Size = 0,

        // Having a countdown to grow reduces the number of instructions to
        // execute when determining if the hashmap has enough capacity already.
        /// Number of available slots before a grow is needed to satisfy the
        /// `max_load_percentage`.
        available: Size = 0,

        /// Used to detect memory safety violations.
        pointer_stability: std.debug.SafetyLock = .{},

        // This is purely empirical and not a /very smart magic constant™/.
        /// Capacity of the first grow when bootstrapping the hashmap.
        const minimal_capacity = 8;

        /// A map containing no keys or values.
        pub const empty: Self = .{
            .metadata = null,
            .size = 0,
            .available = 0,
        };

        // This hashmap is specially designed for sizes that fit in a u32.
        pub const Size = u32;

        // u64 hashes guarantee us that the fingerprint bits will never be used
        // to compute the index of a slot, maximizing the use of entropy.
        pub const Hash = u64;

        pub const Entry = struct {
            key_ptr: *K,
            value_ptr: *V,
        };

        pub const KV = struct {
            key: K,
            value: V,
        };

        const Header = struct {
            values: [*]V,
            keys: [*]K,
            capacity: Size,
        };

        /// Metadata for a slot. It can be in three states: empty, used or
        /// tombstone. Tombstones indicate that an entry was previously used,
        /// they are a simple way to handle removal.
        /// To this state, we add 7 bits from the slot's key hash. These are
        /// used as a fast way to disambiguate between entries without
        /// having to use the equality function. If two fingerprints are
        /// different, we know that we don't have to compare the keys at all.
        /// The 7 bits are the highest ones from a 64 bit hash. This way, not
        /// only do we use the `log2(capacity)` lowest bits from the hash to
        /// determine a slot index, but we also use 7 more bits to quickly
        /// resolve collisions when multiple elements with different hashes
        /// end up wanting to be in the same slot.
        /// Not using the equality function means we don't have to read into
        /// the entries array, likely avoiding a cache miss and a potentially
        /// costly function call.
        const Metadata = packed struct {
            const FingerPrint = u7;

            const free: FingerPrint = 0;
            const tombstone: FingerPrint = 1;

            fingerprint: FingerPrint = free,
            used: u1 = 0,

            const slot_free = @as(u8, @bitCast(Metadata{ .fingerprint = free }));
            const slot_tombstone = @as(u8, @bitCast(Metadata{ .fingerprint = tombstone }));

            pub fn isUsed(self: Metadata) bool {
                return self.used == 1;
            }

            pub fn isTombstone(self: Metadata) bool {
                return @as(u8, @bitCast(self)) == slot_tombstone;
            }

            pub fn isFree(self: Metadata) bool {
                return @as(u8, @bitCast(self)) == slot_free;
            }

            pub fn takeFingerprint(hash: Hash) FingerPrint {
                const hash_bits = @typeInfo(Hash).int.bits;
                const fp_bits = @typeInfo(FingerPrint).int.bits;
                return @as(FingerPrint, @truncate(hash >> (hash_bits - fp_bits)));
            }

            pub fn fill(self: *Metadata, fp: FingerPrint) void {
                self.used = 1;
                self.fingerprint = fp;
            }

            pub fn remove(self: *Metadata) void {
                self.used = 0;
                self.fingerprint = tombstone;
            }
        };

        comptime {
            assert(@sizeOf(Metadata) == 1);
            assert(@alignOf(Metadata) == 1);
        }

        pub const Iterator = struct {
            hm: *const Self,
            index: Size = 0,

            pub fn next(it: *Iterator) ?Entry {
                assert(it.index <= it.hm.capacity());
                if (it.hm.size == 0) return null;

                const cap = it.hm.capacity();
                const end = it.hm.metadata.? + cap;
                var metadata = it.hm.metadata.? + it.index;

                while (metadata != end) : ({
                    metadata += 1;
                    it.index += 1;
                }) {
                    if (metadata[0].isUsed()) {
                        const key = &it.hm.keys()[it.index];
                        const value = &it.hm.values()[it.index];
                        it.index += 1;
                        return Entry{ .key_ptr = key, .value_ptr = value };
                    }
                }

                return null;
            }
        };

        pub const KeyIterator = FieldIterator(K);
        pub const ValueIterator = FieldIterator(V);

        fn FieldIterator(comptime T: type) type {
            return struct {
                len: usize,
                metadata: [*]const Metadata,
                items: [*]T,

                pub fn next(self: *@This()) ?*T {
                    while (self.len > 0) {
                        self.len -= 1;
                        const used = self.metadata[0].isUsed();
                        const item = &self.items[0];
                        self.metadata += 1;
                        self.items += 1;
                        if (used) {
                            return item;
                        }
                    }
                    return null;
                }
            };
        }

        pub const GetOrPutResult = struct {
            key_ptr: *K,
            value_ptr: *V,
            found_existing: bool,
        };

        pub const Managed = HashMap(K, V, Context, max_load_percentage);

        pub fn promote(self: Self, allocator: Allocator) Managed {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call promoteContext instead.");
            return promoteContext(self, allocator, undefined);
        }

        pub fn promoteContext(self: Self, allocator: Allocator, ctx: Context) Managed {
            return .{
                .unmanaged = self,
                .allocator = allocator,
                .ctx = ctx,
            };
        }

        /// Puts the hash map into a state where any method call that would
        /// cause an existing key or value pointer to become invalidated will
        /// instead trigger an assertion.
        ///
        /// An additional call to `lockPointers` in such state also triggers an
        /// assertion.
        ///
        /// `unlockPointers` returns the hash map to the previous state.
        pub fn lockPointers(self: *Self) void {
            self.pointer_stability.lock();
        }

        /// Undoes a call to `lockPointers`.
        pub fn unlockPointers(self: *Self) void {
            self.pointer_stability.unlock();
        }

        fn isUnderMaxLoadPercentage(size: Size, cap: Size) bool {
            return size * 100 < max_load_percentage * cap;
        }

        pub fn deinit(self: *Self, allocator: Allocator) void {
            self.pointer_stability.assertUnlocked();
            self.deallocate(allocator);
            self.* = undefined;
        }

        fn capacityForSize(size: Size) Size {
            var new_cap: u32 = @intCast((@as(u64, size) * 100) / max_load_percentage + 1);
            new_cap = math.ceilPowerOfTwo(u32, new_cap) catch unreachable;
            return new_cap;
        }

        pub fn ensureTotalCapacity(self: *Self, allocator: Allocator, new_size: Size) Allocator.Error!void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
            return ensureTotalCapacityContext(self, allocator, new_size, undefined);
        }
        pub fn ensureTotalCapacityContext(self: *Self, allocator: Allocator, new_size: Size, ctx: Context) Allocator.Error!void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();
            if (new_size > self.size)
                try self.growIfNeeded(allocator, new_size - self.size, ctx);
        }

        pub fn ensureUnusedCapacity(self: *Self, allocator: Allocator, additional_size: Size) Allocator.Error!void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureUnusedCapacityContext instead.");
            return ensureUnusedCapacityContext(self, allocator, additional_size, undefined);
        }
        pub fn ensureUnusedCapacityContext(self: *Self, allocator: Allocator, additional_size: Size, ctx: Context) Allocator.Error!void {
            return ensureTotalCapacityContext(self, allocator, self.count() + additional_size, ctx);
        }

        pub fn clearRetainingCapacity(self: *Self) void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();
            if (self.metadata) |_| {
                self.initMetadatas();
                self.size = 0;
                self.available = @truncate((self.capacity() * max_load_percentage) / 100);
            }
        }

        pub fn clearAndFree(self: *Self, allocator: Allocator) void {
            self.pointer_stability.lock();
            defer self.pointer_stability.unlock();
            self.deallocate(allocator);
            self.size = 0;
            self.available = 0;
        }

        pub fn count(self: Self) Size {
            return self.size;
        }

        fn header(self: Self) *Header {
            return @ptrCast(@as([*]Header, @ptrCast(@alignCast(self.metadata.?))) - 1);
        }

        fn keys(self: Self) [*]K {
            return self.header().keys;
        }

        fn values(self: Self) [*]V {
            return self.header().values;
        }

        pub fn capacity(self: Self) Size {
            if (self.metadata == null) return 0;

            return self.header().capacity;
        }

        pub fn iterator(self: *const Self) Iterator {
            return .{ .hm = self };
        }

        pub fn keyIterator(self: Self) KeyIterator {
            if (self.metadata) |metadata| {
                return .{
                    .len = self.capacity(),
                    .metadata = metadata,
                    .items = self.keys(),
                };
            } else {
                return .{
                    .len = 0,
                    .metadata = undefined,
                    .items = undefined,
                };
            }
        }

        pub fn valueIterator(self: Self) ValueIterator {
            if (self.metadata) |metadata| {
                return .{
                    .len = self.capacity(),
                    .metadata = metadata,
                    .items = self.values(),
                };
            } else {
                return .{
                    .len = 0,
                    .metadata = undefined,
                    .items = undefined,
                };
            }
        }

        /// Insert an entry in the map. Assumes it is not already present.
        pub fn putNoClobber(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putNoClobberContext instead.");
            return self.putNoClobberContext(allocator, key, value, undefined);
        }
        pub fn putNoClobberContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void {
            {
                self.pointer_stability.lock();
                defer self.pointer_stability.unlock();
                try self.growIfNeeded(allocator, 1, ctx);
            }
            self.putAssumeCapacityNoClobberContext(key, value, ctx);
        }

        /// Asserts there is enough capacity to store the new key-value pair.
        /// Clobbers any existing data. To detect if a put would clobber
        /// existing data, see `getOrPutAssumeCapacity`.
        pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityContext instead.");
            return self.putAssumeCapacityContext(key, value, undefined);
        }
        pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void {
            const gop = self.getOrPutAssumeCapacityContext(key, ctx);
            gop.value_ptr.* = value;
        }

        /// Insert an entry in the map. Assumes it is not already present,
        /// and that no allocation is needed.
        pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityNoClobberContext instead.");
            return self.putAssumeCapacityNoClobberContext(key, value, undefined);
        }
        pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void {
            assert(!self.containsContext(key, ctx));

            const hash: Hash = ctx.hash(key);
            const mask = self.capacity() - 1;
            var idx: usize = @truncate(hash & mask);

            var metadata = self.metadata.? + idx;
            while (metadata[0].isUsed()) {
                idx = (idx + 1) & mask;
                metadata = self.metadata.? + idx;
            }

            assert(self.available > 0);
            self.available -= 1;

            const fingerprint = Metadata.takeFingerprint(hash);
            metadata[0].fill(fingerprint);
            self.keys()[idx] = key;
            self.values()[idx] = value;

            self.size += 1;
        }

        /// Inserts a new `Entry` into the hash map, returning the previous one, if any.
        pub fn fetchPut(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!?KV {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutContext instead.");
            return self.fetchPutContext(allocator, key, value, undefined);
        }
        pub fn fetchPutContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!?KV {
            const gop = try self.getOrPutContext(allocator, key, ctx);
            var result: ?KV = null;
            if (gop.found_existing) {
                result = KV{
                    .key = gop.key_ptr.*,
                    .value = gop.value_ptr.*,
                };
            }
            gop.value_ptr.* = value;
            return result;
        }

        /// Inserts a new `Entry` into the hash map, returning the previous one, if any.
        /// If insertion happens, asserts there is enough capacity without allocating.
        pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutAssumeCapacityContext instead.");
            return self.fetchPutAssumeCapacityContext(key, value, undefined);
        }
        pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV {
            const gop = self.getOrPutAssumeCapacityContext(key, ctx);
            var result: ?KV = null;
            if (gop.found_existing) {
                result = KV{
                    .key = gop.key_ptr.*,
                    .value = gop.value_ptr.*,
                };
            }
            gop.value_ptr.* = value;
            return result;
        }

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map, and then returned from this function.
        pub fn fetchRemove(self: *Self, key: K) ?KV {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchRemoveContext instead.");
            return self.fetchRemoveContext(key, undefined);
        }
        pub fn fetchRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
            return self.fetchRemoveAdapted(key, ctx);
        }
        pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
            if (self.getIndex(key, ctx)) |idx| {
                const old_key = &self.keys()[idx];
                const old_val = &self.values()[idx];
                const result = KV{
                    .key = old_key.*,
                    .value = old_val.*,
                };
                self.metadata.?[idx].remove();
                old_key.* = undefined;
                old_val.* = undefined;
                self.size -= 1;
                self.available += 1;
                return result;
            }

            return null;
        }

        /// Find the index containing the data for the given key.
        fn getIndex(self: Self, key: anytype, ctx: anytype) ?usize {
            if (self.size == 0) {
                // We use cold instead of unlikely to force a jump to this case,
                // no matter the weight of the opposing side.
                @branchHint(.cold);
                return null;
            }

            // If you get a compile error on this line, it means that your generic hash
            // function is invalid for these parameters.
            const hash: Hash = ctx.hash(key);

            const mask = self.capacity() - 1;
            const fingerprint = Metadata.takeFingerprint(hash);
            // Don't loop indefinitely when there are no empty slots.
            var limit = self.capacity();
            var idx = @as(usize, @truncate(hash & mask));

            var metadata = self.metadata.? + idx;
            while (!metadata[0].isFree() and limit != 0) {
                if (metadata[0].isUsed() and metadata[0].fingerprint == fingerprint) {
                    const test_key = &self.keys()[idx];

                    if (ctx.eql(key, test_key.*)) {
                        return idx;
                    }
                }

                limit -= 1;
                idx = (idx + 1) & mask;
                metadata = self.metadata.? + idx;
            }

            return null;
        }

        pub fn getEntry(self: Self, key: K) ?Entry {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getEntryContext instead.");
            return self.getEntryContext(key, undefined);
        }
        pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry {
            return self.getEntryAdapted(key, ctx);
        }
        pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
            if (self.getIndex(key, ctx)) |idx| {
                return Entry{
                    .key_ptr = &self.keys()[idx],
                    .value_ptr = &self.values()[idx],
                };
            }
            return null;
        }

        /// Insert an entry if the associated key is not already present, otherwise update preexisting value.
        pub fn put(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putContext instead.");
            return self.putContext(allocator, key, value, undefined);
        }
        pub fn putContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void {
            const result = try self.getOrPutContext(allocator, key, ctx);
            result.value_ptr.* = value;
        }

        /// Get an optional pointer to the actual key associated with adapted key, if present.
        pub fn getKeyPtr(self: Self, key: K) ?*K {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyPtrContext instead.");
            return self.getKeyPtrContext(key, undefined);
        }
        pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K {
            return self.getKeyPtrAdapted(key, ctx);
        }
        pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
            if (self.getIndex(key, ctx)) |idx| {
                return &self.keys()[idx];
            }
            return null;
        }

        /// Get a copy of the actual key associated with adapted key, if present.
        pub fn getKey(self: Self, key: K) ?K {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyContext instead.");
            return self.getKeyContext(key, undefined);
        }
        pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K {
            return self.getKeyAdapted(key, ctx);
        }
        pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
            if (self.getIndex(key, ctx)) |idx| {
                return self.keys()[idx];
            }
            return null;
        }

        /// Get an optional pointer to the value associated with key, if present.
        pub fn getPtr(self: Self, key: K) ?*V {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getPtrContext instead.");
            return self.getPtrContext(key, undefined);
        }
        pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V {
            return self.getPtrAdapted(key, ctx);
        }
        pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
            if (self.getIndex(key, ctx)) |idx| {
                return &self.values()[idx];
            }
            return null;
        }

        /// Get a copy of the value associated with key, if present.
        pub fn get(self: Self, key: K) ?V {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getContext instead.");
            return self.getContext(key, undefined);
        }
        pub fn getContext(self: Self, key: K, ctx: Context) ?V {
            return self.getAdapted(key, ctx);
        }
        pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
            if (self.getIndex(key, ctx)) |idx| {
                return self.values()[idx];
            }
            return null;
        }

        pub fn getOrPut(self: *Self, allocator: Allocator, key: K) Allocator.Error!GetOrPutResult {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContext instead.");
            return self.getOrPutContext(allocator, key, undefined);
        }
        pub fn getOrPutContext(self: *Self, allocator: Allocator, key: K, ctx: Context) Allocator.Error!GetOrPutResult {
            const gop = try self.getOrPutContextAdapted(allocator, key, ctx, ctx);
            if (!gop.found_existing) {
                gop.key_ptr.* = key;
            }
            return gop;
        }
        pub fn getOrPutAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype) Allocator.Error!GetOrPutResult {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContextAdapted instead.");
            return self.getOrPutContextAdapted(allocator, key, key_ctx, undefined);
        }
        pub fn getOrPutContextAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Allocator.Error!GetOrPutResult {
            {
                self.pointer_stability.lock();
                defer self.pointer_stability.unlock();
                self.growIfNeeded(allocator, 1, ctx) catch |err| {
                    // If allocation fails, try to do the lookup anyway.
                    // If we find an existing item, we can return it.
                    // Otherwise return the error, we could not add another.
                    const index = self.getIndex(key, key_ctx) orelse return err;
                    return GetOrPutResult{
                        .key_ptr = &self.keys()[index],
                        .value_ptr = &self.values()[index],
                        .found_existing = true,
                    };
                };
            }
            return self.getOrPutAssumeCapacityAdapted(key, key_ctx);
        }

        pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutAssumeCapacityContext instead.");
            return self.getOrPutAssumeCapacityContext(key, undefined);
        }
        pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult {
            const result = self.getOrPutAssumeCapacityAdapted(key, ctx);
            if (!result.found_existing) {
                result.key_ptr.* = key;
            }
            return result;
        }
        pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {

            // If you get a compile error on this line, it means that your generic hash
            // function is invalid for these parameters.
            const hash: Hash = ctx.hash(key);

            const mask = self.capacity() - 1;
            const fingerprint = Metadata.takeFingerprint(hash);
            var limit = self.capacity();
            var idx = @as(usize, @truncate(hash & mask));

            var first_tombstone_idx: usize = self.capacity(); // invalid index
            var metadata = self.metadata.? + idx;
            while (!metadata[0].isFree() and limit != 0) {
                if (metadata[0].isUsed() and metadata[0].fingerprint == fingerprint) {
                    const test_key = &self.keys()[idx];
                    // If you get a compile error on this line, it means that your generic eql
                    // function is invalid for these parameters.

                    if (ctx.eql(key, test_key.*)) {
                        return GetOrPutResult{
                            .key_ptr = test_key,
                            .value_ptr = &self.values()[idx],
                            .found_existing = true,
                        };
                    }
                } else if (first_tombstone_idx == self.capacity() and metadata[0].isTombstone()) {
                    first_tombstone_idx = idx;
                }

                limit -= 1;
                idx = (idx + 1) & mask;
                metadata = self.metadata.? + idx;
            }

            if (first_tombstone_idx < self.capacity()) {
                // A cheap attempt to lower probe lengths after deletions: recycle a tombstone.
                idx = first_tombstone_idx;
                metadata = self.metadata.? + idx;
            }
            // We're using a slot previously free or a tombstone.
            self.available -= 1;

            metadata[0].fill(fingerprint);
            const new_key = &self.keys()[idx];
            const new_value = &self.values()[idx];
            new_key.* = undefined;
            new_value.* = undefined;
            self.size += 1;

            return GetOrPutResult{
                .key_ptr = new_key,
                .value_ptr = new_value,
                .found_existing = false,
            };
        }

        pub fn getOrPutValue(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!Entry {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutValueContext instead.");
            return self.getOrPutValueContext(allocator, key, value, undefined);
        }
        pub fn getOrPutValueContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!Entry {
            const res = try self.getOrPutAdapted(allocator, key, ctx);
            if (!res.found_existing) {
                res.key_ptr.* = key;
                res.value_ptr.* = value;
            }
            return Entry{ .key_ptr = res.key_ptr, .value_ptr = res.value_ptr };
        }

        /// Return true if there is a value associated with key in the map.
        pub fn contains(self: Self, key: K) bool {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call containsContext instead.");
            return self.containsContext(key, undefined);
        }
        pub fn containsContext(self: Self, key: K, ctx: Context) bool {
            return self.containsAdapted(key, ctx);
        }
        pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
            return self.getIndex(key, ctx) != null;
        }

        fn removeByIndex(self: *Self, idx: usize) void {
            self.metadata.?[idx].remove();
            self.keys()[idx] = undefined;
            self.values()[idx] = undefined;
            self.size -= 1;
            self.available += 1;
        }

        /// If there is an `Entry` with a matching key, it is deleted from
        /// the hash map, and this function returns true.  Otherwise this
        /// function returns false.
        ///
        /// TODO: answer the question in these doc comments, does this
        /// increase the unused capacity by one?
        pub fn remove(self: *Self, key: K) bool {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call removeContext instead.");
            return self.removeContext(key, undefined);
        }

        /// TODO: answer the question in these doc comments, does this
        /// increase the unused capacity by one?
        pub fn removeContext(self: *Self, key: K, ctx: Context) bool {
            return self.removeAdapted(key, ctx);
        }

        /// TODO: answer the question in these doc comments, does this
        /// increase the unused capacity by one?
        pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool {
            if (self.getIndex(key, ctx)) |idx| {
                self.removeByIndex(idx);
                return true;
            }

            return false;
        }

        /// Delete the entry with key pointed to by key_ptr from the hash map.
        /// key_ptr is assumed to be a valid pointer to a key that is present
        /// in the hash map.
        ///
        /// TODO: answer the question in these doc comments, does this
        /// increase the unused capacity by one?
        pub fn removeByPtr(self: *Self, key_ptr: *K) void {
            // TODO: replace with pointer subtraction once supported by zig
            // if @sizeOf(K) == 0 then there is at most one item in the hash
            // map, which is assumed to exist as key_ptr must be valid.  This
            // item must be at index 0.
            const idx = if (@sizeOf(K) > 0)
                (@intFromPtr(key_ptr) - @intFromPtr(self.keys())) / @sizeOf(K)
            else
                0;

            self.removeByIndex(idx);
        }

        fn initMetadatas(self: *Self) void {
            @memset(@as([*]u8, @ptrCast(self.metadata.?))[0 .. @sizeOf(Metadata) * self.capacity()], 0);
        }

        // This counts the number of occupied slots (not counting tombstones), which is
        // what has to stay under the max_load_percentage of capacity.
        fn load(self: Self) Size {
            const max_load = (self.capacity() * max_load_percentage) / 100;
            assert(max_load >= self.available);
            return @as(Size, @truncate(max_load - self.available));
        }

        fn growIfNeeded(self: *Self, allocator: Allocator, new_count: Size, ctx: Context) Allocator.Error!void {
            if (new_count > self.available) {
                try self.grow(allocator, capacityForSize(self.load() + new_count), ctx);
            }
        }

        pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self {
            if (@sizeOf(Context) != 0)
                @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call cloneContext instead.");
            return self.cloneContext(allocator, @as(Context, undefined));
        }
        pub fn cloneContext(self: Self, allocator: Allocator, new_ctx: anytype) Allocator.Error!HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage) {
            var other: HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage) = .empty;
            if (self.size == 0)
                return other;

            const new_cap = capacityForSize(self.size);
            try other.allocate(allocator, new_cap);
            other.initMetadatas();
            other.available = @truncate((new_cap * max_load_percentage) / 100);

            var i: Size = 0;
            var metadata = self.metadata.?;
            const keys_ptr = self.keys();
            const values_ptr = self.values();
            while (i < self.capacity()) : (i += 1) {
                if (metadata[i].isUsed()) {
                    other.putAssumeCapacityNoClobberContext(keys_ptr[i], values_ptr[i], new_ctx);
                    if (other.size == self.size)
                        break;
                }
            }

            return other;
        }

        /// Set the map to an empty state, making deinitialization a no-op, and
        /// returning a copy of the original.
        pub fn move(self: *Self) Self {
            self.pointer_stability.assertUnlocked();
            const result = self.*;
            self.* = .empty;
            return result;
        }

        /// Rehash the map, in-place.
        ///
        /// Over time, with the current tombstone-based implementation, a
        /// HashMap can become fragmented by the buildup of tombstone
        /// entries, which degrades performance through excessive probing.
        /// The kind of pattern that causes this is a long-lived HashMap
        /// with repeated inserts and deletes.
        ///
        /// After this function is called, there will be no tombstones in
        /// the HashMap: each entry is rehashed, and any existing
        /// key/value pointers into the HashMap are invalidated.
        pub fn rehash(self: *Self, ctx: anytype) void {
            const mask = self.capacity() - 1;

            var metadata = self.metadata.?;
            var keys_ptr = self.keys();
            var values_ptr = self.values();
            var curr: Size = 0;

            // While we are re-hashing every slot, we will use the
            // fingerprint to mark used buckets as being used and either free
            // (needing to be rehashed) or tombstone (already rehashed).

            while (curr < self.capacity()) : (curr += 1) {
                metadata[curr].fingerprint = Metadata.free;
            }

            // Now iterate over all the buckets, rehashing them

            curr = 0;
            while (curr < self.capacity()) {
                if (!metadata[curr].isUsed()) {
                    assert(metadata[curr].isFree());
                    curr += 1;
                    continue;
                }

                const hash = ctx.hash(keys_ptr[curr]);
                const fingerprint = Metadata.takeFingerprint(hash);
                var idx = @as(usize, @truncate(hash & mask));

                // For each bucket, rehash to an index:
                // 1) before the cursor, probed into a free slot, or
                // 2) equal to the cursor, no need to move, or
                // 3) ahead of the cursor, probing over already rehashed

                while ((idx < curr and metadata[idx].isUsed()) or
                    (idx > curr and metadata[idx].fingerprint == Metadata.tombstone))
                {
                    idx = (idx + 1) & mask;
                }

                if (idx < curr) {
                    assert(metadata[idx].isFree());
                    metadata[idx].fill(fingerprint);
                    keys_ptr[idx] = keys_ptr[curr];
                    values_ptr[idx] = values_ptr[curr];

                    metadata[curr].used = 0;
                    assert(metadata[curr].isFree());
                    keys_ptr[curr] = undefined;
                    values_ptr[curr] = undefined;

                    curr += 1;
                } else if (idx == curr) {
                    metadata[idx].fingerprint = fingerprint;
                    curr += 1;
                } else {
                    assert(metadata[idx].fingerprint != Metadata.tombstone);
                    metadata[idx].fingerprint = Metadata.tombstone;
                    if (metadata[idx].isUsed()) {
                        std.mem.swap(K, &keys_ptr[curr], &keys_ptr[idx]);
                        std.mem.swap(V, &values_ptr[curr], &values_ptr[idx]);
                    } else {
                        metadata[idx].used = 1;
                        keys_ptr[idx] = keys_ptr[curr];
                        values_ptr[idx] = values_ptr[curr];

                        metadata[curr].fingerprint = Metadata.free;
                        metadata[curr].used = 0;
                        keys_ptr[curr] = undefined;
                        values_ptr[curr] = undefined;

                        curr += 1;
                    }
                }
            }
        }

        fn grow(self: *Self, allocator: Allocator, new_capacity: Size, ctx: Context) Allocator.Error!void {
            @branchHint(.cold);
            const new_cap = @max(new_capacity, minimal_capacity);
            assert(new_cap > self.capacity());
            assert(std.math.isPowerOfTwo(new_cap));

            var map: Self = .{};
            try map.allocate(allocator, new_cap);
            errdefer comptime unreachable;
            map.pointer_stability.lock();
            map.initMetadatas();
            map.available = @truncate((new_cap * max_load_percentage) / 100);

            if (self.size != 0) {
                const old_capacity = self.capacity();
                for (
                    self.metadata.?[0..old_capacity],
                    self.keys()[0..old_capacity],
                    self.values()[0..old_capacity],
                ) |m, k, v| {
                    if (!m.isUsed()) continue;
                    map.putAssumeCapacityNoClobberContext(k, v, ctx);
                    if (map.size == self.size) break;
                }
            }

            self.size = 0;
            self.pointer_stability = .{};
            std.mem.swap(Self, self, &map);
            map.deinit(allocator);
        }

        fn allocate(self: *Self, allocator: Allocator, new_capacity: Size) Allocator.Error!void {
            const header_align = @alignOf(Header);
            const key_align = if (@sizeOf(K) == 0) 1 else @alignOf(K);
            const val_align = if (@sizeOf(V) == 0) 1 else @alignOf(V);
            const max_align = comptime @max(header_align, key_align, val_align);

            const new_cap: usize = new_capacity;
            const meta_size = @sizeOf(Header) + new_cap * @sizeOf(Metadata);
            comptime assert(@alignOf(Metadata) == 1);

            const keys_start = std.mem.alignForward(usize, meta_size, key_align);
            const keys_end = keys_start + new_cap * @sizeOf(K);

            const vals_start = std.mem.alignForward(usize, keys_end, val_align);
            const vals_end = vals_start + new_cap * @sizeOf(V);

            const total_size = std.mem.alignForward(usize, vals_end, max_align);

            const slice = try allocator.alignedAlloc(u8, max_align, total_size);
            const ptr: [*]u8 = @ptrCast(slice.ptr);

            const metadata = ptr + @sizeOf(Header);

            const hdr = @as(*Header, @ptrCast(@alignCast(ptr)));
            if (@sizeOf([*]V) != 0) {
                hdr.values = @ptrCast(@alignCast((ptr + vals_start)));
            }
            if (@sizeOf([*]K) != 0) {
                hdr.keys = @ptrCast(@alignCast((ptr + keys_start)));
            }
            hdr.capacity = new_capacity;
            self.metadata = @ptrCast(@alignCast(metadata));
        }

        fn deallocate(self: *Self, allocator: Allocator) void {
            if (self.metadata == null) return;

            const header_align = @alignOf(Header);
            const key_align = if (@sizeOf(K) == 0) 1 else @alignOf(K);
            const val_align = if (@sizeOf(V) == 0) 1 else @alignOf(V);
            const max_align = comptime @max(header_align, key_align, val_align);

            const cap: usize = self.capacity();
            const meta_size = @sizeOf(Header) + cap * @sizeOf(Metadata);
            comptime assert(@alignOf(Metadata) == 1);

            const keys_start = std.mem.alignForward(usize, meta_size, key_align);
            const keys_end = keys_start + cap * @sizeOf(K);

            const vals_start = std.mem.alignForward(usize, keys_end, val_align);
            const vals_end = vals_start + cap * @sizeOf(V);

            const total_size = std.mem.alignForward(usize, vals_end, max_align);

            const slice = @as([*]align(max_align) u8, @alignCast(@ptrCast(self.header())))[0..total_size];
            allocator.free(slice);

            self.metadata = null;
            self.available = 0;
        }

        /// This function is used in the debugger pretty formatters in tools/ to fetch the
        /// header type to facilitate fancy debug printing for this type.
        fn dbHelper(self: *Self, hdr: *Header, entry: *Entry) void {
            _ = self;
            _ = hdr;
            _ = entry;
        }

        comptime {
            if (!builtin.strip_debug_info) _ = switch (builtin.zig_backend) {
                .stage2_llvm => &dbHelper,
                .stage2_x86_64 => KV,
                else => {},
            };
        }
    };
}

Type FunctionMultiArrayList[src]

A MultiArrayList stores a list of a struct or tagged union type. Instead of storing a single list of items, MultiArrayList stores separate lists for each field of the struct or lists of tags and bare unions. This allows for memory savings if the struct or union has padding, and also improves cache usage if only some fields or just tags are needed for a computation. The primary API for accessing fields is the slice() function, which computes the start pointers for the array of each field. From the slice you can call .items(.<field_name>) to obtain a slice of field values. For unions you can call .items(.tags) or .items(.data).
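
A minimal struct-of-arrays sketch; the Monster type and the gpa allocator name are illustrative, not part of these docs:

const std = @import("std");

const Monster = struct { hp: u32, x: f32, y: f32 };

fn soaDemo(gpa: std.mem.Allocator) !void {
    var list: std.MultiArrayList(Monster) = .empty;
    defer list.deinit(gpa);

    try list.append(gpa, .{ .hp = 10, .x = 0, .y = 0 });
    try list.append(gpa, .{ .hp = 20, .x = 1, .y = 1 });

    // Touch only the `hp` column; the x/y arrays are never loaded.
    for (list.items(.hp)) |*hp| hp.* +|= 5;

    // Compute all column pointers once when several fields are needed.
    const s = list.slice();
    std.debug.assert(s.items(.hp)[0] == 15 and s.items(.x)[1] == 1);
}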

Parameters

T: type

Types

TypeField[src]

Source Code

Source code
pub const Field = meta.FieldEnum(Elem)

Fields

bytes: [*]align(@alignOf(T)) u8 = undefined
len: usize = 0
capacity: usize = 0

Values

Constantempty[src]

Source Code

Source code
pub const empty: Self = .{
    .bytes = undefined,
    .len = 0,
    .capacity = 0,
}

Functions

Functiondeinit[src]

pub fn deinit(self: *Self, gpa: Allocator) void

Release all allocated memory.

Parameters

self: *Self

Source Code

Source code
pub fn deinit(self: *Self, gpa: Allocator) void {
    gpa.free(self.allocatedBytes());
    self.* = undefined;
}

FunctiontoOwnedSlice[src]

pub fn toOwnedSlice(self: *Self) Slice

The caller owns the returned memory. Empties this MultiArrayList.

Parameters

self: *Self

Source Code

Source code
pub fn toOwnedSlice(self: *Self) Slice {
    const result = self.slice();
    self.* = .{};
    return result;
}

Functionslice[src]

pub fn slice(self: Self) Slice

Compute pointers to the start of each field of the array. If you need to access multiple fields, calling this may be more efficient than calling items() multiple times.

Parameters

self: Self

Source Code

Source code
pub fn slice(self: Self) Slice {
    var result: Slice = .{
        .ptrs = undefined,
        .len = self.len,
        .capacity = self.capacity,
    };
    var ptr: [*]u8 = self.bytes;
    for (sizes.bytes, sizes.fields) |field_size, i| {
        result.ptrs[i] = ptr;
        ptr += field_size * self.capacity;
    }
    return result;
}

Functionitems[src]

pub fn items(self: Self, comptime field: Field) []FieldType(field)

Get the slice of values for a specified field. If you need multiple fields, consider calling slice() instead.

Parameters

self: Self
field: Field

Source Code

Source code
pub fn items(self: Self, comptime field: Field) []FieldType(field) {
    return self.slice().items(field);
}

Functionset[src]

pub fn set(self: *Self, index: usize, elem: T) void

Overwrite one array element with new data.

Parameters

self: *Self
index: usize
elem: T

Source Code

Source code
pub fn set(self: *Self, index: usize, elem: T) void {
    var slices = self.slice();
    slices.set(index, elem);
}

Functionget[src]

pub fn get(self: Self, index: usize) T

Obtain all the data for one array element.

Parameters

self: Self
index: usize

Source Code

Source code
pub fn get(self: Self, index: usize) T {
    return self.slice().get(index);
}

Functionappend[src]

pub fn append(self: *Self, gpa: Allocator, elem: T) !void

Extend the list by 1 element. Allocates more memory as necessary.

Parameters

self: *Self
elem: T

Source Code

Source code
pub fn append(self: *Self, gpa: Allocator, elem: T) !void {
    try self.ensureUnusedCapacity(gpa, 1);
    self.appendAssumeCapacity(elem);
}

FunctionappendAssumeCapacity[src]

pub fn appendAssumeCapacity(self: *Self, elem: T) void

Extend the list by 1 element, but asserting self.capacity is sufficient to hold an additional item.

Parameters

self: *Self
elem: T

Source Code

Source code
pub fn appendAssumeCapacity(self: *Self, elem: T) void {
    assert(self.len < self.capacity);
    self.len += 1;
    self.set(self.len - 1, elem);
}

FunctionaddOne[src]

pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!usize

Extend the list by 1 element, returning the newly reserved index with uninitialized data. Allocates more memory as necessary.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!usize {
    try self.ensureUnusedCapacity(allocator, 1);
    return self.addOneAssumeCapacity();
}

FunctionaddOneAssumeCapacity[src]

pub fn addOneAssumeCapacity(self: *Self) usize

Extend the list by 1 element, asserting self.capacity is sufficient to hold an additional item. Returns the newly reserved index with uninitialized data.

Parameters

self: *Self

Source Code

Source code
pub fn addOneAssumeCapacity(self: *Self) usize {
    assert(self.len < self.capacity);
    const index = self.len;
    self.len += 1;
    return index;
}

Functionpop[src]

pub fn pop(self: *Self) ?T

Remove and return the last element from the list, or return null if list is empty. Invalidates pointers to fields of the removed element.

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?T {
    if (self.len == 0) return null;
    const val = self.get(self.len - 1);
    self.len -= 1;
    return val;
}

Functioninsert[src]

pub fn insert(self: *Self, gpa: Allocator, index: usize, elem: T) !void

Inserts an item into an ordered list. Shifts all elements after and including the specified index back by one and sets the given index to the specified element. May reallocate and invalidate iterators.

Parameters

self: *Self
index: usize
elem: T

Source Code

Source code
pub fn insert(self: *Self, gpa: Allocator, index: usize, elem: T) !void {
    try self.ensureUnusedCapacity(gpa, 1);
    self.insertAssumeCapacity(index, elem);
}

FunctioninsertAssumeCapacity[src]

pub fn insertAssumeCapacity(self: *Self, index: usize, elem: T) void

Inserts an item into an ordered list which has room for it. Shifts all elements after and including the specified index back by one and sets the given index to the specified element. Will not reallocate the array and does not invalidate iterators.

Parameters

self: *Self
index: usize
elem: T

Source Code

Source code
pub fn insertAssumeCapacity(self: *Self, index: usize, elem: T) void {
    assert(self.len < self.capacity);
    assert(index <= self.len);
    self.len += 1;
    const entry = switch (@typeInfo(T)) {
        .@"struct" => elem,
        .@"union" => Elem.fromT(elem),
        else => unreachable,
    };
    const slices = self.slice();
    inline for (fields, 0..) |field_info, field_index| {
        const field_slice = slices.items(@as(Field, @enumFromInt(field_index)));
        var i: usize = self.len - 1;
        while (i > index) : (i -= 1) {
            field_slice[i] = field_slice[i - 1];
        }
        field_slice[index] = @field(entry, field_info.name);
    }
}

FunctionswapRemove[src]

pub fn swapRemove(self: *Self, index: usize) void

Remove the specified item from the list, swapping the last item in the list into its position. Fast, but does not retain list ordering.

Parameters

self: *Self
index: usize

Source Code

Source code
pub fn swapRemove(self: *Self, index: usize) void {
    const slices = self.slice();
    inline for (fields, 0..) |_, i| {
        const field_slice = slices.items(@as(Field, @enumFromInt(i)));
        field_slice[index] = field_slice[self.len - 1];
        field_slice[self.len - 1] = undefined;
    }
    self.len -= 1;
}

FunctionorderedRemove[src]

pub fn orderedRemove(self: *Self, index: usize) void

Remove the specified item from the list, shifting items after it to preserve order.

Parameters

self: *Self
index: usize

Source Code

Source code
pub fn orderedRemove(self: *Self, index: usize) void {
    const slices = self.slice();
    inline for (fields, 0..) |_, field_index| {
        const field_slice = slices.items(@as(Field, @enumFromInt(field_index)));
        var i = index;
        while (i < self.len - 1) : (i += 1) {
            field_slice[i] = field_slice[i + 1];
        }
        field_slice[i] = undefined;
    }
    self.len -= 1;
}
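
To contrast the two removal strategies, a short continuation of the usage sketch above, assuming the hp column currently holds { 10, 20, 30 }:

// hp column before: { 10, 20, 30 }
list.swapRemove(0); // O(1): the last element moves into slot 0 -> { 30, 20 }
list.orderedRemove(0); // O(N): later elements shift left -> { 20 }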

Functionresize[src]

pub fn resize(self: *Self, gpa: Allocator, new_len: usize) !void

Adjust the list's length to new_len. Does not initialize added items, if any.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn resize(self: *Self, gpa: Allocator, new_len: usize) !void {
    try self.ensureTotalCapacity(gpa, new_len);
    self.len = new_len;
}

FunctionshrinkAndFree[src]

pub fn shrinkAndFree(self: *Self, gpa: Allocator, new_len: usize) void

Attempt to reduce allocated capacity to new_len. If new_len is greater than zero, this may fail to reduce the capacity, but the data remains intact and the length is updated to new_len.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkAndFree(self: *Self, gpa: Allocator, new_len: usize) void {
    if (new_len == 0) return clearAndFree(self, gpa);

    assert(new_len <= self.capacity);
    assert(new_len <= self.len);

    const other_bytes = gpa.alignedAlloc(
        u8,
        @alignOf(Elem),
        capacityInBytes(new_len),
    ) catch {
        const self_slice = self.slice();
        inline for (fields, 0..) |field_info, i| {
            if (@sizeOf(field_info.type) != 0) {
                const field = @as(Field, @enumFromInt(i));
                const dest_slice = self_slice.items(field)[new_len..];
                // We use memset here for more efficient codegen in safety-checked,
                // valgrind-enabled builds. Otherwise the valgrind client request
                // will be repeated for every element.
                @memset(dest_slice, undefined);
            }
        }
        self.len = new_len;
        return;
    };
    var other = Self{
        .bytes = other_bytes.ptr,
        .capacity = new_len,
        .len = new_len,
    };
    self.len = new_len;
    const self_slice = self.slice();
    const other_slice = other.slice();
    inline for (fields, 0..) |field_info, i| {
        if (@sizeOf(field_info.type) != 0) {
            const field = @as(Field, @enumFromInt(i));
            @memcpy(other_slice.items(field), self_slice.items(field));
        }
    }
    gpa.free(self.allocatedBytes());
    self.* = other;
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self, gpa: Allocator) void

Parameters

self: *Self

Source Code

Source code
pub fn clearAndFree(self: *Self, gpa: Allocator) void {
    gpa.free(self.allocatedBytes());
    self.* = .{};
}

FunctionshrinkRetainingCapacity[src]

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void

Reduce length to new_len. Invalidates pointers to elements items[new_len..]. Keeps capacity the same.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
    self.len = new_len;
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Invalidates all element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.len = 0;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Allocator.Error!void

Modify the array so that it can hold at least new_capacity items. Implements super-linear growth to achieve amortized O(1) append operations. Invalidates element pointers if additional memory is needed.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Allocator.Error!void {
    if (self.capacity >= new_capacity) return;
    return self.setCapacity(gpa, growCapacity(self.capacity, new_capacity));
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, gpa: Allocator, additional_count: usize) !void

Modify the array so that it can hold at least additional_count more items. Invalidates pointers if additional memory is needed.

Parameters

self: *Self
additional_count: usize

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, gpa: Allocator, additional_count: usize) !void {
    return self.ensureTotalCapacity(gpa, self.len + additional_count);
}

FunctionsetCapacity[src]

pub fn setCapacity(self: *Self, gpa: Allocator, new_capacity: usize) !void

Modify the array so that it can hold exactly new_capacity items. Invalidates pointers if additional memory is needed. new_capacity must be greater than or equal to len.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn setCapacity(self: *Self, gpa: Allocator, new_capacity: usize) !void {
    assert(new_capacity >= self.len);
    const new_bytes = try gpa.alignedAlloc(
        u8,
        @alignOf(Elem),
        capacityInBytes(new_capacity),
    );
    if (self.len == 0) {
        gpa.free(self.allocatedBytes());
        self.bytes = new_bytes.ptr;
        self.capacity = new_capacity;
        return;
    }
    var other = Self{
        .bytes = new_bytes.ptr,
        .capacity = new_capacity,
        .len = self.len,
    };
    const self_slice = self.slice();
    const other_slice = other.slice();
    inline for (fields, 0..) |field_info, i| {
        if (@sizeOf(field_info.type) != 0) {
            const field = @as(Field, @enumFromInt(i));
            @memcpy(other_slice.items(field), self_slice.items(field));
        }
    }
    gpa.free(self.allocatedBytes());
    self.* = other;
}

Functionclone[src]

pub fn clone(self: Self, gpa: Allocator) !Self

Create a copy of this list with a new backing store, using the specified allocator.

Parameters

self: Self

Source Code

Source code
pub fn clone(self: Self, gpa: Allocator) !Self {
    var result = Self{};
    errdefer result.deinit(gpa);
    try result.ensureTotalCapacity(gpa, self.len);
    result.len = self.len;
    const self_slice = self.slice();
    const result_slice = result.slice();
    inline for (fields, 0..) |field_info, i| {
        if (@sizeOf(field_info.type) != 0) {
            const field = @as(Field, @enumFromInt(i));
            @memcpy(result_slice.items(field), self_slice.items(field));
        }
    }
    return result;
}

Functionsort[src]

pub fn sort(self: Self, ctx: anytype) void

This function guarantees a stable sort, i.e. the relative order of equal elements is preserved during sorting (see https://en.wikipedia.org/wiki/Sorting_algorithm#Stability). If this guarantee does not matter, sortUnstable might be a faster alternative. ctx must have the following method:

fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool

Parameters

self: Self

Source Code

Source code
pub fn sort(self: Self, ctx: anytype) void {
    self.sortInternal(0, self.len, ctx, .stable);
}
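
As a hedged sketch of the ctx contract: the context receives indices, so a by-field comparator typically captures one field array, while sort() itself swaps every field column in step. The struct and values are illustrative assumptions.

const std = @import("std");

test "sort MultiArrayList by one field (illustrative)" {
    const gpa = std.testing.allocator;
    const Monster = struct { hp: u32, element: u8 };

    var list: std.MultiArrayList(Monster) = .{};
    defer list.deinit(gpa);
    try list.append(gpa, .{ .hp = 30, .element = 'f' });
    try list.append(gpa, .{ .hp = 10, .element = 'w' });

    const ByHp = struct {
        hps: []const u32,
        pub fn lessThan(ctx: @This(), a_index: usize, b_index: usize) bool {
            return ctx.hps[a_index] < ctx.hps[b_index];
        }
    };
    list.sort(ByHp{ .hps = list.items(.hp) });

    try std.testing.expectEqual(@as(u32, 10), list.items(.hp)[0]);
    try std.testing.expectEqual(@as(u8, 'w'), list.items(.element)[0]);
}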

FunctionsortSpan[src]

pub fn sortSpan(self: Self, a: usize, b: usize, ctx: anytype) void

Sorts only the subsection of items between indices a and b (excluding b). This function guarantees a stable sort, i.e. the relative order of equal elements is preserved during sorting (see https://en.wikipedia.org/wiki/Sorting_algorithm#Stability). If this guarantee does not matter, sortSpanUnstable might be a faster alternative. ctx must have the following method:

fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool

Parameters

self: Self
a: usize
b: usize

Source Code

Source code
pub fn sortSpan(self: Self, a: usize, b: usize, ctx: anytype) void {
    self.sortInternal(a, b, ctx, .stable);
}

FunctionsortUnstable[src]

pub fn sortUnstable(self: Self, ctx: anytype) void

This function does NOT guarantee a stable sort, i.e. the relative order of equal elements may change during sorting (see https://en.wikipedia.org/wiki/Sorting_algorithm#Stability). Due to the weaker guarantee, this may be faster than the stable sort method. ctx must have the following method:

fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool

Parameters

self: Self

Source Code

Source code
pub fn sortUnstable(self: Self, ctx: anytype) void {
    self.sortInternal(0, self.len, ctx, .unstable);
}

FunctionsortSpanUnstable[src]

pub fn sortSpanUnstable(self: Self, a: usize, b: usize, ctx: anytype) void

Sorts only the subsection of items between indices a and b (excluding b). This function does NOT guarantee a stable sort, i.e. the relative order of equal elements may change during sorting (see https://en.wikipedia.org/wiki/Sorting_algorithm#Stability). Due to the weaker guarantee, this may be faster than the stable sortSpan method. ctx must have the following method:

fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool

Parameters

self: Self
a: usize
b: usize

Source Code

Source code
pub fn sortSpanUnstable(self: Self, a: usize, b: usize, ctx: anytype) void {
    self.sortInternal(a, b, ctx, .unstable);
}

FunctioncapacityInBytes[src]

pub fn capacityInBytes(capacity: usize) usize

Parameters

capacity: usize

Source Code

Source code
pub fn capacityInBytes(capacity: usize) usize {
    comptime var elem_bytes: usize = 0;
    inline for (sizes.bytes) |size| elem_bytes += size;
    return elem_bytes * capacity;
}
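
Because each field is stored in its own array, per-element padding disappears: a slot costs only the sum of the field sizes, not @sizeOf(T). A hedged check (the struct is an illustrative assumption):

const std = @import("std");

test "capacityInBytes sums field sizes (illustrative)" {
    const Monster = struct { hp: u32, element: u8 };
    // Array-of-structs would cost @sizeOf(Monster) == 8 bytes per element
    // (3 bytes of padding); the struct-of-arrays store costs 4 + 1 = 5.
    const per_slot = @sizeOf(u32) + @sizeOf(u8);
    try std.testing.expectEqual(
        per_slot * 16,
        std.MultiArrayList(Monster).capacityInBytes(16),
    );
}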

Source Code

Source code
pub fn MultiArrayList(comptime T: type) type {
    return struct {
        bytes: [*]align(@alignOf(T)) u8 = undefined,
        len: usize = 0,
        capacity: usize = 0,

        pub const empty: Self = .{
            .bytes = undefined,
            .len = 0,
            .capacity = 0,
        };

        const Elem = switch (@typeInfo(T)) {
            .@"struct" => T,
            .@"union" => |u| struct {
                pub const Bare = @Type(.{ .@"union" = .{
                    .layout = u.layout,
                    .tag_type = null,
                    .fields = u.fields,
                    .decls = &.{},
                } });
                pub const Tag =
                    u.tag_type orelse @compileError("MultiArrayList does not support untagged unions");
                tags: Tag,
                data: Bare,

                pub fn fromT(outer: T) @This() {
                    const tag = meta.activeTag(outer);
                    return .{
                        .tags = tag,
                        .data = switch (tag) {
                            inline else => |t| @unionInit(Bare, @tagName(t), @field(outer, @tagName(t))),
                        },
                    };
                }
                pub fn toT(tag: Tag, bare: Bare) T {
                    return switch (tag) {
                        inline else => |t| @unionInit(T, @tagName(t), @field(bare, @tagName(t))),
                    };
                }
            },
            else => @compileError("MultiArrayList only supports structs and tagged unions"),
        };

        pub const Field = meta.FieldEnum(Elem);

        /// A MultiArrayList.Slice contains cached start pointers for each field in the list.
        /// These pointers are not normally stored to reduce the size of the list in memory.
        /// If you are accessing multiple fields, call slice() first to compute the pointers,
        /// and then get the field arrays from the slice.
        pub const Slice = struct {
            /// This array is indexed by the field index which can be obtained
            /// by using @intFromEnum() on the Field enum
            ptrs: [fields.len][*]u8,
            len: usize,
            capacity: usize,

            pub const empty: Slice = .{
                .ptrs = undefined,
                .len = 0,
                .capacity = 0,
            };

            pub fn items(self: Slice, comptime field: Field) []FieldType(field) {
                const F = FieldType(field);
                if (self.capacity == 0) {
                    return &[_]F{};
                }
                const byte_ptr = self.ptrs[@intFromEnum(field)];
                const casted_ptr: [*]F = if (@sizeOf(F) == 0)
                    undefined
                else
                    @ptrCast(@alignCast(byte_ptr));
                return casted_ptr[0..self.len];
            }

            pub fn set(self: *Slice, index: usize, elem: T) void {
                const e = switch (@typeInfo(T)) {
                    .@"struct" => elem,
                    .@"union" => Elem.fromT(elem),
                    else => unreachable,
                };
                inline for (fields, 0..) |field_info, i| {
                    self.items(@as(Field, @enumFromInt(i)))[index] = @field(e, field_info.name);
                }
            }

            pub fn get(self: Slice, index: usize) T {
                var result: Elem = undefined;
                inline for (fields, 0..) |field_info, i| {
                    @field(result, field_info.name) = self.items(@as(Field, @enumFromInt(i)))[index];
                }
                return switch (@typeInfo(T)) {
                    .@"struct" => result,
                    .@"union" => Elem.toT(result.tags, result.data),
                    else => unreachable,
                };
            }

            pub fn toMultiArrayList(self: Slice) Self {
                if (self.ptrs.len == 0 or self.capacity == 0) {
                    return .{};
                }
                const unaligned_ptr = self.ptrs[sizes.fields[0]];
                const aligned_ptr: [*]align(@alignOf(Elem)) u8 = @alignCast(unaligned_ptr);
                return .{
                    .bytes = aligned_ptr,
                    .len = self.len,
                    .capacity = self.capacity,
                };
            }

            pub fn deinit(self: *Slice, gpa: Allocator) void {
                var other = self.toMultiArrayList();
                other.deinit(gpa);
                self.* = undefined;
            }

            /// This function is used in the debugger pretty formatters in tools/ to fetch the
            /// child field order and entry type to facilitate fancy debug printing for this type.
            fn dbHelper(self: *Slice, child: *Elem, field: *Field, entry: *Entry) void {
                _ = self;
                _ = child;
                _ = field;
                _ = entry;
            }
        };

        const Self = @This();

        const fields = meta.fields(Elem);
        /// `sizes.bytes` is an array of @sizeOf each T field. Sorted by alignment, descending.
        /// `sizes.fields` is an array mapping from `sizes.bytes` array index to field index.
        const sizes = blk: {
            const Data = struct {
                size: usize,
                size_index: usize,
                alignment: usize,
            };
            var data: [fields.len]Data = undefined;
            for (fields, 0..) |field_info, i| {
                data[i] = .{
                    .size = @sizeOf(field_info.type),
                    .size_index = i,
                    .alignment = if (@sizeOf(field_info.type) == 0) 1 else field_info.alignment,
                };
            }
            const Sort = struct {
                fn lessThan(context: void, lhs: Data, rhs: Data) bool {
                    _ = context;
                    return lhs.alignment > rhs.alignment;
                }
            };
            @setEvalBranchQuota(3 * fields.len * std.math.log2(fields.len));
            mem.sort(Data, &data, {}, Sort.lessThan);
            var sizes_bytes: [fields.len]usize = undefined;
            var field_indexes: [fields.len]usize = undefined;
            for (data, 0..) |elem, i| {
                sizes_bytes[i] = elem.size;
                field_indexes[i] = elem.size_index;
            }
            break :blk .{
                .bytes = sizes_bytes,
                .fields = field_indexes,
            };
        };

        /// Release all allocated memory.
        pub fn deinit(self: *Self, gpa: Allocator) void {
            gpa.free(self.allocatedBytes());
            self.* = undefined;
        }

        /// The caller owns the returned memory. Empties this MultiArrayList.
        pub fn toOwnedSlice(self: *Self) Slice {
            const result = self.slice();
            self.* = .{};
            return result;
        }

        /// Compute pointers to the start of each field of the array.
        /// If you need to access multiple fields, calling this may
        /// be more efficient than calling `items()` multiple times.
        pub fn slice(self: Self) Slice {
            var result: Slice = .{
                .ptrs = undefined,
                .len = self.len,
                .capacity = self.capacity,
            };
            var ptr: [*]u8 = self.bytes;
            for (sizes.bytes, sizes.fields) |field_size, i| {
                result.ptrs[i] = ptr;
                ptr += field_size * self.capacity;
            }
            return result;
        }

        /// Get the slice of values for a specified field.
        /// If you need multiple fields, consider calling slice()
        /// instead.
        pub fn items(self: Self, comptime field: Field) []FieldType(field) {
            return self.slice().items(field);
        }

        /// Overwrite one array element with new data.
        pub fn set(self: *Self, index: usize, elem: T) void {
            var slices = self.slice();
            slices.set(index, elem);
        }

        /// Obtain all the data for one array element.
        pub fn get(self: Self, index: usize) T {
            return self.slice().get(index);
        }

        /// Extend the list by 1 element. Allocates more memory as necessary.
        pub fn append(self: *Self, gpa: Allocator, elem: T) !void {
            try self.ensureUnusedCapacity(gpa, 1);
            self.appendAssumeCapacity(elem);
        }

        /// Extend the list by 1 element, but asserting `self.capacity`
        /// is sufficient to hold an additional item.
        pub fn appendAssumeCapacity(self: *Self, elem: T) void {
            assert(self.len < self.capacity);
            self.len += 1;
            self.set(self.len - 1, elem);
        }

        /// Extend the list by 1 element, returning the newly reserved
        /// index with uninitialized data.
        /// Allocates more memory as necessary.
        pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!usize {
            try self.ensureUnusedCapacity(allocator, 1);
            return self.addOneAssumeCapacity();
        }

        /// Extend the list by 1 element, asserting `self.capacity`
        /// is sufficient to hold an additional item.  Returns the
        /// newly reserved index with uninitialized data.
        pub fn addOneAssumeCapacity(self: *Self) usize {
            assert(self.len < self.capacity);
            const index = self.len;
            self.len += 1;
            return index;
        }

        /// Remove and return the last element from the list, or return `null` if list is empty.
        /// Invalidates pointers to fields of the removed element.
        pub fn pop(self: *Self) ?T {
            if (self.len == 0) return null;
            const val = self.get(self.len - 1);
            self.len -= 1;
            return val;
        }

        /// Inserts an item into an ordered list.  Shifts all elements
        /// after and including the specified index back by one and
        /// sets the given index to the specified element.  May reallocate
        /// and invalidate iterators.
        pub fn insert(self: *Self, gpa: Allocator, index: usize, elem: T) !void {
            try self.ensureUnusedCapacity(gpa, 1);
            self.insertAssumeCapacity(index, elem);
        }

        /// Inserts an item into an ordered list which has room for it.
        /// Shifts all elements after and including the specified index
        /// back by one and sets the given index to the specified element.
        /// Will not reallocate the array, does not invalidate iterators.
        pub fn insertAssumeCapacity(self: *Self, index: usize, elem: T) void {
            assert(self.len < self.capacity);
            assert(index <= self.len);
            self.len += 1;
            const entry = switch (@typeInfo(T)) {
                .@"struct" => elem,
                .@"union" => Elem.fromT(elem),
                else => unreachable,
            };
            const slices = self.slice();
            inline for (fields, 0..) |field_info, field_index| {
                const field_slice = slices.items(@as(Field, @enumFromInt(field_index)));
                var i: usize = self.len - 1;
                while (i > index) : (i -= 1) {
                    field_slice[i] = field_slice[i - 1];
                }
                field_slice[index] = @field(entry, field_info.name);
            }
        }

        /// Remove the specified item from the list, swapping the last
        /// item in the list into its position.  Fast, but does not
        /// retain list ordering.
        pub fn swapRemove(self: *Self, index: usize) void {
            const slices = self.slice();
            inline for (fields, 0..) |_, i| {
                const field_slice = slices.items(@as(Field, @enumFromInt(i)));
                field_slice[index] = field_slice[self.len - 1];
                field_slice[self.len - 1] = undefined;
            }
            self.len -= 1;
        }

        /// Remove the specified item from the list, shifting items
        /// after it to preserve order.
        pub fn orderedRemove(self: *Self, index: usize) void {
            const slices = self.slice();
            inline for (fields, 0..) |_, field_index| {
                const field_slice = slices.items(@as(Field, @enumFromInt(field_index)));
                var i = index;
                while (i < self.len - 1) : (i += 1) {
                    field_slice[i] = field_slice[i + 1];
                }
                field_slice[i] = undefined;
            }
            self.len -= 1;
        }

        /// Adjust the list's length to `new_len`.
        /// Does not initialize added items, if any.
        pub fn resize(self: *Self, gpa: Allocator, new_len: usize) !void {
            try self.ensureTotalCapacity(gpa, new_len);
            self.len = new_len;
        }

        /// Attempt to reduce allocated capacity to `new_len`.
        /// If `new_len` is greater than zero, this may fail to reduce the capacity,
        /// but the data remains intact and the length is updated to new_len.
        pub fn shrinkAndFree(self: *Self, gpa: Allocator, new_len: usize) void {
            if (new_len == 0) return clearAndFree(self, gpa);

            assert(new_len <= self.capacity);
            assert(new_len <= self.len);

            const other_bytes = gpa.alignedAlloc(
                u8,
                @alignOf(Elem),
                capacityInBytes(new_len),
            ) catch {
                const self_slice = self.slice();
                inline for (fields, 0..) |field_info, i| {
                    if (@sizeOf(field_info.type) != 0) {
                        const field = @as(Field, @enumFromInt(i));
                        const dest_slice = self_slice.items(field)[new_len..];
                        // We use memset here for more efficient codegen in safety-checked,
                        // valgrind-enabled builds. Otherwise the valgrind client request
                        // will be repeated for every element.
                        @memset(dest_slice, undefined);
                    }
                }
                self.len = new_len;
                return;
            };
            var other = Self{
                .bytes = other_bytes.ptr,
                .capacity = new_len,
                .len = new_len,
            };
            self.len = new_len;
            const self_slice = self.slice();
            const other_slice = other.slice();
            inline for (fields, 0..) |field_info, i| {
                if (@sizeOf(field_info.type) != 0) {
                    const field = @as(Field, @enumFromInt(i));
                    @memcpy(other_slice.items(field), self_slice.items(field));
                }
            }
            gpa.free(self.allocatedBytes());
            self.* = other;
        }

        pub fn clearAndFree(self: *Self, gpa: Allocator) void {
            gpa.free(self.allocatedBytes());
            self.* = .{};
        }

        /// Reduce length to `new_len`.
        /// Invalidates pointers to elements `items[new_len..]`.
        /// Keeps capacity the same.
        pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
            self.len = new_len;
        }

        /// Invalidates all element pointers.
        pub fn clearRetainingCapacity(self: *Self) void {
            self.len = 0;
        }

        /// Modify the array so that it can hold at least `new_capacity` items.
        /// Implements super-linear growth to achieve amortized O(1) append operations.
        /// Invalidates element pointers if additional memory is needed.
        pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Allocator.Error!void {
            if (self.capacity >= new_capacity) return;
            return self.setCapacity(gpa, growCapacity(self.capacity, new_capacity));
        }

        const init_capacity = init: {
            var max = 1;
            for (fields) |field| max = @as(comptime_int, @max(max, @sizeOf(field.type)));
            break :init @as(comptime_int, @max(1, std.atomic.cache_line / max));
        };

        /// Called when memory growth is necessary. Returns a capacity larger than
        /// minimum that grows super-linearly.
        fn growCapacity(current: usize, minimum: usize) usize {
            var new = current;
            while (true) {
                new +|= new / 2 + init_capacity;
                if (new >= minimum)
                    return new;
            }
        }

        /// Modify the array so that it can hold at least `additional_count` **more** items.
        /// Invalidates pointers if additional memory is needed.
        pub fn ensureUnusedCapacity(self: *Self, gpa: Allocator, additional_count: usize) !void {
            return self.ensureTotalCapacity(gpa, self.len + additional_count);
        }

        /// Modify the array so that it can hold exactly `new_capacity` items.
        /// Invalidates pointers if additional memory is needed.
        /// `new_capacity` must be greater or equal to `len`.
        pub fn setCapacity(self: *Self, gpa: Allocator, new_capacity: usize) !void {
            assert(new_capacity >= self.len);
            const new_bytes = try gpa.alignedAlloc(
                u8,
                @alignOf(Elem),
                capacityInBytes(new_capacity),
            );
            if (self.len == 0) {
                gpa.free(self.allocatedBytes());
                self.bytes = new_bytes.ptr;
                self.capacity = new_capacity;
                return;
            }
            var other = Self{
                .bytes = new_bytes.ptr,
                .capacity = new_capacity,
                .len = self.len,
            };
            const self_slice = self.slice();
            const other_slice = other.slice();
            inline for (fields, 0..) |field_info, i| {
                if (@sizeOf(field_info.type) != 0) {
                    const field = @as(Field, @enumFromInt(i));
                    @memcpy(other_slice.items(field), self_slice.items(field));
                }
            }
            gpa.free(self.allocatedBytes());
            self.* = other;
        }

        /// Create a copy of this list with a new backing store,
        /// using the specified allocator.
        pub fn clone(self: Self, gpa: Allocator) !Self {
            var result = Self{};
            errdefer result.deinit(gpa);
            try result.ensureTotalCapacity(gpa, self.len);
            result.len = self.len;
            const self_slice = self.slice();
            const result_slice = result.slice();
            inline for (fields, 0..) |field_info, i| {
                if (@sizeOf(field_info.type) != 0) {
                    const field = @as(Field, @enumFromInt(i));
                    @memcpy(result_slice.items(field), self_slice.items(field));
                }
            }
            return result;
        }

        /// `ctx` has the following method:
        /// `fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool`
        fn sortInternal(self: Self, a: usize, b: usize, ctx: anytype, comptime mode: std.sort.Mode) void {
            const sort_context: struct {
                sub_ctx: @TypeOf(ctx),
                slice: Slice,

                pub fn swap(sc: @This(), a_index: usize, b_index: usize) void {
                    inline for (fields, 0..) |field_info, i| {
                        if (@sizeOf(field_info.type) != 0) {
                            const field: Field = @enumFromInt(i);
                            const ptr = sc.slice.items(field);
                            mem.swap(field_info.type, &ptr[a_index], &ptr[b_index]);
                        }
                    }
                }

                pub fn lessThan(sc: @This(), a_index: usize, b_index: usize) bool {
                    return sc.sub_ctx.lessThan(a_index, b_index);
                }
            } = .{
                .sub_ctx = ctx,
                .slice = self.slice(),
            };

            switch (mode) {
                .stable => mem.sortContext(a, b, sort_context),
                .unstable => mem.sortUnstableContext(a, b, sort_context),
            }
        }

        /// This function guarantees a stable sort, i.e. the relative order of equal elements is preserved during sorting.
        /// Read more about stable sorting here: https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
        /// If this guarantee does not matter, `sortUnstable` might be a faster alternative.
        /// `ctx` has the following method:
        /// `fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool`
        pub fn sort(self: Self, ctx: anytype) void {
            self.sortInternal(0, self.len, ctx, .stable);
        }

        /// Sorts only the subsection of items between indices `a` and `b` (excluding `b`).
        /// This function guarantees a stable sort, i.e. the relative order of equal elements is preserved during sorting.
        /// Read more about stable sorting here: https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
        /// If this guarantee does not matter, `sortSpanUnstable` might be a faster alternative.
        /// `ctx` has the following method:
        /// `fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool`
        pub fn sortSpan(self: Self, a: usize, b: usize, ctx: anytype) void {
            self.sortInternal(a, b, ctx, .stable);
        }

        /// This function does NOT guarantee a stable sort, i.e. the relative order of equal elements may change during sorting.
        /// Due to the weaker guarantees of this function, this may be faster than the stable `sort` method.
        /// Read more about stable sorting here: https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
        /// `ctx` has the following method:
        /// `fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool`
        pub fn sortUnstable(self: Self, ctx: anytype) void {
            self.sortInternal(0, self.len, ctx, .unstable);
        }

        /// Sorts only the subsection of items between indices `a` and `b` (excluding `b`).
        /// This function does NOT guarantee a stable sort, i.e. the relative order of equal elements may change during sorting.
        /// Due to the weaker guarantees of this function, this may be faster than the stable `sortSpan` method.
        /// Read more about stable sorting here: https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
        /// `ctx` has the following method:
        /// `fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool`
        pub fn sortSpanUnstable(self: Self, a: usize, b: usize, ctx: anytype) void {
            self.sortInternal(a, b, ctx, .unstable);
        }

        pub fn capacityInBytes(capacity: usize) usize {
            comptime var elem_bytes: usize = 0;
            inline for (sizes.bytes) |size| elem_bytes += size;
            return elem_bytes * capacity;
        }

        fn allocatedBytes(self: Self) []align(@alignOf(Elem)) u8 {
            return self.bytes[0..capacityInBytes(self.capacity)];
        }

        fn FieldType(comptime field: Field) type {
            return @FieldType(Elem, @tagName(field));
        }

        const Entry = entry: {
            var entry_fields: [fields.len]std.builtin.Type.StructField = undefined;
            for (&entry_fields, sizes.fields) |*entry_field, i| entry_field.* = .{
                .name = fields[i].name ++ "_ptr",
                .type = *fields[i].type,
                .default_value_ptr = null,
                .is_comptime = fields[i].is_comptime,
                .alignment = fields[i].alignment,
            };
            break :entry @Type(.{ .@"struct" = .{
                .layout = .@"extern",
                .fields = &entry_fields,
                .decls = &.{},
                .is_tuple = false,
            } });
        };
        /// This function is used in the debugger pretty formatters in tools/ to fetch the
        /// child field order and entry type to facilitate fancy debug printing for this type.
        fn dbHelper(self: *Self, child: *Elem, field: *Field, entry: *Entry) void {
            _ = self;
            _ = child;
            _ = field;
            _ = entry;
        }

        comptime {
            if (builtin.zig_backend == .stage2_llvm and !builtin.strip_debug_info) {
                _ = &dbHelper;
                _ = &Slice.dbHelper;
            }
        }
    };
}

Type FunctionPriorityQueue[src]

Priority queue for storing generic data. Initialize with init. Provide compareFn that returns Order.lt when its second argument should get popped before its third argument, Order.eq if the arguments are of equal priority, or Order.gt if the third argument should be popped first. For example, to make pop return the smallest number, provide:

fn lessThan(context: void, a: T, b: T) Order {
    _ = context;
    return std.math.order(a, b);
}
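
A minimal min-heap sketch (the element type and values are illustrative assumptions):

const std = @import("std");
const Order = std.math.Order;

fn lessThan(context: void, a: u32, b: u32) Order {
    _ = context;
    return std.math.order(a, b);
}

test "min-heap PriorityQueue (illustrative)" {
    var queue = std.PriorityQueue(u32, void, lessThan).init(std.testing.allocator, {});
    defer queue.deinit();

    try queue.add(54);
    try queue.add(12);
    try queue.add(7);

    // With this compareFn the smallest value is popped first.
    try std.testing.expectEqual(@as(u32, 7), queue.remove());
    try std.testing.expectEqual(@as(u32, 12), queue.remove());
}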

Parameters

T: type
Context: type
compareFn: fn (context: Context, a: T, b: T) Order

Fields

items: []T
cap: usize
allocator: Allocator
context: Context

Functions

Functioninit[src]

pub fn init(allocator: Allocator, context: Context) Self

Initialize and return a priority queue.

Parameters

allocator: Allocator
context: Context

Source Code

Source code
pub fn init(allocator: Allocator, context: Context) Self {
    return Self{
        .items = &[_]T{},
        .cap = 0,
        .allocator = allocator,
        .context = context,
    };
}

Functiondeinit[src]

pub fn deinit(self: Self) void

Free memory used by the queue.

Parameters

self: Self

Source Code

Source code
pub fn deinit(self: Self) void {
    self.allocator.free(self.allocatedSlice());
}

Functionadd[src]

pub fn add(self: *Self, elem: T) !void

Insert a new element, maintaining priority.

Parameters

self: *Self
elem: T

Source Code

Source code
pub fn add(self: *Self, elem: T) !void {
    try self.ensureUnusedCapacity(1);
    addUnchecked(self, elem);
}

FunctionaddSlice[src]

pub fn addSlice(self: *Self, items: []const T) !void

Add each element in items to the queue.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn addSlice(self: *Self, items: []const T) !void {
    try self.ensureUnusedCapacity(items.len);
    for (items) |e| {
        self.addUnchecked(e);
    }
}

Functionpeek[src]

pub fn peek(self: *Self) ?T

Look at the highest priority element in the queue. Returns null if empty.

Parameters

self: *Self

Source Code

Source code
pub fn peek(self: *Self) ?T {
    return if (self.items.len > 0) self.items[0] else null;
}

FunctionremoveOrNull[src]

pub fn removeOrNull(self: *Self) ?T

Pop the highest priority element from the queue. Returns null if empty.

Parameters

self: *Self

Source Code

Source code
pub fn removeOrNull(self: *Self) ?T {
    return if (self.items.len > 0) self.remove() else null;
}

Functionremove[src]

pub fn remove(self: *Self) T

Remove and return the highest priority element from the queue.

Parameters

self: *Self

Source Code

Source code
pub fn remove(self: *Self) T {
    return self.removeIndex(0);
}

FunctionremoveIndex[src]

pub fn removeIndex(self: *Self, index: usize) T

Remove and return the element at index. Indices are in the same order as iterator, which is not necessarily priority order.

Parameters

self: *Self
index: usize

Source Code

Source code
pub fn removeIndex(self: *Self, index: usize) T {
    assert(self.items.len > index);
    const last = self.items[self.items.len - 1];
    const item = self.items[index];
    self.items[index] = last;
    self.items.len -= 1;

    if (index == self.items.len) {
        // Last element removed, nothing more to do.
    } else if (index == 0) {
        siftDown(self, index);
    } else {
        const parent_index = ((index - 1) >> 1);
        const parent = self.items[parent_index];
        if (compareFn(self.context, last, parent) == .gt) {
            siftDown(self, index);
        } else {
            siftUp(self, index);
        }
    }

    return item;
}

Functioncount[src]

pub fn count(self: Self) usize

Return the number of elements remaining in the priority queue.

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) usize {
    return self.items.len;
}

Functioncapacity[src]

pub fn capacity(self: Self) usize

Return the number of elements that can be added to the queue before more memory is allocated.

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) usize {
    return self.cap;
}

FunctionfromOwnedSlice[src]

pub fn fromOwnedSlice(allocator: Allocator, items: []T, context: Context) Self

PriorityQueue takes ownership of the passed-in slice. The slice must have been allocated with allocator. Deinitialize with deinit.

Parameters

allocator: Allocator
items: []T
context: Context

Source Code

Source code
pub fn fromOwnedSlice(allocator: Allocator, items: []T, context: Context) Self {
    var self = Self{
        .items = items,
        .cap = items.len,
        .allocator = allocator,
        .context = context,
    };

    var i = self.items.len >> 1;
    while (i > 0) {
        i -= 1;
        self.siftDown(i);
    }
    return self;
}
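
A hedged, self-contained sketch of handing an existing allocation to the queue; the sift-down loop above heapifies it in O(n):

const std = @import("std");
const Order = std.math.Order;

fn lessThanU32(context: void, a: u32, b: u32) Order {
    _ = context;
    return std.math.order(a, b);
}

test "fromOwnedSlice heapifies in place (illustrative)" {
    const gpa = std.testing.allocator;
    const items = try gpa.alloc(u32, 3);
    items[0] = 9;
    items[1] = 1;
    items[2] = 5;

    var queue = std.PriorityQueue(u32, void, lessThanU32).fromOwnedSlice(gpa, items, {});
    defer queue.deinit(); // frees the slice the queue took ownership of

    try std.testing.expectEqual(@as(u32, 1), queue.remove());
}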

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) !void

Ensure that the queue can fit at least new_capacity items.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) !void {
    var better_capacity = self.cap;
    if (better_capacity >= new_capacity) return;
    while (true) {
        better_capacity += better_capacity / 2 + 8;
        if (better_capacity >= new_capacity) break;
    }
    try self.ensureTotalCapacityPrecise(better_capacity);
}

FunctionensureTotalCapacityPrecise[src]

pub fn ensureTotalCapacityPrecise(self: *Self, new_capacity: usize) !void

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacityPrecise(self: *Self, new_capacity: usize) !void {
    if (self.capacity() >= new_capacity) return;

    const old_memory = self.allocatedSlice();
    const new_memory = try self.allocator.realloc(old_memory, new_capacity);
    self.items.ptr = new_memory.ptr;
    self.cap = new_memory.len;
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) !void

Ensure that the queue can fit at least additional_count more items.

Parameters

self: *Self
additional_count: usize

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) !void {
    return self.ensureTotalCapacity(self.items.len + additional_count);
}

FunctionshrinkAndFree[src]

pub fn shrinkAndFree(self: *Self, new_capacity: usize) void

Reduce allocated capacity to new_capacity.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn shrinkAndFree(self: *Self, new_capacity: usize) void {
    assert(new_capacity <= self.cap);

    // Cannot shrink to smaller than the current queue size without invalidating the heap property
    assert(new_capacity >= self.items.len);

    const old_memory = self.allocatedSlice();
    const new_memory = self.allocator.realloc(old_memory, new_capacity) catch |e| switch (e) {
        error.OutOfMemory => { // no problem, capacity is still correct then.
            return;
        },
    };

    self.items.ptr = new_memory.ptr;
    self.cap = new_memory.len;
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.items.len = 0;
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self) void

Parameters

self: *Self

Source Code

Source code
pub fn clearAndFree(self: *Self) void {
    self.allocator.free(self.allocatedSlice());
    self.items.len = 0;
    self.cap = 0;
}

Functionupdate[src]

pub fn update(self: *Self, elem: T, new_elem: T) !void

Parameters

self: *Self
elem: T
new_elem: T

Source Code

Source code
pub fn update(self: *Self, elem: T, new_elem: T) !void {
    const update_index = blk: {
        var idx: usize = 0;
        while (idx < self.items.len) : (idx += 1) {
            const item = self.items[idx];
            if (compareFn(self.context, item, elem) == .eq) break :blk idx;
        }
        return error.ElementNotFound;
    };
    const old_elem: T = self.items[update_index];
    self.items[update_index] = new_elem;
    switch (compareFn(self.context, new_elem, old_elem)) {
        .lt => siftUp(self, update_index),
        .gt => siftDown(self, update_index),
        .eq => {}, // Nothing to do as the items have equal priority
    }
}
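
Continuing the min-heap sketch above: update finds the first stored element that compares .eq to its first argument, replaces it with the second, then sifts up or down to restore heap order.

// Replace the element equal to 12 with 3; it sifts up because 3 < 12.
try queue.update(12, 3);
// If no stored element compares .eq to 12, this returns error.ElementNotFound.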

Functioniterator[src]

pub fn iterator(self: *Self) Iterator

Return an iterator that walks the queue without consuming it. The iteration order may differ from the priority order. Invalidated if the heap is modified.

Parameters

self: *Self

Source Code

Source code
pub fn iterator(self: *Self) Iterator {
    return Iterator{
        .queue = self,
        .count = 0,
    };
}
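
Continuing the min-heap sketch above; note that values arrive in heap-array order, not sorted order:

var it = queue.iterator();
while (it.next()) |value| {
    std.debug.print("{} ", .{value});
}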

Source Code

Source code
pub fn PriorityQueue(comptime T: type, comptime Context: type, comptime compareFn: fn (context: Context, a: T, b: T) Order) type {
    return struct {
        const Self = @This();

        items: []T,
        cap: usize,
        allocator: Allocator,
        context: Context,

        /// Initialize and return a priority queue.
        pub fn init(allocator: Allocator, context: Context) Self {
            return Self{
                .items = &[_]T{},
                .cap = 0,
                .allocator = allocator,
                .context = context,
            };
        }

        /// Free memory used by the queue.
        pub fn deinit(self: Self) void {
            self.allocator.free(self.allocatedSlice());
        }

        /// Insert a new element, maintaining priority.
        pub fn add(self: *Self, elem: T) !void {
            try self.ensureUnusedCapacity(1);
            addUnchecked(self, elem);
        }

        fn addUnchecked(self: *Self, elem: T) void {
            self.items.len += 1;
            self.items[self.items.len - 1] = elem;
            siftUp(self, self.items.len - 1);
        }

        fn siftUp(self: *Self, start_index: usize) void {
            const child = self.items[start_index];
            var child_index = start_index;
            while (child_index > 0) {
                const parent_index = ((child_index - 1) >> 1);
                const parent = self.items[parent_index];
                if (compareFn(self.context, child, parent) != .lt) break;
                self.items[child_index] = parent;
                child_index = parent_index;
            }
            self.items[child_index] = child;
        }

        /// Add each element in `items` to the queue.
        pub fn addSlice(self: *Self, items: []const T) !void {
            try self.ensureUnusedCapacity(items.len);
            for (items) |e| {
                self.addUnchecked(e);
            }
        }

        /// Look at the highest priority element in the queue. Returns
        /// `null` if empty.
        pub fn peek(self: *Self) ?T {
            return if (self.items.len > 0) self.items[0] else null;
        }

        /// Pop the highest priority element from the queue. Returns
        /// `null` if empty.
        pub fn removeOrNull(self: *Self) ?T {
            return if (self.items.len > 0) self.remove() else null;
        }

        /// Remove and return the highest priority element from the
        /// queue.
        pub fn remove(self: *Self) T {
            return self.removeIndex(0);
        }

        /// Remove and return element at index. Indices are in the
        /// same order as iterator, which is not necessarily priority
        /// order.
        pub fn removeIndex(self: *Self, index: usize) T {
            assert(self.items.len > index);
            const last = self.items[self.items.len - 1];
            const item = self.items[index];
            self.items[index] = last;
            self.items.len -= 1;

            if (index == self.items.len) {
                // Last element removed, nothing more to do.
            } else if (index == 0) {
                siftDown(self, index);
            } else {
                const parent_index = ((index - 1) >> 1);
                const parent = self.items[parent_index];
                if (compareFn(self.context, last, parent) == .gt) {
                    siftDown(self, index);
                } else {
                    siftUp(self, index);
                }
            }

            return item;
        }

        /// Return the number of elements remaining in the priority
        /// queue.
        pub fn count(self: Self) usize {
            return self.items.len;
        }

        /// Return the number of elements that can be added to the
        /// queue before more memory is allocated.
        pub fn capacity(self: Self) usize {
            return self.cap;
        }

        /// Returns a slice of all the items plus the extra capacity, whose memory
        /// contents are `undefined`.
        fn allocatedSlice(self: Self) []T {
            // `items.len` is the length, not the capacity.
            return self.items.ptr[0..self.cap];
        }

        fn siftDown(self: *Self, target_index: usize) void {
            const target_element = self.items[target_index];
            var index = target_index;
            while (true) {
                var lesser_child_i = (std.math.mul(usize, index, 2) catch break) | 1;
                if (!(lesser_child_i < self.items.len)) break;

                const next_child_i = lesser_child_i + 1;
                if (next_child_i < self.items.len and compareFn(self.context, self.items[next_child_i], self.items[lesser_child_i]) == .lt) {
                    lesser_child_i = next_child_i;
                }

                if (compareFn(self.context, target_element, self.items[lesser_child_i]) == .lt) break;

                self.items[index] = self.items[lesser_child_i];
                index = lesser_child_i;
            }
            self.items[index] = target_element;
        }

        /// PriorityQueue takes ownership of the passed in slice. The slice must have been
        /// allocated with `allocator`.
        /// Deinitialize with `deinit`.
        pub fn fromOwnedSlice(allocator: Allocator, items: []T, context: Context) Self {
            var self = Self{
                .items = items,
                .cap = items.len,
                .allocator = allocator,
                .context = context,
            };

            var i = self.items.len >> 1;
            while (i > 0) {
                i -= 1;
                self.siftDown(i);
            }
            return self;
        }

        /// Ensure that the queue can fit at least `new_capacity` items.
        pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) !void {
            var better_capacity = self.cap;
            if (better_capacity >= new_capacity) return;
            while (true) {
                better_capacity += better_capacity / 2 + 8;
                if (better_capacity >= new_capacity) break;
            }
            try self.ensureTotalCapacityPrecise(better_capacity);
        }

        pub fn ensureTotalCapacityPrecise(self: *Self, new_capacity: usize) !void {
            if (self.capacity() >= new_capacity) return;

            const old_memory = self.allocatedSlice();
            const new_memory = try self.allocator.realloc(old_memory, new_capacity);
            self.items.ptr = new_memory.ptr;
            self.cap = new_memory.len;
        }

        /// Ensure that the queue can fit at least `additional_count` **more** items.
        pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) !void {
            return self.ensureTotalCapacity(self.items.len + additional_count);
        }

        /// Reduce allocated capacity to `new_capacity`.
        pub fn shrinkAndFree(self: *Self, new_capacity: usize) void {
            assert(new_capacity <= self.cap);

            // Cannot shrink to smaller than the current queue size without invalidating the heap property
            assert(new_capacity >= self.items.len);

            const old_memory = self.allocatedSlice();
            const new_memory = self.allocator.realloc(old_memory, new_capacity) catch |e| switch (e) {
                error.OutOfMemory => { // no problem, capacity is still correct then.
                    return;
                },
            };

            self.items.ptr = new_memory.ptr;
            self.cap = new_memory.len;
        }

        pub fn clearRetainingCapacity(self: *Self) void {
            self.items.len = 0;
        }

        pub fn clearAndFree(self: *Self) void {
            self.allocator.free(self.allocatedSlice());
            self.items.len = 0;
            self.cap = 0;
        }

        pub fn update(self: *Self, elem: T, new_elem: T) !void {
            const update_index = blk: {
                var idx: usize = 0;
                while (idx < self.items.len) : (idx += 1) {
                    const item = self.items[idx];
                    if (compareFn(self.context, item, elem) == .eq) break :blk idx;
                }
                return error.ElementNotFound;
            };
            const old_elem: T = self.items[update_index];
            self.items[update_index] = new_elem;
            switch (compareFn(self.context, new_elem, old_elem)) {
                .lt => siftUp(self, update_index),
                .gt => siftDown(self, update_index),
                .eq => {}, // Nothing to do as the items have equal priority
            }
        }

        pub const Iterator = struct {
            queue: *PriorityQueue(T, Context, compareFn),
            count: usize,

            pub fn next(it: *Iterator) ?T {
                if (it.count >= it.queue.items.len) return null;
                const out = it.count;
                it.count += 1;
                return it.queue.items[out];
            }

            pub fn reset(it: *Iterator) void {
                it.count = 0;
            }
        };

        /// Return an iterator that walks the queue without consuming
        /// it. The iteration order may differ from the priority order.
        /// Invalidated if the heap is modified.
        pub fn iterator(self: *Self) Iterator {
            return Iterator{
                .queue = self,
                .count = 0,
            };
        }

        fn dump(self: *Self) void {
            const print = std.debug.print;
            print("{{ ", .{});
            print("items: ", .{});
            for (self.items) |e| {
                print("{}, ", .{e});
            }
            print("array: ", .{});
            for (self.allocatedSlice()) |e| {
                print("{}, ", .{e});
            }
            print("len: {} ", .{self.items.len});
            print("capacity: {}", .{self.cap});
            print(" }}\n", .{});
        }
    };
}
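
Example Usage

A minimal sketch (not part of the upstream documentation) of the fromOwnedSlice path shown above, which heapifies an existing allocation in place; the lessThan comparator is illustrative:

const std = @import("std");
const Order = std.math.Order;

fn lessThan(context: void, a: u32, b: u32) Order {
    _ = context;
    return std.math.order(a, b);
}

test "PriorityQueue.fromOwnedSlice" {
    const allocator = std.testing.allocator;
    // The queue takes ownership of this allocation; deinit frees it.
    const items = try allocator.dupe(u32, &[_]u32{ 15, 7, 21, 14 });
    var queue = std.PriorityQueue(u32, void, lessThan).fromOwnedSlice(allocator, items, {});
    defer queue.deinit();

    try std.testing.expectEqual(@as(u32, 7), queue.remove());
    try std.testing.expectEqual(@as(u32, 14), queue.remove());
}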

Type FunctionPriorityDequeue[src]

Priority dequeue for storing generic data. Initialize with init. Provide compareFn that returns Order.lt when its second argument should get min-popped before its third argument, Order.eq if the arguments are of equal priority, or Order.gt when its third argument should get min-popped before its second. Popping the max element works in reverse. For example, to make popMin return the smallest number, provide fn lessThan(context: void, a: T, b: T) Order { _ = context; return std.math.order(a, b); }

Parameters

T: type
Context: type
compareFn: fn (context: Context, a: T, b: T) Order
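
Example Usage

A minimal usage sketch (not part of the upstream documentation); lessThan is an illustrative comparator matching the signature above:

const std = @import("std");
const Order = std.math.Order;

fn lessThan(context: void, a: u32, b: u32) Order {
    _ = context;
    return std.math.order(a, b);
}

test "PriorityDequeue basic usage" {
    var deque = std.PriorityDequeue(u32, void, lessThan).init(std.testing.allocator, {});
    defer deque.deinit();

    try deque.addSlice(&[_]u32{ 54, 12, 7, 23 });

    try std.testing.expectEqual(@as(u32, 7), deque.removeMin());
    try std.testing.expectEqual(@as(u32, 54), deque.removeMax());
    try std.testing.expectEqual(@as(usize, 2), deque.count());
}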

Fields

items: []T
len: usize
allocator: Allocator
context: Context

Functions

Functioninit[src]

pub fn init(allocator: Allocator, context: Context) Self

Initialize and return a new priority dequeue.

Parameters

allocator: Allocator
context: Context

Source Code

Source code
pub fn init(allocator: Allocator, context: Context) Self {
    return Self{
        .items = &[_]T{},
        .len = 0,
        .allocator = allocator,
        .context = context,
    };
}

Functiondeinit[src]

pub fn deinit(self: Self) void

Free memory used by the dequeue.

Parameters

self: Self

Source Code

Source code
pub fn deinit(self: Self) void {
    self.allocator.free(self.items);
}

Functionadd[src]

pub fn add(self: *Self, elem: T) !void

Insert a new element, maintaining priority.

Parameters

self: *Self
elem: T

Source Code

Source code
pub fn add(self: *Self, elem: T) !void {
    try self.ensureUnusedCapacity(1);
    addUnchecked(self, elem);
}

FunctionaddSlice[src]

pub fn addSlice(self: *Self, items: []const T) !void

Add each element in items to the dequeue.

Parameters

self: *Self
items: []const T

Source Code

Source code
pub fn addSlice(self: *Self, items: []const T) !void {
    try self.ensureUnusedCapacity(items.len);
    for (items) |e| {
        self.addUnchecked(e);
    }
}

FunctionpeekMin[src]

pub fn peekMin(self: *Self) ?T

Look at the smallest element in the dequeue. Returns null if empty.

Parameters

self: *Self

Source Code

Source code
pub fn peekMin(self: *Self) ?T {
    return if (self.len > 0) self.items[0] else null;
}

FunctionpeekMax[src]

pub fn peekMax(self: *Self) ?T

Look at the largest element in the dequeue. Returns null if empty.

Parameters

self: *Self

Source Code

Source code
pub fn peekMax(self: *Self) ?T {
    if (self.len == 0) return null;
    if (self.len == 1) return self.items[0];
    if (self.len == 2) return self.items[1];
    return self.bestItemAtIndices(1, 2, .gt).item;
}

FunctionremoveMinOrNull[src]

pub fn removeMinOrNull(self: *Self) ?T

Pop the smallest element from the dequeue. Returns null if empty.

Parameters

self: *Self

Source Code

Source code
pub fn removeMinOrNull(self: *Self) ?T {
    return if (self.len > 0) self.removeMin() else null;
}

FunctionremoveMin[src]

pub fn removeMin(self: *Self) T

Remove and return the smallest element from the dequeue.

Parameters

self: *Self

Source Code

Source code
pub fn removeMin(self: *Self) T {
    return self.removeIndex(0);
}

FunctionremoveMaxOrNull[src]

pub fn removeMaxOrNull(self: *Self) ?T

Pop the largest element from the dequeue. Returns null if empty.

Parameters

self: *Self

Source Code

Source code
pub fn removeMaxOrNull(self: *Self) ?T {
    return if (self.len > 0) self.removeMax() else null;
}

FunctionremoveMax[src]

pub fn removeMax(self: *Self) T

Remove and return the largest element from the dequeue.

Parameters

self: *Self

Source Code

Source code
pub fn removeMax(self: *Self) T {
    return self.removeIndex(self.maxIndex().?);
}

FunctionremoveIndex[src]

pub fn removeIndex(self: *Self, index: usize) T

Remove and return element at index. Indices are in the same order as iterator, which is not necessarily priority order.

Parameters

self: *Self
index: usize
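
Example Usage

A short sketch (not from the upstream docs) of locating an element's backing-store index via the iterator before removing it; lessThan is illustrative:

const std = @import("std");
const Order = std.math.Order;

fn lessThan(context: void, a: u32, b: u32) Order {
    _ = context;
    return std.math.order(a, b);
}

test "PriorityDequeue.removeIndex" {
    var deque = std.PriorityDequeue(u32, void, lessThan).init(std.testing.allocator, {});
    defer deque.deinit();
    try deque.addSlice(&[_]u32{ 2, 1, 3 });

    // Walk the iterator to find the index of the value 3;
    // iterator order matches removeIndex indices, not priority order.
    var it = deque.iterator();
    var index: usize = 0;
    while (it.next()) |e| {
        if (e == 3) break;
        index += 1;
    }

    try std.testing.expectEqual(@as(u32, 3), deque.removeIndex(index));
    try std.testing.expectEqual(@as(usize, 2), deque.count());
}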

Source Code

Source code
pub fn removeIndex(self: *Self, index: usize) T {
    assert(self.len > index);
    const item = self.items[index];
    const last = self.items[self.len - 1];

    self.items[index] = last;
    self.len -= 1;
    siftDown(self, index);

    return item;
}

Functioncount[src]

pub fn count(self: Self) usize

Return the number of elements remaining in the dequeue.

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) usize {
    return self.len;
}

Functioncapacity[src]

pub fn capacity(self: Self) usize

Return the number of elements that can be added to the dequeue before more memory is allocated.

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) usize {
    return self.items.len;
}

FunctionfromOwnedSlice[src]

pub fn fromOwnedSlice(allocator: Allocator, items: []T, context: Context) Self

Dequeue takes ownership of the passed in slice. The slice must have been allocated with allocator. De-initialize with deinit.

Parameters

allocator: Allocator
items: []T
context: Context

Source Code

Source code
pub fn fromOwnedSlice(allocator: Allocator, items: []T, context: Context) Self {
    var queue = Self{
        .items = items,
        .len = items.len,
        .allocator = allocator,
        .context = context,
    };

    if (queue.len <= 1) return queue;

    const half = (queue.len >> 1) - 1;
    var i: usize = 0;
    while (i <= half) : (i += 1) {
        const index = half - i;
        queue.siftDown(index);
    }
    return queue;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) !void

Ensure that the dequeue can fit at least new_capacity items.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) !void {
    var better_capacity = self.capacity();
    if (better_capacity >= new_capacity) return;
    while (true) {
        better_capacity += better_capacity / 2 + 8;
        if (better_capacity >= new_capacity) break;
    }
    self.items = try self.allocator.realloc(self.items, better_capacity);
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) !void

Ensure that the dequeue can fit at least additional_count more items.

Parameters

self: *Self
additional_count: usize

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) !void {
    return self.ensureTotalCapacity(self.len + additional_count);
}

FunctionshrinkAndFree[src]

pub fn shrinkAndFree(self: *Self, new_len: usize) void

Reduce allocated capacity to new_len.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkAndFree(self: *Self, new_len: usize) void {
    assert(new_len <= self.items.len);

    // Cannot shrink to smaller than the current queue size without invalidating the heap property
    assert(new_len >= self.len);

    self.items = self.allocator.realloc(self.items[0..], new_len) catch |e| switch (e) {
        error.OutOfMemory => { // no problem, capacity is still correct then.
            self.items.len = new_len;
            return;
        },
    };
}

Functionupdate[src]

pub fn update(self: *Self, elem: T, new_elem: T) !void

Replace one element that compares equal to elem (via compareFn) with new_elem, maintaining priority. Returns error.ElementNotFound if no matching element is present.

Parameters

self: *Self
elem: T
new_elem: T

Source Code

Source code
pub fn update(self: *Self, elem: T, new_elem: T) !void {
    const old_index = blk: {
        var idx: usize = 0;
        while (idx < self.len) : (idx += 1) {
            const item = self.items[idx];
            if (compareFn(self.context, item, elem) == .eq) break :blk idx;
        }
        return error.ElementNotFound;
    };
    _ = self.removeIndex(old_index);
    self.addUnchecked(new_elem);
}

Functioniterator[src]

pub fn iterator(self: *Self) Iterator

Return an iterator that walks the queue without consuming it. The iteration order may differ from the priority order. Invalidated if the queue is modified.

Parameters

self: *Self

Source Code

Source code
pub fn iterator(self: *Self) Iterator {
    return Iterator{
        .queue = self,
        .count = 0,
    };
}

Source Code

Source code
pub fn PriorityDequeue(comptime T: type, comptime Context: type, comptime compareFn: fn (context: Context, a: T, b: T) Order) type {
    return struct {
        const Self = @This();

        items: []T,
        len: usize,
        allocator: Allocator,
        context: Context,

        /// Initialize and return a new priority dequeue.
        pub fn init(allocator: Allocator, context: Context) Self {
            return Self{
                .items = &[_]T{},
                .len = 0,
                .allocator = allocator,
                .context = context,
            };
        }

        /// Free memory used by the dequeue.
        pub fn deinit(self: Self) void {
            self.allocator.free(self.items);
        }

        /// Insert a new element, maintaining priority.
        pub fn add(self: *Self, elem: T) !void {
            try self.ensureUnusedCapacity(1);
            addUnchecked(self, elem);
        }

        /// Add each element in `items` to the dequeue.
        pub fn addSlice(self: *Self, items: []const T) !void {
            try self.ensureUnusedCapacity(items.len);
            for (items) |e| {
                self.addUnchecked(e);
            }
        }

        fn addUnchecked(self: *Self, elem: T) void {
            self.items[self.len] = elem;

            if (self.len > 0) {
                const start = self.getStartForSiftUp(elem, self.len);
                self.siftUp(start);
            }

            self.len += 1;
        }

        fn isMinLayer(index: usize) bool {
            // In the min-max heap structure:
            // The first element is on a min layer;
            // next two are on a max layer;
            // next four are on a min layer, and so on.
            return 1 == @clz(index +% 1) & 1;
        }

        fn nextIsMinLayer(self: Self) bool {
            return isMinLayer(self.len);
        }

        const StartIndexAndLayer = struct {
            index: usize,
            min_layer: bool,
        };

        fn getStartForSiftUp(self: Self, child: T, index: usize) StartIndexAndLayer {
            const child_index = index;
            const parent_index = parentIndex(child_index);
            const parent = self.items[parent_index];

            const min_layer = self.nextIsMinLayer();
            const order = compareFn(self.context, child, parent);
            if ((min_layer and order == .gt) or (!min_layer and order == .lt)) {
                // We must swap the item with its parent if it is on the "wrong" layer
                self.items[parent_index] = child;
                self.items[child_index] = parent;
                return .{
                    .index = parent_index,
                    .min_layer = !min_layer,
                };
            } else {
                return .{
                    .index = child_index,
                    .min_layer = min_layer,
                };
            }
        }

        fn siftUp(self: *Self, start: StartIndexAndLayer) void {
            if (start.min_layer) {
                doSiftUp(self, start.index, .lt);
            } else {
                doSiftUp(self, start.index, .gt);
            }
        }

        fn doSiftUp(self: *Self, start_index: usize, target_order: Order) void {
            var child_index = start_index;
            while (child_index > 2) {
                const grandparent_index = grandparentIndex(child_index);
                const child = self.items[child_index];
                const grandparent = self.items[grandparent_index];

                // If the grandparent is already better or equal, we have gone as far as we need to
                if (compareFn(self.context, child, grandparent) != target_order) break;

                // Otherwise swap the item with its grandparent
                self.items[grandparent_index] = child;
                self.items[child_index] = grandparent;
                child_index = grandparent_index;
            }
        }

        /// Look at the smallest element in the dequeue. Returns
        /// `null` if empty.
        pub fn peekMin(self: *Self) ?T {
            return if (self.len > 0) self.items[0] else null;
        }

        /// Look at the largest element in the dequeue. Returns
        /// `null` if empty.
        pub fn peekMax(self: *Self) ?T {
            if (self.len == 0) return null;
            if (self.len == 1) return self.items[0];
            if (self.len == 2) return self.items[1];
            return self.bestItemAtIndices(1, 2, .gt).item;
        }

        fn maxIndex(self: Self) ?usize {
            if (self.len == 0) return null;
            if (self.len == 1) return 0;
            if (self.len == 2) return 1;
            return self.bestItemAtIndices(1, 2, .gt).index;
        }

        /// Pop the smallest element from the dequeue. Returns
        /// `null` if empty.
        pub fn removeMinOrNull(self: *Self) ?T {
            return if (self.len > 0) self.removeMin() else null;
        }

        /// Remove and return the smallest element from the
        /// dequeue.
        pub fn removeMin(self: *Self) T {
            return self.removeIndex(0);
        }

        /// Pop the largest element from the dequeue. Returns
        /// `null` if empty.
        pub fn removeMaxOrNull(self: *Self) ?T {
            return if (self.len > 0) self.removeMax() else null;
        }

        /// Remove and return the largest element from the
        /// dequeue.
        pub fn removeMax(self: *Self) T {
            return self.removeIndex(self.maxIndex().?);
        }

        /// Remove and return element at index. Indices are in the
        /// same order as iterator, which is not necessarily priority
        /// order.
        pub fn removeIndex(self: *Self, index: usize) T {
            assert(self.len > index);
            const item = self.items[index];
            const last = self.items[self.len - 1];

            self.items[index] = last;
            self.len -= 1;
            siftDown(self, index);

            return item;
        }

        fn siftDown(self: *Self, index: usize) void {
            if (isMinLayer(index)) {
                self.doSiftDown(index, .lt);
            } else {
                self.doSiftDown(index, .gt);
            }
        }

        fn doSiftDown(self: *Self, start_index: usize, target_order: Order) void {
            var index = start_index;
            const half = self.len >> 1;
            while (true) {
                const first_grandchild_index = firstGrandchildIndex(index);
                const last_grandchild_index = first_grandchild_index + 3;

                const elem = self.items[index];

                if (last_grandchild_index < self.len) {
                    // All four grandchildren exist
                    const index2 = first_grandchild_index + 1;
                    const index3 = index2 + 1;

                    // Find the best grandchild
                    const best_left = self.bestItemAtIndices(first_grandchild_index, index2, target_order);
                    const best_right = self.bestItemAtIndices(index3, last_grandchild_index, target_order);
                    const best_grandchild = self.bestItem(best_left, best_right, target_order);

                    // If the item is better than or equal to its best grandchild, we are done
                    if (compareFn(self.context, best_grandchild.item, elem) != target_order) return;

                    // Otherwise, swap them
                    self.items[best_grandchild.index] = elem;
                    self.items[index] = best_grandchild.item;
                    index = best_grandchild.index;

                    // We might need to swap the element with its parent
                    self.swapIfParentIsBetter(elem, index, target_order);
                } else {
                    // The children or grandchildren are the last layer
                    const first_child_index = firstChildIndex(index);
                    if (first_child_index >= self.len) return;

                    const best_descendent = self.bestDescendent(first_child_index, first_grandchild_index, target_order);

                    // If the item is better than or equal to its best descendant, we are done
                    if (compareFn(self.context, best_descendent.item, elem) != target_order) return;

                    // Otherwise swap them
                    self.items[best_descendent.index] = elem;
                    self.items[index] = best_descendent.item;
                    index = best_descendent.index;

                    // If we didn't swap a grandchild, we are done
                    if (index < first_grandchild_index) return;

                    // We might need to swap the element with its parent
                    self.swapIfParentIsBetter(elem, index, target_order);
                    return;
                }

                // If we are now in the last layer, we are done
                if (index >= half) return;
            }
        }

        fn swapIfParentIsBetter(self: *Self, child: T, child_index: usize, target_order: Order) void {
            const parent_index = parentIndex(child_index);
            const parent = self.items[parent_index];

            if (compareFn(self.context, parent, child) == target_order) {
                self.items[parent_index] = child;
                self.items[child_index] = parent;
            }
        }

        const ItemAndIndex = struct {
            item: T,
            index: usize,
        };

        fn getItem(self: Self, index: usize) ItemAndIndex {
            return .{
                .item = self.items[index],
                .index = index,
            };
        }

        fn bestItem(self: Self, item1: ItemAndIndex, item2: ItemAndIndex, target_order: Order) ItemAndIndex {
            if (compareFn(self.context, item1.item, item2.item) == target_order) {
                return item1;
            } else {
                return item2;
            }
        }

        fn bestItemAtIndices(self: Self, index1: usize, index2: usize, target_order: Order) ItemAndIndex {
            const item1 = self.getItem(index1);
            const item2 = self.getItem(index2);
            return self.bestItem(item1, item2, target_order);
        }

        fn bestDescendent(self: Self, first_child_index: usize, first_grandchild_index: usize, target_order: Order) ItemAndIndex {
            const second_child_index = first_child_index + 1;
            if (first_grandchild_index >= self.len) {
                // No grandchildren, find the best child (second may not exist)
                if (second_child_index >= self.len) {
                    return .{
                        .item = self.items[first_child_index],
                        .index = first_child_index,
                    };
                } else {
                    return self.bestItemAtIndices(first_child_index, second_child_index, target_order);
                }
            }

            const second_grandchild_index = first_grandchild_index + 1;
            if (second_grandchild_index >= self.len) {
                // One grandchild, so we know there is a second child. Compare first grandchild and second child
                return self.bestItemAtIndices(first_grandchild_index, second_child_index, target_order);
            }

            const best_left_grandchild_index = self.bestItemAtIndices(first_grandchild_index, second_grandchild_index, target_order).index;
            const third_grandchild_index = second_grandchild_index + 1;
            if (third_grandchild_index >= self.len) {
                // Two grandchildren, and we know the best. Compare this to second child.
                return self.bestItemAtIndices(best_left_grandchild_index, second_child_index, target_order);
            } else {
                // Three grandchildren, compare the best of the first two with the third
                return self.bestItemAtIndices(best_left_grandchild_index, third_grandchild_index, target_order);
            }
        }

        /// Return the number of elements remaining in the dequeue.
        pub fn count(self: Self) usize {
            return self.len;
        }

        /// Return the number of elements that can be added to the
        /// dequeue before more memory is allocated.
        pub fn capacity(self: Self) usize {
            return self.items.len;
        }

        /// Dequeue takes ownership of the passed in slice. The slice must have been
        /// allocated with `allocator`.
        /// De-initialize with `deinit`.
        pub fn fromOwnedSlice(allocator: Allocator, items: []T, context: Context) Self {
            var queue = Self{
                .items = items,
                .len = items.len,
                .allocator = allocator,
                .context = context,
            };

            if (queue.len <= 1) return queue;

            const half = (queue.len >> 1) - 1;
            var i: usize = 0;
            while (i <= half) : (i += 1) {
                const index = half - i;
                queue.siftDown(index);
            }
            return queue;
        }

        /// Ensure that the dequeue can fit at least `new_capacity` items.
        pub fn ensureTotalCapacity(self: *Self, new_capacity: usize) !void {
            var better_capacity = self.capacity();
            if (better_capacity >= new_capacity) return;
            while (true) {
                better_capacity += better_capacity / 2 + 8;
                if (better_capacity >= new_capacity) break;
            }
            self.items = try self.allocator.realloc(self.items, better_capacity);
        }

        /// Ensure that the dequeue can fit at least `additional_count` **more** items.
        pub fn ensureUnusedCapacity(self: *Self, additional_count: usize) !void {
            return self.ensureTotalCapacity(self.len + additional_count);
        }

        /// Reduce allocated capacity to `new_len`.
        pub fn shrinkAndFree(self: *Self, new_len: usize) void {
            assert(new_len <= self.items.len);

            // Cannot shrink to smaller than the current queue size without invalidating the heap property
            assert(new_len >= self.len);

            self.items = self.allocator.realloc(self.items[0..], new_len) catch |e| switch (e) {
                error.OutOfMemory => { // no problem, capacity is still correct then.
                    self.items.len = new_len;
                    return;
                },
            };
        }

        pub fn update(self: *Self, elem: T, new_elem: T) !void {
            const old_index = blk: {
                var idx: usize = 0;
                while (idx < self.len) : (idx += 1) {
                    const item = self.items[idx];
                    if (compareFn(self.context, item, elem) == .eq) break :blk idx;
                }
                return error.ElementNotFound;
            };
            _ = self.removeIndex(old_index);
            self.addUnchecked(new_elem);
        }

        pub const Iterator = struct {
            queue: *PriorityDequeue(T, Context, compareFn),
            count: usize,

            pub fn next(it: *Iterator) ?T {
                if (it.count >= it.queue.len) return null;
                const out = it.count;
                it.count += 1;
                return it.queue.items[out];
            }

            pub fn reset(it: *Iterator) void {
                it.count = 0;
            }
        };

        /// Return an iterator that walks the queue without consuming
        /// it. The iteration order may differ from the priority order.
        /// Invalidated if the queue is modified.
        pub fn iterator(self: *Self) Iterator {
            return Iterator{
                .queue = self,
                .count = 0,
            };
        }

        fn dump(self: *Self) void {
            const print = std.debug.print;
            print("{{ ", .{});
            print("items: ", .{});
            for (self.items, 0..) |e, i| {
                if (i >= self.len) break;
                print("{}, ", .{e});
            }
            print("array: ", .{});
            for (self.items) |e| {
                print("{}, ", .{e});
            }
            print("len: {} ", .{self.len});
            print("capacity: {}", .{self.capacity()});
            print(" }}\n", .{});
        }

        fn parentIndex(index: usize) usize {
            return (index - 1) >> 1;
        }

        fn grandparentIndex(index: usize) usize {
            return parentIndex(parentIndex(index));
        }

        fn firstChildIndex(index: usize) usize {
            return (index << 1) + 1;
        }

        fn firstGrandchildIndex(index: usize) usize {
            return firstChildIndex(firstChildIndex(index));
        }
    };
}

Type FunctionSegmentedList[src]

This is a stack data structure where pointers to elements remain valid for the lifetime of the data structure itself, unlike ArrayList, where append() invalidates all existing element pointers. The tradeoff is that elements are not guaranteed to be contiguous. For that, use ArrayList. Note, however, that most elements are contiguous, making this data structure cache-friendly.

Because it never has to copy elements from an old location to a new location, it does not require its elements to be copyable, and it avoids wasting memory when backed by an ArenaAllocator. Note that the append() and pop() convenience methods perform a copy, but you can instead use addOne(), at(), setCapacity(), and shrinkCapacity() to avoid copying items.

This data structure has O(1) append and O(1) pop.

It supports preallocated elements, making it especially well suited when the expected maximum size is small. prealloc_item_count must be 0 or a power of 2.

Parameters

T: type
prealloc_item_count: usize
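
Example Usage

A minimal sketch (not part of the upstream documentation) showing that element pointers stay valid across appends:

const std = @import("std");

test "SegmentedList basic usage" {
    const allocator = std.testing.allocator;
    var list: std.SegmentedList(u32, 4) = .{};
    defer list.deinit(allocator);

    try list.append(allocator, 1);
    const first = list.at(0); // remains valid after further appends

    try list.appendSlice(allocator, &[_]u32{ 2, 3, 4, 5, 6 });
    try std.testing.expectEqual(@as(u32, 1), first.*);
    try std.testing.expectEqual(@as(usize, 6), list.count());
}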

Types

TypeIterator[src]

Source Code

Source code
pub const Iterator = BaseIterator(*Self, *T)

TypeConstIterator[src]

Source Code

Source code
pub const ConstIterator = BaseIterator(*const Self, *const T)

Fields

prealloc_segment: [prealloc_item_count]T = undefined
dynamic_segments: [][*]T = &[_][*]T{}
len: usize = 0

Values

Constantprealloc_count[src]

Source Code

Source code
pub const prealloc_count = prealloc_item_count

Functions

Functiondeinit[src]

pub fn deinit(self: *Self, allocator: Allocator) void

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn deinit(self: *Self, allocator: Allocator) void {
    self.freeShelves(allocator, @as(ShelfIndex, @intCast(self.dynamic_segments.len)), 0);
    allocator.free(self.dynamic_segments);
    self.* = undefined;
}

Functionat[src]

pub fn at(self: anytype, i: usize) AtType(@TypeOf(self))

Parameters

i: usize

Source Code

Source code
pub fn at(self: anytype, i: usize) AtType(@TypeOf(self)) {
    assert(i < self.len);
    return self.uncheckedAt(i);
}

Functioncount[src]

pub fn count(self: Self) usize

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) usize {
    return self.len;
}

Functionappend[src]

pub fn append(self: *Self, allocator: Allocator, item: T) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
item: T

Source Code

Source code
pub fn append(self: *Self, allocator: Allocator, item: T) Allocator.Error!void {
    const new_item_ptr = try self.addOne(allocator);
    new_item_ptr.* = item;
}

FunctionappendSlice[src]

pub fn appendSlice(self: *Self, allocator: Allocator, items: []const T) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
items: []const T

Source Code

Source code
pub fn appendSlice(self: *Self, allocator: Allocator, items: []const T) Allocator.Error!void {
    for (items) |item| {
        try self.append(allocator, item);
    }
}

Functionpop[src]

pub fn pop(self: *Self) ?T

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?T {
    if (self.len == 0) return null;

    const index = self.len - 1;
    const result = uncheckedAt(self, index).*;
    self.len = index;
    return result;
}

FunctionaddOne[src]

pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!*T

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!*T {
    const new_length = self.len + 1;
    try self.growCapacity(allocator, new_length);
    const result = uncheckedAt(self, self.len);
    self.len = new_length;
    return result;
}

FunctionshrinkRetainingCapacity[src]

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void

Reduce length to new_len. Invalidates pointers for the elements at index new_len and beyond.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
    assert(new_len <= self.len);
    self.len = new_len;
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Invalidates all element pointers.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.len = 0;
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self, allocator: Allocator) void

Invalidates all element pointers.

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn clearAndFree(self: *Self, allocator: Allocator) void {
    self.setCapacity(allocator, 0) catch unreachable;
    self.len = 0;
}

FunctionsetCapacity[src]

pub fn setCapacity(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void

Grows or shrinks capacity to match usage. TODO update this and related methods to match the conventions set by ArrayList

Parameters

self: *Self
allocator: Allocator
new_capacity: usize

Source Code

Source code
pub fn setCapacity(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void {
    if (prealloc_item_count != 0) {
        if (new_capacity <= @as(usize, 1) << (prealloc_exp + @as(ShelfIndex, @intCast(self.dynamic_segments.len)))) {
            return self.shrinkCapacity(allocator, new_capacity);
        }
    }
    return self.growCapacity(allocator, new_capacity);
}

FunctiongrowCapacity[src]

pub fn growCapacity(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void

Only grows capacity, or retains current capacity.

Parameters

self: *Self
allocator: Allocator
new_capacity: usize

Source Code

Source code
pub fn growCapacity(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void {
    const new_cap_shelf_count = shelfCount(new_capacity);
    const old_shelf_count = @as(ShelfIndex, @intCast(self.dynamic_segments.len));
    if (new_cap_shelf_count <= old_shelf_count) return;

    const new_dynamic_segments = try allocator.alloc([*]T, new_cap_shelf_count);
    errdefer allocator.free(new_dynamic_segments);

    var i: ShelfIndex = 0;
    while (i < old_shelf_count) : (i += 1) {
        new_dynamic_segments[i] = self.dynamic_segments[i];
    }
    errdefer while (i > old_shelf_count) {
        i -= 1;
        allocator.free(new_dynamic_segments[i][0..shelfSize(i)]);
    };
    while (i < new_cap_shelf_count) : (i += 1) {
        new_dynamic_segments[i] = (try allocator.alloc(T, shelfSize(i))).ptr;
    }

    allocator.free(self.dynamic_segments);
    self.dynamic_segments = new_dynamic_segments;
}

FunctionshrinkCapacity[src]

pub fn shrinkCapacity(self: *Self, allocator: Allocator, new_capacity: usize) void

Only shrinks capacity or retains current capacity. It may fail to reduce the capacity, in which case the capacity will remain unchanged.

Parameters

self: *Self
allocator: Allocator
new_capacity: usize

Source Code

Source code
pub fn shrinkCapacity(self: *Self, allocator: Allocator, new_capacity: usize) void {
    if (new_capacity <= prealloc_item_count) {
        const len = @as(ShelfIndex, @intCast(self.dynamic_segments.len));
        self.freeShelves(allocator, len, 0);
        allocator.free(self.dynamic_segments);
        self.dynamic_segments = &[_][*]T{};
        return;
    }

    const new_cap_shelf_count = shelfCount(new_capacity);
    const old_shelf_count = @as(ShelfIndex, @intCast(self.dynamic_segments.len));
    assert(new_cap_shelf_count <= old_shelf_count);
    if (new_cap_shelf_count == old_shelf_count) return;

    // freeShelves() must be called before resizing the dynamic
    // segments, but we don't know if resizing the dynamic segments
    // will work until we try it. So we must allocate a fresh memory
    // buffer in order to reduce capacity.
    const new_dynamic_segments = allocator.alloc([*]T, new_cap_shelf_count) catch return;
    self.freeShelves(allocator, old_shelf_count, new_cap_shelf_count);
    if (allocator.resize(self.dynamic_segments, new_cap_shelf_count)) {
        // We didn't need the new memory allocation after all.
        self.dynamic_segments = self.dynamic_segments[0..new_cap_shelf_count];
        allocator.free(new_dynamic_segments);
    } else {
        // Good thing we allocated that new memory slice.
        @memcpy(new_dynamic_segments, self.dynamic_segments[0..new_cap_shelf_count]);
        allocator.free(self.dynamic_segments);
        self.dynamic_segments = new_dynamic_segments;
    }
}

Functionshrink[src]

pub fn shrink(self: *Self, new_len: usize) void

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrink(self: *Self, new_len: usize) void {
    assert(new_len <= self.len);
    // TODO take advantage of the new realloc semantics
    self.len = new_len;
}

FunctionwriteToSlice[src]

pub fn writeToSlice(self: *Self, dest: []T, start: usize) void

Parameters

self: *Self
dest: []T
start: usize

Source Code

Source code
pub fn writeToSlice(self: *Self, dest: []T, start: usize) void {
    const end = start + dest.len;
    assert(end <= self.len);

    var i = start;
    if (end <= prealloc_item_count) {
        const src = self.prealloc_segment[i..end];
        @memcpy(dest[i - start ..][0..src.len], src);
        return;
    } else if (i < prealloc_item_count) {
        const src = self.prealloc_segment[i..];
        @memcpy(dest[i - start ..][0..src.len], src);
        i = prealloc_item_count;
    }

    while (i < end) {
        const shelf_index = shelfIndex(i);
        const copy_start = boxIndex(i, shelf_index);
        const copy_end = @min(shelfSize(shelf_index), copy_start + end - i);
        const src = self.dynamic_segments[shelf_index][copy_start..copy_end];
        @memcpy(dest[i - start ..][0..src.len], src);
        i += (copy_end - copy_start);
    }
}

FunctionuncheckedAt[src]

pub fn uncheckedAt(self: anytype, index: usize) AtType(@TypeOf(self))

Parameters

index: usize

Source Code

Source code
pub fn uncheckedAt(self: anytype, index: usize) AtType(@TypeOf(self)) {
    if (index < prealloc_item_count) {
        return &self.prealloc_segment[index];
    }
    const shelf_index = shelfIndex(index);
    const box_index = boxIndex(index, shelf_index);
    return &self.dynamic_segments[shelf_index][box_index];
}

Functioniterator[src]

pub fn iterator(self: *Self, start_index: usize) Iterator

Parameters

self: *Self
start_index: usize

Source Code

Source code
pub fn iterator(self: *Self, start_index: usize) Iterator {
    var it = Iterator{
        .list = self,
        .index = undefined,
        .shelf_index = undefined,
        .box_index = undefined,
        .shelf_size = undefined,
    };
    it.set(start_index);
    return it;
}

FunctionconstIterator[src]

pub fn constIterator(self: *const Self, start_index: usize) ConstIterator

Parameters

self: *const Self
start_index: usize

Source Code

Source code
pub fn constIterator(self: *const Self, start_index: usize) ConstIterator {
    var it = ConstIterator{
        .list = self,
        .index = undefined,
        .shelf_index = undefined,
        .box_index = undefined,
        .shelf_size = undefined,
    };
    it.set(start_index);
    return it;
}

Source Code

Source code
pub fn SegmentedList(comptime T: type, comptime prealloc_item_count: usize) type {
    return struct {
        const Self = @This();
        const ShelfIndex = std.math.Log2Int(usize);

        const prealloc_exp: ShelfIndex = blk: {
            // We don't use the prealloc_exp constant when prealloc_item_count is 0,
            // but lazy-init may still be triggered by other code, so supply a value.
            if (prealloc_item_count == 0) {
                break :blk 0;
            } else {
                assert(std.math.isPowerOfTwo(prealloc_item_count));
                const value = std.math.log2_int(usize, prealloc_item_count);
                break :blk value;
            }
        };

        prealloc_segment: [prealloc_item_count]T = undefined,
        dynamic_segments: [][*]T = &[_][*]T{},
        len: usize = 0,

        pub const prealloc_count = prealloc_item_count;

        fn AtType(comptime SelfType: type) type {
            if (@typeInfo(SelfType).pointer.is_const) {
                return *const T;
            } else {
                return *T;
            }
        }

        pub fn deinit(self: *Self, allocator: Allocator) void {
            self.freeShelves(allocator, @as(ShelfIndex, @intCast(self.dynamic_segments.len)), 0);
            allocator.free(self.dynamic_segments);
            self.* = undefined;
        }

        pub fn at(self: anytype, i: usize) AtType(@TypeOf(self)) {
            assert(i < self.len);
            return self.uncheckedAt(i);
        }

        pub fn count(self: Self) usize {
            return self.len;
        }

        pub fn append(self: *Self, allocator: Allocator, item: T) Allocator.Error!void {
            const new_item_ptr = try self.addOne(allocator);
            new_item_ptr.* = item;
        }

        pub fn appendSlice(self: *Self, allocator: Allocator, items: []const T) Allocator.Error!void {
            for (items) |item| {
                try self.append(allocator, item);
            }
        }

        pub fn pop(self: *Self) ?T {
            if (self.len == 0) return null;

            const index = self.len - 1;
            const result = uncheckedAt(self, index).*;
            self.len = index;
            return result;
        }

        pub fn addOne(self: *Self, allocator: Allocator) Allocator.Error!*T {
            const new_length = self.len + 1;
            try self.growCapacity(allocator, new_length);
            const result = uncheckedAt(self, self.len);
            self.len = new_length;
            return result;
        }

        /// Reduce length to `new_len`.
        /// Invalidates pointers for the elements at index new_len and beyond.
        pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
            assert(new_len <= self.len);
            self.len = new_len;
        }

        /// Invalidates all element pointers.
        pub fn clearRetainingCapacity(self: *Self) void {
            self.len = 0;
        }

        /// Invalidates all element pointers.
        pub fn clearAndFree(self: *Self, allocator: Allocator) void {
            self.setCapacity(allocator, 0) catch unreachable;
            self.len = 0;
        }

        /// Grows or shrinks capacity to match usage.
        /// TODO update this and related methods to match the conventions set by ArrayList
        pub fn setCapacity(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void {
            if (prealloc_item_count != 0) {
                if (new_capacity <= @as(usize, 1) << (prealloc_exp + @as(ShelfIndex, @intCast(self.dynamic_segments.len)))) {
                    return self.shrinkCapacity(allocator, new_capacity);
                }
            }
            return self.growCapacity(allocator, new_capacity);
        }

        /// Only grows capacity, or retains current capacity.
        pub fn growCapacity(self: *Self, allocator: Allocator, new_capacity: usize) Allocator.Error!void {
            const new_cap_shelf_count = shelfCount(new_capacity);
            const old_shelf_count = @as(ShelfIndex, @intCast(self.dynamic_segments.len));
            if (new_cap_shelf_count <= old_shelf_count) return;

            const new_dynamic_segments = try allocator.alloc([*]T, new_cap_shelf_count);
            errdefer allocator.free(new_dynamic_segments);

            var i: ShelfIndex = 0;
            while (i < old_shelf_count) : (i += 1) {
                new_dynamic_segments[i] = self.dynamic_segments[i];
            }
            errdefer while (i > old_shelf_count) {
                i -= 1;
                allocator.free(new_dynamic_segments[i][0..shelfSize(i)]);
            };
            while (i < new_cap_shelf_count) : (i += 1) {
                new_dynamic_segments[i] = (try allocator.alloc(T, shelfSize(i))).ptr;
            }

            allocator.free(self.dynamic_segments);
            self.dynamic_segments = new_dynamic_segments;
        }

        /// Only shrinks capacity or retains current capacity.
        /// It may fail to reduce the capacity, in which case the capacity will remain unchanged.
        pub fn shrinkCapacity(self: *Self, allocator: Allocator, new_capacity: usize) void {
            if (new_capacity <= prealloc_item_count) {
                const len = @as(ShelfIndex, @intCast(self.dynamic_segments.len));
                self.freeShelves(allocator, len, 0);
                allocator.free(self.dynamic_segments);
                self.dynamic_segments = &[_][*]T{};
                return;
            }

            const new_cap_shelf_count = shelfCount(new_capacity);
            const old_shelf_count = @as(ShelfIndex, @intCast(self.dynamic_segments.len));
            assert(new_cap_shelf_count <= old_shelf_count);
            if (new_cap_shelf_count == old_shelf_count) return;

            // freeShelves() must be called before resizing the dynamic
            // segments, but we don't know if resizing the dynamic segments
            // will work until we try it. So we must allocate a fresh memory
            // buffer in order to reduce capacity.
            const new_dynamic_segments = allocator.alloc([*]T, new_cap_shelf_count) catch return;
            self.freeShelves(allocator, old_shelf_count, new_cap_shelf_count);
            if (allocator.resize(self.dynamic_segments, new_cap_shelf_count)) {
                // We didn't need the new memory allocation after all.
                self.dynamic_segments = self.dynamic_segments[0..new_cap_shelf_count];
                allocator.free(new_dynamic_segments);
            } else {
                // Good thing we allocated that new memory slice.
                @memcpy(new_dynamic_segments, self.dynamic_segments[0..new_cap_shelf_count]);
                allocator.free(self.dynamic_segments);
                self.dynamic_segments = new_dynamic_segments;
            }
        }

        pub fn shrink(self: *Self, new_len: usize) void {
            assert(new_len <= self.len);
            // TODO take advantage of the new realloc semantics
            self.len = new_len;
        }

        pub fn writeToSlice(self: *Self, dest: []T, start: usize) void {
            const end = start + dest.len;
            assert(end <= self.len);

            var i = start;
            if (end <= prealloc_item_count) {
                const src = self.prealloc_segment[i..end];
                @memcpy(dest[i - start ..][0..src.len], src);
                return;
            } else if (i < prealloc_item_count) {
                const src = self.prealloc_segment[i..];
                @memcpy(dest[i - start ..][0..src.len], src);
                i = prealloc_item_count;
            }

            while (i < end) {
                const shelf_index = shelfIndex(i);
                const copy_start = boxIndex(i, shelf_index);
                const copy_end = @min(shelfSize(shelf_index), copy_start + end - i);
                const src = self.dynamic_segments[shelf_index][copy_start..copy_end];
                @memcpy(dest[i - start ..][0..src.len], src);
                i += (copy_end - copy_start);
            }
        }

        pub fn uncheckedAt(self: anytype, index: usize) AtType(@TypeOf(self)) {
            if (index < prealloc_item_count) {
                return &self.prealloc_segment[index];
            }
            const shelf_index = shelfIndex(index);
            const box_index = boxIndex(index, shelf_index);
            return &self.dynamic_segments[shelf_index][box_index];
        }

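        // Shelf geometry: with prealloc_item_count == 0, shelf `s` holds
        // 1 << s items (1, 2, 4, ...); with a power-of-two prealloc count
        // `p`, shelf `s` holds p << (s + 1) items, continuing the doubling
        // pattern once the preallocated segment is exhausted.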
        fn shelfCount(box_count: usize) ShelfIndex {
            if (prealloc_item_count == 0) {
                return log2_int_ceil(usize, box_count + 1);
            }
            return log2_int_ceil(usize, box_count + prealloc_item_count) - prealloc_exp - 1;
        }

        fn shelfSize(shelf_index: ShelfIndex) usize {
            if (prealloc_item_count == 0) {
                return @as(usize, 1) << shelf_index;
            }
            return @as(usize, 1) << (shelf_index + (prealloc_exp + 1));
        }

        fn shelfIndex(list_index: usize) ShelfIndex {
            if (prealloc_item_count == 0) {
                return std.math.log2_int(usize, list_index + 1);
            }
            return std.math.log2_int(usize, list_index + prealloc_item_count) - prealloc_exp - 1;
        }

        fn boxIndex(list_index: usize, shelf_index: ShelfIndex) usize {
            if (prealloc_item_count == 0) {
                return (list_index + 1) - (@as(usize, 1) << shelf_index);
            }
            return list_index + prealloc_item_count - (@as(usize, 1) << ((prealloc_exp + 1) + shelf_index));
        }

        fn freeShelves(self: *Self, allocator: Allocator, from_count: ShelfIndex, to_count: ShelfIndex) void {
            var i = from_count;
            while (i != to_count) {
                i -= 1;
                allocator.free(self.dynamic_segments[i][0..shelfSize(i)]);
            }
        }

        pub const Iterator = BaseIterator(*Self, *T);
        pub const ConstIterator = BaseIterator(*const Self, *const T);
        fn BaseIterator(comptime SelfType: type, comptime ElementPtr: type) type {
            return struct {
                list: SelfType,
                index: usize,
                box_index: usize,
                shelf_index: ShelfIndex,
                shelf_size: usize,

                pub fn next(it: *@This()) ?ElementPtr {
                    if (it.index >= it.list.len) return null;
                    if (it.index < prealloc_item_count) {
                        const ptr = &it.list.prealloc_segment[it.index];
                        it.index += 1;
                        if (it.index == prealloc_item_count) {
                            it.box_index = 0;
                            it.shelf_index = 0;
                            it.shelf_size = prealloc_item_count * 2;
                        }
                        return ptr;
                    }

                    const ptr = &it.list.dynamic_segments[it.shelf_index][it.box_index];
                    it.index += 1;
                    it.box_index += 1;
                    if (it.box_index == it.shelf_size) {
                        it.shelf_index += 1;
                        it.box_index = 0;
                        it.shelf_size *= 2;
                    }
                    return ptr;
                }

                pub fn prev(it: *@This()) ?ElementPtr {
                    if (it.index == 0) return null;

                    it.index -= 1;
                    if (it.index < prealloc_item_count) return &it.list.prealloc_segment[it.index];

                    if (it.box_index == 0) {
                        it.shelf_index -= 1;
                        it.shelf_size /= 2;
                        it.box_index = it.shelf_size - 1;
                    } else {
                        it.box_index -= 1;
                    }

                    return &it.list.dynamic_segments[it.shelf_index][it.box_index];
                }

                pub fn peek(it: *@This()) ?ElementPtr {
                    if (it.index >= it.list.len)
                        return null;
                    if (it.index < prealloc_item_count)
                        return &it.list.prealloc_segment[it.index];

                    return &it.list.dynamic_segments[it.shelf_index][it.box_index];
                }

                pub fn set(it: *@This(), index: usize) void {
                    it.index = index;
                    if (index < prealloc_item_count) return;
                    it.shelf_index = shelfIndex(index);
                    it.box_index = boxIndex(index, it.shelf_index);
                    it.shelf_size = shelfSize(it.shelf_index);
                }
            };
        }

        pub fn iterator(self: *Self, start_index: usize) Iterator {
            var it = Iterator{
                .list = self,
                .index = undefined,
                .shelf_index = undefined,
                .box_index = undefined,
                .shelf_size = undefined,
            };
            it.set(start_index);
            return it;
        }

        pub fn constIterator(self: *const Self, start_index: usize) ConstIterator {
            var it = ConstIterator{
                .list = self,
                .index = undefined,
                .shelf_index = undefined,
                .box_index = undefined,
                .shelf_size = undefined,
            };
            it.set(start_index);
            return it;
        }
    };
}

Type FunctionSinglyLinkedList[src]

A singly-linked list is headed by a single forward pointer. The elements are singly-linked for minimum space and pointer manipulation overhead at the expense of O(n) removal for arbitrary elements. New elements can be added to the list after an existing element or at the head of the list. A singly-linked list may only be traversed in the forward direction. Singly-linked lists are ideal for applications with large datasets and few or no removals, or for implementing a LIFO queue.

Parameters

T: type
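
Example Usage

A minimal sketch (not part of the upstream documentation); node memory is provided by the caller, here as stack variables:

const std = @import("std");

test "SinglyLinkedList basic usage" {
    const L = std.SinglyLinkedList(u32);
    var list = L{};

    var two = L.Node{ .data = 2 };
    var one = L.Node{ .data = 1 };

    list.prepend(&two); // list: 2
    list.prepend(&one); // list: 1 -> 2

    try std.testing.expectEqual(@as(usize, 2), list.len());
    try std.testing.expectEqual(@as(u32, 1), list.popFirst().?.data);
    try std.testing.expectEqual(@as(usize, 1), list.len());
}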

Fields

first: ?*Node = null

Functions

Functionprepend[src]

pub fn prepend(list: *Self, new_node: *Node) void

Insert a new node at the head.

Arguments: new_node: Pointer to the new node to insert.

Parameters

list: *Self
new_node: *Node

Source Code

Source code
pub fn prepend(list: *Self, new_node: *Node) void {
    new_node.next = list.first;
    list.first = new_node;
}

Functionremove[src]

pub fn remove(list: *Self, node: *Node) void

Remove a node from the list.

Arguments: node: Pointer to the node to be removed.

Parameters

list: *Self
node: *Node

Source Code

Source code
pub fn remove(list: *Self, node: *Node) void {
    if (list.first == node) {
        list.first = node.next;
    } else {
        var current_elm = list.first.?;
        while (current_elm.next != node) {
            current_elm = current_elm.next.?;
        }
        current_elm.next = node.next;
    }
}

FunctionpopFirst[src]

pub fn popFirst(list: *Self) ?*Node

Remove and return the first node in the list.

Returns: A pointer to the first node in the list.

Parameters

list: *Self

Source Code

Source code
pub fn popFirst(list: *Self) ?*Node {
    const first = list.first orelse return null;
    list.first = first.next;
    return first;
}

Functionlen[src]

pub fn len(list: Self) usize

Iterate over all nodes, returning the count. This operation is O(N).

Parameters

list: Self

Source Code

Source code
pub fn len(list: Self) usize {
    if (list.first) |n| {
        return 1 + n.countChildren();
    } else {
        return 0;
    }
}

Source Code

Source code
pub fn SinglyLinkedList(comptime T: type) type {
    return struct {
        const Self = @This();

        /// Node inside the linked list wrapping the actual data.
        pub const Node = struct {
            next: ?*Node = null,
            data: T,

            pub const Data = T;

            /// Insert a new node after the current one.
            ///
            /// Arguments:
            ///     new_node: Pointer to the new node to insert.
            pub fn insertAfter(node: *Node, new_node: *Node) void {
                new_node.next = node.next;
                node.next = new_node;
            }

            /// Remove a node from the list.
            ///
            /// Arguments:
            ///     node: Pointer to the node to be removed.
            /// Returns:
            ///     node removed
            pub fn removeNext(node: *Node) ?*Node {
                const next_node = node.next orelse return null;
                node.next = next_node.next;
                return next_node;
            }

            /// Iterate over the singly-linked list from this node, until the final node is found.
            /// This operation is O(N).
            pub fn findLast(node: *Node) *Node {
                var it = node;
                while (true) {
                    it = it.next orelse return it;
                }
            }

            /// Iterate over each next node, returning the count of all nodes except the starting one.
            /// This operation is O(N).
            pub fn countChildren(node: *const Node) usize {
                var count: usize = 0;
                var it: ?*const Node = node.next;
                while (it) |n| : (it = n.next) {
                    count += 1;
                }
                return count;
            }

            /// Reverse the list starting from this node in-place.
            /// This operation is O(N).
            pub fn reverse(indirect: *?*Node) void {
                if (indirect.* == null) {
                    return;
                }
                var current: *Node = indirect.*.?;
                while (current.next) |next| {
                    current.next = next.next;
                    next.next = indirect.*;
                    indirect.* = next;
                }
            }
        };

        first: ?*Node = null,

        /// Insert a new node at the head.
        ///
        /// Arguments:
        ///     new_node: Pointer to the new node to insert.
        pub fn prepend(list: *Self, new_node: *Node) void {
            new_node.next = list.first;
            list.first = new_node;
        }

        /// Remove a node from the list.
        ///
        /// Arguments:
        ///     node: Pointer to the node to be removed.
        pub fn remove(list: *Self, node: *Node) void {
            if (list.first == node) {
                list.first = node.next;
            } else {
                var current_elm = list.first.?;
                while (current_elm.next != node) {
                    current_elm = current_elm.next.?;
                }
                current_elm.next = node.next;
            }
        }

        /// Remove and return the first node in the list.
        ///
        /// Returns:
        ///     A pointer to the first node in the list.
        pub fn popFirst(list: *Self) ?*Node {
            const first = list.first orelse return null;
            list.first = first.next;
            return first;
        }

        /// Iterate over all nodes, returning the count.
        /// This operation is O(N).
        pub fn len(list: Self) usize {
            if (list.first) |n| {
                return 1 + n.countChildren();
            } else {
                return 0;
            }
        }
    };
}
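
As a usage illustration, here is a minimal sketch (not part of the original documentation) exercising prepend, popFirst, and len; it assumes std and testing are in scope as in the other examples on this page:

test "SinglyLinkedList basic usage" {
    const L = std.SinglyLinkedList(u32);
    var list: L = .{};

    // Nodes are caller-owned; the list only links them together.
    var one = L.Node{ .data = 1 };
    var two = L.Node{ .data = 2 };

    list.prepend(&one); // list: 1
    list.prepend(&two); // list: 2 -> 1
    try testing.expectEqual(@as(usize, 2), list.len());

    const popped = list.popFirst().?;
    try testing.expectEqual(@as(u32, 2), popped.data);
    try testing.expectEqual(@as(usize, 1), list.len());
}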

Type FunctionStaticBitSet[src]

Returns the optimal static bit set type for the specified number of elements: either IntegerBitSet or ArrayBitSet, both of which fulfill the same interface. The returned type will perform no allocations, can be copied by value, and does not require deinitialization.

Parameters

size: usize

Example Usage

test StaticBitSet {
    try testing.expectEqual(IntegerBitSet(0), StaticBitSet(0));
    try testing.expectEqual(IntegerBitSet(5), StaticBitSet(5));
    try testing.expectEqual(IntegerBitSet(@bitSizeOf(usize)), StaticBitSet(@bitSizeOf(usize)));
    try testing.expectEqual(ArrayBitSet(usize, @bitSizeOf(usize) + 1), StaticBitSet(@bitSizeOf(usize) + 1));
    try testing.expectEqual(ArrayBitSet(usize, 500), StaticBitSet(500));
}

Source Code

Source code
pub fn StaticBitSet(comptime size: usize) type {
    if (size <= @bitSizeOf(usize)) {
        return IntegerBitSet(size);
    } else {
        return ArrayBitSet(usize, size);
    }
}

Type FunctionStringHashMap[src]

Built-in hash map with strings as keys. Key memory is managed by the caller. Keys and values will not automatically be freed.

Parameters

V: type
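
Example Usage

A minimal sketch (not from the original documentation) showing insertion and lookup; note that the map stores the key slices as-is and never copies or frees them:

test StringHashMap {
    var map = std.StringHashMap(u32).init(testing.allocator);
    defer map.deinit();

    // The string literals here outlive the map, so no extra
    // key memory management is needed in this sketch.
    try map.put("hello", 1);
    try map.put("world", 2);

    try testing.expectEqual(@as(?u32, 1), map.get("hello"));
    try testing.expect(!map.contains("missing"));
}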

Types

TypeUnmanaged[src]

The type of the unmanaged hash map underlying this wrapper

Source Code

Source code
pub const Unmanaged = HashMapUnmanaged(K, V, Context, max_load_percentage)

Fields

unmanaged: Unmanaged
allocator: Allocator
ctx: Context

Values

ConstantEntry[src]

An entry, containing pointers to a key and value stored in the map

Source Code

Source code
pub const Entry = Unmanaged.Entry

ConstantKV[src]

A copy of a key and value which are no longer in the map

Source Code

Source code
pub const KV = Unmanaged.KV

ConstantHash[src]

The integer type that is the result of hashing

Source Code

Source code
pub const Hash = Unmanaged.Hash

ConstantIterator[src]

The iterator type returned by iterator()

Source Code

Source code
pub const Iterator = Unmanaged.Iterator

ConstantKeyIterator[src]

Source Code

Source code
pub const KeyIterator = Unmanaged.KeyIterator

ConstantValueIterator[src]

Source Code

Source code
pub const ValueIterator = Unmanaged.ValueIterator

ConstantSize[src]

The integer type used to store the size of the map

Source Code

Source code
pub const Size = Unmanaged.Size

ConstantGetOrPutResult[src]

The type returned from getOrPut and variants

Source Code

Source code
pub const GetOrPutResult = Unmanaged.GetOrPutResult

Functions

Functioninit[src]

pub fn init(allocator: Allocator) Self

Create a managed hash map with an empty context. If the context is not zero-sized, you must use initContext(allocator, ctx) instead.

Parameters

allocator: Allocator

Source Code

Source code
pub fn init(allocator: Allocator) Self {
    if (@sizeOf(Context) != 0) {
        @compileError("Context must be specified! Call initContext(allocator, ctx) instead.");
    }
    return .{
        .unmanaged = .empty,
        .allocator = allocator,
        .ctx = undefined, // ctx is zero-sized so this is safe.
    };
}

FunctioninitContext[src]

pub fn initContext(allocator: Allocator, ctx: Context) Self

Create a managed hash map with a context

Parameters

allocator: Allocator
ctx: Context

Source Code

Source code
pub fn initContext(allocator: Allocator, ctx: Context) Self {
    return .{
        .unmanaged = .empty,
        .allocator = allocator,
        .ctx = ctx,
    };
}

FunctionlockPointers[src]

pub fn lockPointers(self: *Self) void

Puts the hash map into a state where any method call that would cause an existing key or value pointer to become invalidated will instead trigger an assertion.

An additional call to lockPointers in such state also triggers an assertion.

unlockPointers returns the hash map to the previous state.

Parameters

self: *Self

Source Code

Source code
pub fn lockPointers(self: *Self) void {
    self.unmanaged.lockPointers();
}

FunctionunlockPointers[src]

pub fn unlockPointers(self: *Self) void

Undoes a call to lockPointers.

Parameters

self: *Self

Source Code

Source code
pub fn unlockPointers(self: *Self) void {
    self.unmanaged.unlockPointers();
}
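
As an illustrative sketch (not from the original documentation; allocator is assumed to be an Allocator in scope), the lock can guard a region of code that holds entry pointers:

var map = std.StringHashMap(u32).init(allocator);
defer map.deinit();
try map.put("count", 0);

map.lockPointers();
defer map.unlockPointers();
// While locked, any call that could invalidate this pointer
// (for example, a put that triggers a grow) trips an assertion
// instead of silently invalidating it.
const value_ptr = map.getPtr("count").?;
value_ptr.* += 1;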

Functiondeinit[src]

pub fn deinit(self: *Self) void

Release the backing array and invalidate this map. This does not deinit keys, values, or the context! If your keys or values need to be released, ensure that is done before calling this function.

Parameters

self: *Self

Source Code

Source code
pub fn deinit(self: *Self) void {
    self.unmanaged.deinit(self.allocator);
    self.* = undefined;
}
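
For example, a hypothetical sketch in which the values are slices owned by the caller and allocated with the same allocator:

var map = std.StringHashMap([]u8).init(allocator);
// ... populate with values allocated via `allocator` ...

// Free each value before releasing the map itself.
var values = map.valueIterator();
while (values.next()) |value_ptr| {
    allocator.free(value_ptr.*);
}
map.deinit();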

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Empty the map, but keep the backing allocation for future use. This does not free keys or values! If they need deinitialization, be sure to release them before calling this function.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    return self.unmanaged.clearRetainingCapacity();
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self) void

Empty the map and release the backing allocation. This does not free keys or values! If they need deinitialization, be sure to release them before calling this function.

Parameters

self: *Self

Source Code

Source code
pub fn clearAndFree(self: *Self) void {
    return self.unmanaged.clearAndFree(self.allocator);
}

Functioncount[src]

pub fn count(self: Self) Size

Return the number of items in the map.

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) Size {
    return self.unmanaged.count();
}

Functioniterator[src]

pub fn iterator(self: *const Self) Iterator

Create an iterator over the entries in the map. The iterator is invalidated if the map is modified.

Parameters

self: *const Self

Source Code

Source code
pub fn iterator(self: *const Self) Iterator {
    return self.unmanaged.iterator();
}

FunctionkeyIterator[src]

pub fn keyIterator(self: Self) KeyIterator

Create an iterator over the keys in the map. The iterator is invalidated if the map is modified.

Parameters

self: Self

Source Code

Source code
pub fn keyIterator(self: Self) KeyIterator {
    return self.unmanaged.keyIterator();
}

FunctionvalueIterator[src]

pub fn valueIterator(self: Self) ValueIterator

Create an iterator over the values in the map. The iterator is invalidated if the map is modified.

Parameters

self: Self

Source Code

Source code
pub fn valueIterator(self: Self) ValueIterator {
    return self.unmanaged.valueIterator();
}

FunctiongetOrPut[src]

pub fn getOrPut(self: *Self, key: K) Allocator.Error!GetOrPutResult

If key exists, this function cannot fail. If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointers point to it. The caller should then initialize the value (but not the key).

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPut(self: *Self, key: K) Allocator.Error!GetOrPutResult {
    return self.unmanaged.getOrPutContext(self.allocator, key, self.ctx);
}
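
A common pattern built on getOrPut is counting occurrences; the sketch below (not from the original documentation) initializes the value only when the key is newly inserted:

var counts = std.StringHashMap(usize).init(testing.allocator);
defer counts.deinit();

for ([_][]const u8{ "a", "b", "a" }) |word| {
    const gop = try counts.getOrPut(word);
    // Newly inserted entries have an undefined value; set it first.
    if (!gop.found_existing) gop.value_ptr.* = 0;
    gop.value_ptr.* += 1;
}
try testing.expectEqual(@as(usize, 2), counts.get("a").?);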

FunctiongetOrPutAdapted[src]

pub fn getOrPutAdapted(self: *Self, key: anytype, ctx: anytype) Allocator.Error!GetOrPutResult

If key exists, this function cannot fail. If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined key and value, and the Entry pointers point to it. The caller must then initialize the key and value.

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAdapted(self: *Self, key: anytype, ctx: anytype) Allocator.Error!GetOrPutResult {
    return self.unmanaged.getOrPutContextAdapted(self.allocator, key, ctx, self.ctx);
}

FunctiongetOrPutAssumeCapacity[src]

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult

If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointers point to it. Caller should then initialize the value (but not the key). If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
    return self.unmanaged.getOrPutAssumeCapacityContext(key, self.ctx);
}

FunctiongetOrPutAssumeCapacityAdapted[src]

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult

If there is an existing item with key, then the result's Entry pointers point to it, and found_existing is true. Otherwise, puts a new item with undefined value, and the Entry pointers point to it. Caller must then initialize the key and value. If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {
    return self.unmanaged.getOrPutAssumeCapacityAdapted(key, ctx);
}

FunctiongetOrPutValue[src]

pub fn getOrPutValue(self: *Self, key: K, value: V) Allocator.Error!Entry

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn getOrPutValue(self: *Self, key: K, value: V) Allocator.Error!Entry {
    return self.unmanaged.getOrPutValueContext(self.allocator, key, value, self.ctx);
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, expected_count: Size) Allocator.Error!void

Increases capacity, guaranteeing that insertions up until the expected_count will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
expected_count: Size

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, expected_count: Size) Allocator.Error!void {
    return self.unmanaged.ensureTotalCapacityContext(self.allocator, expected_count, self.ctx);
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, additional_count: Size) Allocator.Error!void

Increases capacity, guaranteeing that insertions up until additional_count more items will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
additional_count: Size

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, additional_count: Size) Allocator.Error!void {
    return self.unmanaged.ensureUnusedCapacityContext(self.allocator, additional_count, self.ctx);
}
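
Together these enable a reserve-then-insert pattern in which the inserts themselves cannot fail; a sketch (not from the original documentation, using putAssumeCapacity as documented below):

var map = std.StringHashMap(u32).init(testing.allocator);
defer map.deinit();

try map.ensureTotalCapacity(2);
// Guaranteed not to allocate, and therefore cannot fail:
map.putAssumeCapacity("x", 1);
map.putAssumeCapacity("y", 2);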

Functioncapacity[src]

pub fn capacity(self: Self) Size

Returns the total number of elements that can be present before insertions are no longer guaranteed to avoid allocation.

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) Size {
    return self.unmanaged.capacity();
}

Functionput[src]

pub fn put(self: *Self, key: K, value: V) Allocator.Error!void

Clobbers any existing data. To detect if a put would clobber existing data, see getOrPut.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn put(self: *Self, key: K, value: V) Allocator.Error!void {
    return self.unmanaged.putContext(self.allocator, key, value, self.ctx);
}

FunctionputNoClobber[src]

pub fn putNoClobber(self: *Self, key: K, value: V) Allocator.Error!void

Inserts a key-value pair into the hash map, asserting that no previous entry with the same key is already present

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putNoClobber(self: *Self, key: K, value: V) Allocator.Error!void {
    return self.unmanaged.putNoClobberContext(self.allocator, key, value, self.ctx);
}

FunctionputAssumeCapacity[src]

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Clobbers any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
    return self.unmanaged.putAssumeCapacityContext(key, value, self.ctx);
}

FunctionputAssumeCapacityNoClobber[src]

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Asserts that it does not clobber any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
    return self.unmanaged.putAssumeCapacityNoClobberContext(key, value, self.ctx);
}

FunctionfetchPut[src]

pub fn fetchPut(self: *Self, key: K, value: V) Allocator.Error!?KV

Inserts a new Entry into the hash map, returning the previous one, if any.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPut(self: *Self, key: K, value: V) Allocator.Error!?KV {
    return self.unmanaged.fetchPutContext(self.allocator, key, value, self.ctx);
}
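
A brief sketch (not from the original documentation) of how the returned KV behaves:

var map = std.StringHashMap(u32).init(testing.allocator);
defer map.deinit();

try testing.expect((try map.fetchPut("k", 1)) == null); // no previous entry
const prev = (try map.fetchPut("k", 2)).?; // replaced: the old pair is returned
try testing.expectEqual(@as(u32, 1), prev.value);
try testing.expectEqual(@as(?u32, 2), map.get("k"));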

FunctionfetchPutAssumeCapacity[src]

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV

Inserts a new Entry into the hash map, returning the previous one, if any. If insertion happens, asserts there is enough capacity without allocating.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
    return self.unmanaged.fetchPutAssumeCapacityContext(key, value, self.ctx);
}

FunctionfetchRemove[src]

pub fn fetchRemove(self: *Self, key: K) ?KV

Removes a value from the map and returns the removed key-value pair, if any.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchRemove(self: *Self, key: K) ?KV {
    return self.unmanaged.fetchRemoveContext(key, self.ctx);
}

FunctionfetchRemoveAdapted[src]

pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    return self.unmanaged.fetchRemoveAdapted(key, ctx);
}

Functionget[src]

pub fn get(self: Self, key: K) ?V

Finds the value associated with a key in the map

Parameters

self: Self
key: K

Source Code

Source code
pub fn get(self: Self, key: K) ?V {
    return self.unmanaged.getContext(key, self.ctx);
}

FunctiongetAdapted[src]

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V

Parameters

self: Self

Source Code

Source code
pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
    return self.unmanaged.getAdapted(key, ctx);
}

FunctiongetPtr[src]

pub fn getPtr(self: Self, key: K) ?*V

Parameters

self: Self
key: K

Source Code

Source code
pub fn getPtr(self: Self, key: K) ?*V {
    return self.unmanaged.getPtrContext(key, self.ctx);
}

FunctiongetPtrAdapted[src]

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V

Parameters

self: Self

Source Code

Source code
pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
    return self.unmanaged.getPtrAdapted(key, ctx);
}

FunctiongetKey[src]

pub fn getKey(self: Self, key: K) ?K

Finds the actual key associated with an adapted key in the map

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKey(self: Self, key: K) ?K {
    return self.unmanaged.getKeyContext(key, self.ctx);
}

FunctiongetKeyAdapted[src]

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K

Parameters

self: Self

Source Code

Source code
pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
    return self.unmanaged.getKeyAdapted(key, ctx);
}

FunctiongetKeyPtr[src]

pub fn getKeyPtr(self: Self, key: K) ?*K

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKeyPtr(self: Self, key: K) ?*K {
    return self.unmanaged.getKeyPtrContext(key, self.ctx);
}

FunctiongetKeyPtrAdapted[src]

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K

Parameters

self: Self

Source Code

Source code
pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
    return self.unmanaged.getKeyPtrAdapted(key, ctx);
}

FunctiongetEntry[src]

pub fn getEntry(self: Self, key: K) ?Entry

Finds the key and value associated with a key in the map

Parameters

self: Self
key: K

Source Code

Source code
pub fn getEntry(self: Self, key: K) ?Entry {
    return self.unmanaged.getEntryContext(key, self.ctx);
}

FunctiongetEntryAdapted[src]

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry

Parameters

self: Self

Source Code

Source code
pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
    return self.unmanaged.getEntryAdapted(key, ctx);
}

Functioncontains[src]

pub fn contains(self: Self, key: K) bool

Check if the map contains a key

Parameters

self: Self
key: K

Source Code

Source code
pub fn contains(self: Self, key: K) bool {
    return self.unmanaged.containsContext(key, self.ctx);
}

FunctioncontainsAdapted[src]

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool

Parameters

self: Self

Source Code

Source code
pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
    return self.unmanaged.containsAdapted(key, ctx);
}

Functionremove[src]

pub fn remove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map, and this function returns true. Otherwise this function returns false.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key: K

Source Code

Source code
pub fn remove(self: *Self, key: K) bool {
    return self.unmanaged.removeContext(key, self.ctx);
}

FunctionremoveAdapted[src]

pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self

Source Code

Source code
pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    return self.unmanaged.removeAdapted(key, ctx);
}

FunctionremoveByPtr[src]

pub fn removeByPtr(self: *Self, key_ptr: *K) void

Delete the entry with key pointed to by key_ptr from the hash map. key_ptr is assumed to be a valid pointer to a key that is present in the hash map.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key_ptr: *K

Source Code

Source code
pub fn removeByPtr(self: *Self, key_ptr: *K) void {
    self.unmanaged.removeByPtr(key_ptr);
}
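
One hedged sketch (not from the original documentation) of where such a pointer comes from is getKeyPtr:

var map = std.StringHashMap(u32).init(testing.allocator);
defer map.deinit();
try map.put("k", 1);

// getKeyPtr yields a pointer into the map's backing store,
// which is exactly what removeByPtr expects.
if (map.getKeyPtr("k")) |key_ptr| {
    map.removeByPtr(key_ptr);
}
try testing.expect(!map.contains("k"));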

Functionclone[src]

pub fn clone(self: Self) Allocator.Error!Self

Creates a copy of this map, using the same allocator

Parameters

self: Self

Source Code

Source code
pub fn clone(self: Self) Allocator.Error!Self {
    var other = try self.unmanaged.cloneContext(self.allocator, self.ctx);
    return other.promoteContext(self.allocator, self.ctx);
}

FunctioncloneWithAllocator[src]

pub fn cloneWithAllocator(self: Self, new_allocator: Allocator) Allocator.Error!Self

Creates a copy of this map, using a specified allocator

Parameters

self: Self
new_allocator: Allocator

Source Code

Source code
pub fn cloneWithAllocator(self: Self, new_allocator: Allocator) Allocator.Error!Self {
    var other = try self.unmanaged.cloneContext(new_allocator, self.ctx);
    return other.promoteContext(new_allocator, self.ctx);
}

FunctioncloneWithContext[src]

pub fn cloneWithContext(self: Self, new_ctx: anytype) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage)

Creates a copy of this map, using a specified context

Parameters

self: Self

Source Code

Source code
pub fn cloneWithContext(self: Self, new_ctx: anytype) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage) {
    var other = try self.unmanaged.cloneContext(self.allocator, new_ctx);
    return other.promoteContext(self.allocator, new_ctx);
}

FunctioncloneWithAllocatorAndContext[src]

pub fn cloneWithAllocatorAndContext( self: Self, new_allocator: Allocator, new_ctx: anytype, ) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage)

Creates a copy of this map, using a specified allocator and context.

Parameters

self: Self
new_allocator: Allocator

Source Code

Source code
pub fn cloneWithAllocatorAndContext(
    self: Self,
    new_allocator: Allocator,
    new_ctx: anytype,
) Allocator.Error!HashMap(K, V, @TypeOf(new_ctx), max_load_percentage) {
    var other = try self.unmanaged.cloneContext(new_allocator, new_ctx);
    return other.promoteContext(new_allocator, new_ctx);
}

Functionmove[src]

pub fn move(self: *Self) Self

Set the map to an empty state, making deinitialization a no-op, and returning a copy of the original.

Parameters

self: *Self

Source Code

Source code
pub fn move(self: *Self) Self {
    self.unmanaged.pointer_stability.assertUnlocked();
    const result = self.*;
    self.unmanaged = .empty;
    return result;
}

Functionrehash[src]

pub fn rehash(self: *Self) void

Rehash the map, in-place.

Over time, with the current tombstone-based implementation, a HashMap can become fragmented as tombstone entries accumulate, and performance degrades due to excessive probing. The kind of pattern that can cause this is a long-lived HashMap with repeated inserts and deletes.

After this function is called, there are no tombstones in the HashMap: every entry has been rehashed, and any existing key/value pointers into the HashMap are invalidated.

Parameters

self: *Self

Source Code

Source code
pub fn rehash(self: *Self) void {
    self.unmanaged.rehash(self.ctx);
}
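
A sketch of the pattern that benefits (hypothetical numbers, not from the original documentation, using an AutoHashMap over integers):

var map = std.AutoHashMap(u32, u32).init(testing.allocator);
defer map.deinit();

var i: u32 = 0;
while (i < 1000) : (i += 1) {
    try map.put(i, i);
    _ = map.remove(i); // each removal leaves a tombstone behind
}

// The map is empty but littered with tombstones; rehash clears them
// so future lookups do not probe across stale slots.
map.rehash();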

Source Code

Source code
pub fn StringHashMap(comptime V: type) type {
    return HashMap([]const u8, V, StringContext, default_max_load_percentage);
}

Type FunctionStringHashMapUnmanaged[src]

Key memory is managed by the caller. Keys and values will not automatically be freed.

Parameters

V: type

Types

TypeSize[src]

Source Code

Source code
pub const Size = u32

TypeHash[src]

Source Code

Source code
pub const Hash = u64

TypeKeyIterator[src]

Source Code

Source code
pub const KeyIterator = FieldIterator(K)

TypeValueIterator[src]

Source Code

Source code
pub const ValueIterator = FieldIterator(V)

TypeManaged[src]

Source Code

Source code
pub const Managed = HashMap(K, V, Context, max_load_percentage)

Fields

metadata: ?[*]Metadata = null

Pointer to the metadata.

size: Size = 0

Current number of elements in the hashmap.

available: Size = 0

Number of available slots before a grow is needed to satisfy the max_load_percentage.

pointer_stability: std.debug.SafetyLock = .{}

Used to detect memory safety violations.

Values

Constantempty[src]

A map containing no keys or values.

Source Code

Source code
pub const empty: Self = .{
    .metadata = null,
    .size = 0,
    .available = 0,
}

Functions

Functionpromote[src]

pub fn promote(self: Self, allocator: Allocator) Managed

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn promote(self: Self, allocator: Allocator) Managed {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call promoteContext instead.");
    return promoteContext(self, allocator, undefined);
}

FunctionpromoteContext[src]

pub fn promoteContext(self: Self, allocator: Allocator, ctx: Context) Managed

Parameters

self: Self
allocator: Allocator
ctx: Context

Source Code

Source code
pub fn promoteContext(self: Self, allocator: Allocator, ctx: Context) Managed {
    return .{
        .unmanaged = self,
        .allocator = allocator,
        .ctx = ctx,
    };
}
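
A minimal sketch (not from the original documentation): an unmanaged map takes its allocator per call, and promote wraps the same state in a managed interface:

var unmanaged: std.StringHashMapUnmanaged(u32) = .empty;
try unmanaged.put(testing.allocator, "k", 1);

// The managed wrapper aliases the same storage; use only one of the
// two handles from here on, and deinit exactly once.
var managed = unmanaged.promote(testing.allocator);
defer managed.deinit();
try testing.expectEqual(@as(?u32, 1), managed.get("k"));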

FunctionlockPointers[src]

pub fn lockPointers(self: *Self) void

Puts the hash map into a state where any method call that would cause an existing key or value pointer to become invalidated will instead trigger an assertion.

An additional call to lockPointers in such state also triggers an assertion.

unlockPointers returns the hash map to the previous state.

Parameters

self: *Self

Source Code

Source code
pub fn lockPointers(self: *Self) void {
    self.pointer_stability.lock();
}

FunctionunlockPointers[src]

pub fn unlockPointers(self: *Self) void

Undoes a call to lockPointers.

Parameters

self: *Self

Source Code

Source code
pub fn unlockPointers(self: *Self) void {
    self.pointer_stability.unlock();
}

Functiondeinit[src]

pub fn deinit(self: *Self, allocator: Allocator) void

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn deinit(self: *Self, allocator: Allocator) void {
    self.pointer_stability.assertUnlocked();
    self.deallocate(allocator);
    self.* = undefined;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, allocator: Allocator, new_size: Size) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
new_size: Size

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, allocator: Allocator, new_size: Size) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
    return ensureTotalCapacityContext(self, allocator, new_size, undefined);
}

FunctionensureTotalCapacityContext[src]

pub fn ensureTotalCapacityContext(self: *Self, allocator: Allocator, new_size: Size, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
new_size: Size
ctx: Context

Source Code

Source code
pub fn ensureTotalCapacityContext(self: *Self, allocator: Allocator, new_size: Size, ctx: Context) Allocator.Error!void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();
    if (new_size > self.size)
        try self.growIfNeeded(allocator, new_size - self.size, ctx);
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, allocator: Allocator, additional_size: Size) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
additional_size: Size

Source Code

Source code
pub fn ensureUnusedCapacity(self: *Self, allocator: Allocator, additional_size: Size) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureUnusedCapacityContext instead.");
    return ensureUnusedCapacityContext(self, allocator, additional_size, undefined);
}

FunctionensureUnusedCapacityContext[src]

pub fn ensureUnusedCapacityContext(self: *Self, allocator: Allocator, additional_size: Size, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
additional_size: Size
ctx: Context

Source Code

Source code
pub fn ensureUnusedCapacityContext(self: *Self, allocator: Allocator, additional_size: Size, ctx: Context) Allocator.Error!void {
    return ensureTotalCapacityContext(self, allocator, self.count() + additional_size, ctx);
}

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();
    if (self.metadata) |_| {
        self.initMetadatas();
        self.size = 0;
        self.available = @truncate((self.capacity() * max_load_percentage) / 100);
    }
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self, allocator: Allocator) void

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn clearAndFree(self: *Self, allocator: Allocator) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();
    self.deallocate(allocator);
    self.size = 0;
    self.available = 0;
}

Functioncount[src]

pub fn count(self: Self) Size

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) Size {
    return self.size;
}

Functioncapacity[src]

pub fn capacity(self: Self) Size

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) Size {
    if (self.metadata == null) return 0;

    return self.header().capacity;
}

Functioniterator[src]

pub fn iterator(self: *const Self) Iterator

Parameters

self: *const Self

Source Code

Source code
pub fn iterator(self: *const Self) Iterator {
    return .{ .hm = self };
}

FunctionkeyIterator[src]

pub fn keyIterator(self: Self) KeyIterator

Parameters

self: Self

Source Code

Source code
pub fn keyIterator(self: Self) KeyIterator {
    if (self.metadata) |metadata| {
        return .{
            .len = self.capacity(),
            .metadata = metadata,
            .items = self.keys(),
        };
    } else {
        return .{
            .len = 0,
            .metadata = undefined,
            .items = undefined,
        };
    }
}

FunctionvalueIterator[src]

pub fn valueIterator(self: Self) ValueIterator

Parameters

self: Self

Source Code

Source code
pub fn valueIterator(self: Self) ValueIterator {
    if (self.metadata) |metadata| {
        return .{
            .len = self.capacity(),
            .metadata = metadata,
            .items = self.values(),
        };
    } else {
        return .{
            .len = 0,
            .metadata = undefined,
            .items = undefined,
        };
    }
}

FunctionputNoClobber[src]

pub fn putNoClobber(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void

Insert an entry in the map. Assumes the key is not already present.

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn putNoClobber(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putNoClobberContext instead.");
    return self.putNoClobberContext(allocator, key, value, undefined);
}

FunctionputNoClobberContext[src]

pub fn putNoClobberContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putNoClobberContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void {
    {
        self.pointer_stability.lock();
        defer self.pointer_stability.unlock();
        try self.growIfNeeded(allocator, 1, ctx);
    }
    self.putAssumeCapacityNoClobberContext(key, value, ctx);
}

FunctionputAssumeCapacity[src]

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Clobbers any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityContext instead.");
    return self.putAssumeCapacityContext(key, value, undefined);
}

FunctionputAssumeCapacityContext[src]

pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void {
    const gop = self.getOrPutAssumeCapacityContext(key, ctx);
    gop.value_ptr.* = value;
}

FunctionputAssumeCapacityNoClobber[src]

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void

Insert an entry in the map. Assumes the key is not already present, and that no allocation is needed.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityNoClobberContext instead.");
    return self.putAssumeCapacityNoClobberContext(key, value, undefined);
}

FunctionputAssumeCapacityNoClobberContext[src]

pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void {
    assert(!self.containsContext(key, ctx));

    const hash: Hash = ctx.hash(key);
    const mask = self.capacity() - 1;
    var idx: usize = @truncate(hash & mask);

    var metadata = self.metadata.? + idx;
    while (metadata[0].isUsed()) {
        idx = (idx + 1) & mask;
        metadata = self.metadata.? + idx;
    }

    assert(self.available > 0);
    self.available -= 1;

    const fingerprint = Metadata.takeFingerprint(hash);
    metadata[0].fill(fingerprint);
    self.keys()[idx] = key;
    self.values()[idx] = value;

    self.size += 1;
}

FunctionfetchPut[src]

pub fn fetchPut(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!?KV

Inserts a new Entry into the hash map, returning the previous one, if any.

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn fetchPut(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutContext instead.");
    return self.fetchPutContext(allocator, key, value, undefined);
}

FunctionfetchPutContext[src]

pub fn fetchPutContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!?KV

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!?KV {
    const gop = try self.getOrPutContext(allocator, key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctionfetchPutAssumeCapacity[src]

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV

Inserts a new Entry into the hash map, returning the previous one, if any. If insertion happens, asserts there is enough capacity without allocating.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutAssumeCapacityContext instead.");
    return self.fetchPutAssumeCapacityContext(key, value, undefined);
}

FunctionfetchPutAssumeCapacityContext[src]

pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV {
    const gop = self.getOrPutAssumeCapacityContext(key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctionfetchRemove[src]

pub fn fetchRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchRemove(self: *Self, key: K) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchRemoveContext instead.");
    return self.fetchRemoveContext(key, undefined);
}

FunctionfetchRemoveContext[src]

pub fn fetchRemoveContext(self: *Self, key: K, ctx: Context) ?KV

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn fetchRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
    return self.fetchRemoveAdapted(key, ctx);
}

FunctionfetchRemoveAdapted[src]

pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    if (self.getIndex(key, ctx)) |idx| {
        const old_key = &self.keys()[idx];
        const old_val = &self.values()[idx];
        const result = KV{
            .key = old_key.*,
            .value = old_val.*,
        };
        self.metadata.?[idx].remove();
        old_key.* = undefined;
        old_val.* = undefined;
        self.size -= 1;
        self.available += 1;
        return result;
    }

    return null;
}

FunctiongetEntry[src]

pub fn getEntry(self: Self, key: K) ?Entry

Parameters

self: Self
key: K

Source Code

Source code
pub fn getEntry(self: Self, key: K) ?Entry {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getEntryContext instead.");
    return self.getEntryContext(key, undefined);
}

FunctiongetEntryContext[src]

pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry {
    return self.getEntryAdapted(key, ctx);
}

FunctiongetEntryAdapted[src]

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry

Parameters

self: Self

Source Code

Source code
pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
    if (self.getIndex(key, ctx)) |idx| {
        return Entry{
            .key_ptr = &self.keys()[idx],
            .value_ptr = &self.values()[idx],
        };
    }
    return null;
}

Functionput[src]

pub fn put(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void

Insert an entry if the associated key is not already present, otherwise update the preexisting value.

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn put(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putContext instead.");
    return self.putContext(allocator, key, value, undefined);
}

FunctionputContext[src]

pub fn putContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!void {
    const result = try self.getOrPutContext(allocator, key, ctx);
    result.value_ptr.* = value;
}

FunctiongetKeyPtr[src]

pub fn getKeyPtr(self: Self, key: K) ?*K

Get an optional pointer to the actual key associated with the adapted key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKeyPtr(self: Self, key: K) ?*K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyPtrContext instead.");
    return self.getKeyPtrContext(key, undefined);
}

FunctiongetKeyPtrContext[src]

pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K {
    return self.getKeyPtrAdapted(key, ctx);
}

FunctiongetKeyPtrAdapted[src]

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K

Parameters

self: Self

Source Code

Source code
pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
    if (self.getIndex(key, ctx)) |idx| {
        return &self.keys()[idx];
    }
    return null;
}

FunctiongetKey[src]

pub fn getKey(self: Self, key: K) ?K

Get a copy of the actual key associated with the adapted key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKey(self: Self, key: K) ?K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyContext instead.");
    return self.getKeyContext(key, undefined);
}

FunctiongetKeyContext[src]

pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K {
    return self.getKeyAdapted(key, ctx);
}

FunctiongetKeyAdapted[src]

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K

Parameters

self: Self

Source Code

Source code
pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
    if (self.getIndex(key, ctx)) |idx| {
        return self.keys()[idx];
    }
    return null;
}

FunctiongetPtr[src]

pub fn getPtr(self: Self, key: K) ?*V

Get an optional pointer to the value associated with key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getPtr(self: Self, key: K) ?*V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getPtrContext instead.");
    return self.getPtrContext(key, undefined);
}

FunctiongetPtrContext[src]

pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V {
    return self.getPtrAdapted(key, ctx);
}

FunctiongetPtrAdapted[src]

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V

Parameters

self: Self

Source Code

Source code
pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
    if (self.getIndex(key, ctx)) |idx| {
        return &self.values()[idx];
    }
    return null;
}

Functionget[src]

pub fn get(self: Self, key: K) ?V

Get a copy of the value associated with key, if present.

Parameters

self: Self
key: K

Source Code

Source code
pub fn get(self: Self, key: K) ?V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getContext instead.");
    return self.getContext(key, undefined);
}

FunctiongetContext[src]

pub fn getContext(self: Self, key: K, ctx: Context) ?V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getContext(self: Self, key: K, ctx: Context) ?V {
    return self.getAdapted(key, ctx);
}

FunctiongetAdapted[src]

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V

Parameters

self: Self

Source Code

Source code
pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
    if (self.getIndex(key, ctx)) |idx| {
        return self.values()[idx];
    }
    return null;
}

FunctiongetOrPut[src]

pub fn getOrPut(self: *Self, allocator: Allocator, key: K) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator
key: K

Source Code

Source code
pub fn getOrPut(self: *Self, allocator: Allocator, key: K) Allocator.Error!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContext instead.");
    return self.getOrPutContext(allocator, key, undefined);
}

FunctiongetOrPutContext[src]

pub fn getOrPutContext(self: *Self, allocator: Allocator, key: K, ctx: Context) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutContext(self: *Self, allocator: Allocator, key: K, ctx: Context) Allocator.Error!GetOrPutResult {
    const gop = try self.getOrPutContextAdapted(allocator, key, ctx, ctx);
    if (!gop.found_existing) {
        gop.key_ptr.* = key;
    }
    return gop;
}

FunctiongetOrPutAdapted[src]

pub fn getOrPutAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator

Source Code

Source code
pub fn getOrPutAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype) Allocator.Error!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContextAdapted instead.");
    return self.getOrPutContextAdapted(allocator, key, key_ctx, undefined);
}

FunctiongetOrPutContextAdapted[src]

pub fn getOrPutContextAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Allocator.Error!GetOrPutResult

Parameters

self: *Self
allocator: Allocator
ctx: Context

Source Code

Source code
pub fn getOrPutContextAdapted(self: *Self, allocator: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Allocator.Error!GetOrPutResult {
    {
        self.pointer_stability.lock();
        defer self.pointer_stability.unlock();
        self.growIfNeeded(allocator, 1, ctx) catch |err| {
            // If allocation fails, try to do the lookup anyway.
            // If we find an existing item, we can return it.
            // Otherwise return the error, we could not add another.
            const index = self.getIndex(key, key_ctx) orelse return err;
            return GetOrPutResult{
                .key_ptr = &self.keys()[index],
                .value_ptr = &self.values()[index],
                .found_existing = true,
            };
        };
    }
    return self.getOrPutAssumeCapacityAdapted(key, key_ctx);
}

FunctiongetOrPutAssumeCapacity[src]

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutAssumeCapacityContext instead.");
    return self.getOrPutAssumeCapacityContext(key, undefined);
}

FunctiongetOrPutAssumeCapacityContext[src]

pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult {
    const result = self.getOrPutAssumeCapacityAdapted(key, ctx);
    if (!result.found_existing) {
        result.key_ptr.* = key;
    }
    return result;
}

FunctiongetOrPutAssumeCapacityAdapted[src]

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {

    // If you get a compile error on this line, it means that your generic hash
    // function is invalid for these parameters.
    const hash: Hash = ctx.hash(key);

    const mask = self.capacity() - 1;
    const fingerprint = Metadata.takeFingerprint(hash);
    var limit = self.capacity();
    var idx = @as(usize, @truncate(hash & mask));

    var first_tombstone_idx: usize = self.capacity(); // invalid index
    var metadata = self.metadata.? + idx;
    while (!metadata[0].isFree() and limit != 0) {
        if (metadata[0].isUsed() and metadata[0].fingerprint == fingerprint) {
            const test_key = &self.keys()[idx];
            // If you get a compile error on this line, it means that your generic eql
            // function is invalid for these parameters.

            if (ctx.eql(key, test_key.*)) {
                return GetOrPutResult{
                    .key_ptr = test_key,
                    .value_ptr = &self.values()[idx],
                    .found_existing = true,
                };
            }
        } else if (first_tombstone_idx == self.capacity() and metadata[0].isTombstone()) {
            first_tombstone_idx = idx;
        }

        limit -= 1;
        idx = (idx + 1) & mask;
        metadata = self.metadata.? + idx;
    }

    if (first_tombstone_idx < self.capacity()) {
        // Cheap try to lower probing lengths after deletions. Recycle a tombstone.
        idx = first_tombstone_idx;
        metadata = self.metadata.? + idx;
    }
    // We're using a slot previously free or a tombstone.
    self.available -= 1;

    metadata[0].fill(fingerprint);
    const new_key = &self.keys()[idx];
    const new_value = &self.values()[idx];
    new_key.* = undefined;
    new_value.* = undefined;
    self.size += 1;

    return GetOrPutResult{
        .key_ptr = new_key,
        .value_ptr = new_value,
        .found_existing = false,
    };
}

FunctiongetOrPutValue[src]

pub fn getOrPutValue(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!Entry

Parameters

self: *Self
allocator: Allocator
key: K
value: V

Source Code

Source code
pub fn getOrPutValue(self: *Self, allocator: Allocator, key: K, value: V) Allocator.Error!Entry {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutValueContext instead.");
    return self.getOrPutValueContext(allocator, key, value, undefined);
}

FunctiongetOrPutValueContext[src]

pub fn getOrPutValueContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!Entry

Parameters

self: *Self
allocator: Allocator
key: K
value: V
ctx: Context

Source Code

Source code
pub fn getOrPutValueContext(self: *Self, allocator: Allocator, key: K, value: V, ctx: Context) Allocator.Error!Entry {
    const res = try self.getOrPutAdapted(allocator, key, ctx);
    if (!res.found_existing) {
        res.key_ptr.* = key;
        res.value_ptr.* = value;
    }
    return Entry{ .key_ptr = res.key_ptr, .value_ptr = res.value_ptr };
}

Functioncontains[src]

pub fn contains(self: Self, key: K) bool

Return true if there is a value associated with key in the map.

Parameters

self: Self
key: K

Source Code

Source code
pub fn contains(self: Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call containsContext instead.");
    return self.containsContext(key, undefined);
}

FunctioncontainsContext[src]

pub fn containsContext(self: Self, key: K, ctx: Context) bool

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn containsContext(self: Self, key: K, ctx: Context) bool {
    return self.containsAdapted(key, ctx);
}

FunctioncontainsAdapted[src]

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool

Parameters

self: Self

Source Code

Source code
pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
    return self.getIndex(key, ctx) != null;
}

Functionremove[src]

pub fn remove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map, and this function returns true. Otherwise this function returns false.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key: K

Source Code

Source code
pub fn remove(self: *Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call removeContext instead.");
    return self.removeContext(key, undefined);
}

FunctionremoveContext[src]

pub fn removeContext(self: *Self, key: K, ctx: Context) bool

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn removeContext(self: *Self, key: K, ctx: Context) bool {
    return self.removeAdapted(key, ctx);
}

FunctionremoveAdapted[src]

pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self

Source Code

Source code
pub fn removeAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    if (self.getIndex(key, ctx)) |idx| {
        self.removeByIndex(idx);
        return true;
    }

    return false;
}

FunctionremoveByPtr[src]

pub fn removeByPtr(self: *Self, key_ptr: *K) void

Delete the entry with key pointed to by key_ptr from the hash map. key_ptr is assumed to be a valid pointer to a key that is present in the hash map.

TODO: answer the question in these doc comments, does this increase the unused capacity by one?

Parameters

self: *Self
key_ptr: *K

Source Code

Source code
pub fn removeByPtr(self: *Self, key_ptr: *K) void {
    // TODO: replace with pointer subtraction once supported by zig
    // if @sizeOf(K) == 0 then there is at most one item in the hash
    // map, which is assumed to exist as key_ptr must be valid.  This
    // item must be at index 0.
    const idx = if (@sizeOf(K) > 0)
        (@intFromPtr(key_ptr) - @intFromPtr(self.keys())) / @sizeOf(K)
    else
        0;

    self.removeByIndex(idx);
}

Functionclone[src]

pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn clone(self: Self, allocator: Allocator) Allocator.Error!Self {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call cloneContext instead.");
    return self.cloneContext(allocator, @as(Context, undefined));
}

FunctioncloneContext[src]

pub fn cloneContext(self: Self, allocator: Allocator, new_ctx: anytype) Allocator.Error!HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage)

Parameters

self: Self
allocator: Allocator

Source Code

Source code
pub fn cloneContext(self: Self, allocator: Allocator, new_ctx: anytype) Allocator.Error!HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage) {
    var other: HashMapUnmanaged(K, V, @TypeOf(new_ctx), max_load_percentage) = .empty;
    if (self.size == 0)
        return other;

    const new_cap = capacityForSize(self.size);
    try other.allocate(allocator, new_cap);
    other.initMetadatas();
    other.available = @truncate((new_cap * max_load_percentage) / 100);

    var i: Size = 0;
    var metadata = self.metadata.?;
    const keys_ptr = self.keys();
    const values_ptr = self.values();
    while (i < self.capacity()) : (i += 1) {
        if (metadata[i].isUsed()) {
            other.putAssumeCapacityNoClobberContext(keys_ptr[i], values_ptr[i], new_ctx);
            if (other.size == self.size)
                break;
        }
    }

    return other;
}

Functionmove[src]

pub fn move(self: *Self) Self

Set the map to an empty state, making deinitialization a no-op, and returning a copy of the original.

Parameters

self: *Self

Source Code

Source code
pub fn move(self: *Self) Self {
    self.pointer_stability.assertUnlocked();
    const result = self.*;
    self.* = .empty;
    return result;
}

Functionrehash[src]

pub fn rehash(self: *Self, ctx: anytype) void

Rehash the map, in-place.

Over time, with the current tombstone-based implementation, a HashMap can become fragmented as tombstone entries accumulate, and performance degrades due to excessive probing. The kind of pattern that can cause this is a long-lived HashMap with repeated inserts and deletes.

After this function is called, there are no tombstones in the HashMap: every entry has been rehashed, and any existing key/value pointers into the HashMap are invalidated.

Parameters

self: *Self

Source Code

Source code
pub fn rehash(self: *Self, ctx: anytype) void {
    const mask = self.capacity() - 1;

    var metadata = self.metadata.?;
    var keys_ptr = self.keys();
    var values_ptr = self.values();
    var curr: Size = 0;

    // While we are re-hashing every slot, we will use the
    // fingerprint to mark used buckets as being used and either free
    // (needing to be rehashed) or tombstone (already rehashed).

    while (curr < self.capacity()) : (curr += 1) {
        metadata[curr].fingerprint = Metadata.free;
    }

    // Now iterate over all the buckets, rehashing them

    curr = 0;
    while (curr < self.capacity()) {
        if (!metadata[curr].isUsed()) {
            assert(metadata[curr].isFree());
            curr += 1;
            continue;
        }

        const hash = ctx.hash(keys_ptr[curr]);
        const fingerprint = Metadata.takeFingerprint(hash);
        var idx = @as(usize, @truncate(hash & mask));

        // For each bucket, rehash to an index:
        // 1) before the cursor, probed into a free slot, or
        // 2) equal to the cursor, no need to move, or
        // 3) ahead of the cursor, probing over already rehashed

        while ((idx < curr and metadata[idx].isUsed()) or
            (idx > curr and metadata[idx].fingerprint == Metadata.tombstone))
        {
            idx = (idx + 1) & mask;
        }

        if (idx < curr) {
            assert(metadata[idx].isFree());
            metadata[idx].fill(fingerprint);
            keys_ptr[idx] = keys_ptr[curr];
            values_ptr[idx] = values_ptr[curr];

            metadata[curr].used = 0;
            assert(metadata[curr].isFree());
            keys_ptr[curr] = undefined;
            values_ptr[curr] = undefined;

            curr += 1;
        } else if (idx == curr) {
            metadata[idx].fingerprint = fingerprint;
            curr += 1;
        } else {
            assert(metadata[idx].fingerprint != Metadata.tombstone);
            metadata[idx].fingerprint = Metadata.tombstone;
            if (metadata[idx].isUsed()) {
                std.mem.swap(K, &keys_ptr[curr], &keys_ptr[idx]);
                std.mem.swap(V, &values_ptr[curr], &values_ptr[idx]);
            } else {
                metadata[idx].used = 1;
                keys_ptr[idx] = keys_ptr[curr];
                values_ptr[idx] = values_ptr[curr];

                metadata[curr].fingerprint = Metadata.free;
                metadata[curr].used = 0;
                keys_ptr[curr] = undefined;
                values_ptr[curr] = undefined;

                curr += 1;
            }
        }
    }
}
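
Example

An illustrative sketch of the fragmentation scenario described above (assumes the std.AutoHashMapUnmanaged alias and std.hash_map.AutoContext from the same module):

const std = @import("std");

test "rehash clears accumulated tombstones (sketch)" {
    const gpa = std.testing.allocator;

    var map: std.AutoHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);

    // A long-lived map with repeated inserts and deletes builds up
    // tombstones, which slows probing over time.
    var i: u32 = 0;
    while (i < 1000) : (i += 1) {
        try map.put(gpa, i, i);
        _ = map.remove(i);
    }

    // Rehash in place: afterwards there are no tombstones, and any
    // existing key/value pointers into the map are invalidated.
    map.rehash(std.hash_map.AutoContext(u32){});
}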

Source Code

Source code
pub fn StringHashMapUnmanaged(comptime V: type) type {
    return HashMapUnmanaged([]const u8, V, StringContext, default_max_load_percentage);
}

Type FunctionStringArrayHashMap[src]

An ArrayHashMap with strings as keys.

Parameters

V: type

Source Code

Source code
pub fn StringArrayHashMap(comptime V: type) type {
    return ArrayHashMap([]const u8, V, StringContext, true);
}

Type FunctionStringArrayHashMapUnmanaged[src]

An ArrayHashMapUnmanaged with strings as keys.

Parameters

V: type

Types

TypeDataList[src]

The MultiArrayList type backing this map

Source Code

Source code
pub const DataList = std.MultiArrayList(Data)

TypeHash[src]

The stored hash type, either u32 or void.

Source Code

Source code
pub const Hash = if (store_hash) u32 else void

TypeManaged[src]

The ArrayHashMap type using the same settings as this managed map.

Source Code

Source code
pub const Managed = ArrayHashMap(K, V, Context, store_hash)

Fields

entries: DataList = .{}

It is permitted to access this field directly. After any modification to the keys, consider calling reIndex.

index_header: ?*IndexHeader = null

When entries length is less than linear_scan_max, this remains null. Once entries length grows big enough, this field is allocated. There is an IndexHeader followed by an array of Index(I) structs, where I is defined by how many total indexes there are.

pointer_stability: std.debug.SafetyLock = .{}

Used to detect memory safety violations.

Values

Constantempty[src]

A map containing no keys or values.

Source Code

Source code
pub const empty: Self = .{
    .entries = .{},
    .index_header = null,
}

Functions

Functionpromote[src]

pub fn promote(self: Self, gpa: Allocator) Managed

Convert from an unmanaged map to a managed map. After calling this, the original unmanaged map should no longer be used.

Parameters

self: Self

Source Code

Source code
pub fn promote(self: Self, gpa: Allocator) Managed {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call promoteContext instead.");
    return self.promoteContext(gpa, undefined);
}

FunctionpromoteContext[src]

pub fn promoteContext(self: Self, gpa: Allocator, ctx: Context) Managed

Parameters

self: Self
ctx: Context

Source Code

Source code
pub fn promoteContext(self: Self, gpa: Allocator, ctx: Context) Managed {
    return .{
        .unmanaged = self,
        .allocator = gpa,
        .ctx = ctx,
    };
}
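
Example

A sketch of the unmanaged-to-managed handoff (illustrative; assumes the std.AutoArrayHashMapUnmanaged alias, and that the managed wrapper stores the allocator as described in the Fields above):

const std = @import("std");

test "promote to a managed map (sketch)" {
    const gpa = std.testing.allocator;

    var unmanaged: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    try unmanaged.put(gpa, 1, 10);

    // The managed map takes over the same storage and remembers the
    // allocator; only `managed` should be used from here on.
    var managed = unmanaged.promote(gpa);
    defer managed.deinit();

    try managed.put(2, 20);
    try std.testing.expectEqual(@as(u32, 10), managed.get(1).?);
}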

Functioninit[src]

pub fn init(gpa: Allocator, key_list: []const K, value_list: []const V) Oom!Self

Parameters

key_list: []const K
value_list: []const V

Source Code

Source code
pub fn init(gpa: Allocator, key_list: []const K, value_list: []const V) Oom!Self {
    var self: Self = .{};
    errdefer self.deinit(gpa);
    try self.reinit(gpa, key_list, value_list);
    return self;
}
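
Example

A sketch of building a map from parallel key/value lists (illustrative; assumes the std.StringArrayHashMapUnmanaged alias shown at the end of this page):

const std = @import("std");

test "init from parallel key/value lists (sketch)" {
    const gpa = std.testing.allocator;

    const color_names = [_][]const u8{ "red", "green", "blue" };
    const color_values = [_]u32{ 0xff0000, 0x00ff00, 0x0000ff };

    var map = try std.StringArrayHashMapUnmanaged(u32).init(gpa, &color_names, &color_values);
    defer map.deinit(gpa);

    try std.testing.expectEqual(@as(u32, 0x00ff00), map.get("green").?);
}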

Functionreinit[src]

pub fn reinit(self: *Self, gpa: Allocator, key_list: []const K, value_list: []const V) Oom!void

An empty value_list may be passed, in which case the values array becomes undefined.

Parameters

self: *Self
key_list: []const K
value_list: []const V

Source Code

Source code
pub fn reinit(self: *Self, gpa: Allocator, key_list: []const K, value_list: []const V) Oom!void {
    try self.entries.resize(gpa, key_list.len);
    @memcpy(self.keys(), key_list);
    if (value_list.len == 0) {
        @memset(self.values(), undefined);
    } else {
        assert(key_list.len == value_list.len);
        @memcpy(self.values(), value_list);
    }
    try self.reIndex(gpa);
}

Functiondeinit[src]

pub fn deinit(self: *Self, gpa: Allocator) void

Frees the backing allocation and leaves the map in an undefined state. Note that this does not free keys or values. You must take care of that before calling this function, if it is needed.

Parameters

self: *Self

Source Code

Source code
pub fn deinit(self: *Self, gpa: Allocator) void {
    self.pointer_stability.assertUnlocked();
    self.entries.deinit(gpa);
    if (self.index_header) |header| {
        header.free(gpa);
    }
    self.* = undefined;
}

FunctionlockPointers[src]

pub fn lockPointers(self: *Self) void

Puts the hash map into a state where any method call that would cause an existing key or value pointer to become invalidated will instead trigger an assertion.

An additional call to lockPointers in such state also triggers an assertion.

unlockPointers returns the hash map to the previous state.

Parameters

self: *Self

Source Code

Source code
pub fn lockPointers(self: *Self) void {
    self.pointer_stability.lock();
}

FunctionunlockPointers[src]

pub fn unlockPointers(self: *Self) void

Undoes a call to lockPointers.

Parameters

self: *Self

Source Code

Source code
pub fn unlockPointers(self: *Self) void {
    self.pointer_stability.unlock();
}
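
Example

An illustrative sketch of the locking discipline (assumes the std.AutoArrayHashMapUnmanaged alias and a safety-checked build mode):

const std = @import("std");

test "lockPointers catches invalidating calls (sketch)" {
    const gpa = std.testing.allocator;

    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.ensureUnusedCapacity(gpa, 2);

    map.lockPointers();
    defer map.unlockPointers();

    // Fine while locked: capacity was reserved, so no entry can move.
    map.putAssumeCapacity(1, 10);
    _ = map.getPtr(1);

    // This would trigger the safety assertion while locked, because it
    // may reallocate and invalidate existing pointers:
    // _ = try map.getOrPut(gpa, 2);
}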

FunctionclearRetainingCapacity[src]

pub fn clearRetainingCapacity(self: *Self) void

Clears the map but retains the backing allocation for future use.

Parameters

self: *Self

Source Code

Source code
pub fn clearRetainingCapacity(self: *Self) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.entries.len = 0;
    if (self.index_header) |header| {
        switch (header.capacityIndexType()) {
            .u8 => @memset(header.indexes(u8), Index(u8).empty),
            .u16 => @memset(header.indexes(u16), Index(u16).empty),
            .u32 => @memset(header.indexes(u32), Index(u32).empty),
        }
    }
}

FunctionclearAndFree[src]

pub fn clearAndFree(self: *Self, gpa: Allocator) void

Clears the map and releases the backing allocation.

Parameters

self: *Self

Source Code

Source code
pub fn clearAndFree(self: *Self, gpa: Allocator) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.entries.shrinkAndFree(gpa, 0);
    if (self.index_header) |header| {
        header.free(gpa);
        self.index_header = null;
    }
}

Functioncount[src]

pub fn count(self: Self) usize

Returns the number of KV pairs stored in this map.

Parameters

self: Self

Source Code

Source code
pub fn count(self: Self) usize {
    return self.entries.len;
}

Functionkeys[src]

pub fn keys(self: Self) []K

Returns the backing array of keys in this map. Modifying the map may invalidate this array. Modifying this array in a way that changes key hashes or key equality puts the map into an unusable state until reIndex is called.

Parameters

self: Self

Source Code

Source code
pub fn keys(self: Self) []K {
    return self.entries.items(.key);
}

Functionvalues[src]

pub fn values(self: Self) []V

Returns the backing array of values in this map. Modifying the map may invalidate this array. It is permitted to modify the values in this array.

Parameters

self: Self

Source Code

Source code
pub fn values(self: Self) []V {
    return self.entries.items(.value);
}

Functioniterator[src]

pub fn iterator(self: Self) Iterator

Returns an iterator over the pairs in this map. Modifying the map may invalidate this iterator.

Parameters

self: Self

Source Code

Source code
pub fn iterator(self: Self) Iterator {
    const slice = self.entries.slice();
    return .{
        .keys = slice.items(.key).ptr,
        .values = slice.items(.value).ptr,
        .len = @as(u32, @intCast(slice.len)),
    };
}
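
Example

A sketch of the two iteration styles (illustrative; assumes the std.StringArrayHashMapUnmanaged alias):

const std = @import("std");

test "iteration stays in insertion order (sketch)" {
    const gpa = std.testing.allocator;

    var map: std.StringArrayHashMapUnmanaged(u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, "a", 1);
    try map.put(gpa, "b", 2);

    // Entry iterator: yields pointers into the backing store.
    var sum: u32 = 0;
    var it = map.iterator();
    while (it.next()) |entry| sum += entry.value_ptr.*;
    try std.testing.expectEqual(@as(u32, 3), sum);

    // The backing slices expose the same data, in insertion order.
    try std.testing.expectEqualStrings("a", map.keys()[0]);
    try std.testing.expectEqualStrings("b", map.keys()[1]);
}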

FunctiongetOrPut[src]

pub fn getOrPut(self: *Self, gpa: Allocator, key: K) Oom!GetOrPutResult

If key exists, this function cannot fail. If there is an existing item with key, then the result Entry pointer points to it, and found_existing is true. Otherwise, it puts a new item with undefined value, and the Entry pointer points to it. The caller should then initialize the value (but not the key).

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPut(self: *Self, gpa: Allocator, key: K) Oom!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContext instead.");
    return self.getOrPutContext(gpa, key, undefined);
}
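
Example

The classic use case is a single-lookup "insert or update", e.g. counting occurrences. An illustrative sketch (assumes the std.StringArrayHashMapUnmanaged alias):

const std = @import("std");

test "getOrPut for counting (sketch)" {
    const gpa = std.testing.allocator;

    var counts: std.StringArrayHashMapUnmanaged(u32) = .empty;
    defer counts.deinit(gpa);

    const words = [_][]const u8{ "apple", "pear", "apple" };
    for (words) |word| {
        const gop = try counts.getOrPut(gpa, word);
        // The caller initializes the value of a newly inserted entry.
        if (!gop.found_existing) gop.value_ptr.* = 0;
        gop.value_ptr.* += 1;
    }

    try std.testing.expectEqual(@as(u32, 2), counts.get("apple").?);
}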

FunctiongetOrPutContext[src]

pub fn getOrPutContext(self: *Self, gpa: Allocator, key: K, ctx: Context) Oom!GetOrPutResult

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutContext(self: *Self, gpa: Allocator, key: K, ctx: Context) Oom!GetOrPutResult {
    const gop = try self.getOrPutContextAdapted(gpa, key, ctx, ctx);
    if (!gop.found_existing) {
        gop.key_ptr.* = key;
    }
    return gop;
}

FunctiongetOrPutAdapted[src]

pub fn getOrPutAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype) Oom!GetOrPutResult

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype) Oom!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutContextAdapted instead.");
    return self.getOrPutContextAdapted(gpa, key, key_ctx, undefined);
}

FunctiongetOrPutContextAdapted[src]

pub fn getOrPutContextAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Oom!GetOrPutResult

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn getOrPutContextAdapted(self: *Self, gpa: Allocator, key: anytype, key_ctx: anytype, ctx: Context) Oom!GetOrPutResult {
    self.ensureTotalCapacityContext(gpa, self.entries.len + 1, ctx) catch |err| {
        // "If key exists this function cannot fail."
        const index = self.getIndexAdapted(key, key_ctx) orelse return err;
        const slice = self.entries.slice();
        return GetOrPutResult{
            .key_ptr = &slice.items(.key)[index],
            // workaround for #6974
            .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[index],
            .found_existing = true,
            .index = index,
        };
    };
    return self.getOrPutAssumeCapacityAdapted(key, key_ctx);
}

FunctiongetOrPutAssumeCapacity[src]

pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult

If there is an existing item with key, then the result Entry pointer points to it, and found_existing is true. Otherwise, it puts a new item with undefined value, and the Entry pointer points to it. The caller should then initialize the value (but not the key). If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn getOrPutAssumeCapacity(self: *Self, key: K) GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutAssumeCapacityContext instead.");
    return self.getOrPutAssumeCapacityContext(key, undefined);
}

FunctiongetOrPutAssumeCapacityContext[src]

pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn getOrPutAssumeCapacityContext(self: *Self, key: K, ctx: Context) GetOrPutResult {
    const gop = self.getOrPutAssumeCapacityAdapted(key, ctx);
    if (!gop.found_existing) {
        gop.key_ptr.* = key;
    }
    return gop;
}

FunctiongetOrPutAssumeCapacityAdapted[src]

pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult

If there is an existing item with key, then the result Entry pointers point to it, and found_existing is true. Otherwise, it puts a new item with undefined key and value, and the Entry pointers point to it. The caller must then initialize both the key and the value. If a new entry needs to be stored, this function asserts there is enough capacity to store it.

Parameters

self: *Self

Source Code

Source code
pub fn getOrPutAssumeCapacityAdapted(self: *Self, key: anytype, ctx: anytype) GetOrPutResult {
    const header = self.index_header orelse {
        // Linear scan.
        const h = if (store_hash) checkedHash(ctx, key) else {};
        const slice = self.entries.slice();
        const hashes_array = slice.items(.hash);
        const keys_array = slice.items(.key);
        for (keys_array, 0..) |*item_key, i| {
            if (hashes_array[i] == h and checkedEql(ctx, key, item_key.*, i)) {
                return GetOrPutResult{
                    .key_ptr = item_key,
                    // workaround for #6974
                    .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[i],
                    .found_existing = true,
                    .index = i,
                };
            }
        }

        const index = self.entries.addOneAssumeCapacity();
        // The slice length changed, so we directly index the pointer.
        if (store_hash) hashes_array.ptr[index] = h;

        return GetOrPutResult{
            .key_ptr = &keys_array.ptr[index],
            // workaround for #6974
            .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value).ptr[index],
            .found_existing = false,
            .index = index,
        };
    };

    switch (header.capacityIndexType()) {
        .u8 => return self.getOrPutInternal(key, ctx, header, u8),
        .u16 => return self.getOrPutInternal(key, ctx, header, u16),
        .u32 => return self.getOrPutInternal(key, ctx, header, u32),
    }
}

FunctiongetOrPutValue[src]

pub fn getOrPutValue(self: *Self, gpa: Allocator, key: K, value: V) Oom!GetOrPutResult

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn getOrPutValue(self: *Self, gpa: Allocator, key: K, value: V) Oom!GetOrPutResult {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getOrPutValueContext instead.");
    return self.getOrPutValueContext(gpa, key, value, undefined);
}

FunctiongetOrPutValueContext[src]

pub fn getOrPutValueContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!GetOrPutResult

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn getOrPutValueContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!GetOrPutResult {
    const res = try self.getOrPutContextAdapted(gpa, key, ctx, ctx);
    if (!res.found_existing) {
        res.key_ptr.* = key;
        res.value_ptr.* = value;
    }
    return res;
}

FunctionensureTotalCapacity[src]

pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Oom!void

Increases capacity, guaranteeing that insertions up to new_capacity total entries will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
new_capacity: usize

Source Code

Source code
pub fn ensureTotalCapacity(self: *Self, gpa: Allocator, new_capacity: usize) Oom!void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
    return self.ensureTotalCapacityContext(gpa, new_capacity, undefined);
}

FunctionensureTotalCapacityContext[src]

pub fn ensureTotalCapacityContext(self: *Self, gpa: Allocator, new_capacity: usize, ctx: Context) Oom!void

Parameters

self: *Self
new_capacity: usize
ctx: Context

Source Code

Source code
pub fn ensureTotalCapacityContext(self: *Self, gpa: Allocator, new_capacity: usize, ctx: Context) Oom!void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    if (new_capacity <= linear_scan_max) {
        try self.entries.ensureTotalCapacity(gpa, new_capacity);
        return;
    }

    if (self.index_header) |header| {
        if (new_capacity <= header.capacity()) {
            try self.entries.ensureTotalCapacity(gpa, new_capacity);
            return;
        }
    }

    try self.entries.ensureTotalCapacity(gpa, new_capacity);
    const new_bit_index = try IndexHeader.findBitIndex(new_capacity);
    const new_header = try IndexHeader.alloc(gpa, new_bit_index);

    if (self.index_header) |old_header| old_header.free(gpa);
    self.insertAllEntriesIntoNewHeader(if (store_hash) {} else ctx, new_header);
    self.index_header = new_header;
}

FunctionensureUnusedCapacity[src]

pub fn ensureUnusedCapacity(self: *Self, gpa: Allocator, additional_capacity: usize) Oom!void

Increases capacity, guaranteeing that inserting up to additional_capacity more items will not cause an allocation, and therefore cannot fail.

Parameters

self: *Self
additional_capacity: usize

Source Code

Source code
pub fn ensureUnusedCapacity(
    self: *Self,
    gpa: Allocator,
    additional_capacity: usize,
) Oom!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call ensureTotalCapacityContext instead.");
    return self.ensureUnusedCapacityContext(gpa, additional_capacity, undefined);
}

FunctionensureUnusedCapacityContext[src]

pub fn ensureUnusedCapacityContext(self: *Self, gpa: Allocator, additional_capacity: usize, ctx: Context) Oom!void

Parameters

self: *Self
additional_capacity: usize
ctx: Context

Source Code

Source code
pub fn ensureUnusedCapacityContext(
    self: *Self,
    gpa: Allocator,
    additional_capacity: usize,
    ctx: Context,
) Oom!void {
    return self.ensureTotalCapacityContext(gpa, self.count() + additional_capacity, ctx);
}
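
Example

A sketch of the reserve-then-insert pattern these functions enable (illustrative; assumes the std.AutoArrayHashMapUnmanaged alias):

const std = @import("std");

test "reserve once, insert infallibly (sketch)" {
    const gpa = std.testing.allocator;

    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);

    // One fallible call up front...
    try map.ensureUnusedCapacity(gpa, 3);

    // ...then these insertions cannot fail and cannot move entries.
    map.putAssumeCapacity(1, 10);
    map.putAssumeCapacity(2, 20);
    map.putAssumeCapacity(3, 30);

    try std.testing.expectEqual(@as(usize, 3), map.count());
}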

Functioncapacity[src]

pub fn capacity(self: Self) usize

Returns the total number of elements which may be present before it is no longer guaranteed that no allocations will be performed.

Parameters

self: Self

Source Code

Source code
pub fn capacity(self: Self) usize {
    const entry_cap = self.entries.capacity;
    const header = self.index_header orelse return @min(linear_scan_max, entry_cap);
    const indexes_cap = header.capacity();
    return @min(entry_cap, indexes_cap);
}

Functionput[src]

pub fn put(self: *Self, gpa: Allocator, key: K, value: V) Oom!void

Clobbers any existing data. To detect if a put would clobber existing data, see getOrPut.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn put(self: *Self, gpa: Allocator, key: K, value: V) Oom!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putContext instead.");
    return self.putContext(gpa, key, value, undefined);
}

FunctionputContext[src]

pub fn putContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void {
    const result = try self.getOrPutContext(gpa, key, ctx);
    result.value_ptr.* = value;
}

FunctionputNoClobber[src]

pub fn putNoClobber(self: *Self, gpa: Allocator, key: K, value: V) Oom!void

Inserts a key-value pair into the hash map, asserting that no previous entry with the same key is already present.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putNoClobber(self: *Self, gpa: Allocator, key: K, value: V) Oom!void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putNoClobberContext instead.");
    return self.putNoClobberContext(gpa, key, value, undefined);
}

FunctionputNoClobberContext[src]

pub fn putNoClobberContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putNoClobberContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!void {
    const result = try self.getOrPutContext(gpa, key, ctx);
    assert(!result.found_existing);
    result.value_ptr.* = value;
}

FunctionputAssumeCapacity[src]

pub fn putAssumeCapacity(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Clobbers any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacity(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityContext instead.");
    return self.putAssumeCapacityContext(key, value, undefined);
}

FunctionputAssumeCapacityContext[src]

pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) void {
    const result = self.getOrPutAssumeCapacityContext(key, ctx);
    result.value_ptr.* = value;
}

FunctionputAssumeCapacityNoClobber[src]

pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void

Asserts there is enough capacity to store the new key-value pair. Asserts that it does not clobber any existing data. To detect if a put would clobber existing data, see getOrPutAssumeCapacity.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn putAssumeCapacityNoClobber(self: *Self, key: K, value: V) void {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call putAssumeCapacityNoClobberContext instead.");
    return self.putAssumeCapacityNoClobberContext(key, value, undefined);
}

FunctionputAssumeCapacityNoClobberContext[src]

pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn putAssumeCapacityNoClobberContext(self: *Self, key: K, value: V, ctx: Context) void {
    const result = self.getOrPutAssumeCapacityContext(key, ctx);
    assert(!result.found_existing);
    result.value_ptr.* = value;
}

FunctionfetchPut[src]

pub fn fetchPut(self: *Self, gpa: Allocator, key: K, value: V) Oom!?KV

Inserts a new Entry into the hash map, returning the previous one, if any.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPut(self: *Self, gpa: Allocator, key: K, value: V) Oom!?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutContext instead.");
    return self.fetchPutContext(gpa, key, value, undefined);
}

FunctionfetchPutContext[src]

pub fn fetchPutContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!?KV

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutContext(self: *Self, gpa: Allocator, key: K, value: V, ctx: Context) Oom!?KV {
    const gop = try self.getOrPutContext(gpa, key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}
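
Example

An illustrative sketch (assumes the std.AutoArrayHashMapUnmanaged alias):

const std = @import("std");

test "fetchPut returns the replaced pair (sketch)" {
    const gpa = std.testing.allocator;

    var map: std.AutoArrayHashMapUnmanaged(u32, []const u8) = .empty;
    defer map.deinit(gpa);

    // No previous entry: null is returned.
    try std.testing.expect(try map.fetchPut(gpa, 1, "first") == null);

    // The clobbered key/value pair is copied out and returned.
    const prev = (try map.fetchPut(gpa, 1, "second")).?;
    try std.testing.expectEqualStrings("first", prev.value);
    try std.testing.expectEqualStrings("second", map.get(1).?);
}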

FunctionfetchPutAssumeCapacity[src]

pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV

Inserts a new Entry into the hash map, returning the previous one, if any. If insertion happens, asserts there is enough capacity without allocating.

Parameters

self: *Self
key: K
value: V

Source Code

Source code
pub fn fetchPutAssumeCapacity(self: *Self, key: K, value: V) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchPutAssumeCapacityContext instead.");
    return self.fetchPutAssumeCapacityContext(key, value, undefined);
}

FunctionfetchPutAssumeCapacityContext[src]

pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV

Parameters

self: *Self
key: K
value: V
ctx: Context

Source Code

Source code
pub fn fetchPutAssumeCapacityContext(self: *Self, key: K, value: V, ctx: Context) ?KV {
    const gop = self.getOrPutAssumeCapacityContext(key, ctx);
    var result: ?KV = null;
    if (gop.found_existing) {
        result = KV{
            .key = gop.key_ptr.*,
            .value = gop.value_ptr.*,
        };
    }
    gop.value_ptr.* = value;
    return result;
}

FunctiongetEntry[src]

pub fn getEntry(self: Self, key: K) ?Entry

Finds pointers to the key and value storage associated with a key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getEntry(self: Self, key: K) ?Entry {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getEntryContext instead.");
    return self.getEntryContext(key, undefined);
}

FunctiongetEntryContext[src]

pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getEntryContext(self: Self, key: K, ctx: Context) ?Entry {
    return self.getEntryAdapted(key, ctx);
}

FunctiongetEntryAdapted[src]

pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry

Parameters

self: Self

Source Code

Source code
pub fn getEntryAdapted(self: Self, key: anytype, ctx: anytype) ?Entry {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    const slice = self.entries.slice();
    return Entry{
        .key_ptr = &slice.items(.key)[index],
        // workaround for #6974
        .value_ptr = if (@sizeOf(*V) == 0) undefined else &slice.items(.value)[index],
    };
}

FunctiongetIndex[src]

pub fn getIndex(self: Self, key: K) ?usize

Finds the index in the entries array where a key is stored.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getIndex(self: Self, key: K) ?usize {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getIndexContext instead.");
    return self.getIndexContext(key, undefined);
}

FunctiongetIndexContext[src]

pub fn getIndexContext(self: Self, key: K, ctx: Context) ?usize

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getIndexContext(self: Self, key: K, ctx: Context) ?usize {
    return self.getIndexAdapted(key, ctx);
}

FunctiongetIndexAdapted[src]

pub fn getIndexAdapted(self: Self, key: anytype, ctx: anytype) ?usize

Parameters

self: Self

Source Code

Source code
pub fn getIndexAdapted(self: Self, key: anytype, ctx: anytype) ?usize {
    const header = self.index_header orelse {
        // Linear scan.
        const h = if (store_hash) checkedHash(ctx, key) else {};
        const slice = self.entries.slice();
        const hashes_array = slice.items(.hash);
        const keys_array = slice.items(.key);
        for (keys_array, 0..) |*item_key, i| {
            if (hashes_array[i] == h and checkedEql(ctx, key, item_key.*, i)) {
                return i;
            }
        }
        return null;
    };
    switch (header.capacityIndexType()) {
        .u8 => return self.getIndexWithHeaderGeneric(key, ctx, header, u8),
        .u16 => return self.getIndexWithHeaderGeneric(key, ctx, header, u16),
        .u32 => return self.getIndexWithHeaderGeneric(key, ctx, header, u32),
    }
}

Functionget[src]

pub fn get(self: Self, key: K) ?V

Finds the value associated with a key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn get(self: Self, key: K) ?V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getContext instead.");
    return self.getContext(key, undefined);
}

FunctiongetContext[src]

pub fn getContext(self: Self, key: K, ctx: Context) ?V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getContext(self: Self, key: K, ctx: Context) ?V {
    return self.getAdapted(key, ctx);
}

FunctiongetAdapted[src]

pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V

Parameters

self: Self

Source Code

Source code
pub fn getAdapted(self: Self, key: anytype, ctx: anytype) ?V {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    return self.values()[index];
}

FunctiongetPtr[src]

pub fn getPtr(self: Self, key: K) ?*V

Finds a pointer to the value associated with a key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getPtr(self: Self, key: K) ?*V {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getPtrContext instead.");
    return self.getPtrContext(key, undefined);
}

FunctiongetPtrContext[src]

pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getPtrContext(self: Self, key: K, ctx: Context) ?*V {
    return self.getPtrAdapted(key, ctx);
}

FunctiongetPtrAdapted[src]

pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V

Parameters

self: Self

Source Code

Source code
pub fn getPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*V {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    // workaround for #6974
    return if (@sizeOf(*V) == 0) @as(*V, undefined) else &self.values()[index];
}
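
Example

A sketch of in-place mutation through getPtr (illustrative; assumes the std.AutoArrayHashMapUnmanaged alias):

const std = @import("std");

test "update a value in place via getPtr (sketch)" {
    const gpa = std.testing.allocator;

    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 1, 10);

    // One lookup, then write through the pointer.
    if (map.getPtr(1)) |value_ptr| value_ptr.* += 5;
    try std.testing.expectEqual(@as(u32, 15), map.get(1).?);
}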

FunctiongetKey[src]

pub fn getKey(self: Self, key: K) ?K

Finds the actual key associated with an adapted key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKey(self: Self, key: K) ?K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyContext instead.");
    return self.getKeyContext(key, undefined);
}

FunctiongetKeyContext[src]

pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyContext(self: Self, key: K, ctx: Context) ?K {
    return self.getKeyAdapted(key, ctx);
}

FunctiongetKeyAdapted[src]

pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K

Parameters

self: Self

Source Code

Source code
pub fn getKeyAdapted(self: Self, key: anytype, ctx: anytype) ?K {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    return self.keys()[index];
}

FunctiongetKeyPtr[src]

pub fn getKeyPtr(self: Self, key: K) ?*K

Finds a pointer to the actual key associated with an adapted key.

Parameters

self: Self
key: K

Source Code

Source code
pub fn getKeyPtr(self: Self, key: K) ?*K {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call getKeyPtrContext instead.");
    return self.getKeyPtrContext(key, undefined);
}

FunctiongetKeyPtrContext[src]

pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn getKeyPtrContext(self: Self, key: K, ctx: Context) ?*K {
    return self.getKeyPtrAdapted(key, ctx);
}

FunctiongetKeyPtrAdapted[src]

pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K

Parameters

self: Self

Source Code

Source code
pub fn getKeyPtrAdapted(self: Self, key: anytype, ctx: anytype) ?*K {
    const index = self.getIndexAdapted(key, ctx) orelse return null;
    return &self.keys()[index];
}

Functioncontains[src]

pub fn contains(self: Self, key: K) bool

Checks whether a key is stored in the map.

Parameters

self: Self
key: K

Source Code

Source code
pub fn contains(self: Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call containsContext instead.");
    return self.containsContext(key, undefined);
}

FunctioncontainsContext[src]

pub fn containsContext(self: Self, key: K, ctx: Context) bool

Parameters

self: Self
key: K
ctx: Context

Source Code

Source code
pub fn containsContext(self: Self, key: K, ctx: Context) bool {
    return self.containsAdapted(key, ctx);
}

FunctioncontainsAdapted[src]

pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool

Parameters

self: Self

Source Code

Source code
pub fn containsAdapted(self: Self, key: anytype, ctx: anytype) bool {
    return self.getIndexAdapted(key, ctx) != null;
}

FunctionfetchSwapRemove[src]

pub fn fetchSwapRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function. The entry is removed from the underlying array by swapping it with the last element.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchSwapRemove(self: *Self, key: K) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchSwapRemoveContext instead.");
    return self.fetchSwapRemoveContext(key, undefined);
}

FunctionfetchSwapRemoveContext[src]

pub fn fetchSwapRemoveContext(self: *Self, key: K, ctx: Context) ?KV

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn fetchSwapRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
    return self.fetchSwapRemoveContextAdapted(key, ctx, ctx);
}

FunctionfetchSwapRemoveAdapted[src]

pub fn fetchSwapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchSwapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchSwapRemoveContextAdapted instead.");
    return self.fetchSwapRemoveContextAdapted(key, ctx, undefined);
}

FunctionfetchSwapRemoveContextAdapted[src]

pub fn fetchSwapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn fetchSwapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.fetchRemoveByKey(key, key_ctx, if (store_hash) {} else ctx, .swap);
}

FunctionfetchOrderedRemove[src]

pub fn fetchOrderedRemove(self: *Self, key: K) ?KV

If there is an Entry with a matching key, it is deleted from the hash map, and then returned from this function. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn fetchOrderedRemove(self: *Self, key: K) ?KV {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchOrderedRemoveContext instead.");
    return self.fetchOrderedRemoveContext(key, undefined);
}

FunctionfetchOrderedRemoveContext[src]

pub fn fetchOrderedRemoveContext(self: *Self, key: K, ctx: Context) ?KV

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn fetchOrderedRemoveContext(self: *Self, key: K, ctx: Context) ?KV {
    return self.fetchOrderedRemoveContextAdapted(key, ctx, ctx);
}

FunctionfetchOrderedRemoveAdapted[src]

pub fn fetchOrderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV

Parameters

self: *Self

Source Code

Source code
pub fn fetchOrderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) ?KV {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call fetchOrderedRemoveContextAdapted instead.");
    return self.fetchOrderedRemoveContextAdapted(key, ctx, undefined);
}

FunctionfetchOrderedRemoveContextAdapted[src]

pub fn fetchOrderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn fetchOrderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) ?KV {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.fetchRemoveByKey(key, key_ctx, if (store_hash) {} else ctx, .ordered);
}

FunctionswapRemove[src]

pub fn swapRemove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map. The entry is removed from the underlying array by swapping it with the last element. Returns true if an entry was removed, false otherwise.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn swapRemove(self: *Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveContext instead.");
    return self.swapRemoveContext(key, undefined);
}

FunctionswapRemoveContext[src]

pub fn swapRemoveContext(self: *Self, key: K, ctx: Context) bool

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn swapRemoveContext(self: *Self, key: K, ctx: Context) bool {
    return self.swapRemoveContextAdapted(key, ctx, ctx);
}

FunctionswapRemoveAdapted[src]

pub fn swapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool

Parameters

self: *Self

Source Code

Source code
pub fn swapRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveContextAdapted instead.");
    return self.swapRemoveContextAdapted(key, ctx, undefined);
}

FunctionswapRemoveContextAdapted[src]

pub fn swapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn swapRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.removeByKey(key, key_ctx, if (store_hash) {} else ctx, .swap);
}

FunctionorderedRemove[src]

pub fn orderedRemove(self: *Self, key: K) bool

If there is an Entry with a matching key, it is deleted from the hash map. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering. Returns true if an entry was removed, false otherwise.

Parameters

self: *Self
key: K

Source Code

Source code
pub fn orderedRemove(self: *Self, key: K) bool {
    if (@sizeOf(Context) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveContext instead.");
    return self.orderedRemoveContext(key, undefined);
}

FunctionorderedRemoveContext[src]

pub fn orderedRemoveContext(self: *Self, key: K, ctx: Context) bool

Parameters

self: *Self
key: K
ctx: Context

Source Code

Source code
pub fn orderedRemoveContext(self: *Self, key: K, ctx: Context) bool {
    return self.orderedRemoveContextAdapted(key, ctx, ctx);
}

FunctionorderedRemoveAdapted[src]

pub fn orderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool

Parameters

self: *Self

Source Code

Source code
pub fn orderedRemoveAdapted(self: *Self, key: anytype, ctx: anytype) bool {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveContextAdapted instead.");
    return self.orderedRemoveContextAdapted(key, ctx, undefined);
}

FunctionorderedRemoveContextAdapted[src]

pub fn orderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn orderedRemoveContextAdapted(self: *Self, key: anytype, key_ctx: anytype, ctx: Context) bool {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    return self.removeByKey(key, key_ctx, if (store_hash) {} else ctx, .ordered);
}
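
Example

A sketch contrasting the two removal strategies (illustrative; assumes the std.AutoArrayHashMapUnmanaged alias):

const std = @import("std");

test "swapRemove vs orderedRemove (sketch)" {
    const gpa = std.testing.allocator;

    var map: std.AutoArrayHashMapUnmanaged(u32, void) = .empty;
    defer map.deinit(gpa);
    for ([_]u32{ 1, 2, 3, 4 }) |k| try map.put(gpa, k, {});

    // swapRemove is O(1) but moves the last entry into the hole:
    // 1,2,3,4 -> 1,4,3
    _ = map.swapRemove(2);
    try std.testing.expectEqualSlices(u32, &.{ 1, 4, 3 }, map.keys());

    // orderedRemove is O(N) but preserves insertion order:
    // 1,4,3 -> 1,3
    _ = map.orderedRemove(4);
    try std.testing.expectEqualSlices(u32, &.{ 1, 3 }, map.keys());
}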

FunctionswapRemoveAt[src]

pub fn swapRemoveAt(self: *Self, index: usize) void

Deletes the item at the specified index in entries from the hash map. The entry is removed from the underlying array by swapping it with the last element.

Parameters

self: *Self
index: usize

Source Code

Source code
pub fn swapRemoveAt(self: *Self, index: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call swapRemoveAtContext instead.");
    return self.swapRemoveAtContext(index, undefined);
}

FunctionswapRemoveAtContext[src]

pub fn swapRemoveAtContext(self: *Self, index: usize, ctx: Context) void

Parameters

self: *Self
index: usize
ctx: Context

Source Code

Source code
pub fn swapRemoveAtContext(self: *Self, index: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.removeByIndex(index, if (store_hash) {} else ctx, .swap);
}

FunctionorderedRemoveAt[src]

pub fn orderedRemoveAt(self: *Self, index: usize) void

Deletes the item at the specified index in entries from the hash map. The entry is removed from the underlying array by shifting all elements forward, thereby maintaining the current ordering.

Parameters

self: *Self
index: usize

Source Code

Source code
pub fn orderedRemoveAt(self: *Self, index: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call orderedRemoveAtContext instead.");
    return self.orderedRemoveAtContext(index, undefined);
}

FunctionorderedRemoveAtContext[src]

pub fn orderedRemoveAtContext(self: *Self, index: usize, ctx: Context) void

Parameters

self: *Self
index: usize
ctx: Context

Source Code

Source code
pub fn orderedRemoveAtContext(self: *Self, index: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    self.removeByIndex(index, if (store_hash) {} else ctx, .ordered);
}

Functionclone[src]

pub fn clone(self: Self, gpa: Allocator) Oom!Self

Create a copy of the hash map which can be modified separately. The copy uses the same context as this instance, but is allocated with the provided allocator.

Parameters

self: Self

Source Code

Source code
pub fn clone(self: Self, gpa: Allocator) Oom!Self {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call cloneContext instead.");
    return self.cloneContext(gpa, undefined);
}

FunctioncloneContext[src]

pub fn cloneContext(self: Self, gpa: Allocator, ctx: Context) Oom!Self

Parameters

self: Self
ctx: Context

Source Code

Source code
pub fn cloneContext(self: Self, gpa: Allocator, ctx: Context) Oom!Self {
    var other: Self = .{};
    other.entries = try self.entries.clone(gpa);
    errdefer other.entries.deinit(gpa);

    if (self.index_header) |header| {
        // TODO: I'm pretty sure this could be memcpy'd instead of
        // doing all this work.
        const new_header = try IndexHeader.alloc(gpa, header.bit_index);
        other.insertAllEntriesIntoNewHeader(if (store_hash) {} else ctx, new_header);
        other.index_header = new_header;
    }
    return other;
}

Functionmove[src]

pub fn move(self: *Self) Self

Sets the map to an empty state, making deinitialization a no-op, and returns a copy of the original.

Parameters

self: *Self

Source Code

Source code
pub fn move(self: *Self) Self {
    self.pointer_stability.assertUnlocked();
    const result = self.*;
    self.* = .empty;
    return result;
}
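
Example

An illustrative sketch (assumes the std.AutoArrayHashMapUnmanaged alias):

const std = @import("std");

test "move transfers ownership and empties the source (sketch)" {
    const gpa = std.testing.allocator;

    var a: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    try a.put(gpa, 1, 10);

    var b = a.move();
    defer b.deinit(gpa);

    // `a` is back to the empty state, so deinitializing it is a no-op.
    try std.testing.expectEqual(@as(usize, 0), a.count());
    try std.testing.expectEqual(@as(usize, 1), b.count());
}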

FunctionreIndex[src]

pub fn reIndex(self: *Self, gpa: Allocator) Oom!void

Recomputes stored hashes and rebuilds the key indexes. If the underlying keys have been modified directly, call this method to recompute the denormalized metadata necessary for the operation of the methods of this map that look up entries by key.

One use case for this is directly calling entries.resize() to grow the underlying storage, and then setting the keys and values directly without going through the methods of this map.

The time complexity of this operation is O(n).

Parameters

self: *Self

Source Code

Source code
pub fn reIndex(self: *Self, gpa: Allocator) Oom!void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call reIndexContext instead.");
    return self.reIndexContext(gpa, undefined);
}

FunctionreIndexContext[src]

pub fn reIndexContext(self: *Self, gpa: Allocator, ctx: Context) Oom!void

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn reIndexContext(self: *Self, gpa: Allocator, ctx: Context) Oom!void {
    // Recompute all hashes.
    if (store_hash) {
        for (self.keys(), self.entries.items(.hash)) |key, *hash| {
            const h = checkedHash(ctx, key);
            hash.* = h;
        }
    }
    try rebuildIndex(self, gpa, ctx);
}
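
Example

A sketch of the direct-mutation use case described above (illustrative; assumes the std.AutoArrayHashMapUnmanaged alias):

const std = @import("std");

test "mutate keys directly, then reIndex (sketch)" {
    const gpa = std.testing.allocator;

    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 1, 10);
    try map.put(gpa, 2, 20);

    // Rewrite keys in place; any stored hashes/indexes are now stale.
    for (map.keys()) |*key| key.* += 100;

    // Rebuild the denormalized metadata so lookups by key work again.
    try map.reIndex(gpa);
    try std.testing.expectEqual(@as(u32, 10), map.get(101).?);
}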

FunctionsetKey[src]

pub fn setKey(self: *Self, gpa: Allocator, index: usize, new_key: K) Oom!void

Modify an entry's key without reordering any entries.

Parameters

self: *Self
index: usize
new_key: K

Source Code

Source code
pub fn setKey(self: *Self, gpa: Allocator, index: usize, new_key: K) Oom!void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call setKeyContext instead.");
    return setKeyContext(self, gpa, index, new_key, undefined);
}

FunctionsetKeyContext[src]

pub fn setKeyContext(self: *Self, gpa: Allocator, index: usize, new_key: K, ctx: Context) Oom!void

Parameters

self: *Self
index: usize
new_key: K
ctx: Context

Source Code

Source code
pub fn setKeyContext(self: *Self, gpa: Allocator, index: usize, new_key: K, ctx: Context) Oom!void {
    const key_ptr = &self.entries.items(.key)[index];
    key_ptr.* = new_key;
    if (store_hash) self.entries.items(.hash)[index] = checkedHash(ctx, key_ptr.*);
    try rebuildIndex(self, gpa, undefined);
}

Functionsort[src]

pub inline fn sort(self: *Self, sort_ctx: anytype) void

Sorts the entries and then rebuilds the index. Uses a stable sorting algorithm. sort_ctx must have this method:

fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool

Parameters

self: *Self

Source Code

Source code
pub inline fn sort(self: *Self, sort_ctx: anytype) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call sortContext instead.");
    return sortContextInternal(self, .stable, sort_ctx, undefined);
}
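
Example

A sketch of a sort_ctx that orders entries by key (illustrative; assumes the std.AutoArrayHashMapUnmanaged alias). The context captures the live keys slice, which is permuted in step with the entries while sorting:

const std = @import("std");

test "sort entries by key (sketch)" {
    const gpa = std.testing.allocator;

    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 3, 30);
    try map.put(gpa, 1, 10);
    try map.put(gpa, 2, 20);

    const SortCtx = struct {
        keys: []const u32,
        pub fn lessThan(ctx: @This(), a_index: usize, b_index: usize) bool {
            return ctx.keys[a_index] < ctx.keys[b_index];
        }
    };
    map.sort(SortCtx{ .keys = map.keys() });

    try std.testing.expectEqualSlices(u32, &.{ 1, 2, 3 }, map.keys());
    try std.testing.expectEqual(@as(u32, 20), map.get(2).?); // lookups still work
}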

FunctionsortUnstable[src]

pub inline fn sortUnstable(self: *Self, sort_ctx: anytype) void

Sorts the entries and then rebuilds the index. Uses an unstable sorting algorithm. sort_ctx must have this method:

fn lessThan(ctx: @TypeOf(ctx), a_index: usize, b_index: usize) bool

Parameters

self: *Self

Source Code

Source code
pub inline fn sortUnstable(self: *Self, sort_ctx: anytype) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call sortUnstableContext instead.");
    return self.sortContextInternal(.unstable, sort_ctx, undefined);
}

FunctionsortContext[src]

pub inline fn sortContext(self: *Self, sort_ctx: anytype, ctx: Context) void

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub inline fn sortContext(self: *Self, sort_ctx: anytype, ctx: Context) void {
    return sortContextInternal(self, .stable, sort_ctx, ctx);
}

FunctionsortUnstableContext[src]

pub inline fn sortUnstableContext(self: *Self, sort_ctx: anytype, ctx: Context) void

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub inline fn sortUnstableContext(self: *Self, sort_ctx: anytype, ctx: Context) void {
    return sortContextInternal(self, .unstable, sort_ctx, ctx);
}

FunctionshrinkRetainingCapacity[src]

pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Keeps capacity the same.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. Any deinitialization of discarded entries must take place after calling this function.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkRetainingCapacity(self: *Self, new_len: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call shrinkRetainingCapacityContext instead.");
    return self.shrinkRetainingCapacityContext(new_len, undefined);
}

FunctionshrinkRetainingCapacityContext[src]

pub fn shrinkRetainingCapacityContext(self: *Self, new_len: usize, ctx: Context) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Keeps capacity the same.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. Any deinitialization of discarded entries must take place after calling this function.

Parameters

self: *Self
new_len: usize
ctx: Context

Source Code

Source code
pub fn shrinkRetainingCapacityContext(self: *Self, new_len: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    // Remove index entries from the new length onwards.
    // Explicitly choose to ONLY remove index entries and not the underlying array list
    // entries as we're going to remove them in the subsequent shrink call.
    if (self.index_header) |header| {
        var i: usize = new_len;
        while (i < self.entries.len) : (i += 1)
            self.removeFromIndexByIndex(i, if (store_hash) {} else ctx, header);
    }
    self.entries.shrinkRetainingCapacity(new_len);
}

FunctionshrinkAndFree[src]

pub fn shrinkAndFree(self: *Self, gpa: Allocator, new_len: usize) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Reduces allocated capacity.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. It is a bug to call this function if the discarded entries require deinitialization. For that use case, shrinkRetainingCapacity can be used instead.

Parameters

self: *Self
new_len: usize

Source Code

Source code
pub fn shrinkAndFree(self: *Self, gpa: Allocator, new_len: usize) void {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call shrinkAndFreeContext instead.");
    return self.shrinkAndFreeContext(gpa, new_len, undefined);
}

FunctionshrinkAndFreeContext[src]

pub fn shrinkAndFreeContext(self: *Self, gpa: Allocator, new_len: usize, ctx: Context) void

Shrinks the underlying Entry array to new_len elements and discards any associated index entries. Reduces allocated capacity.

Asserts the discarded entries remain initialized and capable of performing hash and equality checks. It is a bug to call this function if the discarded entries require deinitialization. For that use case, shrinkRetainingCapacityContext can be used instead.

Parameters

self: *Self
new_len: usize
ctx: Context

Source Code

Source code
pub fn shrinkAndFreeContext(self: *Self, gpa: Allocator, new_len: usize, ctx: Context) void {
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    // Remove index entries from the new length onwards.
    // Explicitly choose to ONLY remove index entries and not the underlying array list
    // entries as we're going to remove them in the subsequent shrink call.
    if (self.index_header) |header| {
        var i: usize = new_len;
        while (i < self.entries.len) : (i += 1)
            self.removeFromIndexByIndex(i, if (store_hash) {} else ctx, header);
    }
    self.entries.shrinkAndFree(gpa, new_len);
}

Functionpop[src]

pub fn pop(self: *Self) ?KV

Removes the most recently inserted entry from the hash map and returns it, or returns null if the map is empty.

Parameters

self: *Self

Source Code

Source code
pub fn pop(self: *Self) ?KV {
    if (@sizeOf(ByIndexContext) != 0)
        @compileError("Cannot infer context " ++ @typeName(Context) ++ ", call popContext instead.");
    return self.popContext(undefined);
}

FunctionpopContext[src]

pub fn popContext(self: *Self, ctx: Context) ?KV

Parameters

self: *Self
ctx: Context

Source Code

Source code
pub fn popContext(self: *Self, ctx: Context) ?KV {
    if (self.entries.len == 0) return null;
    self.pointer_stability.lock();
    defer self.pointer_stability.unlock();

    const item = self.entries.get(self.entries.len - 1);
    if (self.index_header) |header|
        self.removeFromIndexByIndex(self.entries.len - 1, if (store_hash) {} else ctx, header);
    self.entries.len -= 1;
    return .{
        .key = item.key,
        .value = item.value,
    };
}
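
Example

An illustrative sketch (assumes the std.AutoArrayHashMapUnmanaged alias):

const std = @import("std");

test "pop drains entries in reverse insertion order (sketch)" {
    const gpa = std.testing.allocator;

    var map: std.AutoArrayHashMapUnmanaged(u32, u32) = .empty;
    defer map.deinit(gpa);
    try map.put(gpa, 1, 10);
    try map.put(gpa, 2, 20);

    try std.testing.expectEqual(@as(u32, 2), map.pop().?.key);
    try std.testing.expectEqual(@as(u32, 1), map.pop().?.key);
    try std.testing.expect(map.pop() == null);
}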

Source Code

Source code
pub fn StringArrayHashMapUnmanaged(comptime V: type) type {
    return ArrayHashMapUnmanaged([]const u8, V, StringContext, true);
}

Type FunctionTreap[src]

Parameters

Key: type

Fields

root: ?*Node = null
prng: Prng = .{}

Functions

FunctiongetMin[src]

pub fn getMin(self: Self) ?*Node

Returns the smallest Node by key in the treap if there is one. Use getEntryForExisting() to replace/remove this Node from the treap.

Parameters

self: Self

Source Code

Source code
pub fn getMin(self: Self) ?*Node {
    if (self.root) |root| return extremeInSubtreeOnDirection(root, 0);
    return null;
}

FunctiongetMax[src]

pub fn getMax(self: Self) ?*Node

Returns the largest Node by key in the treap if there is one. Use getEntryForExisting() to replace/remove this Node from the treap.

Parameters

self: Self

Source Code

Source code
pub fn getMax(self: Self) ?*Node {
    if (self.root) |root| return extremeInSubtreeOnDirection(root, 1);
    return null;
}

FunctiongetEntryFor[src]

pub fn getEntryFor(self: *Self, key: Key) Entry

Look up the Entry for the given key in the treap. The Entry acts as a slot in the treap to insert/replace/remove the node associated with the key.

Parameters

self: *Self
key: Key

Source Code

Source code
pub fn getEntryFor(self: *Self, key: Key) Entry {
    var parent: ?*Node = undefined;
    const node = self.find(key, &parent);

    return Entry{
        .key = key,
        .treap = self,
        .node = node,
        .context = .{ .inserted_under = parent },
    };
}

FunctiongetEntryForExisting[src]

pub fn getEntryForExisting(self: *Self, node: *Node) Entry

Get an entry for a Node that currently exists in the treap. It is undefined behavior if the Node is not currently inserted in the treap. The Entry acts as a slot in the treap to insert/replace/remove the node associated with the key.

Parameters

self: *Self
node: *Node

Source Code

Source code
pub fn getEntryForExisting(self: *Self, node: *Node) Entry {
    assert(node.priority != 0);

    return Entry{
        .key = node.key,
        .treap = self,
        .node = node,
        .context = .{ .inserted_under = node.parent },
    };
}
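
Example

A sketch of the intrusive Entry workflow (illustrative; assumes std.Treap with std.math.order as the comparison function, and that Entry has a set method for inserting/replacing/removing the node, as described above):

const std = @import("std");

const U32Treap = std.Treap(u32, std.math.order);

test "insert and remove via Entry (sketch)" {
    var treap: U32Treap = .{};

    // The caller owns node memory; the treap only links nodes together.
    var node: U32Treap.Node = undefined;

    var entry = treap.getEntryFor(42);
    try std.testing.expect(entry.node == null); // key not present yet
    entry.set(&node); // insert

    try std.testing.expect(treap.getMin() == &node);
    try std.testing.expect(treap.getEntryFor(42).node == &node);

    var existing = treap.getEntryForExisting(&node);
    existing.set(null); // remove
    try std.testing.expect(treap.getEntryFor(42).node == null);
}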

FunctioninorderIterator[src]

pub fn inorderIterator(self: *Self) InorderIterator

Parameters

self: *Self

Source Code

Source code
pub fn inorderIterator(self: *Self) InorderIterator {
    return .{ .current = self.getMin() };
}

Source Code

Source code
pub fn Treap(comptime Key: type, comptime compareFn: anytype) type {
    return struct {
        const Self = @This();

        // Allow compareFn to be fn (anytype, anytype) anytype,
        // which enables the convenient use of std.math.order.
        fn compare(a: Key, b: Key) Order {
            return compareFn(a, b);
        }

        root: ?*Node = null,
        prng: Prng = .{},

        /// A customized pseudo-random number generator for the treap.
        /// This just helps reduce the memory size of the treap itself,
        /// as std.Random.DefaultPrng requires larger state (while, to be fair, producing better randomness).
        const Prng = struct {
            xorshift: usize = 0,

            fn random(self: *Prng, seed: usize) usize {
                // Lazily seed the prng state
                if (self.xorshift == 0) {
                    self.xorshift = seed;
                }

                // Since we're using usize, choose the shift amounts based on the integer's bit width.
                const shifts = switch (@bitSizeOf(usize)) {
                    64 => .{ 13, 7, 17 },
                    32 => .{ 13, 17, 5 },
                    16 => .{ 7, 9, 8 },
                    else => @compileError("platform not supported"),
                };

                self.xorshift ^= self.xorshift >> shifts[0];
                self.xorshift ^= self.xorshift << shifts[1];
                self.xorshift ^= self.xorshift >> shifts[2];

                assert(self.xorshift != 0);
                return self.xorshift;
            }
        };

        /// A Node represents an item or point in the treap with a uniquely associated key.
        pub const Node = struct {
            key: Key,
            priority: usize,
            parent: ?*Node,
            children: [2]?*Node,

            pub fn next(node: *Node) ?*Node {
                return nextOnDirection(node, 1);
            }
            pub fn prev(node: *Node) ?*Node {
                return nextOnDirection(node, 0);
            }
        };

        fn extremeInSubtreeOnDirection(node: *Node, direction: u1) *Node {
            var cur = node;
            while (cur.children[direction]) |next| cur = next;
            return cur;
        }

        fn nextOnDirection(node: *Node, direction: u1) ?*Node {
            if (node.children[direction]) |child| {
                return extremeInSubtreeOnDirection(child, direction ^ 1);
            }
            var cur = node;
            // Traverse upward until the edge from `parent` to `cur` is NOT on
            // `direction`, or equivalently, the edge from `cur` to `parent` IS
            // on `direction`, which makes `parent` the next node.
            while (true) {
                if (cur.parent) |parent| {
                    // If `parent -> cur` is NOT on `direction`, then
                    // `cur -> parent` IS on `direction`.
                    if (parent.children[direction] != cur) return parent;
                    cur = parent;
                } else {
                    return null;
                }
            }
        }

        /// Returns the smallest Node by key in the treap if there is one.
        /// Use `getEntryForExisting()` to replace/remove this Node from the treap.
        pub fn getMin(self: Self) ?*Node {
            if (self.root) |root| return extremeInSubtreeOnDirection(root, 0);
            return null;
        }

        /// Returns the largest Node by key in the treap if there is one.
        /// Use `getEntryForExisting()` to replace/remove this Node from the treap.
        pub fn getMax(self: Self) ?*Node {
            if (self.root) |root| return extremeInSubtreeOnDirection(root, 1);
            return null;
        }

        /// Lookup the Entry for the given key in the treap.
        /// The Entry acts as a slot in the treap to insert/replace/remove the node associated with the key.
        pub fn getEntryFor(self: *Self, key: Key) Entry {
            var parent: ?*Node = undefined;
            const node = self.find(key, &parent);

            return Entry{
                .key = key,
                .treap = self,
                .node = node,
                .context = .{ .inserted_under = parent },
            };
        }

        /// Get an entry for a Node that currently exists in the treap.
        /// It is undefined behavior if the Node is not currently inserted in the treap.
        /// The Entry acts as a slot in the treap to insert/replace/remove the node associated with the key.
        pub fn getEntryForExisting(self: *Self, node: *Node) Entry {
            assert(node.priority != 0);

            return Entry{
                .key = node.key,
                .treap = self,
                .node = node,
                .context = .{ .inserted_under = node.parent },
            };
        }

        /// An Entry represents a slot in the treap associated with a given key.
        pub const Entry = struct {
            /// The associated key for this entry.
            key: Key,
            /// A reference to the treap this entry is a part of.
            treap: *Self,
            /// The current node at this entry.
            node: ?*Node,
            /// The current state of the entry.
            context: union(enum) {
                /// A find() was called for this entry and the position in the treap is known.
                inserted_under: ?*Node,
                /// The entry's node was removed from the treap and a lookup must occur again for modification.
                removed,
            },

            /// Updates the Node at this Entry in the treap with the new node (null for deleting). `new_node`
            /// can have `undefined` content because the value will be initialized internally.
            pub fn set(self: *Entry, new_node: ?*Node) void {
                // Update the entry's node reference after updating the treap below.
                defer self.node = new_node;

                if (self.node) |old| {
                    if (new_node) |new| {
                        self.treap.replace(old, new);
                        return;
                    }

                    self.treap.remove(old);
                    self.context = .removed;
                    return;
                }

                if (new_node) |new| {
                    // A previous treap.remove() could have rebalanced the nodes,
                    // so when inserting after a removal we have to look up the parent again.
                    // This lookup shouldn't find a node because we have yet to insert it.
                    var parent: ?*Node = undefined;
                    switch (self.context) {
                        .inserted_under => |p| parent = p,
                        .removed => assert(self.treap.find(self.key, &parent) == null),
                    }

                    self.treap.insert(self.key, parent, new);
                    self.context = .{ .inserted_under = parent };
                }
            }
        };

        fn find(self: Self, key: Key, parent_ref: *?*Node) ?*Node {
            var node = self.root;
            parent_ref.* = null;

            // basic binary search while tracking the parent.
            while (node) |current| {
                const order = compare(key, current.key);
                if (order == .eq) break;

                parent_ref.* = current;
                node = current.children[@intFromBool(order == .gt)];
            }

            return node;
        }

        fn insert(self: *Self, key: Key, parent: ?*Node, node: *Node) void {
            // generate a random priority & prepare the node to be inserted into the tree
            node.key = key;
            node.priority = self.prng.random(@intFromPtr(node));
            node.parent = parent;
            node.children = [_]?*Node{ null, null };

            // point the parent at the new node
            const link = if (parent) |p| &p.children[@intFromBool(compare(key, p.key) == .gt)] else &self.root;
            assert(link.* == null);
            link.* = node;

            // rotate the node up into the tree to balance it according to its priority
            while (node.parent) |p| {
                if (p.priority <= node.priority) break;

                const is_right = p.children[1] == node;
                assert(p.children[@intFromBool(is_right)] == node);

                const rotate_right = !is_right;
                self.rotate(p, rotate_right);
            }
        }

        fn replace(self: *Self, old: *Node, new: *Node) void {
            // copy over the values from the old node
            new.key = old.key;
            new.priority = old.priority;
            new.parent = old.parent;
            new.children = old.children;

            // point the parent at the new node
            const link = if (old.parent) |p| &p.children[@intFromBool(p.children[1] == old)] else &self.root;
            assert(link.* == old);
            link.* = new;

            // point the children's parent at the new node
            for (old.children) |child_node| {
                const child = child_node orelse continue;
                assert(child.parent == old);
                child.parent = new;
            }
        }

        fn remove(self: *Self, node: *Node) void {
            // rotate the node down to be a leaf of the tree for removal, respecting priorities.
            while (node.children[0] orelse node.children[1]) |_| {
                self.rotate(node, rotate_right: {
                    const right = node.children[1] orelse break :rotate_right true;
                    const left = node.children[0] orelse break :rotate_right false;
                    break :rotate_right (left.priority < right.priority);
                });
            }

            // node is now a leaf; remove it by nulling out the parent's reference to it.
            const link = if (node.parent) |p| &p.children[@intFromBool(p.children[1] == node)] else &self.root;
            assert(link.* == node);
            link.* = null;

            // clean up after ourselves
            node.priority = 0;
            node.parent = null;
            node.children = [_]?*Node{ null, null };
        }

        fn rotate(self: *Self, node: *Node, right: bool) void {
            // if right, converts the following:
            //      parent -> (node (target YY adjacent) XX)
            //      parent -> (target YY (node adjacent XX))
            //
            // if left (!right), converts the mirror image:
            //      parent -> (node XX (target adjacent YY))
            //      parent -> (target (node XX adjacent) YY)
            const parent = node.parent;
            const target = node.children[@intFromBool(!right)] orelse unreachable;
            const adjacent = target.children[@intFromBool(right)];

            // rotate the children
            target.children[@intFromBool(right)] = node;
            node.children[@intFromBool(!right)] = adjacent;

            // rotate the parents
            node.parent = target;
            target.parent = parent;
            if (adjacent) |adj| adj.parent = node;

            // fix the parent link
            const link = if (parent) |p| &p.children[@intFromBool(p.children[1] == node)] else &self.root;
            assert(link.* == node);
            link.* = target;
        }

        /// Usage example:
        ///   var iter = treap.inorderIterator();
        ///   while (iter.next()) |node| {
        ///     ...
        ///   }
        pub const InorderIterator = struct {
            current: ?*Node,

            pub fn next(it: *InorderIterator) ?*Node {
                const current = it.current;
                it.current = if (current) |cur|
                    cur.next()
                else
                    null;
                return current;
            }
        };

        pub fn inorderIterator(self: *Self) InorderIterator {
            return .{ .current = self.getMin() };
        }
    };
}

Values

Constantoptions[src]

Stdlib-wide options that can be overridden by the root file.

Source Code

Source code
pub const options: Options = if (@hasDecl(root, "std_options")) root.std_options else .{}
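
For example, an application can override these defaults from its root source file (an illustrative sketch; the chosen field values are assumptions, not recommendations):

// main.zig (the root file)
const std = @import("std");

pub const std_options: std.Options = .{
    .log_level = .info,
};

pub fn main() void {
    std.log.info("std_options picked up from the root file", .{});
}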

Functions

Functiononce[src]

pub fn once(comptime f: fn () void) Once(f)

Parameters

f: fn () void

Source Code

Source code
pub fn once(comptime f: fn () void) Once(f) {
    return Once(f){};
}
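
A short usage sketch (illustrative, not from the official docs):

const std = @import("std");

var counter: u32 = 0;

fn incr() void {
    counter += 1;
}

var incr_once = std.once(incr);

test "once calls f exactly one time" {
    incr_once.call();
    incr_once.call();
    try std.testing.expectEqual(@as(u32, 1), counter);
}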

Source Code

Source code
pub const ArrayHashMap = array_hash_map.ArrayHashMap;
pub const ArrayHashMapUnmanaged = array_hash_map.ArrayHashMapUnmanaged;
pub const ArrayList = @import("array_list.zig").ArrayList;
pub const ArrayListAligned = @import("array_list.zig").ArrayListAligned;
pub const ArrayListAlignedUnmanaged = @import("array_list.zig").ArrayListAlignedUnmanaged;
pub const ArrayListUnmanaged = @import("array_list.zig").ArrayListUnmanaged;
pub const AutoArrayHashMap = array_hash_map.AutoArrayHashMap;
pub const AutoArrayHashMapUnmanaged = array_hash_map.AutoArrayHashMapUnmanaged;
pub const AutoHashMap = hash_map.AutoHashMap;
pub const AutoHashMapUnmanaged = hash_map.AutoHashMapUnmanaged;
pub const BitStack = @import("BitStack.zig");
pub const BoundedArray = @import("bounded_array.zig").BoundedArray;
pub const BoundedArrayAligned = @import("bounded_array.zig").BoundedArrayAligned;
pub const Build = @import("Build.zig");
pub const BufMap = @import("buf_map.zig").BufMap;
pub const BufSet = @import("buf_set.zig").BufSet;
pub const StaticStringMap = static_string_map.StaticStringMap;
pub const StaticStringMapWithEql = static_string_map.StaticStringMapWithEql;
pub const DoublyLinkedList = @import("linked_list.zig").DoublyLinkedList;
pub const DynLib = @import("dynamic_library.zig").DynLib;
pub const DynamicBitSet = bit_set.DynamicBitSet;
pub const DynamicBitSetUnmanaged = bit_set.DynamicBitSetUnmanaged;
pub const EnumArray = enums.EnumArray;
pub const EnumMap = enums.EnumMap;
pub const EnumSet = enums.EnumSet;
pub const HashMap = hash_map.HashMap;
pub const HashMapUnmanaged = hash_map.HashMapUnmanaged;
pub const MultiArrayList = @import("multi_array_list.zig").MultiArrayList;
pub const PriorityQueue = @import("priority_queue.zig").PriorityQueue;
pub const PriorityDequeue = @import("priority_dequeue.zig").PriorityDequeue;
pub const Progress = @import("Progress.zig");
pub const Random = @import("Random.zig");
pub const RingBuffer = @import("RingBuffer.zig");
pub const SegmentedList = @import("segmented_list.zig").SegmentedList;
pub const SemanticVersion = @import("SemanticVersion.zig");
pub const SinglyLinkedList = @import("linked_list.zig").SinglyLinkedList;
pub const StaticBitSet = bit_set.StaticBitSet;
pub const StringHashMap = hash_map.StringHashMap;
pub const StringHashMapUnmanaged = hash_map.StringHashMapUnmanaged;
pub const StringArrayHashMap = array_hash_map.StringArrayHashMap;
pub const StringArrayHashMapUnmanaged = array_hash_map.StringArrayHashMapUnmanaged;
pub const Target = @import("Target.zig");
pub const Thread = @import("Thread.zig");
pub const Treap = @import("treap.zig").Treap;
pub const Tz = tz.Tz;
pub const Uri = @import("Uri.zig");

pub const array_hash_map = @import("array_hash_map.zig");
pub const atomic = @import("atomic.zig");
pub const base64 = @import("base64.zig");
pub const bit_set = @import("bit_set.zig");
pub const builtin = @import("builtin.zig");
pub const c = @import("c.zig");
pub const coff = @import("coff.zig");
pub const compress = @import("compress.zig");
pub const static_string_map = @import("static_string_map.zig");
pub const crypto = @import("crypto.zig");
pub const debug = @import("debug.zig");
pub const dwarf = @import("dwarf.zig");
pub const elf = @import("elf.zig");
pub const enums = @import("enums.zig");
pub const fifo = @import("fifo.zig");
pub const fmt = @import("fmt.zig");
pub const fs = @import("fs.zig");
pub const gpu = @import("gpu.zig");
pub const hash = @import("hash.zig");
pub const hash_map = @import("hash_map.zig");
pub const heap = @import("heap.zig");
pub const http = @import("http.zig");
pub const io = @import("io.zig");
pub const json = @import("json.zig");
pub const leb = @import("leb128.zig");
pub const log = @import("log.zig");
pub const macho = @import("macho.zig");
pub const math = @import("math.zig");
pub const mem = @import("mem.zig");
pub const meta = @import("meta.zig");
pub const net = @import("net.zig");
pub const os = @import("os.zig");
pub const once = @import("once.zig").once;
pub const pdb = @import("pdb.zig");
pub const posix = @import("posix.zig");
pub const process = @import("process.zig");
pub const sort = @import("sort.zig");
pub const simd = @import("simd.zig");
pub const ascii = @import("ascii.zig");
pub const tar = @import("tar.zig");
pub const testing = @import("testing.zig");
pub const time = @import("time.zig");
pub const tz = @import("tz.zig");
pub const unicode = @import("unicode.zig");
pub const valgrind = @import("valgrind.zig");
pub const wasm = @import("wasm.zig");
pub const zig = @import("zig.zig");
pub const zip = @import("zip.zig");
pub const zon = @import("zon.zig");
pub const start = @import("start.zig");

const root = @import("root");

/// Stdlib-wide options that can be overridden by the root file.
pub const options: Options = if (@hasDecl(root, "std_options")) root.std_options else .{};

pub const Options = struct {
    enable_segfault_handler: bool = debug.default_enable_segfault_handler,

    /// Function used to implement `std.fs.cwd` for WASI.
    wasiCwd: fn () os.wasi.fd_t = fs.defaultWasiCwd,

    /// The current log level.
    log_level: log.Level = log.default_level,

    log_scope_levels: []const log.ScopeLevel = &.{},

    logFn: fn (
        comptime message_level: log.Level,
        comptime scope: @TypeOf(.enum_literal),
        comptime format: []const u8,
        args: anytype,
    ) void = log.defaultLog,

    /// Overrides `std.heap.page_size_min`.
    page_size_min: ?usize = null,
    /// Overrides `std.heap.page_size_max`.
    page_size_max: ?usize = null,
    /// Overrides default implementation for determining OS page size at runtime.
    queryPageSize: fn () usize = heap.defaultQueryPageSize,

    fmt_max_depth: usize = fmt.default_max_depth,

    cryptoRandomSeed: fn (buffer: []u8) void = @import("crypto/tlcsprng.zig").defaultRandomSeed,

    crypto_always_getrandom: bool = false,

    crypto_fork_safety: bool = true,

    /// By default, Zig disables SIGPIPE by setting a "no-op" handler for it.  Set this option
    /// to `true` to prevent that.
    ///
    /// Note that we use a "no-op" handler instead of SIG_IGN because it will not be inherited by
    /// any child process.
    ///
    /// SIGPIPE is triggered when a process attempts to write to a broken pipe. By default, the
    /// signal terminates the process without invoking the panic handler, so in many cases it's
    /// unclear why the process died.  By capturing SIGPIPE instead, functions that
    /// write to broken pipes will return the EPIPE error (error.BrokenPipe) and the program can handle
    /// it like any other error.
    keep_sigpipe: bool = false,

    /// By default, std.http.Client will support HTTPS connections.  Set this option to `true` to
    /// disable TLS support.
    ///
    /// This will likely reduce the size of the binary, but it will also make it impossible to
    /// make an HTTPS connection.
    http_disable_tls: bool = false,

    /// This enables `std.http.Client` to log ssl secrets to the file specified by the SSLKEYLOGFILE
    /// env var.  Creating such a log file allows other programs with access to that file to decrypt
    /// all `std.http.Client` traffic made by this program.
    http_enable_ssl_key_log_file: bool = @import("builtin").mode == .Debug,

    side_channels_mitigations: crypto.SideChannelsMitigations = crypto.default_side_channels_mitigations,
};

// This forces the start.zig file to be imported; the comptime logic inside that
// file decides whether to export the appropriate start symbols and call main.
comptime {
    _ = start;
}

test {
    testing.refAllDecls(@This());
}

comptime {
    debug.assert(@import("std") == @This()); // std lib tests require --zig-lib-dir
}