Struct vasi_sync::atomic_tls_map::AtomicTlsMap
pub struct AtomicTlsMap<const N: usize, V, H = BuildHasherDefault<FxHasher>>
where
    H: BuildHasher,
{ /* private fields */ }
A lockless, no_std, no-alloc hash table.
Allows insertion and removal from an immutable reference, but does not support getting mutable references to internal values, and requires that a particular key is only ever accessed from the thread that inserted it, until that thread removes it.
Uses linear probing, and doesn’t support resizing. Lookup is Θ(1) (average case) if the key is present and hasn’t been forced far away from its “home” location, but is O(N) in the worst case. Lookup of a non-present key is always O(N); we need to scan the whole table.
This is designed mostly for use by shadow_shim::tls to help implement thread-local storage.
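A minimal usage sketch (not taken from the crate’s documentation), assuming Ref<'_, V> dereferences to V and substituting std’s RandomState for the default FxHasher:

    use core::num::NonZeroUsize;
    use std::collections::hash_map::RandomState;
    use vasi_sync::atomic_tls_map::AtomicTlsMap;

    // A fixed-capacity table with N = 16 slots.
    let map: AtomicTlsMap<16, u64, RandomState> =
        AtomicTlsMap::new_with_hasher(RandomState::new());

    let key = NonZeroUsize::new(1).unwrap();

    // SAFETY: nothing has been inserted under `key` by another thread.
    assert!(unsafe { map.get(key) }.is_none());

    // SAFETY: `key` is only ever used from this thread.
    let v = unsafe { map.get_or_insert_with(key, || 42u64) };
    assert_eq!(*v, 42); // assumes `Ref<'_, V>` derefs to `V`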
Implementations§
impl<const N: usize, V, H> AtomicTlsMap<N, V, H>
where
    H: BuildHasher,
pub fn new_with_hasher(build_hasher: H) -> Self
pub unsafe fn get(&self, key: NonZeroUsize) -> Option<Ref<'_, V>>
§Safety
The value at key, if any, must have been inserted by the current thread.
pub unsafe fn get_or_insert_with(
    &self,
    key: NonZeroUsize,
    init: impl FnOnce() -> V,
) -> Ref<'_, V>
Retrieve the value associated with key, initializing it with init if key is not already present.
Panics if the table is full and key is not already present.
§Safety
There must not be a value at key that was inserted by a different thread.
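As an illustration of the intended per-thread, lazy-initialization pattern (a hypothetical helper, not part of the crate): because the map never hands out mutable references, mutation goes through interior mutability such as Cell.

    use core::cell::Cell;
    use core::num::NonZeroUsize;
    use std::collections::hash_map::RandomState;
    use vasi_sync::atomic_tls_map::AtomicTlsMap;

    // Hypothetical helper: `key` is derived from the calling thread's ID and
    // is never used by any other thread.
    fn bump_counter(
        map: &AtomicTlsMap<64, Cell<u64>, RandomState>,
        key: NonZeroUsize,
    ) -> u64 {
        // SAFETY: `key` is unique to the calling thread, so no other thread
        // can have inserted a value under it.
        let cell = unsafe { map.get_or_insert_with(key, || Cell::new(0)) };
        cell.set(cell.get() + 1);
        cell.get()
    }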
pub unsafe fn remove(&self, key: NonZeroUsize) -> Option<V>
Removes the value for key, if any. Panics if this thread has any outstanding references for key.
§Safety
The value at key, if any, must have been inserted by the current thread.
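A sketch of the drop-before-remove requirement (same assumptions as the earlier sketch, i.e. Ref releases its claim when dropped):

    use core::num::NonZeroUsize;
    use std::collections::hash_map::RandomState;
    use vasi_sync::atomic_tls_map::AtomicTlsMap;

    let map: AtomicTlsMap<16, u64, RandomState> =
        AtomicTlsMap::new_with_hasher(RandomState::new());
    let key = NonZeroUsize::new(7).unwrap();

    // SAFETY: this thread is the only user of `key`.
    let r = unsafe { map.get_or_insert_with(key, || 1u64) };
    // Calling `remove(key)` here would panic: `r` is still outstanding.
    drop(r);
    // SAFETY: `key` was inserted by this thread and no references remain.
    assert_eq!(unsafe { map.remove(key) }, Some(1));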
pub unsafe fn forget_all(&self)
Resets metadata in the map to mark all entries vacant, without dropping the values.
Intended for use after fork, after which entries belonging to other threads are not guaranteed to be in any consistent state (so can’t be dropped), but the threads owning those entries no longer exist in the child, so the entries can be safely overwritten.
§Safety
Any outstanding references from self (e.g. obtained via Self::get) must not be accessed or dropped again. For example, references held by other threads before fork are OK, since those threads do not exist in the current process and so will not access the child’s copy of this table. References that have been forgotten via core::mem::forget are also OK.
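A sketch of the kind of situation forget_all is meant for: a hypothetical child-side hook run right after fork (e.g. one registered via pthread_atfork).

    use std::hash::BuildHasher;
    use vasi_sync::atomic_tls_map::AtomicTlsMap;

    // Hypothetical hook run in the child immediately after fork(): other
    // threads' entries are in an unknown state, and those threads no longer
    // exist here, so mark every slot vacant without running destructors.
    fn atfork_child<const N: usize, V, H: BuildHasher>(map: &AtomicTlsMap<N, V, H>) {
        // SAFETY: no `Ref` obtained before the fork is accessed or dropped
        // in the child process.
        unsafe { map.forget_all() };
    }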
impl<const N: usize, V, H> AtomicTlsMap<N, V, H>
where
    H: BuildHasher + Default,
Trait Implementations§
impl<const N: usize, V, H> Drop for AtomicTlsMap<N, V, H>
where
    H: BuildHasher,
impl<const N: usize, V, H> Sync for AtomicTlsMap<N, V, H>
Overrides the default of UnsafeCell, Cell, and V not being Sync. We synchronize access to these, in part by requiring users to guarantee that a given key is never accessed from multiple threads in parallel. Likewise, V only needs to be Send.
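A sketch of why sharing the map across threads is sound in practice: each thread sticks to its own key (scoped threads and RandomState used here only for brevity; same Ref-deref assumption as above).

    use core::num::NonZeroUsize;
    use std::collections::hash_map::RandomState;
    use vasi_sync::atomic_tls_map::AtomicTlsMap;

    let map: AtomicTlsMap<8, u64, RandomState> =
        AtomicTlsMap::new_with_hasher(RandomState::new());

    std::thread::scope(|s| {
        for i in 1..=4usize {
            let map = &map;
            s.spawn(move || {
                let key = NonZeroUsize::new(i).unwrap();
                // SAFETY: each thread uses a distinct key, so no key is ever
                // touched by more than one thread.
                let v = unsafe { map.get_or_insert_with(key, || i as u64) };
                assert_eq!(*v, i as u64);
                drop(v);
                // SAFETY: this thread inserted `key` and holds no references.
                unsafe { map.remove(key) };
            });
        }
    });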