#[repr(C, align(8))]
pub struct AtomicPtr<T> { /* private fields */ }
A raw pointer type which can be safely shared between threads.
This type has the same in-memory representation as a *mut T.
If the compiler and the platform support atomic loads and stores of pointers, this type is a wrapper for the standard library’s AtomicPtr. If the platform supports it but the compiler does not, atomic operations are implemented using inline assembly.
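For example (a minimal sketch assuming the std thread API is available), one thread can publish a heap allocation through a shared AtomicPtr and another can take ownership of it:
use portable_atomic::{AtomicPtr, Ordering};
use std::thread;

static DATA: AtomicPtr<u32> = AtomicPtr::new(core::ptr::null_mut());

let producer = thread::spawn(|| {
    // Publish a heap allocation; Release pairs with the Acquire load below.
    DATA.store(Box::into_raw(Box::new(42u32)), Ordering::Release);
});
producer.join().unwrap();

let ptr = DATA.load(Ordering::Acquire);
assert!(!ptr.is_null());
// SAFETY: the pointer came from Box::into_raw and is reclaimed exactly once.
assert_eq!(unsafe { *Box::from_raw(ptr) }, 42);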
§Implementations
impl<T> AtomicPtr<T>
pub const fn new(p: *mut T) -> Self
Creates a new AtomicPtr.
§Examples
use portable_atomic::AtomicPtr;
let ptr = &mut 5;
let atomic_ptr = AtomicPtr::new(ptr);
pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Self
Creates a new AtomicPtr from a pointer.
This is const fn on Rust 1.83+.
§Safety
- ptr must be aligned to align_of::<AtomicPtr<T>>() (note that on some platforms this can be bigger than align_of::<*mut T>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- If this atomic type is lock-free, non-atomic accesses to the value behind ptr must have a happens-before relationship with atomic accesses via the returned value (or vice-versa).
  - In other words, time periods where the value is accessed atomically may not overlap with periods where the value is accessed non-atomically.
  - This requirement is trivially satisfied if ptr is never used non-atomically for the duration of lifetime 'a. Most use cases should be able to follow this guideline.
  - This requirement is also trivially satisfied if all accesses (atomic or not) are done from the same thread.
- If this atomic type is not lock-free:
  - Any accesses to the value behind ptr must have a happens-before relationship with accesses via the returned value (or vice-versa).
  - Any concurrent accesses to the value behind ptr for the duration of lifetime 'a must be compatible with operations performed by this atomic type.
- This method must not be used to create overlapping or mixed-size atomic accesses, as these are not supported by the memory model.
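A minimal sketch of a sound use: the location is valid for the whole borrow and only ever accessed through the returned reference while it is alive:
use portable_atomic::{AtomicPtr, Ordering};

let mut value = 123u8;
let mut raw: *mut u8 = &mut value;
// SAFETY: `raw` is valid for reads and writes and is only accessed through
// `atomic` below; we assume the local is sufficiently aligned for
// AtomicPtr<u8> on this target.
let atomic = unsafe { AtomicPtr::<u8>::from_ptr(&mut raw) };
assert!(core::ptr::eq(atomic.load(Ordering::Relaxed), &value));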
pub fn is_lock_free() -> bool
Returns true if operations on values of this type are lock-free.
If the compiler or the platform doesn’t support the necessary atomic instructions, global locks for every potentially concurrent atomic operation will be used.
§Examples
use portable_atomic::AtomicPtr;
let is_lock_free = AtomicPtr::<()>::is_lock_free();
pub const fn is_always_lock_free() -> bool
Returns true if operations on values of this type are lock-free.
If the compiler or the platform doesn’t support the necessary atomic instructions, global locks for every potentially concurrent atomic operation will be used.
Note: If the atomic operation relies on dynamic CPU feature detection, this type may be lock-free even if the function returns false.
§Examples
use portable_atomic::AtomicPtr;
const IS_ALWAYS_LOCK_FREE: bool = AtomicPtr::<()>::is_always_lock_free();
pub const fn get_mut(&mut self) -> &mut *mut T
Returns a mutable reference to the underlying pointer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
This is const fn on Rust 1.83+.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let mut data = 10;
let mut atomic_ptr = AtomicPtr::new(&mut data);
let mut other_data = 5;
*atomic_ptr.get_mut() = &mut other_data;
assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
pub const fn into_inner(self) -> *mut T
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are concurrently accessing the atomic data.
This is const fn on Rust 1.56+.
§Examples
use portable_atomic::AtomicPtr;
let mut data = 5;
let atomic_ptr = AtomicPtr::new(&mut data);
assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
pub fn load(&self, order: Ordering) -> *mut T
Loads a value from the pointer.
load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.
§Panics
Panics if order is Release or AcqRel.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let ptr = &mut 5;
let some_ptr = AtomicPtr::new(ptr);
let value = some_ptr.load(Ordering::Relaxed);
pub fn store(&self, ptr: *mut T, order: Ordering)
Stores a value into the pointer.
store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.
§Panics
Panics if order is Acquire or AcqRel.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let ptr = &mut 5;
let some_ptr = AtomicPtr::new(ptr);
let other_ptr = &mut 10;
some_ptr.store(other_ptr, Ordering::Relaxed);
pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T
Stores a value into the pointer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let ptr = &mut 5;
let some_ptr = AtomicPtr::new(ptr);
let other_ptr = &mut 10;
let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
pub fn compare_exchange(
    &self,
    current: *mut T,
    new: *mut T,
    success: Ordering,
    failure: Ordering,
) -> Result<*mut T, *mut T>
Stores a value into the pointer if the current value is the same as the current argument.
The return value is a result indicating whether the new value was written and containing the previous value. On success this value is guaranteed to be equal to current.
compare_exchange takes two Ordering arguments to describe the memory ordering of this operation. success describes the required ordering for the read-modify-write operation that takes place if the comparison with current succeeds. failure describes the required ordering for the load operation that takes place when the comparison fails. Using Acquire as success ordering makes the store part of this operation Relaxed, and using Release makes the successful load Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
§Panics
Panics if failure is Release or AcqRel.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let ptr = &mut 5;
let some_ptr = AtomicPtr::new(ptr);
let other_ptr = &mut 10;
let value = some_ptr.compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::Relaxed);
pub fn compare_exchange_weak(
    &self,
    current: *mut T,
    new: *mut T,
    success: Ordering,
    failure: Ordering,
) -> Result<*mut T, *mut T>
Stores a value into the pointer if the current value is the same as the current argument.
Unlike AtomicPtr::compare_exchange, this function is allowed to spuriously fail even when the comparison succeeds, which can result in more efficient code on some platforms. The return value is a result indicating whether the new value was written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory ordering of this operation. success describes the required ordering for the read-modify-write operation that takes place if the comparison with current succeeds. failure describes the required ordering for the load operation that takes place when the comparison fails. Using Acquire as success ordering makes the store part of this operation Relaxed, and using Release makes the successful load Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed.
§Panics
Panics if failure is Release or AcqRel.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let some_ptr = AtomicPtr::new(&mut 5);
let new = &mut 10;
let mut old = some_ptr.load(Ordering::Relaxed);
loop {
    match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
        Ok(_) => break,
        Err(x) => old = x,
    }
}
pub fn fetch_update<F>(
    &self,
    set_order: Ordering,
    fetch_order: Ordering,
    f: F,
) -> Result<*mut T, *mut T>
Fetches the value, and applies a function to it that returns an optional new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else Err(previous_value).
Note: This may call the function multiple times if the value has been changed from other threads in the meantime, as long as the function returns Some(_), but the function will have been applied only once to the stored value.
fetch_update takes two Ordering arguments to describe the memory ordering of this operation. The first describes the required ordering for when the operation finally succeeds while the second describes the required ordering for loads. These correspond to the success and failure orderings of compare_exchange respectively.
Using Acquire as success ordering makes the store part of this operation Relaxed, and using Release makes the final successful load Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed.
§Panics
Panics if fetch_order is Release or AcqRel.
§Considerations
This method is not magic; it is not provided by the hardware. It is implemented in terms of compare_exchange_weak, and suffers from the same drawbacks. In particular, this method will not circumvent the ABA Problem.
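To make the relationship concrete, here is a sketch of the documented semantics as a hand-written compare_exchange_weak loop (not necessarily the crate’s actual implementation):
use portable_atomic::{AtomicPtr, Ordering};

fn fetch_update_sketch<T>(
    a: &AtomicPtr<T>,
    set_order: Ordering,
    fetch_order: Ordering,
    mut f: impl FnMut(*mut T) -> Option<*mut T>,
) -> Result<*mut T, *mut T> {
    // Retry until the update lands or the closure declines by returning None.
    let mut prev = a.load(fetch_order);
    while let Some(next) = f(prev) {
        match a.compare_exchange_weak(prev, next, set_order, fetch_order) {
            Ok(prev) => return Ok(prev),
            Err(observed) => prev = observed,
        }
    }
    Err(prev)
}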
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let ptr: *mut _ = &mut 5;
let some_ptr = AtomicPtr::new(ptr);
let new: *mut _ = &mut 10;
assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
    if x == ptr {
        Some(new)
    } else {
        None
    }
});
assert_eq!(result, Ok(ptr));
assert_eq!(some_ptr.load(Ordering::SeqCst), new);
pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
Offsets the pointer’s address by adding val (in units of T), returning the previous pointer.
This is equivalent to using wrapping_add to atomically perform the equivalent of ptr = ptr.wrapping_add(val).
This method operates in units of T, which means that it cannot be used to offset the pointer by an amount which is not a multiple of size_of::<T>(). This can sometimes be inconvenient, as you may want to work with a deliberately misaligned pointer. In such cases, you may use the fetch_byte_add method instead.
fetch_ptr_add takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
// Note: units of `size_of::<i64>()`.
assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
Offsets the pointer’s address by subtracting val (in units of T), returning the previous pointer.
This is equivalent to using wrapping_sub to atomically perform the equivalent of ptr = ptr.wrapping_sub(val).
This method operates in units of T, which means that it cannot be used to offset the pointer by an amount which is not a multiple of size_of::<T>(). This can sometimes be inconvenient, as you may want to work with a deliberately misaligned pointer. In such cases, you may use the fetch_byte_sub method instead.
fetch_ptr_sub takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let array = [1i32, 2i32];
let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
assert!(core::ptr::eq(atom.fetch_ptr_sub(1, Ordering::Relaxed), &array[1]));
assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
Offsets the pointer’s address by adding val bytes, returning the previous pointer.
This is equivalent to using wrapping_add and cast to atomically perform ptr = ptr.cast::<u8>().wrapping_add(val).cast::<T>().
fetch_byte_add takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
// Note: in units of bytes, not `size_of::<i64>()`.
assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
Offsets the pointer’s address by subtracting val bytes, returning the previous pointer.
This is equivalent to using wrapping_sub and cast to atomically perform ptr = ptr.cast::<u8>().wrapping_sub(val).cast::<T>().
fetch_byte_sub takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let atom = AtomicPtr::<i64>::new(sptr::invalid_mut(1));
assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
Performs a bitwise “or” operation on the address of the current pointer, and the argument val, and stores a pointer with provenance of the current pointer and the resulting address.
This is equivalent to using map_addr to atomically perform ptr = ptr.map_addr(|a| a | val). This can be used in tagged pointer schemes to atomically set tag bits.
Caveat: This operation returns the previous value. To compute the stored value without losing provenance, you may use map_addr. For example: a.fetch_or(val).map_addr(|a| a | val).
fetch_or takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
This API and its claimed semantics are part of the Strict Provenance experiment, see the module documentation for ptr for details.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let pointer = &mut 3i64 as *mut i64;
let atom = AtomicPtr::<i64>::new(pointer);
// Tag the bottom bit of the pointer.
assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
// Extract and untag.
let tagged = atom.load(Ordering::Relaxed);
assert_eq!(tagged.addr() & 1, 1);
assert_eq!(tagged.map_addr(|p| p & !1), pointer);
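Spelling out the caveat above with the ordering argument included, the stored (tagged) value can be recomputed from the returned previous value (a sketch):
use portable_atomic::{AtomicPtr, Ordering};

let atom = AtomicPtr::new(&mut 3i64 as *mut i64);
// fetch_or returns the previous pointer; re-apply the mask to recover the
// stored value without losing provenance.
let stored = atom.fetch_or(1, Ordering::Relaxed).map_addr(|a| a | 1);
assert_eq!(stored.addr() & 1, 1);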
pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
Performs a bitwise “and” operation on the address of the current pointer, and the argument val, and stores a pointer with provenance of the current pointer and the resulting address.
This is equivalent to using map_addr to atomically perform ptr = ptr.map_addr(|a| a & val). This can be used in tagged pointer schemes to atomically unset tag bits.
Caveat: This operation returns the previous value. To compute the stored value without losing provenance, you may use map_addr. For example: a.fetch_and(val).map_addr(|a| a & val).
fetch_and takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
This API and its claimed semantics are part of the Strict Provenance experiment, see the module documentation for ptr for details.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let pointer = &mut 3i64 as *mut i64;
// A tagged pointer
let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
// Untag, and extract the previously tagged pointer.
let untagged = atom.fetch_and(!1, Ordering::Relaxed).map_addr(|a| a & !1);
assert_eq!(untagged, pointer);
pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
Performs a bitwise “xor” operation on the address of the current pointer, and the argument val, and stores a pointer with provenance of the current pointer and the resulting address.
This is equivalent to using map_addr to atomically perform ptr = ptr.map_addr(|a| a ^ val). This can be used in tagged pointer schemes to atomically toggle tag bits.
Caveat: This operation returns the previous value. To compute the stored value without losing provenance, you may use map_addr. For example: a.fetch_xor(val).map_addr(|a| a ^ val).
fetch_xor takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
This API and its claimed semantics are part of the Strict Provenance experiment, see the module documentation for ptr for details.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let pointer = &mut 3i64 as *mut i64;
let atom = AtomicPtr::<i64>::new(pointer);
// Toggle a tag bit on the pointer.
atom.fetch_xor(1, Ordering::Relaxed);
assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
Sets the bit at the specified bit-position to 1.
Returns true if the specified bit was previously set to 1.
bit_set takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
This corresponds to x86’s lock bts instruction, which the implementation uses on x86/x86_64.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let pointer = &mut 3i64 as *mut i64;
let atom = AtomicPtr::<i64>::new(pointer);
// Tag the bottom bit of the pointer.
assert!(!atom.bit_set(0, Ordering::Relaxed));
// Extract and untag.
let tagged = atom.load(Ordering::Relaxed);
assert_eq!(tagged.addr() & 1, 1);
assert_eq!(tagged.map_addr(|p| p & !1), pointer);
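Semantically, bit_set behaves like a fetch_or of the corresponding one-bit mask, as this sketch illustrates:
use portable_atomic::{AtomicPtr, Ordering};

let pointer = &mut 3i64 as *mut i64;
let atom = AtomicPtr::<i64>::new(pointer);
let mask = 1usize << 0;
// Behaves like `atom.bit_set(0, Ordering::Relaxed)`.
let prev_set = (atom.fetch_or(mask, Ordering::Relaxed).addr() & mask) != 0;
assert!(!prev_set);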
pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
Clears the bit at the specified bit-position to 0.
Returns true if the specified bit was previously set to 1.
bit_clear takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
This corresponds to x86’s lock btr instruction, which the implementation uses on x86/x86_64.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let pointer = &mut 3i64 as *mut i64;
// A tagged pointer
let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
assert!(atom.bit_set(0, Ordering::Relaxed));
// Untag
assert!(atom.bit_clear(0, Ordering::Relaxed));
pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
Toggles the bit at the specified bit-position.
Returns true if the specified bit was previously set to 1.
bit_toggle takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.
This corresponds to x86’s lock btc instruction, which the implementation uses on x86/x86_64.
§Examples
use portable_atomic::{AtomicPtr, Ordering};
let pointer = &mut 3i64 as *mut i64;
let atom = AtomicPtr::<i64>::new(pointer);
// Toggle a tag bit on the pointer.
atom.bit_toggle(0, Ordering::Relaxed);
assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
pub const fn as_ptr(&self) -> *mut *mut T
Returns a mutable pointer to the underlying pointer.
Returning an *mut pointer from a shared reference to this atomic is safe because the atomic types work with interior mutability. Any use of the returned raw pointer requires an unsafe block and has to uphold the safety requirements. If there is concurrent access, note the following additional safety requirements:
- If this atomic type is lock-free, any concurrent operations on it must be atomic.
- Otherwise, any concurrent operations on it must be compatible with operations performed by this atomic type.
This is const fn on Rust 1.58+.
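A minimal sketch of reading through the returned raw pointer when there is no concurrent access:
use portable_atomic::AtomicPtr;

let atomic = AtomicPtr::new(core::ptr::null_mut::<u8>());
let inner: *mut *mut u8 = atomic.as_ptr();
// SAFETY: no other thread can access `atomic` here, so this non-atomic read
// cannot race with any atomic operation on it.
assert!(unsafe { (*inner).is_null() });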