Unverified commit 5fd8f876 authored by Hernando Castano, committed by GitHub

Add Bump Allocator (#831)

* Add bump allocator skeleton

* Implement `alloc` for our bump allocator

* Make the allocator usable globally

* Remove unused `init()` function

* Nightly RustFmt

* Use global mutable static instead of Mutex

This will reduce our use of dependencies, which should in turn
reduce our final Wasm binary size.

Also, apparently spinlocks aren't actually all that efficient.
See: https://matklad.github.io/2020/01/02/spinlocks-considered-harmful.html
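Since ink! contracts execute single-threaded inside the Wasm VM, a plain mutable static works where a `Mutex` (or spinlock) would otherwise be needed. A minimal sketch of the pattern, with hypothetical names (`NEXT`, `bump`) that are not part of the crate:

```rust
// Sketch: a global bump pointer with no locking. This is sound only because
// the surrounding environment is single-threaded (as Wasm contract execution
// is); with real threads this `static mut` access would be a data race.
static mut NEXT: usize = 0;

/// Hypothetical helper: hand out `size` bytes by bumping the global offset.
unsafe fn bump(size: usize) -> usize {
    let start = NEXT;
    NEXT += size;
    start
}

fn main() {
    let a = unsafe { bump(8) };
    let b = unsafe { bump(4) };
    // Allocations are laid out back to back, no Mutex or spinlock involved.
    assert_eq!(b, a + 8);
}
```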

* Stop assuming that memory is allocated at address `0`

* Remove semicolon

* Use correct address when checking if we're OOM

* Remove unnecessary unsafe block

* Return null pointers instead of panicking

Panicking in the global allocator is considered undefined behaviour.
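The rule being followed here: `GlobalAlloc` methods must not unwind, so allocation failure is reported by returning a null pointer instead. A self-contained sketch of that contract (the `TinyArena` type and its fixed 1 KiB capacity are illustrative, not the crate's allocator, and alignment handling is elided):

```rust
use core::alloc::{GlobalAlloc, Layout};
use core::ptr;

const CAPACITY: usize = 1024;
static mut ARENA: [u8; CAPACITY] = [0; CAPACITY];
static mut OFFSET: usize = 0;

/// Illustrative fixed-size arena that reports exhaustion with a null pointer,
/// never by panicking.
struct TinyArena;

unsafe impl GlobalAlloc for TinyArena {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let size = layout.pad_to_align().size();
        if OFFSET + size > CAPACITY {
            // Out of memory: signal failure with null rather than panicking,
            // since unwinding out of a global allocator is undefined behaviour.
            return ptr::null_mut();
        }
        let p = ptr::addr_of_mut!(ARENA).cast::<u8>().add(OFFSET);
        OFFSET += size;
        p
    }

    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {}
}

fn main() {
    let a = TinyArena;
    let layout = Layout::from_size_align(512, 1).unwrap();
    // Two 512-byte allocations fill the arena; the third must yield null.
    assert!(!unsafe { a.alloc(layout) }.is_null());
    assert!(!unsafe { a.alloc(layout) }.is_null());
    assert!(unsafe { a.alloc(layout) }.is_null());
}
```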

* Use `checked_add` when getting upper limit memory address

* Use `MAX` associated const instead of `max_value`

* Inline `GlobalAlloc` methods

* Turns out I can't early return from `unwrap_or_else` 🤦

* Rollback my build script hacks

* Add initialization function to allocator

* Add some docs

* Make the bump allocator the default allocator

* Allow bump allocator to be tested on Unix platforms

* Remove unnecessary `checked_add`

* Add error messages to unrecoverable errors

* Remove `init` function from allocator

Instead we now request a new page whenever we need it, regardless
of whether or not it's the first time we're allocating memory.

* Try switching from `mmap` to `malloc` when in `std` env

* Fix `is_null()` check when requesting memory

* Stop requesting real memory for `std` testing

Instead this tracks pages internally in the same way that the Wasm
environment would. This means we can test our allocator implementation
instead of fighting with `libc`.

* Gate the global bump allocator when not in `std`

* Allow for multi-page allocations

* Update the module documentation

* Override `alloc_zeroed` implementation

* Forgot to update Wasm target function name

* Appease the spellchecker

* Use proper English I guess

* Get rid of `page_requests` field

* Explicitly allow test builds to use test implementation

* Add link to zero'd Wasm memory reference

* Check that our initial pointer is 0 in a test

* Add `cfg_if` branch for non-test, `std` enabled builds

* Simplify `cfg_if` statement
parent 4ff763c9
Pipeline #148269 passed with stages in 31 minutes and 2 seconds
@@ -15,8 +15,10 @@ categories = ["no-std", "embedded"]
 include = ["Cargo.toml", "src/**/*.rs", "README.md", "LICENSE"]
 
 [dependencies]
-wee_alloc = { version = "0.4", default-features = false }
+cfg-if = "1.0"
+wee_alloc = { version = "0.4", default-features = false, optional = true }
 
 [features]
 default = ["std"]
 std = []
+wee-alloc = ["wee_alloc"]
// Copyright 2018-2021 Parity Technologies (UK) Ltd.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! A simple bump allocator.
//!
//! Its goal is to have a much smaller footprint than the admittedly more full-featured `wee_alloc`
//! allocator, which is currently being used by ink! smart contracts.
//!
//! The heap which is used by this allocator is built from pages of Wasm memory (each page is
//! `64KiB`). We request new pages of memory as needed until we run out of memory, at which point
//! we will crash with an `OOM` error instead of freeing any memory.

use core::alloc::{
    GlobalAlloc,
    Layout,
};

/// A page in Wasm is `64KiB`.
const PAGE_SIZE: usize = 64 * 1024;

static mut INNER: InnerAlloc = InnerAlloc::new();
/// A bump allocator suitable for use in a Wasm environment.
pub struct BumpAllocator;

unsafe impl GlobalAlloc for BumpAllocator {
    #[inline]
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        match INNER.alloc(layout) {
            Some(start) => start as *mut u8,
            None => core::ptr::null_mut(),
        }
    }

    #[inline]
    unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8 {
        // A new page in Wasm is guaranteed to already be zero initialized, so we can just use our
        // regular `alloc` call here and save a bit of work.
        //
        // See: https://webassembly.github.io/spec/core/exec/modules.html#growing-memories
        self.alloc(layout)
    }

    #[inline]
    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {}
}
#[cfg_attr(feature = "std", derive(Debug, Copy, Clone))]
struct InnerAlloc {
    /// Points to the start of the next available allocation.
    next: usize,

    /// The address of the upper limit of our heap.
    upper_limit: usize,
}

impl InnerAlloc {
    const fn new() -> Self {
        Self {
            next: 0,
            upper_limit: 0,
        }
    }

    cfg_if::cfg_if! {
        if #[cfg(test)] {
            /// Request a `pages` number of page sized sections of Wasm memory. Each page is
            /// `64KiB` in size.
            ///
            /// Returns `None` if a page is not available.
            ///
            /// This implementation is only meant to be used for testing, since we cannot (easily)
            /// test the `wasm32` implementation.
            fn request_pages(&mut self, _pages: usize) -> Option<usize> {
                Some(self.upper_limit)
            }
        } else if #[cfg(feature = "std")] {
            fn request_pages(&mut self, _pages: usize) -> Option<usize> {
                unreachable!(
                    "This branch is only used to keep the compiler happy when building tests, and
                    should never actually be called outside of a test run."
                )
            }
        } else if #[cfg(target_arch = "wasm32")] {
            /// Request a `pages` number of pages of Wasm memory. Each page is `64KiB` in size.
            ///
            /// Returns `None` if a page is not available.
            fn request_pages(&mut self, pages: usize) -> Option<usize> {
                let prev_page = core::arch::wasm32::memory_grow(0, pages);
                if prev_page == usize::MAX {
                    return None;
                }
                prev_page.checked_mul(PAGE_SIZE)
            }
        } else {
            compile_error! {
                "ink! only supports compilation as `std` or `no_std` + `wasm32-unknown`"
            }
        }
    }

    /// Tries to allocate enough memory on the heap for the given `Layout`. If there is not enough
    /// room on the heap it'll try and grow it by a page.
    ///
    /// Note: This implementation results in internal fragmentation when allocating across pages.
    fn alloc(&mut self, layout: Layout) -> Option<usize> {
        let alloc_start = self.next;
        let aligned_size = layout.pad_to_align().size();
        let alloc_end = alloc_start.checked_add(aligned_size)?;

        if alloc_end > self.upper_limit {
            let required_pages = (aligned_size + PAGE_SIZE - 1) / PAGE_SIZE;
            let page_start = self.request_pages(required_pages)?;

            self.upper_limit = required_pages
                .checked_mul(PAGE_SIZE)
                .and_then(|pages| page_start.checked_add(pages))?;
            self.next = page_start.checked_add(aligned_size)?;

            Some(page_start)
        } else {
            self.next = alloc_end;
            Some(alloc_start)
        }
    }
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn can_alloc_a_byte() {
        let mut inner = InnerAlloc::new();

        let layout = Layout::new::<u8>();
        assert_eq!(inner.alloc(layout), Some(0));

        let expected_limit = PAGE_SIZE;
        assert_eq!(inner.upper_limit, expected_limit);

        let expected_alloc_start = std::mem::size_of::<u8>();
        assert_eq!(inner.next, expected_alloc_start);
    }

    #[test]
    fn can_alloc_a_foobarbaz() {
        let mut inner = InnerAlloc::new();

        struct FooBarBaz {
            _foo: u32,
            _bar: u128,
            _baz: (u16, bool),
        }

        let layout = Layout::new::<FooBarBaz>();
        let allocations = 3;
        for _ in 0..allocations {
            assert!(inner.alloc(layout).is_some());
        }

        let expected_limit = PAGE_SIZE;
        assert_eq!(inner.upper_limit, expected_limit);

        let expected_alloc_start = allocations * std::mem::size_of::<FooBarBaz>();
        assert_eq!(inner.next, expected_alloc_start);
    }

    #[test]
    fn can_alloc_across_pages() {
        let mut inner = InnerAlloc::new();

        struct Foo {
            _foo: [u8; PAGE_SIZE - 1],
        }

        // First, let's allocate a struct which is _almost_ a full page
        let layout = Layout::new::<Foo>();
        assert_eq!(inner.alloc(layout), Some(0));

        let expected_limit = PAGE_SIZE;
        assert_eq!(inner.upper_limit, expected_limit);

        let expected_alloc_start = std::mem::size_of::<Foo>();
        assert_eq!(inner.next, expected_alloc_start);

        // Now we'll allocate two bytes which will push us over to the next page
        let layout = Layout::new::<u16>();
        assert_eq!(inner.alloc(layout), Some(PAGE_SIZE));

        let expected_limit = 2 * PAGE_SIZE;
        assert_eq!(inner.upper_limit, expected_limit);

        // Notice that we start the allocation on the second page, instead of making use of the
        // remaining byte on the first page
        let expected_alloc_start = PAGE_SIZE + std::mem::size_of::<u16>();
        assert_eq!(inner.next, expected_alloc_start);
    }

    #[test]
    fn can_alloc_multiple_pages() {
        let mut inner = InnerAlloc::new();

        struct Foo {
            _foo: [u8; 2 * PAGE_SIZE],
        }

        let layout = Layout::new::<Foo>();
        assert_eq!(inner.alloc(layout), Some(0));

        let expected_limit = 2 * PAGE_SIZE;
        assert_eq!(inner.upper_limit, expected_limit);

        let expected_alloc_start = std::mem::size_of::<Foo>();
        assert_eq!(inner.next, expected_alloc_start);

        // Now we want to make sure that the state of our allocator is correct for any subsequent
        // allocations
        let layout = Layout::new::<u8>();
        assert_eq!(inner.alloc(layout), Some(2 * PAGE_SIZE));

        let expected_limit = 3 * PAGE_SIZE;
        assert_eq!(inner.upper_limit, expected_limit);

        let expected_alloc_start = 2 * PAGE_SIZE + std::mem::size_of::<u8>();
        assert_eq!(inner.next, expected_alloc_start);
    }
}
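The internal fragmentation called out in the `alloc` docs above can be quantified with the same rounding arithmetic the allocator uses. A standalone sketch (the `required_pages` helper is an illustrative name, mirroring rather than reusing the code above):

```rust
const PAGE_SIZE: usize = 64 * 1024;

/// Round a request up to whole 64 KiB pages, as the allocator does.
fn required_pages(size: usize) -> usize {
    (size + PAGE_SIZE - 1) / PAGE_SIZE
}

fn main() {
    // A PAGE_SIZE - 1 byte allocation fits in one page...
    assert_eq!(required_pages(PAGE_SIZE - 1), 1);
    // ...and a following 2-byte allocation does not fit in the remaining byte,
    // so the allocator requests a fresh page and the leftover byte on page one
    // goes unused: internal fragmentation.
    assert_eq!(required_pages(2), 1);
    // Multi-page requests round up the same way.
    assert_eq!(required_pages(2 * PAGE_SIZE + 1), 3);
}
```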
@@ -12,10 +12,11 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-//! Crate providing `WEE_ALLOC` support for all Wasm compilations of ink! smart contract.
+//! Crate providing allocator support for all Wasm compilations of ink! smart contracts.
 //!
-//! The Wee allocator is an allocator specifically designed to have a low footprint albeit
-//! being less efficient for allocation and deallocation operations.
+//! The default allocator is a bump allocator whose goal is to have a small size footprint. If you
+//! are not concerned about the size of your final Wasm binaries you may opt into using the more
+//! full-featured `wee_alloc` allocator by activating the `wee-alloc` crate feature.
 
 #![cfg_attr(not(feature = "std"), no_std)]
 #![cfg_attr(not(feature = "std"), feature(alloc_error_handler, core_intrinsics))]
@@ -23,8 +24,17 @@
 // We use `wee_alloc` as the global allocator since it is optimized for binary file size
 // so that contracts compiled with it as allocator do not grow too much in size.
 #[cfg(not(feature = "std"))]
+#[cfg(feature = "wee-alloc")]
 #[global_allocator]
 static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;
 
+#[cfg(not(feature = "std"))]
+#[cfg(not(feature = "wee-alloc"))]
+#[global_allocator]
+static mut ALLOC: bump::BumpAllocator = bump::BumpAllocator {};
+
+#[cfg(not(feature = "wee-alloc"))]
+mod bump;
+
 #[cfg(not(feature = "std"))]
 mod handlers;
@@ -65,3 +65,4 @@ std = [
 # Enable contract debug messages via `debug_print!` and `debug_println!`.
 ink-debug = []
 ink-experimental-engine = ["ink_engine"]
+wee-alloc = ["ink_allocator/wee-alloc"]
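With the feature wiring above, a downstream contract crate could opt back into `wee_alloc` from its own manifest along these lines (the dependency name and version shown here are illustrative):

```toml
[dependencies]
# The `wee-alloc` feature is forwarded down to `ink_allocator/wee-alloc`,
# swapping the default bump allocator for `wee_alloc`.
ink_env = { version = "3.0", default-features = false, features = ["wee-alloc"] }
```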