## Summary
I hit an ASAN SEGV while fuzzing Wasmtime locally with the gc_ops target. A
guest Wasm program that allocates in a tight struct.new loop crashes the host
process on a memory access outside any mapped module.
Note: triggered in PR khagankhan/wasmtime#13101
against the `struct-fields` branch of khagankhan/wasmtime.
The PR is up to date with `main`, and its only additions are to the `gc_ops`
fuzz target, so the runtime/GC/codegen code is identical to upstream; this
suggests an upstream issue that the new fuzzer payloads merely expose.
## Environment

| Field | Value |
|---|---|
| Wasmtime version | 45.0.0 |
| Fork / branch | khagankhan/wasmtime @ `struct-fields` |
| Commit | 07ef78ad5 ("Merge branch 'bytecodealliance:main' into struct-fields") |
| Target PR | #13101 |
| Rustc | 1.95.0 (59807616e 2026-04-14) |
| Platform | Linux 5.15.0-168-generic, x86_64-unknown-linux-gnu |
| Sanitizer | AddressSanitizer (nightly `cargo fuzz` build) |
| Fuzz target | `gc_ops` |
## Reproducer

### Minimal WAT

```wat
(module
  (type (;0;) (func (result externref externref externref)))
  (type (;1;) (func))
  (type (;2;) (func (param externref externref externref)))
  (type (;3;) (func (result externref externref externref)))
  (type (;4;) (func (param structref)))
  (rec
    (type (;5;) (sub (struct)))
  )
  (type (;6;) (func (param (ref null 5))))
  (import "" "gc" (func (;0;) (type 0)))
  (import "" "take_refs" (func (;1;) (type 2)))
  (import "" "make_refs" (func (;2;) (type 3)))
  (import "" "take_struct" (func (;3;) (type 4)))
  (import "" "take_struct_5" (func (;4;) (type 6)))
  (table (;0;) 14 externref)
  (table (;1;) 14 structref)
  (table (;2;) 14 (ref null 5))
  (global (;0;) (mut structref) ref.null struct)
  (global (;1;) (mut (ref null 5)) ref.null 5)
  (export "run" (func 5))
  (func (;5;) (type 1)
    (local externref structref (ref null 5))
    loop
      struct.new 5
      global.set 1
      br 0
    end
  )
)
```
Hot loop:

```wat
loop
  struct.new 5   ;; allocate empty struct
  global.set 1   ;; overwrite prior ref
  br 0
end
```
### Steps

```sh
cd ~/wasmtime
cargo +nightly fuzz build gc_ops --no-default-features
./target/x86_64-unknown-linux-gnu/release/gc_ops ~/minimized_artifact
```
## Expected

Run forever (the GC reclaims the now-unreachable previous value of global 1)
or trap cleanly with a GC-OOM `wasmtime::Trap`.
## Actual

Host-side SEGV in JIT code (`<unknown module>`), not a guest trap.
## Likely Cause (hypothesis from the trace and previously fixed issues)

With `RUST_LOG=trace`, the failing sequence is:

```text
FreeList::new(0)                                  # heap starts at 0 capacity
gc_alloc_raw(kind=StructRef, size=24, align=8)
Got GC heap OOM: no capacity for allocation of 24 bytes
Attempting to grow the GC heap by 24 bytes
FreeList::add_capacity(0x10000): 0x0 -> 0x10000   # grown to 64 KiB
<SEGV>                                            # faults on the FIRST alloc
```
So this is not a bump allocator walking off the end of an exhausted heap
(the loop never iterates past the first `struct.new`); it faults immediately
after a successful `grow_gc_heap`.
Register state at the minimized crash:

```text
rax   = 0x10                 # VMGcRef returned by gc_alloc_raw
r15   = 0x00007b86184a0000   # looks like the new GC heap base
fault = 0x7b86184a0018 = r15 + 0x18
```
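As a sanity check on these registers: the faulting address is exactly the heap base plus the returned GC ref plus one 8-byte word. Interpreting that extra `0x8` as a header/field offset within the new object is my assumption, not something taken from Wasmtime's actual layout:

```rust
fn main() {
    // Values copied from the minimized-crash register dump above.
    let r15: u64 = 0x7b86_184a_0000; // apparent GC heap base
    let rax: u64 = 0x10;             // VMGcRef returned by gc_alloc_raw
    let fault: u64 = 0x7b86_184a_0018;

    // fault = base + gc_ref + 8; the 8 bytes are an assumed
    // header/field offset inside the freshly allocated struct.
    assert_eq!(fault - r15, rax + 8);
    println!("fault offset into heap = {:#x}", fault - r15);
}
```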
Leading hypothesis: the JIT computes `obj_addr = gc_heap_base + gc_ref`
using a stale `gc_heap_base`. The base load in
crates/cranelift/src/func_environ/gc/enabled.rs#L1470
is marked `readonly`/`can_move` whenever
`!gc_heap_memory_type().memory_may_move(..)`, which allows CLIF to CSE/hoist
the load above the `gc_alloc_raw` libcall. But `gc_alloc_raw` can call
`grow_gc_heap`, which updates `VMStoreContext.gc_heap.base`, so the cached
pre-grow base (0/null on the very first allocation) is used to compute the
object address, landing in unmapped memory.
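The suspected interleaving can be modeled in a few lines of plain Rust. Everything here is illustrative (the types, the grown base address, and the returned ref are made up), not Wasmtime's actual code:

```rust
// Toy model of the suspected miscompile: a base pointer read before an
// allocation that may grow (and re-base) the heap.
struct GcHeap {
    base: usize,
    capacity: usize,
}

// Stand-in for the gc_alloc_raw libcall: on the very first allocation the
// heap has zero capacity, so it must grow, which (for a malloc-backed
// heap) assigns a fresh, non-null base. Returns the new object's offset.
fn gc_alloc_raw(heap: &mut GcHeap, size: usize) -> usize {
    if heap.capacity < size {
        // grow_gc_heap: base goes from 0 to a real mapping.
        *heap = GcHeap { base: 0x7b86_184a_0000, capacity: 0x10000 };
    }
    0x10 // VMGcRef: offset of the new object within the heap
}

fn main() {
    let mut heap = GcHeap { base: 0, capacity: 0 };

    // Unsound schedule: the base load is hoisted above the allocation,
    // capturing the pre-grow base (0 here).
    let stale_base = heap.base;
    let gc_ref = gc_alloc_raw(&mut heap, 24);
    let bad_addr = stale_base + gc_ref; // 0x10: nowhere near the heap

    // Sound schedule: reload the base after any call that may grow the heap.
    let good_addr = heap.base + gc_ref; // inside the grown heap

    assert_ne!(bad_addr, good_addr);
    println!("stale = {bad_addr:#x}, fresh = {good_addr:#x}");
}
```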
The fuzz config hits this path easily:
crates/fuzzing/src/generators/config.rs#L443-L452
forces `gc_heap_reservation = 0` with a small `gc_heap_reservation_for_growth`
(1 MiB), giving a malloc-backed heap whose base only becomes non-null after
the first grow.
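For reference, the constraint described above amounts to something like the following config fragment. The field names are copied from the issue text above, not verified against the generator's actual struct; the real code is at the config.rs lines linked above:

```rust
// Sketch, not a verbatim excerpt: zero up-front virtual-memory reservation
// plus a small growth budget, so the GC heap is malloc-backed and its base
// stays null until the first grow_gc_heap call succeeds.
config.gc_heap_reservation = 0;
config.gc_heap_reservation_for_growth = 1 << 20; // 1 MiB
```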
Possibly adjacent to recent DRC/layout fixes on main:
#13110,
#13115.
## ASAN Output

### Actual seed (WRITE fault)

```text
==63564==ERROR: AddressSanitizer: SEGV on unknown address 0x7b633f5a0108 (pc 0x7f6346557428 bp 0x7ffe1c9310d0 sp 0x7ffe1c931050 T0)
==63564==The signal is caused by a WRITE memory access.
    #0 0x7f6346557428  (<unknown module>)
    #1 0x7f634655cb5f  (<unknown module>)
    ...
rax = 0x00000000000000f0  rbx = 0x00000000b0000000  rcx = 0x0000000000000001  rdx = 0x0000000000000040
rdi = 0x00000000000000f0  rsi = 0x000000000000beef  rbp = 0x00007ffe1c9310d0  rsp = 0x00007ffe1c931050
r8  = 0x0000000000000001  r9  = 0x0000000000040000  r10 = 0x00000f6c68a88c5a  r11 = 0x0000000000000000
r12 = 0x0000000000000008  r13 = 0x0000000000000018  r14 = 0x00007b633f5a0000  r15 = 0x00007d13463e0598
SUMMARY: AddressSanitizer: SEGV (<unknown module>)
```

(fault = `r14 + 0x108`; r14 looks like a GC heap base)
### Minimized seed (READ fault)

```text
==63646==ERROR: AddressSanitizer: SEGV on unknown address 0x7b86184a0018 (pc 0x7f861f4c30d4 bp 0x7ffe022df030 sp 0x7ffe022df000 T0)
==63646==The signal is caused by a READ memory access.
    #0 0x7f861f4c30d4  (<unknown module>)
    #1 0x7f861f4c8860  (<unknown module>)
    ...
rax = 0x0000000000000010  rbx = 0x00007d361f2e0598  rcx = 0x0000000000000010  rdx = 0x0000000000000020
rdi = 0x00000f7143c64f00  rsi = 0x0000000000000000  rbp = 0x00007ffe022df030  rsp = 0x00007ffe022df000
r8  = 0x0000000000000001  r9  = 0x0000000000400000  r10 = 0x00000f70c3c6cf9a  r11 = 0x0000000000000000
r12 = 0x00007ce61f2e4c20  r13 = 0xfffffffffffffc19  r14 = 0x0000000000000005  r15 = 0x00007b86184a0000
SUMMARY: AddressSanitizer: SEGV (<unknown module>)
```

(fault = `r15 + 0x18`)
## Artifacts
+cc @fitzgen