[RFC PATCH] drm: panthor: add dev_coredumpv support

Daniel Almeida daniel.almeida at collabora.com
Tue Jul 23 13:41:08 UTC 2024


Hi Alice, thanks for the review!


>> +        fn alloc_mem(&mut self, size: usize) -> Option<*mut u8> {
>> +            assert!(size % 8 == 0, "Allocation size must be 8-byte aligned");
>> +            if isize::try_from(size).is_err() {
>> +                return None;
>> +            } else if self.pos + size > self.capacity {
>> +                kernel::pr_debug!("DumpAllocator out of memory");
>> +                None
>> +            } else {
>> +                let offset = self.pos;
>> +                self.pos += size;
>> +
>> +                // Safety: we know that this is a valid allocation, so
>> +                // dereferencing is safe. We don't ever return two pointers to
>> +                // the same address, so we adhere to the aliasing rules. We make
>> +                // sure that the memory is zero-initialized before being handed
>> +                // out (this happens when the allocator is first created) and we
>> +                // enforce an 8-byte alignment rule.
>> +                Some(unsafe { self.mem.as_ptr().offset(offset as isize) as *mut u8 })
>> +            }
>> +        }
>> +
>> +        pub(crate) fn alloc<T>(&mut self) -> Option<&mut T> {
>> +            let mem = self.alloc_mem(core::mem::size_of::<T>())? as *mut T;
>> +            // Safety: we uphold safety guarantees in alloc_mem(), so this is
>> +            // safe to dereference.
> 
> This code doesn't properly handle when T requires a large alignment.
> 

Can you expand a bit on this? IIRC, the alignment of a struct or enum is
dictated by the field with the largest alignment requirement, right? Given
that the largest primitive allowed in the kernel is u64/i64, shouldn’t the
existing 8-byte rule suffice, e.g.:

 +            assert!(size % 8 == 0, "Allocation size must be 8-byte aligned");
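 
For completeness, here’s a quick sketch of what taking the alignment into
account explicitly could look like. Note that alloc_mem_aligned() and
alloc_aligned() are hypothetical names for illustration, not something in
this patch:

    use core::mem::{align_of, size_of};
    use core::ptr::NonNull;

    struct DumpAllocator {
        mem: NonNull<core::ffi::c_void>,
        pos: usize,
        capacity: usize,
    }

    impl DumpAllocator {
        fn alloc_mem_aligned(&mut self, size: usize, align: usize) -> Option<*mut u8> {
            debug_assert!(align.is_power_of_two());
            // Round `pos` up to the next multiple of `align`.
            let offset = self.pos.checked_add(align - 1)? & !(align - 1);
            let end = offset.checked_add(size)?;
            if end > self.capacity {
                return None;
            }
            self.pos = end;
            // SAFETY: `end <= capacity`, so the pointer stays inside the
            // original zero-initialized chunk, and no byte range is ever
            // handed out twice.
            Some(unsafe { (self.mem.as_ptr() as *mut u8).add(offset) })
        }

        fn alloc_aligned<T>(&mut self) -> Option<&mut T> {
            let mem = self.alloc_mem_aligned(size_of::<T>(), align_of::<T>())? as *mut T;
            // SAFETY: `mem` is non-null, aligned for `T`, and points to
            // `size_of::<T>()` zeroed bytes that are not aliased elsewhere.
            Some(unsafe { &mut *mem })
        }
    }

That said, if the 8-byte invariant holds for every T we dump, the current
assert should be equivalent in practice.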


>> +            Some(unsafe { &mut *mem })
>> +        }
>> +
>> +        pub(crate) fn alloc_bytes(&mut self, num_bytes: usize) -> Option<&mut [u8]> {
>> +            let mem = self.alloc_mem(num_bytes)?;
>> +
>> +            // Safety: we uphold safety guarantees in alloc_mem(), so this is
>> +            // safe to build a slice
>> +            Some(unsafe { core::slice::from_raw_parts_mut(mem, num_bytes) })
>> +        }
> 
> Using references for functions that allocate is generally wrong.
> References imply that you don't have ownership of the memory, but
> allocator functions would normally return ownership of the allocation.
> As-is, the code seems to leak these allocations.

All the memory must be given to dev_coredumpv(), which will then take
ownership.  dev_coredumpv() will free all the memory, so there should be no
leaks here.
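
For reference, the handoff at the end looks roughly like this (a sketch:
the device pointer and GFP plumbing are assumed here, and dump() is the
method quoted further down):

    // Once the dump is fully written, hand the whole chunk over.
    let (ptr, len) = alloc.dump();
    // SAFETY: `ptr`/`len` describe a live allocation whose ownership is
    // transferred to the core-dump machinery here; we never touch it
    // again, and it is freed once userspace has consumed the dump.
    unsafe { bindings::dev_coredumpv(dev, ptr.as_ptr(), len, bindings::GFP_KERNEL) };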

I’ve switched to KVec in v2, so that will also cover the error paths,
which do leak in this version, sadly.

As-is, all the memory is pre-allocated as a single chunk. When space is carved
for a given T, a &mut is returned so that the data can be written in-place at
the right spot in said chunk.

Not only should there be no leaks, but I can actually decode this from
userspace.
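
Roughly, the userspace side just walks the buffer header by header; a
sketch, assuming all four Header fields are u32 (the real layout and the
MAGIC value are of course whatever the kernel-side definitions say):

    fn walk_dump(dump: &[u8]) {
        const HDR: usize = 16; // assumed: 4 * size_of::<u32>()
        let mut pos = 0;
        while pos + HDR <= dump.len() {
            let u32_at =
                |o: usize| u32::from_ne_bytes(dump[pos + o..pos + o + 4].try_into().unwrap());
            let magic = u32_at(0);
            let ty = u32_at(4);
            let header_size = u32_at(8) as usize;
            let data_size = u32_at(12) as usize;
            if header_size + data_size == 0 {
                break; // malformed dump, avoid looping forever
            }
            // ... verify `magic`, then dispatch on `ty` (registers, VM, ...) ...
            let _ = (magic, ty);
            pos += header_size + data_size;
        }
    }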

I agree that this pattern isn’t the usual one, but I don’t see anything
incorrect about it. Maybe I missed something?

> 
>> +        pub(crate) fn alloc_header(&mut self, ty: HeaderType, data_size: u32) -> &mut Header {
>> +            let hdr: &mut Header = self.alloc().unwrap();
>> +            hdr.magic = MAGIC;
>> +            hdr.ty = ty;
>> +            hdr.header_size = core::mem::size_of::<Header>() as u32;
>> +            hdr.data_size = data_size;
>> +            hdr
>> +        }
>> +
>> +        pub(crate) fn is_end(&self) -> bool {
>> +            self.pos == self.capacity
>> +        }
>> +
>> +        pub(crate) fn dump(self) -> (NonNull<core::ffi::c_void>, usize) {
>> +            (self.mem, self.capacity)
>> +        }
>> +    }
>> +}
>> +
>> +fn dump_registers(alloc: &mut DumpAllocator, args: &DumpArgs) {
>> +    let sz = core::mem::size_of_val(&REGISTERS);
>> +    alloc.alloc_header(HeaderType::Registers, sz.try_into().unwrap());
>> +
>> +    for reg in &REGISTERS {
>> +        let dumped_reg: &mut RegisterDump = alloc.alloc().unwrap();
>> +        dumped_reg.register = *reg;
>> +        dumped_reg.value = reg.read(args.reg_base_addr);
>> +    }
>> +}
>> +
>> +fn dump_bo(alloc: &mut DumpAllocator, bo: &mut bindings::drm_gem_object) {
>> +    let mut map = bindings::iosys_map::default();
>> +
>> +    // Safety: we trust the kernel to provide a valid BO.
>> +    let ret = unsafe { bindings::drm_gem_vmap_unlocked(bo, &mut map as _) };
>> +    if ret != 0 {
>> +        pr_warn!("Failed to map BO");
>> +        return;
>> +    }
>> +
>> +    let sz = bo.size;
>> +
>> +    // Safety: we know that the vaddr is valid and we know the BO size.
>> +    let mapped_bo: &mut [u8] =
>> +        unsafe { core::slice::from_raw_parts_mut(map.__bindgen_anon_1.vaddr as *mut _, sz) };
> 
> You don't write to this memory, so I would avoid the mutable reference.
> 
>> +    alloc.alloc_header(HeaderType::Vm, sz as u32);
>> +
>> +    let bo_data = alloc.alloc_bytes(sz).unwrap();
>> +    bo_data.copy_from_slice(&mapped_bo[..]);
>> +
>> +    // Safety: BO is valid and was previously mapped.
>> +    unsafe { bindings::drm_gem_vunmap_unlocked(bo, &mut map as _) };
> 
> You don't need `as _` here. You can just pass a mutable reference and
> Rust will automatically cast it to raw pointer.
> 
>> +}
>> +
>> +/// Dumps the current state of the GPU to a file
>> +///
>> +/// # Safety
>> +///
>> +/// `Args` must be aligned and non-null.
>> +/// All fields of `DumpArgs` must be valid.
>> +#[no_mangle]
>> +pub(crate) extern "C" fn panthor_core_dump(args: *const DumpArgs) -> core::ffi::c_int {
>> +    assert!(!args.is_null());
>> +    // Safety: we checked whether the pointer was null. It is assumed to be
>> +    // aligned as per the safety requirements.
>> +    let args = unsafe { &*args };
> 
> Creating a reference requires that it isn't dangling, so the safety
> requirements should require that.
> 
> Also, panthor_core_dump should be unsafe.
> 


