On Monday, April 26, 2021 2:38:58 AM PDT Matthew Auld wrote:
Add a new extension to support setting an immutable priority list of potential placements at creation time.
If we use the normal gem_create or gem_create_ext without the extensions/placements, then we still get the old behaviour, with the object only being placed in system memory.
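[For illustration, creating an object that prefers device-local memory but can fall back to system memory might look roughly like the following from userspace. This is a sketch only: it assumes the uapi names added by this series (drm_i915_gem_create_ext, drm_i915_gem_create_ext_memory_regions, I915_GEM_CREATE_EXT_MEMORY_REGIONS), and the helper name create_lmem_with_fallback is made up.]

  #include <errno.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  static int create_lmem_with_fallback(int drm_fd, __u64 size, __u32 *handle)
  {
          /* Immutable priority list: device-local first, system as fallback. */
          struct drm_i915_gem_memory_class_instance regions[] = {
                  { I915_MEMORY_CLASS_DEVICE, 0 },
                  { I915_MEMORY_CLASS_SYSTEM, 0 },
          };
          struct drm_i915_gem_create_ext_memory_regions ext = {
                  .base.name = I915_GEM_CREATE_EXT_MEMORY_REGIONS,
                  .num_regions = 2,
                  .regions = (uintptr_t)regions,
          };
          struct drm_i915_gem_create_ext create = {
                  .size = size,
                  .extensions = (uintptr_t)&ext,
          };

          if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create))
                  return -errno;

          *handle = create.handle;
          return 0;
  }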
v2(Daniel & Jason):
  - Add a bunch of kernel-doc
  - Simplify design for placements extension
Testcase: igt/gem_create/create-ext-placement-sanity-check
Testcase: igt/gem_create/create-ext-placement-each
Testcase: igt/gem_create/create-ext-placement-all
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: CQ Tang <cq.tang@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Dave Airlie <airlied@gmail.com>
Cc: dri-devel@lists.freedesktop.org
Cc: mesa-dev@lists.freedesktop.org
 drivers/gpu/drm/i915/gem/i915_gem_create.c    | 215 ++++++++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |   3 +
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   6 +
 .../drm/i915/gem/selftests/i915_gem_mman.c    |  26 +++
 drivers/gpu/drm/i915/intel_memory_region.c    |  16 ++
 drivers/gpu/drm/i915/intel_memory_region.h    |   4 +
 include/uapi/drm/i915_drm.h                   |  62 +++++
 7 files changed, 315 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
index 90e9eb6601b5..895f1666a8d3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
@@ -4,12 +4,47 @@
  */
 
 #include "gem/i915_gem_ioctls.h"
+#include "gem/i915_gem_lmem.h"
 #include "gem/i915_gem_region.h"
 
 #include "i915_drv.h"
 #include "i915_trace.h"
 #include "i915_user_extensions.h"
 
+static u32 object_max_page_size(struct drm_i915_gem_object *obj)
+{
+	u32 max_page_size = 0;
+	int i;
+
+	for (i = 0; i < obj->mm.n_placements; i++) {
+		struct intel_memory_region *mr = obj->mm.placements[i];
+
+		GEM_BUG_ON(!is_power_of_2(mr->min_page_size));
+		max_page_size = max_t(u32, max_page_size, mr->min_page_size);
+	}
+
+	GEM_BUG_ON(!max_page_size);
+	return max_page_size;
+}
+
+static void object_set_placements(struct drm_i915_gem_object *obj,
+				  struct intel_memory_region **placements,
+				  unsigned int n_placements)
+{
+	GEM_BUG_ON(!n_placements);
+
+	if (n_placements == 1) {
+		struct intel_memory_region *mr = placements[0];
+		struct drm_i915_private *i915 = mr->i915;
+
+		obj->mm.placements = &i915->mm.regions[mr->id];
+		obj->mm.n_placements = 1;
+	} else {
+		obj->mm.placements = placements;
+		obj->mm.n_placements = n_placements;
+	}
+}
I found this helper function rather odd at first glance. In the general case, it simply sets fields based on the parameters... but in the n == 1 case, it goes and uses something else as the array.

On further inspection, this makes sense: normally, we have an array of multiple placements in priority order. That array is (essentially) malloc'd. But if there's only 1 item, having a malloc'd array of 1 thing is pretty silly. We can just point at the region directly. That means the callers can kfree the array, and the object destructor should not.
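Presumably the free path then keys off n_placements; something like this in the object destructor (my sketch of the ownership rule, since the corresponding i915_gem_object.c hunk isn't quoted here):

	/*
	 * Only the multi-placement case owns a separate allocation; a
	 * single placement points into i915->mm.regions[] and must not
	 * be freed.
	 */
	if (obj->mm.n_placements > 1)
		kfree(obj->mm.placements);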
Maybe a comment saying
	/*
	 * For the common case of one memory region, skip storing an
	 * allocated array and just point at the region directly.
	 */
would be helpful?