[RFC,v2,3/8] video: tegra: host: Add channel and client support

Mark Zhang nvmarkzhang at gmail.com
Thu Nov 29 22:13:36 PST 2012


On 11/29/2012 06:46 PM, Terje Bergström wrote:
> On 29.11.2012 12:01, Mark Zhang wrote:
>>
>> Just out of curiosity, why is "pb->mapped + 1K" the end of a 4K pushbuffer?
> 
> pb->mapped is a u32 *, so the compiler will take care of multiplying
> by sizeof(u32).
> 

Ah, yes. Sorry, I must have been out of my mind at the time. :)
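
To spell it out for myself, a minimal sketch (illustrative only;
pb_end() is just a made-up name, not anything in the driver):

	/* Why "mapped + 1K" is the end of a 4K pushbuffer when mapped
	 * is a u32 *: pointer arithmetic scales by sizeof(u32), so
	 * +1024 words advances 1024 * 4 = 4096 bytes. */
	static u32 *pb_end(u32 *mapped)
	{
		return mapped + 1024;
	}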

>>> +unsigned int nvhost_cdma_wait_locked(struct nvhost_cdma *cdma,
>>> +		enum cdma_event event)
>>> +{
>>> +	for (;;) {
>>> +		unsigned int space = cdma_status_locked(cdma, event);
>>> +		if (space)
>>> +			return space;
>>> +
>>> +		/* If somebody has managed to already start waiting, yield */
>>> +		if (cdma->event != CDMA_EVENT_NONE) {
>>> +			mutex_unlock(&cdma->lock);
>>> +			schedule();
>>> +			mutex_lock(&cdma->lock);
>>> +			continue;
>>> +		}
>>> +		cdma->event = event;
>>> +
>>> +		mutex_unlock(&cdma->lock);
>>> +		down(&cdma->sem);
>>> +		mutex_lock(&cdma->lock);
>>
>> I'm a newbie to nvhost, but the locking and unlocking of this mutex
>> (cdma->lock) feels very tricky to me. Does this function require the
>> mutex to be held before it is called? And do we need to unlock it
>> before the "return space;" above? IMHO this is not a good design;
>> can we find a better solution?
> 
> Yeah, it's not perfect, and better solutions are welcome.
> cdma_status_locked() must be called with the mutex held. But what we
> generally wait for is space in the push buffer, and the cleanup code
> cannot run while we hold cdma->lock, so I release it.
> 
> There are two ways of looping because of a race between two processes
> waiting for space. One of them sets cdma->event to indicate what it is
> waiting for and can go to sleep, but the other has to keep spinning.
> 

Alright. I still feel these mutex operations are complicated and
error-prone, but so far I only have the big picture of nvhost and don't
know many of the details yet. I'll let you know if I come up with a
better solution.
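
Just to double-check that I read the calling convention right, this is
roughly how I picture a caller (a rough sketch only; apart from
nvhost_cdma_wait_locked(), cdma->lock and CDMA_EVENT_NONE everything
here, including the event name, is my guess):

	unsigned int space;

	mutex_lock(&cdma->lock);

	/* Returns with cdma->lock held, but the lock may have been
	 * dropped and re-acquired inside, so any cdma state read
	 * before this call has to be treated as stale. */
	space = nvhost_cdma_wait_locked(cdma, CDMA_EVENT_PUSH_BUFFER_SPACE);

	/* write up to "space" words into the push buffer here */

	mutex_unlock(&cdma->lock);

If that is right, maybe a comment on top of nvhost_cdma_wait_locked()
saying "called and returns with cdma->lock held, may drop it while
sleeping" would already help a lot.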

> Terje
> 

