Subject: Re: [PATCH v7 2/3] soc: qcom: rpmh: Update dirty flag only when data changes

On 2/27/2020 4:13 AM, Stephen Boyd wrote:
> Quoting Maulik Shah (2020-02-25 21:27:12)
>> Currently the rpmh ctrlr dirty flag is set in all cases, regardless
>> of whether the data actually changed or not. Update it only when the
>> data changes to a newer value.
>>
>> Also move dirty flag updates to happen from within cache_lock.
>>
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> Reviewed-by: Srinivas Rao L <lsrao@codeaurora.org>
> Probably worth adding a Fixes tag here? Doesn't make sense to mark
> something dirty when it isn't changed.
Done. Will update it in v8.
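
For reference, the tag would follow the usual kernel format, pointing
at the commit that first introduced the unconditional dirty marking
(exact hash still to be looked up; shown here with a placeholder):

    Fixes: <12-char sha> ("subject line of the offending commit")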
>> ---
>> drivers/soc/qcom/rpmh.c | 21 ++++++++++++++++-----
>> 1 file changed, 16 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index eb0ded0..83ba4e0 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -139,20 +139,27 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
>> existing:
>> switch (state) {
>> case RPMH_ACTIVE_ONLY_STATE:
>> - if (req->sleep_val != UINT_MAX)
>> + if (req->sleep_val != UINT_MAX) {
>> req->wake_val = cmd->data;
>> + ctrlr->dirty = true;
>> + }
>> break;
>> case RPMH_WAKE_ONLY_STATE:
>> - req->wake_val = cmd->data;
>> + if (req->wake_val != cmd->data) {
>> + req->wake_val = cmd->data;
>> + ctrlr->dirty = true;
>> + }
>> break;
>> case RPMH_SLEEP_STATE:
>> - req->sleep_val = cmd->data;
>> + if (req->sleep_val != cmd->data) {
>> + req->sleep_val = cmd->data;
>> + ctrlr->dirty = true;
>> + }
>> break;
>> default:
>> break;
> Please remove the default case. There are only three states in the enum. The
> compiler will warn if a switch statement doesn't cover all cases and
> we'll know to add something here if another enum value is added in the
> future.
Done.
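
With the default case dropped, the resulting switch would look roughly
like this (untested sketch based on the hunk above, not the actual v8
diff):

    switch (state) {
    case RPMH_ACTIVE_ONLY_STATE:
        /* update wake_val only if a sleep vote already exists */
        if (req->sleep_val != UINT_MAX) {
            req->wake_val = cmd->data;
            ctrlr->dirty = true;
        }
        break;
    case RPMH_WAKE_ONLY_STATE:
        /* mark dirty only when the cached value actually changes */
        if (req->wake_val != cmd->data) {
            req->wake_val = cmd->data;
            ctrlr->dirty = true;
        }
        break;
    case RPMH_SLEEP_STATE:
        if (req->sleep_val != cmd->data) {
            req->sleep_val = cmd->data;
            ctrlr->dirty = true;
        }
        break;
    }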
>> }
>>
>> - ctrlr->dirty = true;
>> unlock:
>> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>
>> @@ -323,6 +331,7 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
>> list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
>> kfree(req);
>> INIT_LIST_HEAD(&ctrlr->batch_cache);
>> + ctrlr->dirty = true;
>> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> }
>>
>> @@ -456,6 +465,7 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>> int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>> {
>> struct cache_req *p;
>> + unsigned long flags;
>> int ret;
>>
>> if (!ctrlr->dirty) {
>> @@ -488,7 +498,9 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>> return ret;
>> }
>>
>> + spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> ctrlr->dirty = false;
>> + spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> So we take the spinlock to update it here. But we don't hold the
> spinlock to test for !dirty up above. Seems like either rpmh_flush() can
> only be called sequentially, or the lock added here needs to be held
> during the whole flush. Which way is it?

Thanks, I will remove the !ctrlr->dirty check within rpmh_flush(), as
currently we invoke it only when the caches are dirty.

The last CPU going down can first check the dirty flag outside
rpmh_flush() and decide whether to invoke it accordingly.
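
On the caller side that would look something like this (illustrative
sketch only; the exact call site is outside this patch):

    /* on the last CPU going down: flush cached requests only if dirty */
    if (ctrlr->dirty) {
        ret = rpmh_flush(ctrlr);
        if (ret)
            return ret;
    }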

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation
