Subject: Re: [PATCH 4/5] sched/deadline: Cleanup on_dl_rq() handling
From: Dietmar Eggemann <dietmar.eggemann@arm.com>
Date: 2019-07-31
On 7/30/19 9:21 AM, Peter Zijlstra wrote:
> On Tue, Jul 30, 2019 at 08:41:15AM +0200, Juri Lelli wrote:
>> On 29/07/19 18:49, Peter Zijlstra wrote:
>>> On Fri, Jul 26, 2019 at 09:27:55AM +0100, Dietmar Eggemann wrote:
>>>> Remove BUG_ON() in __enqueue_dl_entity() since there is already one in
>>>> enqueue_dl_entity().
>>>>
>>>> Move the check that the dl_se is not on the dl_rq from
>>>> __dequeue_dl_entity() to dequeue_dl_entity() to align with the enqueue
>>>> side and use the on_dl_rq() helper function.
>>>>
>>>> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
>>>> ---
>>>> kernel/sched/deadline.c | 8 +++-----
>>>> 1 file changed, 3 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>>>> index 1fa005f79307..a9cb52ceb761 100644
>>>> --- a/kernel/sched/deadline.c
>>>> +++ b/kernel/sched/deadline.c
>>>> @@ -1407,8 +1407,6 @@ static void __enqueue_dl_entity(struct sched_dl_entity *dl_se)
>>>>  	struct sched_dl_entity *entry;
>>>>  	int leftmost = 1;
>>>>
>>>> -	BUG_ON(!RB_EMPTY_NODE(&dl_se->rb_node));
>>>> -
>>>>  	while (*link) {
>>>>  		parent = *link;
>>>>  		entry = rb_entry(parent, struct sched_dl_entity, rb_node);
>>>> @@ -1430,9 +1428,6 @@ static void __dequeue_dl_entity(struct sched_dl_entity *dl_se)
>>>>  {
>>>>  	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
>>>>
>>>> -	if (RB_EMPTY_NODE(&dl_se->rb_node))
>>>> -		return;
>>>> -
>>>>  	rb_erase_cached(&dl_se->rb_node, &dl_rq->root);
>>>>  	RB_CLEAR_NODE(&dl_se->rb_node);
>>>>
>>>> @@ -1466,6 +1461,9 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se,
>>>>
>>>>  static void dequeue_dl_entity(struct sched_dl_entity *dl_se)
>>>>  {
>>>> +	if (!on_dl_rq(dl_se))
>>>> +		return;
>>>
>>> Why allow double dequeue instead of WARN?
>>
>> As I was saying to Valentin, it can currently happen that a task has
>> already been dequeued by the throttling path in update_curr_dl(), called
>> from dequeue_task_dl(), before __dequeue_task_dl() runs. Do you think we
>> should check for this condition before calling into dequeue_dl_entity()?
>
> Yes, that's what ->dl_throttled is for, right? And !->dl_throttled &&
> !on_dl_rq() is a BUG.

OK, I will add the following snippet to the patch.
Although it's easy to provoke a situation in which DL tasks are throttled,
I haven't actually seen throttling happen while the task is being dequeued.

--->8---

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index b6d2f263e0a4..a009762097fa 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1507,8 +1507,7 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se,

 static void dequeue_dl_entity(struct sched_dl_entity *dl_se)
 {
-	if (!on_dl_rq(dl_se))
-		return;
+	BUG_ON(!on_dl_rq(dl_se));

 	__dequeue_dl_entity(dl_se);
 }
@@ -1592,6 +1591,10 @@ static void __dequeue_task_dl(struct rq *rq, struct task_struct *p)
 static void dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 {
 	update_curr_dl(rq);
+
+	if (p->dl.dl_throttled)
+		return;
+
 	__dequeue_task_dl(rq, p);
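
(Purely as an illustration of the invariant Peter states above, i.e. that
!->dl_throttled && !on_dl_rq() is a bug: below is a minimal, stand-alone
user-space sketch of the intended state machine. It is not kernel code and
all toy_* names are made up; "on_rq" stands in for on_dl_rq(), i.e. the
entity's rb_node being linked in the rb-tree, and "throttled" for
dl_se->dl_throttled.)

/*
 * Toy model of the dequeue/throttle invariant discussed above; not kernel code.
 * Build: gcc -Wall -o dl_toy dl_toy.c && ./dl_toy
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_dl_entity {
	bool on_rq;		/* stands in for on_dl_rq() */
	bool throttled;		/* stands in for dl_se->dl_throttled */
};

static void toy_dequeue_dl_entity(struct toy_dl_entity *dl_se)
{
	assert(dl_se->on_rq);	/* mirrors BUG_ON(!on_dl_rq(dl_se)) */
	dl_se->on_rq = false;
}

/* Models the throttle path: the entity is taken off the rq here. */
static void toy_throttle(struct toy_dl_entity *dl_se)
{
	dl_se->throttled = true;
	toy_dequeue_dl_entity(dl_se);
}

/* Models dequeue_task_dl() with the proposed early return. */
static void toy_dequeue_task_dl(struct toy_dl_entity *dl_se)
{
	/* The throttle path may already have taken the entity off the rq. */
	if (dl_se->throttled)
		return;

	toy_dequeue_dl_entity(dl_se);
}

int main(void)
{
	struct toy_dl_entity dl_se = { .on_rq = true, .throttled = false };

	toy_throttle(&dl_se);		/* runtime exhausted while running */
	toy_dequeue_task_dl(&dl_se);	/* early return, no double dequeue */

	printf("on_rq=%d throttled=%d\n", dl_se.on_rq, dl_se.throttled);
	return 0;
}

With the early return in toy_dequeue_task_dl(), the assert can only fire when
an unthrottled entity is dequeued twice, which is exactly the condition the
BUG_ON() in the snippet above is meant to catch.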
