Subject: Re: [PATCH v7 07/10] tpm, tpm_tis: Implement usage counter for locality

On 01.07.22 01:29, Jarkko Sakkinen wrote:

> I'm kind of thinking that should tpm_tis_data have a lock for its
> contents?

Most of the tpm_tis_data structure elements are set once during init and
then never changed but only read. So no need for locking for these. The
exceptions I see are

- flags
- locality_count
- locality

whereby "flags" is accessed by atomic bit manipulating functions and thus
does not need extra locking. "locality_count" is protected by the locality_count_mutex.
"locality" is only set in check_locality() which is called from tpm_tis_request_locality_locked()
which holds the locality_count_mutex. So check_locality() is also protected by the locality_count_mutex
(which for this reason should probably rather be called locality_mutex since it protects both the "locality_count"
and the "locality" variable).

There is one other place check_locality() is called from, namely the interrupt
handler. This is also the only place in which "locality" could be assigned a value
other than 0 (aka the default). In this case there is no lock, so this could indeed
be racy.

The solution I see for this is:
1. remove the entire loop that checks for the current locality, i.e. this code:

	for (i = 0; i < 5; i++)
		if (check_locality(chip, i))
			break;

This way we avoid "locality" being changed to something that is not the default.

2. grab the locality_count_mutex and protect "locality":

	mutex_lock(&priv->locality_count_mutex);
	for (i = 0; i < 5; i++)
		if (check_locality(chip, i))
			break;
	mutex_unlock(&priv->locality_count_mutex);

I don't see a reason why we should store which locality is the active one, since the only thing
that would ever change it from 0 (i.e. the default which we use) to something else is some external instance.

So I would vote for option 1.

> I kind of doubt that we would ever need more than one lock for it,
> and it would give some more assurance to not be racy, especially
> when re-enabling interrupts this feels important to be "extra safe".
> I looked at this commit, and did not see anything that would prevent
> using a spin lock instead of mutex. With a spin lock priv can be
> accessed also in the interrupt context.
> So instead prepend this patch with a patch that adds:
> struct spin_lock lock;
> And something like:
> static inline struct tpm_tis_data *tpm_tis_priv_get(struct tpm_chip *chip)
> {
> 	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
>
> 	spin_lock(&priv->lock);
> 	return priv;
> }
>
> static inline void tpm_tis_priv_put(struct tpm_tis_data *priv)
> {
> 	spin_unlock(&priv->lock);
> }
> And change the sites where priv is used to acquire the instance with this.

In this patch we need the mutex to protect the locality counter. We have to hold the mutex
while we do a register access that requires a locality (to make sure that the locality is not
released by another thread shortly before we do the access).

We cannot do the register access while holding a spinlock: for SPI the bus lock mutex is
taken, which requires a sleepable context, and that is not given while a spinlock is held.
So I think we have no choice here, unfortunately.
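
For reference, the SPI path takes the bus lock mutex in tpm_tis_spi_transfer(),
roughly like this (simplified from memory, the exact code may differ):

	/* drivers/char/tpm/tpm_tis_spi_main.c, simplified */
	spi_bus_lock(phy->spi_device->controller);	/* mutex_lock() on the bus lock, may sleep */
	/* ... do the actual SPI transfers ... */
	spi_bus_unlock(phy->spi_device->controller);

So anything that ends up there must be allowed to sleep.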

