From: Kees Cook <keescook@chromium.org>
Date: 2018-10-19
Subject: Re: [PATCH] pstore/ram: Clarify resource reservation labels
On Thu, Oct 18, 2018 at 3:33 PM, Dan Williams <dan.j.williams@intel.com> wrote:
> On Thu, Oct 18, 2018 at 3:26 PM Kees Cook <keescook@chromium.org> wrote:
>>
>> On Thu, Oct 18, 2018 at 3:23 PM, Dan Williams <dan.j.williams@intel.com> wrote:
>> > On Thu, Oct 18, 2018 at 3:19 PM Kees Cook <keescook@chromium.org> wrote:
>> >>
>> >> On Thu, Oct 18, 2018 at 2:35 PM, Dan Williams <dan.j.williams@intel.com> wrote:
>> >> > On Thu, Oct 18, 2018 at 1:31 PM Kees Cook <keescook@chromium.org> wrote:
>> > [..]
>> >> > I cringe at users picking addresses because someone is going to enable
>> >> > ramoops on top of their persistent memory namespace and wonder why
>> >> > their filesystem got clobbered. Should attempts to specify an explicit
>> >> > ramoops range that intersects EfiPersistentMemory fail by default? The
>> >> > memmap=nn!ss parameter has burned us many times with users picking the
>> >> > wrong address, so I'd be inclined to hide this ramoops sharp edge from
>> >> > them.
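
For reference, the resource tree already describes EfiPersistentMemory
ranges with IORES_DESC_PERSISTENT_MEMORY, so a check like the following
could reject such overlaps up front. This is only a sketch of the guard
Dan is suggesting; wiring it into ramoops_probe() is hypothetical:

	#include <linux/ioport.h>

	/*
	 * Hypothetical guard for ramoops_probe(): refuse an explicit
	 * range that touches persistent memory, rather than silently
	 * clobbering whatever namespace lives there.
	 */
	if (region_intersects(pdata->mem_address, pdata->mem_size,
			      IORESOURCE_MEM,
			      IORES_DESC_PERSISTENT_MEMORY) != REGION_DISJOINT) {
		pr_err("ramoops: range overlaps persistent memory\n");
		return -EBUSY;
	}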
>> >>
>> >> Yeah, this is what I'm trying to solve. I'd like ramoops to find the
>> >> address itself, but it has to do it really early, so if I can't have
>> >> nvdimm handle it directly, will having regions already allocated with
>> >> request_mem_region() "get along" with the rest of nvdimm?
>> >
>> > If the filesystem existed on the namespace before the user specified
>> > the ramoops command line then ramoops will clobber the filesystem and
>> > the user will only find out when mount later fails. All the kernel
>> > will say is:
>> >
>> > dev_warn(dev, "could not reserve region %pR\n", res);
>> >
>> > ...from the pmem driver, and then the only way to figure who the
>> > conflict is with is to look at /proc/iomem, but the damage is already
>> > likely done by that point.
>>
>> Yeah, bleh. Okay, well, let's just skip this for now, since ramoops
>> doesn't do _anything_ with pmem now. No need to go crazy right from
>> the start. Instead, let's make it work "normally", and if someone
>> needs it for very early boot, they can manually enter the mem_address.
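
For example, with placeholder values (the range must be RAM that
nothing else in the system claims):

	ramoops.mem_address=0x8000000 ramoops.mem_size=0x100000 ramoops.record_size=0x4000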
>>
>> How should I attach a ramoops_probe() call to pmem?
>
> To me this looks like it would be an nvdimm glue driver whose entire
> job is to attach to the namespace, fill out some
> ramoops_platform_data, and then register a "ramoops" platform_device
> for the ramoops driver to find.
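
A minimal sketch of that glue driver's core, assuming a hypothetical
pmem_ramoops_attach() entry point that is handed the namespace's
resource once the driver has claimed it; the "ramoops" device name and
struct ramoops_platform_data are real, the rest is illustrative:

	#include <linux/err.h>
	#include <linux/ioport.h>
	#include <linux/platform_device.h>
	#include <linux/pstore_ram.h>
	#include <linux/sizes.h>

	/* Hypothetical hook: called with the namespace's range. */
	static int pmem_ramoops_attach(struct resource *res)
	{
		struct ramoops_platform_data pdata = {
			.mem_address	= res->start,
			.mem_size	= resource_size(res),
			.record_size	= SZ_64K,
			.console_size	= SZ_64K,
			.dump_oops	= 1,
		};
		struct platform_device *pdev;

		/* The ramoops platform driver binds by name and takes
		 * its layout from this platform data. */
		pdev = platform_device_register_data(NULL, "ramoops", -1,
						     &pdata, sizeof(pdata));
		return PTR_ERR_OR_ZERO(pdev);
	}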

That sounds right, yes. I'm happy to help review/test/etc.

-Kees

--
Kees Cook
Pixel Security
