Kernel: Stop allocating physical pages for mapped MMIO regions #24279
Conversation
FWIW, to me it was always quite clear that …
AFAICT you don't actually gain anything by backing non-RAM pages with these, since the only two features they support are completely irrelevant/contradictory to non-RAM pages:
So there's no point in using them to back non-RAM regions. We can rename them if you think that helps.
Sure, I see your point here. I probably thought that someday we would use them as a way to expose physical ranges (and what their usage is) to userland in …
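To illustrate the distinction being discussed, here is a small standalone sketch. The types are simplified, made-up stand-ins (not the actual SerenityOS classes): a RAM page is a shared, allocatable resource that benefits from ref-counted tracking, while an MMIO range is fixed by hardware and needs no per-page bookkeeping at all.

```cpp
#include <cstddef>
#include <cstdint>

// A RAM page is tracked by a ref-counted object so the allocator knows when
// the page can be returned to the freelist.
struct PhysicalRAMPage {
    uint64_t paddr { 0 };
    unsigned ref_count { 1 };           // page goes back to the freelist at 0
    bool may_return_to_freelist { true };
};

// An MMIO range is fixed by the hardware (and may lie far above the end of
// RAM, as 64-bit PCI BARs usually do), so there is nothing to allocate,
// refcount, or free; a base address and a length describe it completely.
struct MMIORange {
    uint64_t base_paddr { 0 };
    size_t length { 0 };
};

int main()
{
    PhysicalRAMPage ram_page { 0x10'0000, 1, true };
    MMIORange pci_bar { 0x40'0000'0000ULL, 16 * 1024 * 1024 };
    (void)ram_page;
    (void)pci_bar;
}
```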
Force-pushed from 3ca74ef to 8b343d5
@@ -80,7 +80,7 @@ ErrorOr<void> DisplayConnector::allocate_framebuffer_resources(size_t rounded_si
    if (!m_framebuffer_at_arbitrary_physical_range) {
        VERIFY(m_framebuffer_address.value().page_base() == m_framebuffer_address.value());
        m_shared_framebuffer_vmobject = TRY(Memory::SharedFramebufferVMObject::try_create_for_physical_range(m_framebuffer_address.value(), rounded_size));
SharedFramebufferVMObject::try_create_for_physical_range will currently create an AnonymousVMObject. Should it use an MMIOVMObject now if this constructor is used?
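Roughly, the question is whether the physical-range constructor should pick an MMIO-backed VMObject instead of an anonymous one. The sketch below uses simplified stand-in types, not the real SerenityOS VMObject hierarchy or signatures:

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>

struct VMObject { virtual ~VMObject() = default; };

// Backed by freshly allocated RAM pages; fine when the framebuffer must live in RAM.
struct AnonymousVMObject : VMObject { explicit AnonymousVMObject(size_t) { } };

// Maps a fixed physical range directly; would also cover ranges beyond the end of RAM.
struct MMIOVMObject : VMObject {
    MMIOVMObject(uint64_t paddr, size_t size) : base(paddr), length(size) { }
    uint64_t base; size_t length;
};

// If the caller supplies an explicit physical range, should the shared
// framebuffer be backed like this rather than with an AnonymousVMObject?
std::unique_ptr<VMObject> create_for_physical_range(uint64_t paddr, size_t size)
{
    return std::make_unique<MMIOVMObject>(paddr, size);
}

int main()
{
    auto vmobject = create_for_physical_range(0xfd00'0000, 8 * 1024 * 1024);
    (void)vmobject;
}
```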
It's not obvious to me whether it should; it does a bit too much trickery with switching between regions/VMObjects etc. for me to be sure. @supercomputer7, what do you think?
The Raspberry Pi framebuffer is in the VideoCore memory, which is after RAM.
So if we want to use the devicetree for the memory map on aarch64, we would have to make the SharedFramebufferVMObject work with memory after RAM.
That is the reason why I brought this up.
Generally, I would agree, but there's a bunch of logic in SharedFramebufferVMObject that deals with mapping and remapping the backing regions live (using the PhysicalPages), which I don't think would currently work with MMIOVMObject.
TBH, I'm not sure all of this logic has to be this complicated, but that's an issue for another time.
If we want to support virtual consoles and a graphical desktop, we have to somehow ensure that when you move from a graphical TTY to a non-graphical one (or the other way around), either the VT code gets access or the WindowManager is able to print pixels. This is why it's called a shared buffer, but only one client has access to it at a time.
As for the actual implementation, it could be MMIO-oriented, although some GPUs don't do this and don't really allow MMIO access to the framebuffer; instead you need to initiate a DMA transfer somehow.
But for most of our implementations we should back the shared buffer VMObject with an anonymous VMObject for the shadow copy, and an MMIO one for actual access.
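A minimal model of that scheme, with invented names (this is not the actual SharedFramebufferVMObject code): a RAM shadow copy plus an MMIO-backed view of the real framebuffer, where only the currently active client's writes reach the hardware.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

class SharedFramebufferSketch {
public:
    explicit SharedFramebufferSketch(size_t size)
        : m_shadow(size, 0)
    {
    }

    // Flip which client (VT code vs. the graphical stack) owns the real display.
    void set_graphical_owner(bool graphical) { m_graphical_owns_hw = graphical; }

    // The graphical client's writes reach MMIO only while it is the owner;
    // otherwise they land in the shadow copy so its contents are preserved.
    void write_from_graphical_client(size_t offset, uint8_t const* data, size_t length)
    {
        if (m_graphical_owns_hw)
            write_to_mmio(offset, data, length);
        else
            std::memcpy(m_shadow.data() + offset, data, length);
    }

private:
    static void write_to_mmio(size_t, uint8_t const*, size_t)
    {
        // In a real kernel this would write through the MMIO mapping of the BAR.
    }

    std::vector<uint8_t> m_shadow; // stand-in for the anonymous "shadow" VMObject
    bool m_graphical_owns_hw { false };
};

int main()
{
    SharedFramebufferSketch fb(4096);
    uint8_t pixel[4] = { 255, 0, 0, 255 };
    fb.write_from_graphical_client(0, pixel, sizeof(pixel)); // goes to the shadow copy
    fb.set_graphical_owner(true);
    fb.write_from_graphical_client(0, pixel, sizeof(pixel)); // would hit the hardware
}
```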
If we for some reason get page faults in a high MMIO region, … (see serenity/Kernel/Memory/Region.cpp, line 378 in 6654021, the handle_fault version)
Sure, but there's no legitimate way to get a page fault on a high-MMIO region - i.e. if you did, it must be a kernel bug, so I don't think that's an issue. (If we did want to do something about it, it would only be printing a slightly nicer PANIC message.)
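A sketch of what that "slightly nicer PANIC message" could look like, using made-up helpers (is_high_mmio_address, PANIC) rather than the actual Region::handle_fault code:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>

[[noreturn]] static void PANIC(char const* message, uint64_t fault_address)
{
    std::fprintf(stderr, "KERNEL PANIC: %s (fault address: %#llx)\n",
        message, static_cast<unsigned long long>(fault_address));
    std::abort();
}

static bool is_high_mmio_address(uint64_t paddr)
{
    // Illustrative assumption: everything at or above this boundary is MMIO.
    constexpr uint64_t high_mmio_base = 0x40'0000'0000ULL;
    return paddr >= high_mmio_base;
}

void handle_fault_sketch(uint64_t fault_physical_address)
{
    if (is_high_mmio_address(fault_physical_address)) {
        // MMIO regions are mapped eagerly, so a fault here can only be a kernel bug.
        PANIC("Page fault in MMIO-backed region", fault_physical_address);
    }
    // ... normal demand-paging / copy-on-write handling would follow here ...
}

int main()
{
    handle_fault_sketch(0x1000); // a fault in normal RAM: handled normally
    // handle_fault_sketch(0x40'0000'1000ULL); // would PANIC: MMIO faults are kernel bugs
}
```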
As MMIO is placed at fixed physical addresses, and does not need to be backed by real RAM physical pages, there's no need to use PhysicalPage instances to track their pages. This results in slightly reduced allocations, but more importantly makes MMIO addresses which end up after the normal RAM ranges work, like 64-bit PCI BARs usually are.
Since these are now only used to represent RAM pages (and not MMIO pages), rename them to make their purpose more obvious.
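A standalone illustration (not the kernel's code) of why MMIO needs no per-page tracking objects: every page table entry's physical address can be computed directly from the fixed BAR base, even when that base lies above the end of RAM.

```cpp
#include <cstdint>
#include <cstdio>

constexpr uint64_t page_size = 4096;

// Pretend 64-bit PCI BAR placed well beyond the end of physical RAM.
constexpr uint64_t bar_physical_base = 0x40'0000'0000ULL;

void map_mmio_sketch(uint64_t virtual_base, uint64_t page_count)
{
    for (uint64_t i = 0; i < page_count; ++i) {
        uint64_t vaddr = virtual_base + i * page_size;
        uint64_t paddr = bar_physical_base + i * page_size;
        // In a real kernel this would fill in a PTE (uncached, read/write);
        // here we just show the address arithmetic.
        std::printf("map V %#llx -> P %#llx\n",
            static_cast<unsigned long long>(vaddr),
            static_cast<unsigned long long>(paddr));
    }
}

int main()
{
    map_mmio_sketch(0xffff'8000'1000'0000ULL, 4);
}
```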
Force-pushed from 8b343d5 to 211b638
As MMIO is placed at fixed physical addresses, and does not need to be backed by real RAM physical pages, there's no need to use PhysicalPage instances to track their pages.
This results in slightly reduced allocations, but more importantly makes MMIO addresses which end up after the normal RAM ranges work, like 64-bit PCI BARs usually are.
Fixes #22720.
If #24254 is merged before this, do not merge this until I add a commit to revert the commit in that PR that temporarily disables usage of 64-bit PCI MMIO addresses.