RFE: Skeleton for DMA layer #306
@@ -0,0 +1,89 @@
use std::sync::{Arc, Weak};
use crate::dma_manager::GlobalDmaManager;
use user_driver::dma;
// NOTE: `MemoryRange`, `DmaError`, `DmaTransaction`, `DmaTransactionHandler`, and
// `DmaInterface` are used below but not yet imported in this draft; they are expected
// to come from the accompanying DMA modules in this change.
// DMA Client structure, representing a specific client instance
pub struct DmaClient {
    manager: Weak<GlobalDmaManager>,
}
impl DmaClient {
    pub fn new(manager: Weak<GlobalDmaManager>) -> Self {
        Self { manager }
    }
    fn pin_memory(&self, range: &MemoryRange) -> Result<usize, DmaError> {
        let manager = self.manager.upgrade().ok_or(DmaError::InitializationFailed)?;
        let threshold = manager.get_client_threshold(self).ok_or(DmaError::InitializationFailed)?;

        if range.size <= threshold && manager.is_pinned(range) {
            Ok(range.start)
        } else {
            Err(DmaError::PinFailed)
        }
    }
    pub fn map_dma_ranges(
        &self,
        ranges: &[MemoryRange],
    ) -> Result<DmaTransactionHandler, DmaError> {
        let manager = self.manager.upgrade().ok_or(DmaError::InitializationFailed)?;
        let mut dma_transactions = Vec::new();

        let threshold = manager.get_client_threshold(self).ok_or(DmaError::InitializationFailed)?;

Contributor: This line can be moved before defining `dma_transactions` to fail earlier.

        for range in ranges {
            let (dma_addr, is_pinned, is_bounce_buffer) = if range.size <= threshold {
                match self.pin_memory(range) {
                    Ok(pinned_addr) => (pinned_addr, true, false),
                    Err(_) => {
                        let bounce_addr = manager.allocate_bounce_buffer(range.size)?;
                        (bounce_addr, false, true)
                    }
                }
            } else {
                let bounce_addr = manager.allocate_bounce_buffer(range.size)?;
                (bounce_addr, false, true)
            };
            dma_transactions.push(DmaTransaction {
                original_addr: range.start,
                dma_addr,
                size: range.size,
                is_pinned,
                is_bounce_buffer,
                is_physical: !is_bounce_buffer,
                is_prepinned: manager.is_pinned(range),
            });
        }
        Ok(DmaTransactionHandler {
            transactions: dma_transactions,
        })
    }
    pub fn unmap_dma_ranges(&self, dma_transactions: &[DmaTransaction]) -> Result<(), DmaError> {

Contributor: Will this need to be an `&mut` reference to the `dma_transactions`?

        let manager = self.manager.upgrade().ok_or(DmaError::InitializationFailed)?;
        for transaction in dma_transactions {
            if transaction.is_bounce_buffer {
                // Code to release bounce buffer

Contributor: Do we need to copy out from the bounce buffer here?
Contributor: The caller may not know whether it is a bounce buffer or not, so I think we need to handle it here, and pass the memory range to copy the data out. Actually, I think we need to know the IO direction to decide which copy is needed (including the one in map_dma_ranges).

            } else if transaction.is_pinned && !transaction.is_prepinned {
                // Code to unpin memory
            }
        }
        Ok(())
    }
}
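Responding to the thread above about bounce-buffer copies: one way to make the needed copies explicit is a direction hint passed to map/unmap. A minimal sketch with hypothetical names, not something this draft defines:

```rust
/// Direction of a DMA transfer, used to decide which bounce-buffer copies are
/// needed (hypothetical; illustrative only).
pub enum DmaDirection {
    /// Guest to device: copy into the bounce buffer during map_dma_ranges.
    ToDevice,
    /// Device to guest: copy out of the bounce buffer during unmap_dma_ranges.
    FromDevice,
    /// Both copies are needed.
    Bidirectional,
}
```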
// Implementation of the DMA interface for `DmaClient`
impl DmaInterface for DmaClient {
    fn map_dma_ranges(&self, ranges: &[MemoryRange]) -> Result<DmaTransactionHandler, DmaError> {
        self.map_dma_ranges(ranges)
    }

    fn unmap_dma_ranges(&self, dma_transactions: &[DmaTransaction]) -> Result<(), DmaError> {
        self.unmap_dma_ranges(dma_transactions)
    }
}
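Taken together with the manager defined below, the intended flow appears to be: initialize the global manager once, create a per-driver client, then map and unmap ranges. A hypothetical call site (not part of this change; the threshold value is illustrative):

```rust
fn example_usage(
    physical_ranges: Vec<MemoryRange>,
    bounce_buffers: Vec<MemoryRange>,
    ranges: &[MemoryRange],
) -> Result<(), DmaError> {
    // One-time setup, e.g. during worker start-up.
    GlobalDmaManager::initialize(physical_ranges, bounce_buffers)?;

    // Each driver gets its own client with a pinning threshold (4 KiB here).
    let client = GlobalDmaManager::get_instance().create_client(4096);

    // Map for a DMA operation, then unmap once the device is done with it.
    let handler = client.map_dma_ranges(ranges)?;
    client.unmap_dma_ranges(&handler.transactions)?;
    Ok(())
}
```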
@@ -0,0 +1,85 @@
use std::sync::{Arc, Mutex, Weak};
use memory_range::MemoryRange;
use once_cell::sync::OnceCell;
pub use dma_client::{DmaClient, DmaInterface, DmaTransaction, DmaTransactionHandler};

Contributor: I think either clippy or fmt will want you to split these out. Doesn't hurt to run it locally.
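If the repo's rustfmt configuration enforces item-granularity imports (an assumption here), the re-export would end up split like this:

```rust
pub use dma_client::DmaClient;
pub use dma_client::DmaInterface;
pub use dma_client::DmaTransaction;
pub use dma_client::DmaTransactionHandler;
```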
pub enum DmaError {
    InitializationFailed,
    MapFailed,
    UnmapFailed,
    PinFailed,
    BounceBufferFailed,
}

Contributor: Please use source attributes so that we don't lose error origination.
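A sketch of what error-source attributes could look like, assuming the `thiserror` crate; the source types on the variants are placeholders, not the real underlying errors:

```rust
use thiserror::Error;

/// Errors returned by the DMA layer, carrying the originating error as a source.
#[derive(Debug, Error)]
pub enum DmaError {
    #[error("DMA manager initialization failed")]
    InitializationFailed,
    #[error("failed to map DMA ranges")]
    MapFailed(#[source] std::io::Error),
    #[error("failed to unmap DMA ranges")]
    UnmapFailed,
    #[error("failed to pin memory")]
    PinFailed(#[source] std::io::Error),
    #[error("failed to allocate a bounce buffer")]
    BounceBufferFailed,
}
```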
static GLOBAL_DMA_MANAGER: OnceCell<Arc<GlobalDmaManager>> = OnceCell::new();
/// Global DMA Manager to handle resources and manage clients
pub struct GlobalDmaManager {
    physical_ranges: Vec<MemoryRange>,
    bounce_buffers: Vec<MemoryRange>,
    clients: Mutex<Vec<Weak<DmaClient>>>,
    client_thresholds: Mutex<Vec<(Weak<DmaClient>, usize)>>,
}

Contributor: What settings will this manager have? Which of those do you expect to expose in Vtl2Settings?

impl GlobalDmaManager {
    /// Initializes the global DMA manager with physical ranges and bounce buffers
    pub fn initialize(
        physical_ranges: Vec<MemoryRange>,
        bounce_buffers: Vec<MemoryRange>,
    ) -> Result<(), DmaError> {
        let manager = Arc::new(Self {
            physical_ranges,
            bounce_buffers,
            clients: Mutex::new(Vec::new()),
            client_thresholds: Mutex::new(Vec::new()),
        });

        GLOBAL_DMA_MANAGER.set(manager).map_err(|_| DmaError::InitializationFailed)
    }
    /// Accesses the singleton instance of the global manager
    pub fn get_instance() -> Arc<GlobalDmaManager> {
        GLOBAL_DMA_MANAGER
            .get()
            .expect("GlobalDmaManager has not been initialized")
            .clone()
    }
    /// Creates a new `DmaClient` and registers it with the global manager, along with its threshold
    pub fn create_client(&self, pinning_threshold: usize) -> Arc<DmaClient> {
        let client = Arc::new(DmaClient::new(Arc::downgrade(&Self::get_instance())));
        self.register_client(&client, pinning_threshold);
        client
    }
    /// Adds a new client to the list and stores its pinning threshold
    fn register_client(&self, client: &Arc<DmaClient>, threshold: usize) {
        let mut clients = self.clients.lock().unwrap();
        clients.push(Arc::downgrade(client));

        let mut thresholds = self.client_thresholds.lock().unwrap();
        thresholds.push((Arc::downgrade(client), threshold));
    }

Contributor: Is there a need to have per-client bounce buffers?

    /// Retrieves the pinning threshold for a given client
    pub fn get_client_threshold(&self, client: &Arc<DmaClient>) -> Option<usize> {
        let thresholds = self.client_thresholds.lock().unwrap();
        thresholds.iter().find_map(|(weak_client, threshold)| {
            weak_client
                .upgrade()
                .filter(|c| Arc::ptr_eq(c, client))
                .map(|_| *threshold)
        })
    }
    /// Checks if the given memory range is already pinned
    pub fn is_pinned(&self, range: &MemoryRange) -> bool {
        false // Placeholder
    }
    /// Allocates a bounce buffer if available, otherwise returns an error
    pub fn allocate_bounce_buffer(&self, size: usize) -> Result<usize, DmaError> {
        Err(DmaError::BounceBufferFailed) // Placeholder
    }
}

Contributor: Do we need to ensure the bounce buffer is page aligned?
Contributor (author): I envision that bounce buffer management will be aligned.
Contributor: Also note that our current bounce buffer allocation function has an infinite-loop issue, which we want to avoid in the new implementation.
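On the alignment point, a minimal sketch of rounding a requested size up to a page boundary, assuming a 4 KiB page size; the real allocator will own this policy, and must also bound its search to avoid the infinite-loop issue mentioned above:

```rust
const PAGE_SIZE: usize = 4096;

/// Round a requested bounce-buffer size up to the next page boundary.
fn round_up_to_page(size: usize) -> usize {
    // Valid for any size that does not overflow when rounded up.
    (size + PAGE_SIZE - 1) & !(PAGE_SIZE - 1)
}

#[test]
fn round_up_examples() {
    assert_eq!(round_up_to_page(1), 4096);
    assert_eq!(round_up_to_page(4096), 4096);
    assert_eq!(round_up_to_page(4097), 8192);
}
```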
@@ -0,0 +1,31 @@
#[derive(Debug)]
pub enum DmaError {
    InitializationFailed,
    MapFailed,
    UnmapFailed,
    PinFailed,
    BounceBufferFailed,
}
// Structure encapsulating the result of a DMA mapping operation
pub struct DmaTransactionHandler {
    pub transactions: Vec<DmaTransaction>,
}
// Structure representing a DMA transaction with address and metadata
pub struct DmaTransaction {
    pub original_addr: usize,
    pub dma_addr: usize,
    pub size: usize,
    pub is_pinned: bool,
    pub is_bounce_buffer: bool,
    pub is_physical: bool,
    pub is_prepinned: bool,
}

Contributor: Can we have comments explaining these fields? For example, is_pinned and is_bounce_buffer cannot both be true, so why do we need to keep both?
Contributor (author): Yes, I will add it.
Contributor: Agree with Juan. It seems you can go further and do something like:

    pub enum MemoryBacking {
        Pinned { prepinned: bool },
        InBounceBuffer,
    }

And rather than doing ...
Member: If the DMA mapping options are disjoint (i.e. pinned or bounce buffer), then they should be represented with an enum like Matt suggested.
Contributor (author): Yes, will change this to an enum.
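Following that suggestion, a sketch of how the boolean flags could collapse into a single backing enum; names are illustrative, since the author has only said the change to an enum is planned:

```rust
/// How a mapped range is backed for DMA.
pub enum MemoryBacking {
    /// The original guest memory is used directly; `prepinned` records whether
    /// it was already pinned before this transaction.
    Pinned { prepinned: bool },
    /// The data is staged through a bounce buffer.
    BounceBuffer,
}

pub struct DmaTransaction {
    pub original_addr: usize,
    pub dma_addr: usize,
    pub size: usize,
    pub backing: MemoryBacking,
}
```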
// Trait for the DMA interface
pub trait DmaInterface {
    fn map_dma_ranges(&self, ranges: &[MemoryRange]) -> Result<DmaTransactionHandler, DmaError>;
    fn unmap_dma_ranges(&self, dma_transactions: &[DmaTransaction]) -> Result<(), DmaError>;
}

Contributor: Do you envision that this would replace other uses of bounce buffering (for example, copying from private memory into shared memory for isolated VMs, or when the block disk bounces for arm64 guests)?
Contributor (author): Yes, I do envision that.
Contributor: Got it. How do you envision handling the following case? (I'm thinking about the block device driver here, where it would never want to pin memory; the kernel doesn't know about the VTL0 addresses.)
Contributor: We discussed this offline.

Member: Not returning an opaque handle, but asking the caller to provide some pub struct fields, seems a bit odd to me; we can always iterate on this API later since all users will be in-tree. At least, it seems like perhaps ...
Reviewer: I understand this is a draft ... could you add a doc comment so that folks know the intended contract and usage here?
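A possible shape for the requested doc comments, as a sketch of the contract implied by the draft (wording is illustrative, not final):

```rust
/// Trait implemented by DMA clients.
pub trait DmaInterface {
    /// Prepares `ranges` for device DMA, pinning them or staging them through
    /// bounce buffers as needed, and returns one transaction per range.
    ///
    /// The returned transactions must later be passed to `unmap_dma_ranges`.
    fn map_dma_ranges(&self, ranges: &[MemoryRange]) -> Result<DmaTransactionHandler, DmaError>;

    /// Releases the resources taken by `map_dma_ranges`: unpins memory that was
    /// pinned for the transaction and releases any bounce buffers.
    fn unmap_dma_ranges(&self, dma_transactions: &[DmaTransaction]) -> Result<(), DmaError>;
}
```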