CVE-2024-53058
Nov. 22, 2024, 5:53 p.m.
Tags
CVSS Score
Products Impacted
Vendor | Product | Versions |
---|---|---|
linux | linux_kernel | |
Description
In the Linux kernel, the following vulnerability has been resolved:

net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data

If the non-paged data of an SKB carries both the protocol header and the protocol payload to be transmitted on a platform whose DMA AXI address width is configured to 40-bit/48-bit, or if the size of the non-paged data exceeds TSO_MAX_BUFF_SIZE on a platform whose DMA AXI address width is configured to 32-bit, then the SKB requires at least two DMA transmit descriptors to serve it.

For example, three descriptors may be allocated to split one DMA buffer mapped from one piece of non-paged data: dma_desc[N + 0], dma_desc[N + 1], dma_desc[N + 2]. Three elements of tx_q->tx_skbuff_dma[] are then allocated to hold extra information to be reused in stmmac_tx_clean(): tx_q->tx_skbuff_dma[N + 0], tx_q->tx_skbuff_dma[N + 1], tx_q->tx_skbuff_dma[N + 2].

The field of interest is tx_q->tx_skbuff_dma[entry].buf, which is the DMA buffer address returned by the DMA mapping call. stmmac_tx_clean() will try to unmap the DMA buffer _ONLY_IF_ tx_q->tx_skbuff_dma[entry].buf is a valid buffer address.

The expected behavior, which saves the DMA buffer address of this non-paged data to tx_q->tx_skbuff_dma[entry].buf, is:

    tx_q->tx_skbuff_dma[N + 0].buf = NULL;
    tx_q->tx_skbuff_dma[N + 1].buf = NULL;
    tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();

Unfortunately, the current code misbehaves like this:

    tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
    tx_q->tx_skbuff_dma[N + 1].buf = NULL;
    tx_q->tx_skbuff_dma[N + 2].buf = NULL;

On the stmmac_tx_clean() side, as soon as dma_desc[N + 0] is closed by the DMA engine, tx_q->tx_skbuff_dma[N + 0].buf is a valid buffer address, so the DMA buffer is unmapped immediately. In the rare case that the DMA engine has not yet finished the pending dma_desc[N + 1] and dma_desc[N + 2], things go horribly wrong: DMA accesses an unmapped/unreferenced memory region, corrupted data is transmitted, or an IOMMU fault is triggered.

In contrast, the for-loop that maps SKB fragments behaves exactly as expected, and that is how the driver should handle both non-paged data and paged frags.

This patch corrects the DMA map/unmap sequence by fixing the array index used for tx_q->tx_skbuff_dma[entry].buf when assigning the DMA buffer address. Tested and verified on DWXGMAC CORE 3.20a.
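The sketch below is a minimal, self-contained C model of the bookkeeping described above; it is not the actual stmmac driver code. The names (struct tx_info, record_mapping_fixed, NDESC) and the fake DMA address are hypothetical, chosen only to illustrate why the unmap information must be stored at the index of the last descriptor covering a split mapping, so that cleanup cannot unmap the buffer while later descriptors are still pending.

```c
/*
 * Simplified model of the tx ring bookkeeping -- NOT the stmmac driver.
 * One DMA mapping is split across NDESC descriptors; the cleanup path
 * unmaps only when it sees a non-zero .buf at the entry it is closing.
 */
#include <stdio.h>
#include <stddef.h>

#define NDESC 3                     /* descriptors covering one mapping */

struct tx_info {
    unsigned long buf;              /* DMA address to unmap, 0 == none */
};

/* Fixed behavior: record the mapping at the LAST descriptor of the split. */
static void record_mapping_fixed(struct tx_info *ring, size_t first,
                                 unsigned long dma_addr)
{
    ring[first + NDESC - 1].buf = dma_addr;
}

/* Buggy behavior: record the mapping at the FIRST descriptor. */
static void record_mapping_buggy(struct tx_info *ring, size_t first,
                                 unsigned long dma_addr)
{
    ring[first].buf = dma_addr;
}

/* Cleanup side: descriptor 'entry' was closed; unmap only if buf is set. */
static void tx_clean_one(struct tx_info *ring, size_t entry)
{
    if (ring[entry].buf) {
        printf("entry %zu: unmapping DMA buffer 0x%lx\n",
               entry, ring[entry].buf);
        ring[entry].buf = 0;
    } else {
        printf("entry %zu: nothing to unmap\n", entry);
    }
}

int main(void)
{
    struct tx_info buggy_ring[8] = { 0 };
    struct tx_info fixed_ring[8] = { 0 };
    unsigned long fake_dma_addr = 0xdead000;

    /* Buggy ordering: the buffer is unmapped as soon as the first
     * descriptor completes, while entries 1 and 2 may still be pending. */
    record_mapping_buggy(buggy_ring, 0, fake_dma_addr);
    puts("buggy ordering:");
    tx_clean_one(buggy_ring, 0);    /* premature unmap */

    /* Fixed ordering: the buffer is unmapped only after the last
     * descriptor of the split has been closed. */
    record_mapping_fixed(fixed_ring, 0, fake_dma_addr);
    puts("fixed ordering:");
    tx_clean_one(fixed_ring, 0);    /* nothing to unmap yet */
    tx_clean_one(fixed_ring, 1);    /* still nothing */
    tx_clean_one(fixed_ring, 2);    /* unmap happens here */
    return 0;
}
```

Under this assumption, the design point is simply ordering: deferring the recorded buffer address to the last descriptor guarantees that the cleanup loop has already walked past every descriptor that still references the mapping before dma_unmap is attempted.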
Weaknesses
Date
Published: Nov. 19, 2024, 6:15 p.m.
Last Modified: Nov. 22, 2024, 5:53 p.m.
Status: Analyzed
The CVE has been analyzed by the NVD, and the data associations (CVSS, CWE, CPE) have been made.
Source
416baaa9-dc9f-4396-8d5f-8c081fb06d67
CPEs
Type | Vendor | Product | Version | Update | Edition | Language | Software Edition | Target Software | Target Hardware | Other Information |
---|---|---|---|---|---|---|---|---|---|---|
o | linux | linux_kernel | / | / | / | / | / | / | / | / |
o | linux | linux_kernel | / | / | / | / | / | / | / | / |
o | linux | linux_kernel | / | / | / | / | / | / | / | / |
o | linux | linux_kernel | / | / | / | / | / | / | / | / |
o | linux | linux_kernel | 6.12 | rc1 | / | / | / | / | / | / |
o | linux | linux_kernel | 6.12 | rc2 | / | / | / | / | / | / |
o | linux | linux_kernel | 6.12 | rc3 | / | / | / | / | / | / |
o | linux | linux_kernel | 6.12 | rc4 | / | / | / | / | / | / |
o | linux | linux_kernel | 6.12 | rc5 | / | / | / | / | / | / |
CVSS Data
Attack Vector
LOCAL
Attack Complexity
LOW
Privileges Required
LOW
Scope
UNCHANGED
Confidentiality Impact
NONE
Integrity Impact
NONE
Availability Impact
HIGH
Base Score
5.5
Exploitability Score
1.8
Impact Score
3.6
Base Severity
MEDIUM
CVSS Vector String
The CVSS vector string provides an in-depth view of the vulnerability metrics.
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H