
2.4 pci_dma_sync_sg fix

To: linux-mips@linux-mips.org
Subject: 2.4 pci_dma_sync_sg fix
From: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Date: Thu, 01 Jul 2004 22:21:20 +0900 (JST)
Cc: ralf@linux-mips.org
Original-recipient: rfc822;linux-mips@linux-mips.org
Sender: linux-mips-bounce@linux-mips.org
pci_dma_sync_sg in the 2.4 tree seems broken: it still assumes every
scatterlist entry has a valid sg->address, but highmem entries carry
sg->page/sg->offset instead.  pci_map_sg was fixed for this a while
ago.  Please fix pci_dma_sync_sg as well.

Here is a patch.

Index: pci.h
===================================================================
RCS file: /home/cvs/linux/include/asm-mips/pci.h,v
retrieving revision 1.24.2.16
diff -u -r1.24.2.16 pci.h
--- pci.h       17 Nov 2003 01:07:45 -0000      1.24.2.16
+++ pci.h       1 Jul 2004 13:10:48 -0000
@@ -270,20 +270,28 @@
  */
 static inline void pci_dma_sync_sg(struct pci_dev *hwdev,
                                   struct scatterlist *sg,
-                                  int nelems, int direction)
+                                  int nents, int direction)
 {
-#ifdef CONFIG_NONCOHERENT_IO
        int i;
-#endif
 
        if (direction == PCI_DMA_NONE)
                out_of_line_bug();
 
-       /* Make sure that gcc doesn't leave the empty loop body.  */
-#ifdef CONFIG_NONCOHERENT_IO
-       for (i = 0; i < nelems; i++, sg++)
-               dma_cache_wback_inv((unsigned long)sg->address, sg->length);
-#endif
+       for (i = 0; i < nents; i++, sg++) {
+               if (sg->address && sg->page)
+                       out_of_line_bug();
+               else if (!sg->address && !sg->page)
+                       out_of_line_bug();
+
+               if (sg->address) {
+                       dma_cache_wback_inv((unsigned long)sg->address,
+                                           sg->length);
+               } else {
+                       dma_cache_wback_inv((unsigned long)
+                               (page_address(sg->page) + sg->offset),
+                               sg->length);
+               }
+       }
 }
 
 /*
---
Atsushi Nemoto
