Re: help offered

To: Greg Chesson <>
Subject: Re: help offered
From: <>
Date: Wed, 25 Nov 1998 16:38:44 -0500 (EST)
Cc: Ariel Faigon <>, Olivier Galibert <>,
In-reply-to: <>
On Wed, 25 Nov 1998, Greg Chesson wrote:

> But the memory subsystem is ccNUMA.  That means any channel in the system
> can read/write any memory in the system.  With io buffers that comprise
> multiple pages, and with the pages of the buffer located on several different
> memory controllers, multiple io channels can burst (in parallel) to the
> "array" of pages that comprise the buffer.

    Except some of this has to go through the CrayLink.  The memory you
are "bursting" to is not on the same node.  Therefore, if you have a
dual-threaded application that runs over the data, the max bandwidth is
at most 1.6GB/s (seeing as it's advantageous to spread your code across
two nodes and split the memory between them).  If your application can
make use of all processors on that box, then you get the full bandwidth.
The most any single processor in that Origin can handle is 800MB/s, and
if it needs to get that data, eventually that data is shoveled through
the CrayLink (and hopefully it gets migrated there).  Is there anything
flawed with this reasoning?

> file system buffer cache.  These are direct-io transfers between the channel
> and user-supplied buffers.  It's not clear the Linux permits dma to a mapped
> user page.... I get different opinions from folks.  Nevertheless, large pages

    I don't see why it cannot be done.  The page cache and file system
buffer cache are supposed to be merged.  If you mmap that data, you
should just get a pte pointing to that area in the page cache.

> So, a 16-processor Origin can operate a 2 GB/s file system and use only
> 40-50% of its internal bandwidth.  Obviously, many many configurations
> of processors, channels, disks and network devices are possible.

    But that bandwidth isn't single node bandwidth.  No single node can do
4GB/s.  All nodes need to use their local memory to achieve max bandwidth.

                                                - Paul
