( ESNUG 511 Item 5 ) -------------------------------------------- [10/09/12]
Subject: A detailed discussion of On-Chip Networks with Virtual Channels
From: [ Jim Hogan of Vista Ventures LLC ]
Hi, John,
Multi-core SoCs need a high level of system concurrency to achieve high
performance. However, the resulting high number of parallel transactions
can be a nightmare. In some cases you may have as many as 16 master CPU
cores connected to one target. This many connections can result in serious
physical wire congestion or long bus latency as all 16 masters compete for
the bus and memory resources.
A clever fix to this issue is to add Virtual Channels (aka Multi-Threading)
where all the bus transactions -- up to 16 simultaneous transactions -- are
time division MUXed over the one physical bus. This results in all 16 CPUs
getting low latency DRAM access (in exchange for reduced bus bandwidth) over
one actual bus -- hence the name "Virtual Channels".
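Loosely, that time-division muxing can be sketched as a toy round-robin scheduler in Python. This is only an illustration of the concept, not Sonics' actual arbitration logic; all names and queue sizes here are made up:

```python
from collections import deque

# Toy model: N masters share one physical bus via virtual channels.
# Each cycle a round-robin arbiter grants the bus to one pending
# transaction, so every master sees bounded latency at ~1/N bandwidth.
N_MASTERS = 4  # use 16 for the scenario described in the text

def tdm_schedule(queues):
    """Interleave per-master transaction queues onto one bus, round-robin."""
    bus_trace = []          # (cycle, master_id, transaction)
    cycle = 0
    last = -1
    while any(queues):
        # rotate through masters, starting after the last one granted
        for i in range(len(queues)):
            m = (last + 1 + i) % len(queues)
            if queues[m]:
                bus_trace.append((cycle, m, queues[m].popleft()))
                last = m
                break
        cycle += 1
    return bus_trace

queues = [deque([f"rd{m}_{k}" for k in range(2)]) for m in range(N_MASTERS)]
trace = tdm_schedule(queues)
for cycle, master, txn in trace:
    print(cycle, master, txn)
```

No master ever waits more than N cycles for its next grant, which is the "low latency in exchange for reduced bandwidth" trade the text describes.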
The key advantages to Virtual Channels are:
1.) increases the on-chip network throughput without increasing
the number of parallel busses
2.) saves physical area on the die
3.) reduces long latency and traffic congestion
Ironically, although Virtual Channels trade away some per-master bus bandwidth,
they improve overall network bandwidth by adding concurrency to the interconnect path
from the initiator core to the target, such as the memory sub-system. That
is, more CPU cores get the data they request in a timely manner vs. if they
all had to wait their turn on the bus.
    [ Figure: two CPUs using a Virtual Channel to access two DRAMs ]
The other big impact is that since only one physical bus is used, SoCs that use
Virtual Channels are far less congested and easier to physically route.
---- ---- ---- ---- ---- ---- ----
Non-Blocking Networks:
Virtual Channels also greatly enhance QoS by making all buses non-blocking.
The SoC can happily have multiple transactions on the same port without any
transaction pile-ups. High priority traffic is never stuck because lower
priority traffic is blocking passage. E.g. you never have a slow moving
vehicle driving in the express lane, with a fire engine desperately trying
to pass it.
Shared Buffers:
With Virtual Channels, buffer space is shared along with the physical
connections, since now only one physical connection is used. These shared
buffers can absorb any number of packets, as long as the total space they
consume does not exceed the buffer's capacity.
A shared buffer is more efficient than separate independent buffers, since
each separate buffer would have to be sized for the worst-case packet along
its own path. In fact, the shared buffer may not need to be much larger than
the single-channel case.
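The shared-pool accounting amounts to a single occupancy check at admission time. Here is a minimal Python sketch of that idea (class name, capacities, and slot counts are all illustrative assumptions, not figures from any real design):

```python
class SharedBuffer:
    """One pool of buffer slots shared by all virtual channels.

    A packet is admitted as long as total occupancy stays under capacity;
    no per-channel buffer has to be sized for its own worst case.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def try_admit(self, size):
        """Admit a packet of `size` slots if the pool has room."""
        if self.used + size <= self.capacity:
            self.used += size
            return True
        return False

    def release(self, size):
        """Free slots when a packet leaves the buffer."""
        self.used -= size

# Per-channel sizing: 4 channels * worst-case 8 slots = 32 slots total.
# Shared sizing: bursts rarely peak on all channels at once, so a
# 16-slot shared pool can carry the same traffic.
shared = SharedBuffer(capacity=16)
assert shared.try_admit(8)        # channel A bursts
assert shared.try_admit(8)        # channel B bursts too
assert not shared.try_admit(1)    # pool full; back-pressure applies
shared.release(8)                 # channel A drains
assert shared.try_admit(4)        # new traffic fits again
```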
Head-of-Line Blocking:
Typical QoS systems are transaction-based at the master, where each bus
transaction is aligned to its priority level -- multiple levels can be set.
Additionally, each master can have its own priority level.
This method, although simple and intuitive, will eventually lead to conflicts
in the system. For example, as shown below, high priority requests can be
blocked when low priority requests cannot make progress in the network.
    [ Figure: Head-of-Line blocking ]
The high priority master can be denied access because of low priority data
behavior -- this is known as Head-of-Line blocking.
Another situation can occur when each master dynamically raises its priority
(since it is not getting enough bandwidth) and the other masters then raise
their priority because their bandwidth is being starved. The net effect is
that none of the elements gets priority -- the equivalent of road traffic
gridlock. To improve this situation, additional buffering (FIFOs and
arbiters) can be added to the network, but this adds complexity and does not
completely eliminate the problem. There are times when even the low
priority message must get through the network.
    [ Figure: The Virtual Channel creates a non-blocking flow,
      so both master 1 & 2 get DRAM access ]
With a network using Virtual Channels with non-blocking flow control, both of
these failure modes are avoided. The Virtual Channel creates a "passing lane"
when that pesky lower priority traffic is blocking higher priority traffic.
This passing lane keeps any traffic, whether "high" or "low" priority, from
being delayed for too long.
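The contrast between a single shared FIFO and per-priority virtual channels can be shown in a few lines of Python. This toy model (packet tuples, a 6-cycle window, and the "slow"/"fast" target names are all invented for illustration) reproduces the fire-engine-stuck-behind-a-slow-vehicle scenario:

```python
from collections import deque

def drain(queue, target_ready):
    """Drain a FIFO over 6 cycles; a packet departs only when its
    target is ready. Returns departures (None = stalled cycle)."""
    out = []
    q = deque(queue)
    for _ in range(6):
        if q and target_ready(q[0]):
            out.append(q.popleft())
        else:
            out.append(None)  # head packet is stuck, so everyone waits
    return out

# Packet = (priority, target). Target "slow" never becomes ready here.
pkts = [("lo", "slow"), ("hi", "fast")]
ready = lambda p: p[1] == "fast"

# One shared FIFO: the stuck low-priority head blocks the high one.
single = drain(pkts, ready)

# Virtual channels: one queue per priority; "hi" bypasses the jam.
hi_q = [p for p in pkts if p[0] == "hi"]
vc = drain(hi_q, ready)
print(single)  # [None, None, None, None, None, None]
print(vc)      # [('hi', 'fast'), None, None, None, None, None]
```

In the shared FIFO the high-priority packet never departs; with its own virtual channel it departs on the first cycle.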
---- ---- ---- ---- ---- ---- ----
Target-Based QoS:
For additional robustness in the system, QoS can be controlled at the target
instead of the master. How does this help? In many cases, the target of a
transaction request is memory. The memory controller clearly has the most
knowledge about its state, as opposed to the masters trying to access it. So
the QoS logic at the target memory can look back into the network at all the
transactions in the queue. Downstream FIFO level information is forwarded
upstream to arbiters so that the arbiters never select a request that would
be blocked.
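One plausible way to picture that upstream-forwarding step is an arbiter that filters requests by reported downstream FIFO occupancy before applying priority. This sketch is an assumption about the mechanism, not a description of any vendor's implementation; the function, bank names, and numbers are all hypothetical:

```python
def select_request(requests, fifo_free):
    """Grant the highest-priority request whose target FIFO can accept it.

    requests:  list of (priority, target) tuples; higher number wins
    fifo_free: dict of target name -> free slots reported upstream
    """
    eligible = [r for r in requests if fifo_free.get(r[1], 0) > 0]
    if not eligible:
        return None          # everything would block; grant nothing
    return max(eligible, key=lambda r: r[0])

# DRAM bank0's FIFO is full; bank1 has room.
occupancy = {"bank0": 0, "bank1": 3}
reqs = [(7, "bank0"), (2, "bank1")]

# A priority-only arbiter would pick (7, "bank0") and stall the port.
# With occupancy forwarded upstream, the arbiter grants the request
# that can actually make progress:
grant = select_request(reqs, occupancy)
print(grant)  # (2, 'bank1')
```

The arbiter never selects a request that would be blocked, which is exactly the property the target-based scheme provides.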
---- ---- ---- ---- ---- ---- ----
In summary, I believe we are underestimating the number of IP cores that
will be needed in SoCs over the next two years. As applications expand
exponentially there will also be an increasing number of CPU/GPU/DSP cores
required to serve them -- the more IP cores a network can handle, the more
that will be consumed.
Virtual Channels reduce the need to over-design the system since maximum
access can be ensured. Virtual Channels dramatically reduce the number of
bus wires required, creating a more flexible and layout-friendly SoC
architecture. They also lead to higher and more predictable performance
through high system concurrency.
- Jim Hogan
Vista Ventures, LLC Los Gatos, CA
Editor's Note: As mentioned earlier, Jim's on the Sonics board. - John
---- ---- ---- ---- ---- ---- ----
Related Articles
Hogan outlines key market drivers for Network-on-Chip (NoC) IP
Common definitions of On-Chip Communication Network (OCCN) terms
Exploring a designer's Make-or-Buy decision for On-Chip Networks
Metrics checklist for selecting commercial Network-on-Chip (NoC)
Hogan compares Sonics SGN vs. Arteris Flex NOC vs. ARM NIC 400