Editor's Note: Every now and then, when a really interesting topic comes
  up and I receive a very technically oriented piece about it from an EDA
  salesdroid, I have to contravene my informal Death-To-EDA-Salesdroids-And-
  Don't-Let-Them-Post-Here-On-ESNUG policy and publish their piece.  (That
  is, I won't let EDA Salesdroids publish opinions about the very products
  they sell, but I do let them add technical insights.  Only EDA *users*
  can give *opinions* about EDA tools in ESNUG.  For example, I won't let
  Wally Rhines, the CEO of Mentor, brag how his physical verification tool
  kicks butt over EPIC's physical verification tool.  But Wally's free to
  discuss some technical aspect of his or EPIC's tool on ESNUG -- and only
  customers and actual users can voice *opinions* about these tools.)

  That being said, this particular case of seeing-stuff-written-by-
  salesdroids-in-ESNUG involves Platform Computing's LSF tool and their
  Boston distributor, Blackstone EDA.  LSF has become a very hot topic in
  the EDA world because it very efficiently networks PC's and UNIX-based
  workstations together and does automatic, run by run, load balancing of
  EDA runs.  A lot of chip designers I've known for years are independently
  singing the praises of LSF to me.  And we've been discussing LSF quite a
  bit here recently in ESNUG.  So....  for my readers who really hate seeing
  any salesdroid writing in ESNUG, I have carefully gone through the
  Blackstone LSF Whitepaper and ripped out all the marketing pitches and
  non-engineering-centric material I found in it -- yet kept all its juicy
  technical wisdom.  (This edited version is about 40 percent smaller; if
  you want the original 15 page version of this paper to give to your boss,
  feel free to contact Ron Ranauro of Blackstone EDA directly for it.)  :)

                                           - John Cooley
                                             the ESNUG guy

  P.S. Due to the length, this ESNUG consists solely of this edited LSF
       whitepaper and nothing more.  The next ESNUG will be back to our
       normal mix of 10 different threads on various chip-design topics.

( ESNUG 312 Item 1 ) ---------------------------------------------- [2/99]

Subject: ( ESNUG 311 #2 )  The (Edited) Blackstone-EDA Whitepaper On LSF

> I have had very positive results with LSF and praise it very highly,
> particularly for ASIC regression testing.  However, it is an extremely
> configurable tool and requires close cooperation between the
> administrators of the tool and the users.
>
>     - Tom Loftus
>       Hughes Network Systems


From: Ron Ranauro <ranauro@blackstone-eda.com>

John,

This wasn't written specifically for the ESNUG audience, but it gets into
many of the details and issues specifically relating to LSF that the ESNUG
thread is interested in.

FYI, we have built up this expertise over a period of 2+ years, installing
LSF and supporting approximately 35 sites here in the Boston area (95% of
which are doing ASIC/System/IC design with Verilog/VHDL, Synopsys etc).  We
are in the process of re-making ourselves to offer this expertise on a
wider geographic basis, even though we only have license to sell LSF in the
New England market.   Hence the White Paper.

Hope you find it good information.

    - Ron Ranauro
      Blackstone EDA                          Worcester, MA


[ Editor's Note: Again, I want to remind you that below is an edited version
  of the Blackstone Whitepaper on LSF.  This isn't Ron's original!  - John ]


               Optimizing Your Compute Farm with LSF
               -------------------------------------

      An (ESNUG-Edited) Technical White Paper - February 15, 1999


Before going into the details of load sharing strategy and policies, it is
useful to review the recent history of computing.  In the past, companies
relied on two approaches to deliver increased computing capacity to their
users: mainframe-centric and desktop-centric computing.


Mainframe-centric
-----------------

In the mainframe-centric approach, users connect to the mainframe via dumb
terminals.  The mainframe operating system serves as the job scheduler,
seeing to it that all user jobs get access to the CPU.  This
mainframe-centric approach offers straightforward administration, and the
time sharing operating system manages the workload.  However, the approach
is fundamentally not scalable.  The $1 million investment typically required
for a mainframe is a barrier to entry for many smaller companies, and the
cost to increase mainframe capacity is prohibitive even for large companies.
Because of this inherent lack of scalability, and because companies could
be left with an obsolete mainframe before it had been fully depreciated,
mainframe computing rapidly gave way to Unix/RISC architectures where
mainframe-like capacity and performance were affordably placed on an
individual user's desktop.


Desktop-centric
---------------

The desktop-centric approach offers the advantage of cost-effective,
incremental scalability in compute power.  With the advances in
microprocessor technology predicted by Moore's Law, every 12-18 months
companies can justify purchasing the fastest, most powerful machine for
their power users.  These desktop resources remain underutilized most of
the time.  Then, when an individual's need for computing capacity exceeds
what resides on his or her desk, users resort to manually "load balancing"
their jobs.  To access remote resources, users run CPU monitors on their
desktops to determine which remote machine is likely to complete their jobs
soonest, then remotely log in to colleagues' computers to run urgent jobs.
This practice is not only disturbing to colleagues, but it fails to ensure
efficient use of valuable computing resources during periods of peak
demand.

The manual "load balancing" of resources on the network results in some big
inefficiencies:

     1.  Underused desktop resources

     2.  Overloaded shared server resources

     3.  Lost time opportunity cost as users spend valuable time managing
         their compute cycles rather than doing the creative work which
         they were hired to perform.

     4.  Poorly managed software license resources as high priority jobs
         run on slower machines or experience license denials while
         low-priority jobs run


Network-centric
---------------

LSF successfully clusters all the CPU resources in a network into a single
networked entity.  The powerful batch scheduler within LSF lets companies
define policies ensuring that high-priority work is served before
low-priority work, and that users requiring resources receive a "fair
share" of the CPU resources available to them on the network.  Users
no longer have to be aware of the specific hosts that will serve their job
requests.  Rather, they simply submit their jobs to the system and LSF
identifies the available hosts, selects one and automatically runs the job
on it.
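
For a user, submitting through LSF is a one-line change to an existing
command.  A minimal sketch (the queue name, resource string, and simulator
command here are illustrative, not a recommended setup):

```
% bsub -q batch -R "rusage[mem=512]" -o run.log vcs -f run.f
Job <1042> is submitted to queue <batch>.
```

The -R resource string tells the scheduler what the job needs (here, 512 MB
of memory); LSF picks the host.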


Attitude Is Everything
----------------------

One obstacle to adopting the network centric model is the attitudes of
the individuals in the organization.  The view "I've got what I need on my
desk, I don't want to give up my desktop", is a response to the idea that
the individual is going to be asked to "give up" something.  Individuals
like the control and the efficacy of managing their own resource, and
knowing to what extent they can rely on it.  Our experience is that the
way the network-centric model is introduced to the environment, and the
policies that define how desktop and server resources will be used, go a
long way towards allaying the fears of individual desktop users.

There is a transitional period for the organization after introducing load
sharing of shared server resources.  With proper user awareness training and
load sharing policies, we have found that the attitude reversal happens
within a span of 5 working days.  Users who expressed concern about "giving
up" resources, learn that the network supplies vast amounts of computing
resource.  Jobs return faster than they did before, often being served on
machines that they would not have considered for execution.  These same
users come to rely on the load sharing software as though it were an
operating system component.  Management now has control of its computing
environment.  Priorities and fair share access to computing are easily
managed, utilization of depreciating resources is increased, and QoS levels
are easily monitored for capacity planning and cost justification of new
purchases.


Building a Compute Farm
-----------------------

The major components of the Compute Farm architecture are:

 - Applications you will run on the Compute Farm
 - CPU and Memory Resources
 - Operating Systems
 - Network Infrastructure
 - Data Storage Infrastructure
 - Load Sharing Configuration and Scheduling Policies
 - Monitoring and Tuning the Compute Farm

Applications
------------

The best place to start when considering a Compute Farm is to inventory
your applications.

 - Are they single threaded or multi-threaded?
 - Are they client-server architecture?
 - How much graphics processing is required to display the information?
 - How much memory do my applications use?
 - Is the solver process separated from the GUI?
 - What OS-hardware platforms do they run on?

Applications that are single-threaded and do not rely on client-server
architecture are the best candidates for load sharing on a network of
CPU's.  Multi-threaded applications benefit from load sharing, however their
existence might necessitate the inclusion of some number of SMP boxes in
the server pool, depending on the amount of communication required between
threads and the amount of memory each process consumes.

Client-server programs such as a CAD drawing package don't benefit as
much from load sharing.  This is because once the client-server connection
is made, requests made of the server for computation happen on the machine
which hosts the server, and therefore are not easily executed on another,
more available host.  Some recent advances in client-server technology have
introduced 'client-client-server' architectures, where the 'client-client'
connection is made and requests for computation run as separate 'server'
processes.  This approach plays to the strengths of the load-sharing
network.

CPU and Memory Resources
------------------------

The amount of memory your applications consume will determine the
composition and mix of memory and CPU boxes in your Compute Farm.  The
histogram of job count versus memory consumed will determine whether
single-, 2-, or 4-processor systems with suitable memory are most cost
effective.  For example, if most of your jobs require less than 512 MB, but
20% of your jobs require up to 1 GB of RAM, it would be more cost-effective
to buy dual-processor boxes with 1 GB of memory than twice the number of
single-processor boxes, each with 1 GB of memory.

Operating Systems
-----------------

The decision about which operating system to use is a function of your
application base.  If your applications run equally well under Unix, NT or
Linux, you're free to choose the operating system that minimizes your
administrative overhead, and maximizes your return on depreciated assets.
The fundamental tenet of the network-centric architecture is to put the
dollars into the shared resources (the fastest CPU's, the most memory,
fastest disks) and conserve the dollars on the desktop.  This might dictate
a mixed OS strategy.  The load-sharing software works well in a
heterogeneous environment, and depending on application availability can
allow companies to buy the fastest most cost effective hardware,
regardless of OS, and provide these resources transparently to their users.

Data Management Architecture
----------------------------

The decision about whether to have an EDA application write its data to a
local scratch directory versus writing to a network-mounted file system
depends on how much data is being read or written.  If the application runs
for 10 hours and writes 200 MB of data, then it is writing roughly 5,800
bytes per second.  With the appropriate buffering, this amount of data is
probably not enough to slow down the application.  If, however, the
application's run time is 1 hour and it needs to read/write 200 MB, then
caching to local disk and copying the file before/after the execution would
be most appropriate.  Note
that the ultimate cost to the network is the same, however the impact on
the running time of the application can be influenced greatly by the
decision to cache or not to cache.
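
The arithmetic above is easy to reproduce; a quick sketch for the 10-hour
case:

```shell
# 200 MB written over a 10-hour (36,000 second) run
bytes_per_sec=`expr 200 \* 1024 \* 1024 / 36000`
echo "$bytes_per_sec bytes/sec"    # prints "5825 bytes/sec"
```

At this rate even a modest network link is nowhere near saturated; the
1-hour case runs at ten times this rate, which is when local caching starts
to matter.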

For Unix installations, it makes sense to locate the data server and the
application server on the server farm sub-net.  This avoids lengthy
application and data load times.

Microsoft Windows NT also allows centralized serving of application and
data.  However, for increased reliability, performance, and ease of
administration, our experience has shown that the applications should be
installed locally on each server in the load-sharing network.  The Microsoft
SMS utility speeds this process and makes sure the application is installed
identically on each host.  To avoid confusion between users accessing shared
data sets across the network, it is best to use Windows NT Universal Naming
Convention (UNC) path specifications when referring to data files.

For mixed Unix/NT installations, we recommend establishing an NFS file
server for sharing application data, and using the SAMBA Free Software to
provide centralized file services for both Unix and NT.  This allows both
Unix and NT applications to read and write from the same database, and
eliminates the need for replicating files and for maintaining parallel
back-up strategies.

Load Sharing Configuration and Scheduling Policies
--------------------------------------------------

The discussion about what site-specific policies to implement to make the
load-sharing network serve the business needs of the enterprise begins with
the question: "What are the scarce resources we need to manage?"  Some
companies have excess compute capacity, but limited software licenses.
Other
companies have unlimited software licenses, but a limited number of
machines with large memory configurations.

The second major question to ask is: "What are the priorities of work in
our environment?"  In general, there should be an inverse proportion between
the priority of the work, and the numbers and duration of jobs that meet
that criteria.  For example, in a typical EDA environment the high-priority
interactive synthesis and simulation job slots might number 1 or 2 per user
(interactive queue).  For normal priority, non-interactive batch job slots
(batch), there might be a limit of 3 to 5 job slots per user, and the
non-interactive regression job slots (regression) might allow an unlimited
number of job slots for a sub-set of users.  The expectation is that the
interactive queue offers immediate response from the server farm.  Batch
queue jobs should find an open job slot within the span of an hour or two.
And regression queue jobs, which might number in the hundreds, may not
start for days.  (Note: Late in a project cycle, when the regression cycle
time is the critical-path schedule item, the regression queue can be
modified to become the priority queue.)

Queue Configuration Basics
--------------------------

With a detailed understanding of the important resources to manage and the
basic priorities of work, you can begin to design a set of queues to
fulfill your business objectives.  We have developed some simple guidelines
for defining a set of queues for users to submit their jobs:

 - Fewer queues are better than more queues.

 - Each queue should have a distinct purpose.  The user should have
   incentives to submit to lower priority queues, i.e., no resource limits.

 - The higher the scheduling priority of the queue, the stricter the
   resource limit on that queue.  Some examples: limit the number of jobs /
   user; limit the maximum run time of a job from that queue; limit the
   memory a job from that queue can use.

 - The lower the priority, the more relaxed the resource limit.  For
   example, a regression queue might allow 100's of concurrent jobs, but be
   pre-empted by jobs in an interactive queue.

 - Avoid application specific queues, unless specific priority/pre-emption
   strategies are required

 - Best to educate users to specify their resource requirements at job
   submission time, rather than specify the resource requirement at the
   queue level.

 - Avoid partitioning queues to specific hosts.  Let the scheduler use the
   resource requirements to determine the best host for the job

 - Take advantage of LSF's job "nice" value to effectively balance
   interactive and background batch jobs on desktop computes

 - Avoid complicated load balancing strategies.  If your applications are
   CPU- or memory-bound, then specify a maximum of 1 job per processor for
   all machines

 - Avoid paging at all costs
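
As a concrete sketch, the three-queue scheme described earlier might look
like this in LSF's lsb.queues file.  The priorities and limits below are
illustrative only, not recommendations; tune them to your own site:

```
Begin Queue
QUEUE_NAME   = interactive
PRIORITY     = 60
UJOB_LIMIT   = 2                  # at most 2 job slots per user
DESCRIPTION  = "interactive synthesis and simulation"
End Queue

Begin Queue
QUEUE_NAME   = batch
PRIORITY     = 40
UJOB_LIMIT   = 5
DESCRIPTION  = "normal-priority batch jobs"
End Queue

Begin Queue
QUEUE_NAME   = regression
PRIORITY     = 20                 # no per-user limit; lowest priority
DESCRIPTION  = "overnight regression runs"
End Queue
```

Note how the limits tighten as the priority rises, exactly per the
guidelines above.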



Cycle Stealing
--------------

Automating the software engineering compile, build, and test cycle is a
natural application for a load-sharing network.  A software engineer's
workflow cycle involves spending time thinking and editing text files,
followed by a compute-intensive compile, build, and test cycle.  Cycle
stealing seeks to find the idle cycles on desktop workstations for short,
CPU intensive jobs.  A cycle stealing scheduling policy would take advantage
of idle desktops, distribute the load onto the idle machines, and return
the results to the user's desktop machine transparently.  Key scheduling
criteria to determine the suitability of a desktop are CPU utilization and
available memory.  CPU's currently above a certain threshold, say 30%
utilization, would be unavailable for scheduling.  As an added precaution
to avoid interfering with the user's interactive response time, the
scheduled job should be executed at a very "nice" processor priority.  If
the user's machine suddenly became active, the scheduled job would not
have priority for processor cycles and would therefore take longer to
finish, but would not be considered an intrusion on the desktop user's
machine.  Compute tasks that run for a significant duration, say more than
5 minutes, or that require a significant amount of memory are not good
candidates for cycle stealing.  This is because longer-running jobs
increase the likelihood of intrusion on an individual desktop user's
resource, and suspending a large-memory job can severely degrade a desktop
machine's performance while holding significant memory and swap resources
in suspension.
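
The desktop-suitability criteria above map directly onto an LSF resource
requirement string at submission time.  A sketch, where the 30% CPU
threshold and 64 MB memory floor are arbitrary examples:

```
% bsub -R "select[ut < 0.3 && mem > 64] order[ut]" make test
```

select[] filters out busy desktops (ut is the CPU utilization load index,
mem is available memory in MB), and order[ut] prefers the most idle of the
remaining hosts.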

Evenings and weekend processing
-------------------------------

Many customers who have cpu intensive jobs that require significant memory
resources restrict their use of desktop resources to evenings and weekend
processing.  Typically, there is a mix of shared "server" resources to
process the daily workload and a significant number of desktop machines
with ample cpu processing and memory resources.  Because the application
demands significant memory resources to run adequately, scheduling those
jobs on the desktop resources during working hours would interfere with
the users' day-to-day work unacceptably.  However, for many companies, when
the
employees go home, the machines sit idle and therefore represent a
significant opportunity cost for the enterprise.  The LSF software enables
run-windows or dispatch windows to be set up for an individual host or a
group of hosts.  We advise our customers to use the dispatch window so that
no jobs will be dispatched to a desktop host after a certain hour, for
example 7am.  If the majority of jobs run less than 1 hour, it is likely
that overnight jobs dispatched to desktops will be finished before the user
becomes active with his/her machine.  In the event that a dispatched job is
running for a long time, the scheduling software can be programmed to
suspend or terminate and re-queue the job.  For a low priority regression
job, this lost work is a small cost versus the benefits of the extended
computing cycles gained by running over-night on the desktop machines.
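
In LSF, this policy is a per-host dispatch window in the lsb.hosts file.
A sketch, with made-up host names and a 7pm-to-7am window:

```
Begin Host
HOST_NAME      MXJ    DISPATCH_WINDOW
desktop01       1     (19:00-7:00)
desktop02       1     (19:00-7:00)
End Host
```

MXJ caps each desktop at one concurrent job; outside the window no new jobs
are dispatched to the host, though a job already running there can still be
suspended or re-queued by policy.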

Managing Software Licenses
--------------------------

For many enterprises, CPU and memory resources are abundant, but software
licenses are scarce.  For this reason, it is not sufficient to schedule a
job based solely on an available machine, but the combination of a lightly
loaded machine, with enough physical memory, and a software license is
needed.  LSF's architecture for communicating network-wide shared resources
or host-specific resources allows dynamic changes in license availability
to be communicated to the scheduler.  The mechanism for this is the ELIM
(External Load Information Manager).  Blackstone Technology Group provides a
customizable ELIM to perform this task.  This script can interrogate any
software license managed by the Globetrotter FlexLM system, and communicate
its status to the LSF scheduler.  We have found this to be reliable and
robust, even in an environment where not all applications are accessed
through LSF.  The efficient scheduling algorithm of the LSF software can
easily allow the jobs from a regression queue to consume all the licenses,
to the exclusion of the interactive users.  This can be a problem.  To
address this, our script includes provisions for specifying at what time
of day it is "ok" to use all the licenses (evenings), and when a buffer of
licenses should remain for users' interactive use (days).
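
A minimal sketch of such an ELIM, assuming FlexLM's lmstat output format.
A real ELIM loops forever and runs lmstat on each cycle; here a canned
lmstat line stands in so the sketch is self-contained, and the resource
name vcs_lic is made up:

```shell
#!/bin/sh
# One reporting cycle of an ELIM.  LSF reads lines of the form
#   <number-of-indices> <name1> <value1> ...
# from the ELIM's stdout.
LMSTAT="Users of VCS:  (Total of 10 licenses issued;  Total of 7 licenses in use)"
issued=`echo "$LMSTAT" | sed 's/.*Total of \([0-9]*\) licenses issued.*/\1/'`
inuse=`echo "$LMSTAT" | sed 's/.*Total of \([0-9]*\) licenses in use.*/\1/'`
free=`expr $issued - $inuse`
echo "1 vcs_lic $free"
```

With vcs_lic defined as a shared resource in the cluster configuration, a
job submitted with a vcs_lic resource requirement will pend until the ELIM
reports a free license.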

In addition to the ELIM script method described above, simpler license
management policies are available in LSF as well.  A simple approach
is to schedule the job regardless of license availability, and re-queue the
job if it exits with a specific return code.  For example, the Model
Technology simulator exits with a value of 4 if the program exited without
a license.  Another approach is to define an application specific queue and
limit the number of executing jobs from that queue to equal the number of
licenses for that particular application.  This works well if all the
requests for that license flow through that queue, and if all the jobs in
fact do require that license as a condition of execution.  Yet another
approach is to define a pre-exec script, which runs prior to the actual job
on the execution host.  This script tests the availability of the license
and returns 0 if it is OK to run the job; a non-zero return sends the job
back to the front of the queue.  Finally, LSF allows a jobstarter which
runs intrinsic to the job on the execution host, and takes the actual user
job command as an argument and executes it.  The jobstarter script parses
the output log of the job and looks for error status or license status
messages.  Upon finding errors, it returns a known program status code which
instructs the LSF scheduler to re-queue the job due to license failure.  For
all of this flexibility, we have found the external script to be the most
reliable and most straightforward approach.
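
The exit-code approach is one line of queue configuration.  A sketch for
the Model Technology case (lsb.queues fragment; the queue name is
illustrative):

```
Begin Queue
QUEUE_NAME          = batch
REQUEUE_EXIT_VALUES = 4           # MTI exits 4 on license denial
End Queue
```

Any job from this queue that exits with status 4 is put back in the queue
instead of being reported as failed.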

Pre-emption Strategies
----------------------

Many environments require preemption of hardware and software license
resources.  LSF supports preemption of low priority jobs with high priority
jobs.  The action to take on the lower priority job upon being preempted
(suspend, terminate, re-nice processor priority) is programmable in the LSF
system.  If there are abundant CPU job slots to run jobs, but a scarcity of
software licenses, the preemption policy becomes more complex.  This is
because merely suspending a job does not free up the license.  In order to
free the license, the job must either be terminated or the license must be
"removed" and appropriately "restored" upon resumption.  For low priority,
short-duration regression jobs, termination preemption might be a small
cost to pay.  The job is simply re-queued and re-scheduled when there are
ample resources.  If the regression job is long-running and near the end
of its run, however, termination could be a very costly action to take.
One solution is to have the preempting job's pre-execution command
interrogate the jobs running from the preemptable queue, determine the
last scheduled job from that queue, and either suspend/remove or
terminate/re-queue that job as appropriate.  This intelligence makes sure
we interrupt the job with the smallest opportunity cost if we have to
re-run it.

Some applications behave nicely when suspended or put into the background.
For example, the VCS simulator from Synopsys, upon receiving a Control-Z
signal, will suspend and relinquish its license.  Upon resumption it will
re-request the license and resume execution with no lost work.  In this
scenario, LSF is programmed to send the appropriate Control-Z signal to
the preempted VCS job.  Subsequently, when the preempting job is finished,
LSF sends the appropriate resume signal to the preempted job before
exiting.
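
Assuming an LSF version with programmable job controls, the suspend action
can be mapped to Control-Z's signal (SIGTSTP) in the queue definition.  A
sketch:

```
Begin Queue
QUEUE_NAME   = regression
PRIORITY     = 20
JOB_CONTROLS = SUSPEND[SIGTSTP]   # VCS traps this and frees its license
End Queue
```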

Locality of Data
----------------

For many companies, the data network is the precious resource in the
load-sharing network.  Some applications require significant I/O resources,
in addition to CPU and memory.  In this case, the locality of data as a
requirement to scheduling must be considered.  The simplest approach to
this is to copy the data file to the execution host before the execution,
and read and write to a scratch area on the execution host.  For very large
data sets, this may not be practical.  In this case, it might make sense to
"cache" these data sets on specific hosts on the network and see them as
"resources" associated with those hosts.  In this scenario, the LSF
scheduler would see those resources either by cluster configuration or via
ELIM script, and the scheduler would see to it that job resource
requirements are matched to hosts with the appropriate data set.  This
approach becomes impractical if the number of different data set resources
is large (LSF imposes a maximum of 128 external resources to be defined via
ELIM).  In this case, an external data base can be used to maintain the data
set-to-host location relationship.  This information is then used at job
submission time to modify the job's "host preference" list, restricting
the scheduler to pick the best host from a sub-set of hosts, as defined by
the current status of the database.  (Blackstone Technology Group has
created a "data abstraction" enhancement to the LSF system, code named
"Orion" to provide this feature set.)

In general, it is best to rely on the network to serve the data up until
the point where it becomes clear that simply increasing network bandwidth
won't address the bottleneck.  Only then should you consider the added
complexity of caching data sets on specific hosts in order to achieve
faster throughput.

Heterogeneous Execution
-----------------------

LSF supports remote execution across heterogeneous platforms.  The
mechanism to support this is the "jobstarter".  The jobstarter is a script
which runs on the execution host, takes the actual job command line as an
argument, and runs it.  Within this script, it is easy to detect
the host type and set up the appropriate paths and environment variables to
execute the proper binaries etc.  This jobstarter script is also useful for
setting external environment settings, such as with Rational Software's
ClearCase "view" settings.

Monitoring and Tuning the Compute Farm
--------------------------------------

The LSF software maintains detailed logs of all jobs processed by the
system, who submitted them, how long they waited in the queue, how much
memory and CPU resource was consumed, etc.  Used in a historical context,
this information provides detailed and summary views of who is using
resources, how heavily the resources are utilized, and where the
bottlenecks are.  In addition to detailed batch logs, Platform Computing
offers
the LSF Analyzer graphical application to plot and chart detailed
utilization information for host load indices (CPU, memory, paging rates,
I/O rates etc.).  In addition, the Analyzer allows the definition of a rate
multiplier for a load index, and can generate 'charge back' accounting
reports for individual users, groups, or projects.

In general, with a well tuned LSF cluster, the administrator should expect
to see close to 100% CPU utilization, with nearly all hosts busy (LSF's
indication for a host unavailable to accept new jobs).  If the number of
pending jobs stays high, and CPU's remain idle, this could indicate a
mismatch of resources in the environment.  Either the job's resource
requirement is too strict (memory or host preference), or the hosts
themselves aren't configured properly, or there is a scarce resource (i.e.,
software license).  If hosts are running multiple jobs / CPU and the paging
rates are high, this is an indication to reduce the number of running jobs
allowable per CPU.  It's almost never preferable for jobs to page.  If CPU's
are less than 100% utilized, and jobs are backing up in the queues, then
there is some other scarce resource to be isolated.
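
LSF's standard commands are usually enough to tell which of these cases
you are in.  A sketch of the checks:

```
% bhosts            # per-host status and job slot usage
% bqueues           # running/pending counts per queue
% bjobs -p          # pending jobs, each with its pending reason
```

The pending reason distinguishes an over-strict resource requirement from
a queue or slot limit at a glance, without digging through the logs.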

An Alternative to FlexAdmin
---------------------------

Many customers rely on Globetrotter's FlexAdmin utility to monitor their
software license usage.  This tool draws a plot of license feature
utilization versus time, along with an indication of points in time where
license requests were denied.  This information is useful, however in an LSF
managed environment, it is possible that there will be no license denials,
since jobs are queued pending the availability of both a free CPU slot and
a software license.  To address this, Blackstone Technology Group has
implemented an HTML-based charting client, "SmartChart", which uses LSF's
dynamic load information for license resources, and dynamic job information
(running/pending/suspended) to create a plot of number of jobs
(running/pending/suspended) versus number of license resources available.
This information allows administrators and managers to see not only the
peak load for license usage, but to know also how many jobs were pending
due to unavailable license resource.  The depth (number of jobs pending) and
duration (elapsed time before the jobs enter the running state) of the
pending jobs is the true indication of the need to buy more software
licenses to serve the needs of an organization.  Or, this could be an
indication of spurious, unnecessary jobs submitted by a particular user.  In
one case, we were able to identify a large block of jobs which "never
finished", but still consumed a license.  Management was able to take
corrective action to reclaim the licenses and the associated CPU slots in
the Compute Farm, and to further educate their users to use LSF's options
for handling "run away" jobs in the Compute Farm.

Bottleneck Identification
-------------------------

In addition to isolating true license bottlenecks, it's important to know
when memory or CPU availability is the limiting resource.  With the
information available in LSF's logs, you can see the individual and
aggregate host utilization levels for CPU and memory resources.  If CPU and
memory resource utilization is very high, adequate license resources exist,
and there are many jobs pending, this would be an indication that either
memory or CPU resources are the limiting factor.

Alarms and Quality of Service
-----------------------------

Finally, with the information available in LSF's logs, it is possible to
implement an effective "alarm" system.  Here an administrator or manager is
notified when the pending backlog is too high for too long, or when the
number of available CPU's falls below a certain threshold, or when jobs
pend in the queue for more than a specified amount of time.  This data
capture and reporting system enabled by LSF's dynamic load monitoring
technology can be the basis of a total Quality of Service (QoS) contract
between a user organization and its support staff.  Administrators can fully
document the QoS provided and complete utilization and pending backlog
documentation exists to show higher levels of management how resources are
being applied and to make decisions about future investments in hardware
and software components.

Overall, utilization of hardware and software licenses increases by a
factor of 2 or more, and users gain easier access to needed compute time.
For example, one customer estimated that the savings in reduced time spent
managing compute cycles equalled 1/2 hour per engineer per day.




