comp.lang.ada
From: agate!library.ucla.edu!ddsw1!news.kei.com!sol.ctr.columbia.edu!math.ohio-state.edu!darwin.sura.net!seas.gwu.edu!mfeldman@ucbvax.Berkeley.EDU (Michael Feldman)
Subject: Re: storing arrays for Fortran (was: QUERY ABOUT MONITOR)
Date: 4 Aug 93 17:46:51 GMT
Message-ID: <1993Aug4.174651.2765@seas.gwu.edu>

In article <1993Aug4.090104.20732@sei.cmu.edu> ae@sei.cmu.edu (Arthur Evans) writes:
>michael.hagerty@nitelog.com (Michael Hagerty) complains about the
>inability to direct an Ada compiler to store arrays so as to be
>compatible with Fortran.
>
>However, there is nothing that I can see to keep an Ada vendor from
>introducing some of the functionality of Section M.3 of the 9X document
>into an Ada-83 compiler by use of pragmas.  Go poke your favorite
>vendor!
>
That's just why I brought it up a few days ago. I'm poking _all_ of them
at once. IMHO they are just imitating each other anyway. They see only
each other as the competition while the rest of the world leaves them
behind.

There are a lot of aspects of Ada that are not getting exploited because 
the folks who do the compilers seem to do _only_ what the next contract's 
customer wants. It's the Beltway Bandit vs. the entrepreneur mentality. 

Another suggestion, unwelcome as it may be. Why don't these guys get
together with the innovative hardware houses, and get Ada compilers out
simultaneously with new machines, especially parallel ones? Somehow
these guys always manage to get the Fortran and C dialects written; 
with Ada, because of the common front-ends, all they need to do is 
write code generators that really exploit the machines. The language
constructs are there already.

How 'bout a math library (OK, so it's vendor-dependent) that uses
overloaded operators to REALLY do vector and matrix stuff on
parallel (vector) machines? What are you waiting for? They've
already written the implementations, in C and Fortran; all you
need to do is interface 'em nicely to Ada specs.
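To make the overloaded-operator idea concrete, here is a minimal sketch in C++ (chosen so the example can stand alone; the `Vec` type and its element-wise `operator+` are illustrative, not any vendor's actual library). The point is that the single inner loop is exactly what a vector machine's code generator could map onto a hardware vector add — or the body could be swapped for a vendor-tuned Fortran/C routine behind the same operator, while user code still reads `A = B + C`:

```cpp
#include <cstddef>

// Illustrative fixed-size vector type whose overloaded "+" hides the
// element-wise loop.  On a vector machine, this one loop is the kind of
// body a good code generator can turn into a hardware vector add.
struct Vec {
    static const std::size_t N = 10;
    double e[N];
};

Vec operator+(const Vec& b, const Vec& c) {
    Vec a;
    for (std::size_t i = 0; i < Vec::N; ++i)  // vectorizable inner loop
        a.e[i] = b.e[i] + c.e[i];
    return a;
}
```

With this in place, client code is written in mathematical notation (`A = B + C`), and the implementation behind the operator can be retargeted per machine without touching callers — which is the whole argument.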

I heard a neat story the other week about an Ada compiler for some
vector machine or other that took a loop like

 FOR I IN 1..10 LOOP
  A(I) := B(I) + C(I);
 END LOOP;

and vectorized it. Nice, eh? But they took

 FOR I IN Index LOOP      -- Index is a subtype 1..10
   A(I) := B(I) + C(I);
 END LOOP;

and compiled all the code as a straight loop. Didn't they ever hear
of subtypes?
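The two Ada loops above are semantically identical: `Index` is just a named static range, so the compiler knows the bounds 1..10 either way. A rough C++ analogue (function names and the constant `kIndexLast` are mine, purely for illustration) makes the point — a literal bound and a named-constant bound give the same static trip count, and any optimizer ought to treat the two loops identically:

```cpp
#include <cstddef>

// kIndexLast plays the role of "subtype Index is Integer range 1..10":
// a named, compile-time-known bound.
const std::size_t kIndexLast = 10;

void add_literal(double a[10], const double b[10], const double c[10]) {
    for (std::size_t i = 0; i < 10; ++i)          // literal bound
        a[i] = b[i] + c[i];
}

void add_named(double a[10], const double b[10], const double c[10]) {
    for (std::size_t i = 0; i < kIndexLast; ++i)  // same static bound
        a[i] = b[i] + c[i];
}
```

A compiler that vectorizes `add_literal` but not `add_named` has simply failed to propagate a constant it already had — which is exactly the complaint about the subtype case.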

How many compilers out there will compile an array assignment like

  A := B;   -- who cares what the typedefs are

into a _minimum_ number of block-move instructions for that target?
Or do they compile it as an element-by-element loop? That can make
a big performance difference, can't it?
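For illustration, here is the difference in C++ terms (both functions are hypothetical names; `memcpy` stands in for whatever block-move instructions the target has). Both produce identical results — the only difference is the quality of the generated code:

```cpp
#include <cstring>
#include <cstddef>

const std::size_t N = 1000;

// The naive translation of "A := B;" -- one element at a time.
void assign_by_loop(double a[], const double b[]) {
    for (std::size_t i = 0; i < N; ++i)
        a[i] = b[i];
}

// The translation you want: one bulk copy, which the library (or the
// code generator) can implement with the target's block-move hardware.
void assign_by_block_move(double a[], const double b[]) {
    std::memcpy(a, b, N * sizeof(double));
}
```

Since the arrays here are plain contiguous storage, the two are interchangeable; an Ada compiler that knows the representation of both sides should be emitting the second form.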

Sheesh. This is what Ada's high-level constructs (tasking of course, but
also universal array/record assignment, operator overloading, etc.)
were supposed to be ABOUT. NOT one more me-too compiler for Sun SPARC.

Mike Feldman
