comp.lang.ada
* The future of Spark . Spark 2014 : a wreckage
@ 2013-07-08 17:56 vincent.diemunsch
  2013-07-08 19:24 ` Simon Wright
                   ` (3 more replies)
  0 siblings, 4 replies; 68+ messages in thread
From: vincent.diemunsch @ 2013-07-08 17:56 UTC (permalink / raw)


Have you heard of the new Spark 2014 initiative by AdaCore ?
http://docs.adacore.com/spark2014-docs/html/ug/language_subset.html

Spark 2014 aims to replace Spark annotations with Ada 2012 aspects, to replace proof functions with expression functions, and to use the Frama-C
tools (Alt-Ergo and the Why3 language) for automatic proof. At the beginning I thought this was a good idea, but the more I get into the subject, the more it seems to me that they are going wrong.

First, ASPECTS. Let's take an example that is given in the Spark Reference Manual.

- their syntax is clumsy and error-prone:

Spark 2005 :
   procedure Swap;
   --# global in out X, Y;
   --# derives X from Y &
   --#         Y from X;
  
Same specification in Spark 2014 :
   procedure Swap
      with Global  => (In_Out => (X, Y)),
           Depends => (X => Y,   -- to be read as "X depends on Y"
                       Y => X);  -- to be read as "Y depends on X"

How are we supposed to guess that "X => Y" means X depends on Y, if the arrow goes from X to Y? In the literature, flow analysis always uses information flow where the arrows follow the real movement of information. See for instance "SPARK: The Proven Approach to High Integrity Software" by John Barnes, page 311, or the "Dragon Book" (Compilers: Principles, Techniques, and Tools).

- the syntax creates a complete confusion between a precondition and a test: preconditions are SPECIFICATIONS that ensure that the runtime tests the Ada compiler puts in the code will pass. What is the use of testing at the entry of a procedure what will be tested inside the procedure? Doing the test twice won't improve correctness! That is why preconditions and postconditions must stay as comments.
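
To make the duplication concrete, here is a minimal sketch (hypothetical names, not taken from the Spark manual) of the situation being described: the same condition written once as an Ada 2012 precondition and once as the defensive test a body would run anyway:

   package Stack_Demo is
      Overflow : exception;
      Max : constant := 100;
      type Stack is record
         Top : Natural := 0;
      end record;
      --  The contract already states the condition ...
      procedure Push (S : in out Stack)
         with Pre => S.Top < Max;
   end Stack_Demo;

   package body Stack_Demo is
      procedure Push (S : in out Stack) is
      begin
         --  ... that a defensive body tests a second time at run time.
         if S.Top >= Max then
            raise Overflow;
         end if;
         S.Top := S.Top + 1;
      end Push;
   end Stack_Demo;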


Second, EXPRESSION FUNCTIONS:

Expression functions break the Ada Specification, by allowing the programmer to put the body of a function into the specification of a package. For what purpose? To allow the runtime testing of pre- and postconditions...

But Spark had logical functions that could be declared in the package specification, for expressing pre/postconditions, for instance : 
   --# type StackType is abstract;
   --# function Count_of(S: StackType) return Natural;
   --# function Substack(S1: StackType; S2: StackType) return Boolean;

The problem with Spark was that one had to use an old language, FDL, to express
what these functions meant. So the only thing required here to improve Spark is to be able to write the logical functions IN THE PACKAGE BODY, for instance:

package body Stack
--# own State is Ptr, Vector;

is
   type Ptrs       is range 0..MaxStackSize;
   subtype Indexes is Ptrs range 1..Ptrs'Last;
   type Vectors    is array (Indexes) of Integer;

   Ptr      : Ptrs    := 0;
   Vector   : Vectors := Vectors'(Indexes => 0);
   
   --# function Count_of(S: StackType) return Natural is Ptr;
   --# function Substack(S1: StackType; S2: StackType) return Boolean is
   --#    (for all X in 1 .. S1.Ptr => S1.Vector(X) = S2.Vector(X));

This is much, much better than the Spark 2014 solution because :
- a proof function like Substack is not supposed to be called at runtime, because it compares two states of the same unique stack, not two real stacks! So it should not appear in the compilable code. But Spark 2014 would transform this into a runtime comparison...
- it allows refining, in the body, the abstract type StackType that represents the package state, which is not possible with expression functions in the specification. It seems that the people behind Spark 2014 didn't get that point.

I really hope that Spark users will tell AdaCore that they are going wrong and that AdaCore will really try to make a better future for Spark.

Vincent DIEMUNSCH

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-08 17:56 The future of Spark . Spark 2014 : a wreckage vincent.diemunsch
@ 2013-07-08 19:24 ` Simon Wright
  2013-07-08 20:59 ` Florian Weimer
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 68+ messages in thread
From: Simon Wright @ 2013-07-08 19:24 UTC (permalink / raw)


vincent.diemunsch@gmail.com writes:

> Same specification in Spark 2014 :
>    procedure Swap
>       with Global  => (In_Out => (X, Y)),
>            Depends => (X => Y,   -- to be read as "X depends on Y"
>                        Y => X);  -- to be read as "Y depends on X"
>
> How are we supposed to guess that "X => Y" means X depends on Y, if
> the arrow goes from X to Y ? In the literature, Flow analysis use
> always Information Flow where arrows follows the real move of
> information. See for instance "SPARK the proven approach of High
> Integrity Software" from John Barn page 311 or the "Dragon Book"
> (Compilers : principles, techniques and tools).

Maybe, but a UML dependency would be read the way that you have quoted.
I don't expect users would guess, I expect they'd RTFM.

> - the syntax makes a complete confusion between a precondition and a
> test : Preconditions are SPECIFICATIONS that insures that the runtime
> tests that the Ada compiler puts in the code will pass. What's the use
> of testing at the entry of a procedure what will be tested in the
> procedure ? Doing the test twice won't impove correctness ! That is
> why proconditions and postconditions must stay as comments.

I assume that after one has done a successful proof of correctness with
SPARK 2014 one will build with a configuration that says that assertions
and predicates are to be ignored (ARM 11.4.2). I suspect that for many
real-time systems you'd need to do this to get timely operation anyway;
it would be nice if the compiler could enforce the Note, "Normally, the
boolean expression in a pragma Assert should not call functions that have
significant side effects when the result of the expression is True, so
that the particular assertion policy in effect will not affect normal
operation of the program.", but that's probably equivalent to the
halting problem.

I wonder whether the same consideration should apply to predicates?
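
For illustration, a minimal sketch of the kind of configuration meant here, using the standard Assertion_Policy pragma from ARM 11.4.2, which also accepts per-aspect arguments covering the predicate aspects:

   --  Three alternative policies; a real configuration would pick one.
   pragma Assertion_Policy (Check);    --  development: keep all assertions

   pragma Assertion_Policy (Ignore);   --  after proof: drop them wholesale

   --  Or selectively: drop the contracts but keep the predicate checks.
   pragma Assertion_Policy (Pre               => Ignore,
                            Post              => Ignore,
                            Static_Predicate  => Check,
                            Dynamic_Predicate => Check);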

> Expressions functions break the Ada Specification

Expression functions are *in* the Ada Specification! (assuming you mean
the ARM?)


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-08 17:56 The future of Spark . Spark 2014 : a wreckage vincent.diemunsch
  2013-07-08 19:24 ` Simon Wright
@ 2013-07-08 20:59 ` Florian Weimer
  2013-07-09  7:40   ` Dmitry A. Kazakov
  2013-07-08 21:32 ` Randy Brukardt
  2013-07-08 23:18 ` Peter C. Chapin
  3 siblings, 1 reply; 68+ messages in thread
From: Florian Weimer @ 2013-07-08 20:59 UTC (permalink / raw)


* vincent diemunsch:

> Same specification in Spark 2014 :
>    procedure Swap
>       with Global  => (In_Out => (X, Y)),
>            Depends => (X => Y,   -- to be read as "X depends on Y"
>                        Y => X);  -- to be read as "Y depends on X"
>
> How are we supposed to guess that "X => Y" means X depends on Y, if
> the arrow goes from X to Y ? In the literature, Flow analysis use
> always Information Flow where arrows follows the real move of
> information. See for instance "SPARK the proven approach of High
> Integrity Software" from John Barn page 311 or the "Dragon Book"
> (Compilers : principles, techniques and tools).

This is nothing new.  Boolean implication p → q is already written as
P <= Q in Ada.
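
A small illustration of that reading, relying only on the predefined ordering of Boolean (False < True):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Implication_Demo is
   begin
      --  "not P or Q" is the textbook definition of P -> Q; "P <= Q"
      --  agrees with it because the only pair it rejects is True <= False.
      for P in Boolean loop
         for Q in Boolean loop
            Put_Line (Boolean'Image (P <= Q) & " = " &
                      Boolean'Image (not P or Q));
         end loop;
      end loop;
   end Implication_Demo;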


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-08 17:56 The future of Spark . Spark 2014 : a wreckage vincent.diemunsch
  2013-07-08 19:24 ` Simon Wright
  2013-07-08 20:59 ` Florian Weimer
@ 2013-07-08 21:32 ` Randy Brukardt
  2013-07-09  7:28   ` Dmitry A. Kazakov
                     ` (2 more replies)
  2013-07-08 23:18 ` Peter C. Chapin
  3 siblings, 3 replies; 68+ messages in thread
From: Randy Brukardt @ 2013-07-08 21:32 UTC (permalink / raw)


<vincent.diemunsch@gmail.com> wrote in message 
news:d84b2eb2-eab3-4dec-917c-0498473c1e93@googlegroups.com...
>Have you heard of the new Spark 2014 initiative by AdaCore ?
>http://docs.adacore.com/spark2014-docs/html/ug/language_subset.html
>
>Spark 2014 aims at replacing Spark annotations by Ada 2012 aspects, at 
>replacing
>proof functions by expression functions, and at using the Frama-C tools 
>(Alt-Ergo
>and why3 language) for automatic proof. At the beginning I thought this was 
>a good
>idea, but the more I get into the subject, the more it seems for me that 
>they are going wrong.

I have nothing to do with AdaCore or this Spark update, so what follows is 
personal opinion.

I think existing Spark is evil, in that it funnels off effort best applied 
to the entire Ada language (or at least, a much larger subset). Computers 
are now powerful enough that the excuses for avoiding proofs of exceptions, 
dynamic memory allocation, and the like no longer are valid.

Thus I like what AdaCore is trying to do, because they are bringing Spark 
closer to Ada itself. Which makes it much more likely that the effort that 
goes into Spark can be applied to the full Ada language as well.

>First ASPECTS : Let's take an example, that is given in the Spark Reference 
>Manual.
>
>- their syntax is clumsy and error prone :
>
>Spark 2005 :
>   procedure Swap;
>   --# global in out X, Y;
>   --# derives X from Y &
>   --#         Y from X;

You're certainly right that this is clumsy and makes little sense. :-) A 
routine that modifies global variables in this way shouldn't pass any code 
review, so it's pretty much irrelevant what the annotations would look like.

>
>Same specification in Spark 2014 :
>   procedure Swap
>      with Global  => (In_Out => (X, Y)),
>           Depends => (X => Y,   -- to be read as "X depends on Y"
>                       Y => X);  -- to be read as "Y depends on X"
>
>How are we supposed to guess that "X => Y" means X depends on Y, if the 
>arrow goes
>from X to Y ? In the >literature, Flow analysis use always Information Flow 
>where arrows
>follows the real move of information. See for instance "SPARK the proven 
>approach of
>High Integrity Software" from John Barn page 311 or the "Dragon Book" 
>(Compilers :
>principles, techniques and tools).

I don't know, nor care, because I don't see any reason or value to "Depends" 
information in the first place. Routines shouldn't be modifying globals in 
this sort of way, IMHO, so there is no need to prove anything about them. (Any 
globals should be hidden in package bodies and not exposed to clients.)

The Globals notation is much more important. We tried to design such a 
notation for Ada 2012, but it became too much work to complete and there was 
a lot of concern about getting it wrong. So we left this for the "experts". 
It appears that the experts are working on it, which is good, because we 
still want this capability in the next version of Ada (whenever that will 
be). And for that to be the case, we'll need it to have proper Ada syntax 
using aspects.

>- the syntax makes a complete confusion between a precondition and a test : 
>Preconditions
>are SPECIFICATIONS that insures that the runtime tests that the Ada 
>compiler puts in the
>code will pass. What's the use of testing at the entry of a procedure what 
>will be tested in
>the procedure ? Doing the test twice won't impove correctness ! That is why 
>proconditions
>and postconditions must stay as comments.

If you're doing the test twice, you are completely misusing Ada 2012 (not to 
mention Spark!). The whole point is that the *only* test is in the 
precondition (or predicates). If a proof tool (or better yet, the compiler - 
I believe these proofs need to be part of normal compilation), has proven 
that the precondition can't fail, the compiler is not going to generate any 
test anywhere.

I suppose Spark (which is a separate tool) won't be able to tell the 
compiler to not make the check. But since there will only be one check in 
the code, you certainly won't be getting redundant checks. (The entire point 
of adding raise expressions was that we failed to provide proper mechanisms 
for setting the exception raised in these checks; that's necessary to be 
able to eliminate the existing hand-coded checks in existing code. That's 
why raise expressions were added as a "bug fix" to Ada 2012 and thus are 
required of all Ada 2012 implementations.)
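
For instance, a small sketch (hypothetical names) of the mechanism: the raise expression lets the precondition itself name the exception, so the hand-coded check that used to sit in the body can simply disappear:

   package Accounts is
      Insufficient_Funds : exception;
      Balance : Natural := 0;
      --  The only check is the precondition; the raise expression picks
      --  the exception reported if it fails.
      procedure Withdraw (Amount : Positive)
         with Pre => Amount <= Balance or else raise Insufficient_Funds;
   end Accounts;

   package body Accounts is
      procedure Withdraw (Amount : Positive) is
      begin
         Balance := Balance - Amount;  --  no defensive test needed here
      end Withdraw;
   end Accounts;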

>Second EXPRESSION FUNCTIONS :
>
>Expressions functions break the Ada Specification, by allowing the 
>programmer to
>give the body of a function into the specification of a package. For what 
>purpose ?
>To allow the runtime testing of pre and post conditions...

No, to allow *compile-time* analysis of pre and post conditions. An Ada 
compiler cannot look into the body of a package to see what is there, and 
without expression functions, almost no compile-time analysis could be 
accomplished. (We also need globals annotations for similar reasons.) The 
real end-game here ought to be compile-time proof (that is, proofs that are 
created as part of the normal compilation of Ada code).
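
A minimal sketch (hypothetical package) of what that visibility buys a tool: the meaning of Value is available from the specification alone, so the postcondition on Increment can be analysed without opening any body:

   package Counters is
      type Counter is private;
      function Value (C : Counter) return Natural;
      procedure Increment (C : in out Counter)
         with Post => Value (C) = Value (C)'Old + 1;
   private
      type Counter is record
         Count : Natural := 0;
      end record;
      --  Expression function: the completion is still in the
      --  specification, so an analyser never needs the package body.
      function Value (C : Counter) return Natural is (C.Count);
   end Counters;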

>But Spark had logical functions that could be declared in the package 
>specification, for
>expressing pre/postconditions, for instance :
>   --# type StackType is abstract;
>   --# function Count_of(S: StackType) return Natural;
>   --# function Substack(S1: StackType; S2: StackType) return Boolean;
>
>The problem with Spark was that one had to use an old language : FDL to 
>express
>what these functions meant. So the only thing that is required here for an 
>improvement
>of Spark is to be able to write IN THE PACKAGE BODY the logical function 
>as, for instance :

Proofs of any kind should never, ever depend on anything in the package body 
(other than the one being proved, of course):

(1) Body dependence greatly increases coupling;
(2) Proofs should be able to be created for partially written programs 
(doing proofs as an afterthought will never work); no body ought to be 
required or even to exist. (This also is important so that proprietary code 
can be used and still included in proofs.)
(3) If body dependencies are eliminated, proofs remain valid so long as no 
specification is changed. The only unit that needs re-proving when a body is 
changed is that body itself.

(2) is necessary to use proof tools on large systems. Full source is almost 
never available, and if it is, it isn't available until very late in a 
project. (3) is necessary so that work doesn't have to be done over and over 
and over.

...
>- it allows to do in the body a refinement of the abstract type Stacktype 
>that
> represents the package state, which is not possible with expression 
> functions
> in the specification. It seems that the people of Spark 2014 didn't get 
> that point.

The expression functions go in the private part, where you can have 
visibility onto anything that you need, without exposing it to clients. 
Perhaps you missed that part.
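
For example, a hedged sketch of that arrangement, loosely modelled on the Count_of example from the original post (the names here are made up): the function is declared for clients in the visible part, and its expression-function completion sits in the private part, where the representation is visible:

   package Stacks is
      type Stack is private;
      function Count_Of (S : Stack) return Natural;
   private
      Max : constant := 100;
      type Vector is array (1 .. Max) of Integer;
      type Stack is record
         Top  : Natural := 0;
         Data : Vector  := (others => 0);
      end record;
      --  Completion in the private part: clients never see it, yet it
      --  needs no package body and is visible to analysis tools.
      function Count_Of (S : Stack) return Natural is (S.Top);
   end Stacks;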

> I realy hope that Spark users will say to AdaCore that they aro going 
> wrong
> and that AdaCore will really try to make a better future for Spark.

Spark users are on a certain dead-end, IMHO. The future is getting proof 
into the hands of all programmers, as a normal part of their day-to-day 
programming. That requires that proofs aren't necessarily complete, that 
they use whatever information is available, and most of all, that they only 
depend on the specifications of packages (never, ever, on the contents of a 
body).

It sounds like this is the direction that Spark 2014 is taking, which means 
IMHO that we'll all benefit, not just the few who can live with draconian 
limitations that turn Ada into a form of Fortran 77.

Just my $1 worth (inflation, you know).

                                  Randy.






^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-08 17:56 The future of Spark . Spark 2014 : a wreckage vincent.diemunsch
                   ` (2 preceding siblings ...)
  2013-07-08 21:32 ` Randy Brukardt
@ 2013-07-08 23:18 ` Peter C. Chapin
  2013-07-09  7:34   ` Stefan.Lucks
  3 siblings, 1 reply; 68+ messages in thread
From: Peter C. Chapin @ 2013-07-08 23:18 UTC (permalink / raw)



On 07/08/2013 01:56 PM, vincent.diemunsch@gmail.com wrote:

>     procedure Swap
>        with Global  => (In_Out => (X, Y)),
>             Depends => (X => Y,   -- to be read as "X depends on Y"
>                         Y => X);  -- to be read as "Y depends on X"
>
> How are we supposed to guess that "X => Y" means X depends on Y, if the arrow goes from X to Y ?

Well the phrase "X depends on Y" has X on the left and Y on the right 
just as X => Y does. So it reads fairly naturally to me. I understand 
there is value in being consistent with existing notations although I 
get the impression there is already conflicting practice in that area.

More significantly, though, I think you may be assigning too much 
semantic weight to the '=>' token in this case. In the aspect syntax it 
is used more as a separator than anything else. Consider

   Pre => X < 0

Nobody would say that there is information flowing from "Pre" to the 
expression 'X < 0' or vice versa. The SPARK 2014 syntax extends this 
notion of "arrow as separator" to the subcomponents of the aspects. Thus

   Depends => (X => Y)

The "inner" arrow just separates X from Y syntactically. It's not really 
related to information flow... although its easy to see why someone 
would assume that it was considering the semantics of the Depends aspect 
overall.

Peter

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-08 21:32 ` Randy Brukardt
@ 2013-07-09  7:28   ` Dmitry A. Kazakov
  2013-07-09 11:29     ` Simon Wright
  2013-07-09 20:37     ` Randy Brukardt
  2013-07-09  7:57   ` The future of Spark . Spark 2014 : a wreckage Stefan.Lucks
  2013-07-10 12:19   ` vincent.diemunsch
  2 siblings, 2 replies; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-09  7:28 UTC (permalink / raw)


On Mon, 8 Jul 2013 16:32:21 -0500, Randy Brukardt wrote:

>> I realy hope that Spark users will say to AdaCore that they aro going 
>> wrong and that AdaCore will really try to make a better future for Spark.
> 
> Spark users are on a certain dead-end, IMHO. The future is getting proof 
> into the hands of all programmers, as a normal part of their day-to-day 
> programming. That requires that proofs aren't necessarily complete,

Rather the programmer should announce what to prove and that proof must be
complete.

> that 
> they use whatever information is available, and most of all, that they only 
> depend on the specifications of packages (never, ever, on the contents of a 
> body).

There are obviously goals which cannot be proven without the bodies. Proofs
about specifications are important but far from all of them. Consider LSP as
an example. You cannot prove substitutability from interfaces. It simply does
not work. Same with almost everything else, because proving something in
general is much harder, even impossible, than proving things about a
concrete instance, e.g. a body.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-08 23:18 ` Peter C. Chapin
@ 2013-07-09  7:34   ` Stefan.Lucks
  2013-07-09 15:21     ` Adam Beneschan
  0 siblings, 1 reply; 68+ messages in thread
From: Stefan.Lucks @ 2013-07-09  7:34 UTC (permalink / raw)



On Mon, 8 Jul 2013, Peter C. Chapin wrote:

> On 07/08/2013 01:56 PM, vincent.diemunsch@gmail.com wrote:
>
>>     procedure Swap
>>        with Global  => (In_Out => (X, Y)),
>>             Depends => (X => Y,   -- to be read as "X depends on Y"
>>                         Y => X);  -- to be read as "Y depends on X"
>> 
>> How are we supposed to guess that "X => Y" means X depends on Y, if the 
>> arrow goes from X to Y ?
>
> Well the phrase "X depends on Y" has X on the left and Y on the right just as 
> X => Y does. So it reads fairly naturally to me. I understand there is value 
> in being consistent with existing notations although I get the impression 
> there is already conflicting practice in that area.
>
> More significantly, though, I think you may be assigning too much semantic 
> weight to the '=>' token in this case. In the aspect syntax it is used more 
> as a separator than anything else. Consider
>
>  Pre => X < 0

Fair enough! The "=>" is not an arrow, it is just a syntactic token to 
separate two other tokens.

Unfortunately, this token greatly (and certainly intentionally, in the 
early days of Ada) resembles an arrow. And this is really confusing in 
situations where an arrow would make sense, but where the "=>" goes in 
the wrong direction. "Pre =>" isn't an issue, but X => Y above is.

Actually, this has been irritating me when using named notation for out 
parameters ever since I learned Ada. If Y is an out parameter, it should 
be Do_Something (X <= Y), shouldn't it?

This isn't such a big issue that Ada will actually need another separator 
token, such as "<=" ... we'll have to live with "=>". But maybe one could 
turn this "bug" into a feature, by defining an alternative to the Depends 
annotation^H^H^H^H^H^H^H^H^H^H aspect, where the "=>" points in the 
correct direction, e.g.:

      procedure Swap
         with Global           => (In_Out => (X, Y)),
              Information_Flow => (Y => X,   -- "X depends on Y"
                                   X => Y);  -- "Y depends on X"

as a legal replacement for the usage of "Depends". That would be a lot 
more intuitive!

Alas, it is probably too late for SPARK 2014 ... sigh!


------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
     <http://www.uni-weimar.de/cms/medien/mediensicherheit/home.html>
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-08 20:59 ` Florian Weimer
@ 2013-07-09  7:40   ` Dmitry A. Kazakov
  2013-07-09  8:30     ` Georg Bauhaus
                       ` (2 more replies)
  0 siblings, 3 replies; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-09  7:40 UTC (permalink / raw)


On Mon, 08 Jul 2013 22:59:30 +0200, Florian Weimer wrote:

> * vincent diemunsch:
> 
>> Same specification in Spark 2014 :
>>    procedure Swap
>>       with Global  => (In_Out => (X, Y)),
>>            Depends => (X => Y,   -- to be read as "X depends on Y"
>>                        Y => X);  -- to be read as "Y depends on X"
>>
>> How are we supposed to guess that "X => Y" means X depends on Y, if
>> the arrow goes from X to Y ? In the literature, Flow analysis use
>> always Information Flow where arrows follows the real move of
>> information. See for instance "SPARK the proven approach of High
>> Integrity Software" from John Barn page 311 or the "Dragon Book"
>> (Compilers : principles, techniques and tools).
> 
> This is nothing new.  Boolean implication p → q is already written as
> P <= Q in Ada.

Where?

It would be plain wrong, because logical implication (not P or Q) must be
denoted as P => Q.

   http://en.wikipedia.org/wiki/Material_conditional

BTW, there is a difference between implication (Boolean operation) and
consequence like

   P |= Q

   http://en.wikipedia.org/wiki/Logical_consequence#Semantic_consequence

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-08 21:32 ` Randy Brukardt
  2013-07-09  7:28   ` Dmitry A. Kazakov
@ 2013-07-09  7:57   ` Stefan.Lucks
  2013-07-09 20:46     ` Randy Brukardt
  2013-07-10 12:19   ` vincent.diemunsch
  2 siblings, 1 reply; 68+ messages in thread
From: Stefan.Lucks @ 2013-07-09  7:57 UTC (permalink / raw)



On Mon, 8 Jul 2013, Randy Brukardt wrote:

> I think existing Spark is evil, in that it funnels off effort best applied
> to the entire Ada language (or at least, a much larger subset). Computers
> are now powerful enough that the excuses for avoiding proofs of exceptions,
> dynamic memory allocation, and the like no longer are valid.

Agreed! Well, "evil" is a bit harsh -- existing SPARK fills a gap that Ada 
until recently has left wide open, which was a good thing at its time. But 
Ada 2012 has narrowed that gap a lot, and it is a good thing that Ada and 
SPARK are now evolving together.

>> Spark 2005 :
>>   procedure Swap;
>>   --# global in out X, Y;
>>   --# derives X from Y &
>>   --#         Y from X;
>
> You're certainly right that this is clumsy and makes little sense. :-) A
> routine that modifies global variables in this way shouldn't pass any code
> review, so it's pretty much irrelevant what the annotations would look like.

Randy, here you are really throwing out the baby with the bathwater. Sure, a 
procedure to swap two globals is a silly example -- but the topic here 
is the syntax for information flow analysis, and the example has the 
benefit of being very simple. The following doesn't use globals and might 
pass a code review:

Spark 2005 :
    procedure Swap(X, Y: in out T);
    --# derives X from Y &
    --#         Y from X;


Same specification in Spark 2014 :
    procedure Swap(X, Y: in out T)
       with Depends => (X => Y,   -- to be read as "X depends on Y"
                        Y => X);  -- to be read as "Y depends on X"

>> How are we supposed to guess that "X => Y" means X depends on Y, if the
>> arrow goes from X to Y ? In the >literature, Flow analysis use always 
>> Information Flow where arrows follows the real move of information. 
[...]

> I don't know, nor care, because I don't see any reason or value to "Depends"
> information in the first place.

So you think, information flow analysis is completely worthless? (I mean, 
for building and verifying correct systems -- as a compiler writer you 
will not question the usage of information flow analysis inside the 
compiler or optimiser).

> Routines shouldn't be modifying globals in
> this sort of way, IMHO, so there is no need to proof them. (Any globals
> should be hidden in package bodies and not exposed to clients.)

Agreed. The OP's example modifies globals, mine doesn't; the issue raised 
by the OP remains.

>> - the syntax makes a complete confusion between a precondition and a 
>> test : Preconditions are SPECIFICATIONS that insures that the runtime 
>> tests that the Ada compiler puts in the code will pass. What's the use 
>> of testing at the entry of a procedure what will be tested in the 
>> procedure ? Doing the test twice won't impove correctness ! That is why 
>> proconditions and postconditions must stay as comments.
>
> If you're doing the test twice, you are completely misusing Ada 2012 (not to
> mention Spark!). The whole point is that the *only* test is in the
> precondition (or predicates). If a proof tool (or better yet, the compiler -
> I believe these proofs need to be part of normal compilation), has proven
> that the precondition can't fail, the compiler is not going to generate any
> test anywhere.

Agreed! Many people, like the OP, seem to confuse this: Ada 2012 is about 
*specifying* contracts. It *allows* proving them, or checking them at run 
time if proving them has not been possible. A good toolset would not 
generate code to check conditions that have already been proven.

> Spark users are on a certain dead-end, IMHO.

That is a bit unfair, IMHO. SPARK users have a real head start when it 
comes to applying the really innovative features of Ada 2012.

> It sounds like this is the direction that Spark 2014 is taking, which means
> IMHO that we'll all benefit, not just a few that can live with draconian
> limitations that turns Ada into a form of Fortran 77.

Agreed!

------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
     <http://www.uni-weimar.de/cms/medien/mediensicherheit/home.html>
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09  7:40   ` Dmitry A. Kazakov
@ 2013-07-09  8:30     ` Georg Bauhaus
  2013-07-09  8:58       ` Dmitry A. Kazakov
  2013-07-09 15:14     ` Adam Beneschan
  2013-07-09 22:51     ` Robert A Duff
  2 siblings, 1 reply; 68+ messages in thread
From: Georg Bauhaus @ 2013-07-09  8:30 UTC (permalink / raw)


On 09.07.13 09:40, Dmitry A. Kazakov wrote:
> On Mon, 08 Jul 2013 22:59:30 +0200, Florian Weimer wrote:
>
>> * vincent diemunsch:
>>
>>> Same specification in Spark 2014 :
>>>     procedure Swap
>>>        with Global  => (In_Out => (X, Y)),
>>>             Depends => (X => Y,   -- to be read as "X depends on Y"
>>>                         Y => X);  -- to be read as "Y depends on X"
>>>
>>> How are we supposed to guess that "X => Y" means X depends on Y, if
>>> the arrow goes from X to Y ? In the literature, Flow analysis use
>>> always Information Flow where arrows follows the real move of
>>> information. See for instance "SPARK the proven approach of High
>>> Integrity Software" from John Barn page 311 or the "Dragon Book"
>>> (Compilers : principles, techniques and tools).
>>
>> This is nothing new.  Boolean implication p → q is already written as
>> P <= Q in Ada.
>
> Where?

In Boolean expressions. With obvious, I think, abbreviations
and referring to ordering of (False, True), and (0, 1),

p q | p → q   (p <= q)
-----------------------
1 1 |   1       True
1 0 |   0       False
0 1 |   1       True
0 0 |   1       True



^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09  8:30     ` Georg Bauhaus
@ 2013-07-09  8:58       ` Dmitry A. Kazakov
  2013-07-09 10:27         ` G.B.
  2013-07-09 11:13         ` G.B.
  0 siblings, 2 replies; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-09  8:58 UTC (permalink / raw)


On Tue, 09 Jul 2013 10:30:59 +0200, Georg Bauhaus wrote:

> On 09.07.13 09:40, Dmitry A. Kazakov wrote:
>> On Mon, 08 Jul 2013 22:59:30 +0200, Florian Weimer wrote:
>>
>>> * vincent diemunsch:
>>>
>>>> Same specification in Spark 2014 :
>>>>     procedure Swap
>>>>        with Global  => (In_Out => (X, Y)),
>>>>             Depends => (X => Y,   -- to be read as "X depends on Y"
>>>>                         Y => X);  -- to be read as "Y depends on X"
>>>>
>>>> How are we supposed to guess that "X => Y" means X depends on Y, if
>>>> the arrow goes from X to Y ? In the literature, Flow analysis use
>>>> always Information Flow where arrows follows the real move of
>>>> information. See for instance "SPARK the proven approach of High
>>>> Integrity Software" from John Barn page 311 or the "Dragon Book"
>>>> (Compilers : principles, techniques and tools).
>>>
>>> This is nothing new.  Boolean implication p → q is already written as
>>> P <= Q in Ada.
>>
>> Where?
> 
> In Boolean expressions. With obvious, I think, abbreviations
> and referring to ordering of (False, True), and (0, 1),
> 
> p q | p → q   (p <= q)
> -----------------------
> 1 1 |   1       True
> 1 0 |   0       False
> 0 1 |   1       True
> 0 0 |   1       True

This is p=>q

    p=>q  =def=  not p or q
/= q=>p  =def=  not q or p

p<=q = q=>p is

p q | p<=q
----------
1 1 | 1
1 0 | 1
0 1 | 0
0 0 | 1

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09  8:58       ` Dmitry A. Kazakov
@ 2013-07-09 10:27         ` G.B.
  2013-07-09 11:13         ` G.B.
  1 sibling, 0 replies; 68+ messages in thread
From: G.B. @ 2013-07-09 10:27 UTC (permalink / raw)


On 09.07.13 10:58, Dmitry A. Kazakov wrote:
> On Tue, 09 Jul 2013 10:30:59 +0200, Georg Bauhaus wrote:
> 
>> On 09.07.13 09:40, Dmitry A. Kazakov wrote:
>>> On Mon, 08 Jul 2013 22:59:30 +0200, Florian Weimer wrote:

>>>> This is nothing new.  Boolean implication p → q is already written as
>>>> P <= Q in Ada.
>>>
>>> Where?
>>
>> In Boolean expressions. With obvious, I think, abbreviations
>> and referring to ordering of (False, True), and (0, 1),
>>
>> p q | p → q   (p <= q)
>> -----------------------
>> 1 1 |   1       True
>> 1 0 |   0       False
>> 0 1 |   1       True
>> 0 0 |   1       True
> 
> This is p=>q
>
>      p=>q  =def=  not p or q

"Boolean implication p → q is already written as
 P <= Q in Ada."

As shown above. "=>" has no "Boolean" meaning in Ada.
(It has, in Prolog.) In implementation defined aspects,
GNAT is free to assign "depends" to "=>", UML style.

I guess we need to adapt to even more overloadings
if symbolophobia is part of Ada culture.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09  8:58       ` Dmitry A. Kazakov
  2013-07-09 10:27         ` G.B.
@ 2013-07-09 11:13         ` G.B.
  1 sibling, 0 replies; 68+ messages in thread
From: G.B. @ 2013-07-09 11:13 UTC (permalink / raw)


On 09.07.13 10:58, Dmitry A. Kazakov wrote:
> On Tue, 09 Jul 2013 10:30:59 +0200, Georg Bauhaus wrote:
> 
>> On 09.07.13 09:40, Dmitry A. Kazakov wrote:
>>> On Mon, 08 Jul 2013 22:59:30 +0200, Florian Weimer wrote:
>>>
>>>> * vincent diemunsch:
>>>>
>>>>> Same specification in Spark 2014 :
>>>>>      procedure Swap
>>>>>         with Global  => (In_Out => (X, Y)),
>>>>>              Depends => (X => Y,   -- to be read as "X depends on Y"
>>>>>                          Y => X);  -- to be read as "Y depends on X"
>>>>>
>>>>> How are we supposed to guess that "X => Y" means X depends on Y, if
>>>>> the arrow goes from X to Y ? In the literature, Flow analysis use
>>>>> always Information Flow where arrows follows the real move of
>>>>> information. See for instance "SPARK the proven approach of High
>>>>> Integrity Software" from John Barn page 311 or the "Dragon Book"
>>>>> (Compilers : principles, techniques and tools).
>>>>
>>>> This is nothing new.  Boolean implication p → q is already written as
>>>> P <= Q in Ada.
>>>
>>> Where?
>>
>> In Boolean expressions. With obvious, I think, abbreviations
>> and referring to ordering of (False, True), and (0, 1),
>>
>> p q | p → q   (p <= q)
>> -----------------------
>> 1 1 |   1       True
>> 1 0 |   0       False
>> 0 1 |   1       True
>> 0 0 |   1       True
> 
> This is p=>q

"Boolean implication p → q is already written as
 P <= Q in Ada."

As shown above. "=>" has no "Boolean" meaning in Ada.
(It has, in Prolog.) In implementation defined aspects,
GNAT is free to assign "depends" to "=>", UML style.

I guess we need to adapt to even more overloadings,
since symbolophobia is part of Ada culture.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09  7:28   ` Dmitry A. Kazakov
@ 2013-07-09 11:29     ` Simon Wright
  2013-07-09 12:22       ` Dmitry A. Kazakov
  2013-07-09 20:37     ` Randy Brukardt
  1 sibling, 1 reply; 68+ messages in thread
From: Simon Wright @ 2013-07-09 11:29 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> On Mon, 8 Jul 2013 16:32:21 -0500, Randy Brukardt wrote:

>> that they use whatever information is available, and most of all,
>> that they only depend on the specifications of packages (never, ever,
>> on the contents of a body).
>
> There are obviously goals which cannot be proven without the
> bodies. Proofs about specifications are important but far not
> all. Consider LSP as an example. You cannot prove substitutability
> from interfaces. It simply does not work. Same with almost everything
> else, because proving something in general is much more harder, even
> impossible, than proving things about a concrete instance, e.g. body.

I think what Randy meant (and may even have said) was that in

   with A;
   package B is ...

it shouldn't be necessary to look at the body of A to prove the body of
B. Of course the body of A will have to be proved ... but not right now.


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09 11:29     ` Simon Wright
@ 2013-07-09 12:22       ` Dmitry A. Kazakov
  0 siblings, 0 replies; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-09 12:22 UTC (permalink / raw)


On Tue, 09 Jul 2013 12:29:03 +0100, Simon Wright wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> On Mon, 8 Jul 2013 16:32:21 -0500, Randy Brukardt wrote:
> 
>>> that they use whatever information is available, and most of all,
>>> that they only depend on the specifications of packages (never, ever,
>>> on the contents of a body).
>>
>> There are obviously goals which cannot be proven without the
>> bodies. Proofs about specifications are important but far not
>> all. Consider LSP as an example. You cannot prove substitutability
>> from interfaces. It simply does not work. Same with almost everything
>> else, because proving something in general is much more harder, even
>> impossible, than proving things about a concrete instance, e.g. body.
> 
> I think what Randy meant (and may even have said) was that in
> 
>    with A;
>    package B is ...
> 
> it shouldn't be necessary to look at the body of A to prove the body of
> B. Of course the body of A will have to be proved ... but not right now.

Yes this is how I understood him and what I objected to.

You cannot rely on the contracts when it comes to correctness.

OK, for the sake of the argument, you could contract proofs about A's body,
so that these proofs could be used in the proofs of the body of B. But the
problem with that is exactly what Randy wanted to avoid - tight coupling
and doing things in advance which nobody might ever need.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09  7:40   ` Dmitry A. Kazakov
  2013-07-09  8:30     ` Georg Bauhaus
@ 2013-07-09 15:14     ` Adam Beneschan
  2013-07-09 22:51     ` Robert A Duff
  2 siblings, 0 replies; 68+ messages in thread
From: Adam Beneschan @ 2013-07-09 15:14 UTC (permalink / raw)


On Tuesday, July 9, 2013 12:40:55 AM UTC-7, Dmitry A. Kazakov wrote:

> > This is nothing new.  Boolean implication p → q is already written as
> > P <= Q in Ada.
> 
> Where?

It's an accident, really.  <= means "less than or equal to", and it works for implication just because it so happens that p → q is true except when p=TRUE and q=FALSE, which just happens to be the same behavior as p <= q.  Treating <= as an arrow was sort of a joke, I think.

                               -- Adam

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09  7:34   ` Stefan.Lucks
@ 2013-07-09 15:21     ` Adam Beneschan
  0 siblings, 0 replies; 68+ messages in thread
From: Adam Beneschan @ 2013-07-09 15:21 UTC (permalink / raw)


On Tuesday, July 9, 2013 12:34:59 AM UTC-7, Stefan...@uni-weimar.de wrote:

> Actually, this has been irritating me when using named notation for out 
> parameters, since I have learned Ada. If Y is an out parameter, it should 
> be Do_Something (X <= Y), shouldn't it?

<= already has another meaning, so that wouldn't work.  As some of us have pointed out before, an early version of Ada did have different symbols; instead of the arrow, =>, it used =: for IN parameters, := for OUT parameters, :=: for IN OUT parameters.  I don't really know why this idea was dropped before Ada 83 was finalized.  I'm beginning to think it should have been left in.  Especially now that [IN] OUT parameters are allowed in function calls; I've been big on the idea of providing some syntax for a function call that would let the reader know that the call might be changing one of the parameters.

                            -- Adam

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09  7:28   ` Dmitry A. Kazakov
  2013-07-09 11:29     ` Simon Wright
@ 2013-07-09 20:37     ` Randy Brukardt
  2013-07-10 10:03       ` Dmitry A. Kazakov
  1 sibling, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-09 20:37 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1vc73wjwkqmnl$.gx86xaqy60u4$.dlg@40tude.net...
> On Mon, 8 Jul 2013 16:32:21 -0500, Randy Brukardt wrote:
>
>>> I realy hope that Spark users will say to AdaCore that they aro going
>>> wrong and that AdaCore will really try to make a better future for 
>>> Spark.
>>
>> Spark users are on a certain dead-end, IMHO. The future is getting proof
>> into the hands of all programmers, as a normal part of their day-to-day
>> programming. That requires that proofs aren't necessarily complete,
>
> Rather the programmer should announce what to prove and that proof must be
> complete.

Right. And that should be an integral part of compilation: no proof, no 
compilation.

>> that
>> they use whatever information is available, and most of all, that they 
>> only
>> depend on the specifications of packages (never, ever, on the contents of 
>> a
>> body).
>
> There are obviously goals which cannot be proven without the bodies. 
> Proofs
> about specifications are important but far not all. Consider LSP as an
> example. You cannot prove substitutability from interfaces. It simply does
> not work. Same with almost everything else, because proving something in
> general is much more harder, even impossible, than proving things about a
> concrete instance, e.g. body.

My contention is such things are not worth proving. With strong enough 
contracts (including class-wide contracts for dispatching calls, and 
contracts on subprogram types :-), important properties of code can be 
proven. What's too hard shouldn't be bothered with, because we've lived just 
fine with no proofs at all for programming up to this point. Proofs should 
be an aid to debugging, not the be-all, end-all. I want to add proof 
technology incrementally so that everyone can use a little of it, benefit 
from it, and then use more of it. It's how most Ada programmers learned to 
use types and constraints, and it's the way to get proofs into the 
mainstream. Not by stand-alone tools and subsets with weird rules.

                                   Randy.






^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09  7:57   ` The future of Spark . Spark 2014 : a wreckage Stefan.Lucks
@ 2013-07-09 20:46     ` Randy Brukardt
  2013-07-10  8:03       ` Stefan.Lucks
  0 siblings, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-09 20:46 UTC (permalink / raw)


<Stefan.Lucks@uni-weimar.de> wrote in message 
news:alpine.DEB.2.10.1307090935310.23845@debian...
On Mon, 8 Jul 2013, Randy Brukardt wrote:

...
>> I don't know, nor care, because I don't see any reason or value to 
>> "Depends"
>> information in the first place.
>
>So you think, information flow analysis is completely worthless? (I mean,
>for building and verifying correct systems -- as a compiler writer you
>will not question the usage of information flow analysis inside the
>compiler or optimiser).

Information flow *within* a subprogram is important. Information flow 
*between* subprograms is completely defined by the subprogram's interfaces 
and calls. Other information ought to be irrelevant (no one should be 
depending upon what the body of Swap does, other than whatever is specified 
in the postcondition).
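
As a small illustration (hypothetical names), the postcondition below is everything a caller, or a proof about a caller, may rely on about Swap; which body implements it is irrelevant to that proof:

   package Swapping is
      procedure Swap (X, Y : in out Integer)
         with Post => X = Y'Old and Y = X'Old;
   end Swapping;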

Janus/Ada does not do any inter-procedural optimizations, and I doubt that 
it would ever do any beyond inlining and parameter elimination -- neither of 
which have anything to do with flow analysis.

I realize that one can prove additional properties with this additional 
information, but my contention is that trying to do so is harmful (it adds a 
lot of coupling to a program) and isn't valuable to the vast majority of 
users. That is where Spark mostly went wrong, IMHO.

                Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09  7:40   ` Dmitry A. Kazakov
  2013-07-09  8:30     ` Georg Bauhaus
  2013-07-09 15:14     ` Adam Beneschan
@ 2013-07-09 22:51     ` Robert A Duff
  2013-07-10  7:49       ` Dmitry A. Kazakov
  2 siblings, 1 reply; 68+ messages in thread
From: Robert A Duff @ 2013-07-09 22:51 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> On Mon, 08 Jul 2013 22:59:30 +0200, Florian Weimer wrote:
>> This is nothing new.  Boolean implication p → q is already written as
>> P <= Q in Ada.
>
> Where?
> 
> It would be plain wrong, ...

Less-or-equal on Booleans is indeed equivalent to implication.
But that's a hack.  Considering Booleans as ordered is questionable --
I never heard of such a thing in mathematical logic.

>...because logical implication (not P or Q) must be
> denoted as P => Q.

It must be?  In Eiffel, it is denoted by the "implies" keyword.
In Ada 2012, one writes "(if P then Q)", which is shorthand for
"(if P then Q else True)".

- Bob


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09 22:51     ` Robert A Duff
@ 2013-07-10  7:49       ` Dmitry A. Kazakov
  2013-07-10  8:21         ` Georg Bauhaus
  0 siblings, 1 reply; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-10  7:49 UTC (permalink / raw)


On Tue, 09 Jul 2013 18:51:36 -0400, Robert A Duff wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> On Mon, 08 Jul 2013 22:59:30 +0200, Florian Weimer wrote:
>>> This is nothing new.  Boolean implication p → q is already written as
>>> P <= Q in Ada.
>>
>> Where?
>> 
>> It would be plain wrong, ...
> 
> Less-or-equal on Booleans is indeed equivalent to implication.
> But that's a hack.  Considering Booleans as ordered is questionable --
> I never heard of such a thing in mathematical logic.

Truth values are ordered in some logics. In others they are not.

>>...because logical implication (not P or Q) must be
>> denoted as P => Q.
> 
> It must be?

Yes, if logical implication is indeed meant here, of which I am not sure.

> In Eiffel, it is denoted by the "implies" keyword.
> In Ada 2012, one writes "(if P then Q)", which is shorthand for
> "(if P then Q else True)".

But this is not an implication (Boolean operation). It is a rule based on
implication. E.g. Modus ponens:

P, P=>Q
------------
Q

So "implies" and "if-then" are correct, while P->Q looks wrong even when
spelled correctly as P=>Q.

If there should be a symbol, I think,

   P |= Q  or P |- Q

would be more appropriate.

Why had we introduced Unicode? After all, there is a code point for this:

  P ⊧ Q    U+22A7  Logical consequence

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09 20:46     ` Randy Brukardt
@ 2013-07-10  8:03       ` Stefan.Lucks
  2013-07-10 12:46         ` Simon Wright
  2013-07-10 23:10         ` Randy Brukardt
  0 siblings, 2 replies; 68+ messages in thread
From: Stefan.Lucks @ 2013-07-10  8:03 UTC (permalink / raw)


On Tue, 9 Jul 2013, Randy Brukardt wrote:

> <Stefan.Lucks@uni-weimar.de> wrote:

>> So you think, information flow analysis is completely worthless? (I mean,
>> for building and verifying correct systems -- as a compiler writer you
>> will not question the usage of information flow analysis inside the
>> compiler or optimiser).
>
> Information flow *within* a subprogram is important. Information flow
> *between* subprograms is completely defined by the subprogram's interfaces
> and calls.

Firstly, for current Ada (i.e., Ada 2012) this is completely wrong 
whenever a program manipulates global variables. Even specifying pre- and 
postconditions doesn't guarantee the absence of any other (side-)effects.
(You are right, however, a proper "global" annotation would seem to catch 
this.)

Secondly, Ada's "in", "out", and "in out" modes give only a cursory 
view of what is going on. Even *without* pre- and postconditions, verifying 
a SPARK-like specification of, say,

    procedure Count_Vote(Vote: Vote_Type;
                         Count_A, Count_B, Invalid: in out Natural)
        with Depends => (Count_A => (Vote, Count_A),
                         Count_B => (Vote, Count_B),
                         Invalid => (Vote, Invalid));

gives a great deal of assurance -- a lot more than the same specification 
without the Depends aspect.

Now consider the following not-quite-complete contract with pre- and 
postconditions:

    procedure Count_Vote(Vote: Vote_Type;
                         Count_A, Count_B, Invalid: in out Natural)
        with Pre  => (Count_A < Natural'Last and
                      Count_B < Natural'Last and
                      Invalid < Natural'Last),
             Post => (Count_A >= Count_A'Old and
                      Count_B >= Count_B'Old and
                      Invalid >= Invalid'Old and
                      Count_A + Count_B + Invalid =
                        Count_A'Old + Count_B'Old + Invalid'Old + 1);

At first look, this may appear to be more precise. Each vote will 
increase exactly one counter, either Count_A, Count_B or Invalid. All 
that is left open is how Count_Vote decides which counter to 
increase.

However, a correct (up to the specification) implementation could do the 
following:

    procedure Count_Vote(Vote: Vote_Type;
                         Count_A, Count_B, Invalid: in out Natural) is
    begin
       if (Vote = 'A') or (Vote = 'a') then
          Count_A := Count_A + 1;
       elsif (Vote = 'B') or (Vote = 'b') then
          if (Count_A + Count_B < 1000) or else
             Count_B < Count_A then
             Count_B := Count_B + 1;
          else
             Count_A := Count_A + 1;
             -- Make sure A wins, assuming more than 2000 valid votes.
          end if;
       else
          Invalid := Invalid + 1;
       end if;
    end Count_Vote;

Without "Depends", your cool analysis tool verifying all pre- and 
postconditions will not catch this bug. And yes, I am aware that another 
implementation could cheat without violating the "Depends" clause. But 
ensuring an "innocent" result (a marginal lead for A where the real result 
would be a marginal lead for B) would be a lot more difficult, if not 
impossible.

The above is a malicious example, but the "frame problem" is a big issue 
for program analysis in general. It appears to be fairly difficult to 
write pre- and postconditions such that you can be sure nothing else is 
going on. "Global" and "Depends" make it possible to state "nothing else" 
claims that analysis tools can actually verify and use for their subsequent 
analysis, and "Depends" is a lot more specific than "Global".

> Other information ought to be irrelevant (no one should be
> depending upon what the body of Swap does, other than whatever is specified
> in the postcondition).

I agree that no one should depend on what the body of some Subprogram 
does. But then, information flow analysis is actually useful!




------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
     <http://www.uni-weimar.de/cms/medien/mediensicherheit/home.html>
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-10  7:49       ` Dmitry A. Kazakov
@ 2013-07-10  8:21         ` Georg Bauhaus
  0 siblings, 0 replies; 68+ messages in thread
From: Georg Bauhaus @ 2013-07-10  8:21 UTC (permalink / raw)


On 10.07.13 09:49, Dmitry A. Kazakov wrote:
> On Tue, 09 Jul 2013 18:51:36 -0400, Robert A Duff wrote:

>>> ...because logical implication (not P or Q) must be
>>> denoted as P => Q.
>>
>> It must be?
>
> Yes, if logical implication is indeed meant here, of which I am not sure.
>
>> In Eiffel, it is denoted by the "implies" keyword.
>> In Ada 2012, one writes "(if P then Q)", which is shorthand for
>> "(if P then Q else True)".
>
> But this is not an implication (Boolean operation). It is a rule based on
> implication. E.g. Modus ponens:
>
> P, P=>Q
> ------------
> Q
>
> So "implies" and "if-then" are correct, while P->Q looks wrong even when
> spelled correctly as P=>Q.
>
> If there should be a symbol, I think,
>
>     P |= Q  or P |- Q
>
> would be more appropriate.

Appropriate even though semantic and syntactic entailment resp.
are not usually considered part of the language governing P and Q?

Wouldn't we rather be writing computable expressions and
feeding them to mechanical testing of satisfiability? (Which
means, right now, we have just formulae, but not any language
for speaking about arguments or about consistent sets.)

What's your plan?



^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-09 20:37     ` Randy Brukardt
@ 2013-07-10 10:03       ` Dmitry A. Kazakov
  2013-07-10 23:21         ` Randy Brukardt
  0 siblings, 1 reply; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-10 10:03 UTC (permalink / raw)


On Tue, 9 Jul 2013 15:37:10 -0500, Randy Brukardt wrote:

>> There are obviously goals which cannot be proven without the bodies. Proofs
>> about specifications are important but far not all. Consider LSP as an
>> example. You cannot prove substitutability from interfaces. It simply does
>> not work. Same with almost everything else, because proving something in
>> general is much more harder, even impossible, than proving things about a
>> concrete instance, e.g. body.
> 
> My contention is such things are not worth proving. With strong enough 
> contracts (including class-wide contracts for dispatching calls, and 
> contracts on subprogram types :-), important properties of code can be 
> proven.

No, that is the problem. You want to overburden the contracts with stuff
that does not belong there. This would make the design fragile.

Consider stack use as an example. Do you want to contract all subprograms
to a maximum stack use, or let the compiler estimate the stack use of a
concrete body?

Another example is a mathematical function like sine. In order to prove its
correctness you would need an extremely elaborate apparatus of contract
specification, which itself would be a source of countless bugs due to its
complexity. This is typical over-specification. Only if somebody wanted
to verify an implementation of sine would he have to look into the body.
Most people never will.

Contracts should not define semantics, only its most general properties. Yet
semantics should sometimes be proven. The provider does not know when; it
is up to the user. In that case he has to look into the body.
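
As a small sketch of that distinction (hypothetical unit): the contract below states only a general property of the result; the full mathematical meaning of sine is deliberately left out of the contract, and anyone who wants that verified has to look at the body:

   package Trig is
      --  Only a coarse, general property is contracted.
      function Sine (X : Float) return Float
         with Post => Sine'Result in -1.0 .. 1.0;
   end Trig;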

> What's too hard shouldn't be bothered with, because we've lived just 
> fine with no proofs at all for programming up to this point.

We didn't. Accessibility checks and constraint errors are a permanent
headache in Ada.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-08 21:32 ` Randy Brukardt
  2013-07-09  7:28   ` Dmitry A. Kazakov
  2013-07-09  7:57   ` The future of Spark . Spark 2014 : a wreckage Stefan.Lucks
@ 2013-07-10 12:19   ` vincent.diemunsch
  2013-07-10 16:02     ` Simon Wright
  2013-07-11  0:36     ` Randy Brukardt
  2 siblings, 2 replies; 68+ messages in thread
From: vincent.diemunsch @ 2013-07-10 12:19 UTC (permalink / raw)



Hello Randy,

> I think existing Spark is evil, in that it funnels off effort best applied 
> to the entire Ada language (or at least, a much larger subset). Computers 
> are now powerful enough that the excuses for avoiding proofs of exceptions, 
> dynamic memory allocation, and the like no longer are valid.

Spark imposes static memory allocation to be able to prove total correctness,
which means that the program will always do what is expected, which is required
by embedded software...


> The Globals notation is much more important. We tried to design such a 
> notation for Ada 2012, but it became too much work to complete and there was 
> a lot of concern about getting it wrong. So we left this for the "experts". 
> It appears that the experts are working on it, which is good, because we 
> still want this capability in the next version of Ada (whenever that will 
> be). And for that to be the case, we'll need it to have proper Ada syntax 
> using aspects.

Ada has been designed so as to let global variables be freely accessed. Only a pragma was used to specify that a function is Pure (of side effects) and can be skipped for optimisation. Therefore it is now almost impossible to change that, and that is probably the reason why you had so many difficulties defining the global annotation, which is trivial in itself: how will you deal with a procedure that has no annotation but still uses global variables? A simple warning at compile time? Refuse to compile?

Aspect syntax is ugly. It is simply the return of a kind of primitive language like LISP, done with words and arrows. It clearly shows that it is simply a workaround.


> No, to allow *compile-time* analysis of pre and post conditions. An Ada 
> compiler cannot look into the body of a package to see what is there, and 
> without expression functions, almost no compile-time analysis could be 
> accomplished. (We also need globals annotations for similar reasons.) The 
> real end-game here ought to be compile-time proof (that is, proofs that are 
> created as part of the normal compilation of Ada code).

A proof can't be generated at compile time! The proof must be written independently, based on a formal specification, and checked by an analyser, which may or may not be the compiler.


> Proofs of any kind should never, ever depend on anything in the package body 

As others told you, a proof is a proof that the implementation respects the specification; therefore it needs to interpret the package body.

> Spark users are on a certain dead-end, IMHO. The future is getting proof 
> into the hands of all programmers, as a normal part of their day-to-day 
>  programming. That requires that proofs aren't necessarily complete, that 
> they use whatever information is available, and most of all, that they only 
> depend on the specifications of packages (never, ever, on the contents of a 
>  body).

But you also seem to confuse a test and a proof, and that is exactly what AdaCore is doing now : they simply write the tests as postconditions
and use symbolic interpretation at compile time. I think that is what you mean by "into the hands of all programmers", i.e. even those with very little theoretical background.

It's a nice feature to sell and great fun for compiler writers, but it's far from a mathematical proof and from what Spark was able to do. But mathematical proof requires formal specification, and that is far from the "all programmers" knowledge. The problem is that recent books on Spark completely forget to speak of the necessary formal specifications (like Z, for instance).


To conclude, here is my vision of Spark and Ada :
- use Ada for things that are necessary for the compiler to produce the code and to express the semantics of the program.
- use pragmas only as compiler directives, for specific implementation or optimisation (Inline, Ravenscar, Remote_Call_Interface...). Not for assertions, tests or whatever.
- everything that is related to formal verification should stay as comments in the code. These comments can then be verified by external tools. There can be different levels of verification, not only the Spark vision.
- formal specifications are necessary for formal verification, therefore Spark should be able to make the link between formal annotations and formal specifications.
- tests can be put either as comments or as external files, but please not as aspects. It is ugly to write the detail of a procedure in the spec.


Forget aspects and, worse, expression functions : they are simply an error.
Mixing testing, specification and proofs as pragmas or aspects, or a combination of both, will result in an overly complicated language lacking abstraction and a proper syntax.

Regards 

Vincent



^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-10  8:03       ` Stefan.Lucks
@ 2013-07-10 12:46         ` Simon Wright
  2013-07-10 23:10         ` Randy Brukardt
  1 sibling, 0 replies; 68+ messages in thread
From: Simon Wright @ 2013-07-10 12:46 UTC (permalink / raw)


Stefan.Lucks@uni-weimar.de writes:

>    procedure Count_Vote(Vote: Vote_Type;
>                         Count_A, Count_B, Invalid: out Natural)
> 	with Depends (Count_A => Vote, Count_A,
>                       Count_B => Vote, Count_B,
>                       Invalid => Vote, Invalid);

We'd hope this would fail verification because, e.g., Count_A is an out
parameter and therefore the result can't depend on the original value!
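
A minimal sketch of how the intended dependence could be stated so that it
passes flow analysis, assuming the counters are accumulators and therefore
in out parameters (the type and names are made up):

   package Voting is
      type Vote_Type is (Candidate_A, Candidate_B, Spoiled);

      procedure Count_Vote (Vote    : Vote_Type;
                            Count_A : in out Natural;
                            Count_B : in out Natural;
                            Invalid : in out Natural)
        with Depends => (Count_A => (Count_A, Vote),
                         Count_B => (Count_B, Vote),
                         Invalid => (Invalid, Vote));
   end Voting;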


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-10 12:19   ` vincent.diemunsch
@ 2013-07-10 16:02     ` Simon Wright
  2013-07-11  0:36     ` Randy Brukardt
  1 sibling, 0 replies; 68+ messages in thread
From: Simon Wright @ 2013-07-10 16:02 UTC (permalink / raw)


vincent.diemunsch@gmail.com writes:

> Spark imposes static memory allocation to be able to prove total
> correctness, which means that the program will always do what is
> expected, which is required by embedded softwares...

Not at all sure that's true! Safety-related, security-related: probably;
other fields, not so much.

I just saw the Tony Hoare quote from 1995 (3rd quotation of [1]):

   "Ten years ago, researchers into formal methods (and I was the most
   mistaken among them) predicted that the programming world would
   embrace with gratitude every assistance promised by formalisation to
   solve the problems of reliability that arise when programs get large
   and more safety-critical. Programs have now got very large and very
   critical -- well beyond the scale which can be comfortably tackled by
   formal methods. There have been many problems and failures, but these
   have nearly always been attributable to inadequate analysis of
   requirements or inadequate management control. It has turned out that
   the world just does not suffer significantly from the kind of problem
   that our research was originally intended to solve."

[1] http://en.wikipedia.org/wiki/Tony_Hoare#Quotations

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-10  8:03       ` Stefan.Lucks
  2013-07-10 12:46         ` Simon Wright
@ 2013-07-10 23:10         ` Randy Brukardt
  2013-07-11 19:23           ` Stefan.Lucks
  1 sibling, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-10 23:10 UTC (permalink / raw)


<Stefan.Lucks@uni-weimar.de> wrote in message 
news:alpine.DEB.2.10.1307100904530.27942@debian...
On Tue, 9 Jul 2013, Randy Brukardt wrote:
> <Stefan.Lucks@uni-weimar.de> wrote:

>>> So you think, information flow analysis is completely worthless? (I 
>>> mean,
>>> for building and verifying correct systems -- as a compiler writer you
>>> will not question the usage of information flow analysis inside the
>>> compiler or optimiser).
>>
>> Information flow *within* a subprogram is important. Information flow
>> *between* subprograms is completely defined by the subprogram's 
>> interfaces
>> and calls.
>
>Firstly, for current Ada (i.e., Ada 2012) this is completely wrong
>whenever a program manipulates global variables. Even specifying pre- and
>postconditions doesn't guarantee the absence of any other (side-)effects.
>(You are right, however, a proper "global" annotation would seem to catch
>this.)

I'm always presuming a proper globals annotation. You really can't do 
anything without it (Janus/Ada does almost no optimizations across 
subprogram calls precisely because Ada doesn't have this information in its 
contracts).

>Secondly, Ada's "in", "out", and "in out" information give only a cursory
>view of what is going on.

Of course. But that and the postcondition is all you ought to be depending 
upon. If you're depending on more, your program is more fragile than it 
ought to be.

... (detailed and possibly interesting examples of Depends removed)

 ...
>Without "Depends", your cool analysis tool verifying all pre- and
>postconditions will not catch this bug.

Sure, but so what? It's a fool's game to try to catch all bugs; it's not even a 
worthwhile goal. (That's essentially where Spark goes wrong, in my view.)

If you had some magic annotations that could specify a totally bug-free 
program, then there would no longer be any need for the program at all. Just 
execute the annotations, and forget the error-prone Ada code!

What I care about is extending what Ada already does well: catch the 
low-hanging fruit of bugs. There are always going to be a few really tough 
bugs that aren't going to be detectable automatically -- the goal should be 
to reduce that number in practical programs (no matter how large), not to 
eliminate all bugs in a barely usable subset of tiny programs.

Think about all of the "bugs" that we Ada programmers don't really have to 
deal with, because the compiler or runtime has already automatically 
detected them. That's not just type errors and array indexes out of range, but 
also dereferencing null pointers, accessing the wrong variant in a record, 
and many more. (I don't think Janus/Ada would have ever worked reliably 
without the variant check -- the early versions always had weird stability 
problems that disappeared as soon as we implemented the variant checks [and 
spent months eliminating all of the errors that turned up]).

What I think is important is bringing that level of ability to user-defined 
properties (think Is_Open and Mode for Text_IO files), and detecting more of 
these problems at compile time (which is always better than runtime).
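
To illustrate with a minimal sketch (the wrapper is hypothetical; Is_Open,
Mode and Out_File are the standard Ada.Text_IO operations): the property is
stated in the specification, so a tool can flag a call on a closed file at
the call site instead of letting Status_Error surface at run time.

   with Ada.Text_IO; use Ada.Text_IO;

   package Report is
      procedure Put_Header (File : File_Type)
        with Pre => Is_Open (File) and then Mode (File) = Out_File;
   end Report;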

It's not being able to detect every possible bug.

My main point is that I think people are trying to solve the wrong problem, 
and that leads to having over-elaborate contracts.

>I agree that no one should depend on what the body of some Subprogram
>does. But then, information flow analysis is actually useful!

...in very marginal cases. I don't think it's worth trying to cover every 
possible base, certainly not before compilers even do a passable job on the 
easy cases. Once every Ada compiler does basic proof using 
pre/post/globals/exception contracts, we can revisit.

                                                    Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-10 10:03       ` Dmitry A. Kazakov
@ 2013-07-10 23:21         ` Randy Brukardt
  2013-07-11  7:44           ` Dmitry A. Kazakov
  2013-07-12  1:01           ` Slow? Ada?? Bill Findlay
  0 siblings, 2 replies; 68+ messages in thread
From: Randy Brukardt @ 2013-07-10 23:21 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:6pqhcwi0wuop.15v6h3g7gkto4.dlg@40tude.net...
> On Tue, 9 Jul 2013 15:37:10 -0500, Randy Brukardt wrote:
>
>>> There are obviously goals which cannot be proven without the bodies. 
>>> Proofs
>>> about specifications are important but far not all. Consider LSP as an
>>> example. You cannot prove substitutability from interfaces. It simply 
>>> does
>>> not work. Same with almost everything else, because proving something in
>>> general is much more harder, even impossible, than proving things about 
>>> a
>>> concrete instance, e.g. body.
>>
>> My contention is such things are not worth proving. With strong enough
>> contracts (including class-wide contracts for dispatching calls, and
>> contracts on subprogram types :-), important properties of code can be
>> proven.
>
> No, that is the problem. You want to overburden the contracts with stuff
> that does not belong there. This would make the design fragile.
>
> Consider as an example stack use. Do you want to contract all subprograms
> to maximum stack use, or let the compiler to estimate stack use of a
> concrete body?

As I said, these sorts of things aren't worth proving. Certainly not until 
proof of simple concepts is already a widely used feature of Ada; then 
expanding it further might make sense. But we have to walk before we can 
run, and we're barely crawling right now.

> Another example is a mathematical function like sine. In order to prove 
> its
> correctness you would need an extremely elaborated apparatus of contract
> specification which itself will be a source of countless bugs due to its
> complexity. This is a typical over-specification. Only if somebody wanted
> to verify an implementation of sine, he would have to look into the body.
> Most people never will.

There is no reason to prove such things. For things like Sine, the best one 
can do is copy an implementation from a cookbook -- it makes no sense to 
even imagine trying to create a new one. (That makes as much sense as 
creating a new sort.) Even the simplest tests will detect any implementation 
errors (and we're never going to be able to live without testing).
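
A minimal sketch of the kind of simple test meant here, using the standard
elementary functions package (the procedure name is made up):

   with Ada.Numerics;                      use Ada.Numerics;
   with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

   procedure Sine_Sanity is
   begin
      --  Gross implementation mistakes show up immediately in values
      --  like these; no proof machinery is involved.
      pragma Assert (abs Sin (0.0) < 1.0E-6);
      pragma Assert (abs (Sin (Pi / 2.0) - 1.0) < 1.0E-6);
      pragma Assert (abs Sin (Pi) < 1.0E-5);
      null;  --  the checks above are the whole point of this procedure
   end Sine_Sanity;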

...
>> What's too hard shouldn't be bothered with, because we've lived just
>> fine with no proofs at all for programming up to this point.
>
> We didn't. Accessibility checks and constraint errors are a permanent
> headache in Ada.

Constraint_Errors a headache? They're a blessing, because the alternative is 
zillions of impossible-to-find bugs. (I lived with those in the early days 
of Janus/Ada; I'll never go back.) It will never, ever be possible to detect 
every runtime failure at compile time, nor is it possible to pre-test 
because of the possibility of race conditions, so we have to have 
Constraint_Error.

It's annoying that one can't tell the difference between specification errors that should 
be contracted and proved away and those that are truly runtime (that is, a 
normal part of the implementation); Ada does conflate these things and 
that's probably annoying to a purist. (You can't seem to see the difference, 
which leads to bizarre conclusions.) But it's far from fatal.

                                          Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-10 12:19   ` vincent.diemunsch
  2013-07-10 16:02     ` Simon Wright
@ 2013-07-11  0:36     ` Randy Brukardt
  2013-07-11  6:19       ` Simon Wright
                         ` (2 more replies)
  1 sibling, 3 replies; 68+ messages in thread
From: Randy Brukardt @ 2013-07-11  0:36 UTC (permalink / raw)


<vincent.diemunsch@gmail.com> wrote in message 
news:7ebe0439-fa4c-4374-a6c7-365ac78e6e39@googlegroups.com..., replying to 
me

>> I think existing Spark is evil, in that it funnels off effort best 
>> applied
>> to the entire Ada language (or at least, a much larger subset). Computers
>> are now powerful enough that the excuses for avoiding proofs of 
>> exceptions,
>> dynamic memory allocation, and the like no longer are valid.
>
>Spark imposes static memory allocation to be able to prove total 
>correctness,
>which means that the program will always do what is expected, which is 
>required
>by embedded softwares...

The idea of "total correctness" is exactly what I object to. It's a false 
promise that only works on tiny, toy programs. Real Spark programs almost 
always interface to "regular" Ada code to do stuff that Spark can't handle, 
and once you've done that, you've lost your "total correctness".

In any case, static memory allocation is not necessary to prove correctness. 
All that's needed is discipline in managing dynamically allocated memory 
(as happens inside of the containers, for instance). Ada already has strong 
typing of pointers, so the only problem to be avoided that affects 
correctness is premature destruction. Moreover, in Ada, you can use pools to 
keep the memory "theaters" separate, so you can avoid problems with 
fragmentation. (Although those are way overblown; they're very unlikely to 
occur in practice.)
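
As one possible shape of such a pool, a minimal sketch (entirely hypothetical:
a bump allocator with no per-object reclamation and simplified alignment
handling): every allocation for one access type comes from a single
preallocated buffer, so that access type cannot fragment the main heap.

   with System;                   use System;
   with System.Storage_Pools;     use System.Storage_Pools;
   with System.Storage_Elements;  use System.Storage_Elements;

   package Arena_Pools is
      type Arena (Size : Storage_Count) is new Root_Storage_Pool with record
         Data : Storage_Array (1 .. Size);
         Next : Storage_Offset := 1;
      end record;

      overriding procedure Allocate
        (Pool                     : in out Arena;
         Storage_Address          : out Address;
         Size_In_Storage_Elements : Storage_Count;
         Alignment                : Storage_Count);

      --  Nothing is freed individually; the whole arena dies at once.
      overriding procedure Deallocate
        (Pool                     : in out Arena;
         Storage_Address          : Address;
         Size_In_Storage_Elements : Storage_Count;
         Alignment                : Storage_Count) is null;

      overriding function Storage_Size (Pool : Arena) return Storage_Count is
        (Pool.Size);
   end Arena_Pools;

   package body Arena_Pools is
      overriding procedure Allocate
        (Pool                     : in out Arena;
         Storage_Address          : out Address;
         Size_In_Storage_Elements : Storage_Count;
         Alignment                : Storage_Count)
      is
         --  Simplified: align within the buffer and assume the buffer
         --  itself is suitably aligned; zero-sized requests not handled.
         Misalign : constant Storage_Offset := (Pool.Next - 1) mod Alignment;
         Start    : constant Storage_Offset :=
           (if Misalign = 0 then Pool.Next
            else Pool.Next + Alignment - Misalign);
      begin
         if Start + Size_In_Storage_Elements - 1 > Pool.Size then
            raise Storage_Error;
         end if;
         Storage_Address := Pool.Data (Start)'Address;
         Pool.Next := Start + Size_In_Storage_Elements;
      end Allocate;
   end Arena_Pools;

A client would then write something like "for Node_Access'Storage_Pool use
My_Arena;" so that all Node allocations live and die together, separate from
the rest of the heap.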

It's certainly true that a tiny minority of systems cannot have any 
possibility of downtime, and those probably have to stick with static 
allocation (although they could still use pools as an alternative). But it 
is silly to describe that as any sort of mainstream need.

>> The Globals notation is much more important. We tried to design such a
>> notation for Ada 2012, but it became too much work to complete and there 
>> was
>> a lot of concern about getting it wrong. So we left this for the 
>> "experts".
>> It appears that the experts are working on it, which is good, because we
>> still want this capability in the next version of Ada (whenever that will
>> be). And for that to be the case, we'll need it to have proper Ada syntax
>> using aspects.

>Ada has been designed so as to let global variables be freely accessed. 
>Only a
>pragma was used to specify that a function is Pure (of side effects) and 
>can be
>skipped for optimisation. Therefore it is now almost impossible to change 
>that

Right, we'd never change that.

>and that is probably the reason why you had so much difficulties to define 
>the
>global annotation, which is trivial,

It's not remotely trivial, because you have to deal with global variables 
that you don't have visibility onto and thus must not name (those in package 
bodies, especially), you have to deal intelligently with storage pools and 
access types, and you have to deal with generic units. Spark ignores almost 
all of these issues by not allowing the constructs, which is a non-starter 
for a construct for the full Ada language. We also required checking that 
the annotations were accurate, and that too is a fairly complex issue.

>because how will you deal with a procedure that have no annotation, but 
>still
>uses global variables ? A simple warning at compile time ? Refuse to 
>compile ?

No annotation means "uses anything", of course. You have to specify that a 
routine does NOT use globals. Perhaps that's backwards of what we would do 
if starting from scratch, but we're not starting from scratch. This is the 
same way we dealt with "overriding" and "limited" and lots of other things.
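
Under that convention, a minimal sketch (hypothetical subprograms, using the
SPARK 2014 aspect form): no aspect means "may use anything"; the annotation
is what states that a routine touches no global state.

   package Demo is
      --  No Global aspect: callers and tools must assume any global
      --  state may be read or written.
      procedure Log_Message (Msg : String);

      --  Explicitly contracted to use no global state at all.
      procedure Add (X, Y : Integer; Sum : out Integer)
        with Global => null;
   end Demo;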

>Aspects syntax is ugly. It is simply the return of a kind of primitive 
>language like LISP,
>done with words and arrows. It shows clearly that it is simply a 
>workaround.

It's ten times better than using pragmas. Pragmas are the worst possible way 
to do anything. The rest is just Ada expression syntax, which has the 
distinct advantage that programmers don't have to learn anything new to use 
it. If you don't believe that Ada syntax is far superior to any other 
language's syntax, go back to curly bracket hell. :-) The barrier to using 
Spark is far too high, because it's a completely new syntax.

I agree with you that the Depends syntax is ugly, but that's mainly because 
Depends is a stupid thing that should not exist, and thus does not fit at 
all into the Ada model.

>> No, to allow *compile-time* analysis of pre and post conditions. An Ada
>> compiler cannot look into the body of a package to see what is there, and
>> without expression functions, almost no compile-time analysis could be
>> accomplished. (We also need globals annotations for similar reasons.) The
>> real end-game here ought to be compile-time proof (that is, proofs that 
>> are
>> created as part of the normal compilation of Ada code).

>A proof can't be generated at compile time ! The proof must be written
>independently, based on formal specification, and checked by an analyser,
>that may or may not be the compiler.

Wow, what an elitist attitude! *Every* compiler, for every language, uses 
proof techniques when it does optimizations and code generation. "I can 
prove that V always has value 5 here, because every path to here either 
fails to modify V or sets it to 5." Why shouldn't Ada programmers have 
access to that sort of checking?

Moreover, only a very small number of people/projects are willing to spend 
extra time to prove a working program. Schedules and the like make those 
sorts of extras the first thing to go. If we want more people to use these 
sorts of tools, they have to be integrated into the compiler so that using 
them is just as automatic as getting type and range checks.

>> Proofs of any kind should never, ever depend on anything in the package 
>> body
[other than the subprogram that you are proving, of course.]

>As others told you, a proof is a proof that the implementation respects the 
>specification,
>therefore it needs to make an interpretation of the package body.

Only the one, single package body that it is currently proving. Looking into 
any *other* package body adds additional dependencies.

>> Spark users are on a certain dead-end, IMHO. The future is getting proof
>> into the hands of all programmers, as a normal part of their day-to-day
>>  programming. That requires that proofs aren't necessarily complete, that
>> they use whatever information is available, and most of all, that they 
>> only
>> depend on the specifications of packages (never, ever, on the contents of 
>> a
>>  body).

>But you seem to make also the confusion between a test and a proof, and 
>that is
>exactly what AdaCore is doing now : they simply write the tests as post-
>conditions and uses symbolic interpretation during compile time. I think 
>that is
>what you mention by "into the hands of all programmers", ie even those with
>very few theoretical background.

Right; no one should need to understand the details in order to get the 
benefits. Do you understand how your Ada compiler does code selection or 
optimization? Of course not, but you still get the benefits.

And the entire point is that there is a strong correlation between test and 
proof. The programmer writes a "test" (as you put it; Ada 2012 calls it an 
assertion), and the compiler uses proof to verify that it is true (including 
all of the optimization tricks that compiler writers have been using for 
decades). If the proof succeeds, the "test" is eliminated from the generated 
code; if the proof fails, the "test" is generated, and the programmer gets a 
warning or error (depending on switches).
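
A minimal sketch of that flow (package and names made up): the assertion is
written once; where the calling context lets the compiler prove it, the
generated check disappears, and otherwise it remains as a runtime test.

   package Halving is
      function Halve (N : Natural) return Natural
        with Pre  => N mod 2 = 0,
             Post => Halve'Result * 2 = N;
   end Halving;

   package body Halving is
      function Halve (N : Natural) return Natural is
      begin
         return N / 2;
      end Halve;
   end Halving;

   --  At a call site such as Halve (10), the precondition 10 mod 2 = 0 is
   --  statically true, so no check needs to be emitted; for
   --  Halve (User_Input) the check (or a warning) remains.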

>It's a nice feature to sell and great fun for compiler writers but it's far 
>from a mathematical
>proof and what Spark was able to do. But mathematical proof requires formal 
>specification
>and that is far from the "all programmers" knowledge. The problem is that 
>recent books on
>Spark completly forgot to speak of the necessary formal specifications 
>(like Z for instance).

This is a case of the tail wagging the dog. (It's not surprising though, 
people often get so wrapped up in something that they forget why they 
started to do it in the first place.)

The goal here is to reduce the number of bugs in programs. No one cares how 
that is accomplished. "Formal mathematical proof" is one such tool, but it's 
almost completely inaccessible to the average programmer and project. And no 
project manager cares about a formal proof; they only care that their 
project is finished on time with a minimal number of bugs.

>To conclude here is my vision of Spark and Ada :
>- use Ada for things that are necessary for the compiler to produce the 
>code and to
>  express the semantic of the program.
>- use Pragmas only as compiler directives, for specific implementation or 
>optimisation
> (InLine, Ravenscar, Remote-Call-Interface...). Not for assertion, tests or 
> whatever.
Pragmas shouldn't be used for anything, IMHO. Maybe other than global 
properties; certainly nothing specific to an entity.
>- everything that is related to formal verification should stay as comment 
>in
>the code. These comments can then be verified by external tools. There can 
>be
>different levels of verification, not only the Spark vision.
>- formal specifications are necessary for formal verification, therefore 
>Spark
>should be able to make the link between formal annotations and formal 
>specification.
>- tests can be put either as comments or as external files, but please not 
>as aspects.
>It is ugly to write the detail of a procedure in the spec.

>Forget Aspects and worst expression functions : they are simply an error.
>Mixing testing, specification and proofs as pragma or aspects or a 
>combination
>of both of them will result in a too complicated language lacking of 
>abstraction and
>a proper syntax.

Too late, Ada 2012 was standardized in December and couldn't be rescinded even if 
we wanted it to be. Far too much code already exists. You should have come to the 
ARG and made these comments 3 years ago. Of course, it's unlikely that they 
would have changed anything -- we had comments from members of the formal 
methods community and they were taken very seriously. And we decided to go 
with aspects, expression functions and the like. Now that the path has been 
set, it's likely that most will follow along.

You of course don't have to follow the path -- you can stay with Ada 2005 
and Spark 2005 (which I'm sure will continue to be supported for a long time 
to come). But if you want to stay at the bleeding edge, then you're going to 
have to adopt Ada 2012, and everything that comes with it.

It's fairly clear that you have a real problem with the Ada model of 
specification (as it has existed forever), because there is no mixing here: 
assertions, proofs, and specifications are all different aspects (sorry 
about the pun) of the same whole; what's artificial is separating them in the 
first place.

The truly formal stuff shouldn't be in the Ada source code at all (I'd go so 
far as to say it shouldn't *exist* at all, but who am I to prevent people 
from wasting their time).

Anyway, good luck, no matter how you decide to proceed with Ada. I hope, of 
course, that you will get over your initial reaction to Ada 2012 and join us 
in a better future way of programming, but if not, good luck wasting time. 
:-)

                               Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-11  0:36     ` Randy Brukardt
@ 2013-07-11  6:19       ` Simon Wright
  2013-07-11 23:11         ` Randy Brukardt
  2013-07-11 10:10       ` vincent.diemunsch
  2013-07-11 23:50       ` Shark8
  2 siblings, 1 reply; 68+ messages in thread
From: Simon Wright @ 2013-07-11  6:19 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:

> problems with fragmentation. (Although those are way overblown,
> they're very unlikely to occur in practice.)

That depends on the capabilities of your OS's memory manager! In our
VxWorks 5.5/Ada 95 system, it didn't take long for the heap to become so
fragmented that the system couldn't meet real-time constraints (such as
failing to kick the watchdog timer).

It was a fairly straightforward job to provide our own version of GNAT's
System.Memory to use preallocated binary-sized bins (we instrumented it
to determine the maximum number of blocks for each bin and allocated
that number at startup, with the ability to allocate further blocks at
run time if the measured maximum requirement grew).

I don't remember why use of Ada storage pools wasn't the right solution;
maybe some of the allocations were made by the runtime without the option
of using a user-defined storage pool (things like task control blocks)?

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-10 23:21         ` Randy Brukardt
@ 2013-07-11  7:44           ` Dmitry A. Kazakov
  2013-07-11 22:28             ` Randy Brukardt
  2013-07-12  1:01           ` Slow? Ada?? Bill Findlay
  1 sibling, 1 reply; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-11  7:44 UTC (permalink / raw)


On Wed, 10 Jul 2013 18:21:33 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:6pqhcwi0wuop.15v6h3g7gkto4.dlg@40tude.net...

>> Another example is a mathematical function like sine. In order to prove its
>> correctness you would need an extremely elaborated apparatus of contract
>> specification which itself will be a source of countless bugs due to its
>> complexity. This is a typical over-specification. Only if somebody wanted
>> to verify an implementation of sine, he would have to look into the body.
>> Most people never will.
> 
> There is no reason to prove such things. For things like Sine, the best one 
> can do is copy an implementation from a cookbook -- it makes no sense to 
> even imagine trying to create a new one.

Do you have the Hankel function in your cookbook? How do you know that the copy
you made is authentic? What about the accuracy of the implementation?

>>> What's too hard shouldn't be bothered with, because we've lived just
>>> fine with no proofs at all for programming up to this point.
>>
>> We didn't. Accessibility checks and constraint errors are a permanent
>> headache in Ada.
> 
> Constraint_Errors a headache? They're a blessing, because the alternative is 
> zillions of impossible to find bugs.

No, the alternative is to prove that the constraint is not violated.

You can do this only on the client side, which is why it cannot be
contracted for the callee. When you move from instances to universally
holding contracts you frequently lose provability.

Note how accessibility checks attempted in the form of a contract became
more of a problem than a solution. In order to make them provable you must
strengthen them. Probably every Ada programmer has been annoyed by silly "must be
statically deeper" messages about pointers. It cannot be handled at the
level of contracts. Yet in most cases it is no problem to see that a
*given* pointer is never dangling.

> It's annoying that one can't tell between specification errors that should 
> be contracted and proved away and those that are truly runtime (that is, a 
> normal part of the implementation);

That was ARG's decision for Ada 2012, wasn't it?

> Ada does conflate these things and 
> that's probably annoying to a purist. (You can't seem to see the difference, 
> which leads to bizarre conclusions.) But it's far from fatal.

It is fatal, because it is inconsistent. Error /= bug.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-11  0:36     ` Randy Brukardt
  2013-07-11  6:19       ` Simon Wright
@ 2013-07-11 10:10       ` vincent.diemunsch
  2013-07-11 14:23         ` Dmitry A. Kazakov
  2013-07-12  0:00         ` Randy Brukardt
  2013-07-11 23:50       ` Shark8
  2 siblings, 2 replies; 68+ messages in thread
From: vincent.diemunsch @ 2013-07-11 10:10 UTC (permalink / raw)



> 
> The idea of "total correctness" is exactly what I object to. It's a false 
> promise that only works on tiny, toy programs. Real Spark programs almost 
> always interface to "regular" Ada code to do stuff that Spark can't handle, 
> and once you've done that, you've lost your "total correctness".
> 

OK, one can't prove everything, only a subset of the logic, an abstraction of the system. But total correctness is vital for :
- distributed or concurrent programs, which can't be "debugged" the usual way because they are not deterministic
- embedded systems, because errors mean loss of lives, and it is simply not possible to test everything : the number of tests is too high.

> In any case, static memory allocation is not necessary to prove correctness. 
Maybe, but it is easier to prove with it :-).


> It's not remotely trivial, because you have to deal with global variables 
> that you don't have visibility onto and thus must not name (those in package 
> bodies, especially), you have to deal intelligently with storage pools and 
> access types, and you have to deal with generic units. Spark ignores almost 
> all of these issues by not allowing the constructs, which is a non-starter 
> for a construct for the full Ada language. We also required checking that 
> the annotations were accurate, and that too is a fairly complex issue.
 
> No annotation means "uses anything", of course. You have to specify that a 
> routine does NOT use globals. Perhaps that's backwards of what we would do 
> if starting from scratch, but we're not starting from scratch. This is the 
> same way we dealt with "overriding" and "limited" and lots of other things.
It is easy to specify that : simply use pragma Pure.

But what I like with Spark is that they took the problem from scratch and created their own annotations with abstract states and inherit clauses. It wouldn't
be possible to do it in Ada without these abstractions, and I don't see the
point of adding these without a deep rethinking of Ada encapsulation... Because
visibility of state and data flow is nice, but it is a heavy burden that not everyone wants to deal with, hence the very existence of global variables.
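
For reference, a minimal sketch of how SPARK 2014 carries the abstract-state
idea over into aspect form (hypothetical package); whether this is an
improvement over the comment annotations is exactly the question here.

   package Counter
     with Abstract_State => State
   is
      procedure Increment
        with Global  => (In_Out => State),
             Depends => (State => State);
   end Counter;

   package body Counter
     with Refined_State => (State => Count)
   is
      Count : Natural := 0;

      procedure Increment is
      begin
         if Count < Natural'Last then
            Count := Count + 1;
         end if;
      end Increment;
   end Counter;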


There is another point on which I disagree : the fact that compilers make proofs for optimisation. Compilers do symbolic computation, and this computation includes propositional calculus. Sometimes the result of a symbolic computation is called a proof, but it is not a mathematical proof in fact. A proof is the very essence of mathematics and it is a human activity. Proving programs is a human activity, exactly like design.

Spark syntax needs to be different from Ada syntax because they manipulate different things : Ada manipulates types like Boolean and Integer. Spark annotations manipulate propositions based on abstractions of types (that I call categories). For instance, where Ada sees : 
X : Positive; 
Spark sees :
X : root_integer;
X >= 0;
X < Integer'Last;
So it is nonsense to keep the Ada syntax for annotations and to allow them to be transformed into tests, because in doing so one loses abstraction.

> If you don't believe that Ada syntax is far superior to any other 
> language's syntax, go back to curly bracket hell. :-)
I like the Ada syntax, and also the Spark syntax :-)


> Wow, what an elitist attitude!
Ada must be better than C or Java otherwise it will disappear.


> Moreover, only a very small number of people/projects are willing to spend 
> extra time to prove a working program. Schedules and the like make those 
> sorts of extras the first thing to go. 
Right. Spark is for the elite; trying to sell it to everybody will result in a big disappointment.


> Too late, Ada 2012 was standardized in December and couldn't be recinded if 
> we wanted to. Far too much code already exists. You should have come to the 
> ARG and made these comments 3 years ago. 
I wish I could.

> Of course, it's unlikely that they would have changed anything -- we had 
> comments from members of the formal methods community and they were taken 
> very seriously. And we decided to go with aspects, expression functions 
> and the like. Now that the path has been set, it's likely that most will 
> follow along.
 
> You of course don't have to follow the path -- you can stay with Ada 2005 
> and Spark 2005 (which I'm sure will continue to be supported for a long time 
> to come). But if you want to stay at the bleeding edge, then you're going to 
> have to adopt Ada 2012, and everything that comes with it.

I will use Ada 2012, except annotations and expression functions, and Spark 2005. I think that Spark 2014 will be a dilemma for many people, and it will divide the little community in two : those who follow and those who stay with comment annotations. Since all the tools are open source, the latter may try to evolve the annotations, as I proposed.

> Anyway, good luck, no matter how you decide to proceed with Ada. I hope of 
> course, that you will get over your initial reaction to Ada 2012 and join us 
> in a better future way of programming, but if not, good luck wasting time. 
> :-)

I hope that it won't be a total waste of time, because I think that the error of AdaCore is that they thought they could use the tools and techniques from Frama-C. According to some researchers, the Frama-C tools are better than the Spark tools (more advanced, but more reliable?), but the ACSL language is not at all as good as Spark. The trick is that while improving the tools, which was their job, they thought they could change the language and make it like ACSL (E-ACSL, the new version they created). But in doing so they lost the very specificity of Spark : not only its clear syntax, but its higher abstraction level, just as Ada has a higher abstraction level than C or Java.

Regards,

Vincent

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-11 10:10       ` vincent.diemunsch
@ 2013-07-11 14:23         ` Dmitry A. Kazakov
  2013-07-12  0:07           ` Randy Brukardt
  2013-07-12  0:00         ` Randy Brukardt
  1 sibling, 1 reply; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-11 14:23 UTC (permalink / raw)


On Thu, 11 Jul 2013 03:10:51 -0700 (PDT), vincent.diemunsch@gmail.com
wrote:

> Spark syntax needs to be different from Ada syntax because they manipulate
> different things : Ada manipulates types like boolean, integers. Spark
> annotations manipulates propositions based on abstraction of types (that I
> call categories).

Yes, this is a crucial point. These are two different languages, where one
(Ada) is the subject of the other (SPARK). Statements in these languages should
be visually distinct.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-10 23:10         ` Randy Brukardt
@ 2013-07-11 19:23           ` Stefan.Lucks
  2013-07-12  0:21             ` Randy Brukardt
  0 siblings, 1 reply; 68+ messages in thread
From: Stefan.Lucks @ 2013-07-11 19:23 UTC (permalink / raw)


[-- Attachment #1: Type: TEXT/PLAIN, Size: 1783 bytes --]

On Wed, 10 Jul 2013, Randy Brukardt wrote:

>> Secondly, Ada's "in", "out", and "in out" information give only a cursory
>> view of what is going on.
>
> Of course. But that and the postcondition is all you ought to be depending
> upon. If you're depending on more, your program is more fragile than it
> ought to be.

Your program can depend on whatever has been specified (and hopefully 
proven) in the specification. Which is one reason why "Depends" is 
actually useful -- you are allowed to make more specific contracts.

> ... (detailed and possibly interesting examples of Depends removed)
>
> ...
>> Without "Depends", your cool analysis tool verifying all pre- and
>> postconditions will not catch this bug.
>
> Sure, but so what? It's a fools game to try to catch all bugs, it not even a
> worthwhile goal. (That's essentially where Spark goes wrong, in my view.)

Who said, catch all bugs? I didn't.

> What I care about is extending what Ada already does well: catch the
> low-hanging fruit of bugs.

In my own experience, having worked occasionally with SPARK, information 
flow analysis actually catches a lot of errors -- even without having 
specified any pre- or postconditions. In other words, SPARK's information 
flow analysis actually gives you the low-hanging fruit you mention.

The fruit (well, bugs) you catch by employing pre- and postconditions hangs 
a bit higher, actually. At least, that is what I gather from my own 
experience; YMMV. Did you ever actually work with SPARK?

So long

Stefan


------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
     <http://www.uni-weimar.de/cms/medien/mediensicherheit/home.html>
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-11  7:44           ` Dmitry A. Kazakov
@ 2013-07-11 22:28             ` Randy Brukardt
  2013-07-12  8:02               ` Dmitry A. Kazakov
  0 siblings, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-11 22:28 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1s6sw93u97355$.1n0205ucj6251.dlg@40tude.net...
> On Wed, 10 Jul 2013 18:21:33 -0500, Randy Brukardt wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>> news:6pqhcwi0wuop.15v6h3g7gkto4.dlg@40tude.net...
>
>>> Another example is a mathematical function like sine. In order to prove 
>>> its
>>> correctness you would need an extremely elaborated apparatus of contract
>>> specification which itself will be a source of countless bugs due to its
>>> complexity. This is a typical over-specification. Only if somebody 
>>> wanted
>>> to verify an implementation of sine, he would have to look into the 
>>> body.
>>> Most people never will.
>>
>> There is no reason to prove such things. For things like Sine, the best 
>> one
>> can do is copy an implementation from a cookbook -- it makes no sense to
>> even imagine trying to create a new one.
>
> Do you have Hankel function in your cookbook?

Don't know what it is, so I'd never try to write one.

> How do you know that the copy you made is authentic? What is about the 
> accuracy of the implementation?

That's easily determinable by unit tests (which are needed in any case - 
testing and proof are not exclusive; you need both because it's really easy to 
prove the wrong thing). If you screw up copying a cookbook function, you'll 
get results that are obviously wrong, and that's easy to detect.

>>>> What's too hard shouldn't be bothered with, because we've lived just
>>>> fine with no proofs at all for programming up to this point.
>>>
>>> We didn't. Accessibility checks and constraint errors are a permanent
>>> headache in Ada.
>>
>> Constraint_Errors a headache? They're a blessing, because the alternative 
>> is
>> zillions of impossible to find bugs.
>
> No, the alternative is to prove that the constraint is not violated.

That's completely impractical.

> You can do this only at the client side. Which is why it cannot be
> contracted for the callee. When you move from instances to universally
> holding contracts you frequently lose provability.

...of things that aren't worth proving in the first place. You get rid of 
constraint_errors from bodies by writing good preconditions, predicates, 
constraints, and exclusions in the first place. Have constraint_error from 
dereferencing a null pointer? Use an exclusion (or a pre-test). Same with 
ranges or with user properties.
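
A minimal sketch of the two devices just mentioned (the types and names are
made up): the null exclusion and the precondition lift the checks into the
specification, where a caller or a prover can discharge them, instead of
their failing as Constraint_Error inside the body.

   package Safer_Specs is
      type Node;
      type Node_Access is access Node;
      type Node is record
         Value : Integer;
         Next  : Node_Access;
      end record;

      --  The null exclusion turns a would-be Constraint_Error on
      --  dereference into an obligation on the caller.
      function Head_Value (List : not null Node_Access) return Integer is
        (List.Value);

      type Int_Array is array (Positive range <>) of Integer;

      --  The index check is stated as a precondition instead of being
      --  discovered as a check failure in the body.
      procedure Store (Table : in out Int_Array;
                       Index : Positive;
                       Value : Integer)
        with Pre => Index in Table'Range;
   end Safer_Specs;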

If for whatever reason you can't do this, then yes, you lose provability. 
But that's the price of lowering coupling, and excessive coupling is the 
cause of almost all software inflexibility.

> Note how accessibility checks attempted in the form of a contract became
> more problem than solution. In order to have it provable you must
> strengthen it. Probably every Ada programmer was annoyed by silly "must be
> statically deeper" messages about pointers. It cannot be handled at the
> level of contracts. Yet in most cases it is no problem to see that a
> *given* pointer is never dangling.

"No problem"? Informally, sure. It's impossible to prove in any real case; 
there is too much going on. (Claw, for instance, prevents it by design, but 
I don't think there is any way to formally describe that design -- certainly 
not in systems like Spark which ignore the effects of finalization [which is 
how Claw guarentees non-dangling]).

>> It's annoying that one can't tell between specification errors that 
>> should
>> be contracted and proved away and those that are truly runtime (that is, 
>> a
>> normal part of the implementation);
>
> That was ARG's decision for Ada 2012, wasn't it?

No, it was an effect of Ada 83, which conflated those uses of exceptions. We 
had to formalize existing practice (we need the ability to add these 
specifications to existing code without changing any behavior). It would be 
better for exceptions that we would want to "prove-away" (like Status_Error 
and Mode_Error in Text_IO) to be strictly separated from those that are 
purely runtime behavior (like End_Error and Device_Error).

One thing we had to do to make these things work is to allow existing 
compilers to use runtime checks, with the hope that more will become proven 
over time. Compiler technology isn't quite ready for mandatory proof 
requirements (maybe next time).

>> Ada does conflate these things and
>> that's probably annoying to a purist. (You can't seem to see the 
>> difference,
>> which leads to bizarre conclusions.) But it's far from fatal.
>
> It is fatal, because inconsistent. Error /= bug.

We can only agree to disagree on that.

                      Randy.





^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-11  6:19       ` Simon Wright
@ 2013-07-11 23:11         ` Randy Brukardt
  0 siblings, 0 replies; 68+ messages in thread
From: Randy Brukardt @ 2013-07-11 23:11 UTC (permalink / raw)


"Simon Wright" <simon@pushface.org> wrote in message 
news:lyppupvcs3.fsf@pushface.org...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
>
>> problems with fragmentation. (Although those are way overblown,
>> they're very unlikely to occur in practice.)
>
> That depends on the capabilities of your OS's memory manager! In our
> VxWorks 5.5/Ada 95 system, it didn't take long for the heap to become so
> fragmented that the system couldn't meet real-time constraints (such as
> failing to kick the watchdog timer).

I suppose I should know better than to presume most programmers are 
competent. ;-)

My personal experience is that fragmentation is rare at best. We had a bit 
of trouble with it in our MS-DOS compilers, which mainly came from allocating 
lots of objects, each one slightly larger than the last (pretty much the 
worst case for heap use). Changing to a quadratic size increase (rather than 
a linear one) got rid of that problem (as did moving some tiny objects to 
their own manager [allocated once and never freed back to the main heap] - 
today I'd use a custom pool for that).

I was very worried that the long-running web server and spam filter would 
suffer from fragmentation, but I've never seen any evidence of that, even 
after running for two months and having processed half a million requests. 
Apparently, the Windows heap manager is smart enough to prevent it, as the 
fragmentation goes up a bit in the first few hours after the servers get 
started and then stays pretty constant after that.

> It was a fairly straightforward job to provide our own version of GNAT's
> System.Memory to use preallocated binary-sized bins (we instrumented it
> to determine the maximum number of blocks for each bin and allocated
> that number at startup, with the ability to allocate further blocks at
> run time if the measured maximum requirement grew).
>
> I don't remember why use of Ada storage pools wasn't the right solution;
> maybe the some of the allocations were by the runtime without the option
> to use a user-defined storage pool? (things like task control blocks).

Obviously, the best solution is to replace a bad heap by a good one. Won't 
work in every circumstance, but it will in 98% of them.

                          Randy. 


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-11  0:36     ` Randy Brukardt
  2013-07-11  6:19       ` Simon Wright
  2013-07-11 10:10       ` vincent.diemunsch
@ 2013-07-11 23:50       ` Shark8
  2 siblings, 0 replies; 68+ messages in thread
From: Shark8 @ 2013-07-11 23:50 UTC (permalink / raw)


On Wednesday, July 10, 2013 6:36:42 PM UTC-6, Randy Brukardt wrote:
> 
> The idea of "total correctness" is exactly what I object to. It's a false 
> promise that only works on tiny, toy programs. Real Spark programs almost 
> always interface to "regular" Ada code to do stuff that Spark can't handle, 
> and once you've done that, you've lost your "total correctness".

Interesting sentiment; I'm not sure I agree with you -- though I can see what you're saying. As a counter-example there's IRONSIDES, a DNS server that is "provably exception-free, contains no data flow errors, and terminates only in the ways that its programmers explicitly say that it can."

http://ironsides.martincarlisle.com/


I think we DO need "total correctness" in the lower levels of our software stacks, specifically OSes and compilers; I, for one, would love to see a proven Ada OS (no *nix, please) developed on a proven Ada compiler. I don't *know* whether that would be "in demand" by the industry (but then, the industry is embracing C-style languages and ramming their heads into the walls built by those misdesigns).

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-11 10:10       ` vincent.diemunsch
  2013-07-11 14:23         ` Dmitry A. Kazakov
@ 2013-07-12  0:00         ` Randy Brukardt
  2013-07-12  7:25           ` Simon Wright
  2013-07-12 14:23           ` Robert A Duff
  1 sibling, 2 replies; 68+ messages in thread
From: Randy Brukardt @ 2013-07-12  0:00 UTC (permalink / raw)


<vincent.diemunsch@gmail.com> wrote in message 
news:a20ef22c-36f0-4c04-8d79-6aaadabb8fa6@googlegroups.com, replying to me:

>> The idea of "total correctness" is exactly what I object to. It's a false
>> promise that only works on tiny, toy programs. Real Spark programs almost
>> always interface to "regular" Ada code to do stuff that Spark can't 
>> handle,
>> and once you've done that, you've lost your "total correctness".
>
>
>Ok one can't prove everything, only a subset of the logic, an abstraction 
>of the system.
> But total correctness is vital for :
>- distributed or concurrent programs, that can't be "debugged" the usual 
>way because
> they are not determinist
>- embedded systems because errors means lost of lives, and it is simply not 
>possible
> to test everyting : the number of tests is to high.

I'm pretty amazed to find out that I can't debug concurrent programs, 
because I've been doing it for decades. On top of that, unit tests almost 
never involve any concurrency (just like proofs rarely do), so that is 
irrelevant for the vast majority of debugging (and for the sorts of things 
that you would prove with Spark).

...
>> No annotation means "uses anything", of course. You have to specify that 
>> a
>> routine does NOT use globals. Perhaps that's backwards of what we would 
>> do
>> if starting from scratch, but we're not starting from scratch. This is 
>> the
>> same way we dealt with "overriding" and "limited" and lots of other 
>> things.

>It is easy to specify that : simply use pragma Pure.

Nope. First of all, pragma Pure allows the use of lots of kinds of globals - 
just do a dereference somewhere in the package. Secondly, pragma Pure only 
works on packages. It's extremely difficult to find any package that could 
have pragma Pure applied, because that means that there is no I/O, no access 
types, no clock/timers. I've never found any package of mine that could use 
it (almost all of my private types include a way to trace themselves, which 
you can't do in a pure package).

Moreover, do you know that adding pragma Pure to a package could make it 
erroneous (even if it is legal and compiles correctly)? Pure has a number of 
holes that make a program that uses it potentially less safe than one that 
does not. Because of this, I'm surprised that a safety-critical program 
would allow it to be used at all.

GNAT has a pragma Pure_Function which can be used on a single function. This 
isn't in Ada because it is just an assumption; there are no checks at all on 
whether it is true. In that case, any mistake in the function would make a 
program calling it erroneous (because the compiler is allowed to assume the 
function is pure, even if it isn't). It can only be made safe if it is 
combined with a proof that there are no side-effects, in which case it 
surely should never be separated from that proof -- but pragmas are *always* 
separated from the thing they apply to. As such, they are never appropriate 
for setting properties of single entities.
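
A minimal sketch of the contrast (the pragma is GNAT-specific; the function
names are made up): the pragma is a separate, unchecked claim, while the
aspect form is physically attached to the declaration it describes.

   package Sums is
      --  GNAT-specific pragma: an assumption the compiler trusts, stated
      --  apart from the entity it applies to.
      function Checksum (S : String) return Natural;
      pragma Pure_Function (Checksum);

      --  SPARK 2014 aspect form: the claim of no global effects is part
      --  of the declaration itself (and can be checked by the tools).
      function Checksum_2 (S : String) return Natural
        with Global => null;
   end Sums;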

>But what I like with Spark is that they took the problem from scratch and
> created own annotations with abstract states and inherit clauses. It won't
>be possible to do it in Ada without these abstractions, and I don't see the
>point of adding these without a deep rethinking of Ada encapsulation...

The point is very simple: Ada *compilers* can generate better code if they 
have this additional information. It isn't just useful for proofs. (And that 
has nothing to do with other annotations, either; it's unrelated to 
assertions like preconditions.) And there is no way that we're ever going to 
put anything of significance to Ada compilers (and Ada programmers) into 
comments.

Ada encapsulation is one of the things Ada does better than other 
languages -- there is no reason to rethink anything. There is a need to 
allow people who want to do so to give additional information to both 
readers and the Ada compiler.

> Because visibility of state and data flow is nice but it is a heavy burden
>that not everyone wants to deal with, thus the very existence of global
> variables.

Quite right, which is why the default has to be to live without that 
information. If you want to add it, you'll get more error detection at 
compile-time, possibly better code (even with all checking turned off), and 
provide more information for tools like Spark. A win-win in my mind.

>There is another point on which I disagree : the fact that compilers make
>proofs for optimisation. Compilers do symbolic computation, and this
>computation includes propositional calculus. Sometimes the result of a
>symbolic computation is called a proof, but it is not a mathematical proof
>in fact. A proof is the very essence of mathematics and it is a human
>activity. Proving programs is a human activity exactly like conception.

I agree that your definition of proof is wildly different than mine. What 
you're interested in I find not only irrelevant but actively harmful. The 
last thing we need (in any field!) is more human activities. Humans make 
mistakes; the more the computer does of the checking the better. And we 
absolutely do not need any additional tasks in programming; the pressure to 
take short-cuts is already very high and anything "extra" is very liable to 
be cut.

>Spark syntax needs to be different from Ada syntax because they manipulate 
>different
>things : Ada manipulates types like boolean, integers. Spark annotations 
>manipulates
>propositions based on abstraction of types (that I call categories). For 
>instance where
>Ada sees :
>X : Positive;
>Spark Sees :
>X : root_integer;
>X >= 0;
I hope Spark sees
X >= 1
else it isn't even proving the right thing. ;-)
>X < Integer'Last;
Well, an Ada compiler sees:
X : root_integer;
Check X in 1 .. Integer'Last.
This is not materially different.
>So it is a nonsense to keep the Ada syntax for annotations and to allow 
>them
>to be transformed as tests, because in doing so one looses abstraction.

Only if you don't understand Ada semantics. Having two different ways to 
write the same thing doesn't add any abstraction!

>> If you don't believe that Ada syntax is far superior to any other
>> language's syntax, go back to curly bracket hell. :-)
>I like the Ada syntax, and also the Spark syntax :-)

Fine, then stick with it.

>> Wow, what an elitist attitude!
>Ada must be better than C or Java otherwise it will disappear.

It already is. And this has nothing to do with Spark.

>> Moreover, only a very small number of people/projects are willing to 
>> spend
>> extra time to prove a working program. Schedules and the like make those
>> sorts of extras the first thing to go.
>Right. Spark is for the elite, trying to sell it to everybody will result 
>in a
>big disappointement.

Then, IMHO, it's not worth having. We need more customers for Ada as a 
whole, not a series of fractionated subsets. And to do that, we need *Ada* 
to be better.
...
>I will use Ada 2012, except annotations and expression functions, and Spark 
>2005.

Since aspects (and especially precondition and postcondition aspects) are 
pretty much the only new feature in Ada 2012 of significance (pretty much 
everything else was designed to support them, other than some of the 
containers-related things), you'll end up using Ada 2005 plus if 
expressions. That hardly counts as using Ada 2012.

I certainly would not call any code that uses an entity-specific pragma Ada 
2012 code. (I'd also call it evil, but that's just me.)

...

                                            Randy.


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-11 14:23         ` Dmitry A. Kazakov
@ 2013-07-12  0:07           ` Randy Brukardt
  0 siblings, 0 replies; 68+ messages in thread
From: Randy Brukardt @ 2013-07-12  0:07 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:2bvcgf0jmjcv$.79izri1n5n9r.dlg@40tude.net...
> On Thu, 11 Jul 2013 03:10:51 -0700 (PDT), vincent.diemunsch@gmail.com
> wrote:
>
>> Spark syntax needs to be different from Ada syntax because they 
>> manipulate
>> different things : Ada manipulates types like boolean, integers. Spark
>> annotations manipulates propositions based on abstraction of types (that 
>> I
>> call categories).
>
> Yes this is a crucial point. These are two different languages where one
> (Ada) is a subject of another (SPARK). Statements in these languages 
> should
> be visually distinct.

The problem here is that there is a strong overlap between these 
annotations. If you make them completely distinct, you'll have to write most 
of them twice (especially globals contracts, which you'll want for the 
optimization/code generation benefits, and preconditions and predicates, 
because they expose what otherwise are checks buried in subprogram bodies 
[where they can never be eliminated] to the compiler [which can then 
eliminate them from the call site - no matter whether you are using proofs 
or not]). These to me are a code quality issue, and items intended for the 
Ada compiler have to be in the Ada syntax. And I agree that you don't want 
to make them completely the same either (Ada has no use for the "depends" 
annotation that I can see; Ada code shouldn't have items in it that are of 
no use to an Ada compiler).
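
As a rough illustration of the call-site point (a sketch only; the names, 
the table and the contracts are invented, Ada 2012 / SPARK 2014 aspect 
syntax):

   subtype Index is Integer range 1 .. 100;
   type Int_Table is array (Index) of Integer;
   Table : Int_Table := (others => 0);

   procedure Store (I : Integer; V : Integer)
     with Global => (In_Out => Table),   -- globals contract
          Pre    => I in Index;          -- exposes the index check
   --  The body would just do "Table (I) := V;".  With the precondition on
   --  the spec, a call such as Store (42, 0) can be checked -- and the
   --  check eliminated -- at the call site, instead of staying buried in
   --  the body where it can never be removed.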

Which leaves us with a problem that isn't obvious to solve. AdaCore is 
trying.

Anyway, I've said enough on this topic.

                                Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-11 19:23           ` Stefan.Lucks
@ 2013-07-12  0:21             ` Randy Brukardt
  2013-07-12  9:12               ` Stefan.Lucks
  0 siblings, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-12  0:21 UTC (permalink / raw)


<Stefan.Lucks@uni-weimar.de> wrote in message 
news:alpine.DEB.2.10.1307112112110.8707@debian...
On Wed, 10 Jul 2013, Randy Brukardt wrote:

...
>> ... (detailed and possibly interesting examples of Depends removed)
>>
>> ...
>>> Without "Depends", your cool analysis tool verifying all pre- and
>>> postconditions will not catch this bug.
>>
>> Sure, but so what? It's a fool's game to try to catch all bugs; it's not 
>> even a worthwhile goal. (That's essentially where Spark goes wrong, in my 
>> view.)
>
>Who said, catch all bugs? I didn't.

You want to catch this very unlikely bug, and add a giant pile of extra junk 
annotations for this purpose. My contention is that it won't catch enough 
bugs by itself to be worth the complication.

>> What I care about is extending what Ada already does well: catch the
>> low-hanging fruit of bugs.
>
>By my own experience, having worked occasionally with SPARK, information
>flow analysis actually catches a lot of errors -- even without having
>specified any pre- or postconditions. In other words, SPARK's information
>flow analysis actually gives you the low-hanging fruits you mention.

Preconditions (and constraints) come first. You write them before you write 
any bodies, before you write any comments even - they're an integral part of 
the specification of a subprogram. So the situation you speak of isn't even 
possible.

Moreover, the vast majority of the information in "Depends" is already in 
the Ada subprogram specification (once you include its pre and 
postconditions). The question is how much it adds to already properly 
annotated subprograms, not what happens when the tools are misused.

I don't care at all about "after-the-fact" addition of these things. Hardly 
anyone is going to be adding annotations to existing packages (it's not 
clear whether we're ever going to do that for the language-defined 
packages).

>The fruits (well, bugs) you catch by employing pre- and postconditions are
>a bit higher, actually. At least, that is what I gather from my own
>experience YMMV.

Your experience seems to have been on annotating existing code, and doing it 
backwards ("depends" first). I can see why you might have annotated existing 
code that way, but that's not a goal or concern of mine. I'm only interested 
in new code, and code that's written properly (that is, the parameter modes 
tell one the data flow).

>Did you ever actually work with SPARK?

No, there wasn't a free version when I was interested. And now that there is 
a free version, I'm no longer interested because it is too restricted to be 
useful in any of my code. And it tries to do too much as well.

                                     Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Slow?  Ada??
  2013-07-10 23:21         ` Randy Brukardt
  2013-07-11  7:44           ` Dmitry A. Kazakov
@ 2013-07-12  1:01           ` Bill Findlay
  1 sibling, 0 replies; 68+ messages in thread
From: Bill Findlay @ 2013-07-12  1:01 UTC (permalink / raw)


I was amused to read this ...

On 11/07/2013 00:21, in article krkq9u$fo9$1@loke.gir.dk, "Randy Brukardt"
<randy@rrsoftware.com> wrote:
...
> For things like Sine, the best one can do is copy an implementation from a
> cookbook -- it makes no sense to even imagine trying to create a new one.

...for reasons nothing to do with that controversy.

When debugging Whetstone Algol under my KDF9 emulator I looked at the KDF9
assembly code programming of the arctan function and was completely baffled
by it, so I asked my former colleague, Michael Jamieson, to investigate.
He successfully reconstructed the mathematics behind it, which is very
non-obvious (to put it mildly).

A couple of days ago I idly wondered how well this 50-year-old algorithm
would compare with modern implementations, so I wrote a test program to race
it against GNAT GPL 2013's arctan.  I expected to find that 2013 would spank
1963's botty.  To my astonishment, the old algorithm was faster.

Digging down, I found that Ada.Numerics.Long_Elementary_Functions.arctan
does quite a bit of argument range reduction and result error checking, but
finally invokes the "fpatan" opcode of the x86_64 CPU.

The 1963 algorithm, expressed in Ada 2012 but compiled with aggressive
optimization, is only 17% slower than that single CPU opcode!

Note also that, although it was designed for a machine with 48-bit floating
point, it gets the same accuracy as the hardware method on a machine with
64-bit floating point.

Here is the code, with the observed results included as commentary:

with Ada.Numerics.Long_Elementary_Functions;
with Ada.Text_IO;
with CPU_Timing;
with System.Machine_Code;

use  Ada.Numerics.Long_Elementary_Functions;
use  Ada.Text_IO;
use  CPU_Timing;
use  System.Machine_Code;

procedure arctan_test64 is

   -- typical output:
   -- R = 100000000 evaluations
   -- checksum = 5.00000005000000E+07, loop   time per repetition = 6 ns
   -- checksum = 4.38824577044418E+07, P51V15 time per evaluation = 49 ns
   -- checksum = 4.38824577044418E+07, arctan time per evaluation = 53 ns
   -- checksum = 4.38824577044418E+07, fpatan time per evaluation = 41 ns

   -- P51V15 is the KDF9 algorithm used in Whetstone Algol, ca. 1963
   -- See http://www.findlayw.plus.com/KDF9/Arctan%20Paper.pdf

   type vector is array (0..5) of Long_Float;

   V : constant vector := (  28165298.0 / 1479104550.0,
                             28165300.0 / 1479104550.0,
                             56327872.0 / 1479104550.0,
                            113397760.0 / 1479104550.0,
                            179306496.0 / 1479104550.0,
                           1073741824.0 / 1479104550.0);

   function P51V15 (x : Long_Float) return Long_Float with Inline;

   function P51V15 (x : Long_Float)
   return Long_Float is
      A : Long_Float := 1.0;
      S : Long_Float := V(0);
      B : vector;
   begin
       B(0) := sqrt(x*x + 1.0);
       -- 4 AGM (Arithmetic-Geometric Mean) cycles give a set of values ...
       for i in 0..3 loop
          A := (A+B(i)) * 0.5;
          B(i+1) := sqrt(A*B(i));
       end loop;
       -- ... that is subjected to a convergence acceleration process:
       for i in 1..5 loop
          S := S + V(i)*B(i-1);
       end loop;
       return x / S;
   end P51V15;

   -- this is the hardware arctan function of the x86_64 Core i7 CPU
   function fpatan (X : Long_Float) return Long_Float with Inline;

   function fpatan (X : Long_Float)
   return Long_Float is
      Result  : Long_Float;
   begin
      Asm(Template  => "fld1"
                     & Character'Val(10) -- LF
                     & Character'Val(9)  -- HT
                     & "fpatan",
          Outputs   => Long_Float'Asm_Output ("=t", Result),
          Inputs    => Long_Float'Asm_Input  ("0", X));
      return Result;
   end fpatan;

   R : constant := 1e8; -- number of loop repetitions

   function ns_per_rep (c : CPU_Usage_in_Microseconds)
   return Natural is
   begin
      return Natural(c * 1e3 / R);
   end ns_per_rep;

   x : Long_Float;
   c : CPU_Timer;
   l : CPU_Usage_in_Microseconds;
   t : CPU_Usage_in_Microseconds;

begin
   Put_Line("R =" & Integer'Image(R) & " evaluations");

   -- determine the fixed overhead time
   x := 0.0;
   Reset_Timer(c);
   for i in 1..R loop
      x := x + Long_Float(i)/Long_Float(R);
   end loop;
   l := User_CPU_Time_Since(c);
   Put_Line("checksum =" & x'Img & ", "
          & "loop   time per repetition ="
          & Natural'Image(ns_per_rep(l)) & " ns");

   x := 0.0;
   Reset_Timer(c);
   for i in 1..R loop
      x := x + P51V15(Long_Float(i)/Long_Float(R));
   end loop;
   t := User_CPU_Time_Since(c) - l;
   Put_Line("checksum =" & x'Img & ", "
          & "P51V15 time per evaluation ="
          & Natural'Image(ns_per_rep(t)) & " ns");

   x := 0.0;
   Reset_Timer(c);
   for i in 1..R loop
      x := x + arctan(Long_Float(i)/Long_Float(R));
   end loop;
   t := User_CPU_Time_Since(c) - l;
   Put_Line("checksum =" & x'Img & ", "
          & "arctan time per evaluation ="
          & Natural'Image(ns_per_rep(t)) & " ns");

   x := 0.0;
   Reset_Timer(c);
   for i in 1..R loop
      x := x + fpatan(Long_Float(i)/Long_Float(R));
   end loop;
   t := User_CPU_Time_Since(c) - l;
   Put_Line("checksum =" & x'Img & ", "
          & "fpatan time per evaluation ="
          & Natural'Image(ns_per_rep(t)) & " ns");

end arctan_test64;

The difference in time between Ada.Numerics.Long_Elementary_Functions.arctan
and the fpatan function above is due to the afore-mentioned range reduction
and checking.  The KDF9 programmers in 1963 were less punctilious about
such matters than we rightly expect Ada to be nowadays.

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;



^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-12  0:00         ` Randy Brukardt
@ 2013-07-12  7:25           ` Simon Wright
  2013-07-12 20:07             ` Randy Brukardt
  2013-07-12 14:23           ` Robert A Duff
  1 sibling, 1 reply; 68+ messages in thread
From: Simon Wright @ 2013-07-12  7:25 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:

> GNAT has a pragma Pure_Function which can be used on a single
> function. This isn't in Ada because it is just an assumption; there are
> no checks at all as to whether it is true. In that case, any mistake in the
> function would make a program calling it erroneous (because the
> compiler is allowed to assume the function is pure, even if it
> isn't). It can only be made safe if it is combined with a proof that
> there are no side-effects, in which case it surely should never be
> separated from that proof -- but pragmas are *always* separated from
> the thing they apply to. As such, they are never appropriate for
> setting properties of single entities.

GNAT supports Pure_Function as an aspect; so you can write

   package Pf is
      function F (X : Integer) return Integer with Pure_Function;
   end Pf;


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-11 22:28             ` Randy Brukardt
@ 2013-07-12  8:02               ` Dmitry A. Kazakov
  2013-07-12 21:16                 ` Randy Brukardt
  0 siblings, 1 reply; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-12  8:02 UTC (permalink / raw)


On Thu, 11 Jul 2013 17:28:31 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:1s6sw93u97355$.1n0205ucj6251.dlg@40tude.net...

>> How do you know that the copy you made is authentic? What is about the 
>> accuracy of the implementation?
> 
> That's easily determinable by unit tests (which are needed in any case - 
> testing and proof are not exclusive, you need both because it's real easy to 
> prove the wrong thing).

Firstly, you propose to replace proofs with tests, which automatically
disqualifies the answer.

Secondly, it does not work anyway. One famous example was given by Forsythe
in his book on numerical methods: solving a quadratic equation using the
school formula. For some combinations of coefficients the accuracy
catastrophically drops for one root while remaining good for the other. You
*cannot* catch such stuff through tests. Only analysis can, and analysis is
equivalent to proofs.
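
A minimal sketch of the effect (my own numbers, nothing Forsythe-specific; 
Long_Float and the standard elementary functions):

   with Ada.Text_IO;  use Ada.Text_IO;
   with Ada.Numerics.Long_Elementary_Functions;
   use  Ada.Numerics.Long_Elementary_Functions;

   procedure Quadratic_Demo is
      --  x**2 - 1.0e8*x + 1.0 = 0; the true roots are near 1.0e8 and 1.0e-8.
      A : constant Long_Float := 1.0;
      B : constant Long_Float := -1.0e8;
      C : constant Long_Float := 1.0;
      D : constant Long_Float := Sqrt (B*B - 4.0*A*C);
      Big_Root    : constant Long_Float := (-B + D) / (2.0 * A);  -- accurate
      Small_Naive : constant Long_Float := (-B - D) / (2.0 * A);  -- cancellation
      Small_Safe  : constant Long_Float := C / (A * Big_Root);    -- via product of roots
   begin
      Put_Line ("school formula : " & Long_Float'Image (Small_Naive));
      Put_Line ("stable variant : " & Long_Float'Image (Small_Safe));
      --  The school-formula value of the small root has lost most of its
      --  significant digits, yet a test with an absolute tolerance of,
      --  say, 1.0e-6 would still pass.
   end Quadratic_Demo;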

>>>>> What's too hard shouldn't be bothered with, because we've lived just
>>>>> fine with no proofs at all for programming up to this point.
>>>>
>>>> We didn't. Accessibility checks and constraint errors are a permanent
>>>> headache in Ada.
>>>
>>> Constraint_Errors a headache? They're a blessing, because the alternative 
>>> is zillions of impossible to find bugs.
>>
>> No, the alternative is to prove that the constraint is not violated.
> 
> That's completely impractical.

You *always* can do this on the client's side:

exception
   when Constraint_Error =>
       Put_Line ("Oops!");
end;

Which is the point, things are much simpler on instances of use compared to
universal usage.

>> You can do this only at the client side. Which is why it cannot be
>> contracted for the callee. When you move from instances to universally
>> holding contracts you frequently lose provability.
> 
> ...of things that aren't worth proving in the first place. You get rid of 
> constraint_errors from bodies by writing good preconditions, predicates, 
> constraints, and exclusions in the first place.

Really? You can start writing preconditions for +,-,*,/,** defined in the
package Standard and continue on to user-defined types and subtypes. If
anything is totally impractical, it is the idea of moving Constraint_Error
from post- to precondition. It is wrong and will never work for anything
beyond trivial examples.

> If for whatever reason you can't do this, then yes you lose proveability.

Nope. See above. I can prove that Constraint_Error is handled by a given
client. Which is what Ada programmers really want, because an unhandled
Constraint_Error is by far the most common bug.
 
>> Note how accessibility checks attempted in the form of a contract became
>> more problem than solution. In order to have it provable you must
>> strengthen it. Probably every Ada programmer was annoyed by silly "must be
>> statically deeper" messages about pointers. It cannot be handled at the
>> level of contracts. Yet in most cases it is no problem to see that a
>> *given* pointer is never dangling.
> 
> "No problem"? Informally, sure. It's impossible to prove in any real case; 
> there is too much going on. (Claw, for instance, prevents it by design, but 
> I don't think there is any way to formally describe that design -- certainly 
> not in systems like Spark which ignore the effects of finalization [which is 
how Claw guarantees non-dangling]).

Most real cases are about downward closures and the like, where the pointer
is an aliased argument. For the latter you need the body.

>>> It's annoying that one can't tell between specification errors that should
>>> be contracted and proved away and those that are truly runtime (that is, 
>>> a normal part of the implementation);
>>
>> That was ARG's decision for Ada 2012, wasn't it?
> 
> No, it was an effect of Ada 83, which conflated those uses of exceptions. We 
> had to formalize existing practice (we need the ability to add these 
> specifications to existing code without changing any behavior). It would be 
> better for exceptions that we would want to "prove-away" (like Status_Error 
> and Mode_Error in Text_IO) to be strictly separated from those that are 
> purely runtime behavior (like End_Error and Device_Error).

This has nothing to do with that.

Status_Error and Mode_Error are there because the file type as designed does
not reflect its mode:

   type Read_File_Type is ...
   type Write_File_Type is ...
   type Readwrite_File_Type is Read_File_Type and Write_File_Type is ...

Done.
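
A rough sketch of the idea using plain Ada 2005 interfaces (the names are 
mine, not from any real library):

   package Typed_Files is
      type Readable is limited interface;
      procedure Read  (F : in out Readable; Item : out Character) is abstract;

      type Writable is limited interface;
      procedure Write (F : in out Writable; Item : in Character) is abstract;

      --  A read-write file presents both views:
      type Read_Write is limited interface and Readable and Writable;

      --  Each concrete file type would provide its own Open as a primitive
      --  operation returning the appropriate view.  A client holding only a
      --  Readable'Class value has no Write to call, so the "Mode_Error on
      --  Write" case is rejected at compile time instead of being checked
      --  at run time.
   end Typed_Files;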

> One thing we had to do is to make these things work is to allow existing 
> compilers to use runtime checks, with the hope that more will become proved 
> over time. Compiler technology isn't quite ready for mandatory proof 
> requirements (maybe next time).

This is conflation at its worst.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-12  0:21             ` Randy Brukardt
@ 2013-07-12  9:12               ` Stefan.Lucks
  2013-07-12 20:47                 ` Randy Brukardt
  2013-08-05  5:43                 ` Paul Rubin
  0 siblings, 2 replies; 68+ messages in thread
From: Stefan.Lucks @ 2013-07-12  9:12 UTC (permalink / raw)


On Thu, 11 Jul 2013, Randy Brukardt wrote:

>> Who said, catch all bugs? I didn't.
>
> You want to catch this very unlikely bug, and add a giant pile of extra junk
> annotations for this purpose. My contention is that it won't catch enough
> bugs by itself to be worth the complication.

Firstly, whatever data flow annotations are, they are not "junk".

Secondly, there is no "giant pile" of data flow annotations. Actually, 
they usually take far fewer lines of "anno-code" than writing pre- and 
postconditions. So even if you consider the data flow annotations as 
redundant, their overhead is small.

Thirdly, maybe my example has been too artificial. Below, I'll briefly 
describe one real-world example.

>> The fruits (well, bugs) you catch by employing pre- and postconditions are
>> a bit higher, actually. At least, that is what I gather from my own
>> experience YMMV.
>
> Your experience seems to have been on annotating existing code,

Not at all! (Well, I tried once to SPARKify existing Ada code -- but I got 
rid of that disease very very quickly. Turning naturally-written Ada into 
proper SPARK is a pain in the you-know-where!)

One real-life example (simplified for the purpose of posting this) is the 
implementation of an authenticated encryption scheme. Consider two byte 
strings X and Y of the same length, X being the message and Y being the 
"key stream". There is an additional authentication key K. The output of the 
authenticated encryption is the ciphertext (X xor Y), followed by a 
cryptographic checksum(*) of X under the key K.

Specifying and proving the first part of the output (X xor Y) was easy. 
But specifying and proving the checksum part turned out to be tough. So I 
stopped trying it -- concentrating on the low-hanging fruits, as you put 
it.

However, I still had the flow annotations in my spec. (I usually write the 
flow annotations first, then the pre- and postconditions, and then the 
implementation.) The flow annotations specified the flow from X and K to 
the checksum output Z. And that actually caught my error of using (X xor Y) 
instead of X in the implementation.
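
A rough, much-simplified sketch of the kind of spec I mean (the type names 
are invented, SPARK 2005 annotation style):

   procedure Encrypt_Then_MAC
     (X : in     Byte_String;    -- message
      Y : in     Byte_String;    -- key stream, same length as X
      K : in     Key_Type;       -- authentication key
      C :    out Byte_String;    -- ciphertext, X xor Y
      Z :    out MAC_Type);      -- cryptographic checksum of X under K
   --# derives C from X, Y &
   --#         Z from X, K;
   --  An implementation that computes the checksum over (X xor Y) makes Z
   --  depend on Y as well, contradicting the derives clause, so the flow
   --  analyser reports it even though no postcondition about Z was written.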

>> Did you ever actually work with SPARK?
>
> No, there wasn't a free version when I was interested. And now that there is
> a free version, I'm no longer interested because it is too restricted to be
> useful in any of my code. And it tries to do too much as well.

Agreed! Well, certainly there are people who require as much assurance as 
SPARK 05 provides, but I did find the work with old SPARK rather tedious, 
and I don't need that amount of assurance.


-----------------
(*) The correct terminology would be "message authentication code" or 
"MAC", rather than "checksum".



------  I  love  the  taste  of  Cryptanalysis  in  the morning!  ------
     <http://www.uni-weimar.de/cms/medien/mediensicherheit/home.html>
--Stefan.Lucks (at) uni-weimar.de, Bauhaus-Universität Weimar, Germany--


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-12  0:00         ` Randy Brukardt
  2013-07-12  7:25           ` Simon Wright
@ 2013-07-12 14:23           ` Robert A Duff
  2013-07-12 20:14             ` Randy Brukardt
  1 sibling, 1 reply; 68+ messages in thread
From: Robert A Duff @ 2013-07-12 14:23 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:

> I'm pretty amazed to find out that I can't debug concurrent programs, 
> because I've been doing it for decades.

Are you done yet?

- Bob


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-12  7:25           ` Simon Wright
@ 2013-07-12 20:07             ` Randy Brukardt
  0 siblings, 0 replies; 68+ messages in thread
From: Randy Brukardt @ 2013-07-12 20:07 UTC (permalink / raw)


"Simon Wright" <simon@pushface.org> wrote in message 
news:lyd2qoutno.fsf@pushface.org...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
>
>> GNAT has a pragma Pure_Function which can be used on a single
>> function. This isn't in Ada because it is just an assumption; there are
>> no checks at all whether it is true. In that case, any mistake in the
>> function would make a program calling it erroneous (because the
>> compiler is allowed to assume the function is pure, even if it
>> isn't). It can only be made safe if it is combined with a proof that
>> there are no side-effects, in which case it surely should never be
>> separated from that proof -- but pragmas are *always* separated from
>> the thing they apply to. As such, they are never appropriate for
>> setting properties of single entities.
>
> GNAT supports Pure_Function as an aspect; so you can write
>
>   package Pf is
>      function F (X : Integer) return Integer with Pure_Function;
>   end Pf;

Right, but the semantics are still (arguably) wrong. We (the ARG) eventually 
concluded that Ada can't use the name Pure_Function because we didn't want 
to interfere with the existing GNAT implementation, and doing it without any 
checking at all is just a non-starter.

                                      Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-12 14:23           ` Robert A Duff
@ 2013-07-12 20:14             ` Randy Brukardt
  0 siblings, 0 replies; 68+ messages in thread
From: Randy Brukardt @ 2013-07-12 20:14 UTC (permalink / raw)


"Robert A Duff" <bobduff@shell01.TheWorld.com> wrote in message 
news:wcck3kvdfgg.fsf@shell01.TheWorld.com...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
>
>> I'm pretty amazed to find out that I can't debug concurrent programs,
>> because I've been doing it for decades.
>
> Are you done yet?

I presume this is supposed to be a joke of some kind (but without an 
emoticon, can't be sure), and I'm not sure what the joke is supposed to be, 
either.

So, the serious answer: The servers appear to work properly, without 
deadlocks or other problems. There can't be 100% certainty of that, but 
hardly anyone requires that sort of certainty. I'd like a form of 
correctness-by-construction for concurrent programs, but that appears beyond 
the state of the art. I don't see proof tools adding much in this area; the 
main problem is ensuring that bad things are avoided (about the only place 
that proof might help would be in preventing livelock, but that's way down 
the list of problems).

                       Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-12  9:12               ` Stefan.Lucks
@ 2013-07-12 20:47                 ` Randy Brukardt
  2013-08-05  5:43                 ` Paul Rubin
  1 sibling, 0 replies; 68+ messages in thread
From: Randy Brukardt @ 2013-07-12 20:47 UTC (permalink / raw)


<Stefan.Lucks@uni-weimar.de> wrote in message 
news:alpine.DEB.2.10.1307120950410.10151@debian...
On Thu, 11 Jul 2013, Randy Brukardt wrote:
>>> Who said, catch all bugs? I didn't.
>>
>> You want to catch this very unlikely bug, and add a giant pile of extra 
>> junk
>> annotations for this purpose. My contention is that it won't catch enough
>> bugs by itself to be worth the complication.
>
>Firstly, whatever data flow annotations are, they are not "junk".
>
>Secondly, there is no "giant pile" of data flow annotations. Actually,
>they usually take a lot less lines of "anno-code" than writing pre- and
>postconditions. So even if you consider the data flow annotations as
>redundant, their overhead is small.

I called them "junk" because they're redundant (certainly in well-designed 
code). The OP complained about the proposed annotations for Spark 2014, and I 
agree with him on that. But I find it irrelevant because they're redundant.

And I'm strongly opposed to putting redundant information in specifications 
or anywhere else. I've learned by painful experience over the years that 
redundant information -- in type declarations, a compiler symbol table, or a 
programming language standard -- always ends up out of sync with the 
original information. That's especially true if it cannot be automatically 
checked (I'm dubious that "Depends" could be checked that way given the 
information available to an Ada compiler). So, I want to eliminate as much 
as possible of that information.

Clearly, we're not getting rid of parameter modes. Clearly, we need 
preconditions and postconditions, they can't be done any other way. That 
makes "Depends" the redundant information that we should not have in a 
specification.

Moreover, I really don't see what value they could possibly have. A 
subprogram has a number of "in" parameters and a number of "out" parameters 
(and possibly, but hopefully not, some input globals and output globals; 
treating a function result as an "out" parameter for this discussion). All of 
the "in" parameters should somehow affect the "out" parameters (and it is 
best if there is only one output).

Routines that don't have this structure are already dubious. It sometimes 
can't be avoided, but it should be extremely rare. So, already, the extra 
information gained from these "flow" annotations is minimal. On top of that, 
"out" parameters that don't depend on some parameters are likely to be 
obvious in the postcondition for that parameter.

The point is that there isn't much information to be gained from such an 
annotation; the vast majority of it is repeating the obvious (inputs affect 
outputs). I realize that there are a lot of badly designed subprograms out 
there, but I wouldn't want a significant language feature just for badly 
designed subprograms -- especially when we still need lots of support for 
*well* designed subprograms!

...
>> Your experience seems to have been on annotating existing code,
>
>Not at all! (Well, I tried once to SPARKify existing Ada code -- but I got
>rid of that disease very very quickly. Turning naturally-written Ada into
>proper SPARK is a pain in the you-know-where!)

The reason I said that is that if you design new code, you write the 
preconditions and postconditions along with the subprogram specification 
(many in the form of predicates, I would hope, as those are a lot easier 
than repeating large pre- and postconditions), before you write anything 
else (like bodies or even descriptions of the purpose of the routine). 
(And you surely don't write subprograms where only some of the inputs 
affect some of the outputs.) I've never heard of any experienced Ada 
programmer writing subprogram specifications without considering the proper 
subtypes of the parameters immediately! So I don't see how you could end up 
with just "Depends" annotations. I grant that you might "beef up" the 
preconditions and postconditions later, to provide a more complete 
description, but you're almost always going to start with some.
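
For instance (a sketch only, with invented names), a subtype predicate 
states the property once instead of repeating it in the pre- and 
postconditions of every operation that handles such values:

   subtype Even is Integer
     with Dynamic_Predicate => Even mod 2 = 0;

   procedure Halve (X : in out Even);
   --  Roughly the same effect as writing "Pre => X mod 2 = 0" on every
   --  such operation, without the repetition.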

>One real life example (simplified for the purpose of posting this) is the
>implementation an authenticated encryption scheme. Consider two byte
>strings X and Y of the same length, X being the message and being Y the
>"key stream". There is additional authentication key K. The output of the
>authenticated encryption is the ciphertext (X xor Y), followed by a
>cryptographic checksum(*) of X under the key K.

This sounds like precisely the bad sort of subprogram that typically should 
be avoided. Multiple outputs, and a strange non-dependence of some of the 
outputs on some of the inputs. I grant this can be unavoidable in some 
cases, but it should be rare. I don't see the point of an annotation that 
only provides extra information for such rare cases.

Probably a more sensible annotation here would be the negative: that is, 
declare which inputs a particular output does *not* depend upon. That 
normally should be a null list; in the rare case where it is non-null, the 
information could be given.

But, as I said, this is not information useful to an Ada compiler (while the 
other annotations will improve code quality).

                        Randy.


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-12  8:02               ` Dmitry A. Kazakov
@ 2013-07-12 21:16                 ` Randy Brukardt
  2013-07-14 10:19                   ` Dmitry A. Kazakov
  0 siblings, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-12 21:16 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1lhsyg46iz9jj.8jrvydqxt8l6.dlg@40tude.net...
> On Thu, 11 Jul 2013 17:28:31 -0500, Randy Brukardt wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>> news:1s6sw93u97355$.1n0205ucj6251.dlg@40tude.net...
>
>>> How do you know that the copy you made is authentic? What is about the
>>> accuracy of the implementation?
>>
>> That's easily determinable by unit tests (which are needed in any case -
>> testing and proof are not exclusive, you need both because it's real easy 
>> to
>> prove the wrong thing).
>
> Firstly, you propose to replace proofs with tests, which automatically
> disqualifies the answer.

True enough. I'm saying that we need to replace the impractical ("proofs") 
with the possible.

> Secondly, it does not work anyway. One famous example was given by 
> Forsythe
> in his book on numerical methods. It is solving quadratic equation using
> school formula. In some cases (when roots are close to each other) the
> accuracy catastrophically drops for one root and increases for another. 
> You
> *cannot* catch such stuff through tests. Only analysis can, and analysis 
> is
> equivalent to proofs.

If your "cookbook" is full of bad algorithms, then of course you'll get 
garbage. And I certainly agree you can't test accuracy. The problem is that 
even if you prove that your algorithm is too inaccurate, what can you do? 
You don't have any other way to implement it (because you already did the 
research to find the best algorithm available -- if you didn't do at least 
that, "proof" is the LEAST of your problems).

I was presuming that you were starting with a cookbook implementation of 
appropriate accuracy, and the only need for testing is to prove that the 
algorithm was in fact implemented properly.

If you're starting with a blank sheet of paper, there is no technique that 
is likely to be available in my lifetime that is really going to help.

...
>>> No, the alternative is to prove that the constraint is not violated.
>>
>> That's completely impractical.
>
> You *always* can do this on the client's side:
>
> exception
>   when Constraint_Error =>
>       Put_Line ("Oops!");
> end;

That doesn't prove that the constraint is not violated -- that proves 
that you covered it up! Supposedly, the worst side-effect of exception 
contracts in Java is that programmers write handlers to swallow exceptions 
without knowing or caring why they're doing so.

You've got to prevent the exceptions at the source, and the main way you do 
that for Constraint_Error is to use proper range constraints and null 
exclusions on parameters. Then your proof is actually possible because we 
moved it from inside of the body to the call site.
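
A rough sketch of what I mean (the names are invented):

   type Node is record
      Level : Integer := 0;
   end record;
   type Node_Access is access Node;

   subtype Percent is Integer range 0 .. 100;

   --  Compare with "procedure Update (N : Node_Access; L : Integer);",
   --  where the null check and the range check are buried in the body.
   --  Written this way, both checks belong to each call, where the
   --  compiler (or a prover) can often discharge them statically.
   procedure Update (N : not null Node_Access; L : Percent);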

> Which is the point, things are much simpler on instances of use compared 
> to
> universal usage.

Which of course is what I've been saying: you move these requirements into 
the contract for a subprogram so you can prove them at the call-site. It's 
impossible to prove anything in the body of a subprogram if all of the 
parameters are type Integer.

>>> You can do this only at the client side. Which is why it cannot be
>>> contracted for the callee. When you move from instances to universally
>>> holding contracts you frequently lose provability.
>>
>> ...of things that aren't worth proving in the first place. You get rid of
>> constraint_errors from bodies by writing good preconditions, predicates,
>> constraints, and exclusions in the first place.
>
> Really? You can start writing preconditions for +,-,*,/,** defined in the
> package Standard and continue to user defined types and subtypes. If
> anything is totally impractical then the idea to move Constraint_Error 
> from
> post to pre-condition. It is wrong and will never work for anything beyond
> trivial examples.

Which is amazing because it works for most examples. (It doesn't work for 
overflow of "*" very well, but that's the only real issue.) And in any case, 
there is no point in moving it for language-defined operators -- both the 
compiler and any provers know exactly what they do. It's only for 
user-defined subprograms that you need to do so.

...
>>>> It's annoying that one can't tell between specification errors that 
>>>> should
>>>> be contracted and proved away and those that are truly runtime (that 
>>>> is,
>>>> a normal part of the implementation);
>>>
>>> That was ARG's decision for Ada 2012, wasn't it?
>>
>> No, it was an effect of Ada 83, which conflated those uses of exceptions. 
>> We
>> had to formalize existing practice (we need the ability to add these
>> specifications to existing code without changing any behavior). It would 
>> be
>> better for exceptions that we would want to "prove-away" (like 
>> Status_Error
>> and Mode_Error in Text_IO) to be strictly separated from those that are
>> purely runtime behavior (like End_Error and Device_Error).
>
> This has nothing to do with that.
>
> Status_Error and Mode_Error are there because file type as designed does
> not reflect its mode:
>
>   type Read_File_Type is ...
>   type Write_File_Type is ...
>   type Readwrite_File_Type is Read_File_Type and Write_File_Type is ...
>
> Done.

Huh? This makes no sense for this problem. You can't do that in any version 
of Ada (or any other sensible language), because it would require that the 
type of a file object could change in response to operations. And in such a 
language, it would be impossible to do any static checks (you can't know if 
Open is going to succeed, for instance, so afterwards, the file is going to 
have one of several possible types). Which would defeat the purpose.

Ada doesn't allow changing the type of an object for any reason, and to 
change that would require designing a new language. I realize that you have 
a fantasy that such a radical redesign could be made to work compatibly, but 
that's highly unlikely.

>> One thing we had to do is to make these things work is to allow existing
>> compilers to use runtime checks, with the hope that more will become 
>> proved
>> over time. Compiler technology isn't quite ready for mandatory proof
>> requirements (maybe next time).
>
> This is a conflation of its worst.

Carried over from Ada 83. Ada will never be a language that requires much in 
the way of proof; it would be way too disruptive to existing 
implementations. (And we definitely want more than one implementation!) It 
could only be changed in a new Ada-like language, but that seems unlikely 
right now.

                                         Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-12 21:16                 ` Randy Brukardt
@ 2013-07-14 10:19                   ` Dmitry A. Kazakov
  2013-07-14 15:57                     ` Georg Bauhaus
  2013-07-16  0:13                     ` Randy Brukardt
  0 siblings, 2 replies; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-14 10:19 UTC (permalink / raw)


On Fri, 12 Jul 2013 16:16:36 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:1lhsyg46iz9jj.8jrvydqxt8l6.dlg@40tude.net...

>> Secondly, it does not work anyway. One famous example was given by 
>> Forsythe
>> in his book on numerical methods. It is solving quadratic equation using
>> school formula. In some cases (when roots are close to each other) the
>> accuracy catastrophically drops for one root and increases for another. 
>> You
>> *cannot* catch such stuff through tests. Only analysis can, and analysis 
>> is
>> equivalent to proofs.
> 
> If your "cookbook" is full of bad algorithms, then of course you'll get 
> garbage. And I certainly agree you can't test accuracy. The problem is that 
> even if you prove that your algorithm is too inaccurate, what can you do?

This is a different question. The first one is how I prove things about
accuracy. And the answer is that I need bodies to do that, and specifying
accuracy through contracts would be extremely difficult.

Answering the new question: if you know that the aircraft will crash, you
don't board it.

> You don't have any other way to implement it (because you already did the 
> research to find the best algorithm available -- if you didn't do at least 
> that, "proof" is the LEAST of your problems).

There is practically never a single best algorithm. In any case, in order to
make a decision one needs evidence. The only reliable evidence is a proof.

>>>> No, the alternative is to prove that the constraint is not violated.
>>>
>>> That's completely impractical.
>>
>> You *always* can do this on the client's side:
>>
>> exception
>>   when Constraint_Error =>
>>       Put_Line ("Oops!");
>> end;
> 
> That doesn't prove that the constraint is not violated

It handles the violation, which cannot be prevented otherwise.

> You've got to prevent the exceptions at the source, and the main way you do 
> that for Constraint_Error is to use proper range constraints and null 
> exclusions on parameters.

No. Dynamic constraints cannot prevent anything. They are a part of
defining the semantics, so that a violation can be handled elsewhere.

Only static constraints can serve prevention. If you let Constraint_Error
propagate, you have already given up on static constraints and on
provability over the whole class of all instances.

You still can have it provable on per instance basis and this is the
client's side.

> Then your proof is actually possible because we 
> moved it from inside of the body to the call site.

Yes. Bodies are needed.
 
>>>> You can do this only at the client side. Which is why it cannot be
>>>> contracted for the callee. When you move from instances to universally
>>>> holding contracts you frequently lose provability.
>>>
>>> ...of things that aren't worth proving in the first place. You get rid of
>>> constraint_errors from bodies by writing good preconditions, predicates,
>>> constraints, and exclusions in the first place.
>>
>> Really? You can start writing preconditions for +,-,*,/,** defined in the
>> package Standard and continue to user defined types and subtypes. If
>> anything is totally impractical then the idea to move Constraint_Error from
>> post to pre-condition. It is wrong and will never work for anything beyond
>> trivial examples.
> 
> Which is amazing because it works for most examples. (It doesn't work for 
> overflow of "*" very well, but that's the only real issue.) And in any case, 
> there is no point in moving it for language-defined operators -- both the 
> compiler and any provers know exactly what they do. It's only for 
> user-defined subprograms that you need to do so.

How are you going to do that for user-defined subprograms built from
language-defined operations -- which is what every user-defined subprogram
is?

>>>>> It's annoying that one can't tell between specification errors that 
>>>>> should
>>>>> be contracted and proved away and those that are truly runtime (that 
>>>>> is,
>>>>> a normal part of the implementation);
>>>>
>>>> That was ARG's decision for Ada 2012, wasn't it?
>>>
>>> No, it was an effect of Ada 83, which conflated those uses of exceptions. We
>>> had to formalize existing practice (we need the ability to add these
>>> specifications to existing code without changing any behavior). It would be
>>> better for exceptions that we would want to "prove-away" (like 
>>> Status_Error
>>> and Mode_Error in Text_IO) to be strictly separated from those that are
>>> purely runtime behavior (like End_Error and Device_Error).
>>
>> This has nothing to do with that.
>>
>> Status_Error and Mode_Error are there because file type as designed does
>> not reflect its mode:
>>
>>   type Read_File_Type is ...
>>   type Write_File_Type is ...
>>   type Readwrite_File_Type is Read_File_Type and Write_File_Type is ...
>>
>> Done.
> 
> Huh? This makes no sense for this problem. You can't do that in any version 
> of Ada (or any other sensible language),

Of course I can. It is MI. Even in Ada it is partially possible with
interfaces, just a bit more clumsy than it should be. It is trivial to
statically ensure that Write will never be called on Read_File_Type. It has
no such operation.

> because it would require that the 
> type of a file object could change in response to operations.

Never.

>>> One thing we had to do is to make these things work is to allow existing
>>> compilers to use runtime checks, with the hope that more will become 
>>> proved over time. Compiler technology isn't quite ready for mandatory proof
>>> requirements (maybe next time).
>>
>> This is a conflation of its worst.
> 
> Carried over from Ada 83. Ada will never be a language that requires much in 
> the way of proof, it would be way too disruptive to existing 
> implementations.

I don't see how adding optional exception contracts should disrupt
anything.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-14 10:19                   ` Dmitry A. Kazakov
@ 2013-07-14 15:57                     ` Georg Bauhaus
  2013-07-16  0:16                       ` Randy Brukardt
  2013-07-16  0:13                     ` Randy Brukardt
  1 sibling, 1 reply; 68+ messages in thread
From: Georg Bauhaus @ 2013-07-14 15:57 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:

> I don't see how adding optional exception contracts should disrupt
> anything.

Aren't exceptions already part of contracts? Implied, though.

procedure Op (x : in out T)
  with Post => Foo(x) or else False;

For clarity, 

  C_E: Boolean renames False;

procedure Op (x : in out T)
  with Post => Foo(x) or else C_E;


Or, if there seems no way of telling clients about
preventive measures, but they should
nevertheless be aware of exceptions,

procedure Op (x : in out T)
  with Post => Random or else C_E;


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-14 10:19                   ` Dmitry A. Kazakov
  2013-07-14 15:57                     ` Georg Bauhaus
@ 2013-07-16  0:13                     ` Randy Brukardt
  2013-07-16  8:37                       ` Dmitry A. Kazakov
  1 sibling, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-16  0:13 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1d4tuwlfdnz2j$.18l68s96x3vjj.dlg@40tude.net...
> On Fri, 12 Jul 2013 16:16:36 -0500, Randy Brukardt wrote:
...
>> You've got to prevent the exceptions at the source, and the main way you 
>> do
>> that for Constraint_Error is to use proper range constraints and null
>> exclusions on parameters.
>
> No. Dynamic constraints cannot prevent anything. They are a part of
> defining semantics so that violation could be handled elsewhere.

Sorry, you definitely can prove that dynamic constraints can't be violated. 
Ada compilers do it all the time. The issue is letting the programmer have 
access to that information.

> Only static constraints can serve prevention. If you let Constraint_Error
> propagation, you already gave up with static constraints and with
> provability on the whole class of all instances.

If you don't want Constraint_Error to propagate (and generally one does 
not), then you don't put that into the exception contract for the routine. 
Hardly ever would you want that in the contract.

> You still can have it provable on per instance basis and this is the
> client's side.

No, that's making it a lot more complicated than it needs to be, and I don't 
think it's possible in any useful generality.

In any case, going that route completely takes the compiler out of the 
equation. Once you do that, only an extremely limited number of users would 
care, and there will be no effect on normal programs.

...
>>>>> You can do this only at the client side. Which is why it cannot be
>>>>> contracted for the callee. When you move from instances to universally
>>>>> holding contracts you frequently lose provability.
>>>>
>>>> ...of things that aren't worth proving in the first place. You get rid 
>>>> of
>>>> constraint_errors from bodies by writing good preconditions, 
>>>> predicates,
>>>> constraints, and exclusions in the first place.
>>>
>>> Really? You can start writing preconditions for +,-,*,/,** defined in 
>>> the
>>> package Standard and continue to user defined types and subtypes. If
>>> anything is totally impractical then the idea to move Constraint_Error 
>>> from
>>> post to pre-condition. It is wrong and will never work for anything 
>>> beyond
>>> trivial examples.
>>
>> Which is amazing because it works for most examples. (It doesn't work for
>> overflow of "*" very well, but that's the only real issue.) And in any 
>> case,
>> there is no point in moving it for language-defined operators -- both the
>> compiler and any provers know exactly what they do. It's only for
>> user-defined subprograms that you need to do so.
>
> How are you going to do that for user-defined subprograms using
> language-defined operations, which are all user-defined subprograms?

Why is there a problem? Every Ada compiler (and pretty much every other 
compiler as well) does it already. The only issue is that programmers can't 
harness it for their own purposes. I would hope that exception contracts 
would be defined in order to do that.

...
>> Huh? This makes no sense for this problem. You can't do that in any 
>> version
>> of Ada (or any other sensible language),
>
> Of course I can. It is MI. Even in Ada it is partially possible with
> interfaces, just a bit more clumsy than it should be. It is trivial to
> statically ensure that Write will never be called on Read_File_Type. It 
> has
> no such operation.

But you can't know when you declare a file object what mode it will have, or 
even if it will get one. That's determined dynamically by the result 
of Open.

>> because it would require that the
>> type of a file object could change in response to operations.
>
> Never.

You've never explained how your supposedly static approach would work in the 
face of failing Opens, among other things. That's because it wouldn't work 
at all.

>>>> One thing we had to do is to make these things work is to allow 
>>>> existing
>>>> compilers to use runtime checks, with the hope that more will become
>>>> proved over time. Compiler technology isn't quite ready for mandatory 
>>>> proof
>>>> requirements (maybe next time).
>>>
>>> This is a conflation of its worst.
>>
>> Carried over from Ada 83. Ada will never be a language that requires much 
>> in
>> the way of proof, it would be way too disruptive to existing
>> implementations.
>
> I don't see how adding optional exception contracts should disrupt
> anything.

It wouldn't, and I hope we'll do that. But that doesn't have a darn thing to 
do with our discussion; I'm assuming such contracts (and global contracts) 
exist -- else absolutely nothing can be done. Everything I've been talking 
about for weeks is for Ada 202x, which certainly will have exception 
contracts if I have anything to do about it.

But you are insisting on more than that, and I think that's actively harmful 
to programming in general. You seem to insist on access to bodies even with 
such contracts, but that would prevent all compile-time checking -- the only 
kind that would benefit most programmers. I want each subprogram to be 
checked in isolation (subject to its contracts), because in that case there is 
no coupling between calls and bodies.

                                     Randy.


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-14 15:57                     ` Georg Bauhaus
@ 2013-07-16  0:16                       ` Randy Brukardt
  2013-07-17  1:30                         ` Shark8
  0 siblings, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-16  0:16 UTC (permalink / raw)


"Georg Bauhaus" <rm-host.bauhaus@maps.arcor.de> wrote in message 
news:984821833395507830.672948rm-host.bauhaus-maps.arcor.de@news.arcor.de...
> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:
>
>> I don't see how adding optional exception contracts should disrupt
>> anything.
>
> Aren't exceptions already part of contracts? Implied, though.

Yes, but only for inbound contracts (preconditions and predicates). We need 
them for outbound contracts as well, without having to guess. And I agree 
with Dmitry that those need to be compile-time checked in the subprogram 
bodies. (Exactly how that's done has to be worked out, of course).

procedure Op (x : in out T)
  with Post => Foo(x),
         Exception => Constraint_Error when not Foo(x);

or something like that.

                            Randy.








^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-16  0:13                     ` Randy Brukardt
@ 2013-07-16  8:37                       ` Dmitry A. Kazakov
  2013-07-16 22:13                         ` Randy Brukardt
  0 siblings, 1 reply; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-16  8:37 UTC (permalink / raw)


On Mon, 15 Jul 2013 19:13:24 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:1d4tuwlfdnz2j$.18l68s96x3vjj.dlg@40tude.net...
>> On Fri, 12 Jul 2013 16:16:36 -0500, Randy Brukardt wrote:
> ...
>>> You've got to prevent the exceptions at the source, and the main way you 
>>> do that for Constraint_Error is to use proper range constraints and null
>>> exclusions on parameters.
>>
>> No. Dynamic constraints cannot prevent anything. They are a part of
>> defining semantics so that violation could be handled elsewhere.
> 
> Sorry, you definitely can prove that dynamic constraints can't be violated.

Only in some cases. For proofs it must be all or nothing.

> Ada compilers do it all the time. The issue is letting the programmer have 
> access to that information.

This is upside down. The programmer should manifest his intention for a
constraint to hold and the compiler should verify that. The manifest must
be syntactically visible. Conflating such a declaration with dynamic
constraints is a bad idea.

>> Only static constraints can serve prevention. If you let Constraint_Error
>> propagation, you already gave up with static constraints and with
>> provability on the whole class of all instances.
> 
> If you don't want Constraint_Error to propagate (and generally one does 
> not), then you don't put that into the exception contract for the routine.

This would make the routine non-implementable, because, again, you cannot
do that on the callee side. It is a halting problem.

>> You still can have it provable on per instance basis and this is the
>> client's side.
> 
> No, that's making it a lot more complicated than it needs to be, and I don't 
> think it's possible in any useful generality.

SPARK does this already. Moreover, historically the concept of correctness
proofs was explicitly about instances. Only later, with the dawn of generic
programming, was it extended to contracts and behavior in general (e.g.
LSP). The latter is much more difficult and frequently impossible in the
hard mathematical sense (undecidable).

> In any case, going that route completely takes the compiler out of the 
> equation. Once you do that, only an extremely limited number of users would 
> care, and there will be no effect on normal programs.

I disagree. I trust that most Ada programmers will enjoy trying it in their
programs and gradually relaxing requirements on the proofs as they see what
can and what cannot be proved.

This process would be extremely useful for deeper understanding and
evaluation of how the given implementation works and how software
components interact with each other in the concrete case (AKA software
integration).

>>>>>> You can do this only at the client side. Which is why it cannot be
>>>>>> contracted for the callee. When you move from instances to universally
>>>>>> holding contracts you frequently lose provability.
>>>>>
>>>>> ...of things that aren't worth proving in the first place. You get rid 
>>>>> of constraint_errors from bodies by writing good preconditions, 
>>>>> predicates, constraints, and exclusions in the first place.
>>>>
>>>> Really? You can start writing preconditions for +,-,*,/,** defined in the
>>>> package Standard and continue to user defined types and subtypes. If
>>>> anything is totally impractical then the idea to move Constraint_Error 
>>>> from post to pre-condition. It is wrong and will never work for anything 
>>>> beyond trivial examples.
>>>
>>> Which is amazing because it works for most examples. (It doesn't work for
>>> overflow of "*" very well, but that's the only real issue.) And in any case,
>>> there is no point in moving it for language-defined operators -- both the
>>> compiler and any provers know exactly what they do. It's only for
>>> user-defined subprograms that you need to do so.
>>
>> How are you going to do that for user-defined subprograms using
>> language-defined operations, which are all user-defined subprograms?
> 
> Why is there a problem? Every Ada compiler (and pretty much every other 
> compiler as well) does it already.

How do you prove that this does not raise Constraint_Error?

   function Geometric_Mean (A, B : Float) return Float is
   begin
       return sqrt (A * B);
   end Geometric_Mean;

> The only issue is that programmers can't 
> harness it for their own purposes. I would hope that exception contracts 
> would be defined in order to do that.

Yes. But they will only help if used on the client side and when bodies are
looked into.

1. You will not prove that sqrt never raises Constraint_Error. You will
prove that in *your* calls to sqrt it never does.

2. In a great number of cases you won't be able to contract *when*
Constraint_Error is to be raised. Consider this:

   function Sum (A : Float_Array) return Float;

You cannot spell the contract when Sum must raise Constraint_Error and when
not. Such a contract would be much more complex (=> error prone) than the
implementation itself. This is why when the caller of Sum contracts itself
not to raise Constraint_Error, you will have to look into the body of Sum
in order to prove that. Otherwise, you will end up with Java's exception
contracts mess and brainless catch-all handlers.
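
For illustration only -- a sketch, with the array type and a straightforward
body assumed -- the body is shorter than any contract that would spell out
exactly which inputs make the accumulation overflow:

   type Float_Array is array (Positive range <>) of Float;

   function Sum (A : Float_Array) return Float is
      Result : Float := 0.0;
   begin
      for I in A'Range loop
         -- May raise Constraint_Error on overflow (if
         -- Float'Machine_Overflows is True); whether it does depends
         -- on every partial sum, not on any one parameter value.
         Result := Result + A (I);
      end loop;
      return Result;
   end Sum;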

>>> Huh? This makes no sense for this problem. You can't do that in any 
>>> version of Ada (or any other sensible language),
>>
>> Of course I can. It is MI. Even in Ada it is partially possible with
>> interfaces, just a bit more clumsy than it should be. It is trivial to
>> statically ensure that Write will never be called on Read_File_Type. It 
>> has no such operation.
> 
> But you can't know when you declare a file object what mode it will have, or 
> even if it will get one. That's determined dynamically by the result 
> of Open.

Nope. Open must be a primitive operation of the file type. The read-only
file's Open /= the write-only file's Open/Create.
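
A rough sketch of what I mean (the type and operation names are mine, not
Ada.Text_IO):

   package Typed_Files is
      type Read_File  is limited private;
      type Write_File is limited private;

      function Open   (Name : String) return Read_File;   -- may fail, e.g. Name_Error
      function Create (Name : String) return Write_File;  -- may fail, e.g. Use_Error

      procedure Read  (File : in out Read_File;  Item : out Character);
      procedure Write (File : in out Write_File; Item : Character);
      --  No Write exists for Read_File, so "writing a read-only file"
      --  is rejected at compile time instead of raising Mode_Error.
   private
      --  Handle below is just a placeholder for a real OS descriptor.
      type Read_File  is limited record Handle : Integer := -1; end record;
      type Write_File is limited record Handle : Integer := -1; end record;
   end Typed_Files;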

>>> because it would require that the
>>> type of a file object could change in response to operations.
>>
>> Never.
> 
> You've never explained how your supposedly static approach would work in the 
> face of failing Opens, among other things.

It is supposed to work with Read and Write.

Maybe I didn't understand your point. I didn't say that Open should not
raise Status_Error. I said that Read should not.

Opening a read-only file for writing is not a bug, it is an error: a legitimate
state of the program which must be handled. Writing to a file opened as
read-only is a bug. Bug /= error. Errors are handled, bugs are fixed.

> But you are insisting on more than that, and I think that's actively harmful 
> to programming in general. You seem to insist on access to bodies even with 
> such contracts, but that would prevent all compile-time checking -- the only 
> kind that would benefit most programmers.

I don't see how allowing correctness checks involving the bodies should
prevent correctness checks involving the specifications.

> I want each subprogram to be 
> checked in isolation (subject its contracts), because in that case there is 
> no coupling between calls and bodies.

This will be too weak to be useful. It was attempted long ago by LSP.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-16  8:37                       ` Dmitry A. Kazakov
@ 2013-07-16 22:13                         ` Randy Brukardt
  2013-07-17  7:59                           ` Dmitry A. Kazakov
  0 siblings, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-16 22:13 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1rjye7dyn3vgk.66fbcca9y4zd.dlg@40tude.net...
> On Mon, 15 Jul 2013 19:13:24 -0500, Randy Brukardt wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>> news:1d4tuwlfdnz2j$.18l68s96x3vjj.dlg@40tude.net...
>>> On Fri, 12 Jul 2013 16:16:36 -0500, Randy Brukardt wrote:
>> ...
>>>> You've got to prevent the exceptions at the source, and the main way 
>>>> you
>>>> do that for Constraint_Error is to use proper range constraints and 
>>>> null
>>>> exclusions on parameters.
>>>
>>> No. Dynamic constraints cannot prevent anything. They are a part of
>>> defining semantics so that violation could be handled elsewhere.
>>
>> Sorry, you definitely can prove that dynamic constraints can't be 
>> violated.
>
> Only in some cases. For proofs it must be all or nothing.
>
>> Ada compilers do it all the time. The issue is letting the programmer 
>> have
>> access to that information.
>
> This is upside down. The programmer should manifest his intention for a
> constraint to hold and the compiler should verify that. The manifest must
> be syntactically visible. Conflating such a declaration with dynamic
> constraints is a bad idea.

Yes, that's precisely what I'm talking about - some sort of exception 
contract. But those don't depend on the kind of constraint.

>>> Only static constraints can serve prevention. If you let 
>>> Constraint_Error
>>> propagation, you already gave up with static constraints and with
>>> provability on the whole class of all instances.
>>
>> If you don't want Constraint_Error to propagate (and generally one does
>> not), then you don't put that into the exception contract for the 
>> routine.
>
> This would make the routine non-implementable, because, again, you cannot
> do that on the callee side. It is a halting problem.

As you showed a couple of days ago, it's trivial on the callee side (just 
stick in a handler). The hope is to use proof techniques to reduce the cases 
where that's required, but I see no important reason to worry about 
eliminating that.

>>> You still can have it provable on per instance basis and this is the
>>> client's side.
>>
>> No, that's making it a lot more complicated than it needs to be, and I don't
>> think it's possible in any useful generality.
>
> SPARK does this already.

Exactly my point! :-)

> Moreover historically the concept of correctness
> proofs was explicitly about instances. Only later, with the dawn of 
> generic
> programming, it was expanded onto contracts and behavior in general (e.g.
> LSP). The latter is much more difficult and frequently impossible in hard
> mathematical sense (undecidable).

Certainly, there is lots of stuff that's neither worth proving nor 
provable. So what? We've lived for decades without general proof and there 
is absolutely no evidence that general proof would do anything to improve 
the quality and timeliness of software. (Software that takes longer to 
develop isn't viable no matter how much better its quality actually is.)

>> In any case, going that route completely takes the compiler out of the
>> equation. Once you do that, only an extremely limited number of users 
>> would
>> care, and there will be no effect on normal programs.
>
> I disagree. I trust that most Ada programmers will enjoy trying it in 
> their
> program and gradually relaxing requirements on the proofs as they see what
> can and what cannot be proved.

No one is going to run extra tools beyond compilation; they simply don't 
have the time. The "win" with Ada is that the extra correctness brought by 
the language comes for "free", it's an integrated part of the development 
process. (Your program cannot be run if it doesn't compile.) Extra tools 
never will have that level of force behind them, and people like me would never 
run them.

...
>> Why is there a problem? Every Ada compiler (and pretty much every other
>> compiler as well) does it already.
>
> How do you prove that this does not raise Constraint_Error
>
>   function Geometric_Mean (A, B : Float) return Float is
>   begin
>       return sqrt (A * B);
>   end Geometric_Mean;

You can't, because it does. :-) Your contract is not strong enough, which is 
not surprising: you have unbounded types here -- you can prove nothing about 
such types.

But that's OK, there is no point in proving everything about everything. 
Moreover, if you made this an expression function (meaning that you are 
merely giving a name to "sqrt (A*B)"), you could plug it in without any 
effect on your proof. Enjoy.

>> The only issue is that programmers can't
>> harness it for their own purposes. I would hope that exception contracts
>> would be defined in order to do that.
>
> Yes. But they will only help if used on the client side and when bodies 
> are
> looked into.

You might be right, but in that case we're wasting our time even thinking 
about proof, because any technology that requires programmer-visible access 
to bodies is evil. ("Evil" here being a shorthand for "has a bunch of bad 
properties", which I've previously outlined.) And you may say that many Ada 
compilers have such features, and I repeat that pragma Inline and 
macro-expanded generics are evil, if they have any programmer-visible 
effect. Those sorts of things are only acceptable if they are completely 
semantically invisible, and that is rarely true.



> 1. You will not prove that sqrt never raises Constraint_Error. You will prove
> that in *your* calls to sqrt it never does.

You don't have to. A properly written Sqrt has a precondition, and that 
precondition raises Constraint_Error when it fails. Sqrt itself never raises 
any exception. You then prove at the call site that the precondition is met.
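
In Ada 2012 terms, that might look something like this (a sketch, not the
actual Ada.Numerics specification):

   function Sqrt (X : Float) return Float
      with Pre => X >= 0.0 or else raise Constraint_Error;
   --  The body can then assume X >= 0.0; each caller shows at its call
   --  site that the argument is non-negative, and no handler is needed.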

> 2. In a great number of cases you won't be able to contract *when*
> Constraint_Error is to be raised. Consider this:
>
>   function Sum (A : Float_Array) return Float;
>
> You cannot spell the contract when Sum must raise Constraint_Error and 
> when
> not. Such a contract would be much more complex (=> error prone) than the
> implementation itself. This is why when the caller of Sum contracts itself
> not to raise Constraint_Error, you will have to look into the body of Sum
> in order to prove that. Otherwise, you will end up with Java's exception
> contracts mess and brainless catch-all handlers.

Yup, if you think that *everything* can be contracted and should be 
contracted, the only possible result is Java's mess. That's perfectly clear, 
because I certainly agree with you that you can't formally explain why 
exceptions are raised in many cases. And that's actually OK, because the point is 
to get rid of the exceptions (with smart use of constraints, predicates, and 
preconditions). If you can't do that, there's really nothing to prove and 
nothing to be gained from the contracts.

...
>> You've never explained how your supposedly static approach would work in 
>> the
>> face of failing Opens, among other things.
>
> It is supposed to work with Read and Write.
>
> Maybe I didn't understand your point. I didn't say that Open should not
> raise Status_Error. I said that Read should not.

I agree with that, that's the precondition's job.

> Opening a read-only file for writing is not a bug, it is an error: a legitimate
> state of the program which must be handled. Writing to a file opened as
> read-only is a bug. Bug /= error. Errors are handled, bugs are fixed.

You claim that you can use the file type to eliminate Status_Error and 
Mode_Error. But that's impossible, because an Open that fails leaves the 
file unopened; and you have to be able to fall back and open a different 
file (or create one instead) on such a condition. It's not practical to have 
a single Open operation as an initializer -- so your scheme means that the 
file type has to change dynamically during its lifetime. That's the purpose 
of a subtype, not of a type (in Ada terms, at least).

>> But you are insisting on more than that, and I think that's actively 
>> harmful
>> to programming in general. You seem to insist on access to bodies even 
>> with
>> such contracts, but that would prevent all compile-time checking -- the 
>> only
>> kind that would benefit most programmers.
>
> I don't see how allowing correctness checks involving the bodies should
> prevent correctness checks involving the specifications.

Because you can't have "correctness checks" involving bodies that don't 
exist. And having them depend on bodies is "evil", because it means that a 
change to a body can make a call in someone else's code fail. That's 
completely unacceptable - it would be impossible to write any reusable code 
in such an environment.

>> I want each subprogram to be
>> checked in isolation (subject its contracts), because in that case there 
>> is
>> no coupling between calls and bodies.
>
> This will be too weak to be useful. It was attempted long ago by LSP.

LSP is a dynamic thingy, completely uncheckable at compile-time.

And you may be right, in which case any sort of proof is worthless, because 
violating encapsulation instantly causes all of the bad effects that Ada 
has been designed to avoid -- changing an unrelated thing (and every body 
is unrelated in Ada terms) never makes someone else's code illegal.

                                Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-16  0:16                       ` Randy Brukardt
@ 2013-07-17  1:30                         ` Shark8
  2013-07-17 23:08                           ` Randy Brukardt
  0 siblings, 1 reply; 68+ messages in thread
From: Shark8 @ 2013-07-17  1:30 UTC (permalink / raw)


On Monday, July 15, 2013 6:16:56 PM UTC-6, Randy Brukardt wrote:
> "Georg Bauhaus" <rm-host.bauhaus@maps.arcor.de> wrote in message 
> news:984821833395507830.672948rm-host.bauhaus-maps.arcor.de@news.arcor.de...
> > "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:
> >
> >> I don't see how adding optional exception contracts should disrupt
> >> anything.
> >
> > Aren't exceptions already part of contracts? Implied, though.
> 
> Yes, but only for inbound contracts (preconditions and predicates). We need 
> them for outbound contracts as well, without having to guess. And I agree 
> with Dmitry that those need to be compile-time checked in the subprogram 
> bodies. (Exactly how that's done has to be worked out, of course).
> 
> procedure Op (x : in out T)
>   with Post => Foo(x),
>          Exception => Constraint_Error when not Foo(x);
> or something like that.
> 
>                             Randy.

I think you and I talked about that idea (basically sets of exceptions) and I like it a lot, though not the particular syntax you were using, which is OK. I really like the idea you have for renaming the violation exception; in retrospect, that would have been very desirable for constraint-aspect violations (pre-/post-conditions, particularly).

The one drawback of your proposal is that it makes you reiterate the condition, but with a negation; this could be a source of bugs. Even though it's a little less straightforward, the following would circumvent that problem:

 procedure Op (x : in out T)
   with Post => (Constraint_Error at [not?] Foo(x));

which would reuse the keyword(s) AT [and NOT] in addition to preventing the unnecessary duplication. {The use of parentheses could also be mandated; that way the clauses [which are different constitutions] could be forced to be distinct when they are to raise differing exceptions on violation.}


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-16 22:13                         ` Randy Brukardt
@ 2013-07-17  7:59                           ` Dmitry A. Kazakov
  2013-07-17 23:26                             ` Randy Brukardt
  0 siblings, 1 reply; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-17  7:59 UTC (permalink / raw)


On Tue, 16 Jul 2013 17:13:43 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:1rjye7dyn3vgk.66fbcca9y4zd.dlg@40tude.net...
>> On Mon, 15 Jul 2013 19:13:24 -0500, Randy Brukardt wrote:

>>> Ada compilers do it all the time. The issue is letting the programmer 
>>> have access to that information.
>>
>> This is upside down. The programmer should manifest his intention for a
>> constraint to hold and the compiler should verify that. The manifest must
>> be syntactically visible. Conflating such a declaration with dynamic
>> constraints is a bad idea.
> 
> Yes, that's precisely what I'm talking about - some sort of exception 
> contract. But those don't depend on the kind of constraint.

But they do; an exception contract is a static constraint.

What matters is not what is constrained, but how: dynamically (=> code) vs.
statically (=> proof).
 
>>> In any case, going that route completely takes the compiler out of the
>>> equation. Once you do that, only an extremely limited number of users 
>>> would care, and there will be no effect on normal programs.
>>
>> I disagree. I trust that most Ada programmers will enjoy trying it in their
>> program and gradually relaxing requirements on the proofs as they see what
>> can and what cannot be proved.
> 
> No one is going to run extra tools beyond compilation; they simply don't 
> have the time. The "win" with Ada is that the extra correctness brought by 
> the language comes for "free",

They aren't free, as they rely on a careful choice of types. This is indeed
frustrating for most programmers with a C, "everything is int" background.
But Ada programmers are different, otherwise they wouldn't be Ada programmers.

> it's an integrated part of the development 
> process. (Your program cannot be run if it doesn't compile.) Extra tools 
> never will have that level of force behind them, and people like me would never 
> run them.

I always was for SPARK being integrated into Ada.

>>> Why is there a problem? Every Ada compiler (and pretty much every other
>>> compiler as well) does it already.
>>
>> How do you prove that this does not raise Constraint_Error
>>
>>   function Geometric_Mean (A, B : Float) return Float is
>>   begin
>>       return sqrt (A * B);
>>   end Geometric_Mean;
> 
> You can't, because it does. :-) Your contract is not strong enough, which is 
> not surprising: you have unbounded types here -- you can prove nothing about 
> such types.
> 
> But that's OK, there is no point in proving everything about everything. 
> Moreover, if you made this an expression function (meaning that you are 
> merely giving a name to "sqrt (A*B)", you could plug it in without any 
> effect on your proof. Enjoy.

But you could prove that with concrete ranges of A and B at the client
side. Which is the whole point. You want to discard a whole class of
proofs, especially the ones crucial for exception contracts. How are you
going to prove the caller's contract not to raise? You could not without
nonsensical handlers. It would be like the idiotic "function must have a
return statement" rule.
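
For example (a sketch; the concrete range is one I made up for the sake of
the argument):

   package Client_Side is
      subtype Sensor_Value is Float range 0.0 .. 1.0E6;
      function Geometric_Mean (A, B : Sensor_Value) return Float;
   end Client_Side;

   with Ada.Numerics.Elementary_Functions;
   package body Client_Side is
      function Geometric_Mean (A, B : Sensor_Value) return Float is
      begin
         --  A * B is at most 1.0E12, far below Float'Last, and never
         --  negative, so neither "*" nor Sqrt can raise here.
         return Ada.Numerics.Elementary_Functions.Sqrt (A * B);
      end Geometric_Mean;
   end Client_Side;

With concrete ranges like these the proof is trivial; without them it is
not possible at all.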

>>> The only issue is that programmers can't
>>> harness it for their own purposes. I would hope that exception contracts
>>> would be defined in order to do that.
>>
>> Yes. But they will only help if used on the client side and when bodies are
>> looked into.
> 
> You might be right, but in that case we're wasting our time even thinking 
> about proof, because any technology that requires programmer-visible access 
> to bodies is evil. ("Evil" here being a shorthand for "has a bunch of bad 
> properties", which I've previously outlined.)

Yes of course. Just like Churchill's remark about democracy, it is evil,
but anything else is much worse.

Correctness proof has a nature similar to optimization: most powerful on
concrete instances and close to damaging in general.

>> 1. You will not prove that sqrt never raises Constraint_Error. You will prove
>> that in *your* calls to sqrt it never does.
> 
> You don't have to. A properly written Sqrt has a precondition, and that 
> precondition raises Constraint_Error when it fails. Sqrt itself never raises 
> any exception.

In most cases the precondition cannot be spelled or is more complex than
the implementation itself. It is just a wrong (and not working) way to do
this.

>> Opening a read-only file for writing is not a bug, it is an error: a legitimate
>> state of the program which must be handled. Writing to a file opened as
>> read-only is a bug. Bug /= error. Errors are handled, bugs are fixed.
> 
> You claim that you can use the file type to eliminate Status_Error and 
> Mode_Error. But that's impossible, because an Open that fails leaves the 
> file unopened;

This is another issue, i.e. construction and initialization. You should not
be able to create unopened instances of file type.

> and you have to be able to fall back and open a different 
> file (or create one instead) on such a condition. It's not practical to have 
> a single Open operation as an initializer

Why? I certainly want it exactly this way.

>> I don't see how allowing correctness checks involving the bodies should
>> prevent correctness checks involving the specifications.
> 
> Because you can't have "correctness checks" involving bodies that don't 
> exist.

Yes, it certainly will require some consideration of how to deal with
opaque libraries. The model is already a bit leaky in GNAT, which requires
*.ali files. Maybe checks involving the bodies should rather be performed
by the binder, with the information required for that stored in the *.ali
files or equivalent.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-17  1:30                         ` Shark8
@ 2013-07-17 23:08                           ` Randy Brukardt
  2013-07-18  7:19                             ` Dmitry A. Kazakov
  0 siblings, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-17 23:08 UTC (permalink / raw)


"Shark8" <onewingedshark@gmail.com> wrote in message 
news:e7de5753-5819-41a4-a40f-3d80c101bf03@googlegroups.com...
...
>The one drawback from your proposal is that it makes you reiterate the 
>condition,
>but with a negation, this could be a source of bugs. Even though it's a 
>little less
>straightforward the following would circumvent that problem:

> procedure Op (x : in out T)
>   with Post => (Constraint_Error at [not?] Foo(x));

I didn't mean this to be a real example. An example like the one given 
probably would have been better written as a precondition (depending on 
exactly what Foo does):

  procedure Op (x : in out T)
     with Pre => Foo(x) or else raise Constraint_Error,
            Post => Foo(x);

which is just normal Ada 2012 code (using the raise expression, which we 
added 'after the fact' to support this usage).

I don't think most exception contracts would have an associated 
postcondition (that is, cause); what would the postcondition be for 
Use_Error, for instance? It's just an optional help in a few circumstances.

                                                Randy.


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-17  7:59                           ` Dmitry A. Kazakov
@ 2013-07-17 23:26                             ` Randy Brukardt
  2013-07-18  7:37                               ` Dmitry A. Kazakov
  0 siblings, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-17 23:26 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1awbjaens2vca$.1asmpnthmmycl.dlg@40tude.net...
> On Tue, 16 Jul 2013 17:13:43 -0500, Randy Brukardt wrote:
>
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
>> news:1rjye7dyn3vgk.66fbcca9y4zd.dlg@40tude.net...
>>> On Mon, 15 Jul 2013 19:13:24 -0500, Randy Brukardt wrote:
>
>>>> Ada compilers do it all the time. The issue is letting the programmer
>>>> have access to that information.
>>>
>>> This is upside down. The programmer should manifest his intention for a
>>> constraint to hold and the compiler should verify that. The manifest 
>>> must
>>> be syntactically visible. Conflating such a declaration with dynamic
>>> constraints is a bad idea.
>>
>> Yes, that's precisely what I'm talking about - some sort of exception
>> contract. But those don't depend on the kind of constraint.
>
> But they do; an exception contract is a static constraint.
>
> What matters is not what is constrained, but how: dynamically (=> code) vs.
> statically (=> proof).

Ah, you've redefined Ada's terms again. "Static" is defined in RM 4.9, and 
that's how I'm using it. "Proof", which is not an Ada concept, is not 
relevant to its description.

By your twisted definition, everything is static (including dynamic 
constraints!) if sufficient information is known at compile-time to reason 
about them.

It would be nice to have a discussion where you didn't change the 
terminology from that accepted by Ada (this is, after all, an Ada group).

...
>> it's an integrated part of the development
>> process. (Your program cannot be run if it doesn't compile.) Extra tools
>> never will have that level force behind them, and people like me would 
>> never
>> run them.
>
> I always was for SPARK being integrated into Ada.

Nothing that has body dependencies can be integrated into Ada.

...
>> But that's OK, there is no point in proving everything about everything.
>> Moreover, if you made this an expression function (meaning that you are
>> merely giving a name to "sqrt (A*B)", you could plug it in without any
>> effect on your proof. Enjoy.
>
> But you could prove that with concrete ranges of A and B at the client
> side. Which is the whole point. You want to discard a whole class of
> proofs.

Absolutely. The sorts of proofs you want would absolutely kill program 
maintenance, making it impossible to have the sort of reusable libraries 
that Ada makes so easy. The contract represented by the interface can be the 
*only* thing that clients depend upon. If they depend on anything else, 
changes to the body will break otherwise correct calls, and that's 
unacceptable. This property is absolutely required to build large programs 
with large and distributed teams -- otherwise, "proof" remains solely the 
province of small, toy-sized programs.

> Especially the ones crucial for exception contracts. How are you
> going to prove the caller's contract not to raise? You could not without
> nonsensical handlers. It would be like the idiotic "function must have a
> return statement" rule.

You can only provide exception contracts if you have strong preconditions 
and constraints on the parameters. In the absence of that, you shouldn't be 
*allowed* to prove anything, because in doing so you force a strong coupling 
to the contents of the body.

...
>>> Opening a read-only file for writing is not a bug, it is an error: a 
>>> legitimate
>>> state of the program which must be handled. Writing to a file opened as
>>> read-only is a bug. Bug /= error. Errors are handled, bugs are fixed.
>>
>> You claim that you can use the file type to eliminate Status_Error and
>> Mode_Error. But that's impossible, because an Open that fails leaves the
>> file unopened;
>
> This is another issue, i.e. construction and initialization. You should 
> not
> be able to create unopened instances of file type.

Nice theory, impossible to do in practice. Next suggestion...

>> and you have to be able to fall back and open a different
>> file (or create one instead) on such a condition. It's not practical to 
>> have
>> a single Open operation as an initializer
>
> Why? I certainly want it exactly this way.

It's impractical to have an Open that somehow encodes every possible 
sequence of Open-fall back-Open-Create that might come up. In the absence of 
that, you have to resort to complex program structures (it's not possible to 
declare objects with the right scope easily) or worse, lots of allocated 
objects. (A major cause of "allocator mania" in certain other languages is 
that they don't have sufficiently powerful ways to create stack objects.)

In any case, we're talking about Ada here, and the form of Ada's I/O is 
fixed. The question is what proof techniques we can bring to these existing 
libraries, not how they could be designed in some hypothetical and 
unimplementable language.

                                         Randy.


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-17 23:08                           ` Randy Brukardt
@ 2013-07-18  7:19                             ` Dmitry A. Kazakov
  2013-07-19  4:36                               ` Randy Brukardt
  0 siblings, 1 reply; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-18  7:19 UTC (permalink / raw)


On Wed, 17 Jul 2013 18:08:17 -0500, Randy Brukardt wrote:

> "Shark8" <onewingedshark@gmail.com> wrote in message 
> news:e7de5753-5819-41a4-a40f-3d80c101bf03@googlegroups.com...
> ...
>>The one drawback from your proposal is that it makes you reiterate the 
>>condition, but with a negation, this could be a source of bugs. Even though it's a 
>>little less straightforward the following would circumvent that problem:
> 
>> procedure Op (x : in out T)
>>   with Post => (Constraint_Error at [not?] Foo(x));
> 
> I didn't mean this to be a real example. An example like the one given 
> probably would have been better written as a precondition (depending on 
> exactly what Foo does):
> 
>   procedure Op (x : in out T)
>      with Pre => Foo(x) or else raise Constraint_Error,
>             Post => Foo(x);
> 
> which is just normal Ada 2012 code (using the raise expression, which we 
> added 'after the fact' to support this usage).

Yep, and it also highlights the inconvenient truth that the "precondition" is
not a precondition but rather a chunk of executable code pushed in front of
the body.

Why not honestly call it a Prologue?

> I don't think most exception contracts would have an associated 
> postcondition (that is, cause); what would the postcondition be for 
> Use_Error, for instance?

If a proper post-condition is meant, then:

   ensure Foo (X) or Raised (Constraint_Error); -- Weak contract

The issue you mentioned is a strong exception contract, telling when the
exception has to propagate and when not. That would be:

   ensure Foo (X) xor Raised (Constraint_Error); -- Explicit contract

If not Foo (X) then exception.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-17 23:26                             ` Randy Brukardt
@ 2013-07-18  7:37                               ` Dmitry A. Kazakov
  2013-07-19  4:32                                 ` Randy Brukardt
  0 siblings, 1 reply; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-18  7:37 UTC (permalink / raw)


On Wed, 17 Jul 2013 18:26:31 -0500, Randy Brukardt wrote:

> By your twisted definition, everything is static (including dynamic 
> constraints!) if sufficient information is known at compile-time to reason 
> about them.

Yes.
 
> Absolutely. The sorts of proofs you want would absolutely kill program 
> maintenance, making it impossible to have the sort of reusable libraries 
> that Ada makes so easy.

Maybe. But remember that other proofs will be useless beyond toy cases.

>> Especially the ones crucial for exception contracts. How are you
>> going to prove the caller's contract not to raise? You could not without
>> nonsensical handlers. It would be like the idiotic "function must have a
>> return statement" rule.
> 
> You can only provide exception contracts if you have strong preconditions 
> and constraints on the parameters. In the absence of that, you shouldn't be 
> *allowed* to prove anything, because in doing so you force a strong coupling 
> to the contents of the body.

As I said. Useless. 

>>>> Opening a read-only file for writing is not a bug, it is an error: a legitimate
>>>> state of the program which must be handled. Writing to a file opened as
>>>> read-only is a bug. Bug /= error. Errors are handled, bugs are fixed.
>>>
>>> You claim that you can use the file type to eliminate Status_Error and
>>> Mode_Error. But that's impossible, because an Open that fails leaves the
>>> file unopened;
>>
>> This is another issue, i.e. construction and initialization. You should 
>> not be able to create unopened instances of file type.
> 
> Nice theory, impossible to do in practice.

Even if as "practice" you mean broken limited return, it is still quite
possible.

>>> and you have to be able to fall back and open a different
>>> file (or create one instead) on such a condition. It's not practical to 
>>> have a single Open operation as an initializer
>>
>> Why? I certainly want it exactly this way.
> 
> It's impractical to have an Open that somehow encodes every possible 
> sequence of Open-fall back-Open-Create that might come up. In the absence of 
> that, you have to resort to complex program structures (it's not possible to 
> declare objects with the right scope easily) or worse,

It won't be any more complex than it is when handling exceptions
propagating out of Open. In fact it will be much simpler because you forgot
the issue of closing the file on subsequent errors.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-18  7:37                               ` Dmitry A. Kazakov
@ 2013-07-19  4:32                                 ` Randy Brukardt
  2013-07-19  7:16                                   ` Dmitry A. Kazakov
  0 siblings, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-19  4:32 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:17i525dc26pat$.1octlzd41c8zm.dlg@40tude.net...
> On Wed, 17 Jul 2013 18:26:31 -0500, Randy Brukardt wrote:
...
>>> Especially the ones crucial for exception contracts. How are you
>>> going to prove the caller's contract not to raise? You could not without
>>> nonsensical handlers. It would be like the idiotic "function must have a
>>> return statement" rule.
>>
>> You can only provide exception contracts if you have strong preconditions
>> and constraints on the parameters. In the absence of that, you shouldn't 
>> be
>> *allowed* to prove anything, because in doing so you force a strong 
>> coupling
>> to the contents of the body.
>
> As I said. Useless.

Then proof is beyond useless, it's downright dangerous. If you're right, no 
one should even think about it.

...
>> It's impractical to have an Open that somehow encodes every possible
>> sequence of Open-fall back-Open-Create that might come up. In the absence 
>> of
>> that, you have to resort to complex program structures (it's not possible 
>> to
>> declare objects with the right scope easily) or worse,
>
> It won't be any more complex than it is when handling exceptions
> propagating out of Open. In fact it will be much simpler because you 
> forgot
> the issue of closing the file on subsequent errors.

I've never had the slightest "issue" with closing the file at the end. And 
certainly, there aren't "subsequent errors", because either the file gets 
opened eventually, or the whole routine is aborted (usually, the entire 
program is halted at that point, since failure to be able to write results --  
no one needs to calculate results if you can't save them).

                             Randy.


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-18  7:19                             ` Dmitry A. Kazakov
@ 2013-07-19  4:36                               ` Randy Brukardt
  0 siblings, 0 replies; 68+ messages in thread
From: Randy Brukardt @ 2013-07-19  4:36 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:15q73x8g743d.solfjve1fywf$.dlg@40tude.net...
> On Wed, 17 Jul 2013 18:08:17 -0500, Randy Brukardt wrote:
...
>> I didn't mean this to be a real example. An example like the one given
>> probably would have been better written as a precondition (depending on
>> exactly what Foo does):
>>
>>   procedure Op (x : in out T)
>>      with Pre => Foo(x) or else raise Constraint_Error,
>>             Post => Foo(x);
>>
>> which is just normal Ada 2012 code (using the raise expression, which we
>> added 'after the fact' to support this usage).
>
> Yep, and also highlights the inconvenient truth, that the "precondition" 
> is
> not a precondition but rather a chunk of executable code pushed in front 
> of
> the body.

Well, it's special in the sense that the compiler will try hard to eliminate 
it, and generally, I think a compiler should generate a warning if it 
*can't* remove the precondition. (That might be hard to accomplish 
initially, mainly  because the removal code happens after the stage where 
warnings can easily be given.) In that sense, it's executable in name only.

> Why not honestly call it a Prologue?

I presume people wanted to match common usage (as in Eiffel, for instance).

                       Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-19  4:32                                 ` Randy Brukardt
@ 2013-07-19  7:16                                   ` Dmitry A. Kazakov
  2013-07-20  5:32                                     ` Randy Brukardt
  0 siblings, 1 reply; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-19  7:16 UTC (permalink / raw)


On Thu, 18 Jul 2013 23:32:57 -0500, Randy Brukardt wrote:

>> It won't be any more complex than it is when handling exceptions
>> propagating out of Open. In fact it will be much simpler because you 
>> forgot the issue of closing the file on subsequent errors.
> 
> I've never had the slightest "issue" with closing the file at the end. And 
> certainly, there aren't "subsequent errors", because either the file gets 
> opened eventually, or the whole routine is aborted (usually, the entire 
> program is halted at the point, since failure to be able to write results --  
> no one needs to calculate results if you can't save them).

I meant I/O errors and more elaborate cases like sockets. If you open a
socket you should bind it and set options, which may fail as well. Code
that does not use controlled wrappers looks very ugly.

It is *always* a much better design when the object maintains a state
invariant (e.g. file is open). Provability of such code is a nice
byproduct.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-19  7:16                                   ` Dmitry A. Kazakov
@ 2013-07-20  5:32                                     ` Randy Brukardt
  2013-07-20  9:06                                       ` Dmitry A. Kazakov
  0 siblings, 1 reply; 68+ messages in thread
From: Randy Brukardt @ 2013-07-20  5:32 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:w30s4q4tdrue$.1q2ek7n0im2ba$.dlg@40tude.net...
> On Thu, 18 Jul 2013 23:32:57 -0500, Randy Brukardt wrote:
>
>>> It won't be any more complex than it is when handling exceptions
>>> propagating out of Open. In fact it will be much simpler because you
>>> forgot the issue of closing the file on subsequent errors.
>>
>> I've never had the slightest "issue" with closing the file at the end. 
>> And
>> certainly, there aren't "subsequent errors", because either the file gets
>> opened eventually, or the whole routine is aborted (usually, the entire
>> program is halted at the point, since failure to be able to write 
>> results --
>> no one needs to calculate results if you can't save them).
>
> I meant I/O errors and more elaborated cases like sockets. If you open a
> socket file you should bind it, set options which may fail as well. Code
> that does not use controlled wrappers look very ugly.
>
> It is *always* a much better design when the object maintains a state
> invariant (e.g. file is open). Provability of such code is a nice
> byproduct.

But isn't that impossible, practically? If the USB drive is unplugged, it's 
impossible to get to the file. Same with sockets. If you mean that it's 
required to propagate an exception out of scope so that it's impossible to 
access a file object that's not open, that's going to make a lot of trouble 
because of overlapping file lifetimes. (Imagine copying data from one file 
to another with a transformation: if one of the files has an I/O error, you 
can no longer clean up the other one; you might be able to use nesting to 
get it to work one way, but not either way.)

In any case, this seems like solving a non-problem, as closing files and the 
like is trivially handled by finalization. (I don't much care about people 
who refuse to use the tools they're given.)
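
For instance, a sketch (the wrapper type is made up; the finalization
mechanism itself is just standard Ada):

   with Ada.Finalization;
   with Ada.Text_IO;

   package Auto_Close is
      type Managed_File is new Ada.Finalization.Limited_Controlled with record
         File : Ada.Text_IO.File_Type;
      end record;
      overriding procedure Finalize (F : in out Managed_File);
   end Auto_Close;

   package body Auto_Close is
      overriding procedure Finalize (F : in out Managed_File) is
      begin
         --  Runs whenever the object goes out of scope, including when
         --  the scope is left because an exception propagates.
         if Ada.Text_IO.Is_Open (F.File) then
            Ada.Text_IO.Close (F.File);
         end if;
      end Finalize;
   end Auto_Close;

Declare a Managed_File, Open its File component as usual, and the Close
happens no matter how the enclosing routine is left.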

Anyway, I think the problem with your scheme is on the initialization end. 
Initialization (as you seemed to acknowledge above) is just too complex for 
many of these objects to do as part of the object declaration.

Indeed, we (briefly) tried to use a scheme like the one you are suggesting 
for the Claw packages. We immediately ran into problems with too many 
parameters (especially for discriminants). And there were further problems 
with the interrelationships between the objects; we would have had to 
extensively use access types, which would have greatly compromised the 
ease-of-use and especially the bulletproof-ness goals that we had for Claw.

So, I don't think it works outside of the simplest cases.

                                       Randy.




^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-20  5:32                                     ` Randy Brukardt
@ 2013-07-20  9:06                                       ` Dmitry A. Kazakov
  0 siblings, 0 replies; 68+ messages in thread
From: Dmitry A. Kazakov @ 2013-07-20  9:06 UTC (permalink / raw)


On Sat, 20 Jul 2013 00:32:44 -0500, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
> news:w30s4q4tdrue$.1q2ek7n0im2ba$.dlg@40tude.net...
>> On Thu, 18 Jul 2013 23:32:57 -0500, Randy Brukardt wrote:
>>
>>>> It won't be any more complex than it is when handling exceptions
>>>> propagating out of Open. In fact it will be much simpler because you
>>>> forgot the issue of closing the file on subsequent errors.
>>>
>>> I've never had the slightest "issue" with closing the file at the end. 
>>> And
>>> certainly, there aren't "subsequent errors", because either the file gets
>>> opened eventually, or the whole routine is aborted (usually, the entire
>>> program is halted at the point, since failure to be able to write 
>>> results --
>>> no one needs to calculate results if you can't save them).
>>
>> I meant I/O errors and more elaborated cases like sockets. If you open a
>> socket file you should bind it, set options which may fail as well. Code
>> that does not use controlled wrappers look very ugly.
>>
>> It is *always* a much better design when the object maintains a state
>> invariant (e.g. file is open). Provability of such code is a nice
>> byproduct.
> 
> But isn't that impossible, practically? If the USB drive is unplugged, it's 
> impossible to get to the file. Same with sockets. If you mean that it's 
> required to propagate an exception out of scope so that it's impossible to 
> access a file object that's not open, that's going to make a lot of trouble 
> because of overlapping file lifetimes.

I think it is your approach that leads to conflating the states of a file,
of its descriptor, of the Ada proxy to that descriptor, and of all OSI
levels from top to bottom into one big mess. Yes, if you do that, it will
never be possible to check anything.

My approach is to identify states which can be statically determined and
make them invariant. A file being open or a socket being created is such a
state. A non-opened file has no use. Similarly, a bound socket is an
invariant for another type, and a connected socket for yet another.

When a connection gets closed by the peer, pending I/O should propagate an
I/O error exception for the opened-bound-connected socket. This exception has
nothing to do with attempting to read from a non-existing file descriptor,
because the latter is, to me, a bug, which shall be prevented statically.
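
A sketch of the kind of typing I mean (the names are mine, this is not
GNAT.Sockets):

   package Typed_Sockets is
      type Bound_Socket     is limited private;
      type Connected_Socket is limited private;

      function Bind (Port : Natural) return Bound_Socket;
      --  may fail, e.g. if the port is already in use

      function Accept_Connection (Listener : in out Bound_Socket)
         return Connected_Socket;
      --  may fail with a communication error

      procedure Send (Socket : in out Connected_Socket; Data : String);
      --  An I/O error on a live connection still raises an exception,
      --  but "Send on a socket that was never connected" cannot even be
      --  written, so no run-time check for it is needed.
   private
      --  Handle is just a placeholder for a real OS descriptor.
      type Bound_Socket     is limited record Handle : Integer := -1; end record;
      type Connected_Socket is limited record Handle : Integer := -1; end record;
   end Typed_Sockets;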

> Anyway, I think the problem with your scheme is on the initialization end. 
> Initialization (as you seemed to acknowledge above) is just too complex for 
> many of these objects to do it as part of the object declaration.

My answer is that even talking about complexity is breaking abstraction. It
is irrelevant how complex an opaque operation is.

> Indeed, we (briefly) tried to use a scheme like the one you are suggesting 
> for the Claw packages. We immediately ran into problems with too many 
> parameters (especially for discriminants). And there were further problems 
> with the interrelationships between the objects; we would have had to 
> extensively use access types, which would have greatly compromised the 
> ease-of-use and especially the bulletproof-ness goals that we had for Claw.

These problems are mostly language problems, namely lack of proper
constructors and encapsulation of initialization/finalization.

Your and others' opposition to fixing constructors in Ada is to a large extent
based on flawed design patterns like the one you described.

> So, I don't think it works outside of the simplest cases.

On the contrary, it is the flawed "first construct then initialize"
pattern, which is simple only on paper. In practice it becomes a huge
maintenance and testing problem because it shifts all the burden onto the
clients => distributed overhead.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: The future of Spark . Spark 2014 : a wreckage
  2013-07-12  9:12               ` Stefan.Lucks
  2013-07-12 20:47                 ` Randy Brukardt
@ 2013-08-05  5:43                 ` Paul Rubin
  1 sibling, 0 replies; 68+ messages in thread
From: Paul Rubin @ 2013-08-05  5:43 UTC (permalink / raw)


Stefan.Lucks@uni-weimar.de writes:
> the implementation an authenticated encryption scheme.... The
> output of the authenticated encryption is the ciphertext (X xor Y),
> followed by a cryptographic checksum(*) of X under the key K. ...
> The flow annotations specified the flow from X
> and K to Z. And that actually caught my error of using (X xor Y)
> instead of X in the implementation.

Surely you wanted to authenticate the ciphertext and not the plaintext?
Authenticating the plaintext can leak information about it.


^ permalink raw reply	[flat|nested] 68+ messages in thread

end of thread, other threads:[~2013-08-05  5:43 UTC | newest]

Thread overview: 68+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-07-08 17:56 The future of Spark . Spark 2014 : a wreckage vincent.diemunsch
2013-07-08 19:24 ` Simon Wright
2013-07-08 20:59 ` Florian Weimer
2013-07-09  7:40   ` Dmitry A. Kazakov
2013-07-09  8:30     ` Georg Bauhaus
2013-07-09  8:58       ` Dmitry A. Kazakov
2013-07-09 10:27         ` G.B.
2013-07-09 11:13         ` G.B.
2013-07-09 15:14     ` Adam Beneschan
2013-07-09 22:51     ` Robert A Duff
2013-07-10  7:49       ` Dmitry A. Kazakov
2013-07-10  8:21         ` Georg Bauhaus
2013-07-08 21:32 ` Randy Brukardt
2013-07-09  7:28   ` Dmitry A. Kazakov
2013-07-09 11:29     ` Simon Wright
2013-07-09 12:22       ` Dmitry A. Kazakov
2013-07-09 20:37     ` Randy Brukardt
2013-07-10 10:03       ` Dmitry A. Kazakov
2013-07-10 23:21         ` Randy Brukardt
2013-07-11  7:44           ` Dmitry A. Kazakov
2013-07-11 22:28             ` Randy Brukardt
2013-07-12  8:02               ` Dmitry A. Kazakov
2013-07-12 21:16                 ` Randy Brukardt
2013-07-14 10:19                   ` Dmitry A. Kazakov
2013-07-14 15:57                     ` Georg Bauhaus
2013-07-16  0:16                       ` Randy Brukardt
2013-07-17  1:30                         ` Shark8
2013-07-17 23:08                           ` Randy Brukardt
2013-07-18  7:19                             ` Dmitry A. Kazakov
2013-07-19  4:36                               ` Randy Brukardt
2013-07-16  0:13                     ` Randy Brukardt
2013-07-16  8:37                       ` Dmitry A. Kazakov
2013-07-16 22:13                         ` Randy Brukardt
2013-07-17  7:59                           ` Dmitry A. Kazakov
2013-07-17 23:26                             ` Randy Brukardt
2013-07-18  7:37                               ` Dmitry A. Kazakov
2013-07-19  4:32                                 ` Randy Brukardt
2013-07-19  7:16                                   ` Dmitry A. Kazakov
2013-07-20  5:32                                     ` Randy Brukardt
2013-07-20  9:06                                       ` Dmitry A. Kazakov
2013-07-12  1:01           ` Slow? Ada?? Bill Findlay
2013-07-09  7:57   ` The future of Spark . Spark 2014 : a wreckage Stefan.Lucks
2013-07-09 20:46     ` Randy Brukardt
2013-07-10  8:03       ` Stefan.Lucks
2013-07-10 12:46         ` Simon Wright
2013-07-10 23:10         ` Randy Brukardt
2013-07-11 19:23           ` Stefan.Lucks
2013-07-12  0:21             ` Randy Brukardt
2013-07-12  9:12               ` Stefan.Lucks
2013-07-12 20:47                 ` Randy Brukardt
2013-08-05  5:43                 ` Paul Rubin
2013-07-10 12:19   ` vincent.diemunsch
2013-07-10 16:02     ` Simon Wright
2013-07-11  0:36     ` Randy Brukardt
2013-07-11  6:19       ` Simon Wright
2013-07-11 23:11         ` Randy Brukardt
2013-07-11 10:10       ` vincent.diemunsch
2013-07-11 14:23         ` Dmitry A. Kazakov
2013-07-12  0:07           ` Randy Brukardt
2013-07-12  0:00         ` Randy Brukardt
2013-07-12  7:25           ` Simon Wright
2013-07-12 20:07             ` Randy Brukardt
2013-07-12 14:23           ` Robert A Duff
2013-07-12 20:14             ` Randy Brukardt
2013-07-11 23:50       ` Shark8
2013-07-08 23:18 ` Peter C. Chapin
2013-07-09  7:34   ` Stefan.Lucks
2013-07-09 15:21     ` Adam Beneschan
