[CakeML-dev] speed of evaluation in logic

Michael.Norrish at data61.csiro.au Michael.Norrish at data61.csiro.au
Wed May 24 00:28:00 UTC 2017


Looking at ALL_EL_CONV’s implementation, it seems that it always traverses the entire list, and so won’t bottom out quickly if some element fails the predicate.  But it’s still slower than EVAL even when the predicate is true of every element.

One possible explanation is that ALL_EL_CONV calls dest_list to extract an ML list of the list’s elements, which is then used to build up a copy of the input list on the LHS of the final result theorem (through repeated calls to SPECL).  This is clearly wasteful.  Using Portable.pointer_eq, you can test whether the term on the LHS of the resulting theorem is the very same term originally passed in.  With EVAL it is; with ALL_EL_CONV it isn’t.
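The test is a one-liner.  This is just a sketch, assuming HOL4’s Portable.pointer_eq together with the standard lhs/concl term accessors:

  (* returns true iff the conversion cv returns a theorem whose LHS is
     (pointer-)identical to the input term, i.e. it did not rebuild it *)
  fun preserves_lhs (cv : conv) (tm : term) : bool =
    Portable.pointer_eq (tm, lhs (concl (cv tm)))

On the examples above, this should come out true for EVAL and false for ALL_EL_CONV applied with eval_T.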

Michael

On 23/5/17, 23:59, "Anthony Fox" <acjf3 at cam.ac.uk> wrote:

    Hi Ramana,
    
    I’ve finally found some time to look into this a bit...
    
    One big problem seems to be the use of ListConv1.ALL_EL_CONV when computing secs_ok. Its performance seems to be dreadful in comparison to EVAL. Replacing
    
      ListConv1.ALL_EL_CONV eval_T
    
    with just eval_T gave me a timing for that section of
    
      sec_ok (par) real: 0.67728s
    
    which was a big improvement. However, there seem to be problems elsewhere as well. For example, flat_bytes is computed slowly. In this case
    
    val flat_bytes =
          listSyntax.mk_flat(rconc map_line_bytes')
          |> (REWRITE_CONV (List.rev bytes_defs) THENC eval)
    
    is nine times faster than the current version. Another slow computation is enc_secs_again, so I suspect that this could be improved as well.
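    For anyone who wants to reproduce the comparison, it can be done directly in the REPL.  A sketch, assuming Lib.time, a small example term, and EVAL standing in for eval_T (the terms in the real compilation are of course far larger):

      val tm = ``EVERY (\n. n < 10) [1; 2; 3; 4; 5]``;
      (* time each strategy on the same term and compare the printed runtimes *)
      val th1 = Lib.time EVAL tm;
      val th2 = Lib.time (ListConv1.ALL_EL_CONV EVAL) tm;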
    
    It may be simplest to stick with EVAL for to_bytes_x64, as its performance seems hard to beat.
    
    Cheers,
    Anthony
    
    > On 18 May 2017, at 07:37, Ramana Kumar <Ramana.Kumar at cl.cam.ac.uk> wrote:
    > 
    > I am confused about how EVAL is getting so much speed in lab_to_target. Bear in mind that this is without parallelisation (afaik), which BOOT uses a lot in this phase of compilation. My current guess is that there must be some very smart custom conversions installed for lab_to_target that don't fire under BOOT's strategy. For example, did anyone write custom conversions for calculating sec_ok for previously encoded instructions? Calculating sec_ok under BOOT's strategy, even in parallel, often takes longer than EVAL on the entire lab_to_target evaluation. I know there are custom conversions for memoising the encoder (which both strategies benefit from), but are those custom conversions doing more than I'm aware of?
    
    
    _______________________________________________
    Developers mailing list
    Developers at cakeml.org
    https://lists.cakeml.org/listinfo/developers
    


