2012-10-25

How to Write a Simple Web Application Using Ocamlnet

This post's title might seem a contradiction in terms: Ocamlnet is a large, very opinionated framework for network programming that solves most, if not all, of the infrastructure issues that need to be taken care of when writing a networked, distributed, fault-tolerant server, from process management to string decoding, including protocol parsing and building. The fact is that Ocamlnet is not particularly complicated for all that it does, and it does quite a bit; but there are (for me at least) three hurdles to overcome in order to start using it effectively:

  • It is big. The project includes Posix wrappers, string handling, process management, CGI programming, various protocols, RPC registration, interprocess communication…
  • It is strange. Most of the API is object-oriented rather than functional, and uses inheritance extensively.
  • It is underdocumented. While the API documents are complete and the project page includes some cookbook examples, for most non-trivial needs you have to go deep into the source code.

In this instance I'll follow a tutorial-like, top-down style, and I won't necessarily show complete compile-and-run code, which means that you'll have to reconstruct the source to a compilable state yourself; I hope it will still be useful as a starting point and guide to writing HTTP services with Ocamlnet. This tutorial assumes you have installed OCaml 4.0 with Findlib, Ocamlnet 3.6, Yojson and its dependencies, and PCRE-OCaml.

Also, I won't attempt to compare it with Ocsigen: I've never used it, I'm not really fond of convention-over-configuration-style frameworks, I don't know it at all, and maybe I'm irrationally prejudiced :-). If you want full control of your code and feel comfortable with a large number of tweakable knobs, then Ocamlnet is for you. If you have work that needs to be done quickly and you're used to RoR or Django, Ocamlnet will definitely set your hair on fire.

Setting up a Netplex server: http.ml

The architecture I use is a Web application written in HTML that consumes JSON services published in the back-end. Ocamlnet lets me both serve the static content (HTML, CSS, JavaScript and graphic assets) and route a number of dynamic services. I can use just that for development, and have a production Apache server that serves the static content and proxies the dynamic requests to the Ocamlnet-published services. Another simplification I make is that I manually route the method URLs instead of letting Ocamlnet do it itself, as it is otherwise perfectly capable of doing. This makes it simpler to configure the services, at the cost of having to handle routing explicitly in code.

Let's begin by taking a look at the main function:

let main () =
  let database f =
    db_file := Io.get_absolute_path f;
    if not (Sys.file_exists !db_file) then raise (Arg.Bad ("File not found: " ^ f))
  in
  let usage = "usage: " ^ (Filename.basename Sys.executable_name) ^ " [options]" in
  let opt_list, cmdline_cfg = Netplex_main.args () in
  let opt_list = Arg.align ([
"-db"        , Arg.String database,                   "<file>  Database file";
  ] @ opt_list) in
  Arg.parse opt_list (fun s -> raise (Arg.Bad ("Invalid argument: " ^ s))) usage;
  Netsys_signal.init ();
  Netplex_main.startup
    ~late_initializer:(fun _ _container ->
      Netlog.logf `Notice "Starting up")
    (Netplex_mp.mp ())
    Netplex_log.logger_factories
    Netplex_workload.workload_manager_factories
    [
      service_factory ();
    ]
    cmdline_cfg

let () = if not !Sys.interactive then main ()

Netplex is the part of Ocamlnet that orchestrates the management of, and intercommunication between, the processes that make up a network service. It has a number of command-line options for configuration, most notably -fg to launch the service in the foreground instead of as a detached dæmon. Netplex_main.args gives back the list of options it needs, to which program-specific ones can be added; in this case the only extra option passes a database file. Every filesystem resource must be accessed by absolute path, since Netplex changes the working directory to / upon startup. This file is stored in a global reference:

let db_file = ref (Io.get_absolute_path "myfile.db")

Once the command line is parsed, the service is created. First, Ocamlnet has to take over signal handling to give the service an orderly lifecycle (Netsys is the collection of modules providing cross-platform POSIX functionality). Netplex is then started with the multi-process parallelizer, the standard log and workload factories, which configure themselves from the configuration file, and a single custom service factory that will create the HTTP services themselves:

let service_factory = Nethttpd_plex.nethttpd_factory
  ~name:"nethttpd"
  ~hooks:(new environment_hooks)
  ~config_cgi:Netcgi.({
    default_config with
    default_exn_handler = false;
    permitted_input_content_types = [
      "application/json";
    ] @ default_config.permitted_input_content_types
  })
  ~handlers:[
    "json_service", Net.json_service ~dyn_uri:"/cgi" [
      "daterange", with_db Daterange.json;
      "calculate", with_db Calculate.json;
      "calendar" , with_db Calendar.json;
    ];
  ]
  ~services:Nethttpd_plex.default_services

The service name must correspond to the one defined in the config file. In order to give the workers access to the database file, I intercept service creation with hooks into their lifecycle, opening and closing it as needed. Netcgi sets up the environment that each HTTP service requires to function; in this case I use a minimal configuration that augments the valid POST MIME types with a type for JSON requests (not otherwise used in this case) and opts out of the standard exception handler in exchange for my own. I configure a single "json_service" handler that will dispatch to the relevant methods of type cgi_activation → Yojson.Basic.json. The Netplex services for this service are the default Nethttpd_plex ones required by the infrastructure in order to manage the lifecycle of the process group: creation, tear-down and IPC. Note well that the factory is a thunk, not a data structure: the resulting type is unit → Netplex_types.processor_factory.

The lifecycle hooks are specified as a subclass of Netplex_kit.empty_processor_hooks. It uses the Netplex environment plug-in to store a shared reference to the database in a way that both thread- and process-based services can access in a uniform manner:

class environment_hooks = object
  inherit Netplex_kit.empty_processor_hooks ()
  method! post_add_hook _ container =
    container#add_plugin Netplex_mutex.plugin;
    container#add_plugin Netplex_sharedvar.plugin
  method! post_start_hook container =
    Netlog.logf `Info "(%s) opening database \"%s\"" (Net.cur_sys_id ()) !db_file;
    try  set_db (DB.open_database !db_file)
    with DE42x.Error (msg) -> container#log `Crit msg
  method! pre_finish_hook _container =
    Netlog.logf `Info "(%s) closing database" (Net.cur_sys_id ());
    match get_db () with
    | Some db -> DB.close_db db
    | None    -> ()
end

In this case I open and close database connections (represented here as an open file descriptor) which are stored in a per-process environment:

let get_db, set_db =
  let env_id = "MyService.localenv" in
  let module E = Netplex_cenv.Make_var_type (struct type t = DB.t end) in
  (fun () -> try Some (E.get env_id) with Netplex_cenv.Container_variable_not_found _ -> None),
  E.set env_id

Netplex_cenv makes a strongly-typed accessor for shared variables; in this case I have just one, keyed by env_id. As a utility I arrange for my service methods to be closed over a reference to the database (cf. the handler setup above):

let with_db proc arg = match get_db () with
| None    -> Net.failf "no database!"
| Some db -> proc db arg

A Nethttp JSON framework: net.ml

Every Nethttpd_plex-based service follows the same structure, while the specifics make up the bulk of the application. In this example those details have to do with utilities that make consuming and producing JSON data easier. I have a Net module with a number of helpers, of which I've used two already, cur_sys_id and failf:

let failf fmt = Printf.ksprintf failwith fmt
and argf  fmt = Printf.ksprintf invalid_arg fmt

let cur_sys_id () = match Netplex_cenv.current_sys_id () with
| `Process pid -> Printf.sprintf "PID %d" pid
| `Thread  tid -> Printf.sprintf "TID %d" tid

Another useful function is an encoding-safe string wrapper:

let text = Netencoding.Html.encode_from_latin1

Normally, Nethttp sends HTML 4.01-formatted error messages. In a JSON-based application it is preferable to have standardized JSON errors:

let error_json (env : Netcgi.cgi_environment) status fields cause message =
  let json_of_header hdr =
    try `String (env#input_header_field hdr)
    with Not_found -> `Null
  in
  try
    let script_name = env#cgi_script_name in
    let code = Nethttp.int_of_http_status status in
    env#log_error (Printf.sprintf "%s: %s (Status %i)" script_name message code);
    env#set_output_header_fields []; (* reset *)
    env#set_output_header_field "Content-type" "application/json; charset=utf-8";
    env#set_status status;
    env#set_multiple_output_header_field "Cache-control"
      [ "max-age=0"; "must-revalidate" ];
    let secs = Netdate.mk_mail_date (Unix.time ()) in
    env#set_output_header_field "Expires" secs;
    List.iter (fun (n,v) -> env#set_multiple_output_header_field n v) fields;
    env#send_output_header();
    if env#cgi_request_method <> "HEAD" then
      Yojson.Basic.to_output env#out_channel (`Assoc [
        "status"       , `Int    code;
        "statusText"   , `String (Nethttp.string_of_http_status status);
        "cause"        , `String cause;
        "message"      , `String message;
        "scriptName"   , `String script_name;
        "requestMethod", `String env#cgi_request_method;
        "queryString"  , `String env#cgi_query_string;
        "referrer"     ,  json_of_header "referer"
      ]);
    env#out_channel#flush ();
    env#out_channel#close_out ()
  with e ->
    Netlog.logf `Crit "Unexpected exception %s" (Printexc.to_string e)

This is a good example of how to use the cgi_environment to query the CGI execution and to exert maximum control over the HTTP response. I raise standard Ocaml exceptions from the method handlers and translate them into the relevant HTTP status codes by wrapping them in a higher-order protective function:

let protect handler (cgi : Netcgi.cgi_activation) =
  try handler cgi
  with
  | Netcgi.Argument.Oversized ->
    error_json cgi#environment `Request_entity_too_large []
      "Oversized" "A POST parameter exceeds maximum allowed size"
  | Invalid_argument msg ->
    error_json cgi#environment `Bad_request []
      "Bad request" (text msg)
  | Failure msg ->
    error_json cgi#environment `Internal_server_error []
      "Method failure" (text msg)
  | Not_found ->
    error_json cgi#environment `Not_implemented []
      "Not implemented" "The requested operation is not implemented"
  | exn ->
    let msg = Printexc.to_string exn in
    error_json cgi#environment `Internal_server_error []
      "Internal server error" ("Unexpected exception: " ^ text msg)

It is not normally necessary to manipulate the cgi_environment in such a detailed, low-level manner; the cgi_activation does pretty much the same thing in an easier-to-use way:

let send_json json (cgi : Netcgi.cgi_activation) =
  cgi#set_header ~content_type:"application/json; charset=utf-8" ~cache:`No_cache ();
  Yojson.Basic.to_output cgi#out_channel json;
  cgi#output#commit_work ()

Note well that Yojson doesn't provide a streaming interface: you must build the entire JSON value, which gets serialized in bulk; this makes it necessary to configure the HTTP service so that the cgi_activations it creates are at least buffered:

let json_service ?dyn_uri handlers =
  let dyn_translator path =
      let len  = String.length path in
      let path = if len <> 0 && path.[0] = '/'
        then String.sub path 1 (len - 1)
        else path
      in
      Pcre.replace ~pat:"/" ~templ:"_" path
  and dyn_handler env cgi =
    protect (fun cgi ->
      let h = List.assoc env#cgi_path_translated handlers in
      send_json (h cgi) cgi) cgi
  in Nethttpd_services.({
    dyn_handler;
    dyn_activation = std_activation `Std_activation_buffered;
    dyn_uri;
    dyn_translator;
    dyn_accept_all_conditionals = true;
  })

The dynamic path translator removes the leading slash and turns subsequent slashes into underscores, so that a method in the namespace /cgi, say, can serve as a look-up key in the list of handlers: a call to /cgi/my/method/name will turn into the key my_method_name. This is, of course, a completely arbitrary decision. The dynamic handler in turn looks up the method handler (recall, of type cgi_activation → Yojson.Basic.json) by this key, calls it with the cgi_activation expecting a JSON response and sends it out. Since the handling is protected against exceptions, any missing method, parameter validation error or exceptional condition is sent out as the corresponding HTTP error response.
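
As a quick illustration, here is a hypothetical top-level session (assuming dyn_translator were exposed; in the code above it is local to json_service):

# dyn_translator "/my/method/name";;
- : string = "my_method_name"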

Speaking of parameter extraction, I don't use anything fancy like parsing combinators, just plain old higher-order functions and regular expressions validating the result of CGI accessor functions:

let with_arg arg f = Io.unwind ~protect:(fun arg -> arg#finalize ()) f arg

let get_arg cgi name =
  try  Some (with_arg (cgi#argument name) (fun arg -> arg#value))
  with Not_found -> None
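
Io.unwind is not shown in this post; a minimal sketch of such a try-finally combinator, under that assumption, might be:

let unwind ~protect f x =
  try
    let y = f x in
    protect x;
    y
  with e ->
    (* run the finalizer even on failure, then re-raise *)
    protect x;
    raise e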

let parse ?default ~validate ~parse cgi name =
  match default, get_arg cgi name with
  | None  , None    -> argf "Missing parameter \"%s\"" name
  | Some v, None    -> v
  | _     , Some p  ->
    try  parse (Pcre.extract ~rex:validate ~full_match:false p)
    with Not_found -> argf "Invalid parameter \"%s\"" name

Since CGI arguments can be buffered into a temporary file if large, Netcgi requires explicit finalization. Every error is signaled with an Invalid_argument exception, which protect catches and translates into an HTTP 400 (Bad Request) via error_json. Parsing specific argument types is straightforward:

let re_char     = Pcre.regexp "^(.)$"
let re_bool     = Pcre.regexp "^(true|false)$"
let re_int      = Pcre.regexp "^([-+]?\\d+)$"
let re_float    = Pcre.regexp "^([-+]?\\d+(?:\\.\\d*)?)$"
let re_date     = Pcre.regexp "^(\\d{4})-(\\d{2})-(\\d{2})$"
let re_datetime = Pcre.regexp "^(\\d{4})-(\\d{2})-(\\d{2})[ T](\\d{2}):(\\d{2}):(\\d{2})(Z|[-+]\\d{4})$"

let parse_char ?default cgi = parse ?default ~validate:re_char
  ~parse:(fun res -> res.(0).[0]) cgi

let parse_bool cgi = parse ~default:false ~validate:re_bool
  ~parse:(fun res -> bool_of_string res.(0)) cgi

let parse_int ?default cgi = parse ?default ~validate:re_int
  ~parse:(fun res -> int_of_string res.(0)) cgi

let parse_float ?default cgi = parse ?default ~validate:re_float
  ~parse:(fun res -> float_of_string res.(0)) cgi

let parse_date ?default cgi = parse ?default ~validate:re_date ~parse:(fun res ->
  let year   = int_of_string res.(0)
  and month  = int_of_string res.(1)
  and day    = int_of_string res.(2)
  in let dummy = Netdate.create 0. in
  Netdate.({ dummy with year; month; day; })) cgi

let parse_datetime ?default cgi = parse ?default ~validate:re_datetime ~parse:(fun res ->
  let year   = int_of_string res.(0)
  and month  = int_of_string res.(1)
  and day    = int_of_string res.(2)
  and hour   = int_of_string res.(3)
  and minute = int_of_string res.(4)
  and second = int_of_string res.(5)
  and zone   = match res.(6) with
  |"Z" -> 0
  | dt ->
    let hrs = int_of_string (String.sub dt 1 2)
    and mns = int_of_string (String.sub dt 3 2) in
    let off = 60 * hrs + mns in
    if dt.[0] = '+' then off else -off
  in let dummy = Netdate.create 0. in
  Netdate.({ dummy with year; month; day; hour; minute; second; zone; })) cgi

Writing the JSON methods: myservice.ml

That is the infrastructure, in broad strokes. I put each method in its own module, following a simple convention:

  • I define a type t of parsed method arguments:

    type t = {
      lon : float;
      lat : float;
      dt  : Netdate.t;
      jd  : Jd.t;
      tz  : int;
      lim : Jd.t option;
    }
    

    (in this instance, Jd.t is the type of dates represented as Julian dates)

  • I define a validate function to parse the CGI arguments into a value of type t:

    let validate db cgi =
      let open Net in
      let lon = parse_float    cgi "lon"
      and lat = parse_float    cgi "lat"
      and dt  = parse_datetime cgi "dt" in
      let jd  = jd_of_netdate   dt in
      let tz  = parse_int      cgi "tz"  ~default:dt.Netdate.zone in
      let lim = if parse_bool  cgi "tab"
        then Some (Jd.min db.DB.max_date (jd_of_netdate (Net.parse_date cgi "lim")))
        else None
      in
      if not (-180. <= lon && lon <= 180.) then
        Net.argf "Longitude out of range"
      else
      if not (-66.56 <= lat && lat <= 66.56) then
        Net.argf "Latitude out of range"
      else
      if Jd.compare jd db.DB.min_date < 0 then
        Net.argf "Date too early"
      else
      if Jd.compare jd db.DB.max_date > 0 then
        Net.argf "Date too late"
      else
      { lon = lon /. Trig.radian;
        lat = lat /. Trig.radian;
        dt;
        jd;
        tz;
        lim; }
    
  • I define and export a json function to generate the actual output:

    let json db cgi =
      let req = validate db cgi in
      let tz  = req.dt.Netdate.zone / 60 in
      let tdt = Jd.dynamic_time req.jd in
      (* … *)
      `Assoc [
        "jd"           , `Float  t;
        "dt"           ,  Net.time   Jd.(tdt <-> req.jd);
        "lst"          ,  Net.time  (lst /. Jd.secs_day);
        "lon"          ,  Net.angle  ~reduce:false req.lon;
        "lat"          ,  Net.angle  ~reduce:false req.lat;
        (* … *)
      ]
    

    (functions Net.time and Net.angle return appropriate JSON values). This exported function goes into the dynamic method map, as seen in the service_factory above.

Configuring the Netplex server: myservice.conf

That is mostly it, code-wise. All that remains is the detail of configuring Netplex. I use a simple myservice.conf file:

netplex {
  controller {
    max_level = "info";
    logging {
      type = "file";
      file = "/var/log/myservice.log";
      component = "*";
      subchannel = "*";
      max_level = "info";
    };
  };

  service {
    name = "myservice";
    protocol {
      name = "http";
      tcp_nodelay = true;
      address {
        type = "internet";
        bind = "0.0.0.0:8080";
      };
    };
    processor {
      type = "nethttpd";
      timeout = 60.0;
      timeout_next_request = 6.0;
      access_log = "enabled";
      suppress_broken_pipe = true;
      host {
        pref_name = "localhost";
        pref_port = 8080;
        names = "127.0.0.1:0";
        uri {
          path = "/";
          service {
            type = "file";
            docroot = "/path/to/static/";
            media_types_file = "/etc/mime.types";
            default_media_type = "application/xhtml+xml";
            enable_gzip = true;
            enable_listings = false;
            index_files = "index.html";
          };
        };
        uri {
          path = "/cgi";
          method {
            allow = "GET";
            service {
              type = "dynamic";
              handler = "json_service";
            };
          };
        };
      };
    };
    workload_manager {
      type = "dynamic";
      max_jobs_per_thread = 1;
      min_free_jobs_capacity = 2;
      max_free_jobs_capacity = 5;
      max_threads = 50;
    };
  };
}

Note that the Nethttpd_plex section declares two URIs: the root path maps to a file service that will serve the static content, defaulting to XHTML, while the /cgi prefix will map to the dynamic JSON handler. This is useful for development, since it only requires launching myservice -fg and trying it with a Web browser on http://127.0.0.1:8080/. In production I set up Apache with mod_proxy like this:

Alias /myservice /path/to/static

<Directory /path/to/static>
  Options FollowSymLinks
  AllowOverride All
  Order allow,deny
  Allow from all
</Directory>

<Location /myservice/>
  AuthType Digest
  AuthName "SERVICE"
  AuthDigestDomain /myservice/
  AuthUserFile /etc/httpd/passwd
  Require valid-user
</Location>

ProxyPass /myservice/cgi  http://127.0.0.1:8080/cgi

(where /path/to/static and /cgi must match what is configured in myservice.conf). Of course you can map your application to the HTTP root; in this case I have a single Apache instance serving various sub-paths.

Compiling: Makefile

It is not necessary to complicate the build process with anything more than a properly written Makefile. In this case I have one interface and one implementation for each JSON method (which, you will note, don't quite correspond to the dynamic service set-up shown first). Note well the list of PACKAGES required for building:

OCAMLFLAGS        = -thread -w @a -unsafe
OCAMLOPTFLAGS     = $(OCAMLFLAGS) -inline 10000
OCAMLDEP          = ocamldep
CFLAGS            = -I/opt/ocaml/lib/ocaml -arch x86_64 -O3 -Wall -Wextra
PACKAGES          = -package threads,pcre,yojson,netplex,netcgi2,nethttpd

SRC  =            \
    net.ml        \
    myservice.ml  \
    http.ml

PROGS = myservice

all: $(PROGS)

myservice: $(SRC:%.ml=%.cmx)
    ocamlfind ocamlopt $(OCAMLOPTFLAGS) $(PACKAGES) -linkpkg $+ -o $@

%.cmi: %.mli
    ocamlfind ocamlc $(OCAMLFLAGS) $(PACKAGES) -c $<

%.cmo: %.ml
    ocamlfind ocamlc $(OCAMLFLAGS) $(PACKAGES) -c $<

%.cmx: %.ml
    ocamlfind ocamlopt $(OCAMLOPTFLAGS) $(PACKAGES) -c $<

%.o: %.c
    ocamlfind ocamlc -ccopt "$(CFLAGS)" -c $<

clean:
    /bin/rm -rf *.o *.a *.so *.cmi *.cmo *.cmx *~ *.log

distclean: clean
    /bin/rm -rf $(PROGS) depend

depend:
    $(OCAMLDEP) -one-line *.ml *.mli > depend

include depend

.PHONY: clean distclean all

Conclusion

There are a number of more advanced issues I'd like to address in the future. As it is, this framework can handle simple GET and POST requests but won't parse multipart attachments nor handle file transfers. Another thing it doesn't handle is HTTP Authorization; for simple requirements a simple filter can work, while for more complicated set-ups the best way to go is, in my opinion, to leverage Apache as I did here.

For those of you that have to interoperate with SOAP Web Services, the same architectural strategy is perfectly applicable with the aid of PXP and perhaps OC-SOAP.

A big field for further exploration is how to structure a complex Web application into independent services; Netplex makes that possible, if not necessarily easy. There is a hot architectural trend making some noise now called Command-Query Separation (CQS); this pattern can be profitably implemented with a single Netplex RPC service that handles all commands to which the Nethttpd_plex workers delegate. The advantages of this separation are enforced separation of concerns and automatic, transparent fault tolerance and distribution, both of which are the guiding principles behind Ocamlnet's design.

A closing remark I don't want to miss making: the payoff of using Ocamlnet's process model is that it is really fast. My "production" server is an ancient 400 MHz PPC with 384 MB RAM which is perfectly capable of producing and serving really computationally-intensive content with minimal environmental requirements. This is something I simply couldn't hope to pull off with PHP or Java. I encourage you to try Ocamlnet and see if you find, like I do, that Ocaml is the language of choice of discriminating Web developers.

2012-08-06

Merge Right

The so-called master-transaction update is one of the, if not the defining algorithms of the discipline formerly known as "data processing". Given two sorted files of records with increasing keys, the process applies each record in the transaction file to each record of the the master file and outputs the result, if any, to the updated master file in one pass over each input. The same algorithm can compute the union, intersection or difference of sorted sequences. For instance, the union of two sets represented as sorted lists of unique elements is:

let union       =
  let rec go l r = match l, r with
  | [], xs | xs, []  -> xs
  | x :: xs, y :: ys ->
    match compare x y with
    | -1 -> x :: go       xs (y :: ys)
    |  0 -> x :: go       xs       ys
    |  1 -> y :: go (x :: xs)      ys
    |  _ -> assert false
  in go

Intersection is:

let inter       =
  let rec go l r = match l, r with
  | [], _  | _, []   -> []
  | x :: xs, y :: ys ->
    match compare x y with
    | -1 ->      go       xs (y :: ys)
    |  0 -> x :: go       xs       ys
    |  1 ->      go (x :: xs)      ys
    |  _ -> assert false
  in go

while difference is:

let diff       =
  let rec go l r = match l, r with
  | [], _  | _, []   -> l
  | x :: xs, y :: ys ->
    match compare x y with
    | -1 -> x :: go       xs (y :: ys)
    |  0 ->      go       xs       ys
    |  1 ->      go (x :: xs)      ys
    |  _ -> assert false
  in go

And so on. The three functions use the same underlying merge schemata; what varies is the operation to perform in each of the five possible cases:

  1. The left sequence is nil
  2. The right sequence is nil
  3. The next element in the left sequence is less than the next element in the right sequence
  4. The next element in the left sequence is equal to the next element in the right sequence
  5. The next element in the left sequence is greater than the next element in the right sequence

The question is, then: how many set operations can the merge algorithm implement? These five cases partition both input sequences into disjoint sets such that the output sequence is the natural merge of zero or more of them. For example, processing the sets { 1, 3, 4, 6, 7, 8 } ⋈ { 2, 3, 5, 6, 8, 9, 10 } results in the following steps:

| LN    | RN | LT | EQ | GT | Arguments                                        |
|-------|----|----|----|----|--------------------------------------------------|
|       |    | 1  |    |    | { 1, 3, 4, 6, 7, 8 } ⋈ { 2, 3, 5, 6, 8, 9, 10 }  |
|       |    |    |    | 2  | { 3, 4, 6, 7, 8 } ⋈ { 2, 3, 5, 6, 8, 9, 10 }     |
|       |    |    | 3  |    | { 3, 4, 6, 7, 8 } ⋈ { 3, 5, 6, 8, 9, 10 }        |
|       |    | 4  |    |    | { 4, 6, 7, 8 } ⋈ { 5, 6, 8, 9, 10 }              |
|       |    |    |    | 5  | { 6, 7, 8 } ⋈ { 5, 6, 8, 9, 10 }                 |
|       |    |    | 6  |    | { 6, 7, 8 } ⋈ { 6, 8, 9, 10 }                    |
|       |    | 7  |    |    | { 7, 8 } ⋈ { 8, 9, 10 }                          |
|       |    |    | 8  |    | { 8 } ⋈ { 8, 9, 10 }                             |
| 9, 10 |    |    |    |    | ∅ ⋈ { 9, 10 }                                    |

Abstracting away the operations to perform in each of these five cases we have the following schema:

let merge ln rn lt eq gt : 'a list -> 'a list -> 'a list =
  let rec go l r = match l, r with
  | [], ys -> ln ys
  | xs, [] -> rn xs
  | x :: xs, y :: ys ->
    match compare x y with
    | -1 -> lt x (go       xs (y :: ys))
    |  0 -> eq x (go       xs       ys )
    |  1 -> gt y (go (x :: xs)      ys )
    |  _ -> assert false
  in go

Both ln and rn must decide what to do with the remaining list and so have type α list → α list, while lt, eq and gt must decide what to do with the element in consideration and so have type α → α list → α list; thus the type of merge is (α list → α list) → (α list → α list) → (α → α list → α list) → (α → α list → α list) → (α → α list → α list) → α list → α list → α list. The operations on the remainder either pass it unchanged or return nil, while the operations on the next element either add it to the output sequence or not:

let self   xs =      xs
and null   _  =      []
and cons x xs = x :: xs
and tail _ xs =      xs

(some of these have well-known names in functional programming, but here I choose to use these neat, four-letter ones.) With the proviso that the output sequence must be increasing these four functions exhaust the possibilities by parametricity; otherwise, duplications and rearrangements would satisfy the parametric signature. Now union, intersection and difference are simply:

let union l r = merge self self  cons cons cons l r
and inter l r = merge null null  tail cons tail l r
and diff  l r = merge null self  cons tail tail l r

It is obvious that the question I posed above is answered as 2⁵ = 32 possible set operations obtainable by varying each of the five operations. The next question is, then, what are these 32 operations? Let me characterize each of the five sets ln, rn, lt, eq and gt. The easiest one is eq, as it obviously is the intersection of both sets:

eq(A, B) = A ∩ B

By substitution in merge it is possible to show that ln(A, B) = rn(B, A) and vice versa; hence just one set expression suffices. The merge ends with rn for every element in A that is greater than every element in B, as the latter were included in the comparison sets; and conversely for ln. Hence:

ln(A, B) = B / A ≡ ⟨S y : B : ⟨∀ x : A : y > x⟩⟩
rn(A, B) = A / B ≡ ⟨S x : A : ⟨∀ y : B : x > y⟩⟩

(read A / B as "A over B"). All three sets are pairwise disjoint, since A / B ⊆ A and A / B ∩ B = ∅, and conversely, by construction.

The two remaining sets are also symmetric in the sense that lt(A, B) = gt(B, A) but are more difficult to characterize. My first attempt was to think of each element in A as being processed in turn and put into lt(A, B) just when strictly less than all the elements in B against which it could be matched, namely lt(A, B) = ⟨S x : A : x < ⟨min y : B : x ≤ y⟩⟩. The condition can be simplified with a bit of equational reasoning:

   x∈A ∧ x < ⟨min y : B : x ≤ y⟩
≡ { GLB }
   x∈A ∧ ⟨∀ y : y∈B ∧ x ≤ y : x < y⟩
≡ { Trading }
   x∈A ∧ ⟨∀ y : y∈B : x > y ∨ x < y⟩
≡ { Trichotomy }
   x∈A ∧ ⟨∀ y : y∈B : x ≠ y⟩
≡
   x∈A ∧ x∉B

In other words, A − B. The problem is that, since the quantification over an empty set is trivially true, this set is too big as it includes the respective remainder; that is to say A / B ⊆ A − B as I showed above. To preserve disjointness I define:

lt(A, B) = A − B − A / B
gt(A, B) = B − A − B / A

In a Venn diagram, these five sets are:

Venn diagram of the five component sets

So by including or excluding one of the five components depending on the function passed to each of the five operations, the 32 set operations achievable by merge are:

Venn diagrams for all 32 set operations

Or in tabular form, where the expressions in the Function column read left to right:

| N  | ln   | rn   | lt   | eq   | gt   | Function          |
|----|------|------|------|------|------|-------------------|
| 0  | self | self | cons | cons | cons | A∪B               |
| 1  | self | self | cons | cons | tail | A ∪ B/A           |
| 2  | self | self | cons | tail | cons | A∆B               |
| 3  | self | self | cons | tail | tail | A−B ∪ B/A         |
| 4  | self | self | tail | cons | cons | B ∪ A/B           |
| 5  | self | self | tail | cons | tail | A∩B ∪ B/A ∪ A/B   |
| 6  | self | self | tail | tail | cons | B−A ∪ A/B         |
| 7  | self | self | tail | tail | tail | B/A ∪ A/B         |
| 8  | self | null | cons | cons | cons | A∪B − A/B         |
| 9  | self | null | cons | cons | tail | A ∪ B/A − A/B     |
| 10 | self | null | cons | tail | cons | A∆B − A/B         |
| 11 | self | null | cons | tail | tail | A−B ∪ B/A − A/B   |
| 12 | self | null | tail | cons | cons | B                 |
| 13 | self | null | tail | cons | tail | A∩B ∪ B/A         |
| 14 | self | null | tail | tail | cons | B−A               |
| 15 | self | null | tail | tail | tail | B/A               |
| 16 | null | self | cons | cons | cons | A∪B − B/A         |
| 17 | null | self | cons | cons | tail | A                 |
| 18 | null | self | cons | tail | cons | A∆B − B/A         |
| 19 | null | self | cons | tail | tail | A−B               |
| 20 | null | self | tail | cons | cons | B − B/A ∪ A/B     |
| 21 | null | self | tail | cons | tail | A∩B ∪ A/B         |
| 22 | null | self | tail | tail | cons | B−A − B/A ∪ A/B   |
| 23 | null | self | tail | tail | tail | A/B               |
| 24 | null | null | cons | cons | cons | A∪B − B/A − A/B   |
| 25 | null | null | cons | cons | tail | A − A/B           |
| 26 | null | null | cons | tail | cons | A∆B − B/A − A/B   |
| 27 | null | null | cons | tail | tail | A−B − A/B         |
| 28 | null | null | tail | cons | cons | B − B/A           |
| 29 | null | null | tail | cons | tail | A∩B               |
| 30 | null | null | tail | tail | cons | B−A − B/A         |
| 31 | null | null | tail | tail | tail | ∅                 |

Arguably, besides the traditional five set operations A ∪ B, A ∩ B, A − B, B − A and A ∆ B, only the remainders A / B, B / A and perhaps A / B ∪ B / A = A ⊔ B, the join of A and B (not to be confused with the relational operation), are independently useful. These three are obscure, and as far as I know have no name, although I'd love to hear if there is literature about them. This might mean that this exhaustive catalog of set merges is rather pointless, but at least now I know for sure.
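
If the join were wanted as code, it falls directly out of the schema (row 7 of the table above; this snippet is an illustrative addition):

let join l r = merge self self tail tail tail l r  (* B/A ∪ A/B *)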

2012-08-02

A Helping Phantom Hand

You don't have to be writing an interpreter or some other kind of abstract code to profit from some phantom types. Suppose you have two or more functions that work by "cooking" a simple value (a float, say) with a lengthy computation before proceeding:

let sun_geometric_longitude j =
  let je = to_jcen (to_jde j) in
  (* … computation with je … *)

let sun_apparent_longitude j =
  let je = to_jcen (to_jde j) in
  let q  = sun_geometric_longitude j in
  (* … computation with je … *)

In this case j is a date expressed in Julian Days as a float, and to_jde computes the Ephemeris Time as a 63-term trigonometric polynomial correction on it. sun_apparent_longitude calls sun_geometric_longitude and both call to_jde. Obviously this unnecessary duplication can be factored out:

let sun_geometric_longitude je =
  let t  = to_jcen je in
  (* … computation with je … *)

let sun_apparent_longitude je =
  let q  = sun_geometric_longitude je in
  let t  = to_jcen je in
  (* … computation with je … *)

let foo j =
  let je = to_jde j in
  let l  = sun_apparent_longitude je in
  (* … *)

(to_jcen is cheap and not worth factoring out.) But now a naked float represents two different things, Universal Time and Ephemeris Time, and we have a valid concern about mixing them up. We can wrap the time in an ADT:

type dt = UT of float | TD of float

let to_jde j = match j with
| UT j ->
  (* … lengthy computation … *)
  TD j
| TD _ -> invalid_arg "to_jde"

let sun_geometric_longitude je = match je with
| TD je ->
  let t  = to_jcen je in
  (* … computation with je … *)
| UT _  -> invalid_arg "sun_geometric_longitude"

let sun_apparent_longitude je = match je with
| TD je ->
  let q  = sun_geometric_longitude je in
  let t  = to_jcen je in
  (* … computation with je … *)
| UT _  -> invalid_arg "sun_apparent_longitude"

let foo j =
  let je = to_jde j in
  (* … computation with sun_apparent_longitude je … *)

but this forces us to check at run-time whether we mixed times up in our code. A better technique is to use a phantom type. First fix an abstract signature for the module implementing these functions:

module Test : sig
  type 'a dt

  val datetime : yy:int -> mm:int -> dd:int -> hh:int -> nn:int -> ss:float -> [ `JD ] dt
  val to_jde   : [ `JD ] dt -> [ `JDE ] dt
  val to_jcen  : 'a dt -> float

  val sun_geometric_longitude : [ `JDE ] dt -> float
  val sun_apparent_longitude  : [ `JDE ] dt -> float
end = struct
  (* … *)

We have a way to construct our type α dt from a Universal datetime, a way to convert it to Dynamical Time with to_jde and operations that respect the kind of measure. The implementation is as before:

  (* … *)
  type 'a dt = float (* phantom type! *)

  let datetime ~yy ~mm ~dd ~hh ~nn ~ss = (* … *)

  let to_jde  j = (* … *)
  let to_jcen j = (* … *)

  let sun_geometric_longitude je =
    (* … computation with a statically checked je … *)

  let sun_apparent_longitude je =
    let q  = sun_geometric_longitude je in
    (* … computation with a statically checked je … *)
end

Now the compiler checks for us that we don't mix up measures. The only inconvenience of this approach is that the type α dt is fully abstract, and you must provide coercions, string_ofs and pretty-printers for it if you need to show values or debug your code. There is a way out, though: just make it a private type abbreviation:

module Test : sig
  type 'a dt = private float
  (* … signature exactly as before … *)
end = struct
  type 'a dt = float (* phantom type! *)
  (* … implementation exactly as before … *)
end

Now α dt will be shown in the top-level, can be printed with a coercion (je :> float), etc.
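
For instance, a hypothetical top-level session (the numeric value is purely illustrative):

# let je = Test.to_jde (Test.datetime ~yy:2012 ~mm:8 ~dd:2 ~hh:12 ~nn:0 ~ss:0.);;
val je : [ `JDE ] Test.dt = 2456142.
# (je :> float);;
- : float = 2456142.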

For another simple example, suppose you want to represent sets as lists. The best way to do that is to keep them sorted so that set operations run in linear time. If you want to provide some operations that temporarily destroy the ordering, a phantom type can keep track of the invariant "this list is sorted":

module Set : sig
  type ('a, 'b) t = private 'a list

  val of_list : 'a list -> ('a, [ `S ]) t
  val as_set  : ('a, [ `U ]) t -> ('a, [ `S ]) t
  val empty   : ('a, [ `S ]) t
  val mem     : 'a -> ('a, [ `S ]) t -> bool
  val add     : 'a -> ('a, [ `S ]) t -> ('a, [ `S ]) t
  val union   : ('a, [ `S ]) t -> ('a, [ `S ]) t -> ('a, [ `S ]) t
  val inter   : ('a, [ `S ]) t -> ('a, [ `S ]) t -> ('a, [ `S ]) t
  val append  : ('a, 'b) t -> ('a, 'c) t -> ('a, [ `U ]) t
end = struct
  type ('a, 'b) t = 'a list

  let of_list l   = List.sort compare l
  and as_set  l   = List.sort compare l
  and empty       = []
  let union   l r = (* … linear merge … *)
  and inter   l r = (* … linear merge … *)
  let mem     e   = List.mem e
  and add     e   = union [e]
  and append      = List.append
end

The phantom type [ `S | `U ] tracks the sortedness of the list. Note that in the case of append the argument lists can have any ordering but the result is known to be unsorted. Note also how the fact that the empty list is by definition sorted is directly reflected in the type.
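
A hypothetical top-level session shows the invariant at work (the error message is abridged):

# let u = Set.append (Set.of_list [3; 1]) (Set.of_list [2]);;
val u : (int, [ `U ]) Set.t = [1; 3; 2]
# Set.union u u;;
Error: This expression has type (int, [ `U ]) Set.t
       but an expression was expected of type (int, [ `S ]) Set.t
# Set.union (Set.as_set u) (Set.as_set u);;
- : (int, [ `S ]) Set.t = [1; 2; 3]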

2012-07-19

Theorems for Free: The Monad Edition

This is for the record, since the derivations took me a while and I'd rather not lose them.

A functor is the signature:

module type FUNCTOR = sig
  type 'a t
  val fmap : ('a -> 'b) -> ('a t -> 'b t)
end

satisfying the following laws:

Identity:    fmap id      ≡ id
Composition: fmap (f ∘ g) ≡ fmap f ∘ fmap g

An applicative structure or idiom is the signature:

module type APPLICATIVE = sig
  type 'a t
  val pure : 'a -> 'a t
  val ap : ('a -> 'b) t -> ('a t -> 'b t)
end

satisfying the following laws:

Identity:     ap (pure id)                  ≡ id
Composition:  ap (ap (ap (pure (∘)) u) v) w ≡ ap u (ap v w)
Homomorphism: ap (pure f) ∘ pure            ≡ pure ∘ f
Interchange:  ap u (pure x)                 ≡ ap (pure (λf.f x)) u

An applicative functor is the structure:

module type APPLICATIVE_FUNCTOR = sig
  type 'a t
  include FUNCTOR     with type 'a t := 'a t
  include APPLICATIVE with type 'a t := 'a t
end

that is simultaneously a functor and an applicative structure, satisfying the additional law:

Fmap: fmap ≡ ap ∘ pure

A monad is the structure:

module type MONAD = sig
  type 'a t
  val return : 'a -> 'a t
  val bind : ('a -> 'b t) -> ('a t -> 'b t)
end

satisfying the following laws:

Right unit:    bind return     ≡ id
Left unit:     bind f ∘ return ≡ f
Associativity: bind f ∘ bind g ≡ bind (bind f ∘ g)

Every monad is an applicative functor:

module ApplicativeFunctor (M : MONAD)
: APPLICATIVE_FUNCTOR with type 'a t = 'a M.t
= struct
  type 'a t = 'a M.t
  open M
  let fmap f = bind (fun x -> return (f x))
  let pure   = return
  let ap u v = bind (fun f -> fmap f v) u
end

This can be proved by easy (but tedious) equational reasoning. That the proof is rigorous is Wadler's celebrated result.

Proof of the Functor Identity:

  fmap id
≡ { definition }
  bind (return ∘ id)
≡ { composition }
  bind return
≡ { Monad Right unit }
  id

Proof of the Functor Composition:

  fmap f ∘ fmap g
≡ { definition }
  bind (return ∘ f) ∘ bind (return ∘ g)
≡ { Monad Associativity }
  bind (bind (return ∘ f) ∘ return ∘ g)
≡ { Monad Left unit }
  bind (return ∘ f ∘ g)
≡ { definition }
  fmap (f ∘ g)

A number of naturality conditions are simple equations between λ-terms. I'll need these later:

Lemma 1 (Yoneda):

  fmap f ∘ (λh. fmap h x)
≡ { defn. ∘, β-reduction }
  λg. fmap f (fmap g x)
≡ { defn. ∘ }
  λg. (fmap f ∘ fmap g) x
≡ { Functor Composition }
  λg. fmap (f ∘ g) x
≡ { abstract }
  λg. (λh. fmap h x) (f ∘ g)
≡ { defn. ∘, η-contraction }
  (λh. fmap h x) ∘ (f ∘)

Lemma 2:

  fmap f ∘ return
≡ { definition }
  bind (return ∘ f) ∘ return
≡ { Monad Left unit }
  return ∘ f

Lemma 3:

  bind f ∘ fmap g
≡ { definition fmap }
  bind f ∘ bind (return ∘ g)
≡ { Monad Associativity }
  bind (bind f ∘ return ∘ g)
≡ { Monad Left unit }
  bind (f ∘ g)

Lemma 4:

  bind (fmap f ∘ g)
≡ { definition fmap }
  bind (bind (return ∘ f) ∘ g)
≡ { Monad Associativity }
  bind (return ∘ f) ∘ bind g
≡ { definition fmap }
  fmap f ∘ bind g

The Applicative Functor condition is easy to prove and comes in as a handy lemma:

  ap ∘ pure
≡ { definition }
  λv. bind (λf. fmap f v) ∘ return
≡ { Monad Left unit }
  λv. λf. fmap f v
≡ { η-contraction }
  fmap

Proof of the Applicative Identity:

  ap (pure id)
≡ { Applicative Functor }
  fmap id
≡ { Functor Identity }
  id

Proof of the Applicative Homomorphism:

  ap (pure f) ∘ pure
≡ { Applicative Functor }
  fmap f ∘ pure
≡ { Lemma 2, defn. pure }
  pure ∘ f

Proof of the Applicative Interchange:

  ap (pure (λf.f x)) u
≡ { Applicative Functor }
  fmap (λf.f x) u
≡ { definition }
  bind (return ∘ (λf.f x)) u
≡ { defn. ∘, β-reduction }
  bind (λf. return (f x)) u
≡ { Lemma 2 }
  bind (λf. fmap f (return x)) u
≡ { definition }
  ap u (pure x)

The proof of the Applicative Composition condition is the least straightforward of the lot, as it requires ingenuity to choose the reduction to apply at each step. I started with a long, tedious derivation that required forward and backward reasoning; at the end I refactored it in byte-sized lemmas in order to simplify it as much as I could. As a heuristic, I always try to start from the most complicated expression to avoid having to guess where and what to abstract (that is, applying elimination rules requires neatness, while applying introduction rules requires backtracking):

  ap (ap (ap (pure (∘)) u) v) w
≡ { Applicative Functor }
  ap (ap (fmap (∘) u) v) w
≡ { definition }
  bind (λf. fmap f w) (bind (λf. fmap f v) (fmap (∘) u))
≡ { Lemma 3 with f := λf. fmap f v, g := (∘) }
  bind (λf. fmap f w) (bind ((λf. fmap f v) ∘ (∘)) u)
≡ { Monad Associativity with f := λf. fmap f w, g := (λf. fmap f v) ∘ (∘) }
  bind (bind (λf. fmap f w) ∘ (λf. fmap f v) ∘ (∘)) u
≡ { defn. ∘ (at right) }
  bind (λf. (bind (λf. fmap f w) ∘ (λf. fmap f v)) (f ∘)) u
≡ { defn. ∘, β-reduction }
  bind (λf. bind (λf. fmap f w) (fmap (f ∘) v)) u
≡ { Lemma 3 with f := λf. fmap f w, g := (f ∘) }
  bind (λf. bind ((λf. fmap f w) ∘ (f ∘)) v) u
≡ { Lemma 1 }
  bind (λf. bind (fmap f ∘ (λf. fmap f w)) v) u
≡ { Lemma 4 with g := λf. fmap f w }
  bind (λf. fmap f (bind (λf. fmap f w) v)) u
≡ { definition }
  ap u (ap v w)

What is this good for? I don't really know; Haskellers can't seem to live without them, while I can't seem to justify their application. I suspect laziness has a lot to do with it; in any case, the machinery is more palatable with the appropriate combinators:

module Functor (F : FUNCTOR) = struct
  include F
  let ( <$> ) = fmap
end

module Applicative (A : APPLICATIVE) = struct
  include A
  let ( <*> ) = ap
end

module Monad (M : MONAD) = struct
  include M
  include (ApplicativeFunctor (M)
         : APPLICATIVE_FUNCTOR with type 'a t := 'a t)
  let ( <$> )     = fmap
  let ( <*> )     = ap
  let ( >>= ) m f = bind f m
end
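
As a quick sanity check, the machinery can be instantiated with the option monad (a minimal sketch):

module OptionMonad = Monad (struct
  type 'a t = 'a option
  let return x = Some x
  let bind f = function Some x -> f x | None -> None
end)

let () =
  let open OptionMonad in
  (* bind sequences the computation; this prints 4 *)
  match Some 3 >>= (fun x -> return (succ x)) with
  | Some n -> print_int n
  | None   -> ()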

2012-07-17

An Odd Lemma

While proving that every monad is an applicative functor, I extracted the following derivation as a lemma:

  fmap f ∘ (λh. fmap h x)
≡ { defn. ∘, β-reduction }
  λg. fmap f (fmap g x)
≡ { defn. ∘ }
  λg. (fmap f ∘ fmap g) x
≡ { Functor }
  λg. fmap (f ∘ g) x
≡ { abstract f ∘ g }
  λg. (λh. fmap h x) (f ∘ g)
≡ { defn. ∘, η-contraction }
  (λh. fmap h x) ∘ (f ∘)

for all f, x. This is the Yoneda Lemma in a special form. The term λh. fmap h x is the natural transformation between an arbitrary functor F and the functor Hom(X, —), where X is fixed by the type of x. Dan Piponi calls it the hardest trivial thing in mathematics. I didn't recognize it immediately (my category-fu is nonexistent), but the striking symmetry made me curious and I started investigating.

2012-07-12

A minor branch off Braun Trees

Revisiting the post on Braun Trees I noticed that, while pedagogical, the implementation of the root replacement operation rep can be streamlined a bit. By inlining the mutually recursive siftdown, specializing on the matches and adding guards, the result is as follows:

let rec rep compare e = function
| N (_, (N (el, _, _) as l), E)
  when compare el e  < 0 ->
  N (el, rep compare e l, E)
| N (_, (N (el, _, _) as l), (N (er, _, _) as r))
  when compare el e  < 0
    || compare er e  < 0 ->
    if compare el er < 0
      then N (el, rep compare e l, r)
      else N (er, l, rep compare e r)
| N (_, l, r) -> N (e, l, r)
| E           -> failwith "empty heap"

The guards are the strongest possible in order to minimize the need to descend to the children. The conditional under the second case could in turn be lifted to a guard, since (el < e ∨ er < e) ∧ el < er ≡ el < e ∧ el < er, and similarly for the else case, but the loss of simplicity is quite substantial in my opinion.
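
For reference, lifting the conditional into the guards as just described would give something like this sketch:

let rec rep' compare e = function
| N (_, (N (el, _, _) as l), E)
  when compare el e < 0 ->
  N (el, rep' compare e l, E)
| N (_, (N (el, _, _) as l), (N (er, _, _) as r))
  when compare el e < 0 && compare el er < 0 ->
  (* el < e ∧ el < er: replace into the left child *)
  N (el, rep' compare e l, r)
| N (_, l, (N (er, _, _) as r))
  when compare er e < 0 ->
  (* er < e ∧ er ≤ el: replace into the right child *)
  N (er, l, rep' compare e r)
| N (_, l, r) -> N (e, l, r)
| E           -> failwith "empty heap"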

2012-07-09

Existential Crisis

In a long discussion at LtU, user Jules Jacobs advances a use-case that for him would have been difficult to solve in a statically-typed language. I focus on his first two points:

  1. A data structure that is a container of T plus a function on T's. I would have needed existential types to hide the T, which I don't get in many languages.
  2. A couple of functions that take heterogeneous list of items that the function will convert to the interface it needs the elements to be with an extensible conversion function. […]

It's true, not many languages have existential types, but OCaml happens to be one of them, or at least it allows existentials to be encoded. I'll take a big hint from MLton and I'll try to solve both problems (I won't pretend to solve his actual problems; just my interpretation of the problems he posed). Another reason I write this post is that I always forget what the essence of existential types is. Pierce writes:

Similarly, there are two different ways of looking at an existential type, written {∃X,T}. The logical intuition is that an element of {∃X,T} is a value of type [X ↦ S]T, for some type S. The operational intuition, on the other hand, is that an element of {∃X,T} is a pair, written {*S,t}, of a type S and a term t of type [X ↦ S]T.

[…] To understand existential types, we need to know two things: how to build (or introduce, in the jargon of §9.4) elements that inhabit them, and how to use (or eliminate) these values in computations.

An existentially typed value is introduced by pairing a type with a term, written {*S,t}. A useful concrete intuition is to think of a value {*S,t} of type {∃X,T} as a simple form of package or module with one (hidden) type component and one term component. The type S is often called the hidden representation type, or sometimes (to emphasize a connection with logic, cf. §9.4) the witness type of the package. For example, the package p = {*Nat, {a=5, f=λx:Nat. succ(x)}} has the existential type {∃X, {a:X, f:X→X}}. The type component of p is Nat, and the value component is a record containing a field a of type X and a field f of type X→X, for some X (namely Nat).

(Types and Programming Languages, p. 363–364) I quote at some length because Pierce's presentation is so condensed that for me it really bears repeating; also, because every time I want a refresher I can come to this post instead of cracking open the physical book. The gist of it is pithily and more memorably stated in Mitchell and Plotkin's slogan, abstract types have existential type.

The complication is that in ML types are not values, so in order to pack an existential we must reify the type component as a term, for instance as a pair of functions, an injection and a projection:

module type UNIV = sig
  type t
  val embed : unit -> ('a -> t) * (t -> 'a)
end

Why UNIV when we're talking about existential types? Go ask Oleg. Actually, there is a type isomorphism analogous to the logical equivalence between (∃x. P x) ⇒ Q and ∀x. (P x ⇒ Q); as Jeremy Gibbons writes, …the justification being that a datatype declaration such as type e = ∃t. E of t foo introduces a constructor E : (∃t. t foo) → e, and this type is isomorphic to ∀t. (t foo → e) because e is independent of t (Haskellisms paraphrased).

In any case, here the existential type is t, and embed produces a (inj, prj) pair that can be applied to values of some type α. Only the prj of the pair can recover the injected value; the use of any other prj will fail. There are at least two possible implementations, one using references and another one using exceptions (which are values of a single open, extensible, generative type). The latter is very clever:

module Univ : UNIV = struct
  type t = exn
  let embed (type u) () =
    let module E = struct
      exception E of u
      let inj x = E x
      let prj = function E x -> x | _ -> raise Not_found
    end in E.(inj, prj)
end

The use of the named type u and a local module are 3.12 features that allow declaring a polymorphic exception (compare the SML solution in MLton's website). Since the exception constructor E is different for every invocation of embed (this is the "generative" bit referred to above), only the prj of the pair can recover the value:

let () =
  let inj_int  , prj_int   = Univ.embed ()
  and inj_float, prj_float = Univ.embed () in
  let r = ref (inj_int 13) in
  let s1 = try string_of_int   (prj_int   !r) with Not_found -> "None" in
  r := inj_float 13.0;
  let s2 = try string_of_int   (prj_int   !r) with Not_found -> "None" in
  let s3 = try string_of_float (prj_float !r) with Not_found -> "None" in
  Printf.printf "%s %s %s\n" s1 s2 s3

Note that the reference r holds values "of different types" via the corresponding inj. This code typechecks and when run outputs:

13 None 13.
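
For comparison, the reference-based implementation mentioned above can be sketched as follows (an illustrative version, not taken from MLton's page):

module Univ_ref : UNIV = struct
  type t = unit -> unit
  let embed () =
    (* one fresh reference per embed: only the matching prj can see it *)
    let r = ref None in
    let inj x = (fun () -> r := Some x)
    and prj f =
      r := None;
      f ();
      (match !r with Some x -> x | None -> raise Not_found)
    in
    (inj, prj)
end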

On top of this "universal" existential type we can build heterogeneous property lists à la Lisp (see again MLton's site):

module PList : sig
  type t
  val make     : unit -> t
  val property : unit -> (t -> 'a -> unit) * (t -> 'a)
end = struct
  type t = Univ.t list ref

  let make () = ref []

  let property (type u) () =
    let inj, prj = Univ.embed () in
    let put r v = r := inj v :: !r
    and get r   =
      let rec go = function
      | []      -> raise Not_found
      | x :: xs -> try prj x with Not_found -> go xs
      in go !r
    in (put, get)
end

Each property must be created explicitly but independently of any list using it. They encapsulate an existentially-typed value; look-up just proceeds by attempting to project out the corresponding value. These lists really are magical:

let () =
  let put_name  , get_name   = PList.property ()
  and put_age   , get_age    = PList.property ()
  and put_weight, get_weight = PList.property () in
  let show p = Printf.printf "%s: %d years, %.1f kg\n"
    (get_name   p)
    (get_age    p)
    (get_weight p)
  in
  let boy, girl = PList.make (), PList.make () in
  put_name   boy  "Tim";
  put_age    boy  13;
  put_weight boy  44.0;
  put_name   girl "Una";
  put_age    girl 12;
  put_weight girl 39.0;
  List.iter show [boy; girl]

Here boy and girl are two different, independent property lists that act as extensible records with labels get_name, get_age and get_weight. The list iteration prints the properties uniformly via show, without having to cast or do any strange contortions (at least not any outside the definitions). The output is:

Tim: 13 years, 44.0 kg
Una: 12 years, 39.0 kg

Of course nothing says that the property lists must be homogeneous in the properties they contain; looking for an inexistent property will simply fail. On the other hand, probably the preferred way to handle extensible records in OCaml is via objects, using structural subtyping in the same way dynamically-typed languages would use duck typing. This would make solving the original problem a little more familiar to Python or Ruby programmers; but then, recovering the original objects from the lists would be impossible without downcasts.
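
A sketch of that object-based alternative, for contrast (illustrative code, with the same data as above):

let () =
  let show p =
    Printf.printf "%s: %d years, %.1f kg\n" p#name p#age p#weight in
  let boy  = object method name = "Tim" method age = 13 method weight = 44.0 end
  and girl = object method name = "Una" method age = 12 method weight = 39.0 end in
  (* structural typing: any object with these three methods will do *)
  List.iter show [boy; girl]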

2012-07-02

Assessing Abstractions

Back to OCaml! Catching up with StackOverflow's OCaml questions, I found an interesting one about private type abbreviations in module signatures. One thing in that conversation that struck me as odd was the assertion that the compiler optimizes single-constructor variants, thereby subsuming the semantics of all three of Haskell's declarators, data, type and newtype, into one. Definitive proof would be obtainable by diving into the compiler's source. I decided instead to read the output of ocamlopt to try to catch a glimpse of this, even if I know it doesn't constitute definitive proof. What I found is both good and bad, or as we say here, una de cal y una de arena.

Consider the following module definitions:

module S = struct
  type t = P of int * int
  let make x y = P (x, y)
end

module T = struct
  type t = int * int
  let make x y = x, y
end

Module S (for "single sum type") declares a single-variant type, while T (for "tuple type") declares an abbreviation. Putting them in a file and compiling it with ocamlopt -S -c generates the following listing (prelude and other specificities omitted):

S.make:
_camlAbbrev__make_1033:
        subq    $8, %rsp
        movq    %rax, %rdi
.L101:  subq    $24, %r15
        movq    _caml_young_limit@GOTPCREL(%rip), %rax
        cmpq    (%rax), %r15
        jb      .L102
        leaq    8(%r15), %rax
        movq    $2048, -8(%rax)
        movq    %rdi, (%rax)
        movq    %rbx, 8(%rax)
        addq    $8, %rsp
        ret
.L102:  call    _caml_call_gc
        jmp     .L101

T.make:
_camlAbbrev__make_1038:
        subq    $8, %rsp
        movq    %rax, %rdi
.L105:  subq    $24, %r15
        movq    _caml_young_limit@GOTPCREL(%rip), %rax
        cmpq    (%rax), %r15
        jb      .L106
        leaq    8(%r15), %rax
        movq    $2048, -8(%rax)
        movq    %rdi, (%rax)
        movq    %rbx, 8(%rax)
        addq    $8, %rsp
        ret
.L106:  call    _caml_call_gc
        jmp     .L105

(ocamlopt -version is 3.12.1). The compiler generates the exact same code in both cases. The three movq stores build the heap-allocated 3-word pair: tag word, first component, second component. In order to test modular abstraction, I coerce these implementations into the following signatures:

module A : sig
  type t = int * int
  val make : int -> int -> t
end = T

module B : sig
  type t = private int * int
  val make : int -> int -> t
end = T

module X : sig
  type t = P of int * int
  val make : int -> int -> t
end = S

module Y : sig
  type t = private P of int * int
  val make : int -> int -> t
end = S

Modules A and B coerce module T with a public and a private type definition, respectively. Modules X and Y do the same with module S. The following code shows how to use each variant of type t in a destructuring pattern match:

let a () = A.(let   (x, y) =  make 2 3               in x + y)
let b () = B.(let   (x, y) = (make 2 3 :> int * int) in x + y)
let x () = X.(let P (x, y) =  make 2 3               in x + y)
let y () = Y.(let P (x, y) =  make 2 3               in x + y)

As explained in the manual, a private type abbreviation requires an explicit coercion for the types to be compatible. The use of a single-variant type makes for not only safer coding (the intent behind Haskell's newtype), it also allows for lighter syntax. I was surprised by the assembly code generated by ocamlopt (somewhat simplified):

a:
_camlAbbrev__a_1060:
        subq    $8, %rsp
.L109:  subq    $24, %r15
        movq    _caml_young_limit@GOTPCREL(%rip), %rax
        cmpq    (%rax), %r15
        jb      .L110
        leaq    8(%r15), %rax
        movq    $2048, -8(%rax)
        movq    $5, (%rax)
        movq    $7, 8(%rax)
        movq    8(%rax), %rbx
        movq    (%rax), %rax
        leaq    -1(%rax, %rbx), %rax
        addq    $8, %rsp
        ret
.L110:  call    _caml_call_gc
        jmp     .L109

b:
_camlAbbrev__b_1063:
        subq    $8, %rsp
.L113:  subq    $24, %r15
        movq    _caml_young_limit@GOTPCREL(%rip), %rax
        cmpq    (%rax), %r15
        jb      .L114
        leaq    8(%r15), %rax
        movq    $2048, -8(%rax)
        movq    $5, (%rax)
        movq    $7, 8(%rax)
        movq    8(%rax), %rbx
        movq    (%rax), %rax
        leaq    -1(%rax, %rbx), %rax
        addq    $8, %rsp
        ret
.L114:  call    _caml_call_gc
        jmp     .L113
x:
_camlAbbrev__x_1066:
        subq    $8, %rsp
.L117:  subq    $24, %r15
        movq    _caml_young_limit@GOTPCREL(%rip), %rax
        cmpq    (%rax), %r15
        jb      .L118
        leaq    8(%r15), %rax
        movq    $2048, -8(%rax)
        movq    $5, (%rax)
        movq    $7, 8(%rax)
        movq    8(%rax), %rbx
        movq    (%rax), %rax
        leaq    -1(%rax, %rbx), %rax
        addq    $8, %rsp
        ret
.L118:  call    _caml_call_gc
        jmp     .L117

y:
_camlAbbrev__y_1070:
        subq    $8, %rsp
.L121:  subq    $24, %r15
        movq    _caml_young_limit@GOTPCREL(%rip), %rax
        cmpq    (%rax), %r15
        jb      .L122
        leaq    8(%r15), %rax
        movq    $2048, -8(%rax)
        movq    $5, (%rax)
        movq    $7, 8(%rax)
        movq    8(%rax), %rbx
        movq    (%rax), %rax
        leaq    -1(%rax, %rbx), %rax
        addq    $8, %rsp
        ret
.L122:  call    _caml_call_gc
        jmp     .L121

Identical machine code is generated in all four cases, which I find very encouraging. Here the compiler inlines the tupling constructor make (the three movq stores of $2048, $5 and $7; note that an integer n is tagged as 2·n + 1, so that 2 is represented by 5 and 3 by 7). What I find a bummer is that the tuple is immediately destructured (into rbx and rax, by the two movq loads that follow) and operated on its components (the final leaq; note in passing how addition of tagged integers is done with leaq, since 2·(m + n) + 1 = (2·m + 1) + (2·n + 1) - 1), without any kind of CSE performed. This is perfectly summarized by Pascal Cuoq in a comment to another StackOverflow question:

[…] Last time I checked it didn't even optimize the allocation of a, b in match a, b with x, y -> .... If you wish to check by yourself, I found that reading the x86 assembly generated by ocamlopt -S was convenient for me because I didn't have to learn a new representation.

This, I suspect, is an artifact of inlining in the presence of code emission that must be correct in the face of separate compilation. Disturbingly enough, ocamlopt seems to inline even across module boundaries. Separating the test into an abbrev.ml implementation:

module S = struct
  type t = P of int * int
  let make x y = P (x, y)
end

module T = struct
  type t = int * int
  let make x y = x, y
end

module A = T
module B = T
module X = S
module Y = S

with interface abbrev.mli:

module A : sig type t =              int * int val make : int -> int -> t end
module B : sig type t = private      int * int val make : int -> int -> t end
module X : sig type t =         P of int * int val make : int -> int -> t end
module Y : sig type t = private P of int * int val make : int -> int -> t end

the following code in test.ml:

open Abbrev

let a () = A.(let   (x, y) =  make 2 3               in x + y)
let b () = B.(let   (x, y) = (make 2 3 :> int * int) in x + y)
let x () = X.(let P (x, y) =  make 2 3               in x + y)
let y () = Y.(let P (x, y) =  make 2 3               in x + y)

compiles to the same assembly as above! Xavier Leroy spells out the conditions for ocamlopt to do cross-module inlining; I think it is obvious that these tests fulfill them. An interesting tidbit in that message is the command-line argument -dcmm, which dumps an RTL-like representation of the generated code:

(function camlTest__a_1038 (param/1056: addr)
 (let match/1057 (alloc 2048 5 7)
   (+ (+ (load match/1057) (load (+a match/1057 8))) -1)))

(function camlTest__b_1041 (param/1058: addr)
 (let match/1059 (alloc 2048 5 7)
   (+ (+ (load match/1059) (load (+a match/1059 8))) -1)))

(function camlTest__x_1044 (param/1060: addr)
 (let match/1061 (alloc 2048 5 7)
   (+ (+ (load match/1061) (load (+a match/1061 8))) -1)))

(function camlTest__y_1048 (param/1062: addr)
 (let match/1063 (alloc 2048 5 7)
   (+ (+ (load match/1063) (load (+a match/1063 8))) -1)))
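
(For reference, such a dump can be produced with an invocation along the lines of ocamlopt -dcmm -c abbrev.mli abbrev.ml test.ml; the exact file list is my reconstruction, but -dcmm and -c are ordinary ocamlopt arguments.)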

I think it is not very productive to worry too much about the cost of abstractions in OCaml, as it is probably offset by the naïveté of the code generator.

2012-06-29

2D Interpolation, Part 5: Final Optimizations

The code now has the proper shape to carry the optimizations to their logical conclusion, to the point that I question the wisdom of some of the previous transformations. It is true that in compiler construction some passes introduce constructs only for later passes to eliminate, in the hope of an overall simplification. In this case the starting point will be the elimination of the per-pixel dispatch to the required interpolator, by hoisting the switch out of the inner loop into a top-level procedure. I keep the last version of the interpolator around as interp1, applying the same trick I used before to compare outputs pixel for pixel:

void interp0() {
  switch (method) {
  case NEAREST:  interpN(); break;
  case BILINEAR: interpL(); break;
  case BICUBIC:  interpC(); break;
  }
}
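
The comparison itself is the same as in the earlier parts; here is a minimal sketch of my own of that check, with interp1 as the retained reference version and the reporting reconstructed by me:

void check() {
  // run the retained reference implementation and snapshot its output
  final int[] expected = new int[width * height];
  interp1();
  arrayCopy(pixels, expected);
  // run the new dispatching version and diff the two buffers pixel for pixel
  interp0();
  for (int i = 0; i < width * height; i++)
    if (pixels[i] != expected[i])
      println("mismatch at (" + (i % width) + ", " + (i / width) + ")");
}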

The initial versions of these interpolators are three identical copies of the previous interp0, appropriately renamed. For the sake of brevity I will omit these copies, showing instead the final, simplified form of each one. In the case of interpN(), I first eliminate the switch. Then I eliminate all references to the repeated border (the variables having 0 or 3 in their indices). Since there is no need to repeat elements, I can now access the source array directly. This in turn allows me to simplify all index computations, adjusting the condition on the last column as required. In fact, there is no need to keep count of the rows, as yi already does that, so the end condition on the outer loop gets radically simpler. A further optimization I can afford is to eliminate the floating-point values yf and xf by inlining the equivalent integer test directly into the conditional (since yf = yr/height, the test yf < 0.5 is exactly 2*yr < height, and similarly for xf):

void interpN() {
  int yr = 0;
  int yi = 1;
  int offset = 0;
  for (int i = 0; i < height; i++) {
    int c11 = srcpix[(yi-1)*ncols+0], c12 = srcpix[(yi-1)*ncols+1];
    int c21 = srcpix[ yi   *ncols+0], c22 = srcpix[ yi   *ncols+1];
    int xr = 0;
    int xi = 1;
    for (int j = 0; j < width; j++) {
      final int c = 2*yr < height ? 2*xr < width ? c11 : c12 : 2*xr < width ? c21 : c22;
      pixels[offset++] = c;
      xr += ncols - 1;
      if (xr >= width) {
        xr -= width;
        xi += 1;
        if (xi < ncols) {
          c11 = c12; c12 = srcpix[(yi-1)*ncols+xi];
          c21 = c22; c22 = srcpix[ yi   *ncols+xi];
        }
      }
    }
    yr += nrows - 1;
    if (yr >= height) {
      yr -= height;
      yi += 1;
    }
  }
}
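
To see this error-accumulating stepping at work, consider a worked trace with sizes of my own choosing: for ncols = 4 and width = 8, xr takes the values 0, 3, 6, 1, 4, 7, 2, 5 across a row, wrapping after the pixels at j = 2, j = 5 and j = 7. The first two wraps are exactly where xi advances and the 2×2 window slides to the next source column; the last one falls at the end of the row, where the xi < ncols guard suppresses the fetch.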

The simplification of the linear interpolator interpL() proceeds exactly as before. The access pattern is identical, since the stencil is 2×2, so the end result is the same except for the pixel calculation:

void interpL() {
  int yr = 0;
  int yi = 1;
  int offset = 0;
  for (int i = 0; i < height; i++) {
    final float yf = (float) yr / (float) height;
    int c11 = srcpix[(yi-1)*ncols+0], c12 = srcpix[(yi-1)*ncols+1];
    int c21 = srcpix[ yi   *ncols+0], c22 = srcpix[ yi   *ncols+1];
    int xr = 0;
    int xi = 1;
    for (int j = 0; j < width; j++) {
      final float xf = (float) xr / (float) width;
      final int c = bilinear(xf, yf, c11, c12, c21, c22);
      pixels[offset++] = c;
      xr += ncols - 1;
      if (xr >= width) {
        xr -= width;
        xi += 1;
        if (xi < ncols) {
          c11 = c12; c12 = srcpix[(yi-1)*ncols+xi];
          c21 = c22; c22 = srcpix[ yi   *ncols+xi];
        }
      }
    }
    yr += nrows - 1;
    if (yr >= height) {
      yr -= height;
      yi += 1;
    }
  }
}

This leaves me with the bicubic interpolator interpC(). On the one hand, there is neither need nor possibility to change much besides eliminating the switch, as all the previous transformations were based on the use of a 4×4 stencil with repetitions. On the other, I always ask myself whether I can do better. I reason as follows: the stencil's top-left corner ranges over the source image from pixel (−1, −1) to pixel (ncols−3, nrows−3); in case of under- or overflow the missing pixels are repeated. This means that using min and max as needed can fetch the repetitions for free! In general the index will be min(max(yi-d, 0), nrows-1), but depending on the value of d, and knowing the range of yi, some simplifications are possible. The same is true for the repetition of the last column. In the inner loop I use the same trick I did with the arrayCopy(), that is, shift the old pixels and fetch the new ones only if in range:

void interpC() {
  int yr = 0;
  int yi = 2;
  int offset = 0;
  for (int i = 0; i < height; i++) {
    final float yf = (float) yr / (float) height;
    int c00 = srcpix[max(yi-3,     0)*ncols+0];
    int c01 = srcpix[max(yi-3,     0)*ncols+0];
    int c02 = srcpix[max(yi-3,     0)*ncols+1];
    int c03 = srcpix[max(yi-3,     0)*ncols+2];
    int c10 = srcpix[   (yi-2       )*ncols+0];
    int c11 = srcpix[   (yi-2       )*ncols+0];
    int c12 = srcpix[   (yi-2       )*ncols+1];
    int c13 = srcpix[   (yi-2       )*ncols+2];
    int c20 = srcpix[   (yi-1       )*ncols+0];
    int c21 = srcpix[   (yi-1       )*ncols+0];
    int c22 = srcpix[   (yi-1       )*ncols+1];
    int c23 = srcpix[   (yi-1       )*ncols+2];
    int c30 = srcpix[min(yi  ,nrows-1)*ncols+0];
    int c31 = srcpix[min(yi  ,nrows-1)*ncols+0];
    int c32 = srcpix[min(yi  ,nrows-1)*ncols+1];
    int c33 = srcpix[min(yi  ,nrows-1)*ncols+2];
    int xr = 0;
    int xi = 2;
    for (int j = 0; j < width; j++) {
      final float xf = (float) xr / (float) width;
      final int c = bicubic(xf, yf,
            c00, c01, c02, c03, c10, c11, c12, c13,
            c20, c21, c22, c23, c30, c31, c32, c33);
      pixels[offset++] = c;
      xr += ncols - 1;
      if (xr >= width) {
        xr -= width;
        xi += 1;
        c00 = c01; c01 = c02; c02 = c03;
        c10 = c11; c11 = c12; c12 = c13;
        c20 = c21; c21 = c22; c22 = c23;
        c30 = c31; c31 = c32; c32 = c33;
        if (xi < ncols) {
          c03 = srcpix[max(yi-3,      0)*ncols+xi];
          c13 = srcpix[   (yi-2        )*ncols+xi];
          c23 = srcpix[   (yi-1        )*ncols+xi];
          c33 = srcpix[min(yi  ,nrows-1)*ncols+xi];
        }
      }
    }
    yr += nrows - 1;
    if (yr >= height) {
      yr -= height;
      yi += 1;
    }
  }
}

Note that the stencil is loaded at the top of the inner loop with full knowledge of the values that yi and xi can take. Note also that when xi moves past the last source column, the unconditional shift simply re-uses the last fetched column (c03 and friends keep their values), implementing the right-border repetition for free. Finally, eliminating the conditions implicit in the min and max calculations would require inverting the loops so that the control variables are yi and xi; this is possible, but would require modular arithmetic on the destination indices, and I would have to start the overall process from scratch.

In the meantime, I suspect that the introduction of row buffers was a red herring (although, I swear, an innocent and unwitting one). I didn't think of using min and max until the code was focused enough for me to see it from another angle. What in hindsight appears perfectly obvious can be, and often is, obscured by unrelated considerations. Selecting the "best" strategy at each step requires knowing the full path to the goal; faced with limited information, the best we can hope for is that our heuristics don't mislead us too far away. In any case backtracking is inevitable for those of us lacking the gift of foreknowledge, and I think it much more honest (if boring) to fully disclose my veering into blind alleys.

The take-away lesson of this (admittedly repetitious, if not downright tedious) exercise is to show how to manually apply some of the strategies that modern parallelizing compilers use automatically to vectorize and optimize straight-line loop code, including the use of comparisons against unoptimized versions to assess and ensure the correctness of the transformations. When faced with optimized code I am often left wondering what process the author followed to arrive at that result; my own process is rather pedestrian and methodical, with no magic tricks beyond a basic command of high-school arithmetic.

2012-06-28

2D Interpolation, Part 4: ARGB Interpolation

After linearizing all array accesses, interpolating ARGB values is easy from the algorithmic side of things; so, easy things first. I'll handle the pixel interpolation proper by abstracting it away into its own little functions. Processing an ARGB source array is just a matter of changing the variable declarations:

final int[] srcpix = {
0xff0051ff, 0xff00fffb, 0xff9cff00, 0xff0051ff,
0xffff0000, 0xff00ff08, 0xffffdb00, 0xff00fffb,
0xff9cff00, 0xff00fffb, 0xff0051ff, 0xffffdb00,
0xffffdb00, 0xff9cff00, 0xff00fffb, 0xff00ff08,
};
final int nrows = 4;
final int ncols = 4;

void interp0() {
  final int[] p = new int[4*ncols+8];
  p[1*ncols+2] = srcpix[0*ncols]; arrayCopy(srcpix, 0*ncols, p, 1*ncols+3, ncols); p[2*ncols+3] = srcpix[1*ncols-1];
  p[2*ncols+4] = srcpix[1*ncols]; arrayCopy(srcpix, 1*ncols, p, 2*ncols+5, ncols); p[3*ncols+5] = srcpix[2*ncols-1];
  p[3*ncols+6] = srcpix[2*ncols]; arrayCopy(srcpix, 2*ncols, p, 3*ncols+7, ncols); p[4*ncols+7] = srcpix[3*ncols-1];
  arrayCopy(p, 1*ncols+2, p, 0, ncols+2);
  int yr = 0;
  int yi = 2;
  int offset = 0;
  for (int i = 0; i < height; i++) {
    final float yf = (float) yr / (float) height;
    int c00 = p[0*ncols+0], c01 = p[0*ncols+1], c02 = p[0*ncols+2], c03 = p[0*ncols+3];
    int c10 = p[1*ncols+2], c11 = p[1*ncols+3], c12 = p[1*ncols+4], c13 = p[1*ncols+5];
    int c20 = p[2*ncols+4], c21 = p[2*ncols+5], c22 = p[2*ncols+6], c23 = p[2*ncols+7];
    int c30 = p[3*ncols+6], c31 = p[3*ncols+7], c32 = p[3*ncols+8], c33 = p[3*ncols+9];
    int xr = 0;
    int xi = 3;
    for (int j = 0; j < width; j++) {
      final float xf = (float) xr / (float) width;
      int c = 0;
      switch (method) {
      case NEAREST:
        c = yf < 0.5 ? xf < 0.5 ? c11 : c12 : xf < 0.5 ? c21 : c22;
        break;
      case BILINEAR:
        c = bilinear(xf, yf, c11, c12, c21, c22);
        break;
      case BICUBIC:
        c = bicubic(xf, yf,
                    c00, c01, c02, c03, c10, c11, c12, c13,
                    c20, c21, c22, c23, c30, c31, c32, c33);
        break;
      }
      pixels[offset++] = c;
      xr += ncols - 1;
      if (xr >= width) {
        xr -= width;
        xi += 1;
        if (xi < ncols+2) {
          c00 = c01; c01 = c02; c02 = c03; c03 = p[0*ncols+xi+0];
          c10 = c11; c11 = c12; c12 = c13; c13 = p[1*ncols+xi+2];
          c20 = c21; c21 = c22; c22 = c23; c23 = p[2*ncols+xi+4];
          c30 = c31; c31 = c32; c32 = c33; c33 = p[3*ncols+xi+6];
        }
      }
    }
    yr += nrows - 1;
    if (yr >= height) {
      yr -= height;
      yi += 1;
      if (yi <= nrows) {
        arrayCopy(p, (ncols+2), p, 0, 3*(ncols+2));
        if (yi < nrows) {
          p[3*ncols+6] = srcpix[yi*ncols];
          arrayCopy(srcpix, yi*ncols, p, 3*ncols+7, ncols);
          p[4*ncols+7] = srcpix[(yi+1)*ncols-1];
        }
      }
    }
  }
}

The pixel array is the result of applying hue() to each value in the original float array data. As you can see, I simplified the inner switch by extracting the interpolation schemes into named routines. These routines have to unpack the four components of each pixel, interpolate each component separately, and pack the results back into a pixel. The bilinear interpolator is straightforward:

static int bilinear(final float t, final float u,
                    final int c00, final int c01,
                    final int c10, final int c11) {
  final int a00=(c00>>24)&0xff, r00=(c00>>16)&0xff, g00=(c00>>8)&0xff, b00=c00&0xff;
  final int a01=(c01>>24)&0xff, r01=(c01>>16)&0xff, g01=(c01>>8)&0xff, b01=c01&0xff;
  final int a10=(c10>>24)&0xff, r10=(c10>>16)&0xff, g10=(c10>>8)&0xff, b10=c10&0xff;
  final int a11=(c11>>24)&0xff, r11=(c11>>16)&0xff, g11=(c11>>8)&0xff, b11=c11&0xff;
  final float a = linear(u, linear(t, a00, a01), linear(t, a10, a11));
  final float r = linear(u, linear(t, r00, r01), linear(t, r10, r11));
  final float g = linear(u, linear(t, g00, g01), linear(t, g10, g11));
  final float b = linear(u, linear(t, b00, b01), linear(t, b10, b11));
  return (int) a << 24 | (int) r << 16 | (int) g << 8 | (int) b;
}
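
bilinear() builds on the scalar helper linear() from the earlier parts; for reference, a minimal version consistent with its use here (my reconstruction, not necessarily the original):

static float linear(final float t, final float a, final float b) {
  // affine blend: returns a at t = 0 and b at t = 1
  return a + t * (b - a);
}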

Note that the pixels are unpacked in "row" (ARGB) order but interpolated in "column" order; these transpositions are typical of vectorized and GPU code. The result is very different from the previous version of the linear interpolator, since here I'm interpolating vectors, not scalars mapped to a color ramp:

[figure: bilinear interpolation]

You can see very noticeable Mach banding in the form of color ridges and creases. hue() mitigated this effect by smootherstep'ping the parameter, and I can do the same here by converting a float channel into a saturated (that is, range-limited) int channel:

static int bsat(float x) {
  // clamp out-of-range channel values to [0, 255]
  if (x < 0)
    return 0;
  else if (x > 255)
    return 255;
  // smootherstep the in-range values: normalize to [0, 1], apply
  // 6x^5 - 15x^4 + 10x^3 (in Horner form) and scale back to [0, 255]
  x /= 255f;
  return (int) ((10 - (15 - 6 * x) * x) * x * x * x * 255);
}

Changing the last line in bilinear() to:

  return bsat(a) << 24 | bsat(r) << 16 | bsat(g) << 8 | bsat(b);

results in an image with the high-frequency components smoothed out or filtered:

[figure: bilinear interpolation with smoothing]

Much nicer. In fact, I now understand why there are so many interpolation options for resizing an image in Adobe Photoshop®. In the case of bicubic interpolation I quickly learned that some sort of clipping and/or rescaling is vital, since the "tension" given by the derivatives can make the interpolated values overshoot:

[figure: bicubic interpolation with channel overflow errors]

This is easily seen by masking out every channel but one:

[figure: bicubic interpolation with channel overflow errors, red channel]

In fact, a little experimentation shows that interpolated values can go as low as −72 or as high as 327! The worst outliers are, of course, the result of interpolating high-frequency matrices of the form [[0 1 1 0] [1 0 0 1] [1 0 0 1] [0 1 1 0]]. Local rescaling is not a good choice, in my opinion, since it would wash out all channels equally; in particular, it would make images more transparent than they really are. The obvious choice is clipping, although doing that without filtering introduces artifacts (see the worked check after the next figure):

[figure: bicubic interpolation with clipping]
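
As a worked check of those bounds, assuming cubic() is the Catmull-Rom interpolator of the earlier parts (my assumption; the post does not restate it here): for the matrix [[0 1 1 0] [1 0 0 1] [1 0 0 1] [0 1 1 0]] at t = u = 1/2, the four row interpolations give 1.125, −0.125, −0.125 and 1.125, and interpolating that column gives −0.28125, which scaled by 255 is about −72; since the interpolation weights sum to 1, the complementary matrix overshoots symmetrically to 255·1.28125 ≈ 327, matching the figures above.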

Using the bsat() above produces the best results:

[figure: bicubic interpolation with clipping and smoothing]

The final code for the ARGB bicubic interpolator is:

static int bicubic(final float t, final float u,
                   final int c00, final int c01, final int c02, final int c03,
                   final int c10, final int c11, final int c12, final int c13,
                   final int c20, final int c21, final int c22, final int c23,
                   final int c30, final int c31, final int c32, final int c33) {
  final int a00=(c00>>24)&0xff, r00=(c00>>16)&0xff, g00=(c00>>8)&0xff, b00=c00&0xff;
  final int a01=(c01>>24)&0xff, r01=(c01>>16)&0xff, g01=(c01>>8)&0xff, b01=c01&0xff;
  final int a02=(c02>>24)&0xff, r02=(c02>>16)&0xff, g02=(c02>>8)&0xff, b02=c02&0xff;
  final int a03=(c03>>24)&0xff, r03=(c03>>16)&0xff, g03=(c03>>8)&0xff, b03=c03&0xff;
  final int a10=(c10>>24)&0xff, r10=(c10>>16)&0xff, g10=(c10>>8)&0xff, b10=c10&0xff;
  final int a11=(c11>>24)&0xff, r11=(c11>>16)&0xff, g11=(c11>>8)&0xff, b11=c11&0xff;
  final int a12=(c12>>24)&0xff, r12=(c12>>16)&0xff, g12=(c12>>8)&0xff, b12=c12&0xff;
  final int a13=(c13>>24)&0xff, r13=(c13>>16)&0xff, g13=(c13>>8)&0xff, b13=c13&0xff;
  final int a20=(c20>>24)&0xff, r20=(c20>>16)&0xff, g20=(c20>>8)&0xff, b20=c20&0xff;
  final int a21=(c21>>24)&0xff, r21=(c21>>16)&0xff, g21=(c21>>8)&0xff, b21=c21&0xff;
  final int a22=(c22>>24)&0xff, r22=(c22>>16)&0xff, g22=(c22>>8)&0xff, b22=c22&0xff;
  final int a23=(c23>>24)&0xff, r23=(c23>>16)&0xff, g23=(c23>>8)&0xff, b23=c23&0xff;
  final int a30=(c30>>24)&0xff, r30=(c30>>16)&0xff, g30=(c30>>8)&0xff, b30=c30&0xff;
  final int a31=(c31>>24)&0xff, r31=(c31>>16)&0xff, g31=(c31>>8)&0xff, b31=c31&0xff;
  final int a32=(c32>>24)&0xff, r32=(c32>>16)&0xff, g32=(c32>>8)&0xff, b32=c32&0xff;
  final int a33=(c33>>24)&0xff, r33=(c33>>16)&0xff, g33=(c33>>8)&0xff, b33=c33&0xff;
  final float a = cubic(u, cubic(t, a00, a01, a02, a03), cubic(t, a10, a11, a12, a13),
                           cubic(t, a20, a21, a22, a23), cubic(t, a30, a31, a32, a33));
  final float r = cubic(u, cubic(t, r00, r01, r02, r03), cubic(t, r10, r11, r12, r13),
                           cubic(t, r20, r21, r22, r23), cubic(t, r30, r31, r32, r33));
  final float g = cubic(u, cubic(t, g00, g01, g02, g03), cubic(t, g10, g11, g12, g13),
                           cubic(t, g20, g21, g22, g23), cubic(t, g30, g31, g32, g33));
  final float b = cubic(u, cubic(t, b00, b01, b02, b03), cubic(t, b10, b11, b12, b13),
                           cubic(t, b20, b21, b22, b23), cubic(t, b30, b31, b32, b33));
  return bsat(a) << 24 | bsat(r) << 16 | bsat(g) << 8 | bsat(b);
}
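
As an aside of mine (not a change the post makes): the sixteen-fold unpacking could be folded into a loop over the four channel shifts, trading some explicitness for brevity. A sketch, assuming the same cubic() and bsat() as above, with the stencil passed as an array:

static int bicubicLooped(final float t, final float u, final int[] c) {
  // c holds the 4x4 stencil in row-major order
  int packed = 0;
  for (int shift = 24; shift >= 0; shift -= 8) {   // A, R, G, B in turn
    final float[] rows = new float[4];
    for (int i = 0; i < 4; i++)
      rows[i] = cubic(t, (c[4*i]   >> shift) & 0xff, (c[4*i+1] >> shift) & 0xff,
                         (c[4*i+2] >> shift) & 0xff, (c[4*i+3] >> shift) & 0xff);
    packed |= bsat(cubic(u, rows[0], rows[1], rows[2], rows[3])) << shift;
  }
  return packed;
}

I have not measured this variant; the fully unrolled version keeps everything in locals, which is presumably why it is preferred here.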

Quite a mouthful, but I arrived at it by methodical and meticulous columnar editing. Now everything is ready to tackle the problem of hoisting the conditional out of the inner loop and deriving three different, fully streamlined interpolators. Until next time.