Saturday, 20 August 2016

On type level programming in Haskell: Storing constraints

In the previous article, I described the difference between types and constraints in the Haskell type system, with the aim of being able to create a value container, similar to the standard Dynamic type, except that we can use it with a type class (which is a constraint, not a type) without needing to extract the value itself (which we can only do if we know its type). Essentially, we need a way to take a constraint and store it (or, to use the term commonly used for this operation, reify it). But in order to work with Haskell type constraints more directly, we first need to examine what they actually are at a lower level.

Type classes and dictionaries

As we discovered in the last article, a type class is a constraint on a type, requiring it to implement a certain set of functions. These functions' types are specialised to the class's type argument: for instance, when we provide an instance of the Eq type class for a new type MyType, the (==) operator's type changes from Eq a => a -> a -> Bool to the much simpler MyType -> MyType -> Bool. But the question that remains is how we find those functions when we need to invoke them. Consider this code:

equalYesOrNo :: Eq a => a -> a -> String
equalYesOrNo first second = case first == second of
                               True -> "Yes"
                               False -> "No"

This is a polymorphic function that will work with any argument type as long as it has an implementation of the Eq type class. But each of those types has its own implementation of (==), so how do we know which one to call? One possible answer is to create a new version of the function for each type it is used with, but in a language with recursive type constructors this isn't really an option, as we can't necessarily know in advance what types will exist in the program; consider, for example, a type data Nat = Z | S Nat which (with the -XDataKinds extension) can be used to build arbitrarily large types in a program like this:

{-# LANGUAGE DataKinds, ScopedTypeVariables, KindSignatures #-}

import Data.Proxy

data Nat = Z | S Nat

class NatToInt (nat :: Nat) where
    natToInt :: proxy nat -> Int

instance NatToInt Z where
    natToInt _ = 0
instance NatToInt n => NatToInt (S n) where
    natToInt _ = 1 + natToInt (Proxy :: Proxy n)

f :: forall (nat :: Nat) . NatToInt nat => Proxy nat -> IO Int
f p = readLn >>= \ x -> if x == 1 then f (Proxy :: Proxy (S nat))
                                  else return $ natToInt p

So we need a way of passing a value's type information to a polymorphic function so that the function can select the right instance when calling its class functions. The way Haskell does this is to use a structure called a dictionary that contains references to the appropriate functions; one such dictionary is passed to the function as an extra parameter for each class constraint in its type.
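
To make this concrete, here is a rough sketch of the kind of translation that happens behind the scenes; the EqDict record, eqDictInt, and equalYesOrNo' are names invented purely for illustration, and GHC's actual generated code differs in detail:

-- a hand-rolled "dictionary" for Eq: just a record holding the class's functions
data EqDict a = EqDict { eqFn :: a -> a -> Bool }

-- the dictionary built from Int's Eq instance
eqDictInt :: EqDict Int
eqDictInt = EqDict (==)

-- equalYesOrNo with its Eq constraint turned into an explicit dictionary parameter
equalYesOrNo' :: EqDict a -> a -> a -> String
equalYesOrNo' dict first second = case eqFn dict first second of
                                      True -> "Yes"
                                      False -> "No"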

Dictionaries get passed around and stored whenever a value of a polymorphic type is passed around or stored. So you can put one into a data structure like this (using GADT syntax, -XGADTs):

data ShowBox where
  ShowBox :: Show a => a -> ShowBox

You can then use pattern matching on the structure to implicitly bring the dictionary back into scope:

showShowBox :: ShowBox -> String
showShowBox (ShowBox a) = show a

Haskell still doesn't know the type of the value 'a' in that last function, but it doesn't need to because it has the type's dictionary for its Show instance stored away quietly in the ShowBox value.
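
For example, this lets us build a heterogeneous list in which every element carries its own Show dictionary (a small usage sketch; boxes and showAll are invented names):

boxes :: [ShowBox]
boxes = [ShowBox (42 :: Int), ShowBox "hello", ShowBox True]

showAll :: [ShowBox] -> [String]
showAll = map showShowBox    -- showAll boxes == ["42", "\"hello\"", "True"]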

Note that this fails:

data EqBox where
  EqBox :: Eq a => a -> EqBox
boxesEqual :: EqBox -> EqBox -> Bool
boxesEqual (EqBox a) (EqBox b) = a == b

The reason for this is that Haskell doesn't know whether or not the values 'a' and 'b' have the same type, so it can't pass either of them to the (==) implementation from the other's dictionary.
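
By contrast, if both values are stored in the same box, they are forced to share a single type variable, so the comparison typechecks (a quick sketch; EqPairBox is invented here just to illustrate the point):

data EqPairBox where
  EqPairBox :: Eq a => a -> a -> EqPairBox

pairEqual :: EqPairBox -> Bool
pairEqual (EqPairBox a b) = a == b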

So, by putting a constraint on a structure, we can store the constraint for later use. But there's one more thing that we'd like to be able to do: define a structure that can store any constraint.

Using ConstraintKinds

The ConstraintKinds extension allows us to do this. Using ConstraintKinds, we can treat a constraint as a type (sort of, at least: it's a type that no value can ever have, but it can be tracked by the Haskell type system, stored in type variables, and so on). Specifically, a type class like Show becomes a type constructor of kind * -> Constraint: it is applied to a type of values (kind '*') to give a result of kind 'Constraint', and anything of kind 'Constraint' can go into a context, i.e. on the left of '=>' in a type signature. (The Constraint kind itself is exported by Data.Kind, or by GHC.Exts on older compilers.) So we can now extend our Box ADT to accept any type class constraint:

{-# LANGUAGE ConstraintKinds, GADTs, KindSignatures #-}
import Data.Kind (Constraint)

data AnyBox (c :: * -> Constraint) where
  AnyBox :: c a => a -> AnyBox c

This now allows us to store and use any type class constraint we want. The showShowBox example could now become:

showAnyBox :: AnyBox Show -> String
showAnyBox (AnyBox a) = show a
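
And because the constraint is now a parameter, the same box works with completely different classes. For instance (a sketch; incrementAnyNum is an invented name), a box constrained by Num can have arithmetic done to its contents without ever learning their type:

incrementAnyNum :: AnyBox Num -> AnyBox Num
incrementAnyNum (AnyBox n) = AnyBox (n + 1)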

Now there's only one thing left to consider before we can replicate Dynamic with the ability to store a type class constraint for the value: how do we get the value back out of a Dynamic if the Haskell compiler forgets about the type when it gets stored? We'll look at that next time.

Sunday, 14 August 2016

On type-level programming in Haskell: types and constraints

When I started working on the Kind interpreter in Haskell, I had a few goals:

  • Get the interpreter working with minimal hassle and running efficiently, both of which Haskell was reputed to be a good language for.
  • Refresh my functional programming skills, which had been a little neglected for a few years (all right, decades).
  • Learn about Haskell's type system, which was reputed to be one of the best in a mainstream language.

While working on the object system for Kind's interpreter, I got a perfect chance to explore the third of those goals, because I wanted to do something that is perhaps a little harder in Haskell than it should be: store data of an unknown type in a variable, and access it later without needing to guess its exact type.

As an example, in a typical object-oriented language, I can do something like this:

class MyDataObject {
   Showable value;   // interface defines 'show()' method that returns a string
   // constructor/accessors go here
}

MyDataObject obj1 = new MyDataObject (32);

out.print (obj1.getValue().show());

Note that the last line there does not know or need to know that the value stored in obj1 is an integer; it could in fact be anything that supports the appropriate interface.

It turns out that doing this in Haskell is hard. There is a standard type, Dynamic (in Data.Dynamic), that is somewhat similar to MyDataObject in that it allows you to package up an arbitrary value and forget its actual type at compile time, but the only way to get the value back out again is if you know the value's type. And Haskell's equivalent to the Showable interface I used above isn't a type, but a constraint. Which I now need to explain...

Types versus Constraints

In an object-oriented language, interfaces and classes each have their own type, and those types have subtype and supertype relationships to each other. We can happily convert a value from one type to its supertypes (although the other direction either requires a runtime check [static languages], or could fail at runtime if an unsupported operation is used [dynamic languages]). Everything is a type.

In Haskell, things are a little more complicated. To a first approximation, it supports three entirely distinct kinds of type:

  • Data types - things that can have values. In the typical OO language, classes, interfaces, primitives, structures, enumerations, delegates, events, or just about anything else you can think of is a data type. In Haskell, data types are merely the surface level. We also have:
  • Type constructors. These are like functions that return types. They are vaguely equivalent to a generic type in an OO language, except that they have a first class existence of their own, and can never actually be *used* without full instantiation (cf Java, where I can have an ArrayList<String> and simply assign it to a variable of type ArrayList -- in Haskell you'd get a kind error if you tried to do that).
  • Type constraints, of which there is only one variety in vanilla, un-extended Haskell: the type class. These are like predicates on types that state whether a particular set of functions is available for them (a short GHCi illustration follows this list).
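
To see the three categories directly, we can ask GHCi for the kinds of a few familiar names (output as printed by GHC 8-era compilers):

  ghci> :kind Int
  Int :: *
  ghci> :kind Maybe
  Maybe :: * -> *
  ghci> :kind Show
  Show :: * -> Constraint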

That last category, the type class, is critical, and it is something a lot of novice Haskell programmers miss. When we see some code like this:

class MyTypeClass t where
    someFunction :: t -> t
    anotherFunction :: t -> t -> Int

data Imp = Imp Int

instance MyTypeClass Imp where
    someFunction (Imp i) = Imp (i + 1)
    anotherFunction (Imp a) (Imp b) = a - b

f :: MyTypeClass t => t -> t -> t
f a b | anotherFunction (someFunction a) b == 0 = a
      | otherwise = f (someFunction a) b

we might look at this from the background of spending most of our time in object-oriented environments and say that MyTypeClass is like an interface or abstract class. The familiar name "class" reinforces that thought. And on the surface it is a bit like that; you can use it to do some of the same things. But in its details it is different.

One thing to note is that the functions in MyTypeClass do not have the same type in different instances. In the instance for MyTypeClass Imp, someFunction for example has type Imp -> Imp, while for another instance, perhaps MyTypeClass Gnome, the type would be different (in this case, Gnome -> Gnome). In an object-oriented language, the function would be specialised for the first argument, but the remaining arguments and the result would not be specialised. In such a situation, the types of the functions would look something like this:

  someFunction :: Imp -> MyTypeClass
  anotherFunction :: Imp -> MyTypeClass -> Int

Note that this isn't valid Haskell. The reason is that MyTypeClass isn't a type: because of the way type classes are implemented (which I'll discuss in more detail later on), Haskell doesn't know how to handle values generically by class alone. In every case, the type of a value received by or returned from a function must be known by the caller (and indeed, where a value is returned, the caller gets to specify what type is returned).
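
That last point is easy to see with a class method like minBound from the standard Bounded class: the caller's choice of result type decides which instance is used (a small illustration; the function names are arbitrary):

smallestInt :: Int
smallestInt = minBound     -- selects the Bounded Int instance

smallestChar :: Char
smallestChar = minBound    -- selects the Bounded Char instance, i.e. '\NUL'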

It turns out that the fact that we can require *all* of the values mentioned in a type class to be exactly the same type is particularly useful. Without it you can't, for example, define a generic Monad type, which means that all of the abstract operations that can be performed on Monads in a language like Haskell (e.g. sequencing operations, collecting the results of lists of operations, folding lists of operations, applying monadic actions to lists of values and collecting the results, lifting functions to operate on monadic types, and so on) must be implemented separately for each individual monadic type in a traditional object-oriented language. Less esoterically, you can implement type-safe comparison without having to worry about two values of mutually incomparable types turning up at runtime: that simply can't happen with a comparison type class, because the compiler will catch the problem.
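
For example, with the standard Ord class both arguments of a comparison are forced to be the same type, so a mismatched comparison is rejected at compile time (a small sketch; the function name is arbitrary):

smaller :: Ord a => a -> a -> a
smaller x y = if x <= y then x else y

-- smaller (3 :: Int) "four"   -- rejected at compile time: Int and String
--                             -- cannot both be the 'a' in Ord a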

But this flexibility does come at a cost: type classes cannot soundly be treated as types, because different instances' functions have incompatible types and so cannot be passed to one another. Make them compatible, and we lose the flexibility that allows a type-safe Monad or Comparable.

So why can't Dynamic return a value of a type class?

Now, we have the background to get back to our original point. To get the value out of a Dynamic container, we need to specify its type. It looks like this:

  frobnicateString :: Dynamic -> String
  frobnicateString dyn = case (fromDynamic dyn) :: Maybe String of
                             Nothing -> "wasn't a string"
                             Just str -> reverse str

In the case statement, we force fromDynamic to attempt to return a Maybe String (i.e. either a string or an indicator that no string is available). Haskell could have worked out that this was what I was trying to do, but I've left the type signature in for clarity's sake. fromDynamic checks whether the value passed to it contains a string; if it does, it returns it in a "Just" wrapper, otherwise it returns "Nothing". But we can't specify a type class for it to return, because type classes aren't types. Haskell needs to know what the type of a value is in order to work with it; that's a basic constraint of how the language works. In the next post, I'll start looking at how we can solve this problem.
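
For example, wrapping values up with toDyn (also from Data.Dynamic) and feeding them to the function above behaves like this (a quick GHCi sketch):

  ghci> frobnicateString (toDyn "hello")
  "olleh"
  ghci> frobnicateString (toDyn (42 :: Int))
  "wasn't a string"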

Pratt Parsing in Parsec. Perfect.

While working on the Kind interpreter in Haskell, using the Parsec parser combinator library, I came upon a difficulty.

Parsec has a built-in expression parser generator, but unfortunately it has a problem:

Prefix and postfix operators of the same precedence can only occur once (i.e. --2 is not allowed if - is prefix negate).

This restriction sort-of makes sense for '-', as there is little reason to negate twice, but in languages that have implicit conversion to bool (which Kind will have), !!expr is a common idiom for converting a value to bool without inverting it, and the Parsec expression parser cannot understand it. Likewise, for C-style pointer handling, being able to write **p (where p is a pointer to a pointer to some value) is pretty much essential. As far as I can see, the Parsec expression parser is therefore not useful for anything other than trivial languages. I needed a replacement.

The traditional solution to writing infix expression parsers with precedence is to break your grammar into a series of productions, one per precedence level, each of which can contain either operators of that precedence or a term of the next level down, with atomic values at the innermost level. For example, you might have:

  expr ::= expr ('+'|'-') expr                -- 1
  expr ::= term1                              -- 2
  term1 ::= term1 ('*'|'/') term1             -- 3
  term1 ::= term2                             -- 4
  term2 ::= variable | literal | '(' expr ')' -- 5

An example expression might be parsed like this:

  5 + 17 * 2
  [term2:5] + 17 * 2                 -- apply production 5
  [term1:5] + 17 * 2                 -- apply production 4
  [expr:5] + 17 * 2                  -- apply production 2
  [expr:5 +] 17 * 2                  -- partially apply production 1
  [expr:5 +] [term2:17] * 2          -- apply production 5
  [expr:5 +] [term1:17] * 2          -- apply production 4
  [expr:5 +] [term1:17 *] 2          -- partially apply production 3
  [expr:5 +] [term1:17 *] [term2:2]  -- apply production 5
  [expr:5 +] [term1:17 *] [term1:2]  -- apply production 4
  [expr:5 +] [term1:34]              -- complete production 3 & calculate result
  [expr:5 +] [expr:34]               -- apply production 2
  [expr:39]                          -- complete production 1

This works, and you can implement it using Parsec, but there are problems:

  • If you have many precedence levels, you need many different production types and it makes your parser confusing.
  • Depending on the algorithm used, your parser might not cope with left-recursive definitions, so you may have to adjust how the grammar is laid out to suit your parser's internal structure. Parsec, being a top-down (recursive descent) parser, cannot handle left-recursive grammars directly, so the nice logical grammar I give above would need to be made more complicated to fix that.
  • Your parser may spend a lot of time just translating terms between different precedence levels.

The Pratt parser algorithm, also known as "top-down operator precedence" parsing, handles expressions with infix operators by using a table associating operators with their precedences, rather than having separate productions for them. I won't go into the details of the algorithm here, as there are already many excellent articles available; this one is the easiest to understand, in my opinion. Surprisingly, there didn't seem to be an implementation of a Pratt parser on Hackage, the Haskell repository for useful libraries. So I set about making one... and the result is here.
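
To give a flavour of the core idea without repeating those articles, here is a minimal, self-contained sketch of the central loop (a precedence-climbing variant over an already-tokenised input); all of the names here are invented for illustration and have nothing to do with the library's API described below:

data Token = TNum Int | TOp Char deriving Show
data Ast   = Lit Int  | Node Char Ast Ast deriving Show

precOf :: Char -> Int
precOf '+' = 50
precOf '-' = 50
precOf '*' = 70
precOf '/' = 70
precOf _   = 0

-- parse an expression containing only operators that bind at least as
-- tightly as minPrec, returning the tree and the unconsumed tokens
parseAt :: Int -> [Token] -> (Ast, [Token])
parseAt minPrec toks = loop lhs rest
  where
    (lhs, rest) = parseAtom toks
    loop left (TOp op : more)
        | precOf op >= minPrec =
            -- for a left-associative operator, the right hand side may only
            -- contain strictly tighter-binding operators
            let (right, more') = parseAt (precOf op + 1) more
            in loop (Node op left right) more'
    loop left remaining = (left, remaining)

parseAtom :: [Token] -> (Ast, [Token])
parseAtom (TNum n : rest) = (Lit n, rest)
parseAtom ts              = error ("number expected at " ++ show ts)

-- parseAt 0 [TNum 5, TOp '+', TNum 17, TOp '*', TNum 2]
--   == (Node '+' (Lit 5) (Node '*' (Lit 17) (Lit 2)), [])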

The interface is that of a parser combinator: it takes a number of functions that parse individual parts of an expression and produces a function that uses them to parse an entire expression. The package includes an example that parses a simple language into a parse tree and then prints the tree. The example should be fairly simple to understand; here's the gist of it:

-- apply optional trailing whitespace to a parser
wsopt :: ContentStripper String () Identity ()
wsopt = optional spaces

This defines a parser that will ignore any spaces it encounters. This is used to discard any irrelevant input between the tokens of the expression; a more advanced language might also include the ability to strip comments at this point.

-- parse an integer value (with optional trailing whitespace)
parseIntValue :: ExprParser 
parseIntValue = do
    x <- many1 digit
    wsopt
    return (IntValue (read x))

Integer literals: parse one or more digits, skip any whitespace, then return the digits as an integer literal value (wrapped in a container for the parse tree).

-- parse an operator symbol
operator :: OperatorParser String () Identity String
operator = try (string "ifTrue") <|> many1 (oneOf "<>:@~\\/|!$%^&*-_=+")
           <|> string "(" <|> try (string "let")

Operators are either the string "ifTrue", a sequence of any of a set of symbols, an open bracket, or the string "let" (which is used for expressions like let id = expr1 in expr2). Note that we're taking a broad view on what constitutes an "operator" here.

There are other parsers for more involved parts of the language (like the parser for handling "let" expressions as mentioned above), but we'll skip those for now, and jump to a couple that you'll probably want something similar to for just about any language you implement with this library:

-- parse a binary operator with standard semantics
parseStdOp :: LeftDenotation String () Identity Expr String
parseStdOp (OperatorInfo name precedence _) lhs pex = do
    rhs <- pex precedence
    return (BinOp lhs name rhs)

When you register a binary operator with the library, you provide a function that is called to process it. While it can do anything you want, in most cases you'll just want to parse the right hand side of the expression and then wrap it up with the left hand side in a parse tree node, which is what the code above does. The OperatorInfo record is a copy of the information we provided when the operator was registered; we use it here to write one generic function that can handle any operator (if you wrote a separate function for each operator, you wouldn't need it). lhs is the already-parsed tree for everything to the left of the operator. pex is a function generated by the Pratt parser that parses the right hand side of the operator, stopping when it reaches an operator whose precedence is too low to bind inside that right hand side. We call pex with our operator's precedence, capture the result, and return a tree node containing that subtree, the original left-hand subtree, and the operator that was found.
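
For contrast, a handler written for one specific operator could ignore the OperatorInfo record and hard-code everything; this sketch (the name parsePlusOp and its hard-coded precedence are invented here, reusing only the signatures already shown above) builds the same BinOp node for '+':

parsePlusOp :: LeftDenotation String () Identity Expr String
parsePlusOp _ lhs pex = do
    rhs <- pex (LAssoc 50)           -- parse the right hand side at '+''s precedence
    return (BinOp lhs "+" rhs)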

-- parse '(' <expression> ')'
parseBracketExpr :: NullDenotation String () Identity Expr
parseBracketExpr pex = between
    openBracket closeBracket
    (pex (LAssoc 0))
    where openBracket = char '('
          closeBracket = char ')'

Bracketed expressions are parsed as prefix operators (which are essentially the same thing as infix operators except they don't have a left-hand side subtree), but we override the precedence we parse to so that any valid operator is bound into the resulting subexpression. We check that this is followed by a close bracket, and return the generated expression tree. Many languages are likely to include this function verbatim.

-- parse terms
parseTerm :: NullDenotation String () Identity Expr
parseTerm pex = 
    parseIntValue <|>
    parseVarRef <|>
    parseBracketExpr pex

We need to give the parser generator a parser that can handle any individual term of the language. In this case, terms are either integers, variable references, or bracketed sub-expressions.

-- operator data
operatorList :: [OperatorInfo String () Identity Expr String]
operatorList = [
    OperatorInfo "-" (LAssoc 50) parseStdOp,
    OperatorInfo "+" (LAssoc 50) parseStdOp,
    OperatorInfo "|" (LAssoc 40) parseStdOp,
    OperatorInfo "*" (LAssoc 70) parseStdOp,
    OperatorInfo "&" (LAssoc 60) parseStdOp,
    OperatorInfo "/" (LAssoc 70) parseStdOp,
    OperatorInfo "<" (LAssoc 30) parseStdOp,
    OperatorInfo ">" (LAssoc 30) parseStdOp,
    OperatorInfo "<=" (LAssoc 30) parseStdOp,
    OperatorInfo ">=" (LAssoc 30) parseStdOp,
    OperatorInfo "||" (LAssoc 10) parseStdOp,
    OperatorInfo "&&" (LAssoc 20) parseStdOp,
    OperatorInfo "^" (RAssoc 90) parseStdOp,
    OperatorInfo "ifTrue" (RAssoc 5) parseIfOp]

prefixOperatorList :: [PrefixOperatorInfo String () Identity Expr String]
prefixOperatorList = [
    SimplePrefixOperator "-" bindPrefixOp,
    SimplePrefixOperator "!" bindPrefixOp,
    PrefixParserOperator "let" parseLetOp]

Finally, we can produce a table of all the operators understood by our language. Note that "ifTrue" is an infix operator that handles more than one right hand subexpression (it has the form expr ifTrue expr1 else expr2) and "let" is a prefix operator that likewise handles multiple subexpressions (as described above). These show more advanced ways of using the library than just standard prefix & infix code.

With this table in hand, we can generate a parser that can handle the entire language of expressions:

parser :: ExprParser
parser = buildPrattParser operatorList prefixOperatorList wsopt operator parseTerm

Using the resulting parser is as simple as using any Parsec parser.

parseToText :: ExprParser -> String -> String
parseToText parser input = case parse parser "input" input of
       Left error -> show error
       Right expr -> PP.render $ pPrint expr

main :: IO ()
main = do
   input <- getContents
   putStrLn (parseToText parser input)

The entire language, including parse tree structure and pretty printer, takes only 107 lines of code (excluding blank lines and comments).

Thursday, 4 August 2016

About the Kind Language

The Kind Programming Language is an object-oriented / functional multi-paradigm language currently in development.  Its aims are:
  • to be a general purpose language specifically optimized for larger applications;
  • to provide a minimal set of core capabilities that include sufficiently powerful mechanisms to allow useful and novel end-user-facing functions to be developed as libraries rather than requiring language extensions;
  • to allow efficient code to be developed without placing undue burdens on the programmers who do not need that efficiency; and
  • to provide a useful compromise that (hopefully) will satisfy both camps in the debate between static and dynamic languages.
To achieve these, specific features of the design that are planned include:
  • facilities for declarative programming techniques, including declarative exception handling, transaction management, and resource ownership handling;
  • advanced object-oriented features including multimethods, declarative delegation, a fully realised meta-object protocol, and mixins;
  • transparent parametric polymorphism with automatically-inferred constraints, allowing a programming style that will rarely require type annotations;
  • metaprogramming integrated into the language to allow for compile-time metaprograms to be written using a (usefully large) subset of the same language as the run-time program;
  • a metaprogramming system that allows metaprograms to manipulate the parse tree of the program; and
  • a type system with optional declarative resource ownership annotations to allow automatic exact reclamation of used resources when they are no longer required, but also allowing ownership to be left to the system to manage (e.g. via a garbage collector) when the programmer does not feel the need to use type annotations.
The Kind Programming Language is (as of August 2016) under active development and is not yet suitable for use (the current implementation allows the definition of functions with simple arithmetic, but does not yet have a working object system).  But keep reading for updates, examples, and discussions about features, implementation details, Kind's inspirations taken from other languages, and so on.