                                 Muldis::D
                                   TODO
---------------------------------------------------------------------------

Following is a summary of things that still need doing.  It is specific to
the Muldis D specification distribution only, and doesn't talk about things
that would go in other distributions, including implementations.  (But,
look at lib/Muldis/D/SeeAlso.pod for a list of actual or possible
implementations.)

Additionally, this list includes possible ideas to explore, which may or
may not prove to be good ideas to pursue.

The following list is loosely ordered by priority, and is organized into
groups by approximate subject area, but list items may
actually be addressed in a different order.  There is no specific
timetable for these items; they are simply to be done "as soon as
possible".

* Generally speaking, make a new release to CPAN once every week, assuming
the progress is non-trivial, so there are regular public snapshots with
nicely rendered documentation.

----------

* Update the development status of the Muldis D language spec from
"pre-alpha" to "alpha" (but don't set the version to 1.0.0 yet) only when
the Muldis Rosetta Example Engine reference implementation has fully
implemented the language core, or a significant and computationally
complete working subset thereof, and so the language spec is then
considered sufficiently complete with corner cases exposed; Muldis Rosetta
would also be updated to "alpha" status simultaneously.  There are no other
preconditions for considering either project to have "alpha" status.
Current estimate: circa Aug-Sep 2010.

* Preconditions for considering the Muldis D language spec to be either
"beta" or "released" status or "1.0" include:  a significant,
computationally complete working core or subset thereof as a Parrot hosted
language; a TAP-speaking test suite with significant feature coverage; a
serious level of post-alpha-status design input solicited from other
interested parties; and implementations over multiple SQL DBMSs.  Current
estimate: end of 2010 or early 2011.

----------

* In all 3 STD.pod, add code examples for each of these 4 material kinds:
scalar-type, domain-type, subset-type, mixin-type.

* In all 3 STD.pod, complete the description text, defining interpretation
in PTMD_STD and structure in the 2 Perl-STD, for each of these 7 material
kinds: scalar-type, tuple-type, relation-type, domain-type, subset-type,
mixin-type, subset-constraint.

* In all 3 STD.pod, populate the entire pod sub-section for each of these 2
material kinds, to provide concrete grammar, description text, and code
examples: distrib-key-constraint, distrib-subset-constraint.

----------

* Change the basic exception throwing mechanism from a function/procedure
to its own expression/statement node kind.  Call the new node kind "fail"
or "failure" or "throw" or "raise" or something.  The "fail" node has a
child expression node or references a variable node which defines an
Exception value.  Simply evaluating a "fail" expression node will throw the
exception, so a "fail" expr node is expected to only be the child of a
short-circuit expression like ??!!.
- Add a "fail" term, which throws a generic/default Exception value,
and/or a tight-binding "fail" prefix-keyword which takes an Exception arg;
that term/prefix is the concrete syntax for the new fail node.
- The "assertion" function can then go away; instead of writing
[$foo asserting $foo != 0], say [$foo = 0 ?? fail !! $foo].
- Add a few simple functions that each result in a kind of generic
Exception value.  At least have a niladic one for the most gen exception.
Then one could write [<cond-expr> ?? gen_exception() !! <expr-when-ok>].
- The treated() function then is just a wrapper over ??!! + isa.
- The fail() procedure will go away, replaced with a term/keyword also,
which maps to the "fail" statement node.
- Maybe use 'fail' for niladic term and 'raise' for prefix term?
- New keyword spellings:
    - failure
    - raised <expr>
    - fail
    - raise <var>
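
As a rough cross-language analogue (a minimal Python sketch, not Muldis D;
the names are invented for illustration), a "fail" that is an expression
rather than a procedure might behave like this:

    # Illustrative sketch only: "fail" usable as an expression, meant to
    # appear as one arm of a short-circuiting conditional like ??!!.
    def fail(exc=None):
        raise exc if exc is not None else Exception("generic failure")

    def checked(foo):
        # analogue of [$foo = 0 ?? fail !! $foo]: the fail arm is only
        # evaluated (and so only throws) when the condition selects it
        return fail() if foo == 0 else foo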

* Add support for routine parameters to have aliases, that is, for a named
parameter to be able to bind with a named argument where the argument may
have several possible names.  One use for this would be to support
parameters where it is desired to refer to them within their routine using
one name, but to use a different name in the argument, such as because the
latter is shorter or reads better (the Perl 6 spec should have some
examples of this).  Another use for this is to provide better support for
mixtures of arbitrary numbers each of positional and named routine
arguments; any parameter for which a positional argument would be
reasonable would have 2 names, where one is an integer and one is text.  All
Muldis D grammars would be updated to no longer consider 'topic' and
'other' as special, which is a contrived notion, and instead consider
'0','1',... special.  And so, for all system-defined or user-defined
routines, any `op(foo,&bar,baz)` would be parsed into the same thing as
`op("0"=>foo,&"1"=>bar,"2"=>baz)`, and `.name` would be `"0".name`.  Now it
will so happen that "topic","other" will be commonly used in parameter
names, typically paired with "0","1" but we can now be a lot freer to name
parameters something more descriptive, such as "addends", and not
artificially make them topic/other simply so they support positional
syntax.  An idea for declaration syntax when aliases exist is to use the
"|" char; eg `function foo (Int <-- topic|"0" : Int, other|"1" : Int)`.
Of course, this complexity is only in param lists; arg lists are unchanged
and still are plain tuples with a single name per attr/arg.
For simplicity, a single param name will be more important than the others,
and only that would be its "expression node name" or "variable name" within
its routine, by which it must be referenced; therefore, the current
system catalog for declaring parameters can remain unchanged, and new
rtn-decl-type rtn-heading-attrs can be added to declare aliases.
Largely for flexibility, and correctness where they don't make sense,
parameters will never automatically have a number alias, but rather only
when the routine definer explicitly gives it one.
Of course, these aliases only apply to regular params, not global params.
One result of this change is that the Muldis D grammars will no longer
consider positional ro and rw args in separate spaces such that they can
appear in either order; now all positional args must be in the correct
mixed relative order, as there is only one "0", not one per ro and rw.
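
For illustration only, here is a loose Python model (invented names) of the
intended parse rule and alias binding:

    # Sketch: op(foo,&bar,baz) parses into the same thing as
    # op("0"=>foo,&"1"=>bar,"2"=>baz); named arguments pass through.
    def normalize_args(positional, named):
        args = {str(i): v for i, v in enumerate(positional)}
        for name, value in named.items():
            if name in args:
                raise ValueError("duplicate argument name: " + name)
            args[name] = value
        return args

    def bind_param(aliases, args):
        # a parameter binds when exactly one of its aliases is supplied,
        # eg aliases ("topic", "0") for the first positional parameter
        hits = [n for n in aliases if n in args]
        if len(hits) > 1:
            raise ValueError("argument given via multiple aliases")
        return args[hits[0]] if hits else None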

* Add special syntax for more ops:
    - ?#foo - "has 1+ elements" - is_not_empty(foo)
    - !#foo - "has zero elements" - is_empty(foo)
    - foo :=!# - assign_empty(&foo)
... and maybe rename underlying routines in the process.

* Update the mixins feature to add support for mixins that define
attributes that types can compose, whereby we support some approximation of
"specialization by extension" while still actually being just
"specialization by constraint".
Maybe also it could be said ...
A primary purpose of mixins is to help with managing software reuse, mainly
when multiple types have a number of attributes in common; a mixin can
define these and then the multiple types can compose that mixin.  A mixin
or type that composes a mixin can both add additional attributes of its own
to what the mixin defines, and the composer can add extra constraints over
the composed attributes, like forcing a subtype.
Maybe also do ...
Support delegation / 'handles'; for example:
    - Name explic delegate to Text attr
    - maybe Blob, Text explic delegate to String attr
    - a ColoredCircle would delegate to both Color and Circle attrs?
This will all take some work to get right; not /all/ of Rat/etc can be
substituted.  Probably *only* those operators that Rational/etc explicitly
declares can be delegated to Rat/etc by TAIInstant/etc.
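
A loose sketch of the 'handles' idea in Python terms (invented names;
Muldis D would express this declaratively in the mixin/type definitions):

    # Delegation sketch: forward only explicitly declared operators to
    # the attribute that can handle them, per the note above.
    class ColoredCircle:
        _CIRCLE_OPS = {"area", "circumference"}
        _COLOR_OPS = {"hue", "saturation"}

        def __init__(self, color, circle):
            self.color = color    # delegate color operators here
            self.circle = circle  # delegate geometry operators here

        def __getattr__(self, name):
            if name in self._CIRCLE_OPS:
                return getattr(self.circle, name)
            if name in self._COLOR_OPS:
                return getattr(self.color, name)
            raise AttributeError(name)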

* Replace many N-adic routines with dyadic ones, specifically
those whose definition is a repetition of a dyadic operation (so, 'sum' or
'join' etc yes but 'mean' no), which users then can invoke by way of a
reduction function if they want N-adic syntax.  Also let system catalog
store more information such as whether or not functions are commutative or
associative or idempotent or symmetric etc; likewise, the function def can
store what the operation's identity value is, if it has one, as meta-data,
useable when comm/assoc; the reduction func can read this using a
meta-programming function or something.  Reduction will fail if used on a
base func that doesn't define an identity if given an empty list.
The point of this change is to make the common dyadic case of N-adic
operators simpler, and also set a foundation for user-defined operators
that provide more information such that a compiler can be more effective
in optimizing them, or something.
The explicit/normal way, then, to indicate in code whether you want the
parser to produce a reduce op wrapper call rather than nested direct
invocations in the system catalog, is to just invoke the reduction
operator directly and explicitly pass an operand list; but the reduce op
would have special syntax, taking normal collection exprs, such as:
    [+] {5,23,5}
    [~] ['hello', 'world']
    [join] {order,inventory}
    [*] {1..5}
... or something.  Not using that would parse into nested dyadic calls
instead, though the compiler can still rearrange.
Once we do that, it's also simple to add hyper-operators, though arguably
these are redundant with 'map' or 'extension' etc.
Or this would be better for simplicity, given it won't be used as often,
and any dyadic infix function at all may be used, spelled the same way:
    reducing + {4,23,5}
    reducing ~ [...]
    reducing join {...}
    reducing * {...}
    reducing <nlx.lib.myfunc>(a=>3) {...}
... and so the regular operators can be parsed as usual.
Or maybe:
    reduced {4,23,5} using +
    reduced {...} using <nlx.lib.myfunc>(a=>3)
... but that might have an end-weight problem?
Or, still go symbolic like the first one, but use prefix notation so that
it works well with both symbolic and wordy or inline-defined operators:
    []+ {5,23,5}
    []~ ['hello', 'world']
    []join {order,inventory}
    []* {1..5}
    []<nlx.lib.myfunc>(a=>3) {...}
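
To make the identity-value meta-data idea concrete, here is a minimal
Python sketch (the catalog shape here is invented):

    # Sketch: a reduce wrapper that consults per-operator meta-data; it
    # fails on empty input when the base op declares no identity value.
    from functools import reduce

    OP_META = {
        "+": {"func": lambda a, b: a + b, "identity": 0},
        "*": {"func": lambda a, b: a * b, "identity": 1},
        "~": {"func": lambda a, b: a + b, "identity": ""},  # catenation
    }

    def reducing(op_name, operands):
        meta = OP_META[op_name]
        if not operands:
            if "identity" not in meta:
                raise ValueError(op_name + " has no identity value")
            return meta["identity"]
        return reduce(meta["func"], operands)

    # reducing("+", [5, 23, 5])         ==> 33
    # reducing("~", ["hello", "world"]) ==> "helloworld"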

* Furthering the above, add somewhat generalized support for what Perl 6
calls "meta" operators, at least in that we define and exploit several.
The general reducer above would be one of these.  Another is the negated
relational, whose syntax is putting ! or not- in front of any Bool-resu op.
Another is the assignment, putting := in front of any function.
For !, we can then eliminate all the "not" variants of any Bool-resulting
functions, so eg "x != y" parses into "not(is_identical(x,y))", same as if
they had said "!(x = y)".  As for the old intended purpose of all the not-
variants, which is to preserve the user's intent of how code should look,
we could simply have an alias for the not() function which is what is
parsed into when != is used, and the old not() is just parsed into when the
separate prefix op is used.  On the other hand, while lots of not- variants
would go away, we'll keep the alias-but-param-order-reversed dualities such
as less-than/greater-than and sub-superset; unlike these, what we're
eliminating would not result in losing track of which args are lhs/rhs.
A related change is that infix ops like ≠ or ⊈ would parse into not(foo())
even though they don't have the !; these would be aliases for the combos,
the same as Perl 6 has != as an alias for !==.
For :=, we can eliminate all the updaters that are just shorthands for
doing an op and assigning the result to one of the args.  And so a
"foo :=union bar" would parse to "assign(&foo,union(foo,bar))".  Once
again, an alias for assign() can exist which such combos are parsed into,
where the regular assign() is used when users write "foo := foo union bar".
Of course, despite Muldis D requiring operator combos where singles used to
work, we assume that implementations will be smart enough to, say, use a
single "!=" or "insert into foo ..." etc when it sees the combination, so
there is no performance loss.
Probably, any meta'd operator would have the same precedence as the base
operator that it is modifying.
Adding the hyper-meta may not be useful since we already have map()/etc;
or alternately it might be useful in avoiding some uses of map or extend
or substitute etc where users are just adding/defining one attr.
Or maybe hyper-meta would only be useful with Set/Array/Bag because the
general map/extend/etc would require naming the attribute explicitly.
As for ASCII vs Unicode etc, that preference is never encoded in the system
catalog, so when code would be generated from the system catalog, it would
be up to the generator's configuration for which versions are used.
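
A tiny sketch (Python tuples standing in for system catalog nodes; all
names invented) of how a parser might expand two of these meta operators:

    # Meta-operator expansion at parse time, per the rules above.
    def expand_negation(base_invocation):
        # "x != y"  ==>  not(is_identical(x,y)), same as "!(x = y)"
        return ("call", "not", [base_invocation])

    def expand_assignment(target, op_name, other_args):
        # "foo :=union bar"  ==>  assign(&foo, union(foo,bar))
        inner = ("call", op_name, [("var", target)] + other_args)
        return ("call", "assign", [("var", target), inner])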

----------

* Update the virtual attributes maps so there is a way to manually specify
a reverse function, so that virtuals no longer have to be either read-only
or updatable only via an automatically generated reverse function; the
latter might vary by implementation, which may be considered broken.  Note
that the reverse functions might have to be defined as per-tuple
operations, separately for insert/substitute/delete.

* Add new "material" kinds that define state constraints (address as simple
nlx.*.data.*), like type constraints but ref in reverse.

* Add new "material" kinds that define
transition constraints.  The initial version would simply take a
'before' and an 'after' state argument.  But it may be good to provide an
alternative where arguments show the delta since often that is what
constraint writers want to know anyway.
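
For illustration (Python, with an invented example rule), the two argument
conventions might compare like this:

    # Transition constraint, state form: a predicate over the before and
    # after database states (here, dicts of employee id to salary).
    def salary_never_decreases(before, after):
        return all(after[e] >= s for e, s in before.items() if e in after)

    # Delta form: the same rule, seeing only the substituted pairs, which
    # is often what constraint writers actually want to inspect.
    def salary_never_decreases_delta(substitutions):
        return all(new >= old for old, new in substitutions)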

* Update the "material" kinds that def stimulus-response rules / triggered
routines so that they work for more kinds of stimuli, and maybe change the
keywords.  The material kind has 2 main attributes, where the "stimulus"
defines what to look out for and "response" defines what to do when the
former is sighted.  Some possible keywords for the first are "stimulus",
"cause", "when"; for the latter, "response", "effect", "invoke".

* Add new "material" kinds that define descriptions of resource locks that
one wants to get, starting with basic whole dbvar, relvar locks (address as
simple fed.data.foo.*, as well as simple relvar tuple locks (addr as prior
plus lists of values to match like with a semijoin); leave out generic
predicate locks at first but note they will be added later.
Update the system catalog concerning managing shared|exclusive locks or
looking for consistent reads between statements, etc.

* Large updates to docs concerning transactions and resource locking.
Note:  Supposedly PostgreSQL uses read-committed isolation by default and
MySQL (InnoDB) uses repeatable-read, while SQLite provides serializable.

* Rewrite the "Exception" catalog type so it can carry metadata on what
kind of exception occurred, not just that an exception occurred.

* Also study the SQL concept of conditions and handlers, which looks like
something between exception handling and signals; or it may simply be SQL's
version of exceptions.

* Also adapt something like Postgres' LISTEN/NOTIFY/UNLISTEN feature, which
is an effective way for DB clients to be sent signals, such as when a
database relvar has changed.

* Use a conceptual framework for database transactions that is strongly
inspired by how distributed source-code version control systems (VCSs)
work, in particular drawing on GIT specifically.  The fundamental feature
of the framework is that the DBMS is managing a depot consisting of 1..N
versions of the same database, where every one of these versions is both
consistent and durable.  Each version is completely defined in isolation,
conceptually, and so any versions in a depot may be deleted without
compromising each of the other versions' ability to define a version of the
entire database.  It is implementation-dependent as to how the versions are
actually stored, such as each having all of the data versus most of them
just having deltas from some other version; what matters is that each
version *appears* to be self-contained.  Every version is created as a
single atomic action, and it is never modified afterwards, though it may be
later deleted (also an atomic action).

Every in-DBMS user process, henceforth called "user", has its own concept
of the current state of the database, which is one of the depot's versions
that is designated a "head".  A user's current head is never replaced
during the course of the in-DBMS process unless the user explicitly
replaces it, such as by either performing an update or requesting to see
the latest version (the latter done such as with an explicit "synchronize"
control statement).  Therefore, each user is highly isolated from all the
others, and is guaranteed consistent repeatable reads and no phantoms; they
will get repeatable reads until they request otherwise.

The framework has no native concept of "nesting transactions" or
"savepoints" or explicit "commit" or "rollback" commands.  Rather, every
single DBMS-performed parent-most multi-update statement (which is the
smallest scope where TTM requires the database to be consistent both
immediately before and immediately after its execution) is a durable atomic
transaction all by itself.  The effect of a successful multi-update
statement is to both produce a new (durable) version in the depot and to
update the executing user's "head" to be that new version (the prior
version may then be deleted automatically depending on circumstances); a
failed multi-update statement is a no-op for the depot, and the user gets a
thrown exception.

A depot's versions are arranged in a directed acyclic graph where each
version save the oldest cites 1..N other versions as its parents, and
conversely each version may have 0..N children.  A child version has
exactly 1 parent when it was created as the result of executing a
multi-update statement in the context of the parent version; the parent
version is the pre-update state of the database and the child is the
post-update state of the database.  A child version has multiple parents
when it is the result of merging or serializing the changes of multiple
users' statements that ran in parallel.  One main purpose of tracking
parents like this is for reliable merging of parallel changes, so that the
intended semantics of each change can be interpreted correctly, and
potential conflicts can be easily detected, and effectively resolved.  More
on how this works follows below.  Note that versions simply have unique
identifiers to be referenced with, and there is no implied ordering between
them whether they are generated as serial numbers or using date stamps,
though versions with earlier date stamps are given priority in the case of
a merge conflict.

So a multi-update statement is the only native "transaction" concept, and
it is ACID all by itself.  Now, the multi-statement "transactions" or
concepts of nested transactions or savepoints would all be syntactic sugar
over the native concept, and basically involve keeping track of versions
prior to the head and optionally making an older one the head.  This
framework uses the VCS concept of "branching" (which is something that GIT
strongly encourages the use of, as GIT makes later "merging" relatively
painless) as the native way to manage concurrent autonomous database
updates by multiple users.  By default, when no users have made any changes
to the database, a depot just has a "trunk", and its childmost or only
version is called "master"; every database user process' "head" starts off
as the "master" version when that process starts.  Each (autonomous) user
process that wants to update the database will start by creating a new
branch off of the trunk, and subsequent versions of theirs will go into
that, rather than into the trunk or some other branch.  The trunk is shared
by all users while each user's branch is just for that user, as their
private working space.

Note that, unlike a VCS in general where branches can become long-lived and
interact with each other independently of the trunk, the framework instead
follows the typical needs of an RDBMS, which espouses a single world view
as being dominant over any others, and expects that any branches will be
very short-lived, not existing for longer than a conceptual "database
transaction" would; only the trunk is expected to be long-lived.  (This
isn't to say that a DBMS can't maintain them long term, but one that acts
like a typical RDBMS of today wouldn't.)  Note that when the final action
on a branch merges it into the trunk, this would be perceived by all other
DBMS users as all of the changes wrought by the branch being a single
atomic update, though the user performing it may see several steps.
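
A minimal sketch of the depot model in Python (all structure invented for
illustration; real versions would be whole database states):

    # Immutable versions in a DAG; each records its parents' ids.  Users
    # hold private heads; commits add versions, merges have many parents.
    class Depot:
        def __init__(self, initial_state):
            self.versions = {0: (initial_state, ())}  # id: (state, parents)
            self.master = 0      # childmost version of the trunk
            self.next_id = 1

        def new_user_head(self):
            return self.master   # each user process starts at master

        def commit(self, head, new_state):
            # one successful multi-update statement == one new version;
            # the caller then replaces its head with the returned id
            vid, self.next_id = self.next_id, self.next_id + 1
            self.versions[vid] = (new_state, (head,))
            return vid

        def merge_into_trunk(self, merged_state, *parent_heads):
            # merging a branch back: a child with multiple parents, seen
            # by other users as a single atomic update of master
            vid, self.next_id = self.next_id, self.next_id + 1
            self.versions[vid] = (merged_state, tuple(parent_heads))
            self.master = vid
            return vid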

* Flesh out matters related to starting or communicating between multiple
autonomous in-DBMS processes, in general, besides the special case about
sequence generators.

----------

* Add to Routines_Catalog.pod and other files
definitions of any remaining routines, eg String routines, that would be
needed so that, for all system-defined types, all the system-defined
routines exist that are necessary for defining said types, especially
their constraint or mapping etc definitions.  So in String.pod we need
[catenation, repeat, length, has_substr] etc.
Also add "is_coprime" or GCD or LCM or etc which are used either in the
constraint definition of Rat or in a normalization function for Rat; see
also "the Euclidean algorithm" as an efficient way to do the calculations.

* Consider adding type introspection routines like: is_incomplete() or
is_dh() or is_primitive|structure|reference|enumerated etc.  Or don't
since one could look that up in the system catalog.  But more tests on
individual values might be useful, or maybe we have enough already.

* Add ext/TAP.pod, which is a partial port of Perl 5's Test::More / Perl
6's Test.pm / David Wheeler's pgTAP to Muldis D; assist users in testing
their Muldis D code using TAP protocol.  The TAP messages have type Text.
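
For reference, the TAP text that such an extension would emit has this
shape (sketched in Python; the Muldis D API itself is yet to be designed):

    # Emit TAP lines; each resulting string would be a Text value.
    def tap_plan(count):
        return "1..%d" % count

    def tap_result(number, passed, description):
        return "%s %d - %s" % ("ok" if passed else "not ok",
                               number, description)

    # tap_plan(2)                             ==> "1..2"
    # tap_result(1, True, "union commutes")   ==> "ok 1 - union commutes"
    # tap_result(2, False, "join keeps keys") ==> "not ok 2 - join keeps keys"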

----------

* Add concept of shallowly homogeneous / sh- relation types to complement
the deeply version, and named maximal types like SHRelation, SHSet,
SHArray, etc to complement the DH/etc, and sh_set_of/etc to complement
dh_set_of/etc; but not sh-scalar or sh-tuple as the concept doesn't make
sense there.  Then update functions like Relation.union/etc to take
sh_set_of.Relation rather than set_of.Relation, which more formally defines
some of their input constraints.

* Consider adding an imperative for-each looping statement; the main
question here is whether it should work on any (unordered) relation or just
on an Array (in which case it iterates through the tuples in sequence by
index); the question is what tasks the for-each would be used for; perhaps
both versions are useful; presumably the main reason to have for-each at
all is when I/O is involved and some derivative needs to be output either
where order matters or where order does not matter; but perhaps only a
routine is needed here such as a catenate function plus normal I/O output.
The question also is what tasks an imperative for-each would be needed for
that functional constructs like the list-processing relational functions
couldn't serve better instead.

----------

* Add a round-rule param to rat division, I suppose, since in general we'll
need it if we want to maintain a rational radix through every op (+,-,*
will already do so when all their args are in the desired radix).

* Add explicit support for +/- underflow, +/- overflow, NaNs, etc.
I'm inclined to think +/- zero is unnecessary when we have underflow and
can be confusing anyway (just a single normal number zero is better).
I'm not sure if +/- overflows are useful or if infinities cover them for
our purposes.  How this would work is that we define a set
of scalar singleton types, one for each of the special values.  Then we
define extended versions of the Int, Rat, etc types where the extended
types are defined in terms of being union types that union the regular
numeric types with the special singleton type values.  This approach also
means just one each of +Underflow, -Overflow, etc is needed and is a member
of extended Int or Rat etc.  Consider using the existing names "Int"/"Rat"
with the versions that include these special values, and make new names for
the current simpler versions that don't, such as "IntNS" (int no specials),
"RatNS", etc.  Either way, it is useful to support the full range of values
that a Perl 6 numeric can support, or that an IEEE float can support,
without users necessarily having to define it themselves.
IDEA:  Maybe make all normal math/etc ops work with the extended versions
(those with NaNs, infinities, etc) and in situations where users don't want
those special values they just use a declared type excluding them, and then
the normal type constraints will take care of throwing exceptions when one
divides by zero for example.
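
A small sketch of the union-type approach (Python; all names invented):

    # Each special value is a singleton; the extended numeric type is
    # the union of the plain type ("IntNS") with the singletons.
    class Special:
        def __init__(self, name):
            self.name = name
        def __repr__(self):
            return self.name

    NAN, POS_INF, NEG_INF = Special("NaN"), Special("+Inf"), Special("-Inf")
    SPECIALS = (NAN, POS_INF, NEG_INF)

    def is_int_ns(v):   # the simpler type without specials
        return isinstance(v, int)

    def is_int(v):      # extended "Int" = IntNS union the specials
        return is_int_ns(v) or any(v is s for s in SPECIALS)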

* Flesh out Interval.pod to add a complement of functions for comparing
multiple intervals in different ways, such as is-subset, is-overlap,
is-consecutive, etc, as well as for deriving intervals from a
union/intersect/etc of others, as well as for treating intervals as normal
relations in some contexts, such as for joining or filtering etc, as well
as a function or 3 to do normalization of Interval values.
Maybe the type name 'Range' can be used for something.
Maybe the type name 'Span' or 'SpanSet' can be used for something;
there are Perl modules with those names concerning date ranges.
Input is welcome as to what interval-savvy functions Muldis D should have.

* IN PROGRESS ...
Add Bool-resulting relational operators EXISTS and FORALL, that provide
"existential quantification" and "universal quantification" respectively,
these being useful in constraint definitions.  See TTM book p168, pp394-5
for some info on those.  Also add analogies to Perl 5's List::MoreUtils
operators any(), all(), notall(), none(), true(), false(); some of those
may be the same as EXISTS/FORALL.  Also add an EXACTLY operator like the
Tutorial D language has, and a one() op that is between any() and none().
Maybe some pure boolean ops can be added analogous to the above also; eg
any() an alias for or() and all() an alias for and().
is_(any|all|one|none|notall|etc)_of_(restr|semijoin|semidiff|etc)
source is any|etc matching|where|etc filter|etc
ADD RELATIONAL OPERATORS THAT COMBINE BOOL OPS ADDED IN 0.80.0 WITH
RELATIONAL MAP/RESTRICTION/ETC AND ... The new functions are modelled after
some in Perl 5's List::MoreUtils module.
That is, add prefix ops exactly|all|any|one|none|etc
which take a relation and result in True or False depending on what that
relation's cardinality is.  In some cases, an extra arg is needed:
    - exactly((s⋉t),n) = (#(s⋉t) = n)
    - none((s⋉t)) = exactly((s⋉t),0) = !#(s⋉t)
    - any((s⋉t)) = !exactly((s⋉t),0) = ?#(s⋉t)
    - all((s⋉t),#s) = exactly((s⋉t),#s) = (#(s⋉t) = #s)
    - notall((s⋉t),#s) = !exactly((s⋉t),#s) = (#(s⋉t) != #s)
    - one((s⋉t)) = exactly((s⋉t),1) = (#(s⋉t) = 1)
OR MAYBE THESE AREN'T ANY MORE USEFUL THAN THEIR EQUIVALENT EXPRS.
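
A concrete sketch of those quantifiers in Python (relations as sets of
tuples; the semijoin is written out as a comprehension):

    # Cardinality quantifiers over a semijoin s ⋉ t, as equated above.
    def exactly(rel, n):
        return len(rel) == n

    def none_match(rel):            return exactly(rel, 0)
    def any_match(rel):             return not exactly(rel, 0)  # EXISTS
    def one_match(rel):             return exactly(rel, 1)
    def all_match(rel, src_count):  return exactly(rel, src_count)  # FORALL
    def notall_match(rel, src_count):
        return not exactly(rel, src_count)

    # s = {("a",), ("b",)}; t = {("a",)}
    # semijoin = {row for row in s if row in t}
    # any_match(semijoin) ==> True; all_match(semijoin, len(s)) ==> False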

* Consider adding sequence generator updaters|procedures in Integer.pod.

* Consider adding random value generators for data types other than integer
and rational numerics, such as for character strings or binary strings.

* Consider analogy to SQL's "[UNION|EXCEPT|INTERSECT] CORRESPONDING BY
(attr1,attr2,...)", which is a shorthand for combining projection and
union, that takes a list of attributes and unions the projections of those
attributes from every input relation; so this means, as with join(), that
the input relations don't need to have the same headings.
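
A sketch of the shorthand's semantics (Python; relations modelled as sets
of dicts):

    # UNION CORRESPONDING BY (a1, ...): project every input relation on
    # the named attributes, then union the projections; the inputs'
    # headings need not match, as with join().
    def union_corresponding(attrs, *relations):
        result = set()
        for rel in relations:
            result |= {tuple(t[a] for a in attrs) for t in rel}
        return result

    # union_corresponding(("city",), suppliers, parts) unions the city
    # projections of two differently-headed relations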

----------

* In PTMD_STD, consider further changes to how character escape sequences
in strings/etc are done.  For example, whether the simple escape sequence
for each string delimiter char may be used in all kinds of strings (as they
are now) or just in strings having the same delim char as is being escaped.

* IN PROGRESS ...
Update the STD dialects to support inline definition of basic
routines (and types?) right in the expressions/etc where they are used,
such as filter functions in restriction() invocations, so many common cases
look much more like their SQL or Perl counterparts, or for that matter, a
functional language's anonymous higher order functions.  This syntax would
be sugar over an explicit material definition plus a FooRef val selection,
which means the inner def effectively is an expression node, and users can
choose to name or not name the FooRef selecting node as normal with value
expressions.  It is expected that the materials could be declared
anonymously and names for them (the inn.foo, not the FooRef's lex.foo)
would be generated as per inline expression nodes etc.

* Further to the previous item, add some special syntax, similar to how one
references a parameter to get its argument's value, which can see into the
caller's lexical scope.  This would be sugar over declaring parameters with
the same name and having the caller explicitly pass arguments to it,
without having to explicitly write that.  Generally this syntax would only
be used with inline-declared routines.  But similarly, add some special
syntax allowing one to essentially just write the body of a routine without
having to explicitly write its heading / parameter list, which is useful
for routines invoked directly from a host language, where said parameters
are attached to host bind variables.  Now one still has to say what the
expected data type is for these bind variables, but then the explicit
syntax for such Muldis D routines is more like that of a SQL statement you
plug into DBI or whatever, without the explicit framing.  May not work
everywhere, but should help where it does.  Maybe use $$foo rather than $foo
to indicate that the 'foo' wasn't explicitly declared in the current
lexical scope and we are referring to the caller or a bind variable.  Or
rather than $$foo, have something like "(param foo : Bar)" for an
expression-inline parameter definition and use, where the part after the
"param" has all the same syntax as an actual param list; this is the one
for host language bind parameters.  Actually that might be useful by
itself.  Similarly "(caller foo)" would be the look to parent Muldis D
lexical scope, or $$foo would just do that maybe, unless this should have
an explicit type declaration still.  Note, if same inline-declared host
param used more than once, you just need "(param foo : Bar)" form once and
other uses can just say foo as per usual; in fact, it must be this way.

* Consider in all STD adding a new pragma that concerns whether data in
delimited character string literals is ASCII or Unicode etc.
Example PTMD_STD grammar additions:
            <ws>? ',' <ws>? str_char_repertoire <ws>? '=>' <ws>? <str_cr>
    <str_cr> ::=
        '{' <ws>?
            [<str_cr_describes> <ws>? '=>' <ws>? <str_char_reper>]
                ** [<ws>? ',' <ws>?]
        <ws>? '}'
    <str_cr_describes> ::=
        all | text | name | cmnt
    <str_char_reper> ::=
          ASCII
        | Unicode_5.2.0_canon
        | Unicode_5.2.0_compat
Example PTMD_STD code additions:
    str_char_repertoire => { text => Unicode_5.2.0_canon,
        name => Unicode_5.2.0_compat, cmnt => Unicode_5.2.0_compat },
    str_char_repertoire => { all => ASCII },
Of particular interest is the Unicode canonical vs compatibility, that is
NFC|D vs NFKC|D; it is generally recommended such as by the Unicode
consortium to use canonical for general data but to use compatibility for
things like identifiers or to avoid some kinds of security problems; see
http://www.unicode.org/faq/normalization.html.  Note that compatibility is
a smaller repertoire than canonical, so converting from the latter to the
former will lose information.  The text|name|cmnt affect how delimited char
strs that are Text|Name|Comment are interpreted, and the effects are
orthogonal to whether characters are specified literally or in escaped
form (eg "\c<...>"); canonical will preserve exactly what is stated (but
for normalization to NFD) and compatibility will take what is stated and
fold it so semantically same characters become the same codepoints (like as
normalizing to NFKD).  The suggested usage is compatibility for Name to
help avoid security or other problems, and canonical for Text; as for
Comment, I currently don't know which is better.  If ASCII is chosen, the
semantics are different; with both Unicode options any input is accepted
but folded if needed; for ASCII, it is more likely an exception would be
raised if
there are any codepoints outside the 0..127 range in character strings.
The 'all' is a shorthand for giving the same value to all 3 text|name|cmnt
and is more likely to occur with ASCII but it might happen otherwise.
An additional reason to raise this feature is to setup support for other
char sets in future, such as Mojikyo, TRON, GB18030, etc which go beyond
Unicode eg no Han-unification (see http://www.jbrowse.com/text/unij.html +
http://www.ruby-forum.com/topic/165927) but type system also needs update.
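
The canonical-vs-compatibility distinction can be seen concretely with
Python's unicodedata module:

    import unicodedata

    s = "\ufb01le"  # "file" spelled with the U+FB01 "fi" ligature
    print(unicodedata.normalize("NFC", s))   # ligature kept (canonical)
    print(unicodedata.normalize("NFKC", s))  # folded to "file" (compat)
    # compatibility folding makes semantically-same characters the same
    # codepoints, which is why it suits identifiers (Name) over Text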

* Update HDMD_Perl6_STD.pod considering that a 2010.03.03 P6Syn update
eliminated the special 1/2 literal syntax for rats and so now one writes
<1/2> instead (no whitespace allowed around the '/'); now 1/2 could still
work but now it does so using regular constant folding, and so having a
higher precedence op nearby affects its interpretation.

* Update HDMD_Perl6_STD.pod considering names of Perl collection types,
such that "Enum" is the immutable "Pair" and "EnumMap" was renamed from
"Mapping", and "FatRat" is now the "Rat" of unlimited size, etc.

* Consider using postcircumfix syntax for extracting single relation
attrs into Set or Bag etc, meaning wrap_attr; eg "r.@S{a}", "r.@B{a}".
Now that might not work for Array extraction, unless done like
"(r.@A{a} ordered ...)" or some such, which isn't pure postcircumfix,
but that may be for the best anyway.

* Consider adding concrete syntax that is shorthand for multiple
single-attribute extractions where each goes to a separate named expression
node (or variable) but the source is a single collection-typed expr/var.
Or the source could be a multiplicity as well, or mix and match.
The idea here is to replicate some common idioms in Perl such as
"(x, y) = @xy[0,1]" or "(x, y) = %xy{'x','y'}", this being more useful
when the source is an anonymous arbitrary expression.
Proposed syntax is that, on each side of the "::=" or ":=", the source and
target lists are bounded in square brackets, indicating named items assign
in order, and the syntax for collections supplying/taking multiple items
is identical to single-attr accessors (having a ".") except that a list is
in the braces/brackets; for example: "[x, y] ::= [3, 4]",
"[a, b] ::= t.{a,b}", "[c, d] ::= ary.[3,5]".  This syntax would resolve
into multiple single-attr accessors when applied in the system catalog.

* In PTMD_STD, consider loosening the grammar regarding some of the normal
prefix or postfix or infix operators so that rather than mandating
whitespace be present between the operators and their arguments, the
whitespace is optional where it wouldn't cause a problem.  On the other
hand, is "! foo" really such a problem?  People might often write "not
foo" there anyway as that is a preferred form for boolean ops.  In
particular, while someone may want to write say "foo++" or "foo--", the
latter is problematic since "-" is allowed in bareword identifiers and
that could mis-parse, and someone would probably prefer to then write
[foo --] rather than ["foo"--] anyway.

----------

* Restore the concept of public-vs-private entities directly in sub|depots.

* Restore the concept of "topic namespaces" (analogous to SQL DBMS concept
of "current database|schema" etc) in some form if not redundant.

* Update the system catalog to deal with database users and privileges etc.

----------

* IN PROGRESS ...
A Muldis D host|peer language can probably hold lexical-to-them variables
whose Muldis D value object is External typed, and so they could
effectively pass around an anonymous closure of
their own language.  Such a value object would be a black box to the host
and can't be dumped to Muldis D source code.

* IN PROGRESS ...
Fully support direct interaction with other languages, mainly either peer
Parrot-hosted languages or each host language of the Muldis D
implementation.  Expand the definition of the "reference" main type
category (or if we need to, create a 5th similarly themed main category) so
that it is home to all foreign-managed values, which to Muldis D are simply
black boxes that Muldis D can pass around routines, store in transient
variables, and use as attributes of tuples or relations.  These
of course can not be stored in a Muldis D depot/database, but they can be
kept in transient values of Muldis D collection types which are held in
lexical variables by the peer or host language; that language is then
really just using Muldis D as a library of relational functions to organize
and transform its own data.  We also need to add a top level namespace by
which we can reference or invoke the opaque-to-us data types and routines
of the peer or host language.  This can not go under sys.imp or
sys.anything because these are supposed to represent user-defined types and
routines, which in a dynamic peer language can appear or disappear or
change at a moment's notice, same as in Muldis D; on the other hand, types
or routines built-in to the peer/host language that we can assume are as
static as sys.std, could go under sys.imp or something.  This also doesn't
go under fed etc since fed is reserved for data under Muldis D control and
only ever contains pure s/t/r types.  Presumably this namespace will be
subdivided by language analogously to sys.imp or whatever syntax Perl 6
provides for calling out into foreign languages co-hosted on Parrot.  Since
all foreign values are treated as black boxes by Muldis D, it is assumed
that the Muldis D implementation's bindings to the peer/host language will
be providing something akin to a simple pointer value, and that it would
provide the means to know what foreign values are mutually distinct or
implement is_identical for them.  One thing for certain is that every
foreign value is disjoint from every Muldis D value, and by default every
foreign value is mutually distinct from every other foreign too, unless
identity is overloaded by the foreign, like how Perl 6's .WHICH works.
The foreign-access namespace may have a simple catalog variable
representing what types and routines it is exposing, but to Muldis D this
would be we_may_update=false.

* IN PROGRESS ...
About External type ... update Perl5_STD and
Perl6_STD to add a new selector node kind 'External' which takes any Perl
value or object as its payload; this is treated completely as a black box
in general within the Muldis D implementation.  For matters of identity
within the Muldis D environment, it works as follows:  Under Perl 6, the
Perl value's .WHICH result determines its identity.  Under Perl 5, if the
value is a Perl ref ('ref obj' returns true) then its memory address is
used, and this applies to all objects also (since all refs are mutable,
this seems to be the safest bet); otherwise ('ref obj' is false) then the
value's result in a string context, "obj", is used as the identity; the
mem addr and stringification would both be prefixed with some constant to
distinguish the 2 that might stringify the same.  By default, an
External supports no operators but is/not_identical.
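
Restating the described Perl 5 identity rule as a Python sketch (the
prefixes are the point: the two identity spaces can never collide):

    # Identity key for an External value under the Perl 5 rules above.
    def external_identity(value, is_ref):
        if is_ref:
            # refs (and hence all objects) identify by memory address
            return "ref:" + hex(id(value))
        # non-refs identify by their stringification
        return "str:" + str(value)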

----------

* Add new "FTS" or "FullTextSearch" extension which provides weighted
indexed searching of large Text values in terms of their component tokens,
such as what would be considered "words" in human terms.  This is what
would map to the full text search capabilities that underlying SQL DBMSs
may provide, if they are sufficiently similar to each other, or there might
be distinct FTS extensions for significantly different ones?

* Add new "Perl5Regex" extension which provides full use of the Perl 5
regular expression engine for pattern matching and transliteration of Text
values.  Maybe the PCRE library can implement this on other foundations
than Perl 5 itself if they are sufficiently alike; otherwise we can also
have a separate "PCRE" extension.  Or the same extension can provide both?

* Add new "Perl6Rules" extension which provides full use of the Perl 6
rules engine for pattern matching and transliteration of Text values.

* Add new "PGE" or "ParrotGrammarEngine" extension, or whatever an
appropriate replacement is, for pattern matching and transliteration of
Text values.  This and "Perl6Rules" may or may not be sufficiently similar
to combine into one extension.

* Add functions for splitting strings on separators, or catenating them
with such, to the above extensions or to Text.pod as appropriate.  Text has
one now.

* Update or supplement the order-determination function for Text so that it
compares whole graphemes (1 grapheme = sequence starting with a base
codepoint plus N combining codepoints, or something) as single string
elements, rather than say comparing a base char against a combining char.

* Add new "Complex" extension which provides the numeric "complex" data
types (each expressed as a pair of real numbers with at least 2 possreps
like cartesian vs polar) and operators.  Note that the SQL standard does
not have such data types but both many general languages as well some
hardware CPUs natively support them.  Probably make "Complex" a mixin type
and have the likes of "RatComplex" and "IntComplex" composing it.  Note
that a complex number over just integers is also called a Gaussian integer.
A question to ask is whether a distinct "imaginary" type is useful; some
may say it is and Digital Mars' "D" has it, but I don't know if others do.
In any event, complex numerics should most likely not be part of the core,
even though their candidacy could be considered borderline; for one thing,
I would expect that most actual uses of them would work with inexact math.

* Add other mathematical extensions, such as ones that add trigonometric
functions et al, or ones that deal with hyperreal/hypercomplex/etc types,
or ones with variants of the core numeric types that propagate NaNs etc.

* Consider adding a sleep() system-service routine, if it would be useful.

* Add multiplication and division operators to the Duration types; these
would both be dyadic ops where the second operand is a Numeric.

* Consider adding a Temporal.pod type specific to representing a period in
time, maybe simply as an alias for 'interval_of.*Instant' or some such.
See also the PGTemporal project and its 'Period' type.

* Flesh out "Spatial" extension; provide operators for the spatial data
types, maybe overhaul the types.

* Consider another dialect that is JSON ... like HDMD in form, but stringy.

----------

* Add one or more files to the distro that are straight PTMD_STD code like
for defining a whole depot (as per the above) but instead these files
define all the system entities.  Or more specifically they define just the
interfaces/heads of all the system-defined routines, and they have the
complete definitions of all system-defined types, and they declare all the
system catalog dbvars/dbcons.  In other words these files contain
everything that is in the sys.cat dbcon; anything that users can introspect
from sys.cat can also be read from these files in the form of PTMD_STD
code, more or less.  The function of these files is analogous to the Perl 6
Setting files described in the Perl 6 Synopsis 32, except that the Muldis D
analogy explicitly does not define the bodies of any built-in routines.  An
idea is that Muldis D implementations could take these files as is and
parse them to populate their sys.cat that users see; of course, the
implementations can actually implement the routines/types as they want.
Note that although this Muldis D code would be bundled with the spec, it is
most likely that the PTMD_STD-written standard impl test suite will not.
Note that these files will not go in lib/ but in some other new dir.  Note
that it is likely any implementation will bundle a clone of these files
(suitably integrated as desired) rather than having an actual external
dependency on the Muldis::D distro.  Note that some explicit comment might
be necessary to say there are no licensing restrictions on copying this
builtins-interfaces-defining code into Muldis D implementations, or maybe
no comment is necessary.  Probably a good precedent is to look at what
legalities concern existing tutorial/etc books that have sample code.

* Create another distribution, maybe called Muldis::D::Validator, which
consists essentially of just a t/ directory holding a large number of files
that are straight PTMD_STD code, and that emit the TAP protocol when
executed.  The structure and purpose of this collection is essentially
identical to the official Perl 6 test suite.  A valid Muldis D
implementation could conceivably be defined as any interpreter which runs
this test suite correctly.  This new distro would be a "testing requires"
external dependency of both Muldis::Rosetta and any Parrot-hosted language
or other implementation, though conceivably either could bundle a clone of
Muldis::D::Validator rather than having an actual external dependency.
This test suite would be LGPL licensed.  This new distribution would have a
version number that is of X.Y.Z format like Muldis::D itself, where the X.Y
part always matches that of the Muldis D spec that it is testing compliance
with, while the .Z always starts at zero and increments independently of
the Muldis D spec, as often there may be multiple updates to ::Validator
for a while between releases of the language spec, and also since .Z updates
in the language spec only indicate bug fixes and shouldn't constitute a
change to the spec from the point of view of ::Validator.

----------

* Whatever else needs doing, such as, fixing bugs.