It's a bit late in the game to be bringing this up, but I'm wondering what approaches have been used for representing data types (such as those found in SQL) that are standard enough to have recognizable names but which don't exist as such in A-Shell. In most cases there are one-to-one matches, such as SQL int (i,4) or char(#) (s,#). In others, like NULL, there's really no perfect solution. And there are many, like SQL date, that are somewhere in between, in the sense that we can store the data perfectly (in this case in an S,10 variable), but it would be nice to store the format information as part of the mapped type.

The obvious (and perhaps only) tool we have for this is DEFTYPE. As an example, the standard ashinc:types.def file contains DEFTYPEs such as...
Code
deftype T_SQL_DATE    = S,10     !  CCYY-MM-DD
deftype T_SQL_TIME    = S,12     !  hh:mm:ss.ttt 
... allowing you to map a variable like ...
Code
map2 invoiceDate, T_SQL_DATE

Not only does that hopefully eliminate any ambiguity as to what the data in that variable should look like, but it also allows you to take advantage of Dynamic Structures and Dynamic Functions to create generic code that knows how to handle data formats at runtime based on the types.

As you can see from the other defined types in the ashinc:types.def file, I've adopted a pseudo-standard for defined type names using all CAPS, starting with T_. For those types that fall into a group or belong to a particular customer, I'll typically follow the T_ with the abbreviation for that group/customer, e.g. T_SQL_DATE. In retrospect it might have been nice to encourage some level of standardization for types that many of us will encounter (like SQL types), but I guess it doesn't matter that much, as long as we don't use the same type names for different types (which could present an obstacle to using shared utility functions).

But aside from the DEFTYPE naming convention, there is still the question of the best way to map these external/logical types to our internal/raw types. For example, what's the best way to store a field of SQL type bit? (In the database, this would be a single bit used to represent TRUE (1) and FALSE (0).) Obviously from a raw data standpoint, we could do what SQL actually does, packing them into a series of B,1 bytes. But although we can test bits individually in ASB, we don't really have a way to store them as 'fields'. So it might be more convenient to use up a whole byte (the waste! the horror!) just to make it easier to copy these fields between ASB and some database.
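
For illustration, the whole-byte approach might look something like this (T_SQL_BIT is my own hypothetical name here, not an existing entry in ashinc:types.def):
Code
deftype T_SQL_BIT = B,1         ! one whole byte per SQL bit field
map2 isActive, T_SQL_BIT        ! 0 = FALSE, 1 = TRUE (as stored by SQL)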

One problem with that idea, though, is that because ASB does not separate logical and arithmetic boolean operations (they're all handled bitwise, arithmetically), it makes a lot more sense to store a TRUE value as -1 (all bits set) so that the NOT operator works as expected. (NOT -1 is zero, but NOT 1 is still non-zero and thus still appears to be TRUE.) (As an aside, for whatever reason the types.def file defines the BOOLEAN type as i,2, wasting even more space.)
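
To make the NOT issue concrete, here's an untested sketch (the variable name is arbitrary; the point is just the bitwise arithmetic):
Code
map2 flag, i, 2
flag = 1                         ! TRUE stored as 1
if not (flag) then print "oops"  ! NOT 1 = -2 (bitwise), non-zero, so this still fires
flag = -1                        ! TRUE stored as -1 (all bits set)
if not (flag) then print "never" ! NOT -1 = 0, so this correctly does not fire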

Another problem is how to represent NULL. For the ordered map context, our pseudo-NULL representation using .NULL, .ISNULL(), etc. is mostly workable, but requires 6 bytes to represent the literal "<null>", and that only makes sense with string and X data types. Obviously we can't do that with purely numeric types, especially those shorter than 6 bytes. SQL databases are able to store NULL independently from any other binary data values by storing it as an attribute rather than actual data. While ASB variables do have attributes, I'm not sure it's practical to apply that concept here (and in any case, we've used up the 16 bits currently allocated; expanding that would break the current RUN format and be decidedly non-trivial).

But what other options are there? Up until now, I've pretty much just opted to treat database NULL fields as equivalent to "" or 0, depending on the type. (ASQL SQLOP_FETCH_ROW used to do that unconditionally but now offers it as a choice.) But now I have an application that really wants to keep TRUE, FALSE and NULL separate in a database bit field. I'm thinking maybe to go with 0, -1, and 2, and create type-specific functions like Fn'T_SQL_BIT'Unpack$() and Fn'T_SQL_BIT'Pack$() to handle importing / exporting / printing / etc. But that only makes sense for the bit type because it has unused bits, and wouldn't work with other numeric types. Which brings me back to where I started, wondering what approaches anyone else has employed.
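
For what it's worth, a rough, untested sketch of what the unpack side of that might look like (the parameter and return types, and the return-value assignment, are my assumptions about the syntax; 2 is the arbitrary pseudo-NULL marker from above):
Code
! Convert the internal 0 / -1 / 2 encoding to printable text
function Fn'T_SQL_BIT'Unpack$(bit as i2) as s5
    if bit = 2 then
        Fn'T_SQL_BIT'Unpack$ = "NULL"
    else
        if bit = 0 then
            Fn'T_SQL_BIT'Unpack$ = "FALSE"
        else
            Fn'T_SQL_BIT'Unpack$ = "TRUE"   ! -1 (all bits set)
        endif
    endif
endfunction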