
Dolt 0.11.0 released

@oscarbatori released this 12 Nov 18:38
17d14ba

We are excited to announce the release of Dolt 0.11.0.

SQL

System Table

We implemented a dolt_log system table, our first step toward surfacing Dolt version control concepts in SQL by exposing commit data. This lets users leverage commit data in an automated setting via SQL. Clone a public repo to see how it works:

$ dolt clone Liquidata/ip-to-country
$ cd ip-to-country
$ dolt sql -q "select * from dolt_log"
$ dolt sql -q "select committer,date from dolt_log order by date desc"
+-------------+--------------------------------+
| committer   | date                           |
+-------------+--------------------------------+
| Tim Sehn    | Wed Sep 25 12:30:43 -0400 2019 |
| Tim Sehn    | Wed Sep 18 18:27:02 -0400 2019 |
.
.
.
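
Because the log is exposed as an ordinary table, commit data can be combined with the rest of SQL. For example, a hypothetical query aggregating commits per committer (assuming GROUP BY and COUNT(*) are supported by the SQL engine):

$ dolt sql -q "select committer, count(*) as num_commits from dolt_log group by committer order by num_commits desc"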

Timestamps

We added support for the DATETIME data type in SQL. This is a major milestone toward compatibility with existing RDBMS solutions.
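
As a quick sketch of the feature (hypothetical table and column names; note that Dolt tables require a primary key):

$ dolt sql -q "create table events (id int, occurred_at datetime, primary key (id))"
$ dolt sql -q "insert into events values (1, '2019-11-12 18:38:00')"
$ dolt sql -q "select * from events where occurred_at > '2019-01-01 00:00:00'"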

Performance

We continue to rapidly improve our SQL implementation. On the performance side, some degenerate cases of query performance saw large improvements. We also resolved issues where UPDATE statements had to be "over-parenthesized"; the parser now matches the SQL standard.
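
For example, an UPDATE like the following (reusing the hypothetical events table above) should now parse without extra parentheses around the SET expression or the WHERE clause:

$ dolt sql -q "update events set occurred_at = '2019-11-13 09:00:00' where id = 1"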

Other

We now support null values in CSV files imported via the command line, and we made minor bug fixes under the hood.
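
A rough sketch of the import workflow (hypothetical file and table names; flag spellings may differ by version, see dolt table import --help). Empty fields in the CSV below should now be imported as NULL rather than rejected:

$ cat people.csv
id,name,age
1,Alice,
2,,40
$ dolt table import -c --pk id people people.csv
$ dolt sql -q "select * from people where age is null"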

If you find any bugs, or have any questions or feature requests, please create an issue and we will take a look.

Merged PRs

  • 208: go/libraries/doltcore/row: tagged_values.go: Fix n^2 behavior in ParseTaggedValues.
    ParseTaggedValues used to call Tuple.Get(0)...Tuple.Get(n), but Tuple.Get(x)
    has O(n) perf, so the function did O(n^2) decoding work to decode a tuple.
    Use a TupleIterator instead.
  • 206: go/store/types: Improve perf of value decoding for primitive types.
    This fixes a performance regression in value decoding after the work to make it easier to add primitive types to the storage layer.
    First, we change some map lookups into slice lookups, because hashing the small integers on hot decode paths dominates CPU profiles.
    Next, we inline logic for some frequently used primitive types in value_decoder.go, as opposed to going through the table indirection. This is about a 30% perf improvement for linear scans on skipValue(), which is worth the duplication here.
    Code for adding a kind remains correct if the decoder isn't changed to include an inlined decode path. We omit inlining UUID and InlineBlob here for now.
  • 199: Km/redo import nulls
  • 198: checkout a remote only branch
  • 196: Bh/log table
  • 195: Added Timestamp to Dolt and Datetime to SQL
    Have a look!
    I ran into an import cycle issue that I just could not figure out how to avoid, except by putting the tests into their own test folder (sqle/types/tests), so that's why they're in there. In particular, the cycle was that sqle imports sqle/types, and the tests rely on (and thus must import) sqle, causing the cycle.
    I'm thinking of adding tests for the other SQL types later so that we have a few more built-in tests using the server portion, rather than everything using the -q pathway. That will be a different/future PR though.
  • 193: diff where and limit
  • 191: fix branch name panic with period
    Looked into supporting periods in branch names, but it looks like noms relies on periods specifically pretty heavily. They seem to be excluded from the regex below by design, since noms builds some types on the expectation that a branch name or ref won't contain a period.
    My understanding is that a user's branch name is used to look up a particular dataset within the noms layer and this variable (go/store/datas/dataset.go):
    // DatasetRe is a regexp that matches a legal Dataset name anywhere within the
    // target string.
    var DatasetRe = regexp.MustCompile(`[a-zA-Z0-9\-_/]+`)
    
    acts as the regex source of "truth" for branch names/dataset lookups, and I believe more.
    Noms also expects to be able to append a . to this string in order to parse the string later and correctly create its Path types...
    I went down a rabbit hole trying to change all of the noms Path delimiters to be a different character, but the changes go pretty deep and start breaking a lot of things. Happy to continue down that course in order to support periods in branch names, but it might take me a bit of time to change everything. I'm also not sure what character should replace the period... asterisk? Anyway, this PR seemed like a low-hanging-fruit fix to resolve the panic at least.
  • 190: Missed Kind to String
    In my last PR, it looks like I missed that we were using the old DoltToSQLType hardcoded map from the original SQL implementation. I didn't change it everywhere (it's used heavily in the old SQL code that isn't even being called anymore), but it's changed where it matters. Added a new interface function and changed the printing code to be a bit more consistent (we were mixing uppercase with lowercase).
    I'm also returning different values, such as BIGINT for sql.Int64, as int parses in MySQL to a 32-bit integer, which isn't correct. Essentially made it so that if you took the CREATE statement exactly as-is and exported your data to a bunch of inserts and ran it in MySQL then it wouldn't error out, as it previously would have.
  • 189: diff source refactor
  • 188: go/cmd/git-dolt/README.md: Add comparison to git-lfs and note on updates
  • 187: Remove skip on test for / in branch names. Added a skipped test for . in branch names; . in branch names panics right now.
  • 186: go/go.mod: Pick up sqlparser improvements for ADD COLUMN, RENAME COLUMN. Fix some tests.
  • 185: Moved command line SQL processing to use new engine for CREATE and DROP
  • 183: add appid to logevents requests
    Need to update the requests so that they reflect the current proto definitions.
  • 182: Moved SQL types to an interface
    Have a look! Just make an empty struct type that implements SqlTypeInit and add the struct to sqlTypeInitializers and you've got a type that works in SQL now!
  • 179: clone reliability
  • 178: go/store/nbs: store.go: Be more careful about updates to nbs field values until all operations have completed successfully.
  • 177: go/cmd/dolt: Bump version to 0.10.0

Closed Issues

  • 194: Wide tables create poor query performance