0.16.0 [ACTION REQUIRED]
Dolt 0.16.0 is a very exciting release. It contains an important change to how we store columns, as well as a host of new features. The change to how we store columns does require users to migrate their repositories when they upgrade. We will provide some background, as well as the (very simple) migration procedure, before discussing the features in the release.
We are absolutely committed to making this as painless as possible for our users, so don't hesitate to shoot us a note at [email protected] if you face any difficulty with the migration, or just need to discuss it further given the sensitive nature of your data.
Unique Tags Migration
Dolt uses an integer value called a tag to identify columns. The following contrived example illustrates this:
$ dolt init
$ cat > text.csv
name,id
novak,1
$ dolt table import -c --pk=id peeps text.csv
CREATE TABLE `peeps` (
`name` LONGTEXT COMMENT 'tag:0',
`id` LONGTEXT NOT NULL COMMENT 'tag:1',
PRIMARY KEY (`id`)
);
Versions of Dolt prior to this release only required tag uniqueness per table in a given commit, not across tables and across every commit. This caused issues when diffing and merging between commits where column tags had been reused. We decided to bite the bullet and make the fix. Going forward, all column tags will be unique across all tables and across all history.
Existing Dolt repositories must be migrated before they can be used with Dolt client 0.16.0 and later. Running a command with the new client on an old repository will result in an error message prompting you to migrate the repository. The migration process is heavily tested, deterministic, and makes no changes to your underlying data; it merely rewrites the format to satisfy the requirements of new versions. After upgrading Dolt, run a single command in your repository to migrate:
$ dolt migrate
Once that completes, you should be done. If your Dolt data lives at a remote and you have collaborators, there is just one additional step. The first user to upgrade will need to run:
$ dolt migrate --push
This will force push the migrated branches. Subsequent collaborators will then need to run:
$ dolt migrate --pull
This will sync their migrated repository with the migrated remote, preserving any local changes they have by applying them on top.
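Putting the pieces together, one possible sequence for a team looks roughly like this (a sketch built only from the commands above; whether additional steps apply to your setup is covered by the error prompts the new client gives):
# the first collaborator to upgrade
$ dolt migrate
$ dolt migrate --push
# every other collaborator
$ dolt migrate
$ dolt migrate --pull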
SQL
We are committed to making our SQL implementation as close to 100% correct as possible, and this release represents a big step towards that goal. The improvements include:
- `SHOW CREATE VIEW` now works, so views can be inspected in the canonical manner
- views now appear in `SHOW TABLES` statements
- added support for new types: `DECIMAL`, `TIME`, `SET`, `ENUM`, `NCHAR`, `NVARCHAR`, and aliases to many more
- `dolt sql-server` and `dolt sql` now support accessing multiple dolt repositories in a single SQL session. Each repository is exposed as a database. See databases with `SHOW DATABASES`, and select one to query with `USE $database`. Joins across repositories are supported. Start `dolt sql` or `dolt sql-server` with the new `--multi-db-dir` argument, which must name a directory containing dolt repositories to expose in queries (see the sketch after this list)
- `dolt sql-server` now supports writes, which means that the working set will be updated by `UPDATE`, `INSERT`, `DELETE`, and other statements which change data. The SQL server was previously read-only. This fixes #549. Important caveat: only one concurrent connection is allowed to prevent data races. Support for better concurrency is being tracked in #579
- functions `user()`, `left()`, and `if()` are now supported, motivated by getting Dolt working with DataGrip
- saved queries now execute at the command line with the `dolt sql -x` option
- more complete implementation of the `information_schema` database
- JSON output option for SQL query results with `dolt sql -r json`
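As a rough sketch of how the multi-database and JSON options above fit together (the paths, database, and table names are placeholders, and the single-query `-q` flag is assumed from `dolt sql`'s existing usage):
# open a SQL shell over every dolt repository found under the given directory
$ dolt sql --multi-db-dir /path/to/repos
SHOW DATABASES;
USE my_database;
SELECT * FROM my_table;
# JSON-formatted results for a single query
$ dolt sql -r json -q "SELECT * FROM my_table"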
VCS in SQL
As well as making our SQL implementation compliant with MySQL, we are also committed to making all VCS operations available on the command line accessible via the SQL interface.
- we now have a `dolt_branches` system table where the list of branches on a repo is surfaced in SQL
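For example, listing branches becomes a one-line query (a minimal sketch, assuming the system table surfaces as `dolt_branches` as described above):
$ dolt sql -q "SELECT * FROM dolt_branches"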
Remotes
We have now fixed AWS S3 remotes, so if you want to use your own S3 infrastructure as a backend, you can do that. See the `dolt remote` CLI documentation for details. While we love to see data and users on DoltHub, we are committed to making Dolt an open standard, and that means being useful to the broadest possible audience.
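An illustrative sketch only, with placeholder AWS resource names; the exact remote URL format is covered in the `dolt remote` documentation:
$ dolt remote add origin aws://[my-dynamodb-table:my-s3-bucket]/my-repo
$ dolt push origin master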
Bug Fixes etc.
As well as fixing the issue with remotes, we fixed a number of other bugs:
- fixed bugs when checking out or merging working set doc files
- Better SQL error messages
- SQL queries respect the case of column aliases, issue here
- Queries required by JetBrains DataGrip are now supported, issue here.
Merged PRs
- 577: streaming map edits
- 576: Help Fix
As identified by Asgavar in #553, there is a segfault caused by differences in logic between `isHelp` and the `Parse` function of the `ArgParser`. I found that changing the Parser to be like the `isHelp` function caused issues for some commands if you have a branch named `help` or a table named `help`. As a result I opted to change the `isHelp` logic instead.
Thank you @Asgavar
- 572: More Types V2
Have fun @zachmu
- 571: Skipping git-dolt bats tests on Windows due to flakiness
Is this fine? By putting it at the end of the `setup` function, it's equivalent to manually putting a skip on every test. Then whenever we fix it, we can just delete it in one place.
- 569: Andy/migrate push pull
- 568: Zachmu/sql updates
Implemented auto-commit database flavor, and use it in SQL server. Also:
- Fix prompt formatting for shell
- Update result printing for non-SELECT queries
- Use a dolt version string for SQL server
This relies on dolthub/go-mysql-server#84. Will update go.mod once it's checked in.
- 565: SQL Reserved word and SQL Keyword in column name tests
- 564: go/libraries/doltcore/env: paths.go: Consult HOME environment variable for home location before consulting os/user.
- 563: Dockerfile: Bump golang version; use mod=readonly.
- 559: Andy/migration refactor
- 556: Basic SQL batch mode bats tests
- 554: Skipped bats test for help command segfault
- 552: using correct root
- 551: Zachmu/sql json
Added JSON result output to sql command. Also fixed handling of NULL values in CSV output.
This fixes #533
- 548: read only version of branches table
- 547: Km/doc checkout bug
Taylor brought a bad docs bug to my attention. If you had a modified doc and then ran `dolt checkout <branch>` or `dolt checkout -b <branch>`, your local docs would be overwritten by the working root (your changes vanish).
The intended behavior is to keep the working set exactly as it is on `dolt checkout <branch>`, given there are no conflicts. Since your "working set" for docs is technically on the filesystem, and not on the roots, it was getting wiped. Now I'm pulling the unstaged `DocDiffs` from the filesystem, and excluding those when it comes time to `saveDocsOnCheckout`.
Added bats test coverage too
- 546: /bats/1pk5col-strings.bats: Add skipped test for exporting to csv then reimporting
- 545: /benchmark/sql_regressions/DoltRegressionsJenkinsfile: Add sql watcher 3
- 544: Zachmu/bheni patch
Fixed bug in parsing URLs of relative file paths.
- 543: bats/aws-remotes.bats: Enable test for push.
- 542: Added skipped bats test for issue 538
#538
- 540: interface for multi-db and tests
- 539: Db/dolt harness test
Pretty simple tests, but I think they are effective. Tested against the commit that initially caused the breaks, and these tests failed... they should also have caught the "unable to find table" errors that occurred during the harness fixing iteration process. Seems fine to me! LMK
- 537: Jenkinsfile: Add environment variables for running AWS remote bats tests.
- 536: bats/aws-remotes.bats: Add some smoke tests for AWS remotes interactions.
Currently these get skipped in CI. Will follow up with CI changes after this lands as they will require some external state creation and some small infra work.
Push is skipped here and can be unskipped after #531 lands.
Clone is skipped here and will remain skipped until another PR fixes it. I took a pass this morning but clone logic has gotten a little hairy and I wasn't happy with the progress I was making. Going to take another pass soon.
- 535: /benchmark/sql_regressions/run_regressions.sh: Silently exit if dolt version exists in dolt-sql-performance repo
- 534: go/cmd/dolt: commands/clone: In the case that Clone bailed early, we hung indefinitely.
- 531: store/datas/database.go: Update datas.CanUsePuller to check for write support in TableFileStore.
The TableFileStore optimizations accidentally broke push support for non-doltremoteapi/non-filepath chunk stores. Attempting to push an AWS remote currently results in an error.
This is a quick minimal PR to check for write support in a TableFileStore and not use the push/pull fast path if it isn't supported.
- 530: store/datas/database.go: Update datas.CanUsePuller to check for write support in TableFileStore.
The TableFileStore optimizations accidentally broke push support for non-doltremoteapi/non-filepath chunk stores. Attempting to push an AWS remote currently results in an error.
This is a quick minimal PR to check for write support in a TableFileStore and not use the push/pull fast path if it isn't supported.
- 528: go/cmd/dolt/commands: Handle some verrs which were previously being discarded
I noticed some places where we were discarding some verr values. Looked quickly for open issues related to these and didn't find any, but the errors seem pretty important. The refactorings are maybe a little tricky so I appreciate as many eyes as I can get.
- 527: Andy/autogen import
- 526: /benchmark/sql_regressions/run_regressions.sh: Fix error messages
- 525: Fixed bad regexes in diff --sql tests.
Now these are failed tests that need to be fixed.
Andy, sorry I did not catch these bad regexes in review the first time. Unfortunately, these are working regexes now and the tests do not test what you think they do. We need to go back and fix the tests or fix the bad behavior in `diff --sql`.
- 524: Tim/new diff sql tests
- 523: fix logictest
- 522: go/libraries/doltcore/env/dolt_docs.go: Add newline to end of initial LICENSE/README text
Fixes this:
- 519: bats/sql.bats: Add skipped bats test for ON DUPLICATE KEY UPDATE insert
- 518: Andy/autogen migrate
use the same tag generation method in the migration that we do in creating tables
The old migration just assigned sequential tags to all of the columns in the repo.
- 517: Multi DB
- 514: CSV and JSON Import Behavior Changes
Based on what was discussed before, this PR modifies the import behavior so that, when importing CSV files, if the destination schema has a bool replacement (`BIT(1)` or `TINYINT`/`BOOLEAN`) then it will parse `TRUE` and `FALSE` (and other variations) as their respective integers. This restores a behavior that was removed when the new type system was introduced and all of the code paths were coalesced.
In addition, this PR also fixes a bug where importing a null value through JSON would ignore a column having `NOT NULL` set.
- 513: Fixed a bug in the newly created skipped bats test for diff --cached …
…where I referred to instead of correctly using
- 512: Added skipped bats test for dolt diff --cached
- 511: Fix doltharness
- 508: /benchmark/sql_regressions/run_regressions.sh: Fix mean query to include group by test_file
Need to group by both PKs.
- 506: Andy/force push & force fetch
- 503: Bh/puller error fix
- 500: go/cmd/dolt/commands/diff.go: Handle error from `rowconv.NewJoiner` in `diffRows`
I was just poking around a little bit and noticed that we weren't handling this error.
- 497: Changes to support per connection sql state
- 494: Bumped version
- 492: fix return value on failure to push
- 86: Zachmu/autocommit
Bug fix for interpreting autocommit
- 85: Zachmu/autocommit
Auto commit
- 84: Zachmu/update results
Added support for OkResult in result sets, which mirrors the OK packet that MySQL sends on non-SELECT queries.
Also:
- Added MaxConns param to server
- Added Support for Commit statements (no op)
Depends on dolthub/vitess#21
- 82: Zachmu/alias case
Case sensitive column aliases in result schema.
- 81: Zachmu/datagrip
Lots of changes related to getting DataGrip working:
- Better information_schema support. Several tables have no rows, but are defined with the correct schema.
- Better view support, including "show create view"
- Added several new functions: LEFT, INSTR, IF, SCHEMA, USER
- Added support for unquoted strings in SET VARIABLE statements
- Eliminated several instances of custom parsing and used vitess directly instead
- 80: Fixed case branches worrying about type
For issue #529
- 79: Error handling for USE DB_NAME when DB_NAME is not valid
- 78: Added support for queries that don't specify a table when there is no table selected.
- 77: Current db in session
- 76: per connection state
Want your thoughts on moving the IndexRegistry and the ViewRegistry out of the Catalog object and making them accessible as part of the context. It's now up to the SessionBuilder implementation to provide the IndexRegistry and ViewRegistry for a session. In dolt we would register indexes and views at that point, and they would be altered when the connection changes what head is pointed at.