
Releases: dolthub/dolt

0.22.5

03 Dec 19:14

This is a patch release that contains no new features or bug fixes; it only corrects the Homebrew release automation.

Merged PRs

  • 1065: fix typo in GA for Homebrew
  • 1064: Fix brew formula

0.22.3

03 Dec 03:14

Merged PRs

  • 1062: Updated go-mysql-server with a patch to fix failing function expressions
  • 238: Zachmu/funcs
    Got rid of all embedded function fields in function types, since they make it impossible for the analyzer to finish (function fields are not comparable with reflect.DeepEqual, which the analyzer uses to decide whether the query plan has settled).

0.22.2

02 Dec 06:34

Merged PRs

  • 1059: pass in-memory gc gen when we conjoin
    On the conjoin path, we're not passing the "garbage collection generation" when we update the manifest. NomsBlockStore interprets this as the conjoin having been preempted by an out-of-process write and blocks the write.
  • 1052: Vinai/dolt commit author no config
    Add a bats test that models the following behavior:
    1. Unsets name and user
    2. Makes a SQL change
    3. Adds a commit with --author

0.22.1

25 Nov 02:17

Merged PRs

  • 1045: Consolidated benchmark directory
  • 1043: Vinai/1034 add author info
    This adds the --author flag to dolt commit
  • 1042: Vinai/clean up tags
    Cleans up some of the comments left on #1041
  • 1041: Vinai/1023 remove tag info
    Fixes #1022 and #1023
  • 1040: go/libraries/doltcore/remotestorage: Add hedged download requests to remotestorage/chunk_store.
  • 1039: go/libraries/doltcore/remotestorage: Refactor urlAndRanges a bit.
  • 1038: go/libraries/doltcore/remotestorage: Simplify concurrentExec implementation with errgroup.
  • 1037: proto: Add StreamDownloadLocations to ChunkStoreService.
  • 1036: go/cmd/dolt/commands: added write queries and ancestor commit to filter-branch
  • 1033: Temporary parallelism implementation on indexes
  • 1031: reset --hard
  • 1029: Added dolt_version()
    As version() is used to emulate the target MySQL version, I've added dolt_version() so that one may specifically query the Dolt version (a usage sketch follows this list).
  • 1026: Increased the default sql server timeout to 8 hours
  • 1025: go/libraries/doltcore/remotestorage: chunk_store.go: Small improvements to GetDownloadLocations batch size and HTTP GET error logging.
  • 1024: dolt filter-branch
  • 1022: Skipping tests broken by recent changes to info schema (EXTRA)
  • 1021: != operator now uses indexes
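
Regarding dolt_version() (#1029 above), a minimal usage sketch contrasting the two functions:

$ dolt sql
> select version();       -- reports the emulated MySQL version
> select dolt_version();  -- reports the Dolt release version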

Closed Issues

  • 1034: Add --author option to dolt commit
  • 1023: Remove tag info from EXTRA in dolt SQL schema

0.22.0

12 Nov 02:12

We are excited to announce the minor version release of Dolt 0.22.0.

SQL Tables

We continue to expand the SQL tables that surface information about the commit graph. In this release we added the following (a short query sketch follows the list):

  • dolt_commits
  • dolt_commit_ancestors
  • dolt_commit_diffs_<table>
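
A minimal query sketch for the new tables (the table name mytable is hypothetical, and the commit hashes in the diff query are placeholders; the diff table is assumed to be scoped with from_commit/to_commit filters):

$ dolt sql
> select * from dolt_commits limit 5;
> select * from dolt_commit_ancestors limit 5;
> select * from dolt_commit_diffs_mytable where from_commit = 'abc123' and to_commit = 'def456';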

SQL

We added support for prepared statements to our SQL server.
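Prepared statements are normally exercised through a client connector over the MySQL binary protocol. As a rough sketch only, the text-protocol equivalent looks like the following; the table t is hypothetical, and whether the PREPARE statement form itself is accepted depends on the go-mysql-server version, so treat this as illustrative:

$ dolt sql
> create table t (id int primary key, name varchar(20));
> prepare stmt from 'select name from t where id = ?';
> set @id = 1;
> execute stmt using @id;
> deallocate prepare stmt;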

Merged PRs

  • 1019: Fix dolt ls --all
  • 1018: mysql-client-tests: Add some simple client connector tests for prepared statements.
  • 1016: Rewrote the README
  • 1015: go/go.mod: Bump go-mysql-server; support prepared statements.
  • 1014: Added bats test for index merging from branch without index
  • 1013: dolt_commits and dolt_commit_ancestors tables
  • 1012: added reset_hard() sql function
  • 1011: Bh/commit diff
  • 1009: Richer commit message for Dolt Homebrew bump
  • 1008: Mergeable Indexes Pt. 2
    Tests for mergeable indexes
  • 1002: s/liquidata-inc/dolthub/ for ishell and mmap-go
  • 1001: NewCreatingWriter breaks dolthubapi with recent changes
    There might be a better fix for this, but dolthubapi uses NewCreatingWriter, which breaks with Andy's recent changes.
  • 233: Reorder Master
  • 232: Indexes search for exact match rather than submatches
  • 231: added 'auto_increment' to EXTRA field for DESCRIBE table;
  • 229: Add support for prepared statements.

Closed Issues

  • 1007: dolt push does not seem to push correctly in Windows Powershell
  • 1003: AUTO_INCREMENT column info does not display in describe table output

0.21.4

06 Nov 03:06

Merged PRs

  • 999: Another fix to brew bump job
  • 997: Fix typo in brew bump
    From a failed run on the most recent release (screenshot of the failing step).

0.21.2

06 Nov 01:38

Merged PRs

  • 995: support for ALTER TABLE AUTO_INCREMENT
  • 994: Updated namespace for sqllogictest
  • 993: Added WSL notice to README
  • 990: mysql auto increment semantics
  • 989: Fix a few docs typos
  • 988: {bats, go}: Some fixes to InferSchema and add bats test
  • 987: Turbine Import Fix
  • 985: go/**/*.go: Update copyright headers for company name change.
  • 982: go/libraries/utils/async: Have ActionExecutor use sync.WaitGroup.
  • 981: Attempt to clean up error signaling in diff summary.
  • 980: In prettyPrint, defer closing the iterator before doing anything else
    We were missing a close() call when an UPDATE or INSERT etc. had an error during cursor iteration, leaving a server process running. Also saves the SQL history file before executing the query, so it is preserved even if the user interrupts execution.
  • 976: /.github/workflows/ci-go-tests.yaml: run go tests only when go/ changes
    I think this might be a good addition... will only run go tests when there are go changes
  • 975: Extract some import logic to be used in dolthubapi
    In reference to this comment: https://github.com/dolthub/ld/pull/5262#discussion_r514465176
    I had some duplicate logic in dolthubapi for the import table API. I extracted some logic so that I can use InferSchema and MoveDataToRoot to reduce the duplication.
  • 974: Skipped two newly added test queries that don't work in dolt yet
  • 973: Support for COM_LIST_FIELDS, fixed SHOW INDEXES
  • 972: Update README.md
    Removed errant Liquidata reference
  • 971: Added GitHub workflow tests for race conditions
    Will fail until #967 is merged into master, however the workflow only works when the PR is based against master. Therefore this PR does not target the aforementioned PR's branch.
  • 970: Memory fix for CREATE INDEX
    Used a pre-existing 16 million row repo to test CREATE INDEX memory usage.
    Before:
    72.47GB RAM Usage
    18min 48sec
    After:
    1.88GB RAM Usage
    2min 2sec
    Copied the same strategy used in table_editor.go to periodically flush the contents once a set number of operations have been performed.
  • 967: go: Make all tests pass under -race.
  • 966: go/store/types/edits: Rework AsyncSortedEdits to use errgroup, and a transient goroutine for each work item.
  • 965: dolt merge --no-ff
  • 225: Andy/mysql auto increment
  • 224: Zachmu/xx
    Use xxhash everywhere, and standardize the construction of hash keys.
  • 223: Zachmu/in subquery
    Implemented hashed lookups for IN (SELECT ... ) expressions. This is about 5x faster than using indexed lookups into the subquery table in tests.
    In a followup I'm going to replace the existing CRC64 hashing with xxhash everywhere it's used, so we're back to a single hash function.
  • 221: Fixed bug in delete and update caused by indexes being pushed down to tables
  • 220: Support for COM_LIST_FIELDS, fixed SHOW INDEXES
  • 219: Zachmu/turbine perf
    1. Do pushdown analysis within subqueries
    2. Push index lookups down to tables in subqueries
  • 218: Fix unit tests to run with -race.
  • 217: validate auto_increment on in-line and out-of-line PK defs

Closed Issues

  • 978: Support UNIQUE in CREATE TABLE statements, not just in CREATE INDEX statements
  • 962: Index creation must not be limited by working memory
  • 961: UNIQUE does not work on field level
  • 959: Cannot create UNIQUE index on FK fields; Dolt considers it a duplicate

0.21.1

27 Oct 02:56

We are excited to announce the release of Dolt 0.21.1, a patch release with functionality and performance improvements.

Benchmarks

A significant new aspect of the Dolt release process will be providing SQL benchmarks. You can read a blog about our approach to benchmarking using sysbench here, and you can find the benchmarking tools here. By way of example, the benchmarks for this release were created with the following command:

./run_benchmarks.sh bulk_insert oscarbatori v0.21.0 v0.22.1

This produced benchmark results, which we host on DoltHub.

Merged PRs

  • 957: go/store/{datas,util/tempfiles}: Fix some races in map writes. One affects clones, one affects only tests.
  • 953: create auto_increment tables with out-of-line PK defs
  • 952: go/libraries/doltcore/sqle: Add support for UPDATE and DELETE using table indexes.
  • 949: auto increment
  • 947: don't drop column values on column drop
  • 946: go/cmd/dolt: commands/sql: Small improvement to only call rowIter.Close() once on sql results iterators.
  • 945: Use docker-compose for orchestrating benchmarking
  • 944: go/store/types: value_store: Optimize GC to work in parallel and use less memory.
  • 942: feature gating
  • 941: Upgraded to latest go-mysql-server and re-enabled query plan tests
  • 939: Added new indexes overwriting auto-generated indexes
  • 938: go/store/{nbs,chunks}: Convert some core methods to provide results in callbacks. Convert some functions to use errgroup.
  • 937: Update README.md with the latest dolt commands
  • 934: Add go routine to clone
    I parallelized the table file writing process using goroutines. Specifically, I made use of the golang.org/x/sync/errgroup package, which allows for convenient error management across a waitgroup.
    A couple of benchmarks I tested this on:
    1. Dolt-benchmarks-test: no real difference in speed
    2. Coronavirus: originally ~30sec, now ~15sec
    3. Tatoeba Sentence Translation: originally ~17min, now ~10min
  • 933: /go/libraries/doltcore/diff: Ignore NULLs in cell-wise diff
    fix for #899
    The from-root in this repo has NULL values written to the map, which causes erroneous diffs.
    https://www.dolthub.com/repositories/dolthub/us-supreme-court-cases/compare/master/hb502v6tf3uj43ijfhot6dopmgdm1muk
  • 932: /go/cmd/dolt/commands: Help Text Fix
  • 216: Updated sql.MergeableIndexLookup interface
  • 215: memory: *_index.go: Construct sql equality evaluations with accurate types in the literals.
  • 214: auto increment
  • 213: triggers bugfix
    Fixed bug in insert triggers, which couldn't handle out-of-order column insertions.
    Fixes #950
  • 212: sql/analyzer: pushdown.go: Allow pushdown on Update, RowUpdateAccumulator and DeleteFrom plan nodes.
  • 211: join bugs
  • 210: sql/plan: {update,insert,update,process}.go: Fix some potential issues with context lifecycle and reuse.
    • insert, update, delete: Only call underlying table editors with our captured
      context once when we are Close(). Return a nil error after that.
    • process: Change to only call onDone when the rowTrackingIter is Closed.
    • process: Change to call childIter.Close() before onDone is called. Child
      iterators have a right to Close() before the context in which they are
      running is canceled.
  • 208: Create UNIQUE index if present in column definition
  • 207: Pushdown and plan decoration
    Two major changes:
    1. Changes to pushdown operation, to push table predicates below join nodes and to fix many bugs and deficiencies. Also large refactoring.
    2. Added DecoratedNodes to query plans to illustrate when indexes are being used to access tables outside the context of a join

0.21.0

13 Oct 21:56

We are excited to announce the release of Dolt 0.21.0. This release contains a host of exciting features, as well as our usual blend of bug fixes and performance improvements.

Squash merge

As a result of our own internal data collaboration projects, we realized that a squash command for condensing change sets before presenting them to collaborators was an essential tool. This is now in Dolt.
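A hedged sketch of the workflow, assuming a feature branch named feature and the --squash flag on dolt merge:

$ dolt checkout master
$ dolt merge --squash feature
$ dolt add -A
$ dolt commit -m "squashed changes from feature"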

NFS Mounted Drives

A user highlighted that Dolt didn't work with NFS mounted drives due to the way it was interacting with the filesystem. We have now fixed this.

Garbage Collection

We now have a dolt gc command for cleaning up unreferenced data. This was requested by several users as a space saving mechanism in production settings.
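Usage is a single command run from inside a Dolt repository; unreferenced chunks are removed from local storage:

$ dolt gc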

Performance Improvements

We continue to aggressively pursue performance improvements, most notably a huge improvement in full table scans.

sysbench tooling

As we detailed in a blog post yesterday, we have created tooling to provide our development team and contributors with a simple way to measure SQL performance. For example, to compare an arbitrary commit to the current working set (to test whether changes introduce the expected performance benefits):

$ ./run_benchmarks.sh bulk_insert <username> 19f9e571d033374ceae2ad5c277b9cfe905cdd66

This will build Dolt at the appropriate commits, spin up containers with sysbench, and execute the benchmarks.

Documentation Fixes

An open source contributor provided several fixes to our CLI documentation, which we have gratefully merged.

GCP Remotes

We have fixed Google Cloud Platform remotes, motivated by a bug report from a user experimenting with Dolt.
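As a sketch (the bucket and path are hypothetical), a GCP remote is addressed with a gs:// URL:

$ dolt remote add gcp gs://my-bucket/my-database
$ dolt push gcp master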

Merged PRs

  • 930: Bump go-mysql-server
  • 929: store/types: value_store.go: GC implementation uses errgroup instead of atomicerr.
  • 928: gc chunks
    Implements garbage collection by traversing a Database from its root chunk and copying all reachable chunks to a new set of NBS tables.
    While "garbage collection generation" will protect the NBS from corruption by out-of-process writers, GC is not currently thread safe for concurrent use in-process. Getting to online GC will require work around protecting in-progress writes that are not yet reachable from the root chunk.
  • 927: /.github/workflows/ci-bats-tests.yaml: skip aws tests if no secrets found
  • 925: benchmark tools
  • 923: doc corrections
    fixed some typos (I think 😊)
  • 922: go/util/sremotesrv: grpc.go: Echo the client's NbsVersion in GetRepoMetadata.
  • 921: fix gcp remotes
  • 920: go/go.mod: Adopt dolthub/fslock fork. Forked version uses Open(RDRW) for lock file on *nix, which works on NFS.
  • 918: /.github/workflows/ci-bats-tests.yaml: remove deprecated syntax
  • 917: Increase maximum SQL statement length to 100MB (initially 512K)
  • 915: Daylon's suggestions for bheni perf PR Pt. 2
  • 914: Fix for reading old dolt_schemas
  • 913: squash merge
  • 912: go/store/{datas,nbs}: Use application-level temp dir for byte sink chunk files with datas.Puller.
  • 911: Daylon's suggestions for bheni perf PR
  • 910: Adding "Garbage Collection Generation" to manifest file
    This new manifest field will support NomsBlockStore garbage collection and protect against NBS corruption. Storing gcGen in the manifest will support deleting chunks from an NBS in a safe way. NBS instances that see a different gcGen than they saw when they last read the manifest will error and require clients to re-attempt their write.
    NBS will now have three forms of write errors (not including IO errors or other kinds of unexpected errors):
    • nbs.errOptimisticLockFailedTables: Another writer landed a manifest update since the last time we read the manifest. The root chunk is unchanged and the set of chunks referenced in the manifest is either the same or has strictly grown. Therefore the NBS can handle this by rebasing on the new set of tables in the manifest and re-attempting to add the same set of novel tables.
    • nbs.errOptimisticLockFailedRoot: Another writer landed a manifest update that includes a new root chunk. The set of chunks referenced in the manifest is either the same or has strictly grown, but it is not known which chunks are reachable from the new root chunk. The NBS has to pass this value to the client and let them decide. If the client is a datas.database it will attempt to rebase, read the head of the dataset it is committing to, and execute its mergePolicy (Dolt passes a noop mergePolicy).
    • chunks.ErrGCGenerationExpired: This is similar to a moved root chunk, but with no guarantees about what chunks remain in the ChunkStore. Any information from CS.Has(ctx, chunk) is now stale. Writers must rewrite all data to the chunkstore.
  • 909: use tr to lowercase output instead of ${output,,}
    Lowercasing via parameter expansion (${output,,}) is only supported on Bash 4+. I switched to using tr so I could run the tests locally.
  • 205: Implemented drop trigger
    As discussed, we disallow dropping any triggers that are referenced in other triggers.
  • 204: Added trigger statements

0.20.2

02 Oct 19:38

We are excited to announce the release of Dolt 0.20.2, including a minor version bump as we introduce a new feature: SQL triggers.

SQL Triggers

SQL triggers are SQL snippets that are executed automatically when rows are inserted, updated, or deleted. Here is a simple example, taken from the blog post announcing the feature:

$ dolt sql
> create table a (x int primary key);
> create table b (y int primary key);
> create trigger adds_one before insert on a for each row set new.x = new.x + 1;
> insert into a values (1), (3);
Query OK, 2 rows affected
> select * from a;
+---+
| x |
+---+
| 2 |
| 4 |
+---+

Any legal SQL statement can be executed as a trigger; here we just defined a simple increment.

Merged PRs

  • 908: Added comments for clarity
  • 907: Release
  • 906: Fixed conflict resolution and additional trigger tests
  • 905: Updated to latest go-mysql-server
  • 904: Added trigger functionality to Dolt
  • 900: Reference new org name
  • 897: Fixed CREATE LIKE multi-db
    Fixes #654
  • 896: Moved everything over to SHOW CREATE TABLE and fixed diff panic
  • 894: Fixed UNIQUE NULL handling to match MySQL
  • 892: Andy/gc table files
  • 890: Working Ruby ruby/mysql test
    Not to be confused with mysql/ruby which uses the MySQL C API.
  • 889: Release
  • 202: Zachmu/triggers 5
    Added additional validation for trigger creation and execution:
    • Use of NEW / OLD row references
    • Circular trigger chains
  • 200: Zachmu/triggers 4
    Support for DELETE and UPDATE triggers
  • 199: Reference new org name
  • 198: Added proper support for SET NAMES, and also turned off strict checking for setting unknown system variables.
  • 197: Zachmu/user vars
    User vars now working. Can stomp on a system var of the same name, as before my last batch of changes.
  • 196: Allow CREATE TABLE LIKE to reference different databases
  • 195: Zachmu/triggers 3
    This gets SET new.x = foo expressions working for triggers. This required totally rewriting how we were handling setting system variables as well, since these two kinds of statements are equivalent in the parser.
    Also deletes the convert_dates analyzer rule, which impacts 0 engine tests.
  • 194: No longer return span iters from most nodes by default.
  • 193: Implemented CREATE TABLE LIKE and updated information_schema
    Tests will come in a separate PR