Releases: louthy/language-ext
Language-Ext 5.0 alpha-1
This release should only be consumed by those who are interested in the new features coming in the monster v5
release.
Just to give you an idea of the scale of this change:
- 193 commits
- 1,836 files changed
- 135,000 lines of code added (!)
- 415,000 lines of code deleted (!!)
It is a monster and should be treated with caution...
- It is not ready for production
- It is not feature complete
- The new features don't have unit tests yet and so are probably full of bugs
- I haven't yet dogfooded all the new functionality, so it may not seem as awesome as it will eventually become!
If you add it to a production project, you should only do so to see (potentially) how many breaking changes there are. I would not advise migrating a production code-base until I get close to the final release.
I am also not going to go into huge detail about the changes here; I will simply list them as headings. I will do a full set of release notes for the beta release. You can, however, follow the series of articles I am writing to help you all prep for v5 -- it goes (and will go) into much more detail about the features.
New Features
- Higher-kinded traits
  - `K<F, A>` - higher-kinds enabling interface
  - Includes:
    - Definitions (interfaces listed below)
    - Static modules (`Functor.map`, `Alternative.or`, `StateM.get`, ...)
    - Extension methods (`.Map`, `.Or`, `.Bind`, etc.)
    - Extension methods that replace LanguageExt.Transformers (`BindT`, `MapT`, etc.), now fully generic
    - Trait implementations for all Language-Ext types (`Option`, `Either<L>`, etc.)
  - `Functor<F>`
  - `Applicative<F>`
  - `Monad<M>`
  - `Foldable<F>`
  - `Traversable<T>`
  - `Alternative<F>`
  - `SemiAlternative<F>`
  - `Has<M, TRAIT>`
  - `Reads<M, OUTER_STATE, INNER_STATE>`
  - `Mutates<M, OUTER_STATE, INNER_STATE>`
  - `ReaderM<M, Env>`
  - `StateM<M, S>`
  - `WriterM<M, OUT>`
  - `MonadT<M, N>`
- Monad transformers
  - `ReaderT<Env, M, A>`
  - `WriterT<Out, M, A>`
  - `StateT<S, M, A>`
  - `IdentityT<M, A>`
  - `EitherT<L, M, R>`
  - `ValidationT<F, M, S>`
  - `OptionT<M, A>`
  - `TryT<M, A>`
  - `ResourceT<M, A>`
- `Free<F, A>` - free monads
- `IO<A>` - new IO monad that is the base for all IO
- `Eff<RT, A>` monad rewritten to use monad-transformers (`StateT<RT, ResourceT<IO>, A>`)
- `Eff<RT, A>` doesn't need the `HasCancel` trait (or any trait)
- Transducers
- `Pure`/`Fail` monads
- Lifting
- Improved guards, when, unless
- Nullable annotations (still WIP, mostly complete on Core)
- Collection initialisers
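To give a flavour of what the higher-kinded traits enable, here is a speculative sketch. The names are taken from the headings above (`K<F, A>`, `Functor.map`, the `.Map` extension); the `.As()` downcast and the argument order of `Functor.map` are assumptions, and the alpha surface may well differ:

```csharp
// Speculative sketch of the v5 trait API -- not verified against alpha-1.
using LanguageExt;
using static LanguageExt.Prelude;

Option<int> mx = Some(123);

// Option<A> implements K<Option, A>, so it can be passed anywhere a
// general functor is expected:
K<Option, int> kx = mx;

// Static trait-module form:
K<Option, int> ky = Functor.map(x => x + 1, kx);

// Extension-method form, with an (assumed) downcast back to the concrete type:
Option<int> my = kx.Map(x => x + 1).As();
```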
Breaking changes
- netstandard2.0 no longer supported (.NET 8.0+ only)
- `Seq1` made `[Obsolete]`
- 'Trait' types now use static interface methods
- The 'higher-kind' trait types have all been refactored
- The `Semigroup<A>` and `Monoid<A>` types have been refactored
- The static `TypeClass` class has been renamed `Trait`
- `Apply` extensions that use raw `Func` removed
- Manually written `Sequence` extension methods have been removed
- Manually written `Traverse` extension methods have been removed
- `ToComparer` doesn't exist on the `Ord<A>` trait any more
- Renamed `LanguageExt.ClassInstances.Sum`
- `Guard<E>` has become `Guard<E, A>`
- `UnitsOfMeasure` namespace converted to a static class
- `Either` doesn't support `IEnumerable<EitherData>` any more
- `Either` 'bi' functions have their arguments flipped
- Nullable (struct) extensions removed
- Support for `Tuple` and `KeyValuePair` removed
- Types removed outright:
  - `Some<A>`
  - `OptionNone`
  - `EitherUnsafe<L, R>`
  - `EitherLeft<L>`
  - `EitherRight<L>`
  - `Validation<MFail, Fail, A>`
  - `Try<A>`
  - `TryOption<A>`
  - `TryAsync<A>`
  - `TryOptionAsync<A>`
  - `Result<A>`
  - `OptionalResult<A>`
- Async extensions for `Option<A>` removed:
  - `ExceptionMatch`
  - `ExceptionMatchAsync`
  - `ExceptionMatchOptionalAsync`
- Libraries removed outright:
  - `LanguageExt.SysX`
  - `LanguageExt.CodeGen`
  - `LanguageExt.Transformers`
Bug fixes and minor improvements release
New:
- Support for `IfFail` in `Try`, `TryOption`, `TryAsync`, and `TryOptionAsync` - thanks @mark-pro 👍
- Updated prelude for the Arithmetic typeclass - thanks @benjstephenson 👍
- Improvements to the `Effects` samples
Bug fixes:
Breaking change: OptionAsync await + Producer.merge
This is a fixes release.
OptionAsync
I have brought forward a change to `OptionAsync` that I was saving for v5: the removal of the async-awaiter. You can no longer `await` an `OptionAsync`. What the resulting value should be wasn't clear and, honestly, the `async`/`await` machinery is quite shonky outside of its use with tasks.
I have made the `OptionAsync` implementation aware of nullable references, so you can now `await` the `Value` property instead:
public Task<A?> Value
That reproduces the same behaviour as before. You can still `await` the `ToOption()` method, which returns a `Task<Option<A>>`, if you want to match on the underlying option. Or call the various `Match*` methods.
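A quick sketch of the new usage (`SomeAsync` is the standard prelude constructor; `Value` and `ToOption()` are as described above):

```csharp
using LanguageExt;
using static LanguageExt.Prelude;

OptionAsync<int> mx = SomeAsync(123);

// Instead of `await mx`, await the nullable Value property:
int? x = await mx.Value;              // the value, or null when in a None state

// Or recover the synchronous Option for matching:
Option<int> opt = await mx.ToOption();
string msg = opt.Match(Some: v => $"Got {v}", None: () => "Nothing");
```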
This release fixes the following issues:
`Producer.merge` error handling
Producer merging was silently ignoring errors. Merged producers now exit, return the first error, and shut down the other producers they were merged with. Merged producers also listen for cancellation correctly now.
Finally, you can only merge producers with a bound value of `Unit`. This is to stop the silent dropping of their return value, as well as the need to provide a final (bound) value for merged producers, which doesn't really make sense. That also means the `+` operator can't work any more, because it can't be defined for the `Producer<..., A>` type. So you must use `Producer.merge`.
This fixes an issue mentioned in: #1177
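For illustration, a hedged sketch of merging two `Unit`-bound producers. The producer and consumer bodies are hypothetical, written in the same style as the Pipes examples elsewhere in these notes; the exact overload of `Producer.merge` is an assumption:

```csharp
using LanguageExt;
using LanguageExt.Pipes;
using LanguageExt.Sys.Live;
using static LanguageExt.Prelude;
using static LanguageExt.Pipes.Proxy;

public static class MergeExample
{
    // Two producers with a Unit bound value -- the only kind that can be merged now
    static Producer<Runtime, int, Unit> odds  => yieldAll(Seq(1, 3, 5));
    static Producer<Runtime, int, Unit> evens => yieldAll(Seq(2, 4, 6));

    static Consumer<Runtime, int, Unit> consume =>
        from x in awaiting<int>()
        from _ in LanguageExt.Sys.Console<Runtime>.writeLine($"{x}")
        select unit;

    // If either producer fails, the merged producer exits with the first
    // error and shuts the other producer down.
    public static Effect<Runtime, Unit> effect =>
        Producer.merge(odds, evens) | consume;
}
```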
`repeatM` doesn't cause a stack-overflow
Certain elements of the `Pipes` capability of language-ext are direct ports from the Haskell Pipes library, which uses recursion everywhere. `repeatM` was causing a stack-overflow on usage; this is now fixed.
Example usage:
public static Effect<Runtime, Unit> effect =>
Producer.repeatM(Time<Runtime>.nowUTC) | writeLine<DateTime>();
static Consumer<Runtime, X, Unit> writeLine<X>() =>
from x in awaiting<X>()
from _ in Console<Runtime>.writeLine($"{x}")
from r in writeLine<X>()
select r;
`repeat` improvements
Removed the `Repeat` case from the Pipes DSL, which simplifies it and brings it closer to the Haskell version. Updated the `repeat` combinator function to use the same `Enumerate` case that `yieldAll` uses. This has the benefit that it doesn't spread out when composed with other `Proxy` types. This should mean it's easier to pick bits of the expression to repeat, rather than the whole effect being repeated due to the spread.
Trampoline
Added trampolining functionality. It's relatively light at the moment; I am considering approaches to enable genuine recursion in the effects system. Don't rely on this: it may be removed if it doesn't prove useful, and it will almost certainly have API changes if it stays.
Breaking Change: Pipes enumerate
The Pipes functions `enumerate`, `enumerate2`, `observe`, and `observe2` have been deleted and replaced with `yieldAll` (which accepts `IEnumerable`, `IAsyncEnumerable`, or `IObservable`).
The previous implementation had mixed behaviours: some always yielded the values, some turned the remainder of the pipes expression into an enumeration. This wasn't entirely clear from the names, and so now there is a single set of `yieldAll` functions that always yield all the values in the collection downstream.
The behaviour of the always-yield `enumerate` functions was also buggy, and didn't result in the remainder of a `Producer` or `Pipe` being invoked after the `yield`:
public static Effect<Runtime, Unit> effect =>
repeat(producer) | consumer;
static Producer<Runtime, int, Unit> producer =>
from _1 in Console<Runtime>.writeLine("before")
from _2 in yieldAll(Range(1, 5))
from _3 in Console<Runtime>.writeLine("after")
select unit;
static Consumer<Runtime, int, Unit> consumer =>
from i in awaiting<int>()
from _ in Console<Runtime>.writeLine(i.ToString())
select unit;
In the example above, "after" would never be printed; this is now fixed.
There is also a new `&` operator overload for Pipes which performs the operations in series. This has the effect of concatenating Producers (for example), but will also work for `Pipe`, `Consumer`, `Client`, and `Server`.
// yields [1..10]
static Producer<Runtime, int, Unit> producer =>
yieldAll(Range(1, 5)) & yieldAll(Range(6, 5));
There's still work to do on repeat
, but this was quite a difficult change, so I'll leave that for now.
Other fixes:
Fixes and improvements release
This release puts out the `4.3.*` beta changes:
And contains a number of contributed improvements:
- Added LanguageExt Either and F# Result Converters
- Added Prism optics for manipulating Option fields
- Eq.ToEqualityComparer (Ord.ToComparer)
And a number of contributed bug fixes:
- Fix for broken `Task.Cast` and `ValueTask.Cast`
- Fix incorrect docstring for two LeftToSeq implementations
- Fix Distinct with custom comparer (hash bug)
- Fix for SearchOptions.TopDirectoryOnly option not working
- Fix for missing implementations of FileIO (test)
- Fix for Fin.GetHashCode throwing an exception when Fin.Succ is false
Thanks to all those who contributed. I am still super busy with other projects right now, and I don't always get to PRs as quickly as I would like, but it's always appreciated.
Any problems, please report in the Issues.
Refactored `Error` type + `Eff` and `Aff` applicative functors
There have been a number of calls on the Issues page for a ValidationAsync monad. Although it's a reasonable request (and I'll get to it at some point, I'm sure), when I look at the example requests it seems the requestors mostly want a smarter error-handling story in general (especially for the collection of multiple errors).
The error-type that I'm building most of the modern functionality around (in `Fin`, `Aff`, and `Eff`, for example) is the `struct` type: `Error`. It has been designed to handle both exceptional and expected errors, but the story around multiple errors was poor. Also, it wasn't possible to carry additional information with the `Error`; it was a closed type, other than the ability to wrap up an `Exception` - so any additional data payload was cumbersome and ugly.
Extending the `struct` type to be more featureful was asking for trouble, as it was already getting pretty messy.
`Error` refactor
So, I've bitten the bullet and refactored `Error` into an abstract `record` type.
`Error` sub-types
There are a few built-in sub-types:
- `Exceptional` - an unexpected error
- `Expected` - an expected error
- `ManyErrors` - many errors (possibly zero)
These are the key base-types that indicate the 'flavour' of the error. For example, a 'user not found' error *isn't* something exceptional; it's something we expect to happen. An OutOfMemoryException, however, *is* exceptional - it should never happen, and we should treat it as such.
Most of the time we want sensible handling of expected errors, and to bail out completely for anything exceptional. We also want to protect ourselves from information leakage. Leaking exceptional errors via public APIs is a sure-fire way to open up more information to hackers than you would like. The `Error`-derived types all try to protect against this kind of leakage without losing the context of the type of error thrown.
When `Exceptional` is serialised, only the `Message` and `Code` components are serialised. There's no serialisation of the inner `Exception` or its stack-trace. It is also possible to construct an `Exceptional` error with an alternative message:
Error.New("There was a problem", exception);
That means if the `Error` gets serialised, we only get "There was a problem" and an error-code.
Deserialisation obviously means we can't recover the `Exception`, but the state of the `Error` will still be `Exceptional` - so it's possible to carry the severity of the error across domain boundaries without leaking too much information.
`Error` methods and properties
Essentially, an error is either created from an `Exception` or it isn't. This allows expected errors to be represented without throwing exceptions, but it also allows for more principled error handling. We can pattern-match on the type, or use some of the built-in properties and methods to inspect the `Error`:
- `IsExceptional` - `true` for exceptional errors. For `ManyErrors` this is `true` if any of the errors are exceptional.
- `IsExpected` - `true` for non-exceptional/expected errors. For `ManyErrors` this is `true` if all of the errors are expected.
- `Is<E>(E exception)` - `true` if the `Error` is exceptional and any of the internal `Exception` values are of type `E`.
- `Is(Error error)` - `true` if the `Error` matches the one provided, i.e. `error.Is(Errors.TimedOut)`.
- `IsEmpty` - `true` if there are no errors in a `ManyErrors`
- `Count` - `1` for most errors, or `n` for the number of errors in a `ManyErrors`
- `Head()` - gets the first error
- `Tail()` - gets the tail of multiple errors
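As a quick illustrative sketch (the specific messages and codes are invented), combining an expected error with an exceptional one and inspecting the result:

```csharp
using System;
using LanguageExt.Common;

// `+` collects errors into a ManyErrors
var err = Error.New(404, "Page not found")
        + Error.New(new TimeoutException("upstream timed out"));

// ManyErrors reports exceptional if *any* member is exceptional
bool exceptional = err.IsExceptional;   // true: the TimeoutException member
int  count       = err.Count;           // 2
var  first       = err.Head();          // the (404, "Page not found") error
```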
You may wonder why `ManyErrors` could be empty. It allows for `Errors.None` - which works a little like `Option.None`. We're saying: "The operation failed, but we have no information on why; it just did".
`Error` construction
The `Error` type can be constructed as before, with the various overloaded `Error.New(...)` calls.
For example, this is an expected error:
Error.New("This error was expected")
When expected errors are used with codes, equality and matching is done via the code only:
Error.New(404, "Page not found");
And this is an exceptional error:
try
{
    // ... something that might throw ...
}
catch (Exception e)
{
    // This wraps up the exceptional error
    return Error.New(e);
}
Finally, you can collect many errors:
Error.Many(Error.New("error one"), Error.New("error two"));
Or, more simply:
Error.New("error one") + Error.New("error two")
`Error` types with additional data
You can extend the set of error types (perhaps for passing through extra data) by creating a new record that inherits `Exceptional` or `Expected`:
public record BespokeError(bool MyData) : Expected("Something bespoke", 100, None);
By default, the properties of the new error-type won't be serialised. So, if you want to pass a payload over the wire, add the `[property: DataMember]` attribute to each member:
public record BespokeError([property: DataMember] bool MyData) : Expected("Something bespoke", 100, None);
Using this technique it's trivial to create new error-types when additional data needs to be moved around, but there's also a ton of built-in functionality for the most common use-cases.
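Because a bespoke error is just a record deriving `Expected`, it composes with normal C# pattern matching. A hedged sketch consuming the `BespokeError` defined above (the `Describe` helper is hypothetical):

```csharp
using LanguageExt.Common;

public static class ErrorHandling
{
    // More-derived patterns must come before their base types
    public static string Describe(Error error) =>
        error switch
        {
            BespokeError (var myData) => $"Bespoke error, payload: {myData}",
            Expected e                => $"Expected: {e.Message} ({e.Code})",
            Exceptional               => "Something unexpected went wrong",
            _                         => "Unknown error"
        };
}
```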
`Error` breaking changes
- Because `Error` isn't a `struct` any more, `default(Error)` will now result in `null`. In practice this shouldn't affect anyone.
- `BottomException` is now in `LanguageExt.Common`
`Error` documentation
There's also a big improvement to the API documentation for the `Error` types.
`Aff` and `Eff` applicative functors
Now that `Error` can handle multiple errors, we can implement applicative behaviours for `Aff` and `Eff`. If you think of monads as enforcing sequential operations (which can only continue if each operation succeeds, leading to only one error report on failure), then applicative functors are the opposite in that their operations can run independently.
This is what's used for the `Validation` monads: to allow multiple operations to be evaluated, and then all of the errors collected.
By adding `Apply` to `Aff` and `Eff`, we can now do the same kind of validation-logic both synchronously and asynchronously.
Contrived example
First let's create a simple asynchronous effect that delays for a period of time:
static Aff<Unit> delay(int milliseconds) =>
Aff(async () =>
{
await Task.Delay(milliseconds);
return unit;
});
Now we'll combine that so we get an effect that parses a `string` into an `int`, and adds a delay of `1000` milliseconds (the delay is to simulate calling some external IO):
static Aff<int> parse(string str) =>
from x in parseInt(str).ToAff(Error.New("parse error: expected int"))
from _ in delay(1000)
select x;
Notice how we're converting the `Option<int>` to an `Aff`, and providing an error value to use if the `Option` is `None`.
Next, we'll use the applicative behaviour of `Aff` to run two operations in parallel. When they complete, the values will be applied to the function that has been lifted by `SuccessAff`.
static Aff<int> add(string sx, string sy) =>
SuccessAff((int x, int y) => x + y)
.Apply(parse(sx), parse(sy));
To measure what we're doing, let's add a simple function called `report`. All it does is run an `Aff`, measure how long it takes, and print the result to the screen:
static async Task report<A>(Aff<A> ma)
{
var sw = Stopwatch.StartNew();
var r = await ma.Run();
sw.Stop();
Console.WriteLine($"Result: {r} in {sw.ElapsedMilliseconds}ms");
}
Finally, we can run it:
await report(add("100", "200"));
await report(add("zzz", "yyy"));
The output for the two operations is this:
Result: Succ(300) in 1032ms
Result: Fail([parse error: expected int, parse error: expected int]) in 13ms
Notice how the first one (which succeeds) takes `1032ms` - i.e. the two parse operations ran in parallel. And on the second one we get both of the errors returned. The reason it finished so quickly is that the delay comes after the `parseInt` call, so we exited immediately.
Of course, it would be possible to do this:
from x in parse(sx)
from y in parse(sy)
select x + y;
Which is more elegant. But the success path would take 2000ms
, and the failure path would only report the first error.
Hopefully that gives some insight into the power of applicatives (even if they're a bit ugly in C#!)
Beta
This will be in beta for a little while, as the changes to the `Error` type are not trivial.
Effect scheduling improvements
The existing `Schedule` type has been massively upgraded to support even more complex scheduling for the repeating, retrying, and folding of `Aff` and `Eff` types.
A huge thanks to @bmazzarol, who did all of the heavy lifting to make this feature a reality!
It has been refactored from the ground up: a `Schedule` is now a (possibly infinite) stream of durations. Each duration indicates to the `retry`, `repeat`, and `fold` behaviours how long to wait between each action. The generation of those streams comes from:
- `Schedule.Forever` - infinite stream of zero-length durations
- `Schedule.Once` - one-item stream of a zero-length duration
- `Schedule.Never` - no durations (a schedule that never runs)
- `Schedule.TimeSeries(1, 2, 3 ...)` - pass in your own durations to build a bespoke schedule
- `Schedule.spaced(space)` - infinite stream of `space`-length durations
- `Schedule.linear(seed, factor)` - schedule that recurs continuously using a linear back-off
- `Schedule.exponential(seed, factor)` - schedule that recurs continuously using an exponential back-off
- `Schedule.fibonacci(seed, factor)` - schedule that recurs continuously using a fibonacci-based back-off
- `Schedule.upto(max)` - schedule that runs for a given duration
- `Schedule.fixedInterval(interval)` - if the action run between updates takes longer than the interval, then the next action runs immediately
- `Schedule.windowed(interval)` - a schedule that divides the timeline into `interval`-long windows, and sleeps until the nearest window boundary every time it recurs
- `Schedule.secondOfMinute(second)` - a schedule that recurs at the specified second of each minute
- `Schedule.minuteOfHour(minute)` - a schedule that recurs at the specified minute of each hour
- `Schedule.hourOfDay(hour)` - a schedule that recurs at the specified hour of each day
- `Schedule.dayOfWeek(day)` - a schedule that recurs on the specified day of each week
These schedules are mostly infinite series, and so to control their 'length' we compose them with `ScheduleTransformer` values to create smaller series, or to manipulate the series in some way (jitter, for example). The following functions generate `ScheduleTransformer` values:
- `Schedule.recurs(n)` - clamps the schedule durations to only recur `n` times
- `Schedule.NoDelayOnFirst` - regardless of any other settings, makes the first duration zero
- `Schedule.RepeatForever` - repeats any composed schedules forever
- `Schedule.maxDelay(max)` - limits the returned delays to the max delay (upper clamping of durations)
- `Schedule.maxCumulativeDelay(Duration max)` - keeps a tally of all the delays so far, and ends the generation of the series once the `max` delay has passed
- `Schedule.jitter(minRandom, maxRandom, seed)` - adds random jitter to the durations
- `Schedule.jitter(factor, seed)` - adds random jitter to the durations
- `Schedule.decorrelate(factor, seed)` - transforms the schedule by de-correlating each of the durations, both up and down, in a jittered way
- `Schedule.resetAfter(max)` - resets the schedule after a provided cumulative max duration
- `Schedule.repeats(n)` - not to be confused with `recurs`, this repeats the schedule `n` times
- `Schedule.intersperse(schedule)` - intersperses the provided schedule between each duration in the schedule
`Schedule` and `ScheduleTransformer` can be composed using `|` (union) or `&` (intersection):
var schedule = Schedule.linear(1 * sec) | Schedule.recurs(3) | Schedule.repeat(3);
// [1s, 2s, 3s, 1s, 2s, 3s, 1s, 2s, 3s]
Union (`|`) will take the minimum of the two schedules, up to the length of the longest; intersection (`&`) will take the maximum of the two schedules, up to the length of the shortest.
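As a further hedged sketch (the specific durations are invented; `sec` is the duration helper used in the example above), composing a typical retry back-off:

```csharp
using LanguageExt;
using static LanguageExt.Prelude;

// Exponential back-off starting at 1s, capped at 5 recurrences,
// with the first retry happening immediately:
var backOff = Schedule.exponential(1 * sec)
            | Schedule.recurs(5)
            | Schedule.NoDelayOnFirst;

// Intersection takes the maximum of each pair of durations,
// to the length of the shorter series:
var capped = Schedule.linear(1 * sec) & Schedule.spaced(2 * sec);
```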
One remaining to-do is to bring `HasTime<RT>` back into the `Core` and allow these schedules to use injectable time. Some of the functions already take a `Func<DateTime>` to access 'now'; this will be expanded so time can be sped up or slowed down, with the schedules 'just working'. That'll be in the next few weeks I'm sure, and is related to this issue.
Check out the API documentation to see what's what. And again, thanks to @bmazzarol for the hard work 👍
BREAKING: New LanguageExt.Transformers package
The transformers extensions, which are a big set of T4 templates for generating extension methods for nested monadic types, have now been broken out into their own package: LanguageExt.Transformers.
If you use any of the following functions: `BindT`, `MapT`, `FoldT`, `FoldBackT`, `ExistsT`, `ForAllT`, `IterT`, `FilterT`, `PlusT`, `SubtractT`, `ProductT`, `DivideT`, `SumT`, `CountT`, `AppendT`, `CompareT`, `EqualsT`, or `ApplyT` - then you will get compile errors, and will need to add a reference to the LanguageExt.Transformers package.
I've done this for a couple of reasons:
- There's been an ongoing concern from a number of users of this library about the size of the `LanguageExt.Core` library. This change takes the `Core` package from 3,276 kb to 2,051 kb.
  - The `Core` library will always be quite chunky because of the sheer number of features, but the transformer extension methods definitely aren't always needed, so breaking them out made sense.
- I suspect issues around .NET ReadyToRun usage will be alleviated somewhat by this change.
- I can't prove this, but the C# tooling has had a hard time with those 10,000s of generated extension methods before - so rather than wait for Microsoft to fix their tooling, I'm trying to be proactive and see if this will help.
The main transformer extensions that remain in the `Core` library are:
- `Traverse`
- `Sequence`
These are so heavily used that I believe moving them out into the `Transformers` library would mean everyone would be obliged to use it, and therefore it wouldn't achieve anything. There may be an argument for bringing `BindT` and `MapT` back into the core at some point; I will see how this plays out (it wouldn't be a further breaking change if that were the case).
Any problems, please report via the Issues in the usual way.
Aff, Eff, AtomHashMap, TrackingHashMap + fixes [RTM]
Language-ext has been in beta for a few months now. Today we go back to full 'RTM' mode.
The recent changes are:
`AtomHashMap` and `Ref` change-tracking
A major new feature that allows for the tracking of changes in an `AtomHashMap` or a `Ref` (using STM). The `Change` event publishes `HashMapPatch<K, V>` values, which give access to the state of the map before and after the change, as well as a `HashMap<K, Change<V>>` which describes the transformations that took place in any transactional event.
In the `LanguageExt.Rx` project there are various observable stream extensions that leverage the `Change` event:
- `OnChange()` - simply streams the `HashMapPatch<K, V>`
- `OnEntryChange()` - streams the `Change<V>` for any key within the map
- `OnMapChange()` - streams the latest `HashMap<K, V>` snapshot
`Ref`, which represents a single value in the STM system, has a simpler `Change` event that simply streams the latest value. It also has an `Rx` extension, called `Change()`.
Documented in previous beta release notes
`TrackingHashMap<K, V>`
This is a new immutable data-structure which is mostly a clone of `HashMap`, but one that allows changes to be tracked. It is completely standalone, and not related to the `AtomHashMap` in any way, other than being used by the `AtomHashMap.Swap` method. And so it has use-cases of its own.
Changes are tracked as a `HashMap<K, Change<V>>`. That means there's at most one change-value stored per key, so there's no risk of an ever-expanding log of changes, because there is no log! The changes that are tracked are from the 'last snapshot point', which is either the empty map or the last state where `Snapshot()` was called.
Documented in previous beta release notes
Fixes to the `Aff` and `Eff` system
The `Aff` and `Eff` system had some unfortunate edge-cases due to the use of memoisation by default. The underlying system has been simplified to be more of a standard reader-monad without memoisation. You can still memoise if needed by calling `ma.Memo()`.
Future changes will make `Aff` and `Eff` into more of a DSL, which will allow certain elements of the system to be 'pure', and therefore safely memoisable, and other elements not. My prototype of this is at too early a stage to release though, so I've taken the safer option here.
Breaking change: both `Clone` and `ReRun` have been removed, as they are now meaningless.
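To make the new default concrete, a hedged sketch (the `Eff` constructor is from the prelude; the timestamp effect is invented for illustration):

```csharp
using System;
using LanguageExt;
using static LanguageExt.Prelude;

// A side-effecting computation, lifted into Eff
var now = Eff(() => DateTime.UtcNow);

// Without memoisation the effect re-runs on every Run():
var a = now.Run();
var b = now.Run();      // potentially a different timestamp

// Opt back in to the caching behaviour explicitly:
var cached = now.Memo();
var c = cached.Run();
var d = cached.Run();   // same result as c
```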
New package LanguageExt.SysX - for .NET 5.0+ features
The `LanguageExt.Sys` package is a wrapper around .NET BCL IO functionality, for use with the runtime `Eff` and `Aff` monads. It is going to stay `netstandard2.0` for support of previous versions of the .NET Framework and .NET Core. This new package adds features that are for `net5.0` and above.
The first feature to be supported is the `Activity` type, for OpenTelemetry support.
`ScheduleAff` and `ScheduleEff` usage was inconsistent
Depending on the monad they were used with, you might see a 'repeat' count that was 1 greater than it should have been. This is now fixed.
Lazy `Seq` equality fix
Under certain rare circumstances it was possible for the equality operator to error with a lazy `Seq`. Fix from @StefanBertels - thanks!
No more empty array allocations
It seems C# isn't smart enough to turn a `new A[0]` into a non-allocating operation, so they have all been replaced with `Array.Empty<A>()`. Fix from @timmi-on-rails - thanks!
Any problems, please report them in the Issues as usual. Paul 👍
Change events for AtomHashMap and the STM system (via Ref)
As requested by @CK-LinoPro in this Issue, `AtomHashMap` and `Ref` now have `Change` events.
`Change<A>`
`Change<A>` is a new union-type that represents a change to a value, and is used by `AtomHashMap` and `TrackingHashMap`.
You can pattern-match on the `Change` value to find out what happened:
public string WhatHappened(Change<A> change) =>
change switch
{
EntryRemoved<A> (var oldValue) => $"Value removed: {oldValue}",
EntryAdded<A> (var value) => $"Value added: {value}",
EntryMapped<A, A>(var from, var to) => $"Value mapped from: {from}, to: {to}",
_ => "No change"
};
`EntryMapped<A, B>` is derived from `EntryMappedFrom<A>` and `EntryMappedTo<B>`, so for any `A -> B` mapping change you can just match on the destination value:
public string WhatHappened(Change<A> change) =>
change switch
{
EntryRemoved<A> (var oldValue) => $"Value removed: {oldValue}",
EntryAdded<A> (var value) => $"Value added: {value}",
EntryMappedTo<A>(var to) => $"Value mapped from: something, to: {to}",
_ => "No change"
};
That avoids jumping through type-level hoops to see any changes!
There are also various 'helper' properties and methods for working with the derived types:

| Member | Description |
|---|---|
| `HasNoChange` | `true` if the derived-type is a `NoChange<A>` |
| `HasChanged` | `true` if the derived-type is one of `EntryRemoved<A>`, `EntryAdded<A>`, or `EntryMapped<_, A>` |
| `HasAdded` | `true` if the derived-type is an `EntryAdded<A>` |
| `HasRemoved` | `true` if the derived-type is an `EntryRemoved<A>` |
| `HasMapped` | `true` if the derived-type is an `EntryMapped<A, A>` |
| `HasMappedFrom<FROM>()` | `true` if the derived-type is an `EntryMappedFrom<FROM>` |
| `ToOption()` | Gives the latest value from the `Change`, as long as the `Change` is one of `EntryAdded`, `EntryMapped`, or `EntryMappedTo` |
There are also constructor functions to build your own `Change` values.
`AtomHashMap<K, V>` and `AtomHashMap<EqK, K, V>`
The two variants of `AtomHashMap` both now have `Change` events that can be subscribed to. They emit a `HashMapPatch<K, V>` value, which contains three fields:

| Field | Description |
|---|---|
| `From` | `HashMap<K, V>` that is the state before the change |
| `To` | `HashMap<K, V>` that is the state after the change |
| `Changes` | `HashMap<K, Change<V>>` that describes the changes to each key |
There are three related Rx.NET extensions in the `LanguageExt.Rx` package:

| AtomHashMap Extension | Description |
|---|---|
| `OnChange()` | Observable stream of `HashMapPatch<K, V>` |
| `OnMapChange()` | Observable stream of `HashMap<K, V>`, which represents the latest snapshot of the `AtomHashMap` |
| `OnEntryChange()` | Observable stream of `(K, Change<V>)`, which represents the change to any key within the `AtomHashMap` |
Example
var xs = AtomHashMap<string, int>();
xs.OnEntryChange().Subscribe(pair => Console.WriteLine(pair));
xs.Add("Hello", 456);
xs.SetItem("Hello", 123);
xs.Remove("Hello");
xs.Remove("Hello");
Running the code above yields:
(Hello, +456)
(Hello, 456 -> 123)
(Hello, -123)
`Swap` method (potential) breaking-change
The implementation of `Swap` has changed. It now expects a `Func<TrackingHashMap<K, V>, TrackingHashMap<K, V>>` delegate instead of a `Func<HashMap<K, V>, HashMap<K, V>>` delegate. This is so the `Swap` method can keep track of arbitrary changes during the invocation of the delegate, and then emit them as events after successfully committing the result.
`Ref<A>`
Refs are used with the `atomic(() => ...)`, `snapshot(() => ...)`, and `serial(() => ...)` STM transactions. Their changes are now tracked during a transaction and are then (if the transaction is successful) emitted on the `Ref<A>.Change` event. These simply publish the latest value.
As before, there are `Rx` extensions for this:

| Ref Extension | Description |
|---|---|
| `OnChange()` | Provides an observable stream of values |
Example
var rx = Ref("Hello");
var ry = Ref("World");
Observable.Merge(rx.OnChange(),
ry.OnChange())
.Subscribe(v => Console.WriteLine(v));
atomic(() =>
{
swap(rx, x => $"1. {x}");
swap(ry, y => $"2. {y}");
});
This outputs:
1. Hello
2. World
TrackingHashMap
This is a new immutable data-structure which is mostly a clone of `HashMap`, but one that allows changes to be tracked. It is completely standalone, and not related to the `AtomHashMap` in any way, other than being used by the `AtomHashMap.Swap` method. And so it has use-cases of its own.
Changes are tracked as a `HashMap<K, Change<V>>`. That means there's at most one change-value stored per key, so there's no risk of an ever-expanding log of changes, because there is no log! The changes that are tracked are from the 'last snapshot point', which is either the empty map or the last state where `Snapshot()` was called.
Example 1
var thm = TrackingHashMap<int, string>();
Console.WriteLine(thm.Changes);
thm = thm.Add(100, "Hello");
thm = thm.SetItem(100, "World");
Console.WriteLine(thm.Changes);
This will output:
[]
[(100: +World)]
Note the `+`: it indicates a `Change.EntryAdded`. And so there has been a single 'add' of the key-value pair `(100, World)`. The `"Hello"` value is ignored because, from the point of view of the snapshot, a value has been added. That's all we care about.
If I take a snapshot halfway through, you can see how this changes the output:
var thm = TrackingHashMap<int, string>();
thm = thm.Add(100, "World");
thm = thm.Snapshot();
thm = thm.SetItem(100, "Hello");
Console.WriteLine(thm.Changes);
This outputs:
[(100: World -> Hello)]
So the snapshot is from when there was a (100, World)
pair in the map.
Hopefully that gives an indication of how this works!