Almondala is a Mandelbrot set explorer, written in Rust (compiled to WebAssembly) and TypeScript.
Benoit Mandelbrot discovered this fractal. His surname is German for "almond bread". I coined Almondala as a portmanteau of almond and mandala.
- Keys:
  - Arrow keys to pan.
  - `X` to zoom in, `Z` to zoom out.
  - SPACE or ESCAPE to reset.
- Mouse:
  - Click on a point of the Mandelbrot to move it to the center of the canvas.
  - Double click to move and zoom.
  - Drag a point to move it to a new location on the canvas.
- Buttons:
  - RAINBOW to toggle color/grayscale.
  - ⟲ to replay zoom, i.e. to zoom out to the initial scale or to zoom back in to the scale at which the replay started.
  - `+` and `-` to adjust the maximum number of iterations to check before coloring a pixel black. Higher values give greater precision, but take longer.
  - `i` for a reminder of this info.
  - `˄` and `˅` to increment/decrement the power to which each number in the sequence is raised (the iteration behind these two controls is sketched below).
Resizing the window also resets the view.
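For orientation, here is a minimal sketch of the escape-time test that the iteration limit and power feed into. It is not Almondala's actual code; the function name and signature are invented for illustration.

```rust
/// Illustrative only: returns how many iterations it takes the orbit of 0 under
/// z -> z^power + c to escape the circle of radius 2, or `max_iterations` if it
/// never does (in which case the pixel is drawn black).
fn escape_iteration(cx: f64, cy: f64, power: u32, max_iterations: u32) -> u32 {
    let (mut zx, mut zy) = (0.0_f64, 0.0_f64);
    for i in 0..max_iterations {
        // Once |z| > 2, the orbit is guaranteed to diverge.
        if zx * zx + zy * zy > 4.0 {
            return i;
        }
        // Raise z to `power` by repeated complex multiplication, then add c.
        let (mut wx, mut wy) = (zx, zy);
        for _ in 1..power {
            let (tx, ty) = (wx * zx - wy * zy, wx * zy + wy * zx);
            wx = tx;
            wy = ty;
        }
        zx = wx + cx;
        zy = wy + cy;
    }
    max_iterations
}
```

Raising the maximum number of iterations lets points close to the boundary, which take many steps to escape, be classified correctly, at the cost of more work per pixel.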
Two features are closely based on the example in section 15.14 of David Flanagan's *JavaScript: The Definitive Guide* (7th edition, 2020), namely the worker pool and the idea of partitioning the canvas into tiles.
From Ross Hill's Mandelbrot.site, I got the idea of checking the perimeter of the tile first and coloring the whole tile black if no point on the perimeter escapes (sketched below).
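As a rough illustration of that optimization (the names and signature here are hypothetical, and it reuses the `escape_iteration` sketch above rather than Almondala's real functions):

```rust
/// Illustrative only: returns true if no pixel on the tile's border escapes within
/// the iteration limit, in which case the whole tile can be painted black without
/// testing its interior pixels.
fn tile_is_all_black(
    tile_x: usize,
    tile_y: usize,
    tile_size: usize,
    max_iterations: u32,
    pixel_to_point: impl Fn(usize, usize) -> (f64, f64), // pixel coordinates -> complex plane
) -> bool {
    for dx in 0..tile_size {
        for dy in 0..tile_size {
            // Only test pixels on the perimeter of the tile.
            let on_border = dx == 0 || dy == 0 || dx == tile_size - 1 || dy == tile_size - 1;
            if !on_border {
                continue;
            }
            let (cx, cy) = pixel_to_point(tile_x + dx, tile_y + dy);
            if escape_iteration(cx, cy, 2, max_iterations) < max_iterations {
                // A border point escapes, so the tile has to be computed in full.
                return false;
            }
        }
    }
    true
}
```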
Simply view it online at Almondala.
Alternatively, here is a guide to build and run locally. First, clone the repo and navigate into it by entering the following commands into a terminal:
git clone https://github.com/pjtunstall/almondala
cd almondala
Install Rust, if you haven't already.
Install the Rust dependencies with
cargo fetch
Make sure you have `wasm-pack` installed:
cargo install wasm-pack
Install Node.js if you don't already have it. This will also install `npm` (Node Package Manager). Then install TypeScript and the related `undici-types` package as dev dependencies with the command
npm install
Run the build script with
npm run build
NOTE: The build script includes a final step that corrects the import path for the Wasm module in `worker.js`, from "relative to the TypeScript source file" (as required by the TypeScript compiler) to "relative to the compiled JavaScript file" (so that `worker.js` itself can actually import the Wasm module). This step should run different instructions depending on the platform but, as yet, has only been tested on macOS.
Start a local server, for example:
python3 -m http.server
Open a browser. When the popup prompts you, allow the application to accept incoming connections. Then navigate to http://localhost:8000/public/.
This repo includes several old feature branches. At present, these are in raw JavaScript, as they date to before I switched to using TypeScript for the project. Another significant change I've made to the main branch since I last touched them is that I'm no longer trying to parallelize the Rust with the rayon crate (library). Benchmarking showed that `rayon` made the calculations 1.8 times slower. As I now realize, this is because WebAssembly doesn't have direct support for multithreading at the hardware level and instead relies on JavaScript worker threads for parallelism.
- `fake`: a progressive loading effect: panning or zooming the current frame before calculating the next one. (Works up to a point: a series of pans and zooms will eventually get out of sync with the properly calculated view, maybe due to accumulated rounding errors. In the current asynchronous setup, there would also be the issue of rendered values always falling behind the actual state, which is why I've only used the effect for the replay zoom.)
- `offscreen`: two worker threads, each of which puts its image to an `OffscreenCanvas`. A request to calculate is sent to both simultaneously. One does a quick first pass with a smaller iteration limit. The main thread toggles the opacity of the two canvases to display the results as needed. (Works, but with occasional glitchy jumps, and reset is jarring on Firefox. Essentially superseded when I successfully introduced the worker pool that's now used in `main`.)
- `lines`: an attempt at calculating odd- and even-numbered columns separately, one after the other, so as to have something to display faster while waiting for the rest of the calculation. (The basic idea of calculating alternate lines works, i.e. the Rust does its job, but the branch is not yet fully functional. It derived from `offscreen`, and I think the two workers/canvases are complicating matters.)
- `shared`: an attempt at sharing memory between JS and Wasm. (Not yet working. The idea is still relevant as a possible future optimization.)

See their `README`s for more info.
In earlier versions, I somehow absent-mindedly wound up using `Vec<u8>` for each pixel. When this came to my attention, I switched to using `[u8; 4]` and took the opportunity to compare the performance.
Originally, I calculated the color of each pixel every render. Now I use tables of precalculated values. The tables, `COLORS` and `SHADES`, are generated by a macro in the `generate_tables` crate. Again, I compared the performance.
On a 2013 MacBook Air, I compared 100 million calls, passing a random escape iteration value to the function that generates a colored pixel, and likewise to the function that generates a grayscale pixel. The `Vec<u8>` versions took 80 and 71 seconds respectively. The `[u8; 4]` versions took 60 and 52 seconds. With the current technique of lookup tables, the time falls further, to 47 seconds.
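To give the flavor of the lookup-table approach, here is a simplified sketch. The real `COLORS` and `SHADES` tables are produced by a macro in the `generate_tables` crate, so the table length, palette, and names below are placeholders, not Almondala's actual code.

```rust
// Illustrative sketch only: the real tables come from a macro in `generate_tables`.
const TABLE_LEN: usize = 256;

// Precompute one RGBA pixel per possible index at compile time.
const fn build_shades() -> [[u8; 4]; TABLE_LEN] {
    let mut table = [[0u8; 4]; TABLE_LEN];
    let mut i = 0;
    while i < TABLE_LEN {
        let v = i as u8;
        table[i] = [v, v, v, 255];
        i += 1;
    }
    table
}

static SHADES: [[u8; 4]; TABLE_LEN] = build_shades();

/// Look up a pixel for a given escape iteration instead of recomputing its color.
fn grayscale_pixel(escape_iteration: usize, max_iterations: usize) -> [u8; 4] {
    if escape_iteration >= max_iterations {
        [0, 0, 0, 255] // Points that never escaped are drawn black.
    } else {
        SHADES[escape_iteration % TABLE_LEN]
    }
}
```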
Here is how I timed my old Mandelbrot calculation when I was using rayon, before I realized that a single instance of the Wasm module doesn't have access to multithreading. The mean duration was 903ms (standard deviation 51ms). Without `rayon`, it was 585ms (standard deviation 66ms).
```js
// Assumes calculate_mandelbrot has been imported from the wasm-pack-generated module.
const phi = 1.618033988749895;

async function benchmarkMandelbrot() {
  const start = performance.now();
  calculate_mandelbrot(phi * 600, 600, 1024, 1024, -0.6, 0, phi, 1, 23, 17, 17);
  const end = performance.now();
  const duration = end - start;
  return duration;
}

const durations = [];
for (let i = 0; i < 100; i++) {
  const duration = await benchmarkMandelbrot();
  console.log(`${i}. ${duration}ms`);
  durations.push(duration);
}

const averageDuration =
  durations.reduce((sum, duration) => sum + duration, 0) / durations.length;
console.log(`${averageDuration}ms (average)`);

const std = Math.sqrt(
  durations.reduce(
    (sum, duration) => sum + Math.pow(duration - averageDuration, 2),
    0
  ) / durations.length
);
console.log(std);
```
Further developments may include:

- CODE
  - Refactor:
    - Split up the `State` class (separate the UI from the mathematical state)? And, more generally, review the code in the light of best practices.
    - Look out for suitable places to use the characteristic TS object types: enum, interface, union, intersection, extension.
  - Test:
    - Test Rust: calculate some known values, fuzz test set inclusion for values in known regions.
    - Test UI: creation & existence of elements, fuzz test with random input events, explore deterministic simulation testing.
    - Try it out on as many combinations of devices, screen sizes, aspect ratios, resolutions, operating systems, and browsers as possible.
  - Optimize:
    - Try out different ways to order and time how the tile data is made visible.
    - Reuse perimeter calculation of tiles, also shared edges.
    - Share memory between Wasm and JS.
    - Try making calculations cancellable so that some could be skipped, as an alternative to processing multiple slow requests that come close together. Be sure to benchmark to see if it actually helps. In Rust, this could be done with an async runtime like tokio. On the JavaScript end, look into `AbortController`.
    - Benchmark with and without wasm-opt, and try out different wasm-opt options. I'm currently using this tool to optimize compilation. See the build script in `package.json`. (Option `O2` is recommended as generally best for performance, but there are many details that can be customized.)
- FEATURES
  - More buttons:
    - Share state by encoding it in the URL.
    - Save image.
  - Touchscreen gestures: pinch and zoom.
  - Explore different color schemes to offer as options.
  - Investigate how to safely represent numbers with [arbitrary precision](https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic). At the moment, zoom is limited by the precision of 64-bit floats. In practice, below `2e-13`, the image starts to get blocky. One strategy might be to represent `scale`, `mid.x`, and `mid.y` in TypeScript with the `Decimal` type from `decimal.js-light` and serialize them to pass to Rust. In Rust, I could use the `rug` crate (library). The Rust function `calculate_mandelbrot` would receive them as type `String`. It could then deserialize them as instances of `rug::Float`, a "multi-precision floating-point number with arbitrarily large precision and correct rounding", setting the precision based on the length of the `String`. An instance of the corresponding arbitrary-precision complex type `rug::Complex` could then be constructed from the real and imaginary parts. As I understand it, since the precision of `rug::Float` and `rug::Complex` has to be set during construction, arithmetic operations would have to be wrapped with logic to set an appropriate precision for the outcome. (A rough sketch follows this list.)
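Purely as a sketch of that last idea (nothing like this exists in Almondala yet; the helper names are hypothetical and only hint at how the serialized strings might become `rug` values):

```rust
use rug::{Complex, Float};

/// Hypothetical helper: parse a decimal string into a `rug::Float`, choosing the
/// precision from the number of characters in the string.
fn parse_with_precision(s: &str) -> Float {
    // Roughly 3.33 bits per decimal digit; use 4 for headroom, with a 64-bit floor.
    let prec = ((s.len() as u32) * 4).max(64);
    Float::with_val(prec, Float::parse(s).expect("invalid decimal string"))
}

/// Hypothetical helper: build the arbitrary-precision midpoint from the serialized
/// `mid.x` and `mid.y` strings sent from TypeScript.
fn midpoint_from_strings(mid_x: &str, mid_y: &str) -> Complex {
    let re = parse_with_precision(mid_x);
    let im = parse_with_precision(mid_y);
    // rug fixes the precision when a value is constructed, so it has to be chosen here.
    let prec = re.prec().max(im.prec());
    Complex::with_val(prec, (re, im))
}
```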
Finally, a CI/CD-related point. For a while, I had Netlify run the build script on each deployment via a `netlify.toml` file in the root of my project with the following:
```toml
[build]
command = "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y && source $HOME/.cargo/env && npm run build"
publish = "public"

[build.environment]
RUST_VERSION = "stable"
```
But when I introduced wasm-opt, it seemed more convenient to just build locally. At some point, I may explore installing and caching wasm-opt in the Netlify build environment. Alternatively, I might look into using a custom Docker image to ensure a more controlled and consistent build environment.