Adding to the general goals and roadmap, here are some notes and opinions on current/future implementation details and features.

## Lessons learned

Some lessons learned from the current architecture and the ongoing experiments. Most of the issues listed stem from features implemented by yours truly, with the best intentions at the time, but which slowly became obsolete as our vision of the project evolved; now we can reevaluate them and remove/rebuild/adjust as needed.

### Opt-In markers

The opt-in annotations are one of my favorite Kotlin features, and we should use them, but only where it actually makes sense.

### Runtime abstractions

The abstractions currently in place over GraalVM components (…) should be reevaluated. A change in the naming convention would also help readability, since the current names, "Polyglot Engine" and "Polyglot Context", can be easily confused with the GraalVM components they wrap, but we will cross that bridge when we get to it.

### Core APIs

We should strive for a language-agnostic runtime core module, to prevent an excessive bias toward JavaScript (like we have now) that may hinder future efforts in building truly polyglot intrinsics and other features. Experimental work shows this is entirely possible and improves build and IDE performance, since it enables working on the runtime engine without the overhead of language-specific implementation details.

### About the Plugins System DSL 3000™️

The current Plugins system (…) served its purpose and was flexible enough to accommodate some unforeseen use cases, but it has become increasingly difficult to extend and is already causing severe issues for the implementation of some features.
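As a refresher on the opt-in mechanism mentioned under "Opt-In markers" above, this is what the standard Kotlin machinery looks like (the annotation and function names here are illustrative, not existing Elide symbols):

```kotlin
// Declaring an opt-in requirement: consumers must explicitly acknowledge
// that they are using a delicate API surface.
@RequiresOptIn(message = "This is a delicate runtime API; opt in explicitly.")
@Retention(AnnotationRetention.BINARY)
@Target(AnnotationTarget.CLASS, AnnotationTarget.FUNCTION)
annotation class DelicateRuntimeApi

@DelicateRuntimeApi
fun internalRuntimeHook() { /* ... */ }

// Callers must opt in, either with @OptIn or by propagating the marker.
@OptIn(DelicateRuntimeApi::class)
fun caller() = internalRuntimeHook()
```

The compiler then reports an error (or warning, depending on the declared level) for any use site that neither opts in nor propagates the marker.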
The next Extensions API should be integrated with DI (using framework-agnostic annotations from …).

### CLI command handlers

The CLI module (and its more recent companion, …) currently mixes responsibilities. A future architecture should split these responsibilities into (at least) two layers: user input processing (configuration files, CLI arguments), and guest code execution. Extracting the evaluation logic into specialized service interfaces will give us more room to test these features in isolation and drastically reduce the complexity of the command handlers.

### Complex intrinsics

Certain features provided by the runtime can be quite complex in their logic, and draw dependencies that require special attention, such as the HTTP serving intrinsics. The current implementation (using Netty) requires us to manage the packaging of native libraries and apply many fine-tuning options to achieve a good end result.

These features are complex enough to warrant their own module, as they are not typically required, say, when working on the runtime core, or when fixing a bug in the JavaScript support logic. Extracting them into a separate project (not a separate repository, simply a sub-project in the Elide build) could improve the development experience significantly.

## Elide Next-Gen

An experimental project called … explores the ideas described below.

### New abstractions

Rather than using the GraalVM components directly, the experimental runtime wraps them in higher-level abstractions:

```kotlin
fun sample() {
  // use default configuration values
  val config = RuntimeConfig()

  // all-in-one extension + language provided by `elide-js`
  val javascript = JavaScriptExtension()

  // prepare the extensions and languages
  val extensions = setOf(javascript)
  val languages = setOf(javascript)

  // create a new runtime instance, a wrapper over a GraalVM Engine
  // that adds support for extensions and other high-level features
  val runtime = GuestRuntime.create(config, extensions, languages)

  // a scope wraps a GraalVM Context
  val scope = runtime.newScope()
  scope.context.eval("js", "console.log('hello')")
}
```

These abstractions can easily be integrated with a DI container, for example Micronaut:

```kotlin
@Factory class RuntimeFactory {
  @Singleton fun provideRuntimeConfig(
    // create configuration using Micronaut's
    // configuration properties
    properties: RuntimeConfigurationProperties
  ): RuntimeConfig {
    // map properties to the runtime config struct
    ...
  }

  @Singleton fun provideRuntime(
    config: RuntimeConfig,
    // DI discovers all available implementations
    extensions: List<RuntimeExtension>,
    languages: List<RuntimeLanguage>,
  ): GuestRuntime {
    return GuestRuntime.create(config, extensions, languages)
  }
}
```

The idea behind these abstractions is that they only enhance the wrapped components.

### Enhanced extensions

An improvement over the current Plugins API, this extension system is meant to be used with DI: extensions are classes implementing `RuntimeExtension`:

```kotlin
class MyExtension : RuntimeExtension {
  // configure a GraalVM engine builder before it is wrapped by a GuestRuntime
  override fun configureEngine(builder: Engine.Builder) {
    // set a boolean option to "true" (a string value is used)
    builder.option("engine.BackgroundCompilation", "true")
  }

  // run initializer code on a GuestScope after its context is built; this is
  // where bindings are installed, init scripts are run, etc.
  override fun initializeScope(scope: GuestScope) {
    scope.context.eval("js", "console.log('initialized')")
  }

  // run cleanup code after the `initializeScope` event has been called for all
  // extensions; this allows more complex interactions between extensions
  override fun finalizeScope(scope: GuestScope) {
    // remove a binding at the end of the initialization phase
    scope.context.getBindings("js").removeMember("myInternalBinding")
  }
}
```

Additional events are available to configure the … as well.

### Execution order

Another long-standing problem with the Plugins API is that there is no guarantee over the order in which plugins are called, and since they also don't have access to a DI container, it is very difficult to establish effective dependencies between plugins or with other components. The new Extensions system addresses this problem directly, and allows extensions to depend on each other, establishing the order in which they are called by the runtime:

```kotlin
class MyExtension : RuntimeExtension {
  override fun install(handle: RuntimeExtension.Handle) {
    // this extension needs bindings to be installed first;
    // guest bindings will be present in the scope during init
    handle.runAfter(BINDINGS_EXTENSION)

    // ensure we run before the embedded guest scripts, e.g.
    // to prepare some internal state for them
    handle.runBefore(GUEST_SCRIPTS_EXTENSION)
  }

  override fun initializeScope(scope: GuestScope) {
    // we can run code that uses bindings added by the
    // bindings extension; the order is respected
  }
}
```
### Specialized language extensions

Adding support for a guest language is done through the `RuntimeLanguage` interface:

```kotlin
class JavaScriptLanguage : RuntimeLanguage {
  override val languageId: GuestLanguage = "js"

  // choose whether to be enabled in a given engine; since the language
  // list must be passed to the engine builder on construction, it is
  // currently not possible to access the engine itself during this check
  override fun enabledInEngine(): Boolean {
    // always enable JavaScript (default implementation)
    return true
  }
}
```

### Improved support for Guest Bindings

Bindings were never fully integrated with DI in the current architecture, because of how the DSL requires them to be registered. Additionally, language plugins needed to opt in during context initialization in a way which was prone to error and hard to reason about. A new approach to bindings now allows much better control over which symbols are present in a guest context, and provides many useful features such as binding lifecycles.

#### Dedicated bindings registration API

A new interface, `GuestBindingRegistrar`, is used to register bindings:

```kotlin
class MyBindingsRegistrar : GuestBindingRegistrar {
  override fun register(bindings: GuestBindingRegistry, scope: GuestScope) {
    // context-bound values are always available
    bindings.register("message", "hello", LANGUAGE_JS, GuestBindingLifecycle.CONTEXT)

    // init-bound values are removed after scope initialization
    bindings.register("limited", "temp", LANGUAGE_JS, GuestBindingLifecycle.INITIALIZER)
  }
}
```

#### Binding lifecycle

Some Guest Bindings are meant to be always available, acting as language or platform intrinsics, while others are only needed during scope initialization and are removed once it completes.
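From the guest's point of view, the difference between the two lifecycles would look roughly like this (a sketch using the hypothetical `GuestRuntime`/`GuestScope` types and the bindings registered above; not a real API):

```kotlin
// Hypothetical usage: assumes the GuestRuntime/GuestScope abstractions
// sketched earlier, and the "message"/"limited" bindings registered by
// MyBindingsRegistrar with CONTEXT and INITIALIZER lifecycles respectively.
fun lifecycleDemo(runtime: GuestRuntime) {
  val scope = runtime.newScope()

  // CONTEXT-bound symbols survive for the life of the context
  scope.context.eval("js", "typeof message") // intended: "string"

  // INITIALIZER-bound symbols are gone once initialization finishes
  scope.context.eval("js", "typeof limited") // intended: "undefined"
}
```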
### JPMS plugin

As part of the efforts to improve the build experience, the JPMS convention was extracted to a separate plugin, with its own project extension for configuring some settings, plus added task descriptions and grouping. Overall, the plugin is much more stable than the convention and works as expected in the most common use cases, making JPMS support painless.

### Buildkit plugin

A subtle but significant improvement to the build, the Buildkit plugin exposes the Options API, which reads Gradle properties from all available sources and provides type safety when accessing their values:

```kotlin
// build.gradle.kts

// searches for the 'elide.cool.enabled' Gradle property, and reads its
// value as a Boolean; returns false if not found
val useCoolFeature: Boolean = optionEnabled("cool.enabled")

// searches for the 'elide.cool.name' property, failing if not found
val coolName: String = option("cool.name")

// same as the above, but using a default value instead of failing
val coolButWithDefault: String = option("cool.name", default = "Elide")
```

### Synthetic Proxies plugin (🚧 under development 🚧)

Bridging guest and host code is best handled using the GraalVM proxy interfaces (such as `ProxyObject`); the Synthetic Proxies plugin generates these implementations at compile time:

```kotlin
// this class will implement `ProxyObject` at compile time; only properties
// and methods annotated with @ProxyMember will be exposed as members
@SyntheticProxy class MyIntrinsic {
  // properties can be registered as members; read-only properties
  // will remain so when the proxy is implemented
  @get:ProxyMember val message: String = "Hello!"

  // methods are added as executable members
  @ProxyMember fun hello() {
    println(message)
  }
}
```
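For illustration, the generated code could look roughly like this hand-written equivalent. The `MyIntrinsicProxy` name and exact shape are assumptions about the plugin's output; only the GraalVM `ProxyObject`/`ProxyExecutable` interfaces are real:

```kotlin
import org.graalvm.polyglot.Value
import org.graalvm.polyglot.proxy.ProxyExecutable
import org.graalvm.polyglot.proxy.ProxyObject

// Hand-written sketch of what the Synthetic Proxies plugin could emit
// for MyIntrinsic; the actual generated code may differ.
class MyIntrinsicProxy(private val target: MyIntrinsic) : ProxyObject {
  override fun getMemberKeys(): Any = arrayOf("message", "hello")

  override fun hasMember(key: String): Boolean = key == "message" || key == "hello"

  override fun getMember(key: String): Any? = when (key) {
    // read-only property exposed as a plain member
    "message" -> target.message
    // method exposed as an executable member
    "hello" -> ProxyExecutable { _: Array<out Value> -> target.hello(); null }
    else -> null
  }

  // read-only members: guest writes are rejected
  override fun putMember(key: String, value: Value) {
    throw UnsupportedOperationException("member '$key' is read-only")
  }
}
```

Generating this at compile time avoids runtime reflection entirely, which matters for native-image builds where the compiler must see every reachable member.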
---
We need to make some changes to the way Elide is structured; the monorepo is hitting a point where it is heavy to work with, which is slowing velocity, both in mechanical circumstances (e.g. CI, releases) and during development.

This new architecture aims to keep most of what was introduced in `v4`, which was a major departure from the Bazel-oriented structure in `v3`. There should be a specific suite of goals which land at roughly the same endpoint, over time, rather than a complete restructuring of what Elide is and how it is expressed as software.

## Goals for `v5`

What follows is a suite of goals and anti-goals for this proposed new architecture:
### Conceptual Changes
Elide has grown and solidified as an idea. The framework aspect of Elide's use hasn't really caught on, but the runtime idea has; we spend a lot of energy maintaining framework-style use, which is probably superfluous now. In this new architecture, we should focus less on the concept of a JVM framework and more on the final endpoint of a runtime which is useful as a developer tool, shipped as a native binary.
### Build-time DI
If our final target is mostly native use, we probably don't need a full-blown JSR-compliant DI container in our own targets. We can potentially get creative here and use build-time reflection technologies (things like Reflekt) instead of traditional runtime reflection.
Reflective capabilities aren't used much host-side, and guest-side those capabilities are entirely separate. Build-time reflection improvements are likely to yield downstream improvements in binary size and performance as compiler visibility increases.
### Build-time JIT
Truffle is fully available at build time, and we haven't done much tuning with regard to the JIT's use for built-in code. We could probably do something smarter here, like embedded tests or small programs which exercise the runtime at build time, allowing the JIT to warm up for guest languages before it is snapshotted and held for runtime use.
Like the DI changes proposed above, these changes are likely to increase compiler understanding of the runtime, and so could yield binary size and performance improvements.
### Static JNI
Unpacking and loading native libraries at runtime is necessarily slow, because it involves several steps which are individually slow: (1) seeking through the binary to find (typically large) embedded resources, (2) unpacking and de-compressing those resources to a place on disk which is safe for use, and (3) engaging in the JVM dance required to load a native library from either an absolute path or `java.library.path`, both of which are annoying for their own reasons.

Linking these libraries at build time instead improves binary size (because such data is no longer embedded, but inlined as code) and performance (because such code is loaded as part of the binary). Because this code is also now visible to the native compiler, and to GraalVM, optimization can take place on top of it.
We don't load a ton of native libraries at runtime. Most of them come from Netty and are well known to the team already, so embedding these libs statically makes sense from a devex perspective too:

- `epoll`
- `kqueue`
- `unix`
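For context, the slow runtime dance described above amounts to something like the following (a simplified sketch; the function name and resource path convention are illustrative, not Elide's actual loader):

```kotlin
import java.nio.file.Files
import java.nio.file.StandardCopyOption

// Sketch of the classic runtime loading dance for an embedded native
// library; this is exactly the work that static linkage eliminates.
fun loadEmbeddedLibrary(libraryResource: String) {
  // (1) seek through the binary/JAR to find the embedded resource
  val stream = object {}.javaClass.getResourceAsStream(libraryResource)
    ?: error("native library not found: $libraryResource")

  // (2) unpack it to a place on disk which is safe for use
  val target = Files.createTempFile("native", ".so")
  stream.use { Files.copy(it, target, StandardCopyOption.REPLACE_EXISTING) }

  // (3) load it from an absolute path (the alternative being java.library.path)
  System.load(target.toAbsolutePath().toString())
}
```

Every step here happens on the critical path of startup, which is why eliminating it helps top-line latency as well as binary size.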
### Rustification
Along with our static JNI stuff, we should endeavor to write as much of our native layer in Rust. Where possible, we can re-implement native libraries as we go, or add new functionality with JNI-based interfaces, on top of static linkage through `org.graalvm..Feature` implementations. These native binaries can be distributed easily in JARs using our new build-time unpacking logic.

We already have Rust embedded in our binary, behind the hidden `elide lint` command, which is in early development for use with the new tooling API. We could extend this to things like native transports, where memory safety is particularly important, and where performance gains are particularly effective against top-line metrics.

### Dynamic Plugins
For several reasons (primarily, binary size), dynamic language plugins (and tooling plugins) are growing more attractive. It doesn't seem possible yet to split our binary into shared libraries for each language, but research is ongoing, and the GraalVM team seems open to ideas in this area.
### Reusable Tooling
To build on the success of our amazing Gradle build conventions, we should publish our tooling in binary form to an internal repository, which we can use as a means to split the project into smaller repositories. This will enable us to move the Kotlin Multiplatform side of our build to the specific repositories which need it; downstream repos can then consume these libs (and the associated conventions) without needing to rebuild them, or needing to index and keep modules in-memory, both of which are growing costly.

- `build-infra` repo for Elide itself

### Other R&D
There are tools for working with binaries. We've tried compression before and it hasn't yielded great results; tools like `upx` can mangle binaries or make them even slower. So, what do we do here? What does it look like to think outside the box? Are there other tools we should be trying? And so on.