- Unexpected deployment deletion during topology unmarshalling (GH-375)
- Parsing of a description field of a TOSCA interface is interpreted as an operation (GH-372)
- Yorc does not support python3 (GH-319)
- Implement an anti-affinity placement policy for Openstack (GH-84)
- Allow to configure Ansible configuration file (GH-346)
- Monitor deployed services liveness (GH-104)
- Add job status feedback for slurm and k8s jobs (GH-351)
- Upgrade Ansible to 2.7.9 (GH-364)
- Reduce the volume of data stored in Consul part 1 (GH-361)
- Unable to delete a deployment with non-conform topology (GH-368)
- REST API doc changes on deployment update support in premium version (GH-352)
- Bootstrap Yorc with a Vault instance (GH-282)
- Refactor Slurm jobs (GH-220)
- Yorc does not log a slurm command error message, making diagnostic difficult (GH-348)
- Yorc hostspool now allows more filtering (GH-89)
- Yorc support of kubernetes PersistentVolumeClaim (GH-209)
- Bootstrap using a premium version of Alien4Cloud fails to configure https/SSL (GH-345)
- Deployment fails on error "socket: too many open files" (GH-334)
- Yorc bootstrap does not correctly treat default alien4cloud version download (GH-286)
- Attribute notification is not correctly set with HOST keyword (GH-338)
- Custom command events don't provide enough information (GH-324)
- Update Default Oracle JDK download URL as the previous one is not available for download anymore (GH-341)
- Bad notifications storage with several notified for one attribute notifier (GH-343)
- Yorc panics on segmentation violation attempting to deploy Ystia Forge Slurm topology (GH-321)
- Panic can occur when undeploying Slurm computes (GH-326)
- Technical update to use Alien4Cloud 2.1.1 (Used in bootstrap)
- Purging n deployments in parallel, one can fail on error: Missing targetId for task with id (GH-293)
- Deployment with a topology parsing error remains in initial status (GH-283)
- Interface name is not retrieved from custom command Rest request (GH-287)
- Instances are added into the topology before creating the task (GH-289)
- Missing events for uninstall workflow in purge task (GH-302)
- All ssh connections to Slurm are killed if ssh server has reached the max number of allowed sessions (GH-291)
- It can take a considerable delay for a deployment to change status to UNDEPLOYMENT_IN_PROGRESS (GH-306)
- Slurm job monitoring is not designed for concurrency (GH-308)
- SSH Session pool: Panic if connection failed, this impacts Slurm infrastructure (GH-315)
- Bootstrap a secure Yorc setup (GH-179)
- Yorc bootstrap should save input values used to bootstrap a setup (GH-248)
- Publish value change event for instance attributes (GH-222)
- Move to Go modules to manage dependencies (GH-183)
- Document How to create a Yorc Plugin (GH-119)
- Slurm user credentials can be defined as slurm deployment topology properties, as an alternative to yorc configuration properties (GH-281)
- Can't deploy applications using a secured yorc/consul (GH-274)
- K8S jobs namespace should not be removed if it is provided (GH-245)
- Unable to purge an application that appears in the list (GH-238)
- When scaling down, instances are not cleaned from Consul (GH-257)
- Yorc bootstrap fails if downloadable URLs are too long (GH-247)
- Increase default workers number per Yorc server from 3 to 30 (GH-244)
- Bootstrap fails on Red Hat Enterprise Linux 7.5 (GH-252)
- Technical update to use Alien4Cloud 2.1.0 final version (Used in bootstrap)
- Support Jobs lifecycle enhancements (new operations submit, run, cancel) (GH-196)
- Forbid the parallel execution of several scheduled actions. This is for instance used for the asynchronous run operation of Jobs. It prevents the same action from being scheduled in parallel (for jobs, it prevents checking and doing the same actions several times) (GH-230)
- Generate Alien 2.1-compatible events (GH-148)
- No output properties for services on GKE (GH-214)
- K8S service IP missing in runtime view when deploying on GKE (GH-215)
- Bootstrap of HA setup fails on GCP, at step configuring the NFS Client component (GH-218)
- Fix issue when default yorc.pem is used by Ansible with ssh-agent (GH-233)
- Publish workflow events when custom workflow is finished (GH-234)
- Bootstrap without internet access fails to get terraform plugins for local yorc (GH-239)
- CUDA_VISIBLE_DEVICES contains some unwanted unprintable characters (GH-210)
- The orchestrator now requires at least Ansible 2.7.2 (upgrade from 2.6.3 introduced in GH-194)
- Allow to bootstrap a full stack Alien4Cloud/Yorc setup using yorc CLI (GH-131)
- Use ssh-agent to not write ssh private keys on disk (GH-201)
- ConnectTo relationship not working for kubernetes topologies (GH-212)
- Allow user to provide an already existing namespace to use when creating Kubernetes resources (GH-76)
- Generate unique names for GCP resources (GH-177)
- Need a HOST public_ip_address attribute on Hosts Pool compute nodes (GH-199)
- Support GCE Block storages. (GH-82)
- Concurrent workflow and custom command executions are now allowed except when a deployment/undeployment/scaling operation is in progress (GH-182)
- Enable scaling of Kubernetes deployments (GH-77)
- The orchestrator now requires at least Terraform 0.11.8 and the following Terraform plugins (with corresponding version constraints): Consul (~> 2.1), AWS (~> 1.36), OpenStack (~> 1.9), Google (~ 1.18) and null provider (~ 1.0). (Terraform upgrade from 0.9.11 introduced in GH-82)
- Consul version updated to 1.2.3
- Support GCE Public IPs. (GH-82)
- Split the workflow execution unit into steps in order to allow a single workflow to be executed by multiple Yorc instances. (GH-93)
- Make the run step of a Job execution asynchronous so as not to block a worker for the duration of the job. (GH-85)
- Added configuration parameters in Kubernetes infrastructure allowing to connect from outside to a cluster created on Google Kubernetes Engine (GH-162)
- Allow to upgrade from version 3.0.0 to a newer version without losing existing data (GH-130)
- Inputs are not injected into Slurm (srun) jobs (GH-161)
- Yorc consul service registration fails if using TLS (GH-153)
- Retrieving operation output when provisioning several instances resolves to the same value for all instances even if they are actually different (GH-171)
- Allow to use 'In cluster' authentication when Yorc is deployed on Kubernetes. This allows to use credentials provided by Kubernetes itself. (GH-156)
- The REQ_TARGET keyword in TOSCA functions was broken. This was introduced during the upgrade to Alien4Cloud 2.0, which changed how requirements are defined on node templates (GH-159)
- Parse Alien specific way of defining properties on relationships (GH-155)
- The orchestrator now requires at least Ansible 2.6.3 (upgrade from 2.4.1 introduced in GH-146)
- Providing Ansible task output in Yorc logs as soon as the task has finished (GH-146)
- Parse TOSCA value assignment literals as strings. This prevents issues with strings being interpreted as floats and rounded when converted back into strings (GH-137)
- Install missing dependency jmespath required by the json_query filter of Ansible (GH-139)
- Capabilities context props & attributes are not injected anymore for Ansible recipes implementation (GH-141)
- Manage applications secrets (GH-134)
- Relax errors on TOSCA get_attributes function resolution that may produce empty results instead of errors (GH-75)
- Fix build issues on go1.11 (GH-72)
Yorc 3.0.0 is the first major version since we open-sourced the project formerly known as Janus. Previous versions have been made available on GitHub.
We are still shifting some of our tooling, like road maps and backlogs, to publicly available tools. The idea is to make project management clear and to open Yorc to external contributions.
Alien4Cloud recently published a fantastic major release with new features that Yorc leverages to deliver a great orchestration solution.
Among many features, the ones we will focus on below are:
- UI redesign: Alien4Cloud 2.0.0 includes various changes in UI in order to make it more consistent and easier to use.
- Topology modifiers: Alien4Cloud 2.0.0 allows defining modifiers that can be executed in various phases prior to deployment. Those modifiers transform a given TOSCA topology.
We are really excited to announce our first support of Google Cloud Platform.
Yorc now natively supports Google Compute Engine to create compute instances on demand.
Yorc 3.0.0 supports a new infrastructure that we called "Hosts Pool". It allows registering generic hosts into Yorc and letting Yorc allocate them for deployments. These hosts can be anything (VMs, physical machines, containers, ...) as long as we can SSH into them for provisioning. Yorc exposes a REST API and a CLI to manage the hosts pool, making it easy to integrate with other tools.
For more information about the Hosts Pool infrastructure, check out our dedicated documentation.
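To give a concrete idea of the REST API, here is a minimal Go sketch that lists the hosts registered in the pool. The /hosts_pool endpoint path, the 8800 port and the response shape are assumptions made for this example, not the documented contract; refer to the Yorc REST API documentation for the real one.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumption: the hosts pool is exposed under /hosts_pool on the Yorc
	// server, listening on port 8800 in this sketch.
	resp, err := http.Get("http://localhost:8800/hosts_pool")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The response is expected to be a JSON document describing the
	// registered hosts; we simply print it here.
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```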
We made some improvements with our Slurm integration:
- We now support Slurm "features" (which are basically tags on nodes) and "constraints" syntax to allocate nodes. Examples here.
- Support of srun and sbatch commands (see Jobs scheduling below)
In Yorc 2 we made a first experimental integration with Kubernetes. This support and associated TOSCA types are deprecated in Yorc 3.0. Instead we switched to new TOSCA types defined collectively with Alien4Cloud.
This new integration will allow building complex Kubernetes topologies.
Alien4Cloud has a great feature called "Services". It allows either defining part of an application to be exposed as a service so that it can be consumed by other applications, or registering an external service in Alien4Cloud to be exposed and consumed by applications.
This feature enables new use cases like cross-infrastructure deployments or shared services, among many others.
We are very excited to support it!
Yet another super interesting feature! Until now, TOSCA components handled by Yorc were designed to be hosted on a compute (whatever it was), which means that the component's life-cycle scripts were executed on the provisioned compute. This feature allows designing components that will not necessarily be hosted on a compute; in that case, life-cycle scripts are executed on Yorc's host.
This opens a wide range of new use cases. You can for instance implement new Compute implementations in pure TOSCA by calling cloud providers' CLI tools, or interact with external services.
Icing on the cake: for security reasons, those executions are sandboxed into containers by default to protect the host from mistakes and malicious usage.
This release brings a tech preview support of jobs scheduling. It allows designing workloads made of Jobs that can interact with each other and with other "standard" TOSCA components within an application. We worked hard together with the Alien4Cloud team to extend TOSCA to support Jobs scheduling.
In this release we mainly focused on the integration with Slurm for supporting this feature (but we are also working on Kubernetes for the next release 😄). Below are the newly supported TOSCA types and implementations (see the sketch after the list for a concrete illustration):
- SlurmJobs: will lead to issuing an srun command with a given executable file.
- SlurmBatch: will lead to issuing an sbatch command with a given batch file and associated executables.
- Singularity integration: allows executing a Singularity container instead of an executable file.
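As a concrete illustration, a SlurmJob conceptually boils down to running the given executable through srun. The Go sketch below is purely illustrative and is not Yorc's implementation (Yorc runs such commands remotely over SSH on the Slurm login node); the executable path and the srun options are hypothetical.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical executable; --ntasks=1 requests a single task.
	cmd := exec.Command("srun", "--ntasks=1", "/home/user/myjob.sh")

	// Run the command and capture both stdout and stderr, as the job
	// output is usually what the user wants to see.
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("srun failed: %v\n", err)
	}
	fmt.Println(string(out))
}
```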
Alien4Cloud and Yorc can now mutually authenticate themselves with TLS certificates.
We constantly try to improve the feedback returned to our users about runtime execution. In this release we are publishing logs with more context on the node/instance/operation/interface to which the log relates.
Yorc 3.0 brings the foundations of applicative monitoring: it allows monitoring compute liveness at an interval defined by the user. When a compute goes down or comes back up, we use our events API to notify the user and allow Alien4Cloud to visually monitor the application within the runtime view.
Our monitoring implementation was designed to be a fault-tolerant service.
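To illustrate the principle, here is a simplified Go sketch, not Yorc's actual implementation: the compute address, the interval and the TCP probe are assumptions. A liveness monitor boils down to probing each compute at the user-defined interval and publishing an event only when the status changes.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// checkLiveness reports whether a TCP connection to the compute can be
// established within a short timeout (a stand-in for a real health check).
func checkLiveness(address string) bool {
	conn, err := net.DialTimeout("tcp", address, 5*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// Hypothetical compute endpoint and a user-defined check interval.
	address := "10.0.0.42:22"
	interval := 30 * time.Second

	lastAlive := true
	for range time.Tick(interval) {
		alive := checkLiveness(address)
		if alive != lastAlive {
			// In Yorc this is where an event would be published through the
			// events API so that Alien4Cloud can update the runtime view.
			fmt.Printf("compute %s liveness changed: alive=%v\n", address, alive)
			lastAlive = alive
		}
	}
}
```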