Full-featured in-browser TAK Client powered by AWS & ETL tooling to bring non-TAK data sources into a TAK Server
Testing locally can be done either by running the server directly (recommended for development) or via the provided Docker Compose services (recommended for limited testing).
Note that full functionality requires CloudTAK to be deployed into an AWS environment; many of the services it provides will initiate AWS API calls and have no graceful fallback.
```sh
docker-compose up --build
```
Once the database and API service have built, the server will start on port 5000. In your web browser, visit http://localhost:5000 to view the ETL UI.
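As a quick smoke test before opening the browser (assuming the defaults above), you can confirm the server is responding:

```sh
# Exit non-zero if the server on the default port is not answering.
curl -fs http://localhost:5000 > /dev/null && echo "CloudTAK is up"
```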
Installation outside of the Docker environment is also fairly straightforward. In the `./api` directory, perform the following:
```sh
npm install

echo "CREATE DATABASE tak_ps_etl" | psql

npx knex migrate:latest

cd web/
npm install
npm run build
cd ..

npm run dev
```
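The `CREATE DATABASE` step above assumes `psql` can reach a local Postgres instance with its defaults. If yours runs elsewhere, the standard libpq environment variables redirect it; the values below are placeholders, and note that the API's own connection settings may be configured separately:

```sh
# Placeholder values - adjust to match your Postgres instance.
export PGHOST=localhost
export PGPORT=5432
export PGUSER=postgres

echo "CREATE DATABASE tak_ps_etl" | psql
```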
The ETL service assumes several prerequisite dependencies are deployed before the initial ETL deployment. The following dependencies need to be created:
| Name | Notes |
| --- | --- |
| `coe-vpc-<name>` | VPC & networking to place tasks in - repo |
| `coe-ecs-<name>` | ECS Cluster for API Service - repo |
| `coe-ecr-etl` | ECR Repository for storing API Images - repo |
| `coe-ecr-etl-tasks` | ECR Repository for storing Task Images - repo |
| `coe-elb-access` | Centralized ELB Logs - repo |
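To check whether a given prerequisite stack is already deployed in your account, a CloudFormation lookup such as the following works (substitute `<name>` as appropriate):

```sh
# Returns stack details if deployed; errors if the stack does not exist.
aws cloudformation describe-stacks --stack-name coe-vpc-<name>
```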
An AWS ACM certificate must also be generated covering both the subdomain that CloudTAK is deployed to and the second-level wildcard. In the example below, CloudTAK is deployed to map.example.com, and the second-level wildcard is used for serving tiles, currently configured as tiles.map.example.com. The certificate therefore needs to cover:

```
*.example.com
*.map.example.com
```
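One way to request such a certificate is sketched below with the AWS CLI, using the example domains above; DNS validation is assumed and your domains will differ:

```sh
# Request a certificate covering the deployment domain and its
# second-level wildcard (example.com values are placeholders).
aws acm request-certificate \
    --domain-name '*.example.com' \
    --subject-alternative-names '*.map.example.com' \
    --validation-method DNS
```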
### coe-ecr-etl

Can be created using the dfpc-coe/ecr repository.

From the ecr repo:
```sh
npm install
npx deploy create etl
```
### coe-ecr-etl-tasks

Can be created using the dfpc-coe/ecr repository.

From the ecr repo:
```sh
npm install
npx deploy create etl-tasks
```
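Once both repositories have been created, you can sanity-check that they exist (assuming AWS CLI credentials for the same account and region):

```sh
# List the two ECR repositories the ETL deployment expects.
aws ecr describe-repositories \
    --repository-names coe-ecr-etl coe-ecr-etl-tasks
```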
From the root directory, install the deploy dependencies:

```sh
npm install
```
A script to build Docker images and publish them to your ECR is provided and can be run from the root of the project using:

```sh
npm run build
```

Ensure that you have created the necessary ECR repositories as described in the previous step and that you have AWS credentials available in your current terminal environment, as an `aws ecr get-login-password` call will be issued.
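If you ever need to authenticate your local Docker daemon against ECR manually, the standard flow looks like the following (region and account ID are placeholders):

```sh
# Log Docker in to ECR using a short-lived password.
aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin \
      123456789012.dkr.ecr.us-east-1.amazonaws.com
```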
From the root directory, install the deploy dependencies:

```sh
npm install
```
Deployment to AWS is handled via AWS CloudFormation. The template can be found in the `./cloudformation` directory. The deployment itself is performed by Deploy, which was installed in the previous step.

The deploy tool can be run via the following:

```sh
npx deploy
```

To install it globally, view the deploy README.
Deploy uses your existing AWS credentials. Ensure that your `~/.aws/credentials` has an entry like:

```
[coe]
aws_access_key_id = <redacted>
aws_secret_access_key = <redacted>
```
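If the profile is not picked up automatically, the standard `AWS_PROFILE` environment variable selects it, assuming deploy follows the default AWS credential resolution chain:

```sh
# Use the [coe] profile for this shell session.
export AWS_PROFILE=coe
npx deploy info <stack> --outputs
```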
Deployment can then be performed via the following:

```sh
npx deploy create <stack>
npx deploy update <stack>
npx deploy info <stack> --outputs
npx deploy info <stack> --parameters
```
Stacks can be created, deleted, cancelled, etc., all via the deploy tool. For further information about deploy functionality, run the following for help:

```sh
npx deploy
```
Further help about a specific command can be obtained via something like:

```sh
npx deploy info --help
```
| Name | Notes |
| --- | --- |
| `coe-media-<name>` | Task Definitions for Media Server Support - repo |
An S3 bucket will be created as part of the CloudFormation stack that contains geospatial assets related to user files, missions, CoTs, etc. The following table is an overview of the prefixes in the bucket and their purpose:
| Prefix | Description |
| --- | --- |
| `attachment/{sha256}/{file.ext}` | CoT Attachments by Data Package reported SHA |
| `data/{data sync id}/{file.ext}` | CloudTAK managed Data Sync file contents |
| `import/{UUID}/{file.ext}` | User Imports |
| `profile/{email}/{file.ext}` | User Files |