Enable session recording with AWS and Vault
Boundary 0.13 added SSH session recording support for HCP Boundary Plus and Boundary Enterprise. Session recording provides insight into user actions over remote SSH sessions to meet regulatory requirements for organizations and prevent malicious behavior. Administrators can enable session recording on SSH targets in their Boundary environment and replay recordings back within the Boundary admin UI.
This tutorial demonstrates enabling SSH session recording using Amazon S3 as the storage backend and HashiCorp Vault for credential management. Learners will deploy the required AWS resources using Terraform.
Tutorial overview
- Prerequisites
- Background
- Get setup
- Deploy Vault, targets, and workers
- Configure Vault
- Set up Boundary
- Enable session recording
- Verify and play back recordings
Prerequisites
Note
This tutorial was tested on 10 October 2023 using macOS 13.6, and in the Windows Subsystem for Linux (WSL) with Ubuntu 20.04. If you deploy the lab on a Windows machine, ensure you perform all the lab steps within the WSL.
Before moving on, check that the following versions or greater are installed.
This tutorial recommends completing the HCP Boundary administration tutorials first. The learner should have a working Boundary cluster and org running on HCP.
- A Boundary binary greater than 0.13.2 in your `PATH`
- A Vault binary greater than 1.12.0 in your `PATH` is recommended. Any version of Vault greater than 1.7 should work with this tutorial.
- Terraform 0.14.9 or greater is required. The binary must be available in your `PATH`.
- The `jq` utility installed and in your `PATH`
- The `make` utility is recommended to simplify workflow management for this tutorial, and should be installed and in your `PATH`. The tutorial can be completed without using `make`.
- Installing the Boundary Desktop app provides an optional workflow at the end of this tutorial. Version 1.2.0 or above is required for Vault support.
This tutorial assumes basic knowledge of using Vault, including managing policies, roles, and tokens. If you are new to using Vault, complete the Getting Started with Vault quick start tutorials before you integrate Vault with Boundary.
Session recording background
In highly regulated environments, a common requirement and challenge is having a system of record that archives actions taken on the network so that organizations can improve their security posture and enhance compliance.
Session recording gives administrators insight into user actions over remote SSH sessions in order to meet regulatory requirements and prevent malicious behavior. Administrators can enable session recording on SSH targets in their Boundary environment, store signed recordings in their Amazon S3 storage bucket, and play recordings back within the Boundary admin UI.
Recorded sessions are converted into a Boundary session recording (BSR), a binary file format and specification created to define the structure of Boundary recording files.
BSR is designed to:
- Support the recording of both multiplexed and non-multiplexed protocols
- Allow recordings of independent byte streams in a session to be written in parallel
- Support an optimal user experience during playback
- Be extensible to support more protocols in the future
A BSR contains all of the data transmitted between a user and a target during a session and is available within your storage bucket. These files are signed so that any tampering can be detected.
SSH session recording is available as a part of the new Plus tier in both HCP Boundary and Boundary Enterprise.
Configure the lab environment
Several components are needed for the lab environment for this tutorial:
- HCP Boundary Plus or Boundary Enterprise cluster
- Amazon S3 storage bucket
- SSH host for testing recordings
- Vault server with policies allowing connections from Boundary and credentials for the SSH target
- Boundary AWS host catalog, Vault credential store, and SSH target resources
Deploy an HCP Boundary Plus cluster
Session recording, credential injection, and SSH targets are features available in HCP Boundary Plus.
First, deploy an HCP Boundary cluster with the Plus tier selected.
Launch the HCP Portal and log in.
Select your organization and project. From within that project, select Boundary from the Services menu in the left navigation.
Click Deploy Boundary.
In the Instance Name text box, provide a name for your Boundary instance.
Under Choose a tier, select the Plus option to enable session recording.
Under the Create an administrator account section, enter the Username and Password for the initial Boundary administrator account.
Click Deploy. Wait for the instance to initialize before proceeding.
Note
The first 50 sessions for any HCP Boundary cluster are free, after which you will be charged. You can safely delete the HCP Plus cluster after this tutorial without incurring any costs.
The following values will be used as environment variables later on. Copy the Boundary Cluster URL from the HCP Boundary portal.
- Boundary address: the `BOUNDARY_ADDR` variable
- Boundary Cluster ID: the `BOUNDARY_CLUSTER_ID` variable
- Boundary admin username: the `BOUNDARY_USERNAME` variable
- Boundary admin password: the `BOUNDARY_PASSWORD` variable
Store these values in a safe location.
Note
The Boundary cluster ID is derived from the Boundary address. For example, if your cluster URL is:
https://abcd1234-e567-f890-1ab2-cde345f6g789.boundary.hashicorp.cloud
Then your cluster ID is `abcd1234-e567-f890-1ab2-cde345f6g789`.
Next, click Open Admin UI.
Log in to Boundary using the admin credentials you created when you deployed the cluster.
Navigate to the Auth Methods page using the left navigation panel. Locate the `password` auth method, and copy its ID (such as `ampw_AQSr776Hnm`).
You will use this value later on:
- Boundary auth method ID: the `BOUNDARY_AUTH_METHOD_ID` variable
Review Terraform configuration
Open a terminal and navigate to a working directory, such as the home directory. Clone the sample repository containing the lab environment config files.
Navigate into the `learn-boundary-session-rec-aws-vault` directory and list its contents.
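For reference, the clone and inspection steps look like the following. The repository URL is not reproduced here, so substitute the one linked from the tutorial:

```shell
# Clone the sample repository (substitute the actual repository URL)
git clone <SAMPLE_REPO_URL> learn-boundary-session-rec-aws-vault

# Move into the repository and list the lab files
cd learn-boundary-session-rec-aws-vault
ls
```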
The repository contains the following files and folders:
- `Makefile`: Definitions of scripts for easy lab deployment and cleanup.
- `infra/`: Terraform resources for configuring Vault, EC2 hosts, Amazon S3 storage buckets, and Boundary workers.
- `scripts/`: Setup script for Make, and service scripts needed by Vault and Boundary workers.
- `vault/`: Vault policies for Boundary and the KV secrets engine.
Because this lab environment utilizes Vault for credential management, `make` is used to reduce complexity when deploying Terraform, configuring Vault, and setting up the Boundary workers. The tutorial content can be completed without using `make`.
These components are explained at a high level in this tutorial, but review the content at your own pace before proceeding. While the Terraform code is extensive, a deep knowledge of Terraform is not necessary to deploy the lab environment and continue learning about session recording.
Deploy Vault, targets, and workers
The `infra/` folder contains several Terraform config files that specify the resources used in this lab.
By default, the following resources are deployed:
- 1 HashiCorp Vault instance (including Boundary worker service)
- 2 Amazon Linux EC2 target instances (including Boundary worker service)
- 1 key pair for EC2 instance access (for Vault and targets)
- 1 Amazon S3 storage bucket
- 2 IAM users (1 for the host catalog, 1 for the S3 storage bucket)
The required IAM roles and policies are also deployed and associated with the storage bucket, instances, and IAM users. VPCs, subnets, and gateways are also assigned as required to allow the Boundary worker services to communicate with HCP Boundary.
Tip
This is a simplified workflow. To reduce costs, this setup does not deploy separate Boundary workers to the target VPCs. Instead, the targets and the Vault instance run the Boundary worker service themselves. In a more realistic environment, the workers would run on dedicated instances and provide access to the hosts on their respective VPCs.
A worker deployed on the same network as Vault is required for integrating private Vault clusters with HCP Boundary. A self-managed worker is also needed to route traffic to targets on private networks, like the AWS target hosts in this tutorial. To learn more about setting up self-managed workers, refer to the Self-Managed Worker Registration with HCP Boundary tutorial.
This tutorial automatically deploys the latest available worker binary for the HCP Boundary control plane. To use a different version of the worker binary, modify the `scripts/vault_worker_init` and `scripts/target_worker_init` files. The binary version should match the version of the control plane you are deploying to. Check the version of the control plane in the HCP Boundary portal.
Set required environment variables
The following environment variables are required to deploy the lab environment:
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_REGION`
- `BOUNDARY_ADDR`
- `BOUNDARY_USERNAME`
- `BOUNDARY_PASSWORD`
- `BOUNDARY_AUTH_METHOD_ID`
- `BOUNDARY_CLUSTER_ID`
First, set the AWS variables.
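For example (the region shown is only an example; substitute your own credentials and preferred region):

```shell
export AWS_ACCESS_KEY_ID="<YOUR_AWS_ACCESS_KEY_ID>"
export AWS_SECRET_ACCESS_KEY="<YOUR_AWS_SECRET_ACCESS_KEY>"
export AWS_REGION="us-east-1"
```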
Next, set the required Boundary variables.
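For example, using the values you copied from the HCP portal and the Admin UI:

```shell
export BOUNDARY_ADDR="https://<YOUR_CLUSTER_ID>.boundary.hashicorp.cloud"
export BOUNDARY_CLUSTER_ID="<YOUR_CLUSTER_ID>"
export BOUNDARY_USERNAME="<YOUR_ADMIN_USERNAME>"
export BOUNDARY_PASSWORD="<YOUR_ADMIN_PASSWORD>"
export BOUNDARY_AUTH_METHOD_ID="<YOUR_AUTH_METHOD_ID>"
```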
Verify all the environment variables have been set:
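One way to check is to print any matching variables from your environment. Note that this will echo secret values to the terminal:

```shell
env | grep -E '^(AWS|BOUNDARY)_'
```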
Apply Terraform
The `make` utility is used to manage the Terraform deployment. You can execute the following `make` commands from the `learn-boundary-session-rec-aws-vault/` directory to interact with Terraform:
- `apply`: Deploys the AWS resources defined in the infra folder, supporting a self-managed Boundary worker and any version of Vault. Required env variables: `BOUNDARY_CLUSTER_ID`. Optional env variables: `INSTANCE_COUNT` (determines the number of aws_instances to create for testing the dynamic host catalog, from 1 to 5). Created resource names will be prefixed with the Terraform workspace value, which is derived from the `whoami` output.
- `force-apply`: Taints the aws_instance resources so they are recreated, refreshing Vault.
- `destroy`: Destroys the AWS resources defined in the infra folder.
Remember that you can only use `make` from the root of this repository.
Tip
You do not have to use `make`. Each `make` target wraps a small set of shell and Terraform commands, so if the tutorial tells you to execute `make apply`, you can run the equivalent commands directly instead. The generic approach for any `make` command used in this tutorial is to look up the target's definition in the `Makefile` and execute those commands yourself, as sketched below.
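For example, a rough equivalent of `make apply` is sketched below. This is an assumption about what the target wraps; check the `Makefile` for the exact commands and any setup scripts it runs first:

```shell
# Assumed equivalent of `make apply`: run Terraform directly from the infra/ directory
cd infra
terraform init
terraform apply
```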
All of the `make` commands can be viewed within the `Makefile`, and should be executed from the `learn-boundary-session-rec-aws-vault/` directory.
Next, apply Terraform using `make apply`. The deployment will usually complete within five minutes.
Your output will include resources prefixed by your Terraform workspace name, which is derived from the output of `whoami`. For example, the `host_key_pair_name` value is prefixed with `username-`, which will match the `whoami` output of the user that deployed Terraform.
You can query these outputs at any time by executing `make terraform_output`:
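The command prints the Terraform outputs for the deployment; names will carry your workspace prefix:

```shell
make terraform_output
```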
The lab infrastructure is now deployed.
Next, configure Vault and register the Boundary workers.
Configure Vault
Setting up Vault to inject credentials for the targets requires the following steps:
- Obtain the Vault root token to enable login
- Write the `boundary-controller` and `kv-read` policies to Vault
- Enable the kv-v2 secrets engine
- Create a target secret, including a username and private key
- Register a Boundary worker to provide network access to Vault from HCP Boundary
These steps are simplified using two `make` calls:
- `make vault_root_token`: Copies the root token to your local machine from the Vault host. The command also prints the `VAULT_TOKEN` and `VAULT_ADDR` values, which should be exported as environment variables.
- `make vault_init`: Logs into Vault, writes the `boundary-controller` and `kv-read` policies to Vault, and generates a client token for Boundary. This token will be used to set up the Vault credential store, and should be exported as the `VAULT_CRED_STORE_TOKEN` variable.
Obtain the root token
Use `make vault_root_token` to print the Vault root token and Vault address. Enter `yes` when prompted to continue connecting to the Vault EC2 host.
Examine the output, and execute the suggested commands to export the `VAULT_TOKEN` and `VAULT_ADDR` environment variables.
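For example, using the values printed by `make vault_root_token` (the address format shown is an assumption; copy the exact values from the command output):

```shell
export VAULT_TOKEN="<VAULT_ROOT_TOKEN_FROM_OUTPUT>"
export VAULT_ADDR="http://<VAULT_HOST_PUBLIC_IP>:8200"
```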
Next, you will log in to Vault and write the necessary policies for Boundary, and generate a client token for the Boundary credential store.
Write policies and create a client token
The `make vault_init` command performs the following actions:
- Writes the `boundary-controller` and `kv-read` policies to Vault
- Creates a kv secret at `secret/ssh_host` with the target credentials
- Generates a client token for Boundary
The target instances (and Vault) all utilize the same keypair to reduce complexity for this example. The username for all instances is `ec2-user`, and the keypair is stored on your local machine at `~/.ssh/username-host-key`. This means a single secret will be used when injecting the credentials later on with Boundary.
To learn more about credential injection with Vault, refer to the HCP credential injection with private Vault tutorial.
Execute `make vault_init`.
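Under the hood, the script runs Vault commands roughly like the following. This is a sketch under assumptions — the policy file names are illustrative, and the exact commands live in the `scripts/` and `vault/` directories of the repository:

```shell
# Write the Boundary policies (file names are assumptions; see the vault/ directory)
vault policy write boundary-controller vault/boundary-controller-policy.hcl
vault policy write kv-read vault/kv-read-policy.hcl

# Enable kv-v2 and store the SSH credentials used for the targets
vault secrets enable -path=secret kv-v2
vault kv put secret/ssh_host username=ec2-user private_key=@"$HOME/.ssh/username-host-key"

# Create an orphan, periodic, renewable client token for Boundary's credential store
vault token create \
  -no-default-policy=true \
  -policy="boundary-controller" \
  -policy="kv-read" \
  -orphan=true \
  -period=20m \
  -renewable=true
```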
Examine the output, and execute the suggested command to export the `VAULT_CRED_STORE_TOKEN` environment variable.
Register the Vault worker
A Boundary worker is required to provide private network access to HCP Boundary and connect users to targets. This means both Vault and the target instances require a Boundary worker deployed in their respective networks.
While workers would usually be deployed on separate instances, this tutorial reduces costs by running the Boundary worker service on the same instances as Vault and the targets. The worker service was deployed as part of the Terraform apply.
Refer to the `scripts/vault_worker_init` file to learn more about how the worker service was deployed.
When the worker service was started, a token was produced to register the worker with HCP Boundary. You can register Boundary workers using the Boundary CLI or Admin Console Web UI.
The `make register_vault_worker` command first logs into the Vault host and obtains the worker auth token. Next, it authenticates to your HCP Boundary instance using the `BOUNDARY_ADDR`, `BOUNDARY_USERNAME`, and `BOUNDARY_PASSWORD` environment variables you set earlier. Finally, it registers the worker using the `boundary workers create` command.
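The registration step is roughly equivalent to the following CLI call. The worker name matches the one used later in this tutorial, and the token placeholder stands in for the value printed by the worker service on the Vault host:

```shell
boundary workers create worker-led \
  -name="vault-worker" \
  -worker-generated-auth-token="<WORKER_AUTH_TOKEN_FROM_VAULT_HOST>"
```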
Execute `make register_vault_worker`.
Copy the worker ID from the output.
Next, verify that the worker was registered.
Start by logging in to Boundary using your admin credentials.
Read the worker details.
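For example, using the environment variables set earlier and the worker ID you copied. The `env://` reference avoids placing the literal password on the command line:

```shell
boundary authenticate password \
  -auth-method-id=$BOUNDARY_AUTH_METHOD_ID \
  -login-name=$BOUNDARY_USERNAME \
  -password=env://BOUNDARY_PASSWORD

boundary workers read -id=<YOUR_VAULT_WORKER_ID>
```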
Notice the tags defined for this worker.
This worker is tagged with `"type":["worker","vault"]`, as defined in the worker configuration file in `scripts/vault_worker_init`, deployed on the Vault host. These tags will be used later on when setting up Boundary's Vault credential store.
Register the target workers
Just like Vault, a Boundary worker is required to provide private network access to the targets. The worker service is also running directly on the targets in this example to reduce costs.
The `make register_target_workers` command logs into each target host and obtains the worker auth token. Next, it authenticates to your HCP Boundary instance and registers the worker using the `boundary workers create` command.
Execute `make register_target_workers`. Enter `yes` twice when prompted to connect to the target instances and obtain the worker tokens.
Next, read the `aws-worker-1` details and examine its tags.
Notice the tags defined for `worker1`.
This worker is tagged with `"type":["dev-worker","worker1"]`, as defined in the worker configuration file in `scripts/target_worker_init` deployed on this target host.
Now, read the `aws-worker-2` worker details.
Notice the tags defined for `worker2`.
This worker is tagged with `"type":["prod-worker","worker2"]`, as defined in the worker configuration file in `scripts/target_worker_init` deployed on this target host.
These tags will be used later on when setting up Boundary's AWS host sets.
Set up Boundary
The following are required to set up session recording for an SSH target in Boundary:
- A credential store
- A credential library
- A Boundary storage bucket
- An SSH target type with credential injection enabled
These resources can be configured via the Admin Console UI, the CLI, or Terraform. Select a workflow below to continue setting up Boundary.
Warning
Boundary storage bucket lifecycle management is still under development. In order to prevent unintentional loss of session recordings, orgs that contain storage buckets cannot currently be deleted. Before continuing, note that the org created for this tutorial cannot currently be deleted.
Start by logging in to HCP Boundary.
Log in to the HCP portal.
From the HCP Portal's Boundary page, click Open Admin UI - a new page will open.
Enter the admin username and password you created when you deployed the new instance and click Authenticate.
Next, set up a new testing org and project scope.
Note
Please use a new test org for this tutorial, because orgs that contain session recordings cannot currently be deleted.
Navigate to the Orgs page and click New Org.
Fill out the new org form with a Name of `ssh-recording-org` and a Description of `SSH test org`. Click Save.
From within the new org, click New Project.
Fill out the new project form with a Name of `ssh-recording-project` and a Description of `Secure Socket Handling recordings`. Click Save.
Create a host catalog
You can use a dynamic host catalog to import the hosts created by Terraform into Boundary. These hosts will be used later on when configuring an SSH target.
Select Host Catalogs from the left navigation panel.
Choose New.
Enter `aws-recording-catalog` in the Name field, and enter a description of `aws session recording host catalog`.
Select the Dynamic type. Select the AWS provider, and enter the following details:
- AWS Region: `<YOUR_AWS_REGION>`
- Access Key ID: `<YOUR_host_catalog_access_key_id>`
- Secret Access Key: `<YOUR_host_catalog_secret_access_key>`
The `host_catalog_access_key_id` and `host_catalog_secret_access_key` are sensitive Terraform outputs. This means you will have to manually extract their contents from the terraform.tfstate file. Open the shell session where Terraform was deployed, and execute the following:
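One way to read these values is with `terraform output`, which prints sensitive values when you request them by name. The `-chdir=infra` flag assumes the state lives in the `infra/` directory; adjust the path if your Makefile placed it elsewhere:

```shell
terraform -chdir=infra output -raw host_catalog_access_key_id
terraform -chdir=infra output -raw host_catalog_secret_access_key
```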
Copy these values into their fields.
Lastly, check the box beside Disable credential rotation.
Click Save.
Create the dev host set
A host set can be used to sort hosts by environment.
Start by creating a host set for the dev hosts.
Select the Host Sets tab, and then select New.
Enter `dev_host_set` in the Name field.
Next, define the instances that should belong to the dev host set. This can be done by examining the instance tags, which are printed in the Terraform output. Notice the instance tagged as `"env" = "dev"`.
To select it for the host set, enter the following in the Filter field:
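The original tutorial shows the filter expression as a snippet. For the AWS host catalog plugin, a filter that matches the `env` tag would typically look like the following (an assumption based on the plugin's `tag:<key>=<value>` filter syntax):

```
tag:env=dev
```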
Click Save.
Click the Hosts tab, and verify that the `boundary-host-1` host appears as expected. If it's missing, wait a few moments and refresh the page.
Create the prod host set
Next, create a host set for the prod hosts.
Navigate back to the `aws-recording-catalog` host catalog. Click Manage, and select New Host Set.
Enter `prod_host_set` in the Name field.
Define the instances that should belong to the prod host set by selecting the instance tagged as `"env" = "prod"`.
To select it for the host set, enter the following in the Filter field:
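As with the dev host set, the prod filter matches the `env` tag (again assuming the AWS plugin's `tag:<key>=<value>` filter syntax):

```
tag:env=prod
```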
Click Save.
Click the Hosts tab, and verify that the `boundary-host-2` host appears as expected. If it's missing, wait a few moments and refresh the page.
Create a credential store
Next, create a Vault credential store within Boundary using the `VAULT_CRED_STORE_TOKEN` value, defined when you set up Vault. The `vault` credential store type is used for Vault integration, but you can also use static credential stores with credential injection.
When you set up the credential store, it's important to create a worker filter. A worker filter identifies workers that should be used as proxies for the new credential store, and ensures these credentials are brokered from the private Vault.
Navigate to the global scope within the UI, and select the Workers page.
Select `vault-worker`, and notice its Worker Tags: `"type" = ["s3", "vault", "worker"]`.
With the tags and `VAULT_CRED_STORE_TOKEN` value, set up a new credential store.
Navigate back to the `ssh-recording-org`, and select the `ssh-recording-project`.
Select the Credential Stores page, and click New.
Enter the Name `Vault AWS Host Credentials`.
Select the type Vault, and enter the following details:
- Address: `<YOUR_VAULT_ADDR>`
- Worker Filter: `"vault" in "/tags/type"`
- Token: `<YOUR_VAULT_CRED_STORE_TOKEN>`
The `VAULT_ADDR` and `VAULT_CRED_STORE_TOKEN` values were exported as environment variables when executing `make vault_init` in the Write policies and create a client token section. Check their values in the terminal session used to apply Terraform, and copy them into their fields:
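A quick way to check both values in that terminal session:

```shell
echo $VAULT_ADDR
echo $VAULT_CRED_STORE_TOKEN
```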
Click Save.
Create a credential library
A credential library is used to determine what credentials should be accessed from Vault, and the path to query for them.
Note
Credential libraries of type `ssh_private_key` cannot currently be created with the UI. Use the CLI to create the credential library.
Create a new credential library of type `ssh_private_key` within Boundary using the credential store ID and passing the vault-path of `secret/data/ssh_host`.
To gather the `CRED_STORE_ID`, navigate to the Credential Stores page within the `ssh-recording-project`, and copy the `Vault AWS Host Credentials` credential store ID (such as `csvlt_CixM26cMMn`).
Return to the shell session used to deploy Terraform and log in to Boundary using your admin credentials. These were set as environment variables at the beginning of the tutorial as `BOUNDARY_USERNAME` and `BOUNDARY_PASSWORD`.
Next, create the credential library.
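A sketch of the CLI call follows. The subcommand and flags follow the Boundary 0.13 CLI for Vault credential libraries; substitute the credential store ID you copied above:

```shell
boundary credential-libraries create vault-generic \
  -credential-store-id=<YOUR_CRED_STORE_ID> \
  -vault-path="secret/data/ssh_host" \
  -name="vault-cred-library" \
  -credential-type=ssh_private_key
```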
The output displays the ID and details of the new credential library.
Open the Boundary Admin Web UI, and navigate back to the Credential Stores page.
From the `Vault AWS Host Credentials` credential store, click the Credential Libraries tab. Verify that the `vault-cred-library` was created successfully, and click on its name to verify its details.
Enable session recordings
Two tasks are left to finish setting up session recordings:
- Set up a Boundary storage bucket
- Create an SSH target
The SSH target requires the injected application credentials (supplied from the Vault credential library), and the Boundary storage bucket it should associate recordings with.
Create a storage bucket
Within Boundary, a storage bucket resource is used to store the recorded sessions. A storage bucket represents a bucket in an external store, in this case, Amazon S3. You must create a Boundary storage bucket associated with an external store before enabling session recording.
Navigate to the `global` scope, and then the Storage Buckets page.
Click New Storage Bucket. Fill out the following details:
- Name: `ssh-test-bucket`
- Scope: `ssh-recording-org`
- Bucket name: `<YOUR_S3_BUCKET_NAME>`
- Region: `<YOUR_AWS_REGION>`
- Access key ID: `<YOUR_recording_storage_user_access_key_id>`
- Secret access key: `<YOUR_recording_storage_user_secret_access_key>`
- Worker filter: `"s3" in "/tags/type"`
The `recording_storage_user_access_key_id` and `recording_storage_user_secret_access_key` values are sensitive Terraform outputs. This means you will have to manually extract their contents from the terraform.tfstate file. Additionally, the name of the Amazon S3 bucket is required from the Terraform outputs.
Open the shell session where Terraform was deployed, and execute the following:
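One way to gather these values is with `terraform output`. The `-chdir=infra` path is an assumption, and the bucket name output is listed by the plain `output` command:

```shell
# List all outputs (the S3 bucket name appears here)
terraform -chdir=infra output

# Print the sensitive IAM user values directly
terraform -chdir=infra output -raw recording_storage_user_access_key_id
terraform -chdir=infra output -raw recording_storage_user_secret_access_key
```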
Copy these values into their fields.
For the Worker Filter, a worker with access to the S3 storage bucket is needed. A public S3 bucket is used for this tutorial, meaning any worker will have access. In the case of a private S3 bucket, a worker with appropriate network access should be deployed and registered with Boundary, then entered here.
Select the worker tagged with `"type" = ["s3", "vault", "worker"]`, which also provides access to Vault. The `"s3" in "/tags/type"` filter listed above will select this worker.
Lastly, check the box next to Disable credential rotation.
Click Save.
Create an SSH target
To finish setting up recordings, create a target for the `boundary-host-1` host.
Navigate to the Targets page within `ssh-recording-project` and click New.
Fill out the New Target form. Select a Type of SSH.
- Name: `dev-recording-target`
- Type: `SSH`
- Default Port: `22`
- Maximum Connections: `-1`
Click Save.
Click the Workers tab for the new target.
Next to Egress workers, click Edit worker filter.
An egress worker filter specifies which worker has access to the target, such as a worker deployed in the same network as the target. An ingress worker filter specifies how to route a Boundary client to the target network, and is not used in this example.
Recall the tags associated with `aws-worker-1`, which provides access to the `boundary-host-1` host. The tags for this worker are `"type" = ["worker1", "dev-worker", "s3"]`.
An appropriate filter to select this worker is:
- Egress worker filter: `"dev-worker" in "/tags/type"`
Paste the filter expression in and click Save.
Now associate `dev-recording-target` with `dev_host_set`.
Select the Host Sources tab for `dev-recording-target`.
Click Add Host Sources.
Check the box next to the host set named dev_host_set, then click Add Host Sources.
Now associate the SSH target with the Vault credential library.
Select the Injected Application Credentials tab for `dev-recording-target`.
Click +Add Injected Application Credentials.
Check the box next to the credential library named vault-cred-library, then click Add Injected Application Credentials.
Enable session recording
Finally, enable session recording for the `dev-recording-target`.
Navigate back to the `dev-recording-target` Details page.
Under Session Recording, click Enable recording.
On the Enable Session Recording for Target page, toggle the switch next to Record sessions for this target.
For the AWS storage buckets, select the `ssh-test-bucket`.
Click Save.
Under the `dev-recording-target` Details page, the `ssh-test-bucket` should now be listed under Session Recording.
Optionally, you may repeat the process with a new `prod-recording-target`. This target should be configured like the dev target, but should use the `"prod-worker" in "/tags/type"` worker filter instead. You can use the same storage bucket for both targets.
Record a session
Now you are ready to test session recording using `dev-recording-target`.
To log in to Boundary using the Desktop app, you must gather the `BOUNDARY_ADDR` value from the HCP Boundary Admin Console. Check the value of `BOUNDARY_ADDR` in the terminal session where Terraform was applied:
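A quick check in that session:

```shell
echo $BOUNDARY_ADDR
```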
Open the Boundary Desktop app.
Enter the Boundary cluster URL (for example, `https://d2a6e010-ba05-431a-b7f2-5cbc4e1e9f06.boundary.hashicorp.cloud`) and click Submit.
Authenticate using your HCP Boundary admin credentials.
On the Targets page, notice the target details for `dev-recording-target`.
Click Connect to initiate a session.
The Successfully Connected page displays the target ID (Target Connection details) and Proxy URL.
To start a session, open your terminal or SSH client. You can start a session using SSH and the Proxy URL from the Boundary Desktop app.
Connect on 127.0.0.1 and provide the proxy port using the `-p` option. Enter `yes` when prompted to establish a connection.
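For example, assuming the proxy port shown by the Desktop app and the `ec2-user` account used throughout this lab:

```shell
ssh ec2-user@127.0.0.1 -p <PROXY_PORT>
```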
When you are finished, you can close the connection to the server by entering `exit`, or you can cancel the session directly from the Boundary Desktop app under the Sessions view.
View the recording
You can view a list of all recorded sessions, or if you know the ID of a specific recorded session, you can find any channels associated with that recording.
To play back a session, open the Admin Console Web UI, and re-authenticate as the admin user if necessary.
From the `global` scope, navigate to the Session Recordings page.
Note that the following details are listed for each recording:
- Time
- Status
- User
- Target
- Duration
Click View next to the recording.
Within the Session Playback page, click Play for Channel 1.
After the recording loads, hover your mouse on the video player and click the play button to start playback.
Note the Channel details on the right, which display the duration and bytes up / bytes down.
Validate the recording
A session recording represents a directory structure of files in an external object store that together are the recording of a single session between a user and a target.
Verify that the recording exists within Boundary.
Read the recording's details.
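A sketch of the CLI calls, assuming an admin session is still active. The recording ID (prefixed `sr_`) comes from the list output:

```shell
# List session recordings across scopes
boundary session-recordings list -recursive

# Read the details of a specific recording
boundary session-recordings read -id=<YOUR_SESSION_RECORDING_ID>
```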
Note the Channel Recording, labeled Mime Types: `application/x-asciicast` (ID `chr_cjTX96USZC` in this example). Downloading this recording produces a `.cast` file, which can be played back locally using asciinema.
If you want to download this file, execute the following command:
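The download subcommand writes the channel recording to a local `.cast` file, which asciinema can then play. The channel ID shown is the example value from above; substitute your own:

```shell
boundary session-recordings download -id=chr_cjTX96USZC

# Play back the downloaded file locally (the file name may differ; check the command output)
asciinema play chr_cjTX96USZC.cast
```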
BSR files
The Boundary Session Recording (BSR) file defines a hierarchical directory structure of files and a binary file format. It contains all the data transmitted between a user and a target during a single session.
Boundary stores the recordings within the Amazon S3 storage bucket as BSR files.
A BSR connections directory contains a summary of connections, as well as inbound and outbound requests. If you use a multiplexed protocol, there are subdirectories for the channels.
The asciicast format is well suited for the playback of interactive shell activity.
However, some aspects of the recording cannot be translated into asciicast.
For example, if an SSH session uses the `RemoteCommand` option, or is used to exec a command, the command is not displayed in the asciicast. The output of the command may be displayed, though.
If you use SSH for something other than an interactive shell, such as for file transfer, X11 forwarding, or port forwarding, Boundary does not attempt to create an asciicast.
In all cases, the SSH session is still recorded in the BSR file and you can view the BSR file in the external storage bucket.
Cleanup and teardown
Destroy the AWS resources.
Execute `make destroy` to destroy the Terraform resources in AWS. Enter `yes` to confirm the deletion.
Destroy the Boundary resources.
Note
Recall that Boundary storage bucket lifecycle management is still under development. In order to prevent unintentional loss of session recordings, orgs that contain storage buckets cannot currently be deleted. When destroying your Boundary resources, you will receive an error if you attempt to delete the storage bucket, or the scopes that contain the bucket.
From the Admin Console Web UI, destroy the following resources:
- Targets
- AWS host catalog
- Vault credential store
Scopes containing session recordings cannot currently be deleted.
Unset the environment variables used in any active terminal windows for this tutorial.
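For example, for the variables used in this tutorial:

```shell
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_REGION \
      BOUNDARY_ADDR BOUNDARY_CLUSTER_ID BOUNDARY_USERNAME BOUNDARY_PASSWORD \
      BOUNDARY_AUTH_METHOD_ID VAULT_ADDR VAULT_TOKEN VAULT_CRED_STORE_TOKEN
```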