
I have a module that creates all the infrastructure needed for a Lambda function, including the ECR repository that stores its image:

resource "aws_ecr_repository" "image_storage" {
  name                 = "${var.project}/${var.environment}/lambda"
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}

resource "aws_lambda_function" "executable" {
  function_name = var.function_name
  image_uri     = "${aws_ecr_repository.image_storage.repository_url}:latest"
  package_type  = "Image"
  role          = aws_iam_role.lambda.arn
}

The problem, of course, is that this fails: when aws_lambda_function is created, the repository exists but the image does not, because the image is uploaded by my CI/CD pipeline.

So this is a chicken-and-egg problem. Terraform is supposed to be used only for infrastructure, so I cannot (and should not) use it to upload an image, even a dummy one, yet I cannot create the infrastructure unless an image is uploaded between the repository and Lambda creation steps.

The only solution I can think of is to create the ECR repository separately from the Lambda function and then somehow reference it as an existing AWS resource in my Lambda module, but that seems kind of clumsy.
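For what it's worth, that split would look roughly like this on the Lambda side (a sketch only; the data source assumes the repository was created elsewhere and that CI/CD has already pushed an image to it):

```hcl
# Look up a repository created outside this configuration,
# whose image is pushed by CI/CD before this stack runs.
data "aws_ecr_repository" "image_storage" {
  name = "${var.project}/${var.environment}/lambda"
}

resource "aws_lambda_function" "executable" {
  function_name = var.function_name
  package_type  = "Image"
  image_uri     = "${data.aws_ecr_repository.image_storage.repository_url}:latest"
  role          = aws_iam_role.lambda.arn
}
```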

Any suggestions?

5 Comments

Yes, separate creation is what is usually done in such a case. Commented Nov 10, 2021 at 2:38
Btw. you're providing the repository URL to image_uri, not a Docker image URL, so this won't work anyway. A repository can contain multiple Docker images. It should look more or less like this: ${aws_ecr_repository.image_storage.repository_url}/imageName:latest. So you need to create the ECR repository separately first, then upload the image, and then create the lambda... Commented Nov 12, 2021 at 0:53
This is the only way I have found to build a containerized lambda, push it to ECR, and use it in an API Gateway hands-on: hands-on.cloud/terraform-deploy-python-lambda-container-image Commented Mar 3, 2022 at 18:58
Running across the same problem, what did you end up with? I thought possibly having it create a dummy image. Also, for deployment, how are you updating the lambda version? Commented Jun 30, 2022 at 10:33
@Baron I posted a solution. Let me know if you have any more questions. Commented Jun 30, 2022 at 19:46

3 Answers


I ended up using the following solution, where a dummy image is uploaded as part of resource creation.

resource "aws_ecr_repository" "listing" {
  name                 = "my-lambda"  # ECR repository names must be lowercase
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }

  provisioner "local-exec" {
    command = <<-EOT
      docker pull alpine
      docker tag alpine dummy_container
      docker push dummy_container
    EOT
  }
}

2 Comments

With docker push dummy_container you'd push to the official Docker registry (docker.io/library/dummy_container), which leads to a denied: requested access to the resource is denied error. How did you actually handle the pushing?
Got it working with the help of this answer.

Building off @przemek-lach's answer plus @halloei's comment, I wanted to post a fully working ECR repository that gets provisioned with a dummy image:

data "aws_ecr_authorization_token" "token" {}

resource "aws_ecr_repository" "repository" {
  name                 = "lambda-${local.name}-${local.environment}"
  image_tag_mutability = "MUTABLE"
  tags = local.common_tags
  image_scanning_configuration {
    scan_on_push = true
  }
  lifecycle {
    ignore_changes = all
  }

  provisioner "local-exec" {
    # This is a 1-time execution to put a dummy image into the ECR repo, so 
    #    terraform provisioning works on the lambda function. Otherwise there is
    #    a chicken-egg scenario where the lambda can't be provisioned because no
    #    image exists in the ECR
    command     = <<EOF
      docker login ${data.aws_ecr_authorization_token.token.proxy_endpoint} -u AWS -p ${data.aws_ecr_authorization_token.token.password}
      docker pull alpine
      docker tag alpine ${aws_ecr_repository.repository.repository_url}:SOME_TAG
      docker push ${aws_ecr_repository.repository.repository_url}:SOME_TAG
      EOF
  }
}

5 Comments

how do you handle the installation of docker, which is required to run the script in the local-exec?
also, how do you deal with authentication when running the aws cli command?
2 very good callouts. If you're running this locally it isn't a problem (which I was doing when I was messing around with this). If you have a hosted terraform solution for an internal PR bot, this gets trickier. Docker would need to be installed on the machine running the terraform command (which is definitely not a given), and for AWS auth I'm not sure the best secure way to set that up. But I would guess using environment variables would be the way to go
I used an aws sts assume-role command in the provisioner to get the required AWS access. Regarding docker, I am running Terraform on a GitLab runner that has docker installed, so no issues there (I'm using the same assume-role trick to authenticate to AWS and do the docker login)
With Terraform 1.9.5 and AWS 4.48.0 I had to use self.repository_url in the local-exec provisioner to avoid a cycle error.
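To make the last comment concrete: inside a resource's own provisioner, the resource must be referenced as self rather than by its full address, otherwise Terraform reports a dependency cycle. A sketch of just the affected resource (the repository name is hypothetical; the aws_ecr_authorization_token data source is the one declared in the answer above):

```hcl
resource "aws_ecr_repository" "repository" {
  name                 = "my-lambda-repo"  # hypothetical
  image_tag_mutability = "MUTABLE"

  provisioner "local-exec" {
    # self.repository_url avoids the self-referential cycle that
    # aws_ecr_repository.repository.repository_url would create.
    command = <<-EOF
      docker login ${data.aws_ecr_authorization_token.token.proxy_endpoint} -u AWS -p ${data.aws_ecr_authorization_token.token.password}
      docker pull alpine
      docker tag alpine ${self.repository_url}:SOME_TAG
      docker push ${self.repository_url}:SOME_TAG
    EOF
  }
}
```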

This was quite the chicken-and-egg problem.

Below is a solution that avoids the Docker CLI entirely: it accomplishes a docker pull && docker push using curl against Docker's Registry API v2 spec. You will also need the aws CLI (required by fetchEcrToken; see helpers.sh) and jq.

The final terraform usage looks like this:

example.tf

locals {
  repo_name = "my-repo"
  ecr_fqdn  = replace(aws_ecr_repository.this.repository_url, "//.*$/", "")  # strip the first slash and everything after it, leaving only the registry FQDN
}

resource "aws_ecr_repository" "this" {
  name                 = local.repo_name
  image_tag_mutability = "MUTABLE"
}

module "ecr_repo_image" {
  source = "./ecr_curl"

  name               = "my-project"
  pull_ecr_is_public = true
  pull_repo_fqdn     = "public.ecr.aws"
  pull_repo_name     = "lambda/provided"
  pull_image_tag     = "al2-x86_64"
  push_ecr_is_public = false
  push_repo_fqdn     = local.ecr_fqdn
  push_repo_name     = local.repo_name
  push_image_tag     = "latest"
}

resource "terraform_data" "ecr_repo_image" {
  triggers_replace = [
    aws_ecr_repository.this.repository_url,
  ]

  provisioner "local-exec" {
    command = module.ecr_repo_image.command
  }
}
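To tie this back to the original question, the Lambda function could then be declared against the seeded repository; a sketch, assuming a hypothetical function name and IAM role resource:

```hcl
resource "aws_lambda_function" "executable" {
  function_name = "my-function"  # hypothetical
  package_type  = "Image"
  image_uri     = "${aws_ecr_repository.this.repository_url}:latest"
  role          = aws_iam_role.lambda.arn  # hypothetical role resource

  # Wait until the dummy image has been pushed; Lambda validates
  # that image_uri resolves to an existing image at create time.
  depends_on = [terraform_data.ecr_repo_image]
}
```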

Following are the dependencies required to achieve the above.

ecr_curl/outputs.tf

locals {
  pull_then_push_path = "${path.module}/pull_then_push.sh"
  helpers_path        = "${path.module}/helpers.sh"
  download_dir_path   = "${path.module}/${var.name}-image"
}

output "command" {
  value = <<-EOF
    source '${local.helpers_path}'
    PULL_CURL_AUTH_HEADER=$(IS_PUBLIC='${var.pull_ecr_is_public}' curlAuthHeader) \
      PULL_REPO_FQDN='${var.pull_repo_fqdn}' \
      PULL_REPO_NAME='${var.pull_repo_name}' \
      PULL_IMAGE_TAG='${var.pull_image_tag}' \
      PULL_DOWNLOAD_DIR_PATH='${local.download_dir_path}' \
      PUSH_CURL_AUTH_HEADER=$(IS_PUBLIC='${var.push_ecr_is_public}' curlAuthHeader) \
      PUSH_REPO_FQDN='${var.push_repo_fqdn}' \
      PUSH_REPO_NAME='${var.push_repo_name}' \
      PUSH_IMAGE_TAG='${var.push_image_tag}' \
      '${local.pull_then_push_path}'
  EOF
}

ecr_curl/variables.tf

variable "name" {
  description = "The name to uniquely identify the module's instance, e.g. my-lambda-function"
  type        = string
}

variable "pull_ecr_is_public" {
  description = "If the ECR repo we're pulling from is public (vs. private)"
  type        = bool
}

variable "pull_repo_fqdn" {
  description = "The FQDN of the ECR repo we're pulling from, e.g. public.ecr.aws"
  type        = string
}

variable "pull_repo_name" {
  description = "The name of the ECR repo we're pulling from, e.g. my-repo"
  type        = string
}

variable "pull_image_tag" {
  description = "The tag of the image we're pulling, e.g. latest"
  type        = string
}

variable "push_ecr_is_public" {
  description = "If the ECR repo we're pushing to is public (vs. private)"
  type        = bool
}

variable "push_repo_fqdn" {
  description = "The FQDN of the ECR repo we're pushing to, e.g. 012345678910.dkr.ecr.<region-name>.amazonaws.com"
  type        = string
}

variable "push_repo_name" {
  description = "The name of the ECR repo we're pushing to, e.g. my-repo"
  type        = string
}

variable "push_image_tag" {
  description = "The tag of the image we're pushing, e.g. latest"
  type        = string
}

ecr_curl/helpers.sh

fetchEcrToken() {
  # https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html#registry_auth_http
  # https://docs.aws.amazon.com/AmazonECR/latest/public/public-registries.html#registry_auth_http

  base_ecr_command='ecr'
  query='authorizationData[].authorizationToken'
  if [[ "$IS_PUBLIC" == 'true' ]]; then
    base_ecr_command='ecr-public'
    query='authorizationData.authorizationToken'
  elif [[ "$IS_PUBLIC" != 'false' ]]; then
    echo "fetchEcrToken: expected IS_PUBLIC to be 'true' or 'false' but received '$IS_PUBLIC'" >&2
    return 1
  fi

  ecr_token=$(aws "$base_ecr_command" get-authorization-token --output text --query "$query")
  if [[ -z "$ecr_token" || "$ecr_token" == 'None' ]]; then
    echo 'fetchEcrToken: Failed to fetch ecr_token' >&2
    return 1
  fi
  echo "$ecr_token"
}

curlAuthHeader() {
  token_type='Basic'
  if [[ "$IS_PUBLIC" == 'true' ]]; then
    token_type='Bearer'
  elif [[ "$IS_PUBLIC" != 'false' ]]; then
    echo "curlAuthHeader: expected IS_PUBLIC to be 'true' or 'false' but received '$IS_PUBLIC'" >&2
    return 1
  fi

  token=$(IS_PUBLIC="$IS_PUBLIC" fetchEcrToken)
  echo "Authorization: $token_type $token"
}

curlWithAuthHeader() {
  auth_params=()
  if [[ -n "$CURL_AUTH_HEADER" ]]; then
    auth_params+=('-H')
    auth_params+=("$CURL_AUTH_HEADER")
  fi
  curl "${auth_params[@]}" "$@"
}

curlStatus() {
  curl_output="$1"
  if [[ -z "$curl_output" ]]; then
    echo 'curlStatus: no $1 for curl_output' >&2
    return 1
  fi
  echo "$curl_output" | grep '^< HTTP/[0-9\.]* ' | tail -n 1 | awk '{print $3}'
}

repoUrl() {
  if [[ -z "$REPO_FQDN" ]]; then
    echo 'repoUrl: no value for REPO_FQDN' >&2
    return 1
  fi

  if [[ -z "$REPO_NAME" ]]; then
    echo 'repoUrl: no value for REPO_NAME' >&2
    return 1
  fi

  echo "https://$REPO_FQDN/v2/$REPO_NAME"
}

var_is_defined() {
  var_name="$1"
  if [[ -z "$var_name" ]]; then
    echo 'var_is_defined: no $1 for var_name' >&2
    return 1
  fi

  if set | grep -q "^$var_name="; then
    echo 'true'
  else
    echo 'false'
  fi
}

ecr_curl/pull_then_push.sh

#!/bin/bash
set -e


##
# `docker pull` from a repo and then `docker push` to another repo with just curl
# https://distribution.github.io/distribution/spec/api/
#
# Example usage:
#   source './helpers.sh'
#   PULL_CURL_AUTH_HEADER=$(IS_PUBLIC='true' curlAuthHeader) \
#     PULL_REPO_FQDN='public.ecr.aws' \
#     PULL_REPO_NAME='lambda/provided' \
#     PULL_IMAGE_TAG='al2-x86_64' \
#     PULL_DOWNLOAD_DIR_PATH='/path/to/download/image/contents' \
#     PUSH_CURL_AUTH_HEADER=$(IS_PUBLIC='false' curlAuthHeader) \
#     PUSH_REPO_FQDN='123456789012.dkr.ecr.us-east-1.amazonaws.com' \
#     PUSH_REPO_NAME='my-repo' \
#     PUSH_IMAGE_TAG='latest' \
#     ./pull_then_push.sh
##


##
# Initialize
##

SCRIPT_NAME="${0##*/}"  # https://stackoverflow.com/a/192699

dir_path() {
  # https://stackoverflow.com/a/43919044
  a="/$0"; a="${a%/*}"; a="${a:-.}"; a="${a##/}/"; BINDIR=$(cd "$a"; pwd)
  echo "$BINDIR"
}
DIR_PATH=$(dir_path)

source "$DIR_PATH/helpers.sh"


##
# Assert inputs
##

_pull_curl_auth_header_is_defined=$(var_is_defined 'PULL_CURL_AUTH_HEADER')
if [[ $_pull_curl_auth_header_is_defined == 'false' ]]; then
  echo "$SCRIPT_NAME: \$PULL_CURL_AUTH_HEADER is undefined. If no auth header required then set PULL_CURL_AUTH_HEADER=''" >&2
  exit 1
elif [[ $_pull_curl_auth_header_is_defined != 'true' ]]; then
  echo "$SCRIPT_NAME: unexpected value of '$_pull_curl_auth_header_is_defined' for _pull_curl_auth_header_is_defined" >&2
  exit 1
fi

if [[ -z "$PULL_REPO_FQDN" ]]; then
  echo "$SCRIPT_NAME: no value for \$PULL_REPO_FQDN" >&2
  exit 1
fi

if [[ -z "$PULL_REPO_NAME" ]]; then
  echo "$SCRIPT_NAME: no value for \$PULL_REPO_NAME" >&2
  exit 1
fi

if [[ -z "$PULL_IMAGE_TAG" ]]; then
  echo "$SCRIPT_NAME: no value for \$PULL_IMAGE_TAG" >&2
  exit 1
fi

if [[ -z "$PULL_DOWNLOAD_DIR_PATH" ]]; then
  echo "$SCRIPT_NAME: no value for \$PULL_DOWNLOAD_DIR_PATH" >&2
  exit 1
fi

_push_curl_auth_header_is_defined=$(var_is_defined 'PUSH_CURL_AUTH_HEADER')
if [[ $_push_curl_auth_header_is_defined == 'false' ]]; then
  echo "$SCRIPT_NAME: \$PUSH_CURL_AUTH_HEADER is undefined. If no auth header required then set PUSH_CURL_AUTH_HEADER=''" >&2
  exit 1
elif [[ $_push_curl_auth_header_is_defined != 'true' ]]; then
  echo "$SCRIPT_NAME: unexpected value of '$_push_curl_auth_header_is_defined' for _push_curl_auth_header_is_defined" >&2
  exit 1
fi

if [[ -z "$PUSH_REPO_FQDN" ]]; then
  echo "$SCRIPT_NAME: no value for \$PUSH_REPO_FQDN" >&2
  exit 1
fi

if [[ -z "$PUSH_REPO_NAME" ]]; then
  echo "$SCRIPT_NAME: no value for \$PUSH_REPO_NAME" >&2
  exit 1
fi

if [[ -z "$PUSH_IMAGE_TAG" ]]; then
  echo "$SCRIPT_NAME: no value for \$PUSH_IMAGE_TAG" >&2
  exit 1
fi


##
# Pull image
##

export CURL_AUTH_HEADER="$PULL_CURL_AUTH_HEADER"
pull_info=$(
  REPO_FQDN="$PULL_REPO_FQDN" \
  REPO_NAME="$PULL_REPO_NAME" \
  IMAGE_TAG="$PULL_IMAGE_TAG" \
  DOWNLOAD_DIR_PATH="$PULL_DOWNLOAD_DIR_PATH" \
  "$DIR_PATH/pull.sh"
)


##
# Push image
##

export CURL_AUTH_HEADER="$PUSH_CURL_AUTH_HEADER"
push_info=$(
  REPO_FQDN="$PUSH_REPO_FQDN" \
  REPO_NAME="$PUSH_REPO_NAME" \
  IMAGE_TAG="$PUSH_IMAGE_TAG" \
  LAYER_PATHS=$(find "$PULL_DOWNLOAD_DIR_PATH/layers" -type f) \
  CONFIG_PATH="$PULL_DOWNLOAD_DIR_PATH/config.json" \
  "$DIR_PATH/push.sh"
)


##
# Clean up
##

unset CURL_AUTH_HEADER
rm -rf "$PULL_DOWNLOAD_DIR_PATH"

ecr_curl/pull.sh

#!/bin/bash
set -e


##
# `docker pull` from repo with just curl
# https://distribution.github.io/distribution/spec/api/
#
# Example usage:
#   export REPO_FQDN='889087436126.dkr.ecr.us-east-1.amazonaws.com'
#   export REPO_NAME='points-ninja-invite-bot-beta'
#   export IMAGE_TAG='latest'
#   export DOWNLOAD_DIR_PATH='/path/to/download/image/contents'
#   ./pull.sh
##


##
# Initialize
##

SCRIPT_NAME="${0##*/}"  # https://stackoverflow.com/a/192699

dir_path() {
  # https://stackoverflow.com/a/43919044
  a="/$0"; a="${a%/*}"; a="${a:-.}"; a="${a##/}/"; BINDIR=$(cd "$a"; pwd)
  echo "$BINDIR"
}
DIR_PATH=$(dir_path)

source "$DIR_PATH/helpers.sh"


##
# Assert inputs
##

if [[ -z "$REPO_FQDN" ]]; then
  echo "$SCRIPT_NAME: no value for \$REPO_FQDN" >&2
  exit 1
fi

if [[ -z "$REPO_NAME" ]]; then
  echo "$SCRIPT_NAME: no value for \$REPO_NAME" >&2
  exit 1
fi

if [[ -z "$IMAGE_TAG" ]]; then
  echo "$SCRIPT_NAME: no value for \$IMAGE_TAG" >&2
  exit 1
fi

if [[ -z "$DOWNLOAD_DIR_PATH" ]]; then
  echo "$SCRIPT_NAME: no value for \$DOWNLOAD_DIR_PATH" >&2
  exit 1
fi


##
# Helper functions
##

jqFormatFile() {
  file_path="$1"
  if [[ -z "$file_path" ]]; then
    echo 'jqFormat: no $1 for file_path' >&2
    return 1
  fi
  jq_json=$(jq . "$file_path")
  echo "$jq_json" > "$file_path"
}

digestValue() {
  digest="$1"
  if [[ -z "$digest" ]]; then
    echo 'digestValue: no $1 for digest' >&2
    return 1
  fi
  echo "${digest##*:}"  # echo everything after the :
}

fetchManifest() {
  echo 'fetchManifest: starting' >&2

  if [[ -z "$REPO_URL" ]]; then
    echo 'fetchManifest: no value for $REPO_URL' >&2
    return 1
  fi

  if [[ -z "$IMAGE_TAG" ]]; then
    echo 'fetchManifest: no value for $IMAGE_TAG' >&2
    return 1
  fi

  curl_output=$(
    curlWithAuthHeader -L -v \
    "$REPO_URL/manifests/$IMAGE_TAG" \
    2>&1
  )
  status_code=$(curlStatus "$curl_output")
  if [[ "$status_code" != '200' ]]; then
    echo "fetchManifest: curl returned http status code of '$status_code'" >&2
    return 1
  fi

  line_num=$(echo "$curl_output" | grep -n '"schemaVersion": 2' | awk '{print $1}')
  line_num=$(( ${line_num%:*} - 1 ))
  echo "$curl_output" | tail -n "+$line_num"
  echo 'fetchManifest: complete' >&2
}

downloadLayer() {
  if [[ -z "$REPO_URL" ]]; then
    echo 'downloadLayer: no value for $REPO_URL' >&2
    return 1
  fi

  digest="$1"
  if [[ -z "$digest" ]]; then
    echo 'downloadLayer: no $1 for digest' >&2
    return 1
  fi

  file_path="$2"
  if [[ -z "$file_path" ]]; then
    echo 'downloadLayer: no $2 for file_path' >&2
    return 1
  fi

  url="$REPO_URL/blobs/$digest"
  echo "downloadLayer: starting downloading '$url' to '$file_path'" >&2

  curl_output=$(
    curlWithAuthHeader -L -v \
    --output "$file_path" \
    "$url" \
    2>&1
  )
  status_code=$(curlStatus "$curl_output")
  if [[ "$status_code" != '200' ]]; then
    echo "downloadLayer: curl returned http status code of '$status_code'" >&2
    return 1
  fi
  echo "downloadLayer: complete downloading '$url' to '$file_path'" >&2
}

download_image() {
  dir_path="$1"
  if [[ -z "$dir_path" ]]; then
    echo 'download_image: no $1 for dir_path' >&2
    return 1
  fi
  mkdir -p "$dir_path"
  rm -rf "$dir_path"/*.json

  manifest_json=$(fetchManifest)

  manifest_file_path="$dir_path/manifest-pull.json"
  echo "$manifest_json" > "$manifest_file_path"

  config_digest=$(echo "$manifest_json" | jq -r '.config.digest')
  if [[ -z "$config_digest" || "$config_digest" == 'null' ]]; then
    echo 'download_image: failed to parse config_digest from manifest' >&2
    return 1
  fi

  config_file_path="$dir_path/config.json"
  downloadLayer "$config_digest" "$config_file_path"
  jqFormatFile "$config_file_path"

  layers_dir_path="$dir_path/layers"
  mkdir -p "$layers_dir_path"
  rm -rf "$layers_dir_path"/*.blob

  layer_digests=$(echo "$manifest_json" | jq -r '.layers | .[] | .digest')
  echo "$layer_digests" | while read layer_digest ; do
    layer_file_path="$layers_dir_path/$(digestValue "$layer_digest").blob"
    downloadLayer "$layer_digest" "$layer_file_path"
  done

  echo "download_image: completed download to '$dir_path'" >&2
  jq -n \
    --arg manifest_file_path "$manifest_file_path" \
    --arg config_file_path "$config_file_path" \
    --arg layers_dir_path "$layers_dir_path" \
    '{manifest_file_path: $manifest_file_path, config_file_path: $config_file_path, layers_dir_path: $layers_dir_path}'
}


##
# Configure global variables
##

REPO_URL=$(REPO_FQDN="$REPO_FQDN" REPO_NAME="$REPO_NAME" repoUrl)


##
# Do the work
##

download_image "$DOWNLOAD_DIR_PATH"

ecr_curl/push.sh

#!/bin/bash
set -e


##
# `docker push` to repo with just curl
# https://distribution.github.io/distribution/spec/api/
# https://stackoverflow.com/a/59901770
#
# Example usage:
#   export REPO_FQDN='889087436126.dkr.ecr.us-east-1.amazonaws.com'
#   export REPO_NAME='points-ninja-invite-bot-beta'
#   export IMAGE_TAG='latest'
#   export LAYER_PATHS='./assets/layer.blob'
#   export CONFIG_PATH='./assets/config.json'
#   ./push.sh
##


##
# Initialize
##

SCRIPT_NAME="${0##*/}"  # https://stackoverflow.com/a/192699

dir_path() {
  # https://stackoverflow.com/a/43919044
  a="/$0"; a="${a%/*}"; a="${a:-.}"; a="${a##/}/"; BINDIR=$(cd "$a"; pwd)
  echo "$BINDIR"
}
DIR_PATH=$(dir_path)

source "$DIR_PATH/helpers.sh"


##
# Assert inputs
##

if [[ -z "$REPO_FQDN" ]]; then
  echo "$SCRIPT_NAME: no value for \$REPO_FQDN" >&2
  exit 1
fi

if [[ -z "$REPO_NAME" ]]; then
  echo "$SCRIPT_NAME: no value for \$REPO_NAME" >&2
  exit 1
fi

if [[ -z "$IMAGE_TAG" ]]; then
  echo "$SCRIPT_NAME: no value for \$IMAGE_TAG" >&2
  exit 1
fi

if [[ -z "$LAYER_PATHS" ]]; then
  echo "$SCRIPT_NAME: no value for \$LAYER_PATHS" >&2
  exit 1
fi

if [[ -z "$CONFIG_PATH" ]]; then
  echo "$SCRIPT_NAME: no value for \$CONFIG_PATH" >&2
  exit 1
fi

echo "$LAYER_PATHS" | while read layer_path ; do
  if [[ ! -f "$layer_path" ]]; then
    echo "$SCRIPT_NAME: no file exists at layer_path of '$layer_path'" >&2
    exit 1
  fi
done

if [[ ! -f "$CONFIG_PATH" ]]; then
  echo "$SCRIPT_NAME: no file exists at CONFIG_PATH='$CONFIG_PATH'" >&2
  exit 1
fi


##
# Helper functions
##

fileSize() {
  file_path="$1"
  if [[ -z "$file_path" ]]; then
    echo 'fileSize: no $1 for file_path' >&2
    return 1
  fi
  echo $(( `wc -c < "$file_path"` ))
}

fileDigest() {
  file_path="$1"
  if [[ -z "$file_path" ]]; then
    echo 'fileDigest: no $1 for file_path' >&2
    return 1
  fi
  echo "sha256:$(sha256sum "$file_path" | cut -d ' ' -f1)"
}

initiateUpload() {
  if [[ -z "$REPO_URL" ]]; then
    echo 'initiateUpload: no value for $REPO_URL' >&2
    return 1
  fi

  echo 'initiateUpload: starting' >&2

  curl_output=$(
    curlWithAuthHeader -X POST -siL -v \
    -H "Connection: close" \
    "$REPO_URL/blobs/uploads" \
    2>&1
  )
  status_code=$(curlStatus "$curl_output")
  if [[ "$status_code" != '202' ]]; then
    echo "initiateUpload: curl returned http status code of '$status_code'" >&2
    return 1
  fi

  location=$(
    echo "$curl_output" \
    | grep Location \
    | sed '2q;d' \
    | cut -d: -f2- \
    | tr -d ' ' \
    | tr -d '\r'
  )
  if [[ -z "$location" ]]; then
    echo 'initiateUpload: Failed to parse Location from curl_output' >&2
    return 1
  fi

  echo 'initiateUpload: complete' >&2
  echo "$location"
}

patchLayer() {
  echo 'patchLayer: starting' >&2

  if [[ -z "$file_path" ]]; then
    echo 'patchLayer: no value for $file_path' >&2
    return 1
  fi

  if [[ -z "$file_size" ]]; then
    echo 'patchLayer: no value for $file_size' >&2
    return 1
  fi

  if [[ -z "$target_location" ]]; then
    echo 'patchLayer: no value for $target_location' >&2
    return 1
  fi

  curl_output=$(
    curlWithAuthHeader -X PATCH -v \
    -H "Content-Type: application/octet-stream" \
    -H "Content-Length: $file_size" \
    -H "Connection: close" \
    --data-binary @"$file_path" \
    $target_location \
    2>&1
  )
  status_code=$(curlStatus "$curl_output")
  if [[ "$status_code" != '201' ]]; then
    echo "patchLayer: curl returned http status code of '$status_code'" >&2
    return 1
  fi

  location=$(
    echo "$curl_output" \
    | grep 'Location' \
    | cut -d: -f2- \
    | tr -d ' ' \
    | tr -d '\r'
  )
  if [[ -z "$location" ]]; then
    echo 'patchLayer: Failed to parse location from curl_output' >&2
    return 1
  fi

  echo 'patchLayer: complete' >&2
  echo "$location"
}

putLayer() {
  echo 'putLayer: starting' >&2

  if [[ -z "$file_digest" ]]; then
    echo 'putLayer: no value for $file_digest' >&2
    return 1
  fi

  if [[ -z "$target_location" ]]; then
    echo 'putLayer: no value for $target_location' >&2
    return 1
  fi

  curl_output=$(
    curlWithAuthHeader -X PUT -v \
    -H "Content-Type: application/octet-stream" \
    -H "Content-Length: 0" \
    -H "Connection: close" \
    "$target_location?digest=$file_digest" \
    2>&1
  )
  status_code=$(curlStatus "$curl_output")
  if [[ "$status_code" != '201' ]]; then
    echo "putLayer: curl returned http status code of '$status_code'" >&2
    return 1
  fi

  echo 'putLayer: complete' >&2
}

uploadManifest() {
  if [[ -z "$REPO_URL" ]]; then
    echo 'uploadManifest: no value for $REPO_URL' >&2
    return 1
  fi

  if [[ -z "$IMAGE_TAG" ]]; then
    echo 'uploadManifest: no value for $IMAGE_TAG' >&2
    return 1
  fi

  manifest_path="$1"
  if [[ -z "$manifest_path" ]]; then
    echo 'uploadManifest: no $1 for manifest_path' >&2
    return 1
  fi

  echo "uploadManifest: starting '$manifest_path'" >&2
  size=$((`fileSize "$manifest_path"` - 1))

  curl_output=$(
    curlWithAuthHeader -X PUT -vvv \
    -H "Content-Type: application/vnd.docker.distribution.manifest.v2+json" \
    -H "Content-Length: $size" \
    -H "Connection: close" \
    -d "$(cat "$manifest_path")" \
    "$REPO_URL/manifests/$IMAGE_TAG" \
    2>&1
  )
  status_code=$(curlStatus "$curl_output")
  if [[ "$status_code" != '201' ]]; then
    echo "uploadManifest: curl returned http status code of '$status_code'" >&2
    return 1
  fi

  echo "uploadManifest: complete '$manifest_path'" >&2
}

uploadImage() {
  if [[ -z "$CONFIG_PATH" ]]; then
    echo 'uploadImage: no value for $CONFIG_PATH' >&2
    return 1
  fi

  if [[ -z "$LAYER_PATHS" ]]; then
    echo 'uploadImage: no value for $LAYER_PATHS' >&2
    return 1
  fi

  echo "uploadImage: starting config at '$CONFIG_PATH'" >&2
  config_size=$(fileSize "$CONFIG_PATH")
  config_digest=$(fileDigest "$CONFIG_PATH")
  patch_location=$(initiateUpload)
  put_location=$(
    file_path="$CONFIG_PATH" \
    file_size="$config_size" \
    target_location="$patch_location" \
    patchLayer
  )
  file_digest="$config_digest" target_location="$put_location" putLayer
  echo "uploadImage: complete config at '$CONFIG_PATH'" >&2

  echo 'uploadImage: starting layers' >&2
  layers_json='[]'
  while read layer_path ; do
    echo "uploadImage: starting layer at '$layer_path'" >&2
    layer_size=$(fileSize "$layer_path")
    layer_digest=$(fileDigest "$layer_path")
    patch_location=$(initiateUpload)
    put_location=$(
      file_path="$layer_path" \
      file_size="$layer_size" \
      target_location="$patch_location" \
      patchLayer
    )
    file_digest="$layer_digest" target_location="$put_location" putLayer
    layers_json=$(echo "$layers_json" | jq "$(cat <<-EOF
      . += [{
        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
        "size": $layer_size,
        "digest": "$layer_digest"
      }]
        EOF
    )")
    echo "uploadImage: complete layer at '$layer_path'" >&2
  done <<< "$LAYER_PATHS"
  echo 'uploadImage: complete layers' >&2

  echo 'uploadImage: starting manifest' >&2
  manifest_content="$(cat <<-EOF
    {
      "schemaVersion": 2,
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "config": {
        "mediaType": "application/vnd.docker.container.image.v1+json",
        "size": $config_size,
        "digest": "$config_digest"
      },
      "layers": $layers_json
    }
    EOF
  )"
  manifest_path="$(dirname "$CONFIG_PATH")/manifest-push.json"
  echo "$manifest_content" | jq . > "$manifest_path"
  uploadManifest "$manifest_path"
  echo 'uploadImage: complete manifest' >&2
}


##
# Configure global variables
##

REPO_URL=$(REPO_FQDN="$REPO_FQDN" REPO_NAME="$REPO_NAME" repoUrl)


##
# Do the work
##

uploadImage

2 Comments

Thanks! I fixed a couple of problems, removed the aws-cli dependency, simplified the code, and released this as a Terraform module.
@STerliakov thank you for packaging this
