If you've arrived here, I assume you already know Nostr. If you haven't heard of it, Nostr stands for "Notes and Other Stuff Transmitted by Relays" and is an open protocol for censorship-resistant global networks created by @fiatjaf. Like HTTP or TCP/IP, Nostr is a protocol, an open standard upon which anyone can build. Each Nostr account is based on a public/private key pair. A simple way to think about this is that your public key is your username and your private key is your password. After you create your key pair and register with a client, you might want your username (the public key) verified, showing the Nostr community that you're a real user, just like on Twitter. For example, check out mine on nostr.directory/ :

The verification process on Nostr is documented in a Nostr Implementation Possibilities (NIP) document called NIP-05. NIP-05 lets a Nostr user map their public key to a DNS-based internet identifier. Read more details here.
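Concretely, NIP-05 specifies that for an identifier like bob@example.com, a GET request to https://example.com/.well-known/nostr.json?name=bob should return a JSON document mapping the name to a hex-encoded public key, along these lines (the key below is a placeholder):

{
  "names": {
    "bob": "<bob's hex-encoded public key>"
  }
}

This is exactly the response the service built in this article will produce.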

In this article, I will show you my experiment with using AWS CloudFront, API Gateway, Lambda and DynamoDB to build a NIP-05 identity service, then using Terraform to manage and deploy the whole service stack. If you would like to use other free or paid services instead, see here or here. By the way, I didn't use this approach to get my own public key verified; I used a static nostr.json file served by my blog website. Read my Hugo blog series to find out how. The diagram below shows the high-level flow across the AWS services:

Introduction

Terraform

HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.

AWS Services

tfenv

tfenv is a Terraform version manager that helps you install and switch between specific versions of Terraform.
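For example, to pin the Terraform version used in this article (assuming tfenv is already on your PATH):

tfenv install 1.3.6
tfenv use 1.3.6
terraform version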

Build

Prerequisites

The tools you will need to complete the build:

  • AWS Console
  • AWS Command Line Interface
  • Terraform by HashiCorp
  • tfenv (optional)
  • A purchased domain (optional)

Instructions

  1. First, if you don't have an AWS account, you can follow this instruction to set up an AWS account and create an administrator user. There are two options for creating the user, IAM or IAM Identity Center; I chose IAM Identity Center. Note that this is different from the root user created during AWS account sign-up. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to an administrative user, and use the root user only for tasks that require root user access.
  2. Then, follow this guide to install the AWS CLI. The AWS Command Line Interface (AWS CLI) is an open-source tool that lets you interact with AWS services using commands in your command-line shell. The Terraform configuration later in this article uses a named profile called terraform, so configure credentials for it, e.g. as shown below.
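A sketch of setting up that profile with long-lived IAM access keys (if you use IAM Identity Center instead, aws configure sso walks you through an equivalent setup; the profile name terraform just needs to match the provider block in main.tf):

# prompts for access key ID, secret access key, region and output format
aws configure --profile terraform
# verify the profile works
aws sts get-caller-identity --profile terraform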
  3. Next, either install Terraform directly following this guide or use tfenv to install it. My Terraform version is v1.3.6.
  4. For this experiment, I simply use the Terraform local backend. A backend defines where Terraform stores its state data files. By default, Terraform uses a backend called local, which stores state as a local file on disk. You can also set up an AWS S3 backend if you prefer; a minimal sketch is shown below.
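A minimal S3 backend sketch, assuming a pre-created state bucket (the bucket name here is hypothetical):

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # hypothetical; must already exist
    key    = "nostr-nip05/terraform.tfstate"
    region = "us-east-1"
  }
}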
  5. Here is my project structure 📂
.
├── LICENSE
├── package.json
├── package-lock.json
├── README.md
├── src
│   ├── app.ts
│   ├── dynamodb.ts
│   └── logging.ts
├── terraform
│   ├── apigateway.tf
│   ├── cloudfront.tf
│   ├── main.tf
│   ├── outputs.tf
│   ├── terraform.tfstate
│   └── terraform.tfstate.backup
└── tsconfig.json   
  6. Now let's write our Lambda first. I'm using TypeScript for it:
    📃 app.ts:
import {APIGatewayProxyEvent, Context} from "aws-lambda"
import {logger} from './logging'
import {getPubKey, PubKeyItem} from "./dynamodb";

type AuthoriseResponse = {
    isAuthorized: boolean
}

// Lambda authorizer: only allow requests carrying the secret header injected by CloudFront.
export const authorizer = async (event: APIGatewayProxyEvent): Promise<AuthoriseResponse> => {
    if (event.headers["x-origin-verify"] === process.env["secret_token"]) {
        return {isAuthorized: true}
    }
    return {isAuthorized: false}
}

// Main handler: look up the requested name in DynamoDB and return the NIP-05 response.
export const handler = async (event: APIGatewayProxyEvent, context: Context) => {
    logger.defaultMeta = {requestId: context.awsRequestId}
    logger.info('Event:', event)

    let responseMessage: any = {};

    if (event.queryStringParameters && event.queryStringParameters['name']) {
        let name = event.queryStringParameters['name'];
        let keyItem: PubKeyItem | undefined = await getPubKey(name)
        responseMessage[name] = keyItem?.payload?.pubkey
    }

    return {
        statusCode: 200,
        headers: {
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({
            names: responseMessage,
        })
    }
}

The file exports two functions: authorizer, which backs the API Gateway Lambda authorizer, and handler, the main function that looks up public keys in DynamoDB.
The authorizer checks that the special x-origin-verify header exists and equals the configured secret token, so that only requests coming through AWS CloudFront are permitted and requests hitting the API Gateway directly are blocked. Read this AWS article for the idea behind it.
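For reference, the relevant parts of the API Gateway HTTP API event (payload format 2.0) that the handler receives look roughly like this (trimmed; the values are illustrative):

{
  "version": "2.0",
  "rawPath": "/.well-known/nostr.json",
  "headers": { "x-origin-verify": "<secret token injected by CloudFront>" },
  "queryStringParameters": { "name": "bob" }
}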
📃 dynamodb.ts:

import {DynamoDBClient, GetItemCommand, GetItemCommandInput} from "@aws-sdk/client-dynamodb"
import {marshall, unmarshall} from "@aws-sdk/util-dynamodb"
import {logger} from "./logging";

export type PubKeyItem = {
    name: string
    payload?: PubKey
}

export type PubKey = {
    pubkey?: string
    updated?: number
}

const pubkeyTableName = 'nostr-pub-keys'
// Allow overriding the endpoint (e.g. DynamoDB Local) via the dynamodb_url environment variable.
const client = new DynamoDBClient(Boolean(process.env.dynamodb_url) ? {endpoint: process.env.dynamodb_url} : {})

// Fetch the item for the given name; returns undefined if the item is missing or the call fails.
export const getPubKey = async (name: string): Promise<PubKeyItem | undefined> => {
    const params: GetItemCommandInput = {
        TableName: pubkeyTableName,
        Key: marshall({
            "name": name
        }),
    };
    try {
        const result = await client.send(new GetItemCommand(params));
        if (result && result.Item) {
            return (<PubKeyItem>unmarshall(result.Item))
        }
    } catch (error) {
        logger.error('DynamoDB GetItemCommand error', error)
    }
}

The constant pubkeyTableName defines the DynamoDB table name, which must match the table name used in the Terraform configuration.
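Because the client honours the dynamodb_url environment variable, you can test the lookup locally against DynamoDB Local before deploying. A sketch, assuming Docker and the AWS CLI are installed (the table definition mirrors the Terraform resource below):

# start DynamoDB Local
docker run -d -p 8000:8000 amazon/dynamodb-local
# create the table locally
aws dynamodb create-table --endpoint-url http://localhost:8000 \
  --table-name nostr-pub-keys \
  --attribute-definitions AttributeName=name,AttributeType=S \
  --key-schema AttributeName=name,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
# point the code at it
export dynamodb_url=http://localhost:8000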
  7. Now for the Terraform configuration files; let's look at them one by one.
📃 main.tf:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.2.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "terraform"
}

resource "random_string" "random_token" {
  length           = 256
  special          = true
  override_special = "_-"
}

resource "random_id" "random_path" {
  byte_length = 16
}

resource "aws_dynamodb_table" "pub_keys" {
  name           = "nostr-pub-keys"
  billing_mode   = "PROVISIONED"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "name"

  attribute {
    name = "name"
    type = "S"
  }
}

# TS Lambda and permission
data "aws_iam_policy_document" "lambda_exec_role_policy" {
  statement {
    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]
    resources = [
      "arn:aws:logs:*:*:*"
    ]
  }

  statement {
    actions = [
      "dynamodb:BatchGetItem",
      "dynamodb:GetItem",
      "dynamodb:Query",
      "dynamodb:Scan",
      "dynamodb:BatchWriteItem",
      "dynamodb:PutItem",
      "dynamodb:UpdateItem"
    ]
    resources = [
      aws_dynamodb_table.pub_keys.arn,
    ]
  }
}

resource "aws_iam_role" "lambda_exec" {
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "lambda_exec_role" {
  role   = aws_iam_role.lambda_exec.id
  policy = data.aws_iam_policy_document.lambda_exec_role_policy.json
}

data "external" "build" {
  program = [
    "bash", "-c", <<EOT
(npm run prezip) >&2 && echo "{\"dest\": \"package\"}"
EOT
  ]
  working_dir = "${path.module}/../"
}

data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "${data.external.build.working_dir}/${data.external.build.result.dest}"
  output_path = "${path.module}/nostr-nip05.zip"
}

# App Lambda
resource "aws_lambda_function" "app_lambda" {
  function_name = "nostr-nip05-app"

  filename         = data.archive_file.lambda_zip.output_path
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  environment {
    variables = {
      log_level = "info"
    }
  }
  timeout = 30
  handler = "app.handler"
  runtime = "nodejs14.x"
  role    = aws_iam_role.lambda_exec.arn
}

resource "aws_cloudwatch_log_group" "app_log" {
  name              = "/aws/lambda/${aws_lambda_function.app_lambda.function_name}"
  retention_in_days = 14
}

# Auth Lambda
resource "aws_lambda_function" "auth_lambda" {
  function_name = "nostr-nip05-auth"

  filename         = data.archive_file.lambda_zip.output_path
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  environment {
    variables = {
      secret_token = random_string.random_token.result
    }
  }
  timeout = 30
  handler = "app.authorizer"
  runtime = "nodejs14.x"
  role    = aws_iam_role.lambda_exec.arn
}

resource "aws_cloudwatch_log_group" "auth_log" {
  name              = "/aws/lambda/${aws_lambda_function.auth_lambda.function_name}"
  retention_in_days = 14
}

In the provider block, I'm using my pre-defined AWS profile named terraform; replace it with your own.
The aws_dynamodb_table resource defines the DynamoDB table; its name must be the same as the one used in the dynamodb.ts file.
The external and archive_file data sources execute an external command line call that runs npm to compile the TypeScript, copy all the necessary files, and generate a zip file for the Lambdas. My package.json looks like below:

"scripts": {  
  "clean": "rimraf dist && rimraf package",  
  "mkdirs": "mkdir dist && mkdir package",  
  "copy:js": "cp dist/*.js* package/",  
  "copy:node-modules": "cp -r node_modules package/",  
  "copy": "npm run copy:js && npm run copy:node-modules",  
  "compile": "tsc",  
  "reinstall": "rimraf node_modules && npm install",  
  "build": "npm run compile && npm run copy",  
  "rebuild": "npm run reinstall && npm run build",  
  "prezip": "npm run clean && npm run mkdirs && npm run rebuild"  
}

All the final js files are copied to the package folder before zipping.
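The tsconfig.json itself isn't reproduced in this article; a minimal configuration compatible with these scripts (in particular, compiling src/ into the dist/ folder that copy:js reads from) might look like this sketch:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "outDir": "dist",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src"]
}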
📃 apigateway.tf

# API Gateway

resource "aws_apigatewayv2_api" "api" {
  name          = "nostr-nip05-apigw"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "api" {
  api_id      = aws_apigatewayv2_api.api.id
  name        = "$default"
  auto_deploy = true
}

resource "aws_apigatewayv2_integration" "api" {
  api_id                 = aws_apigatewayv2_api.api.id
  integration_uri        = aws_lambda_function.app_lambda.invoke_arn
  integration_type       = "AWS_PROXY"
  integration_method     = "POST"
  payload_format_version = "2.0"
}

resource "aws_apigatewayv2_authorizer" "authorizer" {
  name                              = "nostr-nip05-authorizer"
  api_id                            = aws_apigatewayv2_api.api.id
  authorizer_payload_format_version = "2.0"
  authorizer_result_ttl_in_seconds  = 0
  authorizer_type                   = "REQUEST"
  authorizer_uri                    = aws_lambda_function.auth_lambda.invoke_arn
  enable_simple_responses           = true
  identity_sources                  = ["$request.header.x-origin-verify"]
}

resource "aws_apigatewayv2_route" "api" {
  api_id             = aws_apigatewayv2_api.api.id
#  route_key          = "ANY /${random_id.random_path.hex}/{proxy+}"
  route_key          = "GET /${random_id.random_path.hex}/.well-known/nostr.json"
  target             = "integrations/${aws_apigatewayv2_integration.api.id}"
  authorization_type = "CUSTOM"
  authorizer_id      = aws_apigatewayv2_authorizer.authorizer.id
}

resource "aws_lambda_permission" "apigw" {
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.app_lambda.arn
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.api.execution_arn}/*/*"
}

resource "aws_lambda_permission" "auth" {
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.auth_lambda.arn
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.api.execution_arn}/authorizers/${aws_apigatewayv2_authorizer.authorizer.id}"
}

Through the authorizer's identity_sources setting, API Gateway checks whether the $request.header.x-origin-verify header exists; if it doesn't, API Gateway returns a 401 Unauthorized response without even calling the Lambda function.
The route_key in aws_apigatewayv2_route specifies the API route. Swap the active route_key line for the commented-out ANY /{proxy+} variant if you want to support any route.
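You can verify this behaviour by calling the API Gateway URL directly, bypassing CloudFront so that the secret header is missing (the API ID and random path below are placeholders from your own outputs):

curl "https://<api-id>.execute-api.us-east-1.amazonaws.com/<random-path-hex>/.well-known/nostr.json?name=bob"
# expected response (HTTP 401):
# {"message":"Unauthorized"}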
📃 cloudfront.tf

# Cloudfront

resource "aws_cloudfront_distribution" "api-cf" {
  origin {
    domain_name = replace(aws_apigatewayv2_stage.api.invoke_url, "/^https?://([^/]*).*/", "$1")
    origin_id   = "apigw"
    origin_path = "/${random_id.random_path.hex}"

    custom_header {
      name  = "x-origin-verify"
      value = random_string.random_token.result
    }

    custom_origin_config {
      https_port             = 443
      http_port              = 80
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  wait_for_deployment = false

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "apigw"

    forwarded_values {
      query_string = true
      cookies {
        forward = "all"
      }
    }
    viewer_protocol_policy = "redirect-to-https"
    default_ttl            = 0
    min_ttl                = 0
    max_ttl                = 0

    function_association {
      event_type   = "viewer-response"
      function_arn = aws_cloudfront_function.viewer_response.arn
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

resource "aws_cloudfront_function" "viewer_response" {
  name    = "nostr-nip05-viewer-response"
  runtime = "cloudfront-js-1.0"
  publish = true
  code    = <<EOT
function handler(event) {
  var response = event.response;
  var headers = response.headers;

  // If the Access-Control-Allow-Origin CORS header is missing, add it.
  // JavaScript doesn't allow hyphens in identifiers, so we use the dict["key"] notation.
  if (!headers['access-control-allow-origin']) {
    headers['access-control-allow-origin'] = {value: "*"};
    console.log("Access-Control-Allow-Origin was missing, adding it now.");
  }

  return response;
}
EOT
}

In the origin's custom_header block, we inject the generated random token as the secret value of the x-origin-verify header on every request CloudFront forwards to the origin.
In the function_association block, we attach a CloudFront Function on the viewer-response event to add the access-control-allow-origin header to the response. See this for the explanation.
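Once deployed, you can confirm the header is being added (HEAD is in the cache behaviour's allowed methods, so -I works; the domain is a placeholder for your own distribution):

curl -sI "https://<your-cloudfront-domain>/.well-known/nostr.json?name=bob" | grep -i access-control-allow-origin
# access-control-allow-origin: *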
📃 outputs.tf

# Output value definitions

output "lambda_name" {
  description = "Name of the Lambda function."
  value       = aws_lambda_function.app_lambda.function_name
}

output "authorizer_name" {
  description = "Name of the Authorizer function."
  value       = aws_lambda_function.auth_lambda.function_name
}

output "gateway_url" {
  description = "Base URL for API Gateway stage."
  value       = aws_apigatewayv2_stage.api.invoke_url
}

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.api-cf.domain_name
}
  8. Finally, go to the terraform folder and execute the terraform init command. If the init command completes successfully, you can execute terraform plan to check your settings and terraform apply to build and deploy the whole service stack onto AWS. The outputs will look like:
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

authorizer_name = "nostr-nip05-auth"
cloudfront_domain = "djqrav07u3gut.cloudfront.net"
gateway_url = "https://6nukwnp3td.execute-api.us-east-1.amazonaws.com/"
lambda_name = "nostr-nip05-app"
  9. From the outputs, copy the CloudFront domain, then run curl https://<your-cloudfront-domain>/.well-known/nostr.json?name=<your username>. Deploying CloudFront takes some time; if you get the error Could not resolve host: <your-cloudfront-domain>, go to the AWS console, check your CloudFront distributions, and see whether the newly deployed distribution's status is Enabled. Once it is, try the curl command again and you should get {"names":{}} back.
  10. Manually add an item to the DynamoDB table using the AWS console:
    Use the JSON view format to add a test item:
{
  "name": {
    "S": "<your username>"
  },
  "payload": {
    "M": {
      "pubkey": {
        "S": "<your pubkey>"
      },
      "updated": {
        "N": "1678604680491"
      }
    }
  }
}

Then you should see {"names":{"<your username>":"<your pubkey>"}} as the curl command response.
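If you prefer the command line to the console, the same item can be written with the AWS CLI (assuming item.json contains exactly the JSON document above):

aws dynamodb put-item --table-name nostr-pub-keys \
  --item file://item.json \
  --profile terraform --region us-east-1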

That's it 🏁 The two things left to be done are to add an AWS Route 53 domain associated with the CloudFront distribution, and a Lambda function to add a pubkey for a user to the DynamoDB table. I'll leave these for the readers to figure out… 😏

References