
Asher Sterkin

Exploring Cloud Hexagonal Design with Winglang, TypeScript, and Ports & Adapters

As I argued elsewhere, automatically generating cloud infrastructure specifications directly from application code represents “The Next Logical Step in Cloud Automation.” This approach, sometimes referred to as “Infrastructure From Code” (IfC), aims to:

Ensure automatic coordination of four types of interactions with cloud services: life cycle management, pre- and post-configuration, consumption, and operation, while making pragmatic choices of the most appropriate levels of API abstraction for each cloud service and leaving enough control to the end-user for choosing the most suitable vendor, based on personal preferences, regulations or brownfield deployment constraints

While analyzing the IfC Technology Landscape a year ago, I identified five attributes essential for analyzing major offerings in this space:

  • Programming Language — is the IfC product based on an existing mainstream programming language (or languages), or does it embark on developing a new one?
  • Runtime Environment — does it use an existing runtime environment (e.g., NodeJS)?
  • API — is it proprietary or some form of standard/open source? Cloud-specific or cloud-agnostic?
  • IDE — does it assume a proprietary, presumably cloud-based, Integrated Development Environment, or can it be integrated with one or more existing IDEs?
  • Deployment — does it assume deployment of applications/services to its own cloud account, or can the produced artifacts be deployed to the customer’s own cloud account?

At that time, Winglang appeared on my radar as a brand-new cloud programming-oriented language running atop the NodeJS runtime. It comes with an optional plugin for VSCode, its own console, and fully supports cloud self-hosting via popular cloud orchestration engines such as Terraform and AWS CDK.

Today, I want to explore how well Winglang is suited for supporting the Clean Architecture style, based on the Hexagonal Ports and Adapters pattern. Additionally, I’m interested in how easily Winglang can be integrated with TypeScript, a representative of mainstream programming languages that can be compiled into JavaScript and run atop the NodeJS runtime engine.

Disclaimer

This publication is a technology research report. While it could potentially be converted into a tutorial, it currently does not serve as one. The code snippets in Winglang are intended to be self-explanatory. The language syntax falls within the common Algol-60 family and is, in most cases, straightforward to understand. In instances of uncertainty, please consult the Winglang Language Reference, Library, and Examples. For introductory materials, refer to the References.

Acknowledgements

Many thanks to Elad Ben-Israel, Shai Ber, and Nathan Tarbert for the valuable feedback on the early draft of this paper.

Table of Contents

  1. Disclaimer
  2. Acknowledgements
  3. Part One: Creating the Core
    3.1 Step Zero: “Hello, Winglang!” Preflight
    3.2 Step One: “Hello, Winglang!” Inflight
    3.3 Step Two: Generalizing Functionality by Accepting the name Argument
    3.4 Deciding if the Hexagon Approach is Right for You
  4. Part Two: Encapsulating the Core within Hexagon
    4.1 Step Four: Extracting Core
    4.2 Step Five: Extracting the makeGreeting(name) Request Handler
    4.3 Step Six: Connecting the Handler via Cloud Function Port
    4.4 Step Seven: Reimplementing the Core in TypeScript
    4.5 Step Eight: Implementing the REST API Port
    4.6 Step Nine: Extracting the REST API Request Adapter
    4.7 Step Ten: Testing the REST API Request Adapter
    4.8 Step Eleven: Extracting the GreetingService
    4.9 Step Twelve: Enhancing REST API Request Adapter for Content Negotiation
  5. References
    5.1 Winglang Publications
    5.2 My Publications on “Infrastructure From Code”
    5.3 Hexagonal Architecture

Part One: Creating the Core

Step Zero: “Hello, Winglang!” Preflight

Creating the simplest possible “Hello, World!” application is a crucial, yet often overlooked, validation step in new software technology. Although such an application lacks practical utility, it reveals the general accessibility of the technology to newcomers. As a marketing wit once told me, “We have only one chance to make a first impression.” So, let’s begin with a straightforward one-liner in Winglang.

About Winglang: Winglang is an innovative cloud-oriented programming language designed to simplify cloud application development. It integrates seamlessly with cloud services, offering a unique approach to building and deploying applications directly in the cloud environment. This makes Winglang an intriguing option for developers looking to leverage cloud capabilities more effectively.

Installing Winglang is straightforward, assuming you already have npm and terraform installed and configured on your computer. As a technology researcher, I primarily work with remote desktops. Therefore, I won’t delve into the details of preparing your workstation here. My personal setup, once stabilized, will be shared in a separate publication.

My first step is to create a one-line application that prints the sentence “Hello, Winglang!” In Winglang, this can indeed be done in a single line:

log("Hello, Winglang!");

However, to execute this one line of code, we need to compile it by typing wing compile:

Image1

Winglang adopts an intriguing approach by distinctly separating the phases of programmatic definition of cloud resources during compilation and their use during runtime. This is articulated in Winglang as Preflight and Inflight execution phases.

Simply put, the Preflight phase occurs when application code is compiled into a target orchestration engine template, such as a local simulator or Terraform, while the Inflight phase is when the application code executes within a Cloud Function or Container.

The ability to use the same syntax for programming the compilation phase and even print logs is quite a unique feature. For comparison, consider the ability to use the same syntax for programming “C” macros or C++ templates to print debugging logs of the compilation phase, just as you would program the runtime phase.
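By way of analogy only (this is not how Winglang is implemented), the two phases can be sketched in TypeScript as an ordinary build-time function that produces a deployment spec, plus a runtime handler executed per invocation; all names here are illustrative:

```typescript
// Build-time ("preflight") phase: runs once at compilation/deployment time,
// producing something like an orchestration template.
type FunctionSpec = { name: string; timeoutSec: number };

function preflight(): FunctionSpec {
  console.log("Hello from the compilation phase!"); // analogous to a preflight log()
  return { name: "helloWorld", timeoutSec: 30 };    // would become e.g. a Terraform resource
}

// Runtime ("inflight") phase: runs on every invocation in the cloud.
function inflightHandler(_event: string): string {
  return "Hello, Winglang!";
}

const spec = preflight();          // happens once, at "compile time"
const reply = inflightHandler(""); // happens per request, at runtime
```

The point of the analogy is that both phases are written in the same language and can both emit logs, which is exactly what makes the Winglang approach unusual.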

Step One: “Hello, Winglang!” Inflight

Now, I aim to create the simplest possible application that prints the sentence “Hello, Winglang!” during runtime, that is during the Inflight phase. In Winglang, accomplishing this requires just a couple of lines, similar to what you’d expect in any mainstream programming language:

bring cloud;

log("Hello, Winglang, Preflight!");

let helloWorld = new cloud.Function(inflight (event: str) => {
  log("Hello, Winglang!");
});

By typing wing it in the VSCode Terminal, you can bring up the Winglang simulator (I prefer the preview in the editor). Click on cloud.Function, then on Invoke, and you will see the following:

Image2

This is pretty cool and Winglang definitely passes the initial smoke test.

Step Two: Generalizing Functionality by Accepting the name Argument

To move beyond simply printing static text, we’re going to slightly modify our initial function to return the greeting “Hello, <name>!”, where <name> is the function’s argument. The updated code, along with the simulator’s output, will look something like this:

Image3
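Since the updated code appears only in the screenshot, here is a TypeScript sketch of the equivalent logic (in Winglang it is wrapped in an inflight closure passed to cloud.Function):

```typescript
// TypeScript sketch of the generalized function shown in the screenshot:
// the static text is replaced by an interpolated argument.
function makeGreeting(name: string): string {
  return `Hello, ${name}!`;
}
```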

Keep in mind, there’s no need to close the simulator. Simply edit the file, hit CTRL+S to save, and the simulator will automatically load the new version.

In today’s world, a system without test automation support hardly has a right to exist. Let’s add some tests to our simple function (now renamed to makeGreeting):

Image4

Again, there’s no need to close the simulator. The entire process is interactive and flows quite smoothly.

You can also run the tests via the command line in the VSCode Terminal:

Image5

The same test can also be run automatically in the cloud by typing, for example, wing test -t tf-aws. Additionally, the same code can be deployed on a target cloud.

Cloud neutrality support in Winglang is an important and fascinating topic, which will be covered in more detail in the Step Four: Extracting Core section below.

Deciding if the Hexagon Approach is Right for You

If all you need is to develop simple Transaction Scripts that:

  • Are triggered by an event happening to a cloud resource, e.g., REST API Gateway.
  • Optionally retrieve data from another Cloud Resource, like a Blob Storage Bucket.
  • Perform some very simple calculations.
  • Optionally send data to another Cloud Resource, such as a Blob Storage Bucket.
  • Can ideally be written once and require minimal maintenance.

Then you may choose to stop here. Explore Winglang Examples to see what can be achieved today, and visit Winglang Issues for insights on current limitations and future plans. However, if you’re interested in exploring how Winglang supports complex software architectures with potentially intricate computational logic and long-term support requirements, you are welcome to proceed to Part Two of this publication.

Part Two: Encapsulating the Core within Hexagon

Hexagonal Architecture, introduced by Alistair Cockburn in 2005, represented a significant shift in the way software applications were structured. Also known as the Ports and Adapters pattern, this architectural style was designed to create a clear separation between an application’s core logic and its external components. It enables applications to be equally driven by users, programs, automated tests, or batch scripts, and allows for development and testing in isolation from runtime devices and databases. By organizing interactions through ‘ports’ and ‘adapters’, the architecture ensures that the application remains agnostic to the nature of external technologies and interfaces. This approach not only prevented the infiltration of business logic into user interface code but also enhanced the flexibility and maintainability of software, making it adaptable to various environments and technologies.
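Before turning to Winglang, the roles the pattern defines can be made concrete with a minimal TypeScript sketch (all names are illustrative, not taken from the pattern literature):

```typescript
// Core: pure application logic, with no external dependencies.
function makeGreeting(name: string): string {
  return `Hello, ${name}!`;
}

// Port: a technology-agnostic interface through which the core is driven.
interface GreetingPort {
  greet(name: string): string;
}

// Adapter: binds the port to a concrete delivery mechanism.
// A real adapter would parse an HTTP request and format a response here.
class HttpAdapter implements GreetingPort {
  greet(name: string): string {
    return makeGreeting(name);
  }
}

const port: GreetingPort = new HttpAdapter();
const answer = port.greet("World");
```

Because the core never references the adapter, it can be exercised by tests, batch scripts, or any other driver without change, which is the essence of the pattern.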

While I believe that Alistair Cockburn, like many other practitioners, may have misinterpreted the original intent of layered software architecture as introduced by E.W. Dijkstra in his seminal work, “The Structure of ‘THE’ Multiprogramming System” (a topic I plan to address in a separate publication), the foundational idea he presents remains useful. As I argued in my earlier publication, the Ports metaphor aligns well with cloud resources that trigger specific events, while software modules interacting directly with the cloud SDK effectively function as Adapters.

Numerous attempts (see References) have been made to apply Hexagonal Architecture concepts to cloud and, more specifically, serverless development. A notable example is the blog post “Developing Evolutionary Architecture with AWS Lambda,” which showcases a repository structure closely aligned with what I envision. However, even this example employs a more complex application than what I believe is necessary for initial exploration. I firmly hold that we should fully understand and explore the simplest possible applications, at the “Hello, World!” level, before delving into more complex scenarios. With this in mind, let’s examine how far we can go in building a straightforward Greeting Service.

Step Four: Extracting Core

First and foremost, our goal is to extract the Core and ensure its complete independence from any external dependencies:

bring cloud;

pub class Greeting impl cloud.IFunctionHandler {
  pub inflight handle(name: str): str {
    return "Hello, {name}!";
  }
}

At the moment, the Winglang Module System does not support public functions. It does, however, support public static class functions, which are semantically equivalent. Unfortunately, I cannot directly pass a public static inflight function to cloud.Function (it only works for closures), so I need to implement the cloud.IFunctionHandler interface. These limitations are fairly understandable and quite typical for a new programming system.

By extracting the core into a separate module, we can focus on what brings the application to life in the first place. This also enables extensive testing of the core logic independently, as shown below:

bring "./core" as core;
bring expect;

let greeting = new core.Greeting();

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, World!", greeting.handle("World"));
  expect.equal("Hello, Winglang!", greeting.handle("Winglang"));
}

Keeping the simulator up with only the core test allows us to quickly explore application logic and discuss it with stakeholders without worrying about cloud resources. This approach often epitomizes what a true MVP (Minimum Viable Product) is about:

Image6

The main file is now streamlined, focusing on system-level packaging and testing:

bring cloud;
bring "./core" as core;


let makeGreeting = new cloud.Function(inflight (name: str): str => {
  log("Received: {name}");
  let greeting = core.Greeting.makeGreeting(name);
  log("Returned: {greeting}");
  return greeting;
});


bring expect;

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Winglang!", makeGreeting.invoke("Winglang"));
}

To consolidate everything, it’s time to introduce a Makefile to automate the entire process:


.PHONY: all test_core test_local test_remote

cloud ?= aws

all: test_remote

test_core:
	wing test test.core.main.w -t sim

test_local: test_core
	wing test main.w -t sim

test_remote: test_local
	wing test main.w -t tf-$(cloud)

Here, I’ve defined a Makefile variable cloud with the default value aws, which specifies the target cloud platform for remote tests. By using Terraform as an orchestration engine, I ensure that the same code and Makefile will run without any changes on any cloud platform supported by Winglang, such as aws, gcp, or azure.

The output of remote testing is worth examining:

Image7

As we can see, Winglang automatically converts the Preflight code into Terraform templates and invokes Terraform commands to deploy the resulting stack to the cloud. It then runs the same test, effectively executing the Inflight code on the actual cloud, aws in this case, and finally deletes all resources. In such cases, I don't even need to access the cloud console to monitor the process. I can treat the cloud as a supercomputer, working with it through Winglang's cross-compilation mechanism.

The project structure now mirrors our architectural intent:


greeting-service/
├── core/
│   └── Greeting.w
├── main.w
├── Makefile
└── test.core.main.w

Step Five: Extracting the makeGreeting(name) Request Handler

The core functionality should be purely computational, stateless, and free from side effects. This is crucial to ensure that the core does not depend on any external framework and can be fully tested automatically. Introducing states or external side effects would generally hinder this possibility. However, we still aim to isolate application logic from the real environment represented by Ports and Adapters. To achieve this, we introduce a separate Request Handler module, as follows:


bring cloud;
bring "../core" as core;

pub class Greeting impl cloud.IFunctionHandler {
  pub inflight handle(name: str): str {
    log("Received: {name}");
    let greeting = core.Greeting.makeGreeting(name);
    log("Returned: {greeting}");
    return greeting;
  }
}

In this case, the GreetingHandler is responsible for logging, which is a side effect. In more complex applications, it would communicate with external databases, message buses, third-party services, etc., via Ports and Adapters.

The core logic is now encapsulated as a plain function and is no longer derived from the cloud.IFunctionHandler interface:


pub class Greeting {
  pub static inflight makeGreeting(name: str): str {
    return "Hello, {name}!";
  }
}

The unit test for the core logic is accordingly simplified:

bring "./core" as core;
bring expect;

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, World!", core.Greeting.makeGreeting("World"));
  expect.equal("Hello, Wing!", core.Greeting.makeGreeting("Wing"));
}

The responsibility of connecting the handler and core logic now falls to the main.w module:

bring cloud;
bring "./handlers" as handlers;


let greetingHandler = new handlers.Greeting();
let makeGreetingFunction = new cloud.Function(greetingHandler);

bring expect;

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", makeGreetingFunction.invoke("Wing"));
}

Once again, the project structure reflects our architectural intent:

greeting-service/
├── core/
│   └── Greeting.w
├── handlers/
│   └── Greeting.w
├── main.w
├── Makefile
└── test.core.main.w

It should be noted that for a simple service like Greeting, such an evolved structure could be considered over-engineering and not justified by actual business needs. However, as a software architect, it’s essential for me to outline a general skeleton for a fully-fledged service without getting bogged down in application-specific complexities that might not yet be known. By isolating different system components from one another, we make future system evolution less painful, and in many cases just practically feasible. In such cases, investing in a preliminary system structure by following best practices is fully justified and necessary. As Grady Booch famously said, “One cannot refactor a doghouse into a skyscraper.”

In general, keeping core functionality purely stateless and free from side effects, and isolating stateful application behavior with potential side effects into separate handlers, is conceptually equivalent to the monadic programming style widely adopted in Functional Programming environments.
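A TypeScript sketch of this separation, with an in-memory log array standing in for real side effects (a database, message bus, or logging service):

```typescript
// Pure, stateless core: trivially unit-testable, no side effects.
function makeGreeting(name: string): string {
  return `Hello, ${name}!`;
}

// Stand-in sink for a real logger; any external effect would live here.
const logLines: string[] = [];

// Handler: wraps the pure core with side effects, keeping the core clean.
function handle(name: string): string {
  logLines.push(`Received: ${name}`);
  const greeting = makeGreeting(name);
  logLines.push(`Returned: ${greeting}`);
  return greeting;
}

const result = handle("Wing");
```

The core can be tested exhaustively with plain assertions, while the handler's side effects are verified separately, mirroring the split between core/Greeting.w and handlers/Greeting.w above.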

Step Six: Connecting the Handler via Cloud Function Port

We can now remove the direct cloud.Function creation from the main module and encapsulate it into a separate GreetingFunction port as follows:

bring "./handlers" as handlers;
bring "./ports" as ports;


let greetingHandler = new handlers.Greeting();
let makeGreetingService = new ports.GreetingFunction(greetingHandler);

bring expect;

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", makeGreetingService.invoke("Wing"));
}

The GreetingFunction is defined in a separate module like this:

bring cloud;

pub class GreetingFunction {
  _f: cloud.Function;
  new(handler: cloud.IFunctionHandler) {
    this._f = new cloud.Function(handler);
  }
  pub inflight invoke(name: str): str {
    return this._f.invoke(name);
  }
}

This separation of concerns allows the main.w module to focus on connecting different parts of the system together. Specific port configuration is performed in a separate module dedicated to that purpose. While such isolation of GreetingHandler might seem unnecessary at this stage, it becomes more relevant when considering the nuanced configuration supported by Winglang cloud.Function, including execution platform (e.g., AWS Lambda vs Container), environment variables, timeout, maximum resources, etc. Extracting the GreetingFunction port definition into a separate module naturally facilitates the concealment of these details.
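The design intent can be sketched in TypeScript as follows; the configuration fields (timeout, memory, environment variables) are illustrative assumptions, not the actual cloud.Function options:

```typescript
// Hypothetical deployment configuration the port conceals from the rest of
// the system; in Winglang this would map to cloud.Function properties.
interface FunctionConfig {
  timeoutSec: number;
  memoryMb: number;
  env: Record<string, string>;
}

type Handler = (input: string) => string;

// Port: owns the cloud-facing configuration; callers only see invoke().
class GreetingFunctionPort {
  private readonly handler: Handler;
  readonly config: FunctionConfig;

  constructor(handler: Handler, config?: Partial<FunctionConfig>) {
    this.handler = handler;
    // Defaults are applied here, so the main module never deals with them.
    this.config = { timeoutSec: 30, memoryMb: 128, env: {}, ...config };
  }

  invoke(name: string): string {
    return this.handler(name);
  }
}

const port = new GreetingFunctionPort((n) => `Hello, ${n}!`, { timeoutSec: 60 });
const out = port.invoke("Wing");
```

Swapping AWS Lambda for a container, or changing a timeout, then touches only the port module, never the handler or the core.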

The project structure is updated accordingly:

greeting-service/
├── core/
│   └── Greeting.w
├── handlers/
│   └── Greeting.w
├── ports/
│   └── greetingFunction.w
├── main.w
├── Makefile
└── test.core.main.w

The adopted naming convention for port modules also allows for the inclusion of multiple port definitions within the same project, enabling the selection of the required one based on external configuration.

Step Seven: Reimplementing the Core in TypeScript

There are several reasons why a project might consider implementing its core functionality in a mainstream programming language that can still run atop the underlying runtime environment: for example, TypeScript, which compiles into JavaScript and can be integrated with Winglang. Here are some of the most common reasons:

  • Risk Mitigation: Preserving the core regardless of the cloud programming environment in use.
  • Available Skills: It’s often easier to find developers familiar with a mainstream language than with a new one.
  • Existing Code Base: Typical brownfield situations.
  • 3rd Party Libraries: Essential for core functionality, such as specific algorithms.
  • Automation Ecosystem Maturity: More options are available for exhaustive testing of core functionality in mainstream languages.
  • Support for Specific Styles: For instance, better support for pure functional programming.

The Greeting service core functionality, redeveloped in TypeScript, would look like this:

export function makeGreeting(name: string): string {
  return `Hello, ${name}!`;
}

Its unit test, developed using the jest framework, would be:

import { makeGreeting } from "@core/makeGreeting";

describe("makeGreeting", () => {
  it("should return a greeting with the provided name", () => {
    const name = "World";
    const expected = "Hello, World!";
    const result = makeGreeting(name);
    expect(result).toBe(expected);
  });
});

To make it accessible to Winglang language modules, a simple wrapper is needed:

pub inflight class Greeting {
  pub extern "../target/core/makeGreeting.js" static inflight makeGreeting(name: str): str;
}

The main technical challenge is to place the compiled JavaScript version where the Winglang wrapper can find it. For this project, I decided to use the target folder, where the Winglang compiler puts its artifacts. To achieve this, I created a dedicated tsconfig.build.json:

{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "outDir": "./target",
    // ... production-specific compiler options ...
  },
  "exclude": [
    "core/*.test.ts"
  ]
}

The Makefile was also modified to automate the process:

.PHONY: all install test_core build_core test_local test_remote

cloud ?= aws

all: test_remote

install:
	npm install

test_core: install
	npm run test

build_core: test_core
	npm run build

test_local: build_core
	wing test main.w -t sim

test_remote: test_local
	wing test main.w -t tf-$(cloud)

The folder structure reflects the changes made:

greeting-service/
├── core/
│   ├── Greeting.w
│   ├── makeGreeting.ts
│   └── makeGreeting.test.ts
├── handlers/
│   └── Greeting.w
├── ports/
│   └── greetingFunction.w
├── jest.config.js
├── main.w
├── Makefile
├── package-lock.json
├── package.json
├── tsconfig.build.json
└── tsconfig.json

Step Eight: Implementing the REST API Port

Now, let’s consider making our Greeting service accessible via a REST API. This could be necessary, for instance, to enable demonstrations from a web browser or to facilitate calls from external services that, due to security or technological constraints, cannot communicate directly with the GreetingFunction port. To accomplish this, we need to introduce a new Port definition and modify the main.w module, while keeping everything else unchanged:

bring cloud;
bring http;


pub class GreetingApi {
  pub apiUrl: str;

  new(handler: cloud.IFunctionHandler) {
    let api = new cloud.Api();

    api.get("/greetings", inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
      return cloud.ApiResponse{
        status: 200,
        body: handler.handle(request.query.get("name"))
      };
    });

    this.apiUrl = api.url;
  }

  pub inflight invoke(name: str): str {
    let result = http.get("{this.apiUrl}/greetings?name={name}");
    assert(200 == result.status);
    return result.body;
  }
}

To maintain a consistent testing interface, I implemented an invoke method that functions similarly to the GreetingFunction port. This design choice is not mandatory but rather a matter of convenience to minimize the amount of change.

The main.w module now allocates the GreetingApi port:

bring "./handlers" as handlers;
bring "./ports" as ports;


let greetingHandler = new handlers.Greeting();
let makeGreetingService = new ports.GreetingApi(greetingHandler);

bring expect;

test "it will return 'Hello, <name>!'" {
expect.equal("Hello, Wing!", makeGreetingService.invoke("Wing"));
}

Since there is now something to use externally, the Makefile was modified to include deploy and destroy targets, as follows:


.PHONY: all install test_core build_core update test_adapters test_local test_remote compile tf-init deploy destroy

cloud ?= aws
target := target/main.tf$(cloud)

all: test_remote

install:
	npm install

test_core: install
	npm run test

build_core: test_core
	npm run build

update:
	sudo npm update -g wing

test_adapters: update
	wing test test.adapters.main.w -t sim

test_local: build_core test_adapters
	wing test test.main.w -t sim

test_remote: test_local
	wing test test.main.w -t tf-$(cloud)

compile:
	wing compile main.w -t tf-$(cloud)

tf-init: compile
	( \
	cd $(target) ;\
	terraform init \
	)

deploy: tf-init
	( \
	cd $(target) ;\
	terraform apply -auto-approve \
	)

destroy:
	( \
	cd $(target) ;\
	terraform destroy -auto-approve \
	)

The browser screen looks almost as expected, but notice a strange JSON.parse error message (it will be addressed in a forthcoming section):

Image8

The project structure is updated to reflect these changes:

greeting-service/
├── core/
│   ├── Greeting.w
│   ├── makeGreeting.ts
│   └── makeGreeting.test.ts
├── handlers/
│   └── Greeting.w
├── ports/
│   ├── greetingApi.w
│   └── greetingFunction.w
├── jest.config.js
├── main.w
├── Makefile
├── package-lock.json
├── package.json
├── tsconfig.build.json
└── tsconfig.json

Step Nine: Extracting the REST API Request Adapter

The GreetingApi port implementation introduced in the previous section slightly violates the Single Responsibility Principle, which states: “A class should have only one reason to change.” Currently, there are multiple potential reasons for change:

  1. HTTP Routing Conventions: URL path with or without variable parts.
  2. HTTP Request Processing.
  3. HTTP Response Formatting.

We can generally agree that while HTTP Request Processing and HTTP Response Formatting are closely related, HTTP Routing stands apart. To decouple these functionalities, we introduce an ApiAdapter responsible for converting cloud.ApiRequest to cloud.ApiResponse, thereby extracting this functionality from the GreetingApi port.

To achieve this, we introduce a new IRestApiAdapter interface:

bring cloud;


pub interface IRestApiAdapter {
  inflight handle(request: cloud.ApiRequest): cloud.ApiResponse;
}

The GreetingApiAdapter class is defined as follows:

bring cloud;
bring "./IRestApiAdapter.w" as restApiAdapter;

pub class GreetingApiAdapter impl restApiAdapter.IRestApiAdapter {
  _h: cloud.IFunctionHandler;
  new(handler: cloud.IFunctionHandler) {
    this._h = handler;
  }
  pub inflight handle(request: cloud.ApiRequest): cloud.ApiResponse {
    return cloud.ApiResponse{
      status: 200,
      body: this._h.handle(request.query.get("name"))
    };
  }
}

The modified GreetingApi port class is now:

bring cloud;
bring http;
bring "../adapters/IRestApiAdapter.w" as restApiAdapter;

pub class GreetingApi {
  _apiUrl: str;
  _adapter: restApiAdapter.IRestApiAdapter;
  new(adapter: restApiAdapter.IRestApiAdapter) {
    let api = new cloud.Api();
    this._adapter = adapter;

    api.get("/greetings", inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
      return this._adapter.handle(request);
    });
    this._apiUrl = api.url;
  }
  pub inflight invoke(name: str): str {
    let result = http.get("{this._apiUrl}/greetings?name={name}");
    assert(200 == result.status);
    return result.body;
  }
}

The main.w module is updated accordingly:

bring "./handlers" as handlers;
bring "./ports" as ports;
bring "./adapters" as adapters;

let greetingHandler = new handlers.Greeting();
let greetingStringAdapter = new adapters.GreetingApiAdapter(greetingHandler);
let makeGreetingService = new ports.GreetingApi(greetingStringAdapter);

bring expect;

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", makeGreetingService.invoke("Wing"));
}

The project structure reflects these changes:

greeting-service/
├── adapters/
│   ├── greetingApiAdapter.w
│   └── IRestApiAdapter.w
├── core/
│   ├── Greeting.w
│   ├── makeGreeting.ts
│   └── makeGreeting.test.ts
├── handlers/
│   └── Greeting.w
├── ports/
│   ├── greetingApi.w
│   └── greetingFunction.w
├── jest.config.js
├── main.w
├── Makefile
├── package-lock.json
├── package.json
├── tsconfig.build.json
└── tsconfig.json

Step Ten: Testing the REST API Request Adapter

Extracting the GreetingApiAdapter from the GreetingApi port might seem like a purist action, performed to demonstrate the potential value of Adapters even where not strictly necessary. However, this perspective changes when we consider serious testing. The GreetingApiAdapter implementation from the previous section assumes that the name argument always comes within the query part of the HTTP request. But what happens if it doesn't? The system will crash, while according to the standard it should respond with the HTTP 400 (Bad Request) status code in such cases. The modified structure allows us to introduce a separate unit test fully dedicated to testing the GreetingApiAdapter:

bring cloud;
bring expect;
bring "./adapters" as adapters;
bring "./handlers" as handlers;

let greetingHandler = new handlers.Greeting();
let greetingStringAdapter = new adapters.GreetingStringRestApiAdapter(greetingHandler);

test "it will return 200 and correct answer when name supplied" {
  let request = cloud.ApiRequest{
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"name" => "Wing"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(200, response.status);
  expect.equal("Hello, Wing!", response.body);
}

test "it will return 400 and error message when name is not supplied" {
  let request = cloud.ApiRequest{
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"somethingElse" => "doesNotMatter"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(400, response.status);
  expect.equal("Query name=<name> is missing", response.body);
}

Running this test with the existing implementation will result in failure, necessitating the following changes:

bring cloud;
bring "./IRestApiAdapter.w" as restApiAdapter;


pub class GreetingStringRestApiAdapter impl restApiAdapter.IRestApiAdapter {
  _h: cloud.IFunctionHandler;

  new(handler: cloud.IFunctionHandler) {
    this._h = handler;
  }

  pub inflight handle(request: cloud.ApiRequest): cloud.ApiResponse {
    if let name = request.query.tryGet("name") {
      return cloud.ApiResponse{
        status: 200,
        body: this._h.handle(name)
      };
    } else {
      return cloud.ApiResponse{
        status: 400,
        body: "Query name=<name> is missing"
      };
    }
  }
}

The main lesson from this story is that system complexity can exist in multiple places, not always within the core logic. Separation of concerns aids in managing this complexity through dedicated and isolated test suites.

Step Eleven: Extracting the GreetingService

After all the modifications made, the resulting version of the main.w module has become quite complex, incorporating the logic of wiring system handlers, ports, and adapters. Additionally, maintaining end-to-end system tests within the same module is only feasible up to a point. Different testing and production environments may be necessary to address various security and cost considerations. To tackle these issues, it's advisable to extract the GreetingService configuration into a separate module:

bring "./handlers" as handlers;
bring "./ports" as ports;
bring "./adapters" as adapters;


pub class Greeting {
  pub api: ports.GreetingApi;

  new() {
    let greetingHandler = new handlers.Greeting();
    let greetingStringAdapter = new adapters.GreetingStringRestApiAdapter(greetingHandler);
    this.api = new ports.GreetingApi(greetingStringAdapter);
  }
}

Ideally, the creation of the Greeting service object should be implemented using a static method, following the Factory Method design pattern. However, I encountered difficulties in this approach, as Preflight static functions require a context, which I was unable to determine how to obtain. Nonetheless, even in this form, extracting the Greeting service class opens up multiple possibilities for different configurations in testing and production environments. The main.w module can now be relieved of the testing code:
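For illustration, the Factory Method the author had in mind could be sketched in TypeScript, where no preflight-context limitation exists; the names are simplified stand-ins for the Winglang classes:

```typescript
// Port-like interface the service exposes (simplified stand-in).
interface GreetingApi {
  invoke(name: string): string;
}

class GreetingService {
  // Private constructor forces all construction through the factory.
  private constructor(readonly api: GreetingApi) {}

  // Factory Method: all wiring of handlers, adapters, and ports happens here,
  // so test and production configurations can each get their own factory.
  static create(): GreetingService {
    const api: GreetingApi = { invoke: (name) => `Hello, ${name}!` };
    return new GreetingService(api);
  }
}

const service = GreetingService.create();
```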

bring "./service.w" as service;


let greetingService = new service.Greeting();

The system end-to-end test is now placed in its dedicated test.main.w module:

bring "./service.w" as service;
bring expect;

let greetingService = new service.Greeting();

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", greetingService.api.invoke("Wing"));
}

In this case, code duplication is minimal, and as previously mentioned, a real system will have different configurations for test and production environments. The detailed specifications for these will be passed to the Greeting service class constructor.
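The Factory Method idea mentioned above carries over directly to TypeScript. The sketch below shows what environment-specific construction could look like; every name in it is illustrative and none of it is Winglang or Wing SDK API:

```typescript
// Hypothetical TypeScript rendering of the intended Factory Method pattern.
// GreetingApi and GreetingService are illustrative names, not from the Wing SDK.
interface GreetingApi {
  invoke(name: string): string;
}

class GreetingService {
  // Private constructor: callers must go through a factory method.
  private constructor(public readonly api: GreetingApi) {}

  // Factory Method for tests: cheap, in-memory wiring.
  static forTesting(): GreetingService {
    return new GreetingService({ invoke: (name) => `Hello, ${name}!` });
  }

  // Factory Method for production: caller supplies the real greeting port.
  static forProduction(greet: (name: string) => string): GreetingService {
    return new GreetingService({ invoke: greet });
  }
}

const svc = GreetingService.forTesting();
```

This is exactly the shape that is currently awkward in Winglang, because preflight static functions need a scope/context argument, as the author notes above.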

Step Twelve: Enhancing REST API Request Adapter for Content Negotiation

Now, I aim to put the resulting architecture to the final test by partially implementing HTTP Content Negotiation. Specifically, the Greeting service should support returning a greeting statement as plain text, HTML, or JSON, depending on the client's request. The appropriate way to express these requirements is to modify the GreetingApiAdapter unit test as follows:

bring cloud;
bring expect;
bring "./adapters" as adapters;
bring "./handlers" as handlers;

let greetingHandler = new handlers.Greeting();
let greetingStringAdapter = new adapters.GreetingApiAdapter(greetingHandler);

test "it will return 200 and plain text answer when name is supplied without headers" {
  let request = cloud.ApiRequest{
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"name" => "Wing"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(200, response.status);
  expect.equal("Hello, Wing!", response.body);
  expect.equal("text/plain", response.headers?.get("Content-Type"));
}

test "it will return 200 and json answer when name is supplied with headers Accept: application/json" {
  let request = cloud.ApiRequest{
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"name" => "Wing"},
    headers: {"Accept" => "application/json"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(200, response.status);
  expect.equal("application/json", response.headers?.get("Content-Type"));
  let data = Json.tryParse(response.body);
  let expected = Json.stringify(Json {
    greeting: "Hello, Wing!"
  });
  expect.equal(expected, response.body);
}

test "it will return 200 and html answer when name is supplied with headers Accept: text/html" {
  let request = cloud.ApiRequest{
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"name" => "Wing"},
    headers: {"Accept" => "text/html"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(200, response.status);
  expect.equal("text/html", response.headers?.get("Content-Type"));
  let body = response.body ?? "";
  assert(body.contains("Hello, Wing!"));
}

test "it will return 400 and error message when name is not supplied" {
  let request = cloud.ApiRequest{
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"somethingElse" => "doesNotMatter"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(400, response.status);
  expect.equal("Query name=<name> is missing", response.body);
  expect.equal("text/plain", response.headers?.get("Content-Type"));
}

Suddenly, having a separate class for HTTP request/response handling doesn’t seem like a purely theoretical exercise, but rather a very pragmatic architectural decision. To make these tests pass, substantial modifications are needed in the GreetingApiAdapter class:

bring cloud;
bring "./IRestApiAdapter.w" as restApiAdapter;
bring "../core" as core;


pub class GreetingApiAdapter impl restApiAdapter.IRestApiAdapter {
  _h: cloud.IFunctionHandler;

  new(handler: cloud.IFunctionHandler) {
    this._h = handler;
  }

  inflight static _textPlain(greeting: str): str {
    return greeting;
  }

  inflight static _applicationJson(greeting: str): str {
    let responseBody = Json {
      greeting: greeting
    };
    return Json.stringify(responseBody);
  }

  inflight _findContentType(formatters: Map<inflight (str): str>, headers: Map<str>): str {
    let contentTypes = (headers.tryGet("Accept") ?? "").split(",");
    for ct in contentTypes {
      if formatters.has(ct) {
        return ct;
      }
    }
    return "text/plain";
  }

  inflight _buildOkResponse(headers: Map<str>, name: str): cloud.ApiResponse {
    let greeting = this._h.handle(name) ?? ""; // TODO: guard against empty greeting or what??
    let formatters = {
      "text/plain" => GreetingApiAdapter._textPlain,
      "text/html" => core.Greeting.formatHtml,
      "application/json" => GreetingApiAdapter._applicationJson
    };
    let contentType = this._findContentType(formatters, headers);
    return cloud.ApiResponse{
      status: 200,
      body: formatters.get(contentType)(greeting),
      headers: {"Content-Type" => contentType}
    };
  }

  inflight pub handle(request: cloud.ApiRequest): cloud.ApiResponse {
    if let name = request.query.tryGet("name") {
      return this._buildOkResponse(request.headers ?? {}, name);
    } else {
      return cloud.ApiResponse{
        status: 400,
        body: "Query name=<name> is missing",
        headers: {"Content-Type" => "text/plain"}
      };
    }
  }
}
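The formatter-dispatch idea in `_findContentType` and `_buildOkResponse` translates naturally to TypeScript. Here is a minimal, self-contained sketch (the function and variable names are mine, not from the article's codebase); unlike the Wing version, it also strips quality parameters such as `;q=0.9` before matching:

```typescript
// Map each supported media type to a body formatter, then pick the first
// type from the Accept header that we know how to produce.
type Formatter = (greeting: string) => string;

const formatters: Record<string, Formatter> = {
  "text/plain": (g) => g,
  "application/json": (g) => JSON.stringify({ greeting: g }),
  "text/html": (g) => `<h1>${g}</h1>`,
};

function findContentType(accept: string | undefined): string {
  // Drop optional parameters like ";q=0.9" before matching.
  const requested = (accept ?? "").split(",").map((t) => t.split(";")[0].trim());
  for (const ct of requested) {
    if (ct in formatters) return ct;
  }
  return "text/plain"; // default, mirroring the Wing adapter
}

const ct = findContentType("text/html,application/xhtml+xml;q=0.9");
const body = formatters[ct]("Hello, Wing!");
```

The Wing adapter above would treat `text/html;q=0.9` as an unknown media type and fall back to plain text; handling parameters is one of the first refinements a real content-negotiation port would need.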

Notice how quickly the complexity escalates. We’re not done yet, as we need a proper HTML formatter. The easiest way to implement it seemed to be in TypeScript, so I decided to place it in the core package:

export function formatHtml(greeting: string): string {
  return `
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Wing Greeting Service</title>
    <!-- Tailwind CSS Play CDN https://tailwindcss.com/docs/installation/play-cdn -->
    <script src="https://cdn.tailwindcss.com"></script>
  </head>
  <body class="flex items-center justify-center h-screen">
    <div class="text-center" id="greeting">
      <h1 class="text-2xl font-bold">${greeting}</h1>
    </div>
  </body>
</html>
`;
}

There is, of course, a separate unit test for it:

import { formatHtml } from "@core/formatHtml";

describe("formatHtml", () => {
  it("should return a properly formatted HTML greeting page", () => {
    const greeting = "Hello, World!";
    const result = formatHtml(greeting);
    expect(result).toContain(greeting);
  });
});

Placing the HTML response formatter in the core package could be debated as a violation of Hexagonal Architecture principles. Indeed, formatting an HTML response doesn’t seem to belong to the core application logic. Technically, relocating it wouldn’t be too hard, and in a larger real-world system, that’s probably what should be done. However, I chose to place it there to consolidate all TypeScript-related components in one place and to test and build them through the same set of Makefile targets.

Now the browser gets a response in a format it can understand and render properly:


As stated at the outset, the objective of this technology research report was to explore how well Winglang supports the Clean Architecture style, based on the Hexagonal Ports and Adapters pattern, and how easily it can be integrated with TypeScript.

The exploration was conducted using the simplest “Hello, World!” application, which evolved into the GreetingService through twelve incremental steps, each introducing a minor modification to the previous code base. This resulted in the following project structure:

greeting-service/
├── adapters/
│   ├── greetingApiAdapter.w
│   └── IRestApiAdapter.w
├── core/
│   ├── Greeting.w
│   ├── makeGreeting.ts
│   └── makeGreeting.test.ts
├── handlers/
│   └── Greeting.w
├── ports/
│   ├── greetingApi.w
│   └── greetingFunction.w
├── jest.config.js
├── main.w
├── Makefile
├── package-lock.json
├── package.json
├── service.w
├── test.adapters.main.w
├── test.main.w
├── tsconfig.build.json
└── tsconfig.json

In my view, this structure reflects the overall service architecture quite well. As a minor improvement, I would consider relocating the TypeScript-related files to a sub-level within the core folder.

Overall, the Winglang Module System passed the initial test, providing substantial support for the separation of concerns as prescribed by the Hexagonal Ports and Adapters pattern. It also offers reasonable support for interoperability with NodeJS runtime engine-based languages, such as TypeScript. My wish list for potential improvements includes:

  • Support for Preflight static functions in modules other than main.w, which is essential for implementing the Factory Method design pattern and, in turn, for supporting non-trivial service configurations.
  • Automatic lifting of Inflight static functions in modules other than main.w (this worked for TypeScript external functions), to eliminate the need for some extra boilerplate.
  • Automatic generation of Winglang wrappers for external functions.

This report evaluates the Winglang programming language for implementing one sequential stage of a more general Staged Event-Driven Architecture (SEDA). The assessment of how well Winglang supports the full-fledged Event-Driven part and asynchronous stage implementation (most likely for Handlers) will be the subject of future research. Stay tuned.

References

Winglang Publications

  1. Elad Ben-Israel, “Cloud, why so difficult?”
  2. Pouya Hallaj, “Wing: Programing language for the cloud”
  3. Artem Sokhin, “Revolutionize Cloud Programming with Wing: A New Cloud-Oriented Language”
  4. Jin, “Wing Language: Streamlining Cloud-Oriented Programming for Human-AI Collaboration”
  5. Sebastian Korfmann, “A Cloud Development Troubleshooting Treasure Hunt”
  6. Jesse Warden, “Wing — Programming Language for the Cloud”
  7. Shai Ber, “Winglang: Cloud Development Programming for the AI Era”

My Publications on “Infrastructure From Code”

  1. Asher Sterkin, “If your Computer is the Cloud, what should its Operating System look like?”
  2. Asher Sterkin, “Cloud Application Infrastructure from Code (IfC): The Next Logical Step in Cloud Automation”
  3. Asher Sterkin, “4 Pillars of the ‘Infrastructure from Code’”
  4. Asher Sterkin, “IfC-2023: Technology Landscape”

Hexagonal Architecture

  1. Alistair Cockburn, “Hexagonal architecture”
  2. Robert C. Martin, “Clean Architecture”
  3. Krzysztof Słomka, “Hexagonal Architecture with Nest.js and TypeScript”
  4. Sairyss, “Domain-Driven Hexagon”
  5. Carlos Cunha, “A Hexagonal Approach to Writing Microservices for Scalable and Decentralized Business: How to use Ports and Adapter with TypeScript”
  6. Walid Karray, “Building a Todo App with TypeScript Using Clean Architecture: A Detailed Look at the Directory Structure”
  7. Andy Blackledge, “Hexagonal Architecture with CDK, Lambda, and TypeScript”
  8. Dyarlen Iber, “Hexagonal Architecture and Clean Architecture (with examples)”
  9. Khalil Stemmler, “Clean Node.js Architecture”
  10. James Beswick, Luca Mezzalira, “Developing evolutionary architecture with AWS Lambda”
  11. Adam Fanello, “Hexagonal Architecture by Example (in TypeScript)”
  12. Royi Benita, “Clean Node.js Architecture — With NestJs and TypeScript”

· 16 min read
Hasan Abu-Rayyan

Wow, it's 2024, almost a quarter of the way through the 21st century. If you are reading this, you should probably pat yourself on the back, because you did it! You have survived the crazy roller coaster ride of the last several years, ranging from a pandemic to global insecurity with ongoing wars.

So finally 2024 is here, and we all get to ask ourselves, "Is this the year things finally start going back to normal?"... probably not! Though, as we all sit on the edge of our seats waiting for the next global crisis (my bingo card has mole people rising to the surface), we can take solace in one silver lining: Wing Custom Platforms are all the rage, and easier than ever to build!

In this blog series I'm going to walk through how to build, publish, and use your own Wing Custom Platforms. Before we get too deep, and since this is the first installment of what will probably be many procrastinated iterations, let's do a quick level set.

Let me introduce Wing

A programming language for the cloud.

Wing combines infrastructure and runtime code in one language, enabling developers to stay in their creative flow, and to deliver better software, faster and more securely.


Please star ⭐ Wing


What Are Wing Custom Platforms?

The purpose of this post is not to explain all the dry details of Wing Platforms; that's the job of the Wing docs (I'll provide reference links down below). Rather, we want to get into the fun of building one, so I'll explain only briefly.

Wing Custom Platforms offer us a way to hook into a Wing application's compilation process. This is done through various hooks that a custom platform can implement. As of today, some of these hooks include:

  • preSynth: called before the compiler begins to synthesize; gives us access to the root app in the construct tree.
  • postSynth: called right after artifacts are synthesized; gives us access to manipulate the resulting configuration. In the case of a Terraform provisioner, this is the Terraform JSON configuration.
  • validate: called right after the postSynth hook and receives the same input; the key difference is that the passed config is immutable, which is important for validation operations.

Several other hooks exist, but we won't go into all of them in this post.
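To make the hook surface concrete, here is a minimal TypeScript sketch of a platform object implementing all three hooks. Note that `MiniPlatform` is a simplified stand-in I've written so the snippet stands alone; the real interface is `platform.IPlatform` from `@winglang/sdk` and has more members:

```typescript
// Simplified stand-in for @winglang/sdk's platform.IPlatform, so this
// snippet runs on its own; the real interface has more members.
interface MiniPlatform {
  target: string;
  preSynth?(app: unknown): void;
  postSynth?(config: any): any;
  validate?(config: any): any;
}

const loggingPlatform: MiniPlatform = {
  target: "tf-*",
  preSynth(_app) {
    // Inspect or mutate the construct tree before synthesis.
  },
  postSynth(config) {
    // Mutate the synthesized Terraform JSON, e.g. stamp a marker into it.
    config["//"] = { ...(config["//"] ?? {}), touchedBy: "loggingPlatform" };
    return config;
  },
  validate(config) {
    // Read-only check; throwing here fails the compile.
    if (!config.terraform) throw new Error("no terraform block");
    return config;
  },
};

const out = loggingPlatform.postSynth!({ terraform: {} });
```

The rest of this post builds a real platform around exactly one of these hooks, postSynth.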

Let's Get Building!

There is one more, rather important, bit of information we need before we start building our very own Custom Platform: what is our platform going to do?

I'm glad you asked! We are going to build a Custom Platform that enhances the developer experience when working with Terraform-based platforms, some of which come built in with the Wing installation, such as tf-aws, tf-azure, and tf-gcp.

The specific enhancement we want to add is the ability to configure how Terraform state files are managed, through the use of Terraform backends. By default, all of the built-in Terraform-based platforms use local state file configurations, which is nice for quick experimentation but lacks the rigor needed for production-quality deployments.

The Goal

Build and publish a Wing Custom Platform that provides a way to configure your Terraform backend state management.

For brevity, we will focus on three backend types: s3, azurerm, and gcs.

Required Materials

  • Wing
  • NPM & Node
  • A bit of TypeScript know-how
  • A wish and a prayer

Creating The Project

To begin, let's create a new npm project. I'm going to be a bit more bare-bones in this guide, so I'll just create a package.json and a tsconfig.json.

Below is my package.json file. The only really interesting part is the dev dependency on @winglang/sdk, which lets us use some of the exposed Platform types; we will see an example of that soon.

{
  "name": "@wingplatforms/tf-backends",
  "version": "0.0.1",
  "main": "index.js",
  "repository": {
    "type": "git",
    "url": "https://github.com/hasanaburayyan/wing-tf-backends"
  },
  "license": "ISC",
  "devDependencies": {
    "typescript": "5.3.3",
    "@winglang/sdk": "0.54.30"
  },
  "files": ["lib"]
}

Here is the tsconfig.json. I've omitted a few details for brevity, since some options are just personal preference. What's worth noting is how I have decided to structure the project: all my code will live in a src folder, and the output of compilation will go to the lib folder. You might set your project up differently, and that's fine, but it's worth explaining if you are following along.

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "rootDir": "./src",
    "outDir": "./lib",
    "lib": ["es2020", "dom"]
  },
  "include": ["./src/**/*"],
  "exclude": ["./node_modules"]
}

Then, to prep our dependencies, we can just run npm install.

Let's Code!

Okay, now that the initial setup is out of the way, it's time to start writing our Platform!

First, I'll create a file src/platform.ts that will contain the main code for our Platform, which is used by the Wing compiler. The bare minimum code required for a Platform looks like this:

import { platform } from "@winglang/sdk";

export class Platform implements platform.IPlatform {
  readonly target = "tf-*";
}

Here we create and export our Platform class, which implements the IPlatform interface. All the platform hooks are optional, so we don't actually have to define anything else for this to technically be valid.

The one required bit is defining target. This mechanism lets a platform declare the provisioning engine and cloud provider it is compatible with. At the time of this blog post, this compatibility is not actually enforced, but... we imagine it works :)

Okay, so we have a bare-bones Platform, but it's not actually useful yet. Let's change that! First, we will use environment variables to determine which type of backend our users want, as well as the key for the state file.

So we will provide a constructor in our Platform:

import { platform } from "@winglang/sdk";

export class Platform implements platform.IPlatform {
  readonly target = "tf-*";
  readonly backendType: string;
  readonly stateFileKey: string;

  constructor() {
    if (!process.env.TF_BACKEND_TYPE) {
      throw new Error(`TF_BACKEND_TYPE environment variable must be set.`);
    }
    if (!process.env.TF_STATE_FILE_KEY) {
      throw new Error("TF_STATE_FILE_KEY environment variable must be set.");
    }

    this.backendType = process.env.TF_BACKEND_TYPE;
    this.stateFileKey = process.env.TF_STATE_FILE_KEY;
  }
}
}

Cool, now we are getting somewhere. Our Platform requires users to have two environment variables set when compiling their Wing code: TF_BACKEND_TYPE and TF_STATE_FILE_KEY. For now, we just persist these values as instance variables.
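As an aside, this check-then-throw pattern for environment variables repeats for every backend we add later. It could be factored into a small helper; this is my refactoring suggestion, not code from the post, and `requireEnv` is a name I've made up:

```typescript
// Read a required environment variable or fail fast with a uniform message.
// Hypothetical helper, not part of the post's codebase.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} environment variable must be set.`);
  }
  return value;
}

// The constructor body above would then collapse to:
//   this.backendType = requireEnv("TF_BACKEND_TYPE");
//   this.stateFileKey = requireEnv("TF_STATE_FILE_KEY");
```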

One more housekeeping item: we need to export our Platform code. To do this, let's create an index.ts with a single line:

export * from "./platform";

Testing Our Platform

Before we get much further, I want to show how to test your Platform locally. First, compile it with npx tsc; since we already defined everything in tsconfig.json, we will conveniently get a folder named lib containing the generated JavaScript code.

Let's create a super simple Wing application to use this Platform with.

// main.w
bring cloud;

new cloud.Bucket();

The above Wing code will just import the cloud library and use it to create a Bucket resource.

Next we will run a Wing compile command using our Platform in combination with some other Terraform based Platform, in my case it will be tf-aws

wing compile main.w --platform tf-aws --platform ./lib

Note: We are providing two Platforms: tf-aws and a relative path to our compiled Platform, ./lib. The ordering of these Platforms matters: tf-aws MUST come first, since it's a Platform that implements the newApp() API. We won't dive deeper into that in this post, but the reference materials below provide links if you want to dig in.

Now running this code will result in the following error:

wing compile main.w -t tf-aws -t ./lib

An error occurred while loading the custom platform: Error: TF_BACKEND_TYPE environment variable must be set.

Now, before you freak out, know that this is one of them good errors :) We can see that our Platform code was loaded and run, because the error thrown is the one requiring the TF_BACKEND_TYPE environment variable. If we rerun the compile command with the required variables, we should get a successful compilation:

TF_BACKEND_TYPE=s3 TF_STATE_FILE_KEY=mystate.tfstate wing compile main.w -t tf-aws -t ./lib

To be extra sure the compilation worked, we can inspect the generated Terraform code in target/main.tfaws/main.tf.json:

{
  "//": {
    "metadata": {
      "backend": "local",
      "stackName": "root",
      "version": "0.17.0"
    },
    "outputs": {}
  },
  "provider": {
    "aws": [{}]
  },
  "resource": {
    "aws_s3_bucket": {
      "cloudBucket": {
        "//": {
          "metadata": {
            "path": "root/Default/Default/cloud.Bucket/Default",
            "uniqueId": "cloudBucket"
          }
        },
        "bucket_prefix": "cloud-bucket-c87175e7-",
        "force_destroy": false
      }
    }
  },
  "terraform": {
    "backend": {
      "local": {
        "path": "./terraform.tfstate"
      }
    },
    "required_providers": {
      "aws": {
        "source": "aws",
        "version": "5.31.0"
      }
    }
  }
}

We can see that a single Bucket is being created, but it is still using the local Terraform backend. That's because we still have some work to do!

Implementing The postSynth Hook

Since we want to edit the generated Terraform configuration file after the code has been synthesized, we will implement the postSynth hook. As explained earlier, this hook is called right after synthesis completes and is passed the resulting configuration.

What makes this hook even more useful is that it allows us to return a mutated version of the configuration.

To implement this hook, we update our Platform code as follows:

export class Platform implements platform.IPlatform {
  // ...
  postSynth(config: any): any {
    if (this.backendType === "s3") {
      if (!process.env.TF_S3_BACKEND_BUCKET) {
        throw new Error("TF_S3_BACKEND_BUCKET environment variable must be set.");
      }

      if (!process.env.TF_S3_BACKEND_BUCKET_REGION) {
        throw new Error(
          "TF_S3_BACKEND_BUCKET_REGION environment variable must be set."
        );
      }

      config.terraform.backend = {
        s3: {
          bucket: process.env.TF_S3_BACKEND_BUCKET,
          region: process.env.TF_S3_BACKEND_BUCKET_REGION,
          key: this.stateFileKey,
        },
      };
    }
    return config;
  }
}

Now we can see some control flow logic happening here: if the user wants an s3 backend, we need additional input, namely the name and region of the bucket, which we configure via TF_S3_BACKEND_BUCKET and TF_S3_BACKEND_BUCKET_REGION.

Assuming all of the required environment variables exist, we can then manipulate the provided config object, where we set config.terraform.backend to use an s3 configuration block. Finally the config object is returned.

Now, to see this all in action, we need to compile our code (npx tsc) and provide all four required s3 environment variables. To make the commands easier to read, I'll put them on multiple lines:

# compile platform code
npx tsc

# set env vars
export TF_BACKEND_TYPE=s3
export TF_STATE_FILE_KEY=mystate.tfstate
export TF_S3_BACKEND_BUCKET=myfavorites3bucket
export TF_S3_BACKEND_BUCKET_REGION=us-east-1

# compile wing code!
wing compile main.w -t tf-aws -t ./lib

And voila! We should now be able to look at our Terraform config and see that a remote s3 backend is being used:

// Parts of the config have been omitted for brevity
// Parts of the config have been omitted for brevity
{
  "terraform": {
    "required_providers": {
      "aws": {
        "version": "5.31.0",
        "source": "aws"
      }
    },
    "backend": {
      "s3": {
        "bucket": "myfavorites3bucket",
        "region": "us-east-1",
        "key": "mystate.tfstate"
      }
    }
  },
  "resource": {
    "aws_s3_bucket": {
      "cloudBucket": {
        "bucket_prefix": "cloud-bucket-c87175e7-",
        "force_destroy": false,
        "//": {
          "metadata": {
            "path": "root/Default/Default/cloud.Bucket/Default",
            "uniqueId": "cloudBucket"
          }
        }
      }
    }
  }
}

IT'S ALIVE!!!

If you have been following along, pat yourself on the back again! Now on top of surviving the early 2020s you have also written your first Wing Custom Platform!

Now, before we go into how to make it available to other Wingnuts, let's make our code a little cleaner and a bit more robust.

Supporting Multiple Backends

In order to live up to its name, tf-backends, it should probably support multiple backends! Let's use some good ol' coding chops and abstract a bit.

We want our Platform to support s3, azurerm, and gcs. To accomplish this, we just have to define different config.terraform.backend blocks based on the desired backend.

To make this work I'm going to create a few more files:

src/backends/backend.ts

// simple interface to define a backend's behavior
export interface IBackend {
  generateConfigBlock(stateFileKey: string): any;
}
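This interface is the whole extension point: adding another backend later means writing one small class. As an illustration, here is what a hypothetical local backend (deliberately not one of the three this post targets) could look like; the interface is repeated inline so the sketch compiles on its own:

```typescript
// Inline copy of the post's IBackend so this snippet is self-contained.
interface IBackend {
  generateConfigBlock(stateFileKey: string): any;
}

// Hypothetical "local" backend: keeps state in a file named after the key.
class Local implements IBackend {
  generateConfigBlock(stateFileKey: string): any {
    return {
      local: {
        path: `./${stateFileKey}`,
      },
    };
  }
}

const block = new Local().generateConfigBlock("mystate.tfstate");
```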

Now, several backend classes implement this interface:

src/backends/s3.ts

import { IBackend } from "./backend";

export class S3 implements IBackend {
  readonly backendBucket: string;
  readonly backendBucketRegion: string;

  constructor() {
    if (!process.env.TF_S3_BACKEND_BUCKET) {
      throw new Error("TF_S3_BACKEND_BUCKET environment variable must be set.");
    }

    if (!process.env.TF_S3_BACKEND_BUCKET_REGION) {
      throw new Error(
        "TF_S3_BACKEND_BUCKET_REGION environment variable must be set."
      );
    }

    this.backendBucket = process.env.TF_S3_BACKEND_BUCKET;
    this.backendBucketRegion = process.env.TF_S3_BACKEND_BUCKET_REGION;
  }

  generateConfigBlock(stateFileKey: string): any {
    return {
      s3: {
        bucket: this.backendBucket,
        region: this.backendBucketRegion,
        key: stateFileKey,
      },
    };
  }
}

src/backends/azurerm.ts

import { IBackend } from "./backend";

export class AzureRM implements IBackend {
  readonly backendStorageAccountName: string;
  readonly backendStorageAccountResourceGroupName: string;
  readonly backendContainerName: string;

  constructor() {
    if (!process.env.TF_AZURERM_BACKEND_STORAGE_ACCOUNT_NAME) {
      throw new Error(
        "TF_AZURERM_BACKEND_STORAGE_ACCOUNT_NAME environment variable must be set."
      );
    }

    if (!process.env.TF_AZURERM_BACKEND_STORAGE_ACCOUNT_RESOURCE_GROUP_NAME) {
      throw new Error(
        "TF_AZURERM_BACKEND_STORAGE_ACCOUNT_RESOURCE_GROUP_NAME environment variable must be set."
      );
    }

    if (!process.env.TF_AZURERM_BACKEND_CONTAINER_NAME) {
      throw new Error(
        "TF_AZURERM_BACKEND_CONTAINER_NAME environment variable must be set."
      );
    }

    this.backendStorageAccountName =
      process.env.TF_AZURERM_BACKEND_STORAGE_ACCOUNT_NAME;
    this.backendStorageAccountResourceGroupName =
      process.env.TF_AZURERM_BACKEND_STORAGE_ACCOUNT_RESOURCE_GROUP_NAME;
    this.backendContainerName = process.env.TF_AZURERM_BACKEND_CONTAINER_NAME;
  }

  generateConfigBlock(stateFileKey: string): any {
    return {
      azurerm: {
        storage_account_name: this.backendStorageAccountName,
        resource_group_name: this.backendStorageAccountResourceGroupName,
        container_name: this.backendContainerName,
        key: stateFileKey,
      },
    };
  }
}

src/backends/gcs.ts

import { IBackend } from "./backend";

export class GCS implements IBackend {
  readonly backendBucket: string;

  constructor() {
    if (!process.env.TF_GCS_BACKEND_BUCKET) {
      throw new Error("TF_GCS_BACKEND_BUCKET environment variable must be set.");
    }

    this.backendBucket = process.env.TF_GCS_BACKEND_BUCKET;
  }

  generateConfigBlock(stateFileKey: string): any {
    return {
      gcs: {
        bucket: this.backendBucket,
        key: stateFileKey,
      },
    };
  }
}

Now that we have our backend classes defined, we can update our Platform code to use them. My final Platform code looks like this:

import { platform } from "@winglang/sdk";
import { IBackend } from "./backends/backend";
import { S3 } from "./backends/s3";
import { AzureRM } from "./backends/azurerm";
import { GCS } from "./backends/gcs";

// TODO: support more backends: https://developer.hashicorp.com/terraform/language/settings/backends/local
const SUPPORTED_TERRAFORM_BACKENDS = ["s3", "azurerm", "gcs"];

export class Platform implements platform.IPlatform {
  readonly target = "tf-*";
  readonly backendType: string;
  readonly stateFileKey: string;

  constructor() {
    if (!process.env.TF_BACKEND_TYPE) {
      throw new Error(
        `TF_BACKEND_TYPE environment variable must be set. Available options: (${SUPPORTED_TERRAFORM_BACKENDS.join(
          ", "
        )})`
      );
    }
    if (!process.env.TF_STATE_FILE_KEY) {
      throw new Error("TF_STATE_FILE_KEY environment variable must be set.");
    }
    this.backendType = process.env.TF_BACKEND_TYPE;
    this.stateFileKey = process.env.TF_STATE_FILE_KEY;
  }

  postSynth(config: any): any {
    config.terraform.backend = this.getBackend().generateConfigBlock(
      this.stateFileKey
    );
    return config;
  }

  /**
   * Determine which backend class to initialize based on the backend type.
   *
   * @returns the backend instance based on the backend type
   */
  getBackend(): IBackend {
    switch (this.backendType) {
      case "s3":
        return new S3();
      case "azurerm":
        return new AzureRM();
      case "gcs":
        return new GCS();
      default:
        throw new Error(
          `Unsupported backend type: ${
            this.backendType
          }, available options: (${SUPPORTED_TERRAFORM_BACKENDS.join(", ")})`
        );
    }
  }
}

BOOM!! Our Platform now supports all three backends we set out to support!

Feel free to build and test each one.
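A small design note: the switch in getBackend could also be written as a lookup table, which keeps the list of supported backends and the dispatch logic in one place. A sketch with simplified stand-ins for the backend classes (the real ones read their own environment variables in their constructors):

```typescript
interface IBackend {
  generateConfigBlock(stateFileKey: string): any;
}

// Simplified stand-ins for the real backend classes.
class S3 implements IBackend {
  generateConfigBlock(key: string): any { return { s3: { key } }; }
}
class GCS implements IBackend {
  generateConfigBlock(key: string): any { return { gcs: { key } }; }
}

// One table drives both the dispatch and the "available options" message.
const BACKENDS: Record<string, new () => IBackend> = { s3: S3, gcs: GCS };

function getBackend(backendType: string): IBackend {
  const ctor = BACKENDS[backendType];
  if (!ctor) {
    throw new Error(
      `Unsupported backend type: ${backendType}, available options: (${Object.keys(BACKENDS).join(", ")})`
    );
  }
  return new ctor();
}

const backend = getBackend("s3");
```

With this shape, adding a backend is a one-line change to the table rather than a new case plus an edit to the SUPPORTED_TERRAFORM_BACKENDS array.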

Publishing Our Platform For Use

Now, I'm not going to explain all the intricate details of how npm packages work, since I would do a poor job of it, as indicated by the fact that my examples below use version 0.0.3 (third time's the charm!).

However, if you have followed along thus far, you will be able to run the following commands. Note: in order to publish this library, you will need a package name that you are authorized to publish to. If you use mine (@wingplatforms/tf-backends), you're gonna have a bad time.

```bash
# compile platform code again
npx tsc

# package your code
npm pack

# publish your package
npm publish
```

If done right, you should see something along the lines of:

npm notice === Tarball Details ===
npm notice name: @wingplatforms/tf-backends
npm notice version: 0.0.3
npm notice filename: wingplatforms-tf-backends-0.0.3.tgz
npm notice package size: 36.8 kB
npm notice unpacked size: 119.5 kB
npm notice shasum: 0186c558fa7c1ff587f2caddd686574638c9cc4c
npm notice integrity: sha512-mWIeg8yRE7CG/[...]cT8Kh8q/QwlGg==
npm notice total files: 17
npm notice
npm notice Publishing to https://registry.npmjs.org/ with tag latest and default access

Using The Published Platform

With the Platform published, let's try it out. Note: I suggest using a clean directory for playing with it.

Using the same simple Wing application as before

// main.w
bring cloud;

new cloud.Bucket();

We need to add one more thing to use a Custom Platform: a package.json file, which only needs to define the published Platform as a dependency:

{
  "dependencies": {
    "@wingplatforms/tf-backends": "0.0.3"
  }
}

With both those files created, let's install our custom Platform using npm install.

Finally, let's set up all the environment variables for GCS and run our Wing compile command. Note: since we are now using an installed npm library, we provide the package name instead of ./lib!

export TF_BACKEND_TYPE=gcs
export TF_STATE_FILE_KEY=mystate.tfstate
export TF_GCS_BACKEND_BUCKET=mygcsbucket

wing compile main.w -t tf-aws -t @wingplatforms/tf-backends

Now we should be able to see that the generated Terraform config is using the correct remote backend!

{
  "terraform": {
    "required_providers": {
      "aws": {
        "version": "5.31.0",
        "source": "aws"
      }
    },
    "backend": {
      "gcs": {
        "bucket": "mygcsbucket",
        "key": "mystate.tfstate"
      }
    }
  },
  "resource": {
    "aws_s3_bucket": {
      "cloudBucket": {
        "bucket_prefix": "cloud-bucket-c87175e7-",
        "force_destroy": false,
        "//": {
          "metadata": {
            "path": "root/Default/Default/cloud.Bucket/Default",
            "uniqueId": "cloudBucket"
          }
        }
      }
    }
  }
}

What's Next?

Now that we have built and published our first Wing Custom Platform, the sky is the limit! Get out there and start building Custom Platforms to your heart's content <3, and keep a lookout for the next addition to this series on Platform building!

In the meantime, make sure to join the Wing Slack community: https://t.winglang.io/slack and share what you are working on, or any issues you run into.

Want to read more about Wing Platforms? Check out the Wing Platform Docs


If you enjoyed this article Please star ⭐ Wing