Inflight Magazine no. 11
The 11th issue of the Wing Inflight Magazine.
Hello Wingnuts!
We recently updated the project's roadmap to share more details about our vision for the project. Since the Wing language and toolchain have ambitious goals, we hope this gives you a better idea of what to expect in the coming months.
We want to stabilize as many of the items below as possible, and we're eager for your feedback and collaboration, either through GitHub or our Discord server.
We want to provide a robust CLI for people to compile, test, and perform other essential functions with their Wing code.
We want the CLI to be easy to install, update, and set up on a variety of systems.
Dependency management should be simple and insulated from problems specific to individual Node package managers (e.g., npm, pnpm, yarn).
The Wing toolchain makes it easy to create and publish Wing libraries (winglibs) with automatically generated API docs, and it doesn't require existing knowledge of Node package managers.
Wing Platforms allow you to specify a layer of infrastructure customizations that apply to all application code written in Wing. It should be simple to create new Wing platforms or extend existing platforms.
With Wing Platforms it's possible to specify both multi-cloud abstractions in Wing as well as the actual platform (for example, the implementation of cloud.Bucket) in Wing.
Wing lets you easily write tests that run both locally and on the cloud.
The Wing test runner can be customized per platform.
Tests in Wing can be run in a variety of ways -- via the CLI, via the Wing Console, and in CI/CD.
The design of the test system should make it easy for developers to write reproducible (or deterministic) tests, and also provide facilities for debugging.
Wing's syntax and type system are robust, well documented, and easy for developers to learn.
Developers coming from other mainstream languages with C-like syntax (Java, C++, TypeScript) should feel right at home.
Most Wing code is statically typed in order to support automatic permissions.
Wing should be able to interoperate with a vast majority of TypeScript libraries.
It should be straightforward to import libraries that are available on npm and automatically have corresponding Wing types generated for them based on the TypeScript type information.
The language also has mechanisms for more advanced users to use custom JavaScript code in Wing.
We want Wing to have friendly, easy to understand error messages that point users towards how to fix their problems.
Wing has a built-in language server that gives users a first-class editing and refactoring experience in their IDEs.
Wing provides a batteries-included experience for performing common programming tasks like working with data structures, file systems, calculations, random number generation, HTTP requests, and other common needs.
Wing also has an ecosystem of Wing libraries ("winglibs") that make it easy to write cloud applications by providing easy-to-use abstractions over popular cloud resources.
This includes a [cloud module](/docs/api/category/cloud) that is an opinionated set of resources for writing applications on the most popular public clouds.
The cloud primitives are designed to be cloud-agnostic (we aren't biased towards a specific cloud provider).
These cloud primitives can all be run with a high degree of fidelity and performance in the local simulator.
Not all winglibs may be fully stable when the language reaches 1.0.
We want to provide a first class local development experience for Wing that makes it easy and fast to test your applications locally.
It gives you observability into your running application and lets you interact live with different components.
It gives you a clearer picture of your infrastructure graph and how preflight and inflight code are related.
It complements the experience of writing code in a dedicated editor.
We want it to be easy for people to get exposed to Wing code and have ways to try applications without having to install Wing locally.
The Wing docs should provide content appealing to different kinds of developers trying to acquire different kinds of information at different stages -- from tutorials to references to how-to guides (documentation quadrants).
Wing docs need to have content for both the personas of developers writing their own applications and platform engineers aiming to provide simpler abstractions and tools for their teams.
We want to provide hundreds of examples and code snippets to make it easy to learn the syntax of the language and easy to see how to solve common use cases.
If you have any questions or would like to contribute, feel free to reach out to us and join us on our mission to make cloud development easier for everyone.
- The Wing Team
In this tutorial, we will build an AI-powered Q&A bot for your website documentation.
- Create a user-friendly Next.js app to accept questions and URLs
- Set up a Wing backend to handle all the requests
- Incorporate LangChain for AI-driven answers by scraping and analyzing documentation using RAG
- Complete the connection between the frontend input and AI-processed responses
Wing is an open-source framework for the cloud.
It allows you to create your application's infrastructure and code combined as a single unit and deploy them safely to your preferred cloud providers.
Wing gives you complete control over how your application's infrastructure is configured. In addition to its easy-to-learn programming language, Wing also supports TypeScript.
In this tutorial, we'll use TypeScript. So don't worry -- your JavaScript and React knowledge is more than enough to understand this tutorial.
Here, you'll create a simple form that accepts the documentation URL and the user's question and then returns a response based on the data available on the website.
First, create a folder containing two sub-folders: `frontend` and `backend`. The `frontend` folder contains the Next.js app, and the `backend` folder is for Wing.
mkdir qa-bot && cd qa-bot
mkdir frontend backend
Within the `frontend` folder, create a Next.js project by running the following code snippet:
cd frontend
npx create-next-app ./
Copy the code snippet below into the `app/page.tsx` file to create the form that accepts the user's question and the documentation URL:
"use client";
import { useState } from "react";
export default function Home() {
const [documentationURL, setDocumentationURL] = useState<string>("");
const [question, setQuestion] = useState<string>("");
const [disable, setDisable] = useState<boolean>(false);
const [response, setResponse] = useState<string | null>(null);
const handleUserQuery = async (e: React.FormEvent) => {
e.preventDefault();
setDisable(true);
console.log({ question, documentationURL });
};
return (
<main className='w-full md:px-8 px-3 py-8'>
<h2 className='font-bold text-2xl mb-8 text-center text-blue-600'>
Documentation Bot with Wing & LangChain
</h2>
<form onSubmit={handleUserQuery} className='mb-8'>
<label className='block mb-2 text-sm text-gray-500'>Webpage URL</label>
<input
type='url'
className='w-full mb-4 p-4 rounded-md border text-sm border-gray-300'
placeholder='https://www.winglang.io/docs/concepts/why-wing'
required
value={documentationURL}
onChange={(e) => setDocumentationURL(e.target.value)}
/>
<label className='block mb-2 text-sm text-gray-500'>
Ask any questions related to the page URL above
</label>
<textarea
rows={5}
className='w-full mb-4 p-4 text-sm rounded-md border border-gray-300'
placeholder='What is Winglang? OR Why should I use Winglang? OR How does Winglang work?'
required
value={question}
onChange={(e) => setQuestion(e.target.value)}
/>
<button
type='submit'
disabled={disable}
className='bg-blue-500 text-white px-8 py-3 rounded'
>
{disable ? "Loading..." : "Ask Question"}
</button>
</form>
{response && (
<div className='bg-gray-100 w-full p-8 rounded-sm shadow-md'>
<p className='text-gray-600'>{response}</p>
</div>
)}
</main>
);
}
The code snippet above displays a form that accepts the user's question and the documentation URL, and logs them to the console for now.
Perfect! You've completed the application's user interface. Next, let's set up the Wing backend.
Wing provides a CLI that enables you to perform various actions within your projects.
It also provides VSCode and IntelliJ extensions that enhance the developer experience with features like syntax highlighting, compiler diagnostics, code completion and snippets, and many others.
Before we proceed, stop your Next.js development server for now and install the Winglang CLI by running the code snippet below in your terminal.
npm install -g winglang@latest
Run the following code snippet to ensure that the Winglang CLI is installed and working as expected:
wing --version
Next, navigate to the `backend` folder and create an empty Wing TypeScript project. Ensure you select the `empty` template and TypeScript as the language.
wing new
Copy the code snippet below into the `backend/main.ts` file.
import { cloud, inflight, lift, main } from "@wingcloud/framework";
main((root, test) => {
const fn = new cloud.Function(
root,
"Function",
inflight(async () => {
return "hello, world";
})
);
});
The `main()` function serves as the entry point to Wing. It creates a cloud function and executes at compile time. The `inflight` function, on the other hand, runs at runtime and returns a "hello, world" string.
Start the Wing development server by running the code snippet below. It automatically opens the Wing Console in your browser at `http://localhost:3000`.
wing it
You've successfully installed Wing on your computer.
From the previous sections, you've created the Next.js frontend app within the `frontend` folder and the Wing backend within the `backend` folder.
In this section, you'll learn how to communicate and send data back and forth between the Next.js app and the Winglang backend.
First, install the Winglang React library within the backend folder by running the code below:
npm install @winglibs/react
Next, update the `main.ts` file as shown below:
import { main, cloud, inflight, lift } from "@wingcloud/framework";
import React from "@winglibs/react";
main((root, test) => {
const api = new cloud.Api(root, "api", { cors: true });
// create an API route
api.get(
"/test",
inflight(async () => {
return {
status: 200,
body: "Hello world",
};
})
);
// placeholder for the POST request endpoint
// connects to the Next.js project
const react = new React.App(root, "react", { projectPath: "../frontend" });
// an environment variable
react.addEnvironment("api_url", api.url);
});
The code snippet above creates an API endpoint (`/test`) that accepts GET requests and returns a "Hello world" text. The `main` function also connects to the Next.js project and adds the `api_url` as an environment variable.
The API URL contained in the environment variable enables us to send requests to the Wing API route. Now, how do we retrieve the API URL within the Next.js app and make these requests?
Update the `RootLayout` component within the Next.js `app/layout.tsx` file as shown below:
export default function RootLayout({
children,
}: Readonly<{
children: React.ReactNode;
}>) {
return (
<html lang='en'>
<head>
{/* --- Add this script tag --- */}
<script src='./wing.js' defer />
</head>
<body className={inter.className}>{children}</body>
</html>
);
}
Re-build the Next.js project by running `npm run build`.
Finally, start the Wing development server. It automatically starts the Next.js server, which can be accessed at `http://localhost:3001` in your browser.
You've successfully connected the Next.js app to Wing. You can also access data within the environment variables using `window.wingEnv.<attribute_name>`.
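For example, a small helper for reading the injected environment could look like this (a sketch; in the browser you would pass `window.wingEnv`, which the `wing.js` script populates before your app code runs):

```typescript
interface WingEnv {
  api_url: string;
}

// Reads the API URL from the injected Wing environment object.
// Throws a descriptive error if the wing.js script has not run yet.
function readApiUrl(env: WingEnv | undefined): string {
  if (!env || !env.api_url) {
    throw new Error("wingEnv not found; is the wing.js script loaded?");
  }
  return env.api_url;
}
```

Calling `readApiUrl(window.wingEnv)` then gives you the base URL to prefix your `fetch` requests with.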
In this section, you'll learn how to send requests to Wing, process these requests with LangChain and OpenAI, and display the results on the Next.js frontend.
First, let's update the Next.js `app/page.tsx` file to retrieve the API URL and send the user's data to a Wing API endpoint.
To do this, extend the JavaScript `window` object by adding the following code snippet at the top of the `page.tsx` file.
"use client";
import { useState } from "react";
interface WingEnv {
api_url: string;
}
declare global {
interface Window {
wingEnv: WingEnv;
}
}
Next, update the `handleUserQuery` function to send a POST request containing the user's question and the website's URL to a Wing API endpoint.
// sends data to the api url
const [response, setResponse] = useState<string | null>(null);
const handleUserQuery = async (e: React.FormEvent) => {
e.preventDefault();
setDisable(true);
try {
const request = await fetch(`${window.wingEnv.api_url}/api`, {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({ question, pageURL: documentationURL }),
});
const response = await request.text();
setResponse(response);
setDisable(false);
} catch (err) {
console.error(err);
setDisable(false);
}
};
Before you create the Wing endpoint that accepts the POST request, install the following packages within the `backend` folder:
npm install @langchain/community @langchain/openai langchain cheerio
Cheerio enables us to scrape the software documentation webpage, while the LangChain packages allow us to access its various functionalities.
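To build intuition for what the text splitter does before we wire it up, here is a simplified character-based version (an illustration only -- LangChain's `RecursiveCharacterTextSplitter` is smarter and prefers to split on separators such as paragraphs and sentences):

```typescript
// Naive fixed-size splitter with overlap: each chunk shares
// `chunkOverlap` characters with the previous one, so context
// isn't lost at chunk boundaries.
function splitWithOverlap(text: string, chunkSize: number, chunkOverlap: number): string[] {
  if (chunkOverlap >= chunkSize) {
    throw new Error("chunkOverlap must be smaller than chunkSize");
  }
  const chunks: string[] = [];
  const step = chunkSize - chunkOverlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

// splitWithOverlap("abcdefghij", 4, 2) → ["abcd", "cdef", "efgh", "ghij"]
```

The `chunkSize: 200, chunkOverlap: 20` settings we use below follow the same idea, just applied to the scraped documentation text.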
The LangChain OpenAI integration package uses the OpenAI language model; therefore, you'll need a valid API key. You can get yours from the OpenAI Developer's Platform.
Next, let's create the `/api` endpoint that handles incoming requests.
First, import the following into the `main.ts` file:
import { main, cloud, inflight, lift } from "@wingcloud/framework";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { createRetrievalChain } from "langchain/chains/retrieval";
import React from "@winglibs/react";
Add the code snippet below within the `main()` function to create the `/api` endpoint:
api.post(
"/api",
inflight(async (ctx, request) => {
// accept user inputs from Next.js
const { question, pageURL } = JSON.parse(request.body!);
// initialize OpenAI Chat for LLM interactions
const chatModel = new ChatOpenAI({
apiKey: "<YOUR_OPENAI_API_KEY>",
model: "gpt-3.5-turbo-1106",
});
// initialize OpenAI Embeddings for Vector Store data transformation
const embeddings = new OpenAIEmbeddings({
apiKey: "<YOUR_OPENAI_API_KEY>",
});
// creates a text splitter that splits the scraped document into chunks
const splitter = new RecursiveCharacterTextSplitter({
chunkSize: 200, // characters per chunk
chunkOverlap: 20,
});
// creates a document loader, loads, and scrapes the page
const loader = new CheerioWebBaseLoader(pageURL);
const docs = await loader.load();
// splits the document into chunks
const splitDocs = await splitter.splitDocuments(docs);
// creates a Vector store containing the split documents
const vectorStore = await MemoryVectorStore.fromDocuments(
splitDocs,
embeddings // transforms the data to the Vector Store format
);
// creates a document retriever that retrieves results that answer the user's questions
const retriever = vectorStore.asRetriever({
k: 1, // number of documents to retrieve (default is 2)
});
// creates a prompt template for the request
const prompt = ChatPromptTemplate.fromTemplate(`
Answer this question.
Context: {context}
Question: {input}
`);
// creates a chain containing the OpenAI chatModel and prompt
const chain = await createStuffDocumentsChain({
llm: chatModel,
prompt: prompt,
});
// creates a retrieval chain that combines the documents and the retriever function
const retrievalChain = await createRetrievalChain({
combineDocsChain: chain,
retriever,
});
// invokes the retrieval chain and returns the user's answer
const response = await retrievalChain.invoke({
input: `${question}`,
});
if (response) {
return {
status: 200,
body: response.answer,
};
}
return undefined;
})
);
The API endpoint accepts the user's question and the page URL from the Next.js application, initializes `ChatOpenAI` and `OpenAIEmbeddings`, loads the documentation page, and retrieves the answers to the user's query in the form of documents. It then splits the documents into chunks, saves the chunks in the `MemoryVectorStore`, and enables us to fetch answers to the question using LangChain retrievers.
From the code snippet above, the OpenAI API key is entered directly into the code; this could lead to security breaches, making the API key accessible to attackers. To prevent this data leak, Winglang allows you to save private keys and credentials in variables called `secrets`. When you create a secret, Wing saves this data in a `.env` file, ensuring it is secured and accessible.
Update the `main()` function to fetch the OpenAI API key from the Wing Secret.
main((root, test) => {
const api = new cloud.Api(root, "api", { cors: true });
// creates the secret variable
const secret = new cloud.Secret(root, "OpenAPISecret", {
name: "open-ai-key",
});
api.post(
"/api",
lift({ secret })
.grant({ secret: ["value"] })
.inflight(async (ctx, request) => {
const apiKey = await ctx.secret.value();
const chatModel = new ChatOpenAI({
apiKey,
model: "gpt-3.5-turbo-1106",
});
const embeddings = new OpenAIEmbeddings({
apiKey,
});
// other code snippets & configurations
})
);
const react = new React.App(root, "react", { projectPath: "../frontend" });
react.addEnvironment("api_url", api.url);
});
- The `secret` variable declares a name for the secret (the OpenAI API key).
- `lift().grant()` grants the API endpoint access to the secret value stored in the Wing Secret.
- The `inflight()` function accepts the context and request object as parameters, makes a request to LangChain, and returns the result.
- The handler retrieves the `apiKey` using the `ctx.secret.value()` function.

Finally, save the OpenAI API key as a secret by running the `wing secrets` command in your terminal.
Great, now our secrets are stored and we can interact with our application. Let's take a look at it in action!
Here is a brief demo:
Let's dig a little bit deeper into the Winglang docs to see what data our AI bot can extract.
So far, we have gone over the following:
Wing aims to bring back your creative flow and close the gap between imagination and creation. Another great advantage of Wing is that it is open source. So if you are looking to build distributed systems that leverage cloud services, or to contribute to the future of cloud development, Wing is your best choice.
Feel free to contribute to the GitHub repository, and share your thoughts with the team and the large community of developers.
The source code for this tutorial is available here.
Thank you for reading! π
Building Slack apps can be a daunting task for beginners. Between understanding the Slack API, setting up a server to handle incoming requests, and deploying the app to a cloud provider, there are many steps involved. For instance, setting up a Slack app locally on your machine is simple enough, but deploying it to a cloud provider can be challenging and might require re-architecting your app.
In this tutorial, I'm going to show you how to build a Slack app using Wing, making use of the Wing Console for local simulation and then deploying it to AWS with a single command!
Wing is an open-source programming language for the cloud that also provides a powerful and fun local development experience.
Wing combines infrastructure and runtime code in one language, enabling developers to stay in their creative flow, and to deliver better software, faster and more securely.
To take a closer look at Wing, check out our GitHub repository.
First, you need to install Wing on your machine (you'll need Node.js >= 20.x installed):
npm i -g winglang
You can check the CLI version like this (the minimum version required by this tutorial is 0.75.0):
wing --version
0.75.0
Ok, now that we have the Wing CLI installed we can use the Slack quick-start template to start building our Slack app.
$ mkdir my-slack-app
$ cd my-slack-app
$ wing new slack
This will create the following project structure:
my-slack-app/
βββ main.w
βββ package-lock.json
βββ package.json
Let's take a look at the `main.w` file, where we can see a code template for a simple Slack app that updates us on files uploaded to a bucket.
Note: your template will have more comments and explanations than the one below. I have removed them for brevity.
bring cloud;
bring slack;
let botToken = new cloud.Secret(name: "slack-bot-token");
let slackBot = new slack.App(token: botToken);
let inbox = new cloud.Bucket() as "file-process-inbox";
inbox.onCreate(inflight (key) => {
let channel = slackBot.channel("INBOX_PROCESSING_CHANNEL");
channel.post("New file: {key} was just uploaded to inbox!");
});
slackBot.onEvent("app_mention", inflight(ctx, event) => {
let eventText = event["event"]["text"].asStr();
log(eventText);
if eventText.contains("list inbox") {
let files = inbox.list();
let message = new slack.Message();
message.addSection({
fields: [
{
type: slack.FieldType.mrkdwn,
text: "*Current Inbox:*\n-{files.join("\n-")}"
}
]
});
ctx.channel.postMessage(message);
}
});
Note: Be sure to replace `INBOX_PROCESSING_CHANNEL` with the name of the Slack channel you want to post to.
We can see above the very first resource defined is a cloud.Secret
which is used to store the Slack bot token. This is a secure way to store sensitive information in Wing. So before we can get started we need to create a Slack app and get the bot token to use in our Wing app.
In the Slack API dashboard, create a new app: choose `Create from Scratch`, give your app a name, and select the workspace you want to deploy it to.
Your app will need the `app_mentions:read`, `chat:write`, and `chat:write.public` scopes; to add these, head over to the `OAuth & Permissions` section and add those permissions.
Once it's installed, copy the bot token and head back over to our Wing app.
Now that we have our bot token, we can add it to our application by running the `wing secrets` command and pasting the token when prompted:
β― wing secrets
1 secret(s) found
? Enter the secret value for slack-bot-token: [hidden]
Now that our bot token is stored, our application is ready to run!
To run the app locally we can use the Wing Console, which simulates the cloud environment on your local machine. To start the console run:
wing it
This will open a browser window showing the Wing Console. You should see something similar to this:
Now, in order to see the Slack bot in action, let's add some more code to our `main.w` file. We will add a function that makes a new file each time it is called.
The following code can be appended to the `main.w` file:
let counter = new cloud.Counter();
new cloud.Function(inflight () => {
let i = counter.inc();
inbox.put("file-{i}.txt", "Hello, Slack!");
}) as "Add File";
Once you save the file, the Wing Console will hot reload and you should now see a function resource we can play with that looks like this:
So now we can click on the `Add File` function and interact with it in the right side panel. Go ahead and invoke the function a few times.
And BAM!! You should now be seeing messages in your Slack channel every time you invoke the function!
One thing we will notice is that the Slack application is supposed to support listing the files in the inbox. This is done by mentioning the Slack app and saying `list inbox`.
If you try this now, you will see absolutely nothing happens :) -- this is because we need to enable events in our Slack app.
To do this, head over to the `Event Subscriptions` section in the Slack API dashboard and enable events. You will need to provide a URL for the Slack API to send events to. Luckily, Wing makes providing this URL easy with built-in support for tunneling.
To get the tunnel URL, go back to the Wing Console and select `Open a tunnel for this endpoint`. After a moment the icon will change to an eye with a slash through it; now we can copy the URL by right-clicking the endpoint and selecting `Copy URL`.
Next, let's head over to the `Event Subscriptions` section in the Slack API dashboard and paste the URL in the `Request URL` field. However, we need to append `slack/events` to the end of the URL, so it should look something like this:
This should only take a few seconds to verify, and once it's verified you can scroll down to the `Subscribe to Bot Events` section and add the `app_mention` event like so:
Lastly, don't forget to save your changes!
Now head back to your Slack channel where you received the messages earlier, mention your app, and say `list inbox`. Our Slack apps may have different names, so it won't be exactly the same as the example below:
You should now see a message in the channel with the files in the inbox!
Awesome! You have now built a Slack app using Wing and tested it locally. Now let's deploy it to AWS!
Before we start you will need the following to follow along:
Getting our code ready for AWS is as simple as running two commands. The first thing we need to do is prepare the `cloud.Secret` for the `tf-aws` platform. This is done by running the `wing secrets` command with the `--platform` flag:
β― wing secrets --platform tf-aws
1 secret(s) found
? Enter the secret value for slack-bot-token: [hidden]
Storing secrets in AWS Secrets Manager
Secret slack-bot-token does not exist, creating it.
1 secret(s) stored AWS Secrets Manager
This will result in the same prompt as before, but this time the secret will be stored in AWS Secrets Manager.
Next, let's compile the application for `tf-aws`:
β― wing compile --platform tf-aws
This will compile the application and generate all the necessary Terraform files and assets needed to deploy the application to AWS.
To deploy the code run the following commands:
terraform -chdir=./target/main.tfaws init
terraform -chdir=./target/main.tfaws apply -auto-approve
This will begin the deployment process and should only take about a minute to complete (barring internet connection issues). The result will show an output that contains a URL for the API Gateway endpoint and look something like this:
Apply complete! Resources: 30 added, 0 changed, 0 destroyed.
Outputs:
App_Api_Endpoint_Url_E233F0E8 = "https://p9y42fs0gg.execute-api.us-east-1.amazonaws.com/prod"
App_Slack_Request_Url_FF26641D = "https://p9y42fs0gg.execute-api.us-east-1.amazonaws.com/prod/slack/events"
The last step we need to do is to copy that `App_Slack_Request_Url` and paste it into the `Request URL` field in the `Event Subscriptions` section of the Slack API dashboard. This will tell Slack to send events to our deployed application's API Gateway endpoint.
You should see the URL verified in a few seconds.
DON'T FORGET TO SAVE YOUR CHANGES!
Let's first test adding a file to the inbox, which in AWS is an S3 bucket. Navigate to the S3 console in AWS and find the bucket whose name contains `file-process-inbox`; there will be a unique hash appended to the end of it. For example, my bucket was named: `file-process-inbox-c8419ccc-20240530151737187700000004`
Upload any file on your machine to this bucket, and you should see a message in your Slack channel!
Lastly, let's test the `list inbox` command.
And there you have it! You have successfully built a Slack app using Wing, tested it locally, and deployed it to AWS!
If you find yourself wanting to learn more about Wing, or had any issues with this tutorial, or just wanna chat, feel free to join our Discord server!
By the end of this article, you will build and deploy a ChatGPT Client using Wing and Next.js.
This application can run locally (in a local cloud simulator), or you can deploy it to your own cloud provider.
Building a ChatGPT client and deploying it to your own cloud infrastructure is a good way to ensure control over your data.
Deploying LLMs to your own cloud infrastructure provides you with both privacy and security for your project.
Sometimes, you may have concerns about your data being stored or processed on remote servers when using proprietary LLM platforms like OpenAI's ChatGPT, either due to the sensitivity of the data being fed into the platform or for other privacy reasons.
In this case, self-hosting an LLM to your cloud infrastructure or running it locally on your machine gives you greater control over the privacy and security of your data.
Wing is a cloud-oriented programming language that lets you build and deploy cloud-based applications without worrying about the underlying infrastructure. It simplifies the way you build on the cloud by allowing you to define and manage your cloud infrastructure and your application code within the same language. Wing is cloud agnostic - applications built with it can be compiled and deployed to various cloud platforms.
To follow along, you need to:
To get started, you need to install Wing on your machine. Run the following command:
npm install -g winglang
Confirm the installation by checking the version:
wing --version
mkdir assistant
cd assistant
npx create-next-app@latest frontend
mkdir backend && cd backend
wing new empty
We have successfully created our Wing and Next.js projects inside the assistant directory. The name of our ChatGPT Client is Assistant. Sounds cool, right?
The frontend and backend directories contain our Next.js and Wing apps, respectively. `wing new empty` creates three files: `package.json`, `package-lock.json`, and `main.w`. The latter is the app's entry point.
The Wing simulator allows you to run your code, write unit tests, and debug your code inside your local machine without needing to deploy to an actual cloud provider, helping you iterate faster.
Use the following command to run your Wing app locally:
wing it
Your Wing app will run on `localhost:3000`.
npm i @winglibs/openai @winglibs/react
Next, import these libraries into your `main.w` file. Let's also import all the other libraries we'll need.
bring react
bring cloud
bring ex
bring http
`bring` is the import statement in Wing. Think of it this way: Wing uses `bring` to achieve the same functionality as `import` in JavaScript.
`cloud` is Wing's cloud library. It exposes a standard interface for Cloud API, Bucket, Counter, Domain, Endpoint, Function, and many more cloud resources. `ex` is a standard library for interfacing with Tables and the cloud Redis database, and `http` is for calling different HTTP methods -- sending and retrieving information from remote resources.
We will use `gpt-4-turbo` for our app, but you can use any OpenAI model.
Create a class to initialize your OpenAI API. We want this to be reusable.
We will add a `personality` to our `Assistant` class so that we can dictate the personality of our AI assistant when passing a prompt to it.
let apiKeySecret = new cloud.Secret(name: "OAIAPIKey") as "OpenAI Secret";
class Assistant {
personality: str;
openai: openai.OpenAI;
new(personality: str) {
this.openai = new openai.OpenAI(apiKeySecret: apiKeySecret);
this.personality = personality;
}
pub inflight ask(question: str): str {
let prompt = "you are an assistant with the following personality: {this.personality}. {question}";
let response = this.openai.createCompletion(prompt, model: "gpt-4-turbo");
return response.trim();
}
}
Wing unifies infrastructure definition and application logic using the `preflight` and `inflight` concepts, respectively.
Preflight code (typically infrastructure definitions) runs once at compile time, while inflight code runs at runtime to implement your app's behavior.
Cloud storage buckets, queues, and API endpoints are some examples of preflight resources. You don't need to add the preflight keyword when defining preflight code; Wing knows this by default. But for an inflight block, you need to add the word "inflight" to it.
We have an inflight block in the code above. Inflight blocks are where you write asynchronous runtime code that can directly interact with resources through their inflight APIs.
Let's walk through how we will secure our API keys because we definitely want to take security into account.
Let's create a .env file in our backend's root and pass in our API Key:
OAIAPIKey = Your_OpenAI_API_key
We can test our OpenAI API keys locally by referencing our .env file; then, since we are planning to deploy to AWS, we will walk through setting up the AWS Secrets Manager.
First, let's head over to AWS and sign into the Console. If you don't have an account, you can create one for free.
Navigate to the Secrets Manager and let's store our API key values.
We have stored our API key in a cloud secret named OAIAPIKey. Copy your key, and we will jump over to the terminal and connect to our secret that is now stored in the AWS platform.
wing secrets
Now paste in your API Key as the value in the terminal. Your keys are now properly stored and we can start interacting with our app.
Storing your AI's responses in the cloud gives you control over your data. It resides on your own infrastructure, unlike proprietary platforms like ChatGPT, where your data lives on third-party servers that you don't have control over. You can also retrieve these responses whenever you need them.
Let's create another class that uses the Assistant class to pass in our AI's personality and prompt. We will also store each model's responses as txt files in a cloud bucket.
let counter = new cloud.Counter();
class RespondToQuestions {
id: cloud.Counter;
gpt: Assistant;
store: cloud.Bucket;
new(store: cloud.Bucket) {
this.gpt = new Assistant("Respondent");
this.id = new cloud.Counter() as "NextID";
this.store = store;
}
pub inflight sendPrompt(question: str): str {
let reply = this.gpt.ask("{question}");
let n = this.id.inc();
this.store.put("message-{n}.original.txt", reply);
return reply;
}
}
We gave our Assistant the personality "Respondent." We want it to respond to questions. You could also let the user on the frontend dictate this personality when sending in their prompts.
Every time it generates a response, the counter increments, and the value of the counter is passed into the n variable used to store the model's responses in the cloud. However, what we really want is to create a database to store both the user prompts coming from the frontend and our model's responses.
Let's define our database.
Wing has ex.Table built in - a NoSQL database to store and query data.
let db = new ex.Table(
name: "assistant",
primaryKey: "id",
columns: {
"question" => ex.ColumnType.STRING,
"answer" => ex.ColumnType.STRING
}
);
We added two columns in our database definition - the first to store user prompts and the second to store the model's responses.
We want to be able to send and receive data in our backend. Let's create POST and GET routes.
let api = new cloud.Api({ cors: true });
api.post("/assistant", inflight (request) => {
// POST request logic goes here
});
api.get("/assistant", inflight () => {
// GET request logic goes here
});
let store = new cloud.Bucket(); // bucket that stores the model's responses
let myAssistant = new RespondToQuestions(store) as "Helpful Assistant";
api.post("/assistant", inflight (request) => {
let prompt = request.body;
let response = myAssistant.sendPrompt(Json.stringify(prompt));
let id = counter.inc();
// Insert prompt and response in the database
db.insert("{id}", Json { question: prompt, answer: response });
return cloud.ApiResponse {
status: 200
};
});
In the POST route, we want to pass the user prompt received from the frontend into the model and get a response. Both prompt and response will be stored in the database. cloud.ApiResponse allows you to send a response to a user's request.
Add the logic to retrieve the database items when the frontend makes a GET request.
api.get("/assistant", inflight () => {
let questionsAndAnswers = db.list();
return cloud.ApiResponse {
body: Json.stringify(questionsAndAnswers),
status: 200
};
});
Our backend is ready. Let's test it out in the local cloud simulator.
Run wing it.
Let's go over to localhost:3000 and ask our Assistant a question.
Both our question and the Assistant's response have been saved to the database. Take a look.
We need to expose the API URL of our backend to our Next frontend. This is where the react library installed earlier comes in handy.
let website = new react.App({
projectPath: "../frontend",
localPort: 4000
});
website.addEnvironment("API_URL", api.url);
Add the following to the layout.js
of your Next app.
import { Inter } from "next/font/google";
import "./globals.css";
const inter = Inter({ subsets: ["latin"] });
export const metadata = {
title: "Create Next App",
description: "Generated by create next app",
};
export default function RootLayout({ children }) {
return (
<html lang="en">
<head>
<script src="./wing.js" defer></script>
</head>
<body className={inter.className}>{children}</body>
</html>
);
}
We now have access to API_URL in our Next application.
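As a quick illustration of how frontend code can consume this value, here is a small TypeScript helper. The `getApiUrl` name and the localhost fallback are illustrative assumptions, not part of the Wing react library; `window.wingEnv` is populated by the wing.js script added in the layout.

```typescript
// Hypothetical helper for reading the URL injected by wing.js.
// `window.wingEnv` is set by the <script src="./wing.js"> tag in layout.js;
// the function name and the localhost fallback are assumptions for this sketch.
type WingEnv = { API_URL?: string };

export function getApiUrl(env: WingEnv | undefined, fallback = "http://localhost:3000"): string {
  return env?.API_URL ?? fallback;
}

// Browser usage: const apiUrl = getApiUrl((window as any).wingEnv);
```

Guarding with a fallback keeps the page working during local development before wing.js has injected the environment.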
Let's implement the frontend logic to call our backend.
import { useEffect, useState, useCallback } from 'react';
import axios from 'axios';
function App() {
const [isThinking, setIsThinking] = useState(false);
const [input, setInput] = useState("");
const [allInteractions, setAllInteractions] = useState([]);
const retrieveAllInteractions = useCallback(async (api_url) => {
await axios({
method: "GET",
url: `${api_url}/assistant`,
}).then(res => {
setAllInteractions(res.data)
})
}, [])
const handleSubmit = useCallback(async (e) => {
e.preventDefault();
if (input.trim() === "") {
alert("Chat cannot be empty");
return;
}
setIsThinking(true);
await axios({
method: "POST",
url: `${window.wingEnv.API_URL}/assistant`,
headers: {
"Content-Type": "application/json"
},
data: input
});
setInput("");
setIsThinking(false);
await retrieveAllInteractions(window.wingEnv.API_URL);
}, [input, retrieveAllInteractions]);
useEffect(() => {
if (typeof window !== "undefined") {
retrieveAllInteractions(window.wingEnv.API_URL);
}
}, []);
// Here you would return your component's JSX
return (
// JSX content goes here
);
}
export default App;
The retrieveAllInteractions function fetches all the questions and answers in the backend's database. The handleSubmit function sends the user's prompt to the backend.
Let's add the JSX implementation.
import { useEffect, useState } from 'react';
import axios from 'axios';
import './App.css';
function App() {
// ...
return (
<div className="container">
<div className="header">
<h1>My Assistant</h1>
<p>Ask anything...</p>
</div>
<div className="chat-area">
<div className="chat-area-content">
{allInteractions.map((chat) => (
<div key={chat.id} className="user-bot-chat">
<p className='user-question'>{chat.question}</p>
<p className='response'>{chat.answer}</p>
</div>
))}
<p className={isThinking ? "thinking" : "notThinking"}>Generating response...</p>
</div>
<div className="type-area">
<input
type="text"
placeholder="Ask me any question"
value={input}
onChange={(e) => setInput(e.target.value)}
/>
<button onClick={handleSubmit}>Send</button>
</div>
</div>
</div>
);
}
export default App;
Navigate to your backend directory and run your Wing app locally using the following command:
cd ~/assistant/backend
wing it
Also run your Next.js frontend:
cd ~/assistant/frontend
npm run dev
Let's take a look at our application.
Let's ask our AI Assistant a couple of developer questions from our Next App.
We've seen how our app can work locally. Wing also allows you to deploy to any cloud provider, including AWS. To deploy to AWS, you need Terraform and the AWS CLI configured with your credentials.
Compile to Terraform/AWS using the tf-aws platform. The command instructs the compiler to use Terraform as the provisioning engine to bind all our resources to the default set of AWS resources.
cd ~/assistant/backend
wing compile --platform tf-aws main.w
cd ./target/main.tfaws
terraform init
terraform apply
Note: terraform apply takes some time to complete.
You can find the complete code for this tutorial here.
As I mentioned earlier, we should all be concerned about our app's security. Building your own ChatGPT client and deploying it to your cloud infrastructure gives your app some very good safeguards.
We have demonstrated in this tutorial how Wing provides a straightforward approach to building scalable cloud applications without worrying about the underlying infrastructure.
If you are interested in building more cool stuff, Wing has an active community of developers, partnering in building a vision for the cloud. We'd love to see you there.
Just head over to our Discord and say hi!
This is an experience report on the initial steps of implementing a CRUD (Create, Read, Update, Delete) REST API in Winglang, with a focus on addressing typical production environment concerns such as secure authentication, observability, and error handling. It highlights how Winglang's distinctive features, particularly the separation of Preflight cloud resource configuration from Inflight API request handling, can facilitate more efficient integration of essential middleware components like logging and error reporting. This balance aims to reduce overall complexity and minimize the resulting code size. The applicability of various design patterns, including Pipe-and-Filters, Decorator, and Factory, is evaluated. Finally, future directions for developing a fully-fledged middleware library for Winglang are identified.
In my previous publication, I reported on my findings about the possible implementation of the Hexagonal Ports and Adapters pattern in the Winglang programming language using the simplest possible GreetingService
sample application. The main conclusions from this evaluation were:
Initially, I planned to proceed with exploring possible ways of implementing a more general Staged Event-Driven Architecture (SEDA) architecture in Winglang. However, using the simplest possible GreetingService
as an example left some very important architectural questions unanswered. Therefore I decided to explore in more depth what is involved in implementing a typical Create/Retrieve/Update/Delete (CRUD) service exposing standardized REST API and addressing typical production environment concerns such as secure authentication, observability, error handling, and reporting.
To prevent domain-specific complexity from distorting the focus on important architectural considerations, I chose the simplest possible TODO service with four operations:
Using this simple example allowed me to evaluate many important architectural options and to come up with an initial prototype of a middleware library for the Winglang programming language compatible with and potentially surpassing popular libraries for mainstream programming languages, such as Middy for Node.js middleware engine for AWS Lambda and AWS Power Tools for Lambda.
Unlike my previous publication, I will not describe the step-by-step process of how I arrived at the current arrangement. Software architecture and design processes are rarely linear, especially beyond beginner-level tutorials. Instead, I will describe a starting-point solution which, while far from final, is representative enough to sense the direction in which the final framework might eventually evolve. I will outline the requirements I wanted to address, describe the current architectural decisions, and highlight directions for future research.
Developing a simple, prototype-level TODO REST API service in Winglang is indeed very easy, and could be done within half an hour, using the Winglang Playground:
To keep things simple, I put everything in one source file, even though it could, of course, be split into Core, Ports, and Adapters. Let's look at the major parts of this sample.
First, we need to define the cloud resources, aka Ports, that we are going to use. This is done as follows:
bring ex;
bring cloud;
let tasks = new ex.Table(
name: "Tasks",
columns: {
"id" => ex.ColumnType.STRING,
"title" => ex.ColumnType.STRING
},
primaryKey: "id"
);
let counter = new cloud.Counter();
let api = new cloud.Api();
let path = "/tasks";
Here we define a Winglang Table to keep TODO tasks with only two columns: task ID and title. To keep things simple, we implement the task ID as an auto-incrementing number using the Winglang Counter resource. And finally, we expose the TODO service API using the Winglang Api resource.
Now, we are going to define a separate handler function for each of the four REST API requests. Getting a list of all tasks is implemented as:
api.get(
path,
inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
let rows = tasks.list();
let var result = MutArray<Json>[];
for row in rows {
result.push(row);
}
return cloud.ApiResponse{
status: 200,
headers: {
"Content-Type" => "application/json"
},
body: Json.stringify(result)
};
});
Creating a new task record is implemented as:
api.post(
path,
inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
let id = "{counter.inc()}";
if let task = Json.tryParse(request.body) {
let record = Json{
id: id,
title: task.get("title").asStr()
};
tasks.insert(id, record);
return cloud.ApiResponse {
status: 200,
headers: {
"Content-Type" => "application/json"
},
body: Json.stringify(record)
};
} else {
return cloud.ApiResponse {
status: 400,
headers: {
"Content-Type" => "text/plain"
},
body: "Bad Request"
};
}
});
Updating an existing task is implemented as:
api.put(
"{path}/:id",
inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
let id = request.vars.get("id");
if let task = Json.tryParse(request.body) {
let record = Json{
id: id,
title: task.get("title").asStr()
};
tasks.update(id, record);
return cloud.ApiResponse {
status: 200,
headers: {
"Content-Type" => "application/json"
},
body: Json.stringify(record)
};
} else {
return cloud.ApiResponse {
status: 400,
headers: {
"Content-Type" => "text/plain"
},
body: "Bad Request"
};
}
});
Finally, deleting an existing task is implemented as:
api.delete(
"{path}/:id",
inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
let id = request.vars.get("id");
tasks.delete(id);
return cloud.ApiResponse {
status: 200,
headers: {
"Content-Type" => "text/plain"
},
body: ""
};
});
We could play with this API using the Winglang Simulator:
We could write one or more tests to validate the API automatically:
bring http;
bring expect;
let url = "{api.url}{path}";
test "run simple crud scenario" {
let r1 = http.get(url);
expect.equal(r1.status, 200);
let r1_tasks = Json.parse(r1.body);
expect.nil(r1_tasks.tryGetAt(0));
let r2 = http.post(url, body: Json.stringify(Json{title: "First Task"}));
expect.equal(r2.status, 200);
let r2_task = Json.parse(r2.body);
expect.equal(r2_task.get("title").asStr(), "First Task");
let id = r2_task.get("id").asStr();
let r3 = http.put("{url}/{id}", body: Json.stringify(Json{title: "First Task Updated"}));
expect.equal(r3.status, 200);
let r3_task = Json.parse(r3.body);
expect.equal(r3_task.get("title").asStr(), "First Task Updated");
let r4 = http.delete("{url}/{id}");
expect.equal(r4.status, 200);
}
Last but not least, this service can be deployed on any supported cloud platform using the Winglang CLI. The code for the TODO Service is completely cloud-neutral, ensuring compatibility across different platforms without modification.
Should there be a need to expand the task details or link them to other system entities, the approach remains largely unaffected, provided the operations adhere to straightforward CRUD logic and can be executed within a 29-second timeout limit.
This example unequivocally demonstrates that the Winglang programming environment is a top-notch tool for the rapid development of such services. If this is all you need, you need not read further. What follows is a kind of White Rabbit hole of multiple non-functional concerns that need to be addressed before we can even start talking about serious production deployment.
You are warned. The forthcoming text is not for everybody, but rather for seasoned cloud software architects.
TODO sample service implementation presented above belongs to the so-called Headless REST API. This approach focuses on core functionality, leaving user experience design to separate layers. This is often implemented as Client-Side Rendering or Server Side Rendering with an intermediate Backend for Frontend tier, or by using multiple narrow-focused REST API services functioning as GraphQL Resolvers. Each approach has its merits for specific contexts.
I advocate for supporting HTTP Content Negotiation and providing a minimal UI for direct API interaction via a browser. While tools like Postman or Swagger can facilitate API interaction, experiencing the API as an end user offers invaluable insights. This basic UI, or what I refer to as an "engineering UI," often suffices.
In this context, anything beyond simple Server-Side Rendering deployed alongside headless protocol serialization, such as JSON, might be unnecessarily complex. While Winglang provides support for the Website cloud resource for web client assets (HTML pages, JavaScript, CSS), utilizing it for such purposes introduces additional complexity and cost.
A simpler solution would involve basic HTML templates, enhanced with HTMX's features and a CSS framework like Bootstrap. Currently, Winglang does not natively support HTML templates, but for basic use cases, this can be easily managed with TypeScript. For instance, rendering a single task line could be implemented as follows:
import { TaskData } from "core/task";
export function formatTask(path: string, task: TaskData): string {
return `
<li class="list-group-item d-flex justify-content-between align-items-center">
<form hx-put="${path}/${task.taskID}" hx-headers='{"Accept": "text/plain"}' id="${task.taskID}-form">
<span class="task-text">${task.title}</span>
<input
type="text"
name="title"
class="form-control edit-input"
style="display: none;"
value="${task.title}">
</form>
<div class="btn-group">
<button class="btn btn-danger btn-sm delete-btn"
hx-delete="${path}/${task.taskID}"
hx-target="closest li"
hx-swap="outerHTML"
hx-headers='{"Accept": "text/plain"}'>✕</button>
<button class="btn btn-primary btn-sm edit-btn">✎</button>
</div>
</li>
`;
}
That would result in the following UI screen:
Not super-fancy, but good enough for demo purposes.
Even purely Headless REST APIs require strong usability considerations. API calls should follow REST conventions for HTTP methods, URL formats, and payloads. Proper documentation of HTTP methods and potential error handling are crucial. Client and server errors need to be logged, converted into appropriate HTTP status codes, and accompanied by clear explanation messages in the response body.
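To make the error-to-status conversion concrete, here is a minimal TypeScript sketch. The tag names mirror common exception categories (and the Winglang exception classes shown later in this article), but the exact mapping is an assumption for illustration, not something the article prescribes:

```typescript
// Illustrative mapping from error-category tags to HTTP status codes.
// The tags and the mapping are assumptions for this sketch.
const STATUS_BY_TAG: Record<string, number> = {
  KeyError: 404,            // requested resource not found
  ValueError: 400,          // malformed client payload
  AuthenticationError: 401, // no valid credentials
  AuthorizationError: 403,  // valid user, insufficient rights
  InternalError: 500,       // server-side failure
};

export function toHttpStatus(tag: string): number {
  // Unknown errors map to 500 so no internal detail leaks to the client.
  return STATUS_BY_TAG[tag] ?? 500;
}
```

Centralizing this mapping keeps handlers free of status-code logic and makes the "don't educate attackers" policy a one-line default.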
The need to handle multiple request parsers and response formatters based on content negotiation using Content-Type and Accept headers in HTTP requests led me to the following design approach:
Adhering to the Dependency Inversion Principle ensures that the system Core is completely isolated from Ports and Adapters. While there might be an inclination to encapsulate the Core within a generic CRUD framework, defined by a ResourceData
type, I advise caution. This recommendation stems from several considerations:
Another option would be to abandon the Core data types definition and rely entirely on untyped JSON interfaces, akin to a Lisp-like programming style. However, given Winglang's strong typing, I decided against this approach.
Overall, the TodoHandler is quite simple and easy to understand:
bring "./data.w" as data;
bring "./parser.w" as parser;
bring "./formatter.w" as formatter;
pub class TodoHandler {
_path: str;
_parser: parser.TodoParser;
_tasks: data.ITaskDataRepository;
_formatter: formatter.ITodoFormatter;
new(
path: str,
tasks_: data.ITaskDataRepository,
parser: parser.TodoParser,
formatter: formatter.ITodoFormatter,
) {
this._path = path;
this._tasks = tasks_;
this._parser = parser;
this._formatter = formatter;
}
pub inflight getHomePage(user: Json, outFormat: str): str {
let userData = this._parser.parseUserData(user);
return this._formatter.formatHomePage(outFormat, this._path, userData);
}
pub inflight getAllTasks(user: Json, query: Map<str>, outFormat: str): str {
let userData = this._parser.parseUserData(user);
let tasks = this._tasks.getTasks(userData.userID);
return this._formatter.formatTasks(outFormat, this._path, tasks);
}
pub inflight createTask(
user: Json,
body: str,
inFormat: str,
outFormat: str
): str {
let taskData = this._parser.parsePartialTaskData(user, body);
this._tasks.addTask(taskData);
return this._formatter.formatTasks(outFormat, this._path, [taskData]);
}
pub inflight replaceTask(
user: Json,
id: str,
body: str,
inFormat: str,
outFormat: str
): str {
let taskData = this._parser.parseFullTaskData(user, id, body);
this._tasks.replaceTask(taskData);
return taskData.title;
}
pub inflight deleteTask(user: Json, id: str): str {
let userData = this._parser.parseUserData(user);
this._tasks.deleteTask(userData.userID, num.fromStr(id));
return "";
}
}
As you might notice, the code structure deviates slightly from the design diagram presented earlier. These minor adaptations are normal in software design; new insights emerge throughout the process, necessitating adjustments. The most notable difference is the user: Json argument defined for every function. We'll discuss the purpose of this argument in the next section.
Exposing the TODO service to the internet without security measures is a recipe for disaster. Hackers, bored teens, and professional attackers will quickly target its public IP address. The rule is very simple:
any public interface must be protected unless exposed for a very short testing period. Security is non-negotiable.
Conversely, overloading a service with every conceivable security measure can lead to prohibitively high operational costs. As I've argued in previous writings, making architects accountable for the costs of their designs might significantly reshape their approach:
If cloud solution architects were responsible for the costs incurred by their systems, it could fundamentally change their design philosophy.
What we need is reasonable protection of the service API - not less, but not more either. Since I wanted to experiment with a full-stack Server-Side Rendering UI, my natural choice was to enforce user login at the beginning, to produce a JWT token with a reasonable expiration, say one hour, and then to use it for authentication of all forthcoming HTTP requests.
Because of the Server-Side Rendering specifics, using an HTTP Cookie to carry over the session token was a natural choice (to be honest, suggested by ChatGPT). For the Client-Side Rendering option, I might need to use a Bearer Token delivered via the HTTP request's Authorization header field.
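The two transports can be accepted behind one function, as in this hedged TypeScript sketch; the "session" cookie name and the header shapes are assumptions, not part of the article's actual implementation:

```typescript
// Illustrative sketch: accept a session token from either transport.
// The "session" cookie name is an assumption for this example.
export function extractToken(headers: Record<string, string>): string | undefined {
  // Client-Side Rendering flow: Authorization: Bearer <token>
  const auth = headers["authorization"];
  if (auth?.startsWith("Bearer ")) {
    return auth.slice("Bearer ".length);
  }
  // Server-Side Rendering flow: token carried in an HTTP Cookie
  const cookie = headers["cookie"];
  const match = cookie?.match(/(?:^|;\s*)session=([^;]+)/);
  return match?.[1];
}
```

Keeping extraction behind one function means the downstream JWT validation filter never needs to know which rendering mode produced the request.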
With session tokens now incorporating user information, I could aggregate TODO tasks by the user. Although there are numerous methods to integrate session data, including user details into the domain, I chose to focus on userID
and fullName
attributes for this study.
For user authentication, several options are available, especially within the AWS ecosystem:
As an independent software technology researcher, I gravitate towards the simplest solutions with the fewest components, which also address daily operational needs. Leveraging the AWS Identity Center, as detailed in a separate publication, was a logical step due to my existing multi-account/multi-user setup.
After integration, my AWS Identity Center main screen looks like this:
That means that in my system, users, myself, or guests, could use the same AWS credentials for development, administration, and sample or housekeeping applications.
To integrate with AWS Identity Center, I needed to register my application and provide a new endpoint implementing the so-called "Assertion Consumer Service URL" (ACS URL). This publication is not about the SAML standard. It would suffice to say that with ChatGPT and Google search assistance, it could be done. Some useful information can be found here. What came in very handy was the TypeScript samlify library, which encapsulates the whole heavy lifting of the SAML Login Response validation process.
What I'm mostly interested in is how this variability point affects the overall system design. Let's try to visualize it using a semi-formal data flow notation:
While it might seem unusual, this representation reflects with high fidelity how data flows through the system. What we see here is a special instance of the famous Pipe-and-Filters architectural pattern.
Here, data flows through a pipeline, and each filter performs one well-defined task, in fact following the Single Responsibility Principle. Such an arrangement allows me to replace filters should I want to switch to simple Basic HTTP Authentication, to use the HTTP Authorization header, or to apply a different secret-management policy for JWT token building and validation.
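The idea can be sketched in a few lines of TypeScript. All names here are illustrative; the point is only that composing small filters lets you swap the authentication step without touching the rest of the pipeline:

```typescript
// Minimal Pipe-and-Filters sketch; names and shapes are assumptions.
// Each filter does one job; replacing `basicAuth` with a SAML or JWT
// filter changes the authentication strategy without touching the rest.
type Ctx = { headers: Record<string, string>; user?: string; body?: string };
type Filter = (ctx: Ctx) => Ctx;

// Compose filters left to right into a single handler.
const pipeline = (...filters: Filter[]) => (ctx: Ctx): Ctx =>
  filters.reduce((acc, f) => f(acc), ctx);

// Toy Basic-auth filter: trusts a pre-decoded credential for brevity.
const basicAuth: Filter = (ctx) => {
  const cred = ctx.headers["authorization"];
  if (!cred?.startsWith("Basic ")) {
    throw new Error("401 Unauthorized");
  }
  return { ...ctx, user: cred.slice("Basic ".length) };
};

// Toy parse filter: normalizes the request body.
const normalizeBody: Filter = (ctx) => ({ ...ctx, body: (ctx.body ?? "").trim() });

export const handle = pipeline(basicAuth, normalizeBody);
```

Swapping strategies is then a matter of composing a different filter list at build time, which is exactly what Winglang's preflight stage makes cheap.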
If we zoom into Parse and Format filters, we will see a typical dispatch logic using Content-Type and Accept HTTP headers respectively:
Many engineers confuse design and architectural patterns with specific implementations. This misses the essence of what patterns are meant to achieve.
Patterns are about identifying a suitable approach to balance conflicting forces with minimal intervention. In the context of building cloud-based software systems, where security is paramount but should not be overpriced in terms of cost or complexity, this understanding is crucial. The Pipe-and-Filters design pattern helps with addressing such design challenges effectively. It allows for modularization and flexible configuration of processing steps, which in this case, relate to authentication mechanisms.
For instance, while robust security measures like SAML authentication are necessary for production environments, they may introduce unnecessary complexity and overhead in scenarios such as automated end-to-end testing. Here, simpler methods like Basic HTTP Authentication may suffice, providing a quick and cost-effective solution without compromising the system's overall integrity. The goal is to maintain the system's core functionality and code base uniformity while varying the authentication strategy based on the environment or specific requirements.
Winglang's unique Preflight compilation feature facilitates this by allowing for configuration adjustments at the build stage, eliminating runtime overhead. This capability presents a significant advantage of Winglang-based solutions over other middleware libraries, such as Middy and AWS Power Tools for Lambda, by offering a more efficient and flexible approach to managing the authentication pipeline.
Implementing Basic HTTP Authentication, therefore, only requires modifying a single filter within the authentication pipeline, leaving the remainder of the system unchanged:
Due to some technical limitations, it's currently not possible to implement Pipe-and-Filters in Winglang directly, but it can be quite easily simulated by a combination of the Decorator and Factory design patterns. How exactly, we will see shortly. Now, let's proceed to the next topic.
In this publication, I'm not going to cover all aspects of production operation. The topic is large and deserves a separate publication of its own. Below is what I consider the bare minimum:
To operate a service, we need to know what happens with it, especially when something goes wrong. This is achieved via a Structured Logging mechanism. At the moment, Winglang provides only a basic log(str) function. For my investigation, I needed more, so I implemented a poor man's structured logging class:
// A poor man's implementation of a configurable Logger
// Similar to those of Python and TypeScript
bring cloud;
bring "./dateTime.w" as dateTime;
pub enum logging {
TRACE,
DEBUG,
INFO,
WARNING,
ERROR,
FATAL
}
//This is just enough configuration
//A serious review including compliance
//with OpenTelemetry and privacy regulations
//Is required. The main insight:
//Serverless Cloud logging is substantially
//different
pub interface ILoggingStrategy {
inflight timestamp(): str;
inflight print(message: Json): void;
}
pub class DefaultLoggerStrategy impl ILoggingStrategy {
pub inflight timestamp(): str {
return dateTime.DateTime.toUtcString(std.Datetime.utcNow());
}
pub inflight print(message: Json): void {
log("{message}");
}
}
//TBD: probably should go into a separate module
bring expect;
bring ex;
pub class MockLoggerStrategy impl ILoggingStrategy {
_name: str;
_counter: cloud.Counter;
_messages: ex.Table;
new(name: str?) {
this._name = name ?? "MockLogger";
this._counter = new cloud.Counter();
this._messages = new ex.Table(
name: "{this._name}Messages",
columns: Map<ex.ColumnType>{
"id" => ex.ColumnType.STRING,
"message" => ex.ColumnType.STRING
},
primaryKey: "id"
);
}
pub inflight timestamp(): str {
return "{this._counter.inc(1, this._name)}";
}
pub inflight expect(messages: Array<Json>): void {
for message in messages {
this._messages.insert(
message.get("timestamp").asStr(),
Json{ message: "{message}"}
);
}
}
pub inflight print(message: Json): void {
let expected = this._messages.get(
message.get("timestamp").asStr()
).get("message").asStr();
expect.equal("{message}", expected);
}
}
pub class Logger {
_labels: Array<str>;
_levels: Array<logging>;
_level: num;
_service: str;
_strategy: ILoggingStrategy;
new (level: logging, service: str, strategy: ILoggingStrategy?) {
this._labels = [
"TRACE",
"DEBUG",
"INFO",
"WARNING",
"ERROR",
"FATAL"
];
this._levels = Array<logging>[
logging.TRACE,
logging.DEBUG,
logging.INFO,
logging.WARNING,
logging.ERROR,
logging.FATAL
];
this._level = this._levels.indexOf(level);
this._service = service;
this._strategy = strategy ?? new DefaultLoggerStrategy();
}
pub inflight log(level_: logging, func: str, message: Json): void {
let level = this._levels.indexOf(level_);
let label = this._labels.at(level);
if this._level <= level {
this._strategy.print(Json {
timestamp: this._strategy.timestamp(),
level: label,
service: this._service,
function: func,
message: message
});
}
}
pub inflight trace(func: str, message: Json): void {
this.log(logging.TRACE, func,message);
}
pub inflight debug(func: str, message: Json): void {
this.log(logging.DEBUG, func, message);
}
pub inflight info(func: str, message: Json): void {
this.log(logging.INFO, func, message);
}
pub inflight warning(func: str, message: Json): void {
this.log(logging.WARNING, func, message);
}
pub inflight error(func: str, message: Json): void {
this.log(logging.ERROR, func, message);
}
pub inflight fatal(func: str, message: Json): void {
this.log(logging.FATAL, func, message);
}
}
There is nothing spectacular here and, as I wrote in the comments, a cloud-based logging system requires a serious revision. Still, it's enough for the current investigation. I'm fully convinced that logging is an integral part of any service specification and has to be tested with the same rigor as core functionality. For that purpose, I developed a simple mechanism to mock logs and check them against expectations.
For a REST API CRUD service, we need to log at least three types of things:
In addition, depending on needs, the original error message might need to be converted into a standard one, for example in order not to educate attackers.
How much detail, if any, to log depends on multiple factors: deployment target, type of request, specific user, type of error, statistical sampling, etc. In development and test mode, we will normally opt for logging almost everything and returning the original error message directly to the client screen to ease debugging. In production mode, we might opt to remove some sensitive data because of regulatory requirements, to return a general error message, such as "Bad Request", without any details, and to apply only statistical sample logging for particular types of requests to save cost.
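These mode-dependent choices can be captured in a small policy object, sketched here in TypeScript. The mode names and fields are assumptions for illustration, not part of any Winglang library discussed in this article:

```typescript
// Illustrative policy selector: mode names, fields, and the 10% sample
// rate are assumptions for this sketch.
type Mode = "dev" | "test" | "prod";

interface ErrorPolicy {
  logEverything: boolean;       // log full request/response detail
  exposeOriginalError: boolean; // return the raw message to the client
  sampleRate: number;           // fraction of requests to log
}

export function policyFor(mode: Mode): ErrorPolicy {
  if (mode === "prod") {
    // Production: generic messages, statistical sampling to save cost.
    return { logEverything: false, exposeOriginalError: false, sampleRate: 0.1 };
  }
  // Dev and test: maximize debuggability.
  return { logEverything: true, exposeOriginalError: true, sampleRate: 1 };
}
```

Resolving the policy once at startup (or, in Winglang, at preflight time) keeps the per-request filters free of environment checks.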
Flexible logging configuration was achieved by injecting four additional filters in every request handling pipeline:
This structure, although not an ultimate one, provides enough flexibility to implement a wide range of logging and error-handling strategies depending on the service and its deployment target specifics.
As with logs, Winglang at the moment provides only a basic throw <str> operator, so I decided to implement my own version of a poor man's structured exceptions:
// A poor man's structured exceptions
pub inflight class Exception {
pub tag: str;
pub message: str?;
new(tag: str, message: str?) {
this.tag = tag;
this.message = message;
}
pub raise() {
let err = Json.stringify(this);
throw err;
}
pub static fromJson(err: str): Exception {
let je = Json.parse(err);
return new Exception(
je.get("tag").asStr(),
je.tryGet("message")?.tryAsStr()
);
}
pub toJson(): Json { //for logging
return Json{tag: this.tag, message: this.message};
}
}
// Standard exceptions, similar to those of Python
pub inflight class KeyError extends Exception {
new(message: str?) {
super("KeyError", message);
}
}
pub inflight class ValueError extends Exception {
new(message: str?) {
super("ValueError", message);
}
}
pub inflight class InternalError extends Exception {
new(message: str?) {
super("InternalError", message);
}
}
pub inflight class NotImplementedError extends Exception {
new(message: str?) {
super("NotImplementedError", message);
}
}
//Two more HTTP-specific, yet useful
pub inflight class AuthenticationError extends Exception {
//aka HTTP 401 Unauthorized
new(message: str?) {
super("AuthenticationError", message);
}
}
pub inflight class AuthorizationError extends Exception {
//aka HTTP 403 Forbidden
new(message: str?) {
super("AuthorizationError", message);
}
}
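For readers more comfortable with TypeScript, here is an illustrative analogue of the same pattern. Winglang can only throw strings, so the Wing code stringifies the exception; this sketch smuggles the JSON payload through an Error message instead (the class names mirror the Wing code above, everything else is an assumption):

```typescript
// Structured exceptions over a string-only throw channel: serialize
// {tag, message} on raise, parse it back on catch, then dispatch on the tag.
class Exception {
  constructor(public tag: string, public message?: string) {}
  raise(): never {
    throw new Error(JSON.stringify({ tag: this.tag, message: this.message }));
  }
  static fromJson(err: string): Exception {
    const je = JSON.parse(err);
    return new Exception(je.tag, je.message);
  }
}

class KeyError extends Exception {
  constructor(message?: string) { super("KeyError", message); }
}

// Usage: recover the structured exception after crossing the throw boundary.
let caught: Exception | undefined;
try {
  new KeyError("user 42 not found").raise();
} catch (e: any) {
  caught = Exception.fromJson(e.message);
}
console.log(caught?.tag); // "KeyError"
```

The round trip preserves both the tag (for dispatch) and the optional message (for logging), which is exactly what the Wing version relies on.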
These experiences highlight how the developer community can bridge gaps in new languages with temporary workarounds. Winglang is still evolving, but its innovative features can be harnessed for progress despite the language's youth.
Now, it's time to take a brief look at the last production topic on my list, namely scaling.
Scaling is a crucial aspect of cloud development, but it's often misunderstood. Some neglect it entirely, leading to problems when the system grows. Others over-engineer, aiming to be a "FANG" system from day one. The proclamation "We run everything on Kubernetes" is a common refrain in technical circles, regardless of whether it's appropriate for the project at hand.
Neither extreme, neglect nor over-engineering, is ideal. Like security, scaling shouldn't be ignored, but it also shouldn't be over-emphasized.
Up to a certain point, cloud platforms provide cost-effective scaling mechanisms. Often, the choice between different options boils down to personal preference or inertia rather than significant technical advantages.
The prudent path involves starting small and cost-effectively, scaling out based on real-world usage and performance data, rather than assumptions. This approach necessitates a system designed for easy configuration changes to accommodate scaling, something not inherently supported by Winglang but certainly within the realm of feasibility through further development and research. As an illustration, let's consider scaling within the AWS ecosystem:
In essence, Winglang's approach, emphasizing the Preflight and Inflight stages, holds promise for facilitating these scaling strategies, although it may still be in the early stages of fully realizing this potential. This exploration of scalability within cloud software development emphasizes starting small, basing decisions on actual data, and remaining flexible in adapting to changing requirements.
In the mid-1990s, I learned about Commonality Variability Analysis from Jim Coplien. Since then, this approach, alongside Edsger W. Dijkstra's Layered Architecture, has been a cornerstone of my software engineering practices. Commonality Variability Analysis asks: "In our system, which parts will always be the same and which might need to change?" The Open-Closed Principle dictates that variable parts should be replaceable without modifying the core system.
Deciding when to finalize the stable aspects of a system involves navigating the trade-off between flexibility and efficiency, with several stages from code generation to runtime offering opportunities for fixation. Dynamic language proponents might delay these decisions to runtime for maximum flexibility, whereas advocates for static, compiled languages typically secure crucial system components as early as possible.
Winglang, with its unique Preflight compilation phase, stands out by allowing cloud resources to be fixed early in the development process. In this publication, I explored how Winglang enables addressing non-functional aspects of cloud services through a flexible pipeline of filters, though this granularity introduces its own complexity. The challenge now becomes managing this complexity without compromising the system's efficiency or flexibility.
While the final solution is a work in progress, I can outline a high-level design that balances these forces:
This design combines several software Design Patterns to achieve the desired balance. The process involves:
This approach shifts complexity towards implementing the Pipeline Builder machinery and the Configuration specification. Experience teaches that such machinery can be implemented (as described, for example, in this publication). That normally requires some generic programming and dynamic import capabilities. Coming up with a good configuration data model is more challenging.
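A minimal sketch of the Pipeline Builder idea, with all names invented for illustration: a configuration lists filter names, and the builder composes the matching filters from a registry around a core handler.

```typescript
// Pipeline Builder sketch: filters wrap a handler; a config array decides
// which filters apply and in what order (outermost first).
type Handler = (req: string) => string;
type Filter = (next: Handler) => Handler;

const registry: Record<string, Filter> = {
  // Tags the response so we can see the filter ran (stand-in for real logging).
  logging: (next) => (req) => `log(${next(req)})`,
  // Toy authentication: requires a "token:" prefix, else short-circuits.
  auth: (next) => (req) =>
    req.startsWith("token:") ? next(req.slice(6)) : "401",
};

function buildPipeline(config: string[], core: Handler): Handler {
  // reduceRight wraps the core so the first config entry is outermost.
  return config.reduceRight((next, name) => registry[name](next), core);
}

const handler = buildPipeline(["logging", "auth"], (req) => `hello ${req}`);
console.log(handler("token:alice")); // "log(hello alice)"
console.log(handler("alice"));       // "log(401)"
```

Swapping the config array per deployment target changes the pipeline without touching the core handler, which is the balance of commonality and variability discussed above.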
Recent advances in generative AI-based copilots raise new questions about achieving the most cost-efficient outcome. To understand the problem, let's revisit the traditional compilation and configuration stack:
This general case may not apply to every ecosystem. Here's a breakdown of the typical layers:
This complex structure has limitations. Generics can obscure the core language, macros are unsafe, configuration files are poorly disguised scripts, and code generators rely on inflexible static templates. These limitations are why I believe the current trend of Internal Development Platforms has limited growth potential.
As we look forward to the role of generative AI in streamlining these processes, the question becomes: Can generative AI-based copilots not only simplify but also enhance our ability to balance commonality and variability in software engineering?
This is going to be the main topic of my future research to be reported in the next publications. Stay tuned.
The landscape of serverless architectures is constantly evolving - and the patterns used by developers today range from microservices, to hexagonal architectures, to multi-tier architectures, to event-driven architectures (EDAs). One of the most important challenges every development team considers when designing and building their system is how well it secures their application.
In this post, we're going to take a deep dive into the private API gateway pattern: what a private API gateway entails, its distinct advantages, and why you or your team might opt for this route over other services.
Based on our team's experience, we'll be focusing on the capabilities available when building private API gateways on AWS using their managed VPC and API Gateway services. But many of the lessons will also apply to cloud applications built using Azure's API Management service and other major cloud providers.
If we simplify what a private API gateway is, it's a secure means of exposing a set of APIs within a private network, typically established using a Virtual Private Cloud (VPC). Let's first understand each of these components.
An API Gateway makes it easy for developers to create, publish, maintain, monitor, and secure large numbers of API endpoints at scale. The gateway provides a central point where you can manage API throttling, authorization, API versioning, and monitoring configuration.
On the other hand, a Virtual Private Cloud (or VPC) is a mechanism for creating a logically isolated virtual networking environment. Such a virtual network parallels a network that you'd operate in your own data center, including subnets, IP addressing capabilities, routing tables, and gateways to connect to other networks.
When your API gateway is located within a VPC, you have a private API gateway. Unlike their public counterparts accessible over the internet, private API gateways are crafted to be accessed exclusively from within the specified network. This means only backend services and databases created within your organization can access the API endpoints.
Let's take a look at three common use cases.
By confining your API within a private network, you minimize exposure to potential security threats originating from a public API over the internet. For the security conscious, VPCs provide a much saner starting point for securing your application, since private compute resources can simply say "allow any traffic from our private network," reducing the need to manually manage firewalls and IP tables. In this scenario, it doesn't really matter what you're building - you just don't want the public internet messing around with it!
Some compliance standards or organizational policies mandate the use of VPCs to ensure that sensitive data remains within controlled environments. For example, with regulations like HIPAA in the United States, deploying applications within VPCs is a common strategy to ensure the confidentiality and security of patient data. In these situations, cloud architectures that are designed around API endpoint usage will benefit from being able to use private API gateways.
For companies that operate in a hybrid cloud environment (a mix of public cloud and on-premises data centers), a private API gateway can manage and route traffic within the private network. This is essential for sensitive data that cannot be transferred to public clouds due to policy or regulatory reasons. Migrating applications to the cloud is made easier by the fact that modern cloud providers like AWS allow application code running within VPCs to have secure access to common services like S3, DynamoDB, and IAM - all routed through the backbone of Amazon's networking infrastructure.
While setting up cloud applications in private networks has many security benefits, accessing a private API from external environments, such as during development or testing phases, can be cumbersome. VPCs typically add complexity to serverless applications by increasing the amount of infrastructure that needs to be managed through tools like Terraform and CloudFormation, and may require setting up bastion hosts for debugging applications in production.
Furthermore, establishing secure connections to any networks outside of the VPC may require setting up Virtual Private Networks (VPNs) or using services like Amazon Direct Connect, which introduces extra complexity.
In the Wing Discord community, we've seen many developers share their advice and experiences building cloud applications of all shapes and sizes. Recently, we heard about a user's positive experience building a cloud application with Wing that established API endpoints within their VPC through a private API Gateway. Inspired by their efforts, we extracted their solution into a template that makes it easy for anyone to turn their API gateway into a private one.
One of the biggest pains people have when using serverless is that they constantly have to wait for deployments to finish in order to test changes to their code. This is where Winglang and its cloud simulator shine: they allow you to develop your entire serverless application without having to deploy anything to the cloud, letting you iterate in milliseconds instead of minutes.
If you're curious to learn more, check out our tutorial that walks you step-by-step through building a simple application with a private API gateway and deploying it to your own AWS account. If you have feedback or any other questions, let us know by dropping a comment in our community Discord. Don't be shy!
As the saying goes, there are several ways to skin a cat... in the tech world, there are 5 ways to skin a Lambda Function.
As developers try to bridge the gap between development and DevOps, I thought it would be helpful to compare Programming Languages and DevTools.
Let's start with the idea of a simple function that would upload a text file to a Bucket in our cloud app.
The next step is to demonstrate several ways this could be accomplished.
Note: In cloud development, managing permissions and bucket identities, packaging runtime code, and handling multiple files for infrastructure and runtime add layers of complexity to the development process.
Let's dive into some code!
After installing Wing, let's create a file:
main.w
If you aren't familiar with the Wing Programming Language, please check out the open-source repo HERE
bring cloud;
let bucket = new cloud.Bucket();
new cloud.Function(inflight () => {
bucket.put("hello.txt", "world!");
});
Let's do a breakdown of what's happening in the code above.
bring cloud
is Wing's import syntax
Create a Cloud Bucket:
let bucket = new cloud.Bucket();
initializes a new cloud bucket instance.
On the backend, the Wing platform provisions a new bucket in your cloud provider's environment. This bucket is used for storing and retrieving data.
Create a Cloud Function: The
new cloud.Function(inflight () => { ... });
statement defines a new cloud function.
This function, when triggered, performs the actions defined within its body.
bucket.put("hello.txt", "world!");
uploads a file named hello.txt with the content world! to the cloud bucket created earlier.
wing compile --platform tf-aws main.w
terraform apply
That's it! Wing takes care of the complexity: permissions, getting the bucket identity in the runtime code, packaging the runtime code, and having to write multiple files for infrastructure and runtime.
Not to mention it generates IaC (Terraform or CloudFormation), plus JavaScript that you can deploy with existing tools.
And while you develop, you can use the local simulator to get instant feedback and shorten the iteration cycles.
Wing even has a playground that you can try out in the browser!
Step 1: Initialize a New Pulumi Project
mkdir pulumi-s3-lambda-ts
cd pulumi-s3-lambda-ts
pulumi new aws-typescript
Step 2. Write the code to upload a text file to S3.
This will be your project structure.
pulumi-s3-lambda-ts/
ββ src/
β ββ index.ts # Pulumi infrastructure code
β ββ lambda/
β ββ index.ts # Lambda function code to upload a file to S3
ββ tsconfig.json # TypeScript configuration
ββ package.json # Node.js project file with dependencies
Let's add this code to index.ts
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
// Create an AWS S3 bucket
const bucket = new aws.s3.Bucket("myBucket", {
acl: "private",
});
// IAM role for the Lambda function
const lambdaRole = new aws.iam.Role("lambdaRole", {
assumeRolePolicy: JSON.stringify({
Version: "2012-10-17",
Statement: [
{
Action: "sts:AssumeRole",
Principal: {
Service: "lambda.amazonaws.com",
},
Effect: "Allow",
Sid: "",
},
],
}),
});
// Attach the AWSLambdaBasicExecutionRole policy
new aws.iam.RolePolicyAttachment("lambdaExecutionRole", {
role: lambdaRole,
policyArn: aws.iam.ManagedPolicy.AWSLambdaBasicExecutionRole,
});
// Policy to allow Lambda function to access the S3 bucket
const lambdaS3Policy = new aws.iam.Policy("lambdaS3Policy", {
policy: bucket.arn.apply((arn) =>
JSON.stringify({
Version: "2012-10-17",
Statement: [
{
Action: ["s3:PutObject", "s3:GetObject"],
Resource: `${arn}/*`,
Effect: "Allow",
},
],
})
),
});
// Attach policy to Lambda role
new aws.iam.RolePolicyAttachment("lambdaS3PolicyAttachment", {
role: lambdaRole,
policyArn: lambdaS3Policy.arn,
});
// Lambda function
const lambda = new aws.lambda.Function("myLambda", {
code: new pulumi.asset.AssetArchive({
".": new pulumi.asset.FileArchive("./src/lambda"),
}),
runtime: aws.lambda.Runtime.NodeJS12dX,
role: lambdaRole.arn,
handler: "index.handler",
environment: {
variables: {
BUCKET_NAME: bucket.bucket,
},
},
});
export const bucketName = bucket.id;
export const lambdaArn = lambda.arn;
Next, create a lambda directory with an index.ts file for the Lambda function code:
import { S3 } from "aws-sdk";
const s3 = new S3();
export const handler = async (): Promise<void> => {
const bucketName = process.env.BUCKET_NAME || "";
const fileName = "example.txt";
const content = "Hello, Pulumi!";
const params = {
Bucket: bucketName,
Key: fileName,
Body: content,
};
try {
await s3.putObject(params).promise();
console.log(
`File uploaded successfully at https://${bucketName}.s3.amazonaws.com/${fileName}`
);
} catch (err) {
console.log(err);
}
};
Step 3: TypeScript Configuration (tsconfig.json)
{
"compilerOptions": {
"target": "ES2018",
"module": "CommonJS",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": ["src/**/*.ts"],
"exclude": ["node_modules", "**/*.spec.ts"]
}
After creating a Pulumi project, a Pulumi.yaml file will be generated automatically:
name: s3-lambda-pulumi
runtime: nodejs
description: A simple example that uploads a file to an S3 bucket using a Lambda function
template:
config:
aws:region:
description: The AWS region to deploy into
default: us-west-2
Ensure your lambda directory with the compiled index.js file is correctly set up. Then, run the following command to deploy your infrastructure: pulumi up
Step 1: Initialize a New CDK Project
mkdir cdk-s3-lambda
cd cdk-s3-lambda
cdk init app --language=typescript
Step 2: Add Dependencies
npm install @aws-cdk/aws-lambda @aws-cdk/aws-s3
Step 3: Define the AWS Resources in CDK
File: lib/cdk-s3-lambda-stack.ts
import * as cdk from "@aws-cdk/core";
import * as lambda from "@aws-cdk/aws-lambda";
import * as s3 from "@aws-cdk/aws-s3";
export class CdkS3LambdaStack extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
// Create the S3 bucket
const bucket = new s3.Bucket(this, "MyBucket", {
removalPolicy: cdk.RemovalPolicy.DESTROY, // NOT recommended for production code
});
// Define the Lambda function
const lambdaFunction = new lambda.Function(this, "MyLambda", {
runtime: lambda.Runtime.NODEJS_14_X, // Define the runtime
handler: "index.handler", // Specifies the entry point
code: lambda.Code.fromAsset("lambda"), // Directory containing your Lambda code
environment: {
BUCKET_NAME: bucket.bucketName,
},
});
// Grant the Lambda function permissions to write to the S3 bucket
bucket.grantWrite(lambdaFunction);
}
}
Step 4: Lambda Function Code
Create the same file structure as in the Pulumi example: a lambda directory containing index.ts
import { S3 } from 'aws-sdk';
const s3 = new S3();
exports.handler = async (event: any) => {
const bucketName = process.env.BUCKET_NAME;
const fileName = 'uploaded_file.txt';
const content = 'Hello, CDK! This file was uploaded by a Lambda function!';
try {
const result = await s3.putObject({
Bucket: bucketName!,
Key: fileName,
Body: content,
}).promise();
console.log(`File uploaded successfully: ${result}`);
return {
statusCode: 200,
body: `File uploaded successfully: ${fileName}`,
};
} catch (error) {
console.log(error);
return {
statusCode: 500,
body: `Failed to upload file: ${error}`,
};
}
};
First, compile your TypeScript code: npm run build. Then, deploy your CDK to AWS: cdk deploy
Step 1: Initialize a New CDKTF Project
mkdir cdktf-s3-lambda-ts
cd cdktf-s3-lambda-ts
Then, initialize a new CDKTF project using TypeScript:
cdktf init --template="typescript" --local
Step 2: Install AWS Provider and Add Dependencies
npm install @cdktf/provider-aws
Step 3: Define the Infrastructure
Edit main.ts to define the S3 bucket and Lambda function:
import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";
import { AwsProvider, s3, lambdafunction, iam } from "@cdktf/provider-aws";
class MyStack extends TerraformStack {
constructor(scope: Construct, id: string) {
super(scope, id);
new AwsProvider(this, "aws", { region: "us-west-2" });
// S3 bucket
const bucket = new s3.S3Bucket(this, "lambdaBucket", {
bucketPrefix: "cdktf-lambda-",
});
// IAM role for Lambda
const role = new iam.IamRole(this, "lambdaRole", {
name: "lambda_execution_role",
assumeRolePolicy: JSON.stringify({
Version: "2012-10-17",
Statement: [
{
Action: "sts:AssumeRole",
Principal: { Service: "lambda.amazonaws.com" },
Effect: "Allow",
},
],
}),
});
new iam.IamRolePolicyAttachment(this, "lambdaPolicy", {
role: role.name,
policyArn:
"arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
});
const lambdaFunction = new lambdafunction.LambdaFunction(this, "MyLambda", {
functionName: "myLambdaFunction",
handler: "index.handler",
role: role.arn,
runtime: "nodejs14.x",
s3Bucket: bucket.bucket, // Assuming the Lambda code is uploaded to this bucket
s3Key: "lambda.zip", // Assuming the Lambda code zip file is named lambda.zip
environment: {
variables: {
BUCKET_NAME: bucket.bucket,
},
},
});
// Grant the Lambda function permissions to write to the S3 bucket
new s3.S3BucketPolicy(this, "BucketPolicy", {
bucket: bucket.bucket,
policy: JSON.stringify({
Version: "2012-10-17",
Statement: [
{
Action: "s3:*",
// bucket.bucket is a Terraform token, resolved at synth time
Resource: `arn:aws:s3:::${bucket.bucket}/*`,
Effect: "Allow",
Principal: {
AWS: role.arn,
},
},
],
}),
});
}
}
const app = new App();
new MyStack(app, "cdktf-s3-lambda-ts");
app.synth();
Step 4: Lambda Function Code
The Lambda function code should be written in TypeScript and compiled into JavaScript, as AWS Lambda natively executes JavaScript. Here's an example index.ts for the Lambda function that you need to compile and zip:
import { S3 } from "aws-sdk";
const s3 = new S3();
exports.handler = async () => {
const bucketName = process.env.BUCKET_NAME || "";
const content = "Hello, CDKTF!";
const params = {
Bucket: bucketName,
Key: `upload-${Date.now()}.txt`,
Body: content,
};
try {
await s3.putObject(params).promise();
return { statusCode: 200, body: "File uploaded successfully" };
} catch (err) {
console.error(err);
return { statusCode: 500, body: "Failed to upload file" };
}
};
You need to compile this TypeScript code to JavaScript, zip it, and upload it to the S3 bucket manually or using a script.
Ensure the s3Key in the LambdaFunction resource points to the correct zip file in the bucket.
Compile your project using npm run build
Generate Terraform Configuration Files
Run the cdktf synth command. This command executes your CDKTF app, which generates Terraform configuration files (*.tf.json files) in the cdktf.out directory:
Deploy Your Infrastructure
cdktf deploy
Step 1: Terraform Setup
Define your AWS Provider and S3 Bucket Create a file named main.tf with the following:
provider "aws" {
region = "us-west-2" # Choose your AWS region
}
resource "aws_s3_bucket" "lambda_bucket" {
bucket_prefix = "lambda-upload-bucket-"
acl = "private"
}
resource "aws_iam_role" "lambda_execution_role" {
name = "lambda_execution_role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "lambda.amazonaws.com"
}
},
]
})
}
resource "aws_iam_policy" "lambda_s3_policy" {
name = "lambda_s3_policy"
description = "IAM policy for Lambda to access S3"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = ["s3:PutObject", "s3:GetObject"],
Effect = "Allow",
Resource = "${aws_s3_bucket.lambda_bucket.arn}/*"
},
]
})
}
resource "aws_iam_role_policy_attachment" "lambda_s3_access" {
role = aws_iam_role.lambda_execution_role.name
policy_arn = aws_iam_policy.lambda_s3_policy.arn
}
resource "aws_lambda_function" "uploader_lambda" {
function_name = "S3Uploader"
s3_bucket = "YOUR_DEPLOYMENT_BUCKET_NAME" # Set your deployment bucket name here
s3_key = "lambda.zip" # Upload your ZIP file to S3 and set its key here
handler = "index.handler"
role = aws_iam_role.lambda_execution_role.arn
runtime = "nodejs14.x"
environment {
variables = {
BUCKET_NAME = aws_s3_bucket.lambda_bucket.bucket
}
}
}
Step 2: Lambda Function Code (TypeScript)
Create a TypeScript file index.ts for the Lambda function:
import { S3 } from 'aws-sdk';
const s3 = new S3();
exports.handler = async (event: any) => {
const bucketName = process.env.BUCKET_NAME;
const fileName = `uploaded-${Date.now()}.txt`;
const content = 'Hello, Terraform and AWS Lambda!';
try {
await s3.putObject({
Bucket: bucketName!,
Key: fileName,
Body: content,
}).promise();
console.log('Upload successful');
return {
statusCode: 200,
body: JSON.stringify({ message: 'Upload successful' }),
};
} catch (error) {
console.error('Upload failed:', error);
return {
statusCode: 500,
body: JSON.stringify({ message: 'Upload failed' }),
};
}
};
Finally, after uploading your Lambda function code to the specified S3 bucket, run terraform apply.
I hope you enjoyed this comparison of five simple ways to write a function in our cloud app that uploads a text file to a Bucket.
As you can see, most of the code becomes very complex, except for Wing.
If you are intrigued about Wing and like how we are simplifying the process of cloud development, please join our community and reach out to us on Twitter.
As I argued elsewhere, automatically generating cloud infrastructure specifications directly from application code represents "The Next Logical Step in Cloud Automation." This approach, sometimes referred to as "Infrastructure From Code" (IfC), aims to:
Ensure automatic coordination of four types of interactions with cloud services: life cycle management, pre- and post-configuration, consumption, and operation, while making pragmatic choices of the most appropriate levels of API abstraction for each cloud service and leaving enough control to the end-user for choosing the most suitable vendor, based on personal preferences, regulations or brownfield deployment constraints
While analyzing the IfC Technology Landscape a year ago, I identified five attributes essential for analyzing major offerings in this space:
At that time, Winglang appeared on my radar as a brand-new cloud programming-oriented language running atop the NodeJS runtime. It comes with an optional plugin for VSCode, its own console, and fully supports cloud self-hosting via popular cloud orchestration engines such as Terraform and AWS CDK.
Today, I want to explore how well Winglang is suited for supporting the Clean Architecture style, based on the Hexagonal Ports and Adapters pattern. Additionally, I'm interested in how easily Winglang can be integrated with TypeScript, a representative of mainstream programming languages that can be compiled into JavaScript and run atop the NodeJS runtime engine.
This publication is a technology research report. While it could potentially be converted into a tutorial, it currently does not serve as one. The code snippets in Winglang are intended to be self-explanatory. The language syntax falls within the common Algol-60 family and is, in most cases, straightforward to understand. In instances of uncertainty, please consult the Winglang Language Reference, Library, and Examples. For introductory materials, refer to the References.
Many thanks to Elad Ben-Israel, Shai Ber, and Nathan Tarbert for the valuable feedback on the early draft of this paper.
Creating the simplest possible "Hello, World!" application is a crucial, yet often overlooked, validation step in new software technology. Although such an application lacks practical utility, it reveals the general accessibility of the technology to newcomers. As a marketing wit once told me, "We have only one chance to make a first impression." So, let's begin with a straightforward one-liner in Winglang.
About Winglang: Winglang is an innovative cloud-oriented programming language designed to simplify cloud application development. It integrates seamlessly with cloud services, offering a unique approach to building and deploying applications directly in the cloud environment. This makes Winglang an intriguing option for developers looking to leverage cloud capabilities more effectively.
Installing Winglang is straightforward, assuming you already have npm and terraform installed and configured on your computer. As a technology researcher, I primarily work with remote desktops. Therefore, I won't delve into the details of preparing your workstation here. My personal setup, once stabilized, will be shared in a separate publication.
My first step is to create a one-line application that prints the sentence "Hello, Winglang!" In Winglang, this can indeed be done in a single line:
log("Hello, Winglang!");
However, to execute this one line of code, we need to compile it by typing wing compile:
Winglang adopts an intriguing approach by distinctly separating the phases of programmatic definition of cloud resources during compilation and their use during runtime. This is articulated in Winglang as Preflight and Inflight execution phases.
Simply put, the Preflight phase occurs when application code is compiled into a target orchestration engine template, such as a local simulator or Terraform, while the Inflight phase is when the application code executes within a Cloud Function or Container.
The ability to use the same syntax for programming the compilation phase and even print logs is quite a unique feature. For comparison, consider the ability to use the same syntax for programming "C" macros or C++ templates to print debugging logs of the compilation phase, just as you would program the runtime phase.
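The distinction can be loosely illustrated in plain TypeScript (this is a conceptual analogy, not how the Wing compiler actually works): "preflight" code runs once to build a deployment plan, while "inflight" closures are only captured, to be invoked later.

```typescript
// Conceptual analogy of Wing's two phases. Preflight code executes now,
// at "synthesis" time; inflight handlers are stored and run later.
type Inflight = (event: string) => string;

const plan: { resource: string; handler: Inflight }[] = [];

function preflightFunction(handler: Inflight) {
  // Runs at "compile" time: records that a cloud.Function must exist.
  console.log("preflight: provisioning a cloud.Function");
  plan.push({ resource: "cloud.Function", handler });
}

preflightFunction((event) => `inflight handled: ${event}`); // handler not called yet

// Later, the "cloud" invokes the captured inflight handler:
const result = plan[0].handler("hello");
console.log(result); // "inflight handled: hello"
```

In Wing, the compiler enforces this split in the type system, so preflight state cannot be mutated from inflight code.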
Now, I aim to create the simplest possible application that prints the sentence "Hello, Winglang!" during runtime, that is, during the Inflight phase. In Winglang, accomplishing this requires just a couple of lines, similar to what you'd expect in any mainstream programming language:
bring cloud;
log("Hello, Winglang, Preflight!");
let helloWorld = new cloud.Function(inflight (event: str) => {
log("Hello, Winglang!");
});
By typing wing it in the VSCode Terminal, you can bring up the Winglang simulator (I prefer the preview in the editor). Click on cloud.Function
, then on Invoke
, and you will see the following:
This is pretty cool and Winglang definitely passes the initial smoke test.
Adding the name Argument
To move beyond simply printing static text, we're going to slightly modify our initial function to return the greeting "Hello, <name>!", where <name> is the function's argument. The updated code, along with the simulator's output, will look something like this:
Keep in mind, there's no need to close the simulator. Simply edit the file, hit CTRL+S to save, and the simulator will automatically load the new version.
In today's world, a system without test automation support hardly has a right to exist. Let's add some tests to our simple function (now renamed to makeGreeting):
Again, there's no need to close the simulator. The entire process is interactive and flows quite smoothly.
You can also run the tests via the command line in the VSCode Terminal:
The same test can also be run automatically in the cloud by typing, for example, wing test -t tf-aws
. Additionally, the same code can be deployed on a target cloud.
Cloud neutrality support in Winglang is an important and fascinating topic, which will be covered in more detail in the next Step Four: Extracting Core section.
If all you need is to develop simple Transaction Scripts that:
Then you may choose to stop here. Explore Winglang Examples to see what can be achieved today, and visit Winglang Issues for insights on current limitations and future plans. However, if you're interested in exploring how Winglang supports complex software architectures with potentially intricate computational logic and long-term support requirements, you are welcome to proceed to Part Two of this publication.
Hexagonal Architecture, introduced by Alistair Cockburn in 2005, represented a significant shift in the way software applications were structured. Also known as the Ports and Adapters pattern, this architectural style was designed to create a clear separation between an application's core logic and its external components. It enables applications to be equally driven by users, programs, automated tests, or batch scripts, and allows for development and testing in isolation from runtime devices and databases. By organizing interactions through "ports" and "adapters", the architecture ensures that the application remains agnostic to the nature of external technologies and interfaces. This approach not only prevented the infiltration of business logic into user interface code but also enhanced the flexibility and maintainability of software, making it adaptable to various environments and technologies.
While I believe that Alistair Cockburn, like many other practitioners, may have misinterpreted the original intent of layered software architecture as introduced by E.W. Dijkstra in his seminal work, "The Structure of 'THE' Multiprogramming System" (a topic I plan to address in a separate publication), the foundational idea he presents remains useful. As I argued in my earlier publication, the Ports metaphor aligns well with cloud resources that trigger specific events, while software modules interacting directly with the cloud SDK effectively function as Adapters.
Numerous attempts (see References) have been made to apply Hexagonal Architecture concepts to cloud and, more specifically, serverless development. A notable example is the blog post "Developing Evolutionary Architecture with AWS Lambda," which showcases a repository structure closely aligned with what I envision. However, even this example employs a more complex application than what I believe is necessary for initial exploration. I firmly hold that we should fully understand and explore the simplest possible applications, at the "Hello, World!" level, before delving into more complex scenarios. With this in mind, let's examine how far we can go in building a straightforward Greeting Service.
First and foremost, our goal is to extract the Core and ensure its complete independence from any external dependencies:
bring cloud;
pub class Greeting impl cloud.IFunctionHandler {
  pub inflight handle(name: str): str {
    return "Hello, {name}!";
  }
}
At the moment, the Winglang Module System does not support public functions. It does, however, support public static class functions, which are semantically equivalent. Unfortunately, I cannot directly pass a public static inflight function to cloud.Function (it only works for closures), so I need to implement the cloud.IFunctionHandler interface. These limitations are fairly understandable and quite typical for a new programming system.
By extracting the core into a separate module, we can focus on what brings the application to life in the first place. This also enables extensive testing of the core logic independently, as shown below:
bring "./core" as core;
bring expect;
let greeting = new core.Greeting();
test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, World!", greeting.handle("World"));
  expect.equal("Hello, Winglang!", greeting.handle("Winglang"));
}
Keeping the simulator up with only the core test allows us to quickly explore application logic and discuss it with stakeholders without worrying about cloud resources. This approach often epitomizes what a true MVP (Minimum Viable Product) is about:
The main file is now streamlined, focusing on system-level packaging and testing:
bring cloud;
bring "./core" as core;
let makeGreeting = new cloud.Function(inflight (name: str): str => {
  log("Received: {name}");
  let greeting = core.Greeting.makeGreeting(name);
  log("Returned: {greeting}");
  return greeting;
});
bring expect;
test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Winglang!", makeGreeting.invoke("Winglang"));
}
To consolidate everything, it's time to introduce a Makefile to automate the entire process:
.PHONY: all test_core test_local test_remote
cloud ?= aws
all: test_remote
test_core:
	wing test test.core.main.w -t sim
test_local: test_core
	wing test main.w -t sim
test_remote: test_local
	wing test main.w -t tf-$(cloud)
Here, I've defined a Makefile variable cloud with the default value aws, which specifies the target cloud platform for remote tests. By using Terraform as an orchestration engine, I ensure that the same code and Makefile will run without any changes on any cloud platform supported by Winglang, such as aws, gcp, or azure.
The output of remote testing is worth examining:
As we can see, Winglang automatically converts the Preflight code into Terraform templates and invokes Terraform commands to deploy the resulting stack to the cloud. It then runs the same test, effectively executing the Inflight code on the actual cloud (aws in this case), and finally deletes all resources. In such cases, I don't even need to access the cloud console to monitor the process. I can treat the cloud as a supercomputer, working with it through Winglang's cross-compilation mechanism.
The project structure now mirrors our architectural intent:
greeting-service/
│
├── core/
│   └── Greeting.w
│
├── main.w
├── Makefile
└── test.core.main.w
makeGreeting(name) Request Handler

The core functionality should be purely computational, stateless, and free from side effects. This is crucial to ensure that the core does not depend on any external framework and can be fully tested automatically. Introducing state or external side effects would generally hinder this possibility. However, we still aim to isolate application logic from the real environment represented by Ports and Adapters. To achieve this, we introduce a separate Request Handler module, as follows:
bring cloud;
bring "../core" as core;
pub class Greeting impl cloud.IFunctionHandler {
  pub inflight handle(name: str): str {
    log("Received: {name}");
    let greeting = core.Greeting.makeGreeting(name);
    log("Returned: {greeting}");
    return greeting;
  }
}
In this case, the GreetingHandler is responsible for logging, which is a side effect. In more complex applications, it would communicate with external databases, message buses, third-party services, etc., via Ports and Adapters.
The core logic is now encapsulated as a plain function and is no longer derived from the cloud.IFunctionHandler interface:
pub class Greeting {
  pub static inflight makeGreeting(name: str): str {
    return "Hello, {name}!";
  }
}
The unit test for the core logic is accordingly simplified:
bring "./core" as core;
bring expect;
test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, World!", core.Greeting.makeGreeting("World"));
  expect.equal("Hello, Wing!", core.Greeting.makeGreeting("Wing"));
}
The responsibility of connecting the handler and core logic now falls to the main.w module:
bring cloud;
bring "./handlers" as handlers;
let greetingHandler = new handlers.Greeting();
let makeGreetingFunction = new cloud.Function(greetingHandler);
bring expect;
test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", makeGreetingFunction.invoke("Wing"));
}
Once again, the project structure reflects our architectural intent:
greeting-service/
│
├── core/
│   └── Greeting.w
├── handlers/
│   └── Greeting.w
├── main.w
├── Makefile
└── test.core.main.w
It should be noted that for a simple service like Greeting, such an evolved structure could be considered over-engineering and not justified by actual business needs. However, as a software architect, it's essential for me to outline a general skeleton for a fully-fledged service without getting bogged down in application-specific complexities that might not yet be known. By isolating different system components from one another, we make future system evolution less painful, and in many cases practically feasible at all. In such cases, investing in a preliminary system structure by following best practices is fully justified and necessary. As Grady Booch famously said, "One cannot refactor a doghouse into a skyscraper."
In general, keeping core functionality purely stateless and free from side effects, and isolating stateful application behavior with potential side effects into separate handlers, is conceptually equivalent to the monadic programming style widely adopted in Functional Programming environments.
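In plain TypeScript, that separation looks like this: a pure core function with no side effects, wrapped by an impure handler that confines them. The injectable logger parameter is my illustration, not part of the article's code:

```typescript
// Core: pure, stateless, trivially unit-testable
function makeGreeting(name: string): string {
  return `Hello, ${name}!`;
}

// Handler: an impure shell around the pure core; all side effects
// (here, just logging) are confined to this layer
function handleGreeting(
  name: string,
  log: (msg: string) => void = console.log
): string {
  log(`Received: ${name}`);          // side effect
  const greeting = makeGreeting(name);
  log(`Returned: ${greeting}`);      // side effect
  return greeting;
}
```

Tests can exercise makeGreeting directly with no mocks, and exercise handleGreeting by passing a no-op or recording logger.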
We can now remove the direct cloud.Function creation from the main module and encapsulate it into a separate GreetingFunction port as follows:
bring "./handlers" as handlers;
bring "./ports" as ports;
let greetingHandler = new handlers.Greeting();
let makeGreetingService = new ports.GreetingFunction(greetingHandler);
bring expect;
test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", makeGreetingService.invoke("Wing"));
}
The GreetingFunction port is defined in a separate module like this:
bring cloud;
pub class GreetingFunction {
  _f: cloud.Function;

  new(handler: cloud.IFunctionHandler) {
    this._f = new cloud.Function(handler);
  }

  pub inflight invoke(name: str): str {
    return this._f.invoke(name);
  }
}
This separation of concerns allows the main.w module to focus on connecting the different parts of the system together. Specific port configuration is performed in a separate module dedicated to that purpose. While such isolation of the GreetingHandler might seem unnecessary at this stage, it becomes more relevant when considering the nuanced configuration supported by Winglang's cloud.Function, including execution platform (e.g., AWS Lambda vs. container), environment variables, timeout, maximum resources, etc. Extracting the GreetingFunction port definition into a separate module naturally facilitates the concealment of these details.
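As a rough TypeScript analogy of what such a port module can hide, consider a wrapper that owns its deployment-style settings. FunctionConfig and its defaults are hypothetical placeholders for the kind of options (timeout, memory, environment variables) a real port would conceal:

```typescript
// Hypothetical deployment settings a port module might own
interface FunctionConfig {
  timeoutSeconds: number;
  memoryMb: number;
  env: Record<string, string>;
}

class GreetingFunctionPort {
  private config: FunctionConfig;

  constructor(
    private handler: (name: string) => string,
    config?: Partial<FunctionConfig>
  ) {
    // Defaults live here, invisible to the wiring code in main
    this.config = { timeoutSeconds: 30, memoryMb: 128, env: {}, ...config };
  }

  invoke(name: string): string {
    // A real port would provision a cloud function using this.config;
    // here we simply delegate to the handler
    return this.handler(name);
  }

  describe(): string {
    return `timeout=${this.config.timeoutSeconds}s memory=${this.config.memoryMb}MB`;
  }
}
```

Callers construct the port with just a handler; tuning memory or timeout later touches only this module.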
The project structure is updated accordingly:
greeting-service/
│
├── core/
│   └── Greeting.w
├── handlers/
│   └── Greeting.w
├── ports/
│   └── greetingFunction.w
├── main.w
├── Makefile
└── test.core.main.w
The adopted naming convention for port modules also allows for the inclusion of multiple port definitions within the same project, enabling the selection of the required one based on external configuration.
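A small TypeScript sketch of that selection idea: a shared port interface plus a factory that picks the implementation from configuration. The GreetingPort interface, the two port classes, and the makePort factory are my illustration and do not appear in the article's Winglang code:

```typescript
// Shared port contract
interface GreetingPort {
  invoke(name: string): string;
}

class FunctionPort implements GreetingPort {
  constructor(private handler: (n: string) => string) {}
  invoke(name: string): string {
    return this.handler(name); // direct function call
  }
}

class ApiPort implements GreetingPort {
  constructor(private handler: (n: string) => string) {}
  invoke(name: string): string {
    // A real port would issue an HTTP request here
    return this.handler(name);
  }
}

// Select the port implementation from external configuration
// (e.g., an environment variable or a platform parameter)
function makePort(kind: string, handler: (n: string) => string): GreetingPort {
  return kind === "api" ? new ApiPort(handler) : new FunctionPort(handler);
}
```

The wiring code then asks the factory for "api" or "function" and stays unchanged when new port variants are added.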
There are several reasons why a project might consider implementing its core functionality in a mainstream programming language that can still run atop the underlying runtime environment, for example in TypeScript, which compiles into JavaScript and can be integrated with Winglang. Here are some of the most common reasons:
The Greeting service core functionality, redeveloped in TypeScript, would look like this:
export function makeGreeting(name: string): string {
  return `Hello, ${name}!`;
}
Its unit test, developed using the jest framework, would be:
import { makeGreeting } from "@core/makeGreeting";
describe("makeGreeting", () => {
  it("should return a greeting with the provided name", () => {
    const name = "World";
    const expected = "Hello, World!";
    const result = makeGreeting(name);
    expect(result).toBe(expected);
  });
});
To make it accessible to Winglang language modules, a simple wrapper is needed:
pub inflight class Greeting {
  pub extern "../target/core/makeGreeting.js" static inflight makeGreeting(name: str): str;
}
The main technical challenge is to place the compiled JavaScript version where the Winglang wrapper can find it. For this project, I decided to use the target folder, where the Winglang compiler puts its artifacts. To achieve this, I created a dedicated tsconfig.build.json:
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "outDir": "./target",
    // ... production-specific compiler options ...
  },
  "exclude": [
    "core/*.test.ts"
  ]
}
The Makefile was also modified to automate the process:
.PHONY: all install test_core build_core test_local test_remote
cloud ?= aws
all: test_remote
install:
	npm install
test_core: install
	npm run test
build_core: test_core
	npm run build
test_local: build_core
	wing test main.w -t sim
test_remote: test_local
	wing test main.w -t tf-$(cloud)
The folder structure reflects the changes made:
greeting-service/
│
├── core/
│   ├── Greeting.w
│   ├── makeGreeting.ts
│   └── makeGreeting.test.ts
├── handlers/
│   └── Greeting.w
├── ports/
│   └── greetingFunction.w
├── jest.config.js
├── main.w
├── Makefile
├── package-lock.json
├── package.json
├── tsconfig.build.json
└── tsconfig.json
Now, let's consider making our Greeting service accessible via a REST API. This could be necessary, for instance, to enable demonstrations from a web browser or to facilitate calls from external services that, due to security or technological constraints, cannot communicate directly with the GreetingFunction port. To accomplish this, we need to introduce a new Port definition and modify the main.w module, while keeping everything else unchanged:
bring cloud;
bring http;
pub class GreetingApi {
  pub apiUrl: str;

  new(handler: cloud.IFunctionHandler) {
    let api = new cloud.Api();
    api.get("/greetings", inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
      return cloud.ApiResponse {
        status: 200,
        body: handler.handle(request.query.get("name"))
      };
    });
    this.apiUrl = api.url;
  }

  pub inflight invoke(name: str): str {
    let result = http.get("{this.apiUrl}/greetings?name={name}");
    assert(200 == result.status);
    return result.body;
  }
}
To maintain a consistent testing interface, I implemented an invoke method that functions similarly to the GreetingFunction port. This design choice is not mandatory, but rather a matter of convenience to minimize the amount of change.
The main.w module now allocates the GreetingApi port:
bring "./handlers" as handlers;
bring "./ports" as ports;
let greetingHandler = new handlers.Greeting();
let makeGreetingService = new ports.GreetingApi(greetingHandler);
bring expect;
test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", makeGreetingService.invoke("Wing"));
}
Since there is now something to use externally, the Makefile was modified to include deploy and destroy targets, as follows:
.PHONY: all install test_core build_core update test_adapters test_local test_remote compile tf-init deploy destroy
cloud ?= aws
target := target/main.tf$(cloud)
all: test_remote
install:
	npm install
test_core: install
	npm run test
build_core: test_core
	npm run build
update:
	sudo npm update -g wing
test_adapters: update
	wing test test.adapters.main.w -t sim
test_local: build_core test_adapters
	wing test test.main.w -t sim
test_remote: test_local
	wing test test.main.w -t tf-$(cloud)
compile:
	wing compile main.w -t tf-$(cloud)
tf-init: compile
	( \
	cd $(target) ;\
	terraform init \
	)
deploy: tf-init
	( \
	cd $(target) ;\
	terraform apply -auto-approve \
	)
destroy:
	( \
	cd $(target) ;\
	terraform destroy -auto-approve \
	)
The browser screen looks almost as expected, but notice a strange JSON.parse error message (it will be addressed in the forthcoming section):
The project structure is updated to reflect these changes:
greeting-service/
│
├── core/
│   ├── Greeting.w
│   ├── makeGreeting.ts
│   └── makeGreeting.test.ts
├── handlers/
│   └── Greeting.w
├── ports/
│   ├── greetingApi.w
│   └── greetingFunction.w
├── jest.config.js
├── main.w
├── Makefile
├── package-lock.json
├── package.json
├── tsconfig.build.json
└── tsconfig.json
The GreetingApi port implementation introduced in the previous section slightly violates the Single Responsibility Principle, which states: "A class should have only one reason to change." Currently, there are multiple potential reasons for change:
We can generally agree that while HTTP Request Processing and HTTP Response Formatting are closely related, HTTP Routing stands apart. To decouple these functionalities, we introduce an ApiAdapter responsible for converting cloud.ApiRequest to cloud.ApiResponse, thereby extracting this functionality from the GreetingApi port.
To achieve this, we introduce a new IRestApiAdapter interface:
bring cloud;
pub interface IRestApiAdapter {
  inflight handle(request: cloud.ApiRequest): cloud.ApiResponse;
}
The GreetingApiAdapter class is defined as follows:
bring cloud;
bring "./IRestApiAdapter.w" as restApiAdapter;
pub class GreetingApiAdapter impl restApiAdapter.IRestApiAdapter {
  _h: cloud.IFunctionHandler;

  new(handler: cloud.IFunctionHandler) {
    this._h = handler;
  }

  pub inflight handle(request: cloud.ApiRequest): cloud.ApiResponse {
    return cloud.ApiResponse {
      status: 200,
      body: this._h.handle(request.query.get("name"))