Workers AI
This guide will walk you through setting up and deploying a Workers AI project. You will use Workers, an AI Gateway binding, and a large language model (LLM) to deploy your first AI-powered application on the Cloudflare global network.
- Sign up for a Cloudflare account ↗.
- Install Node.js ↗.

Node.js version manager

Use a Node version manager like Volta ↗ or nvm ↗ to avoid permission issues and change Node.js versions. Wrangler, discussed later in this guide, requires a Node version of 16.17.0 or later.
You will create a new Worker project using the create-cloudflare CLI (C3). C3 is a command-line tool designed to help you set up and deploy new applications to Cloudflare.
Create a new project named `hello-ai` by running:
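Assuming the current C3 invocation, where the project name is passed through to the CLI after `--`:

```sh
npm create cloudflare@latest -- hello-ai
```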
Running `npm create cloudflare@latest` will prompt you to install the `create-cloudflare` package and lead you through setup. C3 will also install Wrangler, the Cloudflare Developer Platform CLI.
For setup, select the following options:
- For What would you like to start with?, choose Hello World example.
- For Which template would you like to use?, choose Hello World Worker.
- For Which language do you want to use?, choose TypeScript.
- For Do you want to use git for version control?, choose Yes.
- For Do you want to deploy your application?, choose No (we will be making some changes before deploying).
This will create a new `hello-ai` directory. Your new `hello-ai` directory will include:

- A “Hello World” Worker at `src/index.ts`.
- A `wrangler.toml` configuration file.
Go to your application directory:
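Using the project name from the earlier step:

```sh
cd hello-ai
```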
You must create an AI binding for your Worker to connect to Workers AI. Bindings allow your Workers to interact with resources, like Workers AI, on the Cloudflare Developer Platform.
To bind Workers AI to your Worker, add the following to the end of your `wrangler.toml` file:
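A minimal sketch of the binding, assuming the conventional binding name `AI` (the name here is what the `env.AI` reference in your Worker code expects):

```toml
[ai]
binding = "AI"
```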
Your binding is available in your Worker code on `env.AI`.
You will need your gateway ID for the next step. You can learn how to create an AI Gateway in this tutorial.
You are now ready to run an inference task in your Worker. In this case, you will use an LLM, `llama-3.1-8b-instruct-fast`, to answer a question. Your gateway ID is found on the dashboard.
Update the `index.ts` file in your `hello-ai` application directory with the following code:
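A sketch of the handler, assuming the model's full identifier carries the `@cf/meta/` prefix; `{gateway_id}` is a placeholder you replace with your own gateway ID:

```ts
export interface Env {
  // Binding configured in wrangler.toml
  AI: Ai;
}

export default {
  async fetch(request, env): Promise<Response> {
    // Run the model, routing the request through your AI Gateway
    const response = await env.AI.run(
      "@cf/meta/llama-3.1-8b-instruct-fast",
      {
        prompt: "What is the origin of the phrase Hello, World",
      },
      {
        gateway: {
          id: "{gateway_id}", // replace with your gateway ID from the dashboard
        },
      },
    );

    return new Response(JSON.stringify(response));
  },
} satisfies ExportedHandler<Env>;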
Up to this point, you have created an AI binding for your Worker and configured your Worker to be able to execute the Llama 3.1 model. You can now test your project locally before you deploy globally.
While in your project directory, test Workers AI locally by running `wrangler dev`:
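From the project directory:

```sh
npx wrangler dev
```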
You will be prompted to log in after you run `npx wrangler dev`. Wrangler will then give you a URL (most likely `localhost:8787`) where you can review your Worker. After you go to the URL Wrangler provides, you will see a message that resembles the following example:
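The model's output varies from run to run; the following shows only the shape of the response, with illustrative placeholder text rather than real model output:

```json
{"response":"A common account traces the phrase to example programs written at Bell Labs in the 1970s..."}
```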
Before deploying your AI Worker globally, log in with your Cloudflare account by running:
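Using Wrangler's built-in login flow:

```sh
npx wrangler login
```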
You will be directed to a web page asking you to log in to the Cloudflare dashboard. After you have logged in, you will be asked if Wrangler can make changes to your Cloudflare account. Scroll down and select Allow to continue.
Finally, deploy your Worker to make your project accessible on the Internet. To deploy your Worker, run:
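Using Wrangler's deploy command:

```sh
npx wrangler deploy
```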
Once deployed, your Worker will be available at a URL like:
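Assuming your project name and your own `workers.dev` subdomain:

```txt
https://hello-ai.<YOUR_SUBDOMAIN>.workers.dev
```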
Your Worker will be deployed to your custom `workers.dev` subdomain. You can now visit the URL to run your AI Worker.
By completing this tutorial, you have created a Worker, connected it to Workers AI through an AI Gateway binding, and successfully run an inference task using the Llama 3.1 model.