Quick Start
Learn how to deploy NextChat to Vercel and extend its abilities with Function Calling via the Vivgrid AI Bridge
This guide will help you get started with the Vivgrid OpenAI Bridge. You will learn how to extend your AI Agent's abilities with LLM Function Calling.
NextChat is an open-source, cross-platform ChatGPT/Gemini UI. You can create your own by forking and deploying NextChat to Vercel.
Once deployed, configure the environment variables on Vercel:
Vercel - Environment Variables Settings Page

These are the required environment variables:
BASE_URL: https://api.vivgrid.com
OPENAI_API_KEY: grab it from the Vivgrid Console
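Before opening the NextChat UI, you can sanity-check these two values by sending a request to the bridge yourself. The sketch below is an assumption-based example: it assumes the bridge exposes the standard OpenAI-compatible /v1/chat/completions path and that gpt-5.1 is available on your plan; adjust as needed.

// sanity_check.go - a minimal sketch that verifies BASE_URL and OPENAI_API_KEY
// work against the Vivgrid OpenAI Bridge (the endpoint path and model name are
// assumptions; adjust them to your plan).
package main

import (
    "bytes"
    "fmt"
    "net/http"
    "os"
)

func main() {
    body := []byte(`{"model":"gpt-5.1","messages":[{"role":"user","content":"hello"}]}`)
    req, _ := http.NewRequest("POST", "https://api.vivgrid.com/v1/chat/completions", bytes.NewReader(body))
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // a 200 response means NextChat will be able to reach the bridge too
    fmt.Println(resp.Status)
}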
Let's try asking the question "Compare amazon and shopify network performance" in your AI application.
You will see the following:
OpenAI Chat Completions w/o Function Calling

OpenAI cannot answer this question, but we can extend the gpt-5.1 model's capabilities with the Function Calling feature. OpenAI has a great cookbook on how to call functions with chat models, but it is complex to implement and maintain:
OpenAI Cookbook: How to call functions with chat models
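To give a sense of that complexity, the sketch below prints roughly the kind of tools definition you would have to hand-maintain and attach to every Chat Completions request. The function name and schema here are illustrative, not something the cookbook or Vivgrid defines; with YoMo, this JSON plus the tool_call round-trip is generated for you.

// tools_sketch.go - roughly the hand-written "tools" JSON a chat completions
// request needs so the model knows it may call a (hypothetical)
// get_ip_and_latency function.
package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    tools := []map[string]any{{
        "type": "function",
        "function": map[string]any{
            "name":        "get_ip_and_latency",
            "description": "Return the IP address and ping latency of a domain",
            "parameters": map[string]any{
                "type": "object",
                "properties": map[string]any{
                    "domain": map[string]any{
                        "type":        "string",
                        "description": "Domain of the website, e.g. example.com",
                    },
                },
                "required": []string{"domain"},
            },
        },
    }}

    b, _ := json.MarshalIndent(tools, "", "  ")
    fmt.Println(string(b))
    // On top of this you still have to parse tool_calls from the response,
    // invoke the function, and send a follow-up request with the result.
    // That plumbing is what YoMo and Vivgrid handle for you below.
}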

Let's implement a Linux ping Function Calling serverless function in Go. Before that, you need to install the YoMo Framework:

curl -fsSL https://get.yomo.run | sh

Create a function ping to measure the network performance of a given website:
func ping(domain string) string {
    // get all ip addresses of the domain
    ips, _ := net.LookupIP(domain)
    // take the first ip address and measure latency by ping
    pinger, _ := ping.NewPinger(ips[0].String())
    pinger.Count = 3
    pinger.Timeout = time.Second * 3
    // blocks until finished
    pinger.Run()
    // get send/receive/rtt stats
    stats := pinger.Statistics()
    // log the result
    slog.Info("[sfn] get ping latency", "domain", domain, "ip", ips[0], "latency", stats.AvgRtt, "PacketLoss", fmt.Sprintf("%f%%", stats.PacketLoss))
    // return the result to OpenAI as a string, as required by the OpenAI spec
    return fmt.Sprintf("domain %s has ip %s with average latency %s, make sure to answer with the IP address and latency", domain, ips[0], stats.AvgRtt)
}

Next, we need to wrap it to meet the OpenAI Function Calling spec. Luckily, we have YoMo to help us.
First, we need to define a description for our function. This helps OpenAI understand the function and is very important for accuracy. What we need to do is implement the Description function in app.go:

func Description() string {
    return `if user asks ip or network latency of a domain,
you should return the result of the given domain.
try your best to dissect user expressions to infer the right domain names`
}

The ping() function requires a domain name as a parameter. We will ask OpenAI to infer the domain name from the user input and pass it via Arguments in the tools_call:
// Parameter defines the data type
type Parameter struct {
    Domain string `json:"domain" jsonschema:"description=Domain of the website, e.g. example.com"`
}

// InputSchema defines the arguments data type for the OpenAI tools_call
func InputSchema() any {
    return &Parameter{}
}

Finally, we need to wrap it as a stateful serverless function:
// Handler will be triggered when an OpenAI tools_call occurs
func Handler(ctx serverless.Context) {
    // parse the arguments from the OpenAI tools_call
    var msg Parameter
    ctx.ReadLLMArguments(&msg)
    slog.Info("triggered", "domain", msg.Domain)
    // run the ping function to get the result
    result := ping(msg.Domain)
    // write the result back to OpenAI for the next round of chat completions
    ctx.WriteLLMResult(result)
}

To test it, or to host it on your own infrastructure, create a .env file with the following content:
YOMO_SFN_NAME=llm_sfn_get_ip_lantency
YOMO_SFN_ZIPPER=zipper.vivgrid.com:9000
YOMO_SFN_CREDENTIAL=app-key-secret:${VIVGRID_APP_KEY}.${VIVGRID_APP_SECRET}

then run:
yomo run app.go
ℹ️ YoMo Stream Function file: /Users/fanweixiao/_wrk/llm-sfn-get-ip-and-latency/sfn.yomo
⌛ Create YoMo Stream Function instance...
ℹ️ Starting YoMo Stream Function instance with zipper: zipper.vivgrid.com:9000
ℹ️ Stream Function is running...
ℹ️ Run: /Users/fanweixiao/_wrk/llm-sfn-get-ip-and-latency/sfn.yomo
time=2024-05-20T22:50:44.883+07:00 level=INFO msg="connected to zipper" component=StreamFunction sfn_id=<YOUR_APP_ID> sfn_name=llm_sfn_get_ip_lantency zipper_addr=zipper.vivgrid.com:9000

Next, your serverless function will be deployed to the Vivgrid Geo-distributed Network. It will be available in multiple regions, and requests will be routed to the nearest region.
First, create a file named yc.yml with the following content:
app-key: <VIVGRID_APP_KEY>
app-secret: <VIVGRID_APP_SECRET>
sfn-name: llm_sfn_get_ip_lantency

then run:

yc deploy app.go

Important: Make sure you have the yc CLI installed. If not, install it by following the instructions here.
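If you want to exercise the deployed function without going through NextChat, you can also send the question straight to the bridge. This sketch reuses the same endpoint and model assumptions as the earlier sanity check and prints the assistant's final, tool-augmented answer:

// ask.go - a sketch that sends the comparison question directly to the
// Vivgrid OpenAI Bridge (endpoint path and model name are assumptions;
// adjust to your plan) and prints the answer.
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

func main() {
    payload := map[string]any{
        "model": "gpt-5.1",
        "messages": []map[string]string{
            {"role": "user", "content": "Compare amazon and shopify network performance"},
        },
    }
    body, _ := json.Marshal(payload)

    req, _ := http.NewRequest("POST", "https://api.vivgrid.com/v1/chat/completions", bytes.NewReader(body))
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // decode just enough of the response to print the assistant's answer
    var out struct {
        Choices []struct {
            Message struct {
                Content string `json:"content"`
            } `json:"message"`
        } `json:"choices"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        panic(err)
    }
    if len(out.Choices) > 0 {
        fmt.Println(out.Choices[0].Message.Content)
    }
}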
Once deployed, monitor the real-time logs and ask the question "Compare amazon and shopify network performance" again:
$ yc logs
[sgp.1] OK: {"log":"INFO triggered domain=amazon.com"}
[sgp.1] OK: {"log":"INFO triggered domain=shopify.com"}
[sgp.1] OK: {"log":"INFO [sfn] get ip domain=amazon.com ip=205.251.242.103"}
[sgp.1] OK: {"log":"INFO [sfn] get ip domain=amazon.com ip=54.239.28.85"}
[sgp.1] OK: {"log":"INFO [sfn] start ping domain=amazon.com ip=205.251.242.103"}
[sgp.1] OK: {"log":"INFO [sfn] get ip domain=amazon.com ip=52.94.236.248"}
[sgp.1] OK: {"log":"INFO [sfn] get ip domain=shopify.com ip=23.227.38.33"}
[sgp.1] OK: {"log":"INFO [sfn] start ping domain=shopify.com ip=23.227.38.33"}
[sgp.1] OK: {"log":"INFO [sfn] get ping latency domain=shopify.com ip=23.227.38.33 latency=2.182382ms PacketLoss=0.000000%"}
[sgp.1] OK: {"log":"INFO [sfn] get ping latency domain=amazon.com ip=205.251.242.103 latency=232.835894ms PacketLoss=0.000000%"}

Your AI Agent can now answer the question with a network performance comparison:
OpenAI Chat Completions w/ Function Calling on Vivgrid

Did you know? Your Stateful Serverless Function will be deployed to multiple regions automatically, bringing compute closer to your users. This reduces latency and improves the user experience. For Free Plan users, 7 regions are available.