Open Weather API (and ChatGPT)

Here’s an approach that worked for me: make the Open Weather request, then feed the response into OpenAI to leverage their GPT models.

:stop_sign: Note that you have to be very specific about the syntax or it won’t work. The little details really matter here.

Here’s a summary of the flow:

  1. Make the HTTP request to Open Weather’s OneCall API
  2. Use an expression to build the specially formatted payload for OpenAI’s GPT API
    1. Stub the base message template for OpenAI
    2. Double stringify the weather response data
    3. Trim off the wrapping " characters from double stringifying
    4. Build the final message using the template
  3. Make the HTTP request to OpenAI
  4. Store the response message content from OpenAI in a variable
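The four steps above can be sketched in Python. Note this is a hedged sketch, not the original workflow (which uses a low-code tool): the coordinates, function names, and One Call 3.0 endpoint are assumptions for illustration.

```python
# Sketch of the flow: fetch weather, build the prompt, call OpenAI, read the reply.
import json
import urllib.request

OPENWEATHER_URL = "https://api.openweathermap.org/data/3.0/onecall"
OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def fetch_daily(lat, lon, api_key):
    # Step 1: call Open Weather's One Call API and take the first daily
    # entry, matching the daily[0] used in the expression below.
    url = f"{OPENWEATHER_URL}?lat={lat}&lon={lon}&units=imperial&appid={api_key}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["daily"][0]

def build_message(daily):
    # Step 2: inject the weather JSON into the prompt template
    # (abbreviated here; the full prompt is in the First Expression snippet).
    template = ("Can you put together a single sentence forecast from the "
                "following Open Weather API JSON response? The units are "
                "imperial.\n~~~\n__WEATHER_JSON__\n~~~")
    # json.dumps on the whole payload (below) handles the escaping that the
    # double-stringify trick does by hand, so one stringify is enough here.
    return template.replace("__WEATHER_JSON__", json.dumps(daily))

def ask_openai(message, api_key):
    # Steps 3-4: POST the chat payload to OpenAI and pull out the reply text.
    payload = json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": message}],
    }).encode("utf-8")
    req = urllib.request.Request(
        OPENAI_URL,
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```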

Snippets

Here are the relevant snippets for your reference.

First Expression
Build the specially formatted payload for OpenAI’s API.

message = "Can you put together a single sentence forecast with the most relevant information from the following Open Weather API JSON response for tomorrow's weather? The units are imperial. Please keep the response concise and only include information if it is relevant. \\n\\nBefore generating a response, take a moment to internally (silently) think about the steps you would take to determine what information is relevant to include in the forecast. Ignore the provided 'summary' if one is included as I want you to generate your own. \\n~~~\\n __WEATHER_JSON__ \\n~~~"
stringJson = stringify(stringify($context.response.data.daily[0]))
stringJson = substring(stringJson, 1, size(stringJson)-1)
replace(message, "__WEATHER_JSON__", stringJson)

Notes:

  • The base message template must properly escape any new-lines, double quotes, etc.
  • We double stringify the JSON that we get back from Open Weather since we need it properly escaped:
    "{\"humidity\": 48, \"temp\": {\"min\": 73.45... }"
    
  • But the double stringify ends up wrapping the whole thing in quotes, so we strip those off:
    {\"humidity\": 48, \"temp\": {\"min\": 73.45... }
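The double-stringify-then-trim trick can be demonstrated in Python, which has an equivalent to `stringify` in `json.dumps` (a sketch for illustration; the original uses the workflow tool's expression functions):

```python
import json

weather = {"humidity": 48, "temp": {"min": 73.45}}

# Inner dumps turns the data into a JSON string; outer dumps escapes that
# string's quotes, producing embeddable JSON wrapped in one extra pair of ".
string_json = json.dumps(json.dumps(weather))

# Slice off the wrapping quotes, like the substring(..., 1, size - 1) step.
string_json = string_json[1:-1]

print(string_json)  # {\"humidity\": 48, \"temp\": {\"min\": 73.45}}
```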
    

OpenAI HTTP Action
This is a standard HTTP action, and the formatting follows OpenAI’s documentation. If you want to adjust the parameters, you can tweak them in the OpenAI Playground, use the View Code button, and switch the type to JSON to make sure you get the right format. It’s critical that the content within your messages gets injected properly, as shown in the example.

{
  "model": "gpt-3.5-turbo",
  "messages": [{"role": "user", "content": "{{$AString}}"}],
  "temperature": 1,
  "max_tokens": 256,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}

Notes:

  • Make sure to POST the JSON data to the right endpoint
  • Ensure you’ve set up the headers properly:
    Key           Value
    Authorization Bearer YOUR-KEY
  • Ensure the Payload is formatted exactly as shown above and the content is pointing to the proper variable (the output from the earlier expression)

Use OpenAI Response
We’re grabbing the result from the response based on the format noted in the OpenAI documentation:

$context.response.data.choices[0].message.content
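In Python terms, that path digs into the documented chat-completions response shape like this (the sample response body here is hand-made for illustration, not a real API reply):

```python
# Minimal example of the documented response shape: a list of choices,
# each with a message object carrying the generated content.
response_data = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Sunny and mild, high near 73."}}
    ]
}

# Equivalent of $context.response.data.choices[0].message.content
forecast = response_data["choices"][0]["message"]["content"]
print(forecast)  # Sunny and mild, high near 73.
```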