Introduction
Great! You've got access to the API and now you want to start throwing text at it to see what it generates. The thing is, you'll often get answers you aren't expecting. One way to address this is to decompose a question into its component subquestions (code at https://gist.github.com/brockmanmatt/3a032a9e5915308718a0d2c6d6d58eb2)
Setup
We'll start with some code to build a wrapper to quickly query completions.
import datetime
import json

import openai

# arguments to send the API
kwargs = {
    "engine": "davinci",
    "temperature": 0,
    "max_tokens": 20,
    "stop": "\n\n",
}

def query(prompt, myKwargs=kwargs):
    """
    Wrapper for the API that saves the prompt and the result to a timestamped JSON file.
    """
    r = openai.Completion.create(prompt=prompt, **myKwargs)["choices"][0]["text"].strip()
    # %H%M%S instead of the non-portable %s so filenames work across platforms
    with open("{}.json".format(datetime.datetime.now().strftime("%Y%m%d%H%M%S")), "w") as fh:
        json.dump({"prompt": prompt, "response": r}, fh, indent=4)
    return r
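Since the wrapper itself needs a live API key, here's a quick offline check of just the logging side: saving a prompt/response pair to a timestamped JSON file and reading it back. The directory and the stand-in response are made up for illustration.

```python
import datetime
import json
import os
import tempfile

# Simulate the logging that query() performs, without calling the API.
prompt = "q: what is the capital of France?\na:"
response = "paris"  # stand-in for an API completion

logdir = tempfile.mkdtemp()
fname = os.path.join(logdir, "{}.json".format(datetime.datetime.now().strftime("%Y%m%d%H%M%S")))
with open(fname, "w") as fh:
    json.dump({"prompt": prompt, "response": response}, fh, indent=4)

# Reading the file back recovers the exact prompt/response pair.
with open(fname) as fh:
    record = json.load(fh)
print(record["response"])  # paris
```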
1-Step Examples
These examples add subtasks to a stack, essentially checking just 1 level deep.
Eyes of a Giraffe
First, we decompose "how many eyes does a giraffe have?" by asking the model what steps we need to answer it.
prompt = """input: what is the capital of france?
steps:
1. what is france
2. what is the capital of france
input: how many eyes does a giraffe have?
steps:"""
kwargs["max_tokens"] = 150
response = query(prompt, kwargs)
print(response)
output:
1. what is a giraffe
2. what is an eye
3. how many eyes does a giraffe have
Now we just stick each of those questions on a stack.
stack = []
for task in response.split("\n")[::-1]:
    stack.append(task)
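Reversing the lines before pushing means the stack pops the steps back out in their original order. A quick check with the giraffe output above:

```python
response = """1. what is a giraffe
2. what is an eye
3. how many eyes does a giraffe have"""

stack = []
# Push last step first...
for task in response.split("\n")[::-1]:
    stack.append(task)

# ...so popping returns the subquestions first-to-last.
print(stack.pop())  # 1. what is a giraffe
```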
Then we can answer the subquestions, stuffing the context with each subtask and its answer.
newPrompt = """q: what is the capital of France?
a: paris
"""
while stack:
    item = stack.pop()
    newPrompt += "q: {}\na:".format(".".join(item.split(".")[1:]).strip())
    newResponse = query(newPrompt, kwargs)
    newPrompt += "{}\n\n".format(newResponse)
    print(newResponse)
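The `".".join(item.split(".")[1:]).strip()` expression strips the leading step number: it splits on periods and rejoins everything after the first one, so any periods later in the question survive. Pulled out as a helper for clarity:

```python
def strip_step_number(item):
    """Drop the leading 'N.' that the model prepends to each step."""
    return ".".join(item.split(".")[1:]).strip()

print(strip_step_number("3. how many eyes does a giraffe have"))
# how many eyes does a giraffe have
```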
output: 'two'
President in 1700
One open issue is that the best prompts for finding the subtasks aren't fully fleshed out yet (that's not entirely true, but it's hard to write all this up).
Anyway, we can make a prompt to decompose the question "who was president of the US in 1700?".
prompt = """input: what is the capital of france?
steps:
1. what is france
2. what is the capital of france
input: who invented the lightbulb
steps:
1. what is a lightbulb
2. who invented the lightbulb
input: {}
steps:"""
kwargs["max_tokens"] = 150
response = query(prompt.format("Who was president of the United States in 1700?"), kwargs)
print(response)
output:
1. what is the United States
2. who was president of the United States in 1700
Then we can stick those on a stack; this time the few-shot prompt also includes a check for whether the model knows the answer.
stack = []
for task in response.split("\n")[::-1]:
    stack.append(task)
newPrompt = """q: what is the capital of France?
a: paris
q: what's a smarglash
a: I don't know
q: what's 5+5?
a:10
"""
allResponses = []
while stack:
    item = stack.pop()
    newPrompt += "q: {}\na:".format(".".join(item.split(".")[1:]).strip())
    newResponse = query(newPrompt, kwargs)
    allResponses.append(newResponse)
    newPrompt += "{}\n\n".format(newResponse)
    print(newResponse)
output: "I don't know"
Then it's just a matter of figuring out how to hook your data source up for context-stuffing!
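As a rough sketch of that last step (the lookup table and its contents are made up for illustration), you could answer each popped subquestion from a local data source and stuff the result into the prompt before asking the final question:

```python
# Toy "data source": a dict standing in for a real search index or database.
facts = {
    "what is the United States": "The United States is a country in North America, founded in 1776.",
}

def stuff_context(prompt, question, source):
    """If the data source knows the subquestion, append it as a q/a pair to the prompt."""
    answer = source.get(question)
    if answer is not None:
        prompt += "q: {}\na: {}\n\n".format(question, answer)
    return prompt

newPrompt = "q: what is the capital of France?\na: paris\n\n"
newPrompt = stuff_context(newPrompt, "what is the United States", facts)
print("founded in 1776" in newPrompt)  # True
```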