Let's play a game of prompt engineering frisbee.
Rules: Each person takes a turn creating a prompt template and tries to get the LLM to respond in a believable way based on the template. The first person whose prompt fails to sound convincing loses the round.
Setup
Let's start with the following code blocks in a Google Colab notebook. Read my post on how to work with Google Colab notebooks if you haven't used one before.
!pip install huggingface-hub langchain-core langchain-huggingface

# userdata gives the notebook access to Colab secrets (used for HF_TOKEN below)
from google.colab import userdata
from langchain_huggingface import HuggingFaceEndpoint
from langchain_core.prompts import PromptTemplate

repo_id = "mistralai/Mistral-7B-Instruct-v0.2"

llm = HuggingFaceEndpoint(
    huggingfacehub_api_token=userdata.get('HF_TOKEN'),
    repo_id=repo_id,
    temperature=0.2,
)
You'll also need to create a Hugging Face API token that has access to inference endpoints and add it to the notebook as a secret named HF_TOKEN. Read my post on creating a Hugging Face API token if you need a refresher.
We'll be using the LLM mistralai/Mistral-7B-Instruct-v0.2, which strikes a good balance: it's relatively lightweight while still producing believable text.
Many Hugging Face models require you to share some information with the company behind the model before you're allowed to use it, and Mistral's models are no exception.
Here's the model page where you can go to share your info:
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
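Once you've been granted access, it's worth a quick sanity check that the token and the endpoint are working before building any prompts. Here's one optional way to do that, assuming the setup cell above has already run:
# Optional sanity check: send a raw prompt straight to the endpoint.
# If the token or model access isn't set up yet, this call will raise an error.
print(llm.invoke("Reply with one short sentence confirming you received this."))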
Write the Code
Add the following code block to your notebook:
# The question stays the same for every round of the game; only the context changes.
user_query = "Why is the sky blue? Give me a historical and cultural answer."

context = """You are the leader of an imperial space force, similar to Darth Vader. Make odd breathing sounds as you talk.
Keep your response to a maximum of three sentences.
"""

# The template slots the context and the question into a single prompt.
template = """
Context: {context}
Question: {question}
Helpful Answer:
"""

prompt = PromptTemplate.from_template(template)

# Pipe the prompt into the LLM to build a simple chain, then run it.
llm_chain = prompt | llm
llm_chain.invoke({"context": context, "question": user_query})
Run this and make sure you're getting an output. I get the following:
Hmmm... pauses for dramatic effect ...Throughout the history of our great empire, scholars have pondered the enigma of the sky's hue. Ancient Sith lore speaks of a cosmic dance between light and dark, reflecting off the very fabric of our universe. pauses for more effect The cultural significance of this celestial phenomenon varies across galaxies, but one thing remains constant: the breathtaking beauty of the blue expanse above. exhales deeply May the Force guide your quest for knowledge.
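A small aside: since the chain returns a plain string and it's the last expression in the cell, Colab shows the raw string representation. If you'd rather read clean output, capture the response and print it:
# Capture the chain's output and print it for easier reading.
response = llm_chain.invoke({"context": context, "question": user_query})
print(response)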
Now let's change the context to something else:
context = """You are a farmer who uses lots of metaphors related to fishing. Use some improper grammar.
Keep your response to a maximum of three sentences.
"""
Now I get the following:
Aww shucks, I ain't no scholar, but they say it's 'cause o' them tiny particles in the air, scatterin' the sun's light like a school of fish in the sea. Back in the day, fisherfolk believed the sky was painted blue, just like their nets, bringin' good luck to their catch. So there ya have it!
Let the contest begin!
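To make trading turns easier, you could wrap the chain in a small helper that takes a context and a question; the play_turn function below is just one illustrative way to set that up:
def play_turn(context, question):
    """Run one round of prompt frisbee with a new persona context."""
    return llm_chain.invoke({"context": context, "question": question})

# Example: replay the farmer round through the helper.
print(play_turn(context, user_query))
From there, each player swaps in a new context string and the group judges whether the response still sounds convincing.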