Most of us have already tested and used ChatGPT for some tasks. Perhaps you have also tried tools built on top of it, such as MONICA or Jupyter Notebook extensions for data science projects. Once you understand how these tools work, you might wonder how to avoid re-entering prompts for the next answer, and even how to adjust your sequence of questions and re-ask (re-execute) a conversation. I found these blogs interesting: Build Your Own ChatGPT-Like App with Streamlit and ChatGPT for SAP developments – threat or an opportunity?

Let’s see how to build your own class with save, load, and continue methods, import it like a local library, and explore the capabilities of the ChatGPT API.

Table of contents

Building the class chatGPT
  Constructor
  Post the message
  Generate response
  Reset data content
  Save data content
  Load data content
  List messages
  The class chatGPT
Using the class chatGPT
  First question and response
  Retrieve data content
  Save data content
  Reset data content
  Load data content
  Continue conversation
Conclusion

The full working code is provided on GitHub with my last run.

In Chatbot chatGPT with save & load & continue v1.0 the class is defined directly in Jupyter Notebook

In Chatbot chatGPT with save & load & continue v1.1 the class is saved in chatGPT.py, so you can import it and keep the code outside the notebook.

from chatGPT import chatGPT # chatGPT.py file with class chatGPT

 

Execute notebooks in your browser with Binder. It takes time to build the environment.

Building the class chatGPT

Let’s define the class chatGPT and its methods.

Constructor

First, we start with the constructor, which initializes the lists that store responses, completions, and messages, along with the model and api_key.

    def __init__(self, api_key, model):
        openai.api_key = api_key
        self.model = model
        
        self.messages = list()
        self.responses = list()
        self.completions = list()  
        self._file_default = "chatbot_data.json"

Post the message

Next, we define the method that posts the message in the required format.

    def message_post(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        return self.messages
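For reference, message_post simply appends an entry in the message format the Chat Completions API expects. A minimal stand-alone sketch with a plain list (the sample prompt is illustrative):

```python
# Sketch of the message format message_post builds, using a plain
# list instead of the class (the sample prompt is illustrative).
messages = []
prompt = "Romania cities"
messages.append({"role": "user", "content": prompt})
print(messages)  # [{'role': 'user', 'content': 'Romania cities'}]
```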

Generate response

Next, we call the API in the generate_response method to get the response for the messages content and store all data in responses, completions, and messages.

    def generate_response(self):
        completion = openai.ChatCompletion.create(
            model=self.model,
            messages=self.messages
        )
        response = completion.choices[0].message.content
        self.messages.append({"role": "assistant", "content": response})
        
        self.response = response
        self.responses.append(response)
                
        self.completion = completion
        self.completions.append(completion)
        
        return self.response
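The completion object also behaves like a dictionary. As a sketch, here is a plain-dict stand-in with the same shape as the completion shown later in this post, showing how the response text and token usage are read (the values are illustrative; the real object additionally supports the attribute access used in generate_response):

```python
# Plain-dict stand-in for the API completion object (values illustrative).
completion = {
    "choices": [
        {"index": 0, "finish_reason": "stop",
         "message": {"role": "assistant", "content": "1. Bucharest"}}
    ],
    "usage": {"prompt_tokens": 11, "completion_tokens": 132,
              "total_tokens": 143},
}
# Same access pattern as in generate_response, in dict form:
response = completion["choices"][0]["message"]["content"]
print(response)                             # 1. Bucharest
print(completion["usage"]["total_tokens"])  # 143
```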

Reset data content

The reset method clears all content from responses, completions, and messages.

    def reset(self):
        self.messages = list()
        self.responses = list()
        self.completions = list()    

Save data content

The save_data method saves the content of responses, completions, and messages to a JSON file.

    def save_data(self, file_path = None):
        data = {
            "responses": self.responses,
            "completions": self.completions,
            "messages": self.messages
        }
        
        if file_path is None:
            file_path = self._file_default
        
        with open(file_path, "w") as f:
            json.dump(data, f)
            

Load data content

The load_data method returns the content of a previously saved JSON file. The content is not loaded into the object, so you can also use the method with a new object.

    def load_data(self, file_path = None):
        if file_path is None:
            file_path = self._file_default
        
        with open(file_path, "r") as f:
            data = json.load(f)
            responses = data["responses"]
            completions = data["completions"]
            messages = data["messages"]
            return responses, completions, messages
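Under the hood, save_data and load_data are a plain JSON round trip. Here is a self-contained sketch of that round trip using only the standard library (the file location and sample data are illustrative):

```python
import json
import os
import tempfile

# Round-trip sketch of what save_data/load_data do (data is illustrative).
data = {
    "responses": ["1. Bucharest"],
    "completions": [],
    "messages": [{"role": "user", "content": "Romania cities"},
                 {"role": "assistant", "content": "1. Bucharest"}],
}
file_path = os.path.join(tempfile.gettempdir(), "chatbot_data.json")

# Save, as in save_data:
with open(file_path, "w") as f:
    json.dump(data, f)

# Load and unpack, as in load_data:
with open(file_path, "r") as f:
    loaded = json.load(f)
responses = loaded["responses"]
completions = loaded["completions"]
messages = loaded["messages"]
print(messages[-1]["content"])  # 1. Bucharest
```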

The load_messages method passes the messages content to the object so you can continue your conversation with your chatGPT.

    def load_messages(self, messages):
        self.messages = messages 

List messages

The list_messages method prints the content of messages.

    def list_messages(self):
        # print every message with its role and content
        for e in self.messages:
            print(f"role: {e['role']}")
            print(f"content:\n{e['content']}")
            print()
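On sample data, the loop inside list_messages produces output like this stand-alone sketch (the messages are illustrative):

```python
# Stand-alone sketch of the list_messages loop on illustrative messages.
messages = [
    {"role": "user", "content": "Romania cities"},
    {"role": "assistant", "content": "1. Bucharest\n2. Cluj-Napoca"},
]
for e in messages:
    print(f"role: {e['role']}")
    print(f"content:\n{e['content']}")
    print()
```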

 

The class chatGPT

This is the entire class.

import openai  # openai Python library (pre-1.0 API)
import json

class chatGPT:
    def __init__(self, api_key, model):
        openai.api_key = api_key
        self.model = model
        
        self.messages = list()
        self.responses = list()
        self.completions = list()        

        self._file_default = "chatbot_data.json"
        
    def message_post(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        return self.messages

    def generate_response(self):
        completion = openai.ChatCompletion.create(
            model=self.model,
            messages=self.messages
        )
        response = completion.choices[0].message.content
        self.messages.append({"role": "assistant", "content": response})
        
        self.response = response
        self.responses.append(response)
                
        self.completion = completion
        self.completions.append(completion)
        
        return self.response

    def reset(self):
        self.messages = list()
        self.responses = list()
        self.completions = list()        
        
    def save_data(self, file_path = None):
        data = {
            "responses": self.responses,
            "completions": self.completions,
            "messages": self.messages
        }
        
        if file_path is None:
            file_path = self._file_default
        
        with open(file_path, "w") as f:
            json.dump(data, f)
            
    def load_data(self, file_path = None):
        if file_path is None:
            file_path = self._file_default
        
        with open(file_path, "r") as f:
            data = json.load(f)
            responses = data["responses"]
            completions = data["completions"]
            messages = data["messages"]
            return responses, completions, messages
    
    def load_messages(self, messages):
        self.messages = messages 

    def list_messages(self):
        # print every message with its role and content
        for e in self.messages:
            print(f"role: {e['role']}")
            print(f"content:\n{e['content']}")
            print()

Using the class chatGPT

Once you have defined the class, you can start using it. Initialize your object. You can define multiple objects (bots) at once, each with its own content. If you use save and load, manage the file_path argument so the bots do not overwrite the shared default file!

# initialize
my_chatbot = chatGPT(api_key=openai_api_key, model="gpt-3.5-turbo")
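A common precaution, sketched here, is to read the API key from an environment variable instead of hard-coding it in the notebook; the variable name OPENAI_API_KEY is an assumption, not something the class requires.

```python
import os

# Hypothetical: read the key from an environment variable rather than
# hard-coding it in the notebook; OPENAI_API_KEY is an assumed name.
openai_api_key = os.environ.get("OPENAI_API_KEY", "")
```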

First question and response

First question. 🧐

%%time
# your question
my_chatbot.message_post("Romania cities")
response = my_chatbot.generate_response()
print(response)

 

The response. 🤖

# Out
1. Bucharest
2. Cluj-Napoca
3. Timișoara
4. Iași
5. Craiova
6. Constanța
7. Galați
8. Brașov
9. Ploiești
10. Oradea
11. Arad
12. Sibiu
13. Bacău
14. Baia Mare
15. Buzău
16. Focșani
17. Suceava
18. Târgu Mureș
19. Bistrița
20. Satu Mare.
CPU times: total: 78.1 ms
Wall time: 40.7 s

Retrieve data content

The data content is stored in the attributes response, completion, responses, completions, and messages, so you can access it without calling methods. 👁️

# completion
my_chatbot.completion
[<OpenAIObject chat.completion id=chatcmpl-74O3WnmLwaz4zqvmTUs29Jvyg3IwT at 0x20060e0ae50> JSON: {
   "choices": [
     {
       "finish_reason": "stop",
       "index": 0,
       "message": {
         "content": "1. Bucharest\n2. Cluj-Napoca\n3. Timi\u0219oara\n4. Ia\u0219i\n5. Craiova\n6. Constan\u021ba\n7. Gala\u021bi\n8. Bra\u0219ov\n9. Ploie\u0219ti\n10. Oradea\n11. Arad\n12. Sibiu\n13. Bac\u0103u\n14. Baia Mare\n15. Buz\u0103u\n16. Foc\u0219ani\n17. Suceava\n18. T\u00e2rgu Mure\u0219\n19. Bistri\u021ba\n20. Satu Mare.",
         "role": "assistant"
       }
     }
   ],
   "created": 1681280138,
   "id": "chatcmpl-74O3WnmLwaz4zqvmTUs29Jvyg3IwT",
   "model": "gpt-3.5-turbo-0301",
   "object": "chat.completion",
   "usage": {
     "completion_tokens": 132,
     "prompt_tokens": 11,
     "total_tokens": 143
   }
 }]

Save data content

You can save the content. If no file path is given, the default file name "chatbot_data.json" is used.

# save in a file responses, completions, messages
my_chatbot.save_data()

Reset data content

You can reset the data content; the three lists become empty.

# reset - responses, completions, messages
my_chatbot.reset()
my_chatbot.responses, my_chatbot.completions, my_chatbot.messages

Load data content

Load the content from the previously saved file and inspect it.

# load from a file responses, completions, messages
responses, completions, messages = my_chatbot.load_data()
responses, completions, messages

Continue conversation

Pass the loaded messages to your chatbot. You can also create a new object if you want or need one.

# pass the previously loaded messages to the chatbot
my_chatbot.load_messages(messages)

 

And continue your conversation.

%%time
# continue with a follow-up question
my_chatbot.message_post("monuments in above cities")
response = my_chatbot.generate_response()
print(response)

 

The response recalls (uses) the previous conversation from the content that was saved and loaded.

# Out
Here are some of the notable monuments in the above mentioned cities in Romania:

1. Bucharest: Palace of the Parliament, Romanian Athenaeum, Triumphal Arch, Stavropoleos Monastery
2. Cluj-Napoca: St. Michael's Church, Matthias Corvinus Monument, National History Museum of Transylvania, Banffy Palace
3. Timișoara: Huniade Castle, Millennium Church, Victory Square, Timișoara Orthodox Cathedral
4. Iași: Palace of Culture, Golia Monastery, Three Hierarchs Monastery, Union Square
5. Craiova: Nicolae Romanescu Park, Saint Demetrius Cathedral, Craiova Art Museum, Alexandru Ioan Cuza Park
6. Constanța: The Roman Mosaic, Carol I Mosque, The Casino, Genoese Lighthouse.
7. Galați: Natural Sciences Museum, Mihai Eminescu Park, Galati Public Garden, The Sulina channel lighthouse 
8. Brașov: Black Church, Mount Tâmpa, The White Tower, The Rope Street
9. Ploiești: Oil Museum, Clock Museum, Memorial House of Nichita Stănescu, Art Museum Ploiești.
10. Oradea: Unirii Square, The Black Eagle Palace, Oradea Fortress, The Moon Church
11. Arad: Holy Trinity Cathedral, Ioan Slavici Theatre, Palace of Culture, Reconciliation Park
12. Sibiu: The Council Tower, The Brukenthal Palace, The Bridge of Lies, The Lutheran Evangelical Cathedral
13. Bacău: Stephen the Great Monument, Bacau Natural Sciences Museum, Saint John the Baptist Church, Mihail Sadoveanu Memorial House
14. Baia Mare: Merry Cemetery of Saschiz, Baia Mare Ethnographic Museum, The Baia Mare Art Museum, Baia Mare City Hall.
15. Buzău: The Communal Palace, Saint Eligius Church, The Municipal Park, The Waterfall of Lopata
16. Focșani: Focșani History Museum, Saint Nicholas Church, The Heroes Memorial, Zoological Park Focșani
17. Suceava: The Fortress of Suceava, The Painted Monasteries of Bucovina, Stefan cel Mare Statue, The History Museum of Bucovina
18. Târgu Mureș: Culture Palace, Teleki-Bolyai Library, Piarist Church, Reformed Church 
19. Bistrița: The Lutheran Church, Bistrița Museum Complex, The Main Square, The Mill.
20. Satu Mare: Satu Mare Roman-Catholic Cathedral, The Decebal Victory Monument, The Firemen’s Tower, The Terry Guard Tower.
CPU times: total: 15.6 ms
Wall time: 2min 10s

 

Conclusion

I participated in the first SAP HANA ML Challenge – Employee Churn 🤖 organized by the SAP expert team and I came in second place 🏆.

I hope you enjoy the blog I wrote about the challenge and the solution.

In March 2023 Witalij Rudnicki organized the next SAP Community Developer Challenge: EDA with SAP HANA and Python.

I started using ChatGPT after the SAP HANA ML Challenge and found it can be a brave assistant, along with all the online resources, adding more interactivity to your brainstorming. It makes mistakes and comes up with new ideas all the time, but your human role is to understand them. Large Language Models (LLMs) are built on top of public online human contributions. One day this blog might become part of an LLM, but without any link to the source. A book or article includes a list of sources, and I think LLMs should provide the same along with their responses – ranked links to sources. This could make LLMs more accurate and connect humans with humans beyond time and distance. 😉😊😪🤔

 

Sara Sampaio

Author Since: March 10, 2022