How to build a ChatGPT agent? A step-by-step guide using React Native and Back4app

In today’s digital age, chatbots and AI-driven applications are becoming increasingly popular. They offer a unique way to interact with users, gather data, and provide instant responses.

One of the challenges developers face is integrating these chatbots with backend servers to fetch or store data.

In this article, we’ll explore how to build a ChatGPT agent that queries the Back4App Parse Server. By the end of this guide, you’ll have a clear understanding of a simple way to integrate these two powerful tools.

1. Code Snippets with Explanation:

1.1 Setting up the Back4App Parse Server

Before diving into the ChatGPT integration, it’s essential to set up the Back4App Parse Server. Here’s a basic setup that initializes Parse in the App.js file (assuming the Parse and AsyncStorage libraries are installed):

// Import the SDKs (e.g. npm install parse @react-native-async-storage/async-storage)
import AsyncStorage from "@react-native-async-storage/async-storage";
import Parse from "parse/react-native.js";

// Initializing the SDK.
Parse.setAsyncStorage(AsyncStorage);

// You need to copy BOTH the Application ID and the JavaScript Key from: Dashboard -> App Settings -> Security & Keys
Parse.initialize(APP_ID, JS_KEY);
Parse.serverURL = "https://parseapi.back4app.com/";

Explanation: This code sets up the connection to the Back4App Parse Server. Make sure to replace the placeholders with your actual credentials.
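
Before moving on, you can optionally verify that the SDK is talking to your Back4App app. The snippet below is only a quick sanity check and assumes a class already exists in your database (here, the categories class used later in this guide):

// Optional sanity check: count the objects in the "categories" class.
// Run this once after Parse.initialize() to confirm your keys and server URL are correct.
const testConnection = async () => {
  try {
    const query = new Parse.Query("categories");
    const count = await query.count();
    console.log(`Connected to Back4App - "categories" has ${count} objects.`);
  } catch (error) {
    console.error("Back4App connection failed:", error);
  }
};

testConnection();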

1.2 Preparing the Chat Screen

import React, { useState, useRef, useEffect } from "react";
import {
  View,
  TextInput,
  Button,
  Text,
  ScrollView,
  ActivityIndicator,
  Linking,
  StyleSheet,
  StatusBar,
  TouchableOpacity,
} from "react-native";
import { Ionicons } from "@expo/vector-icons";
import { SafeAreaView } from "react-native-safe-area-context";
import { generateResponse } from "./ChatGPTService";
import { generateFinalResponse } from "./ChatGPTAnswer";

// In a React Native application
import Parse from "parse/react-native.js";

const SearchScreen = ({ navigation }) => {
  
  const [userInput, setUserInput] = useState(""); 
  const [isLoading, setIsLoading] = useState(false);
  const [chatItems, setChatItems] = useState([]);

  const scrollViewRef = useRef();

  useEffect(() => {
    // Keep the latest message in view whenever the chat updates.
    scrollViewRef.current?.scrollToEnd({ animated: true });
  }, [chatItems]);

  const sendMessage = async () => {
    if (!userInput) return;

    // Add the user's message to the chatItems state immediately
    setChatItems((prevItems) => [
      ...prevItems,
      { type: "message", value: `User: ${userInput}` },
    ]);
    setUserInput("");
    setIsLoading(true);

    try {
      const botResponse = await generateResponse(userInput);
      // The model replies in the form {location: 'New York', keyword: 'football'};
      // quote the bare keys and swap single quotes for double quotes so it parses as JSON.
      const validJsonString = botResponse.replace(/(\w+):/g, '"$1":');
      const jsonString = validJsonString.replace(/'/g, '"');
      let parsedMessage;
      try {
        parsedMessage = JSON.parse(jsonString);
      } catch {
        // Handle the invalid JSON case
        setChatItems((prevItems) => [
          ...prevItems,
          {
            type: "message",
            value: `ChatGPT: Sorry, I couldn't understand what you said. Can you try again?`,
          },
        ]);
        setIsLoading(false);
        return; // return early to stop the remaining code in the sendMessage function
      }

      const results = await Parse.Cloud.run("queryCategories", parsedMessage);

      let newChatItems = [];
      for (let result of results) {
        const botAnswer = await generateFinalResponse(JSON.stringify(result));
        newChatItems.push({
          type: "message",
          value: `ChatGPT: ${botAnswer}`,
          phone: result.phone,
        });
      }

      setChatItems((prevItems) => [...prevItems, ...newChatItems]);
      setIsLoading(false);
    } catch (error) {
      console.error("Error:", error);
      setChatItems((prevItems) => [
        ...prevItems,
        {
          type: "message",
          value: `ChatGPT: Sorry, I couldn't process that.`,
        },
      ]);
      setIsLoading(false);
    }
  };

  return (
    <SafeAreaView style={{ flex: 1 }}>
      <StatusBar backgroundColor="white" barStyle="dark-content" />

      <View style={styles.container}>
        <View style={styles.headerContainer}>
          <View style={{ flex: 0.4 }}>
            <TouchableOpacity
              hitSlop={{ top: 10, bottom: 10, left: 10, right: 10 }}
              style={styles.backButton}
              onPress={() => navigation.goBack()}
            >
              <Ionicons name="arrow-back" size={24} color="black" />
            </TouchableOpacity>
          </View>
          <View style={{ flex: 1 }}>
            <Text style={styles.header}>ChatGPT</Text>
          </View>
        </View>
        <ScrollView
          ref={scrollViewRef}
          contentContainerStyle={{ paddingBottom: 20 }}
          style={styles.chatContainer}
        >
          {chatItems.map((item, index) => {
            return (
              <View
                key={index}
                style={
                  item.value.startsWith("User:")
                    ? styles.userMessage
                    : styles.botMessage
                }
              >
                <Text
                  style={[
                    styles.messageText,
                    {
                      color: item.value.startsWith("User:") ? "white" : "black",
                    },
                  ]}
                >
                  {item.value}
                </Text>
                {item.phone && (
                  <View
                    style={{
                      marginTop: 10,
                      paddingHorizontal: 20,
                      marginBottom: 20,
                    }}
                  >
                    <Button
                      title={`Book Now`}
                      onPress={() => {
                        Linking.openURL(`tel:${item.phone}`);
                      }}
                    />
                  </View>
                )}
              </View>
            );
          })}
        </ScrollView>
        <View style={styles.inputContainer}>
          <TextInput
            style={styles.input}
            value={userInput}
            onChangeText={setUserInput}
            placeholder="Type a message"
          />
          <Button title="Send" onPress={sendMessage} />
        </View>
        {isLoading && (
          <ActivityIndicator
            size="large"
            color="#0000ff"
            style={styles.loader}
          />
        )}
      </View>
    </SafeAreaView>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: "#f5f5f5",
  },
  chatContainer: {
    marginTop: 40,
    flex: 1,
    padding: 10,
  },
  inputContainer: {
    flexDirection: "row",
    alignItems: "center",
    padding: 10,
    backgroundColor: "#fff",
    borderTopWidth: 1,
    borderTopColor: "#ddd",
  },
  input: {
    flex: 1,
    padding: 10,
    borderRadius: 20,
    backgroundColor: "#f1f1f1",
    marginRight: 10,
  },
  userMessage: {
    alignSelf: "flex-end",
    backgroundColor: "#0084ff",
    borderRadius: 15,
    marginBottom: 10,
    padding: 10,
    maxWidth: "80%",
  },
  botMessage: {
    alignSelf: "flex-start",
    backgroundColor: "#e5e5e5",
    borderRadius: 15,
    marginBottom: 10,
    padding: 10,
    maxWidth: "80%",
  },
  messageText: {
    color: "#333",
  },
  loader: {
    position: "absolute",
    left: 0,
    right: 0,
    top: 0,
    bottom: 0,
    alignItems: "center",
    justifyContent: "center",
  },
  headerContainer: {
    flexDirection: "row",
    alignItems: "center",
    marginBottom: 20,
    backgroundColor: "#fff",
    paddingVertical: 10,
  },
  header: {
    fontSize: 24,
    fontWeight: "bold",
    marginLeft: 40,
  },
});

export default SearchScreen;

This code defines a React Native component named SearchScreen, which serves as the chat interface where users interact with the ChatGPT agent.

The chatbot processes user messages, extracts keywords, queries the Back4App Parse Server, and then generates a response to display back to the user.

sendMessage Function:

  • This asynchronous function handles the process of sending a user’s message, getting a response from the ChatGPT agent, querying the Back4App Parse Server, and updating the chat with the bot’s response.
  • The user’s message is immediately added to the chat.
  • The generateResponse function is called to get a response from the ChatGPT agent.
  • The response is then parsed and sent to the Back4App Parse Server using a cloud function named “queryCategories” (a sketch of such a function follows this list).
  • The results from the server are processed, and the final response is generated using the generateFinalResponse function. This response is then added to the chat.
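
The “queryCategories” cloud function itself is not shown in this article, but to make the flow concrete, here is a minimal sketch of what it could look like in your Back4App Cloud Code (main.js). It assumes the categories class has name, location, keyword, and phone columns; these field names are placeholders, so adjust them to match your actual schema.

// main.js (Back4App Cloud Code) - a hypothetical "queryCategories" implementation.
// The field names ("name", "location", "keyword", "phone") are assumptions; adapt them to your schema.
Parse.Cloud.define("queryCategories", async (request) => {
  const { location, keyword } = request.params;

  const query = new Parse.Query("categories");
  if (location) {
    query.matches("location", location, "i"); // case-insensitive match on location
  }
  if (keyword) {
    query.matches("keyword", keyword, "i"); // case-insensitive match on keyword
  }
  query.limit(5);

  const results = await query.find({ useMasterKey: true });

  // Return plain objects so the client can read result.phone and JSON.stringify(result) directly.
  return results.map((obj) => ({
    name: obj.get("name"),
    location: obj.get("location"),
    keyword: obj.get("keyword"),
    phone: obj.get("phone"),
  }));
});

Returning plain objects (rather than Parse objects) keeps the client code simple, since SearchScreen reads result.phone directly and passes each result through JSON.stringify.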

1.3 generateResponse function

// OPENAI_API_ENDPOINT (e.g. "https://api.openai.com/v1/chat/completions") and OPENAI_API_KEY
// are assumed to be defined or imported elsewhere in this file.
export const generateResponse = async (message) => {
  try {
    const response = await fetch(OPENAI_API_ENDPOINT, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo", // Use the model you want. I've used gpt-3.5-turbo as an example.
        messages: [
          {
            role: "system",
            content:
              "you will help the developers with querying the database in back4app, currently there is one class which is 'categories', when you receive user messages extract location and keyword to pass it to back4app query. Your response will only be like the following : {location: 'New York', keyword: 'football'},  dont add anything else to the response.",
          },
          {
            role: "user",
            content: message,
          },
        ],
      }),
    });

    const data = await response.json();

    // Extract the assistant's message from the response
    const assistantMessage = data.choices[0].message.content;

    return assistantMessage;
  } catch (error) {
    console.error("Error:", error);
    return "Sorry, I couldn't process that.";
  }
};

The generateResponse function is an asynchronous function designed to interact with OpenAI’s API to get a response based on the user’s message.

  1. Parameters:
    • message: The user’s input message that needs to be processed.
  2. Functionality:
    • The function initiates a POST request to the OPENAI_API_ENDPOINT (which is presumably defined elsewhere in the code).
    • The request headers include the content type and an authorization token (OPENAI_API_KEY).
    • The body of the request contains:
      • The model to be used (gpt-3.5-turbo in this case).
      • A series of messages, starting with a system message that instructs the model on its role. The system message specifies that the model should help developers with querying the database in Back4App, particularly the ‘categories’ class. It also instructs the model to extract the location and keyword from user messages and format the response in a specific way.
      • The user’s message is then added as the second message in the series.
    • Once the request is made, the function waits for the response, parses it as JSON, and extracts the assistant’s message from the response.
    • The assistant’s message is then returned.
  3. Error Handling:
    • If any error occurs during the process, it’s logged to the console, and a default error message (“Sorry, I couldn’t process that.”) is returned.

In essence, the generateResponse function serves as an intermediary between the user and OpenAI’s GPT-3.5-turbo model.

It formats the user’s message, sends it to the model, and retrieves a structured response that can be used to query the Back4App database.
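
For reference, here is a short usage sketch of generateResponse. The input message and the returned string are illustrative only; the model’s exact wording can vary, which is why SearchScreen wraps JSON.parse in its own try/catch:

const demo = async () => {
  // Illustrative call; the reply shown in the comment is an example, not a guaranteed output.
  const raw = await generateResponse("I'm looking for football fields in New York");
  console.log(raw); // e.g. {location: 'New York', keyword: 'football'}

  // SearchScreen then normalizes the reply into valid JSON before parsing it:
  const normalized = raw.replace(/(\w+):/g, '"$1":').replace(/'/g, '"');
  const params = JSON.parse(normalized); // { location: "New York", keyword: "football" }
  return params;
};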

1.4 generateFinalResponse function

export const generateFinalResponse = async (message) => {
  try {
    const response = await fetch(OPENAI_API_ENDPOINT, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo", // Use the model you want. I've used gpt-3.5-turbo as an example.
        messages: [
          {
            role: "system",
            content:
              "You will receive database fetched data from the server, I want you to read each data and make a summary of it for each one separately. and then represent it to the user similar to this way 'Here is a place you can book from' then continue with the information about the places but dont write the phone number in the text.",
          },
          {
            role: "user",
            content: message,
          },
        ],
      }),
    });

    const data = await response.json();

    // Extract the assistant's message from the response
    const assistantMessage = data.choices[0].message.content;
    return assistantMessage;
  } catch (error) {
    console.error("Error:", error);
    return "Sorry, I couldn't process that.";
  }
};

The generateFinalResponse function is an asynchronous function that interacts with OpenAI’s API to generate a summarized response based on data fetched from a server.

  1. Parameters:
    • message: This represents the data fetched from the server that needs to be summarized.
  2. Functionality:
    • The function initiates a POST request to the OPENAI_API_ENDPOINT (presumably defined elsewhere in the code).
    • The request headers specify the content type and an authorization token (OPENAI_API_KEY).
    • The body of the request contains:
      • The model to be used (gpt-3.5-turbo in this instance).
      • A series of messages, starting with a system message that instructs the model on its task. The system message specifies that the model should take the fetched data, summarize each piece of data separately, and present it to the user in a specific format. It also instructs the model not to include phone numbers in the text.
      • The fetched data (user’s message) is then added as the second message in the series.
    • After making the request, the function waits for the response, parses it as JSON, and extracts the assistant’s message from the response.
    • The assistant’s summarized message is then returned.
  3. Error Handling:
    • If any error occurs during the process, it’s logged to the console, and a default error message (“Sorry, I couldn’t process that.”) is returned.

In summary, the generateFinalResponse function acts as a bridge between the fetched server data and OpenAI’s GPT-3.5-turbo model.

It sends the data to the model, instructs it to generate a summarized response, and retrieves this response to be presented to the user.
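
To tie the pieces together, here is a short sketch showing how a single record returned by the cloud function flows through generateFinalResponse. The record and the summarized reply are illustrative examples; the model’s actual wording will vary:

const summarizeResult = async () => {
  // A hypothetical record returned by the "queryCategories" cloud function.
  const result = {
    name: "Central Park Football Field",
    location: "New York",
    keyword: "football",
    phone: "+1-555-0100",
  };

  // The record is passed as a JSON string; the model returns a user-facing summary,
  // e.g. "Here is a place you can book from: Central Park Football Field in New York..."
  const botAnswer = await generateFinalResponse(JSON.stringify(result));
  console.log(botAnswer);
};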

Conclusion

We’ve learned how to combine a chatbot with a server to make a smart app. By trying out what we discussed, you can create even cooler apps in the future.

With ChatGPT and Back4App, our chatbot can answer questions using real data. This is just a starting point for us at Coders Technologies, as we always strive to deliver high-quality solutions to our clients. There’s so much more potential with these tools. Keep exploring and happy coding!

If you liked this article, please check this comprehensive guide to Build an app using ChatGPT.

How to create a ChatGPT agent?

– Integrate ChatGPT with Back4App for seamless data handling.
– React Native-based chat interface with ChatGPT processing.
– Utilize OpenAI’s API for dynamic chat responses.

