Building an Investment Portfolio GPT Companion with OpenAI, LlamaIndex and Flask

David Shilman
Published in GoPenAI · Jun 21, 2023 · 4 min read


In this tutorial, I will describe my experience building the Investment Portfolio GPT Companion web application. The application leverages OpenAI’s GPT models together with the Python Flask and LlamaIndex frameworks. The Investment Portfolio GPT Companion is a single-page application (SPA) that queries a proprietary investment-portfolio knowledge base to provide answers and insights about the assets in the portfolio. I will explain how to construct the LlamaIndex index using OpenAI and contextual information obtained from a market data API. You can find the complete code for this application in this GitHub repository.

The idea for the Investment Portfolio GPT Companion came from my own, only occasionally successful, investment experience and my interest in algo trading strategies, market data APIs, and GPT models. As a fintech technologist, I saw an opportunity to combine these diverse domains into a practical use case and educational content.

I hope this product demo gives you a better understanding of the application’s features.

Before we build the investment index, we need to create the GPT knowledge base, also referred to as the LlamaIndex index. This knowledge base will contain information specific to my made-up investment portfolio. To build it, I used the Mboum Finance API, a market data provider freely available on the Rapid API Hub. The Mboum Finance API offers various market and instrument endpoints; for this project, I used the market/news/{stock} endpoint.

The Mboum Finance API accepts a list of instrument tickers from my investment portfolio and returns an array of data elements containing article information (a sketch of this fetch helper follows the configuration block below). For this tutorial, I hardcoded the list of stocks in the app code. In a real-world scenario, the logic responsible for building the LlamaIndex index would likely be triggered by messaging, API, or scheduler events, such as AWS SQS or EventBridge. To build the GPT index, you’ll need an OpenAI account and an API key; the key should be configured as an environment variable.

FLASK_APP=app.py
FLASK_ENV=development
FLASK_DEBUG=True

OPENAI_API_KEY=***********************************
X-RapidAPI-Key=***********************************
X-RapidAPI-Host=mboum-finance.p.rapidapi.com
QUOTE_API_URL=https://mboum-finance.p.rapidapi.com/qu/quote
NEWS_API_URL=https://mboum-finance.p.rapidapi.com/ne/news/
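The helper that calls the news endpoint is referenced as get_stock_news_feed in the indexing code below but is not shown in full in this article. Here is a minimal sketch of what it might look like; the response parsing (the body field and the title/text fields kept from each article) is an assumption about the Mboum payload, so check the API docs or the GitHub repository for the exact shape.

import os

import requests
from dotenv import load_dotenv

load_dotenv()


def get_stock_news_feed(tickers):
    # Hedged sketch: fetch recent news for a comma-separated ticker list.
    headers = {
        "X-RapidAPI-Key": os.getenv("X-RapidAPI-Key"),
        "X-RapidAPI-Host": os.getenv("X-RapidAPI-Host"),
    }
    articles = []
    for ticker in [t.strip() for t in tickers.split(",")]:
        response = requests.get(
            f"{os.getenv('NEWS_API_URL')}{ticker}", headers=headers, timeout=30
        )
        response.raise_for_status()
        # Assumed payload shape: {"body": [{"title": ..., "text": ...}, ...]}
        for item in response.json().get("body", []):
            articles.append(f"{item.get('title', '')}\n{item.get('text', '')}")
    return articles

Returning each article as a plain string keeps the Document(article) call in the indexing code below simple.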

We will use a VectorStoreIndex, created from the array of articles converted into an array of Document objects. Once the LlamaIndex SDK generates the VectorStoreIndex, it is persisted to local storage for later use by the Flask application.

import os
from datetime import datetime, timedelta

# datetime, pytz, and requests are used by the news-fetching helper,
# which is not part of this excerpt.
import pytz
import requests
from dotenv import load_dotenv
from llama_index import Document, StorageContext, VectorStoreIndex
from llama_index.node_parser.simple import SimpleNodeParser

load_dotenv()


def create_index():
    # Tickers are hardcoded for the tutorial; in production they would
    # come from the portfolio itself.
    tickers = "AAPL, IBM, TSLA, AMZN"
    articles = get_stock_news_feed(tickers)
    create_index_from_articles(articles=articles)


def create_index_from_articles(articles):
    path = "indexed_files/api_index"

    # Wrap each article's text in a LlamaIndex Document.
    documents = []
    for article in articles:
        documents.append(Document(article))

    # Split the documents into nodes and build the vector index over them.
    nodes = SimpleNodeParser().get_nodes_from_documents(documents)
    index = VectorStoreIndex(nodes)

    # Persist the index to local storage for the Flask app to load later.
    index.storage_context.persist(f'./{path}')


if __name__ == "__main__":
    create_index()
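Before wiring the index into Flask, it is worth a quick sanity check that the persisted index loads and answers a question. A minimal sketch (the file name and sample question are mine, not from the original article):

# smoke_test.py (hypothetical file name)
from dotenv import load_dotenv
from llama_index import StorageContext, load_index_from_storage

load_dotenv()  # OPENAI_API_KEY must be set for the query call

storage_context = StorageContext.from_defaults(persist_dir="./indexed_files/api_index")
index = load_index_from_storage(storage_context)

# Any portfolio question will do; this one is just an example.
response = index.as_query_engine().query("What is the latest news about AAPL?")
print(response.response)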

Now that the index is saved and available for use, we can start the Flask app. During startup, the Flask application loads the persisted VectorStoreIndex from local storage into a global variable.

from dotenv import load_dotenv
from flask import Flask, jsonify, render_template, request
from llama_index import StorageContext, load_index_from_storage

load_dotenv()

app = Flask(__name__)

# Global handle to the vector index, populated once at startup.
index = None


def create_index():
    global index
    storage_context = StorageContext.from_defaults(
        persist_dir="./indexed_files/api_index"
    )
    index = load_index_from_storage(storage_context)


# Load the index once when the module is imported (e.g. by `flask run`).
create_index()

We define a route that renders the single-page HTML template and an API endpoint that queries the LlamaIndex query engine with the user-submitted question. The engine’s response is returned to the UI in JSON format and displayed to the user.

@app.route('/')
def home():
    return render_template('index.html')


@app.route("/api/query")
def query():
    global index

    query_str = request.args.get('question', None)
    print(f"question: {query_str}")
    if not query_str:
        return jsonify({"error": "Please provide a question."})

    response = None
    try:
        # Ask the vector index: the query engine embeds the question,
        # retrieves the most relevant article nodes, and calls OpenAI
        # to synthesize an answer.
        query_engine = index.as_query_engine()
        response = query_engine.query(query_str).response
    except Exception as e:
        print(f"Exception: {e}")
        # str(e): exception objects are not JSON serializable.
        return jsonify({'response': str(e)})

    return jsonify({'response': response})
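With the app running (for example via flask run, which picks up the FLASK_APP=app.py setting from the .env file), the endpoint can be exercised with any HTTP client. A hypothetical call, assuming the Flask dev server’s default port 5000:

import requests

resp = requests.get(
    "http://localhost:5000/api/query",
    params={"question": "What is the latest news about my Tesla holding?"},
)
print(resp.json()["response"])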

In this tutorial, I have demonstrated how you can use a market data API, along with the Python Flask and LlamaIndex libraries, to build a use-case-specific ChatGPT-style application. By following these steps, you can create the Investment Portfolio GPT Companion web application and provide valuable insights about an investment portfolio. Feel free to explore the application code in this GitHub repository and customize the application according to your specific requirements.

I want to acknowledge the excellent article by Amir Tadrisi, “Building an Intelligent Education Platform with OpenAI, ChatGPT, and Django.” I am not a UI developer, so I took the liberty of reusing some of his UI code for this project.

