Build Machine Learning Model with ChatGPT: Random Forest Example
Identify top leads and launch campaigns at speed.
Huge impact opportunity for: AI Automation Agencies, Marketing Agencies, Campaign Agencies, Startups, Micro-Small-Medium Enterprises (MSME)
Concise YouTube Video here
I was working on an analysis project involving a model build, using GPT and Bard as coding co-pilots, and started to wonder whether ChatGPT (GPT Plus) could handle a full model build from just prompts and instructions.
Amazingly, yes, but with some caveats and constraints. Check out my video to see how it works. Quick and concise video below; in-depth video on my YouTube channel [ https://youtu.be/v_Z9F60QymA ]
PROMPTS
Shared at the end. Will vary from case to case. Customize as necessary.
USE CASES
1. Small datasets, low complexity models: Build end-to-end with GPT.
2. Large datasets, complex models: Share a small sample, get code, run on your platform, iterate with GPT with results and code.
3. Data engineering → modelling dataset: This is the biggest piece of the model-build pipeline. Share sample data, get cleaning and transformation code, run it on your platform, iterate.
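A minimal sketch of the kind of cleaning pass described in use case 3, using pandas. The column names ("income", "responded") and the capping thresholds are hypothetical; adapt them to your own schema.

```python
# Sketch of a typical cleaning pass on a small sample; column names are
# illustrative placeholders, not from any real dataset.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "income": [25_000, 40_000, np.nan, 1_000_000, 40_000],
    "responded": [0, 1, 0, 1, 1],
})

df = df.drop_duplicates()  # remove exact duplicate records
df["income"] = df["income"].fillna(df["income"].median())  # impute missing values

# Cap (rather than drop) outliers at the 1st/99th percentiles, so you
# don't silently lose a large share of the population (see Tip 3 below).
lo, hi = df["income"].quantile([0.01, 0.99])
df["income"] = df["income"].clip(lo, hi)
```

Running the generated code yourself keeps the full dataset off the chat platform while still letting GPT iterate on the logic.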
TIPS AND TRICKS
1. Know GPT's limits: It crashes with high-complexity models and larger datasets. Experiment with data and models to gauge the limits.
2. Start with low complexity: Increase hyperparameter complexity gradually if the model is not robust, e.g., start with just 30 trees and a depth of only 3 for a random forest.
3. Check assumptions and review the work: e.g., it once dropped 30% of my population as outliers.
4. Tends to overfit models: Give specific instructions and keep an eye out.
5. Model metrics: Can share Confusion Matrix / Precision-Recall-Accuracy / others. Request the one you need.
6. Explanatory variables: Some, like Feature Importance, are easy for GPT, but it tends to crash on others, like Partial Dependence Plots. Get the code and run it yourself. Use the Google Colab T4 GPU for intensive tasks; it has free-tier limits.
7. Decile Table: Tends to sort it in reverse order; keep an eye out.
8. Timing: Runs faster in US off-hours. I have seen a 3-5X difference.
DATA SECURITY
1. PI Data: Anonymize or drop.
2. Uploaded File Security: Use sample data or scrambled data.
3. Uploaded files are easily hacked on GPT Store GPTs. See my post for more information on hacks and countermeasures: Code Red: Unprotected GPTs & AI Apps exposed by simple hacks. Protecting & Countermeasures | by Amar Harolikar | Jan 2024 | Medium
I have not yet heard of uploaded files from user conversations being hacked, but it's an evolving area, so be mindful.
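One way to anonymize a sample before uploading, along the lines of points 1 and 2 above. The PI columns here ("name", "customer_id") are hypothetical placeholders; adapt the column list to your data.

```python
# Drop direct identifiers and one-way hash the join key before sharing
# a sample. Column names are illustrative placeholders.
import hashlib
import pandas as pd

df = pd.DataFrame({
    "customer_id": ["C001", "C002"],
    "name": ["Alice", "Bob"],
    "balance": [1200, 3400],
})

df = df.drop(columns=["name"])  # drop direct identifiers outright
# Hash the ID so records stay joinable on your side but are not
# reversible from the uploaded file.
df["customer_id"] = df["customer_id"].map(
    lambda v: hashlib.sha256(v.encode()).hexdigest()[:12]
)
```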
CONSIDERATIONS
On a live project, data engineering and creating a modeling dataset account for ~80% of the model build effort. Implementation factors also play a significant role. This post and video focus on the model-building aspect.
CASE PROMPTS
Run them in sequence; otherwise ChatGPT may error out. Modify the prompts for your specific use case. This might not be the best option for all propensity models.
Prompt #1. Analyze the provided campaign dataset: preprocess it, then build and validate a propensity model on training and testing sets. Take a best-judgement call on missing values, outliers and junk data in records. Check for duplicates. Use random forest. Check especially for overfitting; if the model overfits, reduce model complexity so that test and training results align very closely. Run as many iterations as needed for that. Start with less-complex model hyperparameters as per below.
n_estimators: start with 30 trees
max_depth: start with 3
max_features: start with "log2"
min_samples_split: start with 50
min_samples_leaf: start with 50
Report model metrics (ROC-AUC, Gini coefficient) for both test and training. Keep the test and training datasets ready for further analysis.
For the rest of this conversation, keep all responses, intermediate responses and updates brief and concise. Nothing verbose, but make sure to share the important points: test/train split, treatment of missing values, outliers and duplicates, model used, and the model metrics mentioned above, etc. Keep all details handy for creating detailed documentation later. Also keep all code handy, as I will need it to score the full base separately.
Note: If the model results are not good, tweak the hyperparameters and ask ChatGPT to run it again.
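Prompt #1 can be verified locally with a short script that uses the same starting hyperparameters and reports both metrics (Gini = 2 × AUC − 1). This is a sketch on synthetic data standing in for the campaign dataset.

```python
# Local equivalent of Prompt #1's settings, useful for cross-checking
# the metrics GPT reports. Synthetic data replaces the real dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(
    n_estimators=30,        # start with 30 trees
    max_depth=3,            # start shallow
    max_features="log2",
    min_samples_split=50,
    min_samples_leaf=50,
    random_state=42,
)
model.fit(X_tr, y_tr)

auc_tr = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
auc_te = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"train AUC={auc_tr:.3f}  Gini={2 * auc_tr - 1:.3f}")
print(f"test  AUC={auc_te:.3f}  Gini={2 * auc_te - 1:.3f}")
# A large train/test gap signals overfitting; tighten the hyperparameters.
```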
Prompt #2. Provide a decile table for test and train, side by side, in CSV format. Include Decile Number, Count of Records, Number of Responders and Average Probability.
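A sketch of the decile table itself, sorted so that decile 1 holds the highest-propensity records (Tip 7 above: GPT tends to sort it in reverse). The scores here are random stand-ins for model probabilities.

```python
# Build a decile table from model scores, with decile 1 = top scores.
import numpy as np
import pandas as pd

scores = np.random.RandomState(42).rand(1000)  # stand-in probabilities
actuals = (np.random.RandomState(7).rand(1000) < scores).astype(int)

df = pd.DataFrame({"prob": scores, "responder": actuals})
# qcut ranks ascending, so flip the labels to make decile 1 the top bin
df["decile"] = pd.qcut(df["prob"], 10, labels=range(10, 0, -1)).astype(int)

decile_table = (
    df.groupby("decile")
      .agg(records=("prob", "size"),
           responders=("responder", "sum"),
           avg_prob=("prob", "mean"))
      .sort_index()
)
decile_table.to_csv("decile_table.csv")
```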
Prompt #3. Feature Importance scores, in CSV format.
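The feature importance export is simple enough to run yourself if GPT struggles. A sketch assuming a fitted scikit-learn RandomForestClassifier; the feature names are placeholders.

```python
# Export random forest feature importances to CSV, highest first.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=42)
model = RandomForestClassifier(
    n_estimators=30, max_depth=3, random_state=42
).fit(X, y)

importance = (
    pd.DataFrame({
        "feature": [f"f{i}" for i in range(X.shape[1])],  # placeholder names
        "importance": model.feature_importances_,
    })
    .sort_values("importance", ascending=False)
)
importance.to_csv("feature_importance.csv", index=False)
```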
Prompt #4. Score the dataset and share the original dataset with the score appended.
Prompt #5. Provide full code that I can use to build and score my main base separately. The main base has a million records. Make sure to include, among other things: test/train split and model build, scoring code to score the main base, a code patch for deciling (output to CSV in the local Google Colab temp runtime directory), and code to output feature importance to CSV.
My dataset file path is filepath="/content/drive/MyDrive/xxx/BANK_1M_M.csv"
The data structure is exactly the same. Give me code that I can directly copy, paste and use.
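The scoring step that Prompt #5 asks GPT to generate looks roughly like this sketch: rebuild the model from the returned code, load the full base, append predicted probabilities, and write the scored file. The model, feature names and in-memory base here are stand-ins; on Colab you would read the real CSV from the Drive path above.

```python
# Sketch of scoring a large base locally with the rebuilt model.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the training step GPT performed on the sample
X, y = make_classification(n_samples=1000, n_features=5, random_state=42)
model = RandomForestClassifier(
    n_estimators=30, max_depth=3, random_state=42
).fit(X, y)

# Stand-in for loading the million-record base, e.g.:
# base = pd.read_csv("/content/drive/MyDrive/xxx/BANK_1M_M.csv")
base = pd.DataFrame(X, columns=[f"f{i}" for i in range(5)])

probs = model.predict_proba(base.values)[:, 1]
base["score"] = probs                      # append score to original records
base.to_csv("scored_base.csv", index=False)
```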