๐—•๐˜‚๐—ถ๐—น๐—ฑ ๐— ๐—ฎ๐—ฐ๐—ต๐—ถ๐—ป๐—ฒ ๐—Ÿ๐—ฒ๐—ฎ๐—ฟ๐—ป๐—ถ๐—ป๐—ด ๐— ๐—ผ๐—ฑ๐—ฒ๐—น ๐˜„๐—ถ๐˜๐—ต ๐—–๐—ต๐—ฎ๐˜๐—š๐—ฃ๐—ง. Random Forest Example

Amar Harolikar
4 min read · Feb 10, 2024

Identify top leads and launch campaigns at speed.

๐™ƒ๐™ž๐™œ๐™ ๐™„๐™ข๐™ฅ๐™–๐™˜๐™ฉ ๐™Š๐™ฅ๐™ฅ๐™ค๐™ง๐™ฉ๐™ช๐™ฃ๐™ž๐™ฉ๐™ฎ ๐™›๐™ค๐™ง: ๐˜ผ๐™„ ๐˜ผ๐™ช๐™ฉ๐™ค๐™ข๐™–๐™ฉ๐™ž๐™ค๐™ฃ ๐˜ผ๐™œ๐™š๐™ฃ๐™˜๐™ž๐™š๐™จ, ๐™ˆ๐™–๐™ง๐™ ๐™š๐™ฉ๐™ž๐™ฃ๐™œ ๐™–๐™œ๐™š๐™ฃ๐™˜๐™ž๐™š๐™จ, ๐™‡๐™š๐™–๐™™ ๐™‚๐™š๐™ฃ ๐™–๐™œ๐™š๐™ฃ๐™˜๐™ž๐™š๐™จ, ๐™Ž๐™ฉ๐™–๐™ง๐™ฉ๐™ช๐™ฅ๐™จ, ๐™ˆ๐™ž๐™˜๐™ง๐™ค-๐™Ž๐™ข๐™–๐™ก๐™ก-๐™ˆ๐™š๐™™๐™ž๐™ช๐™ข ๐™€๐™ฃ๐™ฉ๐™š๐™ง๐™ฅ๐™ง๐™ž๐™จ๐™š๐™จ (๐™ˆ๐™Ž๐™ˆ๐™€)

Concise YouTube Video here


I was working on an analysis project involving a model build, using GPT and Bard as coding co-pilots. I started to wonder whether GPT (ChatGPT Plus) could handle a full model build with just prompts and instructions.

Amazingly, yes, but with some caveats and constraints. Check out the quick, concise video below to see how it works. An in-depth video is on my YouTube channel [ https://youtu.be/v_Z9F60QymA ]

PROMPTS
Shared at the end. Will vary from case to case. Customize as necessary.

USE CASES
1. Small datasets, low complexity models: Build end-to-end with GPT.

2. Large datasets, complex models: Share a small sample, get code, run it on your platform, and iterate with GPT using the results and code.

3. Data engineering — modeling dataset: This is the biggest piece of the model-build pipeline. Share sample data, get the cleaning code, run it on your platform, iterate.

TIPS AND TRICKS
1. Know GPT limits: It crashes with high-complexity models and larger datasets. Play with data/models to gauge where the limits are.

2. Start with low complexity: Calibrate hyperparameters slowly if the model is not robust; e.g., start with just 30 trees and a depth of only 3 for random forest (see the sketch after this list).

3. Check assumptions and review the work: e.g., it once dropped 30% of my population as outliers.

4. Tends to overfit models: Give specific instructions and keep an eye out.

5. Model metrics: Can share Confusion Matrix / Precision-Recall-Accuracy / others. Request the one you need.

6. Explanatory variables: Some, like feature importance, are easy for GPT, but it tends to crash with others, like partial dependence plots. Get the code and run it yourself. Use a Google Colab T4 GPU for intensive tasks; it has free limits.

7. Decile Table: Tends to sort it in reverse order; keep an eye out.

8. Timing: Runs faster in US off-hours. I have seen a 3–5X difference.
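To make tips 2, 4 and 6 concrete, here is a minimal Python sketch of that low-complexity starting point and the train-vs-test overfitting check. It is not the code GPT generated for me; it uses a synthetic dataset in place of the campaign file so it runs as-is, and all column and variable names are placeholders.

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for the campaign dataset: synthetic features and a binary response
X_arr, y_arr = make_classification(n_samples=5000, n_features=10, random_state=42)
X = pd.DataFrame(X_arr, columns=[f"var_{i}" for i in range(10)])
y = pd.Series(y_arr, name="response")
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Tip 2 / Prompt #1 starting hyperparameters: deliberately low complexity
model = RandomForestClassifier(n_estimators=30, max_depth=3, max_features="log2",
                               min_samples_split=50, min_samples_leaf=50,
                               random_state=42)
model.fit(X_train, y_train)

# Tip 4: overfitting check - train and test metrics should be close
auc_train = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
auc_test = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Train ROC-AUC {auc_train:.3f}, Gini {2 * auc_train - 1:.3f}")
print(f"Test  ROC-AUC {auc_test:.3f}, Gini {2 * auc_test - 1:.3f}")

# Tip 6: partial dependence is heavier; get the code and run it yourself (e.g. Colab)
# from sklearn.inspection import PartialDependenceDisplay
# PartialDependenceDisplay.from_estimator(model, X_test, [X_test.columns[0]])

If the train and test metrics diverge materially, reduce complexity further (fewer trees, lower depth) before tuning upward.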

DATA SECURITY
1. PI Data: Anonymize or drop.
2. Uploaded File Security: Use sample data or scrambled data.
3. Uploaded files are easily hacked on GPT Store GPTs. See my post for more information on hacks and countermeasures: Code Red: Unprotected GPTs & AI Apps exposed by simple hacks. Protecting & Countermeasures | by Amar Harolikar | Jan, 2024 | Medium

I have not yet heard of uploaded files from user conversations being hacked, but it's an evolving area, so be mindful.

CONSIDERATIONS
On a live project, data engineering and creating a modeling dataset account for ~80% of the model-build effort. Implementation factors also play a significant role. This post and video focus on the model-building aspect.

๐—•๐—ฎ๐˜€๐—ฒ ๐—ฃ๐—ฟ๐—ผ๐—บ๐—ฝ๐˜๐˜€
Go in sequence, otherwise ChatGPT is more likely to error out. Modify the prompts for your specific use case. This might not be the best option for all propensity models.

๐—ฃ๐—ฟ๐—ผ๐—บ๐—ฝ๐˜#๐Ÿญ. Analyze the provided campaign dataset: preprocess it, then build and validate a propensity model on training and testing sets. Take a best judgement call on missing values, outliers and junk data in records. Check for duplicates. Use random forest. Special check for overfitting. If overfitting, then reduce model complexity so that test and trainings align very close. Run as many iterations as needed for that. Start with less-complex model hyperparameters as per below.

n_estimators: start with 30 trees
max_depth: start with 3
max_features: start with "log2"
min_samples_split: start with 50
min_samples_leaf: start with 50

Report model metrics (ROC-AUC, Gini coefficient) for both test and training. Keep the test and training datasets ready for further analysis.

For the rest of this conversation, please keep all your responses, intermediate responses and updates brief, curt and concise. Nothing verbose. But make sure to share the important points: test/train split, treatment of missing values, outliers and duplicates, model used, model metrics as mentioned above, etc. Keep all details handy for creating detailed documentation later. Also keep all code handy, as I will need it for scoring the full base separately.

Note: If the model results are not good, tweak the hyperparameters and ask ChatGPT to run it again.

๐—ฃ๐—ฟ๐—ผ๐—บ๐—ฝ๐˜#๐Ÿฎ. Provide decile table for test and train. CSV format. Side by side. Keep Decile Number, Count of Records, Number of Responders, Average Probability

๐—ฃ๐—ฟ๐—ผ๐—บ๐—ฝ๐˜#๐Ÿฏ. Feature Importance score: CSV format

๐—ฃ๐—ฟ๐—ผ๐—บ๐—ฝ๐˜#๐Ÿฐ. Score the dataset and share original dataset with score.

๐—ฃ๐—ฟ๐—ผ๐—บ๐—ฝ๐˜#๐Ÿฑ. Provide full code that i can use to build and score my main base separately. The main base has a million records. Make sure to include the following amongst other things: Test-Train โ€” Model Build, Scoring Code to score main base, Code patch for deciling (output to CSV in local temp runtime google colab directory), code for feature importance output to csv

My dataset file path is filepath='/content/drive/MyDrive/xxx/BANK_1M_M.csv'

The data structure is exactly the same. Give me code that I can directly copy, paste and use.
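What Prompt #5 asks for would look roughly like the sketch below: reuse the model fitted on the sample, score the main base, and write the outputs into the Colab runtime directory. Only the file path comes from the prompt above; the output file names are placeholders, and the sketch assumes the main base has exactly the same feature columns as the sample, as the prompt states.

import pandas as pd

filepath = '/content/drive/MyDrive/xxx/BANK_1M_M.csv'  # path from the prompt
main_base = pd.read_csv(filepath)

# Score the full base with the model fitted on the sample (same feature columns)
main_base["score"] = model.predict_proba(main_base[X_train.columns])[:, 1]

# Decile the scored base (decile 1 = highest score) and write CSVs to the
# local Colab runtime directory
main_base["decile"] = pd.qcut(main_base["score"].rank(method="first", ascending=False),
                              10, labels=range(1, 11))
main_base.to_csv("/content/scored_main_base.csv", index=False)
(main_base.groupby("decile", observed=True)["score"]
          .agg(["size", "mean"])
          .to_csv("/content/main_base_deciles.csv"))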
