How I am building an internal ChatGPT interface for a company
Have you guys tried out inference tables in #databricks? I recently came across this new functionality in #databricks.
Essentially, it logs the inputs and outputs passed through an #mlflow Serving Endpoint as a table in #unitycatalog.
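Under the hood, capture is enabled when the serving endpoint is created. Here is a minimal sketch of what such a creation payload might look like; the catalog, schema, and endpoint names are placeholders I made up, not values from a real workspace:

```python
# Sketch of a serving-endpoint creation payload with inference-table
# capture enabled. "chat-endpoint", "my_catalog", "my_schema", and
# "chat_logs" are placeholder names, not real workspace values.
endpoint_config = {
    "name": "chat-endpoint",
    "config": {
        "served_entities": [...],  # model + compute settings would go here
        "auto_capture_config": {
            "catalog_name": "my_catalog",      # Unity Catalog catalog
            "schema_name": "my_schema",        # schema holding the payload table
            "table_name_prefix": "chat_logs",  # prefix for the inference table
        },
    },
}
```

Once this is in place, every request and response served by the endpoint lands in the Unity Catalog table without any explicit logging code.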
This is a great solution for me because I am currently developing an internal #chatgpt interface for a company. With this internal interface, we can censor user prompts against any rules we set, keep logged I/O in a local cache only to stay compliant with #infosec, and add extra functionality around the model endpoint (e.g., multi-modality, extra security measures, data integration, prompt flow, etc.).
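The censorship rules mentioned above can be as simple as a list of regexes applied before a prompt is forwarded. A minimal Python sketch, where the patterns are made-up examples rather than the actual rule set:

```python
import re

# Hypothetical rule set -- a real deployment would load these from config.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US-SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def censor_prompt(prompt: str, mask: str = "[REDACTED]") -> str:
    """Replace any blocked pattern with a mask before the prompt is sent on."""
    for pattern in BLOCKED_PATTERNS:
        prompt = pattern.sub(mask, prompt)
    return prompt
```

The same idea ports directly to the ReactJS client side with JavaScript `RegExp` objects.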
My current setup for this #mlops project uses a VNet-controlled private #azure #openai model endpoint, called from a #frontend built in #reactjs. A few regex-based censorship rules are first triggered on the frontend client side. The filtered input is then passed through a custom #python function served by an #mlflow Serving Endpoint in #databricks; this function not only logs I/O through the inference table, but also enables extra functionality such as PDF parsing via #langchain. [2]
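The custom serving-side function can be pictured as a pyfunc-style wrapper. In practice it would subclass `mlflow.pyfunc.PythonModel` and forward to the Azure OpenAI endpoint; this plain-Python stand-in (with an injected `backend` callable and a placeholder PDF branch) just shows the shape of the idea:

```python
class ChatGateway:
    """Plain-Python stand-in for an mlflow.pyfunc.PythonModel subclass.

    The real version forwards to the Azure OpenAI endpoint; here the
    backend is injected so the wrapper logic can be shown on its own.
    """

    def __init__(self, backend):
        self.backend = backend  # callable: prompt -> completion

    def predict(self, request: dict) -> dict:
        prompt = request["prompt"]
        # Extra-functionality hook, e.g. PDF parsing. A real deployment
        # might load the file with langchain here; omitted to stay
        # self-contained.
        if request.get("pdf_path"):
            prompt = f"[pdf context would be prepended here]\n{prompt}"
        completion = self.backend(prompt)
        # No explicit logging call: once auto-capture is enabled on the
        # endpoint, this request/response pair lands in the inference table.
        return {"prompt": prompt, "completion": completion}
```

Usage is a single `predict` call per request, which keeps the frontend contract identical whether the extras (PDF parsing, extra filtering) are on or off.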
If you are interested in learning more about this new functionality, now in public preview, here is the reference from #databricks. [1]
Happy Hunting my fellow ML Engineers!
[1] https://docs.databricks.com/en/machine-learning/model-serving/inference-tables.html
[2] https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf
If you want to read more about AI & ML subjects, please come check out my Medium articles!
or visit me in different communities!
def visit_me_elsewhere():
    # visit me in the LinkedIn community
    linkedin = 'https://www.linkedin.com/in/jshinm/'
    # visit me on my personal website
    website = 'https://modrev.org/jshinm'
    return linkedin, website