Bartley Richardson in RAPIDS AI
Preprocess Your Training Data at Lightspeed with Our GPU-based Tokenizer for BERT Language Models
This tokenizer, applied as a pre-processing step before input into a BERT language model, runs up to 270x faster than CPU implementations.
May 28, 2020
Bartley Richardson in RAPIDS AI
cyBERT
Neural network, that's the tech; To free your staff from, bad regex
Dec 5, 2019
Bartley Richardson in RAPIDS AI
Cyber Log Accelerators (CLX)
See how the cybersecurity workflow really is the data science workflow.
Nov 7, 2019