I had a TensorFlow model that I was ready to move to the next stage of production, but I ran into difficulties with some of the standard tools. CoreML and TensorFlow Serving didn't seem to work for my use case. I believe they would have worked if I had implemented model inference within the TF graph rather than procedurally.

To bridge this gap, I wrapped up inference in a Python REST API so that I could expose it over HTTP. My exploration can be read here:
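A minimal sketch of what "wrapping inference in a Python REST API" can look like, using only the standard library. The `predict` function here is a placeholder assumption standing in for the actual TensorFlow inference step; it is not the author's code.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    # Placeholder for running the trained TensorFlow model on the
    # input features; returns a dummy score (the mean) for the demo.
    return {"score": sum(features) / max(len(features), 1)}


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run the (placeholder) model.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()

        # Send the prediction back as JSON.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for the demo.
        pass


# To serve:
# HTTPServer(("0.0.0.0", 5000), InferenceHandler).serve_forever()
```

A client would then POST `{"features": [...]}` to the endpoint and get a JSON prediction back; in practice a framework such as Flask or FastAPI would usually replace the raw `http.server` handler.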


How would you put a trained ML model into production? Any recommended resources?

Source: https://www.reddit.com/r//comments/9pkt7w/p_productionizing_a_machine_learning_model_with/

