I had a TensorFlow model that was ready to move to the next stage of production, but I had difficulties with some of the standard tools. Core ML and TensorFlow Serving didn't seem to work for my use case. I believe they would have worked if I had structured inference inside the TF graph rather than procedurally.

To bridge this gap, I wrapped up inference in a Python REST API so that I could expose it to the open web. My exploration can be read here:
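A minimal sketch of that approach: a small REST endpoint that accepts JSON and returns a prediction. The post does not name a framework or show the model code, so Flask and the `predict()` stub here are assumptions; in practice the stub would be replaced by the actual TensorFlow inference call.

```python
# Sketch: wrapping model inference in a REST endpoint.
# Flask and the predict() stub are assumptions, not the author's code.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Placeholder for the real TensorFlow inference step,
    # e.g. calling the loaded model on the input features.
    return [sum(features)]

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    # Parse the JSON request body and run the (stubbed) model.
    payload = request.get_json(force=True)
    result = predict(payload["features"])
    return jsonify({"prediction": result})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A client would then POST `{"features": [...]}` to `/predict` and read the prediction back from the JSON response, which is roughly the interface the post describes exposing.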


How would you put a trained ML model into production? Any recommended resource?

Source link
( https://www.reddit.com/r//comments/9pkt7w/p__a__learning_model_with/)

