CloudSurfer

Automatically discover the optimal cloud configuration and hardware on AWS, GCP, and Azure for running your AI models.

Inference
Transformers
CNN
PyTorch
Hugging Face
TensorFlow
ONNX

🚧 Coming soon 🚧

The CloudSurfer module allows users to automatically compare the inference performance of their deep learning model across hardware and cloud providers. It leverages state-of-the-art optimization techniques to custom-accelerate the model on each platform, providing the user with an accurate benchmark of their model's performance in terms of speed, accuracy, and cost.

With CloudSurfer, users can input their model in their preferred deep learning framework and express their preferences for accuracy and performance. The library then automatically tests the model on a range of hardware and cloud platforms, applying optimization techniques to ensure that the results are accurate and representative of the model's performance.
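Until the module ships, the comparison it automates can be approximated by hand. The sketch below shows the core measurement: time repeated inference calls after a warm-up, then convert latency into a rough cost per request using instance pricing. The helper names and price table are hypothetical illustrations, not CloudSurfer's API.

```python
import time
from statistics import mean

def benchmark(run_inference, warmup=3, iters=10):
    """Average latency in ms of a single-inference callable, after warm-up."""
    for _ in range(warmup):          # warm-up runs absorb lazy init / caching
        run_inference()
    timings = []
    for _ in range(iters):
        start = time.perf_counter()
        run_inference()
        timings.append((time.perf_counter() - start) * 1000)
    return mean(timings)

# Hypothetical on-demand prices ($/hour) for two example instance types.
PRICE_PER_HOUR = {"cpu.small": 0.10, "gpu.a10g": 1.20}

def cost_per_1k_requests(latency_ms, price_per_hour):
    """Cost of serving 1,000 requests, assuming sequential execution."""
    requests_per_hour = 3_600_000 / latency_ms  # ms in an hour / latency
    return 1000 * price_per_hour / requests_per_hour
```

With numbers like these, a cheap CPU instance at high latency can still beat a GPU instance per request, which is exactly the kind of trade-off a side-by-side comparison is meant to surface.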

Users can then compare the results side by side, seeing how their model performs on different hardware and cloud providers. This is key to making informed decisions about which platform (cloud provider and hardware type) to pick, without having to guess or rely on outdated information.

Overall, CloudSurfer provides a powerful and easy-to-use tool for optimizing deep learning models and choosing the best inference hardware and cloud platform. Try it out today, and reach out if you have any feedback!

Stay up to date on the latest releases