Today we are introducing four new AMIs targeted at developers who want to offload compute-intensive applications or tasks from their thin clients, laptops, or even mobile devices into the cloud, where vastly more powerful systems can complete those tasks orders of magnitude faster. The four new AMIs are: Bitfusion Mobile Deep Learning Service, Bitfusion Mobile Image Manipulation Service, Bitfusion Mobile Rendering Service, and Bitfusion Mobile Video Processing Service. Each AMI comes with a simple REST API that can be used as is, and we provide simple example scripts for each. Alternatively, you can build on top of our API to provide your own services, or integrate these AMIs directly into your applications. Here are the details for each new AMI:
Bitfusion Mobile Deep Learning Service
Pre-installed with NVIDIA drivers, the CUDA 7.5 Toolkit, Caffe, the GPU REST Engine, pre-trained models, and a simple REST API server. Use the existing pre-trained models or train your own, then integrate inference tasks into your applications via the provided REST API.
API Reference: https://github.com/bitfusionio/deep_learning_service
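As a sketch of how an application might call the inference service, here is a minimal Python client. The port, route (`/api/classify`), and response format are assumptions for illustration; the actual endpoints are documented in the API reference above.

```python
import json
import urllib.request

# Hypothetical endpoint; check the API reference for the actual
# route and port exposed by the AMI's REST server.
def classify_url(host, port=8000):
    """Build the classification endpoint URL for a running instance."""
    return "http://{}:{}/api/classify".format(host, port)

def classify_image(host, image_path):
    """POST raw image bytes to the inference endpoint.

    Assumes the server accepts a binary image body and responds with
    JSON predictions (an assumption for this sketch).
    """
    with open(image_path, "rb") as f:
        req = urllib.request.Request(
            classify_url(host),
            data=f.read(),
            headers={"Content-Type": "application/octet-stream"},
        )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Replace with your instance's public DNS name and a local image.
    print(classify_image("your-instance.compute-1.amazonaws.com", "cat.jpg"))
```

The same pattern works from a laptop, a thin client, or a mobile back end: the heavy GPU inference stays on the instance, and the client only moves bytes over HTTP.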
Bitfusion Mobile Image Manipulation Service
Pre-installed with NVIDIA drivers, the CUDA 7.5 Toolkit, and ImageMagick 7. Achieve optimal image transformation performance by using this AMI via its easy-to-use REST API.
API Reference: https://github.com/bitfusionio/imagemagick_api
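A transformation request could look like the following sketch, which uploads an image and asks for a resize. The route name, port, and query parameters are illustrative placeholders; the real ones are in the API reference above.

```python
import urllib.parse
import urllib.request

def resize_url(host, width, height, port=8080):
    """Hypothetical resize endpoint with the target size as query params."""
    query = urllib.parse.urlencode({"width": width, "height": height})
    return "http://{}:{}/resize?{}".format(host, port, query)

def resize_image(host, image_path, width, height):
    """POST image bytes to the service and return the transformed bytes."""
    with open(image_path, "rb") as f:
        req = urllib.request.Request(
            resize_url(host, width, height), data=f.read()
        )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Because the transformation runs server-side, a thin client never needs ImageMagick installed locally; it just sends the original and receives the result.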
Bitfusion Mobile Rendering Service
Pre-installed with NVIDIA drivers, the CUDA 7.5 Toolkit, and Blender 2.77. Optimized for NVIDIA GRID GPU instances as well as CPU instances, with an easy-to-use REST API for quick remote rendering tasks.
API Reference: https://github.com/bitfusionio/blender_api
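Since renders can take a while, a client typically submits a job and polls for the result. The sketch below assumes a submit-then-poll flow with a plain-text job id; the actual routes, port, and job lifecycle are defined in the API reference above.

```python
import time
import urllib.error
import urllib.request

# Hypothetical routes and port; consult the API reference for the
# endpoints the AMI actually exposes.
def render_url(host, port=5000):
    """URL for submitting a .blend file as a render job."""
    return "http://{}:{}/render".format(host, port)

def job_url(host, job_id, port=5000):
    """URL for fetching the output of a previously submitted job."""
    return "http://{}:{}/render/{}".format(host, port, job_id)

def submit_and_wait(host, blend_path, poll_seconds=5):
    """Submit a .blend file, then poll until the rendered frame is ready.

    Assumes (for this sketch) that submission returns a job id and the
    finished frame is served at the job URL once rendering completes.
    """
    with open(blend_path, "rb") as f:
        req = urllib.request.Request(render_url(host), data=f.read())
    with urllib.request.urlopen(req) as resp:
        job_id = resp.read().decode("utf-8").strip()
    while True:
        try:
            with urllib.request.urlopen(job_url(host, job_id)) as resp:
                return resp.read()  # rendered frame bytes
        except urllib.error.HTTPError:
            time.sleep(poll_seconds)  # not finished yet; poll again
```

This keeps the client fully decoupled from Blender itself: a phone or thin client can fire off a scene and collect the frame when it is done.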
Bitfusion Mobile Video Processing Service
Pre-installed with NVIDIA drivers, the CUDA 7.5 Toolkit, FFmpeg 3, and multiple codecs. Achieve optimal performance by running this AMI on CPU instances as well as NVIDIA GRID GPU instances.
API Reference: https://github.com/bitfusionio/video_processing_service
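A transcoding call might look like the sketch below, which uploads a clip and streams the converted file back to disk instead of buffering it in memory. The route, port, and `vcodec` parameter are illustrative assumptions (the codec name here follows FFmpeg's encoder naming); see the API reference above for the real interface.

```python
import shutil
import urllib.parse
import urllib.request

def transcode_url(host, vcodec="libx264", port=9000):
    """Hypothetical transcode endpoint; the target codec is passed as a
    query parameter in this sketch."""
    query = urllib.parse.urlencode({"vcodec": vcodec})
    return "http://{}:{}/transcode?{}".format(host, port, query)

def transcode(host, video_path, out_path, vcodec="libx264"):
    """Upload a video for server-side transcoding and stream the result
    straight to out_path."""
    with open(video_path, "rb") as f:
        req = urllib.request.Request(
            transcode_url(host, vcodec), data=f.read()
        )
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as out:
        shutil.copyfileobj(resp, out)  # avoid holding the video in memory
```

Streaming the response matters more here than for the other services, since transcoded videos can easily be too large to hold in a client's memory.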