Recently Google released TensorFlow 0.8, which among other features provides distributed computing support. While this is great for power users, the most important step for most people getting started with machine learning or deep learning is simply to have a powerful, pre-configured instance. To solve this problem, we recently released the Bitfusion Ubuntu 14 TensorFlow AMI, built on TensorFlow 0.8 and configured to work equally well across CPU and GPU AWS instances.
One of the nifty features of AWS is that one can use spot instances instead of on-demand instances to significantly reduce costs. To use spot instances, we need to create a spot instance request, which includes the maximum price we are willing to pay per hour as well as a few other constraints, such as the instance type and availability zone. You can find a detailed discussion of all the AWS spot instance parameters in the following AWS user guide.
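As a rough sketch of how those pieces fit together with the AWS CLI, the snippet below writes a launch specification and shows the `request-spot-instances` call that would submit it. The AMI ID, instance type, zone, and price are illustrative placeholders, not recommendations:

```shell
# Sketch of a spot instance request via the AWS CLI.
# All values below (AMI ID, instance type, zone, price) are placeholders.
cat > spec.json <<'EOF'
{
  "ImageId": "ami-xxxxxxxx",
  "InstanceType": "g2.2xlarge",
  "Placement": {"AvailabilityZone": "us-east-1a"}
}
EOF

# Submit the request (requires configured AWS credentials):
# aws ec2 request-spot-instances \
#     --spot-price "0.65" --instance-count 1 --type "one-time" \
#     --launch-specification file://spec.json
```

The `--spot-price` flag is the ceiling you are willing to pay per hour; the request is fulfilled only while the market price stays below it.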
We've been cooking up several more AMIs to get you started on AWS quickly. This time we are introducing the Nvidia Digits 3 AMI, which is designed to get you started quickly with Nvidia's deep learning package, including their branched versions of Caffe and Torch, as well as a browser-accessible interface for quick experimentation. The second AMI is built on top of Ubuntu 14, the CUDA 7.5 Toolkit, and the latest Nvidia drivers, and is targeted at CUDA developers and those intending to deploy GPU applications with ease.
We recently presented at the GPU Technology Conference, where we demonstrated how to containerize GPU applications with Docker and utilize Bitfusion Boost. This week, at the SaltConf 16 conference, we will be taking this concept a step further and demonstrating GPU-accelerated containers through a complete Docker ecosystem under SaltStack control. In particular, we will show how we utilize both these technologies to create virtual GPU clusters that provide maximum performance and data center utilization for compute-intensive applications.
In early April 2016, we started offering Monster GPU Machines on Amazon Web Services (AWS) powered by Bitfusion Boost and have seen great interest. In the last couple of weeks alone we have seen heavy usage, and it is growing at an even faster rate.
FREMONT, CA - APRIL 5, 2016 – AMAX, a leading provider of HPC, Cloud/IaaS, GPU and Data Center solutions, will demonstrate GPU virtualization technology at the GPU Technology Conference (GTC) 2016 on April 5-7, 2016. The demo will feature AMAX's award-winning Deep Learning Platforms running Bitfusion Boost to virtualize GPU resources from multiple nodes for rendering and deep learning applications.
You may have seen our recent post about enabling customers to create Monster GPU Machines on AWS using our Boost technology. Ready to see some real applications using Boost, meet some of our partners utilizing Boost on their systems, and find out what else we can do with Boost? Then please come join us next week at the GPU Technology Conference (GTC) in sunny Silicon Valley, California.
At Bitfusion, our job is to know how well various compute-intensive workloads scale on different infrastructures and to help people maximize performance. Since we launched our Deep Learning and CUDA AMIs in the AWS Marketplace we’ve heard many of our customers ask for bigger GPU instances, but the largest Amazon EC2 instance, the g2.8xlarge, currently maxes out at just 4 GPUs.
There are many workloads which require significant image manipulation, such as the visualization and analysis of geospatial data to generate georeferenced imagery and terrain data. These workloads can be found in a wide variety of industries, ranging from aerospace and defense to security and planetary research. One tool commonly used to tackle such tasks across a vast spectrum of Linux distributions is ImageMagick. ImageMagick is also found in just about all of the most popular web stacks to handle image transformations such as resizing, contrast enhancement, and the application of various filters.
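As a quick sketch of the kinds of transformations mentioned above, the ImageMagick `convert` invocations below resize an image, enhance its contrast, and apply a blur filter. The filenames are hypothetical, and a sample image is generated first so the commands are self-contained (assumes ImageMagick is installed):

```shell
# Generate a small sample image so the commands below are self-contained.
convert -size 100x100 gradient:black-white input.png

# Resize to 50% of the original dimensions.
convert input.png -resize 50% resized.png

# Contrast enhancement.
convert input.png -contrast enhanced.png

# Apply a Gaussian blur filter.
convert input.png -gaussian-blur 0x2 blurred.png
```

Because these are ordinary command-line operations, they pipeline naturally in the web-stack and batch-processing contexts described above.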