
TensorFlow 1.0 Updated Bitfusion AMI Available

We’re excited to announce that the newest version of our Bitfusion Ubuntu 14 TensorFlow AMI is now available in the AWS Marketplace with TensorFlow 1.0! This TensorFlow release marks a major milestone in functionality, robustness, and, going forward, backwards compatibility.
Read More

ami aws tensorflow

Chainer 1.21 Updated Bitfusion AMI Now Available

We’re excited to announce that the latest version of our Bitfusion Ubuntu 14 Chainer AMI is now available in the AWS Marketplace.
Read More

ami aws chainer

New: Bitfusion MXNet AMI Now Available

We’re excited to announce the newest AMI in our lineup: the Bitfusion Ubuntu 14 MXNet AMI, now available in the AWS Marketplace. Amazon CTO Werner Vogels recently announced that MXNet is the company's official deep learning framework of choice, paving the way for fast growth and interest in this previously lesser-known project.
Read More

ami aws mxnet

Latest Bitfusion Caffe AMI Now Available

We’re excited to announce that the newest version of our Bitfusion Ubuntu 14 Caffe AMI is now available in the AWS Marketplace.
Read More

ami aws caffe

Latest Bitfusion TensorFlow AMI and New Pricing

We’re excited to announce that the newest version of our Bitfusion Ubuntu 14 TensorFlow AMI is now available in the AWS Marketplace. We’ve also made some exciting changes to our pricing model to help save you money.
Read More

ami aws tensorflow

Latest Bitfusion Chainer AMI Now Available

We’re excited to announce that the newest version of our Bitfusion Ubuntu 14 Chainer AMI is now available in the AWS Marketplace.
Read More

ami aws chainer

Latest Bitfusion Theano AMI Now Available

We’re excited to announce that the newest version of our Bitfusion Ubuntu 14 Theano AMI is now available in the AWS Marketplace.
Read More

ami aws theano

Quick Comparison of TensorFlow GPU Performance on AWS P2 and G2 Instances

TensorFlow GPU performance on AWS P2 instances is between 2x and 3x faster than on previous-generation G2 instances across a variety of convolutional neural networks. Recently, we made our Bitfusion deep learning AMIs available on the newly announced AWS P2 instances. Naturally, one of the first questions that arises is how the performance of the new P2 instances compares to that of the previous-generation G2 instances. In this post we take a quick look at single-GPU performance across a variety of convolutional neural networks. To keep things consistent, we start each EC2 instance from the exact same AMI, keeping the driver, CUDA, cuDNN, and framework identical across the instances.

TensorFlow GPU Performance

To evaluate TensorFlow performance we used the Bitfusion TensorFlow AMI along with the convnet-benchmarks suite to measure forward and backward propagation times for some of the better-known convolutional neural networks: AlexNet, Overfeat, VGG, and GoogleNet. Because of their much larger GPU memory of 12 GiB, the P2 instances can accommodate much larger batch sizes than the G2 instances. For the benchmarks below, the batch size for each network was selected so that it could run on the G2 as well as the P2 instances. The tables below summarize the results obtained for the G2 and P2 instances.

g2.2xlarge - Nvidia K520

Network     Batch Size   Forward Pass (ms)   Backward Pass (ms)   Total Time (ms)
AlexNet     512          502                 914                  1416
Overfeat    256          1134                2934                 4068
VGG         64           750                 2550                 3300
GoogleNet   128          600                 1587                 2187

p2.xlarge - Nvidia K80

Network     Batch Size   Forward Pass (ms)   Backward Pass (ms)   Total Time (ms)
AlexNet     512          254                 462                  716
Overfeat    256          427                 847                  1274
VGG         64           423                 869                  1292
GoogleNet   128          341                 783                  1124

The per-network speedups work out to roughly 1.98x for AlexNet, 3.19x for Overfeat, 2.55x for VGG, and 1.95x for GoogleNet. Averaging the speedup across all four networks, the results show an approximate 2.42x improvement in performance - not bad for an instance that is only about 1.39x more expensive on an hourly on-demand basis. We have several other deep learning AMIs available in the AWS Marketplace, including Caffe, Chainer, Theano, Torch, and DIGITS. If you are interested in seeing GPU performance benchmarks for any of the above, drop us a note.
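For readers who want to reproduce these numbers, here is a minimal sketch of running the same measurements on either instance type. It assumes the Bitfusion TensorFlow AMI, where TensorFlow is already installed for the default Python interpreter, and uses the TensorFlow scripts from the public convnet-benchmarks repository; the script names and the --batch_size/--num_batches flags reflect that repository's layout and may differ in later versions.

# Assumption: running on the Bitfusion TensorFlow AMI, so TensorFlow is
# already available to the default Python interpreter.
git clone https://github.com/soumith/convnet-benchmarks.git
cd convnet-benchmarks/tensorflow

# Time forward and backward passes with the batch sizes used in the tables above.
# Flag names follow the repository's TensorFlow scripts and may vary by version.
python benchmark_alexnet.py   --batch_size=512 --num_batches=100
python benchmark_overfeat.py  --batch_size=256 --num_batches=100
python benchmark_vgg.py       --batch_size=64  --num_batches=100
python benchmark_googlenet.py --batch_size=128 --num_batches=100

Each script prints per-batch timings; num_batches simply controls how many iterations are averaged, and 100 is an arbitrary choice here.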
Read More

aws tensorflow

Apposphere Uses Bitfusion Deep Learning AMIs and AWS to Quickly Deploy Intelligent Mobile Solutions

Leads and prospects generated from search or display ads can be very costly and challenging to obtain. That's why Apposphere, an Austin, Texas-based company, saw a big opportunity in mining the wealth of information and activity on social media for lead generation.
Read More

aws case study data science deep learning

Deploy Bitfusion Boost on AWS faster than ever

Enabling development, deployment, and acceleration of multi-node GPU applications, from deep learning to oil exploration.

Back in March, we first described how to deploy Bitfusion Boost on AWS to create a 16 GPU cluster. We have received a lot of customer feedback since then, and in particular we paid attention to the issues that tripped you up in order to make the experience more seamless. With that in mind, we engaged the AWS Marketplace team to integrate Bitfusion Boost directly into our products, enabling you to spin up Bitfusion Boost GPU clusters directly from the AWS Marketplace with just a few clicks.

Some of the major improvements include:

- Run multi-GPU enabled applications across multiple GPU instances without any additional configuration or code changes
- Boost enabled AMIs can be launched in cluster mode directly from the AWS Marketplace
- Boost enabled AMI clusters can be launched in all AWS regions that contain GPU instances
- The AMI opt-in process is identical for single-instance and cluster-mode AMI launches
- Monthly cluster cost estimates are provided directly in the AWS Marketplace
- Simplified launch parameters for the CloudFormation templates (CFNs) enable easier cluster customization

Summary

Launching a Bitfusion Boost cluster now entails only four easy steps:

1. Locate a Bitfusion Boost enabled AMI in the AWS Marketplace
2. Select a Bitfusion Boost cluster configuration
3. Fine-tune the Bitfusion Boost cluster launch parameters
4. Launch the Bitfusion Boost cluster and verify proper operation

Detailed Instructions

Locate a Bitfusion Boost enabled AMI

You can locate all Bitfusion Boost enabled AMIs in the AWS Marketplace by clicking here. Alternatively, below are direct links to our AMIs that are presently Boost enabled. If you don't already have an AWS account, you can create one by clicking here.

- Bitfusion Boost Ubuntu 14 Cuda 7
- Bitfusion Boost Ubuntu 14 Cuda 7.5
- Bitfusion Boost Ubuntu 14 Caffe
- Bitfusion Boost Ubuntu 14 Torch

Select a Bitfusion Boost cluster configuration

For this example we are using the Bitfusion Boost Ubuntu 14 Cuda 7.5 AMI, and we will launch an 8 GPU cluster. The image below has several color-coded boxes:

- Blue Box: Detailed descriptions of the available deployment (delivery) options for this AMI.
- Green Box: Selection box where you pick the cluster you want to create. Pick the GPU Optimized Cluster here.
- Yellow Box: Estimated cost of the cluster if you were to run it 24/7 for an entire month. Even though the cost of the infrastructure is shown per month, the actual charges are calculated based on hourly usage.

Once you have selected the GPU Optimized Cluster option, click the large Continue button above it, and you will be forwarded to the Launch on EC2 page shown below. The important sections are once again highlighted by color-coded boxes:

- Blue Box: Select the AWS region in which you would like to launch the Bitfusion Boost cluster.
- Green Box: Click this button to proceed and fine-tune the cluster parameters.

Fine-tune the Bitfusion Boost cluster parameters

After you click the Launch with CloudFormation Console button you will be taken to the Select Template page. Simply click the Next button on the bottom right and you will be presented with several options to fine-tune the cluster you are about to launch. All the available options are described in detail in our Boost on AWS documentation; however, to launch the 8 GPU cluster we only need to specify two options, as highlighted in the figure below:

- Blue Box: Select the key name you will use to SSH into the instance. If you have not created an AWS key before, you can create one by following the AWS directions here. After you create the key, return to the fine-tuning page, refresh it, and select the key you just created.
- Green Box: Enter the IP address from which you will be connecting to the EC2 instance. For now, enter 0.0.0.0/0 to keep things simple; for future clusters, consider setting the specific IP from which you will be connecting to tighten the security of the cluster even further.

Once you have set these two fields, click the Next button on the bottom right and you will be forwarded to the Options page. Nothing needs to be set here, so simply click the Next button again to go to the Review page.

Launch the Bitfusion Boost cluster

On the Review page, check the box next to the "I acknowledge that this template might cause AWS CloudFormation to create IAM resources" text at the very bottom of the page to allow our template to provision the cluster for you. The only thing left to do is to click the Create button, and your cluster will be created! At this point you are forwarded to the Stack Management page on AWS. It will most likely be blank initially, but after a couple of minutes you will see a stack being created, as shown in the image below. You can check the box next to the stack to obtain additional information about it; you will see the status shown as CREATE_IN_PROGRESS. The creation of the cluster can take anywhere from 5 to 10 minutes. If you are curious about all the details we are taking care of, simply click on the Events tab. Eventually you will see the status change to CREATE_COMPLETE, at which point it is time to log in to the cluster and verify that everything is working as expected.

To log in, you need the instance's address. You can find it by navigating to your AWS Console, clicking EC2, and then clicking Running Instances. In case you have other instances running, filter the instances by "bitfusion-boost" and you should see two instances, as shown below.

- Blue Box: Select the instance that contains cuda75 in its name. This is the application instance into which you will log in and from which you will execute your CUDA / GPU applications. The instance below it, with gpunode in its name, hosts the additional GPUs; depending on how many additional GPUs you selected when creating your cluster, you may have several of these instances.
- Green Box: Note down the Public DNS address listed in this box for your application instance. You will use this address in the commands below to access the instance and execute applications.

To access the application instance, execute the following command:

ssh -i {path to your pem file} ubuntu@{public dns address}

Once you are logged in, execute the following command to verify that all 8 GPUs are available to your application:

bfboost client /usr/local/cuda-7.5/samples/bin/x86_64/linux/release/deviceQuery

You should obtain the following output:

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.5, NumDevs = 8,
Device0 = GRID K520, Device1 = GRID K520, Device2 = GRID K520, Device3 = GRID K520,
Device4 = GRID K520, Device5 = GRID K520, Device6 = GRID K520, Device7 = GRID K520
Result = PASS
BFBoost run complete.

You are all set. Happy coding and development on your 8 GPU Bitfusion Boost cluster.
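As an aside, if you would rather script the launch than click through the console, the same stack can in principle be created with the AWS CLI. The sketch below is illustrative only: the stack name, template URL, and parameter keys (KeyName, SSHLocation) are stand-ins for the values shown on the Select Template and fine-tuning pages, so substitute the ones from your own Marketplace launch.

# Illustrative sketch - replace the stack name, template URL, and parameter
# keys/values with the ones shown on the CloudFormation pages for your AMI.
aws cloudformation create-stack \
  --stack-name bitfusion-boost-8gpu \
  --template-url https://s3.amazonaws.com/example-bucket/bitfusion-boost-cluster.template \
  --parameters ParameterKey=KeyName,ParameterValue=my-ec2-key \
               ParameterKey=SSHLocation,ParameterValue=0.0.0.0/0 \
  --capabilities CAPABILITY_IAM

# Poll the stack status until it reaches CREATE_COMPLETE (typically 5-10 minutes).
aws cloudformation describe-stacks \
  --stack-name bitfusion-boost-8gpu \
  --query 'Stacks[0].StackStatus'

The --capabilities CAPABILITY_IAM flag plays the same role as the IAM acknowledgement check-box on the Review page.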
Read More

aws
