
TensorFlow 0.9 AMI with Keras, cuDNN 5, and 30-40% faster

A few weeks ago we published a tutorial on Easy TensorFlow Model Training on AWS using our Bitfusion TensorFlow AMI. The quick tutorial, as well as the AMI itself, has proven immensely popular with our users, and we have received various feature requests. As a result, this week we are releasing v0.03 of the TensorFlow AMI, which introduces several new features:
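Once an instance of the updated AMI is running, a quick way to confirm the advertised stack is in place (a sketch; it assumes the AMI's default Python environment and the usual CUDA header path, which may differ) is:

# Check the TensorFlow and Keras versions bundled with the AMI
python -c "import tensorflow as tf; print(tf.__version__)"   # expect 0.9.x
python -c "import keras; print(keras.__version__)"
# On GPU instances, confirm the cuDNN major version (expect 5); the header location may differ
grep -A 2 "define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h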
Read More

ami tensorflow

New Blender AMI with Mate Desktop, Turbo VNC Server, and VirtualGL

Last week we released several media-related AMIs featuring RESTful API interfaces. We received feedback, particularly on our Blender rendering AMI, that some of you would like to work directly with Blender via a remote session and a desktop environment. Always listening to user feedback, this week we are releasing a Bitfusion Ubuntu 14 Blender AMI pre-installed with Nvidia Drivers, the Cuda 7.5 Toolkit, Blender 2.77, and a complete Linux desktop environment including Mate Desktop, TurboVNC Server, and VirtualGL for full 3D hardware acceleration of OpenGL applications when using remote display software.
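To give a feel for the workflow, a typical remote session might look like the following (a sketch; the key file, login user, and VNC display number are placeholders, and the exact steps depend on how the VNC session is started on the instance):

# On your local machine: tunnel the VNC port over SSH, then connect with the TurboVNC viewer
ssh -i your-key.pem -L 5901:localhost:5901 ubuntu@{instance-public-dns}
vncviewer localhost:5901

# Inside the remote Mate desktop session: launch Blender through VirtualGL for 3D acceleration
vglrun blender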
Read More

ami

New Bitfusion Deep Learning and Media AMIs with REST APIs for Developers

Today we are introducing four new AMIs targeted at developers who want to offload compute-intensive applications or tasks from their thin clients, laptops, or even mobile devices into the cloud, where they can utilize vastly more powerful systems to get these tasks done orders of magnitude faster. The four new AMIs are as follows: Bitfusion Mobile Deep Learning Service, Bitfusion Mobile Image Manipulation Service, Bitfusion Mobile Rendering Service, and Bitfusion Mobile Video Processing Service. Each AMI comes with a simple REST API which can be used as is and for which we provide simple example scripts. Alternatively, you can build on top of our API and provide your own services or integrate these AMIs into your applications. Here are the details for each new AMI:
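Purely as an illustration of the client side (the endpoint path, parameter names, and host below are hypothetical placeholders, not the documented API; the actual routes are described in each AMI's example scripts):

# Submit an image to a hypothetical REST endpoint on a running AMI instance and save the response
curl -X POST -F "image=@photo.jpg" http://{ami-instance-address}/api/process -o result.json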
Read More

ami deep learning

Deploy Bitfusion Boost on AWS faster than ever

Enabling development, deployment, and acceleration of multi-node GPU applications, from deep learning to oil exploration. Back in March, we first described how to deploy Bitfusion Boost on AWS to create a 16 GPU cluster. We have received a lot of customer feedback since then; in particular, we paid attention to the issues that tripped you up in order to make the experience more seamless. With that in mind, we engaged the AWS Marketplace team to integrate Bitfusion Boost directly into our products, enabling you to spin up Bitfusion Boost GPU clusters directly from the AWS Marketplace with just a few clicks.

Some of the major improvements include:
- Run multi-GPU enabled applications across multiple GPU instances without any additional configuration or code changes
- Boost enabled AMIs can be launched in cluster mode directly from the AWS Marketplace
- Boost enabled AMI clusters can be launched in all AWS regions that contain GPU instances
- The AMI opt-in process is identical for single-instance and cluster-mode AMI launches
- Monthly cluster cost estimates are provided directly in the AWS Marketplace
- Simplified cluster launch parameters for CloudFormation templates enable easier cluster customization

Summary
Launching a Bitfusion Boost Cluster now entails only 4 easy steps:
1. Locate a Bitfusion Boost enabled AMI in the AWS Marketplace
2. Select a Bitfusion Boost Cluster configuration
3. Fine-tune the Bitfusion Boost Cluster launch parameters
4. Launch the Bitfusion Boost Cluster and verify proper operation

Detailed Instructions

Locate a Bitfusion Boost enabled AMI
You can locate all Bitfusion Boost enabled AMIs in the AWS Marketplace by clicking here. Alternatively, below are direct links to our AMIs which are presently Boost enabled. If you don't already have an AWS account, you can create one by clicking here.
- Bitfusion Boost Ubuntu 14 Cuda 7
- Bitfusion Boost Ubuntu 14 Cuda 7.5
- Bitfusion Boost Ubuntu 14 Caffe
- Bitfusion Boost Ubuntu 14 Torch

Select a Bitfusion Boost cluster configuration
For this example we are using the Bitfusion Boost Ubuntu Cuda 7.5 AMI, and we will launch an 8 GPU cluster. The image below has several color-coded boxes:
- Blue Box: Shows detailed descriptions of the available deployment (delivery) options for this AMI.
- Green Box: Selection box where you can pick the cluster you want to create. Pick the GPU Optimized Cluster here.
- Yellow Box: Estimated costs for the cluster if you were to run it 24/7 for an entire month. Even though the cost of the infrastructure is shown per month, the actual charges will be calculated based on hourly usage.
Once you have selected the GPU Optimized Cluster option, click the large Continue button above it, and you will be forwarded to the Launch on EC2 page shown below. The important sections are once again highlighted by color-coded boxes:
- Blue Box: Select the AWS region in which you would like to launch the Bitfusion Boost Cluster.
- Green Box: Click this button to proceed and fine-tune the cluster parameters.

Fine-tune the Bitfusion Boost Cluster parameters
After you click the Launch with CloudFormation Console button you will be taken to the Select Template page on AWS. Simply click the Next button on the bottom right and you will be presented with several options to fine-tune the cluster you are about to launch. All the available options are described in detail in our Boost on AWS Documentation; however, to launch the 8 GPU cluster we only need to specify two options, as highlighted in the figure below:
- Blue Box: Select the key name which you will use to SSH into the instance. If you have not created an AWS key before, you can create one by following the AWS directions here. After you create the key, return to the fine-tuning page where the key needs to be selected, refresh the page, and then select the key you just created.
- Green Box: Enter the IP address from which you will be connecting to the EC2 instance. For now, enter 0.0.0.0/0 to keep things simple; however, for future clusters consider setting the specific IP from which you will be connecting to increase the security of the cluster even further.
Once you have set these two fields, click the Next button on the bottom right and you will be forwarded to the Options page. Nothing needs to be set here, so simply click the Next button again to go to the Review page.

Launch the Bitfusion Boost Cluster
On the Review page, check the box next to the "I acknowledge that this template might cause AWS CloudFormation to create IAM resources" text at the very bottom of the page to allow our template to provision the cluster for you. The only thing left to do is to click the Create button, and your cluster will be created! At this point you are forwarded to the Stack Management page on AWS. It will most likely be blank initially, but after a couple of minutes you will see a stack being created as shown in the image below. You can click the check-box next to the stack to obtain additional information about it; you will see that its status is CREATE_IN_PROGRESS. The creation of the cluster can take anywhere from 5 to 10 minutes. If you are curious about all the details we are taking care of, simply click on the Events tab. Eventually you will see the status change to CREATE_COMPLETE - time to log in to the cluster and verify that everything is working as expected.

To log in to the instance you need to obtain its IP address. You can find this information by navigating to your AWS Console, clicking on EC2, and then clicking on Running Instances. In case you have other instances running, filter the instances by "bitfusion-boost" and you should see two instances as shown below.
- Blue Box: Select the AWS instance that contains cuda75 in the name. This is the application instance into which you will log in, and from which you will execute your CUDA / GPU applications. The instance below it, with gpunode in the name, is the instance hosting the additional GPUs. Depending on how many additional GPUs you selected when creating your cluster, you may have several of these instances.
- Green Box: Note down the Public DNS address listed in this box for your instance. You will use this address in the commands below to access the instance and execute applications.

To access the application instance, execute the following command:
ssh -i {path to your pem file} ubuntu@{public dns address}
Once you are logged in, execute the following command to verify that all 8 GPUs are available to your application:
bfboost client /usr/local/cuda-7.5/samples/bin/x86_64/linux/release/deviceQuery
You should obtain the following output:
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.5, NumDevs = 8, Device0 = GRID K520, Device1 = GRID K520, Device2 = GRID K520, Device3 = GRID K520, Device4 = GRID K520, Device5 = GRID K520, Device6 = GRID K520, Device7 = GRID K520
Result = PASS
BFBoost run complete.
You are all set. Happy coding and development on your 8+ GPU Bitfusion Boost Cluster.
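Putting the login and verification steps together as you would type them (a sketch; the key file and DNS address are placeholders, and the final line simply illustrates the general "bfboost client <application>" pattern with a hypothetical binary of your own, which is not part of the tutorial above):

# Log in to the application instance (substitute your key file and the Public DNS address)
ssh -i ~/keys/boost-cluster.pem ubuntu@{public dns address}

# Verify that all 8 GPUs are visible through Bitfusion Boost
bfboost client /usr/local/cuda-7.5/samples/bin/x86_64/linux/release/deviceQuery

# The same pattern should apply to your own CUDA binaries, e.g.:
bfboost client ./my_multi_gpu_app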
Read More

aws

Bitfusion Presenting at Data By the Bay Conference - May 19, 2016

Please join us on Thursday, May 19, 2016 at 10:40am at the Data By the Bay Conference as we present on the Promise of Heterogeneous Computing.
Read More

data science Events

Easy TensorFlow Model Training on AWS

Recently, Google released TensorFlow 0.8, which, among other features, provides distributed computing support. While this is great for power users, the most important step for most people trying to get started with machine learning or deep learning is simply to have a powerful and pre-configured instance. To solve this problem, we recently released the Bitfusion Ubuntu 14 TensorFlow AMI, built on TensorFlow 0.8 and configured to work equally well across CPU and GPU AWS instances.
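After launching the AMI, a quick sanity check (a sketch; it assumes the AMI's default Python environment) is to print the TensorFlow version and run a trivial session with device placement logging, which reports whether operations are mapped to the CPU or a GPU:

python -c "import tensorflow as tf; print(tf.__version__)"
# Device placement logging prints which devices TensorFlow assigns operations to
python -c "import tensorflow as tf; sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)); print(sess.run(tf.constant('hello')))"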
Read More

tensorflow tutorial

Tutorial for model creation and training on NVIDIA DIGITS

Train neural network models with Caffe using the sample MNIST or CIFAR datasets, and learn how to use your own dataset to train models. Before you start this tutorial, you should have launched an NVIDIA DIGITS instance on Nimbix and have the URL to the DIGITS UI on Nimbix. If you haven't launched the NVIDIA DIGITS instance on Nimbix, go here to learn how to do that, or watch this screencast.

We prepared the NVIDIA DIGITS application with the three most common data sets used for learning with NVIDIA DIGITS and Caffe. These data sets are:
- MNIST, available in /db/mnist, with training and test data sets in /db/mnist/train and /db/mnist/test respectively
- cifar10, available at /db/cifar10, with training and test data sets in /db/cifar10/train and /db/cifar10/test respectively
- cifar100, available at /db/cifar100, with training and test data sets in /db/cifar100/train and /db/cifar100/test respectively
DIGITS has auto-completion, so when you begin to type these paths, you will see them suggested to you.

Creating a Model
Select the blue Images button under Datasets, and choose "Classification." This will guide you through setting up a data set of images that you can train using deep learning. Enter the settings in the screenshot below to configure your train and validation databases.

Training with Caffe
Now you are ready to train your model! Go back to the main DIGITS page and select Images > Classification to create a new classification model. On this page, the defaults will work fine for a first model. You simply need to select the data set you already configured, and select the GPUs in your instance at the bottom of the page. Kick it off and watch it run.

Custom Data Sets
If you have a custom data set, you can upload it using Filezilla to drop.jarvice.com. See "How do I transfer files to and from JARVICE?" for instructions on using Filezilla. Many applications, including NVIDIA DIGITS, have an SSH server running. If you would like to upload or download your data while DIGITS is running using scp, rsync, or another application, you can configure your SSH keys prior to launching your job. See "How do I upload my SSH public key? What is it used for?" for information on configuring password-less SSH. You can upload data sets using these methods to your /data directory, referred to as drop.jarvice.com, or your "drop", and then enter this path instead of /db/mnist. If you have not yet created your own custom data set, there are instructions on the DIGITS GitHub page on how to structure your image database and prepare labels. Now you are ready to train your deep learning models with GPUs in the cloud!

Want more?
If you are interested in enabling more advanced deep learning applications in the cloud like NVIDIA DIGITS, or customizing your own machine learning environment in the cloud, please don't hesitate to live chat with us right on this page by clicking the blue/white bubble on the bottom right, or email us at [email protected] You can also learn more about NVIDIA DIGITS here.
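As an illustration of the scp/rsync route described above (a sketch; the local directory and user name are placeholders, and it assumes your SSH key is already configured for your Nimbix account and that your drop maps to the /data directory seen inside the job):

# Copy a custom image data set to your drop, so it appears under /data in the running job
scp -r ./my_dataset {your-nimbix-username}@drop.jarvice.com:my_dataset

# Or use rsync, which can resume interrupted transfers
rsync -avz ./my_dataset/ {your-nimbix-username}@drop.jarvice.com:my_dataset/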
Read More

tutorial

Get started with NVIDIA DIGITS for as low as 49 cents in just 3 easy steps

Start training deep learning models on NVIDIA K80 or Titan X GPUs starting at $0.49/hour. Get Started Today.

The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning in the hands of data scientists and researchers. Quickly design the best deep neural network (DNN) for your data using real-time network behavior visualization. Best of all, DIGITS is a complete system, so you don't have to write any code. DIGITS integrates the popular Caffe deep learning framework and Torch. You can train neural network models, including the pre-built AlexNet and GoogleNet models. Get started with DIGITS in just 3 easy steps and get your DIGITS environment up and running in minutes. (For those of you who would like to watch instead of read, here is a 35 second screencast of the three steps.)

Step-1: To get started, click here. You may have to sign up for a new account on Nimbix if you don't have one already; just log in to your Nimbix account if you do. After you have logged in to https://platform.jarvice.com, navigate to the "Compute" tab and select NVIDIA DIGITS.

Step-2: You can keep the default $0.49/hour GPU or select other GPUs to suit your needs. Then hit "Continue". You may be prompted with a modal summarizing the machine configuration asking you to hit "Submit" again.

Step-3: From the Dashboard tab, click on the job that has just launched to expand its details. A Connect button will appear shortly after your job enters the "Processing" state. This will open a link to the NVIDIA DIGITS UI directly in your browser. You will also be able to SSH to the machine using the username "nimbix" and the same IP address/password.

If you have any questions or need help getting started with NVIDIA DIGITS on Nimbix, please don't hesitate to live chat with us right on this page by clicking the blue/white bubble on the bottom right or emailing us at [email protected] Once you are on the DIGITS UI, here are a few things you can try.
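If you prefer the command line, the SSH access mentioned in Step-3 looks roughly like this (the IP address and password are the ones shown in the job's connection details; the host value here is a placeholder):

# Connect to the running DIGITS machine as the "nimbix" user
ssh nimbix@{instance-ip-address}
# When prompted, enter the password shown alongside the job in the Nimbix dashboard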
Read More

nvidia

Deep Learning in the Cloud with NVIDIA DIGITS and Titan-X GPUs starting at $0.49 per hour

Deep learning users can now access pre-configured NVIDIA DIGITS Titan-X GPU instances starting at 49 cents per hour on Nimbix! Get Started Today.

Data scientists and deep learning users can now try the single-click solution on cloud service provider Nimbix to launch an instance configured to run NVIDIA DIGITS on Titan-X GPUs for as low as $0.49/hour, the most affordable high-performance GPUs anywhere in the cloud, powered by Bitfusion Boost's Software Defined Supercompute technology.

The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning in the hands of data scientists and researchers. Quickly design the best deep neural network (DNN) for your data using real-time network behavior visualization. Best of all, DIGITS is a complete system, so you don't have to write any code. DIGITS integrates the popular Caffe deep learning framework and Torch. You can train neural network models, including the pre-built AlexNet and GoogleNet models. Developers who use Caffe or Torch can now focus on training their neural networks faster in the most affordable manner, without worrying about configuring drivers, kernels, or toolkits!

Data scientists and developers today spend a lot of time configuring their development environment to get deep learning frameworks like Caffe or Torch running. Moreover, there is not really any option to use machines with powerful GPUs like Titan-X or K80s on an hourly basis. The single-click solution lowers the barrier for deep learning developers and data scientists to spin up affordable Titan-X GPU instances powered by Bitfusion Boost's GPU virtualization technology. This enables developers to save money by not having to reserve a larger machine up front for development, and also gives them the flexibility to demand a larger number of GPUs when they are ready to use them.

The DIGITS instance comes pre-installed with Nvidia Drivers, the Cuda 7 Toolkit, cuDNN 3, Caffe, and Torch. Pay-per-use pricing starts at $0.49 per hour for one Titan-X GPU, which is the lowest price anywhere in the cloud today. You can keep adding additional GPUs as you see fit for your application. Monthly subscription pricing is also available. Pricing includes all infrastructure and platform licensing fees.

You can access NVIDIA DIGITS on Nimbix immediately by going here and selecting NVIDIA DIGITS on the page. Here is a screencast of how to launch an NVIDIA DIGITS instance on Nimbix in less than a minute. To get started, follow these 3 easy steps. Get Started Today.

If you have any questions or need help getting started with NVIDIA DIGITS on Nimbix, please don't hesitate to live chat with us right on this page by clicking the blue/white bubble on the bottom right or emailing us at [email protected] Once you have launched DIGITS, here are a few things you can try.
Read More

deep learning nvidia

Running Boost Machine Images on Spot Instances

One of the nifty features of AWS is that you can use spot instances instead of on-demand instances to significantly reduce costs. To use spot instances, we need to create a spot instance request, which includes the maximum price we are willing to pay per hour, as well as a few other constraints such as the instance type and availability zone. You can find a detailed discussion of all the AWS spot instance parameters in the following AWS user guide.
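For a rough sense of what such a request looks like from the AWS CLI (a sketch; the bid price, AMI ID, key name, instance type, and availability zone are placeholders you would replace with your own values):

# Request one GPU spot instance with a maximum bid of $0.30/hour
aws ec2 request-spot-instances \
  --spot-price "0.30" \
  --instance-count 1 \
  --launch-specification '{
      "ImageId": "ami-xxxxxxxx",
      "InstanceType": "g2.2xlarge",
      "KeyName": "my-key",
      "Placement": {"AvailabilityZone": "us-east-1a"}
  }'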
Read More

tutorial machine images
