Deep learning and AI technologies are revolutionizing the world, whether through self-driving cars, drones, virtual assistants, more accurate medical diagnoses, or automatic lead generation. As a result, AI is drastically altering the way business is conducted. In the 1990s and 2000s, the web revolutionized businesses by giving them the ability to improve customer value by an order of magnitude. Take Amazon, for example: it started as a website selling books, a business that had been brick and mortar until then, and went on to transform both the retail industry and the computing industry. Mobile revolutionized businesses in a similar way just under a decade ago.
Over the last decade or so, application performance demands have increasingly outpaced Moore's law in a variety of fields, particularly deep learning and AI. The solution has been to adopt heterogeneous accelerated processors, such as GPUs, FPGAs, and various specialized ASICs. With the adoption of these alternative compute architectures, hardware has inevitably become more complex, and software more abstract, to keep up with the shifting landscape.
I recently read a Huffington Post article titled “Prospecting vs Retargeting: Making the Most of the Marketing Mix”. It sparked this blog post because, as a consumer, I become infuriated when I shop online, purchase an item, and then keep seeing ads for the very thing I bought just a couple of days ago. Clearly I don't need another travel backpack; I just bought one. But hey, why not market some hiking shoes, or sleeping bags, or ANYTHING other than the travel backpack I just bought? This is where AI and big data come into play.
Authors: Bhavesh Patel – Dell EMC & Mazhar Memon – Bitfusion

Deep Learning (DL), a key technique driving artificial intelligence innovation such as image recognition, chatbots, and self-driving cars, requires algorithms to be ‘trained’ using large data sets. Initially, this can be done on a single node (server). However, as models and datasets grow ever larger and more complex, it becomes essential to scale out.
Leads and prospects generated from search or display ads can be very costly and challenging to obtain. That's why Apposphere, an Austin, Texas–based company, saw a huge opportunity in mining the wealth of information and activity on social media for lead generation.
Bitfusion Deep Learning AMIs, including TensorFlow, Caffe, Torch, Theano, Chainer, and DIGITS 4, are now available on the newly announced AWS P2 Instances. AWS recently introduced the P2 Instances, which feature Nvidia K80 accelerators with GK210 GPUs. Unlike the previous G2 instances, which were equipped with K520 cards offering only 4 GiB of memory per card, each GPU in the P2s has 12 GiB of memory with a memory bandwidth of 240 GB/s. The table below summarizes the specifications for the new P2 Instances:

Instance      vCPU Count   System Memory   GPU Count   Total GPU Memory   Network
p2.xlarge     4            61 GiB          1           12 GiB             High
p2.8xlarge    32           488 GiB         8           96 GiB             10 Gigabit
p2.16xlarge   64           732 GiB         16          192 GiB            20 Gigabit

The new P2 instances provide significant advantages over the last generation of instances when it comes to deep learning, including the ability to train neural networks significantly faster and to work with larger models that previously exceeded GPU memory limits. We will be posting a follow-on blog shortly detailing performance benchmarks between the new P2 instances and the previous generation of G2 instances. In the meantime, we have qualified our deep learning AMIs on the new P2 instances, and they are available in the AWS Marketplace as follows:

Bitfusion Boost Ubuntu 14 Caffe AMI
Pre-installed with Ubuntu 14, Nvidia Drivers, CUDA 7.5 Toolkit, cuDNN 5.1, Caffe, pyCaffe, and Jupyter. Boost enabled for multi-node deployment. Get started with Caffe machine learning and deep learning in minutes. Launch on AWS!

Bitfusion Boost Ubuntu 14 Torch 7 AMI
Pre-installed with Ubuntu 14, Nvidia Drivers, CUDA 7.5 Toolkit, cuDNN 5.1, Torch 7, iTorch, and Jupyter. Boost enabled for multi-node deployment. Get started with Torch numerical computing, machine learning, and deep learning in minutes. Launch on AWS!

Bitfusion Ubuntu 14 Chainer AMI
Pre-installed with Nvidia Drivers, CUDA 7.5 Toolkit, cuDNN 5.1, Chainer 1.13.0, and Jupyter. Optimized to leverage Nvidia GRID as well as CPU instances. Designed for developers as well as those eager to get started with the flexible Chainer framework for neural networks. Launch on AWS!

Bitfusion Ubuntu 14 Digits 4 AMI
Pre-installed with the Deep Learning GPU Training System (DIGITS) from Nvidia. Leverage GPU instances to accelerate the pre-installed Caffe and Torch applications. Train deep neural networks and view results directly from your browser. Launch on AWS!

Bitfusion Ubuntu 14 TensorFlow AMI
Pre-installed with Ubuntu 14, Nvidia Drivers, CUDA 7.5 Toolkit, cuDNN 5.1, TensorFlow, Magenta, Keras, and Jupyter. Get started with TensorFlow deep learning, machine learning, and numerical computing in minutes with pre-installed tutorial collateral. Launch on AWS!

Bitfusion Ubuntu 14 Theano AMI
Pre-installed with Ubuntu 14, Nvidia Drivers, CUDA 7.5 Toolkit, cuDNN 5, Theano, and Jupyter. Get started with Theano deep learning, machine learning, and numerical computing, and develop interactive Theano scripts via Python directly from your browser. Launch on AWS!

Bitfusion Mobile Deep Learning Service AMI
Pre-installed with Nvidia Drivers, CUDA 7.5 Toolkit, Caffe, GPU Rest Engine, pre-trained models, and a simple REST API server. Use existing pre-trained models or train your own, then integrate inference tasks into your applications via the provided REST API. Launch on AWS!
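When sizing a model against these instance types, the relevant figure is total GPU memory, which follows directly from the P2 specification table above: every K80 GPU exposes 12 GiB. A minimal Python sketch (the dictionary below is simply the table restated; the helper function is ours, not an AWS API):

```python
# Per-GPU memory on P2 instances (Nvidia K80, GK210 GPUs): 12 GiB each.
GIB_PER_GPU = 12

# GPU counts per P2 instance type, from the specification table above.
P2_GPU_COUNTS = {
    "p2.xlarge": 1,
    "p2.8xlarge": 8,
    "p2.16xlarge": 16,
}

def total_gpu_memory_gib(instance_type):
    """Total GPU memory (GiB) available on a given P2 instance type."""
    return P2_GPU_COUNTS[instance_type] * GIB_PER_GPU

def smallest_instance_for(model_gib):
    """Smallest P2 instance type whose total GPU memory fits the model."""
    for name in ("p2.xlarge", "p2.8xlarge", "p2.16xlarge"):
        if total_gpu_memory_gib(name) >= model_gib:
            return name
    raise ValueError("model exceeds the largest P2 instance")
```

For example, a model and working set needing 50 GiB of GPU memory would land on a p2.8xlarge (96 GiB total), assuming it can be partitioned across the eight GPUs.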
Today we are introducing four new AMIs targeted at developers who want to offload compute-intensive applications or tasks from their thin clients, laptops, or even mobile devices into the cloud, where they can use vastly more powerful systems to get those tasks done orders of magnitude faster. The four new AMIs are: Bitfusion Mobile Deep Learning Service, Bitfusion Mobile Image Manipulation Service, Bitfusion Mobile Rendering Service, and Bitfusion Mobile Video Processing Service. Each AMI comes with a simple REST API that can be used as is, and for which we provide simple example scripts. Alternatively, you can build on top of our API to provide your own services, or integrate these AMIs into your applications. Here are the details for each new AMI:
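As a sketch of what calling one of these services could look like from a thin client, here is a minimal Python helper that builds an image-classification request against a running instance. The hostname, the `/api/classify` endpoint path, and the `model` parameter are illustrative assumptions for this sketch, not the documented API of the AMIs:

```python
import urllib.request

def build_classify_request(host, image_bytes, model="caffenet"):
    """Build an HTTP POST request that sends raw image bytes to a
    (hypothetical) /api/classify endpoint on a Mobile Service AMI."""
    url = f"http://{host}/api/classify?model={model}"
    return urllib.request.Request(
        url,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )

# Against a live instance, you would then send the request with:
#   with urllib.request.urlopen(req) as resp:
#       predictions = resp.read()
```

Keeping request construction separate from sending makes the client easy to test offline; the actual endpoint names and response format would come from the example scripts shipped with each AMI.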
Deep learning users can now access pre-configured NVIDIA DIGITS Titan X GPU instances starting at 49 cents per hour on Nimbix! Get Started Today

Data scientists and deep learning users can now use a single-click solution on cloud service provider Nimbix to launch an instance configured to run NVIDIA DIGITS on Titan X GPUs for as low as $0.49/hour, the most affordable high-performance GPUs anywhere in the cloud, powered by Bitfusion Boost's Software Defined Supercompute technology.

The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning in the hands of data scientists and researchers. Quickly design the best deep neural network (DNN) for your data using real-time network behavior visualization. Best of all, DIGITS is a complete system, so you don't have to write any code. DIGITS integrates the popular Caffe deep learning framework and Torch. You can train neural network models, including the pre-built AlexNet and GoogLeNet models. Developers who use Caffe or Torch can now focus on training their neural networks faster and more affordably, without worrying about configuring drivers, kernels, or toolkits!

Data scientists and developers today spend a lot of time configuring their development environment to get deep learning frameworks like Caffe or Torch running. Moreover, there are few options for using machines with powerful GPUs like Titan Xs or K80s on an hourly basis. The single-click solution lowers the barrier for deep learning developers and data scientists to spin up affordable Titan X GPU instances powered by Bitfusion Boost's GPU virtualization technology. This lets developers save money by not having to reserve a larger machine up front for development, and gives them the flexibility to request more GPUs when they are ready to use them. The DIGITS instance comes pre-installed with Nvidia Drivers, CUDA 7 Toolkit, cuDNN 3, Caffe, and Torch.
Pay-per-use pricing starts at $0.49 per hour for one Titan X GPU, the lowest price anywhere in the cloud today. You can add additional GPUs as you see fit for your application. Monthly subscription pricing is also available, and pricing includes all infrastructure and platform licensing fees. You can access NVIDIA DIGITS on Nimbix immediately by going here and selecting NVIDIA DIGITS on the page. Here is a screencast of how to launch an NVIDIA DIGITS instance on Nimbix in less than a minute. To get started, follow these 3 easy steps. Get Started Today

If you have any questions or need help getting started with NVIDIA DIGITS on Nimbix, please don't hesitate to live chat with us right on this page by clicking the blue/white bubble on the bottom right, or email us at email@example.com. Once you have launched DIGITS, here are a few things you can try.