I recently read an article from the Huffington Post titled “Prospecting vs Retargeting: Making the Most of the Marketing Mix”. This article sparked this blog post because, as a consumer, I become infuriated when I shop online, purchase an item, and then see ads for the very thing I bought just a couple of days ago. Clearly I don’t need another travel backpack, because I just bought one. But hey, why not market some hiking shoes, or sleeping bags, or ANYTHING other than the travel backpack I just bought? This is where AI and big data come into play.
Authors: Bhavesh Patel – Dell EMC & Mazhar Memon – Bitfusion

Deep Learning (DL), a key technique driving artificial intelligence innovations such as image recognition, chatbots, and self-driving cars, requires that algorithms be ‘trained’ using large data sets. Initially, this can be done on a single node (server). However, as models and datasets grow ever larger and more complex, it becomes essential to scale out.
Over the past ten years, we have seen application performance demands outpace Moore’s law in a variety of fields. The solution has been to rely on specialized processors such as GPUs, FPGAs, and other specialized ASICs. With these alternative compute architectures, hardware has become more complex while software has become more abstract.
Intro to TensorBoard

Now that we’re constantly validating the data and saving our model, we can start thinking about ways to visualize the ins and outs of our model, or ways to do exploratory data analysis of our model during or after training. In Dandelion Mane’s talk at the TensorFlow Dev Summit 2017, he described TensorBoard as a flashlight to shine on the black box of deep neural networks. Sometimes shining a bright light is ill-advised.
In our last post we gave a basic introduction to TensorFlow 1.0. What we want to do now is take that foundation and move it forward. One of the most important parts of deep learning is understanding what is going on while the code is running. As our problems get more complicated and our datasets get larger, training time can go from minutes to days. If we’ve picked a model with poor hyperparameters, or just a bad model in general, we don’t want to wait hours before making an adjustment. And if we have great hyperparameters and a great model but don’t train for enough steps, we don’t want to start over from scratch. Or do we…
In a series of blog posts, we want to provide a step-by-step guide on how to get from a basic TensorFlow model to best-in-class architectures. Deep learning is a hot topic, and it’s easy to find both introductory and advanced resources, but it can be difficult to see how to get from the intro material to the more advanced models. We at Bitfusion would like to help fill in the missing steps along the journey to becoming a deep learning expert.
We’re excited to announce that the newest version of our Bitfusion Ubuntu 14 TensorFlow AMI is now available in the AWS Marketplace with TensorFlow 1.0! This TensorFlow release marks a major milestone in functionality, robustness, and, going forward, backwards compatibility.
We’re excited to announce the newest AMI in our lineup: the Bitfusion Ubuntu 14 MXNet AMI, now available in the AWS Marketplace. Amazon CTO Werner Vogels recently announced that MXNet is the company’s official deep learning framework of choice, paving the way for fast growth and interest in this previously lesser-known project.