Programmable, pooled data center infrastructure has shaped the development of new platforms over the last few years. Unpredictable resource consumption, and the need to provision compute, storage, and networking on demand as needs arise, drove the hyperconvergence trend. While compute, storage, and networking are well understood and can be provisioned per demand (with platforms such as Cisco HyperFlex and Cisco UCS), machine learning and AI workloads brought a new variable to the equation: GPUs. GPUs today are not part of the hyperconverged architecture; they must be hard-assigned to a workload, with no elasticity, flexibility, or sharing across users. Essentially, GPUs are where storage was 15 years ago.
We are delighted to partner with VMware and deliver GPU virtualization and AI Attached Network. VMware is a leader in compute, storage, and networking virtualization. Now, through the partnership with Bitfusion, machine learning, AI, GPUs, and FPGAs can also be virtualized and treated as composable resources. Look for us at https://lnkd.in/gmzqKEf
Application performance demands have increasingly been outpacing Moore's law in a variety of fields, particularly AI and deep learning. Co-processors like GPUs offer immense speedups to applications in these fields compared to CPUs. At Bitfusion, we build technology to disaggregate co-processors like GPUs and re-aggregate them in real time over Ethernet, InfiniBand RDMA, or RoCE networks, to create an elastic AI infrastructure. Just as network-attached storage did for disks, our technology gives customers network-attached co-processors.
In 2018, I am excited to share with you my plans for the year ahead. Our employees, investors, advisors, partners, and customers have supported the growth of the company since its founding, and I'm very grateful for that. When I started this entrepreneurial journey three years ago to go after a big opportunity, heterogeneous hardware like GPUs and FPGAs for general-purpose compute was just on the horizon, and we set out on a mission to build the operating system for the modern heterogeneous compute data center. We are now the world's first infrastructure management platform to support CPUs, GPUs, and FPGAs, with AI as a beachhead. The team at Bitfusion has helped form a strong foundation for the company. We hope to accelerate and lead the transformation to elastic heterogeneous infrastructure deployment in the year ahead.
Deep learning and AI technologies are revolutionizing the world, whether through self-driving cars, drones, virtual assistants, more accurate medical diagnosis, or automatic lead generation. As a result, AI is drastically altering the ways in which business is conducted. In the 90s and 2000s, the web revolutionized businesses by offering them the ability to improve customer value by an order of magnitude. Take Amazon, for example: it began as a website for selling books, a business that had been brick-and-mortar until then, and later transformed both the retail industry and the computing industry. Another example is how mobile revolutionized businesses just under a decade ago.
Over the last 10 or so years, application performance demands have increasingly been outpacing Moore’s law in a variety of fields, particularly deep learning and AI. The solution has been to adopt heterogeneous accelerated processors, such as GPUs, FPGAs and various specialized ASICs. With the implementation of these alternative compute architectures, hardware has inevitably become more complex, and software more abstract, to keep up with the shifting landscape.
I recently read an article from the Huffington Post titled "Prospecting vs Retargeting: Making the Most of the Marketing Mix". This article sparked this blog post because, as a consumer, I become infuriated when I shop online and purchase an item, only to get ads days later for the very thing I just bought. Clearly I don't need another travel backpack, because I just bought one. But hey, why not market some hiking shoes, or sleeping bags, or ANYTHING other than the travel backpack I just bought? This is where AI and big data come into play.
Authors: Bhavesh Patel – Dell EMC & Mazhar Memon – Bitfusion. Deep learning (DL), a key technique driving artificial intelligence innovation such as image recognition, chatbots, and self-driving cars, requires that algorithms be 'trained' using large data sets. Initially, this can be done on a single node (server). However, as models and datasets grow ever larger and more complex, it becomes essential to scale out.
Over the past ten years, we have seen application performance demands begin to outpace Moore's law in a variety of fields. The solution has been to rely on specialized processors such as GPUs, FPGAs, and other specialized ASICs. With these alternative compute architectures, hardware has become more complex and software more abstract.
Intro to TensorBoard
Now that we're constantly validating the data and saving our model, we can start thinking of ways to visualize the ins and outs of our model, or ways to do exploratory data analysis on it during or after training. In Dandelion Mane's talk at the TensorFlow Dev Summit 2017, he described TensorBoard as a flashlight to shine on the black box of deep neural networks. Sometimes shining a bright light is ill-advised.
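As a minimal sketch of how that flashlight gets its data, here is how training metrics can be written to an event-file directory that TensorBoard then reads. This assumes TensorFlow 2.x and uses a hypothetical log directory (`logs/demo`) and a stand-in loss value rather than a real training loop.

```python
# Minimal sketch: logging a scalar metric for TensorBoard (assumes TensorFlow 2.x).
import tensorflow as tf

logdir = "logs/demo"  # hypothetical log directory; any path works
writer = tf.summary.create_file_writer(logdir)

for step in range(100):
    loss = 1.0 / (step + 1)  # stand-in for a real training loss
    with writer.as_default():
        # Each call appends a scalar event that TensorBoard plots over steps.
        tf.summary.scalar("loss", loss, step=step)

writer.flush()
```

After running this, `tensorboard --logdir logs/demo` serves the dashboard, and the "loss" curve appears under the Scalars tab.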