Neural Magic has raised $30 million in a Series A funding round. The company offers open source inference models and a proprietary software deployment engine, and its platform sees more than 1,000 unique installations per week. It has also received funding from the National Science Foundation. The company plans to use the new capital to build out its software for businesses.
Neural Magic raises $30 million Series A round
Neural Magic is an early-stage AI software company building a platform for deep learning inference. The company recently raised a $30 million Series A round led by NEA with participation from other investors. The new capital will be used to improve the company’s open source inference models and proprietary software deployment engine, strengthening its position in machine learning infrastructure.
Neural Magic’s approach to deep learning is also designed to be resource-efficient. Its algorithms use automatic model sparsification to shrink a model’s footprint while allowing it to run on CPUs at GPU-class speeds. Neural Magic also offers a number of “recipes” that can be plugged into machine learning libraries.
Neural Magic is a machine learning software company spun out of MIT. Its software allows companies to run machine learning models on commodity CPUs, which drastically reduces the cost of a machine learning project and helps businesses scale their machine learning efforts quickly.
Neural Magic’s open source platform launched in February and now sees more than 1,000 unique installations per week. The company has also added former Sun Microsystems CTO Greg Papadopoulos, who has expertise in parallel data flow computing architectures, to its board.
Neural Magic is one of the more promising companies in the Boston tech scene. It previously raised $15 million in seed funding, and its software’s ability to run machine learning models on commodity CPUs has made it attractive to investors. With this round, the company has raised roughly $50 million in total.
Neural Magic builds software to deploy deep learning models in edge locations on commodity hardware, without specialized accelerator chips. This investment round is the latest step on Neural Magic’s path to building out its AI platform.
Neural Magic has secured its $30 million Series A round with the participation of existing investors. The funds will go toward the company’s open source inference models and proprietary deployment engine as it expands its platform, adds new products, and scales its operations.
Many companies and developers prefer the simplicity of using basic CPU chips and optimizing with software. A company like Target has racks of hardware in every store and might prefer to optimize with software made by Neural Magic rather than making complex, bespoke investments into specialized accelerator chips like Google’s Tensor Processing Unit (TPU), Neural Magic CEO Brian Stevens said in an interview with VentureBeat.
“That’s the world we’re trying to create. We want to give developers the flexibility to deploy AI on commodity processors already located in the edge location,” he said.
This financing, which brings the company’s total amount raised to $50 million, was led by existing investor NEA, with participation from Andreessen Horowitz, Amdocs, Comcast Ventures, Pillar VC, and Ridgeline Ventures.
Neural Magic will use the new capital to invest in the open source inference models it has built, as well as the proprietary engine the company offers to help developers deploy the models.
Players like AMD and Intel are also working on optimization software layers for their hardware. For example, Intel has released OpenVINO, a free toolkit for optimizing deep learning models. Neural Magic, however, offers what it calls “recipes” that can be plugged into machine learning libraries like PyTorch to make models more sparse so they run faster on its engine, Stevens said.
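Neural Magic’s actual recipe format isn’t shown in the article, but the underlying idea — sparsifying a model so a sparsity-aware engine can skip the zeroed weights — can be sketched with PyTorch’s built-in pruning utilities. The toy model, layer choices, and 80% sparsity level below are illustrative assumptions, not Neural Magic’s method:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy network standing in for a real model; sizes are illustrative.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Magnitude pruning: zero the 80% of weights with the smallest absolute
# value in each Linear layer, analogous to what a per-layer sparsification
# "recipe" might specify.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")  # make the sparsity permanent

# Roughly 80% of the weight entries are now exactly zero, so a
# sparsity-aware inference engine can skip them entirely.
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
zeros = sum((p == 0).sum().item() for p in model.parameters() if p.dim() > 1)
print(f"sparsity: {zeros / total:.2f}")
```

The pruned model still runs as ordinary dense PyTorch; the speedup the article describes comes from pairing sparsity like this with an engine that exploits it on CPUs.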
The company’s open source offering launched quietly in February and now has upwards of 1,000 unique installations per week, according to Stevens, who took over as CEO this year. The traction got NEA excited enough to lead the latest investment and take a seat on Neural Magic’s board, Stevens said. Papadopoulos, the former CTO of Sun Microsystems, has done work at MIT on parallel data flow computing architectures and came to believe that hardware brings too much friction to inference, meaning companies like Nvidia won’t be able to own the market with GPU hardware alone.
Mage, led by CEO and cofounder Tommy Dang, targets a different bottleneck. After working with hundreds of product developers at Airbnb, Dang saw that those developers knew how AI could improve their product but had to rely on data science resources to implement their ideas. Data scientists are expensive to hire anywhere in the world.
“People who are working on user-facing features, like engineers and backend engineers – they can code but they didn’t go to school for machine learning or AI,” Dang said. “They definitely know what it is and what is useful, but they don’t have the expertise for that. Existing solutions aren’t designed and built for product developers. So, we provide a web-based tool that empowers those individuals to be able to build AI models – specifically a ranking use case.
“And we’ve seen that building ranking models in products is much in demand. Let’s say you have a lot of news on your home feed or you have a lot of products you want to sell. People need rankings to optimize that for their users. And usually machine learning and AI are well suited to do that.”
Use cases include increasing user engagement by ranking articles, posts, and comments on a user’s home feed, or increasing conversion by surfacing the most relevant products for a user to buy, Dang said.
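Mage’s internals aren’t described in the article, but the core of any ranking use case is the same: score each candidate item with a model, then order candidates by score. A minimal, dependency-free sketch, where the hand-written scoring function is a stand-in for a trained ranking model:

```python
def score(item, user):
    """Stand-in for a trained ranking model: a hand-written heuristic
    mixing item recency with the user's affinity for the item's topic."""
    affinity = user["topic_affinity"].get(item["topic"], 0.0)
    return 0.7 * affinity + 0.3 * item["recency"]

def rank(items, user):
    """Return items ordered best-first by model score."""
    return sorted(items, key=lambda item: score(item, user), reverse=True)

# Illustrative data: a user who strongly prefers sports content.
user = {"topic_affinity": {"sports": 0.9, "finance": 0.2}}
feed = [
    {"id": 1, "topic": "finance", "recency": 1.0},
    {"id": 2, "topic": "sports", "recency": 0.4},
    {"id": 3, "topic": "sports", "recency": 0.9},
]

ranked = rank(feed, user)
print([item["id"] for item in ranked])  # -> [3, 2, 1]
```

A product like Mage would replace the heuristic with a model trained on the customer’s own engagement data; the ranking step itself stays this simple.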
Mage works by first connecting to existing data sources, such as Amplitude or Snowflake. Once a user adds their data, Mage provides guided suggestions for cleaning and enriching it to maximize the model’s performance during training. Once training completes, product developers can consume the model’s predictions in real time via API requests, Dang said.