Startup secures $13M to better train deep learning models

Artificial intelligence startup Run:AI secured $13 million in funding this month for software that speeds up the training of deep learning models, the company announced April 3.

Run:AI, which is based in Tel Aviv, Israel, created a high-performance compute virtualization layer for deep learning that speeds up the training of neural network models, according to a release. Right now, researchers typically train models by running deep learning workloads on a number of graphics processing units (GPUs), which can run continuously for days or weeks on pricey computers.
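For readers unfamiliar with that workflow, the sketch below shows what a bare-bones multi-GPU training loop might look like in PyTorch. It is a generic illustration, not Run:AI code; the model, data and hyperparameters are placeholders.

    # Illustrative multi-GPU training loop (PyTorch); the model and data are
    # placeholders, not anything specific to Run:AI.
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # A toy network standing in for a real deep learning model.
    model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))
    if torch.cuda.device_count() > 1:
        # Replicate the model across every GPU visible on this machine.
        model = nn.DataParallel(model)
    model = model.to(device)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # In practice a loop like this runs over real data for days or weeks.
    for step in range(1000):
        inputs = torch.randn(64, 1024, device=device)          # placeholder batch
        targets = torch.randint(0, 10, (64,), device=device)   # placeholder labels
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()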

“Traditional computing uses virtualization to help many users or processes share one physical resource efficiently,” Omri Geller, co-founder and CEO of Run:AI, said in the release. “Virtualization tries to be generous. But a deep learning workload is essentially selfish since it requires the opposite—it needs the full computing power of multiple physical resources for a single workload, without holding anything back.

“Traditional computing software just can’t satisfy the resource requirements for deep learning workloads.”

Run:AI’s software, on the other hand, creates a compute abstraction layer that automatically analyzes the computational characteristics of workloads, using graph-based parallel computing algorithms to eliminate bottlenecks and optimize how those workloads run. It automatically allocates resources and runs the workloads, making deep learning experiments run faster and lowering the costs associated with training AI. According to the company, its solution will enable the development of “huge” AI models.
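The release does not detail those algorithms, but the basic idea of pooling GPUs and handing them out to workloads on demand can be sketched briefly. The toy scheduler below is a hypothetical illustration of that concept only; it is not Run:AI’s implementation.

    # Toy illustration of pooling GPUs and allocating them to jobs on demand.
    # A simplified sketch of the general concept, not Run:AI's software.
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        gpus_needed: int  # whole GPUs the workload asks for

    class GpuPool:
        def __init__(self, total_gpus: int):
            self.free = total_gpus
            self.running = {}   # job name -> GPUs held
            self.queue = []     # jobs waiting for capacity

        def submit(self, job: Job):
            """Start the job if enough GPUs are free; otherwise queue it."""
            if job.gpus_needed <= self.free:
                self.free -= job.gpus_needed
                self.running[job.name] = job.gpus_needed
            else:
                self.queue.append(job)

        def finish(self, name: str):
            """Release a finished job's GPUs and try to start queued work."""
            self.free += self.running.pop(name)
            still_waiting = []
            for job in self.queue:
                if job.gpus_needed <= self.free:
                    self.free -= job.gpus_needed
                    self.running[job.name] = job.gpus_needed
                else:
                    still_waiting.append(job)
            self.queue = still_waiting

    pool = GpuPool(total_gpus=8)
    pool.submit(Job("resnet-train", 4))
    pool.submit(Job("bert-finetune", 2))
    pool.submit(Job("hyperparam-sweep", 4))   # waits until capacity frees up
    pool.finish("resnet-train")               # the queued sweep now starts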

Run:AI received $3 million from TLV Partners in its seed round and an additional $10 million in a Series A round led by Haim Sadger’s S Capital and TLV Partners.

“Executing deep neural network workloads across multiple machines is a constantly moving target, requiring recalculations for each model and iteration based on availability of resources,” Rona Segev-Gal, managing partner of TLV Partners, said in the release. “Run:AI determines the most efficient and cost-effective way to run a deep learning training workload, taking into account the network bandwidth, compute resources, cost, configurations and the data pipeline and size. We’ve seen many AI companies in recent years, but Omri, Ronen and Meir’s approach blew our mind.”
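As a rough illustration of the trade-off Segev-Gal describes, the snippet below scores hypothetical cluster configurations by estimated training time and cost. The scoring formula and all of the numbers are invented for the example; they are not Run:AI’s model.

    # Hypothetical cost model for choosing where to run a training job.
    # Inputs and scoring formula are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class ClusterOption:
        name: str
        gpus: int
        gpu_hourly_cost: float    # dollars per GPU-hour
        interconnect_gbps: float  # bandwidth between GPUs/nodes

    def estimate(option: ClusterOption, single_gpu_hours: float):
        """Very rough estimate: scaling efficiency drops as bandwidth gets scarce."""
        efficiency = min(1.0, option.interconnect_gbps / (option.gpus * 10.0))
        hours = single_gpu_hours / (option.gpus * efficiency)
        cost = hours * option.gpus * option.gpu_hourly_cost
        return hours, cost

    options = [
        ClusterOption("4x on-prem GPUs", 4, 0.0, 100.0),
        ClusterOption("16x cloud GPUs", 16, 2.5, 100.0),
    ]
    for opt in options:
        hours, cost = estimate(opt, single_gpu_hours=200.0)
        print(f"{opt.name}: ~{hours:.0f} h, ~${cost:.0f}")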

""

