Deep Render believes AI holds the key to more efficient video compression

Chris Besenbruch, CEO of Deep Render, sees many problems with the way video compression standards are developed today. He thinks they aren’t advancing quickly enough, bemoans the fact that they’re plagued with legal uncertainty and decries their reliance on specialized hardware for acceleration.

“The codec development process is broken,” Besenbruch said in an interview with TechCrunch ahead of Disrupt, where Deep Render is participating in the Disrupt Battlefield 200. “In the compression industry, there is a significant challenge of finding a new way forward and searching for new innovations.”

Seeking a better way, Besenbruch co-founded Deep Render with Arsalan Zafar, whom he met at Imperial College London. At the time, Besenbruch was studying computer science and machine learning. He and Zafar collaborated on a research project involving distributing terabytes of video across a network, during which they say they experienced the shortcomings of compression technology firsthand.

The last time TechCrunch covered Deep Render, the startup had just closed a £1.6 million seed round ($1.81 million) led by Pentech Ventures with participation from Speedinvest. In the roughly two years since then, Deep Render has raised an additional several million dollars from existing investors, bringing its total raised to $5.7 million.

“We thought to ourselves, if the internet pipes are difficult to extend, the only thing we can do is make the data that flows through the pipes smaller,” Besenbruch said. “Hence, we decided to fuse machine learning and AI with compression technology to develop a fundamentally new way of compressing data, getting significantly better image and video compression ratios.”

Deep Render isn’t the first to apply AI to video compression. Alphabet’s DeepMind adapted a machine learning algorithm originally developed to play board games to the problem of compressing YouTube videos, leading to a 4% reduction in the amount of data the video-sharing service needs to stream to users. Elsewhere, there’s startup WaveOne, which claims its machine learning-based video codec outperforms all existing standards across popular quality metrics.

But Deep Render’s solution is platform-agnostic. To create it, Besenbruch says the company compiled a data set of over 10 million video sequences on which it trained algorithms to learn to compress video data efficiently. Deep Render used a combination of on-premises and cloud hardware for the training, with the former comprising over a hundred GPUs.
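Deep Render hasn’t published details of its models, but learned codecs of this kind are typically trained as autoencoders optimized for a rate-distortion trade-off: an encoder squeezes frames into a compact latent representation, a decoder reconstructs them, and the loss balances reconstruction quality against the size of the latent. The toy PyTorch sketch below illustrates that general idea only; the architecture and loss here are hypothetical and do not reflect Deep Render’s actual system.

```python
# Minimal, illustrative sketch of rate-distortion training for a learned codec.
# All layer sizes and the rate proxy are made up for demonstration purposes.
import torch
import torch.nn as nn

class TinyCodec(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder maps frames to a smaller latent; decoder reconstructs them.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 64, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        latent = self.encoder(x)
        return self.decoder(latent), latent

model = TinyCodec()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(8, 3, 64, 64)  # stand-in for a batch of video frames

opt.zero_grad()
recon, latent = model(frames)
distortion = nn.functional.mse_loss(recon, frames)  # reconstruction error
rate_proxy = latent.abs().mean()                    # crude stand-in for a bitrate term
loss = distortion + 0.01 * rate_proxy               # rate-distortion trade-off (weight is arbitrary)
loss.backward()
opt.step()
```

In a real learned codec, the rate term comes from an entropy model over quantized latents rather than a simple penalty, but the training loop follows the same pattern.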

Deep Render claims the resulting compression standard performs five times better than HEVC, a widely used codec, and can run in real time on mobile devices with a dedicated AI accelerator chip (e.g., the Apple Neural Engine in modern iPhones). Besenbruch says the company is in talks with three large tech firms — all with market caps over $300 billion — about paid pilots, though he declined to share names.
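To put that claim in everyday terms, here is a back-of-the-envelope calculation, assuming “five times better” is read as comparable quality at roughly one-fifth the bitrate (an interpretation for illustration, not Deep Render’s published definition, and the bitrates below are hypothetical):

```python
# Rough illustration of bandwidth savings under the assumption above.
hevc_bitrate_mbps = 16.0                      # hypothetical 4K HEVC stream
learned_bitrate_mbps = hevc_bitrate_mbps / 5  # same quality at ~1/5 the bitrate (assumed)

def gb_per_hour(mbps: float) -> float:
    # Mbit/s -> gigabytes per hour
    return mbps * 3600 / 8 / 1000

hours_watched = 2
saved_gb = (gb_per_hour(hevc_bitrate_mbps) - gb_per_hour(learned_bitrate_mbps)) * hours_watched
print(f"Data saved over {hours_watched} h: {saved_gb:.1f} GB")
# -> roughly 11.5 GB less data for two hours of viewing, under these assumptions
```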

Eddie Anderson, a founding partner at Pentech and board member at Deep Render, shared via email: “Deep Render’s machine-learning approach to codecs completely disrupts an established market. Not only is it a software route to market, but their [compression] performance is significantly better than the current state of the art. As bandwidth demands continue to increase, their solution has the potential to drive vastly improved commercial performance for current media owners and distributors.”

Deep Render currently employs 20 people. By the end of 2023, Besenbruch expects that number will more than triple to 62.
