Category Archives: Startups

iOS gains new emoji, Showtime joins a pricier Paramount+, and Instagram launches Channels

Hey, TechCrunch besties. After a week in Korea and the Philippines, it’s great to be back in the States — and slightly more tan (i.e., burnt) than before. Massive thanks to Henry, who was forced to step in over the past two weeks thanks to my failing to realize that Korean Air does not offer in-flight Wi-Fi. Talk about a good sport.
If you’re wondering about Greg’s status, not to worry — he’s due to return from a well-deserved parental leave in a month and change. In the meantime, I’m here to nag you about TechCrunch’s upcoming headliner events.
TechCrunch Early Stage is fast approaching — it’s on April 20 in Boston this year, and it’ll host experts across the venture and tech landscape who’ll speak to solutions in getting a startup off the ground. (Also in Boston: City Spotlight, which kicks off February 27.) On the far horizon, there’s TechCrunch Disrupt (September 19–21), which promises to be an absolute blowout this year. Having taken a peek at the preliminary guest list, let me just say this: It won’t disappoint.
With those administrative bits out of the way, let’s get on with Week in Review. (If you want it in your inbox every Saturday, sign up here.) Here are the top stories from the past several days!
most read
Dashed ambitions: Tage exclusively reports that Dash CEO Prince Boakye Boampong has allegedly been temporarily suspended pending an investigation into financial impropriety at the company. Boampong, one of Africa’s best-known serial entrepreneurs, is reportedly accused of engaging in financial misreporting; sources tell TechCrunch that executives repeatedly concealed financials within the firm while laying off employees at will. Prior to Boampong’s alleged suspension, Dash had raised tens of millions in venture capital at a valuation of more than $200 million.
New iOS, new emoji: Apple released the iOS 16.4 developer beta, which brought with it the next set of emoji coming to iPhones. Originally unveiled during the draft phase last year, the emoji span categories like food and drink, activity, objects, animals and symbols. Sarah writes that among the highlights are variations on the heart emoji, pushing hand gestures and a “shaking face” emoji. Curious users can check out the new additions by enrolling in Apple’s Developer Program.
Pony up for Paramount: Ahead of the launch of “Paramount+ with Showtime,” a new TV streaming service bundle that’ll see Showtime integrated with Paramount+, Paramount announced that it would be increasing the price of its Paramount+ Premium tier from $9.99 per month to $11.99 per month. It’s not an unexpected move — Paramount CEO Bob Bakish telegraphed the plans in early December — but it could nonetheless put Paramount+ with Showtime at a disadvantage as it competes with Warner Bros. Discovery’s upcoming HBO Max/Discovery+ service.
Feishu is the new Slack: Feishu, ByteDance’s Slack-like workplace collaboration app, surpassed $100 million in annual recurring revenue last year, Rita writes. ByteDance’s heavy investment in Feishu is telling of the state of enterprise software in China. At a time when Silicon Valley investors are heralding product-led growth, software in China is still largely counting on sales, marketing and services to recruit users.
Channeling Instagram: Instagram launched a new broadcast chat feature this week called “Channels.” Aisha reports that it lets creators share public, one-to-many messages to directly engage with their followers. Channels support text, images, polls, reactions and more. Instagram is starting to test channels with select creators in the U.S. and plans to expand the feature in coming months.
Salesforce under pressure: Salesforce is looking for new ways to cut costs as activist investors put pressure on the company. This week, Salesforce implemented stricter performance measurements for engineering, with some salespeople being put under pressure to quit or succumb to harsh performance policies of their own. As Ron writes, it’s probably related to the fact that activist investors have been circling the company, undoubtedly pushing management to increase productivity and reduce expenditures.
Safety concerns dog Tesla: Tesla this week issued a recall of its Full Self-Driving (FSD) beta software, an advanced driver-assistance system that federal regulators say could allow vehicles to act unsafely around intersections. The recall, which affects over 362,000 vehicles, was motivated in part, Tesla disclosed, by concerns that FSD-driven vehicles might respond insufficiently to changes in posted speed limits. FSD beta software — from its name and Musk’s promises around its capabilities to its rollout and safety concerns — has been controversial, attracting scrutiny from regulatory agencies.
Snapping up users: Snapchat now has over 750 million monthly active users (MAUs). The company announced the milestone during its Investor Day on Thursday, Sarah reports. Snapchat said it sees a path to reaching over 1 billion people in the next two to three years, but whether it’ll actually achieve that remains to be seen. In any case, 750 million MAUs puts Snapchat ahead of Pinterest (450 million) but behind Facebook (2.96 billion).
A Tetris movie: Apple TV+ this week released the first trailer for its movie “Tetris,” based on the origin story of the popular puzzle video game. Starring Taron Egerton, who plays American video game salesman Henk Rogers, “Tetris” tells the story of Rogers and his mission to secure the distribution rights of the game. The movie will premiere at South by Southwest film festival in March, after which Apple will release it worldwide on Apple TV+ (on March 31).
audio
TechCrunch has a wonderful lineup of audio programming, in case you weren’t aware. In other words, we’ve got podcasts for days. This week on Equity, Mary Ann and Becca got on the mic to talk about Descope’s $53 million seed round, Phenomenal Ventures’ new fund and a Mexican neobank’s latest raise. On Found, Darrell and Becca talked with Alex Rappaport, the CEO and co-founder of ZwitterCo, which makes it practical for industries to recycle water and enhance product recovery with new filtration technology. And over at TechCrunch Live, the crew went live (not to be repetitive) with CFO-turned-CEO Christina Ross and her Mayfield Fund partner, Rajeev Batra, to talk about the story behind Ross’ company, Cube, and how it meets its customers where they’re at.
TechCrunch+

TC+ subscribers get access to in-depth commentary, analysis and surveys — which you know if you’re already a subscriber. If you’re not, consider signing up. Here are a few highlights from this week:
An egg, but not: Price parity with traditional foods is one of the main challenges for alternative protein startups. However, the avian flu, a shortage of cage-free eggs and a subsequent rise in prices in late 2022 seems to provide an “in” for alternative egg companies to show they can compete. Christine takes a deep dive.

Down but not out: Natasha M writes how an emerging class of founders is reminding the tech ecosystem how collapse can be an activator. Laid-off talent is flocking to build startups within all sectors, from climate to crypto to the creator economy. And they’re hoping to course-correct where their alma maters — both Big Tech companies and small upstarts alike — went wrong.

Is the tech jobs market as bad as it seems?: Ron investigates the state of the tech jobs market, finding that — while some numbers are down — it’s not a clear-cut matter. His top-level observation? Tech workers, especially those with specialized skills like engineering, data science, AI and cybersecurity, continue to be in demand as supply lags behind the number of open jobs.
iOS gains new emoji, Showtime joins a pricier Paramount+, and Instagram launches Channels by Kyle Wiggers originally published on TechCrunch

QuickVid uses AI to generate short-form videos, complete with voiceovers

Generative AI is coming for videos. A new website, QuickVid, combines several generative AI systems into a single tool for automatically creating short-form YouTube, Instagram, TikTok and Snapchat videos.
Given as little as a single word, QuickVid chooses a background video from a library, writes a script and keywords, overlays images generated by DALL-E 2 and adds a synthetic voiceover and background music from YouTube’s royalty-free music library. QuickVid’s creator, Daniel Habib, says that he’s building the service to help creators meet the “ever-growing” demand from their fans.
“By providing creators with tools to quickly and easily produce quality content, QuickVid helps creators increase their content output, reducing the risk of burnout,” Habib told TechCrunch in an email interview. “Our goal is to empower your favorite creator to keep up with the demands of their audience by leveraging advancements in AI.”
But depending on how they’re used, tools like QuickVid threaten to flood already-crowded channels with spammy and duplicative content. They also face potential backlash from creators who opt not to use the tools, whether because of cost ($10 per month) or on principle, yet might have to compete with a raft of new AI-generated videos.
Going after video
QuickVid launched on December 27, built in a matter of weeks by Habib, a self-taught developer who previously worked at Meta on Facebook Live and video infrastructure. It’s relatively bare bones at present — Habib says that more personalization options will arrive in January — but QuickVid can cobble together the components that make up a typical informational YouTube Short or TikTok video, including captions and even avatars.
It’s easy to use. First, a user enters a prompt describing the subject matter of the video they want to create. QuickVid uses the prompt to generate a script, leveraging the generative text powers of GPT-3. From keywords either extracted from the script automatically or entered manually, QuickVid selects a background video from the royalty-free stock media library Pexels and generates overlay images using DALL-E 2. It then outputs a voiceover via Google Cloud’s text-to-speech API — Habib says that users will soon be able to clone their voice — before combining all these elements into a video.
Image Credits: QuickVid
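Conceptually, what Habib describes is an orchestration of off-the-shelf generative services. The sketch below illustrates that flow in Python with stand-in stub functions — none of these names come from QuickVid’s actual code, and where the real service calls GPT-3, Pexels, DALL-E 2 and Google Cloud’s text-to-speech API, the stubs just return placeholder values:

```python
# Hypothetical sketch of QuickVid's pipeline. Function names are illustrative,
# not the real API; each stub stands in for an external generative service.

def generate_script(prompt):
    # Real service: GPT-3 turns the prompt into a narration script.
    return f"A short video about {prompt}."

def extract_keywords(script):
    # Naive stand-in for keyword extraction from the script.
    return [w.strip(".").lower() for w in script.split() if len(w) > 4]

def pick_background(keywords):
    # Real service: searches the Pexels royalty-free library.
    return f"pexels://{keywords[0]}" if keywords else "pexels://generic"

def generate_overlays(keywords):
    # Real service: DALL-E 2 generates overlay images per keyword.
    return [f"dalle2://{k}" for k in keywords[:3]]

def synthesize_voiceover(script):
    # Real service: Google Cloud text-to-speech reads the script aloud.
    return f"tts://{len(script)}-chars"

def assemble_video(prompt):
    # Each stage consumes the previous stage's output, so the only
    # required user input is the initial prompt.
    script = generate_script(prompt)
    keywords = extract_keywords(script)
    return {
        "script": script,
        "background": pick_background(keywords),
        "overlays": generate_overlays(keywords),
        "voiceover": synthesize_voiceover(script),
    }
```

The point is the shape of the pipeline rather than any individual model: the hard problem of generating footage is sidestepped by stitching stock video, generated images and synthetic audio around a generated script.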
See this video made with the prompt “Cats”:

https://techcrunch.com/wp-content/uploads/2022/12/img_5pg7k95x9ig2tofh7mkrr_cfr.mp4
Or this one:
https://techcrunch.com/wp-content/uploads/2022/12/img_61ighv4x55slq9582dbx_cfr.mp4
QuickVid certainly isn’t pushing the boundaries of what’s possible with generative AI. Both Meta and Google have showcased AI systems that can generate completely original clips given a text prompt. But QuickVid amalgamates existing AI to exploit the repetitive, templated format of B-roll-heavy short-form videos, getting around the problem of having to generate the footage itself.
“Successful creators have an extremely high-quality bar and aren’t interested in putting out content that they don’t feel is in their own voice,” Habib said. “This is the use case we’re focused on.”
High-quality bar or not, QuickVid’s videos are generally a mixed bag. The background videos tend to be a bit random or only tangentially related to the topic, which isn’t surprising given that QuickVid is currently limited to the Pexels catalog. The DALL-E 2-generated images, meanwhile, exhibit the limitations of today’s text-to-image tech, like garbled text and off proportions.
In response to my feedback, Habib said that QuickVid is “being tested and tinkered with daily.”
Copyright issues
According to Habib, QuickVid users retain the right to use the content they create commercially and have permission to monetize it on platforms like YouTube. But the copyright status of AI-generated content is … nebulous, at least presently. The U.S. Copyright Office recently moved to revoke copyright protection for an AI-generated comic, for example, saying copyrightable works require human authorship.
When asked how the decision might affect QuickVid, Habib said he believes that it only pertains to the “patentability” of AI-generated products and not the rights of creators to use and monetize their content. Creators, he pointed out, aren’t often submitting patents for videos and usually lean into the creator economy, letting other creators repurpose their clips to increase their own reach.
“Creators care about putting out high-quality content in their voice that will help grow their channel,” Habib said.
Another legal challenge on the horizon might affect QuickVid’s DALL-E 2 integration — and, by extension, the site’s ability to generate image overlays. Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating system, to regurgitate sections of licensed code without providing credit. (Copilot was co-developed by OpenAI and GitHub, which Microsoft owns.) The case has implications for generative art AI like DALL-E 2, which has similarly been found to copy and paste images from the data on which it was trained.
Habib isn’t concerned, arguing that the generative AI genie’s out of the bottle. “If another lawsuit showed up and OpenAI disappeared tomorrow, there are several alternatives that could power QuickVid,” he said, referring to the open source DALL-E 2-like system Stable Diffusion. QuickVid is already testing Stable Diffusion for generating avatar pics.
Moderation and spam
Aside from the legal dilemmas, QuickVid might soon have a moderation problem on its hands. Generative AI has well-known toxicity and factual accuracy problems, even with the filters and techniques OpenAI has implemented to mitigate them. GPT-3 spouts misinformation, particularly about recent events that fall beyond the boundaries of its knowledge base. And ChatGPT, a fine-tuned offspring of GPT-3, has been shown to use sexist and racist language.
That’s worrisome, particularly for people who’d use QuickVid to create informational videos. In a quick test, I had my partner — who’s far more creative than me, particularly in this area — enter a few offensive prompts to see what QuickVid would generate. To QuickVid’s credit, obviously problematic prompts like “Jewish new world order” and “9/11 conspiracy theory” didn’t yield toxic scripts. But for “Critical race theory indoctrinating students,” QuickVid generated a video implying that critical race theory could be used to brainwash schoolchildren.
See:

https://techcrunch.com/wp-content/uploads/2022/12/img_e4wba39us0vqtc8051491_cfr.mp4
Habib says that he’s relying on OpenAI’s filters to do most of the moderation work and asserts that it’s incumbent on users to manually review every video created by QuickVid to ensure “everything is within the boundaries of the law.”
“As a general rule, I believe people should be able to express themselves and create whatever content they want,” Habib said.
That apparently includes spammy content. Habib makes the case that the video platforms’ algorithms, not QuickVid, are best positioned to determine the quality of a video, and that people who produce low-quality content “are only damaging their own reputations.” The reputational damage will naturally disincentivize people from creating mass spam campaigns with QuickVid, he says.
“If people don’t want to watch your video, then you won’t receive distribution on platforms like YouTube,” he added. “Producing low-quality content will also make people look at your channel in a negative light.”
But it’s instructive to look at ad agencies like Fractl, which in 2019 used an AI system called Grover to generate an entire site of marketing materials — reputation be damned. In an interview with The Verge, Fractl partner Kristin Tynski said that she foresaw generative AI enabling “a massive tsunami of computer-generated content across every niche imaginable.”
In any case, video-sharing platforms like TikTok and YouTube haven’t had to contend with moderating AI-generated content on a massive scale. Deepfakes — synthetic videos that replace an existing person with someone else’s likeness — began to populate platforms like YouTube several years ago, driven by tools that made deepfaked footage easier to produce. But unlike even the most convincing deepfakes today, the types of videos QuickVid creates aren’t obviously AI-generated in any way.
Google Search’s policy on AI-generated text might be a preview of what’s to come in the video domain. Google doesn’t treat synthetic text differently from human-written text where it concerns search rankings but takes actions on content that’s “intended to manipulate search rankings and not help users.” That includes content stitched together or combined from different web pages that “[doesn’t] add sufficient value” as well as content generated through purely automated processes, both of which might apply to QuickVid.
In other words, AI-generated videos might not be banned from platforms outright should they take off in a major way but rather simply become the cost of doing business. That isn’t likely to allay the fears of experts who believe that platforms like TikTok are becoming a new home for misleading videos, but — as Habib said during the interview — “there is no stopping the generative AI revolution.”
QuickVid uses AI to generate short-form videos, complete with voiceovers by Kyle Wiggers originally published on TechCrunch

Twelve Labs lands $12M for AI that understands the context of videos

To Jae Lee, a data scientist by training, it never made sense that video — which has become an enormous part of our lives, what with the rise of platforms like TikTok, Vimeo and YouTube — was difficult to search across due to the technical barriers posed by context understanding. Searching the titles, descriptions and tags of videos was always easy enough, requiring no more than a basic algorithm. But searching within videos for specific moments and scenes was long beyond the capabilities of tech, particularly if those moments and scenes weren’t labeled in an obvious way.
To solve this problem, Lee, alongside friends from the tech industry, built a cloud service for video search and understanding. It became Twelve Labs, which went on to raise $17 million in venture capital — $12 million of which came from a seed extension round that closed today. Radical Ventures led the extension with participation from Index Ventures, WndrCo, Spring Ventures, Weights & Biases CEO Lukas Biewald and others, Lee told TechCrunch in an email.
“The vision of Twelve Labs is to help developers build programs that can see, listen, and understand the world as we do by giving them the most powerful video understanding infrastructure,” Lee said.
A demo of the Twelve Labs platform’s capabilities. Image Credits: Twelve Labs
Twelve Labs, which is currently in closed beta, uses AI to attempt to extract “rich information” from videos such as movement and actions, objects and people, sound, text on screen, and speech to identify the relationships between them. The platform converts these various elements into mathematical representations called “vectors” and forms “temporal connections” between frames, enabling applications like video scene search.
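The scene-search capability rests on a standard idea: embed scenes and queries as vectors, then rank scenes by similarity. Here’s a toy illustration with hand-made three-dimensional vectors and cosine similarity — the scene names and numbers are invented for the example, and Twelve Labs’ actual embeddings would come from learned multimodal models at far higher dimensionality:

```python
import math

# Toy vector-based scene search: each scene is represented by an embedding
# vector, and a query embedding is matched against them by cosine similarity.

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search_scenes(query_vec, scene_vecs, top_k=2):
    # Rank scenes by similarity to the query and return the top matches.
    ranked = sorted(
        scene_vecs.items(),
        key=lambda kv: cosine(query_vec, kv[1]),
        reverse=True,
    )
    return [scene for scene, _ in ranked[:top_k]]

# Hand-made embeddings standing in for model output.
scenes = {
    "goal celebration": [0.9, 0.1, 0.0],
    "halftime interview": [0.1, 0.8, 0.2],
    "crowd shots": [0.4, 0.3, 0.5],
}

# A query like "player scores a goal" would embed near the first scene.
print(search_scenes([0.85, 0.15, 0.05], scenes, top_k=1))  # ['goal celebration']
```

The “temporal connections” Lee mentions extend this per-frame idea across time, so a search can return a moment within a video rather than just the video itself.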
“As a part of achieving the company’s vision to help developers create intelligent video applications, the Twelve Labs team is building ‘foundation models’ for multimodal video understanding,” Lee said. “Developers will be able to access these models through a suite of APIs, performing not only semantic search but also other tasks such as long-form video ‘chapterization,’ summary generation and video question and answering.”
Google takes a similar approach to video understanding with its MUM AI system, which the company uses to power video recommendations across Google Search and YouTube by picking out subjects in videos (e.g., “acrylic painting materials”) based on the audio, text and visual content. But while the tech might be comparable, Twelve Labs is one of the first vendors to market with it; Google has opted to keep MUM internal, declining to make it available through a public-facing API.
That being said, Google, as well as Microsoft and Amazon, offer services (i.e., Google Cloud Video AI, Azure Video Indexer and AWS Rekognition) that recognize objects, places and actions in videos and extract rich metadata at the frame level. There’s also Reminiz, a French computer vision startup that claims to be able to index any type of video and add tags to both recorded and live-streamed content. But Lee asserts that Twelve Labs is sufficiently differentiated — in part because its platform allows customers to fine-tune the AI to specific categories of video content.
Mockup of API for fine-tuning the model to work better with salad-related content. Image Credits: Twelve Labs
“What we’ve found is that narrow AI products built to detect specific problems show high accuracy in their ideal scenarios in a controlled setting, but don’t scale so well to messy real-world data,” Lee said. “They act more as a rule-based system, and therefore lack the ability to generalize when variances occur. We also see this as a limitation rooted in lack of context understanding. Understanding of context is what gives humans the unique ability to make generalizations across seemingly different situations in the real world, and this is where Twelve Labs stands alone.”
Beyond search, Lee says Twelve Labs’ technology can drive things like ad insertion and content moderation, intelligently figuring out, for example, which videos showing knives are violent versus instructional. It can also be used for media analytics and real-time feedback, he says, and to automatically generate highlight reels from videos.
A little over a year after its founding (March 2021), Twelve Labs has paying customers — Lee wouldn’t reveal how many exactly — and a multiyear contract with Oracle to train AI models using Oracle’s cloud infrastructure. Looking ahead, the startup plans to invest in building out its tech and expanding its team. (Lee declined to reveal the current size of Twelve Labs’ workforce, but LinkedIn data shows it’s roughly 18 people.)
“For most companies, despite the huge value that can be attained through large models, it really does not make sense for them to train, operate and maintain these models themselves. By leveraging a Twelve Labs platform, any organization can leverage powerful video understanding capabilities with just a few intuitive API calls,” Lee said. “The future direction of AI innovation is heading straight towards multimodal video understanding, and Twelve Labs is well positioned to push the boundaries even further in 2023.”
Twelve Labs lands $12M for AI that understands the context of videos by Kyle Wiggers originally published on TechCrunch

Mozilla acquires Active Replica to build on its metaverse vision

An automated status updater for Slack isn’t the only thing Mozilla acquired this week. On Wednesday, the company announced that it snatched up Active Replica, a Vancouver-based startup developing a “web-based metaverse.”
According to Mozilla SVP Imo Udom, Active Replica will support Mozilla’s ongoing work with Hubs, the latter’s VR chatroom service and open source project. Specifically, he sees the Active Replica team working on personalized subscription tiers, improving the onboarding experience and introducing new interaction capabilities in Hubs.
“Together, we see this as a key opportunity to bring even more innovation and creativity to Hubs than we could alone,” Udom said in a blog post. “We will benefit from their unique experience and ability to create amazing experiences that help organizations use virtual spaces to drive impact. They will benefit from our scale, our talent, and our ability to help bring their innovations to the market faster.”
Active Replica was founded in 2020 by Jacob Ervin and Valerian Denis. Ervin is a software engineer by trade, having held roles at AR/VR startups Metaio, Liminal AR and Occipital. Denis has a history in project management — he worked for VR firms including BackLight, which specializes in location-based and immersive VR experiences for brands.
With Active Replica, Ervin and Denis sought to build a platform for virtual events and meetings built on top of Mozilla’s Hubs project. Active Replica sold virtual event packages that included venue design, event planning, live entertainment and tech support.
Prior to the acquisition, Active Replica hadn’t publicly announced outside funding. Ervin and Denis have assumed new jobs at Mozilla within the past several weeks, now working as senior engineering manager and product lead, respectively.
“Mozilla has long advocated for a healthier internet and has been an inspiration to us in its dedication and contributions to the open web. By joining forces with the Mozilla Hubs team, we’re able to further expand on our mission and inspire a new generation of creators, connectors, and builders,” Ervin and Denis said in a statement. “Active Replica will continue to work with our existing customers, partners and community.”
Mozilla launched Hubs in 2018, which it pitched at the time as an “experiment” in “immersive social experiences.” Hubs provides the dev tools and infrastructure necessary to allow users to visit a portal through any browser and collaborate with others in a VR environment. Adhering to web standards, Hubs supports all the usual headsets and goggles (e.g. Oculus Rift, HTC Vive) while remaining open to those without specialized VR hardware on desktops and smartphones.
Hubs recently expanded with the launch of a $20-per-month service that did away with the previously free service, but introduced account management tools, privacy and security features. According to Mozilla, the plan is to roll out additional tiers and reintroduce a free version in the future, along with kits to create custom spaces, avatar and identity options and integrations with existing collaboration tools.
Mozilla’s forays into the metaverse have been met with mixed results. While Hubs is alive and kicking, as evidenced by the Active Replica acquisition, Mozilla shuttered Firefox Reality, its attempt to create a full-featured browser for AR and VR headsets, in February 2022. In explaining why it decided to close up Firefox Reality, Mozilla said that while it does help develop new technologies, like WebVR and WebAR, it doesn’t always continue to host and incubate those technologies long-term.
Mozilla acquires Active Replica to build on its metaverse vision by Kyle Wiggers originally published on TechCrunch

Mark Cuban-backed streaming app Fireside acquires Stremium to bring live, interactive shows to your TV

Mark Cuban-backed streaming app Fireside, which today offers podcasters and other creators a way to host interactive, live shows with audience engagement, will soon expand to the TV’s big screen. Variety reported, and Fireside confirmed, it’s acquired the open streaming TV platform Stremium, which will allow Fireside’s shows to become available to a range of connected TV devices, including Amazon Fire TV, Roku, smart TVs and others.
Deal terms were not disclosed. Cuban retweeted Variety’s reporting but made no other public comment.
A company spokesperson confirmed the deal to TechCrunch, noting it was for a combination of IP and talent.
“Fireside has acquired all of Stremium including its full team and intellectual property,” the spokesperson said. “The company is the first interactive web3 streaming platform and the acquisition will help Fireside accelerate delivering on being the only platform that turns creators, celebrities, brands, and IP owners into the studio, networks, and streaming services of the future. Expect other major announcements coming soon on this front,” they added.
Launched just over a year ago, Fireside arrived on the heels of the pandemic-fueled demand for startups offering live entertainment as well as a growing number of startups catering to the creator economy.
Despite some early — and erroneous — comparisons between Fireside and other live audio platforms like Twitter Spaces or Clubhouse, the startup gained traction due to a differentiated feature set that also prioritizes video content. Shows on Fireside’s platform could be streamed live to its app, recorded, saved, or even simulcast to other social networks. The app additionally includes audience engagement tools and other features to aid creators with promotion, editing, measurement, distribution, monetization, and audience growth, all of which are part of Fireside’s end-to-end content production experience. More recently, the company had been exploring web3 technologies, including NFTs.
Co-founded by Cuban, early Yammer employee Mike Ihbe, and former Googler, YouTuber and Node co-founder Falon Fatemi, who sold her last company to SugarCRM, Fireside has managed to attract some high-profile creators like Jay Leno, Michael Dell, Melissa Rivers, Craig Kilborn, and screenwriter and Entourage creator Doug Ellin over the past year.
In a letter to Fireside investors published by Variety, Fatemi shared that the Stremium acquisition would help Fireside to offer a “second screen experience where the audience can use their phones to engage and interact in real-time while watching on their TVs.”
“Imagine watching a live cookalong show with your favorite chef simultaneously on your TV and your phone where you can interact and get invited to talk directly to them and even show them what you are cooking from the palm of your hand,” Fatemi explained. Plus, Stremium’s infrastructure would allow creators to upload, publish, program and distribute their live shows across both mobile and TV, she added. (Stremium confirmed to us the letter’s accuracy.)
TechCrunch this February reported Fireside was in talks to raise a $25 million Series A that valued its business at $125 million. That round has since closed, but Fireside hasn’t yet made a formal announcement about the raise, its investors, or its valuation. We understand this may be because Fireside is still adding some additional strategic investors to the deal, and it plans to detail the fundraise soon. Of course, the funding may have helped pave the way for Fireside to make this new acquisition.
Other investors in Fireside include the Chainsmokers, HBSE, Goodwater, Animal Capital, and NFL stars Larry Fitzgerald and Kelvin Beachum and former NBA star Baron Davis, in addition to Cuban. Ahead of its Series A, Fireside had raised around $8 million.
Stremium had been developing a service that allowed consumers to aggregate all their favorite channels using their “TV Everywhere” credentials and use a cloud DVR instead of downloading separate streaming apps. It also included a selection of free streaming channels. But the service faced an increasingly competitive landscape where there are now numerous ways to watch free streaming content, like Tubi, Pluto TV, The Roku Channel, Freevee (formerly IMDb TV), Plex, and more. Meanwhile, cord-cutting is accelerating, leaving fewer people with cable TV logins for Stremium to market its services to.
The Stremium website is now pointing visitors to Fireside and confirms the acquisition. Fireside is aiming to release its TV product sometime next year as a result of the deal.
Mark Cuban-backed streaming app Fireside acquires Stremium to bring live, interactive shows to your TV by Sarah Perez originally published on TechCrunch