Category Archives: AI


QuickVid uses AI to generate short-form videos, complete with voiceovers

Generative AI is coming for videos. A new website, QuickVid, combines several generative AI systems into a single tool for automatically creating short-form YouTube, Instagram, TikTok and Snapchat videos.
Given as little as a single word, QuickVid chooses a background video from a library, writes a script and keywords, overlays images generated by DALL-E 2 and adds a synthetic voiceover and background music from YouTube’s royalty-free music library. QuickVid’s creator, Daniel Habib, says that he’s building the service to help creators meet the “ever-growing” demand from their fans.
“By providing creators with tools to quickly and easily produce quality content, QuickVid helps creators increase their content output, reducing the risk of burnout,” Habib told TechCrunch in an email interview. “Our goal is to empower your favorite creator to keep up with the demands of their audience by leveraging advancements in AI.”
But depending on how they’re used, tools like QuickVid threaten to flood already-crowded channels with spammy and duplicative content. They also face potential backlash from creators who opt not to use the tools, whether because of cost ($10 per month) or on principle, yet might have to compete with a raft of new AI-generated videos.
Going after video
QuickVid, which Habib, a self-taught developer who previously worked at Meta on Facebook Live and video infrastructure, built in a matter of weeks, launched on December 27. It’s relatively bare bones at present — Habib says that more personalization options will arrive in January — but QuickVid can cobble together the components that make up a typical informational YouTube Short or TikTok video, including captions and even avatars.
It’s easy to use. First, a user enters a prompt describing the subject matter of the video they want to create. QuickVid uses the prompt to generate a script, leveraging the generative text powers of GPT-3. From keywords either extracted from the script automatically or entered manually, QuickVid selects a background video from the royalty-free stock media library Pexels and generates overlay images using DALL-E 2. It then outputs a voiceover via Google Cloud’s text-to-speech API — Habib says that users will soon be able to clone their voice — before combining all these elements into a video.
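The pipeline described above can be sketched in a few lines. This is an illustrative mock-up, not QuickVid's actual code: every function name and the naive keyword heuristic are assumptions, and the external services (GPT-3, Pexels, DALL-E 2, Google Cloud's text-to-speech) are stubbed with placeholders.

```python
# Hypothetical sketch of a QuickVid-style orchestration pipeline.
# All function names, data shapes and heuristics are illustrative assumptions.

def generate_script(prompt: str) -> str:
    # A real system would call a text model such as GPT-3; stubbed here.
    return f"Here are three surprising facts about {prompt}."

def extract_keywords(script: str) -> list[str]:
    # Naive keyword extraction: keep longer words, drop common stopwords.
    stopwords = {"here", "are", "three", "about", "the", "a"}
    return [w.strip(".,").lower() for w in script.split()
            if len(w) > 4 and w.lower() not in stopwords]

def pick_background(keywords: list[str]) -> str:
    # A real system would query a stock library (e.g. Pexels) by keyword.
    return f"stock/{keywords[0]}.mp4" if keywords else "stock/default.mp4"

def synthesize_voiceover(script: str) -> bytes:
    # A real system would call a TTS API (e.g. Google Cloud text-to-speech).
    return script.encode("utf-8")

def assemble_video(prompt: str) -> dict:
    # Orchestrate the stages: script -> keywords -> background + voiceover.
    script = generate_script(prompt)
    keywords = extract_keywords(script)
    return {
        "script": script,
        "background": pick_background(keywords),
        "voiceover_bytes": len(synthesize_voiceover(script)),
    }

print(assemble_video("cats")["background"])
```

A production version would replace each stub with a real API call and add a final compositing step (overlaying DALL-E 2 images and music onto the background clip); the point here is just the staged hand-off between generative components.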
Image Credits: QuickVid
See this video made with the prompt “Cats”:

https://techcrunch.com/wp-content/uploads/2022/12/img_5pg7k95x9ig2tofh7mkrr_cfr.mp4
Or this one:
https://techcrunch.com/wp-content/uploads/2022/12/img_61ighv4x55slq9582dbx_cfr.mp4
QuickVid certainly isn’t pushing the boundaries of what’s possible with generative AI. Both Meta and Google have showcased AI systems that can generate completely original clips given a text prompt. But QuickVid amalgamates existing AI to exploit the repetitive, templated format of B-roll-heavy short-form videos, getting around the problem of having to generate the footage itself.
“Successful creators have an extremely high-quality bar and aren’t interested in putting out content that they don’t feel is in their own voice,” Habib said. “This is the use case we’re focused on.”
Be that as it may, QuickVid’s videos are a mixed bag quality-wise. The background clips tend to be random or only tangentially related to the topic, which isn’t surprising given that QuickVid is currently limited to the Pexels catalog. The DALL-E 2-generated images, meanwhile, exhibit the limitations of today’s text-to-image tech, like garbled text and off proportions.
In response to my feedback, Habib said that QuickVid is “being tested and tinkered with daily.”
Copyright issues
According to Habib, QuickVid users retain the right to use the content they create commercially and have permission to monetize it on platforms like YouTube. But the copyright status around AI-generated content is … nebulous, at least presently. The U.S. Copyright Office recently moved to revoke copyright protection for an AI-generated comic, for example, saying copyrightable works require human authorship.
When asked how the decision might affect QuickVid, Habib said he believes that it pertains only to the “patentability” of AI-generated products and not to the rights of creators to use and monetize their content. Creators, he pointed out, rarely file patents for videos and usually lean into the creator economy, letting other creators repurpose their clips to increase their own reach.
“Creators care about putting out high-quality content in their voice that will help grow their channel,” Habib said.
Another legal challenge on the horizon might affect QuickVid’s DALL-E 2 integration — and, by extension, the site’s ability to generate image overlays. Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating system, to regurgitate sections of licensed code without providing credit. (Copilot was co-developed by OpenAI and GitHub, which Microsoft owns.) The case has implications for generative art AI like DALL-E 2, which similarly has been found to copy and paste from the dataset on which it was trained (i.e., images).
Habib isn’t concerned, arguing that the generative AI genie’s out of the bottle. “If another lawsuit showed up and OpenAI disappeared tomorrow, there are several alternatives that could power QuickVid,” he said, referring to the open source DALL-E 2-like system Stable Diffusion. QuickVid is already testing Stable Diffusion for generating avatar pics.
Moderation and spam
Aside from the legal dilemmas, QuickVid might soon have a moderation problem on its hands. Generative AI has well-known toxicity and factual accuracy problems that the filters and techniques OpenAI has implemented only partially mitigate. GPT-3 spouts misinformation, particularly about recent events, which are beyond the boundaries of its knowledge base. And ChatGPT, a fine-tuned offspring of GPT-3, has been shown to use sexist and racist language.
That’s worrisome, particularly for people who’d use QuickVid to create informational videos. In a quick test, I had my partner — who’s far more creative than me, particularly in this area — enter a few offensive prompts to see what QuickVid would generate. To QuickVid’s credit, obviously problematic prompts like “Jewish new world order” and “9/11 conspiracy theory” didn’t yield toxic scripts. But for “Critical race theory indoctrinating students,” QuickVid generated a video implying that critical race theory could be used to brainwash schoolchildren.
See:

https://techcrunch.com/wp-content/uploads/2022/12/img_e4wba39us0vqtc8051491_cfr.mp4
Habib says that he’s relying on OpenAI’s filters to do most of the moderation work and asserts that it’s incumbent on users to manually review every video created by QuickVid to ensure “everything is within the boundaries of the law.”
“As a general rule, I believe people should be able to express themselves and create whatever content they want,” Habib said.
That apparently includes spammy content. Habib makes the case that the video platforms’ algorithms, not QuickVid, are best positioned to determine the quality of a video, and that people who produce low-quality content “are only damaging their own reputations.” The reputational damage will naturally disincentivize people from creating mass spam campaigns with QuickVid, he says.
“If people don’t want to watch your video, then you won’t receive distribution on platforms like YouTube,” he added. “Producing low-quality content will also make people look at your channel in a negative light.”
But it’s instructive to look at ad agencies like Fractl, which in 2019 used an AI system called Grover to generate an entire site of marketing materials — reputation be damned. In an interview with The Verge, Fractl partner Kristin Tynski said that she foresaw generative AI enabling “a massive tsunami of computer-generated content across every niche imaginable.”
In any case, video-sharing platforms like TikTok and YouTube haven’t had to contend with moderating AI-generated content on a massive scale. Deepfakes — synthetic videos that replace an existing person with someone else’s likeness — began to populate platforms like YouTube several years ago, driven by tools that made deepfaked footage easier to produce. But unlike even the most convincing deepfakes today, the types of videos QuickVid creates aren’t obviously AI-generated in any way.
Google Search’s policy on AI-generated text might be a preview of what’s to come in the video domain. Google doesn’t treat synthetic text differently from human-written text where it concerns search rankings but takes actions on content that’s “intended to manipulate search rankings and not help users.” That includes content stitched together or combined from different web pages that “[doesn’t] add sufficient value” as well as content generated through purely automated processes, both of which might apply to QuickVid.
In other words, AI-generated videos might not be banned from platforms outright should they take off in a major way but rather simply become the cost of doing business. That isn’t likely to allay the fears of experts who believe that platforms like TikTok are becoming a new home for misleading videos, but — as Habib said during the interview — “there is no stopping the generative AI revolution.”
QuickVid uses AI to generate short-form videos, complete with voiceovers by Kyle Wiggers originally published on TechCrunch

Twelve Labs lands $12M for AI that understands the context of videos

To Jae Lee, a data scientist by training, it never made sense that video — which has become an enormous part of our lives, what with the rise of platforms like TikTok, Vimeo and YouTube — was difficult to search across due to the technical barriers posed by context understanding. Searching the titles, descriptions and tags of videos was always easy enough, requiring no more than a basic algorithm. But searching within videos for specific moments and scenes was long beyond the capabilities of tech, particularly if those moments and scenes weren’t labeled in an obvious way.
To solve this problem, Lee, alongside friends from the tech industry, built a cloud service for video search and understanding. It became Twelve Labs, which went on to raise $17 million in venture capital — $12 million of which came from a seed extension round that closed today. Radical Ventures led the extension with participation from Index Ventures, WndrCo, Spring Ventures, Weights & Biases CEO Lukas Biewald and others, Lee told TechCrunch in an email.
“The vision of Twelve Labs is to help developers build programs that can see, listen, and understand the world as we do by giving them the most powerful video understanding infrastructure,” Lee said.
A demo of the Twelve Labs platform’s capabilities. Image Credits: Twelve Labs
Twelve Labs, which is currently in closed beta, uses AI to attempt to extract “rich information” from videos such as movement and actions, objects and people, sound, text on screen, and speech to identify the relationships between them. The platform converts these various elements into mathematical representations called “vectors” and forms “temporal connections” between frames, enabling applications like video scene search.
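The vector-search idea can be illustrated with a toy example. The character-frequency “encoder” below is a stand-in for a real multimodal embedding model; only the overall shape — embed each scene, embed the query, rank by cosine similarity — reflects the approach described, and the scene descriptions are invented.

```python
# Minimal sketch of embedding-based video scene search.
# The embed() function is a toy stand-in for a learned multimodal encoder.

import math

def embed(text: str) -> list[float]:
    # Toy "encoder": normalized character-frequency vector over a-z.
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    counts = [text.lower().count(c) for c in alphabet]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# Index: each scene is stored as (timestamp, embedding of its description).
scenes = [
    (0.0, "dog running on a beach"),
    (12.5, "close-up of a cat sleeping"),
    (30.0, "city traffic at night"),
]
index = [(t, embed(desc)) for t, desc in scenes]

def search(query: str) -> float:
    # Return the timestamp of the best-matching scene.
    q = embed(query)
    return max(index, key=lambda item: cosine(q, item[1]))[0]

print(search("sleeping cat"))
```

In a real system the embeddings would come from a model trained jointly on video, audio and text, and the index would be an approximate nearest-neighbor store rather than a Python list — but the retrieval logic is the same.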
“As a part of achieving the company’s vision to help developers create intelligent video applications, the Twelve Labs team is building ‘foundation models’ for multimodal video understanding,” Lee said. “Developers will be able to access these models through a suite of APIs, performing not only semantic search but also other tasks such as long-form video ‘chapterization,’ summary generation and video question and answering.”
Google takes a similar approach to video understanding with its MUM AI system, which the company uses to power video recommendations across Google Search and YouTube by picking out subjects in videos (e.g., “acrylic painting materials”) based on the audio, text and visual content. But while the tech might be comparable, Twelve Labs is one of the first vendors to market with it; Google has opted to keep MUM internal, declining to make it available through a public-facing API.
That being said, Google, as well as Microsoft and Amazon, offer services (i.e., Google Cloud Video AI, Azure Video Indexer and AWS Rekognition) that recognize objects, places and actions in videos and extract rich metadata at the frame level. There’s also Reminiz, a French computer vision startup that claims to be able to index any type of video and add tags to both recorded and live-streamed content. But Lee asserts that Twelve Labs is sufficiently differentiated — in part because its platform allows customers to fine-tune the AI to specific categories of video content.
Mockup of API for fine-tuning the model to work better with salad-related content. Image Credits: Twelve Labs
“What we’ve found is that narrow AI products built to detect specific problems show high accuracy in their ideal scenarios in a controlled setting, but don’t scale so well to messy real-world data,” Lee said. “They act more as a rule-based system, and therefore lack the ability to generalize when variances occur. We also see this as a limitation rooted in lack of context understanding. Understanding of context is what gives humans the unique ability to make generalizations across seemingly different situations in the real world, and this is where Twelve Labs stands alone.”
Beyond search, Lee says Twelve Labs’ technology can drive things like ad insertion and content moderation, intelligently figuring out, for example, which videos showing knives are violent versus instructional. It can also be used for media analytics and real-time feedback, he says, and to automatically generate highlight reels from videos.
A little over a year after its founding (March 2021), Twelve Labs has paying customers — Lee wouldn’t reveal how many exactly — and a multiyear contract with Oracle to train AI models using Oracle’s cloud infrastructure. Looking ahead, the startup plans to invest in building out its tech and expanding its team. (Lee declined to reveal the current size of Twelve Labs’ workforce, but LinkedIn data shows it’s roughly 18 people.)
“For most companies, despite the huge value that can be attained through large models, it really does not make sense for them to train, operate and maintain these models themselves. By leveraging a Twelve Labs platform, any organization can leverage powerful video understanding capabilities with just a few intuitive API calls,” Lee said. “The future direction of AI innovation is heading straight towards multimodal video understanding, and Twelve Labs is well positioned to push the boundaries even further in 2023.”
Twelve Labs lands $12M for AI that understands the context of videos by Kyle Wiggers originally published on TechCrunch

AI is getting better at generating porn. We might not be prepared for the consequences.

A red-headed woman stands on the moon, her face obscured. Her naked body looks like it belongs on a poster you’d find on a hormonal teenager’s bedroom wall — that is, until you reach her torso, where three arms sprout from her shoulders.
AI-powered systems like Stable Diffusion, which translate text prompts into pictures, have been used by brands and artists to create concept images, award-winning (albeit controversial) prints and full-blown marketing campaigns.
But some users, intent on exploring the systems’ murkier side, have been testing them for a different sort of use case: porn.
AI porn is about as unsettling and imperfect as you’d expect (that red-head on the moon was likely not generated by someone with an extra arm fetish). But as the tech continues to improve, it will evoke challenging questions for AI ethicists and sex workers alike.
Pornography created using the latest image-generating systems first arrived on the scene via the discussion boards 4chan and Reddit earlier this month, after a member of 4chan leaked the open source Stable Diffusion system ahead of its official release. Then, last week, what appears to be one of the first websites dedicated to high-fidelity AI porn generation launched.
Called Porn Pen, the website allows users to customize the appearance of nude AI-generated models — all of which are women — using toggleable tags like “babe,” “lingerie model,” “chubby,” ethnicities (e.g. “Russian” and “Latina”) and backdrops (e.g. “bedroom,” “shower” and wildcards like “moon”). Buttons capture models from the front, back or side, and change the appearance of the generated photo (e.g. “film photo,” “mirror selfie”). There must be a bug in the mirror selfies, though, because in the feed of user-generated images, some mirrors don’t actually reflect a person — but of course, these models are not people at all. Porn Pen functions like “This Person Does Not Exist,” only it’s NSFW.
On Y Combinator’s Hacker News forum, a user purporting to be the creator describes Porn Pen as an “experiment” using cutting-edge text-to-image models. “I explicitly removed the ability to specify custom text to avoid harmful imagery from being generated,” they wrote. “New tags will be added once the prompt-engineering algorithm is fine-tuned further.” The creator did not respond to TechCrunch’s request for comment.
But Porn Pen raises a host of ethical questions, like biases in image-generating systems and the sources of the data from which they arose. Beyond the technical implications, one wonders whether new tech to create customized porn — assuming it catches on — could hurt adult content creators who make a living doing the same.
“I think it’s somewhat inevitable that this would come to exist when [OpenAI’s] DALL-E did,” Os Keyes, a PhD candidate at Seattle University, told TechCrunch via email. “But it’s still depressing how both the options and defaults replicate a very heteronormative and male gaze.”
Ashley, a sex worker and peer organizer who works on cases involving content moderation, thinks that the content generated by Porn Pen isn’t a threat to sex workers in its current state.
“There is endless media out there,” said Ashley, who did not want her last name to be published for fear of being harassed for her job. “But people differentiate themselves not by just making the best media, but also by being an accessible, interesting person. It’s going to be a long time before AI can replace that.”
On existing monetizable porn sites like OnlyFans and ManyVids, adult creators must verify their age and identity so that the company knows they are consenting adults. AI-generated porn models can’t do this, of course, because they aren’t real.
Ashley worries, though, that if porn sites crack down on AI porn, it might lead to harsher restrictions for sex workers, who are already facing increased regulation from legislation like SESTA/FOSTA. Congress introduced the Safe Sex Workers Study Act in 2019 to examine the effects of this legislation, which makes online sex work more difficult. This study found that “community organizations [had] reported increased homelessness of sex workers” after losing the “economic stability provided by access to online platforms.”
“SESTA was sold as fighting child sex trafficking, but it created a new criminal law about prostitution that had nothing about age,” Ashley said.
Currently, few laws around the world pertain to deepfaked porn. In the U.S., only Virginia and California have regulations restricting certain uses of faked and deepfaked pornographic media.
Systems such as Stable Diffusion “learn” to generate images from text by example. Fed billions of pictures labeled with annotations that indicate their content — for example, a picture of a dog labeled “Dachshund, wide-angle lens” — the systems learn that specific words and phrases refer to specific art styles, aesthetics, locations and so on.
This works relatively well in practice. A prompt like “a bird painting in the style of Van Gogh” will predictably yield a Van Gogh-esque image depicting a bird. But it gets trickier when the prompts are vaguer, refer to stereotypes or deal with subject matter with which the systems aren’t familiar.
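The learning-by-example process can be caricatured with simple co-occurrence counting. The toy below (invented captions and feature labels, no neural network involved) only illustrates why a phrase like “Van Gogh” in a prompt pulls in the visual features it reliably co-occurred with during training, while weakly associated features fall away.

```python
# Toy illustration of how caption-image pairs induce word-to-feature
# associations. Real systems learn continuous representations; this uses
# invented discrete labels purely for demonstration.

from collections import defaultdict

# Training data: (caption, image feature labels) pairs, heavily simplified.
dataset = [
    ("dachshund wide-angle lens", {"dog", "long-body"}),
    ("dachshund puppy", {"dog", "small"}),
    ("van gogh bird painting", {"swirls", "bird"}),
    ("van gogh starry night", {"swirls", "stars"}),
]

# "Training": count how often each caption word co-occurs with each feature.
assoc: dict = defaultdict(lambda: defaultdict(int))
for caption, features in dataset:
    for word in caption.split():
        for f in features:
            assoc[word][f] += 1

def features_for_prompt(prompt: str, threshold: int = 3) -> set:
    # Sum association counts over the prompt's words; keep strong features.
    scores: dict = defaultdict(int)
    for word in prompt.split():
        for f, n in assoc[word].items():
            scores[f] += n
    return {f for f, s in scores.items() if s >= threshold}

print(sorted(features_for_prompt("van gogh bird")))
```

When the prompt's words were seen often and consistently (“van gogh” with swirling brushwork), the association is strong; when the prompt is vague or outside the training distribution, the scores are weak and ambiguous — which mirrors why real generators stumble on unfamiliar or stereotyped subject matter.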
For example, Porn Pen sometimes generates images without a person at all — presumably a failure of the system to understand the prompt. Other times, as alluded to earlier, it shows physically improbable models, typically with extra limbs, nipples in unusual places and contorted flesh.
“By definition [these systems are] going to represent those whose bodies are accepted and valued in mainstream society,” Keyes said, noting that Porn Pen only has categories for cisnormative people. “It’s not surprising to me that you’d end up with a disproportionately high number of women, for example.”
While Stable Diffusion, one of the systems likely underpinning Porn Pen, has relatively few “NSFW” images in its training dataset, early experiments from Redditors and 4chan users show that it’s quite competent at generating pornographic deepfakes of celebrities (Porn Pen — perhaps not coincidentally — has a “celebrity” option). And because it’s open source, there’d be nothing to prevent Porn Pen’s creator from fine-tuning the system on additional nude images.
“It’s definitely not great to generate [porn] of an existing person,” Ashley said. “It can be used to harass them.”
Deepfake porn is often created to threaten and harass people. These images are almost always developed without the subject’s consent out of malicious intent. In 2019, the research company Sensity AI found that 96% of deepfake videos online were non-consensual porn.
Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, says that there’s a possibility the dataset includes people who’ve not consented to their image being used for training in this way, including sex workers.
“Many of [the people in the nudes in the training data] may derive their income from producing pornography or pornography-adjacent content,” Cook said. “Just like fine artists, musicians or journalists, the works these people have produced are being used to create systems that also undercut their ability to earn a living in the future.”
In theory, a porn actor could use copyright protections, defamation and potentially even human rights laws to fight the creator of a deepfaked image. But as a piece in MIT Technology Review notes, gathering evidence in support of the legal argument can prove to be a massive challenge.
When more primitive AI tools popularized deepfaked porn several years ago, a Wired investigation found that nonconsensual deepfake videos were racking up millions of views on mainstream porn sites like Pornhub. Other deepfaked works found a home on sites akin to Porn Pen — according to Sensity data, the top four deepfake porn websites received more than 134 million views in 2018.
“AI image synthesis is now a widespread and accessible technology, and I don’t think anyone is really prepared for the implications of this ubiquity,” Cook continued. “In my opinion, we have rushed very, very far into the unknown in the last few years with little regard for the impact of this technology.”
To Cook’s point, one of the most popular sites for AI-generated porn expanded late last year through partner agreements, referrals and an API, allowing the service — which hosts hundreds of nonconsensual deepfakes — to survive bans on its payments infrastructure. And in 2020, researchers discovered a Telegram bot that generated abusive deepfake images of more than 100,000 women, including underage girls.
“I think we’ll see a lot more people testing the limits of both the technology and society’s boundaries in the coming decade,” Cook said. “We must accept some responsibility for this and work to educate people about the ramifications of what they are doing.”

Hackers access DoorDash data, T-Mobile teams up with SpaceX, and eBay buys TCGplayer

Hello, hello! We’re back with another edition of Week in Review, the newsletter where we quickly break down the top stories to hit TC in the last seven days. Want it in your inbox? Sign up here.
Our most read story this week was about Stable Diffusion, a “new open source AI image generator capable of producing realistic pictures from any text prompt” that is quickly finding its way into more projects. But, as Kyle Wiggers notes, the system’s “unfiltered nature means not all the use has been completely above board.”
other stuff
T-Mobile + Starlink: Can Elon’s Starlink satellites keep your phone connected even when there’s no cell tower around? That’s the idea behind a newfound alliance between SpaceX and T-Mobile. If it works, T-Mobile phones should be able to send messages (but probably not calls) over the Starlink network in a pinch, albeit with a delay of up to 30 minutes.
Google’s noise reduction AI: Smartphones have gotten better and better at low-light photos, but at a certain point the obstacle preventing further improvements is … well, physics. Is an algorithm that uses “AI magic” (as Haje puts it) to eliminate visual noise and “figure out what footage ‘should have’ looked like” eventually the only answer? No idea, but the examples are pretty friggin’ impressive.
DoorDash breached: Remember the Twilio hack a few weeks ago? The ripple effects continue. This week DoorDash disclosed that hackers were able to obtain access to internal DoorDash tools, accessing “names, email addresses, delivery addresses and phone numbers of DoorDash customers.”
Meta’s new accounts: If you’ve got a Quest VR headset and don’t want to tie it to a Facebook or Instagram account, this’ll be the route you take. If you’re still using an old pre-Meta Oculus account, know that support for those ends on day 1 of 2023.
eBay buys TCGplayer: If you’re a collector of any trading card games — think Pokémon, Yu-Gi-Oh!, Magic, etc. — you’ve probably heard of TCGplayer, which eBay is buying “in a deal valued up to $295 million.” We’ll chat with TC writer Aisha Malik about the deal (and why eBay wants it) in the writer spotlight down below.

Image Credits: Getty Images
audio stuff
Commuting? Cooking? Just wearing headphones to discourage people from talking to you? Come hang out with us in Podcast land! This week the Equity team talked about the legal battle going on over at Black Girls Code, Jordan and Darrell talked with comedian/Super Trooper Jay Chandrasekhar about his app on Found, and the Chain Reaction team caught up with two investors from the relatively new web3-focused firm Haun Ventures.
additional stuff
What’s behind the TC+ paywall? Here’s some of the most read stuff this week. Want more? Sign up for TC+ here and use code “WIR” for 15% off your annual pass. 
Manchin’s ultimatum: Can the Inflation Reduction Act and lucrative tax credits help “turn the U.S. into a battery powerhouse”? Tim De Chant explores the possibilities.
Should this metric be your team’s North Star?: The team from Battery Ventures proposes that ARR per employee (or “APE,” as they’ve dubbed it) should be your team’s guiding light.
3 views on Flow: Last week we found out that WeWork founder Adam Neumann is back with a new thing and had already raised over $350 million from the likes of a16z. Good idea? Bad idea? Tim De Chant, Dominic-Madori Davis and Amanda Silberling share their takes.
writer spotlight: Aisha Malik
Image Credits: Aisha Malik
As noted last week, we’re experimenting with the idea of highlighting one TechCrunch writer per newsletter to learn a bit about them and what’s been on their mind lately. This time we’re catching up with the outstanding Aisha Malik, one year almost to the day since she wrote her first TC post. 
Who is Aisha Malik? What do you do at TechCrunch?
Hi, I’m a senior consumer news writer and the second Canadian on the TechCrunch team! I write about the latest changes to platforms and apps, and how they affect the average consumer. My team and I also uncover upcoming app features ahead of their official release. I also get the chance to chat with founders about their app launches and latest funding rounds.
What’s interesting in your beat right now? Any trends we should know about?
One thing we’re seeing and likely will continue to see is just how often apps are copying each other. Just this week, we found out that Instagram is testing a BeReal clone feature that challenges people to post candid photos within two minutes. Over the past year, we’ve seen Instagram copy numerous TikTok features, we’ve seen TikTok copy Snapchat with its Stories feature, and we’ve also seen Twitter copy Instagram with its close friends “circle” feature.
There are countless similar examples. It’ll be interesting to see just how this trend progresses. People are already calling on Instagram to go back to its roots, so what happens when every app is trying to be like another one? At some point, these apps are going to be overcrowded with features, and that might not be something that consumers want.
Right?! It’s absurd. And who wants to build the next cool thing when the giants of the app world will just clone your key features as soon as they start to prove popular?
Since you’re on the consumer/apps team: what’s the most used app on your phone that didn’t come pre-installed? What eats up your battery every day?
I have no shame in admitting this (okay, maybe just a little) but the answer is TikTok.
I find myself opening the app when I want to take a quick break or when I’d rather not commit to watching a movie or an episode of a TV show, but still want some sort of entertainment. I know people who haven’t downloaded the app claim it’s filled with dancing videos, but the truth is you’ll only end up seeing dancing videos if that’s something you’re actually interested in. TikTok formulates its “For You” page in a way that’s based on your interests, so I see it as a great way to discover and engage with content that you care about. As someone who enjoys baking and reading, the majority of the content I see on TikTok revolves around baking recipes and book recommendations.
I also think TikTok clearly has an impact on culture, whether it’s memes, music or political movements; there’s a chance that it’ll appear on TikTok first. I see the app as a fun and easy way to stay up-to-date on all sorts of trends.
I get it. I had to delete TikTok off my phone — every time I’d open it, my eyes would go all Hypnotoad and I’d be gone, only snapping out of it 20 minutes/100 videos later. The algorithm is too good. It feels like the final boss of the internet; the algorithm in its most evolved/efficient form. I’m probably getting a bit too in the weeds here. Back to the questions!
One of the most read stories this week was your post on eBay’s acquisition of TCGplayer. What is TCGplayer, and why does eBay want it?
TCGplayer is one of the biggest online marketplaces for collectible trading card games. The acquisition essentially marks eBay’s latest push into the trading card market, which saw a huge boom during the pandemic. eBay says trading cards are currently showing substantial growth.
To put things in perspective, eBay says the trading cards category is growing significantly faster than its total marketplace and that the category saw $2 billion in transactions in the first half of 2021. Considering that eBay has long been a destination for trading card enthusiasts to buy and sell, acquiring one of its biggest competitors better cements the company’s place as the go-to marketplace to seek out these collectibles.
It’s kind of wild how collectibles saw a massive surge throughout the pandemic — something, perhaps, about lots of people spending a lot more time at home around their own stuff. Collectibles-focused companies like Whatnot just exploded in popularity, going from a pre-seed round to a valuation in the billions in two years. Are you a collector of anything, trading cards or otherwise?
Do rocks count? [Laughs]
Yes!
I have a small collection of rocks and stones that I’ve collected from beaches and forests I’ve visited in Canada and the U.S. I don’t know much about different types of rocks, so the ones in my collection aren’t extraordinary or anything. I just think collecting them is a nice way to feel connected to specific locations I’ve enjoyed visiting!
Fantastic. Thanks, Aisha!
Hackers access DoorDash data, T-Mobile teams up with SpaceX, and eBay buys TCGplayer

Datch secures $10M to bring voice assistants to factory floors

Datch, a company that develops AI-powered voice assistants for industrial customers, today announced that it raised $10 million in a Series A round led by Blackhorn Ventures. The proceeds will be used to expand operations, CEO Mark Fosdike said, as well as develop new software support, tools and capabilities.
Datch started when Fosdike, who has a background in aerospace engineering, met two former Siemens engineers — Aric Thorn and Ben Purcell. They came to the collective realization that voice products built for business customers have to overcome business-specific challenges, like understanding jargon, acronyms and syntax unique to particular customers.
“The way we extract information from systems changes every year, but the way we input information — especially in the industrial world — hasn’t changed since the invention of the keyboard and database,” Fosdike said. “The industrial world had been left in the dark for years, and we knew that developing a technology with voice-visual AI would help light the way for these factories.”
The voice assistants that Datch builds leverage AI to collect and structure data from users in a factory or in the field, parsing commands like “Report an issue for the Line 1 Spot Welder. I estimate it will take half a day to fix.” They run on a smartphone and link to existing systems to write and read records, including records from enterprise resource and asset management platforms.
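Datch’s actual language models are proprietary, but the core idea of turning a spoken command into a structured record can be illustrated with a deliberately simple rule-based sketch. Everything below — the `WorkOrder` fields, the time-phrase mapping, the `parse_command` helper — is a hypothetical illustration, not Datch’s API:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkOrder:
    action: str                       # e.g. "report_issue"
    asset: str                        # e.g. "Line 1 Spot Welder"
    estimate_hours: Optional[float] = None

def parse_command(utterance: str) -> Optional[WorkOrder]:
    """Toy rule-based parse of a spoken maintenance command into a record
    that could be written to an asset-management system."""
    m = re.search(r"report an issue for the (.+?)\.", utterance, re.IGNORECASE)
    if not m:
        return None
    asset = m.group(1)

    # Map colloquial time phrases to hours; fall back to explicit numbers.
    estimate = None
    t = re.search(r"take (half a day|a day|\d+(?:\.\d+)? hours?)",
                  utterance, re.IGNORECASE)
    if t:
        phrase = t.group(1).lower()
        estimate = {"half a day": 4.0, "a day": 8.0}.get(phrase)
        if estimate is None:
            estimate = float(re.match(r"\d+(?:\.\d+)?", phrase).group(0))
    return WorkOrder(action="report_issue", asset=asset,
                     estimate_hours=estimate)

order = parse_command(
    "Report an issue for the Line 1 Spot Welder. "
    "I estimate it will take half a day to fix."
)
```

A production system would replace the regexes with a trained intent-and-slot model tuned to each customer’s jargon, which is exactly the business-specific challenge Fosdike describes.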
Datch’s assistants provide a timeline of events and can capture data without an internet connection; they auto-sync once back online. Using them, workers can fill out company forms, create and update work orders, assign tasks and search through company records all via voice.
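The offline-capture behavior amounts to a local queue that timestamps records as they are created and flushes them when connectivity returns. A minimal sketch of that pattern, assuming a `send` callable that uploads one record and raises `ConnectionError` while offline (all names here are illustrative, not Datch’s):

```python
import time
from collections import deque

class OfflineRecordQueue:
    """Buffers records locally while offline; flushes once back online."""

    def __init__(self, send):
        self.send = send          # callable(record) -> None, raises on failure
        self.pending = deque()

    def capture(self, record: dict) -> None:
        # Timestamp at capture time so the server can rebuild the
        # timeline of events even for records synced hours later.
        self.pending.append({**record, "captured_at": time.time()})
        self.flush()

    def flush(self) -> int:
        """Try to upload queued records in order; returns how many were sent."""
        sent = 0
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                break             # still offline; keep the record queued
            self.pending.popleft()
            sent += 1
        return sent
```

Because records carry their own capture timestamps, the server-side timeline stays accurate regardless of when the sync actually happens.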
Fosdike didn’t go into detail about how Datch treats the voice data, save that it encrypts data both in-transit and at rest and performs daily backups.
“We have to employ a lot of tight, automated feedback loops to train the voice and [language] data, and so everyone’s interaction with Datch is slightly different, depending on the company and team they work within,” Fosdike explained. “Customers are exploring different use cases such as using the [language] data in predictive maintenance, automated classification of cause codes, and using the voice data to predict worker fatigue before it becomes a critical safety risk.”
That last bit about predicting worker fatigue is a little suspect. The idea that conditions like tiredness can be detected in a person’s voice isn’t a new one, but researchers caution that AI is unlikely to flag them reliably. After all, people express tiredness in different ways, depending not only on the workplace environment but on their sex and cultural, ethnic and demographic backgrounds.
The tiredness-detecting scenario aside, Fosdike asserts that Datch’s technology is helping industrial clients get ahead of turbulence in the economy by “vastly improving” the efficiency of their operations. Frontline staff typically have to work with reporting tools that aren’t intuitive, he notes, and in many cases, voice makes for a less cumbersome, faster alternative form of input.
“We help frontline workers with productivity and solve the pain point of time wasted on their reports by decreasing the process time,” Fosdike said. “Industrial companies are fast realizing that to keep up with demand or position themselves to withstand a global pandemic, they need to find a way to scale with more than just peoplepower. Our AI offers these companies an efficient solution in a fraction of the time and with less overhead needed.”
Datch competes with Rain, Aiqudo and Onvego, all of which are developing voice technologies for industrial customers. Deloitte’s Maxwell, Genba and Athena are rivals in Fosdike’s eyes, as well. But business remains steady — Datch counts ConEd, Singapore Airlines, ABB Robotics and the New York Power Authority among its clients.
“We raised this latest round earlier than expected due to the influx of demand from the market. The timing is right to capitalize on both the post-COVID boom in digital transformation as well as corporate investments driven by the infrastructure bill,” Fosdike said, referring to the $1 trillion package U.S. lawmakers passed last November. “Currently we have a team of 20, and plan to use the funds to grow to 55 to 60 people, scaling to roughly 40 by the end of the year.”
To date, Datch has raised $15 million in venture capital.