Tag Archives: AI

Deep Render believes AI holds the key to more efficient video compression

Chri Besenbruch, CEO of Deep Render, sees many problems with the way video compression standards are developed today. He thinks they aren’t advancing quickly enough, bemoans the fact that they’re plagued with legal uncertainty and decries their reliance on specialized hardware for acceleration.
“The codec development process is broken,” Besenbruch said in an interview with TechCrunch ahead of Disrupt, where Deep Render is participating in the Disrupt Battlefield 200. “In the compression industry, there is a significant challenge of finding a new way forward and searching for new innovations.”
Seeking a better way, Besenbruch co-founded Deep Render with Arsalan Zafar, whom he met at Imperial College London. At the time, Besenbruch was studying computer science and machine learning. He and Zafar collaborated on a research project involving distributing terabytes of video across a network, during which they say they experienced the shortcomings of compression technology firsthand.
The last time TechCrunch covered Deep Render, the startup had just closed a £1.6 million seed round ($1.81 million) led by Pentech Ventures with participation from Speedinvest. In the roughly two years since then, Deep Render has raised an additional several million dollars from existing investors, bringing its total raised to $5.7 million.
“We thought to ourselves, if the internet pipes are difficult to extend, the only thing we can do is make the data that flows through the pipes smaller,” Besenbruch said. “Hence, we decided to fuse machine learning and AI and compression technology to develop a fundamentally new way of compressing data, getting significantly better image and video compression ratios.”
Deep Render isn’t the first to apply AI to video compression. Alphabet’s DeepMind adapted a machine learning algorithm originally developed to play board games to the problem of compressing YouTube videos, leading to a 4% reduction in the amount of data the video-sharing service needs to stream to users. Elsewhere, there’s startup WaveOne, which claims its machine learning-based video codec outperforms all existing standards across popular quality metrics.
But Deep Render’s solution is platform-agnostic. To create it, Besenbruch says that the company compiled a dataset of over 10 million video sequences on which they trained algorithms to learn to compress video data efficiently. Deep Render used a combination of on-premise and cloud hardware for the training, with the former comprising over a hundred GPUs.
Deep Render claims the resulting compression standard is 5x better than HEVC, a widely used codec, and can run in real time on mobile devices with a dedicated AI accelerator chip (e.g., the Apple Neural Engine in modern iPhones). Besenbruch says the company is in talks with three large tech firms — all with market caps over $300 billion — about paid pilots, though he declined to share names.
Eddie Anderson, a founding partner at Pentech and board member at Deep Render, shared via email: “Deep Render’s machine learning approach to codecs completely disrupts an established market. Not only is it a software route to market, but their [compression] performance is significantly better than the current state of the art. As bandwidth demands continue to increase, their solution has the potential to drive vastly improved commercial performance for current media owners and distributors.”
Deep Render currently employs 20 people. By the end of 2023, Besenbruch expects that number will more than triple to 62.
Deep Render believes AI holds the key to more efficient video compression by Kyle Wiggers originally published on TechCrunch

Regie secures $10M to generate marketing copy using AI

Regie.ai, a startup using OpenAI’s GPT-3 text-generating system to create sales and marketing content for brands, today announced that it raised $10 million in Series A funding led by Scale Venture Partners with participation from Foundation Capital, South Park Commons, Day One Ventures and prominent angel investors. The fresh investment comes as VCs see a growing opportunity in AI-powered, copy-generating adtech companies, whose tech promises to save time while potentially increasing personalization.
Regie was founded in 2020 by Matt Millen and Srinath Sridhar. Previously a software engineer at Google and Meta, Sridhar is a data scientist by trade, having developed enterprise-scale AI systems that detect duplicate images and rank search results. Millen was formerly a VP at T-Mobile, leading the national sales teams (e.g., strategic accounts and public sector).
With Regie, Sridhar says he and Millen aimed to create a way for companies to communicate with their customers via channels like email, social media, text, podcasts, online advertising and more. Because companies have so many platforms and mediums at their disposal to speak with customers, he notes, it can be a challenge for content marketers to produce continuously compelling content to reach their customers.
“The way content is getting generated has fundamentally changed,” Sridhar told TechCrunch in an email interview. “Marketers and copywriters working in the enterprise … increasingly [need] to produce and manage content and content workflows at scale.”
Regie uses GPT-3 to power its service — the same GPT-3 that can generate poetry, prose and academic papers. But it’s a “flavor” of GPT-3 fine-tuned on a training data set of roughly 20,000 sales sequences (the series of steps to convert prospects into paying customers) and nearly 100 million sales emails. Also in the mix are custom language systems built by Regie to reflect brands and their messaging, designed to be integrated with existing sales platforms like Outreach, HubSpot, and Salesloft.
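For a sense of what fine-tuning on sales data involves mechanically: OpenAI’s legacy GPT-3 fine-tuning endpoint expected training data as JSONL prompt/completion pairs. The sketch below shows that data-preparation step only; the example records and field names are hypothetical stand-ins, not Regie’s actual dataset or schema.

```python
import json

# Hypothetical examples standing in for Regie's training data: each pairs a
# sales-sequence step with the email copy written for it.
examples = [
    {
        "step": "cold outreach, first touch, SaaS security product",
        "email": "Hi {first_name}, noticed your team is scaling fast...",
    },
    {
        "step": "follow-up after demo, enterprise CRM",
        "email": "Thanks for your time yesterday, {first_name}...",
    },
]

def to_finetune_jsonl(examples):
    """Serialize examples as JSONL prompt/completion pairs, the format
    OpenAI's legacy fine-tuning endpoint expected for GPT-3 models."""
    lines = []
    for ex in examples:
        record = {
            # A fixed separator ("###") marks where the prompt ends.
            "prompt": f"Write a sales email for: {ex['step']}\n\n###\n\n",
            # Completions conventionally start with a leading space.
            "completion": " " + ex["email"],
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_finetune_jsonl(examples)
print(jsonl.splitlines()[0])
```

The resulting file would then be uploaded to the fine-tuning API; at inference time the model is prompted with the same separator so it continues in the style of the training completions.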
Image Credits: Regie
Lest the systems spew problematic language, Regie says that every system goes through “human curation” and vetting before being released. The startup also claims to train the systems on “inclusive” language and test them for biases, like bias against certain demographic groups.
Customers can use Regie to generate original, optimized-for-search-engines content or create custom sales sequences. The platform also offers blog- and social-media-post-authoring tools for personalizing messages, as well as a Chrome extension that analyzes the “quality” of emails that customers send — and optionally rewrites the text.
“Generative AI is completely disrupting the way content is created today. The biggest competitors of Regie would be the large content authoring and management platforms that will be completely redesigned AI first going forward,” Sridhar said confidently. “For example, Adobe’s suite of products including Acrobat, Illustrator, Photoshop, now Figma as well as Adobe Experience Cloud will start to get outdated as Regie continues to build on an intelligent content creation and management platform for the enterprise.”
More immediately, Regie competes with vendors like Jasper, Phrasee, Copysmith and Copy.ai — all of which tap AI to generate bespoke marketing copy. But Sridhar argues that Regie is a more vertical platform that caters to go-to-market teams in the enterprise while combining text, images and workflows into a single pane of glass.
“Generative AI is such a paradigm shift that not only productivity and top-line of companies will go up as a result, but the bottom line will also go down simultaneously. There are very few products that can improve both sides of that financial equation,” Sridhar continued. “So if a company wants to reduce costs because they want to assimilate sales tools, or reduce outsourced writing while simultaneously increasing revenue, Regie can do that. If you are an outsourced marketing agency looking to retain more customers and efficiently generate content at scale, Regie can definitely do that for agencies as well.”
The company currently has more than 70 software-as-a-service customers on annual contracts, including AT&T, Sophos, Okta and Crunchbase. Sridhar didn’t reveal revenue but said that he expects the 25-person company to grow “meaningfully” this year.
“This is a revolutionary new field. And as always, adoption will require educating the users,” Sridhar said. “It is clear to us as practitioners that the world has changed. But it will take time for others to get their hands dirty and convince themselves that this is happening — and that it is a very positive development. So we have to be patient in educating the industry. We also have to show that content quality isn’t compromised and that it can perform better and be maintained more consistently with the strategic application of AI.”
To date, Regie has raised $14.8 million.
Regie secures $10M to generate marketing copy using AI by Kyle Wiggers originally published on TechCrunch

AI is getting better at generating porn. We might not be prepared for the consequences.

A red-headed woman stands on the moon, her face obscured. Her naked body looks like it belongs on a poster you’d find on a hormonal teenager’s bedroom wall — that is, until you reach her torso, where three arms sprout from her shoulders.
AI-powered systems like Stable Diffusion, which translate text prompts into pictures, have been used by brands and artists to create concept images, award-winning (albeit controversial) prints and full-blown marketing campaigns.
But some users, intent on exploring the systems’ murkier side, have been testing them for a different sort of use case: porn.
AI porn is about as unsettling and imperfect as you’d expect (that red-head on the moon was likely not generated by someone with an extra arm fetish). But as the tech continues to improve, it will evoke challenging questions for AI ethicists and sex workers alike.
Pornography created using the latest image-generating systems first arrived on the scene via the discussion boards 4chan and Reddit earlier this month, after a member of 4chan leaked the open source Stable Diffusion system ahead of its official release. Then, last week, what appears to be one of the first websites dedicated to high-fidelity AI porn generation launched.
Called Porn Pen, the website allows users to customize the appearance of nude AI-generated models — all of which are women — using toggleable tags like “babe,” “lingerie model,” “chubby,” ethnicities (e.g. “Russian” and “Latina”) and backdrops (e.g. “bedroom,” “shower” and wildcards like “moon”). Buttons capture models from the front, back or side, and change the appearance of the generated photo (e.g. “film photo,” “mirror selfie”). There must be a bug in the mirror selfies, though, because in the feed of user-generated images, some mirrors don’t actually reflect a person — but of course, these models are not people at all. Porn Pen functions like “This Person Does Not Exist,” only it’s NSFW.
On Y Combinator’s Hacker News forum, a user purporting to be the creator describes Porn Pen as an “experiment” using cutting-edge text-to-image models. “I explicitly removed the ability to specify custom text to avoid harmful imagery from being generated,” they wrote. “New tags will be added once the prompt-engineering algorithm is fine-tuned further.” The creator did not respond to TechCrunch’s request for comment.
But Porn Pen raises a host of ethical questions, like biases in image-generating systems and the sources of the data from which they arose. Beyond the technical implications, one wonders whether new tech to create customized porn — assuming it catches on — could hurt adult content creators who make a living doing the same.
“I think it’s somewhat inevitable that this would come to exist when [OpenAI’s] DALL-E did,” Os Keyes, a PhD candidate at Seattle University, told TechCrunch via email. “But it’s still depressing how both the options and defaults replicate a very heteronormative and male gaze.”
Ashley, a sex worker and peer organizer who works on cases involving content moderation, thinks that the content generated by Porn Pen isn’t a threat to sex workers in its current state.
“There is endless media out there,” said Ashley, who did not want her last name to be published for fear of being harassed for her job. “But people differentiate themselves not by just making the best media, but also by being an accessible, interesting person. It’s going to be a long time before AI can replace that.”
On existing monetizable porn sites like OnlyFans and ManyVids, adult creators must verify their age and identity so that the company knows they are consenting adults. AI-generated porn models can’t do this, of course, because they aren’t real.
Ashley worries, though, that if porn sites crack down on AI porn, it might lead to harsher restrictions for sex workers, who are already facing increased regulation from legislation like SESTA/FOSTA. Congress introduced the Safe Sex Workers Study Act in 2019 to examine the effects of this legislation, which makes online sex work more difficult. This study found that “community organizations [had] reported increased homelessness of sex workers” after losing the “economic stability provided by access to online platforms.”
“SESTA was sold as fighting child sex trafficking, but it created a new criminal law about prostitution that had nothing about age,” Ashley said.
Currently, few laws around the world pertain to deepfaked porn. In the U.S., only Virginia and California have regulations restricting certain uses of faked and deepfaked pornographic media.
Systems such as Stable Diffusion “learn” to generate images from text by example. Fed billions of pictures labeled with annotations that indicate their content — for example, a picture of a dog labeled “Dachshund, wide-angle lens” — the systems learn that specific words and phrases refer to specific art styles, aesthetics, locations and so on.
This works relatively well in practice. A prompt like “a bird painting in the style of Van Gogh” will predictably yield a Van Gogh-esque image depicting a bird. But it gets trickier when the prompts are vaguer, refer to stereotypes or deal with subject matter with which the systems aren’t familiar.
For example, Porn Pen sometimes generates images without a person at all — presumably a failure of the system to understand the prompt. Other times, as alluded to earlier, it shows physically improbable models, typically with extra limbs, nipples in unusual places and contorted flesh.
“By definition [these systems are] going to represent those whose bodies are accepted and valued in mainstream society,” Keyes said, noting that Porn Pen only has categories for cisnormative people. “It’s not surprising to me that you’d end up with a disproportionately high number of women, for example.”
While Stable Diffusion, one of the systems likely underpinning Porn Pen, has relatively few “NSFW” images in its training dataset, early experiments from Redditors and 4chan users show that it’s quite competent at generating pornographic deepfakes of celebrities (Porn Pen — perhaps not coincidentally — has a “celebrity” option). And because it’s open source, there’d be nothing to prevent Porn Pen’s creator from fine-tuning the system on additional nude images.
“It’s definitely not great to generate [porn] of an existing person,” Ashley said. “It can be used to harass them.”
Deepfake porn is often created to threaten and harass people. These images are almost always developed without the subject’s consent and with malicious intent. In 2019, the research company Sensity AI found that 96% of deepfake videos online were non-consensual porn.
Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, says that there’s a possibility the dataset includes people who’ve not consented to their image being used for training in this way, including sex workers.
“Many of [the people in the nudes in the training data] may derive their income from producing pornography or pornography-adjacent content,” Cook said. “Just like fine artists, musicians or journalists, the works these people have produced are being used to create systems that also undercut their ability to earn a living in the future.”
In theory, a porn actor could use copyright protections, defamation and potentially even human rights laws to fight the creator of a deepfaked image. But as a piece in MIT Technology Review notes, gathering evidence in support of the legal argument can prove to be a massive challenge.
When more primitive AI tools popularized deepfaked porn several years ago, a Wired investigation found that nonconsensual deepfake videos were racking up millions of views on mainstream porn sites like Pornhub. Other deepfaked works found a home on sites akin to Porn Pen — according to Sensity data, the top four deepfake porn websites received more than 134 million views in 2018.
“AI image synthesis is now a widespread and accessible technology, and I don’t think anyone is really prepared for the implications of this ubiquity,” Cook continued. “In my opinion, we have rushed very, very far into the unknown in the last few years with little regard for the impact of this technology.”
To Cook’s point, one of the most popular sites for AI-generated porn expanded late last year through partner agreements, referrals and an API, allowing the service — which hosts hundreds of nonconsensual deepfakes — to survive bans on its payments infrastructure. And in 2020, researchers discovered a Telegram bot that generated abusive deepfake images of more than 100,000 women, including underage girls.
“I think we’ll see a lot more people testing the limits of both the technology and society’s boundaries in the coming decade,” Cook said. “We must accept some responsibility for this and work to educate people about the ramifications of what they are doing.”

PayTalk promises to handle all sorts of payments with voice, but the app has a long way to go

Neji Tawo, the founder of boutique software development company Wiscount Corporation, says he was inspired by his dad to become an engineer. When Tawo was a kid, his dad tasked him with coming up with a formula to calculate the gas in the fuel tanks at his family’s station. Tawo then created an app for gas stations to help prevent gas siphoning.
The seed of the idea for Tawo’s latest venture came from a different source: a TV ad for a charity. Frustrated by his experience filling out donation forms, Tawo sought an alternative, faster way to complete such transactions. He settled on voice.
Tawo’s PayTalk, which is one of the first products in Amazon’s Black Founders Build with Alexa Program, uses conversational AI to carry out transactions via smart devices. Using the PayTalk app, users can do things like find a ride, order a meal, pay bills, purchase tickets and even apply for a loan, Tawo says.
“We see the opportunity in a generation that’s already using voice services for day-to-day tasks like checking the weather, playing music, calling friends and more,” Tawo said. “At PayTalk, we feel voice services should function like a person — being capable of doing several things from hailing you a ride to taking your delivery order to paying your phone bills.”

PayTalk is powered by out-of-the-box voice recognition models on the frontend and various API connectors behind the scenes, Tawo explains. In addition to Alexa, the app integrates with Siri and Google Assistant, letting users add voice shortcuts like “Hey Siri, make a reservation on PayTalk.”
“Myself and my team have bootstrapped this all along the way, as many VCs we approached early on were skeptical about voice being the device form factor of the future. The industry is in its nascent stages and many still view it with skepticism,” Tawo said. “With the COVID-19 pandemic and subsequent shift to doing more remotely across different types of transactions (i.e. ordering food from home, shopping online, etc.), we … saw that there was increased interest in the use of voice services. This in turn boosted demand for our product and we believe that we are positioned to continue to expand our offerings and make voice services more useful as a result.”
Tawo’s pitch for PayTalk reminded me a lot of Viv, the startup launched by Siri co-creator Adam Cheyer (and later acquired by Samsung) that proposed voice as the connective tissue between disparate apps and services. It’s a promising idea — tantalizing, even. But where PayTalk is concerned, the execution isn’t quite there yet.
The PayTalk app is only available for iOS and Android at the moment, and in my experience with it, it’s a little rough around the edges. A chatbot-like flow allows you to type commands — a nice fallback for situations where voice doesn’t make sense (or isn’t appropriate) — but doesn’t transition to activities particularly gracefully. When I used it to look for a cab by typing the suggested “book a ride” command, PayTalk asked for a pickup and dropoff location before throwing me into an Apple Maps screen without any of the information I’d just entered.
The reservation and booking functionality seems broken as well. PayTalk walked me through the steps of finding a restaurant, asking which time I’d like to reserve, the size of my party and so on. But the app let me “confirm” a table for 2 a.m. at SS106 Aperitivo Bar — an Italian restaurant in Alberta — on a day the restaurant closes at 10 p.m.
Image Credits: PayTalk
Other “categories” of commands in PayTalk are very limited in what they can accomplish — or simply nonfunctional. I can only order groceries from two services in my area (Downtown Brooklyn) at present — MNO African Market and Simi African Foods Market. Requesting a loan prompts an email with a link to Glance Capital, a personal loan provider for gig workers, that throws a 404 error when clicked. A command to book “luxury services” like a yacht or “sea plane” (yes, really) fails to reach anything resembling a confirmation screen, while the “pay for parking” command confusingly asks for a zone number.
To fund purchases through PayTalk (e.g. parking), there’s an in-app wallet. I couldn’t figure out a way to transfer money to it, though. The app purports to accept payment cards, but tapping on the “Use Card” button triggers a loading animation that quickly times out.
I could go on. But suffice it to say that PayTalk is in the very earliest stages of development. I began to think the app had been released prematurely, but PayTalk’s official Twitter account has been advertising it for at least the past few months.
Perhaps PayTalk will eventually grow into the shoes of the pitch Tawo gave me, so to speak — Wiscount is kicking off a four-month tenure at the Black Founders Build with Alexa Program. In the meantime, it must be pointed out that Alexa, Google Assistant and Siri are already capable of handling much of what PayTalk promises to one day accomplish.
“With the potential $100,000 investment [from the Black Founders Build with Alexa Program], we will seek to raise a seed round to expand our product offerings to include features that would allow customers to seamlessly carry out e-commerce and financial transactions on voice service-powered devices,” Tawo said. “PayTalk is mainly a business-to-consumer platform. However, as we continue to innovate and integrate voice-activated options … we see the potential to support enterprise use cases by replacing and automating the lengthy form filling processes that are common for many industries like healthcare.”
Hopefully, the app’s basic capabilities get attention before anything else.

Sunshine Contacts may have given out your home address, even if you’re not using the app

A third-party contacts app you’re not using may be handing out your home address to its users. In November, former Yahoo CEO and Google veteran Marissa Mayer and co-founder Enrique Muñoz Torres introduced their newly rebranded startup Sunshine, and its first product, Sunshine Contacts. The new iOS app offers to organize your address book by handling duplicates and merges using AI technology, as well as fill in some of the missing bits of information by gathering data from the web — like LinkedIn profiles, for example.
But some users were surprised to find they suddenly had home addresses for their contacts, too, including for those who were not already Sunshine users.
TechCrunch reached out to Sunshine to better understand the situation, given the potential privacy concerns.

We understand there are several ways that users may encounter someone’s home address in the Sunshine app. A user may already have the address on file in their phone’s address book, of course, or they may have opted in to allow Sunshine to scan their inbox in order to extract information from email signature lines. This is a feature common to other personal CRM solutions, too, like Evercontact.
In the event that someone had signed an email with their home address included in this field, that data could then be added to their contact card in the Sunshine app. In this case, the contact card is updated in the Sunshine Contacts app, which then syncs with your phone’s address book. But this data is not distributed to any other app users.

Image Credits: Sunshine

The app also augments contact cards with information acquired by other means. For example, it may use the information you do have to complete missing fields — like adding a last name, when you had other data that indicated what someone’s full name is, but hadn’t completed filling out the card. The app may also be able to pull in data from a LinkedIn profile, if available.
For home addresses, Sunshine is using the Whitepages API.
The company confirmed to TechCrunch it’s augmenting contact cards with home addresses under some circumstances, even if that contact is not a Sunshine Contacts user. Sunshine says it doesn’t believe this to be any different from a user going to Google to look for someone’s contact information on the web — it’s just automating the process.
Of course, some would argue when you’re talking about automating the collection of home addresses for hundreds or potentially thousands of users — depending on the size of your personal address book database — it’s a bit different than if you went googling to find your aunt’s address so you can mail a Christmas card or called your old college roommate to find out where to send their birthday gift.
However, Sunshine clarified to TechCrunch that it won’t add the home address except in cases when it determines you have a personal connection to the contact in question.
Here, though, Sunshine enters a gray area where the app and its technology will try to figure out who you know well enough to need a home address.
Before adding the address, Sunshine requires you to have the contact’s phone number on file in your address book, not just their email. That would eliminate some people you only have a loose connection with through work, for instance. And it only updates with the home address if the partner API is able to associate that address with a phone number you have.

Image Credits: Sunshine

In addition, Sunshine says that it’s generally able to understand the type of phone number you have on file — like if it’s a residential or business line, or if it’s a landline or mobile number. (It uses APIs to do this, similar to StrikeIron’s though not that particular one.) It also knows who the phone number belongs to. Using this information and further context, the app tries to determine if a phone number is a personal or a professional number and it will try to understand your relationship with the person who owns that number.
In practice, what this means is that if all the information you had on file for a contact was professional information — where they worked, a job title, a work email and a phone number, perhaps — then that person’s contact card would not be updated to include their home address, too.
And because many people use their personal cell for work, Sunshine won’t consider someone a “personal” relationship just because you have their mobile phone number. For example, if you had only a contact’s name and a cell number, you wouldn’t be able to use the app to get their home address.
The result of all this automated analysis is that Sunshine, in theory, only updates contact cards with home addresses where it’s determined there’s a personal relationship.
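Pieced together, the gating rules described above amount to a small decision function. The sketch below is a hypothetical reconstruction from the reporting, not Sunshine’s actual code; every field name, and the `number_lookup` helper standing in for the partner API, are assumptions.

```python
# A sketch of the address-gating heuristic described above. All field names
# and classifications are hypothetical; Sunshine's real logic is not public.

def should_add_home_address(contact, number_lookup):
    """Return True only when the contact looks like a personal relationship.

    contact: dict with optional keys "phone", "email", "personal_fields"
             (e.g. nickname, birthday) and "professional_fields"
             (e.g. job title, work email).
    number_lookup: callable mapping a phone number to a dict like
             {"line_type": "mobile" | "landline",
              "usage": "personal" | "business",
              "home_address": str | None}.
    """
    phone = contact.get("phone")
    if not phone:
        return False  # an email alone is never enough

    info = number_lookup(phone)
    if info.get("home_address") is None:
        return False  # the partner API must tie an address to this number

    if info.get("usage") == "business":
        return False  # work lines don't imply a personal relationship

    # Only professional details on file? Treat it as a work contact --
    # and a bare name + cell number doesn't count as personal either.
    if not contact.get("personal_fields"):
        return False

    return True
```

The point of the sketch is that every branch is a heuristic: the classification of a number as “personal,” and the presence of personal fields, are both proxies for a relationship the app can’t actually observe — which is exactly where the gray areas below come in.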
This, of course, doesn’t take into account some scenarios like bad exes, stalking or a general desire for privacy. Arguably, there are times when someone may have a lot of personal information for a contact in their address book, but the contact in question would rather not have their home address distributed to that person.
The only way to prevent this, presumably, would be to opt out at the source: Whitepages.com. (Once you have your profile URL from the Whitepages website, you can use this online form to have your information suppressed.)

Image Credits: Sunshine

The way the app functions raises questions about what is truly private information these days.
Sunshine points out that people’s home addresses are not as hidden from the world as they may think, which makes them fair game.
It’s true that our home addresses are often publicly available. Although it’s been years since most of us have had a telephone directory dropped on our doorstep with phone and address listings for people in our city, home addresses today are relatively trivial to find when you know where to look online.
In addition to public records — like voter registration databases — there are web-based people finders, too.
Sunshine’s partner, Whitepages.com, makes visitors pay for its data, but others like TruePeopleSearch.com don’t have the same paywall. With someone’s first and last name and city, its website provides access to someone’s home address, prior addresses, cell phone, age and the names of family members and other close associates. (TruePeopleSearch is not a Sunshine partner, we should clarify.)
Even though this data is “public,” it’s uncomfortable to see it casually distributed in an app, as that makes it even easier to get to than before.
Plus, after years of being burned by data breaches and data privacy scandals, people tend to be more protective of their personal information than before. And, had they been asked, many would probably decline to have their home addresses shared with Sunshine’s user base. Generally speaking, people appreciate the courtesy of having someone come ask for a home address, when it’s needed — they may not want an app creeping the web to find it and hand it out.
Sunshine Contacts is in an invite-only beta in the U.S., so the company has time to reconsider how this feature is implemented based on user feedback before it becomes widely available.
