Category archive: Artificial Intelligence


Datch secures $10M to bring voice assistants to factory floors

Datch, a company that develops AI-powered voice assistants for industrial customers, today announced that it raised $10 million in a Series A round led by Blackhorn Ventures. The proceeds will be used to expand operations, CEO Mark Fosdike said, as well as develop new software support, tools and capabilities.
Datch started when Fosdike, who has a background in aerospace engineering, met two former Siemens engineers — Aric Thorn and Ben Purcell. They came to the collective realization that voice products built for business customers have to overcome business-specific challenges, like understanding jargon, acronyms and syntax unique to particular customers.
“The way we extract information from systems changes every year, but the way we input information — especially in the industrial world — hasn’t changed since the invention of the keyboard and database,” Fosdike said. “The industrial world had been left in the dark for years, and we knew that developing a technology with voice-visual AI would help light the way for these factories.”
The voice assistants that Datch builds leverage AI to collect and structure data from users in a factory or in the field, parsing commands like “Report an issue for the Line 1 Spot Welder. I estimate it will take half a day to fix.” They run on a smartphone and link to existing systems to write and read records, including records from enterprise resource and asset management platforms.
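A command like the one above is essentially a slot-filling problem: the assistant has to pull an intent, an asset name and a time estimate out of free-form speech before it can write a structured record. The toy Python sketch below illustrates the idea; the field names and parsing rules are my own assumptions, not Datch's actual pipeline.

```python
import re

def parse_report(utterance: str) -> dict:
    """Toy slot-filling parser: turn a spoken maintenance report
    into a structured work-order record. Illustrative only."""
    record = {"type": None, "asset": None, "estimate_hours": None}
    if utterance.lower().startswith("report an issue"):
        record["type"] = "issue"
    # Asset name: the text between "for the" and the next period
    m = re.search(r"for the (.+?)\.", utterance)
    if m:
        record["asset"] = m.group(1)
    # Rough duration estimate: treat "half a day" as four hours
    if "half a day" in utterance.lower():
        record["estimate_hours"] = 4.0
    return record

cmd = ("Report an issue for the Line 1 Spot Welder. "
       "I estimate it will take half a day to fix.")
print(parse_report(cmd))
# {'type': 'issue', 'asset': 'Line 1 Spot Welder', 'estimate_hours': 4.0}
```

A production system would of course use trained language models with customer-specific vocabularies — the jargon problem Fosdike describes — rather than brittle patterns like these.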
Datch’s assistants provide a timeline of events and can capture data without an internet connection; they auto-sync once back online. Using them, workers can fill out company forms, create and update work orders, assign tasks and search through company records all via voice.
Fosdike didn’t go into detail about how Datch treats the voice data, save that it encrypts data both in-transit and at rest and performs daily backups.
“We have to employ a lot of tight, automated feedback loops to train the voice and [language] data, and so everyone’s interaction with Datch is slightly different, depending on the company and team they work within,” Fosdike explained. “Customers are exploring different use cases such as using the [language] data in predictive maintenance, automated classification of cause codes, and using the voice data to predict worker fatigue before it becomes a critical safety risk.”
That last bit about predicting worker fatigue is a little suspect. The idea that conditions like tiredness can be detected in a person’s voice isn’t a new one, but some researchers believe it’s unlikely AI can flag them with 100% accuracy. After all, people express tiredness in different ways, depending not only on the workplace environment but on their sex and cultural, ethnic and demographic backgrounds.
The tiredness-detecting scenario aside, Fosdike asserts that Datch’s technology is helping industrial clients get ahead of turbulence in the economy by “vastly improving” the efficiency of their operations. Frontline staff typically have to work with reporting tools that aren’t intuitive, he notes, and in many cases, voice makes for a less cumbersome, faster alternative form of input.
“We help frontline workers with productivity and solve the pain point of time wasted on their reports by decreasing the process time,” Fosdike said. “Industrial companies are fast realizing that to keep up with demand or position themselves to withstand a global pandemic, they need to find a way to scale with more than just peoplepower. Our AI offers these companies an efficient solution in a fraction of the time and with less overhead needed.”
Datch competes with Rain, Aiqudo and Onvego, all of which are developing voice technologies for industrial customers. Deloitte’s Maxwell, Genba and Athena are rivals in Fosdike’s eyes as well. But business remains steady — Datch counts ConEd, Singapore Airlines, ABB Robotics and the New York Power Authority among its clients.
“We raised this latest round earlier than expected due to the influx of demand from the market. The timing is right to capitalize on both the post-COVID boom in digital transformation as well as corporate investments driven by the infrastructure bill,” Fosdike said, referring to the $1 trillion package U.S. lawmakers passed last November. “Currently we have a team of 20, and plan to use the funds to grow to 55 to 60 people, scaling to roughly 40 by the end of the year.”
To date, Datch has raised $15 million in venture capital.

PayTalk promises to handle all sorts of payments with voice, but the app has a long way to go

Neji Tawo, the founder of boutique software development company Wiscount Corporation, says he was inspired by his dad to become an engineer. When Tawo was a kid, his dad tasked him with coming up with a formula to calculate the gas in the fuel tanks at his family’s station. Tawo then created an app for gas stations to help prevent gas siphoning.
The seed of the idea for Tawo’s latest venture came from a different source: a TV ad for a charity. Frustrated by his experience filling out donation forms, Tawo sought an alternative, faster way to complete such transactions. He settled on voice.
Tawo’s PayTalk, which is one of the first products in Amazon’s Black Founders Build with Alexa Program, uses conversational AI to carry out transactions via smart devices. Using the PayTalk app, users can do things like find a ride, order a meal, pay bills, purchase tickets and even apply for a loan, Tawo says.
“We see the opportunity in a generation that’s already using voice services for day-to-day tasks like checking the weather, playing music, calling friends and more,” Tawo said. “At PayTalk, we feel voice services should function like a person — being capable of doing several things from hailing you a ride to taking your delivery order to paying your phone bills.”

PayTalk is powered by out-of-the-box voice recognition models on the frontend and various API connectors behind the scenes, Tawo explains. In addition to Alexa, the app integrates with Siri and Google Assistant, letting users add voice shortcuts like “Hey Siri, make a reservation on PayTalk.”
“Myself and my team have bootstrapped this all along the way, as many VCs we approached early on were skeptical about voice being the device form factor of the future. The industry is in its nascent stages and many still view it with skepticism,” Tawo said. “With the COVID-19 pandemic and subsequent shift to doing more remotely across different types of transactions (i.e. ordering food from home, shopping online, etc.), we … saw that there was increased interest in the use of voice services. This in turn boosted demand for our product and we believe that we are positioned to continue to expand our offerings and make voice services more useful as a result.”
Tawo’s pitch for PayTalk reminded me a lot of Viv, the startup launched by Siri co-creator Adam Cheyer (later acquired by Samsung) that proposed voice as the connective tissue between disparate apps and services. It’s a promising idea — tantalizing, even. But where PayTalk is concerned, the execution isn’t quite there yet.
The PayTalk app is only available for iOS and Android at the moment, and in my experience with it, it’s a little rough around the edges. A chatbot-like flow allows you to type commands — a nice fallback for situations where voice doesn’t make sense (or isn’t appropriate) — but doesn’t transition to activities particularly gracefully. When I used it to look for a cab by typing the suggested “book a ride” command, PayTalk asked for a pickup and dropoff location before throwing me into an Apple Maps screen without any of the information I’d just entered.
The reservation and booking functionality seems broken as well. PayTalk walked me through the steps of finding a restaurant, asking which time I’d like to reserve, the size of my party and so on. But the app let me “confirm” a table for 2 a.m. at SS106 Aperitivo Bar — an Italian restaurant in Alberta — on a day the restaurant closes at 10 p.m.
Other “categories” of commands in PayTalk are very limited in what they can accomplish — or simply nonfunctional. I can only order groceries from two services in my area (Downtown Brooklyn) at present — MNO African Market and Simi African Foods Market. Requesting a loan prompts an email with a link to Glance Capital, a personal loan provider for gig workers, that throws a 404 error when clicked. A command to book “luxury services” like a yacht or “sea plane” (yes, really) fails to reach anything resembling a confirmation screen, while the “pay for parking” command confusingly asks for a zone number.
To fund purchases through PayTalk (e.g. parking), there’s an in-app wallet. I couldn’t figure out a way to transfer money to it, though. The app purports to accept payment cards, but tapping on the “Use Card” button triggers a loading animation that quickly times out.
I could go on. But suffice it to say that PayTalk is in the very earliest stages of development. I began to think the app had been released prematurely, but PayTalk’s official Twitter account has been advertising it for at least the past few months.
Perhaps PayTalk will eventually grow into the shoes of the pitch Tawo gave me, so to speak — Wiscount is kicking off a four-month tenure at the Black Founders Build with Alexa Program. In the meantime, it must be pointed out that Alexa, Google Assistant and Siri are already capable of handling much of what PayTalk promises to one day accomplish.


“With the potential $100,000 investment [from the Black Founders Build with Alexa Program], we will seek to raise a seed round to expand our product offerings to include features that would allow customers to seamlessly carry out e-commerce and financial transactions on voice service-powered devices,” Tawo said. “PayTalk is mainly a business-to-consumer platform. However, as we continue to innovate and integrate voice-activated options … we see the potential to support enterprise use cases by replacing and automating the lengthy form filling processes that are common for many industries like healthcare.”
Hopefully, the app’s basic capabilities get attention before anything else.

How Niantic evolved Pokémon GO for the year no one could go anywhere

Pokémon GO was created to encourage players to explore the world while coordinating impromptu large group gatherings — activities we’ve all been encouraged to avoid since the pandemic began.
And yet, analysts estimate that 2020 was Pokémon GO’s highest-earning year yet.


Niantic’s approach to 2020 was full of carefully considered changes, and I’ve highlighted many of their key decisions below.
Consider this something of an addendum to the Niantic EC-1 I wrote last year, where I outlined things like the company’s beginnings as a side project within Google, how Pokémon Go began as an April Fools’ joke and the company’s aim to build the platform that powers the AR headsets of the future.
Hit the brakes
On a press call outlining an update Niantic shipped in November, the company put it in no uncertain terms: the roadmap they’d followed over the last ten or so months was not the one they started the year with. Their original roadmap included a handful of new features that have yet to see the light of day. They declined to say what those features were, of course (presumably because they still hope to launch them once the world is less broken) — those features just didn’t make sense to release right now.
Instead, as any potential end date for the pandemic slipped further into the horizon, the team refocused in Q1 2020 on figuring out ways to adapt what already worked and adjust existing gameplay to let players do more while going out less.
Turning the dials
As its name indicates, GO was never meant to be played while sitting at home. John Hanke’s initial vision for Niantic was focused on finding ways to get people outside and playing together; from its very first prototype, Niantic had players running around a city to take over its virtual equivalent block by block. They’d spent nearly a decade building up a database of real-world locations that would act as in-game points meant to encourage exploration and wandering. Years of development effort went into turning Pokémon GO into more and more of a social game, requiring teamwork and sometimes even flash mob-like meetups for its biggest challenges.
Now it all needed to work from the player’s couch.
The earliest changes were those that were easiest for Niantic to make on-the-fly, but they had dramatic impacts on the way the game actually works.
Some of the changes:

Doubling the players’ “radius” for interacting with in-game gyms — landmarks that players can temporarily take over for their in-game team, earning occupants a bit of in-game currency based on how long they maintain control. This change let more gym battles happen from the couch.
Increasing spawn points, generally upping the number of Pokémon you could find at home dramatically.
Increasing “incense” effectiveness, which allowed players to use a premium item to encourage even more Pokémon to pop up at home. Niantic phased this change out in October, then quietly reintroduced it in late November. Incense would also last twice as long, making it cheaper for players to use.
Allowing steps taken indoors (read: on treadmills) to count toward in-game distance challenges.
Players would no longer need to walk long distances to earn entry into the online player-versus-player battle system.
Your “buddy” Pokémon (a specially designated Pokémon that you can level up Tamagotchi-style for bonus perks) would now bring you more gifts of items you’d need to play. Pre-pandemic, getting these items meant wandering to the nearby “Pokéstop” landmarks.

By twisting some knobs and tweaking variables, Pokémon GO became much easier to play without leaving the house — but, importantly, these changes avoided anything that might break the game while being just as easy to reverse once it became safe to do so.
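That reversibility is easy to picture as a set of configuration overrides layered on top of baseline values: each pandemic-era change is a pure override, so reverting means simply dropping the override set. A minimal sketch — the parameter names and numbers here are assumptions for illustration, not Niantic's actual configuration system:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GameplayConfig:
    """Hypothetical remotely tunable gameplay 'dials' (illustrative)."""
    gym_interaction_radius_m: float = 40.0
    spawn_rate_multiplier: float = 1.0
    incense_duration_min: int = 30
    indoor_steps_count: bool = False

BASELINE = GameplayConfig()

# Stay-at-home overrides mirroring the changes listed above.
# Because BASELINE is untouched, rolling back is a one-line revert.
STAY_AT_HOME = replace(
    BASELINE,
    gym_interaction_radius_m=BASELINE.gym_interaction_radius_m * 2,  # doubled radius
    spawn_rate_multiplier=2.0,      # more Pokémon spawning near home
    incense_duration_min=60,        # incense lasts twice as long
    indoor_steps_count=True,        # treadmill steps count toward challenges
)

print(STAY_AT_HOME.gym_interaction_radius_m)  # 80.0
```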
GO Fest goes virtual

Like this, just … online. Image Credits: Greg Kumparak

Thrown by Niantic every year since 2017, GO Fest is meant to be an ultra-concentrated version of the Pokémon GO experience. Thousands of players cram into one park, coming together to tackle challenges and capture previously unreleased Pokémon.

How Niantic evolved Pokémon GO for the year no one could go anywhere

iPhones can now tell blind users where and how far away people are

Apple has packed an interesting new accessibility feature into the latest beta of iOS: a system that detects the presence of and distance to people in the view of the iPhone’s camera, so blind users can social distance effectively, among many other things.
The feature emerged from Apple’s ARKit, for which the company developed “people occlusion,” which detects people’s shapes and lets virtual items pass in front of and behind them. The accessibility team realized that this, combined with the accurate distance measurements provided by the lidar units on the iPhone 12 Pro and Pro Max, could be an extremely useful tool for anyone with a visual impairment.
Of course, during the pandemic one immediately thinks of the idea of keeping six feet away from other people. But knowing where others are and how far away they are is a basic visual task that we use all the time to plan where we walk, which line we get in at the store, whether to cross the street and so on.

The new feature, which will be part of the Magnifier app, uses the lidar and wide-angle camera of the Pro and Pro Max, giving feedback to the user in a variety of ways.

The lidar in the iPhone 12 Pro shows up in this infrared video. Each dot reports back the precise distance of what it reflects off of.

First, it tells the user whether there are people in view at all. If someone is there, it will then say how far away the closest person is in feet or meters, updating regularly as they approach or move further away. The sound corresponds in stereo to the direction the person is in the camera’s view.
Second, it allows the user to set tones corresponding to certain distances. For example, if they set the distance at six feet, they’ll hear one tone if a person is more than six feet away, another if they’re inside that range. After all, not everyone wants a constant feed of exact distances if all they care about is staying two paces away.
The third feature, perhaps extra useful for folks who have both visual and hearing impairments, is a haptic pulse that goes faster as a person gets closer.
Last is a visual feature for people who need a little help discerning the world around them, an arrow that points to the detected person on the screen. Blindness is a spectrum, after all, and any number of vision problems could make a person want a bit of help in that regard.
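Taken together, the speech, tone and haptic modes amount to mapping one measured distance onto several feedback channels. Here's a minimal sketch of that mapping, assuming a user-set threshold of roughly six feet; the function name and exact numbers are illustrative, not Apple's implementation.

```python
from typing import Optional

def feedback(distance_m: Optional[float], threshold_m: float = 1.8) -> dict:
    """Toy model of tiered proximity feedback: a tone that changes at a
    user-set threshold, plus a haptic pulse that speeds up as a person
    gets closer. Illustrative only."""
    if distance_m is None:  # no person detected in frame
        return {"tone": None, "haptic_interval_s": None}
    tone = "near" if distance_m <= threshold_m else "far"
    # Pulse faster as the person approaches, clamped to sane bounds
    interval = max(0.1, min(1.0, distance_m / 5.0))
    return {"tone": tone, "haptic_interval_s": interval}

print(feedback(3.0))  # "far" tone, slower pulse
print(feedback(1.0))  # "near" tone, faster pulse
```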


The system requires a decent image on the wide-angle camera, so it won’t work in pitch darkness. And while the restriction of the feature to the high end of the iPhone line reduces the reach somewhat, the constantly increasing utility of such a device as a sort of vision prosthetic likely makes the investment in the hardware more palatable to people who need it.
Here’s how it works so far:

Here’s how people detection works in iOS 14.2 beta – the voiceover support is a tiny bit buggy but still super cool https://t.co/vCyX2wYfx3 pic.twitter.com/e8V4zMeC5C
— Matthew Panzarino (@panzer) October 31, 2020

This is far from the first tool like this — many phones and dedicated devices have features for finding objects and people, but it’s not often that it comes baked in as a standard feature.
People detection should be available on the iPhone 12 Pro and Pro Max running the iOS 14.2 release candidate that was just made available today. Details will presumably appear soon on Apple’s dedicated iPhone accessibility site.



Accessibility’s next-gen breakthroughs will be literally in your head

Jim Fruchterman
Contributor

Jim Fruchterman is the founder of Tech Matters and Benetech, nonprofit developers of technology for social good.


Predicting the future of technology for people with visual impairments is easier than you might think. In 2003, I wrote an article entitled “In the Palm of Your Hand” for the Journal of Visual Impairment & Blindness from the American Foundation for the Blind. The arrival of the iPhone was still four years away, but I was able to confidently predict the center of assistive technology shifting from the desktop PC to the smartphone.
“A cell phone costing less than $100,” I wrote, “will be able to see for the person who can’t see, read for the person who can’t read, speak for the person who can’t speak, remember for the person who can’t remember, and guide the person who is lost.” Looking at the tech trends at the time, that transition was as inevitable as it might have seemed far-fetched.
We are at a similar point now, which is why I am excited to play a part in Sight Tech Global, a virtual event Dec. 2-3 that is convening the top technologists to discuss how AI and related technologies will usher in a new era of remarkable advances for accessibility and assistive tech, in particular for people who are blind or visually impaired.
To get to the future, let me turn to the past. I was walking around the German city of Speyer in the 1990s with pioneering blind assistive tech entrepreneur Joachim Frank. Joachim took me on a flight of fancy about what he really wanted from assistive technology, as opposed to what was then possible. He quickly highlighted three stories of how advanced tech could help him as he was walking down the street with me. 

As I walk down the street and pass a supermarket, I do not want it to read all of the signs in the window. However, if one of the signs notes that Kasseler Rippchen (smoked pork chops, his favorite) are on sale, and the price is particularly good, I would like that whispered in my ear.
And then, as a young woman approaches me walking in the opposite direction, I’d like to know if she’s wearing a wedding ring.
Finally, I would like to know that someone has been following me for the last two blocks, that he is a known mugger, and that if I quicken my walking speed, go fifty meters ahead, turn right, and go another seventy meters, I will arrive at a police substation! 

Joachim blew my mind. In one short walk, he outlined a far bolder vision of what tech could do for him, without bogging down in the details. He wanted help with saving money, meeting new friends and keeping himself safe. He wanted abilities which not only equaled what people with normal vision had, but exceeded them. Above all, he wanted tools which knew him and his desires and needs. 
We are nearing the point where we can build Joachim’s dreams.  It won’t matter if the assistant whispers in your ear, or uses a direct neural implant to communicate. We will probably see both. But, the nexus of tech will move inside your head, and become a powerful instrument for equality of access. A new tech stack with perception as a service. Counter-measures to outsmart algorithmic discrimination. Tech personalization. Affordability. 
That experience will be built on an ever more application-rich and readily available technology stack in the cloud. As all that gets cheaper and cheaper to access, product designers can create and experiment faster than ever. At first, it will be expensive, but not for long, as adoption – probably by far more than simply disabled people – drives down the price. I started my career in tech for the blind by introducing a reading machine that was a big deal because it halved the price of that technology to $5,000. Today even better OCR is a free app on any smartphone.
We could dive into more details of how we build Joachim’s dreams and meet the needs of millions of other individuals with vision disabilities. But it will be far more interesting to explore with the world’s top experts at Sight Tech Global on Dec. 2-3 how those tech tools will become enabled in your head!
Registration is free and open to all. 
