The Effect of the Coronavirus Pandemic on the Development of Automation and the Future of Jobs

From burger-flipping machines to car-building robots—not to mention high-powered software taking on more and more administrative tasks—it seems like hundreds of skills are rapidly becoming obsolete in the U.S. economy.

Not surprisingly, the economic shock of the coronavirus pandemic has accelerated the development of artificial intelligence and automation. Robots don’t get sick. Information technology (IT) skills are important for workers who want to regain employment after being laid off.




A recent McKinsey study found that AI and deep learning could add $3.5 trillion to $5.8 trillion in annual value for companies.

Deep learning uses several network architectures: deep neural networks (DNNs), deep belief networks, recurrent neural networks, and convolutional neural networks.

A DNN passes data through multiple layers between input and output, each layer learning a mathematical transformation that turns its input into a more useful representation. These networks have applications in computer vision, machine vision, audio and speech recognition, natural language processing, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection, and board game programs, where they can match or exceed human performance. Adding more layers helps a machine learn the features of its input better.
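To make the layered structure concrete, here is a minimal sketch of a forward pass through a tiny fully connected network in NumPy. This is an illustration only, not any production model: the layer sizes and random weights are arbitrary assumptions, and a real network would learn its weights from training data.

```python
import numpy as np

def relu(x):
    # Standard hidden-layer activation: zero out negative values.
    return np.maximum(0.0, x)

def softmax(x):
    # Turn the output layer's raw scores into probabilities.
    e = np.exp(x - x.max())
    return e / e.sum()

def forward(x, layers):
    """Pass input x through each hidden (weights, bias) layer with
    ReLU, then through a softmax output layer to get class
    probabilities."""
    *hidden, output = layers
    for w, b in hidden:
        x = relu(w @ x + b)
    w, b = output
    return softmax(w @ x + b)

rng = np.random.default_rng(0)
sizes = [8, 32, 32, 4]  # input, two hidden layers, output (arbitrary)
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]

probs = forward(rng.standard_normal(8), layers)
print(probs)  # four class probabilities that sum to 1
```

Each `(w, b)` pair is one layer; stacking more pairs into `layers` is exactly what "deeper" means in the paragraph above.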

Some computer networks, such as artificial neural networks (ANNs), loosely emulate the biological neural networks of animal brains. An ANN with far fewer connections than a human brain can still be designed to solve a specific task better than a human.




According to Apple, the “Hey Siri” detector uses a Deep Neural Network (DNN) to convert the acoustic pattern of your voice at each instant into a probability distribution over speech sounds. The DNN then uses a temporal integration process to compute a confidence score that the phrase you uttered was “Hey Siri”. If the score is high enough, Siri wakes up.
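Apple does not publish the exact integration math, but the general idea of turning per-frame probabilities into a single wake decision can be sketched roughly as follows. The window length and threshold below are made-up values, and a simple sliding-window average stands in for whatever temporal integration Apple actually uses:

```python
import numpy as np

def confidence(frame_probs, window=20):
    """Smooth the per-frame probability that the audio matches the
    trigger phrase by averaging over a sliding window, and return
    the best windowed score as the overall confidence."""
    frame_probs = np.asarray(frame_probs, dtype=float)
    if len(frame_probs) < window:
        return frame_probs.mean()
    kernel = np.ones(window) / window
    return np.convolve(frame_probs, kernel, mode="valid").max()

THRESHOLD = 0.85  # made-up operating point

# A burst of high per-frame scores, as if the phrase was spoken.
scores = [0.1] * 30 + [0.95] * 25 + [0.1] * 30
wake = bool(confidence(scores) > THRESHOLD)
print(wake)  # True
```

Averaging over a window, rather than trusting any single frame, is what keeps a brief noise spike from waking the assistant.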

Being able to use Siri without pressing buttons is particularly useful when your hands are busy, such as when cooking or driving, or when using the Apple Watch. An Apple customer can chop red peppers while dictating voice-to-text messages to an iPhone with WhatsApp. According to Apple, the iPhone uses two networks: one for initial detection and another as a secondary checker. The initial detector uses fewer units than the secondary checker, so the first pass acts as a cheap vetting step that minimizes the effect on battery life.

The Apple iPhone’s Always On Processor (AOP) has limited processing power, so it runs a detector with a small version of the acoustic model (DNN). When that detector’s score exceeds a threshold, the motion coprocessor wakes up the main processor, which analyzes the signal using a larger DNN. In the first versions with AOP support, the first detector used a DNN with 5 layers of 32 hidden units and the second detector had 5 layers of 192 hidden units.
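That two-pass design can be mimicked with a simple cascade, where a cheap first-stage score gates a more expensive second-stage check. The stand-in "models" and thresholds below are invented for illustration; real detectors score acoustic features, not raw amplitudes:

```python
def two_pass_detect(audio, small_model, large_model,
                    gate=0.5, confirm=0.9):
    """Run the cheap always-on detector first; only if its score
    clears the gate do we 'wake the main processor' and run the
    larger model for the final decision."""
    if small_model(audio) < gate:
        return False  # stay asleep; the expensive model never runs
    return large_model(audio) >= confirm

# Toy stand-ins: score = fraction of samples above an amplitude cut.
small = lambda a: sum(1 for x in a if abs(x) > 0.2) / len(a)
large = lambda a: sum(1 for x in a if abs(x) > 0.5) / len(a)

quiet = [0.01] * 100
phrase = [0.9] * 95 + [0.01] * 5
print(two_pass_detect(quiet, small, large))   # False
print(two_pass_detect(phrase, small, large))  # True
```

The battery saving comes from the early return: on quiet audio the large model is never evaluated at all.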

“Hey Siri” works in all languages that Siri supports, but “Hey Siri” isn’t necessarily the phrase that starts Siri listening. For instance, French-speaking users say “Dis Siri”, Korean-speaking users say “Siri 야” (sounds like “Siri ya”), Russian-speaking users say “привет Siri” (sounds like “privet Siri”), and Thai-speaking users say “หวัดดี Siri” (sounds like “wadi Siri”).

— Apple

Apple compares any possible new “Hey Siri” utterance with the stored examples as follows. The (second-pass) detector produces timing information that is used to convert the acoustic pattern into a fixed-length vector, by taking the average over the frames aligned to each state. A separate, specially trained DNN transforms this vector into a “speaker space” where, by design, patterns from the same speaker tend to be close, whereas patterns from different speakers tend to be further apart. Apple compares the distances to the reference patterns created during enrollment against another threshold to decide whether the sound that triggered the detector is likely to be “Hey Siri” spoken by the enrolled user who owns the iPhone.
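A crude sketch of that final comparison, assuming we already have fixed-length speaker-space vectors: the enrollment vectors, the Euclidean distance metric, and the threshold below are all invented stand-ins, since Apple does not publish those details.

```python
import numpy as np

def is_enrolled_speaker(utterance_vec, enrolled_vecs, max_dist=0.5):
    """Accept the trigger only if the new utterance's speaker-space
    vector is close enough to at least one enrollment example."""
    utterance_vec = np.asarray(utterance_vec, dtype=float)
    dists = [np.linalg.norm(utterance_vec - np.asarray(v, dtype=float))
             for v in enrolled_vecs]
    return min(dists) <= max_dist

enrolled = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.3]]  # stored at enrollment
same_speaker = [0.95, 0.05, 0.25]              # close to the examples
other_speaker = [-1.0, 2.0, 0.0]               # far away in speaker space

print(is_enrolled_speaker(same_speaker, enrolled))   # True
print(is_enrolled_speaker(other_speaker, enrolled))  # False
```

Because the speaker-space DNN was trained to pull the same voice together and push different voices apart, a plain distance threshold like this is enough to reject most other speakers.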

The process reduces the probability that “Hey Siri” spoken by another person will trigger the iPhone, and also reduces the rate at which other, similar-sounding phrases trigger Siri.

Think about it: the success of new Apple iPhone features that make life easier and more efficient is likely directly proportional to the number of people losing jobs to automation, or needing to retrain for information technology roles, as companies use artificial intelligence to replace human workers.

See also …

Apple Machine Learning Research | Hey Siri: An On-device DNN-powered Voice Trigger for Apple’s Personal Assistant

THANKS FOR READING CARDINAL NEWS …






Please ‘LIKE’ the Arlington Cardinal Facebook page. See all of The Cardinal Facebook fan pages at Arlingtoncardinal.com/about/facebook …


Help fund The Cardinal Arlingtoncardinal.com/sponsor

