Google announces its own custom chip, Tensor

Google has developed a bespoke system-on-chip (SoC), Tensor, to power its Pixel phones. "So thrilled to share our new custom Google Tensor chip, which has been 4 years in the making (for scale!). Tensor builds on our 2 decades of computing experience and it's our biggest innovation in Pixel to date. Will be on Pixel 6 + Pixel 6 Pro in the fall", CEO Sundar Pichai tweeted.
After Apple (M1), Huawei (Kirin), and Samsung (Exynos), Google has now joined the in-house SoC club. The benefits of Tensor include:
- More computing power: Google Pixel relies on computational photography and ML to capture images (Night Sight, for example), and the tech giant has also brought powerful voice recognition models to its devices. These features require high computing power and low latency for the best performance. Tensor can bring complex AI innovations to Pixel smartphones (a minimal on-device inference sketch follows this list).
- Unlock new AI features: Tensor chips give Google the freedom to incorporate new ML-based features without worrying about performance. Robust processors are a prerequisite for running heavy AI workloads.
- More layers for physical security: The new Tensor Security Core with Titan M2 will function as an additional layer of security. Google’s Titan M is a custom-designed chip to protect sensitive data such as passwords, enable encryption, and secure in-app transactions.
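Under the hood, features like these typically run compact neural networks directly on the phone. The snippet below is a minimal, generic sketch of on-device inference using TensorFlow Lite's Python interpreter; `model.tflite` is a placeholder for any quantized image or speech model, and Google's actual Pixel models and runtime are not public.

```python
# Generic on-device inference sketch with TensorFlow Lite.
# "model.tflite" is a placeholder model file, not a Google-provided asset.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

# Read back the model's output tensor.
print(interpreter.get_tensor(output_details[0]["index"]).shape)
```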
The recent release of the first Android 12 beta gave Pixel users customizable features such as the notification shade, volume controls, and lock screen. New features in Android 12 also provide more transparency over which apps access which data, and more control for users to make informed decisions about how much private information apps can access.
The processor is crucial to a phone's performance and battery life. Despite owning the Android OS, Google has never managed to capture a significant share of the smartphone hardware market. With the brand-new Tensor chips, the Mountain View giant is looking to revitalize its smartphone segment. Lately, Google has delivered a litany of innovations in artificial intelligence and machine learning.
Image credits: Google
Google's LaMDA: The Language Model for Dialogue Applications is built on Transformer, an open-source neural network architecture released by Google Research in 2017 that also underpins many current language models such as BERT and GPT-3. At present, it is trained only on text, but it may have future applications in conversational AI, Google Maps, and more.
AI in Google Maps: Google Maps is getting two new AI-powered features: eco-friendly routing, which suggests the most fuel-efficient route to users, and safer routing, which accounts for weather and traffic conditions in real time.
Vertex AI: A managed ML platform for deploying and maintaining AI models. The platform enables users to build, deploy, and scale machine learning models faster, using pre-trained and custom tooling within a unified AI platform. It also integrates with open-source frameworks including TensorFlow, scikit-learn, and PyTorch.
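As a rough illustration, the sketch below uses the Vertex AI Python SDK (`google-cloud-aiplatform`) to upload a saved model and request an online prediction. The project ID, region, bucket path, and serving container image are placeholders, and the exact setup will vary by model type.

```python
# A minimal sketch (not Google's official example) of deploying a model
# with the Vertex AI Python SDK. All identifiers below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Upload a saved model artifact (e.g. a TensorFlow SavedModel in Cloud Storage)
# and wrap it in a prebuilt serving container (example image URI).
model = aiplatform.Model.upload(
    display_name="demo-model",
    artifact_uri="gs://my-bucket/saved_model/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest"
    ),
)

# Deploy the model to a managed endpoint and ask for an online prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
prediction = endpoint.predict(instances=[[0.1, 0.2, 0.3]])
print(prediction.predictions)
```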
Little Patterns: The tech giant introduced a new feature in Google Photos that uses machine learning to translate photos into a series of numbers, which it then compares for visual and conceptual similarity.
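Google has not published how this feature works internally, but the general technique it describes, turning each image into an embedding vector and comparing vectors with cosine similarity, can be sketched as follows. MobileNetV2 here is only a stand-in feature extractor, and the photo filenames are placeholders.

```python
# Generic embedding-similarity sketch: embed each photo with a pretrained
# network, then score visual similarity with cosine similarity.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

# Pretrained backbone used as a feature extractor (global-average-pooled output).
embedder = MobileNetV2(include_top=False, pooling="avg", weights="imagenet")

def embed(path: str) -> np.ndarray:
    """Load an image file and return its embedding vector."""
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)
    x = preprocess_input(x[np.newaxis, ...])
    return embedder.predict(x, verbose=0)[0]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Photos whose embeddings are close score as visually/conceptually similar.
sim = cosine_similarity(embed("photo_a.jpg"), embed("photo_b.jpg"))
print(f"similarity: {sim:.3f}")
```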
MUM: The Multitask Unified Model is Google's new AI algorithm, built on a Transformer architecture and trained across 75 different languages. MUM can understand information across text and images, and it is expected to expand to audio and video in the future.
