
Google AI details how the Pixel 3 captures and chooses Top Shot photos




Top Shot is one of the many AI-powered camera features that Google introduced with the Pixel 3. Google AI has detailed how the functionality works and what traits your phone looks for when it suggests an alternate frame.

At a high level, Top Shot saves and analyzes frames from the 1.5 seconds before and after you press the shutter button. Up to 90 images are captured, with the Pixel 3 choosing up to two to save as high-resolution shots.

The shutter frame is processed and saved first, with the best alternative shots saved afterwards. Google leverages the Pixel 3's Visual Core to process these top alternates as HDR+ images with a small amount of extra latency, and embeds them in the Motion Photo file.
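As a rough illustration of that capture flow, the Python sketch below keeps a rolling buffer of preview frames and, once the shutter fires and the window completes, saves the shutter frame first and then the two highest-scoring alternates. Every name here (TopShotBuffer, quality_score, the 30 fps rate) is an assumption for illustration, not Google's published code.

```python
from collections import deque

# Hypothetical sketch of the Top Shot capture flow described above.
# The frame rate, buffer length, and all names are assumptions.
FPS = 30
WINDOW_SECONDS = 1.5  # frames kept before and after the shutter press
BUFFER_SIZE = int(2 * WINDOW_SECONDS * FPS)  # ~90 frames total

class TopShotBuffer:
    def __init__(self):
        # Rolling buffer of recent low-resolution preview frames.
        self.frames = deque(maxlen=BUFFER_SIZE)

    def on_preview_frame(self, frame):
        self.frames.append(frame)

    def on_shutter(self, shutter_frame, quality_score):
        # Assumed to run once the full window (1.5 s before and after
        # the press) has been buffered. The shutter frame is always
        # processed and saved first.
        saved = [("shutter", shutter_frame)]
        # Rank buffered alternates by a per-frame quality score (a
        # stand-in for the learned model described later on).
        ranked = sorted(self.frames, key=quality_score, reverse=True)
        # Up to two alternates get reprocessed in high resolution
        # (HDR+ on the Visual Core) and embedded in the Motion Photo.
        saved += [("alternate", alt) for alt in ranked[:2]]
        return saved
```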

Leveraging its work on Google Clips, the company created a computer vision model to recognize three key attributes associated with "best moments":

  1. Functional qualities, like lighting
  2. Objective attributes (are the subject's eyes open, are they smiling?)
  3. Subjective qualities, like emotional expressions

Our network design detects low-level visual attributes in early layers, like whether the subject is blurry, and then dedicates additional compute and parameters toward more complex objective attributes (like whether the subject's eyes are open) and subjective attributes (like whether there is an emotional expression of amusement or surprise).
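That layered design maps naturally onto a shared-trunk network with separate heads. The PyTorch sketch below is a minimal, hypothetical rendering of the idea; the layer sizes and specific attribute heads are invented for illustration, since Google's actual architecture isn't public.

```python
import torch.nn as nn

# Hypothetical multi-head network in the spirit of the quote above:
# cheap early layers score low-level attributes, deeper layers add
# compute for objective and subjective attributes. Sizes are invented.
class MomentAttributeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Early layers: low-level visual attributes (blur, lighting).
        self.early = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.low_level_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )  # e.g. "is the subject blurry?"
        # Deeper layers: extra parameters for harder attributes.
        self.deep = nn.Sequential(
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.objective_head = nn.Linear(64, 2)   # eyes open, smiling
        self.subjective_head = nn.Linear(64, 2)  # amusement, surprise

    def forward(self, x):
        early = self.early(x)
        deep = self.deep(early)
        return {
            "low_level": self.low_level_head(early),
            "objective": self.objective_head(deep),
            "subjective": self.subjective_head(deep),
        }
```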

Google notes that Top Shot prioritizes face analysis, but the company also worked on identifying good shots where "faces are not the main subject of the scene." It created additional metrics for the overall quality of each frame (a rough code sketch follows the list):

  • Subject motion saliency score: a low-resolution optical flow between the current frame and the previous frame is estimated in the ISP to determine whether there is salient object motion in the scene.
  • Global motion blur score: estimated from the camera motion and the exposure time. The camera motion is calculated from sensor data from the gyroscope and OIS (optical image stabilization).
  • "3A" scores: the status of auto exposure, auto focus, and auto white balance is also factored in.
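Purely as a sketch of what such metrics could look like, the stand-in formulas and function names below are assumptions, chosen only to show the idea; they are not Google's implementation.

```python
import numpy as np

# Illustrative stand-ins for the three frame metrics listed above.

def subject_motion_saliency(flow):
    """flow: low-resolution optical flow field (H, W, 2), as would be
    estimated in the ISP. Salient object motion shows up as large
    flow magnitudes."""
    magnitude = np.linalg.norm(flow, axis=-1)
    return float(magnitude.mean())

def global_motion_blur(angular_rate_rad_s, exposure_s, focal_px):
    """Approximate blur in pixels: residual angular camera velocity
    (from gyroscope/OIS data) times exposure time and focal length."""
    return float(angular_rate_rad_s * exposure_s * focal_px)

def three_a_score(ae_converged, af_locked, awb_converged):
    """Fraction of the '3A' blocks (auto exposure, auto focus, auto
    white balance) that have settled for this frame."""
    return sum([ae_converged, af_locked, awb_converged]) / 3.0
```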

These individual scores were used to train a model that predicts an overall quality score, matched against human raters' frame preferences, in order to maximize end-to-end product quality.
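The article doesn't say which model Google trained, but a common way to fit a single score to human pairwise preferences is a Bradley-Terry-style logistic model; the sketch below assumes that form, with invented feature vectors.

```python
import numpy as np

# Minimal sketch, assuming a linear combiner fit to pairwise human
# preferences (Bradley-Terry style). The feature choice and loss are
# illustrative; Google's actual training setup is not public.

def preference_loss(w, feats_a, feats_b, a_preferred):
    """Negative log-likelihood that raters prefer frame A over B,
    when each frame's quality is a linear score of its metrics."""
    score_a, score_b = feats_a @ w, feats_b @ w
    p_a = 1.0 / (1.0 + np.exp(score_b - score_a))  # sigmoid of score gap
    return -np.log(p_a if a_preferred else 1.0 - p_a)
```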

Throughout the development process, Google worked to make sure the feature matched what users perceived as the best shot. It gathered data from hundreds of volunteers and asked them which frames looked best. Other steps included accounting for blurred faces and making sure different types of faces were handled properly.
