CAPTCHAs are a common test on the web and are designed to differentiate between actual users and bots. Google today introduced reCAPTCHA v3, hoping to reduce the number of challenges a user might see.
As bots that try to pass as actual humans become more sophisticated, Google has released new versions of its reCAPTCHA API. The most recognized version of this test involves typing in distorted text, with more recent forms asking users to identify objects in images.
Over the last decade, reCAPTCHA has continuously evolved its technology. In reCAPTCHA v1, every user was asked to pass a challenge by reading distorted text and typing it into a box. To improve both user experience and security, Google introduced reCAPTCHA v2 and began to use many other signals to determine whether a request came from a human or a bot. This moved reCAPTCHA challenges from a dominant to a secondary role in detecting abuse, letting about half of users pass with a single click.
At the same time, Google has also tried to make the challenge less obtrusive by using various signals to determine whether the user was authentic or fake.
With reCAPTCHA v3, Google is improving the experience even more, with the API returning a score between 0.0 and 1.0 that ranks “how suspicious an interaction is.” The goal is to minimize the “need to interrupt users with challenges at all.”
This new system works by placing reCAPTCHA v3 on more pages — not just the login box — and running “adaptive risk analysis in the background to alert you of suspicious traffic.”
In this way, the reCAPTCHA adaptive risk analysis engine can identify attacker patterns more accurately by looking at activity across different pages on your website. In the reCAPTCHA admin console, you get a full overview of the reCAPTCHA score distribution and a breakdown of stats for the top 10 actions on your site, helping you identify exactly which pages are being targeted by bots and how suspicious the traffic on those pages was.
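Acting on that score server-side means verifying the token the client obtained and reading the `score` field from Google's siteverify response. The helper below is a minimal sketch of that check; the `success` and `score` fields follow the documented siteverify JSON, while the function name and the 0.5 default threshold are illustrative assumptions.

```python
# Sketch: interpret a reCAPTCHA v3 siteverify response.
# `success` and `score` are part of Google's documented siteverify JSON;
# the helper name and the 0.5 default threshold are illustrative.

def is_likely_human(siteverify_response: dict, threshold: float = 0.5) -> bool:
    """Return True if the token verified and the score clears the threshold."""
    if not siteverify_response.get("success", False):
        return False  # token was invalid, expired, or malformed
    # 1.0 means the interaction is very likely legitimate, 0.0 very likely a bot.
    return siteverify_response.get("score", 0.0) >= threshold

print(is_likely_human({"success": True, "score": 0.9}))   # True
print(is_likely_human({"success": True, "score": 0.1}))   # False
print(is_likely_human({"success": False, "score": 0.9}))  # False
```

In practice the response dict would come from an HTTPS POST to Google's siteverify endpoint with your secret key and the client token.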
The new API is also more customizable, with sites able to determine how to fight spam and abuse:
- First, you can set a threshold that determines when a user is let through or when further verification is needed, such as two-factor authentication or phone verification.
- Second, you can combine the score with your own signals that reCAPTCHA can’t access—such as user profiles or transaction histories.
- Third, you can use the reCAPTCHA score as one of the signals to train your machine learning model to fight abuse.
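Putting the first two options together, a site might route requests into bands by score and fold in a signal of its own. The sketch below is a hedged illustration under assumed names: `route_request`, `account_age_days`, and the thresholds are all hypothetical, not part of the reCAPTCHA API.

```python
# Sketch: combine the reCAPTCHA v3 score (0.0–1.0) with a site-specific
# signal to decide how to handle a request. All names and thresholds are
# illustrative assumptions, not part of the reCAPTCHA API.

def route_request(score: float, account_age_days: int) -> str:
    """Return 'allow', 'verify' (step-up check, e.g. 2FA), or 'block'."""
    # Trust established accounts slightly more than brand-new ones —
    # an example of mixing in a signal reCAPTCHA can't see.
    adjusted = score + (0.1 if account_age_days > 30 else 0.0)
    if adjusted >= 0.7:
        return "allow"
    if adjusted >= 0.3:
        return "verify"  # step-up verification instead of a hard block
    return "block"

print(route_request(0.9, 365))  # allow
print(route_request(0.5, 0))    # verify
print(route_request(0.1, 0))    # block
```

The middle "verify" band is the key design choice: borderline scores trigger extra verification rather than an outright block, so real users caught by a false positive still have a way through.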