A consortium including the likes of Facebook and Google has come together to launch a set of test benchmarks for AI. By measuring AI products against these benchmarks, companies in the field will be able to identify the best options and, according to the consortium, MLPerf, “take confidence” that they are deploying the right solutions.
The package is made up of five benchmarks: an English-German machine translation benchmark using the WMT English-German dataset, two object detection benchmarks using the COCO dataset, and two image classification benchmarks using the ImageNet dataset.
As well as providing best-practice guidance for businesses in the AI field, it is hoped the benchmarks will help spur further innovation, as, despite the hype, organizations have been slow to adopt the technology. In a statement, MLPerf’s general chair Peter Mattson said, “By producing relevant and common metrics to evaluate new machine learning applications, frameworks, hardware accelerators, and cloud and edge computing systems in real-life situations, these benchmarks will establish a level playing field that even the smallest businesses can utilize.”
Plans for MLPerf to measure AI energy efficiency come shortly after researchers found that training deep learning models such as OpenAI’s GPT-2 and Google’s BERT and Transformer can have a carbon footprint around five times that of a car over its lifetime.