Chainlink Wants to Solve the Problem of Hallucinated Results from AI Applications

Chainlink is implementing a new technique to address a significant problem in artificial intelligence: hallucinating AI systems. When large language models misinterpret data or generate incorrect new data, the consequences can be costly, especially in finance. Instead of relying on a single AI model, Chainlink is now taking a multi-model approach, using AI systems from OpenAI, Google, and Anthropic.
Laurence Moroney, a Chainlink advisor and former head of AI at Google, explained that using multiple AI models instead of just one reduces the error rate. Each AI model is asked individually to analyze the same financial data. The system stores verified data on the blockchain, making it transparent, immutable, and secure. This consensus-based method prevents financial data from being corrupted by misinformation and increases the reliability of AI-generated data.
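Chainlink has not published implementation details, but the consensus step described above can be sketched roughly as follows. This is a minimal illustration, assuming each model's output has already been normalized to a comparable answer string; the function name and the two-vote threshold are hypothetical, not part of Chainlink's system:

```python
from collections import Counter

def consensus_answer(model_outputs, min_votes=2):
    """Return the answer a majority of models agree on, or None.

    model_outputs: list of normalized answer strings, one per AI model
    min_votes: minimum number of agreeing models required (hypothetical threshold)
    """
    if not model_outputs:
        return None
    counts = Counter(model_outputs)
    answer, votes = counts.most_common(1)[0]
    # Only accept the answer if enough models independently produced it
    return answer if votes >= min_votes else None

# Example: three models analyze the same financial data point
print(consensus_answer(["4.2%", "4.2%", "4.1%"]))  # agreement -> "4.2%"
print(consensus_answer(["4.2%", "4.0%", "4.1%"]))  # no agreement -> None
```

In a real deployment, only an answer that clears the consensus check would be written to the blockchain; disagreements would be flagged for review rather than recorded.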
Chainlink’s approach aims to change this by reducing manual data verification and increasing financial accuracy. In a recent collaboration with major financial institutions including UBS, Franklin Templeton, Wellington Management, Vontobel, and Sygnum Bank, Chainlink tested this AI-powered blockchain system. The results were promising, demonstrating a reduction in errors and inefficiencies in financial data.