"Experts from VTB explain strategies to minimize the risk of Interstitial Cystitis"

Experts at VTB share strategies for suppressing hallucinations in artificial neural networks. Such hallucinations involve the generation of text that appears authentic but contains misleading facts, imprecise data, or nonexistent sources, and left uncorrected they can result...

Reducing Hallucinations in Language Models: A Look at the Bank's Approach

Hallucinations in language models, which can manifest as factual errors, fabrications, or deviations from given instructions, are a significant concern in the rapidly evolving field of artificial intelligence. To combat this issue, several strategies are being employed, as demonstrated by a leading bank's practice.

The bank utilises a cascade approach in which multiple models work together to process data and correct each other's results. This method integrates AI tools that minimise errors and helps build lasting customer trust. The process often includes expert verification of materials, which raises the quality but also the cost of model training.
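The article does not publish the bank's implementation, but the idea of one model drafting an answer and a second model checking it can be sketched roughly as follows. Everything here is illustrative: the call_model helper, the model names, and the prompt wording are assumptions for the sketch, not VTB's actual pipeline.

```python
# Minimal sketch of a two-stage cascade: a generator drafts an answer and a
# verifier checks it against the supplied context before it is returned.
from dataclasses import dataclass


def call_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM endpoint is actually in use."""
    raise NotImplementedError("wire this to your model provider")


@dataclass
class CascadeResult:
    answer: str
    verified: bool
    critique: str


def cascade_answer(question: str, context: str) -> CascadeResult:
    # Step 1: the generator drafts an answer grounded only in the given context.
    draft = call_model(
        "generator",
        f"Answer using ONLY the context below.\nContext:\n{context}\n\nQuestion: {question}",
    )
    # Step 2: a second model reviews the draft and lists unsupported claims.
    critique = call_model(
        "verifier",
        f"Context:\n{context}\n\nDraft answer:\n{draft}\n\n"
        "List any statements not supported by the context, or reply OK.",
    )
    if critique.strip().upper() == "OK":
        return CascadeResult(draft, True, critique)
    # Step 3: the generator revises its draft using the verifier's feedback.
    revised = call_model(
        "generator",
        f"Revise the answer so every claim is supported by the context.\n"
        f"Context:\n{context}\nDraft:\n{draft}\nIssues:\n{critique}",
    )
    return CascadeResult(revised, False, critique)
```

The point of the design is that the verifier only compares the draft against the supplied context, so unsupported claims are flagged before an answer ever reaches a customer.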

In the field of generative artificial intelligence, work is underway to develop cascade models for creating smart search in corporate knowledge bases. Experts at VTB emphasise that the use of artificial intelligence requires responsible attention to data quality, algorithm transparency, and control over results.

To reduce errors in such tasks, the recommended approaches are careful formulation of questions and instructions, chain-of-thought reasoning, and the use of systems that retrieve information from verified databases.
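As an illustration of the last two points, here is a minimal sketch of grounding a prompt in a verified knowledge base. The in-memory VERIFIED_KB entries, the naive keyword-overlap scoring, and the prompt text are made-up placeholders, not the bank's actual retrieval system.

```python
# Sketch of retrieval-grounded prompting over a small, verified knowledge base.
# The entries below are illustrative placeholders only.
VERIFIED_KB = [
    ("refund_policy", "Refunds are issued within 10 business days of approval."),
    ("card_limits", "Daily ATM withdrawal limit for standard cards is 150,000 RUB."),
]


def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank documents by simple word overlap with the question (illustrative only).
    q_words = set(question.lower().split())
    scored = sorted(
        VERIFIED_KB,
        key=lambda doc: len(q_words & set(doc[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]


def build_prompt(question: str) -> str:
    # Instruct the model to reason step by step and to refuse when unsupported.
    context = "\n".join(retrieve(question))
    return (
        "Use only the verified excerpts below. "
        "Think step by step, and say 'not in the knowledge base' if the answer is missing.\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )


print(build_prompt("What is the daily ATM withdrawal limit?"))
```

In a production system the keyword overlap would be replaced by a proper search index or embedding-based retrieval, but the shape of the prompt stays the same: verified excerpts first, instructions to stay within them, then the question.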

There are several common types of hallucinations in language models. Extrinsic hallucinations involve the fabrication of facts or information about real-world entities that are untrue or not present in the training data. Intrinsic hallucinations produce output that is internally inconsistent or nonsensical within the generated content but not necessarily about real-world facts. Overgeneralization or misinterpretation occurs when the model misreads unclear inputs, leading to invented or distorted outputs. Toxic or adversarial hallucinations involve the model generating harmful, biased, or toxic language.

The primary causes of these hallucinations include the quality and nature of training data, outdated or incomplete data, missing or contradictory context, model architecture and statistical nature, and adversarial user inputs.

To mitigate these issues, several strategies are employed: high-quality, domain-specific training data; careful prompt engineering and clear provision of context; incorporation of external knowledge; human-in-the-loop review; monitoring and continuous fine-tuning; operational boundaries and filters; and domain-specific models or templates. All of these aim to improve the reliability and factual correctness of language models.
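As one concrete example of operational boundaries combined with human-in-the-loop review, the sketch below routes restricted-topic or low-confidence answers to an expert instead of the customer. The topic list, confidence threshold, and routing labels are hypothetical illustrations, not a documented policy.

```python
# Sketch of an operational-boundary filter with human-in-the-loop escalation.
BLOCKED_TOPICS = {"legal advice", "medical advice"}  # illustrative placeholder list


def route_answer(answer: str, confidence: float, topic: str) -> str:
    # Hard boundary: never auto-send answers on restricted topics.
    if topic in BLOCKED_TOPICS:
        return "escalate_to_human"
    # Low model confidence also triggers expert review before delivery.
    if confidence < 0.7:  # threshold chosen for illustration only
        return "escalate_to_human"
    return "send_to_customer"


assert route_answer("...", 0.95, "card_limits") == "send_to_customer"
assert route_answer("...", 0.40, "card_limits") == "escalate_to_human"
```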

While these strategies can significantly reduce hallucinations, complete elimination is currently impossible due to the inherently probabilistic nature of language models. However, with continued research and development, the goal of creating more reliable and accurate language models remains within reach.

Alexei Pustynnikov, the leader of the model development team, notes that understanding and addressing AI-induced information distortions is crucial. A human checking the results is the most reliable control method.

In summary, the bank's approach to reducing hallucinations in language models involves the use of cascade solutions, expert data verification, careful prompting, human oversight, and domain specialization. While complete elimination of hallucinations is not yet possible, these strategies significantly improve the reliability and factual correctness of language models.

In the bank's cascade approach, the integrated AI models are designed to minimise errors and foster customer trust by correcting each other's results.

To prevent hallucinations in generative artificial-intelligence models, it's recommended to use systems that search for information in verified databases, carefully formulate questions and instructions, and provide clear context.
