Hallucination and Alignment Limiting Transformer

Author: Atharv Katkar (LinkedIn)

Artificial intelligence has transformed how we access information and make decisions. Yet, a persistent challenge remains: hallucination—when AI confidently generates incorrect or fabricated information. Enter HALT (Hallucination and Alignment Limiting Transformer), a novel architecture designed to dramatically reduce hallucinations while preserving AI alignment and personality.

Prerequisites:

LLM: Large Language Model (e.g., GPT-5, Claude, Mistral)

Train.json: A data file used to fine-tune an LLM, formatted as instruction/output pairs. This is a second stage of training, applied after the model's first training on sentence arrangement and word understanding (a minimal sketch follows these definitions).

Hallucination: the generation of false, inaccurate, or nonsensical information that is presented as factual and coherent. A dream, perhaps.
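To make the instruction/output format concrete, here is a minimal sketch of what a Train.json file could contain. The field names ("instruction", "output") and the sample pairs are a common fine-tuning convention assumed for illustration, not taken from the original post.

```python
# Illustrative Train.json contents; the field names follow a common
# instruction-tuning convention and are assumed, not confirmed by the post.
import json

examples = [
    {
        "instruction": "Summarize: The Eiffel Tower is in Paris.",
        "output": "The Eiffel Tower is a landmark located in Paris, France.",
    },
    {
        "instruction": "Translate to French: Good morning.",
        "output": "Bonjour.",
    },
]

with open("train.json", "w") as f:
    json.dump(examples, f, indent=2)  # writes the instruction/output pairs
```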

Continue reading “Hallucination and Alignment Limiting Transformer”

Quality of Embeddings & Triplet Loss

Author: Atharv Katkar (LinkedIn)

Directed by: Sandeep Giri

OVERVIEW:

In Natural Language Processing (NLP), embeddings transform human language into numerical vectors. These are typically high-dimensional arrays whose semantic meaning comes from the text corpus the model was trained on. The quality of these embeddings directly affects the performance of search engines, recommendation systems, chatbots, and more.

But here’s the problem:

Not all embeddings are created equal.

So how do we measure their quality?

To identify the quality of embeddings, I conducted an experiment:

I took three leading free pretrained text-to-embedding models, each built on a different approach, gave each the same set of triplets, and computed the triplet loss to compare how well each model captures contextual meaning.
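As a rough sketch of this setup, the code below computes the standard triplet loss, max(0, d(a, p) - d(a, n) + margin), using cosine distance between embeddings. The model name (all-MiniLM-L6-v2), the margin of 0.5, and the sample triplet are illustrative assumptions; this excerpt does not name the three models being compared.

```python
# Minimal sketch of scoring an embedding model with triplet loss.
# The model and triplet below are placeholders, not the author's actual setup.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed example model

# Each triplet: (anchor, positive, negative)
triplets = [
    ("How do I reset my password?",            # anchor
     "Steps to recover a forgotten password",  # positive: same intent
     "Best pasta recipes for dinner"),         # negative: unrelated
]

def triplet_loss(anchor, positive, negative, margin=0.5):
    """max(0, d(a, p) - d(a, n) + margin) with cosine distance."""
    a, p, n = model.encode([anchor, positive, negative])
    cos = lambda u, v: np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    d_ap = 1.0 - cos(a, p)  # anchor-positive distance (should be small)
    d_an = 1.0 - cos(a, n)  # anchor-negative distance (should be large)
    return max(0.0, d_ap - d_an + margin)

losses = [triplet_loss(*t) for t in triplets]
print(f"mean triplet loss: {np.mean(losses):.4f}")  # lower = better embeddings
```

Running the same loop with each candidate model gives a directly comparable score: the model with the lowest mean triplet loss separates related from unrelated text best.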

Continue reading “Quality of Embeddings & Triplet Loss”