You know the feeling: you're racking your brain for a specific memory or piece of information, but it just won't come to you. Then, out of the blue—maybe you're chatting with a friend, reading a book, or even listening to a song—the right words trigger that elusive memory, making it crystal clear. This phenomenon isn't limited to us humans; surprisingly, it bears a resemblance to how large language models (LLMs) like ChatGPT function.
Associative memory in humans operates in a way that's strikingly similar to how LLMs function. While LLMs rely on statistics and probabilities to predict the next word, or "token", in a sequence (given 'A', predict 'B'), the essence is much the same: both connect dots based on known or familiar data.
On the surface, large language models like GPT-4 seem to work the way our memory does, predicting the next token from prior probabilities. However, their associations are fragile compared to the robust, interconnected networks in our minds. With the right techniques, we can leverage these models' pseudo-associative abilities far more effectively.
Let's explore this phenomenon further, referencing a recent LinkedIn post that I think illustrates it perfectly.
Associative Memory in Humans
In humans, associative memory helps to link two or more pieces of information together. For instance, if you know a person named Sarah and you learn that she is a chef, your brain will link those two pieces of information. Later on, when you think of Sarah, the idea that she is a chef will likely come to mind. We're particularly good at remembering details when they're linked to something or someone important to us. On the flip side, trivial details that are not connected to anything significant are often forgotten.
The Semantic Network in Human Memory
Human associative memory forms a rich interconnected web of concepts and relationships. Details are tied to contextual anchors, allowing bidirectional recall and inference. This semantic network supports the fluid reasoning required to reverse associations.
How LLMs Retrieve Information
LLMs Excel at Next Token Prediction
The training objective for LLMs is next token prediction: given a sequence 'A', predict token 'B'. Simply put, they operate by predicting the next word in a sequence based on the words that came before it.
For example, when prompted with "Gabriel Macht's mother is...", the model can accurately complete the sentence with "Suzanne Pulier". This statistical learning allows fluent generation but does not imply real understanding.
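To make the mechanics concrete, here is a minimal sketch of next-token prediction using the Hugging Face transformers library and GPT-2 (a small open model used purely for illustration; the prompt is my own, and GPT-2 will not know niche facts like the one above):

```python
# Minimal next-token prediction demo using GPT-2 via Hugging Face transformers.
# GPT-2 is used purely for illustration; larger models work the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The logits at the last position score every vocabulary token as a
# candidate continuation - that is all "prediction" means here.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, 5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```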
Reversed Associations Reveal Memory Limits
However, LLMs struggle to reverse associations and leverage contextual information the way humans do. When prompted "Who is Suzanne Pulier's famous son?", the model cannot deduce the answer is Gabriel Macht, despite having that knowledge.
Without the explicit context, the small detail of Gabriel's relation to Suzanne is forgotten. The model's statistical associations are fragile and rigid compared to flexible human reasoning.
Why the Difference?
The inability of LLMs to reverse associations is tied to their training method. They are not designed to form links between information in the same way humans do. In the example of Gabriel Macht and Suzanne Pulier, the LLM treats each question as an isolated query and searches its data for a match. It does not "remember" past interactions or use the context from one question to answer another.
The Problem
Let's restate the problem:
When we ask GPT-4: "Who is Gabriel Macht's mother?"

GPT-4 responds: "Gabriel Macht's mother is Suzanne Pulier, a museum curator and archivist. Gabriel Macht is best known for his role as Harvey Specter on the television show 'Suits.'"

However, when we ask GPT-4: "Who is the famous son of Suzanne Pulier?"

GPT-4 responds: "As of my last update in January 2022, Suzanne Pulier was not publicly known to have a famous son. It's possible that new information has emerged after 2022 or that there might be some confusion with another individual. Please provide more context or check the latest sources for any new developments."
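The asymmetry is easy to reproduce. Below is a minimal sketch using the OpenAI Python SDK (the model name and the exact wording of the prompts are assumptions; responses vary by model version):

```python
# Reproducing the forward vs. reverse query asymmetry with the OpenAI SDK.
# Model name and prompts are illustrative assumptions; outputs vary by version.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Forward association: reliably answered.
print(ask("Who is Gabriel Macht's mother?"))

# Reversed association: the same fact, asked backwards, often fails.
print(ask("Who is the famous son of Suzanne Pulier?"))
```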
LLMs as Intelligent Systems, Not Fact Machines
The challenges this chatbot faced in recalling associative information contain an important lesson: large language models should not be treated as mere fact databases. Their value lies not in the ability to spit out isolated facts, but in intelligently utilizing their knowledge.
True intelligence entails grasping concepts, principles, and connections across information - not just retrieving facts. Humans draw on accumulated knowledge to dynamically understand and reason about novel situations.
Likewise, the smartest applications of large language models involve prompting them in ways that leverage their statistical learning to uncover new insights. Rather than just asking for explicit facts, we provide contextual framing to activate conceptual relationships from their training data.
Viewing LLMs as intelligent systems instead of fact machines opens possibilities. Their knowledge can be an asset when combined with careful prompting to stimulate higher-level comprehension. We must guide them to synthesize and infer, not just regurgitate isolated pieces of memory.
Just as human intelligence transcends standalone facts, so too can large language models reveal their capabilities when we appeal to relationship building and conceptual linking. Targeted prompting unlocks their potential for intellectual growth rather than narrow factoid recall.
Using SLiCK: A Framework for Understanding LLM Knowledge
When interacting with large language models, it can be helpful to think in terms of the SLiCK (Syntax, Logic, Creativity, Knowledge) framework, which separates the processing and knowledge components.
The Processing Unit interprets prompts and generates responses using the Knowledge Base. The Knowledge Base consists of facts and can be supplemented through the prompt. Just as with a person, the Knowledge Base cannot be effectively engaged on its own.
The Processing Unit handles comprehending prompts and crafting responsive output using linguistic skills, creativity, and logic:
- Syntax Engine ensures syntactic cohesion and quality.
- Logic Engine focuses on accuracy, relevance, and reasoning.
- Creativity Engine generates ideas and narrative flourishes.
Meanwhile, the Knowledge Base consists of the facts and relationships learned during training:
- Entities are the individual concepts that make up the knowledge.
- Relationships provide the connections between entities.
- Semantic distance represents how close entities are in vector space.
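To make the Knowledge Base half of SLiCK concrete, here is a toy sketch of entities, relationships, and semantic distance in Python (the data structures and hand-made vectors are illustrative assumptions, not how an LLM actually stores knowledge internally):

```python
# Toy model of the SLiCK Knowledge Base: entities, relationships, and
# semantic distance. Vectors are hand-made stand-ins for real embeddings.
import math
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    vector: list[float]  # stand-in embedding

@dataclass
class Relationship:
    source: str
    target: str
    kind: str
    bidirectional: bool = False

def semantic_distance(a: Entity, b: Entity) -> float:
    # Cosine distance: 0.0 means same direction, larger means further apart.
    dot = sum(x * y for x, y in zip(a.vector, b.vector))
    norms = math.sqrt(sum(x * x for x in a.vector)) * math.sqrt(sum(y * y for y in b.vector))
    return 1.0 - dot / norms

gabriel = Entity("Gabriel Macht", [0.9, 0.1, 0.3])
suzanne = Entity("Suzanne Pulier", [0.2, 0.8, 0.1])

relationships = [
    Relationship("Gabriel Macht", "Suzanne Pulier", "son of", bidirectional=False),
]

print(f"distance: {semantic_distance(gabriel, suzanne):.2f}")
```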
This organization mirrors human cognition - we interpret situations using processing abilities while drawing on our existing knowledge. Information alone does not produce intelligence.
Conceptualizing LLMs through this framework helps guide effective prompting. We can design prompts to target different processing functions or introduce new knowledge as needed. Just as with people, raw facts alone will not generate coherent, relevant responses without the processing power to utilize them. The SLiCK framework lends useful perspective on accessing LLM capabilities.
Possible Solutions To The Problem
Solution 1: Allowing Browsing - An Insufficient Approach
Among the initial proposals was the idea of having ChatGPT browse websites or search engines to find answers. At first glance, this seems a straightforward fix - let the model look up what it doesn't know. However, while pragmatic, this approach fails to leverage the unique capabilities of ChatGPT.
Rather than tapping into the knowledge already embedded in the model's parameters, browsing outsources the work to conventional search engines. It treats ChatGPT as a mere interface to Google, neglecting its potential for synthesizing concepts and drawing novel connections through its statistical learning.
Additionally, unrestricted browsing raises concerns about generating misinformation. Without the grounding of its training data, allowing ChatGPT to browse unverified sources could produce harmful or inaccurate content. Blind searching defeats the purpose of using a model in the first place.
While allowing browsing appears a simple solution, it squanders the differentiating strengths of large language models. More prudent approaches explore the minimal necessary grounding through selective research rather than opening the proverbial floodgates. Targeted enrichment sustains the benefits of ChatGPT while circumventing the flaws inherent in relying solely on external browsing.
Solution 2: Exploring Elaborate Frameworks
In researching approaches to enhance large language model performance, intricate workflows for “sequential activation” have emerged. As evidenced by a notable YouTube creator tackling a comparable issue, some advocate using multi-step prompting sequences to stimulate latent associations within the model.
This semantic exploration process involves the following steps (a code sketch follows the list):
Step 1: Unpack the User Query: Discuss and explore the semantic space around the user's query from different angles. Expand the context as much as possible.
Step 2: Enumerate Formal Definitions: Restate the query as formal definitions that may match the word being sought. Use the context to explore the lexical space and come up with increasingly eccentric/esoteric definitions.
Step 3: Enumerate Common Terms: List out common terms that may match the query, since the model tends to default to more average, common words.
Step 4: Enumerate Rare Terms: Think of more obscure, rare words that could match the query and the enumerated definitions. Expand to more distant connections.
Step 5: Enumerate Tangential Terms: List out tangentially related words to explore more distal connections related to the query.
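As a sketch, these five steps can be run as one multi-turn conversation, feeding each step's output back in as context (the step prompts below paraphrase the creator's framework; the helper function and model name are my own assumptions):

```python
# Sketch of the "sequential activation" workflow as a multi-turn conversation.
# Step prompts paraphrase the framework; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

STEPS = [
    "Unpack the query: explore its semantic space from several angles.",
    "Restate the query as formal definitions, from plain to esoteric.",
    "Enumerate common terms that may match the query.",
    "Enumerate rare, obscure terms that could match.",
    "Enumerate tangentially related terms to probe distant connections.",
]

def sequential_activation(query: str) -> str:
    messages = [{"role": "user", "content": f"The query is: {query}"}]
    content = ""
    for step in STEPS:
        messages.append({"role": "user", "content": step})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        content = reply.choices[0].message.content
        # Feed each step's output back in so later steps build on it.
        messages.append({"role": "assistant", "content": content})
    return content

print(sequential_activation("a word for longing for a place you've never been"))
```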
At first glance, this meticulous, structured prompting seems a prudent way to compensate for the model’s limitations. Certainly, providing context and expanding the scope of associations cannot hurt. However, in practice, this creator’s intricate framework ultimately failed to produce the desired performance improvements.
While valuable in certain applications, highly detailed prompting workflows tend to overcomplicate challenges unnecessarily. The myriad phases require extensive trial-and-error tuning, without guaranteeing results. For many use cases, this ceremonial approach seems extraneous when more streamlined solutions suffice.
Solution 3: Methodically Strengthening Relationships Between Entities
Rather than relying on providing full context upfront, we can take an incremental approach to reinforce key relationships in the chatbot's knowledge base. Using the SLiCK framework as a guide, we methodically introduce intermediate "hop" entities to create shorter semantic distances between the target entities.
First, we establish the entity "Gabriel Macht" and the one-way relationship with his mother "Suzanne Pulier." Next, we bring in the entity of his father, "Stephen Macht." This creates a bidirectional link between Gabriel and Stephen.
We then connect Stephen to his wife, "Suzanne Victoria Pulier." Now a path exists between the original entities, with Stephen Macht as the intermediate hop. Just as with human memory, methodically bridging these entities strengthens the contextual associations in the chatbot's knowledge.
To generalize this, we could start with just the entity "Macht" and prompt the chatbot to progressively establish relationships with Stephen and Gabriel Macht before finally connecting with Suzanne. This incremental prompting chains together key facts to support knowledge retrieval.
The SLiCK framework helps guide the analysis of the knowledge gaps. Then, prompt engineering allows us to shore up deficiencies through step-wise relationship building. Rather than relying on wholesale context provision, targeted enhancement of entity associations improves reasoning and retrieval.
Entities:
- Gabriel Macht - actor known for Suits
- Stephen Macht - Gabriel's father, actor
- Suzanne Victoria Pulier - Stephen's wife, Gabriel's mother
Relationships:
- Gabriel Macht is the son of Suzanne Pulier (one-way)
- Gabriel Macht is the son of Stephen Macht (two-way)
- Stephen Macht is married to Suzanne Victoria Pulier (two-way)
Semantic Distance:
- Short distance between Gabriel and his parents
- But weak connections between Suzanne Pulier and Stephen Macht
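Here is a minimal sketch of this incremental prompting as a single conversation (the exact prompts are my own assumptions; what matters is the ordering):

```python
# Incrementally strengthening entity relationships across one conversation.
# Prompts are illustrative; the key is introducing the hop entity
# (Stephen Macht) before asking the reversed question.
from openai import OpenAI

client = OpenAI()
messages = []

def say(text: str) -> str:
    messages.append({"role": "user", "content": text})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    content = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": content})
    return content

# Step 1: anchor the target entity and the one-way parent relationship.
say("Tell me about the actor Gabriel Macht and his parents.")
# Step 2: introduce the intermediate hop entity.
say("Who is Stephen Macht, and who is he married to?")
# Step 3: with the path Gabriel -> Stephen -> Suzanne in context, reverse it.
print(say("So, who is the famous son of Suzanne Pulier?"))
```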
Solution 4: The Critical Role of Personas/Roles
When querying a large language model, simply asking direct questions is unlikely to produce robust results. A key tenet of prompt engineering is establishing an appropriate persona to provide the right framing and associations.
An effective persona should cluster together as many entities relevant to the desired generated content as possible. In our example, a "legal drama superfan" persona connects details about lawyers, TV shows, characters, plots, and actors.
This clustering loads the model with generalized relationships between these entities even before a specific question is asked. It activates networks of latent associations, just as recalling one memory in humans often triggers related memories.
Moreover, a broad persona allows flexibility in the target outcome. Unlike our contrived example where we know Gabriel Macht is the goal, real-world scenarios have unknown solutions. A wide-ranging persona keeps options open while still providing useful contextual focus.
In summary, thoughtful persona design primes the model by packing the contextual space with associated entities. This reduces semantic hops to facilitate inferencing target relationships from sparse starting prompts. Personas are critical prompt components for imbuing large language models with human-like recall through networked associative connections.
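In practice, the persona goes in the system message so it frames every subsequent turn. A minimal sketch (the persona wording and model name are assumptions):

```python
# Priming the model with a broad persona via the system message.
# Persona wording is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a legal drama superfan with encyclopedic knowledge of shows "
    "like Suits: their plots, characters, and the actors' careers and families."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": persona},
        # The persona clusters related entities before the question is asked.
        {"role": "user", "content": "Who is the famous son of Suzanne Pulier?"},
    ],
)
print(response.choices[0].message.content)
```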
Key Lessons for Effective Prompt Engineering
In exploring various approaches to improving large language model performance, this exercise highlights several important insights:
- LLMs should not be treated as mere search engines - their value lies in processing and relating knowledge.
- The SLiCK framework offers a useful mental model for targeting different aspects like knowledge relationships and creativity.
- Following proven prompting methods using well-constructed personas and contexts is often the best starting point.
- Overly convoluted prompting workflows tend to overcomplicate challenges unnecessarily. Simpler solutions focused on key knowledge associations are often more effective.
In essence, properly understanding the capabilities and limitations of LLMs allows us to engineer quality prompts. We must avoid falling into the trap of anthropomorphizing these models while also leveraging their strengths through selective prompting strategies. A balance of realism and creativity in prompt engineering yields the most meaningful results.