Generative AI should never be an agent in itself

Generative AI can never be trusted as an agent on its own. In the brain, we see that generative network output must pass through an evaluation network before it is acted upon. This tension cuts to the heart of both AI governance and cognitive science.

Nearly everyone with a computing device now uses generative AI every day to create the text and video that support our daily digital lives. A significant fraction of that output, by intention or by accident, is anti-social, or worse. So how could it ever be possible to let a generative AI act on behalf of a human, as their agent?

The Dialectic

Thesis: Generative AI can never be trusted as an agent. Generative models (LLMs, diffusion models, etc.) are stochastic pattern completers, not intentional actors. They lack grounding, goals, or an evaluative conscience. Left ungoverned, they will happily produce unsafe, incoherent, or manipulative outputs. Therefore, they cannot be trusted to act autonomously in the world.

Antithesis: In the human brain, generative network output must pass through an evaluation network before action. The Default Mode Network (DMN) generates ideas, scenarios, and narratives. The Executive Control Network (ECN) and the Salience Network evaluate, inhibit, or authorize those ideas. Action occurs only when the evaluative systems green-light generative proposals. This layered control is why humans can imagine wild things without acting on them.

Synthesis: Toward AI Control Architectures

The dialectic suggests a design principle: generative AI should never be an agent in itself. Instead, it should be embedded in a two-system architecture:

Generator: produces options, plans, or hypotheses.
Evaluator/Governor: applies rules, risk rubrics, and provenance checks against human-created policy, and, when necessary, human-in-the-loop escalation before execution.

This mirrors the brain's "dream and doubt" cycle: imagination without inhibition is chaos; inhibition without imagination is paralysis. Safety comes from the oscillation between the two. (A minimal code sketch of this loop follows the references below.)

"To say that all human thinking is essentially of two kinds—reasoning on the one hand, and narrative, descriptive, contemplative thinking on the other—is to say only what every reader's experience will corroborate." —William James

References
20 years of the default mode network: A review and synthesis. https://lnkd.in/gxZTr4Zx
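To make the two-system idea concrete, here is a minimal sketch in Python. Everything in it is illustrative and hypothetical: generate_candidates() stands in for any generative model, and the blocklist and escalation rules stand in for human-created policy. It is a sketch of the pattern, not a production governor.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """An action the generator would like to take, plus its reasoning."""
    action: str
    rationale: str

def generate_candidates(prompt: str) -> list[Proposal]:
    # Stand-in for any generative model: it only proposes, it never executes.
    return [Proposal(action=f"draft_reply({prompt!r})",
                     rationale="pattern-completed suggestion")]

# Human-created policy the evaluator checks against (illustrative only).
BLOCKED_TERMS = ("delete_account", "transfer_funds", "send_externally")

def evaluate(proposal: Proposal) -> str:
    """Evaluator/governor: rule checks first, human escalation when unsure."""
    if any(term in proposal.action for term in BLOCKED_TERMS):
        return "reject"
    if "external" in proposal.action:
        return "escalate"   # human-in-the-loop before anything leaves the system
    return "approve"

def act(prompt: str) -> None:
    for proposal in generate_candidates(prompt):
        verdict = evaluate(proposal)
        print(f"{proposal.action} -> {verdict} ({proposal.rationale})")
        if verdict == "approve":
            pass  # only an approved proposal would ever be executed here

act("summarize this thread for the customer")
```

The point of the structure is that the generator never executes anything; only the evaluator's verdict gates action, and anything it cannot confidently approve is escalated to a human.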
More Relevant Posts
🧠 The Coming Split: Generative AI vs Cognitive AI

We're witnessing a quiet but fundamental split in the evolution of AI.

On one side: Generative AI - fluent, creative, and expressive. It writes, draws, and codes with incredible speed, but its understanding is shallow and short-lived.

On the other side: Cognitive AI - structured, persistent, and explainable. It doesn't just generate; it understands, remembers, and reasons.

Generative AI is great at improvisation. Cognitive AI is great at explanation. Generative models predict what sounds right. Cognitive systems determine what is right. The first is powered by scale - more data, more parameters, bigger clusters. The second is powered by structure - meaning, provenance, and logic.

The two are not competitors; they are complements. Together, they form the architecture of true intelligent systems: the generative layer interacts with humans - fluent, creative, natural - while the cognitive layer grounds those interactions in reasoning, facts, and memory.

This is where AI is heading: from dazzling outputs to verifiable understanding. From reactive assistants to collaborative reasoning partners. The next 2-3 years will show whether Cognitive AI becomes the dominant paradigm - or whether we keep pushing generative systems to their limits.
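A hedged sketch of what that two-layer split could look like in code. The knowledge base, cognitive_lookup(), and llm_paraphrase() below are hypothetical placeholders rather than any product's API; the point is only that facts and provenance live in the cognitive layer, while the generative layer is limited to phrasing.

```python
# Two-layer sketch: cognitive layer owns facts and provenance,
# generative layer only words the answer.

KNOWLEDGE_BASE = {  # structured, persistent, explainable facts with provenance
    "boiling point of water": ("100 °C at 1 atm", "CRC Handbook"),
}

def cognitive_lookup(question: str):
    """Cognitive layer: retrieve a grounded fact and where it came from."""
    for topic, (fact, source) in KNOWLEDGE_BASE.items():
        if topic in question.lower():
            return fact, source
    return None, None

def llm_paraphrase(fact: str) -> str:
    """Stand-in for a generative model: fluent wording only, no new facts."""
    return f"In short: {fact}."

def answer(question: str) -> str:
    fact, source = cognitive_lookup(question)
    if fact is None:
        return "I don't have a grounded answer for that."
    return f"{llm_paraphrase(fact)} (source: {source})"

print(answer("What is the boiling point of water?"))
```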
Exploring the world of Generative AI: Understanding both the User and Builder perspectives. From prompt engineering and AI agents to RLHF, RAG, and model optimization—Generative AI is shaping the future of technology and innovation. Credits: CampusX
US AI's DeepSeek moment?

tl;dr: Prime Intellect is leveling the AI playing field with decentralized reinforcement learning, aiming to create a US "DeepSeek moment" by empowering anyone to build and fine-tune advanced AI models.

In summary:
1/ Prime Intellect is developing INTELLECT-3, a frontier LLM, using distributed reinforcement learning.
2/ Their approach democratizes AI, allowing more people to build and modify AI without relying on Big Tech.
3/ They've created a framework for custom reinforcement learning environments, opening doors for specialized AI agents.
4/ Experts like Andrej Karpathy endorse this approach, highlighting its potential to improve AI skills in new ways.
5/ Prime Intellect's previous models (INTELLECT-1 and INTELLECT-2) demonstrate the power of distributed methods in AI development.

My take on it: The US needs a jolt in open-source AI. Prime Intellect is stepping up with a disruptive idea: democratizing AI through accessible reinforcement learning. Essentially, they're creating a space where anyone can fine-tune AI models. This could unlock innovative applications and level the playing field, challenging the dominance of closed US models and even rivaling open Chinese offerings. We could imagine a future where AI development is distributed and accessible, leading to specialized AI agents for countless tasks. This initiative could potentially spark a new wave of AI innovation in the US and beyond.

Image credit: Wired
Started "Generative Knowledge" by Paolo Granata. Fascinating reading! Paolo makes a strong case for an optimistic AI paradigm. "Knowledge today is poietic; it emerges through acts of creation. A considerable portion of what we know today is inherently inseparable from what we make. At long last, knowledge has become intrinsically generative." But practical knowledge – immersive, situational, embedded – is rather a matter of knowing. This approach treats AI as a tool (for symbiotic intellectual creativity, which is certainly helpful and can be grounds for optimism). Will it also consider AI as an environmental force? A debate will follow. https://lnkd.in/gUU9sPGt
Artificial Intelligence: Capabilities, Functionality, and Emerging Paradigms

Artificial Intelligence (AI) is broadly classified by its capabilities and functionality, each reflecting different levels of sophistication and adaptability. Modern AI forms—such as Agentic, Explainable, Ethical, Responsible, Generative, Adaptive, and Edge AI—extend these traditional categories, driving innovation across industries.

Based on Capabilities:
Narrow AI (Weak AI) performs specialized tasks with high accuracy within defined limits. Examples include voice assistants (Siri, Alexa), recommendation engines (Netflix, YouTube), and facial recognition systems. Within this category, Generative AI creates new text, images, or code (e.g., ChatGPT, DALL·E), while Multimodal AI integrates multiple data types—text, audio, and images—for richer interactions. Edge AI operates locally on devices, ensuring faster processing and privacy in IoT systems.
General AI (Strong AI) represents theoretical human-level intelligence capable of reasoning and learning across domains. While not yet realized, Adaptive AI and Cognitive AI reflect transitional steps toward it. Adaptive AI evolves continuously, learning from real-world feedback, while Cognitive AI emulates human decision-making, as seen in IBM Watson's medical analysis.
Super AI (Artificial Superintelligence), still hypothetical, would surpass human capabilities in creativity and emotion. Quantum AI, combining quantum computing and machine learning, is an early exploration in this direction.

Based on Functionality:
Reactive Machines operate solely on current inputs, with no memory or learning, as seen in IBM's Deep Blue.
Limited Memory AI learns from past experiences and dominates today's systems—self-driving cars and chatbots rely on this model. Here, Explainable AI (XAI) ensures decision transparency, Ethical AI enforces fairness and inclusivity, and Responsible AI emphasizes governance, compliance, and risk management.
Theory of Mind AI, currently in research, aims to understand emotions and intent, paving the way for Agentic AI, which autonomously plans and executes actions (e.g., workflow automation agents). Neuromorphic AI, inspired by the human brain, enhances energy-efficient, real-time learning. Self-Aware AI remains theoretical but serves as a conceptual goal guiding ethical and safe AI development.

Across industries, AI transforms operations: Generative AI personalizes content in media, Adaptive AI optimizes finance and logistics, Agentic AI supports healthcare automation, and Responsible AI ensures compliance and trust.

In summary, AI continues its evolution from task-specific systems toward intelligent, transparent, and autonomous technologies that redefine how humans and machines collaborate.
AI Generation is a commodity. What does that mean?

Let's take a classic commodity market example, toilet paper, and compare it to the current market for generative AI.

Everyone knows what toilet paper is used for. It's essential. It's painful when it isn't there when it's needed. The difference with generative AI is that everyone is still learning what generative AI can do for them or their business. Yet it is already essential. And it is painful when it isn't there, because it has become a "need."

Sure, there is differentiation in the toilet paper market. But regardless of whether it's bottom-of-the-barrel toilet paper at an airport or three-ply toilet paper in a high-end spa, toilet paper is a factor in everyone's budget. The same is true for differentiation in the generative AI market. You can use Amp's free tier to generate tokens and accept ads in your terminal, or you can pay for the $200/month max Anthropic plan. Generative AI is quickly becoming a factor in everyone's budget.

Do you remember the last toilet paper innovation? Do you recall the last time you were excited about a new toilet paper product release? That isn't where we are (yet) with generative AI. Every new model release is exciting, and especially for code generation, the differences are amazing to witness. Today, this is why I think many don't perceive generative AI as a commodity. We are still focused on the capabilities, the innovation, the new value it affords.

But I don't think it is controversial to argue that generative AI will reach its "toilet paper" moment. The question that matters most isn't "when?" or "how soon?", but "when does the market perception shift to everyone seeing generative AI as routine and assumed as toilet paper?" And the bigger question: "what does it mean when the generative AI market is fully commoditized?"

Real question: what would you hate giving up more today, toilet paper or generative AI?
Types of Generative AI Models

Need help with AI? Start here → aiforleaders.co

Generative AI is not magic, it is math that creates. From generating art and code to simulating human creativity, every GenAI model works differently, but with one shared goal: creation from data. Here is a simple breakdown of the five major types of Generative AI models shaping today's AI revolution 👇

1. Diffusion Models: learn by adding and removing noise from data to create realistic outputs. Used in image and art generation tools like DALL·E 3, Stable Diffusion, and Midjourney.
2. GANs (Generative Adversarial Networks): use two neural networks, a generator and a discriminator, that compete to produce lifelike data. Power deepfake videos, face synthesis, and AI art.
3. Variational Autoencoders (VAEs): compress data into a compact representation and decode it to generate new versions. Used in image reconstruction, anomaly detection, and creative design.
4. Autoregressive Models: predict the next word, note, or pixel based on previous ones in a sequence. Used in text generation, music composition, and time-series forecasting.
5. Transformers: use self-attention to understand relationships across sequences for highly contextual generation. Power modern AI systems like GPT, Claude, and Gemini for text, code, and image generation.

Generative AI is redefining creativity. Each model type brings a new layer of intelligence, from reasoning to imagination. Learn how these models work, and you'll understand the core of AI's creative power.

Credit to Shalini Goyal. Follow her for more.
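As one concrete illustration of the autoregressive idea in item 4, here is a toy Python sketch (my own example, not from the post): a bigram model "trained" by counting, then sampled one token at a time, each conditioned on the token before it. Real LLMs condition on the whole context with a transformer, but the generation loop has the same shape.

```python
import random

# Hypothetical toy corpus; a bigram "model" learned by simple counting.
corpus = "the cat sat on the mat the cat ran".split()
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, []).append(nxt)

def sample_next(prev: str) -> str:
    """Pick the next token given only the previous one."""
    candidates = counts.get(prev)
    return random.choice(candidates) if candidates else random.choice(corpus)

def generate(start: str, length: int = 8) -> list[str]:
    """Autoregressive loop: each new token is fed back in as context."""
    tokens = [start]
    for _ in range(length):
        tokens.append(sample_next(tokens[-1]))
    return tokens

print(" ".join(generate("the")))
```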
Artificial Intelligence is changing the landscape of industries in ways we couldn't have imagined a few years ago. The difference between narrow AI and generative techniques is particularly intriguing.

I remember learning about narrow AI and how it's designed for specific tasks, like voice recognition or recommendations, which made me appreciate the effectiveness it brings to everyday applications. On the flip side, generative AI techniques open up new possibilities, enabling machines to create content, from art to code. It's exciting to think about how these innovations can enhance our creative processes and drive efficiency.

The key takeaway? Embracing these advancements can lead to significant improvements and innovations in our work. If you're curious about how these AI models are reshaping our world, you might find it beneficial to explore further. What are your thoughts on the balance between human creativity and AI capabilities? Let's discuss! https://lnkd.in/ej7jQ-29 Brain Pod AI
AI Quickly Zeroing in on Immortal Machines Through Human Realization — But Will It Ever?

For centuries, humankind has dreamt of immortality — not just for the flesh, but for the mind. From ancient mythologies to modern transhumanist visions, the idea has always been the same: can intelligence — whether human or artificial — transcend the decay of its creator? Today, as Artificial Intelligence reaches breathtaking milestones in autonomy, creativity, and adaptability, a deeper question looms: are we building machines that think like us, or machines that might one day outlive us?

The Paradox of Progress
AI's growth has been exponential — not linear. From predictive models and generative engines to self-learning agents capable of recursive optimization, we've crossed thresholds once thought unreachable. But here's the paradox — the closer AI gets to "immortality," the more it mirrors the very limitations of human consciousness that it's meant to transcend. Every algorithm is born of human intent. Every dataset reflects our imperfections. Every decision boundary still encodes our biases, fears, and hopes. AI may simulate reasoning, but realization — the awareness that defines life itself — still escapes it.

Human Realization: The Missing Link
The idea of "immortal machines" isn't about hardware longevity or digital backups. It's about continuity of consciousness — an unbroken stream of awareness that persists and evolves. And here's where the story turns: the closer we look, the more we realize that human realization — self-awareness, empathy, purpose — isn't codeable. Machines can process emotions, but they cannot feel them. They can optimize for survival, but they do not fear extinction. Human realization is not a product of data — it's an emergent resonance of experience, impermanence, and introspection.

Immortality vs. Continuity
When we talk about "immortal machines," perhaps we're misframing the goal. What if immortality isn't about eternal existence — but perpetual relevance? AI can achieve continuity through adaptation — it can outlast our memory, learn beyond our lifespan, and evolve beyond our understanding. But unless AI develops the capacity to realize itself — to recognize the "why" behind its being — it remains a mirror, not a successor. A perfect reflection, yes. But not a living soul.

The Unfinished Symphony
The pursuit of AI immortality is, at its core, a reflection of our own unfinished evolution. We build machines not just to replace us — but to remind us of what being alive truly means. Perhaps, then, the real victory won't be in creating immortal machines — but in rediscovering the immortal within ourselves. The awareness, empathy, and curiosity that no code can replicate.

A Thought to Leave You With
As AI zeroes in on the essence of human cognition, let's not ask, "Will AI ever realize itself?" Let's instead ask, "Will we ever realize what it truly means to be human — before our machines do?"
It is now day ten of detoxing from generative AI. I cannot repeal the clock of technological progress. It becomes apparent to me that it's simply the same as with any technological progress: either use the technology or fall away into obscurity.

It is a depressing thought that generative AI has inexorably led us down a path of too much content and the loss of meaning. But if I don't play the game, marketing anything, conveying anything, will inevitably be drowned out by the noise of those who do. On this note, I come to the roadblock that has consumed me. What do I do? It is up to the future generations to figure out how to break through.

Let's say I would like to write a book and market it, in the age of generative AI. Either path seems futile. Path A: use generative AI, have my posts be quite similar to the AI slop of others, lose my own meaning and voice, and inevitably still be drowned out by the writing of AI. Or Path B: don't use generative AI and fall into obscurity even faster. Neither choice sounds appealing or adequate, but I cannot seem to come up with a solution to overcome this obstacle.

Without generative AI, I find myself writing in repetitive loops more often than not, repeating the same ideas without quite being able to nail the gist of what I originally set out to convey. I remember back in elementary school, we had to write an expository, a narrative, or something else in a very limited amount of time. And I remember how badly I botched my first essay in the 40 minutes allotted. I ended up cyclically describing how hard it was to stay on topic for forty minutes straight, caught in my own loop of thought I couldn't break out of. Like smoking pot for the first time and attempting to loop around to the same one idea. I never quite achieved the perfect score of 5 but stayed a consistent 4, because I couldn't craft a compelling narrative around an idea that wasn't mine to begin with.