Tag: artificial intelligence

  • Controlling Which Websites Your AI Agent Visits


    When you give an AI agent the ability to browse the Web, you’re handing it a passport with no visa restrictions. Left unchecked, it will go wherever it’s told — or wherever it wanders — including sites you’d never approve of, pages designed to manipulate it, or services that log every request it makes.

    Without guardrails, your agent can leak data, scrape paywalled content, hit rate limits that get your IP banned, or be manipulated by a page into visiting somewhere malicious. Web access control isn’t optional — it’s a core safety layer.
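    One lightweight guardrail is a domain allowlist checked before the agent's fetch tool ever runs. The sketch below is a minimal illustration of the idea, not any particular framework's API; the domain list and function names are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would load this from policy config.
ALLOWED_DOMAINS = {"docs.python.org", "en.wikipedia.org"}

def is_allowed(url: str) -> bool:
    """Return True only if the URL's host is an allowed domain or a subdomain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

def guarded_fetch(url: str, fetch):
    """Wrap the agent's fetch tool so disallowed URLs are refused outright."""
    if not is_allowed(url):
        raise PermissionError(f"Blocked by web access policy: {url}")
    return fetch(url)
```

    Matching on the full hostname (rather than a substring) matters: a naive check like `"wikipedia.org" in url` would wave through `wikipedia.org.attacker.net`.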


  • AI Poses A “Hidden Threat” To Organizations: Report


    In December 2025, I wrote in one of my newsletters:


    Now, a new article in Harvard Business Review (HBR) raises almost the same concerns, but for organizations implementing AI.

    The Core Concern

    • AI’s fluency and confidence create an illusion of competence, encouraging employees to offload critical thinking to machines, says the HBR article.
    • Over-reliance on AI can hollow out tacit knowledge, judgment, and interpretive reasoning—capabilities essential for innovation, crisis response, and strategic planning.
    • Organizations risk becoming technologically advanced but competitively fragile if they fail to protect human expertise.

    Three Ways AI Erodes Capabilities

    1. People Stop Thinking
      • Employees defer to AI outputs instead of developing their own analyses or strategies.
      • Example: Creston Telecom (Australia) found managers presenting AI-generated scenarios without being able to defend choices.
      • Solution: Instituted AI-free strategy sessions and a six-month “strategy residency” to preserve judgment and systems thinking.
    2. Rules Get Buried in Systems
      • AI embeds subjective, moral decisions (e.g., credit approvals, promotions) into opaque algorithms.
      • This undermines deliberation, accountability, and adaptability.
      • Example: Piedmont Regional Bank (U.S.) noticed its Credit Committee leaning heavily on AI.
        • Response: Quarterly “credit standards roundtables” to debate evolving criteria.
        • Introduced apprenticeships pairing junior analysts with senior lenders, ensuring judgment and accountability remain human-driven.
    3. Social Ties Are Weakened
      • AI displaces collaborative problem-solving, reducing trust and shared purpose.
      • Example: Brightview Creative (U.K. advertising agency) saw clients leaving despite strong campaign metrics.
        • Clients felt they were dealing with a “vending machine” rather than creative partners.
        • Solution: Banned AI-generated content in client presentations, appointed strategic leads to articulate human judgment, and rebuilt client confidence.

    Key Takeaway

    AI can enhance organizational performance—but it cannot:

    • Develop expertise through lived experience
    • Take moral responsibility
    • Build trust, courage, or shared purpose

    The report says these remain irreducibly human functions. Leaders must ensure AI augments rather than replaces them, or risk losing the competitive edge that makes their organizations resilient.

    Source: HBR


    What’s your view on the above? Do comment.

  • Wikipedia Banned AI-Written Entries — But The Bot Had A Lot To Say About It


    It was only a matter of time. Wikipedia, the Internet’s most trusted crowdsourced encyclopedia, has finally drawn a firm line in the digital sand—and this time, it’s aimed squarely at artificial intelligence.

    Frustrated by made-up facts and sketchy citations, Wikipedia has put its foot down: no more AI-written articles. Reports say the platform has barred its global community of volunteer editors from using AI tools to generate or rewrite entries. AI can still lend a hand with translations or light grammar tweaks — but when it comes to actual content, humans are very much back in charge.

    Click here to read the rest of the story.

  • AI Isn’t Killing Jobs Everywhere Yet


    This could bring a bit of cheer to our members.

    Recent surveys conducted by the U.S. Federal Reserve and the St. Louis Fed show “no clear evidence” that artificial intelligence (AI) adoption has led to widespread job losses. In fact, industries with higher AI uptake are reporting faster productivity growth on both sides of the Atlantic. Job postings data also indicate that firms embracing AI are not reducing hiring compared to others, suggesting that automation is not yet driving the slowdown in recruitment.

    Philippines and India

    According to a recent blog post by James Pethokoukis, senior fellow and DeWitt Wallace Chair at the American Enterprise Institute and editor of the AEIdeas blog, Apollo Global Management tracked unemployment trends in two economies heavily exposed to outsourced service work — call centers and back-office operations. Despite predictions that generative AI would devastate these sectors, neither the Philippines nor India has shown signs of labor-market deterioration. Analysts noted that if automation were truly eliminating jobs at scale, these markets would be the first to feel the shock.

    Key Takeaway

    Across the U.S., Europe, India, and the Philippines, AI’s labor market impact remains muted. While certain occupations — particularly programming — are experiencing slower growth, broader fears of mass displacement are not yet supported by data. Policymakers face the challenge of balancing vigilance with evidence-based action as AI adoption accelerates.


    Here’s a clear breakdown of the impact of AI on jobs in each country or region mentioned in the article:

    • United States
      • Federal Reserve surveys show no evidence of widespread job losses due to AI.
      • Industries adopting AI are seeing higher productivity growth, not reduced hiring.
      • Programmer jobs are the exception: growth has slowed since ChatGPT’s release, though employment is still rising.
    • Europe
      • Similar to the U.S., European labor markets show no clear signs of AI-driven unemployment.
      • Productivity gains are reported in sectors with higher AI adoption.
    • Philippines
      • Despite heavy exposure in call centers and outsourced services, no labor-market deterioration has been observed.
      • Analysts note this sector would be among the first hit if AI displacement were significant.
    • India
      • Outsourced back-office and service jobs remain stable, with no evidence of mass layoffs linked to AI.
      • Like the Philippines, India’s service-heavy economy is closely watched as a potential early indicator of disruption.

    Source: https://www.aei.org/economics/ai-job-panic-still-outruns-the-evidence/

  • Top AI Coding Assistants in 2026: How Tools Like GitHub Copilot, Cursor And Agent Smith Are Transforming Everyday Development


    AI coding assistants have quickly become a part of everyday life for developers. What started as simple autocomplete tools has evolved into something much more powerful: tools that feel like intelligent agents, almost like having your own “Agent Smith” sitting beside you, helping you write, debug, and understand code.

    One of the most widely used tools today is “GitHub Copilot”. It acts like a reliable pair programmer who is always available. As you write code, it suggests entire lines or even full functions based on your comments. For many developers, this means spending less time on boilerplate code and more time focusing on logic and problem-solving. You can simply write a comment describing what you want, and Copilot often fills in the rest in seconds.
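    For example, a developer might type only the comment line below, and an assistant like Copilot will typically propose the rest. The completion shown here is a hand-written illustration of the kind of suggestion such tools produce, not actual Copilot output.

```python
# Return the n-th Fibonacci number, computed iteratively.
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```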



  • The Coming Of AI Co-Scientist


    1. What is an AI Co-Scientist?

    An AI co-scientist is not just a tool that crunches data. It’s a system that actively participates in the scientific process. Instead of only analyzing results, it can:

    • Propose hypotheses
    • Design experiments
    • Interpret findings
    • Suggest next steps

    Think of it less like a calculator and more like a junior (and increasingly senior) research partner that never sleeps and can read millions of papers instantly.


    2. Why Now?

    Several trends have converged to make AI co-scientists possible:

    a. Explosion of scientific data
    Modern science generates far more data than humans can process alone (genomics, climate models, particle physics, etc.).

    b. Advances in AI models
    Large-scale AI systems can now:

    • Understand scientific language
    • Reason across domains
    • Work with code, math, and simulations

    c. Integration with tools
    AI is no longer isolated. It can:

    • Run simulations
    • Access lab equipment (in some setups)
    • Interface with databases and scientific software

  • AI Adoption Surges In Public Sector: Report


    Public-sector employees are now using AI at rates that rival the private sector, with Gallup reporting that 43% of government workers engaged with AI tools in late 2025 — a dramatic rise from just 17% in mid-2023.

    This surge highlights a rapid closing of the technology gap between government and business, despite longstanding challenges in recruiting technical talent and navigating stricter governance frameworks.

    The study shows that while private-sector employees still lead in frequent AI use (25% vs. 21%), public-sector workers surpass them in occasional use (22% vs. 16%). This balance puts government slightly ahead in overall adoption (43% vs. 41%). Analysts attribute the growth to the accessibility of generative AI tools, which require little specialized training, allowing employees to experiment independently.

    Crucially, the report emphasizes that managerial support is the decisive factor in whether AI experimentation becomes routine. In public-sector organizations with strong leadership backing, 65% of employees use AI frequently, compared with only 37% in low-support environments. The findings suggest that leadership strategies — not just technology access — will determine whether AI adoption translates into lasting productivity gains.

    Challenges Remain

    Despite the rapid rise in AI adoption across the public sector, Gallup’s study points out several persistent challenges. Government agencies continue to face difficulties in attracting and retaining technical talent, which limits their ability to fully integrate advanced AI systems. Strict governance and compliance frameworks also slow down experimentation compared to the private sector. Moreover, without strong managerial support, many employees remain hesitant to move beyond casual use of AI tools, leaving productivity gains unevenly distributed. These hurdles suggest that while adoption is accelerating, the path to sustainable and transformative AI use in government still requires deliberate investment in leadership, training, and policy innovation.

    Click here to read the report.

  • What Is “AI Slop”?


    If you spend time on the Internet today, you may have noticed something strange. Articles that say a lot but mean very little. Social media posts filled with generic advice. Images that look impressive at first glance but make no real sense. Much of this growing flood of low-quality content has a new name: “AI slop”.

    AI slop refers to large amounts of content created quickly using artificial intelligence tools but with little care, accuracy or originality. The word “slop” is used deliberately—it suggests something messy, mass-produced and not very nourishing.


  • AI And Copyright – A Primer


    AI and copyright are entering a new phase globally. Pure AI-generated content is increasingly treated as public domain, while copyright protection lies in human creativity — editing, arranging, and directing AI outputs.

    For creators, the key shift is clear: documentation and proof of human input are becoming essential to defend ownership in the age of generative AI.

    Here’s a primer on AI and copyright (March 2026) that will help creators understand where they stand on various uses of AI in matters of text, image, and video generation.

  • OpenClaw: A Plain-English Primer For Everyday Gen-AI Users


    Many of you must have heard of “OpenClaw” by now, but some may still not know what this project is all about. “OpenClaw” is an open-source project that aims to recreate or emulate advanced AI “reasoning” capabilities similar to those seen in proprietary systems. It emerged as part of the broader open-model movement, where developers try to replicate powerful commercial AI features in transparent, community-driven ways.

    For ordinary users of generative AI tools, OpenClaw is not a mainstream app like ChatGPT or Claude. Instead, it is more of a behind-the-scenes framework or model setup that developers can run locally or adapt for research. Still, its goals and the controversy around it matter to everyday users because they touch on privacy, transparency, cost, and AI safety.

    What OpenClaw is Trying To Do

    OpenClaw was designed to reproduce structured reasoning behavior in large language models (LLMs). That means:

    • Producing clearer step-by-step thinking.
    • Handling logic, math, and planning tasks more reliably.
    • Making reasoning more inspectable and less of a “black box.”

    In practical terms, it often uses prompting strategies, training tricks, or model fine-tuning to make open-source language models behave more like advanced proprietary systems.
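    A minimal sketch of one such prompting strategy — a step-by-step template — might look like the following. This is an illustrative, hypothetical example of the general technique, not OpenClaw's actual code.

```python
def reasoning_prompt(question: str) -> str:
    """Wrap a question in a step-by-step template, a common way to elicit
    more structured, inspectable reasoning from an open LLM."""
    return (
        "Answer the question below. Think step by step, numbering each step, "
        "then give the final answer on a line starting with 'Answer:'.\n\n"
        f"Question: {question}\n"
    )
```

    Fine-tuning approaches go further: the model is trained on examples already formatted this way, so the structured behavior no longer depends on the prompt.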

    Why Ordinary Users Should Care

    Even if you never install OpenClaw yourself, projects like it influence the AI tools you use every day.

    • They push open models to become more capable.
    • They reduce dependence on a few big companies.
    • They help researchers study how reasoning actually works in AI systems.
    • They can eventually lower costs, since open models can be run without expensive subscriptions.

    Pros of OpenClaw

    Greater transparency
    Because OpenClaw is open source, its methods can be inspected. Researchers and developers can see how reasoning is structured instead of relying on a closed commercial system.

    Community-driven innovation
    Developers around the world can experiment, improve it, or adapt it for new tasks. This often accelerates progress.

    Lower cost and local control
    In principle, OpenClaw setups can be run on local hardware or private servers. That appeals to users and organizations concerned about data privacy or subscription fees.

    Faster experimentation
    Open projects can iterate quickly. When someone finds a better prompting method or fine-tuning trick, it can spread rapidly across the community.

    Cons of OpenClaw

    Complex setup
    It is not plug-and-play. Running it typically requires technical knowledge, hardware resources, and time.

    Inconsistent quality
    Because it is community-driven and built on open models, performance may vary. It may not match the reliability or polish of commercial systems.

    Limited support
    There is no guaranteed customer service. If something breaks, you rely on documentation or community help.

    Safety variability
    Commercial AI providers invest heavily in safety testing and alignment. OpenClaw setups may have fewer guardrails, depending on how they are configured.

    Why OpenClaw Became Controversial

    The controversy mainly centers on how it tried to replicate advanced reasoning features associated with proprietary AI systems.

    Imitating closed-model behavior
    Some critics argued that OpenClaw closely mimicked behaviors associated with proprietary systems, raising questions about whether it was ethically or legally acceptable to reverse-engineer or approximate certain features.

    Training data concerns
    There were debates about whether methods used in open reasoning replication might rely on outputs from proprietary models. If so, that raises intellectual property and licensing questions.

    Safety and misuse risks
    Because it aimed to unlock stronger reasoning in open systems, some observers worried it could lower the barrier for misuse, including automation of harmful tasks.

    Alignment debate
    OpenClaw became part of a broader argument in the AI world: should powerful reasoning capabilities be tightly controlled by a few companies, or openly distributed? Supporters saw it as democratization. Critics saw it as potentially reckless.

    Where It Fits In The Bigger AI Picture

    OpenClaw sits within the larger open-source AI ecosystem, alongside platforms like Hugging Face and openly released models such as Meta’s LLaMA. It reflects a growing tension between closed, highly controlled AI systems and open, community-driven alternatives.

    For ordinary users, the takeaway is simple:

    • OpenClaw represents an attempt to make advanced AI reasoning more open and accessible.
    • It offers transparency and flexibility.
    • It also brings technical complexity and safety debates.
    • Its controversy highlights deeper questions about who should control powerful AI capabilities.

    Even if you never directly use OpenClaw, the ideas behind it shape the tools you do use — especially as open models continue to close the gap with commercial AI systems.

    (Image credit: OpenClaw)