Strengthening Contract Security with Timelock and Multisig

At Synergetics, security and transparency are at the core of our smart contract deployment strategy. As part of our ongoing commitment to protecting the integrity of our ecosystem and the trust of our community, we’ve taken proactive measures to mitigate administrative risks related to proxy contract management on the Polygon network.

Following industry best practices and recommendations from our recent security audit, we have implemented a layered security approach combining a Time-Lock Controller and a Multi-Signature Wallet (2-of-3 threshold) to manage sensitive administrative actions. This safeguards against single points of failure and ensures the community has visibility on future upgrades.

Why Combine Timelock and Multisig?

Smart contract proxies allow for flexible upgrades, but without proper controls, the admin privileges can become a vulnerability. A private key compromise or human error could lead to catastrophic misuse of contract admin rights.

To prevent this, we adopted a two-pronged strategy:

1. Time-Lock Contract — Introduces a delay before privileged actions can be executed.

2. Multi-Signature Wallet — Ensures that no single individual has unilateral control.

This combination offers both technical and procedural safety:

  • The Time-Lock gives the community a minimum of 48 hours’ notice for any privileged operation.
  • The Multi-Signature Wallet (2-of-3) ensures that even if one private key is compromised, malicious actions cannot be executed without consensus.

Timelock Contract Details

We’ve deployed a standard, audited TimelockController contract on the Polygon network.

  • Timelock Contract Address:
    0x469f8Adb9ffAcDf7d5F3dD9a73be3154B90d689c

The contract enforces a minimum delay of 48 hours before executing sensitive administrative actions, providing transparency and time for the community to review and raise concerns.
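For readers who want to see the mechanics, here is a minimal, hypothetical sketch of how an operation could be queued against this timelock using web3.py. The ABI file path, target address, and calldata below are placeholders, not real values; in our actual workflow, proposals originate from the multisig via OpenZeppelin Defender rather than a single account.

```python
# Minimal, hypothetical sketch: queueing a privileged call on the
# TimelockController with web3.py. Placeholder values are marked; this is
# not our production tooling.
import json
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://polygon-rpc.com"))  # any Polygon RPC endpoint

TIMELOCK = "0x469f8Adb9ffAcDf7d5F3dD9a73be3154B90d689c"
with open("TimelockController.abi.json") as f:  # assumed local copy of the OZ ABI
    timelock = w3.eth.contract(address=TIMELOCK, abi=json.load(f))

MIN_DELAY = 48 * 60 * 60  # the enforced 48-hour delay, in seconds

# TimelockController.schedule(target, value, data, predecessor, salt, delay)
# queues the call; execute(...) with identical arguments only succeeds once
# the delay has elapsed, giving the community its review window.
tx = timelock.functions.schedule(
    "0x0000000000000000000000000000000000000000",  # placeholder: proxy/admin target
    0,                                             # no MATIC sent with the call
    b"",                                           # placeholder upgrade calldata
    b"\x00" * 32,                                  # predecessor: none
    b"\x01" * 32,                                  # salt: any unique 32 bytes
    MIN_DELAY,
).build_transaction({"from": "0x28694A5F7B670586c4Fb113d7F52B070B86f0FFe"})
# In practice this proposal is created and approved by the 2-of-3 multisig.
```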


Multi-Signature Wallet Setup

All admin-level privileges have been assigned to a multi-signature wallet, reducing the risk of single-key compromise.

  • MultiSig Wallet Address:
    matic: 0x28694A5F7B670586c4Fb113d7F52B070B86f0FFe
    Threshold: 2 of 3 Signers Required

Signer Addresses:

  • Signer 1: matic:0xdFdf1Da1f20498a9197e9Ba9a9f1D52b82e29Ea4
  • Signer 2: matic:0xE334a549DB2aB696715fA990eC6DB1Bf63F97644
  • Signer 3: matic:0xD3C646cB648d3DB8e36A476A117667a24Cd9be59

The combination of the time-lock and this multisig setup ensures that sensitive actions can only proceed after:

1. Community visibility and time for feedback.

2. Approval by at least two trusted signers.


Transparent Governance via Defender

We use OpenZeppelin Defender to manage the approval and execution workflow for administrative tasks. This enables:

  • Clear proposal tracking.
  • Secure approval process via multisig.
  • Public visibility of contract upgrades and administrative actions.

Our Pledge to the Community

Security is a moving target, and so is trust. Whenever we plan to upgrade or migrate to a new implementation contract, we commit to notifying the community in advance and providing sufficient notice via our communication channels.

We believe this approach not only meets but exceeds the baseline expectations for responsible contract management. We encourage our community to monitor the Timelock and Multisig addresses and to reach out with any questions or suggestions for further improving our governance framework.

Enhancing AI Agent Communication Effectively

Introduction

Communication between AI agents is like the conversation between two friends trying to solve a problem together. It needs to be smooth and clear. If there’s misunderstanding, things can go wrong quickly. Imagine asking your friend to pass the salt and getting pepper instead. That’s a small mix-up, but in AI, a communication error might be more problematic. When AI agents can’t communicate well, it can affect their tasks, which is why it’s so important to get it right from the start.

Many people find the idea of AI agents a bit confusing, but it’s not so different from humans talking to each other. Understanding some of the bumps along the road can help us appreciate the need for better solutions. AI agents can face hiccups in communicating, and that’s something many of us may not realize happens behind the scenes. By tackling these issues, we can improve how AI systems work together, helping to get better outcomes and solutions for those who rely on this technology.

Identifying Common Communication Issues

AI agents, much like people, can stumble upon communication roadblocks. These hiccups can revolve around misunderstandings, where one agent misinterprets the signals or data from another. It’s similar to when someone asks for directions, but the person giving the directions is unfamiliar with the landmarks being referred to. This kind of misunderstanding is quite common in the AI world.

Here are some typical problems AI agents might encounter:

  • Signal Interference: Just like a dropped call, AI agents can face interruptions in data exchange that cause a break in communication.
  • Data Misinterpretation: When one agent sends information, but it’s read incorrectly by another, leading to wrong conclusions or actions.
  • Protocol Mismatches: When different systems or applications use communication methods that don’t align or connect correctly.

Imagine two AI agents trying to coordinate on a task: one sends instructions, but the other misreads them. This results in actions that are out of sync with what was needed. These kinds of issues point to the importance of protocols and clear channels in AI communication. Recognizing these problems is the first step to improving how AI systems work together. Understanding the typical hurdles these agents encounter gives us a foundation for building better solutions to enhance their communication capabilities.

Solutions to Enhance Communication

To tackle the communication issues outlined earlier, it’s great to have some concrete solutions at hand. One key strategy is ensuring that AI agents follow well-defined protocols. Imagine this: protocols serve as guidelines, much like a map that helps navigate complex terrains. They set the rules on how data is packaged, shared, and interpreted. When protocols align with each other, they prevent those frustrating misunderstandings.
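To make this less abstract, here is a minimal sketch of what a shared protocol can look like in practice: a message envelope with a declared version that a receiving agent validates before acting. The AgentMessage class and its field names are illustrative assumptions, not a published standard.

```python
# Minimal sketch of a shared message envelope two agents might agree on.
# All names here (AgentMessage, PROTOCOL_VERSION) are illustrative.
from dataclasses import dataclass, field
import json, time, uuid

PROTOCOL_VERSION = "1.0"

@dataclass
class AgentMessage:
    sender: str                     # stable identifier of the sending agent
    recipient: str                  # stable identifier of the receiving agent
    intent: str                     # e.g. "task.assign", "task.status"
    payload: dict                   # intent-specific body
    version: str = PROTOCOL_VERSION
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    sent_at: float = field(default_factory=time.time)

def parse(raw: str) -> AgentMessage:
    """Reject messages that don't match the agreed protocol before acting on them."""
    data = json.loads(raw)
    if data.get("version") != PROTOCOL_VERSION:
        raise ValueError(f"protocol mismatch: {data.get('version')!r}")
    return AgentMessage(**data)
```

Anything that fails validation is rejected up front, before it can cause the out-of-sync actions described earlier.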

Another helpful approach is implementing secure communication channels. These channels ensure that messages pass safely and directly between agents without interference. Secure channels act like guarded paths, ensuring that no harmful interruptions disrupt the interaction between the agents.

Let’s not forget the importance of regular updates. Just as you would update your phone apps, updating AI software ensures they’re always equipped with the latest fixes and improvements, making communication smoother and more efficient. Keeping systems up-to-date helps in adapting to new patterns and solving previous glitches.

Tools and Technologies

Today, there is a wide array of tools designed to smooth out communication between AI agents. These tools offer various features, from user-friendly interfaces to advanced data processing capabilities, making them incredibly versatile. Let’s dive into some typical elements:

  • User-Friendly Platforms: Some tools are designed with usability in mind, so anyone can configure and manage them without needing a tech wizard.
  • Integration Features: These features allow different AI systems to connect seamlessly, facilitating a more cohesive communication flow.
  • Advanced Security: Ensuring the exchanged data remains safe from unauthorized access is a priority. Tools with strong security measures preserve the integrity of the communications.

Selecting the right tool for your specific needs can make all the difference in how efficiently agents communicate. The choice often boils down to the system’s compatibility and the ease of integration into existing structures.

Implementing Best Practices

Using best practices is the best bet for maintaining an efficient communication environment for AI agents. Start by regularly monitoring communication pathways. This involves checking if the data is flowing smoothly and identifying any points of failure. Routine monitoring acts like a health check-up for your system, catching anything wrong before it becomes a bigger issue.
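As a concrete illustration of routine monitoring, here is a small, hypothetical health check for an agent-to-agent link: record each exchange and flag the pathway when failures cluster. The class name and thresholds are illustrative assumptions.

```python
# Hypothetical health check for an agent-to-agent link: track recent outcomes
# and flag the pathway when the failure rate crosses a threshold.
from collections import deque

class LinkMonitor:
    def __init__(self, window: int = 100, max_failure_rate: float = 0.05):
        self.outcomes = deque(maxlen=window)   # True = delivered, False = failed
        self.max_failure_rate = max_failure_rate

    def record(self, delivered: bool) -> None:
        self.outcomes.append(delivered)

    def healthy(self) -> bool:
        if not self.outcomes:
            return True                        # no traffic yet, nothing to flag
        failures = self.outcomes.count(False)
        return failures / len(self.outcomes) <= self.max_failure_rate
```

In a real deployment, alerts from a check like this would feed the feedback loops described next.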

Consider setting up feedback loops. Feedback helps developers identify what is working and what isn’t, allowing them to refine and adapt protocols as necessary. Consistent reviews and tweaks keep the system aligned with the ever-changing demands.

Finally, collaboration is another critical aspect. Involve specialists who understand the nuances of AI communication. They can offer unique insights and tailored solutions to help your system remain robust and reliable over time.

Takeaway Thoughts

Clear, reliable communication among AI agents isn’t just a technical goal; it’s a necessity for any system relying on AI to deliver consistent results. By understanding typical problems and exploring viable solutions, you can ensure your AI agents communicate as effectively as possible.

Take these insights and think about how they fit into your operations. Addressing these challenges can lead to smoother workflows, more effective interactions between systems, and ultimately, better outcomes for your projects. Confidently tackling these points helps create a future where AI systems work together seamlessly, benefiting businesses and users alike.

Ensuring seamless communication between AI agents requires strategic methods and the right tools. If you’re aiming to improve how your AI systems interact, Synergetics.ai offers solutions tailored to create effective communication pathways. Explore how our innovative strategies can make a difference in your AI-driven projects. Learn more about communication between AI agents and see how you can elevate your system’s performance.

.TWIN: The First AI Agent with a Wallet

As AI and blockchain converge, the need for trusted, interoperable infrastructure becomes critical. That’s why we’re proud to introduce .TWIN domains — a next-generation domain system that empowers autonomous agents with a secure identity and wallet, built for seamless interaction within decentralized ecosystems.

Developed in partnership with Synergetics.ai — a pioneer in autonomous AI systems and a participant in MIT Media Lab’s Decentralized AI Initiative — .TWIN domains unlock agent-to-agent communication through Synergetics’ patented AgentTalk Protocol. This protocol enables decentralized, cross-platform messaging with embedded trust and verification, laying the groundwork for scalable AI automation across industries.

Every .TWIN domain functions as both a wallet and identity layer for AI agents, redefining how they identify, communicate, and transact onchain.

And while .TWIN domains are designed for AI agents, they’re open to everyone. Whether you’re a builder, a collector, or just getting started in Web3, you can claim a .TWIN to simplify crypto payments, build your onchain identity, and tap into the future of AI-native interactions.

Why Choose .TWIN Domains?

1. AI Agent Wallets

AI agents can now own wallets and verified identities through .TWIN domains, enabling secure transactions and collaborations across onchain platforms with full autonomy.

2. Simplify Crypto Payments

.TWIN domains replace long, complex wallet addresses with a human-readable name, making crypto payments faster and more efficient, both for personal transactions and across onchain platforms.

3. Login with Unstoppable

Use your .TWIN domain to securely log into hundreds of onchain apps, including DeFi, gaming, and other onchain systems. No passwords required — just a trusted onchain identity for easy, seamless access.

Unlock More Features with Your .TWIN Domain

Your .TWIN domain also unlocks:

  • Build your onchain reputation with a trusted, verifiable UD.me profile and network with others.
  • Build your own onchain website powered by IPFS, establishing a permanent onchain presence.
  • And much more, with full control over your onchain identity.

Your Onchain Experience Starts Here with .TWIN

Whether you’re part of the AI ecosystem or a regular user looking to simplify your crypto payments and build your onchain identity, .TWIN domains provide the tools you need to navigate the onchain world with ease and security.

Claim your .TWIN domain today and join the future of secure, autonomous AI transactions and simplified crypto payments.


Raghu Bala is Founder of Synergetics.ai, an AI startup based in Orange County, California. He is an experienced technology entrepreneur, an alumnus of Yahoo, Infospace, Automotive.com, and PwC, and has had four successful startup exits.

Mr. Bala holds an MBA in Finance from the Wharton School (University of Pennsylvania), an MS in Computer Science from Rensselaer Polytechnic Institute, and a BA/BS in Math and Computer Science from the State University of New York at Buffalo. He is the Head Managing Instructor at 2U, facilitating participants through MIT Sloan courses in Artificial Intelligence, Decentralized Finance and Blockchain. He is also an Adjunct Professor at VIT (India), a former Adjunct Lecturer at Columbia University, and a Deeptech Mentor at IIT Madras (India).

He is a published author of books on technical topics and has been a frequent online contributor for the last two decades. His latest work includes co-authoring the “Handbook on Blockchain” for Springer-Verlag, serving as a Contributing Editor of “Step into the Metaverse” from John Wiley Press, and various technical articles on Medium.com.

Mr. Bala has spoken at several major conferences worldwide, including the IEEE SmartComp Blockchain Panel (Helsinki), the Asian Financial Forum in Hong Kong, the Global Foreign Direct Investment Conference in Sydney (Australia) and Huzhou (China), Blockchain Malaysia, IoT India Congress, Google I/O, and several more. He has also served as a Board member of AIM, the global industry association that connects, standardizes and advances automatic identification technologies.

His current areas of focus include Product Development, Engineering and Strategy in the startups related to Agentic AI, Autonomous Agents, Generative AI, IoT, Artificial Intelligence, and the Metaverse.  His industrial domain knowledge spans Automotive, Retail, Supply Chain & Logistics, Healthcare, Insurance, Mobile & Wireless, and more.

Securing AI Agent Communication: Decentralized Identity & Protocol

1. What are the biggest challenges in enabling AI agents to communicate securely across different enterprises? 

There are two important aspects to this communication:

  • Identity
  • Protocol

Let us do a deeper dive on each of these aspects.

IDENTITY

Today, AI Agents are being built for use within enterprises in such a manner that they are simply extensions of robotic process automation (RPA) scripts. This is a major flaw. AI Agents have to have permanent IDs, because without identity there is no traceability or accountability as to who or what performed a particular task. This accountability and traceability exists for human operators because everyone has an Employee ID within an organization. AI Agents have to be accounted for, from a security standpoint, at the same level as humans, not as RPA scripts.

The identity of an AI Agent within an organization can be tied to the Identity and Access Management (IAM) system of that enterprise, which may be Okta, Microsoft Active Directory, etc. In the real world, this is tantamount to a Driver’s License, which suffices for movement throughout America, even for domestic air travel.

Now, if we extend the AI Agent’s reach outside of an enterprise and need it to communicate with AI Agents beyond it, this crosses the trust boundaries governed by the IAM. So, how can trust be established between two AI Agents across enterprise or trust boundaries?

A complex and unscalable approach would be to federate the IAMs of every pair of peering enterprises. This is cumbersome because it becomes an N(N-1)/2 problem: with 100 enterprises, for example, that is 100 × 99 / 2 = 4,950 pairwise federations to negotiate and maintain.

Now, if we instead use a decentralized identity and access management system (a Registry) and a Decentralized ID, then any Agent can discover and authenticate any other Agent. This is a scalable and inexpensive solution to a complex problem. In the real world, it is tantamount to carrying a Passport for international air travel. This approach can also be used within an organization if an enterprise chooses to do so.

Another important aspect is how this identity is held by an AI Agent.

Each AI Agent, whether operating within a trust boundary or across trust boundaries, needs a receptacle to carry its identity. In the real world, this is similar to how humans carry a wallet with their Driver’s License, cash, credit cards, medical cards and more. So a Wallet is needed to hold the identity of an Agent.

PROTOCOL

Once an AI Agent is equipped with a Decentralized ID and a Wallet, and is registered in a Registry, it is ready to communicate with other AI Agents. But to do that, one needs a protocol – i.e. a way of communicating.

This protocol needs two aspects:

  • A way to authenticate the other agent(s)
  • A vocabulary for communicating

The authentication step is common to any interaction, as it is not context specific. The communication vocabulary, however, is context specific.

For instance, if two agents are trading with one another on a stock exchange, they are communicating about buying and selling equities at a given price. Whereas if two agents are communicating on the topic of health insurance, they may be discussing the ICD-10 and CPT codes appropriate for medical billing.
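To make the distinction concrete, here is a hypothetical sketch of two context-specific vocabularies riding on the same authenticated channel. The schemas and field names are illustrative assumptions; real deployments would use agreed standards such as FIX for equities or the ICD-10/CPT code sets for medical billing.

```python
# Hypothetical context-specific vocabularies sharing one authenticated channel.
# Schema contents are illustrative only.
EQUITY_TRADE_SCHEMA = {
    "context": "equities",
    "required": ["side", "ticker", "quantity", "limit_price"],
}

MEDICAL_BILLING_SCHEMA = {
    "context": "medical_billing",
    "required": ["icd10_code", "cpt_code", "claim_amount"],
}

SCHEMAS = {s["context"]: s for s in (EQUITY_TRADE_SCHEMA, MEDICAL_BILLING_SCHEMA)}

def validate(context: str, payload: dict) -> None:
    """Check a payload against the vocabulary agreed for this context."""
    schema = SCHEMAS.get(context)
    if schema is None:
        raise ValueError(f"unknown context: {context!r}")
    missing = [k for k in schema["required"] if k not in payload]
    if missing:
        raise ValueError(f"missing fields for {context}: {missing}")

# A well-formed trading message passes; a malformed one raises.
validate("equities", {"side": "buy", "ticker": "ACME", "quantity": 100, "limit_price": 42.5})
```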

2. How can AI agent authentication and identity management prevent security risks?

Identity Management and Authentication are key building blocks in establishing trust between AI Agents. As described earlier, one needs a decentralized ID, a Registry, and a protocol for communication to occur between any two AI Agents.

Now, the first half of that communication is to authenticate the other agent. Say Agent A wishes to authenticate Agent B. A number of trust factors would have to be established when each of these agents is initially registered on the Registry.

a. Provenance: 

Which entity created this agent? Are they legitimate? An example of this is App registration on the Apple App Store, where Apple administers a rigorous background check on entities attempting to submit a mobile application for listing. Similar checks need to be done as part of submission to the Registry.

b. KYA:

To prove the legitimacy of an Agent, a Know-Your-Agent (KYA) process needs to be established, with background checks (police, Interpol, FBI and several others) similar to KYC/AML.

c. Secure Execution Environment: 

To avoid a legitimate agent being infected by malicious code that makes it behave in an improper manner, it is paramount that agents operate within a secure execution environment.
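Putting provenance, KYA, and secure execution together, a minimal sketch of registry-based authentication might look like the following. The registry interface and the HMAC stand-in for signatures are assumptions for illustration; a production system would use decentralized IDs and verifiable credentials with proper asymmetric signatures.

```python
# Hypothetical challenge-response authentication against an agent registry.
# Standard library only; the shared-key HMAC here is a stand-in for real
# asymmetric signatures over decentralized IDs.
import hmac, hashlib, os

REGISTRY = {}  # agent_id -> registered key material (stand-in for a decentralized registry)

def register(agent_id: str, key: bytes) -> None:
    REGISTRY[agent_id] = key

def authenticate(agent_id: str, sign) -> bool:
    """Agent A verifies Agent B: B must sign a fresh random challenge."""
    key = REGISTRY.get(agent_id)
    if key is None:
        return False                       # unknown agent: fails the provenance/KYA check
    challenge = os.urandom(32)
    response = sign(challenge)             # performed inside B's secure execution environment
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

# Example: register Agent B, then authenticate it.
key_b = os.urandom(32)
register("agent-b", key_b)
print(authenticate("agent-b", lambda c: hmac.new(key_b, c, hashlib.sha256).digest()))  # True
```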

3. What industries are most likely to benefit first from widespread AI agent adoption?

There are many use cases where agent-to-agent communication would improve efficiency and reduce cost. Let us describe a common one in Healthcare.

Healthcare

In a typical scenario, when a patient arrives at a clinic for a health checkup, the patient presents their Health Insurance ID to the admin person. The admin person then calls the Health Insurance company to verify the legitimacy of the Health Insurance ID. This process is still done manually in most cases. Upon completion of this check, the patient is admitted for consultation. Afterwards, the notes are summarized and the medical billing codes are negotiated with the Health Insurance company.

If we decompose this example into a workflow, we can easily identify the steps that can be solved by agents (a sketch follows the list).

  • Insurance ID Verification – Verification Agent (2-Party)
  • Consultation – Human
  • Transcription – Transcription Agent
  • Summarization – Summarization Agent
  • Medical Billing – Billing Agent (2-Party)
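One hypothetical way to express that decomposition in code, marking which steps are agentic and which remain human (the step and agent names mirror the list above but are otherwise illustrative):

```python
# Hypothetical decomposition of the clinic visit into agentic vs. human steps.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    actor: str        # "human" or the responsible agent
    parties: int = 1  # 2 for steps that span trust boundaries (clinic <-> insurer)

CLINIC_VISIT = [
    Step("Insurance ID verification", actor="VerificationAgent", parties=2),
    Step("Consultation", actor="human"),
    Step("Transcription", actor="TranscriptionAgent"),
    Step("Summarization", actor="SummarizationAgent"),
    Step("Medical billing negotiation", actor="BillingAgent", parties=2),
]

automatable = [s.name for s in CLINIC_VISIT if s.actor != "human"]
print(automatable)  # every step except the consultation itself
```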

4. How does AI agent interoperability impact regulatory compliance in industries like finance and healthcare?

In Healthcare and Finance there are compliance regimes such as HIPAA and SOC 2. AI Agent communications are in fact safer than human-in-the-loop processes in many cases, because AI Agents do not:

  • Leave a paper trail, e.g. writing critical information on Post-it notes or notepads, as humans often do.
  • Talk loudly or spell out key information without realizing it could be recorded.
  • Act without an audit trail; every interaction can be logged.

Further measures include:

  • Protocols in agent-to-agent communication can be encrypted
  • Storing information in repositories in a HIPAA- or SOC 2-compliant format
  • Masking Personally Identifiable Information (PII) whenever needed
  • Providing audit trails for every action and interaction with other agents or humans
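As an illustration of the PII-masking point above, here is a deliberately simplistic sketch that redacts common patterns before a message leaves a compliance boundary. The patterns shown are illustrative, not a complete PII taxonomy; real deployments need far richer detection (names, addresses, medical record numbers) and a reversible tokenization vault.

```python
# Simplistic, illustrative PII masking pass applied before a message is
# logged or forwarded across a trust boundary.
import re

PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(mask_pii("Patient reachable at 555-867-5309, SSN 123-45-6789."))
# -> Patient reachable at [PHONE REDACTED], SSN [SSN REDACTED].
```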

5. What ethical considerations come with AI agents handling autonomous transactions?

Ethical considerations matter whenever agents are used in workflows. In our opinion, state-of-the-art AI Agents are still not at the maturity level, industry-wide, to make ethical or moral decisions.

To resolve this, when there are moral or ethical dilemmas, it is best to include Humans in the Loop as part of the decision-making process. Decisions that can be automated without such considerations are the ones Agents can make autonomously.

For autonomous agents, examples of junction points where ethical considerations arise include:

  • Healthcare – if a patient is issued an insurance denial by an insurance bot, there need to be provisions for a Human in the Loop to review the case and make a decision, as there may be life-threatening issues.
  • Finance – a loan denial may involve a customer going through hardship. Quite often hardships can be resolved with a payment plan and a restructuring of finances. Again, a Human in the Loop may be needed in a situation such as this to show empathy.

6. How can businesses ensure AI agents remain aligned with human decision-making rather than operating independently?

Businesses can ensure AI Agents and Humans align on decision making by designing workflows with Human in the Loop.  This will ensure that there is oversight, traceability, accountability, observability and governance in all workflows.  

7. What role do decentralized architectures play in AI agent security and reliability?

As mentioned in the section on Identity and Access Management, decentralized architectures are key to establishing communication between Agents.

Over time, we foresee all humans having their own Digital Twins.  These Digital Twins will operate on behalf of humans and carry out tasks such as shopping, searching, booking reservations, and more.

For this reason, unlike other AI Agents, AI Agents made by Synergetics are NFTs from the ground up, with Wallets and Identity, ready to navigate the vast resources of the world wide web.

8. How will AI agents evolve from assisting human workflows to managing end-to-end processes autonomously?

In many enterprises, knowledge on work processes is buried with the staff working at these organizations.  We call this “Tribal Knowledge”.  

In order for enterprises to transition from AI Agent assisted human workflows to AI Agents operating workflows autonomously, it is necessary for enterprises to bring this tribal knowledge to the surface.

Once these workflows are clearly understood, one can identify which can be automated and run autonomously by AI Agents and which require human intervention.

9. What lessons can enterprises learn from early adopters of AI-driven automation?

In this early stage, we are seeing a lot of companies claiming to have AI Agents but most are simply thin veneers on top of an LLM.

To have true AI Agents, one needs to consider:

  • Identity
  • Discoverability
  • Traceability, Observability, Accountability
  • Transaction Management, and more

These early AI Agents are simple prototypes, with very little thought given to long-term considerations. Hence, enterprises can learn from these experiences and evolve toward industrial-strength AI Agents that are more capable, with sound engineering principles behind them.

10. What are the most common misconceptions about AI agents and their real-world applications?

Several common misconceptions are:

  1. Human job loss: While there are concerns about some repetitive work that can easily be automated, humans have always upskilled to better, higher-value work through the multiple Industrial Revolutions of the past. This time will be no different. In most complex workflows there will be a need for Humans in the Loop, so job-loss fears are overblown. New vocations will come about, e.g. Prompt Engineer, and some older vocations will evolve, e.g. Paralegal.
  2. Artificial General Intelligence: In AI there are seven levels of evolution, and one of them is AGI. Talk of AGI is again overblown, because decision making in many cases is not simply the application of logic to a problem. It goes well beyond that.

    Other factors include:
  • Sentiment – e.g. many a time humans are not logical but biological, and decide based on the wisdom of the crowds
  • Emotions – e.g. machines are not capable of emotions
  • Ethical considerations – e.g. these need a human in the loop
  • Moral considerations – e.g. these need a human in the loop
  • Sensory perception – e.g. an automated car decides to take a turn based on the distance and speed of oncoming traffic



The Subject That No One Is Talking About in Agentic AI Today: Identity

The Missing Piece in Agentic AI

Everyone is talking about agentic AI systems — how they will revolutionize business, streamline automation, and enhance human-machine collaboration.

But almost no one is talking about the foundational challenge that will determine whether these systems succeed or fail: identity.

Right now, the AI agents being built by tech giants and startups alike are nameless, faceless, and transient. They exist for a moment—running a task, executing a script — before vanishing into the digital ether. This lack of identity means there is:

🚫 No traceability – No way to verify which AI agent performed an action.
🚫 No accountability – No mechanism to hold AI systems responsible for their decisions.
🚫 No trust – No persistent identity for agents to securely interact with humans or other AI systems.

And yet, trust is the bedrock of every system humans rely on — whether in financial transactions, business negotiations, or even basic communications. Without persistent, verifiable identity, AI systems will remain untrusted and unscalable.

We’ve tackled this problem head-on by creating AI agents that can be permanently identified, tokenized, and securely stored in a digital wallet.

Let’s dive into why this is the missing key in agentic AI — and why telcos, enterprises, and policymakers need to pay attention.

The Human Parallel: How Identity Works in the Real World

A human identity follows a clear, traceable lifecycle:

1️⃣ Birth – You are assigned a birth certificate that permanently registers your identity.

2️⃣ Life – You carry IDs (such as a driver’s license, passport, employee badge) to prove who you are in different contexts.

3️⃣ Transactions – You sign contracts, pay bills, and interact with others using your verified identity.

4️⃣ Death – A death certificate marks the end of your legal presence.

Now, compare this to today’s AI agents:

No birth record – An AI agent is spun up at will, with no permanent ID.

No verifiable transactions – There’s no universal way to prove which agent did what.

No traceability – If an AI-generated deepfake spreads disinformation, there’s no way to track it back to its source.

This lack of continuity is the Achilles’ heel of AI systems. The solution? Tokenized, persistent identity.

How Tokenized Identity Solves the Trust Problem

In computer science, a daemon is a background process that runs continuously, often providing essential system functions without direct user interaction. Humans, in many ways, resemble long-running daemons — once born, we persist continuously until death, with an uninterrupted existence and a traceable identity from birth to death. Our identity is recorded, updated, and verified across systems, ensuring we are accountable for our actions throughout our lifetimes.

However, AI agents do not function this way. Unlike humans, AI agents are not persistent by default — they can be spun up, perform a task, and shut down in seconds, leaving no inherent trace of their existence. A single AI agent might execute a financial transaction, generate a piece of content, or initiate a system action before disappearing, with no way to verify who — or what — was responsible for that action. Without permanent identity and traceability, AI agents exist as ephemeral, unaccountable entities, making them vulnerable to misuse, fraud, and manipulation.

This is precisely why tokenized AI identity is critical. If an AI agent executes a harmful action — whether due to a coding flaw, a bad actor’s manipulation, or unintended consequences — how do we track the responsible party? Without a persistent identifier, it becomes impossible to assign accountability, regulate AI behaviors, or create reliable auditing mechanisms. If a bot spreads misinformation, completes a fraudulent transaction, or executes an unauthorized system change, and then disappears upon shutdown, there is no trail leading back to its source.

Tokenization solves this by ensuring that AI agents have a permanent, immutable identity — one that persists whether the agent is running or not. With tokenized AI, every action is traceable, every agent is accountable, and organizations can ensure responsible AI deployment. The Synergetics AgentWorks platform has implemented this at scale, ensuring that each AI agent, once created, has a lifelong, verifiable identity — a necessary step in making agentic AI systems secure, transparent, and fit for enterprise and global adoption.

At Synergetics.ai, we’ve developed a tokenization framework that permanently assigns a verifiable, blockchain-backed identity to every AI agent. We did the research, built what is at the moment the only framework of its kind, and wouldn’t champion this product-centric approach so adamantly if we didn’t see tremendous societal value in it:

📌 Tokenized Agents: Each AI agent is issued a unique, permanent ID upon creation.

📌 Blockchain Verification: The ID is stored on a secure ledger for full traceability.

📌 Zero-Knowledge Proofs (ZKP): Identity can be verified without exposing sensitive data—powered by Privado.ai’s ID framework.

📌 Wallet Storage: AI agents carry their identity in a digital wallet, just like humans carry passports and driver’s licenses.
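As a rough illustration of how these four pieces fit together, here is a hypothetical sketch of an agent identity record: a permanent ID minted at creation and a ledger reference for verification, after which the record lives in the agent’s wallet. The field names are illustrative assumptions, not our production schema.

```python
# Hypothetical shape of a tokenized agent identity. Field names are
# illustrative only; this is not the actual Synergetics/Privado.ai schema.
from dataclasses import dataclass
import hashlib, time, uuid

@dataclass(frozen=True)          # frozen: the identity is immutable once minted
class AgentIdentity:
    agent_id: str                # permanent ID assigned at creation
    issuer: str                  # entity that created/registered the agent
    minted_at: float             # creation timestamp
    ledger_ref: str              # pointer to the on-chain record for verification

def mint_identity(issuer: str) -> AgentIdentity:
    agent_id = str(uuid.uuid4())
    minted_at = time.time()
    # Stand-in for a blockchain transaction hash anchoring the record.
    ledger_ref = hashlib.sha256(f"{agent_id}:{issuer}:{minted_at}".encode()).hexdigest()
    return AgentIdentity(agent_id, issuer, minted_at, ledger_ref)

identity = mint_identity("example-enterprise")  # lives in the agent's wallet thereafter
```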

This approach enables three critical functions for agentic AI:

Trust & Accountability – Enterprises can verify which AI agent made a decision or completed a transaction.

Cross-Enterprise Communication – Agents can authenticate themselves when working across organizations.

Security & Compliance – AI systems can meet regulatory and ethical requirements in enterprise and government applications.

The Role of AI Wallets: Storing and Managing Identity

If AI agents are to operate autonomously, they need more than just an identity — they need a secure way to store and use it.

This is where Agent Wallets come in.

🛠 AgentWallet is a secure digital storage for AI agent identity, assets, and credentials. Just as a human carries IDs and credit cards in a physical wallet, an AI agent must have a trusted place to store its identity and interact with the digital world.

🔹 Key Features of an AI Wallet:

• Stores permanent agent identity
• Holds digital assets, cryptographic signatures, and credentials
• Allows for seamless authentication across enterprises
• Enables secure transactions between AI agents
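Building on the identity sketch earlier, here is a hypothetical, simplified shape for such a wallet; it is not the actual AgentWallet interface, and the HMAC signing is a stand-in for real cryptographic credentials.

```python
# Hypothetical, simplified wallet: holds the agent's identity plus
# credentials, and signs outbound requests on the agent's behalf.
import hmac, hashlib

class Wallet:
    def __init__(self, identity: "AgentIdentity", signing_key: bytes):
        self.identity = identity
        self._signing_key = signing_key         # kept inside the secure environment
        self.credentials: dict[str, str] = {}   # e.g. {"kya": "...", "enterprise_id": "..."}

    def add_credential(self, name: str, value: str) -> None:
        self.credentials[name] = value

    def sign(self, message: bytes) -> bytes:
        """Authenticate an outbound message on behalf of the agent."""
        return hmac.new(self._signing_key, message, hashlib.sha256).digest()
```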

Enterprise vs. Public Identity: A Two-Tiered System

Just as humans carry different forms of ID, AI agents will require two distinct identity types: an enterprise ID and a public ID.

In the same way that a person receives a state-issued ID — such as a driver’s license — to verify their identity within their home state or country, an AI agent operating within an enterprise must also have a verifiable enterprise ID to authenticate itself in internal systems. This enterprise ID ensures that the AI agent is recognized, trusted, and authorized to perform specific functions within the organization’s secure, private network.

However, when a human crosses international borders, their state-issued ID is no longer sufficient — they need a passport to validate their identity across countries. Similarly, when an AI agent needs to operate outside its enterprise, interacting with external AI agents, digital services, or other organizations, it requires a public ID.

This public, blockchain-backed identity serves as a decentralized verification mechanism, ensuring that the agent is authenticated and trusted beyond its original enterprise environment. Just as a passport provides proof of identity, nationality, and authorization for international travel, an AI agent’s public ID enables it to securely interact with external systems, negotiate transactions, and build verifiable trust in agent-to-agent communications.

1️⃣ Enterprise ID (Private Blockchain)

🔹 Issued within a company for internal AI agents
🔹 Ensures secure transactions & compliance
🔹 Operates on Hyperledger Fabric or similar private blockchains

2️⃣ Public ID (Decentralized Ledger)

🔹 Allows AI agents to interact outside the enterprise
🔹 Used for cross-company AI negotiations, digital commerce
🔹 Runs on a public blockchain for transparency & verification
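Continuing the earlier sketches, the dual-identity model reduces in code to a simple routing rule: present the enterprise credential inside the trust boundary and the public one outside it. A hypothetical sketch:

```python
# Hypothetical routing of identity credentials across trust boundaries:
# enterprise ID inside the organization, public (passport-like) ID outside.
def credential_for(wallet: "Wallet", counterparty_domain: str, home_domain: str) -> str:
    if counterparty_domain == home_domain:
        return wallet.credentials["enterprise_id"]   # private-chain credential
    return wallet.credentials["public_id"]           # public-ledger credential
```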

Without this dual-identity model, AI agents will be restricted in scope — unable to operate securely outside their original environment.

Why Telcos & Enterprises Must Act Now

The identity problem in AI isn’t a theoretical issue — it’s already playing out in real-world security concerns:

🚨 AI Deepfakes – Bots impersonate real people, spreading misinformation.
🚨 Automated Fraud – AI agents execute unauthorized financial transactions.
🚨 Data Leaks & Privacy Risks – Anonymous AI agents collect and misuse user data.

By adopting tokenized identity and AI wallets, enterprises and telcos can:

✅ Ensure traceability in AI-driven decisions
✅ Secure agent-to-agent communications
✅ Meet evolving AI governance & compliance standards

Final Thought: AI Identity is a Make-or-Break Issue

AI systems are evolving fast, but trust will determine their adoption. The next step? Embedding identity into the DNA of agentic AI. This will provide individual and enterprise users with:

✅ Permanent, blockchain-backed identity
✅ Secure, verifiable agent transactions
✅ Wallets for AI to store credentials & assets


Brian Charles, PhD, is VP of Applied AI Research at Synergetics.ai (www.synergetics.ai). He is a subject matter expert in AI applications across industries and the commercial and academic research around them, a thought leader in the evolving landscape of generative and agentic AI, and an adjunct professor at the Illinois Institute of Technology. His insights have guided leading firms, governments, and educational organizations around the world in shaping their development and use of AI.

(Part 2) AI Workloads Are Surging in the Enterprise. Can Telecom Players Support Their Needs?

Note: This is the second of a two-part series exploring the rise of autonomous businesses driven by agentic AI systems. In Part 1, I focused on how enterprises are adopting these systems to revolutionize operations and decision-making. Part 2 delves into how telcos and telecom-adjacent companies must evolve to support this transformation, building the infrastructure for agent-to-agent communication.


Part 2: Telcos Must Build the Infrastructure to Support Agentic AI, But They Don’t Know How to Do It.

The Evolution of Telecom: Supporting Enterprise Innovation

In Part 1, we explored how enterprises are rapidly adopting agentic AI systems to move toward autonomous business models.

This shift broadly parallels the historical evolution of telecom:

• Telcos first connected individual people and then people within enterprises (e.g., PBX systems).

• They then expanded to enable global communication between enterprises.

• Now, telcos must evolve again to support agent-to-agent communication in the age of AI.

Here’s the challenge: communication outside the enterprise is much more complex. When AI enters the picture and data workloads increase, organizations that are anything less than agentic in nature struggle to function. Such an agentic AI future for enterprises requires identity, trust, authentication, and authorization to operate at scale and autonomously—capabilities that telcos are uniquely positioned to deliver by virtue of their heritage as regulated entities and their continual investment in nascent technologies. At the same time, the world of decentralized, autonomous services that support agentic AI systems has historically not been a familiar operating environment for them.

The OSI Model and the Future of Telco Networks

Just as the OSI model created a framework for traditional telecommunications networking, it can guide telcos in building the next-gen infrastructure for agentic AI:

The OSI model is a seven-layer conceptual model framing how the disparate hardware and software systems that comprise a telecom network must work together to send data across technical, geographical and political boundaries.

Layers 1, 2 and 3 of the OSI model address the physical, data link and network layers respectively.

Layer 4 (Transport): Here, telcos must ensure low-latency, high-bandwidth connectivity across BLE, WiFi, and cellular networks.

Layer 5 (Session): Persistent, secure agent sessions must be supported to enable cross-enterprise collaboration.

Layer 6 (Presentation): Protocols are needed to ensure seamless communication between diverse AI systems.

Layer 7 (Application): App-level solutions are required in order to allow agents to discover, connect, and collaborate.

The Role of Telcos in Agent-to-Agent Communication

To enable secure, reliable, and scalable agent-to-agent communication, telcos must address several key challenges:

1. Transporting All of That Data:

Telcos need to enable enterprise-level support for petabytes of data flowing into and out of corporations every moment of every day. To accomplish this, telecoms must provide a secure execution environment for AI agents in the transport of their data. The AgentVM by Synergetics (Layer 4) enables data to traverse networks securely and efficiently by supporting AI-native cloud and edge processing across telco infrastructures.

2. Authentication and Authorization:

Telcos must provide infrastructure that enables agents to authenticate each other and exchange data securely. This aligns with the Session (Layer 5) and Presentation (Layer 6) functions of the OSI model.

3. Enabling Seamless Communication:

For agents that traverse networks, Telcos can leverage AgentFlow (Layer 5 and Layer 6) — a patented protocol for inter-agent communication. It ensures real-time, asynchronous interactions across enterprise boundaries.

4. Establishing Identity and Trust:

AI agents operating across enterprises need verified identities to ensure secure interactions. This is where tools like AgentRegistry from Synergetics come in (Layer 7), enabling zero-knowledge-proof identity verification and Know Your Agent (KYA) compliance.

5. Powering Transactions and Digital Commerce:

Telcos must support agent-driven transactions with solutions like AgentWallet (Layer 7), which handles digital assets, identity, and currency for autonomous agents.

Telcos at a Crossroads

The future of telecom isn’t just about connecting people—it’s about enabling autonomous AI ecosystems that will drive success for their enterprise customers. Telcos must:

  • Invest in AI-native infrastructure to meet the needs of enterprise AI.
  • Adopt decentralized, autonomous tools to integrate AI-driven identity, trust, and communication.
  • Build the next-gen OSI stack that supports agentic AI at scale.

The next wave of telecom innovation isn’t just AI-powered.  It’s AI-native. The question is: Are telcos ready to lead?



AI Workloads Are Surging in the Enterprise. Can Telecom Players Support Their Needs?

Note: This is the first of a two-part series exploring the rise of autonomous businesses driven by agentic AI systems. In Part 1, I focus on how enterprises are adopting these systems to revolutionize operations and decision-making. Part 2 will delve into how telcos and telecom-adjacent companies must evolve to support this transformation, building the infrastructure for agent-to-agent communication. Stay tuned!


Part 1: Enterprises Are Embracing Agentic AI. Is Yours? And Is Your Telecom Provider Ready?

The Rise of the Autonomous Business

As businesses push toward automation and efficiency, we are witnessing the emergence of the autonomous enterprise. These organizations rely on agentic AI systems—independent, intelligent agents—to optimize decision-making, drive innovation, and handle real-time operations.

Having spent 20+ years serving telecom and enterprise companies around the globe, I’ve realized that the meteoric presence of highly interconnected, real-time AI apps and systems like ChatGPT, Gemini and other enterprise systems communicating with each other and ingesting large datasets may be the biggest boon ever known to enterprises – and telecom companies’ biggest existential threat. This evolution of managed AI to agentic AI is the next frontier for any organization that consumes data or transports it.

David Arnoux’s model of “The 5 Levels of the Autonomous Business” perfectly captures this evolution for the enterprise company:

Let’s break this down…

Level 1 (Manual): Humans control all tasks. Tech is limited to record-keeping.

Level 2 (Assisted): Automation supports repetitive tasks, while humans make major decisions.

Level 3 (Semi-Autonomous): Systems take over day-to-day tasks; humans step in for complex decisions.

Level 4 (Fully Autonomous): Most operations and decisions are automated. Teams oversee performance and handle edge cases.

Level 5 (Self-Evolving): Processes refine themselves via machine learning—for example, optimizing supply chains or marketing campaigns automatically.

We are rapidly moving into Level 4 and beyond, where businesses will increasingly depend on autonomous AI agents to handle everything from logistics to customer service to cybersecurity.

The Enterprise Connection: Agentic AI in Action

To understand how agentic AI systems function and communicate within an enterprise, consider the role of Private Branch Exchange (PBX) systems from the telecom world. Special note: telcos should pay attention here because what I’m about to explain is going to be vital for your future survival.  Here’s the quick walkthrough:

In the early days of telephony, enterprises used PBXs to connect employees within their organization, enabling seamless internal communication while relying on telcos to connect them to the outside world.

Similarly, modern enterprises will use agentic AI systems to automate and optimize internal processes, with AI agents acting as decision-makers and communicators within the organization.

Imagine a logistics company using AI agents to dynamically reroute shipments in response to weather disruptions. These agents must communicate internally to adjust delivery schedules, optimize routes, and inform stakeholders.

However, this is just half the picture. To fully realize the potential of autonomous businesses, these AI agents must also connect and collaborate with agents outside the organization. In the legacy telecom world of the PBX, this is where the communication ends.  Voice calls stayed inside the enterprise; communicating externally required a different set of telecom technologies.  This brings us to the challenges of identity, trust, and communication infrastructure—a topic we’ll explore in Part 2.

What’s Next?

To meet the demands of autonomous enterprises, telecom companies will need to build the next generation of communication infrastructure that supports agent-to-agent connectivity. Much like the OSI model revolutionized traditional telecommunications, it can serve as a blueprint for integrating agentic AI systems into the fabric of modern networks.

Stay tuned for Part 2, where we’ll explore how telcos and telecom-adjacent players must adapt to this new reality.


ChatGPT Goes to Washington: OpenAI’s Big Play for Government AI

The AI revolution just got a policy upgrade. OpenAI has unveiled ChatGPT-Gov, a new, U.S. government-exclusive version of its AI assistant, designed to support federal, state, and local agencies in tackling complex challenges.

Why This Matters

Governments have long struggled to balance innovation with security, privacy, and responsible AI deployment.  With ChatGPT-Gov, OpenAI is signaling that AI isn’t just for boardrooms and startups.  It’s a tool that can empower policy analysts, public servants, and decision-makers to operate more efficiently.

Built on the robust GPT-4-turbo model, this platform provides:

  • A Secure, U.S.-Only Environment – Data isn’t shared with OpenAI’s broader research efforts.
  • Customizable AI Solutions – Tailored to the unique needs of agencies.
  • Strategic AI Deployment – Supporting research, communications, and decision-making at scale.

The Bigger Picture: AI & Public Trust

Bringing AI into the public sector isn’t just about efficiency.  It’s about trust.  While corporations race to integrate AI for competitive advantage, governments must ensure transparency, accountability, and ethical AI use.  OpenAI’s government-first approach could set a precedent for how AI operates in regulated environments.  It could also be a response to the release of DeepSeek’s inexpensive R1 model.

What’s Next?

As AI adoption accelerates in government, key questions emerge:

🔹 How will agencies measure AI effectiveness in policymaking?

🔹 What frameworks will ensure human oversight remains central?

🔹 Will this move push other AI leaders to develop public-sector-focused solutions?

One thing is clear: AI is no longer just disrupting business.  It’s reshaping governance. ChatGPT-Gov certainly looks like OpenAI’s bid to make AI a trusted ally in public service. 🚀



Geopolitics and Strategy in the AI Arena: The Impending Battle Between OpenAI-o1 and DeepSeek-R1

Large language models (LLMs) are driving significant technological progress in the rapidly evolving field of artificial intelligence. Leading the charge is OpenAI, whose state-of-the-art transformer technology excels in handling complex tasks across various domains. OpenAI’s journey began with pioneering research in AI fields like reinforcement learning and robotics, solidifying its reputation as a visionary in the AI community. The development of Generative Pre-trained Transformers (GPT), starting with GPT-1 in June 2018, was a milestone, showcasing the ability of LLMs to generate human-like text using unsupervised learning. Despite OpenAI’s dominance, DeepSeek has emerged as a formidable challenger with its innovative R1 model. These two approaches are not only advancing technology but also shaping geopolitical strategies, as nations and companies compete for AI leadership.

DeepSeek: The Open-Source Challenger

DeepSeek is making significant strides as a contender against established LLMs, particularly those of OpenAI. The R1 model is attracting attention for its impressive reasoning capabilities at a fraction of the cost. Utilizing an open-source framework, DeepSeek R1 is lauded for its transparency and flexibility for developers. This strategy enables R1 to directly challenge OpenAI’s models across numerous benchmarks, making advanced AI technologies more accessible to a wider audience. Available through DeepSeek API or free DeepSeek chat, the R1 model leverages open weights, providing a competitive edge by offering similar capabilities at a lower price point.

Key Highlights of R1’s Approach:

  • Cost-Effectiveness: DeepSeek R1 is priced at a fraction of OpenAI’s o1, with an API cost of just $0.55 per million tokens compared to OpenAI’s $15 (roughly 96% less). This strategy aims to increase adoption and capture significant market share by making advanced AI capabilities accessible to a broader audience, including startups and smaller enterprises.
  • Reinforcement Learning Approach: Unlike traditional models that rely heavily on supervised learning and chain-of-thought processes, R1 primarily utilizes reinforcement learning to enhance its reasoning capabilities. This approach allows the model to self-improve by exploring different reasoning strategies and learning from the outcomes.
  • Benchmark Performance: In rigorous tests like LLM Chess, R1 demonstrated a respectable performance with a 22.58% win rate. However, it encountered challenges in maintaining protocol adherence, resulting in fewer draws and occasional illegal moves.
  • Consistency Challenges: While R1 shows promise, it struggles with instruction adherence and is prone to variations in prompts, sometimes leading to protocol violations or hallucinations, affecting its overall reliability in structured tasks.

OpenAI: The Proprietary Titan

In contrast, OpenAI maintains its proprietary model with o1, focusing on delivering controlled, high-quality performance. OpenAI’s models are renowned for their leading reasoning capabilities, as evidenced by their strong performance in LLM Chess, where o1-preview achieved a remarkable 46.67% win rate.

Key Highlights of o1’s Approach:

  • Proprietary Control for Quality Assurance: OpenAI’s closed model ensures rigorous maintenance of performance and safety standards, consistently delivering high-quality outputs and safeguarding against misuse.
  • Cost Consideration: While more expensive at $15 per million tokens, OpenAI justifies this premium by offering a model that excels in various complex tasks with greater reliability and accuracy, particularly in high-stakes environments where errors can have significant consequences.
  • Advanced Reasoning: o1 utilizes a sophisticated chain-of-thought reasoning approach, allowing it to perform deep contextual analysis and deliver nuanced outputs across diverse domains.
  • Benchmark Performance: o1 models lead in reasoning tasks, maintaining a positive average material difference in LLM Chess, reflecting their superior ability to strategize and adapt during gameplay.

Concerns and Controversies

  • Allegations of Mimicking OpenAI: DeepSeek has faced criticism for previously identifying itself as versions of OpenAI’s models. This raises questions about the originality of its technology, as it may replicate not just capabilities but also errors, or “hallucinations.”
  • Privacy and Data Security: DeepSeek’s adherence to Chinese laws, which include censorship, poses risks of manipulation and disinformation. Moreover, user data privacy is a major concern. Data stored in China under local regulations raises alarms similar to those associated with TikTok, affecting how Western users perceive and trust the platform.

Geopolitical Implications and Strategic Considerations

The competition between OpenAI and DeepSeek is a microcosm of the larger U.S.-China technological rivalry. DeepSeek’s open-source model promotes accessibility, highlighting the influence of Chinese regulatory practices. Both companies balance innovation with ethical considerations. OpenAI actively aligns itself with U.S. policymakers to support national security interests, advocating for policies that safeguard against potential cybersecurity threats and data privacy issues.

Governance and Compliance Implications

The divergent approaches of OpenAI and DeepSeek have significant implications for governance and compliance within the AI industry. OpenAI’s proprietary model is aligned with stringent compliance measures, ensuring that its AI technologies meet regulatory standards and ethical guidelines.

In contrast, DeepSeek’s open-source model presents unique governance challenges. While promoting innovation and accessibility, the open-source approach may struggle with ensuring compliance with evolving regulatory standards. The lack of centralized control can lead to variations in implementation, raising concerns about the consistency of compliance across different applications. DeepSeek may need to develop robust governance frameworks to address these challenges effectively.

Final Thoughts

The rivalry between OpenAI and DeepSeek transcends technological competition; it’s a strategic and geopolitical battle shaping the future of AI. OpenAI’s proprietary stance and engagement with U.S. policymakers reflect a commitment to maintaining leadership and security in AI development. Meanwhile, DeepSeek’s open-source model, despite its potential advantages, raises valid concerns about privacy, censorship, and originality. This competition also highlights the ongoing debate between open-source and closed systems, where each approach has its benefits and challenges.

Although large language models currently dominate, the future benefits of small language models should not be overlooked. They promise to make AI more accessible and sustainable, ensuring that advanced AI capabilities can reach a wider audience while minimizing resource usage. This evolution could play a crucial role in making AI tools both powerful and universally available, potentially impacting the strategic decisions of companies like OpenAI and DeepSeek in the future.

Frank Betz, DBA, an accomplished professional at Synergetics.ai (www.synergetics.ai), is a driving force in guiding industry, government, and educational organizations toward unlocking the full potential of generative and agentic AI technology. With his strategic insights and thought leadership, he empowers organizations to leverage AI for unparalleled innovation, enhanced efficiency, and a distinct competitive advantage.

Charting the Course for AI Governance: The 2024 Regulatory Framework and 2025 Proposals for Change

As we approach the end of 2024, artificial intelligence continues to transform industries globally, necessitating a regulatory framework that evolves alongside its rapid advancement. The United States is at a pivotal crossroads, having created a comprehensive regulatory environment designed to balance innovation with ethical oversight. However, as AI technologies become increasingly embedded in daily life, the need for adaptive and forward-thinking governance becomes more pressing, setting the stage for significant proposals in 2025.

Looking toward 2025, several major themes are expected to shape AI regulation. Enhanced ethical oversight and transparency will be at the forefront, requiring AI systems to be explainable and understandable. Human-in-the-loop systems will gain prominence, especially in sectors where AI impacts human lives, ensuring that human judgment remains integral to decision-making processes. Data privacy and security will see intensified focus, with stricter standards for data protection and cybersecurity.

Bias mitigation and fairness will be critical, with regulations targeting discrimination prevention in AI outcomes across various sectors. Accountability and liability frameworks will be clarified, assigning responsibilities for AI-driven actions. Environmental impacts of AI will be scrutinized, prompting measures to mitigate the carbon footprint of AI technologies.

United States Federal Regulations and Proposals

The current regulatory landscape is supported by key federal regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the Federal Food, Drug, and Cosmetic Act (FDCA). These laws establish rigorous standards for privacy and safety in healthcare-related AI applications. They are complemented by the Federal Trade Commission Act, which extends consumer protection into the digital arena, ensuring that AI applications follow fair trade practices. Additionally, the 21st Century Cures Act facilitates the integration of AI into healthcare decision-making processes by offering exemptions for clinical decision support software, maintaining necessary safeguards while promoting innovation.

Federal Legislation Proposals

  • Better Mental Health Care for Americans Act (S293): Modifies Medicare, Medicaid, and the Children’s Health Insurance Program to include AI’s role in mental health treatment. Requires documentation of AI’s use in nonquantitative treatment limitations and mandates transparency in AI-driven decisions. Status: Proposed and under consideration.
  • Health Technology Act of 2023 (H.R.206): Proposes allowing AI technologies to qualify as prescribing practitioners if authorized by state law and compliant with federal device standards. Aims to integrate AI into healthcare prescribing practices. Status: Proposed and under consideration.
  • Pandemic and All-Hazards Preparedness and Response Act (S2333): Mandates a study on AI’s potential threats to health security, including misuse in contexts such as chemical and biological threats, with a report to Congress on mitigating risks. Status: Proposed and under consideration.
  • Algorithmic Accountability Act (AAA): Requires businesses using automated decision systems to report their impact on consumers. Status: Proposed.
  • Federal Artificial Intelligence Risk Management Act: Aims to make the NIST AI Risk Management Framework mandatory for government agencies. Status: Proposed.
  • TEST AI Act of 2023: Focuses on advancing trustworthy AI tools. Status: Proposed.
  • Artificial Intelligence Environmental Impact Act of 2024: Calls for measuring AI’s environmental impacts. Status: Proposed.
  • Stop Spying Bosses Act: Addresses AI use in workplace surveillance. Status: Proposed.
  • No Robot Bosses Act: Regulates AI use in employment decisions. Status: Proposed.
  • No AI Fraud Act: Protects individual likenesses from AI abuse. Status: Proposed.
  • Preventing Deep Fake Scams Act: Addresses AI-related fraud in financial services. Status: Proposed.

State-Level Legislation and Proposals

A variety of innovative legislation at the state level addresses diverse regional needs. For instance, California’s AI Transparency Act mandates disclosure and enhances public awareness of AI-generated content. This strengthens the existing California Consumer Privacy Act (CCPA), landmark legislation enacted in 2018 that gives California residents enhanced privacy rights and consumer protections over how businesses collect and use their personal data. Illinois has strengthened its Human Rights Act to prevent AI-driven discrimination in the workplace, while states like Massachusetts and Rhode Island focus on ethical AI integration in mental health and diagnostic imaging services. Colorado has also made strides with SB24-205, which requires developers of high-risk AI systems to use “reasonable care” to prevent algorithmic discrimination and mandates public disclosures, effective February 1, 2026.

The following legislative efforts underscore the evolving regulatory landscape, aiming to harmonize technological advancement with ethical responsibility, setting the stage for significant regulatory proposals and changes in 2025:

  • Northeast U.S.
    • New Hampshire (HB 1688): Prohibits state agencies from using AI to surveil or manipulate the public, protecting citizens’ privacy and autonomy. Effective Date: July 1, 2024.
    • Massachusetts (An Act Regulating AI in Mental Health Services H1974): Requires mental health professionals to obtain board approval for using AI in treatment, emphasizing patient safety and informed consent. Status: Proposed and pending approval.
    • Rhode Island (House Bill 8073): Proposes mandatory coverage for AI technology used in breast tissue diagnostic imaging, with independent physician review. Status: Pending.
  • Southeast U.S.
    • Tennessee (HB 2091 ELVIS Act): Targets AI-generated deepfakes by prohibiting unauthorized use of AI to mimic a person’s voice, addressing privacy concerns and protecting individuals from identity theft and impersonation. Effective Date: July 1, 2024.
    • Virginia (HB2154): Requires healthcare facilities to establish and implement policies on the use of intelligent personal assistants, ensuring responsible integration into patient care and protecting patient confidentiality. Status: In effect since March 18, 2021.
    • Georgia (HB887): Prohibits healthcare and insurance decisions based solely on AI, requiring human review of AI-driven decisions to ensure they can be overridden if necessary. Status: Proposed and pending approval.
  • Midwest U.S.
    • Illinois (HB 3773): Amends the Illinois Human Rights Act to regulate AI use by employers, prohibiting AI applications that could lead to discrimination based on protected classes.
    • Illinois Safe Patients Limit Act (SB2795): Limits AI’s role in healthcare decision-making, ensuring registered nurses’ clinical judgments are not overridden by AI algorithms, emphasizing human oversight. Status: Reintroduced in 2024 and pending approval.
  • Southwest U.S.
    • Utah (SB 149): Establishes liability for undisclosed AI use that violates consumer protection laws. Mandates disclosure when consumers interact with generative AI and establishes the Office of Artificial Intelligence Policy to oversee AI applications in regulated sectors like healthcare. Effective Date: May 1, 2024.
  • West U.S.
    • California:
      • SB-942 California AI Transparency Act: Requires developers of generative AI to provide AI detection tools and allows revocation of licenses if disclosures are removed. Effective Date: January 1, 2026.
      • AB 2013: Obligates large AI developers to disclose data summaries used for training generative AI, fostering transparency. Effective Date: January 1, 2026.
      • Assembly Bill 3030: Requires healthcare facilities using generative AI for patient communication to disclose AI involvement and provide human contact options.
      • Senate Bill 1120: Mandates that medical necessity decisions be made by licensed providers and requires AI tools in utilization management to comply with fair standards.
      • Senate Bill 896 (SB-896): Directs the California Office of Emergency Services to evaluate the risks of generative AI, coordinating with AI companies to mitigate public safety threats.
      • Assembly Bill 1008 (AB-1008): Extends privacy laws to generative AI systems, ensuring compliance with data use restrictions.
      • Assembly Bill 2885 (AB-2885): Establishes a legal definition for artificial intelligence in California law.
      • Assembly Bill 2876 (AB-2876): Requires AI literacy considerations in education curriculums.
      • Senate Bill 1288 (SB-1288): Tasks superintendents with evaluating AI use in education.
      • Assembly Bill 2905 (AB-2905): Mandates AI-generated voice disclosures in robocalls.
      • Assembly Bill 1831 (AB-1831): Expands child pornography laws to include AI-generated content.
      • Senate Bill 926 (SB-926): Criminalizes AI-generated nude image blackmail.
      • Senate Bill 981 (SB-981): Requires social media to facilitate reporting of AI-generated deepfake nudes.
      • Assembly Bill 2655 (AB-2655): Mandates labeling or removal of election-related AI deepfakes.
      • Assembly Bill 2839 (AB-2839): Holds social media users accountable for election-related AI deepfakes.
      • Assembly Bill 2355 (AB-2355): Requires political ads created with AI to include clear disclosures.
      • Assembly Bill 2602 (AB-2602): Requires studios to obtain consent before creating AI-generated replicas of actors.
      • Assembly Bill 1836 (AB-1836): Extends consent requirements to estates of deceased performers for AI-generated replicas.
    • Colorado:
      • SB24-205: Requires developers of high-risk AI systems to use “reasonable care” to prevent algorithmic discrimination and mandates public disclosures. Effective Date: February 1, 2026.
  • Other U.S.
    • West Virginia (House Bill 5690): Establishes a task force to recommend AI regulations that protect individual rights and data privacy, with implications for healthcare settings where sensitive patient data is involved. Status: Enacted.

Key Global Regulations

China AI Regulations: Mandate transparency and prohibit discriminatory pricing in AI, requiring clear explanations of algorithms. Effective Date: March 1, 2022.

European Union AI Act: Categorizes AI systems by risk, imposes oversight on high-risk applications, and bans unacceptable-risk systems. Effective Date: August 1, 2024.

International alignment and standards will guide the harmonization of national regulations with global AI governance practices. The influence of the European Union’s AI Act and China’s stringent AI policies continues to shape U.S. strategies, underscoring the need for international alignment in AI governance. The World Health Organization (WHO) has issued guidelines for integrating large multi-modal models in healthcare, emphasizing ethical considerations and governance that align with international standards. Additionally, there will be specific attention to AI’s role in employment, workplace surveillance, and healthcare, ensuring ethical use and protecting individual rights. These frameworks underscore transparency, accountability, and fairness, setting benchmarks that U.S. regulations aim to meet or exceed.

Key Themes Shaping the Future of AI Regulation

Enhanced Ethical Oversight and Transparency: As AI systems become more integrated into critical decision-making processes, there will be a stronger emphasis on ethical oversight. This includes requiring transparency in AI algorithms, ensuring that decisions made by AI systems are explainable and understandable to users and regulators alike.

Human-in-the-Loop Systems: There will be increased implementation of human-in-the-loop systems, particularly in sectors where AI decisions can significantly impact human lives, such as healthcare, finance, and criminal justice. This approach ensures that human judgment and ethical considerations are factored into AI-driven decisions.
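
As a concrete illustration, the sketch below shows one way a human-in-the-loop gate might be wired in Python: high-confidence AI recommendations proceed automatically, while anything below a threshold is escalated to a human reviewer. The threshold, field names, and review queue are assumptions made for this sketch, not requirements drawn from any regulation discussed here.

    from dataclasses import dataclass

    # Illustrative cutoff; a real system would set this per domain and risk level.
    REVIEW_THRESHOLD = 0.85

    @dataclass
    class Decision:
        subject_id: str
        recommendation: str
        confidence: float

    def route_decision(decision: Decision, review_queue: list) -> str:
        """Auto-apply high-confidence outputs; escalate the rest to a human."""
        if decision.confidence >= REVIEW_THRESHOLD:
            return f"auto-applied: {decision.recommendation}"
        review_queue.append(decision)  # a human reviewer makes the final call
        return "escalated to human review"

    queue: list = []
    print(route_decision(Decision("case-001", "approve claim", 0.97), queue))
    print(route_decision(Decision("case-002", "deny claim", 0.62), queue))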

Data Privacy and Security: Strengthening data privacy and security measures will continue to be a priority. Regulations will likely mandate stricter data protection standards, including minimizing data collection, ensuring data anonymization, and enhancing cybersecurity measures to protect against breaches and misuse.

Bias Mitigation and Fairness: Addressing and mitigating biases in AI systems will remain a central theme. Regulatory frameworks will focus on ensuring fairness in AI outcomes, particularly in areas like employment, lending, and law enforcement, where biased algorithms can lead to discrimination.
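
One way teams operationalize fairness checks is to compare favorable-outcome rates across groups. The minimal Python sketch below computes a demographic parity gap over invented data; the group labels and outcomes are purely illustrative, and demographic parity is only one of several fairness metrics a regulator might examine.

    from collections import defaultdict

    # Hypothetical (group, outcome) pairs: 1 = favorable decision, 0 = not.
    decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome

    rates = {g: favorable[g] / totals[g] for g in totals}
    # Demographic parity gap: spread between the highest and lowest
    # favorable-outcome rates; values near 0 suggest parity on this metric.
    parity_gap = max(rates.values()) - min(rates.values())
    print(rates, f"parity gap = {parity_gap:.2f}")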

Accountability and Liability: As AI systems gain more autonomy, assigning accountability and liability for AI-driven actions becomes crucial. Regulations may define clear responsibilities for developers, operators, and users of AI systems to ensure accountability for outcomes.

Environmental Impact: With growing awareness of environmental sustainability, there may be increased focus on assessing and mitigating the environmental impact of AI technologies. This includes energy consumption and the carbon footprint associated with training and deploying large AI models.

International Alignment and Standards: As AI is a global phenomenon, there will be efforts to align national regulations with international standards to facilitate cross-border cooperation and ensure consistency in AI governance globally.

AI in Employment and Workplace Surveillance: Regulations may address the use of AI in employment decisions and workplace surveillance to protect workers’ rights and prevent invasive monitoring practices.

AI in Healthcare: There will likely be specific guidelines on using AI in healthcare to ensure patient safety, informed consent, and the ethical use of AI in diagnostics and treatment planning.

Strategies to Work Within the Framework of Regulations

To effectively navigate this complex regulatory landscape, organizations should consider:

Establish Clear Governance and Policies: Create governance frameworks and maintain compliance documentation.

Understand Regulatory Requirements: Conduct thorough research and adopt compliance frameworks (e.g., ISO 42001) to manage AI risks.

Incorporate Privacy by Design: Use data minimization, anonymization, and encryption to align with legal standards.
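
To make privacy by design less abstract, here is a hedged sketch of data minimization and pseudonymization in Python. The field names and the salted-hash scheme are assumptions for illustration, not a compliance recipe; real deployments would pair techniques like these with legal review.

    import hashlib
    import os

    # Random salt held separately from the data store, so hashed identifiers
    # cannot be trivially reversed with a precomputed dictionary.
    SALT = os.urandom(16)

    def pseudonymize(value: str) -> str:
        """Replace a direct identifier with a salted SHA-256 digest."""
        return hashlib.sha256(SALT + value.encode()).hexdigest()

    raw_record = {
        "email": "user@example.com",   # direct identifier
        "zip": "94105",                # quasi-identifier
        "session_length_sec": 312,     # the only field the model needs
    }

    # Data minimization: keep only what the AI system actually requires,
    # pseudonymizing the key used to link records.
    minimized = {
        "user_key": pseudonymize(raw_record["email"]),
        "session_length_sec": raw_record["session_length_sec"],
    }
    print(minimized)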

Enhance Security Measures: Implement robust security protocols and continuous monitoring.

Focus on Ethical AI Development: Mitigate biases and ensure transparency and accountability.

Implement Rigorous Testing and Validation: Use regulatory sandboxes and performance audits. A notable example is the National Institute of Standards and Technology (NIST) AI sandbox initiative, which provides a controlled environment for testing AI technologies across sectors.
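
As a small illustration of what a performance audit can look like in code, the sketch below implements a fail-closed pre-deployment gate. The metric names and thresholds are invented for this example; in practice they would be set by the applicable regulatory or internal standard.

    # Hypothetical audit thresholds; real values come from policy, not code.
    AUDIT_THRESHOLDS = {"accuracy": 0.90, "parity_gap": 0.10}

    def passes_audit(metrics: dict) -> bool:
        """Fail closed: block release unless every threshold is met."""
        return (
            metrics.get("accuracy", 0.0) >= AUDIT_THRESHOLDS["accuracy"]
            and metrics.get("parity_gap", 1.0) <= AUDIT_THRESHOLDS["parity_gap"]
        )

    print(passes_audit({"accuracy": 0.93, "parity_gap": 0.06}))  # True
    print(passes_audit({"accuracy": 0.95, "parity_gap": 0.18}))  # False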

Engage Stakeholders and Experts: Form cross-disciplinary teams and consult stakeholders.

Continuous Education and Adaptation: Keep teams updated on regulatory changes.

Conclusion

As the regulatory landscape evolves, 2025 promises to be a transformative year, with proposals that seek to refine and enhance AI governance. This overview has explored the current state of AI regulations in the U.S., the proposals poised to reshape them, and the implications for the future of AI technology as we strive to harmonize innovation with ethical responsibility. An emerging trend among companies is the adoption of comprehensive AI governance frameworks that mirror the European Union’s efforts to protect human rights through fair and ethical AI practices. By embedding “human-in-the-loop” systems, especially in critical decision-making areas involving human lives, organizations not only bolster ethical oversight but also shield themselves from potential liabilities. This integration underscores a commitment to responsible AI development, aligning technological advancements with global standards of transparency and accountability.

Frank Betz, DBA, an accomplished professional at Synergetics.ai (www.synergetics.ai), is a driving force in guiding industry, government, and educational organizations toward unlocking the full potential of generative and agentic AI technology. With his strategic insights and thought leadership, he empowers organizations to leverage AI for unparalleled innovation, enhanced efficiency, and a distinct competitive advantage.
