Philly.com

Why your small business needs an AI policy

November 25, 2025

(This column originally appeared in the Inquirer)

It’s no secret that the use of both generative and agentic AI will proliferate over the next few years as the technology becomes more reliable and pervasive.

More than 58% of small businesses are already using AI in their companies, according to a recent study from the U.S. Chamber of Commerce, and that usage is expected to rise this year. For now, most of that can be attributed to chatbots like ChatGPT, Gemini, Copilot and others.

Because of this, your business needs to create and maintain a strict AI policy. Why?

“An AI policy places guardrails around the usage of AI by your employees,” said Philadelphia attorney David Walton, who chairs the artificial intelligence team at Fisher & Phillips. “It allows your employees to use AI faster and better.”

Without an AI policy, a business is exposed to reputational damage caused by AI “hallucinations,” or errors, Walton said. In addition, a company’s proprietary data — pricing, contracts, customer information, processes — could be exposed to the public, particularly when employees use free AI tools that offer less protection.

Lawyer Star Kashman, founding partner of Cyber Law Firm, warns her clients that without an AI policy, employers could be exposed to claims of bias and other lawsuits.

“For example, there might be some resumes from people of certain races, people of certain genders that maybe aren’t as accepted by the AI system, and you’re automatically rejecting great candidates,” she said. “You’re going to be the one that has a huge lawsuit on your hands, even for your employees’ actions, if you weren’t able to protect it.”

A good AI policy should include the following.

Include a statement of purpose for AI

The policy should be clear that AI is allowed only when used responsibly and with guardrails.

It should also clearly state that AI tools should be used only when they improve productivity and can be used safely and confidentially.

Provide a list of approved applications

A company’s AI policy should specify which tools and software are approved by management, both lawyers said.

Approved tools should be used for business purposes only. Free tools should not be allowed because of privacy concerns, and employees should need management’s permission before using any tool not listed in the policy.

When employees use AI on a personal account, Walton said, “it’s hard for the business to control privacy settings, and confidential data may leak into free or public AI models.”

Consider a proprietary information ban

It’s still unclear how safe our data is when AI applications are being used. To that end, it’s a good practice to avoid or even ban the entry of private information into these platforms.

This would include customer data, financial statements, contracts, pricing information, personally identifiable information, trade secrets, or anything medical, legal, or human-resources related.

State the ownership of AI work

When an employee enters a prompt into an AI chatbot, that query — along with any resulting workflows and custom instructions — is an asset of the company, and the policy should say so.

A company’s AI policy should state that employees must return all AI-created work at separation, cannot export data into their personal accounts and cannot use their own agents or tools for company work.

Avoid AI in HR

AI applications shouldn’t be used in hiring or performance reviews, both Kashman and Walton said. Many platforms leverage AI to perform these functions, but these tools could create more headaches than benefits.

“HR is the front line for legal problems tied to AI,” Walton said. “Relying on AI to make hiring, firing or performance review decisions could be very problematic.”

Ban certain outputs

An AI policy should ban generating images, video, or voice content without management approval. NSFW (not-safe-for-work), pornographic, or defamatory content should be off-limits. This helps protect against reputational damage, deepfakes, and offensive material.

Always use human oversight

We know today’s AI tools are far from perfect. Your policy should state that everything AI produces must be validated, checked, sourced, and edited by a human.

Explain why the AI policy exists

AI is new, and your employees are already concerned about this new technology. Kashman says it’s important to explain the “why” behind each rule in your policy.

“Instead of just ‘don’t,’ explain the risk to the employee and company such as hallucinations, data leaks, bias, etc.” she said. “Employees follow rules better when they understand them.”

The uncertain regulatory environment is another big reason for creating an AI policy. Comprehensive federal regulation shouldn’t be expected anytime soon, Walton said.

“Businesses must prepare for state-level AI regulation, especially around risk assessment and bias, because the federal government is unlikely to pass comprehensive laws anytime soon,” he said.

However, some states — like New Jersey — have proposed bills that would require businesses to do formal risk assessments and acceptable-use policies. Meanwhile, President Donald Trump is considering an executive order limiting states from regulating AI.

Kashman said the lack of regulations will leave business owners vulnerable “because tech companies aren’t going to be as liable for harms.” So small businesses “must protect themselves with strong internal policies,” she said.

“An AI assistant or chatbot can help businesses draft a policy or template, especially for non-lawyers who need structure or a first draft,” Kashman said. It’s important to frequently update this policy because the technology, models, privacy terms, and data breaches change rapidly, she added.

“However, be careful,” she said. “AI can’t understand the nuances of a specific business or legal risk, so human review from legal counsel or an expert is necessary.”
