Over the past few weeks, I ran a short survey on AI use among marketing and sales professionals in publishing, arts, and education. The respondents were Boxcar Marketing subscribers and LinkedIn followers. My goal was to understand their AI use. More specifically, I wanted to know how AI is actually being used across publishing, marketing, sales, and adjacent roles.
The survey received 24 responses.
This is not a large enough sample to be statistically representative—but it is a useful directional snapshot from people actively working in this space. And the findings align with larger studies: BISG's report on AI use across the North American book industry (n=559) and Section AI's AI Proficiency Report (n=5,000).
The signal is clear:
AI use is widespread—but structured, repeatable workflows are still rare.
AI skills are at the beginner level. People are using AI, mostly for marketing and admin tasks, and the impact is unclear.
AI policies are needed to ensure clarity and to prevent privacy risks and leaks of proprietary information.
What We’re Seeing About How Marketers Use AI Tools
Marketing teams are among the most active users of AI, but AI is supporting marketing tasks, not (yet) transforming marketing operations.
Across all respondents (not just marketers):
- Most people are using AI in some form
- AI use is concentrated in
- writing and editing
- ideation
- research and summaries
- Skill levels skew beginner to intermediate
- Only a small group has moved into repeatable workflows or integrations
This creates a gap:
People know how to use AI—but not always how to apply it meaningfully in their work.
Or, as one respondent put it:
“I depend too heavily on ChatGPT… I don’t feel like I’m using it to its full potential.”
AI Skills: The Four Audiences Emerging
One of the most useful outcomes of this survey is how clearly it surfaces four distinct groups.
Each group needs something different—and most AI conversations do not account for that.
1. Non-Users (and Restricted Users)
A meaningful portion of respondents are not using AI at all, or are actively resistant.
Reasons include:
- ethical concerns (training data, IP, privacy)
- environmental impact
- company restrictions
- distrust of outputs
Some responses were direct:
“AI sucks and I refuse to use it.”
“I think it will erode critical thinking.”
This is not a knowledge gap—it is a values and governance gap.
Recommendations (since AI is already embedded in everyday business tools: email, word processing, meeting note-taking, and design software):
- Clear, transparent AI policies
- Guidance on data handling and consent
- Defined “safe use” scenarios
- Space for informed participation—not forced adoption
The above should be in place, even if (or especially if) your organization decides not to use AI.
What I would say to the non-AI users and resisters
If your perspective is cautious, skeptical, or a hard "no," that perspective belongs at the decision-making table. Please don't step away from these conversations. The risk isn't that AI moves forward. It's that it moves forward without enough critical voices shaping how it's used. [Or that stigma prevents employees from being transparent about their AI use.]
As organizations—and governments—move more decisively toward AI adoption, questions about data, consent, environmental impact, quality, and critical thinking don't go away. They either get addressed thoughtfully, or they get overlooked. This is especially important for marketing and editorial AI workflows, given that intellectual property and copyright are involved.
And for organizations adopting AI: Ignoring this group—or pushing adoption without addressing concerns—will backfire. We can have innovation, along with safeguards for IP, privacy, consent, and bias. I believe we need both.
2. Experimenters
This is the biggest group.
They are:
- Using tools like ChatGPT, Gemini, and Claude
- Applying AI to discrete tasks
- Curious, but not confident
Use cases are familiar:
- Drafting content
- Brainstorming
- Summarizing documents
But the friction shows up quickly:
- “I don’t know what to use it for beyond basics.”
- “Not sure I trust the output.”
- “It takes time to get something useful.”
This is the “I think it works, but now what?” stage.
Recommendations for AI Experimenters:
- Clear examples of next-step use cases
- Understanding of tools vs models
- Guidance on when to use what
- Structured ways to improve output quality
Tip: This is where most teams are currently stuck. The Section AI proficiency report shows the same issues: AI is used at a basic level for generating one-off copy suggestions and conducting basic informational searches. People are not stuck because they can't prompt; they are stuck because they don't know how to take it further.
3. Regular Users (Task-Level)
These respondents are using AI regularly—even daily—but:
Their usage is still largely task-based, not process-based.
They:
- Rely on AI for writing, editing, ideation
- See some efficiency gains
- But don’t have structured systems
The risk here is subtle:
You feel productive—but you are not compounding value.
Recommendations for Regular/Daily AI users:
- Steps for turning repeat actions into repeatable workflows
- Templates and checklists
- Basic process documentation
- Ways to reduce inconsistency
Tip: This is the bridge between experimentation and real ROI. The trick to measuring ROI, though, is that you need a baseline from your non-AI workflows. A simple first step is to start tracking the time you spend on repetitive tasks that you might offload to AI. Then, when you experiment, track the time saved. But time savings alone don't tell the whole story. With AI integrated into a workflow, the executing and building steps are faster, which should free up more time for applying your domain expertise, professional judgement, and human oversight. So instead of tracking only time savings, track whether you are spending time on higher-level tasks that have meaningful impact.
4. Workflow Builders (Early Integrators)
A small but important group is building AI workflows for marketing, publishing, design, and file management. Their AI use includes:
- Building templates
- Experimenting with repeatable processes
- Integrating AI into workflows
They are closest to meaningful gains.
But even here, the next challenge appears:
How do you scale this beyond yourself?
Recommendations for AI Workflow Builders:
- Documentation frameworks
- Ways to adapt workflows across teams
- Governance guardrails
- Internal sharing systems
What I would say to this group about AI
Start documenting your workflows so you can easily share ideas across teams and functions. When I figure out a series of prompts or create a repeatable process using AI, I record what I'm doing with Loom, export the transcript, and use that to build documentation with screen captures or the accompanying video link. Scribe.com promises to do the recording and documentation, plus make it easily shareable with teammates. [Anyone tried this yet? I'm working on a way to create a free library of AI use cases that I can share with you. If you have tips, please tell me what's working in your organization.]
For organization leaders: These are your internal AI leaders—whether they're recognized or not. Please support their efforts and pony up for the cost of the tools they need. And if they are providing lunch-and-learns or documentation for the company, that work should be compensated.
AI Attitudes: What’s Holding People Back from Using AI
Across all groups, a few consistent barriers show up:
1. Trust and accuracy
People do not fully trust outputs—and often do not know how to verify them.
2. Time
Learning and refining AI use feels like extra work.
3. Lack of clear use cases
This is the biggest one.
It’s not “how do I prompt?”
It’s “what is this actually useful for in my job?”
4. Governance uncertainty
- What can I paste into a tool?
- Is this allowed?
- Is this safe?
- How do I turn off training?
5. Tool confusion
- ChatGPT vs Claude vs Gemini vs Perplexity
- What is a model vs a tool?
- What is a CustomGPT vs an agent?
- What is gen AI vs agentic AI?
Organizational AI Readiness: What This Means for Teams and Leaders
If you manage a team, there is a high chance that:
- Some people are experimenting quietly
- Some are avoiding AI entirely
- A few are building useful workflows
- And none of this is being shared systematically or transparently
This creates:
- Duplicated effort
- Uneven adoption
- Hidden risk
- Missed opportunity
And importantly:
Access to AI is not the same as effective use of AI.
Practical Next Steps
For AI Experimenters
Focus on moving one step beyond basic use:
- Take one repeated task (e.g. drafting marketing copy)
- Turn it into a structured workflow:
- input → prompt → refine → check → output
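For readers comfortable with a little scripting, the input → prompt → refine → check → output loop can be sketched in Python. This is a minimal illustration, not a real integration: `call_model()` is a placeholder for whatever AI tool you use (ChatGPT, Claude, Gemini, etc.), and the check step here is just a simple cliché filter you would replace with your own quality criteria.

```python
# Sketch of a repeatable input → prompt → refine → check → output workflow.
# call_model() is a stub standing in for your actual AI tool or API call.

PROMPT_TEMPLATE = (
    "Write a two-sentence marketing blurb for the book '{title}'. "
    "Audience: {audience}. Tone: {tone}."
)

def call_model(prompt: str) -> str:
    # Placeholder: swap in your real AI tool here.
    return f"[draft generated from prompt: {prompt[:40]}...]"

def passes_check(draft: str) -> bool:
    # A simple quality gate: reject drafts containing marketing clichés.
    banned_phrases = ("world-class", "game-changing")
    return not any(phrase in draft.lower() for phrase in banned_phrases)

def run_workflow(title: str, audience: str, tone: str, max_refinements: int = 2) -> str:
    # input step: structured inputs fill a reusable prompt template
    prompt = PROMPT_TEMPLATE.format(title=title, audience=audience, tone=tone)
    draft = call_model(prompt)                      # prompt step
    for _ in range(max_refinements):                # refine step
        if passes_check(draft):                     # check step
            break
        draft = call_model(prompt + " Avoid marketing clichés.")
    return draft                                    # output step
```

The point of the sketch is the structure: the prompt lives in one place, the quality check is explicit, and the refine loop is bounded, so the same process can be rerun (and shared) rather than improvised each time.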
Start learning:
- When to use ChatGPT vs Claude vs Gemini vs Perplexity
- How different models affect output quality
- How to iterate, not just prompt once
For Regular AI Users
Start documenting:
- What are you doing repeatedly?
- What steps do you follow?
- Where does AI help—and where does it fail?
Turn this into:
- Templates
- Reusable prompts
- Simple checklists
For AI Workflow Builders
Focus on scale:
- Can someone else follow your process?
- What assumptions are you making?
- Where are the risks (accuracy, data, bias)?
Start:
- Sharing workflows internally
- Adapting for different roles
- Adding governance notes
For Non-Users or Restricted Teams
Start with governance, not tools:
- Define what is allowed
- Clarify data boundaries
- Identify low-risk use cases:
- summarizing public content
- drafting non-sensitive materials
- internal brainstorming or note taking
If you do not create a clear path, usage will still happen—just without oversight.
Where Boxcar Marketing Is Focused Next
This survey confirms what I have been hearing anecdotally from clients:
The biggest gap for AI users is identifying the role-specific tasks that benefit from building repeatable AI workflows.
For those interested in AI, my goal is to help teams move from “how do I use AI better” to “how do I make better decisions, using AI as part of how work gets done.” [I’m taking the Responsible AI Professional Certification to ensure I understand how to do this in as safe and ethical a way as possible. The next cohort starts in October 2026.]
It would be amazing if there were a shared library of AI workflows for:
- marketers
- publicists
- sales teams
- and the agencies that support them
There are lots of paid courses, AI consultants, and coaching tools [links to Boxcar Marketing recommendations]. Plus, there is no shortage of one-off examples and prompt libraries. And, BISG is promising to address some of the industry-specific training needs.
In the meantime, I will freely share:
- Real use cases (not generic prompts)
- Templates and checklists
- Guidance on tools and models
- Governance considerations where relevant (not as legal advice but as a starting point)
Want to Contribute?
The most helpful input right now is specific. Do you have a specific title, project, or workflow you want to explore?
Here are some of the workflows identified in the survey:
- Campaign reporting
- Publicity outreach
- Metadata generation
- Sales materials
Comment or reach out if you want to experiment, share an idea, or brainstorm AI use.