16 Comments
Chintan Zalani

Love SMART framework and the prompt you shared to make the goal actionable. Thanks for putting this together, Zain!

Zain Haseeb

Glad it was helpful! Thanks for reading!

Juan Gonzalez

Yeah, vague goals don't get hit. Vague AI usage is just a waste of resources.

Zain Haseeb

Exactly! And the frustrating part is that people don't realize it until December, when they're trying to write their self-assessment and have nothing concrete to point to. Thanks for reading, Juan!

Juan Gonzalez

Ohhh too real. They realise after the fact.

Most of us seem to be on autopilot.

Dennis Berry

The formula makes AI adoption concrete instead of overwhelming, and it reminds people that clarity and specificity are what turn vague mandates into real progress.

Zain Haseeb

Exactly right. When you know exactly what you're trying to do, the path forward gets a lot simpler. Thanks for reading, Dennis!

Chris Tottman

I love your 3 or 4 "these are not goals" - I agree they're so crappy. Thanks for giving me the giggles over my morning coffee ☕ we're all simply burning time - get the goals right, no more than 3, and 1 is unbelievably critical ✅

Zain Haseeb

Ha! Glad I could enhance your morning coffee time :) Thanks for reading Chris!

Melanie Goodman

This lands because most organisations are still treating AI objectives like learning goals rather than operating goals.

The fog you describe shows up quickly when outcomes are never defined at the start, only effort or intent.

McKinsey reported in 2024 that while over 70 percent of employees use AI tools, fewer than 40 percent of organisations track any business impact from that usage.

That gap explains why Q4 reviews feel inconclusive rather than corrective.

What’s missing is the translation layer between capability and result, not motivation.

How are you helping teams anchor AI goals to one concrete output they can measure inside a quarter?

Zain Haseeb

Great insights, Mel! To your question: the tool I built walks through identifying one high-frequency task first, then builds the goal around that specific output. The constraint is intentional. If you can't name the deliverable, you're not ready to set the goal yet.

That McKinsey stat is wild but not surprising. Activity is easy to track. Impact requires you to define what "better" looks like upfront.

John Brewton

Vague goals feel safe but quietly guarantee a weak outcome.

Zain Haseeb

Well said. Safety and outcomes rarely live in the same place.

Max Bernstein

AI goal setting is (or will be) a massive issue for companies.

Everyone is assigning "goals" without knowing enough about AI to understand what they are actually assigning.

They know they have to have "AI" somewhere on the reviews but what does that ACTUALLY mean and how is it ACTUALLY helping anyone?

Big opportunity for someone who lives and breathes AI to come in and spearhead this.

Zain Haseeb

100%. The mandate is clear ("use AI") but the translation layer is completely missing. Companies are asking employees to figure out something leadership hasn't figured out themselves. That's the gap this series is trying to fill.

Comment removed
Jan 18
Zain Haseeb

The "3 hours per week" trap is so common. I've seen teams celebrate hitting time-based AI goals while their actual work output stayed flat. Measuring inputs feels productive, but it's just motion without progress.