Ing. Alejandro Medina on Security Fear and Career Growth
A practical response to Ing. Alejandro Medina: manage AI security risks, stop waiting for perfect timing, and act now.
Ing. Alejandro Medina recently shared something that caught my attention: "Fear about security is the best excuse to avoid career fear." He followed it with a blunt challenge in the same spirit: "Are you afraid of your password being stolen? Take measures, but move forward." Then the line that lands hardest: the floor has shifted again, and we cannot repeat what happened at the beginning of ChatGPT.
I read that and nodded, because I see this pattern everywhere. We tell ourselves we are being responsible, but sometimes we are just delaying the uncomfortable part: learning something new, changing how we work, and risking being a beginner again.
In this post, I want to expand on Medina’s point as a conversation: yes, security matters. But security anxiety becomes a convenient shield when the real fear is professional relevance.
The hidden fear behind "security concerns"
Medina’s message works because it names a common rationalization. In fast-moving moments, especially with AI tools, cloud platforms, and new workflows, it is easy to say:
- "We cannot use that because it might leak data."
- "We should wait until policies are clearer."
- "We will adopt it once the perfect secure version arrives."
Sometimes those concerns are valid. Often, they are also a socially acceptable way to say: "I do not want to be wrong in public," or "I do not want to fall behind and have people notice."
Security fear feels responsible. Career fear feels vulnerable. So we pick the first.
Key idea: Security risk is real, but using it as a blanket reason to stop experimenting is a career risk too.
The floor moved again (and it will keep moving)
When Medina says the floor moved again, I interpret it as the ongoing reset in how value is created at work. ChatGPT was one shockwave. Since then, we have seen:
- AI features baked into everyday tools (email, docs, CRMs, design apps)
- Cheaper and faster automation for analysis, writing, and support
- New expectations for speed and output across roles
The "wait and see" approach worked when change was slow. Now, waiting often means missing the window where small experiments turn into big advantages.
The ChatGPT lesson: early confusion becomes later confidence
At the beginning of ChatGPT, many people froze. They waited for perfect rules, perfect training, perfect clarity. Others tried small, low-risk uses: summarizing public documents, drafting emails, brainstorming, translating, or generating checklists.
A year later, the gap was obvious. The early testers were not necessarily more talented. They were simply less dependent on perfect conditions.
Security is not an on or off switch
The best way to honor Medina’s point is to stop treating security as "use the tool" vs "ban the tool." Real security work is about risk management.
Instead of asking, "Is this perfectly safe?" ask:
- What data would be exposed if something went wrong?
- What is the likelihood of that exposure?
- What controls reduce the impact to an acceptable level?
- What is the cost of not moving?
This turns security into a design constraint, not a wall.
Security done well enables progress. Security used as an excuse prevents it.
A practical playbook: take measures, then advance
Medina’s line "take measures, but move forward" is a great operating principle. Here is a simple, realistic playbook you can apply whether you are an individual contributor, manager, or founder.
1) Define safe and unsafe data clearly
Most teams never write this down. Create three buckets:
- Public: OK to share anywhere
- Internal: OK in approved tools, not in public tools
- Sensitive: never paste into external systems (client data, secrets, credentials, regulated data)
When people know the buckets, they stop defaulting to fear.
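The three buckets can even be written down as a tiny lookup table. This is a minimal sketch, not a standard: the bucket names and allowed-use rules are assumptions for illustration, and a real policy would be defined by your own security team.

```python
# Hypothetical sketch of the three-bucket classification above.
# Bucket names and rules are illustrative, not an official scheme.
BUCKETS = {
    "public": "OK to share anywhere",
    "internal": "OK in approved tools, not in public tools",
    "sensitive": "never paste into external systems",
}

def can_use_external_tool(bucket: str) -> bool:
    """Only public data may leave approved systems in this sketch."""
    if bucket not in BUCKETS:
        raise ValueError(f"unknown bucket: {bucket!r}")
    return bucket == "public"

for label, rule in BUCKETS.items():
    print(f"{label}: {rule} | external tools OK: {can_use_external_tool(label)}")
```

Even a ten-line table like this beats an unwritten rule, because it gives people something concrete to check before they paste.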
2) Use strong account hygiene (the boring stuff that works)
If your fear is "password theft" (Medina’s example), do the basics:
- Use a password manager
- Turn on multi-factor authentication
- Do not reuse passwords
- Keep recovery options updated
Most real-world compromises exploit weak hygiene, not advanced AI attacks.
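To make "do not reuse passwords" concrete: a password manager generates a unique, high-entropy password per site. The sketch below shows the underlying idea using Python's standard `secrets` module; it only illustrates generation, not the storage and syncing a real manager handles.

```python
import secrets
import string

# Illustrative only: a real password manager generates AND stores these.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Draw each character with a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A fresh, unrelated password per account means one breach
# cannot cascade into all your other logins.
print(generate_password())
```

The point is not to roll your own tooling; it is that unique-per-site credentials are cheap to produce, so reuse is an avoidable risk, not a necessity.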
3) Start with low-risk AI use cases
You can build momentum without touching sensitive data:
- Turn meeting notes into action items
- Draft a project plan template
- Create a checklist for QA or onboarding
- Summarize long public PDFs
- Generate test cases from requirements (with sanitized inputs)
Progress does not require reckless sharing.
4) Sanitize and abstract
If you need AI help on a sensitive topic, remove identifiers:
- Replace names with roles (Client A, Vendor B)
- Replace numbers with ranges
- Describe the pattern, not the record
This is not perfect, but it is far better than all-or-nothing thinking.
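The sanitize-and-abstract step can be partially automated. Here is a hedged sketch: the patterns, the placeholder strings, and the client name "Acme Corp" are all assumptions for the example, and real sanitization still needs human review before anything leaves your systems.

```python
import re

# Example-only patterns; adapt and review for your own data.
REPLACEMENTS = [
    (re.compile(r"\bAcme Corp\b"), "Client A"),           # names -> roles
    (re.compile(r"\$\d[\d,]*"), "$<amount in range>"),    # figures -> ranges
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),  # strip identifiers
]

def sanitize(text: str) -> str:
    """Apply each replacement in order before sharing text externally."""
    for pattern, replacement in REPLACEMENTS:
        text = pattern.sub(replacement, text)
    return text

print(sanitize("Acme Corp owes $12,500; contact ana@acme.com"))
# -> Client A owes $<amount in range>; contact <email>
```

A small script like this will not catch everything, which is exactly Medina's point: imperfect controls that let you move are better than a ban that teaches you nothing.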
5) Choose approved tools and document workflows
If you lead a team, give people a safe lane:
- Provide an approved set of tools
- Write short guidance: what is allowed, what is not
- Encourage sharing of prompts and workflows internally
People will use tools anyway. Your job is to make safe behavior the easiest behavior.
The trap of waiting for the perfect scenario
Medina asked: "Are you going to stay waiting for the perfect scenario? Guess what? When it arrives, it will already be too late."
That hits because perfection is a mirage in technology adoption. Policies mature after usage reveals reality. Best practices emerge after mistakes are visible. The "perfect" moment is usually a story we tell ourselves to avoid the awkward beginner stage.
If you want a more useful question than "Is it safe?", try: "What is the safest way to learn this now?"
Learning now, safely, beats learning later, urgently.
What moving forward looks like in real jobs
To make Medina’s point concrete, here are examples of forward motion that respect security.
For professionals
- Spend 30 minutes a day building a repeatable workflow (summaries, drafts, research)
- Keep a prompt library and refine it weekly
- Track time saved and quality improvements, then share results with your manager
For managers
- Run a two-week pilot with clear rules and a feedback loop
- Measure outcomes (cycle time, defects, customer response time)
- Turn the best workflows into team standards
For organizations
- Build training that focuses on judgment, not hype
- Establish data classification and tool approval processes
- Encourage experimentation inside guardrails
A closing challenge (in Medina’s spirit)
If you feel stuck, try this self-audit:
- Is my security concern specific, or is it vague?
- Have I proposed controls, or only objections?
- What is the opportunity cost of waiting 3 months?
Then do one small thing this week that is both safe and forward:
- Turn on multi-factor authentication
- Choose one low-risk AI workflow
- Document a rule for sensitive data
- Share one learning with your team
Because Medina is right: the floor moved. It will move again. Your best defense is not freezing. It is building the habit of adapting responsibly.
This blog post expands on a viral LinkedIn post by Ing. Alejandro Medina.