
> This is unusual — did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack? I'm actually old enough that I was at different workplaces when they started using spreadsheets and email and the web, and I can tell you, they absolutely didn't have to drive adoption by making people fill out paperwork about how they were definitely using the cool new technology.

I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody. An awful lot of people are very set in their ways and will refuse to change unless given a firm push.

For instance, if you weren’t around before version control became the norm, then you probably missed the legions of developers who said things like “Ugh, why do I have to use this stupid thing? It just slows me down and gets in my way! Why can’t I just focus on writing code?” Those developers had to be dragged into modern software development when they were certain it was a stupid waste of time.

AI can be extremely useful and there’s a lot of people out there who refuse to give it a proper try. Using AI well is a skill you need to learn and if you don’t see positive results on your first couple of attempts that doesn’t necessarily mean it’s bad, it just means you are a beginner. If you tried a new language and didn’t get very far at first, would you blame the language or recognise that you lack experience?

An awful lot of people are stuck in a rut where they tried an early model, got poor results to begin with, and refused to use it again. These people do need a firm, top-down push, or they will be left behind.

This has happened before, many times. Contrary to the article’s claims, sometimes top-down pushes have been necessary even for things we now consider near universally good and productive.



> I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody. An awful lot of people are very set in their ways and will refuse to change unless given a firm push.

There was never any widespread resistance to "the Internet", let's be real here.

In any case, adoption of all those things was bottom-up rather than top-down. CEOs were not mandating that tech teams use version control or ORMs or automated testing. It was tech leadership, with a lot of support from ICs in their department.

Tech people in particular are excited about trying new things. I never heard CEOs mandating top-down that teams use Kubernetes and adding people's Kubernetes usage into their performance reviews, yet Kubernetes spread like wildfire--to the point where many software companies which had no business using something as complicated as Kubernetes started using it. Same with other flavor-of-the-month tools and approaches like event sourcing, NoSQL/MongoDB, etc.

If anything, as a leader you need to slow down adoption of new technology rather than force it upon people. The idea that senior leadership needs to push to get AI used is highly unusual, to say the least.


Isn't Amazon's APIs-everywhere mandate another example of just this that came right from the top? In some companies the CEO doubles as the tech lead, no?


The API mandate notably specified what rather than how. "It doesn’t matter what technology [you] use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter." In some ways it's quite the opposite of CEO mandates to use AI, which specify how you must build things (using AI!) rather than what.

The equivalent of the API mandate for AI would be if CEOs were demanding that all products include a "Summarize Content" button. Or that all code repositories contain a summary of their contents in a README. The use of AI to solve these problems would be an implementation detail.


Or if CEOs were demanding that everything be written in Python. Programming language should also be an implementation detail, not something a CEO would worry about. Just like "using AI."


My recollection is that AWS was extremely popular among developers very early on.


Do you just mean within Amazon? Because outside of Amazon, there was major resistance to AWS/cloud computing in general from older devs highly invested in the status quo. I have spent a significant amount of effort in my career fighting for cloud adoption.


To me this was more about guiding towards a desired outcome. An opinionated bet, but not overly prescriptive. "AI first" is saying do everything with AI and then hope you find some efficiencies, almost by accident.


The trendy stuff does get mandated from the top. It just gets mandated when it's already caught on elsewhere, and the CEO wonders why they aren't cool too.


I see you speak from experience. I feel like I'm watching the same cycle play out over and over again: a new, transformative technology lands, people with a vested interest spend a lot of time denouncing it (your examples mostly land for me), the new technology gets over-hyped and fails to meet some bar, and the haters all start crowing about how it's just B.S. and won't ever be useful, etc. etc.

Meanwhile, people are quietly poking around figuring out the boundaries of what the technology really can do and pushing it a little further along.

With the A.I. hype I've been keeping my message pretty consistent for all of the people who work for me: "There's a lot of promise, and there are likely a lot of changes that could come if things keep going the way they are with A.I., but even if the technology hits a wall right now that stops it from advancing, things have already changed and it's important to embrace where we are and adapt".


Such a sane, nuanced take on new technologies. I wish more people were outspoken about holding these types of opinions.

It feels like the AI discourse is often dominated by irrationally exuberant AI boosters and people with an overwhelming, knee-jerk hatred of the technology, and I often feel like reading tech news is like watching two people who are both wrong argue with one another.


Moderates typically have a lot less to say than extremists and don't feel a need to have their passion heard by the world. The discussion ends up being controlled by the haters and hypers.

New technologies in companies commonly have the same pitfalls that burn out users. Companies have very little ability to tell, at the purchasing level, whether a technology is good or bad. The C-levels who approve the invoices are commonly swayed not by the merits of the technology but by the persuasion of the salespeople or the fears of others in the same industry. This leads to a lot of technology that could/should be good being just absolute crap for the end user.

Quite often the 'best' or at least most useful technology shows up via shadow IT.


And a subgroup (or cousin?) of the exuberant AI boosters are the people absolutely convinced that LLM research leads to the singularity in the next 18-24 months.

I really do wish we could get to a place where the general consensus was something similar to what Anil wrote - the greatest gains and biggest pitfalls are realized by people who aren't experienced in whatever domain they're using it for.

The more experience you have in a given domain, the narrower your use cases for AI will be (because you can do a lot of things on your own faster than the time spent coming up with the right prompts and context mods), but paradoxically the better you will be at using the tools because of your increased ability to spot errors.

*Note: by "narrow" I don't mean useless, I just mean benefits typically accrue as speed gains rather than knowledge + speed gains.


Is AI like other technologies, though? Most technologies require a learning curve that usually increases as the technology develops and adds features. They become "skills" in themselves. They are tools to be used, not the users of the tools themselves.

AI seems like the opposite to me. It is the technology that is "the learning curve" in the long term. Its whole point, long term, is to emulate learning/intelligence - it is trying to be the worker, not the worker's tool (whether it succeeds or not is another story). The industry seems to treat it as another tech/tool/etc. in which you need experience/training, and I wonder whether that is the right approach long term.

Many people will be wondering (myself included) whether learning to use "AI" is really just an accessibility/interface problem. My time is valuable: I should only bother if the productivity gains (which may only last a year or so before things change again) outweigh the learning time and the cost of developing tools/wrappers/etc. Everyone will have a different answer to this question based on their current tradeoffs.

I ask the question: If I don't need it right now (e.g. code is only 10-20% of my job), why bother learning it when future AI will require even less intelligence/learning to use?


Unfortunately, thoughtful, nuanced takes don't make headlines, don't get into Harvard Business Review, and don't end up as memos on the CEO's desk. Breathless advocacy and knee-jerk dismissals get the clicks and those are the takes that end up bubbling to the top and influencing the decision makers.


> Those developers had to be dragged into modern software development when they were certain it was a stupid waste of time.

But why do they have to fill out some paperwork? If the new technology is a genuine productivity boost and any sort of meaningful performance review is undertaken, then it will show up if they're performing sub-par compared to colleagues.

The real problem is that senior management are lazily passing down mandates in lieu of trusting middle management to do effective performance reviews. Just as it was with Return To Office.


Social responsibility?

I have a few colleagues who like the way they work and would prefer everything to stay the way it is. Such "skilled artisans" might be on the way out, replaced by "AI factory" mass production.

Sure, they could just be kicked out and replaced. But they have worked with the company, in some cases for a decade plus. Giving them a fair picture of what seems to be down the road is the very least I'd expect of a company treating its workers as more than just replaceable cogwheels.


For sure. It's not inherently bad that a company encourages their staff to use AI. But there are better ways than just announcing you are now AI-first in everything and they must tell you how they're using AI in performance reviews. That just creates resentment and incentivises gaming the system.


Some people are good enough that they'll do well on performance reviews anyway, and if there's a new technology that's acting as a force multiplier those are exactly the people who the company most wants to adopt it.


Fair point about comparing performance reviews. It's also practically impossible to judge someone's performance on novel tasks, or whether they're caring enough about tech debt. Even as I wrote my first post there was a nagging voice in my head saying "Almost no one does performance reviews well enough for that".

In my (limited) experience, the tasks you want to assign to elite devs are less amenable to AI in the first place.


Perhaps then they should focus on getting their 0.1x developers using it to get competent rather than trying to get their 1x developers to 10x using it.


Interesting point. If eleven 1x developers multiply their productivity by 10, that's roughly the same as one 10x developer doing the same. Assuming of course that such quantities can be meaningfully applied.


> if you weren’t around before version control became the norm, then you probably missed the legions of developers

I was around before version control and I don't remember that reaction from more than an insignificant percentage of devs. Most devs reacted to the advent of version control with glee because it eased a real pain point.


It is incredibly early. Copilot and Cursor are both incapable of writing a mapping between two structs with identical fields - some of the most menial coding imaginable - because they either don’t have or won’t use basic coding mechanics like looking up the signature of a thing before writing code about it. This is the technology that should be making me 10x more productive? This honestly feels like an emperor has no clothes situation. Being charitable, maybe the hype is all from people generating code into empty projects with no existing context?
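
To be concrete, the kind of mapping I mean is boilerplate like this (a made-up Go sketch; the struct and field names are just for illustration):

    package main

    import "fmt"

    // Two structs with identical fields, e.g. an API type and a storage type.
    type UserDTO struct {
        ID    int
        Name  string
        Email string
    }

    type UserRecord struct {
        ID    int
        Name  string
        Email string
    }

    // The "menial" part: copy each field across, one by one.
    func toRecord(u UserDTO) UserRecord {
        return UserRecord{
            ID:    u.ID,
            Name:  u.Name,
            Email: u.Email,
        }
    }

    func main() {
        fmt.Println(toRecord(UserDTO{ID: 1, Name: "Ada", Email: "ada@example.com"}))
    }

All the tool has to do is look up the two definitions and copy the fields across - exactly the "look up the signature before writing code about it" step that keeps getting skipped.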


I keep running into these sorts of messages on HN, but my experience couldn't be more different. Even autocomplete does this automatically for me, let alone using chat/agent in Cursor/Augment Code.


That's weird; I'm pretty sure I've done that exact thing multiple times with ChatGPT. I've noticed Copilot doesn't always work well, but it's still frequently useful for me.


I'm sure if I copy and pasted to setup the problem in a ChatGPT browser window, it would work. But I would expect a $10 billion "AI powered" editor to be capable of using the go-to-definition on my behalf...


>> AI can be extremely useful and there’s a lot of people out there who refuse to give it a proper try.

My take-away was that this is exactly what the OP is targeting. Management's job is to convince you to try it and to help you make it demonstrate value; mandating "thou shalt be AI-first" does neither of these effectively - ironically, some of your best developers will require the most evidence to be convinced, fight the hardest, and have the best options to jump ship if you push too far. It's just weak management when there's the obvious alternative. Dash works in developer relations/evangelism, so it's not surprising he bristles at this approach.


> AI can be extremely useful and there’s a lot of people out there who refuse to give it a proper try. Using AI well is a skill you need to learn and if you don’t see positive results on your first couple of attempts that doesn’t necessarily mean it’s bad, it just means you are a beginner

I'm not a beginner though. In fact I'm actually very experienced at doing my job

Which is why I don't need non-technical management and AI consultants to be telling me what tools I should be using

If I thought AI was going to be a useful tool for me then I would use it

But so far it hasn't, so I don't

I'm not investing my time and energy into a "skill" that doesn't seem like it is going to pay off


> If you tried a new language and didn’t get very far at first, would you blame the language or recognise that you lack experience?

This way of phrasing it rejects the possibility that maybe the new thing really does suck, and that this can sometimes be identified pretty quickly.


There have also been many fads that were forced on people which fizzled out and didn't amount to much.


> These people do need a firm, top-down push, or they will be left behind.

> even for things we now consider near universally

We aren't at the point where AI tools provide a major productivity boost. Sometimes they help, sometimes they don't, sometimes working with AI has negative productivity.

Assuming AI improves to the point where employees who use it are significantly more productive... They'll excel relative to their peers. The people who can't figure it out will underperform.


> Using AI well is a skill you need to learn and if you don’t see positive results on your first couple of attempts that doesn’t necessarily mean it’s bad, it just means you are a beginner.

I think it’s a very easy skill to acquire, which is why we’ve seen such fast user growth for these products.

Most of them have the same interaction model as talking to someone on iMessage / WhatsApp / Messenger.


I think it would be a worthwhile exercise to find and replace every mention of AI in your post with blockchain or metaverse. Just because something is new doesn’t mean it’s useful, and if you’re having to force knowledge workers to adopt something that supposedly makes them more productive, then it’s probably a bad sign.


> I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody

Version control is vegetables. You often have to cajole kids to eat their vegetables. AI is being sold as if it's dessert. You don't usually have to cajole kids to eat their dessert, and it's weird to make that part of a performance review. If it really made people more efficient it would spread like wildfire as people sold all of their friends on it. Instead people try it, encounter some small wins and some annoying losses, judge that it's basically "meh", and we get legions of non-technical managers who for some reason feel they need to mandate its use.



