Developers have a new superpower: AI tools that generate code in seconds. At Dékuple, for the past few weeks, a small team has been testing one of them - Cursor - in the hope of automating tedious tasks and freeing up time for high-value work, like understanding user needs.
To me, it sounds great. But not everyone is convinced. During one of our latest progress reviews, a developer raised a rather unexpected concern: “Rodrigo, I’m not sure I like this. This technology will make me complacent and, in the end, a worse developer.”
Well, will it? Will developers lose critical skills because of AI usage? Should a company like Dékuple be worried about adopting AI?
This reminded me of an entirely different industry where automation transformed the way professionals work: aviation. We can learn something from it.
The lessons of AF447
In 2009, an Airbus A330 operated by Air France, flying from Rio de Janeiro to Paris, crashed into the Atlantic 3 hours and 45 minutes after takeoff. All 228 people on board - passengers and crew - were killed.
The accident wasn’t caused by a single factor. As with most failures in complex systems - like a plane - it resulted from a chain of interconnected events.
In this specific case, the problems started when the plane encountered icing conditions, and the sensors that measure airspeed froze and stopped working. The plane reacted by disconnecting the autopilot - the autopilot cannot work without reliable speed readings - and switching to manual mode, warning the pilots with an audible signal. When taking manual control of the plane, the copilots made a series of errors that resulted in the plane stalling and finally falling into the ocean.
In its conclusions and its list of safety recommendations, the accident investigation report listed several points. The very first one is the following (emphasis mine):
Training for Manual Aircraft Handling
There are other possible situations leading to autopilot disconnection for which only specific and regular training can provide the skills necessary to ensure the safety of the flight. Examination of their last training records and check rides made it clear that the copilots had not been trained for manual airplane handling of approach to stall and stall recovery at high altitude.
The A330 - and, in general, any modern airliner - has so many safety automations in place that, in a normal flight, it is practically impossible for the plane to stall. But in manual mode those protections are not there, and a pilot can stall the plane.
This tragic accident highlights something crucial: technology can make experts way more efficient - but only if they maintain the ability to take over when it fails. Without that skill, automation can become a dangerous crutch.
So?
There is no doubt that technology and automation have made aviation safer.
At the same time, pilots have adapted and learnt to rely on technology to fly planes - which was also the right thing to do.
The question is: what if technology fails? Should we spend time and money to be prepared for that scenario?
The answer to the previous question should be another question, well known by risk management practitioners: do we care if technology fails? What’s the worst that could happen?
On a plane, we certainly do care: a technological failure might result in loss of life. Therefore, pilots should be trained and ready to deal with that type of failure - including flying the plane manually, as the report recommended.
But what if your car’s GPS fails? Should you prepare for this by dusting off your old street atlas once a month and forcing yourself to navigate with the GPS purposely off? You might find this entertaining - please do it if that’s the case - but I would argue it is not necessary at all, at least for the average person.
This brings us back to my colleague’s concern: if AI tools become central to our workflow, should we worry about losing essential skills? If developers no longer write every line of code themselves, will they still be able to recognize and correct AI-generated mistakes?
Back to AI at the workplace
What if AI ‘fails’? Will we end up with a team of developers unable to function without their favorite tool?
Is that situation actually possible? What does it mean for AI to ‘fail’? There are, in my opinion, only two failure modes to consider:
1. The AI tool becomes completely unavailable (a broken Internet connection, perhaps?).
2. The AI tool is working, but spits out unreliable information.
Call me an optimist, but I don’t think the first case is actually possible, at least in the long run. If AI tools were to go down for good, it would probably be because the Internet itself had become globally unavailable - and then we would be dealing with far more pressing problems.
Which leaves us with the unreliable-information problem. Going back to aviation: pilots learn how to interpret the cockpit’s warnings and signals, so they can detect malfunctions and take over.
And that’s exactly what organizations must do: encourage everyone to use AI tools and advanced technology without worrying about reliance (the technology is unlikely to disappear), while making sure people are trained to detect when it fails - so they can take over, stop the process, or correct the output.
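For developers, “taking over” usually means reviewing and testing AI output rather than trusting it blindly. As a purely hypothetical illustration (the function and its bug are invented here, not taken from any real tool), consider plausible-looking generated code for computing a median - it works on the examples the tool happened to show, but an attentive developer with a minimal test suite catches the failure:

```python
def median(values):
    """Plausible-looking 'generated' code: correct for odd-length lists,
    but silently wrong for even-length ones."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]  # bug: ignores the even-length case


def median_fixed(values):
    """Corrected by the reviewing developer: averages the two middle
    elements when the list has an even number of items."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


# A minimal check is enough to expose the failure:
assert median([1, 2, 3]) == 2           # odd length: looks fine
assert median([1, 2, 3, 4]) == 3        # even length: wrong, should be 2.5
assert median_fixed([1, 2, 3, 4]) == 2.5
```

The point is not this particular bug, but the habit: tests and code review are the developer’s cockpit instruments - the signals that reveal when the automation is wrong and it is time to take over.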
Technology won’t make you dumber
As far as we know, the discussion about technology making us dumber is, at least, 2300 years old:
[Letters], said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
Call me an optimist, but history is clear: technology doesn’t make us weaker - it makes us stronger. Writing didn’t erase our memory, calculators didn’t kill math, and AI won’t make workers either obsolete or dumber. The key isn’t resisting progress; it’s knowing when to trust automation and when to step in.
Embrace new technologies and adapt to them, and keep pushing your knowledge and capabilities in those fields that technology has not - yet - solved.