A popular social activity among computer science students is participating in hackathons: time-limited, usually in-person, social competitions where contestants try to finish a prototype before the clock runs out. They test contestants' ability to work under time pressure, to collaborate (most hackathons are team-based), and to showcase human talent, creativity, and adaptability. Or so on paper.
As the tech industry grew, so did the number of people entering the scene, and things got far more competitive and adversarial. The prize pools got more serious, and with them, tensions among peers. It's not unheard of to see people unplugging each other's computers or intentionally distracting one another to grab an edge. Still, the main appeal of hackathons is showcasing human talent, and despite these nags, they kept doing their job.
However, the transformers "revolution" quickly changed the hackathon paradigm.
Since the main objective is to deliver a prototype fast, people found out that
LLMs can produce one far faster than humans, and, most of the time, far better.
And who's to blame them? If you can't beat them, embrace them, people thought,
including myself. It's pointless to work your ass off for hours on something a
Claude Code instance can finish in a couple of minutes, at the expense of some
spare tokens of course. Now hackathons don't evaluate how well you code; they
evaluate how well you prompt engineer, and how many tokens you're willing to
burn (doesn't this make hackathons pay-to-win? I think so).
Some organizers simply ban AI and run hackathons without it, but those are rare and don't get as much corporate support (since we... kinda need a j*b... y'know). Besides, many people don't agree with handicapping contestants by banning a tool.
My take is that AI is a tool, and a powerful one if used responsibly. The reason AI looks bad today is that people don't know how to use it: instead of using it alongside what they already know, they replace the human cognitive work altogether. That's why "AI is laying off people": not because AI itself does the laying off, but because employers overestimate how well it can solve problems without humans. On paper it sounds appealing: AI doesn't take paid leave, doesn't unionize, and if trained, aligns with your interests best. But bear in mind, it also makes mistakes, and worst of all, it doesn't know when it's wrong, no matter how much you train it.
Now back to hackathons. The original objective, competing on talent to see who
can deliver a prototype fastest, has long been overtaken by AI, more
specifically by agentic pipelines. However, there's no guarantee this avoids
long-term technical debt, and let's be honest: reading the sloppy source code
is torture, and fixing it is exorcism. It's shiny and appealing on the front
stage, but under the hood it's a nightmare. Hmm, how about using AI to fix AI
slop? Surely we can generate constraints.md, agent.md, etc., etc. That just
adds unneeded complexity for minimal gains. The most efficient solution is to
rely mainly on the human mind and use AI as an intern sidekick, handling the
painstaking boilerplate companies usually offload to junior roles: writing
unit tests, suggesting (not overwriting) better data structures and
algorithms, and so on.
As for hackathons themselves, another remaining problem is the competitive vibe, which inclines people toward less ethical means of winning. My take is to ease off corporate "sponsorship" by minimizing sponsors' influence while still giving them what they want (potential talent), and to make hackathons what they initially were: a social gathering, an excuse for people to build things together.