Advent of Code is one of the highlights of December for me.
It's sad, but inevitable, that the global leaderboard had to be pulled. It's also understandable that this year is just 12 days, which takes some of the pressure off.
If you've never done it before, I recommend it. Don't try to "win", just enjoy the problem solving and the whimsy.
That sounds healthy! But I would note that there's been interesting community discussions on reddit in past years, and I've gotten caught up in the "finish faster so I can go join the reddit discussion without spoilers". It turns out you can have amazing in-jokes about software puzzles and ascii art - but it also taught me in a very visceral way that even for "little" problems, building a visualizer (or making sure your data structures are easy-to-visualize) is startlingly helpful... also that it's nice to have people to commiserate with who got stuck in the same garden path/rathole that you did.
Last year was the first time I ever did the thing in sync, and it was a source of real delight to see other people foot-gunning themselves in the same way as me (also in different ways, schadenfreude and all that....)
Same. I usually try to use it as the "real-world problem" I need for learning a new language. Is there anywhere that people have starter advice/templates for various languages? I'd love to know:
- install like this
- initialize a directory with this command
- here are the VSCode extensions (or whatever IDE) that are the bare minimum for the language
The "only" 12 days might be disappointing (but totally understandable), however I won't mourn the global leaderboard, which always felt pointless to me (even without LLMs, the fact that it depends on what time of day you solved the problems really made it impractical for most people to actually compete). Private leaderboards with people in your timezone are much nicer.
The global leaderboard was a great way to find really crazy good people and solutions, however - I picked through a couple of these guys' solutions and learned a few things. One guy had even written his own special-purpose language mainly to make AoC problems fast - he was, of course, a compilers guy.
I think I’ll set up a local leaderboard with friends this year. I was never going to make it to the global board anyway but it is sad to see it go away.
It's all marketing, I can sell this to you and convert you.
Thing is, it may have some interesting challenges; I, too, wouldn't want to solve some insane string parsing problem with no interesting idea behind it. For today's problem, I did the naive version and it worked. The modular version ran into issues with some corner cases.
There should be more events like AoC. Self-contained problems are very educational.
It always seemed odd to me that a persistent minority of HN readers seem to have no interest in recreational programming/technical problem solving and perpetually ask "why should I care?"
It's totally fine not to care, but I can't quite get why you would then want to be an active member in a community of people who care about this stuff for no other reason than they fundamentally find it interesting.
I _love_ the Advent of Code. I actually (selfishly) love that it's only 12 days this year, because by about half way, I'm struggling to find the time to sit down and do the fantastic problems because of all the holiday activities IRL.
Taking out the public leaderboard makes sense imo. Even when you don't consider the LLM problem, the public leaderboard's design was never really suited for anyone outside of the very specific short list of (US) timezones where competing for a quick solution was ever feasible.
One thing I do think would be interesting is to see solution rate per hour block. It'd give an indication of how popular advent of code is across the world.
Yes: I'd argue that the timings actually work/worked better for Western Europe than the USA, I personally preferred doing the puzzle at 5am (UK) than the midnight equivalent, as I could finish before work (on a good day).
Only once did I nearly scratch a decent ranking, top 300 or so.
Either Russia (8am) or West Coast US (9pm) would be my preferred options.
Sadly it's 5am for me as I'm in the UK.
In 8 years I can say I've never once tried to be awake at 5am in order to do the puzzle. The one time I happened to still be awake at 5am during AoC I was quite spectacularly drunk so looking at AoC would have been utterly pointless.
Anything before 6.45am and I'm hopefully asleep. 7am isn't great as 7am-8am I'm usually trying to get my kid up, fed and out the door to go to school. Weekends are for not waking up at 7am if I don't need to.
9am or later and it messes with the working day too much.
Looking back at my submission times from 2017 onwards (I only found AoC in 2017 so did 2015/2016 retrospectively) I've only got two submissions under 02:xx:xx (e.g. 7am for me). Both were around 6.42am so I guess I was up a bit earlier that day (6.30am) and was waiting for my kid to wake up and managed to get part 1 done quickly.
My usual plan was to get my kid out of the door sometime between 7.30am and 8am and then work on AoC until I started work around 9am. If I hadn't finished it then I'd get a bit more time during my lunch hour and, if still not finished, find some time in the evening after work and family time.
Out of the 400 submissions from 2017-2024 inclusive I've only got 20 that are marked as ">24h" and many of these were days where I was out for the entire day with my wife/kid so I didn't get to even look at the problem until the next day. Only 4 of them are where I submitted part 1 within 24h but part 2 slipped beyond 24h.
Enormous understatement: if I were unencumbered by wife/kids then my life would be quite a bit different.
LLMs spoiled it, but it was fun to see the genuine top times. Watching competitive coders solve in real time is interesting (YouTube videos), and I wouldn't have discovered these without the leaderboard.
Well, I tried to do the first day, and I think it's an indictment of my own capabilities that I spent most of my day on the second part and still failed to get the correct result. That sort of programming is not something I've been doing at my current position, but as a programmer who has been working for a decade, that still smarts a little.
Python is extremely suitable for this kind of problem. C++ is also often used, especially by competitive programmers.
Which "non-mainstream" or even obscure languages are also well suited for AoC? Please list your weapon of choice and a short statement why it's well suited (not why you like it, why it's good for AoC).
My favourite "non-mainstream" languages are, depending on my mood at the time, either:
- Array languages such as K or Uiua. Why they're good for AoC: Great for showing off, no-one else can read your solution (including yourself a few days later), good for earlier days that might not feel as challenging
- Raw-dogging it by creating a Game Boy ROM in ASM (for the Game Boy's 'Z80-ish' Sharp LR35902). Why it's good for AoC: All of the above, you've got too much free time on your hands
Just kidding, I use Clojure or Python, and you can pry itertools from my cold, dead hands.
C, because it makes every problem into a memory management problem, which is good for you in an 'eat your vegetables' sort of way. It's also the starting point for a lot of other programming languages and related things like HDLs, which is helpful to me.
Perl has many of the required structures (hashes/maps, ad hoc structs, etc) and is great for knocking up a rough-and-ready prototype of something. It's also quick to write (but often unforgiving).
I can also produce a solution for pretty much every problem in AoC without needing to download a single separate Perl module.
On the negative side there are copious footguns available in Perl.
(Note that if I knew Python as well as I knew Perl I'd almost certainly use Python as a starting point.)
I also try and produce a Go and a C solution for each day too:
* The Go solution is generally a rewrite of the initial Perl solution but doing things "properly" and correcting a lot of the assumptions and hacks that I made in the Perl code. Plus some of those new fangled "test" things.
* The C solution is a useful reminder of how much "fun" things can be in a language that lacks built-in structures like hashes/maps, etc.
My favorite non-mainstream language for competitions like this and Project Euler is Julia. The startup time is not a factor, and the ability to use UTF-8 symbols as variables makes the code more mathematical.
I like to use Haskell, because parser combinators usually make the input parsing aspect of the puzzles extremely straightforward. In addition, the focus of the language on laziness and recursion can lead to some very concise yet idiomatic solutions.
Example: find the first state in which this "game of life" variant has at least 1000 cells in the "alive" state.
Solution: generate infinite list of all states and iterate over them until you find one with >= 1000 alive cells.
let allStates = iterate nextState beginState -- infinite list of consecutive states
let solution = head $ dropWhile (\currentState -> numAliveCells currentState < 1000) allStates
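For readers more comfortable in Python, the lazy pipeline in the Haskell snippet above has a rough generator-based equivalent. The `next_state` and `num_alive_cells` names are toy stand-ins here (a "state" is just its alive-cell count, and each step doubles it), not a real simulation:

```python
from itertools import dropwhile

def iterate(f, x):
    # Python analogue of Haskell's `iterate`: yields x, f(x), f(f(x)), ...
    while True:
        yield x
        x = f(x)

# Toy stand-ins so the threshold logic is easy to verify by hand.
def next_state(state):
    return state * 2

def num_alive_cells(state):
    return state

begin_state = 1
all_states = iterate(next_state, begin_state)  # conceptually infinite
solution = next(dropwhile(lambda s: num_alive_cells(s) < 1000, all_states))
print(solution)  # 1024: the first state with >= 1000 "alive" cells
```

Because generators are lazy, only as many states are computed as `dropwhile` actually consumes, mirroring the Haskell behaviour.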
Yes, there are some cool solutions using laziness that aren't immediately obvious. For example, in 2015 and 2024 there were problems involving circuits of gates that were elegantly solved using the Löb function:
I actually plan on doing this year in Gleam, because I did the last 5 years in Haskell and want to learn a new language this year. My solutions for last year are on github at https://github.com/WJWH/aoc2024 though, if you're interested.
Haskell values are immutable, so it creates a new state on each iteration. Since most of these "game of life" type problems need to touch every cell in the simulation multiple times anyway, building a new value is not really that much more expensive than mutating in place. The Haskell GC is heavily optimized for quickly allocating and collecting short-lived objects anyway.
But yeah, if you're looking to solve the puzzle in under a microsecond you probably want something like Rust or C and keep all the data in L1 cache like some people do. If solving it in under a millisecond is still good enough, Haskell is fine.
Fun fact about Game of Life is that the leading algorithm, HashLife[1], uses immutable data structures. It's quite well suited to functional languages, and was in fact originally implemented in Lisp by Bill Gosper.
OCaml. There's just enough in the standard library to cover what you need; for any non-trivial parsing tasks there's a parser generator and lexer generator bundled, and you can always pull in extra support libraries so you're not left implementing, say, a trie from scratch.
This year I've been working on a bytecode compiler for it, which has been a nice challenge. :)
When I want to get on the leaderboard, though, I use Go. I definitely felt a bit handicapped by the extra typing and lack of 'import solution' (compared to Python), but with an ever-growing 'utils' package and Go's fast compile times, you can still be competitive. I am very proud of my 1st place finish on Day 19 2022, and I credit it to Go's execution speed, which made my brute-force-with-heuristics approach just fast enough to be viable.
yep, https://github.com/lukechampine/slouch. Fair warning, it's some of the messiest code I've ever written (or at least, posted online). Hoping to clean it up a bit once the bytecode stuff is production-ready.
* The expressive syntax helps keep the solutions short.
* It has extensive standard library with tons of handy methods for AoC style problems: Enumerable#each_cons, Enumerable#each_slice, Array#transpose, Array#permutation, ...
* The bundled "prime" gem (for generating primes, checking primality, and prime factorization) comes in handy for at least a few problems each year.
* The tools for parsing inputs and string manipulation are a bit more ergonomic than what you get even in Python: first class regular expression syntax, String#scan, String#[], Regexp::union, ...
* You can easily build your solution step-by-step by chaining method calls. I would typically start with `p File.readlines("input.txt")` and keep executing the script after adding each new method call so I can inspect the intermediate results.
I use python at work but code these in kotlin. The stdlib for lists is very comprehensive, and the syntax is sweet. So easy to make a chain of map, filter and some reduction or nice util (foldr, zipwithnext, windowed etc). Flows very well with my thought process, where in python I feel list comprehensions are the wrong order, lambdas are weak etc.
I write most as pure functional/immutable code unless a problem calls for speed. And with extension functions I've made over the years and a small library (like 2d vectors or grid utils) it's quite nice to work with. Like, if I have a 2D list (List<List<E>>), and my 2d vec, like a = IntVec(5,3), I can do myList[a] and get the element due to an operator overload extension on list-lists.
And with my own utils and extension functions, added over years of competitive programming, it's very fluent.
Go is strong. You get something where writing a solution doesn't take too much time, you get a type system, you can brute-force problems, and the usual mind-numbing boring data-manipulation handling fits well into the standard tools.
OCaml is strong too. Stellar type system, fast execution, and sane semantics, unlike 99% of all programming languages. If you want to create elegant solutions to problems, it's a good language.
For both, I recommend coming prepared. Set up a scaffold and create a toolbox which matches the typical problems you see in AoC. There's bound to be a 2d grid among the problems, and you need an implementation. If it can handle out-of-bounds access gracefully, things are often much easier, and so on. You don't want to hammer your head against the wall not solving the problem, but solving parsing problems. Having a combinator-parser library already in the project will help, for instance.
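As one sketch of that toolbox idea (a hypothetical helper, not from anyone's actual repo, assuming character grids as in most AoC maps), a dict-backed grid in Python makes out-of-bounds reads return a default instead of raising:

```python
class Grid:
    """Sparse 2D grid of characters; out-of-bounds reads return a default."""

    def __init__(self, lines, default="."):
        self.default = default
        self.cells = {(x, y): ch
                      for y, line in enumerate(lines)
                      for x, ch in enumerate(line)}

    def __getitem__(self, pos):
        # Graceful out-of-bounds access: no exception, just the default.
        return self.cells.get(pos, self.default)

    def neighbors(self, pos):
        # 4-neighborhood; combined with the default above, callers never
        # need to special-case the edges of the map.
        x, y = pos
        return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

g = Grid(["ab", "cd"])
print(g[(0, 0)], g[(1, 1)], g[(99, 99)])  # a d .
```

With this shape, a flood fill or cellular-automaton step can just read every neighbor unconditionally and treat the default as "wall" or "dead".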
Any recommendations for Go? Traditionally I've gone for Python or Clojure with an 'only builtins or things I add myself' approach (e.g. no NetworkX), and I've been keen to try doing a year in Go, but was a bit put off by the verbosity of the parsing and not wanting to get caught spending more time futzing with input lines and err.
Naturally later problems get more puzzle-heavy, so the ratio of input-handling to puzzle-solving code changes, but it seemed a bit off-putting for the early days; and while I like a builtins-only approach, it seems like the input handling would really benefit from a 'parse, don't validate' type approach (goparsec?).
It's usually easy enough in Go that you can just roll your own for the problems at hand. It won't be as elegant as having access to a combinator-parser, but not all of the AoC problems are parsing problems.
Once you have something which can "load \n separated numbers into an array/slice" you are mostly set for the first few days. Go is verbose; you can't really get around that.
The key thing in typed languages is to cook up the right data structures. In something without a type system, you can just wing things and work with a mess of dictionaries and lists. But trying to do the same in a typed language is just going to be uphill, as you don't have the tools to manipulate the mess.
Historically, the problems have had some inter-linkage. If you built something on day 3, then it's often used on days 4-6 as well. Hence, you can win by spending a bit more time on elegance on day 3, which makes the work on days 4-6 easier.
Mind you, if you just want to LLM your way through, then this doesn't matter since generating the same piece of code every day is easier. But obviously, this won't scale.
> It won't be as elegant as having access to a combinator-parser, but not all of the AoC problems are parsing problems.
Yeah, this is essentially it for me. While it might not be a 'type-safe and correct regarding error handling' approach with Python, part of the interest of the AoC puzzles is the ability to approach them as 'almost pure' programs - no files except for puzzle input and output, no awkward areas like date time handling (usually), absolutely zero frameworks required.
> you can just wing things and work with a mess of dictionaries and lists.
Checks previous years' type-hinted solutions with map[tuple[int, int], list[int]]
Yeah...
> but not all of the AoC problems are parsing problems.
I'd say for the first ten years at least, the first ten-ish days are 90% parsing and 10% solving ;) But yes, I agree, and maybe I'm worrying over a few extra visible err's in the code that I shouldn't be.
> if you just want to LLM your way through
Totally fair point if I constrain LLM usage to input handling and the things that I already know that I know how to do but don't want to type, although I've always quite liked being able to treat each day as an independent problem with no bootstrapping of any code, no 'custom AoC library', and just the minimal program required to solve the problem.
This is what is great about it, the community posting hyper-creative (sometimes cursed) solutions for fun! I usually use AoC to try out a new language and that has been fun for me over the years.
AoC has been a highlight of the season for me since the beginning in 2015. I experimented with many languages over the years, zeroing in on Haskell, then Miranda as my language of choice. Finally, I decided to write my own language to do AoC, and created Admiran (based upon Miranda and other lazy, pure, functional languages) with its own self-hosted compiler and library of functional data structures that are useful in AoC puzzles:
I've had a lot of fun using Nim for AoC for many years. Once you're familiar with the language and std lib, it's almost as fast to write as Python, but much faster to run (Nim compiles to C, which then gets compiled to your executable). This means that sometimes, if your solution isn't perfect in terms of algorithmic complexity, waiting a few minutes can still save you (waiting 5 mins for your slow Nim code is OK; waiting 5 hours for your slow Python isn't really, for me). Of course all problems have a solution that can run in seconds even in Python, but sometimes it's not the one I figure out first try.
Downsides: The debugging situation is pretty bad (hope you like printf debugging), smaller community means smaller package ecosystem and fewer reference solutions to look up if you're stuck or looking for interesting alternative ideas after solving a problem on your own, but there's still quality stuff out there.
Though personally I'm thinking of trying Go this year, just for fun and learning something new.
Edit: also a static type system can save you from a few stupid bugs that you then spend 15 minutes tracking down because you added a "15" to your list without converting it to an int first or something like that.
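That failure mode is easy to demonstrate. A small Python sketch of the same bug (a string slipping into a list of ints, which a static type checker would flag before the program ever ran):

```python
# A "stringly typed" slip-up: forgot int() when collecting parsed values.
values = [1, 2, "15"]
try:
    total = sum(values)  # fails at runtime, not at compile time
except TypeError as exc:
    print("runtime surprise:", exc)

# The fix: convert while parsing the input line.
values = [int(x) for x in "1 2 15".split()]
print(sum(values))  # 18
```

In a statically typed language the first `sum` simply wouldn't compile, which is exactly the 15-minute bug hunt being described.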
I’ve always used AoC as my jump-off point for new languages. I was thinking about using Gleam this year! I wish I had more profound reasons, but the pipeline syntax is intriguing and I just want to give it a whirl.
I tried AoC out one year with the Wolfram language, which sounds insane now, but back then it was just a "seemed like the thing to do at the time" and I'm glad I did it.
With both AoC and Project Euler I like seeing how fast I can get my solution to run with SBCL. Finding all palindromic primes below a million in less than a second is pretty neat.
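For comparison, here is a quick Python sketch of the same computation (a simple Sieve of Eratosthenes plus a string reversal check), which is also comfortably fast at this scale:

```python
def palindromic_primes(limit):
    # Sieve of Eratosthenes up to (but not including) limit,
    # then keep the primes whose decimal digits read the same both ways.
    is_prime = bytearray([1]) * limit
    is_prime[0:2] = b"\x00\x00"
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            is_prime[n * n::n] = bytearray(len(range(n * n, limit, n)))
    return [n for n in range(limit)
            if is_prime[n] and str(n) == str(n)[::-1]]

pp = palindromic_primes(1_000_000)
print(pp[:6])  # [2, 3, 5, 7, 11, 101]
```

Chasing the same result in SBCL with type declarations is a nice exercise in seeing how far a dynamic-feeling language can be pushed.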
SBCL is an absolute beast. I think only surpassed by LispWorks, but SBCL is a miracle machine. Even without type annotations it usually performs well enough.
I've been using Elixir, which has been wonderful, mostly because of how amazing the built in `Enum` library is for working on lists and maps (since the majority of AoC problems are list / map processing problems, at least for the first while)
Enum really does feel like a superpower sometimes. I’ll knock out some loop and then spend a few mins with h Enum.<tab> and realise it could’ve been one or two Enum functions.
I am going to try and stick with Prolog as much as I can this year. Plenty of problems involve a lot of parsing and searching, both could be expressed declaratively in Prolog and it just works (though you do have to keep the execution model in mind).
I used MATLAB last year while I was re-learning it for work. It did okay, but we didn't have a license for the Image Processing Toolbox, which has a boatload of tools for the grid based problems.
I've done AoC on what I call "hard mode", where I do the solutions in a language I designed and implemented myself. It's not because the language is particularly suited to AoC in any particular way, but it gives me confidence that my language can be used to solve real problems.
For me (and most of my friends/coworkers) the point of AoC was to write in some language that you always wanted to learn but never had the chance. The AoC problems tend to be excellent material for a crash course in a new PL because they cover a range of common programming tasks.
Historically good candidates are:
- Rust (despite its popularity, I know a lot of devs who haven't had time to play with it).
- Haskell (though today I'd try Lean4)
- Racket/Common Lisp/Other scheme lisp you haven't tried
- Erlang/Elixir (probably my choice this year)
- Prolog
Especially for those langs that people typically dabble in but never get a chance to write non-trivial software in (Haskell, Prolog, Racket), AoC is fantastic for really getting a feel for the language.
It's a great language.
Its dependent-types/theorem-proving-oriented type system, combined with AI assistants, makes it the language of the future IMO.
I used my homemade shell language last year, called elk shell. It worked surprisingly well, better than other languages I've tried, because unlike other shell languages it is just a regular general purpose scripting language with a standard library that can also run programs with the same syntax as function calls.
Elixir Livebook is my tool of choice for Advent of Code. The language is well-suited for the puzzles, I can write some Markdown if I need to record some algebra or my thought process, the notebook format serves as a REPL for instant code testing, and if the solution doesn't fit neatly into an executable form, I can write up my manual steps as well.
If I remember correctly, one of the competitive programming experts from the global leaderboard made his own language, specifically tailored to help solve AoC problems:
(post title: "Designing a Programming Language to Speedrun Advent of Code", but starts off "The title is clickbait. I did not design and implement a programming language for the sole or even primary purpose of leaderboarding on Advent of Code. It just turned out that the programming language I was working on fit the task remarkably well.")
> I solve and write a lot of puzzlehunts, and I wanted a better programming language to use to search word lists for words satisfying unusual constraints, such as, “Find all ten-letter words that contain each of the letters A, B, and C exactly once and that have the ninth letter K.”1 I have a folder of ten-line scripts of this kind, mostly Python, and I thought there was surely a better way to do this.
I'll choose to remember it was designed for AoC :-D
For some grid based problems, I think spreadsheets are very powerful and under-appreciated.
The spatial and functional problem solving makes it easy to reason about how a single cell is calculated. Then simply apply that logic to all cells to come up with the solution.
I usually do it with Ruby, which is well suited just like Python, but last year I did it with Elixir.
I think it lends itself very well to the problem set, the language is very expressive, the standard library is extensive, you can solve most things functionally with no state at all. Yet, you can use global state for things like memoization without having to rewrite all your functions so that's nice too.
I've done some of the problems in R. Vectorized-by-default can avoid a lot of boilerplate. And for problems that aren't in R's happy path, I learn how to optimize in the language. And then I try to make those optimizations non-hideous to read.
IMO it's maybe the best suited language to AoC.
You can write it even faster than Python; it has a very terse syntax and great numerical performance for the few challenges where that matters.
Another vote for Haskell. It’s fun and the parsing bit is easy. I do struggle with some of the 2d map style questions which are simpler in a mutable 2d array in c++. It’s sometimes hard to write throwaway code in Haskell!
I respect the effort going into making Advent of Code but with the very heavy emphasis on string parsing, I'm not convinced it's a good way to learn most languages.
Most problems are 80%-90% massaging the input, with a little data modeling which you might have to rethink for the second part, and algorithms play a significant role only in the last few days.
That heavily favours languages which make manipulating strings effortless and have very permissive data structures like Python dicts or JS objects.
You are right, the exercises are heavy in one area. Still, for getting started in a new language it can be helpful: you have to do I/O with files, use data structures, and you will exercise all the flow control. You will not be an ace, but it can help you get started.
I know people who make some arbitrary extra restriction, like “no library at all” which can help to learn the basics of a language.
The downside I see is that suddenly you are solving algorithmic problems, which sometimes are not trivial, while at the same time struggling with a new language.
That's a hard agree and a reason why anyone trying to learn Haskell, OCaml, or other language with minimal/"batteries depleted" stdlib will suffer.
Sure, Haskell comes packaged with parser combinators, but for a new user, having to juggle immutability, IO, and monads all at once will almost certainly be impossible.
I typically use OCaml myself for them and have never found the standard library to be particularly "depleted" for AoC, though I do have a couple hundred lines of shared library code built up over the years for parsing things, instrumenting things, and implementing a few algorithms and data structures that keep cropping up.
Also, dune makes pulling in build dependencies easy these days, and there's no shame in pulling in other support libraries. It's years since I've written anything in Haskell, but I'd guess the same goes for cabal, though OCaml is still more approachable than Haskell for most people, I'd say. A newbie is always going to be at some kind of disadvantage regardless.
> I do have a couple hundred lines of shared library code built up over the years for parsing things
I think that's the best example of anemic built-in utilities. Tried AoC two years ago with OCaml; string splitting, character matching and string slicing were very cumbersome coming from Haskell. Whereas the convenient mutation and for-loops in OCaml provide an overall better experience.
Given you're already well-versed in the ecosystem you'll probably have no issues working with dune, but for someone picking up OCaml/Haskell and having to also delve in the package management part of the system is not a productive or pleasant experience.
Bonus points for those trying out Haskell successfully, then in later challenges having to completely rewrite their solution due to space leaks, whereas Go, Rust (and probably OCaml) solutions just brute-force the work.
Maybe not learning a new language from the ground up, but I think it is good training to "just write" within the language. A daily or twice-daily interaction. Setting up projects, doing the basic stuff to get things running, and reading up on the standard library.
Having smaller problems makes it possible to find multiple solutions as well.
I am very happy that we get the advent of code again this year, however I have read the FAQ for the first time, and I must admit I am not sure I understand the reasoning behind this:
> If you're posting a code repository somewhere, please don't include parts of Advent of Code like the puzzle text or your inputs.
The text I get, but the inputs? Well, I will comply, since I am getting a very nice thing for (almost) free, so it is polite to respect the wishes here; but since I commit the inputs into the repository (you know, since I want to be able to run tests), it is a bit of a shame the repo must be private.
If enough inputs are available online, someone can presumably collect them and clone the entire project without having access to the puzzle input generation code, which is the "secret sauce" of the project.
Are you saying that we all have different inputs? I've never actually checked that, but I don't think it's true. My colleagues have gotten stuck in the same places and have mentioned aspects of puzzles and input characteristics and never spoken past each other. I feel like if we had different inputs we'd have noticed by now.
It depends on the individual problem, some have a smaller problem space than others so unique inputs would be tricky for everyone.
But there are enough possible inputs that most people shouldn't come across anyone else with exactly the same input.
Part of the reason why AoC is so time consuming for Eric is that not only does he design the puzzles, he also generates the inputs programmatically, which he then feeds through his own solver(s) to ensure correctness. There is a team of beta testers that work for months ahead of the contest to ensure things go smoothly.
(The adventofcode subreddit has a lot more info on this.)
He puts together multiple inputs for each day, but they do repeat over users. There's a chance you and your colleagues have the same inputs.
He's also described, over the years, his process of making the inputs. Related to your comment, he tries to make sure that there are no features of some inputs that make the problem especially hard or easy compared to the other inputs. Look at some of the math ones, a few tricks work most of the time (but not every time). Let's say after some processing you get three numbers and the solution is their LCM, that will probably be true of every input, not just coincidental, even if it's not an inherent property of the problem itself.
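That hypothetical is easy to express in code; a Python illustration (the numbers here are invented, purely to show the "answer is the LCM" shape):

```python
from math import lcm

# Hypothetical end-of-processing values, e.g. three cycle lengths
# extracted from the input; the intended trick is that the answer
# is their least common multiple for EVERY input, by construction.
a, b, c = 12, 18, 30
print(lcm(a, b, c))  # 180
```

(`math.lcm` takes any number of arguments as of Python 3.9.)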
I don't know how much they "stand out" because their frequency makes it so that the optimal global leaderboard strat is often to just try something dumb and see if you win input roulette.
If we just look at the last three puzzles: day 23 last year, for example, admitted the greedy solution, but only for some inputs. Greedy clearly shouldn't work (shuffling the vertices in a file that admits it causes it to fail).
I have a solve group that calls it "Advent of Input Roulette" because (back when there was a global leaderboard) you can definitely get a better expected score by just assuming your input is weak in structural ways.
I don't push my solutions publicly, but I made an input downloader so you can input your cookie from your browser and load (and cache) the inputs rather than commit them.
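A minimal sketch of such a downloader in Python (not the commenter's actual tool; the function names are invented, though the `adventofcode.com/{year}/day/{day}/input` URL plus the browser's `session` cookie is the commonly used mechanism, and the fetcher is injectable so the caching can be exercised offline):

```python
from pathlib import Path
from urllib.request import Request, urlopen

def fetch_from_site(year, day, session):
    # Network fetch: AoC serves each user's puzzle input at this URL
    # when the browser's `session` cookie is supplied.
    url = f"https://adventofcode.com/{year}/day/{day}/input"
    req = Request(url, headers={"Cookie": f"session={session}"})
    with urlopen(req) as resp:
        return resp.read().decode()

def get_input(year, day, session, cache_dir="inputs", fetch=fetch_from_site):
    # Cache on disk so each input is downloaded at most once, and the
    # cache directory can be .gitignored instead of committed.
    path = Path(cache_dir) / f"{year}-day{day:02d}.txt"
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(fetch(year, day, session))
    return path.read_text()
```

Passing a stub `fetch` also makes solutions testable without hitting the site (which is polite: the FAQ asks people not to hammer it).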
This is not surprising at all, to me. Just commit the example input and write your test cases against that. In a nicely structured solution, this works beautifully with example style tests, like python or rust doctests, or even running jsdoc @example stanzas as tests with e.g. the @linus/testy module.
The example input(s) is part of the "text", and so committing it is also not allowed. I guess I could craft my own example inputs and commit those, but that exceeds the level of effort I am willing to expend trying to publish a repository no one will likely ever read. :)
I had never heard of this before I saw something announcing this year's adventure. It looked interesting, so I gave it a try, doing 2024. I had a blast. In concept, it's very similar to Project Euler but oriented more towards programming rather than being heavily mathematical. Like Euler, the first part is typically trivial, while part 2 can put the hammer down and make you think to devise an approach that can arrive at a solution in milliseconds rather than the death of the universe.
The part I enjoy the most, after figuring out a solution for myself, is seeing what others did on Reddit or among a small group of friends who also do it. We often have slightly different solutions or realize one of our solutions worked "by accident", ignoring some edge case that didn't appear in our particular input. That's really the fun of it imho.
I never liked the global leaderboard since I was usually asleep when the puzzles were released. I likely never would have had a competitive time anyway.
I never had any hope or interest to compete in the leaderboard, but I found it fun to check it out, see times, time differences ("omg 1 min for part 1 and 6 for part 2"), lookup the names of the leaders to check if they have something public about their solutions, etc. One time I even ran into the name of an old friend so it was a good excuse to say hi.
I find it interesting how many sponsors run their own "advent of <x>". So far I've seen "cloud", "FPGA", and a "cyber security" one in the sponsors pages (although that last one is one I remember from last year).
I'm also surprised there are a few Dutch language sponsors. Do these show up for everyone or is there some kind of region filtering applied to the sponsors shown?
A little sad that there are fewer puzzles. But also glad that I'll see my wife and maybe even go outside during the second half of December this year.
Advent of code is such a fantastic event. I am honestly glad it's 12 days this year, primarily because I would only ever get to day 13 or 14 before it started taking me an entire day to finish each puzzle! This will be my fourth year doing AoC. Looking forward to it :)
I plan on doing this year in C++ because I have never worked with it and AoC is always a good excuse to learn a new language. My college exams just got over, so I have a ton of free time.
I love advent of code, and I look forward to it every year!
I've never stressed out about the leaderboard. I've always taken it as an opportunity to learn a new language, or brush up on my skills.
In my day-to-day job, I rarely need to bootstrap a project from scratch, implement a depth first search of a graph, or experiment with new language features.
It's for reasons like these that I look forward to this every year. For me it's a great chance to sharpen the tools in my toolbox.
Some part of me would love a job that was effectively solving AoC type problems all the time, but then I'd probably burn out pretty quickly if that's all I ever had to do.
Sometimes it's nice to have a break by writing a load of error handling, system architecture documentation, test cases, etc.
> For me it's a great chance to sharpen the tools in my toolbox.
That's a good way of putting it.
My way of taking it a step further and honing my AoC solutions is to make them more efficient whilst ensuring they are still easy to follow, and to make sure they work on as many different inputs as possible (to ensure I'm not solving a specific instance based on my personal input). I keep improving and chipping away at the previous years problems in the 11 months between Decembers.
> You don't need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far.
Every time I see this I wonder how many amateur/hobbyist programmers it sets up for disappointment. Unless your definition of “pretty far” is “a small number of the part ones”, it’s simply not true.
In the programming world I feel like there's a lot of info "for beginners" and a lot of folks / activities for experts.
But that middle ground world is strange... a lot of it is a combo of filling in "basics" and also touching more advanced topics at the same time and the amount of content and just activities filling that in seems very low. I get it though, the middle ground skilled audience is a great mix of what they do or do not know / can or can not solve.
This is also true of a lot of other disciplines. I’ve been learning filmmaking lately (and editing, colour science, etc). There’s functionally infinite beginner friendly videos online on anything you can imagine. But very little content that slowly teaches the fundamentals, or presents intermediate skills. It’s all “Here’s 5 pieces of gear you need!” “One trick that will make your lighting better”. But that’s mostly it. There’s almost no intermediate stuff. No 3 hour videos explaining in detail how to set up an interview properly. Stuff like that.
I've found the best route at that point is just... copying people who are really good. For my interest (3d modeling) if you want voice-over and directions, those are all pretty basic, but if you want to see how someone approaches a large, complex object, I will literally watch a timelapse of someone doing it and scrub the video in increments to see each modifier/action they took. It's slow but that's also how I built some intuition and muscle memory. That's just the way...
Makes sense that that's the case: there's usually a limited amount of beginner's knowledge, and then you get to the medium level by arbitrary combinations of that beginner's knowledge, of which there's an exponential number, making it less likely that someone has produced something about that specific combination. Then at the expert level, people can get real deep into some obscure nitty-gritty detail, and other experts will be able to generalise from that by themselves.
It's one of the worst parts of being self-taught:
beginner level stuff has a large interest base because everyone can get into it.
Advanced level stuff usually gets recommended directly by experts or will be interesting to beginners too as a way of seeing the high level.
Mid level stuff doesn't have that wide appeal, the freshness in the mind of the experts, or the ease of entry, so it's not usually worth it for creators if the main metric is reach/interest
Structured (taught) learning is better in this regard, it at least gives you structure to cling on to at the mid level
Yes, and it's hard to point newcomers to reference material. Hey, yeah, that's actually a classic problem, let me show you a book about it... oh, there's none. Maybe I should start creating them, but that is of course hard.
But also, the middle ground is often just years of practice.
Realize that in anything, there are people who are much better than even the very good. The people doing official collegiate-level competitive programming would find AoC problems pretty easy.
>The people doing official collegiate level competitive programming would find AoC problems pretty easy.
I used to program competitively and while that's the case for a lot of the early day problems, usually a few on the later days are pretty tough even by those standards. Don't take it from me, you can look at the finishing times over the years. I just looked at some today because I was going through the earlier years for fun and on Day 21/2023, 1 hour 20 minutes got you into the top 100. A lot of competitive programmers have streamed the challenges over the years and you see plenty of them struggle on occasion.
People just love to BS and brag, and it's quite harmful honestly because it makes beginner programmers feel much worse than they should.
The actual number is going to be higher as more people will have finished the puzzles since then, and many people may have finished all of the puzzles but split across more than one account.
Then again, I'm sure there's a reasonable number of people who have only completed certain puzzles because they found someone else's code on the AoC subreddit and ran that against their input, or got a huge hint from there without which they'd never solve it on their own. (To be clear, I don't mind the latter as it's just a trigger for someone to learn something they didn't know before, but just running someone else's code is not helping them if they don't dig into it further and understand how/why it works.)
There's definitely a certain specific set of knowledge areas that really helps solve AoC puzzles. It's a combination of classic Comp Sci theory (A*/SAT solvers, Dijkstra's algorithm, breadth/depth first searches, parsing, regex, string processing, data structures, dynamic programming, memoization, etc) and Mathematics (finite fields and modular arithmetic, Chinese Remainder Theorem, geometry, combinatorics, grids and coordinates, graph theory, etc).
Not many people have all those skills to the required level to find the majority of AoC "easy". There's no obvious common path to accruing this particular knowledge set. A traditional Comp Sci background may not provide all of the Mathematics required. A Mathematics background may leave you short on the Comp Sci theory front.
My own experience is unusual. I've got two separate bachelor's degrees, one in Comp Sci and one in Mathematics with a 7 year gap between them. Those degrees and 25+ years of doing software development as a job mean I do find the vast majority of AoC quite easy, but not all of it; there are still some stinkers.
Being able to look at an AoC problem and think "There's some algorithm behind this, what is it?" is hugely helpful.
The "Slam Shuffle" problem (2019 day 22) was a classic example of this that sticks in my mind. The magnitude of the numbers involved in part 2 of that problem made it clear that a naive iteration approach was out of the question, so there had to be a more direct path to the answer.
As I write the code for part 1 of any problem I tend to think "What is the twist for part 2 going to be? How is Eric going to make it orders of magnitude harder?" Sometimes I even guess right, sometimes it's just plain evil.
Sorry to focus on just one aspect of your (excellent) post, but do you have recommendations for reading up on A*/SAT beyond Wikipedia? I'm mostly self-taught (did about a minor's worth of post-bacc comp sci after getting a chemistry degree) and those just haven't come up much, e.g. I don't see A* mentioned at a first glance through CLRS and only in passing in Skiena's algorithms book. Thank you!
Not sure. I covered them during my Comp Sci degree in the mid/late 90s. I'm probably not even implementing them properly but whatever I do implement tends to work.
Just checked my copy of TAOCP (Vol 3 - Sorting and Searching) and it doesn't mention A* or SAT.
Yeah, getting 250 or so stars is going to be straightforward, something most programmers with a couple of years of experience can probably manage. Then another 200 or so require some more specialized know-how (maybe some basic experience with parsers or making a simple virtual machine or recognizing a topological sort situation). Then probably the last 50 require something a bit more unusual. For me, I definitely have some trouble with any of the problems where modular inverses show up.
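For what it's worth, computing a modular inverse itself is a one-liner in Python 3.8+ via three-argument `pow` (the modulus below is an arbitrary prime chosen purely for illustration):

```python
m = 10007            # an arbitrary prime modulus, for illustration
a = 1234
inv = pow(a, -1, m)  # modular inverse; valid whenever gcd(a, m) == 1
assert (a * inv) % m == 1
```

The hard part of those puzzles is usually recognising that an inverse is what's needed, not computing it.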
It's just bluffing, lying. People lie to make others think they're hot shit. It's like the guy in school who gets straight A's and says he never studies. Yeah I'll bet.
They... sort of are though? A year or two ago I just waited until the very last problem, which was min-cut. Anybody with a computer science education who has seen the prompt Proof. before should be able to tackle this one with some effort, guidance, and/or sufficient time. There are algorithms that don't even require all the high-falutin graph theory.
I don't mean to say my solution was good, nor was it performant in any way - it was not, I arrived at adjacency (linked) lists - but the problem is tractable to the well-equipped with sufficient headdesking.
Operative phrase being "a computer science education," as per GGP's point. Easy is relative. Let's not leave the bar on the floor, please, while LLMs are threatening to hoover up all the low hanging fruit.
You say in your comment: "Anybody with a computer science education ... should be able to tackle this one" which is directly opposed to what they advertise: "You don't need a computer science background to participate"
"Anybody with a computer science education who has seen the prompt Proof. before should be able to tackle this one with some effort, guidance, and/or sufficient time."
I have a computer science education and I have no idea what you're talking about. The prompt "Proof." ?
Most people who study Comp Sci never use any of what they learned ever again, and most will have forgotten most of what they learned within one or two years. Most software engineers never use any comp sci theory at all, but especially not graph theory or shit like Dijkstras algorithms, DFS, BFS etc.
Holy fuck. I should just grow coconuts or something in the remote Philippines.
> Most software engineers never use any comp sci theory at all, but especially not graph theory or shit like Dijkstras algorithms, DFS, BFS etc.
But we are talking about Advent of Code here, which is a set of fairly contrived, theoretical, in vitro learning problems that you don't really see in the real software engineering world either.
It's totally true. I was doing Advent of Code before I had any training or work in programming at all, and a lot of it can be done with just thinking through the problem logically and using basic problem solving. If you can reason a word problem into what it's asking, then break it down into steps, you're 90% of the way there.
Comparing previous years, they're exactly what I'd expect, to be honest. Only people serious about completion will...well...complete it. Even if they do not know any code, if you pick something well-documented like Python or whatever, it should not be a tremendous challenge so long as you have the drive to finish the event. Code isn't exactly magic, though it does require some problem-solving and dedication. Since this is a self-paced event that does not offer any sort of immediate reward for completion, most people will drop out due to limited bandwidth needing to be devoted to everything else in their lives. That versus, say, a college course where you paid to be there and the grade counts toward your degree; there's simply more at stake when it comes to completing the course.
But, speaking to the original question as to the number of newbies that go all the way, I'd say one cannot expect to increase their skills in anything if one sticks in their comfort zone. It should be hard, and as a newbie who participated in previous years, I can confirm it often is. But I learned new things every time I did it, even if I did not finish.
I have to say, I've read many out-of-touch comments on HN over the years but this is definitely among the most out there, borderline delusional comments I've ever seen!
The idea that anyone who doesn't know any code would:
1) Compete in Advent of Code at all.
2) Complete a single part of a single problem.
let alone, complete the whole thing without it being a "tremendous challenge"...
is so completely laughable it makes me question whether you live on the same planet as the rest of us here.
Getting a person who has never coded to write a basic sort algorithm (e.g. bubble sort) is already basically impossible. I work with highly talented non-coder co-workers who all attended tier-1 universities (e.g. Oxford, Harvard, Stanford) but for finance/business related degrees; I cannot get them to write while/for loops in Python, and simply using Claude Code is way too much for them.
If you are even fully completing one Advent of Code problem, you are in the top 0.1% of coders, completing all of them puts you in the top 0.001%.
I can't begin to describe how valuable your input has been through this whole thread about something you're quite possessive and passionate about, which surely places you in a position to aggressively dismiss any other possible way of looking at it! Wow, love learning about new perspectives on HN!
Wishing you best of luck in AoC, Life and Love but I imagine someone like you doesn't need it, being a complete toolbox and all.
P.S.: Tell your coworkers I'm sorry they have to put up with you.
You're the person saying Advent of Code is "so easy" that anyone, even people with no coding ability at all, should find it doable, which is totally diminishing the difficulty of the problems, and asserting your own genius, i.e. that you found it totally trivial.
I am the person saying that actually, stuff like Advent of Code is incredibly difficult and 99% of active programmers aren't able to complete it, let alone people who don't code.
I am not an elitist at all, unlike yourself, I don't find completing "Advent of Code" easy, in fact, it would take me a long time to complete it, more time than I have available in my busy life in the average December. And I doubt I would be able to complete it 100% without looking up help, getting hints, or using LLMs to help.
You clearly didn't read my whole original comment before mouthing off. Go back and do that; you'll find that I pointed out most do not complete it, that it is supposed to be challenging, and that I never called it "easy" as you imply ("not tremendously difficult" =/= "easy").
Heck, I even talked about having to be serious about completion, and you could not bother to read the whole comment, then proceed to call me delusional? FFS, I am now praying for your co-workers and I'm not even religious.
Did YOU even read your original comment? You asserted that people who have never coded could complete the event!
Did you realize only roughly 500 people of the > 1M who are registered for advent of code even complete it?
You said "it should not be a tremendous challenge", i.e. not that big of a deal even if you don't know how to code. Which is absolutely diminishing the difficulty of the event, I mean, come on man...
This is why I'm asserting you are quite oblivious to the abilities of most people. I am asserting that most people who CAN code cannot complete the event, let alone non-coders. I am a very active coder (for fun mostly these days, but also sometimes for work), but I could not complete Advent of Code. Maybe if I took all of December off work to dedicate serious time, but even then I wonder if it's possible without looking at hints/LLM-help etc.
I often try and help my co-workers who are working on AI based side-projects for fun, so I have a strong insight into the abilities of non-coding smart people, and the reality is that yes, they get very turned off as soon as you get anything more complex than for-loops and if-statements. This isn't me being mean to co-workers, this is the reality of things I have experienced. It's not a brains thing, they can understand more complex stuff, but they don't want to, they find it annoying, boring, not worth the time/effort etc. So the idea of them learning dynamic programming, DFS/BFS, more complex data structures etc, is well, just not going to happen.
My point is that you are effectively saying, "oh just about anyone can do Advent of Code if they want to", is totally not grounded in any sort of reality.
The amount of injected implication you are imposing on everything I said... this is some seriously unhinged gaslighting in an effort to obfuscate the fact that you came out of the gate calling someone delusional over a comment you barely understood. We're wasting each other's time, so I'm out.
Got to agree. I'm even surprised at just how little progress many of my friends and ex-colleagues over the years make given that they hold down reasonable developer jobs.
My experience has been "little progress" is related to the fact that, while AoC is insanely fun, it always occurs during a time of year when I have the least free time.
Maybe when I was in college (if AoC had existed back then) I could have kept pace, but if part of your life is also running a household, then between wrapping up projects for work, finalizing various commitments I want wrapped up for the year, getting together with family and friends for various celebrations, and finally travel and/or preparing your own house for guests, I'm lucky if I have time to sit down with a cocktail and book the week before Christmas.
Seeing the format changed to 12 days makes me think this might be the first time in years I could seriously consider doing it (to completion).
Yep, the years I've made it the furthest have been around the 11-12 day mark. Then inevitably life and kids and work get in the way and that's it for another year. Changing to a 12 day format is unlikely to affect me at all :)
In order to complete AoC you need more than just the ability to write code and solve problems. You need to find abstract problem-solving motivating. A lot of people don't see the point in competing for social capital (internet points) or expending time and energy on problems that won't live on after they've completed them.
I have no evidence to say this, but I'd guess a lot more people give up on AoC because they don't want to put in the time needed than give up because they're not capable of progressing.
Yeah, time is almost certainly the thing that kills most people's progress but that's not the root cause.
I think it comes down to experience, exposure to problems, and the ability to recognise what the problem boils down to.
A colleague who is an all-round better coder than me might spend 4 hours bashing away trying to solve a problem that I might be able to look at and quickly recognise is isomorphic to a specific classic Comp Sci or Maths problem, and know exactly how best to attack it, saving me a huge amount of time.
Spoiler alert: Take the "Slam Shuffle" in 2019 Day 22 (https://adventofcode.com/2019/day/22). I was lucky that I quickly recognised that each of the actions could be represented as '( a*n + b ) mod num_cards' (with a and b specific to the action), and therefore any two such actions can be combined into the same form. The optimal solution follows relatively simply from this.
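The key observation is that an affine map x -> (a*x + b) mod m composed with another affine map is again affine, so a whole shuffle collapses into a single (a, b) pair. A sketch, with an illustrative deck size and action parameters rather than any real input:

```python
def compose(f, g, m):
    # f and g are (a, b) pairs representing x -> (a*x + b) mod m.
    # Applying f first and then g gives
    #   x -> g_a*(f_a*x + f_b) + g_b = (g_a*f_a)*x + (g_a*f_b + g_b),
    # which is affine again.
    fa, fb = f
    ga, gb = g
    return (ga * fa % m, (ga * fb + gb) % m)

m = 10007                 # illustrative deck size
cut3 = (1, -3)            # "cut 3": a card at position x moves to x - 3
deal7 = (7, 0)            # "deal with increment 7": x moves to 7*x
a, b = compose(cut3, deal7, m)

# Combined map agrees with applying the two actions one after the other
x = 2019
assert (a * x + b) % m == (7 * ((x - 3) % m)) % m
```

Repeating the combined shuffle an astronomical number of times then reduces to exponentiating this single pair, e.g. by repeated squaring.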
Doing all of the previous years means there's not much new ground although Eric always manages to find something each year.
There have also been some absolutely amazing inventions along the way. The IntCode Breakout game (2019) and the adventure game (can't remember the year) both stick in my mind as amazing constructions.
That's exactly why I don't do more than I do. I do some of the easy ones and it's fun. Then it gets a little harder and I start wondering how much time I want to put into this.
And then something shiny and fun comes along during a problem that I'm having trouble with, and I just never come back.
It's hard for most people to focus on a single thing for a long period of time. Motivation tends to come and go. I started the 2024 solutions in 2025, without the pressure and got to the end this way (not without help though TBH). Secondary motivation can help, like being bored or wanting to learn another programming language.
I've never tried AoC prior but with other complex challenges I've tried without much research, there comes a point where it just makes more sense to start doing something on the backlog at home or a more specific challenge related to what I want to improve on.
I find the problem I have is once I get going on a problem I can't shake it out of my head. I end up lying in bed for hours pleading with my brain to let it go if I've not found the time to finish it during the crumbs of discretionary time in the day!
This type of problem has very little resemblance to the problems I solve professionally - I’m usually one level of abstraction up. If I run into something that requires anything even as complicated as a DAG it’s a good day.
I think this has a lot more to do with time commitment. Once the problems take more than ~1 hour I tend to stop because I have stuff to do, like a job that already involves coding.
Because like 80% of AoC problems require a deep computer science background and deeply specific algorithms almost nobody uses in their day-to-day work.
mh, maybe it's cheating because it's still a STEM degree but I have a PhD in physics without any real computer science courses (obviously had computational physics courses etc. though) and I managed to 100% solve quite a few years without too much trouble. (though far away from the global leaderboard and with the last few days always taking several hours to solve)
I have a EE background not CS and haven't had too much trouble the last few years. I'm not aiming to be on the global leader board though. I think that with good problem solving skill, you should be able to push through the first 10 days most years. Some years were more front loaded though.
In general, the problems require less background knowledge than other coding puzzles. They're not always accessible without knowing a particular algorithm, but they're more 'can you think through a problem' than 'have you done this module'.
That's not the same as saying they're easy, but it's a different kind of barrier, and (in my opinion) more a test of 'can you think?' than 'did you do a CS degree?'
Agreed. I have a CS background and years of experience but I don't get very far with these. At some point it becomes a very large time commitment as well which I don't have
BTW the page mentions Alternate Styles, which is an obscure feature in firefox (View -> Page Styles). If you try it out, you will probably run into [0] and not be able to reset the style. The workaround is to open the page in a different tab, which will go back to the default style.
I'm actually pleasantly surprised to see a 2025 edition, last year being the 10th anniversary and the LLM situation with the leaderboard were solid indications that it would have been a great time to wrap it up and let somebody else carry the torch.
It's only going to be 12 problems rather than 24 this year and there isn't going to be a global leaderboard, but I'm still glad we get to take part in this fun Christmas season tradition, and I'm thankful for all those who put in their free time so that we can get to enjoy the problems. It's probably an unpopular stance, but I've never done Advent of Code for the competitive aspect, I've always just enjoyed the puzzles, so as far as I'm concerned nothing was really lost.
>It's probably an unpopular stance, but I've never done Advent of Code for the competitive aspect
Is this an unpopular stance? Out of a dozen people I know that did/do AoC every year, only one was trying to compete. Everyone else did it for fun, to learn new languages or concepts, to practice coding, etc.
Maybe it helps that, because of timezones, in Europe you need to be really dedicated to play for a win.
No, it's not. At most 200 people could end up on the global leaderboard, and there are tens of thousands of people who participate most days (though it drops off by the end, it's reliably over 100k for the first day). The vast majority of participants are not there for the leaderboard. If you care about competing, there are always private leaderboards.
A couple of the Slack/Discord groups I’m in do a local leaderboard with friends. It’s fun to do with a trusted group of people who are all in it for fun.
I'm also in a few local leaderboards, but I'm not "really" competing, it's more of a fun group thing.
Premises:
(i) I love Advent of Code and I'm grateful for its continuing existence in whatever form its creators feel like it's best for themselves and the community;
(ii) none of what follows is a request, let alone a demand, for anything to change;
(iii) what follows is just the opinion of some random guy on the Internet.
I have a lot of experience with competitions (although more on the math side than on the programming side), and I've been involved essentially since I was in high school, as a contestant, coach, problem writer, organizer, moving tables, etc.
In my opinion Advent of Code simply isn't a good competition:
- You need to be available for many days in a row for 15 minutes at a very specific time.
- The problems are too easy.
- There is no time/memory check: you can write ooga-booga code and still pass.
- Some problems require weird parsing.
- Some problems are pure implementation challenges.
- The AoC guy loves recursive descent parsers way too much.
- A lot of problems are underspecified (you can make assumptions not in the problem statement).
- Some problems require manual input inspection.
To reiterate once again: I am not saying that any of this needs to change. Many of the things that make Advent of Code a bad competition are what make it an excellent, fun, memorable "Christmas group thing". Coming back every day creates community and gives people time to discuss the problems. Problems being easy and not requiring specific time complexities to be accepted make the event accessible. Problems not being straight algorithmic challenges add welcome variety.
I like doing competitions but Advent of Code has always felt more like a cozy problem solving festival, I never cared too much for the competitive aspect, local or global.
There are definitely some problems that have an indirect time/memory check, in that if you don't have a right-enough algorithm, your program will never finish.
> - The AoC guy loves recursive descent parsers way too much.
The vast majority (though not all) of the inputs can be parsed with regex or no real parsing at all. I actually can't think of a day that needed anything like recursive descent parsing.
I too like the simple nature. If you care about highly performant code, you can always challenge yourself (I got into measuring timing in the second season I participated). Personally I prefer a world like this. Not everyone should have to compete on every detail (I know you stated that your points aren’t demands, I’m just pointing out my own worldview). For any given thing, there will naturally be people that are OK with “good enough”, and people who are interested to take it as far as they can. It’s nice that we can all still participate in this.
One could probably build a separate service that provides a leaderboard for solution runtimes.
I agree that it’s more of a cozy activity than a hardcore competition, that’s what I appreciate about it most.
> The AoC guy loves recursive descent parsers way too much.
LOL!!
I agreed with a lot of what you wrote, but also a lot of us strive for beautiful solutions regardless of time/memory bounds.
In fact, I’m (kind of) tired of leetcode flagging me for one ultra special worst-case scenario. I enjoy writing something that looks good and enjoying the success.
(Not that it’s bad to find out I missed an optimization in the implementation, but… it feels like a lot of details sometimes.)
Do you know of anything like AoC but that feels less contrived? I often spend the most time understanding the problem requirements because they are so arbitrary - like the worst kind of boardgame! Maybe I should go pick up some OSS tickets...
Being contrived, with puns or other weirdness is kinda on par for this kind of problems. Almost every programming competition I've ever been to have those kind of jokes.
But the Kattis website is great. The program runs on their server without you getting to know the input (you just get right/wrong back), so a bit different. But also then gives you memory and time constraints which you for the more difficult problems must find your way out of.
Take a look at Everybody Codes. It occurs in November instead of December, so this year is wrapping up. Like AoC, it is story based but maybe you'll find the problem extraction more to your liking.
I did a post [0] about this last year, and vanilla LLMs didn't do nearly as well as I'd expected on Advent of Code, though I'd be curious to try this again with Claude Code and Codex
> LLMs, and especially coding focused models, have come a very long way in the past year.
I see people assert this all over the place, but personally I have decreased my usage of LLMs in the last year. During this change I’ve also increasingly developed the reputation of “the guy who can get things shipped” in my company.
I still use LLMs, and likely always will, but I no longer let them do the bulk of the work and have benefited from it.
Last April I asked Claude Sonnet 3.7 to solve AoC 2024 day 3 in x86-64 assembler and it one-shotted solutions for part 1 and 2(!)
It's true this was 4 months after AoC 2024 was out, so it may have been trained on the answer, but I think that's way too soon.
Day 3 in 2024 isn't a Math Olympiad tier problem or anything but it seems novel enough, and my prior experience with LLMs were that they were absolutely atrocious at assembler.
I know some folks were disappointed with there being 12 puzzles instead of 24 this year, but I never have time to finish anyway, so it makes no difference to me lol
Exactly. I have always taken AoC as fun and a time to learn. But there is so much going on during December, and I don't enjoy doing more than one puzzle a day (it feels like hard work instead of fun). I usually spend weekends with kids and family and am not willing to solve more puzzles during weekdays, so I am falling behind all the time. My plan was always to finish last year's puzzles to enjoy the more interesting ones, but it always felt wrong. So I hope I will have time to finish everything this year :-) But I do envy people with enough free time to go full on. I would love to be one of them, but there is so much going on everywhere that I have to split my time. Sorry, programming world and especially computers :-D
Eliminating the leaderboard might help. By measuring it as a race, it becomes a race, and now the goal is the metric.
Maybe just have a cool advent calendar thingy like a digital tree that gains an ornament for each day you complete. Each ornament can be themed for each puzzle.
Of course I hope it goes without saying that the creator(s) can do it however they want and we’re nothing but richer for it existing.
That 'digital tree' idea is similar to how AoC has always worked. There's a theme-appropriate ASCII graphic on the problem page that gains color and effects as you complete problems. It's not always a tree, but it was in 2015 (the first year), and in several other years at least one tree is visible. https://adventofcode.com/2015
I've ignored the leaderboard for its entire existence, as the puzzles release at something like 4AM-5AM in my timezone; there's no point getting up 4 hours early, or staying awake 4 hours after bedtime, for some points on the internet.
Instead, getting gold stars for solving the puzzles is incentive enough, and can be done as a relaxing thing in the morning.
No matter what you do, as the puzzles get harder, you won't solve them in a day (or even a lifetime) if you don't come up with good algorithms/methods/heuristics.
I disagree. Having a leaderboard also leaks into the puzzle design. So the experience is different, even if you choose to ignore the leaderboard as a participant.
That’s also completely true and something I often say about gaming. You don’t like achievements? Just don’t do them. Your enjoyment shouldn’t be a function of how others interact with the product.
I never, in all my years of participating in AoC, took a look at the global leaderboard.
Even before LLMs I knew it was filled with results faster than you can blink.
So for some of us (from gut feeling, the vast majority) it was always just for fun. Usually I spent at least until March to finish as much as I did each year.
Oh, I’m quite sure it does. In fact, it’s a central thing in so much of psychology. The only difference is how you get there. Some people can just ignore it, and for others it takes more effort.
I stopped staying up until midnight for the new problem set to be released and instead would do them in the afternoon. Even though I could compare my time to the leaderboard, simply not having the possibility of being on the board removed most of the comparison anxiety.
While part of the fun is doing the daily tasks with your friends, you can still access the previous years and their challenges if you want to continue after advent!
Finally that time of year again! I've been looking forward to this for a long time. I usually drop off about halfway anyways (finished day 13, 14 and 13 the previous 3 years), as that's when December gets too busy for me to enjoy it properly, so I personally don't mind the reduction in problems at all, really. I'm just happy we still have great puzzles to look forward to.
In the IEEEXTREME university programming competition there are ~10k participating teams.
Our university has quite a strong Competitive Programming program and the best teams usually rank in the top 100. Last year a team ranked 30th, and it wasn't even our strongest team (which didn't participate).
This year none of our teams was able to get in the top 1000. I would estimate close to 99% of the teams in the Top 1000 were using LLMs.
Last year they didn't seem to help much, but this year they rendered the competition pointless.
I've read blogs/seen videos of people who got in the AOC global leaderboard last year without using LLMs, but I think this year it wouldn't be possible at all.
Man, those people using LLMs in competitive programming ... where's the fun in that? I don't get people for whom it's just about winning, I wish everyone would just have some basic form of dignity and respect.
I’m a very casual gamer but even I run into obvious cheaters in any popular online game all the time.
Cheating is rampant anywhere there’s an online competition. The cheaters don’t care about respecting others, they get a thrill out of getting a lot of points against other people who are trying to compete.
Even in the real world, my runner friends always have stories about people getting caught cutting trails and all of the lengths their running organizations have to go through now to catch cheaters because it’s so common.
The thing about cheaters in a large competition is that it doesn’t take many to crowd out the leaderboard, because the leaderboard is where they get selected out. If there are 1000 teams competing and only 1% cheat, that 1% could still fill the top 10.
Yeah. I was happy to see this called out in their /about
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
> I don't get people for whom it's just about winning, I wish everyone would just have some basic form of dignity and respect.
reminds me of something I read in "I’m a high schooler. AI is demolishing my education." [0,1] emphasis added:
> During my sophomore year, I participated in my school’s debate team. I was excited to have a space outside the classroom where creativity, critical thinking, and intellectual rigor were valued and sharpened. I love the rush of building arguments from scratch. ChatGPT was released back in 2022, when I was a freshman, but the debate team weathered that first year without being overly influenced by the technology—at least as far as I could tell. But soon, AI took hold there as well. Many students avoided the technology and still stand against it, but it was impossible to ignore what we saw at competitions: chatbots being used for research and to construct arguments between rounds.
high school debate used to be an extracurricular thing students could do for fun. now they're using chatbots in order to generate arguments that the students can just regurgitate.
the end state of this seems like a variation on Dead Internet Theory - Team A is arguing the "pro" side of some issue, Team B is arguing the "con" side, but it's just an LLM generating talking points for both sides and the humans acting as mouthpieces. it still looks like a "debate" to an outside observer, but all the critical thinking has been stripped away.
> high school debate used to be an extracurricular thing students could do for fun.
High school debate has been ruthless for a long time, even before AI. There has been a rise in the use of techniques designed to abuse the rules and derail arguments for several years. In some regions, debates have become more about teams leveraging the rules and technicalities against their opponents than organically trying to debate a subject.
It sucks that the fun is being sucked out of debate, but I guess a silver lining is that the abuse of these tactics helps everyone understand that winning debates isn't about being correct, it's about being a good debater. And a similar principle can be applied to the application of law and public policy as well.
Why is that strange? Competitive programming, as the name suggests, is about competing. If the rules allow it, not using an LLM is more like running the Tour de France instead of riding it.
If the rules don't allow that and yet people do then well, you need online qualifiers and then onsite finals to pick the real winners. Which was already necessary, because there are many other ways to cheat (like having more people than allowed in the team).
I'm a bit surprised you can honestly believe that a competition of humans isn't somehow different if allowed to use solution-generators. Like using a calculator in an arithmetic competition. Really?
It's not much different than outlawing performance enhancing drugs. Or aimbots in competitive gaming. The point is to see what the limits of human performance are.
If an alien race came along and said "you will all die unless you beat us in the IEEE programming competition", I would be all for LLM use. Like if they challenged us to Go, I think we'd probably / certainly use AI. Or chess - yeah, we'd be dumb to not use game solvers for this.
But that's not in the spirit of the competition if it's University of Michigan's use of Claude vs MIT's use of Claude vs ....
Imagine if the word "competition" meant "anything goes" automatically.
It's a different kind of fun. Just like doing math problems on paper can be fun, or writing code to do the math can be fun, or getting AI to write the code to do the math can be fun.
They're just different types of fun. The problem is if one type of fun is ruined by another.
It can be a matter of values from your upbringing or immediate environment. There are plenty of places where they value the results, not the journey, and they think that people who avoid cheating are chumps. Think about that: you are in a situation where you just want to do things for fun but everyone around you will disrespect you for not taking the easy way out.
Weirdly, I feel a lot more accepting of LLMs in this type of environment than in making actual products. The point is doing things fast and correct enough, so in some ways an LLM is just one more tool.
With products I want actual correctness. And not something thrown away.
We’re starting to get to a point where the ai can generate better code than your average developer, though. Maybe not a great developer yet, but a lot of products are written by average developers.
Given what I understand about the nature of competitive programming competitions, using an LLM seems kind of like using a calculator in an arithmetic competition (if such a thing existed) or a dictionary in a spelling bee.
These contests are about memorizing common patterns and banging out code quickly. Outsourcing that to an LLM defeats the point. You can say it's a stupid contest format, and that's fine.
(I did a couple of these in college, though we didn't practice outside of competition so we weren't especially good at it.)
When I did competitions like these at uni (~10-15 years ago), we all used some thin-clients in the computer lab where the only webpages one could access were those allowed by the competition (mainly the submission portal). And then some admin/organizers would feed us and make sure people didn't cheat. Maybe we need to get back to that setup, heh.
Serious in-person competitions like ICPC are still effective against cheating. The first phase happens in a limited number of venues and the computers run a custom OS without internet access. There are many people watching, so competitors don't use their phones, etc.
The Regional Finals and World Finals are in a single venue with a very controlled environment. Just like the IOI and other major competitions.
National High School Olympiads have been dealing with bigger issues because there are too many participants in the first few phases, and usually the schools themselves host the exams. There has been rampant cheating. In my country I believe the organization has resorted to manually reviewing all submissions, but I can only see this getting increasingly less effective.
This year the Canadian Computing Competition didn't officially release the final results, which for me is the best solution:
> Normally, official results from the CCC would be released shortly after the contest. For this year’s contest, however, we will not be releasing official results. The reason for this is the significant number of students who violated the CCC Rules. In particular, it is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help. As such, the reliability of “ranking” students would neither be equitable, fair, or accurate.
Online competitions are just hopeless. AtCoder and Codeforces have rules against AI but no way to enforce them. A minimally competent cheater is impossible to detect. Meta Hacker Cup has a long history and is backed by a large company, but had its leaderboard crowded by cheaters this year.
In 1997, Deep Blue beat Garry Kasparov, the world chess champion. Today, chess grandmasters stand no chance against Stockfish, a chess engine that can run on a cheap phone. Yet chess remains super popular and competitive today, and while there are occasional scandals, cheating seems to be mostly prevented.
I don’t see why competitive debate or programming would be different. (But I understand why a fair global leaderboard for AOC is no longer feasible).
Oof. I had a great time cracking the top 100 of Advent of Code back in 2020. Bittersweet to know that I got in while it was still a fun challenge for humans.
For those who think this is a typo, uiua [1] (pronounced "wee-wuh") is a stack-based array programming language.
I solved a few problems with it last year, and it is amazing how compact the solutions are. It also messes with your head, and the community surrounding it is interesting. Highly recommended.
I love Advent of Code! I have used previous years' problems for my guest lectures to Computer Science students and they have all enjoyed those more than a traditional algorithmic lecture.
Excited to see AOC back and I think it was a solid idea to get rid of the global leaderboard.
We (Depot) are sponsoring this year and have a private leaderboard [0]. We’re donating $1k/each for the top five finishers to a charity of their choice.
Isn't a publicly advertised private leaderboard - especially with cash prizes - against the new guidance? Certainly the spirit of the guidance.
>What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I've made it so you can share a read-only view of your private leaderboard. *Please don't use this feature or data to create a "new" global leaderboard.*)
i don't think it should be a charity of their choice. i think it should have to be one of the top 5 most reputable charities in the world, like Doctors Without Borders or the Salvation Army.
You could, but you shouldn't have to. If you want to sign up for XYZ, you need to sign up for BigCorp, you need to add your phone number to verify your account, etc.
The "etc" is pretty important here. You can log in using Reddit, and you can create a random throwaway Reddit account without filling in any other details (no email address or phone number required).
I believe they no longer allow new accounts without an email address.
It used to be that reddit had a user creation screen that looked like you needed to input an email address, but you could actually just click "Next" to skip it.
The last time I had cause to make a reddit account, they no longer allowed this.
Having done my own auth I get why they do it this way. LLMs are already a massive problem with AoC, I imagine an anonymous endpoint to validate solutions would be even worse.
Having done auth myself, I can also understand why auth is being externalised like this. The site was flooded with bots and scrapers long before LLMs gained relevance and adding all the CAPTCHAs and responding to the "why are you blocking my shady CGNAT ISP when I'm one of the good ones" complaints is just not worth it. Let some company with the right expertise deal with all of that bullshit.
I wish the site had more login options, though. It's a tough nut to crack; pick a small, independent OAuth login service not under the control of a big tech company and you're basically DDoSing their account creation page for all of December. Pick a big tech company and you're probably not gaining any new users. You can't do decentralized auth because then you're just doing authentication DDoS with extra steps.
If I didn't have a github account, I'd probably go with a throwaway reddit account to take part. Reddit doesn't really do the same type of tracking Twitter tries to do and it's probably the least privacy invasive of the bunch.
"
Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).
"
I've never done this before but honestly I am just turned off by the website and font being hard to read. I get that's the geek aesthetic or whatever, but it's a huge turn off for me.
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
And yet I expect the whole leaderboard to be full of AI submissions...
I am so glad there is no leaderboard this year. Making it a competition really is against the spirit of advent calendars in general. It’s also not a fair competition by default simply due to the issue of time zones and people’s life schedules not revolving around it.
There are plenty of programming competitions and hackathons out there. Let this one simply be a celebration of learning and the enjoyment of problem solving.
I agree with the first point, but the second point feels irrelevant. Yeah, people's life schedules don't revolve around it, but that doesn't mean they shouldn't make it a competition. Most people who play on chess.com don't have lives that revolve around it, but that doesn't mean that chess.com should abolish Elo rankings.
Chess doesn't rank people based on how quickly they complete a puzzle after midnight EST (UTC-5). For people in large parts of Asia, midnight EST translates to late morning / early afternoon. This means someone in Asia can complete each AoC puzzle during daylight hours whereas someone in eastern North America will have to complete the puzzle in the middle of the night.
> The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard.
Depends how you look at it. Some of my colleagues rave about Claude Code, so I was thinking about trying it out on these puzzles. In that sense it is "going to the gym", just for a different thing. Since I do AoC every year, I feel like it'll give me a good feel for Claude Code compared to my baseline. And it's not just "prompting", but figuring out a workflow with tests and brainstorming and iteration and all that. I guess if the LLM can just one-shot every puzzle that's less interesting, but I suppose it would be good to know it can do that...
It 100% can do that. LLMs are trained on an unfathomable amount of data. Every AoC puzzle can be solved by identifying the algorithm behind it. It's Leetcode in a friendlier, more festive spirit.
I mean they're great programming tests, for both people and AI I'd argue - like, it'd be impressive if an AI can come up with a solution in short order, especially with minimal help / prompting / steering. But it wouldn't be a personal achievement, and if it was a competition I'd label it as cheating.
Well, my point, if it wasn’t clear, was that I simply don’t find those problems fun.
I enjoy programming a lot, but most of it comes from things like designing APIs that work well and that people enjoy using, or finding things that allow me to delete a ton of legacy code.
I did try to do the Advent of Code many times. Usually I get bored halfway through reading the first problem. And when I finally get through, I realize that these usually involve tradeoffs that are annoying to make in terms of memory/CPU usage, and also several edge cases to deal with.
Looking forward to it but also sad that it is "only" 12 puzzles, but I completely respect Eric's decision to scale it back.
I've got 500 stars (i.e. I've completed every day of all 10 previous years) but not always on the day the puzzles were available, probably 430/500 on the day. (I should say I find the vast majority of AoC relatively easy as I've got a strong grounding in both Maths and Comp Sci.)
First of all I only found out about AoC in 2017 and so I did 2015 and 2016 retrospectively.
Secondly I can keep up with the time commitments required up until about the 22nd-24th (which is when I usually stop working for Christmas). From then time with my wife/kids takes precedence. I'll usually wrap up the last bits sometime from the 27th onwards.
I've never concerned myself with the pointy end of the leaderboards due to timezones as the new puzzles appear at 5am local time for me and I've no desire to be awake at that time if I can avoid it, certainly not for 25 days straight. I expect that's true of a large percentage of people participating in AoC too.
My simple aim every day is that my rank for solving part 2 of a day is considerably lower than my rank for solving part 1.
(To be clear, even if I was up and firing at 5am my time every day I doubt I could consistently get a top 100 rank. I've got ten or so 300-1000 ranks by starting ~2 hours later but that's about it. Props to the people who can consistently appear in the top 100. I also start most days from scratch whilst many people competing for the top 100 have lots of pre-written code to parse things or perform the common algorithms.)
I also use the puzzles to keep me on my toes in terms of programming and I've completed every day in one of Perl, C or Go and I've gone back and produced solutions in all 3 of those for most days. Plus some random days can be done easily on the command-line piping things through awk, sed, sort, grep, and the like.
The point of AoC is that everyone is free to take whatever they want from it.
Some use it to learn a new programming language. Some use it to learn their first language and only get a few days into it. Some use it to make videos to help others on how to program in a specific language. Some use it to learn how/when to use structures like arrays, hashes/maps, red-black trees, etc., and then how/when to use classic Comp Sci algorithms like A* or SAT solvers, Dijkstra's, etc., all the way to some random esoteric things like Andrew's monotone chain convex hull algorithm for calculating the perimeter of a convex hull. There are also the mathsy type problems often involving the Chinese Remainder Theorem and/or some variation of finite fields.
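For anyone who hasn't hit one of those days yet, here's a minimal sketch of the kind of Dijkstra's you end up writing for AoC grid-walking puzzles. The grid, coordinates, and cost convention here are made up for illustration, not taken from any specific problem:

```python
import heapq

def dijkstra(grid, start, goal):
    """Cheapest path through a grid of step costs, moving 4-directionally.

    Entering a cell costs grid[r][c]; the start cell itself costs nothing,
    which is the usual convention in this style of puzzle.
    """
    rows, cols = len(grid), len(grid[0])
    best = {start: 0}
    heap = [(0, start)]                      # (cost so far, (row, col))
    while heap:
        cost, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return cost
        if cost > best.get((r, c), float("inf")):
            continue                         # stale queue entry, skip it
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nxt = cost + grid[nr][nc]
                if nxt < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nxt
                    heapq.heappush(heap, (nxt, (nr, nc)))
    return None                              # goal unreachable

# e.g. snaking down the cheap cells of a 3x3 grid:
grid = [[1, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]
print(dijkstra(grid, (0, 0), (2, 2)))  # 4
```

The lazy-deletion trick (pushing duplicates and skipping stale heap entries) avoids needing a decrease-key operation, which Python's heapq doesn't provide.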
My main goal is to come up with code that is easy to follow and performs well as a general solution rather than overly specific to my individual input. I've also solved most years with a sub 1 second total runtime (per year, so each day averages less than 40msec runtime).
Anyway, roll on tomorrow. I'll get to the day 1 problem once I've got my kid up and out the door to go to school as that's my immediate priority.
Well some people like to code and logic puzzles. And especially as it is in its raw form where you can forget all the noise you encounter while coding professionally with many hoops and responsibilities.
I agree. Didn't these puzzles ruin interviewing for many years now? AI came along and they're still doing it. Some things will needlessly drag on before they die, I guess.
By the same token, AI came along and we all still have intelligence, needless, eh? I mean people reading and writing stuff has nothing to do with AI. I don't see how some people see everything as a zero-sum game.
All AI is doing is solving these puzzles, which proves they don't need any form of intelligence. You're wrong for associating AI with human intelligence. It will never happen. It might be faked once, like the moon landing, but that's it.
How do they ruin interviewing? The whole point of these puzzles is that they’re meant to be fun to solve, not a means to an end, but enjoyable for what they are.
I'm not sure I understand this. Most puzzles are number-crunching but very little to do with graphics (maybe one or two), so no usually OpenGL isn't used AFAIK.
Of course, folks may use it to visualise the puzzles but not to solve them.
Not a fan of these "Coding for fun" things. Code for a job to earn money, yes, or a side project where there is an end goal, yes. This seems like a waste of time for a working developer.
Maybe it's useful for people trying to learn but also becoming pointless now as all Junior dev roles can be done with AI.
I mean do plumbers have an advent of plumbing where they try and unblock shit filled toilets for fun?
> I mean do plumbers have an advent of plumbing where they try and unblock shit filled toilets for fun?
Yes, plumbers and other types of craftspeople and technicians do also have these little fun competitions. Why shouldn't they?
I think the reason some of us programmers do these things, is likely because many (myself included) entered the field as enthusiasts and hobbyists in the first place.
>I mean do plumbers have an advent of plumbing where they try and unblock shit filled toilets for fun?
No, but you’ll see it for writers, musicians, and the like.
Engineering (software or not) can be an intellectually rewarding experience for many. I don’t know why some people find this something to scoff at, would you rather have no pleasure derived from your work?
I can't find it, but this question got asked somewhere (Reddit maybe) about 8-10 years ago, and a plumber took the time to respond that many plumbers are actually very passionate about what they do. They don't specifically unclog toilets for fun, but there are plumbers that spend a lot of their free time on plumbing forums, and even some who have projects experimenting with different ways to install certain things.
> mean do plumbers have an advent of plumbing where they try and unblock shit filled toilets for fun?
You've obviously never watched "Drain Cleaning Australia" on YouTube!
Yes, some people find this stuff fun, because they find coding fun, and don't typically get to do the fun kind of coding on company time. Also, there'd be a hell of a lot less open source software in the world if people didn't code for fun.
Let people enjoy things. Just because you don't like that part of your job as much as them doesn't mean they're wrong.
I support the no global leaderboard. I was in 7th place last year but quickly got bored maintaining the aggressive AI pipeline required to achieve that. If I wanted to maintain pipelines I'd just do work, and there will never be a good way to prevent people from using AI like this. Advent of Code should be fun, thank you for continuing to do it. I'm looking forward to casually playing this year!
It was pretty boring trying to place against aggressive AI pipelines like yours, despite the explicit requests not to use them [1]. I’m sorry to hear it became boring for you too.
I mean, everyone else was using them too, how can you not? That was the name of the game if you wanted to be competitive in 2024. Not using them would be like trying to do competitive pro cycling without steroids, basically impossible.
Saying everyone else is cheating is not a valid excuse for cheating. It's why Armstrong became a pariah, even though he and everyone else was EPO doping.
- install like this
- initialize a directory with this command
- here are the VSCode extensions (or whatever IDE) that are the bare minimum for the language
- here's the command for running tests
Frankly I'm better off with it being this way instead of the sweaty cupstacking LLM% speedrun it became as it gained popularity.
Thing is, it may have some interesting challenges. I too wouldn't want to solve some insane string-parsing problem with no interesting idea behind it. For today's problem, I did the naive version and it worked; the modular version created issues with some corner cases.
There should be more events like AoC. Self-contained problems are very educational.
It's totally fine not to care, but I can't quite get why you would then want to be an active member in a community of people who care about this stuff for no other reason than they fundamentally find it interesting.
Huge thanks to those involved!
https://perladvent.org/archives.html
Advent of Code is awesome also of course -- and was certainly inspired by it.
One thing I do think would be interesting is to see solution rate per hour block. It'd give an indication of how popular advent of code is across the world.
Got nowhere near the leaderboard times so gave up after four days!
Nearly scraped a decent ranking once only, top 300 or so.
Sadly it's 5am for me as I'm in the UK.
In 8 years I can say I've never once tried to be awake at 5am in order to do the puzzle. The one time I happened to still be awake at 5am during AoC I was quite spectacularly drunk so looking at AoC would have been utterly pointless.
Anything before 6.45am and I'm hopefully asleep. 7am isn't great as 7am-8am I'm usually trying to get my kid up, fed and out the door to go to school. Weekends are for not waking up at 7am if I don't need to.
9am or later and it messes with the working day too much.
Looking back at my submission times from 2017 onwards (I only found AoC in 2017 so did 2015/2016 retrospectively) I've only got two submissions under 02:xx:xx (e.g. 7am for me). Both were around 6.42am so I guess I was up a bit earlier that day (6.30am) and was waiting for my kid to wake up and managed to get part 1 done quickly.
My usual plan was to get my kid out of the door sometime between 7.30am and 8am and then work on AoC until I started work around 9am. If I hadn't finished it then I'd get a bit more time during my lunch hour and, if still not finished, find some time in the evening after work and family time.
Out of the 400 submissions from 2017-2024 inclusive I've only got 20 that are marked as ">24h" and many of these were days where I was out for the entire day with my wife/kid so I didn't get to even look at the problem until the next day. Only 4 of them are where I submitted part 1 within 24h but part 2 slipped beyond 24h.
Enormous understatement: if I were unencumbered by wife/kids then my life would be quite a bit different.
Python is extremely suitable for these kind of problems. C++ is also often used, especially by competitive programmers.
Which "non-mainstream" or even obscure languages are also well suited for AoC? Please list your weapon of choice and a short statement why it's well suited (not why you like it, why it's good for AoC).
- Array languages such as K or Uiua. Why they're good for AoC: Great for showing off, no-one else can read your solution (including yourself a few days later), good for earlier days that might not feel as challenging
- Raw-dogging it by creating a Game Boy ROM in ASM (for the Game Boy's 'Z80-ish' Sharp LR35902). Why it's good for AoC: All of the above, you've got too much free time on your hands
Just kidding, I use Clojure or Python, and you can pry itertools from my cold, dead hands.
I'm plodding my way through the 2015 challenge here: https://git.thomasballantine.com/thomasballantine/Advent_of_... , it's really sharpened me up on a number of points.
It has many of the required structures (hashes/maps, ad hoc structs, etc) and is great for knocking up a rough and ready prototype of something. It's also quick to write (but often unforgiving).
I can also produce a solution for pretty much every problem in AoC without needing to download a single separate Perl module.
On the negative side there are copious footguns available in Perl.
(Note that if I knew Python as well as I knew Perl I'd almost certainly use Python as a starting point.)
I also try and produce a Go and a C solution for each day too:
* The Go solution is generally a rewrite of the initial Perl solution but doing things "properly" and correcting a lot of the assumptions and hacks that I made in the Perl code. Plus some of those newfangled "test" things.
* The C solution is a useful reminder of how much "fun" things can be in a language that lacks built-in structures like hashes/maps, etc.
Example: find the first state in which this "game of life" variant has more than 1000 cells in the "alive" state.
Solution: generate an infinite list of all states and iterate over it until you find one with >= 1000 alive cells.
https://github.com/quchen/articles/blob/master/loeb-moeb.md
But yeah, if you're looking to solve the puzzle in under a microsecond you probably want something like Rust or C and keep all the data in L1 cache like some people do. If solving it in under a millisecond is still good enough, Haskell is fine.
1. https://en.wikipedia.org/wiki/Hashlife
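The lazy "infinite list of states" idea is easy to mimic eagerly in other languages too; a minimal Python sketch (the doubling step function is a toy stand-in for a real game-of-life update):

```python
def iterate_until(step, state, done):
    """Apply step repeatedly, returning the first state where done(state) holds."""
    while not done(state):
        state = step(state)
    return state

# toy stand-in for a game-of-life step: the "alive" count doubles each generation
first = iterate_until(lambda n: n * 2, 1, lambda n: n >= 1000)
print(first)  # -> 1024
```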
This year I've been working on a bytecode compiler for it, which has been a nice challenge. :)
When I want to get on the leaderboard, though, I use Go. I definitely felt a bit handicapped by the extra typing and lack of 'import solution' (compared to Python), but with an ever-growing 'utils' package and Go's fast compile times, you can still be competitive. I am very proud of my 1st place finish on Day 19 2022, and I credit it to Go's execution speed, which made my brute-force-with-heuristics approach just fast enough to be viable.
That was impressive! Do you have a public repo with your language, anywhere?
Is there a way to drop into a repl like with python and pdb.set_trace()? I couldn't find one last time I played around with Rust.
* The expressive syntax helps keep the solutions short.
* It has extensive standard library with tons of handy methods for AoC style problems: Enumerable#each_cons, Enumerable#each_slice, Array#transpose, Array#permutation, ...
* The bundled "prime" gem (for generating primes, checking primality, and prime factorization) comes in handy for at least a few problems each year.
* The tools for parsing inputs and string manipulation are a bit more ergonomic than what you get even in Python: first class regular expression syntax, String#scan, String#[], Regexp::union, ...
* You can easily build your solution step-by-step by chaining method calls. I would typically start with `p File.readlines("input.txt")` and keep executing the script after adding each new method call so I can inspect the intermediate results.
Scheme is fairly well suited to both general programming, and abstract math, which tends to be a good fit for AoC.
I wrote a bit more about it here https://laszlo.nu/blog/advent-of-code-2024.html
AoC is a great opportunity for exploring languages!
I write most as pure functional/immutable code unless a problem calls for speed. And with extension functions I've made over the years and a small library (like 2d vectors or grid utils) it's quite nice to work with. Like, if I have a 2D list (List<List<E>>), and my 2d vec, like a = IntVec(5,3), I can do myList[a] and get the element due to an operator overload extension on list-lists.
OCaml is strong too. Stellar type system, fast execution and sane semantics unlike like 99% of all programming languages. If you want to create elegant solutions to problems, it's a good language.
For both, I recommend coming prepared. Set up a scaffold and create a toolbox which matches the typical problems you see in AoC. There's bound to be a 2D grid among the problems, and you need an implementation. If it can handle out-of-bounds access gracefully, things are often much easier, and so on. You don't want to bang your head against the wall solving parsing problems instead of the actual problem. Having a combinator-parser library already in the project will help, for instance.
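As one illustration of that grid advice, in Python a dict keyed by coordinates gives graceful out-of-bounds access for free via `dict.get` (a sketch, not a full grid library):

```python
def parse_grid(text):
    """Map (x, y) -> character for a rectangular puzzle input."""
    return {(x, y): ch
            for y, line in enumerate(text.splitlines())
            for x, ch in enumerate(line)}

grid = parse_grid("ab\ncd")
print(grid[(1, 0)])           # -> b
print(grid.get((5, 5), "."))  # out of bounds: "." instead of an exception
```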
Any recommendations for Go? Traditionally I've gone for Python or Clojure with an 'only builtins or things I add myself' approach (e.g. no NetworkX), but I've been keen to try doing a year in Go however was a bit put off by the verbosity of the parsing and not wanting to get caught spending more time futzing with input lines and err.
Naturally later problems get more puzzle-heavy so the ratio of input-handling to puzzle-solving code changes, but it seemed a bit off putting for early days, and while I like a builtins-only approach it seems like the input handling would really benefit from a 'parse don't validate' type approach (goparsec?).
Once you have something which can "load \n separated numbers into array/slice" you are mostly set for the first few days. Go has verbosity. You can't really get around that.
The key thing in typed languages is to cook up the right data structures. In something without a type system, you can just wing things and work with a mess of dictionaries and lists. But trying to do the same in a typed language is just going to be uphill as you don't have the tools to manipulate the mess.
Historically, the problems have had some inter-linkage. If you built something on day 3, it's often used on days 4-6 as well. Hence, you can win by spending a bit more time on elegance on day 3, and that makes the work on days 4-6 easier.
Mind you, if you just want to LLM your way through, then this doesn't matter since generating the same piece of code every day is easier. But obviously, this won't scale.
Yeah, this is essentially it for me. While it might not be a 'type-safe and correct regarding error handling' approach with Python, part of the interest of the AoC puzzles is the ability to approach them as 'almost pure' programs - no files except for puzzle input and output, no awkward areas like date time handling (usually), absolutely zero frameworks required.
> you can just wing things and work with a mess of dictionaries and lists.
Checks previous years type-hinted solutions with map[tuple[int, int], list[int]]
Yeah...
> but all of the AoC problems aren't parsing problems.
I'd say for the first ten years at least the first ten-ish days are 90% parsing and 10% solving ;) But yes, I agree, and maybe I'm worrying over a few extra visible err's in the code that I shouldn't be.
> if you just want to LLM your way through
Totally fair point if I constrain LLM usage to input handling and the things that I already know that I know how to do but don't want to type, although I've always quite liked being able to treat each day as an independent problem with no bootstrapping of any code, no 'custom AoC library', and just the minimal program required to solve the problem.
How do you parse the puzzle input into a data structure of your choice?
https://github.com/taolson/Admiran https://github.com/taolson/advent-of-code
A lot of the problems involve manipulating sets and maps, which Clojure makes really straightforward.
Things like `partition`, `cycle` or `repeat` have come in so handy when working with segments of lists or the Conway's Game-of-Life type puzzles.
Downsides: The debugging situation is pretty bad (hope you like printf debugging), smaller community means smaller package ecosystem and fewer reference solutions to look up if you're stuck or looking for interesting alternative ideas after solving a problem on your own, but there's still quality stuff out there.
Though personally I'm thinking of trying Go this year, just for fun and learning something new.
Edit: also a static type system can save you from a few stupid bugs that you then spend 15 minutes tracking down because you added a "15" to your list without converting it to an int first or something like that.
So.. a language that you're interested in or like?
Reminds me of "gamers will optimize the fun out of a game"
I'm pretty clojure-curious so might mess around with doing it in that
Common Lisp. Using 'iterate' package almost feels like cheating.
I have done half a year in (noob level) Haskell long ago. But can't find the code any more.
Most mind blowing thing for me was looking at someone's solutions in APL!
I tried AoC out one year with the Wolfram language, which sounds insane now, but back then it was just a "seemed like the thing to do at the time" and I'm glad I did it.
Neon Language: https://neon-lang.dev/ Some previous AoC solutions: https://github.com/ghewgill/adventofcode
Historically good candidates are:
- Rust (despite its popularity, I know a lot of devs who haven't had time to play with it).
- Haskell (though today I'd try Lean4)
- Racket/Common Lisp/Other scheme lisp you haven't tried
- Erlang/Elixir (probably my choice this year)
- Prolog
Especially for those langs that people typically dabble in but never get a chance to write non-trivial software in (Haskell, Prolog, Racket), AoC is fantastic for really getting a feel for the language.
It's a great language. Its dependent-types / theorem-proving-oriented type system, combined with AI assistants, makes it the language of the future IMO.
https://github.com/betaveros/noulith
(post title: "Designing a Programming Language to Speedrun Advent of Code", but starts off "The title is clickbait. I did not design and implement a programming language for the sole or even primary purpose of leaderboarding on Advent of Code. It just turned out that the programming language I was working on fit the task remarkably well.")
> I solve and write a lot of puzzlehunts, and I wanted a better programming language to use to search word lists for words satisfying unusual constraints, such as, “Find all ten-letter words that contain each of the letters A, B, and C exactly once and that have the ninth letter K.”1 I have a folder of ten-line scripts of this kind, mostly Python, and I thought there was surely a better way to do this.
I'll choose to remember it was designed for AoC :-D
The spatial and functional problem solving makes it easy to reason about how a single cell is calculated. Then simply apply that logic to all cells to come up with the solution.
I think it lends itself very well to the problem set, the language is very expressive, the standard library is extensive, you can solve most things functionally with no state at all. Yet, you can use global state for things like memoization without having to rewrite all your functions so that's nice too.
Most problems are 80%-90% massaging the input with a little data modeling which you might have to rethink for the second part and algorithms used to play a significant role only in the last few days.
That heavily favours languages which make manipulating string effortless and have very permissive data structures like Python dict or JS objects.
I know people who make some arbitrary extra restriction, like “no library at all” which can help to learn the basics of a language.
The downside I see is that suddenly you are solving algorithmic problems, which are sometimes not trivial, while at the same time struggling with a new language.
Sure, Haskell comes packaged with parser combinators, but a new user having to juggle immutability, IO and monads all at once will almost certainly find it impossible.
Also, dune makes pulling in build dependencies easy these days, and there's no shame in pulling in other support libraries. It's years since I've written anything in Haskell, but I'd guess the same goes for cabal, though OCaml is still more approachable than Haskell for most people, I'd say. A newbie is always going to be at some kind of disadvantage regardless.
I think that's the best example of anemic built-in utilities. Tried AoC two years ago with OCaml; string splitting, character matching and string slicing were very cumbersome coming from Haskell. Whereas the convenient mutation and for-loops in OCaml provide an overall better experience.
Given you're already well-versed in the ecosystem you'll probably have no issues working with dune, but for someone picking up OCaml/Haskell and having to also delve in the package management part of the system is not a productive or pleasant experience.
Bonus points for those trying out Haskell successfully, then in later challenges having to completely rewrite their solution due to space leaks, whereas Go, Rust (and probably OCaml) solutions just brute-force the work.
I'm probably just that bad at programming.
Having smaller problems makes it possible to find multiple solutions as well.
I saw someone on Twitter use Excel.
Or MUMPS.
> If you're posting a code repository somewhere, please don't include parts of Advent of Code like the puzzle text or your inputs.
The text I get, but the inputs? Well, I will comply, since I am getting a very nice thing for (almost) free, so it is polite to respect the wishes here, but since I commit the inputs (you know, since I want to be able to run tests) into the repository, it is a bit of a shame the repo must be private.
But there are enough possible inputs that most people shouldn't come across anyone else with exactly the same input.
Part of the reason why AoC is so time consuming for Eric is that not only does he design the puzzles, he also generates the inputs programmatically, which he then feeds through his own solver(s) to ensure correctness. There is a team of beta testers that work for months ahead of the contest to ensure things go smoothly.
(The adventofcode subreddit has a lot more info on this.)
He's also described, over the years, his process of making the inputs. Related to your comment, he tries to make sure that there are no features of some inputs that make the problem especially hard or easy compared to the other inputs. Look at some of the math ones, a few tricks work most of the time (but not every time). Let's say after some processing you get three numbers and the solution is their LCM, that will probably be true of every input, not just coincidental, even if it's not an inherent property of the problem itself.
There has been the odd puzzle where some inputs have allowed simpler solutions than others, but those have stood out.
If we just look at the last three puzzles: day 23 last year, for example, admitted the greedy solution, but only for some inputs. Greedy clearly shouldn't work (shuffling the vertices in a file that admits it causes it to fail).
I have a solve group that calls it "Advent of Input Roulette" because (back when there was a global leaderboard) you can definitely get a better expected score by just assuming your input is weak in structural ways.
The example input(s) is part of the "text", and so committing it is also not allowed. I guess I could craft my own example inputs and commit those, but that exceeds the level of effort I am willing to expend on publishing a repository no one will likely ever read. :)
The part I enjoy the most is after figuring out a solution for myself is seeing what others did on Reddit or among a small group of friends who also does it. We often have slightly different solutions or realize one of our solutions worked "by accident" ignoring some side case that didn't appear in our particular input. That's really the fun of it imho.
I'm also surprised there are a few Dutch language sponsors. Do these show up for everyone or is there some kind of region filtering applied to the sponsors shown?
I plan on doing this year in C++ because I have never worked with it and AoC is always a good excuse to learn a new language. My college exams just got over, so I have a ton of free time.
Previous attempts:
- in Lua https://github.com/Aadv1k/AdventOfLua2021
- in C https://github.com/Aadv1k/AdventOfC2022
- in Go https://github.com/Aadv1k/AdventOfGo2023
Really hope I can get all the stars this time... Cheers, and Merry Christmas!
I've never stressed out about the leaderboard. I've always taken it as an opportunity to learn a new language, or brush up on my skills.
In my day-to-day job, I rarely need to bootstrap a project from scratch, implement a depth first search of a graph, or experiment with new language features.
It's for reasons like these that I look forward to this every year. For me it's a great chance to sharpen the tools in my toolbox.
Sometimes it's nice to have a break by writing a load of error handling, system architecture documentation, test cases, etc.
> For me it's a great chance to sharpen the tools in my toolbox.
That's a good way of putting it.
My way of taking it a step further and honing my AoC solutions is to make them more efficient whilst ensuring they are still easy to follow, and to make sure they work on as many different inputs as possible (to ensure I'm not solving a specific instance based on my personal input). I keep improving and chipping away at the previous years problems in the 11 months between Decembers.
I am still updating it for this year, so please feel free to submit a PR or share some here.
Every time I see this I wonder how many amateur/hobbyist programmers it sets up for disappointment. Unless your definition of “pretty far” is “a small number of the part ones”, it’s simply not true.
In the programming world I feel like there's a lot of info "for beginners" and a lot of folks / activities for experts.
But that middle ground world is strange... a lot of it is a combo of filling in "basics" and also touching more advanced topics at the same time and the amount of content and just activities filling that in seems very low. I get it though, the middle ground skilled audience is a great mix of what they do or do not know / can or can not solve.
I don't know if that made any sense.
Advanced level stuff usually gets recommended directly by experts or will be interesting to beginners too as a way of seeing the high level.
Mid level stuff doesn't have that wide appeal, the freshness in the mind of the experts, or the ease of getting into, so it's not usually worth it for creators if the main metric is reach/interest
Structured (taught) learning is better in this regard, it at least gives you structure to cling on to at the mid level
But also, the middle ground is often just years of practice.
I used to program competitively and while that's the case for a lot of the early day problems, usually a few on the later days are pretty tough even by those standards. Don't take it from me, you can look at the finishing times over the years. I just looked at some today because I was going through the earlier years for fun and on Day 21/2023, 1 hour 20 minutes got you into the top 100. A lot of competitive programmers have streamed the challenges over the years and you see plenty of them struggle on occasion.
People just love to BS and brag, and it's quite harmful honestly because it makes beginner programmers feel much worse than they should.
According to Eric last year (https://www.reddit.com/r/adventofcode/comments/1hly9dw/2024_...) there were 559 people that had obtained all 500 stars. I'm happy to be one of them.
The actual number is going to be higher as more people will have finished the puzzles since then, and many people may have finished all of the puzzles but split across more than one account.
Then again, I'm sure there's a reasonable number of people who have only completed certain puzzles because they found someone else's code on the AoC subreddit and ran that against their input, or got a huge hint from there without which they'd never solve it on their own. (To be clear, I don't mind the latter as it's just a trigger for someone to learn something they didn't know before, but just running someone else's code is not helping them if they don't dig into it further and understand how/why it works.)
There's definitely a certain specific set of knowledge areas that really helps solve AoC puzzles. It's a combination of classic Comp Sci theory (A*/SAT solvers, Dijkstra's algorithm, breadth/depth first searches, parsing, regex, string processing, data structures, dynamic programming, memoization, etc) and Mathematics (finite fields and modular arithmetic, Chinese Remainder Theorem, geometry, combinatorics, grids and coordinates, graph theory, etc).
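To make one entry on that list concrete, here's a minimal sketch of Dijkstra's algorithm with a binary heap, roughly the shape it takes in many AoC solutions (the tiny graph here is made up):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start; graph maps node -> [(neighbor, weight)]."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)]}
print(dijkstra(g, "a"))  # -> {'a': 0, 'b': 1, 'c': 3}
```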
Not many people have all those skills to the required level to find the majority of AoC "easy". There's no obvious common path to accruing this particular knowledge set. A traditional Comp Sci background may not provide all of the Mathematics required. A Mathematics background may leave you short on the Comp Sci theory front.
My own experience is unusual. I've got two separate bachelors degrees; one in Comp Sci and one in Mathematics with a 7 year gap between them, those degrees and 25+ years of doing software development as a job means I do find the vast majority of AoC quite easy, but not all of it, there are still some stinkers.
Being able to look at an AoC problem and think "There's some algorithm behind this, what is it?" is hugely helpful.
The "Slam Shuffle" problem (2019 day 22) was a classic example of this that sticks in my mind. The magnitude of the numbers involved in part 2 of that problem made it clear that a naive iteration approach was out of the question, so there had to be a more direct path to the answer.
As I write the code for part 1 of any problem I tend to think "What is the twist for part 2 going to be? How is Eric going to make it orders of magnitude harder?" Sometimes I even guess right, sometimes it's just plain evil.
Just checked my copy of TAOCP (Vol 3 - Sorting and Searching) and it doesn't mention A* or SAT.
Ref: https://en.wikipedia.org/wiki/The_Art_of_Computer_Programmin...
A quick google shows that the newer volumes (Volume 4 fascicles 6 and 7) seem to cover SAT. Links to downloads are on the Wikipedia page above.
Maybe the planned 4C Chapter 7 "Combinatorial searching (continued)" might cover A* searching. Ironically googling "A* search" is tricky.
Hopefully someone else will chip in with a better reference that is somewhere in the middle of Wikipedia's brevity and TAOCP's depth.
I don't mean to say my solution was good, nor was it performant in any way - it was not, I arrived at adjacency (linked) lists - but the problem is tractable to the well-equipped with sufficient headdesking.
Operative phrase being "a computer science education," as per GGP's point. Easy is relative. Let's not leave the bar on the floor, please, while LLMs are threatening to hoover up all the low hanging fruit.
I have a computer science education and I have no idea what you're talking about. The prompt "Proof." ?
Most people who study Comp Sci never use any of what they learned ever again, and most will have forgotten most of what they learned within one or two years. Most software engineers never use any comp sci theory at all, but especially not graph theory or shit like Dijkstra's algorithm, DFS, BFS etc.
> Most software engineers never use any comp sci theory at all, but especially not graph theory or shit like Dijkstra's algorithm, DFS, BFS etc.
But we are talking about Advent of Code here, which is a set of fairly contrived, theoretical, in vitro learning problems that you don't really see in the real software engineering world either.
> The prompt "Proof." ?
See this paper on the Stoer-Wagner min-cut algorithm from graph theory, for the last problem in a previous year's Advent of Code: https://www.cs.dartmouth.edu/~ac/Teach/CS105-Winter05/Handou...
> I have a computer science education and I have no idea what you're talking about.
A post-secondary computer science education? I don't mean bootcamp. I mean a course of study in mathematics.
My only assumption is that you're really out of touch with the ordinary world of humanity if you think most people are aware of stuff like this:
https://www.cs.dartmouth.edu/~ac/Teach/CS105-Winter05/Handou...
But, speaking to the original question as to the number of newbies that go all the way, I'd say one cannot expect to increase their skills in anything if one sticks in their comfort zone. It should be hard, and as a newbie who participated in previous years, I can confirm it often is. But I learned new things every time I did it, even if I did not finish.
The idea that anyone who doesn't know any code would:
1) Complete in Advent of Code at all.
2) Complete a single part of a single problem.
let alone, complete the whole thing without it being a "tremendous challenge"...
is so completely laughable it makes me question whether you live on the same planet as the rest of us here.
Getting a person who has never coded to write a basic sort algorithm (i.e. bubble sort) is already basically impossible. I work with highly talented non coder co-workers who all attended tier-1 universities (e.g. Oxford, Harvard, Stanford) but for finance/business related degrees, I cannot get them to write while/foreach loops in Python, and simply using Claude Code is way too much for them.
If you are even fully completing one Advent of Code problem, you are in the top 0.1% of coders, completing all of them puts you in the top 0.001%.
Wishing you best of luck in AoC, Life and Love but I imagine someone like you doesn't need it, being a complete toolbox and all.
P.S.: Tell your coworkers I'm sorry they have to put up with you.
You're the person saying Advent of Code is "so easy" that anyone even people with no coding ability at all should find it do-able, which is totally diminishing the difficulty of the problems, and asserting your own genius, i.e. that you found it totally trivial.
I am the person saying that actually, stuff like Advent of Code is incredibly difficult and 99% of active programmers aren't able to complete it, let alone people who don't code.
I am not an elitist at all, unlike yourself, I don't find completing "Advent of Code" easy, in fact, it would take me a long time to complete it, more time than I have available in my busy life in the average December. And I doubt I would be able to complete it 100% without looking up help, getting hints, or using LLMs to help.
Heck, I even talked about having to be serious about completion, and you could not bother to read the whole comment, then proceed to call me delusional? FFS, I am now praying for your co-workers and I'm not even religious.
Did you realize only roughly 500 people of the > 1M who are registered for advent of code even complete it?
You said "it should not be a tremendous challenge", i.e. not that big of a deal even if you don't know how to code. Which is absolutely diminishing the difficulty of the event, I mean, come on man...
This is why I'm asserting you are quite oblivious to the abilities of most people. I am asserting that most people who CAN code cannot complete the event, let alone non-coders. I am a very active coder (for fun mostly these days, but also sometimes for work), but I could not complete Advent of Code. Maybe if I took all of December off work to dedicate serious time, but even then I wonder if it's possible without looking at hints/LLM-help etc.
I often try and help my co-workers who are working on AI based side-projects for fun, so I have a strong insight into the abilities of non-coding smart people, and the reality is that yes, they get very turned off as soon as you get anything more complex than for-loops and if-statements. This isn't me being mean to co-workers, this is the reality of things I have experienced. It's not a brains thing, they can understand more complex stuff, but they don't want to, they find it annoying, boring, not worth the time/effort etc. So the idea of them learning dynamic programming, DFS/BFS, more complex data structures etc, is well, just not going to happen.
My point is that what you are effectively saying, "oh, just about anyone can do Advent of Code if they want to", is totally not grounded in any sort of reality.
Try to have a better day.
Maybe when I was in college (if AoC had existed back then) I could have kept pace, but if part of your life is also running a household, then between wrapping up projects for work, finalizing various commitments I want wrapped up for the year, getting together with family and friends for various celebrations, and finally travel and/or preparing your own house for guests, I'm lucky if I have time to sit down with a cocktail and book the week before Christmas.
Seeing the format changed to 12 days makes me think this might be the first time in years I could seriously consider doing it (to completion).
I have no evidence to say this, but I'd guess a lot more people give up on AoC because they don't want to put in the time needed than give up because they're not capable of progressing.
I think it comes down to experience, exposure to problems, and the ability to recognise what the problem boils down to.
A colleague who is an all round better coder than me might spend 4 hours bashing away trying to solve a problem that I might be able to look at and quickly recognise is isomorphic to a specific classic Comp Sci or Maths problem, and know exactly how best to attack it, saving me a huge amount of time.
Spoiler alert: Take the "Slam Shuffle" in 2019 Day 22 (https://adventofcode.com/2019/day/22). I was lucky that I quickly recognised that each of the actions could be represented as '( a*n + b ) mod numcards' (with a and b specific to the action) and therefore any two actions like this can be combined into the same form. The optimal solution follows relatively simply from this.
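A hedged sketch of that observation in Python (the deck size and the specific actions are illustrative; the full part 2 also needs modular inverses and fast exponentiation, omitted here):

```python
def compose(f, g, m):
    """Compose affine maps x -> (a*x + b) mod m; g is applied first."""
    a1, b1 = f
    a2, b2 = g
    return ((a1 * a2) % m, (a1 * b2 + b1) % m)

m = 10007                     # illustrative deck size
deal_new = (-1 % m, m - 1)    # "deal into new stack": x -> -x - 1
cut3 = (1, -3 % m)            # "cut 3": x -> x - 3

# deal into a new stack, then cut 3, collapsed into one (a, b) pair
a, b = compose(cut3, deal_new, m)
assert (a * 5 + b) % m == (-5 - 1 - 3) % m
```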
Doing all of the previous years means there's not much new ground although Eric always manages to find something each year.
There have also been some absolutely amazing inventions along the way. The IntCode Breakout game (2019) and the adventure game (can't remember the year) both stick in my mind as amazing constructions.
And then something shiny and fun comes along during a problem that I'm having trouble with, and I just never come back.
https://adventofcode.com/2020/day/1 for example. It's not hard to do part 1 by hand.
You need two numbers from the input list (of 200 numbers) that add to 2020.
For each number n in the list you just have to check if (2020-n) is in the list.
A quick visual scan showed my input only had 9 numbers that were less than 1010, so I'd only have to consider 9 candidate numbers.
It would also be trivial for anyone who can do relatively simple things with a spreadsheet.
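The check described above fits in a few lines. This sketch uses the sample values from the puzzle page (the function name is my own):

```python
# For each number n, check whether its complement (2020 - n) has been seen.
# A set makes the membership test O(1), so the whole scan is linear.
def find_pair(nums, target=2020):
    seen = set()
    for n in nums:
        if target - n in seen:
            return n * (target - n)
        seen.add(n)
    return None

# Sample input from the puzzle statement: 1721 + 299 == 2020,
# and the puzzle asks for the product, 1721 * 299 == 514579.
print(find_pair([1721, 979, 366, 299, 675, 1456]))
```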
That's not the same as saying they're easy, but it's a different kind of barrier, and (in my opinion) more a test of 'can you think?' than 'did you do a CS degree?'
In this sense it's accessible: you won't get stuck because of a word you don't understand or a concept you've never heard of.
0: https://bugzilla.mozilla.org/show_bug.cgi?id=1943796
Much easier so far than it was in 2023 when just basic string wrangling was basically nonexistent.
Could either be really recreational and relaxing.. or painful and annoying.
Though I don't care even if it takes me all of next year, it's all in order to learn :)
It's only going to be 12 problems rather than 24 this year and there isn't going to be a global leaderboard, but I'm still glad we get to take part in this fun Christmas season tradition, and I'm thankful for all those who put in their free time so that we can get to enjoy the problems. It's probably an unpopular stance, but I've never done Advent of Code for the competitive aspect, I've always just enjoyed the puzzles, so as far as I'm concerned nothing was really lost.
Is this an unpopular stance? Out of a dozen people I know that did/do AoC every year, only one was trying to compete. Everyone else did it for fun, to learn new languages or concepts, to practice coding, etc.
Maybe it helps that, because of timezones, in Europe you need to be really dedicated to play for a win.
No, it's not. At most 200 people could end up on the global leaderboard, and there are tens of thousands of people who participate most days (though it drops off by the end, it's reliably over 100k for the first day). The vast majority of participants are not there for the leaderboard. If you care about competing, there are always private leaderboards.
Premises:
(i) I love Advent of Code and I'm grateful for its continuing existence in whatever form its creators feel like it's best for themselves and the community;
(ii) none of what follows is a request, let alone a demand, for anything to change;
(iii) what follows is just the opinion of some random guy on the Internet.
I have a lot of experience with competitions (although more on the math side than on the programming side), and I've been involved essentially since I was in high school, as a contestant, coach, problem writer, organizer, moving tables, etc. In my opinion Advent of Code simply isn't a good competition:
- You need to be available for many days in a row for 15 minutes at a very specific time.
- The problems are too easy.
- There is no time/memory check: you can write ooga-booga code and still pass.
- Some problems require weird parsing.
- Some problems are pure implementation challenges.
- The AoC guy loves recursive descent parsers way too much.
- A lot of problems are underspecified (you can make assumptions not in the problem statement).
- Some problems require manual input inspection.
To reiterate once again: I am not saying that any of this needs to change. Many of the things that make Advent of Code a bad competition are what make it an excellent, fun, memorable "Christmas group thing". Coming back every day creates community and gives people time to discuss the problems. Problems being easy and not requiring specific time complexities to be accepted make the event accessible. Problems not being straight algorithmic challenges add welcome variety.
I like doing competitions but Advent of Code has always felt more like a cozy problem solving festival, I never cared too much for the competitive aspect, local or global.
The vast majority (though not all) of the inputs can be parsed with regex or no real parsing at all. I actually can't think of a day that needed anything like recursive descent parsing.
One could probably build a separate service that provides a leaderboard for solution runtimes.
I agree that it’s more of a cozy activity than a hardcore competition, that’s what I appreciate about it most.
LOL!!
I agreed with a lot of what you wrote, but also a lot of us strive for beautiful solutions regardless of time/memory bounds.
In fact, I’m (kind of) tired of leetcode flagging me for one ultra special worst-case scenario. I enjoy writing something that looks good and enjoying the success.
(Not that it’s bad to find out I missed an optimization in the implementation, but… it feels like a lot of details sometimes.)
Just a random example: https://open.kattis.com/problems/magicallights
But the Kattis website is great. Your program runs on their server without you ever seeing the input (you just get right/wrong back), so it's a bit different. It also imposes memory and time constraints that, for the more difficult problems, you have to find your way around.
https://everybody.codes/events
The problems are pretty difficult in my book (I never make it past day 3 or so). So I definitely would hope they never increase the difficulty.
[0] https://www.jerpint.io/blog/2024-12-30-advent-of-code-llms/
The difference when working on larger tasks that require reasoning is night and day.
In theory it would be very interesting to go back and retry the 2024 tasks, but those will likely have ended up in the training data by now...
I see people assert this all over the place, but personally I have decreased my usage of LLMs in the last year. During this change I’ve also increasingly developed the reputation of “the guy who can get things shipped” in my company.
I still use LLMs, and likely always will, but I no longer let them do the bulk of the work and have benefited from it.
It's true this was 4 months after AoC 2024 was out, so it may have been trained on the answer, but I think that's way too soon.
Day 3 in 2024 isn't a Math Olympiad tier problem or anything but it seems novel enough, and my prior experience with LLMs were that they were absolutely atrocious at assembler.
https://adventofcode.com/2024/day/3
But as others have said, it’s a night and day difference now, particularly with code execution.
From watching them work, they read the spec, write the code, run it on the examples, refine the code until it passes, and so on.
But we can’t tell whether the puzzle solutions are in the training data.
I’m looking forward to seeing how well current agents perform on 2025’s puzzles.
I'm just glad they're keeping this going.
Doing things for the fun of it, for curiosity's sake, for the thrill of solving a fun problem - that's very much alive, don't worry!
Maybe just have a cool advent calendar thingy like a digital tree that gains an ornament for each day you complete. Each ornament can be themed for each puzzle.
Of course I hope it goes without saying that the creator(s) can do it however they want and we’re nothing but richer for it existing.
It becomes a race when you start seeing it as a race :) One can just... ignore the leaderboard
Instead, getting gold stars for solving the puzzles is incentive enough, and can be done as a relaxing thing in the morning.
No matter what you do, as the puzzles get harder, you won't solve them in a day (or even a lifetime) if you don't come up with good algorithms/methods/heuristics.
Even before LLMs I knew it was filled with results faster than you can blink.
So for some of us (from gut feeling, the vast majority) it was always just for fun. I usually spent until at least March finishing as much as I did each year.
Many people do - well, did - AoC while ignoring the leaderboard.
In the IEEEXTREME university programming competition there are ~10k participating teams.
Our university has quite a strong Competitive Programming program and the best teams usually rank in the top 100. Last year a team ranked 30th, and it wasn't even our strongest team (which didn't participate).
This year none of our teams was able to get in the top 1000. I would estimate close to 99% of the teams in the Top 1000 were using LLMs.
Last year they didn't seem to help much, but this year they rendered the competition pointless.
I've read blogs/seen videos of people who got in the AOC global leaderboard last year without using LLMs, but I think this year it wouldn't be possible at all.
Cheating is rampant anywhere there’s an online competition. The cheaters don’t care about respecting others, they get a thrill out of getting a lot of points against other people who are trying to compete.
Even in the real world, my runner friends always have stories about people getting caught cutting trails and all of the lengths their running organizations have to go through now to catch cheaters because it’s so common.
The thing about cheaters in a large competition is that it doesn’t take many to crowd out the leaderboard, because the leaderboard is where they get selected out. If there are 1000 teams competing and only 1% cheat, that 1% could still fill the top 10.
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
reminds me of something I read in "I’m a high schooler. AI is demolishing my education." [0,1] emphasis added:
> During my sophomore year, I participated in my school’s debate team. I was excited to have a space outside the classroom where creativity, critical thinking, and intellectual rigor were valued and sharpened. I love the rush of building arguments from scratch. ChatGPT was released back in 2022, when I was a freshman, but the debate team weathered that first year without being overly influenced by the technology—at least as far as I could tell. But soon, AI took hold there as well. Many students avoided the technology and still stand against it, but it was impossible to ignore what we saw at competitions: chatbots being used for research and to construct arguments between rounds.
high school debate used to be an extracurricular thing students could do for fun. now they're using chatbots in order to generate arguments that the students can just regurgitate.
the end state of this seems like a variation on Dead Internet Theory - Team A is arguing the "pro" side of some issue, Team B is arguing the "con" side, but it's just an LLM generating talking points for both sides and the humans acting as mouthpieces. it still looks like a "debate" to an outside observer, but all the critical thinking has been stripped away.
0: https://www.theatlantic.com/technology/archive/2025/09/high-...
1: https://archive.is/Lda1x
High school debate has been ruthless for a long time, even before AI. There has been a rise in the use of techniques designed to abuse the rules and derail arguments for several years. In some regions, debates have become more about teams leveraging the rules and technicalities against their opponents than organically trying to debate a subject.
Imagine the shitshow that gaming would be without any kind of anti-cheat measures, and that's the state of competitive programming.
If the rules don't allow that and yet people do then well, you need online qualifiers and then onsite finals to pick the real winners. Which was already necessary, because there are many other ways to cheat (like having more people than allowed in the team).
It's not much different than outlawing performance enhancing drugs. Or aimbots in competitive gaming. The point is to see what the limits of human performance are.
If an alien race came along and said "you will all die unless you beat us in the IEEE programming competition", I would be all for LLM use. Like if they challenged us to Go, I think we'd probably / certainly use AI. Or chess - yeah, we'd be dumb to not use game solvers for this.
But that's not in the spirit of the competition if it's University of Michigan's use of Claude vs MIT's use of Claude vs ....
Imagine if the word "competition" meant "anything goes" automatically.
They're just different types of fun. The problem is if one type of fun is ruined by another.
With products I want actual correctness. And not something thrown away.
(I did a couple of these in college, though we didn't practice outside of competition so we weren't especially good at it.)
The Regional Finals and World Finals are in a single venue with a very controlled environment. Just like the IOI and other major competitions.
National High School Olympiads have been dealing with bigger issues because there are too many participants in the first few phases, and usually the schools themselves host the exams. There has been rampant cheating. In my country I believe the organization has resorted to manually reviewing all submissions, but I can only see this getting increasingly less effective.
This year the Canadian Computing Competition didn't officially release the final results, which for me is the best solution:
> Normally, official results from the CCC would be released shortly after the contest. For this year’s contest, however, we will not be releasing official results. The reason for this is the significant number of students who violated the CCC Rules. In particular, it is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help. As such, the reliability of “ranking” students would neither be equitable, fair, or accurate.
Available here: [PDF] https://cemc.uwaterloo.ca/sites/default/files/documents/2025...
Online competitions are just hopeless. AtCoder and Codeforces have rules against AI but no way to enforce them. A minimally competent cheater is impossible to detect. Meta Hacker Cup has a long history and is backed by a large company, but had its leaderboard crowded by cheaters this year.
I don’t see why competitive debate or programming would be different. (But I understand why a fair global leaderboard for AOC is no longer feasible).
I solved a few problems with it last year, and it is amazing how compact the solutions are. It also messes with your head, and the community surrounding it is interesting. Highly recommended.
[1] https://www.uiua.org/
Uiua – A stack-based array programming language - https://news.ycombinator.com/item?id=42590483 - Jan 2025 (6 comments)
Uiua: A minimal stack-based, array-based language - https://news.ycombinator.com/item?id=37673127 - Sept 2023 (104 comments)
We (Depot) are sponsoring this year and have a private leaderboard [0]. We’re donating $1k/each for the top five finishers to a charity of their choice.
[0] https://depot.dev/events/advent-of-code-2025
>What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I've made it so you can share a read-only view of your private leaderboard. *Please don't use this feature or data to create a "new" global leaderboard.*)
It's kotlin and shik for me this year, probably a bit of both. And no stupid competitions, AoC should be fun.
https://gitlab.com/codr7/shik
No thanks.
It used to be that reddit had a user creation screen that looked like you needed to input an email address, but you could actually just click "Next" to skip it.
The last time I had cause to make a reddit account, they no longer allowed this.
Having done auth myself, I can also understand why auth is being externalised like this. The site was flooded with bots and scrapers long before LLMs gained relevance and adding all the CAPTCHAs and responding to the "why are you blocking my shady CGNAT ISP when I'm one of the good ones" complaints is just not worth it. Let some company with the right expertise deal with all of that bullshit.
I wish the site had more login options, though. It's a tough nut to crack; pick a small, independent OAuth login service not under the control of a big tech company and you're basically DDoSing their account creation page for all of December. Pick a big tech company and you're probably not gaining any new users. You can't do decentralized auth because then you're just doing authentication DDoS with extra steps.
If I didn't have a github account, I'd probably go with a throwaway reddit account to take part. Reddit doesn't really do the same type of tracking Twitter tries to do and it's probably the least privacy invasive of the bunch.
I always put it down to overthinking and never arriving at a solution but maybe it was actually a much tougher problem!
On a serious note, I just saw this: https://linuxupskillchallenge.org
[1] https://en.wikipedia.org/wiki/Twelve_Days_of_Christmas
From https://adventofcode.com/2025/about:
> Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).
https://adventofcode.com/2025/about#faq_highcontrast
https://gist.github.com/rtfeldman/f46bcbfe5132d62c4095dfa687...
And yet I expect the whole leaderboard to be full of AI submissions...
Edit: No leaderboard this year, nice!
There are plenty of programming competitions and hackathons out there. Let this one simply be a celebration of learning and the enjoyment of problem solving.
> The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard.
There will be no global leaderboard this year.
I enjoy programming a lot, but most of it comes from things like designing APIs that work well and that people enjoy using, or finding things that allow me to delete a ton of legacy code.
I did try to do Advent of Code many times. Usually I get bored halfway through reading the first problem, and when I finally get through it, I realize these usually involve tradeoffs that are annoying to make in terms of memory/CPU usage, plus several edge cases to deal with.
it really feels more like work than play.
I've got 500 stars (i.e. I've completed every day of all 10 previous years) but not always on the day the puzzles were available, probably 430/500 on the day. (I should say I find the vast majority of AoC relatively easy as I've got a strong grounding in both Maths and Comp Sci.)
First of all I only found out about AoC in 2017 and so I did 2015 and 2016 retrospectively.
Secondly I can keep up with the time commitments required up until about the 22nd-24th (which is when I usually stop working for Christmas). From then time with my wife/kids takes precedence. I'll usually wrap up the last bits sometime from the 27th onwards.
I've never concerned myself with the pointy end of the leaderboards due to timezones as the new puzzles appear at 5am local time for me and I've no desire to be awake at that time if I can avoid it, certainly not for 25 days straight. I expect that's true of a large percentage of people participating in AoC too.
My simple aim every day is that my rank for solving part 2 of a day is considerably lower than my rank for solving part 1.
(To be clear, even if I was up and firing at 5am my time every day I doubt I could consistently get a top 100 rank. I've got ten or so 300-1000 ranks by starting ~2 hours later but that's about it. Props to the people who can consistently appear in the top 100. I also start most days from scratch whilst many people competing for the top 100 have lots of pre-written code to parse things or perform the common algorithms.)
I also use the puzzles to keep me on my toes in terms of programming and I've completed every day in one of Perl, C or Go and I've gone back and produced solutions in all 3 of those for most days. Plus some random days can be done easily on the command-line piping things through awk, sed, sort, grep, and the like.
The point of AoC is that everyone is free to take whatever they want from it.
Some use it to learn a new programming language. Some use it to learn their first language and only get a few days into it. Some use it to make videos to help others learn how to program in a specific language. Some use it to learn how/when to use structures like arrays, hashes/maps, red-black trees, etc, and then how/when to use classic Comp Sci algorithms like A*, SAT solvers, Dijkstra's, etc, all the way to some random esoteric things like Andrew's monotone chain convex hull algorithm for calculating the perimeter of a convex hull. There are also the mathsy type problems often involving the Chinese Remainder Theorem and/or some variation of finite fields.
My main goal is to come up with code that is easy to follow and performs well as a general solution rather than overly specific to my individual input. I've also solved most years with a sub 1 second total runtime (per year, so each day averages less than 40msec runtime).
Anyway, roll on tomorrow. I'll get to the day 1 problem once I've got my kid up and out the door to go to school as that's my immediate priority.
Why would you use a site called HackerNews if you are not a hacker? No idea.
Of course, folks may use it to visualise the puzzles but not to solve them.
Maybe it's useful for people trying to learn but also becoming pointless now as all Junior dev roles can be done with AI.
I mean do plumbers have an advent of plumbing where they try and unblock shit filled toilets for fun?
https://www.plumbingnationals.com/
https://www.youtube.com/watch?v=BWrvnNMsmeM
Yes, plumbers and other types of craftspeople and technicians do also have these little fun competitions. Why shouldn't they?
I think the reason some of us programmers do these things, is likely because many (myself included) entered the field as enthusiasts and hobbyists in the first place.
No, but you’ll see it for writers, musicians, and the like.
Engineering (software or not) can be an intellectually rewarding experience for many. I don’t know why some people find this something to scoff at, would you rather have no pleasure derived from your work?
You've obviously never watched "Drain Cleaning Australia" on YouTube!
Yes, some people find this stuff fun, because they find coding fun, and don't typically get to do the fun kind of coding on company time. Also, there'd be a hell of a lot less open source software in the world if people didn't code for fun.
Let people enjoy things. Just because you don't like that part of your job as much as they do doesn't mean they're wrong.
[1] https://web.archive.org/web/20241201070128/https://adventofc...
Although there are now rumours of hidden motors in Tour de France bicycles. So, I guess it's the same.