Here are 12 Sysadmin/DevOps (they're synonyms now!) challenges, straight from the day job:
1. Get a user to stop logging in as root.
2. Get all users to stop sharing the same login and password for all servers.
3. Get a user to upgrade their app's dependencies to versions newer than 2010.
4. Get a user to use configuration management rather than scp'ing config files from their laptop to the server.
5. Get a user to bake immutable images w/configuration rather than using configuration management.
6. Get a user to switch from Jenkins to GitHub Actions.
7. Get a user to stop keeping one file with all production secrets in S3, and use a secrets vault instead.
8. Convince a user (and management) you need to buy new servers, because although "we haven't had one go down in years", every one has faulty power supply, hard drive, network card, RAM, etc, and the hardware's so old you can't find spare parts.
9. Get management to give you the authority to force users to rotate their AWS access keys which are 8 years old.
10. Get a user to stop using the aws root account's access keys for their application.
11. Get a user to build their application in a container.
12. Get a user to deploy their application without you.
After you complete each one, you get a glass of scotch. Happy Holidays!
GitHub Actions left a bad taste in my mouth after it randomly removed authenticated workers from the pool once they'd been offline for ~5 days.
This was after setting up a relatively complex PR workflow (an always-on cheap server starts up a very expensive build server with specific hardware), only to have it break randomly after a PR didn't come in for a few days. And no indication that this happens, and no workaround from GitHub.
There are better solutions for CI; GitHub's is half-baked.
Roll 2d6 and sum the result (or use the sketch after the list). Your CI migration target is:
2. migrate secret manager. Roll again
3. cloud build
4. gocd
5. jenkins
6. gitlab
7. github actions
8. bamboo
9. codepipeline
10. buildbot
11. team foundation server
12. migrate version control. Roll again
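If you'd rather let a machine decide, here's the table above as a minimal Python sketch. Note that summing 2d6 makes the middle of the table far more likely than the edges, so github actions (7) is the most common outcome:

    import random

    # The 2d6 table above; 2 and 12 send you back to the dice.
    TARGETS = {
        2: "migrate secret manager (roll again)",
        3: "cloud build", 4: "gocd", 5: "jenkins", 6: "gitlab",
        7: "github actions", 8: "bamboo", 9: "codepipeline",
        10: "buildbot", 11: "team foundation server",
        12: "migrate version control (roll again)",
    }

    def roll_2d6() -> int:
        return random.randint(1, 6) + random.randint(1, 6)

    roll = roll_2d6()
    while "roll again" in TARGETS[roll]:
        print(f"Rolled {roll}: {TARGETS[roll]}")
        roll = roll_2d6()
    print(f"Rolled {roll}: your CI migration target is {TARGETS[roll]}")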
Not in love with its insistence on recreating the container from scratch every step of the pipeline, among a bundle of other irksome quirks. There are certainly worse choices, though.
Opposite of Jenkins, where you have shared workspaces and have to manually ensure the workspace is clean, or suffer reproducibility issues from tainted workspaces.
Hudson/Jenkins is just not architected for large, multi-project deployments, isolated environments, and specialized nodes. It can work if you do not need these features, but otherwise it's a fight against the environment.
You need a beefy master and it is your single point of failure. Untimely triggers of heavy jobs overwhelm the controller? All projects are down. Jobs need to be carefully crafted to be resumable at all.
Heavy reliance on the master means that even sending out webhooks on stage status changes is extremely error-prone.
When your jobs require certain tools to be available, you are expected to package those as part of the agent deployment, since Jenkins relies on host tools. In reality you end up rolling your own tool management system that every job has to call in some canonical manner (sketched below).
There is no built-in way to isolate environments. You can harden the system a bit with various ACLs, but in the end you either have to trust projects or build and maintain separate infrastructure for different projects, isolated at the host level.
When a significant amount of processing time happens externally, you still have to block an executor.
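To make the tool-management point concrete, here's a sketch of the kind of ad-hoc "tool manager" you end up writing; the cache path, artifact URL scheme, and the tool/version at the bottom are hypothetical:

    import os
    import tarfile
    import urllib.request
    from pathlib import Path

    CACHE = Path.home() / ".ci-tools"  # hypothetical per-agent cache location

    def ensure_tool(name: str, version: str) -> Path:
        """Fetch name/version into the agent-local cache once, put it on PATH, return its bin dir."""
        dest = CACHE / f"{name}-{version}"
        if not dest.exists():
            # Hypothetical internal artifact mirror; substitute wherever you host binaries.
            url = f"https://artifacts.example.internal/{name}/{name}-{version}-linux-amd64.tar.gz"
            archive, _ = urllib.request.urlretrieve(url)
            dest.mkdir(parents=True)
            with tarfile.open(archive) as tar:
                tar.extractall(dest)
        os.environ["PATH"] = f"{dest / 'bin'}{os.pathsep}{os.environ['PATH']}"
        return dest / "bin"

    # ...and every job's first step becomes the same canonical incantation:
    ensure_tool("terraform", "1.9.5")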
Yeah, I was thinking of using it for us actually. Connects to everything, lots of plugins, etc. I wonder where the hate comes from; they are all pretty bad, aren't they?
Will test forgejo's CI first as we'll use the repo anyway, but if it ain't for me, it's going to be jenkins I assume.
Cons:
- DSL is harder to get into.
- Hard to reproduce a setup unless builds are in DSL and Jenkins itself is in a fixed version container with everything stored in easily transferable bind volumes; config export/import isn't straightforward.
- Builds tend to break in a really weird way when something (even external things like Gitea) updates.
- I've had my setup broken once after updating Jenkins and not being able to update the plugins to match the newer Jenkins version.
- Reliance on system packages instead of containerized build environment out of the box.
- Heavier on resources than some of the alternatives.
Pros:
- GUI is getting prettier lately for some reason.
- Great extendability via plugins.
- A known tool for many.
- Can mostly be configured via GUI, including build jobs, which helps to get around things at first (but leads into the reproducibility trap later on).
Wouldn't say there is a lot of hate, but there are some pain points compared to managed Gitlab. Using managed Gitlab/Github is simply the easiest option.
Setting up your own Gitlab instance + Runners with rootless containers is not without quirks, too.
CASC plugin + seed jobs keep all your jobs/configurations in files and update them as needed, and k8s + Helm charts can keep the rest of config (plugins, script approvals, nodes, ...) in a manageable file-based state as well.
We have our main node in a state that we can move it anywhere in a couple of minutes with almost no downtime.
I'll add another point to "Pros": Jenkins is FOSS and it costs $0 per developer per month.
I have previous experience with it. I agree with most points. Jobs can be downloaded as XML config and thus kept/versioned, but the rest is valid. I just don't want to manage GitLab; we already have it at corp level, but I can't use it right now in preprod/prod, and I need something that will be either throwaway or kept just for very specific tasks that shouldn't change much in the long run.
For a throwaway, I don't think Jenkins will be much of a problem. Or any other tool for that matter. My only suggestion would be to still put some extra effort into building your own Jenkins container on top of the official one [0]. Add all the packages and plugins you might need to your image, so you can easily move and modify the installation, as well as simply see what all the dependencies are. Did a throwaway, non-containerized Jenkins installation once which ended up not being a throwaway. Couldn't move it into containers (or anywhere for that matter) without really digging in.
Haven't spent a lot of time with it myself, but if Jenkins isn't of much appeal, Drone [1] seems to be another popular (and lightweight) alternative.
Many, many reasons... the most important of which is, Jenkins is a constant security nightmare and a maintenance headache. But also it's much harder to manage a bunch of random Jenkins servers than GHA. Authentication, authorization, access control, configuration, job execution, networking, etc. Then there's the configuration of things like env vars and secrets, environments, etc that can also scale better. I agree GHA kinda sucks as a user tool, but as a sysadmin Jenkins will suck the life out of you and sap your time and energy that can go towards more important [to the company] tasks.
I really scratch my head when I read your comment, as none of this is a real issue in my Jenkins.
> bunch of random Jenkins servers
Either PXE boot from an image, or k8s from an image, have a machine or pod rebooted/destroyed after one job. Update your image once a month, or have a Jenkins job to do that for you.
> Authentication, authorization, access control
Either use LDAP or Login via Github, and Matrix security plugin. Put all "Devops" group into admins, the rest into users, never touch it again.
> configuration
CASC plugin and seed for jobs, and/or Helm for just about everything else.
> env vars and secrets
Pull everything from Vault with Vault plugin.
> as a sysadmin Jenkins will suck the life out of you
I spend about 1-2 hours a week managing Jenkins itself, and the rest of the week watching the jobs or developing new ones.
> Get a user to use configuration management rather than scp'ing config files from their laptop to the server.
Damn, this one I'm guilty of. Though I'm not a real Sysadmin/DevOps; I'm just throwing something together and deploying it on a LAN-only VM for security reasons (I don't trust the type of code I would write).
It really depends if the machine is hosting anything that you don't want some users to access. If the machine is single-purpose and any user is already able to access everything valuable from it (DB with customer data, etc) or trivially elevate to root (via sudo, docker access, etc) then it's just pointless extra typing and security theatre.
Q: 3. Get a user to upgrade their app's dependencies to versions newer than 2010.
A: Calculate the average age in years of all dependencies as (max(most recent version release date, date of most recent CVE on the library) - used version release date). Sleep for that many seconds before the app starts (sketched below).
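Half-serious, but the penalty would look roughly like this; the dependency names and dates below are made up for illustration:

    import time
    from datetime import date

    # Per dependency: (release date of the version in use, latest release date, most recent CVE date)
    DEPS = {
        "somelib":  (date(2009, 3, 1),  date(2024, 11, 2), date(2023, 6, 14)),
        "otherlib": (date(2010, 7, 20), date(2025, 1, 15), date(2019, 2, 3)),
    }

    def average_age_years(deps) -> float:
        ages = [(max(latest, cve) - used).days / 365.25 for used, latest, cve in deps.values()]
        return sum(ages) / len(ages)

    penalty = average_age_years(DEPS)
    print(f"Average dependency age: {penalty:.1f} years; sleeping {penalty:.1f} seconds before startup")
    time.sleep(penalty)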
Is this really like that? Aren't there any Unix admins/DBAs anymore? I associate DevOps with what in my time we called "operations" and "development". We had 5 teams or so:
1) Developers, who would architect and write code, 2) Operations, who would deploy, monitor and address customer complaints, 3) Unix (aka SYS) administrators, who would take care of housekeeping of, well, the OS (and web servers/middleware), 4) DBAs, who would be monitoring and optimizing Oracle/Postgres, and 5) Network admins, who would take care of load balancers, routers, switches, firewalls (well, there were 2 security experts for that also).
So I think DevOps would be a mix of 1&2, to avoid the daily wars that would constantly happen "THEY did it wrong!"
Can somebody clear my mind, please!? It seems I was out of it for too long?!
Thanks. That is an interesting insight into the current reality. I assume the developers take care of optimizing queries, setting up indexes, and developing schemas, while DB backups are handled by devops.
I must say, again, I thought (I read it somewhere?) DevOps should take care of the constant battle between Devs and Operations (I've seen enough of that in my time) by merging 1 and 2 together. But it seems like just a name change, and if anything it seems worse, as a (IMHO) critical and central component, like the DB, now has totally distributed responsibilities. I would like to know what happens when e.g. a DB crashes because a filesystem is full, "because one developer made another index, because one from devops had a complaint because X was too slow".
Either people are far more professional than in my time, or it must be a shitshow to watch while eating popcorn.
> DevOps should take care of the constant battle between Devs and Operations
In practice there is no way to relay "query fubar, fix" back, because we are much agile, very scrum: a feature is done when the ticket is closed, and new tickets are handled by product owners. Reality is the antithesis of that double Ouroboros.
In practice developers write code, devops deploy "teh clouds" (writing yamls is the deving part) and we throw moar servers at some cloud db when performance becomes sub-par.
Nobody does 4 until they’ve had multiple large incidents involving DBs, or the spend gets hilariously out of control.
Then they hire DBREs because they think DBA sounds antiquated, who then enter a hellscape of knowing exactly what the root issues are (poorly-designed schemata, unperformant queries, and applications without proper backoff and graceful degradation), and being utterly unable to convince management of this (“what if we switched to $SOME_DBAAS? That would fix it, right?”).
For 4) - consider PGHero[1] and PGTuner[2] instead of a full-time DBA. We use both in production and they work very well to help track down performance issues with Postgres.
Edit: For the record, I have worked at a few small companies as the "SysAdmin" guy who did the whole complement of servers, OS, storage, networking, VMs, DB, perf tuning, etc.
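If you want a peek at what those dashboards surface without installing anything, this is roughly the underlying query against pg_stat_statements. It assumes the extension is enabled, column names from PostgreSQL 13+, and a hypothetical connection string:

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN

    with conn, conn.cursor() as cur:
        # Top queries by total execution time: the usual first stop when hunting slow spots.
        cur.execute("""
            SELECT round(total_exec_time::numeric, 1) AS total_ms,
                   calls,
                   round(mean_exec_time::numeric, 1) AS mean_ms,
                   query
            FROM pg_stat_statements
            ORDER BY total_exec_time DESC
            LIMIT 10
        """)
        for total_ms, calls, mean_ms, query in cur.fetchall():
            print(f"{total_ms:>12} ms total  {calls:>8} calls  {mean_ms:>8} ms/call  {query[:70]}")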
I know it's a common view that sysadmin/devops are the same these days, but with a current sysadmin role nothing you've mentioned sounds relevant. Let me give you my list:
1. Patch Microsoft Exchange with only a three-hour outage window
2. Train a user to use OneDrive instead of emailing 50 MB files back and forth
3. Set up eight printers for six users. Deal with 9 GB printer drivers.
4. Ask an exec if he would please let you add MFA to his mailbox.
5. Sit there calmly while that exec yells like a WWE wrestler about the ways he plans to ruin you in response
6. Debate the cost of a custom mouse pad for one person across three meetings
7. Deploy any standard Windows app that expects everyone to be an administrator, without making everyone an administrator
8. Deploy an app that expects UAC disabled without disabling UAC
9. Debug some finance person's 9000-line Excel function
I used to have that job, but my title wasn't Sysadmin, it was IT Manager. For companies small enough that they don't have multiple roles, you do both... but for larger companies, the user-side stuff is done by IT, and the server-side stuff is done by a Sysadmin. (And my condolences; having done that combined role, it's not easy, and you don't get paid enough!)
Former Exchange Admin here: #1 is easy; I used to do 70k mailboxes in the middle of the day only, but it requires spare hardware or virtualization with headroom.
Deploy new server(s), patch, install Exchange, set up DAGs, migrate everyone's mailboxes, swing the load balancer over to the new servers, uninstall Exchange from the old ones, remove them from Active Directory, delete the servers.
BTW, upgrades now suck because Office 365 uses the method above, so the upgrade system never gets good QA from them.
Same feeling here re: migrations being easy if the Customer isn't a cheapass. Small business Customers who had the competing requirements of spending as little money as possible and having as much uptime as possible were the stressor.
> 9. Get management to give you the authority to force users to rotate their AWS access keys which are 8 years old.
Saying "keys which are 8 years old" implies you're worried about the keys themselves, which is just wrong. (Their security state depends on monitoring)
You can definitely make a strong argument that the organization needs practice rotating, so I would advise reframing it as an org-survivability-planning challenge and not a key-security issue.
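On the practical side, finding the offending keys is the easy part (getting the authority to rotate them is the hard part). A read-only sketch with boto3; the 90-day threshold is a hypothetical policy:

    import boto3
    from datetime import datetime, timezone

    MAX_AGE_DAYS = 90  # hypothetical policy threshold
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)

    # Report active IAM user access keys older than the threshold; rotates nothing.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in keys:
                age = (now - key["CreateDate"]).days
                if key["Status"] == "Active" and age > MAX_AGE_DAYS:
                    print(f'{user["UserName"]}: {key["AccessKeyId"]} is {age} days old')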
A lot of these problems seem pretty solvable if you're the admin of the machine (or cloud system) and the user isn't.
If you don't want a user to log in as root, disable the root password (or change it to something only you know) and disable root ssh. If you want people to stop sharing the same login and password across all servers, there's several ways to do it but the most straightforward one seems like it would be to enforce the use of a hardware key (yubikey or similar) for login. If people aren't using configuration management software and are leaving machines in an inconsistent state, again there are several options but I'd look into this NixOS project: https://github.com/nix-community/impermanence + some policy of rebooting the machines regularly.
If you don't like how users are making use of AWS resources and secrets, then set up AWS permissions to force them to do so the correct way. In general if someone is using a system in a bad or insecure way, then after alerting them with some lead time, deliberately break their workflow and force them to come to you in order to make progress. If the thing you suggest is actually the correct course of action for your organization, then it will be worthwhile.
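As a concrete example of the first point, a minimal sketch of locking out root logins, of the kind you'd push from configuration management rather than run by hand. It assumes a Linux host whose sshd reads sshd_config.d/ drop-ins, and the service name (ssh vs sshd) varies by distro:

    import subprocess
    from pathlib import Path

    # Lock the root password so console/su password logins stop working.
    subprocess.run(["passwd", "-l", "root"], check=True)

    # Disable root SSH logins via a drop-in instead of editing sshd_config in place.
    Path("/etc/ssh/sshd_config.d/50-no-root-login.conf").write_text("PermitRootLogin no\n")

    # Reload sshd to pick up the change (the unit is "sshd" on RHEL-likes).
    subprocess.run(["systemctl", "reload", "ssh"], check=True)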
We used to run a terminal in the browser using https://github.com/yudai/gotty and the entire dev team remapped their Ctrl+W to Ctrl+`. We did frontend and backend development with this setup for almost 1.5 years. Muscle memory: to this day I'm always afraid my actual terminal will close if I press Ctrl+W :P
It would be cool if we could SSH into the temporary host (I'm guessing these hosts currently aren't internet connected to avoid abuse so might not be possible or require some super careful firewalling)
Hello, SadServers guy here. Free VMs are sandboxed (no way in or out other than coming in through the proxy) for security reasons. Paid accounts have VMs with internet access and SSH access (and your pub key is added to all VMs for convenience)
The definition I liked best, which I _think_ came from one of the Google SRE books though I'm not certain, was: "SRE is what happens when you consider operations to be a software problem".
Nope, SREs keep applications running on a platform. Lots of metrics, tools to deploy apps in whatever rollout process the company has, etc.
In small companies, sysadmin might be a duty of the SRE team, but they definitely diverge if you have a large on-prem deployment or work with bespoke VMs in the cloud.
We have scenarios running on k8s, both on single VMs (the ones you can see in the scenario list) and we also have a beta/PoC k8s cluster where we currently run a couple of scenarios as single pod (a docker container) or as a full system (the "kubernetes playgrounds", which is kind of hidden while we test it).
Is this what you were wondering? We also still have it pending to introduce podman scenarios.
Without sharing too many spoilers... I solved the challenge but the check script was unhappy. The curl commands in the script worked fine, the earlier parts of the script failed, i.e. it didn't like how I'd decided to make that work.
This kind of thing annoys me. This is why CTFs are great, where the goal is to get the flag string. Obviously harder to do for sysadmin, but expecting a particular configuration when I managed to make it work without doing things exactly as they wanted is no better than a poorly written exam.
Hello, thanks for the feedback. Just deployed a new image that only checks for the objective, not what docker network somebody uses.
It is hard to have a checker that eliminates both false positives and false negatives in general, but we always try to minimize false negatives and we failed initially here.
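For the curious, an objective-only check can be as simple as probing the end result and ignoring how it was built. A sketch where the port and expected response are hypothetical stand-ins for a scenario's goal:

    import sys
    import urllib.request

    # Hypothetical objective: "something answers on localhost:8888 and says hello".
    try:
        body = urllib.request.urlopen("http://localhost:8888/", timeout=5).read()
    except OSError as exc:
        sys.exit(f"FAIL: nothing answering on :8888 ({exc})")

    if b"hello" in body.lower():
        print("OK")
    else:
        sys.exit("FAIL: unexpected response body")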
It's not clear whether you need an account to see the problems. I logged in with my account and it's exactly the same page. It's not Dec 1st everywhere yet, so maybe they'll open up for everyone once it is.
I would like to see and try to solve the scenarios for myself, not to get meaningless internet points. If you look at their front page, you can do that right now. So why do I have to create an account to even see these special advent scenarios?
Time pressure during Christmas/the holidays means the original calendars were becoming too stressful to handle. I've seen several calendars switch to 12 consecutive days, or one challenge every 2 days.
No, Advent is the liturgical season preceding Christmas, beginning the fourth Sunday before Christmas (which is also the Sunday nearest November 30), it is a period of at least three weeks and one day (the shortest period that can start on a Sunday and include four Sundays.)
The 12 days of Christmas start on Christmas and end on January 5, the eve of the Feast of Epiphany.
12-day advent calendars are a fairly recent invention that mirrors the 12-days of Christmas, but has no direct correspondence to anything in any traditional Christian religious calendar (the more common 24-day format is also a modern, but less recent, invention detached from the religious calendar, that simplifies by ignoring the floating start date of advent and always starting on Dec. 1.)
Yes, Christmas is the first of the twelve days of Christmas.
Advent begins on the fourth Sunday before Christmas, which was Nov 30 this year. It ends on Dec 24. Therefore it is technically anywhere from 22 to 28 days long.
Advent calendars begin on Dec 1 and end on Dec 25.
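Since the dates trip people up, a small sketch of the arithmetic: Advent starts on the fourth Sunday before Christmas, so its length swings between 22 and 28 days depending on which weekday Christmas falls on:

    from datetime import date, timedelta

    def advent_start(year: int) -> date:
        christmas = date(year, 12, 25)
        # Step back to the last Sunday strictly before Christmas (weekday() == 6 is Sunday)...
        days_back = (christmas.weekday() - 6) % 7 or 7
        # ...then three more weeks back lands on the fourth Sunday before Christmas.
        return christmas - timedelta(days=days_back + 21)

    for year in (2023, 2024, 2025):
        start = advent_start(year)
        length = (date(year, 12, 24) - start).days + 1
        print(f"{year}: Advent runs {start} through Dec 24 ({length} days)")

For 2025 that prints a Nov 30 start and a 25-day Advent, matching the dates above.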
For math, the AMC 10 and AMC 12 tests have 25 questions each, some of them quite challenging. Both are high school level math, no calculus. Search "2025 amc 10" for this year's problems and solutions.
Most are obvious to most people. None are obvious to everybody.
That said, I have found runners to be unnecessarily difficult.
But Jenkins and its own quirks, and when I used GitLab, it used ancient docker-machine and outdated AMIs by default.
I think Buildkite has been the only one to make this easy and scalable. But it is meant for self hosted runners.
[0] https://hub.docker.com/_/jenkins/
[1] https://www.drone.io
Oh, good lord why?
I've notified the authorities and social services.
Developers handle 1). Devops handle 2)/3)/5). Nobody does 4)
[1] https://github.com/ankane/pghero
[2] https://pgtune.leopard.in.ua/
If you just do any of this list without the proper migration plan/time, someone senior in the org will complain and you will lose.
More accurate statement imo.
Feedback from candidates is that they find it a bit stressful during the actual interview but love the approach once it's completed.
The interview option also makes it trivial to just send to a candidate via Zoom chat, ask them to share their screen, and it "just works".
Happy to answer questions folks may have about how we use it.
Somehow, SadServers seems to have entirely missed the concept of a "puzzle".
I don't know of any other SaaS which gives you a VM with one click without any registration but we do it.
In any case thanks for the feedback, I've put a button on this /advent page for clarity, cheers
If you tell me more, I might sign up. If I have to create an account first, I'm walking away.
> do you even sysadmin?
Yes.
At $5/month I might give the paid subscription a try.