Note: Someone commented on the “limited shelf-life” of ransomware and why this doesn’t hurt other victims. They deleted their comment but I’m posting my response.
You are incorrect. What is limited is the number of attacks that can be used for victims to recover their files. If you think the author is the only person that was using this attack to recover files, you are incorrect again. I'd recommend checking out the book The Ransomware Hunting Team. It's an interesting book about what happens behind the scenes when helping victims recover their files.
You're making a lot of assumptions about the capability to reconnect and patch/update itself. Preface the fix with "keep your machine offline from here on out" and we're back to fixing it for everyone before that point.
Anyone know why they are using timestamps instead of /dev/random?
Don't get me wrong, I'm glad they don't; it's just kind of surprising, as it seems like such a rookie mistake. Is there something I'm missing here, or is it more a case of people who know what they're doing not choosing a life of crime?
My unqualified hunch: if they did that, then a mitigation against such malware could be for the OS to serve completely deterministic data from /dev/random for all but a select few processes which are a priori defined.
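For intuition on why timestamp seeding is fatal, here's a toy sketch (not Akira's actual scheme — toy_kdf and all the constants are made up): if the key is derived deterministically from a nanosecond clock, then file mtimes bound the seed to a tiny window that can be swept by brute force.

    /* Toy illustration: a key derived from a nanosecond timestamp can be
       brute-forced once file mtimes narrow down when encryption happened.
       toy_kdf is a made-up stand-in for whatever PRNG/KDF the malware uses. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void toy_kdf(uint64_t seed, uint8_t key[16]) {
        for (int i = 0; i < 16; i++) {              /* cheap LCG expansion */
            seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;
            key[i] = (uint8_t)(seed >> 56);
        }
    }

    int main(void) {
        uint8_t target[16];
        toy_kdf(1700000000123456789ULL, target);    /* the "unknown" key */

        /* mtime bounds the timestamp to ~1 second: only 1e9 candidates.
           Slow single-threaded, embarrassingly parallel on a GPU. */
        for (uint64_t t = 1700000000000000000ULL; t < 1700000001000000000ULL; t++) {
            uint8_t guess[16];
            toy_kdf(t, guess);
            if (memcmp(guess, target, sizeof target) == 0) {
                printf("seed recovered: %" PRIu64 "\n", t);
                break;
            }
        }
        return 0;
    }

With /dev/random there is no seed to guess at all; the search space is the full key space.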
Even if the attackers used encryption that has been fully broken since the 1980s, how many organizations have the expertise to dissect it?
I assume that threat detection maintains big fingerprint databases of tools associated with malware. Rolling your own tooling, rather than importing a known library, gives one less heuristic to trip detection.
afaik the majority of ransomware does manage to use cryptography securely, so we only hear about decryptions like this when they fuck up. I don't think there's any good reason beyond the fact that they evidently don't know what they're doing.
If it works (reasonably) it works, and it throws wrenches into the gears of security researchers when the code isn't the usual, immediately recognizable S-boxes and other patterns or library calls.
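For what "immediately recognizable" means in practice, a minimal sketch of the known-constant heuristic (real engines use YARA rules and much more; the 16 bytes below really are the start of the AES S-box):

    /* Flags a buffer that embeds the first 16 bytes of the AES S-box --
       the kind of well-known constant that custom crypto avoids leaving. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static const uint8_t aes_sbox_prefix[16] = {
        0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5,
        0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76
    };

    int looks_like_aes(const uint8_t *buf, size_t len) {
        if (len < sizeof aes_sbox_prefix)
            return 0;
        for (size_t i = 0; i + sizeof aes_sbox_prefix <= len; i++)
            if (memcmp(buf + i, aes_sbox_prefix, sizeof aes_sbox_prefix) == 0)
                return 1;
        return 0;
    }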
Obviously if you give all sandboxed processes access to /, that doesn't improve anything.
The idea is that you'd notice that your new git binary is trying to get access to /var/postgres, and you'd deny it, because it has no reason to want that.
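A sketch of that deny-by-default idea using OpenBSD's unveil(2) — Linux's Landlock can express the same policy with more ceremony, and the project path here is just an example:

    /* After unveil, only the listed path is visible to this process; a
       trojaned tool probing /var/postgres just gets ENOENT. */
    #include <err.h>
    #include <unistd.h>

    int main(void) {
        if (unveil("/home/me/project", "rwc") == -1)  /* the one allowed tree */
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)                 /* lock the policy in */
            err(1, "unveil");
        /* ... now run the untrusted tool ... */
        return 0;
    }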
Which doesn't scale to office workstations or workplaces with network drives, where users needing to search and update hundreds of files at a time is the norm.
Developers with 1 project open have potentially hundreds to thousands of open, quite valuable files.
Now of course, we generally expect developers to have backups via VCS but that's exactly the point: snapshotting filesystems with append semantics for common use cases is an actual, practical defense.
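Append semantics already exist on Linux as the inode flag behind chattr +a; a minimal sketch of setting it (requires CAP_LINUX_IMMUTABLE, i.e. root — which is exactly what keeps ransomware running as the user from clearing it):

    /* Sets the append-only flag: the file can grow, but open(..., O_TRUNC),
       overwrites, renames, and unlinks all fail with EPERM. */
    #include <fcntl.h>
    #include <linux/fs.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int make_append_only(const char *path) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return -1; }
        int flags = 0;
        int rc = ioctl(fd, FS_IOC_GETFLAGS, &flags);
        if (rc == 0) {
            flags |= FS_APPEND_FL;
            rc = ioctl(fd, FS_IOC_SETFLAGS, &flags);
        }
        if (rc < 0) perror("ioctl");
        close(fd);
        return rc;
    }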
In the old days we had mechanical write protect. I find it hard to take modern security seriously.
It should be pretty simple to make, say, a hardware solution that allows only writing out new files.
I also find it comical that my production database has instructions to conveniently delete or modify all rows in my table. That would be at the top of the list of features I don't want.
I have backups of course, backups on writable usb drives.
Like, when I lose everything it is really nice to be able to delete files from my backup drive. This is such a great idea.
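The "only writing out new files" idea above doesn't even need hardware for a crude sketch — O_EXCL makes creation fail if the path already exists, so nothing written through this helper can encrypt a file in place:

    /* Refuses to clobber existing data: open succeeds only for new paths. */
    #include <fcntl.h>
    #include <stdio.h>

    int open_new_only(const char *path) {
        int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (fd < 0)
            perror(path);   /* EEXIST if the file is already there */
        return fd;
    }

Of course, a process that can unlink first can still destroy data, which is why enforcement has to live below the compromised account — in hardware, or at least in the kernel/filesystem.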
Excuse my ignorance, but is one really updating hundreds of files all day long? On some factory machines that do dangerous things you have to hold down two buttons.
About the two-buttons thing in factories: the reason is to make sure you don't have a hand in the machine. So it's not just two buttons, it's two buttons spaced far enough apart that you have to use both hands. And usually one of the two buttons has to be held in a middle position; if you push it too far, it also doesn't work.
Something else: how many times, because of a bad mousepad, have whole directories been moved somewhere? Often you don't even know what you moved, so you can't even search for it. Especially at my last company, we reliably had such a "surprise" in our data about once a month.
Again: define "explicit"? Does clicking a file count? Asking for code reformatting across the project? How long does access last? How is it revoked?
If the user runs "reformat project" once, then gets a new version, are they going to have any warning that "reformat project" is about to encrypt every file it touches?
Explicit as in: when you run a new app it does not have access to any of your files, and there is no way for it to gain access without you, the user, granting it.
>Does clicking a file count?
Yes, clicking a file from the file picker counts.
>Asking for code reformatting across the project?
You can grant access to a directory in that case.
>How long does access last? How is it revoked?
It can last forever or until the application is closed. There is room to choose how exactly it could work.
>If the user runs "reformat project" once, then gets a new version, are they going to have any warning that "reformat project" is about to encrypt every file it touches?
That would be up to the design.
> I expect [the attackers] will change their encryption again after I publish this.
If they realize that, why publish this? It seems irresponsible at best to give a decryptor in such gory detail for what, Internet cred? It's an interesting read, and my intellectual curiosity is piqued; it just seems keeping the details to yourself would be better for the community at large.
> Everytime I wrote something about ransomware (in my Indonesian blog), many people will ask for ransomware help.
...
> Just checking if the ransomware is recoverable or not may take several hours with a lot of efforts (e.g: if the malware is obfuscated/protected). So please don’t ask me to do that for free
Presuming this results in a cryptosystem change for Akira, there’s a real number of victims who won’t get their data back as a result of this disclosure.
Whether that number is more than the number of victims to date who can recreate this? Who knows.
I can't remember the example (it was a conference talk a few years ago), but I'm pretty sure there are LE and DFIR companies who also reverse this stuff and assist in recovery; they just don't publish the actual flaws exploited to recover the data.
It was already disclosed to the bad guys that someone managed to break their encryption, when they didn't get paid and they saw that the customer had somehow managed to recover their data. That probably meant they might go looking for weaknesses, or modify their encryption, even without this note.
Other victims whose data were encrypted by the same malware (before any updates) could benefit from this disclosure to try to recover their data.
once your files are encrypted by ransomware, does the encryption change if the malware gets updated? if not, then anyone currently infected with this version can now possibly recover.
if they don't release their code, then what's the point of having the code? they accomplished their task, and now here you go for someone else that might have the same need. otherwise, don't get infected by a new version
This feels like a net win.
Huge props to the author for coming up with this whole process and providing such fascinating details.
Ransomware wouldn't be a problem at all if copy-on-write snapshotting filesystems were the default.
Then changes made to files would be stored as deltas to the original.
But realistically a good read-only/write-new backup solution is needed; you never know when something bad might happen.
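A toy sketch of file-level copy-on-write on Linux (Btrfs/XFS reflinks — FICLONE is real, the paths would be whatever you choose): the clone shares all blocks with the original, and later writes to either side allocate new blocks, i.e. the deltas mentioned above.

    /* Clone src into dst without copying data: both names share extents
       until one is written to, so a pre-infection clone keeps the old
       bytes even if the original is encrypted in place. */
    #include <fcntl.h>
    #include <linux/fs.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int reflink_snapshot(const char *src, const char *dst) {
        int in = open(src, O_RDONLY);
        if (in < 0) { perror(src); return -1; }
        int out = open(dst, O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (out < 0) { perror(dst); close(in); return -1; }
        int rc = ioctl(out, FICLONE, in);    /* share extents, no byte copy */
        if (rc < 0) perror("FICLONE");
        close(in);
        close(out);
        return rc;
    }

The catch is the same one raised above: a clone the user can delete, ransomware running as that user can delete too, so the snapshots have to live out of reach of the compromised account.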
I think most people don't care about their system directories, just their data?
Backups and OneDrive for enterprises, yes. :)
>Developers with 1 project open have potentially hundreds to thousands of open, quite valuable files.
And malware wouldn't be able to access any of those files without the developer explicitly giving it access.
Append-only semantics don't scale for consumer devices, as they do not have the luxury of extra storage space.
> Everytime I wrote something about ransomware (in my Indonesian blog), many people will ask for ransomware help.
...
> Just checking if the ransomware is recoverable or not may take several hours with a lot of efforts (e.g: if the malware is obfuscated/protected). So please don’t ask me to do that for free
So charge them for it?
New versions of Akira and any other ransomware are constantly being developed. This code is specific to a certain version of the malware.
As noted in the article, it also requires:
1. An extremely capable sysadmin
2. A bunch of GPU capacity
3. That the timestamps be brute-forced separately
So it's not exactly a turn-key defeat of Akira.