
Meshtastic just released an eye-watering 9.5 CVSS CVE, warning about public/private keys being re-used among devices. And I’m the one who wrote the code. Not to mention, I triaged and fixed it. And I’m part of Meshtastic Solutions, the company associated with the project. This is the story of how we got here, and a bit of perspective.
First things first, what kind of keys are we talking about, and what does Meshtastic use them for? These are X25519 keys, used specifically for encrypting and authenticating Direct Messages (DMs), as well as optionally for authorizing remote administration actions. It is, by the way, this remote administration scenario with a compromised key that leads to such a high CVSS rating. Before version 2.5 of Meshtastic, the only cryptography in place was simple AES-CTR encryption using shared symmetric keys, still in use for multi-user channels. The problem was that DMs were also encrypted with this channel key, and just sent with the “to” field populated. Anyone with the channel key could read the DM.
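For a sense of how the X25519 half works, here’s a minimal sketch using the same rweather/crypto library. The names and structure are illustrative, not Meshtastic’s actual firmware code: each node generates a keypair, publishes the public half, and derives a per-peer shared secret that then keys the symmetric cipher protecting the DM.

```cpp
// Sketch of the X25519 handshake shape, using rweather/crypto's Curve25519
// class. Names are illustrative, not Meshtastic's actual code.
#include <Crypto.h>
#include <Curve25519.h>
#include <string.h>

static uint8_t node_priv[32], node_pub[32];

void generateNodeKeypair() {
  // dh1() fills in a fresh keypair, drawing from the library's randomness
  // pool. node_pub is shared with the mesh; node_priv never leaves the device.
  Curve25519::dh1(node_pub, node_priv);
}

bool deriveDmSecret(const uint8_t their_pub[32], uint8_t secret[32]) {
  uint8_t priv_copy[32];
  memcpy(priv_copy, node_priv, 32); // dh2() destroys its second argument
  memcpy(secret, their_pub, 32);
  // secret = X25519(node_priv, their_pub); both ends compute the same value,
  // which then keys the symmetric cipher that actually encrypts the DM.
  return Curve25519::dh2(secret, priv_copy); // false if their_pub is invalid
}
```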
I re-worked an old pull request that generated X25519 keys on boot, using the rweather/crypto library. That sentence highlights two separate problems, both of which can lead to unintentional key re-use. First, the keys are generated at first boot. I was made painfully aware of this weakness when a user emailed the project to warn us that he had purchased two devices, and they had matching keys out of the box. When the vendor manufactured these devices, they flashed Meshtastic on one unit, let it boot once, and then used a debugger to copy off a “golden image” of the flash. Every other device in that manufacturing run was then flashed with this golden image, private key and all. Sigh.
There’s a second possible cause for duplicated keys, discovered while triaging the golden image issue. On the Arduino platform, it’s reasonably common to use the `random()` function to generate pseudo-random values, and the Meshtastic firmware is careful to manage the random seed so that `random()` produces properly unpredictable values. The `crypto` library is solid code, but it doesn’t call `random()`. On ESP32 targets it calls `esp_random()`, but on a target like the NRF52 there’s no call to any hardware randomness source at all. This puts such a device in the precarious position of relying on `micros()` for its randomness. While non-ideal on its own, this is made disastrous by the fact that the randomness pool is filled automatically on first boot, when boot-to-boot timing is most predictable, leading to significantly lower entropy in the generated keys.
Release 2.6.11 of the Meshtastic firmware fixes both of these issues. First, key generation is delayed until the user selects the LoRa region. That makes it much harder for vendors to accidentally ship devices with duplicated keys, and it gives users an easy check: the private key should be blank when the device arrives. And since the device sits waiting for the user to set the region, the `micros()` clock becomes a much better source of randomness. Second, the results of `random()` and the burnt-in hardware ID are mixed into the crypto library’s randomness pool, ensuring it is seeded with unique, unpredictable values.
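A minimal sketch of what that mixing can look like, assuming the rweather/crypto `RNG` interface. The `getBurntInHardwareId()` helper is a hypothetical stand-in for a platform-specific accessor, and the entropy credits passed to `stir()` are illustrative guesses, not the firmware’s actual values.

```cpp
// Sketch of seeding the crypto library's randomness pool before keygen.
#include <Arduino.h>
#include <Crypto.h>
#include <RNG.h>

uint64_t getBurntInHardwareId() {
  // Hypothetical accessor: on ESP32 this could wrap ESP.getEfuseMac(), on
  // NRF52 the FICR DEVICEID registers. Stubbed out for this sketch.
  return 0x1122334455667788ULL;
}

void seedRandomnessPool() {
  RNG.begin("firmware-sketch 1.0"); // personalization tag for the pool

  uint32_t r = random(0x7FFFFFFF); // Arduino PRNG, assumed carefully seeded
  RNG.stir((const uint8_t *)&r, sizeof(r), 16); // credit a few entropy bits

  uint64_t hwid = getBurntInHardwareId();
  // Unique per device but predictable, so mix it in with zero entropy
  // credit: it breaks golden-image duplication without overstating entropy.
  RNG.stir((const uint8_t *)&hwid, sizeof(hwid), 0);
}
```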
The reality is that IoT devices without dedicated cryptography chips will always struggle to produce high-quality randomness. If you really need secure Meshtastic keys, you should generate them on a platform with better randomness guarantees. The `openssl` binary on a modern Linux or Mac machine is a decent choice, and a Meshtastic private key can be generated with `openssl genpkey -algorithm x25519 -outform DER | tail -c32 | base64` (the raw 32-byte key is the last 32 bytes of the DER structure, hence the `tail`).
What’s Up with SVGs?
You may have tried to share a Scalable Vector Graphics (SVG) file on a platform like Discord, and been surprised to see an obtuse text document rather than your snazzy logo. Browsers can display SVGs, so why do many web platforms refuse to render them? I’ve quipped that it’s because SVGs are Turing complete, which is almost literally true. But in reality it’s because SVGs can include inline HTML and JavaScript. IBM’s X-Force has the inside scoop on the use of SVG files in phishing campaigns. The key here is that JavaScript and data inside an SVG can often go undetected by security solutions.
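To make that concrete, here’s a benign example of the trick: a perfectly valid SVG that also carries a script, which browsers will execute when the file is opened directly as a document (scripts are ignored when the SVG is embedded via an `<img>` tag). The URL is a placeholder.

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="240" height="120">
  <rect width="240" height="120" fill="#28517a"/>
  <text x="20" y="65" fill="#fff" font-size="16">Invoice_2025.pdf</text>
  <!-- Legitimate image content above, a script payload below. A phishing
       SVG would redirect to a credential-harvesting or download page. -->
  <script><![CDATA[
    window.location = "https://example.com/not-actually-an-invoice";
  ]]></script>
</svg>
```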
The attack chain that X-Force highlights is convoluted, with the SVG containing a link offering a PDF download. Clicking it actually downloads a ZIP containing a JS file, which, when run, downloads and attempts to execute a JAR file. This may seem ridiculous, but it’s all intended to defeat a somewhat sophisticated corporate security system, and an inattentive user will click through all the files just to get the day’s work done. And apparently this tactic works.
*OS Spyware
Apple published updates to its entire line back in February, fixing a pair of vulnerabilities that were being used in sophisticated targeted attacks. CVE-2025-43200 was “a logic issue” that could be exploited via malicious images or videos sent in iCloud links. CVE-2025-24200 was a flaw in USB Restricted Mode that allowed the mode to be disabled by someone with physical access to a device.
What’s newsworthy about these vulnerabilities is that Citizen Lab has published a report that CVE-2025-43200 was used in a 0-day exploitation of journalists by the Paragon Graphite spyware. It is slightly odd that Apple credits the other fixed vulnerability, CVE-2025-24200, to Bill Marczak, a Citizen Lab researcher and co-author of this report. Perhaps there is another shoe yet to drop.
Regardless, iOS infections have been found on the phones of two separate European journalists, with a third confirmed as targeted. It’s unclear which customer contracted Paragon to spy on these journalists, or what the impetus was for doing so. Companies like Paragon, NSO Group, and others operate within a legal grey area, taking actions that would normally be criminal, but under the authority of governments.
A for Anonymous, B for Backdoor
WatchTowr has a less-snarky-than-usual treatment of a chain of problems in the Sitecore Experience Platform that takes an unauthenticated attacker all the way to Remote Code Execution (RCE). The initial issue is the pre-configured user accounts, like `default\Anonymous`, used to represent unauthenticated users, and `sitecore\ServicesAPI`, used for internal actions. Those special accounts do have password hashes. Surely there isn’t some insanely weak password set for one of those users, right? Right? The password for `ServicesAPI` is `b`.
`ServicesAPI` is interesting, but the easy approach of just logging in as that user on the web interface fails with a unique error message: this user does not have access to the system. Someone knew this could be a problem, and added logic to prevent the account from being used for general system access, by checking which database the current handler is attached to. Is there an endpoint that connects to a different database? Naturally. Here it’s the administrative web login, which has no database attached. The `ServicesAPI` user can log in there! The good news is that it can’t do much, as this user isn’t an admin. But the login does work, and does result in a valid session cookie, which does allow other actions.
There were several approaches the WatchTowr researchers tried in order to get RCE from the user account. They narrowed in on a file upload action that was available to them, noting that they could upload a ZIP file and have it automatically extracted. There were no checks for path traversal, so it seemed like an easy win, except that Sitecore doesn’t necessarily have a standard install location, so this approach has to guess at the right path traversal steps to use. The key is a little bit of filename mangling that can be induced, where a backslash gets replaced with an underscore. This allows a `/\/` in the path traversal path to become `/_/`, a special sequence that represents the webroot directory. And we have RCE. These vulnerabilities have been patched, but more were discovered in this research that are still to be revealed.
The Day the Internet Went Down
OK, that may be overselling it just a little bit. But Google Cloud had an eight-hour event on the 12th, and the repercussions were wide, including taking down parts of Cloudflare for a portion of the same day.
Google’s downtime was caused by bad code, pushed to production with insufficient testing and no error handling, that was intended to implement a quota policy check. A separate policy change was then rolled out globally, containing unintentional blank fields. Those blank fields hit the new code and triggered null pointer dereferences all around the globe at once. An emergency fix was deployed within an hour, but the problem was large enough to have quite a long tail.
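As an illustration of the failure class (and emphatically not Google’s actual code), the pattern is an unguarded dereference of a field that a bad rollout left blank:

```cpp
// Illustrative only: a policy record arrives with a blank field, and a
// quota check with no error handling dereferences it. Because the bad
// policy replicated globally, every instance crashes at once.
#include <iostream>
#include <memory>
#include <string>

struct QuotaPolicy {
  std::unique_ptr<std::string> region; // left blank (null) by the rollout
  int limit = 0;
};

int effectiveLimit(const QuotaPolicy &p) {
  // Fatal version: `return p.region->empty() ? 0 : p.limit;` null-derefs.
  // Guarded version: treat a malformed policy as recoverable instead.
  if (!p.region || p.region->empty()) return 0; // fall back to default quota
  return p.limit;
}

int main() {
  QuotaPolicy blank{};                         // the globally replicated bad data
  std::cout << effectiveLimit(blank) << "\n";  // survives: prints 0
}
```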
Cloudflare’s issue was connected to its Workers KV service, a key-value store that is used in many of Cloudflare’s other products. Workers KV is intended to be “coreless”, meaning a cascading failure should be impossible. The reality is that Workers KV still uses a third-party service as the bootstrap for that live data, and Google Cloud is part of that core. When Google’s cloud started having problems, so did Cloudflare, and much of the rest of the Internet.
I can’t help but worry just a bit about the possible scenario where Google relies on an outside service that itself relies on Cloudflare. In the realm of the power grid, we sometimes hear about the cold start scenario, where everything is powered down at once. It seems like there is a real danger of a cold start scenario for the Internet, where multiple giant, interdependent cloud vendors are all down at the same time.
Bits and Bytes
Fault injection is still an interesting research topic, particularly for embedded targets. [Maurizio Agazzini] from HN Security is doing work on voltage injection against an ESP32 V3 target, with the aim of coercing the processor to jump over an instruction and interpret a CRC32 value as an instruction pointer. It’s not easy, but he managed a 1.5% success rate at bypassing secure boot with the voltage injection approach.
Intentional jitter is used in many exploitation tools as a way to disguise what might otherwise be tell-tale traffic patterns. But Varonis Threat Labs has produced Jitter-Trap, a tool that looks for the jitter, and attempts to identify the exploitation framework in use from the timing information.
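The underlying idea, as I read the description (this is a toy sketch, not Varonis’s code), is that a naive `sleep(base ± jitter)` beacon loop leaves inter-arrival times boxed into a tight band around the base interval, and that band is itself a signature:

```cpp
// Toy jitter-based beacon detector: uniform random jitter keeps the deltas
// between check-ins inside a bounded envelope that organic traffic rarely
// maintains. Thresholds here are arbitrary illustrative values.
#include <algorithm>
#include <cstdio>
#include <vector>

bool looksLikeJitteredBeacon(const std::vector<double> &arrivals) {
  if (arrivals.size() < 9) return false; // need a handful of samples
  std::vector<double> deltas;
  for (size_t i = 1; i < arrivals.size(); ++i)
    deltas.push_back(arrivals[i] - arrivals[i - 1]);

  auto [lo, hi] = std::minmax_element(deltas.begin(), deltas.end());
  double mid = (*lo + *hi) / 2.0;
  // sleep(base * (1 +/- j)) keeps every delta inside the band; flag traffic
  // whose total spread stays under roughly +/-25% of the base interval.
  return mid > 0 && (*hi - *lo) / mid < 0.5;
}

int main() {
  std::vector<double> t{0, 9.1, 18.7, 27.6, 37.2, 46.4, 55.9, 64.8, 74.5};
  std::printf("beacon-like: %s\n", looksLikeJitteredBeacon(t) ? "yes" : "no");
}
```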
We’ve talked a few times about vibe researching, but [Craig Young] is only dipping his toes in here. He used an LLM to find a published vulnerability, and then analyzed it himself. It turns out the GIMP despeckle plugin doesn’t do bounds checking for very large images. Back again to an LLM, for a Python script to generate such a file. The file does indeed crash GIMP when running despeckle, confirming the vulnerability report, and demonstrating that there really are good ways to use LLMs while doing security research.